That is my experience: it’s generally quite decent for small and simple stuff (as I said, a distillation of documentation). I use it for Rust, where I’m sure the training material was much smaller than for other languages. It’s not a matter of prompting, though; it’s not my prompt that makes it hallucinate functions that don’t exist in libraries or write code that doesn’t compile. It’s a feature of the technology itself.
GPTs are statistical text generators, after all; they don’t “understand” the problem.
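To make “statistical text generator” concrete, here is a deliberately tiny sketch: a bigram model over a toy corpus (everything in it is made up for illustration). A real GPT conditions on long contexts with a transformer over subword tokens, but the generation loop, sampling the next token from a learned distribution, is the same idea.

```python
# Toy "statistical text generator": pick each next word purely from
# observed follow-frequencies in a tiny corpus.
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, length=8):
    words = [start]
    for _ in range(length):
        counts = follows[words[-1]]
        if not counts:
            break  # dead end: the last word never had a successor
        choices, weights = zip(*counts.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat on the mat and the cat"
```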
It’s also pretty young; human toddlers hallucinate and make things up. Adults too. Even experts are known to fall prey to bias and misconception.

I don’t think we know nearly enough about the actual architecture of human intelligence to start asserting an understanding of “understanding”. I think it’s a bit foolish to claim with certainty that LLMs in a mixture-of-experts (MoE) framework with self-review fundamentally can’t get there. Unless you can show me, materially, how human “understanding” works, we’re just speculating about an immature technology.
As much as I agree with you, humans can learn a bunch of stuff without first ingesting the content of the whole internet, without the computing power of a datacenter, and without consuming the energy of Belgium. Humans learn to count at an early age, for example.

I would say that the burden of proof is therefore reversed. Unless you demonstrate that this technology doesn’t have the natural and inherent limits of statistical text (or pixel) generators, we can assume that our minds work differently.

Also, you say immature technology, but this technology is not fundamentally (i.e., in terms of principle) different from Weizenbaum’s ELIZA in the ’60s. We might have refined the models and thrown a ton of data and computing power at them, but we are still talking about programs that use similar principles.

So yeah, we don’t understand human intelligence, but we can appreciate certain features that GPTs absolutely lack, like a concept of truth, which comes naturally to humans.
No, actually, it has changed pretty fundamentally. These aren’t simply a bunch of fully connected networks (FCNs) bolted together. Look up what a transformer is; it was one of the major breakthroughs that made modern LLMs possible.
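For the curious, here is a toy NumPy sketch of the scaled dot-product attention at the heart of a transformer (the sizes and random weights are made up for illustration; no causal mask, single head). The point is that each token’s output is a learned, content-dependent mixture of every other token’s representation, which is qualitatively different from ELIZA-style pattern matching.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)     # similarity of each query to each key
    weights = softmax(scores, axis=-1)  # one probability distribution per token
    return weights @ V                  # weighted mix of value vectors

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8                  # toy sizes, purely illustrative
X = rng.normal(size=(seq_len, d_model))  # pretend token embeddings
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
out = attention(X @ Wq, X @ Wk, X @ Wv)
print(out.shape)  # (4, 8): one context-aware vector per token
```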
> humans can learn a bunch of stuff without first ingesting the content of the whole internet, without the computing power of a datacenter, and without consuming the energy of Belgium. Humans learn to count at an early age, for example.
I suspect that if you took into consideration the millions of generations of evolution that “trained” the basic architecture of our brains, that advantage would shrink considerably.
> I would say that the burden of proof is therefore reversed. Unless you demonstrate that this technology doesn’t have the natural and inherent limits of statistical text (or pixel) generators, we can assume that our minds work differently.
I disagree. I’d argue the evidence suggests we’re just a more sophisticated version of a similar principle, refined over billions of years. We learn facts by rote, and learn similarities by rote, until we develop enough statistical text (or audio) correlations to “understand” the world.

Conversations are a slightly meandering chain of statistically derived clichés. English adjective order is universally “understood” by native speakers based purely on what sounds right, without their actually being able to explain why (unless you’re a big grammar nerd). More complex conversations might seem novel, but they’re just a regurgitation of rote-memorized facts and phrases strung together in a way that seems appropriate to the conversation, based on statistical experience with past conversations.
> Also, you say immature technology, but this technology is not fundamentally (i.e., in terms of principle) different from Weizenbaum’s ELIZA in the ’60s. We might have refined the models and thrown a ton of data and computing power at them, but we are still talking about programs that use similar principles.
As with the evolution of our brains, which have operated on basically the same principles for hundreds of millions of years. The special sauce between human intelligence and a flatworm’s is a refined model.
> So yeah, we don’t understand human intelligence, but we can appreciate certain features that GPTs absolutely lack, like a concept of truth, which comes naturally to humans.
I’m not sure you can claim that absolutely. That kind of feature is an internal experience; you can’t really confirm or deny whether a GPT has something similar. Besides, humans have a pretty tenuous relationship with the concept of truth. There are certainly humans who consider objective falsehoods to be Truth.
Agree to disagree.

There is a lot that can be discussed in a philosophical debate.
However, any 8-year-old can count how many letters are in a word. LLMs can’t reliably do that, by virtue of how they work.

This suggests to me that it’s not just a model/training difference.
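For what it’s worth, the letter-counting failure falls straight out of tokenization: the model sees integer IDs for subword chunks, not characters. A quick sketch using OpenAI’s tiktoken tokenizer library (requires `pip install tiktoken`; the exact split depends on the vocabulary):

```python
# Why counting letters is awkward for an LLM: it never sees letters,
# only integer IDs for subword chunks.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # a GPT-4-era vocabulary
word = "strawberry"
ids = enc.encode(word)
pieces = [enc.decode([i]) for i in ids]
print(ids)     # a few integers, not ten characters
print(pieces)  # subword chunks (the exact split depends on the vocabulary)
print(len(word), "letters, but only", len(ids), "tokens visible to the model")
```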
Also, evolution over millions of years improved the “hardware” and the genetic material. Neither of these compares to the computing power or the amount of data used to train LLMs.
I believe a lot of this conversation stems from the marketing (calling it “intelligence”) and the anthropomorphization of AI.
Anyway, time will tell. Personally, I think it’s possible to reach a general AI eventually; I simply don’t think the LLM approach is the one leading there.
> There is a lot that can be discussed in a philosophical debate. However, any 8-year-old can count how many letters are in a word. LLMs can’t reliably do that, by virtue of how they work. This suggests to me that it’s not just a model/training difference. Also, evolution over millions of years improved the “hardware” and the genetic material. Neither of these compares to the computing power or the amount of data used to train LLMs.
Actually, humans have more computing power than is required to run an LLM; you have this backwards. LLMs are a lot more efficient, given how little computing power they need by comparison. Human brains, as a piece of hardware, are insanely high-performance and energy-efficient. I mean, they include their own internal combustion engines and maintenance and security crew, for fuck’s sake. Give me a human-built computer that has all that.
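Rough numbers, for the sake of argument. Every figure below is a loose, commonly cited estimate rather than a measurement, and estimates of the brain’s “compute” vary by orders of magnitude:

```python
# Back-of-envelope efficiency comparison; all figures are rough estimates.
brain_watts = 20     # commonly cited power draw of a human brain
gpu_watts = 400      # one NVIDIA A100 (SXM) under load
gpu_flops = 312e12   # A100 peak dense BF16 throughput

# Published estimates of brain-equivalent compute span orders of magnitude.
brain_ops_low, brain_ops_high = 1e14, 1e17

print(f"GPU:   {gpu_flops / gpu_watts:.1e} FLOP/s per watt")
print(f"Brain: {brain_ops_low / brain_watts:.1e} to "
      f"{brain_ops_high / brain_watts:.1e} ops/s per watt")
```

Even at the low end of the brain estimates, the per-watt comparison favors the brain; the point stands that it is remarkably efficient hardware.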
> Anyway, time will tell. Personally, I think it’s possible to reach a general AI eventually; I simply don’t think the LLM approach is the one leading there.
I agree here, though I do think LLMs are closer than you think. They do in fact have both attention and working memory, which is a large step forward. The fact that they can only process one medium (text) is a serious limitation, however. Presumably a general-purpose AI would ideally be able to process visual input, auditory input, text, and other signals from various sensor types. There are other model types, though, some of which take in multimodal input to make decisions, like a self-driving car.
I think a lot of people romanticize what humans are capable of while dismissing what machines can do, especially given the processing-power and efficiency limitations that come with the simple silicon-based processors current machines are made from.