I’m a bit torn on this. On one hand: obviously LLMs do this, since they’re essentially just huge pattern-recognition and prediction machines, and basically anyone probing them with new, complex problems has already made that exact observation. On the other hand: a lot of the everyday things we humans do are not that dissimilar from recognizing a pattern and remembering a solution, and doing that step well seems like a reasonable intermediate step towards AGI, and not as hugely far off as this article makes it out to be.
The human brain is not an ordered, carefully engineered thinking machine; it’s a massive hodge-podge of heuristic systems to solve a lot of different classes of problems, which makes sense when you remember it evolved over millions of years as our very distant ancestors were exposed to radically different environments and challenges.
Likewise, however AGI is built, in order to communicate with humans and solve most of the same problems, it’s probably going to take an amalgamation of different algorithms, just like brains.
All of this to say: I agree memorization will probably be an integral part of that system, but only a small part of the final whole. So I also agree with the article that we’re still a long way off from AGI.