Deep Learning Is Hitting a Wall
Gary Marcus
For at least four reasons, hybrid AI, not deep learning alone (nor symbols alone), seems the best way forward:
- So much of the world’s knowledge, from recipes to history to technology, is currently available mainly or only in symbolic form.
- Deep learning on its own continues to struggle even in domains as orderly as arithmetic; a hybrid system may have more power than either approach on its own (see the sketch after this list).
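To make the division of labor concrete, here is a minimal sketch of the hybrid idea, not a description of any deployed system: a hypothetical `hybrid_answer` router sends exact, rule-governed queries to a symbolic evaluator and leaves everything else to a stand-in for a statistical model. All function names here are illustrative.

```python
import ast
import operator
import re

# Symbolic side: exact arithmetic via rule-governed manipulation of a
# parse tree -- precisely the kind of computation that pattern-matching
# systems tend to get wrong on large or unfamiliar operands.
_OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
        ast.Mult: operator.mul, ast.Div: operator.truediv}

def symbolic_eval(expr: str) -> float:
    def walk(node: ast.AST) -> float:
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and isinstance(node.op, ast.USub):
            return -walk(node.operand)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("unsupported expression")
    return walk(ast.parse(expr, mode="eval").body)

def neural_guess(query: str) -> str:
    # Stand-in for a learned model: plausible output, no guarantees.
    return "a plausible but unverified answer"

def hybrid_answer(query: str) -> str:
    # Hypothetical router: exact, symbolic questions go to the symbolic
    # engine; open-ended ones go to the pattern recognizer.
    if re.fullmatch(r"[\d\s+\-*/().]+", query):
        return str(symbolic_eval(query))
    return neural_guess(query)

print(hybrid_answer("317 * 24 + 9"))            # exact: 7617
print(hybrid_answer("what rhymes with wall?"))  # deferred to the model
```

The point of the routing step is that each component does what it is reliable at: the symbolic evaluator is guaranteed correct within its grammar, while the statistical side covers the fuzzy cases symbols handle poorly.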
In time we will see that deep learning was only a tiny part of what we need to build if we’re ever going to get trustworthy AI.
Classical computer science, of the sort practiced by Turing and von Neumann and everyone after, manipulates symbols in a fashion that we think of as algebraic, and that’s what’s really at stake.
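A toy illustration of what “algebraic” means here, purely for concreteness: a classical program states one rule over variables, and that rule holds for every substitution, including inputs the program has never encountered, with no training data at all.

```python
# One rule, stated over variables: swap the arguments. Classical symbol
# manipulation guarantees it holds for *every* binding of x and y.
def swap(x, y):
    return y, x

print(swap(1, 2))                  # (2, 1)
print(swap("emperor", "penguin"))  # works on symbols never seen before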
In 2020, Jared Kaplan and his collaborators at OpenAI suggested that there was a set of “scaling laws” for neural network models of language; they found that the more data they fed into their neural networks, the better those networks performed [10]. The implication was that we could do better and better AI if we gathered more data and applied deep learning at ever larger scale.
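For concreteness, the Kaplan et al. result takes the form of a power law: test loss falls roughly as L(D) ≈ (D_c / D)^α_D as dataset size D grows. The sketch below plugs in approximately the paper’s fitted constants (treat the exact values as illustrative, not definitive); note that each tenfold increase in data buys the same relative improvement but an ever smaller absolute one, which is the seed of the diminishing-returns worry raised below.

```python
# Sketch of the Kaplan et al. (2020) data-scaling power law,
# L(D) ~ (D_c / D)**ALPHA_D. Constants are approximately the paper's
# fitted values for loss vs. dataset size; illustrative, not definitive.
ALPHA_D = 0.095   # fitted data-scaling exponent (approximate)
D_C = 5.4e13      # fitted constant, in tokens (approximate)

def loss(d_tokens: float) -> float:
    return (D_C / d_tokens) ** ALPHA_D

# Each 10x of data multiplies loss by 10**-ALPHA_D ~ 0.80: the same
# relative gain every time, but shrinking absolute improvements.
for d in (1e9, 1e10, 1e11, 1e12):
    print(f"D = {d:.0e} tokens -> loss ~ {loss(d):.3f}")
```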
With all the challenges in ethics and computation, and the knowledge needed from fields like linguistics, psychology, anthropology, and neuroscience, and not just mathematics and computer science, it will take a village to raise an AI.
Deep learning, which is fundamentally a technique for recognizing patterns, is at its best when all we need are rough-and-ready results, where the stakes are low and perfect results are optional.
Indeed, we may already be running into scaling limits in deep learning, perhaps already approaching a point of diminishing returns. In the last several months, research from DeepMind and elsewhere on models even larger than GPT-3 has shown that scaling starts to falter on some measures, such as toxicity, truthfulness, reasoning, and common sense.