AI Must Embrace Specialization via Superhuman Adaptable Intelligence: Immediate Thoughts
These are my immediate thoughts after reading the paper AI Must Embrace Specialization via Superhuman Adaptable Intelligence by Judah Goldfeder, Philippe Wyder, Yann LeCun, and Ravid Shwartz-Ziv. I recommend you give the paper a look before continuing so you have some context for what I am talking about.
\(\Rightarrow\) I agree with the point that human intelligence is specific rather than general; therefore, any AGI definition that focuses on replicating human intelligence is actually not general.
\(\Rightarrow\) It is also clear we don’t have a good, objective definition of AGI.
\(\Rightarrow\) I don’t agree with the point the paper makes about specialization winning over a general system. The paper states:
“Organisms face persistent trade-offs: improving performance on one niche often reduces performance elsewhere, and selection therefore tends to favor designs that are sharply tuned to the local payoff landscape rather than uniformly competent across all possible conditions…”
“AI systems are not exempt from this pressure: models that are too costly, too unreliable, or insufficiently accurate in the domains that matter will be neglected in favor of systems that are better matched to those domains.”
The paper states this after making analogies with the cost of trying to be a generalist organism or organization.
While at face value this might seem to make sense, I don’t like the potential, unstated conclusion that follows from it: that we should focus on building specialized intelligence rather than general intelligence that is subpar compared to specialized systems. I don’t like this argument for two reasons I can think of right now:
- The argument is based on limits these entities have, but AI doesn’t.
An organism can’t easily change its environment; it has to obey nature and its laws. An organization can’t easily redefine the market it is in; it has to obey supply and demand. AI, however, can change its environment, whether that means its compute needs, cost, or architecture. Or its environment can be changed for it. I find this analogy overlooks any potential unseen discoveries in AI.
- The argument indirectly recommends we focus our resources and attention on specialized AI.
Should we focus on working on specialized systems that are going to outperform general systems? Or should we focus our efforts on trying to improve the general AI approach we seem to be chasing?
Personally, I think the latter is a better focus and more feasible. We should focus on reducing the cost of general AI, increasing the reliability of general intelligence even in specific domains, and, of course, the accuracy in these specific domains. We solve more problems much more quickly this way. The other approach would take a long time for us to cover most of the needs we want to use AI for. We should focus on reducing the “negative transfer” from unrelated tasks rather than completely dismissing general AI in favor of specialized AI.
\(\Rightarrow\) I like the indirect question the authors present: Would general AI have solved the protein folding problem with the accuracy the specialized AlphaFold did? I would add my own question to this line of questioning: Would the general approach we have now even have attempted the problem AlphaFold solved, which was thought to be intractable due to its complexity at the time? General AI must pass the Galileo Test: being able to disagree with the general consensus when the consensus is wrong.
\(\Rightarrow\) I agree that specialization will help us precisely target domains that are outside the scope of human intelligence. In other words, specialization can help us find new “Echolocations.”
\(\Rightarrow\) 100% agree with this:
“We must embrace specialization rather than fight it.”
I think this will also be true if I write it the opposite way: “We must embrace generalization rather than fight it.”
\(\Rightarrow\) In this paper, the authors propose Superhuman Adaptable Intelligence (SAI).
“Superhuman Adaptable Intelligence (SAI) is capable of adapting to exceed humans at any task humans can do, while also being able to adapt to tasks outside the human domain that have utility.”
“SAI is measured by the speed with which it takes an agent to acquire new skills and learn new tasks”
“One promising path forward is therefore to emphasize self-supervised learning approaches, predictive world models, and modularity—and to judge advances by how quickly and reliably they produce new competence, rather than by how closely they imitate human behavior.”
\(\Rightarrow\) As a key insight, they write the following:
“Key Insight: The AI that folds our proteins should not be the AI that folds our laundry!”
I find this statement to be very biased against general AI.
Why not? If a single system is cheap and really good at both tasks, why not?