What in the World is AGI?
Artificial general intelligence (AGI) might one day take the wonders of modern AI to a level beyond our comprehension. But what is AGI, really?
I would not recommend believing that anything is impossible nowadays. SpaceX and its reusable launch vehicles (RLVs) are a strong reminder of that. Just a few years ago, who could have imagined we would see Starship, the most massive and powerful fully reusable transportation system, making round trips to Earth orbit? And who would dare to say what is not possible now?
I firmly believe we will continuously be amazed by the increasingly frequent advances in many areas of technology, including AI.
What is Intelligence?
In 1983, in his book Frames of Mind: The Theory of Multiple Intelligences, Howard Gardner proposed that intelligence is not a single, unified ability but rather a set of distinct human capacities. He went on to define intelligence as the ability to solve problems or to create products that are valued within one or more cultural settings. Gardner's multiple intelligences theory includes linguistic, logical/mathematical, interpersonal, spatial, and other capacities.
Just a couple of years later, in 1985, Robert Sternberg published the book Beyond IQ: A Triarchic Theory of Human Intelligence, in which he defines intelligence as the ability to adapt to, shape, and select environments to accomplish one's and society's goals. Sternberg's triarchic theory focuses on analytical intelligence (capacities for problem solving), creative intelligence (capacities for novelty and innovation), and practical intelligence (capacities for adapting to real-world contexts).
On page 291 (of the first edition, published in 1985), Sternberg discusses exceptional intelligence observed in gifted individuals who show superior performance to others thanks to better metacomponential skills (higher-order processes used to plan and monitor reasoning and problem-solving strategies — in effect, thinking about thinking, or metacognition), performance-componential skills (the processes that actually execute those strategies), and knowledge-acquisition-componential skills (the processes involved in acquiring, processing, and retaining knowledge).
While studying gifted individuals, Sternberg emphasizes that although they will excel in some specific tasks and environments (to the point of automatizing “highly practiced performances to a great extent”), such skills “will not necessarily generalize to other environments and kinds of environments.”
A Much Earlier Discussion
If we really go back in time, we have De Anima (On the Soul) and The Nicomachean Ethics, both by Aristotle (384-322 BC), who discussed foundational ideas about human cognitive faculties such as reasoning and understanding, distinguishing humans from animals. Aristotle proposed that existing things are to the soul either sensible or thinkable. He believed the intellect was the capacity of grasping universals and thinking abstractly.
Plato (428-348 BC), in The Republic and Meno, discusses knowledge and practical wisdom in pursuit of determining if intellectual ability is teachable. From these works, we obtain an early definition of intelligence as an innate capacity for learning and reasoning.
What is Artificial Intelligence?
The term “artificial intelligence” was coined by John McCarthy in 1955. McCarthy, joined by Marvin Minsky, Nathaniel Rochester, and Claude Shannon (what a group!), submitted “A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence,” which led to the 1956 Dartmouth workshop, the seminal event for artificial intelligence.
McCarthy's work was based on the conjecture that “every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.”
McCarthy's work aimed at creating machines capable of mimicking human cognitive abilities.
What Makes Artificial Intelligence “Artificial”?
AI is “artificial” mainly due to its origin. AI is a designed, programmed, human-made artifact built from algorithms, data, and computational systems, while human intelligence evolves naturally through biological processes supported by organic neurons and lived experience.
Machine learning relies on mathematical models and statistical patterns rather than consciousness, intuition, and emotion. As an example, an AI system's ability to adapt is an optimization based on predefined goals and training data rather than subjective understanding and self-awareness.
Our human intelligence allows us to be flexible in terms of purpose. We can adapt to unpredictable situations, be creative in the face of the unknown, and reshape our goals based on feelings, morals or ethics.
AI (as it is now) is also known as Narrow AI (or Weak AI) in the sense that it is specialized to narrow domains while lacking the broad, intuitive adaptability humans have.
Perhaps one of the most important notions to understand before we discuss AGI is AI's lack of agency. AI does not have intentions, desires, or a sense of self. When AI solves a problem, it is executing a task it was designed or trained for (even in an extended manner). While doing so, AI is not pursuing its own goals. To highlight the difference here: “solving a problem” means different things to different humans, and its definition can be reevaluated based on each context and on personal values.
AI mimics certain outcomes of intelligence such as problem-solving, learning, and adapting capacities without replicating important aspects of the underlying essence of human cognition such as consciousness, emotional depth, and intrinsic motivation (at least not yet).
Enter AGI
The notion of artificial general intelligence was discussed in a more limited context in 1997 by Mark Gubrud in the paper “Nanotechnology and International Security” (here is another paper that refers to the original one). However, the term “artificial general intelligence” was popularized by Shane Legg and Ben Goertzel.
You will see many people crediting Shane Legg as the one who coined the term AGI, as he suggested the term to Ben Goertzel, who was preparing a series of publications on the subject.
But here is the background story told by Goertzel himself:
Around 2002, Goertzel was one of the editors of a book about approaches to “powerful AI”. Goertzel's candidate title was “Real AI” but he gave up on it because it “was too controversial”. He emailed some friends asking for new ideas and Legg suggested Artificial General Intelligence. Years later, Goertzel learned that the term was used by Gubrud in his 1997 paper.
Although they did not coin the term, Legg and Goertzel certainly contributed directly to making it mainstream: Legg with his suggestion to Goertzel, and Goertzel by editing the book “Artificial General Intelligence” in 2007.
In the book, Pennachin and Goertzel describe AGI as “a software program that can solve a variety of complex problems in a variety of different domains, and that controls itself autonomously, with its own thoughts, worries, feelings, strengths, weaknesses and predispositions.”
They go on to say that AGI was actually the original focus of AI research, but it proved such a difficult problem that early researchers gave up on it. Working on AGI acquired such a bad reputation that its pursuit became somewhat “analogous to building a perpetual motion machine.”
The brave AGI researchers continued nonetheless. The mainstream belief was that solving narrowly defined problems (in isolation) contributes to creating “real AI”; Pennachin and Goertzel acknowledge that while this is somewhat true, “both cognitive theory and practical experience suggest that it is not so true as is commonly believed.”
In a Nutshell
AGI aims to be a system that can perform any intellectual task a human can do, in any domain. In this sense, such a system would be able to learn, reason, and adapt across diverse challenges without being explicitly programmed for each case.
AGI would then possess flexible, human-like learning and the ability to transfer knowledge between unrelated domains and learn efficiently (or sufficiently) from limited data, just like humans do.
Eventually, AGI would be able to set its own goals, choosing what problems to solve and how without explicit instructions, which would resemble human cognitive autonomy.
Is AGI Possible?
I don't know. I don't think anyone knows for sure at this point. I do know that many people believe it is possible, and that is why AGI research continues. Furthermore, without curiosity and stubborn belief, we would not see remarkable advances in technology and science.
Before 2018, most people didn't know what LLMs were. Now we have an entire family of LLM-powered tools accessible to anyone. Who could have known? Who can say exactly what is possible or not after remarkable engineering advances like RLVs?
Scenes from the next chapters.