Are we close to achieving Artificial General Intelligence?

Abhi Avasthi
7 min read · Jun 8, 2022
Photo by Maximalfocus on Unsplash

In the summer of 1956, AI pioneers John McCarthy, Marvin Minsky, Nat Rochester, and Claude Shannon wrote: “The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves.” They figured this would take 10 people two months.

Fast-forward to 1970, and the optimism had not dimmed. Marvin Minsky told Life magazine: “In from three to eight years, we will have a machine with the general intelligence of an average human being. I mean a machine that will be able to read Shakespeare, grease a car, play office politics, tell a joke, have a fight. At that point the machine will begin to educate itself with fantastic speed. In a few months it will be at genius level, and a few months after that, its powers will be incalculable.”

The research of the last 50 years showed that these problems are far harder than those scientists thought. The unfulfilled promises bred distrust of the AI field, and funding dried up for a long period now referred to as the “AI winter”.

Now we have “deep learning”, and another period of optimism has emerged. Deep learning really does accomplish many difficult tasks and is already driving revolutionary changes across many industries.

To be honest, AI is indeed making progress in some ways: synthetic images look more and more realistic, and speech recognition can often work in noisy environments. But we are still light-years away from general-purpose, human-level AI that can understand the true meaning of articles and videos, or deal with unexpected obstacles and interruptions. We are still stuck on precisely the same challenges that academic scientists have been pointing out for years: getting AI to be reliable and getting it to cope with unusual circumstances.

As a Quora user, Mehmet, pointed out, the problems that deep learning (or machine learning in general) is solving are all problems that the brain handles automatically. We don’t know how we recognise faces; our brains just do it. And that is only one part of brain function, and the most primitive one. The more interesting, and more complicated, side of our intelligence is reasoning: the things we do consciously rather than automatically. Our “thinking” ability.

As an example: we watch a movie and think about it, work out the causal relations between events, reason about why a character behaved a certain way, draw conclusions, and so on. We have not yet seen comparable progress in this kind of reasoning in AI.

If we’re going to build an AI-driven therapy bot, we’d rather have a bot that does that one thing well than a general-purpose bot whose mistakes are far subtler than telling patients to commit suicide, and therefore far harder to catch. We’d rather have a bot that can collaborate intelligently with humans than one that needs to be watched constantly to ensure that it doesn’t make any mistakes.

To take another example, a Tesla on Autopilot recently drove directly towards a human worker carrying a stop sign in the middle of the road, slowing down only when the human driver intervened.

The system could recognise humans on their own (as they appeared in the training data) and stop signs in their usual locations (again, as they appeared in the training images), but it failed to slow down when confronted by the unusual combination of the two, which put the stop sign in a new and unexpected position.

In May 2022, DeepMind, a subsidiary of Alphabet (parent company of Google), announced Gato, perhaps the most versatile artificial intelligence model in existence. Billed as a “generalist agent,” Gato can perform over 600 different tasks. It can drive a robot, caption images, identify objects in pictures, and more. It is probably the most advanced AI system on the planet that isn’t dedicated to a singular function. And, to some computing experts, it is evidence that the industry is on the verge of reaching a long-awaited, much-hyped milestone: Artificial General Intelligence.
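
For a sense of how one model can cover so many tasks, DeepMind’s Gato paper describes serializing every modality (text, image patches, game actions, robot controls) into a single flat sequence of tokens and training one sequence model on all of it. The sketch below is only an illustration of that idea under my own simplifying assumptions; every function name here is a hypothetical placeholder, not DeepMind’s actual code.

```python
# Illustrative sketch: "everything becomes one token sequence" so that a single
# sequence model can be trained on hundreds of tasks. All names are hypothetical.

def tokenize_text(text: str) -> list[int]:
    # Stand-in for a subword tokenizer: words -> integer ids.
    return [hash(word) % 32_000 for word in text.split()]

def tokenize_image(patches: list[bytes]) -> list[int]:
    # Stand-in for mapping image patches to discrete codes.
    return [hash(patch) % 32_000 for patch in patches]

def tokenize_actions(actions: list[float]) -> list[int]:
    # Stand-in for discretizing continuous controls (e.g. joint torques) into bins.
    return [int((a + 1.0) * 500) % 32_000 for a in actions]

def build_episode(text=None, patches=None, actions=None) -> list[int]:
    """Concatenate whatever modalities a task uses into one flat sequence, so
    captioning, game playing, and robot control all look the same to the model:
    next-token prediction over integers."""
    tokens: list[int] = []
    if text is not None:
        tokens += tokenize_text(text)
    if patches is not None:
        tokens += tokenize_image(patches)
    if actions is not None:
        tokens += tokenize_actions(actions)
    return tokens

# The same interface covers very different tasks:
captioning_example = build_episode(patches=[b"\x00" * 16], text="a cat on a mat")
control_example = build_episode(patches=[b"\xff" * 16], actions=[0.1, -0.4])
```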

Unlike ordinary AI, Artificial General Intelligence wouldn’t require giant troves of data to learn a task. Whereas ordinary artificial intelligence has to be pre-trained or programmed to solve a specific set of problems, a general intelligence can learn through intuition and experience.

An AGI would in theory be capable of learning anything that a human can, if given the same access to information. Basically, if you put an AGI on a chip and then put that chip into a robot, the robot could learn to play tennis the same way you or I do: by swinging a racket around and getting a feel for the game. That doesn’t necessarily mean the robot would be sentient or capable of cognition. It wouldn’t have thoughts or emotions, it’d just be really good at learning to do new tasks without human aid.

Even if we accept that Gato is a huge step on the path towards AGI, and that scaling is the only problem left, it is more than a bit problematic to assume that scaling will be easily solved. We don’t know how much power it took to train Gato, but GPT-3 required about 1.3 gigawatt-hours: roughly 1/1000th of the energy it takes to run the Large Hadron Collider for a year. Granted, Gato is much smaller than GPT-3, though it also doesn’t work as well; Gato’s performance is generally inferior to that of single-function models. And granted, a lot can be done to optimize training (and DeepMind has done a lot of work on models that require less energy). But Gato has just over 600 capabilities, focused on natural language processing, image classification, and game playing. These are only a few of the many tasks an AGI will need to perform. How many tasks would a machine have to perform to qualify as a “general intelligence”? Thousands? Millions? Can those tasks even be enumerated? At some point, the project of training an artificial general intelligence starts to sound like something out of Douglas Adams’ novel The Hitchhiker’s Guide to the Galaxy, in which the Earth is a computer designed by an AI called Deep Thought to find the question to which 42 is the answer.
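
For a rough sense of scale, here is the back-of-envelope arithmetic behind that comparison; the 1.3 GWh figure is the GPT-3 estimate quoted above, and the LHC number is simply what the “1/1000th” ratio implies:

```python
# Back-of-envelope check of the energy comparison quoted above.
gpt3_training_energy_gwh = 1.3          # estimated GPT-3 training energy, in GWh
lhc_fraction = 1 / 1000                 # "roughly 1/1000th" of an LHC-year

implied_lhc_year_gwh = gpt3_training_energy_gwh / lhc_fraction
print(f"Implied LHC annual energy: ~{implied_lhc_year_gwh:.0f} GWh "
      f"(about {implied_lhc_year_gwh / 1000:.1f} TWh per year)")
# -> ~1300 GWh, i.e. roughly 1.3 TWh per year
```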

For low-level brain simulation, an extremely powerful computer would be required. The human brain has a huge number of synapses. Each of its roughly 10^11 (one hundred billion) neurons has on average 7,000 synaptic connections (synapses) to other neurons. It has been estimated that the brain of a three-year-old child has about 10^15 synapses (1 quadrillion). This number declines with age, stabilizing by adulthood. Estimates for an adult vary, ranging from 10^14 to 5×10^14 synapses (100 to 500 trillion). An estimate of the brain’s processing power, based on a simple switch model for neuron activity, is around 10^14 (100 trillion) synaptic updates per second (SUPS). In 1997, Kurzweil looked at various estimates for the hardware required to equal the human brain and adopted a figure of 10^16 computations per second (cps). (For comparison, if a “computation” were equivalent to one “floating-point operation”, a measure used to rate current supercomputers, then 10^16 “computations” would be equivalent to 10 petaFLOPS, achieved in 2011.) He used this figure to predict that the necessary hardware would be available sometime between 2015 and 2025, if the exponential growth in computer power at the time of writing continued.
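
These estimates fit together as simple arithmetic. The sketch below adds no new data; it only restates the figures quoted above (neuron count, synapses per neuron, SUPS, and Kurzweil’s computations-per-second figure) to show how they relate:

```python
# Back-of-envelope restatement of the brain-simulation estimates quoted above.
neurons = 1e11                       # ~10^11 neurons in the human brain
synapses_per_neuron = 7_000          # average synaptic connections per neuron
total_synapses = neurons * synapses_per_neuron
print(f"Total synapses: ~{total_synapses:.0e}")   # ~7e14, within the 10^14-10^15 range above

sups = 1e14                          # estimated synaptic updates per second (SUPS)
kurzweil_cps = 1e16                  # Kurzweil's 1997 figure: computations per second
print(f"Kurzweil's cps is {kurzweil_cps / sups:.0f}x the SUPS estimate")
print(f"10^16 cps = {kurzweil_cps / 1e15:.0f} petaFLOPS "
      f"(if one computation equals one floating-point operation)")
```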

The fact that human intelligence is itself not a general intelligence, and that most humans are highly specialised, makes this even more difficult. This post by Mike Ferguson goes into depth about what intelligence is; it is a must-read.

AGI will be different from human intelligence, not superior to it, just as human intelligence is different from, rather than strictly superior to, animal intelligence. Some animals are capable of amazing mental feats, like squirrels remembering for months where they hid hundreds of nuts.

Besides, consciousness and intelligence seem to require some sort of agency. An AI can’t choose what it wants to learn, nor can it say “I don’t want to play Go, I’d rather play chess.” Now that we have computers that can do both, can they “want” to play one game or the other? One reason we know our children (and, for that matter, our pets) are intelligent and not just automatons is that they are capable of disobeying. A child can refuse to do homework; a dog can refuse to sit. And that refusal is as important to intelligence as the ability to solve differential equations or to play chess. Indeed, the path towards artificial intelligence is as much about teaching us what intelligence isn’t (as Turing knew) as it is about building an AGI.

The evolution of the animal brain took billions of years. Even then, the most advanced brains in nature besides ours, such as those of dogs or cats, are very limited in capabilities such as reasoning and analysis. Only humans have reached that level so far, and only with thousands of years of cultural evolution added on top of biological evolution. Now we are trying to mimic the result of this billions-of-years process with just a few decades of research. The bitter fact is that the only example of the level of intelligence we are aiming for is human intelligence, and we are still far from understanding how it works.

To sum up, we thought the AI problem was easy 50 years ago, and we were proved wrong. Now we are starting to think “it is difficult, but can be solved soon with some more research”, which will likely also be proved wrong within a few years. Soon we will come to understand that developing a general AI is an extremely difficult problem, one that will require at least (optimistically) several more decades of research, and will probably require understanding the human brain first.

Bibliography:

The Long, Hype-Strewn Road to General Artificial Intelligence: https://www.motherjones.com/environment/2022/06/general-artificial-intelligence-technology/

This wonderful Quora answer by Mehmet: https://qr.ae/pvFQAH

Closer to AGI?: https://www.oreilly.com/radar/closer-to-agi/

A Generalist Agent: https://www.deepmind.com/publications/a-generalist-agent

Deep Learning Is Hitting a Wall: https://nautil.us/deep-learning-is-hitting-a-wall-14467/

AGI Might Not Be as Imminent as You Think: https://www.scientificamerican.com/article/artificial-general-intelligence-is-not-as-imminent-as-you-might-think1/
