Human Level AI Technology Still Years Away: Meta AI Chief

Current artificial intelligence technology is good at responding to queries, but it is not great at reasoning or making plans, Yann LeCun said.
People check their phones as AMECA, an AI robot, looks on at the All In artificial intelligence conference in Montreal on Sept. 28, 2023. Ryan Remiorz/The Canadian Press
Austin Alonzo

The point at which artificial intelligence rivals the intelligence of a typical human is still more than five years away, according to an AI leader at Meta Platforms Inc.

On Jan. 8, Yann LeCun, vice president and chief AI scientist at the Menlo Park, California-based social media and technology giant, expressed skepticism about recent comments by OpenAI CEO Sam Altman suggesting that the technology industry could soon develop so-called artificial general intelligence, or AGI.

“I don’t see this until five or six years [from now],” LeCun said at a panel event held at the 2025 Consumer Electronics Show in Las Vegas.

In a recent blog post, Altman said OpenAI is getting closer to developing a working AGI model.

“We are now confident we know how to build AGI as we have traditionally understood it,” Altman wrote in the post published on Jan. 5. “We believe that, in 2025, we may see the first AI agents ‘join the workforce’ and materially change the output of companies.”

First, LeCun said he doesn’t favor equating AGI with human-level intelligence. Humans are abstract thinkers who use visual, auditory, and tactile cues, among others, to understand the world around them. The most advanced AI today understands the world through words alone and is nowhere near the level of intelligence achieved by a being that processes visual information with its eyes.

Second, LeCun said an AGI-powered robot couldn’t do the work of a plumber right now. Although robotic technology capable of interacting with complicated systems exists, AI still cannot replicate the work of skilled tradesmen.

“We’re not even close to matching the understanding of the physical world of an animal like a cat or a dog,” LeCun said.

Additionally, artificial intelligence does not yet demonstrate what LeCun called common sense. For years, technologists have predicted that a computer smarter than a human is just around the corner. A chess-playing AI program, for example, plays chess better than a human only because it has been programmed to understand chess moves and choose winning responses; the program has few other practical applications.

To truly replicate human-level intelligence, AI needs to be able to reason and plan a sequence of actions the way the human mind does. Right now, generative AI built on large language models, such as ChatGPT, searches for the most likely sequence of words to answer a question and renders a response based on that processing. It doesn’t truly reason the way a human brain does, LeCun said.
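LeCun’s point about word prediction can be illustrated with a toy example. The sketch below is a hypothetical illustration, not Meta’s or OpenAI’s code: it builds a tiny word-frequency table and generates text by always picking the statistically most likely next word. The output can look fluent, but nothing in the process resembles reasoning or planning.

```python
# A toy "language model": it only predicts the next word from frequency
# counts, a much-simplified stand-in for how generative AI picks likely
# word sequences rather than reasoning about an answer.
from collections import Counter, defaultdict

training_text = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count which word tends to follow each word (a bigram table).
next_word_counts = defaultdict(Counter)
for current, following in zip(training_text, training_text[1:]):
    next_word_counts[current][following] += 1

def generate(prompt_word: str, length: int = 8) -> str:
    """Greedily emit the statistically most likely continuation."""
    words = [prompt_word]
    for _ in range(length):
        candidates = next_word_counts.get(words[-1])
        if not candidates:
            break
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

# Prints a fluent-looking chain of words chosen purely by frequency,
# with no understanding of cats, dogs, or mats.
print(generate("the"))
```

Real large language models use vastly larger neural networks rather than frequency tables, but the generation loop is similar in spirit: predict the next token, append it, repeat.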

LeCun said that to achieve that feat, the network architecture behind generative AI would need to change substantially.

AI currently lacks the ability to process visual information as humans do. Until AI can train on more than text, it will not reach a human level of intelligence.

“It’s always harder than we think,” LeCun said of creating an AI system that can be as smart as a human. “It’s been sort of a repeated history of AI that people have been so enthusiastic about the capabilities of the new technique they just came up with and turned out to be disappointed.”

Austin Alonzo
Reporter
Austin Alonzo covers U.S. political and national news for The Epoch Times. He has covered local, business, and agricultural news in Kansas City, Missouri, since 2012. He is a graduate of the University of Missouri. You can reach Austin via email at [email protected]