AI Development Encounters Bottleneck, Giving Humanity a Breather

A visitor takes a picture of humanoid AI robot "Ameca" at the booth of Engineered Arts company during the world's largest gathering of humanoid AI Robots as part of International Telecommunication Union (ITU) AI for Good Global Summit in Geneva, on July 5, 2023. (Fabrice Coferini/AFP via Getty Images)
Pinnacle View Team
Sean Tseng
Commentary

Concerns about machines or unchecked technological advances bringing disaster to humanity have persisted for over a century. Many apocalyptic movies feature themes related to artificial intelligence (AI) systems. Due to a lack of understanding of AI principles, many people are understandably apprehensive about an uncertain future.

However, an increasing number of high-tech industry professionals, including AI experts, have begun voicing their concerns. So, what are these worries based on? Are they well-founded, or are they merely exaggerated fears? Could AI potentially lead humanity to doomsday?

The Looming Threat of Advanced AI

Independent TV producer Li Jun said that numerous high-tech experts, scholars, and politicians share deep concerns about AI.

He cited a notable instance from May 2023, when OpenAI CEO Sam Altman and over 350 AI experts signed an open letter warning that artificial intelligence might threaten human existence. They argued that AI should be regarded as a societal risk akin to pandemics and nuclear warfare. Additionally, Mr. Altman has advocated for the establishment of an international body similar to the International Atomic Energy Agency to ensure AI’s safe development, emphasizing the need for more governmental control.

Similarly, Tesla CEO Elon Musk has said there’s a 10 percent to 20 percent chance that AI could wipe out humanity. Adding to the conversation, Mr. Li recalled a statement by Russian President Vladimir Putin, who said that “whoever leads in AI will rule the world.” Mr. Li expressed concern that if misused by unethical forces, AI could be catastrophic.

Mr. Li said that “as AI progresses from weak AI through general intelligence to superintelligence, it might begin to fulfill roles traditionally held by humans, including that of family members like spouses.”

“This transformation could drastically reshape societal development and yield unforeseen outcomes, potentially positioning AI as humanity’s superior,” he said.

“Can humans avoid this fate?”

Jason Ma, who holds a PhD in AI and machine learning from Ohio State University, agreed with Mr. Musk’s 10 percent to 20 percent risk estimation of AI destroying humanity.

“Extending current technology by several generations could result in AI developing self-awareness,” he said.

“One particularly dangerous direction for AI now is called ‘Agent,’ where a series of AI programs each plays a different role to accomplish a task through their interactions—for example, constructing a software development process with 10 AI agents. This process is not so frightening under current AI models, but as AI becomes more intelligent, human control over those interactions could be significantly weakened. Without clear legal or ethical guidelines, what might these agents develop? If the outcome is negative, this is a cause for concern, and I am worried about this direction.”

OpenAI CEO Sam Altman gestures during a session of the World Economic Forum (WEF) meeting in Davos, Switzerland on Jan. 18, 2024. (Fabrice Coferini/AFP via Getty Images)

Bottleneck in AI Development

Despite the widespread concerns about advanced AI, Mr. Ma shared a slightly different perspective.

“In 2023, the possibility [of AI dominating humanity] seemed imminent; however, this year, my outlook is slightly more optimistic,” Mr. Ma said.

“My initial astonishment came in November 2022 with the release of OpenAI’s GPT-3.5, a model that was already impressively sophisticated. The subsequent launch of GPT-4 in March 2023 was even more astonishing. If GPT-3.5 could be likened to a kindergartener or early elementary student, GPT-4 seemed to possess the intellectual capabilities of a high school or college student, having scored in the top 10 percent on a law exam. Such rapid development in a brief span indicated a pivotal moment—a singularity suggesting a point of no return for humanity.

“This year, however, the pace appears to have stabilized. GPT-4 remains at the pinnacle of current AI capabilities, and while subsequent models like Google’s Gemini and Anthropic’s Claude 3 have made slight improvements, they are marginal at best. This progression indicates that, absent new technological breakthroughs, AI development might decelerate based on existing technologies.”

Mr. Ma said that while initial predictions posited that artificial general intelligence could emerge this year or next, it now seems those forecasts may have been premature. It appears that a lack of new and usable data has led to a bottleneck for GPT-4, with any further advances likely to be incremental rather than the significant leaps seen in earlier versions.

“This suggests humanity is indeed encountering a data bottleneck,” he said, adding that the main limitation of technological advancement is “data direction.”

“GPT-4 has already assimilated a significant portion of the internet’s publicly accessible information. In essence, during the phase of human data growth, it has nearly exhausted the available data. Expanding this tenfold would likely exceed the natural data generation capacity of humans and would require artificial generation by AI itself. This becomes somewhat circular, as AI’s intelligence is derived from existing knowledge.

“The output from AI, created from this knowledge and then used as new data, lacks the quality, diversity, and creativity of original human-generated data. Thus, data has now become a critical bottleneck in AI development, potentially offering society a moment to regroup and prepare for the future challenges posed by AI.”

In this photo illustration, the welcome screen for the OpenAI "ChatGPT" app is displayed on a laptop screen in London on Feb. 3, 2023. (Leon Neal/Getty Images)

AI’s Role in Nuclear Military Strategy

Shi Shan, a senior editor at The Epoch Times, expressed concerns over the militarization of AI, particularly its integration with nuclear capabilities, which he described as a top global discussion point.

“The uncontrollable nature of nuclear weapons, once deployed, poses a significant risk,” he said. “The United States and Europe understand this threat and are engaging in discussions with China and Russia to ensure that AI does not gain control over nuclear arsenals. The harrowing scenario of AI impersonating a president to issue nuclear strike commands underscores the urgency of these talks.”

Mr. Ma said that “no insane nation would consider allowing AI to control nuclear weapons, but this is under a peaceful and rational state of affairs.”

“However, many of those encouraging AI development, whether technicians or experts, base their statements on the current situation but do not consider how things might develop under dynamic extreme conditions,” he said.

According to Mr. Ma, the Chinese Communist Party (CCP), known for its disregard for moral boundaries as seen in biotechnology and other fields, could potentially escalate its use of AI in military applications under certain future scenarios, such as tensions in the South China Sea or the Taiwan Strait. While the immediate use of nuclear weapons may not be on the table, the broader application of AI in missile technology and other military areas remains a distinct possibility.

“Authoritarian regimes often assert that they would sacrifice millions of lives for specific objectives, illustrating why AI is currently at a crossroads internationally,” Mr. Ma said.

“On the one hand, there is global consensus on the need to regulate AI development; on the other, there is pressure to advance AI capabilities to prevent the CCP from establishing a technological lead.

“This dichotomy raises critical questions: If the West manages to control AI advancements, what happens if the CCP progresses independently? Humanity finds itself in a complex battle, where the misuse of technology by malign forces under the guise of competition could lead to grave global risks.”

AI’s Profound Reach

Guo Jun, president of The Epoch Times’ Hong Kong edition, said that while the militarization of AI, particularly its use in controlling nuclear weapons, is a critical concern, AI’s impact reaches far beyond militaristic applications.

She cited an alarming instance in which a Swiss laboratory, using an ordinary computer, instructed an AI model to identify chemical structures potentially toxic to humans. Within six hours, the model generated more than 40,000 new toxic molecules, a result that both astonished and terrified the scientists. The scenario raises a global alarm: What if such technology falls into the wrong hands?

“[Moreover,] AI’s capability for deep fakes extends to forging documents, voices, and images, challenging the age-old belief that ‘seeing is believing.’ This foundational principle of human understanding and knowledge is now under threat in an era where the authenticity of what we see is increasingly questionable,” Ms. Guo said.

“Human civilization has evolved through various information flow models—from spoken language to writing, and from print to digital media. These methods of information transmission have profoundly shaped our civilization. However, AI’s advancement is drastically altering the creation and dissemination of information, presenting a significant shock to traditional ways of human civilization.”

Ms. Guo said the concern is that AI technology, at its current rate of development, could replace most jobs within the next 20 years, potentially leading to widespread unemployment or a transformed concept of work. This rapid pace of so-called progress, she said, prompts the question: “Can society adapt quickly enough?”

Views expressed in this article are opinions of the author and do not necessarily reflect the views of The Epoch Times.
“Pinnacle View,” a joint venture by NTD and The Epoch Times, is a high-end TV forum centered around China. The program gathers experts from around the globe to dissect pressing issues, analyze trends, and offer profound insights into societal affairs and historical truths.