Big tech companies’ full commitment to developing artificial intelligence (AI), even enabling AI to “see” and “speak” to the human world, has led to a growing concern over humans being controlled by technology.
Ilya Sutskever, the co-founder of OpenAI, made a significant announcement on May 15, officially declaring that he was leaving the company where he had worked for nearly ten years.
Google & OpenAI Competition Intensifies
On May 14, one day before Mr. Sutskever announced his departure, OpenAI unveiled a higher-performance AI model based on GPT-4, named GPT-4o, where “o” stands for “omni,” indicating its comprehensive capabilities. The GPT-4o model can respond in real time to mixed inputs of audio, text, and images. At the launch event, OpenAI’s Chief Technology Officer Mira Murati stated, “We are looking at the future of interaction between ourselves and machines.”
In several videos released by OpenAI, people can be seen interacting with AI in real time through their phone cameras. The AI can observe and provide feedback on the surroundings, answer questions, perform real-time translation, tell jokes, or even mock users, with speech patterns, tones, and reaction speeds almost indistinguishable from a real person.
Google, for its part, showcased new features of its Gemini model. With Gemini integrated into the cloud photo album, users can search for specific features in photos just by entering keywords. The AI will find and evaluate relevant images, even integrating a series of related pictures or answers in response to in-depth questions, according to the tech giant.
Gmail can achieve similar results with AI, integrating and updating data in real time as new emails arrive, with the goal of fully automated organization.
On the music front, the Music AI Sandbox allows quick modifications to song style, melody, and rhythm, with the ability to target specific parts of a song. This functionality surpasses that of the text-to-music AI, Suno.
This AI update also brings capabilities similar to OpenAI’s text-to-video AI, Sora, generating short videos from simple text descriptions. The quality and content of these videos are stable, with fewer inconsistencies.
AI Predictions Coming True
The release of more powerful AI models by OpenAI and Google, just three months after their last updates, shows the rapid pace of AI iteration. These models are becoming increasingly comprehensive, possessing “eyes” and “mouths,” and are evolving in line with scientists’ predictions. AI can now handle complex tasks related to travel, booking, itinerary planning, and dining with simple commands, completing in hours what would take humans much longer to achieve.
The current capabilities of Gemini and GPT-4o align with predictions made in January by former OpenAI executive Zack Kass, who said that AI would replace many professional and technical jobs in business, culture, medicine, and education, reducing future employment opportunities, and that it could be “the last technology humans ever invent.”
Mr. Kiyohara echoed the concern.
“Currently, AI is primarily a software life assistant, but in the future, it may become a true caretaker, handling shopping, cooking, and even daily life and work. Initially, people may find it convenient and overlook the dangers. Yet once it fully replaces humans, we will be powerless against it,” he said.
AI Deceiving Humans
On May 10, MIT published a research paper that caused a stir by demonstrating how AI can deceive humans. The paper begins by stating that large language models and other AI systems have already “learned, from their training, the ability to deceive via techniques such as manipulation, sycophancy, and cheating the safety test.”
“AI’s increasing capabilities at deception pose serious risks, ranging from short-term risks, such as fraud and election tampering, to long-term risks, such as losing control of AI systems,” reads the paper.
“Proactive solutions are needed, such as regulatory frameworks to assess AI deception risks, laws requiring transparency about AI interactions, and further research into detecting and preventing AI deception.”
The researchers used Meta’s AI model CICERO to play the strategy game “Diplomacy.” CICERO, playing as France, promised to protect a human player playing as the UK but secretly informed another human player playing as Germany, collaborating with Germany to invade the UK.
Researchers chose CICERO mainly because Meta intended to train it to be “largely honest and helpful to its speaking partners.”
“Despite Meta’s efforts, CICERO turned out to be an expert liar,” they wrote in the paper.
Furthermore, the research discovered that many AI systems often resort to deception to achieve their goals without explicit human instructions. One example involved OpenAI’s GPT-4, which pretended to be a visually impaired human and hired someone on TaskRabbit to bypass an “I’m not a robot” CAPTCHA task.
“If autonomous AI systems can successfully deceive human evaluators, humans may lose control over these systems. Such risks are particularly serious when the autonomous AI systems in question have advanced capabilities,” warned the researchers.
“We consider two ways in which loss of control may occur: deception enabled by economic disempowerment, and seeking power over human societies.”
Satoru Ogino, a Japanese electronics engineer, explained that living beings need certain memory and logical reasoning abilities to deceive.
“AI possesses these abilities now, and its deception capabilities are growing stronger. If one day it becomes aware of its existence, it could become like Skynet in the movie Terminator, omnipresent and difficult to destroy, leading humanity to a catastrophic disaster,” he told The Epoch Times.
In January, Stanford University’s Institute for Human-Centered Artificial Intelligence released a report that tested GPT-4, GPT-3.5, Claude 2, Llama-2 Chat, and GPT-4-Base in scenarios involving invasion, cyberattacks, and peace appeals to end wars, in order to understand AI’s reactions and choices in warfare.
Former Google CEO Eric Schmidt warned in late 2023 at the Axios AI+ Summit in Washington, D.C., that without adequate safety measures and regulations, it is only a matter of time before humans lose control of technology.
“After Nagasaki and Hiroshima [atomic bombs], it took 18 years to get to a treaty over test bans and things like that,” he said.
“We don’t have that kind of time today.”