Bill Gates recently praised the evolution of artificial intelligence, discussed his relationship with OpenAI, and offered a brief warning that other subject experts, including Elon Musk, portray the situation quite differently.
Gates said that AI can help with several progressive agendas, including climate change and economic inequities, but that the technology is “disruptive,” and will “make people uneasy.”
Gates acknowledged that "AIs also make factual mistakes and experience hallucinations." An AI hallucination is a confident response from a model that is not grounded in its training data; frequent hallucinations are considered a major issue with large language models like ChatGPT.
“In addition, advances in AI will enable the creation of a personal agent. Think of it as a digital personal assistant: It will see your latest emails, know about the meetings you attend, read what you read, and read the things you don’t want to bother with,” Gates said.
These personal assistants will be part of company meetings and take care of administrative tasks in the health care industry, such as "filing insurance claims, dealing with paperwork, and drafting notes from a doctor's visit." At a later stage, "they'll be able to predict side effects and figure out dosing levels."
The Other Side of AI
Gates opens this section by noting that AI does not understand the "context for a human's request," leading to "strange results." For example, "when you ask for advice about a trip you want to take, it may suggest hotels that don't exist." Although such technical issues will get resolved, other problems pose a greater danger.
“For example, there’s the threat posed by humans armed with AI. Like most inventions, artificial intelligence can be used for good purposes or malign ones.”
He then added, “Then there’s the possibility that AIs will run out of control. Could a machine decide that humans are a threat, conclude that its interests are different from ours, or simply stop caring about us?”
Gates then turns to superintelligent AIs, learning algorithms that run at the speed of a computer, which may be "a decade away or a century away."
“These ‘strong’ AIs, as they’re known, will probably be able to establish their own goals. What will those goals be? What happens if they conflict with humanity’s interests?”
Gates mentioned his relationship with OpenAI, the company behind ChatGPT, going back to 2016. At the end of January, OpenAI and Microsoft announced an extension of their partnership and investment.
OpenAI makes use of Microsoft's Azure cloud platform. Microsoft is investing $10 billion into OpenAI, building on previous funding rounds in 2019 and 2021. OpenAI and Microsoft have a complicated partnership structure: the AI platform remains a "capped-profit" company while its operations are governed by the OpenAI non-profit organization.
Elon Musk’s Not-So-Bullish Response
Elon Musk said in a tweet on March 27, "I remember the early meetings with Gates. His understanding of AI was limited. Still is."

Musk's relationship with OpenAI began in 2015, when the project launched alongside other industry veterans such as Y Combinator's Sam Altman and Ilya Sutskever, then a research scientist at Google. Musk was one of OpenAI's original funders. He left the organization in 2018, possibly due to a conflict of interest with Tesla's AI division.
Musk was also among the signatories of an open letter calling for a pause on training AI systems more powerful than GPT-4, the latest model behind ChatGPT, released in March.

The letter started off by saying that AI should be "planned for and managed with commensurate care and resources," but that this is not happening.
As AI becomes “human-competitive at general tasks,” the letter asks: “Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?”
“Such decisions must not be delegated to unelected tech leaders.”
The letter calls for "safety protocols" in building such technology, with AI developers working in tandem with policymakers. These protocols should make AI "safe beyond a reasonable doubt."
"Society has hit pause on other technologies with potentially catastrophic effects on society. We can do so here. Let's enjoy a long AI summer, not rush unprepared into a fall," the letter concluded.