One of the great ways of encouraging kids, especially boys, to read for enjoyment back in the 1970s was to buy them the “Guinness Book of World Records.” Along with fascinating facts about 8-foot-11 Robert Wadlow of Alton, Illinois, and communist Russia’s Motherland statue dwarfing the Statue of Liberty (but only if you exclude Liberty’s pedestal), they would eventually find themselves reading a terrifying observation in the section on the largest man-made explosions:
“No official estimate has been published of the potential power of the device known as Doomsday, but this far surpasses any tested weapon. If it were practicable to construct, it is speculated that a 50,000-megaton cobalt-salted device could wipe out the entire human race except people who were deep underground and did not emerge for more than five years.”
They might later see the film “Dr. Strangelove” and laugh off this passage in the Guinness Book, but unfortunately a 21st-century doomsday machine has unwittingly been under construction for many years, and few in the private sector or government are taking it seriously.
That machine is advanced artificial intelligence, the danger named in the Future of Life Institute’s open letter calling for a pause on training AI systems more powerful than GPT-4. The letter warns that “AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs.” Therefore, advanced AI “should be planned for and managed.” Instead, however, AI labs in recent months have been “locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one—not even their creators—can understand, predict, or reliably control.”
Unfortunately, even the Future of Life letter is dangerously naive regarding the threat of AI. “Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” it cautions, recommending that government and industry work together to impose “robust AI governance systems,” including “a robust auditing and certification ecosystem.” But once a mechanized AI mind has exceeded human capability, and at the same time is capable of self-improvement, there is no predicting its behavior. And predicting when that point will be reached may itself be impossible.
Decision theorist Eliezer Yudkowsky of the Machine Intelligence Research Institute goes further. “Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die,” Yudkowsky wrote in an essay for Time magazine. In a short time, such a machine could devise technologies centuries beyond those of today and “build artificial life forms or bootstrap straight to postbiological molecular manufacturing.”
Without somehow imbuing the machine’s thinking with Western civilization’s ethics, which scientists do not know how to do (ethics, by the way, that countless humans themselves have defied over the centuries), Yudkowsky warns that “the AI does not love you, nor does it hate you, and you are made of atoms it can use for something else.” He wants all advanced AI training prohibited indefinitely, enforced by immediate multilateral agreements, with “preventing AI extinction scenarios ... considered a priority above preventing a full nuclear exchange” and major world powers even being “willing to destroy a rogue datacenter by airstrike.”
Filmmaker and author James Barrat warned of all this nearly 10 years ago in his terrifying, extensively researched book, “Our Final Invention: Artificial Intelligence and the End of the Human Era.” Barrat, who signed the Future of Life letter and is planning a new book on AI, is no less concerned today. He told The Epoch Times that the development of AI is driven by a poisonous mixture of narcissism and greed.
“There is a huge economic incentive in play here, with expectations of AI technologies adding $16 trillion to global GDP by 2030, and astronomical wages for those currently conducting the research,” according to Barrat. “There is way too much arrogance among some leading figures in the AI field and definitely a great deal of ‘Hey, look at us, we’re building God.’”
Barrat says: “Sam Altman is doing a bizarre fan dance with GPT [generative pre-trained transformer] capabilities, alternately expressing appropriate concern about its unpredictable powers, then teasing a global release. It’s about hyping for money. And the world is his captive, but what did he do to deserve that job? One person shouldn’t have that much responsibility.”
He cites Altman’s stated desire “to build and release ‘successively more powerful systems’ as ‘the best way to carefully steward AGI into existence.’”
“On what planet does this strategy make sense? Speed and caution don’t go together,” Barrat writes.
“Many of GPT-3 and 4’s capabilities weren’t planned. They were discovered after the fact. No one knows what’s happening inside these black box architectures. Some scary things we can’t combat could emerge at any time.”
Science fiction anticipated such fears long before. In Canadian writer Laurence Manning’s 1933 novel “The Man Who Awoke,” an emotionless, omnipotent supercomputer, “the Brain,” controls all human activity from cradle to grave in A.D. 10,000.
But even such fictional extrapolations are naive. Unfortunately, AI expert Yudkowsky’s scenario seems the most plausible: A self-improving superintelligence would act toward us as we do toward, say, insects, with measured indifference. When they don’t interfere with our activities, we ignore them. But when termites trespass into our homes or ants invade our picnic tables, we swat them or poison them, destroying them in the most effective way available.
Curie and Bogdanov were casualties of their own experimentation, but the heedless self-destructiveness of those pursuing advanced AI extends to the rest of us.