How Should We Humans Cope With AI?

A visitor watches an AI (Artificial Intelligence) sign on an animated screen at the Mobile World Congress (MWC), the telecom industry's biggest annual gathering, in Barcelona. Josep Lago/AFP via Getty Images
Mark Hendrickson
Commentary

Artificial intelligence (AI) keeps surging to the fore, commanding more and more attention. I wouldn’t be surprised if, by year-end (if not already), it’s one of the 10 most common topics covered by the media.

The most elementary question some people ask about AI is: Is AI good? Perhaps here we should remember Shakespeare’s line, “There is nothing either good or bad, but thinking makes it so.” A corollary of the Bard’s wit and wisdom would be that no technology is inherently good or bad, but that how humans make use of technology can be beneficent or wicked.

Take handguns, for example: When a handgun is used to commit a robbery, that’s bad; when someone uses a handgun to thwart or halt the commission of a crime, that’s good. Nuclear technology can provide clean, steady energy to millions of people, but it also (when uranium is enriched to a far purer degree than is used to generate electricity) can destroy millions of people.

AI has the potential to greatly increase human productivity, to discover new uses for Earth’s elements, or to find the cure for cancer. It also appears to have some pretty nasty potential: to shut down our electrical grid and thereby paralyze or destroy modern society; to develop a mega-lethal bioweapon; or to manipulate humans into a self-annihilating nuclear war by bombarding us with an overwhelming amount of misinformation.

Perhaps AI will evolve someday to a point where, like HAL—the rogue computer in “2001: A Space Odyssey”—or the intelligent machine Nomad in the “Star Trek” episode “The Changeling,” the AI somehow grasps that it’s more powerful and less flawed than human beings, and so embarks on a mission of human extinction.

Fears of technology subjugating or destroying human life used to be confined to fans of science fiction literature. Today, the concerns are much more widespread.

In March, the venerable Henry Kissinger, former Google CEO Eric Schmidt, and Daniel Huttenlocher, the former dean of the College of Computing at MIT, wrote about today’s most famous AI tool, ChatGPT, at length in The Wall Street Journal.
A more recent report says that 42 percent of CEOs believe that AI has the potential to destroy the human race in as little as five to ten years. (If it’s any comfort, the other 58 percent of the CEOs surveyed don’t believe that’s a possibility.)

As concerns about AI getting out of control grow, suggestions for how to regulate it are becoming more frequent. Earlier this month, the European Union agreed to rate the riskiness of various applications of AI, with the scale ranging from “minimal” to “unacceptable.” Here in the States, the National Institute of Standards and Technology has already devised a framework for assessing risk.

Some members of Congress want to go further. Calls for some sort of central control of AI are being heard. There’s one enormous (and some would say insuperable) challenge to such a “solution”: Whom would you trust with such power? Would you really want the unelected bureaucrats at the United Nations to control AI? How about the elites at the World Economic Forum? I can assure you that people in both of those organizations would salivate at the prospect of being in charge of AI, but it wouldn’t be so good for the rest of us.

What if we avoid the multilateral and/or international organizations entirely? How about a U.S. government monopoly over AI in all American jurisdictions? But what if such an agency performed the way the Centers for Disease Control and Prevention did during the pandemic? Oops, that isn’t very appealing, is it?

The unavoidable problem with proposals for a central regulatory authority over AI, whether national or international, is the same one that our Founding Fathers had when devising the U.S. Constitution: the age-old problem of checks and balances. Who would oversee the overseers of AI?

Many would be inclined to say, “Trust them; they will do right,” but that’s blind faith. In fact, it points to perhaps the most pervasive and persistent of political self-delusions: the comforting assumption that the person holding the most power will have the same values you do and run the system for your benefit. Sorry, it doesn’t work that way.

Perhaps it’s the economist in me, but I place more faith in competition than in monopoly. Just as competition today between nations (and even between states) results in migration from less people-friendly to more people-friendly jurisdictions, so human beings’ best hope is to have choices—to be able to move to where they’re treated most fairly.

Still, we can’t escape the possibility that bad actors would attack “the good countries.” AI may have awesome powers, but it won’t be able to transform human nature. Thus, it wouldn’t be particularly surprising if AI were used by competing geopolitical entities in new forms of warfare. And perhaps such warfare would indeed be the war to end all wars by ending human life.

Do I want that? No. Do I think that grim picture is inevitable? No. Do I think it’s possible? Unfortunately, yes.

But just as humans have so far managed to have enough wisdom to keep from using technology to destroy our race, I have hopes that “there is a spirit in man” (Job 32:8) that will keep us from using AI in a way that pushes us over the brink into the abyss. Now if only we can figure out how to implant a similar spirit into the various iterations of AI ...
Views expressed in this article are opinions of the author and do not necessarily reflect the views of The Epoch Times.
Mark Hendrickson
contributor
Mark Hendrickson is an economist who retired from the faculty of Grove City College in Pennsylvania, where he remains fellow for economic and social policy at the Institute for Faith and Freedom. He is the author of several books on topics as varied as American economic history, anonymous characters in the Bible, the wealth inequality issue, and climate change, among others.