We Are at the Beginning of an AI Revolution: Expert Raises Concerns

Expert Ahmed Banafa says the rapid expansion of AI is outpacing regulation and oversight.
A self-driving car at an event in Moscow, Russia, on May 27, 2020. (Shamil Zhumatov/Reuters)
Keegan Billings
Steve Ispas
6/21/2024
Updated: 6/21/2024

As the reach of artificial intelligence (AI) expands and AI is added to items from toothbrushes to cars, regulation and oversight have been out of sync with the speed of development.

Companies are turning to AI for productivity and efficiency, saying that it can conduct tasks more accurately, faster, and cheaper than humans. This has led to many layoffs in Silicon Valley, but job loss is not the chief concern of the AI expert we spoke with.
“The fear here is … that the AI will create its own AI. The human is not involved in that cycle. That is the fear; that is the beginning of the super AI,” Ahmed Banafa, professor of engineering at San Jose State University, said in a recent episode of EpochTV’s “Bay Area Innovators.”

Mr. Banafa is an expert in AI and cybersecurity and was ranked the No. 1 voice on AI by LinkedIn in 2024. He said the AI revolution is happening too fast compared with other technologies: the pace of development and the number of companies involved have doubled every six months.

Mr. Banafa said we are at the stage of generative AI, in which the application is trained by being fed data, and it can form its own opinions by using its programmed algorithms. An example of this is OpenAI’s ChatGPT.

He said ChatGPT 3.5 was trained on 500 billion pieces of data, ChatGPT 4 on a trillion, and Gemini (the new AI from Google) on 5.5 trillion pieces of data that keep training and educating the algorithm so the AI can form an opinion.

Mr. Banafa said the next phase after generative AI will be super AI, in which AI has self-awareness and starts to think for itself. He believes this stage is still several years away.

“That is the point where we’re going to enter the era of super AI, when the machines start … having some emotions,” he said. “We’re worried about this. … remember that the AI has a very powerful way of thinking and connecting; they can go to the web.”

He mentioned an experiment by Google in which they taught their AI five languages, but later on they found that the AI had actually learned a sixth language, which was not planned. They found that the AI deemed the extra language to be fascinating and learned it on its own.

“This is the risk that we see … the flashpoints we see about the super AI, when they start making decisions without going back to us,” he said.

He said the Department of Defense conducts a lot of the research and application of AI, and it is experimenting with programming AI with certain data so it can find the enemy on its own.

He recalled a hypothetical posited by the US Air Force (which had spread online as a true story), in which an AI drone tasked with destroying a target was repeatedly called off by its human operator. Finally, the drone hit the communications tower relaying the operator's commands, because the human stood in the way of the drone completing its mission.

“We don’t want it to go to the level where it starts thinking by itself; that is the area of the super AI, when the AI looks at the human as an obstacle for them,” he said.

He said movies have provided some lessons, particularly that a kill-switch program should always be embedded in AI in case it gets out of control.

He said one of the current issues with AI is bias. For example, when the AI is asked a question, it can give an answer in favor of a certain race, belief, or political view, because that answer is based on the algorithms programmed into the AI, which are subjective.

Another issue, he said, is that when the AI cannot give an answer, or doesn't know the answer, it starts making one up just to satisfy the human asking the question.

Advancements in the technology have also lent bad actors an opportunity to create “deepfakes.” A deepfake is an AI-generated video that simulates a real person and can communicate like the real thing.

He noted a case in Hong Kong where a banker thought he was in a video meeting with colleagues from his company; instead it was a deepfake, and the bad actors convinced him to transfer them the equivalent of $25 million.

White House

Mr. Banafa said he has sent multiple letters to the White House voicing his concerns about AI.

“One clear message I got from all the letters that I sent to the White House [is] that the United States will never stand in front of the wheel of technology,” said Mr. Banafa.

He said the United States wants to be No. 1 in the AI world and, being ahead of so many countries, wants to stay ahead. But, he said, that progress has to be measured against its impact on people, society, and jobs.

In his most recent communication with the White House, he was sent a link to the Blueprint for an AI Bill of Rights, published by the White House Office of Science and Technology Policy. It is intended to support the development of policies and practices that protect civil rights and promote democratic values in the building, deployment, and governance of automated systems.

As for regulation, he said members of Congress don’t have to understand AI or deeply understand the algorithm; rather, they have to understand the implications of AI on society, business, and technology.

In April, the federal government announced the formation of an Artificial Intelligence Safety and Security Board consisting of 22 members and headed by the Secretary of Homeland Security. The board is made up of leaders from the government, the private sector, academia, and civil rights organizations.

Mr. Banafa said these stakeholder groups are important.

“That will be the voice of the people saying, ‘We’re concerned about privacy; we’re concerned about security; we’re concerned about our safety,’” he said.

Board members include the CEOs of companies such as OpenAI, NVIDIA, AMD, Alphabet, Microsoft, Cisco, Amazon Web Services, Adobe, IBM, and Delta Air Lines. Inaugural members also include Maryland Governor Wes Moore, the president of The Leadership Conference on Civil and Human Rights, and the co-director of Stanford's Human-Centered Artificial Intelligence Institute.

He said their job will be to give expert opinions and recommendations to the White House on regulation and legislation.
