Artificial Intelligence Will Be at ‘Center of Future Financial Crises,’ SEC Chairman Warns
SEC Chair Gary Gensler testifies before a Senate Banking, Housing, and Urban Affairs Committee oversight hearing on the SEC in Washington, on Sept. 14, 2021. Evelyn Hockstein-Pool/Getty Images
Katabella Roberts

Securities and Exchange Commission (SEC) chairman Gary Gensler has warned of the potential risks artificial intelligence (AI) may pose to the financial system in the future.

Mr. Gensler told The New York Times in an interview published Aug. 7 that he believes the United States will most likely end up with two or three “foundational” AI models in the future, meaning just a handful of companies will likely develop the technology used by most businesses and people.

This, he said, will likely result in people and companies—and thus the financial system as a whole—becoming reliant on the same information and responding in the same manner or acting in a “herd mentality,” thus increasing the risk of a financial crisis.

“This technology will be the center of future crises, future financial crises,” Mr. Gensler said. “It has to do with this powerful set of economics around scale and networks.”

Mr. Gensler said he was also concerned that advanced AI may place the interests of companies ahead of those of their investors, noting a proposed rule introduced by the SEC last month.

That rule contained new amendments to address various conflicts of interest associated with the use of predictive data analytics by broker-dealers and investment firms in investor interactions.

The rule notes that many firms are increasingly adopting newer technologies such as predictive data analytics and artificial intelligence, and that when these technologies are optimized for investor interests, they can herald great benefits in market access, efficiency, and returns.

However, such technologies could also be used to place a firm’s interest ahead of investor interests, thus negatively impacting investors, the rule states.

Attendees take pictures and interact with the Engineered Arts Ameca humanoid robot with artificial intelligence as it is demonstrated during the Consumer Electronics Show (CES) in Las Vegas, Nev., on Jan. 5, 2022. (Patrick T. Fallon/AFP via Getty Images)

Conflicts of Interest and AI

Under the rule, broker-dealers must “eliminate or neutralize” any conflict of interest that arises from their use of AI, such as when a trading platform’s predictive data analytics puts the broker’s financial interest ahead of the interests of the firm’s clients.

The new SEC rule was driven, in part, by the 2021 “meme stock” rally, during which shares of flailing video game companies such as GameStop Corp. and AMC Entertainment Holdings soared due to increased interest on social media, particularly on forums such as WallStreetBets.

“You’re not supposed to put the adviser ahead of the investor, you’re not supposed to put the broker ahead of the investor,” Mr. Gensler said. “And so we put out a specific proposal about addressing those conflicts that could be embedded in the models.”

The SEC chairman also raised concerns that advanced AI could potentially provide “faulty” financial advice, adding that in such cases, investment advisers would still be held responsible for the inaccurate advice.

“Investment advisers under the law have a fiduciary duty, a duty of care, and a duty of loyalty to their clients,” Mr. Gensler said. “And whether you’re using an algorithm, you have that same duty of care.”

Mr. Gensler’s latest comments echo those he made during a July speech at the National Press Club.

‘The Gravity of These Challenges Is Real’

During his speech, the SEC chairman warned that swift advancements in generative AI could “heighten financial fragility” by promoting herding with “individual actors making similar decisions because they are getting the same signal from a base model or data aggregator.”

“This could encourage monocultures. It also could exacerbate the inherent network interconnectedness of the global financial system,” he said. “Thus, AI may play a central role in the after-action reports of a future financial crisis. Given that we’re dealing with automation of human intelligence, the gravity of these challenges is real.”

However, Mr. Gensler noted that the SEC is focused on protecting the financial system, both on a micro and macro level, from the challenges posed by AI.

In October at the AI Policy Forum summit at MIT, for example, the SEC chairman said he believes “there’s a risk that the crisis of 2027 or the crisis of 2034 is going to be embedded somewhere in predictive data analytics.”

Multiple experts have warned of the threats posed by AI, including misinformation, economic and political disruptions, and job losses.

Goldman Sachs economists have warned that two-thirds of occupations across America could be partially automated by AI, although they also estimate enhanced productivity and sales as well as improved manufacturing due to AI could lead to an almost $7 trillion increase in global GDP.

Meanwhile, a study from Morgan Stanley published in May found that 72 percent of investors believe AI will be a “game changer” for the financial industry, with the majority expecting it to revolutionize financial services.

However, more than 80 percent of the study’s respondents said they do not expect AI to completely replace human financial guidance.