Top Fed Regulator Cautions AI Risks Violating Lending Laws

Federal Reserve Board Vice Chair for Supervision Michael S. Barr speaks during a hearing with the Senate Banking Committee on Capitol Hill on May 18, 2023. Anna Moneymaker/Getty Images
Andrew Moran

Artificial intelligence (AI) carries both advantages and risks for the financial system, according to the Federal Reserve’s (Fed’s) chief banking regulator.

Michael S. Barr, the Fed’s vice chair for supervision, told the National Fair Housing Alliance’s 2023 National Conference on July 18 that AI technologies such as machine learning could expand access to housing for underserved communities by helping ensure that affordable credit reaches “people who otherwise can’t access it.”

At the same time, Mr. Barr said he is concerned that these technologies could pose challenges by “violating fair lending laws and perpetuating the very disparities that they have the potential to address.”

“Use of machine learning or other artificial intelligence may perpetuate or even amplify bias or inaccuracies inherent in the data used to train the system or make incorrect predictions if that data set is incomplete or nonrepresentative,” he said in prepared remarks. “There are also risks that the data points used could be correlated with a protected class and lack a sufficient nexus to creditworthiness.”

Mr. Barr cited the possibility of “digital redlining,” which could leave minority borrowers unable to take advantage of credit or housing opportunities. There is also the threat of “reverse redlining,” in which costly or inferior financial products are pushed on minority communities.

“While banks are still in the early days of adopting artificial intelligence and other machine learning technologies, we are working to ensure that our supervision keeps pace,” he added.

This year, the Federal Reserve has stepped up its assessment of artificial intelligence as the technology has rapidly advanced and become more prevalent.

In April, Fed Governor Christopher Waller revealed that the central bank has “regular discussions” with the financial institutions it supervises about the risks associated with AI.

“Whether and how they might make use of generative language models remains to be seen,” Mr. Waller said at the Cryptocurrency and the Future of Global Finance conference in Sarasota, Florida. “The technology may bring new efficiencies to banks’ software development processes or have applications in customer service—or it may be useful in some way we haven’t foreseen yet.”

During his recent semiannual Monetary Policy Report testimony to Congress, Fed Chair Jerome Powell acknowledged that the Fed is trying to keep pace with the acceleration of AI adoption and that more research and supervision are necessary.

SEC’s AI Regulations

Artificial intelligence could exacerbate “financial fragility,” according to Securities and Exchange Commission (SEC) Chair Gary Gensler.

Mr. Gensler said the stock market watchdog is developing regulations for AI technologies in the financial markets to ensure that products and advice benefit investors.

Speaking at the National Press Club on July 17, Mr. Gensler said he envisions the SEC’s regulatory efforts preventing conflicts of interest and shielding against fraudulent and deceptive uses of AI.

“For the SEC, the challenge here is to promote competitive, efficient markets in the face of what could be dominant base layers at the center of the capital markets. I believe we closely have to assess this so that we can continue to promote competition, transparency, and fair access to markets,” the SEC head said.

U.S. Securities and Exchange Commission (SEC) Chair Gary Gensler testifies before a Senate Banking, Housing, and Urban Affairs Committee oversight hearing on the SEC on Capitol Hill on Sept. 14, 2021. Evelyn Hockstein/Reuters

As more advisers and brokers incorporate these technologies into their services, Mr. Gensler said, advice and recommendations must be “in the best interest of the clients and retail customer and not place their interests ahead of investors’ interests.”

The SEC began examining possible conflicts of interest related to these technologies in 2021 as the financial sector explored how to target consumers with customized marketing, pricing, and prompts.

The Federal Trade Commission (FTC) has also sought to address algorithmic discrimination.

“Rulemaking may prove a useful tool to address the breadth of challenges that can result from commercial surveillance and other data practices and could establish clear market-wide requirements,” FTC Chair Lina Khan wrote in an August 2022 report.

AI-Fueled Growth

Despite growing concerns, economists have highlighted how much artificial intelligence could boost the economy and bolster stocks.

Goldman Sachs Research economists estimate that AI adoption could raise global GDP by 7 percent and boost productivity growth by 1.5 percentage points per year over a decade. It could also lift S&P 500 earnings per share at a compound annual growth rate of 5.4 percent over the next 20 years.

“Increased economy-wide output could translate into increased revenues and earnings for S&P 500 companies, even beyond those firms directly involved in the development of AI,” Goldman analysts wrote in June.
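To put that figure in perspective, a back-of-the-envelope calculation (an illustration, not part of the Goldman Sachs report) shows that compounding earnings per share at 5.4 percent annually for two decades would imply nearly a tripling of current earnings:

$$
(1 + 0.054)^{20} \approx 2.86
$$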

Screens displaying the logos of Microsoft and ChatGPT, a conversational artificial intelligence application software developed by OpenAI. Lionel Bonaventure/AFP via Getty Images
Speaking at a recent tech event, Microsoft CEO Satya Nadella described AI as “a massive partner opportunity.”

“If you have an economy that’s around $100 trillion, we may have $7 to $10 trillion more of GDP growth driven by this next generation of AI technology,” Mr. Nadella said.

But representatives of the United Nations have said that a new international body should be established to govern the rise of AI and to identify its potential risks and benefits.

“The malicious use of AI systems for terrorist, criminal or state purposes could cause horrific levels of deaths and destruction, widespread trauma and deep psychological damage on an unimaginable scale,” U.N. Secretary-General António Guterres said at the first-ever meeting of the U.N. Security Council devoted to AI governance on July 18.

“Generative AI has enormous potential for good and evil at scale. Its creators themselves have warned that much bigger, potentially catastrophic and existential risks lie ahead. Without action to address these risks, we are derelict in our responsibilities to present and future generations.”

Andrew Moran
Author
Andrew Moran has been writing about business, economics, and finance for more than a decade. He is the author of "The War on Cash."