AI Can Lead to Financial Instability, Low Trust in Banks: BoE Analyst

AI bias can result in decisions to offer exploitative interest rates to ethnic minorities and smaller lines of credit to women.
Screens displaying the logos of OpenAI and ChatGPT are pictured in Toulouse, France, on Jan. 23, 2023. LIONEL BONAVENTURE/AFP via Getty Images
Evgenia Filimianova

Artificial intelligence poses a risk to financial stability and trust in banks, a Bank of England (BoE) analyst has warned.

AI models are increasingly used across various industries to optimise operations, and banking is no exception. However, biased data or unethical algorithms used in AI could exacerbate financial stability risks, according to a blog post by BoE staff.

Analyst Kathleen Blake wrote in a post published on Wednesday that AI models could carry the bias embedded in their training data into decisions made by financial institutions.

This can range from offering exploitative interest rates to ethnic minorities to extending smaller lines of credit to women.

“For individual financial institutions, the use of biased or unfair AI could lead to reputational and legal risk, risks that many prudential regulators consider in setting capital requirements,” Ms. Blake said.

She cited an example of the algorithm used by Apple and Goldman Sachs for decisions on credit card applications, “which seemingly offered smaller lines of credit to women than to men.”

While the New York State Department of Financial Services found no violation of fair lending requirements, Ms. Blake argued that such incidents could damage reputation and trust in the future.

“Trust is an important concept for financial stability of the financial system in aggregate, but also the stability of individual institutions,” she said.

Isolated incidents caused by biased AI data may not appear significant, but in combination with other risks they could lead to loss of capital, the analyst warned.

AI models are fed data from which they arrive at decisions. When provided with sufficient information, they recognise patterns and make predictions.

If the data informing the AI algorithm is biased or flawed, the model’s decisions will reflect that.

Ms. Blake argued that AI bias can’t be prevented by simply removing some features from the input data.
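The point can be illustrated with a toy example. The sketch below is hypothetical and not drawn from the BoE post: it uses synthetic data and scikit-learn's LogisticRegression, and the "postcode" feature is an invented proxy that happens to correlate with the protected group. Even though the model never sees the protected attribute, it still reproduces the bias in the historical approval decisions through that proxy.

```python
# Hypothetical illustration: dropping a protected attribute does not remove
# bias when a correlated proxy variable remains in the training data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Protected attribute (group 0 vs group 1) -- never shown to the model.
group = rng.integers(0, 2, n)

# Proxy feature correlated with the group (e.g. a postcode indicator):
# it matches the group 80% of the time.
postcode = np.where(rng.random(n) < 0.8, group, 1 - group)

# Income is independent of group in this toy example.
income = rng.normal(50, 10, n)

# Historical approval decisions were biased against group 1.
approve_prob = 1 / (1 + np.exp(-(0.05 * (income - 50) + 1.5 * (1 - group) - 0.75)))
approved = rng.random(n) < approve_prob

# Train only on "neutral" features: income and postcode, not group.
X = np.column_stack([income, postcode])
model = LogisticRegression().fit(X, approved)
pred = model.predict(X)

# The model still approves the two groups at different rates, because
# postcode acts as a stand-in for the protected attribute.
for g in (0, 1):
    print(f"group {g}: predicted approval rate = {pred[group == g].mean():.2f}")
```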

She discussed the unlawful practice of redlining in insurance and mortgage lending, under which white people were historically given better interest rates than ethnic minorities.

“If firms train their models on biased historical data which includes redlining, there is a risk of such algorithms learning to copy patterns of discriminatory decision-making,” Ms. Blake said.

She suggested that use of historical data sets could shape decision-making processes in the future and significantly impact the output of AI models in adverse ways.

Society’s trust in the financial sector is key, the blog said. In periods of low trust, which can be exacerbated by the cost-of-living crisis and high inflation, the public’s confidence in its financial institutions could be more easily swayed.

Central banks will have to consider this in their use of AI models, Ms. Blake said.

Banking group Santander said AI has countless uses in the banking sector, including improving customer service and optimising loan management. It can also boost the safety of transactions, the bank said.

“For example, if you live in one city and there is activity on your bank card in another city, it is possible to use data from the geolocation systems on your mobile phone to check your location, notify you of the activity and ask you to verify it. This helps prevent crimes such as identity theft involving cards,” said Santander.
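A minimal sketch of the kind of check Santander describes is given below. It is not Santander's system: the data classes, city-matching rule, and example values are all assumptions made purely for illustration, and a real fraud check would use richer geolocation data and risk scoring rather than a simple city comparison.

```python
# Hypothetical sketch: flag card activity when the transaction location
# does not match the location reported by the customer's phone.
from dataclasses import dataclass


@dataclass
class CardEvent:
    card_id: str
    transaction_city: str


@dataclass
class PhoneLocation:
    card_id: str
    city: str


def needs_verification(event: CardEvent, phone: PhoneLocation) -> bool:
    """Ask the customer to verify when the card is used away from their phone."""
    return event.transaction_city.lower() != phone.city.lower()


# Example: card used in Manchester while the phone reports London.
event = CardEvent(card_id="1234", transaction_city="Manchester")
phone = PhoneLocation(card_id="1234", city="London")
if needs_verification(event, phone):
    print("Unusual location: notify the customer and ask them to verify the activity.")
```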

Helena Sans, head of technology at Barclays, has argued that the strength of AI lies in its ability to empower people, “helping them to interpret complex data to make more informed choices and better decisions.”

The issue of AI regulation has been a hot topic among government ministers, who are currently working on a white paper and a public consultation. The white paper includes proposals for regulatory reform.

Britain will host the first major global summit on AI safety in November at Bletchley Park in Buckinghamshire.

Evgenia Filimianova
Author
Evgenia Filimianova is a UK-based journalist covering a wide range of national stories, with a particular interest in UK politics, parliamentary proceedings and socioeconomic issues.