August 1, 2023

Yet Another Warning From Banking Regulators About AI Bias

Advisory

On July 18, 2023, Federal Reserve Vice Chair for Supervision Michael Barr cautioned banks against fair lending violations arising from their use of artificial intelligence (AI). He noted several ways AI models can produce discriminatory results: training on data that reflects societal biases; relying on data sets that are incomplete, inaccurate, or nonrepresentative; specifying model variables unintentionally correlated with protected characteristics; and other problems.
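
To make the proxy-variable concern concrete, the minimal Python sketch below screens a candidate model input for correlation with a protected characteristic. Everything in it is hypothetical: the data is synthetic, the field names are invented, and the 0.3 threshold is an arbitrary illustration rather than any regulatory standard; real fair lending testing is considerably more involved.

```python
# Illustrative sketch only: screen a "neutral" model feature for correlation
# with a protected characteristic. Synthetic data; the 0.3 cutoff is an
# arbitrary screening threshold, not a regulatory standard.
from statistics import correlation  # Pearson correlation, Python 3.10+

# Hypothetical records: a ZIP-code-level score used by a credit model,
# alongside a protected-class indicator retained solely for testing.
feature_values  = [0.91, 0.85, 0.40, 0.35, 0.88, 0.30, 0.82, 0.38]
protected_flags = [0, 0, 1, 1, 0, 1, 0, 1]

r = correlation(feature_values, protected_flags)
print(f"Correlation with protected characteristic: {r:+.2f}")

if abs(r) > 0.3:  # flag potential proxies for human fair lending review
    print("Feature may act as a proxy; escalate for review.")
```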

Vice Chair Barr’s remarks come against a backdrop of increased concern among federal financial regulators about the deployment of AI and other automated systems. These authorities fear that deployment without appropriate guardrails can lead financial institutions to breach various laws and to take on unnecessary risks to their safety and soundness. Similar fears are shared across the federal government, prompting the leaders of key U.S. federal enforcement agencies this spring to stress their common intent to crack down on algorithmic discrimination. (For more on their joint statement, see our previous Blog Post here.)

Vice Chair Barr welcomed the appropriate use of new AI technology, recognizing its potential to leverage digital data sources at scale and at low cost to expand access to credit. However, because AI use also carries risks of violating fair lending laws and perpetuating disparities in credit transactions, Vice Chair Barr called it “critical” for regulators to update their applications of the Fair Housing Act (FHA) and Equal Credit Opportunity Act (ECOA) to keep pace with these new technologies and prevent new versions of old harms. Violations can result from both disparate treatment (treating credit applicants differently based on a protected characteristic) and disparate impact (apparently neutral practices that produce different results based on a protected characteristic).

As an example, Vice Chair Barr cited digital redlining, in which majority-minority communities or minority applicants are denied access to credit and housing opportunities. He also called out reverse redlining, in which “more expensive or otherwise inferior products” are pushed to minority communities.

Relatedly, Vice Chair Barr mentioned expected changes to the implementing regulations under the Community Reinvestment Act (CRA), which was enacted after the FHA and ECOA to further address redlining and other systemic inequities in access to credit, investment, and banking services. As part of CRA exams, bank examiners evaluate whether there is any evidence of discriminatory or other illegal credit practices inconsistent with helping to meet community credit needs. Vice Chair Barr noted the interagency work on adapting CRA regulations and evaluations to address technological advancements in banking.
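
To give a feel for how a disparate impact screen might be expressed quantitatively, the sketch below compares approval rates across two groups using an adverse impact ratio with a “four-fifths” cutoff. That heuristic comes from EEOC employment guidance rather than fair lending law, and the counts here are invented; it is offered only as intuition for the concept, not as a compliance test.

```python
# Illustrative disparate impact screen on loan approval outcomes.
# Counts are synthetic; the four-fifths (0.8) cutoff is a screening
# heuristic from EEOC employment guidance, not a fair lending standard.

# Hypothetical outcomes: (approvals, total applications) per group.
outcomes = {
    "reference_group": (180, 200),
    "protected_group": (120, 200),
}

ref_rate = outcomes["reference_group"][0] / outcomes["reference_group"][1]
prot_rate = outcomes["protected_group"][0] / outcomes["protected_group"][1]

air = prot_rate / ref_rate  # adverse impact ratio
print(f"Approval rates: {ref_rate:.0%} vs. {prot_rate:.0%}; ratio = {air:.2f}")

if air < 0.8:
    print("Ratio below 0.8: possible disparate impact; investigate further.")
```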

Financial institutions that do not heed these warnings may face problems in compliance exams or be subject to investigations. Identified patterns or practices of discrimination could result in referrals by the federal banking regulators to the Department of Justice for enforcement.

During his speech, Vice Chair Barr also voiced support for two recent policy initiatives addressing appraisal discrimination and bias in mortgage transactions. On June 1, 2023, the Federal Reserve and several other federal financial agencies issued a Notice of Proposed Rulemaking on the use of AI and other algorithmic systems in appraising home values (Proposed Rule). For more information about the Proposed Rule, see our prior Advisory. On June 8, 2023, the same agencies invited public comment on guidance to assist financial institutions in incorporating “reconsiderations of value” into their home appraisal processes, which could help mitigate the risk of improperly valuing real estate.

With all these changes (and others in the pipeline), financial institutions should consider establishing comprehensive AI risk-management programs now and keep a close watch on the evolving legal landscape. For questions regarding this Advisory or managing AI’s regulatory and other risks, contact any of the authors listed here or your usual Arnold & Porter contact.

© Arnold & Porter Kaye Scholer LLP 2023 All Rights Reserved. This Advisory is intended to be a general summary of the law and does not constitute legal advice. You should consult with counsel to determine applicable legal requirements in a specific fact situation.