NCRC and Fintechs – Joint Letter on Fair Lending and the Executive Order on AI

September 25, 2024

Dear Director Chopra and Director Thompson,

As nonprofit consumer advocates and for-profit fintech companies, we have found common ground that artificial intelligence (AI) can, and in many cases should, be used to improve fair lending practices. We write today to encourage further action pursuant to the Administration’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.[1] In the following letter, we share several actions that the Consumer Financial Protection Bureau (CFPB) and Federal Housing Finance Agency (FHFA) could take in carrying out the directives of that Executive Order.

Section 7 of the Executive Order addresses the salutary use of AI to reduce biases in consumer and small business financial services. Under the heading “Strengthening AI and Civil Rights in the Broader Economy,” the Executive Order encourages the CFPB and FHFA to consider requiring their regulated entities to use appropriate methodologies, including AI, to ensure compliance with federal anti-discrimination law.[2]

We commend the CFPB and FHFA’s recent approval of a new rule that addresses the use of AI in automated valuation models, as encouraged by Section 7.3.ii of this Executive Order.[3] We also commend the FHFA as the first among federal agencies to publish AI-specific guidance, in 2022, which provides important clarity for Fannie Mae and Freddie Mac (the government-sponsored enterprises, or GSEs).[4]

Both as nonprofit advocacy groups and for-profit fintech companies, we share common goals of addressing discrimination in lending and creating a more inclusive and fair financial system. Additionally, in line with the White House Executive Order, we believe that appropriate use of AI can play a constructive role in improving accuracy, fairness and inclusion in credit decisions and pricing.

AI is a fast-changing and broad category of technologies. In this letter, we focus on the machine learning and deep learning categories of AI rather than generative AI, which has no known debiasing applications in fair lending at this time.

AI Can Improve Fair Lending and Reduce Discrimination

AI and other improvements in data technology can enable fair lending testing to be more efficient and effective than in the past.[5] In particular, these tools can be used to build fairer and more inclusive credit models and to conduct robust, efficient searches for Less Discriminatory Alternatives (LDAs), as required under the Equal Credit Opportunity Act (ECOA) and Fair Housing Act.

The importance of improving on the fair lending practices of past decades should not be understated. Continuing the status quo of credit underwriting perpetuates an unnecessary degree of exclusion. According to CFPB researchers, over 20% of American consumers, 45 million people, are either credit invisible under legacy scoring methods or unscorable due to insufficient or “stale” information.[6] Separately, traditional credit scoring can, in some cases, lock people into low scores and out of the credit market.[7] Those affected can include younger consumers, low-income individuals, gig workers, Black and Hispanic people, and anyone who lives in areas with a high concentration of minorities, renters, and foreign-born people, yet many of these individuals are more creditworthy than their credit scores suggest.

One of AI/ML’s beneficial applications is to make it possible, even using traditional credit history data, to score previously excluded or unscorable consumers. In some cases, AI models are enabling access and inclusivity.[8] The evidence of potential benefits from newer AI-based modeling techniques underscores the value of regulatory clarity on the use of AI in fair lending testing as well.

As these AI methods are explored, transparency is essential. Internal and external stakeholders must be able to understand how a model works and correct for biases embedded in the historical data used to build these machine learning models. Some of these tools are described as using transparent machine learning, a subfield of AI that is in use in the market today and can produce inclusive credit decisions.
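To illustrate the kind of transparency at stake, below is a minimal sketch of one generic model-inspection technique, permutation importance, which surfaces the inputs a fitted credit model actually relies on. This is our own illustration, not the specific transparent-ML products referenced above; the data and feature names are invented.

```python
# Minimal transparency sketch (illustrative only): permutation importance
# measures how much held-out performance drops when each input is shuffled.
# Larger drops mean the model leans on that input more. All data and
# feature names are hypothetical assumptions.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 4_000
X = pd.DataFrame({
    "income": rng.normal(0, 1, n),
    "debt_ratio": rng.normal(0, 1, n),
    "months_on_file": rng.normal(0, 1, n),
})
# Synthetic repayment outcome driven mainly by income and debt ratio.
y = (X["income"] - 0.5 * X["debt_ratio"] + rng.normal(0, 1, n) > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=3)

model = GradientBoostingClassifier(random_state=3).fit(X_tr, y_tr)
result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=3)

for name, imp in sorted(zip(X.columns, result.importances_mean),
                        key=lambda t: -t[1]):
    print(f"{name:15s} mean importance = {imp:.3f}")
```

A report like this lets stakeholders ask the right follow-up questions, such as why a particular input carries so much weight and whether it could be standing in for a protected characteristic.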

Fair lending compliance mandates that data variables be assessed for disparate treatment and disparate impact. Individual data inputs must be reviewed, and inputs that are potential proxies for protected class characteristics that lenders may not legally consider should be removed. Tradeline data can carry embedded bias when it reflects performance that was negatively affected by predatory credit terms, lack of access to credit, or little or no credit history. Responsibly used AI can make it possible to conduct a more comprehensive assessment of data inputs before they are used in a model.
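One simple form such a proxy review could take is sketched below: test how well each candidate input, on its own, predicts a protected-class attribute. This is a simplified illustration under our own assumptions; the column names, synthetic data, and the 0.6 AUC flagging cutoff are hypothetical, not a regulatory standard or any signatory’s method.

```python
# Minimal proxy-screening sketch (illustrative only): flag candidate model
# inputs that predict a protected-class attribute better than chance.
# Column names, data, and the AUC threshold are hypothetical assumptions.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5_000

# Synthetic applicant data: "zip_density" is constructed to correlate with
# the protected attribute, so the screen should flag it as a potential proxy.
protected = rng.integers(0, 2, n)  # hypothetical protected-class flag
df = pd.DataFrame({
    "income": rng.normal(55_000, 15_000, n),
    "debt_ratio": rng.beta(2, 5, n),
    "zip_density": protected * 0.8 + rng.normal(0, 0.5, n),
})

AUC_FLAG_THRESHOLD = 0.6  # assumed screening cutoff, not a legal standard

for col in df.columns:
    X_tr, X_te, y_tr, y_te = train_test_split(
        df[[col]], protected, test_size=0.3, random_state=0
    )
    probs = LogisticRegression().fit(X_tr, y_tr).predict_proba(X_te)[:, 1]
    auc = roc_auc_score(y_te, probs)
    status = "POTENTIAL PROXY: review before use" if auc > AUC_FLAG_THRESHOLD else "ok"
    print(f"{col:12s} AUC vs. protected class = {auc:.2f}  [{status}]")
```

In practice, reviews of this kind also consider combinations of inputs, since several individually innocuous variables can jointly act as a proxy.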

AI technology has also created the means to conduct robust LDA searches that meaningfully and effectively reduce disparate impact. In fact, without machine learning, it can be very difficult to conduct as comprehensive a search for less discriminatory alternatives. Lenders have sometimes struggled with manual and less sophisticated algorithmic approaches to testing and reducing the effect of individual data variables on the discriminatory impact of their models. AI/ML can perform this analysis efficiently and effectively, rapidly generating multiple model variations for the lender’s consideration to help identify the best-performing option that is less discriminatory.
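The sketch below illustrates the basic shape of such a search under our own simplifying assumptions: generate candidate model variants, then compare each on predictive performance (AUC) and on an approval-disparity measure (here, the adverse impact ratio). Real LDA searches explore far richer spaces of model variations than the one-feature-at-a-time search shown here, and nothing below reflects any signatory’s proprietary method.

```python
# Minimal LDA-search sketch (illustrative only): train candidate model
# variants and compare AUC against the adverse impact ratio (AIR), the
# approval rate of the protected group divided by that of the comparison
# group. Data, features, and the leave-one-feature-out search strategy
# are simplifying assumptions.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 8_000
protected = rng.integers(0, 2, n)
X = pd.DataFrame({
    "income": rng.normal(0, 1, n),
    "debt_ratio": rng.normal(0, 1, n),
    "thin_file_flag": (rng.random(n) + 0.2 * protected > 0.7).astype(int),
})
# Synthetic repayment outcome loosely tied to income and debt ratio.
y = (X["income"] - X["debt_ratio"] + rng.normal(0, 1, n) > 0).astype(int)

X_tr, X_te, y_tr, y_te, _, p_te = train_test_split(
    X, y, protected, test_size=0.3, random_state=1
)

def evaluate(features):
    """Fit a candidate model on a feature subset; return (AUC, AIR)."""
    model = LogisticRegression().fit(X_tr[features], y_tr)
    scores = model.predict_proba(X_te[features])[:, 1]
    approve = scores >= np.quantile(scores, 0.5)  # approve top half (assumed cutoff)
    air = approve[p_te == 1].mean() / approve[p_te == 0].mean()
    return roc_auc_score(y_te, scores), air

# Candidates: the full model plus each leave-one-feature-out variant.
candidates = [list(X.columns)] + [[c for c in X.columns if c != d] for d in X.columns]
for feats in candidates:
    auc, air = evaluate(feats)
    print(f"features={feats!r:50s} AUC={auc:.3f}  AIR={air:.2f}")
```

The output is the kind of menu the letter describes: a set of model variations from which a lender can select an option that maintains predictive performance while reducing disparity.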

Because of these benefits in ease and comprehensiveness of searches, AI can now enable fair lending testing to happen more feasibly and conveniently during earlier stages of modeling, as well as on an ongoing basis.

AI Poses Discrimination Risks As Well, Which Require Strong Fair Lending Tools

Inappropriately managed use of AI can also frustrate fair lending goals. For example, when models are trained on data that describes the past or present, that data may reflect unfairness that exists in society today.[9] A model development process that is not supervised for fair lending compliance may fixate on correlations between protected class status, such as race, and undesired outcomes, such as loan defaults. However, these correlations do not mean that a given person is less likely to repay a loan because of their race or other protected class status. Instead, the correlations may be evidence of historical discrimination, in the form of disparities in family wealth, income stability, or other parts of life that are statistically associated with the protected class.

If the model development process mistakes the unfairness in the data as a predictor of who deserves to be approved for credit, it will recreate that unfairness in the decisions it makes. Model development can thus encode the discrimination of the past into the future.

AI and more complex modeling can exacerbate this discrimination by automating and masking it. For example, if the decisions of a model cannot be explained, it can be difficult to identify whether the model is penalizing people based on their race, or another protected class status associated with them, rather than on evidence of their own creditworthiness. These risks require fair lending tools capable of addressing them, in some cases through the use of AI.

As models themselves become more complicated and technologically advanced, it is necessary to use similarly sophisticated quantitative testing for de-biasing and fair lending testing. Additionally, responsible use of AI requires a diverse set of human stakeholders to oversee the development and implementation of AI at every step of the way.

Obstacles to the Adoption of Improved Fair Lending Practices

Despite the evidence that new AI tools can reduce discrimination and improve fair lending, many lenders remain reluctant to adopt them. Some may believe that continuing to use outdated fair lending tools and practices remains sufficient for regulatory purposes and is thus the safest, easiest approach. As a result, unnecessary discrimination is being perpetuated and accepted.

Some lenders may continue to use inadequate, or merely nominal, fair lending practices out of a desire not to discover reasons to improve. More effective fair lending tools might jeopardize a lender’s “see no evil, hear no evil” rationale for maintaining a model that produces a certain level of discrimination. In other words, if a lender is not aware of an opportunity to do better, it may believe it is protected from the legal requirement to do better; if it can avoid finding a problem, it need not address it. The portion of the industry sitting on the sidelines may continue to strategically avoid identifying the improvements it could and should be making until prompted by regulators like the CFPB and FHFA.

Recommendations

Pursuant to the goals of Section 7 of the Executive Order, we encourage the CFPB and FHFA to help the market evolve in a fairer direction by undertaking the following actions:

1)    Don’t wait for perfect information to act.

AI is changing rapidly, and it is not possible for an agency to answer every question that may arise. However, agencies can provide helpful direction to supervised entities based on what is known today. Some AI/ML practices are already well established and can be addressed now. For example, in publications such as Supervisory Highlights, agencies could publish observations of practices that further fair lending compliance and practices that raise concerns.[10]

2)    FHFA should continue to build upon its 2022 AI Advisory Bulletin.

The FHFA has led the way as the first federal agency to provide AI-specific guidance, which provides important clarity for the GSEs. Both GSEs report cautious exploration of use cases that have the potential to improve housing finance outcomes for millions of Americans.[11] The FHFA has continued to explore beneficial applications of AI.[12] There may be further opportunities to aid the secondary mortgage market overseen by the FHFA by replacing manual underwriting, streamlining the ability of both the GSEs and private capital to provide greater liquidity to the market. Continued use of pilots can be a promising approach for regulators engaging with AI.[13]

3)    Clarify that fair lending compliance should not lag relative to other parts of the lending process.

As AI models come to be used in the lending process, a lender’s level of effort and due diligence in complying with fair lending laws should not fall behind its other efforts in credit decisioning. The CFPB should clarify, for example, that if a company uses AI in its credit decisioning process, relying on decades-old fair lending techniques may not represent sufficient compliance effort when more effective and appropriate methods are available. Avoiding the effective identification of fair lending issues is not compliance.

4)    Provide further written guidance on when fair lending compliance is expected.

We commend the CFPB’s recent confirmation that “regular” testing for disparate treatment and disparate impact, including LDA searches, is expected.[14] We also commend the CFPB’s stated exploration of open-source automated debiasing methodologies for LDA searches.[15] Regulatory adoption of contemporary AI tools for fair lending can both improve regulatory effectiveness and improve industry compliance by reassuring regulated entities that these tools are valid and potentially valuable.

We believe the CFPB can further clarify regulatory expectations on the triggers for, and required frequency of, searches for less discriminatory alternatives. Several thoughtful proposals for further clarification have been brought forward.[16] Other agencies have also clarified that anti-discrimination testing of models should take place both during and after model development.[17]

5)    Clarify that fair lending applies not only to how applicant pools are treated, but also to how applicant pools are selected.

Lending is not fair if the applicant pool that performs well in fair lending testing was assembled in a way that excluded potentially creditworthy applicants from protected classes who may be more likely to be declined. This screening can happen, for example, during the development of lists of customers who will be approached in marketing.[18] AI and automation tools may make it more feasible, scalable, and affordable to conduct fairness testing of applicant selection.
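A minimal sketch of what fairness testing at the selection stage could look like follows, under our own assumptions: compare how often each group survives a marketing prescreen filter, before any underwriting model is run. The data, group labels, cutoff, and the 0.8 reference point (borrowed from the common four-fifths rule of thumb) are all illustrative.

```python
# Minimal applicant-selection fairness sketch (illustrative only): compare
# group-level selection rates produced by a marketing prescreen cutoff.
# Data, the cutoff, and the 0.8 reference point are assumptions.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
n = 10_000
population = pd.DataFrame({
    "group": rng.choice(["A", "B"], size=n, p=[0.7, 0.3]),  # hypothetical groups
    "prescreen_score": rng.normal(600, 80, n),
})
# Constructed effect: the prescreen score runs lower for group B, e.g.
# because it leans on thin-file credit history.
population.loc[population["group"] == "B", "prescreen_score"] -= 40

CUTOFF = 620  # assumed marketing prescreen cutoff
rates = (population.assign(selected=population["prescreen_score"] >= CUTOFF)
         .groupby("group")["selected"].mean())
print(rates)

ratio = rates.min() / rates.max()
flag = " (below 0.8: review the prescreen criteria)" if ratio < 0.8 else ""
print(f"Selection-rate ratio across groups: {ratio:.2f}{flag}")
```

The point of the letter’s recommendation is precisely this: a disparity introduced at the list-building stage never appears in underwriting-stage fair lending tests, so it must be measured where it occurs.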

6)    Supervisory examination procedures and training should address routine review of financial institutions’ model testing protocols and results.

Federal regulators have stressed that fairness is as important as safety and soundness in a financial institution’s compliance review.[19] Regulators should ensure that fair lending examinations routinely include reviews of model testing protocols and assessments of the sufficiency of searches for LDAs.[20] Additionally, to ensure that regulatory understanding of new fair lending tools is reflected in examination practice, we recommend that all supervisory regulators create forums where examiners themselves, in addition to policy staff and management, can learn how AI tools can be used in fair lending compliance.

As nonprofit civil rights and community advocates, and for-profit companies engaged in the use of technology in lending, we applaud the efforts of the CFPB and FHFA in promoting fair, inclusive and transparent lending practices. Despite our differences as nonprofit and for-profit organizations, we believe there is meaningful potential for AI to make financing more inclusive, and are hopeful for a future where credit is accessible and fair for all Americans. We look forward to the possibility of hosting an open discussion with you to further this important conversation.

Sincerely,

National Community Reinvestment Coalition

Zest AI

Upstart

Stratyfy

FairPlay

[1] White House, “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” October 30, 2023. https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/

[2] See Section 7.3: “7.3.  Strengthening AI and Civil Rights in the Broader Economy.  …  (b) To address discrimination and biases against protected groups in housing markets and consumer financial markets, the Director of the Federal Housing Finance Agency and the Director of the Consumer Financial Protection Bureau are encouraged to consider using their authorities, as they deem appropriate, to require their respective regulated entities, where possible, to use appropriate methodologies including AI tools to ensure compliance with Federal law and: (i) evaluate their underwriting models for bias or disparities affecting protected groups…”

[3] Rohit Chopra and Zixta Martinez, “CFPB Approves Rule to Ensure Accuracy and Accountability in the Use of AI and Algorithms in Home Appraisals,” July 24, 2024. https://www.consumerfinance.gov/about-us/blog/cfpb-approves-rule-to-ensure-accuracy-and-accountability-in-the-use-of-ai-and-algorithms-in-home-appraisals/

[4] FHFA, “Advisory Bulletin AB 2022-02: Artificial Intelligence/Machine Learning Risk Management,” February 2022. https://www.fhfa.gov/SupervisionRegulation/AdvisoryBulletins/AdvisoryBulletinDocuments/Advisory-Bulletin-2022-02.pdf; FHFA Office of Inspector General, “Enterprise Use of Artificial Intelligence and Machine Learning,” September 2022. https://www.fhfaoig.gov/sites/default/files/WPR-2022-002.pdf

[5] See, e.g., FinRegLab, “Explainability and Fairness in Machine Learning for Credit Underwriting,” December 2023. https://finreglab.org/wp-content/uploads/2023/12/FinRegLab_2023-12-07_Research-Report_Explainability-and-Fairness-in-Machine-Learning-for-Credit-Undewriting_Policy-Analysis.pdf

[6] Consumer Financial Protection Bureau, “Data Point: Credit Invisibles,” 2015. https://files.consumerfinance.gov/f/201505_cfpb_data-point-credit-invisibles.pdf

[7] See, e.g., Oliver Wyman, “Financial Inclusion and Access to Credit,” 2022. https://www.experianplc.com/newsroom/press-releases/2022/experian-and-oliver-wyman-find-expanded-data-and-advanced-analytics-can-improve-access-to-credit-for-nearly-50-million-credit-invisible-and-unscoreable-americans

[8] For example, signatory members have used AI to increase loan approvals significantly while maintaining consistent risk levels. https://info.upstart.com/hubfs/7835881/2022%20Access%20to%20Credit%20Report-final.pdf. In 2024 another signatory member reported that models it develops using AI/ML have helped financial institutions achieve a 49% increase in approvals for Latino applicants, 41% for Black applicants, 40% for women, 36% for elderly applicants, and 31% for AAPI applicants, all while holding risk levels constant. https://www.prnewswire.com/news-releases/zest-ais-credit-models-proven-to-increase-loan-approvals-for-every-protected-class-302048828.html

[9] See, e.g., Jennifer Chien and Adam Rust, “Urgent Call for Regulatory Clarity on the Need to Search for and Implement Less Discriminatory Algorithms,” Consumer Federation of America and Consumer Reports, June 2024. https://consumerfed.org/wp-content/uploads/2024/06/240625-CR-CFA-Statement-on-Obligation-to-Search-for-and-Implement-Less-Discriminatory-Algorithms-FINAL.pdf

[10] See, e.g., Brad Blower and Adam Rust, “The CFPB Has an Opportunity to Greatly Advance the Ethical and Non-Discriminatory Use of AI in Financial Services and Should Take It,” Consumer Federation of America, January 2024. https://consumerfed.org/the-cfpb-has-an-opportunity-to-greatly-advance-the-ethical-and-non-discriminatory-use-of-ai-in-financial-services-and-should-take-it

[11] For example, Freddie Mac reported that it observed increased performance and risk reduction after adding AI/ML into certain models, benefits that help offset risks heightened by AI/ML. Fannie Mae has reported positive results from its pilot program using AI to analyze rental data to help move former tenants into successful mortgage applications.

[12] See, e.g., FHFA, “2024 TechSprint: Generative AI in Housing Finance.” https://www.fhfa.gov/programs/2024-techsprint-generative-ai-housing-finance

[13] See, e.g., Urban Institute and Federal Home Loan Bank of San Francisco, “Harnessing Artificial Intelligence for Equity in Mortgage Finance,” November 2023. https://www.urban.org/sites/default/files/2023-11/Harnessing%20Artificial%20Intelligence%20for%20Equity%20in%20Mortgage%20Finance.pdf

[14] “Robust fair lending testing of models should include regular testing for disparate treatment and disparate impact, including searches for and implementation of less discriminatory alternatives using manual or automated techniques.” CFPB, “Fair Lending Report of the Consumer Financial Protection Bureau,” June 2024. https://files.consumerfinance.gov/f/documents/cfpb_fair-lending-report_fy-2023.pdf

[15] “CFPB exam teams will continue to explore the use of open-source automated debiasing methodologies to produce potential alternative models to the institutions’ credit scoring models.” CFPB, “Fair Lending Report of the Consumer Financial Protection Bureau,” June 2024. https://files.consumerfinance.gov/f/documents/cfpb_fair-lending-report_fy-2023.pdf

[16] See, e.g., Jennifer Chien and Adam Rust, “Urgent Call for Regulatory Clarity on the Need to Search for and Implement Less Discriminatory Algorithms,” Consumer Federation of America and Consumer Reports, June 2024. https://consumerfed.org/wp-content/uploads/2024/06/240625-CR-CFA-Statement-on-Obligation-to-Search-for-and-Implement-Less-Discriminatory-Algorithms-FINAL.pdf; see also Lisa Rice, “Response to Request for Information on the Equal Credit Opportunity Act and Regulation B,” National Fair Housing Alliance, 2020. https://nationalfairhousing.org/wp-content/uploads/2021/05/NFHA-Comments-CFPB-ECOA-RFI-FINAL.pdf; see also NCRC, Upturn, and Zest AI, “CFPB Should Encourage Lenders To Look For Less Discriminatory Models,” April 2022. https://ncrc.org/cfpb-should-encourage-lenders-to-look-for-less-discriminatory-models/

[17] For example, the Equal Employment Opportunity Commission has described how algorithmic decision-making tools have made it more feasible, and expected, that anti-discrimination testing take place during the development of models, rather than beginning once a model has been developed. See “Select Issues: Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964,” EEOC, May 18, 2023. https://www.eeoc.gov/laws/guidance/select-issues-assessing-adverse-impact-software-algorithms-and-artificial.

[18] See, e.g., “…if upstream marketing or acquisition strategies have screened out historically disadvantaged applicants, a lender should not take comfort solely because a fair lending assessment of its underwriting practices does not reveal disparities.” Relman Colfax PLLC, “Fair Lending Monitorship of Upstart Network’s Lending Model: Fourth and Final Report of the Independent Monitor,” March 27, 2024. https://www.relmanlaw.com/media/cases/1511_Upstart%20Final%20Report.pdf

[19] Ebrima Santos Sanneh, “OCC’s Hsu: fairness as important to bank health as safety and soundness,” American Banker, March 2023. https://www.americanbanker.com/news/occs-hsu-fairness-as-important-to-bank-health-as-safety-and-soundness

[20] We applaud the CFPB’s issuance of Matters Requiring Attention and Memoranda of Understanding to instruct lenders to improve fair lending practices, including by documenting the specific business needs that models serve and testing for disparate impact discrimination. See, e.g., CFPB, “Fair Lending Report of the Consumer Financial Protection Bureau,” June 2024, p. 8.
