Calls on Federal Banking Regulators to Protect Consumers
The National Community Reinvestment Coalition (NCRC) submitted a detailed comment letter to five federal agencies yesterday, outlining how consumers should be protected when lenders use artificial intelligence (AI) and machine learning (ML) to underwrite loans.
Twenty-eight state and local groups signed on in support of NCRC’s comment.
Jesse Van Tol, CEO of NCRC, made the following statement:
“Artificial intelligence represents the new frontier for how consumers will access credit. It is essential that regulators make sure consumers remain protected from discriminatory practices now, not at some point in the future. We cannot allow innovation to come before the protection of consumers.
“Every person in a community, regardless of their race, age, or socioeconomic status, should have the opportunity to build wealth. We should bear in mind that our nation’s persistent racial wealth gaps mean that equal access to financial products and services is critical, and building community prosperity requires a long-term plan to expand and preserve access to credit and capital.
“The use of black-box models heightens the risk of discriminatory credit markets. We are concerned that without adequate and ongoing review, some lenders will develop algorithms that use proxies for protected class status, make credit available on unequal terms, or shut some groups out of credit altogether.
“Supervision, rulemaking, and enforcement related to AI and ML should focus on the principles of equity, transparency, and accountability. Lenders cannot be permitted to shift responsibility for unfair models onto third-party vendors, nor should any regulator accept a lender’s defense that it did not understand how its models worked.
“While the use of artificial intelligence is relatively uncommon in mortgage lending, it is increasingly common among online installment lenders, certain small business lenders, and other consumer credit providers. The algorithms used to underwrite these loans can be far more complicated, more opaque, and harder to monitor than traditional credit scoring models. Additionally, lenders may pull in hundreds or thousands of data points to populate their models.
“We emphasize that models should not use data that are proxies for race, ethnicity, or any other protected class. To that end, our comment places a priority on holding lenders accountable to develop models fairly, use data that have been tested, and understand how their models work so that they can explain their decisions to consumers.”
Additional information:
Six fintech companies joined with NCRC to affirm their commitment to preventing discrimination by monitoring for disparate impact. In a joint statement, the NCRC Innovation Council for Financial Inclusion (IC) called on regulators to establish that any credit-related policy or practice must meet a “substantial, legitimate, and non-discriminatory interest” that could not be achieved through an alternative with a less discriminatory effect, so that lenders cannot assert that profit alone justifies unfair lending. The IC also sought guidance from the Consumer Financial Protection Bureau (CFPB) on key questions related to implementing an effective disparate impact framework.
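For readers unfamiliar with the term, one common screen used in disparate impact monitoring is the adverse impact ratio associated with the “four-fifths rule,” which compares each group’s approval rate against the highest group’s rate. The sketch below is illustrative only: the group labels, approval data, and 0.8 threshold are hypothetical assumptions for demonstration, not a metric drawn from NCRC’s comment letter or any regulator’s guidance.

    # Illustrative sketch of one common disparate-impact screen: the
    # "four-fifths" (80%) adverse impact ratio. All group labels, data,
    # and the 0.8 threshold are hypothetical; no single metric has been
    # endorsed by regulators for credit underwriting.
    from collections import Counter

    def approval_rates(decisions):
        """decisions: list of (group, approved) pairs -> approval rate per group."""
        totals, approvals = Counter(), Counter()
        for group, approved in decisions:
            totals[group] += 1
            approvals[group] += int(approved)
        return {g: approvals[g] / totals[g] for g in totals}

    def adverse_impact_ratios(rates):
        """Ratio of each group's approval rate to the highest group's rate."""
        best = max(rates.values())
        return {g: r / best for g, r in rates.items()}

    # Hypothetical outcomes: (group, was the application approved?)
    decisions = ([("A", True)] * 80 + [("A", False)] * 20
                 + [("B", True)] * 55 + [("B", False)] * 45)

    rates = approval_rates(decisions)
    for group, ratio in adverse_impact_ratios(rates).items():
        flag = "possible disparate impact" if ratio < 0.8 else "within threshold"
        print(f"group {group}: approval {rates[group]:.0%}, ratio {ratio:.2f} -> {flag}")

Run on this hypothetical data, group B’s 55% approval rate yields a ratio of 0.69 against group A’s 80%, falling below the 0.8 threshold and flagging the model for further review.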
NCRC also joined a separate comment letter from a coalition of civil rights, consumer, technology, and other organizations, led by the National Fair Housing Alliance.