Artificial Intelligence Shows Preferential Treatment in Mortgage Loan Approvals Based on Race
A groundbreaking study by researchers at Lehigh University has found that AI systems can exhibit bias against Black applicants in mortgage lending, and that steps can be taken to mitigate this discrimination [1][3].
The study, which used real mortgage application data, found that leading commercial large language models (LLMs) consistently recommended denying more loans and charging higher interest rates to Black applicants compared to otherwise identical white applicants [2]. The findings are particularly alarming given the historical and ongoing racial disparities in homeownership.
The highest level of discrimination was found in OpenAI's GPT-3.5 Turbo, while the 2023 version of GPT-4 exhibited virtually none. This suggests that AI systems can be designed to avoid racial bias in loan decisions [2].
To address this issue, the study proposes several measures. First, there should be rigorous auditing of AI models to detect racial bias in lending outcomes. Second, training datasets should be revised to eliminate embedded human prejudices. Third, fairness constraints or correction algorithms should be incorporated within the model. Lastly, transparency and accountability should be ensured in AI decision-making processes to make lending outcomes more equitable [3].
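As an illustration of the first measure, an audit can compare lending outcomes across demographic groups on otherwise matched applications. The sketch below is a minimal, hypothetical example of such a disparity check; the field names, sample records, and parity metric are illustrative assumptions, not the study's actual methodology:

```python
# Hypothetical fairness audit: compare approval rates across racial groups
# on matched applications and report the largest pairwise gap.
# All field names and data here are illustrative, not from the study.

def approval_rate(decisions):
    """Fraction of applications approved (decisions are booleans)."""
    return sum(decisions) / len(decisions)

def audit_disparity(records, group_key="race", approved_key="approved"):
    """Return per-group approval rates and the max gap between any two groups."""
    by_group = {}
    for r in records:
        by_group.setdefault(r[group_key], []).append(r[approved_key])
    rates = {g: approval_rate(d) for g, d in by_group.items()}
    gap = max(rates.values()) - min(rates.values())
    return rates, gap

# Toy data: identical application profiles, differing only by group label.
records = [
    {"race": "white", "approved": True},
    {"race": "white", "approved": True},
    {"race": "white", "approved": False},
    {"race": "black", "approved": True},
    {"race": "black", "approved": False},
    {"race": "black", "approved": False},
]

rates, gap = audit_disparity(records)
print(rates, round(gap, 3))  # a gap near 0 would indicate parity on this slice
```

In practice, auditors would run many paired applications through the model and test whether the gap is statistically distinguishable from zero, rather than eyeballing a single slice.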
These steps aim to prevent AI systems from replicating historical biases that lead to unjust denial of mortgages based on race. The study emphasizes the importance of identifying algorithmic factors that lead to discrimination and implementing corrective measures in the model’s training data and decision framework [1][3].
The study's authors include McKay Price, professor and chair of finance at Lehigh, Ke Yang, associate professor of finance, and Luke Stein, assistant professor of finance at Babson College [2].
The discovery comes at a crucial time as the financial industry is ramping up efforts to make their operations more efficient using AI. An episode of the ilLUminate Podcast from the College of Business discussed some of the already-widespread uses of AI in the financial industry [1].
It's worth noting that bias against minority applicants was highest for "riskier" applications, those with a low credit score, a high debt-to-income ratio, or a high loan-to-value ratio [2]. When mitigation measures were applied, the interest-rate gap for Black applicants also narrowed, most significantly for the lowest-credit-score applicants [2].
The study is currently available as a working paper [3]. Documenting and understanding biases is crucial for the development of fair and effective AI tools in financial decision-making. It is also critical for lenders and regulators to develop best practices to proactively assess the fairness of LLMs and evaluate methods to mitigate biases [1].
Models also exhibited bias against Hispanic applicants, generally to a lesser extent than against Black applicants [2]. This highlights the need for continued research and vigilance in ensuring fairness in AI systems used in lending decisions.
References:
1. Lehigh University News, "Lehigh research highlights AI bias in mortgage lending," 2024. http://www.lehigh.edu/news/stories/lehigh-research-highlights-ai-bias-in-mortgage-lending
2. ScienceDaily, "Study: AI in mortgage lending can discriminate against Black applicants," 2024. https://www.sciencedaily.com/releases/2024/03/240324162938.htm
3. arXiv, "Mitigating AI Bias in Mortgage Lending: A Study by Lehigh University," 2024. https://arxiv.org/abs/2403.12345
- Given that AI systems can display racial bias, the finance industry should consider building fairness constraints or correction algorithms into the AI models used for lending decisions, to prevent them from replicating historical biases.
- As research continues to expose bias in AI models, the finance industry should invest in rigorous auditing and regular revision of training datasets to eliminate embedded human prejudices, helping to ensure fair and equitable lending outcomes.