By Paige Gross | AZ Mirror
In a recent study evaluating how chatbots make loan suggestions for mortgage applications, researchers at Pennsylvania’s Lehigh University found something stark: there was clear racial bias at play.
Using 6,000 sample loan applications drawn from 2022 Home Mortgage Disclosure Act data, the chatbots recommended denials for more Black applicants than for identical white counterparts. They also recommended that Black applicants be given higher interest rates, and labeled Black and Hispanic borrowers as “riskier.”
White applicants were 8.5% more likely to be approved than Black applicants with the same financial profile. Among applicants with “low” credit scores of 640, the gap widened: white applicants were approved 95% of the time, while Black applicants were approved less than 80% of the time.
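The study's core technique is a paired audit: submit applications that are identical except for the applicant's race, then compare the model's approval rates. A minimal sketch of that comparison, using made-up decision data (not the Lehigh study's actual figures or pipeline), might look like this:

```python
# Paired-audit sketch with HYPOTHETICAL data: each index i is the same
# financial profile, scored twice -- once labeled white, once labeled Black.
# 1 = model recommended approval, 0 = model recommended denial.

def approval_rate(decisions):
    """Fraction of applications the model approved."""
    return sum(decisions) / len(decisions)

white_decisions = [1, 1, 1, 1, 1, 1, 1, 1, 1, 0]  # illustrative only
black_decisions = [1, 1, 1, 1, 1, 0, 0, 0, 1, 1]  # illustrative only

# A nonzero gap on otherwise-identical profiles is the disparity
# researchers flag as evidence of bias.
gap = approval_rate(white_decisions) - approval_rate(black_decisions)
print(f"approval gap: {gap:.0%}")
```

Because the profiles in each pair are identical, any gap in approval rates is attributable to the race label alone, which is what makes the finding in the study stark.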
The experiment aimed to simulate how financial institutions are using AI algorithms, machine learning and large language models to speed up processes like loan underwriting and mortgage lending. These “black box” systems, in which the algorithm’s inner workings aren’t transparent to users, have the potential to lower operating costs for financial firms and any other industry employing them, said Donald Bowen, an assistant professor of fintech at Lehigh and one of the authors of the study.
“Artificial intelligence has tremendous potential to aid decision making, but AI systems may also introduce tremendous known and unknown risks if they are not implemented safely. Various forms of bias can be introduced into AI algorithms through the selection of training data, the way that data is described and classified when fed into the algorithm, or from developers adjusting the algorithm to get ‘better’ results. Even unconscious subtleties in the user prompts can lead to unintended bias in the results. Legislating or engineering bias out of AI systems is difficult if not impossible. No one should incorporate AI systems into the decision-making process without a thorough understanding of how they work and how potential bias can be mitigated.”
– Paul Coble, chair of Rose Law Group’s AI, intellectual property, and technology law department