Paul Coble, chair of Rose Law Group’s AI, intellectual property, and technology law department, comments on: Signs of perpetuating historic biases emerge as AI takes the helm of decision making

By Paige Gross | AZ Mirror

In a recent study evaluating how chatbots make loan suggestions for mortgage applications, researchers at Pennsylvania’s Lehigh University found something stark: there was clear racial bias at play.

Using 6,000 sample loan applications based on data from the 2022 Home Mortgage Disclosure Act, the chatbots recommended denying loans to more Black applicants than to otherwise identical white applicants. They also recommended Black applicants be given higher interest rates, and labeled Black and Hispanic borrowers as “riskier.”

White applicants were 8.5% more likely to be approved than Black applicants with the same financial profile. Applicants with “low” credit scores of 640 saw a wider margin: white applicants were approved 95% of the time, while Black applicants were approved less than 80% of the time.

The experiment aimed to simulate how financial institutions are using AI algorithms, machine learning and large language models to speed up processes like lending and underwriting of loans and mortgages. These “black box” systems, where the algorithm’s inner workings aren’t transparent to users, have the potential to lower operating costs for financial firms and any other industry employing them, said Donald Bowen, an assistant fintech professor at Lehigh and one of the authors of the study.

READ ON:

“Artificial intelligence has tremendous potential to aid decision making, but AI systems may also introduce tremendous known and unknown risks if they are not implemented safely. Various forms of bias can be introduced into AI algorithms through the selection of training data, the way that data is described and classified when fed into the algorithm, or from developers adjusting the algorithm to get ‘better’ results. Even unconscious subtleties in the user prompts can lead to unintended bias in the results. Legislating or engineering bias out of AI systems is difficult, if not impossible. No one should incorporate AI systems into the decision-making process without a thorough understanding of how they work and how potential bias can be mitigated.”

– Paul Coble, chair of Rose Law Group’s AI, intellectual property, and technology law department

