By Kaveh Waddell | Axios
Artificial intelligence experts — concerned about reported blunders with high-stakes AI systems from makers like Amazon and IBM — are urging more oversight, testing, and perhaps a fundamental rethinking of the underlying technology.
Why it matters: Wall Street, the military, and other sectors expect AI to make increasingly weighty decisions, with less and less human involvement. But if these systems are inaccurate or biased, their mistakes outside the lab could harm real people.
Amazon’s face-recognition platform, Rekognition, incorrectly matched 28 members of Congress to mugshots in an ACLU test whose results were announced Thursday; the misidentified lawmakers were disproportionately people of color. Responding on its blog, Amazon said the ACLU didn’t run Rekognition with the recommended settings, and that the system is meant to help humans make big decisions, not render final determinations on its own.
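The settings at issue center on the similarity threshold the service applies before reporting a match: Rekognition defaults to a relatively permissive bar, and Amazon has said law-enforcement users should set it much higher. The sketch below, which uses the publicly documented compare_faces call in AWS's boto3 SDK, is only an illustration of how that knob works, not the ACLU's actual test; the file names, region, and threshold values are placeholders.

```python
import boto3

# Hypothetical illustration: compare one photo against another at two
# different similarity thresholds. Assumes AWS credentials are configured
# and that the example image files exist locally.
client = boto3.client("rekognition", region_name="us-east-1")

def candidate_matches(source_path, target_path, threshold):
    """Return similarity scores for face matches at or above the threshold."""
    with open(source_path, "rb") as src, open(target_path, "rb") as tgt:
        response = client.compare_faces(
            SourceImage={"Bytes": src.read()},
            TargetImage={"Bytes": tgt.read()},
            SimilarityThreshold=threshold,  # higher values return fewer, stricter matches
        )
    return [match["Similarity"] for match in response["FaceMatches"]]

# A low threshold surfaces more borderline candidates, which is where false
# positives tend to come from; a high threshold filters most of them out.
print(candidate_matches("member_photo.jpg", "mugshot.jpg", threshold=80.0))
print(candidate_matches("member_photo.jpg", "mugshot.jpg", threshold=99.0))
```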