Ryan Heath
Axios
A damning assessment of 10 key AI foundation models in a new transparency index is stoking new pressure on AI developers to share more information about their products — and on legislators and regulators to require such disclosures.
Why it matters: The Stanford, MIT and Princeton researchers who created the index say that unless AI companies are more forthcoming about the inner workings, training data and impacts of their most advanced tools, users will never fully understand the risks those systems pose, and experts will never be able to mitigate them.
The big picture: Self-regulation hasn’t moved the field toward transparency. In the year since ChatGPT kicked the AI market into overdrive, leading companies have become more secretive, citing competitive and safety concerns.
“Transparency should be a top priority for AI legislation,” according to a paper the researchers published alongside their new index.
Driving the news: A Capitol Hill AI forum led by Senate Majority Leader Chuck Schumer Tuesday afternoon will put some of AI’s biggest boosters and skeptics in the same room, as Congress works to develop AI legislation.
Details: The index scores models on 100 transparency indicators covering both the technical and social aspects of AI development. Only 2 of the 10 models evaluated scored above 50% overall.