AI developers are failing on transparency, new index shows

Ryan Heath

Axios

A damning assessment of 10 key AI foundation models in a new transparency index is stoking new pressure on AI developers to share more information about their products — and on legislators and regulators to require such disclosures.

Why it matters: The Stanford, MIT and Princeton researchers who created the index say that unless AI companies become more forthcoming about the inner workings, training data and impacts of their most advanced tools, users will never fully understand the risks of AI, and experts will never be able to mitigate them.

The big picture: Self-regulation hasn’t moved the field toward transparency. In the year since ChatGPT kicked the AI market into overdrive, leading companies have become more secretive, citing competitive and safety concerns.

“Transparency should be a top priority for AI legislation,” according to a paper the researchers published alongside their new index.

Driving the news: A Capitol Hill AI forum led by Senate Majority Leader Chuck Schumer on Tuesday afternoon will put some of AI's biggest boosters and skeptics in the same room, as Congress works to develop AI legislation.

Details: The index scores models against 100 transparency indicators covering both the technical and social aspects of AI development; only two of the 10 models scored above 50% overall.
