By Kaveh Waddell | Axios
The advancement of AI-fueled technologies like robotics and self-driving cars is creating a confusing legal landscape that leaves manufacturers, programmers, and even robots themselves open to liability, according to legal scholars who study AI.
Why it matters: As autonomous vehicles take to the road and get into collisions, drivers, insurers, and manufacturers want to know who — or what — is liable when a harmful mistake occurs. The degree of liability comes down to whether the AI is treated as a product, a service, or a human decision-maker.
Some car makers, including Volvo, Google, and Mercedes, have already said they would accept full liability for their vehicles’ actions when they are in autonomous mode.
Even without such a pledge, manufacturers would likely end up paying if their autonomous cars caused harm. If the offending car were considered a defective product, its maker could be held strictly liable under product-defect standards, potentially leading to class-action lawsuits and expensive recalls — like those Takata faced for its dangerous airbags.
The bottom line: “Advanced AI and autonomous technology is progressing rapidly towards the consumer market. Legislatures should be prepared to address related liability issues to the extent established legal principles do not.”