As they improve, we’ll likely trust AI models with more and more responsibility. But if their autonomous decisions end up causing harm, our current legal frameworks may not be up to scratch.
This topic came up when self-driving cars were first emerging. If a car runs over someone, who is to blame?
Most of the parties involved would likely be indemnified by all kinds of legal and contractual agreements, but the fact would remain that someone died.
Throughout the entire chain, based on value added. Not to the consumer.
So if a car manufacturer adds a shitty 3rd-party self-driving system to their car, and the license etc. is 100 euro per car, the car costs 10k, and it's sold by the dealer for 20k…
100/20k for the 3rd party
9.9k/20k for the manufacturer (their 10k minus the 100 license)
10k/20k for the dealer
Hmm, how would this work for private resale… Still the dealer imho.
Dealers don’t (and shouldn’t have to) validate safety features. If the NHTSA has approved them, that’s the dealer’s responsibility handled.
It’s all the manufacturer.
Fair enough.
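To make the pro-rata split from the earlier comment concrete, here’s a toy sketch in Python; apportion_liability is a hypothetical helper, and the figures are just the made-up ones from that example:

```python
# Toy sketch: apportion liability pro rata by value added.
# apportion_liability is a hypothetical helper; the figures are the
# made-up ones from the example above.

def apportion_liability(value_added: dict[str, float], damages: float) -> dict[str, float]:
    """Split damages across the chain in proportion to the value each party added."""
    total = sum(value_added.values())
    return {party: damages * added / total for party, added in value_added.items()}

# 20k sale price = 100 (3rd-party license) + 9.9k (manufacturer) + 10k (dealer markup)
chain = {"third_party_ai": 100, "manufacturer": 9_900, "dealer": 10_000}

print(apportion_liability(chain, damages=1_000_000))
# {'third_party_ai': 5000.0, 'manufacturer': 495000.0, 'dealer': 500000.0}
```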
An insurer is an interesting one, for sure. They’d have the stats on how often a given AI model makes mistakes and could price premiums accordingly. They’d also have the funds and the evidence to go after big corps if their AI was faulty.
They seem like a good starting point, until negligence elsewhere can be proven.
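That pricing idea is basically the standard frequency-times-severity calculation; a toy sketch with made-up figures (annual_premium is a hypothetical helper):

```python
# Toy sketch of how an insurer might price cover for a given AI model,
# using the frequency-times-severity idea. All figures are made up.

def annual_premium(incidents_per_vehicle_year: float,
                   avg_claim_cost: float,
                   loading: float = 0.3) -> float:
    """Expected annual loss per vehicle, plus a loading for costs and profit."""
    expected_loss = incidents_per_vehicle_year * avg_claim_cost
    return expected_loss * (1 + loading)

# e.g. a model that causes one claim per 2,000 vehicle-years,
# with an average claim of 50k:
print(annual_premium(1 / 2_000, 50_000))  # 32.5 per vehicle-year
```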