Guidelines for self-driving cars: all lives matter
In terms of regulations for self-driving cars, Germany takes the lead. The German federal government will adopt new guidelines for self-driving cars in the country, which prioritize human life, valued equally for everyone, over damage to property or animals.
German Guidelines
These guidelines were presented on Aug. 23, 2017 by an ethics committee on automated driving. They stress that self-driving cars must do the least possible harm if put into a situation where hitting a human is unavoidable, and must not discriminate based on age, gender, race, disability, or any other observable factors. In other words, all self-driving cars must be programmed to treat every human life as equal. This position takes a stand on an ethical dilemma for which there is no unique solution: from an ethical point of view, it is not permissible to trade one human life against another. A thorough analysis of this problem can be found in a paper by Alexander Hevelke and Julian Nida-Rümelin.
(Alexander Hevelke and Julian Nida-Rümelin: Responsibility for Crashes of Autonomous Vehicles: An Ethical Analysis. Sci Eng Ethics. 2015; 21(3): 619–630.
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4430591/pdf/11948_2014_Article_9565.pdf)
The Moral Machine
This question of who a vehicle should kill when placed in a situation where every outcome ends in death is often called the “trolley problem.” It’s an ethical debate that has lasted more than 60 years: ethicists riddle each other with questions of whether it is excusable to kill two elderly people to save one child, or to save a pregnant woman while killing a man and a child, and so on. MIT even made an online game, the Moral Machine, to test your own ethical predispositions in such situations.
Germany’s rules undercut the myriad arguments possible when weighing the potential of ending one life instead of another based on circumstances of birth. A self-driving car in Germany would choose whichever action it determines will cause the least harm, regardless of age, race, or gender. How a car would estimate the damage it might cause, however, remains uncertain.
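Whatever that estimate looks like, the rule itself is simple: minimize predicted harm, with no demographic inputs at all. Purely as an illustration (the committee prescribes no implementation, and all names and the injury metric here are hypothetical), a minimal sketch in Python:

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    """One possible evasive maneuver and its predicted consequences."""
    maneuver: str
    predicted_injury_severity: float  # expected injury score; 0 means no harm
    # Deliberately absent: age, gender, race or any other personal attribute,
    # which the guidelines forbid as inputs to the decision.

def choose_maneuver(outcomes: list[Outcome]) -> Outcome:
    """Pick the maneuver with the least predicted harm to humans.

    Every human life is weighted equally; only the predicted severity of
    injury enters the comparison.
    """
    return min(outcomes, key=lambda o: o.predicted_injury_severity)

# Hypothetical usage:
options = [
    Outcome("brake hard and stay in lane", predicted_injury_severity=0.4),
    Outcome("swerve onto the shoulder", predicted_injury_severity=0.7),
]
print(choose_maneuver(options).maneuver)  # -> brake hard and stay in lane
```

What matters in the sketch is what is absent: the outcome description carries no age, gender, or other personal attributes, so they cannot influence the comparison.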
Regardless, the country expects a net benefit from the technology: the ethics committee stated that automated driving would decrease human-caused accidents country-wide, and that its introduction is therefore ethically necessary.
Implementing these rules before fully autonomous cars are on the road puts Germany ahead of the rest of the world, especially the US. While the US Congress hopes that bipartisan guidelines can be agreed in the near future (without a defined timeline), individual states like California and Nevada have begun drafting their own sets of rules, setting the stage for a confusing patchwork of regulation.
The Ethics Knob
The positions taken above contrast with an article that appeared recently in New Scientist: would you ride in a car that was prepared to kill you? An “ethical knob” could let the owners of self-driving cars choose their car’s ethical setting. You could set the car to sacrifice you for the survival of others, or to always sacrifice others to save you. The dilemma of how self-driving cars should tackle moral decisions is one of the major problems facing manufacturers. When humans drive cars, instinct governs our reaction to danger, and when fatal crashes occur, it is usually clear who is responsible.
But if cars are to drive themselves, they cannot rely on instinct; they must rely on code. And when the worst happens, will it be the software engineers, the manufacturers, or the car owner who is ultimately responsible?
Would a driver even be in a position to make such a choice in an emergency? That is highly debatable.
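What the “ethical knob” actually proposes is that the choice is made in advance, not in the emergency itself. A minimal sketch of how such a setting might be represented in software; the setting names and the weighting scheme are hypothetical, not taken from the New Scientist article:

```python
from enum import Enum

class EthicalSetting(Enum):
    """Hypothetical positions of an 'ethical knob'."""
    ALTRUISTIC = "sacrifice the passenger to save others"
    IMPARTIAL = "weigh the passenger and third parties equally"
    EGOISTIC = "protect the passenger at any cost"

def passenger_weight(setting: EthicalSetting) -> float:
    """Relative weight of the passenger's life when outcomes are compared."""
    if setting is EthicalSetting.ALTRUISTIC:
        return 0.0              # the passenger's survival never outweighs others
    if setting is EthicalSetting.EGOISTIC:
        return float("inf")     # the passenger is protected regardless of cost
    return 1.0                  # impartial: one life counts as one life

# The owner sets the knob once; the car would apply the weight in every dilemma.
print(passenger_weight(EthicalSetting.IMPARTIAL))   # -> 1.0
```

Whatever the weights, a knob like this is exactly the kind of trade-off between lives that the German guidelines described above rule out.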
Dear Gerhard,
thanks for the overview of the discussion on self-driving cars. In several respects this discussion is irrational, and therefore immoral (because morality is a branch of philosophy, which rests solely on reason).
That “trolley problem” is a nice intellectual game. If you look at reality, reading the police reports or your newspaper, you won’t find any cases like this: “Driver had the choice to run over a woman with a child, or two elderly people, or to drive into a tree and kill himself.” In reality, most accidents are caused by “missing something”: the truck driver taking a right turn and missing the cyclist who went straight on and therefore had priority, or the driver who changed lanes and missed another car already in that lane.
Algorithms are good at concentrating on what’s important – humans are not. Algorithms don’t simply “miss” things because they get distracted (by the looks of a nice girl on the other side of the road, or a pushy boss in the back seat). Algorithms don’t get tired after a few hours of operation.
Drivers who think they could do better than an algorithm in a critical situation should attend a safe-driving training. In the mother-with-child / elderly-people / myself-towards-the-tree situation, they will realize how little influence they have in that fraction of a second. What they teach in such a training is: brake like hell, because that’s the only way to minimize damage. And that can certainly (and much better) be achieved by an algorithm.
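To make that concrete, here is a toy sketch of the kind of rule an automatic emergency braking system applies; the 1.5-second threshold and the function name are illustrative assumptions, not values from any real system:

```python
def should_brake_full(distance_m: float, closing_speed_mps: float,
                      ttc_threshold_s: float = 1.5) -> bool:
    """Trigger maximum braking when the time to collision drops below a threshold.

    distance_m: distance to the obstacle in metres
    closing_speed_mps: relative speed towards the obstacle in metres per second
    ttc_threshold_s: hypothetical threshold; real systems tune this carefully
    """
    if closing_speed_mps <= 0:          # not closing in on anything
        return False
    time_to_collision = distance_m / closing_speed_mps
    return time_to_collision < ttc_threshold_s

# A car 20 m behind a stopped obstacle, closing at 15 m/s (54 km/h):
print(should_brake_full(20.0, 15.0))    # -> True, TTC is about 1.3 s
```

Real systems combine such a rule with sensor fusion and far more careful tuning; the point is only that reacting correctly within a fraction of a second is exactly what machines are good at and humans are not.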
The main point in favor of algorithms is: when an algorithm fails, the reasons can be determined, and all instances of the algorithm can be improved. Whereas when a human driver makes a mistake, he will probably learn from the experience, but all other drivers remain unaffected.
So the key question is: we have approximately 10 deaths in road traffic per day in Germany, that is roughly 3,600 per year. Assume that could be reduced by self-driving cars, just because they don’t “miss” as much as we do; even a modest reduction would save a few hundred lives a year. Would the mother-with-child / elderly-people problem still be significant?
Thank you for your remarks, Thomas. I agree that the “trolley problem” does not help us to make progress here. And, as it turns out, many people who appeal to ethics are also on the wrong track: from an ethical point of view, it is not permissible to trade one human life against another.
So let’s look at algorithms, because in the future accidents will be caused by robots and not by humans. Who is liable in that case?
The German Federal Statistical Office records approximately 2.4 million traffic accidents per year, causing economic damage of over €30 billion. Well over 80% of these incidents are caused by human error. Less than 1% result from technical failures, and most of those can in turn be traced back to poor maintenance.
As self-driving vehicles progressively eliminate the (direct) human factor as a causal element, technical issues will make up a proportionally much larger share of accident root causes. Moreover, given the vast technical complexity involved, such failures are also likely to increase in absolute terms.
De lege lata, the German product liability regime would shift much of the overall liability to OEMs and suppliers (or their product liability insurers). The German legislator is aware of this situation, and the topic is also being discussed in round tables initiated by the German Transportation Ministry (cf. above). However, in the absence of any near-term amendments to the legislative framework, OEMs and suppliers will remain subject to the regular German liability regime, and the further technical development of self-driving vehicles is likely to raise various issues in this regard, some of which are outlined in the following.
Interestingly enough, the Swedish government has now published a 2,000-page report on where its legal system needs changes to cope with autonomous driving. Its conclusion: in case of an accident, the owner of the car would be liable.