Basics of autonomous driving: Part 6
Autonomous vehicles are expected to significantly reduce the number of accidents in the future. However, they also raise the risk of moral dilemmas, and priorities may differ depending on the region in which a vehicle operates.
When we take the wheel of a car, we often think only about where we are going, how long the journey will probably take, and which route has the least traffic. Yet something unforeseen may happen on our trip, and then we have to react accordingly - for example, if a car ignores our right of way or a child runs onto the road. Based on the situation and our experience, we can quickly decide how to minimize the danger as much as possible, either by stepping on the brakes or by taking evasive action to avoid a collision.
Thanks to advanced 3D imaging technology (LiDAR and ToF), as well as data from infrastructure and other road users in the vicinity (via V2V/V2I communication), autonomous vehicles can avert potential hazards more effectively than human drivers. The misjudgments we all make - no matter how much experience we have - and the possibility of distraction at the wrong time are eliminated.
Vehicle manufacturers, their tier-one suppliers, and chip manufacturers are investing heavily in the design and development of hardware that will make autonomous driving possible in the future. As systems progress and the degree of autonomy increases, we need to discuss new ethical issues that affect road users, legislators, and the industry as a whole.
For instance, an important issue is how the AI (artificial intelligence) technology that will be part of the next generation of vehicles deals with different accident scenarios - especially its decisions in situations where damage cannot be avoided.
Life or death
Numerous situations are conceivable. For example, imagine that the vehicle has to decide whether to collide with a fully occupied bus that has veered into its lane, or to swerve onto the sidewalk and run over a mother and child. How should it react? This scenario is extreme, but not impossible, so the vehicle's algorithms must be able to deal with it and find a solution immediately.
Germany seems to have been the most proactive in addressing these problems so far and has already developed an ethical code of conduct for autonomous vehicles. In 2017, the Ethics Commission of the Federal Ministry of Transport and Digital Infrastructure produced a comprehensive report on automated and connected driving. This report explains how autonomous vehicles should react in seemingly hopeless situations, such as the one just described. It states that the protection of human life has the highest priority when weighing up legally protected interests. For this reason, systems must be programmed to accept damage to animals or property in cases of doubt if this can prevent personal injury.
This hierarchical approach, in which humans are at the top and inanimate objects at the bottom, could be implemented universally in the future. This would place greater emphasis on the vulnerability of road users - for example, first pedestrians, then cyclists, then cars with occupants, and finally trucks.
In the German model, in the event of an unavoidable accident, no distinction based on age, gender, or physical or mental condition would be allowed, and it would also be forbidden to offset potential victims against one another. In a situation where a collision with people cannot be prevented, the system would have to take action to minimize the adverse effects (i.e., the number of fatalities).
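The decision rules described above - humans before animals, animals before property, and otherwise minimizing the number of people harmed without weighing their personal attributes - can be illustrated with a minimal sketch. All class names, the outcome format, and the candidate maneuvers are hypothetical assumptions for illustration, not part of any real vehicle software.

```python
# Illustrative sketch of a harm-minimization rule following the German
# Ethics Commission's hierarchy. Not real vehicle software.
from dataclasses import dataclass

# Damage categories ordered by legal priority: any harm to people outweighs
# any harm to animals, which outweighs any harm to property.
PRIORITY = {"person": 2, "animal": 1, "property": 0}

@dataclass
class Outcome:
    """Predicted result of one possible maneuver (hypothetical format)."""
    maneuver: str
    harmed: list  # categories of what would be harmed, e.g. ["person", "person"]

def severity(outcome):
    """Rank an outcome: first by the highest-priority category harmed,
    then by how many entities in that category are harmed. Deliberately
    ignores age, gender, or condition of the people involved."""
    worst = max((PRIORITY[h] for h in outcome.harmed), default=-1)
    count = sum(1 for h in outcome.harmed if PRIORITY[h] == worst)
    return (worst, count)

def choose_maneuver(outcomes):
    """Pick the maneuver whose predicted outcome is least severe."""
    return min(outcomes, key=severity)

options = [
    Outcome("brake", ["person", "person"]),
    Outcome("swerve_left", ["property"]),
    Outcome("swerve_right", ["animal", "property"]),
]
print(choose_maneuver(options).maneuver)  # -> swerve_left (property damage only)
```

The key design point is that the ranking key never inspects who the people are, only what category of damage occurs and how many entities are affected - exactly the two criteria the German report permits.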
This egalitarian approach may not be supported in other regions, because other parts of the world set different priorities. In a large-scale survey, researchers at MIT investigated which groups people in various markets across the globe believe vehicles should protect first. The "Moral Machine Experiment" collected feedback from over 2 million people in 200 different countries.
As expected, the results show a broad consensus: people should be saved rather than animals, many rather than few, and the young rather than the old. However, regional differences became apparent. For example, while there was a strong tendency almost everywhere in the world to protect young people rather than the elderly, countries in the Far East gave priority to the elderly. Based on this, the central AI algorithms in autonomous vehicles may need to be adapted to take international cultural and ethical differences into account.
Who is responsible?
Another area where clarity is needed is the question of who is responsible for an accident. This could be the car manufacturer, the software company that developed the AI algorithms, the telecommunications provider responsible for V2V/V2I communications, or any other party involved in the development of the vehicle or the operation of the supporting infrastructure. Despite this complexity, autonomous vehicles offer a crucial advantage in accident investigations: the amount of data they record about their environment and operating parameters before an incident occurs.
California authorities already require companies testing autonomous vehicles to transmit the data from the integrated sensors covering the 30 seconds before each accident to the Department of Motor Vehicles. This makes it easier to reconstruct incidents and determine who is responsible.
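Retaining a rolling 30-second window of sensor data, as this requirement implies, is naturally done with a fixed-size ring buffer: old samples fall out automatically, so memory use stays constant while driving. The sketch below assumes a sampling rate and record format for illustration; real recorders differ.

```python
# Illustrative ring-buffer recorder for the last 30 seconds of sensor data.
# Sampling rate and record format are assumptions, not a real specification.
from collections import deque

SAMPLE_RATE_HZ = 10           # assumed sensor sampling rate
WINDOW_SECONDS = 30           # retention window before an incident
BUFFER_SIZE = SAMPLE_RATE_HZ * WINDOW_SECONDS

class IncidentRecorder:
    """Keeps only the most recent BUFFER_SIZE samples; older ones are
    discarded automatically by the bounded deque."""
    def __init__(self):
        self.buffer = deque(maxlen=BUFFER_SIZE)

    def record(self, timestamp, sample):
        self.buffer.append((timestamp, sample))

    def dump(self):
        """On an incident, return the retained window for transmission."""
        return list(self.buffer)

recorder = IncidentRecorder()
for t in range(400):                       # simulate 40 s of driving at 10 Hz
    recorder.record(t / SAMPLE_RATE_HZ, {"speed_mps": 13.9})

window = recorder.dump()
print(len(window))                         # 300 samples = last 30 seconds
print(window[0][0])                        # oldest retained timestamp: 10.0
```

Because the buffer is bounded, the vehicle can record continuously for hours while only ever holding the window an investigator would need.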
The aim is for autonomous technology to eliminate accidents, or at least reduce them to almost zero, in the future. This is more likely to happen once only autonomous vehicles that communicate with each other are on our roads. But even then, programming errors, service interruptions, hacking attacks, and various other factors can still endanger the safety of road users.
This article was first published in German by next-mobility.news.