
Dignity and Democracy in the Context of Artificial Intelligence

Email address for submissions: antonio.bikic@lrz.uni-muenchen.de
Deadline: 31.03.2019
Call type: Call for Papers
Location: Lucerne
Date: 07.07.2019
End: 13.07.2019
Host institution: University of Lucerne

What Ways to Program Autonomous Systems (such as Cars) Are Permissible in a Liberal-Democratic Rechtsstaat?

Special Workshop at the IVR World Congress on “Dignity, Democracy, Diversity”

 

Suppose you’re steering a car and facing the dilemmatic choice of either killing two pedestrians or one biker. Should you aggregate the victim counts and save the greater number, i.e. kill the biker? Or should you (time allowing) randomise your decision, so as to allocate every person involved an equal 50% survival chance? Or would fairness require a weighted lottery, i.e. allocating a 1/3 survival chance to the biker and a 2/3 probability to the pedestrians?
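
To make the three allocation schemes concrete, here is a minimal, purely illustrative sketch (Python); the function name choose_group_to_save and the representation of road users as plain group sizes are assumptions made for this example, not part of the call.

import random

def choose_group_to_save(groups, policy="aggregate"):
    """Pick which group of road users survives under three stylised policies.

    groups is a list of group sizes, e.g. [2, 1] for two pedestrians
    versus one biker. Names and data structures are illustrative only.
    """
    if policy == "aggregate":
        # Save the greater number: the larger group always survives.
        return max(range(len(groups)), key=lambda i: groups[i])
    if policy == "equal_lottery":
        # Give each option the same chance, so every person involved
        # has a 50% survival chance in the two-group case.
        return random.randrange(len(groups))
    if policy == "weighted_lottery":
        # Weight each group's survival chance by its size:
        # 2/3 for the two pedestrians, 1/3 for the lone biker.
        return random.choices(range(len(groups)), weights=groups, k=1)[0]
    raise ValueError(f"unknown policy: {policy}")

# Example: index 0 = two pedestrians, index 1 = one biker.
print(choose_group_to_save([2, 1], policy="weighted_lottery"))

With groups [2, 1], the aggregative policy always saves the pedestrians, the equal lottery saves each group with probability 1/2, and the weighted lottery saves the pedestrians with probability 2/3 and the biker with probability 1/3, matching the figures above.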

 

Slow human reaction times usually save us from having to make such life-and-death decisions on the road. However, the algorithms steering self-driving cars are likely to become able to process the relevant data sufficiently quickly. We therefore face a large class of ex ante life-and-death decisions when programming them.

 

This raises a number of moral-cum-legal issues: If we program a car to automatically save the two pedestrians at the expense of the one biker in the above case, are we violating the biker’s dignity or autonomy? Would it thus be incompatible with the principles of the liberal Rechtsstaat to allow democratic majorities to opt for an aggregative algorithm? Or are such conclusions unwarranted because we’re in a “veiled” ex ante situation when programming the car, such that aggregation is in each person’s best interest?
