Discussion Assignment 4

Computing devices are already used to perform a variety of life-critical functions. Airliners use computers (“fly-by-wire”) to translate the pilots' control inputs (moving the control stick, pressing a rudder pedal, adjusting the engine throttle, and so on) into actions by the aircraft (moving the control surfaces, speeding up or slowing down the engines). Medical devices are also computer-controlled (X-ray treatment machines, for example, have been computer-controlled since the 1960s). Military robots are becoming more common and more autonomous. All of these rely on highly complex computer programs.
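To make the fly-by-wire translation concrete, here is a minimal Python sketch. It is purely illustrative: the names and the 25-degree limit are invented for the example, and a real flight-control computer would add filtering, gain scheduling, redundancy, and envelope protection.

    # Purely illustrative sketch of the input-to-actuator translation step;
    # not a real avionics design. Names and limits are invented.

    MAX_ELEVATOR_DEG = 25.0  # assumed physical travel limit of the elevator

    def stick_to_elevator(stick_deflection: float) -> float:
        """Map a normalized stick input in [-1.0, 1.0] to an elevator angle."""
        # Clamp the input so an out-of-range sensor reading can't command
        # an impossible surface position.
        stick_deflection = max(-1.0, min(1.0, stick_deflection))
        return stick_deflection * MAX_ELEVATOR_DEG

    for stick in (-1.2, -0.5, 0.0, 0.5, 1.2):
        print(f"stick {stick:+.1f} -> elevator {stick_to_elevator(stick):+.1f} deg")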

Everyone involved in software development knows that once a program reaches a certain level of complexity, it will contain bugs: programming mistakes that cause the program to behave in ways it wasn't intended to. Diligent programming practice can reduce the number and severity of the bugs in a program, but in practice you can't make a program totally bug-free. The same is true of security flaws (which are really a particular kind of bug).
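As a small illustration of how a bug can hide in plain sight, consider this hypothetical Python sketch (the scenario and the limit are invented, not taken from any real device). The check reads as obviously correct, yet it rejects a case that is mathematically exactly at the limit, because binary floating point cannot represent 0.1 exactly:

    SAFE_LIMIT = 0.3  # hypothetical cumulative dose limit, arbitrary units

    def within_limit(doses):
        total = 0.0
        for dose in doses:
            total += dose           # rounding error accumulates here
        return total <= SAFE_LIMIT  # BUG: exact comparison of inexact floats

    # On paper, 0.1 + 0.1 + 0.1 equals the 0.3 limit and should be allowed,
    # but in floating point the sum is 0.30000000000000004.
    print(within_limit([0.1, 0.1, 0.1]))  # prints False

A more careful version would compare with an explicit tolerance (for example, math.isclose) or do the arithmetic in decimal or fixed point; the point is that nothing about the original code looks wrong.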

Given this, using computers for life-critical tasks always involves some level of risk. (In fact, using computers for any task involves some level of risk, although with less critical tasks the risk might be quite minor.) What level of risk is acceptable for a life-critical task? How can society ensure that individual actors respect that limit?

Many of these critical tasks involve making ethical decisions. This means that the program will inevitably embody a particular ethical framework: a set of choices that guide it in making those decisions. Who is responsible for the results of the program's ethical decisions? The programmers? The device manufacturer? The purchaser? Are there different levels of responsibility involved?

What kind of accountability should those ethical decision-makers have, and to whom?

One approach to putting ethics into a computer program is the one used by the MIT Moral Machine: ask lots of people how they would behave in a particular situation, and use some kind of average of their answers as the ethically correct answer. Another approach is to build on understandings of fundamental human rights. What are the strengths and weaknesses of these two approaches?
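As a rough sketch of the crowd-survey idea (this is not the Moral Machine's actual methodology; the scenario and option names are invented), the Python snippet below takes the plurality answer across respondents as the "ethically correct" one:

    from collections import Counter

    # Hypothetical survey data: each respondent picks an option for a scenario.
    responses = {
        "swerve_scenario": ["spare_pedestrians", "spare_passengers",
                            "spare_pedestrians", "spare_pedestrians"],
    }

    def crowd_answer(scenario: str) -> str:
        """Return the plurality answer for a scenario."""
        votes = Counter(responses[scenario])
        answer, count = votes.most_common(1)[0]
        share = count / sum(votes.values())
        print(f"{scenario}: {answer} ({share:.0%} of respondents)")
        return answer

    crowd_answer("swerve_scenario")  # spare_pedestrians (75% of respondents)

Note what this aggregation quietly does: a 51/49 split becomes policy just as firmly as a 99/1 split, which is one place where the strengths and weaknesses of the approach start to show.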

In the cases where the ethics aren't implemented properly (because of the inevitable bugs), who is responsible for the consequences?

What can society do to enforce that responsibility?