Aug 11, 2021
By Marie Oldfield (CStat, CSci, FIScT)
Who is responsible for AI? This question has come up time and again. Consider an operator pulling the trigger on an armed drone, or a driverless car: many people could be held responsible, from the person who writes the algorithm to the person pulling the trigger. One problem, however, is that, as we saw in the Boeing 737 MAX disasters, even when the operator, the pilot in this instance, pulls on the control column, the AI may not let them, because it is programmed to take a course of action contrary to the pilot's. This then becomes an interesting philosophical problem.
Recently a robot said it would rather kill a group of people in a car accident than injure the driver. Sometimes AI will not make the decision we think is right, but the decision that is optimal for its programming. The algorithms therefore have a lot to answer for. The problem is that we cannot put all of our human expertise and knowledge, let alone our ethical and moral codes, into a programme. After all, who would be happy if the car were instructed to kill its occupants to save five people outside, and who would be happy to write that code?
If we truly want to exist in a world of algorithms, the first things we need to confront are our own moral and ethical codes, and death itself. In an accident there might be no choice: someone will die. But who would you choose?