Sep 29, 2021
By Marie Oldfield (CStat, CSci, FIScT)
- Defining AI: “AI” is currently used to describe a variety of technologies, from automation to machine learning to, potentially, human-level comprehension (so-called “general AI”). Whether we mean automated decision making or algorithm-driven ethical and moral decision making matters. Before we can adequately discuss the implications, we need to know exactly what we are talking about.
- Data challenges: There are fundamental ethical questions regarding the morally permissible limits of data. How do we collect the data, and what data is permissible to use? Is it ethical to collect voice data to enhance voice recognition? What if harmful data is collected? What if these machines listen to us all the time in case of policing concerns? How far down this road are we willing to travel?
- Contractual liability: Who is responsible in the event that the AI “goes wrong” and causes harm? How do we allocate risk? How do we create contracts? Is the AI a ‘person’ within the contract, and does it have deciding power?
- IP challenges: As AI starts to apply for its own patents, how can IP be protected? Is AI indeed an inventor? Does the system own the IP in perpetuity, or does a person need to own it? Can the AI decide where the IP is used?
- Insurance: Assuming AI is given decision-making power, many will want insurance against wrong decisions, given the potential harm that AI can cause. There is little data on how AI makes decisions and when it is wrong. These automation systems are so immature that the decisions made are almost exploratory at present. So, in the event of damage or harm, what, or who, gets sued? How do we define the risk?