Ethics risk toolkit

Assuming a business has decided in principle to use or develop AI, there is a key question it will need to consider: will the AI result in a shift of ethical and legal responsibility within the business’s supply chain?

A clear example of AI’s potential to do this has already been mentioned: the driverless car. In the absence of mechanical fault with the car, the driver is typically liable for loss the car causes. The introduction of AI, however, has the potential to shift that liability up the supply chain to the manufacturer or to another AI Supply Chain Participant.

Changes like these are likely to have a profound impact on business models over time, requiring any business that might be affected to address the ethical-legal consequences. In what follows we introduce an Ethics Risk Toolkit for managing the distinctive ethical-legal problems in relation to the development or use of AI. As the potential for AI to reallocate ethical and legal responsibility will vary according to the type of AI at issue and the relevant sector, the Toolkit is a framework only, and will require detailed customisation to accommodate the circumstances of a particular business and its sector.

How does AI expand the scope of ethical decision-making?

AI acting autonomously expands the scope of ethical decision-making:

  1. new ethical judgments: decisions that would have turned on a human’s split-second reaction are made by the AI autonomously. Instead of an instant, unpremeditated judgment, there is a precise calculation based on principles articulated when the AI was designed and built. An instantaneous reaction becomes an ethical judgment.

    The AI programmer’s dilemma
    Justin Moore, AI, Autonomous Cars and Moral Dilemmas, Techcrunch.com, 19 October 2016

    “Take, for example, an autonomous car self-driving along the road when another car comes flying through an intersection. The imminent t-bone crash has a 90 percent chance of killing the self-driving car’s passenger, as well as the other driver. If it swerves to the left, it’ll hit a child crossing the street with a ball. If it swerves to the right, it’ll hit an old woman crossing the street in a wheelchair.”

    Here the programming of the AI within the car determines the outcome even before the driver has turned the key to start the engine. The outcome does not turn on a split-second reaction: it follows from the programming, determined at the time of coding. The programming itself therefore involves an ethical judgment.

  2. ethical judgments by manufacturers not users: decisions that would have been made by the user at the time the situation arises are instead made by the manufacturer (or other AI Supply Chain Participant) when the AI was designed and built. This is easily obscured by the fact that an AI system appears to make the judgment when the situation arises (as would an ordinary user).  However, an AI system does not have any independent agency.  It is simply carrying out the processes built into it by the manufacturer.

    Shift in ethical responsibility

    Because the introduction of AI has the potential to shift responsibility up the supply chain to the manufacturer or to another AI Supply Chain Participant, in the case of, say, a driverless car, it will be the manufacturer (or other responsible AI Supply Chain Participant), rather than the driver, who will be forced to make new ethical judgments.

  3. consistency of ethical judgments: decisions that would have been made at different times by different users are made all at once by a single manufacturer. Consistency (a key requirement of ethical judgments) becomes measurable for the first time.

    Same facts, same result

    In State v Loomis39 the Wisconsin Supreme Court recently approved a trial court’s use of an algorithmic risk assessment in relation to the assessment of risk of criminal recidivism for sentencing purposes.  Historically there has been a wide disparity in sentencing terms for the same offences, in part because of subjective and sometimes wildly varying human prejudices when applying sentencing guidelines. An argument in favour of using such AI is that, based on the same set of facts, it produces the same sentencing term, and so achieves consistency of treatment.

How can a business deal with this new risk: that is, multiple scenarios demanding ethical judgments that are difficult to foresee, must be agreed in advance, and must be consistent?  It is not possible to eliminate this risk by guaranteeing that all ethical judgments will be correct. In fact, as people will disagree about the correct decision, it is futile to seek judgments that command universal approval.

Instead, businesses should consider creating a defensible process for making ethical judgments. Elements of such an approach are set out in this Toolkit. In response to a query about an ethical judgment, the business could point, not to the correctness of the decision, but rather to the robustness of the process which led to that decision.

What constitutes the Ethics Risk Toolkit?

The Toolkit has three elements:

  • Unmasking: exposing the technical steps in the AI design and building process that are involved in making ethical judgments;
  • Process: imposing a robust, defensible process on the steps exposed in the Unmasking stage; and
  • Validation:40 adjusting the results based on real-world activity and reinforcement learning, leading to an iterative process.

Unmasking

Ethical judgments are often mixed in with other parts of the automation process. Weighting of different actions or outcomes may be delegated to programmers or accountants; and verification41 may be carried out by statisticians.  If the AI is an ANN, then these decisions may be implicit in the architecture and available only indirectly through parameters or hyper-parameters.

A robust process cannot be created until all these different elements are exposed and categorised, so that they can be adjusted, verified, and audited. This calls for an interdisciplinary approach.  Lawyers, coders, accountants, statisticians, and philosophers will all be needed to tease apart the existing design of the AI and to identify where ethical judgments are made.
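
By way of a hypothetical illustration only (the function and weights below are invented for this purpose, not drawn from any real system), an ethical judgment can sit unremarked inside routine engineering code. Unmasking means finding and documenting choices of this kind:

    # Hypothetical sketch: an ethical judgment hidden inside routine engineering code.
    # The relative weights are not a technical necessity; they encode a value judgment
    # about whose harm matters most, made implicitly by whoever chose the numbers.

    COLLISION_WEIGHTS = {
        "passenger_injury": 1.0,
        "pedestrian_injury": 1.0,   # weighting the parties equally is itself an ethical choice
        "property_damage": 0.05,
    }

    def manoeuvre_cost(predicted_outcomes):
        """Score a candidate manoeuvre; the planner prefers the lowest score."""
        return sum(COLLISION_WEIGHTS[harm] * probability
                   for harm, probability in predicted_outcomes.items())

Constants and structures like these, once exposed, can be handed to the Ethics Board and governed under the Process stage rather than left to individual programmers.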

“Bringing together a multidisciplinary and diverse group of individuals will ensure that all potential ethical issues are covered.”

IEEE, Ethically Aligned Design: A Vision for Prioritizing Human Wellbeing with Artificial Intelligence and Autonomous Systems, version one, 2016, page 44

Process

Once the locus of ethical decision-making in an AI system has been identified, it is necessary to impose a process on those decisions.

What does the Process stage involve?

A process imposed on ethical decision-making undertaken by AI involves:

  • Ethics Board: with supervisory power over all ethical decisions. It includes external members;
  • Ethics Policy: created by the Ethics Board, it sets out rules and principles for ethical decision-making in the creation of an AI system. Such a policy can govern “how the AI should be used, who is qualified to use it, what training is required for operators, and what operators and other people can expect from the AI”;42
  • Compliance by design: amending the interface between ethics and the rest of the AI system to ensure compliance with the Ethics Policy, such as by creating “privacy by default” or by introducing “kill switches”. Businesses should mandate consideration of ethics at the pre-product design stage.43 For example, it is a realistic goal to embed explicit norms into such systems, because norms can be considered instructions to act in defined ways in defined contexts;44
  • Industry standards, codes of conduct and certification: technical industry standards (such as ISO 10218-1:2006 promulgated by the International Organization for Standardization, which covers safety-associated design, protective measures, and industrial robot applications); codes of conduct (such as those put forward by the European Parliament for robotics engineers and research ethics committees, along with model licences for designers and users);45 and external certification of compliance with standards and codes. The Ethics Policy can incorporate such requirements by reference;
  • Ethics Compliance Log: an audit trail designed to deliver accountability (see What Steps are Required to Deliver Accountability?, above). The log records compliance with the Ethics Policy. Logs may cover computer code, algorithms, behaviour, or tests. In each case, they will show how that functionality was evaluated for ethical compliance, which may then be tested in the Validation stage (see Validation);
  • Ethics-bounded optimisation: assuming that ethical principles are embedded in any AI system, it is then necessary to ensure that the AI itself does not learn to work around them - particularly where the system is an ANN. That is, if ethical principles are treated simply as additional constraints on behaviour, the AI may learn to circumvent those constraints. Any learning process must give ethical principles additional power to control the optimisation carried out by the AI over and above other constraints or rewards (a sketch of this distinction follows this list);46
  • Ethics training programme: to disseminate the Ethics Policy and associated guidelines. “Ethical training for AI is a necessary part of the solution.”47
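
The following is a minimal sketch (the function names and fallback behaviour are assumptions, not a prescribed design) of the distinction drawn in the ethics-bounded optimisation point above: ethical rules applied as a hard filter on the actions available to the optimiser, rather than as one more penalty it can learn to trade away:

    # Hypothetical sketch: ethical rules bound the action space itself, outside the
    # learned reward, so the optimiser cannot trade compliance off against reward.

    def permitted(action, context, ethics_rules):
        """Return True only if the action violates none of the ethics rules."""
        return all(rule(action, context) for rule in ethics_rules)

    def choose_action(candidate_actions, context, ethics_rules, learned_value):
        # Weaker alternative (circumventable): maximise reward minus a violation penalty.
        allowed = [a for a in candidate_actions if permitted(a, context, ethics_rules)]
        if not allowed:
            return "safe_stop"   # fall back to a predefined safe behaviour
        return max(allowed, key=lambda a: learned_value(a, context))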

The supervision of the Ethics Board and the principles and rules set out in the Ethics Policy aim to address the consistency issue - ensuring that all similar ethical judgments are made in a similar way.  The Ethics Policy can expressly deal with issues like bias and discrimination, although these will still have to be verified by monitoring outcomes under the next stage (see Validation).

The Ethics Policy will be able to take account of industry-wide guidance (such as standards and codes of practice). Similarly, the Ethics Board can take advantage of work by other Ethics Boards.

The Process stage ends with an AI system that has been built to comply with ethical rules together with a set of associated documentation (from high-level Ethics Board principles down to coding functional specifications) that:

  • describes the end result; and
  • can be used in the Validation stage.

Validation

“The most effective way to minimise the risk of unintended outcomes is through extensive testing.”48 Testing ethics frameworks resulting from the Unmasking and Process stages, together with ongoing verification, also requires an interdisciplinary team. For instance, examination of underlying code will need programmers, analysis of test data will need statisticians, and comparison with laws and regulations will need lawyers. 

“Technologists should be able to characterise what their algorithms or systems are going to do via transparent and traceable standards. … Similar to the idea of the flight recorder in the field of aviation, algorithmic traceability can provide insights on what computations led to specific results ending up in questionable or dangerous behaviours.  Even where such processes remain somewhat opaque, technologists should seek indirect means of validating results and detecting harms.”

IEEE, Ethically Aligned Design: A Vision for Prioritizing Human Wellbeing with Artificial Intelligence and Autonomous Systems, version one, 2016, page 48

How is Validation implemented?

There are two broad approaches to Validation:

  • intrinsic validation: examining the rules governing ethical behaviour to confirm that they comply with the Ethics Policy. This is appropriate for an expert-system architecture and could be carried out by lawyers and associated professionals. Generally speaking, if there is transparency in relation to the algorithm, intrinsic validation may be possible; on the other hand, if there is no such transparency, then extrinsic validation may instead need to be used;
  • extrinsic validation: analysing the behaviour of the AI system in a real-world environment or simulation to measure its compliance with the outcomes expected from ethical judgments in accordance with the Ethics Policy. This effectively treats the system as a “black box” and tries to infer compliance from observation of its behaviour. It requires an interdisciplinary team with particular emphasis on statistical knowledge. This may be the only option for ANN systems (where interrogation of the underlying structure in a meaningful way is not possible).

Ideally, and incorporated as part of the design of the system, AI systems “should generate audit trails recording the facts and law supporting decisions”49 which can then be used as part of the Validation exercise.
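
As an illustration only (the record fields are assumptions rather than a prescribed format), such an audit trail might capture, for each decision, the facts relied upon, the rules or Ethics Policy provisions applied and the output, so that the Validation team can later reconstruct why the system acted as it did:

    # Hypothetical sketch of a decision audit record; the field names are illustrative only.
    import json, datetime

    def log_decision(inputs, rules_applied, decision, log_file="ethics_audit.log"):
        """Append one decision record so that Validation can reconstruct what was decided and why."""
        record = {
            "timestamp": datetime.datetime.utcnow().isoformat(),
            "inputs": inputs,                 # the facts the system relied on
            "rules_applied": rules_applied,   # e.g. Ethics Policy clauses or legal rules
            "decision": decision,
        }
        with open(log_file, "a") as f:
            f.write(json.dumps(record) + "\n")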

In practice, a combination of both intrinsic and extrinsic Validation will be used. For ANN-based systems, judicious tuning of parameters will allow some insights into the internal models used to control behaviour. Even for a transparent rules-based system, a swift double-check using so-called “Monte Carlo” methods (that is, repeated random sampling to obtain numerical results) will provide extra comfort.
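
A minimal sketch of that Monte Carlo style of extrinsic Validation, assuming access only to the system’s decisions (treated as a black box), a scenario generator and a function expressing what the Ethics Policy expects in each sampled scenario (all three are assumptions supplied by the Validation team):

    # Hypothetical sketch: black-box (extrinsic) validation by repeated random sampling.

    def extrinsic_validation(ai_decide, generate_scenario, complies_with_policy, trials=10_000):
        """Estimate how often the system's observed behaviour matches the Ethics Policy."""
        compliant = 0
        for _ in range(trials):
            scenario = generate_scenario()        # draw a random test scenario
            decision = ai_decide(scenario)        # observe the black box's behaviour
            if complies_with_policy(scenario, decision):
                compliant += 1
        return compliant / trials                 # estimated compliance rate

The resulting compliance rate, together with the scenarios that produced non-compliant behaviour, can be recorded in the Ethics Compliance Log and fed back into the Process stage.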

Interfacing and “situatedness” problems

Validation of an AI system operating within a wider IT eco-system which includes other AI systems can give rise to particular problems. An AI system might work well on its own, but may have unexpected interaction effects in relation to other AI systems. Peter McBurney, Professor of Computer Science at King’s College London, gives this example:

“Suppose that driverless cars communicate with CCTV cameras to detect the presence of pedestrians, and both sets of devices communicate with traffic lights to optimise traffic flow. Something goes wrong and a driverless car hits a pedestrian. Each intelligent device was working perfectly. Some system-level interaction property instead caused a problem.”

AI that is designed for “situatedness” may give rise to additional challenges in terms of Validation (and attribution of liability). “Situatedness” connotes the idea that successful operation of an AI system may require a sophisticated awareness of the environment within which the system operates, including predictions (and models) about how other entities in the environment are likely to act. Testing other than in a live environment would produce a less than complete picture of the performance of the AI system.

The above example illustrates that intrinsic Validation of an AI system will not typically be sufficient. Extrinsic Validation will also be required.

The automation bias problem

So-called “automation bias” will need to be measured and corrected as part of the Validation stage where humans are inserted into a decision-making process involving AI (see Inserting Humans into the Loop).

Automation bias is the name given to the problem that humans tend to: (1) give undue weight to the conclusions presented by automated decision-makers; and (2) ignore evidence that suggests a different decision should be made.

For example, an automated diagnostician may assist a GP in reaching a diagnosis, but it may also have the effect of reducing the GP’s own independent competence, as he or she may be unduly influenced by the automated diagnostician. The result is that the overall augmented system may not be as much of an improvement as had been expected.
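
As an illustration of how automation bias might be measured during the Validation stage (the metrics below are one possible approach, not a prescribed method), a business could compare, on cases with known outcomes, how often human reviewers override the AI’s recommendation when that recommendation is in fact wrong:

    # Hypothetical sketch: measuring automation bias from review data with known outcomes.

    def automation_bias_metrics(cases):
        """Each case is a dict with 'ai_suggestion', 'human_decision' and 'correct_answer'."""
        ai_wrong = [c for c in cases if c["ai_suggestion"] != c["correct_answer"]]
        overrides_when_ai_wrong = sum(
            1 for c in ai_wrong if c["human_decision"] != c["ai_suggestion"]
        )
        agreement = sum(1 for c in cases if c["human_decision"] == c["ai_suggestion"])
        return {
            # A very low override rate on cases where the AI was wrong suggests that
            # reviewers are deferring to the system rather than exercising judgment.
            "override_rate_when_ai_wrong": (overrides_when_ai_wrong / len(ai_wrong)) if ai_wrong else None,
            "overall_agreement_rate": agreement / len(cases),
        }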

One-off Validation exercises should be done periodically to audit the functioning of the Toolkit and to give a “snapshot” of progress. In essence, however, Validation is continuous. The results are fed back into the Process stage, so that the Process and Validation stages together form an iterative development cycle.

Once embedded in the development of an AI system, a multidisciplinary team applying the Toolkit will give the business confidence that ethical-legal risks are being addressed.


Footnotes

39. State v Loomis, 881 N.W.2d 749 (Wis. 2016).

40. Our use of the description “Validation” here is intended to include verification (checking that a system conforms to a design) as well as the traditional idea of validation (checking that a system acts as desired/intended).

41. Verification means checking that a system conforms to a design.

42. IEEE, Ethically Aligned Design: A Vision for Prioritizing Human Wellbeing with Artificial Intelligence and Autonomous Systems, version one, 2016, page 93.

43. IEEE, Ethically Aligned Design: A Vision for Prioritizing Human Wellbeing with Artificial Intelligence and Autonomous Systems, version one, 2016, page 38.

44. IEEE, Ethically Aligned Design: A Vision for Prioritizing Human Wellbeing with Artificial Intelligence and Autonomous Systems, version one, 2016, page 22.

45. European Parliament, Resolution with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)), 16 February 2017.

46. Designing autonomous systems that do not try to work around external obstacles such as ethical rules (‘corrigible systems’) is an active area of research in computer science: see, for instance, Orseau and Armstrong, Safely Interruptible Agents in Proceedings of the Thirty-Second Conference in Artificial Intelligence, 2016.

47. Executive Office of the President, National Science and Technology Council Committee on Technology, Preparing for the Future of Artificial Intelligence, October 2016, page 3.

48. Executive Office of the President, National Science and Technology Council Committee on Technology, Preparing for the Future of Artificial Intelligence, October 2016, page 32.

49. IEEE, Ethically Aligned Design: A Vision for Prioritizing Human Wellbeing with Artificial Intelligence and Autonomous Systems, version one, 2016, page 91.