Executive summary

What is artificial intelligence?

AI is a field of computer science that includes machine learning, natural language processing, speech processing, robotics, and machine vision. Although many people assume that AI means “Artificial General Intelligence” (that is, a machine intelligence able to perform any intellectual task as well as, or better than, a human), current AI systems remain some way off this level of sophistication. They draw on a range of methodologies and are deployed in an array of applications (see AI Encompasses A Wide Spectrum Of Technologies).

AI and ethics

For AI systems to be accepted for use in a given market, as a matter of commercial reality their use will need to be perceived by participants in that market as meeting certain minimum ethical standards. Because legal and ethical responsibility are inextricably linked, that same commercial reality in effect obliges businesses to address the corresponding legal issues (quite apart from any desire to limit risk).

Why are ethics a key issue for the AI industry?: AI systems are typically both the product of, and the cause of, a shift of ethical decision-making to an earlier stage in a system’s life-cycle. This can have profound implications for where ethical and legal responsibility lies in a given AI supply chain. The Toolkit examines how businesses can mitigate the risks that can arise as a result.

Can human values be embedded in AI?: The idea that AI systems should be designed from inception to embed human values, so as to avoid breaching human rights and creating bias (commonly known as “ethics-bounded optimisation”), is increasingly accepted within the AI industry. However, AI will not change the fact that those who breach legal obligations in relation to human rights will remain responsible for those breaches (although it may make determining who is responsible more complex).

Addressing such risks by attempting to embed human values in AI systems may be extremely difficult for a range of reasons (see Can human values be embedded in AI?), not least because what constitutes a societal norm may differ over time, between markets, and between geographies.

What steps should be taken to minimise the risk of bias?: Designers, developers, and manufacturers of AI will wish to avoid creating unacceptable bias from the data sets or algorithms used. To mitigate the risk of bias, they will need to understand the different potential sources of bias, and the particular AI system will need to integrate identified values and enable third party evaluation of those embedded human values to detect any bias. We discuss this approach in the Toolkit.
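To illustrate what such third party evaluation might involve in practice, the sketch below is a hypothetical Python example (not part of the Toolkit) that checks whether an AI system’s approval rates differ materially across groups defined by a protected characteristic. The sample data, group labels, and the four-fifths benchmark are assumptions used purely for illustration, not a legal standard.

```python
# Illustrative only: one simple way a third-party evaluator might test an AI
# system's outputs for disparate outcomes across a protected characteristic.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, approved) pairs; returns approval rate per group."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += int(approved)
    return {g: approvals[g] / totals[g] for g in totals}

def disparate_impact_flag(decisions, threshold=0.8):
    """Flag any group whose approval rate falls below `threshold` times the
    highest group's rate (the "four-fifths" benchmark, used here only as an
    illustrative yardstick)."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: rate / best < threshold for g, rate in rates.items()}

# Hypothetical audit sample: (protected-characteristic group, AI decision)
sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
print(selection_rates(sample))        # approval rate per group
print(disparate_impact_flag(sample))  # groups flagged for potential bias
```

An evaluation of this kind is only one input: the appropriate fairness measure, the relevant groups, and the acceptable threshold are themselves value judgments of the sort the Toolkit addresses.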

How can transparency be achieved?: AI and AI-enabled products and services will need to incorporate a degree of ethical transparency in order to engender trust (otherwise market uptake may be impeded). This will be particularly important where autonomous AI decision-making has a direct impact on the lives of market participants. How can such ethical transparency be achieved? There are two separate elements. AI should:

  • be open, understandable and consistent to minimise bias in decision-making; and 

What steps are required to achieve accountability?: Legal systems will need to consider how to allocate legal responsibility for loss or damage caused by AI systems. As such systems proliferate and are allowed to control more sensitive functions, unintended actions are likely to become increasingly dangerous. To address questions of legal responsibility, there should accordingly be program-level accountability: the ability to explain why an AI system reached a particular decision. We explain in more detail in the Toolkit how this can be achieved in practice.
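As a purely illustrative sketch of what program-level accountability could look like in software (the field names and logging approach are assumptions, not a prescribed standard), the example below records each automated decision together with the inputs relied on, the model version, and the factors behind the outcome, creating an audit trail from which responsibility can later be traced.

```python
# Illustrative only: a minimal decision audit log supporting "program-level
# accountability" - recording what the system decided, on what inputs, and why.
import json
import datetime

def record_decision(log_file, model_version, inputs, outcome, top_factors):
    """Append an auditable record of a single automated decision."""
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,            # data the system relied on
        "outcome": outcome,          # decision actually taken
        "top_factors": top_factors,  # human-readable reasons for the outcome
    }
    with open(log_file, "a") as f:
        f.write(json.dumps(entry) + "\n")

# Hypothetical usage for a credit decision
record_decision(
    "decisions.log", "credit-model-1.4",
    inputs={"income": 42000, "existing_debt": 12000},
    outcome="declined",
    top_factors=["debt-to-income ratio above model threshold"],
)
```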

Inserting humans in the loop: The complexity of AI systems, in combination with the emerging phenomena they encounter, means that constant monitoring of AI systems, and keeping humans “in the loop”, may be required. However, while keeping humans “in the loop” may help to achieve accountability, it may also limit the intended benefits of autonomous decision-making. A balance will need to be struck.
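The sketch below illustrates one simple way that balance might be struck in code: the system acts autonomously where its confidence is high and escalates borderline cases to a human reviewer. The confidence threshold and the escalation mechanism are hypothetical assumptions for illustration only.

```python
# Illustrative only: routing low-confidence automated decisions to a human
# reviewer while allowing autonomous action on clear cases.
from typing import Callable

def decide_or_escalate(score: float,
                       act: Callable[[], None],
                       escalate: Callable[[float], None],
                       threshold: float = 0.9) -> str:
    """Act automatically when the model is confident; otherwise escalate."""
    if score >= threshold:
        act()
        return "automated"
    escalate(score)
    return "human_review"

# Hypothetical usage
result = decide_or_escalate(
    score=0.72,
    act=lambda: print("decision executed automatically"),
    escalate=lambda s: print(f"queued for human review (confidence={s})"),
)
print(result)
```

Raising the threshold sends more decisions to humans (greater oversight, less autonomy); lowering it does the reverse, which is precisely the trade-off described above.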

Legislative initiatives are being considered in a number of jurisdictions to address questions of accountability (see Possible Legislative Measures). These include a registration process for AI, identity tagging, criteria for allocating responsibility, and an insurance framework.

What are the key legal risks?

AI’s autonomous nature, more than any other characteristic, needs to be factored into any legal risk assessment of the technology. Such risk assessment should cover a range of considerations, including:

  • can responsibility for loss or damage be attributed to someone? Participants in a particular AI supply chain may need to address more complex liability allocations than would normally be the case. This is because AI has the potential to shift liability within a supply chain; 
  • what types of liability might be at issue? The risk of criminal liability in certain circumstances may need to be considered, along with the potential civil liabilities, including in contract (which may require new schemes for contractual liability and indemnification), tort (including negligence, breach of statutory duty, and other strict liability), and liability under AI or industry-specific regulations;
  • what is the potential impact on people? Depending on what the particular AI system does, human rights considerations may need to be assessed, along with data privacy requirements in relation to personal data - in particular: (1) data profiling; (2) prohibitions applicable in some jurisdictions on reaching a decision based solely on automated processing; (3) requirements to give information as to how such decisions (when permitted) were made; and (4) cyber security. In addition, AI solutions may give rise to employment law issues (where mitigating steps may be required), and to consumer law considerations;
  • what is the impact on the supply of goods and services, obtaining insurance, and potential antitrust implications? The laws of some jurisdictions prohibit discrimination by providers of goods, services, and facilities on the basis of protected characteristics, which will be relevant when AI systems are involved in supply decisions. Existing insurance arrangements will need to be assessed, along with potential antitrust considerations; and
  • what impact will AI have on a business’s intellectual property rights strategy? Businesses developing or using AI will need to consider developing and rolling out a “layered” approach to IP rights in connection with their AI in order to protect different aspects of the innovation and to reduce the risk of infringement claims against them by third parties.

For more detail in relation to the legal issues, see What are the Key Legal Risks?. Compliance by design and default may increasingly become part of the design remit for AI systems at the pre-product design stage, in order to address the legal and compliance issues adequately.

Ethics Risk Toolkit

How can a business deal with the distinctive ethical-legal risks that arise in relation to AI - that is, multiple scenarios demanding ethical judgments that are difficult to foresee, must be agreed in advance, and must be consistent? It is not possible to eliminate these risks by guaranteeing that all ethical judgments will be correct. Instead, businesses should consider creating a defensible process for making ethical judgments. The elements of such an approach are set out in the Toolkit.

Which industry sectors might be affected?

Finally, we gather together a range of sector-specific AI use cases, some already deployed and others likely to be adopted in the future. They demonstrate that AI will impact most, if not all, industry sectors in significant (and at times highly disruptive) ways.