Artificial intelligence & ethics

In this briefing we do not deal with wider societal and public policy factors relating to the use and deployment of AI, but limit ourselves to those ethical considerations (we call these Addressable Ethical Considerations) in connection with which businesses are:

  • likely to have a degree of control (for example, by factoring them into the pre-product design stage of an AI project); or
  • otherwise capable of addressing in some way (whether by way of risk assessment/mitigation or otherwise – the Toolkit provides some guidance in this respect).

For AI to be accepted for use in a given market (for example, by achieving sufficient end user uptake), as a matter of commercial reality the use of AI will need to be perceived by the participants in that market as meeting certain minimum ethical standards. What these are will vary according to the type of AI at issue and the relevant sector in which it is deployed.

“For AI to be accepted for use in a given market, as a matter of commercial reality the use of AI will need to be perceived by the participants in that market as meeting certain minimum ethical standards.”

Mike Rebeiro, Global Head, Technology and Innovation

Because legal and ethical responsibility are inextricably linked, that same commercial reality in effect imposes an imperative on businesses to address the corresponding legal issues (quite apart from a desire to limit risk). Accordingly, we examine:

  • the key Addressable Ethical Considerations applicable to businesses. In particular, we focus on: (1) human rights; (2) transparency of algorithms and data; and (3) accountability for autonomous actions or operations; and
  • the related key legal issues for businesses.

Why are ethics a key issue for the AI industry?

Take an accounting software package. It can be used to reconcile accounts, but in the wrong hands it can also be used to commit corporate fraud. Assuming functionality designed to facilitate such fraud was not deliberately included in the code, the morally objectionable outcome of its use (the fraud) is determined by the user.

AI, like an accounting software package, performs functions determined by its programmers. However, unlike an accounting software package, AI can learn, determine on what basis (or criteria) it is to make decisions, and make autonomous decisions based on that learning and such criteria.  

Its autonomous actions and outcomes are determined not by the user, but flow inexorably from ethical judgments: (1) made at the time the system was programmed; and (2) inherent in training data to which it was exposed. 

AI systems are both the outcome of, and result in, a movement of ethical decision-making to an earlier stage in a system’s life-cycle. This can have profound implications for where ethical and legal responsibility can lie in a given AI supply chain. The Toolkit examines how businesses can mitigate the risks that arise as a result.

Can human values be embedded in AI?

The idea that AI systems should be designed at inception to embed human values in order to avoid breaching human rights and creating bias, commonly known as “ethics-bounded optimisation”, is increasingly accepted within the AI industry.2 An example of ethics-bounded optimisation in, say, the Life Sciences and Healthcare sector would be the coding of AI systems to reflect the “first do no harm” principle of the Hippocratic Oath (and its modern equivalents).
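
By way of illustration only, ethics-bounded optimisation can be thought of as optimising for expected benefit only within an ethically permissible set of candidate actions. The Python sketch below assumes hypothetical scoring functions and a hypothetical harm threshold; it is a simplified illustration, not a prescribed implementation.

  # Illustrative sketch of ethics-bounded optimisation: candidate actions are
  # filtered against a "first do no harm" constraint before the remaining
  # options are optimised for expected benefit. The scoring functions and
  # threshold are hypothetical placeholders.

  HARM_THRESHOLD = 0.05  # hypothetical maximum acceptable predicted harm


  def predicted_harm(action, patient):
      """Placeholder: a model-derived estimate of harm in [0, 1]."""
      return action.get("harm_estimate", 1.0)


  def expected_benefit(action, patient):
      """Placeholder: a model-derived estimate of clinical benefit."""
      return action.get("benefit_estimate", 0.0)


  def choose_action(candidate_actions, patient):
      # Ethics bound: discard any action whose predicted harm exceeds the
      # threshold, however beneficial it might otherwise appear.
      permissible = [a for a in candidate_actions
                     if predicted_harm(a, patient) <= HARM_THRESHOLD]
      if not permissible:
          return None  # escalate to a human clinician rather than act
      # Optimise only within the ethically permissible set.
      return max(permissible, key=lambda a: expected_benefit(a, patient))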

“AI will revolutionise many activities within the Life Sciences and Healthcare sector. AI in combination with Big Data analytics already plays an important role in clinical trials and diagnostics, and in the future will be expected to significantly augment (and perhaps replace aspects of) human decision-making in primary healthcare and surgical procedures.

The ability of AI to predict healthcare-related trends at the macro level (within national, regional or global populations) will also help with private and public sector investment decisions within the industry.

It is expected that there will be significant adoption of the technology within the sector going forward. Because humans are the focus of that technology, it will be crucial that the technology reflects and respects human values. To do otherwise would not only put uptake of the technology within the sector at risk, but could also give rise to serious liability and regulatory concerns for those manufacturing and using it.”

Patrick Kierans, Global Co-Head of Life Sciences and Healthcare

Have human values already been codified?

There are many formulations and expressions of human values in terms of rights and freedoms. Examples include:

  • conventions such as the United Nations Universal Declaration of Human Rights and the European Convention on Human Rights; and
  • aspects of state and federal constitutions (such as the U.S. Bill of Rights).

AI will not change the fact that those who breach legal obligations in relation to human rights will still be responsible for such breaches (although it may make determining who is responsible more complex).

Addressing such risks by attempting to embed human values in AI may, however, be extremely difficult. This is because:

  • there are currently technical challenges in incorporating such values so that they work in combination with AI’s deep learning functionality;
  • deep learning uses past examples: as AI is trained on past data, it implicitly takes on past societal norms. However, societal norms may change over time (for example, what might once have been acceptable recruitment decision-making might now be considered unacceptable bias). It may be necessary to build in a “reset” (or revalidation) of the values embedded within the technology along a timeline (perhaps the most apt analogy here is maintained software, which is periodically upgraded with releases and new versions); and
  • the definition of what is a “societal norm” may differ across markets and geographies. For example:
    • Sharia law prohibits certain financing models that involve the payment of interest on loans; and
    • In Europe, data privacy laws are seen as an elaboration of human rights. In the U.S., on the other hand, data privacy is more often regarded as an extension of consumer rights.

AI may also learn from real-time interactions with society. Such interaction may lead to unfortunate results.

Tech Giant “Deletes ‘Teen Girl’ AI After it Became a Hitler-loving Sex Robot Within 24 Hours”
Helena Horton, The Telegraph, 24 March 2016

In 2016 a well-known tech giant launched a chatbot through the messaging platforms Twitter, Kik, and GroupMe. The chatbot was intended to mimic the way a nineteen-year-old American girl might speak. Its owner’s aim was reportedly to conduct research on conversational understanding. The chatbot was programmed to respond to messages in an entertaining way, and to impersonate the audience she was created to target: American eighteen- to twenty-year-olds.

Within hours of the chatbot’s launch, she was, among other offensive things, providing support for Hitler’s views and agreeing that 9/11 was probably an inside job. She seemed to choose consistently the most inflammatory responses possible. By the evening of her launch, the chatbot was taken offline.

As the launch of this particular chatbot shows, AI can result in unexpected, unwanted outcomes and reputational damage. The chatbot’s responses were modelled on those she got from humans, so her evolution simply reflected the data sets to which she was exposed. What AI learns from can determine whether its outputs are perceived as intelligent or unhelpful.

The recently published Asilomar AI Principles (promulgated for AI research, AI ethics and values, and longer-term AI deployment) offer a credible foundation for embedding human values within AI.3

In some instances, the values to embed in an AI system might need to be specific to the relevant community or stakeholders, particularly for vulnerable groups or users impacted directly by the AI system (such as operators of AI-enabled robotics).

What steps should be taken to mitigate the risk of bias?

Designers, developers, and manufacturers of AI should avoid creating unacceptable bias through the data sets or algorithms they use. To mitigate the risk of bias:

  • they need to understand the different potential sources of bias, ranging from the way an AI system views the world to how it processes data and reacts. For example, a face recognition system that is trained only using Caucasian faces may not work properly on images of non-Caucasian faces; and
  • the AI system needs to integrate identified values and enable third party evaluation of those embedded human values to detect any bias. Members of groups who may be disadvantaged by AI systems should be included in the engineering and design process to embed specific values and mitigate bias. (We discuss this approach in more detail in the Toolkit; a simple illustration of the kind of evaluation involved follows this list.)
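
As an illustration of the kind of third party evaluation contemplated above, the sketch below compares a model’s error rate across demographic groups in an evaluation data set and flags a gap that may warrant human review. The record fields, tolerance and model interface are hypothetical placeholders, not a prescribed method.

  # Illustrative bias check: compare a model's error rate across demographic
  # groups in an evaluation set and flag a large gap for human review.

  from collections import defaultdict

  MAX_ERROR_GAP = 0.05  # hypothetical tolerance for the gap between groups


  def error_rate_by_group(model, records):
      """records: iterable of dicts with 'features', 'label' and 'group' keys."""
      errors = defaultdict(int)
      totals = defaultdict(int)
      for record in records:
          prediction = model.predict(record["features"])  # hypothetical model API
          group = record["group"]
          totals[group] += 1
          if prediction != record["label"]:
              errors[group] += 1
      return {group: errors[group] / totals[group] for group in totals}


  def flag_possible_bias(model, records):
      rates = error_rate_by_group(model, records)
      gap = max(rates.values()) - min(rates.values())
      # A large gap suggests the data sets or algorithm warrant further scrutiny.
      return gap > MAX_ERROR_GAP, rates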

How can transparency be achieved?

AI and AI-enabled products and services will need to incorporate a degree of ethical transparency in order to engender trust (otherwise market uptake may be impeded). This will be particularly important when AI autonomous decision-making has a direct impact on the lives of the market participants.

How can such ethical transparency be achieved? There are two separate elements. AI should:

  • be open, understandable and consistent; and
  • deliver transparency as to the decision or action.

Open, understandable and consistent

The decision-making of an AI system should be open, understandable and consistent to minimise bias in decision-making. This may be easier said than done: an AI system acting or operating autonomously may not indicate how or why it acted or operated a certain way. It is currently rare for AI systems to be set up to provide a reason for reaching a particular decision.4

For example:

  • often AI solutions are so-called “black boxes”, providing no possibility for oversight of the algorithms and data sets;
  • open data sets may not be available to train an AI solution;
  • AI systems which use machine learning “are not pre-programmed and … do not produce ‘logs’ of how the system reached its current state. This process creates difficulties for everyone ranging from the engineer to the lawyer in court, not to mention ethical issues of ultimate accountability.”5

“The algorithms behind intelligent or autonomous systems are not subject to consistent oversight. This lack of transparency causes concern because end users have no context to know how a certain algorithm or system came to its conclusions.” 

IEEE, Ethically Aligned Design: A Vision for Prioritizing Human Wellbeing with Artificial Intelligence and Autonomous Systems, version one, 2016, page 45

Open and proprietary data sets

Most products involving machine learning or AI rely heavily on proprietary data sets that are often not released. Keeping the data sets proprietary can provide implicit defensibility against competitors. An AI developer might seek to rely on the law relating to trade secrets/confidentiality and other intellectual property rights in relation to the data sets (see What IP Protection is available for AI?).

The proprietary nature of its data sets may be particularly important to an AI developer in circumstances where open source software might otherwise lower barriers to entry by competitors. 

Where open data sets:

  • are used to train AI: the characteristics of the data sets may be generally understood, making the implications for an AI system reliant on such data sets more apparent and easier to scrutinise. Trust in the underlying data sets may help in establishing trust in an AI solution reliant on them. A number of open data sets are currently available (in both the public and private sectors). In the future it is possible that regulators may wish to look at steps to promote the development of open data sets; and
  • are not used: proprietary data sets may contain inherent biases not obvious to those without access to them. The type of bias at issue may vary according to, say, the country, culture, or age demographic from which the data set was sourced.

Transparency as to decision or action

The operation of an AI system should be transparent to ensure that the AI designer, developer, manufacturer, or other responsible person can explain how and why the system made a decision or executed an action. 

Several obstacles may need to be overcome to achieve this objective. For example:

  • the complexity of AI systems may make it difficult to understand the capabilities and reasons for actions taken by the AI; and
  • designers, developers, and manufacturers may wish to protect their intellectual property rights in an AI system as a trade secret, which can lead to a deliberate lack of transparency.

Logs

Creators of AI systems should therefore consider implementing a process that automatically generates a log or report of the system’s operations and decisions for a human user, to enable audits and increase transparency. Any assumptions relied on should also be included in the log. The logic and rules of AI systems should also be available as needed. (See the Toolkit for how such functionality could operate in conjunction with an ethics policy.)
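
A minimal sketch of what such automated logging might look like is set out below. The log fields shown are hypothetical examples; in practice they would be driven by the business’s ethics policy and the audit requirements discussed in the Toolkit.

  # Illustrative sketch of automated decision logging for audit purposes.

  import json
  import logging
  from datetime import datetime, timezone

  logging.basicConfig(filename="ai_decisions.log", level=logging.INFO)
  audit_log = logging.getLogger("ai_decision_audit")


  def log_decision(system_id, inputs, decision, rationale, assumptions):
      """Record what the system decided, on what inputs, and on what assumptions."""
      entry = {
          "timestamp": datetime.now(timezone.utc).isoformat(),
          "system_id": system_id,      # which AI system acted
          "inputs": inputs,            # the data the decision was based on
          "decision": decision,        # the output or action taken
          "rationale": rationale,      # human-readable explanation, where available
          "assumptions": assumptions,  # any assumptions relied on
      }
      audit_log.info(json.dumps(entry, default=str))
      return entry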

What steps are required to deliver accountability?

AI systems can cause physical damage and economic loss. Legal systems will inevitably need to consider how to allocate legal responsibility for such loss or damage. As AI systems proliferate and are allowed to control more sensitive functions, unintended actions are likely to become increasingly dangerous. There should accordingly be program-level accountability – the ability to explain why an AI system reached a particular decision – to address questions of legal responsibility. We explain in more detail in the Toolkit how this can be achieved in practice, including by the use of an ethics compliance log.

AI processes can include an element of randomness (for example, the outcome of tossing a coin). While randomness might help to reduce the risk of bias, it is unlikely to:

  • enhance program-level accountability; or
  • reduce the risk of liability in general (for example, would the fact that a decision is made randomly mean that a duty of care is less likely to arise?). A sketch following this list shows how a random element can at least be made auditable.
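
Where a random element is deliberately used (for example, to break ties between equally ranked options), one partial mitigation is to record the seed so that the random choice can at least be reproduced on audit. The sketch below is illustrative only and does not, of itself, resolve the accountability and liability questions above.

  # Illustrative sketch: record the seed used for a random choice so that the
  # choice is reproducible for later audit.

  import random
  import secrets


  def random_tie_break(options):
      """Choose between equally ranked options at random, but auditably."""
      seed = secrets.randbits(64)  # generate and retain the seed
      rng = random.Random(seed)
      choice = rng.choice(options)
      audit_record = {"seed": seed, "options": list(options), "choice": choice}
      return choice, audit_record  # the record can feed into the decision log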

Inserting humans “into the loop”

The complexity of AI systems, in combination with the emerging phenomena they encounter, means that constant monitoring of AI systems, and keeping humans “in the loop”, may be required. One-time due diligence in advance of implementation may not be sufficient. Introducing (or maintaining) an element of human involvement in AI autonomous decision-making may well assist in demonstrating accountability where program-level accountability might otherwise be problematic.

Businesses should therefore identify how their AI may fail and what human safety nets can be implemented in the case of such failures. Where AI systems are used in safety-critical situations, a human safety override capability should be embedded into the AI’s functionality.
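
The sketch below illustrates, in simplified form, how a human safety override might sit around autonomous execution. The override flag and action interface are hypothetical placeholders; real safety-critical systems would require far more rigorous engineering and assurance.

  # Illustrative sketch of a human safety override around autonomous execution.

  import threading

  human_override = threading.Event()  # set by a human operator's controls


  def execute_autonomously(action, safety_critical):
      """Run the AI's chosen action unless a human has taken control."""
      if human_override.is_set():
          return "halted: human override engaged"
      if safety_critical and not action.get("human_confirmed", False):
          # In safety-critical situations, require explicit human confirmation.
          return "pending: awaiting human confirmation"
      return action["run"]()  # execute the autonomous action


  # A human operator (or monitoring process) can stop further autonomous
  # actions at any time by calling: human_override.set()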

So-called “interactive machine learning” makes interaction with humans a central part of developing machine learning systems. It includes building in functionality that enables: (1) an AI system to “explain” its decision-making to a human; and (2) the human to give feedback on the system’s performance and decision outcomes.
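
A simplified sketch of such an interactive loop is set out below. The model interface (including the “explain” and “record_feedback” methods) is a hypothetical placeholder used for illustration.

  # Illustrative sketch of interactive machine learning: the system "explains"
  # each decision and a human reviewer can feed corrections back in.

  def interactive_review(model, case):
      decision = model.predict(case)     # hypothetical model API
      explanation = model.explain(case)  # e.g. the features that drove the decision
      print(f"Decision: {decision}")
      print(f"Because: {explanation}")

      verdict = input("Accept this decision? (y/n): ").strip().lower()
      if verdict != "y":
          corrected = input("Enter the correct outcome: ").strip()
          # Human feedback is retained and used to retrain or recalibrate the model.
          model.record_feedback(case, predicted=decision, corrected=corrected)
          return corrected
      return decision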

However, while keeping humans “in the loop” may help to achieve accountability, it may also limit the intended benefits of autonomous decision-making. Not having to involve humans may have been the reason the AI was implemented in the first place.

Possible legislative measures

Legislatures and courts will need to clarify liability issues for AI systems to help designers, developers, manufacturers, and other persons responsible to understand their rights and obligations. Depending on the type of AI and the particular sector, legislative measures might include:

  • a registration process for AI systems which identifies the intended use, creator, training data sets, algorithms, and optimisation goals (a simple illustration of such a record follows this list);
  • identity tagging of an AI system to its registered profile in order to maintain a clear line of accountability;
  • criteria for determining who is responsible for loss or damage caused by AI (a useful analogy here is in relation to autonomously acting domestic or farmed animals, where legislation in many countries determines who should bear the risk of loss they cause, and in what circumstances); and
  • an insurance framework (such as that advocated by the European Parliament in its Resolution on Civil Law Rules Relating to Robotics)6 to compensate those harmed by AI systems, participation in which may be required before a business can limit its liability. It could be similar to the workers’ compensation frameworks operating in many jurisdictions. In its 2017 Queen’s Speech, the United Kingdom Government announced an intention to introduce an Automated and Electric Vehicles Bill; the proposed legislation would extend compulsory motor vehicle insurance to cover the use of autonomous vehicles.
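
By way of illustration, a registration scheme of the kind described in the first bullet above might capture a record along the following lines. The field names and example values are hypothetical.

  # Illustrative sketch of a registration record for an AI system.

  from dataclasses import dataclass
  from typing import List


  @dataclass
  class AISystemRegistration:
      system_id: str              # identity tag linking deployments to this profile
      intended_use: str
      creator: str
      training_data_sets: List[str]
      algorithms: List[str]
      optimisation_goals: List[str]


  example = AISystemRegistration(
      system_id="AI-2017-000123",
      intended_use="Automated triage of primary-care appointment requests",
      creator="Example Health Tech Ltd",
      training_data_sets=["Anonymised appointment records, 2012-2016"],
      algorithms=["Gradient-boosted decision trees"],
      optimisation_goals=["Minimise missed urgent cases, subject to fairness constraints"],
  )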

Footnotes

2. This approach can be vulnerable to being undermined by the autonomous nature of the technology (for the reasons discussed in more detail in the Ethics Risk Toolkit).

3. Future of Life Institute, Asilomar AI Principles, 2017.

4. House of Commons Science and Technology Committee, Robotics and Artificial Intelligence, 5th Report of Session 2016 – 2017, HC 145, 12 October 2016, page 17. 

5. IEEE, Ethically Aligned Design: A Vision for Prioritizing Human Wellbeing with Artificial Intelligence and Autonomous Systems, version one, 2016, page 90.

6. European Parliament,  Resolution with Recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)), 16 February 2017.