United Kingdom

The Automated and Electric Vehicles Act (the “Act”) became law in July 2018. It extended compulsory motor insurance to autonomous vehicles.

The UK Government hopes the new legislation will encourage manufacturers to develop transport technology in the UK. When introducing the Act to the House of Commons for its Second Reading, the Minister for Transport, Mr. John Hayes, and his colleagues repeatedly referenced the UK Government’s desire to be a “global leader in the production and use of autonomous vehicles.”

The Act mandates the creation of a list of all motor vehicles that might be used on roads or other public places in Great Britain and that are designed or capable of safely driving themselves. This approach provides absolute clarity for insurers. It also illustrates the UK Government’s commitment to progressing autonomous vehicle technology: monitoring and updating this list will require significant resources and close relationships with the manufacturers to stay up-to-date with new developments.

Under the Act, where an accident has been caused by an autonomous vehicle, the insurer will be liable for “death or personal injury” or any other damage apart from damage to the autonomous vehicle itself. Importantly, this covers the insured owner of the autonomous vehicle if they have suffered any harm as a result of the accident, not just other drivers of vehicles involved in a collision and third parties. The insurer may then claim against the person responsible for the incident, such as the manufacturer or another driver. Under this provision, anyone liable to the injured party is under the same liability to the insurer or vehicle owner, and the Act sets out how the amount of that liability is to be calculated. This includes preserving contributory negligence principles in the apportionment of liability.

The Act also addresses the unique aspects of autonomous vehicles – the computer software and the issue of tampering. Insurer liability under the Act is excluded if the software in an autonomous vehicle is not updated or if it has been adapted to a standard outside of the policy limits. This provision ensures that insurers are not responsible for autonomous vehicles with unauthorized modifications. It raises the question of how manufacturers will disseminate software updates to their customers. Expecting customers to carry out updates themselves could create issues if the update is not received or if the customer does not install it properly, potentially resulting in the breach of their insurance policy. It may lead to manufacturers making the updates automatic – perhaps when a vehicle is not in use and is connected to WiFi – thereby removing the vehicle owner from the process. It will be crucial to manage the resulting cybersecurity risks.
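
By way of illustration only, the gating logic for such an automatic update might resemble the following sketch (in Python; the vehicle-state fields, the shared-key signature check and the update flow are illustrative assumptions rather than a description of any manufacturer’s actual system, which would typically use certificate-based code signing):

    import hashlib
    import hmac

    def may_install_update(vehicle_state: dict, package: bytes,
                           signature: bytes, key: bytes) -> bool:
        """Decide whether an over-the-air update may be installed now.

        Mirrors the scenario above: install only when the vehicle is not
        in use and is connected to WiFi, and only if the package is
        authentic, so an unauthorised modification never reaches the car.
        """
        # Never update a vehicle that is in use.
        if not vehicle_state.get("parked") or vehicle_state.get("ignition_on"):
            return False
        # Prefer the WiFi scenario described in the text.
        if vehicle_state.get("network") != "wifi":
            return False
        # Reject packages failing an authenticity check (a shared-key MAC
        # here; real systems would verify an asymmetric code signature).
        expected = hmac.new(key, hashlib.sha256(package).digest(),
                            hashlib.sha256).digest()
        return hmac.compare_digest(expected, signature)

    # Example: a parked, WiFi-connected vehicle and a validly signed package.
    key = b"demo-key"
    pkg = b"firmware-v2"
    sig = hmac.new(key, hashlib.sha256(pkg).digest(), hashlib.sha256).digest()
    print(may_install_update({"parked": True, "ignition_on": False,
                              "network": "wifi"}, pkg, sig, key))  # True

Logic of this kind would also generate the audit trail needed to show, for insurance purposes, that a vehicle’s software was up to date.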

The UK also already has a testing code of practice (the “Code of Practice”) which provides guidance to anyone wishing to conduct testing of autonomous vehicle technologies on public roads or in other public places in the UK. It provides details of recommendations for maintaining safety and minimizing potential risks. The Code of Practice applies to the testing of a wide range of vehicles, from smaller automated pods and shuttles, through to cars, vans and heavy duty vehicles.

What restrictions are there in the UK as to who or what is allowed to drive or operate a vehicle?

The Road Traffic Act 1988 contains the requirements for drivers to operate a vehicle on UK roads. Part III requires all drivers to hold a valid driving licence and to have taken and passed a test of competence to drive, and sets out the minimum ages for driving various classes of vehicle and the physical fitness requirements for drivers.

Note that the UK is not party to the Vienna Convention on Road Traffic and so is not hampered by its provisions regarding the necessity for a human driver. However, the Code of Practice sets out requirements for drivers during testing, including that a suitably licensed and trained test driver or test operator should supervise the vehicle at all times and be ready and able to override automated operation if necessary.

What rules are there relating to safety of autonomous vehicles?

Safety requirements for testing

The Code of Practice sets out requirements for testing, including:

Responsibility for ensuring that testing of these technologies on public roads or in other public places is conducted safely always rests with those organizing the testing. Compliance with these guidelines alone should not be considered to be sufficient to ensure that all reasonable steps to minimize risk have been taken.

Vehicles under test on public roads must obey all relevant road traffic laws. It is the responsibility of testing organisations to satisfy themselves that all tests planned to be undertaken comply with all relevant existing laws and that the vehicles involved are roadworthy, meet all relevant vehicle requirements, and can be used in a way that is compatible with existing UK road traffic law.

The relevant road traffic laws include regulation 100 (or regulation 115 in Northern Ireland) of the Construction and Use Regulations. Broadly, these provide that it is an offence to use a motor vehicle or trailer in a way that presents a danger to other road users.

Testing organisations should:

  • Ensure that test drivers and operators hold the appropriate driving licence and have received appropriate training.
  • Conduct risk analysis of any proposed tests and have appropriate risk management strategies.
  • Be conscious of the effect of the use of such test vehicles on other road users and plan trials to manage the risk of adverse impacts.

Reporting requirements relating to safety

The Code of Practice requires autonomous vehicles being tested to be fitted with “a data recording device which is capable of capturing data from the sensor and control systems associated with the automated features as well as other information concerning the vehicle’s movement.” The Code of Practice sets out specific requirements for this device and notes that data protection legislation will apply to data collected using it.
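
Purely as an illustration (the field names below are assumptions, not the Code of Practice’s specification), a single entry captured by such a recorder might look as follows:

    import json
    import time
    from dataclasses import asdict, dataclass, field

    @dataclass
    class TestLogRecord:
        """One timestamped entry from a test vehicle's data recorder,
        covering the two categories the Code of Practice mentions:
        sensor/control data for the automated features and the
        vehicle's movement."""
        timestamp: float
        automation_engaged: bool        # was the automated system in control?
        sensor_inputs: dict = field(default_factory=dict)
        control_outputs: dict = field(default_factory=dict)
        speed_kmh: float = 0.0
        heading_deg: float = 0.0

    record = TestLogRecord(
        timestamp=time.time(),
        automation_engaged=True,
        sensor_inputs={"lidar_ok": True, "camera_ok": True},
        control_outputs={"steering_deg": -1.5, "brake_pct": 0.0},
        speed_kmh=47.0,
        heading_deg=182.0,
    )
    print(json.dumps(asdict(record), indent=2))   # persisted log entry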

Are there any requirements for autonomous vehicle manufacturers to provide consumer education?

Although there are no legal obligations to fund or provide education, various consultations have stressed the importance of manufacturers doing this on a voluntary basis.

The Government’s 2015 paper “The Pathway to Driverless Cars: detailed review of regulations for autonomous vehicle technologies” states:

“[It is] believed that it would be beneficial to develop educational materials due to the strong public interest in the subject, helping increase understanding and acceptance of autonomous vehicles. It was suggested the information should:

  • Target all road users nationwide.
  • Not unduly influence the reactions of other road users.
  • Not raise public expectation that autonomous vehicles are close to market ready.”

The Code of Practice also states that:

“Testing organisations should consider the benefits of developing a public relations and media communications strategy to:

  • Educate the public regarding the potential benefits of autonomous vehicles.
  • Explain the general nature of the tests to be undertaken.
  • Explain the implications for other road users, if any, and what steps are being taken to mitigate any risks.
  • Provide reassurance and address any concerns that the public may have. Particular consideration should be given to the concerns of more vulnerable road users including disabled people, those with visual or hearing impairments, pedestrians, cyclists, motorcyclists, children and horse riders.”

Do the UK regulators or legislators use the SAE nomenclature for autonomous vehicles?

Government and regulators are generally conversant with the SAE nomenclature, although neither the Code of Practice nor the Act refers to it directly.

What laws, regulation or guidance does the UK have relating to cybersecurity of autonomous vehicles?

The UK is a contracting party to UN Regulation 116 on the unauthorized use of motor vehicles. The Code of Practice states that:

Manufacturers providing vehicles, and other organisations supplying parts for testing, will need to ensure that all prototype automated controllers and other vehicle systems have appropriate levels of security built into them to manage any risk of unauthorized access. Testing organisations should consider adopting the security principles set out in BSI PAS 754 Software Trustworthiness – Governance and management – Specification or an equivalent.

See also the section on security of personal data below.

What laws, regulation or guidance does the UK have relating to data protection and privacy for autonomous vehicles?

Introduction

Since May 25, 2018, at an EU level, the collection and use of personal data by manufacturers and other actors in the service chain of autonomous and connected vehicles has been subject to the General Data Protection Regulation (EU) 2016/679 (GDPR). As a European Regulation, the GDPR has direct effect across EU Member States and supersedes the previous data protection regime governed by the Data Protection Directive 95/46/EC (as amended) (DP Directive), along with Member State implementing legislation.

The GDPR represents the most ambitious and comprehensive changes to data protection rules around the world in the last 20 years. It builds upon and strengthens the principles of the DP Directive, whilst introducing new obligations on organisations, enhanced rights for individuals and tougher sanctions for non-compliance, including fines of up to the higher of EUR 20 million and 4% of total worldwide annual turnover for certain infringements.

Not only does the GDPR apply to entities “established” within the European Union (EU), but its territorial scope also captures the processing activities of non-EU organisations that are offering goods or services to individuals in the EU, or that are monitoring individuals within the EU (such activities include the tracking and profiling of individuals).

EU-based manufacturers of vehicles should already be complying with the DP Directive in relation to any personal data that they currently process, and they should now have reviewed and updated their personal data strategy to ensure GDPR compliance from May 2018. Non-EU based manufacturers that interact with individuals within the EU will need to determine the extent of the application of the GDPR to their overall data processing activities, and take practical steps towards compliance with the GDPR.

In general, manufacturers of vehicles should use personal data fairly and lawfully for limited and specified purposes in a way that is relevant and not excessive. Personal data should be kept accurate, safe and secure, for only as long as is absolutely necessary and not exported outside the European Economic Area without legal protection.

The gathering and use of personal data in relation to driver-controlled vehicles has often been limited and relatively uncomplicated. The development of autonomous and connected vehicles changes this. Such vehicles collect large amounts of personal data through various technological means, including smart infotainment systems, data recorders, location tracking and vehicle-to-vehicle communication. Given the nature of autonomous and connected vehicles, this personal data will be passed on to a number of other parties. This increase in the collection and use of personal data means manufacturers will need to (i) take their obligations under the DP Directive and the GDPR more seriously, especially given the possibility of significant fines for non-compliance under the GDPR; and (ii) engage with new data protection challenges presented by autonomous connected vehicles. We consider both of these points in further detail below.

Obligations and challenges

  • Privacy by design

Data protection and privacy considerations will need to be at the forefront of manufacturers’ and other service providers’ minds at each developmental stage. Such a “privacy first” approach is referred to as “privacy by design” and is all the more important now that it is an explicit principle under the GDPR. It assists with avoiding reputational damage, costly recalls and regulatory fines.

A critical part of “privacy by design” is the “privacy impact assessment”, which is mandatory in certain circumstances under the GDPR. This is a process that is used to identify the flows of personal information and track how it is obtained, used, retained and transferred by the autonomous connected vehicle. Based on this, potential data protection risks to the vehicle owner, the individual drivers, their passengers and other road users can be identified and assessed, allowing for appropriate solutions to be built into the actual data collection, storage and sharing architecture and for user interfaces to alert users to the use of this data. This allows unnecessary data collection to be eliminated and privacy impacts to be assessed from as many angles as possible, including user consultations, so costly reworks or breaches can be avoided.
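
For example, the data-flow mapping at the heart of such an assessment might be recorded along the following lines (a minimal sketch; the categories and fields are illustrative assumptions, not a prescribed format):

    from dataclasses import dataclass

    @dataclass
    class DataFlow:
        """One entry in a privacy impact assessment's data-flow map."""
        data_category: str     # what personal data is collected
        source: str            # how it is obtained
        purpose: str           # why it is used
        retention_days: int    # how long it is retained
        recipients: list       # who it is transferred to
        risk_notes: str        # identified risk and proposed mitigation

    flows = [
        DataFlow("GPS location trace", "navigation unit", "route planning",
                 retention_days=30, recipients=["mapping provider"],
                 risk_notes="reveals home/work; truncate precision at rest"),
        DataFlow("cabin occupancy", "interior sensors", "safety alerts",
                 retention_days=1, recipients=[],
                 risk_notes="low risk; delete when the journey ends"),
    ]
    for flow in flows:
        print(flow.data_category, "->", flow.recipients or "no third parties")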

  • Transparency

Transparency is a key element of the DP Directive, and is at the heart of the GDPR, as it allows users to control how personal data is used. Manufacturers and other service providers will need to ensure that drivers are informed of and understand what personal data is being collected, how it is being used (and what legal basis a manufacturer is relying on for each processing activity) and who it is being disclosed to. The GDPR is a lot more prescriptive about the type of information and level of detail that needs to be provided to drivers. For example, the GDPR will require manufacturers to include information about what rights drivers have under the GDPR, whether their data is exported outside the European Economic Area and how long their data is retained. Manufacturers will therefore need to understand fully the flows of personal data within their organisation. This is all the more important as the GDPR will require manufacturers to map their data processing activities and maintain this in a formal register.

This information is usually presented to individuals through a Privacy Policy. The GDPR requires that this Privacy Policy is “provided” to data subjects, which in essence requires manufacturers to take active steps to furnish the information to the driver. Manufacturers will therefore need actively to communicate and explain to users what is being done with their personal data. This will need to be presented clearly and accurately. An effective method of communication will need to be deployed, especially given that it has been reported that only 16% of internet users read Privacy Policies and of that, only 20% actually understand them (according to The Internet Society’s Global Internet User Survey 2012). Manufacturers will need to consider alternative methods to inform users sufficiently of this information, rather than using lengthy Privacy Policies. Some features in automated connected vehicles could assist with this. For example, the Privacy Policy could be presented on the infotainment screen with an interactive and layered approach, and “just in time” notices could be communicated to the user during the journey prior to the point at which certain personal data is collected.
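
A “just in time” notice of this kind might be driven by logic as simple as the following sketch (the display interface stands in for a hypothetical infotainment screen API and is an assumption, not a real product feature):

    # Each notice is shown once, immediately before the first collection
    # of the relevant data category, as described in the text.
    shown_categories = set()

    def notify_just_in_time(display, data_category: str, summary: str) -> None:
        """Surface a short, layered notice before collecting a category."""
        if data_category in shown_categories:
            return
        display(f"About your data: {summary} (tap for the full privacy policy)")
        shown_categories.add(data_category)

    notify_just_in_time(print, "location",
                        "your position is used for routing and kept for 30 days")
    notify_just_in_time(print, "location", "...")   # suppressed: already shown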

  • Apportioning liability

Automated connected vehicles will also be likely to bring about further issues concerning contractual arrangements and apportioning of data protection responsibilities. Manufacturers will be partnering with developers (both hardware and software network providers), suppliers and business partners. For each arrangement, the data protection implications will need to be considered in detail. Robust data processor obligations will need to be placed on data processors, given the increased risk that comes with the high volume of personal data collected. These will need to include the mandatory data processor terms that the GDPR prescribes are incorporated in agreements between data controllers and data processors.

Joint or co-data controller arrangements will likely become more common, for example, during vehicle-to-vehicle communications. The manufacturer of the automated connected vehicle that is providing location data to another automated connected vehicle will be the primary data controller of that location data. The manufacturer of the automated connected vehicle receiving that personal data could, however, also be a co-controller of the personal data received. This is because the recipient would use that personal data for its own purposes, such as judging its own location in relation to the other automated connected vehicle.

Where such arrangements exist, data protection roles, responsibilities and liabilities will need to be clearly allocated to avoid joint and several liability for the other data controller’s breaches. This is all the more important given the possibility of high fines under the GDPR.

  • Export of personal data

Novel implications around the export of personal data should also be considered. Vehicles often cross international borders. An autonomous and connected vehicle originating in the European Economic Area (EEA) will be generating personal data relating to EEA individuals. Should this vehicle enter non-EEA jurisdictions and share this personal data by way of communicating with other autonomous and connected vehicles or local third parties, this will be an international transfer of personal data. Under both the DP Directive and the GDPR, manufacturers will need to ensure that adequate export mechanisms are put in place to legitimize the transfer of such personal data.

  • Location data

In order to operate, autonomous connected vehicles need to collect location data. Amongst other functions, location data is used to identify the autonomous connected vehicle’s location in relation to other vehicles and for route planning (including saving a location, setting route preferences and identifying local points of interest). It is likely that the user will be able to be identified from such location data, either by itself or in conjunction with other personal data that the manufacturer holds. As such, location data is subject to the DP Directive and will be subject to the GDPR and therefore other implications discussed in this chapter.

In addition, the Directive 2002/58/EC on Privacy and Electronic Communications (as amended) (E-privacy Directive) (as implemented within Member States) imposes additional requirements for the use and collection of certain types of location data. If the location data falls within the remit of the E-privacy Directive, specific consent to collect and use the location data will be required from the individual. The individual will also need to be informed about the type of location data processed (including the level of granularity, frequency that their location will be captured and how long that information will be kept for), the use and purpose of collecting the location data and which third parties it is passed to.

Currently however, the E-privacy Directive’s definition of location data is limited, and does not include GPS-based location data, which is what autonomous and connected vehicles are likely to use. Despite this, various regulators are increasingly viewing all types of location data as a sensitive subset of non-sensitive personal data. This is because location data can be particularly intrusive and revealing and can therefore allow for very specific targeting.

As a result, regulators generally expect organisations to treat all types of location data with the safeguards and stringency described in the E-privacy Directive. Reflecting this, and the sensitive nature of location data generally, a number of organisations are beginning to seek consent from users even for location data that does not fall within the E-privacy Directive. While this is only best practice and not currently a legal requirement in Europe (manufacturers should be able to rely on the fact that the collection and use of location data is necessary for them to perform their contractual obligations to the user), any secondary use of location data is likely to oblige manufacturers to seek consent from users.

Manufacturers should also be aware that the E-privacy Directive will eventually be superseded by the E-privacy Regulation (currently in draft form). The draft is currently being negotiated, but the finalized Regulation will likely increase the stringency of the rules around collecting and processing location data.

  • Consents

Consents from users will be required as a legal basis for a processing activity where the manufacturers are using and collecting certain types of personal data, or using personal data for certain activities which cannot be justified on a non-consent basis. Amongst other things, consent may be required to process “sensitive personal data” as defined under the DP Directive (renamed “special categories of personal data” under the GDPR). This covers personal data relating to race/ethnicity, criminal convictions, health, religious beliefs, political opinions, sex life and trade union membership, and, under the GDPR, also covers genetic and biometric data. Consent is also required under the E-privacy Directive to send users unsolicited marketing materials by certain electronic communications such as email and SMS.

Manufacturers will need to consider this as part of their “privacy by design” approach and “privacy impact assessments.”

As mentioned above, location data can reveal intimate information about users. The history of trips made can provide private sensitive data about individuals, e.g. trips to certain places of worship or medical facilities. In order for the manufacturer to provide a complete service, the collection of such data may be unavoidable.

The GDPR sets a higher standard for consent than the DP Directive. In order for consent to be valid under the GDPR, it must be freely given, specific, informed and unambiguous, indicated by a clear affirmative action, and capable of being withdrawn at any time. Given this high threshold, manufacturers should assess whether their processing activities can be justified using one of the non-consent legal bases available under the GDPR. If not, manufacturers will need to ensure that their consents comply with the requirements of the GDPR in order to be valid and reliable.
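
To make these criteria concrete, a consent record might minimally capture the following (an illustrative sketch only; the field names are assumptions):

    import time
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class ConsentRecord:
        """Evidence that consent meets the GDPR threshold described above."""
        user_id: str
        purpose: str                # specific: one purpose per record
        information_shown: str      # informed: the notice actually presented
        affirmative_action: str     # unambiguous: how consent was signalled
        given_at: float
        withdrawn_at: Optional[float] = None   # withdrawable at any time

        def is_active(self) -> bool:
            return self.withdrawn_at is None

    consent = ConsentRecord("user-1", "wellness-based offers",
                            "cabin sensors are read to suggest rest stops",
                            "tapped 'Allow' on the infotainment screen",
                            given_at=time.time())
    print(consent.is_active())     # True until the user withdraws
    consent.withdrawn_at = time.time()
    print(consent.is_active())     # False: processing on this basis must stop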

In relation to marketing opportunities, the types of personal data collected by autonomous and connected vehicles are particularly valuable. For example, certain sensors may be able to tell whether a child is on board. Other sensors could potentially collect data about a user’s stress level and general wellness. Businesses might seek to utilize this type of data, for example, to suggest parents pull off the road for local child-friendly offers or stop over at the local spa to de-stress. Furthermore, location data could be used to target the type of marketing provided to users: for example, local businesses transmitting advertisements to the autonomous connected vehicle when it is within a five mile radius. It is no surprise that McKinsey & Company estimates that vehicle-generated data may become a USD 450-750 billion market by 2030 (“Monetizing car data,” McKinsey & Company, September 2016).
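
The five mile radius example could be implemented with a simple great-circle distance check, as in the following sketch (the coordinates and radius are illustrative):

    import math

    def within_miles(lat1: float, lon1: float, lat2: float, lon2: float,
                     radius_miles: float = 5.0) -> bool:
        """True if two points lie within radius_miles of each other
        (haversine great-circle distance)."""
        earth_radius_miles = 3958.8
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp = math.radians(lat2 - lat1)
        dl = math.radians(lon2 - lon1)
        a = (math.sin(dp / 2) ** 2
             + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
        return 2 * earth_radius_miles * math.asin(math.sqrt(a)) <= radius_miles

    # A vehicle in central London and a business a short distance away.
    print(within_miles(51.5074, -0.1278, 51.5155, -0.0922))   # True: send offer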

Therefore, where consent is being relied on, it is in the manufacturers’ interest to have as many users as possible consenting to the above. Manufacturers will need to create, trial and test their consent wordings and mechanisms to ensure that they are presented in a way that is not only transparent and comprehensible to the driver, but that will maximize the number of users that provide their consent (whilst being compliant with the requirements of the GDPR).

  • Necessary disclosure of personal information

Whilst carrying commercial benefits, personal data collected by autonomous and connected vehicles can also be valuable to legal/regulatory enforcement agencies. Regulation 2015/758 of the European Parliament (the “eCall” Regulation) required compliance by April 2018 and requires new cars to be fitted with the “eCall” system. This system dials the European emergency number 112 and communicates the vehicle’s location to the emergency services as soon as in-vehicle sensors and/or processors (e.g. an airbag) detect a crash. This is an example of obligatory data sharing.
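
In much-simplified form (the real eCall system transmits a standardised minimum set of data; the fields and the place_call interface below are illustrative assumptions), the triggering behaviour described above amounts to:

    # When a crash sensor (e.g. airbag deployment) fires, the vehicle
    # contacts 112 and communicates its location.
    def on_crash_detected(airbag_deployed: bool, lat: float, lon: float,
                          place_call) -> None:
        if not airbag_deployed:
            return
        place_call(number="112",
                   payload={"lat": lat, "lon": lon, "trigger": "airbag"})

    on_crash_detected(True, 51.5074, -0.1278,
                      lambda number, payload: print("dialling", number, payload))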

Manufacturers or other parties may be compelled by legal/regulatory enforcement agencies to disclose personal data that they are holding about users. For example, such agencies may demand the location history or travel patterns of a user over a certain period to establish their whereabouts. Such agencies may also demand access to a user’s personal data in order to track them if they were suspicious that the user may be involved in criminal activities. Manufacturers will need to communicate such possibilities to users as part of their transparency obligations (described above) and ensure disclosures comply with data protection laws.

  • Security of personal data

Given the volume of personal data being collected, data security will be critical and manufacturers will need to ensure that the technological components are built with regard to appropriate security levels. Given that automated connected vehicles are made up of a number of technological components and deploy a number of communication methods (WiFi, Bluetooth, radio, GPS, etc.), the potential for security breaches or hacking is high.

From a data protection perspective, unauthorized access to and use of users’ personal data can cause real harm and distress to the individuals. A hacker could, for example, use details of a user’s journey history to determine when and what times they are away from home to plan a theft. Identity theft, credit card fraud, exposure of vulnerable or protected people are just some of the other potential scenarios of such access to personal data.

The DP Directive and GDPR state that manufacturers must ensure that they employ appropriate technical and organisational measures against unauthorized or unlawful processing of personal data. This element will be an important factor in the “privacy by design” process. Manufacturers should note that such security measures are not limited to the automated connected vehicles themselves. For example, personal data of drivers will likely be held on the manufacturer’s systems. Therefore, manufacturers will need to ensure that data security is implemented at a much broader organisational level. Physical and computer security, managerial measures and staff training are all key elements to minimize the threats and the subsequent fines, enforcements and reputational damage that could be suffered by the manufacturer. This is all the more important given the possibility of high fines and additional sanctions under the GDPR.

Conclusion

The autonomous connected vehicle is an exciting reality. The collection of personal data is interwoven with each of its moving parts and is fundamental to its functions. Whilst access to this personal data presents significant new opportunities for manufacturers and other actors, the corresponding risks involved in its use must also be considered and addressed if users are to give manufacturers and other actors the permission they need to monetize secondary uses of personal data. A balance must be struck between providing users with the most personalized and bespoke service, and respecting their fundamental right to privacy.

What laws, regulation or guidance does the UK have relating to the insurance of autonomous vehicles?

The Act contains provisions extending compulsory insurance to driverless vehicles. Further details are set out in the first answer above.

Who will be liable for damage or personal injury caused by an autonomous vehicle?

Introduction

Sources of liability for damage caused by an autonomous vehicle include strict liability for defective products under the Consumer Protection Act 1987 (the “CPA”), liability for the tort of negligence and even, in limited circumstances, liability for breach of statutory duty.

Liability depends on determining what caused any particular injury and thereby allocating fault. This is already potentially complex in vehicles with sophisticated technologies, such as anti-lock braking, given that many different parties may be involved in a particular accident, including the driver, the manufacturer, a component manufacturer and other drivers. It will become far more complex when an autonomous vehicle (AV) is involved, as the definition of driver is less clear and both hardware and software may be responsible.

The UK Government has chosen to enhance the existing fault-based approach, which combines fault-based liability and product liability law, with a new form of compulsory insurance. As described above, the Act extends insurers’ liability to “death or personal injury” or any other damage apart from damage to the autonomous vehicle itself. Importantly, this covers the insured owner of the autonomous vehicle and not just other drivers of vehicles involved in a collision and third parties. The insurer may then claim against the person responsible for the incident, such as the manufacturer or another driver, who is under the same liability to the insurer or vehicle owner as to the injured person. Insurer liability is excluded if the software in an autonomous vehicle is not updated or if it has been adapted to a standard outside of the policy limits. Overall, the Act reflects a pragmatic, step-by-step approach relying on the ability of English law to adapt to new circumstances.

Dedicated Short Range Communications (“DSRC”) is a set of protocols and standards for dedicated vehicle-to-vehicle and vehicle-to-roadside communications using wireless technology. DSRC has many advantages for the operation of AVs, but also creates additional risks and sources of liability. We consider below the application of English product liability law to DSRC.

Sources of liability

AVs contain technologies that are not found in other vehicles. Although these innovations are meant to deliver the benefits of a driverless or nearly driverless vehicle, they could also be a source of new liability (a toy code illustration of the bug categories follows this list):

  • a “bug” in the software running the AV. These bugs can be divided into the following categories:
    • Logic error: The code does not do what the programmer intended it to do; this is perhaps the type of error most associated with a software bug and is most clearly characterized as a defect in the product.
    • Implementation error: The code does not correspond to the intended specification for that piece of the software; that is, it works as the programmer meant it to work, but this is not what the programmer was meant to implement. This may also be a defect in the specification and finding it requires analysing not only the code but also the written design parameters. An error in the parameters of the design often occurs where those parameters are set by legislation or regulation.
    • Corner case: The code (and the underlying specification) fails to address a particular situation encountered by the AV and the resulting behavior in that situation is inappropriate. This is a bug particularly apposite for AVs that will face unpredictable, real world situations. It may be unclear whether a Corner Case constitutes a defect.
  • a deliberate choice by the software. For instance, it chooses to swerve into another car in order to avoid a pedestrian who stepped into the road.
  • a defect in the specialist equipment used by the AV, such as its sensors, so that the software receives incorrect or inadequate information about the real world or its commands are not put into effect accurately by the vehicle.
  • a fault in the handover of control between the AV and the driver: this is only an issue for AVs that are not fully automated.
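
The following toy sketch (not drawn from any real AV codebase) illustrates the three bug categories in the list above:

    # Toy stopping-distance check used only to label the bug categories.
    def safe_following_distance_m(speed_kmh: float) -> float:
        """Intended rule of thumb: distance in metres = (speed in km/h)^2 / 100,
        never less than 2 metres."""
        # Logic error: writing speed_kmh * 2 here would simply fail to do
        # what the programmer intended.
        distance = (speed_kmh ** 2) / 100.0
        # Implementation error: if the written specification also required
        # halving this figure in town traffic, the code would do what its
        # author intended yet still not match the specification.
        return max(distance, 2.0)

    # Corner case: both the code and the specification assume forward
    # motion; a negative speed (reversing) is an unconsidered situation
    # that still returns a "safe" distance.
    print(safe_following_distance_m(100.0))   # 100.0 m
    print(safe_following_distance_m(-30.0))   # 9.0 m for an unanticipated input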

In addition, there are a number of different entities that may be responsible or partly responsible for the cause of any injury or damage involving an AV:

  • manufacturer;
  • driver;
  • owner;
  • seller;
  • repairer;
  • component manufacturer/supplier; and
  • data provider.

Owing to the additional complexities around AVs, it is possible that in the UK new laws will allocate responsibility for injury when an AV is involved. So far, the UK government has introduced a new system for compulsory insurance, including for damage to the driver or owner of the car, in the Act. Future changes may impose further no-fault liability on manufacturers, for instance. This may speed acceptance of AVs but has obvious risks for manufacturers.

At present, although the Act supplements insurance coverage, the UK government is not proposing to make any wholesale changes to the laws on product liability and negligence to accommodate AVs. The Act should close gaps in the existing car insurance regime and may reduce the likelihood of compensation being delayed by complex product liability litigation, but it does not alter the underlying allocation of liability.

Strict liability for defective products

Under the Consumer Protection Act 1987, manufacturers are strictly liable for damage caused by “defective products.” A product is defective if “the safety of the product is not such as persons generally are entitled to expect.” In determining this, the courts will take into account instructions and warnings that accompany the product and “what might reasonably be expected to be done with the product.” There are various defenses, including compliance with UK or EU law, and a “state of the art” defense: “that the state of scientific and technical knowledge at the relevant time was not such that a producer of products of the same description as the product in question might be expected to have discovered the defect.”

A preliminary question is what level of safety people are entitled to expect from today’s AVs. One point of comparison is the average level of safety attributable to a human driver – that is, the level of driving ability that would not be negligent for a human driver. In fact, public opinion appears to demand a much higher level of safety from an AV – little short of perfection. The highest possible standard is to demand zero accidents subject only to the “state of the art” exception. Of course, no AV will be perfect and accidents and injuries will inevitably occur. Where on the spectrum between a human driver and a perfect driver the standard is set and how well defined that standard is could affect the feasibility of AV production by manufacturers.

Most Logic Errors and Implementation Errors will fall within the definition of defects, to the extent that they compromise safety. However, given the extremely complex nature of AV software, manufacturers could argue that a particular Logic Error or Implementation Error was not discoverable – the “state of the art” defense. This is most relevant for software based on self-learning algorithms, such as artificial neural networks, where the bug is not expressly introduced by a programmer but arises endogenously from the operation of the learning algorithm. In that case, the manufacturer could argue that the AV had been shown through extensive testing to behave correctly and that it was effectively impossible to predict the particular circumstance that led to injury. This amounts to an argument that the learning algorithm was the “state of the art” and so not defective, even if it failed in a particular situation. The success of this argument is likely to turn on expert evidence about the algorithms underlying the AV software and the statistical robustness of tests.
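
As a rough illustration of what the statistical robustness of tests can involve, the following sketch computes how many failure-free test miles are needed before a failure rate below a given threshold can be claimed at a given confidence level (a standard zero-failure binomial calculation, not a legal standard):

    import math

    def miles_needed(max_failure_rate: float, confidence: float) -> int:
        """Failure-free miles required to show, at the given confidence,
        that the per-mile failure rate is below max_failure_rate.

        With zero failures in n miles, P(no failures) = (1 - p)^n, so n
        must satisfy (1 - p)^n <= 1 - confidence.
        """
        return math.ceil(math.log(1 - confidence) / math.log(1 - max_failure_rate))

    # Fewer than 1 failure per 10 million miles, at 95% confidence:
    print(f"{miles_needed(1e-7, 0.95):,} failure-free miles")   # ~30 million

On these assumptions, a claim of fewer than one failure per ten million miles at 95% confidence requires roughly 30 million failure-free test miles, which suggests the scale of evidence such an argument may demand.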

A Corner Case, the failure to program for the particular situation that gave rise to the accident, will be a “defect” if the following can be shown. Firstly, the failure of the AV software must compromise safety in a way that would not be anticipated. For instance, a sudden puncture while driving on the motorway is a rare occurrence, but if it is not dealt with appropriately by the AV software it is likely to be found to be a Corner Case defect. But a simultaneous puncture of two tires while driving on the motorway might be so rare that a failure of the AV software to react appropriately does not compromise the general expectation of safety.

Secondly, the Corner Case must fall within what might reasonably be expected to be done with the product. A failure to cope with unanticipated off-road conditions, for instance, may not be a defect unless the AV was designed for off-road use.

Thirdly, warnings or instructions given with the AV may limit liability for Corner Cases – although, in a fully automated AV, it is unclear what a passenger is supposed to do if an unanticipated situation arises, and so any warning that applies to normal operation of the AV may not be effective in limiting liability.

Where the AV is not fully automated, the transition between control by the software and the human driver is another potential source of defects. The limitation of strict liability for appropriate “instructions and warnings” may be relevant here. Specific training may be needed for human drivers interacting with partially automated AVs.

Finally, there is the novel case of a deliberate choice by the AV software to inflict injury or damage – presumably, in order to avoid inflicting worse injury or damage. One possible example is swerving into a car to avoid a pedestrian. Whether this is classed as a defect may be a complex question, dependent on questions of ethics and morality as well as law. It may also be studied empirically – MIT’s Moral Machine is a website that aims to build an understanding of practical ethics by asking users how they would decide when faced with a variety of moral dilemmas.

Some situations may clearly suggest a defect: the AV software chooses to swerve into a pedestrian in order to avoid damage to the car. Others will be more subtle. There is no comparison with the actions of a human driver: an instantaneous reaction by a human is a matter of judgment that is not easily found to be negligent; the same reaction by AV software follows from a deliberate decision by a programmer to have the software react in that way to that situation. Therefore, if it does not conform to general expectations of “safety”, it may be defective.

Negligence

A manufacturer of goods has a duty of reasonable care owed to those who might foreseeably use those goods. In the case of AVs, this duty is likely to extend to passengers in the AV as well as other road users and pedestrians. A manufacturer will be liable in negligence if a person in one of those categories suffers damage as a result of its breach of this duty.

Showing that the breach by the manufacturer caused the loss may involve allocating responsibility between the different entities listed in the Introduction. In particular, where a hardware component, such as a sensor, may be at fault, the cause of an accident may be the defective sensor, negligence in the incorporation of the sensor into the AV, a negligent repair or maintenance of the sensor, or insufficiently robust AV software that fails to anticipate possible sensor failure and transition into appropriate fail-safe modes.

These questions of causation already arise with existing semi-autonomous systems. Normally, the vehicle can be driven safely with these systems in a failed state – they will switch themselves off and ensure no adverse effects on the vehicle. This is a simple solution to avoid any negligence in the implementation of those systems causing an accident, but it is not available to a fully autonomous AV. Therefore, determining whether an AV is in breach of a duty to take reasonable care – or, to put it another way, what is the standard of care for an AV – is a novel question.

There appear to be two approaches. The manufacturer may argue that its extensive testing of the AV showed that the software reached an appropriate standard of driving ability and that this constitutes reasonable care by the manufacturer. The advantage of this approach is that it does not require extensive analysis of the software itself, only observation of how the software operates. The cost is in the time taken for extensive testing, although this may be a feature of AV software development in any case.

The second approach is an analysis of the software itself to verify that its behavior is as desired and that it does not contain any errors. A manufacturer may argue that its extensive analysis of the software as well as the resources devoted to writing the software fulfill its requirement to take reasonable care.

In practice, a combination of both of these approaches may be needed to satisfy the standard of reasonable care. A Logic Error or Implementation Error that causes an accident may be sufficient to show negligence even if the error did not manifest during real-world testing and could only have been found by analysis of the code. Conversely, the only realistic way to discover Corner Cases in complex code is by extensive real-world testing.

Even with extensive testing and analysis, an AV will sometimes be faced with a novel situation requiring a split-second response. This is where any analogy with a human driver breaks down. A human driver will make a judgment in that split-second and the duty of reasonable care applied to that judgment will make allowances for the lack of reaction time. AV software will operate according to its programming. There will be no allowance for reaction time (other than the mechanical limits of the vehicle). The duty of reasonable care will apply to determine whether the novel situation was actually a Corner Case that should have been anticipated or whether the failure mode of the software when dealing with an unanticipated input was appropriate: i.e. was it fail-safe to a reasonable standard.
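
The difference between an accidental failure and a designed failure mode can be seen in the following sketch, where the unanticipated input falls through to a deliberate minimal-risk action (all names are illustrative assumptions):

    def plan_action(classified_scene: str) -> str:
        actions = {
            "clear_road": "continue",
            "obstacle_ahead": "brake",
            "pedestrian_crossing": "stop",
        }
        # An unanticipated ("corner case") input lands here by design,
        # producing a conservative response rather than undefined behaviour.
        return actions.get(classified_scene, "minimal_risk_stop")

    print(plan_action("obstacle_ahead"))   # brake
    print(plan_action("sensor_glitch"))    # minimal_risk_stop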

In other words, the burden of avoiding negligence largely shifts from the actions of the driver while driving to the process used for creation and testing of the AV software. This will involve a combination of the two approaches. To satisfy their duty to take reasonable care, manufacturers will need to develop expertise both in methodologies for creation and verification of real-time software and in statistical proofs of robustness of testing procedures. Inevitably, the outcome of this process will not always be successful – that is, there will always be accidents – but if manufacturers can show that the process itself was undertaken with reasonable care, they may still avoid liability for negligence. The path to risk mitigation for AV manufacturers may be to demonstrate a comprehensive audit of the development and testing process.

Once again, a key determinant of liability will be whether the overall outcome should be similar to that of the average non-negligent driver or set at some higher level. A standard of reasonable care implicitly accepts that the manufacturer is not liable for some accidents that are caused by the AV software falling below a higher absolute standard of care. It is not clear that this is consistent with public acceptance of widespread AV deployment.

Statutory liability

Manufacturers may be liable for breach of statutory duty, where a statute imposes a duty on the manufacturer and breach of that duty is actionable by an individual who has suffered damage as a result of that breach.

A product is not necessarily defective within the meaning of the Consumer Protection Act 1987 just because it is in breach of a statutory or regulatory requirement. For instance, in Tesco v Pollard [2006] EWCA Civ 393, a child resistant cap was not defective because it was harder to open than a non-resistant cap, which was what people would generally expect, even though it did not meet the relevant statutory standard for child resistant caps. Accordingly, breach of statutory duty may be a wider source of liability than liability for defective products under the Consumer Protection Act 1987.

A person who suffers damage as a result of a breach of a statutory or regulatory requirement will not always have a right of action against the person in breach of that duty. It will depend on the scope of the duty and whether courts determine that the legislation is intended to give a private cause of action to individuals. The use of AVs will doubtless lead to further regulations and these may be used to argue for private causes of action.

Liability for DSRC

As set out above, Dedicated Short Range Communications (“DSRC”) is a set of protocols and standards for dedicated vehicle-to-vehicle and vehicle-to-roadside communications using wireless technology. There are various implementations of DSRC in different jurisdictions and wide variation in their compatibility. Within the European Union, the European Committee for Standardization (“CEN”) and the European Telecommunications Standards Institute (“ETSI”) have produced a number of standards on the operation of DSRC, including frequencies and bandwidths, but these also allow for optional frequencies covered by national regulation.

DSRC offers many potential advantages:

  • Platooning: Organizing vehicles into closely spaced formations with synchronized controls;
  • Warnings: From other vehicles or roadside transmitters, such as the presence of an obstruction around a hidden bend;
  • Efficient traffic flow: Communication with other vehicles and traffic lights allows more efficient traffic flow through junctions.

A corollary of these advantages is that an AV must be able to take action in reliance on communications received through DSRC. Where an AV reacts inappropriately to a DSRC message, this raises all the issues discussed above as to liability. However, there are other situations that only arise in the context of DSRC:

  • Misunderstanding: An AV does not understand, or misunderstands, a message received from another AV, due to a failure of interoperability. For instance, an AV in a platoon receives a message to apply the brake but understands it as a message to apply the accelerator;
  • Misinformation: An AV receives data that is incorrect. For instance, an AV receives a message that a traffic light is green when it is red;
  • Malice: A hacker attempts to use DSRC as a vector to compromise an AV’s software.

In cases of Misunderstanding, it may be difficult to determine liability unless there are clear and unambiguous protocols for DSRC. Take the case where there are two rival protocols and a message sent using one is interpreted using the other. It could be argued that the fault is that of the receiving AV for not being cautious in interpreting an ambiguous message; it could equally be argued that the fault is that of the sending AV for sending a message that could be misinterpreted. It might even be argued that the author of the DSRC protocol or the operator of the DSRC system is at fault for enabling the transmission of ambiguous messages. Presumably, an AV would aim to be as cautious as possible when receiving messages to minimize any misunderstandings, but the nature of DSRC messages may make this difficult. For instance, if an AV receives a DSRC warning that there is a danger around the corner, the cautious option may be to react to the message and apply the brakes, even if the message was sent using an ambiguous protocol.
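
The cautious-receiver approach described above might look like the following sketch (the message format and protocol identifiers are hypothetical, not a real DSRC or ETSI profile):

    # A message whose protocol version is unrecognised is never acted on
    # directly, but a safety-relevant warning still triggers the
    # conservative response.
    KNOWN_PROTOCOLS = {"v1"}

    def handle_dsrc(message: dict, apply_brakes, ignore) -> None:
        if message.get("protocol") in KNOWN_PROTOCOLS:
            if message.get("type") == "hazard_ahead":
                apply_brakes()
            else:
                ignore()
        elif message.get("type") == "hazard_ahead":
            # Ambiguous protocol: the cautious option is still to slow down.
            apply_brakes()
        else:
            ignore()

    handle_dsrc({"protocol": "v2-draft", "type": "hazard_ahead"},
                lambda: print("braking (cautious)"), lambda: print("ignored"))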

Where there is Misinformation, the sender may be liable for negligent misstatement or negligent or fraudulent misrepresentation. The exact factual circumstances will determine whether liability may accrue. First, the receiver – or any other person injured or object damaged by the message – must be within the class of entities to which the sender owes a duty of care. Road users of all types are likely to be owed a duty of care by senders of DSRC messages. Secondly, it must be reasonable for the receiver to rely on the message. This may depend on the status of the sender, the content of the message and whether it is consistent with other sensor inputs to the AV. For instance, a traffic light using an approved protocol is a reliable sender and a message that it is green is exactly the sort of message that might be relied upon. But if the AV can see the traffic light itself, it may still not be reasonable for it to rely on the message alone when it is inconsistent with the color shown on the traffic light. Thirdly, action taken in reliance on the message must have caused the relevant damage.
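
The corroboration point, that reliance may not be reasonable where a message is inconsistent with the AV’s own sensors, can be sketched as follows (all names are illustrative assumptions):

    from typing import Optional

    def rely_on_signal_message(dsrc_colour: str,
                               camera_colour: Optional[str]) -> bool:
        """Rely on a DSRC traffic-light message only if it is consistent
        with the AV's own observation of the light."""
        if camera_colour is None:      # light not visible: message may assist
            return True
        return dsrc_colour == camera_colour

    print(rely_on_signal_message("green", "red"))   # False: do not rely on it
    print(rely_on_signal_message("green", None))    # True: no contrary evidence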

In cases of both Misunderstanding and Misinformation, a further investigation may be needed to determine which legal entity is responsible for any liability that may accrue. Where the sender is itself an automated system, this may raise complex issues.

Finally, there is the case of Malice: a message may be an attempt to hack the AV. Cybersecurity is a concern for AVs generally, but is a particular problem for DSRC. The need for very low latency, simple communication reduces the scope to impose security measures. In fact, DSRC generally allows messages to be accepted even without the basic handshaking protocols to verify identity of the other party. Accordingly, DSRC is a high risk channel of communication and the standard of care for AV manufacturers in dealing with DSRC messages may be correspondingly high.
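
One mitigation, sketched below, is to verify a per-message signature before acting on any DSRC message; HMAC with a shared key is used here purely as a simplified stand-in, whereas deployed V2X systems use certificate-based asymmetric signatures:

    import hashlib
    import hmac

    def verify_message(payload: bytes, signature: bytes, key: bytes) -> bool:
        """Accept a DSRC payload only if its signature verifies."""
        expected = hmac.new(key, payload, hashlib.sha256).digest()
        return hmac.compare_digest(expected, signature)

    key = b"roadside-unit-demo-key"
    message = b"hazard_ahead"
    valid = hmac.new(key, message, hashlib.sha256).digest()
    print(verify_message(message, valid, key))      # True: act on the message
    print(verify_message(message, b"forged", key))  # False: discard it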

Overall, while DSRC may bring benefits, it also adds a layer of complexity in determining liability for actions of AVs.

Conclusion

The operation of AV software will introduce a variety of novel and complex situations where manufacturers of AVs may be liable to road users. Liability may arise from duties under the Consumer Protection Act 1987, a duty to take reasonable care to avoid liability for negligence and possible liability for breach of statutory duty arising from new regulations. We have set out here how these principles may evolve for AVs generally, also looking specifically at issues raised by DSRC.

The Act preserves the existing principles of product liability but, as set out above, extends insurer liability. This aims to smooth the introduction of AVs onto the roads for testing and deployment while allowing the innate flexibility of English law to develop an appropriate response based on the existing principles of product liability. Overall, it maintains the UK as a relatively benign environment for AV deployment and use.