AI evidence and the future of motor vehicle accident disputes
DOI: https://doi.org/10.14296/deeslr.v21i.5803

Abstract
This paper uses a learning scenario to explore how UK civil courts may deal with self-driving vehicle cases involving a collision. We explore challenges related to obtaining and presenting evidence about decisions made by artificial intelligence (AI) and the impact the legal process may have on automated vehicle (AV) users and other road users.
At first glance, UK legislation regulating AV claims appears to provide a straightforward mechanism for any road user to make a claim for injury or damage caused by a vehicle in self-driving mode. However, this paper will discuss how, in the event of an insurer denying a claim or alleging that the AV user contributed to the accident, those disputing the insurer’s decision will have no alternative but to take the expensive and time-consuming step of pursuing the matter through the courts.
The UK has introduced legislation creating a safety benchmark for AVs: an AV should drive to the standard of a ‘careful and competent human driver’. This means that if an AV crashes while driving itself, an assessment must be made: was the vehicle driving like a careful and competent human driver? In the first instance, this decision will be made by the insurer.
If an insurer decides the AV was not at fault and that the AV user was instead at fault, or partially at fault, the user is at a distinct disadvantage if they disagree with the insurer’s assessment. Anyone disputing that assessment will need access to vehicle data and the expertise to interpret it. Some data will be accessible, but much of the relevant data may be difficult to obtain. While there are laws requiring compulsory incident data recording, the mandatory parameters are narrow, and this data may not reveal how an incident occurred. Many more parameters will be recorded and available to manufacturers and service providers, but these will not necessarily be made available to other parties. Parties such as AV users, who are not in control of the data, will have to request access. Where this is not voluntarily forthcoming, production of this evidence must be pursued through the courts.
If an insurer assesses a crash and assigns liability to an AV user, and that user wishes to dispute this and allege that the incident happened due to faulty AI, the existing legal presumptions in the UK about the reliability of computers, combined with the opaque nature of deep learning algorithms, present a formidable challenge both for a user presenting a case and for a court seeking to understand the evidence before it. The UK has not (at the time of writing) amended its product liability legislation to include AI and software. Consequently, if there is an allegation of a vehicle fault relating to the AI, and this is not accepted by the insurer, the claim must be pursued via a negligence claim.
This paper considers how cases may be presented and the issues which arise as a result of the interaction between the law and the AI used in vehicles.