Autonomous vehicles and accidents: are they safer than vehicles operated by drivers?

Author | Jaime Ramos

The dream of self-driving cars is still maturing, ticking over silently. Advancements and tests contrast with the doubts surrounding this technology: will self-driving cars be capable of fulfilling the promise to reduce accident rates to anecdotal levels?

What is the likelihood of an accident in a self-driving car?

Autonomous vehicle development is still at too early a stage to establish with any certainty how much self-driving cars will reduce accidents. In this regard, legal advisory firms tend to be pessimistic about self-driving cars and their risks, in most cases without well-founded reasons.

Specialist lawyers experienced in road traffic law cite unfavorable statistics for self-driving cars: an estimated 9.1 self-driving car accidents per million miles driven in the United States, compared with an average of around 4.1 for human-driven vehicles. However, these figures, widely repeated across the sector, come from a study conducted in 2013. In other words, they are outdated.
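As a back-of-the-envelope illustration of how such per-million-mile rates are derived, the sketch below normalizes raw accident counts by miles driven. The counts are illustrative, chosen only to reproduce the rates cited above; they are not the study's actual data.

```python
# Illustrative derivation of accident rates per million miles driven.
# The raw counts below are made up to match the cited rates (9.1 vs 4.1).
def accidents_per_million_miles(accidents: int, miles: float) -> float:
    """Normalize a raw accident count to a per-million-miles rate."""
    return accidents / (miles / 1_000_000)

self_driving_rate = accidents_per_million_miles(91, 10_000_000)
human_rate = accidents_per_million_miles(41, 10_000_000)

print(f"Self-driving: {self_driving_rate} accidents per million miles")
print(f"Human-driven: {human_rate} accidents per million miles")
```

The comparison only makes sense once both fleets are normalized to the same exposure (miles driven), which is why raw accident counts alone say little.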

At the state level, and with more up-to-date information, California had recorded 690 autonomous vehicle collision reports up to February 2024, according to the California Department of Motor Vehicles (DMV), whose website logs every accident. The DMV also reminds manufacturers currently testing autonomous vehicles of their obligation to report any collision within 10 days of the incident.


Other legal sources offer a forecast based on the three fatalities recorded in the United States to date, relating that figure to the distance driven and comparing it with the average accident rate of human-controlled vehicles. By that measure, self-driving cars clearly lose the bet.

It is worth noting that self-driving cars have been in mid-development for some years now and that, however much we are bombarded with fears about their reliability, not enough information has been collected to know what their accident rate will be once the technology matures.

What are the potential faults with self-driving cars?

Analysis of the reported incidents involving autonomous vehicles sheds some light on their causes.

Unrealistic testing

In many cases, self-driving cars are tested virtually in almost idyllic conditions: good weather, roads in perfect condition and drivers who abide by the law. However, when tested in real environments, the vehicle may encounter fog, scooters in the side lanes and a whole host of unknown scenarios to which it is incapable of reacting.

Technical faults

As with any machine, and particularly technology-dependent ones, autonomous vehicles also experience technical or design faults.

These technological errors might be due to:

  • Faulty sensors

In order to perceive their surroundings, autonomous vehicles use sensors such as cameras, radar and LiDAR. If one of them miscalculates the distance to the vehicle in front, fails to detect an obstacle or misreads a traffic sign, the risk of collision increases significantly.

  • Software malfunction

In order to function, self-driving cars use complex algorithms and software, which are also constantly evolving. A fault in the code can cause the vehicle to make an incorrect decision, leading to an accident.

  • Inaccurate mapping data

These vehicles use mapping data to navigate roads. If the maps are not up to date, the vehicle could make a wrong turn or miss important information, resulting in an accident.
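The faulty-sensor point above can be sketched as a simple redundancy check: if redundant sensors disagree too much about a distance, the system should treat the reading as a fault rather than trust any single sensor. This is a minimal illustrative sketch; the function names, thresholds and fallback behavior are hypothetical, not any manufacturer's actual API.

```python
# Hypothetical sketch: cross-checking distance estimates (in meters)
# from redundant sensors. All names and thresholds are illustrative.
from statistics import median

def fused_distance(readings: dict[str, float], max_spread_m: float = 2.0):
    """Return a fused distance estimate, or None when the sensors
    disagree too much — a cue to fall back to safe behavior."""
    values = [v for v in readings.values() if v is not None]
    if len(values) < 2:
        return None  # not enough redundancy to trust a single sensor
    if max(values) - min(values) > max_spread_m:
        return None  # sensors disagree: treat the reading as a fault
    return median(values)

# Agreeing sensors yield a fused estimate; a faulty one forces a fallback.
print(fused_distance({"camera": 30.1, "radar": 29.8, "lidar": 30.0}))
print(fused_distance({"camera": 30.1, "radar": 12.0, "lidar": 30.0}))
```

The design choice here is deliberately conservative: disagreement degrades to "no estimate" rather than averaging a faulty sensor into the result.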

Autonomous driving present and future

It will still be a few years before we see the commercial eruption of fully autonomous driving, known as level 5. During this time, manufacturers still have to overcome a series of obstacles. However, in some parts of the world, particularly in several North American cities, experimental trials and projects with technologies at that level have multiplied.

In addition to the uncertainty regarding the definitive takeoff, there are issues with the driver assistance systems and autopilots that manufacturers are already introducing in their vehicles, which tend to get confused, as organizations such as Euro NCAP have reported on numerous occasions.

What we do know is that, according to the National Highway Traffic Safety Administration (NHTSA), human error is a factor in 94% of all accidents. In this regard, there are great hopes that autonomous driving will reduce the number of victims. Studies suggest figures of all kinds, from the complete eradication of that 94% to more pessimistic reports (by insurance companies) that calculate a reduction of around 35%.

Towards a hybrid model?

One path still being investigated offers an intermediate solution: letting vehicles drive autonomously in scenarios they already handle well, such as speed control on a highway, while delegating the execution of more complex maneuvers to remote human operators.

In an article published in 2023 in the journal IEEE Transactions on Robotics, Cathy Wu, a member of MIT's Laboratory for Information and Decision Systems (LIDS), and her co-authors introduced a framework for how remote human supervision could be scaled to make a hybrid system efficient without compromising passenger safety.

They noted that if autonomous vehicles were able to coordinate with each other on the road, they could reduce the number of moments in which humans needed to intervene.

A question they set out to answer was whether a small number of remote human supervisors could successfully manage a larger group of autonomous vehicles, since logic would tell us that the higher the number of cars, the greater the need for remote human supervisors.

However, in the scenarios tested in which autonomous vehicles coordinated with each other, the team found that cars could significantly reduce the number of times humans needed to step in. For example, an autonomous vehicle already on a highway could adjust its speed to make room for a merging car, eliminating a risky situation altogether.
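The merging example above can be sketched as a toy model: each merge event has an available gap, and a gap that is too small escalates to a remote supervisor unless a coordinating highway vehicle slows down to enlarge it. All numbers and logic below are illustrative, not the framework from the cited study.

```python
# Toy model of coordination reducing remote-human interventions.
# Gaps, thresholds and the slack value are illustrative assumptions.
NEEDED_GAP_M = 8.0         # gap (meters) a merging car needs
COORDINATION_SLACK_M = 5.0 # extra gap a coordinating vehicle can open

def escalations(gaps: list[float], coordinated: bool) -> int:
    """Count merge events that still require remote human intervention."""
    count = 0
    for gap in gaps:
        # With coordination, a highway vehicle slows to widen the gap.
        effective = gap + (COORDINATION_SLACK_M if coordinated else 0.0)
        if effective < NEEDED_GAP_M:
            count += 1  # gap still too small: escalate to a supervisor
    return count

gaps = [9.0, 4.0, 6.5, 10.0, 2.0, 7.0]  # sample merge scenarios (meters)
print(escalations(gaps, coordinated=False))  # 4 merges escalate
print(escalations(gaps, coordinated=True))   # only 1 still escalates
```

Even in this crude model, coordination shrinks the supervisor workload per vehicle, which is the intuition behind scaling a small number of remote humans to a larger fleet.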

How to regulate self-driving cars?

Regulation is one of the obstacles still to be overcome. It will not be long before the various legal systems have to take a stance on criminal and civil liability.

That is why, in the United States, a country where case law carries significant weight, legal professionals are preparing for a likely legal battle between technology suppliers, insurers and victims, one that will be a determining factor for the future of self-driving cars. U.S. legal experts fear the complexity of the matter and are patiently working to establish definitive criteria on who will be responsible for the mistakes of self-driving cars.

In California, autonomous vehicles traveled around 3.3 million miles in 2023, over five times the previous year's total. General Motors' Cruise and Alphabet's Waymo accounted for most of those miles (63% and 36%, respectively), according to the state's Department of Motor Vehicles (DMV).

A moral self-driving car


In terms of the liability dilemma, it is not just a case of establishing blame in the event of an accident, but of reflecting on decision making. One of the liveliest aspects of the debate in recent years has been how a self-driving car should act when it faces a dilemma in which, no matter what it does, it will cause harm to a human being.

In this regard, the following experiment is interesting: "The Moral Machine". It gathered the perspectives of two million people in 233 countries and territories on how a car should respond to these kinds of dilemmas. Given the complexity of the answers, those responsible advocate developing legal frameworks adapted to each region and, in particular, equipping machine intelligence with a moral intelligence that emulates the human kind. It will not be easy.

Images | Wikimedia Commons/Dllu, Waymo, Volvo Cars
