Six AV Collisions and What They Tell Us

Extracting the lessons from different types of autonomous vehicle incidents

Robotaxis will usher in a new era of autonomous transport. When the car arrived, it represented personal freedom; the robotaxi model is a different proposition. These vehicles are owned, operated, and updated by companies, and what happens on our streets will increasingly be determined by people we will never meet. That is not a reason to resist the technology, but it is a reason to care about how these companies are held to account.

This article reviews six autonomous vehicle collisions. Each reveals something informative: how AVs operate, where they fail, and where the key safety developments still need to come. Most involve Waymo, not because it was intentionally singled out, but because it is the global industry leader: Waymo has logged nearly 200 million miles of fully autonomous driving and, of all the firms, offers comparatively transparent reporting.

What excites us about autonomous vehicles is their core value proposition: safety and wider access to transport. If we can build transport as general as a car and as safe as a train, we unlock journeys that could not have happened before. Public trust will be shaped by this early phase, and it is crucial that people believe companies are held to account when things go wrong.

Case 1: Cruise

The Cruise incident is the most consequential in the Western robotaxi industry.

What happened

On 2 October 2023 a human-driven Nissan hit a pedestrian in San Francisco, launching them into the path of a Cruise autonomous vehicle. The Cruise AV braked but still struck the pedestrian. It then initiated an automated pullover manoeuvre, travelling at up to 7.7 mph and dragging the pedestrian 20 feet. That last detail took weeks to surface publicly. Cruise’s senior leadership chose not to include it in their press release or in meetings with regulators. It was this lack of transparency, not the collision itself, that led California’s DMV to suspend Cruise’s driverless licence on 24 October 2023.

The fog of early reporting

A pattern recurs across these cases: in the first days after an incident, the truth is hard to determine. The initial media narrative was that the Cruise AV struck the pedestrian first, which was wrong. The most critical detail, the dragging, did not emerge until weeks later, and a full accounting only arrived months afterwards, when the law firm Quinn Emanuel published its 200-page investigative report. Even Cruise’s own employees were uncertain about basic facts in the days after the collision. Anyone forming strong conclusions about an AV incident in its first days is working from incomplete information.

The regulatory bargain

California had taken a permissive approach to AV deployment: companies could apply for licences, self-certify aspects of their safety, and operate on public roads with light ongoing oversight. The implicit deal was transparency in return for freedom. Cruise broke that deal. When the DMV concluded it had been misled, it suspended the licence to operate. Cruise ceased autonomous operations and never recovered as an independent company. The regulatory system worked. Permissiveness enabled Cruise to scale to roughly 400 driverless vehicles and accumulate millions of miles of real-world data. When trust was broken, accountability followed quickly.

Situational judgement

The incident reveals a fundamental dichotomy between human and autonomous driving. On average, AVs are better drivers than humans: they do not get tired, drunk, or distracted. But their peak performance falls short of a human driver’s peak, because they lack situational judgement. A human driver who witnessed a crash in the adjacent lane would have slowed down or stopped entirely; the Cruise AV only began braking when the pedestrian physically entered its path. Fortunately, ‘the system didn’t know better’ is not a legal defence. Regulators evaluated Cruise holistically, not just on the collision but on how it was disclosed, and concluded that a company willing to obscure what happened after one incident could not be trusted to report the next one. The suspension followed from that judgement.

Case 2: Waymo’s fence collisions

What happened

Between 2022 and 2024, Waymo vehicles were involved in at least 16 low-speed collisions with stationary objects: poles, fences, gates, chains. The vehicles struggled to recognise hazards on curbless streets where there was no clear road edge. Waymo issued two software recalls covering its full fleet and the National Highway Traffic Safety Administration (NHTSA) opened an investigation.

“Recall” has a particular meaning for AVs

Words can shape public trust. In traditional automotive manufacturing, “recall” means pulling vehicles off the road and replacing physical components at enormous cost. A Waymo “recall” is a software update pushed to vehicles overnight at a central depot. The fleet was never grounded. The inherited language of automotive regulation makes a routine software fix sound like a structural defect.

Nobody was hurt. The collisions were minor and involved objects, not people. Yet NHTSA opened a formal investigation, because the pattern itself was the signal: an isolated serious incident tells you about a specific edge case, while a recurring low-severity one tells you about the system. NHTSA was reading that signal, not reacting to any particular headline. That is what good regulatory behaviour looks like.

Case 3: Waymo’s operations were overwhelmed during a blackout

What happened

On 19 December 2025, a power substation fire knocked out electricity to a third of San Francisco, disabling traffic lights across a wide area. Waymo’s vehicles handled dark intersections as designed, treating them as four-way stops. But when enough vehicles encountered situations they were uncertain about, they queued up requests for guidance from Waymo’s remote assistance centre. The centre was overwhelmed. Vehicles sat stalled at intersections with hazard lights on, in some cases multiple Waymos blocking the same junction, adding to already severe congestion. Waymo shut down service for around 18 hours.

What this reveals: AVs’ rule-based protocols can be overwhelmed during city-wide disruption

This was not a software failure. The software did exactly what it should: when uncertain, stop and ask a human rather than guess. The failure was operational. Every AV company maintains a ratio of remote operators to vehicles, somewhere between 1:20 and 1:200 depending on the company and environment. These operators do not drive the vehicles; they provide guidance when the system requests it. On 19 December there were not enough of them. That shifts where the burden lies: the technology has proven it can drive, but a reliable service requires stress-tested operations. Permissive regulation makes sense while the industry is young, but at some point regulators need to ensure AV operators have disaster-readiness plans that are known to city emergency services and tested regularly, because a fleet that freezes at intersections during a power cut will be a lethal liability during an earthquake.
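
To see how quickly a fixed staffing ratio breaks down, consider a back-of-the-envelope sketch. All the numbers below (fleet size, a 1:100 operator ratio, handling time, request rates) are illustrative assumptions, not figures Waymo has published:

```python
# Hypothetical numbers: why an operator-to-vehicle ratio that works on a
# normal day collapses during a city-wide event.

FLEET_SIZE = 1000              # vehicles in service (assumed)
OPERATORS = FLEET_SIZE // 100  # a 1:100 ratio, within the 1:20-1:200 range
HANDLE_MINUTES = 2.0           # operator time per guidance request (assumed)

# Requests per hour the remote assistance centre can clear.
capacity_per_hour = OPERATORS * (60 / HANDLE_MINUTES)  # 300

def queue_growth(requests_per_vehicle_per_hour: float) -> float:
    """Net queue growth in requests/hour; negative means the queue drains."""
    demand = FLEET_SIZE * requests_per_vehicle_per_hour
    return demand - capacity_per_hour

# Normal day: ~1% of vehicles ask for guidance each hour.
print(queue_growth(0.01))  # -290.0: comfortable headroom

# Blackout: dark intersections everywhere; ~50% of vehicles ask each hour.
print(queue_growth(0.50))  # +200.0: the queue grows by 200 requests every hour
```

Under the assumed normal load the centre has thirty-fold headroom; in the blackout scenario the queue grows without bound, which looks, on the street, like vehicles sitting at intersections with their hazard lights on.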

Case 4: A Waymo vehicle hits a cat

What happened

In October 2025, a Waymo vehicle waiting to collect passengers in San Francisco’s Mission District pulled away from a stop with a cat underneath it. The cat, a well-known neighbourhood bodega pet called Kit Kat, was struck by a rear tyre and later died. Security footage showed a bystander crouching in front of the vehicle trying to coax the cat out before the car moved.

This was neither a software failure nor an operational failure. Waymo’s sensors detected the environment as designed; the cameras and lidar simply do not cover the undercarriage. No car’s sensors do. The difference is that a human driver, seeing someone crouching in front of their vehicle and peering underneath, would have inferred that something was wrong. Waymo could not make that inference.

What this reveals

The limitation here is one of vehicle design. Waymo’s fleet is built on modified Jaguar I-Paces, cars designed for human drivers and human passengers, with autonomous hardware retrofitted on top. The sensor arrays are optimised for the driving task: seeing the road ahead, detecting obstacles at speed, reading traffic signals. They are not optimised for the low-speed, close-quarters situations that arise during pick-up and drop-off, where animals or small objects can end up directly beneath the vehicle.

The data supports this. Waymo’s animal collisions have overwhelmingly occurred at low speeds during the pickup and drop-off phase. None have happened at standard driving speeds. Autonomous vehicles are not hitting animals at speed, so they are not hitting humans at speed either. Particularly at night, when the majority of pedestrian fatalities involving human drivers occur, these vehicles’ sensor suites are better at detecting animals and people than a human driver’s eyes. The safety concern is not the open road. It is the last few metres of a journey, at walking pace, where the vehicle’s form factor creates blind spots its sensors cannot cover.

In the longer run, there is every incentive (commercial and safety) to design vehicles from the ground up for autonomous operation rather than retrofitting human-oriented cars. Zoox, the Amazon-owned AV company, has taken exactly this approach: a purpose-built, bidirectional vehicle with no traditional front or back, designed around the requirements of driverless operation rather than a human driver. Whether that form factor eliminates these close-quarters blind spots remains to be seen. If Zoox’s purpose-built vehicles produce a similar pattern of low-speed animal collisions, that would be meaningful evidence that the design problem is harder than a change in form factor can solve.

Case 5: A Waymo vehicle struck a child

What happened

In January 2026, a Waymo vehicle in Santa Monica struck a child who stepped into the road from behind a parked SUV near a school. The vehicle was travelling at approximately 17 mph during a period when school-hour restrictions made the enforceable limit 15 mph. The Waymo braked hard and reduced its speed to under 6 mph before contact. The child stood up immediately and walked to the sidewalk. Waymo called 911, remained at the scene, and voluntarily contacted NHTSA the same day.

The PR

Waymo’s public statement is worth reading carefully. The child becomes “a young pedestrian.” The collision becomes an “event” where the vehicle “made contact.” The statement leads not with what happened to the child but with Waymo’s commitment to transparency, and pivots quickly to a peer-reviewed model showing a human driver in the same situation would have hit the child at 14 mph rather than 6. That is probably true. It is also the kind of framing that, if a human driver’s lawyer tried it after hitting a child near a school (“my client was braking hard and made contact at a significantly lower speed than an average driver would have”), would not be received warmly. Waymo handled the incident itself responsibly. The language around it is doing a lot of work to make a vehicle hitting a child near a school sound like a safety success story.
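
The physics behind that comparison is simple, and a rough kinematics sketch shows why reaction time dominates at short range. The numbers below (sight distance, reaction times, braking deceleration) are illustrative assumptions, not the inputs of Waymo’s published model:

```python
import math

MPH_PER_MS = 2.237  # metres/second to miles/hour

def impact_speed_mph(v0_mph: float, gap_m: float,
                     reaction_s: float, decel_ms2: float) -> float:
    """Speed at contact, given a reaction delay followed by constant braking."""
    v0 = v0_mph / MPH_PER_MS
    braking_gap = gap_m - v0 * reaction_s  # distance left once brakes engage
    if braking_gap <= 0:
        return v0_mph                      # contact before braking even starts
    v_sq = v0 * v0 - 2 * decel_ms2 * braking_gap
    return math.sqrt(v_sq) * MPH_PER_MS if v_sq > 0 else 0.0

# Assumed scenario: 17 mph approach, child visible 6 m ahead, 7 m/s^2 braking.
print(impact_speed_mph(17, 6.0, reaction_s=0.3, decel_ms2=7))  # ~5 mph (AV-like)
print(impact_speed_mph(17, 6.0, reaction_s=0.7, decel_ms2=7))  # ~16 mph (alert human)
```

With only six metres of sight line, a few tenths of a second of reaction time is the difference between shedding most of the speed and shedding almost none of it.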

The tradeoffs with live software updates

A common concern about AVs is that a mistake made by one vehicle is shared by every vehicle on the same platform. This fear is largely misplaced. The software is perpetually updated, and a flaw identified in one vehicle’s behaviour can be corrected across the entire fleet in a way no amount of driver retraining could match.

But this incident complicates that reassuring picture. It happened after a software update that made Waymo’s vehicles more assertive, including around speed limits; the vehicle was travelling 2 mph over the posted limit. Waymo passengers had found rigid adherence to speed limits frustrating, and allowing human-level flexibility is a reasonable commercial decision. It is not clear whether the update played a direct role in this specific case. But the tension is real: the same mechanism that makes AV mistakes non-persistent (continuous software updates) is also the mechanism that introduces new risks with every iteration. The system that fixes yesterday’s problems can create tomorrow’s.

Case 6: Hello Robotaxi strikes two pedestrians

What happened

On the morning of 6 December 2025, an autonomous vehicle struck a man and a woman at a zebra crossing in Zhuzhou, Hunan province. Bystanders and police lifted the vehicle to free the man trapped underneath. Both pedestrians were admitted to intensive care. The vehicle carried Hello branding, and reporting indicates it was a Baidu Apollo RT6, though no official statement has confirmed the technical configuration. Hello’s robotaxi services were suspended indefinitely. Zhuzhou traffic police have not published a responsibility determination.

That is close to the limit of what can be said responsibly. The vehicle’s speed at impact, whether it was operating in full autonomous mode, whether a safety driver was present, and which layer of the system failed: none of this is in the public record.

A note on operators, suppliers, and transparency

The Zhuzhou crash raises two questions that sit outside the incident itself.

The first is accountability in a layered model. Hello operated the service. Baidu Apollo likely supplied the vehicle and autonomy platform. When something goes wrong in that arrangement, who is responsible depends on where the failure originated. This is not a new problem in transport: airlines fly aircraft built by Boeing and Airbus, and decades of regulation allocate fault across manufacturers, operators, and maintenance providers accordingly. But in autonomous vehicles, that framework barely exists yet. It will need to. Uber has announced plans to host both Baidu’s Apollo Go robotaxis and Wayve’s autonomous vehicles on its platform in London: a ride-hailing marketplace that neither builds the vehicles nor writes the driving software. The question of who answers when something goes wrong cannot wait until something does.

The second is information. Waymo publishes safety reports. NHTSA investigations are publicly accessible. California’s DMV requires incident disclosure as a condition of its operating permits. No equivalent transparency infrastructure exists for Chinese AV operations, at least not one accessible to international observers. This makes every incident involving a Chinese operator harder to learn from publicly than one involving an American operator. As Chinese AV technology is exported, through partnerships and vehicles entering international markets, that gap will matter more, not less.

Conclusion

No two of these incidents failed for the same reason. A company hid information from its regulator. A fleet ran out of human operators during a crisis. A cat was killed by a blind spot that no software update can fix. A child was struck following an update designed to make vehicles feel less robotic. And in Zhuzhou, we cannot say what failed because the information is not available.

Each incident taught us something, and that is how this technology gets better. Public trust will determine how quickly autonomous vehicles develop, and trust is earned through accountability. The accountability mechanisms that suit a pilot phase will not be enough at city scale: a few dozen vehicles learning on public roads is a different proposition to a fleet large enough to gridlock intersections during a power cut. As these fleets grow, the rules around them must grow with them. There will be more incidents; they are part of the journey to a safer transport system. What matters is that they are made transparent to regulators, so that we can collectively learn and public trust can be built.

Next

The Real Economic Impact of Driverless Cars Won’t Fit in a Headline