Cruise, the driverless car maker, and GM have filed for a commercial licence in America. But like so many highway agencies, America’s National Highway Traffic Safety Administration faces a dilemma: can the public be convinced that safety is at the forefront of regulatory frameworks? Here, Rav Babbra, business development manager at drisk.ai, and Ed Houghton, head of research & service design at DG Cities, both members of the D-RISK co-innovation project, look at the issues at stake.
In February this year, the autonomous vehicle (AV) maker Cruise and GM applied to the National Highway Traffic Safety Administration (NHTSA) for a commercial licence for the Origin self-driving vehicle. At the centre of the petition is the notion that these ‘robotaxis’ are safe and will improve overall road safety, while also making travel more sustainable and accessible to communities that traditionally face barriers to reliable transport.
While all those arguments may well be true, proving it is another matter, especially when it comes to safety. There are several public examples, from multiple manufacturers, where test driverless vehicles have caused accidents, some with tragic consequences. Safety is therefore on the minds of regulators. A commercial deployment must be safe for the passenger and other road users including pedestrians.
There is plenty of test data to draw upon when making decisions about commercial licences. But until AVs are commercially deployed, no regulator can be sure the trial data translates into sustainable safety standards. It requires real-world data to be sure.
This poses a considerable problem for regulators: how can they sanction commercial deployments if safety isn’t completely assured? And, at the same time, how can they obtain the data that provides those assurances without approving commercial licences? It’s a dilemma.
It’s not just regulators who need to be convinced either; the public needs to be too. Regulators know that without public support, AVs will be met with scepticism and could never be adopted. Getting it wrong would also undermine the environmental upsides and congestion-reducing benefits AVs can bring.
How big is the public opinion gap?
The latest studies give regulators a basis for their assessments. Some 37.8% of people think AVs will be less safe than traditional vehicles, and only three in 10 (29.9%) are convinced that self-driving vehicles will be safer.
What’s more, only 37% feel that they can trust self-driving vehicles, and a quarter (25.9%) of people are undecided on safety. Safety perceptions also differ by age. For example, older people, a group who could significantly benefit from more accessible transport, were less positive about self-driving cars and more concerned about safety.
However, there’s evidence from our studies that those who are ‘undecided’ could be convinced AVs are safe by participating in trials. Where people have been given a chance to ride in an AV accompanied by an expert explaining the technology, and another acting as a safeguard, they have changed their opinion.
Fundamentally, the opportunity to experience the technology and ask questions about how AVs work has allayed fears and moved people from a position of ‘novelty’ and ‘concern’ to one of understanding, and even an acknowledgment that AVs can be a future mode of transport for greener and more accessible city living.
With such mixed views, regulators can’t make a firm decision one way or the other. What the research above does tell regulators, though, is that the more the public is engaged in trials, the greater the potential to help people see the value of the technology and become more confident in its benefits.
Is that enough though? Will a ride in a driverless car with experts be enough? The answer is probably not. People will want to know that the safety nets are in place when a steward isn’t on board. This means the artificial intelligence (AI) in use must achieve higher and more stringent standards.
Could a different approach to artificial intelligence help?
Currently, the artificial intelligence (AI) being applied to AVs is built on the rules of the road. For instance, AVs must be taught how to behave at junctions, particularly at high-risk ones such as unprotected right or left turns. This requires training. However, any time a high-risk scenario is identified, there’s a risk of over-training, such that the AV becomes so good at turning right that it can’t turn left with the same level of certainty.
It also doesn’t allow for changes to the highway code. The recent changes to the UK’s Highway Code, which give pedestrians and cyclists more rights, are a good example. Yet a month on, it would be hard to believe there have been any changes at all, simply because driver and pedestrian behaviour doesn’t appear to have changed significantly. This creates more risk for AV decision rules and exerts even more pressure on designing responsive and adaptive vehicles.
There is also a tendency to design technology that solves the easy problems first, building up to the more complex driving scenarios. This linear approach isn’t representative of real life and can introduce more risk, because the more difficult scenarios are neglected or not trained to the same extent.
However, persuading people that AVs are safe isn’t only about knowing an AV will turn left and right safely. It must also be able to tackle ‘edge cases’: the one-in-a-million driving scenarios that are near-impossible for a developer to dream up in a lab, yet have happened to members of the public.
A child in the road
Developers might plan for a child stepping out from behind a parked car, or a dog that’s off the lead, but they won’t have considered all the potential extreme versions of those scenarios: from a small child dressed as a giraffe stepping out, to a six-metre-tall giraffe standing in the road.
An AV might struggle to identify such permutations as risks that require an emergency stop. Compiling a library of one-in-a-million instances is therefore vital to developing AI that not only meets set standards of safety but exceeds and improves them.
Extreme scenarios, aka ‘edge cases’
Edge cases can’t just be conceived in a lab by a developer. They need to be founded, in part, on real-life examples of incidents people have experienced on the road. As such, developers can use the public’s experience to plug the gaps in their own experience and knowledge, and to determine the next best test to perform. In effect, this provides developers with a more comprehensive risk profile for scenarios that could happen. It can then be used to adjust the algorithms the AI uses and, ultimately, to generate more trust in the process of AV design.
It’s this kind of thinking that will ensure the more difficult edge cases are tested and move AVs nearer to large-scale deployments. Until then, AVs will remain a technology that shuttles people along short, low-risk routes from one airport terminal to another, and will never become a means to move freight across a country.
Involve the public
It’s clear from both examples that regulators have a greater chance of setting policy that works, and of issuing reasonable safety standards, if they involve the public.
Indeed, it cannot be overstated how invaluable public engagement is in the design of AVs and in encouraging public acceptance. Active public involvement can and will shift opinion and engender trust.
We believe it’s this form of deeper research and evidence-based learning and development that will lay the foundations for commercial deployments of this potentially transformative technology. Until we see this approach adopted, driverless cars will remain in testing and the world will miss out on the benefits.
The authors are Rav Babbra, business development manager at drisk.ai, and Ed Houghton, head of research & service design at DG Cities.
This UrIoTNews article is syndicated from IoT-Now.