Despite our increasing reliance on machines, most people still regard them with a measure of distrust. Consider the multitude of science fiction works dealing in some way with machines taking over the world and either enslaving or exterminating all humans. Despite lifetimes of mostly positive experiences with machines, we still harbor a flicker of doubt as to whether they are completely under our control.
That raises an interesting question about the autonomous vehicles we are now developing in haste: what does the public expect from a machine entrusted with their lives, operating without human intervention?
I believe that expectation is “Perfection.”
For example, in April this year, one person was tragically killed when a broken fan blade caused an uncontained engine failure on a Southwest Airlines flight. Two things were significant about that accident when it comes to our expectation of perfection: 1) Southwest had a perfect record of no inflight fatalities for their entire 51-year history, and 2) The accident was preceded by nine years of perfection—not a single fatality since 2009.
Despite the rarity of that tragic event, it was the number-one story across the country. Southwest estimated that it would lose $50-$100 million in bookings in the aftermath, and people began avoiding the window seat adjacent to the engine nacelle. One article even suggested that if all 737s were affected by this problem, it could be a disaster. That’s an incredible leap of distrust, considering your chance of dying in a plane crash is about 1 in 7 billion, while your odds of dying from a lightning strike are three-and-a-half times greater!
Tragic accidents involving autonomous automobiles have drawn similar reactions. After a fatal accident this past March involving a Tesla, the company’s stock took a steep, sudden plunge before recovering. An accident a month earlier, in which an autonomous Uber car struck and killed a pedestrian walking a bicycle at night, led Uber to suspend all testing.
These two accidents led Jack Stewart to pen an opinion piece in Wired titled, “For a Much-Needed Win, Self-Driving Cars Should Aim Lower.” In that article, Stewart concludes that “… AV makers are clearing the technological hurdles and tripping over the psychological ones.”
Can Perfection Be Achieved?
Statistically and realistically, perfection cannot be achieved. The number of variables and the number of outcomes are just too large. So what then is to be done? The psychological aspect of autonomous vehicle malfunctions is real and should be handled by experts in that field. But, from a technological standpoint, the answer lies in standards and safety requirements.
Standards and safety requirements are created through the most methodical and exhaustive of analysis processes. Input is accepted from all stakeholders. Most importantly, stakeholders can agree on the proven methodologies, requirements and limits to be used.
Standards for Autonomous Driving
The overarching standard for functional safety is ISO 26262. It is not specific to autonomous vehicles; it covers the electrical and electronic systems used in all road vehicles. It addresses risk, failure and the consequences of those failures, and it describes design rules and goals for both hardware and software.
For a “real world” look at driverless cars, ISO 26262 defines the Automotive Safety Integrity Level (ASIL). This is a system for classifying hazardous events by order of severity considering three factors: 1) the severity of possible injuries, 2) the probability of exposure to the hazard, and 3) the ability to control the outcome.
For a particular event, those three factors are assigned risk values. The severity of injuries depends a lot on speed, so if an accident occurred at a slow speed in a neighborhood, it would be presumed low. Considering all these factors for all hazardous events results in a risk assignment from Table 1.
Once you have assigned risk values, you can designate a safety level as shown here. Note that even if the severity is level S3 (possibly fatal injuries), if the controllability is C2 (normally controllable) and the exposure is E1 (very low), the result is “QM” (Quality Management) … not even ASIL A. The higher the ASIL (A being lowest and D highest), the more difficult it is to design hardware and software that meets the requirements.
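To make that determination concrete, here is a small illustrative sketch in Python. The additive scoring rule below (summing the S, E and C class numbers) is a commonly used shorthand that reproduces the standard’s ASIL determination table; the function name and structure are this example’s own, not part of ISO 26262.

```python
# Illustrative ASIL determination, following ISO 26262 part 3.
# The additive rule (S + E + C mapped to QM / A / B / C / D) is a common
# shorthand that reproduces the standard's determination table; it is a
# sketch for this article, not the normative text of the standard.

def asil(severity: int, exposure: int, controllability: int) -> str:
    """severity: 1-3 (S1-S3), exposure: 1-4 (E1-E4),
    controllability: 1-3 (C1-C3). Returns 'QM' or 'ASIL A'..'ASIL D'."""
    if not (1 <= severity <= 3 and 1 <= exposure <= 4
            and 1 <= controllability <= 3):
        raise ValueError("classification out of range")
    score = severity + exposure + controllability
    levels = {7: "ASIL A", 8: "ASIL B", 9: "ASIL C", 10: "ASIL D"}
    return levels.get(score, "QM")  # any score below 7 falls back to QM

# The example from the text: S3 injuries, E1 exposure, C2 controllability
print(asil(3, 1, 2))   # QM, despite possibly fatal severity
print(asil(3, 4, 3))   # ASIL D: worst case on all three factors
```

Note how a single low factor, such as very rare exposure, can pull an otherwise severe hazard all the way down to QM, which is exactly the behavior described above.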
The ISO 26262 standard also defines “Freedom from Interference.” This means that hardware and software must be designed so that non-safety-critical components cannot interfere with safety-critical components. While this is a clear necessity, verifying that fact can be cumbersome and expensive.
Standards such as the Automotive Software Process Improvement and Capability Determination (ASPICE) in Europe and Asia, and the Capability Maturity Model Integration (CMMI) in North America define very strict software development methodologies and practices for producing safe and reliable software.
The Coming Legislation
From a strict engineering and technology standpoint, the standards above and other relevant standards likely cover all currently foreseeable events, and they can be modified as unforeseen technologies and use cases evolve.
But there is another force that could be the most influential of all: laws.
So far in the evolution of nascent driverless car development, legislation has been in a “wait and see” mode. Uber and the State of California tangled over licensing requirements, which led Uber to quickly accept an invitation from the State of Arizona, further minimizing regulatory oversight.
As it has been for some time, lawmakers’ tolerance for loss of life at the hands of technology is low. It would not be unreasonable to think that legislation could delay the introduction of autonomous vehicles by months or even years.
Here are some legislative and regulatory issues that have recently surfaced:
- The National Highway Traffic Safety Administration (NHTSA) has proposed installing vehicle-to-vehicle information sharing in all cars. With this technology, information about hazards and accidents would be instantly shared with nearby vehicles to improve safety.
- The National Transportation Safety Board (NTSB) and the NHTSA seem to be vying for jurisdiction. The first Tesla fatality was investigated by the NHTSA, which placed full blame on the driver. The NTSB published a contradictory finding and subsequently took control of the investigation into the most recent fatality, eventually removing Tesla from it.
- USA Today published an opinion piece titled, “After tragic self-driving Uber accident, the government needs to set safety standards,” demanding that the government begin taking a much more proactive role in moving toward legislation such as requiring exhaustive tests in controlled environments before allowing autonomous vehicles on public streets.
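As a rough illustration of the first item above, a vehicle-to-vehicle hazard broadcast might look like the sketch below. Real deployments use the SAE J2735 message set over DSRC or C-V2X radios; the message fields, names and JSON encoding here are purely illustrative assumptions for this article.

```python
# Minimal sketch of a vehicle-to-vehicle hazard broadcast message.
# Real V2V systems use the SAE J2735 message set over DSRC/C-V2X;
# these fields and the JSON encoding are illustrative only.
import json
from dataclasses import dataclass, asdict

@dataclass
class HazardMessage:
    vehicle_id: str   # pseudonymous sender ID (hypothetical field)
    lat: float        # hazard position, decimal degrees
    lon: float
    hazard: str       # e.g. "hard_braking", "collision", "obstacle"
    timestamp: float  # seconds since the epoch

def encode(msg: HazardMessage) -> bytes:
    """Serialize a hazard message for broadcast to nearby vehicles."""
    return json.dumps(asdict(msg)).encode("utf-8")

def decode(payload: bytes) -> HazardMessage:
    """Rebuild a hazard message from a received broadcast payload."""
    return HazardMessage(**json.loads(payload.decode("utf-8")))

msg = HazardMessage("veh-042", 33.4484, -112.0740,
                    "hard_braking", 1525000000.0)
assert decode(encode(msg)) == msg  # message round-trips intact
```

The point of the sketch is simply that a hazard report is small and structured; the hard problems, such as latency, radio range, authentication and driver privacy, are exactly what NHTSA rulemaking would have to settle.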
Where to Go from Here?
Clearly, the environment around autonomous cars is in flux. Government policymakers will unquestionably move forward with legislation at a faster pace. Coming legislation will likely include collaboration with industry representatives, which is a good thing.
From an engineering and development standpoint, it still makes sense to understand, design for and meet the standards as they currently stand. It will also be critically and competitively important to react to changes in those standards as quickly as possible, even though there will likely be delays and back-to-the-drawing-board moments along the way.
The years preceding the introduction of the first widely available driverless vehicles will be chaotic. It will certainly be frustrating, require many redrafts and result in many late nights at the office. But it will also be exciting, rewarding and hopefully lucrative to all those who contribute to make it a reality.
Will those vehicles be perfect? Not a chance! Will they be better than we envision today? Almost certainly.