Jul 14, 2016 | 09:15 GMT

Tapping the Brakes on Automated Vehicles

A fatal crash involving a Tesla car being operated in autopilot mode could lead to some second-guessing about autonomous cars and may slow the pace of the technology's adoption.
(SPENCER PLATT/Getty Images)

Engineering advances have put automated vehicles on an accelerating path toward commercial availability and, eventually, wider use. But a recent fatal accident involving a car using semi-autonomous technology could slow the technology's progress toward that future.

Automated vehicles' promise for improving fuel efficiency, increasing road safety and relieving traffic congestion has been well documented. The realization of those benefits, however, rests not only on technological advancement but also on public acceptance. For driverless cars to reach their full potential, especially in improving vehicle safety, they must first achieve high levels of market penetration.

Though certain sectors, such as ride-sharing operations, could be early adopters of these cars, widespread use will not come until public trust is gained. And while technological failures are a normal part of any development process, high-profile accidents involving vehicles using semi-automated technologies, like three recent ones in which drivers used (or claimed to have used) Tesla's autopilot mode, will set back the timetable for complete public acceptance.

As automated and semi-automated vehicle technologies continue to develop and improve, it is important to keep clear the distinction between the two. Driver-assist features, including automatic emergency braking, adaptive cruise control and active lane assist — many of which are already in use and some of which are due to become available in new vehicle models in the next couple of years — still require driver interaction and attention. Tesla's autopilot mode, which was active in at least two accidents within the past few months, is a similar driver-assist technology: it automates parts of the driving task but does not make the car fully autonomous.

Tesla's autopilot technology is still undergoing beta testing, and to use the feature, vehicle operators must activate it and acknowledge the requirements and risks involved. The autopilot's computer-driven detection system sees, interprets and classifies objects, and it then uses that information to guide the car. But it still requires drivers to stay attentive and be ready to take control. In the case of a fatal collision in Florida, it appears that the computerized system did not differentiate the white trailer of a turning truck from the bright background of a sunny day, and both the driver and system failed to adjust to avoid a collision.

Tesla is not alone in developing and deploying such systems for passenger vehicles. BMW, General Motors Co. and Mercedes-Benz all plan to include hands-free or active lane assist options in future models. And as the technology continues to develop and costs fall, such features will become even more common. In theory, they should make driving safer; indeed, safety is one of the technologies' chief priorities and selling points. Once the technology matures, automation combined with vehicle-to-vehicle or vehicle-to-infrastructure communication should improve safety by removing human error from the equation. Tesla's autopilot feature logged some 210 million driving kilometers (130 million miles) before its first fatal crash, compared with the 151 million kilometers driven, on average, between fatal accidents in vehicles guided only by people (although, admittedly, human-driven vehicles operate across a much wider array of conditions). But as long as the Tesla system and others like it remain semi-autonomous, another factor will remain constant: the human one.
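The mileage comparison above is easier to judge when both records are converted to a common rate. The short sketch below, purely illustrative and written in Python, uses only the figures cited in this article (one fatal crash over roughly 210 million autopilot kilometers, versus one fatal accident per 151 million human-driven kilometers) and restates both as fatalities per billion kilometers:

```python
# Illustrative rate comparison using the figures cited in the article.
# Assumption: a single fatal crash over Tesla's autopilot record to date.

AUTOPILOT_KM_PER_FATALITY = 210_000_000   # ~130 million miles, one fatal crash
HUMAN_KM_PER_FATALITY = 151_000_000       # average km driven between fatal accidents

def fatalities_per_billion_km(km_per_fatality: float) -> float:
    """Convert 'kilometers per fatal crash' into fatalities per billion km."""
    return 1_000_000_000 / km_per_fatality

autopilot_rate = fatalities_per_billion_km(AUTOPILOT_KM_PER_FATALITY)
human_rate = fatalities_per_billion_km(HUMAN_KM_PER_FATALITY)

print(f"Autopilot:  {autopilot_rate:.2f} fatalities per billion km")
print(f"Human-only: {human_rate:.2f} fatalities per billion km")
```

By this crude measure the autopilot record looks modestly better, though, as the article notes, the two samples are not driven under comparable conditions.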

These features, whether emergency brake assist, lane assist or highway autopilot, still require vehicle operators to pay attention to the road. But whether because drivers misunderstand the systems' limits or because ambiguous marketing inflates their expectations, that is not always the case. Semi-autonomous systems or not, distracted driving remains a serious problem. People can misuse or abuse the technology, which in turn can lead to accidents that the technology did not cause and cannot prevent. Further technological advances, however, offer likely solutions, or at least partial ones. Driver-state sensing systems inside the car can help the automated system decide whether to hand control back to the driver, or even to bring the car to a safe stop if it detects that the driver is not paying attention.
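The driver-state logic described above can be sketched as a simple decision rule. The names, states and the grace-period threshold below are illustrative assumptions for this article, not any manufacturer's actual system:

```python
# Hypothetical sketch of driver-state sensing logic: an in-cabin monitor
# reports whether the driver is attentive, and the automated system picks
# a response. All names and thresholds here are assumptions.

from enum import Enum, auto

class Action(Enum):
    HAND_BACK_CONTROL = auto()  # driver is ready to take over
    STAY_AUTONOMOUS = auto()    # keep the automated system in control
    SAFE_STOP = auto()          # bring the vehicle to a controlled stop

def decide(driver_attentive: bool, seconds_inattentive: float,
           grace_period: float = 5.0) -> Action:
    """Choose a response based on a driver-monitoring signal."""
    if driver_attentive:
        # Driver is watching the road: a normal handover is possible.
        return Action.HAND_BACK_CONTROL
    if seconds_inattentive <= grace_period:
        # Briefly distracted: the automated system stays in control.
        return Action.STAY_AUTONOMOUS
    # Persistently inattentive: do not hand over; stop safely instead.
    return Action.SAFE_STOP
```

The key design point is the asymmetry: control is returned to a human only when the monitor confirms attention, and sustained inattention escalates to a safe stop rather than a handover.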

Developers and experts do not agree on the best way to introduce the technology to the general public, and many debate whether incremental deployment, as practiced by Tesla, is the right strategy. Google has opted instead to create and test fully automated vehicles before allowing their unfettered use. Though this approach may mean a slower rollout and more time to clear technological hurdles, it eliminates the human element and the potential for misuse.

Change Is Difficult

Automated driving has great potential to change transportation as we know it, but as noted, realizing its full benefits will require high levels of adoption. There have been significant advances both in the technology required for automation and in the communication among vehicles and devices needed to implement it successfully. But the greatest roadblock — public perception — has been slow to fall. Accidents such as those involving Tesla's vehicles, even if caused by human error rather than technological glitches, could hamper future adoption. Yet even if the difficult problems of human acceptance and compliance slow the drive toward widespread vehicle automation, they will not halt it completely. Niche sectors could adopt the technologies early, and car companies are evaluating new business models that assume at least some degree of automation. The automotive industry could also diversify further as automation allows vehicles to be tailored to a driver's needs and wants at any given time or occasion.

In the meantime, the automotive and computing sectors will take steps to combat public skepticism, touting the safety of their own systems over traditional, human-run ones. That is why, during the development process, emphasis is placed on the safety of the operation of the computing and detection systems themselves and on including numerous planned redundancies. While technological hurdles remain, especially in the development of fully automated systems, the erosion of public trust by highly publicized accidents is hardly just a bump in the road — it could actually slow down the pace of the technology's adoption.

Copyright © Stratfor Enterprises, LLC. All rights reserved.
