In case you missed it, news of the advance of driverless transportation technology abounds. Amazon made its first drone delivery on December 14, 2016, delivering an Amazon Fire TV and a bag of microwave popcorn to a customer in the United Kingdom (where the autonomous delivery technology is currently being tested). Domino’s Pizza has been testing delivery of hot pizza and cold drinks by autonomous delivery cart since spring 2016. Google’s self-driving car project has driven more than two million miles in testing. Tesla Motors’ Autopilot, a semi-autonomous driving mode, is already available in current models. And Otto, the autonomous heavy-truck startup recently acquired by Uber, completed its first delivery in October 2016, a 120-mile journey from a weigh station in Fort Collins, Colorado to Colorado Springs.
While some of these announcements may have been little more than stunts, and commercial availability may still be years away, some components of automated transportation technology are already on the market, and vehicles that can fully control their own operation without driver input at least some of the time are nearing widespread availability. These advances promise to increase safety by eliminating human error and to transform the transportation company of tomorrow. Yet the state and federal safety standards needed to bring these technologies into everyday use lag behind their development. Recognizing this gap, in September 2016 the U.S. Department of Transportation released the Federal Automated Vehicles Policy (“FAVP”), the first official guidance from the U.S. DOT (and an update to a previous policy statement from the National Highway Traffic Safety Administration (“NHTSA”)) to vehicle manufacturers, States, and other stakeholders on how autonomous vehicles may be regulated.
Federal Automated Vehicles Policy
The Federal Automated Vehicles Policy serves two primary purposes. The first is to provide a framework upon which future rulemaking will be based, including best-practice guidance to companies designing and manufacturing autonomous vehicle technology and a preview of the process by which safety standards will be developed. The second is to provide a model state policy, intended to create a consistent national framework for vehicle licensing, traffic rules, and liability and insurance requirements for self-driving vehicles, avoiding the patchwork of inconsistent rules that presently exists.
Because the FAVP is merely one step toward developing standards and regulations for autonomous vehicles, it does not set out any specific policy determinations. It does, however, set forth specific considerations for policymakers going forward. Of these, the considerations most pertinent to operators are those pertaining to safety. For now, the FAVP takes a largely hands-off approach, requiring each manufacturer to provide its own safety assessment of how the guidance has been followed across 15 categories, including data recording and sharing, privacy, system safety, vehicle cybersecurity, human-machine interface, crashworthiness, post-crash behavior, ethical considerations, and object and event detection and response, among others. By leaving these issues largely to the automakers, at least for now, the federal government is missing an opportunity to resolve one of the greater liability and safety concerns for automakers and passengers of autonomous vehicles: ethical decisions about how to handle emergency situations.
Ethical Considerations and Federal Guidance
Among the safety considerations most relevant to a motor carrier contemplating self-driving vehicle technology is the manufacturer’s treatment of ethical questions. The FAVP explicitly recognizes that a driver may face extremely difficult moral decisions when encountering an emergency. These dilemmas, explained below, present one of the most significant hurdles to bringing truly autonomous vehicles into widespread use.
These dilemmas are most frequently illustrated through the thought experiment known as the “Trolley Problem.” In the original Trolley Problem, a trolley is bearing down on a group of jaywalking pedestrians on the tracks ahead. Continuing forward will result in the certain death of five people. By throwing a switch, the trolley operator can divert the trolley onto an adjacent track where a single person, otherwise out of harm’s way, is standing. Deciding whether to sacrifice one innocent life to save a greater number may seem like a purely utilitarian exercise. From a moral and legal standpoint, however, the decision is not as simple as it may appear.
Setting aside purely moral questions, the legal ramifications of this encounter are entirely different for a human driver than for a machine. A human driver who permits the vehicle to continue on its path will cause greater injuries, but under the sudden emergency doctrine, the operator may be fully justified in his or her behavior. Even if not fully justified, indecision is more likely to be viewed as negligence than as intentional conduct. If the driver instead diverts the vehicle to spare lives, the driver has committed an intentional act with knowledge that it will likely result in death. There are no clear answers as to what a person should do under these circumstances, but because inaction is an option, a person who continues forward may be excused for that choice.
Unlike a person, an autonomous vehicle makes deliberate decisions dictated by its programming. Whether the vehicle should harm a smaller number of innocent bystanders rather than a greater number of potentially at-fault parties, or protect its occupants over all others, are design decisions that will carry great significance in litigation.
A manufacturer could potentially permit the operator to elect how the vehicle will handle these decisions, passing any tort liability arising from the consequences on to the operator. Indeed, the FAVP addresses this possibility by suggesting that manufacturers design the technology to make these decisions consciously and deliberately, using input from federal and state regulators, drivers, passengers, and others. However, the FAVP provides no guidance on how the decisions should be made, leaving manufacturers to face potential liability for their programming.
Much of the current debate over whether and how federal regulation should exist focuses on whether federal regulators should set minimum design standards for autonomous vehicles and grant immunity to manufacturers whose products conform to those standards, or instead permit courts and civil liability to shape design decisions. While minimum design standards coupled with immunity may be unpalatable to federal regulators for many aspects of vehicle design and manufacturing, federally mandated protocols for how automated systems handle emergency situations would benefit vehicle operators, manufacturers, and the public. Uniform standards for vehicle decision making would allow motorists to better predict how robotically operated vehicles will behave, providing the best opportunity for accident avoidance and enhancing the safety of all. They would also give operators, manufacturers, and insurers greater predictability in litigation, and permit state lawmakers to better assess financial responsibility requirements for motorists and manufacturers.
Levels of Automation and the Current State of Autonomous Heavy Trucks
As touched on above, not all automated vehicle technologies are fully autonomous. The FAVP therefore adopts levels of automation that classify technologies based on the degree of driver involvement in operating the vehicle. These levels, based upon definitions created by the Society of Automotive Engineers (SAE), update the four levels of automation set out in NHTSA’s 2013 policy statement.
The current six levels range from Level Zero, with no automation, through driver assistance (Level One), partial automation (Level Two), conditional automation (Level Three), and high automation (Level Four), to Level Five, a fully autonomous vehicle requiring no driver control.
At present, most autonomous heavy truck designs fit within Level Three, which permits some autonomous driving but requires a driver who can resume control over some driving tasks. Uber’s self-driving truck technology, which can be retrofitted to an existing tractor, and Volvo’s and Freightliner’s self-driving trucks all permit the vehicle to assume control of the driving task, but only on the highway. A human must still be in the driver’s seat.
These technologies offer real safety advantages: while not fully automated for all driving tasks, they eliminate human error for the majority of miles an over-the-road commercial vehicle is operated. This, in turn, reduces the potential for collisions resulting from driver fatigue, inattention, speeding, or miscalculation of stopping distances, for example. Automated commercial trucks also offer potential energy and cost savings, as platooning vehicles can cut fuel consumption by up to 6%, and more uniform driving techniques may reduce vehicle maintenance and repair costs.
Despite these advantages, Level Three autonomous trucks still require a properly licensed human driver who must monitor the vehicle’s operation, remain attentive to the road, and be ready to take over when the vehicle requires it. This presents its own unique set of risks: drivers who are less engaged in the driving task while automated systems are in use have been found to be less able and ready to take over when the vehicle reaches its autonomous limits. Thus, when Level Three heavy trucks become widely available, motor carriers will face new challenges in ensuring that their drivers are properly trained in the technology and that they remain alert to both the roadway and the vehicle during periods of autonomous operation.
Regulatory Concerns for Commercial Motor Carriers
In addition to purely safety-oriented concerns, the use of automated commercial vehicles in interstate commerce presents additional regulatory questions. Because the present debate centers on standards for manufacturers, there are as yet no safety rules guiding the incorporation of automated systems into motor carrier operations. Absent new rules, motor carriers will likely be free to adopt autonomous vehicle technology but will remain subject to existing limitations on driver qualification, fitness, and hours of service. These critical commercial driver safety standards focus primarily on the human element of driving, and rightly so, given that human error is such a prominent cause of motor vehicle collisions. As vehicle technologies improve, however, and Level Four and Level Five commercial trucks become available in the years ahead, it will be important for the U.S. DOT and FMCSA to develop implementation standards for motor carriers that acknowledge the decreased role of humans in vehicle operation by loosening fitness requirements. This would permit motor carriers to lengthen driving shifts and deliver freight more expeditiously, as well as expand the pool of available drivers, potentially alleviating the current driver shortage.