The first writing assignment of my general undergraduate seminar class focused on the legal and ethical ramifications of self-driving cars. I took this as an opportunity to develop my burgeoning awareness of the unhappy relationship between regulation and marketplaces and am happy with the result.

Here’s the essay prompt:

CSCE 481 - Long Assignment 1

Your goal in this assignment is to provide an executive summary of issues related to the operation and use of autonomous vehicles on roads. Assume that you have been asked by a state-level organization to provide a recommendation for how the state should proceed with allowing (or not), and to what level, the operation of autonomous vehicles on public streets. You may advocate for any range of solutions, from an outright ban to allowing unfettered access and experimentation, however your summary should correctly summarize the various pros and cons and give a well-reasoned recommendation in line with ethical guidelines.

My essay:

Executive Summary

Deregulating Driverless Vehicles

Only time and regulation stand between American roadways and the use of self-driving vehicles. Though subject to continued optimization, autonomous vehicle technology is already a reality. In some jurisdictions, companies have cooperated with regulators to provide for the introduction of self-driving cars, which have prompted concerns regarding safety, liability, and privacy. However, the role of government in developing solutions to these problems should be minimized. Attempting to control the use of autonomous vehicles would prove counterproductive by delaying safety, misdirecting liability, and compromising privacy. Regulators should handle autonomous vehicles similarly to standard driver-dependent vehicles whenever possible, and allow consumers to determine the value of this potentially life-saving technology [8].


In America, traffic accidents kill 32,000 people annually [5]. Autonomous vehicles have the potential to significantly reduce this toll by minimizing human error, which is estimated to be the primary factor in 90 percent of all car crashes [3]. Incremental advancements in automated driving have already shown promise in reducing accidents. For example, Volvo cars equipped with a computerized safety system have had significantly fewer property damage claims than those without it [1]. It is true, however, that computerized vehicles cannot prepare for every scenario, and in some cases human judgment outperforms the rigidity of computer processes. To account for these scenarios, automated cars typically allow for driver intervention at any moment; but considering today’s high fatality rates, it is unreasonable to demand perfection.

Given the current viability of autonomous vehicles—Google’s self-driving cars have clocked 700,000 crash-free miles on California roads—their safety should be judged by citizens rather than by regulators, who may be susceptible to corporate pressures [2]. Everyday drivers, having the greatest vested interest in their own safety and in the consequences of their actions, should be permitted to choose the systems with which they feel most comfortable. Similarly, vehicle manufacturers, concerned with their reputations, liabilities, and sales and facing consumers skeptical of a new technology, are naturally incentivized to prioritize safety. As numerous corporations have already begun testing self-driving cars, competition among them and with standard car models can further be relied upon to encourage transparency, security, and affordability [3]. The producers of autonomous cars have already proven eager to outdo one another in advertising honest safety statistics.


In developing self-driving vehicles, engineers face an ethical dilemma: they must consider whether self-driving vehicles should be permitted to break traffic laws when a typical driver would. For example, engineers may want to program a self-driving car to safely but illegally drift out of its lane to avoid an obstacle instead of braking and risking being rear-ended. The legal solution to this dilemma is simply to treat autonomous cars like regular cars. The behavior of cars should be regulated not according to how they are designed or programmed, but according to their performance. In choosing to program an autonomous car to break basic traffic laws, as in the example above, engineers are not violating ethical guidelines, because they are neither breaking laws themselves nor causing harm, but rather acting to avert harm according to their best judgment. If a traffic safety officer were to witness and disapprove of a vehicle’s actions, the manufacturer could be held liable at that time; but in the scenario given, an officer would commonly find nothing wrong. Mandating legal compliance in the production stage, besides being costly and inefficient, would make roads less safe [4].


Liability for damages caused by self-driving cars and their owners poses another question. In an automobile accident, current laws typically hold the manufacturer or the driver liable according to which was responsible. This is appropriate. Such laws may require clarification to cover autonomous vehicles, but not significant modification. Furthermore, the data stored by self-driving cars would typically provide clear evidence of which party was at fault [6]. Insurance companies should be permitted to extend protection to both drivers and manufacturers. Consumers would weigh their potential liability for damages and would make contracts and choose vehicles accordingly. Similarly, vehicle manufacturers would incorporate perceived liability into their pricing while also working to remain competitive. In this way manufacturers, drivers, and insurers would establish an equilibrium in managing responsibility for autonomous-car-induced damage under essentially unchanged laws. Attempting to redirect liability would upset this balance toward one or two of these parties, likely at the overall expense of consumers.


Regulators must also consider privacy in autonomous vehicles. Like modern cellphones, self-driving cars typically utilize and store sensitive information such as location history, calendar entries, and various other details. It would be easy for government agencies to justify mandating access to this data in the interest of traffic safety or crime control. However, such intrusion could be perceived as a breach of Fourth Amendment rights and may lead to excessive bureaucracy. Autonomous car data should be private, accessible to the government only via a warrant. Whether a manufacturer or third party may access the data should be determined contractually by the manufacturer and consumer as part of the initial sale of the vehicle.

A privacy-related concern is the potential hacking of autonomous car systems. While ominous to consider, it is important to note that it is already possible to hack cars, airplanes, ships, and other far more critical computer systems. Hacking is not a new problem, and hackers targeting autonomous cars should be dealt with like hackers of any other computer system. In the summer of 2015, members of Congress proposed legislation requiring autonomous car manufacturers to implement certain security measures against hackers [3]. This legislation is not only unnecessary but also potentially detrimental, as it may stifle innovative competition for superior security standards. The cost of complying with such regulation is an obstacle to smaller manufacturers, while large companies will simply pass the cost on to consumers through higher prices. Consider that although these proposed laws have not been implemented, competing automobile manufacturers are clearly already incentivized to keep their cars’ computer systems highly secure.


While the dangers associated with self-driving cars are mostly foreseeable, the potential benefits are exciting and unknown. Fleets of autonomous vehicles may save untold economic resources through their efficiency, remove the need for individual car ownership, and save countless lives [7]. Overregulation would inevitably suppress some of the unknown developments that may arise from such revolutionary technology while not necessarily solving any problems. States should foster the adoption of autonomous vehicles through minimal intervention, thereby creating conditions for safety and prosperity.


[1] Bilger, Burkhard. “Auto Correct: Has the Self-driving Car at Last Arrived?” The New Yorker. N.p., 25 Nov. 2013. Web. 27 Sept. 2015.

[2] “Google’s Self-Driving Cars Approach 700,000 Miles of Crash-Free Driving.” Right Side News. N.p., n.d. Web. 27 Sept. 2015.

[3] Keating, Lauren. “The Driverless Car Debate: How Safe Are Autonomous Vehicles?” Tech Times. N.p., 28 July 2015. Web. 26 Sept. 2015.

[4] Lin, Patrick. “The Ethics of Autonomous Cars.” The Atlantic. N.p., 8 Oct. 2013. Web. 26 Sept. 2015.

[5] Lin, Patrick. “The Ethics of Saving Lives With Autonomous Cars Is Far Murkier Than You Think.” N.p., 30 July 2013. Web. 27 Sept. 2015.

[6] Simonite, Tom. “Data Shows Google’s Robot Cars Are Smoother, Safer Drivers Than You or I.” MIT Technology Review. N.p., 25 Oct. 2013. Web. 26 Sept. 2015.

[7] Tannert, Chuck. “Inside the Road Revolution.” Fast Company. N.p., 8 Jan. 2014. Web. 26 Sept. 2015.

[8] Thierer, Adam D., and Ryan Hagemann. “Removing Roadblocks to Intelligent Vehicles and Driverless Cars.” SSRN Journal (2014): n. pag. Mercatus Center. George Mason University, Sept. 2014. Web. 26 Sept. 2015.