
Oklahoma Bar Journal

Autonomous Vehicles and the Trolley Problem

An Ethical and Liability Conundrum

By Spencer C. Pittman and Mbilike M. Mwafulirwa

Today we're looking at science fiction becoming tomorrow's reality - the self-driving car.
Gov. Jerry Brown1

Pretend for a moment you are a trolley driver. You round a bend and see five men repairing the track you are driving on. Your first instinct is to apply the trolley’s brakes, but you quickly discover the brakes do not work. You suddenly see a break in the track to the right. You could turn the trolley right and avoid the five men ahead of you. Luck, however, is not on your side. Another workman is working on that side of the track as well. Due to the steep sides, none of the workmen can get off the track in time to avoid your trolley. You quickly glance to your left, hoping for some reprieve. Instantly, you see a worn path that leads to a dead end, a sizable barrier and, most certainly, your own end.

Your options so far are threefold: 1) continue on your charted path and kill the five workmen; 2) turn the trolley right and kill the single workman; or, finally, 3) turn the trolley to the left, to your own demise. Therein lies the conundrum: Are you under a moral duty to turn the trolley?2 If so, which way? What about liability? Who should be responsible? This article discusses and analyzes the trolley problem’s application to autonomous vehicles and attempts to answer the vexing liability questions – who should be liable and on what terms?
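
For readers who think in code, the structure of the choice can be made concrete. The sketch below (in Python) is purely illustrative: the option names and casualty counts simply restate the hypothetical, and the “minimize total deaths” policy is only one contested answer, not a position this article takes.

    from dataclasses import dataclass

    # Illustrative only: the three trolley options restated as data.
    @dataclass
    class Option:
        action: str
        workmen_killed: int
        driver_killed: bool

    OPTIONS = [
        Option("continue straight", workmen_killed=5, driver_killed=False),
        Option("turn right", workmen_killed=1, driver_killed=False),
        Option("turn left into the barrier", workmen_killed=0, driver_killed=True),
    ]

    # One contested policy among many: minimize the total loss of life.
    def total_deaths(opt: Option) -> int:
        return opt.workmen_killed + int(opt.driver_killed)

    fewest = min(total_deaths(o) for o in OPTIONS)
    print([o.action for o in OPTIONS if total_deaths(o) == fewest])
    # -> ['turn right', 'turn left into the barrier']

Notably, even a pure death-minimization policy only narrows the field: turning right and self-sacrifice each cost one life, so the objective function alone cannot choose between the workman and the driver. Some further value judgment is unavoidable, and that residual judgment is precisely what the liability analysis below is about.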

For well over half a century, the trolley problem (in various forms) has been a ripe subject for philosophical debate. Recent advances in technology, however, make it a question for our time. Uber, for example, has already started testing self-driving cars in Pittsburgh.3 Industry experts predict that as many as 10 million self-driving vehicles will be on American roads by 2020.4 Assuming that an autonomous vehicle is 100 percent self-driven, with no user operation or control, experts have recognized the potential application of the “trolley problem” to the inherent functionality of self-driving cars, especially if the car is placed in a situation where it must make a judgment call – to risk taking your life or someone else’s.5 Again, who should be liable and on what terms?

AUTONOMOUS VEHICLES AND THE LAW: INEVITABLE UNCOUPLING
We are living in the age of artificial intelligence (AI). AI is the sum of efforts “to build intelligent entities.”6 Intelligent entities are made by “creating machines with one or more of the following abilities: the ability to use language; to form concepts; to solve problems now solvable only by humans; to improve themselves.”7 These are machines that operate independently of humans. For so long, the motorcar and its driver have been indispensable partners. This bond, however, is undergoing a conscious uncoupling. The automobile industry is preparing to roll out self-driving vehicles.8

A number of states have enacted legislation to regulate self-driving cars. Nevada took the lead in 2011 with Assembly Bill 511. Nevada defines an “autonomous vehicle” as “a motor vehicle that uses artificial intelligence, sensors and global positioning system coordinates to drive itself without the active intervention of a human operator.”9 However, vehicles “with a safety system or driver assistance system, including . . . a system to provide electronic blind spot assistance, crash avoidance, emergency braking, parking assistance, adaptive cruise control, lane keep assistance, lane departure warnings” are not considered self-driving cars.10 For liability purposes, the car’s driver is still considered the operator of the vehicle so long as he “causes the autonomous vehicle to engage, regardless of whether the person is physically present in the vehicle while it is engaged.”11 Other states are quickly enacting similar autonomous vehicle laws in anticipation of self-driving cars.12 Oklahoma, however, has no legislative scheme in place for autonomous cars.13 Short of comprehensive legislative intervention, the common law will have to fill the void and deal with autonomous vehicles.14
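
To make the statutory line concrete, the following sketch loosely encodes Nevada’s definition and its carve-out. It is a simplification under stated assumptions: the feature labels are hypothetical shorthand, not statutory terms of art, and real classification would turn on the facts of each system.

    # Hypothetical shorthand for the assistance systems the Nevada
    # regulation expressly excludes from the definition.
    ASSISTANCE_ONLY = {
        "blind_spot_assistance", "crash_avoidance", "emergency_braking",
        "parking_assistance", "adaptive_cruise_control",
        "lane_keep_assistance", "lane_departure_warning",
    }

    def classify(drives_itself_without_human: bool, systems: set) -> str:
        """Schematic version of Nev. A.B. 511: a vehicle is 'autonomous'
        only if it drives itself without active human intervention;
        assistance features alone do not qualify."""
        if drives_itself_without_human:
            return "autonomous vehicle"
        if systems & ASSISTANCE_ONLY:
            return "driver-assistance vehicle (outside the definition)"
        return "conventional vehicle"

    print(classify(False, {"adaptive_cruise_control"}))  # outside the definition
    print(classify(True, set()))                         # autonomous vehicle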

THE FORESEEABLE CONUNDRUM INHERENT IN SELF-DRIVING CARS
Recall our hypothetical: the self-driving car is involved in an accident. A number of vexing issues inevitably arise: 1) whom should the car injure – you (the passenger) or them (innocent victims)? and 2) who should be liable and on what terms? We delve into these issues.

You or Them?

What should the autonomous vehicle be programmed to do? Who should be responsible for the programming? Should the driver of the autonomous vehicle have a part in the decision? Although the common law does not specifically answer these questions, it has over the years laid down some bright lines. Sir William Blackstone, for example, observed that when a person is faced with a choice of saving himself at the expense of another innocent person, that person “ought rather . . . die himself than escape by the murder of an innocent.”15 Other scholars, on the other hand, have suggested that flipping a coin would provide a better and fairer answer.16 In fact, the coin flip (i.e., leaving the fatal choice to chance alone) has been suggested as a logical extension and expression of natural law, since the ultimate choice forces “God to ‘show his hand’” and forces “Him to reveal His intentions for the future.”17 In Oklahoma, the common law presently appears to favor that the loss fall on oneself.18 Interestingly, in accord with the law’s position, experience has also shown that a good number of people, given the choice, would favor self-sacrifice.19 To expect the self-driving vehicle to make the election itself would provoke more questions than answers – for example, what objective criteria would the machine use when preferring one life over another? Maybe Isaac Asimov, the renowned robotics writer, had the solution all along. In his famous book, I, Robot, Mr. Asimov coined the “Three Laws of Robotics” to govern robots. According to those rules:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.20

If followed, these rules foreclose any possibility of the self-driving vehicle electing to take human life, unless specifically programmed to do so.21
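
One way to see why is to encode Asimov’s strict ordering as a filter over candidate actions. The sketch below is a toy model: the boolean attributes stand in for harm prediction, which in a real vehicle is the genuinely hard, unsolved part.

    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class Action:
        name: str
        harms_human: bool     # First Law (including harm through inaction)
        disobeys_order: bool  # Second Law
        harms_robot: bool     # Third Law

    def choose(actions: List[Action]) -> Optional[Action]:
        # The First Law is absolute: discard anything that harms a human.
        lawful = [a for a in actions if not a.harms_human]
        if not lawful:
            return None  # every option harms someone
        # Among lawful actions, prefer obedience (Second Law), then
        # self-preservation (Third Law); False sorts before True.
        return min(lawful, key=lambda a: (a.disobeys_order, a.harms_robot))

    trolley = [
        Action("continue straight", True, False, False),
        Action("turn right", True, False, False),
        Action("self-sacrifice", True, False, True),
    ]
    print(choose(trolley))  # None

Applied to the trolley scenario, every available action harms a human, so the filter returns no permissible action at all: Asimov’s ordering refuses to rank human lives rather than resolving the dilemma.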

Who Should Be Responsible?

With no continuing user control over the operation of the vehicle, which then takes the life of another, whom should the law hold responsible for the car’s choices? Three possibilities emerge: 1) the owner of the vehicle, 2) someone on the manufacturing end, or 3) the autonomous vehicle itself. As to owner liability, if the owner of the vehicle plays a role in initiating the operation of the vehicle (whether by pressing a button, voice activation or paying for the use of the car), then he arguably should bear responsibility for the consequences. The competing view is that the owner of the vehicle, as a mere passenger in a self-operated vehicle, should have no liability. After all, he or she did not decide the fate of any person, let alone operate a vehicle to carry out that fate. Liability on the manufacturing end of the spectrum, on the other hand, supposes that the automated vehicle will have a certain “code” or algorithm instructing the autonomous vehicle which decision to make.22 Thus, the coder will be empowered to choose whose life should be taken – yours or theirs. Should the coder write the algorithm to minimize the loss of human life? Should the coder risk the lives of the elderly over the young? And if the coder does make the choice, should liability attach? We explore these themes.

Owner Liability. Oklahoma law provides that civil tort liability can be premised on either a fault-based or a no-fault liability model.23 On a fault-based model, we can rule out intentional act liability: because the vehicle is self-operated, it is difficult to envision that the nonparticipant owner deliberately chose to run over a specific person, let alone that he was substantially certain he would kill someone by just being a passenger in his car.24

A negligence theory, however, might provide a sounder basis for liability. The default common law liability-limitation rule is sic utere tuo ut alienum non laedas (the rightful and lawful use and enjoyment of one’s own property cannot be a legal wrong to another absent malice or negligence).25 The strength of that proposition is underscored in situations where the owner of the autonomous vehicle is a mere passenger who does nothing to initiate or continue the operation of the vehicle. The analysis is, however, somewhat different if you assume two facts at this juncture: 1) that the autonomous vehicle is programmed to save a life (yours) but to take that of another; and 2) the owner of the vehicle played some role in initiating the operation of the vehicle (either by pressing a button, voice activation or paying for the use of the car in order to initiate it). Under these circumstances, the owner cannot so easily disclaim liability. In negligence claims, the “existence of a duty of care is the threshold question.”26 Oklahoma law imposes an affirmative duty on every person not to engage in any conduct that might injure another person or the property of another.27 Specifically in relation to vehicles, “drivers have a duty to operate their vehicle with due care.”28 In fact, the law generally imposes a duty “where the person’s own affirmative act created an unreasonably high risk that harm would occur to the injured party.”29 Armed with the knowledge that the self-automated car is deliberately predisposed to killing third parties in the event of an accident, the person placing that risk on others should have a duty to prevent that outcome. A failure to do so should be sufficient to make out a prima facie case for negligence.
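
Reduced to a checklist, the owner-liability theory sketched in this paragraph looks roughly like the following. This is schematic only; whether each element is actually satisfied on real facts is the contested legal question, and the variable names are hypothetical.

    def prima_facie_negligence(duty: bool, breach: bool,
                               causation: bool, damages: bool) -> bool:
        # The classic elements of a negligence claim; all must be present.
        return duty and breach and causation and damages

    # The paragraph's two assumed facts:
    owner_initiated = True           # button press, voice command or payment
    known_lethal_programming = True  # car predisposed to injure third parties

    duty = owner_initiated and known_lethal_programming  # risk-creating act
    breach = True     # owner failed to prevent the known outcome
    causation = True  # the programmed choice produced the injury
    damages = True    # a third party was killed or injured

    print(prima_facie_negligence(duty, breach, causation, damages))  # True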

Manufacturer Liability. The base assumption in this context is that the car’s manufacturer (or the programmer, either as an employee or an independent contractor for the manufacturer) has programmed the vehicle to deliberately injure third parties (as opposed to its operator) in the event of an accident. Against that background, Oklahoma law generally imposes a duty “where the person’s own affirmative act created an unreasonably high risk that harm would occur to the injured party.”30 Under these circumstances, the manufacturer would have a duty, as a matter of law, to prevent that outcome.31 The safest option for the manufacturer would be to program the vehicle to injure its user.32

Strict liability offers an additional basis for manufacturer liability. “[T]hough strict liability does eliminate negligence as a basis for recovery, it does not dispense with the requirement of proximate cause.”33 The prima facie case elements for strict liability in Oklahoma are threefold: “(1) a defect existed in the product at the time it left the manufacturer, retailer, or supplier’s control; (2) the defect made the product unreasonably dangerous; and (3) the defect in the product was the cause of the injury.”34 Oklahoma’s strict liability “doctrine also applies to bystanders.”35 A strict liability lawsuit has to be grounded on a claim that 1) the product had a manufacturing defect when it left the manufacturer, retailer or supplier; 2) the entire line of the product is defective (design defect); or 3) there was a warning defect.36 Regardless of the claim pursued, a plaintiff still must show, as a threshold matter, that the product was both defective and unreasonably dangerous.37 The requirement that a product has a defective condition and that it be unreasonably dangerous are “essentially synonymous.”38 As such, a product can be defective because it is unreasonably dangerous.39 Oklahoma applies the consumer expectation test from the Restatement of Torts to determine if a product is unreasonably dangerous.40 That test holds that a product is unreasonably dangerous if it is “dangerous to an extent beyond that which would be contemplated by the ordinary consumer who purchases it, with the ordinary knowledge common to the community as to its characteristics.”41 As applied to self-driving cars, there is a reasonable basis to find such vehicles to be unreasonably dangerous. Ordinary consumers will probably not know off-hand that self-driving cars in the streets are irreversibly disposed to killing them (as opposed to leaving it to chance) in the event of an accident. That would serve as an ideal predicate for a failure-to-warn manufacturer’s liability claim. After all, Oklahoma law imposes liability on manufacturers when they fail to provide adequate notice for the risks posed by their products.42 In addition, based on the preceding analysis, we have no doubt that a product liability claim based on either a manufacturing or product design defect would remain open to a plaintiff.43
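
The strict liability analysis has the same element-by-element structure. Below is a schematic rendering of Oklahoma’s three-part test with the consumer expectation standard folded in; all names are illustrative, and the numeric “danger” comparison merely dramatizes what is in reality a qualitative legal judgment.

    def unreasonably_dangerous(actual_danger: float,
                               ordinary_consumer_expectation: float) -> bool:
        # Consumer expectation test: dangerous beyond what the ordinary
        # purchaser, with ordinary community knowledge, would contemplate.
        return actual_danger > ordinary_consumer_expectation

    def strict_liability(defect_when_it_left_control: bool,
                         actual_danger: float,
                         ordinary_consumer_expectation: float,
                         defect_caused_injury: bool) -> bool:
        # Oklahoma's three elements: a defect at the time the product left
        # the defendant's control; the defect made the product
        # unreasonably dangerous; and the defect caused the injury.
        return (defect_when_it_left_control
                and unreasonably_dangerous(actual_danger,
                                           ordinary_consumer_expectation)
                and defect_caused_injury)

On the article’s facts, it is the undisclosed predisposition to sacrifice particular people that pushes the product’s danger past ordinary expectation, which is why the failure-to-warn theory fits most naturally.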

Suing the Self-Driving Car. The more advanced self-driving cars become – exercising uninhibited free judgment – the more compelling the argument for holding the car responsible becomes. In Sierra Club v. Morton,44 Justice Douglas considered a novel question: Should inanimate objects (like trees) have standing?45 Justice Douglas concluded that they should; his premise was straightforward:

Inanimate objects are sometimes parties in litigation. A ship has a legal personality, a fiction found useful for maritime purposes. The corporation sole – a creature of ecclesiastical law – is an acceptable adversary, and large fortunes ride on its cases. The ordinary corporation is a “person” for purposes of the adjudicatory processes, whether it represents proprietary, spiritual, aesthetic, or charitable causes.46

In Morton, Justice Douglas reasoned that legal personality should be extended to trees because he saw no notable difference between them and other inanimate objects, like corporations and ships, all of which bear litigation rights and burdens.47

Self-driving cars, like other inanimate objects, can also be sued directly. The common law’s approach to ships provides the closest analogy. The common law has long personified ships.48 Besides in rem actions, the common law “permits a salvage action to be brought in the name of the rescuing vessel.”49 In addition, in collision litigation, ships can sue and be sued directly.50 Likewise, the common law should afford self-driving cars legal personality like it does to other notable inanimate objects, like ships and corporations, so they can be sued directly.51 Indeed, other jurisdictions, like the European Union, are already adjusting their laws to give self-driving cars legal personality, so they can sue and be sued.52

Once it is accepted that the self-driving car has advanced consciousness and that it can make an independent judgment (choosing between multiple risks), there is room to accept that it can also make socially undesirable choices – i.e., make wrong choices or mistakes.53 This is important because in Oklahoma, at a bare minimum, “[e]very mistake involves an element of negligence, carelessness or fault.”54 That wrong choice could serve as the basis for liability, especially if it resulted in harm or injury to someone or his property interests.55

But who would represent the vehicle’s interest in litigation? Like the inanimate objects (corporations and trees) contemplated by Justice Douglas, in the event of a lawsuit against the car, “the voice of the existing beneficiaries of these . . . [inanimate] wonders should be heard.”56 This class of persons could include the owner or licensed user of the vehicle, and even an attorney provided by the self-driving car’s insurers.57

CONCLUSION

The ethical dilemma of the trolley problem will soon become a reality on many of America’s roadways – one that will challenge settled ethical expectations and civil liability rules. Before the advent of mass-scale operation of autonomous vehicles (likely just before 2020), a comprehensive regulatory scheme should be implemented. Otherwise, litigants will be forced to rely upon judge-made law and its incremental case-by-case development until this conundrum is addressed. Drawing from a survey of various other states’ laws, a comprehensive regulatory framework should, at a bare minimum, ensure that:

  1. Autonomous cars, like all other vehicles, obey all existing traffic laws;
  2. Any owner or operator of such a vehicle placed on a public road complies with the compulsory liability insurance laws;
  3. To the extent the vehicle can switch from operating 100 percent autonomously to accepting user input, the human operator holds a valid driver’s license (with an appropriate endorsement attained after showing competence in the operation of autonomous vehicles); and
  4. In the event of a detected error in the autonomous vehicle’s operating system, the car stops at the nearest safe point and ceases operation, resuming only if a licensed (nonimpaired) human operator is willing to assume manual control of the vehicle (see the sketch below).
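
The fourth recommendation describes, in effect, a small fail-safe state machine. The sketch below shows one way it might be structured; the states and checks are hypothetical, not drawn from any enacted statute.

    from enum import Enum, auto

    class Mode(Enum):
        AUTONOMOUS = auto()
        STOPPING = auto()   # proceeding to the nearest safe stopping point
        DISABLED = auto()   # operation ceased after a detected fault
        MANUAL = auto()     # a licensed human has assumed control

    def next_mode(mode: Mode, fault_detected: bool, at_safe_stop: bool,
                  licensed_unimpaired_driver: bool) -> Mode:
        if mode is Mode.AUTONOMOUS and fault_detected:
            return Mode.STOPPING
        if mode is Mode.STOPPING and at_safe_stop:
            return Mode.DISABLED
        if mode is Mode.DISABLED and licensed_unimpaired_driver:
            return Mode.MANUAL  # resume only under human control
        return mode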

ABOUT THE AUTHORS
Spencer C. Pittman is an attorney with Winters & King Inc. in Tulsa. His primary focus is business litigation/transactions and personal injury. He completed his undergraduate degree at OU in 2010 and obtained his law degree from the TU College of Law in 2013.

Mbilike M. Mwafulirwa is an attorney at Brewster & DeAngelis PLLC. Mr. Mwafulirwa’s practice focuses on general civil litigation, civil rights defense and appellate law. He is a 2012 graduate of the TU College of Law.

1. “Driverless Car Bill is Signed in California at Google Headquarters,” BBC News, Sept. 26, 2012, www.bbc.com/news/technology-19726951 (last accessed July 5, 2017).
2. Philippa Foot, “The Problem of Abortion and the Doctrine of the Double Effect,” 5 Oxford Rev. 5 (1967); Judith Jarvis Thomson, “The Trolley Problem,” 94:6 Yale L.J. 1395 (May 1985).
3. “No Driver? Bring It On. How Pittsburgh Became Uber’s Testing Ground,” The New York Times, Sept. 10, 2016, www.nytimes.com/2016/09/11/technology/no-driver-bring-it-on-how-pittsburgh-became-ubers-testing-ground.html?_r=0 (last accessed Jan. 8, 2017).
4. John Greenough, “10 Million Self-driving Cars Will Be on the Road by 2020,” Business Insider, June 15, 2016, www.businessinsider.com/report-10-million-self-driving-cars-will-be-on-the-road-by-2020-2015-5-6 (last accessed July 5, 2017).
5. John Markoff, “Should Your Driverless Car Hit a Pedestrian to Save Your Life?,” The New York Times, June 23, 2016, www.nytimes.com/2016/06/24/technology/should-your-driverless-car-hit-a-pedestrian-to-save-your-life.html (last accessed July 5, 2017).
6. See generally Stuart Russell and Peter Norvig, Artificial Intelligence: A Modern Approach 1, 3 (3rd ed. 2009).
7. James Somers, “The Man Who Would Teach Machines To Think,” The Atlantic, November 2013, www.theatlantic.com/magazine/archive/2013/11/the-man-who-would-teach-machines-to-think/309529/ (last accessed Jan. 23, 2017).
8. Supra note 3.
9. A.B. 511 (2011), at (8)(3)(b) (emphasis added); see also, e.g., Nev. Rev. Stat. §§482A et seq.; Fla. Stat. §§316.85, 319.145; Cal. Veh. Code §38750(b); Ut. Stat. §§41-26-101, et seq.; Tn. Stat. §55-8-202; D.C. Code §§50-2351, et seq.
10. Reg. Dep’t of Motor Veh., LCB File No. R084-11, §1 (eff. March 1, 2012).
11. Id. at §3.
12. Automated Driving: Legislative and Regulatory Action, available at cyberlaw.stanford.edu/wiki/index.php/Automated_Driving:_Legislative_and_Regulatory_Action (last accessed Jan. 8, 2017). 
13. H.R. 3007, 53rd Leg., 2d Sess. (2012).
14. See, e.g., Okla. Stat. tit. 12 §2 (“The common law, as modified by constitutional and statutory law, judicial decisions and the condition and wants of the people, shall remain in force in aid of statutes of Oklahoma”).
15. Blackstone et al., 2 Commentaries on the Laws of England: In Four Books; with an Analysis of the Work, at *30 (1832); see also R v. Dudley & Stephens, (1884) 14 Q.B.D. 273 (noting that when a person is faced with a choice of saving himself at the expense of another innocent person, that person should sacrifice himself). That rule drifted across the pond to the United States, where it underpinned the conviction of a shipwrecked sailor who helped throw passengers from an overcrowded lifeboat to survive. See United States v. Holmes, 26 F. Cas. 360 (C.C.E.D. Pa. 1842).
16. Thomson, supra note 2, at 1395 n. 2.
17. Cătălin Avramescu, An Intellectual History of Cannibalism 32 (2003); for an in-depth analysis of the potential rationales and arguments for any of the potential choices, see Thomson, supra note 2.
18. In fact, Oklahoma has generally adopted Sir William Blackstone’s approach, which requires that a person not take the life of an innocent person to preserve himself. See Tully v. State, 1986 OK CR 185, ¶13, 730 P.2d 1206, 1210 (quoting Blackstone).
19. See, e.g., Jean-François Bonnefon, Azim Shariff and Iyad Rahwan, “The Social Dilemma of Autonomous Vehicles,” 352 Science 1573 (June 24, 2016), available at www.popularmechanics.com/cars/a21492/the-self-driving-dilemma/ (noting that in a recent survey, 76 percent of respondents thought it would be more moral to have an autonomous vehicle sacrifice the passenger than kill ten pedestrians).
20. Isaac Asimov, I, Robot 27 (2008 ed.). 
21. See generally id. 
22. Cory Doctorow, “The Problem With Self-driving Cars: Who Controls the Code?,” The Guardian, Dec. 23, 2015, www.theguardian.com/technology/2015/dec/23/the-problem-with-self-driving-cars-who-controls-the-code (last accessed Jan. 8, 2017).
23. “[A]ctionable tortious conduct [falls] into (1) negligence, and (2) willful acts that result in intended or unintended harm.” Parrett v. Unicco Serv. Co., 2005 OK 54, ¶12, 127 P.3d 572, 575. 
24. An act is intentional when 1) it is the actor’s desire to bring about that result, or 2) he acted with the knowledge that such an outcome was substantially certain to occur. Id. ¶14, 127 P.3d at 576.
25. G.A.I., “Sic Utere Tuo Ut Alienum Non Laedas,” 5:8 Mich. L. Rev. 673 (1907); Franklin Drilling Co. v. Jackson, 1950 OK 107, ¶2, 217 P.2d 816, 823 (Halley, J., concurring in part and dissenting in part).
26. Lowery v. Echostar Sat. Corp., 2007 OK 38, ¶12, 160 P.3d 959, 964 (citations omitted).
27. Okla. Stat. tit. 76 §1 (“Every person is bound, without contract, to abstain from injuring the person or property of another, or infringing upon any of his rights”). 
28. Fargo v. Hays-Kuehn, 2015 OK 56, ¶13, 352 P.3d 1223, 1227 (internal citations omitted). 
29. J.S. v. Harris, 2009 OK CIV APP 92, ¶16, 227 P.3d 1089, 1094.
30. Id.; see also Schenfeld v. Norton Co., 391 F.2d 420, 422 (10th Cir. 1968) (“It is no longer doubted that the supplier of a chattel negligently made is liable for foreseeable harm to anyone injured, regardless of privity”).
31. See also Dylan LeValley, “Autonomous Vehicle Liability – Application of Common-Carrier Liability,” 36 Seattle U. L. Rev. 5, 6 (2013) (manufacturers of autonomous cars, similar to common carriers, should be saddled with liability for the “slightest negligence”).
32. The programmer could also create a clear warning message to the vehicle-user before the vehicle starts, informing the user that, in case of an accident, the vehicle would opt to injure the user to save innocent third parties. That way, assumption of the risk would be at issue.
33. Minor v. Zidell Trust, 1980 OK 144, ¶14, 618 P.2d 392, 396.
34. Black v. M&W Gear Co., 269 F.3d 1220, 1231 (10th Cir. 2001) (citations omitted); see also Kirkland v. Gen. Motors Corp., 1974 OK 52, ¶12, 521 P.2d 1353, 1363. 
35. Moss v. Polyco, Inc., 1974 OK 53, ¶12, 522 P.2d 622, 626. 
36. Mayberry v. Akron Rubber Mach. Co., 483 F.Supp. 407, 412 (N.D. Okla. 1979). 
37. See Kirkland, 521 P.2d at 1363; see also Black, 269 F.3d at 1233. 
38. Black, 269 F.3d at 1233 (quoting Spencer v. Nelson Sales Co., 620 P.2d 477, 481-482 (Okla. Ct. App. 1980)). 
39. Id. 
40. Restatement (Second) of Torts §402A, cmt. g; Clark v. Mazda Motor Corp., 2003 OK 19, ¶5 n. 4, 68 P.3d 207, 209 n. 4 (citations omitted).
41. Kirkland, 1974 OK 52, ¶26, 521 P.2d at 1362-1363 (citations omitted).
42. McKee v. Moore, 1982 OK 71, ¶4, 648 P.2d 21 (citing Tayar v. Roux Laboratories, Inc., 460 F.2d 494, 495 (10th Cir. 1972)).
43. See Restatement (Second) of Torts §402A cmt. h (The defective condition may arise not only from harmful ingredients, not characteristic of the product itself either as to presence or quantity, but also from foreign objects contained in the product, from decay or deterioration before sale, or from the way in which the product is prepared or packed.). 
44. Sierra Club v. Morton, 405 U.S. 727, 742-743 (1972) (Douglas, J., dissenting). 
45. Id. 
46. Id.
47. Id. 
48. Maritime law encompasses a well-established body of common law. United States v. Reliable Transfer Co., 421 U.S. 397, 409 (1975) (“the Judiciary has traditionally taken the lead in formulating flexible and fair remedies in the law maritime, and ‘Congress has largely left to this Court the responsibility for fashioning the controlling rules of admiralty law’”).
49. Federal Judicial Center, Admiralty and Maritime Law 31 (2d ed. 2013) (an “in rem [action] [is] directly against the property – typically a vessel. . . . In such cases, the vessel – not the vessel’s owner – is the defendant. Under the fiction of ‘personification,’ the vessel is deemed to have a legal personality and, as such, is subject to suit directly whereby it can be held liable for the torts it has committed and for the contracts it has breached.”), available at www.fjc.gov/public/pdf.nsf/lookup/admiralty2d.pdf/$file/admiralty2d.pdf (last accessed Jan. 22, 2017); Morton, 405 U.S. at 743 n. 2 (Douglas, J., dissenting) (citing The Camanche, 75 U.S. (8 Wall.) 448, 476 (1869)).
50. Morton, 405 U.S. at 743 n. 2 (Douglas, J., dissenting) (citing The Gylfe v. The Trujillo, 209 F.2d 386 (2d Cir. 1954)). Indeed, as soon as a ship touches the water, “she acquires a personality of her own.” Id. (quoting Tucker v. Alexandroff, 183 U.S. 424, 438 (1902)).
51. Cf. Morton, 405 U.S. at 742-743 (noting the freestanding legal personality accorded ships and other inanimate objects).
52. See European Parliament Committee on Legal Affairs Draft Report With Recommendations to the Commission on Civil Law Rules on Robotics, at ¶31(f) (2015/2103(INL)) (May 31, 2016) (“sophisticated autonomous robots . . . [should have] the status of electronic persons with specific rights and obligations, including that of making good any damage they may cause . . . .”), available at www.europarl.europa.eu/sides/getDoc.do?pubRef=//EP//NONSGML%2BCOMPARL%2BPE-582.443%2B01%2BDOC%2BPDF%2BV0//EN (last accessed Jan. 22, 2017).
53. See, e.g., “Watson Wasn’t Perfect: IBM Explains Jeopardy! Errors,” AOL Finance (Feb. 17, 2011) (noting that the supercomputer made miscalculations and errors resulting in imperfect results), webcache.googleusercontent.com/search?q=cache:4rTaqvHWD44J:www.aol.com/article/2011/02/17/the-watson-supercomputer-isnt-always-perfect-you-say-tomato/19848213/+&cd=4&hl=en&ct=clnk&gl=us&client=safari (last accessed Jan. 22, 2017).
54. Pan v. Bane, 2006 OK 57, ¶24, 141 P.3d 555, 563 (emphasis added). 
55. Brewer v. Murray, 2012 OK CIV APP 109, ¶¶9-11, 292 P.3d 41, 46-47 (accorded precedential value by the Oklahoma Supreme Court).
56. Morton, 405 U.S. at 749 (Douglas, J., dissenting). 
57. Just like a ship, an owner of a self-driving car could consider procuring general liability insurance for his property with an in rem endorsement that would provide the necessary funds to pay damages if the vehicle were sued directly. Cf. John W. Fisk Co., In Rem Coverage (“coverage endorsement extending coverage for suits filed against the value of a thing (Vessel) seeking for the recovery of damages ….”), jwfisk.com/definitions/#In-Rem (last accessed Jan. 22, 2017).

Originally published in the Oklahoma Bar Journal -- OBJ 88 pg. 1719 (Sept. 9, 2017)