This fatality is listed as the only deadly accident involving a “level three” autonomous vehicle. The level refers to the type of car involved: because the Uber car drove itself and required little human intervention, it is considered a level three autonomous vehicle (AV). As of September 2019, there had been five fatalities worldwide involving “level two” self-driving cars, which require more human involvement. All were Tesla models. It’s interesting to note that the more drivers were expected to intervene, the more deaths occurred. Between 2016 and 2019, one of these deaths happened in China, while the other four were in the United States. The first was twenty-three-year-old Gao Yuning, who, after turning on the AV feature in his Tesla, slammed into the back of a truck. In Florida, Joshua Brown was killed when his Tesla’s AV function mistook a left-turning white truck for clear sky. While Tesla has come under fire for such tech mishaps, the company points to statistics to argue that self-driving cars are as safe as, if not safer than, the average car. In a letter to the public after Yuning’s death, they emphasized the point: “This is the first known fatality in just over one hundred and thirty million miles where Autopilot was activated. Among all vehicles in the US, there is a fatality every ninety-four million miles. Worldwide, there is a fatality approximately every sixty million miles.”5
With the advent of AVs come ethical questions. First are the legal implications: Is the human operator to blame if the vehicle did not alert them to a problem? Is the manufacturer solely responsible for fatalities? Beyond these considerations are questions revolving around artificial intelligence.
The Trolley Problem is an ethical dilemma often posed to philosophy students. The scenario is this: you stand beside a lever that controls the direction of a runaway trolley that cannot stop. Five people are tied to the track ahead, and one person is tied to the other track. Would you divert the trolley? Would you, essentially, jump into action to kill one person instead of five? What if the five were children? Or the single person? There are many variations of the scenario, yet they all ask the same fundamental ethical question. The Utilitarian approach would be to divert the trolley, because one death is preferable to five, yet others hold that actively participating in the carnage is morally wrong.
In a survey of professional philosophers, 69.9 percent said they would divert the trolley, 8 percent would not change its direction, and 24 percent would not answer!
“For example, examining how the technology might respond to a ‘no win’ situation or a true ethical dilemma, such as the Trolley Problem, is a vital task. Some scholars are seeking to evaluate empirically how drivers might handle a Trolley-type situation; others are considering whether ethical theory, such as the application of Utilitarian reasoning, may help resolve what is an appropriate response by the car in a crash situation.”6
Whatever you believe the right answer to be, there is a worry that self-driving cars will never have the human ability to make these ethical choices, particularly in a matter of seconds. Many inanimate objects scare us, from dolls to cars.
CHAPTER TWENTY-TWO
Cell
It’s not so odd these days to see groups of people all on their cell phones at the same time. Look at passengers on a subway or a train and most of them will be interacting with their device in some way. This rampant surge in the availability and popularity of technology got Stephen King thinking.