7. Power-law distributions: Johnson et al. 2006; Newman 2005; see Pinker 2011, pp. 210–22, for a review. See the references in note 17 of chapter 11 for an explanation of the complexities in estimating the risks from the data.
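As a rough illustration of those complexities, here is a minimal sketch, in Python with synthetic data (the function names and parameters are illustrative, not drawn from the cited sources), of the maximum-likelihood exponent estimator that Newman 2005 derives for continuous power laws, alpha_hat = 1 + n / sum(ln(x_i / x_min)). Estimates of the tail exponent wobble at small sample sizes, which is one reason extrapolations of extreme-event risk from sparse data are so uncertain:

```python
import math
import random

def sample_power_law(alpha, x_min, n, rng):
    """Draw n samples from a continuous power law p(x) ~ x^(-alpha), x >= x_min,
    via inverse-transform sampling (Newman 2005)."""
    return [x_min * (1.0 - rng.random()) ** (-1.0 / (alpha - 1.0)) for _ in range(n)]

def mle_exponent(xs, x_min):
    """Maximum-likelihood estimate of the exponent (Newman 2005):
    alpha_hat = 1 + n / sum(ln(x_i / x_min))."""
    tail = [x for x in xs if x >= x_min]
    return 1.0 + len(tail) / sum(math.log(x / x_min) for x in tail)

rng = random.Random(42)
# True exponent is 2.5; estimates fluctuate noticeably at small n,
# and small errors in the exponent translate into large errors in
# the estimated probability of extreme events.
for n in (50, 500, 5000):
    xs = sample_power_law(2.5, 1.0, n, rng)
    print(n, round(mle_exponent(xs, 1.0), 2))
```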
8. Overestimating the probability of extreme risks: Pinker 2011, pp. 368–73.
9. End-of-the-world predictions: “Doomsday Forecasts,”
10. Apocalyptic movies: “List of Apocalyptic Films,”
11. Quoted in Ronald Bailey, “Everybody Loves a Good Apocalypse,”
12. Y2K bug: M. Winerip, “Revisiting Y2K: Much Ado About Nothing?”
13. G. Easterbrook, “We’re All Gonna Die!”
14. P. Ball, “Gamma-Ray Burst Linked to Mass Extinction,”
15. Denkenberger & Pearce 2015.
16. Rosen 2016.
17. D. Cox, “NASA’s Ambitious Plan to Save Earth from a Supervolcano,”
18. Deutsch 2011, p. 207.
19. “More dangerous than nukes”: Elon Musk, tweeted in Aug. 2014, quoted in A. Elkus, “Don’t Fear Artificial Intelligence,”
20. In a 2014 poll of the hundred most-cited AI researchers, just 8 percent feared that high-level AI posed the threat of “an existential catastrophe”: Müller & Bostrom 2014. AI experts who are publicly skeptical include Paul Allen (2011), Rodney Brooks (2015), Kevin Kelly (2017), Jaron Lanier (2014), Nathan Myhrvold (2014), Ramez Naam (2010), Peter Norvig (2015), Stuart Russell (2015), and Roger Schank (2015). Skeptical psychologists and biologists include Roy Baumeister (2015), Dylan Evans (2015a), Gary Marcus (2015), Mark Pagel (2015), and John Tooby (2015). See also A. Elkus, “Don’t Fear Artificial Intelligence,”
21. Modern scientific understanding of intelligence: Pinker 1997/2009, chap. 2; Kelly 2017.
22. Foom (runaway, recursively self-improving AI): Hanson & Yudkowsky 2008.
23. The technology expert Kevin Kelly (2017) recently made the same argument.
24. Intelligence as a contraption: Brooks 2015; Kelly 2017; Pinker 1997/2009, 2007a; Tooby 2015.
25. AI doesn’t progress by Moore’s Law: Allen 2011; Brooks 2015; Deutsch 2011; Kelly 2017; Lanier 2014; Naam 2010. Many of the commentators in Lanier 2014 and Brockman 2015 make this point as well.
26. AI researchers vs. AI hype: Brooks 2015; Davis & Marcus 2015; Kelly 2017; Lake et al. 2017; Lanier 2014; Marcus 2016; Naam 2010; Schank 2015. See also note 25 above.
27. Shallowness and brittleness of current AI: Brooks 2015; Davis & Marcus 2015; Lanier 2014; Marcus 2016; Schank 2015.
28. Naam 2010.
29. Robots turning us into paper clips and other value alignment problems: Bostrom 2016; Hanson & Yudkowsky 2008; Omohundro 2008; Yudkowsky 2008; P. Torres, “Fear Our New Robot Overlords: This Is Why You Need to Take Artificial Intelligence Seriously,”
30. Why we won’t be turned into paper clips: B. Hibbard, “Reply to AI Risk,” http://www.ssec.wisc.edu/~billh/g/AIRisk_Reply.html; R. Loosemore, “The Maverick Nanny with a Dopamine Drip: Debunking Fallacies in the Theory of AI Motivation,”
31. Quoted in J. Bohannon, “Fears of an AI Pioneer,”
32. Quoted in Brynjolfsson & McAfee 2015.
33. Self-driving cars not quite ready: Brooks 2016.
34. Robots and jobs: Brynjolfsson & McAfee 2016; see also chapter 9, notes 67 and 68.
35. The bet is registered on the “Long Bets” website, http://longbets.org/9/.
36. Improving computer security: Schneier 2008; B. Schneier, “Lessons from the Dyn DDoS Attack,”
37. Strengthening bioweapon security: Bradford Project on Strengthening the Biological and Toxin Weapons Convention, http://www.bradford.ac.uk/acad/sbtwc/.