The robot uprising is a myth. Despite the gory headlines, objective data show that people all over the world are, on average, living longer, contracting fewer diseases, eating more food, spending more time in school, getting access to more culture, and becoming less likely to be killed in a war, a murder, or an accident. Yet despair springs eternal. When pessimists are forced to concede that life has been getting better and better for more and more people, they have a retort at the ready. We are cheerfully hurtling toward a catastrophe, they say, like the man who fell off the roof and said, “So far so good” as he passed each floor. Or we are playing Russian roulette, and the deadly odds are bound to catch up to us. Or we will be blindsided by a black swan, a four-sigma event far along the tail of the statistical distribution of hazards, with low odds but calamitous harm.

For half a century, the four horsemen of the modern apocalypse have been overpopulation, resource shortages, pollution, and nuclear war. They have recently been joined by a cavalry of more-exotic knights: nanobots that will engulf us, robots that will enslave us, artificial intelligence that will turn us into raw materials, and Bulgarian teenagers who will brew a genocidal virus or take down the internet from their bedrooms.

The sentinels for the familiar horsemen tended to be romantics and Luddites. But those who warn of the higher-tech dangers are often scientists and technologists who have deployed their ingenuity to identify ever more ways in which the world will soon end. In 2003, astrophysicist Martin Rees published a book entitled Our Final Hour, in which he warned that “humankind is potentially the maker of its own demise” and laid out some dozen ways in which we have “endangered the future of the entire universe.” For example, experiments in particle colliders could create a black hole that would annihilate Earth, or a “strangelet” of compressed quarks that would cause all matter in the cosmos to bind to it and disappear. Rees tapped a rich vein of catastrophism. The book’s Amazon page notes, “Customers who viewed this item also viewed Global Catastrophic Risks; Our Final Invention: Artificial Intelligence and the End of the Human Era; The End: What Science and Religion Tell Us About the Apocalypse; and World War Z: An Oral History of the Zombie War.” Techno-philanthropists have bankrolled research institutes dedicated to discovering new existential threats and figuring out how to save the world from them, including the Future of Humanity Institute, the Future of Life Institute, the Center for the Study of Existential Risk, and the Global Catastrophic Risk Institute.

How should we think about the existential threats that lurk behind our incremental progress? No one can prophesy that a cataclysm will never happen, and these pages contain no such assurance. Climate change and nuclear war in particular are serious global challenges. Though they are unsolved, they are solvable, and road maps have been laid out for long-term decarbonization and denuclearization. These processes are well underway: the world has been emitting less carbon dioxide per dollar of gross domestic product, and the world’s nuclear arsenal has been reduced by 85 percent. Of course, to avert possible catastrophes, both must be pushed all the way to zero.