Site • RSS • Apple Podcasts

Description (podcaster-provided):
Audio narrations of academic papers by Nick Bostrom.

Themes and summary (AI-generated based on podcaster-provided show and episode descriptions):
➤ AI ethics, safety, governance • digital minds’ moral status/rights • existential risk, information hazards, vulnerable world • transhumanism, human enhancement, embryo selection • metaethics, bias reduction heuristics • future meaning, utopia, cosmic norms • simulation, doomsday, Fermi paradox

This podcast consists of audio narrations of Nick Bostrom’s academic writing (and some co-authored work), focused on philosophical analysis of emerging technologies and long-run futures. Across the episodes, a central concern is how advances such as artificial intelligence, biotechnology, and other transformative capabilities could reshape human life, social order, and the future trajectory of civilization.
A large portion of the content examines the ethics and governance of advanced AI, including questions of alignment and safety, the potential moral status of artificial or digital minds, and the institutional challenges of sharing resources and political rights with nonhuman intelligences. Several narrations explore how to reason about and prepare for destabilizing technological change, including the idea that some innovations could make catastrophic harm easy by default, and that managing such vulnerability might require new forms of global coordination, surveillance, or restraint. Related discussions address “information hazards,” where disseminating true knowledge can itself create risk.
The podcast also returns frequently to human enhancement and transhumanism: debates over whether altering human capacities undermines dignity, how to evaluate enhancement proposals in light of evolutionary tradeoffs, and the feasibility and societal implications of genetic interventions such as embryo selection for cognitive traits. These themes connect to broader questions about steering evolution and the political conditions under which humanity could deliberately influence its own long-term development.
Another recurring thread is “big-picture” philosophy applied to the human future: existential risk and its moral importance, decision-making under deep uncertainty, and unusual arguments connecting anthropic reasoning to predictions about extinction risk or the possibility that we live in a simulation. Alongside analytical papers, the podcast includes more speculative or literary visions of future flourishing, as well as reflections on meaning and purpose in a world where superintelligence and automation could render human labor unnecessary and make human nature highly malleable.