This was originally published on Philosophical Disquisitions.
Some of you may have noticed my recently published paper on existential risk and artificial intelligence. The paper offers a somewhat critical perspective on the recent trend of AI-doomsaying among people like Elon Musk, Stephen Hawking and Bill Gates. It doesn't focus on their opinions, however; rather, it focuses on the work of the philosopher Nick Bostrom, who has written the most impressive analysis to date of the potential risks posed by superintelligent machines.
I want to try to summarise the main points of that paper in this blog post. This summary comes with the usual caveat that the full version contains more detail and nuance; if you want that detail and nuance, you should read the paper. That said, writing this summary after the paper was published gives me the opportunity to reflect on its details and offer some modifications to the argument in light of feedback and criticisms.