The Singularity: A Philosophical Analysis
David J. Chalmers
1 Introduction
What happens when machines become more intelligent than humans? One view is that this event will be followed by an explosion to ever-greater levels of intelligence, as each generation of machines creates more intelligent machines in turn. This intelligence explosion is now often known as the “singularity”.
The basic argument here was set out by the statistician I. J. Good in his 1965 article “Speculations Concerning the First Ultraintelligent Machine”:
Let an ultraintelligent machine be defined as a machine that can far surpass all the
intellectual activities of any man however clever. Since the design of machines is one
of these intellectual activities, an ultraintelligent machine could design even better
machines; there would then unquestionably be an “intelligence explosion”, and the
intelligence of man would be left far behind. Thus the first ultraintelligent machine is
the last invention that man need ever make.
The key idea is that a machine that is more intelligent than humans will be better than humans
at designing machines. So it will be capable of designing a machine more intelligent than the most
intelligent machine that humans can design. So if it is itself designed by humans, it will be capable
of designing a machine more intelligent than itself. By similar reasoning, this next machine will also be capable of designing a machine more intelligent than itself, and so on.
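The recursive structure of the argument can be made vivid with a toy formalization (an illustrative sketch, not Good's or the present paper's; the uniform growth factor c is an assumption purely for exposition). Suppose a machine of intelligence $I$ can always design a machine of intelligence at least $cI$ for some fixed $c > 1$. Then the sequence of machines satisfies
$$I_{n+1} \geq c\, I_n, \qquad \text{so} \qquad I_n \geq c^{\,n} I_0,$$
which grows without bound. This crude model also makes clear what the argument rests on: the design-capacity premise must continue to hold at every level.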
1 This paper was published in the Journal of Consciousness Studies 17:7-65, 2010. I first became interested in this cluster of ideas as a student, before first hearing explicitly of the “singularity” in 1997. I was spurred to think further about these issues by an invitation to speak at the 2009 Singularity Summit in New York City. I thank many people at that event for discussion, as well as many at later talks and discussions at West Point, CUNY, NYU, Delhi, ANU, Tucson, Oxford, and UNSW. Thanks also to Doug Hofstadter, Marcus Hutter, Ole Koksvik, Drew McDermott, Carl Shulman, and Michael Vassar for comments on this paper.