Technological singularity
The technological singularity is a term with multiple related, but conceptually distinct, definitions. One definition holds that the Singularity is a speculative point in time at which technological progress accelerates beyond the ability of present-day human beings to understand it. Another defines the Singularity as the emergence of smarter-than-human intelligence.
The concept was first mentioned in Alvin Toffler's book Future Shock. It is based on the observation that projections of speed of travel, human intelligence, social communication, population, and many other trends showed exponentially increasing trend lines up until about 1995. At this point, many of them became linear, or inflected and began to flatten into limited growth curves.
Belief in the singularity was reinforced by Moore's Law in the computer industry. The idea was developed by science fiction author Vernor Vinge, who formalized it in 1993 with the essay [http://www.ugcs.caltech.edu/~phoenix/vinge/vinge-sing.html The Coming Technological Singularity]. Since then, it has been the subject of several futurist writings.
Vinge claims that:
- "Within thirty years, we will have the technological means to create superhuman intelligence. Shortly thereafter, the human era will be ended."
Vinge's technological singularity is commonly misunderstood to mean technological progress rising to "infinity." In fact, he refers to the pace of technological change increasing to such a degree that a person who does not keep pace with it will rapidly find civilization completely incomprehensible. This was one of the philosophical ideas that inspired the successful movie The Matrix.
It has been speculated that the key to such a rapid increase in technological sophistication will be the development of superhuman intelligence, either by directly enhancing existing human minds (perhaps with cybernetics), or by building artificial intelligences. These superhuman intelligences would presumably be capable of inventing ways to enhance themselves even more, leading to a feedback effect that would quickly surpass preexisting intelligences.
The effect is presumed to work along these lines: first, a seed intelligence is created that is able to re-engineer itself, not merely for increased speed but for new types of intelligence. At minimum, this might be a human-equivalent intelligence. This intelligence redesigns itself with improvements, and uploads its memories, skills, and experience into the new structure. The process repeats, with presumed redesign not just of the software but also of the hardware it runs on. The mind may well make mistakes, but it will keep backups: failing designs will be discarded, and successful ones retained.
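The loop described above can be sketched as a toy simulation. This is purely illustrative: `redesign`, the capability score, and the random improvement range are all hypothetical stand-ins, modeling the discard-failures-keep-successes cycle as simple hill climbing with backups.

```python
import random

def redesign(capability, rng):
    """Propose a new design; some proposals are improvements, some are not."""
    return capability * rng.uniform(0.8, 1.5)

def self_improve(generations=10, seed=0):
    rng = random.Random(seed)
    capability = 1.0        # start from a human-equivalent baseline
    backup = capability     # memories, skills, and experience carried forward
    for _ in range(generations):
        candidate = redesign(capability, rng)
        if candidate > capability:
            backup = capability  # keep a backup before switching designs
            capability = candidate
        # failing designs are simply discarded; the backup is retained
    return capability

print(self_improve())
```

Because only improvements are adopted and a backup is always kept, capability never falls below the starting baseline in this sketch, mirroring the "backups plus selection" argument in the text.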
Simply having a human-equivalent artificial intelligence may yield this effect, if Moore's law continues long enough. That is, in the first year, the intelligence is equal to a human. Eighteen months later, it is twice as fast. Thirty-six months later, it is four times as fast, and so on.
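The arithmetic above is simple compound doubling. A minimal sketch, assuming the 18-month doubling period the text uses:

```python
# Speed of a human-equivalent intelligence relative to its starting point,
# assuming hardware speed doubles every 18 months (the Moore's-law figure
# assumed in the text).

def speed_multiple(months, doubling_months=18):
    """Speed relative to the original human-equivalent system."""
    return 2 ** (months / doubling_months)

assert speed_multiple(0) == 1    # launch: equal to a human
assert speed_multiple(18) == 2   # eighteen months later: twice as fast
assert speed_multiple(36) == 4   # thirty-six months later: four times as fast
```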
However, signals in human neurons propagate at only about 200 meters per second, while electronic signals propagate at roughly 100 million meters per second (about one-third the speed of light in vacuum). It may therefore be reasonable to expect a conservative (merely) million-fold improvement in the intelligence's speed of thought.
In this case, the intelligence could double its capacity as often as every 62 seconds. The actual doubling time would probably start out longer, because the intelligence would need special machinery constructed for its new mind. However, one of its first improvements would probably be to take control of its own manufacture.
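The 62-second figure is consistent with a back-of-the-envelope calculation: compress a roughly two-year hardware doubling period by the million-fold subjective speed-up claimed above. This rests on the strong assumption that design time, not physical fabrication, dominates the doubling cycle; the sketch below only checks the arithmetic, not the assumption.

```python
# Back-of-the-envelope check: a two-year doubling period, experienced at a
# million-fold subjective speed-up, shrinks to about a minute.

SECONDS_PER_YEAR = 365 * 24 * 3600
speedup = 1_000_000
doubling_years = 2.0

doubling_seconds = doubling_years * SECONDS_PER_YEAR / speedup
print(round(doubling_seconds))  # prints 63, the order of the 62 s quoted
```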
One presumption is that such intelligences will be attainably small and inexpensive. Some researchers claim that even without quantum computing, using advanced molecular manufacturing, matter could be organized so that a gram of matter could simulate a million years of a human civilization per second.
Another presumption is that at some point, with the correct mechanisms of thought, all possible correct human thoughts will become obvious to such an intelligence.
Therefore, if the above conjectures are right, then all human problems could be solved within a few years of constructing a friendly version of such an intelligence. If that is true, then constructing such an intelligence is the most moral possible allocation of resources at this time.
A number of concepts and terms have come into standard use in this topic:
- Arthur C. Clarke's aphorism, "Any sufficiently advanced technology is indistinguishable from magic," is taken as a reliable guide to a human's response to incomprehensibly advanced technologies.
- The singularity is often thought to be an unavoidable consequence of advancing information technology. Artificial intelligence research operated for nearly thirty years on computers running at one million instructions per second or slower, and it has begun to bear more fruit as computers have dramatically exceeded this speed.
- The beyond is the set of concepts or experiences from beyond the singularity.
- The low beyond is the set of concepts or experience that might be explained to merely brilliant human beings, or newly-transcended intelligences.
- The high beyond is the incomprehensible set of concepts or experience which are impossible for any human being to understand.
- Transcendence is what occurs when a person or thing passes through the singularity.
- Human beings might experience transcendence by a process of uploading their mind to a transcendent thinking machine, or by upgrading their brain to be a transcendent thinking machine.
- A power is a fully transcended intelligence operating from the high beyond. Its abilities would still be constrained in some way by physical reality, but to a human being it might well seem to have god-like powers. Certainly no human being could predict what was and was not possible for it.
- Apotheosis might be the sublime state that occurs if billions of subjective years of experience can be made available to transcended individuals in a few minutes of time, because their thoughts have been sped by a factor of a million or more. The projected extreme sensory deprivation of this state is often dismissed by an appeal to simulated environments.
Whether such a process will actually occur is open to strong debate. To date, no artificial intelligence has approached human general ability, and there is no guarantee such a thing is possible. The claim that the rate of technological progress is increasing has also been questioned. The technological singularity is sometimes referred to as the "Rapture of the Nerds" by detractors of the idea.
A nonprofit institute is now dedicated to ensuring that the singularity occurs, and that it is 'friendly' to human beings. It is accepting donations.
Prominent theorists and speculators on the subject include:
- Vernor Vinge
- Hans Moravec
- Eliezer Yudkowsky
- Ray Kurzweil
- Marvin Minsky
- Arthur T. Murray
- Michael Anissimov
- Terence McKenna
See Also: extropy, artilect, Omega point, transhumanism
For another possible future, see Herbert's Dune series: Butlerian Jihad
External links
The full text of the Vinge article cited can be found at:
For more information on the various definitions of the Singularity:
See Eliezer Yudkowsky's extensive Singularity writings at:
Singularity Action Group: Working toward a positive Singularity through public education and direct action in the development of Singularity technologies.
Singularity Discussion Forum:
Human Knowledge: Foundations and Limits includes
- a survey of the theoretical and physical limits on minds, and
- a forecast in which the singularity does not happen.
The website of economist Robin Hanson has
- a discussion between Vinge and his critics; and
- an economic analysis of the singularity concept.
Broderick, Damien. The Spike: How Our Lives Are Being Transformed by Rapidly Advancing Technologies. Forge, 2001. ISBN 0312877811.