Saturday, September 23, 2006

The Singularity

When intelligence on earth has hit the wall in computational capacity enforced by the laws of physics, the Singularity will have happened. Can we speculate about what events will lead up to the Singularity? Several significant intellectuals already have.


In a nutshell

Vernor Vinge coined the term "technological singularity" to refer to the metaphorical event horizon of technological progress beyond which we cannot imagine. Vinge chose the word "singularity" as a rhetorical flourish, connoting the inescapable gravitational singularity of a black hole. In his 1993 essay "The Coming Technological Singularity," Vinge posited a logical argument so simple and compelling that it considerably altered the trajectories of science fiction and speculative futurism. If human beings build a machine smarter than ourselves, what is that machine likely to do with its intelligence? The answer, of course, is to build still more intelligent machines. The outcome is a feedback loop in which global intelligence could evolve at the rates we currently see in raw computing power. The essay begins, "Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended."

In 1965, British statistician I.J. Good, known for his work in Bayesian statistics, envisioned a scenario similar in scope to Vinge's technological singularity, which he called an "intelligence explosion." "Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever," Good wrote. "Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make."

Even earlier, in writings begun in the 1920s, a French Jesuit priest named Pierre Teilhard de Chardin saw the earth evolving into what he termed a noosphere. The earth was becoming "cephalized" in Teilhard's view, meaning it was growing a head. Through the increased communication of disparate groups of people by means of ever-evolving information technologies, the global human population would become increasingly "lovingly interdependent." Outlined in The Phenomenon of Man, published posthumously in 1955, his vision of the future, in its focus on information technologies and a kind of global phase transition, thematically prefigures Vinge's and Good's. An excellent extrapolation of Teilhard's thesis is available at Acceleration Watch. For those interested in investigating a sane confluence of technology and theology, Teilhard is a good place to start.


Perhaps the single most lucid essay on the subject was written by Eliezer Yudkowsky in 1996 and is titled Staring Into the Singularity. It begins:

If computing speeds double every two years,
what happens when computer-based AIs are doing the research?

Computing speed doubles every two years.
Computing speed doubles every two years of work.
Computing speed doubles every two subjective years of work.

Two years after Artificial Intelligences reach human equivalence, their speed doubles. One year later, their speed doubles again.

Six months - three months - 1.5 months ... Singularity.

Plug in the numbers for current computing speeds, the current doubling time, and an estimate for the raw processing power of the human brain, and the numbers match in: 2021.

But personally, I'd like to do it sooner.
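The arithmetic behind Yudkowsky's countdown is a geometric series: if each speed doubling halves the subjective time to the next doubling, the total elapsed time converges to a finite date rather than stretching on forever. A minimal sketch of that convergence, taking the two-year starting interval from the quoted passage:

```python
# Halving intervals form a geometric series: 2 + 1 + 0.5 + 0.25 + ... -> 4 years.
interval = 2.0  # years until the first post-human-equivalence doubling (from the quote)
elapsed = 0.0
for _ in range(50):  # 50 halvings; the partial sum is already at the limit in float precision
    elapsed += interval
    interval /= 2
print(round(elapsed, 6))  # converges toward 4.0 - a finite "Singularity" date
```

The point of the sketch is only the shape of the argument: under the (strong) assumption that each doubling of AI speed halves the wall-clock time to the next one, the whole cascade completes in a bounded span, which is exactly why the quote can end with "... Singularity."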




Yudkowsky has made the safe transition to a technological singularity his life's purpose, helping to found the non-profit organization The Singularity Institute for Artificial Intelligence and developing what he has termed Friendly AI. His emphasis is not on rushing headlong into a singularity, but on painstakingly ensuring that the transition reflects, to the best of our ability, the most fundamental ethical standards of human civilization, as informed by research in cognitive science, evolutionary psychology, and a host of other scientific disciplines. His non-profit is perhaps the most serious overt effort to protect humanity against a failed singularity, a relatively long-term but nevertheless pressing existential risk, and against the many other existential risks staring us in the face. The Singularity Institute provides an online reading list that is both comprehensive and easy to navigate.



Technology innovator and proactive humanitarian Ray Kurzweil's most recent book, The Singularity Is Near: When Humans Transcend Biology, lays out an ideal game plan for getting humankind to a positive singularity sometime around 2045. Kurzweil predicts that the singularity will follow from a series of steps, each contingent on the last, progressing from biotechnology to nanotechnology to artificial intelligence. For anyone who would like to gain a comprehensive grasp of the technological innovations standing between us and a future singularity, without suffering unneeded future shock, Kurzweil's book is an eloquent, life-affirming, and accessible explication of the singularity.


The accelerating rate of technological progress charted by Ray Kurzweil and leading to a technological singularity

Some researchers in artificial intelligence, such as Novamente's Ben Goertzel, see the possibility of a "hard take-off" if AI research is accelerated. In a hard take-off scenario, an infant-level artificial intelligence would progress to superintelligence in a very short span of time, accomplishing all the technological achievements necessary to bring human civilization to a singularity. Such a superintelligence would compress Kurzweil's timescale down to a few years, or perhaps even months. The best possible outcome we could hope for is a hard take-off positive technological singularity happening sometime this evening: the further off the singularity is postponed, the greater the likelihood of an intervening existential threat ending the human experiment.

In short, if you have ever wondered what it's all about, why you should even bother getting up in the morning, or whether there is meaning to life, one rather obvious area to investigate is the coming technological singularity.

Now here is Ben Goertzel to explain to you how to get to the singularity.



Artificial General Intelligence and its Role in the Singularity



Ten Years to the Singularity (If We Really Try)
