Sunday, September 03, 2006

Existential Risk

As advanced destructive technologies become increasingly cheap to develop, the possibility that error or deliberate misuse could bring about near-term human extinction has never been a more immediate threat. If you find discussions of this sort offensive, please read no further.

A threat that could end intelligent life on this planet, or permanently cripple its potential, is what Oxford professor Nick Bostrom calls an existential risk. The build-up of nuclear armaments during the Cold War was the first demonstration that human civilization possessed the means to annihilate itself. That no exchange of nuclear weapons has occurred, thanks in large part to the deterrent logic of mutually assured destruction, is a hopeful sign that our species has sense enough not to bring about its own destruction. Yet the prevalence of terrorism in this early chapter of the 21st century already signals that it takes far less than the resources of an entire rogue state to cause horrendous casualties.

The Doomsday argument sums up the problem rather succinctly. If human civilization is to end as the result of an advanced weapon of mass destruction, that event is most likely to occur when the greatest number of people are alive. And since you are a human being alive on the planet now, of all the eras in human history you could have found yourself in, the final one is statistically the most probable. What, then, are the sources of existential risk that might concern us?
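To make that reasoning concrete before turning to specific risks, here is a back-of-envelope, Gott-style version of the Doomsday estimate. The 60-billion birth count and the 95% level are illustrative assumptions rather than figures from Bostrom or this post; the point is only how the "you are a typical observer" premise turns into a numerical bound.

```python
# Back-of-envelope Doomsday estimate (Gott-style; all figures are illustrative assumptions).
# Premise: treat your birth rank as a uniform random draw from all humans who will ever live.

HUMANS_BORN_SO_FAR = 60e9   # rough order-of-magnitude guess at total human births to date
CONFIDENCE = 0.95

# With probability 0.95 your birth rank falls in the last 95% of all births, i.e. it
# exceeds 5% of the final total -- so the final total is at most 20x the births so far.
upper_bound_on_total_births = HUMANS_BORN_SO_FAR / (1 - CONFIDENCE)

print(f"At {CONFIDENCE:.0%} confidence, no more than {upper_bound_on_total_births:.1e} humans will ever be born.")
```

The bound says nothing about how the end would come; it only dramatizes why a typical observer should not assume the species has an astronomically long future ahead of it.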

At the top of the list, there’s the obvious risk of nuclear warfare. Michael Anissimov of The Lifeboat Foundation, a non-profit organization concerned with protecting against existential risk, recently published a list of ten targets of a nuclear weapon that would maximize global damage and chaos, adding some facts about the startling accessibility of uranium: “[U]ranium ore is more common than gold, mercury, silver, or tungsten, and is found in substantial quantities worldwide, including in southern Australia, Africa, and the Middle East. It is the 48th most abundant element in the earth’s crust. Pitchblende uranium (1% pure) is available on eBay for approximately $20/kg. The US Department of Energy has stockpiled 704,000 metric tons of uranium in the form of hexafluoride solids.”


In 1961, at the height of the Cold War, the Soviet Union detonated the Tsar Bomba, a 50-megaton nuclear weapon. The Tsar Bomba mushroom cloud rose as high as 64 km (40 mi) above Novaya Zemlya, an archipelago in the Arctic Ocean.

Nick Bostrom notes that while a nuclear holocaust would not necessarily end the human species instantly, it would mark a setback in the progress of civilization that would not easily be overcome.

“Even if some humans survive the short-term effects of a nuclear war, it could lead to the collapse of civilization. A human race living under stone-age conditions may or may not be more resilient to extinction than other animal species.”



Another risk worthy of our concern is that posed by a genetically engineered biological pathogen. Recently, the genome of the 1918 Spanish flu virus, responsible for the deaths of 50 million people, was published online. In a guest editorial printed in the New York Times, Bill Joy and Ray Kurzweil argued that thoughtlessly circulating the instructions for dangerous pathogens was a "Recipe for Destruction": “The genome is essentially the design of a weapon of mass destruction. No responsible scientist would advocate publishing precise designs for an atomic bomb, and in... ways revealing the sequence for the flu virus is even more dangerous.”

Kurzweil and Joy disagree in their approaches to the perils of accelerating technological progress. Joy promotes relinquishment, though he offers no concrete plan for how the slippery slope of scientific advancement could peaceably be brought to a halt. Kurzweil argues in The Singularity Is Near that relinquishing potentially dangerous technologies would require worldwide suppression by a global totalitarian regime; in his view, only the continued development of defensive technologies, such as RNA interference-based protection against viruses, can counteract the hazards posed by newly introduced deadly pathogens.


Audio of Nick Bostrom's presentation on existential risk at the Singularity Summit at Stanford University

An existential risk lying further down the road is the misuse, accidental or deliberate, of nanotechnology. A self-replicating “nanobot” is defined as a device built from nanoscale molecular components that can make copies of itself from materials in its surroundings. While the Foresight Institute, whose mission is to ensure the beneficial implementation of nanotechnology, has drawn up guidelines for the safe use of nanotech assemblers in a laboratory setting, those guidelines do not address intentional misuse in an uncontrolled environment, where nanobots could multiply wildly out of control. Eric Drexler described the threat posed by the exponential growth of nanoscale assemblers in Engines of Creation and labeled the risk “grey goo.”

"Thus the first replicator assembles a copy in one thousand seconds, the two replicators then build two more in the next thousand seconds, the four build another four, and the eight build another eight. At the end of ten hours, there are not thirty-six new replicators, but over 68 billion. In less than a day, they would weigh a ton; in less than two days, they would outweigh the Earth; in another four hours, they would exceed the mass of the Sun and all the planets combined - if the bottle of chemicals hadn't run dry long before."

As an antidote to grey goo, Drexler proposed the use of active shields: global immune systems that could counteract the unregulated proliferation of assemblers. Recently, Ralph Merkle and Michael Vassar, in conjunction with the Lifeboat Foundation, proposed plans for a Nanoshield to protect against self-replicating nanobots.

An often-overlooked source of existential risk is destructive artificial intelligence. An AI that was not programmed properly might simply fail to include human survival in its agenda. Technologists concerned with human-equivalent machine intelligence point to Vernor Vinge's 1993 essay, The Coming Technological Singularity, which foresees the creation of a godlike superintelligence and quotes I. J. Good's classic description of the resulting "intelligence explosion":

“Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an "intelligence explosion," and the intelligence of man would be left far behind.”
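One way to see why Good's feedback loop is explosive rather than merely incremental is a toy model (purely illustrative, not anything from Good or Vinge): assume each generation of machine designs its successor, and the size of the improvement it can make grows with its own capability.

```python
# Toy model of recursive self-improvement (illustrative assumptions only).
# Rule: each generation designs a successor, and the improvement it achieves is
# proportional to the square of its own capability; the gain factor k is arbitrary.

def capability_trajectory(generations, capability=1.0, k=0.5):
    history = [capability]
    for _ in range(generations):
        capability += k * capability ** 2  # better designers make disproportionately better successors
        history.append(capability)
    return history

print(capability_trajectory(10))
# Growth starts modestly (roughly 1, 1.5, 2.6, 6.1, 24.5, ...) and then runs away:
# once k * capability exceeds 1, capability roughly squares each generation.
```

The specific rule is arbitrary; what the sketch illustrates is how a process in which the designer's output feeds back into its own design ability can produce runaway rather than steady growth.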



The organization currently devoting its efforts to ensuring the safe development of AI is the Singularity Institute for Artificial Intelligence. The goal of its research team is to ensure that advances in machine intelligence reflect the humane values of human civilization.

The positive side of accelerating technological change is the possibility that human civilization might overcome the constraints forced upon it by eons of environmental selection pressures. By creating technologies that enhance intelligence and improve the potential for mutual cooperation, our species may well continue to elude self-destruction. While relinquishment is most likely impossible to implement safely, and runs counter to the human desire to expand knowledge, the development of human society ought to be guided by the ethical imperatives that safeguard our survival in the face of existential risk.
