
An existential risk is a possible future event or scenario in which Earth-originating intelligent life goes extinct, or in which its potential for desirable future development is drastically and permanently curtailed. Existential risks are a particularly severe subset of global catastrophic risks.

Some potential sources of risk

 * Artificial general intelligence
 * Runaway climate change
 * Supervolcanoes
 * Asteroid or comet impact
 * Astrophysical events such as gamma ray bursts, which some have speculated caused past extinction events; there is also speculation that mass extinctions recur periodically as the solar system passes through the density waves of the galaxy's spiral arms
 * Pandemics, whether natural or engineered
 * Permanent global totalitarian government
 * Ecological disasters
 * Exhaustion of natural resources
 * Nuclear holocaust
 * Extraterrestrial invasion
 * Grey goo
 * Experimental accident

Probability of an existential catastrophe
The following are examples of individuals and institutions that have made probability estimates for existential events. Some risks can be estimated with considerable precision: an asteroid impact, for instance, is thought to have about a one-in-a-million chance of causing human extinction in the next century (though some scholars argue the actual rate of large impacts could be much higher than originally calculated). Similarly, volcanic eruptions severe enough to cause catastrophic climate change, on the scale of the Toba eruption, which may have nearly caused the extinction of the human race, have been estimated to occur about once every 50,000 years. The relative danger posed by other threats is much more difficult to calculate. In 2008, a group of "experts on different global catastrophic risks" at the Global Catastrophic Risk Conference at the University of Oxford suggested a 19% chance of human extinction over the next century. However, the conference report cautions that the methods used to average responses to the informal survey are suspect due to the treatment of non-responses. The probabilities estimated for various causes are summarized below.
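Recurrence-interval figures like those above can be converted into per-century probabilities. A minimal sketch, assuming events arrive as a Poisson process with a constant rate (an assumption for illustration, not something the cited estimates specify):

```python
import math

def prob_within(years, recurrence_interval_years):
    """Probability of at least one event within `years`, assuming a
    Poisson process with the given mean recurrence interval."""
    rate = 1.0 / recurrence_interval_years
    return 1.0 - math.exp(-rate * years)

# A Toba-scale eruption roughly once every 50,000 years implies,
# over the next century, a probability of about 0.2%:
p_supervolcano = prob_within(100, 50_000)

# A one-in-a-million chance of an extinction-level asteroid impact
# per century corresponds to a mean recurrence interval of about
# 100 million years:
p_asteroid = prob_within(100, 100_000_000)
```

For rare events the per-century probability is nearly equal to 100 years divided by the recurrence interval, since 1 − e^(−x) ≈ x for small x.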


{| class="wikitable"
! Risk !! Probability of human extinction before 2100
|-
| Molecular nanotechnology weapons || 5%
|-
| Superintelligent AI || 5%
|-
| Wars || 4%
|-
| Engineered pandemic || 2%
|-
| Nuclear war || 1%
|-
| Nanotechnology accident || 0.5%
|-
| Natural pandemic || 0.05%
|-
| Nuclear terrorism || 0.03%
|}

Table source: Future of Humanity Institute, 2008.

During the Cuban Missile Crisis, then-President Kennedy estimated that the chance of nuclear war was between one in three and one in two.

There are significant methodological challenges in estimating these risks with precision. Most attention has been given to risks to human civilization over the next 100 years, but forecasting over this length of time is difficult. The types of threats posed by nature may prove relatively constant, though new risks could be discovered. Anthropogenic threats, however, are likely to change dramatically with the development of new technology; while volcanoes have been a threat throughout history, nuclear weapons have only been an issue since the 20th century. Historically, the ability of experts to predict the future over these timescales has proved very limited. Man-made threats such as nuclear war or nanotechnology are harder to predict than natural threats, due to the inherent methodological difficulties in the social sciences. In general, it is hard to estimate the magnitude of the risk posed by these or other dangers, especially as both international relations and technology can change rapidly.

Existential risks pose unique challenges to prediction, even more than other long-term events, because of observation selection effects. Unlike with most events, the failure of a complete extinction event to occur in the past is not evidence against its likelihood in the future: every world that has experienced such an event has no observers, so regardless of their frequency, no civilization observes existential risks in its history. These anthropic issues can be avoided by looking at evidence that does not have such selection effects, such as asteroid impact craters on the Moon, or by directly evaluating the likely impact of new technology.

Fermi paradox
Given the frequency of extra-solar planets, the speed with which the Earth spawned life, and the size of the observable universe, it may seem likely that life would have independently arisen on other planets. However, our planet has not been colonized by aliens, and we cannot see any large-scale astro-engineering projects anywhere in the known Universe; this is known as the Fermi paradox. One proposed, but not widely accepted, explanation for this paradox is that many planets develop civilizations similar to ours, but almost none of these civilizations survive to colonize space.

Moral importance of existential risk
Some scholars have strongly favored reducing existential risk on the grounds that it greatly benefits future generations. Derek Parfit argues that extinction would be a great loss because our descendants could potentially survive for a billion years before the expansion of the Sun makes the Earth uninhabitable. Bostrom argues that there is even greater potential in colonizing space. If future humans colonize space, they may be able to support a very large number of people on other planets, potentially lasting for trillions of years. Therefore, reducing existential risk by even a small amount would have a very significant impact on the expected number of people that will exist in the future.
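The reasoning behind this argument is simple expected-value arithmetic. A minimal sketch with hypothetical numbers (the population figure and the size of the risk reduction below are illustrative assumptions, not estimates from Parfit or Bostrom):

```python
# Illustrative expected-value calculation for existential risk reduction.
# Both inputs are hypothetical assumptions chosen for round numbers.
future_people = 10**16    # assumed number of future people if humanity survives
risk_reduction = 1e-6     # assumed one-in-a-million cut in extinction probability

# Expected additional future lives from the intervention:
expected_lives = future_people * risk_reduction
```

On these assumptions, even a one-in-a-million reduction in extinction probability corresponds to ten billion expected future lives, which is the sense in which "even a small amount" of risk reduction carries very large expected value.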

Little has been written arguing against these positions, though some scholars would disagree.

Some economists have discussed the importance of global catastrophic risks, though not existential risks. Martin Weitzman argues that most of the expected economic damage from climate change may come from the small chance that warming greatly exceeds the mid-range expectations, resulting in catastrophic damage. Richard Posner has argued that we are doing far too little, in general, about small, hard-to-estimate risks of large scale catastrophes.

Scope insensitivity influences how bad people consider the extinction of the human race to be. For example, when people are motivated to donate money to altruistic causes, the quantity they’re willing to give does not increase linearly with the magnitude of the issue: people report being roughly as concerned about 200,000 birds getting stuck in oil as about 2,000. Similarly, people are often more concerned about threats to individuals than to larger groups.

Existential risk reduction organizations

 * The Centre for the Study of Existential Risk studies four major technological risks: artificial intelligence, biotechnology, global warming and warfare.
 * Center for Responsible Nanotechnology researches the risks of nanotechnology and how it affects the survival of humanity
 * Foresight Institute aims to increase the benefits and reduce the risks of nanotechnology
 * Machine Intelligence Research Institute researches the risks of artificial intelligence
 * Future of Humanity Institute researches the moral questions of humanity's long-term future, particularly existential risk
 * The Lifeboat Foundation is "a nonprofit nongovernmental organization dedicated to encouraging scientific advancements while helping humanity survive existential risks and possible misuse of increasingly powerful technologies, including genetic engineering, nanotechnology, and robotics/AI, as we move towards the Singularity."