User:DrSheHulk/Artificial intelligence arms race

Editing article Artificial intelligence arms race

Original:

Risks
Nick Bostrom and others argue an AI race could cause powers to skimp on safety precautions.

Stephen Cave of the Leverhulme Centre argues the risk is threefold, with the first risk potentially having geopolitical implications, and the second two definitely having geopolitical implications:

i) The dangers of an AI 'race for technological advantage' framing, regardless of whether the race is seriously pursued;

ii) The dangers of an AI 'race for technological advantage' framing and an actual AI race for technological advantage, regardless of whether the race is won;

iii) The dangers of an AI race for technological advantage being won.

Cave argues the risk is compounded in the case of a race to artificial general intelligence, which may present an existential risk.

Arms-race terminology is also sometimes used in the context of competition for economic dominance and "soft power"; for example, the November 2019 'Interim Report' of the United States' National Security Commission on Artificial Intelligence, while stressing the role of diplomacy in engaging with China and Russia, adopts the language of a competitive arms race. It states that US military-technological superiority is vital to the existing world order and stresses that the ongoing US militarization of AI, together with the militarization of AI by China and Russia, is for geopolitical purposes:

Developments in AI cannot be separated from the emerging strategic competition with China and developments in the broader geopolitical landscape. We are concerned that America’s role as the world’s leading innovator is threatened. We are concerned that strategic competitors and non-state actors will employ AI to threaten Americans, our allies, and our values. We know strategic competitors are investing in research and application. It is only reasonable to conclude that AI-enabled capabilities could be used to threaten our critical infrastructure, amplify disinformation campaigns, and wage war.

In Foreign Policy, Paul Scharre warns that rhetoric about an AI arms race could itself become a self-fulfilling prophecy.

My rewrite:

%%%%%%% Moved live Nov 10 %%%%%

Risks
Stephen Cave of the Leverhulme Centre argues that the risks of an AI race are threefold, with the first risk potentially having geopolitical implications and the latter two definitely having geopolitical implications. The first risk is that even if there is no race, the terminology surrounding the race is dangerous. The rhetoric around the AI race and the importance of being first does not encourage the kind of thoughtful deliberation with stakeholders required to produce AI technology that is most broadly beneficial to society, and this kind of competitive rhetoric could become self-fulfilling, sparking a race where none originally existed.

The second risk is that a race to develop AI actually emerges, whether or not the race is won by any one group. Because of the rhetoric and the perceived advantage of being the first to develop advanced AI technology, there is a strong incentive to cut corners on safety considerations, which might leave out important aspects such as bias and fairness. In particular, the perception of another team being on the brink of a breakthrough encourages other teams to take shortcuts and deploy an AI system that is not ready, which can be harmful both to others and to the group possessing the AI system. As Paul Scharre warns in Foreign Policy, "For each country, the real danger is not that it will fall behind its competitors in AI but that the perception of a race will prompt everyone to rush to deploy unsafe AI systems. In their desire to win, countries risk endangering themselves just as much as their opponents." Nick Bostrom and others developed a model that provides further evidence of this: the more information a team had about other teams' capabilities, the more risk-taking and shortcuts it took in developing its AI system, and the greater the enmity between teams, the greater the risk of precautions being ignored and of an AI disaster. Another danger of an actual race is the risk of losing control of the AI systems; this risk is compounded in the case of a race to artificial general intelligence, which may present an existential risk.

The third risk of an AI arms race is that the race is actually won by one group, consolidating power and technological advantage in that group's hands. If one group achieves superior AI technology, as the National Security Commission on Artificial Intelligence warns, "[i]t is only reasonable to conclude that AI-enabled capabilities could be used to threaten our critical infrastructure, amplify disinformation campaigns, and wage war."

Arms-race terminology is also sometimes used in the context of competition for economic dominance and "soft power"; for example, the November 2019 'Interim Report' of the United States' National Security Commission on Artificial Intelligence, while stressing the role of diplomacy in engaging with China and Russia, adopts the language of a competitive arms race. It states that US military-technological superiority is vital to the existing world order and stresses that the ongoing US militarization of AI, together with the militarization of AI by China and Russia, is for geopolitical purposes.

%%%%%%%%%%%

Research from the references:

Main points from: A simplified model of an AI arms race


 * Assumptions made:
    * There is a definite probability of an AI-related disaster, given the creation of AI.
    * The probability of such a disaster goes up the more the AI development team skimps on precautions.
    * There is great value in being the first.
 * Factors that increase the danger:
    * Building the AI depends more on risk-taking than on skill.
    * Extra information about other teams exacerbates the danger.
    * (Conversely, reducing enmity between teams or the number of teams REDUCES the risk.)
 * Findings (illustrated by the toy sketch below):
    * More skill in building the AI reduces risk-taking.
    * Reducing enmity reduces the risk of disaster.
    * Knowing more about opponents' capabilities causes more risk-taking.
    * Adding extra teams increases the dangers.

Is citation title incorrect?
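
To check the intuition behind these findings, here is a minimal Monte Carlo sketch in Python. It is not the paper's actual game-theoretic analysis: the behavioural rule (a team cuts corners hard when it can see a rival close to its own capability level), the parameter names (baseline_skimping, panic_skimping, capability_weight), and all the numbers are illustrative assumptions, not values from the source.

# Toy Monte Carlo sketch of an AI-race safety model. Illustrative only:
# the rules and parameters below are assumptions, not the cited paper's model.
import random

def simulate(n_teams, capability_weight, full_information,
             baseline_skimping=0.3, panic_skimping=0.9, trials=50_000):
    """Return the fraction of simulated races that end in an AI disaster."""
    disasters = 0
    for _ in range(trials):
        skills = [random.random() for _ in range(n_teams)]
        skimping = []
        for i, own in enumerate(skills):
            rival_close = full_information and any(
                abs(own - other) < 0.3
                for j, other in enumerate(skills) if j != i)
            # A rival that looks "on the brink of a breakthrough" triggers corner-cutting.
            skimping.append(panic_skimping if rival_close else baseline_skimping)
        # Effective capability mixes genuine skill with danger-courting shortcuts.
        capability = [s + capability_weight * k for s, k in zip(skills, skimping)]
        winner = max(range(n_teams), key=lambda t: capability[t])
        # The more the winning team skimped on precautions, the likelier a disaster.
        if random.random() < skimping[winner]:
            disasters += 1
    return disasters / trials

if __name__ == "__main__":
    for full_info in (False, True):
        for teams in (2, 5):
            rate = simulate(n_teams=teams, capability_weight=1.0,
                            full_information=full_info)
            print(f"teams={teams}  full_information={full_info}  "
                  f"disaster rate ~ {rate:.2f}")

Under these assumed rules, the printed disaster rates rise when teams have full information about rivals and when more teams are added, which is consistent with the qualitative findings listed above; it is only a plausibility check, not evidence.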

Expanding on citation 2 (Cave):


 * The dangers listed above can be worded as:
    * Even if there is no race, the terminology surrounding the race is dangerous. The rhetoric around the AI race and the importance of being first does not encourage the kind of thoughtful deliberation with stakeholders required to produce AI technology that is most broadly beneficial to society. This kind of competitive rhetoric could spark a race where none originally existed.
    * If a race actually emerges: there is a strong incentive to cut corners on safety considerations, which might leave out important aspects such as bias and fairness, and there is a risk of losing control of the AI systems. The race could also lead to real conflict, in the form of cyber attacks or the targeting of key individuals, and turn competitors into enemies.
    * If the race is won: too much power or technological advantage ends up in the hands of one group.

Citation 5 (Scharre): Seems to relate to citation 2, so tie together?

"For each country, the real danger is not that it will fall behind its competitors in AI but that the perception of a race will prompt everyone to rush to deploy unsafe AI systems. In their desire to win, countries risk endangering themselves just as much as their opponents."

The perception of another team being on the brink of a breakthrough will encourage other teams to take shortcuts and deploy an AI system that is not ready.

From the Interim report

%%%%%%%%%%%%% Jan 30, 2021 %%%%%%%%%%%%

On the stance of the EU:

European Union
The European Parliament holds the position that humans must have oversight and decision-making power over lethal autonomous weapons. However, it is up to each member state to determine its stance on the use of autonomous weapons, and the mixed stances of the member states are perhaps the greatest hindrance to the European Union's ability to develop autonomous weapons. Some members, such as France, Germany, Italy, and Sweden, are developing lethal autonomous weapons; some remain undecided about the use of autonomous military weapons; and Austria has even called for a ban on the use of such weapons.

Some EU member states have developed and are developing automated weapons. Germany has developed the Active Defense System, an active protection system that can respond to a threat with complete autonomy in less than a millisecond. Italy plans to incorporate autonomous weapons systems into its future military plans.

%%%%% Above was pushed live on Jan 30, 2021%%%%%%%%%%%%%%

Other possible edits:


 * Alphabetize the countries in the stances section
 * From this source, the top 5 world leaders in lethal autonomous weapons development are: the United States, China, Russia, South Korea, and the European Union. More from this source?
 * This source could have a lot more that could be used.
 * General editing - remove all of the so and so said
 * Other reactions section: better title, more info. Get rid of?
 * Disassociation section: better title, not its own section?
 * A history section? Seems sort of combined with the Stances section.
 * From create a table that has some of the information in their table. I like the idea about defense spending, patents, publications. Some others.
 * Rewrite the lead section - it is very short as is.

Other sources:

Read over this one from Towards Data Science on the AI Arms Race in 2020.

Military spending

Use Statista (through nd library) to see reports, trends, future predictions on things like military spending, artificial intelligence, drone usage.