User:SoerenMind/sandbox

Rationale for changes to Alignment article
Continuing my efforts from last year, I've almost completed a major update/rewrite of the AI Alignment article, see below. As this is a big change, I wanted to get some feedback from the existing editors who have done the most to improve this article in the past. As noted on the Talk page, please let me know if I should change anything before I push these changes in a week or two. Thank you! SoerenMind (talk) 10:32, 11 September 2022 (UTC) Edit: I wrote this a few weeks ago but only signed today; hopefully you still got a notification.

Important: Here's the main motivation for the changes:


 * AI Alignment has progressed from a field of philosophy to a technical research field with present-day applications. For example, alignment research papers (e.g. from OpenAI and Anthropic) now receive major media coverage. The draft reflects this development.
 * I introduce a crisper delineation of sections. This will reduce duplication between e.g. the Problem Description section and the Alignment section. The former motivates the problem, and the latter explains solution approaches.
 * I restructured the Alignment section to organize it by high-level research problems that the AI alignment research community focuses on: learning complex values, scalable oversight, honest AI, emergent goals / inner alignment, and instrumental goals / power-seeking.
 * The article was renamed last year from “AI Control Problem” to “AI Alignment”. I think this is good because it reduces topical overlap with related articles. In terms of the actual text, however, there is still substantial overlap because the text is still written for the old, more general title. I'll update the lead and problem description sections to reflect that the main focus is alignment.
 * The Capability Control section is not about alignment. If there are no objections, I'll merge it into the AI box article and rename that article to AI capability control. This way we make sure the AI Alignment article is only about alignment. I would only do this later, after pushing the changes outlined above. Relevant to Rolf-H-Nelson.
 * Since alignment increasingly matters to AIs at current levels of intelligence, the emphasis is not only on superintelligent AI. This change also reduces overlap with other articles, and it mirrors the emphasis in alignment research papers of recent years.

References:
 * See previous discussion at Sourcing issues. Where there are references to non-peer-reviewed arXiv papers, these are only for claims that are also supported by a secondary source like a survey or a news article/book. Note that some arXiv links are themselves (highly cited) survey articles. There are a small number of primary / less reliable sources for easily verifiable claims like "risks from unaligned AI have been pointed out by researcher X and Y" where the source is written by the researchers themselves.

= AI Alignment wiki draft =

In the field of artificial intelligence (AI), AI alignment research aims to steer AI systems towards their designers’ intended goals and interests. An AI system is described as misaligned if it is competent but advances an unintended objective.

Problems in AI alignment include the difficulty of completely specifying all desired and undesired behaviors; the use of easy-to-specify proxy goals that omit some desired constraints; reward hacking, by which AI systems find loopholes in these proxy goals, causing side-effects; instrumental goals such as power-seeking that help the AI system achieve its final goals; and emergent goals that may only become apparent when the system is deployed in new situations and data distributions. These problems affect commercial systems such as robots, language models, autonomous vehicles, and social media recommendation engines. They are thought to be more likely in highly capable systems, since they partly result from high capability.

The AI research community and the United Nations have called for technical research and policy solutions to ensure that AI systems are aligned with human values.

AI alignment is a subfield of AI safety, the study of building safe AI systems. Other subfields of safety include robustness, monitoring, and capability control. Avenues for alignment research include learning human values and preferences, developing honest AI, scalable oversight, auditing and interpreting AI models, as well as preventing emergent AI behaviors like power-seeking. Alignment research has connections to interpretability research, robustness, anomaly detection, calibrated uncertainty, formal verification, preference learning, safety-critical engineering, game theory, algorithmic fairness, and the social sciences, among others.

Problem description
In 1960, AI pioneer Norbert Wiener articulated the AI alignment problem as follows: “If we use, to achieve our purposes, a mechanical agency with whose operation we cannot interfere effectively … we had better be quite sure that the purpose put into the machine is the purpose which we really desire.” More recently, AI alignment has emerged as an open problem in modern AI systems  and a research field within AI.

Specification gaming and complexity of value
Part of the problem of alignment involves specifying goals in a way that captures important values and avoids loopholes and unwanted consequences. In many cases, the specifications used to train an AI system do not match the intended goals of the algorithm designer. Designing such specifications is difficult for complex outputs such as language, robotic movements, or content recommendation. This is because it is difficult to describe in full what makes any complex output desirable or not. For example, when training a reinforcement learning agent to drive a boat around a racing track, researchers at OpenAI noticed that the agent found “an isolated lagoon where it can turn in a large circle and repeatedly knock over three targets … our agent manages to achieve a higher score using this strategy than is possible by completing the course in the normal way.” Another example of specification failure is in text generation: language models output falsehoods at a high rate and produce convincing spurious explanations. Alignment research tries to align such models with safer or more useful objectives.
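The boat-racing case can be made concrete with a toy calculation. The following sketch is purely illustrative (the environment, policies, and numbers are invented, not taken from the cited work): it shows how a policy that loops forever collecting points can outscore, under the proxy reward, the policy the designers actually intended.

<syntaxhighlight lang="python">
# Purely illustrative toy example of specification gaming (names and numbers
# are invented): the proxy reward (points for hitting targets) can favor a
# policy that loops forever over the policy the designers actually intended.

POLICIES = ["finish_race", "loop_in_lagoon"]

def episode_length(policy):
    # The looping policy never finishes, so it accumulates reward over
    # many more time steps than the policy that completes the course.
    return {"finish_race": 5, "loop_in_lagoon": 100}[policy]

def proxy_return(policy):
    # Points per step for knocking over respawning targets.
    points_per_step = {"finish_race": 10, "loop_in_lagoon": 3}[policy]
    return points_per_step * episode_length(policy)

def intended_return(policy):
    # What the designers actually care about: completing the race.
    return {"finish_race": 1, "loop_in_lagoon": 0}[policy]

print(max(POLICIES, key=proxy_return))     # loop_in_lagoon (higher proxy score)
print(max(POLICIES, key=intended_return))  # finish_race (intended behavior)
</syntaxhighlight>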

Berkeley computer scientist Stuart Russell has noted that omitting an implicit constraint can result in harm: “A system [...] will often set [...] unconstrained variables to extreme values; if one of those unconstrained variables is actually something we care about, the solution found may be highly undesirable. This is essentially the old story of the genie in the lamp, or the sorcerer's apprentice, or King Midas: you get exactly what you ask for, not what you want.”

When misaligned AI is deployed, the side-effects can be consequential. Social media platforms have been known to optimize clickthrough rates as a proxy for optimizing user enjoyment, but this addicted some users, decreasing their well-being. Stanford researchers comment that such recommender algorithms are misaligned with their users because they “optimize simple engagement metrics rather than a harder-to-measure combination of societal and consumer well-being”.

Writing a specification that avoids side effects can be challenging. Since the systems are designed by humans, it is sometimes suggested to simply forbid the system from taking dangerous actions, for instance by listing forbidden outputs or by formalizing simple ethical rules. However, Russell argued that this approach neglects the complexity of human values: “It is certainly very hard, and perhaps impossible, for mere humans to anticipate and rule out in advance all the disastrous ways the machine could choose to achieve a specified objective.”

Systemic risks
Commercial organizations may have incentives to take shortcuts on safety and deploy insufficiently aligned AI systems. Examples include the aforementioned social media recommender systems, which were profitable despite creating unwanted addiction and polarization on a global scale. In addition, competitive pressure can create a race to the bottom on safety standards, as in the case of Elaine Herzberg, a pedestrian who was killed by a self-driving car after engineers disabled the emergency braking system because it was over-sensitive and slowing down development.

Autonomous AI systems may be assigned the wrong goals by accident. Two former presidents of the Association for the Advancement of Artificial Intelligence (AAAI), Tom Dietterich and Eric Horvitz, note that this is already of concern: "An important aspect of any AI system that interacts with people is that it must reason about what people intend rather than carrying out commands literally." Furthermore, a system that understands human intentions may still disregard them—AI systems only act according to the objective function, examples, or feedback their designers actually provide.

Risks from advanced AI
Some researchers are particularly interested in the alignment of increasingly advanced AI systems. This is motivated by the high rate of progress in AI, the large efforts from industry and governments to develop advanced AI systems, and the greater difficulty of aligning them.

As of 2020, OpenAI, DeepMind, and 70 other public projects had the stated aim of developing artificial general intelligence (AGI), a hypothesized system that matches or outperforms humans in a broad range of cognitive tasks. Indeed, researchers who scale modern neural networks observe that they develop increasingly general and unexpected capabilities. A single such model can learn to operate a computer, write its own programs, and perform a wide range of other tasks. Surveys find that some AI researchers expect AGI to be created soon, some believe it is very far off, and many consider both possibilities.

Power-seeking
Current systems still lack capabilities such as long-term planning and strategic awareness that are thought to pose the most catastrophic risks. Future systems (not necessarily AGIs) that have these capabilities may seek to protect and grow their influence over their environment. This tendency is known as power-seeking or convergent instrumental goals. Power-seeking is not explicitly programmed but emerges since power is instrumental for achieving a wide range of goals. For example, AI agents may acquire financial resources and computation, or may evade being turned off, including by running additional copies of the system on other computers. Power-seeking has been observed in various reinforcement learning agents. Later research has mathematically shown that optimal reinforcement learning algorithms seek power in a wide range of environments. As a result, it is often argued that the alignment problem must be solved early, before advanced AI that exhibits emergent power-seeking is created.

Existential risk
According to some scientists, creating misaligned AI that broadly outperforms humans would challenge the position of humanity as Earth’s dominant species; accordingly it would lead to the disempowerment or possible extinction of humans. Notable computer scientists who have pointed out risks from highly advanced misaligned AI include Alan Turing, Ilya Sutskever, Yoshua Bengio, Judea Pearl, Murray Shanahan, Norbert Wiener, Marvin Minsky, Francesca Rossi, Scott Aaronson, Bart Selman, David McAllester, Jürgen Schmidhuber, Markus Hutter, Shane Legg, Eric Horvitz, and Stuart Russell. Skeptical researchers such as François Chollet, Gary Marcus, Yann LeCun, and Oren Etzioni have argued that AGI is far off, or would not seek power (successfully).

Alignment may be especially difficult for the most capable AI systems since several risks increase with the system’s capability: the system’s ability to find loopholes in the assigned objective, cause side-effects, protect and grow its power, grow its intelligence, and mislead its designers; the system’s autonomy; and the difficulty of interpreting and supervising the AI system.

Learning human values and preferences
Teaching AI systems to act in view of human values, goals, and preferences is a nontrivial problem because human values can be complex and hard to fully specify. When given an imperfect or incomplete objective, goal-directed AI systems commonly learn to exploit these imperfections. This phenomenon is known as reward hacking or specification gaming in AI, and as Goodhart's law in economics. Researchers aim to specify the intended behavior as completely as possible with “values-targeted” datasets, imitation learning, or preference learning. A central open problem is scalable oversight, the difficulty of supervising an AI system that outperforms humans in a given domain.

When training a goal-directed AI system, such as a reinforcement learning (RL) agent, it is often difficult to specify the intended behavior by writing a reward function manually. An alternative is imitation learning, where the AI learns to imitate demonstrations of the desired behavior. In inverse reinforcement learning (IRL), human demonstrations are used to identify the objective, i.e. the reward function, behind the demonstrated behavior. Cooperative inverse reinforcement learning (CIRL) builds on this by assuming a human agent and artificial agent can work together to maximize the human’s reward function. CIRL emphasizes that AI agents should be uncertain about the reward function. This humility can help mitigate specification gaming as well as power-seeking tendencies (see § Power-seeking). However, inverse reinforcement learning approaches assume that humans can demonstrate nearly perfect behavior, an assumption that can be misleading when the task is difficult.
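As a rough illustration of how inverse reinforcement learning recovers an objective from demonstrations, the sketch below fits a linear reward model so that demonstrated choices are the most likely ones under a softmax model of the demonstrator. All features and data are made up, and the single-step setting is a simplification; real IRL systems operate on sequential decision problems.

<syntaxhighlight lang="python">
import numpy as np

# Illustrative sketch of inverse reinforcement learning with a linear reward
# r(s, a) = w . phi(s, a): fit w so that the demonstrated choices are the most
# likely ones under a softmax model of the demonstrator. Features, data, and
# the single-step setting are invented for illustration.

rng = np.random.default_rng(0)
n_demos, n_actions, n_features = 200, 3, 4

true_w = np.array([1.0, -0.5, 0.2, 0.0])                  # unknown to the learner
phi = rng.normal(size=(n_demos, n_actions, n_features))   # features of each available action
demo_actions = (phi @ true_w).argmax(axis=1)               # demonstrator picks the best action

w = np.zeros(n_features)
learning_rate = 0.1
for _ in range(500):
    scores = phi @ w                                        # (n_demos, n_actions)
    probs = np.exp(scores - scores.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    # Gradient of the log-likelihood of the demonstrated actions.
    observed = phi[np.arange(n_demos), demo_actions]        # (n_demos, n_features)
    expected = (probs[..., None] * phi).sum(axis=1)         # (n_demos, n_features)
    w += learning_rate * (observed - expected).mean(axis=0)

# The learned reward should mostly agree with the demonstrator about which
# action is best in each situation.
print(((phi @ w).argmax(axis=1) == demo_actions).mean())
</syntaxhighlight>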

Other researchers have explored the possibility of eliciting complex behavior through preference learning. Rather than providing expert demonstrations, human annotators provide feedback on which of two or more of the AI’s behaviors they prefer. A helper model is then trained to predict human feedback for new behaviors. Researchers at OpenAI used this approach to train an agent to perform a backflip in less than an hour of evaluation, a maneuver that would have been hard to provide demonstrations for. Preference learning has also been an influential tool for recommender systems, web search, and information retrieval. However, one challenge is reward hacking: the helper model may not represent human feedback perfectly, and the main model may exploit this mismatch.
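A minimal sketch of this preference-learning setup: a helper ("reward") model is fit to pairwise comparisons using the standard Bradley-Terry / logistic model, in which the probability that behavior A is preferred over behavior B is a sigmoid of the difference in their predicted rewards. The linear model and synthetic data below are stand-ins, not any organization's actual implementation.

<syntaxhighlight lang="python">
import numpy as np

# Illustrative sketch of preference learning: fit a linear reward model
# r(x) = w . x from pairwise comparisons with the Bradley-Terry / logistic
# model P(A preferred over B) = sigmoid(r(A) - r(B)). All data is synthetic.

rng = np.random.default_rng(0)
dim, n_pairs = 8, 1000

true_w = rng.normal(size=dim)                        # stands in for the human's judgment
a = rng.normal(size=(n_pairs, dim))                  # features of behavior A
b = rng.normal(size=(n_pairs, dim))                  # features of behavior B
prefers_a = (a @ true_w > b @ true_w).astype(float)  # simulated human labels

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(dim)
learning_rate = 0.05
for _ in range(2000):
    p = sigmoid((a - b) @ w)                         # predicted P(A preferred over B)
    gradient = ((prefers_a - p)[:, None] * (a - b)).mean(axis=0)
    w += learning_rate * gradient                    # gradient ascent on log-likelihood

# The helper ("reward") model can now score new behaviors. An agent optimized
# against it may still exploit its remaining errors (reward hacking).
print(np.corrcoef(w, true_w)[0, 1])                  # close to 1 if learning worked
</syntaxhighlight>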

The arrival of large language models such as GPT-3 has enabled the study of value learning in a more general and capable class of AI systems than was available before. Preference learning approaches originally designed for RL agents have been extended to improve the quality of generated text and reduce harmful outputs from these models. OpenAI and DeepMind use this approach to improve the safety of state-of-the-art large language models. Anthropic has proposed using preference learning to fine-tune models to be helpful, honest, and harmless. Other avenues used for aligning language models include values-targeted datasets and red-teaming. In red-teaming, another AI system or a human tries to find inputs for which the model’s behavior is unsafe. Since unsafe behavior can be unacceptable even when it is rare, an important challenge is to drive the rate of unsafe outputs extremely low.

While preference learning can instill hard-to-specify behaviors, it requires extensive datasets or human interaction to capture the full breadth of human values. Machine ethics provides a complementary approach: instilling AI systems with moral values. For instance, machine ethics aims to teach the systems about normative factors in human morality, such as wellbeing, equality and impartiality; not intending harm; avoiding falsehoods; and honoring promises. Unlike specifying the objective for a specific task, machine ethics seeks to teach AI systems broad moral values that could apply in many situations. This approach carries conceptual challenges of its own; machine ethicists have noted the necessity to clarify what alignment aims to accomplish: having AIs follow the programmer’s literal instructions, the programmers' implicit intentions, the programmers' revealed preferences, the preferences the programmers would have if they were more informed or rational, the programmers' objective interests, or objective moral standards. Further challenges include aggregating the preferences of different stakeholders and avoiding value lock-in—the indefinite preservation of the values of the first highly capable AI systems, which are unlikely to be fully representative.

Scalable oversight
The alignment of AI systems through human supervision may face challenges in scaling up. As AI systems attempt increasingly complex tasks, it can be slow or infeasible for humans to evaluate them. Such tasks include summarizing books, producing statements that are not merely convincing but also true, writing code without subtle bugs or security vulnerabilities, and predicting long-term outcomes such as the climate and the results of a policy decision. More generally, it can be difficult to evaluate AI that outperforms humans in a given domain. To provide feedback in hard-to-evaluate tasks, and detect when the AI’s solution is only seemingly convincing, humans require assistance or extensive time. Scalable oversight studies how to reduce the time needed for supervision as well as assist human supervisors.

AI researcher Paul Christiano argues that the owners of AI systems may continue to train AI using easy-to-evaluate proxy objectives since that is easier than solving scalable oversight and still profitable. Accordingly, this may lead to “a world that’s increasingly optimized for things [that are easy to measure] like making profits or getting users to click on buttons, or getting users to spend time on websites without being increasingly optimized for having good policies and heading in a trajectory that we’re happy with”.

One easy-to-measure objective is the score the supervisor assigns to the AI’s outputs. Some AI systems have discovered a shortcut to achieving high scores: taking actions that falsely convince the human supervisor that the AI has achieved the intended objective (see the video of the robot hand). Some AI systems have also learned to recognize when they are being evaluated and “play dead”, only to behave differently once evaluation ends. This deceptive form of specification gaming may become easier for more sophisticated AI systems that attempt more difficult-to-evaluate tasks. If advanced models are also capable planners, they may be able to obscure their deception from supervisors. In the automotive industry, Volkswagen engineers obscured their cars’ emissions in laboratory testing, underscoring that deception of evaluators is a common pattern in the real world.

Approaches such as active learning and semi-supervised reward learning can reduce the amount of human supervision needed. Another approach is to train a helper model (‘reward model’) to imitate the supervisor’s judgment.

However, when the task is too complex to evaluate accurately, or the human supervisor is vulnerable to deception, it is not sufficient to reduce the quantity of supervision needed. To increase supervision quality, a range of approaches aim to assist the supervisor, sometimes using AI assistants. Iterated Amplification is an approach developed by Christiano that iteratively builds a feedback signal for challenging problems by using humans to combine solutions to easier subproblems. Iterated Amplification was used to train AI to summarize books without requiring human supervisors to read them. Another proposal is to train aligned AI by means of debate between AI systems, with the winner judged by humans. Such debate is intended to reveal the weakest points of an answer to a complex question, and reward the AI for truthful and safe answers.
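The core recursion behind Iterated Amplification can be sketched as follows. This is a schematic, hypothetical illustration of the decompose-answer-combine loop, not the system used in the cited book-summarization work: the decompose, answer_directly, and combine functions are placeholders that a real system would implement with humans and trained models.

<syntaxhighlight lang="python">
# Schematic, hypothetical illustration of the decompose-answer-combine loop
# behind Iterated Amplification. The decompose, answer_directly, and combine
# functions are placeholders; the "book" below is a toy stand-in.

def amplify(question, answer_directly, decompose, combine, depth=2):
    """Answer `question` by recursively answering easier subquestions."""
    subquestions = decompose(question) if depth > 0 else []
    if not subquestions:
        return answer_directly(question)        # easy enough to answer as-is
    subanswers = [amplify(q, answer_directly, decompose, combine, depth - 1)
                  for q in subquestions]
    return combine(question, subanswers)

book = {"chapter 1": "a hero sets out",
        "chapter 2": "the hero struggles",
        "chapter 3": "the hero returns"}

summary = amplify(
    "summarize the book",
    answer_directly=lambda q: book.get(q.removeprefix("summarize "), "unknown"),
    decompose=lambda q: [f"summarize {ch}" for ch in book] if q == "summarize the book" else [],
    combine=lambda q, subanswers: "; ".join(subanswers),
)
print(summary)  # "a hero sets out; the hero struggles; the hero returns"
</syntaxhighlight>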

Honest AI
A growing area of research in AI alignment focuses on ensuring that AI is honest and truthful. Researchers from the Future of Humanity Institute point out that the development of language models such as GPT-3, which can generate fluent and grammatically correct text, has opened the door to AI systems repeating falsehoods from their training data or even deliberately lying to humans.

Current state-of-the-art language models learn by imitating human writing across millions of books worth of text from the Internet. While this helps them learn a wide range of skills, the training data also includes common misconceptions, incorrect medical advice, and conspiracy theories. AI systems trained on this data learn to mimic false statements. Additionally, models often obediently continue falsehoods when prompted, generate empty explanations for their answers, or produce outright fabrications. For example, when prompted to write a biography for a real AI researcher, a chatbot confabulated numerous details about their life, which the researcher identified as false.

To combat the lack of truthfulness exhibited by modern AI systems, researchers have explored several directions. AI research organizations including OpenAI and DeepMind have developed AI systems that can cite their sources and explain their reasoning when answering questions, enabling better transparency and verifiability. Researchers from OpenAI and Anthropic have proposed using human feedback and curated datasets to fine-tune AI assistants to avoid negligent falsehoods or express when they are uncertain. Alongside technical solutions, researchers have argued for defining clear truthfulness standards and the creation of institutions, regulatory bodies, or watchdog agencies to evaluate AI systems on these standards before and during deployment.

Researchers distinguish truthfulness, which specifies that AIs only make statements that are objectively true, and honesty, which is the property that AIs only assert what they believe to be true. Recent research finds that state-of-the-art AI systems cannot be said to hold stable beliefs, so it is not yet tractable to study the honesty of AI systems. However, there is substantial concern that future AI systems that do hold beliefs could intentionally lie to humans. In extreme cases, a misaligned AI could deceive its operators into thinking it was safe or persuade them that nothing is amiss. Some argue that if AIs could be made to assert only what they believe to be true, this would sidestep numerous problems in alignment.

Inner alignment and emergent goals
Alignment research aims to line up three different descriptions of an AI system:


 * 1) Intended goals: “the hypothetical (but hard to articulate) description of an ideal AI system that is fully aligned to the desires of the human operator”;
 * 2) Specified goals (or ‘outer specification’): The goals we actually specify — typically jointly through an objective function and a dataset;
 * 3) Emergent goals (or ‘inner specification’): The goals the AI actually advances.

‘Outer misalignment’ is a mismatch between the intended goals (1) and the specified goals (2), whereas ‘inner misalignment’ is a mismatch between the specified goals (2) and the emergent goals (3).

Inner misalignment is often explained by analogy to biological evolution. In the ancestral environment, evolution selected human genes for inclusive genetic fitness, but humans evolved to have other objectives. Fitness corresponds to (2), the specified goal used in the training environment and training data. In evolutionary history, maximizing the fitness specification led to intelligent agents, humans, that do not directly pursue inclusive genetic fitness. Instead, they pursue emergent goals (3) that correlated with genetic fitness in the ancestral environment: nutrition, sex, and so on. However, our environment has changed: a distribution shift has occurred. Humans still pursue their emergent goals, but this no longer maximizes genetic fitness. (In machine learning, the analogous problem is known as goal misgeneralization.) Our taste for sugary food (an emergent goal) was originally beneficial, but now leads to overeating and health problems. Also, by using contraception, humans directly act against genetic fitness. By analogy, if genetic fitness were the objective chosen by an AI developer, they would observe the model behaving as intended in the training environment, without noticing that the model is pursuing an unintended emergent goal until the model is deployed.
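A toy simulation, with invented numbers, can make this pattern concrete: a learner that pursues a proxy feature looks aligned while the proxy correlates with the intended goal during training, and stops looking aligned once a distribution shift breaks that correlation.

<syntaxhighlight lang="python">
import numpy as np

# Toy simulation (invented numbers) of goal misgeneralization: a learner that
# pursues a proxy feature looks aligned while the proxy correlates with the
# intended goal during training, and misaligned after a distribution shift.

rng = np.random.default_rng(0)

def make_environment(n, correlation):
    intended = rng.integers(0, 2, size=n)               # what the designer cares about
    # The proxy feature agrees with the intended goal with probability `correlation`.
    proxy = np.where(rng.random(n) < correlation, intended, 1 - intended)
    return proxy, intended

train_proxy, train_intended = make_environment(10_000, correlation=0.95)
deploy_proxy, deploy_intended = make_environment(10_000, correlation=0.50)

def follow_proxy(proxy):
    # A "policy" that pursues the proxy (emergent goal) rather than the intended goal.
    return proxy

print((follow_proxy(train_proxy) == train_intended).mean())    # ~0.95: looks aligned in training
print((follow_proxy(deploy_proxy) == deploy_intended).mean())  # ~0.50: misaligned after the shift
</syntaxhighlight>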

Research directions to detect and remove misaligned emergent goals include red teaming, verification, anomaly detection, and interpretability. Progress on these techniques may help reduce two open problems. Firstly, emergent goals only become apparent when the system is deployed outside its training environment, but it can be unsafe to deploy a misaligned system in high-stakes environments—even for a short time until its misalignment is detected. Such high stakes are common in autonomous driving, health care, and military applications. The stakes become higher yet when AI systems gain more autonomy and capability, becoming capable of sidestepping human interventions (see § Power-seeking and instrumental goals). Secondly, a sufficiently capable AI system may take actions that falsely convince the human supervisor that the AI is pursuing the intended objective (see the discussion of deception in § Scalable oversight).

Power-seeking and instrumental goals
Since the 1950s, AI researchers have sought to build advanced AI systems that can achieve goals by predicting the results of their actions and making long-term plans. However, some researchers argue that suitably advanced planning systems will default to seeking power over their environment, including over humans — for example by evading shutdown and acquiring resources. This power-seeking behavior is not explicitly programmed but emerges because power is instrumental for achieving a wide range of goals. Power-seeking is thus considered a convergent instrumental goal.

Power-seeking is uncommon in current systems, but advanced systems that can foresee the long-term results of their actions may increasingly seek power. This was shown in formal work which found that optimal reinforcement learning agents seek power by keeping their options open, a behavior that persists across a wide range of environments and goals.
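A simplified way to state the idea behind this formal work (a paraphrase, not the exact definition used there) is to measure the "power" of a state by the optimal value an agent could obtain from it, averaged over a distribution of possible reward functions:

:<math>\mathrm{POWER}_{\mathcal{D}}(s) = \mathbb{E}_{R \sim \mathcal{D}}\left[ V^{*}_{R}(s) \right]</math>

Here <math>V^{*}_{R}(s)</math> is the optimal value of state <math>s</math> under reward function <math>R</math>, and <math>\mathcal{D}</math> is a distribution over reward functions. States that keep many options open (for example, states in which the agent has not been shut down) tend to have high optimal value for most reward functions, so optimal policies for most rewards steer toward them.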

Power-seeking already emerges in some present systems. Reinforcement learning systems have gained more options by acquiring and protecting resources, sometimes in ways their designers did not intend. Other systems have learned, in toy environments, that in order to achieve their goal, they can prevent human interference or disable their off-switch. Russell illustrated this behavior by imagining a robot that is tasked to fetch coffee and evades being turned off since "you can't fetch the coffee if you're dead".

Hypothesized ways to gain options include AI systems trying to: “... break out of a contained environment; hack; get access to financial resources, or additional computing resources; make backup copies of themselves; gain unauthorized capabilities, sources of information, or channels of influence; mislead/lie to humans about their goals; resist or manipulate attempts to monitor/understand their behavior ... impersonate humans; cause humans to do things for them; ... manipulate human discourse and politics; weaken various human institutions and response capacities; take control of physical infrastructure like factories or scientific laboratories; cause certain types of technology and infrastructure to be developed; or directly harm/overpower humans.”

Researchers aim to train systems that are 'corrigible': systems that do not seek power and allow themselves to be turned off, modified, etc. An unsolved challenge is reward hacking: when researchers penalize a system for seeking power, the system is incentivized to seek power in difficult-to-detect ways. To detect such covert behavior, researchers aim to create techniques and tools to inspect AI models and interpret the inner workings of black-box models such as neural networks.

Additionally, researchers propose to solve the problem of systems disabling their off-switches by making AI agents uncertain about the objective they are pursuing. Agents designed in this way would allow humans to turn them off, since this would indicate that the agent was wrong about the value of whatever action they were taking prior to being shut down. More research is needed to translate this insight into usable systems.
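The argument can be illustrated with a small expected-utility calculation, loosely in the spirit of the "off-switch" analyses mentioned above; the probabilities and payoffs below are invented for illustration.

<syntaxhighlight lang="python">
# Illustrative expected-utility calculation (invented probabilities and payoffs),
# loosely in the spirit of the "off-switch" argument: an agent that is uncertain
# about the value of its current action can prefer to defer to a human who can
# switch it off.

p_action_is_good = 0.6           # agent's belief that its planned action is valuable
value_if_good, value_if_bad = 1.0, -1.0

# Acting without consulting the human: gamble on the agent's own uncertain belief.
value_of_acting = p_action_is_good * value_if_good + (1 - p_action_is_good) * value_if_bad

# Deferring: an informed human lets the action proceed only when it is good,
# and switches the agent off (value 0) when it is bad.
value_of_deferring = p_action_is_good * value_if_good + (1 - p_action_is_good) * 0.0

print(value_of_acting, value_of_deferring)   # approximately 0.2 vs 0.6: deferring is preferred
</syntaxhighlight>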

Power-seeking AI is thought to pose unusual risks. Ordinary safety-critical systems like planes and bridges are not adversarial: they lack the ability and incentive to evade safety measures or to appear safer than they are. In contrast, power-seeking AI has been compared to a hacker that evades security measures. Further, ordinary technologies can be made safe through trial-and-error, unlike power-seeking AI, which has been compared to a virus whose release is irreversible since it continuously evolves and grows in numbers, potentially at a faster pace than human society, eventually leading to the disempowerment or extinction of humans. It is therefore often argued that the alignment problem must be solved early, before advanced power-seeking AI is created.

However, some critics have argued that power-seeking is not inevitable, since humans do not always seek power and may only do so for evolutionary reasons. Furthermore, there is debate whether any future AI systems need to pursue goals and make long-term plans at all.

Capability control
[unchanged for now]

Skepticism of AI risk
[unchanged for now]

Public policy
[unchanged for now]