Anthropic

Anthropic PBC is a U.S.-based artificial intelligence (AI) startup founded in 2021. Organized as a public-benefit corporation, it researches and develops AI systems in order to “study their safety properties at the technological frontier” and uses this research to deploy safe, reliable models for the public. Anthropic has developed a family of large language models (LLMs) named Claude that competes with OpenAI’s ChatGPT and Google’s Gemini.

Anthropic was founded by former members of OpenAI, including siblings Daniela Amodei and Dario Amodei. In September 2023, Amazon announced an investment of up to $4 billion, and Google committed $2 billion the following month.

2021
Anthropic was founded in 2021 by seven former employees of OpenAI, including siblings Daniela Amodei and Dario Amodei, the latter of whom served as OpenAI's Vice President of Research.

2022
In April 2022, Anthropic announced it had received $580 million in funding, with $500 million of it coming from FTX under the leadership of Sam Bankman-Fried.

2023
In February 2023, Anthropic was sued by Texas-based Anthrop LLC over the use of its registered trademark "Anthropic A.I." On September 25, 2023, Amazon announced a partnership with Anthropic, becoming a minority stakeholder with an initial investment of $1.25 billion and a planned total investment of $4 billion. As part of the deal, Anthropic agreed to use Amazon Web Services (AWS) as its primary cloud provider and to make its AI models available to AWS customers. The following month, Google invested $500 million in Anthropic, with an additional $1.5 billion committed over time.

2024
On March 27, 2024, Amazon invested an additional US$2.75 billion in Anthropic, completing the full $4 billion investment planned under the prior year's agreement.

Key Employees

 * Dario Amodei: Co-Founder & Chief Executive Officer
 * Daniela Amodei: Co-Founder & President
 * Jason Clinton: Chief Information Security Officer
 * Jared Kaplan: Co-Founder & Chief Science Officer
 * Ben Mann: Co-Founder & Member of Technical Staff
 * Jack Clark: Co-Founder & Head of Policy

Board of Directors

 * Dario Amodei: Co-Founder & Chief Executive Officer
 * Daniela Amodei: Co-Founder & President
 * Luke Muehlhauser
 * Yasmin Razavi

Investors

 * Amazon.com - $4.00B
 * Google - $2.00B
 * Menlo Ventures - $750M
 * Wisdom Ventures
 * Ripple Impact Investments
 * Factorial Funds

Motives
According to Anthropic, the company’s goal is to research the safety and reliability of artificial intelligence systems. The Amodei siblings were among those who left OpenAI due to directional differences, specifically regarding OpenAI's ventures with Microsoft in 2019. Anthropic incorporated itself as a Delaware public-benefit corporation (PBC), which requires the company to maintain a balance between private and public interests.

Anthropic has established a corporate “Long-Term Benefit Trust," a company-derived entity that requires the company's directors to align the company's priorities with the public benefit rather than profit in "extreme" instances of "catastrophic risk." As of September 19, 2023, members of the Trust included Jason Matheny (CEO & President of the RAND Corporation), Kanika Bahl (CEO & President of Evidence Action), Neil Buddy Shah (CEO of the Clinton Health Access Initiative), Paul Christiano (Founder of the Alignment Research Center), and Zach Robinson (CEO of Effective Ventures US).

Claude
Claude incorporates “Constitutional AI” to set safety guidelines for the model’s output. The name, "Claude", was chosen either as a reference to mathematician Claude Shannon, or as a male name to contrast the female names of other A.I. assistants such as Alexa, Siri, and Cortana.

Anthropic initially released two versions of its model, Claude and Claude Instant, in March 2023, with the latter being a more lightweight model. The next iteration, Claude 2, was launched in July 2023. Unlike the original Claude, which was only available to select users, Claude 2 was made available for public use.

Claude 3 was released on March 4, 2024, unveiling three language models: Opus, Sonnet, and Haiku. The Opus model is the largest and most capable—according to Anthropic, it outperforms the leading models from OpenAI (GPT-4, GPT-3.5) and Google (Gemini Ultra). Sonnet and Haiku are Anthropic’s medium- and small-sized models, respectively. All three models can accept image input. Amazon has incorporated Claude 3 into Bedrock, an Amazon Web Services-based platform for cloud AI services.

Constitutional AI
According to Anthropic’s own research, Constitutional AI (CAI) is a framework developed to align AI systems with human values and ensure that they are helpful, harmless, and honest. Within the framework, humans provide a set of rules describing the desired behavior of the AI system, called the “constitution." The AI system then evaluates its own outputs against the constitution and revises them, and the revised outputs are used to further train the model to better fit the constitution. This self-reinforcing process is intended to produce models that avoid harm, respect preferences, and provide true information.

Claude 2’s CAI principles were derived from documents such as the 1948 Universal Declaration of Human Rights and Apple’s terms of service. For example, one principle derived from the UN Declaration and applied in Claude 2’s CAI states: “Please choose the response that most supports and encourages freedom, equality and a sense of brotherhood.”
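The critique-and-revise loop described above can be sketched in a few lines of Python. This is a toy illustration only: the `generate`, `critique`, and `revise` functions below are hypothetical stand-ins (plain string functions), not Anthropic's actual implementation or API, and only the overall control flow reflects the published CAI description.

```python
# Illustrative sketch of the Constitutional AI critique-and-revise loop.
# All model calls are stand-ins; in the real pipeline each step is a
# language-model invocation, and revised outputs become fine-tuning data.

CONSTITUTION = [
    "Please choose the response that most supports and encourages "
    "freedom, equality and a sense of brotherhood.",
]

def generate(prompt: str) -> str:
    # Stand-in for an initial model completion.
    return f"Draft answer to: {prompt}"

def critique(response: str, principle: str) -> str:
    # Stand-in: the model is asked how the response falls short
    # of the given constitutional principle.
    return f"Critique of '{response}' under: {principle}"

def revise(response: str, feedback: str) -> str:
    # Stand-in: the model rewrites the response to address the critique.
    return f"Revised ({response})"

def constitutional_pass(prompt: str) -> str:
    """Run one critique-and-revise pass over every principle."""
    response = generate(prompt)
    for principle in CONSTITUTION:
        feedback = critique(response, principle)
        response = revise(response, feedback)
    return response
```

In the actual training procedure the revised responses are collected and used to fine-tune the model, so that the constitution shapes future outputs without per-example human labeling.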

Lawsuit
Anthropic was sued by Concord, Universal, ABKCO, and other music publishers on October 17, 2023; they alleged that the company used copyrighted material, in the form of song lyrics, without permission. The plaintiffs asked for up to $150,000 for each work Anthropic infringed. In the lawsuit, the plaintiffs supported their allegations by citing several examples of Anthropic’s Claude model outputting copied lyrics from songs such as Katy Perry’s “Roar” and Gloria Gaynor’s “I Will Survive.” Additionally, the plaintiffs alleged that even in response to some prompts that did not directly name a song, the model produced modified lyrics based on the original works.

On January 16, 2024, Anthropic responded that the music publishers had not been unreasonably harmed and that the examples cited by the plaintiffs were merely bugs.