The AI Now Institute at NYU (AI Now) is a research institute studying the social implications of artificial intelligence. AI Now was founded by Kate Crawford and Meredith Whittaker in 2017, after a symposium hosted by the White House under Barack Obama. It is located at New York University. AI Now is partnered with organizations such as the New York University Tandon School of Engineering, the New York University Center for Data Science, the Partnership on AI, and the ACLU. It produces annual reports that examine the social implications of artificial intelligence. AI Now conducts interdisciplinary research that focuses on four themes:
 * Bias and inclusion
 * Labor and automation
 * Rights and liberties
 * Safety and civil infrastructure

Founding and Mission
AI Now grew out of a 2016 symposium spearheaded by the Obama White House Office of Science and Technology Policy. The event was led by Meredith Whittaker, the founder of Google's Open Research Group, and Kate Crawford, a principal researcher at Microsoft Research. The event focused on near-term implications of AI in social domains: Inequality, Labor, Ethics, and Healthcare.

In November 2017, Whittaker and Crawford held a second symposium on AI and social issues, and publicly launched the AI Now Institute in partnership with New York University. It is claimed to be the first university research institute focused on the social implications of AI, and the first AI institute founded and led by women.

In an interview with NPR, Crawford stated that the motivation for founding AI Now was that the application of AI to social domains, such as health care, education, and criminal justice, was being treated as a purely technical problem. The goal of AI Now's research is to treat these as social problems first, and to bring in domain experts in areas like sociology, law, and history to study the implications of AI.

Research
Following each symposium, AI Now published an annual report on the state of AI and its integration into society. Its 2017 report stated that "current framings of AI ethics are failing" and provided ten strategic recommendations for the field, including pre-release trials of AI systems and increased research into bias and diversity in the field. The report was noted for calling for an end to "black box" systems in core social domains, such as those responsible for criminal justice, healthcare, welfare, and education.

In April 2018, AI Now released a framework for algorithmic impact assessments (AIAs), as a way for governments to assess the use of AI in public agencies. According to AI Now, an AIA would be similar to an environmental impact assessment, in that it would require public disclosure and access for external experts to evaluate the effects of an AI system and any unintended consequences. This would allow systems to be vetted for issues like biased outcomes or skewed training data, which researchers have already identified in algorithmic systems deployed across the country.
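In practice, vetting a system for biased outcomes often begins with simple fairness metrics computed over its decisions. The sketch below is purely illustrative and is not part of the AIA framework itself: it computes a disparate impact ratio (the favorable-outcome rate of the worse-off group divided by that of the better-off group) on hypothetical audit data; the function name and data are assumptions for the example.

```python
def disparate_impact(outcomes, groups, favorable=1):
    """Ratio of favorable-outcome rates between two groups.

    outcomes: list of 0/1 decisions produced by the system under review
    groups:   parallel list of group labels (exactly two distinct labels)
    Returns a value in [0, 1]; values far below 1.0 suggest the system
    favors one group over the other.
    """
    labels = sorted(set(groups))
    assert len(labels) == 2, "expects exactly two groups"
    rates = []
    for label in labels:
        decisions = [o for o, g in zip(outcomes, groups) if g == label]
        rates.append(sum(1 for o in decisions if o == favorable) / len(decisions))
    return min(rates) / max(rates)

# Toy audit data: a decision per subject and the group each subject belongs to.
outcomes = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(round(disparate_impact(outcomes, groups), 2))  # 0.75 vs 0.25 rate -> 0.33
```

A common reference point is the "four-fifths rule" from US employment guidelines, under which a ratio below 0.8 is treated as evidence of adverse impact; external reviewers given the disclosure an AIA mandates could run checks of this kind on a deployed system's decision records.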