Center for Security and Emerging Technology

The Center for Security and Emerging Technology (CSET) is a think tank based at Georgetown University's School of Foreign Service, dedicated to policy analysis at the intersection of national and international security and emerging technologies. CSET's founding director is Jason Gaverick Matheny, former director of the Intelligence Advanced Research Projects Activity. Its current executive director is Dewey Murdick, former Chief Analytics Officer and Deputy Chief Scientist at the Department of Homeland Security.

Established in January 2019, CSET has received more than $57 million in funding from the Open Philanthropy Project, the William and Flora Hewlett Foundation, and the Public Interest Technology University Network. CSET has faced criticism over its ties to the effective altruism movement.

Its mission is to study the security impacts of emerging technologies, support academic work in security and technology studies, and deliver nonpartisan analysis to the policy community. For its first two years, CSET planned to focus on the intersection of security and artificial intelligence (AI), particularly on national competitiveness, talent and knowledge flows, and relationships with other technologies. CSET is the largest center in the U.S. focused on AI and policy.

Public events
In September 2019, CSET co-hosted the George T. Kalaris Intelligence Conference, which featured speakers from academia, the U.S. government and the private sector.

Publications
CSET produces a biweekly newsletter, policy.ai. It has published research on various aspects of the intersection between artificial intelligence and security, including changes to the U.S. AI workforce, immigration laws' effect on the AI sector, and technology transfer overseas. Its research output includes policy briefs and longer published reports.

A study published in January 2023 by CSET, OpenAI, and the Stanford Internet Observatory, and covered by Forbes, warned: "There are also possible negative applications of generative language models, or ‘language models’ for short. For malicious actors looking to spread propaganda—information designed to shape perceptions to further an actor’s interest—these language models bring the promise of automating the creation of convincing and misleading text for use in influence operations, rather than having to rely on human labor."

In May 2023, Chinese officials announced that they would restrict some of the access that foreign countries had to their public information as a result of studies from think tanks like CSET, citing concerns about cooperation between the U.S. military and the private sector.