Draft:Robust Intelligence

Robust Intelligence is an American artificial intelligence (AI) security company headquartered in San Francisco, California. The company develops a comprehensive platform for AI security and risk management to protect organizations from the security, ethical, and operational risks of artificial intelligence.

Robust Intelligence was founded in 2019 by Dr. Yaron Singer, a Gordon McKay Professor of Computer Science and Applied Mathematics at Harvard University, and Kojin Oshiba, a machine learning researcher and Harvard University alumnus.

The company raised its Series B financing round in December 2021, and its investors include Sequoia Capital and Tiger Global.

History
Robust Intelligence was co-founded by Yaron Singer, a tenured professor of Computer Science and Applied Mathematics at Harvard, and Forbes 30 Under 30 recipient Kojin Oshiba in 2019 after nearly a decade of combined robust machine learning research at the university and Google Research. Recognizing the state of artificial intelligence adoption and the chronic challenges of AI risk in industry, the pair developed the industry’s first AI firewall.

Prior to founding Robust Intelligence and his ten-year tenure at Harvard, Yaron worked as a Postdoctoral Research Scientist at Google on the Algorithms and Optimization team. This role followed receiving his PhD in computer science from University of California, Berkeley in 2011.

Co-founder Kojin Oshiba graduated with a Bachelor’s Degree in Computer Science and Statistics from Harvard University in 2019. During this time, he spent a year as a machine learning engineer at QuantCo and helped co-found the company’s Japan branch.

Robust Intelligence emerged from stealth mode in 2020 with the announcement of its $14 million fundraising round led by Sequoia Capital. The company raised a $30 million Series B fundraising round in 2021 led by Tiger Global, with participation from Sequoia, Harpoon Venture Capital, Engineering Capital, and In-Q-Tel.

Dr. Hyrum Anderson, Robust Intelligence’s Chief Technology Officer, joined the company in 2022 from Microsoft where he co-organized the AI Red Team and served as the chair of its governing board. An accomplished machine learning and cybersecurity expert, Anderson co-founded the Conference on Applied Machine Learning in Information Security (CAMLIS) and co-authored the book Not With a Bug, But With a Sticker: Attacks on Machine Learning Systems and What To Do About Them.

Several notable figures and technologies in the field of artificial intelligence have emerged from research and development that began at Robust Intelligence. Most prominent are LangChain, an open source framework designed to simplify the creation of applications using LLMs, developed by former Robust Intelligence machine learning engineering leader Harrison Chase; and LlamaIndex, a data framework for connecting custom data sources to LLMs developed by Jerry Liu.

Technology
The Robust Intelligence platform automates the end-to-end security and risk management of AI models through its two primary components: continuous validation and AI Firewall.

Continuous validation regularly evaluates models and data throughout the AI lifecycle to identify security, operational, and ethical risks through hundreds of specialized tests and automated red teaming. Examples of these risk scenarios include susceptibility to adversarial attacks, evasion attacks, data poisoning attacks, data leakage, bias responses, factual awareness, and drift. The results of these tests inform automated risk assessment reports, which can be used to enforce internal standards and comply with AI regulations, guidelines, and frameworks.

Robust Intelligence developed the industry’s first AI Firewall to protect applications in real time. These external, model-agnostic guardrails wrap around a model to block malicious inputs and validate model outputs, securing against prompt injection, PII exposure, toxic output, model hallucination, and other risks. AI Firewall also helps secure proprietary data provided to LLMs during fine-tuning or retrieval-augmented generation (RAG).

To address concerns around AI supply chain risk, which refers to the presence of security vulnerabilities in third-party software, models, or data, Robust Intelligence released the AI Risk Database in March 2023 as a community resource. The database includes hundreds of thousands of models, and provides supply chain risk exposure that includes file vulnerabilities, risk scores, and vulnerability reports submitted by AI and cybersecurity researchers. In August 2023, Robust Intelligence partnered with MITRE to provide continuous support and advancement for the AI Risk Database as an open-source tool under the MITRE ATLAS™.The database was recognized by OWASP as a leading resource for AI model vulnerability tracking.

Research
Individuals at Robust Intelligence have contributed to and co-authored several notable research papers on AI vulnerabilities and adversarial machine learning techniques, both while working at the company and in academia. Some examples include:


 * “Tree of Attacks: Jailbreaking Black-Box LLMs Automatically”. Anay Mehrotra, Paul Kassianik, Blaine Nelson, Hyrum Anderson, Yaron Singer, et al. December 2023.
 * “Adversarial Attacks on Binary Image Recognition Systems”. Eric Balkanski, Harrison Chase, Kojin Oshiba, Alexander Rilee, Yaron Singer, Richard Wang. October 2020.
 * “Poisoning Web-Scale Training Datasets is Practical”. Carlinig, et al. February 2023.
 * “Real Attackers Don’t Compute Gradients: Bridging the Gap Between Adversarial ML and Practice”, Apruzesse, et al., Dec 2022.
 * “Machine Learning Model Attribution Challenge”, Merkhofer, et al. February 2023.
 * “Poisoning Attacks against Support Vector Machines”, Biggio, et al., March 2013, and 2023 ICML Test of Time Award.
 * Adversarial Machine Learning, Joseph, et al., Cambridge University Press, 2019.
 * Not With a Bug, But With a Sticker: Attacks on Machine Learning Systems and What To Do About Them, Siva Kumar and Anderson, John Wiley and Sons, 2023.