Undetectable.ai

Undetectable AI (Undetectable.ai) is a software tool that rewrites AI-generated text to make it appear more human-written, with the goal of evading AI-content detection software.

History
The Undetectable.ai software was designed by Bars Juhasz, a PhD student from Loughborough University who previously worked alongside the Royal Air Force to research unmanned aircraft system operations in denied command and control environments. The online deployment of Undetectable.ai was co-developed by Christian Perry and Devan Leos. It was officially made public in May 2023.

Reception and analysis
Undetectable AI has been the subject of discussion within the technology and academic communities. Articles in mainstream technology news outlets such as Mashable, TechTudo and The Inquirer have discussed the use and ethical implications of the software.

Academic concerns
Numerous academic researchers have expressed concerns over the detection-bypassing functionality of the Undetectable AI software.

In July 2023, researchers from Magna Græcia University (Andrea Taloni et al.) published a research paper titled "Modern threats in academia: evaluating plagiarism and artificial intelligence detection scores of ChatGPT," in which they tested Undetectable.ai against generative-text and plagiarism detection software.

The study found that while the detection software Originality.ai was 95% accurate in detecting standard instances of AI-generated scientific text (specifically text generated by GPT-4), AI-generated and plagiarized texts processed through Undetectable.ai became significantly harder to detect. The paper suggested that Undetectable.ai's functionality demonstrated the limitations of detecting text generated by large language models, and described the software as enabling "malicious attempts to circumvent AI detection."

On November 4, 2023, Erik Piller, an academic at Nicholls State University, published a paper titled "The Ethics of (Non)disclosure: Large Language Models in Professional, Nonacademic Writing Contexts," addressing ethical concerns in artificial intelligence. In the paper, Piller critically examined the Undetectable.ai software, questioning the moral foundation and underlying intention of its deployment in various contexts, and expressed skepticism that the software has any positive application.

Potential to affect data quality
On August 14, 2023, researcher Dr. Christoph Bartneck published a joint research paper (Bartneck et al.), titled "Detecting The Corruption Of Online Questionnaires By Artificial Intelligence", which investigated the challenges posed by Undetectable.ai to data quality control in online questionnaires. The paper noted that the Undetectable.ai software was able to bypass conventional AI detection systems, raising concerns about the integrity of data collected from online studies. The study found that while AI detection systems were able to identify ChatGPT-generated text, they failed to identify text obfuscated by Undetectable.ai; however, the paper concluded that human judgment may ultimately be more successful in distinguishing between human- and AI-generated content.

Cultural impact
On November 30, 2023, EarthWeb used Undetectable.ai's content analysis function alongside GPTZero to scan the text of apologies posted by celebrities; based on the results, some of the celebrities were accused of having written their apologies with AI.

A staff article posted by SourceFed in January 2024 disclosed that the outlet would use Undetectable.ai to detect content created with or assisted by artificial intelligence.

On January 28, 2024, a report by Daan Van Rossum published on Flex.os listed Undetectable AI as the 35th most visited AI software of 2023 (out of 150 AI-based software products analyzed), based on website traffic data.

Mechanism
In machine-learning terms, the described primary function of Undetectable.ai is adversarial: it builds on the core task of detecting artificially generated text (e.g., text produced by large language models), and rewrites text that is flagged so that it no longer triggers detection.
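Such an adversarial detect-and-rewrite loop can be illustrated in pseudocode-style Python. This is a minimal sketch of the general technique only, not Undetectable.ai's actual implementation; the detector heuristic, the synonym table, and the threshold are all hypothetical placeholders.

```python
# Illustrative sketch of a generic adversarial detect-and-rewrite loop.
# All functions here are toy stand-ins, NOT Undetectable.ai's real code.

def detector_score(text: str) -> float:
    """Toy stand-in for an AI-content detector.

    Returns a score in [0, 1]; higher means "more likely AI-generated".
    Here it simply penalizes low lexical variety (heavy word repetition),
    a crude proxy for the statistical signals real detectors use.
    """
    words = text.lower().split()
    if not words:
        return 0.0
    variety = len(set(words)) / len(words)
    return max(0.0, 1.0 - variety)

def paraphrase(text: str) -> str:
    """Toy rewriter: swaps a few stereotypically 'AI-sounding' words."""
    swaps = {"utilize": "use", "commence": "begin", "furthermore": "also"}
    return " ".join(swaps.get(w, w) for w in text.split())

def humanize(text: str, threshold: float = 0.3, max_rounds: int = 5) -> str:
    """Rewrite repeatedly until the detector score drops below the
    threshold, or a round limit is reached (rewriting may not converge)."""
    for _ in range(max_rounds):
        if detector_score(text) < threshold:
            break
        text = paraphrase(text)
    return text
```

The adversarial character lies in the loop structure: the detector's output is used as feedback to guide rewriting, so the rewriter is optimized directly against the detection criterion rather than against any measure of writing quality.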