Robots.txt

robots.txt is the filename used for implementing the Robots Exclusion Protocol, a standard used by websites to indicate to visiting web crawlers and other web robots which portions of the website they are allowed to visit.

The standard, developed in 1994, relies on voluntary compliance. Malicious bots can use the file as a directory of which pages to visit, though standards bodies discourage countering this with security through obscurity. Some archival sites ignore robots.txt. The standard was used in the 1990s to mitigate server overload. In the 2020s many websites began denying bots that collect information for generative artificial intelligence.

The "robots.txt" file can be used in conjunction with sitemaps, another robot inclusion standard for websites.

History
The standard was proposed by Martijn Koster, while working for Nexor, in February 1994 on the www-talk mailing list, the main communication channel for WWW-related activities at the time. Charles Stross claims to have provoked Koster to suggest robots.txt, after he wrote a badly behaved web crawler that inadvertently caused a denial-of-service attack on Koster's server.

The standard, initially RobotsNotWanted.txt, allowed web developers to specify which bots should not access their website or which pages bots should not access. The internet was small enough in 1994 to maintain a complete list of all bots; server overload was a primary concern. By June 1994 it had become a de facto standard; most bots complied, including those operated by search engines such as WebCrawler, Lycos, and AltaVista.

On July 1, 2019, Google announced the proposal of the Robots Exclusion Protocol as an official standard under the Internet Engineering Task Force. A proposed standard was published in September 2022 as RFC 9309.

Standard
When a site owner wishes to give instructions to web robots, they place a text file called robots.txt in the root of the web site hierarchy (e.g. https://www.example.com/robots.txt). This text file contains the instructions in a specific format (see examples below). Robots that choose to follow the instructions try to fetch this file and read the instructions before fetching any other file from the website. If this file does not exist, web robots assume that the website owner does not wish to place any limitations on crawling the entire site.
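This check can be sketched with Python's standard urllib.robotparser module; the crawler name and URLs below are hypothetical:

import urllib.robotparser

# Fetch and parse the site's robots.txt before crawling anything else.
parser = urllib.robotparser.RobotFileParser()
parser.set_url("https://www.example.com/robots.txt")
parser.read()  # a missing file (404) is treated as "no restrictions"

# Ask whether this crawler may fetch a given page.
if parser.can_fetch("ExampleBot", "https://www.example.com/private/page.html"):
    print("allowed to fetch")
else:
    print("disallowed by robots.txt")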

A robots.txt file contains instructions for bots indicating which web pages they can and cannot access. Robots.txt files are particularly important for web crawlers from search engines such as Google.

A robots.txt file on a website will function as a request that specified robots ignore specified files or directories when crawling a site. This might be, for example, out of a preference for privacy from search engine results, the belief that the content of the selected directories might be misleading or irrelevant to the categorization of the site as a whole, or a desire that an application operate only on certain data. Links to pages listed in robots.txt can still appear in search results if they are linked to from a page that is crawled.

A robots.txt file covers one origin. For websites with multiple subdomains, each subdomain must have its own robots.txt file. If example.com had a robots.txt file but a.example.com did not, the rules that would apply for example.com would not apply to a.example.com. In addition, each protocol and port needs its own robots.txt file; http://example.com/robots.txt does not apply to pages under http://example.com:8080/ or https://example.com/.
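This origin rule can be illustrated with a short Python sketch (the helper function is hypothetical) that derives the governing robots.txt URL from a page URL:

from urllib.parse import urlsplit

def robots_txt_url(page_url: str) -> str:
    # The governing robots.txt lives at the root of the page's origin:
    # same scheme, same host, and same port.
    parts = urlsplit(page_url)
    return f"{parts.scheme}://{parts.netloc}/robots.txt"

print(robots_txt_url("http://example.com/page"))       # http://example.com/robots.txt
print(robots_txt_url("http://example.com:8080/page"))  # http://example.com:8080/robots.txt
print(robots_txt_url("https://a.example.com/page"))    # https://a.example.com/robots.txt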

Compliance
A robots.txt file has no enforcement mechanism in law or in technical protocol, despite widespread compliance by bot operators.

Search engines
Some major search engines following this standard include Ask, AOL, Baidu, Bing, DuckDuckGo, Google, Yahoo!, and Yandex.

Archival sites
Some web archiving projects ignore robots.txt. Archive Team uses the file to discover more links, such as sitemaps. Co-founder Jason Scott said that "unchecked, and left alone, the robots.txt file ensures no mirroring or reference for items that may have general use and meaning beyond the website's context." In 2017, the Internet Archive announced that it would stop complying with robots.txt directives. According to Digital Trends, this followed widespread use of robots.txt to remove historical sites from search engine results, and contrasted with the nonprofit's aim to archive "snapshots" of the internet as it previously existed.

Artificial intelligence
Starting in the 2020s, web operators began using robots.txt to deny access to bots collecting training data for generative AI. In 2023, Originality.AI found that 306 of the thousand most-visited websites blocked OpenAI's GPTBot in their robots.txt file and 85 blocked Google's Google-Extended. Many robots.txt files named GPTBot as the only bot explicitly disallowed on all pages. Denying access to GPTBot was common among news websites such as the BBC and The New York Times. In 2023, blog host Medium announced it would deny access to all artificial intelligence web crawlers as "AI companies have leached value from writers in order to spam Internet readers".
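A robots.txt file that denies both crawlers named above access to an entire site might look like the following (a sketch, not any particular site's file):

User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /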

GPTBot complies with the robots.txt standard and gives advice to web operators about how to disallow it, but The Verge's David Pierce said this only began after "training the underlying models that made it so powerful". Also, some bots are used both for search engines and artificial intelligence, and it may be impossible to block only one of these options.

Security
Despite the use of the terms "allow" and "disallow", the protocol is purely advisory and relies on the compliance of the web robot; it cannot enforce any of what is stated in the file. Malicious web robots are unlikely to honor robots.txt; some may even use the robots.txt as a guide to find disallowed links and go straight to them. While this is sometimes claimed to be a security risk, this sort of security through obscurity is discouraged by standards bodies. The National Institute of Standards and Technology (NIST) in the United States specifically recommends against this practice: "System security should not depend on the secrecy of the implementation or its components."

Alternatives
Many robots also pass a special user-agent string to the web server when fetching content. A web administrator could also configure the server to automatically return failure (or pass alternative content) when it detects a request from one of the robots.
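As a rough illustration of this server-side approach, here is a minimal Python sketch using the standard http.server module; the blocked user-agent name is hypothetical:

from http.server import BaseHTTPRequestHandler, HTTPServer

BLOCKED_AGENTS = ("BadBot",)  # hypothetical user-agent substrings to refuse

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        agent = self.headers.get("User-Agent", "")
        if any(bot in agent for bot in BLOCKED_AGENTS):
            # Return failure instead of the page when a blocked robot connects.
            self.send_error(403, "Forbidden")
            return
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"Regular content for everyone else.\n")

HTTPServer(("", 8000), Handler).serve_forever()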

Some sites, such as Google, host a humans.txt file that displays information meant for humans to read. Some sites, such as GitHub, redirect humans.txt to an About page.

Previously, Google had a joke file hosted at killer-robots.txt instructing the Terminator not to kill the company founders Larry Page and Sergey Brin.

Examples
This example tells all robots that they can visit all files, because the wildcard * stands for all robots and the Disallow directive has no value, meaning no pages are disallowed.

User-agent: *
Disallow:

User-agent: *
Allow: /

The same result can be accomplished with an empty or missing robots.txt file.

This example tells all robots to stay out of a website:

User-agent: *
Disallow: /

This example tells all robots not to enter three directories:

User-agent: *
Disallow: /cgi-bin/
Disallow: /tmp/
Disallow: /junk/

This example tells all robots to stay away from one specific file:

User-agent: *
Disallow: /directory/file.html

All other files in the specified directory will be processed.

This example tells one specific robot to stay out of a website:

User-agent: BadBot # replace 'BadBot' with the actual user-agent of the bot
Disallow: /

This example tells two specific robots not to enter one specific directory:

User-agent: BadBot # replace 'BadBot' with the actual user-agent of the bot
User-agent: Googlebot
Disallow: /private/

Example demonstrating how comments can be used:

User-agent: * # match all bots
Disallow: / # keep them out

Comments appear after the "#" symbol at the start of a line, or after a directive.

It is also possible to list multiple robots with their own rules. The actual robot string is defined by the crawler. A few robot operators, such as Google, support several user-agent strings that allow the operator to deny access to a subset of their services by using specific user-agent strings.

Example demonstrating multiple user-agents:

User-agent: googlebot       # all Google services
Disallow: /private/         # disallow this directory

User-agent: googlebot-news  # only the news service
Disallow: /                 # disallow everything

User-agent: *               # any robot
Disallow: /something/       # disallow this directory

Crawl-delay directive
The crawl-delay value is supported by some crawlers to throttle their visits to the host. Since this value is not part of the standard, its interpretation depends on the crawler reading it. It is used when repeated bursts of visits from bots are slowing down the host. Yandex interprets the value as the number of seconds to wait between subsequent visits. Bing defines crawl-delay as the size of a time window (from 1 to 30 seconds) during which BingBot will access a web site only once. Google provides an interface in its Search Console for webmasters to control the Googlebot's subsequent visits.

User-agent: bingbot
Allow: /
Crawl-delay: 10
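A crawler that honors the directive might read it with Python's standard urllib.robotparser, which exposes the value via crawl_delay(); the crawler name and fallback delay below are hypothetical:

import time
import urllib.robotparser

parser = urllib.robotparser.RobotFileParser()
parser.set_url("https://www.example.com/robots.txt")
parser.read()

# Use the site's Crawl-delay if present, else a polite default of 1 second.
delay = parser.crawl_delay("ExampleBot") or 1.0

for url in ("https://www.example.com/a", "https://www.example.com/b"):
    if parser.can_fetch("ExampleBot", url):
        pass  # fetch the page here
    time.sleep(delay)  # wait between subsequent visits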

Sitemap
Some crawlers support a Sitemap directive, allowing multiple sitemaps in the same robots.txt in the form:

Sitemap: http://www.example.com/sitemap.xml

Universal "*" match
The Robot Exclusion Standard does not mention the "*" character in the Disallow: statement.

Meta tags and headers
In addition to root-level robots.txt files, robots exclusion directives can be applied at a more granular level through the use of Robots meta tags and X-Robots-Tag HTTP headers. The robots meta tag cannot be used for non-HTML files such as images, text files, or PDF documents. On the other hand, the X-Robots-Tag can be added to non-HTML files by using .htaccess and httpd.conf files.
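For example, an HTML page can be excluded from indexing with a robots meta tag in its head element:

<meta name="robots" content="noindex" />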

A "noindex" HTTP response header
The X-Robots-Tag is only effective after the page has been requested and the server responds, and the robots meta tag is only effective after the page has loaded, whereas robots.txt is effective before the page is requested. Thus if a page is excluded by a robots.txt file, any robots meta tags or X-Robots-Tag headers are effectively ignored because the robot will not see them in the first place.

Maximum size of a robots.txt file
The Robots Exclusion Protocol requires crawlers to parse at least 500 kibibytes (512,000 bytes) of a robots.txt file; Google accordingly enforces a 500-kibibyte file size limit for robots.txt files, ignoring content beyond it.
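A fetcher can honor this limit by reading at most 512,000 bytes before parsing, as in this minimal Python sketch (the URL is hypothetical):

import urllib.request

MAX_ROBOTS_BYTES = 500 * 1024  # 512,000 bytes, the minimum crawlers must parse

with urllib.request.urlopen("https://www.example.com/robots.txt") as response:
    # Read at most the limit; any content beyond it is ignored.
    body = response.read(MAX_ROBOTS_BYTES).decode("utf-8", errors="replace")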