URI normalization

URI normalization is the process by which URIs are modified and standardized in a consistent manner. The goal of the normalization process is to transform a URI into a normalized URI so that it is possible to determine whether two syntactically different URIs may be equivalent.

Search engines employ URI normalization in order to correctly rank pages that may be found with multiple URIs, and to reduce indexing of duplicate pages. Web crawlers perform URI normalization in order to avoid crawling the same resource more than once. Web browsers may perform normalization to determine if a link has been visited or to determine if a page has been cached. Web servers may also perform normalization for many reasons (e.g., to more easily intercept security risks coming from client requests, or to use a single absolute file name for each resource stored in their caches and named in log files).

Normalization process
There are several types of normalization that may be performed. Some always preserve semantics, while others may not.

Normalizations that preserve semantics
The following normalizations are described in RFC 3986 to result in equivalent URIs:
 * Converting percent-encoded triplets to uppercase. The hexadecimal digits within a percent-encoding triplet of the URI (e.g., %3a versus %3A) are case-insensitive and therefore should be normalized to use uppercase letters for the digits A-F. Example:
   http://example.com/foo%2a → http://example.com/foo%2A


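As a sketch, this rule can be implemented with a single regular expression. The following Python illustration is not taken from RFC 3986; the function name and example URI are illustrative:

```python
import re

def upcase_percent_triplets(uri: str) -> str:
    """Uppercase the hex digits in every percent-encoded triplet (%xx)."""
    return re.sub(r"%[0-9A-Fa-f]{2}", lambda m: m.group(0).upper(), uri)

# upcase_percent_triplets("http://example.com/foo%2a")
# → "http://example.com/foo%2A"
```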
 * Converting the scheme and host to lowercase. The scheme and host components of the URI are case-insensitive and therefore should be normalized to lowercase. Example:
   HTTP://User@Example.COM/Foo → http://User@example.com/Foo


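A minimal Python sketch of this rule, using the standard library's `urllib.parse` (note that only the host is lowercased; any userinfo before "@" is case-sensitive and preserved):

```python
from urllib.parse import urlsplit, urlunsplit

def lowercase_scheme_host(uri: str) -> str:
    """Lowercase the scheme and host, leaving userinfo and path untouched."""
    parts = urlsplit(uri)  # urlsplit already lowercases the scheme
    netloc = parts.netloc
    if "@" in netloc:
        userinfo, host = netloc.rsplit("@", 1)
        netloc = userinfo + "@" + host.lower()
    else:
        netloc = netloc.lower()
    return urlunsplit((parts.scheme, netloc, parts.path,
                       parts.query, parts.fragment))
```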
 * Decoding percent-encoded triplets of unreserved characters. Percent-encoded triplets of the URI in the ranges of ALPHA (%41–%5A and %61–%7A), DIGIT (%30–%39), hyphen (%2D), period (%2E), underscore (%5F), or tilde (%7E) do not require percent-encoding and should be decoded to their corresponding unreserved characters. Example:
   http://example.com/%7Efoo → http://example.com/~foo


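A sketch of this rule in Python: decode a triplet only when the resulting character is in the unreserved set, and leave reserved characters such as %2F ("/") encoded:

```python
import re

# The unreserved characters of RFC 3986: ALPHA / DIGIT / "-" / "." / "_" / "~"
UNRESERVED = set(
    "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-._~"
)

def decode_unreserved(uri: str) -> str:
    """Decode percent-triplets whose value is an unreserved character."""
    def repl(match):
        ch = chr(int(match.group(0)[1:], 16))
        return ch if ch in UNRESERVED else match.group(0)
    return re.sub(r"%[0-9A-Fa-f]{2}", repl, uri)
```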
 * Removing dot-segments. Dot-segments "." and ".." in the path component of the URI should be removed by applying the remove_dot_segments algorithm described in RFC 3986 to the path. Example:
   http://example.com/foo/./bar/baz/../qux → http://example.com/foo/bar/qux


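The remove_dot_segments algorithm of RFC 3986 (section 5.2.4) can be sketched in Python roughly as follows; this is a straightforward transcription of the specification's rules, not production code:

```python
def remove_dot_segments(path: str) -> str:
    """Apply the RFC 3986 section 5.2.4 remove_dot_segments algorithm."""
    output = []
    while path:
        if path.startswith("../"):        # rule A: drop leading "../"
            path = path[3:]
        elif path.startswith("./"):       # rule A: drop leading "./"
            path = path[2:]
        elif path.startswith("/./"):      # rule B: "/./" -> "/"
            path = "/" + path[3:]
        elif path == "/.":                # rule B: trailing "/."
            path = "/"
        elif path.startswith("/../"):     # rule C: "/../" pops a segment
            path = "/" + path[4:]
            if output:
                output.pop()
        elif path == "/..":               # rule C: trailing "/.."
            path = "/"
            if output:
                output.pop()
        elif path in (".", ".."):         # rule D: bare dot-segments
            path = ""
        else:                             # rule E: move the first segment
            i = path.find("/", 1)
            if i == -1:
                output.append(path)
                path = ""
            else:
                output.append(path[:i])
                path = path[i:]
    return "".join(output)

# remove_dot_segments("/a/b/c/./../../g") → "/a/g"   (RFC 3986 example)
```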
 * Converting an empty path to a "/" path. In the presence of an authority component, an empty path component should be normalized to a path component of "/". Example:
   http://example.com → http://example.com/


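A minimal Python sketch of this rule (the function name is illustrative):

```python
from urllib.parse import urlsplit, urlunsplit

def normalize_empty_path(uri: str) -> str:
    """Replace an empty path with "/" when an authority component is present."""
    parts = urlsplit(uri)
    path = parts.path
    if parts.netloc and not path:
        path = "/"
    return urlunsplit((parts.scheme, parts.netloc, path,
                       parts.query, parts.fragment))
```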
 * Removing the default port. An empty or default port component of the URI (port 80 for the http scheme) with its ":" delimiter should be removed. Example:
   http://example.com:80/ → http://example.com/

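This rule can be sketched in Python as follows, assuming a small table of well-known default ports (80 for http, 443 for https):

```python
from urllib.parse import urlsplit, urlunsplit

DEFAULT_PORTS = {"http": 80, "https": 443}  # well-known scheme defaults

def remove_default_port(uri: str) -> str:
    """Strip an empty port or the scheme's default port from the authority."""
    parts = urlsplit(uri)
    netloc = parts.netloc
    host, sep, port = netloc.rpartition(":")
    if sep and (port == "" or
                (port.isdigit() and int(port) == DEFAULT_PORTS.get(parts.scheme))):
        netloc = host
    return urlunsplit((parts.scheme, netloc, parts.path,
                       parts.query, parts.fragment))
```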
Normalizations that usually preserve semantics
For http and https URIs, the following normalizations listed in RFC 3986 may result in equivalent URIs, but equivalence is not guaranteed by the standards:
 * Adding a trailing "/" to a non-empty path. Directories (folders) are indicated with a trailing slash and should be included in URIs. Example:
   http://example.com/foo → http://example.com/foo/
 * However, there is no way to know whether a URI path component represents a directory or not. RFC 3986 notes that if the former URI redirects to the latter URI, then that is an indication that they are equivalent.

Normalizations that change semantics
Applying the following normalizations results in a semantically different URI, although it may refer to the same resource:
 * Removing directory index. Default directory indexes are generally not needed in URIs. Examples:
   http://example.com/default.asp → http://example.com/
   http://example.com/a/index.html → http://example.com/a/


 * Removing the fragment. The fragment component of a URI is never seen by the server and can sometimes be removed. Example:
   http://example.com/bar.html#section1 → http://example.com/bar.html
 * However, AJAX applications frequently use the value in the fragment.


 * Replacing IP with domain name. Check if the IP address maps to a domain name. Example:
   http://208.77.188.166/ → http://example.com/
 * The reverse replacement is rarely safe due to virtual web servers.


 * Limiting protocols. Restricting a URI to a single application-layer protocol; for example, the “https” scheme could be replaced with “http”. Example:
   https://example.com/ → http://example.com/


 * Removing duplicate slashes. Paths that include two adjacent slashes could be converted to one. Example:
   http://example.com/foo//bar.html → http://example.com/foo/bar.html


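A sketch of this rule in Python: only the path is rewritten, since the "//" that follows the scheme introduces the authority component and must be preserved:

```python
import re
from urllib.parse import urlsplit, urlunsplit

def collapse_slashes(uri: str) -> str:
    """Collapse runs of adjacent slashes in the path component to one."""
    parts = urlsplit(uri)
    path = re.sub(r"/{2,}", "/", parts.path)
    return urlunsplit((parts.scheme, parts.netloc, path,
                       parts.query, parts.fragment))
```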
 * Removing or adding “www” as the first domain label. Some websites operate identically in two Internet domains: one whose least significant label is “www” and another whose name is the result of omitting the least significant label from the name of the first, the latter being known as a naked domain. For example, http://example.com/ and http://www.example.com/ may access the same website. Many websites redirect the user from the www to the non-www address or vice versa. A normalizer may determine if one of these URIs redirects to the other and normalize all URIs appropriately. Example:
   http://www.example.com/ → http://example.com/


 * Sorting the query parameters. Some web pages use more than one query parameter in the URI. A normalizer can sort the parameters into alphabetical order (with their values) and reassemble the URI. Example:
   http://example.com/display?lang=en&article=fred → http://example.com/display?article=fred&lang=en
 * However, the order of parameters in a URI may be significant (this is not defined by the standard), and a web server may allow the same variable to appear multiple times.


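Sorting query parameters can be sketched in Python with the standard library; `keep_blank_values=True` preserves parameters that have no value:

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def sort_query(uri: str) -> str:
    """Sort query parameters (with their values) alphabetically."""
    parts = urlsplit(uri)
    pairs = sorted(parse_qsl(parts.query, keep_blank_values=True))
    return urlunsplit((parts.scheme, parts.netloc, parts.path,
                       urlencode(pairs), parts.fragment))
```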
 * Removing unused query variables. A page may only expect certain parameters to appear in the query; unused parameters can be removed. Example:
   http://example.com/display?id=123&fakefoo=fakebar → http://example.com/display?id=123
 * Note that a parameter without a value is not necessarily an unused parameter.

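A sketch of this rule in Python, assuming a hypothetical site-specific allowlist of parameters the page is known to use:

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def keep_known_params(uri: str, known: set) -> str:
    """Drop query parameters not in a site-specific allowlist (hypothetical)."""
    parts = urlsplit(uri)
    pairs = [(k, v) for k, v in parse_qsl(parts.query, keep_blank_values=True)
             if k in known]
    return urlunsplit((parts.scheme, parts.netloc, parts.path,
                       urlencode(pairs), parts.fragment))
```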

 * Removing default query parameters. A default value in the query string may render identically whether it is there or not. Example:
   http://example.com/display?id=&sort=ascending → http://example.com/display


 * Removing the "?" when the query is empty. When the query is empty, there may be no need for the "?". Example:
   http://example.com/display? → http://example.com/display

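A minimal sketch of this last rule (it only handles a URI that literally ends in "?"; a fuller implementation would also account for a fragment following the empty query):

```python
def drop_empty_query(uri: str) -> str:
    """Remove a trailing "?" that introduces an empty query."""
    return uri[:-1] if uri.endswith("?") else uri
```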
Normalization based on URI lists
Some normalization rules may be developed for specific websites by examining URI lists obtained from previous crawls or web server logs. For example, if the URI

   http://example.com/story?id=xyz

appears in a crawl log several times along with

   http://example.com/story_xyz

we may assume that the two URIs are equivalent and can be normalized to one of the two forms.

Schonfeld et al. (2006) present a heuristic called DustBuster for detecting DUST (different URIs with similar text) rules that can be applied to URI lists. They showed that once the correct DUST rules were found and applied with a normalization algorithm, they were able to find up to 68% of the redundant URIs in a URI list.