Help:Searching/Features

Search engine features
The search engine can
 * sort by date
 * fold character families. An e matches an ë, and Aeroskobing matches Ærøskøbing.
 * understand when a page linksto or hastemplate, or has something intitle, or is incategory
 * understand OR and AND, and two forms of not.
 * perform fuzzy searches on word spellings.
 * locate words as near to each other as you specify.
 * find wildcard expressions and regular expressions.

A search matches what you see rendered on the screen and in a print preview. The raw "source" wikitext is searchable by employing the insource parameter. For these two kinds of searches a word is any string of consecutive letters and numbers matching a whole word or phrase. All other keyboard characters like punctuation marks, brackets and slashes, math and other symbols, are not normally searchable.

By default Search will also stem the words and match those forms too. It automatically sorts results by the frequency and location of the matched words, but it can also boost page ranking by time, template usage, or even similarity to other pages.

Search is a search engine that does a full text search by querying an index database. It offers search syntax and parameters exceeding the capabilities and control of other public search engines that could search Wikipedia.

Page score
Say the search box is given two words. The search starts with two index lookups, and the two results are combined with a logical AND. Before they can be displayed as search results, the matching pages must all be assigned a final score, the top twenty (listed on the first page) selected, and the results formatted with snippets and highlighting. Page ranking deals quickly with very large numbers of pages by approaching things statistically and taking several passes through the data.


 * 1) The frequency and location of each word determines the first sorting.
 * 2) The order of the words determines the second sorting. If the two words happen to be found in the same order on a page, that page is boosted again.
 * 3) The number of incoming links determines a further boost.

These attributes for a word earn that page a higher score:
 * position in the title
 * position in the lead section
 * repetition
 * close proximity to other words in the query

There can be several other scoring mechanisms. The parameters that you can control are morelike, boost-template, and prefer-recent.
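The ranking steps above can be sketched as a toy scoring function. This is purely illustrative: the weights, the page texts, and the formula are invented for the example, and the real engine's scoring is far more elaborate.

```python
# Toy page scorer: weights and formula are invented for illustration;
# the real search engine's ranking is far more elaborate.
def score_page(page_words, query_words, incoming_links):
    score = 0.0
    # 1) frequency of each query word on the page
    for q in query_words:
        score += page_words.count(q)
    # 2) boost pages containing the query words in the same order
    if " ".join(query_words) in " ".join(page_words):
        score += 5.0
    # 3) incoming links give a further boost
    score += 0.1 * incoming_links
    return score

page_a = "the search engine builds a search index".split()
page_b = "an index the engine may search".split()
query = ["search", "index"]
print(score_page(page_a, query, 20))  # 10.0
print(score_page(page_b, query, 20))  # 4.0
```

Here page_a outranks page_b because it repeats "search" and contains the two query words in query order.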

General description
There are now eleven parameters for various approaches to searching the many namespaces. Four of the seven new parameters target page characteristics: hastemplate, linksto, insource, and insource:/regexp/. The other three target page ranking: morelike works all alone, a prefer-recent term can be added to any query, and there is now also a boost-template parameter. The other four, preserved in name only from the entirely rewritten previous version of Search, are intitle, incategory, prefix, and namespace.

Any search will feature one of these approaches:
 * Rely on page ranking; ignore most results; run once.
 * Search for an exact string using a simple regexp; pretest a small search domain.
 * Hack out a highly refined set of page characteristics with concern only for an exact count of pages; refine in a sandbox and on the search results page.

The concept of a search domain plays an important part in all this. By default it is just article space, but in general a search domain starts out as a set of namespaces, and ends up as all the pages in the search result.

One term of a query will set the search domain for another term in the same query. The order is optimized by the search engine. The query term1 term2 transforms the search domain twice to get those search results. For example, a bare namespace returns the pages of the namespace. The query term1 term2 regexp relies heavily on the first two terms to reduce the search domain size.

All terms in a query are indexed searches unless they are a regexp. Indexed terms run word-wise, instantly; a regexp runs character-wise, slowly. Even the most basic use of a regexp, just to find an exact string, should limit the size of its search domain as much as possible. This can be as simple as adding a few terms (as covered below), because each term in a query tends to reduce the number of pages. Never run a bare regexp on the wiki, especially if your user profile is preset to Everything. The search engine limits the number of regexp searches that can run at once, and without a proper filter running alongside it a regexp will run for up to twenty seconds and then incur an HTTP timeout.

On the search results page, the initial search domain on which the query was run is indicated by the following, given in increasing power to override the others:
 * an open namespace dialog if the user has preset a profile of namespaces
 * Content pages or Multimedia or Everything: if one of them was the initial search domain, then the color of that one's text will have turned from (link-colored) blue to (presentation) black.
 * a namespace parameter in the query
 * a prefix parameter overrides them all.

For example, if the namespace parameter is all, the initial search domain is the pages in all namespaces: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 100, 101, 118, 119, 710, 711, 828, 829. A prefix parameter specifies just one of those namespaces, in whole or in part. If the initial search domain is the default, Content pages, its size is the pages in namespace 0 (article space).

A search can be set into a link to specialize and share searches: [[Special:Search/search]]. Such a query should always be fully specified by giving an initial search domain, so as to avoid user profile discrepancies; this way it gives the same results for everyone. For example, if more than one namespace is needed, use.

Other helpful approaches to the search engine features are
 * templates such as template usage that offer pre-made specialized searches.
 * Input box setups, such as the one at the bottom of this page, that can perhaps be made to work with such templates.
 * driving new or improved feature requests at Phabricator

Syntax
Greyspace characters are the non-alphanumeric characters: ~!@#$%^&*_+{}|[]\:";'<>?,./ . Any string of greyspace characters and/or whitespace characters is "greyspace".

Greyspace is ignored except where it has meaning as a modifier in syntax.
 * +term turns off "Did you mean" suggestions
 * _term turns off "Did you mean" suggestions for that term
 * -term means not. It changes the meaning from include to exclude.
 * !term also means not.
 * The colon : character can specify the "article space" as the search domain, and it can, in some cases, act as a letter or number when inside a word (non-spaced). These are covered below.
 * The tilde ~ character is generally associated with finding more search results:
 * ~query guarantees search results instead of navigation.
 * word~ does a "fuzzy search" for that word.
 * "exact phrase"~ adds stemming for each word.
 * "exact phrase"~n does a "proximity search", allowing n extra words inside the exact wording.

Parameters also accept words and phrases, but each can search their own index and interpret their own arguments, such as for
 * requiring a namespace or not, or accepting namespace aliases or not
 * reporting redirects or not
 * for a pagename input: being case sensitive or not, or accepting the underscore _ character in lieu of a space character or not
 * delimiters for their arguments
 * the meaning of their own modifier-character syntax

The delimiters:
 * Namespace needs no delimiters, but accepts whitespace to the left and greyspace to the right
 * Prefix accepts only whitespace between the namespace and the pagename, and accepts greyspace to the left.
 * insource:/arg/ requires no space, but all other parameters tolerate at least whitespace
 * Two words separated only by greyspace characters make a greyspace phrase, subject to stemming
 * "Double quotes:" make an exact phrase, and make stemming and proximity possible with more modifiers added
 * Greyspace is ignored:
 * anywhere inside double quotes
 * in starting characters of the search box query, but not before a namespace
 * between words and phrases, except for greyspace phrases
 * Space characters are important only
 * for pagenames (linksto, prefix, incategory, boost-templates, morelike).
 * between two parameters (to delimit the argument)

Colon : character:
 * as a namespace, it means article space
 * as a prefix, it means article space
 * to insource or "exact phrase" it means a literal colon and acts just like a letter or number if it is a non-spaced colon.

Word and phrase
A search is a query with one or more terms. The query does not actually search the page database; rather, it queries a prebuilt, constantly maintained search index database. When the search index of words on the wiki is created, and when a query is entered, a word boundary is greyspace: whitespace characters (tab, space, or newline) or greyspace characters. Greyspace characters can create a multi-word_phrase. We must mention tab and newline even though we cannot put those characters in a query, because the same analysis that is done on the wikitext is also done on the query. Greyspace characters and whitespace characters are all folded together as one, just as special characters like æ (ae) or á (a) are folded into the standard keyboard characters.
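A minimal sketch of this word-boundary rule in Python (illustrative only, not the engine's actual analyzer; character folding such as æ → ae is omitted):

```python
import re

def tokenize(text):
    # Any run of non-alphanumeric characters -- greyspace or
    # whitespace -- acts as a single word boundary.
    return [w.lower() for w in re.split(r"[^A-Za-z0-9]+", text) if w]

print(tokenize("A multi-word_phrase, with [brackets] and /slashes/!"))
# → ['a', 'multi', 'word', 'phrase', 'with', 'brackets', 'and', 'slashes']
```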

A phrase expresses an ordering of words, and there are three ways to make one, depending on how aggressively you want the phrase to match.


 * "quotation marks"
 * joining_with_non-alphanumeric(characters)
 * camelCaseNaming or letter222number transitions

"Quotation marks", phrases are called an "exact phrase" because it is exact wording: stemming, fuzzy search, and wildcards are not used in an "exact phrase". Like the rest of Search, an "exact phrase" tolerates greyspace between words. Joining_with_non-alphanumeric(characters) only, will employ stemming on the words. CamelCaseNaming or letter222number transitions, matches the phrase in greyspace, with stemming, and additionally matches the word itself. Parameters can require the quotation marks to include whitespace in their input.

The wikitext is searched by employing the insource parameter. The insource parameter ignores greyspace characters too.

For example, to find the phrase http://en.wikipedia.org/wiki/Search_engine, paste the URL in as the query, or use insource: "http en wikipedia org wiki search engine".

When you search for a word, that word is just looked up in an index. An indexed search instantly concludes with all search result titles, without having to search the wiki itself.

Each word you see in a page's content (or a title's content) is already in an index, where it points to all its prearranged results. A word is indexed to a list of pagenames: where it is seen in the text, or where it is seen in the title only.

Each indexed word is seen as


 * a string of alphabetic characters a-z, or
 * a string of digits 0-9, or
 * a string of alphanumeric characters a-z, 0-9, or
 * a token inside a camelCase word.

For transitions from lower to upper case (camelCase), and transitions from letter to number:


 * these are two words
 * only the first transition divides such words into two
 * a null space matches non-alphanumerics: game-folks matches gameFolks.

For camelCase or letter-digit transitions the portions match singly or together. In other words you don't need the space, but a space also works to find either "word" of a camelCase or mixed alphanumeric word, and non-alphanumeric characters are treated as that null space.
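A rough sketch of these portioning rules, splitting at case and letter-digit transitions while keeping the whole word searchable too (illustrative; the engine's tokenizer handles more cases):

```python
import re

def portions(word):
    # Split at lower-to-upper case transitions and at letter<->digit
    # transitions; the whole word stays searchable as well.
    parts = re.findall(r"[A-Z]?[a-z]+|[A-Z]+(?![a-z])|[0-9]+", word)
    return [word.lower()] + [p.lower() for p in parts]

print(portions("gameFolks"))   # ['gamefolks', 'game', 'folks']
print(portions("txt2regex"))   # ['txt2regex', 'txt', '2', 'regex']
```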

We may call these "word" characters or "alphanumeric" characters at times as opposed to the "non-word" characters, which are ignored except as to function as a word boundary. Usually a word boundary is just a space character.

These words are case-insensitive: a-z is equivalent to A-Z, so Search box will navigate to a pagename regardless of capitalization (even though wikilinks and URLs must match capitalization apart from the initial character).

Each word is aliased to all its word-stems, so cloud, clouding, clouds, clouded, cloudy will all point to the same index entry.

In Search the characters !@#$%^&*_+-={}|[]\:;'<>,.?/ are ignored. Any mix of whitespace characters and these non-word characters, we may refer to as grey-space. Grey-space, then, is all non-word characters except the double quote character, which is not ignored.

Grey-space is a string of one or more characters such as brackets, math symbols, punctuation, and spaces. A search-indexed word is found between grey-space, and grey-space is an implied AND of two words in a search query; but the AND is not always implied: when two phrases exist side by side the AND is required.

Exceptions to what "words" are indexed are these portioned words:


 * A change from a numeric to an alphabetic character is an additional word boundary in an alphanumeric word.
 * A change from an alphabetic to a numeric character is a word boundary in an alphanumeric word.
 * A change in case from lowercase to uppercase is a word boundary in an alphabetic word.

The word boundary between such numeric portions and alphabetic portions may include grey-space or not, but a phrase search turns off portioning: because it is an "exact phrase" search, the words in the phrase match only alphanumeric words delimited by grey-space.

Words joined only by non-alphanumerics are treated like a phrase, so word1_word2&word3 is the same as "word1 word2 word3". However, the joined form will also match camelCase and letter-number transitions, while an exact phrase search will not. For example, terms like wgCanonicalNamespace and wgCanonicalSpecialPageName can be found this way.

For example:


 * A numeronym like C10k is considered one word for proximity, but two words for matching.
 * pluralized numbers, like "2010s"

The following match the single term txt2regex on a page: txt, 2, regex, reg, ex, txt2, 2reg, 2regex. None of those portions would match in a phrase search; only "txt2regex" would match.

The following match the two terms 2 2 on a page: 2 or "2", 2 2 or "2 2", "2+2" or 2+2, "2-2" or 2-2, "2.2" or 2.2. Each term is a query, and the grey-space is an AND.

Fuzzy search, wildcards, and stemming
Stemming is a way to match meaning "ambitiously", to get the numbers up for possible semantic matching, such that run_shoe also matches. Stemming is a spelling algorithm only distantly reliant on any dictionary; the algorithm attempts to find the same word in all its word endings.

A fuzzy search will match a different word. Words (but not phrases) accept approximate string matching or "fuzzy search". A tilde ~ character is appended for this "sounds like" search. The other word must differ by no more than two letters.


 * Not the first two letters. The first two letters must match.
 * Two letters swapped.
 * Two letters changed.
 * Two letters added, two letters subtracted, or one subtracted and one added.

It can also differ by just one letter in these ways. A fuzzy search matches the word exactly, plus words like it.


 * this~ → thus and thud, thins and the, but not his or thistle
 * charlie~ parker~ → Charlie Parker and Charles Palmer and Charley Parks
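The two-edit limit with a fixed two-letter prefix can be modelled with a standard Levenshtein edit distance. This is an illustrative approximation: real fuzzy matching counts a transposition as a single edit (Damerau-Levenshtein), which this sketch does not.

```python
def edit_distance(a, b):
    # Classic Levenshtein distance, single-row dynamic programming.
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1,
                                     prev + (ca != cb))
    return dp[-1]

def fuzzy_match(query, word):
    # First two letters must match; at most two edits overall.
    return query[:2] == word[:2] and edit_distance(query, word) <= 2

candidates = ["thus", "thud", "thins", "his", "thistle"]
print([w for w in candidates if fuzzy_match("this", w)])
# → ['thus', 'thud', 'thins']
```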

With wildcards you can specify which letters change, including the first two letters, and you can increase the number of letters that can change. Wildcards have their own rules:


 * * zero or more letters or numbers
 * *? one or more letters or numbers
 * ? one letter or number
 * neither * nor ? can match the first letter; they can go in the middle or the end
 * ? and * can be used any number of times in a word
 * this* → thistle and This1234 and This
 * g?it?r → gaiter goiter guitar g8it9r
 * keyp* → keypad and keypunch
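These wildcard rules can be modelled by translating a pattern into a regular expression (a sketch only; the engine compiles wildcards internally):

```python
import re

def wildcard_to_regex(pattern):
    # '*' -> zero or more alphanumerics, '?' -> exactly one;
    # so '*?' naturally becomes one-or-more. A wildcard may not
    # stand in for the first letter.
    assert pattern[0] not in "*?", "wildcards cannot lead a word"
    out = ""
    for ch in pattern:
        if ch == "*":
            out += "[a-z0-9]*"
        elif ch == "?":
            out += "[a-z0-9]"
        else:
            out += re.escape(ch)
    return re.compile(out + r"$", re.IGNORECASE)

rx = wildcard_to_regex("g?it?r")
words = ["gaiter", "goiter", "guitar", "g8it9r", "gator"]
print([w for w in words if rx.match(w)])
# → ['gaiter', 'goiter', 'guitar', 'g8it9r']
```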

While the word indexes are being built and updated, stemming automatically adds aliases to most entries. An actual dictionary is not used. Instead it runs an algorithm that applies generic English syntax rules for word endings. The results are imperfect. Even misspelled words, non-words, and words with numbers in them are indexed and stemmed in this way. By adding different forms of the same word to the indexed search query, stemming is a standard method search engines use to aggressively garner more search results to then run a bunch of page-ranking rules against.

For example, stemming will alias cloud, clouds, clouded, and clouding. It will not alias the word cloudy, but it will alias the various forms of cloud to the non-word cloudion, because -ion is a common word ending.
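That dictionary-free behaviour can be sketched with a crude suffix stripper (a drastic simplification; real engines use something like the Porter stemming algorithm):

```python
def naive_stem(word):
    # Strip a common English ending; no dictionary is consulted,
    # so the non-word "cloudion" stems exactly like "clouding".
    for suffix in ("ing", "ion", "ed", "s"):
        if word.endswith(suffix) and len(word) - len(suffix) >= 3:
            return word[: -len(suffix)]
    return word

for w in ["cloud", "clouds", "clouded", "clouding", "cloudion", "cloudy"]:
    print(w, "->", naive_stem(w))
# cloudy is left alone; every other form reduces to "cloud"
```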

Stemming is automatically turned off for insource searches.

To turn stemming off, put the word in quotation marks; this is an "exact phrase" search.

For example: gameFolks, game!folks, and game:folks each match FolksSoul.

Proximity

 * Proximity searches do not search titles.
 * Proximity works backwards if you give it a higher count.
 * Proximity searches turn off stemming.

An "Exact phrase" or a word will match in a title. And creating a phrase "with tilde"~ just turns on stemming, (which is equivalent to forming a phrase by joining the words with_greyspace ). But "exact phrase"~1 matches the wording in that order plus allows any one extra word to fall between the two words.

For example


 * "exact second phrase"~2 allows two extra words to fit anywhere on either side of the second term.
 * "exact phrase"~3 also finds "phrase exact" (the two words in reverse order)
 * Looking for either "Shift-Alt-P" or "Alt_Shift-P"? It's not "Alt-shift-P"~3. It's not "alt shift"~3-P. Use "alt shift p" OR "shift alt p" instead.
 * matches "Dorsal (or Thoracic) vertebra"
 * matches "three w-1 w2% extra w:3 w_4 $w5 words".

"hitch4 hiker2" finds the two "words" in that order, (possibly separated by punctuation or brackets or other keyboard symbols like math symbols), and without the quotes finds them in the same article. In both cases the article is listed when the space satisfies the logical AND meaning.

hello_dolly does the same thing as "hello dolly" does, but the double quotes version offers a proximity filter. After the closing quote you add a tilde ~ and a number that indicates the total number of words allowed between all the terms.


 * "WordOne wordTwo" means a phrase (zero words in between)
 * "word1 word2" → word1 <[!@#]> <[:$%^*]> <[+-*/]> word2
 * "word3 word4"~1 → word3 extra1word word4
 * "word5 word6 word7"~2 → word5 extra1word word6 extra2word word7
 * "word8 word9 word10"~2 → word8 word9 extra1word extra2word word10

Backward proximity works too, but the count then includes the two end words of each segment. Proximity cannot make the last word proximate to the first. The proximity number can be large, like 500 or 1000.
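Forward proximity can be sketched as: walk the page's words and accept when the phrase terms appear in order with at most n extra words in total between them (illustrative; backward matching and the engine's exact cost model are left out):

```python
def proximity_match(page_words, phrase_words, n):
    # True if phrase_words occur in order with at most n extra
    # words, in total, distributed between them.
    for start in range(len(page_words)):
        if page_words[start] != phrase_words[0]:
            continue
        pos, extras, ok = start, 0, True
        for term in phrase_words[1:]:
            gap, pos = 0, pos + 1
            while pos < len(page_words) and page_words[pos] != term:
                pos, gap = pos + 1, gap + 1
            if pos >= len(page_words) or extras + gap > n:
                ok = False
                break
            extras += gap
        if ok:
            return True
    return False

page = "word5 extra1word word6 extra2word word7".split()
print(proximity_match(page, ["word5", "word6", "word7"], 2))  # True
print(proximity_match(page, ["word5", "word6", "word7"], 1))  # False
```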

Say a page has word1 word2 word3 in that order.


 * → wordA extra1word extra2word wordB
 * → WordA wordB   extra1word extra2word wordc

Two search terms with no quotes are two filters, plus a bunch of page-ranking rules.

Search logic
Truth logic is AND, OR, and not. Logical OR increases results, whereas logical AND decreases them. Logical not is a good way to refine a query by removing any kind of term except the prefix parameter.
 * Queries do not accept parentheses. So multiple terms cannot be grouped into a single, logical term.
 * Parameters do not accept AND or OR, but do accept not
 * word word2 will AND the two terms.
 * word AND word2 will AND the two terms. (similar)
 * word OR word2 will OR the two
 * -word will not the term, excluding the pages that match word.
 * !word will not the term (similarly)

For example, you can refine a query while viewing search results by -removing -unwanted terms. For example, credit card -"credit card" finds all articles with "credit" and "card" except those containing the exact phrase "credit card".
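The AND / OR / not behaviour can be sketched over a toy inverted index (the page titles here are hypothetical stand-ins, and the real engine works at vastly larger scale):

```python
# Toy inverted index: word -> set of pages containing that word.
index = {
    "credit": {"Credit card", "Charge card", "Letter of credit"},
    "card":   {"Credit card", "Charge card", "Punched card"},
}

AND = lambda a, b: a & b   # a space between terms, or AND
OR  = lambda a, b: a | b   # OR
NOT = lambda a, b: a - b   # -term or !term

both = AND(index["credit"], index["card"])
print(sorted(both))                        # pages with both words
# credit card -"credit card": both words, minus the exact phrase
print(sorted(NOT(both, {"Credit card"})))
```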

Prefix and namespace
Prefix and namespace are the only positional parameters, and namespace is an unnamed search parameter. One or the other of them is used in a query to override the initial search domain set by user profile or by the search bar. They aren't used together: prefix overrides namespace.

The namespace argument must be at the beginning of a query, and the prefix: parameter must be at the end of a query.

Namespace
Namespace: is an unnamed search parameter that goes at the beginning of a query. The namespace is followed by a colon, followed by zero or more whitespace characters, and must match a namespace name. The namespace names and "all" work as expected, but seeing one in the search box does not guarantee it represents the search results, as explained below.

In addition to the usual namespace names and their aliases, the following inputs work. Pages with namespaces outnumber pages without them 7 to 1.
 * all searches all namespaces on the wiki.
 * file searches the wiki plus the Commons wiki.
 * the words and phrases on the file pages are searched
 * the textual content inside all uploaded attachments is searched
 * If the match is made inside a pdf (or the like) this is indicated in the searches results parenthetically: "(matches file content)".
 * file:local turns off the search on Commons
 * all does not search Commons
 * The namespace names are not case sensitive, but "all" and "local" must be lowercase.
 * All: is not a search namespace, and will be treated as a word.
 * local: will not be treated like a word, but silently ignored instead, unless the File namespace is involved, such as it is on the search bar when activating Multimedia or Everything.
 * In a query, local: only has an effect following the File namespace file:local.

On the search bar at the search results page, the following choices are available. These differ from namespace "all" by also matching your search terms inside a PDF (or the like) on a file page; that item on the search results page says "(matches file content)".
 * Everything searches all, plus Commons and the File namespace.
 * Advanced when All (namespaces) is checked is equivalent to Everything.
 * Multimedia searches the File and Media namespaces on the local wiki plus Commons.

For example matches inside a pdf, but does not.

Prefix
prefix:namespace: string filters a namespace down to one or more pages where string matches the pagename's beginning characters. For example, prefix:help:t finds Help pagenames that begin with "T".


 * When the string has zero characters all pages in the given namespace are found.
 * When the string has all the characters of a pagename, a single page is found.
 * The string is not case sensitive.
 * The namespace can be a namespace alias, like WP for Wikipedia.
 * A space between the namespace and pagename is allowed.
 * The namespace for prefix defaults to article space.
 * Prefix will not match a redirect. (But see Special:PrefixIndex.)
 * Prefix cannot be used as a filter: the dash of -prefix is ignored. -prefix:WP: ab only sets the search domain to "Wikipedia:Ab".
 * No pagename characters are ignored. Even the space character is part of the pagename, and this is why prefix must go at the end.

Prefix can perform the function of the namespace filter, plus it can isolate a single article whereas intitle cannot. Prefix cannot isolate a single page if it has subpages.
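The prefix behaviour can be sketched as a case-insensitive startswith filter over fullpagenames (the page titles are hypothetical, and redirects are assumed already excluded, just as the parameter excludes them):

```python
pages = ["Help:Tables", "Help:Template", "Help:Searching",
         "Help:Searching/Features", "Wikipedia:Tips"]

def prefix(fullpagename, titles):
    # Case-insensitive match against the start of each fullpagename;
    # an empty string after the namespace matches the whole namespace.
    target = fullpagename.lower()
    return [t for t in titles if t.lower().startswith(target)]

print(prefix("help:t", pages))           # titles beginning with "T"
print(prefix("help:", pages))            # every Help page
print(prefix("Help:Searching", pages))   # the page plus its subpages
```

The last call shows why prefix cannot isolate a single page when that page has subpages.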

An alternative to a prefix query is Special:PrefixIndex:
 * multi-column report capable of listing several hundred pagenames on one page
 * Case sensitive
 * lists redirects too

Compared
Comparing the namespace and prefix parameters:
 * Prefix and namespace can both serve to set the initial search domain.
 * For a given namespace they are equivalent.
 * They both filter titles.
 * They both accept namespace aliases, but prefix does not recognize "all".
 * They both limit the initial search domain to one namespace.
 * A namespace goes only at the beginning, and a prefix goes only at the end.

The following methods set an initial search domain by namespace, listed in order of precedence: a prefix overrides a namespace, which overrides the GUI. The argument to the prefix parameter is a fullpagename, which conveys a namespace.
 * a  prefix: , which defaults to article space
 * a namespace argument at the beginning of a query, which defaults to the user's default search domain
 * the URL parameters &nsN=1
 * the "advanced profile" GUI on the search results page

When alternating search domains with the various techniques, and because of their priorities, it deserves repeating: check the search bar indication; it is most subtle. The Advanced namespace selection pane from the search bar is not so subtle. It will remain for as long as the earlier selection "remember selection for future searches" is in effect. You can "remember" article space and then either 1) press Content pages, 2) choose another search bar search domain, or 3) remove all instances of  from the URL.

Page attributes
These five search parameters filter a namespace according to an input word or phrase.


 * No OR. For example, no intitle:A OR intitle:B
 * No positional requirements, and all can stand alone, for example !hastemplate: Val
 * Only incategory accepts several inputs (between pipe | characters)
 * Only linksto and insource do not accept greyspace phrases
 * Only linksto is case sensitive.
 * Only insource is sensitive to a non-spaced colon : character.

These parameter names must be in all-lowercase letters.

Intitle
Intitle finds a word or phrase in a pagename. As with a word or phrase search, stemming and fuzzy search can apply.
 * A word input can be put in double "quotes" to turn off stemming.
 * A phrase input can use greyspace to turn on stemming.
 * A single word input can suffix the tilde ~ character for a fuzzy search.
 * A single word input can suffix the star * character for a wildcard search.
 * Intitle does not search redirects.
 * Proximity search is not an option in a title search.

To find a match in a redirect title, or to apply a proximity search to a title, you can rely on the page-ranking software to boost title matches above content matches. So a basic word, phrase, or proximity search is an alternative to intitle.

For example
 * finds one, while the proximity search
 * finds a dozen related titles immediately.
 * shows stemming while does not.
 * shows stemming.
 * shows how to search for two words in one title.

Incategory
Incategory has the general format
 * incategory: "category|category|...|category"

and selects, from the pages listed in the given categories, those pages that are also in the search domain.
 * Incategory inputs are not case sensitive.
 * Incategory inputs are space sensitive: no spaces around the category names. For any space inside any input, use "double quotes" around the whole expression.
 * The search results do not include pages in subcategories. For that there is a deepcat search parameter, available by adding a line to your JavaScript and CSS files.
 * Multiple categories may be applied up to the 300-character limit of a query.

Because many pages outside the mainspace are also categorized, the counts often won't match the category unless the search domain is the entire wiki:
 * (all 70 pages)
 * (article space, 36 pages)
 * (portal space, 2 pages)

Multi-category input counts a page only once. The following two categories have 209 pages in article space, with six pages found in both categories:
 * (6)
 * (159)
 * (50)
 * (203 = 209 − 6)

On the other hand these are disparate categories:
 * (23 pages about mountains)
 * (18 pages about ships)
 * (41 = 23 + 18)

Because of the nature of categorization these categories share no pages:
 * (zero pages matching all/and)
 * (70 pages)
 * (57 pages)
 * (30 pages)
 * (127 pages)
 * (100 pages)
 * (87 pages)
 * (157 pages)

Categories and Search are synergistic.
 * To search for category titles, and for links and text on a category page, search the category namespace (or use CategoryTree, or Categories for title searches).
 * If two categories are closely related but are not in a subset relation, then links between them can be included in the text of the category pages.
 * A word or phrase search can often precisely match incategory: it can match inside the categories box at the bottom of every page. When this occurs that search result will include a parenthetical flag "(Category pagename)".

In the following examples, note how the page descriptions in the category namespace show category sizes instead of page sizes.
 * (searches the category namespace for titles with that word.)
 * (searches the category namespace for those two words in the title or body of a category page)
 * (It's easy to spot the pages that need categorization, because they also don't have a redirect with that term.)

Hastemplate
Hastemplate finds pages that transclude a given template. It finds actual template usage, not just a name pattern, because it finds all pages where the template content itself was used in any way. The results differ slightly depending on the alias you give.

Hastemplate
 * given the canonical pagename (on the title line), it will find all aliases' (redirects') usage too, and it will find any subpage links to it from a parent template too.
 * given an alias (the redirect's pagename), it finds that redirect's name pattern
 * is not case-sensitive
 * accepts a fullpagename to find template usage of templates (homed) in other than the default Template namespace (just as within the {{template}} call itself)

If you don't find the searched template name in the wikitext of the page, it can mean either that you gave the canonical pagename but it found an alias, or that it was called as a secondary template by way of a template that is shown in the wikitext. To find visible (primary) calls only, use insource.

Insource
Insource: term finds a word or phrase in wikitext.
 * No greyspace_phrases.
 * No stemming.
 * No proximity.
 * Yes wildcards, but only for words, not when the term is an "exact phrase".
 * treats a non-spaced colon : character like a normal letter
 * Insource doesn't search in .js or .css files except in comments or nowiki tags.

Unlike a normal search insource doesn't find things "sourced" by a transclusion.

Insource targets wikitext in two ways. They look similar, but the regexp form employs the slash / character to delimit the regexp.
 * 1) insource: term  finds an indexed word or phrase.
 * 2) insource:/regexp/  targets the entire wikitext of every page in the search domain as one long string of characters per page, either having a pattern or not. This is the "regular expression" (or regexp, or regex). Its metacharacters can represent multiple possibilities for a character position or a range of character positions within a page, using metacharacters for truth logic, grouping, counting, and modifying the characters to be found.

A basic regexp is an easy way to find a specific /"exact string"/, as shown below. The double quotes are field delimiters: they quote the whole set of characters between them and keep their interpretation literal (they keep any metacharacter interpretation from occurring).

An advanced regexp uses the metacharacters to program general string patterns. It finds everything, even pieces and parts of words, conveying no notion of "words", only that of a string of characters in a sequence. Metacharacters are interpreted unless quoted by a backslash, double quotes, or square brackets; see the section on regex. The obvious example: you must quote any slash in your pattern so it won't be interpreted as the closing slash delimiter, using \/ instead of / to match a literal slash. Abusing regexp will not harm Wikipedia performance, but it limits regex search information from flowing elsewhere. Testing a regexp pattern responsibly requires limiting the search domain:
 * by making it a single page using a page-name filter prefix:page name
 * a prefix parameter or other filter that limits the search domain to only as many pages as necessary
 * the test wiki.
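The delimiter-quoting rule can be illustrated with an ordinary regular-expression library; Python's re stands in for the engine's regex dialect here, and the wikitext is a made-up fragment:

```python
import re

wikitext = "convert|9999|m/s|abbr=on"   # hypothetical wikitext fragment

# In insource:/.../ a literal slash must be written \/ so it is not
# read as the closing delimiter; the escaped form still matches '/'.
pattern = re.compile(r"9999\|m\/s")
print(bool(pattern.search(wikitext)))          # True

# Unquoted metacharacters change the meaning: '.' matches any character.
print(bool(re.search(r"9999.m.s", wikitext)))  # True, but looser
```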

Only regexps interpret greyspace characters. The regular insource, as everywhere else, ignores greyspace characters. So a plain insource term matches m/s, as do "M-S" and "m=s". A regexp will match it exactly, and the filtered version will too. The insource:"word1 word2" filter is the most obvious filter for insource:/word1 word2/, where the two wikitext words are separated only by punctuation and space. Say the target string is 9,999 m/s: insource matches words sequentially, but the match could occur anywhere on the page, not necessarily inside the {{template markup}}. For this there is template usage, and it matches any regex inside the template.
 * "val 9999 ul m s fmt commas" → match
 * val "9999 ul" → match
 * val "999" → no match
 * val "fmt commas" → match
 * val "ul m" → match
 * val "ul M S" → match
 * val fmt → match
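The pattern of matches above can be modeled by treating every run of non-alphanumeric characters ("greyspace") as a single separator. A sketch in Python, assuming the target wikitext is {{val|9999|ul=m/s|fmt=commas}} (this models the behavior only; it is not the real engine):

```python
import re

def words(s):
    """Split on 'greyspace': any run of non-alphanumeric characters."""
    return [w.lower() for w in re.split(r'[^0-9A-Za-z]+', s) if w]

def phrase_matches(phrase, wikitext):
    """True if the phrase's words occur consecutively among the wikitext's
    words, the way insource:"..." matches (an illustrative model)."""
    p, t = words(phrase), words(wikitext)
    return any(t[i:i + len(p)] == p for i in range(len(t) - len(p) + 1))

wikitext = '{{val|9999|ul=m/s|fmt=commas}}'
assert phrase_matches('val 9999 ul m s fmt commas', wikitext)
assert phrase_matches('9999 ul', wikitext)
assert not phrase_matches('999', wikitext)   # whole words only: 999 != 9999
assert phrase_matches('ul M S', wikitext)    # case and greyspace are ignored
```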

For thorough precision, use /regex/. For example, to find any bare URL inside , with possible , you then can't use the simpler insource:"ref http server com". Taking a cautious approach, before trying the full regexp, create a search domain under 10,000 pages. Starting with two filters, prefix and insource: one filter produces a regex search domain of only 2,300, and the other produces a search domain of 98,000. Running the regex on that many pages is possible, and produces 1,000 results.
 * 1) 3,700 is too many to start.
 * 2) 264,000 is good.
 * 3) So you try adding a regex term : zero results for prefix:AA, one for prefix:AB.
 * 4) So you try just  instead, and then try prefix:AA: zero; try AB: one.
 * 5) You notice you forgot the modifier for .
 * 6) . There are 64,000, and that is OK.
 * 7) Experiment further. Then decide to do the project in segments AA, AB, AC, ... ZZ.
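Segmenting a large regex job by two-letter title prefixes, as in step 7, is easy to script. A sketch generating the AA ... ZZ segment labels:

```python
from itertools import product
from string import ascii_uppercase

# Generate the prefix segments AA, AB, ..., ZZ for running a large regex
# job in pieces (26 * 26 = 676 segments in all).
segments = [a + b for a, b in product(ascii_uppercase, repeat=2)]

assert segments[0] == 'AA' and segments[1] == 'AB' and segments[-1] == 'ZZ'
assert len(segments) == 676
```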

To find a more targeted URL, say yahoo.brand.edgar.com, use insource:"http yahoo brand edgar com" (or cut and paste the entire URL, slashes, dots, and all; it doesn't matter). Do another search with the https version. These searches are capable of more flexibility than Special:LinkSearch. No filter is needed, but every search always benefits from extra information: any word, any phrase, and most parameters.

Linksto
 Linksto  reports wikilinks to a page name.


 * Linksto only accepts a canonical fullpagename. Use the title line. If the title does not begin with a capital letter, or if you're not sure about the title line for any reason, you can preview {{FULLPAGENAME}} on an edit of the page.
 * Linksto is case sensitive.
 * Namespace aliases are found, but not accepted as input.
 * Linksto does not find redirects. If you want all links to content you'll have to search each redirect page name.
 * Linksto does not report the given page as a link to itself, even when there are internal section-to-section links.
 * Linksto does not find URL-style wikilinks to a page.
 * Collapsed navlinks are not reported by linksto, but they are reported by WhatLinksHere.

Linksto reports wikilinks to a page name, even if the wikilink is
 * to a section.
 * from a subpage link.
 * hidden in a transclusion ("behind" a template that forms a wikilink).

Linksto can differ from the "What links here" tool, because the search domain for "What links here" is all namespaces, while linksto search results are limited to your default search domain. (Also, linksto reports the count, as do all searches.)

In addition to wikitext, linksto searches inside a page's transcluded content. For example
 * linksto:"Mozart and scatology"

will report a list of 300 articles that link to it, as will "What links here". But Mozart and scatology is actually linked only 15 times by content authors. The rest are due to Mozart and scatology appearing in Template:Wolfgang Amadeus Mozart on the unwanted pages. The template is wanted, but the "links to" reference probably is not.

The trick to getting around this, and finding only the authorship links to an article, is a regexp search:
 * : insource:"pagename" insource: / \[\[ *[Pp]agename *[]|] /

That search will find articles only, because the initial : limits the initial search domain to article space, no matter how your default search domain happens to be set. It will find all of the links many times more quickly than a bare regexp would, because the first  insource  term instantly creates the refined search domain that sets the proper limits for the regexp search. A regexp can accommodate the variations allowed in wikilink wikitext: 1) the metacharacter * allows for "zero or more" space characters before and after the title, 2) the [character class] at the beginning allows for the relaxed capitalization of the first character in any pagename, and 3) the character class at the end finds the link whether it is labeled via the pipe character | or closed via the square bracket ] of the wikilink.
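The same pattern can be tried out in Python's re module, which shares the relevant metacharacters (the pagename is the example article above; the sample wikitext strings are invented):

```python
import re

# Python rendering of the insource pattern  \[\[ *[Pp]agename *[]|]
# with the example pagename "Mozart and scatology".
link = re.compile(r'\[\[ *[Mm]ozart and scatology *[]|]')

assert link.search('See [[Mozart and scatology]] for details.')   # plain link
assert link.search('See [[mozart and scatology|his letters]].')   # piped, lowercase
assert link.search('[[ Mozart and scatology ]]')                  # stray spaces
assert not link.search('[[Mozart and scatology (book)]]')         # different title
```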

Links to transclusions are handled by hastemplate.

Sorting results
A page's overall score determines its place in the search results.

A better match will raise the score.
 * A section zero (lead-section) match is better than a match in a numbered section.
 * A title or headings match is better than a lead section match.
 * A greater frequency of a search term is better.
 * A direct match is better than a stemmed match.
 * When several words are all found in many documents, a matching order is better.
 * A higher mesh (more links to and from a page) is better.

Wikiproject "importance" and article quality assessments can factor in. Searching from a page, its categories, wikidata, and geo-location can factor in.
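As a caricature of these heuristics, a page score can be thought of as a weighted combination of the factors above; the weights below are invented for illustration and are not CirrusSearch's actual values:

```python
# A toy illustration of the ranking heuristics; every weight here is invented.
def toy_score(hits_in_title, hits_in_lead, hits_in_body, stemmed_only, inbound_links):
    score = 3.0 * hits_in_title + 2.0 * hits_in_lead + 1.0 * hits_in_body
    if stemmed_only:                    # a direct match beats a stemmed match
        score *= 0.5
    score *= 1 + 0.1 * inbound_links    # a better-meshed page ranks higher
    return score

assert toy_score(1, 0, 0, False, 0) > toy_score(0, 1, 0, False, 0)  # title beats lead
assert toy_score(0, 1, 0, False, 0) > toy_score(0, 0, 1, False, 0)  # lead beats body
assert toy_score(0, 0, 2, False, 0) > toy_score(0, 0, 2, True, 0)   # direct beats stemmed
```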

Knowing this, you may be able to better find, for example, a half-remembered title. Using intitle may skew the results too much because of the order of the words. Instead, use those words in a plain word search and depend on page ranking: the titular words will show up on top.

To get an idea of how CirrusSearch might work see Search/Old.

To sort search results by date, use prefer-recent. To sort search results by template usage, use boost-template.

Morelike
The  morelike  search parameter lists all articles that compare in word frequency and word length to one or more given articles.
 * morelike: pagename | pagename2 | ... | pagename50


 * Quotation marks are not needed, and spacing is not important.
 * Capitalization is enforced, and misspelled pagenames silently fail.
 * Redirects are accepted; the target article's title is used.
 * A pagename with a namespace silently fails.
 * WP:shortcuts silently fail. (A shortcut is a redirect from article space to a project space.)
 * No other search parameters or other terms are allowed alongside morelike.

Morelike calculates a multi-word search.
 * : word1 word2 ... wordN

See them highlighted in the snippet.

Morelike looks up the given pagename(s) in the search index, creates a word-frequency aggregate and a word-length aggregate from all the words, and calculates a multi-word search based on those, plus internal, variable settings. It is an expensive search.

For example, say you search for
 *  morelike:William H. Stewart 

then pick a name from that list and add it
 *  morelike:William H. Stewart|Leroy Edgar Burney 

then add more names, until you have five input pagenames. Then you can begin adjusting this automatically calculated morelike query, saying the following sorts of things. Say you adjust the number of input pagenames that must have a word to two (out of five). For example: https://en.wikipedia.org/w/index.php?title=Special:Search&profile=default&search=morelike:ant|bee|wasp|Eusociality|termite&fulltext=Search&cirrusMtlUseFields=yes&cirrusMltFields=opening_text&limit=1150 Make the calculated query use:
 * at least five words
 * a minimum word length of seven
 * a minimum word frequency of three
 * At most four of the five pagenames may have the term.
 * At least three of them must have the term.
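A toy model of the term selection described here: aggregate word frequencies over the input pages, then keep words that meet the length and frequency thresholds. The function and its sample inputs are illustrative, not the actual implementation:

```python
from collections import Counter
import re

def morelike_query(page_texts, min_word_len=7, min_freq=3, max_words=25):
    """Illustrative sketch of morelike's term selection: aggregate word
    frequencies over the input pages, then keep words meeting the length
    and frequency thresholds (defaults mirror the values described above)."""
    freq = Counter()
    for text in page_texts:
        freq.update(w.lower() for w in re.findall(r'[a-zA-Z]+', text))
    terms = [w for w, n in freq.most_common()
             if len(w) >= min_word_len and n >= min_freq]
    return terms[:max_words]

pages = ['Eusociality in termites and wasps. Eusociality evolved repeatedly.',
         'Wasps and termites show eusociality; eusociality is rare in bees.']
assert 'eusociality' in morelike_query(pages)      # long and frequent enough
assert 'wasps' not in morelike_query(pages)        # too short (5 < 7)
```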

It can also find similar articles based on just the title, or on just the headings, or on just the lead section.
 * &cirrusMtlUseFields=yes&cirrusMltFields=title
 * &cirrusMtlUseFields=yes&cirrusMltFields=headings
 * &cirrusMtlUseFields=yes&cirrusMltFields=text
 * &cirrusMtlUseFields=yes&cirrusMltFields=auxiliary_text
 * &cirrusMtlUseFields=yes&cirrusMltFields=opening_text
 * &cirrusMtlUseFields=yes&cirrusMltFields=all

The search results depend on internal "More like this" variables, settable via the URL, concerning which words to search with.

For example, here is what the address bar (turned search bar) looks like for a morelike search comparing the lead sections of two articles to other lead sections: https://en.wikipedia.org/w/index.php?title=Special:Search&profile=default&search=morelike:William+H.+Stewart|Leroy+Edgar+Burney&fulltext=Search&cirrusMtlUseFields=yes&cirrusMltFields=opening_text Notice the end, containing the two added URL parameters that activate the morelike capability.

Prefer-recent
You can sort search results by date. Prefer-recent goes anywhere in the query. It defaults to 160 days as "recent", and applies its boost formula to 60% of the score. The formula is not the usual multiplier; it is an exponential multiplier, potentially much more powerful. This enables it to work where your meaning of "recent", instead of being 160 days, is as little as 9 seconds. If your "recent" means 9 seconds, use prefer-recent:0.0001
 * prefer-recent:
 * prefer-recent:recent,boost

For example, if you're only interested in the relatively few articles that have changed in the last week, use 7 instead of 160. All articles older than seven days are then boosted only half as much, all articles older than 14 days are boosted half as much again, and so on.

The boost is more than the usual multiplier; it is exponential. The factor in the exponent is the time since the last edit: the longer since the last edit, the smaller the boost. The formula is e^(−ln(2)·t/h), where t is the time since the last edit and h is the half-life of interest, both in days.

Add  prefer-recent   to the beginning of a search. It will give the more recently edited articles a boost in the search results. The general form is
 * prefer-recent:proportion_of_score_to_scale,half_life_in_days

This parameter accepts two comma-separated arguments that adjust the default settings. By default it scales 60% of the score exponentially with the time since the last edit, with a half-life of 160 days. So the default is prefer-recent:0.6,160.

This can be changed to increase the weight:
 * prefer-recent:0.8,360

or decrease it:
 * prefer-recent:0.4,10

The proportion_of_score_to_scale must be a number between 0 and 1 inclusive. The half_life_in_days must be greater than 0, but it allows decimal points, and so, when very small, it works well to sort pages with close edit times.

For example, prefer-recent:0.6,0.0001 operates with a half-life of 8.64 seconds.
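The half-life behavior can be sketched numerically. The function below is an illustrative model of the documented behavior, not the actual scoring code:

```python
import math

def recency_boost(days_since_edit, proportion=0.6, half_life_days=160.0):
    """Illustrative model of prefer-recent's exponential boost: the scaled
    portion of the score halves every half-life; the rest is untouched."""
    decay = math.exp(-math.log(2) * days_since_edit / half_life_days)
    return (1 - proportion) + proportion * decay

# Just edited: full score. One half-life old: the scaled 60% is halved.
assert abs(recency_boost(0) - 1.0) < 1e-9
assert abs(recency_boost(160) - (0.4 + 0.3)) < 1e-9

# A half-life of 0.0001 days is 8.64 seconds:
assert abs(0.0001 * 86400 - 8.64) < 1e-9
```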

This will eventually be on by default for Wikinews.

Boost-templates
Boost-templates:"..." adds weight to pages with the given template or templates (plural). Using this search parameter overrides the normal template-boosting function of Search. Don't use this search parameter without supplying the weight-boosting argument unless you mean to disable the template-weighting function for the search.

The general format is
 * boost-templates:"Template:pagename|parameter Template:pagename|parameter"

Normally, the system message titled MediaWiki:cirrussearch-boost-templates boosts the scores of certain fullpagenames; these are the actual template names and their actual boosts. They are replaced for the duration of a boost-templates search.

For example, here is a search for "phenom" AND "lecture" in which pages bearing the templates Search link and Tlusage have their weighting scores multiplied by 1.5 and 2.25 respectively, while all other templates are ignored (no other template adds any score):
 * phenom lecture boost-templates:"Template:search link|150% tlusage|225%"

Boost-templates differs from hastemplate in
 * the default namespace
 * grammar. Boost-templates has a plural form, and uses a dash between the words.
 * syntax. Boost-templates requires quotation marks.
 * function. Hastemplate is a filter, but boost-templates is not; it only changes a score.
 * Boost-templates has a parameter for controlling the boost.

If you just want your search results to include only pages with certain templates, use hastemplate one or more times instead, to filter out pages that don't have them. Otherwise, choose a multiplier similar to the system message shown above. Multiplying a page score by 10 is done with 1000%; it will probably mask all other weighting functions, such as "the search words match in the title", leaving them little effect on the presentation of search results, and it is not recommended because it affects the order of the entire list.
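A sketch of how the percentage arguments translate into multipliers (an illustrative parser, not the actual implementation):

```python
import re

def parse_boost_templates(arg):
    """Illustrative parser for a boost-templates argument: each entry is a
    template name, a pipe, and a percentage; the percentage becomes a
    score multiplier."""
    return {name.strip(): int(pct) / 100.0
            for name, pct in re.findall(r'([^|]+)\|(\d+)%', arg)}

boosts = parse_boost_templates('Template:search link|150% tlusage|225%')
assert boosts['Template:search link'] == 1.5
assert boosts['tlusage'] == 2.25

# Multiplying a page score by 10 would be written as 1000%:
assert parse_boost_templates('Template:X|1000%') == {'Template:X': 10.0}
```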

Either hastemplate or boost-templates can go anywhere in the query, each having other terms on either side of it.

Bugs
Relevant issues in CirrusSearch:


 * An incategory or intitle pagename argument can't contain a double quotation " mark.
 * The tilde ~ character should not affect the all parameter. Not only does ~ at the beginning not navigate, it also does not create a page, and all this without interfering with any namespace argument; but it does interfere with the pseudo-namespace "all".
 * Use of both AND and OR in the same query doesn't work as expected.
 * A phrase search can extend over a number # sign, but not an asterisk * character. This is inconsistent.
 * cm2 does not find cm², and m3 does not find m³, where the superscripts are Unicode characters.
 * The search profile dialog box is difficult to dislodge. Even after the search profile is changed back to default, it continues to display.

Workarounds
 * Use AND between two phrases, for example, to avoid six unwanted articles relating to the double quote " mark.

Troubleshooting
 * https://test.wikipedia.org/
 * Change the backend by suffixing the URL: &srbackend=LuceneSearch or &srbackend=CirrusSearch
 * Release notes