Vertical search

A vertical search engine is distinct from a general web search engine in that it focuses on a specific segment of online content; such engines are also called specialty or topical search engines. The vertical content area may be based on topicality, media type, or genre of content. Common verticals include shopping, the automotive industry, legal information, medical information, scholarly literature, job search, and travel. Examples of vertical search engines include the Library of Congress, Mocavo, Nuroa, Trulia, and Yelp.

In contrast to general web search engines, which attempt to index large portions of the World Wide Web using a web crawler, vertical search engines typically use a focused crawler that attempts to index only web pages relevant to a pre-defined topic or set of topics. Some vertical search sites focus on a single vertical, while others combine multiple vertical searches within one search engine.
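A focused crawler can be sketched in a few lines. In the toy example below, the page names, page texts, and the naive keyword-match relevance test are all hypothetical illustrations (a real system would use a trained classifier); the key idea is that the crawler indexes, and follows outlinks from, only pages judged on-topic, so off-topic regions of the link graph are never expanded.

```python
from collections import deque

# Toy link graph standing in for the web; all page names and texts
# are hypothetical. Each page maps to (text, outlinks).
PAGES = {
    "home":    ("automotive and cooking portal", ["cars", "cooking"]),
    "cars":    ("car reviews and prices",        ["engines"]),
    "engines": ("engine maintenance guide",      []),
    "cooking": ("recipe collection",             ["recipes"]),
    "recipes": ("pasta recipes",                 []),
}

def relevant(text, topic_terms):
    """Crude stand-in for a relevance classifier: keyword match."""
    return any(term in text for term in topic_terms)

def focused_crawl(seed, topic_terms):
    """Index only pages judged relevant to the pre-defined topic, and
    follow outlinks only from relevant pages."""
    index, seen, frontier = [], {seed}, deque([seed])
    while frontier:
        page = frontier.popleft()
        text, outlinks = PAGES[page]
        if not relevant(text, topic_terms):
            continue  # prune: do not index or expand off-topic pages
        index.append(page)
        for link in outlinks:
            if link not in seen:
                seen.add(link)
                frontier.append(link)
    return index

print(focused_crawl("home", ["car", "automotive", "engine"]))
# → ['home', 'cars', 'engines']
```

Note that "cooking" is fetched once and discarded, while "recipes" behind it is never fetched at all: pruning at off-topic pages is what keeps the crawl confined to the vertical.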

Benefits
Vertical search offers several potential benefits over general search engines:
 * Greater precision due to its limited scope,
 * The ability to leverage domain knowledge, including taxonomies and ontologies,
 * Support for specific, unique user tasks.

Vertical search can be viewed as similar to enterprise search where the domain of focus is the enterprise, such as a company, government or other organization. In 2013, consumer price comparison websites with integrated vertical search engines such as FindTheBest drew large rounds of venture capital funding, indicating a growth trend for these applications of vertical search technology.

Domain-specific search
Domain-specific verticals focus on a specific topic. John Battelle describes this in his book The Search (2005):

Domain-specific search solutions focus on one area of knowledge, creating customized search experiences, that because of the domain's limited corpus and clear relationships between concepts, provide extremely relevant results for searchers.

A general search engine crawls pages in a breadth-first manner to collect as many documents as possible. The spider of a domain-specific search engine instead concentrates on a small, topically focused subset of documents, which it can cover far more efficiently. Spidering accomplished within a reinforcement-learning framework has been found to be three times more efficient than breadth-first search.

DARPA's Memex program
In early 2014, the Defense Advanced Research Projects Agency (DARPA) released a statement on its website outlining the preliminary details of the "Memex program", which aims to develop new search technologies that overcome some of the limitations of text-based search. DARPA wants the Memex technology developed in this research to be usable in search engines that can find information on the Deep Web – the part of the Internet that is largely unreachable by commercial search engines such as Google or Yahoo. DARPA's website states that "The goal is to invent better methods for interacting with and sharing information, so users can quickly and thoroughly organize and search subsets of information relevant to their individual interests". As reported in a 2015 Wired article, the search technology being developed in the Memex program "aims to shine a light on the dark web and uncover patterns and relationships in online data to help law enforcement and others track illegal activity".

DARPA intends for the program to replace the centralized procedures used by commercial search engines, stating that the "creation of a new domain-specific indexing and search paradigm will provide mechanisms for improved content discovery, information extraction, information retrieval, user collaboration, and extension of current search capabilities to the deep web, the dark web, and nontraditional (e.g. multimedia) content". In its description of the program, DARPA explains the program's name as a tribute to Vannevar Bush's original memex invention, which served as an inspiration.

In April 2015, it was announced that parts of Memex would be open-sourced, with modules made available for download.