Vantage-point tree

A vantage-point tree (or VP tree) is a metric tree that segregates data in a metric space by choosing a position in the space (the "vantage point") and partitioning the data points into two parts: those points that are nearer to the vantage point than a threshold, and those points that are not. By recursively applying this procedure to partition the data into smaller and smaller sets, a tree data structure is created where neighbors in the tree are likely to be neighbors in the space.

One generalization is called a multi-vantage-point tree (or MVP tree): a data structure for indexing objects from large metric spaces for similarity search queries. It uses more than one point to partition each level.

History
Peter Yianilos claimed that the vantage-point tree was discovered independently by him and by Jeffrey Uhlmann. Yet Uhlmann published this method before Yianilos, in 1991. Uhlmann called the data structure a metric tree; the name VP-tree was proposed by Yianilos. Vantage-point trees have been generalized to non-metric spaces using Bregman divergences by Nielsen et al.

This iterative partitioning process is similar to that of a $k$-d tree, but uses circular (or spherical, hyperspherical, etc.) rather than rectilinear partitions. In two-dimensional Euclidean space, this can be visualized as a series of circles segregating the data.

The vantage-point tree is particularly useful in dividing data in a non-standard metric space into a metric tree.

Understanding a vantage-point tree
The way a vantage-point tree stores data can be represented by a circle. Each node of the tree contains an input point and a radius. All the left children of a given node are the points inside the circle and all the right children of a given node are the points outside the circle. The tree itself does not need to know any other information about what is being stored; all it needs is a distance function that satisfies the properties of a metric space.
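The node layout and the recursive partitioning described above can be sketched in Python. This is a minimal illustration, not a production implementation: the vantage point is simply the first point of each subset (practical implementations sample candidates and pick a good one), and the median distance is used as the radius so the two subtrees are roughly balanced.

```python
class VPNode:
    """A node holding a vantage point, a threshold radius, and two subtrees."""
    def __init__(self, point, radius, inside, outside):
        self.point = point      # the vantage point
        self.radius = radius    # threshold distance (median distance to the point)
        self.inside = inside    # subtree of points with dist(point, p) < radius
        self.outside = outside  # subtree of points with dist(point, p) >= radius

def build_vp_tree(points, dist):
    """Recursively partition `points` by distance to a chosen vantage point."""
    if not points:
        return None
    vantage = points[0]          # simplest possible choice; real code samples candidates
    rest = points[1:]
    if not rest:
        return VPNode(vantage, 0.0, None, None)
    distances = sorted(dist(vantage, p) for p in rest)
    radius = distances[len(distances) // 2]   # median splits the set roughly in half
    inside = [p for p in rest if dist(vantage, p) < radius]
    outside = [p for p in rest if dist(vantage, p) >= radius]
    return VPNode(vantage, radius,
                  build_vp_tree(inside, dist),
                  build_vp_tree(outside, dist))
```

Any metric can be passed as `dist`; for points in the plane, the Euclidean distance gives the circle picture described above.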

Searching through a vantage-point tree
A vantage-point tree can be used to find the nearest neighbor of a point $x$. The search algorithm is recursive. At any given step we are working with a node of the tree that has a vantage point $v$ and a threshold distance $t$. The point of interest $x$ will be some distance $d$ from the vantage point $v$. If that distance $d$ is less than $t$ then use the algorithm recursively to search the subtree of the node that contains the points closer to the vantage point than the threshold $t$; otherwise recurse to the subtree of the node that contains the points that are farther from the vantage point than the threshold $t$. If the recursive use of the algorithm finds a neighboring point $n$ with distance to $x$ that is less than $|t − d|$ then it cannot help to search the other subtree of this node; the discovered node $n$ is returned. Otherwise, the other subtree also needs to be searched recursively.
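This recursive search, with its $|t − d|$ pruning test, can be sketched as follows. The node layout (`point`, `radius`, `inside`, `outside`) is an assumed, illustrative one, not a prescribed representation:

```python
from collections import namedtuple

# Assumed node layout: vantage point, threshold radius, and the two
# subtrees of points inside / outside the ball of that radius.
VPNode = namedtuple("VPNode", "point radius inside outside")

def nearest_neighbor(node, query, dist, best=None, best_d=float("inf")):
    """Return (point, distance) of the nearest stored point to `query`."""
    if node is None:
        return best, best_d
    d = dist(query, node.point)
    if d < best_d:
        best, best_d = node.point, d
    # Descend first into the subtree on the same side of the threshold as the query.
    near, far = (node.inside, node.outside) if d < node.radius else (node.outside, node.inside)
    best, best_d = nearest_neighbor(near, query, dist, best, best_d)
    # The far subtree can contain a closer point only if the best distance
    # found so far still exceeds |t - d|, the gap to the partition boundary.
    if best_d > abs(node.radius - d):
        best, best_d = nearest_neighbor(far, query, dist, best, best_d)
    return best, best_d
```

On a small hand-built tree with Euclidean distance, this returns the geometrically closest stored point while skipping the far subtree whenever the pruning test allows.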

A similar approach works for finding the $k$ nearest neighbors of a point $x$. In the recursion, the other subtree is searched for nearest neighbors of the point $x$ whenever only $k′ (< k)$ of the nearest neighbors found so far have distance that is less than $|t − d|$.
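The $k$-nearest-neighbor variant can be sketched by keeping the candidates in a max-heap, so the current $k$-th-best distance is always at the top. As above, the `VPNode` layout is an assumed, illustrative one:

```python
import heapq
from collections import namedtuple

# Assumed node layout for a binary vantage-point tree.
VPNode = namedtuple("VPNode", "point radius inside outside")

def k_nearest(node, query, k, dist, heap=None):
    """Return a max-heap of (-distance, point) pairs for the k nearest neighbors."""
    if heap is None:
        heap = []
    if node is None:
        return heap
    d = dist(query, node.point)
    if len(heap) < k:
        heapq.heappush(heap, (-d, node.point))
    elif d < -heap[0][0]:
        heapq.heapreplace(heap, (-d, node.point))  # replace the current k-th best
    # Descend first into the subtree on the query's side of the threshold.
    near, far = (node.inside, node.outside) if d < node.radius else (node.outside, node.inside)
    k_nearest(near, query, k, dist, heap)
    # k-th-best distance so far; infinite while fewer than k candidates are known.
    kth = -heap[0][0] if len(heap) == k else float("inf")
    # Search the far side only while some of the k current candidates
    # lie beyond the |t - d| pruning bound.
    if kth > abs(node.radius - d):
        k_nearest(far, query, k, dist, heap)
    return heap
```

With `k = 1` this degenerates to the single-nearest-neighbor search; larger `k` only loosens the pruning condition.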

Advantages of a vantage-point tree

 * 1) Instead of mapping the domain to multidimensional points before the index is built, the index is built directly from the distances between objects. This avoids pre-processing steps.
 * 2) Updating a vantage-point tree is relatively easy compared to the FastMap approach. With FastMap, after enough insertions or deletions the map must eventually be rescanned; this is time-consuming, and it is unclear when the rescanning will be needed.
 * 3) Distance-based methods are flexible. They are "able to index objects that are represented as feature vectors of a fixed number of dimensions."

Complexity
The time cost to build a vantage-point tree is approximately $O(n \log n)$. For each element, the tree is descended by $\log n$ levels to find its placement. However, there is a constant factor $k$, where $k$ is the number of vantage points per tree node.

The time cost to search a vantage-point tree to find a single nearest neighbor is $O(\log n)$. There are $\log n$ levels, each involving $k$ distance calculations, where $k$ is the number of vantage points (elements) at that position in the tree.

The time cost to search a vantage-point tree for a range, which may be the most important attribute, can vary greatly depending on the specifics of the algorithm used and parameters. Brin's paper gives the result of experiments with several vantage point algorithms with various parameters to investigate the cost, measured in number of distance calculations.

The space cost for a vantage-point tree is approximately $O(n)$. Each element is stored, and each tree element in each non-leaf node requires a pointer to its descendant nodes. (See Brin for details on one implementation choice; the parameter for the number of elements at each node plays a factor.)

With $n$ points there are $n^{2}$ pairwise distances between points. However, the creation of a vantage-point tree requires that only $O(n \log n)$ distances be calculated explicitly, and a search requires only $O(\log n)$ distance calculations. For example, if $x$ and $y$ are points and it is known that the distance $d(x, y)$ is small, then any point $z$ that is far from $x$ will also necessarily be almost as far from $y$, because the metric space's triangle inequality gives $d(y, z) \geq d(x, z) - d(x, y)$.
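This bound from the triangle inequality can be checked numerically. A minimal sketch with Euclidean distance (the specific points are arbitrary illustrative values):

```python
import math

def dist(a, b):
    # Euclidean distance in the plane: a metric, so the triangle inequality holds.
    return math.hypot(a[0] - b[0], a[1] - b[1])

x, y, z = (0.0, 0.0), (0.3, 0.4), (10.0, 0.0)  # y is close to x, z is far from x
# d(y, z) >= d(x, z) - d(x, y): z must be nearly as far from y as it is from x.
assert dist(y, z) >= dist(x, z) - dist(x, y)
```

It is exactly this guarantee that lets the search avoid computing most of the $n^{2}$ pairwise distances.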