Network Coordinate System

A Network Coordinate System (NC system) is a system for predicting characteristics such as the latency or bandwidth of connections between nodes in a network by assigning coordinates to those nodes. More formally, it assigns a coordinate embedding $$\vec c_n$$ to each node $$n$$ in a network using an optimization algorithm, such that a predefined operation $$\vec c_a \otimes \vec c_b \rightarrow d_{ab}$$ estimates some directional characteristic $$d_{ab}$$ of the connection between nodes $$a$$ and $$b$$.
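The definition above can be sketched concretely. In this minimal example (an illustration, not part of any particular NC system), coordinates are plain lists of floats and the $$\otimes$$ operation is taken to be Euclidean distance, one common choice:

```python
import math

def predict_latency(coord_a, coord_b):
    """Estimate the characteristic d_ab from two coordinate embeddings.

    Here the combining operation is Euclidean distance; other NC system
    designs substitute a different operation (e.g. a dot product).
    """
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(coord_a, coord_b)))

# Two nodes whose 2-D coordinates were produced by some optimization step.
print(predict_latency([0.0, 3.0], [4.0, 0.0]))  # → 5.0
```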

Uses
In general, Network Coordinate Systems can be used for peer discovery, optimal server selection, and characteristic-aware routing.

Latency Optimization
When optimizing for latency as the connection characteristic, i.e. for low-latency connections, NC systems can potentially help improve the quality of experience for many different applications, such as:

 * Online Games
   * Forming game groups such that all the players are close to each other and thus have a smoother overall experience.
   * Choosing servers as close to as many players in a given multiplayer game as possible.
   * Automatically routing game packets through different servers so as to minimize the total latency between players who are actively interacting with each other in the game map.
 * Content delivery networks
   * Directing a user to the closest server that can handle a request, to minimize latency.
 * Voice over IP
   * Automatically switching relay servers based on who is talking in a few-to-many or many-to-many voice chat, to minimize latency between active participants.
 * Peer-to-peer networks
   * Using the latency-predicting properties of NC systems to drive a wide variety of routing optimizations in peer-to-peer networks.
 * Onion routing networks
   * Choosing relays so as to minimize the total round-trip delay, allowing a more flexible tradeoff between performance and anonymity.
 * Physical positioning
   * Latency correlates with the physical distance between computers in the real world. Thus, NC systems that model latency may be able to aid in locating the approximate physical area in which a computer resides.

Bandwidth Optimization
NC systems can also optimize for bandwidth (although not all designs can accomplish this well). Optimizing for high-bandwidth connections can improve the performance of large data transfers.

Sybil Attack Detection
Sybil attacks are a major concern when designing peer-to-peer protocols. NC systems, with their ability to assign a location to the source of traffic, can aid in building systems that are Sybil-resistant.

Landmark-Based vs Decentralized
Almost any NC system variant can be implemented in either a landmark-based or fully decentralized configuration. Landmark-based systems are generally secure so long as none of the landmarks are compromised, but they aren't very scalable. Fully decentralized configurations are generally less secure, but they can scale indefinitely.

Euclidean Embedding

 * This design assigns a point in $$k$$-dimensional Euclidean space to each node in the network and estimates characteristics via the Euclidean distance function $$d_{ab} = ||\vec c_a - \vec c_b||$$, where $$\vec c_n$$ represents the coordinate of node $$n$$.
 * Euclidean Embedding designs are generally easy to optimize.
 * The optimization problem for the network as a whole is equivalent to finding the lowest energy state of a spring-mass system where the coordinates of the masses correspond to the coordinates of nodes in the network and the springs between the masses represent measured latencies between nodes.
 * To make this optimization work in a decentralized protocol, each node exchanges its own coordinates with a fixed set of peers and measures the latencies to those peers. It then simulates a miniature spring-mass system in which each peer's coordinate is a mass connected by a single spring (representing the measured latency) to the node's own "mass"; simulating this system yields a more optimal value for the node's own coordinate. Together, these individual updates allow the network as a whole to collaboratively form a predictive coordinate space.
 * The laws of Euclidean space require certain properties of the distance function to hold, such as symmetry (measuring from $$a \rightarrow b$$ should give the same result as from $$b \rightarrow a$$) and the triangle inequality $$(a \rightarrow b) + (b \rightarrow c) \geq (a \rightarrow c)$$. No real-world network characteristic completely satisfies these laws, but some come closer than others, and NC systems using Euclidean embedding remain somewhat accurate when run on datasets containing violations of these laws.
 * Notable Papers: GNP, PIC, Vivaldi, Pharos
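The decentralized spring-relaxation step described above can be sketched as follows. This is a simplified, Vivaldi-style update (the fixed `step` gain is an assumption for illustration; the actual Vivaldi algorithm adapts it using per-node error estimates):

```python
import random

def spring_update(own, peer, rtt_ms, step=0.05):
    """One spring-relaxation step for a single (node, peer) measurement.

    Moves our coordinate along the spring between us and the peer so that
    the Euclidean distance between the coordinates moves toward the
    measured round-trip time.
    """
    diff = [a - b for a, b in zip(own, peer)]
    dist = sum(d * d for d in diff) ** 0.5
    if dist == 0:  # coincident coordinates: pick a random direction
        diff = [random.random() for _ in own]
        dist = sum(d * d for d in diff) ** 0.5
    error = dist - rtt_ms                 # spring displacement (prediction error)
    unit = [d / dist for d in diff]
    # Move away from the peer if we predict too low, toward it if too high.
    return [c - step * error * u for c, u in zip(own, unit)]
```

Repeatedly applying this update against each measured peer shrinks the node's prediction error, which is how the network as a whole converges on a useful coordinate space.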

Matrix Factorization

 * The matrix factorization design models the entire network as an incomplete matrix $$X \in \mathbb{R}^{n \times n}$$, where $$n$$ is the total number of nodes in the network and the element at the intersection of row $$i$$ and column $$j$$ represents a directional latency measurement from node $$n_i$$ to node $$n_j$$. The goal is to estimate the values of the unfilled entries of the matrix using the entries that are already filled in, i.e. to perform matrix completion.
 * To estimate a specific latency between two nodes, this method uses the dot product $$d_{ab} = \vec u_a \cdot \vec v_b$$, where $$\vec u_n$$ and $$\vec v_n$$ represent points in a $$k$$-dimensional inner product space.
 * NC system designs using matrix factorization are generally more complicated than their euclidean counterparts.
 * In the centralized variant, matrix completion can be performed directly on a set of landmarks that have each measured the latency to every other landmark in the set, producing a complete matrix $$X$$ representing the landmark network. This matrix can then be factored on a single computer using non-negative matrix factorization (NNMF) into two matrices $$U \in \mathbb{R}^{n \times r}$$ and $$V \in \mathbb{R}^{r \times n}$$ such that $$UV \approx X$$. Since matrix multiplication amounts to taking the dot product of each row of $$U$$ with each column of $$V$$, each landmark $$j$$ can be assigned an "out" vector $$\vec u_j$$ (the $$j$$th row of $$U$$) and an "in" vector $$\vec v_j$$ (the $$j$$th column of $$V$$). The latency between two landmarks can then be approximated by a simple dot product: $$d_{ij} = \vec u_i \cdot \vec v_j$$. Any node that wants to determine its own coordinates can measure the latency to some subset of the landmarks, reconstruct a complete matrix using the landmarks' coordinates, and then perform NNMF to calculate its own coordinate. That coordinate can then be used with any other node (landmark or otherwise) to estimate the latency to any other coordinate calculated against the same set of landmarks.
 * The decentralized variant is decidedly simpler. For a given node, the goal is to minimize the absolute difference (or squared difference) between the measured latencies to its peers and the predicted latencies to those peers. The predicted latency is given by the same equation $$d_{ij} = \vec u_i \cdot \vec v_j$$, where $$\vec u_i$$ is the outgoing vector of node $$i$$ and $$\vec v_j$$ is the incoming vector of node $$j$$. This loss function can then be minimized using stochastic gradient descent with line search.
 * Notable Papers: IDES, Phoenix, DMFSGD
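One gradient step of the decentralized variant can be sketched as below. This is a simplified illustration of minimizing the squared difference $$(\vec u_i \cdot \vec v_j - d_{ij})^2$$; the fixed learning rate stands in for the line search mentioned above, and updating both vectors in one call is a simplification of how nodes share work in real protocols such as DMFSGD:

```python
def sgd_step(u_i, v_j, measured, lr=0.01):
    """One SGD update on the squared-error loss (u_i . v_j - measured)^2.

    u_i is node i's outgoing vector, v_j is node j's incoming vector.
    Both move along the negative gradient of the squared prediction error
    (the constant factor of 2 is absorbed into the learning rate).
    """
    pred = sum(a * b for a, b in zip(u_i, v_j))
    err = pred - measured
    new_u = [a - lr * err * b for a, b in zip(u_i, v_j)]
    new_v = [b - lr * err * a for a, b in zip(u_i, v_j)]
    return new_u, new_v
```

Each node repeats such steps against fresh latency measurements to its peers, so the factorization is refined continuously without any central coordinator.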

Tensor Factorization

 * Notable Papers: TNDP, Leverage Sampling + Personal Devices

Relative Coordinates

 * Notable Papers: RMF

Alternatives
Network Coordinate Systems are not the only way to predict network properties. Methods such as iPlane and iPlane Nano take a more analytical approach, mechanistically simulating the behavior of internet routers to predict the route packets will take, and thus what properties a connection will have.

In The Wild

 * Vuze - BitTorrent Client