Multipath routing

Multipath routing is a routing technique that uses multiple alternative paths through a network simultaneously. This can yield a variety of benefits such as fault tolerance, increased bandwidth, and improved security.

Mobile networks
To improve performance or fault tolerance, concurrent multipath routing (CMR) is often taken to mean the simultaneous management and use of multiple available paths for the transmission of streams of data. The streams may emanate from a single application or from multiple applications. Each stream is assigned its own path, insofar as the number of available paths permits; if there are more streams than available paths, some streams share paths. CMR provides better utilization of bandwidth by creating multiple transmission queues. It also provides a degree of fault tolerance in that, should a path fail, only the traffic assigned to that path is affected. Ideally, an alternative path is immediately available on which to continue or restart the interrupted stream.

CMR provides better transmission performance and fault tolerance by providing simultaneous, parallel transport over multiple carriers, with the ability to reassign an interrupted stream, and by load balancing over available assets. However, under CMR, some applications may offer traffic to the transport layer more slowly than their assigned paths can carry it, starving those paths and leaving them under-utilized. Also, moving a stream to an alternative path incurs a potentially disruptive period during which the connection is re-established.
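The stream-to-path assignment described above can be sketched in a few lines of Python. This is a minimal illustration, not any particular implementation; the path names and the `StreamScheduler` class are hypothetical.

```python
import itertools

class StreamScheduler:
    """Sketch of application-level CMR: each stream gets its own path when
    possible, streams share paths round-robin otherwise, and streams on a
    failed path are reassigned to the surviving paths."""

    def __init__(self, paths):
        self.paths = list(paths)            # identifiers of available paths
        self.assignment = {}                # stream -> assigned path
        self._cycle = itertools.cycle(self.paths)

    def assign(self, stream):
        # More streams than paths means some streams must share a path.
        self.assignment[stream] = next(self._cycle)
        return self.assignment[stream]

    def fail_path(self, dead):
        # Only streams on the failed path are affected; move them to the
        # remaining paths (the potentially disruptive re-assignment step).
        self.paths.remove(dead)
        self._cycle = itertools.cycle(self.paths)
        for stream, path in self.assignment.items():
            if path == dead:
                self.assignment[stream] = next(self._cycle)

sched = StreamScheduler(["path_a", "path_b"])
sched.assign("video")      # -> "path_a"
sched.assign("audio")      # -> "path_b"
sched.assign("files")      # shares "path_a"
sched.fail_path("path_a")  # "video" and "files" move to "path_b"
```

Note that a whole stream is the unit of assignment here; the next section moves the granularity down to individual packets.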

True CMR
A more powerful form of CMR (true CMR) goes beyond merely presenting paths to applications to which they can bind. True CMR aggregates all available paths into a single, virtual path.

Applications send their packets to this virtual path, which is de-multiplexed at the network layer. The packets are distributed to the physical paths via some scheduling algorithm, e.g. round-robin or weighted fair queuing. Should a link fail, succeeding packets are not directed to that path and the stream continues uninterrupted to the application through the remaining path(s). This method provides significant performance benefits over application-level CMR:


 * By continually offering packets to all paths, the paths are more fully utilized.
 * No matter how many paths fail, so long as at least one path is still available, all sessions remain connected: no streams need to be restarted and no re-connection penalty is incurred.
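A minimal sketch of this packet-level behaviour, assuming a round-robin scheduler and hypothetical link names (`eth0`, `wlan0`):

```python
from collections import deque

class VirtualPath:
    """Sketch of true CMR: a single virtual path that distributes packets
    round-robin across the live physical links."""

    def __init__(self, links):
        self.live = deque(links)            # links currently usable
        self.sent = {link: [] for link in links}

    def send(self, packet):
        if not self.live:
            raise ConnectionError("all paths down")
        link = self.live[0]
        self.live.rotate(-1)                # round-robin distribution
        self.sent[link].append(packet)

    def link_failed(self, link):
        # Succeeding packets simply avoid the dead link; the stream
        # continues uninterrupted over the remaining path(s).
        self.live.remove(link)

vp = VirtualPath(["eth0", "wlan0"])
for pkt in range(4):
    vp.send(pkt)              # packets alternate eth0 / wlan0
vp.link_failed("wlan0")
vp.send(4)                    # goes to eth0; no re-connection needed
```

Because every packet is offered to whichever links are live, the paths stay fully utilized and a session survives as long as one link does.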

Capillary routing
In networking and in graph theory, capillary routing is, for a given network, a multi-path solution between a pair of source and destination nodes. Unlike shortest-path or max-flow routing, only one capillary routing solution exists for any given network topology.

Capillary routing can be constructed by an iterative linear programming process, transforming a single-path flow into a capillary route.


 * 1) First, minimize the maximal load over all links of the network.
   * Do this by minimizing a single load upper-bound value applied to all links.
   * The full mass of the flow is thereby split equally across the possible parallel routes.
 * 2) Find the bottleneck links of the first layer (see below), then fix their load at the found minimum.
 * 3) Minimize the maximal load of all remaining links, now excluding the bottleneck links of the first layer.
   * This second iteration further refines the path diversity.
 * 4) Determine the bottleneck links of the second layer.
   * Again, minimize the maximal load of the remaining links, now excluding the bottlenecks of the second layer as well.
 * 5) Repeat until the entire communication footprint is enclosed in the bottlenecks of the constructed layers.
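The first step can be illustrated with a toy stand-in for the linear program: when the available routes are edge-disjoint, splitting the demand equally across them is exactly the min-max solution. The function below is an illustrative sketch, not the LP of the construction.

```python
def split_flow_equally(paths, demand=1.0):
    """Toy stand-in for step 1: with edge-disjoint parallel routes,
    splitting the demand equally minimizes the maximal link load."""
    load = {}
    share = demand / len(paths)
    for path in paths:
        for link in zip(path, path[1:]):  # consecutive nodes form a link
            load[link] = load.get(link, 0.0) + share
    return load

# Two edge-disjoint routes from S to D: each link carries half the flow,
# so the maximal link load drops from 1.0 (single path) to 0.5.
load = split_flow_equally([["S", "A", "D"], ["S", "B", "D"]])
```

With routes that share links, the equal split is no longer optimal and the upper-bound minimization of the LP is needed.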

At each layer of the construction, after the maximal load of the links has been minimized, the bottlenecks of that layer are discovered by an iterative bottleneck detection process.


 * 1) At each iteration of the detection loop, minimize the traffic sent over all links that carry the maximal load and are therefore suspected of being bottlenecks.
 * 2) Links that cannot maintain their load at the maximum are removed from the candidate list.
 * 3) The bottleneck detection process stops when there are no more links to remove; the links still remaining are the true bottlenecks of the layer.
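The detection loop can be mimicked on a tiny example by brute force instead of linear programming: a link at the min-max optimum L* is a true bottleneck exactly when its load cannot be pushed below L* without raising some other link above L*. The grid search below is an illustrative substitute for the LP, not the process used in practice.

```python
from itertools import product

def link_loads(paths, split):
    """Per-link load when fraction split[i] of the flow uses paths[i]."""
    load = {}
    for path, x in zip(paths, split):
        for link in zip(path, path[1:]):
            load[link] = load.get(link, 0.0) + x
    return load

def detect_bottlenecks(paths, grain=20):
    """Brute-force stand-in for the LP detection loop: a link is a
    bottleneck if its load cannot drop below the min-max optimum L*
    while every other link stays at or below L*."""
    n = len(paths)
    # All flow splits on a grid of resolution 1/grain that sum to 1.
    splits = [tuple(c / grain for c in combo)
              for combo in product(range(grain + 1), repeat=n)
              if sum(combo) == grain]
    best = min(max(link_loads(paths, s).values()) for s in splits)  # L*
    feasible = [s for s in splits
                if max(link_loads(paths, s).values()) <= best + 1e-9]
    bottlenecks = set()
    for link in link_loads(paths, [1.0 / n] * n):
        # Smallest load this link can carry while staying at the optimum.
        lo = min(link_loads(paths, s)[link] for s in feasible)
        if lo >= best - 1e-9:
            bottlenecks.add(link)
    return best, bottlenecks

# Three routes from S to D; link S-B is shared by two of them, so links
# B-D, B-C and C-D can shed their load and are not first-layer bottlenecks.
best, bottlenecks = detect_bottlenecks(
    [["S", "A", "D"], ["S", "B", "D"], ["S", "B", "C", "D"]])
```

In the example, L* = 0.5 and the forced links S-A, A-D and S-B come out as the first-layer bottlenecks, matching step 2 of the construction above.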