User:Mbaer3000/End-to-end principle

The end-to-end principle is one of the classic design principles of computer networking. First explicitly articulated in a 1981 conference paper by Saltzer, Reed, and Clark, it has inspired and informed many subsequent debates on the proper distribution of functions in the Internet and communication networks more generally.

The end-to-end principle states that application-specific functions ought to reside in the end hosts of a network rather than in intermediary nodes – provided they can be implemented "completely and correctly" in the end hosts. The basic intuition behind the original principle, which goes back to Baran's work on obtaining reliability from unreliable parts in the early 1960s, is that the payoffs from adding functions to the network quickly diminish, especially in those cases where the end hosts will have to implement those functions anyway for reasons of "completeness and correctness" (regardless of the efforts of the network).

The canonical example of the end-to-end principle is that of arbitrarily reliable data transfer between two communication end points in a distributed network of nontrivial size: the only way the two end points can obtain arbitrarily high reliability is by positive end-to-end acknowledgments combined with retransmission of unacknowledged data. In debates about network neutrality, a common interpretation of the end-to-end principle is that it implies a neutral or "dumb" network.

Basic content of the principle
The fundamental notion behind the end-to-end principle is that, for two processes communicating with each other via some communication system, the reliability obtained from that system cannot be expected to be perfectly aligned with the reliability requirements of the processes. In particular, meeting or exceeding very high reliability requirements of communicating processes separated by networks of nontrivial size is more costly than obtaining the required degree of reliability by positive end-to-end acknowledgements and retransmissions (referred to as PAR or ARQ). Put differently, it is far easier and more tractable to obtain reliability beyond a certain margin by mechanisms in the end hosts of a network rather than in the intermediary nodes, especially when the latter are beyond the control of, and not accountable to, the former. An end-to-end PAR protocol with infinite retries can obtain arbitrarily high reliability from any network with a nonzero probability of successfully transmitting data from one end to another.
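A simple calculation illustrates this last claim (a sketch under the idealized assumption that successive transmission attempts succeed independently, which is not part of the original paper): if each end-to-end attempt is delivered with probability <math>p > 0</math>, then the probability that at least one of <math>n</math> attempts succeeds approaches 1, and the expected number of attempts needed is finite:

:<math>\Pr[\text{delivery within } n \text{ attempts}] = 1 - (1 - p)^n \xrightarrow{\ n \to \infty\ } 1, \qquad \operatorname{E}[\text{attempts}] = \frac{1}{p}.</math>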

The end-to-end principle does not trivially extend to functions beyond end-to-end error control and correction. For example, no straightforward end-to-end arguments can be made for communication parameters such as latency and throughput. Based on a personal communication with Saltzer (lead author of the original end-to-end paper), Blumenthal and Clark note in a 2001 paper: "[F]rom the beginning, the end-to-end arguments revolved around requirements that could be implemented correctly at the end-points; if implementation inside the network is the only way to accomplish the requirement, then an end-to-end argument isn't appropriate in the first place. (p. 80)"

History
The meaning of the end-to-end principle has been continuously reinterpreted ever since its initial articulation. Also, noteworthy formulations of the end-to-end principle can be found prior to the seminal 1981 Saltzer, Reed, and Clark paper.

The basic notion: reliability from unreliable parts
In the 1960s, Paul Baran and Donald Davies, in their pre-Arpanet elaborations of networking, made brief comments about reliability that capture the essence of the later end-to-end principle. To quote from a 1964 Baran paper: "Reliability and raw error rates are secondary. The network must be built with the expectation of heavy damage anyway. Powerful error removal methods exist. (p. 5)" Similarly, Davies notes on end-to-end error control: "It is thought that all users of the network will provide themselves with some kind of error control and that without difficulty this could be made to show up a missing packet. Because of this, loss of packets, if it is sufficiently rare, can be tolerated. (p. 2.3)"

Early trade-offs: experiences in the Arpanet
The Arpanet was the first large-scale general-purpose packet-switching network – implementing several of the basic notions previously touched on by Baran and Davies, and demonstrating a number of important aspects of the end-to-end principle:


 * Packet switching pushes some logical functions toward the communication end points
 * If the basic premise of a distributed network is packet switching, then functions such as reordering and duplicate detection inevitably have to be implemented at the logical end points of such a network. Consequently, the Arpanet featured two distinct levels of functionality – (1) a lower level concerned with transporting data packets between neighboring network nodes (called IMPs), and (2) a higher level concerned with various end-to-end aspects of the data transmission. Dave Clark, one of the authors of the end-to-end principle paper, concludes: "The discovery of packets is not a consequence of the end-to-end argument. It is the success of packets that make the end-to-end argument relevant" (slide 31).


 * No arbitrarily reliable data transfer without end-to-end acknowledgment and retransmission mechanisms
 * The Arpanet was designed to provide reliable data transport between any two end points of the network – much like a simple I/O channel between a computer and a nearby peripheral device. In order to remedy any potential failures of packet transmission, normal Arpanet messages were handed from one node to the next node with a positive acknowledgment and retransmission scheme; after a successful handover they were then discarded, and no source-to-destination retransmission in case of packet loss was catered for. However, in spite of significant efforts, perfect reliability as envisaged in the initial Arpanet specification turned out to be impossible to provide – a reality that became increasingly obvious once the Arpanet grew well beyond its initial four-node topology. The Arpanet thus provided a strong case for the inherent limits of network-based hop-by-hop reliability mechanisms in pursuit of true end-to-end reliability (a minimal sketch of an end-to-end acknowledgment-and-retransmission scheme follows this list).


 * Trade-off between reliability, latency, and throughput
 * The pursuit of perfect reliability may hurt other relevant parameters of a data transmission – most importantly latency and throughput. This is particularly important for applications that do not require perfect reliability, but rather value predictable throughput and low latency – the classic example being interactive real-time voice applications. This use case was catered for in the Arpanet by providing a raw message service that dispensed with various reliability measures so as to provide a faster, lower-latency data transmission service to the end hosts.
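The following is a minimal, self-contained sketch (in Python, with the loss probability and payloads chosen arbitrarily for illustration, and not drawn from any historical implementation) of the kind of end-to-end mechanism these observations point to: a stop-and-wait scheme in which the sending end point retransmits sequence-numbered packets until they are acknowledged, while the receiving end point discards duplicates and delivers data in order, regardless of how lossy the intervening network is.

<syntaxhighlight lang="python">
import random

def unreliable_network(packet, loss_prob):
    """Model a network path that drops a packet with probability loss_prob."""
    return packet if random.random() > loss_prob else None

def end_to_end_transfer(data, loss_prob=0.3):
    """Stop-and-wait transfer: the sender retransmits each sequence-numbered
    packet until it is acknowledged; the receiver discards duplicates and
    delivers payloads in order."""
    delivered = []
    expected_seq = 0                              # receiver state
    for seq, payload in enumerate(data):          # sender iterates over the data
        acked = False
        while not acked:                          # retry until acknowledged
            packet = unreliable_network((seq, payload), loss_prob)
            if packet is None:
                continue                          # packet lost - retransmit
            rcv_seq, rcv_payload = packet
            if rcv_seq == expected_seq:           # new packet - deliver in order
                delivered.append(rcv_payload)
                expected_seq += 1
            # duplicates (rcv_seq < expected_seq) are silently discarded
            ack = unreliable_network(rcv_seq, loss_prob)
            acked = (ack == seq)                  # a lost ACK also forces retransmission
    return delivered

print(end_to_end_transfer(["a", "b", "c"]))       # ['a', 'b', 'c'] despite losses
</syntaxhighlight>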

The canonical case: TCP/IP
To this day the Internet is characterized mainly by the primacy of the IP protocol – providing a connectionless datagram service with no delivery guarantees and effectively no QoS parameters – at the narrow waist of the [[Media:Internet-hourglass.svg|hourglass abstraction of the Internet architecture]]. Arbitrary protocols may sit on top of IP; however, TCP has been the most widely used, given that it provides a reliable end-to-end transport service to the communicating end points. The functional separation between IP and TCP serves as the canonical exemplification of the end-to-end principle and is often invoked in a normative sense when lamenting violations of network neutrality.
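This division of labour is visible at the application programming interface. The following is a minimal sketch using Python's standard socket module (the loopback address and payload are arbitrary choices for illustration): the UDP socket exposes the underlying best-effort datagram service more or less as-is, whereas the TCP socket presents a byte stream whose reliability is produced by acknowledgment and retransmission logic in the two end hosts.

<syntaxhighlight lang="python">
import socket

# A trivial TCP listener so the stream example below has an end point to talk to.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))              # let the OS pick a free port
listener.listen(1)
port = listener.getsockname()[1]

# Datagram service (UDP over IP): each sendto() is a single, unacknowledged
# datagram; delivery, ordering, and duplicate detection are left to the end hosts.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"hello", ("127.0.0.1", port))    # may silently be lost
udp.close()

# Stream service (TCP over IP): the end hosts' TCP implementations add sequence
# numbers, acknowledgments, and retransmissions on top of the same best-effort
# IP datagrams, presenting the application with a reliable, ordered byte stream.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp.connect(("127.0.0.1", port))
tcp.sendall(b"hello")                        # delivered in order, or an error is raised
conn, _ = listener.accept()
print(conn.recv(5))                          # b'hello'
conn.close(); tcp.close(); listener.close()
</syntaxhighlight>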