Draft:Sleep sort

Sleep sort is a simple sorting algorithm based upon asynchronous, concurrent insertions into a destination sequence by means of a programming language's timing or delay mechanism, which permits a program to “sleep”. A separate process or thread is created for each input element; upon starting, it sleeps for a number of time units proportional to the element's value and then appends the element to the output. The ordering thus emerges from the relative delays rather than from the explicit per-element comparisons of conventional sorting algorithms.

Because it trades running time for simplicity and depends upon the operating system's opaque and ultimately unpredictable scheduler, the algorithm is generally impractical and serves chiefly as a programming curiosity. Its ease of implementation and the emphasis it places on concurrency, however, make it suitable as a didactic introduction to parallelism in computer science.

History
The sleep sort algorithm was introduced by an anonymous user of the 4chan imageboard in a post dated January 20, 2011:
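
The following is a reconstruction of the posted bash script, matching the description below; the original post's exact wording may differ in minor details, and the function name used here is illustrative:

```shell
#!/bin/bash
# Reconstruction of the sleep sort script described below.
sleep_sort() {
    local e
    for e in "$@"; do
        ( sleep "$e"; echo "$e" ) &   # one background process per element
    done
    wait                              # block until every process has printed
}

sleep_sort 3 1 2    # prints 1, 2 and 3 on separate lines
```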

The presented bash script iterates over its command-line arguments, spawns for each one a background process that sleeps for a number of seconds equal to the element's value, and then writes the element to standard output. The delay, or sleep, interposed between the write operations produces the sorted arrangement.

Algorithm
An abstraction of the concept, which sets aside the bash shell's lack of explicit multithreading facilities and the redirection to standard output, and instead employs threads for the parallel operations and a dedicated sequence to collect the sorted elements, leads to the following informal description:

Given the input sequence I, an initially empty output sequence O, and an initially empty set of threads T:


 * 1) Generate for each element e of the input sequence I a new thread t which, upon its execution, inserts e at the rear of O.
 * 2) Delay this insertion by a number of time units equal to the value of e, that is, let the thread “sleep”.
 * 3) Insert t into the set of threads T.
 * 4) Start all threads in T.
 * 5) Wait until all threads in T have completed their operation.

Formal Description
A more formal expression in pseudocode provides further details:
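
The pseudocode below is one possible rendering of the informal steps given above; the procedure name is chosen here for illustration:

```
procedure SleepSort(I)
    O ← empty sequence
    T ← empty set of threads
    for each element e in I do
        t ← new thread which, when started:
                sleeps for e time units
                appends e to the rear of O
        T ← T ∪ { t }
    end for
    start every thread in T
    wait until every thread in T has terminated
    return O
end procedure
```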

Implementation
An exemplary implementation in Common Lisp, which requires the Quicklisp library manager and the Bordeaux Threads library for concurrency, illustrates the multi-threading concepts involved:
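
A minimal sketch of such an implementation, assuming Quicklisp and Bordeaux Threads are available; the function and variable names are chosen here for illustration:

```lisp
(ql:quickload :bordeaux-threads)

(defun sleep-sort (input)
  "Returns a list of the elements of INPUT in ascending order."
  (let ((output  '())
        (lock    (bt:make-lock))      ; guards concurrent pushes onto OUTPUT
        (threads '()))
    (dolist (e input)
      (let ((e e))                    ; fresh binding for each thread's closure
        (push (bt:make-thread
               (lambda ()
                 (sleep e)            ; delay proportional to the element's value
                 (bt:with-lock-held (lock)
                   (push e output))))
              threads)))
    (mapc #'bt:join-thread threads)   ; wait until every thread has completed
    (nreverse output)))               ; PUSH collected the elements in reverse

;; (sleep-sort '(3 1 2)) ; => (1 2 3), timing permitting
```

The lock serializes the insertions at the rear of the output sequence, corresponding to step 1 of the informal description; joining the threads corresponds to the final waiting step.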

Restrictions
The algorithm's most conspicuous limitation concerns its input elements, which must double as arguments to the delay routine; in the common case this restricts them to non-negative integral or floating-point values.

However, the situation may be remedied by introducing an eligible representative for every original element, in the form of a mapping from the input space to a counterpart compatible with the respective language facility. As an example, a sequence of strings may be sorted by length by sleeping a number of time units equal to each element's length.
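
Following the same idea, a hypothetical Common Lisp sketch, again assuming Bordeaux Threads and using illustrative names, orders strings by ascending length:

```lisp
(ql:quickload :bordeaux-threads)

(defun sleep-sort-by-length (strings)
  "Returns STRINGS ordered by ascending length via per-element delays."
  (let ((output '())
        (lock   (bt:make-lock)))
    (mapc #'bt:join-thread
          (mapcar (lambda (s)
                    (bt:make-thread
                     (lambda ()
                       (sleep (* 0.2 (length s)))  ; the key: the string's length
                       (bt:with-lock-held (lock)
                         (push s output)))))
                  strings))
    (nreverse output)))

;; (sleep-sort-by-length '("pear" "fig" "banana"))
;; => ("fig" "pear" "banana"), assuming the delays suffice to separate the threads
```

The scaling factor merely shortens the waiting time; any monotone mapping of the key to a non-negative delay serves the same purpose.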

Complexity
A formulation of the algorithm's time complexity faces several difficulties arising from its inherent characteristics: the magnitudes of the input elements enter as factors in addition to their number, and the operating system's task scheduler makes an implementation-dependent contribution.

If the last aspect is set aside, a tenable approximation of $$O(I_{max})$$ may be assumed, where $$I_{max}$$ denotes the largest element of the input sequence.

This perspective, nevertheless, ignores the costs imposed by the algorithm's close coupling to its execution context, in particular the operating system. Taking thread scheduling and the per-element delay into account yields the estimate $$O(n\log{n} + I_{max})$$ for an input sequence $$I$$ of $$n$$ elements, the largest of which is denoted $$I_{max}$$.

This estimate combines two components: the common $$O(n\log{n})$$ runtime of the priority queue frequently employed by operating systems to manage threads, and the maximum delay, or “sleeping time”, $$I_{max}$$, required for the corresponding thread to complete.

Practical Usage
Sleep sort can be regarded in two ways: as a subject of purely abstract deliberation and as an instrument of actual utility. Whereas its peculiarities justify the former view, practical application is hindered by its opaque design and its dependence on the executing system.

In their book Algorithms to Live By: The Computer Science of Human Decisions, which describes the application of computer algorithms to human life, Christian and Griffiths acknowledge sleep sort's interesting approach and novelty while raising the issue of its deceptive display of practicality.

Yet, despite these reservations, sleep sort has received at least tentative academic and practical attention.

The algorithm has been employed by Tatabe et al. in the context of ad-hoc networks formed by a large number of autonomous robots, where each robot's waiting time before its transition into a mobile state is computed by sleep sort as an alternative to a random number generation approach.

A more practical application is found in the Puma 5 project, a web server based upon the Rack interface, which employs sleep sort to distribute requests among its workers in accordance with their busyness. To this end, the workers use their current workload as the sleeping time before listening on the socket, so that less busy workers accept requests sooner.