Reverse computation

Reverse computation is a software application of the concept of reversible computing.

Because it offers a possible solution to the heat problem faced by chip manufacturers, reversible computing has been extensively studied in the area of computer architecture. The promise of reversible computing is that heat dissipation in a reversible architecture remains minimal even for very large numbers of transistors. Rather than creating entropy (and thus heat) through destructive operations, a reversible architecture conserves energy by performing only operations that preserve the system state.

The concept of reverse computation is somewhat simpler than reversible computing in that reverse computation is only required to restore the equivalent state of a software application, rather than support the reversibility of the set of all possible instructions. Reversible computing concepts have been successfully applied as reverse computation in software application areas such as database design, checkpointing and debugging, and code differentiation.

Reverse Computation for Parallel Discrete Event Simulation

Based on the successful application of reverse computation concepts in other software domains, Chris Carothers, Kalyan Perumalla and Richard Fujimoto suggested applying reverse computation to reduce state-saving overheads in parallel discrete event simulation (PDES). They define an approach based on reverse event codes (which can be automatically generated), and demonstrate performance advantages of this approach over traditional state saving for fine-grained applications (those with a small amount of computation per event). The key property that reverse computation exploits is that a majority of the operations that modify the state variables are "constructive" in nature: the undo operation for them requires no history, only the most current values of the variables. For example, the operators ++, --, +=, -=, *= and /= belong to this category. Note that the *= and /= operators require special treatment for multiply or divide by zero, and for overflow/underflow conditions. More complex operations, such as circular shift (swap being a special case) and certain classes of random number generation, also belong here.
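As a sketch of the idea (the class and function names below are illustrative, not taken from any actual simulator), a forward event handler built only from constructive operations can be paired with a reverse handler that undoes it using nothing but the current state:

```python
# Sketch of reverse computation for "constructive" operations.
# Each forward event handler has a matching reverse handler that
# inverts it exactly, so no state-saving history is required.

class LPState:
    """State of a logical process: a packet counter and a running delay total."""
    def __init__(self):
        self.packets = 0
        self.total_delay = 0.0

def forward_event(state, delay):
    state.packets += 1           # constructive: undone by -=
    state.total_delay += delay   # constructive: undone by -=

def reverse_event(state, delay):
    # Exact inverse of forward_event, applied in reverse order.
    state.total_delay -= delay
    state.packets -= 1
```

On rollback, the simulator replays reverse handlers in reverse event order, restoring the prior state without any saved copies. (Integer updates invert exactly; floating-point += / -= pairs are exact only when the arithmetic incurs no rounding, one reason the *= and /= operators need special treatment.)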

Operations of the form a = b, and modulo or bitwise computations that lose data, are termed destructive. Typically such operations can be restored only using conventional state-saving techniques. However, many of these destructive operations are a consequence of the arrival of data contained within the event being processed. For example, in the work of Yaun, Carothers, et al. on large-scale TCP simulation, the last-sent time variable records the time stamp of the last packet forwarded by a router logical process, and each arriving packet overwrites it. Replacing the destructive assignment with a swap between the state variable and a field of the event makes the operation reversible.
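The swap trick can be sketched as follows (names are illustrative): instead of overwriting the state variable, the handler exchanges it with the event field, so the old value survives inside the event and a second swap undoes the operation.

```python
# Sketch: making a destructive assignment reversible via swap.
# The event's own field doubles as storage for the overwritten value.

class Event:
    def __init__(self, timestamp):
        self.timestamp = timestamp

class RouterLP:
    def __init__(self):
        self.last_sent = 0.0

def forward_event(lp, ev):
    # Swap instead of plain assignment: the old last_sent value is
    # preserved in the event, so no separate state saving is needed.
    lp.last_sent, ev.timestamp = ev.timestamp, lp.last_sent

def reverse_event(lp, ev):
    # Swapping again restores both the LP state and the event field.
    lp.last_sent, ev.timestamp = ev.timestamp, lp.last_sent
```

Because swap is its own inverse, the forward and reverse handlers are identical here; the cost is that the event retains the displaced value until the event is committed.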

History of Reverse Computation as applied to Parallel Discrete Event Simulation

In 1985 Jefferson introduced the optimistic synchronization protocol known as Time Warp for parallel discrete event simulation. To date, reverse computation has been applied in software only to optimistically synchronized parallel discrete event simulation.

In December 1999, Michael Frank graduated from the University of Florida. His doctoral thesis focused on reverse computation at the hardware level, but included descriptions of both an instruction set architecture and a high-level programming language (R) for a processor based on reverse computation.

In 1998 Carothers and Perumalla, as part of their graduate studies under Richard Fujimoto, published a paper at the Principles of Advanced and Distributed Simulation workshop introducing the technique of reverse computation as an alternative rollback mechanism in optimistically synchronized (Time Warp) parallel discrete event simulations. In 1998, Carothers became an associate professor at Rensselaer Polytechnic Institute. Working with graduate students David Bauer and Shawn Pearce, Carothers integrated the Georgia Tech Time Warp design into Rensselaer's Optimistic Simulation System (ROSS), which supported only reverse computation as the rollback mechanism. Carothers also constructed RC models of BitTorrent at General Electric, as well as numerous network protocols with students (BGP4, TCP Tahoe, Multicast). Carothers created a course on parallel and distributed simulation in which students were required to construct RC models in ROSS.

Around the same time, Perumalla graduated from Georgia Tech and went to work at Oak Ridge National Laboratory (ORNL). There he built the uSik simulator, a PDES engine combining optimistic and conservative protocols. The system could dynamically determine the best protocol for each LP and remap LPs during execution in response to model dynamics. In 2007 Perumalla tested uSik on Blue Gene/L and found that, while the pure Time Warp implementation scaled only to 8K processors, the conservative implementation scaled to all 16K available processors. Benchmarking was performed using PHOLD with a constrained remote event rate of 10%, where event timestamps were drawn from an exponential distribution with a mean of 1.0, with an additional lookahead of 1.0 added to each event. This was the first implementation of PDES on Blue Gene using reverse computation.

From 1998 to 2005 Bauer performed graduate work at RPI under Carothers, focusing solely on reverse computation. He developed the first PDES system based solely on reverse computation, Rensselaer's Optimistic Simulation System (ROSS), for combined shared- and distributed-memory systems. From 2006 to 2009 Bauer worked under E.H. Page at the MITRE Corporation, and in collaboration with Carothers and Pearce pushed the ROSS simulator to the 131,072-processor Blue Gene/P (Intrepid). This implementation was stable for remote event rates of 100% (every event sent over the network). During his time at RPI and MITRE, Bauer developed the network simulation system ROSS.Net, which supports semi-automated experiment design for black-box optimization of network protocol models executing in ROSS. A primary goal of the system was to optimize multiple network protocol models for execution in ROSS. For example, simulation of TCP/IP network nodes was optimized by creating an LP layering structure that eliminates zero-offset-timestamp events passed between TCP and IP protocol LPs on the same simulated machine. Bauer also constructed RC agent-based models of social contact networks for studying the spread of infectious diseases, in particular pandemic influenza, that scale to hundreds of millions of agents, as well as RC models of mobile ad hoc networks implementing mobility (proximity detection) and highly accurate physical-layer electromagnetic wave propagation (the Transmission Line Matrix model).

There has also been a recent push by the PDES community into the realm of continuous simulation. For example, Fujimoto and Perumalla, working with Tang et al., have implemented an RC particle-in-cell model and demonstrated excellent speedup over continuous simulation for models of light as a particle. Bauer and Page demonstrated excellent speedup for an RC Transmission Line Matrix model (P.B. Johns, 1971), modeling light as a wave at microwave frequencies. Bauer also created an RC variant of the SEIR model that achieves enormous improvement over continuous models of infectious disease spread. In addition, the RC SEIR model can model multiple diseases efficiently, whereas the cost of the continuous model grows exponentially with the number of possible disease combinations in the population.

Events

 * RC ’05 – 1st Int’l Workshop on Reversible Computing