User:BryceMW-CA/Drafts/Replay System

A replay system is a method for allowing pipelined processors to make use of a bypass network even when the latency of an instruction is unknown but predictable. This method reduces latency when the prediction is correct but uses more execution resources if the prediction is incorrect. Load instructions are a common target of replay since they generally have a set of known performance levels depending on what level of cache (if any) the requested data resides in.

The first documented instance of a replay system was in the Intel Pentium 4 processor, and it has been implemented in Intel's subsequent processors. Some variation of this system may exist in other superscalar processors, but because it is an implementation detail that only affects performance, it is rarely documented.

Overview
A traditional pipelined CPU has a register fetch stage near the beginning of the pipeline, which reads the instruction's register operands from the physical register file, and a write-back stage near the end, where the register outputs are written back to the physical register file. Since there are multiple clock cycles between those stages, an instruction cannot enter the pipeline immediately after another instruction that produces a value for one of its input registers. Processors that account for pipeline hazards can automatically insert bubbles into the pipeline, holding an instruction at the register fetch stage until all of its inputs have been written back.

In a pipeline design with one execute stage, a bypass bus can be added to allow data produced by the execute stage in one cycle to be used directly as an input to the execute stage on the next cycle, eliminating the latency penalty for back-to-back dependent instructions. More complex bypass buses can be designed for more complex pipelines, and superscalar processors can use bypass networks to route data between execution units.
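The effect of a bypass bus on back-to-back dependent instructions can be sketched with a toy latency model. The stage numbers below assume a classic five-stage pipeline with operands captured entering the execute stage and no forwarding inside the register file; they are illustrative, not taken from any particular processor.

```python
# Toy latency model for a classic five-stage pipeline (IF, ID, EX, MEM, WB).
# Assumes operands are captured entering EX and there is no forwarding
# inside the register file; all numbers are illustrative.

def stall_cycles(distance: int, bypass: bool) -> int:
    """Stall cycles a consumer suffers when it issues `distance`
    instructions after its producer (distance=1 means back-to-back)."""
    # Cycle (counting the producer's IF as cycle 1) at the end of which the
    # value can first feed a dependent instruction's EX stage: right after
    # the producer's EX with a bypass bus, only after its WB without one.
    value_ready = 3 if bypass else 5
    consumer_ex = distance + 3        # consumer's EX cycle with no stalls
    return max(0, value_ready - consumer_ex + 1)
```

Under these assumptions a back-to-back dependent pair suffers no stall with the bypass but a two-cycle stall without it, and the stall shrinks as the instructions are scheduled further apart.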

Not all instructions have latencies that are known at the time instructions are scheduled. These include some mathematical operations, such as division and square root, whose latency depends on their operands, and instructions that depend on state external to the execution units, such as memory and I/O accesses and random number generation. For these instructions, the CPU cannot make use of the bypass bus, since it does not know on which cycle the data will be ready.

In the case of memory reads, any hits to the L1 cache (that also hit the TLB) have a known latency. Hits to the L2 cache may also have a known latency, but hits to the L3 cache and full misses generally cannot be easily predicted. Data reads are often on the critical path of execution, so reducing their latency can have a large impact on a program's execution time. Since an effective caching system will have most memory reads hit the L1 cache, the processor can schedule instructions that depend on a read under the assumption that the read will hit the L1 cache. If the prediction is incorrect and the read misses, the results of the dependent instructions are discarded and the instructions are rescheduled after the read completes. If the L2 cache latency is known, the instructions can instead be rescheduled to try the bypass bus again at the L2 latency.
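The L1-hit speculation described above can be sketched as a toy model. The latency values and the single L2-latency retry are illustrative assumptions, not the policy of any real scheduler.

```python
# Toy model of latency speculation for load-dependent instructions.
# The latencies and the single L2 retry are illustrative assumptions.
L1_LAT, L2_LAT = 4, 12    # assumed load-to-use latencies for L1/L2 hits

def schedule_dependent(issue_cycle: int, actual_latency: int):
    """Return (cycle at which the dependent op executes with valid data,
    number of wasted speculative executions).

    The scheduler optimistically wakes the dependent at the L1 latency;
    if the load missed, it retries at the L2 latency, and finally waits
    for the load to actually complete."""
    wasted = 0
    for guessed_latency in (L1_LAT, L2_LAT):
        if actual_latency <= guessed_latency:
            return issue_cycle + guessed_latency, wasted
        wasted += 1               # executed speculatively with no valid data
    return issue_cycle + actual_latency, wasted
```

An L1 hit costs nothing extra, while a miss to main memory wastes two speculative executions before the dependent instruction finally runs with valid data.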

Processors that decode instructions into multiple micro-operations and schedule them separately can replay only the μops that are dependent on the mispredicted instruction.

History
The first Intel processor to use a replay system was the Pentium 4, a superscalar processor with an unusually long pipeline. With such a long pipeline, the latency cost of not using the bypass network is much greater than in processors with shorter pipelines, but the number of cycles wasted by an inaccurate prediction is also larger. Despite having shorter pipelines, later Intel processors have also included replay systems, since memory latency has become a critical factor in application performance. The cost of a misprediction has also decreased, since these processors have far more execution units than the Pentium 4's two.

The replay system implemented by the Pentium 4 was simple. Instructions went through both the regular pipeline and a queue with the same number of stages, so that if a memory read failed, a signal could be sent to the scheduler to stop issuing instructions from further up the pipeline and instead read from the replay queue. Rather than waiting for the instruction that initially caused the replay to complete, the replayed instructions cycle around the execution pipeline until the memory access succeeds.
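This loop-style replay can be sketched with a toy model: a replayed operation re-enters the execution stage once per trip around a loop whose length matches the pipeline depth. The depth used here is an illustrative value, not the Pentium 4's actual loop length.

```python
# Toy model of loop-style replay: a dependent op re-enters the execution
# stage every PIPE_DEPTH cycles until its load's data is ready.
# PIPE_DEPTH is an illustrative value, not the Pentium 4's real depth.
PIPE_DEPTH = 7

def replay_trips(data_ready_cycle: int) -> int:
    """Number of times a dependent op circles the replay loop before it
    executes with valid data, given that it first reaches the execution
    stage at cycle PIPE_DEPTH."""
    cycle, trips = PIPE_DEPTH, 0
    while cycle < data_ready_cycle:
        trips += 1                # failed pass: go around the loop again
        cycle += PIPE_DEPTH
    return trips
```

A slow memory access can thus cost several wasted passes through the execution units, each occupying an issue slot.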

Later processors keep track of μops that have not yet executed in a reorder buffer and use a scoreboard to track dependencies between them. Using these tracking structures, μops can avoid being rescheduled too early.
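A scoreboard of this kind can be sketched as follows. The class and method names are hypothetical, and real hardware tracks readiness per physical register rather than with Python sets; the sketch only illustrates the idea that a μop is re-dispatched only once all of its producers have completed.

```python
# Hypothetical sketch of scoreboard-based replay: a uop becomes eligible
# for (re)dispatch only once every producer uop it depends on has
# completed, avoiding the blind cycling of the loop-based scheme.

class Scoreboard:
    def __init__(self):
        self.completed = set()    # ids of uops whose results are valid
        self.waiting = {}         # uop id -> set of unmet producer ids

    def add(self, uop, producers):
        """Track a uop and the producers it still has to wait for."""
        self.waiting[uop] = {p for p in producers
                             if p not in self.completed}

    def complete(self, uop):
        """Mark a uop done; return dependents now ready to dispatch."""
        self.completed.add(uop)
        ready = [u for u, deps in self.waiting.items()
                 if uop in deps and not (deps - {uop})]
        for deps in self.waiting.values():
            deps.discard(uop)
        return ready
```

For example, a μop depending on two outstanding loads is returned as ready only when the second load completes, not before.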

[Do we know if AMD or any other processors implement this? I'd assume yes]

Tradeoffs
When instructions must be replayed, they execute again, which consumes power and generates heat. In the case of the Pentium 4, replayed instructions could take twice as many execution slots as other instructions. Replaying an instruction also consumes processor resources more generally, reducing how many other instructions can be speculatively executed. In processors with simultaneous multithreading (such as Intel's hyper-threading), the resources taken by replayed instructions also cannot be used by the sibling threads that share the physical core.

Pentium 4 implementation
In the Pentium 4, the replay system's primary function is to catch operations that have been mistakenly sent for execution by the scheduler. Operations caught by the replay system are re-executed in a loop until the conditions necessary for their proper execution have been fulfilled.

The replay system came about as a result of Intel's pursuit of ever-higher clock speeds. These clock speeds necessitated very long pipelines (up to 31 stages in the Prescott core); in Prescott there are six stages between the scheduler and the execution units. In an attempt to maintain acceptable performance, Intel's engineers had to design the scheduler to be very optimistic.

The scheduler in a Pentium 4 processor is so aggressive that it will send operations for execution without a guarantee that they can be successfully executed. (Among other things, the scheduler assumes that all loads will hit the level 1 data cache.) The most common reason execution fails is that the requisite data is not available, which is itself most likely due to a cache miss. When this happens, the replay system signals the scheduler to stop, then repeatedly executes the failed string of dependent operations until they have completed successfully.

Performance considerations
In some cases, the replay system can have a significant negative impact on performance. Under normal circumstances, the execution units in the Pentium 4 are in use roughly 33% of the time. When the replay system is invoked, it occupies execution units on nearly every available cycle. This wastes power, an increasingly important design metric, but poses no performance penalty for a single thread because the execution units would otherwise sit idle. However, if hyper-threading is in use, the replay system prevents the other thread from utilizing the execution units. This is a major cause of the performance degradation sometimes seen with hyper-threading. In Prescott, the Pentium 4 gained a replay queue, which reduces the time the replay system occupies the execution units.

In other cases, where each thread processes different types of operations, the replay system does not interfere, and hyper-threading can yield a performance increase. This explains why the performance of hyper-threading is application-dependent.