Draft:WKdm

The WKdm algorithm is one of the first in the class of WK virtual memory compression techniques, developed by Paul R. Wilson, Scott F. Kaplan, and colleagues circa 1999. The "dm" in the WKdm acronym stands for "direct mapped" and refers to the direct-mapped hash method used to map uncompressed words in memory to entries in the WKdm algorithm's dictionary.

Motivation
The key insight on which the WKdm algorithm is built is that most high-level programming languages compile to output whose data section(s) exhibit strong regularities with regard to integers and pointers. First, a large proportion of integers and pointers are word-aligned within records, where "word" will henceforth refer to 32 bits. Additionally, most integers hold small values relative to their maximum ranges. Most pointers that lie near one another in memory reference addresses that are themselves close together. Finally, certain data patterns, particularly words of all zeroes, occur frequently, and the algorithm exploits this.

To make use of these regularities, one need only observe that words frequently share many of their high-order bits, either because their values are too small to require the full word width, or because they are pointers whose targets lie close in memory to those of nearby pointers. Words of all zeroes, which occur frequently, can also be compressed very cheaply.
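As a concrete illustration (using made-up addresses, not values from the original paper), two pointers into the same region typically agree in their high-order 22 bits and differ only in the low 10:

```python
HIGH_22 = 0xFFFFFC00            # mask for the top 22 bits of a 32-bit word

p, q = 0x7F3A1C40, 0x7F3A1D08   # hypothetical nearby heap addresses
assert p & HIGH_22 == q & HIGH_22   # same 22-bit prefix
assert p & 0x3FF != q & 0x3FF       # only the low 10 bits differ
```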

Compression
The WKdm algorithm reads one word at a time from an address range, usually a page or pages, and uses a 16-entry direct-mapped dictionary of words to produce compressed output. The output is segregated into four arrays or "segments," which contain, respectively, "tags" (2-bit values indicating the type of (non)match), dictionary indices, unmatched words, and the lower 10 bits of partially matched words. The tag, index, and partial-match values are initially output into bytes or words in their respective segments, and are "packed" once the supply of words in the address range to be compressed is exhausted.

For each word read, the word is mapped to the dictionary using a direct-mapped hash, and the type of (non)match is determined. If a full 32-bit match is found in the dictionary, a 2-bit tag indicating a full-word match is written to the tags segment and the 4-bit index of the matching entry is written to the indices segment. If only the high-order 22 bits match, a different tag is written to the tags segment, the dictionary index of the partial match is output to the indices segment, and the differing 10 low-order bits are recorded in the partial-match segment. If no match is found, the new value is added to the dictionary as well as being emitted to the unmatched-words segment, and a tag signaling this is written to the tags segment. If the read word is all zeroes, only a tag indicating this is output to the tags segment.
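The per-word logic above can be sketched in Python. This is a hedged illustration, not the authors' C implementation: the specific hash function, the 2-bit tag encodings, and the dictionary-update policy on a partial match are all assumptions here.

```python
ZERO, FULL, PARTIAL, MISS = 0, 1, 2, 3   # hypothetical 2-bit tag values

def hash_index(word):
    # Hypothetical direct-mapped hash: bits 10-13 select one of 16 entries.
    return (word >> 10) & 0xF

def compress_word(word, dictionary, tags, indices, lows, misses):
    """Process one 32-bit word, appending to the four output segments."""
    if word == 0:
        tags.append(ZERO)                  # all-zero word: a tag suffices
        return
    i = hash_index(word)
    if dictionary[i] == word:
        tags.append(FULL)                  # full 32-bit match
        indices.append(i)
    elif dictionary[i] >> 10 == word >> 10:
        tags.append(PARTIAL)               # high 22 bits match
        indices.append(i)
        lows.append(word & 0x3FF)          # record the differing low 10 bits
        dictionary[i] = word               # refresh the entry (assumed policy)
    else:
        tags.append(MISS)                  # no match: emit the word itself
        misses.append(word)
        dictionary[i] = word
```

Note that because this hash draws only on high-order bits, a partial match can be found at exactly the dictionary slot the word hashes to, which is what makes a direct-mapped lookup sufficient.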

After all the words in the address range have been read, the tags, indices, and 10-bit partial-match values, which are stored in bytes or words in their segments, are "packed" within their respective segments using bitwise operations to further reduce the compressed size of the data; that is, their bits are made contiguous if a segment is taken to be one large bit vector. Additional steps may be taken; the exact details are implementation specific.
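As a sketch of the packing step for the tags segment: four 2-bit tags fit in each byte. The bit ordering below (first tag in the low-order bits) is an assumption; real implementations differ in such details.

```python
def pack_tags(tags):
    """Pack a list of 2-bit tag values, four per byte, low bits first."""
    out = bytearray()
    for i in range(0, len(tags), 4):
        b = 0
        for j, t in enumerate(tags[i:i + 4]):
            b |= (t & 0x3) << (2 * j)    # slot tag j into bits 2j..2j+1
        out.append(b)
    return bytes(out)

# A page's worth of tags (1024 words) thus shrinks from 1024 bytes
# of one-tag-per-byte storage to 256 bytes.
```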

Decompression
Decompression is straightforward. The tags segment is processed one 2-bit tag at a time, and action is taken depending on the tag's value. If the value indicates a full-word match, the corresponding dictionary index in the indices segment is read, and the word at that index in the dictionary is output. If a partial match is indicated, the corresponding entry in the indices segment is consulted to look up the word supplying the high-order 22 bits, the low 10 bits are read from the partial-match segment, and the reconstructed 32-bit word is written to the uncompressed output. If the current tag indicates that there was no match, the corresponding 32-bit word in the unmatched-words segment is added to the dictionary as well as being emitted as part of the uncompressed output. If the tag indicates a word of all zeroes, a 32-bit zero value is sent to the output.
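The walk over the tags segment can be sketched as below. This is again an illustration under assumptions, not the reference implementation: the tag values, the hash (bits 10-13), and the rule that the dictionary is updated on partial matches and misses must mirror whatever the compressor did, since compressor and decompressor rebuild identical dictionary states in lockstep.

```python
ZERO, FULL, PARTIAL, MISS = 0, 1, 2, 3   # hypothetical 2-bit tag values

def decompress(tags, indices, lows, misses):
    """Rebuild the original words from the four (unpacked) segments."""
    dictionary = [0] * 16                 # must match compressor's start state
    idx, low, miss = iter(indices), iter(lows), iter(misses)
    out = []
    for t in tags:
        if t == ZERO:
            out.append(0)                              # all-zero word
        elif t == FULL:
            out.append(dictionary[next(idx)])          # whole word from dictionary
        elif t == PARTIAL:
            i = next(idx)
            word = (dictionary[i] & ~0x3FF) | next(low)  # splice in low 10 bits
            dictionary[i] = word                       # mirror compressor update
            out.append(word)
        else:                                          # MISS
            word = next(miss)
            dictionary[(word >> 10) & 0xF] = word      # same hypothetical hash
            out.append(word)
    return out
```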

Performance
Tests and real-world performance data show that WKdm compression achieves a compression ratio comparable or superior to LZ-based dictionary compressors. The WKdm algorithm also has much lower overhead than an LZ-class compressor, as it uses a dictionary of only 64 bytes rather than, e.g., 64 kilobytes. Furthermore, because of the simplicity of the algorithm, compression and decompression are usually much faster than with traditional LZ-based compressors.

Variants
The original authors of the WKdm algorithm also developed the so-called "WK4x4" algorithm. This variation uses a 4-way set-associative cache instead of a direct-mapped hash for the dictionary. Its performance, however, was shown to be the same as or worse than WKdm's in most cases.

Matthew Simpson et al. developed a variation on the original WKdm algorithm named "WKS" in which the compression is performed in-place without the need for any temporary arrays or "segments," cutting down on temporary memory requirements. The algorithm also prevents compressed-data expansion by detecting incompressible data. Furthermore, WKS extends what is considered a partial match for a given word.

Notable implementations
WKdm compression has been used by Apple's OS X since version 10.9 Mavericks in 2013, and also appears in the shared source of the Darwin/XNU open-source operating system kernel. WKdm compression has also been implemented in the OLPC Linux kernel. A new implementation of the Linux virtual-memory manager, called "CCache", has also been demonstrated to work with the WKdm algorithm and its variants on a Nokia N800 Internet Tablet running on a TI OMAP processor.