Talk:Lempel–Ziv–Oberhumer

"Requires no memory for decompression." I think this could do with justification/expansion or correction. I haven't come across many algorithms that require zero memory to operate! Perhaps this should instead talk about in-place decompression (decompressing straight into the final buffer), if that is what it means? Sladen 20:56, 6 November 2006 (UTC)


 * Yes, I'm familiar with the algorithm; what that meant to say is that it requires no additional memory for decompression. I'll fix the article. --Trixter 15:54, 7 November 2006 (UTC)
 * "Requires no memory for decompression." means that there is no need to allocate any buffers or anything else. It is enough to have the source data and a destination to place the output. All other calculations can fit into CPU registers, which are often not part of memory or the address space. Hence, Oberhumer's claim is formally valid. There are decompressors that decompress it entirely in CPU registers, using no memory except the source buffer holding the compressed data and the destination buffer holding the uncompressed block. The two can even partially overlap, IIRC, so that the compressed block plus some extra space is transformed into the uncompressed block by the end of the operation. —Preceding unsigned comment added by 91.77.159.176 (talk) 01:40, 4 May 2009 (UTC)

Maybe there could be a more detailed description of the algorithm on the page? 24.6.254.250 05:00, 29 May 2007 (UTC)

Removed "Algorithm is thread safe" because algorithms always are. The thread safety of the reference implementation is already mentioned in the first sentence. jan —Preceding unsigned comment added by 82.83.238.109 (talk) 15:15, 16 October 2007 (UTC)

On what is the claim "On modern architectures, decompression is very fast; in non-trivial cases able to exceed the speed of a straight memory-to-memory copy due to the reduced memory-reads." based? The author's own benchmarks at http://www.oberhumer.com/opensource/lzo/lzodoc.php don't support that speed claim. Skarkkai (talk) 12:23, 5 July 2009 (UTC)

The LZO code itself is a nightmare to read, so if an expert could explain the details of the algorithm in more depth, it would be greatly appreciated, especially how LZO acceptably handles incompressible data. An explanation of why LZO is so much faster in benchmarks than most other *NIX file compressors would also be nice; the most I can discern through strace and the like is that LZO (as used in lzop) is faster because it works on data in fixed-size chunks, drastically reducing system-call counts and eliminating some special-case handling code. I have nothing more than a run of strace to base this on, though, and would love a clear explanation. 75.138.198.88 (talk) 06:11, 25 February 2012 (UTC)
 * I agree. I'd like for someone who understands how LZO works under the hood to tell us how it works. What does LZO do that's different from LZ77 and how does it do it? 97.82.138.203 (talk) 20:03, 18 November 2014 (UTC)

Raising Lazarus 20-year-old bug
Might be of interest: 76.10.128.192 (talk) 17:36, 27 June 2014 (UTC)

External links modified
Hello fellow Wikipedians,

I have just modified one external link on Lempel–Ziv–Oberhumer. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FAQ for additional information. I made the following changes:
 * Added archive https://web.archive.org/web/20120625020414/http://www.lzop.de/ to http://www.oberhumer.com/opensource/lzo/

When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.

Cheers.— InternetArchiveBot  (Report bug) 20:18, 13 May 2017 (UTC)