2/29/2016
LZSSE Results
(LZSSE Latest commit c22a696 ; fetched 03/06/2016 ; test machine Core i7-3770 3.4 GHz ; built with MSVC 2012 x64 ; LZSSE2 and LZSSE8 at optimal parse level 16)
Basically LZSSE is in fact great on text : faster than LZ4, with much better compression.
On binary, LZSSE2 is quite bad, but LZSSE8 is roughly on par with LZ4. It looks like LZ4 is maybe slightly better on binary than LZSSE8, but it's close.
In general, LZ4 does well on files that tend to have long LRL's and long ML's. Files with lots of short (or zero) LRL's and short ML's (eg. text) are bad for LZ4 and not bad for LZSSE.
(LZB16 is Oodle's LZ4 variant; 64k window like LZSSE; LZNIB and LZBLW have large windows)
Some results :

enwik8  LZSSE2 : 100,000,000 -> 38,068,528 : 2866.17 mb/s
enwik8  LZSSE8 : 100,000,000 -> 38,721,328 : 2906.29 mb/s
enwik8  LZB16  : 100,000,000 -> 43,054,201 : 2115.25 mb/s

(LZSSE kills on text)

lzt99   LZSSE2 : 24,700,820 -> 15,793,708 : 1751.36 mb/s
lzt99   LZSSE8 : 24,700,820 -> 15,190,395 : 2971.34 mb/s
lzt99   LZB16  : 24,700,820 -> 14,754,643 : 3104.96 mb/s

(LZSSE2 really slows down on the heterogeneous binary file lzt99)
(LZSSE8 does okay, but slightly worse than LZ4/LZB16 in size & speed)

mozilla LZSSE2 : 51,220,480 -> 22,474,508 : 2424.21 mb/s
mozilla LZSSE8 : 51,220,480 -> 22,148,366 : 3008.33 mb/s
mozilla LZB16  : 51,220,480 -> 22,337,815 : 2433.78 mb/s

(all about the same size on silesia mozilla)
(LZSSE8 definitely fastest)

lzt24   LZB16  : 3,471,552 -> 2,379,133 : 4435.98 mb/s
lzt24   LZSSE8 : 3,471,552 -> 2,444,527 : 4006.24 mb/s
lzt24   LZSSE2 : 3,471,552 -> 2,742,546 : 1605.62 mb/s
lzt24   LZNIB  : 3,471,552 -> 1,673,034 : 1540.25 mb/s

(lzt24 (a granny file) is really terrible for LZSSE2; it's as slow as LZNIB)
(LZSSE8 fixes it though; it almost catches LZB16, but not quite)

------------------

Some more binary files. LZSSE2 is not good on any of these, so omitted.

win81   LZB16  : 104,857,600 -> 54,459,677 : 2463.37 mb/s
win81   LZSSE8 : 104,857,600 -> 54,911,633 : 3182.21 mb/s

all_dds LZB16  : 79,993,099 -> 47,683,003 : 2577.24 mb/s
all_dds LZSSE8 : 79,993,099 -> 47,807,041 : 2607.63 mb/s

AOW3_Skin_Giants.clb
        LZB16  : 7,105,158 -> 3,498,306 : 3350.06 mb/s
        LZSSE8 : 7,105,158 -> 3,612,433 : 3548.39 mb/s

baby_robot_shell.gr2
        LZB16  : 58,788,904 -> 32,862,033 : 2968.36 mb/s
        LZSSE8 : 58,788,904 -> 33,201,406 : 2642.94 mb/s

LZSSE8 vs LZB16 is pretty close.
LZSSE8 is maybe more consistently fast; its decode speed has less variation than LZ4's. The slowest LZSSE8 was all_dds at 2607 mb/s ; LZ4 went down to 2115 mb/s on enwik8, and even excluding text it was down to 2433 mb/s on mozilla. LZB16/LZ4 had a slightly higher max speed (on lzt24).
Conclusion :
On binary-like data, LZ4 and LZSSE8 are pretty close. On text-like data, LZSSE8 is definitely better. So for general data, it looks like LZSSE8 is a definite win.
LZSSE Notes
1. SIMD processing of control words.
All LZ-Bytewises do a little bit of shifts and masks to pull out fields and flags from the control word. Stuff like lrl = (control>>4) and numbytesout = lrl+ml;
This work is pretty trivial, and it's fast already in scalar. But if you can do it N at a time, why not.
A particular advantage here is that SSE instruction sets are somewhat better at branchless code than scalar; it's a bit easier to make masks from conditions and such-like, so that can be a win. It also helps if you're front-end-bound, since decoding one instruction to do an 8-wide shift is less work than decoding 8 instructions. (it's almost impossible for a data compressor to be back-end bound on simple integer math ops, there are just so many execution units, so that's rare; it's much more likely to hit instruction decode limits)
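For concreteness, a minimal sketch of that kind of wide field extraction (my illustration, not LZSSE's actual code; the function name and in/out layout are made up) :

#include <emmintrin.h>

// extract both nibble fields from 16 control bytes at once (SSE2)
static void extract_nibbles16(const unsigned char * controls,
                              unsigned char * lrl16, unsigned char * ml16)
{
    __m128i c    = _mm_loadu_si128((const __m128i *)controls);
    __m128i mask = _mm_set1_epi8(0x0F);
    // low nibble : control & 0xF
    _mm_storeu_si128((__m128i *)ml16, _mm_and_si128(c, mask));
    // high nibble : control >> 4 ; there's no 8-bit shift in SSE2, so
    // shift 16-bit lanes and mask off the bits that crossed byte boundaries
    _mm_storeu_si128((__m128i *)lrl16,
                     _mm_and_si128(_mm_srli_epi16(c, 4), mask));
}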
2. Using SSE in scalar code to splat out match or LRL.
LZSSE parses the control words SIMD (wide) but the actual literal or match copy is scalar, in the sense that only one is done at a time. It still uses SSE to fetch those bytes, but in a scalar way. Most LZ's can do this (many may do it already without being aware of it; eg. if you use memcpy(,16) you might be doing an SSE splat).
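A minimal sketch of such a scalar-but-SSE copy (assumes, as these decoders require, at least 16 bytes of slack past the true end of both buffers) :

#include <emmintrin.h>

// copy up to 16 literal bytes with one unaligned SSE load/store
static void copy16(unsigned char * to, const unsigned char * from)
{
    _mm_storeu_si128((__m128i *)to,
                     _mm_loadu_si128((const __m128i *)from));
}

// usage : copy16(out,comp); out += lrl; comp += lrl;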
3. Limited LRL and ML in control word with no excess. Outer looping on control words only, no looping on LRL/ML.
To output long LRL's, you have to output a series of control words, each with short LRL. To output long ML's, you have to output a series of control words.
This I think is the biggest difference in LZSSE vs. something like LZ4. You can make an LZ4 variant that works like this, and in fact it's an interesting thing to do, and is sometimes fast. In an LZ4 that does strictly alternating LRL-ML, to do this you need to be able to send ML==0 so that long literal runs can be continued as a sequence of control words.
Traditional LZ4 decoder :
{
lrl = control>>4;
ml = (control&0xF)+4;
off = get 2 bytes; comp += 2;
// get excess if flagged with 0xF in control :
if ( lrl == 0xF ) lrl += *comp++; // and maybe more
if ( ml == 19 ) ml += *comp++; // and maybe more
copy(out,comp,lrl); // <- may loop on lrl
out += lrl; comp += lrl;

copy(out,out-off,ml); // <- may loop on ml
out += ml;
}
non-looping LZ4 decoder : (LZSSE style)
{
lrl = control>>4;
ml = control&0xF; // <- no +4 , 0 possible
off = get 2 bytes; comp += 2; // <- * see below
// no excess

copy(out,comp,16); // <- unconditional 16 byte copy, no loop
out += lrl; comp += lrl;

copy(out,out-off,16); // <- unconditional 16 byte copy, no loop
out += ml;
}
(* = the big complication in LZSSE comes from trying to avoid sending the offset again when you're continuing a match;
something like : if the previous control word's ml == 0xF, that means a continuation, so don't read an offset)
(ignoring the issue of overlapping matches for now)
This non-looping decoder is much less branchy, no branches for excess lens, no branches for looping copies. It's much faster than LZ4 *if* the data doesn't have long LRL's or ML's in it.
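To be concrete about the overlap caveat above : when off < 16 an unconditional 16-byte copy reads bytes it hasn't written yet. A sketch of the simplest possible handling (a bytewise fallback; not necessarily what LZSSE actually does) :

#include <string.h>

static void copy_match16(unsigned char * out, int off)
{
    if ( off >= 16 )
        memcpy(out, out-off, 16);    // fast path : one move, no loop
    else
        for (int i=0; i<16; i++)     // overlap : must go bytewise
            out[i] = out[i-off];
}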
4. Flagged match/LRL instead of strictly alternating LRL-ML. This is probably a win on data with lots of short matches, where matches often follow matches with no LRL in between, like text.
If you have to branch for that flag, it's a pretty huge speed hit (see, eg. LZNIB). So it's only viable in a fast LZ-Bytewise if you can do it branchless like LZSSE.
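A sketch of how such a branchless flag can work (illustrative names, not LZSSE's code) : turn the threshold compare into an all-ones/all-zeros mask and select the copy source with bit ops :

#include <stdint.h>

// make an all-ones/all-zeros mask from (nibble >= T) and select the
// copy source without a branch
static const unsigned char * select_src(const unsigned char * lit_src,
                                        const unsigned char * match_src,
                                        int nibble, int T)
{
    uintptr_t is_match = (uintptr_t)0 - (uintptr_t)(nibble >= T); // 0 or ~0
    return (const unsigned char *)(
        ((uintptr_t)lit_src   & ~is_match) |
        ((uintptr_t)match_src &  is_match) );
}
// then do one unconditional 16-byte copy from the selected source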
Bit Input Notes
2. The best case for bit input is when the length you consume is not very variable. eg. the Huffman case, at 1-12 bits, has a reasonably low limit. The worst case is when the consumed length has a high max and is quite random. Then you can't avoid refill checks, and (if you do the branchy case) they're quite unpredictable.
3. If your refills have a large maximum, but the average is low, branchy can be faster than branchless. Because the maximum is high (eg. maybe a max of 32 bits consumed), you can only do one decode op before checking refill. Branchless will then always refill. Branchy can skip the refill if the average is low - particularly if it's predictably low.
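For reference, a minimal sketch of the standard branchless always-refill being compared against here (assumes little-endian byte order and at least 8 bytes of input slack; the struct layout is my own) :

#include <stdint.h>
#include <string.h>

typedef struct { uint64_t bits; int count; const uint8_t * ptr; } BitIn;

// branchless : always load 8 bytes, splice in as many as fit,
// leaving 56..63 valid bits; no refill check, no branch
static void refill(BitIn * b)
{
    uint64_t next;
    memcpy(&next, b->ptr, 8);          // unconditional unaligned load
    b->bits  |= next << b->count;      // splice new bits above valid ones
    b->ptr   += (63 - b->count) >> 3;  // advance by whole bytes consumed
    b->count |= 56;                    // now always 56..63 bits valid
}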
4. If using branchy refills, try to make them predictable. An interesting idea is to use multiple bit buffers so that each consumption spot gets its own buffer and can create a pattern. A very specific case is consuming a fixed number of bits at each site, something like :
bitbuffer
if ( random )
{
consume 4 bits from bitbuffer
if bitbuffer out -> refill
}
else
{
consume 6 bits from bitbuffer
if bitbuffer out -> refill
}
these branches (for bitbuffer refill) will be very random because of the two different sites that consume different amounts. However, this :
bitbuffer1, bitbuffer2
if ( random )
{
consume 4 bits from bitbuffer1
if bitbuffer1 out -> refill
}
else
{
consume 6 bits from bitbuffer2
if bitbuffer2 out -> refill
}
these branches for refill are now perfectly predictable in a pattern (they are taken every Nth time exactly).
5. Bit buffer work is slow, but it's "mathy". On modern processors that are typically math-starved, it can be cheap *if* you have enough ILP to fully use all the execution units. The problem is a single bit buffer on its own is super serial work, so you need multiple bit buffers running simultaneously, or enough other work.
For example, it can actually be *faster* than byte-aligned input (using something like "EncodeMod") if the byte-input does a branch, and that branch is unpredictable (in the bad 25-75% randomly taken range).
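A sketch of the multiple-bit-buffers idea from the previous point : interleave two independent streams so their serial dependency chains can overlap (decode_one and the two-stream split are assumptions for illustration, not from the post) :

#include <stdint.h>

typedef struct BitIn BitIn;             // any bit reader state, as above
extern uint32_t decode_one(BitIn * b);  // stand-in for any serial decode

// each stream's shift/mask/refill chain is serial, but the two chains
// don't depend on each other, so the core can execute them in parallel
static void decode_interleaved(uint8_t * out, int n, BitIn * s0, BitIn * s1)
{
    for (int i = 0; i+1 < n; i += 2)
    {
        out[i]   = (uint8_t)decode_one(s0);
        out[i+1] = (uint8_t)decode_one(s1);
    }
}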
2/17/2016
LZSSE
Some good stuff.
Basically this is a nibble control word LZ (like LZNIB). The nibble has a threshold value T : < T is an LRL (literal run len), >= T is a match length. LZSSET are the various threshold variants. As Conor noted, ideally T would be variable, optimized per file (or even better, per quantum) to adapt to different data better.
LZSSE has a 64k window (like LZ4/LZB16) but unlike them supports MML (minimum match length) of 3. MML 3 typically helps compression a little, but in scalar decoders it really hurts speed.
I think the main interesting idea (other than implementation details) is that by limiting the LRL and ML, with no excess/overflow support (ML overflow is handled with continue-match nibbles), you can do a non-looping output of 8/16 bytes. You get long matches or LRL's by reading more control nibbles.
That is, a normal LZ actually has a nested looping structure :
loop on controls from packed stream
{
control specifies lrl/ml
loop on lrl/ml
{
output bytes
}
}
LZSSE only has *one* outer loop on controls.
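A sketch of that flattened structure (the nibble accessor, threshold T, offset handling, and length mapping are stand-ins, not LZSSE's actual format; copy16 as in note 2 above) :

while ( nibbles_remain )
{
    int nib = get_nibble();            // hypothetical accessor
    if ( nib < T )                     // literal run of nib bytes
    {
        copy16(out, comp);             // unconditional, no inner loop
        out += nib; comp += nib;
    }
    else                               // match ; len/offset coding elided
    {
        int ml = match_len_from(nib);  // hypothetical mapping
        copy16(out, out - off);        // unconditional, no inner loop
        out += ml;
    }
    // long runs & matches are just several consecutive control nibbles
}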
There are some implementation problems at the moment. The LZSSE2 optimal parse encoder is just broken. It's unusably slow and must have some bad N^2 degeneracy. This can be fixed, it's not a problem with the format.
Another problem is that LZSSE2 expands incompressible data too much. Real world data (particularly in games) often has incompressible data mixed with compressible. The ideal fix would be to have the generalized LZSSET and choose T per quantum. A simpler fix would be to do something like cut files into 16k or 64k quanta, and to select the best of LZSSE2/4/8 per-quantum and also support uncompressed quanta to prevent expansion.
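A sketch of that simpler per-quantum fix (the coder signature and header byte are made up; a real version would size the scratch buffer from each coder's worst-case output bound) :

#include <stddef.h>
#include <string.h>

// stand-in signature, not a real API : returns compressed size in dst
typedef size_t (*CompressFn)(const void * src, size_t len, void * dst);

// try each coder on one quantum, keep the smallest, fall back to a
// stored (raw) quantum so incompressible data never expands past len+1
static size_t compress_quantum(const void * src, size_t len, void * dst,
                               const CompressFn * coders, int ncoders)
{
    unsigned char tmp[65536 + 1024]; // slack size is a guess
    size_t best = len;               // stored-raw is the baseline
    int best_id = 0;                 // 0 = stored
    for (int i = 0; i < ncoders; i++)
    {
        size_t s = coders[i](src, len, tmp);
        if ( s < best )
        {
            best = s; best_id = i+1;
            memcpy((unsigned char *)dst + 1, tmp, s);
        }
    }
    if ( best_id == 0 ) memcpy((unsigned char *)dst + 1, src, len);
    ((unsigned char *)dst)[0] = (unsigned char)best_id; // coder select byte
    return best + 1;
}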
I will take this moment to complain that the test sets everyone is using are really shit. Not Conor's fault, but enwiks and Silesia are grossly not at all representative of data that we see in the real world. Silesia is mostly text and weird highly-compressible data; the file I like best in there for my own testing is "mozilla" (though BTW mozilla also contains a bunch of internal zlib streams; it benefits enormously from precomp). We need a better test corpus!!!
2/11/2016
String Match Stress Test Files
string_match_stress_tests.7z (60,832 bytes)
Consists of :
paper1_twice
stress_all_as
stress_many_matches
stress_search_limit
stress_sliding_follow
stress_suffix_forward
An optimal parse matcher (matching at every position in each file against all previous bytes within that file) should get these average match lengths : (min match length of 4, and no matches searched for in the last 8 bytes of each file)
paper1_twice          : 13294.229727
stress_all_as         : 21119.499148
stress_many_matches   :    32.757760
stress_search_limit   :   823.341331
stress_sliding_follow :   199.576550
stress_suffix_forward :  5199.164464

total ml : 2896554306
total bytes : 483870
Previous post on the same test set : 09-27-11 - String Match Stress Test
And these were used in the String Match Test post series, though there I used "twobooks" instead of "paper1_twice".
These stress tests are designed to make imperfect string matchers show their flaws. Correct implementations of Suffix Array or Suffix Tree searchers should find this total match length without ever going into bad N^2 slowdowns (their speed should be roughly constant). Other matchers like hash-link, LzFind (hash-btree) and MMC will either find lower total match length (due to an "amortize" step limit) or will fall into bad N^2 (or worse!) slowdowns.
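For checking a matcher against these totals, a brute-force reference is enough (this is my reconstruction of the stated methodology : MML 4, overlapping matches allowed, last 8 bytes skipped; it's O(N^2) or worse, so only usable on small files like these) :

#include <stdint.h>
#include <stddef.h>

static double average_match_length(const unsigned char * buf, size_t n)
{
    uint64_t total = 0;
    size_t   count = 0;
    for (size_t pos = 1; pos + 8 <= n; pos++, count++)
    {
        size_t best = 0;
        for (size_t m = 0; m < pos; m++)   // all previous positions
        {
            size_t len = 0;
            while (pos + len < n && buf[m + len] == buf[pos + len]) len++;
            if (len > best) best = len;
        }
        if (best >= 4) total += best;      // min match length of 4
    }
    return count ? (double)total / (double)count : 0.0;
}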