TEST: Maximum compression, options -m7 -s
PC: Intel Core i7 920 @ 2.67 GHz, 4 cores / 8 threads (4 + 4 HT)
ZCM Archiver v.0.70d Ultra
compress.. c:/test/MSO97.DLL 3782416 to 1521055
compress.. c:/test/FP.LOG 20617071 to 381773
compress.. c:/test/rafale.bmp 4149414 to 723859
compress.. c:/test/english.dic 4067439 to 537835
compress.. c:/test/ohs.doc 4168192 to 743375
compress.. c:/test/AcroRd32.exe 3870784 to 1126931
compress.. c:/test/vcfiu.hlp 4121418 to 504828
compress.. c:/test/A10.jpg 842468 to 825680
compress.. c:/test/FlashMX.pdf 4526946 to 3662739
compress.. c:/test/world95.txt 2988578 to 441060
Compressed 53134726 bytes to 10469135 bytes
Kernel Time = 0.390 = 00:00:00.390 = 2%
User Time = 15.537 = 00:00:15.537 = 97%
Process Time = 15.927 = 00:00:15.927 = 99%
Global Time = 15.928 = 00:00:15.928 = 100%
ZCM Archiver v.0.80, Extreme.
compress.. c:/test/MSO97.DLL 3782416 to 1512807
compress.. c:/test/FP.LOG 20617071 to 369235
compress.. c:/test/rafale.bmp 4149414 to 712185
compress.. c:/test/english.dic 4067439 to 531629
compress.. c:/test/ohs.doc 4168192 to 741504
compress.. c:/test/AcroRd32.exe 3870784 to 1118344
compress.. c:/test/vcfiu.hlp 4121418 to 500889
compress.. c:/test/A10.jpg 842468 to 821699
compress.. c:/test/FlashMX.pdf 4526946 to 3658691
compress.. c:/test/world95.txt 2988578 to 441206
Compressed 53134726 bytes to 10408189 bytes
Kernel Time = 0.327 = 00:00:00.327 = 2%
User Time = 15.818 = 00:00:15.818 = 97%
Process Time = 16.146 = 00:00:16.146 = 99%
Global Time = 16.162 = 00:00:16.162 = 100%
ZCM Archiver v.0.88. Unreal version.
compress.. c:/test/MSO97.DLL 3782416 to 1527321
compress.. c:/test/FP.LOG 20617071 to 353741
compress.. c:/test/rafale.bmp 4149414 to 680688
compress.. c:/test/english.dic 4067439 to 505725
compress.. c:/test/ohs.doc 4168192 to 740605
compress.. c:/test/AcroRd32.exe 3870784 to 1118127
compress.. c:/test/vcfiu.hlp 4121418 to 498896
compress.. c:/test/A10.jpg 842468 to 822698
compress.. c:/test/FlashMX.pdf 4526946 to 3660541
compress.. c:/test/world95.txt 2988578 to 439241
Compressed 53134726 bytes to 10347583 bytes
Kernel Time = 0.358 = 00:00:00.358 = 2%
User Time = 13.587 = 00:00:13.587 = 97%
Process Time = 13.946 = 00:00:13.946 = 99%
Global Time = 13.963 = 00:00:13.963 = 100%
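As a sanity check, the per-file numbers in the v0.88 run sum exactly to the reported total; a quick sketch (sizes copied from the log above):

```python
# Cross-check the ZCM v0.88 run: per-file sizes taken from the log above.
original = [3782416, 20617071, 4149414, 4067439, 4168192,
            3870784, 4121418, 842468, 4526946, 2988578]
compressed = [1527321, 353741, 680688, 505725, 740605,
              1118127, 498896, 822698, 3660541, 439241]

total_in = sum(original)     # matches "Compressed 53134726 bytes"
total_out = sum(compressed)  # matches "to 10347583 bytes"
ratio = total_out / total_in

print(total_in, total_out, f"{ratio:.4f}")
```

The overall ratio works out to roughly 0.195, i.e. about 5:1 on this corpus.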
Last edited by Nania Francesco; 22nd June 2013 at 19:45.
Results on LTCB. http://mattmahoney.net/dc/text.html#1629
A couple of curious things.
1. -m7 -t1 runs faster (and compresses better) than -m7 -t2 on my test machine with 2 cores. Task manager shows 3 processes running. I guess that only 2 are compressing and a third is polling, because it does a lot of input (many GB) and uses 1/3 of the CPU but very little memory.
2. If I create an archive with 1 file with -t2, then the list command shows 3 copies of the file.
Thanks, Matt, for the tests!
Honestly, I already tried to improve the multiprocessing system in this version, but I couldn't get it to work well and fast. As you understood, this version uses one master and one slave per CPU:
option -t4
master [1 cpu]
slave [1 cpu] slave [1 cpu] slave [1 cpu] slave [1 cpu]
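ZCM is closed source, so the following is only an illustrative guess at the master/slave scheme diagrammed above, not the actual implementation; Python's multiprocessing and zlib stand in for the real codec:

```python
# Hypothetical sketch of a master/slave layout like the one described:
# the master splits the input into chunks and one worker per CPU
# "compresses" its chunk (zlib stands in for the real codec).
import multiprocessing as mp
import zlib

def worker(chunk: bytes) -> bytes:
    return zlib.compress(chunk)  # placeholder for the real per-CPU coder

def compress_parallel(data: bytes, nthreads: int) -> list:
    size = max(1, len(data) // nthreads)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with mp.Pool(nthreads) as pool:       # the "slaves"
        return pool.map(worker, chunks)   # the master blocks here until all finish

if __name__ == "__main__":
    blocks = compress_parallel(b"abc" * 100000, 4)
    restored = b"".join(zlib.decompress(b) for b in blocks)
    assert restored == b"abc" * 100000
```

With pool-based joining the master blocks until every slave exits, which avoids the kind of busy-polling watcher process observed above.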
Last edited by Nania Francesco; 23rd June 2013 at 14:42.
BMP Test
http://img.photographyblog.com/revie...s_1100d_01.jpg
http://img.photographyblog.com/revie...s_1100d_02.jpg
http://img.photographyblog.com/revie...s_1100d_03.jpg
http://img.photographyblog.com/revie...s_1100d_04.jpg
http://img.photographyblog.com/revie...s_1100d_05.jpg
http://img.photographyblog.com/revie...s_1100d_06.jpg
http://img.photographyblog.com/revie...s_1100d_07.jpg
http://img.photographyblog.com/revie...s_1100d_08.jpg
converted to BMP 24bit
BIM - Lossless image compressor, v0.02 - Ilya Muravyov
canon_eos_1100d_01.bmp 36500022->10772213
canon_eos_1100d_02.bmp 36500022->9936109
canon_eos_1100d_03.bmp 36500022->12666883
canon_eos_1100d_04.bmp 36500022->11215440
canon_eos_1100d_05.bmp 36500022->11126009
canon_eos_1100d_06.bmp 36500022->12522454
canon_eos_1100d_07.bmp 36500022->7122128
canon_eos_1100d_08.bmp 36500022->13259271
292000176 to 88620507 bytes ENC=19.410 sec. DEC=19.892 sec.
BMF lossless image compressor, v.2.01 by Dmitry Shkarin - Option Q1
canon_eos_1100d_01.bmp 36500022 to 9002508
canon_eos_1100d_02.bmp 36500022 to 8296308
canon_eos_1100d_03.bmp 36500022 to 10408280
canon_eos_1100d_04.bmp 36500022 to 9730828
canon_eos_1100d_05.bmp 36500022 to 9462132
canon_eos_1100d_06.bmp 36500022 to 11030500
canon_eos_1100d_07.bmp 36500022 to 5971404
canon_eos_1100d_08.bmp 36500022 to 11574348
292000176 to 75476308 bytes ENC=51.091 sec. DEC=15.462 sec.
ZCM Archiver v.0.88. - Nania Francesco Antonio option -m0 -s
canon_eos_1100d_01.bmp 36500022 to 7779960
canon_eos_1100d_02.bmp 36500022 to 7021940
canon_eos_1100d_03.bmp 36500022 to 9359326
canon_eos_1100d_04.bmp 36500022 to 8216059
canon_eos_1100d_05.bmp 36500022 to 8152713
canon_eos_1100d_06.bmp 36500022 to 9832086
canon_eos_1100d_07.bmp 36500022 to 4731772
canon_eos_1100d_08.bmp 36500022 to 10355643
292000176 to 65449499 bytes ENC= 49.421 sec. DEC=46.629 sec.
BMF lossless image compressor, v.2.01 by Dmitry Shkarin - Option Q1 -S
canon_eos_1100d_01.bmp 36500022-> 7211984
canon_eos_1100d_02.bmp 36500022-> 6530116
canon_eos_1100d_03.bmp 36500022-> 8424468
canon_eos_1100d_04.bmp 36500022-> 7700868
canon_eos_1100d_05.bmp 36500022-> 7607396
canon_eos_1100d_06.bmp 36500022-> 9426060
canon_eos_1100d_07.bmp 36500022-> 4332072
canon_eos_1100d_08.bmp 36500022-> 9822300
292000176 to 61055264 bytes ENC=188.013 sec. DEC=109.932 sec.
The links to the JPGs do not work... you can take free images, for instance, from Wikimedia Commons (those links always work ;)); widescreen desktop images, for example, are here:
http://commons.wikimedia.org/wiki/Ca...op_backgrounds
The site also has gigapixel-scale images (of paintings) available.
However, what's important here is that you have been able to further optimize a compressor that had already proved to be one of the most interesting for compression and speed... and now ZCM is the most promising compressor, as clearly emerges from the last update of Stephan Busch's Squeeze Chart (http://www.squeezechart.com/)...
Really, a wonderful result, Francesco!
Nania Francesco (25th June 2013)
Francesco,
can you anticipate your plans or the roadmap for ZCM?
Will ZCM have a GUI?
Is ZCM meant to be used just for testing, or will a stable release be made available in the future?
Will it be closed source, open source or ... ?
Nania Francesco (27th June 2013)
Honestly, my plan is to keep refining it in terms of speed and compression. I will surely release a GUI version. As for open source, I don't think it is a viable option, especially considering it would perhaps produce a series of clones, which does not match my beliefs!
ZGish (28th June 2013)
I think more people would use zcm if it were open source. People want to know that they can decompress their archives later, that the software won't disappear. People use zip, gzip, 7zip, and bzip2 even though there are better compressors because these are open source and in stable, well tested formats.
Even if the format is not stable, like PAQ, making it open source means that others can make improvements to it that are open source too (if you use GPL). PAQ has about 20 authors. I could not have done it all myself.
ZGish (28th June 2013)
It's good to hear that ZCM development will continue. I would obviously like open-source code, and I completely agree with Matt: it's easier for open-source code to become popular and live much longer (and the author will not become less popular; maybe even more so if clones appear)... however, that decision obviously belongs entirely to the author, and depends on his plans and strategy for the future... and a little luck, maybe...
There are many other gurus of compression here, many authors.... you can share and get their point of view.
Mat Chartier (author of MCM) is also asking about ZCM in the thread "New CM compressor in development"...
I frequently wonder how many improvements are lost when code is not released, and how much progress would be gained if more people shared their work.
I believe many people here and their programs would deserve to become more popular, and ZCM is for sure one of them. ZPAQ is another. And others too...
Popularity is a great gift given by people: to gain it, maybe an equally valuable gift must be given back to the people...
(besides having a lot of luck, too, but for that it is important to think positively and laterally)...
Will we (dummy users) ever see a Joint Compression Force team,
for instance (for CM) Matt (ZPAQ) + Francesco (ZCM) + Mat (MCM) + ..., to join efforts, avoid dispersion, and give a quantum leap to compression results?
Last edited by ZGish; 28th June 2013 at 12:06.
Matt Mahoney (28th June 2013),Nania Francesco (28th June 2013)
Quote Originally Posted by Matt Mahoney: making it open source means that others can make improvements to it that are open source too (if you use GPL).
Enabling others to make improvements that are open source is not specific to the GPL; it's the core of FOSS.
The key point of the GPL is rather to prevent others from making improvements unless they do so in a specific way (with 'specific' differing between GPL versions, which is why GPL v3 doesn't mix with v2), while trying to stay close to a subset of 'open source'.
Last edited by m^2; 28th June 2013 at 19:46.
The main reason I use GPL is to prevent commercial use in closed source products. If they don't want to make their products open source, then they have to buy a separate license, or hire me. But I would have no problem with companies using my code in open source products.
Sometimes I make exceptions. I made libzpaq public domain, as any reference implementation of a standard should be.
When ZCM is open-sourced, I will have a chance to play with memory prefetching :]
I am updating the mingw benchmark at http://mattmahoney.net/dc/zpaq.html from zcm 0.80 to 0.88. Here is a comparison. (2.0 GHz T3200, 2 cores, 3 GB, Vista 32 bit). zcm v0.88 is faster with almost the same compression. But 1 thread is still faster than 2 on a 2 core machine, and update in solid mode is faster than separate files.
Code:
Archiver           Create mingw44  Time   Add mingw45  Time   Extract (CPU)  Free  Open  Spec
--------           --------------  -----  -----------  -----  -------------  ----  ----  ----
zcm v0.80
zcm -r (Win32)         45,004,528  276.8  119,374,717  313.3          298.4  Yes   No    No
zcm -r -m6 -t2 -s      38,021,034   94.2   78,026,436  118.2          175.3
zcm -r -m6 -t1 -s      36,174,231   68.5   73,968,725   93.2          161.0
zcm -r -m7 -t1 -s      36,115,632   86.0   73,846,883  101.6          176.9
zcm v0.88
zcm -r (Win32)         46,176,811  277.1  121,883,071  352.8          291.7  Yes   No    No
zcm -r -m6 -t2 -s      38,327,556   62.2   91,216,921   79.2          153.8
zcm -r -m6 -t1 -s      36,182,872   63.0   73,986,771   83.4          154.0
zcm -r -m7 -t1 -s      36,118,689   66.3   73,852,323   85.7          158.5
Nania Francesco (1st July 2013)
zcm 0.70d and 0.88 do not work in multi-threaded mode on my Core i5 with 4 GB of memory. The encoder works correctly without the -t switch and with -t1. Otherwise it doesn't even create the file; it shows constant memory and CPU usage (25-27%). Verbose mode shows the coder isn't working - the cursor just blinks until I kill the process. I tested it on Windows 7 and Windows XP SP2. Where might the problem be?
Sorry for my English.
ZCM 0.90 released!
For extra compression!
I have spent more time and work on the processing of data streams, so don't expect huge single-file gains but a global improvement!
News:
- New CM core!
- Added data recognition
- Faster compression/decompression!
- More stable!
- More bugs fixed
Only from:
http://heartofcomp.altervista.org/
@Pedofinder
option " -t2 " enable 2 core compression
option " -t3 " enable 3 core compression
.... etc
option " -t0 " enable All core compression
Last edited by Nania Francesco; 4th May 2014 at 02:19.
Bulat Ziganshin (4th May 2014),Fallon (4th May 2014),fcorbelli (4th May 2014),Stephan Busch (4th May 2014),surfersat (5th May 2014)
Great!
Quote Originally Posted by Nania Francesco: "ZCM 0.90 released!" [...]
Maximumcompression's SFC compressed to 10,411,246!
More tests later :)
Nania Francesco (4th May 2014)
Looks great, but I suggest adding (at least) a timer to show the elapsed time, just to have a quick comparison (and while you're at it, also print the parameters used; that makes it easier to keep track of them for testing).
An ETA, or at least some progress feedback, wouldn't hurt either.
On MySQL ASCII-dump files it seems as efficient as ZPAQ, or more (and far more than arc in the .666 version).
The working set is (relatively) low; I'll run some further tests against the "champions" (nz first of all).
Let me report a small bug: if you run (say, from drive z:\)
zcm l prova
it closes with
"n.xxx files in archive z:\\prova"
with two backslashes.
Last edited by fcorbelli; 4th May 2014 at 16:40.
Nania Francesco (4th May 2014)
Thanks, fcorbelli, for the comments! This version will be updated soon: I have made many changes to the structure of the compressor and it needs some adjustments!
To see the progress you can use the "-v" option.
Last edited by Nania Francesco; 4th May 2014 at 17:14.
Mmmhhh... extensive bug-fix needed...
zcm l => crash
Quote Originally Posted by fcorbelli: Mmmhhh... extensive bug-fix needed... zcm l => crash
Thanks for the report... but does "zcm l" crash even with an option and a file?
Last edited by Nania Francesco; 4th May 2014 at 23:24.
Tested on 10GB (system 4). All files compare OK including .wav which failed for zcm v0.88. http://mattmahoney.net/dc/10gb.html
I used -t1 because when I use -t4 (the machine has 4 threads) the program sometimes continues to run forever after compression has finished. This bug was in earlier versions too and happens both in Windows (32-bit) and under Wine 1.6 in Ubuntu. It does not happen every time. What happens is that 5 processes are started - I guess 4 workers that divide up the compression and one that waits for the others to finish. Sometimes this one does not detect when they finish, so it waits forever in a busy loop. Also, compression is worse with -t4. flashzip has the same bug.
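The busy-loop symptom described here is typical of a watcher that polls a completion flag instead of blocking; joining the workers cannot miss the finish. A minimal illustration (not ZCM's actual code, which is closed source):

```python
# The fragile pattern is a watcher polling a "done" counter, which can spin
# forever if an update is ever missed. Blocking on join() cannot miss it.
import threading
import time

def work(results, i):
    time.sleep(0.01)          # stand-in for compressing one chunk
    results[i] = i * i

results = {}
threads = [threading.Thread(target=work, args=(results, i)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()                  # blocks until each worker exits: no polling loop

print(sorted(results.items()))  # [(0, 0), (1, 1), (2, 4), (3, 9)]
```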
Also, file dates and empty directories are not restored, but that's OK for the benchmark.
Edit: LTCB results. http://mattmahoney.net/dc/text.html#1629
Last edited by Matt Mahoney; 5th May 2014 at 03:54.
Thanks for all the tests, Matt!
Hi !
Thanks for new version.
But brief tests show quite bad results:
ZCM 0.90 brings a degradation in compression ratio.
Test 1: different texture types (DXT1/3/5 and also some RAW formats with different bit depths), everything combined into a single TAR of 2 353 697 792 bytes.
Code:
                     size          time      memory
zcm 0.88 -m7    169 146 440   147.265s  1719712 KB
zcm 0.90 -m7    181 092 627   149.875s  1716004 KB  <- 7.06% worse compression
Test 2: GIMP 2.8.4-win-x64, 4388 files, 252 161 274 bytes
Code:
                     size          time      memory
zcm 0.88 -m7     48 724 512    61.375s  1720140 KB
zcm 0.90 -m7     48 834 412    59.343s  1716432 KB  <- 0.23% worse compression
I can also confirm that multithreading is highly unstable. I can't remember even one time I was able to successfully run ZCM with -t8.
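The quoted regression percentages follow directly from the reported sizes; a quick check:

```python
# Regression percentages from the reported sizes (zcm 0.88 vs 0.90, -m7).
def worse_pct(old: int, new: int) -> float:
    return (new - old) / old * 100

print(round(worse_pct(169146440, 181092627), 2))  # 7.06 (texture TAR)
print(round(worse_pct(48724512, 48834412), 2))    # 0.23 (GIMP files)
```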
Last edited by Skymmer; 17th May 2014 at 09:42. Reason: fixed incorrect size of tar file
ZCM 0.91 released!
News:
- Implemented CM core!
- Implemented data recognition
- More stable !
- Improved compression
Only from:
http://heartofcomp.altervista.org/
Stephan Busch (14th May 2014)
Hi Nania..
I am always getting "no memory" message - with and without -t switches.
Nania Francesco (14th May 2014)
Quote Originally Posted by Stephan Busch: I am always getting "no memory" message - with and without -t switches.
Me too...
Nania Francesco (14th May 2014)
ZCM 0.91B released!
News:
- reduced the LZP buffer memory
Only from:
http://heartofcomp.altervista.org/
0.91B runs OK with -m6 but gives a "no memory" message at -m7.
It's fixable, by the way. Just de-UPX the file and patch it to be LargeAddressAware. After that you can run ZCM with -m7.
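For reference, LargeAddressAware is just bit 0x0020 in the Characteristics field of the PE COFF header, so the patch amounts to flipping one bit. A sketch (it assumes the executable has already been de-UPX'd; the PE checksum is left stale, which Windows ignores for ordinary EXEs):

```python
# Sketch: set IMAGE_FILE_LARGE_ADDRESS_AWARE (0x0020) in a PE file's COFF
# header. Only works on an unpacked executable (de-UPX it first).
import struct

IMAGE_FILE_LARGE_ADDRESS_AWARE = 0x0020

def set_laa(path: str) -> None:
    with open(path, "r+b") as f:
        data = bytearray(f.read())
        assert data[:2] == b"MZ", "not an MZ/PE executable"
        pe_off = struct.unpack_from("<I", data, 0x3C)[0]   # e_lfanew -> "PE\0\0"
        assert data[pe_off:pe_off + 4] == b"PE\0\0", "bad PE signature"
        chars_off = pe_off + 4 + 18                        # COFF Characteristics
        chars = struct.unpack_from("<H", data, chars_off)[0]
        struct.pack_into("<H", data, chars_off,
                         chars | IMAGE_FILE_LARGE_ADDRESS_AWARE)
        f.seek(0)
        f.write(data)
```

The same bit can be set with `editbin /LARGEADDRESSAWARE` from Visual Studio.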
Bulat Ziganshin (14th May 2014)
How can that patch be done?
I think it is the same problem as here:
http://encode.su/threads/1838-Comman...ll=1#post35716
best regards
And here is something in German: http://www.3dcenter.org/artikel/das-...ess-aware-flag
Last edited by joerg; 15th May 2014 at 11:54.
Stephan Busch (15th May 2014)