Are there any compression algorithms outside of entropy encodings that yield a significant compression ratio for memoryless sources (not the Markov kind, but this kind)?
More concretely: suppose a million ASCII characters are each drawn independently from the same probability distribution, one that is not necessarily uniform but gives every ASCII character a non-zero probability. What compression algorithms outside of entropy encoders (such as arithmetic coding or Huffman coding) would yield decent compression ratios?
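To make the setup concrete, here is a minimal sketch (using a hypothetical 4-symbol alphabet as a stand-in for ASCII) of the Shannon entropy of such a source, which is the average number of bits per symbol that any lossless code must use for i.i.d. data:

```python
import math

# Hypothetical skewed distribution over a small alphabet (stand-in for the
# full ASCII distribution in the question): every symbol has non-zero
# probability, as required.
probs = {"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}

# Shannon entropy in bits/symbol: the lower bound on the average code length
# of any lossless code for an i.i.d. (memoryless) source.
entropy = -sum(p * math.log2(p) for p in probs.values())
print(entropy)  # 1.75 bits/symbol, vs. 2 bits for a fixed-length code
```

Entropy coders (Huffman, arithmetic) are exactly the algorithms designed to approach this bound.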
$\begingroup$ Why have you rejected entropy encoders? $\endgroup$ – D.W. ♦ Jun 16, 2017 at 21:28
1 Answer
You've eliminated entropy encoders, which are precisely the tool suited to the type of data you described.
Also, "memoryless Markov" is a contradiction in terms.
The best compression possible for one specific generated list of characters might simply be the program that was used to generate that data in the first place.
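A small experiment, assuming a hypothetical skewed 4-symbol alphabet, illustrates why: on i.i.d. data, a general-purpose compressor such as zlib (LZ77 plus Huffman coding) lands near the Shannon entropy of the source but not meaningfully below it, since there are no repeated patterns beyond the symbol frequencies to exploit:

```python
import math
import random
import zlib

random.seed(0)

# i.i.d. symbols from a skewed 4-symbol distribution (stand-in for ASCII).
symbols = b"abcd"
weights = [0.5, 0.25, 0.125, 0.125]
data = bytes(random.choices(symbols, weights=weights, k=100_000))

# Entropy of the source: 1.75 bits/symbol.
entropy = -sum(p * math.log2(p) for p in weights)

compressed = zlib.compress(data, 9)
bits_per_symbol = 8 * len(compressed) / len(data)
print(entropy, bits_per_symbol)  # zlib lands near, but not below, 1.75
```

The gap between `bits_per_symbol` and `entropy` is the overhead; no memoryless-source-aware algorithm can push the average below the entropy.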