Let me break this down into a really simple example.
64 bits go in.
Stage 1 statistically increases it to 72 bits.
Stage 2 splits it into files; we then run file #1 through the system again.
File #1 had 32 bits when it went in; now it has 36 and is split into
three subfiles (see the sketch after this list):
1.1 has 16 bits
1.2 has 12 bits
1.3 has 8 bits
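
Just to check the arithmetic above, here is a minimal Python sketch of
the bit accounting. The names are mine, and the 9/8 expansion factor is
simply what the stated sizes imply:

# Bit accounting for the example above (sizes taken from the post).
stage0_bits = 64              # what goes in
stage1_bits = 72              # after Stage 1's statistical expansion

file1_in = 32                 # file #1 as it went in
file1_out = 36                # file #1 after being run through again
subfiles = {"1.1": 16, "1.2": 12, "1.3": 8}

# Sanity checks on the stated sizes.
assert stage1_bits / stage0_bits == file1_out / file1_in == 9 / 8
assert sum(subfiles.values()) == file1_out

print(f"file #1: {file1_in} -> {file1_out} bits, split as {subfiles}")
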
Now the ratios:
File 1.1: 93.75% to 6.25%
File 1.2: 80% to 20%
File 1.3: 75% to 25%
File 1.1 comes out with a statistical 9.47 bits remaining.
File 1.2 comes out with a statistical 9.36 bits remaining.
File 1.3 comes out with a statistical 6.75 bits remaining.
This totals 25.58 bits.
Compare this to the 28.44 bits that it would normally hold.
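
The post does not say how the per-subfile "statistical bits remaining"
figures are derived, so the sketch below simply takes them at face
value, totals them, and applies the post's own test: the three subfiles
together versus file #1's original size.

# Per-subfile "statistical bits remaining", as quoted above.
remaining = {"1.1": 9.47, "1.2": 9.36, "1.3": 6.75}

total = sum(remaining.values())   # 25.58
baseline = 28.44                  # what it "would normally hold"
file1_original = 32               # file #1's size before any processing

print(f"total remaining: {total:.2f} bits")
print(f"vs. baseline:    {baseline:.2f} bits")
# The post's claim: the subfiles together come out smaller than
# file #1 was to begin with.
print(f"smaller than file #1's original {file1_original} bits? "
      f"{total < file1_original}")
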
Since this is repeatable on any file (1.1, 1.2, 1.3, 2, and 3 currently
in existence in this example), this means we can compress ALL files of
sufficient size.
If, no matter what, file #1 comes out smaller after being run through
the system again than it was when it went in, we have proven
compression of entropic data.
Let me stress that again.
So long as 1.1, 1.2, and 1.3 together are smaller than file #1 was to
begin with, we have compression of random binary data.
I will stand by this to the end of my days, and be right always!