Topic: Math Science Peers?
I am probably going to find that they have all left, but once upon a time we had a slew of compression lovers.
I have just completed an analysis of my work from some different perspectives and need a question resolved.
Given that random data can have most, but not all, of its segments compressed, and given that a standard Huffman code produces output of equal or smaller size for approximately all but 23.6% of the data...
And using the following basic Huffman code (a quick sketch of what I mean with it follows the table):
00 = 0
01 = 10
10 = 110
11 = 111
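To make the setup concrete, here is a rough Python sketch (my own illustration, not my actual system): it applies this code to random segments and counts how many come out at equal or smaller size. The segment lengths below are assumptions on my part, and the measured fraction depends heavily on them, so the 23.6% figure would only correspond to one particular way of cutting the data.

import random

# Output length in bits for each 2-bit input symbol under the code above
# (00 -> 0, 01 -> 10, 10 -> 110, 11 -> 111).
CODE_LENGTHS = {0b00: 1, 0b01: 2, 0b10: 3, 0b11: 3}

def encoded_length(bits):
    # bits: list of 0/1 values, length assumed to be even
    total = 0
    for i in range(0, len(bits), 2):
        symbol = (bits[i] << 1) | bits[i + 1]
        total += CODE_LENGTHS[symbol]
    return total

def fraction_not_expanded(segment_bits, trials=100_000, seed=1):
    # Monte Carlo estimate of the fraction of uniformly random segments
    # whose encoded size is equal to or smaller than the original.
    rng = random.Random(seed)
    ok = 0
    for _ in range(trials):
        bits = [rng.getrandbits(1) for _ in range(segment_bits)]
        if encoded_length(bits) <= segment_bits:
            ok += 1
    return ok / trials

for n in (4, 8, 16, 32, 64):
    print(n, "bits:", fraction_not_expanded(n))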
Is there a known benchmark for comparison, i.e. the highest percentage of random data known to be compressible?
For instance, I am able to compress all but 17% of random data (though with a bit of hard work this might get down to 15%), and I want to know how to benchmark this aspect properly.
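For concreteness, here is one way such a measurement could be set up (a sketch only; os.urandom as the randomness source, the segment size, the trial count, and the "equal or smaller counts as success" rule are my assumptions, not an established benchmark). Plug in any candidate compressor that maps bytes to bytes and it reports the fraction of random segments it does not expand; comparing figures like 23.6% and 17% only makes sense when the segment size and that rule are held fixed across methods.

import os
import zlib

def benchmark(compress, segment_bytes=32, trials=10_000):
    # compress: any function taking bytes and returning the compressed bytes.
    # Returns the fraction of random segments whose output is no larger
    # than the input.
    ok = 0
    for _ in range(trials):
        segment = os.urandom(segment_bytes)
        if len(compress(segment)) <= len(segment):
            ok += 1
    return ok / trials

# zlib as a stand-in candidate: on short random input it essentially always
# expands (header overhead), so expect a result near zero.
print(benchmark(zlib.compress))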
No laughs at my old claims, please; that was depression skewing the work. This work has been submitted and peer reviewed, though I am seeking a quicker answer to this question, so I am posting here. Yes, the system works and might be worth pursuing now that I have time, and yes, the peers confirmed it could work, recognizing me as a math scientist. Please do not spam or troll this sincere question thread. *sigh*
Kemp is currently not being responded to until he makes CONCISE posts.
Avogardo and Noir are ignored by me for life, so people know why I do not respond to them. (Informational)