Topic: The Harrington Compression Method (HCM) details
Introduction:
This is a lossless compression method which WILL work on random binary data and on data considered entropic. It involves one unique step, not previously considered by others, to obtain the compression. It can work on data at nearly any level you want, with a minimum size yet to be calculated.
It does this via a self-creating filing system that yields many more possible values than our actual outcomes, while producing a ratio imbalance at the same time.
This post is intended to help people understand the method, for two purposes: investment in the proposal, and peer review.
Therefore, please understand that I have an answer to the counting argument and an answer for entropic data. I am going to make myself available for conversation on IRC.
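For readers unfamiliar with the counting argument referenced above, it can be stated briefly: a lossless compressor must map distinct inputs to distinct outputs, but there are only 2^n - 1 bit strings strictly shorter than n bits, against 2^n possible n-bit inputs, so no scheme can shorten every input. A minimal sketch of that count (the function name here is illustrative, not part of HCM):

```python
# The counting (pigeonhole) argument: a lossless compressor is an
# injective map, yet the pool of strictly shorter outputs is smaller
# than the pool of n-bit inputs, so at least one input cannot shrink.

def count_shorter_strings(n: int) -> int:
    # Number of distinct bit strings of length 0 .. n-1:
    # sum_{k=0}^{n-1} 2^k = 2^n - 1
    return sum(2 ** k for k in range(n))

n = 16
inputs = 2 ** n                      # distinct n-bit inputs: 65536
outputs = count_shorter_strings(n)   # distinct shorter outputs: 65535
print(inputs - outputs)              # -> 1: always one input too many
```

Whatever answer a proposed method gives to this argument must explain where that surplus input goes.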
Find me at www.imperialconflict.com and in #compress
I am registered there under the name Einstein, and I will try to be available as much as I can today.
End of the Introduction
Harrington Compression Method
The Harrington Compression Method, henceforth HCM, is a repeatable, self-tabulating, statistical compression method unlike any compression system in use today. HCM incorporates a built-in dictionary which allows the user to run the system repeatedly on individual files or subfiles, via triggers for certain events, and includes command sections for each built-in. As a result, HCM allows for nearly endless variations and possibilities. Most importantly of all, the degree of file compression is far greater than that of any existing compression software currently available. In short, HCM is a revolutionary compression system.
This is a white paper intended for peer review of the basic fundamentals behind the system. Michael Hugh Harrington reserves all rights.
Kemp will not be responded to until he makes CONCISE posts.
Avogardo and Noir are permanently ignored by me, so people know why I do not respond to them. (Informational)