[aklug] Re: [OT] Re: random bits vs random Hex

From: Arthur Corliss <acorliss@nevaeh-linux.org>
Date: Wed May 29 2013 - 12:26:14 AKDT

On Wed, 29 May 2013, bryanm@acsalaska.net wrote:

> I don't know enough to address entropy, but I can say that changing
> from binary triplets to decimal digits leaves some of the pattern space
> unused (i.e. 8 and 9). In other words, the same data takes up more space,
> leaving open the possibility for an algorithm to compress it back to close
> to its original size.

If this were true, we'd be able to get great compression on any data,
random or not. By your logic, compressing binary data should be awesome,
since there are only two choices: 1 or 0.

You can't cheat around the basic problem of pattern recognition by changing
how the same data is presented. Choosing to evaluate smaller chunks of data
is a zero-sum game, because you either have to inflate your translation maps
or look for longer pattern strings than you would with larger chunks. In the
end, it's the repeatability of data chunks, regardless of presentation,
that determines compressibility.
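A quick sketch of the point, using Python's standard zlib as a stand-in
compressor (the exact byte counts depend on the compressor, but the
relationship doesn't):

```python
import os
import zlib

# 10 KB of random bytes: 8 bits of entropy per byte.
raw = os.urandom(10_000)

# Re-present each byte as its 3-digit decimal string ("000".."255").
# Same information, wider presentation: the data triples in size.
decimal_form = "".join(f"{b:03d}" for b in raw).encode("ascii")

compressed_raw = zlib.compress(raw, 9)
compressed_dec = zlib.compress(decimal_form, 9)

# Random data is essentially incompressible in its original form.
print(len(raw), len(compressed_raw))

# The inflated decimal form compresses a lot -- but only back toward
# the size of the original, never below its entropy.
print(len(decimal_form), len(compressed_dec))
```

The decimal version shrinks dramatically, but all the compressor is doing
is squeezing back out the padding the re-encoding added; it can't get under
the 10,000 bytes of entropy that were there in the first place.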

         --Arthur Corliss
           Live Free or Die
---------
To unsubscribe, send email to <aklug-request@aklug.org>
with 'unsubscribe' in the message body.
Received on Wed May 29 12:26:42 2013

This archive was generated by hypermail 2.1.8 : Wed May 29 2013 - 12:26:42 AKDT