What would be the contents of a canonical Huffman encoded bit stream? - algorithm

Let's consider the following example with symbol / code length / canonical code data.
A - 2 - 00
B - 2 - 01
D - 2 - 10
C - 3 - 110
E - 3 - 111
I was wondering what the contents of the encoded bit stream would be. Is it 00 01 10 110 111 (basically all the codes), or the binary equivalents of the corresponding code lengths 2,2,2,3,3? I should add that some resources say to transmit the codes themselves as the encoded bit stream, while a few other resources talk about throwing the codes away and transmitting only the code length data.

Encoded bitstream
The code is:
00 01 10 110 111
Note that if we sent only the code lengths 2,2,2,3,3, then it would be impossible to decide whether the input was AAACC or BBBEE (or many other equivalent choices).
Because a Huffman code is a prefix code, we can unambiguously decode the bitstream despite not knowing where the boundaries between code words are.
In other words, when given the output 000110110111, we can uniquely decode it as ABDCE.
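To make the prefix property concrete, here is a minimal decoding sketch in C (my own illustration, not part of the original answer). It repeatedly matches the front of the remaining bit string against the table above; the prefix property guarantees at most one code can match:

#include <stdio.h>
#include <string.h>

int main(void) {
    /* The canonical code from the question. */
    const char *codes[]   = { "00", "01", "10", "110", "111" };
    const char  symbols[] = { 'A', 'B', 'D', 'C', 'E' };
    const char *bits = "000110110111";   /* the concatenated stream */
    size_t pos = 0, n = strlen(bits);

    while (pos < n) {
        for (int i = 0; i < 5; i++) {
            size_t len = strlen(codes[i]);
            if (pos + len <= n && memcmp(bits + pos, codes[i], len) == 0) {
                putchar(symbols[i]);     /* first match is the only match */
                pos += len;
                break;
            }
        }
    }
    putchar('\n');                       /* prints ABDCE */
}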
Transmitting code table
I think the confusion may be because you need to possess two things to decode the bitstream:
The coded bitstream
The lookup table
These two things are often coded in very different ways.
In many cases the lookup table is fixed in advance so does not need to be transmitted.
However, if the probabilities can change, then we need to tell the recipient what code table to use. In this case we can just transmit the lengths of each code word, and this gives enough information for the receiver to construct the canonical Huffman code. Alternatives are also possible; for example, we can send the number of code words of each length, followed by the symbol values. This alternative is used by JPEG and explained more below.
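As a sketch of why the lengths alone are enough, here is the usual canonical construction in C (my illustration, assuming the common convention: codes are assigned in order of increasing length, and by symbol value within a length):

#include <stdio.h>

int main(void) {
    /* Symbols sorted by (length, symbol value), as in the question. */
    const char sym[] = { 'A', 'B', 'D', 'C', 'E' };
    const int  len[] = {  2,   2,   2,   3,   3  };
    int code = 0, prev = len[0];

    for (int i = 0; i < 5; i++) {
        code <<= (len[i] - prev);        /* append zeros when the length grows */
        prev = len[i];
        printf("%c: ", sym[i]);
        for (int b = len[i] - 1; b >= 0; b--)
            putchar('0' + (code >> b & 1));
        putchar('\n');
        code++;                          /* next code of the same length */
    }
    /* Prints A: 00, B: 01, D: 10, C: 110, E: 111, recovered from lengths alone. */
}

The receiver runs exactly the same construction, which is why transmitting the lengths (or the counts per length plus the symbol order) is sufficient.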
Example
The JPEG image codec uses Huffman tables. Normally some default tables are used, but it is possible to optimize the size of images by transmitting a custom Huffman code. A tutorial about this is here.
Another description of the way of transmitting the Huffman table is here. The code lengths are sent (as bytes) followed by the code values (again as bytes).
Code to read it (taken from the link) is:
// Next sixteen bytes are the counts for each code length
// (i, j and code are declared here for clarity; fp, ctr, table, huffData
// and huffKey come from the surrounding code in the link)
u8 counts[16];
int i, j;
u16 code = 0;
for (i = 0; i < 16; i++) {
    counts[i] = fgetc(fp);
    ctr++;
}
// Remaining bytes are the data values to be mapped
// Build the Huffman map of (length, code) -> value
for (i = 0; i < 16; i++) {
    for (j = 0; j < counts[i]; j++) {
        huffData[table][huffKey(i + 1, code)] = fgetc(fp);
        code++;     // next code of the same length
        ctr++;
    }
    code <<= 1;     // append a zero: move to codes one bit longer
}

What you are asking is how to send a description of the code to the receiver, so that the receiver knows how to decode the following code values.
There are many ways of varying levels of sophistication, depending on how much effort you want to put into compressing the description of the code. Peter de Rivaz describes a simple approach used by JPEG, which is to send 16 counts of the number of codes of each length, followed by the byte values of each of those symbols. So for your code that would be (in hex):
00 03 02 00 00 00 00 00 00 00 00 00 00 00 00 00 41 42 44 43 45
That's not terribly compact, and it can't represent one of the possible codes, which is 256 8-bit codes, since you are limited to a count of 255 for each length.
The first thing you can do is cut off the code lengths when you have a complete code. It is easy to calculate how many code patterns are left, so you can simply end the counts when there are none left (a sketch of that calculation follows the example below). Follow that with the symbols. You then have:
00 03 02 41 42 44 43 45
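Here is one way to detect that the code is complete (my sketch, not from the original answer): track how many bit patterns remain unclaimed as the lengths increase, and stop emitting counts once that number reaches zero:

#include <stdio.h>

int main(void) {
    /* counts[k] = number of codes of length k+1, for the example code. */
    const int counts[16] = { 0, 3, 2 };
    int left = 1;                        /* one empty pattern of length zero */

    for (int k = 0; k < 16; k++) {
        left = (left << 1) - counts[k];  /* patterns split in two, codes use some up */
        printf("after length %d: %d patterns left\n", k + 1, left);
        if (left == 0)
            break;                       /* code is complete, stop here */
    }
    /* For 0, 3, 2 this reaches zero at length 3, so only three counts are sent. */
}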
We don't need eight bits for each count, since they are limited by the constraints on those counts. For example, you can't have more than two one-bit codes. So we could code these in fewer bits, e.g. n+1 bits for the count of codes of length n. So two bits, three bits, and so on until the code is complete. For your code, now in binary:
00 011 0010
followed by the bytes 41 42 44 43 45, offset in the bit stream appropriately. Now the list of counts takes nine bits instead of 24. Since we know that there can only be 256 symbols, we can cap off the number of bits for each count at nine, allowing for the count 256, solving the previous problem of not being able to represent the flat code. Then if the code is limited to 16 bits in length (as it is for JPEG), the largest number of bytes needed for the counts is 14.5, less than the original 16. Often the counts will end before 14.5 bytes.
You can get even more sophisticated, noting that at each code length, you have a limit on the possible count of codes of that length due to the shorter code lengths using up patterns. Then the number of bits for each count can be variable, based on how many possible values there are. Then the counts description would be:
00 011 10, then the eight-bit values 41 42 44 43 45
Since we have no preceding patterns used up for lengths one and two, those still need to be two and three bits respectively. However we now have only three possibilities left for length three: the counts 0, 1, or 2. A count of 3 would oversubscribe the code. So we can use two bits for that last one. It is now seven bits instead of nine, and this greatly reduces the number of bits in the counts for codes that use longer code lengths.
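A sketch of that variable-width scheme (again my own illustration), reusing the pattern count to size each count field:

#include <stdio.h>

int main(void) {
    const int counts[16] = { 0, 3, 2 };  /* the example code again */
    int left = 1, total = 0;

    for (int k = 0; k < 16 && left > 0; k++) {
        left <<= 1;                      /* patterns available at this length */
        int bits = 0;
        while ((1 << bits) <= left)      /* enough bits for counts 0..left */
            bits++;
        printf("length %d: %d bits for count %d\n", k + 1, bits, counts[k]);
        total += bits;
        left -= counts[k];
    }
    printf("total: %d bits\n", total);   /* 2 + 3 + 2 = 7 for this code */
}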
An entirely different scheme is the one used by the deflate format (used in zip, gzip, zlib, png, etc.). There the number of code lengths to follow is sent first, followed by the code length of each symbol in order up to the last one. The symbols themselves are implied by the code length location. That results in lots of zeros, to represent symbols that are not present. So for your code there would be a 70 to go up to symbol 69 ("E"), followed by 65 zeros, then 2 2 3 2 3 (the lengths of "A" through "E" in symbol order). That seems awfully long, and it is. deflate then run-length codes and Huffman codes that list of lengths to compress it. The long strings of zeros get compressed to a few bits, and the short lengths are also just a few bits each. So then you have to first send a description of the code lengths code (!) so that you can decode that.
You can read the deflate specification for more information on that scheme. brotli uses a similar scheme, with more sophistication still.

Related

Lossless compression of an ordered series of 29 digits (each on a 0 to 4 Likert scale)

I have a survey with 29 questions, each with a 5-point Likert scale (0=None of the time; 4=Most of the time). I'd like to compress the total set of responses to a small number of alpha or alphanumeric characters, adding a check digit to the end.
So, the set of responses 00101244231023110242231421211 would get turned into something like A2CR7HW4. This output would be part of a printout that a non-techie user would enter on a website as a shortcut to entering the entire string. I'd want to avoid ambiguous characters, such as 0,O,D,I,l,5,S, leaving me with 21 or 22 characters to use (uppercase only). Alternatively, I could just stick with capital alpha only and use all 26 characters.
I'm thinking of converting each pair of digits to a letter (5^2=25, so the whole alphabet is adequate). That would reduce the sequence to 15 characters, which is still longish to type without errors.
Any other suggestions on how to minimize the length of the output?
EDIT: BTW, for context, the survey asks 29 questions about mental health symptoms, generating a predictive risk for 4 psychiatric conditions. Need a code representing all responses.
If the five answers are all equally likely, then the best you can do is ceiling(29 * log(5) / log(n)) symbols, where n is the number of symbols in your alphabet. (The base of the logarithm doesn't matter, so long as they're both the same.)
So for your 22 symbols, the best you can do is 16. For 26 symbols, the best is 15, as you described for 25. If you use 49 characters (e.g. some subset of the upper and lower case characters and the digits), you can get down to 12. The best you'll be able to do with printable ASCII characters would be 11, using 70 of the 94 characters.
The only way to make it smaller would be if the responses are not all equally likely and are heavily skewed. Though if that's the case, then there's probably something wrong with the survey.
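A quick check of those figures (my sketch):

#include <stdio.h>
#include <math.h>

int main(void) {
    /* Symbols needed to hold 29 base-5 answers: ceiling(29*log(5)/log(n)). */
    const int sizes[] = { 22, 26, 49, 70 };
    for (int i = 0; i < 4; i++)
        printf("alphabet of %2d: %2.0f symbols\n", sizes[i],
               ceil(29.0 * log(5.0) / log(sizes[i])));
    /* Prints 16, 15, 12 and 11, matching the figures above. */
}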
First, choose a set of permissible characters, i.e.
characters = "ABC..."
Then, prefix the input-digits with a 1 and interpret it as a quinary number:
100101244231023110242231421211
Now, convert this quinary number to a number in base-"strlen(characters)", i.e. base26 if 26 characters are to be used:
02 23 18 12 10 24 04 19 00 15 14 20 00 03 17
Then, use these numbers as index in "characters", and you have your encoding:
CVSMKWETAPOUADR
For decoding, just reverse the steps.
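A rough sketch of those steps in C (my illustration). Since 5^30 overflows 64-bit integers, it long-divides the digit string in place rather than using a native integer type:

#include <stdio.h>
#include <string.h>

int main(void) {
    const char *alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ";
    /* The leading "1" guard digit plus the 29 answers, as one base-5 numeral. */
    char num[] = "100101244231023110242231421211";
    int n = (int)strlen(num), base = (int)strlen(alphabet);
    char out[64];
    int outlen = 0, start = 0;

    /* Repeated long division of the base-5 numeral by 26; the remainders
       are the base-26 digits, least significant first. */
    while (start < n) {
        int rem = 0;
        for (int i = start; i < n; i++) {
            int d = rem * 5 + (num[i] - '0');
            num[i] = (char)('0' + d / base); /* quotient digit, still 0..4 */
            rem = d % base;
        }
        out[outlen++] = alphabet[rem];
        while (start < n && num[start] == '0')
            start++;                         /* skip leading zeros of the quotient */
    }

    for (int i = outlen - 1; i >= 0; i--)    /* reverse: most significant first */
        putchar(out[i]);
    putchar('\n');                           /* CVSMKWETAPOUADR, per the answer */
}

Decoding reverses the steps: map each letter back to its index, multiply the value out again, and read the result as base-5 digits after checking the leading 1.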
Are you doing this in a specific language?
If you want to be really thrifty about it you might want to consider encoding the data at bit level.
Since there are only 5 possible answers per question you could do this with only 3 bits:
000
001
010
011
100
Your end result would be a string of bits, at 3-bits per answer so a total of 87 bits or 10 and a bit bytes.
EDIT - misread the question slightly, there are 5 possible answers not 4, my mistake.
The only problem now is that for 4 of your 5 answers you're wasting a bit...you ain't gonna benefit much from going to this much trouble I wouldn't say but it's worth considering.
EDIT:
I've been playing about with it and it's difficult to work out a mechanism that allows you to use both 2 and 3 bit values.
Since your output would be an 87-bit binary value you'd need to be able to make the distinction between 2 and 3 bit values when converting back to the original values.
If you're working with a larger number of values there are some methods you could use, like having a reserved bit for each value that can be used to sort of type a value and give it some meaning. But working with so few bits as it is, it's hard to shave anything off.
Your output at 87 bits could be padded out to 128 bits, which would give you 4 32-bit values if you wanted to simplify it. This 128-bit value would be like a unique fingerprint representing a specific set of answers. There are many ways you can represent 128 bits.
But in the end, working at bit level is about as good as it gets when it comes to actual compression and encoding of data...if you can express 5 unique values in less than 3 bits I'd be suitably impressed.
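For what it's worth, a small sketch of the plain 3-bits-per-answer packing described above (my illustration, using the example response string from the question):

#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* The 29 answers 00101244231023110242231421211 from the question. */
    const int answers[29] = { 0,0,1,0,1,2,4,4,2,3,1,0,2,3,1,1,0,2,4,
                              2,2,3,1,4,2,1,2,1,1 };
    uint8_t packed[11] = { 0 };          /* 29 * 3 = 87 bits fit in 11 bytes */

    for (int i = 0; i < 29; i++) {
        int bit = i * 3;                 /* bit offset of this answer */
        for (int b = 0; b < 3; b++)
            if (answers[i] >> (2 - b) & 1)
                packed[(bit + b) / 8] |= 1 << (7 - (bit + b) % 8);
    }

    for (int i = 0; i < 11; i++)
        printf("%02X ", packed[i]);      /* the 11-byte fingerprint */
    putchar('\n');
}

Eleven bytes against the fifteen letters of the base-26 scheme, at the cost of the result no longer being typeable.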

COBOL COMP-3 number format issue

I have a COBOL "tape format" dump which has a mixture of text and number fields. I'm reading the file in C# as a binary array (array of byte). I have the copy book and the formats are lining up fine on the text fields. There are a number of COMP-3 fields as well. The data in those fields doesn't seem to match any BCD format. I know what the data should be and I have the raw bytes of the COMP-3. I tried converting to EBCDIC first, which yielded no better results. Any thoughts on how a COMP-3 number can be otherwise internally stored? Below are three examples of the PIC, the raw data, and the expected number. I know I have the field positions correct because there is alpha data on either side of the numbers and that all lines up correctly.
First Example:
The PIC of the field is 9(9) COMP-3
There are 5 bytes to the data, the hex values are 02 01 20 91 22
The resulting data should be a date (00CCYYMMDD). This particular date should be 3-17-14.
Second Example:
The PIC of the field is S9(3) COMP-3
There are 2 bytes to the data, the hex values are 0A 14
The resulting value should be between 900 and 999
My understanding is that the "S" means that the last nibble should be 0xC or 0xD to indicate + or -
Third Example:
The PIC of the field is S9(15)V99 COMP-3
There are 9 bytes to the data, the hex values are 00 00 00 00 00 00 01 80 0C
The resulting value should be 12.00
There is no such thing as a COBOL "tape format" although the phrase may mean something to the person who gave you the data.
The clue to your problem is that you can read the text. Connect that to the EBCDIC tag and your reference to C#.
So, you are reading data which is originally sourced from a Mainframe, most likely an IBM Mainframe, which uses EBCDIC instead of ASCII.
COBOL does not have native support for BCD.
What some kind soul has done for you is "convert" the data from EBCDIC to ASCII. Otherwise you wouldn't even recognise the "text".
Unfortunately, what that means for any binary or packed-decimal or floating-point fields (you won't see much of the last, but they are COMP-1/COMP-2) is that "convert" means "potentially scrambled", because the conversion is assuming individual bytes, with simple byte values, whereas all of those fields have conventional coding, either through multiple bytes or non-EBCDIC values or both.
So: COMP-3 PIC 9(9). As you say, five bytes. It is unsigned, so the rightmost nybble will be F (all bits on). You are slightly out with your positions due to the sign position being occupied, even for an unsigned field.
On the Mainframe, it contains a value X'020140317F'. Only that field in its entirety can make any sense as to its value. However, the EBCDIC to ASCII conversion has made it X'0201209122'.
How?
Look up the EBCDIC value of X'02' and X'01'. They don't change. Look up the value of X'40', whoops, that's a space, change it to ASCII X'20'. Look up the value of X'31'. Actually nothing special there, and it has converted to something higher than X'7F', but if you look at the translation table used, I guess you'll see why it happens. The X'7F' is a double-quote, so gets changed to X'22'.
The other values you show suffer the same problem.
You should only ever take data from a Mainframe in character-only format. There are many answers here on this; you should look at the related questions.
Have a look at this recent question: Convert COMP and COMP-3 Packed Decimal into readable value with C
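In the same spirit, here is a minimal COMP-3 unpacking sketch in C (my illustration; it assumes the bytes arrive unmangled, i.e. the file was transferred in binary mode):

#include <stdio.h>

/* Unpack a COMP-3 (packed decimal) field: two BCD digits per byte,
   sign in the low nibble of the last byte (C or F = +, D = -). */
long long unpack_comp3(const unsigned char *p, int nbytes) {
    long long v = 0;
    for (int i = 0; i < nbytes; i++) {
        v = v * 10 + (p[i] >> 4);        /* high nibble is always a digit */
        if (i < nbytes - 1)
            v = v * 10 + (p[i] & 0x0F);  /* low nibble, except the sign byte */
    }
    return (p[nbytes - 1] & 0x0F) == 0x0D ? -v : v;
}

int main(void) {
    const unsigned char date[] = { 0x02, 0x01, 0x40, 0x31, 0x7F }; /* PIC 9(9) */
    const unsigned char amt[]  = { 0x01, 0x20, 0x0C };             /* S9(...)V99 */
    printf("%lld\n", unpack_comp3(date, 5)); /* 20140317, i.e. CCYYMMDD */
    printf("%lld\n", unpack_comp3(amt, 3));  /* 1200; V99 implies 12.00 */
}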
OK, let's have a look at your first example. Given the format and value the original BCD-content should have been something like
02 01 40 31 7F
When transforming that from EBCDIC to ASCII we run into trouble with the first, second and fourth byte because they are control characters - so here we would need some more details on how the EBCDIC->ASCII converter worked. Looking at the two remaining bytes, those would be changed
EBCDIC ASCII CHARACTER
40 -> 20 (blank)
7F -> 22 "
So assuming the first two bytes remain unchanged and the fourth byte (the remaining control character) gets converted like 31->91, we end up with
02 01 20 91 22
which is what you got. So it looks like some kind of EBCDIC->ASCII-conversion took place. If that is the case it might be that you can't repair the data since the transformation may not be one-one and thus not reversible.
Looking at the second example and using
EBCDIC ASCII CHARACTER
25 -> 0A (LF)
3C -> 14 (DC4)
you would have started with 25 3C which would fit the format but not the range you gave.
In the third example the original 01 20 0C could be converted to 01 80 0C since 20 also is an EBCDIC control-character with no direct ASCII-equivalent.
But given all other examples I would assume there is some codepage-conversion issue.
If you used some kind of file transfer to move the data from the (supposed) mainframe, make sure it is set to binary mode and don't do any character conversion before you split the file into fields and know what's meant to be a character and what not.
EDIT: You can find a list of several EBCDIC and ASCII-based codepages here or look here for the same as one pdf.
I'm coming to this a bit late, but have a couple of suggestions that might make your life easier...
First, see if you can get your mainframe counterparts to convert all non-character (i.e. binary numeric and packed decimal) data to display format (e.g. PIC X) before you download it. Then you only need to deal with the "printable" range of numeric characters representing 0 through 9. Printable-character-only code-page conversions are fairly standard and tend not to screw up as much. Reformatting data given a copybook is not a difficult prospect for anybody proficient in a mainframe environment. Unfortunately, sometimes you get the "runaround" and a claim is made that it is extremely costly, takes special software, or any one of a hundred other bogus excuses.
If you get the "runaround" then the next best thing is to download the file in binary format and do your own code-page conversion for the character data (fairly straightforward). Next, deal with the binary data based on your copybook definitions. With a few Googles you should be able to find enough information to get through converting the PACKED-DECIMAL (COMP-3) data to whatever you need.
Here are a couple of links to get you started:
Numeric Data Formats
Packed Decimal
I do not recommend trying to reverse engineer the code-page conversions applied by your file transfer package in order to decode the packed decimal and other binary data.
Ok so thanks to both people who responded as they pointed me in the right direction. This is indeed an ASCII/EBCDIC representation issue. The BCD is stored in EBCDIC. Using an ASCII to EBCDIC conversion table yields properly formatted BCD digits:
I used this link to map the data: http://shop.alterlinks.com/ascii-table/ascii-ebcdic-us.php
My data: 0A 14
Converted: 25 3C (turns out that 253 is a valid value, spec was wrong) C = +, all good
My data: 01 80 0C (excluding leading zeros)
Converted: 01 20 0C 12.00 C = +, implied 2 digits in format, all good
My data: 02 01 20 91 22
Converted: 02 01 40 31 7F 2014/03/17 (F is unused nibble), all good
Thanks again for the two above answers which led me in the right direction.
You can avoid the above issues by having the data converted to a modern data-transfer format such as XML.

Handling 32-bit ifIndex inside of SNMP

In developing my own SNMP poller, I've come across the problem of being able to poll devices with 32-bit interface indexes. I can't find anything out there explaining how to convert the hex (5 bytes) into the 32-bit integer, or from the integer into hex, as it doesn't use simple hex conversion. For example, the interface index is 436219904. While doing a pcap with a snmpget, I see the hex for this is 81 d0 80 e0 00, which makes no sense. I cannot for the life of me figure out how that converts to an integer value. I've tried to find an RFC dealing with this and have had no luck. The 16-bit interface values convert as they should: 0001 = 1 and so on. Only the 32-bit ones seem to be giving me this problem. Any help is appreciated.
SNMP uses ASN.1 syntax to encode data. Thus, you need to learn the BER rules,
http://en.wikipedia.org/wiki/X.690
For your case, I can say you looked at the wrong data: if 436219904 were encoded as an Integer32 in SNMP, the bytes would be 1A 00 30 00.
I guess you have missed some details in the analysis, so you might want to do it once again and add more descriptions (screenshot and so on) to enrich your question.
I suspect the key piece of info missing from your question is that the ifIndex value in question, as used in your polling, is an index for the table polled (not mentioned which, but we could assume ifTable), which means it will be encoded as a subidentifier of the OID being polled (give me [some property] for [this ifIndex]) versus a requested value (give me [the ifIndex] for [some other row of some other table]).
Per X.209 (the version of the ASN.1 Basic Encoding Rules used by SNMP), subidentifiers in OIDs (other than the first two) are encoded in one or more octets (8 bits), with the highest-order bit used as a continuation bit (i.e. "the next octet is part of this subidentifier too"), and the remaining 7 bits for the actual value.
In other words, in your value "81 d0 80 e0 00", the highest bit is set in each of the first 4 octets and cleared in the last octet: this is how you know there are 5 octets in the subidentifier. The remaining 7 bits of each of those octets are concatenated to arrive at the integer value.
The converse of course is that to encode an integer value into a subidentifier of an OID, you have to build it 7 bits at a time.
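A sketch of that decoding (my own illustration):

#include <stdio.h>

int main(void) {
    /* Subidentifier bytes from the capture: the continuation bit (0x80) is
       set on all but the last octet; each octet carries 7 payload bits. */
    const unsigned char sub[] = { 0x81, 0xD0, 0x80, 0xE0, 0x00 };
    unsigned long value = 0;

    for (int i = 0; i < 5; i++) {
        value = (value << 7) | (sub[i] & 0x7F);  /* append 7 more bits */
        if (!(sub[i] & 0x80))
            break;                               /* high bit clear: last octet */
    }
    printf("%lu\n", value);                      /* prints 436219904 */
}

Encoding is the converse: emit the value 7 bits at a time, setting the high bit on every octet except the last.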

Should I avoid multibit codes starting with 0 (zero) in a Huffman code?

When I use the greedy Huffman algorithm to construct the binary tree, I get the following codes if all four symbols are equally probable:
00
01
10
11
The problem is, my program sees 00 and 01 as only 0 and 1. Should I restrict codes starting with 0 (zero) to length 1 (one)? What data type should I use to store the Huffman code or its individual bits?
If your "program see 00 and 01 as only 0 and 1", then your program have bug.
For four equiprobable symbols, the code would indeed be 00, 01, 10, and 11. That means you need to look for all of those bits when decoding. When decoding you pull the bit on the left first. Suppose you get a 0. That means the code is either 00 or 01. Then you pull the next bit. It's a 1. So now you have the complete code 01. You emit the corresponding symbol, and then start over.
It's easier to see for the more typical case where the probabilities are not equal and the codes have different lengths. Consider this code:
a - 0
b - 10
c - 110
d - 111
To decode you start pulling bits from the stream. The first bit is 1. Now you know that it must be b, c, or d. Now you pull another 1. You have it down to c or d. You pull a 0, so now you know it's c. You start from the beginning with the next bit.
Until you start pulling bits and narrowing down choices, you don't know the length of the code. You will know the length of the code once you have decoded it.
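One common representation (a sketch of my own, addressing the data type part of the question) stores each code as an integer plus a bit length, so that 0 and 00 stay distinct; decoding then pulls one bit at a time and checks for a match at the current length:

#include <stdio.h>

int main(void) {
    /* The example code: a=0, b=10, c=110, d=111, stored as (bits, length). */
    const unsigned code_bits[] = { 0, 2, 6, 7 };
    const int      code_len[]  = { 1, 2, 3, 3 };
    const char     symbol[]    = { 'a', 'b', 'c', 'd' };

    const char *stream = "010110111";  /* a, b, c, d concatenated */
    unsigned acc = 0;
    int len = 0;

    for (const char *p = stream; *p; p++) {
        acc = (acc << 1) | (unsigned)(*p - '0');  /* pull one bit */
        len++;
        for (int i = 0; i < 4; i++) {
            if (len == code_len[i] && acc == code_bits[i]) {
                putchar(symbol[i]);               /* a complete code */
                acc = 0;
                len = 0;
                break;
            }
        }
    }
    putchar('\n');                                /* prints abcd */
}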

Convert a byte to something human-readable

I've got a file consisting of a bunch of 'rows' of data (not actually on separate lines). As far as I can tell from the description (PDF), each 'row' has forty bytes precisely, as follows:
4 bytes I can ignore.
2 bytes that are the byte representations of the (decimal) integer 240 or 241 — that is, 00 F0 or 00 F1 — depending on 'row'.
4 bytes I can ignore.
11 bytes: the first few bytes are an ASCII string, which I need, and the rest is padding by 00 bytes.
1 byte I can ignore.
1 byte I need. This seems from the documentation to be ASCII 'B', 'S', or '0'.
1 byte that's the byte representation of a small integer — that is, 00, 01, or the like.
4 bytes that are the byte representation of an integer.
4 bytes that are the byte representation of an integer.
4 bytes that are the byte representation of an integer.
4 bytes that are the byte representation of an integer.
And there's nothing else in the file (no file header, for example). (I may well be wrong in my understanding of the linked-to documentation, and would appreciate any correction; more info may be available elsewhere (PDF).)
I wish to:
convert each byte representation of a number to a human-readable representation of the number and each 00 byte to (say) a human-readable 0, and
convert the file to comma-separated or the like.
Now, step (2) should be doable using sed; I mention it only because I want to make sure step (1) is done in such a manner as to allow step (2) (for example, keep track of how many bytes are in each field when doing step (1)). But step (1) I have no idea how to do. Can anyone help?
As a caveat, please note that I'm comfortable with sed and bash and can handle perl, but have no experience with real programming; and that, alas, I'm doing this on a Windows machine I don't have installation-of-programs rights on, so (although I have a sed port) I don't have bash. So, basically, I need to do this in sed or a Windows (DOS) command-line script. (I should be able to download the files to another machine, work with them, and upload them, though, if that turns out to be necessary.)
Perl has an unpack function you could use.
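Since that answer is terse, here is what the record parsing could look like; a C sketch rather than Perl, given the asker's toolchain limits (my illustration; the file name is hypothetical, and big-endian integers are an assumption the linked PDF would need to confirm):

#include <stdio.h>
#include <string.h>

/* Big-endian readers; the actual byte order is an assumption, check the spec. */
static unsigned get_u16(const unsigned char *p) { return (unsigned)p[0] << 8 | p[1]; }
static unsigned long get_u32(const unsigned char *p) {
    return (unsigned long)p[0] << 24 | (unsigned long)p[1] << 16
         | (unsigned long)p[2] << 8  | p[3];
}

int main(void) {
    FILE *fp = fopen("data.bin", "rb");   /* hypothetical input file */
    unsigned char row[40];
    if (!fp) return 1;

    /* Field offsets follow the 40-byte layout listed in the question. */
    while (fread(row, 1, 40, fp) == 40) {
        char name[12];
        memcpy(name, row + 10, 11);       /* 11-byte ASCII field, 00-padded */
        name[11] = '\0';
        printf("%u,%s,%c,%u,%lu,%lu,%lu,%lu\n",
               get_u16(row + 4),          /* 240 or 241 */
               name, row[22], row[23],    /* string, B/S/0 byte, small int */
               get_u32(row + 24), get_u32(row + 28),
               get_u32(row + 32), get_u32(row + 36));
    }
    fclose(fp);
    return 0;
}

The equivalent Perl would be a single unpack template per 40-byte row, which is the approach the answer above is pointing at.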
