What does Cyclic Code mean? Are CRC and Reed-Solomon Cyclic codes?

We have an assignment where we must write a Python program that compresses a txt file with LZ-78, then encodes the compressed file with a "cyclic code", and after that sends it as a JSON file to a receiver. I can't find an exact clarification of what the professor means by cyclic code.
I searched the web and found CRC and Reed-Solomon, but I'm not sure whether these two are the correct codes to use. Can you please explain whether these are okay for me to use, or whether I need something different?
I'm not sure if it helps, but for some teams he specified that he wanted them to use Reed-Muller.

What does cyclic code mean?
It means that every valid codeword can be rotated (left or right) and the result will be another valid codeword. CRCs (at least those that don't post-complement the CRC), BCH codes, and BCH-type Reed-Solomon codes are cyclic codes. Original-view Reed-Solomon codes are not cyclic unless a specific set of evaluation values, successive powers of the field primitive α, is used.
Encoding and decoding normally don't directly exploit the cyclic nature of cyclic codes, other than as a possible method (reverse cycling as opposed to a lookup table) to correct single-burst errors.
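As a concrete illustration, here is a minimal Python sketch (Python only because the question mentions it; the code and names are mine, not from any assignment): the (7,4) binary code generated by g(x) = x^3 + x + 1 is cyclic because g(x) divides x^7 + 1, and encoding is just CRC-style polynomial division with a zero initial value and no final XOR.

# A minimal sketch: the (7,4) binary code generated by g(x) = x^3 + x + 1
# is cyclic (g(x) divides x^7 + 1). Encoding is plain CRC-style polynomial
# division, so every rotation of a valid codeword is again a valid codeword.

G = 0b1011            # g(x) = x^3 + x + 1
N, K = 7, 4           # codeword length, message length
R = N - K             # number of parity (CRC) bits

def remainder(bits: int) -> int:
    """Remainder of bits, read as a polynomial, divided by g(x) over GF(2)."""
    for i in range(N - 1, R - 1, -1):
        if bits & (1 << i):
            bits ^= G << (i - R)
    return bits

def encode(msg: int) -> int:
    """Codeword = message shifted up by R bits, with the CRC in the low bits."""
    return (msg << R) | remainder(msg << R)

def rotate_left(word: int) -> int:
    return ((word << 1) | (word >> (N - 1))) & ((1 << N) - 1)

for msg in range(1 << K):
    cw = encode(msg)
    for _ in range(N):            # every rotation remains a valid codeword
        cw = rotate_left(cw)
        assert remainder(cw) == 0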
https://en.wikipedia.org/wiki/Cyclic_code
https://en.wikipedia.org/wiki/BCH_code
https://en.wikipedia.org/wiki/Reed%E2%80%93Solomon_error_correction
Reed-Muller codes are an older class of codes that are not cyclic.
https://en.wikipedia.org/wiki/Reed%E2%80%93Muller_code
http://www-math.ucdenver.edu/~wcherowi/courses/m7823/reedmuller.pdf
http://www.mcs.csueastbay.edu/~malek/Class/Reed-Muller.pdf
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.208.440&rep=rep1&type=pdf
Due to the conflict between "cyclic" and "Reed-Muller", you should probably ask the professor for clarification.

Related

Does the Reed-Solomon error algorithm allow correction only if the error occurs in the input data part?

The Reed-Solomon algorithm adds additional data to the input, so that potential errors (of a particular size/quantity) in the damaged input can be corrected back to the original state. Correct? Does this algorithm also protect the added data that is not part of the input but is used by the algorithm? If not, what happens if an error occurs in that non-input part?
An important aspect is that Reed-Solomon (RS) codes are cyclic: the set of codewords is stable under cyclic shift.
A consequence is that no particular part of a code word is more protected or less protected.
An RS code has an error correction capability of t = (n-k)/2, where n is the code length (generally expressed in bytes) and k is the length of the information part.
If the total number of errors (in both parts) is at most t, the RS decoder will be able to correct them (more precisely, up to t erroneous bytes in the general case). If it is higher, the errors cannot be corrected (but they may still be detected, which is another story).
The placement of the errors, whether in the information part or in the added part, has no influence on the error correction capability.
EDIT: the rule t = (n-k)/2 that I mentioned is valid for Reed-Solomon codes. It is not generally correct for BCH codes, where t <= (n-k)/2. With respect to your question, however, this does not change the answer: these families of codes have a given correction capability, corresponding to the minimum distance between codewords, and the decoders can correct t errors whatever their position in the codeword.
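To make this concrete, here is a minimal sketch assuming the third-party reedsolo Python package (my choice of library, not something from the question); it corrupts bytes only in the appended parity part and still decodes:

# A minimal sketch, assuming the third-party `reedsolo` package
# (pip install reedsolo); recent versions return a
# (message, full_codeword, errata_positions) tuple from decode().
from reedsolo import RSCodec

rsc = RSCodec(4)                           # n - k = 4 parity bytes, so t = 2
codeword = bytearray(rsc.encode(b"hello")) # 5 data bytes + 4 parity bytes

codeword[6] ^= 0xFF                        # corrupt two bytes inside the
codeword[8] ^= 0xFF                        # appended parity part (positions 5..8)

decoded = rsc.decode(bytes(codeword))[0]
assert decoded == b"hello"                 # corrected despite parity-only damage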
As long as no more than half of the added data is in error, errors that occur only in the added data can be corrected.
With the appended data, the data + appended data form what is called a codeword, one that satisfies the rules for a codeword. Note that there are two basic types of Reed-Solomon code, the "original view" and the "BCH view". What constitutes a valid codeword depends on which type of Reed-Solomon code is being used. Link to the Wikipedia article that explains this:
https://en.wikipedia.org/wiki/Reed%E2%80%93Solomon_error_correction
For an erasure-only code, the location of all errors is determined by other means, and in this case, even if all of the appended data is known to be bad, it can be corrected (or regenerated).

Algorithm for a one-character checksum

I am desperately searching for an algorithm that produces a checksum at most two characters long and can detect transposed characters in the input sequence. When testing different algorithms, such as Luhn, CRC-24 or CRC-32, the checksums were always longer than two characters. If I reduce the checksum to two or even one character, then not all transpositions are detected any more.
Does any of you know an algorithm that meets my needs? Even a name with which I can continue my search would already help. I would be very grateful for your help.
Given that your data is alphanumeric, that you want to detect all transpositions (in the ideal case), and that you can afford a binary checksum (i.e., the full 16 bits), my guess is that you should probably go with CRC-16 (as already suggested by @Paul Hankin in the comments), as it is more information-dense than check-digit algorithms like Luhn or Damm, and more "generic" with respect to the possible types of errors.
Maybe something like CRC-CCITT (CRC-16-CCITT); you can give it a try here to see how it works for you.
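If you want to check quickly whether 16 bits catch your transpositions, here is a minimal sketch of the common "CCITT-FALSE" parameter set (poly 0x1021, init 0xFFFF); note that other CCITT variants, and possibly the linked calculator, use different parameters:

# A minimal sketch of CRC-16-CCITT ("CCITT-FALSE": polynomial 0x1021,
# initial value 0xFFFF, no reflection, no final XOR).
def crc16_ccitt(data: bytes, crc: int = 0xFFFF) -> int:
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

a = crc16_ccitt(b"AB12CD")
b = crc16_ccitt(b"AB21CD")        # two adjacent characters transposed
print(f"{a:04X} vs {b:04X} -> detected: {a != b}")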

Difference between direct and indirect CRC

I have seen two different kinds of CRC algorithms: one kind is called "direct", the other "non-direct" or "indirect". The code for the two differs slightly, but both calculate the same checksum if the direct type is supplied with a converted initial value.
I can successfully run both algorithms and I know how to convert the initial value. So this is no problem.
What I couldn't find out: Why do these two algorithms exist? Is there something that one can do what the other can't? Are they redundant from the user's point of view?
UPDATE: You can find a testable online implementation (and C implementations of both algorithms) here. These terms (or at least one of them) also appear in a few other places, like here ("direct table algorithm"), in a microcontroller reference document, in forums, etc.
The "direct" is referring to how to avoid processing n zero bits at the end for an n-bit CRC.
The mathematical definition of the CRC is a division of the message with n zero bits appended to it. You can avoid the extra operations by exclusive-oring the message with the CRC before operating on it instead of after. This requires processing the initial value of the register in the normal version through the CRC, and having that be the new initial value.
Since it is not necessary, you will never see a real-world CRC algorithm doing the extra operations.
See the section "10. A Slightly Mangled Table-Driven Implementation" in the document you link for a more detailed explanation.
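For illustration, a minimal sketch of both forms (my own code, using poly 0x1021 and a zero initial value for both, so that no converted initial value is needed):

# A minimal sketch contrasting the two forms for a 16-bit CRC (poly 0x1021).
POLY = 0x1021

def crc16_indirect(data: bytes) -> int:
    """Literal definition: divide the message with 16 zero bits appended."""
    crc = 0
    for byte in data + b"\x00\x00":        # the n appended zero bits
        for i in range(7, -1, -1):
            msb = (crc >> 15) & 1
            crc = ((crc << 1) | ((byte >> i) & 1)) & 0xFFFF
            if msb:
                crc ^= POLY
    return crc

def crc16_direct(data: bytes) -> int:
    """Direct form: XOR each byte into the top of the register up front."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ POLY) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

msg = b"123456789"
assert crc16_indirect(msg) == crc16_direct(msg)   # same result, fewer steps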

Mapping Untyped Lisp data into a typed binary format for use in compiled functions

Background: I'm writing a toy Lisp (Scheme) interpreter in Haskell. I'm at the point where I would like to be able to compile code using LLVM. I've spent a couple days dreaming up various ways of feeding untyped Lisp values into compiled functions that expect to know the format of the data coming at them. It occurs to me that I am not the first person to need to solve this problem.
Question: What are some historically successful ways of mapping untyped data into an efficient binary format?
Addendum: In point of fact, I do know which of about a dozen different types the data is, I just don't know which one might be sent to the function at compile time. The function itself needs a way to determine what it got.
Do you mean, "I just don't know which [type] might be sent to the function at runtime"? It's not that the data isn't typed; certainly 1 and '() have different types. Rather, the data is not statically typed, i.e., it's not known at compile time what the type of a given variable will be. This is called dynamic typing.
You're right that you're not the first person to need to solve this problem. The canonical solution is to tag each runtime value with its type. For example, if you have a dozen types, number them like so:
0 = integer
1 = cons pair
2 = vector
etc.
Once you've done this, reserve the first four bits of each word for the tag. Then, every time two objects get passed in to +, first you perform a simple bit mask to verify that both objects' first four bits are 0b0000, i.e., that they are both integers. If they are not, you jump to an error message; otherwise, you proceed with the addition, and make sure that the result is also tagged accordingly.
This technique essentially makes each runtime value a manually-tagged union, which should be familiar to you if you've used C. In fact, it's also just like a Haskell data type, except that in Haskell the taggedness is much more abstract.
I'm guessing that you're familiar with pointers if you're trying to write a Scheme compiler. To avoid limiting your usable memory space, it may make more sense to use the bottom (least significant) four bits rather than the top ones. Better yet, because pointers aligned to 8 bytes already have three meaningless zero bits at the bottom, you can simply co-opt those bits for your tag, as long as you dereference the actual address rather than the tagged one.
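Here is a minimal Python sketch of that low-bit tagging scheme (the tag values and helper names are illustrative, not from any particular runtime):

# A minimal sketch of low-bit tagging; tag values and names are illustrative.
# Assumes 8-byte-aligned heap pointers, so the bottom three bits hold the tag.
TAG_BITS = 3
TAG_MASK = (1 << TAG_BITS) - 1
TAG_INT, TAG_PAIR, TAG_VECTOR = 0b000, 0b001, 0b010

def tag_int(n: int) -> int:
    return (n << TAG_BITS) | TAG_INT       # small integers live in the word itself

def tag_pointer(addr: int, tag: int) -> int:
    assert addr & TAG_MASK == 0, "pointer must be 8-byte aligned"
    return addr | tag

def tag_of(word: int) -> int:
    return word & TAG_MASK

def untag_int(word: int) -> int:
    return word >> TAG_BITS

def untag_pointer(word: int) -> int:
    return word & ~TAG_MASK                # dereference this, not the tagged word

# The runtime check before `+` described above:
a, b = tag_int(2), tag_int(3)
if tag_of(a) == TAG_INT and tag_of(b) == TAG_INT:
    result = tag_int(untag_int(a) + untag_int(b))
else:
    raise TypeError("+ expects two integers")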
Does that help?
Your default solution should be a simple tagged union. If you want to narrow your typing down to more specific types, you can do it - but it won't be that "toy" any more. A thing to look at is called abstract interpretation.
There are a few successful implementations of such an optimisation, with V8 probably being the most widespread. In the Scheme world, the most aggressively optimising implementation is Stalin.

Basic Reed-Solomon Error Correction Question

Does Reed-Solomon error correction work in a case where a byte (or multiple bytes) is dropped? For example, let's say it's a (12,8) Reed-Solomon code, so in theory it should be able to correct 2 errors (or 4 erasures if the positions are known). But what happens if only 11 (or 10) bytes are received and one doesn't know which byte(s) were dropped? Will Reed-Solomon error correction work?
Thanks,
Ben
RS decoding for erasures requires the positions of the "dropped" or lost symbols. The kind of error you're talking about is due to phase distortion.
You can make it work by simply cycling through the possible positions where the characters might be missing and letting the decoder try to correct the result. So, say you received 10 characters:
1234567890
Have it correct the following values:
??1234567890
?1?234567890
?12?34567890
:
1??234567890
1?2?34567890
:
1234567890??
Each attempt will probably give you some result, most of which are not the one you want. But I would expect there to be exactly one result with the minimal number of additional modifications, and that is the one you should use as the most likely correct answer.
For example, if you correct the first three candidates of the example above, you might get the following results:
    v
361274567890
917234567890
312734569897
        ^  ^
For the first and third candidates, additional corrections were made beyond filling in the two blanks (marked with v and ^), whereas in the second only the missing positions were filled in and the other characters match the uncorrected input. Therefore, I would choose answer 2 as the most likely correct one.
Clearly, the chances that this works depend on whether there are other errors. Unfortunately I'm not able to give you a rigorous set of conditions under which this method will work for sure.
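Still, here is a minimal sketch of the search, assuming the third-party reedsolo Python package (its decode() takes an erase_pos list and, in recent versions, returns the corrected positions, which lets you count the extra modifications):

# A minimal sketch of the search described above, assuming `reedsolo`
# (pip install reedsolo); recent versions of decode() return
# (message, full_codeword, corrected_positions).
from reedsolo import RSCodec, ReedSolomonError

rsc = RSCodec(4)                            # 4 parity bytes: 2 errors or 4 erasures
sent = bytes(rsc.encode(b"hello"))          # 9-byte codeword
received = sent[:3] + sent[4:]              # one byte was dropped in transit

candidates = []
for pos in range(len(sent)):
    trial = received[:pos] + b"\x00" + received[pos:]   # placeholder byte
    try:
        msg, _, errata = rsc.decode(trial, erase_pos=[pos])
        candidates.append((len(errata), pos, bytes(msg)))
    except ReedSolomonError:
        continue                            # this position can't be made consistent

candidates.sort()                           # fewest corrections first
print(candidates[0])                        # the most plausible reconstruction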
Another thing you can do, if your message is long enough, is to use an interleaving technique so that multiple orthogonal RS codes cover your data. That way, if one fails, you might be able to recover with another one. This method is used on compact discs (CDs), for example, where it is called CIRC.
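For instance, a minimal sketch of plain block interleaving (CIRC itself adds cross-interleaving with delay lines; this is only the basic idea):

# A minimal sketch of block interleaving: a burst of B consecutive channel
# errors is spread across `depth` codewords, hitting each at most
# ceil(B / depth) times.
def interleave(codewords: list[bytes]) -> bytes:
    """Transmit column by column: c0[0], c1[0], c2[0], c0[1], c1[1], ..."""
    return bytes(b for column in zip(*codewords) for b in column)

def deinterleave(stream: bytes, depth: int) -> list[bytes]:
    return [stream[i::depth] for i in range(depth)]

cws = [b"AAAA", b"BBBB", b"CCCC"]   # three equal-length RS codewords
tx = interleave(cws)                # b"ABCABCABCABC"
assert deinterleave(tx, len(cws)) == cws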
No, Reed-Solomon can't automatically correct instances where there are missing bits, because, like most other FEC algorithms, it was only designed to correct symbol corruption (e.g., bit-flips), not insertions or deletions. If you know the positions of the missing bits, you can pad your received signal at those positions so that RS can then work normally.
However, if you don't know the positions, you will need to use another algorithm that supports bit insertion or deletion, such as Marker Codes and Watermark Codes.
Also note that RS can handle not only erasures but also noisy symbols, using the Forney syndrome.
