What is the relation between a 32-bit CRC and the data word size? - crc32

If we take a 32-bit CRC, does that mean the data word size will be 2 to the power of 32 (2**32) bits plus 32 bits for the CRC... or not? Am I missing something?
If I want to write code in Microsoft Visual C++ to implement a 32-bit CRC, what data type should I use? Maybe I am missing the point and talking rubbish.
Basically, my assignment is to implement a 32-bit CRC, and I am completely at a loss as to how to go about it.
Sorry if the question is vague. Any help toward the implementation, the logic, or the basic fundamentals will be greatly appreciated.

CRC-32 is basically the act of dividing two polynomials over GF(2) and returning the remainder: the data word, however long it is, is treated as one polynomial and divided by a fixed degree-32 generator polynomial, and the 32-bit remainder is the CRC. So there is no fixed relation between the CRC width and the data word size; the remainder is always 32 bits, and a single unsigned 32-bit integer (unsigned int / uint32_t in Visual C++) is the natural type to hold it. A minimal bitwise sketch follows the reading list below.
Recommended introductory reading:
http://en.wikipedia.org/wiki/Cyclic_redundancy_check
http://www.mathpages.com/home/kmath458.htm
http://www.ross.net/crc/download/crc_v3.txt
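
To make the division concrete, here is a minimal bit-by-bit sketch in C++ of the common reflected CRC-32 variant (polynomial 0xEDB88320, as used by zip and Ethernet). A table-driven version would be faster; this one only shows the algorithm:

    #include <cstdint>
    #include <cstdio>
    #include <cstring>

    // Bit-by-bit CRC-32 (reflected zip/Ethernet variant, polynomial 0xEDB88320).
    // The CRC is always 32 bits, no matter how long the input data is.
    std::uint32_t crc32(const std::uint8_t* data, std::size_t len) {
        std::uint32_t crc = 0xFFFFFFFFu;            // initial value
        for (std::size_t i = 0; i < len; ++i) {
            crc ^= data[i];                         // bring in the next byte
            for (int bit = 0; bit < 8; ++bit) {
                if (crc & 1u)
                    crc = (crc >> 1) ^ 0xEDB88320u; // XOR in the generator polynomial
                else
                    crc >>= 1;
            }
        }
        return crc ^ 0xFFFFFFFFu;                   // final XOR
    }

    int main() {
        const char* msg = "123456789";
        std::printf("%08X\n", crc32(reinterpret_cast<const std::uint8_t*>(msg),
                                    std::strlen(msg)));   // standard check value CBF43926
        return 0;
    }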

Related

Fixed-point real-number arithmetic support for Eigen/Eigen3

I'm raising again a very general question about the Eigen/Eigen3 matrix library's support for different scalar "fields/representations" in its operations.
I've looked a bit at the Eigen matrix template library, and so far I've only seen support for floating-point real-number arithmetic (that is, IEEE 754 single-precision 32-bit and double-precision 64-bit floating-point numbers).
I would like to raise a question concerning fixed-point real-number arithmetic support in Eigen/Eigen3:
Is there any support for fixed-point vectorization in Eigen/Eigen3?
If not, what would be necessary to implement such support?
Can the standard decomposition routines and matrix operations be implemented immediately using fixed-point scalars? If so, how?
If not, what are the prerequisites for such support (concepts, operator overloads, "real" functions that have to be implemented, etc.) in order to implement such operations/decompositions without impairing Eigen's core?
Are there any plans to implement such functionality in the core of Eigen/Eigen3?
If nothing of this kind is foreseen in the near future:
Does any such functionality already exist, that you are aware of, which would be compatible with Eigen/Eigen3 and would allow vectorization/optimizations to be implemented fully?
If not, which approach would you recommend to someone interested in implementing it?
I would like to assess the feasibility of implementing a few matrix computations on a 16- or 32-bit microcontroller. I'm not aware of anything of this kind released under a GPL licensing scheme, and I would be greatly interested if such a thing were usable. If not, I would like to estimate the workload needed to implement it. A rough sketch of the kind of glue I imagine is needed follows below.
Thanks in advance to anyone who can help.
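
To make the question concrete, here is the kind of scaffolding I imagine is needed: a toy Q16.16 fixed-point type plus the Eigen::NumTraits specialization that Eigen's documentation describes for custom scalar types. This is only a sketch; the Fixed type, its format and the cost constants are made up, and a real integration would need more operators (comparisons, abs, sqrt for decompositions, ...):

    #include <cstdint>
    #include <Eigen/Core>

    // Toy Q16.16 fixed-point scalar: the raw field holds value * 2^16.
    struct Fixed {
        std::int32_t raw;
        Fixed(double d = 0.0) : raw(static_cast<std::int32_t>(d * 65536.0)) {}
        double toDouble() const { return raw / 65536.0; }
    };
    inline Fixed operator+(Fixed a, Fixed b) { Fixed r; r.raw = a.raw + b.raw; return r; }
    inline Fixed operator-(Fixed a, Fixed b) { Fixed r; r.raw = a.raw - b.raw; return r; }
    inline Fixed operator*(Fixed a, Fixed b) {
        Fixed r;
        r.raw = static_cast<std::int32_t>((static_cast<std::int64_t>(a.raw) * b.raw) >> 16);
        return r;
    }

    // Tell Eigen about the new scalar type (the customization point is real,
    // the cost numbers here are guesses).
    namespace Eigen {
    template<> struct NumTraits<Fixed> : GenericNumTraits<Fixed> {
        typedef Fixed Real;
        typedef Fixed NonInteger;
        typedef Fixed Nested;
        enum { IsComplex = 0, IsInteger = 0, IsSigned = 1,
               RequireInitialization = 1, ReadCost = 1, AddCost = 2, MulCost = 4 };
    };
    }

    // With enough of these pieces, Eigen::Matrix<Fixed, 3, 3> and its products
    // should work in principle; decompositions need sqrt/abs/comparisons too.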

Integer Addition on current generation CPUs (2017)

First off, this is a genuine question and not poking fun at anyone. I am trying to learn C++ after many years of not touching it, and I found a very old article (last updated July '96) while trying to remember how to implement BCD addition. Even though the article is old, seeing that the person who wrote it is a professor, I am in a WTF state after reading the first few lines. I am learning and don't want to dismiss something too easily, so please excuse my naivety.
The BCD system was chosen for the internal number system in these machines because it is easy to convert it to alphanumeric representations for printouts and displays. The compelling advantages of BCD have waned over time, and these digits are supported by more modern hardware simply to provide backward compatibility with earlier generations of machines.
Is the above statement true? If yes, can someone explain how modern CPUs perform addition if not in binary? Or is the author trying to say something else and I misunderstood? I am concerned that the author might be hinting at something at the hardware level that differs from the software abstraction, or it might be some sort of translation issue.
I don't see any purpose served by processors giving the outer appearance of being binary ("for backward compatibility") when internally they are decimal and don't need the BCD system. For reference, the sketch below shows how I understand packed-BCD addition compared with plain binary addition.
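
This is a toy illustration only (not from the article): packed BCD stores one decimal digit per nibble, and addition needs a correction step, whereas a modern CPU just adds the binary values directly.

    #include <cstdint>
    #include <cstdio>

    // Add two 2-digit packed-BCD bytes (each nibble holds a decimal digit 0-9).
    // The corrections mimic what decimal hardware / DAA-style instructions do.
    std::uint8_t bcd_add(std::uint8_t a, std::uint8_t b) {
        std::uint8_t sum = a + b;                        // plain binary add
        if ((a & 0x0F) + (b & 0x0F) > 9) sum += 0x06;    // low digit overflowed 9
        if ((sum & 0xF0) > 0x90) sum += 0x60;            // high digit overflowed 9 (carry out is dropped here)
        return sum;
    }

    int main() {
        std::uint8_t a = 0x27, b = 0x35;                 // packed BCD for 27 and 35
        std::printf("BCD:    0x%02X\n", bcd_add(a, b));  // prints 0x62, i.e. decimal 62
        std::printf("Binary: %u\n", 27u + 35u);          // a modern CPU simply does this
        return 0;
    }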

How to multiply matrices containing floating-point numbers in an FPGA?

I would like to ask a question about matrix multiplication in an HDL. For 6 months I have been learning about FPGAs and ASIC design, but I still do not have enough experience programming FPGAs in Verilog/VHDL. After a quick search I found that Verilog seems suitable for me. Anyway, please just treat me as a beginner; so far I have only followed simple tutorials using the Xilinx Spartan 3E-XCS1600E MicroBlaze Starter Kit, because that is the board I have.
The most challenging part for me was creating matrices in Verilog. If I am able to create matrices and fill them with integers first, then I can move on to the next step: matrices with floating-point numbers. Eventually I also want to take the inverse of these matrices, which seems extremely hard to me.
My question is: what should I do in order to multiply matrices? Is there any trick or easier way to do it, as there is in C? (I know Verilog is an HDL and we cannot think that way.) Also, how can I convert my floating-point numbers to a fixed-point or integer type? Then I think I can solve my problem that way. I looked through other questions but did not understand them well. Thanks for your response and help.
Bonus question: if I try these operations in MATLAB or Simulink, would it be easier to convert them to HDL using HDL Coder? If so, could you guide me on how to do that?
Regards,
Leonardo
You can create matrices with RAM in a hardware design. Actually, almost any storage can be described as RAM. :)
Verilog itself only works with integer bit vectors, but there are methods to represent and compute fractional numbers:
Define your own fractional format. Suppose we have reg [7:0] var; we can treat var[7:4] as the integer part and var[3:0] as the decimal-digit part, so 8'b0101_1001 represents 5.9 in decimal. With this convention you must limit var[3:0] to the range 0-9!
Use IEEE 754. http://grouper.ieee.org/groups/754/ This standard is widely used in many areas, but I think it will be a little difficult for you as a beginner.
Dealing with matrices is nothing special; just follow what you learned in math class. A small software reference model follows below.
I'm not good at English. Hope you can understand.
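
As a software reference model (a C++ sketch, not HDL), here is what fixed-point matrix multiplication looks like using a binary-fraction Q4.4 format (note this differs from the decimal-digit convention above; the matrices and values are made up):

    #include <array>
    #include <cstddef>
    #include <cstdint>
    #include <cstdio>

    // Q4.4 fixed point: an 8-bit value stores value * 16.
    using q44 = std::int8_t;

    q44    to_q44(double x)  { return static_cast<q44>(x * 16.0); }
    double from_q44(q44 x)   { return x / 16.0; }

    q44 q44_mul(q44 a, q44 b) {
        // Widen, multiply, then shift back down by the fraction width.
        return static_cast<q44>((static_cast<std::int16_t>(a) * b) >> 4);
    }

    template <std::size_t N>
    using Mat = std::array<std::array<q44, N>, N>;

    template <std::size_t N>
    Mat<N> matmul(const Mat<N>& A, const Mat<N>& B) {
        Mat<N> C{};                                   // start from all zeros
        for (std::size_t i = 0; i < N; ++i)
            for (std::size_t j = 0; j < N; ++j)
                for (std::size_t k = 0; k < N; ++k)
                    C[i][j] = static_cast<q44>(C[i][j] + q44_mul(A[i][k], B[k][j]));
        return C;
    }

    int main() {
        Mat<2> A{{{to_q44(1.5), to_q44(2.0)}, {to_q44(0.5), to_q44(1.0)}}};
        Mat<2> B{{{to_q44(2.0), to_q44(0.0)}, {to_q44(1.0), to_q44(1.0)}}};
        Mat<2> C = matmul(A, B);
        std::printf("%.3f %.3f\n%.3f %.3f\n",
                    from_q44(C[0][0]), from_q44(C[0][1]),
                    from_q44(C[1][0]), from_q44(C[1][1]));
        return 0;
    }

In hardware you would unroll or pipeline the same multiply-accumulate structure; the point here is only to check the arithmetic before writing Verilog.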

The reason behind endianness?

I was wondering why some architectures use little-endian byte order and others big-endian. I remember reading somewhere that it has to do with performance; however, I don't understand how endianness can influence it. Also, I know that:
The little-endian system has the property that the same value can be read from memory at different lengths without using different addresses.
This seems like a nice feature, but even so, many systems use big-endian, which probably means big-endian has some advantages too (if so, which?).
I'm sure there's more to it, most probably digging down to the hardware level. Would love to know the details.
I've looked around the net a bit for more information on this question, and there is quite a range of answers and reasoning to explain why big- or little-endian ordering may be preferable. I'll do my best to summarize here what I found:
Little-endian
The obvious advantage of little-endianness is the one you already mentioned in your question: the fact that a given number can be read at a variety of widths from the same memory address. As the Wikipedia article on the topic states:
Although this little-endian property is rarely used directly by high-level programmers, it is often employed by code optimizers as well as by assembly language programmers.
Because of this, multi-precision math routines are easier to write, because byte significance always corresponds to the memory-address offset, whereas with big-endian numbers this is not the case. This seems to be the argument for little-endianness that is quoted over and over again; because of its prevalence I have to assume that the benefit of this ordering is relatively significant.
Another interesting explanation I found concerns addition and subtraction. When adding or subtracting multi-byte numbers, the least significant byte must be fetched first to see whether there is a carry into the more significant bytes. Because the least significant byte is read first in little-endian numbers, the system can begin the calculation on this byte in parallel while fetching the following byte(s).
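
To make the "read at different widths" property concrete, here is a small sketch (memcpy avoids aliasing problems; the output shown assumes a little-endian host):

    #include <cstdint>
    #include <cstdio>
    #include <cstring>

    int main() {
        std::uint32_t value = 0x00000042;   // 66, small enough to fit in 8 bits

        std::uint8_t  as8;
        std::uint16_t as16;
        // On a little-endian machine the least significant byte sits at the
        // lowest address, so reading 1, 2, or 4 bytes from &value gives 0x42.
        std::memcpy(&as8,  &value, sizeof as8);
        std::memcpy(&as16, &value, sizeof as16);

        std::printf("8-bit: 0x%02X  16-bit: 0x%04X  32-bit: 0x%08X\n",
                    static_cast<unsigned>(as8), static_cast<unsigned>(as16), value);
        // On a big-endian machine the narrow reads would give 0x00 instead.
        return 0;
    }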
Big-endian
Going back to the Wikipedia article, the stated advantage of big-endian numbers is that the magnitude of a number can be estimated more easily because its most significant digit comes first. Related to this is that it is simple to tell whether a number is positive or negative by examining the sign bit in the byte at the lowest address, since the most significant byte comes first.
Also mentioned when discussing the benefits of big-endianness is that the bytes are stored in the same order in which most people write base-10 numbers, most significant digit first. This is said to help performance when converting from binary to decimal. A tiny sketch of the sign-check point follows below.
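
Sketch of that sign check (the big-endian byte order is built by hand here, so it behaves the same on any host):

    #include <cstdint>
    #include <cstdio>

    int main() {
        std::int32_t value = -5;
        std::uint32_t u = static_cast<std::uint32_t>(value);
        // Serialize in big-endian order: most significant byte first.
        std::uint8_t be[4] = {
            static_cast<std::uint8_t>(u >> 24),
            static_cast<std::uint8_t>(u >> 16),
            static_cast<std::uint8_t>(u >> 8),
            static_cast<std::uint8_t>(u),
        };
        // The sign bit lives in the very first byte, so a single byte read
        // is enough to answer "is this number negative?".
        std::printf("negative? %s\n", (be[0] & 0x80) ? "yes" : "no");
        return 0;
    }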
While all these arguments are interesting (at least I think so), their applicability to modern processors is another matter. In particular, the addition/subtraction argument was most valid on 8-bit systems...
For my money, little-endianness seems to make the most sense, and it is by far the most common ordering among devices in use today. I think that the reason big-endianness is still used is more a matter of legacy than of performance. Perhaps at one time the designers of a given architecture decided that big-endianness was preferable to little-endianness, and as the architecture evolved over the years the endianness stayed the same.
The parallel I draw here is with JPEG, which is a big-endian format despite the fact that virtually all the machines that consume it are little-endian. One can ask what the benefits of JPEG being big-endian are, but I would venture to say that for all intents and purposes the performance arguments mentioned above don't make a shred of difference. The fact is that JPEG was designed that way, and as long as it remains in use, that way it shall stay.
I would assume that it was the hardware designers of the first processors who decided which endianness would best integrate with the preferred/existing/planned micro-architecture of the chips they were developing from scratch.
Once established, and for compatibility reasons, the endianness was more or less carried over to later generations of hardware, which would support the 'legacy' argument for why both kinds still exist today.

Definition of built-in redundancy

Assume that we have a 3-bit ASCII representation. How can I get the built-in redundancy of that representation? I have searched the internet for days but still couldn't find anything relevant. It would also be great if someone could explain what "built-in redundancy" means.
Thank you.
As far as I know, redundancy is the difference between the number of bits used to transmit a message and the entropy of the message. For example, if every symbol is sent as 3 bits but the source only carries 1.75 bits of information per symbol, the built-in redundancy is 1.25 bits per symbol.
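
A toy calculation of that number (the symbol probabilities are made up purely for illustration):

    #include <cmath>
    #include <cstdio>
    #include <vector>

    // Redundancy of a fixed-length code = code length - source entropy.
    int main() {
        const double codeBits = 3.0;                               // every symbol sent as 3 bits
        const std::vector<double> p = {0.5, 0.25, 0.125, 0.125};   // made-up symbol probabilities

        double entropy = 0.0;                                      // H = -sum p*log2(p)
        for (double pi : p)
            entropy -= pi * std::log2(pi);

        std::printf("entropy    = %.3f bits/symbol\n", entropy);             // 1.750
        std::printf("redundancy = %.3f bits/symbol\n", codeBits - entropy);  // 1.250
        return 0;
    }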
