VHDL: Division and decimal representation

The sensor that I'm using returns a 16-bit word, and to convert it to an actual value I need to use an expression.
The expression is ((175.72 * 16b_word) / 65536) - 46.85.
Can I divide by right-shifting 16 positions?
I have searched for a couple of hours now and I still have no clue how to deal with the decimal representation! Does anyone have an example of how to solve it?

Yes, shifting a binary number 16 positions to the right is the same as dividing by 65536 (with poor rounding, however, if you drop the shifted-out digits).
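
A minimal Python sketch of the fixed-point version of that expression (the scale factor of 100 and the raw reading 0x6A8C are illustrative assumptions, not anything from the question):

    # Evaluating ((175.72 * w) / 65536) - 46.85 in pure integer math:
    # scale the two decimal constants by 100 so the result comes out in
    # hundredths of a degree, and do the /65536 as a right shift.

    def convert(word16: int) -> int:
        """Temperature in hundredths of a degree, integer math only."""
        # 17572 = 175.72 * 100, 4685 = 46.85 * 100.
        # The >> 16 is the divide by 65536; it truncates the shifted-out
        # bits, which costs at most one count (0.01 degrees) here.
        return ((17572 * word16) >> 16) - 4685

    reading = 0x6A8C                 # hypothetical raw sensor word
    print(convert(reading) / 100.0)  # ~26.28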

Related

Floating Point Algorithm when Divisor Mantissa is all zeros - say in the case of 2.0 in IEEE-754

I need to implement a floating point package for a small SBC and have most of the routines working now, but in the course of testing I have noticed that the algorithm in this one (see attachment) will not (indeed cannot) produce the correct answer when the divisor mantissa is all zeros. For example, 500/2 will produce the answer 255.0 rather than 250.0:
01000011111110100000000000000000 = 0x43FA0000 (500 base 10) and
01000000000000000000000000000000 = 0x40000000 (2 base 10)
will produce
01000011011111110000000000000000 = 0x437F0000 (255 base 10)
Is there anyone with a good knowledge of floating point arithmetic, or indeed of FP algorithms, who can help out, please?
[attachment: chart of the floating-point division algorithm]
Nothing in the chart shown says to work with the bits that are the primary encoding of the significand of a floating-point number. One should not confuse the bits that encode something with the thing itself. The actual significand of a normal IEEE-754 binary floating-point number, for example, is some binary numeral 1.xxx…xxx with a value in [1, 2) and a number of bits determined by the specific format. (Subnormal numbers use a numeral 0.xxx…xxx.) When the number is encoded in an interchange format, the xxx…xxx bits are stored in the primary significand field, and the leading 1 or 0 bit is encoded by way of the exponent field. (If the exponent field is not zero, and does not indicate an infinity or a NaN, then the leading bit is 1. Otherwise, it is 0.)
Generally, the actual significand is used for arithmetic, not just the bits of the primary significand field.
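
A small Python sketch of that distinction (the decode helper is illustrative, not from the original answer): decode the actual significand, implicit leading bit included, and the 500/2 example divides correctly:

    import struct

    def decode(x: float):
        """Split an IEEE-754 single into (sign, unbiased exponent, significand)."""
        bits = struct.unpack('>I', struct.pack('>f', x))[0]
        sign = bits >> 31
        exp_field = (bits >> 23) & 0xFF
        frac = bits & 0x7FFFFF
        if exp_field == 0:                       # subnormal: leading bit is 0
            return sign, 1 - 127, frac / 2**23
        return sign, exp_field - 127, 1 + frac / 2**23   # normal: implicit 1

    s1, e1, m1 = decode(500.0)   # m1 = 1.953125, e1 = 8
    s2, e2, m2 = decode(2.0)     # m2 = 1.0 (not 0!), e2 = 1
    print(m1 / m2 * 2.0 ** (e1 - e2))   # 250.0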

QR code generation algorithm implementation case analysis

I'm implementing a QR code generation algorithm as explained on thonky.com and I'm trying to understand one of the cases:
As stated in this page and this page, I can deduce that if the code is protected with the M error correction level, and the chosen mask is No. 0, the first 5 bits of the format string (non-XORed) are '00000', and because of this the whole 15-bit string is zeros.
The next step is to remove all leading zeros, which are, again, all of them. It means that there's nothing to XOR the generator polynomial string (10100110111) with, thus giving us a final string of 15 zeros, which means that the final (XORed) string will simply be the mask string (101010000010010).
I'm seeking confirmation that my logic is right.
Thank you all very much in advance for the help.
Your logic is correct.
remove all leading zeroes
The actual process could be described as appending 10 zero bits to the 5 bits of data and treating the 15 bits as 15 single bit coefficients of a polynomial, then dividing that polynomial by the 11 bit generator polynomial resulting in a 10 bit remainder polynomial, which is then subtracted from the 5 data bits + 10 zero bits polynomial. Since this is binary math, add and subtract are both xor operations, and since the 10 appended bits are zero bits, the process can just append the 10 remainder bits to the 5 data bits.
As commented above, rather than actually implementing a BCH encode function, since there are only 32 possible format strings, you can just do a table lookup.
https://www.thonky.com/qr-code-tutorial/format-version-tables
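
For reference, a short Python sketch of the whole computation (the bit-by-bit long division is one possible implementation; the table lookup mentioned above avoids it entirely):

    GEN  = 0b10100110111        # BCH(15,5) generator polynomial
    MASK = 0b101010000010010    # fixed format-string mask

    def format_bits(data5: int) -> int:
        """5 format data bits -> final 15-bit (XORed) format string."""
        rem = data5 << 10               # append ten zero bits
        for shift in range(14, 9, -1):  # GF(2) long division, bits 14..10
            if rem >> shift & 1:
                rem ^= GEN << (shift - 10)
        return ((data5 << 10) | rem) ^ MASK   # append remainder, apply mask

    # EC level M ('00') with mask pattern 0 ('000') -> data bits 00000,
    # so the pre-mask string is all zeros and the result is MASK itself.
    print(format(format_bits(0b00000), '015b'))   # 101010000010010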

Easiest way to do VHDL floating point division?

I'm looking for the easiest way to divide two floating point numbers using VHDL. I need the code to be synthesizable (I'll be implementing it on a Spartan-3 FPGA).
The first operand will always be a fixed number (e.g. 600), and the second one will be an integer, let's say between 0 and 99999. The fixed number is the dividend, and the integer one is the divisor. So I'll have to calculate something like 600/124, or 600 divided by any other number in that 0 to 99999 range. The second number (the one that is changing) will always be an integer (there won't be something like 123.45).
After division, I need to convert the result into an integer (round it or just ignore the digits after the decimal point, whichever is faster).
Any ideas? Thanks!
There are many ways to do this, the easiest being a ROM. You don't need floating point anywhere, since doing an integer divide and compensating for a non-zero remainder can give you the same results. I'd suggest calculating the first 600 results in MATLAB or a spreadsheet so you can see that handling values up to 99999 isn't necessary (for any divisor above 600 the quotient is already below 1).
Also, some common nomenclature for range and precision is QI.F, where I is the number of integer bits and F is the number of fractional bits. Thus 0..99999 would be Q17.0 and your output would be Q10.0.
There's an FP divide function in this VHDL file from this site.
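
As one way to generate the ROM contents, here is a hedged Python sketch (the rounding choice and the x = 0 sentinel are assumptions the question leaves open):

    DIVIDEND = 600

    # round(600/x) for x = 0..600; x = 0 gets a sentinel value.
    rom = [0] + [round(DIVIDEND / x) for x in range(1, DIVIDEND + 1)]

    def divide(x: int) -> int:
        """round(600/x) over the full 0..99999 divisor range."""
        if x <= DIVIDEND:
            return rom[x]
        # Past the table, 600/x < 1: it rounds to 1 up to x = 1199
        # and to 0 from there on, so no ROM entries are needed.
        return 1 if x < 2 * DIVIDEND else 0

    print(divide(124))   # 5, since 600/124 = 4.84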

BitXor decimal equivalents

I am trying to understand an algorithm that Gray-codes numbers in a QAM system using XOR. Can anyone explain what happens in the decimal world when you bitxor(a, b)? Is there a decimal implementation or explanation for this?
XOR works in binary and there's no direct relationship to base-10 (decimal) numbers.
However, I don't see a connection between Gray code numbers and base 10 (decimal). Is it Gray or BCD (binary-coded decimal)?
Gray codes are ways to represent integers as binary numbers so that two consecutive integers differ by one bit only. Thus when a and b are the Gray codes of consecutive integers, bitxor(a, b) has only one bit set.
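
A quick Python illustration of that last point (gray() here is the standard binary-reflected construction):

    def gray(n: int) -> int:
        """Binary-reflected Gray code of n."""
        return n ^ (n >> 1)

    # XORing the Gray codes of consecutive integers always leaves
    # exactly one bit set.
    for b in range(8):
        diff = gray(b) ^ gray(b + 1)
        print(f"{gray(b):04b} ^ {gray(b + 1):04b} = {diff:04b}")
        assert bin(diff).count('1') == 1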

Why does Visual Studio 2008 tell me .9 - .8999999999999995 = 0.00000000000000055511151231257827?

When I type this into the Visual Studio 2008 immediate window:
? .9 - .8999999999999995
It gives me this as the answer:
0.00000000000000055511151231257827
The documentation says that a double has 15-16 digits of precision, but it's giving me a result with 32 digits of precision. Where is all that extra precision coming from?
There are only 15-16 digits in the answer. All those leading zeroes don't count. The number is actually more like 5.5511151231257827 × 10^-16. The mantissa portion has 15-16 digits in it. The exponent (-16) serves to shift the decimal point over by 16 places, but doesn't change the number of digits in the overall number.
Edit
After getting some comments, I'm curious now about what's really going on. I plugged the number in question into this IEEE-754 Converter. It took the liberty of rounding the last "27" into "30", but I don't think that changes the results.
The converter breaks down the number into its three binary parts:
Sign: 0 (positive)
Exponent: -51
Significand: 1.0100000000000000000000000000000000000000000000000000 (binary for 1.25₁₀)
So this number is 1.01₂ × 2^-51, or 1.25₁₀ × 2^-51. Since there are only three significant binary digits being stored, that would suggest that Lars may be onto something. They can't be "random noise" since they are the same each time the number is converted.
The data suggests that the only stored digit is "5". The leading zeros come from the exponent, and the rest of the seemingly random digits are from computing 2^-51.
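
That decomposition is easy to reproduce; a small Python sketch (Python floats are the same IEEE-754 binary64 doubles the immediate window uses; float.hex() exposes the stored significand and exponent):

    x = 0.9 - 0.8999999999999995
    print(x)                       # 5.551115123125783e-16

    # hex() shows the stored value exactly: 0x1.4p-51, i.e. 1.25 * 2**-51.
    print(x.hex())                 # 0x1.4000000000000p-51
    print(x == 1.25 * 2.0 ** -51)  # True: every other printed digit is
                                   # just the decimal expansion of this value
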
You should read What Every Computer Scientist Should Know About Floating-Point Arithmetic.
Basically it comes down to floating point numbers being stored with finite precision. You have to do your comparison with some delta:
    if (fabs(.9 - .8999999999999995) <= 0.0001)
        /* close enough to be equal */
The leading zeros are not significant/part of the precision (as far as the floating point number is concerned -- mathematically speaking, they are significant). The leading zeros are due to the exponent part of the floating point number's internal representation.
The portion 55511151231257827 (which is the significand or mantissa) has 17 decimal digits, which is close enough to 15-16 digits.
@Lars D: What you consider to be correct is only correct within the context of the question. .9 - .8999999999999995 works out to a float with significand 0.625 and an exponent of -50. Taking 0.625 × 2^-50 results in 5.5511151231257827e-16. Now, out of the context of the original question, we have a number with 17 significant digits which does happen to be our best binary approximation of 0.0000000000000005. However, those leading zeros are still not significant as far as the representation of the floating point number is concerned.
? .9 - .8999999999999995
This subtraction process, with 15-16 significant digits, gives
0.0000000000000005
The rest of the digits are just rounding errors. However, since the computer always stores 15-16 significant digits after the first non-zero digit, the rounding errors are shown, and you get a lot of trailing random digits produced by rounding errors. So the result has 16 significant digits from the subtraction operation plus 16 digits from the storage of the result, which gives 32 digits.
The "floating" part of "floating point" means that you are getting something closer to 5.5511151231257827 * 10^(-16). That's not exactly how it's represented, because of course it's all done in binary under the hood, but the point is, the number is represented by the significant digits, plus a number which represents how far to move the radix (decimal point). As always, wikipedia can give you more detail:
http://en.wikipedia.org/wiki/Floating_point
http://en.wikipedia.org/wiki/Double_precision
(The second link is more specifically focused on your particular case.)
I think it's because, in the binary system, 5 is periodic, as it is not divisible by 2. And then what Mark Rushakoff said applies.