Endian conversion of signed ints

I am receiving big-endian data over UDP and converting it to little-endian. The source says the integers are signed, but when I swap the bytes of the signed ints (specifically 16-bit) I get unrealistic values. When I swap them as unsigned ints I get what I expect. I suppose the source documentation could be incorrect and is actually sending unsigned 16-bit ints. But why would that matter? The values are all supposed to be positive and well under the 16-bit signed maximum, so overflow should not be an issue. The only thing I can think of is that (1) the documentation is wrong AND (2) I am not handling the sign bit properly when I perform a signed endian swap.
I really have two questions:
1) When overflow is not an issue, does it matter whether I read into signed or unsigned ints?
2) Is endian swapping different between signed and unsigned values (i.e. does the sign bit need to be handled differently)?
I thought endian conversion looked the same for both signed and unsigned values, e.g. for a 16-bit value: value & 0xff00 >> 8 | value & 0x00ff << 8.
Thanks

You are running into a problem with sign extension in your swap function. Instead of doing this:
value & 0xff00 >> 8 | value & 0x00ff << 8
do this:
((value >> 8) & 0x00ff) | ((value & 0x00ff) << 8)
The issue is sign extension: if value is a 16-bit signed value, then 0xabcd >> 8 comes out as 0xffab. In a signed right shift, the most significant bit stays 1 if it starts out as 1, because the sign bit is copied into the vacated high bits (strictly, this is implementation-defined in C, but an arithmetic shift is the common behavior). Shifting on an unsigned value and masking after the shift avoids this.
Finally, instead of writing this function yourself, you should use ntohs(), which converts a 16-bit value from network (big-endian) to host byte order.
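For illustration, here is a minimal C sketch of that approach (the helper name swap16 is made up; ntohs() comes from <arpa/inet.h> on POSIX systems, winsock2.h on Windows):

#include <stdint.h>
#include <arpa/inet.h>   /* ntohs() */

/* Swap on an unsigned type so the right shift cannot sign-extend. */
static inline uint16_t swap16(uint16_t value)
{
    return (uint16_t)(((value >> 8) & 0x00ff) | ((value & 0x00ff) << 8));
}

/* Reading a signed 16-bit field received in network (big-endian) order: */
/* int16_t host_value = (int16_t)ntohs(net_value); */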

Related

Why do some arithmetic instructions have a signed/unsigned variant and some don't

Assume we have:
a = 0b11111001;
b = 0b11110011;
If we do the addition and multiplication on paper by hand, we get these results; we don't care whether the values are signed or not:
a + b = 111101100
a * b = 1110110001011011
I know that multiplication doubles the width and that addition could overflow:
Why is imul used for multiplying unsigned numbers?
Why do some CPUs have different instructions to do signed and unsigned operations?
My question is: why don't instructions like Add usually have signed/unsigned versions, while Multiply and Divide do?
Why can't we have a generic unsigned multiply, do the math like I did above, and truncate the result if it's signed, the same way Add does?
Or, the other way around, why can't Add have signed/unsigned versions? I have checked a few architectures and this seems to be the case.
I think your choice of example misled you into thinking the signed product could be obtained by truncating the 8x8 => 16-bit unsigned product down to 8 bits. That is not the case.
Interpreted as signed 8-bit values, a and b are -7 and -13, so (249-256) * (243-256) = 91 = 0x005b, a small positive result that happens to fit in the low half of the full result. But the full signed result is not, in general, just the unsigned product truncated to the operand size.
For example, -128 * 127 is -16256, or as 16-bit 2's complement, 0xc080.
But unsigned 0x80 * 0x7f (128 * 127) is +16256, i.e. 0x3f80. Same low half, different upper half.
Or for another example, see Why are signed and unsigned multiplication different instructions on x86(-64)?
Widening signed multiply doesn't involve any truncation. The low half of signed and unsigned multiply is the same; that's why x86, for example, only has immediate and 2-operand forms of imul, not also mul. Only widening multiply needs a separate form. (Or when you want FLAGS set according to unsigned overflow of the low half instead of signed overflow - so you can't easily use non-widening imul if you want to detect when the full unsigned result doesn't fit.)
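A small C program (written here just to make the numbers above concrete, not taken from the question) shows the 8x8 => 16-bit case: the low bytes of the signed and unsigned widening products agree, the upper bytes do not.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* -128 * 127: the signed widening product */
    int16_t  signed_prod   = (int16_t)((int8_t)-128 * (int8_t)127);
    /* 0x80 * 0x7f = 128 * 127: the unsigned widening product */
    uint16_t unsigned_prod = (uint16_t)((uint8_t)0x80 * (uint8_t)0x7f);

    printf("signed:   0x%04x\n", (unsigned)(uint16_t)signed_prod);  /* 0xc080 */
    printf("unsigned: 0x%04x\n", (unsigned)unsigned_prod);          /* 0x3f80 */
    printf("low bytes equal: %d\n",
           (signed_prod & 0xff) == (unsigned_prod & 0xff));         /* 1 */
    return 0;
}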

Bit Shift Operator '<<' creates Extra 0xffff?

I am currently stuck on a simple bit-shifting problem. When I assign any value to a short variable and shift it with << 8, I get an extra 0xffff (2 extra bytes) when I store the result back into the short variable. However, for long it is fine. Why would this happen?
I mean, short isn't supposed to hold more than 2 bytes, but my short values clearly end up with an extra 2 bytes of 0xffff in front.
I'm seeking your wisdom.. :)
To describe the problem more concretely: when the sign bit (bit 15) of the short is set to 1 after the shift, the upper 2 bytes turn into 0xffff. 127 (0x7f) passes the test, but 0x81 does not, because when it is shifted its upper bits set bit 15 (the sign bit) to 1. And because 257 (0x101) doesn't set bit 15 after shifting, it turns out to be OK.
There are several problems with your code.
First, you are doing bit-shift operations on signed variables, which may have unexpected results. Use unsigned short instead of short for bit shifting, unless you are sure of what you are doing.
You are explicitly casting a short to unsigned short and then storing the result back into a variable of type short. It's not clear what you expect this to do; it is pointless and prevents nothing.
The issue is related to that. 129 << 8 is 33024, a value too big to fit in a signed short. You are accidentally setting the sign bit, causing the number to become negative. You would see that if you printed it with %d instead of %x.
Because short is implicitly promoted to int when passed as a parameter to printf(), you see the 32-bit version of this negative number, which has its 16 most significant bits set accordingly. This is where the leading ffff comes from.
You don't have this problem with long because, even though it's signed, long is still large enough to store 33024 without touching the sign bit.
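Putting those points together, a short C demonstration (assuming the usual 16-bit short and 32-bit int, and the same %x formatting as in the question; the exact stored value for the out-of-range assignment is implementation-defined):

#include <stdio.h>

int main(void)
{
    short s = 129 << 8;            /* 33024 doesn't fit in a signed short;    */
                                   /* the stored value is typically -32512    */
    unsigned short u = 129u << 8;  /* 33024 fits fine in an unsigned short    */

    printf("%x\n", s);   /* typically prints ffff8100: s is negative and is  */
                         /* promoted to a 32-bit int before printing         */
    printf("%d\n", s);   /* typically prints -32512                          */
    printf("%x\n", u);   /* prints 8100                                      */
    return 0;
}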

How is this bitshifting working in this example?

I was going through the Go tutorial on golang.org and I came across an example that I only partially understand...
MaxInt uint64 = 1<<64 - 1
Now I understand this to be shifting the bit 64 places to the left, which would make it a 1 followed by 64 0's.
My question is, why is this the max integer that can be achieved in a 64-bit number? Wouldn't the max integer be 111111111... (out to the 64th 1) instead of 100000... (out to the 64th place)?
What happens here, step by step:
Take 1.
Shift it to the left by 64 bits. This is the tricky part: the result actually needs 65 bits for representation - namely a 1 followed by 64 zeroes. Since we are calculating a 64-bit value here, why does this even compile instead of overflowing to 0 or 1 or producing a compile error?
It works because the arithmetic used to calculate constants in Go is a bit magic (https://blog.golang.org/constants) in that it has nothing whatsoever to do with the type of the named constant being calculated. You can say foo uint8 = 1<<415 / 1<<414 and foo is now 2.
Subtract 1. This brings us back into 64-bit numbers, as the result is actually 11....1 (64 ones), which is indeed the maximum value of uint64. Without this step, the compiler would complain about us trying to cram a 65-bit value into a uint64.
Name the constant MaxInt and give it type uint64. Success!
The magic arithmetic used to calculate constants still has limitations (obviously). Shifts greater than 500 or so produce amusingly named "stupid shift" errors.
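C has no arbitrary-precision constant arithmetic, so the shift-then-subtract step can only be illustrated at a smaller width; this sketch is an analogue of the reasoning above, not the Go mechanism itself:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* 1 << 8 is a 1 followed by eight zeros (0x100, needing 9 bits);      */
    /* subtracting 1 turns it into eight ones (0xff), the maximum value of */
    /* uint8_t. The Go expression does the same thing at 64 bits.          */
    uint8_t max8 = (1 << 8) - 1;   /* computed as int, then stored */

    printf("%u\n", max8);                              /* 255 */
    printf("%llu\n", (unsigned long long)UINT64_MAX);  /* 18446744073709551615 */
    return 0;
}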

Bit shifting in Ruby

I'm currently converting a Visual Basic application to Ruby because we're moving it to the web. However, when converting some algorithms I've run into a problem concerning bit shifting.
As I understand it, the problem lies in the size mask VB enforces on Integer types (as explained here). Ruby, in practice, doesn't limit integers to a fixed size.
So the problem:
Visual Basic
Dim i As Integer = 182
WriteLine(i << 24) '-1241513984
Ruby
puts 182 << 24 # 3053453312
I've been Googling and reading up on bit shifting for the last few hours but haven't found a way, or even a direction, to tackle this problem.
You need to replicate what Visual Basic is doing, namely:
mask the shift value as documented
mask the result with 0xFFFFFFFF (since Ruby will have promoted the value to a bignum for you)
if the topmost bit is set, subtract 2^32 from the result (since signed integers are stored in 2's complement)
For example
def shift_32 x, shift_amount
  shift_amount &= 0x1F
  x <<= shift_amount
  x &= 0xFFFFFFFF
  if (x & (1 << 31)).zero?
    x
  else
    x - 2**32
  end
end
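For example, shift_32(182, 24) returns -1241513984, matching the Visual Basic output above.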

How can you deal with BOTH signed and unsigned numbers in VHDL?

I'm writing a program that needs to work for signed AND unsigned numbers. You take a 32-bit input; the first 24 bits are a whole number and the last 8 bits are a fraction. Depending on what the fraction is, you round up or down. Pretty simple, but how would you write a program that will work whether the input is signed OR unsigned? Do you just write two separate code blocks that execute depending on whether the number is unsigned or not?
Your program would need to be aware of the source of the data, and from that information derive whether or not the number is signed. Otherwise, how is your program to know whether a vector of bits is (un)signed? Signedness is a convention for humans to use to structure data. The hardware you implement just sees a vector of bits.
A 32-bit unsigned number with 8 fraction bits can represent numbers in the range 0 to ((2^32)-1)/256.
A 32-bit signed number with 8 fraction bits can represent numbers in the range -(2^31)/256 to ((2^31)-1)/256.
So, how about converting your 32-bit input (signed or unsigned) to a 33-bit signed number, which can represent numbers in the range -(2^32)/256 to ((2^32)-1)/256 and so covers your whole range of inputs.
(You have not given any code. In addition to your 32-bit input, there must be some other input to signal whether those 32 bits represent an unsigned or a signed number. You'll need to test that input and do the appropriate conversion based on its state.)
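As a rough sketch of that idea in C rather than VHDL (the function name and the is_signed flag are invented for illustration), the widening step looks like this:

#include <stdint.h>

/* Widen a 32-bit input (24.8 fixed point) to a wider signed type.       */
/* The extra is_signed input says how the raw bits are to be read: if    */
/* the value is signed and its sign bit is set, subtract 2^32 to recover */
/* the two's-complement value; otherwise the bits are the value as-is.   */
int64_t widen_to_signed(uint32_t raw_bits, int is_signed)
{
    if (is_signed && (raw_bits & 0x80000000u))
        return (int64_t)raw_bits - ((int64_t)1 << 32);
    return (int64_t)raw_bits;
}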
