How do you convert little-endian to big-endian with bitwise operations?

I get that you'd want to do something like this: take the first four bits, put them on a stack (reading from left to right), then put them in a register and shift them x times to move them to the right part of the number?
Something like
1000 0000 | 0000 0000 | 0000 0000 | 0000 1011
Stack: bottom - 1101 - top
Shift it 28 times to the left.
Then do something similar with the last four bits, but shift to the right and store in a register.
Then OR all of that into a return value that starts as 0.
Is there an easier way?

Yes, there is. Check out the _byteswap functions/intrinsics and/or the bswap instruction.
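For reference, here is a minimal shift-and-mask sketch of the idea, done a byte at a time rather than a nibble at a time (endianness is a byte-order property); modern compilers typically recognize this pattern and emit a single bswap instruction:

#include <stdint.h>
#include <stdio.h>

/* Swap the byte order of a 32-bit value using only shifts and masks. */
static uint32_t swap32(uint32_t v) {
    return ((v & 0x000000FFu) << 24) |   /* lowest byte to the top */
           ((v & 0x0000FF00u) <<  8) |
           ((v & 0x00FF0000u) >>  8) |
           ((v & 0xFF000000u) >> 24);    /* highest byte to the bottom */
}

int main(void) {
    printf("%08x\n", swap32(0x8000000Bu)); /* prints 0b000080 */
    return 0;
}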

You could do it this way. For example, with input 0010 1000 and desired output 1000 0010 (swapping the two nibbles of a byte):

#include <stdio.h>

int main(void) {
    unsigned char x = 0x28;            /* input: 0010 1000 */
    unsigned char i = x >> 4;          /* high nibble down: 0000 0010 */
    unsigned char j = (x << 4) & 0xF0; /* low nibble up:    1000 0000 */
    unsigned char k = i | j;           /* combined:         1000 0010 */
    printf("%02x\n", k);               /* prints 82 */
    return 0;
}

Related

Why does the arithmetic expression result get truncated?

If I run $((0x100 - 0 & 0xff)), I get 0.
However, $((0x100 - 0)) gives me 256.
Why does the result of the first expression get truncated?
Because & is a bitwise operator, and there are no matching bits in 0x100 and 0xff.
What that means is that it compares the bits that make up your numbers, and you get a 1 back in each position where both inputs have a 1.
So if you do $((0x06 & 0x03))
In binary you end up with
6 = 0110
3 = 0011
So when you bitwise AND those together, you'll get
0010 (binary) or 0x02
For the numbers you have, there are no bits in common:
0x100 in binary is
0000 0001 0000 0000
0xff in binary is
0000 0000 1111 1111
If you bitwise and them together, there are no matching bits, so you'll end up with
0000 0000 0000 0000
Interestingly, it does the subtraction before the bitwise AND (I expected it the other way around). That is because shell arithmetic follows C operator precedence, where - binds more tightly than &:
$((0x100 - 1 & 0xff)) gives 255 or 0xff, because 0x100 - 1 = 0xff.
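You can check the same precedence in C (a minimal sketch; shell arithmetic expansion follows the same C-style rules):

#include <stdio.h>

int main(void) {
    printf("%d\n", 0x100 - 0 & 0xff);   /* 0:   parsed as (0x100 - 0) & 0xff */
    printf("%d\n", 0x100 - (0 & 0xff)); /* 256: forcing the AND to run first */
    return 0;
}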

Why does the CRC of "1" yield the generator polynomial itself?

While testing a CRC implementation, I noticed that the CRC of 0x01 usually (?) seems to be the polynomial itself. When trying to manually do the binary long division however, I keep ending up losing the leading "1" of the polynomial, e.g. with a message of "0x01" and the polynomial "0x1021", I would get
      1 0000 0000 0000    (zero-padded value)
XOR   1 0000 0010 0001
      ----------------
      0 0000 0010 0001  = 0x0021
But any sample implementation (I'm dealing with XMODEM-CRC here) results in 0x1021 for the given input.
Looking at https://en.wikipedia.org/wiki/Computation_of_cyclic_redundancy_checks, I can see how XORing in the generator polynomial at the moment the upper bit leaves the shift register will cause this result. What I don't get is why this step is performed in that manner at all, seeing as it clearly alters the result of a true polynomial division.
I just read http://www.ross.net/crc/download/crc_v3.txt and noticed that in section 9, there is mention of an implicitly prepended 1 to enforce the desired polynomial width.
In my example case, this means that the actual polynomial used as divisor would not be 0x1021, but 0x11021. This results in the leading "1" being dropped, and the remainder being the "intended" 16-bit polynomial:
      1 0000 0000 0000 0000    (zero-padded value)
XOR   1 0001 0000 0010 0001
      ---------------------
      0 0001 0000 0010 0001  = 0x1021
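This is easy to confirm in code. Here is a minimal bit-by-bit CRC-16/XMODEM sketch (polynomial 0x1021, initial value 0, MSB first; the function name is just illustrative). The XOR with 0x1021 happens exactly when the top bit is shifted out of the register, which is where the implicit x^16 term of 0x11021 drops away:

#include <stdint.h>
#include <stdio.h>

static uint16_t crc16_xmodem(const uint8_t *data, size_t len) {
    uint16_t crc = 0;
    for (size_t i = 0; i < len; i++) {
        crc ^= (uint16_t)(data[i] << 8);       /* feed next byte, MSB first */
        for (int bit = 0; bit < 8; bit++) {
            if (crc & 0x8000)                  /* top bit about to leave? */
                crc = (uint16_t)((crc << 1) ^ 0x1021);
            else
                crc <<= 1;
        }
    }
    return crc;
}

int main(void) {
    uint8_t msg = 0x01;
    printf("%04x\n", crc16_xmodem(&msg, 1));   /* prints 1021 */
    return 0;
}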

How to bitmask a number (in hex) using the AND operator?

I know that you can clear bits by ANDing a value with 0. However, how can I mask certain nibbles while maintaining others? In other words, if I have 0x000f0b7c and I want to mask out everything but the b (so my result would be 0x00000b00), how would I use AND to do this? Would it require multiple steps?
You can better understand boolean operations if you represent values in binary form.
The AND operation between two binary digits returns 1 if both the binary digits have a value of 1, otherwise it returns 0.
Given two binary digits a and b, you can build the following "truth table":
 a | b | a AND b
---+---+---------
 0 | 0 |    0
 1 | 0 |    0
 0 | 1 |    0
 1 | 1 |    1
The masking operation consists of ANDing a given value with a "mask" where every bit that needs to be preserved is set to 1, while every bit to discard is set to 0.
This is done by ANDing each bit of the given value with the corresponding bit of the mask.
The given value, 0xf0b7c, can be converted as follows:
   f    0    b    7    c   (hex)
1111 0000 1011 0111 1100   (bin)
If you want to preserve only the bits corresponding to the "b" value (bits 8..11), you can mask it this way:
   f    0    b    7    c
1111 0000 1011 0111 1100   (value)
0000 0000 1111 0000 0000   (mask)
------------------------
0000 0000 1011 0000 0000   (value AND mask)
The mask 0000 0000 1111 0000 0000 converted to hex has a value of 0xf00.
So if you calculate "0xf0b7c AND 0xf00" you obtain 0xb00.
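As a quick C sketch of the same operation (variable names are just for illustration):

#include <stdio.h>

int main(void) {
    unsigned int value = 0x000f0b7c;
    unsigned int mask  = 0x00000f00;  /* 1s mark the bits to keep */
    printf("%#010x\n", value & mask); /* prints 0x00000b00 */
    return 0;
}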

CRC polynomial calculation

I am trying to understand this document, but can't seem to get it right. http://www.ross.net/crc/download/crc_v3.txt
What's the algorithm used to calculate it?
I thought it used XOR, but I don't quite get how he gets 0110 from 1100 XOR 1001. It should be 101 (or 0101, or 1010 if a bit is brought down). If I can get this, I think the rest will come easily, but for some reason I just don't get it.
9 = 1001 ) 0000011000010111 = 0617 = 1559 = DIVIDEND
  DIVISOR  0000
           ----
            0000
            0000
            ----
             0001
             0000
             ----
              0011
              0000
              ----
               0110
               0000
               ----
                1100
                1001
                ====
                 0110
                 0000
                 ----
                  1100
                  1001
                  ====
                   0111
                   0000
                   ----
                    1110
                    1001
                    ====
                     1011
                     1001
                     ====
                      0101
                      0000
                      ----
                       1011
                       1001
                       ====
                       0010 = 02 = 2 = REMAINDER
The part you quoted is just standard long division like you learned in elementary school, except that it is done on binary numbers. At each step you perform a subtraction to get the remainder. In the example you gave, 1100 - 1001 = 0011; dropping the leading zero and bringing down the next dividend digit (0) gives the 0110 shown on the next line.
Note that the article just uses this as a preliminary example, and it is not actually what is done in calculating CRC. Instead of normal numbers, CRC uses division of polynomials over the field GF(2). This can be modeled by using normal binary numbers and doing long division normally, except for using XOR instead of subtraction.
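Here is a minimal C sketch of that modeling: the same bit-at-a-time division as above, but with XOR as the borrowless GF(2) "subtraction", run on the example dividend 0000011000010111 (0x0617) and divisor 1001:

#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint16_t dividend = 0x0617;  /* 0000 0110 0001 0111 */
    uint16_t rem = 0;
    for (int i = 15; i >= 0; i--) {
        /* bring down the next dividend bit */
        rem = (uint16_t)((rem << 1) | ((dividend >> i) & 1));
        if (rem & 0x8)           /* top bit of the 4-bit window set? */
            rem ^= 0x9;          /* "subtract" 1001 by XOR */
    }
    printf("%x\n", rem);         /* prints 6: the GF(2) remainder differs
                                    from the arithmetic remainder, 2 */
    return 0;
}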
The link you provided says:
we'll do the division using good-'ol long division which you
learnt in school (remember?)
You just repeatedly subtract, but since it is binary, there are only two options: the divisor fits into the current window either once or zero times. I annotated the steps:
0000011000010111
0000
1001 x 0
----
 0000
 1001 x 0
 ----
  0001
  1001 x 0
  ----
   0011
   1001 x 0
   ----
    0110
    1001 x 0
    ----
     1100
     1001 x 1
     ----
      0110
      1001 x 0
      ----
       1100
       1001 x 1
       ----
        0111
        and so on
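The annotated steps can be mirrored in a short C sketch of ordinary restoring long division (real subtraction here, not XOR), which reproduces the REMAINDER from the listing above:

#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint16_t dividend = 0x0617;  /* 0000 0110 0001 0111 = 1559 */
    uint16_t divisor  = 0x9;     /* 1001 */
    uint16_t rem = 0;
    for (int i = 15; i >= 0; i--) {
        /* bring down the next dividend bit */
        rem = (uint16_t)((rem << 1) | ((dividend >> i) & 1));
        if (rem >= divisor)      /* divisor fits once into the window */
            rem -= divisor;      /* ordinary subtraction, with borrow */
    }
    printf("%u\n", rem);         /* prints 2, matching the REMAINDER */
    return 0;
}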

Algorithm for bitwise fiddling

If I have a 32-bit binary number and I want to replace its lower 16 bits with a 16-bit number that I have, keeping the upper 16 bits to produce a new binary number, how can I do this using simple bitwise operators?
For example the 32-bit binary number is:
1010 0000 1011 1111 0100 1000 1010 1001
and the lower 16-bit I have is:
0000 0000 0000 0001
so the result is:
1010 0000 1011 1111 0000 0000 0000 0001
how can I do this?
You do this in two steps:
Mask out the bits that you want to replace (AND it with 0s)
Fill in the replacements (OR it with the new bits)
So in your case,
i32 number;
i32 mask_upper_16 = 0xFFFF0000; // keeps the upper 16 bits, clears the lower 16
i16 newValue;
number = (number AND mask_upper_16) OR newValue;
In an actual programming-language implementation, you may also need to address the issue of sign extension on the 16-bit value. In Java, for example, you have to mask off the upper 16 bits of the short like this:
short v = (short) 0xF00D;
int number = 0x12345678;
number = (number & 0xFFFF0000) | (v & 0x0000FFFF);
System.out.println(Integer.toHexString(number)); // "1234f00d"
(original32BitNumber & 0xFFFF0000) | 16bitNumber
Well, I could tell you the answer. But perhaps this is homework. So I won't.
Consider that you have a few options:
| // bitwise OR
^ // bitwise XOR
& // bitwise AND
Maybe draw up a little table and decide which one will give you the right result (when you operate on the right section of your larger binary number).
Use & to mask off the low bits, then | to merge the 16-bit value into the 32-bit value:
unsigned int a = 0xa0bf68a9;
unsigned short b = 1;
unsigned int result = (a & 0xFFFF0000) | b;  // 0xa0bf0001
