Using the classic code snippet:
if (x & (x-1)) == 0
If the answer is 1, then it is false and not a power of 2. However, working on 5 (not a power of 2) and 4 results in:
0001 1111
0001 1111
0000 1111
That's 4 1s.
Working on 8 and 7:
1111 1111
0111 1111
0111 1111
The 0 is first, but we have 4.
In this link (http://www.exploringbinary.com/ten-ways-to-check-if-an-integer-is-a-power-of-two-in-c/) for both cases, the answer starts with 0 and there is a variable number of 0s/1s. How does this answer whether the number is a power of 2?
You need to refresh yourself on how binary works. 5 is not represented as 0001 1111 (5 bits on); it's represented as 0000 0101 (2^2 + 2^0), and 4 is likewise not 0000 1111 (4 bits on) but rather 0000 0100 (2^2). The numbers you wrote are actually in unary.
Wikipedia, as usual, has a pretty thorough overview.
Any power of two can be represented in binary with a single 1 and the rest 0s.
e.g.
10000(16)
1000(8)
100(4)
If you subtract 1 from any power of two number, you will get all 1s to the right of where the original one was.
10000(16) - 1 = 01111(15)
ANDing these two numbers will give you 0 every time.
In the case of a non-power of two number, subtracting one will leave at least one "1" unchanged somewhere in the number like:
10010(18) - 1 = 10001(17)
ANDing these two will result in
10000(16) != 0
Keep in mind that if x is a power of 2, there is exactly 1 bit set. Subtract 1, and you know two things: the resulting value is not a power of two, and the bit that was set is no longer set. So, when you do a bitwise and &, every bit that was set in x is now unset, and all the bits in (x-1) that are set must be matched against bits not set in x. So the and of each bit is always 0.
In other words, when x is a power of two, no bit position is set in both x and (x-1), so you are guaranteed that (x&(x-1)) is zero.
((n & (n-1)) == 0)
It checks whether the value of “n” is a power of 2.
Example:
if n = 8, the bit representation is 1000
n & (n-1) = (1000) & ( 0111) = (0000)
So it returns zero only if n is a power of 2.
The only exception to this is ‘0’:
0 & (0-1) = 0, but ‘0’ is not a power of two.
Why does this make sense?
Imagine what happens when you subtract 1 from a string of bits. You read from right to left,
turning each 0 to a 1 until you hit a 1, at which point that bit is flipped:
1000100100 -> (subtract 1) -> 1000100011
Thus, every bit, up through the first 1, is flipped. If there’s exactly one 1 in the number, then every bit (other than the leading zeros) will be flipped. Thus, n & (n-1) == 0 if there’s exactly one 1. If there’s exactly one 1, then it must be a power of two.
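A minimal Python sketch of the complete check, including the guard for 0 discussed above (the helper name is my own):

def is_power_of_two(n):
    # A power of two has exactly one bit set, so n & (n - 1) clears it to 0.
    # The n > 0 guard excludes 0 (and negatives), which would otherwise pass.
    return n > 0 and (n & (n - 1)) == 0

print([k for k in range(1, 20) if is_power_of_two(k)])  # [1, 2, 4, 8, 16]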
For example, if I have a number 0101 1111 and I want to shift every 4 bit long section to the left to get 1010 1110. While I could just modulo off each section to get two 4-bit numbers, is there an algorithm that doesn't need to do this?
A naive approach
A first naive approach is to slice out the 4-bit groups and process them individually. For the first group of 4 bits, the expected result is obtained with the following:
(((x & 0xf) // take only 4 bits
<< 1) // shift them by 1
& 0xf) // get rid of potential overflow
For the (n+1)-th group of 4 bits, it's:
(((x & (0xf<<(n*4)))
<< 1)
& (0xf<<(n*4)))
Since this is designed so that there is no overlap outside the 4 bits being processed, you can iterate over the groups and binary-OR the partial results, as sketched below.
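For instance, a minimal Python sketch of this group-by-group loop (the function name and the 8-bit width are my own assumptions):

def shift_nibbles_left_naive(x, width_bits=8):
    result = 0
    for n in range(width_bits // 4):
        group_mask = 0xF << (n * 4)
        # shift the isolated group by 1, then drop any bit that overflowed out of it
        result |= ((x & group_mask) << 1) & group_mask
    return result

print(bin(shift_nibbles_left_naive(0b0101_1111)))  # 0b10101110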
A less naive approach
Another approach is to simply shift the full x by 1, causing every 4 bit group to be shifted at once:
0101 1111 -> 1011 1110
We can then easily get rid of the overflow, and at the same time make sure that 0's are injected on the left, by clearing every 4th bit in the result of the shift:
1011 1110
& 1110 1110
---------
1010 1110
1110 is e in hexadecimal, so you need to build a mask with as many 0xe digits as there are 4-bit segments: 0xee if it's just 8 bits, 0xeeeeeeeeeeeeeeee if it's 64 bits. This answer was suggested in the comments; here is the explanation.
Be careful if your underlying data type is signed, because of the sign bit. Do this processing on unsigned integers to avoid any surprises.
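A minimal Python sketch of this mask-based variant (the function name is my own; Python's arbitrary-precision integers sidestep the sign-bit caution above):

def shift_nibbles_left(x, width_bits=8):
    # Build a mask with one 0xE per 4-bit group, e.g. 0xEE for 8 bits.
    mask = int("e" * (width_bits // 4), 16)
    # Shift everything by 1, then clear the bit that leaked into each
    # neighbouring group and force a 0 into each group's lowest bit.
    return (x << 1) & mask

print(bin(shift_nibbles_left(0b0101_1111)))  # 0b10101110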
Here is one way.
int bits = 0b1111_0001_0011_0111;
int result = 0;
int m = 0b1111;
while (m != 0) {
    result |= ((bits & m) << 1) & m;
    m <<= 4;
}
System.out.printf("%-7s = %s%n","src", Integer.toBinaryString(bits));
System.out.printf("%-7s = %s%n","result", Integer.toBinaryString(result));
Prints
src = 1111000100110111
result = 1110001001101110
Here is the Python code snippet:
1 & -1 # 1
2 & -2 # 2
3 & -3 # 1
...
It seems n & -n always returns the rightmost (lowest) set bit; I don't really know why. Can someone help me understand this?
It's due to the way that negative numbers are represented in binary, which is called two's complement representation.
To create the two's complement of some number n (in other words, to create the representation of -n):
Invert all the bits
Add 1
So in other words, when you write 1 & -1 it really means 1 & ((~1)+1). The initial ~1 gives the value 11111110 and adding one gives 11111111. (Let's stick with 8 bits for these examples.) ANDing that value with 1 gives just 1.
In the next case, 2 & -2 means 2 & ((~2)+1). Inverting 2 gives 11111101 and adding one gives 11111110. Then AND with 2 (10 in binary) gives 2.
Finally 3 & -3 means 3 & ((~3)+1). Invert 3 gives 11111100, add 1 gives 11111101, and AND with 3 (11 binary) gives 1.
~x = -1 -x
so
-x = ~x + 1
When you take the complement of x (~x), all the 0 bits turn to 1 and all the 1 bits turn to 0, e.g. 101100 -> 010011.
When you add 1, the consecutive 1s on the right change to 0 and the first 0 bit gets set to 1: 010011 -> 010100
If you & that with the original, the 0 bits at the top that changed to 1 come out 0. The 1 bits at the bottom you flipped to 0 by adding come out 0. Only the rightmost 1 bit, which turned into the rightmost 0 bit in the complement and got reset to 1 by the addition, is 1 on both sides: 101100 & 010100 -> 000100
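As a quick illustration, a tiny Python sketch (the helper name is mine) confirming that n & -n isolates the lowest set bit:

def lowest_set_bit(n):
    # In two's complement, -n == ~n + 1, so only the rightmost 1 survives the AND.
    return n & -n

for n in (1, 2, 3, 12, 0b101100):
    print(n, bin(lowest_set_bit(n)))   # e.g. 12 -> 0b100, 44 (0b101100) -> 0b100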
Integers are stored in memory in binary form. Non-negative integers are stored as their plain binary representation, but negative numbers are stored in two's complement form. For example, take an arbitrary number, 158.
158 = 0000000010011110
while its negative, i.e.
-158 = 1111111101100010
Take any number and bitwise AND it with its negative, and you will get the rightmost set bit. This is because, when forming the two's complement, we start from the right and copy the bits as they are until we encounter the first set bit; that rightmost set bit is written unchanged, and then we flip all the digits to its left.
However, the above procedure is just a shortcut for calculating the two's complement. The actual process is to first take the one's complement of the number (flipping every set bit to unset and every unset bit to set) and then add 1 to the result. Here is some insight into why this shortcut works every time:
Why does this two's complement shortcut work?
This will give you more insight https://www.geeksforgeeks.org/efficient-method-2s-complement-binary-string/
Take some numbers, work through examples, and see for yourself.
The catch is in calculating the two's complement, not in performing the bitwise AND; AND only gives 1 where both operands have a 1.
What the two's complement does is keep the rightmost set bit the same in both numbers taking part in the bitwise AND. Look below at how the two's complement is calculated: we write the number in binary, then, starting from the right, copy every digit unchanged until we see the first 1; after that first 1, we flip everything to its left.
The catch is:
Say you want the two's complement of 4, i.e. -4. Represent the decimal in binary and copy all bits (0s) from the right until you see the first 1, and after that reverse all the 0s and 1s.
Example: we want the two's complement of 6 -> 0 1 1 0 = 1 0 1 0. Starting from the right of 0110, we copy exactly what is there until we see the first 1, then we reverse all the 0s and 1s to its left.
Another two's complement example:
 4 = 0100
-4 = 1100 (the bits up to and including the first 1 are copied unchanged, as described above)
Now it is obvious that when you do the bitwise AND, only the rightmost set bit makes it through, since AND needs both bits to be 1.
We are required to compute the bitwise AND of all natural numbers lying between A and B, both inclusive. I came across this problem on a website, and here is the approach they used, but I couldn't understand the method. Can anyone explain this more clearly with an example?
In order to solve this problem, we just need to focus on the occurrences of each power of 2, which turn out to be cyclic. For each 2^i (the length of the cycle is 2^(i+1): 2^i zeros followed by the same number of ones) we just need to compute whether the bit stays 1 throughout the given interval, which is done by simple arithmetic. If so, that power of 2 will be present in the answer; otherwise it won't.
Let's count (unsigned) with 3 bits to visualize some numbers first:
000 = 0
001 = 1
010 = 2
011 = 3
100 = 4
101 = 5
110 = 6
111 = 7
If you look at the columns, you can see that the lowest bit is alternating with a cycle of 1, the next with a cycle of 2, then 4, and the nth lowest bit is alternating with a cycle of 2^(n-1).
As soon as a bit was 0 once it is always 0 (because 0 and whatever is 0).
You could also say the nth bit is only 1 if the nth bit of A and B is 1 and d = B - A < 2^(n-1). In other words, a bit will only be 1 if it is 1 at the beginning and at the end and didn't have time to change to 0 in between, because its cycle is too long.
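A small Python sketch of that per-bit test, written from the reasoning above (the function name and the brute-force cross-check are my own):

from functools import reduce

def range_and(a, b):
    # Bit i survives the AND over [a, b] only if it is 1 in both a and b
    # and the interval fits inside that bit's run of ones (length 2**i).
    d = b - a
    result = 0
    for i in range(b.bit_length()):
        if (a >> i) & 1 and (b >> i) & 1 and d < (1 << i):
            result |= 1 << i
    return result

print(range_and(12, 15), reduce(lambda x, y: x & y, range(12, 16)))  # 12 12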
I am trying to understand the first testcase of this challenge in codeforces.
The description is:
Sergey is testing a next-generation processor. Instead of bytes the processor works with memory cells consisting of n bits. These bits are numbered from 1 to n. An integer is stored in the cell in the following way: the least significant bit is stored in the first bit of the cell, the next significant bit is stored in the second bit, and so on; the most significant bit is stored in the n-th bit.
Now Sergey wants to test the following instruction: "add 1 to the value of the cell". As a result of the instruction, the integer that is written in the cell must be increased by one; if some of the most significant bits of the resulting number do not fit into the cell, they must be discarded.
Sergey wrote certain values of the bits in the cell and is going to add one to its value. How many bits of the cell will change after the operation?
Summary
Given a binary number, add 1 to its value and count how many bits change after the operation.
Testcases
4
1100
= 3
4
1111
= 4
Note
In the first sample the cell ends up with value 0010, in the second sample — with 0000.
In the 2nd test case, 1111 is 15, so 15 + 1 = 16 (10000 in binary), so all the 1's change; therefore the answer is 4.
But in the 1st test case, 1100 is 12, so 12 + 1 = 13 (01101); here it seems only the last bit changes, so why is the answer 3?
You've missed the crucial part: the least significant bit is the first one (i.e. the leftmost one), not the last one, as we usually write binary.
Thus, 1100 is not 12 but 3. And so, 1100 + 1 = 3 + 1 = 4 = 0010, so 3 bits are changed.
The "least significant bit" means literally a bit that is not the most significant, so you can understand it as "the one representing the smallest value". In binary, the bit representing 2^0 is the least significant. So the binary code in your task is written as follows:
bit no.    0      1      2      3      4    (...)
value     2^0    2^1    2^2    2^3    2^4   (...)
          least                        most
          significant                  significant
          bit                          bit
that's why 1100 is:
1100 = 1 * 2^0 + 1 * 2^1 + 0*2^2 + 0*2^3 = 1 + 2 + 0 + 0 = 3
not the other way around (as we usually write it).
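To make the reversed bit order concrete, here is a small Python sketch (the function name is mine) that reads the cell least-significant-bit first, adds 1 modulo 2^n, and counts the changed bits:

def changed_bits(cell):
    # The cell string lists the least significant bit first, so reverse it
    # to get the usual binary notation before converting.
    n = len(cell)
    value = int(cell[::-1], 2)
    new_value = (value + 1) % (1 << n)        # discard overflow beyond n bits
    return bin(value ^ new_value).count("1")  # XOR marks the bits that changed

print(changed_bits("1100"))  # 3
print(changed_bits("1111"))  # 4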
This is task from algorithms book.
The thing is that I completely don't know where to start!
Trace the following non-recursive algorithm to generate the binary reflected
Gray code of order 4. Start with the n-bit string of all 0’s.
For i = 1, 2, ... 2^n-1, generate the i-th bit string by flipping bit b in the
previous bit string, where b is the position of the least significant 1 in the
binary representation of i.
So I know the Gray code for 1 bit should be 0 1, and for 2 bits 00 01 11 10, etc.
Many questions
1) Do I know that for n = 1 I can start of with 0 1?
2) How should I understand "start with the n-bit string of all 0's"?
3) "Previous bit string"? Which string is the "previous"? Previous means from lower n-bit? (for instance for n=2, previous is the one from n=1)?
4) How do I even convert 1-bit strings to 2-bit strings if the only operation there is to flip?
This confuses me a lot. The only "human" method I understand so far is: take the set from the lower bit count, duplicate it, reverse the 2nd copy, prepend 0 to every element of the 1st copy and 1 to every element of the 2nd copy. Done (example: 0 1 -> 0 1 | 0 1 -> 0 1 | 1 0 -> 00 01 | 11 10 -> 00 01 11 10, done).
Thanks for any help
The answer to all four of your questions is that this algorithm does not start with lower values of n. All strings it generates have the same length, and the i-th string (for i = 1, ..., 2^n - 1) is generated from the (i-1)-th one.
Here are the first few steps for n = 4:
Start with G0 = 0000
To generate G1, flip 0-th bit in G0, as 0 is the position of the least significant 1 in the binary representation of 1 = 0001b. G1 = 0001.
To generate G2, flip 1-st bit in G1, as 1 is the position of the least significant 1 in the binary representation of 2 = 0010b. G2 = 0011.
To generate G3, flip 0-th bit in G2, as 0 is the position of the least significant 1 in the binary representation of 3 = 0011b. G3 = 0010.
To generate G4, flip 2-nd bit in G3, as 2 is the position of the least significant 1 in the binary representation of 4 = 0100b. G4 = 0110.
To generate G5, flip 0-th bit in G4, as 0 is the position of the least significant 1 in the binary representation of 5 = 0101b. G5 = 0111.
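The full trace can be generated with a short Python sketch of the same procedure (this is my own transcription of the algorithm, not code from the book):

def gray_code(n):
    codes = [0]                              # start with the n-bit string of all 0s
    for i in range(1, 1 << n):
        b = (i & -i).bit_length() - 1        # position of the least significant 1 in i
        codes.append(codes[-1] ^ (1 << b))   # flip bit b of the previous string
    return [format(c, "0{}b".format(n)) for c in codes]

print(gray_code(4)[:6])  # ['0000', '0001', '0011', '0010', '0110', '0111']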