Extracting numbers from a 32-bit integer

I'm trying to solve a riddle in a programming test.
Disclaimer: It's a test for a job, but I'm not looking for an answer. I'm just looking for an understanding of how to do this. The test requires that I come up with a set of solutions to a set of problems within 2 weeks, and it doesn't state a requirement that I arrive at the solutions in isolation.
So, the problem:
I have a 32-bit number with the bits arranged like this:
siiiiiii iiiiiiii ifffffff ffffffff
Where:
s is the sign bit (1 == negative)
i is 16 integer bits
f is 15 fraction bits
The assignment is to write something that decodes a 32-bit integer into a floating-point number. Given the following inputs, it should produce the following outputs:
input output
0x00008000 1.0
0x80008000 -1.0
0x00010000 2.0
0x80014000 -2.5
0x000191eb 3.14
0x00327eb8 100.99
I'm having no trouble getting the sign bit or the integer part of the number. I get the sign bit like this:
boolean signed = ((value & (1 << 31)) != 0);
I get the integer and fraction parts like this:
int wholePart = ((value & 0x0FFFFFFF) >> 15);
int fractionPart = ((value & 0x0000FFFF >> 1));
The part I'm having an issue with is getting the number in the last 15 bits to match the expected values.
Instead of 3.14, I get 3.4587, etc.
If someone could give me a hint about what I'm doing wrong, I'd appreciate it. More than anything else, the fact that I haven't figured this out after hours of messing with it is kind of driving me nuts. :-)

The company's inputs aren't wrong. The fractional bits don't represent the literal digits to the right of the decimal point; they represent the fractional part. I don't know how else to say it without giving it away. Would it be too big a hint to say there is a divide involved?
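To make the hint concrete without giving the whole game away, here is a minimal sketch in Go (the names are illustrative, and value is assumed to be a uint32):
fractionBits := value & 0x7FFF              // the low 15 bits
fraction := float64(fractionBits) / 32768.0 // divide by 2^15: 0x4000 becomes 0.5
With that divide, the 15 fraction bits are read as a binary fraction rather than as decimal digits.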

A few things...
Why not get the fractional part as
int fractionPart = value & 0x00007FFF; // i.e. no shifting needed...
Similarly, no shifting needed for the sign
boolean signed = ((value & 0x80000000) != 0); // signed is true when negative
See Ryan's response for the effective use of the fractional part, i.e. not taking it literally as the digit values of the decimal part but rather... something involving a fraction...

Have a look at what you're anding the fraction part with prior to the shift.

Shift Right 31 gives you the sign bit (1 = Neg, 0 = Pos)
BEFORE siiiiiii iiiiiiii ifffffff ffffffff
SHR 31 00000000 00000000 00000000 0000000s
Shift Left 1 followed by Shift Right 16 gives you the Integer bits
BEFORE siiiiiii iiiiiiii ifffffff ffffffff
SHL 1 iiiiiiii iiiiiiii ffffffff fffffff0
SHR 16 00000000 00000000 iiiiiiii iiiiiiii
Shift Left 17 followed by Shift Right 17 gives you the Fraction bits
BEFORE siiiiiii iiiiiiii ifffffff ffffffff
SHL 17 ffffffff fffffff0 00000000 00000000
SHR 17 00000000 00000000 0fffffff ffffffff
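The same recipe written out as a Go sketch, assuming value is a uint32 so every shift is a logical shift:
sign := value >> 31             // 0000000s: 1 means negative
integer := (value << 1) >> 16   // SHL 1, SHR 16: the 16 integer bits
fraction := (value << 17) >> 17 // SHL 17, SHR 17: the 15 fraction bits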

int wholePart = ((value & 0x7FFFFFFF) >> 15);
int fractionPart = (value & 0x00007FFF);
Key your bit-mask into Calculator in Binary mode and then flip it to Hex...
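Putting the masks and the divide together, a sketch of the whole decode in Go; decode is a hypothetical name, and the 32768.0 divisor (2^15) is the divide Ryan hinted at:
func decode(value uint32) float64 {
    whole := float64((value & 0x7FFFFFFF) >> 15) // the 16 integer bits
    fraction := float64(value&0x7FFF) / 32768.0  // the 15 fraction bits as a binary fraction
    result := whole + fraction
    if value&0x80000000 != 0 { // sign bit set means negative
        result = -result
    }
    return result
}
For example, decode(0x000191eb) yields 3.139984..., which matches the expected 3.14 to two decimal places (0.14 has no exact representation in 15 binary fraction bits).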

Related

Get value of one bit from 32 bits

How do you apply a mask to get only one bit after you shift right? Does it depend on how many positions you shifted right?
In a 32 bit structure I'm trying to get the value of the 9th bit and the 10th bit.
x := uint32(11537664)
0000 0000 1011 0000 0000 1101 0000 0000
^^
So for the 9th bit, if I shift right 23 bits I need to mask one byte? That seems to isolate the 9th bit because I'm getting a value of 1.
(x >> 23) & 0xff
9th bit...should be 1... looks ok.
00000000000000000000000000000001
0x1
So to get the 10th bit, which should be 0, I am shifting one less bit, which does put a 0 all the way to the right. But there is a 1 after it which needs to be masked. I figured 1 byte plus 1 bit for the mask, but I'm still seeing the bit in position two, so that can't be right.
(x >> 22) & 0x1ff
10th bit... should be 0, but this shift and mask does not look correct.
00000000000000000000000000000010
^ This bit I don't want.
0x2
Link to example:
https://play.golang.org/p/zqofCAAKDZz
package main

import (
    "fmt"
)

func bin(i uint32) {
    fmt.Printf("%032b\n", i)
}

func hex(i uint32) {
    fmt.Printf("0x%x\n", i)
}

func show(i uint32) {
    bin(i)
    hex(i)
    fmt.Println()
}

func main() {
    x := uint32(11537664)
    fmt.Println("Data")
    show(x)
    fmt.Println("First 8 bits.")
    show(x >> 24)
    fmt.Println("9th bit...should be 1")
    show((x >> 23) & 0xff)
    fmt.Println("10th bit... should be 0")
    show((x >> 22) & 0x1ff)
}
After the shift you get the number 0b10, and you only need the lowest bit. So why are you masking with 0x1ff? That mask has 9 one bits, so it leaves the lowest 9 bits unchanged (unmasked).
Instead mask with 0b01 = 0x01. That only leaves the lowest bit, and zeroes all others:
show((x >> 22) & 0x01)
Try it on the Go Playground.
Also note that if you just want to test whether a certain bit is one or zero, you don't necessarily have to shift. Masking with a bitmask that contains a single one at that position is enough; just compare the masking result with zero.
The proper bitmask for testing the nth bit is simply 1<<n (where bits are zero-indexed). The 2 bits you want to test are the 22nd and 23rd bits.
See this example:
x := uint32(11537664)
fmt.Printf("x : %032b\n", x)
fmt.Println()
const mask22 = 1 << 22
fmt.Printf("mask22 : %032b\n", mask22)
fmt.Printf("22. bit: %032b %t\n", x&mask22, x&mask22 != 0)
fmt.Println()
const mask23 = 1 << 23
fmt.Printf("mask23 : %032b\n", mask23)
fmt.Printf("23. bit: %032b %t\n", x&mask23, x&mask23 != 0)
It outputs (try it on the Go Playground):
x : 00000000101100000000110100000000
mask22 : 00000000010000000000000000000000
22. bit: 00000000000000000000000000000000 false
mask23 : 00000000100000000000000000000000
23. bit: 00000000100000000000000000000000 true
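If this test comes up often, it can be wrapped in a small helper; bit is a hypothetical name, and positions are zero-indexed from the least significant bit:
// bit reports whether bit n of x is set (n counted from the least significant bit).
func bit(x uint32, n uint) bool {
    return x&(1<<n) != 0
}
With x = 11537664, bit(x, 23) is true and bit(x, 22) is false, matching the output above.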

Golang Left/Right shift behaviour for signed numbers

Can someone please explain the left/right shift behaviour in Golang. Please refer the sample code here: https://play.golang.org/p/7vjwCbOEkw
package main

import (
    "fmt"
)

func main() {
    var lf int8 = -3
    fmt.Printf("-3 : %08b\n", lf)
    fmt.Printf("<<1: %08b\n", lf<<1)
    fmt.Printf("<<2: %08b\n", lf<<2)
    fmt.Printf("<<3: %08b\n", lf<<3)
    fmt.Printf("<<4: %08b\n", lf<<4)
    fmt.Printf("<<5: %08b, %d\n", lf<<5, lf<<5)
    fmt.Printf("<<6: %08b, %d\n", lf<<6, lf<<6)
    fmt.Printf("<<7: %08b, %d\n", lf<<7, lf<<7)
    fmt.Printf("<<8: %08b, %d\n", lf<<8, lf<<8)
    fmt.Printf("<<9: %08b, %d\n", lf<<9, lf<<9)
}
-3 : -0000011
<<1: -0000110
<<2: -0001100
<<3: -0011000
<<4: -0110000
<<5: -1100000, -96
<<6: 01000000, 64
<<7: -10000000, -128
<<8: 00000000, 0
<<9: 00000000, 0
-3 in two's complement is 11111101; what you see when the program prints -0000011 is a minus sign followed by the binary representation of the absolute value of the number. In two's complement, the highest bit is 0 for positive numbers (including zero) and 1 for negative numbers. If you shift this number (11111101) left, the lower 7 bits move one place to the left and a 0 comes in from the right, replacing the lowest bit. Shifting as you do in your example results in:
11111101 -3
11111010 -6
11110100 -12
11101000 -24
11010000 -48
10100000 -96
01000000 64
10000000 -128
00000000 0
00000000 0
...
You just have to consider all the bit patterns as two's complement. Once you know how that works, everything will make sense.
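One way to see the raw two's-complement patterns from that table is to convert to uint8 before printing, since %b on a negative signed value prints a minus sign and the absolute value instead of the underlying bits:
var lf int8 = -3
fmt.Printf("%08b\n", uint8(lf))    // 11111101
fmt.Printf("%08b\n", uint8(lf<<1)) // 11111010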

how to find xor key/algorithm, for a given hex?

So I have this hex: B0 32 B6 B4 37
I know this hex is obfuscated with some key/algorithm.
I also know this hex is equal to: 61 64 6d 69 6e (admin)
How can I calculate the XOR key for this?
If you write out the binary representation, you can see the pattern:
encoded decoded
10110000 -> 01100001
00110010 -> 01100100
Notice that the bit patterns have the same number of set bits before and after. To decode, you just bitwise-rotate one bit left: the value shifts left one place and the most significant bit wraps around to the least significant place. To encode, do the opposite and rotate right.
int value, encoded_value;
encoded_value = 0xB0;
value = ((encoded_value << 1) | (encoded_value >> 7)) & 255; // rotate left by 1 within a byte
// value will be 0x61
encoded_value = ((value >> 1) | (value << 7)) & 255; // rotate right by 1 to re-encode
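For a Go version, the standard library already has rotates; this sketch uses math/bits.RotateLeft8 to decode the whole sample string:
package main

import (
    "fmt"
    "math/bits"
)

func main() {
    encoded := []byte{0xB0, 0x32, 0xB6, 0xB4, 0x37}
    decoded := make([]byte, len(encoded))
    for i, b := range encoded {
        decoded[i] = bits.RotateLeft8(b, 1) // rotate each byte left by one bit
    }
    fmt.Println(string(decoded)) // admin
}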

Breaking a 32 bit integer into 8 bit chunks for Radix Sort

I am basically a beginner in Computer Science. Please forgive me if I ask elementary questions. I am trying to understand radix sort. I read that a 32-bit unsigned integer can be broken down into four 8-bit chunks. After that, all it takes is "4 passes" to complete the radix sort. Can somebody please show me an example of how this breakdown (32 bits into four 8-bit chunks) works, maybe with a 32-bit integer like 2147507648?
Thanks!
You would divide the 32-bit integer up into 4 pieces of 8 bits. Extracting those pieces is a matter of using some of the operators available in C:
uint32_t x = 2147507648;
uint8_t chunk1 = x & 0x000000ff; //lower 8 bits
uint8_t chunk2 = (x & 0x0000ff00) >> 8;
uint8_t chunk3 = (x & 0x00ff0000) >> 16;
uint8_t chunk4 = (x & 0xff000000) >> 24; //highest 8 bits
2147507648 decimal is 0x80005DC0 hex. You can pretty much eyeball those 8-bit chunks out of the hex representation, since each hex digit represents 4 bits, so each pair of hex digits represents 8 bits.
So that means chunk 1 is 0xC0, chunk 2 is 0x5D, chunk 3 is 0x00 and chunk 4 is 0x80.
It's done as follows:
2147507648
=> 0x80005DC0 (hex value of 2147507648)
=> 0x80 0x00 0x5D 0xC0
=> 128 0 93 192
To do this, you'd need bitwise operations as nos suggested.
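For context, here is a minimal sketch of how those four chunks drive an LSD radix sort in Go; radixSort is a hypothetical name, and each pass is a counting sort keyed on one 8-bit chunk:
// radixSort sorts uint32 values in four counting-sort passes,
// least significant 8-bit chunk first.
func radixSort(a []uint32) {
    buf := make([]uint32, len(a))
    for pass := 0; pass < 4; pass++ {
        shift := uint(8 * pass)
        var count [256]int
        for _, v := range a {
            count[(v>>shift)&0xff]++
        }
        sum := 0 // turn counts into starting offsets
        for i, c := range count {
            count[i] = sum
            sum += c
        }
        for _, v := range a {
            idx := (v >> shift) & 0xff
            buf[count[idx]] = v
            count[idx]++
        }
        copy(a, buf)
    }
}
After the fourth pass (keyed on the highest chunk, 0x80 in the example above), the slice is fully sorted.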

bitwise AND doesn't work on MSB

I'm implementing a bit vector by packing bits into an array of uints. The getBit(index) function does a (array[cell] & (1 << bit)) >> bit to get whether a bit has been set or not. This works perfectly well for all bits except the MSB. An example of where it doesn't work is as follows.
array[cell] = 11111001 11100000 00000000 00000000
(1 << bit) = 10000000 00000000 00000000 00000000
& operation = 01111001 11100000 00000000 00000000
I can't figure out why the bitwise AND operation seems to be operating like an XOR. Either that, or the MSB got unset. Can anyone explain what's happening?
Edit: Actual code
var cell:uint = int(index / 32);
var bit:uint = 32 - (index % 32) - 1;
return (array[cell] & (1 << bit)) >> bit;
In the instance that doesn't work, index = 0
If all values are such that things are well-defined,
(array[cell] & (1 << bit)) >> bit
is equivalent to the simpler
(array[cell] >> bit) & 1
for unsigned integers.
I'm not familiar with ActionScript, but it could be that 1 << 31 behaves oddly because 1 is a signed integer.
Aside remark,
var bit:uint = 32 - (index % 32) - 1;
looks odd, usually one would use index % 32 as the bit number.
Use unsigned shift.
(uint(array[cell]) & (uint(1) << bit)) >>> bit
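For comparison, the same accessor in Go, where keeping everything uint32 sidesteps the signed-shift problem entirely (getBit is a hypothetical name that keeps the question's MSB-first numbering):
func getBit(array []uint32, index uint) uint32 {
    cell := index / 32
    bit := 31 - index%32 // bit 31 is the MSB, so index 0 maps to it
    return (array[cell] >> bit) & 1
}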
