Convert a signed 24-bit number to a signed 20-bit number

How can I convert a signed 24-bit number to a signed 20-bit number, assuming the value will fit into 20 bits after conversion? Most of the answers I found extend a value to a wider type, but I am trying to narrow it. I'm looking for an answer using bitwise operations, bit manipulation, or bit shifting.
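
The thread shows no answer, but here is a minimal sketch of the usual trick in Go (the function names are my own): in two's complement, narrowing is plain truncation, so masking off the low 20 bits preserves the value whenever it lies in the 20-bit range -524288..524287.

```go
package main

import "fmt"

// narrow24to20 truncates a signed 24-bit value (held in an int32) to a
// signed 20-bit field. In two's complement, keeping the low 20 bits
// preserves the value whenever it fits, i.e. -524288 <= v <= 524287.
func narrow24to20(v int32) uint32 {
	return uint32(v) & 0xFFFFF // keep the low 20 bits
}

// signExtend20 reads the 20-bit field back as a signed value, to verify
// the round trip.
func signExtend20(raw uint32) int32 {
	return int32(raw<<12) >> 12 // move bit 19 to bit 31, then arithmetic-shift back
}

func main() {
	for _, v := range []int32{42, -42, 524287, -524288} {
		packed := narrow24to20(v)
		fmt.Printf("%8d -> 0x%05X -> %8d\n", v, packed, signExtend20(packed))
	}
}
```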

Related

Convert packed bits to two's complement int in Go

I have a 38-bit two's complement integer packed inside a byte slice. How do I extract this value correctly? If I simply pull the 38 bits out into an int64, it's not correct for negative numbers (because the upper 26 bits are always all 0).
Here's an example: https://play.golang.org/p/BRvrihYAJ80
Bytes 4 to 8 make up 40 bits; I ignore the first two bits in byte 4 and then OR+shift the rest into an int64. This works for positive numbers, but not negative ones. The binary is correct: interpreted as an int38 it comes out (correctly) as -40520517670, but interpreted incorrectly as an int64 it comes out as 234357389274.
How can I take these 38 bits and convert them to a 64-bit int correctly?
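
No answer is quoted here, but the standard fix is a shift pair that lets the arithmetic right shift do the sign extension. A minimal Go sketch, assuming the 38 bits have already been assembled into a uint64 as in the question:

```go
package main

import "fmt"

// signExtend38 treats the low 38 bits of raw as two's complement and
// sign-extends to int64: shift left so the 38-bit sign bit lands in
// bit 63, then arithmetic-shift right to copy it across the top 26 bits.
func signExtend38(raw uint64) int64 {
	return int64(raw<<26) >> 26
}

func main() {
	raw := uint64(234357389274)    // the 38 bits from the question, already assembled
	fmt.Println(signExtend38(raw)) // -40520517670
}
```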

Probability of a collision using a 32-bit CRC of a unique 32-byte array

I am trying to figure out whether using a 32-bit CRC will produce collisions on a 32-byte array.
Background
My system reads some configuration from an external flash whenever it boots up. I store the SHA256 hash of the last known configuration, and whenever I read the configuration I calculate the SHA256 hash and compare it. If the two hashes are different, then the data is different.
I need to take that SHA256 hash and turn it into a 32-bit hash for another part of the system (due to some legacy code restrictions).
Questions
Will there be a high number of collisions if I compute the 32-bit CRC of the 32-byte hash from SHA256?
I calculated the probability of collision to be 0. Can you let me know if this is correct?
The number of samples k is always 2 in my problem (I think), because I am calculating a 32-bit CRC on two 32-byte arrays (SHA256 byte arrays).
see calculation here
That's correct, if by "0" you mean that very small number. That small number is the probability that you would get a 32-bit CRC from random data that accidentally matches what you were expecting. It is simply 2^-32.
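
For illustration, a small Go sketch of the setup described above (the configuration bytes are made up): SHA256 produces the 32-byte digest, and CRC-32 folds it down to the legacy 32-bit value.

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"hash/crc32"
)

func main() {
	config := []byte("example configuration blob") // stand-in for the flash contents
	digest := sha256.Sum256(config)                // the 32-byte SHA256 hash

	// Fold the 32-byte digest down to 32 bits for the legacy interface.
	// For two independent inputs, the chance the CRCs match is 2^-32.
	short := crc32.ChecksumIEEE(digest[:])
	fmt.Printf("SHA256: %x\nCRC32:  %08x\n", digest, short)
}
```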

Bitboard algorithms for board sizes greater than 64?

I know the Magic Bitboard technique is useful for modern games played on an 8x8 grid because it aligns perfectly with a single 64-bit integer, but is the idea extensible to board sizes greater than 64 squares?
Some games like Shogi have larger board sizes such as 81 squares, which doesn't cleanly fit into a 64-bit integer.
I assume you'd have to use multiple integers, but would it be better to use two 64-bit integers or something like three 32-bit ones?
I know there probably isn't a trivial answer to this, but what kind of knowledge would I need in order to research something like this? I only have some basic/intermediate algorithms and data structures knowledge.
Yes, you could do this with a structure that contains multiple integers of varying lengths. For example, you could use 11 unsigned bytes. Or a 64-bit integer and a 32-bit integer, etc. Anything that will add up to 81 or more bits.
I rather like the idea of three 32-bit integers because you can store three rows per integer. It makes your indexing code simpler than if you used a 64-bit integer and a 32-bit integer. 9 16-bit words would work well, too, but you're wasting almost half your bits.
You could use 11 unsigned bytes, but the indexing is kind of ugly.
All things considered, I'd probably go with the 3 32-bit integers, using the low 27 bits of each.
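
A minimal Go sketch of that three-words-of-27-bits layout (the type and helper names are my own): each 32-bit word holds three 9-bit rows, which keeps the indexing to one division and one multiply.

```go
package main

import "fmt"

// Board81 is a bitboard for a 9x9 board (e.g. Shogi), packed as the
// answer suggests: three 32-bit words, with three 9-bit rows in the
// low 27 bits of each word.
type Board81 [3]uint32

// index maps a square (row, col), both 0-8, to its word and bit offset.
func index(row, col int) (word int, bit uint) {
	return row / 3, uint((row%3)*9 + col)
}

func (b *Board81) Set(row, col int) {
	w, bit := index(row, col)
	b[w] |= 1 << bit
}

func (b *Board81) Get(row, col int) bool {
	w, bit := index(row, col)
	return b[w]&(1<<bit) != 0
}

func main() {
	var b Board81
	b.Set(8, 8)                           // bottom-right corner
	fmt.Println(b.Get(8, 8), b.Get(0, 0)) // true false
}
```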

Decimal to sign and magnitude conversion

I've got a question which I have solved, along with many other similar ones which I have successfully completed as part of my assignment. I am having a little confusion with one of the questions, which is...
"The decimal number -256 is held in a 9-bit memory location. Represent this in sign and magnitude."
The answer that I got is... 11 0000 0000
How I got this is by doing the following:
We first show the binary form and then set the most significant bit (the bit on the far left; 0 represents a positive value and 1 represents a negative value). The sign determines whether it is a positive or negative value, and the magnitude is the value of the remaining bits.
Notice that I've added an extra bit to show the representation of -256 in sign and magnitude. This is simply because 9 bits are not enough to show whether 256 is negative or positive, as they are maxed out. The 9 bits give the magnitude, which is 256, and the 10th bit (on the far left) shows the sign, which is '1', indicating a negative 256.
What I find confusing is that the decimal -256 was held in a 9-bit memory location, and the result I got was 1 0000 0000. I have added an extra bit to show that the number is negative, which then represents it in 10 bits. I am having difficulty with how I can represent -256 in sign and magnitude using only 9 bits. It seems impossible to show in 9 bits, and I have therefore used 10 bits, but I am only allowed to use 9. Could someone help with how this could be achieved? Your help will be greatly appreciated. I am just a bit stuck with this. My tutors expect us to use the internet or self-knowledge and would not give us even a clue, so that's why I am here.
I know this is a bit late, but I wondered about this too in my homework just now and looked it up.
The maximum magnitude of a w-bit sign-magnitude number is 2^(w-1) - 1, since one bit is spent on the sign.
The decimal equivalent of 100000000 is 256.
Given that the number of bits is 9, 2^8 - 1 = 255.
So it would be impossible to store 256 given 9 bits in sign-magnitude.
It is impossible to represent -256 in sign-magnitude with 9 bits, simply because it is not possible to represent 256 in 8 bits (8 bits of precision affords 2^8 = 256 different possible values, so you can represent from 0 up to 255, but no further).
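
To make the range argument concrete, a small Go sketch (my own helper, not from the thread) that rejects any value whose magnitude needs more than w-1 bits:

```go
package main

import "fmt"

// toSignMagnitude encodes v as a w-bit sign-magnitude value: one sign
// bit plus a (w-1)-bit magnitude, so the representable range is
// -(2^(w-1) - 1) .. +(2^(w-1) - 1).
func toSignMagnitude(v int, w uint) (uint64, error) {
	maxMag := (1 << (w - 1)) - 1
	sign, mag := uint64(0), v
	if v < 0 {
		sign, mag = 1, -v
	}
	if mag > maxMag {
		return 0, fmt.Errorf("%d does not fit in %d-bit sign-magnitude (max magnitude %d)", v, w, maxMag)
	}
	return sign<<(w-1) | uint64(mag), nil
}

func main() {
	for _, v := range []int{255, -255, -256} {
		if enc, err := toSignMagnitude(v, 9); err != nil {
			fmt.Println(err) // -256 fails: its magnitude alone needs 9 bits
		} else {
			fmt.Printf("%4d -> %09b\n", v, enc)
		}
	}
}
```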

How can you deal with BOTH signed and unsigned numbers in VHDL?

I'm writing a program that needs to work for signed AND unsigned numbers. You take a 32-bit input: the first 24 bits are a whole number, and the last 8 bits are a fraction. Depending on what the fraction is, you round up or down. Pretty simple, but how would you write a program that will work whether the input is signed OR unsigned? Do you just make two separate code blocks that execute depending on whether a number is unsigned or not?
Your program would need to be aware of the source of the data, and from that information derive whether or not the number is signed. Otherwise, how is your program to know whether a vector of bits is (un)signed? Signedness is a convention for humans to use to structure data. The hardware you implement just sees a vector of bits.
A 32-bit unsigned number with 8 fraction bits can represent numbers in the range 0 to ((2^32)-1)/256.
A 32-bit signed number with 8 fraction bits can represent numbers in the range -(2^31)/256 to ((2^31)-1)/256.
So, how about converting your 32-bit input (signed or unsigned) to a 33-bit signed value, which can represent numbers in the range -(2^32)/256 to ((2^32)-1)/256 and so covers your whole range of inputs.
(You have not given any code. In addition to your 32-bit input, there must be some other input to signal whether those 32 bits represent an unsigned or a signed number. You'll need to test that input and do the appropriate conversion based on its state.)
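
The question is about VHDL, but the widening idea is easy to show in a software sketch. Here is a minimal Go version (the function name and the round-half-up tie rule are my own assumptions), using int64 to stand in for the answer's 33-bit signed value:

```go
package main

import "fmt"

// roundQ24_8 rounds a 32-bit fixed-point input with 8 fraction bits to
// the nearest whole number. isSigned says whether the raw bits are two's
// complement; widening to int64 (standing in for the answer's 33-bit
// signed value) covers both input ranges in one code path.
func roundQ24_8(raw uint32, isSigned bool) int64 {
	var v int64
	if isSigned {
		v = int64(int32(raw)) // reinterpret the bits, then sign-extend
	} else {
		v = int64(raw) // zero-extend
	}
	return (v + 128) >> 8 // add half (0.5 in Q24.8), then drop the fraction
}

func main() {
	fmt.Println(roundQ24_8(0x00000280, false)) // 2.5 unsigned -> 3
	fmt.Println(roundQ24_8(0xFFFFFD80, true))  // -2.5 signed  -> -2 (ties round toward +infinity)
}
```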
