What happens when you assign a value greater than 1 to a bit in a register?

What happens if I set a single bit to a value greater than 1 like below?
CANCDMOB = ( 1 << CONMOB1) | ( 1 << IDE ) | ( 8 << DLC0);
Will the DLC0 bit just be set to 1, or will it affect the next bits in the register as well?
DLC0 is the LSB of an 8-bit register.

Why am I getting negative integer after adding two positive 16 bit integers?

I am a newbie to Go; actually, I am new to statically typed programming in general. I only know JS.
While going through simple examples in Go tutorials, I found that adding a1 + a2 gives a negative integer value:
var a1 int16 = 127
var a2 int16 = 32767
var rr int16 = a1 + a2
fmt.Println(rr)
Result:
-32642
Expected:
The compiler will throw an error because the sum exceeds the int16 maximum,
(OR) Go automatically converts the int16 to int32 and prints:
32,894
Can you guys explain why it shows -32642?
This is the result of integer overflow behaving as defined in the specification.
You don't see your expected results because:
Overflow happens at runtime, not compile time.
Go is statically typed, so the sum of two int16 values is still an int16.
32,894 is greater than the maximum value representable by an int16.
It’s very simple.
A signed 16-bit integer maps the positive range 0 to 32767 (0x0000 to 0x7FFF) and the negative range from 0x8000 (−32768) to 0xFFFF (−1).
For example, 0 - 1 = -1, and it is stored as 0xFFFF.
Now in your specific case: 32767 + 127.
You overflow because 32767 is the maximum value for a signed 16-bit integer; but if you force the addition 0x7FFF + 0x7F = 0x807E and interpret 0x807E as a signed 16-bit integer, you obtain -32642.
You can better understand here: Signed number representations
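The wrap-around is easy to reproduce directly (a minimal sketch; the hex print is only there to show that the bit pattern is the same when read unsigned):

```go
package main

import "fmt"

func main() {
	var a1 int16 = 127
	var a2 int16 = 32767
	rr := a1 + a2 // wraps around at runtime; no error is reported
	fmt.Println(rr)                 // -32642
	fmt.Printf("%#x\n", uint16(rr)) // 0x807e: same bits, read unsigned
}
```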
Additionally, check these constants from the math package:
const (
	MaxInt8   = 1<<7 - 1
	MinInt8   = -1 << 7
	MaxInt16  = 1<<15 - 1
	MinInt16  = -1 << 15
	MaxInt32  = 1<<31 - 1
	MinInt32  = -1 << 31
	MaxInt64  = 1<<63 - 1
	MinInt64  = -1 << 63
	MaxUint8  = 1<<8 - 1
	MaxUint16 = 1<<16 - 1
	MaxUint32 = 1<<32 - 1
	MaxUint64 = 1<<64 - 1
)
And check the human version of these values here

Bit datatype in SystemVerilog

bit id_pkt ;
id_pkt++ ;
I found this code snippet while learning some aspects of SV. Now, isn't 'bit' a 2-state data type? So technically it should only take either 0 or 1, right? How can you keep incrementing a variable of the bit datatype? Or does a bit-type variable have some default 32 bits allocated to it, so that this is also a valid bit value -> 110000?
Yes, a single bit can only take on the values 0 and 1, so id_pkt++ toggles the value from 0 to 1 and 1 to 0.
module testthebit;
  initial begin
    bit wr_rd;
    for (int i = 0; i < 10; i++)
    begin
      $display("The value of wr_rd is %0h", wr_rd);
      wr_rd++;
    end
  end
endmodule
Yeah, so I wrote this module, and the results were as you predicted, @dave_59:
The value of wr_rd is 0
The value of wr_rd is 1
The value of wr_rd is 0
The value of wr_rd is 1
The value of wr_rd is 0
The value of wr_rd is 1
The value of wr_rd is 0
The value of wr_rd is 1
The value of wr_rd is 0
The value of wr_rd is 1

Converting a floating point to its corresponding bit-segments

Given a Ruby Float value, e.g.,
f = 12.125
I'd like to wind up with a 3-element array containing the floating-point number's sign (1 bit), exponent (11 bits), and fraction (52 bits). (Ruby's floats are the IEEE 754 double-precision 64-bit representation.)
What's the best way to do that? Bit-level manipulation doesn't seem to be Ruby's strong point.
Note that I want the bits, not the numerical values they correspond to. For instance, getting [0, -127, 1] for the floating-point value of 1.0 is not what I'm after -- I want the actual bits in string form or an equivalent representation, like ["0", "0ff", "000 0000 0000"].
The bit data can be exposed via Array's pack, since Float doesn't provide these functions directly.
str = [12.125].pack('D').bytes.reverse.map{|n| "%08b" %n }.join
=> "0100000000101000010000000000000000000000000000000000000000000000"
[ str[0], str[1..11], str[12..63] ]
=> ["0", "10000000010", "1000010000000000000000000000000000000000000000000000"]
This is a bit 'around about the houses' to pull it out from a string representation. I'm sure there is a more efficient way to pull the data from the original bytes...
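One more direct route, as a sketch (it assumes Ruby 2.4+ for unpack1, and float_bits is a made-up helper name): pack the float as a big-endian IEEE 754 double, unpack it as a big-endian 64-bit unsigned integer, and mask out the three fields with shifts.

```ruby
# Hypothetical helper: slice an IEEE 754 double into sign/exponent/fraction.
def float_bits(f)
  bits = [f].pack('G').unpack1('Q>') # 'G' = big-endian IEEE 754 double
  sign     = bits >> 63              # top bit
  exponent = (bits >> 52) & 0x7FF    # next 11 bits (biased by 1023)
  fraction = bits & 0xFFFFFFFFFFFFF  # low 52 bits
  [sign, exponent, fraction]
end

p float_bits(12.125) # => [0, 1026, 2322168557862912]
```

The integers can then be rendered as bit strings with e.g. "%011b" % exponent if the string form is what's wanted.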
Edit: The bit-level manipulation piqued my interest, so I had a poke around. To use bit operations in Ruby you need an Integer, so the float requires some more unpacking to convert it into a 64-bit int. The big-endian/IEEE 754 documented representation is fairly trivial. The little-endian representation I'm not so sure about. It's a little odd, as you are not on clean byte boundaries with an 11-bit exponent and a 52-bit mantissa, so it becomes fiddly to pull the bits out and swap them about to get what resembles little endian, and I'm not sure it's right as I haven't seen any reference for the layout. The 64-bit value is little endian, but I'm not sure how that applies to the components of the 64-bit value until you store them somewhere else, like a 16-bit int for the mantissa.
As an example, for an 11-bit value from little to big endian, the kind of thing I was doing was to shift the low byte left 3 bits to the front, then OR in the top 3 bits:
v = 0x4F2
((v & 0xFF) << 3) | (v >> 8)
Here it is anyway, hopefully its of some use.
class Float
  Float::LITTLE_ENDIAN = [1.0].pack("E") == [1.0].pack("D")

  # Returns the sign, exponent and mantissa as integers
  def ieee745_binary64
    # Build a big-endian int representation so we can use bit operations
    tb = [self].pack('D').unpack('Q>').first
    # Check what we are
    if Float::LITTLE_ENDIAN
      ieee745_binary64_little_endian tb
    else
      ieee745_binary64_big_endian tb
    end
  end

  # Force a little-endian calculation
  def ieee745_binary64_little
    ieee745_binary64_little_endian [self].pack('E').unpack('Q>').first
  end

  # Force a big-endian calculation
  def ieee745_binary64_big
    ieee745_binary64_big_endian [self].pack('G').unpack('Q>').first
  end

  # Little endian
  def ieee745_binary64_little_endian big_end_int
    #puts "big #{big_end_int.to_s(2)}"
    sign  = ( big_end_int & 0x80 ) >> 7
    exp_a = ( big_end_int & 0x7F ) << 1    # get the last 7 bits, make them more significant
    exp_b = ( big_end_int & 0x8000 ) >> 15 # get the 9th bit, to fill the sign gap
    exp_c = ( big_end_int & 0x7000 ) >> 4  # get the 10-12th bits to stick on the front
    exponent = exp_a | exp_b | exp_c
    mant_a = ( big_end_int & 0xFFFFFFFFFFFF0000 ) >> 12 # F000 was taken above
    mant_b = ( big_end_int & 0x0000000000000F00 ) >> 8  # F00 was left over
    mantissa = mant_a | mant_b
    [ sign, exponent, mantissa ]
  end

  # Big endian
  def ieee745_binary64_big_endian big_end_int
    sign     = ( big_end_int & 0x8000000000000000 ) >> 63
    exponent = ( big_end_int & 0x7FF0000000000000 ) >> 52
    mantissa = ( big_end_int & 0x000FFFFFFFFFFFFF )
    [ sign, exponent, mantissa ]
  end
end
and testing...
def printer val, vals
  printf "%-15s sign|%01b|\n", val, vals[0]
  printf "  hex e|%3x| m|%013x|\n", vals[1], vals[2]
  printf "  bin e|%011b| m|%052b|\n\n", vals[1], vals[2]
end

floats = [ 12.125, -12.125, 1.0/3, -1.0/3, 1.0, -1.0, 1.131313131313, -1.131313131313 ]
floats.each do |v|
  printer v, v.ieee745_binary64
  printer v, v.ieee745_binary64_big
end
TIL my brain is big endian! You'll note the ints being worked with are both big endian. I failed at bit shifting the other way.
Use frexp from the Math module. From the doc:
fraction, exponent = Math.frexp(1234) #=> [0.6025390625, 11]
fraction * 2**exponent #=> 1234.0
The sign bit is easy to find on its own.

What does the "|" operator do?

I don't get this operator. What does it do?
Here is an example of where I find it:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
That's a bitwise OR. It's useful in this case for composing bitmasks: those flags are defined as numbers with exactly one bit set, and when you OR them together you end up with a number that has both bits set.
example:
I don't know what the exact value of those flags is, but let's imagine they are:
#define GL_COLOR_BUFFER_BIT 0x01
#define GL_DEPTH_BUFFER_BIT 0x08
If you write those out in binary, you get:
GL_COLOR_BUFFER_BIT = 00000001
GL_DEPTH_BUFFER_BIT = 00001000
And if you bitwise OR those together, you set the bit in the output to 1 if that bit is set in either GL_COLOR_BUFFER_BIT OR GL_DEPTH_BUFFER_BIT, so:
GL_COLOR_BUFFER_BIT = 00000001
GL_DEPTH_BUFFER_BIT = 00001000
= 00001001
So you end up with the number 0x09.
The function you're calling will examine the number you passed in, and based on which bits are set, it knows what flags you're passing in.
It depends a bit on the language you are using, but in many languages this is the bitwise OR. It is often used to pass several flags, each encoded as a single bit, into a function. For instance, if you have two flags
const char FLAG_1 = 0x01;
const char FLAG_2 = 0x02;
then FLAG_1 | FLAG_2 is 0x03. For bitwise-disjoint flags this is equivalent to addition, which can be confusing at first. In the call, the bits are OR'd together:
func( FLAG_1 | FLAG_2) // set both FLAG_1 and FLAG_2
The function func can then test the bits individually using the bitwise and:
void func( char flags ) {
    if( flags & FLAG_1 ) { // test for FLAG_1
    }
    if( flags & FLAG_2 ) { // test for FLAG_2
    }
}
It is the bitwise OR operator. Bitwise OR works as shown below.
Let's assume you have two bytes, a and b.
a = 0 0 1 1 0 1 0 1
b = 1 0 1 1 0 0 0 0
Bitwise operators work at the bit level: the operation takes each pair of bits in turn and ORs them. If at least one (or both) of the bits is 1, the corresponding bit in the result byte will also be 1. If both are 0, the result bit will be 0.
So a bitwise or on a and b will do this:
a = 0 0 1 1 0 1 0 1
b = 1 0 1 1 0 0 0 0
c = 1 0 1 1 0 1 0 1
First bits: 0 and 1. There is a 1, so the corresponding bit in c will be 1.
Second bits: 0 and 0, so the next result bit will be 0.
Third bits: both 1s. At least one 1 is present, so the next bit in c will be 1... and so on.
Hope it clears it up for you.
PS. I used dummy bits here and by no means they correspond to the actual values used by GL_COLOR_BUFFER_BIT or GL_DEPTH_BUFFER_BIT.
Mostly bitwise operators are used to enforce a mask on a value or combine two values together.
| will combine the two together
& will enforce a mask
Let's assume you are given a value and you want to make it look like this (we will use & to enforce that mask).
0 0 0 1 1 1 1 0
0s correspond to places where there should be no value, and 1s correspond to the bits which you are interested in and want to extract.
a = 1 0 1 0 0 1 1 0
mask = 0 0 0 1 1 1 1 0
Now all bits in a which line up with 0s in the mask will be dropped, and the result will have 0s in those positions, because for AND (&) both bits need to be 1 to produce a 1.
Where you have 1s in the mask, that's where the potential 1s in the result can occur. So applying & bit by bit, you end up with this value:
result = 0 0 0 0 0 1 1 0
If you look closer, the 4 bits in the result which are in the same positions as the 1s in the mask are simply copied from a. So what it really did was remove the bits of a where the mask had a 0 and keep the ones under the 1s.

VB.NET enum declaration syntax

I recently saw a declaration of enum that looks like this:
<Serializable()>
<Flags()>
Public Enum SiteRoles
ADMIN = 10 << 0
REGULAR = 5 << 1
GUEST = 1 << 2
End Enum
I was wondering if someone could explain what the "<<" syntax does or what it is used for. Thank you...
The enum has a Flags attribute, which means that its values are used as bit flags.
Bit flags are useful when representing more than one attribute in a single variable.
These are the flags for a 16-bit (attribute) variable (hopefully you see the pattern, which continues on to any number of bits, limited by the platform/variable type of course):
BIT1 = 0x1 (1 << 0)
BIT2 = 0x2 (1 << 1)
BIT3 = 0x4 (1 << 2)
BIT4 = 0x8 (1 << 3)
BIT5 = 0x10 (1 << 4)
BIT6 = 0x20 (1 << 5)
BIT7 = 0x40 (1 << 6)
BIT8 = 0x80 (1 << 7)
BIT9 = 0x100 (1 << 8)
BIT10 = 0x200 (1 << 9)
BIT11 = 0x400 (1 << 10)
BIT12 = 0x800 (1 << 11)
BIT13 = 0x1000 (1 << 12)
BIT14 = 0x2000 (1 << 13)
BIT15 = 0x4000 (1 << 14)
BIT16 = 0x8000 (1 << 15)
To set a bit (attribute) you simply use the bitwise or operator:
UInt16 flags;
flags |= BIT1; // set bit (Attribute) 1
flags |= BIT13; // set bit (Attribute) 13
To determine if a bit (attribute) is set, you simply use the bitwise and operator:
bool bit1 = (flags & BIT1) > 0; // true;
bool bit13 = (flags & BIT13) > 0; // true;
bool bit16 = (flags & BIT16) > 0; // false;
In your example above, ADMIN and REGULAR have the same value, 10, because (10 << 0) and (5 << 1) are equal; GUEST has the value 4 (only bit 3 set).
Therefore you could determine the SiteRole by using the bitwise AND operator, as shown above:
UInt32 SiteRole = ...;
IsAdmin = (SiteRole & ADMIN) > 0;
IsRegular = (SiteRole & REGULAR) > 0;
IsGuest = (SiteRole & GUEST) > 0;
Of course, you can also set the SiteRole by using the bitwise OR operator, as shown above:
UInt32 SiteRole = 0x00000000;
SiteRole |= ADMIN;
The real question is why do ADMIN and REGULAR have the same values? Maybe it's a bug.
These are bitwise shift operations. Bit shifts are used here to transform the integer values of the enum members into different numbers; each member actually holds the bit-shifted value. This is probably an obfuscation technique, and it is the same as assigning a fixed integer value to each member.
Each integer has a binary representation (like 0111011); bit shifting moves those bits to the left (<<) or to the right (>>), depending on which operator is used.
For example:
10 << 0 means:
1010 (10 in binary form) moved with 0 bits left is 1010
5 << 1 means:
101 (5 in binary form) moved one bit to the left = 1010 (added a zero to the right)
so 5 << 1 is 10 (because 1010 represents the number 10)
and etc.
In general, the x << y operation can be seen as a fast way to calculate x * Pow(2, y).
You can read this article for more detailed info on bit shifting in .NET http://www.blackwasp.co.uk/CSharpShiftOperators.aspx
