I know that you can clear bits by ANDing a value with 0. However, how can I mask certain nibbles and keep others? In other words, if I have 0x000f0b7c and I want to mask everything but the b (so my result would be 0x00000b00), how would I use AND to do this? Would it require multiple steps?
You can better understand boolean operations if you represent values in binary form.
The AND operation between two binary digits returns 1 if both the binary digits have a value of 1, otherwise it returns 0.
Suppose you have two binary digits a and b; you can build the following "truth table":
a | b | a AND b
---+---+---------
0 | 0 | 0
1 | 0 | 0
0 | 1 | 0
1 | 1 | 1
The masking operation consists of ANDing a given value with a "mask" where every bit that needs to be preserved is set to 1, while every bit to discard is set to 0.
This is done by ANDing each bit of the given value with the corresponding bit of the mask.
The given value, 0xf0b7c, can be converted as follows:
f 0 b 7 c (hex)
1111 0000 1011 0111 1100 (bin)
If you want to preserve only the bits corresponding to the "b" value (bits 8..11) you can mask it this way:
f 0 b 7 c
1111 0000 1011 0111 1100
0000 0000 1111 0000 0000
The value 0000 0000 1111 0000 0000 can be converted to hex and has a value of 0xf00.
So if you calculate "0xf0b7c AND 0xf00" you obtain 0xb00.
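For illustration, here is that single AND step written out in Go (just a sketch; any C-like language works the same way):

package main

import "fmt"

func main() {
    v := 0x000f0b7c
    // The mask has 1s only in bits 8..11 (the "b" nibble); every other bit is ANDed with 0.
    masked := v & 0x00000f00
    fmt.Printf("0x%08x\n", masked) // prints 0x00000b00
}

So no, it does not require multiple steps: one AND with the right mask is enough.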
fmt.Println(^1)
Why does this print -2?
The ^ operator is the bitwise complement operator. Spec: Arithmetic operators:
For integer operands, the unary operators +, -, and ^ are defined as follows:
+x is 0 + x
-x negation is 0 - x
^x bitwise complement is m ^ x with m = "all bits set to 1" for unsigned x
and m = -1 for signed x
So 1 in binary is a single 1 bit preceded by all zeros:
0000000000000000000000000000000000000000000000000000000000000001
So the bitwise complement is a single 0 bit preceded by all ones:
1111111111111111111111111111111111111111111111111111111111111110
^1 is an untyped constant expression. When it is passed to a function, it has to be converted to a type. Since 1 is an untyped integer constant, its default type int is used. int in Go is represented using two's complement, where negative numbers start with a 1 bit: the value with all bits set is -1, the value one smaller in binary is -2, and so on.
The bit pattern above is the 2's complement representation of -2.
To print the bit patterns and type, use this code:
fmt.Println(^1)
fmt.Printf("%T\n", ^1)
fmt.Printf("%064b\n", 1)
i := ^1
fmt.Printf("%064b\n", uint(i))
It outputs (try it on the Go Playground):
-2
int
0000000000000000000000000000000000000000000000000000000000000001
1111111111111111111111111111111111111111111111111111111111111110
Okay, this has to do with the way we represent signed numbers in computation.
For a 4-bit number, you get the following values:
  D |  B
----+------
 -8 | 1000
 -7 | 1001
 -6 | 1010
 -5 | 1011
 -4 | 1100
 -3 | 1101
 -2 | 1110
 -1 | 1111
  0 | 0000
  1 | 0001
  2 | 0010
  3 | 0011
  4 | 0100
  5 | 0101
  6 | 0110
  7 | 0111
You can see here that 1 is 0001 (nothing changes), but -1 is 1111. The unary ^ operator is a bitwise complement, which is the same as XORing with all ones (-1). Therefore:
0001
1111 xor
-------
1110 -> That is actually -2.
All this is due to the two's complement convention that we use to do calculations with negative numbers. Of course, this extrapolates to longer binary numbers.
You can test this by doing a bitwise XOR in the Windows calculator (programmer mode).
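The same equivalence can be checked directly in Go; a minimal sketch (the variable names are mine):

package main

import "fmt"

func main() {
    x := 1
    fmt.Println(^x)     // -2
    fmt.Println(-1 ^ x) // -2: for signed integers, ^x is defined as -1 ^ x

    var b int8 = 1
    fmt.Printf("%08b\n", uint8(^b)) // 11111110: the two's-complement pattern of -2 in one byte
}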
If I run $((0x100 - 0 & 0xff)), I get 0.
However $((0x100 - 0)) gives me 256.
Why does the result of the first expression get truncated?
Because & is a bitwise operator, and there are no matching bits in 0x100 and 0xff.
What that means is that it looks at the bits that make up your numbers, and you get a 1 back only in the positions where both inputs have a 1.
So if you do $((0x06 & 0x03))
In binary you end up with
6 = 0110
3 = 0011
So when you bitwise AND those together, you'll get
0010 (binary) or 0x02
For the numbers you have, there are no bits in common:
0x100 in binary is
0000 0001 0000 0000
0xff in binary is
0000 0000 1111 1111
If you bitwise and them together, there are no matching bits, so you'll end up with
0000 0000 0000 0000
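Purely for illustration, here are the same two AND results written out in Go (Go's operator precedence around & differs from the shell's, so this only shows the AND results, not the full expression):

package main

import "fmt"

func main() {
    fmt.Printf("%#x\n", 0x06&0x03)  // 0x2: only the bits set in both operands survive
    fmt.Printf("%#x\n", 0x100&0xff) // 0x0: the two values share no bits
}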
Interestingly, the subtraction happens before the bitwise AND (I expected it the other way around); shell arithmetic follows C operator precedence, where binary - binds tighter than &.
$((0x100 - 1 & 0xff)) therefore gives 255, or 0xff, because 0x100 - 1 = 0xff and 0xff & 0xff = 0xff.
How can I find out whether a binary number is contained in a set, where an element of the set may have don't-care bits?
I thought about using a hash table, but the numbers with don't-care bits would need to be duplicated in the table to cover all the possibilities.
For example:
The set of numbers is:
0 00x1
1 10xx
2 110x
3 1010
4 11x1
5 0010
and the number is 0011, the result should be 0.
If the number of binary digits is limited, you can expand the don't-care bits, convert the binary numbers to integers, and then use those integers as map keys with the set indices as values.
Example
0 00x1
1 10xx
can be converted to
0001 0
0011 0
1000 1
1001 1
1010 1
1011 1
and saved as
i j
1 0
3 0
8 1
9 1
10 1
11 1
where i is the key and j is the value
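Here is one possible Go sketch of that expansion, assuming fixed-width patterns given as strings (the expand helper and the lookup map are names I made up for illustration; overlapping patterns are resolved in favor of the last one inserted):

package main

import "fmt"

// expand turns a pattern like "10xx" into every concrete value it can match
// and records the pattern's index in the lookup map.
func expand(pattern string, index int, lookup map[int]int) {
    for i, c := range pattern {
        if c == 'x' {
            // Substitute 0 and 1 for the first don't-care bit and recurse.
            expand(pattern[:i]+"0"+pattern[i+1:], index, lookup)
            expand(pattern[:i]+"1"+pattern[i+1:], index, lookup)
            return
        }
    }
    // No don't-care bits left: parse the binary string and store the index.
    v := 0
    for _, c := range pattern {
        v = v<<1 | int(c-'0')
    }
    lookup[v] = index
}

func main() {
    patterns := []string{"00x1", "10xx", "110x", "1010", "11x1", "0010"}
    lookup := map[int]int{}
    for i, p := range patterns {
        expand(p, i, lookup)
    }
    fmt.Println(lookup[0b0011]) // 0: the number 0011 matches pattern "00x1"
}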
Let's say you have the binary number 1xxx: it matches 8 numbers, so duplicating an entry for every option does not scale.
You have to keep the "do not care" bits somewhere. Use a second number for this and set the "do not care" bits to 1. Going over your example:
i x y
0 00x1 0010
1 10xx 0011
2 110x 0001
3 1010 0000
4 11x1 0010
5 0010 0000
And you need to decide what to use for x, 0 or 1. You can use either; as long as you keep the information in the second number, it does not matter.
Now use bitwise operations:
if ((n ^ x[i]) | y[i]) == y[i] then match
This solution checks for non-matching bits outside the don't-care positions: (n xor x[i]) gives the non-matching bits, and OR'ing that with y[i] must leave y[i] unchanged if all mismatches fall on don't-care bits.
If we go over your example, and assuming you choose 0 for x, the check becomes
i:0 -->> ((0011 ^ 0001) | 0010) == 0010 -->> match!
i:1 -->> ((0011 ^ 1000) | 0011) != 0011 -->> no match!
i:2 -->> ((0011 ^ 1100) | 0001) != 0001 -->> no match!
i:3 -->> ((0011 ^ 1010) | 0000) != 0000 -->> no match!
i:4 -->> ((0011 ^ 1101) | 0010) != 0010 -->> no match!
i:5 -->> ((0011 ^ 0010) | 0000) != 0000 -->> no match!
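In Go, the whole check could look like this (x and y hold the values from the table above: don't-care bits are 0 in x and 1 in y):

package main

import "fmt"

func main() {
    x := []uint{0b0001, 0b1000, 0b1100, 0b1010, 0b1101, 0b0010}
    y := []uint{0b0010, 0b0011, 0b0001, 0b0000, 0b0010, 0b0000}

    n := uint(0b0011)
    for i := range x {
        if ((n ^ x[i]) | y[i]) == y[i] { // mismatches may only fall on don't-care bits
            fmt.Println("match at index", i) // prints: match at index 0
        }
    }
}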
I am using an RS-232 HID reader.
Its manual says that its output is
CCDDDDDDDDDDXX
where CC is reserved for HID
DDDDDDDDDD is the transponder (the card) data
XX is a checksum
The checksum is well explained and irrelevant here. About DDDDDDDDDD it only says that valid values are 0000000000 to 1FFFFFFFFF, with no indication of how it converts to what is printed on the front face of the card.
I have 3 sample cards, sadly in a short range (edit: plus an extra one). Here they are:
read from RS-232   shown on card
00000602031C27 00398
00000602031F2A 00399
0000060203202B 00400
00000601B535F1 55962 **new
Also, I have a DB with 1000 cards loaded (what is printed on the front), so I need the decode path from what I read on RS-232 to what is printed on the front.
Some values from the DB (I have seen the cards, but I have no physical access to them now):
55503
60237
00833
Thanks a lot to everyone.
Googling for the string "CCDDDDDDDDDDXX" returns http://www.rfideas.com/downloads/SerialAppNote8.pdf, which seems to describe how to decode the numbers. I can't guarantee that it is accurate.
Decoding the Standard 26-bit Format
Message sent by the reader:
C C D D D D D D D D D D X X
---------------------------
0 0 0 0 0 6 0 2 0 3 1 C 2 7
0 0 0 0 0 6 0 2 0 3 1 F 2 A
0 0 0 0 0 6 0 2 0 3 2 0 2 B
0 0 0 0 0 6 0 1 B 5 3 5 F 1
Stripping off the checksum, X, and reducing the data to binary gives:
C C D D D D D D D D D D
cccc cccc zzzz zzzz zzzz zspf ffff fffn nnnn nnnn nnnn nnnp
-----------------------------------------------------------
0000 0000 0000 0000 0000 0110 0000 0010 0000 0011 0001 1100
0000 0000 0000 0000 0000 0110 0000 0010 0000 0011 0001 1111
0000 0000 0000 0000 0000 0110 0000 0010 0000 0011 0010 0000
0000 0000 0000 0000 0000 0110 0000 0001 1011 0101 0011 0101
All the Card Data Characters to the left of the 7th can be ignored.
c = HID Specific Code.
z = leading zeros
s = start sentinel (it is always a 1)
p = parity: even parity over the first 12 data bits, odd parity over the last 12 (12 bits each).
f = Facility Code 8 bits
n = Card Number 16 bits
From this we can see that
00000602031C27 → n = 0b0000000110001110 = 398
00000602031F2A → n = 0b0000000110001111 = 399
0000060203202B → n = 0b0000000110010000 = 400
00000601B535F1 → n = 0b1101101010011010 = 55962
So, for your example, we would probably get:
55503
(f, n) = 0b0000_0001__1101_1000_1100_1111
even parity of the first 12 bits = 0
odd parity of the last 12 bits = 0
result = 00000403b19e56
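Assuming that layout is right, here is a small Go sketch that pulls the facility code and card number out of the hex data; decodeCard is my own name, and the checksum characters are assumed to be stripped already:

package main

import (
    "fmt"
    "strconv"
)

// decodeCard extracts the 8-bit facility code and the 16-bit card number from
// the CCDDDDDDDDDD hex string, following the bit layout above: the lowest bit
// is the trailing parity, then 16 card-number bits, then 8 facility-code bits.
func decodeCard(hexData string) (facility, number uint64, err error) {
    raw, err := strconv.ParseUint(hexData, 16, 64)
    if err != nil {
        return 0, 0, err
    }
    number = (raw >> 1) & 0xFFFF  // skip the trailing parity bit, take 16 bits
    facility = (raw >> 17) & 0xFF // the next 8 bits are the facility code
    return facility, number, nil
}

func main() {
    // The four sample cards, with the two checksum characters removed.
    for _, s := range []string{"00000602031C", "00000602031F", "000006020320", "00000601B535"} {
        f, n, _ := decodeCard(s)
        fmt.Printf("%s -> facility %d, card %05d\n", s, f, n) // cards 00398, 00399, 00400, 55962
    }
}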
I get that you'd want to do something like take the first four bits and put them on a stack (reading from left to right), but then do you just put them in a register and shift them x times to move them to the right part of the number?
Something like
1000 0000 | 0000 0000 | 0000 0000 | 0000 1011
Stack: bottom - 1101 - top
shift it 28 times to the left
Then do something similar with the last four bits, but shift to the right and store in a register.
Then you OR that into a result value that starts out as 0.
Is there an easier way?
Yes there is. Check out the _byteswap functions/intrinsics, and/or the bswap instruction.
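For example, in Go the standard library already exposes this as math/bits.ReverseBytes32 (shown purely as an illustration; in C you would reach for the _byteswap_* intrinsics or __builtin_bswap32 instead):

package main

import (
    "fmt"
    "math/bits"
)

func main() {
    x := uint32(0x8000000B) // 1000 0000 | 0000 0000 | 0000 0000 | 0000 1011
    y := bits.ReverseBytes32(x)
    fmt.Printf("0x%08X\n", y) // 0x0B000080: the four bytes reversed in one call
}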
You could do it this way.
For example:
Input: 0010 1000, and I want the output
1000 0010
Store the input in a variable x:
#include <stdio.h>

int main(void) {
    unsigned char x = 0x28;   /* 0010 1000 */
    unsigned char i = x >> 4; /* high nibble moved down: 0000 0010 */
    unsigned char j = x << 4; /* low nibble moved up:    1000 0000 */
    unsigned char k = i | j;
    printf("%02X\n", k);      /* prints 82, i.e. 1000 0010 */
    return 0;
}