Hex String to Byte Array in Ruby [duplicate]

This question already has answers here:
Convert a string of 0-F into a byte array in Ruby
(2 answers)
Closed 8 years ago.
I am trying to convert a hex string into a byte array using Ruby.
48656c6c6f2c20576f726c6421 => 0100 1000 0110 0101 0110 1100 0110 1100 0110 1111 0010 1100 0010 0000 0101 0111 0110 1111 0111 0010 0110 1100 0110 0100 0010 0001 => [72, 101, ...]
Any suggestions on the best approach to do this?
This is what I have written so far, but I am not happy enough with it to continue; I suspect there is a much easier way:
binary_array = []
hex.each_char do |x|
  bin = x.hex.to_s(2)     # get the binary value for the hex digit
  val = bin.rjust(4, '0') # pad with zeros to get four digits
  binary_array.push(val)
end

"48656c6c6f2c20576f726c6421".to_i(16).to_s(2)
#=> "1001000011001010110110001101100011011110010110000100000010101110110111101110010011011000110010000100001"
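That one-liner yields the bit string, but not the byte array the question actually asks for. A minimal sketch of the direct route, using the question's input, with both a digit-pair parse and `Array#pack`:

```ruby
hex = "48656c6c6f2c20576f726c6421"

# Pair up hex digits and parse each pair as one byte...
bytes  = hex.scan(/../).map { |pair| pair.to_i(16) }

# ...or let pack("H*") decode the whole hex string at once.
bytes2 = [hex].pack("H*").bytes

bytes   # => [72, 101, 108, 108, 111, 44, 32, 87, 111, 114, 108, 100, 33]
bytes2  # => [72, 101, 108, 108, 111, 44, 32, 87, 111, 114, 108, 100, 33]
```

Both give the bytes of "Hello, World!", which is what the sample hex encodes.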

Related

Why does the arithmetic expression result get truncated?

If I run $((0x100 - 0 & 0xff)), I get 0.
However, $((0x100 - 0)) gives me 256.
Why is the result from the first expression truncated?
Because & is a bitwise operator, and there are no matching bits in 0x100 and 0xff.
What that means is that it compares the bits that make up your numbers, and you get a 1 back in each position where both inputs have a 1.
So if you do $((0x06 & 0x03))
In binary you end up with
6 = 0110
3 = 0011
So when you bitwise AND those together, you get
0010 (binary), or 0x02
For the numbers you have, there are no bits in common:
0x100 in binary is
0000 0001 0000 0000
0xff in binary is
0000 0000 1111 1111
If you bitwise and them together, there are no matching bits, so you'll end up with
0000 0000 0000 0000
Interestingly, the subtraction happens before the bitwise AND (I expected it to be the other way around); shell arithmetic follows C operator precedence, where - binds tighter than &:
$((0x100 - 1 & 0xff)) gives 255, or 0xff, because 0x100 - 1 = 0xff
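Ruby shares the shell's C-derived precedence here, so the same expressions can be checked interactively; a quick sketch:

```ruby
0x06 & 0x03       # => 2    (0110 & 0011 = 0010)
0x100 & 0xff      # => 0    (no bits in common)
0x100 - 0 & 0xff  # => 0    subtraction binds tighter: (0x100 - 0) & 0xff
0x100 - 1 & 0xff  # => 255  (0x100 - 1) & 0xff = 0xff & 0xff
```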

Why does x3103 have the value x1482 at the end

Edit: The original question is this
Suppose the following LC-3 program is loaded into memory starting at
location x30FF:
x30FF 1110 0010 0000 0001
x3100 0110 0100 0100 0010
x3101 1111 0000 0010 0101
x3102 0001 0100 0100 0001
x3103 0001 0100 1000 0010
If the program is executed, what is the value in R2 at the end of
execution?
x30FF 1110 0010 0000 0001 ; R1 <- PC' + 1 ; R1 <- x3101
x3100 0110 0100 0100 0010 ; R2 <- mem[R1 + 2] ; R2 <- mem[x3103] = x1482
x3101 1111 0000 0010 0101 ; TRAP x25 = HALT
x3102 0001 0100 0100 0001 ; x1441
x3103 0001 0100 1000 0010 ; x1482
The question is what is the content of R2 at the end of the program
In this problem I understand everything until x3100
However, I don't understand what mem[R1+2] means, or how x3102 comes to hold x1441 and x3103 the value x1482.
As far as I can tell, nothing is loaded into R2 at any point.
Where do x1441 and x1482 come from?
Can somebody explain how R2 ends up with x1482 in it?
Going over the machine language you posted.
The first instruction, LEA R1, 1, simply stores PC + 1 into R1. Since the PC will be x3100 at the time that instruction is executed, x3101 is stored into R1.
The second instruction, LDR R2, R1, 2, takes R1's value, adds 2,
loads from memory at the resulting address, and stores that value in R2. R1's value is x3101, and x3101 + 2 is x3103, so whatever is at address x3103 will be stored in R2. Since you posted that x3103 contains x1482, that is what gets stored in R2.
The phrasing mem[R1+2] means to load from memory at the address computed by taking R1's value and adding 2 to it.
From your edit, yeah the x1441 and x1482 appear to just be data.
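One quick way to see that x3102 and x3103 are plain data words is to convert their binary contents to hex yourself; a sketch in Ruby:

```ruby
# Contents of locations x3102 and x3103, as given in the question.
words = ["0001010001000001", "0001010010000010"]

words.map { |w| format("x%04X", w.to_i(2)) }
# => ["x1441", "x1482"]
```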

Self complementing Codes

This statement was deemed true: Given any self-complementing decimal code scheme, if we know the codes for the number 283, then we can deduce the codes for 671.
I want to know why. I took Excess-3 BCD as the self-complementing code:
0-0011
1-0100
2-0101
3-0110
4-0111
5-1000
6-1001
7-1010
8-1011
9-1100
So 283 = 0101 1011 0110.
671 = 1001 1010 0100
So why does the statement hold, when the Excess-3 code for 283 is not the 1's complement of the Excess-3 code for 671?
Since it is a self-complementing decimal code scheme, the code for the 9's complement of 283 can be obtained by taking the 1's complement of the code for 283.
9's complement of 283 = 716
283 = 0101 1011 0110, so its 1's complement, 1010 0100 1001, will be the code for 716.
From this: the code for 7 = 1010, for 1 = 0100, and for 6 = 1001.
So code for 671 = 1001 1010 0100
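The derivation above can be checked mechanically. A sketch that builds the Excess-3 table (digit + 3, four bits) and verifies that the 1's complement of 283's code is exactly 716's code:

```ruby
# Excess-3 code for each decimal digit: the digit plus 3, as 4 bits.
XS3 = (0..9).map { |d| format("%04b", d + 3) }

encode = ->(num) { num.chars.map { |c| XS3[c.to_i] }.join }

code_283 = encode.call("283")       # "010110110110"
code_716 = code_283.tr("01", "10")  # 1's complement (flip every bit)
code_716 == encode.call("716")      # => true  (the self-complement property)
```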

decoding HID data

I am using an rs232 HID reader.
Its manual says that its output is
CCDDDDDDDDDDXX
where CC is reserved for HID,
DDDDDDDDDD is the transponder (card) data,
and XX is a checksum.
The checksum is well explained and irrelevant here. About DDDDDDDDDD it only says valid values are 0000000000 to 1FFFFFFFFF, with no indication of how that converts to what is printed on the front face of the card.
I have 3 sample cards, sadly covering only a short range (edit: plus an extra one). Here they are:
read from rs232    shown on card
00000602031C27 00398
00000602031F2A 00399
0000060203202B 00400
00000601B535F1 55962 **new
Also I have a DB with 1000 cards loaded (what is printed on the front), so I need the decode path from what I read on rs232 to what is printed on the front.
Some values from the DB (I have seen the cards, but I have no physical access to them now):
55503
60237
00833
Thanks a lot to everyone.
Googling for the string "CCDDDDDDDDDDXX" returns http://www.rfideas.com/downloads/SerialAppNote8.pdf which seems to describe how to decode the numbers. I can't guarantee that it is accurate.
Decoding the Standard 26-bit Format
Message sent by the reader:
C C D D D D D D D D D D X X
---------------------------
0 0 0 0 0 6 0 2 0 3 1 C 2 7
0 0 0 0 0 6 0 2 0 3 1 F 2 A
0 0 0 0 0 6 0 2 0 3 2 0 2 B
0 0 0 0 0 6 0 1 B 5 3 5 F 1
Stripping off the checksum, X, and reducing the data to binary gives:
C C D D D D D D D D D D
cccc cccc zzzz zzzz zzzz zspf ffff fffn nnnn nnnn nnnn nnnp
-----------------------------------------------------------
0000 0000 0000 0000 0000 0110 0000 0010 0000 0011 0001 1100
0000 0000 0000 0000 0000 0110 0000 0010 0000 0011 0001 1111
0000 0000 0000 0000 0000 0110 0000 0010 0000 0011 0010 0000
0000 0000 0000 0000 0000 0110 0000 0001 1011 0101 0011 0101
All the Card Data Characters to the left of the 7th can be ignored.
c = HID Specific Code.
z = leading zeros
s = start sentinel (it is always a 1)
p = parity: even parity over the first 12 data bits, odd parity over the last 12.
f = Facility Code 8 bits
n = Card Number 16 bits
From this we can see that
00000602031C27 → n = 0b0000000110001110 = 398
00000602031F2A → n = 0b0000000110001111 = 399
0000060203202B → n = 0b0000000110010000 = 400
00000601B535F1 → n = 0b1101101010011010 = 55962
So, for your example, we would probably get:
55503
(f, n) = 0b0000_0001__1101_1000_1100_1111
even parity over the first 12 bits = 0
odd parity over the last 12 bits = 0
result = 00000403b19e56
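Going the other direction, the card number and facility code can be pulled out of the reader string with a few shifts, assuming the 26-bit layout above (start sentinel, even parity bit, 8 facility bits, 16 card-number bits, odd parity bit, all in the low bits of the data field). A sketch; decode26 is a made-up helper name:

```ruby
# Assumes the message layout CC + 10 hex data chars + 2-char checksum.
def decode26(msg)
  frame = msg[2, 10].to_i(16)   # drop the CC prefix and checksum suffix
  card  = (frame >> 1) & 0xFFFF # 16-bit card number (skip trailing parity bit)
  fac   = (frame >> 17) & 0xFF  # 8-bit facility code
  [fac, card]
end

decode26("00000602031C27")  # => [1, 398]
decode26("00000601B535F1")  # => [0, 55962]
```

The first three sample cards share facility code 1; the fourth decodes to facility 0, card 55962, matching the edit.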

Are these endian transformations correct?

I am struggling to figure this out. I am trying to represent a 32-bit variable in both big and little endian. For the sake of argument, let's say we try the number 666.
Big Endian: 0010 1001 1010 0000 0000 0000 0000
Little Endian: 0000 0000 0000 0000 0010 1001 1010
Is this correct, or is my thinking wrong here?
666 (decimal) as 32-bit binary is represented as:
[0000 0000] [0000 0000] [0000 0010] [1001 1010] (big endian, most significant byte first)
[1001 1010] [0000 0010] [0000 0000] [0000 0000] (little endian, least significant byte first)
(I have used square brackets to group 4-bit nibbles into bytes)
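Ruby's pack directives make the two byte orders easy to inspect (a sketch; "N" is 32-bit big-endian, "V" is 32-bit little-endian):

```ruby
format("%032b", 666)          # => "00000000000000000000001010011010"

[666].pack("N").unpack("C*")  # => [0, 0, 2, 154]   big-endian byte order
[666].pack("V").unpack("C*")  # => [154, 2, 0, 0]   little-endian byte order
```

Note that only the byte order changes; the bits within each byte (0x9A = 154, 0x02 = 2) stay the same.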
