I have a hex value (0x0020004E0000, which is the base address of a hardware device). I need to add 0x04 to the base for each register. I have been doing this by first converting the base address to a base-10 number, adding 4 to that value, and then converting the sum back to hex, all via the String class's .to_s and .to_i.
Is there a better way to do this so I'm not converting back and forth between base 10 and base 16 all the time? (FYI, in my previous AppleScript script, I punted hex math to the OS and let bc take care of the addition for me.)
0x0020004E0000 + 0x04
or simply
0x0020004E0000 + 4
You have four ways of representing integer literals in Ruby:
64        # decimal
0x40      # hexadecimal
0100      # octal
0b1000000 # binary
# These are all 64.
A number is a number is a number, no matter how it is represented internally or displayed to the user. Just add them as you would any other number. If you want to view them as Hex later then fine; format them for output.
You are confusing representation with value. Ruby can parse a number represented in Hex just as well as it can parse a decimal, binary, or octal value.
0x04 (hex) == 4 (decimal) == 0b100 (binary)
All the same thing.
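For example, a minimal sketch of the register math (the method name register_address is hypothetical; the base address and the 0x04 stride come from the question):
BASE = 0x0020004E0000    # hardware base address, straight from the question
STRIDE = 0x04            # each register sits 4 bytes further along

# Address of the nth register: ordinary integer arithmetic, no base conversion.
def register_address(n)
  BASE + n * STRIDE
end

register_address(1)                     # => 137444065284 (a plain Integer)
format('0x%012X', register_address(1))  # => "0x0020004E0004" (hex only for display)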
I want to know how many characters or numbers I can store in 1 bit only. It would be more helpful if you could explain it in octal and hexadecimal.
I want to know how many characters or numbers I can store in 1 bit only.
It is not practical to use a single bit to store numbers or characters. However, you could say:
One integer, provided that the integer is in the range 0 to 1.
One ASCII character, provided that the character is either NUL (0x00) or SOH (0x01).
The bottom line is that a single bit has two states: 0 and 1. Any value domain with more than two values in it cannot be represented using a single bit.
It would be more helpful if you could explain it in octal and hexadecimal.
That is not relevant to the problem. Octal and hexadecimal are different textual representations for numeric data. They make no difference to the meaning of the numbers, or (in most cases¹) the way that you represent the numbers in a computer.
¹ The exception is when you are representing numbers as text; e.g. when you represent the number 42 in a text document as the character '4' followed by the character '2'.
A bit is a "binary digit": a value from a set of size two. With n bits, you can represent 2 to the power of n distinct values, so 2¹ gives 2. The field of mathematics concerned with this kind of counting is called combinatorics.
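In Ruby terms (a trivial illustration, nothing here beyond core Integer arithmetic):
2**1   # => 2    distinct values in one bit  (0 and 1)
2**8   # => 256  distinct values in one byte (0..255)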
If we have to store 'A' in memory, then the ASCII value of 'A' is 65; 65 will be converted to binary 01000001 and then stored. So my question is: when we store integers, are they also converted according to their ASCII value and then to binary?
And if not, then why do we have ASCII values for numbers?
Integers are stored in memory directly in binary, typically as fixed-size 32-bit values. If an int doesn't need all 32 bits, the unused high-order bits are filled with 0s. There is no conversion to ASCII unless you are explicitly turning an integer into text (or asking for the ASCII code of a digit character).
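A quick irb session makes the distinction concrete (only core String and Integer methods are used here):
65.to_s(2)   # => "1000001"   the integer 65 in binary; no ASCII involved
'A'.ord      # => 65          the character 'A' is stored as the byte 65
'65'.bytes   # => [54, 53]    the *text* "65" is two ASCII bytes, '6' and '5'
'6'.ord      # => 54          which is why digit characters need ASCII codes too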
I am trying to read a 2-byte value that is stored lower-order byte first. So if the 2 bytes are 10 and 85 in that order (decimal), what is the number they represent? 8510? 851? Or something different?
These values represent the length of an encoded sequence, and I need to know the length to properly handle the information it contains. Most only use the first byte, and as a decimal number it accurately represents the total number of characters (or bytes) in the sequence... but some use both bytes and I don't understand how to interpret them.
If anyone can help me with this I would appreciate it. Thanks.
What you're referring to is the "endianness" of the two-byte value (also called a WORD). It's generally not helpful to discuss the bytes in decimal, and no: if the bytes are 10 and 85 in decimal, the combined value is not any digit-wise combination of 10 and 85. Showing them as hexadecimal values is far clearer:
+----+----+
|0x0a|0x55|
+----+----+
To interpret this as Little Endian, the value would be 0x550a (21770 in decimal). And in Big Endian (highest order byte first), the value is 0x0a55 (2645 decimal).
It's very important, for this reason, to know the endianness of your system and be able to properly handle the two. It is also noteworthy that "Network Byte Order" is Big Endian.
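In Ruby, String#unpack can decode either byte order directly; the standard 'v' directive reads a 16-bit little-endian unsigned value, and 'n' reads the 16-bit big-endian (network order) equivalent:
bytes = [0x0a, 0x55].pack('C*')   # the two raw bytes from the diagram above
bytes.unpack('v').first   # => 21770   little-endian, i.e. 0x550a
bytes.unpack('n').first   # => 2645    big-endian, i.e. 0x0a55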
I have a binary string that I need to convert to a hexadecimal string. I have this code that does it pretty well:
binary = '0000010000000000000000000000000000000000000000000000000000000000'
binary.to_i(2).to_s(16)
This normally works, but in this situation the first four zeros, which represent the first hexadecimal digit, are left out. So instead of
0400000000000000 it shows 400000000000000.
Now, I know I can loop through the binary string manually and convert 4 bits at a time, but is there a simpler way of getting to my wanted result of '0400000000000000'?
Would rjust(16,'0') be my ideal solution?
Use string formatting; it handles the zero padding for you:
"%016x" % binary.to_i(2)
# => "0400000000000000"
You can use this:
binary = "0000010000000000000000000000000000000000000000000000000000000000"
[binary].pack("B*").unpack("H*").first
# => "0400000000000000"
binary.to_i(2) converts the value to a number, and a number knows nothing about leading zeros. pack("B*") converts each group of 8 bits to a byte, giving you a binary-encoded String; 8 zero bits become "\x00", a zero byte. So unlike the number, the string preserves the leading zeros. unpack("H*") then converts each group of 4 bits into its hex representation. See Array#pack and String#unpack for more information.
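To see the intermediate steps of that explanation in irb:
[binary].pack('B*')       # => "\x04\x00\x00\x00\x00\x00\x00\x00"  raw bytes, zeros intact
binary.to_i(2).to_s(16)   # => "400000000000000"  the number form drops the leading zero digit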
Can someone explain how the result for the following unpack is computed?
"aaa".unpack('h2H2') #=> ["16", "61"]
In binary, 'a' = 0110 0001. I'm not sure how the 'h2' can become 16 (0001 0000) or 'H2' can become 61 (0011 1101).
Not 16: it is showing 1 and then 6. h gives the hex value of each nibble, low nibble first, so for 0x61 (0110 0001) you get 0001 (1) and then 0110 (6). H takes the high nibble first, so you get 61, which is hex for 97, the ASCII value of 'a'.
Check out the Programming Ruby reference on unpack. Here's a snippet:
Decodes str (which may contain binary data) according to the format string, returning an array of each value extracted. The format string consists of a sequence of single-character directives, summarized in Table 22.8 on page 379. Each directive may be followed by a number, indicating the number of times to repeat with this directive. An asterisk ("*") will use up all remaining elements. The directives sSiIlL may each be followed by an underscore ("_") to use the underlying platform's native size for the specified type; otherwise, it uses a platform-independent consistent size. Spaces are ignored in the format string. See also Array#pack on page 286.
And the relevant characters from your example:
H Extract hex nibbles from each character (most significant first).
h Extract hex nibbles from each character (least significant first).
The hex code of the char 'a' is 61.
Template h2 is a hex string (low nibble first); H2 is the same with the high nibble first.
Also see the Perl documentation.
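To make the nibble ordering concrete, a minimal irb walkthrough for a single 'a':
'a'.ord                   # => 97, i.e. 0x61: high nibble 6, low nibble 1
format('%08b', 'a'.ord)   # => "01100001" (0110 = 6, 0001 = 1)
'a'.unpack('H2').first    # => "61"  high nibble first
'a'.unpack('h2').first    # => "16"  low nibble first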