How to convert a signed integer to an unsigned integer in Ruby?

I have a number that I received from a C program, and it came to me as a negative number:
-1771632774
It's supposed to be this number:
2523334522
I realized that this must be due to some conversion between a signed integer and an unsigned integer. Now that I have this negative number in Ruby, how can I convert it back to the unsigned version?

Put the negative integer in an array. Call pack with an argument of 'L' which represents "32-bit unsigned, native endian (uint32_t)". Call unpack with the same argument. Finally, get the number out of the array.
[-1771632774].pack('L').unpack('L').first
#=> 2523334522
http://ruby-doc.org/core-2.4.0/Array.html#method-i-pack
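An alternative, assuming the value is known to be exactly 32 bits wide, is to mask with 0xFFFFFFFF. Ruby integers behave like arbitrarily wide two's-complement values, so the mask keeps just the low 32 bits, reinterpreted as unsigned:
-1771632774 & 0xFFFFFFFF
#=> 2523334522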

Related

Converting 2 bytes to an integer in VB 6

I need to convert 2 bytes to an integer in VB6
I currently have the byte array as:
bytArray(0) = 26
bytArray(1) = 85
The resulting number should, I assume, be 21786.
I need these 2 turned into an integer so I can convert to a single and do additional arithmetic on it.
How do I get the integer of the 2 bytes?
If your assumed value is correct, the pair of array elements is stored in little-endian format, so the following will convert the two array elements into a signed short integer.
Dim Sum As Integer
Sum = bytArray(0) + bytArray(1) * 256
Note that if the result would exceed 32,767 (i.e. bytArray(1) >= 128), you'll get an overflow error.
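As a quick check (shown in Ruby, the language of the original question, purely for illustration), the same little-endian combination of the two example bytes gives:
[26, 85].pack('C*').unpack('s<').first   # 26 + 85 * 256
#=> 21786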
You don't have to convert to an Integer first; you can go directly to a Single, using the logic shown by @MarkL:
Dim Sngl As Single
Sngl = (bytArray(1) * 256!) + bytArray(0)
Edit: As @BillHileman notes, this will give an unsigned result. Do as he suggests to make it signed.

How MKI$ and CVI Functions works

I am working in GW-BASIC and want to know how CVI("aa") returns 24929. Does it convert each character to its ASCII code? But the ASCII codes of "aa" would be 97 and 97, i.e. 9797.
CVI converts between a GW-BASIC integer and its internal representation in bytes. That internal representation is a 16-bit little-endian signed integer, so that the value you find is the same as ASC("a") + 256*ASC("a"), which is 97 + 256*97, which is 24929.
MKI$ is the opposite operation of CVI, so that MKI$(24929) returns the string "aa".
The 'byte reversal' is a consequence of the little endianness of GW-BASIC's internal representation of integers: the leftmost byte of the representation is the least significant byte, whereas in hexadecimal notation you would write the most significant byte on the left.
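The same little-endian, signed 16-bit interpretation can be checked in Ruby (shown here only for illustration, since the question is about GW-BASIC):
"aa".unpack('s<').first   # like CVI("aa")
#=> 24929
[24929].pack('s<')        # like MKI$(24929)
#=> "aa"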

Why don't we use "unsigned int" instead of "char" to implement a big integer class?

I used to implement something acting as a very large integer using char. But it suddenly occurred to me that I could use unsigned int, which is more straightforward to implement.
For example, each unsigned int stores a value of at most 9,999,999, and the most significant decimal digit serves as a buffer for carrying into the "next" unsigned int.
Thus, I can store 7 digits in 4 bytes, instead of only 4 digits when using char.
So why don't we implement a big integer class with unsigned int?
Who says you don't use uint for BigInteger? The C# BCL implementation of BigInteger (in System.Numerics) uses uint[] to store its bits.
In general, it is more efficient to use the bits to represent the number itself than to represent its decimal digits as characters.
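The base-10,000,000 limb scheme described in the question could look roughly like this; a minimal sketch in Ruby, where the base and helper name are purely illustrative:
BASE = 10_000_000  # each limb holds at most 7 decimal digits

# Add two numbers stored as arrays of limbs, least significant limb first
def add_limbs(a, b)
  result = []
  carry = 0
  [a.length, b.length].max.times do |i|
    sum = (a[i] || 0) + (b[i] || 0) + carry
    result << sum % BASE
    carry = sum / BASE
  end
  result << carry if carry > 0
  result
end

add_limbs([9_999_999, 1], [1])  # 19,999,999 + 1
#=> [0, 2]                      # i.e. 20,000,000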

converting a wire value to an integer in verilog

I want to convert the data in a wire to an integer. For example:
wire [2:0] w = 3'b101;
I want a method that converts this to '5' and stores it in an integer. How can I do that in a better way than this:
a = 0;
j = 1;
// accumulate each bit of w weighted by its power of two
for (i = 0; i <= 2; i = i + 1)
begin
    a = a + (w[i] * j);
    j = j * 2;
end
This seems like a clumsy way. Also, how do I convert it back to binary once I have the value in an integer? Thank you.
Easy! Conversion is automatic in Verilog if you assign to an integer. In Verilog, all the data types are just collections of bits.
integer my_int;

always @( w )
    my_int = w;   // the 3-bit vector is zero-extended into the 32-bit integer
As Marty said: conversion between bit vectors and integers is automatic.
But there are a number of pitfalls. They are obvious if you keep in mind that an integer is a 32 bit signed value.
Don't try to assign e.g. a 40 bit value to an integer.
Bit vectors are unsigned by default, so a 32-bit vector may become negative when assigned to an integer.
The opposite is also true: a negative integer, e.g. -3, assigned to an 8-bit vector will become 8'b11111101.
I don't know why you want to convert to an integer and back. I just want to point out that arithmetic operations are fully supported on signed and unsigned bit vectors. In fact they are more powerful as there is no 32-bit limit:
reg [127:0] huge_counter;
...
always @(posedge clk)
    huge_counter <= huge_counter + 128'h1;
Also using a vector as index is supported:
wire [11:0] address;
reg [ 7:0] memory [0:4095];
...
assign read_data = memory[address];

Reading a binary 16-bit signed (big-endian) integer in Ruby

I tried to read from a file where numbers are stored as 16-bit signed integers in big-endian format.
I used unpack to read in the number, but there is no parameter for a 16-bit signed integer in big-endian format, only for an unsigned integer. Here is what I have so far:
number = f.read(2).unpack('s')[0]
Is there a way to interpret the number above as a signed integer or another way to achieve what I want?
I don't know if it's possible to use String#unpack for that, but to convert a 16-bit unsigned value to signed, you can use the classical method:
>> value = 65534
>> (value & ~(1 << 15)) - (value & (1 << 15))
=> -2
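Wrapped in a small helper (just a sketch; the method name is only for illustration), the same bit twiddling reads a little more clearly:
# Reinterpret an unsigned 16-bit value as a signed 16-bit value
def to_signed16(value)
  (value & 0x7FFF) - (value & 0x8000)
end

to_signed16(65534)  #=> -2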
Use BinData and there's no need for bit twiddling.
BinData::Int16be.read(io)
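A fuller sketch, assuming the bindata gem is installed (the file name is just an example):
require 'bindata'

File.open('data.bin', 'rb') do |io|
  number = BinData::Int16be.read(io)  # 16-bit signed, big-endian
  puts number
end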
Found a solution that works by reading two 8-bit unsigned integers and combining them into a 16-bit big-endian integer:
bytes = f.read(2).unpack('CC')
elevation = bytes[0] << 8 | bytes[1]
Apparently, since Ruby 1.9.3, you can suffix the s directive with an endianness modifier, like so:
io.read(2).unpack('s>')
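A quick round trip shows the difference between the signed and unsigned big-endian directives (a sketch using a literal value rather than a file):
data = [-2].pack('s>')     # => "\xFF\xFE"
data.unpack('s>').first    # => -2     (signed, big-endian)
data.unpack('n').first     # => 65534  (unsigned, big-endian)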
