Lately, I've been struggling to find a way to convert hexadecimal strings into actual hexadecimal values. As an example, "0xffffffff" -> 0xffffffff. After loading the JSON file (JSON cannot store hexadecimal literals directly), I successfully converted the stored integer value, 4294967295, back to "0xffffffff" with the following example code:
hex_str = "0x" << 4294967295.to_s(16) #--> "0xffffffff"
The real frustration is that I cannot seem to find a Ruby way to recreate that hexadecimal value as anything other than a String... I really hope I'm not overlooking something. The reason I want non-string hexadecimal values is to use them in Gosu's color notation. I don't want to use Gosu's Color class with RGB values ([255, 255, 255]) because it slows performance drastically when many rectangular draw_quad() objects are generated in-game (it dropped from 60 fps to about 42 fps when 600 rects were drawn). The program ran at 60 fps when I hard-coded the hexadecimal values (not as strings), so I'm confident that using values in that format is the way to go. This is what I'm looking for:
hex_int = hex_str.some_function_to_hex #--> 0xffffffff
Could you share a way to convert 4294967295 to 0xffffffff directly?
You can pass an integer directly to Gosu::Color.new to create a color:
3.0.0 :002 > Gosu::Color.new(4294967295)
=> #<Gosu::Color:ARGB=0xff_ffffff>
Or use Gosu::Color.argb:
3.0.0 :003 > Gosu::Color.argb(4294967295)
=> #<Gosu::Color:ARGB=0xff_ffffff>
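If what you need is just the string-to-integer conversion itself, Ruby can parse the "0x" prefix natively; a minimal sketch:

hex_int = "0xffffffff".to_i(16)  # String#to_i with base 16 skips the "0x" prefix
#--> 4294967295
hex_int == 0xffffffff            #--> true; there is no separate "hex" integer type, 0xffffffff is just 4294967295
Integer("0xffffffff")            # Kernel#Integer infers base 16 from the prefix
#--> 4294967295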
I am developing software that uses an RFID reader with Ruby on Rails. After opening the socket and getting the tags, I convert the data to hexadecimal with:
while line = s.gets
  puts line.unpack('H*').to_s
end
Then I get "a55a0019833400393939393939303030303232fd6f02080d0a" for one tag.
The RFID reader's user manual says:
Remark: RSSI is expressed as a complement (two's complement) code, 16 bits total, which is 10 times the real value. For example, if the real value is -65.7 dBm, then RSSI = fd6f.
I have found online calculators (mathsisfun and calc.penjee.com) that convert fd6f to -657.
I would like to know how I can do this conversion in Ruby 2.3.1 so I can continue with my project.
Any help will be appreciated.
s> is the correct unpack directive for a 16-bit signed big-endian number, so:
"\xfd\x6f".unpack('s>')[0] / 10.0
Result is:
-65.7
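Starting from the hex string you already have, you can slice out the RSSI field and apply the same conversion (a minimal sketch; the offset 40 assumes the RSSI always sits at the same fixed position in the tag string):

tag = "a55a0019833400393939393939303030303232fd6f02080d0a"
rssi_hex = tag[40, 4]                                # => "fd6f" (assumed fixed offset)
rssi = [rssi_hex].pack('H*').unpack('s>')[0] / 10.0  # pack back to bytes, read as signed 16-bit big-endian
# => -65.7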
I have data that I would like to represent as comma10.2 when less than 1,000,000 and e10. when greater than or equal to 1,000,000. It seems like there might be a way to do this using the picture format, and while I'm at it I thought I might also make missing values show up as --. This is what I've got so far:
proc format;
  picture DashMiss . = '--' (noedit)
    low - <1000000 = "000,009.99"
    1000000 - high = ????;
run;
I'm not sure how to represent scientific notation using picture (hence the question marks). I don't have to just use picture if there's an easier way to do it.
I figured out how to use brackets to supply an existing format for that range:
proc format;
  picture DashMiss . = '--' (noedit)
    low - <1000000 = "000,009.99"
    1000000 - high = [e10.];
run;
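To sanity-check the format, something like this should work (a quick sketch; the dataset and variable names are made up):

data test;
  input x;
  datalines;
123456.78
.
2500000
;
run;

proc print data=test;
  format x DashMiss.;
run;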
I believe you could simply use the best6. format or bestd6.2 to achieve similar results. The BEST formats fall back to scientific notation whenever the value doesn't fit in the specified width.
I have a homework assignment in which I have to convert some images to grayscale and compress them using Huffman encoding. I converted them to grayscale, and then I tried to compress them, but I got an error. I used the code I found here.
Here is the code I'm using:
A=imread('Gray\36.png');
[symbols,p]=hist(A,unique(A))
p=p/sum(p)
[dict,avglen]=huffmandict(symbols,p)
comp=huffmanenco(A,dict)
This is the error I get. It occurs at the second line:
Error using eps
Class must be 'single' or 'double'.
Error in hist (line 90)
bins = xx + eps(xx);
What am I doing wrong?
Thanks.
P.S. How can I find the compression ratio for each image?
The problem is that when you specify the bin locations (the second input argument of hist), they need to be single or double. The vector A itself does not, though, which is nice because sometimes you don't want to convert your whole dataset from an integer type to floating point. This will fix your code:
[symbols,p]=hist(A,double(unique(A)))
This issue is discussed in more detail here.
First, try:
whos A
It seems like its type must be single or double. If it isn't, just do A = double(A) after the imread line. It should work that way; however, I'm surprised hist doesn't do the conversion itself...
[EDIT] I have just tested it, and I was right: hist won't work on uint8, but it works fine as soon as I convert the image to double.
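Regarding the P.S.: once the encoding works, a rough way to estimate the compression ratio (a sketch assuming A is a uint8 image and dict is a valid Huffman dictionary for its pixel values):

comp = huffmanenco(double(A(:)), dict);  % encode the pixels as one vector
orig_bits = numel(A) * 8;                % uint8 source: 8 bits per pixel
comp_bits = numel(comp);                 % huffmanenco returns one element per output bit
ratio = orig_bits / comp_bits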
I'm trying to read unsigned integers from a file (stored as consecutive bytes) and convert them to Integers. I've tried this:
file = File.new(filename,"r")
num = file.read(2).unpack("S") #read an unsigned short
puts num #value will be less than expected
What am I doing wrong here?
You're not reading enough bytes. As you say in the comment to tadman's answer, you get 202 instead of 3405691582
Notice that 202 is 0xCA, the first byte of 0xCAFEBABE.
If you really want all 8 bytes in a single number, then you need to read more than an unsigned short. Try:
num = file.read(8).unpack("L_")
The underscore assumes that the native long is going to be 8 bytes, which is definitely not guaranteed.
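On Ruby 1.9.3 or later you can sidestep the native-size guess by asking for an explicit width and byte order (an alternative sketch, assuming the file really stores 8-byte big-endian numbers):

num = file.read(8).unpack("Q>")[0]  # "Q>" = unsigned 64-bit, big-endian, regardless of platform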
How about looking in The Pickaxe? (Ruby 1.9, p. 44)
File.open("testfile")
do |file|
file.each_byte {|ch| print "#{ch.chr}:#{ch} " }
end
each_byte iterates over a file byte by byte.
There are a couple of libraries that help with parsing binary data in Ruby by letting you declare the data format in a simple, high-level declarative DSL; they then handle all the packing, unpacking, bit-twiddling, shifting, and endian conversions themselves.
I have never used either of them myself, but here are two examples, with a small BinData sketch after the list (there are more libraries out there, but I don't know them):
BitStruct
BinData
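For instance, a minimal BinData sketch (the record layout here is made up for illustration):

require 'bindata'  # gem install bindata

class FileHeader < BinData::Record
  endian :big
  uint32 :magic          # e.g. 0xCAFEBABE
  uint16 :minor_version
  uint16 :major_version
end

File.open("testfile", "rb") do |f|
  header = FileHeader.read(f)
  puts header.magic.to_i.to_s(16)
end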
Ok, I got it to work:
num = file.read(8).unpack("N")
Thanks for all of your help.
In what format are the numbers stored in the file? Are they in hex? Your code looks correct to me.
When dealing with binary data you need to be sure you're opening the file in binary mode if you're on Windows. This goes for both reading and writing.
open(filename, "rb") do |file|
  num = file.read(2).unpack("S")
  puts num
end
There may also be issues with endian encoding depending on the source platform. For instance, big-endian platforms include PowerPC-based machines (old Mac systems, IBM Power servers, PS3 clusters) and Sun SPARC servers.
Can you post an example of how it's "less"? Usually there's an obvious pattern to the data.
For example, if you want 0x1234 but you get 0x3412, it's an endian problem.
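A quick way to see the two byte orders side by side (a minimal sketch; the explicit-endian directives need Ruby 1.9.3+):

bytes = "\x12\x34"
bytes.unpack("S>")  # => [4660]   (0x1234, big-endian)
bytes.unpack("S<")  # => [13330]  (0x3412, little-endian)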
I've got a plist (created in Xcode) with an array full of "Numbers" (0.01, 1, 2, 6) that unpacks into NSValues when reconstituted with initWithContentsOfFile:. How can I turn these NSValues into NSDecimalNumbers that I can add together? They will be treated as currency values, so I only need a precision of 2 (maybe 4) decimal places.
I've tried saving the plist values as "String" instead of "Number" and using NSDecimalNumber's initWithString: to set the value, but NSValue doesn't respond to stringValue.
It seems like dealing with numbers is particularly confusing in Cocoa. So many container formats in so many frameworks... :-(
You should be able to directly store your numbers as strings in the property list. You don't need to do any NSValue wrapping for NSStrings when storing them in a plist. I'd recommend keeping the numbers in your application as NSDecimals or NSDecimalNumbers to avoid any floating-point errors, reading them from the plist using initWithString:locale:, and writing them to the plist using descriptionWithLocale:. Storing and retrieving the decimals as strings avoids any to-and-from floating point conversion errors.
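A minimal sketch of that round trip (the plist dictionary and key name are made up; the NSDecimalNumber methods are the standard ones mentioned above):

// Reading: the plist stores the amount as a string, e.g. @"19.99"
NSString *stored = [plistDict objectForKey:@"price"];  // hypothetical dictionary and key
NSDecimalNumber *price = [[NSDecimalNumber alloc] initWithString:stored
                                                           locale:[NSLocale currentLocale]];

// Writing: convert back to a string before saving
NSString *toStore = [price descriptionWithLocale:[NSLocale currentLocale]];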
The first lesson to learn is that when representing currency, you should use integers instead of floating-point numbers if you want any kind of accuracy. (Divide by 100.0 whenever you need to display cents, etc.) Computers are flawless with binary (base 2), but if you try to represent in binary something that can't be broken down into sums of 1/(2^n) terms, you'll run into precision errors. (Try 0.1 + 0.1 and see what you get.)
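To see the precision problem directly (a tiny sketch):

double sum = 0.1 + 0.1;
NSLog(@"%.20f", sum);  // prints 0.20000000000000001110..., not exactly 0.2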
That said, the XML tag within which you specify the number definitely makes a difference in how the values are interpreted in terms of Cocoa classes when you use something like -[NSArray initWithContentsOfFile:] to "reconstitute" it. Consult the plist man page and this Apple article for more details and examples.
To accomplish what you're asking, make sure you're using <real> or <integer> (and the matching closing tag) around the values in your plist. (Property List Editor and Xcode should automatically use the correct one based on whether the number has a decimal point.) In my tests, both real and integer numbers were read in as NSNumber objects. NSDecimalNumber is a subclass of NSNumber, but I'm not entirely sure how the toll-free bridging with CFNumber works in all cases. Experimentation is probably the best way to figure that out.
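For reference, the corresponding fragment of the plist XML looks something like this (using the values from the question):

<array>
    <real>0.01</real>
    <integer>1</integer>
    <integer>2</integer>
    <integer>6</integer>
</array>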