This question already has answers here:
Arbitrary precision arithmetic with Ruby
(5 answers)
Closed 8 years ago.
Ruby can store extremely large numbers. Now that I think about it though, I don't even know how that's possible.
Computers store data as a series of binary digits (0s and 1s); this is referred to as binary notation. However, there seems to be a limit to the size of the numbers they can store.
Most current operating systems are 64-bit, which I take to mean that the largest native integer a variable can occupy is 64 bits.
Integers are stored in a base 2 system, which means the highest value a computer should be able to store is
1111111111111111111111111111111111111111111111111111111111111111
Since each bit can only take 2 possible values, the number above works out to
2^64 - 1
This means that the highest value a 64-bit unsigned integer can hold should be 18,446,744,073,709,551,615.
I honestly don't even understand how it's possible to store integer values higher than that.
Ruby uses Bignum objects to store numbers too large to fit in a Fixnum (roughly, a single machine word). You can see here a description of how this works:
On the left, you can see RBignum contains an inner structure called
RBasic, which contains internal, technical values used by all Ruby
objects. Below that I show values specific to Bignum objects: digits
and len. digits is a pointer to an array of 32-bit values that contain
the actual big integer’s bits grouped into sets of 32. len records how
many 32-bit groups are in the digits array. Since there can be any
number of groups in the digits array, Ruby can represent arbitrarily
large integers using RBignum.
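For a quick feel of this in practice, here is a minimal irb-style example (on Ruby 2.4+ every integer reports its class as Integer; older Rubies report Bignum once a value outgrows a Fixnum):
2 ** 64                 #=> 18446744073709551616
2 ** 100                #=> 1267650600228229401496703205376
(2 ** 100).class        #=> Integer (Bignum on Rubies before 2.4)
(2 ** 100).bit_length   #=> 101, i.e. spread across several of those 32-bit groups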
Related
I want to write a program to convert hexadecimal numbers into their decimal forms without using a variable of fixed length to store the result because that would restrict the range of inputs that my program can work with.
Let's say I were to use a variable of type long long int to calculate, store and print the result. Doing so would limit the range of hexadecimal numbers that my program can handle to the signed 64-bit range, 0x8000000000000000 through 0x7FFFFFFFFFFFFFFF. Anything outside this range would cause the variable to overflow.
I did write a program that calculates and stores the decimal result in a dynamically allocated string by performing carry and borrow operations but it runs much slower, even for numbers that are as big as 7FFFFFFFF!
Then I stumbled onto this site which could take numbers that are way outside the range of a 64 bit variable. I tried their converter with numbers much larger than 16^65 - 1 and still couldn't get it to overflow. It just kept on going and printing the result.
I figured that they must be using a much better algorithm for hex to decimal conversion, one that isn't limited to 64 bit values.
So far, Google's search results have only led me to algorithms that use some fixed-length variable for storing the result.
That's why I am here. I want to know whether such an algorithm exists and, if it does, what it is.
Well, it sounds like you already did it when you wrote "a program that calculates and stores the decimal result in a dynamically allocated string by performing carry and borrow operations".
Converting from base 16 (hexadecimal) to base 10 means implementing multiplication and addition of numbers in a base 10^x representation. Then for each hex digit d, you calculate result = result*16 + d. When you're done you have the same number in a base-10 representation that is easy to write out as a decimal string.
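For example, converting the two hex digits of 0x2A one at a time:
result = 0
result = result * 16 + 0x2   #=> 2
result = result * 16 + 0xA   #=> 42, i.e. 0x2A in decimal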
There could be any number of reasons why your string-based method was slow. If you post it, I'm sure someone could comment.
The most important trick for making it reasonably fast, though, is to pick the right base to convert to and from. I would probably do the multiplication and addition in base 10^9, so that each digit will be as large as possible while still fitting into a 32-bit integer, and process 7 hex digits at a time, which is as many as I can while only multiplying by single digits.
For every 7 hex digits, I'd convert them to a number d, and then do result = result * (16^7) + d.
Then I can get the 9 decimal digits for each resulting digit in base 10^9.
This process is pretty easy, since you only have to multiply by single digits. I'm sure there are faster, more complicated ways that recursively break the number into equal-sized pieces.
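Putting those pieces together, here is a hedged sketch in Ruby, chosen only to show the algorithm (the question is about C, where the per-step products need a 64-bit intermediate type). The function name hex_to_decimal and the chunking details are mine for illustration, not the poster's code:

BASE = 10 ** 9    # each array entry holds 9 decimal digits and fits in 32 bits

# Convert a bare hex string (no "0x" prefix) to a decimal string without ever
# holding the whole value in a fixed-width integer.
def hex_to_decimal(hex)
  digits = [0]                          # result in base 10^9, least significant first
  hex.scan(/.{1,7}/).each do |chunk|    # 7 hex digits per step; 16^7 < 10^9
    multiplier = 16 ** chunk.length     # the final chunk may be shorter than 7
    carry = chunk.to_i(16)
    digits.each_with_index do |d, i|    # digits = digits * multiplier + chunk value
      carry += d * multiplier           # in C this product needs a 64-bit type
      digits[i] = carry % BASE
      carry /= BASE
    end
    while carry > 0                     # grow the array as the number gets longer
      digits << carry % BASE
      carry /= BASE
    end
  end
  digits.reverse.map.with_index { |d, i| i.zero? ? d.to_s : d.to_s.rjust(9, "0") }.join
end

hex_to_decimal("7FFFFFFFFFFFFFFF")   #=> "9223372036854775807"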
This question already has answers here:
Why does floating-point arithmetic not give exact results when adding decimal fractions?
(31 answers)
Closed 4 years ago.
I was trying to solve a mathematical problem:
37.9 - 6.05
Ruby gives me:
37.9 - 6.05
#=> 31.849999999999998
37.90 - 6.05
#=> 31.849999999999998
37.90 + 6.05
#=> 43.949999999999996
Why am I getting this?
In a nutshell, computers have trouble working with real numbers and use a
floating-point representation to deal with them. Much in the same way that you can only represent 256 natural numbers with 8 bits, you can only represent a fixed, finite set of real numbers with 64 bits. For more details on this, read http://floating-point-gui.de/ or google for "floating point arithmetic".
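For instance, a quick check in Ruby shows that the stored double for 0.1 is only the nearest representable binary fraction, which is why sums drift:
format("%.20f", 0.1)    #=> "0.10000000000000000555"
0.1 + 0.2               #=> 0.30000000000000004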
How should I deal with that?
Never store currency values in floating-point variables. Use BigDecimal (sketched below) or do your calculations in cents using only integers.
Use round to trim your floats to a user-friendly length for display. Rounding errors will still occur, especially when adding up a lot of floats.
In SQL systems, use the decimal data type, or use integers and divide them by a constant factor in the UI (say you need 3 decimal digits: you could store 1234 as an integer and display it as 1.234).
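A minimal Ruby sketch of the BigDecimal and round suggestions above:
require "bigdecimal"
require "bigdecimal/util"    # adds String#to_d

37.9 - 6.05                              #=> 31.849999999999998 (plain floats)
("37.9".to_d - "6.05".to_d).to_s("F")    #=> "31.85" (BigDecimal stays exact in decimal)
(37.9 - 6.05).round(2)                   #=> 31.85  (float, rounded for display)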
I'm working on a problem out of Cracking The Coding Interview which requires that I swap odd and even bits in an integer with as few instructions as possible (e.g. bits 0 and 1 are swapped, bits 2 and 3 are swapped, etc.).
The author's solution revolves around using a mask to grab, in one number, the odd bits, and in another number the even bits, and then shifting each set over by 1.
I get her solution, but I don't understand how she grabbed the even/odd bits. She creates two bit masks, both in hex, for a 32-bit integer: 0xaaaaaaaa and 0x55555555. I understand she's essentially creating the equivalent of 1010101010... for a 32-bit integer in hexadecimal and then ANDing it with the original number to grab the even/odd bits respectively.
What I don't understand is why she used hex? Why not just code in 10101010101010101010101010101010? Did she use hex to reduce verbosity? And when should you use one over the other?
It's to reduce verbosity. Binary 10101010101010101010101010101010, hexadecimal 0xaaaaaaaa, and decimal 2863311530 all represent exactly the same value; they just use different bases to do so. The only reason to use one or another is for perceived readability.
Most people would clearly not want to use decimal here; it looks like an arbitrary value.
The binary is clear: alternating 1s and 0s, but with so many, it's not obvious that this is a 32-bit value, or that there isn't an adjacent pair of 1s or 0s hiding in the middle somewhere.
The hexadecimal version takes advantage of chunking. Assuming you recognize that 0xa == 0b1010, you can mentally picture the 8 groups of 1010 that make up the full 32-bit value.
Another possibility would be octal 25252525252, since... well, maybe not. You can see that something is alternating, but unless you use octal a lot, it's not clear what that alternating pattern in binary is.
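For concreteness, here is a minimal Ruby sketch of the masked swap the question describes (the method name swap_adjacent_bits is made up for this example). The first two lines just confirm that the three bases name the same mask:
0b10101010101010101010101010101010 == 0xaaaaaaaa   #=> true
0xaaaaaaaa == 2863311530                           #=> true

def swap_adjacent_bits(n)
  ((n & 0xaaaaaaaa) >> 1) |   # odd-position bits, shifted down into the even slots
    ((n & 0x55555555) << 1)   # even-position bits, shifted up into the odd slots
end

swap_adjacent_bits(0b0110)   #=> 9 (0b1001: the bits in each adjacent pair traded places)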
What are the lowest and highest possible return values from SHA-1? (Bearing in mind that SHA-1 results are actually five 32-bit values rather than one true 160-bit value.)
To create a secure hash, the output of the hash must be indistinguishable from random. Many pseudo-random number generators and key derivation methods actually use a hash as the final calculation.
So the "highest" result consists of all ones and the lowest consists of all zeros, if you interpret the result as an unsigned integer, of course. The chances of getting exactly those values are practically zero, as SHA-1 results should be evenly distributed. But the chance of a number starting with 8 ones is still 1/2^8 == 1/256, which is certainly not insignificant.
Note that the result of SHA-1 should be interpreted as a bit string. Most runtimes don't have a very useful bit-string representation and use an octet string (aka byte array) instead. I would consider it very annoying if a SHA-1 implementation returned 32-bit words instead of bytes. You don't want to annoy the user with differences between little-endian and big-endian representations, and most other primitives expect their input represented as bytes.
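As a small illustration of the byte-string, word, and integer views, here is a hedged Ruby sketch using the standard Digest library (the input "hello" is arbitrary):
require "digest"

raw = Digest::SHA1.digest("hello")         # 20 raw bytes: the octet-string view
raw.bytesize                               #=> 20
raw.unpack("N5")                           # the same 160 bits as five big-endian 32-bit words
Digest::SHA1.hexdigest("hello").to_i(16)   # or one unsigned integer somewhere in 0 .. 2**160 - 1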
This question already has answers here:
How to convert byte size into human readable format in java?
Closed 10 years ago.
Given an integer, I'd like to print it in a human-readable way using kilo, mega, giga etc. multipliers. How do I pick the "best" multiplier?
Here are some examples
1 print as 1
12345 print as 12.3k
987654321 print as 988M
Ideally the number of digits printed should be configurable, e.g. in the last example, 3 digits would lead to 988M, 2 digits would lead to 1.0G, 1 digit would lead to 1G, and 4 digits would lead to 987.7M.
Example: Apple uses an algorithm of this kind, I think, when OSX tells me how many more bytes have to be copied.
This will be for Java, but I'm more interested in the algorithm than the language.
As a starting point, you could use the Math.log10() function to get the "magnitude" of your value, and then use some form of associative container for the suffix (k, M, G, etc).
double magnitude = Math.log10(value);   // base-10 order of magnitude of the value
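Fleshed out a little, here is a sketch of that idea in Ruby, since the question says the algorithm matters more than the language. The method name humanize, the prefix list, and the significant-digit rounding are illustrative assumptions; note that this version never promotes to the next prefix, so it prints 988M rather than 1.0G for the two-digit case and 1000k for values like 999,950:

PREFIXES = ["", "k", "M", "G", "T", "P", "E"].freeze

# Format value with roughly `digits` significant digits and the largest
# fitting power-of-1000 prefix.
def humanize(value, digits = 3)
  return value.to_s if value.abs < 1000
  exponent = (Math.log10(value.abs) / 3).floor             # which power of 1000 we're in
  exponent = PREFIXES.size - 1 if exponent >= PREFIXES.size
  scaled   = value / (1000.0 ** exponent)
  decimals = digits - (Math.log10(scaled.abs).floor + 1)   # digits left after the integer part
  decimals = 0 if decimals < 0
  format("%.#{decimals}f%s", scaled, PREFIXES[exponent])
end

humanize(1)                 #=> "1"
humanize(12_345)            #=> "12.3k"
humanize(987_654_321)       #=> "988M"
humanize(987_654_321, 4)    #=> "987.7M"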
Hope this helps somehow