Very large integers are often stored as variable-length arrays of digits in memory, as opposed to the straightforward binary representation used by most primitive 'int' or 'long' types, as in Java or C. With this in mind, I would be interested to know of algorithm(s) that can compute:
What size an integer must reach before it becomes more efficient to store it as a BigInteger (or equivalent arbitrary-precision arithmetic construct) with a given radix for the integer's digits;
Which radix would be most efficient to store the digits of this large integer.
I have mentioned 'efficiency'; by this, I mean I am mainly concerned with the amount of space such a BigInteger would consume, though I would also be interested to hear any comments on processing speed or time complexity.
An integer should consume the least space if stored in a raw binary format (unless it is a small value held in a data type far wider than needed, e.g. storing 1 in a 128-bit long long). Storing it differently does not save any memory; it is done to make working with such integers easier.
If stored byte by byte, this translates into radix 256: 256 possible values per digit, as many as a byte can hold.
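To make the "raw binary is radix 256" point concrete, here is a minimal Go sketch (Go chosen only because its standard library has an arbitrary-precision type); big.Int.Bytes() returns exactly those base-256 digits, and the example number is arbitrary:

package main

import (
	"fmt"
	"math/big"
)

func main() {
	n := new(big.Int)
	n.SetString("1234567890123456789012345678901234567890", 10)

	// Bytes() returns the absolute value as a big-endian byte slice,
	// i.e. the number's digits in radix 256.
	digits := n.Bytes()
	fmt.Println(len(digits), "base-256 digits:", digits)

	// Compare with the decimal representation: roughly 2.4x more digits,
	// because each decimal digit carries only log2(10) ≈ 3.32 bits.
	fmt.Println(len(n.Text(10)), "base-10 digits")
}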
BigInt is never more efficient than one of the integer types directly supported by hardware. If you can use what's supported directly, use it.
The radix supported most efficiently by hardware is a power of 2, or, often equivalently, binary.
I'm reading the documentation to the math/big package here:
https://golang.org/pkg/math/big/#pkg-constants
I am trying to understand how large a number is too big for math.big, and this looked like a constant I could interrogate.
I see on my machine:
fmt.Println(math.MaxUint32)
4294967295
How does this relate to the largest integer possible on my machine, for the purpose of calculation? What are the units of this number? Is this bytes, or decimal places or something other than the number itself?
bignum libraries usually store big numbers as a sequence of digits (e.g. in base 2⁶⁴). Their limitation is related to the memory available. So the largest number you could represent is tied to the limitation of your virtual address space. You can safely assume that a number even as large as 10¹⁰⁰⁰⁰ is representable in bignum. Of course, a googolplex is not representable as a bignum (because it has more bits than the number of particles in the universe).
Another limitation is the complexity of arithmetic operations. But there exist very efficient bignum algorithms.
FWIW, the GMPlib (a C library for bignums) can deal with numbers as long as there is memory for them. However, it is rumored that when malloc fails, GMPlib aborts.
I don't know what happens inside Go bignums when a number is too big to be representable (and that limit varies from one machine to the next and could be different from one run to the next). For example, Go's Int.Mul gives a product whose size is the sum of the sizes of the arguments, and the "out of memory" error is undocumented (but obviously can happen).
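As a rough, hand-rolled illustration of how the operands' sizes add up under multiplication (the specific exponents are arbitrary):

package main

import (
	"fmt"
	"math/big"
)

func main() {
	a := new(big.Int).Lsh(big.NewInt(1), 100000) // a = 2^100000
	b := new(big.Int).Lsh(big.NewInt(1), 200000) // b = 2^200000

	// The product's bit length is roughly the sum of the operands' bit lengths,
	// so memory use grows with every multiplication.
	p := new(big.Int).Mul(a, b)
	fmt.Println(a.BitLen(), b.BitLen(), p.BitLen()) // 100001 200001 300001
}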
When using bignums, prefer iterative algorithms to recursive ones. For example, a naive recursive factorial might overflow the call stack with large enough bignums, so you want to code it iteratively.
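For instance, a stack-safe iterative factorial with math/big might look like the sketch below (real code would probably use big.Int.MulRange or binary splitting for speed; this is just to show the shape):

package main

import (
	"fmt"
	"math/big"
)

// factorial computes n! with a plain loop; unlike a naive recursive version,
// its call-stack depth stays constant no matter how large n gets.
func factorial(n int64) *big.Int {
	result := big.NewInt(1)
	for i := int64(2); i <= n; i++ {
		result.Mul(result, big.NewInt(i))
	}
	return result
}

func main() {
	fmt.Println(factorial(1000).BitLen(), "bits in 1000!")
}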
Hash functions always produce a fixed-length output regardless of the input (e.g. MD5 → 128 bits, SHA-256 → 256 bits), but why?
I know that this is how the designers made them, but why did they design the output to have the same length?
So that it can be stored in a consistent fashion? Easier to compare? Less complicated?
Because that is the definition of a hash. Refer to Wikipedia:
A hash function is any function that can be used to map digital data
of arbitrary size to digital data of fixed size.
If your question relates to why it is useful for a hash to be a fixed size there are multiple reasons (non-exhaustive list):
Hashes typically encode a larger (often arbitrary size) input into a smaller size, generally in a lossy way, i.e. unlike compression functions, you cannot reconstruct the input from the hash value by "reversing" the process.
Having a fixed size output is convenient, especially for hashes designed to be used as a lookup key.
You can predictably (pre)allocate storage for hash values and index them in a contiguous memory segment such as an array.
For hashes of "native word sizes", e.g. 16, 32 and 64 bit integer values, you can do very fast equality and ordering comparisons.
Any algorithm working with hash values can use a single set of fixed size operations for generating and handling them.
You can predictably combine hashes produced with different hash functions in e.g. a bloom filter.
You don't need to waste any space to encode how big the hash value is.
There do exist special hash functions that are capable of producing an output hash of a specified length, such as so-called sponge functions.
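To see the fixed-size property in code, here is a small Go sketch using the standard crypto/sha256 package (the inputs are arbitrary):

package main

import (
	"crypto/sha256"
	"fmt"
)

func main() {
	short := sha256.Sum256([]byte("hi"))
	long := sha256.Sum256(make([]byte, 1<<20)) // 1 MiB of zero bytes

	// Both digests are the same fixed-size [32]byte value,
	// no matter how large the input was.
	fmt.Println(len(short), len(long)) // 32 32
}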
As you can see, it is the standard. Also, what you want is specified in the standard:
Some application may require a hash function with a message digest
length different than those provided by the hash functions in this
Standard. In such cases, a truncated message digest may be used,
whereby a hash function with a larger message digest length is applied
to the data to be hashed, and the resulting message digest is
truncated by selecting an appropriate number of the leftmost bits.
Often it's because you want to use the hash value, or some part of it, to quickly store and look up values in a fixed-size array. (This is how a non-resizable hashtable works, for example.)
And why use a fixed-size array instead of some other, growable data structure (like a linked list or binary tree)? Because accessing them tends to be both theoretically and practically fast: provided that the hash function is good and the fraction of occupied table entries isn't too high, you get O(1) lookups (vs. O(log n) lookups for tree-based data structures or O(n) for lists) on average. And these accesses are fast in practice: after calculating the hash, which usually takes linear time in the size of the key with a low hidden constant, there's often just a bit shift, a bit mask and one or two indirect memory accesses into a contiguous block of memory that (a) makes good use of cache and (b) pipelines well on modern CPUs because few pointer indirections are needed.
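A minimal sketch of that "hash, then mask into a fixed-size array" step, using Go's hash/fnv purely as a stand-in hash function (the table size and keys are made up):

package main

import (
	"fmt"
	"hash/fnv"
)

// index maps a key to a slot in a fixed-size, power-of-two table:
// hash the key, then keep only the low bits with a mask.
func index(key string, tableSize uint64) uint64 {
	h := fnv.New64a()
	h.Write([]byte(key))
	return h.Sum64() & (tableSize - 1) // tableSize must be a power of two
}

func main() {
	fmt.Println(index("apple", 1024), index("banana", 1024))
}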
For example, if I add a short integer and a float data type, will each allocate the same amount of memory? They have different maximum values, but have different data structures as well, so I was not sure.
Also, do mathematical operations take different amounts of time with different data types?
It's hard to answer this exactly without knowing which language/hardware you are using, but in most cases different data types do take up different amounts of memory and data type can affect the speed of mathematical operations. For example, in Java and C#, a short is 16 bits and a float is 32 bits. I would expect floating point math to be slower than integer math in general, although this can be complex as explained in this post.
For other languages (such as C), the size of the basic data types is hardware-dependent, so an int might be 32 bits on some machines and 64 bits on others.
The reason we have data types is efficiency. That includes both the size and the treatment of the data.
A char data type has 8 bits (you may also call it an 8-bit integer, since the numerical values are simply mapped to characters in the ASCII table) = 256 possible values
A 32-bit integer has ... you guessed it, 32 bits = 2³² possible values
Mathematical operations are performed by shifting or comparing those bits.
More bits = more operations = more time
Reference: http://en.wikipedia.org/wiki/Data_type
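For concreteness, a quick Go sketch that prints the in-memory sizes of a few types (the int16/float32 pairing is meant to mirror the short/float question above; exact sizes in C-like languages vary by platform):

package main

import (
	"fmt"
	"unsafe"
)

func main() {
	var s int16   // analogous to a Java/C# short
	var f float32 // analogous to a Java/C# float
	var d float64
	var i int // platform-dependent: 32 or 64 bits wide

	fmt.Println(unsafe.Sizeof(s), unsafe.Sizeof(f), unsafe.Sizeof(d), unsafe.Sizeof(i))
	// typically prints: 2 4 8 8 (bytes)
}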
I am looking for a general algorithm that would convert from one (arbitrary) numerical base to another (also arbitrary) without storing the result in a large integer and performing arithmetic operations on it in between.
The algorithm I am looking for takes an array of numerical values in a given base (which would mostly be a string of characters) and returns the result in the same form.
Thank you for your help.
I would say it is not possible. For certain bases it would be possible to convert from one string to another by just streaming the chars through (e.g. if one base is a power of the other, or both are powers of a common base, as with octal → hex via binary), but for arbitrary bases it is not possible without arithmetic operations.
If you did it with strings/chars in between, it would still be big-integer arithmetic; your integers would just be in an (unnecessarily large) unusual format.
So you have just the choice between: either reimplement the arithmetic operations on char-encoded numbers, or take the step of using a big-integer library and walk the convert(base1 string → bigInt) → convert(bigInt → base2 string) path.
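A sketch of that second option in Go, going through math/big (assuming both radices are in the 2..62 range that SetString and Text accept; the function name is made up):

package main

import (
	"fmt"
	"math/big"
)

// convertBase re-encodes the digit string s from one radix to another by
// going through a big.Int, i.e. the convert(base1 -> bigInt -> base2) path.
func convertBase(s string, from, to int) (string, error) {
	n, ok := new(big.Int).SetString(s, from)
	if !ok {
		return "", fmt.Errorf("%q is not a valid base-%d number", s, from)
	}
	return n.Text(to), nil
}

func main() {
	out, err := convertBase("deadbeef", 16, 36)
	fmt.Println(out, err)
}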
It's computable, but it's not pretty.
Seriously, it'd probably be easier and faster to include one of the many bignum libraries or write your own.
I read somewhere about data structures similar to hashtables and dictionaries, but instead of using ints they were using floats/doubles, etc.
Does anyone know what they are?
If you mean using floats/doubles as keys in your hash, that's easy. For example, in .NET, it's just using Dictionary<double,MyValueType>.
If you're talking about having the hash be based off a double instead of an int....
Technically, you can have any element as your internal hash. Normally, this is done using an int or long, since these are fast, and the hashing algorithm is easy to compute.
However, the hash is really just a BitArray at heart, so anything would work. There really isn't much advantage to making this something other than an int or long, other than potentially allowing a larger set of hash values (ie: if you go to an 8 byte or larger type for your hash).
You mean as keys? That strikes me as tricky.
If you're using them as arbitrary keys, they're no better than integers.
If you expect to calculate a floating-point value and use it to look something up in a hash table, you're living very dangerously. Floating point numbers do not have infinite precision, and calculating the same thing in two slightly different ways can result in very tiny differences in the result. Hash keys rely on getting the exact same thing every time, so you'd have to be careful to round, and round in exactly the same way at all times. This is trickier than it sounds, by the way.
So, what would you do with floating-point hashes?
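A tiny Go sketch of the pitfall just described, using a plain map with float64 keys (the rounding factor 1e9 is an arbitrary choice):

package main

import (
	"fmt"
	"math"
)

func main() {
	m := map[float64]string{0.3: "three tenths"}

	a, b := 0.1, 0.2
	key := a + b // 0.30000000000000004 in float64, not exactly 0.3

	_, ok := m[key]
	fmt.Println(ok) // false: the computed key misses the entry stored under 0.3

	// One workaround: round to a fixed number of decimal places before using
	// a computed value as a key, and do so consistently everywhere.
	rounded := math.Round(key*1e9) / 1e9
	_, ok = m[rounded]
	fmt.Println(ok) // true
}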
A hash algorithm is, in general terms, just a function that produces a smaller output from a larger input. Good hash functions have interesting properties like a large change in output for a small change in the input, and an assurance that they produce every possible output value for some input.
It's not hard to write a simple polynomial type hash function that outputs a floating-point value, rather than an integer value, but it's difficult to ensure that the resulting hash function has the desired properties without getting into the details of the particular floating-point representation used.
At least part of the reason that hash functions are nearly always implemented in integer arithmetic is because proving various properties about an integer calculation is easier than doing the same for a floating point calculation.
It's fairly easy to prove that some (sum of prime factors) modulo (another prime) must, necessarily, produce every possible output for some input. Doing the same for a calculation with a bunch of floating-point fractions would be a drag.
Add to that the relative difficulty of storing and transmitting floating-point values without corruption, and it's just not worth it.
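For illustration, a simple polynomial hash of the kind alluded to above, written in Go with plain integer arithmetic (the constants 31 and 1,000,000,007 are just common choices, not anything prescribed here):

package main

import "fmt"

// polyHash folds the bytes of s into h = (h*31 + byte) mod p, with p prime.
// Reasoning about its range and distribution is straightforward, which is
// much harder to do for an equivalent floating-point calculation.
func polyHash(s string) uint64 {
	const p = 1_000_000_007 // a prime modulus
	var h uint64
	for i := 0; i < len(s); i++ {
		h = (h*31 + uint64(s[i])) % p
	}
	return h
}

func main() {
	fmt.Println(polyHash("hello"), polyHash("world"))
}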
Your question history shows that you use .Net, so I'll answer in that context.
If you want a Dictionary that is type aware, such that you can specify it should use floats or doubles for the keys or values, use System.Collections.Generic.Dictionary<T, U> http://msdn.microsoft.com/en-us/library/xfhwa508.aspx
If you want a Dictionary that is type blind, such that you can use floats AND doubles for keys and values, use System.Collections.Hashtable http://msdn.microsoft.com/en-us/library/system.collections.hashtable.aspx