I am looking for a general algorithm that converts a number from one (arbitrary) base to another (also arbitrary) base without storing the result in a large integer and performing arithmetic operations on it in between.
The algorithm I am looking for takes an array of digit values in a given base (mostly that would be a string of characters) and returns the result in the same form.
Thank you for your help.
I would say it is not possible in general. For certain pairs of bases you can convert from one string to another just by streaming the characters through (e.g. octal to hex, since both are powers of 2), but for arbitrary bases it is not possible without arithmetic operations.
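For instance, here is a sketch of that special case in Python, streaming octal digits to hex with only a small bit buffer (lowercase hex output is an arbitrary choice here):

def octal_to_hex(octal_str):
    # Works only because 8 and 16 are both powers of 2: each octal digit
    # contributes exactly 3 bits, each hex digit consumes exactly 4.
    acc = nbits = 0
    out = []
    for ch in reversed(octal_str):          # least-significant digit first
        acc |= int(ch, 8) << nbits
        nbits += 3
        while nbits >= 4:                   # emit a hex digit per 4 bits
            out.append('0123456789abcdef'[acc & 0xF])
            acc >>= 4
            nbits -= 4
    if acc or not out:                      # leftover high bits
        out.append('0123456789abcdef'[acc])
    return ''.join(reversed(out)).lstrip('0') or '0'

print(octal_to_hex('777'))  # -> '1ff'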
If you did it with strings/chars in between, it would still be big-integer arithmetic; your integers would just be in an (unnecessarily big) unusual format.
So you just have the choice between two options: either reimplement the arithmetic operations on char-encoded numbers, or take the step, use a big-integer library, and walk the convert(base1 -> bigInt), convert(bigInt -> base2) path.
It's computable, but it's not pretty.
Seriously, it'd probably be easier and faster to include one of the many bignum libraries or write your own.
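In Python, for instance, the built-in int is already an arbitrary-precision type, so the convert(base1 -> bigInt -> base2) path is only a few lines (a sketch, assuming bases 2-36 and lowercase digits):

def convert_base(digits, base_from, base_to):
    alphabet = '0123456789abcdefghijklmnopqrstuvwxyz'
    n = int(digits, base_from)        # string -> big integer
    out = []
    while n:                          # big integer -> string, by repeated division
        n, r = divmod(n, base_to)
        out.append(alphabet[r])
    return ''.join(reversed(out)) or '0'

print(convert_base('ff', 16, 8))  # -> '377'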
Is it possible to compress a large number by writing it as a mathematical expression, assuming the mathematical expression has fewer characters than the original number?
Example:
20880467999847912034355032910578 can be expressed as (23^23 + 11)
This looks like a good compression method. Will it work for compressing large files?
UPDATE: I didn't mean converting a file into a large binary number. Let's say I have a text file and I replace all the characters in it with their ASCII values. Now I have a large number in the decimal number system, and I can express it as a mathematical expression like in the example above.
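For what it's worth, the example above is easy to verify with Python's arbitrary-precision integers:

n = 23**23 + 11
print(n)            # 20880467999847912034355032910578
print(len(str(n)))  # 32 digits, versus 10 characters for "23^23 + 11"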
The notion you're looking for is Kolmogorov complexity - it's a measure of how algorithmically incompressible a number is. See the Wikipedia article on it for a rigorous definition and examples of such numbers.
If you take the contents of a file as a large binary number, and find an expression which evaluates to that number and can be stored more compactly than the number itself, then yes, you have compressed the file.
Unfortunately, for most files, you'll never find such an expression.
Simple logic (see the link posted by @OliCharlesworth) should convince you that it's impossible to find such an expression for all or even most files. Even for files which might have a suitable expression, finding it will be very, very difficult. If you want to convince yourself of this, try this challenge:
Take the following ASCII string:
"Holy Kolmogorov complexity, Batman! Compress this sucker down good and you'll get a pretty penny, my fine lad!"
Interpreted as a binary number, with the high-order digits coming first, that is: 2280899635869589768629811602006623364651019118009864206881173103187172975244099647369151382436996220022807793898568915685059542016541775658916080587423284053601554008368389985872997499032440860090224967472423163775276043175694884234152335588829534778866153948275745.
Try to find a polynomial which evaluates to that number. All the numbers used must be integral, and the total number of decimal digits appearing in the polynomial must be less than 80. If you succeed, I will send you a small cash prize by PayPal.
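If you want to reproduce the number yourself, here is one way, assuming the stated big-endian interpretation of the bytes:

s = ("Holy Kolmogorov complexity, Batman! Compress this sucker down good "
     "and you'll get a pretty penny, my fine lad!")
n = int.from_bytes(s.encode('ascii'), 'big')  # high-order bytes first
print(n)  # should print the number quoted above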
Yes, by definition. You have correctly defined compression as representing something larger with something smaller.
How do you propose to do this? How often will that work? There's the rub.
Very large integers are often stored as variable-length arrays of digits in memory, as opposed to the straightforward binary representation used by most primitive 'int' or 'long' types, as in Java or C. With this in mind, I would be interested to know of algorithm(s) that can compute:
At what magnitude an integer becomes more efficient to store as a BigInteger (or equivalent arbitrary-precision arithmetic construct) with a given radix for the integer's digits;
Which radix would be most efficient to store the digits of this large integer.
I have mentioned 'efficiency'; by this, I mean I am mainly concerned with the amount of space such a BigInteger would consume, though I would also be interested to hear any comments on processing speed or time complexity.
An integer consumes the least space when stored in a raw binary format (unless it is a small integer and the data type is far too wide for it, e.g. storing 1 in a 128-bit long long). Storing it differently does not save any memory; other representations are used to make working with such integers easier.
If stored byte by byte, this translates into radix 256: 256 possible digit values, as many as a byte can hold.
BigInt is never more efficient than one of the integer types directly supported by hardware. If you can use what's supported directly, use it.
What's supported most efficiently by hardware is likely a power of 2 or, often equivalently, binary.
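As a rough sketch of how the space comparison plays out (ignoring per-object overhead, which dominates for small values):

import math

def storage_bytes(n, radix, bytes_per_digit):
    # ceil(log_radix(n)) digits, each taking bytes_per_digit bytes
    digits = max(1, math.ceil(math.log(n + 1, radix)))
    return digits * bytes_per_digit

n = 10**100
print(storage_bytes(n, 10, 1))     # 101: one decimal digit per byte
print(storage_bytes(n, 256, 1))    # 42:  raw bytes, i.e. radix 256
print(storage_bytes(n, 2**32, 4))  # 44:  32-bit machine words as digits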
I'm looking at ways to deterministically replace unique strings with unique and optimally short replacements. So I have a finite set of strings, and the best compression I could achieve so far is through an enumeration algorithm, where I order the input set and then replace the strings with an enumeration of char strings over an extended alphabet (a..z, A...Z, aa...zz, aA... zZ, a0...z9, Aa..., aaa...zaa, aaA...zaaA, ....).
This works wonderfully as far as compression is concerned, but has the severe drawback that it is not atomic on any given input string. Rather, its result depends on knowing all input strings right from the start, and on the ordering of the input set.
Does anybody know of an algorithm that achieves similar compression but doesn't require knowing all input strings upfront? Hashing, for example, would not work for me: depending on the size of the input set I'd need a hash length of 8-12 characters for the hashes to be unique, and that would be too long as a replacement (currently, the replacement strings are 1-3 chars long for my use cases (<10,000 input strings)). Also, if the theoreticians among us know this is wasted effort, I would be interested to hear :-).
You could use your enumeration scheme, but sorted by the order in which you first encounter the input strings.
For example, the first string you ever process can be mapped to "a".
The next distinct string would be mapped to "b", etc.
Every time you process a string, you'd need to look it up to see if it has already been mapped.
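A minimal sketch of that scheme in Python (the class name and alphabet are illustrative choices):

import itertools
import string

class ShortIdMapper:
    def __init__(self, alphabet=string.ascii_lowercase + string.ascii_uppercase):
        self.alphabet = alphabet
        self.mapping = {}                 # string -> short replacement
        self.counter = itertools.count()  # next unused index

    def _encode(self, i):
        # Bijective base-N over the alphabet: 0 -> 'a', ..., 51 -> 'Z', 52 -> 'aa', ...
        base = len(self.alphabet)
        out = []
        while True:
            out.append(self.alphabet[i % base])
            i = i // base - 1
            if i < 0:
                break
        return ''.join(reversed(out))

    def replace(self, s):
        if s not in self.mapping:
            self.mapping[s] = self._encode(next(self.counter))
        return self.mapping[s]

m = ShortIdMapper()
print(m.replace('foo'), m.replace('bar'), m.replace('foo'))  # a b a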
"Optimally short" depends on the population of strings from which your samples are drawn. In the absence of systematic redundancy in the population, you will find that only a fraction of arbitrary strings can be compressed at all (e.g., consider trying to compress random bit strings).
If you can make assumptions about your data, such as "the strings are expected to be mainly composed of English words" then you can do something simple and effective based on letter frequency (e.g., for English, the relative frequency order is something like ETAOINSHRDLUGCY..., so you would want to use fewer bits to represent Es and more bits to represent uncommon letters like Q).
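As a concrete illustration of the frequency idea, here is a minimal Huffman-coding sketch; it derives frequencies from the input itself, but a fixed table of expected English frequencies (like the one above) would let you assign codes without seeing all strings first:

import heapq
from collections import Counter

def huffman_codes(text):
    # Repeatedly merge the two least-frequent nodes; common symbols end up
    # near the root of the tree and therefore get shorter bit strings.
    heap = [[count, [sym, '']] for sym, count in Counter(text).items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        lo, hi = heapq.heappop(heap), heapq.heappop(heap)
        for pair in lo[1:]:
            pair[1] = '0' + pair[1]
        for pair in hi[1:]:
            pair[1] = '1' + pair[1]
        heapq.heappush(heap, [lo[0] + hi[0]] + lo[1:] + hi[1:])
    return dict((sym, code) for sym, code in heap[0][1:])

codes = huffman_codes('the quick brown fox jumps over the lazy dog the end')
print(codes['e'], codes['z'])  # frequent 'e' gets a short code, rare 'z' a long one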
Cheers.
I would like an algorithm for a function that takes n integers and returns one integer. For small changes in the input, the resulting integer should vary greatly. Even though I've taken a number of courses in math, I have not used that knowledge very much and now I need some help...
An important property of this function should be that if it is used with coordinate pairs as input and the result is plotted (as a grayscale value for example) on an image, any repeating patterns should only be visible if the image is very big.
I have experimented with various algorithms for pseudo-random numbers with little success, and finally it struck me that md5 almost meets my criteria, except that it is not for numbers (at least not from what I know). That resulted in something like this Python prototype (for n = 2; it could easily be changed to take a list of integers, of course):
import hashlib
def uniqnum(x, y):
    # md5 needs bytes, so encode the string form of the inputs first
    return int(hashlib.md5((str(x) + ',' + str(y)).encode()).hexdigest()[-6:], 16)
But obviously it feels wrong to go through strings when both input and output are integers. What would be a good replacement for this implementation (in pseudo-code, Python, or whatever language)?
A "hash" is the solution created to solve exactly the problem you are describing. See wikipedia's article
Any hash function you use will be fine; hash functions tend to be judged on these criteria:
The degree to which they prevent collisions (two separate inputs producing the same output) -- a by-product of this is the degree to which the function minimizes outputs that may never be reached from any input.
The uniformity of the distribution of its outputs, given a uniformly distributed set of inputs.
The degree to which small changes in the input create large changes in the output.
(see perfect hash function)
Given how hard it is to create a hash function that maximizes all of these criteria, why not just use one of the most commonly used and relied-on existing hash functions there already are?
From what it seems, turning integers into strings almost seems like another layer of encryption! (which is good for your purposes, I'd assume)
However, your question asks for hash functions that deal specifically with numbers, so here we go.
Hash functions that work over the integers
If you want to borrow already-existing algorithms, you may want to dabble in pseudo-random number generators.
One simple one is the middle square method:
Take an n-digit number
Square it (zero-padding the result to 2n digits if needed)
Chop the end digits off, keeping the middle n digits
i.e.,
1111 => 01234321 => 2343
so, 1111 would be "hashed" to 2343 in the middle-square method.
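A quick sketch of the method in Python (n = 4 digits, matching the example above):

def middle_square(x, n=4):
    squared = str(x * x).zfill(2 * n)     # zero-pad the square to 2n digits
    start = (len(squared) - n) // 2
    return int(squared[start:start + n])  # keep the middle n digits

print(middle_square(1111))  # 1111**2 = 1234321 -> 01234321 -> 2343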
This method isn't that effective, but for a small number of hashes it has very low collision rates, a uniform distribution, and great chaos potential (small changes => big changes). But if you have many values, it's time to look for something else...
The grand-daddy of all feasibly efficient and simple random number generators is the [Mersenne Twister](http://en.wikipedia.org/wiki/Mersenne_twister). In fact, an implementation is probably out there for every programming language imaginable. Your hash "input" is something that will be called a "seed" in their terminology.
In conclusion
Nothing wrong with string-based hash functions
If you want to stick with the integers and be fancy, try using your number as a seed for a pseudo-random number generator.
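A minimal sketch of the seeding approach, using Python's built-in Mersenne Twister (random.Random); packing the inputs into one seed as 32-bit chunks is just one arbitrary choice:

import random

def uniqnum(*ints):
    seed = 0
    for v in ints:
        seed = (seed << 32) | (v & 0xFFFFFFFF)  # pack inputs as 32-bit chunks
    return random.Random(seed).getrandbits(32)  # first output of the seeded PRNG

print(uniqnum(10, 20), uniqnum(10, 21))  # nearby inputs, very different outputs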
Hashing fits your requirements perfectly. If you really don't want to use strings, find a Hash library that will take numbers or binary data. But using strings here looks OK to me.
Bob Jenkins' mix function is a classic choice, at least when n = 3.
As others point out, hash functions do exactly what you want. Hashes take bytes - not character strings - and return bytes, and converting between integers and bytes is, of course, simple. Here's an example python function that works on 32 bit integers, and outputs a 32 bit integer:
import hashlib
import struct
def intsha1(ints):
    # Pack the integers as big-endian 32-bit values and hash the bytes
    packed = struct.pack('>%di' % len(ints), *ints)
    digest = hashlib.sha1(packed).digest()
    # struct.unpack returns a tuple; take its single element
    return struct.unpack('>i', digest[:4])[0]
It can, of course, be easily adapted to work with different length inputs and outputs.
Have a look at this, maybe it can inspire you:
Chaotic systems
In chaotic dynamics, small changes in the input vary the results greatly.
An x-bit block cipher will take a number and effectively convert it to another number. You could combine (sum? multiply?) your input numbers and encipher the result, or iteratively encipher each number, similar to CBC or another chained mode. Google 'format preserving encryption'. It is possible to create a 32-bit block cipher (not widely available) and use it to create a 'hashed' output. The main difference between a hash and encryption is that a hash is irreversible.
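As an illustration of the idea (a toy, not a real cipher), a few Feistel rounds give an invertible mixing of 32-bit numbers; the key and round count here are arbitrary:

import hashlib

def feistel32(x, key=b'toy key', rounds=4):
    left, right = (x >> 16) & 0xFFFF, x & 0xFFFF   # split into 16-bit halves
    for r in range(rounds):
        # Round function: 16 bits derived from the key, round number, and right half
        f = int.from_bytes(
            hashlib.md5(key + bytes([r]) + right.to_bytes(2, 'big')).digest()[:2],
            'big')
        left, right = right, left ^ f
    return (left << 16) | right

print(feistel32(1), feistel32(2))  # adjacent inputs map to unrelated outputs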
I read somewhere about other data structures similar to hashtables and dictionaries, but instead of using ints, they were using floats/doubles, etc.
Does anyone know what they are?
If you mean using floats/doubles as keys in your hash, that's easy. For example, in .NET, it's just using Dictionary<double,MyValueType>.
If you're talking about having the hash be based off a double instead of an int....
Technically, you can have any element as your internal hash. Normally, this is done using an int or long, since these are fast, and the hashing algorithm is easy to compute.
However, the hash is really just a BitArray at heart, so anything would work. There really isn't much advantage to making this something other than an int or long, other than potentially allowing a larger set of hash values (ie: if you go to an 8 byte or larger type for your hash).
You mean as keys? That strikes me as tricky.
If you're using them as arbitrary keys, they're no better than integers.
If you expect to calculate a floating-point value and use it to look something up in a hash table, you're living very dangerously. Floating point numbers do not have infinite precision, and calculating the same thing in two slightly different ways can result in very tiny differences in the result. Hash keys rely on getting the exact same thing every time, so you'd have to be careful to round, and round in exactly the same way at all times. This is trickier than it sounds, by the way.
So, what would you do with floating-point hashes?
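A quick Python illustration of that precision trap (the same hazard exists in any language with IEEE floats):

d = {0.3: 'value'}
key = 0.1 + 0.2
print(key)         # 0.30000000000000004
print(key == 0.3)  # False
print(key in d)    # False: the "same" computed value misses the entry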
A hash algorithm is, in general terms, just a function that produces a smaller output from a larger input. Good hash functions have interesting properties like a large change in output for a small change in the input, and an assurance that they produce every possible output value for some input.
It's not hard to write a simple polynomial type hash function that outputs a floating-point value, rather than an integer value, but it's difficult to ensure that the resulting hash function has the desired properties without getting into the details of the particular floating-point representation used.
At least part of the reason that hash functions are nearly always implemented in integer arithmetic is because proving various properties about an integer calculation is easier than doing the same for a floating point calculation.
It's fairly easy to prove that a calculation like (a sum of terms scaled by primes) modulo (another prime) must, necessarily, produce every possible output for some input. Doing the same for a calculation involving a bunch of floating-point fractions would be a drag.
Add to that the relative difficulty of storing and transmitting floating-point values without corruption, and it's just not worth it.
Your question history shows that you use .Net, so I'll answer in that context.
If you want a Dictionary that is type aware, such that you can specify it should use floats or doubles for the keys or values, use System.Collections.Generic.Dictionary<T, U> http://msdn.microsoft.com/en-us/library/xfhwa508.aspx
If you want a Dictionary that is type blind, such that you can use floats AND doubles for keys and values, use System.Collections.Hashtable http://msdn.microsoft.com/en-us/library/system.collections.hashtable.aspx