Hash function for strings without respecting char's position - algorithm

The title of my question is self-explanatory.
I need to hash a structure of three 64-bit variables (I'll convert them to a string of chars). Each one contains a hand of cards (this is a card game app), so swapping some chars within these variables should produce the same hash.
One approach would be to sort the resulting string. Is there any better solution?

If the representation of a hand is similar to a bit set, it is already unordered. For example, if you use a combination of bit masks to represent a combination of cards, say, like this
A♠ - 0x00000001
2♠ - 0x00000002
3♠ - 0x00000004
4♠ - 0x00000008
...
K♠ - 0x00001000
A♥ - 0x00002000
2♥ - 0x00004000
...
then you can represent hands using bit combinations, like this:
A♠ 4♠ 2♥ - 0x00004009
This representation is position-independent, i.e. hands 4♠ A♠ 2♥ and 2♥ 4♠ A♠ would have exactly the same representation as A♠ 4♠ 2♥. You can convert this representation to a string as necessary by iterating the individual bits, and adding a card to the string representation each time that you discover a bit that is set to 1.
A representation like this can be used to compute a 32-bit hash code by XOR-ing the upper 32 bits of the representation with the lower 32 bits:
uint64_t hand = ... // A representation of hand similar to what's described above
uint32_t hash = (uint32_t)(hand ^ (hand >> 32));
Currently my cards are represented as bytes, but bits in two cards can overlap: A♣ = 0x11; 10♣ = 0x12; K♣ = 0x13 ... and so on.
You can convert this representation to the one described above when computing the hash code, and avoid sorting that way:
// Each card is a number from 1 to 53, inclusive
uint8_t hand[HAND_SIZE] = ...; // The hand
uint64_t set = 0;
for (int i = 0 ; i != HAND_SIZE ; i++) {
    set |= (1LL << hand[i]);
}
uint32_t hash = (uint32_t)(set ^ (set >> 32));

The other way to do this is to count the number of occurrences of each character and then hash the resulting vector (a vector count, where count[c] is the number of occurrences of character c). I would not say that it is better than sorting (the number of possible characters is fixed, and probably pretty low, so you can use radix sort), but I cannot say that it is worse either. The time complexity of both - sorting with radix sort and counting the occurrences of each character - is linear (moreover, radix sort and counting characters are pretty much the same thing), so there shouldn't be much difference between the two.
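A minimal sketch of that counting approach (my own illustration, in Java): build the per-character occurrence counts and hash the count vector, so any permutation of the same characters hashes identically.

static int orderInsensitiveHash(byte[] hand) {
    int[] count = new int[256];               // count[c] = number of occurrences of byte value c
    for (byte b : hand) {
        count[b & 0xFF]++;
    }
    return java.util.Arrays.hashCode(count);  // any deterministic hash of the count vector will do
}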


Text compression: Strategy to code prefix(header) to improve the performance

I am currently working on a Huffman text compression exercise, but I've encountered some issues with the coding of my header. I used my character-frequency table to save the information I need as a header for decompression of the file (all converted into binary strings, then saved in byte arrays).
So at first, I used 2 bytes for each character: 1 byte for the character and 1 byte for its frequency. However, I realized this will not work for a large text, in which the frequency of some characters could surpass 255 (1 byte).
Therefore I made a modification: I adjust the number of reserved bytes for each character depending on its frequency. It looks something like this:
public String freString(int freq, String s){
    freq = freq - 255;
    s = s + "11111111";
    if(freq >= 255){
        s = freString(freq, s);
    }else{
        String remainFreq = Integer.toBinaryString(freq);
        //pad the omitted zeros of the last byte
        remainFreq = String.format("%8s", remainFreq);
        remainFreq = remainFreq.replace(' ', '0');
        s = s + remainFreq;
    }
    return s;
}
With this, during decompression I will look at the next byte to see what its value is; if it's 255, then I keep adding the value of the next byte, and so on.
Example of my header:
[ 9 , 141, 3, 142, 255, 33, 143, 255, 255, 2 ]
[length of my header = 9 ,a = 3, b = 288, c = 512]
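For reference, the decoding rule described above amounts to something like this (a sketch of my own, assuming the header bytes are available as unsigned values in an int array and that start points at the first frequency byte of a character):

static int readFrequency(int[] header, int start) {
    int freq = 0;
    int i = start;
    while (header[i] == 255) { // a full byte means "add 255 and keep reading"
        freq += 255;
        i++;
    }
    return freq + header[i];   // the first non-255 byte terminates the value
}

With the example header above, the bytes 255, 33 decode to 288 and the bytes 255, 255, 2 decode to 512.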
It works fine, but it greatly reduces my compression ratio as the text gets larger.
Ex: if 'a' repeats 5000 times, I will end up using up to 20 bytes to store my string of frequency values instead of 2 bytes (00010011 10001000 = 5000).
So here is my question: is there a better strategy I can use to increase the reserved bytes of a character dynamically and at the same time indicate the "end of freq string"? I've thought of reserving a minimum of 3 bytes per character (1 for the char, 1 for its freq, and 1 to indicate the end of the freq string), but this will affect the compression ratio of smaller text files.
Is this the trade-off I must accept, or does a better way of doing it exist?
If you have a Huffman tree, then you can make many other Huffman trees that assign the same lengths to all symbols but different code words, by swapping the left and right child of any node. All those trees are equally good - they compress the data just as much because the lengths stay the same. Canonical Huffman is a way to agree beforehand how to choose one specific tree out of all those possible permuted trees, so that you don't have to communicate which one of those trees you're actually using.
What that means in practice is that the tree can be reconstructed from just the lengths. It's not necessary to actually reconstruct the tree, but the ability to reconstruct it means you have retained all information.
As is often the case, there are different choices you could make for which tree is the canonical one. The choice you make has some implications for decoding techniques, but that may be beyond the scope of this question. Anyway, one choice is to permute the tree such that
the longest code is all zeroes
shorter codes (padded with zeroes on the right to the same length as the longest code) are numerically greater than all longer codes
symbols with the same length are assigned codes in ascending order
Rules 1 and 2 make a tree that is deepest on one side, shallowest on the other, with no weird jumps in between. Rule 3 orders nodes that are at the same depth.
Turns out though, you don't need to do any tree restructuring. Using just the length assigned to every symbol, the codes can be constructed easily, like this:
// create histogram of lengths
const int maxcodelength = 15; // 15 is the max code length for Deflate
uint[] length_count = new uint[maxcodelength + 1];
for (int i = 0; i < symbollengths.Length; i++)
    length_count[symbollengths[i]]++;

// find starting point (lowest code) for each length
uint code = 0;
uint[] next_code = new uint[maxcodelength + 1];
next_code[maxcodelength] = 0;
for (int bits = maxcodelength - 1; bits >= 0; bits--)
{
    code = (code + length_count[bits + 1]) >> 1;
    next_code[bits] = code;
}

// assign codes to symbols
uint[] codes = new uint[256];
for (int n = 0; n < codes.Length; n++)
{
    int len = symbollengths[n];
    if (len != 0)
    {
        codes[n] = next_code[len];
        next_code[len]++;
    }
}
This is strongly related to the code on page 8 of rfc1951 (Deflate), but different (the shift goes the other way, resulting in the all-zero code having the longest length, in Deflate the all-zero code has the shortest length).
As for the header, now you need only 4 bits per symbol (if you also use the length limit of 15), certainly not more than 8 bits per symbol (codes longer than 256 would be kind of crazy). That would still be 128 or 256 bytes for the header (for an alphabet of 256). You could improve that, for example by borrowing Deflate's scheme of run-length encoding the lengths.
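As a rough sketch of that 4-bits-per-symbol header (my own illustration, in Java rather than the C# above): for a 256-symbol alphabet with code lengths limited to 15, you can pack two lengths per byte, with length 0 meaning the symbol does not occur.

static byte[] packLengths(int[] lengths) {           // lengths.length == 256, each value 0..15
    byte[] header = new byte[128];                    // two 4-bit lengths per byte
    for (int i = 0; i < 256; i += 2) {
        header[i / 2] = (byte) ((lengths[i] << 4) | lengths[i + 1]);
    }
    return header;
}

static int[] unpackLengths(byte[] header) {
    int[] lengths = new int[256];
    for (int i = 0; i < 256; i += 2) {
        lengths[i] = (header[i / 2] >> 4) & 0x0F;
        lengths[i + 1] = header[i / 2] & 0x0F;
    }
    return lengths;
}

The decoder then rebuilds exactly the same canonical codes from these lengths using the construction shown above, so no code words ever need to be transmitted.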
Additional stuff.
One way to guarantee the maximum length is not exceeded is to divide all frequencies by 2 (rounding up) and recreate the Huffman tree until the maximum length is no longer exceeded. There are also other ways to calculate a valid set of lengths, without building a tree, for example package-merge.
The length is limited in almost all compression formats that use Huffman coding. It's important for both encoding and decoding, mainly for decoding. For encoding, having codes no longer than 25 means that you can use a 32bit buffer and write out bytes (meaning that up to 7 bits can be left in the buffer) without needing a special case for when adding a code to the buffer would overflow. For decoding, a short(ish) maximum code length enables simple single-table lookup - it is indexed with maxcodelength bits (the "window") at a time, giving both the first symbol (the actual decoding) in the window and the length of that symbol (so it can be shifted out). If the maximum code length is longer, slightly more sophisticated techniques are required, such as multi-level tables, or my personal favourite, different tables depending on the number of leading zeroes in the window.
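To make the single-table idea concrete, here is a hedged sketch (my own, in Java rather than the C# above) of building such a lookup table from the codes and lengths computed earlier; it assumes the lengths form a valid prefix code and that codes are read most-significant-bit first:

static int[] buildDecodeTable(int[] codes, int[] symbollengths, int maxcodelength) {
    int[] table = new int[1 << maxcodelength];         // each entry packs (symbol << 8) | length
    for (int n = 0; n < codes.length; n++) {
        int len = symbollengths[n];
        if (len == 0) continue;                         // symbol does not occur
        int first = codes[n] << (maxcodelength - len);  // left-align the code within the window
        int count = 1 << (maxcodelength - len);         // every possible setting of the trailing bits
        for (int k = 0; k < count; k++) {
            table[first + k] = (n << 8) | len;
        }
    }
    return table;
}

Decoding a symbol is then: peek maxcodelength bits of the stream into window, look up entry = table[window], emit symbol entry >> 8, and shift entry & 0xFF bits out of the stream.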

How to translate Text to Binary with Cocoa?

I'm making a simple Cocoa program that can encode text to binary and decode it back to text. I tried to make this script and I was not even close to accomplishing this. Can anyone help me? This has to include two textboxes and two buttons, or whatever is best. Thanks!
There are two parts to this.
The first is to encode the characters of the string into bytes. You do this by sending the string a dataUsingEncoding: message. Which encoding you choose will determine which bytes it gives you for each character. Start with NSUTF8StringEncoding, and then experiment with other encodings, such as NSUnicodeStringEncoding, once you get it working.
The second part is to convert every bit of every byte into either a '0' character or a '1' character, so that, for example, the letter A, encoded in UTF-8 to a single byte, will be represented as 01000001.
So, converting characters to bytes, and converting bytes to characters representing bits. These two are completely separate tasks; the second part should work correctly for any stream of bytes, including any valid stream of encoded characters, any invalid stream of encoded characters, and indeed anything that isn't text at all.
The first part is easy enough:
- (NSString *) stringOfBitsFromEncoding:(NSStringEncoding)encoding
                               ofString:(NSString *)inputString
{
    //Encode the characters to bytes using the chosen encoding. The bytes are contained in an NSData object, which we receive.
    NSData *data = [inputString dataUsingEncoding:encoding];
    //I did say these were two separate jobs.
    return [self stringOfBitsFromData:data];
}
For the second part, you'll need to loop through the bytes of the data. A C for loop will do the job there, and that will look like this:
//This is the method we're using above. I'll leave out the method signature and let you fill that in.
- …
{
    //Find out how many bytes the data object contains.
    NSUInteger length = [data length];
    //Get the pointer to those bytes. “const” here means that we promise not to change the values of any of the bytes. (The compiler may give a warning if we don't include this, since we're not allowed to change these bytes anyway.)
    const char *bytes = [data bytes];
    //We'll store the output here. There are 8 bits per byte, and we'll be putting in one character per bit, so we'll tell NSMutableString that it should make room for (the number of bytes times 8) characters.
    NSMutableString *outputString = [NSMutableString stringWithCapacity:length * 8];
    //The loop. We start by initializing i to 0, then increment it (add 1 to it) after each pass. We keep looping as long as i < length; when i >= length, the loop ends.
    for (NSUInteger i = 0; i < length; ++i) {
        char thisByte = bytes[i];
        for (NSUInteger bitNum = 0; bitNum < 8; ++bitNum) {
            //Call a function, which I'll show the definition of in a moment, that will get the value of a bit at a given index within a given character.
            bool bit = getBitAtIndex(thisByte, bitNum);
            //If this bit is a 1, append a '1' character; if it is a 0, append a '0' character.
            [outputString appendFormat:@"%c", bit ? '1' : '0'];
        }
    }
    return outputString;
}
Bits 101 (or, 1100101)
Bits are literally just digits in base 2. Humans in the Western world usually write out numbers in base 10, but a number is a number no matter what base it's written in, and every character, and every byte, and indeed every bit, is just a number.
Digits—including bits—are counted up from the lowest place, according to the exponent to which the base is raised to find the magnitude of that place. We want bits, so that base is 2, so our place values are:
2^0 = 1: The ones place (the lowest bit)
2^1 = 2: The twos place (the next higher bit)
2^2 = 4: The fours place
2^3 = 8: The eights place
And so on, up to 2^7. (Note that the highest exponent is exactly one lower than the number of digits we're after; in this case, 7 vs. 8.)
If that all reminds you of reading about “the ones place”, “the tens place”, “the hundreds place”, etc. when you were a kid, it should: it's the exact same principle.
So a byte such as 65, which (in UTF-8) completely represents the character 'A', is the sum of:
  2^7 × 0 = 0
+ 2^6 × 1 = 64
+ 2^5 × 0 = 0
+ 2^4 × 0 = 0
+ 2^3 × 0 = 0
+ 2^2 × 0 = 0
+ 2^1 × 0 = 0
+ 2^0 × 1 = 1
= 0 + 64 + 0 + 0 + 0 + 0 + 0 + 1
= 64 + 1
= 65
Back when you learned base 10 numbers as a kid, you probably noticed that ten is “10”, one hundred is “100”, etc. This is true in base 2 as well: as 10^x is “1” followed by x “0”s in base 10, so is 2^x “1” followed by “x” 0s in base 2. So, for example, sixty-four in base 2 is “1000000” (count the zeroes and compare to the table above).
We are going to use these exact-power-of-two numbers to test each bit in each input byte.
Finding the bit
C has a pair of “shift” operators that will insert zeroes or remove digits at the low end of a number. The former is called “shift left”, and is written as <<, and you can guess the opposite.
We want shift left. We want to shift 1 left by the number of the bit we're after. That is exactly equivalent to raising 2 (our base) to the power of that number; for example, 1 << 6 = 2^6 = “1000000”.
Testing the bit
C has an operator for bit testing, too; it's &, the bitwise AND operator. (Do not confuse this with &&, which is the logical AND operator. && is for using whole true/false values in making decisions; & is one of your tools for working with bits within values.)
Strictly speaking, & does not test single bits; it goes through the bits of both input values, and returns a new value whose bits are the bitwise AND of each input pair. So, for example,
01100101
& 00101011
----------
00100001
Each bit in the output is 1 if and only if both of the corresponding input bits were also 1.
Putting these two things together
We're going to use the shift left operator to give us a number where one bit, the nth bit, is set—i.e., 2^n—and then use the bitwise AND operator to test whether the same bit is also set in our input byte.
//This is a C function that takes a char and an int, promising not to change either one, and returns a bool.
bool getBitAtIndex(const char byte, const int bitNum)
//It could also be a method, which would look like this:
//- (bool) bitAtIndex:(const int)bitNum inByte:(const char)byte
//but you would have to change the code above. (Feel free to try it both ways.)
{
    //Find 2^bitNum, which will be a number with exactly 1 bit set. For example, when bitNum is 6, this number is “1000000”—a single 1 followed by six 0s—in binary.
    const int powerOfTwo = 1 << bitNum;
    //Test whether the same bit is also set in the input byte.
    bool bitIsSet = byte & powerOfTwo;
    return bitIsSet;
}
A bit of magic I should acknowledge
The bitwise AND operator does not evaluate to a single bit—it does not evaluate to only 1 or 0. Remember the above example, in which the & operator returned 33.
The bool type is a bit magic: Any time you convert any value to bool, it automatically becomes either 1 or 0. Anything that is not 0 becomes 1; anything that is 0 becomes 0.
The Objective-C BOOL type does not do this, which is why I used bool in the code above. You are free to use whichever you prefer, except that you generally should use BOOL whenever you deal with anything that expects a BOOL, particularly when overriding methods in subclasses or implementing protocols. You can convert back and forth freely, though not losslessly (since bool will change non-zero values as described above).
Oh yeah, you said something about text boxes too
When the user clicks on your button, get the stringValue of your input field, call stringOfBitsFromEncoding:ofString: using a reasonable encoding (such as UTF-8) and that string, and set the resulting string as the new stringValue of your output field.
Extra credit: Add a pop-up button with which the user can choose an encoding.
Extra extra credit: Populate the pop-up button with all of the available encodings, without hard-coding or hard-nibbing a list.

Sorting 1 million 8-decimal-digit numbers with 1 MB of RAM

I have a computer with 1 MB of RAM and no other local storage. I must use it to accept 1 million 8-digit decimal numbers over a TCP connection, sort them, and then send the sorted list out over another TCP connection.
The list of numbers may contain duplicates, which I must not discard. The code will be placed in ROM, so I need not subtract the size of my code from the 1 MB. I already have code to drive the Ethernet port and handle TCP/IP connections, and it requires 2 KB for its state data, including a 1 KB buffer via which the code will read and write data. Is there a solution to this problem?
Sources Of Question And Answer:
slashdot.org
cleaton.net
There is one rather sneaky trick not mentioned here so far. We assume that you have no extra way to store data, but that is not strictly true.
One way around your problem is to do the following horrible thing, which should not be attempted by anyone under any circumstances: Use the network traffic to store data. And no, I don't mean NAS.
You can sort the numbers with only a few bytes of RAM in the following way:
First, take two variables, COUNTER and VALUE, and set both to 0.
Every time you receive an integer I, increment COUNTER and set VALUE to max(VALUE, I);
Then send an ICMP echo request packet with data set to I to the router. Erase I and repeat.
Every time you receive the returned ICMP packet, you simply extract the integer and send it back out again in another echo request. This produces a huge number of ICMP requests scuttling backward and forward containing the integers.
Once COUNTER reaches 1000000, you have all of the values stored in the incessant stream of ICMP requests, and VALUE now contains the maximum integer. Pick some threshold T >> 1000000. Set COUNTER to zero. Every time you receive an ICMP packet, increment COUNTER and send the contained integer I back out in another echo request, unless I=VALUE, in which case transmit it to the destination for the sorted integers. Once COUNTER=T, decrement VALUE by 1, reset COUNTER to zero and repeat. Once VALUE reaches zero you should have transmitted all integers in order from largest to smallest to the destination, and have only used about 47 bits of RAM for the two persistent variables (and whatever small amount you need for the temporary values).
I know this is horrible, and I know there can be all sorts of practical issues, but I thought it might give some of you a laugh or at least horrify you.
Here's some working C++ code which solves the problem.
Proof that the memory constraints are satisfied:
Editor: There is no proof of the maximum memory requirements offered by the author either in this post or in his blogs. Since the number of bits necessary to encode a value depends on the values previously encoded, such a proof is likely non-trivial. The author notes that the largest encoded size he could stumble upon empirically was 1011732, and chose the buffer size 1013000 arbitrarily.
typedef unsigned int u32;
namespace WorkArea
{
    static const u32 circularSize = 253250;
    u32 circular[circularSize] = { 0 }; // consumes 1013000 bytes

    static const u32 stageSize = 8000;
    u32 stage[stageSize]; // consumes 32000 bytes
    ...
Together, these two arrays take 1045000 bytes of storage. That leaves 1048576 - 1045000 - 2×1024 = 1528 bytes for remaining variables and stack space.
It runs in about 23 seconds on my Xeon W3520. You can verify that the program works using the following Python script, assuming a program name of sort1mb.exe.
from subprocess import *
import random
sequence = [random.randint(0, 99999999) for i in xrange(1000000)]
sorter = Popen('sort1mb.exe', stdin=PIPE, stdout=PIPE)
for value in sequence:
    sorter.stdin.write('%08d\n' % value)
sorter.stdin.close()
result = [int(line) for line in sorter.stdout]
print('OK!' if result == sorted(sequence) else 'Error!')
A detailed explanation of the algorithm can be found in the following series of posts:
1MB Sorting Explained
Arithmetic Coding and the 1MB Sorting Problem
Arithmetic Encoding Using Fixed-Point Math
Please see the first correct answer or the later answer with arithmetic encoding. Below you may find some fun, but not a 100% bullet-proof solution.
This is quite an interesting task, and here is another solution. I hope somebody will find the result useful (or at least interesting).
Stage 1: Initial data structure, rough compression approach, basic results
Let's do some simple math: we have 1M (1048576 bytes) of RAM initially available to store 10^6 eight-digit decimal numbers in the range [0; 99999999]. Storing one number takes 27 bits (assuming unsigned numbers are used), so storing the raw stream would take around 3.4M of RAM. Somebody already said it doesn't seem to be feasible, but I would say the task can be solved if the input is "good enough". Basically, the idea is to compress the input data with a compression factor of 0.29 or better and do the sorting in a proper manner.
Let's solve the compression issue first. There are some relevant tests already available:
http://www.theeggeadventure.com/wikimedia/index.php/Java_Data_Compression
"I ran a test to compress one million consecutive integers using
various forms of compression. The results are as follows:"
None 4000027
Deflate 2006803
Filtered 1391833
BZip2 427067
Lzma 255040
It looks like LZMA (Lempel–Ziv–Markov chain algorithm) is a good choice to continue with. I've prepared a simple PoC, but there are still some details to be highlighted:
Memory is limited, so the idea is to presort numbers and use compressed buckets (dynamic size) as temporary storage
It is easier to achieve a better compression factor with presorted data, so there is a static buffer for each bucket (numbers from the buffer are to be sorted before LZMA)
Each bucket holds a specific range, so the final sort can be done for each bucket separately
Bucket's size can be properly set, so there will be enough memory to decompress stored data and do the final sort for each bucket separately
Please note, the attached code is a PoC; it can't be used as a final solution, it just demonstrates the idea of using several smaller buffers to store presorted numbers in some optimal way (possibly compressed). LZMA is not proposed as a final solution; it is used as the fastest possible way to introduce compression to this PoC.
See the PoC code below (please note it is just a demo; to compile it, LZMA-Java will be needed):
import java.io.*;
import java.util.*;
// plus the LZMA Encoder and Decoder classes from the LZMA-Java library
// (the exact import path depends on the LZMA-Java distribution used)

public class MemorySortDemo {
static final int NUM_COUNT = 1000000;
static final int NUM_MAX = 100000000;
static final int BUCKETS = 5;
static final int DICT_SIZE = 16 * 1024; // LZMA dictionary size
static final int BUCKET_SIZE = 1024;
static final int BUFFER_SIZE = 10 * 1024;
static final int BUCKET_RANGE = NUM_MAX / BUCKETS;
static class Producer {
private Random random = new Random();
public int produce() { return random.nextInt(NUM_MAX); }
}
static class Bucket {
public int size, pointer;
public int[] buffer = new int[BUFFER_SIZE];
public ByteArrayOutputStream tempOut = new ByteArrayOutputStream();
public DataOutputStream tempDataOut = new DataOutputStream(tempOut);
public ByteArrayOutputStream compressedOut = new ByteArrayOutputStream();
public void submitBuffer() throws IOException {
Arrays.sort(buffer, 0, pointer);
for (int j = 0; j < pointer; j++) {
tempDataOut.writeInt(buffer[j]);
size++;
}
pointer = 0;
}
public void write(int value) throws IOException {
if (isBufferFull()) {
submitBuffer();
}
buffer[pointer++] = value;
}
public boolean isBufferFull() {
return pointer == BUFFER_SIZE;
}
public byte[] compressData() throws IOException {
tempDataOut.close();
return compress(tempOut.toByteArray());
}
private byte[] compress(byte[] input) throws IOException {
final BufferedInputStream in = new BufferedInputStream(new ByteArrayInputStream(input));
final DataOutputStream out = new DataOutputStream(new BufferedOutputStream(compressedOut));
final Encoder encoder = new Encoder();
encoder.setEndMarkerMode(true);
encoder.setNumFastBytes(0x20);
encoder.setDictionarySize(DICT_SIZE);
encoder.setMatchFinder(Encoder.EMatchFinderTypeBT4);
ByteArrayOutputStream encoderProperties = new ByteArrayOutputStream();
encoder.writeCoderProperties(encoderProperties);
encoderProperties.flush();
encoderProperties.close();
encoder.code(in, out, -1, -1, null);
out.flush();
out.close();
in.close();
return encoderProperties.toByteArray();
}
public int[] decompress(byte[] properties) throws IOException {
InputStream in = new ByteArrayInputStream(compressedOut.toByteArray());
ByteArrayOutputStream data = new ByteArrayOutputStream(10 * 1024);
BufferedOutputStream out = new BufferedOutputStream(data);
Decoder decoder = new Decoder();
decoder.setDecoderProperties(properties);
decoder.code(in, out, 4 * size);
out.flush();
out.close();
in.close();
DataInputStream input = new DataInputStream(new ByteArrayInputStream(data.toByteArray()));
int[] array = new int[size];
for (int k = 0; k < size; k++) {
array[k] = input.readInt();
}
return array;
}
}
static class Sorter {
private Bucket[] bucket = new Bucket[BUCKETS];
public void doSort(Producer p, Consumer c) throws IOException {
for (int i = 0; i < bucket.length; i++) { // allocate buckets
bucket[i] = new Bucket();
}
for(int i=0; i< NUM_COUNT; i++) { // produce some data
int value = p.produce();
int bucketId = value/BUCKET_RANGE;
bucket[bucketId].write(value);
c.register(value);
}
for (int i = 0; i < bucket.length; i++) { // submit non-empty buffers
bucket[i].submitBuffer();
}
byte[] compressProperties = null;
for (int i = 0; i < bucket.length; i++) { // compress the data
compressProperties = bucket[i].compressData();
}
printStatistics();
for (int i = 0; i < bucket.length; i++) { // decode & sort buckets one by one
int[] array = bucket[i].decompress(compressProperties);
Arrays.sort(array);
for(int v : array) {
c.consume(v);
}
}
c.finalCheck();
}
public void printStatistics() {
int size = 0;
int sizeCompressed = 0;
for (int i = 0; i < BUCKETS; i++) {
int bucketSize = 4*bucket[i].size;
size += bucketSize;
sizeCompressed += bucket[i].compressedOut.size();
System.out.println(" bucket[" + i
+ "] contains: " + bucket[i].size
+ " numbers, compressed size: " + bucket[i].compressedOut.size()
+ String.format(" compression factor: %.2f", ((double)bucket[i].compressedOut.size())/bucketSize));
}
System.out.println(String.format("Data size: %.2fM",(double)size/(1014*1024))
+ String.format(" compressed %.2fM",(double)sizeCompressed/(1014*1024))
+ String.format(" compression factor %.2f",(double)sizeCompressed/size));
}
}
static class Consumer {
private Set<Integer> values = new HashSet<>();
int v = -1;
public void consume(int value) {
if(v < 0) v = value;
if(v > value) {
throw new IllegalArgumentException("Current value is greater than previous: " + v + " > " + value);
}else{
v = value;
values.remove(value);
}
}
public void register(int value) {
values.add(value);
}
public void finalCheck() {
System.out.println(values.size() > 0 ? "NOT OK: " + values.size() : "OK!");
}
}
public static void main(String[] args) throws IOException {
Producer p = new Producer();
Consumer c = new Consumer();
Sorter sorter = new Sorter();
sorter.doSort(p, c);
}
}
With random numbers it produces the following:
bucket[0] contains: 200357 numbers, compressed size: 353679 compression factor: 0.44
bucket[1] contains: 199465 numbers, compressed size: 352127 compression factor: 0.44
bucket[2] contains: 199682 numbers, compressed size: 352464 compression factor: 0.44
bucket[3] contains: 199949 numbers, compressed size: 352947 compression factor: 0.44
bucket[4] contains: 200547 numbers, compressed size: 353914 compression factor: 0.44
Data size: 3.85M compressed 1.70M compression factor 0.44
For a simple ascending sequence (one bucket is used) it produces:
bucket[0] contains: 1000000 numbers, compressed size: 256700 compression factor: 0.06
Data size: 3.85M compressed 0.25M compression factor 0.06
EDIT
Conclusion:
Don't try to fool Nature
Use simpler compression with a lower memory footprint
Some additional clues are really needed. A common bullet-proof solution does not seem to be feasible.
Stage 2: Enhanced compression, final conclusion
As was already mentioned in the previous section, any suitable compression technique can be used. So let's get rid of LZMA in favor of a simpler and better (if possible) approach. There are a lot of good solutions, including arithmetic coding, radix trees, etc.
Anyway, a simple but useful encoding scheme will be more illustrative than yet another external library providing some nifty algorithm. The actual solution is pretty straightforward: since there are buckets with partially sorted data, deltas can be used instead of numbers.
Random input test shows slightly better results:
bucket[0] contains: 10103 numbers, compressed size: 13683 compression factor: 0.34
bucket[1] contains: 9885 numbers, compressed size: 13479 compression factor: 0.34
...
bucket[98] contains: 10026 numbers, compressed size: 13612 compression factor: 0.34
bucket[99] contains: 10058 numbers, compressed size: 13701 compression factor: 0.34
Data size: 3.85M compressed 1.31M compression factor 0.34
Sample code
public static void encode(int[] buffer, int length, BinaryOut output) {
short size = (short)(length & 0x7FFF);
output.write(size);
output.write(buffer[0]);
for(int i=1; i< size; i++) {
int next = buffer[i] - buffer[i-1];
int bits = getBinarySize(next);
int len = bits;
if(bits > 24) {
output.write(3, 2);
len = bits - 24;
}else if(bits > 16) {
output.write(2, 2);
len = bits-16;
}else if(bits > 8) {
output.write(1, 2);
len = bits - 8;
}else{
output.write(0, 2);
}
if (len > 0) {
if ((len % 2) > 0) {
len = len / 2;
output.write(len, 2);
output.write(false);
} else {
len = len / 2 - 1;
output.write(len, 2);
}
output.write(next, bits);
}
}
}
public static short decode(BinaryIn input, int[] buffer, int offset) {
short length = input.readShort();
int value = input.readInt();
buffer[offset] = value;
for (int i = 1; i < length; i++) {
int flag = input.readInt(2);
int bits;
int next = 0;
switch (flag) {
case 0:
bits = 2 * input.readInt(2) + 2;
next = input.readInt(bits);
break;
case 1:
bits = 8 + 2 * input.readInt(2) +2;
next = input.readInt(bits);
break;
case 2:
bits = 16 + 2 * input.readInt(2) +2;
next = input.readInt(bits);
break;
case 3:
bits = 24 + 2 * input.readInt(2) +2;
next = input.readInt(bits);
break;
}
buffer[offset + i] = buffer[offset + i - 1] + next;
}
return length;
}
Please note, this approach:
does not consume a lot of memory
works with streams
provides not so bad results
Full code can be found here, BinaryInput and BinaryOutput implementations can be found here
Final conclusion
No final conclusion :) Sometimes it is a really good idea to move one level up and review the task from a meta-level point of view.
It was fun to spend some time with this task. BTW, there are a lot of interesting answers below. Thank you for your attention and happy coding.
A solution is possible only because of the difference between 1 megabyte and 1 million bytes. There are about 2 to the power 8093729.5 different ways to choose 1 million 8-digit numbers with duplicates allowed and order unimportant, so a machine with only 1 million bytes of RAM doesn't have enough states to represent all the possibilities. But 1M (less 2k for TCP/IP) is 1022*1024*8 = 8372224 bits, so a solution is possible.
Part 1, initial solution
This approach needs a little more than 1M, I'll refine it to fit into 1M later.
I'll store a compact sorted list of numbers in the range 0 to 99999999 as a sequence of sublists of 7-bit numbers. The first sublist holds numbers from 0 to 127, the second sublist holds numbers from 128 to 255, etc. 100000000/128 is exactly 781250, so 781250 such sublists will be needed.
Each sublist consists of a 2-bit sublist header followed by a sublist body. The sublist body takes up 7 bits per sublist entry. The sublists are all concatenated together, and the format makes it possible to tell where one sublist ends and the next begins. The total storage required for a fully populated list is 2*781250 + 7*1000000 = 8562500 bits, which is about 1.021 M-bytes.
The 4 possible sublist header values are:
00 Empty sublist, nothing follows.
01 Singleton, there is only one entry in the sublist and the next 7 bits hold it.
10 The sublist holds at least 2 distinct numbers. The entries are stored in non-decreasing order, except that the last entry is less than or equal to the first. This allows the end of the sublist to be identified. For example, the numbers 2,4,6 would be stored as (4,6,2). The numbers 2,2,3,4,4 would be stored as (2,3,4,4,2).
11 The sublist holds 2 or more repetitions of a single number. The next 7 bits give the number. Then come zero or more 7-bit entries with the value 1, followed by a 7-bit entry with the value 0. The length of the sublist body dictates the number of repetitions. For example, the numbers 12,12 would be stored as (12,0), the numbers 12,12,12 would be stored as (12,1,0), 12,12,12,12 would be (12,1,1,0) and so on.
I start off with an empty list, read a bunch of numbers in and store them as 32 bit integers, sort the new numbers in place (using heapsort, probably) and then merge them into a new compact sorted list. Repeat until there are no more numbers to read, then walk the compact list once more to generate the output.
The line below represents memory just before the start of the list merge operation. The "O"s are the region that hold the sorted 32-bit integers. The "X"s are the region that hold the old compact list. The "=" signs are the expansion room for the compact list, 7 bits for each integer in the "O"s. The "Z"s are other random overhead.
ZZZOOOOOOOOOOOOOOOOOOOOOOOOOO==========XXXXXXXXXXXXXXXXXXXXXXXXXX
The merge routine starts reading at the leftmost "O" and at the leftmost "X", and starts writing at the leftmost "=". The write pointer doesn't catch the compact list read pointer until all of the new integers are merged, because both pointers advance 2 bits for each sublist and 7 bits for each entry in the old compact list, and there is enough extra room for the 7-bit entries for the new numbers.
Part 2, cramming it into 1M
To squeeze the solution above into 1M, I need to make the compact list format a bit more compact. I'll get rid of one of the sublist types, so that there will be just 3 different possible sublist header values. Then I can use "00", "01" and "1" as the sublist header values and save a few bits. The sublist types are:
A Empty sublist, nothing follows.
B Singleton, there is only one entry in the sublist and the next 7 bits hold it.
C The sublist holds at least 2 distinct numbers. The entries are stored in non-decreasing order, except that the last entry is less than or equal to the first. This allows the end of the sublist to be identified. For example, the numbers 2,4,6 would be stored as (4,6,2). The numbers 2,2,3,4,4 would be stored as (2,3,4,4,2).
D The sublist consists of 2 or more repetitions of a single number.
My 3 sublist header values will be "A", "B" and "C", so I need a way to represent D-type sublists.
Suppose I have the C-type sublist header followed by 3 entries, such as "C[17][101][58]". This can't be part of a valid C-type sublist as described above, since the third entry is less than the second but more than the first. I can use this type of construct to represent a D-type sublist. In bit terms, anywhere I have "C{00?????}{1??????}{01?????}" is an impossible C-type sublist. I'll use this to represent a sublist consisting of 3 or more repetitions of a single number. The first two 7-bit words encode the number (the "N" bits below) and are followed by zero or more {0100001} words followed by a {0100000} word.
For example, 3 repetitions: "C{00NNNNN}{1NN0000}{0100000}", 4 repetitions: "C{00NNNNN}{1NN0000}{0100001}{0100000}", and so on.
That just leaves lists that hold exactly 2 repetitions of a single number. I'll represent those with another impossible C-type sublist pattern: "C{0??????}{11?????}{10?????}". There's plenty of room for the 7 bits of the number in the first 2 words, but this pattern is longer than the sublist that it represents, which makes things a bit more complex. The five question-marks at the end can be considered not part of the pattern, so I have: "C{0NNNNNN}{11N????}10" as my pattern, with the number to be repeated stored in the "N"s. That's 2 bits too long.
I'll have to borrow 2 bits and pay them back from the 4 unused bits in this pattern. When reading, on encountering "C{0NNNNNN}{11N00AB}10", output 2 instances of the number in the "N"s, overwrite the "10" at the end with bits A and B, and rewind the read pointer by 2 bits. Destructive reads are ok for this algorithm, since each compact list gets walked only once.
When writing a sublist of 2 repetitions of a single number, write "C{0NNNNNN}11N00" and set the borrowed bits counter to 2. At every write where the borrowed bits counter is non-zero, it is decremented for each bit written and "10" is written when the counter hits zero. So the next 2 bits written will go into slots A and B, and then the "10" will get dropped onto the end.
With 3 sublist header values represented by "00", "01" and "1", I can assign "1" to the most popular sublist type. I'll need a small table to map sublist header values to sublist types, and I'll need an occurrence counter for each sublist type so that I know what the best sublist header mapping is.
The worst case minimal representation of a fully populated compact list occurs when all the sublist types are equally popular. In that case I save 1 bit for every 3 sublist headers, so the list size is 2*781250 + 7*1000000 - 781250/3 = 8302083.3 bits. Rounding up to a 32 bit word boundary, that's 8302112 bits, or 1037764 bytes.
1M minus the 2k for TCP/IP state and buffers is 1022*1024 = 1046528 bytes, leaving me 8764 bytes to play with.
But what about the process of changing the sublist header mapping? In the memory map below, "Z" is random overhead, "=" is free space, "X" is the compact list.
ZZZ=====XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Start reading at the leftmost "X" and start writing at the leftmost "=" and work right. When it's done the compact list will be a little shorter and it will be at the wrong end of memory:
ZZZXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX=======
So then I'll need to shunt it to the right:
ZZZ=======XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
In the header mapping change process, up to 1/3 of the sublist headers will be changing from 1-bit to 2-bit. In the worst case these will all be at the head of the list, so I'll need at least 781250/3 bits of free storage before I start, which takes me back to the memory requirements of the previous version of the compact list :(
To get around that, I'll split the 781250 sublists into 10 sublist groups of 78125 sublists each. Each group has its own independent sublist header mapping. Using the letters A to J for the groups:
ZZZ=====AAAAAABBCCCCDDDDDEEEFFFGGGGGGGGGGGHHIJJJJJJJJJJJJJJJJJJJJ
Each sublist group shrinks or stays the same during a sublist header mapping change:
ZZZ=====AAAAAABBCCCCDDDDDEEEFFFGGGGGGGGGGGHHIJJJJJJJJJJJJJJJJJJJJ
ZZZAAAAAA=====BBCCCCDDDDDEEEFFFGGGGGGGGGGGHHIJJJJJJJJJJJJJJJJJJJJ
ZZZAAAAAABB=====CCCCDDDDDEEEFFFGGGGGGGGGGGHHIJJJJJJJJJJJJJJJJJJJJ
ZZZAAAAAABBCCC======DDDDDEEEFFFGGGGGGGGGGGHHIJJJJJJJJJJJJJJJJJJJJ
ZZZAAAAAABBCCCDDDDD======EEEFFFGGGGGGGGGGGHHIJJJJJJJJJJJJJJJJJJJJ
ZZZAAAAAABBCCCDDDDDEEE======FFFGGGGGGGGGGGHHIJJJJJJJJJJJJJJJJJJJJ
ZZZAAAAAABBCCCDDDDDEEEFFF======GGGGGGGGGGGHHIJJJJJJJJJJJJJJJJJJJJ
ZZZAAAAAABBCCCDDDDDEEEFFFGGGGGGGGGG=======HHIJJJJJJJJJJJJJJJJJJJJ
ZZZAAAAAABBCCCDDDDDEEEFFFGGGGGGGGGGHH=======IJJJJJJJJJJJJJJJJJJJJ
ZZZAAAAAABBCCCDDDDDEEEFFFGGGGGGGGGGHHI=======JJJJJJJJJJJJJJJJJJJJ
ZZZAAAAAABBCCCDDDDDEEEFFFGGGGGGGGGGHHIJJJJJJJJJJJJJJJJJJJJ=======
ZZZ=======AAAAAABBCCCDDDDDEEEFFFGGGGGGGGGGHHIJJJJJJJJJJJJJJJJJJJJ
The worst case temporary expansion of a sublist group during a mapping change is 78125/3 = 26042 bits, under 4k. If I allow 4k plus the 1037764 bytes for a fully populated compact list, that leaves me 8764 - 4096 = 4668 bytes for the "Z"s in the memory map.
That should be plenty for the 10 sublist header mapping tables, 30 sublist header occurrence counts and the other few counters, pointers and small buffers I'll need, and space I've used without noticing, like stack space for function call return addresses and local variables.
Part 3, how long would it take to run?
With an empty compact list the 1-bit list header will be used for an empty sublist, and the starting size of the list will be 781250 bits. In the worst case the list grows 8 bits for each number added, so 32 + 8 = 40 bits of free space are needed for each of the 32-bit numbers to be placed at the top of the list buffer and then sorted and merged. In the worst case, changing the sublist header mapping results in a space usage of 2*781250 + 7*entries - 781250/3 bits.
With a policy of changing the sublist header mapping after every fifth merge once there are at least 800000 numbers in the list, a worst case run would involve a total of about 30M of compact list reading and writing activity.
Source:
http://nick.cleaton.net/ramsortsol.html
Gilmanov's answer is very wrong in its assumptions. It starts speculating based on a pointless measurement of a million consecutive integers. That means no gaps. Those random gaps, however small, really make it a poor idea.
Try it yourself. Get 1 million random 27-bit integers, sort them, compress with 7-Zip, xz, whatever LZMA you want. The result is over 1.5 MB. The premise above is the compression of sequential numbers; even delta encoding of that is over 1.1 MB. And never mind that this is using over 100 MB of RAM for compression. So even the compressed integers don't fit the problem, never mind run-time RAM usage.
It saddens me how people just upvote pretty graphics and rationalization.
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int32_t ints[1000000]; // Random 27-bit integers

int cmpi32(const void *a, const void *b) {
    return ( *(int32_t *)a - *(int32_t *)b );
}

int main() {
    int32_t *pi = ints; // Pointer to input ints (REPLACE W/ read from net)

    // Fill pseudo-random integers of 27 bits
    srand(time(NULL));
    for (int i = 0; i < 1000000; i++)
        ints[i] = rand() & ((1<<27) - 1); // Random 32 bits masked to 27 bits

    qsort(ints, 1000000, sizeof (ints[0]), cmpi32); // Sort 1000000 int32s

    // Now delta encode, optional, store differences to previous int
    for (int i = 1, prev = ints[0]; i < 1000000; i++) {
        ints[i] -= prev;
        prev += ints[i];
    }

    FILE *f = fopen("ints.bin", "w");
    fwrite(ints, 4, 1000000, f);
    fclose(f);
    exit(0);
}
Now compress ints.bin with LZMA...
$ xz -f --keep ints.bin # 100 MB RAM
$ 7z a ints.bin.7z ints.bin # 130 MB RAM
$ ls -lh ints.bin*
3.8M ints.bin
1.1M ints.bin.7z
1.2M ints.bin.xz
I think one way to think about this is from a combinatorics viewpoint: how many possible combinations of sorted number orderings are there? If we give the combination 0,0,0,....,0 the code 0, and 0,0,0,...,1 the code 1, and 99999999, 99999999, ... 99999999 the code N, what is N? In other words, how big is the result space?
Well, one way to think about this is noticing that this is a bijection of the problem of finding the number of monotonic paths in an N x M grid, where N = 1,000,000 and M = 100,000,000. In other words, if you have a grid that is 1,000,000 wide and 100,000,000 tall, how many shortest paths from the bottom left to the top right are there? Shortest paths of course require you only ever either move right or up (if you were to move down or left you would be undoing previously accomplished progress). To see how this is a bijection of our number sorting problem, observe the following:
You can imagine any horizontal leg in our path as a number in our ordering, where the Y location of the leg represents the value.
So if the path simply moves to the right all the way to the end, then jumps all the way to the top, that is equivalent to the ordering 0,0,0,...,0. If it instead begins by jumping all the way to the top and then moves to the right 1,000,000 times, that is equivalent to 99999999,99999999,..., 99999999. A path where it moves right once, then up once, then right once, then up once, etc. to the very end (then necessarily jumps all the way to the top), is equivalent to 0,1,2,3,...,999999.
Luckily for us this problem has already been solved, such a grid has (N + M) Choose (M) paths:
(1,000,000 + 100,000,000) Choose (100,000,000) ~= 2.27 * 10^2436455
N thus equals 2.27 * 10^2436455, and so the code 0 represents 0,0,0,...,0 and the code 2.27 * 10^2436455 and some change represents 99999999,99999999,..., 99999999.
In order to store all the numbers from 0 to 2.27 * 10^2436455 you need lg2 (2.27 * 10^2436455) = 8.0937 * 10^6 bits.
1 megabyte = 8388608 bits > 8093700 bits
So it appears that we at least actually have enough room to store the result! Now of course the interesting bit is doing the sorting as the numbers stream in. I'm not sure what the best approach to this is, given we have 294908 bits remaining. I imagine an interesting technique would be to at each point assume that that is the entire ordering, find the code for that ordering, and then as you receive a new number go back and update the previous code. Hand wave hand wave.
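If you want to check that bit count numerically, here is a small sketch (my own, with a hypothetical class name) that sums the log2 of the factors of (N + M) choose M term by term:

public class PathCountBits {
    public static void main(String[] args) {
        final long n = 1_000_000L;   // list length
        final long m = 100_000_000L; // size of the value range
        double bits = 0.0;
        // C(n + m, n) is the product over i = 1..n of (m + i) / i, so sum the log2 of each factor.
        for (long i = 1; i <= n; i++) {
            bits += Math.log((double) (m + i) / i) / Math.log(2.0);
        }
        System.out.printf("log2 C(n + m, n) = %.0f bits (about %.0f bytes)%n", bits, bits / 8);
    }
}

This prints roughly 8.09 million bits, in line with the 8.0937 * 10^6 figure above.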
My suggestions here owe a lot to Dan's solution
First off I assume the solution must handle all possible input lists. I think the popular answers do not make this assumption (which IMO is a huge mistake).
It is known that no form of lossless compression will reduce the size of all inputs.
All the popular answers assume they will be able to apply compression effective enough to allow them extra space. In fact, a chunk of extra space large enough to hold some portion of their partially completed list in an uncompressed form and allow them to perform their sorting operations. This is just a bad assumption.
For such a solution, anyone with knowledge of how they do their compression will be able to design some input data that does not compress well for this scheme, and the "solution" will most likely then break due to running out of space.
Instead I take a mathematical approach. Our possible outputs are all the lists of length LEN consisting of elements in the range 0..MAX. Here the LEN is 1,000,000 and our MAX is 100,000,000.
For arbitrary LEN and MAX, the amount of bits needed to encode this state is:
Log2(MAX Multichoose LEN)
So for our numbers, once we have completed receiving and sorting, we will need at least Log2(100,000,000 MC 1,000,000) bits to store our result in a way that can uniquely distinguish all possible outputs.
This is ~= 988kb. So we actually have enough space to hold our result. From this point of view, it is possible.
[Deleted pointless rambling now that better examples exist...]
Best answer is here.
Another good answer is here and basically uses insertion sort as the function to expand the list by one element (it buffers a few elements and pre-sorts, to allow insertion of more than one at a time, which saves a bit of time). It uses a nice compact state encoding too: buckets of seven-bit deltas.
Suppose this task is possible. Just prior to output, there will be an in-memory representation of the million sorted numbers. How many different such representations are there? Since there may be repeated numbers we can't use nCr (choose), but there is an operation called multichoose that works on multisets.
There are 2.2e2436455 ways to choose a million numbers in range 0..99,999,999.
That requires 8,093,730 bits to represent every possible combination, or 1,011,717 bytes.
So theoretically it may be possible, if you can come up with a sane (enough) representation of the sorted list of numbers. For example, an insane representation might require a 10MB lookup table or thousands of lines of code.
However, if "1M RAM" means one million bytes, then clearly there is not enough space. The fact that 5% more memory makes it theoretically possible suggests to me that the representation will have to be VERY efficient and probably not sane.
(My original answer was wrong, sorry for the bad math, see below the break.)
How about this?
The first 27 bits store the lowest number you have seen, then the difference to the next number seen, encoded as follows: 5 bits to store the number of bits used in storing the difference, then the difference. Use 00000 to indicate that you saw that number again.
This works because as more numbers are inserted, the average difference between numbers goes down, so you use less bits to store the difference as you add more numbers. I believe this is called a delta list.
The worst case I can think of is all numbers evenly spaced (by 100), e.g. Assuming 0 is the first number:
000000000000000000000000000 00111 1100100
                            ^^^^^^^^^^^^^
                            a million times
27 + 1,000,000 * (5+7) bits = ~ 427k
Reddit to the rescue!
If all you had to do was sort them, this problem would be easy. It takes 122k (1 million bits) to store which numbers you have seen (0th bit on if 0 was seen, 2300th bit on if 2300 was seen, etc.).
You read the numbers, store them in the bit field, and then shift the bits out while keeping a count.
BUT, you have to remember how many you have seen. I was inspired by the sublist answer above to come up with this scheme:
Instead of using one bit, use either 2 or 27 bits:
00 means you did not see the number.
01 means you saw it once
1 means you saw it, and the next 26 bits are the count of how many times.
I think this works: if there are no duplicates, you have a 244k list.
In the worst case you see each number twice (if you see one number three times, it shortens the rest of the list for you), that means you have seen 50,000 more than once, and you have seen 950,000 items 0 or 1 times.
50,000 * 27 + 950,000 * 2 = 396.7k.
You can make further improvements if you use the following encoding:
0 means you did not see the number
10 means you saw it once
11 is how you keep count
Which will, on average, result in 280.7k of storage.
EDIT: my Sunday morning math was wrong.
The worst case is we see 500,000 numbers twice, so the math becomes:
500,000 *27 + 500,000 *2 = 1.77M
The alternate encoding results in an average storage of
500,000 * 27 + 500,000 = 1.70M
: (
There is one solution to this problem across all possible inputs. Cheat.
Read m values over TCP, where m is near the max that can be sorted in memory, maybe n/4.
Sort the 250,000 (or so) numbers and output them.
Repeat for the other 3 quarters.
Let the receiver merge the 4 lists of numbers it has received as it processes them. (It's not much slower than using a single list.)
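For the receiver side, here is a small sketch of that multi-list merge (my own illustration, in Java, assuming the four sorted batches have already been collected as arrays; a real receiver would consume each TCP stream lazily instead):

static int[] mergeSorted(int[][] lists) {
    int total = 0;
    for (int[] list : lists) total += list.length;
    int[] out = new int[total];
    // Each heap entry is {value, list index, position within that list}.
    java.util.PriorityQueue<int[]> heap =
            new java.util.PriorityQueue<>((a, b) -> Integer.compare(a[0], b[0]));
    for (int i = 0; i < lists.length; i++) {
        if (lists[i].length > 0) heap.add(new int[] {lists[i][0], i, 0});
    }
    int n = 0;
    while (!heap.isEmpty()) {
        int[] top = heap.poll();
        out[n++] = top[0];
        int list = top[1], pos = top[2] + 1;
        if (pos < lists[list].length) heap.add(new int[] {lists[list][pos], list, pos});
    }
    return out;
}

The merge visits each number once, which is why handling four lists is not much slower than handling a single list.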
What kind of computer are you using? It may not have any other "normal" local storage, but does it have video RAM, for example? 1 megapixel x 32 bits per pixel (say) is pretty close to your required data input size.
(I largely ask in memory of the old Acorn RISC PC, which could 'borrow' VRAM to expand the available system RAM, if you chose a low resolution or low colour-depth screen mode!). This was rather useful on a machine with only a few MB of normal RAM.
I would try a Radix Tree. If you could store the data in a tree, you could then do an in-order traverse to transmit the data.
I'm not sure you could fit this into 1MB, but I think it's worth a try.
A radix tree representation would come close to handling this problem, since the radix tree takes advantage of "prefix compression". But it's hard to conceive of a radix tree representation that could represent a single node in one byte -- two is probably about the limit.
But, regardless of how the data is represented, once it is sorted it can be stored in prefix-compressed form, where the numbers 10, 11, and 12 would be represented by, say 001b, 001b, 001b, indicating an increment of 1 from the previous number. Perhaps, then, 10101b would represent an increment of 5, 1101001b an increment of 9, etc.
There are 10^6 values in a range of 10^8, so there's one value per hundred code points on average. Store the distance from the Nth point to the (N+1)th. Duplicate values have a skip of 0. This means that the skip needs an average of just under 7 bits to store, so a million of them will happily fit into our 8 million bits of storage.
These skips need to be encoded into a bitstream, say by Huffman encoding. Insertion is by iterating through the bitstream and rewriting after the new value. Output by iterating through and writing out the implied values. For practicality, it probably wants to be done as, say, 10^4 lists covering 10^4 code points (and an average of 100 values) each.
A good Huffman tree for random data can be built a priori by assuming a Poisson distribution (mean=variance=100) on the length of the skips, but real statistics can be kept on the input and used to generate an optimal tree to deal with pathological cases.
I have a computer with 1M of RAM and no other local storage
Another way to cheat: you could use non-local (networked) storage instead (your question does not preclude this) and call a networked service that could use straightforward disk-based mergesort (or just enough RAM to sort in-memory, since you only need to accept 1M numbers), without needing the (admittedly extremely ingenious) solutions already given.
This might be cheating, but it's not clear whether you are looking for a solution to a real-world problem, or a puzzle that invites bending of the rules... if the latter, then a simple cheat may get better results than a complex but "genuine" solution (which as others have pointed out, can only work for compressible inputs).
Google's (bad) approach, from HN thread. Store RLE-style counts.
Your initial data structure is '99999999:0' (all zeros, haven't seen any numbers), and then let's say you see the number 3,866,344, so your data structure becomes '3866343:0,1:1,96133654:0'. As you can see, the numbers will always alternate between the number of zero bits and the number of '1' bits, so you can just assume the odd numbers represent 0 bits and the even numbers 1 bits. This becomes (3866343,1,96133654).
Their problem doesn't seem to cover duplicates, but let's say they use "0:1" for duplicates.
Big problem #1: insertions for 1M integers would take ages.
Big problem #2: like all plain delta encoding solutions, some distributions can't be covered this way. For example, 1M integers with distances of 0:99 (e.g. +99 each one). Now think of the same but with a random distance in the range of 0:99. (Note: 99999999/1000000 = 99.99)
Google's approach is both unworthy (slow) and incorrect. But to their defense, their problem might have been slightly different.
I think the solution is to combine techniques from video encoding, namely the discrete cosine transformation. In digital video, rather than recording the brightness or colour of video as regular values such as 110 112 115 116, each is subtracted from the last (similar to run length encoding). 110 112 115 116 becomes 110 2 3 1. The values 2 3 1 require fewer bits than the originals.
So let's say we create a list of the input values as they arrive on the socket. We are storing in each element not the value, but the offset from the one before it. We sort as we go, so the offsets are only going to be positive. But the offset could be 8 decimal digits wide, which fits in 3 bytes. We can't afford 3 bytes per element, so we need to pack these. We could use the top bit of each byte as a "continue bit", indicating that the next byte is part of the number, and the lower 7 bits of each byte need to be combined. Zero is valid for duplicates.
As the list fills up, the numbers should get closer together, meaning on average only 1 byte is used to determine the distance to the next value. 7 bits of value and 1 continue bit is convenient, but there may be a sweet spot that requires less than 8 bits for a "continue" value.
Anyway, I did some experiment. I use a random number generator and I can fit a million sorted 8 digit decimal numbers into about 1279000 bytes. The average space between each number is consistently 99...
import java.io.*;
import java.util.*;

public class Test {
    public static void main(String[] args) throws IOException {
        // 1 million values
        int[] values = new int[1000000];
        // create random values up to 8 digits long
        Random random = new Random();
        for (int x = 0; x < values.length; x++) {
            values[x] = random.nextInt(100000000);
        }
        Arrays.sort(values);
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        int av = 0;
        writeCompact(baos, values[0]); // first value
        for (int x = 1; x < values.length; x++) {
            int v = values[x] - values[x - 1]; // difference
            av += v;
            System.out.println(values[x] + " diff " + v);
            writeCompact(baos, v);
        }
        System.out.println("Average offset " + (av / values.length));
        System.out.println("Fits in " + baos.toByteArray().length);
    }

    public static void writeCompact(OutputStream os, long value) throws IOException {
        do {
            int b = (int) value & 0x7f;
            value = (value & 0x7fffffffffffffffL) >> 7;
            os.write(value == 0 ? b : (b | 0x80));
        } while (value != 0);
    }
}
We could play with the networking stack to send the numbers in sorted order before we have all the numbers. If you send 1M of data, TCP/IP will break it into 1500 byte packets and stream them in order to the target. Each packet will be given a sequence number.
We can do this by hand. Just before we fill our RAM we can sort what we have and send the list to our target, but leave holes in our sequence around each number. Then process the second half of the numbers the same way, using those holes in the sequence.
The networking stack on the far end will assemble the resulting data stream in order of sequence before handing it up to the application.
It's using the network to perform a merge sort. This is a total hack, but I was inspired by the other networking hack listed previously.
I would exploit the retransmission behaviour of TCP.
1. Make the TCP component create a large receive window.
2. Receive some amount of packets without sending an ACK for them.
3. Process those in passes, creating some (prefix-)compressed data structure.
4. Send a duplicate ACK for the last packet that is not needed anymore / wait for the retransmission timeout.
5. Go to 2, until all packets have been accepted.
This assumes some kind of benefit of buckets or multiple passes.
Probably by sorting the batches/buckets and merging them. -> radix trees
Use this technique to accept and sort the first 80%, then read the last 20%, verifying that the last 20% do not contain numbers that would land among the first 20% of the lowest numbers. Then send out the 20% lowest numbers, remove them from memory, accept the remaining 20% of new numbers and merge.
To represent the sorted array one can just store the first element and the difference between adjacent elements. In this way we are concerned with encoding 10^6 elements that can sum up to at most 10^8. Let's call this D. To encode the elements of D one can use a Huffman code. The dictionary for the Huffman code can be created on the go and the array updated every time a new item is inserted in the sorted array (insertion sort). Note that when the dictionary changes because of a new item the whole array should be updated to match the new encoding.
The average number of bits for encoding each element of D is maximized if we have equal numbers of each unique element. Say the elements d1, d2, ..., dN in D each appear F times. In that case (in the worst case we have both 0 and 10^8 in the input sequence) we have
sum(1<=i<=N) F*di = 10^8
where
sum(1<=i<=N) F = 10^6, i.e. F = 10^6/N, and the normalized frequency will be p = F/10^6 = 1/N.
The average number of bits will be -log2(p) = log2(N). Under these circumstances we should find a case that maximizes N. This happens if we have consecutive numbers for di starting from 0, i.e. di = i-1, therefore
10^8 = sum(1<=i<=N) F*di = sum(1<=i<=N) (10^6/N)(i-1) = (10^6/N) * N(N-1)/2
i.e.
N <= 201. And for this case the average number of bits is log2(201) = 7.6511, which means we will need around 1 byte per input element for storing the sorted array. Note that this doesn't mean D in general cannot have more than 201 elements. It just shows that if the elements of D are uniformly distributed, it cannot have more than 201 unique values.
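The same algebra written out in LaTeX for readability (no new assumptions, just the steps above):

\sum_{i=1}^{N} F\,d_i = 10^8, \qquad \sum_{i=1}^{N} F = NF = 10^6 \;\Rightarrow\; F = \frac{10^6}{N},\; p = \frac{1}{N}

\text{With } d_i = i-1:\quad 10^8 = \frac{10^6}{N}\sum_{i=1}^{N}(i-1) = \frac{10^6}{N}\cdot\frac{N(N-1)}{2} = \frac{10^6\,(N-1)}{2} \;\Rightarrow\; N = 201

\text{bits per element} = \log_2 N = \log_2 201 \approx 7.65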
Here is a generalized solution to this kind of problem:
General procedure
The taken approach is as follows. The algorithm operates on a single buffer of 32-bit words. It performs the following procedure in a loop:
We start with a buffer filled with compressed data from the last iteration. The buffer looks like this
|compressed sorted|empty|
Calculate the maximum amount of numbers that can be stored in this buffer, both compressed and uncompressed. Split the buffer into these two sections, beginning with the space for compressed data, ending with the uncompressed data. The buffer looks like
|compressed sorted|empty|empty|
Fill the uncompressed section with numbers to be sorted. The buffer looks like
|compressed sorted|empty|uncompressed unsorted|
Sort the new numbers with an in-place sort. The buffer looks like
|compressed sorted|empty|uncompressed sorted|
Right-align any already compressed data from the previous iteration in the compressed section. At this point the buffer is partitioned
|empty|compressed sorted|uncompressed sorted|
Perform a streaming decompression-recompression on the compressed section, merging in the sorted data in the uncompressed section. The old compressed section is consumed as the new compressed section grows. The buffer looks like
|compressed sorted|empty|
This procedure is performed until all numbers have been sorted.
Compression
This algorithm of course only works when it's possible to calculate the final compressed size of the new sorting buffer before knowing what will actually be compressed. Next to that, the compression algorithm needs to be good enough to solve the actual problem.
The used approach uses three steps. First, the algorithm will always store sorted sequences, therefore we can instead store purely the differences between consecutive entries. Each difference is in the range [0, 99999999].
These differences are then encoded as a unary bitstream. A 1 in this stream means "add 1 to the accumulator", a 0 means "emit the accumulator as an entry, and reset it". So difference N is represented by N 1's followed by one 0.
The sum of all differences will approach the maximum value that the algorithm supports, and the count of all differences will approach the amount of values inserted in the algorithm. This means we expect the stream to, at the end, contain max value 1's and count 0's. This allows us to calculate the expected probability of a 0 and 1 in the stream. Namely, the probability of a 0 is count/(count+maxval) and the probability of a 1 is maxval/(count+maxval).
We use these probabilities to define an arithmetic coding model over this bitstream. This arithmetic code will encode exactly these amounts of 1's and 0's in optimal space. We can calculate the space used by this model for any intermediate bitstream as: bits = encoded * log2(1 + maxval / amount) + maxval * log2(1 + amount / maxval). To calculate the total required space for the algorithm, set encoded equal to amount.
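As a quick sanity check of that formula (my own sketch, not part of the algorithm itself), plugging in amount = 10^6 and maxval = 10^8:

#include <cmath>
#include <cstdio>

// Expected size (in bits) of the unary delta bitstream under the fixed
// arithmetic-coding model described above.
double expected_bits(double encoded, double amount, double maxval) {
    return encoded * std::log2(1.0 + maxval / amount)
         + maxval  * std::log2(1.0 + amount / maxval);
}

int main() {
    double amount = 1e6, maxval = 1e8;
    double bits = expected_bits(amount, amount, maxval);      // total: encoded == amount
    std::printf("%.0f bits = %.0f bytes\n", bits, bits / 8);  // ~8.09e6 bits, ~1,011,718 bytes
}

The result is consistent with the entropy figure quoted in the Optimality section below.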
To not require a ridiculous amount of iterations, a small overhead can be added to the buffer. This will ensure that the algorithm will at least operate on the amount of numbers that fit in this overhead, as by far the largest time cost of the algorithm is the arithmetic coding compression and decompression each cycle.
Next to that, some overhead is necessary to store bookkeeping data and to handle slight inaccuracies in the fixed-point approximation of the arithmetic coding algorithm, but in total the algorithm is able to fit in 1MiB of space even with an extra buffer that can contain 8000 numbers, for a total of 1043916 bytes of space.
Optimality
Outside of reducing the (small) overhead of the algorithm it should be theoretically impossible to get a smaller result. To just contain the entropy of the final result, 1011717 bytes would be necessary. If we subtract the extra buffer added for efficiency this algorithm used 1011916 bytes to store the final result + overhead.
If the input stream could be received a few times, this would be much easier (there is no information about that; it is just an idea, with a time-performance problem). Then we could count the decimal values, and with the counts it would be easy to produce the output stream: compress by counting the values.
It depends on what would be in the input stream.
Sorting is a secondary problem here. As others have said, just storing the integers is hard, and cannot work on all inputs, since 27 bits per number would be necessary.
My take on this is: store only the differences between the consecutive (sorted) integers, as they will be most likely small. Then use a compression scheme, e.g. with 2 additional bits per input number, to encode how many bits the number is stored on.
Something like:
00 -> 5 bits
01 -> 11 bits
10 -> 19 bits
11 -> 27 bits
It should be possible to store a fair number of possible input lists within the given memory constraint. The maths of how to pick the compression scheme so that it works on the maximum number of inputs is beyond me.
I hope you may be able to exploit domain-specific knowledge of your input to find a good enough integer compression scheme based on this.
Oh and then, you do an insertion sort on that sorted list as you receive data.
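A minimal sketch of what that 2-bit-prefix delta encoding could look like in C++ (my illustration; the widths are the ones from the table above, and a std::vector<bool> stands in for a real bit writer):

#include <cstdint>
#include <vector>

// Each delta gets a 2-bit width selector followed by 5, 11, 19 or 27 payload bits.
void encode_delta(std::vector<bool>& out, uint32_t delta) {
    static const int widths[4] = {5, 11, 19, 27};
    int sel = 0;
    while (sel < 3 && delta >= (1u << widths[sel])) ++sel;   // smallest width that fits
    out.push_back((sel >> 1) & 1);                           // selector, high bit first
    out.push_back(sel & 1);
    for (int b = widths[sel] - 1; b >= 0; --b)               // payload, MSB first
        out.push_back((delta >> b) & 1);
}

Encoding the whole sorted list is then just encode_delta(bits, v[i] - v[i-1]) for each consecutive pair, plus the first value stored directly.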
Now aiming at an actual solution, covering all possible inputs in the 8-digit range with only 1MB of RAM. NOTE: work in progress, will continue tomorrow. Using arithmetic coding of the deltas of the sorted ints, the worst case for 1M sorted ints would cost about 7 bits per entry (since 99999999/1000000 is 99, and log2(99) is almost 7 bits).
But you need the 1m integers sorted to get to 7 or 8 bits! Shorter series would have bigger deltas, therefore more bits per element.
I'm working on taking as many as possible and compressing (almost) in-place. First batch of close to 250K ints would need about 9 bits each at best. So result would take about 275KB. Repeat with remaining free memory a few times. Then decompress-merge-in-place-compress those compressed chunks. This is quite hard, but possible. I think.
The merged lists would get closer and closer to the 7bit per integer target. But I don't know how many iterations it would take of the merge loop. Perhaps 3.
But the imprecision of the arithmetic coding implementation might make it impossible. If this problem is possible at all, it would be extremely tight.
Any volunteers?
You just need to store the differences between the numbers in sequence, and use an encoding to compress these sequence numbers. We have 2^23 bits. We shall divide it into 6bit chunks, and let the last bit indicate whether the number extends to another 6 bits (5bits plus extending chunk).
Thus, 000010 is 1, and 000100 is 2. 000001100000 is 128. Now, we consider the worst case in representing the differences in a sequence of numbers up to 10,000,000. There can be 10,000,000/2^5 differences greater than 2^5, 10,000,000/2^10 differences greater than 2^10, and 10,000,000/2^15 differences greater than 2^15, etc.
So, we add up how many bits it will take to represent our sequence. We have 1,000,000*6 + roundup(10,000,000/2^5)*6 + roundup(10,000,000/2^10)*6 + roundup(10,000,000/2^15)*6 + roundup(10,000,000/2^20)*4 = 7935479.
2^23 = 8388608. Since 8388608 > 7935479, we should easily have enough memory. We will probably need another little bit of memory to store the running sum of where we are when we insert new numbers. We then go through the sequence, find where to insert the new number, decrease the next difference if necessary, and shift everything after it right.
If we don't know anything about those numbers, we are limited by the following constraints:
we need to load all numbers before we can sort them,
the set of numbers is not compressible.
If these assumptions hold, there is no way to carry out your task, as you will need at least 26,575,425 bits of storage (3,321,929 bytes).
What can you tell us about your data ?
The trick is to represent the algorithms state, which is an integer multi-set, as a compressed stream of "increment counter"="+" and "output counter"="!" characters. For example, the set {0,3,3,4} would be represented as "!+++!!+!", followed by any number of "+" characters. To modify the multi-set you stream out the characters, keeping only a constant amount decompressed at a time, and make changes inplace before streaming them back in compressed form.
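To make the representation concrete, here is a small C++ sketch (mine) that converts a sorted multiset to and from the uncompressed "+"/"!" stream; the real algorithm never materialises this string, it streams it in compressed form:

#include <string>
#include <vector>

// '+' advances the counter by one, '!' emits the current counter value.
// {0,3,3,4} -> "!+++!!+!" (trailing '+'s up to the top of the range omitted).
std::string to_stream(const std::vector<int>& sorted) {
    std::string s;
    int counter = 0;
    for (int v : sorted) {
        while (counter < v) { s += '+'; ++counter; }
        s += '!';
    }
    return s;
}

std::vector<int> from_stream(const std::string& s) {
    std::vector<int> values;
    int counter = 0;
    for (char c : s) {
        if (c == '+') ++counter;
        else values.push_back(counter);
    }
    return values;
}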
Details
We know there are exactly 10^6 numbers in the final set, so there are at most 10^6 "!" characters. We also know that our range has size 10^8, meaning there are at most 10^8 "+" characters. The number of ways we can arrange 10^6 "!"s amongst 10^8 "+"s is (10^8 + 10^6) choose 10^6, and so specifying some particular arrangement takes ~0.965 MiB of data. That'll be a tight fit.
We can treat each character as independent without exceeding our quota. There are exactly 100 times more "+" characters than "!" characters, which gives 100:1 odds of each character being a "+" if we forget that they are dependent. A probability of 100/101 corresponds to ~0.08 bits per character, for an almost identical total of ~0.965 MiB (ignoring the dependency costs only ~12 bits in this case!).
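One way to reproduce the ~0.965 MiB figure (my sketch, using lgamma to evaluate the binomial coefficient without overflow):

#include <cmath>
#include <cstdio>

// log2 C(n+k, k) with n = 10^8 "+" characters and k = 10^6 "!" characters.
int main() {
    double n = 1e8, k = 1e6;
    double log2_choose = (std::lgamma(n + k + 1) - std::lgamma(n + 1) - std::lgamma(k + 1))
                         / std::log(2.0);
    std::printf("%.3f MiB\n", log2_choose / 8 / 1024 / 1024);  // prints roughly 0.965
}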
The simplest technique for storing independent characters with a known prior probability is Huffman coding. Note that we would need an impractically large tree (a Huffman tree for blocks of 10 characters has an average cost of about 2.4 bits per block, for a total of ~2.9 MiB; a Huffman tree for blocks of 20 characters has an average cost of about 3 bits per block, for a total of ~1.8 MiB; we would probably need blocks on the order of a hundred characters, implying more nodes in the tree than all the computer equipment that has ever existed could store). However, ROM is technically "free" according to the problem, and practical solutions that take advantage of the regularity in the tree will look essentially the same.
Pseudo-code
Have a sufficiently large huffman tree (or similar block-by-block compression data) stored in ROM
Start with a compressed string of 10^8 "+" characters.
To insert the number N, stream out the compressed string until N "+" characters have gone past then insert a "!". Stream the recompressed string back over the previous one as you go, keeping a constant amount of buffered blocks to avoid over/under-runs.
Repeat one million times: [input, stream decompress>insert>compress], then decompress to output
We have 1 MB - 3 KB RAM = 2^23 - 3*2^13 bits = 8388608 - 24576 = 8364032 bits available.
We are given 10^6 numbers in a 10^8 range. This gives an average gap of ~100 < 2^7 = 128
Let's first consider the simpler problem of fairly evenly spaced numbers when all gaps are < 128. This is easy. Just store the first number and the 7-bit gaps:
(27 bits) + 10^6 7-bit gap numbers = 7000027 bits required
Note repeated numbers have gaps of 0.
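A sketch of this simple fixed-width case (mine; a std::vector<bool> stands in for a real bit buffer, and every gap is assumed to fit in 7 bits):

#include <cstdint>
#include <vector>

// First value in 27 bits (2^27 > 10^8), then one 7-bit gap per remaining number.
std::vector<bool> pack_simple(const std::vector<uint32_t>& sorted) {
    std::vector<bool> bits;
    auto put = [&](uint32_t v, int width) {
        for (int b = width - 1; b >= 0; --b) bits.push_back((v >> b) & 1);
    };
    put(sorted[0], 27);
    for (size_t i = 1; i < sorted.size(); ++i)
        put(sorted[i] - sorted[i - 1], 7);   // gap, assumed < 128 (0 for duplicates)
    return bits;
}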
But what if we have gaps larger than 127?
OK, let's say a gap size < 127 is represented directly, but a gap size of 127 is followed by a continuous 8-bit encoding for the actual gap length:
10xxxxxx xxxxxxxx = 127 .. 16,383
110xxxxx xxxxxxxx xxxxxxxx = 16384 .. 2,097,151
etc.
Note this number representation describes its own length so we know when the next gap number starts.
With just small gaps < 127, this still requires 7000027 bits.
There can be up to (10^8)/(2^7) = 781,250 of the larger 23-bit gap numbers, requiring an extra 16*781,250 = 12,500,000 bits, which is too much. We need a more compact, more slowly growing representation of gaps.
The average gap size is 100 so if we reorder them as
[100, 99, 101, 98, 102, ..., 2, 198, 1, 199, 0, 200, 201, 202, ...]
and index this with a dense binary Fibonacci base encoding with no pairs of zeros (for example, 11011=8+5+2+1=16) with numbers delimited by '00' then I think we can keep the gap representation short enough, but it needs more analysis.
While receiving the stream, do these steps.
First, set some reasonable chunk size.
Pseudo-code idea:
The first step would be to find all the duplicates, stick them in a dictionary with their counts, and remove them.
The next step would be to take numbers that form arithmetic sequences and store them in special dictionaries with the first number and the step, like n, n+1, n+2, ... or 2n, 2n+1, 2n+2, ...
Begin to compress in chunks some reasonable ranges of numbers, like every 1,000 or every 10,000, for the remaining numbers that appear less often.
Uncompress a range if a number falls into it, add the number to the range, and leave it uncompressed a while longer.
Otherwise just add that number to a byte[chunkSize].
Continue the first four steps while receiving the stream. The final step is either to fail if you have exceeded memory, or, once all the data has been collected, to start outputting the result: sort the ranges and spit out the results in order, uncompressing the ranges that need it and sorting them as you get to them.

compression algorithm for sorted integers

I have a large sequence of random integers sorted from the lowest to the highest. The numbers start from 1 bit and end near 45 bits. In the beginning of the list I have numbers very close to each other: 4, 20, 23, 40, 66. But when the numbers start to get higher the distance between them is a bit higher too (actually the distance between them is aleatory). There are no duplicated numbers.
I'm using bit packing to save some space. Nonetheless, this file can get really big.
I would like to know what kind of compression algorithm can be used in this situation, or any other technique to save as much space as possible.
Thank you.
You can compress optimally if you know the true distribution of the data. If you can provide a probability distribution for each integer you can use arithmetic coding or other entropy coding techniques to compress to theoretical minimal size.
The trick is in predicting accurately.
First, you should probably compress the distances between the numbers because that allows you to make statistical statements. If you were to compress the numbers directly you'd have a hard time modelling them because they occur only once.
Next, you could try to build a very simple model to predict the next distance. Keep a histogram of all previously seen distances and calculate the probabilities from the frequencies.
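A small sketch of that idea (mine): build the gap histogram and compute the empirical entropy, which approximates the bits per gap an entropy coder driven by this model would spend:

#include <cmath>
#include <cstdint>
#include <map>
#include <vector>

double gap_entropy_bits(const std::vector<uint64_t>& sorted) {
    if (sorted.size() < 2) return 0.0;
    std::map<uint64_t, uint64_t> freq;               // gap -> occurrence count
    for (size_t i = 1; i < sorted.size(); ++i)
        ++freq[sorted[i] - sorted[i - 1]];
    double n = double(sorted.size() - 1);
    double bits = 0.0;
    for (const auto& kv : freq) {
        double p = double(kv.second) / n;
        bits -= p * std::log2(p);                    // Shannon entropy contribution
    }
    return bits;                                     // average bits per encoded gap
}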
You probably need to account for missing values (you clearly can't assign them 0 probability because that is not expressible) but you can use heuristics for that, like encoding the next distance bit-by-bit and predicting each bit individually. You will pay almost nothing for the high-order bits because they are almost always 0 and entropy encoding optimizes them away.
All of this is much simpler if you know the distribution. Example: if you are compressing a list of all prime numbers, you know the theoretical distribution of distances because there are formulae for that. So you already have a perfect model.
There's a very simple and fairly effective compression technique which can be used for sorted integers in a known range. Like most compression schemes, it is optimized for serial access, although you can build an index to speed up random access if needed.
It's a type of delta encoding (i.e. each number is represented by the distance from the previous one), consisting of a vector of codes which are either
a single 1-bit, representing a delta of 2^k which is added to the delta in the following code, or
a 0-bit followed by a k-bit delta, indicating that the next number is the specified delta from the previous one.
For example, if k is 4, the sequence:
00011 1 1 00000 1 00001
codes three numbers. The first four-bit encoding (3) is the first delta, taken from an initial value of 0, so the first number is 3. The next two solitary 1's accumulate to a delta of 2·2^4, or 32, which is added to the following delta of 0000, for a total of 32. So the second number is 3+32=35. Finally, the last delta is a single 2^4 plus 1, total 17, and the third number is 35+17=52.
The 1-bit indicates that the next delta should be incremented by 2^k (or, more generally, each delta is incremented by 2^k times the number of immediately preceding 1-bits).
Another, possibly better, way of thinking of this is that each delta is coded as a variable length bit sequence: 1^i 0 (1|0)^k, representing a delta of i·2^k + [the k-bit suffix]. But the first presentation aligns better with the optimality proof.
Since each "1" code represents an increment of 2^k, there cannot be more than m/2^k of them, where m is the largest number in the set to be compressed. The remaining codes all correspond to numbers, and have a total length of n·(k + 1) where n is the size of the set. The optimal value of k is roughly log2(m/n), which in your case would be 7 or 8.
I did a quick proof of concept of the algorithm, without worrying about optimizations. It's still plenty fast; sorting the random sample takes a lot longer than compressing/decompressing it. I tried it with a few different seeds and vector sizes from 16,400,000 to 31,000,000 with a value range of [0, 4,000,000,000). The bits used per data value ranged from 8.59 (n=31000000) to 9.45 (n=16400000). All of the tests were done with 7-bit suffixes; log2(m/n) varies from 7.01 (n=31000000) to 7.93 (n=16400000). I tried with 6-bit and 8-bit suffixes; except in the case of n=31000000 where the 6-bit suffixes were slightly smaller, the 7-bit suffix was always the best. So I guess that the optimal k is not exactly floor(log2(m/n)) but it's not far off.
Compression code:
#include <ostream>
#include <vector>

// BitOut is an assumed bit-stream writer with put(value, nbits), put(bit) and
// flush(fill_bit); its implementation is not shown here.
void Compress(std::ostream& os,
              const std::vector<unsigned long>& v,
              unsigned long k = 0) {
  BitOut out(os);
  out.put(v.size(), 64);                 // element count
  if (v.size()) {
    unsigned long twok;
    if (k == 0) {
      // Choose k so that 2^k is roughly v.back() / v.size(), i.e. log2(m/n).
      unsigned long ratio = v.back() / v.size();
      for (twok = 1; twok <= ratio / 2; ++k, twok *= 2) { }
    } else {
      twok = 1 << k;
    }
    out.put(k, 32);
    unsigned long prev = 0;
    for (unsigned long val : v) {
      // Emit a 1 for every full 2^k step, then a 0 and the k-bit remainder.
      while (val - prev >= twok) { out.put(1); prev += twok; }
      out.put(0);
      out.put(val - prev, k);
      prev = val;
    }
  }
  out.flush(1);
}
Decompression:
#include <istream>
#include <vector>

// BitIn is the matching bit-stream reader with get(nbits) and get().
std::vector<unsigned long> Decompress(std::istream& is) {
  BitIn in(is);
  std::vector<unsigned long> v;
  unsigned long size = in.get(64);
  if (size) {
    unsigned long k = in.get(32);
    unsigned long twok = 1 << k;
    v.reserve(size);
    unsigned long prev = 0;
    for (; size; --size) {
      while (in.get()) prev += twok;   // each 1-bit adds 2^k
      prev += in.get(k);               // k-bit remainder
      v.push_back(prev);
    }
  }
  return v;
}
It can be a bit awkward to use variable-length encodings; an alternative is to store the first bit of each code (1 or 0) in a bit vector, and the k-bit suffixes in a separate vector. This would be particularly convenient if k is 8.
A variant, which results in slightly longer files but is a bit easier to build indexes for, is to only use the 1-bits as deltas. Then the deltas are always a·2^k for some a, possibly 0, where a is the number of consecutive 1-bits preceding the suffix code. The index then consists of the locations of every Nth 1-bit in the bit vector, and the corresponding index into the suffix vector (i.e. the index of the suffix corresponding to the next 0 in the bit vector).
One option that worked well for me in the past was to store a list of 64-bit integers as 8 different lists of 8-bit values. You store the high 8 bits of the numbers, then the next 8 bits, etc. For example, say you have the following 32-bit numbers:
0x12345678
0x12349785
0x13111111
0x13444444
The data stored would be (in hex):
12,12,13,13
34,34,11,44
56,97,11,44
78,85,11,44
I then ran that through the deflate compressor.
I don't recall what compression ratios I was able to achieve with this, but it was significantly better than compressing the numbers themselves.
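A sketch of the byte-plane split for 64-bit values (my illustration; the planes would then be fed to deflate/zlib or any general-purpose compressor):

#include <array>
#include <cstdint>
#include <vector>

// Plane 0 holds the most significant byte of every value, plane 7 the least.
std::array<std::vector<uint8_t>, 8> to_planes(const std::vector<uint64_t>& values) {
    std::array<std::vector<uint8_t>, 8> planes;
    for (uint64_t v : values)
        for (int p = 0; p < 8; ++p)
            planes[p].push_back(uint8_t(v >> (8 * (7 - p))));
    return planes;
}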
I want to add another answer with the simplest possible solution:
Convert the numbers to deltas as discussed previously
Run it through the 7-zip LZMA2 algorithm. It is even multi-core ready
I think this will give almost perfect results in your case because the distances have a simple distribution. 7-zip will be able to pick it up.
You can use Delta Encoding and Protocol Buffers simply.
Like your example: 4, 20, 23, 40, 66.
Delta Encoding compressed: 4, 16, 3, 17, 26.
Then you store all numbers as varints in Protocol Buffers directly. You only need 1 byte for a number between 0-127, and 2 bytes for a number between 128-16383... This is enough for most cases.
Furthermore, you can use entropy coding (Huffman) to achieve a better compression rate than varint — even fewer than 8 bits per number.
Divide a number into 2 parts. For example 17 = ...0001 0001 (binary) = (5)0001. The first part (5) is the significant-bit count. The suffix part (0001) is the number without its leading 1.
Like the example: 4, 16, 3, 17, 26 = (3)00 (5)0000 (2)1 (5)0001 (5)1010
The first part will be between 0 and 45 even if there are a lot of numbers, so it can be compressed effectively by entropy coding such as Huffman.
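A small sketch (mine) of that split for one delta:

#include <cstdint>
#include <utility>

// 17 = ...0001 0001 -> (5, 0b0001). The first component (significant-bit count)
// goes to the entropy coder; the suffix is written out raw in (count - 1) bits.
std::pair<int, uint64_t> split_delta(uint64_t delta) {
    int bits = 0;
    for (uint64_t v = delta; v != 0; v >>= 1) ++bits;            // significant bits
    uint64_t suffix = bits > 0 ? delta & ~(1ULL << (bits - 1))   // drop the leading 1
                               : 0;
    return {bits, suffix};
}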
If your sequence is made up of pseudo-random numbers, such as might be generated by a typical digital computer, then I don't think that any compression scheme will beat, for brevity of representation, simply storing the code for the generator and whatever parameters you need to define its initial state.
If your sequence is made up of truly random numbers generated in some non-deterministic way then the other answers already posted offer a variety of good advice.

hashing a small number to a random looking 64 bit integer

I am looking for a hash-function which operates on a small integer (say in the range 0...1000) and outputs a 64 bit int.
The result-set should look like a random distribution of 64 bit ints: a uniform distribution with no linear correlation between the results.
I was hoping for a function that only takes a few CPU-cycles to execute. (the code will be in C++).
I considered multiplying the input by a big prime number and taking the modulo 2**64 (something like a linear congruent generator), but there are obvious dependencies between the outputs (in the lower bits).
Googling did not show up anything, but I am probably using wrong search terms.
Does such a function exist?
Some Background-info:
I want to avoid using a big persistent table with pseudo random numbers in an algorithm, and calculate random-looking numbers on the fly.
Security is not an issue.
I tested the 64-bit finalizer of MurmurHash3 (suggested by #aix and this SO post). This gives zero if the input is zero, so I increased the input parameter by 1 first:
typedef unsigned long long uint64;
inline uint64 fasthash(uint64 i)
{
i += 1ULL;
i ^= i >> 33ULL;
i *= 0xff51afd7ed558ccdULL;
i ^= i >> 33ULL;
i *= 0xc4ceb9fe1a85ec53ULL;
i ^= i >> 33ULL;
return i;
}
Here the input argument i is a small integer, for example an element of {0, 1, ..., 1000}. The output looks random:
i fasthash(i) decimal: fasthash(i) hex:
0 12994781566227106604 0xB456BCFC34C2CB2C
1 4233148493373801447 0x3ABF2A20650683E7
2 815575690806614222 0x0B5181C509F8D8CE
3 5156626420896634997 0x47900468A8F01875
... ... ...
There is no linear correlation between subsequent elements of the series. [Scatter plot omitted; the range of both axes is 0..2^64-1.]
Why not use an existing hash function, such as MurmurHash3 with a 64-bit finalizer? According to the author, the function takes tens of CPU cycles per key on current Intel hardware.
Given: input i in the range of 0 to 1,000.
const MaxInt, which is the maximum value that can be contained in a 64 bit int (you did not say whether it is signed or unsigned; 2^64 = 18446744073709551616)
and a function rand() that returns a value between 0 and 1 (most languages have such a function)
compute hashvalue = i * rand() * ( MaxInt / 1000 )
1,000 * 1,000 = 1,000,000. That fits well within an Int32.
Subtract the low bound of your range from the number, square it, and use the result as a direct subscript into some sort of bitmap.
