Radix Sort for sorting 32-bit integers - radix-sort

I am trying to understand radix sort. I just posted the question here and learned how a 32-bit integer is broken into four 8-bit chunks. How does it take 4 passes? For example:
2147507648 breaks down into 128 0 93 192, and 2147507672 yields 128 0 93 216. Wouldn't LSD radix sort compare 216 with 192, 93 with 93, 0 with 0, and 128 with 128? Comparing 216 with 128 itself would take 3 passes, right?
Thanks!
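A minimal sketch of LSD radix sort on 8-bit digits (my own illustration, not from the thread) may make the 4 passes concrete: the digits are never compared against each other; each pass is just a stable redistribution on one byte, and after the fourth pass (most significant byte) the full values are in order.

def radix_sort_u32(values):
    """LSD radix sort of unsigned 32-bit integers, one byte (8 bits) per pass."""
    for shift in (0, 8, 16, 24):                 # 4 passes, least significant byte first
        buckets = [[] for _ in range(256)]       # one bucket per possible byte value
        for v in values:
            buckets[(v >> shift) & 0xFF].append(v)   # stable: earlier order kept within a bucket
        values = [v for bucket in buckets for v in bucket]
    return values

print(radix_sort_u32([2147507672, 2147507648, 42]))
# [42, 2147507648, 2147507672]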

Related

Representing decimal numbers in Big and Little Endian?

So if I have an unsigned int which consists of 4 bytes stored at addresses 10000 to 10011, and the representation of the storage is Big Endian, what decimal value is stored in the variable?
ADDRESS INSTRUCTION
10000: 0b01000010
10001: 137
10010: 0x13
10011: 0b11000011
So the decimal numbers are: 66, 137, 19, 195.
I thought that the Big Endian representation would just be 6 613 719 195. But apparently that is wrong. If it were Little Endian it should be 1 951 913 766, but that is wrong too. So what am I missing here? Yes, this is a quiz question that I got wrong and I just don't completely get it. The question is literally:
"In a high-level language a variable is declared as an unsigned int and consists of 4 bytes which are stored in the address 10000-10011. If the representation of the storage is in Big.Endian, which decimal value is stored in the variable?
ADRESS INSTRUCTION
10000: 0b01000010
10001: 137
10010: 0x13
10011: 0b11000011
"
Your calculation is off; it should go like this:
1116279747 = 66 * 2^24 + 137 * 2^16 + 19 * 2^8 + 195
Big Endian stores the most significant byte (MSB) at the lowest address, so reading the memory in address order 10000, 10001, 10010, 10011 gives the bytes 66, 137, 19, 195 from most significant to least significant.
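As a quick sanity check (my own illustration, not part of the original answer), the same value can be computed in Python:

data = bytes([0b01000010, 137, 0x13, 0b11000011])     # 66, 137, 19, 195 in address order
print(int.from_bytes(data, byteorder='big'))          # 1116279747
print(66 * 2**24 + 137 * 2**16 + 19 * 2**8 + 195)     # same value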

How do I create strictly ordered uniformly distributed buckets out of an array?

I'm looking to take an array of integers and perform a partial bucket sort on that array: every element in an earlier bucket is less than every element in the buckets after it. For example, if I have 10 buckets for the values 0-100, then 0-9 would go in the first bucket, 10-19 in the second, and so on.
For one example, I can take 1 12 23 44 48 and put them into 4 buckets out of 10. But if I have 1, 2, 7, 4, 9, 1 then all values go into a single bucket. I'm looking for a way to evenly distribute values to all the buckets while maintaining an ordering between buckets. Elements in each bucket don't have to be sorted. For example, I'm looking for something similar to this:
2 1 9 2 3 8 7 4 2 8 11 4 => [[2, 1], [2, 2], [3], [4], [4], [7], [8, 8], [9], [11]]
I'm trying to use this as a quick way to partition a list in a map-reduce.
Thanks for the help.
Edit, maybe this clears things up:
I want to create a hashing function where all elements in bucket1 < bucket2 < bucket3 ..., where each bucket is unsorted.
If I understand it correctly you have around 100TB of data, or 13,743,895,347,200 unsigned 64-bit integers, that you want to distribute over a number of buckets.
A first step could be to iterate over the input, looking at e.g. the highest 24 bits of each integer, and counting them. That will give you a list of 16,777,216 ranges, each with a count of on average 819,200 so it may be possible to store them in 32-bit unsigned integers, which will take up 64 MB.
You can then use this to create a lookup table that tells you which bucket each of those 16,777,216 ranges goes into. You calculate how many integers are supposed to go into each bucket (input size divided by number of buckets) and go over the array, keeping a running total of the count, and set each range to bucket 1, until the running total is too much for bucket 1, then you set the ranges to bucket 2, and so on...
There will of course always be a range that has to be split between bucket n and bucket n+1. To keep track of this, you create a second table that stores how many integers in these split ranges are supposed to go into bucket n+1.
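A rough sketch of those two steps (my own reading of the description above, with illustrative names; it assumes the input fits in an iterable of Python ints and that a range never has to be split across more than two buckets):

def build_bucket_tables(values, num_buckets, range_bits=24, word_bits=64):
    """Pass 1: count how many integers fall in each high-bit range, then assign
    whole ranges to buckets with a running total."""
    shift = word_bits - range_bits
    counts = [0] * (1 << range_bits)
    total = 0
    for v in values:                        # histogram of the high bits
        counts[v >> shift] += 1
        total += 1

    per_bucket = total / num_buckets        # target number of integers per bucket
    bucket_of = [0] * len(counts)           # bucket each range starts in (0-based)
    spill_to_next = [0] * len(counts)       # how many of a split range overflow
    running, bucket = 0, 0
    for r, c in enumerate(counts):
        bucket_of[r] = bucket
        running += c
        if bucket + 1 < num_buckets and running >= per_bucket * (bucket + 1):
            spill_to_next[r] = round(running - per_bucket * (bucket + 1))
            bucket += 1
    return bucket_of, spill_to_next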
So you now have e.g.:
HIGH 24 BITS   RANGE                   BUCKET   TO NEXT BUCKET
0              0 ~ 2^40-1              1        0
1              2^40 ~ 2*2^40-1         1        0
2              2*2^40 ~ 3*2^40-1       1        0
3              3*2^40 ~ 4*2^40-1       1        0
...
16             16*2^40 ~ 17*2^40-1     1        0
17             17*2^40 ~ 18*2^40-1     1        284,724   <- the highest 284,724 go into bucket 2
18             18*2^40 ~ 19*2^40-1     2        0
...
You can now iterate over the input again, and for each integer look at the highest 24 bits, and use the lookup table to see which bucket the integer is supposed to go into. If the range isn't split, you can immediately move the integer into the right bucket. For each split range, you create an ordered list or priority queue that can hold as many integers as need to go into the next bucket; you store only the highest values in this list or queue; any smaller integer goes straight to the bucket, and if an integer is added to the full list or queue, the smallest value is moved to the bucket. At the end this list or queue is added to the next bucket.
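One way to sketch that second pass (my own code, building on the tables from the previous sketch): a small min-heap per split range retains only the values that must spill into the next bucket, so everything else streams straight into its bucket.

import heapq

def distribute(values, bucket_of, spill_to_next, num_buckets, shift):
    """Pass 2: move each integer to its bucket; for a split range, keep only the
    candidates for the next bucket in a min-heap of size spill_to_next[r]."""
    buckets = [[] for _ in range(num_buckets)]
    heaps = {}                                   # one heap per split range
    for v in values:
        r = v >> shift
        b, spill = bucket_of[r], spill_to_next[r]
        if spill == 0:
            buckets[b].append(v)                 # whole range maps to one bucket
            continue
        heap = heaps.setdefault(r, [])
        if len(heap) < spill:
            heapq.heappush(heap, v)              # collect candidates for the next bucket
        elif v > heap[0]:
            buckets[b].append(heapq.heappushpop(heap, v))  # evict the smallest candidate
        else:
            buckets[b].append(v)
    for r, heap in heaps.items():                # the retained largest values spill over
        buckets[bucket_of[r] + 1].extend(heap)
    return buckets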
The number of ranges should be as high as possible with the available memory, because that minimises the number of integers in split ranges. With the huge input you have, you may need to save the split ranges to disk, and then afterwards look at each of them separately, find the highest x values, and move them to the buckets accordingly.
The complexity of this is N for the first run, then R as you iterate over the ranges, then N again as you iterate over the input, and then for the split ranges you'll have something like M·log M to sort and M to distribute, so a total of 2N + R + M·log M + M. Using a high number of ranges to keep the number of integers in split ranges low will probably be the best strategy to speed the process up.
Actually, the number of integers M that are in split ranges depends on the number of buckets B and ranges R, with M = N×B/R, so that e.g. with a thousand buckets and a million ranges, 0.1% of the input would be in split ranges and have to be sorted. (These are averages, depending on the actual distribution.) That makes the total complexity 2N + R + (N×B/R)·log(N×B/R) + N×B/R.
Another example:
Input: N = 13,743,895,347,200 unsigned 64-bit integers
Ranges: 2^32 (using the highest 32 bits of each integer)
Integers per range: 3200 (average)
Count list: 2^32 16-bit integers = 8 GB
Lookup table: 2^32 16-bit integers = 8 GB
Split range table: B 16-bit integers = 2×B bytes
With 1024 buckets, that would mean that B/R = 1/2^22, and there are 1023 split ranges with around 3200 integers each, or around 3,276,800 integers in total; these will then have to be sorted and distributed over the buckets.
With 1,048,576 buckets, that would mean that B/R = 1/2^12, and there are 1,048,575 split ranges with around 3200 integers each, or around 3,355,443,200 integers in total. (More than 65,536 buckets would of course require a lookup table with 32-bit integers.)
(If you find that the total of the counts per range doesn't equal the total size of the input, there has been overflow in the count list, and you should switch to a larger integer type for the counts.)
Let's run through a tiny example: 50 integers in the range 1-100 have to be distributed over 5 buckets. We choose a number of ranges, say 20, and iterate over the input to count the number of integers in each range:
2 9 14 17 21 30 33 36 44 50 51 57 69 75 80 81 87 94 99
1 9 15 16 21 32 40 42 48 55 57 66 74 76 88 96
5 6 20 24 34 50 52 58 70 78 99
7 51 69
55
3 4 2 3 3 1 3 2 2 3 5 3 0 4 2 3 1 2 1 3
Then, knowing that each bucket should hold 10 integers, we iterate over the list of counts per range, and assign each range to a bucket:
3 4 2 3 3 1 3 2 2 3 5 3 0 4 2 3 1 2 1 3 <- count/range
1 1 1 1 2 2 2 2 3 3 3 4 4 4 4 5 5 5 5 5 <- to bucket
2 1 1 <- to next
When a range has to be split between two buckets, we store the number of integers that should go to the next bucket in a separate table.
We can then iterate over the input again, and move all the integers in non-split ranges into the buckets; the integers in split ranges are temporarily moved into separate buckets:
bucket 1: 9 14 2 9 1 15 6 5 7
temp 1/2: 17 16 20
bucket 2: 21 33 30 32 21 24 34
temp 2/3: 36 40
bucket 3: 44 50 48 42 50
temp 3/4: 51 55 52 51 55
bucket 4: 57 75 69 66 74 57 57 70 69
bucket 5: 81 94 87 80 99 88 96 76 78 99
Then we look at the temp buckets one by one, find the x highest integers as indicated in the second table, move them to the next bucket, and what is left over to the previous bucket:
temp 1/2: 17 16 20 (to next: 2) bucket 1: 16 bucket 2: 17 20
temp 2/3: 36 40 (to next: 1) bucket 2: 36 bucket 3: 40
temp 3/4: 51 55 52 51 55 (to next: 1) bucket 3: 51 51 52 55 bucket 4: 55
And the end result is:
bucket 1: 9 14 2 9 1 15 6 5 7 16
bucket 2: 21 33 30 32 21 24 34 17 20 36
bucket 3: 44 50 48 42 50 40 51 51 52 55
bucket 4: 57 75 69 66 74 57 57 70 69 55
bucket 5: 81 94 87 80 99 88 96 76 78 99
So, out of 50 integers, we've had to sort a group of 3, 2 and 5 integers.
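If it helps, here is a small self-contained script (mine, not the answer's code) that reproduces the tiny example, treating (value - 1) // 5 as the "range" of a value:

data = [2, 9, 14, 17, 21, 30, 33, 36, 44, 50, 51, 57, 69, 75, 80, 81, 87, 94, 99,
        1, 9, 15, 16, 21, 32, 40, 42, 48, 55, 57, 66, 74, 76, 88, 96,
        5, 6, 20, 24, 34, 50, 52, 58, 70, 78, 99,
        7, 51, 69,
        55]
NUM_RANGES, NUM_BUCKETS = 20, 5
range_of = lambda v: (v - 1) // 5                    # ranges 1-5, 6-10, ..., 96-100

counts = [0] * NUM_RANGES                            # pass 1: count per range
for v in data:
    counts[range_of(v)] += 1

per_bucket = len(data) / NUM_BUCKETS                 # 10 integers per bucket
bucket_of, spill = [0] * NUM_RANGES, [0] * NUM_RANGES
running, b = 0, 0
for r, c in enumerate(counts):                       # assign ranges to buckets
    bucket_of[r] = b
    running += c
    if b + 1 < NUM_BUCKETS and running >= per_bucket * (b + 1):
        spill[r] = round(running - per_bucket * (b + 1))
        b += 1

buckets = [[] for _ in range(NUM_BUCKETS)]           # pass 2: distribute
split = {r: [] for r in range(NUM_RANGES) if spill[r]}
for v in data:
    r = range_of(v)
    (split[r] if r in split else buckets[bucket_of[r]]).append(v)
for r, vals in split.items():                        # only the split ranges get sorted
    vals.sort()
    cut = len(vals) - spill[r]
    buckets[bucket_of[r]].extend(vals[:cut])         # lowest values stay put
    buckets[bucket_of[r] + 1].extend(vals[cut:])     # highest go to the next bucket

for i, bucket in enumerate(buckets, 1):
    print("bucket", i, ":", bucket)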
Actually, you don't need to create a table with the number of integers in the split ranges that should go to the next bucket. You know how many integers are supposed to go into each bucket, so after the initial distribution you can look at how many integers are already in each bucket, and then add the necessary number of (lowest value) integers from the split range. In the example above, which expects 10 integers per bucket, that would be:
3 4 2 3 3 1 3 2 2 3 5 3 0 4 2 3 1 2 1 3 <- count/range
1 1 1 / 2 2 2 / 3 3 / 4 4 4 4 5 5 5 5 5 <- to bucket
bucket 1: 9 14 2 9 1 15 6 5 7 <- add 1
temp 1/2: 17 16 20 <- 3-1 = 2 go to next bucket
bucket 2: 21 33 30 32 21 24 34 <- add 3-2 = 1
temp 2/3: 36 40 <- 2-1 = 1 goes to next bucket
bucket 3: 44 50 48 42 50 <- add 5-1 = 4
temp 3/4: 51 55 52 51 55 <- 5-4 = 1 goes to next bucket
bucket 4: 57 75 69 66 74 57 57 70 69 <- add 1-1 = 0
bucket 5: 81 94 87 80 99 88 96 76 78 99 <- add 0
The calculation of how much of the input will be in split ranges and need to be sorted, given above as M = N × B/R, is an average for input that is roughly evenly distributed. A slight bias, with more values in a certain part of the input space will not have much effect, but it would indeed be possible to craft worst-case input to thwart the algorithm.
Let's look again at this example:
Input: N = 13,743,895,347,200 unsigned 64-bit integers
Ranges: 2^32 (using the highest 32 bits of each integer)
Integers per range: 3200 (average)
Buckets: 1,048,576
Integers per bucket: 13,107,200
For a start, if there are ranges that contain more than 2^32 integers, you'd have to use 64-bit integers for the count table, so it would be 32GB in size, which could force you to use fewer ranges, depending on the available memory.
Also, every range that holds more integers than the target size per bucket is automatically a split range. So if the integers are distributed with a lot of local clusters, you may find that most of the input is in split ranges that need to be sorted.
If you have enough memory to run the first step using 2^32 ranges, then each range has 2^32 different values, and you could distribute the split ranges over the buckets using a counting sort (which has linear complexity).
If you don't have the memory to use 2^32 ranges, and you end up with problematically large split ranges, you could use the complete algorithm again on the split ranges. Let's say you used 2^28 ranges, expecting each range to hold around 51,200 integers, and you end up with an unexpectedly large split range with 5,120,000,000 integers that need to be distributed over 391 buckets. If you ran the algorithm again for this limited range, you'd have 2^28 ranges (each holding on average 19 integers with a maximum of 16 different values) for just 391 buckets, and only a tiny risk of ending up with large split ranges again.
Note: the ranges that have to be split over two or more buckets don't necessarily have to be sorted. You can e.g. use a recursive version of Dijkstra's Dutch national flag algorithm to partition the range into a part with the x smallest values, and a part with the largest values. The average complexity of partitioning would be linear (when using a random pivot), against the O(N log N) complexity of sorting.
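A sketch of that idea (my own code, with illustrative names): a recursive 3-way partition moves the x smallest values of a split range to the front without fully sorting it.

import random

def smallest_x_first(a, x, lo=0, hi=None):
    """Rearrange a[lo:hi] in place so that its first x positions hold the x
    smallest values (in no particular order), using a recursive 3-way partition."""
    if hi is None:
        hi = len(a)
    if x <= 0 or x >= hi - lo:
        return a
    pivot = a[random.randrange(lo, hi)]
    lt, i, gt = lo, lo, hi            # zones: [< pivot | == pivot | unseen | > pivot]
    while i < gt:
        if a[i] < pivot:
            a[lt], a[i] = a[i], a[lt]; lt += 1; i += 1
        elif a[i] > pivot:
            gt -= 1; a[i], a[gt] = a[gt], a[i]
        else:
            i += 1
    if x <= lt - lo:                  # boundary lies inside the '< pivot' zone
        smallest_x_first(a, x, lo, lt)
    elif x > gt - lo:                 # boundary lies inside the '> pivot' zone
        smallest_x_first(a, x - (gt - lo), gt, hi)
    return a                          # otherwise it falls in the '== pivot' zone: done

vals = [51, 55, 52, 51, 55]
smallest_x_first(vals, 4)             # the 4 smallest stay with the lower bucket
print(vals[:4], vals[4:])             # e.g. [51, 51, 52, 55] [55]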

Issues with LZW algorithm variable-length decoding procedure

The setup
Say I've got:
A series of numbers resulting from LZW compression of a bitmap:
256 1 258 258 0 261 261 259 260 262 0 264 1 266 267 258 2 273 2 262 259 274 275 270 278 259 262 281 265 276 264 270 268 288 264 257
An LZW-compressed, variable-length-encoded bytestream (including the LZW code size header and sub-block markers) which represents this same series of numbers:
00001000 00101001 00000000 00000011 00001000 00010100 00001000 10100000
01100000 11000001 10000001 00000100 00001101 00000010 01000000 00011000
01000000 11100001 01000010 10000001 00000010 00100010 00001010 00110000
00111000 01010000 11100010 01000100 10000111 00010110 00000111 00011010
11001100 10011000 10010000 00100010 01000010 10000111 00001100 01000001
00100010 00001100 00001000 00000000
And an initial code width of 8.
The problem
I'm trying to derive the initial series of numbers (the integer array) from the bytestream.
From what I've read, the procedure here is to take the initial code width, scan right-to-left, reading initial code width + 1 bits at a time, to extract the integers from the bytestream. For example:
iteration #1: 1001011011100/001/ yield return 4
iteration #2: 1001011011/100/001 yield return 1
iteration #3: 1001011/011/100001 yield return 6
iteration #4: 1001/011/011100001 yield return 6
This procedure will not work for iteration #5, which will yield 1:
iteration #5: 1/001/011011100001 yield return 1 (expected 9)
The code width should have been increased by one.
The question
How am I supposed to know when to increase the code width when reading the variable-length-encoded bytestream? Do I have all of the required information necessary to decompress this bytestream? Am I conceptually missing something?
UPDATE:
After a long discussion with greybeard - I found out that I was reading the binary string incorrectly: 00000000 00000011 00 is to be interpreted as 256, 1. The bytestream is not read as big-endian.
And very roughly speaking, if you are decoding a bytestream, you increase the number of bits read every time you read 2^N-1 codes, where N is the current code width.
Decompressing, you are supposed to build a dictionary in much the same way as the compressor. You know you need to increase the code width as soon as the compressor might use a code too wide for the current width.
As long as the dictionary is not full (the maximum code is not assigned), a new code is assigned for every (regular) code put out (not the Clear Code or End Of Information codes).
With the example in the presentation you linked, 8 is assigned when the second 6 is "transmitted" - you need to switch to four bits before reading the next code.
(This is where the example and your series of numbers differ - the link presents 4, 1, 6, 6, 2, 9.)
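A rough sketch of that bookkeeping (my own; it assumes LSB-first bit packing as discovered above, GIF's 12-bit maximum code width, and that the code-size header byte and sub-block length bytes have already been stripped from the stream):

def lzw_codes(data, min_code_size=8):
    """Yield LZW codes from LSB-first packed bytes, growing the code width in
    step with the decoder's dictionary (GIF-style, 12-bit maximum)."""
    clear = 1 << min_code_size
    end_of_info = clear + 1
    width = min_code_size + 1
    next_slot = clear + 2              # first free dictionary slot
    first_after_clear = True           # the first code after a clear adds no entry
    buf, nbits = 0, 0
    for byte in data:
        buf |= byte << nbits           # new bits arrive 'above' the buffered ones
        nbits += 8
        while nbits >= width:
            code = buf & ((1 << width) - 1)
            buf >>= width
            nbits -= width
            yield code
            if code == clear:
                width, next_slot, first_after_clear = min_code_size + 1, clear + 2, True
            elif code == end_of_info:
                return
            elif first_after_clear:
                first_after_clear = False
            else:
                next_slot += 1         # decoder mirrors the encoder's table growth
                if next_slot == (1 << width) and width < 12:
                    width += 1         # later codes are one bit wider

With the leading code-size byte and the sub-block length bytes removed from the bytestream above, the first codes come out as 256, 1, 258, 258, 0, ..., matching the expected series.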

what is low nibble and why the result is different by one number

I tested the following code with Ruby 1.9.2:
> "hello".unpack('H*')
=> ["68656c6c6f"]
> "hello".unpack('h*')
=> ["8656c6c6f6"]
Why is the result of h* off by one? Also, I thought a nibble is 4 bits, yet 68, 65, 6c, 6c and 6f each take one byte.
The difference between h* and H* is the order in which they write the two halves (nibbles) of each byte: h writes the low nibble first and H writes the high nibble first.
And yes, a nibble is half a byte - that is, 4 bits.
You can find detailed usage of pack/unpack in this post.
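A quick illustration outside Ruby (my own snippet): reversing the two hex digits of each byte reproduces the h* result.

data = b"hello"
high_first = "".join(format(b, "02x") for b in data)        # like unpack('H*'): high nibble first
low_first  = "".join(format(b, "02x")[::-1] for b in data)  # like unpack('h*'): low nibble first
print(high_first)   # 68656c6c6f
print(low_first)    # 8656c6c6f6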

Fastest gap sequence for shell sort?

According to Marcin Ciura's Optimal (best known) sequence of increments for shell sort algorithm,
the best sequence for shellsort is 1, 4, 10, 23, 57, 132, 301, 701...,
but how can I generate such a sequence?
In Marcin Ciura's paper, he said:
Both Knuth’s and Hibbard’s sequences
are relatively bad, because they are
defined by simple linear recurrences.
but most algorithm books I found tend to use Knuth's sequence, h = 3h + 1, because it's easy to generate. What's your way of generating a shellsort sequence?
Ciura's paper generates the sequence empirically -- that is, he tried a bunch of combinations and this was the one that worked the best. Generating an optimal shellsort sequence has proven to be tricky, and the problem has so far been resistant to analysis.
The best known increment sequence is Sedgewick's, which you can read about here (see p. 7).
If your data set has a definite upper bound in size, then you can hardcode the step sequence. You should probably only worry about generality if your data set is likely to grow without an upper bound.
The sequence shown seems to grow roughly as an exponential series, albeit with quirks. There seems to be a majority of prime numbers, but with non-primes in the mix as well. I don't see an obvious generation formula.
A valid question, assuming you must deal with arbitrarily large sets, is whether you need to emphasise worst-case performance, average-case performance, or almost-sorted performance. If the latter, you may find that a plain insertion sort using a binary search for the insertion step might be better than a shellsort. If you need good worst-case performance, then Sedgewick's sequence appears to be favoured. The sequence you mention is optimised for average-case performance, where the number of comparisons outweighs the number of moves.
I would not be ashamed to take the advice given in Wikipedia's Shellsort article,
With respect to the average number of comparisons, the best known gap
sequences are 1, 4, 10, 23, 57, 132, 301, 701 and similar, with gaps
found experimentally. Optimal gaps beyond 701 remain unknown, but good
results can be obtained by extending the above sequence according to
the recursive formula h_k = \lfloor 2.25 h_{k-1} \rfloor.
Tokuda's sequence [1, 4, 9, 20, 46, 103, ...], defined by the simple formula h_k = \lceil h'_k \rceil, where h'_k = 2.25 h'_{k-1} + 1, h'_1 = 1, can be recommended for practical applications.
Guessing from the pseudonym, it seems Marcin Ciura edited the WP article himself.
The sequence is 1, 4, 10, 23, 57, 132, 301, 701, 1750. For every next number after 1750, multiply the previous number by 2.25 and round down.
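A small sketch of that extension rule (the function name is mine):

import math

def extended_ciura_gaps(limit):
    """Ciura's gaps up to 1750, then each next gap is floor(2.25 * previous),
    keeping only gaps below `limit`."""
    gaps = [1, 4, 10, 23, 57, 132, 301, 701, 1750]
    while gaps[-1] * 2.25 < limit:
        gaps.append(math.floor(gaps[-1] * 2.25))
    return [g for g in gaps if g < limit]

print(extended_ciura_gaps(50000))
# [1, 4, 10, 23, 57, 132, 301, 701, 1750, 3937, 8858, 19930, 44842]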
Sedgewick observes that coprimality is good. This rings true: if there are separate ‘streams’ not much cross-compared until the gap is small, and one stream contains mostly smalls and one mostly larges, then the small gap might need to move elements far. Coprimality maximises cross-stream comparison.
Gonnet and Baeza-Yates advise growth by a factor of about 2.2; Tokuda by 2.25. It is well known that if there is a mathematical constant between 2⅕ and 2¼ then it must† be precisely √5 ≈ 2.236.
So start {1, 3}, and then each subsequent is the integer closest to previous·√5 that is coprime to all previous except 1. This sequence can be pre-calculated and embedded in code. There follow the values up to 2⁶⁴ ≈ eighteen quintillion.
{1, 3, 7, 16, 37, 83, 187, 419, 937, 2099, 4693, 10499, 23479, 52501, 117391, 262495, 586961, 1312481, 2934793, 6562397, 14673961, 32811973, 73369801, 164059859, 366848983, 820299269, 1834244921, 4101496331, 9171224603, 20507481647, 45856123009, 102537408229, 229280615033, 512687041133, 1146403075157, 2563435205663, 5732015375783, 12817176028331, 28660076878933, 64085880141667, 143300384394667, 320429400708323, 716501921973329, 1602147003541613, 3582509609866643, 8010735017708063, 17912548049333207, 40053675088540303, 89562740246666023, 200268375442701509, 447813701233330109, 1001341877213507537, 2239068506166650537, 5006709386067537661, 11195342530833252689}
(Obviously, omit those that would overflow the relevant array index type. So if that is a signed long long, omit the last.)
On average these have ≈1.96 distinct prime factors and ≈2.07 non-distinct prime factors; 19/55 ≈ 35% are prime; and all but three are square-free (2⁴, 13·19² = 4693, 3291992692409·23³ ≈ 4.0·10¹⁶).
I would welcome formal reasoning about this sequence.
† There’s a little mischief in this “well known … must”. Choosing ∉ℚ guarantees that the closest number that is coprime cannot be a tie, but rational with odd denominator would achieve same. And I like the simplicity of √5, though other possibilities include e^⅘, 11^⅓, π/√2, and √π divided by the Chow-Robbins constant. Simplicity favours √5.
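A sketch of how that sequence can be generated (this is my reading of the rule above, not the author's code; the floating-point √5 is precise enough at these magnitudes):

from math import gcd, sqrt

def sqrt5_coprime_gaps(limit):
    """Start with {1, 3}; each next gap is the integer closest to the previous
    gap times sqrt(5) that is coprime to every previous gap except 1."""
    gaps = [1, 3]
    while True:
        target = gaps[-1] * sqrt(5)
        # try candidates in order of distance from the target
        for c in sorted(range(int(target) - 50, int(target) + 52),
                        key=lambda n: abs(n - target)):
            if c > 1 and all(gcd(c, g) == 1 for g in gaps[1:]):
                break
        if c >= limit:
            return gaps
        gaps.append(c)

print(sqrt5_coprime_gaps(10**6))
# [1, 3, 7, 16, 37, 83, 187, 419, 937, 2099, 4693, 10499, 23479, 52501,
#  117391, 262495, 586961]  - matching the values listed above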
I've found this sequence, similar to Marcin Ciura's sequence:
1, 4, 9, 23, 57, 138, 326, 749, 1695, 3785, 8359, 18298, 39744, etc.
For comparison, Ciura's sequence is:
1, 4, 10, 23, 57, 132, 301, 701, 1750
It is the mean of the first 2^i prime numbers. Python code to compute these means is here:
import numpy as np

def isprime(n):
    ''' Check if integer n is a prime '''
    n = abs(int(n))   # n is a positive integer
    if n < 2:         # 0 and 1 are not primes
        return False
    if n == 2:        # 2 is the only even prime number
        return True
    if not n & 1:     # all other even numbers are not primes
        return False
    # Range starts with 3 and only needs to go up to the square root
    # of n, checking only odd numbers
    for x in range(3, int(n**0.5) + 1, 2):
        if n % x == 0:
            return False
    return True

# To apply a function to a numpy array, one has to vectorize the function
vectorized_isprime = np.vectorize(isprime)
a = np.arange(10000000)
primes = a[vectorized_isprime(a)]
#print(primes)
for i in range(2, 20):
    print(primes[0:2**i].mean())
The output is:
4.25
9.625
23.8125
57.84375
138.953125
326.1015625
749.04296875
1695.60742188
3785.09082031
8359.52587891
18298.4733887
39744.887085
85764.6216431
184011.130096
392925.738174
835387.635033
1769455.40302
3735498.24225
The ratio between consecutive values in the sequence slowly decreases from about 2.5 towards 2.
Maybe this connection could improve Shellsort in the future.
I discussed this question here yesterday, including the gap sequences I have found to work best given a specific (low) n.
In the middle I write
A nasty side-effect of shellsort is that when using a set of random
combinations of n entries (to save processing/evaluation time) to test
gaps you may end up with either the best gaps for n entries or the
best gaps for your set of combinations - most likely the latter.
The problem lies in testing the proposed gaps such that valid conclusions can be drawn. Obviously, testing the gaps against all n! orderings that a set of n unique values can be expressed as is unfeasible. Testing in this manner for n=16, for example, means that 20,922,789,888,000 different combinations of n values must be sorted to determine the exact average, worst and reverse-sorted cases - just to test one set of gaps and that set might not be the best. 2^(16-2) sets of gaps are possible for n=16, the first being {1} and the last {15,14,13,12,11,10,9,8,7,6,5,4,3,2,1}.
To illustrate how using random combinations might give incorrect results, assume n=3, which can take six different orderings: 012, 021, 102, 120, 201 and 210. You produce a set of two random sequences to test the two possible gap sets, {1} and {2,1}. Assume that these sequences turn out to be 021 and 201. For {1}, 021 can be sorted with three comparisons (02, 21 and 01) and 201 with three (20, 21, 01), giving a total of six comparisons; divide by two and voilà, an average of 3 and a worst case of 3. Using {2,1} gives (01, 02, 21 and 01) for 021 and (21, 10 and 12) for 201: seven comparisons with a worst case of 4 and an average of 3.5. The actual average and worst case for {1} are 8/3 and 3, respectively. For {2,1} the values are 10/3 and 4. The averages were too high in both cases and the worst cases were correct. Had 012 been one of the cases, {1} would have given a 2.5 average - too low.
Now extend this to finding a set of random sequences for n=16 such that no set of gaps tested will be favored in comparison with the others and the result close (or equal) to the true values, all the while keeping processing to a minimum. Can it be done? Possibly. After all, everything is possible - but is it probable? I think that for this problem random is the wrong approach. Selecting the sequences according to some system may be less bad and might even be good.
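For what it's worth, the n=3 numbers above can be reproduced by brute force, counting one comparison per probe of the gapped insertion sort (my own script, not from the linked discussion):

from itertools import permutations

def shellsort_comparisons(a, gaps):
    """Count comparisons made by shellsort with the given gap sequence."""
    a = list(a)
    comps = 0
    for gap in gaps:
        for i in range(gap, len(a)):
            temp, j = a[i], i
            while j >= gap:
                comps += 1                  # one comparison per probe
                if a[j - gap] > temp:
                    a[j] = a[j - gap]
                    j -= gap
                else:
                    break
            a[j] = temp
    return comps

for gaps in ([1], [2, 1]):
    counts = [shellsort_comparisons(p, gaps) for p in permutations(range(3))]
    print(gaps, sum(counts) / len(counts), max(counts))
# [1] 2.6666666666666665 3      (average 8/3, worst 3)
# [2, 1] 3.3333333333333335 4   (average 10/3, worst 4)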
More information regarding jdaw1's post:
Gonnet and Baeza-Yates advise growth by a factor of about 2.2; Tokuda by 2.25. It is well known that if there is a mathematical constant between 2⅕ and 2¼ then it must† be precisely √5 ≈ 2.236.
It is known that √5 * √5 is 5, so I think every other index should increase by a factor of five. So, with the first value being 1 (plain insertion sort) and the second being 3, every second subsequent value is five times the value two places before it. There follow the values up to 2⁶⁴ ≈ eighteen quintillion.
{1, 3,, 15,, 75,, 375,, 1 875,, 9 375,, 46 875,, 234 375,, 1 171 875,, 5 859 375,, 29 296 875,, 146 484 375,, 732 421 875,, 3 662 109 375,, 18 310 546 875,, 91 552 734 375,, 457 763 671 875,, 2 288 818 359 375,, 11 444 091 796 875,, 57 220 458 984 375,, 286 102 294 921 875,, 1 430 511 474 609 375,, 7 152 557 373 046 875,, 35 762 786 865 234 375,, 178 813 934 326 171 875,, 894 069 671 630 859 375,, 4 470 348 358 154 296 875,}
The in-between values can then be calculated by taking the value before, multiplying by √5 and rounding to a whole number, giving the resulting array (using 2.2360679775 * 5^n * 3):
{1, 3, 7, 15, 34, 75, 168, 375, 839, 1 875, 4 193, 9 375, 20 963, 46 875, 104 816, 234 375, 524 078, 1 171 875, 2 620 392, 5 859 375, 13 101 961, 29 296 875, 65 509 804, 146 484 375, 327 549 020, 732 421 875, 1 637 745 101, 3 662 109 375, 8 188 725 504, 18 310 546 875, 40 943 627 518, 91 552 734 375, 204 718 137 589, 457 763 671 875, 1 023 590 687 943, 2 288 818 359 375, 5 117 953 439 713, 11 444 091 796 875, 25 589 767 198 563, 57 220 458 984 375, 127 948 835 992 813, 286 102 294 921 875, 639 744 179 964 066, 1 430 511 474 609 375, 3 198 720 899 820 328, 7 152 557 373 046 875, 15 993 604 499 101 639, 35 762 786 865 234 375, 79 968 022 495 508 194, 178 813 934 326 171 875, 399 840 112 477 540 970, 894 069 671 630 859 375, 1 999 200 562 387 704 849, 4 470 348 358 154 296 875, 9 996 002 811 938 524 246}
(Obviously, omit those that would overflow the relevant array index type. So if that is a signed long long, omit the last.)
