Linked list algorithm to find pairs adding up to 10 - data-structures

Can you suggest an algorithm that finds all pairs of nodes in a linked list that add up to 10?
I came up with the following.
Algorithm: Compare each node, starting with the second node, with every node from the head up to (but not including) the current node, and report all such pairs.
I think this algorithm should work, however it's certainly not the most efficient one, having a complexity of O(n²).
Can anyone hint at a solution which is more efficient (perhaps one that takes linear time)? Additional or temporary nodes can be used by such a solution.

If their range is limited (say between -100 and 100), it's easy.
Create an array quant[-100..100], then just cycle through your linked list, executing:
quant[value] = quant[value] + 1
Then the following loop will do the trick.
for i = -100 to 100:
    j = 10 - i
    for k = 1 to quant[i] * quant[j]:
        output i, " ", j
Even if their range isn't limited, you can have a more efficient method than what you proposed, by sorting the values first and then just keeping counts rather than individual values (same as the above solution).
This is achieved by running two pointers, one at the start of the list and one at the end. When the numbers at those pointers add up to 10, output them and move the end pointer down and the start pointer up.
When they're greater than 10, move the end pointer down. When they're less, move the start pointer up.
This relies on the sorted nature. Less than 10 means you need to make the sum higher (move the start pointer up). Greater than 10 means you need to make the sum lower (move the end pointer down). Since there are no duplicates in the list (because of the counts), being equal to 10 means you move both pointers.
Stop when the pointers pass each other.
There's one more tricky bit, and that's when the pointers are equal and the values sum to 10 (this can only happen when the value is 5, obviously).
You don't output the number of pairs based on the product of the counts; rather it's based on the product of the counts each reduced by one, i.e. (count - 1) squared. That's because a value 5 with a count of 1 doesn't actually sum to 10 (since there's only one 5).
So, for the list:
2 3 1 3 5 7 10 -1 11
you get:
Index   a   b   c   d   e   f   g   h
Value  -1   1   2   3   5   7  10  11
Count   1   1   1   2   1   1   1   1
You start pointer p1 at a and p2 at h. Since -1 + 11 = 10, you output those two numbers (as above, you do it N times where N is the product of the counts). That's one copy of (-1,11). Then you move p1 to b and p2 to g.
1 + 10 > 10 so leave p1 at b, move p2 down to f.
1 + 7 < 10 so move p1 to c, leave p2 at f.
2 + 7 < 10 so move p1 to d, leave p2 at f.
3 + 7 = 10, output two copies of (3,7) since the count of d is 2, move p1 to e, p2 to e.
5 + 5 = 10 but p1 = p2 so the product is 0 times 0 or 0. Output nothing, move p1 to f, p2 to d.
Loop ends since p1 > p2.
Hence the overall output was:
(-1,11)
( 3, 7)
( 3, 7)
which is correct.
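For the general case where you can't index directly by value, here's a sketch of the same sorted-counts, two-pointer idea in Python; treat it as an illustration rather than canonical code, and note that it deliberately mirrors the (count - 1) squared midpoint convention of the C program below.

from collections import Counter

def print_pairs(nodes, target=10):
    counts = Counter(nodes)   # value -> occurrences, O(n)
    vals = sorted(counts)     # distinct values, O(n log n)
    s, e = 0, len(vals) - 1
    while s <= e:
        total = vals[s] + vals[e]
        if total == target:
            # midpoint convention as in the C version below
            reps = ((counts[vals[s]] - 1) ** 2 if s == e
                    else counts[vals[s]] * counts[vals[e]])
            for _ in range(reps):
                print(vals[s], vals[e])
            s += 1
            e -= 1
        elif total < target:
            s += 1
        else:
            e -= 1

Calling print_pairs([2, 3, 1, 3, 5, 7, 10, -1, 11]) prints (-1, 11) once and (3, 7) twice, matching the walkthrough above.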
Here's some test code. You'll notice that I've forced 7 (the midpoint) to a specific value for testing. Obviously, you wouldn't do this.
#include <stdio.h>
#include <stdlib.h>     /* for srand/rand */
#include <time.h>       /* for time */

#define SZSRC 30
#define SZSORTED 20
#define SUM 14

int main (void) {
    int i, s, e, prod;
    int srcData[SZSRC];
    int sortedVal[SZSORTED];
    int sortedCnt[SZSORTED];

    // Make some random data.
    srand (time (0));
    for (i = 0; i < SZSRC; i++) {
        srcData[i] = rand() % SZSORTED;
        printf ("srcData[%2d] = %5d\n", i, srcData[i]);
    }

    // Convert to value/count arrays.
    for (i = 0; i < SZSORTED; i++) {
        sortedVal[i] = i;
        sortedCnt[i] = 0;
    }
    for (i = 0; i < SZSRC; i++)
        sortedCnt[srcData[i]]++;

    // Force 7+7 to specific count for testing.
    sortedCnt[7] = 2;
    for (i = 0; i < SZSORTED; i++)
        if (sortedCnt[i] != 0)
            printf ("Sorted [%3d], count = %3d\n", i, sortedCnt[i]);

    // Start and end pointers.
    s = 0;
    e = SZSORTED - 1;

    // Loop until they overlap.
    while (s <= e) {
        // Equal to desired value?
        if (sortedVal[s] + sortedVal[e] == SUM) {
            // Get product (note special case at midpoint).
            prod = (s == e)
                ? (sortedCnt[s] - 1) * (sortedCnt[e] - 1)
                : sortedCnt[s] * sortedCnt[e];
            // Output the right count.
            for (i = 0; i < prod; i++)
                printf ("(%3d,%3d)\n", sortedVal[s], sortedVal[e]);
            // Move both pointers and continue.
            s++;
            e--;
            continue;
        }
        // Less than desired, move start pointer.
        if (sortedVal[s] + sortedVal[e] < SUM) {
            s++;
            continue;
        }
        // Greater than desired, move end pointer.
        e--;
    }
    return 0;
}
You'll see that the code above is all O(n) since I'm not sorting in this version, just intelligently using the values as indexes.
If the minimum is below zero (or very high to the point where it would waste too much memory), you can just use a minVal to adjust the indexes (another O(n) scan to find the minimum value and then just use i-minVal instead of i for array indexes).
And, even if the range from low to high is too expensive on memory, you can use a sparse array. You'll have to sort it, O(n log n), and search it for updating counts, also O(n log n), but that's still better than the original O(n²). (The reason the binary search is O(n log n) is that a single search would be O(log n), but you have to do it for each value.)
And here's the output from a test run, which shows you the various stages of calculation.
srcData[ 0] = 13
srcData[ 1] = 16
srcData[ 2] = 9
srcData[ 3] = 14
srcData[ 4] = 0
srcData[ 5] = 8
srcData[ 6] = 9
srcData[ 7] = 8
srcData[ 8] = 5
srcData[ 9] = 9
srcData[10] = 12
srcData[11] = 18
srcData[12] = 3
srcData[13] = 14
srcData[14] = 7
srcData[15] = 16
srcData[16] = 12
srcData[17] = 8
srcData[18] = 17
srcData[19] = 11
srcData[20] = 13
srcData[21] = 3
srcData[22] = 16
srcData[23] = 9
srcData[24] = 10
srcData[25] = 3
srcData[26] = 16
srcData[27] = 9
srcData[28] = 13
srcData[29] = 5
Sorted [ 0], count = 1
Sorted [ 3], count = 3
Sorted [ 5], count = 2
Sorted [ 7], count = 2
Sorted [ 8], count = 3
Sorted [ 9], count = 5
Sorted [ 10], count = 1
Sorted [ 11], count = 1
Sorted [ 12], count = 2
Sorted [ 13], count = 3
Sorted [ 14], count = 2
Sorted [ 16], count = 4
Sorted [ 17], count = 1
Sorted [ 18], count = 1
( 0, 14)
( 0, 14)
( 3, 11)
( 3, 11)
( 3, 11)
( 5, 9)
( 5, 9)
( 5, 9)
( 5, 9)
( 5, 9)
( 5, 9)
( 5, 9)
( 5, 9)
( 5, 9)
( 5, 9)
( 7, 7)

Create a hash set (HashSet in Java) (could use a sparse array if your numbers are well-bounded, i.e. you know they fall into +/- 100)
For each node value n, first check whether 10-n is in the set. If so, you have found a pair. Either way, add n to the set and continue.
So for example you have
1 - 6 - 3 - 4 - 9
1 - is 9 in the set? Nope
6 - 4? No.
3 - 7? No.
4 - 6? Yup! Print (6,4)
9 - 1? Yup! Print (9,1)
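A minimal sketch of this in Python, where a plain set plays the role of Java's HashSet (the function name is mine):

def find_pairs(values, target=10):
    seen = set()
    for n in values:
        if target - n in seen:   # complement already visited?
            print(n, target - n)
        seen.add(n)              # either way, add n and continue

find_pairs([1, 6, 3, 4, 9])      # prints (4, 6) then (9, 1)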

This is a mini subset sum problem, which is NP-complete.

If you were to first sort the list, it would reduce the number of pairs that need to be evaluated.

Related

Find greater numbers than self on the left side and smaller numbers than self on the right side

Consider an array a of n integers, indexed from 1 to n.
For every index i such that 1<i<n, define:
count_left(i) = number of indices j such that 1 <= j < i and a[j] > a[i];
count_right(i) = number of indices j such that i < j <= n and a[j] < a[i];
diff(i) = abs(count_left(i) - count_right(i)).
The problem is: given array a, find the maximum possible value of diff(i) for 1 < i < n.
I got a solution by brute force. Can anyone give a better solution?
Constraint: 3 < n <= 10^5
Example
Input Array: [3, 6, 9, 5, 4, 8, 2]
Output: 4
Explanation:
diff(2) = abs(0 - 3) = 3
diff(3) = abs(0 - 4) = 4
diff(4) = abs(2 - 2) = 0
diff(5) = abs(3 - 1) = 2
diff(6) = abs(1 - 1) = 0
maximum is 4.
O(n log n) approach:
Walk through the array left to right, adding every element to an augmented binary search tree (red-black, AVL, etc.) whose nodes carry a subtree size, the element's initial index, and a temporary rank field. Immediately after adding an element, we know its rank in the current state of the tree.
lb = index - temprank
is the number of larger elements to the left - remember it in the temprank field.
After filling the tree with all items, traverse the tree again, retrieving every element's final rank.
rs = finalrank - temprank
is the number of smaller elements to the right. Now just take the absolute difference of lb and rs:
diff = abs(lb - rs) = abs(index - temprank - (finalrank - temprank)) = abs(index - finalrank)
But ... we can see that we don't need temprank at all.
Moreover - we don't need the binary tree!
Just sort the pairs (element; initial index) by element key and take the maximum absolute difference of new_index - old_index (except for old indices 1 and n):
a      3   6   9   5   4   8   2
old        2   3   4   5   6
new        5   7   4   3   6
dif        3   4   0   2   0
Python code for concept checking
a = [3, 6, 9, 5, 4, 8, 2]
b = sorted([[e,i] for i,e in enumerate(a)])
print(b)
print(max([abs(n-o[1]) if 0<o[1]<len(a)-1 else 0 for n,o in enumerate(b)]))

Codility Peaks: what's wrong with this Go implementation on the performance test cases?

Divide an array into the maximum number of same-sized blocks, each of which should contain an index P such that A[P - 1] < A[P] > A[P + 1].
My Solution: golang solution (below).
However, some of the performance tests fail for no obvious reason - can anyone offer suggestions?
func Solution(A []int) int {
    peaks := make([]int, 0)
    for i := 1; i < len(A)-1; i++ {
        if A[i] > A[i-1] && A[i] > A[i+1] {
            peaks = append(peaks, i)
        }
    }
    if len(peaks) <= 0 {
        return 0
    }
    maxBlocks := 0
    // we only loop through possible block counts up to the number of
    // peaks; in other words, we have to ensure at least one peak
    // inside each block
    for i := 1; i <= len(peaks); i++ {
        // if i is not a divisor of len(A), the array cannot be divided
        // into i equal blocks, so we skip it
        if len(A)%i != 0 {
            continue
        }
        // we got the block size
        di := len(A) / i
        peakState := 0
        k := 0
        // this loop verifies whether each block has at least one peak by
        // checking whether the current peak lies inside A[di*k] ~ A[di*(k+1)-1];
        // if the current peak is inside block k, we step to the next peak,
        // otherwise we move to the next block; once all the peaks are
        // consumed, we can verify whether all the blocks were valid (peak
        // inside) by checking the k value - if k reached the final block,
        // this division is acceptable
        for {
            if peakState > len(peaks)-1 {
                break
            }
            if k >= i {
                break
            }
            if peaks[peakState] >= di*k && peaks[peakState] <= di*(k+1)-1 {
                peakState++
            } else {
                k++
            }
        }
        // if all peaks were consumed and we ended in the last block, record
        // i as the current best (largest) block count
        if k == i-1 && peakState == len(peaks) {
            maxBlocks = i
        }
    }
    return maxBlocks
}
Thanks for adding more comments to your code. The idea seems to make sense. If the judge is reporting a wrong answer, I would try it with random data and some edge cases and a brute-force control to see if you can catch a failing example that's reasonably sized, and analyse what is wrong.
My own thought about a possible approach so far was to record a prefix array so as to tell in O(1) if a block has a peak. Add 1 if the element is a peak, 0 otherwise. For input,
1, 2, 3, 4, 3, 4, 1, 2, 3, 4, 6, 2
we would have:
1, 2, 3, 4, 3, 4, 1, 2, 3, 4, 6, 2
0 0 0 1 1 2 2 2 2 2 3 3
now when we divide, we know that a block contains a peak if its relative sum is positive:
1, 2, 3, 4, 3, 4, 1, 2, 3, 4, 6, 2
0 | 0 0 0 1 | 1 2 2 2 | 2 2 3 3
a         b         c         d
If the first block did not contain a peak, we would expect b - a to equal 0 but instead we get 1, meaning there's a peak. This method would guarantee O(num blocks) for each divisor test.
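Here is a sketch of that prefix-count idea in Python (my own illustration, untested against Codility's judge):

def solution(A):
    n = len(A)
    # prefix[i] = number of peaks among indices 0 .. i-1
    prefix = [0] * (n + 1)
    for i in range(n):
        is_peak = 0 < i < n - 1 and A[i - 1] < A[i] > A[i + 1]
        prefix[i + 1] = prefix[i] + (1 if is_peak else 0)
    if prefix[n] == 0:
        return 0
    best = 0
    for blocks in range(1, prefix[n] + 1):   # need at least one peak per block
        if n % blocks:
            continue
        size = n // blocks
        # O(blocks) check: each block's relative sum must be positive
        if all(prefix[(b + 1) * size] > prefix[b * size]
               for b in range(blocks)):
            best = blocks
    return best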
The second thing I would try is to iterate from the smallest divisor (largest block size) to the largest divisor (smallest block size), but skip divisors that can be divided by a smaller divisor that failed validation. For example, if 2 succeeded but 3 failed, there's no way 6 can succeed, but 4 still could.
index  1  2  3  4  5  6  7  8  9 10 11 12
div 2                    |                   (2 blocks of 6)
div 3              |           |             (3 blocks of 4)
div 6        |     |     |     |     |       (6 blocks of 2: contains every div-3 boundary)
div 4           |        |        |          (4 blocks of 3: crosses div-3 boundaries)

How to find the count of numbers which are divisible by 7?

Given an integer N, how to efficiently find the count of numbers which are divisible by 7 (their reverse should also be divisible by 7) in the range:
[0, 10^N - 1]
Example:
For N=2, answer:
4 {0, 7, 70, 77}
[All numbers from 0 to 99 which are divisible by 7 (also their reverse is divisible)]
My approach, simple brute-force:
initialize count to zero
run a loop from i=0 till end
if a(i) % 7 == 0 && reverse(a(i)) % 7 == 0, then we increase the count
Note:
reverse(123) = 321, reverse(1200) = 21, for example!
Let's see what happens mod 7 when we append a digit, d, to a prefix, abc.
10 * abc + d =>
((10 mod 7) * (abc mod 7) + d) mod 7
reversed number:
reverse(abc) + d * 10^length(abc) =>
(reverse(abc) mod 7 + (d mod 7) * (10^3 mod 7)) mod 7
The point is that we only need the count of prefixes for each pair of remainders (abc mod 7, reverse(abc) mod 7), not the actual prefixes.
Let COUNTS(n,f,r) be the number of n-digit numbers x (leading zeros allowed) such that x%7 = f and REVERSE(x)%7 = r.
The counts are easy to calculate for n=1:
COUNTS(1,f,r) = 0 when f != r, since a 1-digit number is the same as its reverse.
COUNTS(1,x,x) = 1 when x >= 3, and
COUNTS(1,x,x) = 2 when x < 3, since 7%7=0, 8%7=1, and 9%7=2
The counts for other lengths can be figured out by calculating what happens when you add each digit from 0 to 9 to the numbers characterized by the previous counts.
At the end, COUNTS(N,0,0) is the answer you are looking for.
In Python, for example, it looks like this:
def getModCounts(len):
    counts = [[0]*7 for i in range(0,7)]
    if len < 1:
        return counts
    if len < 2:
        counts[0][0] = counts[1][1] = counts[2][2] = 2
        counts[3][3] = counts[4][4] = counts[5][5] = counts[6][6] = 1
        return counts
    prevCounts = getModCounts(len-1)
    rplace = (10**(len-1)) % 7
    for f in range(0,7):
        for r in range(0,7):
            c = prevCounts[f][r]
            for newdigit in range(0,10):
                newf = (f*10 + newdigit) % 7
                newr = (r + newdigit*rplace) % 7
                counts[newf][newr] += c
    return counts

def numFwdAndRevDivisible(len):
    return getModCounts(len)[0][0]

#TEST
for i in range(0,20):
    print("{0} -> {1}".format(i, numFwdAndRevDivisible(i)))
See if it gives the answers you're expecting. If not, maybe there's a bug I need to fix:
0 -> 0
1 -> 2
2 -> 4
3 -> 22
4 -> 206
5 -> 2113
6 -> 20728
7 -> 205438
8 -> 2043640
9 -> 20411101
10 -> 204084732
11 -> 2040990205
12 -> 20408959192
13 -> 204085028987
14 -> 2040823461232
15 -> 20408170697950
16 -> 204081640379568
17 -> 2040816769367351
18 -> 20408165293673530
19 -> 204081641308734748
This is a pretty good answer when counting up to N is reasonable -- way better than brute force, which counts up to 10^N.
For very long lengths like N=10^18 (you would probably be asked for the count mod 1000000007 or something), there is a next-level answer.
Note that there is a linear relationship between the counts for length n and the counts for length n+1, and that this relationship can be represented by a 49x49 matrix. You can exponentiate this matrix to the Nth power using exponentiation by squaring in O(log N) matrix multiplications, and then just multiply by the single digit counts to get the length N counts.
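One subtlety worth flagging: the step matrix depends on 10^n mod 7 (the rplace in the code above), which cycles with period 6, so what you would actually exponentiate is the product of the six matrices covering one full period. Here is a sketch of just the step-matrix construction with plain stepping (my illustration; the exponentiation-by-squaring part is left out):

def step_matrix(rplace):
    # linear map taking the 49 counts for length L to those for length L+1,
    # where rplace = (10 ** L) % 7; state (f, r) is packed as 7*f + r
    M = [[0] * 49 for _ in range(49)]
    for f in range(7):
        for r in range(7):
            for d in range(10):
                nf = (10 * f + d) % 7
                nr = (r + d * rplace) % 7
                M[7 * nf + nr][7 * f + r] += 1
    return M

def counts_00(N):
    # counts for length 1: a 1-digit number equals its reverse
    v = [0] * 49
    for d in range(10):
        v[7 * (d % 7) + (d % 7)] += 1
    for L in range(1, N):   # for huge N, replace with matrix exponentiation
        M = step_matrix(pow(10, L, 7))
        v = [sum(M[i][k] * v[k] for k in range(49)) for i in range(49)]
    return v[0]             # numbers with x % 7 == 0 and reverse(x) % 7 == 0

counts_00(2) returns 4, matching the table above; for N near 10^18 you would raise the product of the six period matrices to the appropriate power, reducing mod 1000000007 as you go.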
There is also a recursive solution using the digit DP technique, for any number of digits.
// Assumes: len is the number of digits, base[pos] holds (10^pos) % 7,
// and dp[][][] is initialized to -1.
long long call(int pos, int Mod, int revMod){
    if(pos == len){
        if(!Mod && !revMod) return 1;
        return 0;
    }
    if(dp[pos][Mod][revMod] != -1) return dp[pos][Mod][revMod];
    long long res = 0;
    for(int i = 0; i <= 9; i++){
        int revValue = (base[pos]*i + revMod) % 7;
        int curValue = (Mod*10 + i) % 7;
        res += call(pos+1, curValue, revValue);
    }
    return dp[pos][Mod][revMod] = res;
}

Find algorithm to split sequence in 2 to minimize difference in sum [duplicate]

This question already has answers here:
Is partitioning an array into halves with equal sums P or NP?
(5 answers)
Closed 9 years ago.
Here's the problem: given a sequence of numbers, split these numbers into 2 sequences, so that the difference between the two sequences is the minimum. For example, given the sequence: [5, 4, 3, 3, 3] the solution is:
[5, 4] -> sum is 9
[3, 3, 3] -> sum is 9
The difference is 0
In other terms, can you find an algorithm (C language preferred) that, given an input vector (variable size) of integers, can output two vectors where the difference between the two sums is minimal?
Brute-force algorithms should be avoided.
To be sure of getting the right solution, it would be nice to benchmark the results of your algorithm against a brute-force algorithm.
It sounds like a sub-arrays problem (which is my interpretation of "sequences").
Meaning the only possibilities for 5, 4, 3, 3, 3 are:
| 5, 4, 3, 3, 3 => 0 - 18 => 18
5 | 4, 3, 3, 3 => 5 - 13 => 8
5, 4 | 3, 3, 3 => 9 - 9 => 0
5, 4, 3 | 3, 3 => 12 - 6 => 6
5, 4, 3, 3 | 3 => 15 - 3 => 12
5, 4, 3, 3, 3 | => 18 - 0 => 18 (same as first)
It is as simple as just comparing the sums on either side of every index.
Code: (untested)
int total = 0;
for (int i = 0; i < n; i++)
    total += arr[i];

int best = INT_MAX, bestPos = -1, current = 0;
for (int i = 0; i < n; i++)
{
    current += arr[i];
    // left sum is current, right sum is total - current
    int diff = abs(2 * current - total);
    if (diff < best)
    {
        best = diff;
        bestPos = i;
    }
    // else break; - optimisation, may not work
}
printf("The best position is at %d\n", bestPos);
The above is O(n), logically, you can't do much better than that.
You can slightly optimize the above by doing a binary-search-like process on the sequence to get down to n + log n additions rather than 2n, but both are O(n). (Note the binary search only works if the elements are non-negative, so that the prefix sums are monotonic.) Basic pseudo-code:
sum[0] = arr[0]
// sum[i] represents the sum from indices 0 to i
for (i = 1:n-1)
    sum[i] = sum[i-1] + arr[i]
total = sum[n-1]
start = 0
end = n-1
best = MAX
repeat:
    if (start == end) stop
    mid = (start + end) / 2
    sumFromMidToEnd = total - sum[mid]
    best = min(best, abs(sumFromMidToEnd - sum[mid]))
    if (sum[mid] > sumFromMidToEnd)
        end = mid
    else if (sum[mid] < sumFromMidToEnd)
        start = mid
    else
        stop
If it's actually subsets, then, as already mentioned, it appears to be the optimization version of the Partition problem, which is a lot more difficult.
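For the subset interpretation, here is a sketch of the standard pseudo-polynomial DP in Python (my illustration of the textbook approach, assuming non-negative integers):

def min_partition_diff(nums):
    # reachable[s] is True if some subset of nums sums to s
    total = sum(nums)
    reachable = [True] + [False] * total
    for x in nums:
        for s in range(total, x - 1, -1):   # backwards: each x used once
            if reachable[s - x]:
                reachable[s] = True
    # one side sums to s, the other to total - s
    return min(abs(total - 2 * s)
               for s, ok in enumerate(reachable) if ok)

min_partition_diff([5, 4, 3, 3, 3]) returns 0, matching the [5, 4] vs [3, 3, 3] split above.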

nᵗʰ ugly number

Numbers whose only prime factors are 2, 3, or 5 are called ugly numbers.
Example:
1, 2, 3, 4, 5, 6, 8, 9, 10, 12, 15, ...
1 can be considered as 2^0.
I am working on finding nth ugly number. Note that these numbers are extremely sparsely distributed as n gets large.
I wrote a trivial program that computes if a given number is ugly or not. For n > 500 - it became super slow. I tried using memoization - observation: ugly_number * 2, ugly_number * 3, ugly_number * 5 are all ugly. Even with that it is slow. I tried using some properties of log - since that will reduce this problem from multiplication to addition - but, not much luck yet. Thought of sharing this with you all. Any interesting ideas?
Using a concept similar to Sieve of Eratosthenes (thanks Anon)
for (int i(2), uglyCount(0); ; i++) {
    if (i % 2 == 0)
        continue;
    if (i % 3 == 0)
        continue;
    if (i % 5 == 0)
        continue;
    uglyCount++;
    if (uglyCount == n - 1)
        break;
}
i is the nth ugly number.
Even this is pretty slow. I am trying to find the 1500th ugly number.
A simple fast solution in Java, using the approach described by Anon.
Here TreeSet is just a container capable of returning the smallest element in it. (No duplicates are stored.)
int n = 20;
SortedSet<Long> next = new TreeSet<Long>();
next.add((long) 1);
long cur = 0;
for (int i = 0; i < n; ++i) {
    cur = next.first();
    System.out.println("number " + (i + 1) + ": " + cur);
    next.add(cur * 2);
    next.add(cur * 3);
    next.add(cur * 5);
    next.remove(cur);
}
Since the 1000th ugly number is 51200000, storing them in a bool[] isn't really an option.
edit
As a recreation from work (debugging stupid Hibernate), here's a completely linear solution. Thanks to marcog for the idea!
int n = 1000;
int last2 = 0;
int last3 = 0;
int last5 = 0;
long[] result = new long[n];
result[0] = 1;
for (int i = 1; i < n; ++i) {
    long prev = result[i - 1];
    while (result[last2] * 2 <= prev) {
        ++last2;
    }
    while (result[last3] * 3 <= prev) {
        ++last3;
    }
    while (result[last5] * 5 <= prev) {
        ++last5;
    }
    long candidate1 = result[last2] * 2;
    long candidate2 = result[last3] * 3;
    long candidate3 = result[last5] * 5;
    result[i] = Math.min(candidate1, Math.min(candidate2, candidate3));
}
System.out.println(result[n - 1]);
The idea is that to calculate a[i], we can use a[j]*2 for some j < i. But we also need to make sure that 1) a[j]*2 > a[i - 1] and 2) j is smallest possible.
Then, a[i] = min(a[j]*2, a[k]*3, a[t]*5).
I am working on finding nth ugly number. Note that these numbers are extremely sparsely distributed as n gets large.
I wrote a trivial program that computes if a given number is ugly or not.
This looks like the wrong approach for the problem you're trying to solve - it's a bit of a shlemiel algorithm.
Are you familiar with the Sieve of Eratosthenes algorithm for finding primes? Something similar (exploiting the knowledge that every ugly number is 2, 3 or 5 times another ugly number) would probably work better for solving this.
With the comparison to the Sieve I don't mean "keep an array of bools and eliminate possibilities as you go up". I am more referring to the general method of generating solutions based on previous results. Where the Sieve gets a number and then removes all multiples of it from the candidate set, a good algorithm for this problem would start with an empty set and then add the correct multiples of each ugly number to that.
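A minimal Python sketch of that generate-from-previous-results idea, using a heap plus a seen-set to avoid duplicate insertions (my own illustration, not Anon's code):

import heapq

def nth_ugly(n):
    heap, seen = [1], {1}
    for _ in range(n - 1):
        x = heapq.heappop(heap)           # smallest ugly number so far
        for m in (2 * x, 3 * x, 5 * x):   # its multiples are ugly too
            if m not in seen:
                seen.add(m)
                heapq.heappush(heap, m)
    return heap[0]

nth_ugly(11) returns 15, matching the sequence in the question.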
My answer refers to the correct answer given by Nikita Rybak, so that one can see a transition from the idea of the first approach to that of the second.
from collections import deque

def hamming():
    h = 1
    next2, next3, next5 = deque([]), deque([]), deque([])
    while True:
        yield h
        next2.append(2 * h)
        next3.append(3 * h)
        next5.append(5 * h)
        h = min(next2[0], next3[0], next5[0])
        if h == next2[0]: next2.popleft()
        if h == next3[0]: next3.popleft()
        if h == next5[0]: next5.popleft()
What's changed from Nikita Rybak's first approach is that, instead of adding the next candidates to a single data structure (a tree set), one can add each of them separately to three FIFO lists. This way, each list is kept sorted all the time, and the next smallest candidate must always be at the head of one or more of these lists.
If we eliminate the use of the three lists above, we arrive at the second implementation in Nikita Rybak's answer. This is done by evaluating those candidates (to be contained in three lists) only when needed, so that there is no need to store them.
Simply put:
In the first approach, we put every new candidate into a single data structure, and that's bad because too many things get mixed up unwisely. This poor strategy inevitably entails O(log(tree size)) time complexity every time we make a query to the structure. By putting them into separate queues, however, you will see that each query takes only O(1), and that's why the overall performance reduces to O(n)! This is because each of the three lists is already sorted, by itself.
I believe you can solve this problem in sub-linear time, probably O(n^{2/3}).
To give you the idea, if you simplify the problem to allow factors of just 2 and 3, you can achieve O(n^{1/2}) time starting by searching for the smallest power of two that is at least as large as the nth ugly number, and then generating a list of O(n^{1/2}) candidates. This code should give you an idea how to do it. It relies on the fact that the nth number containing only powers of 2 and 3 has a prime factorization whose sum of exponents is O(n^{1/2}).
def foo(n):
    p2 = 1   # current power of 2
    p3 = 1   # current power of 3
    e3 = 0   # exponent of current power of 3
    t = 1    # count of candidates less than or equal to the current power of 2
    while t < n:
        p2 *= 2
        if p3 * 3 < p2:
            p3 *= 3
            e3 += 1
        t += 1 + e3
    candidates = [p2]
    c = p2
    for i in range(e3):
        c //= 2          # integer division keeps the values exact
        c *= 3
        if c > p2:
            c //= 2
        candidates.append(c)
    return sorted(candidates)[n - (t - len(candidates))]
The same idea should work for three allowed factors, but the code gets more complex. The sum of the powers of the factorization drops to O(n^{1/3}), but you need to consider more candidates, O(n^{2/3}) to be more precise.
A lot of good answers here, but I was having trouble understanding those, specifically how any of these answers, including the accepted one, maintained the axiom 2 in Dijkstra's original paper:
Axiom 2. If x is in the sequence, so is 2 * x, 3 * x, and 5 * x.
After some whiteboarding, it became clear that axiom 2 is not an invariant at each iteration of the algorithm, but actually the goal of the algorithm itself. At each iteration, we try to restore the condition in axiom 2. If last is the last value in the result sequence S, axiom 2 can simply be rephrased as:
For some x in S, the next value in S is the minimum of 2x,
3x, and 5x that is greater than last. Let's call this axiom 2'.
Thus, if we can find x, we can compute the minimum of 2x, 3x, and 5x in constant time, and add it to S.
But how do we find x? One approach is, we don't; instead, whenever we add a new element e to S, we compute 2e, 3e, and 5e, and add them to a minimum priority queue. Since this operation guarantees e is in S, simply extracting the top element of the PQ satisfies axiom 2'.
This approach works, but the problem is that we generate a bunch of numbers we may not end up using. See this answer for an example; if the user wants the 5th element in S (5), the PQ at that moment holds 6 6 8 9 10 10 12 15 15 20 25. Can we not waste this space?
Turns out, we can do better. Instead of storing all these numbers, we simply maintain three counters for each of the multiples, namely, 2i, 3j, and 5k. These are candidates for the next number in S. When we pick one of them, we increment only the corresponding counter, and not the other two. By doing so, we are not eagerly generating all the multiples, thus solving the space problem with the first approach.
Let's see a dry run for n = 8, i.e. the number 9. We start with 1, as stated by axiom 1 in Dijkstra's paper.
+---------+---+---+---+----+----+----+-------------------+
| # | i | j | k | 2i | 3j | 5k | S |
+---------+---+---+---+----+----+----+-------------------+
| initial | 1 | 1 | 1 | 2 | 3 | 5 | {1} |
+---------+---+---+---+----+----+----+-------------------+
| 1 | 1 | 1 | 1 | 2 | 3 | 5 | {1,2} |
+---------+---+---+---+----+----+----+-------------------+
| 2 | 2 | 1 | 1 | 4 | 3 | 5 | {1,2,3} |
+---------+---+---+---+----+----+----+-------------------+
| 3 | 2 | 2 | 1 | 4 | 6 | 5 | {1,2,3,4} |
+---------+---+---+---+----+----+----+-------------------+
| 4 | 3 | 2 | 1 | 6 | 6 | 5 | {1,2,3,4,5} |
+---------+---+---+---+----+----+----+-------------------+
| 5 | 3 | 2 | 2 | 6 | 6 | 10 | {1,2,3,4,5,6} |
+---------+---+---+---+----+----+----+-------------------+
| 6 | 4 | 2 | 2 | 8 | 6 | 10 | {1,2,3,4,5,6} |
+---------+---+---+---+----+----+----+-------------------+
| 7 | 4 | 3 | 2 | 8 | 9 | 10 | {1,2,3,4,5,6,8} |
+---------+---+---+---+----+----+----+-------------------+
| 8 | 5 | 3 | 2 | 10 | 9 | 10 | {1,2,3,4,5,6,8,9} |
+---------+---+---+---+----+----+----+-------------------+
Notice that S didn't grow at iteration 6, because the minimum candidate 6 had already been added previously. To avoid this problem of having to remember all of the previous elements, we amend our algorithm to increment all the counters whenever the corresponding multiples are equal to the minimum candidate. That brings us to the following Scala implementation.
import scala.annotation.tailrec

def hamming(n: Int): Seq[BigInt] = {
  @tailrec
  def next(x: Int, factor: Int, xs: IndexedSeq[BigInt]): Int = {
    val leq = factor * xs(x) <= xs.last
    if (leq) next(x + 1, factor, xs)
    else x
  }

  @tailrec
  def loop(i: Int, j: Int, k: Int, xs: IndexedSeq[BigInt]): IndexedSeq[BigInt] = {
    if (xs.size < n) {
      val a = next(i, 2, xs)
      val b = next(j, 3, xs)
      val c = next(k, 5, xs)
      val m = Seq(2 * xs(a), 3 * xs(b), 5 * xs(c)).min
      val x = a + (if (2 * xs(a) == m) 1 else 0)
      val y = b + (if (3 * xs(b) == m) 1 else 0)
      val z = c + (if (5 * xs(c) == m) 1 else 0)
      loop(x, y, z, xs :+ m)
    } else xs
  }

  loop(0, 0, 0, IndexedSeq(BigInt(1)))
}
Basically the search could be made O(n):
Consider that you keep a partial history of ugly numbers. Now, at each step you have to find the next one. It should be equal to a number from the history multiplied by 2, 3 or 5. Choose the smallest of them, add it to the history, and drop some numbers from it so that the smallest number in the list multiplied by 5 would be larger than the largest.
It will be fast, because the search for the next number will be simple:
min(largest * 2, smallest * 5, one from the middle * 3),
that is, the smallest such product larger than the largest number in the list. If they are sparse, the list will always contain few numbers, so the search for the number that has to be multiplied by 3 will be fast.
Here is a correct solution in ML. The function ugly() will return a stream (lazy list) of Hamming numbers. The function nth can be used on this stream.
This uses the sieve method: the next elements are only calculated when needed.
datatype stream = Item of int * (unit->stream);
fun cons (x,xs) = Item(x, xs);
fun head (Item(i,xf)) = i;
fun tail (Item(i,xf)) = xf();
fun maps f xs = cons(f (head xs), fn()=> maps f (tail xs));
fun nth(s,1)=head(s)
| nth(s,n)=nth(tail(s),n-1);
fun merge(xs,ys)=if (head xs=head ys) then
cons(head xs,fn()=>merge(tail xs,tail ys))
else if (head xs<head ys) then
cons(head xs,fn()=>merge(tail xs,ys))
else
cons(head ys,fn()=>merge(xs,tail ys));
fun double n=n*2;
fun triple n=n*3;
fun ij()=
cons(1,fn()=>
merge(maps double (ij()),maps triple (ij())));
fun quint n=n*5;
fun ugly()=
cons(1,fn()=>
merge((tail (ij())),maps quint (ugly())));
This was first year CS work :-)
To find the n-th ugly number in O (n^(2/3)), jonderry's algorithm will work just fine. Note that the numbers involved are huge so any algorithm trying to check whether a number is ugly or not has no chance.
Finding all of the n smallest ugly numbers in ascending order is done easily by using a priority queue in O (n log n) time and O (n) space: Create a priority queue of numbers with the smallest numbers first, initially including just the number 1. Then repeat n times: Remove the smallest number x from the priority queue. If x hasn't been removed before, then x is the next larger ugly number, and we add 2x, 3x and 5x to the priority queue. (If anyone doesn't know the term priority queue, it's like the heap in the heapsort algorithm). Here's the start of the algorithm:
1 -> 2 3 5
1 2 -> 3 4 5 6 10
1 2 3 -> 4 5 6 6 9 10 15
1 2 3 4 -> 5 6 6 8 9 10 12 15 20
1 2 3 4 5 -> 6 6 8 9 10 10 12 15 15 20 25
1 2 3 4 5 6 -> 6 8 9 10 10 12 12 15 15 18 20 25 30
1 2 3 4 5 6 -> 8 9 10 10 12 12 15 15 18 20 25 30
1 2 3 4 5 6 8 -> 9 10 10 12 12 15 15 16 18 20 24 25 30 40
Proof of execution time: We extract an ugly number from the queue n times. We initially have one element in the queue, and after extracting an ugly number we add three elements, increasing the number by 2. So after n ugly numbers are found we have at most 2n + 1 elements in the queue. Extracting an element can be done in logarithmic time. We extract more numbers than just the ugly numbers but at most n ugly numbers plus 2n - 1 other numbers (those that could have been in the sieve after n-1 steps). So the total time is less than 3n item removals in logarithmic time = O (n log n), and the total space is at most 2n + 1 elements = O (n).
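A short Python rendering of exactly this scheme, with heapq as the priority queue and duplicates skipped on extraction as described (my sketch, not the answerer's code):

import heapq

def n_smallest_uglies(n):
    heap, out = [1], []
    while len(out) < n:
        x = heapq.heappop(heap)
        if out and x == out[-1]:
            continue              # x was extracted before, skip it
        out.append(x)             # x is the next larger ugly number
        for m in (2 * x, 3 * x, 5 * x):
            heapq.heappush(heap, m)
    return out

n_smallest_uglies(7) returns [1, 2, 3, 4, 5, 6, 8], matching the trace above.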
I guess we can use Dynamic Programming (DP) and compute nth Ugly Number. Complete explanation can be found at http://www.geeksforgeeks.org/ugly-numbers/
#include <iostream>
#define MAX 1000
using namespace std;

// Find minimum among three numbers
long int min(long int x, long int y, long int z) {
    if (x <= y) {
        if (x <= z) {
            return x;
        } else {
            return z;
        }
    } else {
        if (y <= z) {
            return y;
        } else {
            return z;
        }
    }
}

// Actual method that computes all ugly numbers up to the required range
long int uglyNumber(int count) {
    long int arr[MAX], val;
    // index of last multiple of 2 --> i2
    // index of last multiple of 3 --> i3
    // index of last multiple of 5 --> i5
    int i2, i3, i5, lastIndex;
    arr[0] = 1;
    i2 = i3 = i5 = 0;
    lastIndex = 1;
    while (lastIndex <= count - 1) {
        val = min(2 * arr[i2], 3 * arr[i3], 5 * arr[i5]);
        arr[lastIndex] = val;
        lastIndex++;
        if (val == 2 * arr[i2]) {
            i2++;
        }
        if (val == 3 * arr[i3]) {
            i3++;
        }
        if (val == 5 * arr[i5]) {
            i5++;
        }
    }
    return arr[lastIndex - 1];
}

// Starting point of program
int main() {
    long int num;
    int count;
    cout << "Which Ugly Number : ";
    cin >> count;
    num = uglyNumber(count);
    cout << endl << num;
    return 0;
}
We can see that it's quite fast; just change the value of MAX to compute higher ugly numbers.
Using 3 generators in parallel and selecting the smallest at each iteration, here is a C program to compute all ugly numbers below 2^128 in less than 1 second:
#include <limits.h>
#include <stdio.h>

#if 0
typedef unsigned long long ugly_t;
#define UGLY_MAX (~(ugly_t)0)
#else
typedef __uint128_t ugly_t;
#define UGLY_MAX (~(ugly_t)0)
#endif

int print_ugly(int i, ugly_t u) {
    char buf[64], *p = buf + sizeof(buf);
    *--p = '\0';
    do { *--p = '0' + u % 10; } while ((u /= 10) != 0);
    return printf("%d: %s\n", i, p);
}

int main() {
    int i = 0, n2 = 0, n3 = 0, n5 = 0;
    ugly_t u, ug2 = 1, ug3 = 1, ug5 = 1;
#define UGLY_COUNT 110000
    ugly_t ugly[UGLY_COUNT];

    while (i < UGLY_COUNT) {
        u = ug2;
        if (u > ug3) u = ug3;
        if (u > ug5) u = ug5;
        if (u == UGLY_MAX)
            break;
        ugly[i++] = u;
        print_ugly(i, u);
        if (u == ug2) {
            if (ugly[n2] <= UGLY_MAX / 2)
                ug2 = 2 * ugly[n2++];
            else
                ug2 = UGLY_MAX;
        }
        if (u == ug3) {
            if (ugly[n3] <= UGLY_MAX / 3)
                ug3 = 3 * ugly[n3++];
            else
                ug3 = UGLY_MAX;
        }
        if (u == ug5) {
            if (ugly[n5] <= UGLY_MAX / 5)
                ug5 = 5 * ugly[n5++];
            else
                ug5 = UGLY_MAX;
        }
    }
    return 0;
}
Here are the last 10 lines of output:
100517: 338915443777200000000000000000000000000
100518: 339129266201729628114355465608000000000
100519: 339186548067800934969350553600000000000
100520: 339298130282929870605468750000000000000
100521: 339467078447341918945312500000000000000
100522: 339569540691046437734055936000000000000
100523: 339738624000000000000000000000000000000
100524: 339952965770562084651663360000000000000
100525: 340010386766614455386112000000000000000
100526: 340122240000000000000000000000000000000
Here is a version in Javascript usable with QuickJS:
import * as std from "std";

function main() {
    var i = 0, n2 = 0, n3 = 0, n5 = 0;
    var u, ug2 = 1n, ug3 = 1n, ug5 = 1n;
    var ugly = [];
    for (;;) {
        u = ug2;
        if (u > ug3) u = ug3;
        if (u > ug5) u = ug5;
        ugly[i++] = u;
        std.printf("%d: %s\n", i, String(u));
        if (u >= 0x100000000000000000000000000000000n)
            break;
        if (u == ug2)
            ug2 = 2n * ugly[n2++];
        if (u == ug3)
            ug3 = 3n * ugly[n3++];
        if (u == ug5)
            ug5 = 5n * ugly[n5++];
    }
    return 0;
}
main();
Here is my code. The idea is to divide the number by 2 (as long as the remainder is 0), then by 3 and 5 in the same way. If the number finally becomes 1, it's an ugly number.
You can count and even print all ugly numbers up to n.
int count = 0;
for (int i = 2; i <= n; i++) {
    int temp = i;
    while (temp % 2 == 0) temp /= 2;
    while (temp % 3 == 0) temp /= 3;
    while (temp % 5 == 0) temp /= 5;
    if (temp == 1) {
        cout << i << endl;
        count++;
    }
}
This problem can be done in O(1).
If we remove 1 and look at the numbers between 2 and 30, we will notice that there are 22 numbers.
Now, for any number x among the 22 numbers above, there will be a number x + 30 between 31 and 60 that is also ugly. Thus, we can find at least 22 numbers between 31 and 60. Conversely, every ugly number between 31 and 60 can be written as s + 30, and s will be ugly too, since s + 30 being divisible by 2, 3, or 5 implies s is. Thus, there will be exactly 22 numbers between 31 and 60. This logic can be repeated for every block of 30 numbers after that.
Thus, there will be 23 numbers in the first 30 numbers, and 22 for every 30 after that. That is, the first 23 uglies will occur between 1 and 30, 45 uglies will occur between 1 and 60, 67 uglies will occur between 1 and 90, etc.
Now, if I am given n, say 137, I can see that 137/22 = 6.22. The answer will lie between 6*30 and 7*30, or between 180 and 210. By 180, I will have the 6*22 + 1 = 133rd ugly number. I will have the 155th ugly number at 210. So I am looking for the 4th ugly number (since 137 = 133 + 4) in the interval [2, 30], which is 5. The 137th ugly number is then 180 + 5 = 185.
Another example: if I want the 1500th ugly number, I count 1500/22 = 68 blocks. Thus, I will have 22*68 + 1 = 1497th ugly at 30*68 = 2040. The next three uglies in the [2, 30] block are 2, 3, and 4. So our required ugly is at 2040 + 4 = 2044.
The point is that I can simply build a list of the ugly numbers in [2, 30] and then find the answer by doing lookups in O(1).
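For what it's worth, here is that lookup idea as a Python sketch. Note that this answer counts numbers divisible by 2, 3, or 5, which is not the same as the "only prime factors 2, 3 and 5" definition used elsewhere in this thread; the sketch follows this answer's definition:

# numbers in [2, 30] divisible by 2, 3, or 5 (22 of them); 1 is the first term
BLOCK = [x for x in range(2, 31) if x % 2 == 0 or x % 3 == 0 or x % 5 == 0]

def nth_by_this_definition(n):
    if n == 1:
        return 1
    full, rem = divmod(n - 2, len(BLOCK))   # n-2 skips the leading 1, 0-based
    return 30 * full + BLOCK[rem]

nth_by_this_definition(137) returns 185 and nth_by_this_definition(1500) returns 2044, matching the worked examples above.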
Here is another O(n) approach (Python solution) based on the idea of merging three sorted lists. The challenge is to find the next ugly number in increasing order. For example, we know the first seven ugly numbers are [1,2,3,4,5,6,8]. The ugly numbers are actually from the following three lists:
list 1: 1*2, 2*2, 3*2, 4*2, 5*2, 6*2, 8*2 ... ( multiply each ugly number by 2 )
list 2: 1*3, 2*3, 3*3, 4*3, 5*3, 6*3, 8*3 ... ( multiply each ugly number by 3 )
list 3: 1*5, 2*5, 3*5, 4*5, 5*5, 6*5, 8*5 ... ( multiply each ugly number by 5 )
So the nth ugly number is the nth number of the list merged from the three lists above:
1, 1*2, 1*3, 2*2, 1*5, 2*3 ...
def nthuglynumber(n):
    p2, p3, p5 = 0, 0, 0
    uglynumber = [1]
    while len(uglynumber) < n:
        ugly2, ugly3, ugly5 = uglynumber[p2]*2, uglynumber[p3]*3, uglynumber[p5]*5
        next = min(ugly2, ugly3, ugly5)
        if next == ugly2: p2 += 1    # multiply each number
        if next == ugly3: p3 += 1    # only once by each
        if next == ugly5: p5 += 1    # of the three factors
        uglynumber += [next]
    return uglynumber[-1]
STEP I: compute the three next candidate ugly numbers from the three lists
ugly2, ugly3, ugly5 = uglynumber[p2]*2, uglynumber[p3]*3, uglynumber[p5]*5
STEP II: find the next ugly number as the smallest of the three above
next = min(ugly2, ugly3, ugly5)
STEP III: move a pointer forward if its candidate was the next ugly number
if next == ugly2: p2+=1
if next == ugly3: p3+=1
if next == ugly5: p5+=1
note: three independent ifs are used (not if/elif/else) so that when candidates tie, every matching pointer advances and no duplicates are produced
STEP IV: add the next ugly number to the merged list uglynumber
uglynumber += [next]
