Numbers whose only prime factors are 2, 3, or 5 are called ugly numbers.
Example:
1, 2, 3, 4, 5, 6, 8, 9, 10, 12, 15, ...
1 can be considered as 2^0.
I am working on finding nth ugly number. Note that these numbers are extremely sparsely distributed as n gets large.
I wrote a trivial program that computes whether a given number is ugly or not. For n > 500 it became super slow. I tried using memoization, based on the observation that ugly_number * 2, ugly_number * 3, and ugly_number * 5 are all ugly. Even with that it is slow. I also tried using some properties of logarithms, since they reduce the problem from multiplication to addition, but not much luck yet. Thought of sharing this with you all. Any interesting ideas?
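(For reference, a minimal Python sketch of the kind of divide-out check described above; the names are illustrative, not the original code:)

def is_ugly(x):
    # Strip out the allowed prime factors.
    for p in (2, 3, 5):
        while x % p == 0:
            x //= p
    return x == 1   # ugly iff nothing else remains

def nth_ugly_naive(n):
    count, x = 0, 0
    while count < n:
        x += 1
        if is_ugly(x):
            count += 1
    return x

This is the approach that becomes unusably slow for large n, since the gaps between ugly numbers grow quickly.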
Using a concept similar to Sieve of Eratosthenes (thanks Anon)
for (int i = 1, uglyCount = 0; ; i++) {
    int t = i;
    while (t % 2 == 0) t /= 2;   // strip factors of 2
    while (t % 3 == 0) t /= 3;   // strip factors of 3
    while (t % 5 == 0) t /= 5;   // strip factors of 5
    if (t != 1)                  // some other prime factor remains: not ugly
        continue;
    uglyCount++;
    if (uglyCount == n)
        break;
}
i is the nth ugly number when the loop breaks.
Even this is pretty slow. I am trying to find the 1500th ugly number.
A simple fast solution in Java, using the approach described by Anon.
Here TreeSet is just a container capable of returning the smallest element in it (no duplicates are stored).
int n = 20;
SortedSet<Long> next = new TreeSet<Long>();
next.add((long) 1);

long cur = 0;
for (int i = 0; i < n; ++i) {
    cur = next.first();
    System.out.println("number " + (i + 1) + ": " + cur);
    next.add(cur * 2);
    next.add(cur * 3);
    next.add(cur * 5);
    next.remove(cur);
}
Since the 1000th ugly number is 51200000, storing them in a bool[] isn't really an option.
Edit: As a recreation from work (debugging stupid Hibernate), here's a completely linear solution. Thanks to marcog for the idea!
int n = 1000;

int last2 = 0;
int last3 = 0;
int last5 = 0;

long[] result = new long[n];
result[0] = 1;
for (int i = 1; i < n; ++i) {
    long prev = result[i - 1];

    while (result[last2] * 2 <= prev) {
        ++last2;
    }
    while (result[last3] * 3 <= prev) {
        ++last3;
    }
    while (result[last5] * 5 <= prev) {
        ++last5;
    }

    long candidate1 = result[last2] * 2;
    long candidate2 = result[last3] * 3;
    long candidate3 = result[last5] * 5;

    result[i] = Math.min(candidate1, Math.min(candidate2, candidate3));
}

System.out.println(result[n - 1]);
The idea is that to calculate a[i], we can use a[j]*2 for some j < i. But we also need to make sure that 1) a[j]*2 > a[i - 1] and 2) j is smallest possible.
Then, a[i] = min(a[j]*2, a[k]*3, a[t]*5).
I am working on finding nth ugly number. Note that these numbers are extremely sparsely distributed as n gets large.
I wrote a trivial program that computes if a given number is ugly or not.
This looks like the wrong approach for the problem you're trying to solve; it's a bit of a Shlemiel the painter's algorithm.
Are you familiar with the Sieve of Eratosthenes algorithm for finding primes? Something similar (exploiting the knowledge that every ugly number is 2, 3 or 5 times another ugly number) would probably work better for solving this.
With the comparison to the Sieve I don't mean "keep an array of bools and eliminate possibilities as you go up". I am more referring to the general method of generating solutions based on previous results. Where the Sieve gets a number and then removes all multiples of it from the candidate set, a good algorithm for this problem would start with an empty set and then add the correct multiples of each ugly number to that.
My answer refers to the correct answer given by Nikita Rybak, so that one can see a transition from the idea of the first approach to that of the second.
from collections import deque

def hamming():
    h = 1
    next2, next3, next5 = deque(), deque(), deque()
    while True:
        yield h
        next2.append(2 * h)
        next3.append(3 * h)
        next5.append(5 * h)
        h = min(next2[0], next3[0], next5[0])
        if h == next2[0]: next2.popleft()
        if h == next3[0]: next3.popleft()
        if h == next5[0]: next5.popleft()
What's changed from Nikita Rybak's first approach is that, instead of adding next candidates into a single data structure (the TreeSet), one can add each of them separately into 3 FIFO lists. This way, each list will be kept sorted all the time, and the next least candidate must always be at the head of one or more of these lists.
If we eliminate the use of the three lists above, we arrive at the second implementation in Nikita Rybak's answer. This is done by evaluating those candidates (to be contained in three lists) only when needed, so that there is no need to store them.
Simply put:
In the first approach, we put every new candidate into a single data structure, and that's bad because too many things get mixed up unwisely. This poor strategy inevitably entails O(log(tree size)) time complexity every time we make a query to the structure. By putting them into separate queues, however, each query takes only O(1), and that's why the overall performance reduces to O(n): each of the three lists is already sorted by itself.
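For instance, a quick usage sketch: the generator above can be consumed with itertools.islice.

from itertools import islice

print(list(islice(hamming(), 10)))   # [1, 2, 3, 4, 5, 6, 8, 9, 10, 12]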
I believe you can solve this problem in sub-linear time, probably O(n^{2/3}).
To give you the idea, if you simplify the problem to allow factors of just 2 and 3, you can achieve O(n^{1/2}) time starting by searching for the smallest power of two that is at least as large as the nth ugly number, and then generating a list of O(n^{1/2}) candidates. This code should give you an idea how to do it. It relies on the fact that the nth number containing only powers of 2 and 3 has a prime factorization whose sum of exponents is O(n^{1/2}).
def foo(n):
    p2 = 1  # current power of 2
    p3 = 1  # current power of 3
    e3 = 0  # exponent of the current power of 3
    t = 1   # count of numbers less than or equal to the current power of 2
    while t < n:
        p2 *= 2
        if p3 * 3 < p2:
            p3 *= 3
            e3 += 1
        t += 1 + e3

    candidates = [p2]
    c = p2
    for i in range(e3):
        c //= 2
        c *= 3
        if c > p2: c //= 2
        candidates.append(c)

    return sorted(candidates)[n - (t - len(candidates))]
The same idea should work for three allowed factors, but the code gets more complex. The sum of the powers of the factorization drops to O(n^{1/3}), but you need to consider more candidates, O(n^{2/3}) to be more precise.
A lot of good answers here, but I was having trouble understanding those, specifically how any of these answers, including the accepted one, maintained the axiom 2 in Dijkstra's original paper:
Axiom 2. If x is in the sequence, so are 2 * x, 3 * x, and 5 * x.
After some whiteboarding, it became clear that axiom 2 is not an invariant at each iteration of the algorithm, but actually the goal of the algorithm itself. At each iteration, we try to restore the condition in axiom 2. If last is the last value in the result sequence S, axiom 2 can simply be rephrased as:
For some x in S, the next value in S is the minimum of 2x, 3x, and 5x that is greater than last. Let's call this axiom 2'.
Thus, if we can find x, we can compute the minimum of 2x, 3x, and 5x in constant time, and add it to S.
But how do we find x? One approach is, we don't; instead, whenever we add a new element e to S, we compute 2e, 3e, and 5e, and add them to a minimum priority queue. Since this operation guarantees e is in S, simply extracting the top element of the PQ satisfies axiom 2'.
This approach works, but the problem is that we generate a bunch of numbers we may not end up using. See the priority queue answer below for an example; if the user wants the 5th element in S (5), the PQ at that moment holds 6 6 8 9 10 10 12 15 15 20 25. Can we not waste this space?
Turns out, we can do better. Instead of storing all these numbers, we simply maintain three counters for each of the multiples, namely, 2i, 3j, and 5k. These are candidates for the next number in S. When we pick one of them, we increment only the corresponding counter, and not the other two. By doing so, we are not eagerly generating all the multiples, thus solving the space problem with the first approach.
Let's see a dry run for n = 8, i.e. the number 9. We start with 1, as stated by axiom 1 in Dijkstra's paper.
+---------+---+---+---+----+----+----+-------------------+
| # | i | j | k | 2i | 3j | 5k | S |
+---------+---+---+---+----+----+----+-------------------+
| initial | 1 | 1 | 1 | 2 | 3 | 5 | {1} |
+---------+---+---+---+----+----+----+-------------------+
| 1 | 1 | 1 | 1 | 2 | 3 | 5 | {1,2} |
+---------+---+---+---+----+----+----+-------------------+
| 2 | 2 | 1 | 1 | 4 | 3 | 5 | {1,2,3} |
+---------+---+---+---+----+----+----+-------------------+
| 3 | 2 | 2 | 1 | 4 | 6 | 5 | {1,2,3,4} |
+---------+---+---+---+----+----+----+-------------------+
| 4 | 3 | 2 | 1 | 6 | 6 | 5 | {1,2,3,4,5} |
+---------+---+---+---+----+----+----+-------------------+
| 5 | 3 | 2 | 2 | 6 | 6 | 10 | {1,2,3,4,5,6} |
+---------+---+---+---+----+----+----+-------------------+
| 6 | 4 | 2 | 2 | 8 | 6 | 10 | {1,2,3,4,5,6} |
+---------+---+---+---+----+----+----+-------------------+
| 7 | 4 | 3 | 2 | 8 | 9 | 10 | {1,2,3,4,5,6,8} |
+---------+---+---+---+----+----+----+-------------------+
| 8 | 5 | 3 | 2 | 10 | 9 | 10 | {1,2,3,4,5,6,8,9} |
+---------+---+---+---+----+----+----+-------------------+
Notice that S didn't grow at iteration 6, because the minimum candidate 6 had already been added previously. To avoid this problem of having to remember all of the previous elements, we amend our algorithm to increment all the counters whenever the corresponding multiples are equal to the minimum candidate. That brings us to the following Scala implementation.
import scala.annotation.tailrec

def hamming(n: Int): Seq[BigInt] = {
  @tailrec
  def next(x: Int, factor: Int, xs: IndexedSeq[BigInt]): Int = {
    val leq = factor * xs(x) <= xs.last
    if (leq) next(x + 1, factor, xs)
    else x
  }

  @tailrec
  def loop(i: Int, j: Int, k: Int, xs: IndexedSeq[BigInt]): IndexedSeq[BigInt] = {
    if (xs.size < n) {
      val a = next(i, 2, xs)
      val b = next(j, 3, xs)
      val c = next(k, 5, xs)
      val m = Seq(2 * xs(a), 3 * xs(b), 5 * xs(c)).min

      val x = a + (if (2 * xs(a) == m) 1 else 0)
      val y = b + (if (3 * xs(b) == m) 1 else 0)
      val z = c + (if (5 * xs(c) == m) 1 else 0)

      loop(x, y, z, xs :+ m)
    } else xs
  }

  loop(0, 0, 0, IndexedSeq(BigInt(1)))
}
Basically the search could be made O(n):

Consider that you keep a partial history of ugly numbers. Now, at each step you have to find the next one. It should be equal to a number from the history multiplied by 2, 3 or 5. Choose the smallest of them, add it to the history, and drop some numbers from it, so that the smallest number in the list multiplied by 5 would be larger than the largest.

It will be fast, because the search for the next number will be simple:

min(largest * 2, smallest * 5, one from the middle * 3),

taking only products larger than the largest number in the list. Since the uglies are sparse, the list will always contain few numbers, so the search for the number that has to be multiplied by 3 will be fast.
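A direct, unoptimized Python sketch of this idea (my reading of the description above, not the answerer's code):

def nth_ugly_window(n):
    history = [1]   # holds every ugly number in (last/5, last]
    for _ in range(n - 1):
        last = history[-1]
        nxt = min(m * h for m in (2, 3, 5) for h in history if m * h > last)
        history.append(nxt)
        # entries with 5*h <= nxt can never produce a future candidate
        history = [h for h in history if 5 * h > nxt]
    return history[-1]

The window stays small because, as noted, the uglies thin out quickly, so each step only scans a short list.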
Here is a correct solution in ML. The function ugly() will return a stream (lazy list) of Hamming numbers. The function nth can be used on this stream.
This uses the sieve method: the next elements are only calculated when needed.
datatype stream = Item of int * (unit->stream);

fun cons (x,xs) = Item(x, xs);
fun head (Item(i,xf)) = i;
fun tail (Item(i,xf)) = xf();

fun maps f xs = cons(f (head xs), fn()=> maps f (tail xs));

fun nth(s,1) = head(s)
  | nth(s,n) = nth(tail(s), n-1);

fun merge(xs,ys) =
    if (head xs = head ys) then
        cons(head xs, fn()=> merge(tail xs, tail ys))
    else if (head xs < head ys) then
        cons(head xs, fn()=> merge(tail xs, ys))
    else
        cons(head ys, fn()=> merge(xs, tail ys));

fun double n = n*2;
fun triple n = n*3;
fun quint n = n*5;

fun ij() =
    cons(1, fn()=> merge(maps double (ij()), maps triple (ij())));

fun ugly() =
    cons(1, fn()=> merge(tail (ij()), maps quint (ugly())));
This was first year CS work :-)
To find the n-th ugly number in O (n^(2/3)), jonderry's algorithm will work just fine. Note that the numbers involved are huge so any algorithm trying to check whether a number is ugly or not has no chance.
Finding all of the n smallest ugly numbers in ascending order is done easily by using a priority queue in O (n log n) time and O (n) space: Create a priority queue of numbers with the smallest numbers first, initially including just the number 1. Then repeat n times: Remove the smallest number x from the priority queue. If x hasn't been removed before, then x is the next larger ugly number, and we add 2x, 3x and 5x to the priority queue. (If anyone doesn't know the term priority queue, it's like the heap in the heapsort algorithm). Here's the start of the algorithm:
1 -> 2 3 5
1 2 -> 3 4 5 6 10
1 2 3 -> 4 5 6 6 9 10 15
1 2 3 4 -> 5 6 6 8 9 10 12 15 20
1 2 3 4 5 -> 6 6 8 9 10 10 12 15 15 20 25
1 2 3 4 5 6 -> 6 8 9 10 10 12 12 15 15 18 20 25 30
1 2 3 4 5 6 -> 8 9 10 10 12 12 15 15 18 20 25 30
1 2 3 4 5 6 8 -> 9 10 10 12 12 15 15 16 18 20 24 25 30 40
Proof of execution time: We extract an ugly number from the queue n times. We initially have one element in the queue, and after extracting an ugly number we add three elements, a net increase of 2. So after n ugly numbers are found we have at most 2n + 1 elements in the queue. Extracting an element can be done in logarithmic time. We extract more numbers than just the ugly numbers, but at most n ugly numbers plus 2n - 1 other numbers (those that could have been in the sieve after n - 1 steps). So the total time is less than 3n item removals in logarithmic time = O(n log n), and the total space is at most 2n + 1 elements = O(n).
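A compact Python sketch of this method, with heapq as the priority queue and duplicates skipped on extraction, as described:

import heapq

def nth_ugly_heap(n):
    heap = [1]
    last, found = None, 0
    while found < n:
        x = heapq.heappop(heap)
        if x == last:        # duplicate of an already-extracted ugly number
            continue
        last = x
        found += 1
        for m in (2, 3, 5):
            heapq.heappush(heap, m * x)
    return last

print(nth_ugly_heap(1000))   # 51200000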
I guess we can use Dynamic Programming (DP) to compute the nth ugly number. A complete explanation can be found at http://www.geeksforgeeks.org/ugly-numbers/
#include <iostream>
#define MAX 1000

using namespace std;

// Find the minimum among three numbers
long int min(long int x, long int y, long int z) {
    if (x <= y) {
        if (x <= z) {
            return x;
        } else {
            return z;
        }
    } else {
        if (y <= z) {
            return y;
        } else {
            return z;
        }
    }
}

// Actual method that computes all ugly numbers up to the required count
long int uglyNumber(int count) {
    long int arr[MAX], val;

    // index of last multiple of 2 --> i2
    // index of last multiple of 3 --> i3
    // index of last multiple of 5 --> i5
    int i2, i3, i5, lastIndex;

    arr[0] = 1;
    i2 = i3 = i5 = 0;
    lastIndex = 1;

    while (lastIndex <= count - 1) {
        val = min(2 * arr[i2], 3 * arr[i3], 5 * arr[i5]);
        arr[lastIndex] = val;
        lastIndex++;
        if (val == 2 * arr[i2]) {
            i2++;
        }
        if (val == 3 * arr[i3]) {
            i3++;
        }
        if (val == 5 * arr[i5]) {
            i5++;
        }
    }

    return arr[lastIndex - 1];
}

// Starting point of the program
int main() {
    long int num;
    int count;
    cout << "Which Ugly Number : ";
    cin >> count;
    num = uglyNumber(count);
    cout << endl << num;
    return 0;
}
We can see that it's quite fast; just change the value of MAX to compute a higher ugly number.
Using 3 generators in parallel and selecting the smallest at each iteration, here is a C program to compute all ugly numbers below 2^128 in less than 1 second:
#include <limits.h>
#include <stdio.h>

#if 0
typedef unsigned long long ugly_t;
#define UGLY_MAX (~(ugly_t)0)
#else
typedef __uint128_t ugly_t;
#define UGLY_MAX (~(ugly_t)0)
#endif

int print_ugly(int i, ugly_t u) {
    char buf[64], *p = buf + sizeof(buf);

    *--p = '\0';
    do { *--p = '0' + u % 10; } while ((u /= 10) != 0);
    return printf("%d: %s\n", i, p);
}

int main() {
    int i = 0, n2 = 0, n3 = 0, n5 = 0;
    ugly_t u, ug2 = 1, ug3 = 1, ug5 = 1;
#define UGLY_COUNT 110000
    ugly_t ugly[UGLY_COUNT];

    while (i < UGLY_COUNT) {
        u = ug2;
        if (u > ug3) u = ug3;
        if (u > ug5) u = ug5;
        if (u == UGLY_MAX)
            break;
        ugly[i++] = u;
        print_ugly(i, u);
        if (u == ug2) {
            if (ugly[n2] <= UGLY_MAX / 2)
                ug2 = 2 * ugly[n2++];
            else
                ug2 = UGLY_MAX;
        }
        if (u == ug3) {
            if (ugly[n3] <= UGLY_MAX / 3)
                ug3 = 3 * ugly[n3++];
            else
                ug3 = UGLY_MAX;
        }
        if (u == ug5) {
            if (ugly[n5] <= UGLY_MAX / 5)
                ug5 = 5 * ugly[n5++];
            else
                ug5 = UGLY_MAX;
        }
    }
    return 0;
}
Here are the last 10 lines of output:
100517: 338915443777200000000000000000000000000
100518: 339129266201729628114355465608000000000
100519: 339186548067800934969350553600000000000
100520: 339298130282929870605468750000000000000
100521: 339467078447341918945312500000000000000
100522: 339569540691046437734055936000000000000
100523: 339738624000000000000000000000000000000
100524: 339952965770562084651663360000000000000
100525: 340010386766614455386112000000000000000
100526: 340122240000000000000000000000000000000
Here is a version in JavaScript usable with QuickJS:
import * as std from "std";

function main() {
    var i = 0, n2 = 0, n3 = 0, n5 = 0;
    var u, ug2 = 1n, ug3 = 1n, ug5 = 1n;
    var ugly = [];

    for (;;) {
        u = ug2;
        if (u > ug3) u = ug3;
        if (u > ug5) u = ug5;
        ugly[i++] = u;
        std.printf("%d: %s\n", i, String(u));
        if (u >= 0x100000000000000000000000000000000n)
            break;
        if (u == ug2)
            ug2 = 2n * ugly[n2++];
        if (u == ug3)
            ug3 = 3n * ugly[n3++];
        if (u == ug5)
            ug5 = 5n * ugly[n5++];
    }
    return 0;
}

main();
Here is my code. The idea is to divide the number by 2 (while it gives remainder 0), then by 3 and 5 in the same way. If the number finally becomes 1, it's an ugly number.
You can count and even print all ugly numbers up to n.
int count = 0;
for (int i = 2; i <= n; i++) {
    int temp = i;
    while (temp % 2 == 0) temp /= 2;
    while (temp % 3 == 0) temp /= 3;
    while (temp % 5 == 0) temp /= 5;
    if (temp == 1) {
        cout << i << endl;
        count++;
    }
}
This problem can be done in O(1) if "ugly" is read as "divisible by 2, 3, or 5". (Note this is a different set from the one defined in the question, whose members have no prime factors other than 2, 3, and 5; only the divisible-by reading is periodic in the way used below.)

If we remove 1 and look at the numbers between 2 and 30, we will notice that there are 22 such numbers.

Now, for any number x among the 22 numbers above, there will be a number x + 30 between 31 and 60 that is also ugly. Thus, we can find at least 22 numbers between 31 and 60. Conversely, every ugly number between 31 and 60 can be written as s + 30, and s is ugly too, since divisibility by 2, 3, or 5 is unchanged by adding 30. Thus, there will be exactly 22 numbers between 31 and 60. This logic can be repeated for every block of 30 numbers after that.

Thus, there will be 23 numbers in the first 30 numbers, and 22 for every 30 after that. That is, the first 23 uglies occur between 1 and 30, 45 uglies occur between 1 and 60, 67 uglies occur between 1 and 90, and so on.
Now, if I am given n, say 137, I can see that 137/22 = 6.22. The answer will lie between 6*30 and 7*30, that is between 180 and 210. By 180, I will have the 6*22 + 1 = 133rd ugly number. I will have the 154th ugly number at 210. So I am looking for the 4th ugly number (since 137 = 133 + 4) in the interval [2, 30], which is 5. The 137th ugly number is then 180 + 5 = 185.
Another example: if I want the 1500th ugly number, I count 1500/22 = 68 blocks. Thus, I will have 22*68 + 1 = 1497th ugly at 30*68 = 2040. The next three uglies in the [2, 30] block are 2, 3, and 4. So our required ugly is at 2040 + 4 = 2044.
The point is that I can simply build a list of the ugly numbers between [2, 30] and then find the answer by doing lookups in O(1).
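Under that divisible-by-2-3-or-5 reading (and counting 1 as the first term, as the worked examples above do), the lookup fits in a few lines of Python; this sketch reproduces both examples:

# the 22 numbers in [2, 30] divisible by 2, 3 or 5
BLOCK = [x for x in range(2, 31) if x % 2 == 0 or x % 3 == 0 or x % 5 == 0]

def nth_ugly_periodic(n):
    if n == 1:
        return 1                     # 1 is counted as the first term
    blocks, rem = divmod(n - 2, 22)  # skip the 1, then 0-index into BLOCK
    return 30 * blocks + BLOCK[rem]

print(nth_ugly_periodic(137))   # 185
print(nth_ugly_periodic(1500))  # 2044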
Here is another O(n) approach (Python solution) based on the idea of merging three sorted lists. The challenge is to find the next ugly number in increasing order. For example, we know the first seven ugly numbers are [1,2,3,4,5,6,8]. The ugly numbers are actually from the following three lists:
list 1: 1*2, 2*2, 3*2, 4*2, 5*2, 6*2, 8*2 ... ( multiply each ugly number by 2 )
list 2: 1*3, 2*3, 3*3, 4*3, 5*3, 6*3, 8*3 ... ( multiply each ugly number by 3 )
list 3: 1*5, 2*5, 3*5, 4*5, 5*5, 6*5, 8*5 ... ( multiply each ugly number by 5 )
So the nth ugly number is the nth number of the list merged from the three lists above:
1, 1*2, 1*3, 2*2, 1*5, 2*3 ...
def nthuglynumber(n):
    p2, p3, p5 = 0, 0, 0
    uglynumber = [1]
    while len(uglynumber) < n:
        ugly2, ugly3, ugly5 = uglynumber[p2]*2, uglynumber[p3]*3, uglynumber[p5]*5
        next = min(ugly2, ugly3, ugly5)
        if next == ugly2: p2 += 1    # multiply each number
        if next == ugly3: p3 += 1    # only once by each
        if next == ugly5: p5 += 1    # of the three factors
        uglynumber += [next]
    return uglynumber[-1]
STEP I: computing three next possible ugly numbers from the three lists
ugly2, ugly3, ugly5 = uglynumber[p2]*2, uglynumber[p3]*3, uglynumber[p5]*5
STEP II: find the next ugly number as the smallest of the three above:
next = min(ugly2, ugly3, ugly5)
STEP III: moving the pointer forward if its ugly number was the next ugly number
if next == ugly2: p2+=1
if next == ugly3: p3+=1
if next == ugly5: p5+=1
Note: three separate ifs are used rather than if/elif/else, so that when two candidates tie, every matching pointer advances and duplicates are skipped.
STEP IV: adding the next ugly number into the merged list uglynumber
uglynumber += [next]
Related
Suppose I have a matrix A that is symmetric, that is A(i,j) = A(j,i).
The value of A(i,j) can be either i or j.
How can I fill in matrix A so that the number of occurrences of each value is as close (as balanced) as possible? Is there any algorithm that can handle this?
Example A:
A = 1 1 1 1
1 2 2 2
1 2 3 3
1 2 3 4
Value 1 appears 7 times.
Value 2 appears 5 times.
Value 3 appears 3 times.
Value 4 appears 1 time.
Example B:
A = 1 2 1 1
2 2 3 2
1 3 3 4
1 2 4 4
Value 1 appears 5 times.
Value 2 appears 4 times.
Value 3 appears 3 times.
Value 4 appears 3 times.
In example B the counts are (5, 4, 3, 3), which are more balanced than example A's (7, 5, 3, 1).
I am looking for a solution for an n×n matrix.
Extension
If the matrix is sparse, that is, some elements cannot be filled in, which algorithm can handle this problem?
Thanks for your time.
Found one solution, but without a real algorithm...
1 2 3 1 1
2 2 3 4 2
3 3 3 4 5
1 4 4 4 5
1 2 5 5 5
Basically: 25/5 = 5, so I looked for how to fill the grid with 5 of each value 1-5:
for the 5s, a reversed L from the corner,
then up and left one spot for the 4s,
and likewise for the 3s.
Got "creative" for the 2s and 1s...
I guess it's kind of an algorithm...
Here is a solution written in Python based on weighted bipartite matching (or the isomorphic minimum cost flow problem).
#!/usr/bin/python
"""
filename: mcf_matrix_assign.py
purpose: demonstrate the use of weighted bipartite matching (isomorphic to MCF
         with a suitable transform) to solve a matrix assignment problem with
         certain conditions and optimization goals.
"""
import networkx as nx

N = 5
K = N  # ensure K is large enough to satisfy flow, N <= K <= N*N
       # setting K larger simply means a longer runtime

G = nx.DiGraph()
total_demand = 0

for i in range(N*N):
    # assert a row-major linear indexing of the matrix
    row, col = i / N, i % N
    if row >= col:
        continue  # symmetry fixes certain values
    total_demand += 1
    G.add_node('s'+str(i), demand=-1)
    G.add_edge('s'+str(i), 'v'+str(row), weight=0, capacity=1)
    G.add_edge('s'+str(i), 'v'+str(col), weight=0, capacity=1)

G.add_node('sink', demand=total_demand)

# attach each 'value' to the sink with incrementally larger weight
for i in range(N):
    for j in range(K):
        dummy_node = 'v'+str(i)+'w'+str(j)
        G.add_edge('v'+str(i), dummy_node, weight=j, capacity=1)
        G.add_edge(dummy_node, 'sink', weight=0, capacity=1)

flow_dict = nx.min_cost_flow(G)

# decode the solution to get the matrix assignment reported by the MCF (or
# equivalently weighted bipartite matching)
solution = [-1 for i in range(N*N)]
for i in range(N*N):
    # assert a row-major linear indexing of the matrix
    row, col = i / N, i % N
    if row == col:
        solution[i] = row
        continue  # symmetry fixes certain values
    if row > col:
        solution[i] = solution[col*N+row]
        continue  # symmetry fixes certain values
    adjacency = flow_dict['s'+str(i)]
    solution[i] = row if adjacency['v'+str(row)] == 1 else col

# print the solution
for row in range(N):
    print ''.join(['-' for _ in range(4*N+1)])
    print '|',
    for col in range(N):
        print str(solution[row*N+col]+1) + ' |',
    print '\n',
print ''.join(['-' for _ in range(4*N+1)])

print 'Histogram summary:'
counts = [(i+1, sum([0 if s != i else 1 for s in solution])) for i in range(N)]
for value, count in counts:
    print '  Value ', value, " appears ", count, " times."
This produces the solution:
---------------------
| 1 | 1 | 3 | 1 | 5 |
---------------------
| 1 | 2 | 2 | 4 | 2 |
---------------------
| 3 | 2 | 3 | 4 | 3 |
---------------------
| 1 | 4 | 4 | 4 | 5 |
---------------------
| 5 | 2 | 3 | 5 | 5 |
---------------------
Histogram summary:
Value 1 appears 5 times.
Value 2 appears 5 times.
Value 3 appears 5 times.
Value 4 appears 5 times.
Value 5 appears 5 times.
And here is the solution when N=4 in the script.
-----------------
| 1 | 2 | 1 | 4 |
-----------------
| 2 | 2 | 3 | 4 |
-----------------
| 1 | 3 | 3 | 3 |
-----------------
| 4 | 4 | 3 | 4 |
-----------------
Histogram summary:
Value 1 appears 3 times.
Value 2 appears 3 times.
Value 3 appears 5 times.
Value 4 appears 5 times.
It's fairly easy to prove that this will always find an optimal answer in polynomial time.
Explanation
It is probably easiest to explain what is happening by describing the graph construction for a small case. For this discussion, fix N=3.
In this case we have a matrix assignment with variables
X s0 s1
X X s2
X X X
where X denotes a fixed value and sk denotes the kth slot in the array to fill.
In this case we also have 3 available value assignments [1,2,3] for each of the slots sk. (This is where it is easy to make modifications to the "allowed" values for any sk.)
If we construct a bipartite graph between the slots sk and the value assignments v1,v2,v3 in a way that edges of capacity 1 and weight zero are used to connect sk to each legal vi assignment, we can then solve it easily using MCF.
For illustration, in the graph for N=3, each slot node sk is connected by such edges to its two legal value nodes.
Once the minimum cost flow is computed, we can decode the assignment by checking which edges carry flow in the solution.
A note on performance
networkx was used here in Python purely out of convenience; it is by no means efficient in any sense of the word. The quality of implementation of the MCF algorithm in networkx is quite low and I would not recommend trying to scale it up.
For serious application, I would instead recommend the lemon MCF library (in particular, the cost-scaling algorithm is competitive), or Andrew Goldberg's implementation of cost-scaling (which is hard to find but exists), which is probably quite efficient as well.
There is a special pattern to follow in order to get the best possible result. For each column of row 1, start filling the matrix diagonally with the values 1, 2, ..., n, setting the corresponding symmetric slot as well. At the end, you will have the best possible result.
#include <iostream>
using namespace std;

int main() {
    int n = 4;  // size of the matrix

    int values[n];
    for (int i = 0; i < n; i++) values[i] = 0;

    int matrix[n][n];
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            matrix[i][j] = -1;

    for (int c = 0; c < n; c++) {
        int i = 0, j = c;
        for (int x = 0; x < n; x++) {
            if (matrix[i][j] != -1) {
                break;
            }
            matrix[i][j] = matrix[j][i] = x;
            i = (i + 1) % n;
            j = (j + 1) % n;
        }
    }

    for (int i = 0; i < n; i++) {
        for (int j = 0; j < n; j++) {
            cout << matrix[i][j] + 1 << " ";
            values[matrix[i][j]]++;
        }
        cout << endl;
    }
    cout << endl;

    for (int i = 0; i < n; i++) {
        cout << (i + 1) << " appears " << values[i] << " times" << endl;
    }
    return 0;
}
OUTPUT
1 1 1 4
1 2 2 2
1 2 3 3
4 2 3 4
1 appears 5 times
2 appears 5 times
3 appears 3 times
4 appears 3 times
The complexity is O(n²), since you have to fill the whole matrix.
When n is odd, the solution is always n occurrences for each number, but when n is even, this is impossible.
Given an integer N, how to efficiently find the count of numbers which are divisible by 7 (their reverse should also be divisible by 7) in the range:
[0, 10^N - 1]
Example:
For N=2, answer:
4 {0, 7, 70, 77}
[All numbers from 0 to 99 which are divisible by 7 (also their reverse is divisible)]
My approach, simple brute-force:
initialize count to zero
run a loop from i=0 till end
if a(i) % 7 == 0 && reverse(a(i)) % 7 == 0, then we increase the count
Note:
reverse(123) = 321, reverse(1200) = 21, for example!
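For small N, the brute force is a few lines of Python (a sketch of the approach described above):

def count_brute(N):
    def rev(x):
        return int(str(x)[::-1])   # reverse(1200) == 21, as in the note
    return sum(1 for i in range(10 ** N)
               if i % 7 == 0 and rev(i) % 7 == 0)

print(count_brute(2))   # 4, i.e. {0, 7, 70, 77}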
Let's see what happens mod 7 when we add a digit, d, to a prefix, abc.

The new number is 10 * abc + d, and

(10 * abc + d) mod 7 = ((10 mod 7) * (abc mod 7) + d mod 7) mod 7

For the reversed number, the new digit is worth d * 10^length(abc), so

(reverse(abc) + d * 10^length(abc)) mod 7 = (reverse(abc) mod 7 + (d mod 7) * (10^length(abc) mod 7)) mod 7

The key point is that we only need the count of prefixes for each pair of remainders mod 7, not the actual prefixes.
Let COUNTS(n,f,r) be the number of n-digit numbers x such that x % 7 = f and REVERSE(x) % 7 = r.
The counts are easy to calculate for n=1:
COUNTS(1,f,r) = 0 when f != r, since a 1-digit number is the same as its reverse.
COUNTS(1,x,x) = 1 when x >= 3, and
COUNTS(1,x,x) = 2 when x < 3, since 7%7=0, 8%7=1, and 9%7=2 (so the digits 7, 8, 9 give remainders 0, 1, 2 a second time)
The counts for other lengths can be figured out by calculating what happens when you add each digit from 0 to 9 to the numbers characterized by the previous counts.
At the end, COUNTS(N,0,0) is the answer you are looking for.
In Python, for example, it looks like this:

def getModCounts(len):
    counts = [[0]*7 for i in range(0,7)]
    if len < 1:
        return counts
    if len < 2:
        counts[0][0] = counts[1][1] = counts[2][2] = 2
        counts[3][3] = counts[4][4] = counts[5][5] = counts[6][6] = 1
        return counts

    prevCounts = getModCounts(len-1)
    rplace = (10**(len-1)) % 7
    for f in range(0,7):
        for r in range(0,7):
            c = prevCounts[f][r]
            for newdigit in range(0,10):
                newf = (f*10 + newdigit) % 7
                newr = (r + newdigit*rplace) % 7
                counts[newf][newr] += c
    return counts

def numFwdAndRevDivisible(len):
    return getModCounts(len)[0][0]

# TEST
for i in range(0,20):
    print("{0} -> {1}".format(i, numFwdAndRevDivisible(i)))
See if it gives the answers you're expecting. If not, maybe there's a bug I need to fix:
0 -> 0
1 -> 2
2 -> 4
3 -> 22
4 -> 206
5 -> 2113
6 -> 20728
7 -> 205438
8 -> 2043640
9 -> 20411101
10 -> 204084732
11 -> 2040990205
12 -> 20408959192
13 -> 204085028987
14 -> 2040823461232
15 -> 20408170697950
16 -> 204081640379568
17 -> 2040816769367351
18 -> 20408165293673530
19 -> 204081641308734748
This is a pretty good answer when counting up to N is reasonable -- way better than brute force, which counts up to 10^N.
For very long lengths like N=10^18 (you would probably be asked for the count mod 1000000007 or something), there is a next-level answer.
Note that there is a linear relationship between the counts for length n and the counts for length n+1, and that this relationship can be represented by a 49x49 matrix. You can exponentiate this matrix to the Nth power using exponentiation by squaring in O(log N) matrix multiplications, and then just multiply by the single digit counts to get the length N counts. (One wrinkle: the per-step map depends on 10^n mod 7, which cycles with period 6, so in practice you exponentiate the product of the six per-step matrices.)
There is a recursive solution using the digit DP technique that works for any number of digits.

// Assumes the usual digit-DP globals: len is the number of digits,
// dp[pos][Mod][revMod] is a memo table initialized to -1, and
// base[pos] = 10^pos % 7 (the place value of position pos in the reverse).
long long call(int pos, int Mod, int revMod) {
    if (pos == len) {
        if (!Mod && !revMod) return 1;
        return 0;
    }
    if (dp[pos][Mod][revMod] != -1) return dp[pos][Mod][revMod];
    long long res = 0;
    for (int i = 0; i <= 9; i++) {
        int revValue = (base[pos]*i + revMod) % 7;
        int curValue = (Mod*10 + i) % 7;
        res += call(pos+1, curValue, revValue);
    }
    return dp[pos][Mod][revMod] = res;
}
I was wondering if there is an algorithm that checks whether a given number is factorable into a set of prime numbers, and if not, finds the nearest number that is.
The problem always comes up when I use the FFT.
Thanks a lot for your help, guys.
In general this looks like a hard problem, particularly finding the next largest integer that factors into your set of primes. However, if your set of primes isn't too big, one approach would be to turn this into an integer optimization problem by taking the logs. Here is how to find the smallest number > n that factors into a set of primes p_1...p_k
choose integers x_1, ..., x_k to minimize
    x_1 log p_1 + ... + x_k log p_k - log n
subject to:
    x_1 log p_1 + ... + x_k log p_k >= log n
    x_i >= 0 for all i
The x_i will give you the exponents for the primes. Here is an implementation in R using lpSolve:
library(lpSolve)

minfact <- function(x, p) {
    sol <- lp("min", log(p), t(log(p)), ">=", log(x), all.int=T)
    prod(p^sol$solution)
}
> p<-c(2,3,13,31)
> x<-124363183
> y<-minfact(x,p)
> y
[1] 124730112
> factorize(y)
Big Integer ('bigz') object of length 13:
[1] 2 2 2 2 2 2 2 2 3 13 13 31 31
> y-x
[1] 366929
>
Using big integers, this works pretty well even for large numbers:
> p<-c(2,3,13,31,53,79)
> x<-as.bigz("1243631831278461278641361")
> y<-minfact(x,p)
> y
Big Integer ('bigz') :
[1] 1243634072805560436129792
> factorize(y)
Big Integer ('bigz') object of length 45:
[1]  2  2  2  2  2  2  2  2  2  2  2  2  2  2  2  2  2  2  2  2  2  2  2  2  2
[26] 2  2  2  2  2  2  2  2  3  3  3  3  13 31 31 31 31 53 53 53
>
Your question is about the well-known integer factorization problem, which cannot currently be solved in polynomial time. Lenstra's elliptic curve algorithm is the most efficient known method for the general case, but it requires a strong grounding in number theory, and it is still sub-exponential (not polynomial).
Other algorithms are listed on the page linked in my post, but direct trial (brute force) is much slower, of course.
Please note that by "cannot be solved in polynomial time" I mean that no such method is known today, not that such a method provably does not exist (at least for now, number theory cannot provide such a solution for this problem).
Here is a brute force method in C++. It returns the factorization of the nearest factorable number. If N has two equidistant factorable neighbours, it returns the smallest one.
GCC 4.7.3: g++ -Wall -Wextra -std=c++0x factorable-neighbour.cpp
#include <iostream>
#include <vector>

using ints = std::vector<int>;

ints factor(int n, const ints& primes) {
    ints f(primes.size(), 0);
    for (std::size_t i = 0; i < primes.size(); ++i) {
        while (0 < n && !(n % primes[i])) {
            n /= primes[i];
            ++f[i];
        }
    }
    // append the "remainder"
    f.push_back(n);
    return f;
}

ints closest_factorable(int n, const ints& primes) {
    int d = 0;
    ints r;
    while (true) {
        r = factor(n + d, primes);
        if (r[r.size() - 1] == 1) { break; }
        ++d;
        r = factor(n - d, primes);
        if (r[r.size() - 1] == 1) { break; }
    }
    r.pop_back();
    return r;
}

int main() {
    for (int i = 0; i < 30; ++i) {
        for (const auto& f : closest_factorable(i, {2, 3, 5, 7, 11})) {
            std::cout << f << " ";
        }
        std::cout << "\n";
    }
}
I suppose that you have a (small) set of prime numbers S and an integer n, and you want to know whether n factors only using the numbers in S. The easiest way seems to be the following:
P <- product of the s in S
while P != 1 do
    P <- GCD(P, n)
    n <- n / P
return n == 1
You compute the GCD using Euclid's algorithm.
The idea is the following. Suppose that S = {p1, p2, ..., pk}. You can write n uniquely as

n = p1^n1 * p2^n2 * ... * pk^nk * R

where R is coprime to the pi. You want to know whether R = 1.
Then

GCD(n, P) = prod(pi such that ni <> 0).

Therefore dividing n by GCD(n, P) decreases each of those nonzero ni by 1, so that they eventually become 0. At the end only R remains.
For example: S = {2,3,5}, n = 5600 = 2^5 * 5^2 * 7. Then P = 2*3*5 = 30. One gets GCD(n, P) = 10 = 2*5, and therefore n/GCD(n, P) = 560 = 2^4 * 5 * 7.
You are now back to the same problem: you want to know whether 560 can be factored using S = {2,5}, hence the loop. So the next steps are
GCD(560, 10) = 10. 560/10 = 56 = 2^3 * 7.
GCD(56, 10) = 2. 56/2 = 28 = 2^2 * 7.
GCD(28, 2) = 2. 28/2 = 14 = 2 * 7.
GCD(14, 2) = 2. 14/2 = 7.
GCD(7, 2) = 1, so that R = 7. Your answer is FALSE.
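A short Python rendering of the loop (math.gcd is Euclid's algorithm); it reproduces the walkthrough above:

from math import gcd
from functools import reduce

def factors_only_over(n, S):
    P = reduce(lambda a, b: a * b, S, 1)   # product of the primes in S
    while P != 1:
        P = gcd(P, n)
        n //= P
    return n == 1

print(factors_only_over(5600, [2, 3, 5]))   # False: the stray factor 7 remains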
kissfft has a function
int kiss_fft_next_fast_size(int n)
that returns the next largest N that is an aggregate of 2,3,5.
Also related is a kf_factor function that factorizes a number n, pulling out the "nice" FFT primes first (e.g. 4's are pulled out before 2's)
You are given N blocks of height 1…N. In how many ways can you arrange these blocks in a row such that when viewed from left you see only L blocks (rest are hidden by taller blocks) and when seen from right you see only R blocks? Example given N=3, L=2, R=1 there is only one arrangement {2, 1, 3} while for N=3, L=2, R=2 there are two ways {1, 3, 2} and {2, 3, 1}.
How should we solve this problem programmatically? Are there any efficient approaches?
This is a counting problem, not a construction problem, so we can approach it using recursion. Since the problem has two natural parts, looking from the left and looking from the right, break it up and solve for just one part first.
Let b(N, L, R) be the number of solutions, and let f(N, L) be the number of arrangements of N blocks so that L are visible from the left. First think about f because it's easier.
APPROACH 1
Let's get the initial conditions and then go for recursion. If all are to be visible, then they must be ordered increasingly, so
f(N, N) = 1
If there are supposed to be more visible blocks than available blocks, then there is nothing we can do, so
f(N, M) = 0 if N < M
If only one block should be visible, then put the largest first and then the others can follow in any order, so
f(N,1) = (N-1)!
Finally, for the recursion, think about the position of the tallest block: say N is in the kth spot from the left. Then choose the blocks to come before it in (N-1 choose k-1) ways, arrange those blocks so that exactly L-1 are visible from the left, and order the N-k blocks behind N any way you like, giving:
f(N, L) = sum_{1<=k<=N} (N-1 choose k-1) * f(k-1, L-1) * (N-k)!
In fact, since f(x-1,L-1) = 0 for x<L, we may as well start k at L instead of 1:
f(N, L) = sum_{L<=k<=N} (N-1 choose k-1) * f(k-1, L-1) * (N-k)!
Right, so now that the easier bit is understood, let's use f to solve for the harder bit b. Again, use recursion based on the position of the tallest block, again say N is in position k from the left. As before, choose the blocks before it in N-1 choose k-1 ways, but now think about each side of that block separately. For the k-1 blocks left of N, make sure that exactly L-1 of them are visible. For the N-k blocks right of N, make sure that R-1 are visible and then reverse the order you would get from f. Therefore the answer is:
b(N,L,R) = sum_{1<=k<=N} (N-1 choose k-1) * f(k-1, L-1) * f(N-k, R-1)
where f is completely worked out above. Again, many terms will be zero, so we only want to take k such that k-1 >= L-1 and N-k >= R-1 to get
b(N,L,R) = sum_{L <= k <= N-R+1} (N-1 choose k-1) * f(k-1, L-1) * f(N-k, R-1)
APPROACH 2
I thought about this problem again and found a somewhat nicer approach that avoids the summation.
If you work the problem the opposite way, that is think of adding the smallest block instead of the largest block, then the recurrence for f becomes much simpler. In this case, with the same initial conditions, the recurrence is
f(N,L) = f(N-1,L-1) + (N-1) * f(N-1,L)
where the first term, f(N-1,L-1), comes from placing the smallest block in the leftmost position, thereby adding one more visible block (hence L decreases to L-1), and the second term, (N-1) * f(N-1,L), accounts for putting the smallest block in any of the N-1 non-front positions, in which case it is not visible (hence L stays fixed).
This recursion has the advantage of always decreasing N, though it makes it more difficult to see some formulas, for example f(N,N-1) = (N choose 2). This formula is fairly easy to show from the previous formula, though I'm not certain how to derive it nicely from this simpler recurrence.
Now, to get back to the original problem and solve for b, we can take a different approach. Instead of the summation before, think of the visible blocks as coming in packets, so that if a block is visible from the left, then its packet consists of all blocks right of it and in front of the next block visible from the left, and similarly if a block is visible from the right then its packet contains all blocks left of it until the next block visible from the right. Do this for all but the tallest block, which gives L-1 + R-1 = L+R-2 packets covering the other N-1 blocks. Given the packets, you can move one from the left side to the right side simply by reversing the order of its blocks. Therefore the general case reduces to arranging the N-1 non-tallest blocks into L+R-2 packets, which is f(N-1, L+R-2) ways, and then choosing which of the packets to put on the left and which on the right. Therefore we have
b(N,L,R) = (L+R-2 choose L-1) * f(N-1, L+R-2)
Again, this reformulation has some advantages over the previous version. Putting these latter two formulas together, it's much easier to see the complexity of the overall problem. However, I still prefer the first approach for constructing solutions, though perhaps others will disagree. All in all it just goes to show there's more than one good way to approach the problem.
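A memoized Python sketch of Approach 2, using the closed form above (it reproduces the question's examples b(3,2,1) = 1 and b(3,2,2) = 2):

from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def f(n, l):
    # arrangements of n blocks with exactly l visible from the left
    if l < 0 or l > n:
        return 0
    if n == 0:
        return 1          # the empty arrangement, zero visible
    return f(n - 1, l - 1) + (n - 1) * f(n - 1, l)

def b(n, l, r):
    return comb(l + r - 2, l - 1) * f(n - 1, l + r - 2)

print(b(3, 2, 1), b(3, 2, 2))   # 1 2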
What's with the Stirling numbers?
As Jason points out, the f(N,L) numbers are precisely the (unsigned) Stirling numbers of the first kind. One can see this immediately from the recursive formulas for each. However, it's always nice to be able to see it directly, so here goes.
The (unsigned) Stirling numbers of the first kind, denoted S(N,L), count the number of permutations of N elements with L cycles. Given a permutation written in cycle notation, we write the permutation in canonical form by beginning each cycle with the largest number in that cycle and then ordering the cycles increasingly by the first number of the cycle. For example, the permutation
(2 6) (5 1 4) (3 7)
would be written in canonical form as
(5 1 4) (6 2) (7 3)
Now drop the parentheses and notice that if these are the heights of the blocks, then the number of visible blocks from the left is exactly the number of cycles! This is because the first number of each cycle blocks all other numbers in the cycle, and the first number of each successive cycle is visible behind the previous cycle. Hence this problem is really just a sneaky way to ask you to find a formula for Stirling numbers.
well, just as an empirical solution for small N:
blocks.py:
import itertools
from collections import defaultdict

def countPermutation(p):
    n = 0
    max = 0
    for block in p:
        if block > max:
            n += 1
            max = block
    return n

def countBlocks(n):
    count = defaultdict(int)
    for p in itertools.permutations(range(1,n+1)):
        fwd = countPermutation(p)
        rev = countPermutation(reversed(p))
        count[(fwd,rev)] += 1
    return count

def printCount(count, n, places):
    for i in range(1,n+1):
        for j in range(1,n+1):
            c = count[(i,j)]
            if c > 0:
                print "%*d" % (places, count[(i,j)]),
            else:
                print " " * places,
        print

def countAndPrint(nmax, places=6):   # default column width for the sample call
    for n in range(1,nmax+1):
        printCount(countBlocks(n), n, places)
        print
and sample output:
blocks.countAndPrint(10)
     1

            1
     1

            1      1
     1      2
     1

            2      3      1
     2      6      3
     3      3
     1

            6     11      6      1
     6     22     18      4
    11     18      6
     6      4
     1

           24     50     35     10      1
    24    100    105     40      5
    50    105     60     10
    35     40     10
    10      5
     1

          120    274    225     85     15      1
   120    548    675    340     75      6
   274    675    510    150     15
   225    340    150     20
    85     75     15
    15      6
     1

          720   1764   1624    735    175     21      1
   720   3528   4872   2940    875    126      7
  1764   4872   4410   1750    315     21
  1624   2940   1750    420     35
   735    875    315     35
   175    126     21
    21      7
     1

         5040  13068  13132   6769   1960    322     28      1
  5040  26136  39396  27076   9800   1932    196      8
 13068  39396  40614  19600   4830    588     28
 13132  27076  19600   6440    980     56
  6769   9800   4830    980     70
  1960   1932    588     56
   322    196     28
    28      8
     1

        40320 109584 118124  67284  22449   4536    546     36      1
 40320 219168 354372 269136 112245  27216   3822    288      9
109584 354372 403704 224490  68040  11466   1008     36
118124 269136 224490  90720  19110   2016     84
 67284 112245  68040  19110   2520    126
 22449  27216  11466   2016    126
  4536   3822   1008     84
   546    288     36
    36      9
     1
You'll note a few obvious (well, mostly obvious) things from the problem statement:
the total # of permutations is always N!
with the exception of N=1, there is no solution for L,R = (1,1): if a count in one direction is 1, then it implies the tallest block is on that end of the stack, so the count in the other direction has to be >= 2
the situation is symmetric (reverse each permutation and you reverse the L,R count)
if p is a permutation of N-1 blocks and has count (Lp,Rp), then the N arrangements formed by inserting block N in each possible spot can have a count ranging from L = 1 to Lp+1, and R = 1 to Rp+1.
From the empirical output:
the leftmost column or topmost row (where L = 1 or R = 1) with N blocks is the sum of the rows/columns with N-1 blocks: i.e. in @PengOne's notation,
b(N,1,R) = sum(b(N-1,k,R-1) for k = 1 to N-R+1)
Each diagonal is a row of Pascal's triangle, times a constant factor K for that diagonal -- I can't prove this, but I'm sure someone can -- i.e.:
b(N,L,R) = K * (L+R-2 choose L-1) where K = b(N,1,L+R-1)
So the computational complexity of computing b(N,L,R) is the same as the computational complexity of computing b(N,1,L+R-1) which is the first column (or row) in each triangle.
This observation is probably 95% of the way towards an explicit solution (the other 5% I'm sure involves standard combinatoric identities, I'm not too familiar with those).
A quick check with the Online Encyclopedia of Integer Sequences shows that b(N,1,R) appears to be OEIS sequence A094638:
A094638 Triangle read by rows: T(n,k) =|s(n,n+1-k)|, where s(n,k) are the signed Stirling numbers of the first kind (1<=k<=n; in other words, the unsigned Stirling numbers of the first kind in reverse order).
1, 1, 1, 1, 3, 2, 1, 6, 11, 6, 1, 10, 35, 50, 24, 1, 15, 85, 225, 274, 120, 1, 21, 175, 735, 1624, 1764, 720, 1, 28, 322, 1960, 6769, 13132, 13068, 5040, 1, 36, 546, 4536, 22449, 67284, 118124, 109584, 40320, 1, 45, 870, 9450, 63273, 269325, 723680, 1172700
As far as how to efficiently compute the Stirling numbers of the first kind, I'm not sure; Wikipedia gives an explicit formula but it looks like a nasty sum. This question (computing Stirling #s of the first kind) shows up on MathOverflow and it looks like O(n^2), as PengOne hypothesizes.
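For what it's worth, the recurrence from Approach 2 above, c(n,k) = c(n-1,k-1) + (n-1)*c(n-1,k), gives a standard O(n²) table build; a Python sketch:

def stirling1_unsigned(n):
    # c[i][k] = number of permutations of i elements with k cycles
    c = [[0] * (n + 1) for _ in range(n + 1)]
    c[0][0] = 1
    for i in range(1, n + 1):
        for k in range(1, i + 1):
            c[i][k] = c[i - 1][k - 1] + (i - 1) * c[i - 1][k]
    return c[n][1:]

print(stirling1_unsigned(4))   # [6, 11, 6, 1]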
Based on @PengOne's answer, here is my JavaScript implementation:
function g(N, L, R) {
    var acc = 0;
    for (var k = 1; k <= N; k++) {
        acc += comb(N-1, k-1) * f(k-1, L-1) * f(N-k, R-1);
    }
    return acc;
}

function f(N, L) {
    if (N == L) return 1;
    else if (N < L) return 0;
    else {
        var acc = 0;
        for (var k = 1; k <= N; k++) {
            acc += comb(N-1, k-1) * f(k-1, L-1) * fact(N-k);
        }
        return acc;
    }
}

function comb(n, k) {
    return fact(n) / (fact(k) * fact(n-k));
}

function fact(n) {
    var acc = 1;
    for (var i = 2; i <= n; i++) {
        acc *= i;
    }
    return acc;
}

$("#go").click(function () {
    // parse the text inputs to numbers before computing
    alert(g(parseInt($("#N").val(), 10),
            parseInt($("#L").val(), 10),
            parseInt($("#R").val(), 10)));
});
Here is my construction solution, inspired by @PengOne's ideas.
import itertools

def f(blocks, m):
    n = len(blocks)
    if m > n:
        return []
    if m < 0:
        return []
    if n == m:
        return [sorted(blocks)]
    maximum = max(blocks)
    blocks = list(set(blocks) - set([maximum]))
    results = []
    for k in range(0, n):
        for left_set in itertools.combinations(blocks, k):
            for left in f(left_set, m - 1):
                rights = itertools.permutations(list(set(blocks) - set(left)))
                for right in rights:
                    results.append(list(left) + [maximum] + list(right))
    return results

def b(n, l, r):
    blocks = range(1, n + 1)
    results = []
    maximum = max(blocks)
    blocks = list(set(blocks) - set([maximum]))
    for k in range(0, n):
        for left_set in itertools.combinations(blocks, k):
            for left in f(left_set, l - 1):
                other = list(set(blocks) - set(left))
                rights = f(other, r - 1)
                for right in rights:
                    results.append(list(left) + [maximum] + list(right))
    return results

# Sample
print b(4, 3, 2)  # -> [[1, 2, 4, 3], [1, 3, 4, 2], [2, 3, 4, 1]]
We derive a general solution F(N, L, R) by examining a specific testcase: F(10, 4, 3).
We first consider 10 in the leftmost possible position, the 4th ( _ _ _ 10 _ _ _ _ _ _ ).
Then we find the product of the number of valid sequences in the left and in the right of 10.
Next, we'll consider 10 in the 5th slot, calculate another product and add it to the previous one.
This process will go on until 10 is in the last possible slot, the 8th.
We'll use the variable named pos to keep track of N's position.
Now suppose pos = 6 ( _ _ _ _ _ 10 _ _ _ _ ). In the left of 10, there are 9C5 = (N-1)C(pos-1) sets of numbers to be arranged.
Since only the order of these numbers matters, we could look at 1, 2, 3, 4, 5.
To construct a sequence with these numbers so that 3 = L-1 of them are visible from the left, we can begin by placing 5 in the leftmost possible slot ( _ _ 5 _ _ ) and follow similar steps to what we did before.
So if F were defined recursively, it could be used here.
The only difference now is that the order of numbers in the right of 5 is immaterial.
To resolve this issue, we'll use a signal, INF (infinity), for R to indicate its unimportance.
Turning to the right of 10, there will be 4 = N-pos numbers left.
We first consider 4 in the last possible slot, position 2 = R-1 from the right ( _ _ 4 _ ).
Here what appears in the left of 4 is immaterial.
But counting arrangements of 4 blocks with the mere condition that 2 of them should be visible from the right is no different than counting arrangements of the same blocks with the mere condition that 2 of them should be visible from the left.
ie. instead of counting sequences like 3 1 4 2, one can count sequences like 2 4 1 3
So the number of valid arrangements in the right of 10 is F(4, 2, INF).
Thus the number of arrangements when pos == 6 is 9C5 * F(5, 3, INF) * F(4, 2, INF) = (N-1)C(pos-1) * F(pos-1, L-1, INF)* F(N-pos, R-1, INF).
Similarly, in F(5, 3, INF), 5 will be considered in a succession of slots with L = 2 and so on.
Since the function calls itself with L or R reduced, it must return a value when L = 1, that is F(N, 1, INF) must be a base case.
Now consider the arrangement _ _ _ _ _ 6 7 10 _ _.
The only slot 5 can take is the first, and the following 4 slots may be filled in any manner; thus F(5, 1, INF) = 4!.
Then clearly F(N, 1, INF) = (N-1)!.
Other (trivial) base cases and details could be seen in the C implementation below.
#include <limits.h>

#define INF UINT_MAX

long long unsigned fact(unsigned n) { return n ? n * fact(n-1) : 1; }
unsigned C(unsigned n, unsigned k) { return fact(n) / (fact(k) * fact(n-k)); }

unsigned F(unsigned N, unsigned L, unsigned R)
{
    unsigned pos, sum = 0;
    if (R != INF)
    {
        if (L == 0 || R == 0 || N < L || N < R) return 0;
        if (L == 1) return F(N-1, R-1, INF);
        if (R == 1) return F(N-1, L-1, INF);
        for (pos = L; pos <= N-R+1; ++pos)
            sum += C(N-1, pos-1) * F(pos-1, L-1, INF) * F(N-pos, R-1, INF);
    }
    else
    {
        if (L == 0) return N == 0;  /* only the empty arrangement qualifies */
        if (L == 1) return fact(N-1);
        for (pos = L; pos <= N; ++pos)
            sum += C(N-1, pos-1) * F(pos-1, L-1, INF) * fact(N-pos);
    }
    return sum;
}
Can you suggest an algorithm that finds all pairs of nodes in a linked list that add up to 10?
I came up with the following.
Algorithm: Compare each node, starting with the second node, with each node starting from the head node till the previous node (previous to the current node being compared) and report all such pairs.
I think this algorithm should work, however it's certainly not the most efficient one, having a complexity of O(n²).
Can anyone hint at a solution which is more efficient (perhaps takes linear time)? Additional or temporary nodes can be used by such a solution.
If their range is limited (say between -100 and 100), it's easy.
Create an array quant[-100..100] then just cycle through your linked list, executing:
quant[value] = quant[value] + 1
Then the following loop will do the trick.
for i = -100 to 100:
    j = 10 - i
    for k = 1 to quant[i] * quant[j]:
        output i, " ", j
Even if their range isn't limited, you can have a more efficient method than what you proposed, by sorting the values first and then just keeping counts rather than individual values (same as the above solution).
This is achieved by running two pointers, one at the start of the list and one at the end. When the numbers at those pointers add up to 10, output them and move the end pointer down and the start pointer up.
When they're greater than 10, move the end pointer down. When they're less, move the start pointer up.
This relies on the sorted nature. Less than 10 means you need to make the sum higher (move start pointer up). Greater than 10 means you need to make the sum less (end pointer down). Since they're are no duplicates in the list (because of the counts), being equal to 10 means you move both pointers.
Stop when the pointers pass each other.
There's one more tricky bit and that's when the pointers are equal and the value sums to 10 (this can only happen when the value is 5, obviously).
You don't output the number of pairs based on the product of the counts; rather it's based on the product of the count minus 1. That's because a value 5 with a count of 1 doesn't actually sum to 10 (since there's only one 5).
So, for the list:
2 3 1 3 5 7 10 -1 11
you get:
Index a b c d e f g h
Value -1 1 2 3 5 7 10 11
Count 1 1 1 2 1 1 1 1
You start pointer p1 at a and p2 at h. Since -1 + 11 = 10, you output those two numbers (as above, you do it N times where N is the product of the counts). That's one copy of (-1,11). Then you move p1 to b and p2 to g.
1 + 10 > 10 so leave p1 at b, move p2 down to f.
1 + 7 < 10 so move p1 to c, leave p2 at f.
2 + 7 < 10 so move p1 to d, leave p2 at f.
3 + 7 = 10, output two copies of (3,7) since the count of d is 2, move p1 to e, p2 to e.
5 + 5 = 10 but p1 = p2 so the product is 0 times 0 or 0. Output nothing, move p1 to f, p2 to d.
Loop ends since p1 > p2.
Hence the overall output was:
(-1,11)
( 3, 7)
( 3, 7)
which is correct.
Here's some test code. You'll notice that I've forced 7 (the midpoint) to a specific value for testing. Obviously, you wouldn't do this.
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define SZSRC 30
#define SZSORTED 20
#define SUM 14

int main (void) {
    int i, s, e, prod;
    int srcData[SZSRC];
    int sortedVal[SZSORTED];
    int sortedCnt[SZSORTED];

    // Make some random data.
    srand (time (0));
    for (i = 0; i < SZSRC; i++) {
        srcData[i] = rand() % SZSORTED;
        printf ("srcData[%2d] = %5d\n", i, srcData[i]);
    }

    // Convert to value/size array.
    for (i = 0; i < SZSORTED; i++) {
        sortedVal[i] = i;
        sortedCnt[i] = 0;
    }
    for (i = 0; i < SZSRC; i++)
        sortedCnt[srcData[i]]++;

    // Force 7+7 to specific count for testing.
    sortedCnt[7] = 2;
    for (i = 0; i < SZSORTED; i++)
        if (sortedCnt[i] != 0)
            printf ("Sorted [%3d], count = %3d\n", i, sortedCnt[i]);

    // Start and end pointers.
    s = 0;
    e = SZSORTED - 1;

    // Loop until they overlap.
    while (s <= e) {
        // Equal to desired value?
        if (sortedVal[s] + sortedVal[e] == SUM) {
            // Get product (note special case at midpoint).
            prod = (s == e)
                ? (sortedCnt[s] - 1) * (sortedCnt[e] - 1)
                : sortedCnt[s] * sortedCnt[e];

            // Output the right count.
            for (i = 0; i < prod; i++)
                printf ("(%3d,%3d)\n", sortedVal[s], sortedVal[e]);

            // Move both pointers and continue.
            s++;
            e--;
            continue;
        }

        // Less than desired, move start pointer.
        if (sortedVal[s] + sortedVal[e] < SUM) {
            s++;
            continue;
        }

        // Greater than desired, move end pointer.
        e--;
    }

    return 0;
}
You'll see that the code above is all O(n) since I'm not sorting in this version, just intelligently using the values as indexes.
If the minimum is below zero (or very high to the point where it would waste too much memory), you can just use a minVal to adjust the indexes (another O(n) scan to find the minimum value and then just use i-minVal instead of i for array indexes).
And, even if the range from low to high is too expensive on memory, you can use a sparse array. You'll have to sort it, O(n log n), and search it for updating counts, also O(n log n), but that's still better than the original O(n²). The reason the binary search is O(n log n) is because a single search would be O(log n) but you have to do it for each value.
And here's the output from a test run, which shows you the various stages of calculation.
srcData[ 0] = 13
srcData[ 1] = 16
srcData[ 2] = 9
srcData[ 3] = 14
srcData[ 4] = 0
srcData[ 5] = 8
srcData[ 6] = 9
srcData[ 7] = 8
srcData[ 8] = 5
srcData[ 9] = 9
srcData[10] = 12
srcData[11] = 18
srcData[12] = 3
srcData[13] = 14
srcData[14] = 7
srcData[15] = 16
srcData[16] = 12
srcData[17] = 8
srcData[18] = 17
srcData[19] = 11
srcData[20] = 13
srcData[21] = 3
srcData[22] = 16
srcData[23] = 9
srcData[24] = 10
srcData[25] = 3
srcData[26] = 16
srcData[27] = 9
srcData[28] = 13
srcData[29] = 5
Sorted [ 0], count = 1
Sorted [ 3], count = 3
Sorted [ 5], count = 2
Sorted [ 7], count = 2
Sorted [ 8], count = 3
Sorted [ 9], count = 5
Sorted [ 10], count = 1
Sorted [ 11], count = 1
Sorted [ 12], count = 2
Sorted [ 13], count = 3
Sorted [ 14], count = 2
Sorted [ 16], count = 4
Sorted [ 17], count = 1
Sorted [ 18], count = 1
( 0, 14)
( 0, 14)
( 3, 11)
( 3, 11)
( 3, 11)
( 5, 9)
( 5, 9)
( 5, 9)
( 5, 9)
( 5, 9)
( 5, 9)
( 5, 9)
( 5, 9)
( 5, 9)
( 5, 9)
( 7, 7)
Create a hash set (HashSet in Java) (could use a sparse array if your numbers are well-bounded, i.e. you know they fall into +/- 100)
For each node, first check if 10-n is in the set. If so, you have found a pair. Either way, then add n to the set and continue.
So for example you have
1 - 6 - 3 - 4 - 9
1 - is 9 in the set? Nope
6 - 4? No.
3 - 7? No.
4 - 6? Yup! Print (6,4)
9 - 1? Yup! Print (9,1)
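The same walk in Python (a sketch; set membership is O(1) on average):

def pairs_with_sum(values, target=10):
    seen = set()
    pairs = []
    for n in values:
        if target - n in seen:
            pairs.append((target - n, n))
        seen.add(n)
    return pairs

print(pairs_with_sum([1, 6, 3, 4, 9]))   # [(6, 4), (1, 9)]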
This is a mini version of the subset sum problem (the general problem is NP-complete, though the fixed-size pair case here is polynomial).
If you were to first sort the list, it would eliminate many of the pairs of numbers that would otherwise need to be evaluated.