Is Time complexity O(n) or O(n^2)? - algorithm

I feel that the time complexity of this JS function I wrote is O(n), but at the same time it feels like it's O(n^2). What's the correct time complexity? The function is supposed to find the last index of the first duplicate item found. For example, in the first array below, 1 is found at index 0 and again at index 6, so the result is 6 because that's the last index of the first duplicated value in that array. If no duplicates are found, we return -1.
// [1, 2, 4, 5, 2, 3, 1] --> output: 6
// [1, 1, 3, 2, 4] --> output: 1
// [1, 2, 3, 4, 5, 6] --> output: -1(not found)
// The array can be in any order; we just need to find the last index of the first duplicate value that's there
const findFirstDuplicateIndex = (arr) => {
  let ptr1 = 0
  let ptr2 = 1
  while (ptr1 < arr.length - 1) { // O(n)
    if (arr[ptr1] === arr[ptr2]) {
      return ptr2 + 1
    } else {
      ptr2++
    }
    if (ptr2 === arr.length - 1) { ptr1++; ptr2 = ptr1 + 1 }
  }
  return -1
}

The time complexity of your code is O(n^2).
Your code is another version of two nested loops.
if (ptr2 === arr.length - 1) {ptr1++; ptr2 = ptr1 + 1}
This line is equivalent to adding a nested inner loop, i.e.
for (ptr2 = ptr1 + 1; ptr2 < arr.length; ++ptr2)
If we rewrite your code with two nested for loops, we have
const findFirstDuplicateIndex = (arr) => {
  for (let ptr1 = 0; ptr1 < arr.length - 1; ++ptr1) {
    for (let ptr2 = ptr1 + 1; ptr2 < arr.length; ++ptr2) {
      if (arr[ptr1] === arr[ptr2]) {
        return ptr2
      }
    }
  }
  return -1;
}
Now,
For the 1st iteration: the inner loop will cost N-1.
For the 2nd iteration: the inner loop will cost N-2.
For the 3rd iteration: the inner loop will cost N-3.
........................
........................
For the (N-1)th iteration: the inner loop will cost 1.
So the total time complexity is the sum of these costs, which is
(N-1) + (N-2) + (N-3) + . . . . + 1
which is an arithmetic series; using the arithmetic sum formula we have
(N-1) + (N-2) + (N-3) + . . . . + 1 = N*(N-1)/2 = O(N^2)
Hence, the time complexity of your code is O(n^2).
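(As a quick empirical cross-check, not part of the original answer: the following Python sketch counts the comparisons the nested-loop version performs on a duplicate-free worst-case input and compares the count with N*(N-1)/2.)

def count_comparisons(arr):
    comparisons = 0
    for ptr1 in range(len(arr) - 1):
        for ptr2 in range(ptr1 + 1, len(arr)):
            comparisons += 1
            if arr[ptr1] == arr[ptr2]:
                return comparisons
    return comparisons

for n in (10, 100, 1000):
    # no duplicates, so the full N*(N-1)/2 comparisons are made
    print(n, count_comparisons(list(range(n))), n * (n - 1) // 2)
    # the two counts match: 45, 4950, 499500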

No matter how you write it down, your code runs the variable ptr1 from 0..n-2 and, for each ptr1, the variable ptr2 from ptr1+1..n-1, which yields the O(N^2) (worst-case) time complexity.
But since this is not as easy to spot for more complicated algorithms, one good way to assess the complexity is to have an instrumented version of the algorithm, where you simply count the number of steps, and to use pathologically bad worst-case data as input.
In your case, the worst case is when the duplicate values are the last 2 elements of the array (e.g. [5 4 3 2 1 1]).
So, in step 1, write yourself a test data generator:
(defun gen-test-data (n)
  (make-array (+ n 1)
              :initial-contents
              (append
               (loop for x from n downto 1
                     collecting x)
               '(1))))
It produces the pathological pattern in an array of length (n+1).
(gen-test-data 20)
#(20 19 18 17 16 15 14 13 12 11 10 9 8 7 6 5 4 3 2 1 1)
Next, write your instrumented algorithm, which returns a little extra information:
(defun first-duplicate (data)
  (let* ((arr (etypecase data
                ((simple-vector *) data)
                (cons (make-array (length data)
                                  :initial-contents data))))
         (n (array-dimension arr 0)))
    (loop
      with counter = 0
      for i0 below (- n 1)
      do (loop
           for i1 from (+ i0 1) below n
           do (when (= (aref arr i0) (aref arr i1))
                (return-from first-duplicate
                  (list :value (aref arr i0)
                        :i0 i0
                        :i1 i1
                        :counter counter
                        :n n)))
           do (incf counter)))))
I picked the nested-loop notation here because that is what your code effectively does anyway.
The last step is to run the function for various n, so you can see how n relates to counter:
(loop for n from 2 to 100 by 10
      collecting (first-duplicate (gen-test-data n)))
((:VALUE 1 :I0 1 :I1 2 :COUNTER 2 :N 3)
(:VALUE 1 :I0 11 :I1 12 :COUNTER 77 :N 13)
(:VALUE 1 :I0 21 :I1 22 :COUNTER 252 :N 23)
(:VALUE 1 :I0 31 :I1 32 :COUNTER 527 :N 33)
(:VALUE 1 :I0 41 :I1 42 :COUNTER 902 :N 43)
(:VALUE 1 :I0 51 :I1 52 :COUNTER 1377 :N 53)
(:VALUE 1 :I0 61 :I1 62 :COUNTER 1952 :N 63)
(:VALUE 1 :I0 71 :I1 72 :COUNTER 2627 :N 73)
(:VALUE 1 :I0 81 :I1 82 :COUNTER 3402 :N 83)
(:VALUE 1 :I0 91 :I1 92 :COUNTER 4277 :N 93))
The output clearly shows that it cannot be O(N) and is in fact O(N^2).

In time complexity analysis, assign a cost of 1 to each basic operation: the declarations, each loop-condition check, each statement in the loop body, and the return. Then work out how many times each loop runs, and add it all up.

Related

The sum of all numbers less than 1000, multiples of 3 or 5

If we list all natural numbers less than 10 that are multiples of 3 or 5, we get 3, 5, 6 and 9. The sum of these multiples is 23. Find the sum of all numbers less than 1000, multiples of 3 or 5.
I just started learning Ruby; I used to work only with C-family languages. Please explain why this code doesn't work. Thank you!
Code:
sum = 0;
i = 3;
while (i < 1000) do
  if ((i % 3 == 0) || (i % 5 == 0))
    sum += i;
  end
end
puts "The sum of all the multiples of 3 or 5 below 1000: #{sum}"
And when I run the file, it loads indefinitely.
You are never incrementing i.
The while loop will terminate if: i >= 1000
But i = 3 and there is no i+=1 so this loop will never terminate.
@Raavgo has explained the problem with your code. If you are looking for a fast solution, I suggest the following.
def tot(n, limit)
  m, rem = limit.divmod(n)
  m * (n + limit - rem)/2
end
tot(3, 999) + tot(5, 999) - tot(15, 999)
#=> 233168
The term tot(15, 999) is to compensate for double-counting of terms that are divisible by both 3 and 5.
See Numeric#divmod.
Suppose
n = 5
limit = 999
Then
m, rem = limit.divmod(n)
#=> [199, 4]
So
m #=> 199
rem #=> 4
Then we want to compute
5 + 10 + ... + 999 - rem
#=> 5 + 10 + ... + 995
This is simply the sum of an arithmetic progression:
199 * (5 + 995)/2
which equals
m * (n + limit - rem)/2
(0..1000).select(&->(i){ (i % 3).zero? || (i % 5).zero? }).sum
(0..1000).filter { |i| i % 3 == 0 || i % 5 == 0 }.sum
Your approach is fine if you increment i, as said in the other answer, but more idiomatic Ruby looks like the one-liners above.

Implementation: Algorithm for a special distribution Problem

We are given a number x, and a set of n coins with denominations v1, v2, …, vn.
The coins are to be divided between Alice and Bob, with the restriction that each person's coins must add up to at least x.
For example, if x = 1, n = 2, and v1 = v2 = 2, then there are two possible distributions: one where Alice gets coin #1 and Bob gets coin #2, and one with the reverse. (These distributions are considered distinct even though both coins have the same denomination.)
I'm interested in counting the possible distributions. I'm pretty sure this can be done in O(nx) time and O(n+x) space using dynamic programming; but I don't see how.
Count the ways for one person to get just less than x, double it, and subtract that from the doubled total number of ways to divide the collection in two (the Stirling number of the second kind, {n, 2}).
For example,
{2, 3, 3, 5}, x = 5
i matrix
0 2: 1
1 3: 1 (adding to 2 is too much)
2 3: 2
3 N/A (≥ x)
3 ways for one person to get less than 5.
Total ways to partition a set of 4 items in 2 is {4, 2} = 7.
2 * 7 - 2 * 3 = 8
The Python code below uses MBo's routine. If you like this answer, please consider up-voting that answer.
# Stirling Algorithm
# Cod3d by EXTR3ME
# https://extr3metech.wordpress.com
def stirling(n, k):
    n1 = n
    k1 = k
    if n <= 0:
        return 1
    elif k <= 0:
        return 0
    elif (n == 0 and k == 0):
        return -1
    elif n != 0 and n == k:
        return 1
    elif n < k:
        return 0
    else:
        temp1 = stirling(n1 - 1, k1)
        temp1 = k1 * temp1
        return (k1 * (stirling(n1 - 1, k1))) + stirling(n1 - 1, k1 - 1)

def f(coins, x):
    a = [1] + (x - 1) * [0]
    # Code by MBo
    # https://stackoverflow.com/a/53418438/2034787
    for c in coins:
        for i in range(x - 1, c - 1, -1):
            if a[i - c] > 0:
                a[i] = a[i] + a[i - c]
    return 2 * (stirling(len(coins), 2) - sum(a) + 1)

print(f([2, 3, 3, 5], 5))      # 8
print(f([1, 2, 3, 4, 4], 5))   # 16
If the sum of all coins is S, then the first person can get any amount of money in the range x..S-x.
Make an array A of length S-x+1 and fill it with the number of ways to compose each value i from the given coins (a kind of Coin Change problem).
To ensure uniqueness (don't count C1+C2 and C2+C1 as two variants), fill the array in reverse direction:
A[0] = 1
for C in Coins:
    for i = S-x downto C:
        if A[i - C] > 0:
            A[i] = A[i] + A[i - C]
            // we can compose value i as (i-C) and C
then sum A entries in range x..S-x
Example for coins 2, 3, 3, 5 and x=5.
S = 13, S-x = 8
Array state after using coins in order:
idx:       0  1  2  3  4  5  6  7  8
after 2:   1  0  1  0  0  0  0  0  0
after 3:   1  0  1  1  0  1  0  0  0
after 3':  1  0  1  2  0  2  1  0  1
after 5:   1  0  1  2  0  3  1  1  3
The entries A[5..8] sum to 3 + 1 + 1 + 3 = 8, so there are 8 variants to distribute these coins. Quick check (3' denotes the second coin 3; each line lists the first person's coins, then a '|', then the second person's):
2 3    | 3' 5
2 3'   | 3 5
2 3 3' | 5
2 5    | 3 3'
3 3'   | 2 5
3 5    | 2 3'
3' 5   | 2 3
5      | 2 3 3'
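A direct Python translation of the pseudocode above (just a sketch; the function name is my own) reproduces the count of 8 for this example:

def count_distributions(coins, x):
    S = sum(coins)
    if S < 2 * x:
        return 0
    A = [0] * (S - x + 1)
    A[0] = 1
    for c in coins:
        # reverse order so each coin is used at most once per subset
        for i in range(S - x, c - 1, -1):
            if A[i - c] > 0:
                A[i] += A[i - c]
    # the first person can get any total in x..S-x
    return sum(A[x:S - x + 1])

print(count_distributions([2, 3, 3, 5], 5))  # 8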
You can also solve it in O(A * x^2) time and memory by adding memoization to this DP:
solve(A, pos, sum1, sum2):
    if (pos == A.length) return sum1 == x && sum2 == x
    return solve(A, pos + 1, min(sum1 + A[pos], x), sum2) +
           solve(A, pos + 1, sum1, min(sum2 + A[pos], x))

print(solve(A, 0, 0, 0))
So, in terms of time complexity, depending on whether x^2 < sum or not, you could use this or the answer provided by @MBo. If you care more about space, this is better only when A * x^2 < sum - x.
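For reference, here is a hedged Python sketch of that memoized DP (the helper names are mine); it caps both running sums at x exactly as the pseudocode does:

from functools import lru_cache

def count_ways(coins, x):
    coins = tuple(coins)

    @lru_cache(maxsize=None)
    def solve(pos, sum1, sum2):
        # both capped sums equal x iff both people got at least x
        if pos == len(coins):
            return 1 if (sum1 == x and sum2 == x) else 0
        return (solve(pos + 1, min(sum1 + coins[pos], x), sum2) +
                solve(pos + 1, sum1, min(sum2 + coins[pos], x)))

    return solve(0, 0, 0)

print(count_ways([2, 3, 3, 5], 5))  # 8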

Generate random numbers that sum up to n

How to generate between 1 and n random numbers (positive integers greater than 0) which sum up to exactly n?
Example results if n=10:
10
2,5,3
1,1,1,1,1,1,1,1,1,1
1,1,5,1,1,1
Each of the permutations should have the same probability of occurring; however, I don't need it to be mathematically precise. So if the probabilities are not exactly the same due to some modulo error, I don't care.
Is there a go-to algorithm for this? I only found algorithms where the number of values is fixed (i.e., give me exactly m random numbers which sum up to n).
Imagine the number n as a line built of n equal, indivisible sections. Your numbers are lengths of those sections that sum up to the whole. You can cut the original length between any two sections, or none.
This means there are n-1 potential cut points.
Choose a random (n-1)-bit number, that is, a number between 0 and 2^(n-1) - 1; its binary representation tells you where to cut.
0 : 000 : [-|-|-|-] : 1,1,1,1
1 : 001 : [-|-|- -] : 1,1,2
3 : 011 : [-|- - -] : 1,3
5 : 101 : [- -|- -] : 2,2
7 : 111 : [- - - -] : 4
etc.
Implementation in python-3
import random

def perm(n, np):
    p = []
    d = 1
    for i in range(n):
        if np % 2 == 0:
            p.append(d)
            d = 1
        else:
            d += 1
        np //= 2
    return p

def test(ex_n):
    for ex_p in range(2 ** (ex_n - 1)):
        p = perm(ex_n, ex_p)
        print(len(p), p)

def randperm(n):
    np = random.randint(0, 2 ** (n - 1) - 1)  # randint is inclusive on both ends
    return perm(n, np)

print(randperm(10))
you can verify it by generating all possible solutions for small n
test(4)
output:
4 [1, 1, 1, 1]
3 [2, 1, 1]
3 [1, 2, 1]
2 [3, 1]
3 [1, 1, 2]
2 [2, 2]
2 [1, 3]
1 [4]
Use a modulo.
This should make your day:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main()
{
    srand(time(0));
    int n = 10;
    int x = 0; /* sum of previous random numbers */
    while (x < n) {
        int r = rand() % (n - x) + 1;
        printf("%d ", r);
        x += r;
    }
    /* done */
    printf("\n");
}
Example output:
10
1 1 8
3 4 1 1 1
6 3 1
9 1
6 1 1 1 1
5 4 1
Generate Random Integers With Fixed Sum
Method One: Multinomial Distribution
Deviation cannot be controlled strictly within the desired range.
# Python
import numpy as np
_sum = 800
n = 16
rnd_array = np.random.multinomial(_sum, np.ones(n)/n, size=1)[0]
print('Array:', rnd_array, ', Sum:', sum(rnd_array))
# returns Array: [64 41 57 49 48 44 46 44 40 55 58 54 54 54 39 53] , Sum: 800
Method Two: Random Integer Generator Within Lower and Upper Bounds
To control the deviation
# Python
import random
import numpy as np

def generate_random_integers(_sum, n):
    mean = _sum / n
    variance = int(5 * mean)
    min_v = mean - variance
    max_v = mean + variance
    array = [min_v] * n
    diff = _sum - min_v * n
    while diff > 0:
        a = random.randint(0, n - 1)
        if array[a] >= max_v:
            continue
        array[a] += 1
        diff -= 1
    return np.array([int(number) for number in array])

_sum = 800
n = 16
rnd_array = generate_random_integers(_sum, n)
print('Array:', rnd_array, ', Sum:', sum(rnd_array))
# Returns Array: [45 46 46 58 53 77 33 53 39 38 44 51 33 60 75 49] , Sum: 800
Archived from http://sunny.today/generate-random-integers-with-fixed-sum/

Google Interview: Arrangement of Blocks

You are given N blocks of height 1…N. In how many ways can you arrange these blocks in a row such that when viewed from the left you see only L blocks (the rest are hidden by taller blocks) and when seen from the right you see only R blocks? For example, given N=3, L=2, R=1 there is only one arrangement, {2, 1, 3}, while for N=3, L=2, R=2 there are two, {1, 3, 2} and {2, 3, 1}.
How should we solve this problem by programming? Any efficient ways?
This is a counting problem, not a construction problem, so we can approach it using recursion. Since the problem has two natural parts, looking from the left and looking from the right, break it up and solve for just one part first.
Let b(N, L, R) be the number of solutions, and let f(N, L) be the number of arrangements of N blocks so that L are visible from the left. First think about f because it's easier.
APPROACH 1
Let's get the initial conditions and then go for recursion. If all are to be visible, then they must be ordered increasingly, so
f(N, N) = 1
If there are supposed to be more visible blocks than available blocks, then there is nothing we can do, so
f(N, M) = 0 if N < M
If only one block should be visible, then put the largest first and then the others can follow in any order, so
f(N,1) = (N-1)!
Finally, for the recursion, think about the position of the tallest block, say N is in the kth spot from the left. Then choose the blocks to come before it in (N-1 choose k-1) ways, arrange those blocks so that exactly L-1 are visible from the left, and order the N-k blocks behind N in any way you like, giving:
f(N, L) = sum_{1<=k<=N} (N-1 choose k-1) * f(k-1, L-1) * (N-k)!
In fact, since f(x-1,L-1) = 0 for x<L, we may as well start k at L instead of 1:
f(N, L) = sum_{L<=k<=N} (N-1 choose k-1) * f(k-1, L-1) * (N-k)!
Right, so now that the easier bit is understood, let's use f to solve for the harder bit b. Again, use recursion based on the position of the tallest block, again say N is in position k from the left. As before, choose the blocks before it in N-1 choose k-1 ways, but now think about each side of that block separately. For the k-1 blocks left of N, make sure that exactly L-1 of them are visible. For the N-k blocks right of N, make sure that R-1 are visible and then reverse the order you would get from f. Therefore the answer is:
b(N,L,R) = sum_{1<=k<=N} (N-1 choose k-1) * f(k-1, L-1) * f(N-k, R-1)
where f is completely worked out above. Again, many terms will be zero, so we only want to take k such that k-1 >= L-1 and N-k >= R-1 to get
b(N,L,R) = sum_{L <= k <= N-R+1} (N-1 choose k-1) * f(k-1, L-1) * f(N-k, R-1)
APPROACH 2
I thought about this problem again and found a somewhat nicer approach that avoids the summation.
If you work the problem the opposite way, that is think of adding the smallest block instead of the largest block, then the recurrence for f becomes much simpler. In this case, with the same initial conditions, the recurrence is
f(N,L) = f(N-1,L-1) + (N-1) * f(N-1,L)
where the first term, f(N-1,L-1), comes from placing the smallest block in the leftmost position, thereby adding one more visible block (hence L decreases to L-1), and the second term, (N-1) * f(N-1,L), accounts for putting the smallest block in any of the N-1 non-front positions, in which case it is not visible (hence L stays fixed).
This recursion has the advantage of always decreasing N, though it makes it more difficult to see some formulas, for example f(N,N-1) = (N choose 2). This formula is fairly easy to show from the previous formula, though I'm not certain how to derive it nicely from this simpler recurrence.
Now, to get back to the original problem and solve for b, we can also take a different approach. Instead of the summation before, think of the visible blocks as coming in packets, so that if a block is visible from the left, then its packet consists of all blocks right of it and in front of the next block visible from the left, and similarly if a block is visible from the right then its packet contains all blocks left of it until the next block visible from the right. Do this for all visible blocks except the tallest one, which makes for L+R-2 packets. Given the packets, you can move one from the left side to the right side simply by reversing the order of its blocks. Therefore the general case b(N,L,R) actually reduces to solving the case where every packet is on the left, b(N,L+R-1,1) = f(N-1,L+R-2), and then choosing which of the packets to put on the left and which on the right. Therefore we have
b(N,L,R) = (L+R-2 choose L-1) * f(N-1, L+R-2)
Again, this reformulation has some advantages over the previous version. Putting these latter two formulas together, it's much easier to see the complexity of the overall problem. However, I still prefer the first approach for constructing solutions, though perhaps others will disagree. All in all it just goes to show there's more than one good way to approach the problem.
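Here is a short Python sketch of this second approach (not part of the original answer; the function names are mine), checked against the examples from the question, b(3,2,1) = 1 and b(3,2,2) = 2:

from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def f(N, L):
    # number of arrangements of N blocks with exactly L visible from the left
    if L == 0:
        return 1 if N == 0 else 0
    if N < L:
        return 0
    if N == L:
        return 1
    # smallest block goes first (visible) or into one of the N-1 other spots (hidden)
    return f(N - 1, L - 1) + (N - 1) * f(N - 1, L)

def b(N, L, R):
    # choose which of the L+R-2 packets go on the left, then arrange
    if L < 1 or R < 1:
        return 0
    return comb(L + R - 2, L - 1) * f(N - 1, L + R - 2)

print(b(3, 2, 1), b(3, 2, 2))  # 1 2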
What's with the Stirling numbers?
As Jason points out, the f(N,L) numbers are precisely the (unsigned) Stirling numbers of the first kind. One can see this immediately from the recursive formulas for each. However, it's always nice to be able to see it directly, so here goes.
The (unsigned) Stirling numbers of the first kind, denoted S(N,L), count the number of permutations of N elements with exactly L cycles. Given a permutation written in cycle notation, we write the permutation in canonical form by beginning each cycle with the largest number in that cycle and then ordering the cycles increasingly by the first number of the cycle. For example, the permutation
(2 6) (5 1 4) (3 7)
would be written in canonical form as
(5 1 4) (6 2) (7 3)
Now drop the parentheses and notice that if these are the heights of the blocks, then the number of visible blocks from the left is exactly the number of cycles! This is because the first number of each cycle blocks all other numbers in the cycle, and the first number of each successive cycle is visible behind the previous cycle. Hence this problem is really just a sneaky way to ask you to find a formula for Stirling numbers.
well, just as an empirical solution for small N:
blocks.py:
import itertools
from collections import defaultdict

def countPermutation(p):
    n = 0
    max = 0
    for block in p:
        if block > max:
            n += 1
            max = block
    return n

def countBlocks(n):
    count = defaultdict(int)
    for p in itertools.permutations(range(1, n + 1)):
        fwd = countPermutation(p)
        rev = countPermutation(reversed(p))
        count[(fwd, rev)] += 1
    return count

def printCount(count, n, places):
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            c = count[(i, j)]
            if c > 0:
                print("%*d" % (places, c), end=' ')
            else:
                print(" " * places, end=' ')
        print()

def countAndPrint(nmax, places=7):
    for n in range(1, nmax + 1):
        printCount(countBlocks(n), n, places)
        print()
and sample output:
blocks.countAndPrint(10)
1
1
1
1 1
1 2
1
2 3 1
2 6 3
3 3
1
6 11 6 1
6 22 18 4
11 18 6
6 4
1
24 50 35 10 1
24 100 105 40 5
50 105 60 10
35 40 10
10 5
1
120 274 225 85 15 1
120 548 675 340 75 6
274 675 510 150 15
225 340 150 20
85 75 15
15 6
1
720 1764 1624 735 175 21 1
720 3528 4872 2940 875 126 7
1764 4872 4410 1750 315 21
1624 2940 1750 420 35
735 875 315 35
175 126 21
21 7
1
5040 13068 13132 6769 1960 322 28 1
5040 26136 39396 27076 9800 1932 196 8
13068 39396 40614 19600 4830 588 28
13132 27076 19600 6440 980 56
6769 9800 4830 980 70
1960 1932 588 56
322 196 28
28 8
1
40320 109584 118124 67284 22449 4536 546 36 1
40320 219168 354372 269136 112245 27216 3822 288 9
109584 354372 403704 224490 68040 11466 1008 36
118124 269136 224490 90720 19110 2016 84
67284 112245 68040 19110 2520 126
22449 27216 11466 2016 126
4536 3822 1008 84
546 288 36
36 9
1
You'll note a few obvious (well, mostly obvious) things from the problem statement:
the total # of permutations is always N!
with the exception of N=1, there is no solution for L,R = (1,1): if a count in one direction is 1, then it implies the tallest block is on that end of the stack, so the count in the other direction has to be >= 2
the situation is symmetric (reverse each permutation and you reverse the L,R count)
if p is a permutation of N-1 blocks and has count (Lp,Rp), then the N permutations of block N inserted in each possible spot can have a count ranging from L = 1 to Lp+1, and R = 1 to Rp + 1.
From the empirical output:
the leftmost column or topmost row (where L = 1 or R = 1) with N blocks is the sum of the rows/columns with N-1 blocks, i.e. in @PengOne's notation,
b(N,1,R) = sum(b(N-1,k,R-1) for k = 1 to N-R+1)
Each diagonal is a row of Pascal's triangle, times a constant factor K for that diagonal -- I can't prove this, but I'm sure someone can -- i.e.:
b(N,L,R) = K * (L+R-2 choose L-1) where K = b(N,1,L+R-1)
So the computational complexity of computing b(N,L,R) is the same as the computational complexity of computing b(N,1,L+R-1) which is the first column (or row) in each triangle.
This observation is probably 95% of the way towards an explicit solution (the other 5% I'm sure involves standard combinatoric identities, I'm not too familiar with those).
A quick check with the Online Encyclopedia of Integer Sequences shows that b(N,1,R) appears to be OEIS sequence A094638:
A094638 Triangle read by rows: T(n,k) =|s(n,n+1-k)|, where s(n,k) are the signed Stirling numbers of the first kind (1<=k<=n; in other words, the unsigned Stirling numbers of the first kind in reverse order).
1, 1, 1, 1, 3, 2, 1, 6, 11, 6, 1, 10, 35, 50, 24, 1, 15, 85, 225, 274, 120, 1, 21, 175, 735, 1624, 1764, 720, 1, 28, 322, 1960, 6769, 13132, 13068, 5040, 1, 36, 546, 4536, 22449, 67284, 118124, 109584, 40320, 1, 45, 870, 9450, 63273, 269325, 723680, 1172700
As far as how to efficiently compute the Stirling numbers of the first kind, I'm not sure; Wikipedia gives an explicit formula but it looks like a nasty sum. This question (computing Stirling #s of the first kind) shows up on MathOverflow and it looks like O(n^2), as PengOne hypothesizes.
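For what it's worth, here is a minimal Python sketch (mine, not the answer's) that tabulates the unsigned Stirling numbers of the first kind in O(n^2), using the same recurrence as PengOne's second approach:

def stirling1_table(nmax):
    # c[n][k] = unsigned Stirling number of the first kind, built bottom-up
    c = [[0] * (nmax + 1) for _ in range(nmax + 1)]
    c[0][0] = 1
    for n in range(1, nmax + 1):
        for k in range(1, n + 1):
            c[n][k] = c[n - 1][k - 1] + (n - 1) * c[n - 1][k]
    return c

print(stirling1_table(4)[4])  # [0, 6, 11, 6, 1]; 6 11 6 1 is the N=5, L=1 row above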
Based on @PengOne's answer, here is my JavaScript implementation:
function g(N, L, R) {
  var acc = 0;
  for (var k = 1; k <= N; k++) {
    acc += comb(N - 1, k - 1) * f(k - 1, L - 1) * f(N - k, R - 1);
  }
  return acc;
}

function f(N, L) {
  if (N == L) return 1;
  else if (N < L) return 0;
  else {
    var acc = 0;
    for (var k = 1; k <= N; k++) {
      acc += comb(N - 1, k - 1) * f(k - 1, L - 1) * fact(N - k);
    }
    return acc;
  }
}

function comb(n, k) {
  return fact(n) / (fact(k) * fact(n - k));
}

function fact(n) {
  var acc = 1;
  for (var i = 2; i <= n; i++) {
    acc *= i;
  }
  return acc;
}

$("#go").click(function () {
  alert(g($("#N").val(), $("#L").val(), $("#R").val()));
});
Here is my construction solution inspired by @PengOne's ideas.
import itertools

def f(blocks, m):
    n = len(blocks)
    if m > n:
        return []
    if m < 0:
        return []
    if n == m:
        return [sorted(blocks)]
    maximum = max(blocks)
    blocks = list(set(blocks) - set([maximum]))
    results = []
    for k in range(0, n):
        for left_set in itertools.combinations(blocks, k):
            for left in f(left_set, m - 1):
                rights = itertools.permutations(list(set(blocks) - set(left)))
                for right in rights:
                    results.append(list(left) + [maximum] + list(right))
    return results

def b(n, l, r):
    blocks = range(1, n + 1)
    results = []
    maximum = max(blocks)
    blocks = list(set(blocks) - set([maximum]))
    for k in range(0, n):
        for left_set in itertools.combinations(blocks, k):
            for left in f(left_set, l - 1):
                other = list(set(blocks) - set(left))
                rights = f(other, r - 1)
                for right in rights:
                    # reverse so that r-1 blocks are visible from the right, per PengOne's answer
                    results.append(list(left) + [maximum] + list(reversed(right)))
    return results

# Sample
print(b(4, 3, 2))  # -> [[1, 2, 4, 3], [1, 3, 4, 2], [2, 3, 4, 1]]
We derive a general solution F(N, L, R) by examining a specific testcase: F(10, 4, 3).
We first consider 10 in the leftmost possible position, the 4th ( _ _ _ 10 _ _ _ _ _ _ ).
Then we find the product of the number of valid sequences in the left and in the right of 10.
Next, we'll consider 10 in the 5th slot, calculate another product and add it to the previous one.
This process will go on until 10 is in the last possible slot, the 8th.
We'll use the variable named pos to keep track of N's position.
Now suppose pos = 6 ( _ _ _ _ _ 10 _ _ _ _ ). In the left of 10, there are 9C5 = (N-1)C(pos-1) sets of numbers to be arranged.
Since only the order of these numbers matters, we could look at 1, 2, 3, 4, 5.
To construct a sequence with these numbers so that 3 = L-1 of them are visible from the left, we can begin by placing 5 in the leftmost possible slot ( _ _ 5 _ _ ) and follow similar steps to what we did before.
So if F were defined recursively, it could be used here.
The only difference now is that the order of numbers in the right of 5 is immaterial.
To resolve this issue, we'll use a signal, INF (infinity), for R to indicate its unimportance.
Turning to the right of 10, there will be 4 = N-pos numbers left.
We first consider 4 in the last possible slot, position 2 = R-1 from the right ( _ _ 4 _ ).
Here what appears in the left of 4 is immaterial.
But counting arrangements of 4 blocks with the mere condition that 2 of them should be visible from the right is no different than counting arrangements of the same blocks with the mere condition that 2 of them should be visible from the left.
i.e. instead of counting sequences like 3 1 4 2, one can count sequences like 2 4 1 3
So the number of valid arrangements in the right of 10 is F(4, 2, INF).
Thus the number of arrangements when pos == 6 is 9C5 * F(5, 3, INF) * F(4, 2, INF) = (N-1)C(pos-1) * F(pos-1, L-1, INF)* F(N-pos, R-1, INF).
Similarly, in F(5, 3, INF), 5 will be considered in a succession of slots with L = 2 and so on.
Since the function calls itself with L or R reduced, it must return a value when L = 1, that is F(N, 1, INF) must be a base case.
Now consider the arrangement _ _ _ _ _ 6 7 10 _ _.
The only slot 5 can take is the first, and the following 4 slots may be filled in any manner; thus F(5, 1, INF) = 4!.
Then clearly F(N, 1, INF) = (N-1)!.
Other (trivial) base cases and details could be seen in the C implementation below.
Here is a link for testing the code
#include <limits.h>

#define INF UINT_MAX

long long unsigned fact(unsigned n) { return n ? n * fact(n-1) : 1; }

unsigned C(unsigned n, unsigned k) { return fact(n) / (fact(k) * fact(n-k)); }

unsigned F(unsigned N, unsigned L, unsigned R)
{
    unsigned pos, sum = 0;
    if (R != INF)
    {
        if (L == 0 || R == 0 || N < L || N < R) return 0;
        if (L == 1) return F(N-1, R-1, INF);
        if (R == 1) return F(N-1, L-1, INF);
        for (pos = L; pos <= N-R+1; ++pos)
            sum += C(N-1, pos-1) * F(pos-1, L-1, INF) * F(N-pos, R-1, INF);
    }
    else
    {
        if (L == 1) return fact(N-1);
        for (pos = L; pos <= N; ++pos)
            sum += C(N-1, pos-1) * F(pos-1, L-1, INF) * fact(N-pos);
    }
    return sum;
}

How to check divisibility of a number not in base 10 without converting?

Let's say I have a number in base 3, 1211. How could I check whether this number is divisible by 2 without converting it back to base 10?
Update
The original problem is from TopCoder
The digits 3 and 9 share an interesting property. If you take any multiple of 3 and sum its digits, you get another multiple of 3. For example, 118*3 = 354 and 3+5+4 = 12, which is a multiple of 3. Similarly, if you take any multiple of 9 and sum its digits, you get another multiple of 9. For example, 75*9 = 675 and 6+7+5 = 18, which is a multiple of 9. Call any digit for which this property holds interesting, except for 0 and 1, for which the property holds trivially.
A digit that is interesting in one base is not necessarily interesting in another base. For example, 3 is interesting in base 10 but uninteresting in base 5. Given an int base, your task is to return all the interesting digits for that base in increasing order. To determine whether a particular digit is interesting or not, you need not consider all multiples of the digit. You can be certain that, if the property holds for all multiples of the digit with fewer than four digits, then it also holds for multiples with more digits. For example, in base 10, you would not need to consider any multiples greater than 999.
Notes
- When base is greater than 10, digits may have a numeric value greater than 9. Because integers are displayed in base 10 by default, do not be alarmed when such digits appear on your screen as more than one decimal digit. For example, one of the interesting digits in base 16 is 15.
Constraints
- base is between 3 and 30, inclusive.
This is my solution:
class InterestingDigits {
public:
    vector<int> digits( int base ) {
        vector<int> temp;
        for( int i = 2; i <= base; ++i )
            if( base % i == 1 )
                temp.push_back( i );
        return temp;
    }
};
The trick was well explained here : https://math.stackexchange.com/questions/17242/how-does-base-of-a-number-relate-to-modulos-of-its-each-individual-digit
Thanks,
Chan
If your number k is in base three, then you can write it as
k = a0 3^n + a1 3^{n-1} + a2 3^{n-2} + ... + an 3^0
where a0, a1, ..., an are the digits in the base-three representation.
To see if the number is divisible by two, you're interested in whether the number, modulo 2, is equal to zero. Well, k mod 2 is given by
k mod 2 = (a0 3^n + a1 3^{n-1} + a2 3^{n-2} + ... + an 3^0) mod 2
= (a0 3^n) mod 2 + (a1 3^{n-1}) mod 2 + ... + an (3^0) mod 2
= (a0 mod 2) (3^n mod 2) + ... + (an mod 2) (3^0 mod 2)
The trick here is that 3^i = 1 (mod 2), so this expression is
k mod 2 = (a0 mod 2) + (a1 mod 2) + ... + (an mod 2)
In other words, if you sum up the digits of the ternary representation and get that this value is divisible by two, then the number itself must be divisible by two. To make this even cooler, since the only ternary digits are 0, 1, and 2, this is equivalent to asking whether the number of 1s in the ternary representation is even!
More generally, though, if you have a number in base m, then that number is divisible by m - 1 iff the sum of its digits is divisible by m - 1. This is why you can check if a number in base 10 is divisible by 9 by summing the digits and seeing if that value is divisible by nine.
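As a small illustration (a Python sketch, not part of the original answer), the digit-sum test for divisibility by b-1 in base b:

def divisible_by_base_minus_1(digits, base):
    # the number is divisible by base-1 exactly when its digit sum is
    return sum(digits) % (base - 1) == 0

# 1211 in base 3 is 49; the digit sum 1+2+1+1 = 5 is odd, so 49 is not divisible by 2
print(divisible_by_base_minus_1([1, 2, 1, 1], 3))  # False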
You can always build a finite automaton for any base and any divisor:
Normally to compute the value n of a string of digits in base b
you iterate over the digits and do
n = (n * b) + d
for each digit d.
Now if you are interested in divisibility you do this modulo m instead:
n = ((n * b) + d) % m
Here n can take at most m different values. Take these as states of a finite automaton, and compute the transitions depending on the digit d according to that formula. The accepting state is the one where the remainder is 0.
For your specific case we have
n == 0, d == 0: n = ((0 * 3) + 0) % 2 = 0
n == 0, d == 1: n = ((0 * 3) + 1) % 2 = 1
n == 0, d == 2: n = ((0 * 3) + 2) % 2 = 0
n == 1, d == 0: n = ((1 * 3) + 0) % 2 = 1
n == 1, d == 1: n = ((1 * 3) + 1) % 2 = 0
n == 1, d == 2: n = ((1 * 3) + 2) % 2 = 1
which shows that you can just count the 1 digits modulo 2 and ignore any 0 or 2 digits.
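For illustration, here is a brief Python sketch of that construction (names are mine): precompute the transition table, then run it over the digit string:

def make_transitions(base, m):
    # transition[state][digit] = (state * base + digit) % m
    return [[(state * base + d) % m for d in range(base)] for state in range(m)]

def divisible(digits, base, m):
    transitions = make_transitions(base, m)
    state = 0
    for d in digits:          # most significant digit first
        state = transitions[state][d]
    return state == 0         # accepting state: remainder 0

print(divisible([1, 2, 1, 1], 3, 2))  # False: 1211 (base 3) = 49, which is odd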
Add all the digits together (or even just count the ones); if the answer is odd, the number is odd, and if it's even, the number is even.
How does that work? Each digit of the number contributes 0, 1 or 2 times a power of 3 (1, 3, 9, 27, ...). A 0 or a 2 adds an even number, so it has no effect on the oddness/evenness (parity) of the number as a whole. A 1 adds one of the powers of 3, which is always odd, and so flips the parity. And we start from 0 (even). So by counting whether the number of flips is odd or even we can tell whether the number itself is.
I'm not sure on what CPU you have a number in base-3, but the normal way to do this is to perform a modulus/remainder operation.
if (n % 2 == 0) {
// divisible by 2, so even
} else {
// odd
}
How to implement the modulus operator is going to depend on how you're storing your base-3 number. The simplest to code will probably be to implement normal pencil-and-paper long division, and get the remainder from that.
     0 2 2 0
     ________
2 ⟌  1 2 1 1
     0
     ---
     1 2
     1 1
     -----
       1 1
       1 1
       -----
         0 1   <--- remainder = 1 (so odd)
(This works regardless of base, there are "tricks" for base-3 as others have mentioned)
Same as in base 10, for your example:
1. Find the largest multiple of 2 that's <= 1211; that's 1210 (see below how to get it)
2. Subtract 1210 from 1211; you get 1
3. The remainder 1 is not 0, thus 1211 isn't divisible by 2
how to achieve 1210:
1. starts with 2
2. 2 + 2 = 11
3. 11 + 2 = 20
4. 20 + 2 = 22
5. 22 + 2 = 101
6. 101 + 2 = 110
7. 110 + 2 = 112
8. 112 + 2 = 121
9. 121 + 2 = 200
10. 200 + 2 = 202
... // repeat until you get the biggest number <= 1211
It's basically the same as in base 10; the carrying just happens at 3 instead of 10.
