Proving by induction that a function gets called n-1 times - algorithm

This is the pseudo-code from the problem:
Procedure Foo(A, f, L), precondition:
A[f..L] is an array of integers
f, L are two naturals >= 1 with f <= L.
Code
procedure Foo(A, f, L)
    if (f = L) then
        return A[f]
    else
        m <- floor((f + L) / 2)
        return min(Foo(A, f, m), Foo(A, m+1, L))
    end if
The Question:
Using induction, argue that Foo invokes min at most n-1 times.
I am a little lost on how to continue my proof for part (iii). I have the claim written out, as well as the base case, which I believe to be n >= 2. But how do I handle the step for k + 1 terms, since this is a proof by induction?
Thanks

We will proceed by induction on n = L - f + 1.
Base case: when n = 1, f = L and we immediately return A[f], calling min zero times; indeed n - 1 = 1 - 1 = 0.
Induction hypothesis: assume that the claim is true for all n up to and including k - 1.
Induction step: we must show the claim is true for n = k. Since L > f, we execute the second branch, which calls min once and invokes Foo on subarrays of sizes floor(k/2) and ceiling(k/2).
If k is even, k/2 is an integer and floor(k/2) = ceiling(k/2) = k/2. Both of these are less than k, so by the hypothesis each recursive call invokes min at most k/2 - 1 times. But 2(k/2 - 1) + 1 = k - 2 + 1 = k - 1, so the number of invocations is at most k - 1 for n = k.
If k is odd, k/2 is not an integer and floor(k/2) = ceiling(k/2) - 1. For k > 1, both of these are less than k, so the recursive calls invoke min at most floor(k/2) - 1 and ceiling(k/2) - 1 times, respectively. But floor(k/2) - 1 + ceiling(k/2) - 1 + 1 = floor(k/2) - 1 + floor(k/2) + 1 = 2*floor(k/2). Since k is an odd integer, k/2 can be written as w + 1/2 where w = floor(k/2). Rearranging, k = 2w + 1, so min is invoked at most 2w times. But k - 1 = 2w + 1 - 1 = 2w = 2*floor(k/2), as required.
Since k is either even or odd, and we have shown that in both cases the number of invocations of min is at most k - 1, this completes the proof.
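As a sanity check, here is a direct Python transliteration of Foo (my own, 0-indexed, with a call counter added on top of the pseudocode above; the counter is not part of the original):
def foo(a, f, l, counter):
    if f == l:
        return a[f]
    m = (f + l) // 2
    counter[0] += 1  # one min invocation per split
    return min(foo(a, f, m, counter), foo(a, m + 1, l, counter))

for n in range(1, 20):
    counter = [0]
    foo(list(range(n)), 0, n - 1, counter)
    assert counter[0] == n - 1  # min is invoked exactly n - 1 times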


Count number of subsequences of A such that every element of the subsequence is divisible by its index (starts from 1)

B is a subsequence of A if and only if we can turn A to B by removing zero or more element(s).
A = [1,2,3,4]
B = [1,4] is a subsequence of A. (Just remove 2 and 3.)
B = [4,1] is not a subsequence of A.
Count all subsequences B of A that satisfy this condition: B[i] % i = 0
Note that i starts from 1 not 0.
Example :
Input :
5
2 2 1 22 14
Output:
13
All of these 13 subsequences satisfy the B[i] % i = 0 condition.
{2},{2,2},{2,22},{2,14},{2},{2,22},{2,14},{1},{1,22},{1,14},{22},{22,14},{14}
My attempt:
The only solution that I could come up with has O(n^2) complexity.
Assuming the maximum element in A is C, the following is an algorithm with time complexity O(n * sqrt(C)):
For every element x in A, find all divisors of x.
For every i from 1 to n, find every j such that A[j] is a multiple of i, using the result of step 1.
For every i from 1 to n and j such that A[j] is a multiple of i (using the result of step 2), find the number of B that has i elements and the last element is A[j] (dynamic programming).
def find_factors(x):
    """Yields all factors of x."""
    for i in range(1, int(x ** 0.5) + 1):
        if x % i == 0:
            yield i
            if i != x // i:
                yield x // i

def solve(a):
    """Returns the answer for a."""
    n = len(a)
    # b[i] contains every j such that a[j] is a multiple of i+1.
    b = [[] for i in range(n)]
    for i, x in enumerate(a):
        for factor in find_factors(x):
            if factor <= n:
                b[factor - 1].append(i)
    # There are dp[i][j] subsequences of A of length (i+1) ending at b[i][j].
    dp = [[] for i in range(n)]
    dp[0] = [1] * n
    for i in range(1, n):
        k = x = 0
        for j in b[i]:
            while k < len(b[i - 1]) and b[i - 1][k] < j:
                x += dp[i - 1][k]
                k += 1
            dp[i].append(x)
    return sum(sum(dpi) for dpi in dp)
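Running solve on the example from the question reproduces the expected output:
print(solve([2, 2, 1, 22, 14]))  # 13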
For every divisor d of A[i] that is at most i+1, A[i] can serve as the d-th element of a subsequence, extending each of the subsequences of length d-1 already counted (for d = 1 it starts a new subsequence).
JavaScript code:
function getDivisors(n, max){
    let m = 1;
    const left = [];
    const right = [];
    while (m*m <= n && m <= max){
        if (n % m == 0){
            left.push(m);
            const l = n / m;
            if (l != m && l <= max)
                right.push(l);
        }
        m += 1;
    }
    // Divisors in descending order, so dp is updated from larger d downwards.
    return right.concat(left.reverse());
}

function f(A){
    const dp = [1, ...new Array(A.length).fill(0)];
    let result = 0;
    for (let i=0; i<A.length; i++){
        for (const d of getDivisors(A[i], i+1)){
            result += dp[d-1];
            dp[d] += dp[d-1];
        }
    }
    return result;
}
var A = [2, 2, 1, 22, 14];
console.log(JSON.stringify(A));
console.log(f(A));
I believe that in the general case no algorithm with complexity below O(n^2) is possible.
First, an intuitive explanation:
Let's denote the elements of the array by a_1, a_2, a_3, ..., a_n.
If the element a_1 appears in a subsequence, it must be element no. 1.
If the element a_2 appears in a subsequence, it can be element no. 1 or 2.
If the element a_3 appears in a subsequence, it can be element no. 1, 2 or 3.
...
If the element a_n appears in a subsequence, it can be element no. 1, 2, 3, ..., n.
So to take all the possibilities into account, we have to perform the following tests:
Check if a_1 is divisible by 1 (trivial, of course)
Check if a_2 is divisible by 1 or 2
Check if a_3 is divisible by 1, 2 or 3
...
Check if a_n is divisible by 1, 2, 3, ..., n
All in all we have to perform 1 + 2 + 3 + ... + n = n(n + 1) / 2 tests, which gives a complexity of O(n^2).
Note that the above is somewhat inaccurate, because not all the tests are strictly necessary. For example, if a_i is divisible by 2 and 3 then it must be divisible by 6. Nevertheless, I think this gives a good intuition.
Now for a more formal argument:
Define an array like so:
a_1 = 1
a_2 = 1 × 2
a_3 = 1 × 2 × 3
...
a_n = 1 × 2 × 3 × ... × n
By this definition, every subsequence is valid.
Now let (m, p) be such that m <= n and p <= n, and change a_m to a_m / p. We can now choose one of two paths:
If we restrict p to be prime, then each tuple (m, p) represents a mandatory test, because the corresponding change in the value of a_m changes the number of valid subsequences. But that requires the prime factorization of each number between 1 and n. Using known methods, I don't think we can get a complexity lower than O(n^2) here.
If we omit the above restriction, then we clearly perform n(n - 1) / 2 tests, which gives a complexity of O(n^2).

Formulating dp problem [Codeforces 414 B]

Hello all, here is the problem statement from an old contest on Codeforces:
A sequence of l integers b_1, b_2, ..., b_l (1 ≤ b_1 ≤ b_2 ≤ ... ≤ b_l ≤ n) is called good if each number divides (without a remainder) the next number in the sequence. More formally, b_i divides b_{i+1} for all i (1 ≤ i ≤ l - 1).
Given n and k, find the number of good sequences of length k. As the answer can be rather large, print it modulo 1000000007 (10^9 + 7).
I have formulated my dp[i][j] as the number of good sequences of length i which end with the value j, and the transition as the following pseudocode:
dp[k][n] =
    for each factor i of n do
        for j from 1 to k - 1:
            dp[k][n] += dp[j][i]
        end
    end
But in the editorial it is given as
Lets define dp[i][j] as number of good sequences of length i that ends in j.
Let's denote the divisors of j by x_1, x_2, ..., x_l. Then dp[i][j] = sum over r of dp[i - 1][x_r].
But in my understanding, we need two sigmas, one for the divisors and the other for length. Please help me correct my understanding.
My code:
MOD = 10 ** 9 + 7
N, K = map(int, input().split())
dp = [[0 for _ in range(N + 1)] for _ in range(K + 1)]
for k in range(1, K + 1):
    for n in range(1, N + 1):
        c = 1
        for i in range(1, n):
            if n % i != 0:
                continue
            for j in range(1, k):
                c += dp[j][i]
        dp[k][n] = c
c = 0
for i in range(1, N + 1):
    c = (c + dp[K][i]) % MOD
print(c)
Link to the problem: https://codeforces.com/problemset/problem/414/B
So let's define dp[i][j] as the number of good sequences of length exactly i and which end with the value j as their last element.
Now, dp[i][j] = Sum(dp[i-1][x]) for all x s.t. x is a divisor of j. Note that x can be equal to j itself.
This is true because if there is some sequence of length i-1 which we have already found that ends with some value x, then we can simply add j to its end and form a new sequence which satisfies all the conditions.
I guess your confusion is with the length. The thing is that since our current length is i, we can add j to the end of a sequence only if its length is i-1, we cannot iterate over other lengths.
Hope this is clear.
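A minimal Python sketch of the editorial recurrence (the function name and the choice to loop over multiples rather than divisors are mine; by the harmonic-series bound this runs in O(k * n log n)):
MOD = 10 ** 9 + 7

def count_good_sequences(n, k):
    # dp[j] = number of good sequences of the current length that end in j
    dp = [1] * (n + 1)  # length 1: each value j alone is a good sequence
    dp[0] = 0
    for _ in range(k - 1):
        ndp = [0] * (n + 1)
        for x in range(1, n + 1):
            # a sequence ending in x can be extended by any multiple j of x
            for j in range(x, n + 1, x):
                ndp[j] = (ndp[j] + dp[x]) % MOD
        dp = ndp
    return sum(dp) % MOD

print(count_good_sequences(3, 2))  # 5: (1,1), (1,2), (1,3), (2,2), (3,3)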

Count number of subsequences with given k modulo sum

Given an array a of n integers, count how many subsequences (not necessarily contiguous) have sum % k = 0:
1 <= k < 100
1 <= n <= 10^6
1 <= a[i] <= 1000
An O(n^2) solution is easily possible; however, a faster way, O(n log n) or O(n), is needed.
This is the subset sum problem.
A simple solution is this:
s = 0
dp[x] = how many subsequences we can build with sum x
dp[0] = 1, 0 elsewhere
for i = 1 to n:
    s += a[i]
    for j = s down to a[i]:
        dp[j] = dp[j] + dp[j - a[i]]
Then you can simply return the sum of all dp[x] such that x % k == 0. This has a high complexity though: about O(n*S), where S is the sum of all of your elements. The dp array must also have size S, which you probably can't even afford to declare for your constraints.
A better solution is to not iterate over sums larger than or equal to k in the first place. To do this, we will use 2 dp arrays:
dp1, dp2 = arrays of size k
dp1[0] = dp2[0] = 1, 0 elsewhere
for i = 1 to n:
    mod_elem = a[i] % k
    for j = 0 to k - 1:
        dp2[j] = dp2[j] + dp1[(j - mod_elem + k) % k]
    copy dp2 into dp1
return dp1[0]
Its complexity is O(n*k), which comfortably fits the given constraints. Note that dp1[0] also counts the empty subsequence, so subtract 1 if only non-empty subsequences should be counted.
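A runnable Python version of this idea (function and variable names are mine; the empty subsequence is counted in dp and subtracted at the end):
def count_zero_sum_subsequences(a, k):
    # dp[j] = number of subsequences seen so far (empty one included) with sum % k == j
    dp = [0] * k
    dp[0] = 1
    for x in a:
        m = x % k
        ndp = dp[:]
        for j in range(k):
            # either skip x, or append it to a subsequence with residue (j - m) % k
            ndp[j] += dp[(j - m) % k]
        dp = ndp
    return dp[0] - 1  # exclude the empty subsequence

print(count_zero_sum_subsequences([1, 4, 2, 3, 5, 6], 5))  # 13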
There's an O(n + k^2 lg n)-time algorithm. Compute a histogram c(0), c(1), ..., c(k-1) of the input array mod k (i.e., there are c(r) elements that are r mod k). Then compute
product for r = 0 to k-1 of (1 + x^r)^c(r), reduced mod (1 - x^k),
as follows; the constant term of the reduced polynomial is the answer.
Rather than evaluate each factor with a fast exponentiation method and then multiply, we turn things inside out. If all c(r) are zero, then the answer is 1. Otherwise, recursively evaluate
P = product for r = 0 to k-1 of (1 + x^r)^floor(c(r)/2) mod (1 - x^k),
and then compute
Q = product for r = 0 to k-1 of (1 + x^r)^(c(r) - 2*floor(c(r)/2)) mod (1 - x^k),
in time O(k^2) for the latter computation by exploiting the sparsity of the factors (each remaining exponent is 0 or 1). The result is P^2 Q mod (1 - x^k), computed in time O(k^2) via naive convolution.
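Here is a sketch of this algorithm in Python (all names are mine; multiplication mod (1 - x^k) is a cyclic convolution, i.e. exponents wrap around mod k, and I subtract 1 at the end to exclude the empty subsequence):
def count_subsequences_mod_k(a, k):
    # Histogram of the input mod k: c[r] elements are congruent to r.
    c = [0] * k
    for v in a:
        c[v % k] += 1

    def mul(p, q):
        # Naive convolution mod (1 - x^k): exponents wrap around mod k.
        out = [0] * k
        for i, pi in enumerate(p):
            if pi:
                for j, qj in enumerate(q):
                    out[(i + j) % k] += pi * qj
        return out

    def product(c):
        # Computes product over r of (1 + x^r)^c(r) mod (1 - x^k) by halving.
        if all(v == 0 for v in c):
            return [1] + [0] * (k - 1)  # the constant polynomial 1
        p = product([v // 2 for v in c])
        q = [1] + [0] * (k - 1)
        for r, v in enumerate(c):
            if v % 2:  # one leftover sparse factor (1 + x^r)
                nq = q[:]
                for i, qi in enumerate(q):
                    nq[(i + r) % k] += qi
                q = nq
        return mul(mul(p, p), q)

    # The constant term counts subsequences (empty one included) with sum % k == 0.
    return product(c)[0] - 1

print(count_subsequences_mod_k([1, 4, 2, 3, 5, 6], 5))  # 13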
Traverse a and count a[i] mod k; there ought to be k such counts.
Recurse and memoize over the partitions of k, 2*k, 3*k, ...etc. with parts less than or equal to k, adding the products of the appropriate counts. (Parts may repeat: a part r used t times contributes choose(count[r], t), the number of ways to pick t distinct array elements with that residue.)
For example, if k were 10, some of the partitions would be 1+2+7 and 1+2+3+4; but while memoizing, we would only need to calculate once how many pairs mod k in the array produce (1 + 2).
For example, k = 5, a = {1,4,2,3,5,6}:
counts of a[i] mod k, for residues 0..4 (a residue of 0 is written as the part 5 below): {1,2,1,1,1}
products for the partitions of k:
5 => 1
4,1 => 2
3,2 => 1
3,1,1 => 1 * choose(2,2) = 1
products for the partitions of 2 * k with parts <= k:
5,4,1 => 2
5,3,2 => 1
4,1,3,2 => 2
5,3,1,1 => 1
products for the partitions of 3 * k with parts <= k:
5,4,1,3,2 => 2
answer = 13
{1,4} {4,6} {2,3} {5} {1,6,3}
{1,4,2,3} {1,4,5} {4,6,2,3} {4,6,5} {2,3,5} {1,6,3,5}
{1,4,2,3,5} {4,6,2,3,5}
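A brute-force check over all non-empty subsets (my own snippet) confirms this total:
from itertools import combinations
a = [1, 4, 2, 3, 5, 6]
print(sum(1 for r in range(1, len(a) + 1)
            for s in combinations(a, r)
            if sum(s) % 5 == 0))  # 13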

Formal proof of what an algorithm returns

I need a formal proof that the algorithm below returns 1 for n = 1 and 0 in all other cases.
function K(n: word): word;
begin
  if (n < 2) then K := n
  else K := K(n - 1) * K(n - 2);
end;
Could anyone help? Thank you.
This can be proven by induction, but as previous posters have shown, it's tricky to get formally correct when referring to K directly in the proof.
Here's my suggestion: Let P(n) be the property we want to show:
P(n) holds iff K(n) yields 1 for n = 1, and 0 for n ≠ 1.
Now we can clearly express what we want to show: ∀n. P(n)
Base case: n ≤ 1
Trivial check by case analysis:
P(0) is ok, since K(0) = 0
P(1) is ok, since K(1) = 1
Induction hypothesis:
P(n) holds for all 0 ≤ n < c.
Inductive step: Show that P(c) holds
By definition of K we have K(c) = K(c-1) × K(c-2)
By the induction hypothesis, we know that P(c-1) and P(c-2) hold.
Since at most one of K(c-1) and K(c-2) can be 1 (the other must then be 0), the product is 0.
Which means that P(c) holds
Qed.
For n = 1, invoking the algorithm gives K = n = 1, so we're done with that case.
For n=0, by definition, K(0) = 0.
For the case where n>1, we can solve it by induction:
Base: for n=2, we get: K(2) = K(1)*K(0) = 1*0 = 0
For n=3, we get: K(3) = K(2)*K(1) = 0*1 = 0 Note that K(2)=0 because we just showed it one line up.
Claim: For any 1<k<n, we get K(k) = 0
Proof for any n>3: K(n) = K(n-1)*K(n-2) =(1) 0*0 = 0
(1): By the induction hypothesis, which applies to both K(n-1) and K(n-2), since n-1, n-2 > 1.
P.S. Note that the claim only holds for non-negative numbers: if you allowed n = -5, for example, you would get K(-5) = -5, a counterexample to the claim.
Say n = 0. Since 0 < 2 we get 0 as result.
Say n = 1. Since 1 < 2 we get 1 as result.
Say n = 2. K(2) = K(1)*K(0). Since K(0) = 0 we get 0 as result.
For n ≥ 2, suppose now that the statement about the algorithm is true, i.e. K(n) = 0.
Now let show that it is also true for n + 1:
K(n + 1) = K(n)*K(n - 1). Since K(n) = 0 it is obvious that K(n)*K(n - 1) = 0.
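To sanity-check the proofs above, here is a direct Python transliteration of K (my own, exponential-time, for testing only):
def K(n):
    if n < 2:
        return n
    return K(n - 1) * K(n - 2)

print([K(n) for n in range(8)])  # [0, 1, 0, 0, 0, 0, 0, 0]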

Solving a recurrence relation

Ok, I'm struggling with Knuth's Concrete Mathematics and there are some examples which I do not understand yet.
J(n) = 2*J(n/2) - 1
It's from the first chapter. Specifically, it solves the Josephus problem, for those who might be familiar with Concrete Mathematics. There's a solution given, but absolutely no explanation.
I tried to solve it with the iteration method. Here's what I've come up with so far:
J(n) = (2^k)*J(n/(2^k)) - (2^k - 1)
And I'm stuck here. Any help or hints will be appreciated.
I will recall the Josephus problem first.
We have n people gathered in a circle. An executioner will process the circle in the following fashion:
The executioner starts from the person at position i = 1
When at position i, he spares i but kills the person following i
He performs this until only one person is alive
By quickly looking at this procedure, we can see that every person in an even position will be killed in the first run. When all the "evens" are dead, who are the remaining people? Well, it depends on the parity of n.
If n is even (say n = 2i), then the remaining people are 1,3,5,...,2i-1. The remaining problem is a circle of i people instead of n. Let's introduce a mapping mapeven between the position in the "new" circle and the initial position in the circle of n people.
mapeven(x) = 2.x - 1
This means that the person at position x in the new circle was in position 2.x - 1 in the initial one. If the survivor's position in the new circle is J(i), then the position that someone must occupy to survive in a circle of n = 2.i people is
mapeven(J(i)) = 2.J(i) - 1
We have the first recursion rule:
For any integer n:
J(2.n) = 2.J(n) - 1
But if n is odd (n = 2.j + 1), then the first run ends up killing all the "evens" and the executioner is at position n. n's follower is 1, thus the next to be killed is 1. The survivors are 3,5,...,2j+1 and the executioner proceeds as if we had a circle of j people. The mapping is a bit different from the even case:
mapodd(x) = 2.x + 1
3 is the new 1, 5 the new 2, and so on...
If the survivor's position in the circle of j people is J(j), then the person who wants to survive in a circle of n = 2j+1 must occupy the position J(2j+1):
J(2j+1) = mapodd(J(j)) = 2.J(j) + 1
The second recursion relationship follows:
For any integer n, we have:
J(2.n + 1) = 2.J(n) + 1
From now on, we are able to compute J(n) for ANY integer n using the two recursion relationships. But if we look a bit further, we can make it better...
Since J(1) = 1 and J(2.n) = 2.J(n) - 1, we have J(n) = 1 for every n = 2^k. OK, that's great, but what about the other numbers? If you write down the first results (say up to n = 11), you will see that the sequence seems pseudo-periodic:
n    : 1 2 3 4 5 6 7 8 9 10 11
J(n) : 1 1 3 1 3 5 7 1 3 5  7
Starting from a power of two, the position seems to increase by 2 at each step until the next power of two, where we start again from 1. Given an integer n, there is a unique integer m(n) such that
2^m(n) ≤ n < 2^(m(n)+1)
Let s(n) be the integer such that n = 2^m(n) + s(n) (I call it "s" for "shift").
The mathematical translation of our observation is that J(n) = 1 + 2.s(n)
Let's prove it using strong induction.
For n = 1, we have J(1) = 1 = 1 + 2.0 = 1 + 2.s(1)
For n = 2, we have J(2) = 1 = 1 + 2.0 = 1 + 2.s(2)
Assuming J(k) = 1 + 2.s(k) for any k such that k ∈ [1,n], let's prove that J(n+1) = 1 + 2.s(n+1).
We have n + 1 = 2^m(n+1) + s(n+1). Since 2^m(n+1) is even (n + 1 ≥ 2 here, so m(n+1) ≥ 1), the parity of n + 1 is carried by s(n+1).
If s(n+1) is even, then we denote s(n+1) = 2j. We have
J(n+1) = 2.J((n+1)/2) - 1 = 2.J(2^(m(n+1)-1) + j) - 1
Since the statement is true for any k ∈ [1,n], it is true for 1 ≤ k = (n+1)/2 < n and thus:
J(n+1) = 2.(2j + 1) - 1 = 2.s(n+1) + 1
We can similarly resolve the odd case.
The formula is established for any integer n:
J(n) = 2.s(n) + 1, with m(n), s(n) ∈ ℕ the unique integers such that
2^m(n) ≤ n < 2^(m(n)+1) and s(n) = n - 2^m(n)
In other terms: m(n) = ⌊log2(n)⌋ and s(n) = n - 2^⌊log2(n)⌋
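The two recursions and the closed form can be checked against each other with a short Python script (my own):
def J(n):
    # the two recursion relationships derived above
    if n == 1:
        return 1
    if n % 2 == 0:
        return 2 * J(n // 2) - 1
    return 2 * J(n // 2) + 1

def closed_form(n):
    m = n.bit_length() - 1   # m(n) = floor(log2(n))
    s = n - (1 << m)         # s(n) = n - 2^m(n)
    return 2 * s + 1

assert all(J(n) == closed_form(n) for n in range(1, 1000))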
Start with a few easy examples, make a guess, then use induction to (dis)prove your guess.
Consider n = some power of 2.
J(2^0) = 1 (given)
J(2^1) = 2J(2^0) - 1 = 1
J(2^2) = 2J(2^1) - 1 = 1
Okay, let's guess J(n) = 1 for all n >= 1.
Base case: J(1) = 1, which is true by definition.
Inductive step: assume J(k) = 1 for some arbitrary k. Then J(2k) = 2J(k) - 1 = 1, and with division rounding down, J(2k+1) = 2J(k) - 1 = 1 as well.
Therefore, by induction, J(n) = 1 for all n (assuming division rounds down to integers).
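A quick check of this guess (my snippet), reading n/2 as floor division:
def J_floor(n):
    return n if n == 1 else 2 * J_floor(n // 2) - 1

assert all(J_floor(n) == 1 for n in range(1, 100))
Note this treats the single recurrence J(n) = 2*J(n/2) - 1 in isolation; the full Josephus problem handles odd n with a different rule, as described in the first answer.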
J(n)=2*J(n/2)-1
J(n)-1=2*J(n/2)-2
J(n)-1=2*(J(n/2)-1)
T(n)=2*T(n/2), where T(n)=J(n)-1
T(n)=2^log2(n)*T(1)
J(n)=2^log2(n)*(J(1)-1)+1 = n*(J(1)-1)+1, which is 1 since J(1) = 1
