Number prodigy is given X: there is an X-digit number N, and M is the reverse of N. The prodigy wants to know how many X-digit numbers N satisfy N + M = 10^X - 1, where N must not have trailing zeroes, i.e. N % 10 != 0.
In case of X=1, 9 such combinations exist.
Denote by A[i] the i-th digit of A (0-based).
First, observe that to get N + M = 10^X - 1 (a string of X nines) we need N[i] + M[i] = 9 for every i, with no carry anywhere. Since M[i] = N[X-1-i], this means N[i] + N[X-1-i] = 9, so once N[i] is set, N[X-1-i] is determined as well.
We can now derive a recursive formula:
F(X) = 10*F(X-2)
The idea is: we look at the first digit of N; there are 10 possibilities for it, and each choice fixes both N[0] and N[X-1].
However, this allows leading and trailing zeros, which we don't want: the first and last digits can be anything but 0.
G(X) = 8*F(X-2)
The above chooses one of 1, 2, ..., 8 as N[0] (9 is excluded because it would force a trailing zero), then sets the last digit in the single possible way, N[X-1] = 9 - N[0], and invokes the recursive call without restrictions. Note that neither N[0] nor N[X-1] can be zero this way.
Base cases:
F(0) = 1
F(1) = 0
F(1) = 0 because there is no digit d such that d + d = 9.
All in all, we found a recursive formula that computes the total count. With some basic algebra this recursion can be transformed into a closed form; I leave that part to you.
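To make the recursion concrete, here is a small Python sketch (the function names F and G and the brute-force check are mine) that computes the count and verifies it against enumeration for small X:

def F(x):
    # number of x-digit strings d[0..x-1] (leading zeros allowed)
    # with d[i] + d[x-1-i] = 9 for all i
    if x == 0:
        return 1
    if x == 1:
        return 0          # no digit d with d + d = 9
    return 10 * F(x - 2)  # choose d[0] freely, d[x-1] is forced

def G(x):
    # number of x-digit numbers N with N + reverse(N) = 10^x - 1
    # and no leading or trailing zero
    if x == 1:
        return 0          # d + d = 9 is impossible for a single digit
    return 8 * F(x - 2)   # d[0] in 1..8 forces d[x-1] = 9 - d[0], both nonzero

def brute(x):
    count = 0
    for n in range(10 ** (x - 1), 10 ** x):
        if n % 10 != 0 and n + int(str(n)[::-1]) == 10 ** x - 1:
            count += 1
    return count

for x in range(1, 7):
    print(x, G(x), brute(x))   # the two counts should agree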
Related
Given the following pseudo-code, the question is how many times, on average, the variable m is updated.
a[1...n]: array with n random elements
m = a[1]
for i = 2 to n do
    if a[i] < m then m = a[i]
end for
One might answer that, since all elements are random, the variable will on average be updated in half of the iterations of the for loop, plus one for the initialization.
However, I suspect that there must be a better (and possibly the only correct) way to prove it using the binomial distribution with p = 1/2. This way, the average number of updates of m would be
M = 1 + Σ_{k=1..n-1} k · C(n,k) · p^k · (1-p)^(n-k)
where C(n,k) is the binomial coefficient. I have tried to solve this but got stuck a few steps in, since I do not know how to continue.
Could someone explain which of the two answers is correct and, if it is the second one, show me how to calculate M?
Thank you for your time
Assuming the elements of the array are distinct, the expected number of updates of m is the nth harmonic number, Hn, which is the sum of 1/k for k ranging from 1 to n.
The summation formula can also be represented by the recursion:
H_1 = 1
H_n = H_{n-1} + 1/n   (n > 1)
It's easy to see that the recursion corresponds to the problem.
Consider all permutations of n−1 numbers, and assume that the expected number of assignments is Hn−1. Now, every permutation of n numbers consists of a permutation of n−1 numbers, with a new smallest number inserted in one of n possible insertion points: either at the beginning, or after one of the n−1 existing values. Since it is smaller than every number in the existing series, it will only be assigned to m in the case that it was inserted at the beginning. That has a probability of 1/n, and so the expected number of assignments of a permutation of n numbers is Hn−1 + 1/n.
Since the expected number of assignments for a vector of length one is obviously 1, which is H1, we have an inductive proof of the recursion.
Hn is asymptotically equal to ln n + γ where γ is the Euler-Mascheroni constant, approximately 0.577. So it increases without limit, but quite slowly.
The values for which m is updated are called left-to-right maxima, and you'll probably find more information about them by searching for that term.
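If you want a quick empirical check of this result, the following Python sketch (names are mine) runs the loop from the question on random permutations of distinct values, counting the initialization as an update as the question does, and compares the average with H_n and with ln n + γ:

import random
from math import log

def count_updates(a):
    # run the loop from the question and count every assignment to m,
    # including the initial m = a[1]
    updates = 1
    m = a[0]
    for x in a[1:]:
        if x < m:
            m = x
            updates += 1
    return updates

n, trials = 20, 100000
avg = sum(count_updates(random.sample(range(10**6), n))
          for _ in range(trials)) / float(trials)
H_n = sum(1.0 / k for k in range(1, n + 1))
# simulated average, H_n, and the asymptotic approximation ln n + gamma
print(avg, H_n, log(n) + 0.5772156649)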
I liked rici's answer, so I decided to elaborate its central argument a little more, to make it clearer to myself.
Let H[k] be the expected number of assignments needed to compute the min m of an array of length k, as indicated in the algorithm under consideration. We know that
H[1] = 1.
Now assume we have an array of length n > 1. The min can be in the last position of the array or not. It is in the last position with probability 1/n. It is not with probability 1 - 1/n. In the first case the expected number of assignments is H[n-1] + 1. In the second, H[n-1].
If we multiply the expected number of assignments of each case by their probabilities and sum, we get
H[n] = (H[n-1] + 1)*1/n + H[n-1]*(1 - 1/n)
= H[n-1]*1/n + 1/n + H[n-1] - H[n-1]*1/n
= 1/n + H[n-1]
which shows the recursion.
Note that the argument is valid because the min is either in the last position or in one of the first n-1 positions, but never in both. Thus we are using the fact that all the elements of the array are distinct.
We got this problem in our course, and no one I talked to managed to solve it. I would like some help. So here's the problem:
Let A be an array of length n which contains n digits (each digit is between 0 and 9).
A numeral sub-sequence of A is a sequence of positive numbers whose digits form a sub-sequence of A, where all digits of any single number in the sequence appear consecutively (in a row) in A.
For example: the sequence 13,1,345,89,23 is a numeral sub-sequence of input array A:
[1,3,5,1,2,3,4,5,8,9,4,5,2,3]
The length of a numeral sub-sequence is the number of numbers that appear in it (in the example above: 5).
A numeral sub-sequence is increasing if every number in the sequence is bigger than the number before it.
The task is to find a dynamic-programming algorithm (based on a recursive formula) that finds the longest increasing numeral sub-sequence of an input array A.
Thanks in advance for all helpers!
Look at the first digit in the array. Either this digit is not part of a number in your number sequence or it is. If it is, the number could have 1, 2, ..., n digits. For each guess, return:
not in a number: return f(array[2...n], -1)
1st digit of 1-digit number: return array[1] union f(array[2...n], number(array[1]))
1st digit of 2-digit number: return array[1...2] union f(array[3...n], number(array[1...2]))
1st digit of 3-digit number: return array[1...3] union f(array[4...n], number(array[1...3]))
...
1st digit of n-digit number: return array[1...n]
There are some optimizations you can do here to skip some steps along the way.
f(array[1...k], x) = f(array[1...k], y) if the smallest choice for the next number in the sequence given hypothetical last numbers x and y is the same. So, if the smallest choice for the next number in array[1...k] is the same for x and y, and we already computed the value of f for x, we can reuse that value for y.
f(array[1...k], x) = c + f(array[2...k], x) whenever array[1] = 0, where c = 1 if x < 0 and c = 0 if x >= 0. That is, we can ignore leading zeroes except possibly a leading zero at the beginning of the array which should always be chosen as our first one-digit number.
when deciding whether a digit will be the first digit of a k-digit number, if you never choose leading zeroes, you know an upper bound on the number of remaining numbers in your sequence is given by n/k, since any numbers chosen after this one will need to be at least k digits long. If you remember the longest sequence you've seen so far, you can recognize paths that have no hope of doing better than what you've seen and ignore them.
if an array has at least k(k+1)/2 non-zero digits in it, there is a number sequence of length at least k obtained by taking numbers with 1, 2, ..., k non-zero digits sequentially left to right. So, if you pre-compute this value, you can potentially avoid some paths right off the bat.
Here's rough pseudocode with the optimizations discussed:
solve(array[1...n])
    z = number of non-zero entries in array
    last_number = -1
    min_soln = floor((sqrt(1 + 8z) - 1) / 2)
    return solve_internal(array[1...n], min_soln, last_number)

memo = {}

solve_internal(array[1...n], min_soln, last_number)
    if n = 0 then return {}
    // ignore potentially leading zeroes, except a zero at the very start,
    // which should always be taken as the one-digit number 0
    if array[1] = 0 then
        if last_number < 0 then
            return {0} union solve_internal(array[2...n], min_soln - 1, 0)
        else
            return solve_internal(array[2...n], min_soln, last_number)
    // abort since we don't have enough digits left to beat the best solution seen
    if last_number >= 0 and floor(n / #digits(last_number)) < min_soln then
        return {}
    // look up the current situation in previous partial solutions
    z = smallest number formable from a prefix of array that is greater than last_number
    if memo contains (n, z) then
        return memo[n, z]
    soln = {}
    for k = 1 to n do
        next = number(array[1...k])
        if next > last_number then
            soln_k = {next} union solve_internal(array[k+1...n], min_soln - 1, next)
            if |soln_k| > |soln| then
                soln = soln_k
                min_soln = |soln|
    memo[n, z] = soln
    return soln
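If it helps to see the recursion running, here is a minimal Python sketch of the plain recursion (without the memo/pruning optimizations above); the name longest_numeral_subseq is mine, it returns only the length, and it is intended for checking small inputs such as the example from the question rather than for performance:

from functools import lru_cache

def longest_numeral_subseq(digits):
    digits = tuple(digits)
    n = len(digits)

    @lru_cache(maxsize=None)
    def f(i, last):
        # length of the longest increasing numeral sub-sequence of digits[i:],
        # given that the previously chosen number was `last`
        if i == n:
            return 0
        best = f(i + 1, last)            # digits[i] is not used in any number
        value = 0
        for j in range(i, n):            # digits[i..j] form the next number
            value = value * 10 + digits[j]
            if value > last:
                best = max(best, 1 + f(j + 1, value))
        return best

    return f(0, 0)

# the example array from the question
print(longest_numeral_subseq([1, 3, 5, 1, 2, 3, 4, 5, 8, 9, 4, 5, 2, 3]))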
Problem: given an n-digit number, which k (k < n) digits should be deleted from it so that the remaining number is the smallest possible (the relative order of the remaining digits must not change)? E.g., deleting 2 digits from '24635', the smallest remaining number is '235'.
A solution: delete the first digit (from left to right) which is larger than or equal to its right neighbour, or the last digit if no such digit exists. Repeat this procedure k times. (See codecareer for reference. There are other solutions, such as geeksforgeeks and stackoverflow, but I found the one described here more intuitive, so I prefer it.)
The question now is how to prove that the solution above is correct, i.e. why making the number smallest after deleting a single digit at each step guarantees that the final number is the smallest overall.
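For reference, here is a small Python sketch of the deletion procedure (the function name is mine). Note that it uses a strict comparison, deleting the first digit strictly greater than its right neighbour, which is the variant the k = 1 proof below actually analyses (a_j > a_{j-1}); with equal adjacent digits, deleting the earlier one is not always best (e.g. 112 with k = 1), so the strict reading is the safer one:

def remove_k_digits_smallest(number, k):
    digits = list(str(number))
    for _ in range(k):
        # delete the first digit (left to right) that is strictly greater than
        # its right neighbour; if the digits never descend, delete the last one
        for i in range(len(digits) - 1):
            if digits[i] > digits[i + 1]:
                del digits[i]
                break
        else:
            digits.pop()
    return ''.join(digits)

print(remove_k_digits_smallest(24635, 2))   # prints 235, as in the example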
Suppose k = 1.
Let m = Σ_{i=0..n} a_i·b^i be an (n+1)-digit number with digits a_n a_{n-1} ... a_1 a_0 in base b, i.e. 0 ≤ a_i < b for all 0 ≤ i ≤ n (e.g. b = 10).
Proof
Suppose ∃ j > 0 with a_j > a_{j-1}, and let j be maximal with this property.
This means a_j is the last digit of a (not necessarily strictly) increasing run of consecutive digits, read from the most significant digit downwards.
Now remove the digit a_j from the number; the resulting number m' has the value
m' = Σ_{i=0..j-1} a_i·b^i + Σ_{i=j+1..n} a_i·b^(i-1)
The aim of this reduction is to maximize the difference m - m'. So let's take a look:
m - m' = Σ_{i=0..n} a_i·b^i - (Σ_{i=0..j-1} a_i·b^i + Σ_{i=j+1..n} a_i·b^(i-1))
       = a_j·b^j + Σ_{i=j+1..n} (a_i·b^i - a_i·b^(i-1))
       = a_n·b^n + Σ_{i=j..n-1} (a_i - a_{i+1})·b^i
Can there be a better choice of j to get a bigger difference?
Since a_n ... a_j is an increasing sub-sequence, a_i - a_{i+1} ≥ 0 holds for j ≤ i ≤ n-1. So choosing some j' > j instead of j only turns positive terms into zeros, i.e. the difference does not get bigger, and it gets strictly smaller if there exists an i with a_{i+1} < a_i (strictly smaller).
j is supposed to be maximal, i.e. a_{j-1} - a_j < 0. We also know
b^(j-1) > Σ_{i=0..j-2} (b-1)·b^i = b^(j-1) - 1
This means that if we choose some j' < j, the negative term (a_{j-1} - a_j)·b^(j-1) ≤ -b^(j-1) outweighs whatever the lower-order terms can contribute, so the overall addition to the difference is negative and the difference again does not get bigger.
If ∄ j > 0 with a_j > a_{j-1}, the digits are non-decreasing from left to right and the above argument works with j = 0, i.e. the last digit is removed.
What is left to do?
This is only the proof that your algorithm works for k = 1.
It is possible to extend the above proof to multiple sub-sequences of (not necessarily strictly) increasing digits. It's exactly the same proof, just much less readable because of the number of indices you need.
Maybe you can also use induction, since there are no interactions between the digits (nothing blocks the following choices).
Here is a simple argument that your algorithm works for any k. Suppose there is a digit in the m-th place that is less than or equal to its right (m+1)-th digit neighbour, and you delete the m-th digit but not the (m+1)-th. Then you can delete the (m+1)-th digit instead of the m-th, and you will get an answer less than or equal to your original answer.
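If you prefer an empirical sanity check to go with the exchange argument, the sketch below compares the greedy routine from above (remove_k_digits_smallest) with an exhaustive search over all ways of keeping n-k digits; results are compared as equal-length strings, which is the same as comparing them as numbers:

from itertools import combinations
import random

def brute_force_smallest(number, k):
    s = str(number)
    keep = len(s) - k
    # try every way of keeping `keep` digits in their original order
    return min(''.join(s[i] for i in idx)
               for idx in combinations(range(len(s)), keep))

random.seed(1)
for _ in range(1000):
    n = random.randint(10, 10**7)
    k = random.randint(1, len(str(n)) - 1)
    assert remove_k_digits_smallest(n, k) == brute_force_smallest(n, k)
print("greedy agrees with brute force on these random tests")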
notice: this proof is for building the maximum number after removing k digits, but the thinking is similar
Key lemma: the maximum (m+1)-digit number contains the maximum m-digit number, for all m = 0, 1, ..., n-1.
proof:
Greedy solution to delete one digit from a number to get the maximum result: just delete the first digit whose next digit is greater than it, or the last digit if the digits are in non-ascending order. This is very easy to prove.
We use contradiction to prove the lemma.
Suppose the first time the lemma is broken is at m = k, so S(k) ⊄ S(k+1). Notice that S(k) ⊂ S(n), as the initial number contains all the sub-optimal ones, so there must exist an x such that S(k) ⊂ S(x) and S(k) ⊄ S(x-1), with k + 2 <= x <= n.
We use the greedy solution above to delete exactly one digit S[x][y] from S(x) to get S(x-1), so S[x][y] ∈ S(x), S[x][y] ∉ S(x-1), and S(k) must contain it. We now derive a contradiction by showing that S(k) does not actually need to contain this digit.
According to our greedy solution, all digits from the beginning up to S[x][y] are in non-ascending order.
If S[x][y] is at the tail, then S(k) can simply be the first k digits of S(x) → contradiction!
Otherwise, we first note that all digits in S[x][1..y] are in S(k): if some S[x][z] with 1 <= z <= y-1 were not in S(k), we could shift the digits of S(k) that lie in the range S[x][z+1..y] one position to the left and obtain a greater or equal S(k). Therefore, since x >= k + 2, there are at least 2 digits after S[x][y] that are not in S(k). Then we can follow the prefix of S(k) up to S[x][y], but instead of using S[x][y] we continue from S[x][y+1]. As S[x][y+1] > S[x][y], we can build a greater S(k) → contradiction!
So the lemma is proved. If we have S(m+1), and we know S(m+1) contains S(m), then S(m) must be the maximum number obtained by removing one digit from S(m+1).
The problem is to find the count of numbers between A and B (inclusive) that have sum of digits equal to S.
Also print the smallest such number between A and B (inclusive).
Input:
Single line consisting of A,B,S.
Output:
Two lines.
In first line the number of integers between A and B having sum of digits equal to S.
In second line the smallest such number between A and B.
Constraints:
1 <= A <= B < 10^15
1 <= S <= 135
Source: Hacker Earth
My solution works for only 30% of their inputs. What could be the best possible solution to this?
The algorithm I am using now computes the digit sum of the starting number and then just increments it, recomputing the full sum whenever the number rolls over to a multiple of 10.
Below is the solution in Python:
def digit_sum(n):
    if n < 10:
        return n
    return n % 10 + digit_sum(n / 10)

stri = raw_input().split(" ")
a = long(stri[0])
b = long(stri[1])
s = long(stri[2])

count = 0
smallest = -1                 # smallest matching number found so far (-1 = none yet)
su = digit_sum(a)
while a <= b:
    if a % 10 == 0:
        # on a rollover the digit sum changes non-trivially, so recompute it
        su = digit_sum(a)
    if s == su:
        count += 1
        if smallest < 0:
            smallest = a      # a only increases, so the first match is the smallest
    a += 1
    su += 1                   # correct as long as the last digit did not roll over
print count
print smallest
There are two separate problems here: finding the smallest number between those numbers that has the right digit sum and finding the number of values in the range with that digit sum. I'll talk about those problems separately.
Counting values between A and B with digit sum S.
The general approach for solving this problem will be the following:
Compute the number of values less than or equal to A - 1 with digit sum S.
Compute the number of values less than or equal to B with digit sum S.
Subtract the first number from the second.
To do this, we should be able to use a dynamic programming approach. We're going to try to answer queries of the following form:
How many D-digit numbers are there whose first digit is k and whose digits sum up to S?
We'll create a table N[D, k, S] to hold these values. We know that D is going to be at most 16 and that S is going to be at most 136, so this table will have only 10 × 16 × 136 = 21,760 entries, which isn't too bad. To fill it in, we can use the following base cases:
N[1, S, S] = 1 for 0 ≤ S ≤ 9, since there's only one one-digit number that sums up to any value less than ten.
N[1, k, S] = 0 for 0 ≤ S ≤ 9 if k ≠ S, since a one-digit number whose only digit is k cannot have digit sum S ≠ k.
N[1, k, S] = 0 for 10 ≤ S ≤ 135, since no one-digit number has a digit sum greater than nine.
N[1, k, S] = 0 for any S < 0.
Then, we can use the following logic to fill in the other table entries:
N[D + 1, k, S] = sum(i from 0 to 9) N[D, i, S - k].
This says that the number of (D+1)-digit numbers whose first digit is k that sum up to S is given by the number of D-digit numbers that sum up to S - k. The number of D-digit numbers that sum up to S - k is given by the number of D-digit numbers that sum up to S - k whose first digit is 0, 1, 2, ..., 9, so we have to sum up over them.
Filling in this DP table takes only constant time (its size doesn't depend on A or B), and in fact you could conceivably precompute it and hardcode it into the program if you were really concerned about time.
So how can we use this table? Well, suppose we want to know how many numbers that sum up to S are less than or equal to some number X. To do this, we can process the digits of X one at a time. Let's write X one digit at a time as d1 ... dn. We can start off by looking at N[n, d1, S]. This gives us the number of n-digit numbers whose first digit is d1 that sum up to S. This may overestimate the number of values less than or equal to X that sum up to S. For example, if our number is 21,111 and we want the number of values that sum up to exactly 12, then looking up this table value will give us false positives for numbers like 29,100 that start with a 2 and are five digits long, but which are still greater than X. To handle this, we can move to the next digit of the number X. Since the first digit was a 2, the rest of the digits in the number must sum up to 10. Moreover, since the next digit of X (21,111) is a 1, we can now subtract from our total the number of 4-digit numbers starting with 2, 3, 4, 5, ..., 9 that add up to 10. We can then repeat this process one digit at a time.
More generally, our algorithm will be as follows. Let X be our number and S the target sum. Write X = d1d2...dn and compute the following:
# Begin with all numbers whose first digit is at most d[1]; numbers whose
# first digit is strictly smaller certainly count, and the ones starting
# with d[1] are handled by the subtractions below.
result = 0
for j from 0 to d[1]:
    result += N[n, j, S]
# Now exclude the numbers that agree with X on the digits d[1..i-1] but whose
# i-th digit is larger than d[i]; these are exactly the counted numbers > X.
S -= d[1]
for i = 2 to n:
    for j = d[i] + 1 to 9:
        result -= N[n - i + 1, j, S]
    S -= d[i]
The value of result will then be the number of values less than or equal to X that sum up to exactly S. This algorithm will only run for at most 16 iterations, so it should be very quick. Moreover, using this algorithm and the earlier subtraction trick, we can use it to compute how many values between A and B sum up to exactly S.
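To make the counting part concrete, here is a hedged Python sketch of the table construction and the digit walk above (the variable names, the 16-digit cap and the small example at the end are mine):

MAX_D, MAX_S = 16, 135

# N[d][k][s]: number of d-digit strings (leading zeros allowed) whose first
# digit is k and whose digits sum to s
N = [[[0] * (MAX_S + 1) for _ in range(10)] for _ in range(MAX_D + 1)]
for k in range(10):
    N[1][k][k] = 1
for d in range(1, MAX_D):
    for k in range(10):
        for s in range(k, MAX_S + 1):
            N[d + 1][k][s] = sum(N[d][i][s - k] for i in range(10))

def count_up_to(x, s):
    # how many integers in [0, x] have digit sum exactly s
    if x < 0 or not (0 <= s <= MAX_S):
        return 0
    digits = [int(c) for c in str(x)]
    n = len(digits)
    result = 0
    # every number whose first digit is at most digits[0] is a candidate ...
    for j in range(digits[0] + 1):
        result += N[n][j][s]
    # ... minus the ones sharing a prefix with x whose next digit is too large
    remaining = s
    for i in range(1, n):
        remaining -= digits[i - 1]
        if remaining < 0:
            break
        for j in range(digits[i] + 1, 10):
            result -= N[n - i][j][remaining]
    return result

def count_between(a, b, s):
    return count_up_to(b, s) - count_up_to(a - 1, s)

print(count_between(1, 100, 10))   # the nine values 19, 28, ..., 91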
Finding the smallest value in [A, B] with digit sum S.
We can use a similar trick with our DP table to find the smallest number greater than or equal to A that sums up to exactly S. I'll leave the details as an exercise, but as a hint, work one digit at a time, trying to find the smallest number for which the DP table returns a nonzero value.
Hope this helps!
The problem statement is following:
Given N, we need to find x1, x2, ..., xp such that N = x1 + x2 + ... + xp, p is minimum (p is the number of terms in the sum), and every number from 1 to N-1 can be obtained as the sum of some subset of (x1, x2, ..., xp). Numbers in the set may be repeated.
For example, if N = 7:
7 = 1+2+4
and 6 = (2,4), 5 = (4,1), 4 = (4), 3 = (1,2), and so on.
Example 2:
8 = 1+2+4+1
Example 3 (invalid):
8 = 1+2+5
But we can't get 4 from any subset of (1,2,5), so (1,2,5) is not a valid combination.
My approach: if N-1 can be written as a sum of p terms, then N needs either p or p+1 terms. But that approach requires checking all possible combinations which sum up to N-1 and have p terms. Does anyone have a better solution than this?
Solution:
Step1:
Assume that our answer contains K entries. From these numbers we can obtain at most 2^K - 1 different non-empty subset sums, because each entry either appears in a sum or does not. Since we need to produce every value from 1 up to N, we need 2^K - 1 >= N, i.e. K >= log2(N + 1).
Step2:
After step 1, we know that our answer must include K entries, but what are these entries actually? Assume that our entries are (a1, a2, a3, ..., aK). Then a number P can be written as
P = a1*b1 + a2*b2 + a3*b3 + ... + aK*bK, where each b[i] is 0 or 1. If we take a[i] = 2^(i-1), then (bK ... b2 b1) is simply the binary representation of P.
You should take all the numbers 1, 2, 4, ..., 2^k, where k is the largest integer with 1 + 2 + ... + 2^k <= N, together with N - (1 + 2 + ... + 2^k) (the last one only if it isn't 0).
Proof
First of all, if we take only k numbers, we can get at most 2^k - 1 different sums other than 0. So if N >= 2^k, we need at least k + 1 numbers. So you can see that if our group of numbers is correct, it is minimal in size (or one of the minimums).
It's easy to see that we can get any number from 0 to 2^(k+1) - 1 using the first numbers (the powers of two). What if we need more? We just include the last number; since it is less than 2^(k+1), the remaining difference lies between 0 and 2^(k+1) - 1 and can be made up using the first elements.
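Here is a small Python sketch of this construction together with a brute-force check that every value from 1 to N really is a subset sum (function names are mine; itertools is used only for the check):

from itertools import combinations

def minimal_cover_set(n):
    # take 1, 2, 4, ..., 2^k while their total stays <= n,
    # then add the remainder if it is non-zero
    parts, total, p = [], 0, 1
    while total + p <= n:
        parts.append(p)
        total += p
        p *= 2
    if n - total > 0:
        parts.append(n - total)
    return parts

def reachable_sums(parts):
    sums = set()
    for r in range(1, len(parts) + 1):
        for combo in combinations(parts, r):
            sums.add(sum(combo))
    return sums

for n in range(1, 200):
    parts = minimal_cover_set(n)
    assert sum(parts) == n
    assert reachable_sums(parts) >= set(range(1, n + 1))
print(minimal_cover_set(7), minimal_cover_set(8))   # [1, 2, 4] and [1, 2, 4, 1]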
I haven't worked out the numbers on this, but you should be very, very interested in the fact that you have listed the first three powers of two.
If I were looking for a better solution, that's where I'd start.