Permutation of N numbers based on a property - algorithm

I have a number N and a string S of length N-1. I have to create an array A of size N containing a permutation of the numbers 0 to N-1 (indexed from 0 to N-1) such that, for each i from 0 to N-2:
if S[i] == '1' then A[i] > A[i+1]
else A[i] < A[i+1]
I have to find the number of ways to create this array.
Problem: I know this can be solved using dynamic programming, but the main problem is: how do I keep track of the numbers that have already been used?
If N is between 1 and 25 I can use dp[1<<n][n], where the mask tells me which numbers have been used. How can this be solved when N is up to 3000?
How can I keep track of the numbers used in a sequence, given that each number can be used only once?
For example:
N = 3
S = "11"
Ans = 1
The only permutation that satisfies the given pattern is (2, 1, 0).
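The bitmask DP mentioned in the question can be sketched as follows (a minimal sketch; the function and variable names are my own, and this only scales to roughly N ≤ 20, not to N = 3000):

```python
from functools import lru_cache

# Sketch of the dp[1<<n][n] idea: the mask records which numbers are used,
# `last` is the most recently placed number.
def count_permutations(n, s):
    @lru_cache(maxsize=None)
    def dp(mask, last):
        if mask == (1 << n) - 1:
            return 1  # all numbers placed
        i = bin(mask).count("1") - 1  # comparison S[i] between positions i and i+1
        total = 0
        for nxt in range(n):
            if mask & (1 << nxt):
                continue  # number already used
            # S[i] == '1' demands a descent, otherwise an ascent
            if (s[i] == '1') == (last > nxt):
                total += dp(mask | (1 << nxt), nxt)
        return total

    return sum(dp(1 << first, first) for first in range(n))
```

For the example above, `count_permutations(3, "11")` returns 1. The mask makes the state space 2^n * n, which is exactly why this approach stops working long before N = 3000.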

Given four arrays of equal size n, how many ways can I choose an element of each array with a sum of k?

If I have four arrays of the same size, how do I determine the number of ways to choose one element from each array such that the four elements sum to a given value k?
For example, with the arrays below there are 81 ways to choose an element from each array with a sum of 4.
A. 1 1 1
B. 1 1 1
C. 1 1 1
D. 1 1 1
I am not sure how to do this without some sort of brute forcing.
The idea
Number of ways to get sum k from 1 array = number of occurrences of k in the array.
Number of ways to get sum k from 2 arrays = sum(count(k-x) for each element x in array 2), where count(y) is number of ways of getting sum y from the first array. If you memoize the results from point #1, getting count(y) can be done in O(1).
Number of ways to get sum k from 3 arrays = sum(count(k-x) for each element x in array 3), where count(y) is number of ways of getting sum y from the first two arrays. Again, if you memoize the results from point #2, getting count(y) can be done in O(1).
You should get the concept by now; number of ways to get sum k from n arrays = sum(count(k-x) for each element x in the nth array), where count(y) is number of ways of getting sum y from the first n-1 arrays.
Dynamic programming
You need to build a memoization table which basically tells you the number of ways of getting sum x from the first y arrays. If you have this table for the first n-1 arrays, you can calculate all sums including the n-th array efficiently and update the table.
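The layered counting described above can be sketched as follows (names are my own; `ways[s]` plays the role of the memoization table for the arrays processed so far):

```python
from collections import Counter

# ways[s] = number of ways to reach sum s using one element
# from each array processed so far.
def count_ways(arrays, k):
    ways = Counter({0: 1})  # empty selection: one way to reach sum 0
    for arr in arrays:
        nxt = Counter()
        for s, cnt in ways.items():
            for x in arr:
                nxt[s + x] += cnt  # count(k-x) lookup, extended by x
        ways = nxt
    return ways[k]
```

With the four all-ones arrays from the example, `count_ways([[1, 1, 1]] * 4, 4)` gives 81, since each array independently contributes 3 choices.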

Calculating limits in dynamic programming

I found this question on TopCoder:
Your friend Lucas gave you a sequence S of positive integers.
For a while, you two played a simple game with S: Lucas would pick a number, and you had to select some elements of S such that the sum of all numbers you selected is the number chosen by Lucas. For example, if S={2,1,2,7} and Lucas chose the number 11, you would answer that 2+2+7 = 11.
Lucas now wants to trick you by choosing a number X such that there will be no valid answer. For example, if S={2,1,2,7}, it is not possible to select elements of S that sum up to 6.
You are given the int[] S. Find the smallest positive integer X that cannot be obtained as the sum of some (possibly all) elements of S.
Constraints: - S will contain between 1 and 20 elements, inclusive. - Each element of S will be between 1 and 100,000, inclusive.
But in the editorial solution it has been written:
How about finding the smallest impossible sum? Well, we can try the following naive algorithm: First try with x = 1, if this is not a valid sum (found using the methods in the previous section), then we can return x, else we increment x and try again, and again until we find the smallest number that is not a valid sum.
Let's find an upper bound for the number of iterations, i.e. the number of values of x we will need to try before we find a result. First of all, the maximum sum possible in this problem is 100000 * 20 (all numbers are the maximum 100000), which means that 100000 * 20 + 1 will not be a possible value. We can be certain to need at most 2000001 steps.
How good is this upper bound? If we had 100000 in each of the 20 numbers, 1 wouldn't be a possible sum, so we would actually need just one iteration in that case. If we want 1 to be a possible sum, we should have 1 among the initial elements. Then we need a 2 (else we would only need 2 iterations), then a 4 (3 can be found by adding 1+2), then 8 (numbers from 5 to 7 can be found by adding some of the first 3 powers of two), then 16, 32, .... It turns out that with powers of 2 we can easily build inputs that require many iterations. With the first 17 powers of two, we can cover up to the first 262143 integer numbers. That should be a good estimate for the largest number. (We cannot use 2^18 in the input, since it is larger than 100000.)
Up to 262143 times, we need to query if a number x is in the set of possible sums. We can just use a boolean array here. It appears that even O(log(n)) data structures should be fast enough, however.
I understood the first paragraph, but I couldn't follow the paragraph starting with "How good is this upper bound?". How did they deduce that we need to query up to 262143 times whether a number x is in the set of possible sums?
I am a newbie at dynamic programming, so it would be great if somebody could explain this to me.
Thank you.
The idea is as follows:
If the input sequence contains the first k powers of two: 2^0, 2^1, ..., 2^(k-1), then the sum can be any integer between 0 and 2^k - 1. Since the greatest power of two that can appear in the sequence is 2^17, the greatest sum you can build from 18 numbers is 2^18 - 1 = 262,143. If a power of two were missing, there would be a smaller sum that could not be achieved.
However, the statement is missing that there may be 2 more numbers in the sequence (at most 20). From these two numbers, you can repeat the same process. Hence, the maximum number to check is actually (2^18) - 1 + (2^2) - 1.
You may wonder why we use powers of two and not any other powers. The reason is the binary selection that we perform on the numbers in the input sequence. Either we add a number to the sum or we don't. So, if we represent this selection for number ni as a selection variable si (either 0 or 1), then the possible sum is:
s = s0 * n0 + s1 * n1 + s2 * n2 + ...
Now, if we choose the ni to be powers of two ni = 2^i, then:
s = s0 * 2^0 + s1 * 2^1 + s2 * 2^2 + ...
= sum si * 2^i
This is equivalent to the binary representations of numbers (see Positional Notation). By definition, different choices for the selection variables will produce different sums. Hence, the number of possible sums is maximal by choosing powers of two in the input sequence.
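The naive algorithm from the editorial (mark every achievable subset sum in a boolean array, then scan upward for the first gap) can be sketched as follows; the function name is my own:

```python
# Classic 0/1 subset-sum DP, then a linear scan for the first gap.
def smallest_impossible_sum(S):
    limit = sum(S) + 1  # sum(S) + 1 is guaranteed unreachable
    reachable = [False] * (limit + 1)
    reachable[0] = True
    for v in S:
        # iterate downward so each element is used at most once
        for s in range(limit - v, -1, -1):
            if reachable[s]:
                reachable[s + v] = True
    x = 1
    while reachable[x]:
        x += 1
    return x
```

On the statement's example S = {2, 1, 2, 7} this returns 6, and on the worst-case-style input of consecutive powers of two {1, 2, 4} it returns 8 = 2^3, illustrating why powers of two maximize the number of iterations.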

Finding a certain set of numbers from a list

I am working on this project where the user inputs a list of numbers. I put these numbers in an array. I need to find a set of numbers with a given length whose sum is divisible by 5.
For example, if the list is 9768014, and the length required is 6, then the output would be 987641.
What algorithm do I need to find that set of numbers?
You can solve this with dynamic programming. Let f(n, m, k) be the largest index between 1 and n of a number in a subset of indices {1, 2, ..., n} that uses m numbers and whose sum is k mod 5. (It's possible that f(n, m, k) = None.) You can compute f(n+1, m, k) and f(n, m+1, k) if you know the values of f(N, M, k) for all N <= n+1 and M < m, for all N <= n and M < m+1, and for N = n, M = m, in each case for all k = 0, 1, 2, 3, 4. If you ever find that f(n, m, 0) has a solution where m is your desired subset size, you're done. Also, you never have to compute f(N, M, k) for any M greater than the desired subset size. Total complexity is O(n*m), where n is the total count of numbers and m is the size of the subset you are trying to reach.
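A simplified variant of this DP can be sketched as follows (a sketch, not the answer's exact formulation: I index the table by subset size and remainder mod 5, and keep one witness subset per state; names are my own):

```python
# dp[m][r] holds one subset of m numbers whose sum is r mod 5, or None.
def find_subset(nums, length):
    dp = [[None] * 5 for _ in range(length + 1)]
    dp[0][0] = []  # empty subset: size 0, remainder 0
    for x in nums:
        # iterate sizes downward so each number is used at most once
        for m in range(length - 1, -1, -1):
            for r in range(5):
                if dp[m][r] is not None and dp[m + 1][(r + x) % 5] is None:
                    dp[m + 1][(r + x) % 5] = dp[m][r] + [x]
    return dp[length][0]  # None if no such subset exists
```

For the digits of 9768014 and length 6 this finds the six digits {9, 8, 7, 6, 4, 1}, whose sum 35 is divisible by 5, matching the example output 987641 (up to ordering).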

Is this possible? Last few digits of sum equal to another number

I have an n-digit number and a list of numbers, from which any number can be used any number of times.
Taking numbers from the list, how do I know whether it is possible to generate a sum such that the last n digits of the sum are the n-digit number?
Note: the sum has some initial value; it is not zero.
EDIT: if a solution exists, I need to find the minimum count of numbers added to get a number whose last n digits are the given number. That can easily be solved with DP (minimum coin change problem).
For example, if n=4,
Given number = 1212
Initial value = 5234
List = [1023, 101, 1]
A solution exists: 21212 = 5234 + 1023*15 + 101*6 + 1*27
Note that a solution does not always exist; it's easy to find a counterexample.
Now, for the solution here's a dynamic programming approach:
All arithmetic is modulo 10^n. For each value in the range 0 to 10^n - 1 you need a flag marking whether it has been found, and you need a queue for the elements to be processed.
1. Push the initial value onto the to-be-processed queue.
2. Take an element from the queue. If the queue is empty, you're finished: there is no solution.
3. Try adding each number in the list to this element. If the result was already found, there is nothing to do. If the result is the target sum, you're finished: there is a solution. Otherwise, mark it as found and push it onto the queue.
4. Go to step 2.
An actual solution can be reconstructed if you store how you reached a number. You just have to walk back from sum till you hit the initial value.
If the greatest common factor of the numbers in the list is a unit modulo 10^n (that is, not divisible by 2 or 5), you can solve the problem for any choice of the other given values: use the extended Euclidean algorithm to find a linear combination of the list that sums to the gcf, find the multiplicative inverse of the gcf modulo 10^n, and multiply by the difference between the given and the initial values.
If the gcf of the numbers in the list is divisible by 2 or 5 (that is, it is not a unit) and the difference between the given and the initial value is also divisible by 2 or 5, divide the numbers in the list and the difference by the largest powers of 2 and 5 that divide them all. If the gcf you end up with is a unit, there is a solution and you can find it with the procedure above. Otherwise there is no solution.
For example, take n = 2, given number 16, initial value 5, and list of numbers [3].
The gcf of the numbers in the list is 3 which is a unit. Its inverse modulo 100 is 67 (3×67 = 201).
Multiplying by the difference between the given number and the initial value 16-5 = 11 we get the factor 67*11 = 737 for 3. Since we're working modulo 100 that's the same as 37.
Checking the result: 5 + 37×3 = 16. Yep, that works.
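The inverse-based shortcut can be sketched as follows (a sketch under the assumption that the gcf g of the list is a unit modulo 10^n; the function name is my own, and `pow(g, -1, mod)` needs Python 3.8+):

```python
# How many copies of g close the gap between initial value and target,
# working modulo 10^n.
def multiplier_for_gap(target, initial, g, n):
    mod = 10 ** n
    inv = pow(g, -1, mod)  # multiplicative inverse of g modulo 10^n
    return (target - initial) * inv % mod
```

Reproducing the worked example: `multiplier_for_gap(16, 5, 3, 2)` gives 37, and indeed 5 + 37*3 = 116, whose last two digits are 16.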

Efficient way to find all zeros in a matrix?

I am trying to think of an efficient algorithm to find the number of zeros in a row of a matrix, but I can only think of an O(n^2) solution (i.e. iterating over each row and column). Is there a more efficient way to count the zeros?
For example, given the matrix
3, 4, 5, 6
7, 8, 0, 9
10, 11, 12, 3
4, 0, 9, 10
I would report that there are two zeros.
Without storing any external information, no, you can't do any better than Θ(N^2). The rationale is simple - if you don't look at all N^2 locations in the matrix, then you can't guarantee that you've found all of the zeros and might end up giving the wrong answer back. For example, if I know that you look at fewer than N^2 locations, then I can run your algorithm on a matrix and see how many zeros you report. I could then look at the locations that you didn't access, replace them all with zeros, and run your algorithm again. Since your algorithm doesn't look at those locations, it can't know that they have zeros in them, and so at least one of the two runs of the algorithm would give back the wrong answer.
More generally, when designing algorithms to process data, a good way to see if you can do better than certain runtimes is to use this sort of "adversarial analysis." Ask yourself the question: if I run faster than some time O(f(n)), could an adversary manipulate the data in ways that change the answer but I wouldn't be able to detect? This is the sort of analysis that, along with some more clever math, proves that comparison-based sorting algorithms cannot do any better than Ω(n log n) in the average case.
If the matrix has some other properties to it (for example, if it's sorted), then you might be able to do a better job than running in O(N^2). As an example, suppose that you know that all rows of the matrix are sorted. Then you can easily do a binary search on each row to determine how many zeros it contains, which takes O(N log N) time and is faster.
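The sorted-rows shortcut can be sketched as follows (a sketch; the function name is my own): in a row sorted in ascending order, the zeros occupy one contiguous block whose boundaries two binary searches locate in O(log M) per row.

```python
from bisect import bisect_left, bisect_right

# Count zeros in a matrix whose rows are each sorted ascending.
def count_zeros_sorted_rows(matrix):
    # bisect_left finds the first zero, bisect_right one past the last.
    return sum(bisect_right(row, 0) - bisect_left(row, 0) for row in matrix)
```

For example, `count_zeros_sorted_rows([[0, 0, 1, 2], [-3, 0, 4, 5], [1, 2, 3, 4]])` counts 2 + 1 + 0 = 3 zeros without scanning every element of each row.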
Depending on the parameters of your setup, you might be able to get the algorithm to run faster if you assume that you're allowed to scan in parallel. For example, if your machine has K processors on it that can be dedicated to the task of scanning the matrix, then you could split the matrix into K roughly evenly-sized groups, have each processor count the number of zeros in the group, then sum the results of these computations up. This ends up giving you a runtime of Θ(N^2 / K), since the runtime is split across multiple cores.
Always O(n^2), or rather O(n x m). You cannot avoid visiting every element.
But if you know the matrix is sparse (only a few elements have nonzero values), you can store only the nonzero values together with the matrix size. Consider using a hash instead of storing the whole matrix: create a hash which maps each row number to a nested hash of column numbers to values.
Example:
m =
[
0 0 0 0
0 2 0 0
0 0 1 0
0 0 1 0
]
Will be represented as:
row_numbers = 4
column_numbers = 4
hash = { 1 => { 1 => 2 }, 2 => { 2 => 1 }, 3 => { 2 => 1 } }
Then:
number_of_zeros = row_numbers * column_numbers - number_of_cells_in_hash(hash)
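The same sparse-storage idea can be sketched in Python (names are my own): keep only the nonzero cells in a nested dict and count zeros by subtraction.

```python
# nonzero: row -> {column -> value}, holding only the nonzero cells.
def count_zeros_sparse(rows, cols, nonzero):
    stored = sum(len(col_map) for col_map in nonzero.values())
    return rows * cols - stored
```

For the 4x4 matrix above, which has three nonzero cells, `count_zeros_sparse(4, 4, {1: {1: 2}, 2: {2: 1}, 3: {2: 1}})` returns 16 - 3 = 13 zeros.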
For any unsorted matrix it should be O(n), since we generally denote the total number of elements by n.
If the matrix contains X rows and Y columns, X * Y = n.
E.g. a 4 x 4 unsorted matrix has 16 elements in total, so when we iterate linearly with 2 loops we do 4 x 4 = 16 steps. That is O(n), because the total number of elements in the array is 16.
Many people voted for O(n^2) because they considered an n x n matrix.
Please correct me if my understanding is wrong.
Assuming that when you say "in a row of a matrix", you mean that you have the row index i and you want to count the number of zeros in the i-th row, you can do better than O(N^2).
Suppose N is the number of rows and M is the number of columns; then store your matrix as a single array [3, 4, 5, 6, 7, 8, 0, 9, 10, 11, 12, 3, 4, 0, 9, 10], and to access row i, start at array index M*i.
Since arrays have constant-time access, this part doesn't depend on the size of the matrix. You can then iterate over the whole row by visiting the elements M*i + j for j from 0 to M-1, which is O(M), provided you know which row you want to visit and you are using an array.
This is not a perfect answer for the reasons I'll explain, but it offers an alternative solution potentially faster than the one you described:
Since you don't need to know the position of the zeros in the matrix, you can flatten it into a 1D array.
After that, perform a quicksort on the elements, this may provide a performance of O(n log n), depending on the randomness of the matrix you feed in.
Finally, count the zero elements at the beginning of the array until you reach a non-zero number.
In some cases, this will be faster than checking every element, although in a worst-case scenario the quicksort will take O(n^2), which in addition to the zero counting at the end may be worse than iterating over each row and column.
Assuming the given matrix is M, do an M + (-M) operation, but instead of the default + use my_add(int a, int b), defined so that it yields 1 exactly when both operands are zero:
int my_add(int a, int b){
    return (a == 0 && b == 0) ? 1 : (a + b);
}
That will give you a matrix like
0 0 0 0
0 0 1 0
0 0 0 0
0 1 0 0
Now create s := 0 and add all elements to it: s += a[i][j].
You can even do both in one pass: s += my_add(a[i][j], (-1)*a[i][j]).
But it is still O(m*n).
NOTE
To count the number of 1's you generally have to check all items in the matrix. Without visiting every element I don't think you can report the number of 1's, and looping over all elements is O(m*n). It can be faster than O(m*n) if and only if you can leave some elements unchecked and still state the number of 1's.
EDIT
However, if you move a 2x2 kernel over the matrix, operating on the neighboring elements a[i][j], a[i+1][j], a[i][j+1], a[i+1][j+1] in each step, you get (m*n)/k iterations.
