I am trying to work through this question on LeetCode.
119. Pascal's Triangle II
Given a non-negative index k where k ≤ 33, return the kth index row of Pascal's triangle.
Note that the row index starts from 0.
In Pascal's triangle, each number is the sum of the two numbers directly above it.
Example:
Input: 3
Output: [1,3,3,1]
Follow up:
Could you optimize your algorithm to use only O(k) extra space?
import java.util.Arrays;
import java.util.List;

class Solution {
    public List<Integer> getRow(int rowIndex) {
        Integer[] dp = new Integer[rowIndex + 1];
        Arrays.fill(dp, 1);               // row 0, row 1 and the edge 1s of every row
        for (int i = 2; i <= rowIndex; i++) {
            // build row i in place, right to left
            for (int j = i - 1; j > 0; j--) {
                dp[j] = dp[j - 1] + dp[j];
            }
        }
        return Arrays.asList(dp);
    }
}
And I see someone giving this working solution.
I can understand why it is correct.
But I am still quite unclear on why the array is updated in this order.
In this case I know the state transition is:
P(n) = P(n-1) + P(n)
But how does this give clues about which direction to update the array in?
Why exactly does the ascending order not work in this case, if we think about it as DP? I know that in essence it causes duplicated calculation.
This may be subtle, but could anyone cast at least a little light on it?
Possibly the formula P(n) = P(n-1) + P(n) brings confusion, as it is not a true recurrence relation. If it were, the recursion would be infinite.
The true recurrence relation is given by:
P(row, n) = P(row-1, n-1) + P(row-1, n)
Or in more complete terms:
∀ n ∈ {1, ..., row-1}: P(row, n) = P(row-1, n-1) + P(row-1, n)
P(row, 0) = 1
P(row, row) = 1
If you implemented this naively, you would create a 2-dimensional DP matrix. Starting with row 0, you would build up the DP matrix going from one row to the next, using the above recurrence relation.
You then find that you only need the previous row's DP data to calculate the current row's. All the DP rows that come before the previous one are idle: they don't serve any purpose anymore. They are a waste of space.
So then you decide not to create a whole DP matrix, but just two rows. Once you have completed the second row of this DP structure, you make that row the first, and reset the second row in the DP structure. Then you can continue filling that second row until it is complete again, and you repeat this "shift" of rows...
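For illustration, here is a sketch of that two-row version in Java (my own sketch, not code from the question or the answer; only the recurrence above is taken as given):

import java.util.ArrayList;
import java.util.List;

class TwoRowSolution {
    public List<Integer> getRow(int rowIndex) {
        int[] prev = new int[rowIndex + 1];  // the previous DP row
        int[] curr = new int[rowIndex + 1];  // the row being filled
        prev[0] = 1;                         // row 0 is [1]
        for (int row = 1; row <= rowIndex; row++) {
            curr[0] = 1;
            curr[row] = 1;
            for (int n = 1; n < row; n++)
                curr[n] = prev[n - 1] + prev[n];  // P(row, n) = P(row-1, n-1) + P(row-1, n)
            int[] tmp = prev; prev = curr; curr = tmp;  // the "shift" of rows
        }
        List<Integer> result = new ArrayList<>();
        for (int n = 0; n <= rowIndex; n++) result.add(prev[n]);
        return result;
    }
}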
And now we come to the last optimisation, which brings us to your question:
You can actually do it with one DP row. That row will represent both the previous DP row and the current one. For that to work, you need to update that row from right to left.
Every value that is updated is considered "current row", and every value that you read is considered "previous row". That way the right side of the recurrence relation refers to the previous DP row, and the left side (that is assigned) to the current.
This works only from right to left, because the recurrence formula never refers to n+1, only to n at the most. If it had referred to both n and n+1, you would have had to go from left to right.
At the moment we read the value at n, it is still the value that corresponds to the previous DP row, and once we write to it, we will not need that previous value anymore. And when we read the value at n-1 we are sure it is still the previous row's value, since we come from the right, and did not update it yet.
You can imagine how we wipe-and-replace the values of the "previous" row with the new values of the "current" row.
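To make the right-to-left reads concrete, here is a trace of the question's code for rowIndex = 3; each dp[j-1] that is read still holds the previous row's value, because j moves from right to left:

    after Arrays.fill:  [1, 1, 1, 1]
    i = 2, j = 1:  dp[1] = dp[0] + dp[1] = 2  ->  [1, 2, 1, 1]   (row 2 done)
    i = 3, j = 2:  dp[2] = dp[1] + dp[2] = 3  ->  [1, 2, 3, 1]
    i = 3, j = 1:  dp[1] = dp[0] + dp[1] = 3  ->  [1, 3, 3, 1]   (row 3 done)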
Hope this clarifies it a bit.
Related
Given a list of n houses, where each house has a certain number of coins in it, and a target value t, we have to find the minimum number of steps required to reach the target.
The person can choose to start at any house and then go right or left, collecting coins in that direction until the target value is reached. The person cannot
change direction.
Example: suppose 5 1 2 3 4 are the coin values in 5 houses and the target is 13. Then the minimum number of steps required is 5, because we have to select all the coins.
My Thoughts:
One way would be: for each index i, calculate the steps required in the left or right direction to reach the target, and then take the minimum of all these 2*n values.
Could there be a better way?
First, let's simplify and canonize the problem.
Observation 1: The "choose direction" capability is redundant: if you can go from house j to house i, you can also go from i to j and collect the same value, so it is sufficient to look at one direction only.
Observation 2: Now that we can look at the problem as going from left to right (observation 1), it is clear that we are looking for a minimal subarray whose sum is at least k.
This means that we can canonize the problem:
Given an array a with non-negative values, find the minimal-length
subarray whose values sum to k or more.
There are various ways to solve this. One simple solution, using a sorted map (a balanced tree, for example), is to go from left to right, accumulating the running sum, and for each position looking up the latest position seen whose running sum was at most sum - k.
Pseudo code:
solve(array, k):
    min_houses = inf
    sum = 0
    map = new TreeMap()
    map.insert(0, -1)  // handles the case where a prefix is sufficient on its own
    for i from 0 to array.len() - 1:
        sum = sum + array[i]
        candidate = map.FindClosestLowerOrEqual(sum - k)
        if candidate != null:  // a matching prefix sum exists
            min_houses = min(min_houses, i - candidate.index)
        map.insert(sum, i)  // insert unconditionally: later positions may need this prefix
    return min_houses
This solution runs in O(n log n), as each map insertion and lookup takes O(log n), and there are n+1 of those.
An optimization, running in O(n), can be done if we take advantage of the "non-negative" trait of the array. It means that, as we go on in the array, the candidate chosen (in the map seek) is always increasing.
We can utilize that to have two pointers running concurrently, finding the best matches, instead of searching the map from scratch as we did before.
solve(array, k):
    left = 0
    sum = 0
    min_houses = infinity
    for right from 0 to len(array) - 1:
        sum = sum + array[right]
        while (left <= right && sum >= k):
            min_houses = min(min_houses, right - left + 1)
            sum = sum - array[left]
            left = left + 1
    return min_houses
This runs in O(n), as each pointer is increased at most n times, and every operation is O(1).
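For reference, a direct Java translation of the two-pointer pseudo code might look like this (a sketch; the class and method names are mine):

public class MinHouses {
    // Sliding window: O(n) time, O(1) extra space.
    static int solve(int[] array, int k) {
        int left = 0, sum = 0;
        int minHouses = Integer.MAX_VALUE;
        for (int right = 0; right < array.length; right++) {
            sum += array[right];
            // shrink the window from the left while it still reaches the target
            while (left <= right && sum >= k) {
                minHouses = Math.min(minHouses, right - left + 1);
                sum -= array[left];
                left++;
            }
        }
        return minHouses;  // Integer.MAX_VALUE means the target is unreachable
    }

    public static void main(String[] args) {
        System.out.println(solve(new int[]{5, 1, 2, 3, 4}, 13));  // 5, as in the example
    }
}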
I have a problem designing an algorithm. The difficulty is that it should run in O(n) time.
Here is the assignment:
There is an unsorted array "a" with n numbers.
m(i,j) = min{a[i], a[i+1], ..., a[j]},  M(i,j) = max{a[i], a[i+1], ..., a[j]}
Calculate:
S = SUM[i=1..n] SUM[j=i..n] (M(i,j) - m(i,j))
I am able to solve this in O(n log n) time. This is a university research assignment, and everything that I tried suggests that it is not possible. I would be very thankful if you could point me in the right direction to find the solution, or at least prove that it is not possible.
Further explanation:
Given i and j, find the maximum and minimum elements of the array slice a[i:j]. Subtract those to get the range of the slice, max - min.
Now, add up the ranges of all slices for all (i, j) such that 1 <= i <= j <= n. Do it in O(n) time.
This is a pretty straightforward problem.
I will assume that the input is an array of objects (like pairs or tuples), not plain numbers: the first value is the index in the array and the second is the value.
The right question here is how many times each number needs to be multiplied and added to/subtracted from the sum, i.e. in how many subarrays it is the maximum or minimum element.
This problem is connected to finding the next greater element (NGE); it is worth looking up for future problems.
I will write it in pseudo code.
subsum(A):
    returnSum = 0
    // push a sentinel onto the stack; the first value is the index in the array,
    // the second is the value
    stack.push((-1, Integer.MAX_INT))
    for (int i = 0; i < n; i++)
        // pop every element smaller than A[i]: each popped element is the maximum
        // of all subarrays starting after beforeLast.index and ending before i
        while (stack.peek().value < A[i].value)
            last = stack.pop()
            beforeLast = stack.peek()
            returnSum = returnSum + last.value * (i - last.index) * (last.index - beforeLast.index)
        stack.push(A[i])
    // flush the stack: the remaining elements reach the end of the array
    while stack.peek() is not the sentinel:
        last = stack.pop()
        beforeLast = stack.peek()
        returnSum = returnSum + last.value * (A.length - last.index) * (last.index - beforeLast.index)
    return returnSum

sum(A):
    // first we calculate the sum of maximum values over all subarrays; the sum of
    // minimum values is obtained by simply multiplying each value by -1
    return subsum(A) + subsum([-x.value for x in A])
Time complexity of this code is O(n).
The peek function just reads the top value of the stack without popping it.
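Here is one possible Java version of this pseudo code (my sketch; I use plain indices instead of (index, value) objects, which carries the same information):

import java.util.ArrayDeque;
import java.util.Deque;

public class SubarrayRanges {
    // Sum of the maximum element over all subarrays of a, using a monotonic
    // stack of indices whose values strictly decrease from bottom to top.
    static long subsum(long[] a) {
        int n = a.length;
        Deque<Integer> stack = new ArrayDeque<>();
        long sum = 0;
        for (int i = 0; i <= n; i++) {
            // i == n acts as a sentinel that flushes the whole stack
            while (!stack.isEmpty() && (i == n || a[stack.peek()] < a[i])) {
                int last = stack.pop();
                int beforeLast = stack.isEmpty() ? -1 : stack.peek();
                // a[last] is the maximum of every subarray that starts in
                // (beforeLast, last] and ends in [last, i)
                sum += a[last] * (long) (i - last) * (last - beforeLast);
            }
            if (i < n) stack.push(i);
        }
        return sum;
    }

    // Sum of (max - min) over all subarrays: negating the array turns the
    // sum of minima into a sum of maxima.
    static long rangeSum(long[] a) {
        long[] neg = new long[a.length];
        for (int i = 0; i < a.length; i++) neg[i] = -a[i];
        return subsum(a) + subsum(neg);
    }

    public static void main(String[] args) {
        System.out.println(rangeSum(new long[]{1, 3, 2}));  // prints 5
    }
}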
I have an n x n array. Each field has a cost associated with it (a natural number), and here's my problem:
I start in the first column. I need to find the cheapest way to move through an array (from any field in the first column to any in the last column) following these two rules:
I can only make moves to the right, to the top right, to the lower right, and to the bottom.
In a path I can only make k (some constant) moves to the bottom.
Meaning when I'm at cell x I can move to the cells marked o (diagram omitted).
How do I find the cheapest way to move through the array? I thought of this:
- For each field of the n x n array I keep an auxiliary array recording how many bottom moves the cheapest path to that field takes. For the first column it's all 0's.
- We go through the fields in this order: columns left to right, and within each column, rows top to bottom.
- For each field we check which of its reachable neighbours is the cheapest. If it's the one above (meaning we would take a bottom move to get from it), we check whether it already took k bottom moves to reach it; if not, we assign the cost of the analysed field as the cost of the field above plus the field's own cost, and in the auxiliary array we record the number of bottom moves as x+1, where x is the number of bottom moves taken to reach the upper neighbour.
- If the upper neighbour is not the cheapest, we assign the cost coming from the cheapest other neighbour, and the number of bottom moves as the number taken to reach that neighbour.
Time complexity is O(n^2), and so is memory.
Is this correct?
Here is a DP solution in O(N^2) time and O(N) memory:
Dist(i,j) = distance from point (i,j) to the last column.
Dist(i,j) = cost(i,j) + min { Dist(i+1,j), Dist(i,j+1), Dist(i+1,j+1), Dist(i-1,j+1) }
Dist(i,N-1) = cost(i,N-1)   (base case: the last column)
Cheapest = min { Dist(i,0) } over all rows i   (the path may start anywhere in the first column)
This DP equation shows that you only need the next column's values to compute the current column, so O(N) space suffices to keep the previous calculation. It also shows that, within a column, rows with a higher index need to be evaluated first, because of the Dist(i+1,j) term.
Pseudo code:
int Prev[M], Curr[M];   // the next column and the current column
// last column => base case for the DP
for (i = 0; i < M; i++)
    Prev[i] = cost[i][N-1];
// evaluate the columns right to left, and the rows bottom to top
for (j = N-2; j >= 0; j--) {
    for (i = M-1; i >= 0; i--) {
        Curr[i] = cost[i][j] + min( Curr[i+1],  // down, same column
                                    Prev[i],    // right
                                    Prev[i-1],  // top right
                                    Prev[i+1] ) // bottom right
        // neighbours with i-1 < 0 or i+1 >= M are skipped
    }
    Prev = Curr
}
// find the cheapest starting field in the first column
Cheapest = min(Prev)
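A concrete Java version of this pseudo code might look as follows (my sketch; like the pseudo code above, it implements the plain recurrence and does not enforce the k-bottom-moves limit, which would need an extra DP dimension):

public class CheapestPath {
    static int cheapest(int[][] cost) {
        int m = cost.length, n = cost[0].length;
        int[] prev = new int[m];  // column j+1
        int[] curr = new int[m];  // column j
        for (int i = 0; i < m; i++)
            prev[i] = cost[i][n - 1];  // base case: last column
        for (int j = n - 2; j >= 0; j--) {
            for (int i = m - 1; i >= 0; i--) {  // bottom to top
                int best = prev[i];                                 // right
                if (i > 0) best = Math.min(best, prev[i - 1]);      // top right
                if (i < m - 1) best = Math.min(best, prev[i + 1]);  // bottom right
                if (i < m - 1) best = Math.min(best, curr[i + 1]);  // down, same column
                curr[i] = cost[i][j] + best;
            }
            int[] tmp = prev; prev = curr; curr = tmp;
        }
        int cheapest = Integer.MAX_VALUE;  // the path may start anywhere in column 0
        for (int v : prev) cheapest = Math.min(cheapest, v);
        return cheapest;
    }
}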
I have an NxN boolean matrix, all elements of which have the initial state false.
bool[][] matrix = GetMatrix(N);
In each step of the loop, I want to choose one cell (row i, column j) uniformly at random among all false cells, set it to true, and repeat until some condition happens.
Which method should I use? I have these two ways in mind.
Create an array of the N*N numbers 0...(N*N - 1), shuffle it using a uniform shuffling algorithm, then sequentially take the next element i from this array and set matrix[i/N][i%N].
This uses O(N^2) additional memory, and initialization takes O(N^2) time.
And the second:
Generate a random i from 0...(N^2 - 1), and if cell (i/N, i%N) is already set in the matrix, repeat the random generation until an unset element is found.
This way doesn't use any additional memory, but I have difficulty estimating its performance... can it be the case that all elements except one are set, and the random draw repeats many times looking for the free cell? Am I right that, as long as the random generator is uniform, this case should not happen often?
I'll try to reply to your questions with a worst-case scenario analysis, which happens when, as you have pointed out, all cells but one are taken.
Let's start by noting that p = P(X = m) = 1/N^2 is the probability that a single draw X hits the one free cell m. From this, we obtain that the probability of having to wait exactly k draws for the desired result is P(Y = k) = p * (1-p)^(k-1), a geometric distribution. This means that, for N = 10, you will need 69 random numbers to have a probability greater than 50% of getting yours, and 459 to have a probability greater than 99%.
The general formula that gives you the number k of draws needed to have a probability greater than alpha of getting your value is:
k > log(1 - alpha) / log(1 - p)
where p is defined as above, equal to 1/N^2.
That could get much worse as N gets bigger. You could think about keeping a list of the indices you still need and picking one randomly from it.
Generate a random i from 0...(N^2 - 1), and if cell (i/N, i%N) is already set in the matrix,
repeat the random generation until an unset element is found.
The analysis of this algorithm is the same as the coupon collector's problem. The expected running time to fill the whole matrix is Theta(N^2 log N).
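For completeness, a sketch of that rejection-sampling loop in Java (names are mine; the stopping condition is simplified to "fill everything"):

import java.util.Random;

public class RandomCellPicker {
    public static void main(String[] args) {
        int n = 4;
        boolean[][] matrix = new boolean[n][n];
        Random rnd = new Random();
        int remaining = n * n;
        while (remaining > 0) {          // stand-in for "until some condition happens"
            int cell = rnd.nextInt(n * n);
            int i = cell / n, j = cell % n;
            if (matrix[i][j]) continue;  // already set: draw again
            matrix[i][j] = true;
            remaining--;
        }
    }
}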
I have a problem resembling the one described here:
Algorithm to return all combinations of k elements from n
I am looking for something similar that covers all possible combinations of k from n. However, I need each subset to vary a lot from the one drawn previously. For example, if I were to draw subsets of 3 elements from a set of 8, the following order wouldn't be useful to me, since every subset is very similar to the one previously drawn:
11100000,
11010000,
10110000,
01110000,
...
I am looking for an algorithm that picks the subsets in a more "random"-looking fashion, i.e. where the majority of the elements in one subset are not reused in the next:
11100000,
00010011,
00101100,
...
Does anyone know of such an algorithm?
I hope my question made sense and that someone can help me out =)
Kind regards,
Christian
How about first generating all possible combinations of k from n, and then rearranging them with the help of a random function?
If you have the result in a vector, do a Fisher-Yates shuffle: loop through the vector and swap each element with an element at a random position at or after the current one, so the resulting order is uniform.
This of course becomes slow for large k and n.
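A minimal Java sketch of this idea (my illustration: enumeration in lexicographic order, then one shuffle; Collections.shuffle performs a uniform Fisher-Yates shuffle):

import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class ShuffledSubsets {
    public static void main(String[] args) {
        int n = 8, k = 3;
        List<int[]> all = new ArrayList<>();
        int[] c = new int[k];
        for (int i = 0; i < k; i++) c[i] = i;         // first subset: {0, 1, 2}
        while (true) {
            all.add(c.clone());
            int i = k - 1;
            while (i >= 0 && c[i] == n - k + i) i--;  // rightmost position we can bump
            if (i < 0) break;                         // last subset reached
            c[i]++;
            for (int j = i + 1; j < k; j++) c[j] = c[j - 1] + 1;
        }
        Collections.shuffle(all);                     // random order, no repeats
        for (int[] subset : all.subList(0, 3))
            System.out.println(Arrays.toString(subset));
    }
}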
This is not really random, but depending on your needs it might suit you.
Calculate the number of possible combinations. Let's name it N.
Calculate a large number which is coprime to N. Let's name it P.
Order the combinations and give them numbers from 0 to N-1. Let's name them C(0) to C(N-1).
Iterate to produce the output combinations: the first one is C(P mod N), the second one C(2*P mod N), the third one C(3*P mod N), etc. In essence, Output(i) = C(i*P mod N), where mod is the modulus operator.
If P is picked carefully, you will get seemingly random combinations. Values of P close to 1 or to N will produce consecutive outputs that differ little; better pick values close to, say, N/4 or N/5. You can also randomize the choice of P for every full pass that you need.
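Here is a small Java sketch of the idea (the unrank helper decodes a rank into a subset in colex order, much like the code in the next answer; the names and the choice P = 23 are my own assumptions):

import java.util.Arrays;

public class ScrambledCombinations {
    static long choose(int n, int k) {
        if (k < 0 || k > n) return 0;
        k = Math.min(k, n - k);
        long c = 1;
        for (int i = 1; i <= k; i++)
            c = c * (n - k + i) / i;  // partial products are exact integers
        return c;
    }

    // Decode rank r (0-based, colex order) into a k-subset of {0, ..., n-1}.
    static int[] unrank(long r, int k) {
        int[] t = new int[k];
        for (int i = k; i >= 1; i--) {
            int x = i - 1;
            while (choose(x + 1, i) <= r) x++;  // largest x with C(x, i) <= r
            t[i - 1] = x;
            r -= choose(x, i);
        }
        return t;
    }

    public static void main(String[] args) {
        int n = 8, k = 3;
        long N = choose(n, k);           // 56 combinations
        long P = 23;                     // coprime to 56, not too close to 1 or N
        for (long i = 1; i <= 5; i++) {  // print the first few of the scrambled order
            long rank = (i * P) % N;
            System.out.println(Arrays.toString(unrank(rank, k)));
        }
    }
}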
As a follow-up to my comment on this answer, here is some code that allows one to determine the composition of a subset from its "index", in colex order.
Shamelessly stolen from my own assignments.
//////////////////////////////////////
// NChooseK
//
// computes    n!
//          --------
//          k!(n-k)!
//
// using Pascal's identity,
// i.e. (n,k) = (n-1,k-1) + (n-1,k)
//
// easily optimizable by memoization
long long NChooseK(int n, int k)
{
    if(k >= 0 && k <= n && n >= 1)
    {
        // exploit symmetry: (n,k) = (n,n-k)
        if(k > n / 2)
            k = n - k;
        if(k == 0 || n == 0)
            return 1;
        else
            return NChooseK(n-1, k-1) + NChooseK(n-1, k);
    }
    else
        return 0;
}
///////////////////////////////////////////////////////////////////////
// SubsetColexUnrank
// The unranking works by finding each element
// in turn, beginning with the biggest, leftmost one.
// We just have to find, for each element, how many subsets there are
// before the one beginning with the elements we have already found.
//
// It stores its results (the elements of the subset) into T, in ascending order.
void SubsetColexUnrank(long long r, int * T, int subsetSize)
{
    assert( subsetSize >= 1 );
    // For each element in the k-subset to be found
    for(int i = subsetSize; i >= 1; i--)
    {
        // T[i] cannot be less than i
        int x = i;
        // Find the smallest x such that the number of i-subsets whose
        // elements are all <= x exceeds r; x is then the i-th element
        // of the subset with rank r.
        while( NChooseK(x, i) <= r )
            x++;
        // update T with the newly found element
        T[i] = x;
        // if the subset we have to find is not the first one containing this element
        if(r > 0)
        {
            // finding the next element of our k-subset
            // is like finding the first one of the same subset
            // divided by {T[i]}
            r -= NChooseK(x - 1, i);
        }
    }
}
Random-in, random-out.
The colex order is such that its unranking function does not need to know the size of the set from which the elements are picked; the ranks simply run from 0 to NChooseK(size of the set, size of the subset) - 1.
How about randomly choosing the k elements one by one? I.e. choose the p-th element, where p is random between 1 and n, then reorder what's left and choose the q-th, where q is between 1 and n-1, etc.
Or maybe I misunderstood: do you still want all possibilities? In that case you can always generate them first and then choose random entries from your list.
By "random looking" I think you mean lexicographically distant.. does this apply to combination i vs. i-1, or i vs. all previous combinations?
If so, here are some suggestions:
since most of the combinators yield ordered output, there are two options:
- design or find a generator which somehow yields non-ordered output
- enumerate and store enough/all combinations in a tie'd array file/db
If you decide to go with door #2, then you can just access randomly ordered combinations by drawing random integers between 1 and the number of combinations
Just as a final check, compare the current and previous combination using a measure of difference/distance between combinations, e.g. for an unsigned Bit::Vector in Perl:
$vec1->Lexicompare($vec2) >= $MIN_LEX_DIST
You might take another look behind door #1, since even for moderate values of n and k you can get a very big array.
EDIT:
Just saw your comment to AnnK... maybe the lexicompare might still help you skip similar combinations?
Depending on what you are trying to do, you could do something like playing cards. Keep two lists: Source is your source (unused) list, and Used is the "already-picked" list. As you randomly pick k items from Source, you move them to the Used list.
If there are exactly k items left in Source when you need to pick again, you pick them all and swap the lists. If there are fewer than k items, you move j items from Used back into Source to make k items in Source, then pick them all and swap the lists.
This is kind of like picking k cards from a deck: you discard them to the used pile, and once you reach the end or need more cards, you shuffle the old ones back into play.
This makes sure each set is definitely different from the previous subsets.
However, it will not really guarantee that all possible subsets are picked before old ones start being repeated.
The good part is that you don't need to worry about pre-calculating all the subsets, and your memory requirements are linear in your data (two n-sized lists). A rough sketch is given below.
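A rough Java sketch of the two-list scheme (all names are mine, and Collections.shuffle stands in for "pick at random"; it assumes the pool holds at least k items):

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class DeckPicker<T> {
    private List<T> source;                    // not yet picked in the current pass
    private List<T> used = new ArrayList<>();  // the "already-picked" list

    public DeckPicker(List<T> items) {
        source = new ArrayList<>(items);
    }

    public List<T> drawSubset(int k) {
        if (source.size() < k) {
            // fewer than k items left: move items back from Used
            // so that Source holds exactly k items again
            Collections.shuffle(used);
            while (source.size() < k)
                source.add(used.remove(used.size() - 1));
        }
        Collections.shuffle(source);
        List<T> picked = new ArrayList<>();
        for (int i = 0; i < k; i++)
            picked.add(source.remove(source.size() - 1));
        used.addAll(picked);
        if (source.isEmpty()) {  // pass finished: swap the lists
            List<T> tmp = source;
            source = used;
            used = tmp;
        }
        return picked;
    }
}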