Minimum Window for the given numbers in an array - algorithm

Saw this question recently:
Given 2 arrays, the 2nd array containing some of the elements of the 1st array, return the minimum window in the 1st array which contains all the elements of the 2nd array.
Eg :
Given A={1,3,5,2,3,1} and B={1,3,2}
Output : 3 , 5 (where 3 and 5 are indices in the array A)
Even though the range 1 to 4 also contains the elements of B, the range 3 to 5 is returned since its length is less than that of the previous range ((5 - 3) < (4 - 1)).
I had devised a solution, but I am not sure whether it works correctly, and it is also not efficient.
Can anyone give an efficient solution for this problem? Thanks in advance.

A simple solution is to iterate through the list:
Have a left and right pointer, initially both at zero
Move the right pointer forwards until [L..R] contains all the elements (or quit if right reaches the end).
Move the left pointer forwards until [L..R] doesn't contain all the elements. See if [L-1..R] is shorter than the current best.
This is obviously linear time. You'll simply need to keep track of how many of each element of B is in the subarray for checking whether the subarray is a potential solution.
Pseudocode of this algorithm:
size = A.length; bestL = A.length + 1;
needed = B.length;            // assumes the elements of B are distinct
found = 0; left = 0; right = 0;
counts = {};                  // counts is a map of (number, count)
for(b in B) counts.put(b, 0);

while(right < size) {
    if(counts.contains(A[right])) {
        amt = counts.get(A[right]);
        counts.put(A[right], amt + 1);
        if(amt == 0) found++;

        if(found == needed) {
            // Shrink from the left until the window no longer contains all of B
            while(found == needed) {
                if(counts.contains(A[left])) {
                    amt = counts.get(A[left]);
                    counts.put(A[left], amt - 1);
                    if(amt == 1) found--;
                }
                left++;
            }
            // [left-1 .. right] is the smallest window that ends at 'right'
            if(right - left + 2 < bestL) {
                bestL = right - left + 2;
                bestRange = [left - 1, right]; // inclusive
            }
        }
    }
    right++;
}
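For reference, here is a small runnable Python version of the same sliding-window idea (a sketch only; it assumes the elements of B are distinct and returns the inclusive index pair, or None if no window exists):

from collections import defaultdict

def min_window(a, b):
    need = set(b)                     # assumes the elements of b are distinct
    have = defaultdict(int)
    found, left, best = 0, 0, None
    for right, x in enumerate(a):
        if x in need:
            have[x] += 1
            if have[x] == 1:
                found += 1
        # While the window contains all of b, record it and shrink from the left
        while found == len(need):
            if best is None or right - left < best[1] - best[0]:
                best = (left, right)
            y = a[left]
            if y in need:
                have[y] -= 1
                if have[y] == 0:
                    found -= 1
            left += 1
    return best

print(min_window([1, 3, 5, 2, 3, 1], [1, 3, 2]))   # (3, 5)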

Related

Arranging the number 1 in a 2d matrix

Given the number of rows and columns of a 2d matrix
Initially all elements of matrix are 0
Given the number of 1's that should be present in each row
Given the number of 1's that should be present in each column
Determine if it is possible to form such a matrix.
Example:
Input: r=3 c=2 (no. of rows and columns)
2 1 0 (number of 1's that should be present in each row respectively)
1 2 (number of 1's that should be present in each column respectively)
Output: Possible
Explanation:
1 1
0 1
0 0
I tried solving this problem for like 12 hours by checking if summation of Ri = summation of Ci
But I wondered whether it might still be impossible for cases like
3 3
1 3 0
0 2 2
r and c can be up to 10^5
Any ideas how should I move further?
Edit: Constraints added and output should only be "possible" or "impossible". The possible matrix need not be displayed.
Can anyone help me now?
Hint: one possible solution utilizes Maximum Flow Problem by creating a special graph and running the standard maximum flow algorithm on it.
If you're not familiar with the above problem, you may start reading about it e.g. here https://en.wikipedia.org/wiki/Maximum_flow_problem
If you're interested in the full solution please comment and I'll update the answer. But it requires understanding the above algorithm.
Solution as requested:
Create a graph of r+c+2 nodes.
Node 0 is the source, node r+c+1 is the sink. Nodes 1..r represent the rows, while r+1..r+c the columns.
Create following edges:
from source to nodes i=1..r of capacity r_i
from nodes i=r+1..r+c to sink of capacity c_i
between all the nodes i=1..r and j=r+1..r+c of capacity 1
Run a maximum flow algorithm; the saturated edges between row nodes and column nodes define where you should put the 1s.
If it's not possible, then the maximum flow value is less than the number of expected ones in the matrix.
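For illustration, here is a rough Python sketch of this construction; using the networkx library is my own choice here, and the check simply compares the max-flow value with the required number of ones:

import networkx as nx

def feasible(row_ones, col_ones):
    r, c = len(row_ones), len(col_ones)
    if sum(row_ones) != sum(col_ones):
        return False
    g = nx.DiGraph()
    source, sink = 0, r + c + 1
    for i, need in enumerate(row_ones, start=1):        # nodes 1..r are the rows
        g.add_edge(source, i, capacity=need)
    for j, need in enumerate(col_ones, start=r + 1):    # nodes r+1..r+c are the columns
        g.add_edge(j, sink, capacity=need)
    for i in range(1, r + 1):                           # one capacity-1 edge per (row, column) cell
        for j in range(r + 1, r + c + 1):
            g.add_edge(i, j, capacity=1)
    flow_value, _ = nx.maximum_flow(g, source, sink)
    return flow_value == sum(row_ones)

# feasible([2, 1, 0], [1, 2]) -> True; feasible([1, 3, 0], [0, 2, 2]) -> False

Note that this graph has r*c middle edges, so it is only practical for small inputs; for r and c up to 10^5 you would want one of the combinatorial approaches below.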
I will illustrate the algorithm with an example.
Assume we have m rows and n columns. Let rows[i] be the number of 1s in row i, for 0 <= i < m,
and cols[j] be the number of 1s in column j, for 0 <= j < n.
For example, for m = 3, and n = 4, we could have: rows = {4 2 3}, cols = {1 3 2 3}, and
the solution array would be:
1 3 2 3
+--------
4 | 1 1 1 1
2 | 0 1 0 1
3 | 0 1 1 1
Because we only want to know whether a solution exists, the values in rows and cols may be permuted in any order. The solution of each permutation is just a permutation of the rows and columns of the above solution.
So, given rows and cols, sort cols in decreasing order, and rows in increasing order. For our example, we have cols = {3 3 2 1} and rows = {2 3 4}, and the equivalent problem.
3 3 2 1
+--------
2 | 1 1 0 0
3 | 1 1 1 0
4 | 1 1 1 1
We transform cols into a form that is better suited for the algorithm. What cols tells us is that we have two series of 1s of length 3, one series of 1s of length 2, and one series of 1s of length 1, that are to be distributed among the rows of the array. We rewrite cols to capture just that, that is COLS = {2/3 1/2 1/1}, 2 series of length 3, 1 series of length 2, and 1 series of length 1.
Because we have 2 series of length 3, a solution exists only if we can put two 1s in the first row. This is possible because rows[0] = 2. We do not actually put any 1 in the first row, but record the fact that 1s have been placed there by decrementing the length of the series of length 3. So COLS becomes:
COLS = {2/2 1/2 1/1}
and we combine our two counts for series of length 2, yielding:
COLS = {3/2 1/1}
We now have the reduced problem:
3 | 1 1 1 0
4 | 1 1 1 1
Again we need to place 1s from our series of length 2 to have a solution. Fortunately, rows[1] = 3 and we can do this. We decrement the length of 3/2 and get:
COLS = {3/1 1/1} = {4/1}
We have the reduced problem:
4 | 1 1 1 1
Which is solved by 4 series of length 1, just what we have left. If at any step, the series in COLS cannot be used to satisfy a row count, then no solution is possible.
The general processing for each row may be stated as follows. For each row r, starting from the first element in COLS, decrement the lengths of as many elements count[k]/length[k] of COLS as needed, so that the sum of the count[k]'s equals rows[r]. Eliminate series of length 0 in COLS and combine series of same length.
Note that because elements of COLS are in decreasing order of lengths, the length of the last element decremented is always less than or equal to the next element in COLS (if there is a next element).
EXAMPLE 2 : Solution exists.
rows = {1 3 3}, cols = {2 2 2 1} => COLS = {3/2 1/1}
1 series of length 2 is decremented to satisfy rows[0] = 1, and the 2 other series of length 2 remain at length 2.
rows[0] = 1
COLS = {2/2 1/1 1/1} = {2/2 2/1}
The 2 series of length 2 are decremented, and 1 of the series of length 1.
The series whose length has become 0 is deleted, and the series of length 1 are combined.
rows[1] = 3
COLS = {2/1 1/0 1/1} = {2/1 1/1} = {3/1}
A solution exists, as rows[2] can be satisfied.
rows[2] = 3
COLS = {3/0} = {}
EXAMPLE 3: Solution does not exist.
rows = {0 2 3}, cols = {3 2 0 0} => COLS = {1/3 1/2}
rows[0] = 0
COLS = {1/3 1/2}
rows[1] = 2
COLS = {1/2 1/1}
rows[2] = 3 => impossible to satisfy; no solution.
SPACE COMPLEXITY
It is easy to see that it is O(m + n).
TIME COMPLEXITY
We iterate over each row only once. For each row i, we need to iterate over at most
rows[i] <= n elements of COLS. Time complexity is O(m x n).
After finding this algorithm, I found the following theorem:
The Havel-Hakimi theorem (Havel 1955, Hakimi 1962) states that there exists a matrix Xn,m of 0’s and 1’s with row totals a0=(a1, a2,… , an) and column totals b0=(b1, b2,… , bm) such that bi ≥ bi+1 for every 0 < i < m if and only if another matrix Xn−1,m of 0’s and 1’s with row totals a1=(a2, a3,… , an) and column totals b1=(b1−1, b2−1,… ,ba1−1, ba1+1,… , bm) also exists.
from the post Finding if binary matrix exists given the row and column sums.
This is basically what my algorithm does, while trying to optimize the decrementing part, i.e., all the -1's in the above theorem. Now that I see the above theorem, I know my algorithm is correct. Nevertheless, I checked the correctness of my algorithm by comparing it with a brute-force algorithm for arrays of up to 50 cells.
Here is the C# implementation.
using System.Collections.Generic;

public class Pair
{
    public int Count;
    public int Length;
}

public class PairsList
{
    public LinkedList<Pair> Pairs;
    public int TotalCount;
}

class Program
{
    static void Main(string[] args)
    {
        int[] rows = new int[] { 0, 0, 1, 1, 2, 2 };
        int[] cols = new int[] { 2, 2, 0 };
        bool success = Solve(cols, rows);
    }

    static bool Solve(int[] cols, int[] rows)
    {
        PairsList pairs = new PairsList() { Pairs = new LinkedList<Pair>(), TotalCount = 0 };
        FillAllPairs(pairs, cols);
        for (int r = 0; r < rows.Length; r++)
        {
            if (rows[r] > 0)
            {
                if (pairs.TotalCount < rows[r])
                    return false;
                if (pairs.Pairs.First != null && pairs.Pairs.First.Value.Length > rows.Length - r)
                    return false;
                DecrementPairs(pairs, rows[r]);
            }
        }
        return pairs.Pairs.Count == 0 || pairs.Pairs.Count == 1 && pairs.Pairs.First.Value.Length == 0;
    }

    static void DecrementPairs(PairsList pairs, int count)
    {
        LinkedListNode<Pair> pair = pairs.Pairs.First;
        while (count > 0 && pair != null)
        {
            LinkedListNode<Pair> next = pair.Next;
            if (pair.Value.Count == count)
            {
                pair.Value.Length--;
                if (pair.Value.Length == 0)
                {
                    pairs.Pairs.Remove(pair);
                    pairs.TotalCount -= count;
                }
                else if (pair.Next != null && pair.Next.Value.Length == pair.Value.Length)
                {
                    pair.Value.Count += pair.Next.Value.Count;
                    pairs.Pairs.Remove(pair.Next);
                    next = pair;
                }
                count = 0;
            }
            else if (pair.Value.Count < count)
            {
                count -= pair.Value.Count;
                pair.Value.Length--;
                if (pair.Value.Length == 0)
                {
                    pairs.Pairs.Remove(pair);
                    pairs.TotalCount -= pair.Value.Count;
                }
                else if (pair.Next != null && pair.Next.Value.Length == pair.Value.Length)
                {
                    pair.Value.Count += pair.Next.Value.Count;
                    pairs.Pairs.Remove(pair.Next);
                    next = pair;
                }
            }
            else // pair.Value.Count > count
            {
                Pair p = new Pair() { Count = count, Length = pair.Value.Length - 1 };
                pair.Value.Count -= count;
                if (p.Length > 0)
                {
                    if (pair.Next != null && pair.Next.Value.Length == p.Length)
                        pair.Next.Value.Count += p.Count;
                    else
                        pairs.Pairs.AddAfter(pair, p);
                }
                else
                    pairs.TotalCount -= count;
                count = 0;
            }
            pair = next;
        }
    }

    static int FillAllPairs(PairsList pairs, int[] cols)
    {
        List<Pair> newPairs = new List<Pair>();
        int c = 0;
        while (c < cols.Length && cols[c] > 0)
        {
            int k = c++;
            if (cols[k] > 0)
                pairs.TotalCount++;
            while (c < cols.Length && cols[c] == cols[k])
            {
                if (cols[k] > 0) pairs.TotalCount++;
                c++;
            }
            newPairs.Add(new Pair() { Count = c - k, Length = cols[k] });
        }
        LinkedListNode<Pair> pair = pairs.Pairs.First;
        foreach (Pair p in newPairs)
        {
            while (pair != null && p.Length < pair.Value.Length)
                pair = pair.Next;
            if (pair == null)
            {
                pairs.Pairs.AddLast(p);
            }
            else if (p.Length == pair.Value.Length)
            {
                pair.Value.Count += p.Count;
                pair = pair.Next;
            }
            else // p.Length > pair.Value.Length
            {
                pairs.Pairs.AddBefore(pair, p);
            }
        }
        return c;
    }
}
(Note: to avoid confusion between when I'm talking about the actual numbers in the problem vs. when I'm talking about the zeros in the ones in the matrix, I'm going to instead fill the matrix with spaces and X's. This obviously doesn't change the problem.)
Some observations:
If you're filling in a row, and there's (for example) one column needing 10 more X's and another column needing 5 more X's, then you're sometimes better off putting the X in the "10" column and saving the "5" column for later (because you might later run into 5 rows that each need 2 X's), but you're never better off putting the X in the "5" column and saving the "10" column for later (because even if you later run into 10 rows that all need an X, they won't mind if they don't all go in the same column). So we can use a somewhat "greedy" algorithm: always put an X in the column still needing the most X's. (Of course, we'll need to make sure that we don't greedily put an X in the same column multiple times for the same row!)
Since you don't need to actually output a possible matrix, the rows are all interchangeable and the columns are all interchangeable; all that matters is how many rows still need 1 X, how many still need 2 X's, etc., and likewise for columns.
With that in mind, here's one fairly simple approach:
(Optimization.) Add up the counts for all the rows, add up the counts for all the columns, and return "impossible" if the sums don't match.
Create an array of length r+1 and populate it with how many columns need 1 X, how many need 2 X's, etc. (You can ignore any columns needing 0 X's.)
(Optimization.) To help access the array efficiently, build a stack/linked-list/etc. of the indices of nonzero array elements, in decreasing order (e.g., starting at index r if it's nonzero, then index r−1 if it's nonzero, etc.), so that you can easily find the elements representing columns to put X's in.
(Optimization.) To help determine when there's a row that can't be satisfied, also make note of the total number of columns needing any X's, and make note of the largest number of X's needed by any row. If the former is less than the latter, return "impossible".
(Optimization.) Sort the rows by the number of X's they need.
Iterate over the rows, starting with the one needing the fewest X's and ending with the one needing the most X's, and for each one:
Update the array accordingly. For example, if a row needs 12 X's, and the array looks like [..., 3, 8, 5], then you'll update the array to look like [..., 3+7 = 10, 8+5−7 = 6, 5−5 = 0]. If it's not possible to update the array because you run out of columns to put X's in, return "impossible". (Note: this part should never actually return "impossible", because we're keeping count of the number of columns left and the max number of columns we'll need, so we should have already returned "impossible" if this was going to happen. I mention this check only for clarity.)
Update the stack/linked-list of indices of nonzero array elements.
Update the total number of columns needing any X's. If it's now less than the greatest number of X's needed by any row, return "impossible".
(Optimization.) If the first nonzero array element has an index greater than the number of rows left, return "impossible".
If we complete our iteration without having returned "impossible", return "possible".
(Note: the reason I say to start with the row needing the fewest X's, and work your way to the row with the most X's, is that a row needing more X's may involve examining updating more elements of the array and of the stack, so the rows needing fewer X's are cheaper. This isn't just a matter of postponing the work: the rows needing fewer X's can help "consolidate" the array, so that there will be fewer distinct column-counts, making the later rows cheaper than they would otherwise be. In a very-bad-case scenario, such as the case of a square matrix where every single row needs a distinct positive number of X's and every single column needs a distinct positive number of X's, the fewest-to-most order means you can handle each row in O(1) time, for linear time overall, whereas the most-to-fewest order would mean that each row would take time proportional to the number of X's it needs, for quadratic time overall.)
Overall, this takes no worse than O(r+c+n) time (where n is the number of X's); I think that the optimizations I've listed are enough to ensure that it's closer to O(r+c) time, but it's hard to be 100% sure. I recommend trying it to see if it's fast enough for your purposes.
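For illustration, here is a compact Python sketch of the greedy idea. It is my own simplification: it uses a max-heap of the remaining column needs instead of the bucket array, and it omits the other optimizations described above, so it is not as fast as the optimized version:

import heapq

def possible(row_needs, col_needs):
    if sum(row_needs) != sum(col_needs):
        return False
    heap = [-c for c in col_needs if c > 0]      # max-heap of remaining column needs
    heapq.heapify(heap)
    for need in sorted(row_needs):               # fewest X's first, as suggested above
        if need > len(heap):
            return False                         # not enough distinct columns left for this row
        taken = [heapq.heappop(heap) for _ in range(need)]
        for t in taken:
            if t + 1 < 0:                        # this column still needs more X's afterwards
                heapq.heappush(heap, t + 1)
    return not heap

# possible([2, 1, 0], [1, 2]) -> True; possible([1, 3, 0], [0, 2, 2]) -> False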
You can use brute force (iterating through all 2^(r * c) possibilities) to solve it, but that will take a long time. If r * c is under 64, you can accelerate it to a certain extent using bit-wise operations on 64-bit integers; however, even then, iterating through all 64-bit possibilities would take, at 1 try per ms, over 500M years.
A wiser choice is to add bits one by one, and only continue placing bits if no constraints are broken. This will eliminate the vast majority of possibilities, greatly speeding up the process. Look up backtracking for the general idea. It is not unlike solving sudokus through guesswork: once it becomes obvious that your guess was wrong, you erase it and try guessing a different digit.
As with sudokus, there are certain strategies that can be written into code and will result in speedups when they apply. For example, if the sum of 1s in rows is different from the sum of 1s in columns, then there are no solutions.
If over 50% of the bits will be on, you can instead work on the complementary problem (transform all ones to zeroes and vice-versa, while updating row and column counts). Both problems are equivalent, because any answer for one is also valid for the complementary.
This problem can be solved in O(n log n) using the Gale-Ryser theorem (where n is the maximum of the lengths of the two degree sequences).
First, make both sequences of equal length by adding 0's to the smaller sequence, and let this length be n.
Let the sequences be A and B. Sort A in non-increasing order, and sort B in non-decreasing order. Create a prefix sum array P for B such that the ith element of P is equal to the sum of the first i elements of B.
Now, iterate over k from 1 to n, and check for each k that
A[1] + A[2] + ... + A[k] <= sum over j = 1..n of min(B[j], k)
(together with the usual requirement that the total sums of A and B are equal).
The second sum can be calculated in O(log n) using binary search for the index of the last number in B smaller than k, and then using the precalculated P.
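A rough Python sketch of this check (my own illustration of the theorem; bisect plays the role of the binary search mentioned above):

from bisect import bisect_left

def gale_ryser(rows, cols):
    # Pad the shorter sequence with zeros so both have length n.
    n = max(len(rows), len(cols))
    a = sorted(list(rows) + [0] * (n - len(rows)), reverse=True)   # non-increasing
    b = sorted(list(cols) + [0] * (n - len(cols)))                 # non-decreasing
    if sum(a) != sum(b):
        return False
    prefix = [0]
    for v in b:
        prefix.append(prefix[-1] + v)           # prefix[i] = b[0] + ... + b[i-1]
    left = 0
    for k in range(1, n + 1):
        left += a[k - 1]                        # sum of the k largest row sums
        idx = bisect_left(b, k)                 # first index whose value is >= k
        right = prefix[idx] + k * (n - idx)     # sum over j of min(b[j], k)
        if left > right:
            return False
    return True

# gale_ryser([2, 1, 0], [1, 2]) -> True; gale_ryser([1, 3, 0], [0, 2, 2]) -> False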
Inspiring from the solution given by RobertBaron I have tried to build a new algorithm.
rows = [int(x) for x in input().split()]
cols = [int(ss) for ss in input().split()]
rows.sort()
cols.sort(reverse=True)
for i in range(len(rows)):
    for j in range(len(cols)):
        if rows[i] != 0 and cols[j] != 0:
            rows[i] = rows[i] - 1
            cols[j] = cols[j] - 1
print("rows: ", rows)
print("cols: ", cols)
# if there is any non zero value, print NO else print YES
flag = True
for i in range(len(rows)):
    if rows[i] != 0:
        flag = False
        break
for j in range(len(cols)):
    if cols[j] != 0:
        flag = False
if flag:
    print("YES")
else:
    print("NO")
Here, I have sorted the rows in ascending order and the cols in descending order, and then decrement a particular row and column whenever a 1 needs to be placed.
It is working for all the test cases posted here! The rest, God knows.

Arrangements with the following conditions

Given a matrix A x B comprising integers >= 0. The sum of each column of the matrix should be non-decreasing when moving from left to right. Also, the sum of the Bth column (the last column) is less than or equal to A.
Find the number of distinct matrices of such type for a given A and B.
I tried to solve it using recursion and memoization as follows-
The function solve() is-
ll solve(ll i, ll curlevel)
{
    if(dp[i][curlevel] != -1)
        return dp[i][curlevel];
    if(i < 0)
        return dp[i][curlevel] = 0;
    if(curlevel == B)
        return dp[i][curlevel] = test(i, c);
    if(curlevel > B)
        return dp[i][curlevel] = 0;
    ll ans = 0;
    for(ll k = i; k >= 0; k--)
    {
        ans += test(i, A) * solve(k, curlevel + 1);
    }
    return dp[i][curlevel] = ans;
}
The function test is defined as follows-
(It calculates the number of ways a sum 'sum' can be written as an ordered sum of 'places' non-negative numbers.)
ll test(ll sum, ll places)
{
    if(mem[sum][places] != -1)
        return mem[sum][places];
    if(sum == 0)
        return mem[sum][places] = 1;
    if(places == 0)
        return mem[sum][places] = 0;
    ll val = 0;
    for(ll i = 0; i <= sum; i++)
    {
        val += test(sum - i, places - 1);
    }
    return mem[sum][places] = val;
}
This method however is too slow.
Is there a faster way to do this?(Maybe a better combinatorics approach)
Starting from the last cell of the last column, if that cell has the value A, then all the other cells in the last column must be 0, so in that case the last column has 1 possible arrangement.
If the last cell has the value A-1, then the cell next to it in the same column can be 0 or 1, so there is one arrangement in which the last column sums to A-1 and A-1 arrangements in which the column sums to A.
In general, the recursive function is:
NumberOfArrangementsOfColumn( cells_remaining, value_remaining ){
    if( value_remaining == 0 ) return 1;
    if( cells_remaining == 1 ) return value_remaining + 1;
    int total = 0;
    for( int sub_value = 1; sub_value <= value_remaining; sub_value++ ){
        total += NumberOfArrangementsOfColumn( cells_remaining - 1, sub_value );
    }
    return total;
}
This function will determine the number of arrangements for the last column. You then need to create another recursive function for computing each of the remaining columns starting with the next to last column etc. for each possible value.
You have to precalculate a Partitions array - Partitions[k] is the number of ways to write k as a sum of A non-negative parts (including zeros), taking part order into account (i.e. counting both 0 0 1 and 0 1 0 etc).
Edit:
Partitions(k) = C(A + k - 1, A - 1)
Example for A = 4
Partitions[4] = C(7,3)=7!/(4!3!)=35
whole array:
Partitions = {1,4,10,20,35}
To calculate Partitions, use table - rotated Pascal triangle
1 1 1 1 1
1 2 3 4 5 //sum of 1st row upto ith element
1 3 6 10 15 //sum of 2nd row
1 4 10 20 35 //sum of upper row
for A = 1000 you need about 1000*sizeof(int64) memory (one or two rows) and about 10^6 modulo additions. If you need to make calculations for many A values, just store the whole table (8 MBytes)
Then use this formula: //corrected
S(columns, k) = Partitions[k] * Sum[j=k..A]{ S(columns - 1, j) }
S(1, k) = Partitions[k]
Result = Sum[k=0..A]{ S(B, k) }
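For illustration, here is a small Python sketch of this recurrence (my own reading of it; math.comb stands in for the Pascal-triangle table, and the modular arithmetic is omitted for clarity):

from math import comb

def count_matrices(A, B):
    # Partitions[k] = ways to write k as an ordered sum of A non-negative parts
    partitions = [comb(A + k - 1, A - 1) for k in range(A + 1)]
    # S[k] corresponds to S(c, k): c columns remain and the leftmost of them sums to exactly k
    S = partitions[:]                              # S(1, k) = Partitions[k]
    for _ in range(B - 1):
        suffix = [0] * (A + 2)                     # suffix[k] = S[k] + S[k+1] + ... + S[A]
        for k in range(A, -1, -1):
            suffix[k] = suffix[k + 1] + S[k]
        S = [partitions[k] * suffix[k] for k in range(A + 1)]
    return sum(S)                                  # Result = Sum over k of S(B, k)

# count_matrices(1, 2) == 3  (column sums 0 <= s1 <= s2 <= 1: (0,0), (0,1), (1,1))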

Find max sum of elements in an array ( with twist)

Given an array with +ve and -ve integers, find the maximum sum such that you are not allowed to skip 2 contiguous elements (i.e. you have to select at least one of any two consecutive elements to move forward).
eg :-
10 , 20 , 30, -10 , -50 , 40 , -50, -1, -3
Output : 10+20+30-10+40-1 = 89
This problem can be solved using Dynamic Programming approach.
Let arr be the given array and opt be the array to store the optimal solutions.
opt[i] is the maximum sum that can be obtained starting from element i, inclusive.
opt[i] = arr[i] + (some other elements after i)
Now to solve the problem we iterate the array arr backwards, each time storing the answer opt[i].
Since we cannot skip 2 contiguous elements, either element i+1 or element i+2 has to be included
in opt[i].
So for each i, opt[i] = arr[i] + max(opt[i+1], opt[i+2])
See this code to understand:
int arr[n]; // array of given numbers. array size = n.
input(arr, n); // input the array elements (given numbers)
int opt[n+2]; // optimal solutions.
memset(opt, 0, sizeof(opt)); // Initially set all optimal solutions to 0.
for(int i = n-1; i >= 0; i--) {
    opt[i] = arr[i] + max(opt[i+1], opt[i+2]);
}
ans = max(opt[0], opt[1]); // final answer.
Observe that opt array has n+2 elements. This is to avoid getting illegal memory access exception (memory out of bounds) when we try to access opt[i+1] and opt[i+2] for the last element (n-1).
See the working implementation of the algorithm given above
Use a recurrence that accounts for that:
dp[i] = max(dp[i - 1] + a[i], <- take two consecutives
dp[i - 2] + a[i], <- skip a[i - 1])
Base cases left as an exercise.
If you see a +ve integer, add it to the sum. If you see a negative integer, inspect the next integer, pick whichever of the two is larger, and add it to the sum.
10 , 20 , 30, -10 , -50 , 40 , -50, -1, -3
For this, add 10, 20, 30, max(-10, -50), 40, max(-50, -1), and since there is no element next to -3, discard it.
The last element goes into the sum only if it is +ve (a sketch of this is shown below).
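A small Python sketch of this greedy pass (my own transcription of the idea; note that a single greedy pass like this is not guaranteed to be optimal for every input, unlike the DP answers above):

def greedy_max_sum(a):
    total, i, n = 0, 0, len(a)
    while i < n - 1:
        if a[i] > 0:
            total += a[i]                       # positive: always take it
            i += 1
        elif a[i + 1] > 0:
            i += 1                              # single negative: skip it, its neighbour is positive
        else:
            total += max(a[i], a[i + 1])        # two negatives in a row: keep the larger one
            i += 2
    if i == n - 1 and a[i] > 0:                 # the last element goes in only if positive
        total += a[i]
    return total

# greedy_max_sum([10, 20, 30, -10, -50, 40, -50, -1, -3]) == 89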
Answer:
I think this algorithm will help.
1. Create a method which outputs the maximum sum of a particular user input array, say T[n], where n denotes the total no. of elements.
2. This method keeps adding array elements as long as they are positive, since we want to maximize the sum and there is no point in dropping positive elements.
3. As soon as the method encounters a negative element, it transfers all consecutive negative elements to another method, which creates a new array, say N[i], containing all the consecutive negative elements encountered in T[n], and returns N[i]'s max output.
In this way the main method keeps adding positive elements, and whenever it encounters negative elements it adds the net max output of that consecutive run of negative elements instead of their real values.
for example: T[n] = 29,34,55,-6,-5,-4,6,43,-8,-9,-4,-3,2,78 //here n=14
Main Method Working:
29+34+55+(sends data & gets value from Secondary method of array [-6,-5,-4])+6+43+(sends data & gets value from Secondary method of array [-8,-9,-4,-3])+2+78
Process Terminates with max output.
Secondary Method Working:
{
N[i] = gets array from Main method or itself as and when required.
This is basically a recursive method.
say N[i] has elements like N1, N2, N3, N4, etc.
for i>=3:
Now choice goes like this.
1. If we take N1, then we can recurse on the leftover array, i.e. N[i-1], which has all elements except N1 in the same order. The net max output will be
N1 + (sends data & gets value from Secondary method of array N[i-1] recursively)
2. If we don't take N1, then we cannot skip N2. So now the algorithm is like the 1st choice but starting with N2, and the max output in this case will be
N2 + (sends data & gets value from Secondary method of array N[i-2] recursively).
Here N[i-2] is an array containing all N[i] elements except N1 & N2, in the same order.
Termination: when we are left with an array of size one (for N[i-2]), we have to choose that particular value, as there is no other option.
The recursions finally yield the max outputs, and we choose the larger of the two choices
and redirect that max output to wherever it is required.
for i=2:
we have to choose the value which is bigger
for i=1:
We can surely skip that value.
So max output in this case will be 0.
}
I think this answer will help you.
Given array:
Given:  10 20 30 -10 -50 40 -50 -1 -3
Array1: 10 30 60 50 10 90 40 89 86
Array2: 10 20 50 40  0 80 30 79 76
Take the max value of array1[n-1], array1[n], array2[n-1], array2[n], i.e. 89 (array1[n-1]).
Algorithm:-
For array1, assign array1[0]=a[0] and array1[1]=a[0]+a[1]; for array2, assign array2[0]=a[0] and array2[1]=a[1].
Calculate the array1 values from 2 to n as the max of array1[i-1]+a[i] and array1[i-2]+a[i]:
for loop from 2 to n {
    array1[i] = max(array1[i-1]+a[i], array1[i-2]+a[i]);
}
Similarly, the array2 values from 2 to n are the max of array2[i-1]+a[i] and array2[i-2]+a[i]:
for loop from 2 to n {
    array2[i] = max(array2[i-1]+a[i], array2[i-2]+a[i]);
}
Finally, find the max value of array1[n-1], array1[n], array2[n-1], array2[n].
#include <stdio.h>

int max(int a, int b){
    return a > b ? a : b;
}

int main(){
    int a[] = {10, 20, 30, -10, -50, 40, -50, -1, -3};
    int i, n, max_sum;
    n = sizeof(a) / sizeof(a[0]);
    int array1[n], array2[n];
    array1[0] = a[0];
    array1[1] = a[0] + a[1];
    array2[0] = a[0];
    array2[1] = a[1];
    for(i = 2; i < n; i++){
        array1[i] = max(array1[i-1] + a[i], array1[i-2] + a[i]);
        array2[i] = max(array2[i-1] + a[i], array2[i-2] + a[i]);
    }
    --i;
    max_sum = max(array1[i], array1[i-1]);
    max_sum = max(max_sum, array2[i-1]);
    max_sum = max(max_sum, array2[i]);
    printf("The max_sum is %d", max_sum);
    return 0;
}
Ans: The max_sum is 89
public static void countSum(int[] a) {
    int count = 0;
    int skip = 0;
    int newCount = 0;
    if(a.length==1)
    {
        count = a[0];
    }
    else
    {
        for(int i:a)
        {
            newCount = count + i;
            if(newCount>=skip)
            {
                count = newCount;
                skip = newCount;
            }
            else
            {
                count = skip;
                skip = newCount;
            }
        }
    }
    System.out.println(count);
}
Let the array be of size N, indexed as 1...N
Let f(n) be the function that provides the answer for the max sum of the subarray (1...n), such that no two left-over elements are consecutive.
f(n) = max( a[n-1] + f(n-2), a[n] + f(n-1) )
In the first option, which is {a[n-1] + f(n-2)}, we are leaving out the last element and, due to the condition given in the question, selecting the second-to-last element.
In the second option, which is {a[n] + f(n-1)}, we are selecting the last element of the subarray, so we have the option to select/deselect the second-to-last element.
Now starting from the base case :
f(0) = 0 [Subarray (1..0) doesn't exist]
f(1) = (a[1] > 0 ? a[1] : 0) [Subarray (1..1)]
f(2) = max( a[1] + f(0), a[2] + f(1) ) [Choosing at least one of them]
Moving forward we can calculate any f(n), where n = 1...N, storing the earlier values to calculate the next results. And yes, obviously, f(N) will give us the answer.
Time complexity: O(n)
Space complexity: O(n)
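A small Python sketch of this recurrence (my own transcription, storing the 1-indexed f defined above in a 0-indexed list):

def best_sum(a):
    n = len(a)
    if n == 0:
        return 0
    f = [0] * (n + 1)                      # f[i] = answer for the prefix a[1..i]
    f[1] = max(a[0], 0)
    for i in range(2, n + 1):
        f[i] = max(a[i - 2] + f[i - 2],    # skip a[i], so a[i-1] must be taken
                   a[i - 1] + f[i - 1])    # take a[i]
    return f[n]

# best_sum([10, 20, 30, -10, -50, 40, -50, -1, -3]) == 89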
n = arr.length().
Append a 0 at the end of the array to handle boundary case.
ans: int array of size n+1.
ans[i] will store the answer for array a[0...i] which includes a[i] in the answer sum.
Now,
ans[0] = a[0]
ans[1] = max(a[1], a[1] + ans[0])
for i in [2, n]:
    ans[i] = max(ans[i-1], ans[i-2]) + a[i]
Final answer would be ans[n].
If you want to avoid using Dynamic Programming:
To find the maximum sum, first add up all the positive numbers.
We will be skipping only negative elements. Since we are not allowed to skip 2 contiguous elements, we put each run of contiguous negative elements into a temp array and figure out the maximum sum of alternate elements using the sum_odd_even function defined below. Then we add the maximum obtained for each such temp array to our sum of all positive numbers, and the final sum gives us the desired output.
Code:
def sum_odd_even(arr):
    sum1 = sum2 = 0
    for i in range(len(arr)):
        if i % 2 == 0:
            sum1 += arr[i]
        else:
            sum2 += arr[i]
    return max(sum1, sum2)

input = [10, 20, 30, -10, -50, 40, -50, -1, -3]
result = 0
temp = []
for i in range(len(input)):
    if input[i] > 0:
        result += input[i]
    if input[i] < 0 and i != len(input) - 1:
        temp.append(input[i])
    elif input[i] < 0:
        temp.append(input[i])
        result += sum_odd_even(temp)
        temp = []
    else:
        result += sum_odd_even(temp)
        temp = []
print(result)
Simple solution: skip with a twist :). Just skip the smaller of a[i] and a[i+1] when two consecutive elements are -ve. Use if conditions to check this up to the first n-1 elements, and check the last element separately at the end.
int getMaxSum(int[] a) {
    int sum = 0;
    for (int i = 0; i <= a.length-2; i++) {
        if (a[i] > 0) {
            sum += a[i];                      // positive: always take it
            continue;
        } else if (a[i+1] > 0) {
            continue;                         // a[i] is negative but a[i+1] is positive: just skip a[i]
        } else {
            sum += Math.max(a[i], a[i+1]);    // two negatives in a row: take the larger, skip the other
            i++;
        }
    }
    if (a[a.length-1] > 0) {
        sum += a[a.length-1];
    }
    return sum;
}
The correct recurrence is as follows:
dp[i] = max(dp[i - 1] + a[i], dp[i - 2] + a[i - 1])
The first case is the one where we pick the i-th element. The second case is the one where we skip the i-th element; in that case, we must pick the (i-1)-th element.
The problem with IVlad's answer is that it always picks the i-th element, which can lead to an incorrect answer.
This question can be solved using include,exclude approach.
For first element, include = arr[0], exclude = 0.
For rest of the elements:
nextInclude = arr[i]+max(include, exclude)
nextExclude = include
include = nextInclude
exclude = nextExclude
Finally, ans = Math.max(include,exclude).
A similar (though not identical) question is discussed at https://www.youtube.com/watch?v=VT4bZV24QNo&t=675s&ab_channel=Pepcoding.
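A small Python sketch of this include/exclude approach (my own transcription of the steps above):

def max_sum(arr):
    include, exclude = arr[0], 0                   # best sums with / without the first element taken
    for x in arr[1:]:
        # to exclude x we must have included the previous element
        include, exclude = x + max(include, exclude), include
    return max(include, exclude)

# max_sum([10, 20, 30, -10, -50, 40, -50, -1, -3]) == 89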

How to get to array with the smallest sum

I was given this interview question, and I totally blanked out. How would you guys solve this:
Go from the start of an array to the end in a way that you minimize the sum of elements that you land on.
You can move to the next element, i.e go from index 1 to index 2.
Or you can hop one element over. i.e go from index 1 to index 3.
Assuming that you only move from left to right, and you want to find a way to get from index 0 to index n - 1 of an array of n elements, so that the sum of the path you take is minimum. From index i, you can only move ahead to index i + 1 or index i + 2.
Observe that the minimum path to get from index 0 to index k goes through either index k - 1 or index k - 2, so its cost is A[k] plus the minimum of the minimum path to index k - 1 and the minimum path to index k - 2. There is simply no other path to take.
Therefore, we can have a dynamic programming solution:
DP[0] = A[0]
DP[1] = A[0] + A[1]
DP[k] = min(DP[k - 1], DP[k - 2]) + A[k]
A is the array of elements.
DP array will store the minimum sum to reach element at index i from index 0.
The result will be min(DP[n - 2], DP[n - 1]), since you can either finish on the last element or hop past the end from the second-to-last element (as in the code below).
Java:
static int getMinSum(int elements[])
{
    if (elements == null || elements.length == 0)
    {
        throw new IllegalArgumentException("No elements");
    }
    if (elements.length == 1)
    {
        return elements[0];
    }
    int minSum[] = new int[elements.length];
    minSum[0] = elements[0];
    minSum[1] = elements[0] + elements[1];
    for (int i = 2; i < elements.length; i++)
    {
        minSum[i] = Math.min(minSum[i - 1] + elements[i], minSum[i - 2] + elements[i]);
    }
    return Math.min(minSum[elements.length - 2], minSum[elements.length - 1]);
}
Input:
int elements[] = { 1, -2, 3 };
System.out.println(getMinSum(elements));
Output:
-1
Case description:
We start from the index 0. We must take 1. Now we can go to index 1 or 2. Since -2 is attractive, we choose it. Now we can go to index 2 or hop it. Better hop and our sum is minimal 1 + (-2) = -1.
Another examples (pseudocode):
getMinSum({1, 1, 10, 1}) == 3
getMinSum({1, 1, 10, 100, 1000}) == 102
Algorithm:
O(n) complexity. Dynamic programming. We go from left to right filling up minSum array. Invariant: minSum[i] = min(minSum[i - 1] + elements[i] /* move by 1 */ , minSum[i - 2] + elements[i] /* hop */ ).
This seems like the perfect place for a dynamic programming solution.
Keeping track of two values, odd/even.
We will take Even to mean we used the previous value, and Odd to mean we haven't.
int Even = 0; int Odd = 0;
int length = arr.length;
Start at the back. We can either take the number or not. Therefore:
Even = arr[length - 1];
Odd = 0;
And now we move to the next element with two cases. Either we were even, in which case we have the choice to take the element or skip it. Or we were odd and had to take the element.
int current = arr[length - 2];
int previousEven = Even;
Even = Math.min(Even + current, Odd + current);
Odd = previousEven;
We can make a loop out of this and achieve an O(n) solution, as sketched below!
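A small Python sketch of that loop (my own completion of the idea above, using take/skip names instead of Even/Odd; 'take' is the best cost of the suffix when we land on the current element, 'skip' when we hop over it):

def min_path_sum(a):
    n = len(a)
    take, skip = a[n - 1], 0                   # at the last element: land on it, or hop past the end
    for i in range(n - 2, -1, -1):
        take, skip = a[i] + min(take, skip), take
    return take                                # we must start on (land on) index 0

# min_path_sum([1, -2, 3]) == -1; min_path_sum([1, 1, 10, 1]) == 3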

How can I efficiently determine if two lists contain elements ordered in the same way?

I have two ordered lists of the same element type, each list having at most one element of each value (say ints and unique numbers), but otherwise with no restrictions (one may be a subset of the other, they may be completely disjoint, or they may share some elements but not others).
How do I efficiently determine if A orders any two items differently than B does? For example, if A has the items 1, 2, 10 and B the items 2, 10, 1, the property would not hold, as A lists 1 before 10 but B lists it after 10. 1, 2, 10 vs 2, 10, 5 would be perfectly valid, however, as A never mentions 5 at all. I cannot rely on any given sorting rule shared by both lists.
You can get O(n) as follows. First, find the intersection of the two sets using hashing. Second, test whether A and B are identical if you only consider elements from the intersection.
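A minimal Python sketch of this approach (my own transcription; it filters both lists down to the common elements and compares the filtered sequences):

def same_relative_order(a, b):
    common = set(a) & set(b)                  # intersection via hashing, O(n) expected
    return [x for x in a if x in common] == [x for x in b if x in common]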
My approach would be to first make sorted copies of A and B which also record the positions of elements in the original lists:
for i in 1 .. length(A):
    Apos[i] = (A[i], i)
sortedApos = sort(Apos[] by first element of each pair)
for i in 1 .. length(B):
    Bpos[i] = (B[i], i)
sortedBpos = sort(Bpos[] by first element of each pair)
Now find those elements in common using a standard list merge that records the positions in both A and B of the shared elements:
i = 1
j = 1
shared = []
while i <= length(A) && j <= length(B)
    if sortedApos[i][1] < sortedBpos[j][1]
        ++i
    else if sortedApos[i][1] > sortedBpos[j][1]
        ++j
    else // They're equal
        append(shared, (sortedApos[i][2], sortedBpos[j][2]))
        ++i
        ++j
Finally, sort shared by its first element (position in A) and check that all its second elements (positions in B) are increasing. This will be the case iff the elements common to A and B appear in the same order:
sortedShared = sort(shared[] by first element of each pair)
for i = 2 .. length(sortedShared)
    if sortedShared[i][2] < sortedShared[i-1][2]
        return DIFFERENT
return SAME
Time complexity: 2*(O(n) + O(n log n)) + O(n) + O(n log n) + O(n) = O(n log n).
General approach: store all the values and their positions in B as keys and values in a HashMap. Iterate over the values in A and look them up in B's HashMap to get their position in B (or null). If this position is before the largest position value you've seen previously, then you know that something in B is in a different order than A. Runs in O(n) time.
Rough, totally untested code:
boolean valuesInSameOrder(int[] A, int[] B)
{
    Map<Integer, Integer> bMap = new HashMap<Integer, Integer>();
    for (int i = 0; i < B.length; i++)
    {
        bMap.put(B[i], i);
    }
    int maxPosInB = 0;
    for (int i = 0; i < A.length; i++)
    {
        if(bMap.containsKey(A[i]))
        {
            int currPosInB = bMap.get(A[i]);
            if (currPosInB < maxPosInB)
            {
                // B has something in a different order than A
                return false;
            }
            else
            {
                maxPosInB = currPosInB;
            }
        }
    }
    // All of B's values are in the same order as A
    return true;
}
