Maximum sum from a 2D array-DP - algorithm

Given a 2D array of weights, find the maximum sum under the condition that we select exactly one element from each row, and the element directly under a selected element cannot be selected (this condition must hold for every selected element). Note that the sum therefore contains as many elements as there are rows.
If arr[i][j] is a selected element then arr[i+1][j] cannot be selected. Also, only one element can be selected from each row: if arr[i][1] is selected then arr[i][2], arr[i][3], ... cannot be selected.
Edit: I tried solving it using DP. I took a 2D array DP where
DP[i][j] = max(arr[i+1][k] for k = 1 to n and k != j) + arr[i][j]
I used this recurrence to build the DP matrix and finally looped over it to find the maximum. But I think the complexity is very high when I approach it like this. Please help!
Input Matrix-
1 2 3 4
5 6 7 8
9 1 4 2
6 3 5 7
Output-
27

class Solution {
    private static int maximumSum(int[][] mat) {
        int rows = mat.length;
        int cols = mat[0].length;
        int[] ans = new int[cols];    // best sum of the chain ending at column j
        int[] index = new int[cols];  // column picked in the previous row per chain
        int max_val = 0;
        for (int i = 0; i < cols; ++i) {
            ans[i] = mat[0][i];
            index[i] = i;
            max_val = Math.max(max_val, ans[i]); // needed for 1 row input
        }
        for (int i = 1; i < rows; ++i) {
            int[] temp = new int[cols];
            for (int j = 0; j < cols; ++j) {
                temp[j] = ans[j];
                int max_row_index = -1;
                // best element of row i outside the previously picked column
                for (int k = 0; k < cols; ++k) {
                    if (k == index[j]) continue;
                    if (max_row_index == -1 || mat[i][k] > mat[i][max_row_index]) {
                        max_row_index = k;
                    }
                }
                temp[j] += mat[i][max_row_index];
                index[j] = max_row_index;
                max_val = Math.max(max_val, temp[j]);
            }
            ans = temp;
        }
        return max_val;
    }

    public static void main(String[] args) {
        int[][] arr = {
            {1, 2, 3, 4},
            {5, 6, 7, 8},
            {9, 1, 4, 2},
            {6, 3, 5, 7}
        };
        System.out.println(maximumSum(arr));
    }
}
Output:
27
Algorithm:
Let's adopt a top-down approach here. We go from the first row to the last, maintaining the answers in our ans array.
Let's work through your example.
Case:
{1,2,3,4},
{5,6,7,8},
{9,1,4,2},
{6,3,5,7}
For first row, ans is as is [1,2,3,4].
For the second row, we loop through [5,6,7,8] for each of 1,2,3,4, skipping for each chain the column it picked in the previous row. For example, for 1 we skip the 5 underneath it, take the max among the remaining columns (the 8) and add it to 1. The same goes for the other elements.
So now the ans array looks like [9, 10, 11, 11], with the picked columns recorded as [3, 3, 3, 2] (0-based).
Next we work out [9, 10, 11, 11] against the next row [9,1,4,2] in the same way: skipping each chain's last-picked column gives [18, 19, 20, 20], and working that against the last row [6,3,5,7] gives [25, 26, 27, 27], where 27 is the highest value and the final answer.
Time complexity: O(n³). Space complexity: O(m), where m is the number of columns.
Update #1:
You can reduce the complexity from O(n³) to O(n²) by maintaining the two largest values (and their indexes) of each row. This always works: even if the index of one max coincides with the column forbidden for temp[j], the other max index always provides the best allowed value. Thanks to MBo for this suggestion. This I leave mostly as an exercise to the reader; one possible sketch follows.
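For reference, here is one possible Java sketch of that optimization. The bookkeeping details and the name maximumSumFast are mine, not from MBo; it assumes at least two columns and non-negative values, as in the example.

private static int maximumSumFast(int[][] mat) {
    int rows = mat.length, cols = mat[0].length;
    int[] ans = new int[cols];
    int[] index = new int[cols]; // column picked in the previous row per chain
    int max_val = 0;
    for (int j = 0; j < cols; ++j) {
        ans[j] = mat[0][j];
        index[j] = j;
        max_val = Math.max(max_val, ans[j]);
    }
    for (int i = 1; i < rows; ++i) {
        int m1 = -1, m2 = -1; // indexes of the two largest values in row i
        for (int k = 0; k < cols; ++k) {
            if (m1 == -1 || mat[i][k] > mat[i][m1]) { m2 = m1; m1 = k; }
            else if (m2 == -1 || mat[i][k] > mat[i][m2]) { m2 = k; }
        }
        int[] temp = new int[cols];
        for (int j = 0; j < cols; ++j) {
            int pick = (index[j] != m1) ? m1 : m2; // dodge the forbidden column
            temp[j] = ans[j] + mat[i][pick];
            index[j] = pick;
            max_val = Math.max(max_val, temp[j]);
        }
        ans = temp;
    }
    return max_val;
}

Each row is scanned once to find its two best columns, so the work per row drops from O(n²) to O(n).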
Update #2:
We also need to maintain, for each chain, the index of the element that was picked in the previous row.
This must be remembered so that the forbidden column for the current row is tracked accurately.


How to get original array from random shuffle of an array

I was asked the question below in an interview today. I gave an O(n log n) solution but was asked for an O(n) solution, which I could not come up with. Can you help?
An input array is given, like [1,2,4]; then every element of it is doubled and
appended to the array, so the array becomes [1,2,4,2,4,8]. Now this array is
randomly shuffled; one possible arrangement is [4,8,2,1,2,4]. Given this
shuffled array, we want to recover the original array [1,2,4] in O(n) time.
The original array can be returned in any order. How can I do it?
Here's an O(N) Java solution that could be improved by first making sure that the array is of the proper form. For example it shouldn't accept [0] as an input:
import java.util.*;

class Solution {
    public static int[] findOriginalArray(int[] changed) {
        if (changed.length % 2 != 0)
            return new int[] {};
        // set Map size to optimal value to avoid rehashes
        Map<Integer,Integer> count = new HashMap<>(changed.length*100/75);
        int[] original = new int[changed.length/2];
        int pos = 0;
        // count frequency for each number
        for (int n : changed) {
            count.put(n, count.getOrDefault(n,0)+1);
        }
        // now decide which go into the answer
        for (int n : changed) {
            int smallest = n;
            for (int m=n; m > 0 && count.getOrDefault(m,0) > 0; m = m/2) {
                //System.out.println(m);
                smallest = m;
                if (m % 2 != 0) break;
            }
            // trickle up from smallest to largest while count > 0
            for (int m=smallest, mm = 2*m; count.getOrDefault(mm,0) > 0; m = mm, mm=2*mm) {
                int ct = count.getOrDefault(mm,0);
                while (count.get(m) > 0 && ct > 0) {
                    //System.out.println("adding "+m);
                    original[pos++] = m;
                    count.put(mm, ct - 1);
                    count.put(m, count.get(m) - 1);
                    ct = count.getOrDefault(mm,0);
                }
            }
        }
        // check for incorrect format
        if (count.values().stream().anyMatch(x -> x > 0)) {
            return new int[] {};
        }
        return original;
    }

    public static void main(String[] args) {
        int[] changed = {1,2,4,2,4,8};
        System.out.println(Arrays.toString(changed));
        System.out.println(Arrays.toString(findOriginalArray(changed)));
    }
}
But I've tried to keep it simple.
The output is NOT guaranteed to be sorted. If you want it sorted it's going to cost O(NlogN) inevitably unless you use a Radix sort or something similar (which would make it O(NlogE) where E is the max value of the numbers you're sorting and logE the number of bits needed).
Runtime
This may not look like O(N), but you can see that it is: each loop finds the lowest number in a chain ONCE, then trickles up that chain ONCE. Said another way, every iteration does O(X) work to process X elements, leaving O(N-X) elements to handle. Therefore, even though there are loops inside loops, it is still O(N).
An example execution can be seen with [64,32,16,8,4,2].
If this were not O(N), then printing each value traversed while finding the smallest would show the values appearing over and over again (for example N*(N+1)/2 times).
But instead you see them only once:
finding smallest 64
finding smallest 32
finding smallest 16
finding smallest 8
finding smallest 4
finding smallest 2
adding 2
adding 8
adding 32
If you're familiar with the Heapify algorithm you'll recognize the approach here.
def findOriginalArray(self, changed: List[int]) -> List[int]:
    size = len(changed)
    ans = []
    left_elements = size // 2
    # IF SIZE IS ODD THEN RETURN [], NO SOLN. IS POSSIBLE
    if size % 2 != 0:
        return ans
    # FREQUENCY DICTIONARY: given array [0,0,2,1] my map will be {0:2, 2:1, 1:1}
    d = {}
    for i in changed:
        if i in d:
            d[i] += 1
        else:
            d[i] = 1
    # CHECK THE EDGE CASE OF 0
    if 0 in d:
        count = d[0]
        half = count // 2
        if (count % 2 != 0) or (half > left_elements):
            return ans
        left_elements -= half
        ans = [0 for i in range(half)]
    # CHECK REST OF THE CASES: considering the values will be 10^5
    for i in range(1, 50001):
        if i in d and d[i] > 0:
            count = d[i]
            if count > left_elements:
                ans = []
                break
            left_elements -= count
            for j in range(count):
                ans.append(i)
            if 2 * i in d:
                if d[2 * i] < count:
                    ans = []
                    break
                else:
                    d[2 * i] -= count
            else:
                ans = []
                break
    return ans
I have a simple idea which might not be the best, but I could not think of a case where it would not work. Given the array A with the doubled elements randomly shuffled, keep a helper map. Process each element of the array and, each time you find a new element, add it to the map with the value 0. When an element i is processed, increment map[i] and decrement map[2*i]. Then iterate over the map and print the elements that have a value greater than zero.
A simple example, say that the vector is:
[1, 2, 3]
And the doubled/shuffled version is:
A = [3, 2, 1, 4, 2, 6]
When processing 3, first add the keys 3 and 6 to the map with value zero. Increment map[3] and decrement map[6]. This way, map[3] = 1 and map[6] = -1. Then for the next element map[2] = 1 and map[4] = -1 and so forth. The final state of the map in this example would be map[1] = 1, map[2] = 1, map[3] = 1, map[4] = -1, map[6] = 0, map[8] = -1, map[12] = -1.
Then you just process the keys of the map and, for each key with a value greater than zero, add it to the output. There are certainly more efficient solutions, but this one is O(n).
In C++, you can try this.
The time complexity is O(N + K log K), where N is the length of the input and K is the number of unique elements in the input.
class Solution {
public:
    vector<int> findOriginalArray(vector<int>& input) {
        if (input.size() % 2) return {};
        unordered_map<int, int> m;
        for (int n : input) m[n]++;
        vector<int> nums;
        for (auto [n, cnt] : m) nums.push_back(n);
        sort(begin(nums), end(nums));
        vector<int> out;
        for (int n : nums) {
            if (m[2 * n] < m[n]) return {};
            for (int i = 0; i < m[n]; ++i, --m[2 * n]) out.push_back(n);
        }
        return out;
    }
};
Not so clear about the space complexity required in the question, so this is my top-of-the-mind attempt, assuming O(n) time complexity is required.
If the length of the input array is not even, it is invalid.
Create a map and add the elements of the input array to it.
Divide each element in the input array by 2 and check if that value exists in the map. If it exists, add it to the array (slice) orig.
There is a chance we have added duplicate values to this original array, so clean it.
Here is a sample Go code:
https://go.dev/play/p/w4mm-rloHyi
I am sure we can optimize this code in a lot of ways for space complexity. But it is O(n) time complexity.

Given an array of numbers. At each step we can pick a number N in this array and sum N with another number that exists in this array

I'm stuck on this problem.
Given an array of numbers, at each step we can pick a number N in the array and add N to another number in the array; N's position then becomes zero, as in the example below. We continue this process until all numbers in the array equal zero. What is the minimum number of steps required? (We can guarantee that the sum of the numbers in the array is initially zero.)
Example: -20,-15,1,3,7,9,15
Step 1: pick -15 and sum with 15 -> -20,0,1,3,7,9,0
Step 2: pick 9 and sum with -20 -> -11,0,1,3,7,0,0
Step 3: pick 7 and sum with -11 -> -4,0,1,3,0,0,0
Step 4: pick 3 and sum with -4 -> -1,0,1,0,0,0,0
Step 5: pick 1 and sum with -1 -> 0,0,0,0,0,0,0
So the answer for this example is 5.
I've tried a greedy algorithm. It works like this:
at each step, pick the maximum and minimum numbers still available in the array and sum them, until all numbers in the array equal zero.
But it doesn't work; it gives me wrong answers. Can anyone help me solve this problem?
#include <bits/stdc++.h>
using namespace std;

int a[] = {-20,-15,1,3,7,9,15};

int bruteforce() {
    bool isEqualToZero = true;
    for (int i = 0; i < (sizeof(a)/sizeof(int)); i++)
        if (a[i] != 0) {
            isEqualToZero = false;
            break;
        }
    if (isEqualToZero)
        return 0;
    int tmp = 0, m = 1e9;
    for (int i = 0; i < (sizeof(a)/sizeof(int)); i++) {
        for (int j = i+1; j < (sizeof(a)/sizeof(int)); j++) {
            if (a[i]*a[j] >= 0) continue; // only merge numbers of opposite sign
            tmp = a[j];
            a[i] += a[j];
            a[j] = 0;
            m = min(m, bruteforce());
            a[j] = tmp;
            a[i] -= tmp;
        }
    }
    return m + 1;
}

int main()
{
    cout << bruteforce();
}
This is the brute force approach that I've written for this problem. Is there any algorithm to solve this problem faster?
This has an NP-complete feel, but the following does an A* search through all possible normalized partial sums on the way down to a single non-zero term. That solves your problem, and it means you don't get into an infinite loop if the sum is not zero.
If greedy works, this will explore the greedy path first, verify that you can't do better, and return fairly quickly. If greedy doesn't work, this may... take a lot longer.
Implementation in Python because that is easy for me. Translation into another language is an exercise for the reader.
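Note: the snippet below calls a helper min_steps_remaining that is not defined in the post. Here is a plausible reconstruction (my assumption, not necessarily the original's): a single step can zero out at most one positive and at most one negative number, so the larger of the two counts is an admissible lower bound on the steps left, which is exactly what the A* heuristic needs.

def min_steps_remaining(numbers):
    # assumed helper: lower bound on remaining steps (0 only when all are zero)
    positives = sum(1 for x in numbers if x > 0)
    negatives = sum(1 for x in numbers if x < 0)
    return max(positives, negatives)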
import heapq

def find_minimal_steps(numbers):
    normalized = tuple(sorted(numbers))
    seen = set([normalized])
    todo = [(min_steps_remaining(normalized), 0, normalized, None)]
    # steps never exceed len(numbers) - 1, so 7 is a safe bound for this example
    while todo[0][0] < 7:
        step_limit, steps_taken, prev, path = heapq.heappop(todo)
        steps_taken = -1 * steps_taken  # We store negative for sort order
        if min_steps_remaining(prev) == 0:
            decoded_path = []
            while path is not None:
                decoded_path.append((path[0], path[1]))
                path = path[2]
            return steps_taken, list(reversed(decoded_path))
        prev_numbers = list(prev)
        for i in range(len(prev_numbers)):
            for j in range(len(prev_numbers)):
                if i != j:
                    # Track what they were
                    num_i = prev_numbers[i]
                    num_j = prev_numbers[j]
                    # Sum them
                    prev_numbers[i] += num_j
                    prev_numbers[j] = 0
                    normalized = tuple(sorted(prev_numbers))
                    if normalized not in seen:
                        seen.add(normalized)
                        heapq.heappush(todo, (
                            min_steps_remaining(normalized) + steps_taken + 1,
                            -steps_taken - 1,  # More steps is smaller is looked at first
                            normalized,
                            (num_i, num_j, path)))
                    # set them back.
                    prev_numbers[i] = num_i
                    prev_numbers[j] = num_j

print(find_minimal_steps([-20,-15,1,3,7,9,15]))
For fun I also added a linked list implementation that doesn't just tell you how many minimal steps, but which ones it found. In this case its steps were (-15, 15), (7, 9), (3, 16), (1, 19), (-20, 20) meaning add 15 to -15, 9 to 7, 16 to 3, 19 to 1, and 20 to -20.

Maximum subset of intervals that does not exceed a coverage limit?

Here's one coding question I'm confused about.
Given a 2-D array [[1, 9], [2, 8], [2, 5], [3, 4], [6, 7], [6, 8]], each inner array represents an interval; and if we pile up these intervals, we'll see:
1 2 3 4 5 6 7 8 9
2 3 4 5 6 7 8
2 3 4 5
3 4
6 7
6 7 8
Now there's a limit that the coverage should be <= 3 for each position, and we can see that for positions 3, 4, 6, and 7 the coverage is 4.
The question is: at most how many intervals can we choose so that every position fits the <= 3 limit? It's quite clear that in this case we simply remove the longest interval [1, 9], so the maximum number of chosen intervals is 6 - 1 = 5.
What algorithm should I apply to such a question? I guess it's a variant of interval scheduling?
Thanks
I hope I have understood the question right. This is the solution I was able to get with C#:
//test
int[][] grid = { new int[] { 1, 9 }, new int[] { 2, 8 }, new int[] { 2, 5 },
                 new int[] { 3, 4 }, new int[] { 6, 7 }, new int[] { 6, 8 } };
SubsetFinder sf = new SubsetFinder(grid);
int t1 = sf.GetNumberOfIntervals(1); // 6
int t2 = sf.GetNumberOfIntervals(2); // 5
int t3 = sf.GetNumberOfIntervals(3); // 5
int t4 = sf.GetNumberOfIntervals(4); // 2
int t5 = sf.GetNumberOfIntervals(5); // 0

class SubsetFinder
{
    Dictionary<int, List<int>> dic;
    int intervalCount;

    public SubsetFinder(int[][] grid)
    {
        init(grid);
    }

    private void init(int[][] grid)
    {
        this.dic = new Dictionary<int, List<int>>();
        this.intervalCount = grid.Length;
        for (int r = 0; r < grid.Length; r++)
        {
            int[] row = grid[r];
            if (row.Length != 2) throw new Exception("not grid");
            int start = row[0];
            int end = row[1];
            if (end < start) throw new Exception("bad interval");
            for (int i = start; i <= end; i++)
                if (!dic.ContainsKey(i))
                    dic.Add(i, new List<int>(new int[] { r }));
                else
                    dic[i].Add(r);
        }
    }

    public int GetNumberOfIntervals(int coverageLimit)
    {
        HashSet<int> hsExclude = new HashSet<int>();
        foreach (int key in dic.Keys)
        {
            List<int> lst = dic[key];
            if (lst.Count < coverageLimit)
                foreach (int i in lst)
                    hsExclude.Add(i);
        }
        return intervalCount - hsExclude.Count;
    }
}
I think you can solve this problem using a sweep algorithm. Here's my approach:
The general idea is that instead of finding out the maximum number of intervals you can choose and still fit the limit, we will find the minimum number of intervals that must be deleted in order to make all the numbers fit the limit. Here's how we can do that:
First create a vector of triples. In each triple the first part is an integer, the second is a boolean, and the third is an integer: the first part holds an endpoint value from the input (both starts and ends of intervals appear), the second tells us whether that value is the start or the end of an interval, and the third is the id of the interval.
Sort the created vector based on the first part, in case of a tie, the start should come before the end of some intervals.
In the example you provided, the vector will be (ids omitted):
(1,0), (2,0), (2,0), (3,0), (4,1), (5,1), (6,0), (6,0), (7,1), (8,1), (8,1), (9,1)
Now, iterate over the vector, while keeping a set of integers, which represents the intervals that are currently taken. The numbers inside the set represent the ends of the currently taken intervals. This set should be kept sorted in the increasing order.
While iterating over the vector, we might encounter one of the following 2 possibilities:
We are currently handling the start of an interval. In this case we simply add the end of this interval (identified by the id in the third part) to the set. If the size of the set is now more than the limit, we must delete exactly one interval; but which interval is best to delete? The interval with the biggest end, of course: deleting it not only reduces the number of taken intervals to fit the limit, it is also the most helpful choice for the future, since that interval lasts the longest. Simply delete this interval from the set (its end will be last in the set, since the set is sorted in increasing order of ends).
We are currently handling the end of an interval. In this case, check the set: if it contains the specified end, delete it, because the corresponding interval has come to its end. If the set doesn't contain a matching end, just continue iterating to the next element; it means we have already decided not to take the corresponding interval.
If you need to count the number of taken intervals, or even print them, it can be done easily: whenever you handle the end of an interval and actually find that end in the set, the corresponding interval is a taken one, and you may increment your answer by one, print it, or keep it in a vector representing your answer.
The total complexity of my approach is O(N log N), where N is the number of intervals given in the input.
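For concreteness, here is a minimal Java sketch of the sweep described above. It is my code, not the answerer's: it assumes closed integer intervals and the tie rule above (starts before ends), uses a TreeMap as the sorted multiset of taken interval ends, and the name maxIntervals is made up.

import java.util.*;

class IntervalSweep {
    // Greedy sweep: keep at most `limit` overlapping intervals at any position,
    // dropping the farthest-reaching interval whenever the limit is exceeded.
    static int maxIntervals(int[][] intervals, int limit) {
        List<int[]> events = new ArrayList<>(); // {position, type (0=start, 1=end), end}
        for (int[] iv : intervals) {
            events.add(new int[]{iv[0], 0, iv[1]});
            events.add(new int[]{iv[1], 1, iv[1]});
        }
        // sort by position; starts come before ends on ties (closed intervals)
        events.sort((a, b) -> a[0] != b[0] ? a[0] - b[0] : a[1] - b[1]);
        TreeMap<Integer, Integer> taken = new TreeMap<>(); // multiset of ends
        int size = 0, kept = 0;
        for (int[] e : events) {
            if (e[1] == 0) {          // start: tentatively take the interval
                taken.merge(e[2], 1, Integer::sum);
                if (++size > limit) { // over the limit: drop the biggest end
                    int worst = taken.lastKey();
                    if (taken.merge(worst, -1, Integer::sum) == 0) taken.remove(worst);
                    size--;
                }
            } else {                  // end: if still taken, the interval survived
                Integer c = taken.get(e[0]);
                if (c != null) {
                    if (c == 1) taken.remove(e[0]); else taken.put(e[0], c - 1);
                    size--;
                    kept++;
                }
            }
        }
        return kept;
    }

    public static void main(String[] args) {
        int[][] grid = {{1, 9}, {2, 8}, {2, 5}, {3, 4}, {6, 7}, {6, 8}};
        System.out.println(maxIntervals(grid, 3)); // 5: the sweep drops [1, 9]
    }
}

On the example input with limit 3 it keeps 5 intervals, matching the expected answer.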

Is it possible to find the largest drop between two numbers in an array in less than O(n²) complexity?

I have an array full of numbers.
I need to find the maximum difference between two numbers, where the bigger number comes before the smaller number in the array.
public static int maximalDrop (int [] a)
For example:
for the array 5, 21, 3, 27, 12, 24, 7, 6, 4 the result will be 23 (27 - 4)
for the array 5, 21, 3, 22, 12, 7, 26, 14 the result will be 18 (21 - 3)
My solution is to take the first element of the array (as the big number), check the difference between it and every later number, then do the same starting from the next element, and so on, keeping the biggest difference found.
I think my solution is O(n²); can I do it in less?
Unless I misunderstand the question, I believe you can do this in one pass of the array. You just need to keep track of the maximum value and the maximum difference seen so far. As you go through the array, calculate the difference between the current number and the maximum seen so far.
So for your second example 5, 21, 3, 22, 12, 7, 26, 14
1: 5 is first value so set maximum to 5
2: 21 > 5 so reset maximum
3: 21 - 3 = 18
4: 22 > 21 so reset maximum
5: 22 - 12 = 10
6: 22 - 7 = 15
7: 26 > 22 so reset maximum
8: 26 - 14 = 12
Because the smaller number comes after the larger one, whenever you find a new maximum, any smaller number beyond it in the array is subtracted from this new maximum.
The answer required is the maximum difference seen during this process; in this case the 18 that is calculated in step 3.
Try this:
public static int maximalDrop(int[] a)
{
    int max = a[0]; // largest value seen so far
    int dif = 0;    // largest drop seen so far
    for (int i = 1; i < a.length; i++)
    {
        if (a[i] > max) {
            max = a[i];
        } else if (dif < max - a[i]) {
            dif = max - a[i];
        }
    }
    return dif;
}
Well, I'm not sure whether my understanding of this question is correct or not. However, I think you only need to keep track of the largest value visited so far and the drop value.
Consider this: if the largest drop is made by a - b, and there is another value c before b which is larger than a, then c - b is definitely larger than a - b, so the largest drop would be c - b.
And even though a larger number may replace the max value later on, it won't change the drop value unless it makes a larger drop.
This code should work; it's in Java.
The time cost is O(n).
If I misunderstood some concepts, please let me know.
public int findDrop(int[] ar) {
    int max = ar[0];
    int drop = 0;
    for (int i = 1; i < ar.length; i++) {
        if (ar[i] > max) {
            max = ar[i];
        } else {
            if (max - ar[i] > drop) {
                drop = max - ar[i];
            }
        }
    }
    return drop;
}
O(N) solution
public static int findMaxDrop(int[] arr) {
    int maxSoFar = 0;
    int currDrop = 0;
    int maxDrop = 0;
    for (int i = 0; i <= arr.length - 1; i++) {
        if (arr[i] > maxSoFar) {
            maxSoFar = arr[i];
        } else {
            currDrop = maxSoFar - arr[i];
            maxDrop = Math.max(currDrop, maxDrop);
        }
    }
    return maxDrop;
}
You should only need a minor tweak to merge sort to do this in O(n log n)!
can be done in O(n) :
merge sort the list
get the minimum and the maximum items and calculate the diff between them.

0/1 Knapsack Dynamic Programming Optimization, from 2D matrix to 1D matrix

I need some clarification on this part of Wikipedia's Knapsack article:
This solution will therefore run in O(nW) time and O(nW) space. Additionally, if
we use only a 1-dimensional array m[W] to store the current optimal values
and pass over this array i+1 times, rewriting from m[W] to m[1] every time, we
get the same result for only O(W) space.
I am having trouble understanding how to turn the 2D matrix into a 1D matrix to save space. Also, what does rewriting from m[W] to m[1] every time mean, and how does it work?
Please provide an example. Say I have the set {V,W} --> {(5,4),(6,5),(3,2)} with K = 9.
What would the 1D array look like?
I know this is an old question. But I had to spend some time searching for this and I'm just documenting the approaches here for anyone's future reference.
Method 1
The straightforward 2D method that uses N rows is:
int dp[MAXN][MAXW];

int solve()
{
    memset(dp[0], 0, sizeof(dp[0]));
    for (int i = 1; i <= N; i++) {
        for (int j = 0; j <= W; j++) {
            dp[i][j] = (w[i] > j) ? dp[i-1][j] : max(dp[i-1][j], dp[i-1][j-w[i]] + v[i]);
        }
    }
    return dp[N][W];
}
This uses O(NW) space.
Method 2
You may notice that while calculating the entries of the matrix for a particular row, we're only looking at the previous row and not the rows before that. This can be exploited to maintain only 2 rows and keep swapping their roles as current & previous row.
int dp[2][MAXW];

int solve()
{
    memset(dp[0], 0, sizeof(dp[0]));
    for (int i = 1; i <= N; i++) {
        int *cur = dp[i&1], *prev = dp[!(i&1)];
        for (int j = 0; j <= W; j++) {
            cur[j] = (w[i] > j) ? prev[j] : max(prev[j], prev[j-w[i]] + v[i]);
        }
    }
    return dp[N&1][W];
}
This takes O(2W) = O(W) space. cur is the i-th row and prev is the (i-1)-th row.
Method 3
If you look again, you can see that while we're writing an entry in a row, we're only looking at entries to its left in the previous row. We can exploit this to keep a single row and process it right to left, so that while we're computing a new value for an entry, the entries to its left still hold their old values. This is the 1D table method.
int dp[MAXW];

int solve()
{
    memset(dp, 0, sizeof(dp));
    for (int i = 1; i <= N; i++) {
        for (int j = W; j >= 0; j--) {
            dp[j] = (w[i] > j) ? dp[j] : max(dp[j], dp[j-w[i]] + v[i]);
        }
    }
    return dp[W];
}
This also uses O(W) space, but with just a single row. The inner loop has to be reversed because when we use dp[j-w[i]], we need its value from the previous iteration of the outer loop; for this, the j values have to be processed from large to small.
Test case (from http://www.spoj.com/problems/PARTY/)
N = 10, W = 50
w[] = {0, 12, 15, 16, 16, 10, 21, 18, 12, 17, 18} // 1 based indexing
v[] = {0, 3, 8, 9, 6, 2, 9, 4, 4, 8, 9}
answer = 26
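For checking, here is a self-contained Java translation of Method 3 run on this test case. This is a sketch of mine, not code from the original post.

class Knapsack1D {
    public static void main(String[] args) {
        int W = 50;
        int[] w = {0, 12, 15, 16, 16, 10, 21, 18, 12, 17, 18}; // 1-based indexing
        int[] v = {0, 3, 8, 9, 6, 2, 9, 4, 4, 8, 9};
        int[] dp = new int[W + 1];
        for (int i = 1; i < w.length; i++) {
            // go right to left so dp[j - w[i]] still holds the value from row i-1
            for (int j = W; j >= w[i]; j--) {
                dp[j] = Math.max(dp[j], dp[j - w[i]] + v[i]);
            }
        }
        System.out.println(dp[W]); // prints 26
    }
}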
In many dynamic programming problems, you will build up a 2D table row by row where each row only depends on the row that immediately precedes it. In the case of the 0/1 knapsack problem, the recurrence (from Wikipedia) is the following:
m[i, w] = m[i-1, w]                                if w_i > w
m[i, w] = max(m[i-1, w], m[i-1, w - w_i] + v_i)    otherwise
Notice that all reads from the table when filling row i only come from row i - 1; the earlier rows in the table aren't actually used. Consequently, you could save space in the 2D table by only storing two rows - the immediately previous row and the row you're filling in. You can further optimize this down to just one row by being a bit more clever about how you fill in the table entries. This reduces the space usage from O(nW) (O(n) rows and O(W) columns) to O(W) (one or two rows and O(W) columns).
This comes at a cost, though. Many DP algorithms don't explicitly compute solutions as they go, but instead fill in the table, then do a second pass over the table at the end to recover the optimal answer. If you only store one row, then you will get the value of the optimal answer, but you might not know what that optimal answer happens to be. In this case, you could read off the maximum value that you can fit into the knapsack, but you won't necessarily be able to recover what you're supposed to do in order to achieve that value.
Hope this helps!
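One way to see the tradeoff concretely: with the full 2D table from Method 1 you can walk backwards from m[N][W] to recover which items were chosen, which the single-row version cannot do. A small Java sketch of that walk (my code, assuming the 1-based item convention used in the methods above):

static java.util.List<Integer> recoverItems(int[][] m, int[] w, int N, int W) {
    java.util.List<Integer> items = new java.util.ArrayList<>();
    int j = W;
    for (int i = N; i >= 1; i--) {
        if (m[i][j] != m[i - 1][j]) { // value differs from the row above => item i taken
            items.add(i);
            j -= w[i];
        }
    }
    return items;
}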
To answer your question: if we use 0-based indexing for the weights and values arrays, then the correct way to write the recurrence relation is:
dp[i][j] = (w[i-1] > j) ? dp[i-1][j] : max(dp[i-1][j], dp[i-1][j-w[i-1]] + v[i-1]);
Since i denotes the first i items, if i is 5 then the 5th item is located at position 4 of the weights and values arrays respectively, hence w[i-1] and v[i-1].
