Classical bottom-up iterative approach for 0-1 Knapsack variation - algorithm

I am not good at DP and am trying to improve with practice over time. I was trying to solve this problem.
Given the prices and favor-values of n items, determine the max value that
you can buy with pocket money m. The twist here is that once you
spend above $2000, you get a refund of $200, so effectively your
purchasing power becomes m+200.
I could solve this using a recursive approach, but I am more curious to solve it with the classical iterative bottom-up 0-1 knapsack approach given below, with some modification:
int Knapsack(vector<int> wt, vector<int> val, int cap)
{
    int i, j;
    int n = wt.size();
    // t is the DP table, assumed declared elsewhere with dimensions (n+1) x (cap+1)
    for (i = 0; i <= n; i++)
        t[i][0] = 0;
    for (j = 0; j <= cap; j++) // cap is the max allowed weight of the knapsack
        t[0][j] = 0;
    for (i = 1; i <= n; i++)
    {
        for (j = 1; j <= cap; j++)
        {
            if (wt[i-1] <= j)
                t[i][j] = max(val[i-1] + t[i-1][j - wt[i-1]], t[i-1][j]);
            else
                t[i][j] = t[i-1][j];
        }
    }
    return t[n][cap];
}
In the above, wt[] can become the price of each item and val[] the favor-value. The cap needs to become cap+200, but with some tricky checks that I couldn't figure out after trying for 2 days now. What modifications can I make here to take the extra $200 for shopping above $2000? I tried many approaches but failed. I searched the internet, but mostly there are recursive approaches, which I have already figured out, or they use the space-saving trick of a 1-D array. I want to modify the classical solution, so as to improve and verify my understanding. Can someone please give me a hint or direction?
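One way to keep the classical two-dimensional bottom-up shape is to make the table track the best value at an *exact* total cost (minus infinity when unreachable), build it up to m+200, and then only accept totals in the extra $200 band when they exceed $2000. A hedged Python sketch of that idea; the exact-cost trick and all names are my own choices, not from any reference solution:

```python
def knapsack_with_refund(price, val, m, threshold=2000, refund=200):
    """0-1 knapsack where t[i][c] = best value using the first i items
    at *exactly* total cost c (NEG when that cost is unreachable)."""
    n = len(price)
    cap = m + refund
    NEG = float("-inf")
    t = [[NEG] * (cap + 1) for _ in range(n + 1)]
    t[0][0] = 0
    for i in range(1, n + 1):
        for c in range(cap + 1):
            t[i][c] = t[i - 1][c]  # skip item i
            p = price[i - 1]
            if p <= c and t[i - 1][c - p] != NEG:
                t[i][c] = max(t[i][c], t[i - 1][c - p] + val[i - 1])
    # Feasible totals: c <= m always, and c in (threshold, m+refund]
    # because spending above the threshold earns the refund.
    best = max(t[n][c] for c in range(m + 1))
    for c in range(threshold + 1, cap + 1):
        best = max(best, t[n][c])
    return best
```

With m = 2000, an item set priced {1900, 200} is only affordable together because the 2100 total crosses the threshold and triggers the refund.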


Why should we use Dynamic Programming with Memoization in order to solve - Minimum Number of Coins to Make Change

The Problem Statement:
Given an infinite supply of coins of values {C1, C2, ..., Cn} and a sum X, find the minimum number of coins needed to make up X.
Most of the solutions on the web use dynamic programming with memoization. Here is an example from YouTube: https://www.youtube.com/watch?v=Kf_M7RdHr1M
My question is: why don't we sort the array of coins in descending order first and start exploring recursively by minimizing the sum until we reach 0? When we reach 0, we know that we have found the needed coins to make up the sum. Because we sorted the array in descending order, we know that we will always choose the greatest coin. Therefore, the first time the sum reaches down to 0, the count will have to be minimum.
I'd greatly appreciate it if you helped me understand the complexity of my algorithm and compared it to the dynamic programming with memoization approach.
For simplicity, we are assuming there will always be a "$1" coin and thus there is always a way to make up the sum.
import java.util.*;

public class Solution{
    public static void main(String[] args){
        MinCount cnt = new MinCount(new Integer[]{1, 2, 7, 9});
        System.out.println(cnt.count(12));
    }
}

class MinCount{
    Integer[] coins;

    public MinCount(Integer[] coins){
        Arrays.sort(coins, Collections.reverseOrder());
        this.coins = coins;
    }

    public int count(int sum){
        if(sum < 0) return Integer.MAX_VALUE;
        if(sum == 0) return 0;
        int min = Integer.MAX_VALUE;
        for(int i = 0; i < coins.length; i++){
            int val = count(sum - coins[i]);
            if(val < min) min = val;
            if(val != Integer.MAX_VALUE) break;
        }
        return min + 1;
    }
}
Suppose that you have coins worth $1, $50, and $52, and that your total is $100. Your proposed algorithm would produce a solution that uses 49 coins ($52 + $1 + $1 + … + $1 + $1); but the correct minimum result requires only 2 coins ($50 + $50).
(Incidentally, I think it's cheating to write
For simplicity we are assuming there will always be a "$1" coin and thus there is always a way to make up the sum.
when this is not in the problem statement, and therefore not assumed in other sources. That's a bit like asking "Why do sorting algorithms always put a lot of effort into rearranging the elements, instead of just assuming that the elements are in the right order to begin with?" But as it happens, even assuming the existence of a $1 coin doesn't let you guarantee that the naïve/greedy algorithm will find the optimal solution.)
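The $52/$50/$1 failure is easy to reproduce mechanically. A minimal sketch of the largest-coin-first strategy (the helper name is mine):

```python
def greedy_count(coins, total):
    """Always spend as many of the largest remaining coin as possible,
    then move to the next-largest: the greedy strategy under discussion."""
    count = 0
    for c in sorted(coins, reverse=True):
        count += total // c   # take as many of this coin as fit
        total %= c            # remainder still to be covered
    return count
```

Here greedy_count([1, 50, 52], 100) returns 49 (one $52, then forty-eight $1), while the true optimum is 2 coins ($50 + $50).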
I will complement the answer that has already been provided to your question with some algorithm design advice.
The solution that you propose is what is called a "greedy algorithm": a problem solving strategy that makes the locally optimal choice at each stage with the hope of finding a global optimum.
In many problems, a greedy strategy does not produce an optimal solution. The best way to disprove the correctness of an algorithm is to find a counter-example, such as the case of the "$52", "$50", and "$1" coins. To find counter-examples, Steven Skiena gives the following advice in his book "The Algorithm Design Manual":
Think small: when an algorithm fails, there is usually a very simple example on which it fails.
Hunt for the weakness: if the proposed algorithm is of the form "always take the biggest" (that is, a greedy algorithm), think about why that might prove to be the wrong thing to do. In particular, ...
Go for a tie: A devious way to break a greedy algorithm is to provide instances where everything is the same size. This way the algorithm may have nothing to base its decision on.
Seek extremes: many counter-examples are mixtures of huge and tiny, left and right, few and many, near and far. It is usually easier to verify or reason about extreme examples than more muddled ones.
# recursive solution in Python
import sys

class Solution:
    def __init__(self):
        self.ans = 0
        self.maxint = sys.maxsize - 1

    def minCoins(self, coins, m, v):
        res = self.solve(coins, m, v)
        if res >= self.maxint:  # the +1 propagation can push the sentinel past maxsize-1
            return -1
        return res

    def solve(self, coins, m, v):
        if m == 0 and v > 0:
            return self.maxint  # no coins left but a positive amount remains
        if v == 0:
            return 0
        if coins[m-1] <= v:
            self.ans = min(self.solve(coins, m, v - coins[m-1]) + 1,
                           self.solve(coins, m-1, v))
            return self.ans
        else:
            self.ans = self.solve(coins, m-1, v)
            return self.ans
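For contrast with the recursion above, the memoized dynamic-programming version the question asks about can be sketched like this (a minimal sketch; the names are mine):

```python
from functools import lru_cache

def min_coins(coins, total):
    """Minimum number of coins from `coins` summing to `total`, or -1 if
    impossible. Memoization solves each subtotal only once, so the run
    time is O(total * len(coins)) instead of exponential."""
    INF = float("inf")

    @lru_cache(maxsize=None)
    def best(rem):
        if rem == 0:
            return 0
        if rem < 0:
            return INF            # overshot: this branch is infeasible
        return min((best(rem - c) + 1 for c in coins), default=INF)

    res = best(total)
    return -1 if res == INF else res
```

On the counter-example above, min_coins((1, 50, 52), 100) correctly returns 2, where the greedy strategy returns 49.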

Finding the shortest path in a tree

I'm having trouble finding a solution to the following question.
Suppose a company needs to have a machine over the next five-year period. Each new machine costs $100,000. The annual cost of operating a machine during its ith year of operation is given as follows: C1 = $6,000, C2 = $8,000 and C3 = $12,000. A machine may be kept up to three years before being traded in; that is, a machine can be either kept or traded in during its first two years, and must be traded in when its age reaches three. The trade-in value after i years is t1 = $80,000, t2 = $60,000 and t3 = $50,000. How can the company minimize costs over the five-year period (year 0 to year 5), given that it will buy a new machine in year 0?
Devise an optimal solution based on dynamic programming.
This problem can be represented using a tree. Here's the diagram.
Now I think that finding the shortest path in the above tree will give me the optimal solution. But I have no idea how to do that. Here are my questions,
Is there a classic problem regarding this question? (Like Travelling salesman problem or Change-making problem)
If yes, what is it? What are the methods to solve it?
If not, how can I solve this problem?
Any other suggestions are also welcome.
Guys, I want some guidance and help with this question. (Do NOT think of this as a request to get my homework done for me.) I have found a full Java implementation for this question here, but it does not use dynamic programming to solve the problem. Thank you in advance.
The company wants to minimize cost over the 5-year period. In year 0 they buy a machine, and each year they must decide whether the machine is kept or traded. To arrive at an optimal solution, we make a set of choices at the end of each year, and as we make each choice, subproblems of the same form often arise.
Thus we arrive at a position where a given subproblem may arise from more than one partial set of choices. When devising an optimal solution based on dynamic programming, we can solve the problem by combining the solutions to subproblems.
Let the stages correspond to each year. The state is the age of the machine for that year. The decisions are whether to keep the machine or trade it in for a new one. Let Ft(x) be the minimum cost incurred from time t to time 5, given the machine is x years old in time t.
Base cases:
Since the machine must be traded in at the end of year 5: F5(x) = -S[x]
In year 0 we buy a new machine: F0(1) = N + M[1] + F1(0)
Outside the 5-year horizon, or when the machine is past its maximum age: 0
A machine kept for 3 years must be traded: Ft(3) = N + M[0] + F(t+1)(0), for t ≠ 0, 1, 2
def Fxy(self, time, age):
    if self.Matrix[time][age] is None:  # memoization: overlapping subproblems solved once
        if time > 5 or age > 2:
            return 0
        if time == 5:
            self.Matrix[time][age] = T = self.S[age]
            self.Flag[time][age] = 'TRADE'
        elif time == 0:
            self.Matrix[time][age] = K = self.N + self.M[0] + self.Fxy(time+1, time)
            self.Flag[time][age] = 'KEEP'
        elif time == 3 and age == 2:
            self.Matrix[time][age] = T = self.S[age] + self.N + self.M[0] + self.Fxy(time+1, 0)
            self.Flag[time][age] = 'TRADE'
        else:
            T = self.S[age] + self.N + self.M[0] + self.Fxy(time+1, 0)
            if age + 1 < len(self.Matrix[0]):
                K = self.M[age+1] + self.Fxy(time+1, age+1)
            else:
                K = self.M[age+1]
            self.Matrix[time][age] = min(T, K)
            if self.Matrix[time][age] == T and self.Matrix[time][age] == K:
                self.Flag[time][age] = 'TRADE OR KEEP'
            elif self.Matrix[time][age] == T:
                self.Flag[time][age] = 'TRADE'
            else:
                self.Flag[time][age] = 'KEEP'
        return self.Matrix[time][age]
    else:
        return self.Matrix[time][age]
An optimal solution can be recovered by drawing a decision tree that contains all possible paths and taking the minimum-cost path. We use a recursive algorithm that traverses each tree level and follows the decision recorded at the current point.
Ex: When it reaches F1(0), the decision 'TRADE OR KEEP' is bound to it, so we traverse both possible paths. When it reaches F2(1), which has a 'KEEP' decision, we recursively traverse F3(2), the right child. When 'TRADE' is met, we follow the left child, continuing until we reach the leaves.
def recursePath(self, x, y):
    if x == 5:
        self.dic[x].append(self.Flag[x][y])
        return self.Flag[x][y]
    else:
        if self.Flag[x][y] == 'TRADE OR KEEP':
            self.recursePath(x+1, y)
            self.recursePath(x+1, y+1)
        if self.Flag[x][y] == 'KEEP':
            self.recursePath(x+1, y+1)
        if self.Flag[x][y] == 'TRADE':
            self.recursePath(x+1, y)
        self.dic[x].append(self.Flag[x][y])
        return self.Flag[x][y]
If you want something like Dijkstra, why don't you just do Dijkstra? You'd need to change a few things in your graph interpretation, but it seems very much doable:
Dijkstra will settle nodes according to a minimum-cost criterion. Set that criterion to be "money lost by the company". You will also settle nodes in Dijkstra, and you need to determine what exactly a node will be. Consider a node to be a time plus a property state, e.g. year 4 with a working machine x years old. In year 0, the money lost will be 0 and you will have no machine. You then add all possible edges/choices/state transitions, here being 'buy a machine'. You end up with a new node on the Dijkstra PQ [year 1, working machine of age 1] with a certain cost.
From there on, you can always sell the machine (yielding a [year 1, no machine] node), buy a new machine ([year 1, new machine]), or continue with the same one ([year 2, machine of age 2]). You just continue to develop that shortest-path tree until you have everything you want for year 5 (or more).
You then have a set of nodes [year i, machine of age j]. To find the optimum for your company at year i, just look among all possibilities for it (I think it will always be [year i, no machine]) to get your answer.
As Dijkstra is a single-source shortest-path algorithm, it gives you the best path to every year.
Edit: some pseudocode for Java.
First, you should create a node object/class to hold your node information:
Node {
    int cost;
    int year;
    int ageOfMachine;
}
Then you could just add nodes and settle them. Make sure your PQ is sorting the nodes based on the cost field. Starting at the root:
PQ<Node> PQ = new PriorityQueue<Node>();
Node root = new Node(0, 0, -1); // 0 cost, year 0, no machine
PQ.offer(root);
int[] best = new int[years + 1];
// set best[0..years] equal to a very large negative number
while (!PQ.isEmpty()) {
    Node n = PQ.poll();
    int y = n.year;
    int a = n.ageOfMachine;
    int c = n.cost;
    if (already have a cost for year y and machine of age a) continue;
    else {
        add [year y, age a, cost c] to the list of settled nodes;
        // examine all possible further actions:
        // add nodes for trading the machine and for keeping it
        PQ.offer(new Node(c + profit of selling current - cost of new machine, y + 1, 1));
        PQ.offer(new Node(c, y + 1, a + 1)); // only if the machine can last an extra year
        // check whether you've found the best way of doing business in year y
        if (c + profit of selling current > best[y]) best[y] = c + profit of selling current;
    }
}
Something along those lines will give you the best plan for reaching year i with cost best[i].
I think I have found a simpler dynamic programming solution.
Let Cost(n) be the total cost when we sell at year n, and let the cost of keeping a machine for 1, 2 or 3 years be cost1, cost2, cost3 (26,000, 54,000 and 76,000 in this problem).
Then we can divide the problem into sub-problems like this:
Cost(n) = MIN( Cost(n-1)+cost1, Cost(n-2)+cost2, Cost(n-3)+cost3 )
So we can calculate it bottom-up, which is just O(n).
I have implemented and tested it using C :
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
struct costAndSell_S{
int lastSellYear;
int cost;
};
int operatCost[3]={6000,8000,12000};
int sellGain[3]={80000,60000,50000};
int newMachinValue=100000;
int sellCost[3];
struct costAndSell_S costAndSell[20];
void initSellCost(){
memset( costAndSell, 0, sizeof(costAndSell));
sellCost[0]=operatCost[0]+newMachinValue-sellGain[0];
sellCost[1]=operatCost[0]+operatCost[1]+newMachinValue-sellGain[1];
sellCost[2]=operatCost[0]+operatCost[1]+operatCost[2]+newMachinValue-sellGain[2];
costAndSell[0].cost=100000;
return;
}
int sellAt( int year ){
if ( year<0){
return(costAndSell[0].cost );
}
return costAndSell[year].cost;
}
int minCost( int i1, int i2, int i3 ){
if ( (i1<=i2) && (i1<=i3) ){
return(0);
}else if ( (i2<=i1) && (i2<=i3) ){
return(1);
}else if ( (i3<=i1) && (i3<=i2) ){
return(2);
}
}
void findBestPath( int lastYear ){
int i;
int rtn;
int sellYear;
for( i=1; i<=lastYear; i++ ){
rtn=minCost( sellAt(i-1)+sellCost[0], sellAt(i-2)+sellCost[1], sellAt(i-3)+sellCost[2]);
switch (rtn){
case 0:
costAndSell[i].cost=costAndSell[i-1].cost+sellCost[0];
costAndSell[i].lastSellYear=i-1;
break;
case 1:
costAndSell[i].cost=costAndSell[i-2].cost+sellCost[1];
costAndSell[i].lastSellYear=i-2;
break;
case 2:
costAndSell[i].cost=costAndSell[i-3].cost+sellCost[2];
costAndSell[i].lastSellYear=i-3;
break;
}
}
sellYear=costAndSell[lastYear].lastSellYear;
printf("sellAt[%d], cost[%d]\n", lastYear, costAndSell[lastYear].cost );
do{
sellYear=costAndSell[sellYear].lastSellYear;
printf("sellAt[%d], cost[%d]\n", sellYear, costAndSell[sellYear].cost );
} while( sellYear>0 );
}
void main(int argc, char * argv[]){
int lastYear;
initSellCost();
lastYear=atoi(argv[1]);
findBestPath(lastYear);
}
The output is :
sellAt[5], cost[228000]
sellAt[3], cost[176000]
sellAt[0], cost[100000]
I don't think dynamic programming applies here, because I can't find optimal substructure and overlapping subproblems in it.
The cost of year N depends on the behavior of years N-1, N-2 and N-3, so it is difficult to find an optimal substructure.
The link you provided is dynamic programming - and the code is pretty easy to read. I'd recommend taking a good look at the code to see what it is doing.
The following method might work.
At the end of each year, you have two choices: you can either trade the machine or pay the maintenance cost for another year. The exception is the 3rd year, where you don't have a choice: you have to trade for sure.
This can be solved by a recursive method that chooses the minimum cost between trading and not trading in a particular year.
A table indexed by (offset, nth year without trading) can be maintained so that values need not be recalculated.
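The table described above can be sketched with memoized recursion in Python. The state is (year, machine age); the constants are the figures from the problem statement, and the function name is mine:

```python
from functools import lru_cache

N = 100_000                      # price of a new machine
M = [6_000, 8_000, 12_000]       # operating cost during year i+1 of ownership
S = [80_000, 60_000, 50_000]     # trade-in value after i+1 years of ownership

@lru_cache(maxsize=None)
def cost(year, age):
    """Minimum cost from `year` to year 5, holding a machine `age` years old."""
    if year == 5:
        return -S[age - 1]       # horizon: the machine is traded in
    # Trade: collect the trade-in value, buy new, operate it one year.
    trade = -S[age - 1] + N + M[0] + cost(year + 1, 1)
    # Keep (only while under 3 years old): pay next year's operating cost.
    keep = M[age] + cost(year + 1, age + 1) if age < 3 else float("inf")
    return min(trade, keep)

# Buy new in year 0 and operate it through year 1, then decide each year.
total = N + M[0] + cost(1, 1)
```

This evaluates to a total of 128,000 (one optimal plan: keep 3 years, trade, keep 1 year, trade, keep 1 year, sell at year 5). That appears consistent with the C solution's 228,000 above, whose sellCost accounting folds one extra new-machine purchase (another 100,000) into the final sell step.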

Forming Dynamic Programming algorithm for a variation of Knapsack Problem

I was thinking,
I wanted to do a variation on the Knapsack Problem.
Imagine the original problem, with items with various weights/value.
My version will, along with having the normal weights/values, contain a "group" value.
eg.
Item1[5kg, $600, electronic]
Item2[1kg, $50, food]
Now, given a set of items like this, how would I code up the knapsack problem to make sure that at most one item from each "group" is selected?
Notes:
You don't need to choose an item from that group
There are multiple items in each group
You're still minimizing weight, maximizing value
The amount of groups are predefined, along with their values.
I'm just writing a draft of the code out at this stage, and I've chosen to use a dynamic approach. I understand the idea behind the dynamic solution for the regular knapsack problem, how do I alter this solution to incorporate these "groups"?
KnapSackVariation(v, w, g, n, W)
{
    for (j = 0 to W)
        V[0, j] = 0;
    for (i = 1 to n)
        for (j = 0 to W)
            if (w[i] <= j)
                V[i, j] = max{V[i-1, j], v[i] + V[i-1, j - w[i]]};
            else
                V[i, j] = V[i-1, j];
    return V[n, W];
}
That's what I have so far; I still need to make it remove all the other items of a group once one of them is taken.
I just noticed your question while trying to find an answer to one of my own. The problem you've stated is a well-known and well-studied problem called the Multiple Choice Knapsack Problem (MCKP). If you google that you'll find all sorts of information, and I can also recommend this book: http://www.amazon.co.uk/Knapsack-Problems-Hans-Kellerer/dp/3642073115/ref=sr_1_1?ie=UTF8&qid=1318767496&sr=8-1, which dedicates a whole chapter to the problem. In the classic formulation of MCKP you have to choose exactly one item from each group, but you can easily convert your version to that one by adding a dummy item with profit and weight 0 to each group, and the same algorithms will work. I would caution you against trying to adapt code for the binary knapsack problem to the MCKP with a few tweaks -- this approach is likely to lead you to a solution whose performance degrades unacceptably as the number of items in each group increases.
Assume:
c[i] : the category of the ith element
V[i,w,S] : the maximum value of the knapsack using the first i items, such that it contains at most one item from each category in S
Recursive formulation:
V[i,w,S] = max(V[i-1,w,S], V[i,w-w[i],S-{c[i]}] + v[i])
Base case:
V[0,w,S] = -infinity if w != 0 or S != {}, and 0 otherwise
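Concretely, the layer-per-group DP (with the dummy "take nothing" item the MCKP answer mentions, here simply the untouched copy of the previous layer) can be sketched in Python; the names are mine:

```python
def grouped_knapsack(groups, capacity):
    """0-1 knapsack picking at most one item per group.
    groups: list of groups, each a list of (weight, value) pairs."""
    dp = [0] * (capacity + 1)    # dp[w] = best value with weight <= w
    for group in groups:
        new = dp[:]              # baseline: skip this group entirely
        for wt, val in group:
            # Candidates always read from dp (the pre-group layer), so
            # no two items of the same group can be combined.
            for w in range(wt, capacity + 1):
                if dp[w - wt] + val > new[w]:
                    new[w] = dp[w - wt] + val
        dp = new
    return dp[capacity]
```

For example, with groups [[(5, 600), (3, 400)], [(1, 50)]] and capacity 6, the best plan takes the 5 kg/$600 item plus the 1 kg/$50 item, for 650.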

Algorithm to check if a number is a perfect number

I am looking for an algorithm to find if a given number is a perfect number.
The most simple that comes to my mind is :
Find all the factors of the number
Get the proper divisors (excluding the number itself) and add them up to check if it is a perfect number.
Is there a better way to do this?
On searching, some of Euclid's work came up, but I didn't find any good algorithm. This golfscript wasn't helpful either: https://stackoverflow.com/questions/3472534/checking-whether-a-number-is-mathematically-a-perfect-number .
The results can be cached in real-world usage [though I don't know where perfect numbers are used :)]
However, since this is asked in interviews, I am assuming there should be a "derivable" way of optimizing it.
Thanks!
If the input is even, see if it is of the form 2^(p-1)*(2^p-1), with p and 2^p-1 prime.
If the input is odd, return "false". :-)
See the Wikipedia page for details.
(Actually, since there are only 47 perfect numbers with fewer than 25 million digits, you might start with a simple table of those. Ask the interviewer if you can assume you are using 64-bit numbers, for instance...)
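That even-number test is easy to sketch in Python; here with plain trial division for the primality check, which is only adequate for small inputs (serious Mersenne-prime testing uses Lucas-Lehmer):

```python
def is_prime(m):
    """Trial-division primality test; fine for small m only."""
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

def is_perfect_euler(n):
    """Even perfect numbers are exactly 2**(p-1) * (2**p - 1) with
    2**p - 1 prime (Euclid-Euler); no odd perfect number is known."""
    if n <= 0 or n % 2 == 1:
        return False
    p = 1
    while n % 2 == 0:            # strip the power of two: n = 2**(p-1) * m
        n //= 2
        p += 1
    return n == 2**p - 1 and is_prime(n)
```

For example, 496 = 2^4 * 31 with 31 = 2^5 - 1 prime, so it passes; 12 = 2^2 * 3 fails because 3 != 2^3 - 1.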
Edit: Dang, I failed the interview! :-(
In my overzealous attempt at finding tricks or heuristics to improve upon the "factorize + enumerate divisors + sum them" approach, I failed to note that being 1 modulo 9 is merely a necessary, and certainly not a sufficient, condition for a number (other than 6) to be perfect...
Duh... with on average 1 in 9 even numbers satisfying this condition, my algorithm would surely find a few too many perfect numbers ;-).
To redeem myself, I keep the suggestion of using the digital root, but only as a filter, to avoid the more expensive factorization in most cases.
[Original attempt: hall of shame]
If the number is even,
    compute its digital root.
    If the digital root is 1, the number is perfect; otherwise it isn't.
If the number is odd...
    there are no shortcuts, other than...
    "Not perfect" if the number is smaller than 10^300
    For bigger values, one would then need to run the algorithm described in
    the question, possibly with a few twists typically driven by heuristics
    that prove that the sum of divisors will be lacking when the number
    doesn't have some of the low prime factors.
My reason for suggesting the digital-root trick for even numbers is that it can be computed without the help of an arbitrary-precision arithmetic library (like GMP). It is also much less computationally expensive than the decomposition into prime factors and/or the factorization 2^(p-1) * (2^p - 1). Therefore, if the interviewer were satisfied with a "Not perfect" response for odd numbers, the solution would be both very efficient and codable in most computer languages.
[Second and third attempt...]
If the number is even,
    if it is 6
        the number is PERFECT
    otherwise compute its digital root.
        if the digital root is _not_ 1
            the number is NOT PERFECT
        else
            compute the prime factors
            enumerate the divisors and sum them
            if the sum of these divisors equals 2 * the number
                it is PERFECT
            else
                it is NOT PERFECT
If the number is odd...
    same as previously
On this relatively odd interview question...
I second andrewdski's comment to another response in this post, that this particular question is rather odd in the context of an interview for a general purpose developer. As with many interview questions, it can be that the interviewer isn't seeking a particular solution, but rather is providing an opportunity for the candidate to demonstrate his/her ability to articulate the general pros and cons of various approaches. Also, if the candidate is offered an opportunity to look-up generic resources such as MathWorld or Wikipedia prior to responding, this may also be a good test of his/her ability to quickly make sense of the info offered there.
Here's a quick algorithm just for fun, in PHP, using just a simple for loop. You can easily port it to other languages:
function isPerfectNumber($num) {
    $out = false;
    if ($num % 2 == 0) {
        $divisors = array(1);
        for ($i = 2; $i < $num; $i++) {
            if ($num % $i == 0)
                $divisors[] = $i;
        }
        if (array_sum($divisors) == $num)
            $out = true;
    }
    return $out ? 'It\'s perfect!' : 'Not a perfect number.';
}
Hope this helps, not sure if this is what you're looking for.
#include <stdio.h>
#include <stdlib.h>

int sumOfFactors(int);

int main() {
    int x, start, end;
    printf("Enter start of the range:\n");
    scanf("%d", &start);
    printf("Enter end of the range:\n");
    scanf("%d", &end);
    for (x = start; x <= end; x++) {
        if (x == sumOfFactors(x)) {
            printf("The number %d is a perfect number\n", x);
        }
    }
    return 0;
}

int sumOfFactors(int x) {
    int sum = 1, j;
    for (j = 2; j <= x / 2; j++) {
        if (x % j == 0)
            sum += j;
    }
    return sum;
}

Sudoku Solver by Backtracking not working

Assuming a two-dimensional array holding a 9x9 sudoku grid, where is my solve function breaking down? I'm trying to solve this using a simple backtracking approach. Thanks!
bool solve(int grid[9][9])
{
    int i, j, k;
    bool isSolved = false;
    if(!isSolved(grid))
        isSolved = false;
    if(isSolved)
        return isSolved;
    for(i = 0; i < 9; i++)
    {
        for(j = 0; j < 9; j++)
        {
            if(grid[i][j] == 0)
            {
                for(k = 1; k <= 9; k++)
                {
                    if(legalMove(grid, i, j, k))
                    {
                        grid[i][j] = k;
                        isSolved = solve(grid);
                        if(isSolved)
                            return true;
                    }
                    grid[i][j] = 0;
                }
                isSolved = false;
            }
        }
    }
    return isSolved;
}
Even after fixing the isSolved issues, my solution seems to break down into an infinite loop. It appears I'm missing some base-case step, but I'm not sure where or why. I have looked at similar solutions and still can't identify the issue. I'm just trying to create a basic solver, no need to go for efficiency. Thanks for the help!
Yeah, your base case is messed up. In recursive functions, base cases should be handled at the start. You've got

bool isSolved = false;
if(!isSolved(grid))
    isSolved = false;
if(isSolved)
    return isSolved;

Notice that your isSolved variable can never be set to true, so your code

if(isSolved)
    return isSolved;

is irrelevant.
Even if you fix this, it's going to feel like an infinite loop even though it is finite. This is because your algorithm has up to 9*9*9 = 729 cases to check every time it calls solve. Entering this function n times may require up to 729^n cases to be checked. It won't check that many cases in practice, because it finds dead ends when a placement is illegal, but who's to say that 90% of the arrangements of the possible numbers don't result in cases where all but one number fit legally? Moreover, even if you were to check only k cases on average, where k is a small number (k <= 10), this would still blow up (a run time of k^n).
The trick is to "try" placing numbers where they will likely result in a high probability of being the actual good placement. Probably the simplest way I can think of doing this is a constraint satisfaction solver, or a search algorithm with a heuristic (like A*.)
I actually wrote a sudoku solver based on a constraint satisfaction solver and it would solve 100x100 sudokus in less than a second.
If by some miracle the "brute force" backtracking algorithm works well for you in the 9x9 case, try higher values; you will quickly see a deterioration in run time.
I'm not bashing the backtracking algorithm; in fact I love it. It's been shown time and time again that backtracking, implemented correctly, can be just as efficient as dynamic programming. However, in your case you aren't implementing it correctly. You are brute-forcing it; you might as well just make your code non-recursive, it will accomplish the same thing.
You refer to isSolved as both a function and a boolean variable.
I don't think this is legal, and it's definitely not smart.
Your functions should have names distinct from your variables.
It seems that regardless of whether or not the move is legal, you are assigning 0 to the square with that "grid[i][j] = 0;" line. Maybe you meant to put "else" and THEN "grid[i][j] = 0;"?
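Putting the answers together, a corrected backtracking solver can be sketched in Python (a sketch, not the OP's C++; the solved/failed states are handled by return values instead of a flag, the cell is reset only after a failed guess, and the function returns false as soon as some empty cell admits no digit, which is the missing backtrack step; `legal_move` re-implements the usual row/column/box check the question's legalMove presumably does):

```python
def solve(grid):
    """Fill the 9x9 grid in place; 0 marks an empty cell."""
    for i in range(9):
        for j in range(9):
            if grid[i][j] == 0:
                for k in range(1, 10):
                    if legal_move(grid, i, j, k):
                        grid[i][j] = k
                        if solve(grid):
                            return True
                        grid[i][j] = 0   # undo the failed guess
                return False             # no digit fits here: backtrack
    return True                          # no empty cells: solved

def legal_move(grid, i, j, k):
    """True if k can be placed at (i, j): not in the row, column, or box."""
    if any(grid[i][c] == k for c in range(9)):
        return False
    if any(grid[r][j] == k for r in range(9)):
        return False
    bi, bj = 3 * (i // 3), 3 * (j // 3)
    return all(grid[bi + r][bj + c] != k for r in range(3) for c in range(3))
```

Without the `return False` after the digit loop, the scan would simply move on to other cells, which is what made the original feel like an infinite loop.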
