This question is not limited to Sudoku, but may include Kakuro, Hitori, Nurikabe, etc.
I understand the algorithm to solve Sudoku and other similar puzzles, but I'm having a hard time figuring out how to create them.
Say I want a Sudoku generator (to take the most popular). I guess it needs to work in two steps:
Create a valid solution
Remove parts of the solution until the desired number of clues is left.
Creating a solution isn't trivial: filling the grid randomly usually works fine until the last few steps, where you end up in a deadlock.
Removing parts of the solution requires making sure you remove only redundant clues, which isn't trivial either.
Is there a generic algorithm to work it out? How can I implement such a thing?
I understand my question is "broad" and that I don't show a lot of what I've got so far (splitting the problem in two), but I don't have any lead to start thinking about the algorithm. I'm not asking for a solution, but rather for hints on how to begin.
You could in general approach this as follows:
Define a set of rules which can assist a human in progressing in a game. For instance, in Sudoku, one of those rules could be:
Call the "field of influence" of a given cell, the cells that are either in the same row, the same column or in the same 3x3 block as that given cell. The rule is that this cell cannot have any of the values that are already used in its field of influence. If that means there is only one valid value left, then place that value in this cell.
Another rule could be:
If there is a value that cannot be used anywhere else in the same 3x3 block, then place that value in this cell. Similarly if a value cannot be used anywhere else in the cell's row; or cannot be used anywhere else in the cell's column.
There are obviously other rules. These rules can be more complicated. Rank the rules by how difficult it is for a human to verify and apply them. Try to be as complete as you can, by looking at how you, as a human, reason when solving the game. Implement these rules as functions in the program. In the Sudoku example, such a rule function can be applied to a given cell, and return success (i.e. the cell gets a value) or failure (the rule cannot be used to deduce its value).
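For illustration, here is a minimal sketch of the first rule as such a function. It's in Python and assumes a plain 9x9 list-of-lists grid where 0 means "empty" (that representation is my own choice, not part of the answer):

def naked_single(grid, row, col):
    # Rule: if the cell's "field of influence" leaves exactly one candidate
    # value, place it. Returns True on success, False on failure.
    if grid[row][col] != 0:
        return False
    used = set(grid[row])                                  # same row
    used.update(grid[r][col] for r in range(9))            # same column
    br, bc = 3 * (row // 3), 3 * (col // 3)                # top-left of the 3x3 block
    used.update(grid[r][c] for r in range(br, br + 3)
                           for c in range(bc, bc + 3))
    candidates = set(range(1, 10)) - used
    if len(candidates) == 1:
        grid[row][col] = candidates.pop()
        return True
    return False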
Let's say the program should generate a Sudoku of a given difficulty. We will interpret that to mean that solving the Sudoku will require the player to use at least once a rule that has at least that difficulty, or an exotic rule that was never foreseen.
Now start from a solved Sudoku. Randomly remove 50% of the values. Check if the Sudoku can be solved using only known rules within the difficulty range. If not, restore 25% of the removed cells and repeat. If it could be solved, remove 25% more cells at random. Continue halving the number of cells involved (either restoring or removing them), much like a binary search, until the search bottoms out. For a Sudoku game, this process takes about 7 iterations. You will then have a kind of "local minimum", where the rules can be applied to get to a solution.
This is far from perfect, as it could well be there is some other cell that could be cleared, while still allowing the rules to work towards a solution. So, if you want to refine this search, you could add some additional iterations to remove random cells as long as the resulting board can still be solved by applying the rules.
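To make the halving process concrete, here is a rough Python sketch. The solves_with_rules(board, max_difficulty) helper is hypothetical - it stands for whatever solver you build from the rule functions above:

import random

def carve_puzzle(solved_board, max_difficulty, solves_with_rules):
    # Binary-search style removal: clear about half the cells, then each round
    # either clear more cells or restore some, depending on whether the
    # rule-based solver still succeeds. Halve the step size every round.
    board = [row[:] for row in solved_board]
    cells = [(r, c) for r in range(9) for c in range(9)]
    random.shuffle(cells)
    removed = []                       # cells currently cleared, with their values
    step = len(cells) // 2
    while step > 0:
        if solves_with_rules(board, max_difficulty):
            for _ in range(min(step, len(cells))):          # still solvable: clear more
                r, c = cells.pop()
                removed.append((r, c, board[r][c]))
                board[r][c] = 0
        else:
            for _ in range(min(step, len(removed))):        # too hard: restore some
                r, c, v = removed.pop()
                board[r][c] = v
        step //= 2
    return board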
You could create a Sudoku puzzle by solving one that starts with nothing assigned. If your solver progresses by filling in squares, you could "remove" them by stopping (or rolling back) that process at the appropriate point.
Foreword: I am aware there is another question like this; however, mine has very specific restrictions. I have done my best to make this question applicable to many, as it is a generic grid issue, but if it still does not belong here, then I am sorry, and please be nice about it. I have found Stack Overflow in the past to be a very picky and hostile environment toward question askers, but I'm hoping that was just a couple of bad people.
Goal (abstract): Check all connected grid squares in a 3D grid that are of the same type and touching on one face.
Goal (specific/implementation): Create a "fill bucket" tool in Minecraft with command blocks.
Knowledge of Minecraft is not really necessary to answer; this is more of an algorithm question, and I will be staying away from Minecraft specifics.
Restrictions: I could do this in ordinary code with recursive functions, but in Minecraft there are some limitations I am wondering if it is possible to get around.
1: No arrays (the data structure) permitted. I can store an integer variable and do basic calculations with it (+, -, *, /, % (mod), =, ==), but that's it. I cannot dynamically create variables or have the program create anything with a name that I did not set out ahead of time.
2: I can do IF and OR statements, and everything that derives from them.
3: I CANNOT have multiple program pointers - that is, I can't have things like recursive functions, which require a program to stop executing, execute itself from beginning to end, and then resume executing where it was - I have minimal control over the program flow. I can use loops and conditional exits (so FOR loops).
4: I can have a marker on the grid in 3D space that can move regardless of the presence of blocks (I'm using an armour stand, for those who know), and I can test grid squares relative to that marker.
So say my grid is full of empty spaces only. There are separate clusters of filled squares in opposite corners, not touching each other. If I "use" my fillbucket tool on one block / filled grid square, I want it to use a single marker to check and identify all the connected grid squares - basically, I need to be sure that it traverses the entire shape, all the nooks and crannies, but not the squares that are not connected to that shape. So in the end, one of the two clusters, from me only selecting a single square of it, will be erased/replaced by another kind of block, without affecting the other blocks around it.
Again, apologies if this doesn't belong here. And only answer this if you WANT to tackle the challenge - it's not important or anything, I just want to do this. You don't have to answer it if you don't want to. Or if you can solve this problem for a 2D grid, that would be helpful as well, as I could possibly extend that to work for 3D.
Thank you, and if I get nobody degrading me for how I wrote this post or the fact that I did, then I will consider this a success :)
With help from this and other sources, I figured it out! It turns out that, since all recursive functions (or at least most of them) can be written as FOR loops, I can make a recursive function in Minecraft. So I did, and the general idea of it is as follows:
For explaining the program, you may assume the situation is a largely empty grid with a grouping of filled squares in one part of it, and the goal is to replace the kind of block that grouping is made of with a different block. We'll say the grouping currently consists of red blocks, and we want to change them to blue blocks.
Initialization:
IDs - An objective (data structure) for holding each marker's ID (score)
numIDs - An integer variable for holding number of IDs/markers active
Create one marker at selected grid position with ID [1] (aka give it a score of 1 in the "IDs" objective). This grid position will be a filled square from which to start replacing blocks.
Increment numIDs
Main program:
FOR loop that goes from 1 to numIDs
{
at marker with ID [1], fill grid square with blue block
step 1. test block one to the +x for a red block
step 2. if found, create marker there with ID [numIDs + 1]
step 3. increment numIDs
[//repeat steps 1 2 and 3 for the other five adjacent grid squares: +z, -x, -z, +y, and -y]
delete stand[1]
numIDs -= 1
subtract 1 from every marker's ID, so that the next marker to evaluate, which was [2], now has ID [1].
} (end loop)
So that's what I came up with, and it works like a charm. Sorry if my explanation is hard to understand, I'm trying to explain in a way that might make sense to both coders and Minecraft players, and maybe achieving neither :P
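For readers who are not constrained by Minecraft: the same idea in ordinary code is a breadth-first flood fill with an explicit queue standing in for the markers. A small Python sketch, assuming the 3D grid is a dict from (x, y, z) to a block name (my representation, not the asker's setup):

from collections import deque

def fill_bucket(grid, start, new_block):
    # Replace every block face-connected to `start` that has the same type
    # as the block at `start`. The queue plays the role of the marker list.
    target = grid.get(start)
    if target is None or target == new_block:
        return
    queue = deque([start])
    grid[start] = new_block
    while queue:
        x, y, z = queue.popleft()
        for dx, dy, dz in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
            n = (x + dx, y + dy, z + dz)
            if grid.get(n) == target:
                grid[n] = new_block     # writing the new block doubles as "visited"
                queue.append(n)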
It's homework. I have to design a Lights Out game using backtracking; the description is below.
The game consists of a 5-by-5 grid of lights; when the game starts, a set of these lights (random, or one of a set of stored puzzle patterns) are switched on. Pressing one of the lights will toggle it, and the four lights adjacent to it, on and off. (Diagonal neighbours are not affected.) The game provides a puzzle: given some initial configuration where some lights are on and some are off, the goal is to switch all the lights off, preferably in as few button presses as possible.
My approach is to go from 1 to 25 and check if all the lights are off or not. If not, then I check from 1 to 24, and so on, until I reach 1 or find a solution. Now if there is no solution, then I start from 2 to 24 and follow the above process until I reach 2 or find a solution.
But with this I am not getting the result - for example, when the lights at (0,0), (1,1), (2,2), (3,3) and (4,4) are ON.
If anyone needs the code, I can post it.
Can anyone tell me the correct approach to solving this game using backtracking?
Thanks.
There is a standard algorithm for solving this problem that is based on Gaussian elimination over GF(2). The idea is to set up a matrix representing the button presses and a column vector representing the lights, and then to use standard matrix simplification techniques to determine which buttons to press. It runs in polynomial time and does not require any backtracking.
I have an implementation of this algorithm that includes a mathematical description of how it works available on my personal site. I hope you find it useful!
Edit: If you are forced to use backtracking, you can use the following facts to do so:
Any solution will never push the same button twice, since doing so would cancel out a previous move.
Any solution either pushes the first button or does not.
Given this approach, you could solve this using backtracking using a simple recursive algorithm that keeps track of the current state of the board and which buttons you've already made decisions about:
If you've decided about each button, then return whether the board is solved or not.
Otherwise:
Try pushing the next button and seeing if the board is recursively solvable from there.
If so, return success.
Otherwise, try not pushing the next button and seeing if the board is recursively solvable from there.
If so, return success. If not, return failure.
This will explore a search space of size 2^25, which is about 32 million. That's big, but not insurmountably big.
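A minimal sketch of that recursion in Python, assuming the board is a 5x5 list of lists of 0/1 (the helper names are mine):

def press(board, r, c):
    # Toggle the light at (r, c) and its orthogonal neighbours.
    for dr, dc in ((0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)):
        rr, cc = r + dr, c + dc
        if 0 <= rr < 5 and 0 <= cc < 5:
            board[rr][cc] ^= 1

def solve(board, index=0, presses=None):
    # Decide button `index` (0..24): either press it or don't, then recurse.
    # Returns a list of pressed buttons, or None if no solution exists.
    if presses is None:
        presses = []
    if index == 25:
        return list(presses) if all(v == 0 for row in board for v in row) else None
    r, c = divmod(index, 5)
    press(board, r, c)                        # branch 1: press this button
    presses.append((r, c))
    result = solve(board, index + 1, presses)
    if result is not None:
        return result
    presses.pop()
    press(board, r, c)                        # undo - pressing twice cancels out
    return solve(board, index + 1, presses)   # branch 2: don't press it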
Hope this helps!
Backtracking means:
Incrementally build a solution, throwing away impossible solutions.
Here is one approach using the fact that there is locality of inputs and outputs (pressing a button only affects the squares around it).
problem = GIVEN
solutions = [[0],[1]] // array of binary matrix answers (two entries, a zero and a one)
for (problemSize = 1; problemSize <= 5; problemSize++) {
newSolutions = [];
foreach (solutions as oldSolution) {
candidateSolutions = arrayOfNByNMatriciesWithMatrixAtTopLeft(min(5,problemSize+1), oldSolution);
// size of candidateSolutions is 2^((problemSize+1)^2 - problemSize^2)
// except last round candidateSolutions == solutions
foreach (candidateSolutions as candidateSolution) {
candidateProblem = boardFromPressingButtonsInSolution(candidateSolution);
if (compareMatrix(problem, candidateProblem, 0, 0, problemSize, problemSize)==0)
newSolutions[] = candidateSolution;
}
}
solutions = newSolutions;
}
return solutions;
As already suggested, you should first form a set of simultaneous equations.
The first thing to note is that a particular light button will be pressed at most once, because it does not make sense to toggle the same set of lights twice.
Let Aij = Light ij Toggled { Aij = 0 or 1 }
There shall be 25 such variables.
Now for each of the lights you can form an equation of the form
summation (Aij) ≡ Imn (mod 2), where the sum runs over the (up to 5) buttons ij that toggle light mn, and Imn is 1 if light mn is initially on.
So you will have 25 equations in 25 unknowns. You can solve these equations simultaneously.
If you need to solve it using backtracking or recursion, you can solve the equations that way: assume initial values for the variables, see if they satisfy all the equations, and if not, backtrack.
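If you prefer to solve the equations directly, here is a compact sketch of Gaussian elimination over GF(2), as another answer above mentions. This version is my own illustration, with each equation stored as a bitmask (bits 0-24 are button coefficients, bit 25 is the right-hand side):

def solve_lights_out(initial):
    # initial: 5x5 list of 0/1 (1 = light on). Returns a 5x5 press matrix,
    # or None if the configuration is unsolvable.
    n = 25
    rows = []
    for i in range(n):
        r, c = divmod(i, 5)
        coeffs = 0
        for dr, dc in ((0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < 5 and 0 <= cc < 5:
                coeffs |= 1 << (rr * 5 + cc)        # buttons that toggle light i
        rows.append(coeffs | (initial[r][c] << n))  # right-hand side in bit 25
    pivot_row, pivot_cols = 0, []
    for col in range(n):                            # Gauss-Jordan elimination mod 2
        sel = next((i for i in range(pivot_row, n) if (rows[i] >> col) & 1), None)
        if sel is None:
            continue
        rows[pivot_row], rows[sel] = rows[sel], rows[pivot_row]
        for i in range(n):
            if i != pivot_row and (rows[i] >> col) & 1:
                rows[i] ^= rows[pivot_row]
        pivot_cols.append(col)
        pivot_row += 1
    if any(rows[i] == 1 << n for i in range(pivot_row, n)):
        return None                                 # a "0 = 1" row: inconsistent system
    presses = [[0] * 5 for _ in range(5)]
    for i, col in enumerate(pivot_cols):            # free variables default to 0
        presses[col // 5][col % 5] = (rows[i] >> n) & 1
    return presses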
The Naive Solution
First, you're going to need a way to represent the state of the board and a stack to store all of the states. At each step, make a copy of the board, changed to the new state. Compare that state to all states of the board you've encountered so far. If you haven't seen it, push that state on top of the stack and move on to the next move. If you have seen it, try the next move. Each level will have to try all possible 64 moves before popping the state from the stack (backtracking). You will want to use recursion to manage the state of the next move to check.
There are at most 2^64 possible board configurations, meaning you could potentially go on a very long chain of unique states and still run out of memory. (For reference, 1 GB is 2^30 bytes and you need a minimum of 8 bytes to store the board configuration.) This algorithm is not likely to terminate in the lifetime of the known universe.
You need to do something clever to reduce your search space...
Greedy-First Search
You can do better by searching the states that are closest to the solved configuration first. At each step, sort the possible moves in order from most lights off to least lights off. Iterate in that order. This should work reasonably well but is not guaranteed to get the optimal solution.
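A sketch of the move ordering, representing the board as a frozenset of lit cells (this representation and the `toggles` table are my own assumptions, not part of the answer):

def lit_after(board, move, toggles):
    # board: frozenset of lit cells; toggles[move]: frozenset of cells flipped by `move`.
    return board.symmetric_difference(toggles[move])

def greedy_search(board, toggles, seen=None):
    # Depth-first search that tries the moves leaving the fewest lit cells first.
    # Returns a list of moves, or None if this branch is exhausted.
    if seen is None:
        seen = set()
    if not board:
        return []
    if board in seen:
        return None
    seen.add(board)
    for move in sorted(toggles, key=lambda m: len(lit_after(board, m, toggles))):
        rest = greedy_search(lit_after(board, move, toggles), toggles, seen)
        if rest is not None:
            return [move] + rest
    return None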
Not All Lights-Out Puzzles Are Solvable
No matter what algorithm you use, there may not be a solution, meaning you might search forever (or several trillion years, at least) without finding a solution.
You will want to check the board for solvability (which is a much faster algorithm as it turns out) before wasting any time trying to find a solution.
I am trying to build some rules for a cellular automaton. In every cell I don't have a single element (predator/prey); I have a population count. To achieve movement between my populations, can I compare every cell with just one of its neighbours each time, or do I have to compare the cell with all of its neighbours and add some conditions?
I am using Moore neighbourhood with the following update function
update[site,_,_,_,_,_,_,_,_]
I tried to make them move according to all of their neighbours, but it's very complicated, and I am wondering whether it would be wrong to simplify it and check each cell against its neighbours individually.
Thanks
As general advice, I would recommend against going down the pattern-matching route in Mathematica for specifying the rule table of a CA; it tends to get out of hand very quickly.
Doing a predator-prey kind of simulation with CA is a little tricky since in each step, (unlike in traditional CA) the value of the center cell changes ALONG WITH THE VALUE OF THE NEIGHBOR CELL!
This will lead to issues since when the transition function is applied on the neighbor cell it will again compute a new value for itself, but it also needs to "remember" the changes done to it previously when it was a neighbor.
In fluid dynamics simulations using CA, they encounter problems like this and they use a different neighborhood called Margolus neighborhood. In a Margolus neighborhood, the CA lattice is broken into distinct blocks and the update rule applied on each block. In the next step the block boundaries are changed and the transition rules applied on the new boundary, hence information transfer occurs across block boundaries.
Before I give my best attempted answer, you have to realize that your question is oddly phrased. The update rules of your cellular automaton are specified by you, so I wouldn't know whether you have additional conditions you need to implement.
I think you're asking what is the best way to select a neighborhood, and you can do this with Part:
(* We abbreviate 'nbhd' for neighborhood *)
getNbhd[A_, i_Integer?Positive, j_Integer?Positive] :=
A[[i - 1 ;; i + 1, j - 1 ;; j + 1]];
This will select the appropriate Moore neighborhood, including the additional central cell, which you can filter out when you call your update function.
Specifically, to perform the update step of a cellular automaton, one must update all cells simultaneously. In the real world, this means creating a separate array, and placing the updated values there, scrapping the original array afterward.
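Concretely, the double-buffered update looks like this (a Python sketch rather than Mathematica, for a generic 2D CA where the caller supplies the per-cell transition function):

def step(grid, update_cell):
    # One synchronous CA step: read only from `grid`, write only to `new`.
    # update_cell(grid, i, j) computes the next value of cell (i, j) from the
    # old grid and may inspect any neighbours it likes.
    rows, cols = len(grid), len(grid[0])
    new = [[update_cell(grid, i, j) for j in range(cols)] for i in range(rows)]
    return new   # the old grid can be discarded, or reused as the next buffer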
For more details, see my blog post on Cellular Automata, which includes an implementation of Conway's Game of Life in Mathematica.
Regardless of the layout being used for the tiles, is there any good way to divvy out the tiles so that you can guarantee the user that, at the beginning of the game, there exists at least one path to completing the puzzle and winning the game?
Obviously, depending on the user's moves, they can cut themselves off from winning. I just want to be able to always tell the user that the puzzle is winnable if they play well.
If you randomly place tiles at the beginning of the game, it's possible that the user could make a few moves and not be able to do any more. The knowledge that a puzzle is at least solvable should make it more fun to play.
Place all the tiles in reverse (i.e. lay out the board starting in the middle, working outwards).
To tease the player further, you could do it visibly but at very high speed.
Play the game in reverse.
Randomly lay out pieces pair by pair, in places where you could slide them into the heap. You'll need a way to know where you're allowed to place pieces in order to end up with a heap that matches some preset pattern, but you'd need that anyway.
I know this is an old question, but I came across this when solving the problem myself. None of the answers here are quite perfect, and several of them have complicated caveats or will break on pathological layouts. Here is my solution:
Solve the board (forward, not backward) with unmarked tiles. Remove two free tiles at a time. Push each pair you remove onto a "matched pair" stack. Often, this is all you need to do.
If you run into a dead end (numFreeTiles == 1), just reset your generator :) I have found I usually don't hit dead ends, and so far have had a max retry count of 3 for the 10-or-so layouts I have tried. Once I hit 8 retries, I give up and just randomly assign the rest of the tiles. This allows me to use the same generator both for setting up the board and for the shuffle feature, even if the player screwed up and made a 100% unsolvable state.
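In rough Python, that generator looks like this. free_positions() stands in for whatever layout-specific logic tells you which positions are currently removable, so it is an assumed helper, not something spelled out here:

import random

def generate(layout, free_positions, max_retries=8):
    # Assign the tile faces by repeatedly removing a random pair of free
    # positions from a full board; the pairs removed together get matching faces.
    for _ in range(max_retries):
        remaining = set(layout)
        matched_pairs = []
        while len(remaining) > 1:
            free = list(free_positions(remaining))
            if len(free) < 2:
                break                          # dead end: retry from scratch
            a, b = random.sample(free, 2)
            remaining -= {a, b}
            matched_pairs.append((a, b))
        if not remaining:
            return matched_pairs               # every pair gets one tile face
    return None                                # give up: assign the rest randomly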
Another solution when you hit a dead end is to back out (pop off the stack, replacing tiles on the board) until you can take a different path. Take a different path by making sure you match pairs that will remove the original blocking tile.
Unfortunately, depending on the board, this may loop forever. If you end up removing a pair that resembles a "no outlet" road, where all subsequent "roads" are a dead end, and there are multiple dead ends, your algorithm will never complete. I don't know if it is possible to design a board where this would be the case, but if so, there is still a solution.
To solve that bigger problem, treat each possible board state as a node in a DAG, with each selected pair being an edge on that graph. Do a random traversal, until you find a leaf node at depth 72. Keep track of your traversal history so that you never repeat a descent.
Since dead ends are more rare than first-try solutions in the layouts I have used, what immediately comes to mind is a hybrid solution. First try to solve it with minimal memory (store selected pairs on your stack). Once you've hit the first dead end, degrade to doing full marking/edge generation when visiting each node (lazy evaluation where possible).
I've done very little study of graph theory, though, so maybe there's a better solution to the DAG random traversal/search problem :)
Edit: You actually could use any of my solutions w/ generating the board in reverse, ala the Oct 13th 2008 post. You still have the same caveats, because you can still end up with dead ends. Generating a board in reverse has more complicated rules, though. E.g, you are guaranteed to fail your setup if you don't start at least SOME of your rows w/ the first piece in the middle, such as in a layout w/ 1 long row. Picking a completely random (legal) first move in a forward-solving generator is more likely to lead to a solvable board.
The only thing I've been able to come up with is to place the tiles down in matching pairs as kind of a reverse Mahjong Solitaire game. So, at any point during the tile placement, the board should look like it's in the middle of a real game (ie no tiles floating 3 layers up above other tiles).
If the tiles are placed in matching pairs in a reverse game, it should always result in at least one forward path to solve the game.
I'd love to hear other ideas.
I believe the best answer has already been pushed up: creating a set by solving it "in reverse" - i.e. starting with a blank board, then adding a pair somewhere, add another pair in a solvable position, and so on...
If you prefer a "Big Bang" approach (generating the whole set randomly at the beginning), are a very macho developer or just feel masochistic today, you could represent all the pairs you can take out from the given set and how they depend on each other via a directed graph.
From there, you'd only have to get the transitive closure of that set and determine if there's at least one path from at least one of the initial legal pairs that leads to the desired end (no tile pairs left).
Implementing this solution is left as an exercise to the reader :D
Here are the rules I used in my implementation.
When building the heap, for each fret (tile) in a pair separately, find cells (places) which:
have all cells at lower levels already filled
(the place for the second fret must not block the first, considering that the first fret has already been put on the board)
both places are "at the edges" of the already built heap:
EITHER have at least one neighbour on the left or right side
OR are the first fret in a row (all cells to the right and left are recursively free)
These rules do not guarantee a build will always be successful - it sometimes leaves the last 2 free cells blocking each other, and the build should be retried (or at least the last few frets).
In practice, the "turtle" layout built in no more than 6 retries.
Most existing games seem to restrict putting the first ("first in row") frets somewhere in the middle. This gives more convenient configurations, where there are no frets at the edges of very long rows staying up until the player's last moves. However, "the middle" is different for different configurations.
Good luck :)
P.S.
If you've found an algorithm that builds a solvable heap in one pass - please let me know.
You have 144 tiles in the game; each of the 144 tiles has a block list.
(the top tile on a stack has an empty block list)
All valid moves require that their "current vertical block list" be empty. This can be a 144x144 matrix, so about 20k of memory, plus a LEFT and RIGHT block list, also about 20k each.
Generate a valid move table from (remaining_tiles) AND ((empty CURRENT VERTICAL BLOCK LIST) AND ((empty CURRENT LEFT BLOCK LIST) OR (empty CURRENT RIGHT BLOCK LIST)))
Pick 2 random tiles from the valid move table, record them
Update the current tables (vertical, left and right); record the tiles removed on a stack.
Now we have a list of moves that constitute a valid game. Assign matching tile types to each of the 72 moves.
For challenging games, track when each tile becomes available. Find sets that are (early, early, early, late) and (late, late, late, early); since it's blank, you find 1 EE, 1 LL and 2 LE blocks. Of the 2 LE blocks, find an EARLY that blocks ANY other EARLY (except right-blocking a left-side piece).
Once you've got a valid game, play around with the ordering.
Solitaire? Just a guess, but I would assume that your computer would need to beat the game (or come close to it) to determine this.
Another option might be to have several preset layouts (that allow winning) mixed in with your current level.
To some degree you could try making sure that one of the 4 tiles is no more than X layers below another X.
Most games I see have the shuffle command for when someone gets stuck.
I would try a mix of things and see what works best.
To experiment, I've (long ago) implemented Conway's Game of Life (and I'm aware of this related question!).
My implementation worked by keeping 2 arrays of booleans, representing the 'last state', and the 'state being updated' (the 2 arrays being swapped at each iteration). While this is reasonably fast, I've often wondered about how to optimize this.
One idea, for example, would be to precompute at iteration N the zones that could be modified at iteration (N+1) (so that if a cell does not belong to such a zone, it won't even be considered for modification at iteration (N+1)). I'm aware that this is very vague, and I never took time to go into the details...
Do you have any ideas (or experience!) of how to go about optimizing (for speed) Game of Life iterations?
I am going to quote my answer from the other question, because the chapters I mention have some very interesting and fine-tuned solutions. Some of the implementation details are in C and/or assembly, yes, but for the most part the algorithms can work in any language:
Chapters 17 and 18 of Michael Abrash's Graphics Programmer's Black Book are one of the most interesting reads I have ever had. It is a lesson in thinking outside the box. The whole book is great really, but the final optimized solutions to the Game of Life are incredible bits of programming.
There are some super-fast implementations that (from memory) represent cells of 8 or more adjacent squares as bit patterns and use that as an index into a large array of precalculated values to determine in a single machine instruction if a cell is live or dead.
Check out here:
http://dotat.at/prog/life/life.html
Also XLife:
http://linux.maruhn.com/sec/xlife.html
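The simplest form of the table-lookup idea mentioned above is a 512-entry table indexed by a cell's 3x3 neighbourhood; the really fast versions pack several cells and their neighbour counts into each lookup, but the principle is the same. A small Python sketch (dead borders, no wrap-around):

# Precompute the next state for every possible 3x3 neighbourhood (2^9 patterns).
# Bit 4 is the centre cell; the other bits are its eight neighbours.
TABLE = []
for pattern in range(512):
    alive = (pattern >> 4) & 1
    neighbours = bin(pattern & ~(1 << 4)).count("1")
    TABLE.append(1 if neighbours == 3 or (alive and neighbours == 2) else 0)

def step(grid):
    # One Game of Life generation via table lookup; grid is a list of lists of 0/1.
    rows, cols = len(grid), len(grid[0])
    new = [[0] * cols for _ in range(rows)]
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            idx = 0
            for k, (dr, dc) in enumerate(((-1,-1), (-1,0), (-1,1),
                                          ( 0,-1), ( 0,0), ( 0,1),
                                          ( 1,-1), ( 1,0), ( 1,1))):
                idx |= grid[r + dr][c + dc] << k
            new[r][c] = TABLE[idx]
    return new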
You should look into Hashlife, the ultimate optimization. It uses the quadtree approach that skinp mentioned.
As mentioned in Abrash's Black Book, one of the most simple and straightforward ways to get a huge speedup is to keep a change list.
Instead of iterating through the entire cell grid each time, keep a copy of all the cells that you change.
This will narrow down the work you have to do on each iteration.
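A sketch of the change-list idea in Python, keeping live cells as a set of coordinates; only cells that changed last generation, plus their neighbours, can change now, so only those are re-examined:

def step_with_change_list(live, changed):
    # live: set of (r, c) live cells; changed: cells that flipped last generation.
    # Seed the first call with changed = set(live). Returns (new_live, new_changed).
    def neighbours(r, c):
        return [(r + dr, c + dc)
                for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                if (dr, dc) != (0, 0)]
    candidates = set()
    for cell in changed:
        candidates.add(cell)
        candidates.update(neighbours(*cell))
    new_live, new_changed = set(live), set()
    for cell in candidates:
        n = sum(nb in live for nb in neighbours(*cell))
        next_state = (n == 3) or (cell in live and n == 2)
        if next_state != (cell in live):
            new_changed.add(cell)
            if next_state:
                new_live.add(cell)
            else:
                new_live.discard(cell)
    return new_live, new_changed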
The algorithm itself is inherently parallelizable. Using the same double-buffered method in an unoptimized CUDA kernel, I'm getting around 25ms per generation in a 4096x4096 wrapped world.
What the most efficient algorithm is mainly depends on the initial state.
If the majority of cells are dead, you can save a lot of CPU time by skipping empty parts and not calculating things cell by cell.
In my opinion it makes sense to check for completely dead spaces first, when your initial state is something like "random, but with a chance of life lower than 5%".
I would just divide the matrix up into halves and start checking the bigger ones first.
So if you have a field of 10,000 * 10,000, you'd first accumulate the states of the upper-left quarter of 5,000 * 5,000.
If the sum of states is zero in the first quarter, you can ignore this first quarter completely and check the upper-right 5,000 * 5,000 for life next.
If its sum of states is > 0, you then divide the second quarter into 4 pieces again and repeat this check for life for each of these subspaces.
You could go down to subframes of 8*8 or 10*10 (not sure what makes the most sense here).
Whenever you find life, you mark these subspaces as "has life".
Only spaces which "have life" need to be divided into smaller subspaces - the empty ones can be skipped.
When you are finished assigning the "has life" attribute to all possible subspaces, you end up with a list of subspaces which you now simply extend by +1 in each direction - with empty cells - and apply the regular (or modified) Game of Life rules to them.
You might think that dividing up a 10,000 * 10,000 space into subspaces of 8*8 is a lot of tasks, but accumulating their state values is in fact much, much less computing work than performing the GoL algorithm on each cell plus its 8 neighbours, comparing the number and storing the new state for the next iteration somewhere...
But like I said above, for a random initial state with 30% population this won't make much sense, as there will not be many completely dead 8*8 subspaces to find (let alone dead 256*256 subspaces).
And of course, the way to perfect optimisation will last but not least depend on your language.
Two ideas:
(1) Many configurations are mostly empty space. Keep a linked list (not necessarily in order, that would take more time) of the live cells, and during an update, only update around the live cells (this is similar to your vague suggestion, OysterD :)
(2) Keep an extra array which stores the # of live cells in each row of 3 positions (left-center-right). Now when you compute the new dead/live value of a cell, you need only 4 read operations (top/bottom rows and the center-side positions), and 4 write operations (update the 3 affected row summary values, and the dead/live value of the new cell). This is a slight improvement from 8 reads and 1 write, assuming writes are no slower than reads. I'm guessing you might be able to be more clever with such configurations and arrive at an even better improvement along these lines.
If you don't want anything too complex, then you can use a grid to slice it up, and if that part of the grid is empty, don't try to simulate it (please view Tyler's answer). However, you could do a few optimizations:
Set different grid sizes depending on the number of live cells; if there aren't many live cells, that likely means they are in a tiny area.
When you randomize it, don't use the grid code until the user changes the data: I've personally tested randomizing it, and even after a long time it still fills most of the board (unless the grid is sufficiently small, at which point it won't help that much anymore).
If you are showing it to the screen, don't use rectangles for pixel size 1 and 2: instead set the pixels of the output. Any higher pixel size and I find it's okay to use the native rectangle-filling code. Also, preset the background so you don't have to fill the rectangles for the dead cells (not live, because live cells disappear pretty quickly)
I don't know exactly how this can be done, but I remember some of my friends had to represent this game's grid with a quadtree for an assignment. I'm guessing it's really good for optimizing the space usage of the grid since you basically only represent the occupied cells. I don't know about execution speed, though.
It's a two dimensional automaton, so you can probably look up optimization techniques. Your notion seems to be about compressing the number of cells you need to check at each step. Since you only ever need to check cells that are occupied or adjacent to an occupied cell, perhaps you could keep a buffer of all such cells, updating it at each step as you process each cell.
If your field is initially empty, this will be much faster. You probably can find some balance point at which maintaining the buffer is more costly than processing all the cells.
There are table-driven solutions for this that resolve multiple cells in each table lookup. A google query should give you some examples.
I implemented this in C#:
All cells have a location, a neighbor count, a state, and access to the rule.
Put all the live cells that are in array B into array A.
Have all the cells in array A add 1 to the neighbor count of their neighbors.
Have all the cells in array A put themselves and their neighbors in array B.
All the cells in array B update according to the rule and their state.
All the cells in array B reset their neighbor counts to 0.
Pros:
Ignores cells that don't need to be updated
Cons:
4 arrays: a 2d array for the grid, an array for the live cells, and an array for the active cells.
Can't process rule B0.
Processes cells one by one.
Cells aren't just booleans
Possible improvements:
Cells also have an "Updated" value; they are updated only if they haven't updated in the current tick, removing the need for array B as mentioned above.
Instead of array B being the cells with live neighbors, array B could be the cells without, and those check for rule B0.