I've made a function to run a fight simulation. It's got a random element, so I'd like to run it 100 times to check the results.
I've learnt that Ruby can't have functions inside functions.
$p1_skill = 10
$p1_health = 10
$p2_skill = 10
$p2_health = 10

def hp_check
  if $p2_health >= 1 && $p1_health == 0
    return "p2_wins"
  elsif $p1_health >= 1 && $p2_health == 0
    return "p1_wins"
  else
    battle
  end
end

def battle
  p1_fight = $p1_skill + rand(2..12)
  p2_fight = $p2_skill + rand(2..12)
  if p1_fight > p2_fight
    $p2_health -= 2
    hp_check
  elsif p2_fight > p1_fight
    $p1_health -= 2
    hp_check
  else
    battle
  end
end

battle
Right now this accurately produces a winner. It rolls two dice and adds them to a player's skill; if the total is higher than the other player's, the other player loses 2 health.
The skills and HP of players will change throughout the game. This is for a project assignment.
I'd like this to produce odds for win chances, for balancing purposes.
I have several suggestions regarding your implementation. Note that since this is homework, I'm providing the answer in pieces rather than giving you an entire program. In no particular order...
Don't use global variables. I suspect this is the major hurdle you're running into with trying to achieve multiple runs of your model. The model state should be contained within the model methods, and initial state can be passed to it as arguments. Example:
def battle(p1_skill, p1_health, p2_skill, p2_health)
Unless your instructor has mandated that you use recursion, a simple loop structure will serve you much better. There's no need to check who won until one player or the other drops down to zero (or lower). There's also no need for an else to recursively call battle, the loop will iterate to the next round of the fight if both are still in the running, even if neither player took a hit.
while p1_health > 0 && p2_health > 0
  # roll the dice and update health
end
# check who won and return that answer
hp_check really isn't needed; once you lose the recursive calls, it becomes a one-liner if you perform the check after breaking out of the loop. Also, it would be more useful to return just the winner, so whoever gets that return value can decide whether they want to print it, use it to update a tally, both, or something else entirely. After you break out of the loop outlined above:
# determine which player won, since somebody's health dropped to 0 or less
p1_health > 0 ? 1 : 2
When you're incrementing or decrementing a quantity, don't do equality testing. p1_health <= 0 is much safer than p1_health == 0, because some day you or somebody else is going to start from an odd number while decrementing by 2's, or decrement by some other (random?) amount.
Generating a number uniformly between 2 and 12 is not the same as summing two 6-sided dice. There are 36 possible outcomes for the two dice. Only one of the 36 yields a 2, only one yields a 12, and at the other extreme, there are six ways to get a sum of 7. I created a little die-roll method which takes the number of dice as an argument:
def roll_dice(n)
  n.times.inject(0) { |total, _| total + rand(1..6) }
end
so, for example, determining player 1's fight score becomes p1_fight = p1_skill + roll_dice(2).
After making these sorts of changes, tallying up the statistics is pretty straightforward:
n = 10000
number_of_p1_wins = 0
n.times { number_of_p1_wins += 1 if battle(10, 10, 10, 10) == 1 }
proportion = number_of_p1_wins.to_f / n
puts "p1 won #{"%5.2f" % (100.0 * proportion)}% of the time"
If you replace the constant 10's in the call to battle by getting user input or iterating over ranges, you can explore a rich set of other scenarios.
I'm currently developing a solver for a trick-based card game called Skat in a perfect information situation. Although most people may not know the game, please bear with me; my problem is of a general nature.
Short introduction to Skat:
Basically, each player plays one card in turn, and every three cards form a trick. Every card has a specific value. The score that a player has achieved is the sum of the values of the cards contained in the tricks that player has won. I left out certain things that are unimportant for my problem, e.g. who plays against whom or when a trick is won.
What we should keep in mind is that there is a running score, and when investigating a certain position, who played what before (i.e. its history) is relevant to that score.
I have written an alpha beta algorithm in Java which seems to work fine, but it's way too slow. The first enhancement that seems the most promising is the use of a transposition table. I read that when searching the tree of a Skat game, you will encounter a lot of positions that have already been investigated.
And that's where my problem comes into play: if I find a position that has already been investigated before, the moves leading to it were different. Consequently, in general, the score (and alpha or beta) will be different, too.
This leads to my question: How can I determine the value of a position, if I know the value of the same position, but with a different history?
In other words: How can I decouple a subtree from its path to the root, so that it can be applied to a new path?
My first impulse was that it's just not possible, because alpha or beta could have been influenced by other paths, which might not be applicable to the current position, but...
There already seems to be a solution
...that I don't seem to understand. In Sebastian Kupferschmid's master thesis about a Skat solver, I found this piece of code (maybe C-ish / pseudocode?):
def ab_tt(p, alpha, beta):
    if p isa Leaf:
        return 0
    if hash.lookup(p, val, flag):
        if flag == VALID:
            return val
        elif flag == LBOUND:
            alpha = max(alpha, val)
        elif flag == UBOUND:
            beta = min(beta, val)
        if alpha >= beta:
            return val
    if p isa MAX_Node:
        res = alpha
    else:
        res = beta
    for q in succ(p):
        if p isa MAX_Node:
            succVal = t(q) + ab_tt(q, res - t(q), beta - t(q))
            res = max(res, succVal)
            if res >= beta:
                hash.add(p, res, LBOUND)
                return res
        elif p isa MIN_Node:
            succVal = t(q) + ab_tt(q, alpha - t(q), res - t(q))
            res = min(res, succVal)
            if res <= alpha:
                hash.add(p, res, UBOUND)
                return res
    hash.add(p, res, VALID)
    return res
It should be pretty self-explanatory. succ(p) is a function that returns every possible move at the current position. t(q) is what I believe to be the running score of the respective position (the points achieved so far by the declarer).
Since I don't like copying stuff without understanding it, this should just be an aid for anyone who would like to help me out. Of course, I have given this code some thought, but I can't wrap my head around one thing: By subtracting the current score from alpha/beta before calling the function again [e.g. ab_tt(q, res - t(q), beta - t(q))], there seems to be some kind of decoupling going on. But what exactly is the benefit if we store the position's value in the transposition table without doing the same subtraction right here, too? If we found a previously investigated position, how come we can just return its value (in case it's VALID) or use the bound value for alpha or beta? The way I see it, both storing and retrieving values from the transposition table won't account for the specific histories of these positions. Or will it?
Literature:
There are almost no English sources out there that deal with AI in Skat, but I found this one: A Skat Player Based on Monte Carlo Simulation by Kupferschmid and Helmert. Unfortunately, the whole paper, and especially the elaboration on transposition tables, is rather compact.
Edit:
So that everyone can better imagine how the score develops throughout a Skat game until all cards have been played, here's an example. The course of the game is displayed in the lower table, one trick per line. The actual score after each trick is on its left side, where +X is the declarer's score (-Y is the defending team's score, which is irrelevant for alpha-beta). As I said, the winner of a trick (declarer or defending team) adds the value of each card in that trick to their score.
The card values are:

Rank:   J   A  10   K   Q   9   8   7
Value:  2  11  10   4   3   0   0   0
I solved the problem. Instead of doing the weird subtractions upon each recursive call, as suggested by the reference in my question, I subtract the running score from the resulting alpha-beta value only when storing a position in the transposition table:
For exact values (the position hasn't been pruned):
transpo.put(hash, new int[] { TT_VALID, bestVal - node.getScore()});
If the node caused a beta-cutoff:
transpo.put(hash, new int[] { TT_LBOUND, bestVal - node.getScore()});
If the node caused an alpha-cutoff:
transpo.put(hash, new int[] { TT_UBOUND, bestVal - node.getScore()});
Where:
transpo is a HashMap<Long, int[]>
hash is the long value representing that position
bestVal is either the exact value or the value that caused a cutoff
TT_VALID, TT_LBOUND and TT_UBOUND are simple constants, describing the type of transposition table entry
However, this didn't work per se. After posting the same question on gamedev.net, a user named Álvaro gave me the deciding hint:
When storing exact scores (TT_VALID), I should only store positions that improved alpha.
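To make the retrieval side of this scheme explicit, here is a minimal sketch of the lookup (in Python rather than my actual Java, and names like probe and rel_val are mine): because stored values are relative to the position, the current path's running score has to be added back before the value or bound is usable.

TT_VALID, TT_LBOUND, TT_UBOUND = 0, 1, 2

def probe(transpo, key, running_score, alpha, beta):
    # transpo maps position hashes to (flag, score-relative value) pairs.
    # Returns (value, alpha, beta); value is None when no immediate return applies.
    entry = transpo.get(key)
    if entry is None:
        return None, alpha, beta
    flag, rel_val = entry
    val = rel_val + running_score   # re-anchor the stored value to this history
    if flag == TT_VALID:
        return val, alpha, beta
    if flag == TT_LBOUND:
        alpha = max(alpha, val)
    elif flag == TT_UBOUND:
        beta = min(beta, val)
    if alpha >= beta:
        return val, alpha, beta
    return None, alpha, beta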
How can I break a heap into two heaps in Grundy's game?
What about breaking a heap into any number of heaps (no two of them being equal)?
Games of this type are analyzed in great detail in the book series "Winning Ways for your Mathematical Plays". Most of the things you are looking for are probably in volume 1.
You can also take a look at these links: Nimbers (Wikipedia), Sprague-Grundy theorem (Wikipedia) or do a search for "combinatorial game theory".
My knowledge on this is quite rusty, so I'm afraid I can't help you myself with this specific problem. My excuses if you were already aware of everything I linked.
Edit: In general, the method of solving these types of games is to "build up" stack sizes. So start with a stack of 1 and decide who wins with optimal play. Then do the same for a stack of 2, which can be split into 1 & 1. Then move on to 3, which can be split into 1 & 2. Same for 4 (here it gets trickier): 3 & 1 or 2 & 2. Using the Sprague-Grundy theorem and the algebraic rules for nimbers, you can calculate who will win. Keep going until you reach the stack size for which you need to know the answer.
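As an illustration of that build-up, here is a small Python sketch of my own (assuming the classic rule that a heap must be split into two unequal heaps) that computes the single-heap Grundy numbers with the mex rule:

from functools import lru_cache

def mex(values):
    # Minimum excludant: the smallest non-negative integer not in the set.
    m = 0
    while m in values:
        m += 1
    return m

@lru_cache(maxsize=None)
def grundy(n):
    # Grundy number of a single heap of n stones. A move splits one heap into
    # two unequal heaps, and the value of the resulting pair is the XOR
    # (nimber sum) of the parts, per the Sprague-Grundy theorem.
    reachable = set()
    for left in range(1, (n + 1) // 2):   # left < n - left, so splits are unequal
        reachable.add(grundy(left) ^ grundy(n - left))
    return mex(reachable)

print([grundy(n) for n in range(1, 13)])
# -> [0, 0, 1, 0, 2, 1, 0, 2, 1, 0, 2, 1]

A multi-heap position is then a win for the player to move exactly when the XOR of its heaps' Grundy numbers is nonzero.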
Edit 2: The website I was talking about in the comments seems to be down. Here is a link of a backup of it: Wayback Machine - Introduction to Combinatorial Games.
Grundy's Game, and many games like it, can be solved with an algorithm like this:
// returns a Move object representing the current player's optimal move,
// or null if the player has no chance of winning
function bestMove(GameState g){
    for each (move in g.possibleMoves()){
        nextState = g.applyMove(move)
        if (bestMove(nextState) == null){
            // the next player's best move is null, so if we take this move,
            // he has no chance of winning. This is good for us!
            return move;
        }
    }
    // none of our possible moves led to a winning strategy.
    // We have no chance of winning. This is bad for us :-(
    return null;
}
Implementations of GameState and Move depend on the game. For Grundy's game, both are simple.
GameState stores a list of integers, representing the size of each heap in the game.
Move stores an initialHeapSize integer, and a resultingHeapSizes list of integers.
GameState::possibleMoves iterates through its heap size list, and determines the legal divisions for each one.
GameState::applyMove(Move) returns a copy of the GameState, except the move given to it is applied to the board.
GameState::possibleMoves can be implemented for "classic" Grundy's Game like so:
function possibleMoves(GameState g){
    moves = []
    for each (heapSize in g.heapSizes){
        for each (resultingHeaps in possibleDivisions(heapSize)){
            Move m = new Move(heapSize, resultingHeaps)
            moves.append(m)
        }
    }
    return moves
}

function possibleDivisions(int heapSize){
    divisions = []
    for(int leftPileSize = 1; leftPileSize < heapSize; leftPileSize++){
        int rightPileSize = heapSize - leftPileSize
        // note: this yields each division twice ([1, 3] and [3, 1]);
        // harmless for correctness, but you could stop at heapSize / 2
        if (leftPileSize != rightPileSize){
            divisions.append([leftPileSize, rightPileSize])
        }
    }
    return divisions
}
Modifying this to use the "divide into any number of unequal piles" rule is just a matter of changing the implementation of possibleDivisions.
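For instance, here is a hedged sketch of that variant's division generator in Python (the function name is my own; it enumerates partitions into two or more pairwise distinct parts, keeping parts in ascending order to avoid duplicates):

def distinct_partitions(heap_size, min_part=1):
    # All splits of heap_size into two or more pairwise distinct parts.
    result = []
    for first in range(min_part, heap_size // 2 + 1):
        rest = heap_size - first
        if rest > first:
            result.append([first, rest])        # split into exactly two parts
        for tail in distinct_partitions(rest, first + 1):
            result.append([first] + tail)       # longer splits, parts ascending
    return result

print(distinct_partitions(6))   # -> [[1, 5], [1, 2, 3], [2, 4]]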
I haven't calculated it exactly, but an unoptimized bestMove has a pretty crazy worst-case runtime. Once you start giving it a starting state of around 12 stones, you'll get long wait times. So you should implement memoization to improve performance.
For best results, keep each GameState's heap size list sorted, and discard any heaps of size 2 or 1.
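Putting those pieces together, a hedged Python sketch of the memoized search (wins and canonical are my names, not from the answer): states are canonicalized to a sorted tuple with heaps of size 1 and 2 dropped, and lru_cache plays the role of the memo table.

from functools import lru_cache

def possible_divisions(heap_size):
    # Classic rule: split one heap into two unequal parts.
    return [(a, heap_size - a) for a in range(1, (heap_size + 1) // 2)]

def canonical(heaps):
    # Sort, and discard heaps of size 1 and 2: they admit no further move.
    return tuple(sorted(h for h in heaps if h > 2))

@lru_cache(maxsize=None)
def wins(state):
    # True if the player to move can force a win from this canonical state.
    for i, heap in enumerate(state):
        for a, b in possible_divisions(heap):
            rest = canonical(state[:i] + state[i + 1:] + (a, b))
            if not wins(rest):
                return True    # this move leaves the opponent with no winning reply
    return False               # every move loses, or there is no move at all

print(wins(canonical((12,))))   # -> True: 12 stones is a first-player win

With the cache in place, a 12-stone start returns instantly instead of blowing up exponentially.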
I am currently working on a text-based web game, wherein I simulate the battle sequences automatically, like MyBrute and Pockie Ninja.
So this is the situation.
We have 2 players with different attack speeds
attack speed (determines the number of seconds needed for a player to start attacking)
(Easy example) Let's assume Player 1 has 6s and Player 2 has 3s
This means Player 2 will attack twice before Player 1 does
(that's because if two players tie on an attack turn, the one with the better attack speed goes first)
(but if they have the same attack speed, the player who has not attacked lately goes first)
Now my problem is in the loop.
I'd like to determine whose turn it is with the minimum number of loop iterations.
For our easy example we could just create an infinite loop with a counter that steps by 3 to determine whose turn it's going to be, and check every iteration whether we have a winner and exit the loop. (This is my algo; you can suggest a better one.)
The big problem for me is when the attack speeds have decimal values.
Realistic example (assume that I only use 1 decimal digit):
Player 1 attack speed = 5.7
Player 2 attack speed = 6.6
At worst we could use 0.1 as the common step and subtract it each loop iteration, but I want to determine the best (largest) such subtrahend (the "LCD").
Hope it makes sense.
Thank you. I appreciate you sharing your great minds.
UPDATE
// This is not the actual code, but it shows the logic.
decimal Player1Turn = Player1.attackspeed;
decimal Player2Turn = Player2.attackspeed;
decimal LCD = GetLCD(Player1.attackspeed, Player2.attackspeed); // THIS IS WHAT I WANT TO DETERMINE

while (Player1.HP > 0 && Player2.HP > 0)
{
    Player1Turn -= LCD;
    Player2Turn -= LCD;
    if (Player1Turn <= 0)
    {
        // DO STUFF
        Player1Turn = Player1.attackspeed;
    }
    if (Player2Turn <= 0)
    {
        // DO STUFF
        Player2Turn = Player2.attackspeed;
    }
}

We can use a function like:

public decimal GetLCD(decimal num1, decimal num2)
{
    // returns the lcd
}
The following code processes the battle sequence without using a lowest common denominator at all. It will also run about a million times faster than any approach that steps by a common denominator when the attack speeds are, e.g., 1000 and 1000.001 (where the step would have to be 0.001).
decimal time = 0;
while (player1.HP > 0 && player2.HP > 0) {
    decimal player1remainingtime = player1.attackspeed - (time % player1.attackspeed);
    decimal player2remainingtime = player2.attackspeed - (time % player2.attackspeed);
    time += Math.Min(player1remainingtime, player2remainingtime);
    if (player1remainingtime < player2remainingtime) {
        // it is player 1's turn; do stuff
    } else if (player1remainingtime > player2remainingtime) {
        // it is player 2's turn; do stuff
    } else {
        // both players' turns now
        if (player1.attackspeed < player2.attackspeed) {
            // player 1 is faster, it's player 1's turn; do stuff
            // now do stuff for player 2
        } else {
            // player 2 is faster, it's player 2's turn; do stuff
            // now do stuff for player 1
        }
    }
}
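That said, if you do still want the largest exact common step that the question's GetLCD is after: for fixed-precision speeds it is just the greatest common divisor of the two values once they are scaled to integers. A hedged sketch in Python (common_step is my name, not from the post):

from math import gcd

def common_step(a, b, digits=1):
    # Largest exact common step for two speeds given to a fixed number of
    # decimal digits: scale to integers, take the gcd, scale back.
    scale = 10 ** digits
    return gcd(round(a * scale), round(b * scale)) / scale

print(common_step(5.7, 6.6))   # -> 0.3, so step by 0.3 instead of 0.1

Even so, the event-driven loop above stays faster, since it takes exactly one iteration per attack.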
If you are using an object oriented language then you can do this:
Players will be objects of type Player and there will be a Timer object.
The Timer will use the Observer design pattern.
Players will register themselves to the Timer with their response time.
When their time is due then they are notified that they can take action.
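A minimal Python sketch of that design (all names are mine, purely illustrative): the Timer keeps a priority queue of due times and notifies whichever registered player is due next, breaking exact ties by attack speed as the question requires.

import heapq

class Player:
    def __init__(self, name, attack_speed, hp):
        self.name = name
        self.attack_speed = attack_speed
        self.hp = hp

    def on_turn(self):
        # Observer callback: the actual attack ("do stuff") goes here.
        print(self.name, "attacks")

class Timer:
    def __init__(self):
        self.queue = []   # entries: (due time, attack speed, seq, player)
        self.seq = 0      # unique tie-breaker so players are never compared

    def register(self, player):
        self.seq += 1
        heapq.heappush(self.queue,
                       (player.attack_speed, player.attack_speed, self.seq, player))

    def run(self, players, max_turns=10):
        # Notify the next due player until someone drops; capped, since this
        # sketch's on_turn deals no damage.
        for _ in range(max_turns):
            if not all(p.hp > 0 for p in players):
                break
            due, _, _, player = heapq.heappop(self.queue)
            player.on_turn()
            self.seq += 1
            heapq.heappush(self.queue,
                           (due + player.attack_speed, player.attack_speed,
                            self.seq, player))

timer = Timer()
p1, p2 = Player("P1", 5.7, 20), Player("P2", 6.6, 20)
timer.register(p1)
timer.register(p2)
timer.run([p1, p2])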
In a tic-tac-toe implementation, I guess that the challenging part is to determine the best move to be played by the machine.
What algorithms can be pursued? I'm looking into implementations from simple to complex. How would I go about tackling this part of the problem?
The strategy from Wikipedia for playing a perfect game (win or tie every time) seems like straightforward pseudo-code:

Quote from Wikipedia (Tic-tac-toe#Strategy):

A player can play a perfect game of tic-tac-toe (to win or, at least, draw) if they choose the first available move from the following list, each turn, as used in Newell and Simon's 1972 tic-tac-toe program.[6]

Win: If you have two in a row, play the third to get three in a row.
Block: If the opponent has two in a row, play the third to block them.
Fork: Create an opportunity where you can win in two ways.
Block Opponent's Fork:
  Option 1: Create two in a row to force the opponent into defending, as long as it doesn't result in them creating a fork or winning. For example, if "X" has a corner, "O" has the center, and "X" has the opposite corner as well, "O" must not play a corner in order to win. (Playing a corner in this scenario creates a fork for "X" to win.)
  Option 2: If there is a configuration where the opponent can fork, block that fork.
Center: Play the center.
Opposite Corner: If the opponent is in the corner, play the opposite corner.
Empty Corner: Play an empty corner.
Empty Side: Play an empty side.
Recognizing what a "fork" situation looks like could be done in a brute-force manner as suggested.
Note: A "perfect" opponent is a nice exercise but ultimately not worth 'playing' against. You could, however, alter the priorities above to give characteristic weaknesses to opponent personalities.
What you need (for tic-tac-toe or a far more difficult game like Chess) is the minimax algorithm, or its slightly more complicated variant, alpha-beta pruning. Ordinary naive minimax will do fine for a game with as small a search space as tic-tac-toe, though.
In a nutshell, what you want to do is not to search for the move that has the best possible outcome for you, but rather for the move where the worst possible outcome is as good as possible. If you assume your opponent is playing optimally, you have to assume they will take the move that is worst for you, and therefore you have to take the move that MINimises their MAXimum gain.
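For tic-tac-toe, the whole idea fits in a few lines. Here is a minimal minimax sketch in negamax form, in Python (the board encoding and names are mine, purely illustrative):

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    # board: list of 9 cells, each 'X', 'O' or ' '.
    for i, j, k in LINES:
        if board[i] != ' ' and board[i] == board[j] == board[k]:
            return board[i]
    return None

def negamax(board, me, opp):
    # Best achievable score for `me`, who is to move: +1 win, 0 draw, -1 loss.
    # Negating the opponent's best score is what MINimises their MAXimum gain.
    if winner(board) == opp:
        return -1                  # the opponent's last move already won
    if ' ' not in board:
        return 0                   # board full: draw
    best = -1
    for i in range(9):
        if board[i] == ' ':
            board[i] = me
            best = max(best, -negamax(board, opp, me))
            board[i] = ' '         # undo the trial move
    return best

def best_move(board, me, opp):
    empty = [i for i in range(9) if board[i] == ' ']
    return max(empty, key=lambda i: -negamax(board[:i] + [me] + board[i + 1:], opp, me))

print(best_move([' '] * 9, 'X', 'O'))   # every opening draws under perfect play

Alpha-beta pruning would merely pass running best scores down and cut hopeless branches early; for a search space this small, the naive version already explores the full game tree quickly.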
The brute-force method of generating every single possible board and scoring it based on the boards it later produces further down the tree doesn't require much memory, especially once you recognize that 90-degree board rotations are redundant, as are flips about the vertical, horizontal, and diagonal axes.
Once you get to that point, there's something like less than 1k of data in a tree graph to describe the outcome, and thus the best move for the computer.
-Adam
A typical algorithm for tic-tac-toe should look like this:

Board: a nine-element vector representing the board. We store 2 (indicating blank), 3 (indicating X), or 5 (indicating O).
Turn: an integer indicating which move of the game is about to be played. The 1st move will be indicated by 1, the last by 9.

The Algorithm

The main algorithm uses three functions.

Make2: returns 5 if the center square of the board is blank, i.e. if board[5] = 2. Otherwise, this function returns any non-corner square (2, 4, 6 or 8).

Posswin(p): returns 0 if player p can't win on their next move; otherwise, it returns the number of the square that constitutes a winning move. This function enables the program both to win and to block the opponent's win. It operates by checking each of the rows, columns, and diagonals: by multiplying the values of the squares in an entire row (or column or diagonal), the possibility of a win can be checked. If the product is 18 (3 x 3 x 2), then X can win; if the product is 50 (5 x 5 x 2), then O can win. If a winning row (column or diagonal) is found, the blank square in it can be determined, and the number of that square is returned. (A Python sketch of this function follows the move list below.)

Go(n): makes a move in square n. This procedure sets board[n] to 3 if Turn is odd, or 5 if Turn is even. It also increments Turn by one.

The algorithm has a built-in strategy for each move: it makes the odd-numbered moves if it plays X, the even-numbered moves if it plays O.
Turn = 1: Go(1) (upper left corner).
Turn = 2: If Board[5] is blank, Go(5), else Go(1).
Turn = 3: If Board[9] is blank, Go(9), else Go(3).
Turn = 4: If Posswin(X) is not 0, then Go(Posswin(X)) [i.e. block the opponent's win], else Go(Make2).
Turn = 5: If Posswin(X) is not 0, then Go(Posswin(X)) [i.e. win], else if Posswin(O) is not 0, then Go(Posswin(O)) [i.e. block the win], else if Board[7] is blank, then Go(7), else Go(3). [To explore other possibilities, if there are any.]
Turn = 6: If Posswin(O) is not 0, then Go(Posswin(O)), else if Posswin(X) is not 0, then Go(Posswin(X)), else Go(Make2).
Turn = 7: If Posswin(X) is not 0, then Go(Posswin(X)), else if Posswin(O) is not 0, then Go(Posswin(O)), else go anywhere that is blank.
Turn = 8: If Posswin(O) is not 0, then Go(Posswin(O)), else if Posswin(X) is not 0, then Go(Posswin(X)), else go anywhere that is blank.
Turn = 9: Same as Turn = 7.
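As mentioned above, the product trick makes Posswin short. A hedged Python sketch (the 1-indexed board with values 2/3/5 follows the description; the code itself is mine):

LINES = [(1, 2, 3), (4, 5, 6), (7, 8, 9),   # rows
         (1, 4, 7), (2, 5, 8), (3, 6, 9),   # columns
         (1, 5, 9), (3, 5, 7)]              # diagonals

def posswin(board, p):
    # board: list of length 10 (index 0 unused) holding 2 (blank), 3 (X), 5 (O).
    # p is 3 for X or 5 for O; returns p's winning square, or 0 if there is none.
    target = p * p * 2                       # 18 for X, 50 for O
    for line in LINES:
        if board[line[0]] * board[line[1]] * board[line[2]] == target:
            for square in line:
                if board[square] == 2:       # the blank square completes the line
                    return square
    return 0

board = [0, 3, 3, 2, 2, 5, 2, 2, 2, 5]       # X on 1 and 2, O on 5 and 9
print(posswin(board, 3))                      # -> 3: X can win by taking square 3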
I have used it. Let me know how you guys feel.
Since you're only dealing with a 3x3 matrix of possible locations, it'd be pretty easy to just write a search through all possibilities without taxing your computing power. For each open space, compute through all the possible outcomes after marking that space (recursively, I'd say), then use the move with the most possibilities of winning.
Optimizing this would be a waste of effort, really. Though some easy ones might be:
Check first for possible wins for the other team, and block the first one you find (if there are 2, the game's over anyway).
Always take the center if it's open (and the previous rule has no candidates).
Take corners ahead of sides (again, if the previous rules are empty).
You can have the AI play itself in some sample games to learn from, using a supervised learning algorithm to help it along.
An attempt without using a play field.

1. Win: complete your double (two in a row).
2. If not possible, don't lose: block the opponent's double.
3. If there's none, check whether you already have a fork (a double double).
4. If not, check whether the opponent has a fork:
   - search the blocking points for a move that makes a double and a fork (an ultimate win);
   - if there's none, search the blocking points for forks, picking the one that gives the opponent the most losing possibilities;
   - if there's none, just block (so as not to lose).
5. If not, search for a move that makes a double and a fork (an ultimate win).
6. If not, search only for forks, picking the one that gives the opponent the most losing possibilities.
7. If not, search only for a double.
8. If not: dead end, tie, play randomly.
9. If none of these apply (it means it's your first move):
   - if it's the first move of the game, give the opponent the most losing possibilities (the algorithm results in corners only, which give the opponent 7 possible losing continuations), or just play randomly to break the boredom;
   - if it's the second move of the game, find only the not-losing points (this gives a few more options), or find the points in this list with the best winning chance (this can be boring, since it results only in all corners, adjacent corners, or the center).

Note: when you have a double and a fork, check whether your double gives the opponent a double; if it does, check whether the point you are then forced to play is included in your fork list.
Rank each of the squares with numeric scores. If a square is taken, move on to the next choice (sorted in descending order by rank). You're going to need to choose a strategy (there are two main ones for going first and three (I think) for second). Technically, you could just program all of the strategies and then choose one at random. That would make for a less predictable opponent.
This answer assumes you understand how to implement the perfect algorithm for P1, and discusses how to achieve wins against ordinary human players, who will make some mistakes more commonly than others.
The game of course should end in a draw if both players play optimally. At a human level, P1 playing in a corner produces wins far more often. For whatever psychological reason, P2 is baited into thinking that playing in the center is not that important, which is unfortunate for them, since it's the only response that does not create a winning game for P1.
If P2 does correctly block in the center, P1 should play the opposite corner, because again, for whatever psychological reason, P2 will prefer the symmetry of playing a corner, which again produces a losing board for them.
For any move P1 may make for the starting move, there is a move P2 may make that will create a win for P1 if both players play optimally thereafter. In that sense P1 may play wherever. The edge moves are weakest in the sense that the largest fraction of possible responses to this move produce a draw, but there are still responses that will create a win for P1.
Empirically (more precisely, anecdotally) the best P1 starting moves seem to be first corner, second center, and last edge.
The next challenge you can add, in person or via a GUI, is not to display the board. A human can definitely remember all the state, but the added challenge leads to a preference for symmetric boards, which take less effort to remember, leading to the mistake I outlined in the first branch.
I'm a lot of fun at parties, I know.
A tic-tac-toe adaptation of the minimax algorithm:
let gameBoard = [
    [null, null, null],
    [null, null, null],
    [null, null, null]
]

const SYMBOLS = {
    X: 'X',
    O: 'O'
}

const RESULT = {
    INCOMPLETE: "incomplete",
    PLAYER_X_WON: SYMBOLS.X,
    PLAYER_O_WON: SYMBOLS.O,
    TIE: "tie"
}
We'll need a function that can check for a result. The function will check for a succession of chars. Whatever the state of the board is, the result is one of four options: incomplete, player X won, player O won, or a tie.
function checkSuccession (line){
    if (line === SYMBOLS.X.repeat(3)) return SYMBOLS.X
    if (line === SYMBOLS.O.repeat(3)) return SYMBOLS.O
    return false
}

function moveCount (board){
    // counts the squares already played
    return board.flat().filter(square => square !== null).length
}

function getResult(board){
    let result = RESULT.INCOMPLETE
    if (moveCount(board) < 5){
        return result   // nobody can have three in a row yet
    }
    let lines = []
    // first we check rows, then columns, then diagonals
    for (var i = 0; i < 3; i++){
        lines.push(board[i].join(''))
    }
    for (var j = 0; j < 3; j++){
        const column = [board[0][j], board[1][j], board[2][j]]
        lines.push(column.join(''))
    }
    const diag1 = [board[0][0], board[1][1], board[2][2]]
    lines.push(diag1.join(''))
    const diag2 = [board[0][2], board[1][1], board[2][0]]
    lines.push(diag2.join(''))
    for (i = 0; i < lines.length; i++){
        const succession = checkSuccession(lines[i])
        if (succession){
            return succession
        }
    }
    // check for a tie
    if (moveCount(board) == 9){
        return RESULT.TIE
    }
    return result
}
Our getBestMove function receives the state of the board and the symbol of the player for whom we want to determine the best possible move. It checks all possible moves with the getResult function: a win gets a score of 1, a loss a score of -1, and a tie a score of 0. If the result is still undetermined, we call getBestMove again with the new state of the board and the opposite symbol; since the next move is the opponent's, their victory is the current player's loss, so that score is negated. In the end every possible move has a score of 1, 0 or -1, so we can sort the moves and return the one with the highest score.
const copyBoard = (board) => board.map(
    row => row.map(square => square)
)

function getAvailableMoves (board) {
    let availableMoves = []
    for (let row = 0; row < 3; row++){
        for (let column = 0; column < 3; column++){
            if (board[row][column] === null){
                availableMoves.push({row, column})
            }
        }
    }
    return availableMoves
}

function applyMove(board, move, symbol) {
    board[move.row][move.column] = symbol
    return board
}
function getBestMove (board, symbol){
    let availableMoves = getAvailableMoves(board)
    let availableMovesAndScores = []
    for (var i = 0; i < availableMoves.length; i++){
        let move = availableMoves[i]
        let newBoard = copyBoard(board)
        newBoard = applyMove(newBoard, move, symbol)
        const result = getResult(newBoard)
        let score
        if (result == RESULT.TIE) { score = 0 }
        else if (result == symbol) {
            score = 1
        }
        else {
            // game not decided yet: evaluate from the opponent's point of view
            let otherSymbol = (symbol == SYMBOLS.X) ? SYMBOLS.O : SYMBOLS.X
            const nextMove = getBestMove(newBoard, otherSymbol)
            score = -(nextMove.score)
        }
        if (score === 1) // performance optimization: a win cannot be beaten
            return {move, score}
        availableMovesAndScores.push({move, score})
    }
    availableMovesAndScores.sort((moveA, moveB) => {
        return moveB.score - moveA.score
    })
    return availableMovesAndScores[0]
}
Algorithm in action, GitHub, explaining the process in more detail.