I'm implementing a game called Neutreeko (5x5 board, each player has three pawns, and the game ends when one player's pawns form a connected orthogonal or diagonal line) and am currently thinking about the best way to check whether the game has ended. I'm storing the state of the board in a one-dimensional array, and so far I've only come up with the brute-force approach in which I check each row, column and every diagonal line of length 3 to 5 until I find a hit. Is there a better way of detecting the end of the game in such a scenario?
If you store each player's pawns as a bitmap (with position i, j stored in bit i + j*5), you can do the checks quickly.
For example,
x & (x >> 1) & (x >> 2) & 0x739ce7
is non-zero if there's a horizontal row of three.
x & (x >> 5) & (x >> 10)
is non-zero if there's a vertical row of three.
x & (x >> 6) & (x >> 12) & 0x1ce7
is non-zero if there's a diagonal row of three (on a diagonal like (0,0),(1,1),(2,2)).
x & (x >> 4) & (x >> 8) & 0x739c
is non-zero if there's a diagonal row of three (on a diagonal like (2,0),(1,1),(0,2)).
These kinds of bitmask checks are very common in boardgame position evaluation.
I want to start by saying: write what is easy to read, and don't chase micro-optimisations that you won't notice at run time.
That said, this is how I'd do it:
It would be better to check the other pawns' positions relative to the first one, rather than brute forcing the entire board.
Further, since the board is square, you can work out positions easily in terms of index offsets: lines are made up of steps of +1, +4, +5 and +6.
There are no negative offsets to check, because scanning in order means there are no pawns before the first hit.
[ 0][ 1][ 2][ 3][ 4]
[ 5][ 6][ 7][ 8][ 9]
[10][11][12][13][14]
[15][16][17][18][19]
[20][21][22][23][24]
Say the first pawn was on 12.
You would only have to check 13, 16, 17, 18.
Why not 6? Because you've already shown there's no pawn on 0 or 6 (they come earlier in the scan), so checking them would have been pointless.
After all, if you hit one pawn and then failed, you can skip that player's remaining spaces since they haven't got a line!
If the next pawn hits, then recognise which line you're matching, and see if that line continues to the only place available (14, 20, 22, 24 respectively).
Further optimisations could be made by ruling out invalid lines, e.g. from the last column it's pointless to check anything other than a +4 or +5, etc.
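A rough sketch of that idea in JavaScript (names are illustrative: cells is the flat 5x5 board and player marks that player's pawns; a simple column check stands in for the invalid-line filtering mentioned above):

function playerHasLine(cells, player) {
  var first = cells.indexOf(player);                   // first pawn in scan order
  if (first === -1) return false;
  var steps = [1, 4, 5, 6];
  for (var k = 0; k < steps.length; k++) {
    var second = first + steps[k], third = first + 2 * steps[k];
    if (third > 24) continue;
    // reject "lines" that wrap around a board edge
    var c0 = first % 5, c1 = second % 5, c2 = third % 5;
    if (Math.abs(c1 - c0) > 1 || Math.abs(c2 - c1) > 1) continue;
    if (cells[second] === player && cells[third] === player) return true;
  }
  return false;
}

Since each player has exactly three pawns, any winning line must contain the first pawn found in scan order, so checking outward from that single pawn is enough.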
While not as elegant or efficient as Anonymous' answer, one could also use a bitboard like this:
/*
parseInt("111",2) == 7
parseInt("1"
+ "00001"
+ "00001",2) == 1057
parseInt("1"
+ "00010"
+ "00100",2) == 1092
parseInt("100"
+ "00010"
+ "00001",2) == 4161
To win, a player's bit-board must be a multiple of one of the masks;
the multiple must be a power of two; and the player's bits cannot straddle
both edge columns of the board:
*/
function testWinner(board){
  var i = 0,
      masks = [7, 1057, 1092, 4161],
      winner = false;
  while (!winner && masks[i]){
    winner = board % masks[i] == 0                      // a multiple of one of the masks
      && !(board / masks[i] & (board / masks[i] - 1))   // the multiple is a power of two
      && !(board & 17318416 && board & 1082401);        // no bits in both edge columns
    i++;
  }
  return winner;
}
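For example, with the same i + j*5 layout (a quick sanity check, if I've read the masks right):

testWinner(1 | 1 << 5 | 1 << 10); // true  -- three in a row down column 0
testWinner(7 << 3);               // false -- bits 3, 4, 5 straddle the board edge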
Hello fellow programmers!
A week ago I was assigned the task of implementing the Connected Components Algorithm, mainly to extract the number of objects from an image.
You can read more about the algorithm here (https://en.wikipedia.org/wiki/Connected-component_labeling), the variant I am trying to implement is the two pass one.
This is my current attempt:
% ------------------------------------------------------------------------------
% -> Connected Component Labeling (CCL) Algorithm
% -> 4-Connectivity Version
% ------------------------------------------------------------------------------
% ------------------------------------------------------------------------------
% - [ Pre-Scan Code To Get Everything Ready ] -
% ------------------------------------------------------------------------------
% Starting With A Clean (Workspace) And (Command Window).
clear, clc;
% Instead Of Loading An Actual Image, We Are Creating A Matrix Of Zeros And Ones, Representing A Binary Image.
originalImage = [ ...
0 1 0
1 0 1
0 1 0 ];
% Creating A Bigger Matrix That We Will Use To Store The Original Image In Its Middle, This Will Help Us Eliminate Border Checking In The Raster Scan.
binaryImage = zeros(size(originalImage) + 2);
% Copying The Pixels From The Original Image Into The Middle Of The Larger Matrix We Created.
binaryImage(2:size(originalImage, 1) + 1, 2:size(originalImage, 2) + 1) = originalImage;
% Getting The Number Of Rows (Height) And Number Of Columns (Width) Of The Binary Image.
[imageRows, imageColumns] = size(binaryImage);
% Creating A Matrix The Same Dimensions As The Binary Image In Which The Labeling Will Happen.
labeledImage = zeros(imageRows, imageColumns);
% Creating A Label Counter That We Will Use To Assign When We Create New Labels.
labelCounter = 1;
% ------------------------------------------------------------------------------
% - [First Scan: Assigning Labels To Indices] -
% ------------------------------------------------------------------------------
% Going Over Each Row In The Image One By One.
for r = 1:imageRows
    % Going Over Each Column In The Image One By One.
    for c = 1:imageColumns
        % If The Pixel Currently Being Scanned Is A Foreground Pixel (1).
        if (binaryImage(r, c) == 1)
            % Since We Are Working With 4-Connectivity We Only Need To Read 2 Labels, Mainly The (Left) And (Top) Labels.
            % Storing Them In Variables So Referencing Them Is Easier.
            left = labeledImage(r, c - 1);
            top = labeledImage(r - 1, c);
            % If Left == 0 And Top == 0 -> Create A New Label, And Increment The Label Counter, Also Add The Label To The Equivalency List.
            if (left == 0 && top == 0)
                labeledImage(r, c) = labelCounter;
                labelCounter = labelCounter + 1;
            % If Left == 0 And Top >= 1 -> Copy The Top Label.
            elseif (left == 0 && top >= 1)
                labeledImage(r, c) = top;
            % If Left >= 1 And Top == 0 -> Copy The Left Label.
            elseif (left >= 1 && top == 0)
                labeledImage(r, c) = left;
            % If Left >= 1 And Top >= 1 -> Find The Minimum Of The Two And Copy It, Also Add The Equivalent Labels To The Equivalency List, So We Can Fix Them In The Second Scan.
            elseif (left >= 1 && top >= 1)
                labeledImage(r, c) = min(left, top);
            end
        end
    end
end
% ------------------------------------------------------------------------------
% - [Second Scan: Fixing The Connected Pixels But Mismatched Labels] -
% ------------------------------------------------------------------------------
for r = 1:imageRows
    for c = 1:imageColumns
    end
end
The first pass goes through without any issues and I have tried multiple tests on it. However, I have no idea how to implement the second pass, in which I have to fix the equivalent labels in the labeled matrix.
I did do my research online, and the preferred way to do it is to use the union-find (disjoint set) data structure to store the equivalences between the labels.
However, since MATLAB does not provide a union-find data structure, I would have to implement it myself, which seems cumbersome and time-consuming, especially since MATLAB is an interpreted language.
So, I am open to ideas on implementing the second pass without having to use the union-find data structure.
Thanks in advance!
If you don't want to use union-find, then you should really do the flood-fill-each-component algorithm (the other one in Wikipedia).
Union-find is neither difficult nor slow, however, and it's what I would use to solve this problem. To make a simple and fast implementation:
Use a matrix of integers the same shape as your image to represent the sets -- each integer in the matrix represents the set of the corresponding pixel.
An integer x represents a set as follows: if x < 0, then it is a root set of size -x. If x >= 0, then it's a child set with parent set x (i.e., row x/width, column x%width). Initialize all sets to -1 since they all start off as roots with size 1.
Use union-by-size and path compression
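If you do go the union-find route, here is a minimal sketch of that array representation (JavaScript for brevity; the same flat layout ports directly to a MATLAB vector indexed by each pixel's linear index):

// parent[i] < 0  : i is a root; its set has size -parent[i]
// parent[i] >= 0 : parent[i] is the index of i's parent
function makeSets(n) {
  return new Array(n).fill(-1);            // every pixel starts as its own root of size 1
}

function find(parent, i) {
  if (parent[i] < 0) return i;             // i is a root
  var root = find(parent, parent[i]);
  parent[i] = root;                        // path compression
  return root;
}

function union(parent, a, b) {
  a = find(parent, a);
  b = find(parent, b);
  if (a === b) return a;
  if (parent[a] > parent[b]) { var t = a; a = b; b = t; }  // make a the larger root
  parent[a] += parent[b];                  // union by size
  parent[b] = a;                           // b's tree now hangs under a
  return a;
}

During the first scan you would union the current pixel's set with the left and top sets whenever those labels are present; during the second scan, each foreground pixel's final label is simply find of its index (or a compact renumbering of those roots).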
Is there a name for this algorithm? (I've been calling it changeBinary)
DESCRIPTION:
You take a binary string as input.
The first bit of the output is the same as the first bit of the input.
Every bit after that is 0 if the bit at that index of the input string is the same as the bit at the previous index in the input string. Otherwise, it's 1.
For example,
Input: 00011000001010100001001000010011
Output: 00010100001111110001101100011010
Here is a simple javascript implementation:
var changeBinary = function(binaryString){
var output = binaryString[0] === '0' ? '0' : '1';
for (var i = 1; i < binaryString.length; i++){
var nextBit = binaryString[i] === binaryString[i - 1] ? '0' : '1';
output += nextBit;
}
return output;
}
OBSERVATIONS:
First, it seems that if you keep applying the algorithm to a string, it eventually returns to its original value. Second, the number of iterations it takes to do so seems to always be a power of 2 (including 2^0 = 1). For example, if you apply the changeBinary function above 32 times to the string above, it will return to the original value.
Has anyone ever encountered this before, and if so, do you know of any other information about it?
It just seems to me like this is something so simple and basic that someone must have studied it more in depth.
Any feedback would be greatly appreciated.
It may be interesting to know that this is x ^ (x << 1) on a BigInteger (or, if you limit the length of the strings, the same thing but on a fixed-size integer), also describable as clmul(x, 3).
Carryless multiplication, which is essentially just like normal multiplication, but instead of adding the partial products you XOR them, has some fairly nice properties, such as being commutative and associative. The associative property is especially of interest since it allows you to reason easily about what composing your algorithm with itself a couple of times does: for example
changeBinary o changeBinary is clmul(clmul(x, 3), 3) = clmul(x, clmul(3, 3)) = clmul(x, 5)
That it's a carryless multiplication by 3 also explains why it "undoes" itself when applied often enough: the carryless multiplicative inverse of 3 is the number with all bits set, which with 32 bits is 0xffffffff, and which can be formed as clpow(3, 31) (with carryless exponentiation). This also follows from the equivalence of a carryless square with a "bit-spread", which takes a bit string abcd to a0b0c0d; thus clpow(3, 32) = 1, because 5 spreads have spread the bits so far apart that only the original lsb is left over, and the rest no longer fits in a 32-bit number.
And that also gives a faster inversion, because the number with all bits set can be decomposed into a small number of (carryless) factors:
3 x 5 x 17 x 257 x 65537 ...
With a number of factors that is the base two logarithm of the number of bits (rounded up).
Since x ^ (x >> 1) converts a number to Gray Code, I suppose you might call this a "mirrored" Gray Code. The same trick with the factors is used "in the mirror image" to convert a Gray Code back to binary:
x ^= x >> 1 // this is like a "mirror" of x = clmul(x, 3)
x ^= x >> 2 // 5
x ^= x >> 4 // 17
x ^= x >> 8
x ^= x >> 16
Here we just flip the direction of the shift to get:
x ^= x << 1
x ^= x << 2
x ^= x << 4
x ^= x << 8
x ^= x << 16
Which is clmul(x, 0xffffffff) and has also been called PS-XOR(x)
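A quick 32-bit sanity check of that inversion in JavaScript (a sketch; the >>> 0 just keeps the results unsigned, and clmulBy3 is the x ^ (x << 1) form described above):

function clmulBy3(x) {
  return (x ^ (x << 1)) >>> 0;
}

function clmulByAllOnes(x) {   // clmul(x, 0xffffffff), the shift-left cascade above
  x ^= x << 1;                 // clmul by 3
  x ^= x << 2;                 // by 5
  x ^= x << 4;                 // by 17
  x ^= x << 8;                 // by 257
  x ^= x << 16;                // by 65537
  return x >>> 0;
}

var x = 0xDEADBEEF;
clmulByAllOnes(clmulBy3(x)) === x; // true, since clmul(3, 0xffffffff) == 1 (mod 2^32)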
The algorithm you described is an example of Delta Encoding.
I've just started with a Design Analysis and Algorithms course and we've begun with simple algorithms.
There is a division algorithm which I can't make any sense of.
function divide(x,)
Input: 2 integers x and y where y>=1
Output: quotient and remainder of x divided by y
if x=0: return (q,r)=(0,0)
(q,r)=divide(floor (x/2), y)
q=2q, r=2r
if x is odd: r=r+1
if r>=y: r=r-y, q=q+1
return(q,r)
* floor is lower bound
We were supposed to try this algorithm on 110011 % 101 (binary values). I tried it and got a weird answer; I converted the values into decimal and it was still wrong.
So I tried it using simple decimal values instead of binary first.
x=25, y=5
This is what I'm doing
1st: (x, y) = (12, 5)
2nd: (x, y) = (6, 5)
3rd: (x, y) = (3, 5)
4th: (x, y) = (1, 5)
5th: (x, y) = (0, 5)
How will this thing work? Every time I run it, the last value of x will be 0 (the base-case condition), so it will stop and return q=0, r=0.
Can someone guide me where I'm going wrong...
Thanks
I implemented your algorithm (with obvious correction in the arg list) in Ruby:
$ irb
irb(main):001:0> def div(x,y)
irb(main):002:1> return [0,0] if x == 0
irb(main):003:1> q,r = div(x >> 1, y)
irb(main):004:1> q *= 2
irb(main):005:1> r *= 2
irb(main):006:1> r += 1 if x & 1 == 1
irb(main):007:1> if r >= y
irb(main):008:2> r -= y
irb(main):009:2> q += 1
irb(main):010:2> end
irb(main):011:1> [q,r]
irb(main):012:1> end
=> nil
irb(main):013:0> div(25, 5)
=> [5, 0]
irb(main):014:0> div(25, 2)
=> [12, 1]
irb(main):015:0> div(144,12)
=> [12, 0]
irb(main):016:0> div(144,11)
=> [13, 1]
It's working, so you must not be tracking the recursion properly when you're trying to hand-trace it. I find it helpful to write the logic out on a new sheet of paper for each recursive call and place the old sheet of paper on top of a stack of prior calls. When I get to a return statement on the current sheet, wad it up, throw it away, and write the return value in place of the recursive call on the piece of paper on top of the stack. Carry through with the logic on this sheet until you get to another recursive call or a return. Keep repeating this until you run out of sheets on the stack - the return from the last piece of paper is the final answer.
The function has a recursive structure, which might be why it's a bit tricky. I'm assuming there's a typo in your function declaration where divide(x,) should be divide(x, y). Given that the desired result is x/y with the remainder, let's continue. The first line in the function definition says that IF the numerator is 0, return 0 with a remainder of 0. This makes sense: for any y >= 1 and x = 0, x / y = 0 with remainder 0. Then we set the result to a recursive call with half the original numerator and the current denominator. At some point, "half the original numerator" turns into 0 and the base case is reached. There's a bit of computation at the end of each call, after the recursive call returns: because we divided x by 2 on the way down, we multiply q and r by 2 on the way back up, and add 1 to the remainder if x is odd. It's hard to visualize in text alone, so step through it on paper with a given problem.
Mathematically, the division algorithm (it's called that) states that the remainder must be strictly less than the divisor, so less than 5 when you input 25, 5.
Your trace gives 0, 5, which violates that. This might mean the remainder should not be read off when the quotient is 0, or that there needs to be a check on the size of the remainder.
function divide(x,)
Input: 2 integers x and y where y>=1
Output: quotient and remainder of x divided by y
if x=0: return (q,r)=(0,0)
(q,r)=divide(floor (x/2), y)
q=2q, r=2r
if x is odd: r=r+1
if r>=y: r=r-y, q=q+1
return(q,r)
* floor is lower bound
If I remember correctly, this is one of the most basic ways of doing integral division in a simple ALU. It's nice because you can run all the recursive divisions in parallel, since each division is based on just looking at one less bit of the binary.
To understand what this does, simply walk through it on paper, as Chris Zhang suggested. Here's what divide(25,5) looks like:
(x,y)=(25,5)
  divide(12, 5)
    divide(6,5)
      divide(3,5)
        divide(1,5)
          divide(0,5) // x = 0!
            return(0,0)
          (q,r)=(2*0,2*0)
          x is odd, so (q,r)=(0,1)
          r < y
          return(0,1)
        (q,r)=(2*0,2*1)
        x is odd, so (q,r)=(0,3)
        r < y
        return(0,3)
      (q,r)=(2*0,2*3)
      x is even
      r >= y, so (q,r)=(1,1)
      return(1,1)
    (q,r)=(2*1,2*1)
    x is even
    r < y
    return(2,2)
  (q,r)=(2*2,2*2)
  x is odd, so (q,r)=(4,5)
  r >= y, so (q,r)=(5,0)
  return(5,0)
As you can see, it works - it gives you a q of 5 and an r of 0. The part you noticed, that you'll always eventually reach x = 0, is what Chris properly calls "the base case" - the case that stops the recursion and lets the calls unwind.
This algorithm works with any base number for the division and the multiplication. It uses the same principle as the following: "123 / 5 = (100 + 20 + 3) / 5 = 20 + 4 + r3 = 24r3", just done in binary.
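For completeness, a direct JavaScript transcription of the pseudocode (a sketch, with Math.floor standing in for the floor in the pseudocode), applied to the original binary task 110011 / 101:

function divide(x, y) {
  if (x === 0) return [0, 0];                 // base case
  var qr = divide(Math.floor(x / 2), y);      // recurse on x with its last bit dropped
  var q = 2 * qr[0], r = 2 * qr[1];
  if (x % 2 === 1) r += 1;                    // put the dropped bit back into the remainder
  if (r >= y) { r -= y; q += 1; }             // keep the remainder below y
  return [q, r];
}

divide(parseInt("110011", 2), parseInt("101", 2)); // [10, 1], i.e. q = 1010 and r = 1 in binary (51 / 5 = 10 remainder 1)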
This is part of a search function on a website, so I'm trying to find a way to get to the end result as fast as possible.
Have a binary number where digit order matters.
Input Number = 01001
Have a database of other binary numbers all the same length.
01000, 10110, 00000, 11111
I don't know how to describe what I'm doing, so I'm going to show it more visually below.
// Zeros mean nothing & the location of a 1 matters, not the total number of 1's.
input num > 0 1 0 0 1 = 2 possible matches
number[1] > 0 1 0 0 0 = 1 match = 50% match
number[2] > 1 0 1 1 0 = 0 match = 0% match
number[3] > 0 0 0 0 0 = 0 match = 0% match
number[4] > 1 1 1 1 1 = 2 match = 100% match
Now obviously, you could go digit by digit, number by number, and compare them that way (using a loop and whatnot). But I was hoping there might be an algorithm or something that will help, mostly because in the above example I only used 5-digit numbers, whereas I'm going to be routinely comparing around 100,000 numbers with 200 digits each, and that's a lot of calculating.
I usually deal with php and MySQL. But if something spectacular comes up I could always learn.
If it's possible to somehow chop up your bitstrings into integer-size chunks, some elementary boolean arithmetic would do, and that kind of instruction is generally pretty fast:
$matchmask = ~ ($inputval ^ $tomatch) & $inputval
What this does:
the xor determines the bits that are different in the inputval and tomatch
negation gives a value where all bits that are equal in inputval and tomatch are set
ANDing that with inputval leaves set only the bits that are 1 in both inputval and tomatch.
Then count the number of bits set in the result; see How to count the number of set bits in a 32-bit integer? for an optimal solution, easily translated into PHP.
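A minimal sketch of that mask-and-count idea (in JavaScript here, but the PHP version has the same shape; the percentage follows the question's examples by dividing by the number of 1-bits in the input, and the loop-based popcount is just a placeholder for the faster versions linked above):

function popcount(x) {
  var count = 0;
  while (x) { count += x & 1; x >>>= 1; }
  return count;
}

function matchPercent(input, candidate) {
  var matchmask = ~(input ^ candidate) & input;   // bits that are 1 in both numbers
  return 100 * popcount(matchmask) / popcount(input);
}

matchPercent(parseInt("01001", 2), parseInt("01000", 2)); // 50
matchPercent(parseInt("01001", 2), parseInt("11111", 2)); // 100

For the 200-digit numbers you would apply the same mask-and-count per chunk and sum the counts, as suggested above.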
Instead of checking each bit, you could pre-process the input and determine which bits need checking. In the worst case, this devolves into processing each bit, but for a normal distribution, you'll save some processing.
That is, for input
01001, iterate over the database and check only the bit positions that are set in the input (here bits 0 and 3, taking 0 as the index of the LSB). How you get fast bit-shifting and ANDing with the large numbers is on you, however. If you detect more 1s than 0s in the input, you could always invert the input and test for zero to detect coverage.
This will give you a modest improvement, but it's at best a constant-factor reduction of the problem. If most of your inputs are balanced between 0s and 1s, you'll halve the number of required operations; if they're more biased, you'll get better results.
Well, the first thing I can think of is a simple bitwise AND between the two numbers; you can then analyze the result to get the match percentage:
// result is the bitwise AND of the two numbers: result = input & other
if( result >= input ) {
    // 100% match
} else {
    result ^= input;
    /* The number of 1's in result is the number of 1's of "input"
     * that are missing in "result".
     */
}
Of course, you'll need to implement your own AND and XOR function (this will work only for 32 bit integers). Note that it works only with unsigned numbers.
Suppose the input number is called A (so in your example A = 01001) and the other number is x. You'll have a 100% match when x & A == A. Otherwise, for partial matches, the number of 1 bits (of x & A) can be counted like this (taken from Hacker's Delight):
x = (x & 0x55555555) + ((x >> 1) & 0x55555555);
x = (x & 0x33333333) + ((x >> 2) & 0x33333333);
x = (x & 0x0F0F0F0F) + ((x >> 4) & 0x0F0F0F0F);
x = (x & 0x00FF00FF) + ((x >> 8) & 0x00FF00FF);
x = (x & 0x0000FFFF) + ((x >>16) & 0x0000FFFF);
Note this will only work for 32-bit integers.
Let's assume you have a function bit1count, then from what you describe, the "likeness" formula should be:
100.0 / min(bit1count(n1), bit1count(n2)) * bit1count(n1 & n2)
With n1 and n2 being the two numbers and & being the logical and operator.
bit1count can be implemented easily using a loop or, more elegantly, using the algorithm provided in BigBear's answer.
There is actually a BIT_COUNT function in MySQL, so something like this should work:
SELECT 100.0 / IF(BIT_COUNT(n1) < BIT_COUNT(n2), BIT_COUNT(n1), BIT_COUNT(n2)) * BIT_COUNT(n1 & n2) FROM table
How can I have random movements in my cellular automaton model? For example, if a cell holds many more elements than two or more of its neighbouring cells, I'd like to randomly choose a few neighbours to give some elements to. I tried all the code that came to my mind, but my problem is that in Mathematica I have to be sure that, at the same time an element is leaving one cell, it is arriving in another. I thought of doing it using conditions but I am not sure how. Can anyone please help me?
Edit: the code I used so far
My actual code is very complicated, so I will try to explain what I have done with a simpler cellular automaton. I wanted to achieve movements in a Moore neighbourhood. Every cell in my cellular automaton holds more than one individual (or none). I want to make random movements between my cells. I couldn't do this, so I tried the following code, and I used it in my cellular automaton as you can see below.
w[i_, j_] :=
If[(i - 4) > j, -1, If[(i - 4) == j, 0, If[(j - 4) > i, 1, 0]]];
dogs[p_, p1_, p2_, p3_, p4_, p5_, p6_, p7_, p8_] := newp &[
  newp = w[p, p1] + w[p, p2] + w[p, p3] + w[p, p4] + w[p, p5] +
    w[p, p6] + w[p, p7] + w[p, p8]]
This code does produce movements, but it is not exactly what I want: if a cell has 0 individuals in it and its neighbours all have 5, then at the end it has 8 and its neighbours have 4. I don't want that, because I don't want the cell that started with the fewest individuals to end up with more than its neighbours. I want all of them to end up with close values and still have movements, but I don't know how to do that in Mathematica.
A cellular automaton is not particularly complicated, so my first point of advice is to figure out exactly what you want. Then, I recommend you separate the classical transition rules from the "random" aspect you're introducing.
For instance, here is my implementation of Conway's Game of Life:
(* We abbreviate 'nbhd' for neighborhood *)
getNbhd[A_, i_, j_] := A[[i - 1 ;; i + 1, j - 1 ;; j + 1]];
evaluateCell[A_, i_, j_] :=
Module[{nbhd, cell = A[[i, j]], numNeighbors},
(* no man's land edge strategy *)
If[i == 1 || j == 1 || i == Length[A] || j == Length[A[[1]]],
Return[0]];
nbhd = getNbhd[A, i, j];
numNeighbors = Apply[Plus, Flatten[nbhd]];
If[cell == 1 && (numNeighbors - 1 < 2 || numNeighbors - 1 > 3),
Return[0]];
If[cell == 0 && numNeighbors == 3, Return[1]];
Return[cell];
];
evaluateAll[A_] := Table[evaluateCell[A, i, j],
{i, 1, Length[A]}, {j, 1, Length[A[[1]]]}];
After performing evaluateAll, you can search through the matrix for "lonely" cells and move them as you please.
For additional information about how the code works, and to see examples of the code in action, see my blog post on Conway's Life. It includes a Mathematica notebook with the full implementation and plenty of examples.
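To make the separate "random movement" step concrete, here is a rough sketch (in JavaScript for brevity, with illustrative names; the same structure translates to a Table or Do loop in Mathematica). It works on a single copy of the grid, moving at most one individual out of each cell per step, to a randomly chosen Moore neighbour that holds at least two fewer individuals:

// grid is a 2D array of individual counts; returns the grid after one random-movement step
function randomMoveStep(grid) {
  var rows = grid.length, cols = grid[0].length;
  var next = grid.map(function (row) { return row.slice(); });  // work on a copy
  for (var i = 0; i < rows; i++) {
    for (var j = 0; j < cols; j++) {
      // collect Moore neighbours that currently hold at least two fewer individuals
      var poorer = [];
      for (var di = -1; di <= 1; di++) {
        for (var dj = -1; dj <= 1; dj++) {
          if (di === 0 && dj === 0) continue;
          var ni = i + di, nj = j + dj;
          if (ni < 0 || nj < 0 || ni >= rows || nj >= cols) continue;
          if (next[ni][nj] < next[i][j] - 1) poorer.push([ni, nj]);
        }
      }
      if (next[i][j] > 0 && poorer.length > 0) {
        var pick = poorer[Math.floor(Math.random() * poorer.length)];
        next[i][j] -= 1;              // the individual leaves this cell...
        next[pick[0]][pick[1]] += 1;  // ...and arrives at the chosen neighbour
      }
    }
  }
  return next;
}

Each move decrements one cell and increments another in the same array, so the total number of individuals is conserved by construction, and because a move only happens when the donor holds at least two more than the receiver, the donor can never end up below the cell it just gave to. The in-place, sequential update is what prevents the 0-versus-5 overshoot described in the question; the price is a slight dependence on scan order, which you can remove by visiting the cells in random order.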