Consider the following scenario:
We have a number of sequential building blocks (e.g. 12 building blocks, ordered from 1 to 12), distributed randomly (but not necessarily equally) on a number of builders (e.g. 3 builders).
The builders are required to work in order and start building the wall from block number 4, both ways; down to block number 1 or up to block 12.
No builder knows which block numbers the other builders hold, though he does know how many blocks each of them has.
Builders try to finish first, partly by preventing others from making their moves; however, a builder may not pass and must place a block whenever he can.
Any builder who finishes all his blocks first will be granted the highest reward, then the second, and so on...
Can we predict who will finish first, second and last? Is there any algorithm the builders should follow to get their work done first?
The following is a practical example of the problem:
Let us say:
builder 1 has: b2 b5 b8 b9
builder 2 has: b1 b11
builder 3 has: b3 b4 b6 b7 b10 b12
builder 1, and builder 2 will have to wait for builder 3 to place b4.
builder 3 will place b4, and gives his place back to builder 1.
wall: b4
builder 1 will have to put up b5, as there are no other options for him.
wall: b4 b5
builder 2 follows, but he can't place any of his blocks; he will have to wait for b2 or b10.
builder 3 now has two options, b3 or b6; he must choose the one that helps him finish first.
wall: b4 b5 b6
builder 1 has nothing to do, he'll pass his turn to builder 2.
builder 2 is still waiting for the installation of b2 or b10.
builder 3 will have to place b7.
wall: b4 b5 b6 b7
builder 1 now will place b8.
wall: b4 b5 b6 b7 b8
builder 2 is still waiting patiently...
builder 3 is forced to put down b3, as there is no other option; he was hoping another builder would place b9... but his hope faded!
wall: b3 b4 b5 b6 b7 b8
builder 1 is totally in charge now, and feeling very happy! but also torn! after thinking, he decides on b2, since keeping b9 lets him keep preventing a larger number of blocks, which in turn increases his chances.
wall: b2 b3 b4 b5 b6 b7 b8
builder 2 says: finally! some action! and places b1.
wall: b1 b2 b3 b4 b5 b6 b7 b8
builder 3 has lost all hope of finishing first!
builder 1 now will install his final block and go home with the biggest reward!
wall: b1 b2 b3 b4 b5 b6 b7 b8 b9
builder 2 will wait...
builder 3 sadly places b10
builder 2 places b11 and goes home with the second reward...
Any known algorithm for solving such problems?
At first glance, a player's strength is a function of the range spanned by his highest and lowest blocks. In your example game, we can see that Builder 1 completely dominates Builder 2.
Builder 1: 2 ----------- 9
Builder 2: 1 ----------------- 11
Builder 3: 3 --------------- 12
Start position: ^^
Since the game starts on b4, the most important pieces are at the high end. For example, Builder 3 has b3, which prevents 2 other moves (b2 and b1); however, this isn't very decisive. Block b3, in its ability to prevent b2 and b1, is only as powerful as b5, which prevents b6 and b7.
The real power lies on the right side of the diagram above. This means that games with the initial starting ranges depicted above will usually finish like this: Builder 1, Builder 2, and then Builder 3.
As for player strategy, here's an admittedly speculative guideline: hold on to your most powerful pieces, meaning those that prevent the largest number of moves by other players. In this strategy, every piece you hold can be assigned a score based on the number of other moves it prevents.
For example, suppose the wall is at b3-b4-b5 and that you hold b2, b6, and b9. You can play either b2 or b6. How do you value your pieces?
b2 score = 1 (prevents b1)
b9 score = 3 (prevents b10, b11, b12)
b6 score = 2 (prevents b7, b8)
Note that b6 does not get credit for preventing b10 and higher, because b9 is doing that job (Matthieu M. also makes this point). In this scenario, you should prefer to play b2 first because it exposes you to the least risk of another player finishing.
Other answers have raised interesting ideas about not wanting to prevent your own progress, suggesting that you should play b6 first. But I don't think there is anything to be gained by accelerating the movement toward b9. You want to delay b9 as long as possible, because it's the piece that gives you the most insurance (from a probabilistic point of view) of preventing other players from finishing.
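The scoring heuristic above can be sketched in a few lines of Python. This is my own illustrative code, not from the simulation mentioned below; it assumes blocks are numbered 1..12 and the wall is a contiguous range [wall_lo, wall_hi]:

```python
def block_scores(hand, wall_lo, wall_hi, n_blocks=12):
    """Score each held block by the number of *other players'* moves it
    prevents while we hold it. A block below the wall blocks everything
    below it; a block above the wall blocks everything above it. Our own
    pieces, and anything a farther-out piece of ours already covers,
    don't count."""
    hand = set(hand)
    scores = {}
    for b in sorted(hand):
        if b < wall_lo:
            blocked = set(range(1, b))                  # everything below b
        elif b > wall_hi:
            blocked = set(range(b + 1, n_blocks + 1))   # everything above b
        else:
            continue                                    # already on the wall
        blocked -= hand                                 # our pieces aren't opponent moves
        for o in hand - {b}:
            if o < b < wall_lo:
                blocked -= set(range(1, o))             # credited to o instead
            elif wall_hi < b < o:
                blocked -= set(range(o + 1, n_blocks + 1))
        scores[b] = len(blocked)
    return scores

# The example above: wall at b3-b4-b5, holding b2, b6, b9.
print(block_scores({2, 6, 9}, 3, 5))  # {2: 1, 6: 2, 9: 3}
```

Note how b6 only gets credit for b7 and b8: b9 is ours, and b10..b12 are b9's job.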
Update:
I wrote a Perl simulation to test a few simple player strategies. I'm starting to wonder whether player strategy is irrelevant. I tried the following: (a) pick the highest block; (b) pick the lowest block; and (c) my recommended strategy of picking the safest block (the one that prevents the most moves by others). I evaluated the strategies by awarding 3 points for 1st place, 2 for 2nd, and 1 for 3rd. None of these strategies performed consistently better (or worse) than random selection.
Certainly, one can concoct scenarios where a player's choice affects the outcome. For example, if the blocks are distributed like this, player 3 will get either 1st or 2nd place.
block: b1 b2 b3 b4 b5 b6 b7 b8 b9 b10 b11 b12
owner:  2  1  3  1  3  2  2  2  2   2   2   2
However, from a probabilistic point of view, this variation in outcome can be simplified to the following: player 3 will win unless he picks the block adjacent to the last remaining block of a player who has only one block left. In other words, the precise outcome is a coin toss.
So here's the question: Can anyone provide a scenario with an outcome that is neither foreordained nor a coin toss? I tried for about 15 minutes and then got bored.
This is a one-suited variant of the card game Sevens - it also goes by other names; I have heard it widely called Fan Tan.
You might like to search the web for algorithms for that.
p.s. This smells like a homework assignment. It is considered polite to use the "homework" tag in such circumstances.
#FM is right - the more of your opponents' pieces you block, the better the move. However, there is another part to the strategy that is not being considered there.
Consider if you have B3, B7, and B11. Suppose that B3 and B7 are currently both legal moves. (You are in a reasonably good position. Because you have neither B12 nor B1, you cannot come third.)
Choosing B3 means that you are only opening up B1 and B2, so it is the best move under FM's strategy.
However, by not placing B7, you are delaying the eventual play of B10, which is necessary for you to win. B7 is probably a better move.
Since I don't have all the details yet, let's start with the (reasonable) assumption that if you can play, then you must. That conveniently prevents the game from getting stuck.
Because of the rules, you have 0, 1, or 2 possible moves at any time. You only get to choose when you are in a two-move situation.
1. Decision Tree
Like many games, the easiest way to see what happens is to trace a tree of all possible moves and then explore this tree to make your decision. Since there is not much decision taking place, the tree should not be that big.
For example, consider that we are in the state:
wall = [3, ..., 8]
b1 = [2,9]
b2 = [1,11]
b3 = [10,12]
And it's b1's turn to play.
b1[2] -> b2[1] -> b3[] -> b1[9] (1st) -> b3[10] -> b2[11] (2nd) -> b3[12]
or
b1[9] -> b2[] -> b3[10] -> b1[2] (1st) -> b2[1] -> b3[] -> b2[11] (2nd) -> b3[12]
or
b2[11] -> b3[12] (2nd) -> b2[1]
So basically we have 2 choices in this part of the tree:
b1 gets to choose between 2 and 9
b2 gets to choose between 1 and 11
We can summarize the consequences of a choice by listing the positions the players will get; obviously, in an unbiased game each player chooses so as to get the best position.
So, let's express a reduced version of the tree (where we only show the choices):
b1[2] -> [1,2,3]
b1[9] -> b2[1] -> [1,2,3]
b1[9] -> b2[11] -> [1,3,2]
Now we can apply a reduced view, based on a given player.
For b1, the tree looks like:
[2,9] -> [1,1] (both are equivalent)
For b2, it looks like:
[1,11] -> [2,3]
For b3, there is never a choice...
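The exhaustive exploration above can be sketched briefly. This is my own illustrative Python, not code from the answer: hands are frozensets, players are indexed 0..2 (standing for b1..b3), and the function returns every finishing order reachable from a position, under the assumption that a player must play when he can:

```python
def outcomes(hands, lo, hi, turn, order=()):
    """All finishing orders (tuples of player indices, first finisher first)
    reachable from this position, assuming a player must play if he can."""
    if all(not h for h in hands):
        return {order}
    hand = hands[turn]
    nxt = (turn + 1) % len(hands)
    moves = [b for b in hand if b in (lo - 1, hi + 1)]
    if not moves:                       # blocked (or already done): pass
        return outcomes(hands, lo, hi, nxt, order)
    results = set()
    for b in moves:
        new_hand = hand - {b}
        new_hands = tuple(new_hand if i == turn else h
                          for i, h in enumerate(hands))
        # A player joins the finishing order the moment his hand empties.
        new_order = order + (turn,) if not new_hand else order
        results |= outcomes(new_hands, min(lo, b), max(hi, b), nxt, new_order)
    return results

# The state above: wall = [3..8], b1 = {2,9}, b2 = {1,11}, b3 = {10,12}.
hands = (frozenset({2, 9}), frozenset({1, 11}), frozenset({10, 12}))
print(sorted(outcomes(hands, 3, 8, 0)))  # [(0, 1, 2), (0, 2, 1)]
```

The two results reproduce the reduced tree: b1 always finishes first, and b2 ends up 2nd or 3rd depending on the choices taken.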
2. Possible outcomes
Of course, the players don't see this tree, since they don't know what the others have, but it gives you, as an observer, the possibility of studying the various possible outcomes at different stages of the game.
Note for example that in this subtree, we have 2 choices, the second being conditional on the first. If we let Pi(x in {x,y}) express the probability that player i chooses x when facing a choice between x and y, then we can express the probabilities of each outcome:
P([1,2,3]) = P1(2 in {2,9}) + P1(9 in {2,9}) * P2(1 in {1,11})
P([1,3,2]) = P1(9 in {2,9}) * P2(11 in {1,11})
3. Players Strategy
From what we can see here, it appears that the best strategy is to try and keep as many pieces blocked as possible: i.e., when choosing between 1 and 11, you'd better play 1, because playing it blocks no one, while keeping 11 in hand keeps 1 piece (the 12) blocked. However, this only works when you are down to 2 pieces.
We need something a bit more generic for the case when you actually have a list of pieces.
For example, if you hold {3, 9, 11} and the wall is [4, ..., 8], which should you play? Apparently 3 blocks fewer pieces than 9 does, but 9 blocks one of your own pieces!
Personally, I would go for 9 in this case, because I will need to place my 11 anyway, and 11 blocks fewer pieces than 3 does (with 3 in hand I have a chance of finishing first; with 11 it's less likely...).
I think I can give a score to each piece in my hand, depending on the number of pieces they block:
3 -> 2
9 -> 1
11 -> 1
Why is 9 given only 1? Because it only blocks 10, since I hold the 11 :)
Then I would play the piece with the lowest score first (if I have a choice).
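This rule can be sketched directly. A hedged Python illustration of my own (names are mine, not from the question): score each piece, then play the lowest-scoring playable one:

```python
def piece_score(b, hand, lo, hi, n=12):
    """Pieces kept blocked by holding b, not counting our own pieces or
    pieces already covered by another piece of ours farther from the wall."""
    hand = set(hand)
    blocked = set(range(1, b)) if b < lo else set(range(b + 1, n + 1))
    blocked -= hand
    for o in hand - {b}:
        if o < b < lo:
            blocked -= set(range(1, o))
        elif hi < b < o:
            blocked -= set(range(o + 1, n + 1))
    return len(blocked)

def choose_move(hand, lo, hi):
    """Among currently playable pieces, play the lowest-scoring one."""
    playable = [b for b in hand if b in (lo - 1, hi + 1)]
    return min(playable, key=lambda b: piece_score(b, hand, lo, hi), default=None)

hand = {3, 9, 11}                                             # wall is [4, ..., 8]
print({b: piece_score(b, hand, 4, 8) for b in sorted(hand)})  # {3: 2, 9: 1, 11: 1}
print(choose_move(hand, 4, 8))                                # 9
```

With the wall at [4, ..., 8], both 3 and 9 are playable; 9 scores lower, so it is played first, exactly as argued above.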
Related
We have
4 different storage spaces, and
5 different boxes (named b1, b2, b3, b4 and b5) that we want to put into these storage spaces.
Each storage space can be filled with only one unique box at a time.
*But B5 has a special condition which allows it to be used in multiple storage spaces at the same time.
Each box has a specific weight assigned to it (b1:4, b2:6, b3:5, b4:6 and b5:5).
Each box has a specific probability of being placed into a storage space (b1:1, b2:0.6, b3:1, b4:0.8, b5:1).
We try to get the probable contents of the storage spaces, and their probabilities, given that the total weight is 22 (which we will use as an evidence mechanism).
For example :
SS1 - b2(6)
SS2 - b5(5)
SS3 - b4(6)
SS4 - b5(5)
Where the total weight will be 22
And the probability of this content.
In my code below I get the answer for one of the probable contents as totalboxweight(b2, b5, b4, b5, 22), which is okay for me. It means box b2 is in the first storage space, b5 is in the second storage space, and so on.
Here is my code so far, I add comments also to explain my intentions
But I need help updating it to add the probabilities and apply some of the conditions I talked about.
box(b1,4).
box(b2,6).
box(b3,5).
box(b4,6).
box(b5,5). % I tried to define the boxes, but I don't know how to assign probabilities to them in this format

total(D1,D2,D3,D4,Sum) :-
    Sum is D1+D2+D3+D4. % I defined the sum calculation

totalboxweight(A,B,C,D,Sum) :-
    box(A,D1), box(B,D2), box(C,D3), box(D,D4),
    total(D1,D2,D3,D4,Sum). % I sum up all the weights

sumtotal(Sum) :-
    box(A,D1), box(B,D2), box(C,D3), box(D,D4),
    total(D1,D2,D3,D4,Sum). % I defined this one to use it as evidence

evidence(sumtotal(22),true). % we know the total weight is 22
query(totalboxweight(D1,D2,D3,D4,22)). % what are the probable contents?
I am using an online Problog editor to test my code. Here is the link.
And I am trying to do it in Problog not Prolog, so the syntax is different.
Right now, with the help of the answers, I have overcome some issues; the problems I still have:
I couldn't apply probabilities
I couldn't apply the condition ( Each storage space can be filled with only one unique box at a time. But B5 has a special condition which allows to be used in multiple storage spaces at the same time.)
Thank you in advance.
I'm trying to draw a Rete network for a sample rule which has no binding between variables in different patterns. I know that the beta network is used to make sure that variables bound in different patterns are consistent.
(defrule R1
(type1 c1 c2)
(type2 c3)
=>
)
(defrule R2
(type2 c3)
(type3 c4 v1)
(type4 c5 v1)
=>
)
In R1, there are no variables bound across the two patterns; how should I combine their results in the Rete network?
In R2, two patterns share a bound variable while the third does not. How do I combine the three patterns in the network?
I searched for Rete network example for such a situation but didn't find any. I tried to draw the network and below is my network. Is it right?
UPDATE: New network based on Gary's answer
Thanks
Beta nodes store partial matches regardless of whether there are variables specified in the patterns that need to be checked for consistency. The variable bindings just serve to filter the partial matches that are stored in the beta memory. If there are no variables, then all generated partial matches will be stored in the beta memories.
Your diagram should look like this:
a1 a2 a3 a4
\ / \ / /
b1 b2 /
| \ /
r1 b3
|
r2
Say I have 20 frames on a 4-node H2O cluster: a1..a5, b1..b5, c1..c5, d1..d5. And I want to combine them into one big frame, from which I will build a model.
Is it better to combine sets of columns, then combine rows:
h2o.rbind(
h2o.cbind(a1, b1, c1, d1),
h2o.cbind(a2, b2, c2, d2),
h2o.cbind(a3, b3, c3, d3),
h2o.cbind(a4, b4, c4, d4),
h2o.cbind(a5, b5, c5, d5)
)
Or, to combine the rows first, then the columns:
h2o.cbind(
h2o.rbind(a1, a2, a3, a4, a5),
h2o.rbind(b1, b2, b3, b4, b5),
h2o.rbind(c1, c2, c3, c4, c5),
h2o.rbind(d1, d2, d3, d4, d5)
)
For the sake of argument, 1/2/3/4/5 might each represent one month of data, which is why they got imported separately. And a/b/c/d are different sets of features, which again explains why they were imported separately. Let's say, a1..a5 have 1728 columns, b1..b5 have 113 columns, c1..c5 have 360 columns, and d1..d5 is a single column (the answer I'll be modelling). (Though I suspect, as H2O is a column database, that the relative number of columns in a/b/c/d does not matter?)
By "better" I mean quicker, but if there is a memory-usage difference in one or the other, that would also be good to know: I'm mainly interested in the Big Data case, where the combined frame is big enough that I wouldn't be able to fit it in the memory of just a single node.
I'm now fairly sure the answer is: doesn't matter.
Point 1: The two examples in the question are identical. This is because both h2o.cbind() and h2o.rbind() use lazy evaluation. So either way it returns immediately, and nothing happens until you perform some operation. (I've been using nrow() or ncol() to force creation of the new frame - it also allows me to check that I've got what I expected.)
Point 2: I've been informed by an H2O developer that there is no difference (CPU or memory), because either way the data will be copied.
Point 3: I've not noticed any significant speed difference on some reasonably big cbind/rbinds, with final frame size of 17GB (compressed size). This has not been rigorous, but I've never waited more than 30 to 40 seconds for the nrow() command to complete the copy.
Bonus Tip: Following on from point 1, it is essential you call nrow() (or whatever) to force the copy to happen, before you delete the constituent parts. If you do all = h2o.rbind(parts), then h2o.rm(parts), then nrow(all), you get an error (and your data is lost and needs to be imported again).
I am using the Levenshtein distance to find similar strings after OCR. However, for some strings the edit distance is the same, although the visual appearance is obviously different.
For example the string Co will return these matches:
CY (1)
CZ (1)
Ca (1)
Considering that Co is the result from an OCR engine, Ca would be a more likely match than the other ones. Therefore, after calculating the Levenshtein distance, I'd like to refine the query result by ordering by visual similarity. To calculate this similarity, I'd like to use a standard sans-serif font, like Arial.
Is there a library I can use for this purpose, or how could I implement this myself? Alternatively, are there any string similarity algorithms that are more accurate than the Levenshtein distance, which I could use in addition?
If you're looking for a table that will allow you to calculate a 'replacement cost' of sorts based on visual similarity, I've been searching for such a thing for a while with little success, so I started looking at it as a new problem. I'm not working with OCR, but I am looking for a way to limit the search parameters in a probabilistic search for mis-typed characters. Since they are mis-typed because a human has confused the characters visually, the same principle should apply to you.
My approach was to categorize letters based on their stroke components in an 8-bit field. The bits are, from left to right:
7: Left Vertical
6: Center Vertical
5: Right Vertical
4: Top Horizontal
3: Middle Horizontal
2: Bottom Horizontal
1: Top-left to bottom-right stroke
0: Bottom-left to top-right stroke
For lower-case characters, descenders on the left are recorded in bit 1, and descenders on the right in bit 0, as diagonals.
With that scheme, I came up with the following values which attempt to rank the characters according to visual similarity.
m: 11110000: F0
g: 10111101: BD
S,B,G,a,e,s: 10111100: BC
R,p: 10111010: BA
q: 10111001: B9
P: 10111000: B8
Q: 10110110: B6
D,O,o: 10110100: B4
n: 10110000: B0
b,h,d: 10101100: AC
H: 10101000: A8
U,u: 10100100: A4
M,W,w: 10100011: A3
N: 10100010: A2
E: 10011100: 9C
F,f: 10011000: 98
C,c: 10010100: 94
r: 10010000: 90
L: 10000100: 84
K,k: 10000011: 83
T: 01010000: 50
t: 01001000: 48
J,j: 01000100: 44
Y: 01000011: 43
I,l,i: 01000000: 40
Z,z: 00010101: 15
A: 00001011: 0B
y: 00000101: 05
V,v,X,x: 00000011: 03
This, as it stands, is too primitive for my purposes and requires more work. You may be able to use it, however, or perhaps adapt it to suit your purposes. The scheme is fairly simple. This ranking is for a mono-space font. If you are using a sans-serif font, then you likely have to re-work the values.
This table is a hybrid table including all characters, lower- and upper-case, but if you split it into upper-case only and lower-case only it might prove more effective, and that would also allow you to apply specific casing penalties.
Keep in mind that this is early experimentation. If you see a way to improve it (for example by changing the bit-sequencing) by all means feel free to do so.
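One cheap way to turn these bitmasks into a distance is to XOR two codes and count the differing bits (a Hamming distance over stroke components). A small sketch under that assumption; STROKES copies only a few rows of the table above and would need to be extended to all characters:

```python
# Visual distance as the Hamming distance between stroke bitmasks.
# STROKES copies a few entries from the table above; extend as needed.
STROKES = {
    'D': 0xB4, 'O': 0xB4, 'o': 0xB4,
    'a': 0xBC, 'e': 0xBC,
    'C': 0x94, 'c': 0x94,
    'Y': 0x43, 'Z': 0x15,
}

def stroke_distance(c1, c2):
    """Number of stroke components present in one character but not the other."""
    return bin(STROKES[c1] ^ STROKES[c2]).count('1')

# 'o' and 'a' share most strokes, while 'o' and 'Y' share almost none --
# which is why an OCR read of "Co" should prefer "Ca" over "CY":
print(stroke_distance('o', 'a'))  # 1
print(stroke_distance('o', 'Y'))  # 7
```

The popcount of the XOR directly measures how many stroke components the two glyphs disagree on, so it drops straight into a replacement-cost function.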
In general I've seen Damerau-Levenshtein used much more often than plain Levenshtein; it basically adds the transposition operation. It is supposed to account for more than 80% of human misspellings, so you should certainly consider it.
As to your specific problem, you could easily modify the algorithm to increase the cost when substituting a capital letter with a non-capital letter, and vice versa, to obtain something like this:
dist(Co, CY) = 2
dist(Co, CZ) = 2
dist(Co, Ca) = 1
So in your distance function just have a different cost for replacing different pairs of characters.
That is, rather than a replacement adding a set cost of one or two irrespective of the characters involved, instead have a replace cost function that returns something between 0.0 and 2.0 for the cost of replacing certain characters in certain contexts.
At each step of the memoization, just call this cost function:
cost[x][y] = min(
cost[x-1][y] + 1, // insert
cost[x][y-1] + 1, // delete,
cost[x-1][y-1] + cost_to_replace(a[x],b[y]) // replace
);
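Here's a runnable Python version of that recurrence, with the substitution cost delegated to a function. The `toy_cost` function is purely illustrative (same letter ignoring case is cheap); in practice you would plug in a visual-similarity table like the one discussed above:

```python
def edit_distance(a, b, replace_cost):
    """Levenshtein DP where the substitution cost is delegated
    to replace_cost(c1, c2), which may return any value in [0.0, 2.0]."""
    m, n = len(a), len(b)
    cost = [[0.0] * (n + 1) for _ in range(m + 1)]
    for x in range(m + 1):
        cost[x][0] = x                  # delete all of a[:x]
    for y in range(n + 1):
        cost[0][y] = y                  # insert all of b[:y]
    for x in range(1, m + 1):
        for y in range(1, n + 1):
            cost[x][y] = min(
                cost[x - 1][y] + 1,                                     # delete
                cost[x][y - 1] + 1,                                     # insert
                cost[x - 1][y - 1] + replace_cost(a[x - 1], b[y - 1]),  # replace
            )
    return cost[m][n]

# Toy cost: identical is free, same letter ignoring case is cheap (0.25),
# anything else costs the full 1.0.
def toy_cost(c1, c2):
    if c1 == c2:
        return 0.0
    return 0.25 if c1.lower() == c2.lower() else 1.0

print(edit_distance("Co", "CO", toy_cost))  # 0.25
print(edit_distance("Co", "CY", toy_cost))  # 1.0
```

With a cost table derived from visual similarity instead of `toy_cost`, "Ca" would rank ahead of "CY" and "CZ" for the OCR read "Co".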
Here is my full Edit Distance implementation, just swap the replace_cost constant for a replace_cost function as shown:
https://codereview.stackexchange.com/questions/10130/edit-distance-between-two-strings
In terms of implementing the cost_to_replace function, you need a matrix of characters with costs based on how similar the characters are. There may be such a table floating around, or you could build it yourself by writing each pair of characters to a pair of images and then comparing the images for similarity using standard vision techniques.
Alternatively, you could use a supervised method whereby you correct several OCR misreads and note the occurrences in a table that then becomes the above cost table (i.e., if the OCR gets it wrong, the characters must be similar).
I don't really care about the language used.
I have a large database of around 84.1k entries for various aeronautical coordinates around the world, they are formatted like this:
A1 023 UBL 15.245197 104.865917
A1 024 BUTRA 15.418278 105.596083
A1 025 PAPRA 15.766667 107.183333
A1 026 BATEM 15.931389 107.765556
A1 027 DAN 16.052778 108.198333
A1 028 BUNTA 16.833334 109.395000
A1 029 LENKO 17.416667 110.300000
A1 030 IKELA 18.661667 112.245000
A1 031 IDOSI 19.000000 112.500000
A1 032 CH 22.219542 114.030056
The first number is the air route (there are hundreds of these). The second number is the position the coordinate holds in terms of the air route sequence. The third is the name of the fix, 4th and 5th are the coordinates themselves.
A better way to describe it would be a highway. Let's say the A1 is a highway. UBL, BUTRA, PAPRA, etc... are all the exits. 023, 024, 025 is the order in which you will encounter these exits (I will see UBL after 22 exits, as it is the 23rd; then BUTRA, the 24th; then PAPRA, the 25th).
However, those exits lead to new highways rather than cities. For example, the UBL exit leads onto
A1 023 UBL 15.245197 104.865917
G473 006 UBL 15.245197 104.865917
R470 001 UBL 15.245197 104.865917
W1 018 UBL 15.245197 104.865917
W4 031 UBL 15.245197 104.865917
W5 013 UBL 15.245197 104.865917
My ultimate goal is to use these points to find the shortest distance between 2 cities along these air routes. However, that is not my problem; I can figure that out. What I'm not sure of is which structure to use to hold this thing. It was my programming teacher who first suggested that I'd need some sort of structure to organize the data.
I'm thinking, since I'm going to have the first and last points available, of searching through the list, grabbing all the possible "highways" that each point leads onto, using something like A* to find the shortest path, and restricting the number of branches with some distance limits. However, as I said, I'm unclear about which data structure to use.
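One structure that fits the "exits onto other highways" description is a plain adjacency list keyed by fix name: consecutive sequence numbers on the same route become edges, and a fix like UBL that appears on several routes automatically collapses into one node. A hedged Python sketch (function names and the bidirectional-route assumption are mine):

```python
import math
from collections import defaultdict

def haversine(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two lat/lon points."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = p2 - p1, math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 6371.0 * 2 * math.asin(math.sqrt(a))

def build_graph(lines):
    """Adjacency list: fix name -> {neighbouring fix: distance in km}."""
    routes = defaultdict(list)          # route id -> [(seq, name, lat, lon)]
    coords = {}
    for line in lines:
        route, seq, name, lat, lon = line.split()
        routes[route].append((int(seq), name, float(lat), float(lon)))
        coords[name] = (float(lat), float(lon))
    graph = defaultdict(dict)
    for pts in routes.values():
        pts.sort()                      # order fixes by their sequence number
        for (_, a, la1, lo1), (_, b, la2, lo2) in zip(pts, pts[1:]):
            d = haversine(la1, lo1, la2, lo2)
            graph[a][b] = graph[b][a] = d   # assumes routes are flyable both ways
    return graph, coords

data = """A1 023 UBL 15.245197 104.865917
A1 024 BUTRA 15.418278 105.596083
A1 025 PAPRA 15.766667 107.183333""".splitlines()

graph, coords = build_graph(data)
print(sorted(graph['BUTRA']))   # ['PAPRA', 'UBL']
```

The `coords` dict is handy later: the straight-line haversine distance to the destination makes an admissible A* heuristic over this graph.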
Any help appreciated.
You should use a spatial data structure like the PM1 Quadtree.
You should also evaluate it against the PM2 Quadtree and PM3 Quadtree to see which one fits better for your constraints.