Rete network without beta network? - clips

I'm trying to draw a Rete network for a sample rule which has no bindings between variables in different patterns. I know that the beta network is used to make sure that variables bound in different patterns are consistent.
(defrule R1
   (type1 c1 c2)
   (type2 c3)
   =>
)
(defrule R2
   (type2 c3)
   (type3 c4 ?v1)
   (type4 c5 ?v1)
   =>
)
In R1, there are no variables bound across the two patterns; how should I combine their results in the Rete network?
In R2, two of the patterns share a bound variable (?v1) while the third does not. How should the three patterns be combined in the network?
I searched for a Rete network example covering this situation but didn't find any. I tried to draw the network, and below is my attempt. Is it right?
UPDATE: New network based on Gary's answer
Thanks

Beta nodes store partial matches regardless of whether the patterns contain variables that need to be checked for consistency. The variable bindings just serve to filter the partial matches that are stored in the beta memory. If there are no variables, then all generated partial matches will be stored in the beta memories.
Your diagram should look like this:
a1   a2   a3   a4
 \  / \  /    /
  b1   b2    /
  |      \  /
  r1      b3
          |
          r2
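
To make the first point concrete, here is a minimal OCaml sketch of a beta join (purely illustrative, not CLIPS internals; the fact and token representations are invented for the example). The join takes a list of consistency tests; for R1 that list is empty.

(* A fact (WME) is just a list of symbols, e.g. ["type1"; "c1"; "c2"];
   a token is a partial match: one fact per pattern matched so far. *)
type fact = string list
type token = fact list

(* Join the tokens of the left (beta) memory with the facts of the right
   (alpha) memory, keeping the pairings that pass every consistency test. *)
let join (left : token list) (right : fact list)
         (tests : (token -> fact -> bool) list) : token list =
  List.concat_map
    (fun tok ->
       right
       |> List.filter (fun f -> List.for_all (fun test -> test tok f) tests)
       |> List.map (fun f -> tok @ [f]))
    left

With tests = [], List.for_all always succeeds, so the b1 node above stores every pairing of a type1 fact with a type2 fact, i.e. the full cross product.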

Related

Mutual exclusivity in Problog

We have
4 different storage spaces, and
5 different boxes (named b1, b2, b3, b4 and b5) to put in these storage spaces.
Each storage space can hold only one box at a time.
But b5 has a special condition which allows it to be used in multiple storage spaces at the same time.
Each box has a specific weight assigned to it (b1: 4, b2: 6, b3: 5, b4: 6, b5: 5).
Each box has a specific probability of being placed in a storage space (b1: 1, b2: 0.6, b3: 1, b4: 0.8, b5: 1).
We want to find the probable contents of the storage spaces, and their probabilities, given that the total weight is 22 (we will use this as evidence).
For example :
SS1 - b2(6)
SS2 - b5(5)
SS3 - b4(6)
SS4 - b5(5)
where the total weight is 22, together with the probability of this content.
In my code below, I get the answer for one of the probable contents as totalboxweight(b2, b5, b4, b5, 22), which is okay for me. It means box b2 is in the first storage space, b5 is in the second storage space, and so on.
Here is my code so far; I added comments to explain my intentions.
I need help updating it to add the probabilities and to apply some of the conditions I talked about.
box(b1,4).
box(b2,6).
box(b3,5).
box(b4,6).
box(b5,5). % I tried to define the boxes, but I don't know how to assign probabilities to them in this format
total(D1,D2,D3,D4,Sum) :-
    Sum is D1+D2+D3+D4. % I defined the sum calculation
totalboxweight(A,B,C,D,Sum) :-
    box(A,D1), box(B,D2), box(C,D3), box(D,D4),
    total(D1,D2,D3,D4,Sum). % I sum up all the weights
sumtotal(Sum) :-
    box(A,D1), box(B,D2), box(C,D3), box(D,D4),
    total(D1,D2,D3,D4,Sum). % I defined this one to use as evidence
evidence(sumtotal(22),true). % if we know the total weight is 22
query(totalboxweight(D1,D2,D3,D4,22)). % what are the probable contents
I am using an online Problog editor to test my code. Here is the link.
And I am trying to do it in Problog, not Prolog, so the syntax is different.
Right now, with the help of the answers, I have overcome some issues. The problems I still have:
I couldn't apply probabilities
I couldn't apply the condition ( Each storage space can be filled with only one unique box at a time. But B5 has a special condition which allows to be used in multiple storage spaces at the same time.)
Thank you in advance.

cbind before rbind, or rbind before cbind?

Say I have 20 frames on a 4-node H2O cluster: a1..a5, b1..b5, c1..c5, d1..d5. And I want to combine them into one big frame, from which I will build a model.
Is it better to combine sets of columns, then combine rows:
h2o.rbind(
h2o.cbind(a1, b1, c1, d1),
h2o.cbind(a2, b2, c2, d2),
h2o.cbind(a3, b3, c3, d3),
h2o.cbind(a4, b4, c4, d4),
h2o.cbind(a5, b5, c5, d5)
)
Or, to combine the rows first, then the columns:
h2o.cbind(
h2o.rbind(a1, a2, a3, a4, a5),
h2o.rbind(b1, b2, b3, b4, b5),
h2o.rbind(c1, c2, c3, c4, c5),
h2o.rbind(d1, d2, d3, d4, d5)
)
For the sake of argument, 1/2/3/4/5 might each represent one month of data, which is why they got imported separately. And a/b/c/d are different sets of features, which again explains why they were imported separately. Let's say a1..a5 have 1728 columns, b1..b5 have 113 columns, c1..c5 have 360 columns, and d1..d5 have a single column (the answer I'll be modelling). (Though I suspect, as H2O is a column database, that the relative number of columns in a/b/c/d does not matter?)
By "better" I mean quicker, but if there is a memory-usage difference one way or the other, that would also be good to know: I'm mainly interested in the Big Data case, where the combined frame is big enough that I wouldn't be able to fit it in the memory of a single node.
I'm now fairly sure the answer is: doesn't matter.
Point 1: The two examples in the question are identical. This is because both h2o.cbind() and h2o.rbind() use lazy evaluation. So either way it returns immediately, and nothing happens until you perform some operation. (I've been using nrow() or ncol() to force creation of the new frame - it also allows me to check that I've got what I expected.)
Point 2: I've been informed by an H2O developer that there is no difference (CPU or memory), because either way the data will be copied.
Point 3: I've not noticed any significant speed difference on some reasonably big cbind/rbinds, with final frame size of 17GB (compressed size). This has not been rigorous, but I've never waited more than 30 to 40 seconds for the nrow() command to complete the copy.
Bonus Tip: Following on from point 1, it is essential to call nrow() (or whatever) to force the copy to happen before you delete the constituent parts. If you do all = h2o.rbind(parts), then h2o.rm(parts), then nrow(all), you get an error (and your data is lost and needs to be imported again).

Define a measuring function for an operator in OCaml

I would like to define an operator l_op : A list * A list -> A list whose implementation requires another operator op : A * A -> A. Given a0 : A, op a0 a1 always returns a result of type A for any a1 : A, but for some a1 the result makes more sense than for others.
Intuitively, l_op al0 al1 needs a matching strategy which finds, for each element of al0, a meaningful element of al1 with regard to op. The list of the results of op is then the result of l_op.
So I need a measure of meaning.
One possible choice is to define a function measure : A * A * A -> int. For instance, measure a0 a1 (op a0 a1) gives an int from 1 to 10 which represents how much sense op a0 a1 makes. Then, in the implementation of l_op al0 al1, for each a0 in al0 I can find an a1 such that measure a0 a1 (op a0 a1) >= measure a0 a1' (op a0 a1') for all a1' in al1. Then I remove a0 and a1 from the two lists, and match the rest of the two lists...
Another choice is to change op slightly, to op : A * A -> A * int, where the integer represents how much sense the operation makes. Then, in the implementation of l_op al0 al1, for each a0 in al0 I can find an a1 such that m1 >= m1' for all a1' in al1, where (_, m1) = op a0 a1 and (_, m1') = op a0 a1'.
An advantage of the second choice is that we can save some code, because we can compute the measure while computing op a0 a1. A disadvantage is that I find the signature op : A * A -> A * int less good-looking than op : A * A -> A.
So my questions are:
1) There is a conventional word for this kind of measuring function (which probably starts with 'h'), but I have forgotten it; could anyone remind me?
2) Do you think int is a good type for measuring? Maybe we can define a type for that... What is the most conventional way?
3) Which choice that I mentioned above is better? Or does anyone have a better idea?
There is a conventional word for this kind of measuring function (which probably starts with 'h'),
Maybe "heuristic"? It comes from the Ancient Greek for "find out, discover" and is used in computer science to name methods that look for "good enough" results, often approximating the perfect behavior in a simpler but nearly as effective way. It is not really appropriate here (unless your "meaning measurement" really is a heuristic/approximation), but it does begin with 'h'.
I would suggest just calling your measurements a "score", or a "weight".
Do you think int is a good type for measuring? Maybe we can define a type for that... What is the most conventional way?
It depends on how your measure is defined. How much structure do you need on the results (e.g. you might want to keep the justification of your measurement, which requires a richer structure)? What kind of operations do you use while measuring? If you only use addition and constants, int is fine; if you use division etc., float may be needed. In all cases, you probably need a type whose values can be compared.
I guess that int will be ok in most circumstances, and otherwise you'll be able to change your mind relatively easily. If you plan to change this, you can use a type alias:
type measure = int
This way you can use measure instead of int in most of your code, and don't need to replace all occurrences afterwards. That said, in OCaml we usually don't write a lot of type annotations, thanks to inference, so in practice I don't expect details of your typing choices to be spread in a lot of code.
Which choice that I mentioned above is better? Or does anyone have a better idea?
I'd pick the second choice. I suspect there is some redundancy between the A -> A -> A operation of "computing the result" and the A -> A -> int operation of "computing the result's meaning". By doing both at the same time (A -> A -> A * int) you can reuse the same logical structure, which makes that correspondence clearer (and uses less code). If, on the contrary, the two operations are totally unrelated, you can consider having two separate operators (but I'd still use A -> A -> int for scoring; if you need the result to measure meaning, you can still call the first operation internally).
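For illustration, here is a rough OCaml sketch of the second choice, with A specialized to int, and with placeholder definitions of op and its score (both invented for the example). It also simplifies the matching strategy: chosen elements are not removed from al1.

type measure = int

(* Placeholder operation: returns the result together with a score for
   how much sense it makes; closer operands score higher here. *)
let op (a0 : int) (a1 : int) : int * measure =
  let result = a0 + a1 in
  let score = 10 - abs (a0 - a1) in
  (result, score)

(* For each a0, greedily keep the result of the best-scoring op a0 a1. *)
let l_op (al0 : int list) (al1 : int list) : int list =
  List.map
    (fun a0 ->
       match al1 with
       | [] -> invalid_arg "l_op: empty second list"
       | hd :: tl ->
           let best =
             List.fold_left
               (fun ((_, m_best) as best) a1 ->
                  let ((_, m) as cand) = op a0 a1 in
                  if m > m_best then cand else best)
               (op a0 hd) tl
           in
           fst best)
    al0

With these placeholders, l_op [1; 2] [3; 10] evaluates to [4; 5]: each element of the first list gets paired with 3, the closest element of the second list.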

Is this a minimal set-cover problem?

I have the following scenario (preliminary apologies for length, but I wanted to be as descriptive as possible):
I am presented with a list of "recipes" (Ri) that must be fulfilled, in the order presented, to complete a given task. Each recipe consists of a list of the parts (Pj) required to complete it. A recipe typically requires up to 3 or 4 parts, but might require as many as 16. An example recipe list might look like:
R1 = {P1}
R2 = {P4}
R3 = {P2, P3, P4}
R4 = {P1, P4}
R5 = {P1, P2, P2} //Note that more than 1 of a given part may be needed. (Here, P2)
R6 = {P2, P3}
R7 = {P3, P3}
R8 = {P1} //Note that recipes may recur within the list. (Same as R1)
The longest list might consist of a few hundred recipes, but typically contains many recurrences of some recipes, so eliminating identical recipes will generally reduce the list to fewer than 50 unique recipes.
I have a bank of machines (Mk), each of which has been pre-programmed (this happens once, before list processing has begun) to produce some (or all) of the available types of parts.
An iteration of the fulfillment process occurs as follows:
The next recipe in the list is presented to the bank of machines.
On each machine, one of its available programs is selected to produce one of the parts required by this recipe; if the machine is not needed for this recipe, it is set "offline."
A "crank" is turned, and each machine (that has not been "offlined") spits out one part.
Combining the parts produced by one turn of the crank fulfills the recipe. Order is irrelevant, e.g., fulfilling recipe {P1, P2, P3} is the same as fulfilling recipe {P1, P3, P2}.
The machines operate instantaneously, in parallel, and have unlimited raw materials, so there are no resource or time/scheduling constraints. The size k of the bank of machines must be at least equal to the number of elements in the longest recipe, and thus has roughly the same range (typically 3-4, possibly up to 16) as the recipe lengths noted above. So, in the example above, k=3 (as determined by the size of R3 and R5) seems a reasonable choice.
The question at hand is how to pre-program the machines so that the bank is capable of fulfilling all of the recipes in a given list. The machine bank shares a common pool of memory, so I'm looking for an algorithm that produces a programming configuration that eliminates (entirely, or as much as possible) redundancy between machines, so as to minimize the amount of total memory load. The machine bank size k is flexible, i.e., if increasing the number of machines beyond the length of the longest recipe in a given list produces a more optimal solution for the list (but keeping a hard limit of 16), that's fine.
For now, I'm considering this a unicost problem, i.e., each program requires the same amount of memory, although I'd like the flexibility to add per-program weighting in the future. In the example above, considering all recipes, P1 occurs at most once, P2 occurs at most twice (in R5), P3 occurs at most twice (in R7), and P4 occurs at most once, so I would ideally like to achieve a configuration that matches this - only one machine configured to produce P1, two machines configured to produce P2, two machines configured to produce P3, and one machine configured to produce P4. One possible minimal configuration for the above example, using machine bank size k=3, would be:
M1 is programmed to produce either P1 or P3
M2 is programmed to produce either P2 or P3
M3 is programmed to produce either P2 or P4
Since there are no job-shop-type constraints here, my intuition tells me that this should reduce to a set-cover problem - something like the minimal unate set-cover problem found in designing digital systems. But I can't seem to adapt my (admittedly limited) knowledge of those algorithms to this scenario. Can someone confirm or deny the feasibility of this approach, and, in either case, point me towards some helpful algorithms? I'm looking for something I can integrate into an existing chunk of code, as opposed to something prepackaged like Berkeley's Espresso.
Thanks!
This reminds me of the graph coloring problem used for register allocation in compilers.
Step 1: if the same part is repeated in a recipe, rename it; e.g., R5 = {P1, P2, P2'}
Step 2: insert all the parts into a graph with edges between parts in the same recipe
Step 3: color the graph so that no two connected nodes (parts) have the same color
The colors are the machine identities to make the parts.
This is sub-optimal because the renamed parts create false constraints in other recipes. You may be able to fix this with "coalescing." See Briggs.
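To make the suggestion concrete, here is a rough greedy-coloring sketch in OCaml (illustrative only: parts are plain ints, duplicates are assumed already renamed apart as in Step 1, and greedy coloring is valid but not guaranteed minimal).

(* Color the part-conflict graph: parts appearing in the same recipe must
   get different colors; each color then corresponds to one machine. *)
let color_parts (recipes : int list list) : (int * int) list =
  (* neighbours of a part: every other part sharing some recipe with it *)
  let neighbours part =
    List.concat_map
      (fun r -> if List.mem part r then List.filter (( <> ) part) r else [])
      recipes
  in
  let parts = List.sort_uniq compare (List.concat recipes) in
  List.fold_left
    (fun coloring part ->
       let used =
         List.filter_map (fun n -> List.assoc_opt n coloring) (neighbours part)
       in
       (* smallest color not already taken by a colored neighbour *)
       let rec free c = if List.mem c used then free (c + 1) else c in
       (part, free 0) :: coloring)
    [] parts

Each returned pair is (part, machine id); the number of distinct colors used is the number of machines needed.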

Algorithm needed for solving building-blocks problem

Consider the following scenario:
We have a number of sequential building blocks (e.g. 12 building blocks, ordered from 1 to 12), distributed randomly (but not necessarily equally) among a number of builders (e.g. 3 builders).
The builders are required to work in order, starting the wall from block number 4 and extending it both ways: down to block number 1 or up to block 12.
Each builder has no knowledge of which block numbers the other builders hold, though he knows how many blocks each has.
Builders must try to finish first by preventing others from making their moves. They may not pass, and must place a block if they can.
The builder who finishes all his blocks first will be granted the highest reward, the next the second reward, and so on...
Can we predict who will finish first, second and last? Is there any algorithm the builders should follow to get their work done first?
The following is a practical example of the problem:
Let us say:
builder 1 has: b2 b5 b8 b9
builder 2 has: b1 b11
builder 3 has: b3 b4 b6 b7 b10 b12
builder 1 and builder 2 will have to wait for builder 3 to place b4.
builder 3 places b4, and the turn passes back to builder 1.
wall: b4
builder 1 will have to put up b5, as there are no other options for him.
wall: b4 b5
builder 2 follows, but he can't place either of his blocks; he will have to wait for b2 or b10.
builder 3 now has two options, b3 or b6; he must choose the one which helps him finish first.
wall: b4 b5 b6
builder 1 has nothing to do, he'll pass his turn to builder 2.
builder 2 is still waiting for the installation of b2 or b10.
builder 3 will have to place b7.
wall: b4 b5 b6 b7
builder 1 now will place b8.
wall: b4 b5 b6 b7 b8
builder 2 is still waiting patiently...
builder 3 is forced to put down b3, as there are no other options; he was hoping that builder 1 might place b9... but his hope faded!
wall: b3 b4 b5 b6 b7 b8
builder 1 is totally in charge now, and feeling very happy! But he has a decision to make. After thinking, he decides that playing b2 and keeping b9 allows him to keep preventing a larger number of blocks, which in turn increases his chances.
wall: b2 b3 b4 b5 b6 b7 b8
builder 2 says: finally! some action! and places b1.
wall: b1 b2 b3 b4 b5 b6 b7 b8
builder 3 has lost his hope of becoming first!
builder 1 now will install his final block and go home with the biggest reward!
wall: b1 b2 b3 b4 b5 b6 b7 b8 b9
builder 2 will wait...
builder 3 sadly places b10
builder 2 places b11 and goes home with the second reward...
Any known algorithm for solving such problems?
At first glance, a player's strength is a function of the range spanned by his highest and lowest blocks. In your example game, we can see that Builder 1 completely dominates Builder 2.
Builder 1:        2-------------9
Builder 2:      1-------------------11
Builder 3:          3-----------------12
Start position:       ^
Since the game starts on b4, the most important pieces are at the high end. For example, Builder 3 has b3, which prevents 2 other moves (b2 and b1); however, this isn't very decisive. Block b3, in its ability to prevent b2 and b1, is only as powerful as Builder 1's b5, which prevents b6 and b7 (but nothing beyond, since Builder 1 also holds b8).
The real power lies on the right side of the diagram above. This means that games with the initial starting ranges depicted above will usually finish like this: Builder 1, Builder 2, and then Builder 3.
As for player strategy, here's an admittedly speculative guideline: hold on to your most powerful pieces, meaning those that prevent the largest number of moves by other players. In this strategy, every piece you hold can be assigned a score based on the number of other moves it prevents.
For example, suppose the wall is at b3-b4-b5 and that you hold b2, b6, and b9. You can play either b2 or b6. How do you value your pieces?
b2 score = 1 (prevents b1)
b9 score = 3 (prevents b10, b11, b12)
b6 score = 2 (prevents b7, b8)
Note that b6 does not get credit for preventing b10 and higher, because b9 is doing that job (Matthieu M. also makes this point). In this scenario, you should prefer to play b2 first because it exposes you to the least risk of another player finishing.
Other answers have raised interesting ideas about not wanting to prevent your own progress, suggesting that you should play b6 first. But I don't think there is anything to be gained by accelerating the movement toward b9. You want to delay b9 as long as possible, because it's the piece that gives you the most insurance (from a probabilistic point of view) of preventing other players from finishing.
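Here is one way that scoring rule might look in OCaml (an illustrative sketch; block bN is represented by the int N, and the wall by its two current ends).

(* A held block "prevents" the other players' blocks between it and your
   next held block in the same direction (or the end of the range). *)
let score ~hand ~max_block ~wall_lo ~wall_hi piece =
  if piece > wall_hi then
    (* nearest block you hold above [piece], or the top of the range *)
    let next_higher =
      List.fold_left
        (fun acc b -> if b > piece && b < acc then b else acc)
        (max_block + 1) hand
    in
    next_higher - piece - 1
  else if piece < wall_lo then
    (* nearest block you hold below [piece], or the bottom of the range *)
    let next_lower =
      List.fold_left (fun acc b -> if b < piece && b > acc then b else acc) 0 hand
    in
    piece - next_lower - 1
  else 0

On the example above (wall b3..b5, hand {b2, b6, b9}), score ~hand:[2; 6; 9] ~max_block:12 ~wall_lo:3 ~wall_hi:5 gives 1 for piece 2, 2 for piece 6 and 3 for piece 9, matching the values listed.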
Update:
I wrote a Perl simulation to test a few simple player strategies. I'm starting to wonder whether player strategy is irrelevant. I tried the following: (a) pick the highest block; (b) pick the lowest block; and (c) my recommended strategy of picking the safest block (the one that prevents the most moves by others). I evaluated the strategies by awarding 3 points for 1st place, 2 for 2nd, and 1 for 3rd. None of these strategies performed consistently better (or worse) than random selection.
Certainly, one can concoct scenarios where a player's choice affects the outcome. For example, if the blocks are distributed like this, player 3 will get either 1st or 2nd place.
block: b1 b2 b3 b4 b5 b6 b7 b8 b9 b10 b11 b12
owner:  2  1  3  1  3  2  2  2  2   2   2   2
However, from a probabilistic point of view, this variation in outcome can be simplified to the following: player 3 will win unless he picks the block adjacent to a player who has only one block remaining. In other words, the precise outcome is a coin toss.
So here's the question: Can anyone provide a scenario with an outcome that is neither foreordained nor a coin toss? I tried for about 15 minutes and then got bored.
This is a one-suited variant of the card game Sevens - it also goes by other names; I have heard it widely called Fan Tan.
You might like to search the web for algorithms for that.
p.s. This smells like a homework assignment. It is considered polite to use the "homework" tag in such circumstances.
FM is right: the more of your enemies' pieces you block, the better the move. However, there is another part to the strategy that is not considered there.
Consider if you hold B3, B7, and B11. Suppose that B3 and B7 are currently both legal moves. (You are in a reasonably good position: because you hold neither B12 nor B1, you cannot come third.)
Choosing B3 means that you are only opening up B1 and B2, so it is the best move under FM's strategy.
However, by not placing B7, you are delaying the eventual play of B10, which is necessary for you to win. B7 is probably a better move.
Since I don't have the details yet, let's start with the (reasonable) assumption that if you can play, then you have to. That prevents the game from getting stuck.
Because of the rules, you have 0, 1 or 2 possible moves. You only actually get to choose when there are 2.
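That observation is mechanical; as a tiny OCaml helper (blocks represented as ints):

(* With the wall spanning wall_lo..wall_hi, the only playable blocks are
   wall_lo - 1 and wall_hi + 1, so any hand yields 0, 1 or 2 legal moves. *)
let legal_moves ~hand ~wall_lo ~wall_hi =
  List.filter (fun b -> b = wall_lo - 1 || b = wall_hi + 1) hand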
1. Decision Tree
Like many games, the easiest way to see what happens is to trace a tree of all possible moves and then explore this tree to make your decision. Since there is not much decision taking place, the tree should not be that big.
For example, consider that we are in the state:
wall = [3, ..., 8]
b1 = [2,9]
b2 = [1,11]
b3 = [10,12]
And it's b1's turn to play.
b1[2] -> b2[1] -> b3[] -> b1[9] (1st) -> b3[10] -> b2[11] (2nd) -> b3[12]
or
b1[9] -> b2[] -> b3[10] -> b1[2] (1st) -> b2[1] -> b3[] -> b2[11] (2nd) -> b3[12]
or
b2[11] -> b3[12] (2nd) -> b2[1]
So basically we have 2 choices in this part of the tree.
b1 gets to choose between 2 and 9
b2 gets to choose between 1 and 11
We can summarize the consequences of a choice by listing the positions the players will get; obviously, in an unbiased game, each player chooses so as to get the best position.
So, let's express a reduced version of the tree (where we only show the choices):
b1[2] -> [1,2,3]
b1[9] -> b2[1] -> [1,2,3]
b1[9] -> b2[11] -> [1,3,2]
Now we can apply a reduced view, based on a given player.
For b1, the tree looks like:
[2,9] -> [1,1] (both are equivalent)
For b2, it looks like:
[1,11] -> [2,3]
For b3, there is never a choice...
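For what it's worth, the whole exploration is easy to mechanize. Here is a rough OCaml sketch that enumerates every reachable finishing order from a state, under the forced-play assumption (players are indexed from 0; the representation is invented for this example, and it assumes every block outside the wall is held by someone, so the game cannot deadlock on endless passes).

type state = {
  lo : int;                  (* lowest block currently on the wall *)
  hi : int;                  (* highest block currently on the wall *)
  hands : int list array;    (* hands.(i) = blocks still held by player i *)
  finished : int list;       (* players in finishing order, most recent first *)
}

(* Every finishing order reachable from [s] with player [turn] to move. *)
let rec outcomes s turn =
  let n = Array.length s.hands in
  let next = (turn + 1) mod n in
  if List.length s.finished = n then [ List.rev s.finished ]
  else
    match List.filter (fun b -> b = s.lo - 1 || b = s.hi + 1) s.hands.(turn) with
    | [] -> outcomes s next   (* no legal move: forced pass *)
    | moves ->
        List.concat_map
          (fun b ->
             let hands = Array.copy s.hands in
             hands.(turn) <- List.filter (( <> ) b) hands.(turn);
             let finished =
               if hands.(turn) = [] then turn :: s.finished else s.finished
             in
             outcomes { lo = min s.lo b; hi = max s.hi b; hands; finished } next)
          moves

Called on the state above, outcomes { lo = 3; hi = 8; hands = [| [2; 9]; [1; 11]; [10; 12] |]; finished = [] } 0 returns the three leaves traced earlier: the finishing order [b1; b2; b3] twice, and [b1; b3; b2] once.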
2. Possible outcomes
Of course, the players don't get this tree, since they don't know what the others have, but it gives you, as an observer, the possibility of studying the various possible outcomes at different stages of the game.
Note for example that in this subtree we have 2 choices, the second being conditional on the first. If we let Pi(x in {x,y}) express the probability that player i chooses x when facing a choice between x and y, then we can express the probability of each outcome.
P([1,2,3]) = P1(2 in {2,9}) + P1(9 in {2,9}) * P2(1 in {1,11})
P([1,3,2]) = P1(9 in {2,9}) * P2(11 in {1,11})
3. Players Strategy
From what we can see here, it appears that the best strategy is to hold on to the pieces that block the most: i.e., when choosing between 1 and 11, you'd better play 1, because it does not block anyone, while 11 blocks 1 piece. However, this only works when you are down to 2 pieces.
We need something a bit more generic for the case when you actually have a list of pieces.
For example, if you hold {3, 9, 11} and the wall is [4, ..., 8], which should you place? Apparently 3 blocks fewer pieces than 9, but 9 blocks one of your own pieces!
Personally, I would go for 9 in this case, because I will need to place my 11 anyway, and 11 blocks fewer pieces than 3 (with 3 in hand I have a chance of finishing first; with 11 it's less likely...).
I think I can give a score to each piece in my hand, depending on the number of pieces they block:
3 -> 2
9 -> 1
11 -> 1
Why is 9 attributed only 1? Because it only blocks 10, since I hold the 11 :)
Then I would play the piece with the lowest score first (when I have a choice).
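Incidentally, this is the same rule as the score sketch given after FM's answer above: score ~hand:[3; 9; 11] ~max_block:12 ~wall_lo:4 ~wall_hi:8 gives 2 for piece 3 and 1 each for pieces 9 and 11, reproducing this table.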
