I'm having trouble understanding why this CLIPS code doesn't get trapped in an infinite loop:
(defrule rule0
   =>
   (assert (my-fact)))

(defrule rule1
   ?f <- (my-fact)
   =>
   (retract ?f))
As far as I know, rule0 is executed, asserting my-fact, and then rule1 is executed, retracting it. Why doesn't rule0 execute again now?
Here are my thoughts:
1. CLIPS remembers, for each rule, whether it was already executed with a given set of facts, and avoids re-executing the rule with those same facts.
2. There is some sort of optimizer that detected a loop and avoided it.
3. CLIPS remembers which facts were asserted and retracted, and avoids re-asserting those facts (I highly doubt it; I'm almost sure this can't be the case).
Note: I abstracted this piece of code from another small program that uses templates instead of facts.
Wikipedia has a good overview of how the Rete algorithm works. One of the key concepts to understand is that rules do not seek out the data that satisfy them; rather, data seek out the rules they satisfy. The Rete algorithm assumes that most data remains the same after each rule firing, so having the rules seek out data would be inefficient, since only a fraction of the data changes after each rule firing. Instead, rules save the state of what has already been matched, and when a change to the data affects that state, the state is updated.
When rule rule0 is defined, it is activated because it has no conditions. When rule rule1 is defined, it is not activated because my-fact does not yet exist. When rule rule0 is executed, the fact my-fact is asserted, and then rule rule1 has its state updated and is activated. When rule rule1 is executed, my-fact is retracted and the state of rule rule1 is updated since it matches my-fact. Rule rule0 is not affected by this retraction because it doesn't have conditions which match my-fact.
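As an illustration, here is a sketch of a fresh session with the two rules loaded, watching activations and rule firings (the exact trace formatting may differ between CLIPS versions):

CLIPS> (watch activations)
CLIPS> (watch rules)
CLIPS> (reset)
==> Activation 0      rule0: *
CLIPS> (run)
FIRE    1 rule0: *
==> Activation 0      rule1: f-1
FIRE    2 rule1: f-1
CLIPS>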
Your first explanation is the one to go with. The principle that a rule doesn't fire a second time for the same set of facts is called refraction. By the same set of facts I mean not only the same values but also the same fact addresses.
Here, we have a special case: because rule0 has no LHS, it wouldn't fire a second time even if the fact base changed. No LHS means no pattern matching and therefore no further activation.
But you can make a rule fire again with the refresh command.
CLIPS> (run)
CLIPS> (refresh rule0)
CLIPS> (agenda)
0      rule0: *
For a total of 1 activation.
Normally, you are not able to assert a fact if an identical fact is already in your fact base (once it has been retracted, you are free to assert it again).
You can change that with (set-fact-duplication):
CLIPS> (set-fact-duplication TRUE)
But I wouldn't recommend that.
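For illustration, here is a sketch of the difference in a fresh session (the fact indices assume an otherwise empty fact base):

CLIPS> (reset)
CLIPS> (assert (my-fact))
<Fact-1>
CLIPS> (assert (my-fact))
FALSE
CLIPS> (set-fact-duplication TRUE)
FALSE
CLIPS> (assert (my-fact))
<Fact-2>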
Related
I've developed a set of rules that should control the execution of a specific flight mission scenario. I've tested this rule set in simulation to check whether the expected scenario would be executed, and everything worked as expected. Therefore, I'm sure that the rules successfully carry out the "expected" scenario. What I need to do now is check whether the rule set handles all the possible situations that may occur, including the "unexpected" or "unseen" ones, i.e., situations that should not happen in the first place but may happen because of some error or outside force. For example, a drone should not climb over a certain threshold, yet it may do so because of a strong thrust of air or a faulty pressure sensor.

My rule base has around 33 rules and 6 templates that all have around 25 attributes. Considering all the combinations of these 25 attributes (which vary between integers and symbols with different numbers and values of allowed-symbols) is very complex and very difficult to do manually. Is there a tool that automatically checks whether all the possible combinations of the templates' attributes (i.e., all the possible situations) are covered by the rule set? Briefly, the tool should answer the question: are there any missing rules for possible situations (or combinations) that I forgot to consider or didn't think of?
Thanks
I'm not aware of any ready-to-use tools for CLIPS. If I recall correctly, when I developed an application for JRules, the table editor for rules supported completeness checking, since it knew that the rows of a table represented a grouped collection of rules and it could make some inferences by comparing the rows; it didn't support completeness checking for individual rules written using the business or technical rule syntax. Since my application had a significant number of complex rules that couldn't be expressed using tables, I had to use the unit-testing functionality and manually generate a representative set of test cases, because there was no way to test every scenario.
With CLIPS, there aren't any high-level table editors, so all your testing is limited to unit tests. There's a set of test cases (https://sourceforge.net/projects/clipsrules/files/CLIPS/6.30/feature_tests_630.zip) used for unit testing CLIPS functionality that you can use as a framework for unit testing other applications. To run the test cases, launch CLIPS and execute a (batch "testall.tst") command from the top-level directory of the test cases. When the test cases complete, you can use a diff program to compare the contents of the Expected output directory to the Actual output directory. Individual test cases consist of batch files that, when executed, dump their output to a text file; that output is then compared to a text file containing the expected output. There's no guide on how to create test cases, but there are over 100 test cases provided, so if you use those as a template it's not difficult to figure out how to write one.
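As an illustration, a minimal test case in that style might look like the following batch file; the file and rule names here are my own assumptions, not taken from the actual suite:

; hypothetical-test.tst -- run the rules and capture the transcript, then
; diff Actual/hypothetical-test.out against Expected/hypothetical-test.out.
(dribble-on "Actual/hypothetical-test.out")  ; send all output to the file
(clear)
(load "quote-rules.clp")                     ; hypothetical rules under test
(reset)
(run)
(facts)                                      ; dump resulting facts into the transcript
(dribble-off)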
Generally speaking, it's not possible to prove that any given program works correctly or even terminates, so given the limited amount of information you've given about your rules it's not possible to say whether their correctness can be proven. However, if your rules are relatively simple and can be represented as facts, you can use CLIPS itself to validate some aspects of your program. For example, the CLIPS animal program (https://sourceforge.net/p/clipsrules/code/HEAD/tree/branches/64x/examples/animal.clp) represents its rules as facts:
(rule (if order is scales and
          rounded.shell is yes)
      (then type.animal is turtle))
You can then write rules such as this one which checks to see if a rule condition can be satisfied:
(defrule VALIDATE::reachable
   (rule (name ?name) (validate yes)
         (if ?a ?c ?v $?))
   (not (question (variable ?a)))
   (not (rule (then ?a $?)))
   =>
   (printout t "In rule " ?name " no question or rule could be found "
               "that can supply a value for the variable " ?a ":" crlf
               "   " ?a " " ?c " " ?v crlf))
This is similar to the approach used by Drools Verifier (https://developer.jboss.org/wiki/DroolsVerifier), which converts rules to facts in order to do an analysis. Your actual program doesn't need to represent rules as facts, but if you do this for analysis, there are lots of things you can check by using rules to reason about your rules. I was able to find the Drools Verifier in a couple of minutes using a search engine, so you can probably find other examples of this technique.
I have several Prolog facts indicating that something or someone is either a person, location or object. I have a clause go(person, location) that indicates that a person moves from where they are to the location given in the clause. However, when I ask the relevant query to find out if someone is at a certain location, Prolog responds with every person that was ever there according to the clauses. How do I go about writing a rule that says that if you are in one location you are, by definition, not in any of the others?
It appears that you left one important aspect out when modeling the situation as Prolog facts: When did the person go to the location?
Assume you had instead facts of the form:
person_went_to_at(Person, Location, Time).
then it would be pretty easy to determine, for any point in time, where everyone was, and where they moved to last (and, therefore, are now).
You probably need to add timing information to your facts. Imagine the following situation:
go(dad, kitchen, bathroom).
go(dad, bathroom, garage).
go(dad, garage, kitchen).
Since Prolog is (more or less) declarative, the actual order of the facts in the file does not matter in this case. So you cannot conclude that dad is in the kitchen; he might equally have started from and returned to the garage. Even if you add some kind of starting predicate, say startLoc(dad, kitchen), this does not help with loops (e.g., when you add go(dad, kitchen, outside) to the above facts).
If you add timing information (and leave out the previous room, as this is clear from the timing information), this becomes:
go(dad, bathroom, 1).
go(dad, garage, 2).
go(dad, kitchen, 3).
The actual numbers are not relevant, just their order. You can now get the latest location by ensuring that there is no later go fact for dad:
location(X, Y) :- go(X, Y, T), \+ ( go(X, _, T2), T2 > T ).
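With the three go/3 facts above loaded, the query would then behave like this (a sketch of an SWI-Prolog session):

?- location(dad, Where).
Where = kitchen ;
false.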
Background
I am learning SICP by following an online course and got confused by its lecture notes. In the lecture notes, applicative order seems to equal call-by-value (CBV), and normal order seems to equal call-by-name (CBN).
Confusion
But the wiki points out that, besides evaluation order (left to right, right to left, or simultaneous), there is a difference between applicative order and CBV:
Unlike call-by-value, applicative order evaluation reduces terms within a function body as much as possible before the function is applied.
I don't understand what it means by reduced. Don't applicative order and CBV both get the exact value of an argument before going into the function body?
And for normal order and CBN, I am even more confused by the wiki:
In contrast, a call-by-name strategy does not evaluate inside the body of an unapplied function.
I guess it means that normal order would evaluate inside the body of an unapplied function. How could that be?
Question
Could someone give me more concrete definitions of the four strategies?
Could someone show an example for each strategy, using whatever programming language?
Thanks a lot!
Applicative order (without taking into account the order of argument evaluation, which in Scheme is undefined) would be equivalent to CBV. All arguments of a function call are fully evaluated before entering the function's body. This is the example given in SICP:
(define (try a b)
  (if (= a 0) 1 b))
If you define the function, and call it with these arguments:
(try 0 (/ 1 0))
When using applicative-order evaluation (the default in Scheme), this will produce an error: it will evaluate (/ 1 0) before entering the body. With normal-order evaluation, this would instead return 1: the arguments are passed unevaluated to the function's body, and (/ 1 0) is never evaluated because (= a 0) is true, avoiding the error.
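You can simulate this call-by-name behavior in ordinary applicative-order Scheme by passing thunks instead of values; this is a sketch of my own, not part of the SICP example:

(define (try-cbn a-thunk b-thunk)
  (if (= (a-thunk) 0)
      1
      (b-thunk)))

;; (/ 1 0) is wrapped in a lambda and never forced, so no error occurs:
(try-cbn (lambda () 0) (lambda () (/ 1 0)))  ; => 1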
In the article you link to, they are talking about the lambda calculus when they mention applicative- and normal-order evaluation. It is explained more clearly, I think, in the wiki article on reduction strategies.
Reduced means applying reduction rules to the expression (also explained in the link):
α-conversion: changing bound variables (alpha);
β-reduction: applying functions to their arguments (beta);
Normal order:
The leftmost, outermost redex is always reduced first. That is, whenever possible the arguments are substituted into the body of an abstraction before the arguments are reduced.
Call-by-name:
As normal order, but no reductions are performed inside abstractions. For example λx.(λx.x)x is in normal form according to this strategy, although it contains the redex (λx.x)x.
A normal form is an equivalent expression that cannot be reduced any further under the rules imposed by the form.
In the same article, they say about call-by-value:
Only the outermost redexes are reduced: a redex is reduced only when its right hand side has reduced to a value (variable or lambda abstraction).
And Applicative order:
The leftmost, innermost redex is always reduced first. Intuitively this means a function's arguments are always reduced before the function itself.
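A standard lambda-calculus illustration (my addition, not from the article) shows how far apart the strategies can be. Write Ω for the self-reproducing term (λx.x x)(λx.x x); then for the application (λy.z) Ω:

Normal order:      (λy.z) Ω → z, because the leftmost, outermost redex is reduced first and the argument is discarded without ever being reduced.
Applicative order: (λy.z) Ω → (λy.z) Ω → ..., because the innermost redex Ω must be reduced first, and Ω reduces to itself forever.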
You can read the article I linked for more information about lambda-calculus.
See also Programming Language Foundations.
I'm creating several puzzle solvers in SWI-Prolog with CHR (Constraint Handling Rules).
Everything works great, but I'd like to test which solver is the best one.
Therefore I'd like to find out which solver needs the fewest backtracks.
Is there a clever way to find out (or print out) the number of backtracks the solver needed to solve a particular puzzle?
Logically, keeping a counter would help, but it doesn't, because backtracking undoes the counting.
Also, printing a new line on the screen for each backtrack isn't effective because of SWI's GUI: you can't print more than about 50 lines, and you can't select the output properly.
It is indeed not trivial to accomplish this, given that Constraint Handling Rules maintain a 'constraint store' and the execution of rules may add, rewrite or remove constraints from this store at runtime. This changes the state of the program and makes it somewhat difficult to keep track of global state throughout execution.
However, since CHR is integrated in SWI, you can make use of the non-logical operation nb_setarg/3 to keep count of the backtracks.
Notes from the doc:
Compatible with GNU-Prolog's setarg(A,T,V,false)
This implementation is thread-safe, reentrant and capable of handling exceptions
EDIT
As for where to count the backtracks, this of course depends on your program, but counting will usually happen in the CHR rule that defines the fail condition of your search, allowing it to 'branch' (i.e., rewrite CHR constraints). Every time such a rewrite of the constraint store occurs during the search, it represents a backtrack, and you can increase a counter accordingly using the operation defined above.
Consider a small, abstract example:
invalid_state ==> increment_backtracks, fail.
guess <=> branch.
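A minimal sketch of what increment_backtracks could look like in SWI-Prolog (the predicate and key names here are illustrative, not from the original post):

:- use_module(library(chr)).

% Store the counter inside a compound term so nb_setarg/3 can update it
% destructively; unlike assert-based counting, the update survives backtracking.
init_backtracks :-
    nb_setval(backtracks, counter(0)).

increment_backtracks :-
    nb_getval(backtracks, Counter),
    arg(1, Counter, N0),
    N1 is N0 + 1,
    nb_setarg(1, Counter, N1).

report_backtracks :-
    nb_getval(backtracks, counter(N)),
    format("backtracks: ~w~n", [N]).

Call init_backtracks before starting the solver and report_backtracks once the search finishes (or fails).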
I've developed a program which generates insurance quotes using different types of coverages based on state criteria. Now I want to add the ability to specify 'rules'. For example we may have 3 types of coverage (we'll call them UM, BI, and PD). Well some states don't allow PD to be greater than BI and other states don't allow UM to exist without BI. So I've added the ability for the user to create these rules so that when the quote is generated the rule will be followed and thus no state regulations will be violated when the program generates the quote.
The Problem
I don't want the user to be able to select conflicting rules. The user can select any of the VB mathematical operators (>, <, >=, <=, =, <>) and set a coverage on either side. They can do this multiple times (but only one at a time) so they might end up with a list of rules like this:
A > B
B > C
C > A
As you can see, the last rule conflicts with the previously set rules. My solution to this was to validate the list each time the user clicks 'Add rule to list'.
Pretend the third list item is not yet in the list, but the user has clicked 'add rule' to put it there. The validation process first checks whether both incoming variables have already been used on the same line. If not, it searches the existing list for the left-side incoming variable (in this case 'C'). If it finds it, it sets tmp1 equal to the variable across from the match (tmp1 = 'B'). It then does the same for the incoming variable on the right side (in this case 'A'), so tmp2 is set equal to the variable across from A (tmp2 = 'B'). If tmp1 and tmp2 are equal, then the incoming rule is either conflicting or irrelevant, regardless of the operators used. I'm pretty sure this logic is solid for 3 variables. However, I found that adding any additional variables could easily bypass my validation. There could be upwards of 10 coverage types in any given state, so it is important to be able to validate more than just 3.
Is there any uniform way to do a sound validation given any number of variables? Any ideas or thoughts are appreciated. I hope my explanation makes sense.
Thanks
My best bet is some sort of hierarchical tree of rules. When the user adds the first rule (say A > B), the application could create a data structure like this (lowerValues is a map in which each key leads to a list of values):
lowerValues['A'] = ['B']
Now when the user adds the next rule (B > C), the application could check whether B is already in any lowerValues list (in this case, A's). If so, C is added to lowerValues['A'], and lowerValues['B'] is also created:
lowerValues['A'] = ['B', 'C']
lowerValues['B'] = ['C']
Finally, when the last rule is provided by the user (C > A), the application checks whether C is in any lowerValues list. Since it is in both A's and B's, the rule is invalid.
Hope that helps. I don't remember whether VB has some sort of map type; I think you should try the Dictionary object.
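A sketch of the idea in VB6 (assuming a reference to the Microsoft Scripting Runtime; the names lowerValues, IsBelow and RuleConflicts are illustrative, not from the original post):

' coverage letter -> Collection of coverages known to be lower than it
Private lowerValues As Scripting.Dictionary

Private Sub InitRules()
    Set lowerValues = New Scripting.Dictionary
End Sub

' True if "low" is already known (directly or transitively) to be below "high".
Private Function IsBelow(high As String, low As String) As Boolean
    Dim item As Variant
    IsBelow = False
    If Not lowerValues.Exists(high) Then Exit Function
    For Each item In lowerValues(high)
        If item = low Or IsBelow(CStr(item), low) Then
            IsBelow = True
            Exit Function
        End If
    Next
End Function

' Adding "leftSide > rightSide" conflicts if rightSide is already above leftSide.
Private Function RuleConflicts(leftSide As String, rightSide As String) As Boolean
    RuleConflicts = IsBelow(rightSide, leftSide)
End Function

' Record "leftSide > rightSide" after it has passed the conflict check.
Private Sub AddRule(leftSide As String, rightSide As String)
    If Not lowerValues.Exists(leftSide) Then
        lowerValues.Add leftSide, New Collection
    End If
    lowerValues(leftSide).Add rightSide
End Sub

Because IsBelow walks the map recursively, you don't strictly need to flatten the transitive entries into each list as described above; recording only the direct pairs is enough. With A > B and B > C recorded, RuleConflicts("C", "A") returns True.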
For this idea to work out, all the operations must be internally translated into a single canonical form. So, for example:
A > B
could be translated as
B < A
Good luck
In general this is a pretty hard problem. What you in fact want to know is whether a set of propositional formulas over (apparently) some arithmetic inequalities is consistent. To decide this you need what amounts to a constraint solver that "knows" arithmetic. You're not likely to find that in VB6, but you might be able to invoke one as a subprocess.
If the rules are propositional formulas only over inequalities (A < B and the like), you can approach it in stages. First, normalize the inequalities (e.g., "A > B" is the same as "B < A"; write each one only one way).
Second, try checking the propositions for tautology (see Wang's algorithm, which you can likely implement, if awkwardly, in VB6).
If the propositions are not a tautology, you now want to build chains of inequalities (e.g., A > B > C) as a graph and look for cycles. The place this fails is when your propositions contain disjunctions, e.g., "A>B or B>Q"; you'll have to generate an inequality chain for each combination of disjuncts and discard the inconsistent ones. If you discard all of them, the set is inconsistent. Watch out for negated conjunctions: by De Morgan's theorem, "not (A and B)" is equivalent to "not A or not B", so the negation of "A>B and B>Q" is "A<=B or B<=Q". You might want to reduce the conditions to disjunctive normal form to avoid being surprised.
There are apparently decision procedures for such inequality systems. They're likely hard to implement.