I want to process arithmetic expressions with variables in them. Variables should be left as they are, and the other parts should be evaluated. For example,
?-result(7*x,R).
R=7*x.
?-result(x+(2*3),R).
R=x+6.
How should I do this?
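The thread excerpt doesn't include an answer to this, so here is only a minimal sketch of one possible result/2 (my own, not from the original post). It assumes memberchk/2 from the usual lists library and handles just the four basic binary operators:

% result(+Expression, -Result): evaluate the variable-free parts of an
% arithmetic expression and leave everything else untouched.
result(X, X) :-
    var(X), !.                              % unbound variables stay as they are
result(X, X) :-
    number(X), !.                           % numbers evaluate to themselves
result(X, R) :-
    X =.. [Op, A, B],
    memberchk(Op, [(+), (-), (*), (/)]), !,
    result(A, RA),
    result(B, RB),
    (   number(RA), number(RB)
    ->  E =.. [Op, RA, RB],
        R is E                              % both sides are ground: compute
    ;   R =.. [Op, RA, RB]                  % otherwise rebuild the term
    ).
result(X, X).                               % anything else (such as the atom x) is left alone

With this sketch, result(7*x, R) gives R = 7*x and result(x+(2*3), R) gives R = x+6, matching the examples above.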
I am writing a program for which I need terms in their prefix notation.
The point is being able to parse mathematical expressions into prefix notation while preserving the correct order of operations. I then want to save the result in the database for later use (using assert), which includes translating it to another language that uses prefix notation. Prolog operators all have a fixed priority, which is a feature I want to use, as I will be using all sorts of operators (including clp operators).
Among other things I need to include complete mathematical expressions, such as ones involving the equality operator. Thus I cannot recursively use the univ operator (=..), because it won't accept equality operators etc. Or can I somehow use =..?
Essentially I want to work with the internal representation of
N is 3*4+5 % just a random example
which would be
is(N,+(*(3,4),5))
Now, I do know that I can use write_canonical(N is 3*4+5) to get the internal representation as seen above.
So is there a way to get the internal representation as a term or a list, or something similar?
Would it be possible to bind the output of write_canonical to a variable?
I hope my question is clear enough.
Prolog terms can be depicted as trees. But when writing a term, the way the term is displayed depends on the defined operators and the write options. Consider:
?- (N is 3*4+5) = is(N,+(*(3,4),5)).
true.
?- (N is 3*4+5) = is(Variable, Expression).
N = Variable,
Expression = 3*4+5.
?- 3*4+5 = +(*(3,4),5).
true.
I.e. operators are syntactic sugar. They don't change how terms are represented, only how terms are displayed.
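To address the =.. part of the question above: univ works on any compound term, including terms whose functor is =, is, or a clp operator, so a recursive walk over the term is possible. A minimal sketch (my own, not from the original answer), assuming maplist/3 from the usual library, that turns a term into nested prefix lists:

% to_prefix(+Term, -Prefix): convert a term into nested prefix lists.
% ?- to_prefix(N is 3*4+5, L).
% L = [is, N, [+, [*, 3, 4], 5]].
to_prefix(Term, Term) :-
    (   var(Term)
    ;   atomic(Term)
    ), !.
to_prefix(Term, [Functor| PrefixArgs]) :-
    Term =.. [Functor| Args],
    maplist(to_prefix, Args, PrefixArgs).

The resulting list (or the original term itself) can then be saved with assert for later translation.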
A quick and simple question regarding the role anonymous variables play in the resolution of a Prolog query given a set of program rules. The way I understand the simplest form of SLD resolution, an SLD tree is constructed by taking some term from a set of goal terms (based on a selection rule, e.g. FIRST) and going through all the program rules to see which rule's left-hand side (the consequent, so to say) can be unified with the term at hand. The way to unify two given terms is to take the difference set of the two terms and see whether variables can be substituted for terms such that the difference vanishes; you do this by successively taking the leftmost single difference and checking whether, of the two subterms constituting the difference, one is a variable not appearing in the other, and composing your current substitution with one that maps the variable onto the term (starting with the empty, or identity, substitution).
Now, when anonymous variables (_) come into play, I suspect the trick to doing it correctly and efficiently lies in changing the way you determine the leftmost difference between two terms, so as to ignore a pair of terms whenever one of them is an anonymous variable. The obviously correct way to do it would be to rename every instance of _ in the goal and the program to a new variable name and solve using those.
How is it actually done? Is my idea sufficient, or is there more to it than that? (Also, I would very much appreciate being told if something is missing in the way I understand how SLD resolution works, barring negation, call, capsuling, arithmetic predicates and the more complicated stuff.)
Prolog anonymous variables don't play a special role in SLD resolution or in term unification, but they do play a practical role in Prolog code and Prolog queries. A fundamental aspect of anonymous variables is that each occurrence of an anonymous variable is a different variable. Consider the following query:
| ?- a(_, _) = a(1, 2).
yes
The unification would have failed if the two anonymous variables were the same variable. Now consider the query:
| ?- a(X, _) = a(1, 2).
X = 1
yes
Variable bindings are only reported for variables that are not anonymous variables. This allows using an anonymous variable every time we are not interested in the bindings for a variable.
Anonymous variables also simplify writing predicate definitions where they similarly act as "don't care" variables. Consider as an example the usual definition of the member/2 predicate:
member(Element, [Element| _]).
member(Element, [_| List]) :-
    member(Element, List).
In the first clause, we don't care about the list tail. In the second clause, we don't care about the list head. By using anonymous variables, we can ignore those sub-terms and avoid compiler warnings about variables that occur only once in a clause.
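As a quick illustration of the definition in use (a toplevel session added here, not part of the original answer):

| ?- member(X, [1, 2, 3]).
X = 1 ? ;
X = 2 ? ;
X = 3 ? ;
no

Each clause only looks at the part of the list it needs; the anonymous variables absorb the parts we don't care about.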
Update
Note that all distinct variables in a query get unique internal variable references, not to be confused with the variable names typed by the user. The variable names are only used by the top-level interpreter to report bindings for successful queries. The inference mechanism used to prove a query works with the internal variable references. The following query, using the ISO Prolog standard read_term/2 predicate with standard options, may help:
| ?- read_term(Term, [variable_names(Names), variables(Variables)]).
a(X, _, Y, _).
Names = ['X'=A,'Y'=B]
Term = a(A,C,B,D)
Variables = [A,C,B,D]
yes
In the term read, there are four distinct variables but only two of them have (user provided) names.
This is really a comment, posted as an answer because a comment cannot be formatted as needed.
Using SWI-Prolog
?- trace,(_=_).
Call: (11) _1834=_1836 ? creep
Exit: (11) _1834=_1834 ? creep
true.
Each anonymous variable is created as a separate variable. When the unification takes place, one variable is unified with the other.
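The same point can be made without tracing (a check added here, not in the original comment): ==/2 tests whether two terms are identical, and two anonymous variables are never the same variable:

?- _ == _.
false.

By contrast, the trace above succeeds because =/2 unifies the two distinct variables with each other.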
After reading this answer on a CSS question, I wonder:
In Computer Science, is a single, constant value considered an expression?
In other words, is 7px an expression? What about just 7?
Quoting Wikipedia, emphasis mine:
An expression in a programming language is a combination of one or more explicit values, constants, variables, operators, and functions that the programming language interprets [...] and computes to produce [...] another value. This process, as for mathematical expressions, is called evaluation.
Quoting MS Docs, emphasis mine:
An expression is a sequence of one or more operands and zero or more operators that can be evaluated to a single value, object, method, or namespace. Expressions can consist of a literal value [...].
Both of these seem to indicate that values are expressions. However, one could argue that a value will not be evaluated, as it is already just a value, and therefore doesn't qualify.
Quoting Techopedia, emphasis mine:
[...] In terms of structure, experts point out that an expression inherently needs at least one 'operand' or value that is acted on, and must have one or more operators. [...]
This suggests that even x does not qualify as an expression, as it lacks one or more operators.
It depends on the exact definition, of course, but under most definitions expressions are defined recursively, with constants being one of the base cases. So, yes, literal values are special cases of expressions.
You can look at the grammars of various languages, such as the one for Python.
If you trace through the grammar, you see that an expr can be an atom, which includes number literals. The fact that number literals are Python expressions is also obvious when you consider productions like:
comparison: expr (comp_op expr)*
This is the production that captures expressions like x < 7, which could not be captured if 7 were not a valid expression.
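The same recursive structure shows up in any small expression grammar. As an illustration (a toy sketch of my own, not the Python grammar), here is a Prolog DCG over token lists in which a bare number is one of the base cases of expr//1:

% expr//1: an expression is a term, optionally followed by + and
% another expression; a term is a number or a parenthesised expression.
expr(plus(A, B)) --> term(A), [+], expr(B).
expr(A)          --> term(A).
term(num(N))     --> [N], { number(N) }.
term(A)          --> ['('], expr(A), [')'].

% ?- phrase(expr(T), [7]).
% T = num(7).
% ?- phrase(expr(T), [1, +, 2]).
% T = plus(num(1), num(2)).

A lone constant like 7 is accepted precisely because the grammar bottoms out at number literals.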
In Computer Science, is a single, constant value considered an expression?
It depends entirely on the context. For example, FORTRAN, BASIC, and COBOL all have line numbers. Those are numeric constant values that are not expressions.
In other contexts (even within those languages) a numeric constant may be an expression.
In logic I see the term "Skolem constant" many times; it is said to be used to substitute for occurrences of existentially quantified variables. But what is special about a Skolem constant, why do we make such a substitution, and what is the Skolem constant for? Why not just leave the existential variable alone?
Another question: what is a logic variable, and what is it for?
Could anyone explain this to me?
Thanks in advance!
I have a huge set (20000) of boolean expressions. They consist of AND, OR and NOT operators and a large number of boolean variables A1, A2, A3, ... (about 1000). Most expressions contain only 5, maybe up to 20, of these variables.
Given an assignment of the variables (A1 = true, A2 = false, A3 = false ...) I have to find those expressions that evaluate to false.
The same set of expressions will be evaluated for multiple (10-100) assignments.
For this purpose:
How should I store the expressions on disk so I can load and parse them fast? (I currently have them either as some specialized DSL or as a more or less normalized (and dead slow) relational data structure, but I can change that.)
Is there a fast algorithm / data structure for evaluating such expressions that I can use?
Do implementations on the JVM exist?
You may want to look at converting your expressions into Conjunctive Normal Form and combining like terms. You can then maintain a two-way mapping between an expression and its set of terms, any of which evaluating to false implies that the whole expression evaluates to false. For each assignment of variables, start with the set of expressions and evaluate CNF terms until one evaluates to false. If that term is false, then all expressions involving that term are also false, so those expressions can be removed from the set.
Whether such an approach fits your case can't be said without looking at the expressions - with 1000 variables and 20000 expressions, it might turn out that they do not have many CNF terms in common.
Outside of Java, and for much larger numbers of expressions, DNF is possibly more useful, since its implementation on the GPU is obvious.
The SOP answer to this is to store the expressions as strings in RPN (Reverse Polish Notation) and then write a simple stack-machine parser to evaluate them.
Generally, an RPN string can be evaluated almost as fast as an AST (Abstract Syntax Tree) that is already in memory. And the stack-machine parser is dead easy to write.
You seem attached to Java, but have you considered feeding these things to a language that has an eval() function? It would probably reduce the problem to saving an expression in a file and evaluating it. Note that if you don't trust the (source of the) expressions, this has security implications!
Jython comes to mind, but there are probably several that would make very short work of this.
If you're married to Java, you could probably implement a recursive descent parser for boolean algebra. But that's quite a bit more involved.
UPDATE: The following site has code that might help.
Convert your list of expressions into source code for a function that, when called with the values of the variables, evaluates all the expressions and returns an indication of which ones evaluate to false. Compile the function, then call it for your different variable assignments.
I have done something similar using Python. The only parsing and interpretation I had to write was translating the input boolean operators '&', '|', '~' into their Python equivalents.
Your problem size seems quite OK for a Python solution.
You could build an index where, for each variable, you record two sets of expressions: those where the variable occurs positively and those where it occurs negatively. Depending on the values of the variables, you collect those expressions that could become false due to that variable (positive occurrences if the variable is set to false, and vice versa). Edit: These are just candidates; you still need to evaluate them to find out if they really become false.
Whether this helps compared to just evaluating all your expressions depends on the structure of your expressions and how many evaluate to false.
Try converting them into CNF and using MiniSat to check whether each expression evaluates to true or false.