How can I convert disjunctive normal form to conjunctive normal form using equivalence rules?

For example, I have (P ∧ ¬Q) ∨ (Q ∧ ¬P) ∨ (¬R ∨ S) and I need to transform it into CNF using equivalence rules. I can more or less get there by applying rules ad hoc, but can anyone show me a method, or a sequence of steps to follow, that works for every boolean expression and reliably reaches CNF, so I can pass my exams with a 10?
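The usual textbook recipe is: eliminate → and ↔, push negations inward with De Morgan's laws and double-negation elimination, then distribute ∨ over ∧ until you have a conjunction of clauses. A minimal Python sketch, assuming the sympy library is acceptable for checking a hand-derived answer:

from sympy import symbols, And, Or, Not
from sympy.logic.boolalg import to_cnf

P, Q, R, S = symbols('P Q R S')
dnf = Or(And(P, Not(Q)), And(Q, Not(P)), Or(Not(R), S))
# distribution-based conversion; prints an equivalent CNF,
# e.g. (P | Q | S | ~R) & (S | ~P | ~Q | ~R)
print(to_cnf(dnf))

Working the example by hand: (P ∧ ¬Q) ∨ (Q ∧ ¬P) distributes to (P ∨ Q) ∧ (¬P ∨ ¬Q) after dropping the tautological clauses, and OR-ing (¬R ∨ S) into each remaining clause gives the CNF shown in the comment.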

Related

Why use temporal logic for interpolation-based model checking?

I am new to the model checking field. I want to know why it is preferred to use linear-time temporal logic (LTL) properties in interpolation-based and bounded model checking. Why can't we directly use propositional logic?
You can also restrict yourself to propositional logic, but then you cannot express interesting properties of your model.
Propositional logic is less expressive than temporal logic. In propositional logic you can only describe one situation/state/world, and model checking is very easy: Given the current state (i.e. the set of true propositions) you only need to evaluate a propositional formula.
In contrast, temporal logics like LTL can talk about the future and the past, with operators like Gϕ saying "ϕ will always be true in the future". A model for temporal logic then is not just the current state, but also a description (transition relation) of what was the case before and how it will develop.
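To make the contrast concrete, here is a minimal sketch (using a hypothetical tuple-based formula encoding) of what "model checking" amounts to for plain propositional logic: evaluating one formula against one state, i.e. a set of true atoms. Temporal logic additionally needs a transition system on top of this.

def holds(formula, state):
    """Evaluate a propositional formula against a state (set of true atoms)."""
    op = formula[0]
    if op == 'atom':
        return formula[1] in state
    if op == 'not':
        return not holds(formula[1], state)
    if op == 'and':
        return all(holds(f, state) for f in formula[1:])
    if op == 'or':
        return any(holds(f, state) for f in formula[1:])
    raise ValueError(f"unknown operator {op!r}")

state = {'p', 'r'}                                   # the set of true propositions
phi = ('and', ('atom', 'p'), ('not', ('atom', 'q')))
print(holds(phi, state))                             # True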

How can I get rearranged forms of a formula programmatically?

For example, if I have I = V / R as input, I want V = I * R and R = V / I as output. I realize this can be a broad question, but how should I get started with it? Should I use a stack/tree, as when building a postfix-notation interpreter?
You need to be able to represent the formulas symbolically, and apply algebraic rules to manipulate those formulas.
The easiest way to do this is to define a syntax that will accept your formulas, ideally written down explicitly as a BNF grammar. With that you can build a parser for such formulas; done appropriately, your parser can build an abstract syntax tree (AST) representing the formula. You can do this with tools like lex and yacc or ANTLR. Here's my advice on how to do this with a custom recursive descent parser: Is there an alternative for flex/bison that is usable on 8-bit embedded systems?
Once you have trees encoding your formulas, you can implement procedures to modify the trees according to algebraic laws, such as:
X=Y/Z => X*Z = Y if Z ~= 0
Now you can implement such a rule by writing procedural code that climbs over the tree, finds a match to the pattern, and then smashes the tree to produce the result. This is pretty straightforward compiler technology. If you are enthusiastic, you can probably code a half dozen algebraic laws fairly quickly. You'll discover the code that does this is pretty grotty, what with climbing up and down the tree, matching nodes, and smashing links between nodes to produce the result.
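For illustration, here is a small Python sketch of that procedural approach (the AST classes and the single rule are hypothetical, not from any library): it matches the pattern x = y / z and rewrites the tree into y = x * z, which turns I = V / R into V = I * R.

from dataclasses import dataclass

@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class BinOp:
    op: str          # '=', '*', '/'
    left: object
    right: object

def solve_numerator(eq):
    """Rewrite  lhs = num / den  into  num = lhs * den  (assuming den != 0)."""
    if (isinstance(eq, BinOp) and eq.op == '='
            and isinstance(eq.right, BinOp) and eq.right.op == '/'):
        num, den = eq.right.left, eq.right.right
        return BinOp('=', num, BinOp('*', eq.left, den))
    return eq        # pattern did not match; leave the tree unchanged

ohm = BinOp('=', Var('I'), BinOp('/', Var('V'), Var('R')))   # I = V / R
print(solve_numerator(ohm))                                   # V = I * R, as a tree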
Another way to do this is to use a program transformation system that will let you
define a grammar for your formulas directly,
define (tree) rewrite rules directly in terms of your grammar (e.g., essentially you provide the algebra rule above directly),
apply the rewrite rules on demand for you, and
regenerate the symbolic formula from the AST.
My company's DMS Software Reengineering Toolkit can do this. You can see a fully worked example (too large to copy here) of algebra and calculus at Algebra Defined By Transformation Rules.

In reinforcement learning using feature approximation, does one have a single set of weights or a set of weights for each action?

This question is an attempt to reframe this question to make it clearer.
This slide shows an equation for Q(state, action) in terms of a set of weights and feature functions.
These discussions (The Basic Update Rule and Linear Value Function Approximation) show a set of weights for each action.
The reason they are different is that the first slide assumes you can anticipate the result of performing an action and then find features for the resulting states. (Note that the feature functions are functions of both the current state and the anticipated action.) In that case, the same set of weights can be applied to all the resulting features.
But in some cases, one can't anticipate the effect of an action. Then what does one do? Even if one has perfect weights, one can't apply them to the results of applying the actions if one can't anticipate those results.
My guess is that the second pair of slides deals with that problem. Instead of performing an action and then applying weights to the features of the resulting states, compute features of the current state and apply possibly different weights for each action.
Those are two very different ways of doing feature-based approximation. Are they both valid? The first one makes sense in situations, such as Taxi, in which one can effectively simulate what the environment will do for each action. But in some cases, e.g., cart-pole, that's not possible or feasible. Then it would seem you need a separate set of weights for each action.
Is this the right way to think about it, or am I missing something?
Thanks.
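To illustrate the two parameterizations described in the question, here is a hypothetical numpy sketch with made-up feature functions: one shared weight vector applied to features of the (state, action) pair, versus one weight vector per action applied to features of the state alone.

import numpy as np

n_actions = 3

def phi_sa(state, action):
    """Features of a (state, action) pair, e.g. features of the anticipated outcome."""
    return np.concatenate([state, np.eye(n_actions)[action]])

def phi_s(state):
    """Features of the current state only."""
    return np.asarray(state, dtype=float)

state = np.array([0.5, -1.0])

# (1) One shared weight vector; features depend on both state and action.
w = np.zeros(len(phi_sa(state, 0)))
q_shared = lambda s, a: w @ phi_sa(s, a)

# (2) A separate weight vector per action; features depend on the state only.
W = np.zeros((n_actions, len(phi_s(state))))
q_per_action = lambda s, a: W[a] @ phi_s(s)

print(q_shared(state, 1), q_per_action(state, 1))   # both 0.0 with zero weights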

What's the best way to approach rule validation?

So I'm currently working as an intern at a company and have been tasked with creating the middle-tier layer of a UI rule editor for an analytical engine. As part of this task I have to ensure that all rules created are valid. These rules can be quite complex, consisting of around 10 fields with multiple possibilities for each field.
I'm in way over my head here. I've been trying to find some material to guide me on this task, but I can't seem to find much. Is there any pattern or design approach I can take to break this up into more manageable tasks? A book to read? Any ideas or guidance would be appreciated.
You may want to invest the time to learn a lexer/parser generator, e.g. ANTLR4. You can use the ANTLRWorks 2 IDE to assist with visualization and debugging.
Antlrworks2: http://tunnelvisionlabs.com/products/demo/antlrworks
You can get off the ground by searching for example grammars and then tweaking them for your particular needs.
Grammars: https://github.com/antlr/grammars-v4
Antlr provides output bindings in a number of different languages - so you will likely have one that fits your needs.
This is not a trivial task in any case - but an interesting and rewarding one.
You will need to build the validation algorithm yourself.
Points to follow:
1.) Validate parameters based on the datatypes they support and their compatibility.
2.) Determine which operators may follow an operand of a given datatype.
3.) The result of a subexpression must in turn be compatible with the next operand or operator.
Also provide a feature for simulating a rule, where the user can select the dataset against which the rule is to be fired.
For example, take the rule
a + b > c
Possible combinations:
1.) a and b can be a string, a number, or an integer.
2.) But if the result of a + b is a string, the operator ">" cannot follow it.
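A rough Python sketch of the datatype checks described above (the AST encoding and type names are made up for illustration): walk the rule's tree bottom-up, compute each node's type, and reject incompatible operator/operand combinations.

NUMERIC = {'int', 'float'}

def infer_type(node, field_types):
    kind = node[0]
    if kind == 'field':                       # e.g. ('field', 'a')
        return field_types[node[1]]
    if kind == 'literal':                     # e.g. ('literal', 3, 'int')
        return node[2]
    op, left, right = node                    # e.g. ('+', lhs, rhs)
    lt, rt = infer_type(left, field_types), infer_type(right, field_types)
    if op == '+':
        if lt in NUMERIC and rt in NUMERIC:
            return 'float' if 'float' in (lt, rt) else 'int'
        if lt == rt == 'string':
            return 'string'
        raise TypeError(f"cannot add {lt} and {rt}")
    if op == '>':
        if lt in NUMERIC and rt in NUMERIC:
            return 'bool'
        raise TypeError(f"'>' is not defined for {lt} and {rt}")
    raise TypeError(f"unknown operator {op!r}")

# a + b > c  with numeric fields: valid, overall type bool
rule = ('>', ('+', ('field', 'a'), ('field', 'b')), ('field', 'c'))
print(infer_type(rule, {'a': 'int', 'b': 'float', 'c': 'int'}))   # bool

Validation then means catching (or collecting) the TypeError before the rule is saved.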

Normalize boolean expressions for caching reasons: is there a more efficient way than truth tables?

My current project is an advanced tag database with boolean retrieval features. Records are queried with boolean expressions like the following (e.g. in a music database):
funky-music and not (live or cover)
which should yield all funky music in the music database but not live or cover versions of the songs.
When it comes to caching, the problem is that there exist queries which are equivalent but different in structure. For example, applying de Morgan's rule the above query could be written like this:
funky-music and not live and not cover
which would yield exactly the same records but would of course break caching if caching were implemented by hashing the query string, for example.
Therefore, my first intention was to create a truth table of the query which could then be used as a caching key as equivalent expressions form the same truth table. Unfortunately, this is not practicable as the truth table grows exponentially with the number of inputs (tags) and I do not want to limit the number of tags used in one query.
Another approach could be traversing the syntax tree applying rules defined by the boolean algebra to form a (minimal) normalized representation which seems to be tricky too.
Thus the overall question is: Is there a practicable way to implement recognition of equivalent queries without the need for circuit minimization or truth tables (edit: or any other algorithm which is NP-hard)?
The ne plus ultra would be recognizing already-cached subqueries, but that is not a primary goal.
A general and efficient algorithm to determine whether a query is equivalent to "False" could be used to solve NP-complete problems efficiently, so you are unlikely to find one.
You could try transforming your queries into a canonical form. Because of the above, there will always be queries that are very expensive to transform into any given form, but you might find that, in practice, some form works pretty well most of the time - and you can always give up halfway through a transformation if it is becoming too hard.
You could look at http://en.wikipedia.org/wiki/Conjunctive_normal_form, http://en.wikipedia.org/wiki/Disjunctive_normal_form, http://en.wikipedia.org/wiki/Binary_decision_diagram.
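As an illustration of the canonical-form idea, here is a small Python sketch (with a hypothetical tuple-based query AST): it pushes negations inward, flattens and sorts and/or operands, and removes duplicates, so the two example queries above produce the same cache key. It does not catch every equivalence (that would run into the NP-hardness mentioned above), but it handles De Morgan variants and reorderings cheaply.

def canonicalize(expr):
    """Return an order-independent, negation-normal form of a query AST."""
    if isinstance(expr, str):                          # a tag name
        return expr
    op = expr[0]
    if op == 'not':
        inner = expr[1]
        if isinstance(inner, tuple):
            if inner[0] == 'not':                      # double negation
                return canonicalize(inner[1])
            if inner[0] in ('and', 'or'):              # De Morgan
                dual = 'or' if inner[0] == 'and' else 'and'
                return canonicalize((dual, *[('not', a) for a in inner[1:]]))
        return ('not', canonicalize(inner))
    # op is 'and' or 'or': canonicalize children, flatten, dedupe, sort
    kids = []
    for a in expr[1:]:
        c = canonicalize(a)
        if isinstance(c, tuple) and c[0] == op:
            kids.extend(c[1:])                         # flatten nested and/or
        else:
            kids.append(c)
    kids = sorted(set(kids), key=repr)                 # order-independent key
    return kids[0] if len(kids) == 1 else (op, *kids)

q1 = ('and', 'funky-music', ('not', ('or', 'live', 'cover')))
q2 = ('and', 'funky-music', ('not', 'live'), ('not', 'cover'))
assert canonicalize(q1) == canonicalize(q2)            # same cache key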
You can convert the queries into conjunctive normal form (CNF). It is a simple, standard representation of boolean formulae and is normally the input format for SAT solvers.
Most likely "large" queries are going to have lots of conjunctions (rather than lots of disjunctions) so CNF should work well.
The Quine-McCluskey algorithm should achieve what you are looking for. It is similar to Karnaugh maps, but easier to implement in software.