Conjunctive Normal Form (CNF) is a standardized notation for propositional formulas that dictates that every formula be written as a conjunction of disjunctions. Every Boolean formula can be converted to CNF. So, for example:
A | (B & C)
Has a representation in CNF like this:
(A | B) & (A | C)
Is it a best practice in programming to write conditionals in CNF?
No, this is not a good idea. Conjunctive normal form is primarily used in theoretical computer science: there are algorithms (e.g. SAT solvers) that operate on formulas in CNF, and proofs about time complexity and NP-hardness.
From a pragmatic point of view, you should write code using the Boolean operators that most "naturally" describe the logic. This means taking full advantage of nested expressions, operators like XOR, negation, and so on. As your example shows, CNF often works against this goal of "naturalness" because the expression is longer and often repeats subexpressions.
As a theoretical side note: in the worst case, an unrestricted Boolean formula containing n operators can turn into a CNF formula whose length is exponential in n, so CNF can potentially blow up a formula by a very large amount. A sequence of examples illustrating this behavior:
(A & B) | (C & D) ==
(A | C) & (A | D) & (B | C) & (B | D).
(A & B) | (C & D) | (E & F) ==
(A | C | E) & (A | C | F) & (A | D | E) & (A | D | F) & (B | C | E) & (B | C | F) & (B | D | E) & (B | D | F).
(A & B) | (C & D) | (E & F) | (G & H) ==
(A | C | E | G) & (A | C | E | H) & (A | C | F | G) & (A | C | F | H) & (A | D | E | G) & (A | D | E | H) & (A | D | F | G) & (A | D | F | H) & (B | C | E | G) & (B | C | E | H) & (B | C | F | G) & (B | C | F | H) & (B | D | E | G) & (B | D | E | H) & (B | D | F | G) & (B | D | F | H).
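To see the blow-up mechanically, here is a small Python sketch (my own illustration, not part of the original answer) that distributes an OR of AND-groups into CNF clauses. The clause count doubles with every extra conjunct: 4, 8, 16, ... for the three examples above.

from itertools import product

def dnf_to_cnf(and_groups):
    # Pick one literal from every AND-group; each pick becomes one OR-clause.
    return list(product(*and_groups))

examples = [
    [("A", "B"), ("C", "D")],
    [("A", "B"), ("C", "D"), ("E", "F")],
    [("A", "B"), ("C", "D"), ("E", "F"), ("G", "H")],
]
for groups in examples:
    clauses = dnf_to_cnf(groups)
    print(len(groups), "conjuncts ->", len(clauses), "clauses:",
          " & ".join("(" + " | ".join(c) + ")" for c in clauses))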
How do I convert the grammar below to CNF?
S → ASA | aB
A → B | S
B → b | ε
We can split the transformation of context-free grammars to Chomsky Normal Form into four steps.
The Bin step ensures that no alternative in any production contains more than two terminals or non-terminals.
The Del step "deletes" all empty-string (ε) tokens.
The Unit step "inlines" productions that map directly to a single non-terminal.
The Term step makes sure terminals and non-terminals are not mixed in any alternative.
Starting from your example and describing each step, the transformation to CNF can look like the following.
Bin
The alternatives in production S are split up into smaller productions. The new non-terminal is T.
S → AT | aB
A → B | S
B → b | ε
T → SA
Del
From the production of S, nullable non-terminals A and B were factored out.
S → AT | T | aB | a
A → B | S
B → b | ε
T → SA
For the production of A, no action need be taken.
S → AT | T | aB | a
A → B | S
B → b | ε
T → SA
From the production of B, the empty-string token was removed.
S → AT | T | aB | a
A → B | S
B → b
T → SA
From the production of T, the nullable non-terminal A was factored out.
S → AT | T | aB | a
A → B | S
B → b
T → SA | S
Unit
"Inlined" the production for B in A.
S → AT | T | aB | a
A → b | S
B → b
T → SA | S
Term
Replaced a terminal "a" in production S with the new non-terminal U.
S → AT | T | UB | a
A → b | S
B → b
T → SA | S
U → a
And you're done.
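As a side note, here is a minimal Python sketch (my own illustration, not from the original answer; the grammar encoding is an assumption) of how the Del step can find the nullable non-terminals, i.e. the symbols that can derive the empty string. For the grammar above it reports A and B, which is exactly what the Del step factored out.

# Each production maps a non-terminal to a list of alternatives;
# an alternative is a list of symbols, and [] stands for the epsilon alternative.
grammar = {
    "S": [["A", "T"], ["a", "B"]],
    "A": [["B"], ["S"]],
    "B": [["b"], []],
    "T": [["S", "A"]],
}

def nullable_nonterminals(grammar):
    nullable = set()
    changed = True
    while changed:  # iterate to a fixed point
        changed = False
        for head, alternatives in grammar.items():
            if head in nullable:
                continue
            for alt in alternatives:
                # [] (epsilon) trivially satisfies this, so B is found first.
                if all(symbol in nullable for symbol in alt):
                    nullable.add(head)
                    changed = True
                    break
    return nullable

print(nullable_nonterminals(grammar))  # => {'B', 'A'} (set order may vary)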
I am practicing from a textbook and cannot figure out the reason for the result I see.
The Prolog database is:
f(1,one).
f(s(1),two).
f(s(s(1)),three).
f(s(s(s(X))),N) :- f(X,N).
When I run the program with
f(s(s(s(s(s(s(1)))))),C).
The response of the program is "C = one."
How does it work?
Prolog is very simple. Its programs consist of rules of the form
to_prove_this :- must_prove_this, and_this. % and perhaps also,
to_prove_this :- must_otherwise_prove_this, and_this_too.
So your program just means
1. to prove `f( 1, one)` :- there's no need to prove anything more.
2. to prove `f( s(1), two)` :- there's no need to prove anything more.
3. to prove `f( s(s(1)), three)` :- there's no need to prove anything more.
4. to prove `f( s(s(s(X))), N)` :- must prove `f( X, N)`.
So you start with
to prove: f( s(s(s(s(s(s(1)))))), C).
Can rule 1. be used?
| Is `f( s(s(s(s(s(s(1)))))), C)` similar to `f(1,one)`?
| | Is `f` similar to `f`?
| | -- Yes.
| | Is `s(s(s(s(s(s(1))))))` similar to `1`?
| | -- No.
| -- No, `f( s(s(s(s(s(s(1)))))), C)` and `f(1,one)` are not similar.
-- No, the rule 1. can't be used.
Can rule 2. be used?
| Is `f( s(s(s(s(s(s(1)))))), C)` similar to `f(s(1),two)`?
. . . . .
. . . . .
. . . . .
Can rule 4. be used?
| Is `f( s(s(s(s(s(s(1)))))), C)` similar to `f(s(s(s(X))),N)`?
| | Is `f` similar to `f`?
| | -- Yes.
| | Is `s(s(s(s(s(s(1))))))` similar to `s(s(s(X)))`?
| | | Is `s(s(s(s(s(1)))))` similar to `s(s(X))`?
| | | | Is `s(s(s(s(1))))` similar to `s(X)`?
| | | | | Is `s(s(s(1)))` similar to `X`?
| | | | | -- Yes, with `X = s(s(s(1)))`.
| | Is `C` similar to `N`?
| | -- Yes, with `C = N`.
| -- Yes, it is similar, with `X = s(s(s(1)))` and `C = N`.
-- Yes, it can be used, with `X = s(s(s(1)))` and `C = N`.
This means, we need to prove f(X,N) now, with X = s(s(s(1))) and C = N.
This means, we need to prove f(X1,N1) now, with X1 = s(s(s(1))) and C = N1.
This means, we need to prove f( s(s(s(1))), C ) now.
Can rule 1. be used?
. . . .
. . . .
. . . .
This means, we need to prove f(X,N) now, with X = 1 and C = N.
This means, we need to prove f(X2,N2) now, with X2 = 1 and C = N2.
This means, we need to prove f( 1, C ) now.
Can rule 1. be used now?
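If it helps to see the same control flow outside Prolog, here is a rough Python sketch (my own illustration, not real Prolog; terms are plain strings rather than proper terms) that mimics the four clauses: three facts plus a rule that strips three s(...) wrappers and recurses. Running it on s(s(s(s(s(s(1)))))) strips three wrappers twice, reaches 1, and answers "one", exactly as in the trace above.

# f(1, one).  f(s(1), two).  f(s(s(1)), three).  f(s(s(s(X))), N) :- f(X, N).
def f(term):
    facts = {"1": "one", "s(1)": "two", "s(s(1))": "three"}
    if term in facts:                       # rules 1-3: nothing more to prove
        return facts[term]
    if term.startswith("s(s(s(") and term.endswith(")))"):
        return f(term[len("s(s(s("):-3])    # rule 4: strip s(s(s(...))) and recurse
    raise ValueError("no clause matches " + term)

print(f("s(s(s(s(s(s(1))))))"))             # => one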
In school we have been studying metalanguages, in particular, railroad diagrams and EBNF. I received a question where an imaginary programming language (winston) was described in EBNF. Here it is:
Digit = 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
LCase = a | b | c | d
UCase = A | B | C | D | E | F | G | H | I | J
Operator = + | - | * | /
Logical = < | > | <= | >= | <>
Constant = [-] <Digit>{<Digit>}
Identifier = <UCase>{<LCase> | <Digit>}
Assignment = Set <Identifier> to <Constant> | <Identifier>
{<Operator>(<Constant> | <Identifier>)}
Condition = <Identifier> <Logical> (<Identifier> | <Constant>)
{(and | or) <Identifier> <Logical> (<Identifier> | <Constant>)}
When = (<Assignment> | <Condition> {<Assignment> | <Condition>})
Statement = <Input> | <Output> | <Assignment> | <Condition> | <When> | <Pretest> | <Posttest>
Program = Start <Statement> {! <Statement>} Stop
The program written below was made with winston but doesn't execute properly. Use the EBNF descriptions to identify the error.
Start
Input J1
Input J2
When (J1 = J2, Set A3 to 0), (J1 < J2, Set A3 to -1), Set A3 to 1
Output A3
Stop
My working so far: to me, this program seems legitimate. It is a program, so it must start with "Start" and end with "Stop", which it does. The statements in the middle seem to be allowed there. Can someone point me in the right direction?
Also, can someone tell me what <Statement> means in the EBNF description of a Program?
I think it means statements like when and if, but I'm not too sure. Thanks for the help :)
The When statement in the program is comma-separated, but the grammar doesn't specify commas at all.
J1 = J2 -- there is no = comparison operator in the grammar (see Logical), so J1 = J2 is neither an Assignment nor a Condition and is thus invalid.
<Statement> -- the grammar wraps symbols in angle brackets when they are used on the right-hand side, e.g. Identifier on the left-hand side and, later, <Identifier> in the Assignment rule, so <Statement> simply refers to the Statement production. (Strictly speaking, that bracketing convention doesn't look like valid EBNF.)
I've encountered some obj-c code and I'm wondering if there's a way to simplify it:
#if ( A && !(B || C)) || ( B || C )
is this the same as?
#if ( A || B || C )
If not, is there another way to formulate it that would be easier to read?
[edit]
I tried the truth table before asking the question, but thought I had to be missing something because I doubted that Foundation.framework/Foundation.h would employ this more complex form. Is there a good reason for it?
Here's the original code (from Foundation.h):
#if (TARGET_OS_MAC && !(TARGET_OS_EMBEDDED || TARGET_OS_IPHONE)) || (TARGET_OS_EMBEDDED || TARGET_OS_IPHONE)
Yes. As others have said, you can build a truth table for it, and De Morgan's rules can also help.
However, I think the best option is to use a Karnaugh map. It takes a few minutes to learn, but Karnaugh maps let you consistently find a minimal expression for Boolean logic. Truth tables can verify a minimization, but they can't give you one.
Here's how I got it:
First, the table layout:
AB
00 01 11 10
0| | | | |
C 1| | | | |
Now, considering your expression: whenever B || C is true, the whole thing is true:
AB
00 01 11 10
0| | T | T | |
C 1| T | T | T | T |
This leaves only two cases. In either case, the right side evaluates to false. For 000, the left side also evaluates to false (0 && !(whatever) is false). For 100, 1 && !(0 || 0) evaluates to true, so the whole statement is true. Filling in:
AB
00 01 11 10
0| F | T | T | T |
C 1| T | T | T | T |
Now, we only need to "cover" all the truths. "C" will cover the bottom row. "B" will cover the middle square (of four values). Thus, "B || C" covers all but the top right square. Now, "A" will cover the right four-space square. It's OK that this is redundant. Thus, "A || B || C" covers all the true squares and omits the only false one.
Get pen + paper and try it; there are only 8 possible inputs.
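For instance, a quick brute-force check (my own sketch in Python, not part of the original answers) that enumerates all eight inputs and compares the two expressions:

from itertools import product

for A, B, C in product([False, True], repeat=3):
    original   = (A and not (B or C)) or (B or C)
    simplified = A or B or C
    assert original == simplified, (A, B, C)

print("The two expressions agree on all 8 inputs.")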
They are the same. You can use Truth Table Generator to test it. Both these expressions give false only in one case, when A, B and C are false.
A | B | C | (B || C) | (!(B || C)) | (A && !(B || C)) | (A && !(B || C)) || (B || C) | (A || B || C)
------------------------------------------------------------------------------------------------------
T | T | T | T | F | F | T | T
T | T | F | T | F | F | T | T
T | F | T | T | F | F | T | T
T | F | F | F | T | T | T | T
F | T | T | T | F | F | T | T
F | T | F | T | F | F | T | T
F | F | T | T | F | F | T | T
F | F | F | F | T | F | F | F
Based on the last two columns, I would say yes.
Yes, it is the same. Using De Morgan's rules:
(A && !(B || C)) || (B || C) = (A && !B && !C) || (B || C).
The left part, (A && !B && !C), is true exactly when A = 1 and B, C = 0, and in that case A || B || C is true as well. In every other case both expressions reduce to (B || C). So the two are equal.
You could also say:
(A && !(B || C)) || (B || C) rewrites to (A && !W) || W, with W = B || C (1)
(1) rewrites to (A || W) && (!W || W) by distributing || over && (2)
(2) rewrites to (A || W) && true, which is just A || W (3)
(3) is A || B || C
Yes, the two expressions are equivalent. (I just wrote a couple of functions to test all eight possibilities.)
Does anyone know the rules for valid Ruby variable names? Can it be matched using a RegEx?
UPDATE: This is what I could come up with so far:
^[_a-z][a-zA-Z0-9_]+$
Does this seem right?
Identifiers are pretty straightforward. They begin with a letter or an underscore and contain letters, underscores and digits. Local variables can't (or shouldn't?) begin with an uppercase letter, so you could just use a regex like this:
/^[a-z_][a-zA-Z_0-9]*$/
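If you want to sanity-check that pattern, here is a tiny script (written in Python purely for illustration; this particular character-class syntax behaves the same way in Ruby's regex engine):

import re

pattern = re.compile(r"^[a-z_][a-zA-Z_0-9]*$")
for name in ["foo", "_bar", "baz_42", "Foo", "3rd", "with space"]:
    print(name, "->", bool(pattern.match(name)))
# foo, _bar and baz_42 match; Foo, 3rd and "with space" do not.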
It's possible for variable names to use Unicode letters, in which case most of the regexes above won't match them.
varname = "\u2211" # => "∑"
eval(varname + '= "Tony the Pony"') # => "Tony the Pony"
puts varname # => ∑
local_variable_identifier = /Insert large regular expression here/
varname =~ local_variable_identifier # => nil
See also "Fun with Unicode" in either the Ruby 1.9 Pickaxe or at Fun with Unicode.
According to http://rubylearning.com/satishtalim/ruby_names.html, a Ruby variable name consists of:
A name is an uppercase letter, lowercase letter, or an underscore ("_"), followed by Name characters (this is any combination of upper- and lowercase letters, underscores and digits).
In addition, global variables begin with a dollar sign, instance variables with a single at-sign, and class variables with two at-signs.
A regular expression to match all that would be:
%r{
(\$|@{1,2})? # optional leading punctuation
[A-Za-z_] # at least one upper case, lower case, or underscore
[A-Za-z0-9_]* # optional characters (including digits)
}x
Hope that helps.
I like #aboutruby's answer, but just to complete it, here's the equivalent using POSIX bracket expressions.
/^[_[:lower:]][_[:alnum:]]*$/
Or, since a-z is actually shorter than [:lower:]:
/^[_a-z][_[:alnum:]]*$/
I think /^(\$){0,1}[_a-zA-Z][a-zA-Z0-9_]*([?!]){0,1}$/ is a bit closer to what you will need...
It depends on whether you want to match method names as well.
If you are trying to match a name that might be encountered in an expression, then it might start with $ and it might end with ? or !. If you know for sure that it is just a local variable then the rule will be much simpler.
I was trying to figure one out for a Rails patch, and Matthew Draper wrote this one, using the Ruby parser as a reference:
/\A(?![A-Z0-9])(?:[[:alnum:]_]|[^\0-\177])+\z/
And here it is, straight from the horse's mouth. (The horse in this case is the Draft ISO Ruby Specification):
local-variable-identifier → ( lowercase-character | _ ) identifier-character *
identifier-character → lowercase-character | uppercase-character | decimal-digit | _
uppercase-character → A | B | C | D | E | F | G | H | I | J | K | L | M | N | O | P | Q | R | S | T | U | V | W | X | Y | Z
lowercase-character → a | b | c | d | e | f | g | h | i | j | k | l | m | n | o | p | q | r | s | t | u | v | w | x | y | z
decimal-digit → 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
In Ruby 1.9, using named groups, you can translate this literally:
local_variable_identifier = %r{
(?<uppercase_character> A | B | C | D | E | F | G | H | I | J | K | L | M
| N | O | P | Q | R | S | T | U | V | W | X | Y | Z
){0}
(?<lowercase_character> a | b | c | d | e | f | g | h | i | j | k | l | m
| n | o | p | q | r | s | t | u | v | w | x | y | z
){0}
(?<decimal_digit> 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9){0}
(?<identifier_character> \g<lowercase_character>
| \g<uppercase_character>
| \g<decimal_digit>
| _
){0}
( \g<lowercase_character> | _ ) \g<identifier_character>*
}x
Of course, this is not how you would really write it.