How to draw a circuit diagram using only NOR and NOT gates?

I am approaching this with double inversion, ~(~Y), but cannot find a way to draw a circuit diagram for Y = A(B+CD).
Any advice would be really appreciated!

~(X nor Y) is (X or Y)
(~X nor ~Y) is (X and Y)
~(~X) is X
So,
CD = (~C nor ~D)
B+CD = ~(B nor (~C nor ~D))
A(B+CD) = ~A nor ~(~(B nor (~C nor ~D))) = ~A nor (B nor (~C nor ~D))
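As a quick sanity check (this sketch is mine, not part of the original answer), you can brute-force all 16 input combinations in C, modeling NOR as a helper function:

#include <assert.h>
#include <stdio.h>

/* NOR as the only primitive; NOT x is modeled as (x NOR x). */
static int nor(int a, int b) { return !(a | b); }

int main(void) {
    for (int a = 0; a <= 1; a++)
    for (int b = 0; b <= 1; b++)
    for (int c = 0; c <= 1; c++)
    for (int d = 0; d <= 1; d++) {
        int y  = a & (b | (c & d));          /* Y = A(B+CD), the target    */
        int na = nor(a, a);                  /* ~A                         */
        int cd = nor(nor(c, c), nor(d, d));  /* CD = ~C nor ~D             */
        int y2 = nor(na, nor(b, cd));        /* ~A nor (B nor (~C nor ~D)) */
        assert(y == y2);
    }
    puts("NOR-only circuit matches Y = A(B+CD) on all 16 inputs.");
    return 0;
}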


How do you use the Ring solver in Cubical Agda?

I have started playing around with Cubical Agda. The last thing I tried was building the type of integers (assuming the type of naturals is already defined) in a way similar to how it is done in classical mathematics (see the construction of integers on Wikipedia). This is what I have:
data dInt : Set where
  _⊝_ : ℕ → ℕ → dInt
  canc : ∀ a b c d → a + d ≡ b + c → a ⊝ b ≡ c ⊝ d
  trunc : isSet dInt
After doing that, I wanted to define addition
_++_ : dInt → dInt → dInt
(x ⊝ z) ++ (u ⊝ v) = (x + u) ⊝ (z + v)
(x ⊝ z) ++ canc a b c d u i = canc (x + a) (z + b) (x + c) (z + d) {! !} i
...
I am now stuck on the part between the two braces, where a term of type x + a + (z + d) ≡ z + b + (x + c) is required. Not wanting to prove this by hand, I wanted to use the ring solver made in Cubical Agda, but I could never manage to make it work, even when trying to set it up for simple ring equalities like x + x + x ≡ 3 * x.
How can I make it work? Is there a minimal example of making it work for the naturals? There is a file NatExamples.agda in the library, but it forces you to rewrite your equalities in a convoluted way.
You can see how the solver for natural numbers is supposed to be used in this file in the cubical library:
Cubical/Tactics/NatSolver/Examples.agda
Note that this solver is different from the solver for commutative rings, which is designed for proving equations in abstract rings and is explained here:
Cubical/Tactics/CommRingSolver/Examples.agda
However, if I read your problem correctly, the equality you want to prove requires the use of other propositional equalities in Nat. This is not supported by any solver in the cubical library (and as far as I know, the standard library doesn't support it either). But of course, you can use the solver for all the steps that don't use other equalities.
Just in case you didn't spot this: here is a definition of the integers in math style using the SetQuotients of the cubical library. SetQuotients help you avoid the work related to your third constructor, trunc. This means you basically just need to show that some constructions are well defined, as you would in 'normal' math.
I've successfully used the ring solver for exactly the same problem: defining Int as a quotient of ℕ ⨯ ℕ. You can find the complete file here; the relevant parts are the following:
Non-cubical propositional equality to path equality:
open import Cubical.Core.Prelude renaming (_+_ to _+̂_)
open import Relation.Binary.PropositionalEquality renaming (refl to prefl; _≡_ to _=̂_) using ()
fromPropEq : ∀ {ℓ A} {x y : A} → _=̂_ {ℓ} {A} x y → x ≡ y
fromPropEq prefl = refl
An example of using the ring solver:
open import Function using (_$_)
import Data.Nat.Solver
open Data.Nat.Solver.+-*-Solver
using (prove; solve; _:=_; con; var; _:+_; _:*_; :-_; _:-_)
reorder : ∀ x y a b → (x +̂ a) +̂ (y +̂ b) ≡ (x +̂ y) +̂ (a +̂ b)
reorder x y a b = fromPropEq $ solve 4 (λ x y a b → (x :+ a) :+ (y :+ b) := (x :+ y) :+ (a :+ b)) prefl x y a b
So here, even though the ring solver gives us a proof of _=̂_, we can use _=̂_'s K rule and _≡_'s reflexivity to turn it into a path equality, which can then be used further downstream, e.g. to prove that Int addition is representative-invariant.

Reduce Lambda Term to Normal Form

I just learned about lambda calculus and I'm having issues trying to reduce
(λx. (λy. y x) (λz. x z)) (λy. y y)
to its normal form. I get to (λy. y (λy. y y)) (λz. (λy. y y) z) and then get kind of lost. I don't know where to go from here, or if it's even correct.
(λx. (λy. y x) (λz. x z)) (λy. y y)
As @ymonad notes, one of the y parameters needs to be renamed to avoid capture (conflating different variables that only coincidentally share the same name). Here I rename the latter instance (using α-equivalence):
(λx. (λy. y x) (λz. x z)) (λm. m m)
Next step is to β-reduce. In this expression we can do so in one of two places: we can either reduce the outermost application (λx) or the inner application (λy). I'm going to do the latter, mostly on arbitrary whim / because I thought ahead a little bit and think it will result in shorter intermediate expressions:
(λx. (λz. x z) x) (λm. m m)
Still more β-reduction to do. Again I'm going to choose the inner expression because I can see where this is going, but it doesn't actually matter in this case, I'll get the same final answer regardless:
(λx. x x) (λm. m m)
Side note: these two lambda expressions (each also known as the "Mockingbird", per Raymond Smullyan) are actually α-equivalent, and the entire expression is the (in)famous Ω-combinator. If we ignore all that, however, and apply another β-reduction:
(λm. m m) (λm. m m)
Ah, that's still β-reducible. Or is it? This expression is α-equivalent to the previous. Oh dear, we appear to have found ourselves stuck in an infinite loop, as is always possible in Turing-complete (or should we say Lambda-complete?) languages. One might denote this as our original expression equalling "bottom" (in Haskell parlance), denoted ⊥:
(λx. (λy. y x) (λz. x z)) (λy. y y) = ⊥
Is this a mistake? Well, some good LC theory to know is:
if an expression has a β-normal form, then it will be the same β-normal form no matter what order of reductions was used to reach it, and
if an expression has a β-normal form, then normal order evaluation is guaranteed to find it.
So what is normal order? In short, it is β-reducing the outermost expression at every step. Let's take this expression for another spin!
(λx. (λy. y x) (λz. x z)) (λm. m m)
(λy. y (λm. m m)) (λz. (λm. m m) z)
(λz. (λm. m m) z) (λm. m m)
(λm. m m) (λm. m m)
Darn. Looks like this expression has no normal form – it diverges (doesn't terminate).

Pseudocode to Logic [Predicate Logic in CS]

We are trying to translate a very simple program from pseudocode to predicate logic.
The program is straightforward and does not contain loops (it is sequential).
It consists only of variable assignments and if-else statements.
Unfortunately we do not have any good material to work from. It would be great if someone has some
example "conversions" of simple five-line code snippets, or
links to sources of free information that describe the topic at a surface level. (We only do predicate and propositional logic and do not want to dive much deeper into the logic space.)
Kind regards
UPDATE:
After enough research I found the solution and can share it, including examples.
The trick is to think of the program state as a set of all our arbitrary variables, including a program counter which stands for the current instruction to be executed.
x = input;
x = x*2;
if (y > 0)
    x = x*y;
else
    x = y;
We will form the predicate P(x, i, y, pc), where i holds the input and pc is the program counter.
From here we can build premises, e.g.:
∀i∀x∀y (P(x, i, y, 1) ⇒ P(i, i, y, 2))
∀i∀x∀y (P(x, i, y, 2) ⇒ P(x*2, i, y, 3))
∀i∀x∀y (P(x, i, y, 3) ∧ y > 0 ⇒ P(x*y, i, y, 4))
∀i∀x∀y (P(x, i, y, 3) ∧ ¬(y > 0) ⇒ P(y, i, y, 4))
By incrementing the program counter we make sure that the premises apply in order. Now we are able to construct a proof when given a premise for the input, e.g. P(x, 4, 7, 1).
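For illustration, here is the same program written out in C (this sketch is mine), printing the state (x, i, y, pc) before and after each instruction, mirroring the premises above:

#include <stdio.h>

int main(void) {
    int i = 4, y = 7;  /* matches the premise P(x, 4, 7, 1): input 4, y = 7 */
    int x = 0, pc = 1; /* x is arbitrary before instruction 1 runs          */

    printf("P(%d, %d, %d, %d)\n", x, i, y, pc);
    x = i;     pc = 2;                 /* instruction 1: x = input */
    printf("P(%d, %d, %d, %d)\n", x, i, y, pc);
    x = x * 2; pc = 3;                 /* instruction 2: x = x*2   */
    printf("P(%d, %d, %d, %d)\n", x, i, y, pc);
    if (y > 0) x = x * y; else x = y;  /* instruction 3: if-else   */
    pc = 4;
    printf("P(%d, %d, %d, %d)\n", x, i, y, pc);  /* final state: x = 56 */
    return 0;
}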

Equality for two simple expressions with only bitwise operations

Given the following two functions in C language:
int f(int x, int y, int z) {
    return (x & y) | ((~x) & z);
}

int g(int x, int y, int z) {
    return z ^ (x & (y ^ z));
}
The results of the two functions are equal for any integer inputs.
I just wonder about the mathematics relating the two expressions.
I've first seen the expression for function f in the SHA-1 algorithm on wikipedia.
http://en.wikipedia.org/wiki/Sha1
In the "SHA-1 pseudocode" part, inside the Main loop:
if 0 ≤ i ≤ 19 then
f = (b and c) or ((not b) and d)
k = 0x5A827999
...
Some open source implementations use the form in function g instead: z ^ (x & (y ^ z)).
I wrote a program that iterates over all possible values of x, y, and z, and all the results are equal.
How can one transform the form
(x & y) | ((~x) & z)
into the form
z ^ (x & (y ^ z))
mathematically, rather than just proving the equality?
Since bitwise operations are equivalent to Boolean operations on the individual bits, you can prove the equivalence simply by enumerating the eight assignments to the triple (x, y, z).
Fill out the truth tables for each of the two functions, and then compare the eight entries. If all eight match, the two functions are equivalent; otherwise, the functions are different.
You do not need to do it manually either: plug both functions into three nested loops that give x, y, and z values from zero to one, inclusive, and compare the results of f(x,y,z) and g(x,y,z).
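For instance, a direct C implementation of that exhaustive check, reusing f and g from the question, might look like this:

#include <assert.h>
#include <stdio.h>

int f(int x, int y, int z) { return (x & y) | ((~x) & z); }
int g(int x, int y, int z) { return z ^ (x & (y ^ z)); }

int main(void) {
    for (int x = 0; x <= 1; x++)
        for (int y = 0; y <= 1; y++)
            for (int z = 0; z <= 1; z++)
                assert(f(x, y, z) == g(x, y, z));
    puts("f and g agree on all eight single-bit assignments.");
    return 0;
}

Because both functions apply the same operations to every bit position independently, agreement on single-bit inputs implies agreement on all integers.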
You can also do this using a Karnaugh map. From the truth table for z ^ (x & (y ^ z)), the Karnaugh map is:
        yz=00  yz=01  yz=11  yz=10
x=0       0      1      1      0
x=1       0      0      1      1
As can be seen, you can make two groups from the diagram: the x=0 cells with z=1 give ~x & z, and the x=1 cells with y=1 give x & y, yielding (x & y) | (~x & z).

Besides AND/OR/NOT, what's the point of the other logical operators in programming?

I've been programming nearly all of my life (around 20+ years), and I can't remember a single time when I looked at an if-statement and thought, "Hmmm, this would be a good time to use XOR." The entire logical programming universe seems to revolve around just these three.
Granted, with AND/OR/NOT gates, you can make any other logical statement. However, there might be times when combining two or three statements into a single logical statement would save some code. Let's look at the 16 possible combinations of logical connectives:
FALSE = Contradiction = 0, null, NOT TRUE
TRUE = Tautology = 1, NOT FALSE
X = Proposition X = X
NOT X = Negation of X = !X
Y = Proposition Y = Y
NOT Y = Negation of Y = !Y
X AND Y = Conjunction = NOT (X NAND Y)
X NAND Y = Alternative Denial = NOT (X AND Y), !X OR !Y
X OR Y = Disjunction = NOT (!X AND !Y)
X NOR Y = Joint Denial = NOT (X OR Y), !X AND !Y
X ⊅ Y = Material Nonimplication = X AND !Y, NOT(!X OR Y), (X XOR Y) AND X, ???
X ⊃ Y = Material Implication = !X OR Y, NOT(X AND !Y), (X XNOR Y) OR Y, ???
X ⊄ Y = Converse Nonimplication = !X AND Y, NOT(X OR !Y), (X XOR Y) AND Y, ???
X ⊂ Y = Converse Implication = X OR !Y, NOT(!X AND Y), (X XNOR Y) OR X, ???
X XOR Y = Exclusive disjunction = NOT (X IFF Y), NOT (X XNOR Y), X != Y
X XNOR Y = Biconditional = X IFF Y, NOT (X XOR Y), (X AND Y) OR (!X AND !Y)
So, items 1-2 involve zero variables, items 3-6 involve one, and items 7-10 are terms we are familiar with. (Though we don't usually have a NAND operator, at least Perl has "unless" for a built-in NOT.)
Items 11-14 seem like interesting ones, but I've never seen these in programming. Items 15-16 are the XOR/XNOR.
Can any of these be used for AND/OR/NOT simplification? If so, have you used them?
UPDATE: "Not equal" or != is really XOR, which is used constantly. So, XOR is being used after all.
Going to close this question after the Not Equals/XOR realization. Out of the 16 possible operators, programmers use 9 of them:
FALSE, TRUE, X, Y, !X, !Y, AND, OR, XOR (or !=)
The other operators typically don't exist in programming languages:
X NAND Y = Alternative Denial = NOT (X AND Y), !X OR !Y
X NOR Y = Joint Denial = NOT (X OR Y), !X AND !Y
X ⊅ Y = Material Nonimplication = X AND !Y, NOT(!X OR Y), (X XOR Y) AND X, ???
X ⊃ Y = Material Implication = !X OR Y, NOT(X AND !Y), (X XNOR Y) OR Y, ???
X ⊄ Y = Converse Nonimplication = !X AND Y, NOT(X OR !Y), (X XOR Y) AND Y, ???
X ⊂ Y = Converse Implication = X OR !Y, NOT(!X AND Y), (X XNOR Y) OR X, ???
X XNOR Y = Biconditional = X IFF Y, NOT (X XOR Y), (X AND Y) OR (!X AND !Y)
Perhaps there's room for them later on, because NAND/NOR seem pretty handy, and cleaner than typing NOT (X xxx Y).
Consider this:
if(an odd number of conditions are true) then return 1 else return 0
Using and/or/not, you might try
if(one is true || three are true || ... 2n+1 are true) then return 1 else return 0
That's pretty ugly because you end up having to specify each of the 1-sets, 3-sets, 5-sets, ..., 2n+1 sets which are subsets of the set of your conditions. The XOR version is pretty elegant, though...
if(C1 XOR C2 XOR ... XOR CN) then return 1 else return 0
For a large or variable N, this is probably better handled with a loop and counter system anyway, but when N isn't too large (~10), and you aren't already storing the conditions as an array, this isn't so bad. Works the same way for checking an even number of conditions.
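As a minimal C sketch of the three-condition case (the function name is mine; in C, != on bool values behaves exactly like logical XOR):

#include <stdbool.h>
#include <stdio.h>

/* Returns 1 when an odd number of the three conditions hold. */
static int odd_number_true(bool c1, bool c2, bool c3) {
    return (c1 != c2) != c3;  /* chained != is chained XOR */
}

int main(void) {
    printf("%d\n", odd_number_true(true, false, false)); /* 1: one true   */
    printf("%d\n", odd_number_true(true, true, false));  /* 0: two true   */
    printf("%d\n", odd_number_true(true, true, true));   /* 1: three true */
    return 0;
}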
You can come up with similar examples for the others, too. An interesting exercise would be to try programming something like
if((A && !B) || (!A && B)) then return 1 else return 0
And see whether the compiler emits assembly language for ANDs, ORs, and NOTs, or is smart enough to recognize this as XOR and emit a (possibly cheaper) XOR instruction.
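A minimal C version of that experiment (the function name is invented for this sketch), which you can compile with something like gcc -O2 -S and inspect:

#include <stdbool.h>

/* Written with AND/OR/NOT only; semantically this is A XOR B.
   Check the generated assembly to see whether the optimizer
   reduces it to a single xor (or compare) instruction. */
int xor_in_disguise(bool a, bool b) {
    if ((a && !b) || (!a && b))
        return 1;
    return 0;
}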
When programming in Java, I tend to mostly use the following logic functions:
not !
and &&
or ||
xnor ==
xor !=
Extending this to the other basic functions:
material implication !A || B
converse implication A || !B
material nonimplication A && !B
converse nonimplication !A && B
Knowing when to use xor and xnor comes down to simplifying the logic. In general, when you have a complex function:
1) simplify to either CNF ("conjunctive normal form", a product of sums) or DNF ("disjunctive normal form", a sum of products).*
2) remove redundant terms: A && (A || B), A || (A && B) -> A
3) simplify (A || !B) && (!A || B), (!A && !B) || (A && B) -> A == B
4) simplify (A || B) && (!A || !B), (A && !B) || (!A && B) -> A != B
Using simplifications 2-4 can lead to much cleaner code using both the xor and xnor functions.
*It should be noted that a logical function may be much simpler in DNF than in CNF, or vice versa.
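For example, here is a toy C illustration (the predicate is invented for this sketch) of rule 3 collapsing a spelled-out biconditional into ==:

#include <stdbool.h>

/* Before: the biconditional written out in DNF, as in rule 3. */
bool same_sign_verbose(bool a_neg, bool b_neg) {
    return (a_neg && b_neg) || (!a_neg && !b_neg);
}

/* After: the same predicate simplified to xnor, i.e. == on booleans. */
bool same_sign(bool a_neg, bool b_neg) {
    return a_neg == b_neg;
}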
"Items 11-14 seem like interesting ones, but I've never seen these in programming."
I disagree. item 12, Material Implication is basically a "IF" statement, it is everywhere in programming.
I see Material Implication same as:
if(A) {
return B
}
Material nonimplication/abjunction use case
Got an instance now where I'd like to do a material nonimplication/abjunction.
Truth Table
╔═══╦═══╦══════════╗
║ P ║ Q ║ P -/-> Q ║
╠═══╬═══╬══════════╣
║ T ║ T ║    F     ║
║ T ║ F ║    T     ║ <-- all I care about: true followed by false.
║ F ║ T ║    F     ║
║ F ║ F ║    F     ║
╚═══╩═══╩══════════╝
I'm dealing with a number of permissions (all luckily true/false) for a number of entities at once and have a roles and rights situation where I want to see if a system user can change another user's permissions. The same operation is being attempted on all the entities at once.
First I want the delta between an entity's old permission state and the new, commonly desired permission state for all entities.
Then I want to compare that delta to the current user's change rights for this specific entity.
I don't care about permissions where the delta requires no change.
I only want a true flag where an action should be blocked.
Example
before:          10001
proposed after:  11001
delta:           01000 <<< I want a material nonimplication...
user rights:     10101 <<< ... between these two...
blocked actions: 01000 <<< ... to produce this flag set
Right now I'm just doing an XOR and then an AND, which computes the same thing.
It kind of code-smells like there's an easier way to do the comparison, but at least in this incredibly stodgy, step-by-step logic, it would be nice to have that operator.
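Here is a minimal C sketch of that computation (my reading of the XOR-then-AND approach), using the bit patterns from the example above:

#include <assert.h>
#include <stdio.h>

int main(void) {
    unsigned before = 0x11; /* 10001: old permission state      */
    unsigned after  = 0x19; /* 11001: proposed new state        */
    unsigned rights = 0x15; /* 10101: bits this user may change */

    unsigned delta   = before ^ after;  /* 01000: bits that must change */
    unsigned blocked = delta & ~rights; /* material nonimplication: delta AND NOT rights */

    assert(blocked == 0x08);            /* 01000: exactly the blocked action */
    printf("blocked actions: 0x%02X\n", blocked);
    return 0;
}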
