Logic gate whose truth table is 1101 - algorithm

Is there a way to find a logic gate, or to build a more complex gate / bit operator, from a desired truth table?
I wish to have this truth table:
0 0 = 1
0 1 = 1
1 0 = 0
1 1 = 1

There is a formal way of doing this if you cannot see it from the table.
Write the sum of products expression for the table.
If 0 0 = 1, then A'B' is in the expression.
If 0 1 = 1, then A'B is in the expression.
If 1 1 = 1, then AB is in the expression.
Then, F = A'B' + A'B + AB
Now, simplify the expression:
F = A'B' + A'B + AB
F = A'(B' + B) + AB    (Distribution Law)
F = A'(1) + AB         (Complement Law)
F = A' + AB            (Identity Law)
F = A'(B + 1) + AB     (Annulment Law)
F = A'B + A' + AB      (Distribution Law)
F = (A' + A)B + A'     (Distribution Law)
F = (1)B + A'          (Complement Law)
F = B + A'             (Identity Law)
It is not(A) or B.
You have to use a not gate and an or gate.
Alternative Solution
As pointed out in the comments, you can also start from the negated version of the function.
If 1 0 = 0, then F' = AB'.
So F = not(A and not(B)). If you distribute the not (De Morgan), it corresponds to the same boolean expression as above.
Thanks to the comments for pointing out the shorter way.
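(Not part of the answer itself, but easy to sanity-check: a few lines of Python evaluate all three forms over the four input rows and confirm they agree with the 1101 table.)
for a in (0, 1):
    for b in (0, 1):
        sop = ((not a) and (not b)) or ((not a) and b) or (a and b)  # A'B' + A'B + AB
        simplified = (not a) or b                                    # NOT(A) OR B
        negated = not (a and (not b))                                # NOT(A AND NOT(B))
        print(a, b, int(sop), int(simplified), int(negated))
All three columns print 1, 1, 0, 1 for the rows 00, 01, 10, 11.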

The truth table gives exactly one possibility for 0. This is the same pattern as with the OR operator (also one possibility for 0), except that the first operand should then be flipped. So it follows that the first operand should be negated to get the desired result:
NOT(A) OR B
This is actually the implication operator: the expression is false if and only if the first operand is true and the second is not:
A => B
If we represent an operator with the 4 outcome bits for operands 00, 01, 10, and 11, then this operator has code 1101 (read the output column from top to bottom). For all possible operations we have this:
Output | Expression
-------+--------------
0000   | FALSE
0001   | A AND B
0010   | A AND NOT(B)
0011   | A
0100   | NOT(A) AND B
0101   | B
0110   | A XOR B
0111   | A OR B
1000   | NOT(A OR B)
1001   | A <=> B
1010   | NOT(B)
1011   | B => A
1100   | NOT(A)
1101   | A => B
1110   | NOT(A AND B)
1111   | TRUE
There are many alternative ways to write these expressions, but as a rule of thumb: if you need three 1-outputs, look for an OR operator where potentially one or both arguments need to be flipped. If you need one 1-output, do the same with an AND operator. If you have two 1-outputs, do the same with a XOR (or <=>) operator, unless it is a trivial case where one of the operands determines the result on its own.
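If you want to compute these codes rather than read them off, here is a small Python sketch (the helper name code is mine, not from the answer): it evaluates an operator on the four operand pairs in the order 00, 01, 10, 11 and concatenates the outputs.
import itertools

def code(op):
    # read the output column top to bottom: (a, b) = 00, 01, 10, 11
    return ''.join(str(op(a, b)) for a, b in itertools.product((0, 1), repeat=2))

implies = lambda a, b: int((not a) or b)   # A => B
print(code(implies))                        # prints 1101
print(code(lambda a, b: a ^ b))             # prints 0110 (A XOR B)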

If your input is X1 and X2 and you want a 1 as output, you can look at the combination that is easy to define: there is only one 0 in the output column. Define that case and then invert it.
The case for output 0: X1 AND (NOT(X2))
Invert the solution (De Morgan): NOT(X1 AND (NOT(X2))) = NOT(X1) OR X2
You need 1 x NOT and 1 x OR,
or you need 2 x NOT and 1 x AND.
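(A quick check of that De Morgan step in Python; just a verification, not part of the answer.)
# NOT(X1 AND NOT(X2)) == NOT(X1) OR X2 for every input combination
for x1 in (False, True):
    for x2 in (False, True):
        assert (not (x1 and not x2)) == ((not x1) or x2)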

Related

Difference between "&" and "and" in expressions [duplicate]

What is the difference in usage between the logical operators and, or and their bitwise analogs &, |? Is there any difference in efficiency between the various solutions?
Logical operators operate on logical values, while bitwise operators operate on integer bits. Stop thinking about performance, and use them for what they're meant for.
if x and y: # logical operation
...
z = z & 0xFF # bitwise operation
Bitwise = Bit by bit checking
# Example
Bitwise AND: 1011 & 0101 = 0001
Bitwise OR: 1011 | 0101 = 1111
Logical = logical checking, or in other words, True/False checking
# Example
# both are non-zero so the result is True
Logical AND: 1011 && 0101 = 1 (True)
# one number is zero so the result is False
Logical AND: 1011 && 0000 = 0 (False)
# one number is non-zero so the result is non-zero which is True
Logical OR: 1011 || 0000 = 1 (True)
# both numbers are zero so the result is zero which is False
Logical OR: 0000 || 0000 = 0 (False)
Logical operators are used for booleans, since true equals 1 and false equals 0. If you use (binary) numbers other than 1 and 0, then any number that's not zero becomes a one. For example, with int x = 5; (101 in binary) and int y = 0; (0 in binary), printing x && y would print 0, because 101 was changed to 1 and 0 was kept at zero: this is the same as printing true && false, which returns false (0).
On the other hand, bitwise operators perform an operation on every single bit of the two operands (hence the term "bitwise"). For example, with int x = 5; and int y = 8;, printing x | y (bitwise OR) would calculate this:
  0101 (5)
| 1000 (8)
-----------
= 1101 (13)
meaning it would print 13.
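A short Python session makes the distinction concrete (values chosen to mirror the examples above; note that Python's and/or return one of the operands rather than a strict 0/1, unlike C's && and ||):
x, y = 5, 8           # 0101 and 1000 in binary

print(x & y)          # 0  -> bitwise: no bit is set in both 0101 and 1000
print(x | y)          # 13 -> bitwise: 0101 | 1000 = 1101
print(bool(x and y))  # True  -> logical: both operands are non-zero
print(bool(x and 0))  # False -> logical: one operand is zero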

Algorithms: XOR operation [duplicate]

You are given a sum S and a value X; you need to find whether there exist two numbers a and b such that a + b = S and a ^ b = X.
I used a loop up to S/2 and checked each candidate:
for (int i = 0; i <= s/2; i++)
{
    if ((i ^ (s - i)) == X)   /* parentheses needed: == binds tighter than ^ */
        return true;
}
Complexity: O(S).
I need a better approach.
Given that a+b = (a XOR b) + (a AND b)*2 (from here) we can calculate (a AND b):
If S < X, it is not possible; otherwise take S-X. If this is odd, it is not possible; otherwise (a AND b) = (S-X)/2.
Now we can look at the bits of a and b individually. Checking all four combinations, we see there is only one impossible result: XOR and AND both 1 at the same bit.
So if (a XOR b) AND (a AND b) != 0 there is no solution. Otherwise one can find a and b that solve the equation.
int check ( unsigned int S, unsigned int X )
{
    unsigned int Y;
    if(S < X) return(0);
    Y = S - X;
    if(Y & 1) return(0);              /* (a AND b) must be a whole number */
    if((X & (Y/2)) != 0) return(0);   /* XOR and AND may not share a bit  */
    return(1);
}
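(A brute-force cross-check of those conditions in Python; my own test harness, not part of the answer.)
def fast_check(s, x):
    if s < x: return False
    y = s - x
    if y % 2: return False
    return (x & (y // 2)) == 0

def brute(s, x):
    return any((a ^ (s - a)) == x for a in range(s // 2 + 1))

# the O(1) test agrees with the O(S) loop on all small inputs
assert all(fast_check(s, x) == brute(s, x)
           for s in range(200) for x in range(200))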
Without previous knowledge of the equation a+b = (a^b) + (a&b)*2, we can think of another solution. This solution is O(log K), where K is the maximum possible value of S and X. That is, if S and X are unsigned int, then K is 2^32 - 1.
Start from the MSB of S and X. Knowing whether summing at the current bit must produce a carry into the next higher bit, we can decide, for each combination of bits, whether summing the bits to its right must produce a carry.
Case 1) summing must not provide carry
S X | need carry from right
------------------------------
0 0 | no (a = 0, b = 0)
0 1 | impossible
1 0 | yes (a = 0, b = 0)
1 1 | no (a = 1, b = 0 or 0,1)
Case 2) summing must provide carry
S X | need carry from right
------------------------------
0 0 | no (a = 1, b = 1)
0 1 | yes (a = 1, b = 0 or 0,1)
1 0 | yes (a = 1, b = 1)
1 1 | impossible
There is a special case for the MSB, where the carry out doesn't matter.
Case 3) don't care
S X | need carry from right
------------------------------
0 0 | no (a = 0, b = 0 or 1,1)
0 1 | yes (a = 1, b = 0 or 0,1)
1 0 | yes (a = 0, b = 0 or 1,1)
1 1 | no (a = 1, b = 0 or 0,1)
Lastly, the LSB must end with 'no need carry from right'.
The implementation and test code is here. It compares the output of the accepted solution and this solution.
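For reference, here is my own Python rendering of those case tables (not the linked code): scan from the MSB down, tracking whether the current bit is required to produce a carry out; the MSB uses the don't-care table, which means a carry out of the register is allowed to be lost.
def possible(s, x, bits=32):
    need = None                          # carry requirement; None = don't care (MSB)
    for i in range(bits - 1, -1, -1):
        sb, xb = (s >> i) & 1, (x >> i) & 1
        if need is None:                 # Case 3: MSB
            need = bool(sb ^ xb)
        elif not need:                   # Case 1: must not provide carry
            if (sb, xb) == (0, 1):
                return False
            need = (sb, xb) == (1, 0)
        else:                            # Case 2: must provide carry
            if (sb, xb) == (1, 1):
                return False
            need = (sb, xb) in ((0, 1), (1, 0))
    return not need                      # the LSB cannot get a carry from the right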
I assume your input sequence is sorted. If it is, then this problem is as good as finding a pair for a given sum; while checking the sum, also check that (a ^ b == X). That is enough, and the problem can be solved in O(n). URL
And you can't do better than that: in the worst case you have to visit each element at least once.
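(For completeness, a Python sketch of that idea, assuming a sorted array of distinct non-negative values; the function name and shape are mine.)
def find_pair(arr, s, x):
    # classic two-pointer scan: move inward based on the running sum
    i, j = 0, len(arr) - 1
    while i < j:
        t = arr[i] + arr[j]
        if t == s and (arr[i] ^ arr[j]) == x:
            return arr[i], arr[j]
        if t < s:
            i += 1        # sum too small: take a bigger left element
        elif t > s:
            j -= 1        # sum too big: take a smaller right element
        else:
            i += 1        # sum matches but XOR doesn't: try the next pair
            j -= 1
    return None

print(find_pair([1, 2, 3, 4], 5, 1))   # (2, 3): 2+3 = 5 and 2^3 = 1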

Prolog Extended Euclidian Algorithm

I have been struggling with some Prolog code for several days and I couldn't find a way out of it. I am trying to write the extended Euclidean algorithm and find values p and s in the equation:
a*p + b*s = gcd(a,b)
Here is what I have tried:
common(X,X,X,_,_,_,_,_,_).
common(0,Y,Y,_,_,_,_,_,_).
common(X,0,X,_,_,_,_,_,_).
common(X,Y,_,1,0,L1,L2,SF,TF):-
    append(L1,1,[H]),
    append(L2,0,[A]),
    SF is H,
    TF is A,
    common(X,Y,_,0,1,[H],[A],SF,TF).
common(X,Y,_,0,1,L1,L2,SF,TF):-
    append(L1,0,[_,S2]),
    append(L2,1,[_,T2]),
    Q is truncate(X/Y),
    S is 1-Q*0,
    T is 0-Q*1,
    common(X,Y,_,S,T,[S2,S],[T2,T],SF,TF).
common(X,Y,N,S,T,[S1,S2],[T1,T2],SF,TF):-
    Q is truncate(X/Y),
    K is X-(Y*Q),
    si_finder(S1,S2,Q,SF),
    ti_finder(T1,T2,Q,TF),
    common(Y,K,N,S,T,[S2,S],[T2,T],SF,TF).

si_finder(PP,P,Q,C) :- C is PP - Q*P.
ti_finder(P2,P1,QA,C2) :- C2 is P2 - QA*P1.
After a little search I found that the s and p coefficients start from 1 and 0, and their second values are 0 and 1 respectively. Then the sequence continues in a pattern, which is what I compute in the si_finder and ti_finder predicates. The common predicates are where I tried to control the pattern recursively. However, the common predicate keeps returning false on every call. Can anyone help me implement this algorithm in Prolog?
Thanks in advance.
First let's think about the arity of the predicate. Obviously you want to have the numbers A and B as well as the Bézout coefficients P and S as arguments. Since the algorithm is calculating the GCD anyway, it is opportune to have that as an argument as well. That leaves us with arity 5. As we're talking about the extended Euclidean algorithm, let's call the predicate eeuclid/5. Next, consider an example: let's use the algorithm to calculate P, S and GCD for A=242 and B=69:
 quotient (Q) | remainder (B1)  |  P  |  S
--------------+-----------------+-----+------
              | 242             |   1 |   0
              | 69              |   0 |   1
 242/69 = 3   | 242 - 3*69 = 35 |   1 |  -3
 69/35 = 1    | 69 - 1*35 = 34  |  -1 |   4
 35/34 = 1    | 35 - 1*34 = 1   |   2 |  -7
 34/1 = 34    | 34 - 34*1 = 0   | -69 | 242
We can observe the following:
The algorithm stops if the remainder becomes 0
The line before the last row contains the GCD in the remainder column (in this example 1) and the Bézout coefficients in the P and S columns respectively (in this example 2 and -7)
The quotient is calculated from the previous two remainders. So in the next iteration A becomes B and B becomes B1.
P and S are calculated from their respective predecessors. For example: P3 = P1 - 3*P2 = 1 - 3*0 = 1 and S3 = S1 - 3*S2 = 0 - 3*1 = -3. And since it's sufficient to have the previous two P's and S's, we might as well pass them on as pairs, e.g. P1-P2 and S1-S2.
The algorithm starts with the pairs P: 1-0 and S: 0-1
The algorithm starts with the bigger number
Putting all this together, the calling predicate has to ensure that A is the bigger number and, in addition to its five arguments, it has to pass along the starting pairs 1-0 and 0-1 to the predicate describing the actual relation, here a_b_p_s_/7:
:- use_module(library(clpfd)).

eeuclid(A,B,P,S,GCD) :-
    A #>= B,
    GCD #= A*P + B*S,        % <- new
    a_b_p_s_(A,B,P,S,1-0,0-1,GCD).
eeuclid(A,B,P,S,GCD) :-
    A #< B,
    GCD #= A*P + B*S,        % <- new
    a_b_p_s_(B,A,S,P,1-0,0-1,GCD).
The first rule of a_b_p_s_/7 describes the base case, where B=0 and the algorithm stops. Then A is the GCD and P1, S1 are the Bézout coefficients. Otherwise the quotient Q, the remainder B1 and the new values for P and S are calculated and a_b_p_s_/7 is called with those new values:
a_b_p_s_(A,0,P1,S1,P1-_P2,S1-_S2,A).
a_b_p_s_(A,B,P,S,P1-P2,S1-S2,GCD) :-
    B #> 0,
    A #> B,                  % <- new
    Q #= A/B,
    B1 #= A mod B,
    P3 #= P1-(Q*P2),
    S3 #= S1-(Q*S2),
    a_b_p_s_(B,B1,P,S,P2-P3,S2-S3,GCD).
Querying this with the above example yields the desired result:
?- eeuclid(242,69,P,S,GCD).
P = 2,
S = -7,
GCD = 1 ;
false.
And indeed: gcd(242,69) = 1 = 2*242 − 7*69
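(The table above can also be reproduced with a short iterative Python sketch, independent of the Prolog code; it assumes non-negative inputs with a >= b.)
def eeuclid(a, b):
    p1, p2 = 1, 0            # running coefficients for a
    s1, s2 = 0, 1            # running coefficients for b
    while b != 0:
        q = a // b
        a, b = b, a - q * b              # remainder column
        p1, p2 = p2, p1 - q * p2         # P column
        s1, s2 = s2, s1 - q * s2         # S column
    return a, p1, s1         # gcd and the Bezout coefficients

print(eeuclid(242, 69))      # (1, 2, -7)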
EDIT: On second thought, I would suggest adding two constraints: firstly Bézout's identity before calling a_b_p_s_/7, and secondly A #> B after the first goal of a_b_p_s_/7. I edited the predicates above and marked the new goals. These additions make eeuclid/5 more versatile. For example, you could ask which numbers A and B have the Bézout coefficients 2 and -7 and 1 as the gcd. There is no unique answer to this query, and Prolog will give you residual goals for every potential solution. However, you can ask for a limited range for A and B, say between 0 and 50, and then use label/1 to get actual numbers:
?- [A,B] ins 0..50, eeuclid(A,B,2,-7,1), label([A,B]).
A = 18,
B = 5 ;
A = 25,
B = 7 ;
A = 32,
B = 9 ;
A = 39,
B = 11 ;
A = 46,
B = 13 ;
false. % <- previously loop here
Without the newly added constraints the query would not terminate after the fifth solution. However, with the new constraints Prolog is able to determine, that there are no more solutions between 0 and 50.

Finding the largest power of a number that divides a factorial in haskell

So I am writing a Haskell program to calculate the largest power of a number that divides a factorial.
largestPower :: Int -> Int -> Int
Here largestPower a b has to find the largest power of b that divides a!.
Now I understand the math behind it, the way to find the answer is to repeatedly divide a (just a) by b, ignore the remainder and finally add all the quotients. So if we have something like
largestPower 10 2
we should get 8, because 10/2 = 5, 5/2 = 2, 2/2 = 1, and we add 5+2+1 = 8.
However, I am unable to figure out how to implement this as a function: do I use arrays or just a simple recursive function?
I am gravitating towards it being just a normal function, though I guess it could be done by storing the quotients in an array and adding them.
Recursion without an accumulator
You can simply write a recursive algorithm and sum up the result of each call. Here we have two cases:
a is less than b, in which case the largest power is 0. So:
largestPower a b | a < b = 0
a is greater than or equal to b, in which case we divide a by b, calculate largestPower for that quotient, and add the quotient to the result. Like:
                 | otherwise = d + largestPower d b
    where d = div a b
Or putting it together:
largestPower a b | a < b     = 0
                 | otherwise = d + largestPower d b
    where d = div a b
Recursion with an accumulator
You can also use recursion with an accumulator: a variable you pass through the recursion, and update accordingly. At the end, you return that accumulator (or a function called on that accumulator).
Here the accumulator is of course the running sum of quotients, so:
largestPower = largestPower' 0
So we will define a function largestPower' (mind the accent) with an accumulator as first argument, initialized to 0.
Now in the recursion, there are two cases:
a is less than b: we simply return the accumulator:
largestPower' r a b | a < b = r
otherwise we add the quotient to our accumulator, and pass the quotient to largestPower' in a recursive call:
                    | otherwise = largestPower' (d+r) d b
    where d = div a b
Or the full version:
largestPower = largestPower' 0

largestPower' r a b | a < b     = r
                    | otherwise = largestPower' (d+r) d b
    where d = div a b
Naive reference algorithm
Note that summing quotients counts the multiplicity of b as if b were prime. As a reference, a "naive" algorithm counts, for every factor from a down to 2, how many times b divides it:
largestPower 1 _ = 0
largestPower a b = sumPower a + largestPower (a-1) b
    where sumPower n | n `mod` b == 0 = 1 + sumPower (div n b)
                     | otherwise      = 0
So this means that largestPower 4 2 can be written as:
largestPower 4 2 = sumPower 4 + sumPower 3 + sumPower 2
and:
sumPower 4 = 1 + sumPower 2
           = 1 + 1 + sumPower 1
           = 1 + 1 + 0
           = 2
sumPower 3 = 0
sumPower 2 = 1 + sumPower 1
           = 1 + 0
           = 1
So the total is 2 + 0 + 1 = 3.
The algorithm as stated can be implemented quite simply:
largestPower :: Int -> Int -> Int
largestPower 0 b = 0
largestPower a b = d + largestPower d b where d = a `div` b
However, the algorithm is not correct for composite b. For example, largestPower 10 6 with this algorithm yields 1, but in fact the correct answer is 4. The problem is that this algorithm ignores multiples of 2 and 3 that are not multiples of 6. How you fix the algorithm is a completely separate question, though.
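For what it's worth, one way to handle composite b (a sketch of the standard fix via Legendre's formula, not part of either answer) is to factor b and take, over its prime factors, the minimum of each prime's exponent in a! divided by its multiplicity in b:
def legendre(a, p):
    # exponent of the prime p in a!: floor(a/p) + floor(a/p^2) + ...
    total = 0
    while a:
        a //= p
        total += a
    return total

def largest_power(a, b):
    # assumes b >= 2; factor b by trial division
    best = None
    p = 2
    while p * p <= b:
        if b % p == 0:
            k = 0
            while b % p == 0:
                b //= p
                k += 1
            e = legendre(a, p) // k
            best = e if best is None else min(best, e)
        p += 1
    if b > 1:                    # leftover prime factor
        e = legendre(a, b)
        best = e if best is None else min(best, e)
    return best

print(largest_power(10, 6))      # 4
print(largest_power(10, 2))      # 8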

How does less than and greater than work on a logical level in binary?

I see how equality could work by just comparing bit patterns, but how would one write one's own less than and greater than operators? What is the actual process to mechanically compare values to each other?
I assume you just want to know how the logic does it, and to mimic that? Well, here goes:
First off, you have to talk about unsigned greater than / less than vs signed greater than / less than, because the difference matters. Just use three-bit numbers to make life easier; it all scales up to N bits wide.
As processor documentation usually states, a compare instruction does a subtraction of the two operands in order to generate flags: it is a subtract that doesn't save the answer, it just modifies the flags, which a later jump or branch on some flag pattern can use. Some processors don't use flags but have a similar solution; they still compute the equivalent of flags, they just don't save them anywhere: a compare-and-branch-if-greater-than kind of instruction instead of separate compare and branch-if-greater-than instructions.
What is a subtract in logic? Well, in grade school we learned that
a - b = a + (-b)
We also know from intro programming classes that negation in twos complement means invert and add one, so
a - b = a + (~b) + 1
(~b in C means invert all the bits; this is also known as taking the ones complement.)
so 7 - 5 is
    1
  111
+ 010
=====
Pretty cool: we can use the carry in as the "plus one".
 1111
  111
+ 010
=====
  010
So the answer is 2 with the carry out set. Carry out set is a good thing: it means we didn't borrow. Not all processors do the same, but so far, for a subtract, we use an adder but invert the second operand and invert the carry in. If we invert the carry out we can call it a borrow; 7 - 5 doesn't borrow. Again, some processor architectures don't invert and call it a borrow, they just call it carry out. It still goes into the carry flag if they have flags.
Why does any of this matter? Just hang on.
Let's look at 6-5, 5-5 and 4-5 and see what the flags tell us.
 1101
  110
+ 010
=====
  001

 1111
  101
+ 010
=====
  000

 0001
  100
+ 010
=====
  111
So what this means is: the carry out tells us (unsigned) less than if it is 0; if it is 1, then greater than or equal. That was using a - b where b is what we were comparing against. So if we then do b - a, that implies we can get (unsigned) greater than from the carry bit: 5 - 4, 5 - 5, 5 - 6. We already know what 5 - 5 looks like: zero with the carry out set.
 1111
  101
+ 011
=====
  001

 0011
  101
+ 001
=====
  111
Yep, we can determine (unsigned) greater than or equal, or (unsigned) less than or equal, using the carry flag. Not less than is the same as greater than or equal and vice versa. You just need to get the operand you want to compare against in the right place. I probably did this backward from every compare instruction, as I think they subtract a - b where a is what is being compared against.
Now that you have seen this, you can easily scratch out the above math on a piece of paper in a few seconds to know which order you need things in and which flags to use. Or do a simple experiment like this with a processor you are using and look at the flags to figure out which operand is subtracted from which and/or whether or not they invert the carry out and call it a borrow.
We can see from doing it with paper and pencil, as in grade school, that addition boils down to one column at a time: you have a carry in plus two operands, and you get a result and a carry out. You cascade the carry out into the carry in of the next column and repeat for any number of bits you can afford to store.
Some/many instruction sets use flags (carry, zero, signed overflow, and negative are the set you need for most comparisons). I hope you can see that you don't need assembly language nor an instruction set with flags; you can do this yourself in any programming language that has the basic boolean and math operations.
Untested code I just banged out in this edit window.
unsigned char a;
unsigned char b;
unsigned char aa;
unsigned char bb;
unsigned char c;

aa = a & 0xF;       /* low nibble of a          */
bb = (~b) & 0xF;    /* low nibble of ~b         */
c = aa + bb + 1;    /* add with carry in of 1   */
c >>= 4;            /* keep the carry out       */
a >>= 4;
b >>= 4;
aa = a & 0xF;       /* high nibble of a         */
bb = (~b) & 0xF;    /* high nibble of ~b        */
c = c + aa + bb;    /* cascade the carry        */
c >>= 4;            /* final carry out          */
And there you go: using an equals comparison, compare c with zero. Depending on your operand order it tells you (unsigned) less than or greater than. If you want to do more than 8 bits, then keep cascading that add with carry indefinitely.
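The same cascade is easy to write out in, say, Python for any width; this is my own sketch mirroring the C fragment above, subtracting b from a one nibble at a time and returning the final carry out:
def carry_of_subtract(a, b, bits=8):
    # a - b == a + ~b + 1: thread the carry through 4-bit adds
    carry = 1                        # the "plus one" enters as the initial carry in
    for shift in range(0, bits, 4):
        aa = (a >> shift) & 0xF
        bb = (~b >> shift) & 0xF     # inverted operand, one nibble at a time
        carry = (aa + bb + carry) >> 4
    return carry                     # 1: a >= b (unsigned), 0: a < b

print(carry_of_subtract(7, 5))       # 1
print(carry_of_subtract(4, 5))       # 0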
Signed numbers...
ADDITION (and subtraction) LOGIC DOES NOT KNOW THE DIFFERENCE BETWEEN SIGNED AND UNSIGNED OPERANDS. This is important to know, and it is the beauty of twos complement. Try it yourself: write a program that adds bit patterns together and prints out the bit patterns. Interpret those input and output bit patterns as all unsigned or all signed and you see this works (note some combinations overflow and the result is clipped).
Now, saying that, the flags do vary for subtraction comparisons. We know from grade school math where we carry the one, and we see the same in binary. The carry out is the unsigned overflow for unsigned arithmetic: if set, you overflowed; we have a 1 we can't fit into our register, so the result is too big and we fail. Signed overflow, though, is the V bit, which tells us whether the carry IN and carry OUT of the msbit were the same or not.
Now let's use four bits, because I want to. We can do the 5 - 4, 5 - 5, and 5 - 6. These are positive numbers, so we have seen this, but we didn't look at the V flag nor the N flag (nor the Z flag). The N flag is the msbit of the result, which indicates negative using twos complement notation. It is not to be confused with a sign bit, although it is one as a side effect; it is just not a separate sign bit that you remove from the number.
11111
 0101
+1011
=====
 0001
c = 1, v = 0, n = 0

11111
 0101
+1010
=====
 0000
c = 1, v = 0, n = 0

00011
 0101
+1001
=====
 1111
c = 0, v = 0, n = 1
Now negative numbers: -5 - -6, -5 - -5, -5 - -4.
11111
 1011
+0101
=====
 0001
c = 1, v = 0, n = 0

11111
 1011
+0100
=====
 0000
c = 1, v = 0, n = 0

00111
 1011
+0011
=====
 1111
c = 0, v = 0, n = 1
You know what, there is an easier way.
#include <stdio.h>
int main ( void )
{
    unsigned int ra;
    unsigned int rb;
    unsigned int rc;
    unsigned int rd;
    unsigned int re;
    int a;
    int b;

    for(a=-5;a<=5;a++)
    {
        for(b=-5;b<=5;b++)
        {
            ra = a&0xF;       /* 4-bit a                    */
            rb = (-b)&0xF;    /* 4-bit -b                   */
            rc = ra+rb;       /* 4-bit subtract a - b       */
            re = rc&8;
            re >>=3;          /* n: msbit of the result     */
            rc >>=4;          /* c: carry out               */
            ra = a&0x7;
            rb = (-b)&0x7;
            rd = ra+rb;
            rd >>=3;          /* carry in to the msbit      */
            rd += rc;
            rd &=1;           /* v: carry in xor carry out  */
            printf("%+d vs %+d: c = %u, n = %u, v = %u\n",a,b,rc,re,rd);
        }
    }
    return(0);
}
and a subset of the results
-5 vs -5: c = 1, n = 0, v = 0
-5 vs -4: c = 0, n = 1, v = 0
-4 vs -5: c = 1, n = 0, v = 0
-4 vs -4: c = 1, n = 0, v = 0
-4 vs -3: c = 0, n = 1, v = 0
-3 vs -4: c = 1, n = 0, v = 0
-3 vs -3: c = 1, n = 0, v = 0
-3 vs -2: c = 0, n = 1, v = 0
-2 vs -3: c = 1, n = 0, v = 0
-2 vs -2: c = 1, n = 0, v = 0
-2 vs -1: c = 0, n = 1, v = 0
-1 vs -2: c = 1, n = 0, v = 0
-1 vs -1: c = 1, n = 0, v = 0
-1 vs +0: c = 0, n = 1, v = 0
+0 vs -1: c = 0, n = 0, v = 0
+0 vs +0: c = 0, n = 0, v = 0
+0 vs +1: c = 0, n = 1, v = 0
+1 vs +0: c = 0, n = 0, v = 0
+1 vs +1: c = 1, n = 0, v = 0
+1 vs +2: c = 0, n = 1, v = 0
+3 vs +2: c = 1, n = 0, v = 0
+3 vs +3: c = 1, n = 0, v = 0
+3 vs +4: c = 0, n = 1, v = 0
And I will just tell you the answer... You are looking for n == v or n != v. So if you compute n and v, then x = (n+v)&1. If that is zero they were equal; if it is 1 they were not. You can use an equals comparison. When they were not equal, b was greater than a; reverse your operands and you can check b less than a.
You can change the code above to only print out when n and v are equal. So if you are using a processor with only an equals comparison, you can still survive with real programming languages and comparisons.
Some processor manuals may chart this out for you. They may say n!=v for one (signed LT) and not-z and n==v for the other (signed GT). But it can be simplified: from grade school, the alligator eats the bigger one; a > b, when you flip it, is b < a. So feed the operands in one way and you get a > b, feed them the other way and you get b < a.
Equals is just a straight bit comparison that goes through a separate logic path, not something that falls out of the addition. N is grabbed from the msbit of the result. C and V fall out of the addition.
One method that is O(1) over all numbers in the given domain (keeping the number of bits constant) would be to subtract the numbers and check the sign bit. I.e., suppose you have A and B, then
C = A - B
If C's sign bit is 1, then B > A. Otherwise, A >= B.
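(In Python that check looks like this: a minimal sketch for fixed-width values. As the longer answer above explains, the sign bit alone can mislead when the subtraction overflows, so this assumes A - B fits in the given width.)
def a_less_than_b(a, b, bits=32):
    c = (a - b) & ((1 << bits) - 1)        # fixed-width subtract: C = A - B
    return (c >> (bits - 1)) & 1 == 1      # sign bit of C

print(a_less_than_b(3, 7))   # True
print(a_less_than_b(7, 3))   # False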
