Equality for two simple expressions with only bitwise operations - algorithm

Given the following two functions in C language:
int f(int x, int y, int z) {
    return (x & y) | ((~x) & z);
}

int g(int x, int y, int z) {
    return z ^ (x & (y ^ z));
}
The two functions return the same result for all integer inputs.
I'm curious about the mathematical relationship between the two expressions.
I first saw the expression in function f in the SHA-1 algorithm on Wikipedia:
http://en.wikipedia.org/wiki/Sha1
In the "SHA-1 pseudocode" part, inside the Main loop:
if 0 ≤ i ≤ 19 then
f = (b and c) or ((not b) and d)
k = 0x5A827999
...
Some open-source implementations use the form in function g: z ^ (x & (y ^ z)).
I wrote a program that iterates over all possible values of x, y, and z, and the results are always equal.
How can I transform the form
(x & y) | ((~x) & z)
into the form
z ^ (x & (y ^ z))
mathematically? Not just prove the equality.

Since bitwise operations are equivalent to Boolean operations on the individual bits, you can prove the equivalence simply by enumerating the eight possible assignments to the triple (x, y, z).
Fill out the truth tables for each of these two functions, and then compare the eight positions to each other. If all eight positions match, the two functions are equivalent; otherwise, the functions are different.
You do not need to do it manually, either: plug both functions into three nested loops that give x, y, and z values from zero to one inclusive, and compare the results of f(x, y, z) and g(x, y, z).
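For example, a minimal C sketch of that check (f and g exactly as defined in the question; since only the single-bit values 0 and 1 are fed in, the full results can be compared directly):

#include <assert.h>
#include <stdio.h>

int f(int x, int y, int z) { return (x & y) | ((~x) & z); }
int g(int x, int y, int z) { return z ^ (x & (y ^ z)); }

int main(void) {
    /* enumerate all eight single-bit assignments of (x, y, z) */
    for (int x = 0; x <= 1; x++)
        for (int y = 0; y <= 1; y++)
            for (int z = 0; z <= 1; z++)
                assert(f(x, y, z) == g(x, y, z));
    printf("all 8 assignments agree\n");
    return 0;
}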

You can do this using a Karnaugh Map. Given the truth table for z ^ (x & (y ^ z)), the Karnaugh map is:
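(The map image isn't reproduced here; the following is a reconstruction from the truth table, with x on the rows and yz on the columns in Gray-code order:)

        yz=00  yz=01  yz=11  yz=10
x = 0     0      1      1      0
x = 1     0      0      1      1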
As can be seen, you can make two groups in the diagram (the pair of 1s in the x = 0 row where z = 1, and the pair in the x = 1 row where y = 1), giving you (x & y) | (~x & z).
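For the algebraic deduction the question asks about, here is a sketch using standard Boolean identities (in C operator notation):

z ^ (x & (y ^ z))
  = z ^ ((x & y) ^ (x & z))     [& distributes over ^]
  = (z ^ (x & z)) ^ (x & y)     [^ is associative and commutative]
  = ((~x) & z) ^ (x & y)        [per bit: if x = 1 then z ^ z = 0, if x = 0 then z ^ 0 = z]
  = (x & y) | ((~x) & z)        [the terms are disjoint (one needs x = 1, the other x = 0), and for disjoint terms ^ coincides with |]

Reading the steps bottom-up deduces the g form from the f form.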

Related

Filtering with a predicate that takes 2 arguments

What I want would basically be an O(n^2) iteration over a list. Let's say I have a list of integers,
let list = [2312, 8000, 3456, 7000, 1234]
and a function to check if adding two integers together would produce a result higher than 10000 (this could be an arbitrary function that takes two integers and returns a boolean):
myPredicate :: Int -> Int -> Bool
myPredicate x y = x + y > 10000
Is there a way to apply this predicate to the above list to get a list of lists that include valid pairs, like this:
>> filter myPredicate list
>> [[2312, 8000], [3456, 8000], [3456, 7000], [8000, 7000]]
If I understood you correctly, you want to construct the list of pairs,
pairs xs = [(y, z) | (y:ys) <- tails xs, z <- ys]
(tails is from Data.List, so you'll need to import it)
and then filter it out using your predicate which needs to be in uncurried form,
myPredicate' :: (Int, Int) -> Bool
myPredicate' (x, y) = x + y > 10000
so,
filter myPredicate' (pairs list)
or equivalently
filter (uncurry myPredicate) (pairs list)
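For the question's list, this evaluates to [(2312,8000),(8000,3456),(8000,7000),(3456,7000)]: the same pairs as the expected output, just as tuples rather than two-element lists.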
This is supported directly by Haskell's list-comprehension syntax.
[(x, y) | x <- myList, y <- myList, x + y > 10000]
This will return reversed and repeated pairs. If that's not what you need, consider these list comprehensions:
[(x, y) | x <- myList, y <- myList, x < y, x + y > 10000] -- no reversed pairs, no repeats
[(x, y) | x <- myList, y <- myList, x <= y, x + y > 10000] -- repeats, no reversed pairs
If for some reason unknown to science you have a list with duplicate elements, say [30000, 30000], and you want only elements at different positions to form valid pairs, then this simple list comprehension won't work. I have no idea what kind of real-life problem would require this, but here you are:
[(y, z) | (y:ys) <- tails xs, z <- ys, y + z > 10000]
(idea stolen from the other answer)

How does the power function work

This is my first logic programming course, so this is a really dumb question, but I cannot for the life of me figure out how this power predicate works. I've tried making a search tree to trace it, but I still cannot understand how it works.
mult(_, 0, 0).
mult(X, Y, Z) :-
    Y > 0,
    Y1 is Y - 1,
    mult(X, Y1, Z1),
    Z is Z1 + X.

exp2(_, 0, 1).
exp2(X, Y, Z) :-
    Y > 0,
    Y1 is Y - 1,
    exp2(X, Y1, Z1),
    mult(X, Z1, Z).
So far I get that I'm going to call the exp2 predicate until Y reaches zero, and then start multiplying from there. But at the last call, exp2(2, 1, Z), what is the value of Z and how does the predicate work from there?
Thank you very much =)
EDIT: I'm really sorry for the Late reply I had some problems and couldn't access my PC
I'll walk through mult/3 in more detail here, but I'll leave exp2/3 to you as an exercise; it's similar.
As I mentioned in my comment, you want to read a Prolog predicate as a rule.
mult(_, 0, 0).
This rule says 0 is the result of multiplying anything (_) by 0. The _ is an anonymous variable, meaning not only is it a variable, but you don't care what its value is.
mult(X, Y, Z) :-
This says, Z is the result of multiplying X by Y if....
Y > 0,
Establish that Y is greater than 0.
Y1 is Y - 1,
And that Y1 has the value of Y minus 1.
mult(X, Y1, Z1),
And that Z1 is the result of multiplying X by Y1.
Z is Z1 + X.
And Z is the value of Z1 plus X.
Or reading the mult(X, Y, Z) rule altogether:
Z is the result of multiplying X by Y if Y is greater than 0, and Y1 is Y-1, and Z1 is the result of multiplying X by Y1, and Z is the result of adding Z1 to X.
Now digging a little deeper, you can see this is a recursive definition, as in the multiplication of two numbers is being defined by another multiplication. But what is being multiplied is important. Mathematically, it's using the fact that x * y is equal to x * (y - 1) + x. So it keeps reducing the second multiplicand by 1 and calling itself on the slightly reduced problem. When does this recursive reduction finally end? Well, as shown above, the second rule says Y must be greater than 0. If Y is 0, then the first rule, mult(_, 0, 0) applies and the recursion finally comes back with a 0.
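For example, tracing mult(2, 3, Z) step by step:

mult(2, 3, Z)  needs mult(2, 2, Z1), then Z is Z1 + 2
mult(2, 2, Z1) needs mult(2, 1, Z2), then Z1 is Z2 + 2
mult(2, 1, Z2) needs mult(2, 0, Z3), then Z2 is Z3 + 2
mult(2, 0, Z3) matches the fact mult(_, 0, 0), so Z3 = 0

Unwinding: Z2 = 0 + 2 = 2, Z1 = 2 + 2 = 4, Z = 4 + 2 = 6.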
If you are not sure how recursion works or are unfamiliar with it, I highly recommend Googling it. It is, indeed, a concept that applies to many computer languages. But you need to be careful about learning Prolog via comparison with other languages. Prolog is fundamentally different in its behavior from procedural/imperative languages like Java, Python, C, and C++. It's best to get used to interpreting Prolog rules and facts as I have described above.
Say you want to compute 2^3 and assign the result to R.
For that you will call exp2(2, 3, R).
It will recursively call exp2(2, 2, R1), then exp2(2, 1, R2), and finally exp2(2, 0, R3).
At this point exp2(_, 0, 1) matches and R3 is bound to 1.
Then, as the call stack unwinds, 1 is multiplied by 2 three times.
In Java this logic would be encoded as follows. Execution would go pretty much the same route.
public static int Exp2(int X, int Y) {
    if (Y == 0) { // exp2(_, 0, 1).
        return 1;
    }
    if (Y > 0) { // Y > 0,
        int Y1 = Y - 1;       // Y1 is Y - 1,
        int Z1 = Exp2(X, Y1); // exp2(X, Y1, Z1),
        return X * Z1;        // mult(X, Z1, Z).
    }
    return -1; // this should never happen
}
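For example, Exp2(2, 3) recurses down to Exp2(2, 0) = 1 and multiplies by X = 2 on the way back up: 1 → 2 → 4 → 8, just like the Prolog version.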

boolean logic case to construct truth table

So I have this boolean logic case: three variables x, y, and z. The (ternary) parity function p(x, y, z) is a boolean function with value
• F, if an even number of the inputs x, y, and z have truth value T
• T, if an odd number of inputs have truth value T
This is my truth table below (I'll put just a few cases):
x y z - p(x, y, z)
F F F (?)
F F T evaluates to T because there is one T, which is odd
F T T evaluates to F because there are two Ts, which is even
My question is: what if all three inputs evaluate to F? Zero Ts seems to me neither odd nor even. So what would it evaluate to?
The definition of an even number is that it is divisible by 2 with no remainder. 0/2 = 0 with remainder 0, so zero is even: if there are zero Ts, p evaluates to F.
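For completeness, here is the full table, filling in the remaining cases by the same rule:

x y z | p(x, y, z)
F F F | F   (zero Ts, even)
F F T | T
F T F | T
F T T | F
T F F | T
T F T | F
T T F | F
T T T | T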

Pseudocode to Logic[Predicate Logic in CS]

We try to translate a very simple program in pseudo-code to Predicate Logic.
The program is straightforward and contains no loops (it is purely sequential).
It only consists of assignments of variables and if-else statements.
Unfortunately we were not given any good material for solving this. It would be great if someone has some example "conversions" of simple five-line code snippets, or links to free sources that describe the topic at a surface level. (We only cover predicate and propositional logic and do not want to dive much deeper into the logic space.)
Kind regards
UPDATE:
After enough research I found the solution and can share it, including examples.
The trick is to think of the program state as a set of all our arbitrary variables, including a program counter which stands for the current instruction to be executed.
x = input;
x = x * 2;
if (y > 0)
    x = x * y;
else
    x = y;
We form the predicate P(x, i, y, pc), where i holds the input and pc is the program counter.
From here we can build premises, e.g.:
∀i∀x∀y(P(x, i, y, 1) ⇒ P(i, i, y, 2))
∀i∀x∀y(P(x, i, y, 2) ⇒ P(x * 2, i, y, 3))
∀i∀x∀y(P(x, i, y, 3) ∧ y > 0 ⇒ P(x * y, i, y, 4))
∀i∀x∀y(P(x, i, y, 3) ∧ ¬(y > 0) ⇒ P(y, i, y, 4))
By incrementing the program counter we make sure that the premises apply in order. Given a premise for the input, e.g. P(x, 4, 7, 1), we can now construct a proof.
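For example, with input i = 4 and y = 7, starting from ∀x P(x, 4, 7, 1), the premises fire in program order:

P(x, 4, 7, 1) ⇒ P(4, 4, 7, 2)              (x = input)
P(4, 4, 7, 2) ⇒ P(8, 4, 7, 3)              (x = x * 2)
P(8, 4, 7, 3) ∧ 7 > 0 ⇒ P(56, 4, 7, 4)     (the if-branch, x = x * y)

so at pc = 4 the program ends with x = 56.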

Bitwise XORing two numbers results in sum or difference of the numbers

When I XOR any two numbers, I get either the absolute value of their difference or their sum.
I have searched a lot on Google trying to find a relevant formula for this, but no apparent formula or statement seems to cover it.
Example:
10 XOR 2 = 1010 XOR 0010 = 1000 (8)
1 XOR 2 = 01 XOR 10 = 11 (3)
Is it true for all the numbers?
No, it's not always true.
   6 = 110
   3 = 011
   --------
XOR  = 101 = 5
SUM  = 9
DIFF = 3
This is by no means a complete analysis, but here's what I see:
For your first example, the low bits of 1010 match the bits of 10 (every set bit of 2 is also set in 10), which is why you get the difference when XORing.
For your second example, all the corresponding bits are different, which is why you get the sum when XORing.
Why these properties hold should be fairly easy to see.
As shown by Dukeling's answer and CmdrMoozy's comment, it's not always true. As shown by your post, it's true at least sometimes. So here's a slightly more detailed analysis.
The +-side
Obviously, if (but not only if) (x & y) == 0 then (x ^ y) == x + y, because
x + y = (x ^ y) + ((x & y) << 1)
That accounts for 3^32 cases (for every bit position, there are 3 choices that result in a 0 after the AND) where (x ^ y) == (x + y).
Then there are the cases where (x & y) != 0. Those cases are precisely the cases such that (x & y) == 0x80000000, because the carry out of the highest bit is the only carry that doesn't affect anything.
That adds 3^31 cases (for 31 bit positions there are 3 choices; for the highest bit there is only 1 choice).
The --side
For subtraction, there's the lesser known identity x - y == (x ^ y) - ((~x & y) << 1).
That's really not too different from addition, and the analysis is almost the same. This time, if (but not only if) (~x & y) == 0 then (x ^ y) == x - y. That ~ doesn't change the number of cases: still 3^32. Most of them are different cases than before, but not all (consider y = 0, then x can be anything).
There are again 3^31 extra cases, this time from (~x & y) == 0x80000000.
Both sides
The + and - sides aren't disjoint. Sometimes, x ^ y = x + y = x - y. That can only happen when either y = 0 or y = 0x80000000. If y = 0, x can be anything because (x & 0) == 0 and (~x & 0) == 0 for all x. If y = 0x80000000, x can again be anything, this time because x & 0x80000000 and ~x & 0x80000000 can both either work out to 0 or to 0x80000000, and both are fine.
That gives 2^33 cases where x ^ y = x + y = x - y.
It also gives (3^32 + 3^31) * 2 - 2^33 cases where x ^ y is x + y or x - y or both, which is 4941378580336984, or in base 16, 118e285afb5158, which is also the answer given by this site.
That's a lot of cases, but only roughly 0.02679% of the total space of 2^64.
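The same analysis scales down to narrower words, so it's easy to sanity-check. Here is a small C sketch at 8-bit width, where scaling the formula above (my scaling, not from the original analysis) predicts (3^8 + 3^7) * 2 - 2^9 = 16984 cases:

#include <stdint.h>
#include <stdio.h>

int main(void) {
    long count = 0;
    for (int x = 0; x < 256; x++) {
        for (int y = 0; y < 256; y++) {
            uint8_t xr   = (uint8_t)(x ^ y);
            uint8_t sum  = (uint8_t)(x + y); /* wrap-around, like 32-bit int arithmetic */
            uint8_t diff = (uint8_t)(x - y);
            if (xr == sum || xr == diff)
                count++;
        }
    }
    printf("%ld of %d pairs\n", count, 256 * 256); /* expect 16984 of 65536 */
    return 0;
}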
Actually, there's an interesting explanation for your observation, and it shows why you see this for so many numbers.
There's a relationship between a + b and a ^ b. It is given by:
a + b = a^b + 2*(a & b)
Hence,
a^b = a + b - 2*(a & b)
(where ^ is the bitwise XOR and & is bitwise AND)
See this link for more about the relation above. Hence, for every a and b where a & b == 0, you get a + b == a ^ b, which explains the sum part. And when a & b is not 0, the XOR comes out 2*(a & b) below the sum, which is where the difference cases come from. Hope that clarifies your question! :D
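For instance, with the numbers from the question: 10 & 2 = 2, so 10 ^ 2 = 10 + 2 - 2*2 = 8, which is exactly 10 - 2; while 1 & 2 = 0, so 1 ^ 2 = 1 + 2 = 3.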
Assume N is a power of two, i.e. N = 2^k for some k. Then:
0 <= X < N  -->  N XOR X is always the sum of N and X
N <= Y < 2^(k+1)  -->  N XOR Y is always the difference of N and Y (i.e. Y - N)
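For example, take N = 8 (k = 3): 8 XOR 5 = 13 = 8 + 5, since 5 < 8 means the operands share no set bits; and 8 XOR 13 = 5 = 13 - 8, since 8 <= 13 < 16 means the XOR just clears the shared bit 8.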
