One and only one zero comparison with the least operations - algorithm

This might be a silly question, but in some function I have two ints, and I need to compare them to find out whether exactly one of them is == 0. Both of the ways I came up with require 5 logical operations, and I can't think of anything shorter. Is there a shorter way?
What I came up with is:
int x, y;
if (((x==0) || (y==0)) && (x != y)){} // most obvious option, or:
if (((x^y) == x) ^ ((x^y) == y)){} // looks better, but still 5 ops

The expression ((x^y) == x) ^ ((x^y) == y) can be simplified in a straightforward way: (x^y) == x if and only if y == 0, and likewise (x^y) == y if and only if x == 0. So the following uses three operations to compute the condition:
if((x == 0) ^ (y == 0)) {
// ...
}
This makes sense because, used as a logical operator, XOR is true if and only if exactly one of its operands is true. So this directly tests whether one but not both of x == 0 and y == 0 hold.

Related

How to convert logical statements into their opposites?

I have a scenario where I am checking whether to hide something depending on the values of multiple variables. For some of these statements, if they evaluate to true, then I will skip showing something (showing by default). However, I need to re-write my statements so that if they evaluate to true, then I will show something (hiding by default).
For Example:
If X !== 1 && Y !==1
And:
If X === 3 or Y === 3
If either example were true, then I would skip (hide) something.
I need to flip the conditions written in the code so that the expressions are their opposites. I cannot just evaluate the original expression and then swap the end true/false result.
Would
X === 1 or Y === 1
And
X !== 3 && Y !== 3
Be the logical opposites respectively? How can I approach re-writing these expressions so that they would evaluate to their opposite result every time?
I cannot simply do !(X !== 1 && Y !== 1) in this situation. Would De Morgan's law be applicable to what I am trying to do? I'm considering just flipping all "===" to "!==" (and vice versa) and flipping all "and"s/"or"s so that I can get the results I want. This seems to work in the cases I've tried, but I am not sure whether it would work in all cases. Thank you!
If I follow what you're asking, the inverse of
If X !== 1 && Y !== 1
is:
if (X === 1 || Y === 1)
which is the same as:
if !(X !== 1 && Y !== 1)
Here's how to think about it: in your example, both X and Y must not equal 1 for the condition to be true; therefore if either X or Y equals 1 the condition is false. Now convert that last sentence to boolean.
When inverting a logical statement, negate each operand and swap the logical connective: "and" becomes "or" and vice versa.
If the goal is to negate the whole condition, the simplest way to do it is to just wrap the entire expression in !.
For example the following conditional clause:
If X !== 1 && Y !== 1
can be flipped by replacing it with:
If !(X !== 1 && Y !== 1)
Now, this is perfectly correct, but it may seem rather ugly to you. This is where DeMorgan's laws can be applied, to rewrite the expression in an equivalent, but possibly syntactically simpler form.
DeMorgan's laws state that:
!(x && y) === !x || !y, and that
!(x || y) === !x && !y.
So we can for example take our ugly (but correctly flipped) conditional clause above:
If !(X !== 1 && Y !== 1)
and use the first of DeMorgan's laws listed above to rewrite it (without changing the meaning at all):
If !(X !== 1) || !(Y !== 1)
and then by simple reasoning (!(a !== b) is equivalent to !(!(a === b)) is equivalent to a === b):
If X === 1 || Y === 1
and this is (also) a flipped version of the original clause If X !== 1 && Y !== 1.

Sign computation with a macro

What the following expression computes, exactly?
#define SIGN(x) ((x < 0) ? -1 : (x > 0))
what yields if x is zero, less than zero, more than zero?
I guess I know the answer, but I'd like to check for clarity...
Thank you
EDIT: added missing parenthesis
First, the macro doesn't compute anything. It is substituted into the source code, expanded, and the resulting text gets compiled. What the resulting text is depends on the way you use the macro, especially on what argument you give it.
Second, the macro as originally posted lacked one closing paren, so it would probably not have given you a meaningful expression to compile.
Third, even when you add the lacking paren:
#define SIGN(x) ((x < 0) ? -1 : (x > 0))
you may get unexpected results if you use the macro with anything but the simplest arguments. For example,
SIGN(a ^ b)
would result in
((a ^ b < 0) ? -1 : (a ^ b > 0))
which is interpreted in C and C++ as
((a ^ (b < 0)) ? -1 : (a ^ (b > 0)))
which certainly is not what we intend.
You should add parentheses around the macro parameter to avoid unwanted operator binding:
#define SIGN(x) (((x) < 0) ? -1 : ((x) > 0))
the above example will yield a sensible expression
(((a ^ b) < 0) ? -1 : ((a ^ b) > 0))
but that still doesn't protect you against unwanted double increment/decrement in case of plus-plus or minus-minus operators or double execution of a function in case the expression substituted for x contains a function call.
It does exactly what you probably think it does: it gives -1 for negative numbers, 0 for zero, and 1 for positive numbers.
However, you should generally avoid function-like macros, since they will not do what you expect if, for example, you try to calculate SIGN(value++). Since they're simple text substitutions, that would expand to:
((value++ < 0) ? -1 : (value++ > 0))
You're far better off just using real functions and letting the compiler inline them if it decides it's worthwhile. You can also suggest inlining to the compiler with the inline keyword, but keep in mind it is only a suggestion.
That macro is missing a closing parenthesis.
It looks like it is meant to be an implementation of signum function, which returns -1, 1 or 0 depending on value of argument.
For the sake of safety, and of writing C++ rather than C, it is prudent to replace the macro with a template, similar to:
template <class T>
int SIGN( T x )
{
return (x < T(0)) ? -1 : (x > T(0));
}
The first comparison is the condition of the ternary operator ?:. The ternary returns -1 if the condition evaluates to true, i.e. x is less than 0; otherwise it returns the result of x > T(0).
That expression evaluates to 0 if x equals 0, and to 1 otherwise.
Note that my implementation is not ideal, you can find better implementation elsewhere on SO.
An alternative expression can be:
return (T(0)<x) - (T(0)>x);
Which may be more efficient on platforms with certain CPU instructions, since it avoids branches.
If you use it with plain values and not expressions, that macro will produce -1, 0, or 1; otherwise you may have serious problems. The tricky part is (x > 0). Let's read the standard:
5.9 Relational operators [expr.rel]
The operators < (less than), > (greater than), <= (less than or equal
to), and >= (greater than or equal to) all yield false or true. The
type of the result is bool.
and
3.9.1 Fundamental types [basic.fundamental]
Values of type bool are either true or false. Values of type bool participate in integral promotions (4.5).
Thus x>0 is either true or false.
4.5 Integral promotions [conv.prom]
A prvalue of type bool can be converted to a prvalue of type int, with false becoming zero and true becoming one.
and is promoted to either 1 or 0 (respectively).

Difference between negation and inequality in Ruby?

What is the difference between if !variable == 10 {...} and if variable != 10 {...} in the Ruby programming language? Of course this can be applied to other languages too. In the first case are we just negating the condition, and in the second testing whether variable is actually unequal to 10?
if variable != 10 is equivalent to unless variable == 10,
while !variable negates variable itself: because ! binds more tightly than ==, !variable == 10 is parsed as (!variable) == 10 — the variable is first evaluated as an expression, then negated, and only then compared.
Example
variable = 5
variable != 10 # => true
!variable # => false (5 is evaluated to true, then negated)
!variable = 10 # => warning: found = in conditional, should be ==
To a large degree, all programming languages, including Ruby, follow the rules of boolean algebra, which give a number of ways to express the same condition.
x != y
!(x == y)
!!(x != y)
The same equivalences apply to compound conditions, such as:
!x && !y
!(x || y)
As others have pointed out, be very careful when using the == comparator instead of the = assignment operator.

Verify that the logic !(a==k || b==k || c==k) is not equivalent to (a != k || b!=k || c!=k)

I ran into a bug in my program where I had to differentiate between the cases stated in the title. a, b, and c are distinct variables, but any of them may equal k.
My intent, expressed as a compound logical statement, was: if a, b, or c equals k (a==k or b==k or c==k), then the statement should return false.
The buggy expression was: (a != k || b != k || c != k)
My correction was: !(a == k || b == k || c == k)
The reasoning being that the former requires a, b, AND c to all equal k for the statement to be false.
I just wanted to verify my correction.
(a == k || b == k || c == k) is equivalent to !(a != k && b != k && c != k). This is an application of De Morgan's law.
The formal proof is not easy, so I will not attempt to give it. (But it is one of the first proofs you can do once you've established some mathematical axioms.) You can show that your statements are not equivalent with a counterexample: setting a != b means that (a != k || b != k || c != k) is always true (k cannot equal both a and b), while !(a == k || b == k || c == k) can still be false (for example when a == k).
Of course they're not equivalent. The first one, by De Morgan's law, is equivalent to a conjunction of inequalities: (a != k && b != k && c != k). An example where they differ: a=1, k=2, b=2, c=3.

Write an algorithm that tells me if two and only two numbers in three are the same

Deceptively simple algorithm question I came across. I'm trying to pull it off in 3 or fewer operations, and I'm reasonably convinced it can be solved with math, but I can't figure it out (and the source of the question didn't have an answer).
EDIT:
((a[0] == a[1]) + (a[0] == a[2]) + (a[1] == a[2])) == 1
is what I originally thought, but I'd like to see if it can be done in fewer operations (1 comparison being an operation).
Assuming the 3 numbers are a, b and c,
(b == c) ? (a != c) : (a == b || a == c)
If (a, b, c) = (1, 1, 1), then we will call b == c (true) and then a != c (false) and done.
If (a, b, c) = (1, 1, 2), then we will call b == c (false) and then a == b (true) and done.
If (a, b, c) = (1, 2, 1), then we will call b == c (false) and then a == b (false) and a == c (true) and done.
If (a, b, c) = (2, 1, 1), then we will call b == c (true) and then a != c (true) and done.
If (a, b, c) = (1, 2, 3), then we will call b == c (false) and then a == b (false) and a == c (false) and done.
So at most 3 comparisons are performed.
There are also 2 conditional branching points, at ?: and ||, but the OP does not count those.
Depending on what you consider to be an "operation"...
The following uses only 3 comparisons of the array elements. There is a fourth comparison though: the == 1 that ensures there is exactly one match. I believe you could use a lot of branching to conditionally eliminate some of the comparisons, but if this is an optimization, the branching would probably make it perform worse.
There are exactly 3 outcomes:
none of the values will be the same (sum is zero)
two will be the same (sum is one)
all three are the same (sum is three)
if (((array[0] == array[1]) +
(array[1] == array[2]) +
(array[0] == array[2])) == 1)
{
// stuff
}
This trades comparisons for branching, achieving a maximum of 3 comparisons and a route that only requires 2:
if (array[0] == array[1]) // if these are equal
return !(array[1] == array[2]); // and these are not equal, return true
else
return (array[0] == array[2]) || (array[1] == array[2]); // otherwise, if these are equal, we already know the others are not equal because we already tested them so return true
You can write the expression:
((b == a) | (b == c)) ^ (a == c)
which has constant cost, always performs three comparisons and two logic operations. Having no branches, it ought to go easy on the processor.
Depending on the architecture,
((b == a) || (b == c)) ^ (a == c)
might be faster (this one performs two or three comparisons, one logic operation and one branch).
My try...
return (ab ? (!ac) : (ac ? true : bc));
Where:
ab = (a==b)
ac = (a==c)
bc = (b==c)
This uses 2 or 3 comparisons, at the expense of occasional conditional jumps. Let us check the number of operations in each case:
a == c == b: (a==b) + jump + (a==c) + negation [returns a!=c] 4 operations
a == b != c: the same as above, 4 operations
a != b == c: (a==b) + jump + (a==c) + jump + (b==c) [returns this value] 5 operations
a == c != b: (a==b) + jump + (a==c) + jump [returns true] 4 operations
a != c != b (all distinct): (a==b) + jump + (a==c) + jump + (b==c) [returns this value] 5 operations, the same as the a != b == c case
Of course, this depends on your concept of operation... If jumps are not considered...