Which is faster: the ! or the == operator?

I wanted to know if there is any difference in performance when using the ! operator in place of the == operator.
This is my understanding; please correct me if I am wrong.
! operator - inverts all the bits, so for an integer it flips all 32 bits. It works with one operand and maps to a NOT instruction in assembly.
== operator - works with two operands and involves a CMP and eventually a JMP instruction in assembly, which is costly.
For a simple statement like the following which one performs better?
function() {
    return (some operation) == 0;
}
or
function() {
    return !(some operation);
}
Languages: C++, Java, Python
Platform: Linux
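For what it's worth, logical ! in C++ is not a bitwise flip (that would be ~); !x simply tests x against zero, which is why compilers treat !x and x == 0 as the same thing. For example, GCC and Clang at -O2 compile the following two functions (the names are illustrative) to identical x86-64 code, typically a test/sete pair:

bool is_zero_eq(int x) {
    return x == 0;   // explicit comparison against zero
}

bool is_zero_not(int x) {
    return !x;       // logical NOT: true exactly when x is zero
}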

Related

Concise (one line?) binary search in Raku

Many common operations aren't built into Raku because they can be concisely expressed with a combination of (meta) operators and/or functions. It feels like binary search of a sorted array ought to be expressible in that way (maybe with .rotor? or …?), but I haven't found a particularly good way to do so.
For example, the best I've come up with for searching a sorted array of Pairs is:
sub binary-search(@a, $target) {
    when +@a ≤ 1 { @a[0].key == $target ?? @a[0] !! Empty }
    &?BLOCK(@a[0..^*/2, */2..*][@a[*/2].key ≤ $target], $target)
}
That's not awful, but I can't shake the feeling that it could be an awful lot better (both in terms of concision and readability). Can anyone see what elegant combo of operations I might be missing?
Here's one approach that technically meets my requirements (in that the function body fits on a single normal-length line). [But see the edit below for an improved version.]
sub binary-search(@a, \i is copy = my $=0, :target($t)) {
    for +@a/2, */2 … *≤1 { @a[i] cmp $t ?? |() !! return @a[i] with i -= $_ × (@a[i] cmp $t) }
}
# example usage (now slightly different, because it returns the index)
my @a = ((^20 .pick(*)) Z=> 'a'..*).sort;
say @a[binary-search(@a».key, :target(17))];
say @a[binary-search(@a».key, :target(1))];
I'm still not super happy with this code, because it loses a bit of readability – I still feel like there could/should be a concise way to do a binary search that also clearly expresses the underlying logic. Using a 3-way comparison feels like it's on that track, but still isn't quite there.
[edit: After a bit more thought, I came up with a more readable version of the above using reduce.
sub binary-search(@a, :target(:$t)) {
    (@a/2, */2 … *≤.5).reduce({ $^i - $^pt×(@a[$^i] cmp $t || return @a[$^i]) }) && Nil
}
In English, that reads as: for a sequence starting at the midpoint of the array and dropping by 1/2, move your index $^i by the value of the next item in the sequence – with the direction of the move determined by whether the item at that index is greater than or less than the target. Continue until you find the target (in which case, return it) or you finish the sequence (which means the target wasn't present; return Nil).]
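For contrast with the logic described above, here is the same three-way-comparison idea in the conventional lo/hi halving formulation, sketched in C++ (the names are illustrative, and this is the textbook variant rather than the post's step-halving sequence):

#include <cstddef>
#include <optional>
#include <vector>

// Binary search driven by a three-way comparison: at each step,
// compare the midpoint element to the target and discard the half
// that cannot contain it.
std::optional<std::size_t> binary_search_index(const std::vector<int>& a,
                                               int target) {
    std::size_t lo = 0, hi = a.size();
    while (lo < hi) {
        std::size_t mid = lo + (hi - lo) / 2;
        if (a[mid] == target) return mid;    // found: return the index
        if (a[mid] < target)  lo = mid + 1;  // target is in the upper half
        else                  hi = mid;      // target is in the lower half
    }
    return std::nullopt;                     // target not present
}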

Comparison of “if” statement use against direct use of logical operators

To start with, I'll show some code:
// Declarations
bool filter();
bool mainOperation();
bool elseOperation();
bool cmp = filter();
// case 1
cmp && mainOperation();
cmp || elseOperation();
// case 2
cmp ? mainOperation()
    : elseOperation();
// case 3
if (!cmp) goto other;
mainOperation();
goto end;
other:
elseOperation();
end: ;
// case 0
if (cmp) {
    mainOperation();
} else {
    elseOperation();
}
I'm actually not sure what the differences are between these snippets from a complexity point of view.
I'd like to know which cases compile the same as case 0 – I mean, which of these constructs will compile to the same code as the if statement?
Use case 0. It's readable, it's what any serious developer would use, it's the code that you are not asked to change in a code review, it's the code that I can read without thinking "what kind of xxxxx wrote this".
If you are even thinking about using another version to make your code run faster, then you need to learn how to save microseconds or milliseconds, not nanoseconds.
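If you do want to check the premise, the cheapest way is to compare the generated code for two equivalent forms, e.g. with g++ -O2 -S or on Compiler Explorer. A minimal sketch (the function names are placeholders):

void mainOperation();
void elseOperation();

// Ternary form: with optimizations enabled, GCC and Clang typically
// emit the same branch structure as the plain if/else below.
void ternary_form(bool cmp) {
    cmp ? mainOperation() : elseOperation();
}

// if/else form ("case 0"): the readable baseline.
void if_form(bool cmp) {
    if (cmp) mainOperation();
    else     elseOperation();
}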

If/else Performance

I'm trying to figure out the difference in CPU usage and performance between two if/else styles. Take the following functions:
function f(x):
    if (condition) return true;
    else return false;

function f'(x):
    if (condition) return true;
    return false;
The purpose of the function is not important; in both cases, of course, you want to return true if the 'if' condition holds and false otherwise.
Both pieces of code do the same thing. With regard to performance and CPU usage, would there be any difference between these two programs when removing the else and letting sequential execution handle the 'else' case instead, or is the difference simply lost when compiled?
There is no difference between the two functions. Any half-decent compiler would generate identical code for them.
Because the if branch ends with return, the else in the first program is redundant. When the program is translated to machine instructions, you end up with something like this:
start:    LD $condition   -- Check condition
          JZ else_br      -- Conditional jump
          LD true_val
          RET             -- Return true
else_br:  LD false_val
          RET             -- Return false
In the second program the else branch is empty, so the sequence of instructions is the same.
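For C++ specifically, here is a minimal pair you can compile and inspect yourself (e.g. with g++ -O2 -S); GCC and Clang reduce both functions to identical code, essentially a move of the condition and a return:

// With an explicit else branch.
bool with_else(bool condition) {
    if (condition) return true;
    else return false;
}

// Same logic, relying on sequential execution instead of else.
bool without_else(bool condition) {
    if (condition) return true;
    return false;
}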

Is there a difference in processing time between multiple if statements and one if with multiple conditions?

I prefer to avoid nested ifs, but I wonder if there's a time difference between these two:
if (a && b)
{
    ...
}
or
if (a)
{
    if (b)
    {
        ...
    }
}
They're exactly identical.
That said, it depends on your compiler, and you have not mentioned the language you will be using.
Most compilers, though, will translate both pieces of code into the same thing.
Example for Java: Should if(a&&b) take more time than if(a) if(b)?
Example for C++: Nested if statements and "&&" operator
It is the same.
After a evaluates to true in (a && b && c && ...), b is tested, then c, and so on – one by one, until some expression in the chain evaluates to false; once that happens, all subsequent expressions are skipped. As far as I know, it works the same way in all C-like languages.
Your code will be more readable if you use the && operator instead of nesting many if blocks.
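One way to see both the short-circuiting and the identical evaluation order of the two forms is with helper predicates that log when they run – a minimal C++ sketch, with a, b, and c standing in for arbitrary conditions:

#include <iostream>

// Each helper prints its name when it is evaluated, making the
// evaluation order and the short-circuit visible.
bool a() { std::cout << "a "; return true;  }
bool b() { std::cout << "b "; return false; }
bool c() { std::cout << "c "; return true;  }

int main() {
    if (a() && b() && c()) { }               // prints "a b ": c() is skipped
    std::cout << '\n';
    if (a()) { if (b()) { if (c()) { } } }   // prints "a b " again
    std::cout << '\n';
}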

Coding styles in conditional expressions of some programming languages

I'm a bit confused about the difference between the conditional expressions below:
if (1 == a) {
    // something
}
and
if (a == 1) {
    // something
}
I saw the former in some scripts I downloaded, and I wonder what the difference between them is.
The former has been dubbed a Yoda condition:
using if(constant == variable) instead of if(variable == constant), as in if(1 == a). The name comes from the inverted word order, like saying "if blue is the sky" or "if tall is the man".
The constant == variable syntax is often used to guard against mistyping == as =. It is, of course, also often used without understanding, e.g. when the right-hand side is something like constant == function_call_returning_nothing_modifiable, where the protection buys nothing.
Other than that there's no difference, unless you have some weird operator overload.
Many programming languages allow assignments like a = 1 to be used as expressions, making the following code syntactically valid (given that integers can be used in conditionals, such as in C or many scripting languages):
if (a = 1) {
    // something
}
This is rarely desired, and can lead to unexpected behavior. If 1 == a is used, then this mistake cannot occur because 1 = a is not valid.
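A small C-style sketch of the failure mode this guards against (the variable a is illustrative):

int main() {
    int a = 0;

    // Accidental assignment: this compiles (most compilers only warn),
    // and the condition is always true because the expression a = 1
    // evaluates to 1.
    if (a = 1) {
        // always taken
    }

    // Yoda order: the same typo becomes a hard compile error, because
    // the literal 1 is not assignable:
    // if (1 = a) { }  // error: lvalue required as left operand of assignment

    return 0;
}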
Well, I am not sure about the trick. Generally, we could say that equality is commutative, so a == b implies b == a. However, with == or === this doesn't hold in certain cases, for example when the right side is a range: in Ruby, (1..10) === 5 is true (range membership), while 5 === (1..10) is false.
