for loop and if statement positioning efficiency - performance

I'm doing some simple logic with a for loop and an if statement, and I was wondering which of the following two arrangements is better, or whether there is a significant performance difference between the two.
Case 1:
if condition-is-true:
    for loop of length n:
        common code
        do this
else:
    another for loop of length n:
        common code
        do that
Case 2:
for loop of length n:
    common code
    if condition-is-true:
        do this
    else:
        do that
Basically, I have a for loop that needs to be executed slightly differently based on a condition, but there is certain stuff that needs to happen in the for loop no matter what. I would prefer the second one because I don't have to repeat the common code, but I'm wondering if case 1 would perform significantly better?
I know in terms of big-O notation it doesn't really matter because the if-else statement is a constant anyway, but I'm wondering whether, realistically, on a dataset that is not too big (maybe n = a few thousand), the two cases make a difference.
Thank you!

The first one is better because the condition is checked only once, whereas in the second case you have to check the condition on every iteration. But your code will be longer. If code size matters, put the common code into a method and call that method instead of repeating the block of common code.
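For illustration, here is a minimal Python sketch of that suggestion; data, condition, common_work, do_this and do_that are made-up placeholders, not the asker's actual code:

data = range(5000)      # hypothetical dataset, n = a few thousand
condition = True        # hypothetical flag, decided before the loop runs

def common_work(item):
    # stand-in for the code that must run on every iteration
    return item * 2

def do_this(value):
    return value + 1

def do_that(value):
    return value - 1

# Case 1: the condition is tested once; the helper keeps the common code in one place.
results = []
if condition:
    for item in data:
        results.append(do_this(common_work(item)))
else:
    for item in data:
        results.append(do_that(common_work(item)))

# Case 2: a single loop that re-tests the condition on every iteration.
results = []
for item in data:
    value = common_work(item)
    if condition:
        results.append(do_this(value))
    else:
        results.append(do_that(value))

For n of a few thousand the extra branch per iteration is usually negligible compared with the loop body; timing both versions with the timeit module will settle it for a specific workload.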

Related

Are successive for loops over the same iterable slower than a single loop?

I often see, and have an inclination to write, code where only one pass is made over the data if it can be helped. I think there is a part of me (and of the people whose code I read) that feels it might be more efficient. But is it actually? Often, multiple passes over a list, each doing something different, lend themselves to much more separable and easily understood code. I recognize there's some overhead for creating a for loop, but I can't imagine it's significant at all?
From a big-O perspective, clearly both are O(n), but suppose each loop carries an overhead* o, and compare y for loops each doing O(1) work per element against 1 for loop doing O(y) work per element:
In the first case, y·O(n + o) = y·O(n + cn) = O(yn + ycn) = O(yn) + O(ycn) = O(yn) + y·O(o)
In the second, O(ny + o) = O(yn) + O(o)
Basically they're the same except that the overhead is multiplied by y (this makes sense, since we're making y for loops and incur +o overhead for each).
How relevant is the overhead here? Do compiler optimizations change this analysis at all (and if so, how do the most popular languages respond to this)? Does this mean that in loops where the number of operations (say we split 1 for loop into 4) is comparable to the number of elements mapped over (say a couple dozen), making many loops does make a big difference? Or does it depend on the case, and is it the type of thing that needs to be tested and benchmarked?
*I'm assuming o is proportional to n (i.e. o = cn), since there is an update step in each iteration
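Since the question ends on testing and benchmarking, here is a minimal Python sketch of how the comparison might be measured; the workload (summing scaled copies of a couple dozen numbers) is made up for the example:

import timeit

data = list(range(24))   # "a couple dozen" elements, as in the question

def single_pass():
    a = b = c = d = 0
    for x in data:       # one loop doing four operations per element
        a += x
        b += 2 * x
        c += 3 * x
        d += 4 * x
    return a, b, c, d

def four_passes():
    a = sum(x for x in data)        # four loops, one operation each
    b = sum(2 * x for x in data)
    c = sum(3 * x for x in data)
    d = sum(4 * x for x in data)
    return a, b, c, d

print(timeit.timeit(single_pass, number=100000))
print(timeit.timeit(four_passes, number=100000))

Whatever the numbers turn out to be on a given machine and interpreter, a measurement like this answers the practical question more reliably than the asymptotic argument.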

Faster way of testing a condition in MATLAB

I need to run many, many tests of the form v<0 where v is a vector (a relatively short one). I am currently doing it with
all(v<0)
Is there a faster way?
Not sure which one will be faster (that may depend on the machine and Matlab version), but here are some alternatives to all(v<0):
~any(v>=0)
nnz(v>=0)==0 %// Or ~nnz(v>=0)
sum(v>=0)==0 %// Or ~sum(v>=0)
isempty(find(v>=0, 1)) %// Or isempty(find(v>=0))
I think the issue is that the conditional is executed on all elements of the array first, then the condition is tested... That is, for the test "any(v<0)", matlab does the following I believe:
Step 1: compute v<0 for every element of v
Step 2: search through the results of step 1 for a true value
So even if the first element of v is less than zero, the conditional was first computed for all elements, hence wasting a lot of time. I think this is also true for any of the alternative solutions offered above.
I don't know of a faster way to do it easily, but wish I did. In some cases, breaking the array v up into smaller chunks and testing incrementally could speed things up, particularly if the condition is common. For example:
function result = anyLessThanZero(v)
    w = v(:);
    result = true;
    for i = 1:numel(w)
        if ( w(i) < 0 )
            return;
        end
    end
    result = false;
end
but that can be very inefficient if the condition is rare. (If you were to really do this, there is probably a better way than I illustrate above to handle any condition, not just <0, but I show it this way to make it clear).

suggestions on constructing nested statements properly

I am having trouble constructing my own nested selection statements (ifs) and repetition statements (for loops, whiles and do-whiles). I can understand what most simple repetition and selection statements are doing, and although it takes me a bit longer to process what the nested statements are doing, I can still get the general gist of the code (keeping count of the control variables and such). However, the real problem comes down to the construction of these statements; I just can't for the life of me construct my own statements that properly align with the pseudo-code.
I'm quite new to programming in general, so I don't know if this is an experience thing or I just genuinely lack a very logical mind. It is VERY demoralising when it takes me about an hour to complete one question in a book when I feel like it should take a fraction of the time.
Can you people give me some pointers on how I can develop proper nested selection and repetition statements?
First of all, you need to understand this:
An if statement defines behavior for when **a decision is made**.
A loop (for, while, do-while) signifies **repetitive (iterative) work being done** (such as counting things).
Once you understand these, the next step, when confronted with a problem, is to try to break that problem up into small, manageable components:
i.e. decisions, which provide you with the possible code paths along the way,
and the various work you need to do, much of which will end up being repetitive,
hence the need for one or more loops.
For instance, say we have this simple problem:
Given a positive number N: if N is even, count (display numbers) from 0 (zero) to N in steps of 2; if N is odd, count from 0 to N in steps of 1.
First, let's break up this problem.
Step 1: Get the value of N. For this we don't need any decision, simply get it using the preferred method (from file, read console, etc.)
Step 2: Make a decision: is N odd or even?
Step 3: According to the decision made in Step 2, do work (count) - we will iterate from 0 to N, in steps of 1 or 2, depending on N's parity, and display the number at each step.
Now, we code:
//read N
int N;
cin >> N;
//make decision, get the 'step' value
int step = 0;
if (N % 2 == 0) step = 2;
else step = 1;
//do the work
for (int i = 0; i <= N; i = i + step)
{
    cout << i << endl;
}
These principles, in my opinion, apply to all programming problems, although of course, usually it is not so simple to discern between concepts.
Actually, the complicated phase is usually the problem break-up. That is where you actually think.
Coding is just translating your thinking so the computer can understand you.

foreach loop is not working (parallelization)

If I want to speed up the following code, how can I do that?
pcg <- foreach(boot.iter=1:boot.rep) %dopar% {
d.boot<-d[in.sample[[boot.iter]],]
*here in.sample[[boot.iter]] randomly generates 1000 row numbers.
I planned to split the overall task and send the separated trials to each core. For example,
sub_task <- foreach(i=1:cores.use) %dopar% {
    for (j in 1:trialsPerCore) {
        d.boot <- d[in.sample[[structure[i,j]]],]
    }
}
*structure is a matrix which contains the values 1 to boot.rep
But this one does not work; it seems like we cannot use a "for" loop inside foreach? Also, d.boot only keeps the last iteration on each core.
I searched online and found that the following code works,
sub_task <- foreach(i=1:cores.use) %:%
    foreach(j=1:trialsPerCore) %dopar% {
        d.boot <- d[in.sample[[structure[i,j]]],]
    }
But I think it is similar to my original function, and I do not think there is a great enhancement.
Do you guys have any suggestions?
Unless I'm missing something, it doesn't look like you're doing much if any computation in your foreach loop. You appear to be simply creating a list of matrices from d. That wouldn't benefit from parallel computing unless you can perform an operation on those matrices in your loop, and ideally return a relatively small result from that operation.
Although "chunking" often helps to execute parallel loops more efficiently, I don't think it's going to help here. The communication may be a little more efficient, but you're still just doing a lot of communication and essentially no computation.
Note that your attempt at chunking doesn't work because the for loop in the foreach loop is repeatedly assigning a matrix to the same variable. Then, the for loop itself returns a NULL as the body of the foreach loop, so that sub_task is a list of NULL's. An lapply would work much better in this context.
It will help a little to compute the values in the in.sample list in the foreach loop. That will decrease the amount of data that is auto-exported to each of the workers at the cost of a bit more computation on the workers, which is generally what you want to do in parallel loops. At the very least, you could iterate over in.sample directly:
pcg <- foreach(i=in.sample) %dopar% d[i,]
In this form, it's all the more obvious that there isn't enough computation to warrant parallel computing. If there isn't any real computation to perform, you're better off using lapply:
pcg <- lapply(in.sample, function(i) d[i,])

Detecting infinite loop in brainfuck program

I have written a simple brainfuck interpreter in MATLAB script language. It is fed random bf programs to execute (as part of a genetic algorithm project). The problem I face is that the generated program turns out to have an infinite loop in a sizeable number of cases, and hence the GA gets stuck at that point.
So, I need a mechanism to detect infinite loops and avoid executing that code in bf.
One obvious (trivial) case is when I have
[]
I can detect this and refuse to run that program.
For the non-trivial cases, I figured out that the basic idea is to determine how one iteration of the loop changes the current cell. If the change is negative, we're eventually going to reach 0, so it's a finite loop. Otherwise, if the change is non-negative, it's an infinite loop.
Implementing this is easy for the case of a single loop, but with nested loops it becomes very complicated. For example (in what follows, (1) refers to the contents of cell 1, etc.):
++++ Put 4 in 1st cell (1)
>+++ Put 3 in (2)
<[ While( (1) is non zero)
-- Decrease (1) by 2
>[ While( (2) is non zero)
- Decrement (2)
<+ Increment (1)
>]
(2) would be 0 at this point
+++ Increase (2) by 3 making (2) = 3
<] (1) was decreased by 2 and then increased by 3, so net effect is increment
and hence the code runs on and on. A naive check of the number of +'s and -'s done on cell 1, however, would say the number of -'s is more, so would not detect the infinite loop.
Can anyone think of a good algorithm to detect infinite loops, given arbitrary nesting of arbitrary number of loops in bf?
EDIT: I do know that the halting problem is unsolvable in general, but I was not sure whether there exist special-case exceptions. Like, maybe Matlab might function as a Super Turing machine able to determine the halting of the bf program. I might be horribly wrong, but if so, I would like to know exactly how and why.
SECOND EDIT: I have written what I purport to be an infinite loop detector. It probably misses some edge cases (or, less probably, somehow escapes Mr. Turing's clutches), but it seems to work for me as of now.
In pseudocode form, here it goes:
subroutine bfexec(bfprogram)
begin
    Looping through the bfprogram,
        If (current character is '[')
            Find the corresponding ']'
            Store the code between the two brackets in, say, 'subprog'
            Save the value of the current cell in oldval
            Call bfexec recursively with subprog
            Save the value of the current cell in newval
            If (newval >= oldval)
                Raise an 'infinite loop' error and exit
            EndIf
            /* Do other characters' processing */
        EndIf
    EndLoop
end
Alan Turing would like to have a word with you.
http://en.wikipedia.org/wiki/Halting_problem
When I used linear genetic programming, I just used an upper bound for the number of instructions a single program was allowed to execute in its lifetime. I think that this is sensible in two ways: I cannot really solve the halting problem anyway, and programs that take too long to compute are not worth giving more time anyway.
Let's say you did write a program that could detect whether any given program would run in an infinite loop. Let's say for the sake of simplicity that this program was written in brainfuck to analyze brainfuck programs (though this is not a precondition of the following proof, because any language can emulate brainfuck and brainfuck can emulate any language).
Now let's say you extend the checker program to make a new program. This new program exits immediately when its input loops indefinitely, and loops forever when its input exits at some point.
If you input this new program into itself, what will the results be?
If this program loops forever when run, then by its own definition it should exit immediately when run with itself as input. And vice versa. The checker program cannot possibly exist, because its very existence implies a contradiction.
As has been mentioned before, you are essentially restating the famous halting problem:
http://en.wikipedia.org/wiki/Halting_problem
Ed. I want to make clear that the above disproof is not my own, but is essentially the famous disproof Alan Turing gave back in 1936.
State in bf is a single array of chars.
If I were you, I'd take a hash of the bf interpreter state on every "]" (or once in rand(1, 100) "]"s*) and assert that the set of hashes is unique.
The second (or more) time I see a certain hash, I save the whole state aside.
The third (or more) time I see a certain hash, I compare the whole state to the saved one(s) and if there's a match, I quit.
On every input command (',') I reset my saved states and list of hashes.
An optimization is to only hash the part of state that was touched.
I haven't solved the halting problem - I'm detecting infinite loops while running the program.
*The rand is to make the check independent of loop period
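A rough Python sketch of that idea follows; the interpreter skeleton, the fixed-size tape and the 256-value cell model are assumptions for the example, not the asker's MATLAB interpreter. The full machine state (program counter, data pointer, tape) is recorded at every ']', and an exactly repeated state with no intervening input means the program can never terminate:

def run_with_cycle_check(code, tape_size=1000):
    # Interpret brainfuck `code`; raise if an exact machine state repeats.
    tape = [0] * tape_size
    ptr = 0                       # data pointer
    pc = 0                        # program counter
    jumps = match_brackets(code)  # index of each bracket's partner
    seen = set()                  # full states observed at ']'

    while pc < len(code):
        op = code[pc]
        if op == '+':
            tape[ptr] = (tape[ptr] + 1) % 256
        elif op == '-':
            tape[ptr] = (tape[ptr] - 1) % 256
        elif op == '>':
            ptr += 1
        elif op == '<':
            ptr -= 1
        elif op == '[':
            if tape[ptr] == 0:
                pc = jumps[pc]    # jump past the matching ']'
        elif op == ']':
            if tape[ptr] != 0:
                pc = jumps[pc]    # jump back to the matching '['
            state = (pc, ptr, tuple(tape))
            if state in seen:     # identical state seen before: exact cycle
                raise RuntimeError("infinite loop detected")
            seen.add(state)
        elif op == ',':
            seen.clear()          # new input invalidates earlier states
            tape[ptr] = 0         # placeholder; real input handling goes here
        # '.' (output) does not change the machine state, so it needs no handling
        pc += 1
    return tape

def match_brackets(code):
    # Map each '[' and ']' to the index of its partner (assumes balanced brackets).
    stack, jumps = [], {}
    for i, op in enumerate(code):
        if op == '[':
            stack.append(i)
        elif op == ']':
            j = stack.pop()
            jumps[i], jumps[j] = j, i
    return jumps

For example, run_with_cycle_check('+[]') raises on the second pass through the ']', while a terminating program such as '++[-]' runs to completion. The answer's hash-first scheme is a memory optimization of the same idea: store only hashes at first, and keep full states around only for hashes that recur.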
An infinite loop cannot be detected in general, but you can detect that the program is taking too much time.
Implement a timeout by incrementing a counter every time you run a command (e.g. <, >, +, -). When the counter reaches some large number, which you set by observation, you can say that the program takes a very long time to execute. For your purposes, "very long" is a good-enough approximation of infinite.
As already mentioned, this is the Halting Problem.
But in your case there might be a solution: the Halting Problem concerns the Turing machine, which has unlimited memory.
If you know that you have an upper limit on memory (e.g. you know you don't use more than 10 memory cells), you can execute your program and stop it after a bounded number of steps. The idea is that the bounded computation space bounds the computation time (as you can't write more than one cell in one step). After you have executed as many steps as there are distinct memory configurations, you can stop: e.g. if you have 3 cells, each with 256 possible values, you can have at most 256^3 different states, and so you can stop after executing that many steps. But be careful: there are implicit cells, like the instruction pointer and the registers. You can stop even earlier if you save every state configuration; as soon as you detect one you have already seen, you have an infinite loop. This approach is definitely much better in run time, but needs much more space (here it might be suitable to hash the configurations).
This is not the halting problem; however, it is still not reasonable to try to detect halting even in a machine as limited as a 1000-cell BF machine.
Consider this program:
+[->[>]+<[-<]+]
This program will not repeat a state until it has filled up the entire memory, which for just 1000 cells will take about 10^300 years.
If I remember correctly, the halting problem proof was only true for some extreme case that involved self-reference. However, it's still trivial to show a practical example of why you can't make an infinite loop detector.
Consider Fermat's Last Theorem. It's easy to create a program that iterates through every number (or in this case every triple of numbers) and checks whether it is a counterexample to the theorem. If so, it halts; otherwise it continues.
So if you have an infinite loop detector, it should be able to prove this theorem, and many many others (perhaps all others, if they can be reduced to searching for counterexamples.)
In general, any program that involves iterating through numbers and only stopping under some condition, would require a general theorem prover to prove if that condition can ever be met. And that's the simplest case of looping there is.
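To make the argument concrete, here is a minimal Python sketch of such a search for the exponent-3 case; the function name and the enumeration order are made up for the example. The loop halts only if a counterexample exists, so deciding whether it runs forever amounts to deciding the theorem for n = 3:

from itertools import count

def search_fermat_counterexample():
    # Enumerate every triple of positive integers by its sum s, so each
    # (a, b, c) is eventually reached, and halt only on a counterexample.
    for s in count(3):
        for a in range(1, s - 1):
            for b in range(1, s - a):
                c = s - a - b
                if a**3 + b**3 == c**3:
                    return a, b, c   # never reached, by Fermat's Last Theorem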
Off the top of my head (and I could be wrong), I would think it would be difficult to detect whether or not a program has an infinite loop without actually executing the program itself.
As the conditional execution of portions of the program depends on the program's execution state, it will be difficult to know that state without actually running the program.
If you don't require that a program with an infinite loop be executed, you could try having an "instructions executed" counter, and only execute a finite number of instructions. This way, if a program does have an infinite loop, the interpreter can terminate the program which is stuck in an infinite loop.
