I am studying for our final exam; our prof gave us some exercises with the answers so we can practice. I'm pretty sure his solution is wrong here... can someone confirm or deny?
SUPPLIED QUESTION: Can we parallelize the following loop? If yes, do it. If not, why not?
a[0] = 0;
for( i = 1; i < n; i++)
a[i] = a[i-1] + i;
(prof) SUPPLIED ANSWER: Yes, we can, if we realize that:
a[0] = 0
a[1] = a[0]+i = i
a[2] = a[1] + i = 2i
a[3] = 3i
(me) MY REASONING: If you follow the output this doesn't seem to hold:
a[0] = 0
a[1] = 0 + 1
a[2] = 1 + 2 = 3
... etc ...
Am I right that my prof is wrong? Or am I going crazy?
Your professor probably confused the 'i' with another variable outside the loop.
It would work for:
j = ...;
a[0] = 0;
for( i = 1; i < n; i++)
a[i] = a[i-1] + j;
That said, you can find a closed formula for a[i]=a[i-1]+i, but it involves a bit more mathematics. In this case it's:
a[i] = (i^2+i)/2;
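With that formula, each a[i] depends only on i and not on a[i-1], so the iterations become independent. Here is a minimal sketch of the parallelized loop (my own code, not from the exercise), using Java's parallel streams:

import java.util.Arrays;
import java.util.stream.IntStream;

public class ParallelFill {
    public static void main(String[] args) {
        int n = 10;
        long[] a = new long[n];
        // a[i] = (i^2 + i) / 2: no iteration reads what another iteration
        // writes, so the body can run in any order, or in parallel.
        IntStream.range(0, n).parallel()
                 .forEach(i -> a[i] = (long) i * (i + 1) / 2);
        System.out.println(Arrays.toString(a)); // [0, 1, 3, 6, 10, 15, ...]
    }
}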
In general, it could get a lot more complex, like this closed formula for the Fibonacci series:
https://math.stackexchange.com/questions/65011/how-to-prove-that-the-binet-formula-gives-the-terms-of-the-fibonacci-sequence
Whoever supplied this answer:
a[0] = 0
a[1] = a[0]+i = i
a[2] = a[1] + i = 2i
a[3] = 3i
is wrong, as substitution does not work that way: i is the loop counter, so it has a different value on every iteration.
The loop in question is the simplest example of a flow dependence (aka a "true" dependence): a[i-1] is always written (by the previous iteration) before it is read, hence a "read-after-write hazard". Flow dependencies impose an ordering on the iterations that cannot simply be ignored.
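To make the dependence concrete, here is the same loop with the hazard annotated (the comments are mine):

for (int i = 1; i < n; i++) {
    // iteration i READS a[i-1], which iteration i-1 WROTE one step earlier:
    // a flow (read-after-write) dependence of distance 1, so iteration i
    // must run after iteration i-1; as written, the loop cannot be parallelized
    a[i] = a[i - 1] + i;
}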
function sum(arr){
    let count = 0;
    let count1 = 0;
    for(let i = 0; i < arr.length; ++i){
        count = arr[i] + count;
    }
    for(let j = 0; j < arr.length; ++j){
        if(j === arr.length - 1){
            break;
        }
        count1 = arr[j] * arr[j + 1] + count1;
    }
    count = count + count1;
    return count;
}
I wrote the code above to calculate the sum of the numbers in an array plus the products of each adjacent pair. For example, if array = [1,2,3], the result should be 1+2+3 + 1*2 + 2*3 = 14. However, writing two separate for loops seems silly to me; is there a more elegant way to do this? In addition, I'm stuck on proving this algorithm's correctness and its running time, though it looks like O(n) + O(n) = 2O(n) = O(n) to me. For the correctness proof, I think induction is the way to do it, but I'm currently having trouble using induction to prove an algorithm.
Simple modification:
function sum(arr){
    if(arr.length === 0) return 0; // keep the original's behavior on an empty array
    let count = arr[0];
    for(let i = 1; i < arr.length; ++i){
        // adds arr[i] and the product arr[i-1] * arr[i] in a single step
        count = count + arr[i] * (arr[i-1] + 1);
    }
    return count;
}
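As a quick check against the example from the question, take arr = [1, 2, 3]: count starts at 1, then becomes 1 + 2*(1+1) = 5, then 5 + 3*(2+1) = 14, which matches 1+2+3 + 1*2 + 2*3 = 14.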
And yes, you are right that O(n) + O(n) = 2O(n) = O(n), because constant factors are ignored.
I have to swap numbers in an array d times so that a left rotation of the array is performed; d is the number of rotations of the array. For example, if the array is 1->2->3->4->5 and d = 1, then after one left rotation the array will be 2->3->4->5->1.
I have used the following code for performing the above operation:
for (int rotation = 0; rotation < d; rotation++) {
    for (int i = 1; i < a.length; i++) {
        int bucket = a[i - 1];
        a[i - 1] = a[i];
        a[i] = bucket;
    }
}
But the running time of this algorithm is too high, probably O(n^d) in the worst case. How can I improve the efficiency of the algorithm, especially in the worst case?
I am looking for a RECURSIVE approach for this algorithm. I came up with:
public static void swapIt(int[] array, int rotations){
    for(int i = 1; i < array.length; i++){
        int bucket = array[i-1];
        array[i-1] = array[i];
        array[i] = bucket;
    }
    rotations--;
    if(rotations > 0){
        swapIt(array, rotations);
    }
    else{
        for(int i = 0; i < array.length; i++){
            System.out.print(array[i] + " ");
        }
    }
}
This RECURSIVE algorithm worked, but again efficiency is the issue; I cannot use it for larger arrays.
The complexity of your algorithm looks like O(n*d) to me.
My approach would be not to rotate by one d times, but to rotate by d one time.
You can calculate the destination of an element directly: the element at index i ends up at index (i + a.length - d) % a.length.
So instead of
a[i - 1] = a[i];
you would do this:
a[(i + a.length - d) % a.length] = a[i];
The term (i + a.length - d) % a.length guarantees that you always get values in the interval 0 ... a.length-1.
Explanation:
i + a.length - d is always positive (as long as d <= a.length),
but it could be greater than or equal to a.length, which would be out of bounds.
So take the remainder of the division by a.length.
This way you get, for every i = 0 ... a.length-1, the right new position.
As mentioned by Satyarth Agrahari:
If d > n you need to reduce d first, d = d % a.length, to ensure that (i + a.length - d) % a.length lies in the wanted interval 0 ... a.length-1. The result is the same, because rotating by a.length is like doing nothing at all.
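Putting it together, a minimal sketch (method and variable names are mine): it copies into a fresh array, because assigning a[(i + a.length - d) % a.length] = a[i] in place would overwrite elements before they are read.

static int[] rotateLeft(int[] a, int d) {
    int n = a.length;
    d = ((d % n) + n) % n;               // reduce d as discussed above
    int[] rotated = new int[n];
    for (int i = 0; i < n; i++) {
        rotated[(i + n - d) % n] = a[i]; // the element at i moves d places to the left
    }
    return rotated;
}

For a = {1, 2, 3, 4, 5} and d = 1 this returns {2, 3, 4, 5, 1}, and it runs in O(n) time regardless of how large d is.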
To add to the answer by @mrsmith42: you should probably check that d lies in the range 1 <= d <= N-1; you can trim it down by taking the modulo, d = d % N.
I am trying to study for a test in my programming language concepts class.
I am trying to understand how to solve this problem. Our professor said we don't need to use formal notation to prove the problem as long as he can understand what we are saying.
I missed the lecture where he solved the problem and I'm having a very hard time finding resources to help me solve it on my own.
Would be so thankful for an explanation.
Problem
Use axiomatic semantics to prove that the postcondition is true following the execution of the program, assuming the precondition is true.
Precondition: n ≥ 0 and A contains n elements indexed from 0
bound = n;
while (bound > 0) {
t = 0;
for (i = 0; i < bound-1; i++) {
if (A[i] > A[i+1]) {
swap = A[i];
A[i] = A[i+1];
A[i+1] = swap;
t = i+1;
}
}
bound = t;
}
Postcondition: A[0] ≤ A[1] ≤ ...≤ A[n-1]
Let's number the lines for reference:
1. bound = n;
2. while (bound > 0) {
3. t = 0;
4. for (i = 0; i < bound-1; i++) {
5. if (A[i] > A[i+1]) {
6. swap = A[i];
7. A[i] = A[i+1];
8. A[i+1] = swap;
9. t = i+1;
10. }
11. }
12. bound = t;
13. }
Consider the following assertions:

1. Before entering line 12: t < bound.
2. Before entering line 11: A[i] <= A[t] for all i such that 0 <= i < t.
3. Before entering line 13: A[k] <= A[j] for all indexes k and j such that bound <= k <= j <= n-1.
4. After leaving line 12: bound has decreased.

Let's see now why the assertions are true:

1. This is true because t = 0 before the loop, and if t is set inside the if, then t = i + 1 < (bound - 1) + 1 = bound.
2. This is true because otherwise a swap would have happened.
3. This is true because of assertion 2 and because the for loop doesn't change entries with indexes j from bound to n-1.
4. This is true because of assertion 1.
From assertion 4 we deduce that the while loop, and so the algorithm, finishes after at most n iterations, when bound = 0.
The postcondition now follows from assertion 3 with bound = 0.
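If it helps to see the assertions in action, here is a direct Java translation of the program (the loop is bubble sort) with assertions 1 and 2 checked at runtime. This is only a sketch for experimenting, not part of the formal proof:

static void sort(int[] A) {
    int bound = A.length;                     // line 1
    while (bound > 0) {                       // line 2
        int t = 0;                            // line 3
        for (int i = 0; i < bound - 1; i++) { // line 4
            if (A[i] > A[i + 1]) {            // line 5
                int swap = A[i];              // lines 6-8: swap A[i] and A[i+1]
                A[i] = A[i + 1];
                A[i + 1] = swap;
                t = i + 1;                    // line 9
            }
        }
        assert t < bound;                     // assertion 1
        for (int i = 0; i < t; i++)
            assert A[i] <= A[t];              // assertion 2
        bound = t;                            // line 12: bound decreases (assertion 4)
    }
}

Run with java -ea so the assert statements are actually checked.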
I have these 2 codes; the question is to find how many times x = x + 1 will run in each case, where T1(n) stands for code 1 and T2(n) stands for code 2. Then I have to find the big-O of each one; I know how to do that part, but I get stuck on finding how many times (as a function of n, of course) x = x + 1 will run.
CODE 1:
for(i = 1; i <= n; i++)
{
    for(j = 1; j <= sqrt(i); j++)
    {
        for(k = 1; k <= n - j + 1; k++)
        {
            x = x + 1;
        }
    }
}
CODE 2:
for(j = 1; j <= n; j++)
{
    h = n;
    while(h > 0)
    {
        for(i = 1; i <= sqrt(n); i++)
        {
            x = x + 1;
        }
        h = h / 2;
    }
}
I am really stuck and have already read a lot, so I'm asking if someone can help me; please explain it to me analytically.
PS: I think that in code 2 the loop for (i = 1; i <= sqrt(n); i++) will be reached n*log(n) times, right? Then what?
For code 1, the number of executions of x = x + 1 is

T1(n) = sum_{i=1..n} sum_{j=1..sqrt(i)} (n - j + 1) <= n * (1 + sqrt(2) + ... + sqrt(n)) <= n * n*sqrt(n) = n^(5/2)

Here we bounded 1 + sqrt(2) + ... + sqrt(n) by n*sqrt(n) and used the fact that the first term of n - j + 1 (namely n) is the leading term. So T1(n) = O(n^(5/2)).
For code 2 the calculations are simpler:

T2(n) = sum_{j=1..n} sum_{t=1..log2(n)} sum_{i=1..sqrt(n)} 1 = n * log2(n) * sqrt(n)

The second loop actually goes from h = n down to 0 by iterating h = h/2, but you can see that this is the same as a counter t going from 1 to log n. What we used is the fact that j, t, i are mutually independent, so the triple sum factors into a product (analogously, just like we can write that the sum from 1 to n of f(n) is just n*f(n)). So T2(n) = O(n^(3/2) * log n).
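If you want to sanity-check these formulas, a small program (my own sketch, not from the answer) can count the increments directly and compare them against the predicted orders of growth:

public class CountIncrements {
    public static void main(String[] args) {
        for (int n : new int[]{100, 200, 400}) {
            long t1 = 0; // code 1
            for (int i = 1; i <= n; i++)
                for (int j = 1; j <= Math.sqrt(i); j++)
                    for (int k = 1; k <= n - j + 1; k++)
                        t1++;
            long t2 = 0; // code 2
            for (int j = 1; j <= n; j++)
                for (int h = n; h > 0; h /= 2)
                    for (int i = 1; i <= Math.sqrt(n); i++)
                        t2++;
            double log2n = Math.log(n) / Math.log(2);
            System.out.printf("n=%d: T1=%d (vs n^2.5=%.0f), T2=%d (vs n^1.5*log2(n)=%.0f)%n",
                    n, t1, Math.pow(n, 2.5), t2, Math.pow(n, 1.5) * log2n);
        }
    }
}

As n doubles, the ratios T1/n^2.5 and T2/(n^1.5 * log2(n)) settle toward constants, which is what the bounds predict.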
I'm trying to study for an upcoming quiz about Big-O notation. I've got a few examples here, but they're giving me trouble: they seem a little too advanced for the basic examples you find online to be of much help. Here are the problems I'm stuck on.
1.

for (i = 1; i <= n/2; i = i * 2) {
    sum = sum + product;
    for (j = 1; j < i*i*i; j = j + 2) {
        sum++;
        product += sum;
    }
}
For this one, the i = i * 2 in the outer loop implies O(log(n)), and I don't think the i <= n/2 condition changes anything because of how we ignore constants, so the outer loop stays O(log(n)). The inner loop's condition j < i*i*i confuses me because it's in terms of i and not n. Would the Big-O of this inner loop then be O(i^3), and thus the Big-O for the entire problem O((i^3) * log(n))?
2.

for (i = n; i >= 1; i = i / 2) {
    sum = sum + product;
    for (j = 1; j < i*i; j = j + 2) {
        sum++;
        for (k = 1; k < i*i*j; k++)
            product *= i * j;
    }
}
For this one, the outermost loop implies O(log(n)). The middle loop implies, again I'm unsure, O(i^2)? And the innermost loop implies O(i^2 * j)? I've never seen examples like this before, so I'm almost guessing at this point. Would the Big-O notation for this problem be O(i^4 * n * j)?
3.

for (i = 1; i < n*n; i = i*2) {
    for (j = 0; j < i*i; j++) {
        sum++;
        for (k = i*j; k > 0; k = k - 2)
            product *= i * j;
    }
}
The outermost loop for this one has an n^2 condition but also a doubling increment, so I think that cancels out to just regular O(n). The middle loop is O(i^2), and the innermost loop is, I think, just O(n) and trying to trick you. So for this problem the Big-O notation would be O(n^2 * i^2)?
4.

int i = 1, j = 2;
while (i <= n) {
    sum += 1;
    i = i * j;
    j = j * 2;
}
For this one I did a few iterations to better see what was happening:
i = 1, j = 2
i = 2, j = 4
i = 8, j = 8
i = 64, j = 16
i = 1024, j = 32
So clearly, 'i' grows very quickly, and thus the condition is met very quickly. However I'm not sure just what kind of Big-O notation this is.
Any pointers or hints you can give are greatly appreciated, thanks guys.
You can't leave i or j in the O-notation; everything must be converted to n.
For the first one:

Let k = log2(i). Then the inner loop runs 2^(3k)/2 = 2^(3k-1) times for each iteration of the outer loop, and k goes from 1 to log2(n). So the total number of iterations is

sum of 2^(3k-1) for k from 1 to log2(n) = 4/7 * (n^3 - 1)

according to Wolfram Alpha, which is O(n^3). (The exact limits of k are off by one compared to the loop's bounds, but that only changes the constant factor, not the O(n^3).)
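If you want to double-check that the growth really is cubic, you can count the iterations directly (my own sketch; n is a power of two so that log2(n) is exact):

long n = 1 << 10; // n = 1024
long count = 0;
for (long i = 1; i <= n / 2; i *= 2)
    for (long j = 1; j < i * i * i; j += 2)
        count++;
System.out.println((double) count / (n * n * n)); // settles toward a constant as n grows

Doubling n multiplies the count by roughly 8, exactly what Θ(n^3) predicts.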
For the last one: after k iterations we have i = j_1 * j_2 * ... * j_k with j_m = 2^m, so

i = 2^1 * 2^2 * ... * 2^k = 2^(1 + 2 + ... + k) = 2^(k(k+1)/2).

The loop stops when i exceeds n, i.e. when

1 + 2 + 3 + ... + k = k(k+1)/2 = log2(n),

which gives k = O(sqrt(log n)).
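This one is also easy to verify empirically (my own sketch; n is kept within the range of long to avoid overflow):

for (int p = 10; p <= 50; p += 10) {
    long n = 1L << p, i = 1, j = 2, steps = 0;
    while (i <= n) { i *= j; j *= 2; steps++; }
    System.out.println("log2(n)=" + p + "  steps=" + steps
            + "  sqrt(2*log2(n))=" + Math.sqrt(2.0 * p));
}

The step count stays within a small additive constant of sqrt(2 * log2(n)).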
BTW, log(n^2) is not n; it is 2*log(n). (This matters for your third problem: the outer loop there runs O(log n) times, not O(n) times.)
This question would be better asked on the Computer Science Stack Exchange than here.