This is my Fibonacci generator:
package main

import "fmt"

func main() {
    for i, j := 0, 1; j < 100; i, j = i+j, i {
        fmt.Println(i)
    }
}
It works, but I don't know how I can improve it. I'd like to hear more expert approaches. Thanks!
I assume you are talking about improving the time complexity (and not the code complexity).
Your solution computes the Fibonacci numbers in O(n) time. Interestingly, there exists an O(log n) solution as well.
The algorithm is simple enough: find the nth power of the matrix A using a Divide and Conquer approach and report the (0,0)th element, where
A = |1 1|
    |1 0|
The recursion being
A^n = A^(n/2) * A^(n/2)
(with one extra factor of A when n is odd).
Time complexity:
T(n) = T(n/2) + O(1) = O(log n)
If you work it through on paper, you'll find that the proof is simple and follows from the principle of induction.
If you still need help, refer to this link
NOTE: Of course, the O(log n) time holds only if you want to find the nth Fibonacci number. If, however, you intend to print ALL n Fibonacci numbers, you cannot, even in theory, have a better time complexity than you already have.
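For illustration (my own sketch, not part of the original answer), here is a minimal Go implementation of the matrix-power approach; the names mul and fib are mine, and I read the answer out of result[0][1], since the (0,0) entry of A^n holds F(n+1):

package main

import "fmt"

// mul multiplies two 2x2 matrices.
func mul(a, b [2][2]int64) [2][2]int64 {
    var c [2][2]int64
    for i := 0; i < 2; i++ {
        for j := 0; j < 2; j++ {
            for k := 0; k < 2; k++ {
                c[i][j] += a[i][k] * b[k][j]
            }
        }
    }
    return c
}

// fib computes F(n) by raising A = |1 1; 1 0| to the nth power with
// exponentiation by squaring, i.e. O(log n) matrix multiplications.
func fib(n int) int64 {
    result := [2][2]int64{{1, 0}, {0, 1}} // identity matrix
    a := [2][2]int64{{1, 1}, {1, 0}}
    for ; n > 0; n /= 2 {
        if n%2 == 1 {
            result = mul(result, a)
        }
        a = mul(a, a)
    }
    return result[0][1] // A^n = |F(n+1) F(n); F(n) F(n-1)|
}

func main() {
    for n := 0; n < 12; n++ {
        fmt.Println(fib(n))
    }
}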
Suppose I want to find the partition number of n, a.k.a. p(n). Here a dynamic-programming solution based on Euler's Pentagonal number theorem is presented, which has time and space complexities of O(n^2) and O(n^2 log(n)) respectively.
Is there any improvement over this algorithm that reduces the complexity, or is there a proof that this algorithm is the best possible for this problem (i.e., that reducing the complexity below this is NP-hard)? Also, what about the space-time trade-off: can we reduce the time/space complexity by increasing the space/time complexity respectively (keeping in mind that each complexity should not exceed O(n^3))?
The following recurrence can be directly translated to code:
p(n, k) = p(n - k, k) + p(n - 1, k - 1)
where p(n, k) is the number of partitions of n into exactly k parts, with base cases p(n, k) = 0 for n < k and p(n, k) = 1 for k = 1 or k = n, and the total count is p(n) = p(n, 1) + p(n, 2) + ... + p(n, n).
import sys
import numpy as np

def num_partitions(n):
    # recursive function with an auxiliary cache to avoid recomputing
    # the same value more than once
    def get(n, k, aux):
        # base cases terminate the recursion
        if n < k:
            return 0
        if k == 1 or k == n:
            return 1
        # check if the value is already in the cache - if not, compute
        # it recursively
        if aux[n][k] == -1:
            aux[n][k] = get(n-k, k, aux) + get(n-1, k-1, aux)
        return aux[n][k]

    # one cache shared across all values of k; dtype=object keeps Python's
    # arbitrary-precision integers (np.int is deprecated in NumPy)
    aux = np.full((n+1, n+1), -1, dtype=object)
    return sum(get(n, k, aux) for k in range(1, n+1))

print(num_partitions(int(sys.argv[1])))
Suppose we have an m*n matrix in which each row is sorted. I only know that the best algorithm for this problem (finding the kth smallest element) runs in O(m(log m + log n)).
(It was a test question and this order was the given result.)
But I don't know how this algorithm works.
One idea is the following.
If I ask you what the rank of a given number x is in the original matrix, how do you answer that question?
One answer can be:
Binary search for the first occurrence of x (or the first element greater than it) in each row, and then add up the individual ranks.
int rank = 1;
for (int i = 0; i < m; ++i) {
    // lower_bound returns an iterator, so subtract begin() to get a count
    rank += std::lower_bound(matrix[i].begin(), matrix[i].end(), x)
            - matrix[i].begin();
}
This can be done in O(m * log n) time (m binary searches on rows of size n).
Now we just need to binary search on x (between 0 and INT_MAX, or matrix[0][k] as a tighter upper bound) to find the value of rank k. Since INT_MAX is a constant, that makes the overall time complexity O(m * log n) theoretically. One optimization that can be done is to use intelligently narrowed ranges in place of matrix[i].begin(), matrix[i].end().
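To illustrate (my own sketch, not the original answer's code, and in Go rather than C++): a value-space binary search for the kth smallest element, assuming non-negative integer entries so the search range [0, INT_MAX) works; countLE and kthSmallest are names I made up:

package main

import (
    "fmt"
    "math"
    "sort"
)

// countLE returns how many entries of the row-sorted matrix are <= v:
// one binary search per row, O(m log n) in total.
func countLE(matrix [][]int, v int) int {
    c := 0
    for _, row := range matrix {
        c += sort.Search(len(row), func(i int) bool { return row[i] > v })
    }
    return c
}

// kthSmallest finds the smallest value v with at least k entries <= v,
// which is exactly the kth smallest element (1-based). Cost:
// O(m * log n * log INT_MAX) = O(m log n) since INT_MAX is a constant.
func kthSmallest(matrix [][]int, k int) int {
    return sort.Search(math.MaxInt32, func(v int) bool {
        return countLE(matrix, v) >= k
    })
}

func main() {
    matrix := [][]int{
        {1, 5, 9},
        {2, 6, 10},
        {3, 7, 12},
    }
    fmt.Println(kthSmallest(matrix, 4)) // prints 5 (sorted: 1 2 3 5 6 7 9 10 12)
}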
PS: I am still wondering about the O(m*(log m + log n)), i.e. O(m * log(mn)), solution.
I have this algorithm here:
sumup(int n) {
    int s = ???, k = 0;
    while (k != n) {
        k = s*(2*k-1)*(2*k-1);
        s = k;
    }
    return s;
}
And I need to find out what its purpose is. It doesn't even seem to work for most numbers, and once it's done it just returns n again anyway.
Does anybody have any idea what this algorithm is used for?
I assumed it was for square roots, but it doesn't really seem to work either way.
At the end of the while loop's body, s and k are equal. Before the next iteration, k != n is checked, which is therefore equivalent to s != n. So the loop runs until s == n holds and then n is returned. In other words, the function takes the input n, runs for some time, and returns n at the end.
The questions are:
Does it terminate? Under what conditions?
Only if s and n fit together. E.g. if 0 < n < s holds the algorithm will not terminate.
How long does it take, if it terminates?
k is initialized with 0 and takes the value of s after the first iteration (since (2*0 - 1)^2 = 1). From there it is basically cubed every iteration, because the update is k_new = k*(2k-1)^2 ≈ 4k^3. Solving s^(3^x) = n leads to a complexity of Θ(log log n).
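To make the "basically cubed" step concrete, here is a small Go trace of the iterates (my own illustration: a transcription of the loop body with a step cap added, since for most inputs the loop never terminates):

package main

import "fmt"

// trace prints successive values of k for a given starting s.
// Each step computes k = s*(2k-1)^2 and then sets s = k, so after the
// first step (where k becomes s) each iterate is roughly 4k^3.
func trace(s, n int) {
    k := 0
    for steps := 0; k != n && steps < 4; steps++ {
        k = s * (2*k - 1) * (2*k - 1)
        s = k
        fmt.Println(k)
    }
}

func main() {
    trace(2, -1) // prints 2, 18, 22050, ~4.3e13
}

The original sumup terminates only if n happens to be one of these triply-exponentially growing iterates.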
What is the time complexity of the following algorithm?
int j = 0;
while (j < n) {
    for (int i = 0; i < n; i++) {
        x++;
        j++;
    }
}
I tried to calculate it like this: n*n = n^2, so the result would be O(n^2).
But I have second thoughts that it could also be like this: n + n = 2n, with the result O(n).
I know that if you have two nested for loops you multiply the n's, but here we have a while and a for, so I don't know.
It is actually just O(n) (not O(n+n), not O(2n), not O(n^2), just O(n)),
because you increment j up to n in the inner loop:
as soon as you finish the inner loop of n iterations, the outer condition will be false, so it exits after n iterations in total.
I think your understanding of how to determine the time complexity of programs involving loops is a bit off. The general approach is to count the number of iterations. The complexity of the generic program:
Loop until SomeCondition:
DoStuff()
Is O([#iterations]*[complexity of DoStuff]). So if the number of iterations is proportional to some variable n and DoStuff is proportional to some variable m, then this program is O(n*m).
Circling back to your question: we see that the inner for loop is proportional to the variable n. Now we ask ourselves "How many iterations does it take for the while loop to reach its condition?". Well that depends on how much j grows each iteration! As pointed out by CaldasGM j increases by one for every iteration of the inner for loop. This means that j grows by n each iteration and so the while loop will exit after one iteration!
So the complexity of this program is O([#iterations]*[Complexity of loop contents]) = O([1*n]*[1]) = O(n).
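You can confirm this by instrumenting the loop; here is a quick Go transcription (my own sketch) that counts how many times each loop body runs:

package main

import "fmt"

func main() {
    n := 1000
    j, x := 0, 0
    outer, inner := 0, 0
    for j < n {
        outer++
        for i := 0; i < n; i++ {
            inner++
            x++
            j++
        }
    }
    // the inner body runs n times in total and the outer loop body
    // runs only once, so the whole program does O(n) work
    fmt.Println(outer, inner, x) // prints: 1 1000 1000
}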
Hope this helps your understanding :)
T(1) = 1
T(n) = T(n^1/3) + 1
How can I solve it? By "solve" I mean find its complexity (I don't really know how to say it in English), such as O(n log n), etc.
I couldn't find a guess for the substitution method; I got nowhere with the iteration method, and I can't apply the Master Theorem.
I've arrived here, but I'm not sure:
T(n) = T(n^(1/3^k)) + k
Can you give me some advice, please?
I'll try to formulate some possible solutions. You can pick one depending on further constraints.
The recursion will run until n becomes 1. This is:
1 = n^(1/3^k)
or more generally
b = n^(1/3^k)
where k is the recursion depth. Solving this for k yields:
ln(b) = 1/3^k * ln(n)
ln(ln(b) / ln(n)) = k * ln(1/3)
-ln(ln(b) / ln(n)) / ln(3) = k
If we set b to 1, the equation becomes unsolvable, because ln(0) is undefined. This would be equivalent to an endless recursion.
However, we can say that in the last recursion step n should be "roughly 1". So we actually have some b != 1. Then k is:
k = -ln(ln(b) / ln(n)) / ln(3)
= -ln(c1 / ln(n)) / c2
= -(ln(c1) - lnln(n)) / c2
= (-c3 + lnln(n)) / c2
This should be O(log log n).
If you want to truncate n to its integer part, the calculation becomes pretty messy, because you get special cases after each step. However, we can approximate the result by choosing, say, b = 1.999999. This yields the same complexity as above.
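As a sanity check (my own addition, not part of the original answer), here is a short Go program that counts the recursion depth by repeatedly taking cube roots of n until it drops below 2 (i.e. using b = 2 as the cutoff) and compares it with ln(ln(n))/ln(3):

package main

import (
    "fmt"
    "math"
)

// depth counts how many cube roots it takes for n to fall below 2.
func depth(n float64) int {
    k := 0
    for n >= 2 {
        n = math.Cbrt(n)
        k++
    }
    return k
}

func main() {
    for _, n := range []float64{1e3, 1e9, 1e27, 1e81} {
        // the measured depth tracks ln(ln n)/ln 3 up to an additive constant
        fmt.Println(n, depth(n), math.Log(math.Log(n))/math.Log(3))
    }
}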
If this is a recursive function, then what I understood is that
T(n) = T(integerpart(n^(1/3))) + 1;
in this case:
S = 0;
if (n >= 1) {
    S++;
    N = n;
    while (N > 1) {
        N = integerpart(N^(1/3));
        S++;
    }
}
T(n) = S;
That means T(n) is a step function with integer values, where the width of the k-th interval is 2^(3^k) - 2^(3^(k-1)).
You can see that on the first interval, for n in ]1,8[, T(n) = 2; then for n in [8,512[, T(n) = 3, ...
So we can then say that T(2^(3^k)) = k + 2;
hence T(n) ~ O(ln(ln(n))/ln(3)) = O(log log n) (consider the sequence 2^(3^k)).