I want to prove that the partial sum equals sin(0.9) (where x = 0.9) to within a given tolerance, using the Taylor expansion and finding the required N.
I'm a teacher and told my students that the Big O of evaluating an expression such as y = 15x + 8 is O(1). However, when we learned prefix and postfix notation, we discussed that evaluating these expressions is O(N), because you have to go through each character of the expression (assuming you're given it as a string).
One student asked how we can say that evaluating an expression is O(1) if, behind the scenes, an infix, postfix, or prefix evaluation must be taking place.
I'm not sure what to answer.
An expression does not have any time complexity as such. An algorithm to solve a problem can have a time complexity. So it all depends on what you define as your problem and what you define to be relevant parameters of complexity. If you have a fixed assignment such as
y := 15 * x + 8;
the problem can be defined as "compute the value of 15 * x + 8, with input parameter x". So here you want to express the time complexity as a function of x. The time complexity is O(1), assuming standard 32/64-bit arithmetic, and O(log x) if we are using arbitrary-precision arithmetic.
However, if you regard the size of the expression as variable, the problem becomes "compute the value of an arithmetic expression tree with k nodes, where k is an input parameter". This is a different problem, and has a different complexity, as you correctly pointed out.
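To make the contrast concrete, here is a minimal sketch of mine (assuming the postfix expression has already been tokenized) of a one-pass evaluator; it runs in O(k) for k tokens, while the fixed assignment above stays O(1):

    def eval_postfix(tokens):
        """Evaluate a postfix expression in one pass: O(k) for k tokens."""
        stack = []
        for tok in tokens:
            if tok in ("+", "-", "*"):
                b, a = stack.pop(), stack.pop()
                stack.append(a + b if tok == "+" else a - b if tok == "-" else a * b)
            else:
                stack.append(float(tok))  # operand
        return stack[0]

    # y = 15 * x + 8 with x = 2, in postfix: "15 2 * 8 +"
    print(eval_postfix(["15", "2", "*", "8", "+"]))  # 38.0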
I am stuck on this one and I don't know how to solve it. No matter what I try, I just can't find a way to manipulate the function so that I can represent it in a form that lets me find a g(n) such that T(n)∈Θ(g(n)).
The function I am having trouble with is:
$T(n)=4n^4\,T(\sqrt{n})+(n^6\lg n+3\lg^7 n)(2n^2\lg n+\lg^3 n)$
Additionally, if you can, could you please check whether I am on the right path with:
$T(n)=T(n-1)+\frac{1}{n}+\frac{1}{n^2}$
To solve it I used $T(n)-T(n-1)=\frac{1}{n}+\frac{1}{n^2}$, so $(T(n)-T(n-1))+(T(n-1)-T(n-2))+\ldots+(T(2)-T(1))=\sum_{k=2}^n\left(\frac{1}{k}+\frac{1}{k^2}\right)$; the left-hand side telescopes to $T(n)-T(1)$, which gives $T(n)=T(1)+\sum_{k=2}^n\frac{1}{k}+\sum_{k=2}^n\frac{1}{k^2}$, and then I intended to use the harmonic series formula. However, I don't know how to continue from here and find the asymptotic bounds.
I hope that on the second one I am on the right path; however, I don't know how to solve the first one at all. If I've made any mistakes, please show me the right way so I can correct them.
Thank you very much for your help.
Sorry that for some reason the math doesn't render correctly here.
Following on from the comments:
Solving (2) first since it is more straightforward.
Your expansion attempt is correct. Writing it slightly differently:

$$T(n) = T(1) + \underbrace{\sum_{k=2}^n \frac{1}{k}}_{A} + \underbrace{\sum_{k=2}^n \frac{1}{k^2}}_{B}$$
A, the harmonic series minus its first term, is asymptotically equal to the natural logarithm:

$$A = H_n - 1 = \ln n + \gamma - 1 + O(1/n) = \Theta(\log n)$$
γ = 0.57721... is the Euler-Mascheroni constant.
B, a sum of inverse squares: the corresponding infinite sum is the famous Basel problem:

$$\sum_{k=1}^{\infty} \frac{1}{k^2} = \frac{\pi^2}{6}$$
which is 1.6449.... Therefore, since B is monotonically increasing in n and bounded above by this value, it is O(1).
The total complexity of (2) is simply Θ(log n).
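As a quick numeric sanity check, here is a sketch (assuming a base case of T(1) = 1) showing that T(n) - ln n settles to a constant, consistent with Θ(log n):

    import math

    # T(n) = T(n-1) + 1/n + 1/n^2, with assumed base case T(1) = 1.
    T = 1.0
    for n in range(2, 10**6 + 1):
        T += 1.0 / n + 1.0 / n**2
        if n in (10, 10**3, 10**6):
            # T(n) - ln(n) approaches T(1) + (gamma - 1) + (pi^2/6 - 1) ~ 1.2221
            print(n, T - math.log(n))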
(1) is a little more tedious.
Little-o notation denotes a strictly lower complexity class, i.e.:

$$f(n) = o(g(n)) \iff \lim_{n\to\infty} \frac{f(n)}{g(n)} = 0$$
Assume a set of N functions $\{F_i\}$ is ordered in decreasing order of complexity, i.e. $F_2 = o(F_1)$ etc. Take a linear combination of them with constant coefficients $a_i$ (with $a_1 \neq 0$):

$$\sum_{i=1}^{N} a_i F_i(n) = \Theta(F_1(n))$$
Thus a sum of different functions is asymptotically equal to the one with the highest growth rate.
To sort the terms in the expansion of the two parentheses, note that

$$\lg^a n = o(n^b) \quad \text{for any constants } a > 0,\ b > 0,$$

provable by applying L'Hôpital's rule. So the only asymptotically significant term in the product is $n^6 \lg n \cdot 2n^2 \lg n = 2n^8 \lg^2 n$.
Expand the recurrence as before, noting that (i) the factor $4n^4$ accumulates multiplicatively, and (ii) the argument at the m-th expansion is $n^{1/2^m}$ (repeated square roots).
The new term added by the m-th expansion is therefore (I will assume you know how to derive this, since you were able to do the same for (2)):

$$t_m = \left(\prod_{j=0}^{m-1} 4\left(n^{1/2^j}\right)^4\right) \cdot 2\left(n^{1/2^m}\right)^8 \lg^2\!\left(n^{1/2^m}\right) = 4^m\, n^{8(1-2^{-m})} \cdot 2\, n^{8\cdot 2^{-m}} \cdot \frac{\lg^2 n}{4^m} = 2n^8 \lg^2 n$$
Rather surprisingly, each added term is precisely equal to the first.
Assume that the stopping condition for the recursive expansion is n < 2 (which of course rounds down to T(1)). The number of expansions M then satisfies $n^{1/2^M} < 2$, i.e. $M = \Theta(\lg \lg n)$.
Since each added term $t_m$ is always the same, simply multiply by the maximum number of expansions (the leftover boundary term $4^M n^{8(1-2^{-M})}\, T(O(1)) = \Theta(n^8 \lg^2 n)$ is of lower order):

$$T(n) = \Theta(t_0 \cdot M) = \Theta\!\left(n^8 \lg^2 n \cdot \lg \lg n\right)$$
Function (1) is therefore $T(n) \in \Theta(n^8 \lg^2 n\, \lg \lg n)$.
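As a sanity check, here is a sketch that iterates a simplified model of the recurrence (keeping only the dominant forcing term 2n^8 lg^2 n, and assuming T(n) = 1 for n < 2, as above); the ratio against n^8 lg^2 n lg lg n should level off near a constant:

    import math

    def T(n):
        # Model recurrence with only the dominant forcing term kept.
        if n < 2:
            return 1.0
        return 4 * n**4 * T(math.sqrt(n)) + 2 * n**8 * math.log2(n) ** 2

    for p in (4, 8, 16, 32, 64):
        n = 2.0 ** p
        guess = n**8 * math.log2(n) ** 2 * math.log2(math.log2(n))
        print(p, T(n) / guess)  # levels off near a constant as n grows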
I'm studying the naive string search algorithm (a.k.a. the brute-force algorithm). I know that there exist other, more efficient algorithms, but as I'm starting from the basics, currently I'm interested only in this algorithm.
And I have a question, as follows:
What is the average time complexity (ϴ) for this algorithm?
I have found that the best and worst cases are Θ(N) and Θ(M·N), respectively.
From your comment to the question, it seems that the N text characters are uniformly randomly generated. For this setting, brute force's average time is O(N - M), irrespective of the way the search string is generated. (Note that Wikipedia states O(N + M), but we can actually deduce O(N - M) using the following analysis. See also these lecture notes).
Consider the iteration where the search string is matched against the text at position i. For any search string, each text character has probability p = 255/256 of not matching the corresponding character of the search string (assuming a 256-letter alphabet). Say we define a "success" to be a mismatch. Then the number of attempts until success follows a geometric distribution, with an expected (1 - p)/p = 1/255 = O(1) failures before success.
So, for position i, the expected cost is O(1). By linearity of expectation, we now sum over all relevant i; there are Θ(N - M) such i, giving an expected total of O(N - M).
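To illustrate, here is a small simulation sketch, assuming uniformly random bytes for both text and pattern and counting character comparisons; the per-position cost stays near the predicted constant:

    import random

    def naive_count(text, pat):
        """Naive search; returns the total number of character comparisons."""
        comps = 0
        for i in range(len(text) - len(pat) + 1):
            for j in range(len(pat)):
                comps += 1
                if text[i + j] != pat[j]:
                    break
        return comps

    random.seed(0)
    N, M = 100_000, 100
    text = bytes(random.randrange(256) for _ in range(N))
    pat = bytes(random.randrange(256) for _ in range(M))
    # Expected comparisons per position is about 256/255 ~ 1.004.
    print(naive_count(text, pat) / (N - M + 1))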
In computational complexity theory, we say that an algorithm has complexity O(f(n)) if the number of computations needed to solve a problem with input size n is bounded by c·f(n) for all sufficiently large n, where c is a positive constant not depending on n, and f(n) is an increasing function that goes to infinity as n does.
The 3-SAT problem is stated as follows: given a CNF expression whose clauses each have exactly 3 literals, is there some assignment of TRUE and FALSE values to the variables that will make the entire expression true?
A CNF expression consists of, say, k clauses involving m variables x1, ..., xm.
In order to decide whether 3-SAT has polynomial complexity P(n) or not, I need to understand something as simple as "what n is" in this problem.
My question is:
What is considered the input size n in this particular 3-SAT problem?
Is it the number k of clauses? Or is it the number m of variables?
Or is n some function of k and m, n = f(k, m)?
I am stuck on this seemingly simple issue.
According to the answer of Timmie Smith, we can consider the estimate:
k <= constant * f(m)
where f is a polynomial function of m.
More precisely, this function f(m) can be taken to be the cubic polynomial P(m) = m^3, since the number of possible 3-literal clauses over m variables is O(m^3).
Thus, if we consider the complexity g(k, m) of 3-SAT, we would have:
g(k, m) = g(P(m), m), with P(m) = m^3.
So, if the function g is polynomial in k and m, then it is in fact polynomial in m alone. Thus, taking m as the input size, one only needs to determine whether a given algorithm is polynomial in m in order to know whether 3-SAT is in P or not.
If you agree, I will accept Timmie's answer as the correct one.
UPDATE:
I asked the same question here:
https://cstheory.stackexchange.com/questions/18756/whats-the-meaning-of-input-size-for-3-sat
The accepted answer was helpful to me.
The input size is the number m of variables. This is because the number of possible clauses that can be formed from m variables is a polynomial function of the number of variables.
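For instance, a quick counting sketch (assuming each clause uses three distinct variables, each literal possibly negated) confirming the O(m^3) growth:

    from itertools import combinations

    def count_3clauses(m):
        # Choose 3 of the m variables, then one of 2 signs per literal.
        return sum(1 for _ in combinations(range(m), 3)) * 2**3

    for m in (5, 10, 20):
        # Matches the closed form 8 * C(m, 3) = O(m^3).
        print(m, count_3clauses(m), 8 * m * (m - 1) * (m - 2) // 6)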
I've seen in some papers from Carnegie Mellon University the following definition of "size of input" for this kind of problem:
number of bits it takes to write the input down
Considering that the input can be compressed, this definition makes sense to me, because it is a good measure of input entropy.
My 2 cents! Cheers!!
The input size is the number of variables m.
The reason for this is that the size of the search space that must be traversed to solve the problem is determined entirely by the number of variables: each variable has two possible states (1 or 0), and the search space consists of all possible assignments. A brute-force algorithm would simply test all 2^m possible assignments to traverse the search space. Although most 3-SAT algorithms are significantly affected by the number of clauses, that does not influence the underlying problem's complexity.
Therefore the input size is also the number of variables for plain-old SAT, where the search space looks the same, although resolving clauses in a non-brute-force way works quite differently.
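For illustration, here is a minimal brute-force sketch, assuming a hypothetical encoding where a clause is a tuple of signed integers (literal v means variable |v| is true, -v means it is false); the 2^m factor comes entirely from the number of variables:

    from itertools import product

    def brute_force_3sat(num_vars, clauses):
        """Try all 2^m assignments; returns a satisfying one or None."""
        for bits in product((False, True), repeat=num_vars):
            if all(any((lit > 0) == bits[abs(lit) - 1] for lit in clause)
                   for clause in clauses):
                return bits
        return None

    # (x1 or not x2 or x3) and (not x1 or x2 or not x3)
    print(brute_force_3sat(3, [(1, -2, 3), (-1, 2, -3)]))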
Looking for some help with an upcoming exam; this is a question from the review. I'm hoping someone can restate a) so I can better understand what it is asking.
So it wants me, instead of using extra multiplications, to obtain some of the terms in the answer (PQ) by subtracting and adding already-multiplied terms, as Strassen does in his algorithm to compute the product of 2×2 matrices in 7 multiplications instead of 8.
a) Suppose P(x) and Q(x) are two polynomials of (even) size n.
Let P1(x) and P2(x) denote the polynomials of size n/2 determined by the first n/2 and last n/2 coefficients of P(x). Similarly define Q1(x) and Q2(x),
i.e., P = P1 + x^(n/2)·P2 and Q = Q1 + x^(n/2)·Q2.
Show how the product PQ can be computed using only 3 distinct multiplications of polynomials of size n/2.
b) Briefly explain how the result in a) can be used to design a divide-and-conquer algorithm for multiplying two polynomials of size n (explain what the recursive calls are and what the bootstrap condition is).
c) Analyze the worst-case complexity of the algorithm you have given in part b). In particular, derive a recurrence formula for W(n) and solve it. As usual, to simplify the math, you may assume that n is a power of 2.
Here is a link I found that covers polynomial multiplication.
http://algorithm.cs.nthu.edu.tw/~course/Extra_Info/Divide%20and%20Conquer_supplement.pdf
Notice that if we do polynomial multiplication the way we learned in high school, it takes Ω(n²) time. The question wants you to see that there is a more efficient algorithm, obtained by first preprocessing the polynomials: divide each one into two pieces. This lecture gives a pretty detailed explanation of how to do this.
In particular, look at page 12 of the link. It shows explicitly how a four-multiplication process can be done with three multiplications when multiplying polynomials.
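For part a), here is a minimal sketch (assuming coefficient lists of equal, power-of-two size) that uses the identity P1·Q2 + P2·Q1 = (P1 + P2)(Q1 + Q2) - P1·Q1 - P2·Q2, so only three half-size multiplications are needed:

    def poly_mult(P, Q):
        """Multiply coefficient lists P, Q of equal power-of-two size n
        using 3 recursive half-size multiplications (Karatsuba-style)."""
        n = len(P)
        if n == 1:  # bootstrap condition: a single scalar product
            return [P[0] * Q[0]]
        h = n // 2
        P1, P2 = P[:h], P[h:]  # P = P1 + x^h * P2
        Q1, Q2 = Q[:h], Q[h:]
        A = poly_mult(P1, Q1)                           # P1 * Q1
        B = poly_mult(P2, Q2)                           # P2 * Q2
        C = poly_mult([a + b for a, b in zip(P1, P2)],
                      [a + b for a, b in zip(Q1, Q2)])  # (P1+P2)(Q1+Q2)
        mid = [c - a - b for c, a, b in zip(C, A, B)]   # P1*Q2 + P2*Q1
        out = [0] * (2 * n - 1)
        for i, v in enumerate(A):
            out[i] += v
        for i, v in enumerate(mid):
            out[i + h] += v
        for i, v in enumerate(B):
            out[i + n] += v
        return out

    # (1 + 2x)(3 + 4x) = 3 + 10x + 8x^2
    print(poly_mult([1, 2], [3, 4]))  # [3, 10, 8]

For part c), this gives the recurrence W(n) = 3W(n/2) + O(n), which solves to W(n) = Θ(n^(lg 3)) ≈ Θ(n^1.585) by the master theorem.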