1/sqrt(x^2 + y^2 + 1 - sqrt(2) sqrt(x^2 + y^2) (x/sqrt(x^2 + y^2) + y/sqrt(x^2 + y^2))) - 1/sqrt(x^2 + y^2 + 1 - sqrt(2) sqrt(x^2 + y^2) (x/sqrt(x^2 + y^2) + y/sqrt(x^2 - y^2)))
I'm trying to do a contour plot of an expression like this, but I can't get Wolfram Alpha or Mathematica to accept it. Mathematica plots nothing, and on Wolfram Alpha the computation time is exceeded.
Do I need to simplify this expression first?
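One approach to try in Mathematica (a sketch, not a guaranteed fix): define the expression once, let Simplify cancel the Sqrt[x^2 + y^2] factors, and give ContourPlot an Exclusions setting for the singular set coming from the Sqrt[x^2 - y^2] in the last radicand:

f[x_, y_] = Simplify[
   1/Sqrt[x^2 + y^2 + 1 - Sqrt[2] Sqrt[x^2 + y^2] (x/Sqrt[x^2 + y^2] + y/Sqrt[x^2 + y^2])] -
   1/Sqrt[x^2 + y^2 + 1 - Sqrt[2] Sqrt[x^2 + y^2] (x/Sqrt[x^2 + y^2] + y/Sqrt[x^2 - y^2])]];
(* Exclusions removes the x^2 == y^2 locus, where the last Sqrt blows up;
   ContourPlot silently drops points where f is not real-valued *)
ContourPlot[f[x, y], {x, -2, 2}, {y, -2, 2}, Exclusions -> {x^2 == y^2}]

If you want a single level set rather than a family of contours, pass an equation such as f[x, y] == 0 instead of f[x, y].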
I have these two equations:
pl1 = ((2*α + 1)^2/((2*α + 1)^2 - (1 - α)^2))
(3/(2 α + 1) (xmed1 - (1 - α)/(2 α + 1) xmed2) +
((-3 α)/(2*α + 1) (1 - α)/(2 α + 1)*pl3))
pl3 = (2*α + 1)^2/((2*α + 1)^2 - (1 - α)^2)
(3/(2 α + 1) (xmed3 - (1 - α)/(2 α + 1) xmed2) +
((-3 α)/(2*α + 1) (1 - α)/(2 α + 1)*pl1))
The parameters are α, xmed1, xmed2, xmed3.
I'm trying to plug pl1 into pl3 and solve for pl3, basically just to save myself the time of doing the algebra by hand.
Currently I can't even set pl3 as a variable; I get this output:
$RecursionLimit: Recursion depth of 1023 exceeded during evaluation of (1+2 α)^2
So now I have no idea what to do. Obviously, this is my first crack at using Mathematica. I thought it would speed up tedious algebra, but I've spent the last 5 hours going through tutorials and reading the Mathematica documentation... I probably should have just done it by hand!
The recursion error comes from defining pl1 and pl3 in terms of each other with Set (=): each evaluation triggers the other, forever. State them as a pair of equations instead and let Solve do the substitution. Note the double equals (==) for solving.
FullSimplify[Solve[
{pl1 == ((2*α + 1)^2/((2*α + 1)^2 - (1 - α)^2))
(3/(2 α + 1) (xmed1 - (1 - α)/(2 α + 1) xmed2) +
((-3 α)/(2*α + 1) (1 - α)/(2 α + 1)*pl3)),
pl3 == (2*α + 1)^2/((2*α + 1)^2 - (1 - α)^2)
(3/(2 α + 1) (xmed3 - (1 - α)/(2 α + 1) xmed2) +
((-3 α)/(2*α + 1) (1 - α)/(2 α + 1)*pl1))},
{pl1, pl3}]]
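If you then want pl3 on its own, grab it from the solution and (optionally) plug in numbers; the sample values below are arbitrary, just to show the pattern:

sol = %;  (* the result of the FullSimplify[Solve[...]] call above *)
pl3 /. First[sol]  (* pl3 as a formula in α, xmed1, xmed2, xmed3 *)
pl3 /. First[sol] /. {α -> 1/2, xmed1 -> 1, xmed2 -> 2, xmed3 -> 3}  (* arbitrary test values *)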
I understand why bubble sort is O(n^2).
However in many explanations I see something like this:
(n-1) + (n-2) + (n-3) + ..... + 3 + 2 + 1
Sum = n(n-1)/2
How do you calculate Sum from this part:
(n-1) + (n-2) + (n-3) + ..... + 3 + 2 + 1
Can anyone help?
Here's the trick:
If n is even:
n + (n-1) + (n-2) + … + 3 + 2 + 1
= [n + 1] + [(n-1) + 2] + [(n-2) + 3] + … + [(n - (n/2 - 1)) + n/2]
= (n + 1) + (n + 1) + (n + 1) + … + (n + 1)
= n(n+1)/2
If n is odd:
n + (n-1) + (n-2) + … + 3 + 2 + 1
= [n + 1] + [(n-1) + 2] + [(n-2) + 3] + … + [(n - (n-1)/2 + 1) + (n-1)/2] + (n-1)/2 + 1
= (n+1) + (n+1) + (n+1) + … + (n+1) + (n-1)/2 + 1
= (n+1)(n-1)/2 + (n-1)/2 + 1
= (n^2 - 1 + n - 1 + 2)/2
= (n^2 + n)/2
= n(n+1)/2
For your case, since you're counting up to n-1 rather than n, replace n with (n-1) in this formula, and simplify:
x(x+1)/2, x = (n-1)
=> (n-1)((n-1)+1)/2
= (n-1)(n)/2
= n(n-1)/2
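If you want to double-check the closed form, Mathematica's Sum does this symbolically:

Sum[i, {i, 1, n - 1}]  (* evaluates to (n - 1) n / 2 *)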
The simplest "proof" to understand without deriving the equation is to imagine the complexity as area:
so if we have the sequence:
n+(n-1)+(n-2)...
we can create a shape from it. Let's consider n = 5:
n 5 *****
n-1 4 ****
n-2 3 ***
n-3 2 **
n-4 1 *
Now when you look at the stars, they form a right triangle with two equal sides. That is half of an n x n square, so the area is:
area ≈ n*n / 2 = (n^2)/2
In complexity analysis the constant factors are dropped, so the complexity is:
O(n^2)
Hi, I solved a question with the recursion-tree method. Then I reached the equation below.
∑_{i=1}^{n} 3^(i-1) (n - (i - 1))
I need to find an asymptotic upper bound for this sum. Any help would be appreciated.
Wolfram Alpha is a great tool for this: https://www.wolframalpha.com/input/?i=sum(3%5E(i-1)(n+-+i+%2B+1)+for+i+%3D+1..n)
That tool simplifies the sum to: (-2n + 3^(n+1) - 3)/4.
In terms of big-O, that's O(3^n).
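The same closed form drops out of Mathematica's symbolic Sum, if you prefer checking locally:

Sum[3^(i - 1) (n - (i - 1)), {i, 1, n}]  (* simplifies to (3^(n + 1) - 2 n - 3)/4 *)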
Let u(n) = 3^(n-1) + 2*3^(n-2) + ... + n; this is exactly your sum, reindexed by j = n - i + 1. Then
u(n+1) = (3^n + 3^(n-1) + ... + 1) + (3^(n-1) + 2*3^(n-2) + ... + n) = (3^(n+1) - 1)/2 + u(n),
and also u(n+1) = 3*u(n) + n + 1 (multiply every term of u(n) by 3 and append the new last term, n + 1). Equating the two expressions and solving for u(n) gives
u(n) = (3^(n+1) - 2n - 3)/4.
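As a sanity check, Mathematica's RSolve solves the recurrence u(n+1) = 3u(n) + n + 1 with u(1) = 1 to the same closed form:

RSolve[{u[n + 1] == 3 u[n] + n + 1, u[1] == 1}, u[n], n]
(* returns u[n] -> (3^(n + 1) - 2 n - 3)/4, up to rearrangement *)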
I was asked to analyze the time complexity of the following recursive equation using the iterative method:
T(n)=T(n/3)+T(2n/3)+n^2.
T(1)=1
When I try to expand the equation, it blows up and I can't really keep track of all the recursive "calls" and constants.
This is caused by the uneven division of the data (1/3 vs. 2/3).
Is there an easier way to solve this using the iterative method?
Many thanks.
Here is a paper that shows the analysis of a similar formula: T(n) = T(n/3) + T(2n/3) + n
One way to make it iterative would require a method similar to the way parsers/compilers work
Applying your formula T(n) = T(n/3) + T(2n/3) + n^2 for n = 0..9 yields
T(0) = 0
T(1) = T(1/3) + T(2/3) + 1
T(2) = T(2/3) + T(4/3) + 4
T(3) = T(1) + T(2) + 9
T(4) = T(4/3) + T(8/3) + 16
T(5) = T(5/3) + T(10/3) + 25
T(6) = T(2) + T(4) + 36
T(7) = T(7/3) + T(14/3) + 49
T(8) = T(8/3) + T(16/3) + 64
T(9) = T(3) + T(6) + 81
T(3m) = T(m) + T(2m) + 9m^2
.. Maybe this can give you some hints
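If you only need numerical evidence, a memoized table is quick to build in Mathematica. This is a sketch: it assumes non-integer arguments are rounded down with Floor and that T(n) = 1 for n <= 1, neither of which the original recurrence specifies:

Clear[T];
T[n_] := T[n] = If[n <= 1, 1, T[Floor[n/3]] + T[Floor[2 n/3]] + n^2]  (* memoized *)
Table[{n, T[n], N[T[n]/n^2]}, {n, 10, 100, 10}]
(* the ratio T(n)/n^2 stays bounded, consistent with the O(n^2) answers below *)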
What helps here is to not multiply out any of the numbers, but write everything in terms of powers. Doing that all by hand, I got the following for the first few expansions:
T(n) = T((1/3)n) + T((2/3)n) + n^2
= T((1/3^2)n)
+ 2T((2/3^2)n)
+ T((2^2/3^2)n)
+ [n^2] #constants from the first expansion
+ [((1/3)n)^2 + ((2/3)n)^2] #constants from the second expansion
= T((1/3^3)n)
+ 3T((2/3^3)n)
+ 3T((2^2/3^3)n)
+ T((2^3/3^3)n)
+ [n^2]
+ [((1/3)n)^2 + ((2/3)n)^2]
+ [((1/3^2)n)^2 + 2 ((2/3^2)n)^2 + ((2^2/3^2)n)^2] #constants from 3rd expansion; the middle term is doubled because T((2/3^2)n) appears twice above
It's a bit hard to tell, but what seems to happen is that you get the binomial coefficients going for the Ts, where the xth expansion looks like this:
T(n) = sum((x choose i) * T(((2^i)/(3^x))n), i from 0 to x)
+ constants
At each step, the additional constants that are added at expansion x are the arguments to T from expansion x-1, squared, since they all end up getting squared thanks to the n^2. So all the new constants at a given expansion y are equal to:
NewConsts(y) = sum(((y - 1) choose i) * (((2^i)/(3^(y-1)))*n)^2, i from 0 to y - 1)
And all the constants at expansion x are equal to
n^2 + sum(NewConsts(y), y from 1 to x)
So, assuming all the above is correct, which I'm not 100% sure on, I guess you have to figure out when the constants stop mattering - that is, for what x the argument ((2^x)/(3^x))*n drops below 1 - and your answer is the sum of all of those constants...
It seems to be O(n^2) if I haven't missed anything...
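One way to see why the constants stop mattering: by the binomial theorem, NewConsts(y) collapses to n^2 (5/9)^(y-1), a geometric series, so the total stays within a constant factor of n^2. Mathematica confirms the collapse (writing m for y - 1):

Sum[Binomial[m, i] (2^i/3^m)^2, {i, 0, m}]
(* (5/9)^m, since Sum[Binomial[m, i] 4^i] is (1 + 4)^m = 5^m *)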
First of all, T grows monotonically (for the first several values you can check this manually; for the rest it follows by induction - if the function is monotonic on [1..10], the recurrence keeps it monotonic on [1..15], and so on).
T(n)=T(n/3)+T(2n/3)+n^2<=2T(2n/3)+n^2
T(n)<=n^2+2*(2n/3)^2+4*(4n/9)^2+...
=sum[k=0..log_{3/2}(n)]((8/9)^k*n^2)
=n^2*sum[k=0..log_{3/2}(n)](8/9)^k
<=n^2*sum[k=0..inf](8/9)^k
<=C*n^2
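The constant C is even explicit here; the infinite geometric series evaluates in Mathematica to:

Sum[(8/9)^k, {k, 0, Infinity}]  (* 9, so this argument gives T(n) <= 9 n^2 *)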
I am going through the above topic in CLRS (Cormen, page 834) and I got stuck at this point.
Can anybody please explain how the following expression,
A(x)=A^{[0]}(x^2) +xA^{[1]}(x^2)
follows from,
∑_{j=0}^{n-1} a_j x^j
Where,
A^{[0]}(x) = a_0 + a_2 x + a_4 x^2 + ... + a_{n-2} x^{n/2 - 1}
A^{[1]}(x) = a_1 + a_3 x + a_5 x^2 + ... + a_{n-1} x^{n/2 - 1}
The polynomial A(x) is defined as
A(x) = a_0 + a_1 x + a_2 x^2 + a_3 x^3 + ...
To start the divide-and-conquer strategy of polynomial multiplication by the FFT, CLRS introduces two new polynomials: one of the coefficients of the even-powers of x called A[0] and one of the coefficients of the odd-powers of x called A[1]
A[0](x) = a_0 + a_2 x + a_4 x^2 + ...
A[1](x) = a_1 + a_3 x + a_5 x^2 + ...
Now if we substitute x^2 into A[0] and A[1], we have
A[0](x^2) = a_0 + a_2 x^2 + a_4 x^4 + ...
A[1](x^2) = a_1 + a_3 x^2 + a_5 x^4 + ...
and if we multiply A[1](x^2) by x, we have
x A[1](x^2) = a_1 x + a_3 x^3 + a_5 x^5 + ...
Now if we add A[0](x^2) and x A[1](x^2), we have
A[0](x^2) + x A[1](x^2) = (a_0 + a_2 x^2 + a_4 x^4 + ...) + (a_1 x + a_3 x^3 + a_5 x^5 + ...)
= a_0 + a_1 x + a_2 x^2 + a_3 x^3 + ...
= A(x)
Q.E.D.
If you divvy the polynomial up into "odd exponents" and "even exponents", you'll find the annoying fact that the A[1] polynomial (the one with odd exponents) has, well, odd exponents! Even exponents are easier to work with for FFT. So, one can simply factor out a single "x" from all of the terms in A[1], and move it outside of the expression.
FFT likes working with even-exponented polynomials only. Thus, when you're dividing-and-conquering, you want to turn your A[1] expression into an "even-exponented" polynomial, and recurse on that, and then multiply-back-in that x. You will see that occur in the inner loop of the actual algorithm.
Edit: I realize that your confusion may stem from the fact that they're "passing in" (x^2) as the value in the polynomial. The "x" in A[1] and A[0] are different from the x in the (x^2) expression. You'll see how that must be, as while the original polynomial A goes up to exponent N, A[1] and A[0] both only go up to exponent (N/2).
I'm not going to answer your question because I feel that previous people have answered it. What I will do is try to explain the purpose of the FFT.
First, the FFT is a way to compute the convolution of two vectors. That is, suppose x = (x_0, ..., x_n) and y = (y_0, ..., y_n) are vectors; then the convolution of x and y is
\sum_{i=0}^{n} x_i y_{n-i}.
You will have to accept the fact that computing that value is EXTREMELY useful in a wide range of applications.
Now consider the following.
Suppose we construct two polynomials
A(z) = x0 + x1*z + x2*z^2 + ... + xn*z^n
B(z) = y0 + y1*z + y2*z^2 + ... + yn*z^n
then the multiplication is
AB(z) = A(z)B(z) = \sum_{i=0}^{2n} (\sum_{k=0}^{i} x_k y_{i-k}) z^i
where the inner sum is clearly a convolution of a different size for each value of i (coefficients beyond index n are taken as zero).
Now we can clearly compute the coefficients (convolutions) of AB in n^2 time by a brute force method.
However, we can also be much more clever. Consider the fact that any polynomial of degree n can be described uniquely by n+1 points. That is, given n+1 points we can construct the unique polynomial of degree n that goes through all n+1 points. Furthermore, consider 2 polynomials each given as n+1 points. You can compute their product by simply multiplying the y-values pointwise and keeping the x-values, giving their product in point form (strictly, since the product has degree 2n, both polynomials should be evaluated at 2n+1 common points). Now, given a polynomial in point form, you can find the unique polynomial that describes it in O(n) time (actually I'm not sure about this; it may be O(n log n) time, but certainly not more).
This is exactly what the FFT does. However, the points it picks to describe the polynomials A and B are VERY carefully chosen. Some of the points are indeed complex, because it just so happens that you can save time in evaluating a polynomial by considering such points. That is, if you were to choose arbitrary real points instead of the carefully chosen points that the FFT uses, you would need O(n^2) time to evaluate the n+1 points. With the FFT's points you only need O(n log n) time. And that's all there is to the FFT. Oh, and there is a unique side effect to the way that the FFT chooses points: given an n-th degree polynomial, you must choose 2^m points, where m is chosen such that 2^m is the smallest power of 2 greater than or equal to n.
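As a concrete check that polynomial multiplication really is coefficient convolution, here is a small Mathematica sketch (ListConvolve with the {1, -1} overlap specification and zero padding gives the full linear convolution):

a = {1, 2, 3};  (* coefficients of 1 + 2 z + 3 z^2, constant term first *)
b = {4, 5, 6};  (* coefficients of 4 + 5 z + 6 z^2 *)
ListConvolve[a, b, {1, -1}, 0]  (* {4, 13, 28, 27, 18} *)
Expand[(1 + 2 z + 3 z^2) (4 + 5 z + 6 z^2)]  (* 4 + 13 z + 28 z^2 + 27 z^3 + 18 z^4 *)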
A(x) is broken into an even part (a polynomial in x^2) and an odd part (x times a polynomial in x^2).
for example if A(x) = 21 x^5 + 17 x^4 + 33 x^3 + 4 x^2 + 8 x + 7
then A0 = 17 y^2 + 4 y + 7
so that A0(x^2) = 17 x^4 + 4 x^2 + 7
and A1 = 21 y^2 + 33 y + 8
so that A1(x^2) = 21 x^4 + 33 x^2 + 8
or x * A1(x^2) = 21 x^5 + 33 x^3 + 8 x
clearly, in this case, A(x) = A0(x^2) + x A1(x^2) = even + odd parts
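If you want to replicate this split programmatically, here is a sketch in Mathematica: CoefficientList returns {a_0, a_1, ..., a_n}, so the even and odd coefficient lists are just stride-2 slices:

coeffs = CoefficientList[21 x^5 + 17 x^4 + 33 x^3 + 4 x^2 + 8 x + 7, x];
even = coeffs[[1 ;; ;; 2]];  (* {7, 4, 17}, i.e. a0, a2, a4 *)
odd = coeffs[[2 ;; ;; 2]];   (* {8, 33, 21}, i.e. a1, a3, a5 *)
A0[y_] := even . y^Range[0, Length[even] - 1]
A1[y_] := odd . y^Range[0, Length[odd] - 1]
Expand[A0[x^2] + x A1[x^2]]  (* gives back 7 + 8 x + 4 x^2 + 33 x^3 + 17 x^4 + 21 x^5 *)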