I want to find
lim(x->4-) ((x-4)/(x^2-8*x+16))
Since x^2-8*x+16 = (x-4)^2, the expression simplifies to 1/(x-4), so I typed this into the Yacas interface:
Limit(x,4,Right) (1/(x-4))
Yacas answered:
Infinity
But that is wrong; the answer is -Infinity. Am I just misunderstanding the dir argument of the Limit() function?
You're just mixing up your notation. Think back to calculus: the two-sided limit
lim x goes to 4 of f(x)
is built from two one-sided limits:
lim x goes to 4- of f(x)
lim x goes to 4+ of f(x)
If both of those limits exist and are equal, then lim x goes to 4 exists. However, you have the directions backwards: lim as x goes to 4- means from the left, and lim as x goes to 4+ means from the right.
Limit(x,4,Right) (1/(x-4)) == Infinity (lim x goes to 4+)
Limit(x,4,Left) (1/(x-4)) == -Infinity (lim x goes to 4-)
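For the record, the sign analysis behind both results:
lim x goes to 4- of 1/(x-4): x - 4 is small and negative, so 1/(x-4) goes to -Infinity
lim x goes to 4+ of 1/(x-4): x - 4 is small and positive, so 1/(x-4) goes to +Infinity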
I just started studying sorting algorithms, so I need help solving problems on big Omega, $\Omega$.
How can I prove that $n! = \Omega(n^{100})$?
I know that we write $f(x) = \Omega(g(x))$ if $g(x) = O(f(x))$. This means that there is a constant $c>0$ and a value $x_0$ such that $|f(x)| \ge c\,g(x)$ whenever $x>x_0$.
Hence from the definition above, I can write
$$n^{100} = O(n!)$$
We can find a constant $c$ and a value $n_0$ such that $n^{100} \le c\,n!$ for all $n>n_0$.
We could take $c=1$ and $n_0=1$.
I don't know if I am correct. Please, how should I continue and complete the proof?
The meaning of $n! = \Omega(n^{100})$ is that there is some $c$ and some $n_0$ such that $n! \ge c\,n^{100}$ for all $n \ge n_0$. Your choice of $c = n_0 = 1$ says that $3!$ is at least $3^{100}$, which it clearly isn't.
Think about how fast $n!$ grows: $(n+1)!$ is $n+1$ times bigger than $n!$.
Think about how fast $n^{100}$ grows: $(n+1)^{100}$ is $((n+1)/n)^{100}$ times bigger than $n^{100}$. For large $n$, that factor gets closer and closer to $1$.
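If you want explicit constants, here is one standard route (a sketch; the cutoff $n_0 = 256$ is just one workable choice, not the smallest):
$$n! \;\ge\; n(n-1)\cdots\left(\left\lceil \tfrac n2\right\rceil + 1\right) \;\ge\; \left(\frac n2\right)^{\lfloor n/2\rfloor},$$
and $\left(\frac n2\right)^{\lfloor n/2\rfloor} \ge n^{100}$ as soon as $\frac{n-1}{2}\ln\frac n2 \ge 100\ln n$, which holds for all $n \ge 256$. So $c = 1$ and $n_0 = 256$ witness $n! = \Omega(n^{100})$.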
Assume $f(x)$ goes to infinity as $x$ tends to infinity and $a,b>0$. Find the $f(x)$ that yields the lowest order for
$$g(x) = \frac{ax}{b + \ln(1+f(x))} + b + \ln(1+f(x))$$
as $x$ tends to infinity.
By order I mean Big O and Little o notation.
I can only solve it roughly.
My solution: We can say $\ln(1+f(x))$ is approximately equal to $\ln(f(x))$ as $x$ goes to infinity. Then I have to minimize the order of
$$b + \ln f(x) + \frac{ax}{b + \ln f(x)}.$$
Since for any $c>0$, $y + c/y$ is minimized when $y=\sqrt{c}$, $b + \ln f(x) = \sqrt{ax}$ is the answer. Equivalently, $f(x)=e^{\sqrt{ax}-b}$, and the lowest order for $g(x)$ is $2\sqrt{ax}$.
Can you help me obtain a rigorous answer?
The rigorous way to minimize (I should say extremize) a function of another function is to use the Euler-Lagrange relation; since $g$ contains no derivative of $f$, it reduces to $\partial g/\partial f = 0$. Thus:
$$\frac{\partial g}{\partial f} = \frac{1}{1+f}\left(1 - \frac{ax}{\left(b+\ln(1+f)\right)^2}\right) = 0 \quad\Longrightarrow\quad b + \ln(1+f) = \sqrt{ax}.$$
Taylor expansion:
$$\ln(1+f) = \ln f + \frac{1}{f} - \frac{1}{2f^2} + \cdots \qquad (f \to \infty).$$
If we only consider up to "constant" terms:
$$b + \ln f = \sqrt{ax} \quad\Longrightarrow\quad f(x) = e^{\sqrt{ax}-b},$$
which is of course the result you obtained.
Next, linear terms:
$$b + \ln f + \frac{1}{f} = \sqrt{ax}.$$
We can't solve this equation analytically; but we can explore the effect of a perturbation in the function f(x) (i.e. a small change in parameter to the previous solution). We can obviously ignore any linear changes to f, but we can add a positive multiplicative factor A, replacing f by Af:
$$b + \ln A + \ln f + \frac{1}{Af} = \sqrt{ax} \quad\Longrightarrow\quad \ln A = -\frac{1}{Af}.$$
sqrt(ax) and Af are obviously both positive, so the RHS has a negative sign. This means that ln(A) < 0, and thus A < 1, i.e. the new perturbed function gives a (slightly) tighter bound. Since the RHS must be vanishingly small (1/f), A must not be very much smaller than 1.
Going further, we can add another perturbation B to the exponent of f, replacing f by $Af^{1+B}$:
$$\ln A + B\ln f = -\frac{1}{Af^{1+B}}.$$
Since ln(A) and the RHS are both vanishingly small, the B-term on the LHS must be even smaller for the sign to be consistent.
So we can conclude that (1) A is very close to 1, (2) B is much smaller than 1, i.e. the result you obtained is in fact a very good upper bound.
The above also leads to the possibility of even tighter bounds for higher powers of f.
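As a quick numerical sanity check of the zeroth-order result (a sketch with arbitrarily chosen parameters a=4, b=1, x=25; at large x the optimal factor A differs from 1 only by the vanishing 1/f correction above):

import math

a, b, x = 4.0, 1.0, 25.0                    # illustrative values, not from the question

def g(f):
    y = b + math.log(1 + f)                 # y = b + ln(1 + f)
    return a * x / y + y

f_star = math.exp(math.sqrt(a * x) - b)     # candidate minimizer e^(sqrt(ax) - b)

# Scan multiplicative perturbations A*f_star: A = 1 should be (nearly) optimal,
# and every value of g should sit just above the predicted minimum 2*sqrt(ax) = 20.
for A in (0.5, 0.9, 1.0, 1.1, 2.0):
    print(A, g(A * f_star))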
I know that log n! gives a complexity of O(n log n), but how do I expand (log n)! and (log n!)!? The second one may be simplified to (n log n)!. Please clarify this.
You could estimate upper and lower bounds for (log(n))! using the identity
$$(\log n)! = \prod_{k=1}^{\log n} k$$
together with product estimations.
For an upper bound:
$$\prod_{k=1}^{\log n} k \;\le\; \prod_{k=1}^{\log n} \log n \;=\; (\log n)^{\log n}$$
For a lower bound:
$$\prod_{k=1}^{\log n} k \;\ge\; \prod_{k=\log(n)/2}^{\log n} k \;\ge\; \prod_{k=\log(n)/2}^{\log n} \frac{\log n}{2} \;=\; \left(\frac{\log n}{2}\right)^{\log(n)/2}$$
Combined you will get:
$$\left(\frac{\log n}{2}\right)^{\log(n)/2} \;\le\; (\log n)! \;\le\; (\log n)^{\log n}$$
So at least:
$$(\log n)! = O\!\left((\log n)^{\log n}\right)$$
Obviously, the (in)equations are somewhat 'odd' due to the non-integer index boundaries of the products.
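A quick numerical check of the combined bound (a sketch; Python's math.lgamma(z+1) computes ln(z!), and log is taken as the natural log):

import math

def log_fact(z):                        # ln(z!)
    return math.lgamma(z + 1)

for n in (1e3, 1e9, 1e27):
    L = math.log(n)                     # log(n)
    lower = (L / 2) * math.log(L / 2)   # ln((log(n)/2)^(log(n)/2))
    upper = L * math.log(L)             # ln((log(n))^(log(n)))
    print(n, lower <= log_fact(L) <= upper)   # True for each n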
Update:
The bound given by hwlau using the Stirling approximation is lower (by sqrt(log(n))/n) and should be tight.
Update: No, you cannot use (N ln N)! in your second formula. The reason is explained below using the first case.
With the log version of the Stirling approximation, we have
ln(z!) = (z+1/2) ln z - z + O(1)...
Note that the extra z is kept here; the reason will be obvious soon. Now if we let x = ln N,
(ln N)! = x! = exp(ln x!)
~ exp((x+1/2) ln x - x) = x^(x+1/2) exp(-x)
= (ln N)^((ln N)+1/2) / N
The extra term we kept turns out to be an inverse of N; it definitely has an effect on the complexity, since we cannot simply throw away the exp of something. If we denote by g(N) the approximation above and f(N) = (ln N)!, then lim f(N)/g(N) = sqrt(2 pi) < inf, so f = O(g).
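You can watch that limit converge numerically (a sketch; math.lgamma(x+1) gives ln(x!)):

import math

for N in (1e3, 1e6, 1e12, 1e24):
    x = math.log(N)                     # x = ln N
    ln_f = math.lgamma(x + 1)           # ln((ln N)!)
    ln_g = (x + 0.5) * math.log(x) - x  # ln(x^(x+1/2) exp(-x))
    print(N, math.exp(ln_f - ln_g))     # approaches sqrt(2 pi) ~ 2.5066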
For (ln N!)!, it is a bit more complicated. I used Mathematica to check the limit, and it suggests that the expansion
ln(z!) ~ (z+1/2) ln z - z + ln(sqrt(2 pi))
is enough. I don't have a general rule for when we can stop, and in general it may not be possible to use only finitely many terms. But in this case, we can.
In case you only need a loose bound for the first formula, you can actually throw away the -z term, because (z+1/2) ln z > (z+1/2) ln z - z.
f(x) = (exp(x)-1)/x;
g(x) = (exp(x)-1)/log(exp(x));
Analytically, f(x) = g(x) for all x.
When x approaches 0, both f(x) and g(x) approach 1.
% Compare f and g for x = 10^-1, ..., 10^-15
for k = 1:15
    x(k) = 10^(-k);
    f(k) = (exp(x(k)) - 1) / x(k);
    De(k) = log(exp(x(k)));            % the denominator used by g
    g(k) = (exp(x(k)) - 1) / De(k);
end
% Plot f (red) and g (blue) against k
plot(1:15, f, 'r', 1:15, g, 'b');
However, g(x) works better than f(x): numerically, f(x) actually diverges from the true value as x approaches 0. Why is g(x) better than f(x)?
It's hard not to give the answer away here, so I'll only point to a few hints:
- Look at De... I mean really look at it. Note how, as x gets smaller, De is no longer equal to x.
- Now look at exp(x) - 1. Notice a pattern?
- Ask yourself: what is eps(1), and why does it matter?
- In MATLAB, exp(10^-16) - 1 == 0. Why?
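If you want to experiment with these hints outside MATLAB, the same IEEE-double behaviour shows up in Python (a sketch; math.expm1 plays the role of a correctly rounded exp(x) - 1):

import math

x = 1e-16
print(math.exp(x) - 1)         # 0.0: exp(x) rounds to exactly 1.0 in doubles
print(math.log(math.exp(x)))   # 0.0: so De is no longer equal to x
print(math.expm1(x))           # 1e-16: the cancellation-free version
print(2.0 ** -52)              # ulp of 1.0, i.e. eps(1) in MATLAB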
Mary got a magic ball for her birthday. The ball, when thrown from some height, bounces to double that height. Mary threw the ball from her balcony, which is at height x above the ground. Help her calculate how many bounces are needed for the ball to reach height w.
Input: One integer z (1 ≤ z ≤ 10^6), the number of test cases. For every test, integers x and w (1 ≤ x ≤ 10^9, 0 ≤ w ≤ 10^9).
Output: For every case, one integer equal to the number of bounces needed for the ball to reach w should be printed.
OK, so, though it looks unspeakably easy, I can't find a more efficient way to solve it than the simple, dumb, brute-force approach: a loop multiplying x by 2 until it is at least w. For a maximal test that will take a horrific amount of time, of course. Then I thought of reusing previous cases, which saves quite a bit of time, provided we can retrieve the closest smaller result from the previous cases quickly (O(1)?); however, I can't implement that (and don't know if it's possible). How should this be done?
You are essentially trying to solve the problem
2^i * x = w
and then finding the smallest integer greater than or equal to i. Solving, we get
2^i = w / x
i = log2(w / x)
So one approach would be to compute this value explicitly and then take the ceiling. Of course, you'd have to watch out for numerical instability when doing this. For example, if you are using floats to encode the values and let w = 8,000,001 and x = 1,000,000, you will end up getting the wrong answer (3 instead of 4). If you use doubles to hold the value, you will also get the wrong answer when x = 1 and w = 536,870,912 (reporting 30 instead of 29, since 1 * 2^29 = 536,870,912, but due to inaccuracies in the double the answer is erroneously rounded up to 30). It looks like we'll have to switch to a different approach.
Let's revisit your initial solution: just doubling the value of x until it exceeds w should be perfectly fine here. The maximum number of times you can double x until it reaches w is given by log2(w/x), and since w/x is at most one billion, this iterates at most log2 10^9 times, which is about thirty. Doing thirty iterations of multiply-by-two is going to be extremely fast. More generally, if the upper bound of w/x is U, this takes at most O(log U) time to complete. If you have k (x, w) pairs to check, this takes time O(k log U).
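A direct rendering of that loop (a sketch in Python; it returns 0 when x already meets w, leaving the at-least-one-bounce convention to the problem statement):

def bounces(x, w):
    # Smallest i such that x * 2**i >= w, found by repeated doubling.
    i = 0
    while x < w:
        x *= 2
        i += 1
    return i   # at most ~30 iterations when w/x <= 10**9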
If you're not satisfied with this, though, there's another very fast algorithm you could try. Essentially, you want to compute log2(w/x). You could start off by creating a table that lists all powers of two along with their logarithms. For example, your table might look like
T[1] = 0
T[2] = 1
T[4] = 2
T[8] = 3
...
You could then compute w/x and do a binary search to figure out which range the value lies in. The upper bound of this range is then the number of times the ball must bounce. This means that if you have k different pairs to inspect, and if you know that the maximum ratio w/x is U, creating this table takes O(log U) time and each query takes time proportional to the log of the size of the table, which is O(log log U). The overall runtime is then O(log U + k log log U), which is extremely good. Given that you're dealing with at most one million problem instances and that U is one billion, k log log U is just under five million, and log U is about thirty.
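Here is one way the table-plus-binary-search idea could look (a sketch in Python; bisect supplies the O(log log U) lookup, and ceiling division keeps everything in exact integer arithmetic):

from bisect import bisect_left

U = 10**9                          # maximum possible ratio w/x in this problem
powers = [1]                       # powers[i] = 2**i, the table T above
while powers[-1] < U:
    powers.append(powers[-1] * 2)

def bounces(x, w):
    q = -(-w // x)                 # ceil(w / x) without floating point
    return bisect_left(powers, q)  # index of the first power of two >= q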
Finally, if you're willing to do some perversely awful stuff with bitwise hackery, since you know for a fact that w/x fits into a 32-bit word, you can use this bitwise trickery with IEEE doubles to compute the logarithm in a very small number of machine operations. This would probably be faster than the above two approaches, though I can't necessarily guarantee it.
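A different route with the same flavor uses integer bit operations instead of the IEEE-double trick (a sketch in Python; int.bit_length is a single machine-level operation on word-sized values):

def bounces(x, w):
    if w <= x:
        return 0                      # no doubling needed
    q = -(-w // x)                    # ceil(w / x), exact in integers
    return (q - 1).bit_length()       # ceil(log2(q)): smallest i with 2**i >= q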
Hope this helps!
Use this formula to calculate the number of bounces for each test case.
ceil( log(w/x) / log(2) )
This is pseudo-code, but it should be pretty simple to convert it to any language. Just replace log with a function that finds the logarithm of a number in some specific base and replace ceil with a function that rounds up a given decimal value to the next int above it (for example, ceil(2.3) = 3).
See http://www.purplemath.com/modules/solvexpo2.htm for why this works (in your case, you're trying to solve the equation x * 2 ^ n = w for an integer n, and you should start by dividing both sides by x).
EDIT:
Before using this method, you should check whether w > x, and return 1 if it isn't (the ball always has to bounce at least once).
Also, it has been pointed out that inaccuracies in floating-point values may cause this method to sometimes fail. You can work around this by checking whether x * 2 ^ (n-1) >= w, where n is the result of the equation above, and if so returning (n - 1) instead of n.
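Putting the formula and both edits together (a sketch in Python; math.ceil and math.log stand in for the pseudo-code's ceil and log):

import math

def bounces(x, w):
    if w <= x:
        return 1                      # per the edit above: at least one bounce
    n = math.ceil(math.log(w / x) / math.log(2))
    if x * 2 ** (n - 1) >= w:         # undo a spurious round-up from float error
        n -= 1
    return n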