Simple algebra question, for a program I'm writing - algorithm

How would I solve:
-x^3 - x - 4 = 0
You can't use the quadratic formula, because it's to the 3rd power, right?
I know it should come out to about -1.3788, but I'm not sure how I would derive that.
I started with:
f(x) = x + (4/(x^2 + 1)).
Solving for 0, moving the x over to the other side, multiplying by (x^2 + 1) on both sides, I end up with:
-x(x^2 + 1) = 4,
or
-x^3 - x - 4 = 0.

Finding roots of equations by using Newton's method or Fixed point iteration

Algebraically, you want to use Cardano's method:
http://www.math.ucdavis.edu/~kkreith/tutorials/sample.lesson/cardano.html
Using this method, it's about as easy to solve as the quadratic.
Actually, this is possibly clearer:
http://en.wikipedia.org/wiki/Cubic_function#Summary
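For illustration, here is a minimal Python sketch of Cardano's formula applied to the depressed cubic from the question, x^3 + p*x + q = 0 with p = 1, q = 4 (the cbrt helper is mine, added so negative arguments get a real cube root):

import math

def cbrt(x):
    # real cube root that also handles negative arguments
    return math.copysign(abs(x) ** (1.0 / 3.0), x)

p, q = 1.0, 4.0
disc = (q / 2) ** 2 + (p / 3) ** 3   # > 0 here, so one real root and two complex ones
u = cbrt(-q / 2 + math.sqrt(disc))
v = cbrt(-q / 2 - math.sqrt(disc))
print(u + v)   # ~ -1.3788, the real root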

Find a root by using Newton iteration (see link below; a short sketch follows the links). Then divide the polynomial by (x - TheRootYouFound). The result will be a quadratic that you can plug into the quadratic root finder of your choice.
About Newton Iteration:
http://en.wikipedia.org/wiki/Newton%27s_method
About Polynomial Division
http://en.wikipedia.org/wiki/Polynomial_long_division
This article may be interesting for you as well. It covers more robust ways to solve your problem at the expense of some additional complexity.
http://en.wikipedia.org/wiki/Root-finding_algorithm
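For concreteness, here is a minimal Newton-iteration sketch in Python for the cubic in question, x^3 + x + 4 = 0 (the helper names are my own):

def newton(f, df, x0, tol=1e-12, max_iter=100):
    # classic Newton's method: repeatedly step x -> x - f(x)/f'(x)
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

f = lambda x: x**3 + x + 4
df = lambda x: 3 * x**2 + 1
root = newton(f, df, x0=-1.0)
print(root)   # ~ -1.3788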

It's a cubic function. You're correct, the quadratic formula does not apply.
You gave one root, but in general there are three.
How did you arrive at that single value? Trial and error? That's legitimate. You don't need to "derive" anything.
x^3 + a2*x^2 + a1*x + a0 = 0 can be written as (x-x1)*(x-x2)*(x-x3) = 0, where x1, x2, and x3 are the three roots. If you know that the root you cited is correct, you can divide it out and leave (x-x2)*(x-x3) = 0, which is a quadratic that you can apply the usual techniques to.
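As a rough sketch of that deflation step (using NumPy's polynomial division; the root value is the approximate one cited in the question):

import numpy as np

coeffs = [1, 0, 1, 4]                    # x^3 + 0*x^2 + x + 4
r = -1.3788                              # approximate real root
quad, rem = np.polydiv(coeffs, [1, -r])  # divide out (x - r)
a, b, c = quad                           # remaining quadratic a*x^2 + b*x + c
disc = b**2 - 4*a*c                      # negative here, so the other two roots are complex
print(quad, rem, disc)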

This may not help from a programming viewpoint, but from a math perspective...
Note that in this particular cubic function you need to consider imaginary numbers, because when x = i you have a denominator that is zero (in your original equation). Also, generally speaking, you shouldn't multiply or divide by variables (adding and subtracting is fine, though) when you move them to the other side of the equation, because you'll generally forget about the case where the term you multiplied or divided by is zero. Those answers need to be excluded from the solution set.
x = i is an example of an excluded solution in the above cubic. You need to evaluate your excluded solutions before you manipulate the equation at all.


Working with small probabilities, via logs

Source: Google Code Jam. https://code.google.com/codejam/contest/10224486/dashboard#s=a&a=1
We're asked to calculate Prob(K successes from N trials) where each of the N trials has a known success probability of p_n.
Some analysis and thoughts on the problem are given after the Code Jam.
They observe that evaluating all possible outcomes of your N trials would take time exponential in N, so instead they provide a nice "dynamic programming" style solution that's O(N^2).
Let P(p#q) = Prob(p Successes after the first q Trials)
Then observe that:
P(p#q) = Prob(qth trial succeeds)*P(p-1#q-1) + Prob(qth trial fails)*P(p#q-1)
Now we can build up a table of P(i#j), where i <= j, for j = 1...N
That's all lovely - I follow all of this and could implement it.
Then as the last comment, they say:
In practice, in problems like this, one should store the logarithms of
probabilities instead of the actual values, which can become small
enough for floating-point precision errors to matter.
I think I broadly understand the point they're trying to make, but I specifically can't figure out how to use this suggestion.
Taking the above equation, and substituting in some lettered variables:
P = A*B + C*D
If we want to work in Log Space, then we have:
Log(P) = Log(A*B + C*D),
where we have recursively pre-computed Log(B) and Log(D), and A & C are known, easily-handled decimals.
But I don't see any way to calculate Log(P) without just doing e^(Log(B)), etc., which feels like it would defeat the point of working in log space?
Does anyone understand in better detail what I'm supposed to be doing?
Starting from the initial relation:
P = A⋅B + C⋅D
Due to its symmetry we can assume that B is larger than D, without loss of generality.
The following processing is useful:
log(P) = log(A⋅B + C⋅D) = log(A⋅e^log(B) + C⋅e^log(D)) = log(e^log(B)⋅(A + C⋅e^(log(D) - log(B))))
log(P) = log(B) + log(A + C⋅e^(log(D) - log(B))).
This is useful because, in this case, log(B) and log(D) are both negative numbers (logarithms of some probabilities). It was assumed that B is larger than D, thus its log is closer to zero. Therefore log(D) - log(B) is still negative, but not as negative as log(D).
So now, instead of needing to perform exponentiation of log(B) and log(D) separately, we only need to perform exponentiation of log(D) - log(B), which is a mildly negative number. So the above processing leads to better numerical behavior than using logarithms and applying exponentiation in the trivial way, or, equivalently, than not using logarithms at all.
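Here is a small Python sketch combining the recurrence from the question with the log-space trick from this answer (the function and variable names are mine; it is not the Code Jam reference solution):

import math

def log_prob_k_successes(p_succ, K):
    # logP[i] = log Prob(i successes so far); start with zero trials
    N = len(p_succ)
    NEG_INF = float("-inf")
    logP = [0.0] + [NEG_INF] * N
    for q, p in enumerate(p_succ, start=1):
        logA = math.log(p) if p > 0 else NEG_INF       # trial q succeeds
        logC = math.log(1 - p) if p < 1 else NEG_INF   # trial q fails
        new = [NEG_INF] * (N + 1)
        for i in range(q + 1):
            terms = []
            if i > 0 and logA > NEG_INF and logP[i - 1] > NEG_INF:
                terms.append(logA + logP[i - 1])       # log(A*B)
            if logC > NEG_INF and logP[i] > NEG_INF:
                terms.append(logC + logP[i])           # log(C*D)
            if terms:
                m = max(terms)                         # factor out the larger term, as above
                new[i] = m + math.log(sum(math.exp(t - m) for t in terms))
        logP = new
    return logP[K]

# Example: three fair coins, Prob(exactly 2 heads) = 3/8
print(math.exp(log_prob_k_successes([0.5, 0.5, 0.5], 2)))   # ~0.375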

Generating a mathematical model of a pattern

Does there exist some algorithm that allows for the creation of a mathematical model given an inclusive set?
I'm not sure I'm asking that correctly... Let me try again...
Given some input set...
int Set[] = { 1, 4, 9, 16, 25, 36 };
Does there exist an algorithm that would be able to deduce the pattern evident in the set? In this case being...
Set[x] = x^2
The only way I can think of doing something like this is some GA where the fitness is how closely the generated model matches the input set.
Edit:
I should add that my problem domain implies that the set is inclusive. Meaning, I am finding the closest possible function for the set and not using the function to extrapolate beyond the set...
The problem of curve fitting might be a reasonable place to start looking. I'm not sure if this is exactly what you're looking for - it won't really identify the pattern so much as just produce a function which follows the pattern as closely as possible.
As others have mentioned, for a simple set there can easily be infinitely many such functions, so something like this may be what you want, rather than exactly what you have described in your question.
Wikipedia seems to indicate that the Gauss-Newton algorithm or the Levenberg–Marquardt algorithm might be a good place to begin your research.
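As a small illustration of that kind of nonlinear least-squares fitting (assuming SciPy is available; the model form a*x**b is my own choice, not something implied by the question):

import numpy as np
from scipy.optimize import curve_fit

def model(x, a, b):
    return a * x**b

xs = np.arange(1, 7)
ys = np.array([1, 4, 9, 16, 25, 36])
popt, _ = curve_fit(model, xs, ys, p0=[1.0, 1.0])   # Levenberg-Marquardt under the hood
print(popt)   # ~ [1.0, 2.0], i.e. y = x**2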
A mathematical argument explaining why, in general, this is impossible:
There are only countably many computer programs that can be written at all.
There are uncountably many infinite sequences of integers.
Therefore, there are uncountably many infinite sequences of integers that no possible computer program can generate.
Accordingly, this is impossible in the general case. Sorry!
Hope this helps!
If you want to know if the given data fits some polynomial function, you compute successive differences until you reach a constant. The number of differences to reach the constant is the degree of the polynomial.
x | 1 2 3 4
y | 1 4 9 16
y' | 3 5 7
y" | 2 2
Since y'' is 2, y' is 2x + C1, and thus y is x^2 + C1*x + C2. C1 is 0, since the first difference 3 sits at the midpoint x = 1.5 and 2×1.5 = 3. C2 is 0 because 1^2 = 1. So, we have y = x^2.
So, the algorithm is:
Take successive differences.
If it does not converge to a constant, either resort to curve fitting, or report the data is insufficient to determine a polynomial.
If it does converge to a constant, iteratively integrate polynomial expression and evaluate the trailing constant until the degree is achieved.
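A rough Python sketch of that procedure (detect the degree by successive differences, then recover the coefficients with an ordinary polynomial fit; helper names are mine):

import numpy as np

def polynomial_degree(values, max_degree=10):
    # returns the degree if successive differences become constant, else None
    diffs = list(values)
    for degree in range(max_degree + 1):
        if len(set(diffs)) == 1:
            return degree
        if len(diffs) < 2:
            return None
        diffs = [b - a for a, b in zip(diffs, diffs[1:])]
    return None

data = [1, 4, 9, 16, 25, 36]
deg = polynomial_degree(data)             # 2 for this set
if deg is not None:
    xs = np.arange(1, len(data) + 1)
    coeffs = np.polyfit(xs, data, deg)    # ~ [1, 0, 0], i.e. y = x^2
    print(deg, np.round(coeffs, 6))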

Most efficient algorithm to compute a common numerator of a sum of fractions

I'm pretty sure that this is the right site for this question, but feel free to move it to some other stackexchange site if it fits there better.
Suppose you have a sum of fractions a1/d1 + a2/d2 + … + an/dn. You want to compute a common numerator and denominator, i.e., rewrite it as p/q. We have the formula
p = a1*d2*…*dn + d1*a2*d3*…*dn + … + d1*d2*…d(n-1)*an
q = d1*d2*…*dn.
What is the most efficient way to compute these things, in particular, p? You can see that if you compute it naïvely, i.e., using the formula I gave above, you compute a lot of redundant things. For example, you will compute d1*d2 n-1 times.
My first thought was to iteratively compute d1*d2, d1*d2*d3, … and dn*d(n-1), dn*d(n-1)*d(n-2), … but even this is inefficient, because you will end up computing multiplications in the "middle" twice (e.g., if n is large enough, you will compute d3*d4 twice).
I'm sure this problem could be expressed somehow using maybe some graph theory or combinatorics, but I haven't studied enough of that stuff to have a good feel for it.
And one note: I don't care about cancelation, just the most efficient way to multiply things.
UPDATE:
I should have known that people on stackoverflow would be assuming that these were numbers, but I've been so used to my use case that I forgot to mention this.
We cannot just "divide" out dn from each term. The use case here is a symbolic system. Actually, I am trying to fix a function called .as_numer_denom() in the SymPy computer algebra system which presently computes this the naïve way. See the corresponding SymPy issue.
Dividing out things has some problems, which I would like to avoid. First, there is no guarantee that things will cancel. This is because mathematically, (a*b)**n != a**n*b**n in general (it holds if a and b are positive, but e.g. if a == b == -1 and n == 1/2, you get (a*b)**n == 1**(1/2) == 1 but (-1)**(1/2)*(-1)**(1/2) == I*I == -1). So I don't think it's a good idea to assume that dividing by dn will cancel it in the expression (this may actually be unfounded; I'd need to check what the code does).
Second, I'd also like to apply this algorithm to computing the sum of rational functions. In this case, the terms would automatically be multiplied together into a single polynomial, and "dividing" out each dn would involve applying the polynomial division algorithm. You can see that in this case, you really do want to compute the most efficient multiplication in the first place.
UPDATE 2:
I think my fears for cancelation of symbolic terms may be unfounded. SymPy does not cancel things like x**n*x**(m - n) automatically, but I think that any exponents that would combine through multiplication would also combine through division, so powers should be canceling.
There is an issue with constants automatically distributing across additions, like:
In [13]: 2*(x + y)*z*(S(1)/2)
Out[13]:
z⋅(2⋅x + 2⋅y)
─────────────
2
But this is first a bug and second could never be a problem (I think) because 1/2 would be split into 1 and 2 by the algorithm that gets the numerator and denominator of each term.
Nonetheless, I still want to know how to do this without "dividing out" di from each term, so that I can have an efficient algorithm for summing rational functions.
Instead of adding up n quotients in one go I would use pairwise addition of quotients.
If things cancel out in partial sums then the numbers or polynomials stay smaller, which makes computation faster.
You avoid the problem of computing the same product multiple times.
You could try to order the additions in a certain way, to make canceling more likely (maybe add quotients with small denominators first?), but I don't know if this would be worthwhile.
If you start from scratch this is simpler to implement, though I'm not sure it fits as a replacement of the problematic routine in SymPy.
Edit: To make it more explicit, I propose to compute a1/d1 + a2/d2 + … + an/dn as (…(a1/d1 + a2/d2) + … ) + an/dn.
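A minimal sketch of that pairwise, left-to-right summation using Python's exact rationals (Fraction reduces each partial sum, which is where the cancellation happens):

from fractions import Fraction
from functools import reduce

nums = [1, 1, 1]
dens = [2, 3, 4]
total = reduce(lambda acc, ad: acc + Fraction(*ad), zip(nums, dens), Fraction(0))
print(total.numerator, total.denominator)   # 13 12, i.e. 1/2 + 1/3 + 1/4 = 13/12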
Compute two new arrays:
The first contains the partial products from the left: l[0] = 1, l[i] = l[i-1] * d[i], so that l[i] = d[1]*…*d[i].
The second contains the partial products from the right: r[n+1] = 1, r[i] = d[i] * r[i+1], so that r[i] = d[i]*…*d[n].
In both cases, 1 is the multiplicative identity of whatever ring you are working in.
Then each of your terms on the top is t[i] = l[i-1] * a[i] * r[i+1]
This assumes multiplication is associative, but it need not be commutative.
As a first optimization, you don't actually have to create r as an array: you can do a first pass to calculate all the l values, and accumulate the r values during a second (backward) pass to calculate the summands. No need to actually store the r values since you use each one once, in order.
In your question you say that this computes d3*d4 twice, but it doesn't. It does multiply two different values by d4 (one a right-multiplication and the other a left-multiplication), but that's not exactly a repeated operation. Anyway, the total number of multiplications is about 4*n, vs. 2*n multiplications and n divisions for the other approach that doesn't work in non-commutative multiplication or non-field rings.
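Here is a short Python sketch of the left/right partial-product idea (0-based indexing here, unlike the 1-based prose above; it only assumes multiplication is associative, so it also works for symbolic or non-commutative elements):

def numer_denom_of_sum(nums, dens):
    n = len(dens)
    left = [1] * (n + 1)              # left[i] = d[0]*...*d[i-1]
    for i in range(n):
        left[i + 1] = left[i] * dens[i]
    p, right = 0, 1                   # accumulate the right products on the fly
    for i in range(n - 1, -1, -1):
        p += left[i] * nums[i] * right
        right = dens[i] * right
    return p, left[n]

print(numer_denom_of_sum([1, 1, 1], [2, 3, 4]))   # (26, 24), i.e. 13/12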
If you want to compute p in the above expression, one way to do this would be to multiply together all of the denominators (in O(n), where n is the number of fractions), letting this value be D. Then, iterate across all of the fractions and for each fraction with numerator ai and denominator di, compute ai * D / di. This last term is equal to the product of the numerator of the fraction and all of the denominators other than its own. Each of these terms can be computed in O(1) time (assuming you're using hardware multiplication, otherwise it might take longer), and you can sum them all up in O(n) time.
This gives an O(n)-time algorithm for computing the numerator and denominator of the new fraction.
It was also pointed out to me that you could manually sift out common denominators and combine those trivially without multiplication.
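For comparison, a sketch of this divide-out approach for plain integers (as the question notes, it does not carry over directly to symbolic terms):

from math import prod   # Python 3.8+

def numer_denom_divide(nums, dens):
    D = prod(dens)
    p = sum(a * (D // d) for a, d in zip(nums, dens))   # a_i * D / d_i
    return p, D

print(numer_denom_divide([1, 1, 1], [2, 3, 4]))   # (26, 24)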

Is there a fast way to invert a matrix in Matlab?

I have lots of large (around 5000 x 5000) matrices that I need to invert in Matlab. I actually need the inverse, so I can't use mldivide instead, which is a lot faster for solving Ax=b for just one b.
My matrices are coming from a problem that means they have some nice properties. First off, their determinant is 1 so they're definitely invertible. They aren't diagonalizable, though, or I would try to diagonalize them, invert them, and then put them back. Their entries are all real numbers (actually rational).
I'm using Matlab for getting these matrices and for this stuff I need to do with their inverses, so I would prefer a way to speed Matlab up. But if there is another language I can use that'll be faster, then please let me know. I don't know a lot of other languages (a little bit of C and a little bit of Java), so if it's really complicated in some other language, then I might not be able to use it. Please go ahead and suggest it anyway, just in case.
I actually need the inverse, so I can't use mldivide instead,...
That's not true, because you can still use mldivide to get the inverse. Note that A^{-1} = A^{-1} * I, i.e. the inverse X = A^{-1} solves A*X = I. In MATLAB, this is equivalent to
invA = A\speye(size(A));
On my machine, this takes about 10.5 seconds for a 5000x5000 matrix. Note that MATLAB does have an inv function to compute the inverse of a matrix. Although this will take about the same amount of time, it is less efficient in terms of numerical accuracy (more info in the link).
First off, their determinant is 1 so they're definitely invertible
Rather than det(A) = 1, it is the condition number of your matrix that dictates how accurate or stable the inverse will be. Note that det(A) = ∏_{i=1..n} λ_i. So just setting λ_1 = M, λ_n = 1/M and λ_i = 1 for the remaining i will give you det(A) = 1. However, as M → ∞, cond(A) = M^2 → ∞ and λ_n → 0, meaning your matrix is approaching singularity and there will be large numerical errors in computing the inverse.
My matrices are coming from a problem that means they have some nice properties.
Of course, there are other more efficient algorithms that can be employed if your matrix is sparse or has other favorable properties. But without any additional info on your specific problem, there is nothing more that can be said.
I would prefer a way to speed Matlab up
MATLAB uses Gaussian elimination to compute the inverse of a general matrix (full rank, non-sparse, without any special properties) using mldivide, and this is Θ(n^3), where n is the size of the matrix. So, in your case, n = 5000 and there are 1.25 x 10^11 floating point operations. So on a reasonable machine with about 10 Gflops of computational power, you're going to require at least 12.5 seconds to compute the inverse, and there is no way out of this, unless you exploit the "special properties" (if they're exploitable).
Inverting an arbitrary 5000 x 5000 matrix is not computationally easy no matter what language you are using. I would recommend looking into approximations. If your matrices are low rank, you might want to try a low-rank approximation M = USV'
Here are some more ideas from math-overflow:
https://mathoverflow.net/search?q=matrix+inversion+approximation
First suppose the eigenvalues are all 1. Let A be the Jordan canonical form of your matrix. Then you can compute A^{-1} using only matrix multiplication and addition by
A^{-1} = I + (I-A) + (I-A)^2 + ... + (I-A)^k
where k < dim(A). Why does this work? Because generating functions are awesome. Recall the expansion
(1-x)^{-1} = 1/(1-x) = 1 + x + x^2 + ...
This means that we can invert (1-x) using an infinite sum. You want to invert a matrix A, so you want to take
A = I - X
Solving for X gives X = I-A. Therefore by substitution, we have
A^{-1} = (I - (I-A))^{-1} = I + (I-A) + (I-A)^2 + ...
Here I've just used the identity matrix I in place of the number 1. Now we have the problem of convergence to deal with, but this isn't actually a problem. By the assumption that A is in Jordan form and has all eigenvalues equal to 1, we know that A is upper triangular with all 1s on the diagonal. Therefore I-A is upper triangular with all 0s on the diagonal. Therefore all eigenvalues of I-A are 0, so its characteristic polynomial is x^dim(A) and its minimal polynomial is x^{k+1} for some k < dim(A). Since a matrix satisfies its minimal (and characteristic) polynomial, this means that (I-A)^{k+1} = 0. Therefore the above series is finite, with the largest nonzero term being (I-A)^k. So it converges.
Now, for the general case, put your matrix into Jordan form, so that you have a block diagonal matrix, e.g.:
A 0 0
0 B 0
0 0 C
Where each block has a single value along the diagonal. If that value is a for the block A, then use the above trick to invert (1/a)*A, and then multiply the result by 1/a again to get A^{-1}. Since the full matrix is block diagonal, the inverse will be
A^{-1} 0 0
0 B^{-1} 0
0 0 C^{-1}
There is nothing special about having three blocks, so this works no matter how many you have.
Note that this trick works whenever you have a matrix in Jordan form. The computation of the inverse in this case will be very fast in Matlab because it only involves matrix multiplication, and you can even use tricks to speed that up since you only need powers of a single matrix. This may not help you, though, if it's really costly to get the matrix into Jordan form.
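A small NumPy sketch of the finite series for a unit upper-triangular block (all eigenvalues 1), just to illustrate the idea; getting a general matrix into this form is the expensive part, as noted above:

import numpy as np

def invert_unitriangular(A):
    # A^{-1} = I + (I-A) + (I-A)^2 + ... ; (I-A) is strictly upper triangular, hence nilpotent
    n = A.shape[0]
    I = np.eye(n)
    N = I - A
    inv, term = I.copy(), I.copy()
    for _ in range(n - 1):
        term = term @ N
        if not term.any():            # the series terminates once the power hits zero
            break
        inv += term
    return inv

A = np.triu(np.random.rand(5, 5), k=1) + np.eye(5)
print(np.allclose(invert_unitriangular(A) @ A, np.eye(5)))   # True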

Using Taylor Series to Avoid Loss of Precision

I'm trying to use Taylor series to develop a numerically sound algorithm for solving a function. I've been at it for quite a while, but haven't had any luck yet. I'm not sure what I'm doing wrong.
The function is
f(x) = 1 + x - sin(x)/ln(1+x),   x ~ 0
Also: why does loss of precision even occur in this function? When x is close to zero, sin(x)/ln(1+x) isn't even close to being the same number as x. I don't see where significance is even being lost.
In order to solve this, I believe that I will need to use the Taylor expansions for sin(x) and ln(1+x), which are
x - x^3/3! + x^5/5! - x^7/7! + ...
and
x - x^2/2 + x^3/3 - x^4/4 + ...
respectively. I have attempted to use like denominators to combine the x and sin(x)/ln(1+x) components, and even to combine all three, but nothing seems to work out correctly in the end. Any help is appreciated.
The loss of precision can come in because when x ~ 0, ln(1+x) is also close to 0, so you wind up dividing by a very small number. Computers aren't very good at that ;-)
If you use the Taylor series for ln(1+x) directly, it's going to be kind of a pain because you'll wind up dividing by an infinite series of terms. For cases like this, I usually prefer to just compute the Taylor series for the entire function as a whole from the definition:
f(x) = f(0) + f'(0) x + f''(0) x^2/2 + f'''(0) x^3/6 + ...
from which you'll get
f(x) = x/2 + x^2/4 + x^3/24 + x^4/240 - 23x^5/1440 + 31x^6/2880 - ...
(I cheated and plugged it into Mathematica ;-) Like Steve says, this series doesn't converge all that quickly, although I can't think of a better method at the moment.
EDIT: I think I misread the question - if all you're trying to do is find the zeros of the function, there are definitely better ways than using a Taylor series.
As this is homework, I'm just going to try to give a few pointers in the right direction.
Solution 1
Rather than using the Taylor series approximation, try to simply use a root finding algorithm such as the Newton-Raphson method, linear interpolation, or interval bisection (or even combine them). They are very simple to implement, and with an appropriate choice of starting value(s), the root can converge to a precise value quite quickly.
Solution 2
If you really need to use the Taylor series approximation for whatever reason, then just expand the sin(x), ln(1+x), and whatever else. (Multiplying through by ln(1+x) to remove the denominator in your case will work.) Then you'll need to use some sort of polynomial equation solver. If you want a reasonable degree of accuracy, you'll need to go beyond the 3rd or 4th powers, I'd imagine, which means a simple analytical solution is not going to be easy. However, you may want to look into something like the Durand-Kerner method for solving general polynomials of any order. Still, if you need to use high-order terms this approach is just going to lead to complications, so I would definitely recommend solution 1.
Hope that helps...
I think you need to look at what happens to ln(x+1) as x -->0 and you will see why this function does not behave well near x = 0.
I haven't looked into this that closely, but you should be aware that some Taylor series converge very, very slowly.
Just compute the Taylor series of f directly.
Maxima gives me (first 4 terms about x=0):
(%i1) f(x) := 1 + x - sin(x)/log(1+x);
(%o1) f(x) := 1 + x - sin(x)/log(1 + x)
(%i2) taylor(f(x), x, 0, 4);
(%o2)/T/ x/2 + x^2/4 + x^3/24 + x^4/240 + . . .
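A small Python sketch comparing direct evaluation of f with the truncated series above (math.log1p computes ln(1+x) accurately for small x, so the remaining damage comes from the final subtraction of two numbers that are both close to 1):

import math

def f_direct(x):
    return 1 + x - math.sin(x) / math.log1p(x)

def f_series(x):
    return x/2 + x**2/4 + x**3/24 + x**4/240

for x in (1e-1, 1e-4, 1e-8):
    print(x, f_direct(x), f_series(x))   # the trailing digits of the direct value degrade as x shrinks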
The method used in the question is correct - just make sure your calculator is in radians mode.

Resources