I have an oscillation curve that is part of the solution of a set of nonlinear ordinary differential equations. I need to test the stability/convergence of this curve as time goes to infinity. How can I do this with MATLAB?
It has been eight years since I did anything like this, so take my answer with a grain of salt.
Solve the equations using step size S and also with step size S/2; if the results match (i.e. are within machine epsilon, or 10× machine epsilon, or however you're defining the word "match"), then you're good to go on truncation error.
Solve the equations using standard floating-point arithmetic, and also solve them with extended-precision arithmetic (IIRC MATLAB calls this Variable Precision Arithmetic; IEEE double precision uses a 52-bit significand, so an 80-bit significand ought to be more than enough to reveal instability due to roundoff error). If the results match, then you're good to go on roundoff error.
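For the first check, here is a minimal sketch in Python (assuming a hand-rolled fixed-step RK4 and using the van der Pol oscillator as a stand-in for the actual system; both choices are illustrative, not the asker's equations):

import numpy as np

def rk4(f, y0, t0, t1, n):
    # Integrate y' = f(t, y) from t0 to t1 with n classical RK4 steps.
    h = (t1 - t0) / n
    y = np.asarray(y0, dtype=float)
    for i in range(n):
        t = t0 + i * h
        k1 = f(t, y)
        k2 = f(t + h/2, y + h/2 * k1)
        k3 = f(t + h/2, y + h/2 * k2)
        k4 = f(t + h, y + h * k3)
        y = y + h/6 * (k1 + 2*k2 + 2*k3 + k4)
    return y

# Illustrative stand-in system: the van der Pol oscillator.
vdp = lambda t, y: np.array([y[1], (1 - y[0]**2) * y[1] - y[0]])

y_S  = rk4(vdp, [2.0, 0.0], 0.0, 50.0, 5000)    # step size S
y_S2 = rk4(vdp, [2.0, 0.0], 0.0, 50.0, 10000)   # step size S/2

# If halving the step barely changes the answer, truncation error is under
# control; otherwise the apparent (in)stability may be a numerical artifact.
print(np.max(np.abs(y_S - y_S2)))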
I ended up using the following script. It works fine for me, but I am still wondering: is there a better way of predicting convergence at long times?
function err = stability_test(t, y)
% Given data of an oscillating curve y(t), tell whether the oscillation
% amplitude decreases or not by
% 1. locating the peak points
% 2. fitting a line through the peak points and checking the sign of its slope
%
% t, y must be of the same shape
% err = 0, non-oscillating
%     < 0, stable
%     > 0, unstable
nt = linspace(min(t), max(t), 500);
ny = interp1(t, y, nt, 'spline');
ndy = gradient(ny, nt);
ndy2 = del2(ny, nt);
if(isempty(find(ndy<0, 1)) || isempty(find(ndy2>0, 1)))
    err = 0;
else
    ndt = nt(2) - nt(1);
    % near-zero slope and negative curvature mark the peaks
    ii = find(abs(ndy) < abs(ndt*ndy2*2) & ndy2 < 0);
    if(isempty(ii))
        err = 0;
    else
        if(length(ii) == 1)
            ii = [ii, length(ndy)];
        end
        ym = ny(ii);
        tm = nt(ii);
        p = polyfit(tm, ym, 1);
        err = p(1);
    end
end
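On the "is there a better way" question: a common alternative is to let a peak finder do the work and then fit the peak amplitudes, e.g. this Python sketch (assuming the curve is available as NumPy arrays t and y; scipy.signal.find_peaks does the peak detection):

import numpy as np
from scipy.signal import find_peaks

def stability_test(t, y):
    # Slope of a linear fit through the oscillation peaks:
    # 0 -> no oscillation detected, < 0 -> decaying, > 0 -> growing.
    peaks, _ = find_peaks(y)
    if len(peaks) < 2:
        return 0.0
    slope, _intercept = np.polyfit(t[peaks], y[peaks], 1)
    return slope

If the curve decays toward a known equilibrium, fitting a line to log(y[peaks] - y_equilibrium) instead gives an exponential decay rate, which is often the better model near a stable equilibrium.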
In golang standard source code, file: src/math/dim.go
func max(x, y float64) float64 {
	// special cases
	switch {
	case IsInf(x, 1) || IsInf(y, 1):
		return Inf(1)
	case IsNaN(x) || IsNaN(y):
		return NaN()
	case x == 0 && x == y:
		if Signbit(x) {
			return y
		}
		return x
	}
	if x > y {
		return x
	}
	return y
}
related: https://floating-point-gui.de/errors/comparison/
The page you link to seems to suggest that you should avoid comparing floating-point numbers. The reason is that if you do two widely different things depending on which of two floating-point numbers is bigger, you might get surprises when the two numbers are almost equal but not quite: rounding errors might mean that one of them is bigger than the other in a way you do not expect.
Note that this is only a problem if you could do two very different things when comparing two very close floating point numbers.
What makes this implementation of max acceptable is that when the floating point numbers are very close to each other, you end up doing two things that are very close to each other as well (you are returning one of them), so you won't get any discontinuity issue.
Note, however, that you may still get unexpected behavior. For instance, max(0.15 + 0.15, 0.1 + 0.2) == 0.3 may be false. But that is not a problem in the max function; it is a problem in what you do with the result.
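A quick demonstration (shown in Python, but it is the same IEEE-754 double arithmetic that Go's float64 uses):

a = 0.15 + 0.15   # actually 0.2999999999999999888977697537...
b = 0.1 + 0.2     # actually 0.3000000000000000444089209850...
print(max(a, b) == 0.3)   # False: max returns b, which is not 0.3
print(max(a, b))          # 0.30000000000000004

max itself did nothing wrong; it faithfully returned the larger of the two values it was given.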
Let's imagine I would like to round a number (e.g. x = 7.4355) to a given arbitrary precision (e.g. p = 0.002). In this case, I would expect to see:
round_arbitrary(x, p) = 7.436
What would be the best approach to design such a rounding function? Ideas in pseudocode or Rust are welcome
What would be the best approach to design such a rounding function?
An approach that gets close to OP's goal:
// Pseudo code (p != 0)
round_arbitrary(x, p)
    x /= p
    x = round(x)
    return x*p
A key point is that floating-point numbers are finite in size and so can represent only about 2^64 different values exactly, whereas code values like 7.4355, 0.002 and the mathematical quotient 1/7.0 come from a much larger set. Thus the above will get one close to, but not with certainty exactly at, the mathematically rounded value.
More advanced code would avoid overflow by not rounding large values which do not need rounding.
// Assume 0 < |p| < 1.0
round_arbitrary_2(x, p)
    if (round(x) != x)
        x /= p
        x = round(x)
        x *= p
    return x
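A runnable version of the two sketches above (a Python sketch; the names mirror the pseudocode, and the representability caveats still apply):

def round_arbitrary(x, p):
    # p != 0; the result is the nearest representable double,
    # not an exact multiple of p
    return round(x / p) * p

def round_arbitrary_2(x, p):
    # assume 0 < |p| < 1; values that are already whole numbers are
    # left alone, which avoids overflowing x / p for very large x
    if round(x) != x:
        x = round(x / p) * p
    return x

print(round_arbitrary(7.4355, 0.002))   # 7.436 (approximately)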
Deeper
This issue lies with floating-point numbers, which are encoded as an integer times a power of 2. The question is then not so much "How to round to an arbitrary (non-power-of-ten) precision" as "How to round to an arbitrary (non-power-of-2) precision".
Is matrix multiplication faster for sparse matrices than for dense matrices?
To give a simplified example, does
"[[0,0],[0,0]] multiplied by [[1,1],[1,1]]"
run faster than
"[[256,256],[256,256]] multiplied by [[1,1],[1,1]]" ?
The machine-code algorithm for doing a multiplication goes like this:
int mul(int a, int b)
{
    unsigned result = 0;
    int sign = (a < 0) ^ (b < 0);            // sign of the product
    unsigned ua = (a < 0) ? -(unsigned)a : (unsigned)a;
    unsigned ub = (b < 0) ? -(unsigned)b : (unsigned)b;
    while (ub != 0)
    {
        if (ub & 1)        // low bit of b set: add the shifted multiplicand
            result += ua;
        ub >>= 1;          // shift b right
        ua <<= 1;          // shift a left
        // note: checks for overflow being left out
    }
    return sign ? -(int)result : (int)result;
}
You'll easily see that the more bits are set in the right operand, the more additions of the (shifted) left operand are necessary.
So, provided that your matrix multiplication boils down to machine code multiplications like this, a sparse matrix will multiply significantly faster than a dense matrix.
The question I cannot answer here is whether the FPU does this in a more efficient manner. You'll want to read some specs here. But even if an FPU (or GPU) does some sort of tweaking, I doubt the basic multiplication grinding loop looks very different (interested in comments about this).
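To make the claim concrete, here is the same shift-and-add loop as a Python sketch that also counts the additions performed; the count is exactly the number of set bits in the right operand:

def shift_add_mul(a, b):
    # Multiply non-negative integers by shift-and-add, also returning
    # how many additions were needed (the popcount of b).
    result, adds = 0, 0
    while b:
        if b & 1:           # low bit of b set: add the shifted multiplicand
            result += a
            adds += 1
        a <<= 1
        b >>= 1
    return result, adds

print(shift_add_mul(7, 0))     # (0, 0):    zero operand, no additions
print(shift_add_mul(7, 255))   # (1785, 8): dense bit pattern, 8 additions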
I have two functions
m1 = f1(w, s)
m2 = f2(w, s)
f1() and f2() are both black boxes. Given w and s, I can get m1 and m2.
Now, I need to design or find a function g, such that
m2' = g(m1)
Also, the difference between m2 and m2' must be minimized.
Both w and s are stochastic processes.
How can I find such a function g()? What knowledge domain does this belong to?
Assuming you can invoke f1 and f2 as many times as you want, this can be solved using regression.
Build a training set: (w_1,s_1,m2_1),...,(w_n,s_n,m2_n).
'Convert' the set to the inputs and outputs of g: (m1_1,m2_1),...,(m1_n,m2_n).
Create your 'base functions'. For example, with polynomial base functions up to degree 3, the 'modified' training set will be (1,m1_1,m1_1^2,m1_1^3,m2_1), ... It is easy to generalize this to any polynomial degree or any other set of base functions.
Now you have a problem which can be solved by linear regression using ordinary least squares (OLS).
However, note that for some functions it might be impossible to find a good model to fit, since you lose information when you reduce the dimensionality from 2 (w, s) to 1 (m1).
MATLAB code snippet (with an arbitrary choice of example functions):
%example functions
f = @(w,s) w.^2 + s.^3 - 1;
g = @(w,s) s.^2 - w + 2;
%random points for sampling
w = rand(1,100);
s = rand(1,100);
%the data
m1 = f(w,s)';
m2 = g(w,s)';
%changing dimension:
d = 5;
points = size(m1,1);
A = ones(points,d);
for jj=1:d
A(:,jj) = (m1.^(jj-1))';
end
%OLS:
theta = pinv(A'*A)*A'*m2;
%new point:
w = rand(1,1);
s = rand(1,1);
m1 = f(w,s);
%estimate the new point:
A = ones(1,d);
for jj=1:d
A(:,jj) = (m1.^(jj-1))';
end
%the estimation:
estimated = A*theta
%the real value:
g(w,s)
Problems of this kind are studied in fields such as statistics and inverse problems. Here is one way to approach the problem theoretically (from the point of view of inverse problems):
First of all, it is quite clear that in the general case the function g might not exist. However, what you can (try to) compute, given that you (assume to) know something about the statistics of w and s, is the posterior probability density p(m2|m1), which can then be used to compute estimators for m2 given m1, for instance a maximum a posteriori estimate.
The posterior density can be computed using Bayes' formula:
p(m2|m1) = (\int p(m1,m2|w,s) p(w,s) dw ds) / (\int p(m1|w,s) p(w,s) dw ds)
which, in this case, might be (theoretically) nasty to apply, since some of the involved marginal probability densities are singular. The best way to proceed numerically depends on the additional assumptions you can make about the statistics of w and s (e.g., Gaussian) and the functions f1, f2 (e.g., smooth). There is no silver bullet.
amit's OLS solution is probably a good starting point. Just be sure to sample from the correct distributions for w and s.
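As a bridge between the two answers, here is a minimal Monte Carlo sketch (Python; the uniform inputs and toy f1, f2 are placeholder assumptions) of estimating g as the conditional mean E[m2 | m1], which is the least-squares-optimal choice of g:

import numpy as np

rng = np.random.default_rng(0)

# Placeholder black boxes and input distributions; substitute your own.
f1 = lambda w, s: w**2 + s**3 - 1
f2 = lambda w, s: s**2 - w + 2

w = rng.random(100_000)
s = rng.random(100_000)
m1, m2 = f1(w, s), f2(w, s)

# Estimate g(x) = E[m2 | m1 = x] by averaging m2 within bins of m1.
bins = np.linspace(m1.min(), m1.max(), 51)
idx = np.digitize(m1, bins)
g_est = np.array([m2[idx == k].mean() if np.any(idx == k) else np.nan
                  for k in range(1, len(bins))])

# The residual scatter of m2 within each bin is the irreducible error
# from collapsing the pair (w, s) down to the single number m1.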
Is there some way to calculate the inverse factorials of real numbers?
For example, 1.5! = 1.32934039.
Is there some way to obtain 1.5 back if I have the value 1.32934039?
I tried
http://www.wolframalpha.com/input/?i=Gamma^(-1)[1.32934039]
but that does not work.
Using wolframalpha.com, you can ask for
Solve[Gamma[x+1]==1.32934039,x]
As mentioned in the comments, Gamma does not have a unique inverse. This is true even when you are solving for a conventional factorial; e.g.,
Solve[Gamma[x+1]==6,x]
yields several answers, of which one is 3.
Instead of using Gamma[] in WolframAlpha, you can also use Factorial[]:
Solve[Factorial[x]==6,x]
Solve[Factorial[x]==1.32934039,x]
David Cantrell gives a good approximation of Γ^(-1)(x) on this page:
k = the positive zero of the digamma function, approximately 1.461632
c = Sqrt(2*pi)/e - Γ(k), approximately 0.036534
L(x) = ln((x+c)/Sqrt(2*pi))
W(x) = Lambert W function
ApproxInvGamma(x) = L(x) / W(L(x) / e) + 1/2
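This approximation is easy to evaluate numerically; here is a Python sketch using SciPy's Lambert W (the constant is the one quoted above):

import math
from scipy.special import lambertw

C = 0.036534   # Sqrt(2*pi)/e - Gamma(k), as above

def approx_inv_gamma(x):
    # Cantrell's approximation to the inverse of Gamma on [k, infinity)
    L = math.log((x + C) / math.sqrt(2 * math.pi))
    return L / lambertw(L / math.e).real + 0.5

y = approx_inv_gamma(1.32934039)
print(y, y - 1)   # ~2.51 and ~1.51: close to the exact 2.5 and 1.5;
                  # the approximation is least accurate near Gamma's minimum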
For integers you can do:
def inverse_factorial(n):
    i = 2
    while n > 1 and n % i == 0:
        n //= i        # peel off the factors 2, 3, 4, ... in turn
        i += 1
    return i - 1 if n == 1 else None   # None: n is not a factorial
The factorial for real numbers has no inverse. You say that "each function must have an inverse". That is incorrect. Consider the constant function f(x) = 0: what is f^-1(42)? For a function to be invertible it must be both an injection and a surjection, i.e., a bijection.