I want to create (approximately) the double sigmoid in the shown figure as
a function in terms of the parameters X, Y, Z, a, b, c and d.
Any idea? Thanks.
This question seems to have gone ignored, so try something like this:
k = 1 # adjust this for "sharpness" of the two transitions
s(x) = (tanh(k * x) + 1) / 2               # smooth step from 0 to 1
f(x) = X + (Y-X) * s(x-b) + (Z-Y) * s(x-c) # steps X -> Y near x = b, then Y -> Z near x = c
Here's an example plot.
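For reference, here is the same construction as a runnable MATLAB sketch; the parameter values (X = 0, Y = 1, Z = 3, b = 2, c = 6, k = 1) are illustrative assumptions, not values read off your figure:
% Smooth step from 0 to 1; k controls how sharp the transition is
s = @(x, k) (tanh(k * x) + 1) / 2;

X = 0; Y = 1; Z = 3;   % the three plateau levels (illustrative)
b = 2; c = 6;          % locations of the two transitions (illustrative)
k = 1;                 % sharpness (illustrative)

% Double sigmoid: starts at X, steps to Y near x = b, then to Z near x = c
f = @(x) X + (Y - X) * s(x - b, k) + (Z - Y) * s(x - c, k);

xs = linspace(-2, 10, 500);
plot(xs, f(xs));
xlabel('x'); ylabel('f(x)');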
Is there a way to obtain the order of an ODE in Mathematica?
For example, if I have y'' + 5y, I want Mathematica to return 2 (because it is a 2nd-order equation). Is what I am asking possible?
Here is a way to extract the value automatically:
ode = y'' + y' + y == 0;
Max[Cases[ode, Derivative[n_][y] :> n, Infinity]]
2
Note that this just finds the largest derivative in the expression; it does not verify that the expression is actually an ODE.
Hi, I'm trying to implement the gradient descent algorithm for the function E(u,v) = (u*exp(v) - 2*v*exp(-u))^2.
My starting point for the algorithm is w = (u,v) = (2,2). The learning rate is eta = 0.01 and bound = 10^-14. Here is my MATLAB code:
function [resultTable, boundIter] = gradientDescent(w, iters, bound, eta)
% FUNCTION [resultTable, boundIter] = gradientDescent(w, iters, bound, eta)
%
% DESCRIPTION:
% - This function will do gradient descent error minimization for the
% function E(u,v) = (u*exp(v) - 2*v*exp(-u))^2.
%
% INPUTS:
% 'w' a 1-by-2 vector indicating initial weights w = [u,v]
% 'iters' a positive integer indicating the number of gradient descent
% iterations
% 'bound' a real number indicating an error lower bound
% 'eta' a positive real number indicating the learning rate of GD algorithm
%
% OUTPUTS:
% 'resultTable' an (iters+1)-by-6 matrix holding the error, partial
% derivatives and weights for each GD iteration
% 'boundIter' a positive integer specifying the first GD iteration at which
% the error function dropped below the error bound 'bound' (0 if it never did)
%
% The error function (anonymous functions use @, not #)
E = @(u,v) (u*exp(v) - 2*v*exp(-u))^2;
% Partial derivative of E with respect to u
pEpu = @(u,v) 2*(u*exp(v) - 2*v*exp(-u))*(exp(v) + 2*v*exp(-u));
% Partial derivative of E with respect to v
pEpv = @(u,v) 2*(u*exp(v) - 2*v*exp(-u))*(u*exp(v) - 2*exp(-u));
% Initialize boundIter
boundIter = 0;
% Create a table for holding the results
resultTable = zeros(iters+1, 6);
% Iteration number
resultTable(1, 1) = 0;
% Error at iteration i
resultTable(1, 2) = E(w(1), w(2));
% The value of pEpu at initial w = (u,v)
resultTable(1, 3) = pEpu(w(1), w(2));
% The value of pEpv at initial w = (u,v)
resultTable(1, 4) = pEpv(w(1), w(2));
% Initial u
resultTable(1, 5) = w(1);
% Initial v
resultTable(1, 6) = w(2);
% Loop over all the iterations
for i = 2:iters+1
    % Save the iteration number
    resultTable(i, 1) = i-1;
    % Update the weights (simultaneous update via temporaries)
    temp1 = w(1) - eta*(pEpu(w(1), w(2)));
    temp2 = w(2) - eta*(pEpv(w(1), w(2)));
    w(1) = temp1;
    w(2) = temp2;
    % Evaluate the error function at the new weights
    resultTable(i, 2) = E(w(1), w(2));
    % Evaluate pEpu at the new point
    resultTable(i, 3) = pEpu(w(1), w(2));
    % Evaluate pEpv at the new point
    resultTable(i, 4) = pEpv(w(1), w(2));
    % Save the new weights
    resultTable(i, 5) = w(1);
    resultTable(i, 6) = w(2);
    % Record the first iteration at which the error drops below 'bound'
    if boundIter == 0 && E(w(1), w(2)) < bound
        boundIter = i-1;
    end
end
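Given the function signature above and the parameters listed further down (w = [2,2], eta = 0.01, bound = 10^-14, iters = 5), the corresponding call is:
[resultTable, boundIter] = gradientDescent([2, 2], 5, 1e-14, 0.01);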
This is an exercise in my machine learning course, but for some reason my results are all wrong, so there must be something wrong in the code. I have tried debugging it again and again and haven't found anything. Can someone identify the problem here? In other words, can you check that this code is a valid gradient descent algorithm for the given function?
Please let me know if my question is too unclear or if you need more info :)
Thank you for your effort and help! =)
Here are my results for five iterations and what other people got:
PARAMETERS: w = [2,2], eta = 0.01, bound = 10^-14, iters = 5
As discussed in the comments below the question: I would say the others are wrong; your minimization leads to smaller values of E(u,v). Check:
E(1.4, 1.6) = 37.8 >> 3.6 = E(0.63, -1.67)
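You can verify this directly with the E from the question (the two points are the ones quoted above):
E = @(u,v) (u*exp(v) - 2*v*exp(-u))^2;
E(1.4, 1.6)     % ~37.8, where the others ended up
E(0.63, -1.67)  % ~3.6, where your code ends up: a lower error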
Not a complete answer, but let's go for it:
I added a plotting part to your code so you can see what's going on.
% Element-wise version of E so it can be applied to vectors and meshgrids
Evec = @(u,v) (u.*exp(v) - 2*v.*exp(-u)).^2;
u1 = resultTable(:,5);
v1 = resultTable(:,6);
E1 = Evec(u1, v1);
E1(E1 < bound) = NaN;
[x, y] = meshgrid(-1:0.1:5, -5:0.1:2);
Z = Evec(x, y);
surf(x, y, Z)
hold on
plot3(u1, v1, E1, 'r')   % descent path on the error surface
plot3(u1, v1, E1, 'r*')  % individual iterates
The result shows that your algorithm is doing the right thing for that function. So, as others said, either all the others are wrong or you are not using the right equation from the beginning.
(I apologize for not just commenting, but I'm new to SO and cannot comment.)
It appears that your algorithm is doing the right thing. What you want to be sure of is that at each step the energy is shrinking (which it is). There are several reasons why your data points may not agree with the others in the class: they could be wrong (you or the others), they may have started at a different point, or they may have used a different step size (what you are calling eta, I believe).
Ideally, you don't want to hard-code the number of iterations. You want to continue until you reach a local minimum (which hopefully is the global minimum). To check this, you want both partial derivatives to be zero (or very close to it). In addition, to make sure you're at a local minimum (not a local maximum or a saddle point), you should check the signs of E_uu*E_vv - E_uv^2 and of E_uu; see http://en.wikipedia.org/wiki/Second_partial_derivative_test for details (the second derivative test, at the top). If you find yourself at a local maximum or saddle point, your gradient will tell you not to move (since the partial derivatives are 0). Since you know this isn't optimal, you have to perturb your solution (sometimes called simulated annealing).
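As a minimal sketch of that stopping criterion (the gradient tolerance tol and the iteration cap maxIters are my illustrative choices, not values from the question), using the E and partial derivatives defined in the question:
E    = @(u,v) (u*exp(v) - 2*v*exp(-u))^2;
pEpu = @(u,v) 2*(u*exp(v) - 2*v*exp(-u))*(exp(v) + 2*v*exp(-u));
pEpv = @(u,v) 2*(u*exp(v) - 2*v*exp(-u))*(u*exp(v) - 2*exp(-u));

w = [2, 2];      % starting point from the question
eta = 0.01;      % learning rate from the question
tol = 1e-8;      % stop once the gradient is this small (illustrative)
maxIters = 1e5;  % safety cap so the loop always terminates (illustrative)

for i = 1:maxIters
    g = [pEpu(w(1), w(2)), pEpv(w(1), w(2))];
    if norm(g) < tol  % near-zero gradient: (approximately) stationary
        break
    end
    w = w - eta * g;
end
fprintf('stopped after %d iterations at (u,v) = (%g, %g), E = %g\n', ...
    i, w(1), w(2), E(w(1), w(2)))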
Hope this helps.
I have written the following algorithm in order to find a root of a function in MATLAB using Newton's method (we set r = -7 in my solution):
function newton(r)
syms x;
y = exp(x) - 1.5 - atan(x);
yprime = diff(y,x);
f = matlabFunction(y);
fprime = matlabFunction(yprime);
x = r;
xvals = x
for i = 1:8
    u = x;
    x = u - f(r)/fprime(r);
    xvals = x
end
The algorithm runs without any errors, but the iterates keep decreasing at every iteration, even though, according to my textbook, the expression should converge to roughly -14 for x. My algorithm is correct for the first two iterations, but then it goes beyond -14 and finally ends up at roughly -36.4 after all iterations have completed.
If anyone can give me some help as to why the algorithm does not work properly, I would greatly appreciate it!
I think
x = u - f(r)/fprime(r);
should be
x = u - f(u)/fprime(u);
If you always use r, you're always decrementing x by the same value.
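With that one-line fix, the loop converges to about -14 from r = -7. Purely as an optional extra (the tolerance value is an illustrative choice of mine), you can also stop early once the iterates settle:
function newton(r)
syms x;
y = exp(x) - 1.5 - atan(x);
f = matlabFunction(y);
fprime = matlabFunction(diff(y, x));
x = r;
xvals = x
for i = 1:8
    u = x;
    x = u - f(u)/fprime(u);  % evaluate at the current iterate, not at r
    xvals = x
    if abs(x - u) < 1e-12    % optional early stop (illustrative tolerance)
        break
    end
end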
% Alternative: rewrite exp(x) - 1.5 - atan(x) = 0 as the fixed-point
% iteration x = tan(exp(x) - 1.5) and iterate it directly
x = -1;    % starting guess
n = 10;
v = x(1);  % solution vector, one entry per iteration
for i = 2:n
    x(i) = tan(exp(x(i-1)) - 1.5);
    v = [v; x(i)];
end
v
I have this set of variables:
N = 250;
% independent variables in [0, 10]
x_1 = rand(N,1) * 10;
x_2 = rand(N,1) * 10;
y = ones(N,1); % regression variable
y((x_1 + x_2 + rand(N,1) * 2) <= 11) = 2;
I want to do a two-variable regression in MATLAB, but I do not know how; can somebody help me? The result of the linear or polynomial regression should be the line separating the two classes stored in y.
One or more 'independent' variables makes no difference. Just as an example, here are a few ways to solve it:
>> X = [x_1 x_2];
>> X \ y
ans =
   0.10867
   0.11984
>> pinv(X) * y
ans =
   0.10867
   0.11984
See the documentation for \ (mldivide) and pinv for more.
MATLAB has many other ways to solve least-squares problems. You may want to say more about your specific case in order to find the most suitable one. Either way, the documentation above is a good starting point.
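One caveat worth flagging: with X = [x_1 x_2] the fit is forced through the origin. If you want an intercept term (an assumption on my part about what you are after), augment X with a column of ones:
% Least squares with an intercept: y ~ b0 + b1*x_1 + b2*x_2
Xa = [ones(N,1) x_1 x_2];  % prepend a constant column for the intercept
coeffs = Xa \ y;           % coeffs = [b0; b1; b2]
A boundary between the classes y = 1 and y = 2 can then be read off as the level set b0 + b1*x_1 + b2*x_2 = 1.5, the midpoint of the two labels.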
Edit:
Some general information on least squares worth reading: wiki and mathworks.