As stated above: I wish to compute the minimum (and/or maximum) of a continuous variable over time. Here is a minimal example to demonstrate:
model MinMaxTest
  Real u;
  Real u_min(start = 10);
  Real u_max(start = -10);
equation
  u = sin(time / 180 * Modelica.Constants.pi);
  u_min = min(u, u_min);
  u_max = max(u, u_max);
  annotation(experiment(StartTime = 0, StopTime = 360, Tolerance = 1e-06, Interval = 1));
end MinMaxTest;
u is the arbitrary continuous variable (for demo purposes a simple sine wave).
u_min/u_max is the minimum/maximum over time.
Obviously the expected result is u_min = -1 and u_max = 1. Unfortunately the simulation crashes with a "Matrix singular!" error. Can anyone direct me on how to avoid that?
EDIT 1
I'm using OpenModelica 1.15 (was 1.9.2)
EDIT 2
As I'm quite new to Modelica, I'm struggling to understand the differences between the following approaches:
1. u_min = if noEvent(u < u_min) then u else pre(u_min);
2. if noEvent(u < u_min) then
     u_min = u;
   else
     u_min = pre(u_min);
   end if;
3. u_min = if noEvent(u < u_min) then u else u_min;
4. u_min = if u < u_min then u else pre(u_min);
5. u_min = if u < u_min then u else u_min;
6. when u < u_min then
     u_min = u;
   end when;
7. u_min + T*der(u_min) = if u <= u_min then u else u_min;
1 and 2 are equivalent and result in the expected behavior.
3 produces the desired result but gives a "Translation Notification" about an "algebraic loop". Why?
4 fails insofar as the resulting u_min curve is identical to u. Why?
5 combines the problems of 3 and 4.
6 fails to compile with "Sorry - Support for Discrete Equation Systems is not yet implemented".
7 I'm unclear on what the idea behind this is, but it works if T is of the suggested size.
If I'm understanding the Modelica documentation correctly, then 1-5 have in common that exactly one equation is active at all times. noEvent suppresses event generation at the specified zero crossing; I had the impression that this is mostly an efficiency improvement, so why does leaving it out cause 4 to fail? pre refers to the previous value of the variable, so I guess that makes sense if we want to keep a variable constant, but why does 7 work without it? My understanding of when was that its equation is only active at that precise event and otherwise keeps the previous value, which is why I tried using it in 6. It seems to work if I compare against constant values (which is of no use for this particular problem).
EDIT 3
u_min = smooth(0, if u < u_min then u else pre(u_min));
Interestingly, this also works.
I tested your model with Dymola 2016 and it works; however, you can try an alternative approach. In Modelica you have to think in terms of equations and not in terms of assignments.
u_min = min(u, u_min);
is what you would do if the code were executed as a sequence of instructions. Under the hood, the Modelica tool converts this equation into a nonlinear system that is solved as the simulation proceeds.
These are the statistics I get when simulating your model:
Statistics
Original Model
  Number of components: 1
  Variables: 3
  Unknowns: 3 (3 scalars)
  Equations: 3
  Nontrivial: 3
Translated Model
  Time-varying variables: 3 scalars
  Number of mixed real/discrete systems of equations: 0
  Sizes of linear systems of equations: { }
  Sizes after manipulation of the linear systems: { }
  Sizes of nonlinear systems of equations: {1, 1}
  Sizes after manipulation of the nonlinear systems: {1, 1}
  Number of numerical Jacobians: 0
As you can see there are two nonlinear systems, one for u_min and one for u_max.
An alternative solution to your problem is the following:
model Test
  Real x;
  Real y;
  Real u_min;
  Real u_max;
  parameter Real T = 1e-4;
equation
  x = sin(time) + 0.1*time;
  y = sin(time) - 0.1*time;
  u_min + T*der(u_min) = if y <= u_min then y else u_min;
  u_max + T*der(u_max) = if x >= u_max then x else u_max;
end Test;
In this case u_min and u_max are two state variables, and they follow the variables x and y, depending on their values. For example, when x is lower than u_max, u_max gets "stuck" at the maximum value reached up to that point in time (the else branch reduces the equation to T*der(u_max) = 0).
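As a sketch of how this filter approach maps back to the original question (untested; it reuses the sine input and experiment setup from the MinMaxTest model above):

model MinMaxFiltered
  Real u;
  Real u_min(start = 10);
  Real u_max(start = -10);
  parameter Real T = 1e-4; // filter time constant; keep small relative to the signal's time scale
equation
  u = sin(time / 180 * Modelica.Constants.pi);
  // While u <= u_min, u_min tracks u with a first-order lag T;
  // otherwise the equation reduces to der(u_min) = 0 and u_min holds its value.
  u_min + T*der(u_min) = if u <= u_min then u else u_min;
  u_max + T*der(u_max) = if u >= u_max then u else u_max;
  annotation(experiment(StartTime = 0, StopTime = 360, Tolerance = 1e-06, Interval = 1));
end MinMaxFiltered;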
Sorry but I can't post an image of the model running since this is my first reply.
For your initial question, what seems to work correctly for me in OpenModelica is this:
u_min = min(u, pre(u_min));
u_max = max(u, pre(u_max));
For me that compiles, simulates, and gives the expected results, but it also still says "Matrix singular!". On the other hand, if I change the initial declaration for u_max to this:
Real u_max(start = 0);
Then the "Matrix singular!" message goes away.
I don't know why, but that does seem to do the job, and I would suggest it is more straightforward than the other options you have listed.
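Putting those pieces together, the full model would be (a sketch assembled from the lines above; only u_max's start value is changed):

model MinMaxTest
  Real u;
  Real u_min(start = 10);
  Real u_max(start = 0);
equation
  u = sin(time / 180 * Modelica.Constants.pi);
  u_min = min(u, pre(u_min));
  u_max = max(u, pre(u_max));
  annotation(experiment(StartTime = 0, StopTime = 360, Tolerance = 1e-06, Interval = 1));
end MinMaxTest;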
The main issue here is that you get an equation which is singular, since you try to solve the equation u_min = min(u, u_min), where u_min depends on u and on itself: every value of u_min that is smaller than u would satisfy that equation. A tool might also try to use a non-linear solver for it.
Another solution for that could perhaps be the delay operator:
u_min = min(u, delay(u_min,0));
u_max = max(u, delay(u_max,0));
Some notes on the different approaches:
u_min = if noEvent(u < u_min) then u else pre(u_min);
if noEvent(u < u_min) then
  u_min = u;
else
  u_min = pre(u_min);
end if;
These two are semantically identical, so the result should be the same. Also, the usage of the pre operator solves the issue, since here u_min depends on u and pre(u_min), so there is no need for a non-linear solver.
u_min = if noEvent(u < u_min) then u else u_min;
As above where min() is used, the solution of u_min depends on u and u_min, which leads to a non-linear system.
u_min = if u < u_min then u else pre(u_min);
The semantics of the noEvent() operator result in the if-expression being evaluated literally. In this case, without it, an event is triggered at the zero crossing u < u_min, and the expression u_min = u ends up being used all the time.
u_min = if u < u_min then u else u_min;
Yes, it combines the problems of 3 and 4.
when u < u_min then
  u_min = u;
end when;
Here again the solution of u_min depends on u_min and u.
u_min + T*der(u_min) = if u <= u_min then u else u_min;
Here u_min is a state, so its value is computed by the integrator; the equation is solved for der(u_min), which then affects u_min. (When u > u_min, the equation reduces to T*der(u_min) = 0, so u_min holds its value.)
Related
I am using the Matlab code below to implement a power function [without using the built-in function] that calculates power = b^e.
At the moment, I am unable to get a power function going that supports fractional exponent values such as b^(1/2) = sqrt(b) or 3.4^(1/4), due to the inefficient approach: it loops e times. I need help with efficient logic for fractional exponents.
Thank you
b = -32:32;  % example input values
e = 3;       % example exponent; must be a scalar integer here (fractions are still unsupported)
p = power_function(b, e)

function p = power_function(b, e)
p = ones(size(b));        % start from 1 for every base value
if e < 0
    e = abs(e);
    multiplier = 1 ./ b;  % negative exponent: multiply by the reciprocal
else
    multiplier = b;
end
for k = 1:e               % repeated multiplication: loops e times
    p = p .* multiplier;
end
end
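For reference, a minimal sketch of the usual way to handle fractional exponents, assuming b > 0, via the identity b^e = exp(e*log(b)):

% Fractional exponents via b^e = exp(e*log(b)); valid for positive b
b = 3.4;
e = 1/4;
p = exp(e * log(b))  % same value as 3.4^(1/4)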
So, I want to know if making the code easier to read slows performance in Matlab.
function V = example(t, I)
a = 10;
b = 20;
c = 0.5;
V = zeros(1, length(t));
V(1) = 0;
delta_t = t(2) - t(1);
for i = 1:length(t)-1
    V(i+1) = V(i) + delta_t*feval(@V_prime, a, b, c, t(i));
end
So, this function is just an example of the Euler method. The idea is that I name the constant variables a, b, c and define a function for the derivative. This basically makes the code easier to read. What I want to know is if declaring a, b, c slows down my code. Also, for performance, would it be better to put the equation of the derivative (V_prime) directly into the expression instead of calling a function?
Following this mindset, the code would look something like this:
function V = example(t, I)
V = zeros(1, length(t));
V(1) = 0;
delta_t = t(2) - t(1);
for i = 1:length(t)-1
    V(i+1) = V(i) + delta_t*(((10 + t(i)*3)/20) + 0.5);
end
Also, from what I've read, Matlab performs better when the code is vectorized. Would that be the case in my code?
EDIT:
So, here is my actual code that I am working on:
function [V, u] = Izhikevich_CA1_Imp(t, I_amp, t_inj)
vr = -61.8;    % resting potential (mV)
vt = -57.0;    % threshold potential (mV)
c = -65.8;     % reset membrane potential (mV)
vpeak = 22.6;  % membrane voltage cutoff (mV)
khigh = 3.3;   % nS/mV
klow = 0.1;    % nS/mV
C = 115;       % membrane capacitance (pF)
a = 0.0012;    % 1/ms
b = 3;         % nS
d = 10;        % pA
V = zeros(1, length(t));
V(1) = vr; u = 0;  % initial values
span = length(t) - 1;
delta_t = t(2) - t(1);
for i = 1:span
    if V(i) <= vt
        k = klow;
    else
        k = khigh;
    end
    if (t(i) >= t_inj(1)) && (t(i) <= t_inj(2))
        I_inj = I_amp;
    else
        I_inj = 0;
    end
    V(i+1) = V(i) + delta_t*((k*(V(i)-vr)*(V(i)-vt) - u(i) + I_inj)/C);
    u(i+1) = u(i) + delta_t*(a*(b*(V(i)-vr) - u(i)));
    if V(i+1) >= vpeak
        V(i+1) = c;
        V(i) = vpeak;
        u(i+1) = u(i+1) + d;
    end
end
plot(t, V);
Since I didn't have any formal training in Matlab (I learned by trying and failing), I have a C mindset of programming, and from what I understand, Matlab code should be vectorized.
Eventually I will start working with bigger functions, so performance will be a concern. Now my goal is to vectorize this code.
Usually it is faster.
Especially if you replace looped function calls (like plot()), you will see a significant increase in performance.
In one of my past projects, I had to optimize a program. It was written using regular programming constructs (for, while, etc.). Using vectorization, I reached a 10 times increase in performance, which is quite notable.
I would suggest using vectorisation instead of loops most of the time.
In Matlab you should basically forget the mindset coming from low-level C programming.
In my experience, the first rule for achieving performance in Matlab is to avoid loops and use built-in vectorized functions as much as possible. In general, you should try to avoid direct access to array elements like array(i).
Implementing your own ODE solver inevitably leads to very slow execution, because in this case there is really no way to avoid the aforementioned things, even if your implementation is per se fine (as in your case). I strongly advise relying on Matlab's ODE solvers, which are highly optimized blocks of compiled code and much faster than any interpreted Matlab code you can write.
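For instance, a minimal sketch using ode45 on the simplified derivative from the example above (the time span here is an assumption):

% Replace the hand-rolled Euler loop with Matlab's built-in solver
V_prime = @(t, V) ((10 + t*3)/20) + 0.5;  % dV/dt from the simplified example
tspan = [0 10];                           % integration interval (assumed)
V0 = 0;                                   % initial condition from the example
[t, V] = ode45(V_prime, tspan, V0);
plot(t, V);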
In my opinion this goes along with readability of the code as well, at least for the trivial reason that you get a shorter code... but I guess it is also a matter of personal taste.
I am trying to implement the Johnson-Lindenstrauss lemma. I have searched for pseudocode but could not find any.
I don't know if I have implemented it correctly or not. I just want you guys who understand the lemma to please check my code for me and advise me on the correct Matlab implementation.
n = 2;           % number of points
d = 4;           % original dimension
k = 2;           % target dimension
G = rand(d, n);  % each column is a point in R^d
epsilon = sqrt(log(n)/k);
% Projection into dim k << d
% Defining P (k x d)
P = randn(k, d);
% Projecting down to k dimensions (matrix product, not element-wise)
proj = P * G;
u = proj(:,1);
v = proj(:,2);
% u = P * G(:,5);
% v = P * G(:,36);
% The middle value should lie between the two bounds:
norm(G(:,1)-G(:,2))^2 * k * (1-epsilon)
norm(u - v)^2
norm(G(:,1)-G(:,2))^2 * k * (1+epsilon)
For the first part of that, to find the epsilon, you need to solve a polynomial equation.
n = 2;
k = 2;
pol1 = [-1/3 1/2 0 4*log2(n)/k];  % coefficients of -eps^3/3 + eps^2/2 + 4*log2(n)/k = 0
c = roots(pol1)
1.4654 + 1.4304i
1.4654 - 1.4304i
-1.4308 + 0.0000i
Then you need to remove the complex roots and keep the real one:
epsilon = c(imag(c)==0);
% if there is more than one root with zero imaginary part, you need to select the smaller one
Now you know that the epsilon should be equal to or greater than that result.
For any set of m points in R^N, with k = 20*log(m)/epsilon^2 and epsilon < 1/2, projecting with
1/sqrt(k) .* randn(k,N)
you obtain Pr[success] >= 1 - 2*m^(5*epsilon - 3).
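A minimal sketch of this construction (the sizes m, N, and epsilon below are assumptions for illustration):

% Scaled Gaussian random projection, following the statement above
m = 50; N = 10000; epsilon = 0.3;  % assumed example sizes
k = ceil(20*log(m)/epsilon^2);     % target dimension from the bound
X = rand(N, m);                    % m points in R^N, one per column
P = (1/sqrt(k)) * randn(k, N);     % scaled Gaussian projection
Y = P * X;                         % projected points in R^k
% Distortion check for one pair of points:
ratio = norm(Y(:,1) - Y(:,2))^2 / norm(X(:,1) - X(:,2))^2  % expect roughly within [1-epsilon, 1+epsilon]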
An R package, RandPro, is available to perform random projection using the Johnson-Lindenstrauss lemma.
I've been working on speeding up the following function, but with no results:
function beta = beta_c(k,c,gamma)
beta = zeros(size(k));
E = @(x) (1.453*x.^4)./((1 + x.^2).^(17/6));
for ii = 1:size(k,1)
    for jj = 1:size(k,2)
        E_int = integral(E, k(ii,jj), 10000);
        beta(ii,jj) = c*gamma/(k(ii,jj)*sqrt(E_int));
    end
end
end
So far, I have solved it this way:
function beta = beta_calc(k,c,gamma)
k_1d = reshape(k, [1, numel(k)]);
E_1d = @(k) 1.453.*k.^4./((1 + k.^2).^(17/6));
E_int = zeros(1, numel(k_1d));
parfor ii = 1:numel(k_1d)
    E_int(ii) = quad(E_1d, k_1d(ii), 10000);
end
beta_1d = c*gamma./(k_1d.*sqrt(E_int));
beta = reshape(beta_1d, [size(k,1), size(k,2)]);
end
It seems to me that it didn't really enhance performance. What do you think about this?
Would you mind shedding some light on it?
Thank you in advance.
EDIT
I am going to introduce some theoretical background related to my question.
Generally, beta is to be calculated as beta = c*gamma./(k.*sqrt(E_int)), where E_int is the integral of E(x) = 1.453.*x.^4./((1 + x.^2).^(17/6)) from k to infinity (approximated by 10000 in the code above).
Therefore, in the reduced case of a one-dimensional k array, E_int may be calculated as
E = 1.453.*k.^4./((1 + k.^2).^(17/6));
E_int = 1.5 - cumtrapz(k,E);
or, alternatively as
E = @(k) 1.453.*k.^4./((1 + k.^2).^(17/6));  % define the integrand once, outside the loop
E_int(1) = 1.5;
for jj = 2:numel(k)
    E_int(jj) = E_int(jj - 1) - integral(E, k(jj-1), k(jj));
end
Nonetheless, k is currently a matrix k(size1,size2).
Here's another approach: parallelize, because that's easy using spmd or parfor. Instead of integral, consider quad; see this link for examples...
I like this question.
The problem: the function integral takes only scalars as integration limits. Hence, it is difficult to vectorize the computation of E_int.
A clue: there seems to be a lot of redundancy in integrating the same function over and over from k(ii,jj) to infinity...
Proposed solution: how about sorting the values of k from smallest to largest and integrating each segment only once, E_sort_int(si) = integral( E, sortedK(si), sortedK(si+1) );, with sortedK( numel(k) + 1 ) = 10000;? Then each full value of E_int is the reverse cumulative sum cumsum( E_sort_int, 'reverse' ), since every integral runs from sortedK(si) up to 10000 (you only need to "undo" the sorting and reshape it back to the size of k), as shown in the sketch below.
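A minimal sketch of that idea (untested; it uses the integrand and upper limit from the question):

function beta = beta_sorted(k, c, gamma)
% Sort k, integrate each adjacent segment once, then accumulate from the top down
E = @(x) (1.453*x.^4)./((1 + x.^2).^(17/6));
[sortedK, order] = sort(k(:));      % ascending k values
edges = [sortedK; 10000];           % append the common upper limit
segInt = zeros(numel(sortedK), 1);
for si = 1:numel(sortedK)
    segInt(si) = integral(E, edges(si), edges(si+1));
end
E_int = cumsum(segInt, 'reverse');  % suffix sums: integral from sortedK(si) to 10000
E_int(order) = E_int;               % undo the sorting
beta = reshape(c*gamma ./ (k(:) .* sqrt(E_int)), size(k));
end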
I have written the following algorithm in order to find the root of a function in Matlab using Newton's method (we set r = -7 in my solution):
function newton(r)
syms x;
y = exp(x) - 1.5 - atan(x);
yprime = diff(y,x);
f = matlabFunction(y);
fprime = matlabFunction(yprime);
x = r;
xvals = x
for i = 1:8
    u = x;
    x = u - f(r)/fprime(r);
    xvals = x
end
The algorithm works in that it runs without any errors, but the numbers keep decreasing at every iteration, even though, according to my textbook, the expression should converge to roughly -14 for x. My algorithm is correct for the first two iterations, but then it goes beyond -14 and finally ends up at roughly -36.4 after all iterations have completed.
If anyone can give me some help as to why the algorithm does not work properly, I would greatly appreciate it!
I think
x = u - f(r)/fprime(r);
should be
x = u - f(u)/fprime(u);
If you always use r, you're always decrementing x by the same value.
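For completeness, a sketch of the function with that fix applied:

function newton(r)
syms x;
y = exp(x) - 1.5 - atan(x);
yprime = diff(y, x);
f = matlabFunction(y);
fprime = matlabFunction(yprime);
x = r;
xvals = x
for i = 1:8
    x = x - f(x)/fprime(x);  % evaluate f and f' at the current iterate
    xvals = x
end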
syms x
y = exp(x) - 1.5 - atan(x);  % the function whose root we want
% Rearranging y = 0 into the fixed-point form x = tan(exp(x) - 1.5),
% iterated in the for loop below:
x = -1;  % starting guess (this overwrites the symbolic x)
n = 10;
v = 0;
for i = 2:n
    x(i) = tan(exp(x(i-1)) - 1.5);
    v = [v; x(i)];  % collect the solution vector for each i
end
v