Modelica: assign array return value to scalars - algorithm

The following model tries to assign the vector return value of a function call to a vector of scalars. It checks and works if the assignment is inside an equation section, but it fails inside an algorithm section. Is that a bug in the Modelica tool I am using, or am I doing something wrong? And how can I write it without introducing the intermediate variable x[2]?
model returnVector
  Real x1;
  Real x2;
  Real x[2];
  Real A[2,2] = [1, 2; 3, 4];
  Real b[2] = {8, 7};
algorithm
  x = Modelica.Math.Matrices.solve(A, b);
  {x1, x2} = Modelica.Math.Matrices.solve(A, b);
end returnVector;

You are doing something wrong :-)
In an equation section the left-hand side can be any expression, so you could even write Modelica.Math.Matrices.solve(A,b) = {x1, x2};.
In an algorithm section the left-hand side must be a component reference (section 11.2 of Modelica 3.4; https://modelica.org/documents/ModelicaSpec34.pdf), and the right-hand side is evaluated and then assigned to the left-hand-side variable.
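For illustration, here is a minimal sketch of the equation-section variant that the question says already works (returnVectorEq is just an illustrative name, using the same A and b as above):
model returnVectorEq
  Real x1;
  Real x2;
  Real A[2,2] = [1, 2; 3, 4];
  Real b[2] = {8, 7};
equation
  {x1, x2} = Modelica.Math.Matrices.solve(A, b);
end returnVectorEq;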

Under 'algorithm', you must use ':=' to represent assignment in Modelica, instead of '=' as in C/C++.
So the code could be rewritten as follows:
model returnVector
  Real x1;
  Real x2;
  Real x[2];
  Real A[2,2] = [1, 2; 3, 4];
  Real b[2] = {8, 7};
algorithm
  x := Modelica.Math.Matrices.solve(A, b);
  x1 := x[1];
  x2 := x[2];
  //{x1, x2} = Modelica.Math.Matrices.solve(A, b);
end returnVector;

Simply write the left-hand side within round parentheses:
({x1, x2}) := Modelica.Math.Matrices.solve(A, b);
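For context, a minimal sketch of the model with that form in place (returnVectorParen is just an illustrative name; acceptance of the parenthesized left-hand side may depend on the tool):
model returnVectorParen
  Real x1;
  Real x2;
  Real A[2,2] = [1, 2; 3, 4];
  Real b[2] = {8, 7};
algorithm
  ({x1, x2}) := Modelica.Math.Matrices.solve(A, b);
end returnVectorParen;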

Related

Julia JuMP array variable constraint

I am trying to model a non-linear problem involving vector rotation using JuMP in Julia. I need a constraint, which looks something like v[1:3] == rotate(v) If I write it like this, it does not work, since "Nonlinear expressions may contain only scalar expressions". How can I work around this?
I could say something like v[1] == rotate(v)[1] and same for v[2] and v[3], but then I would have to compute rotate(v) three times as often. I could also try to split the rotate function into three functions which compute one element each, but the actual constraint is a bit more complicated than a simple rotation, so this could prove to be tricky.
Are there any other ways to do this? Maybe something like an auxiliary variable which can be computed as a vector, so that the constraint only compares the elements of the two vectors (essentially the first approach, but without computing the function three times)?
See here for a suggested work-around:
https://discourse.julialang.org/t/how-to-set-up-nlconstraints-from-multi-output-user-defined-function/42309/5?u=odow
using JuMP
using Ipopt

# Function with two outputs that we want to use in separate nonlinear constraints
function myfun(x)
    return sum(xi for xi in x), sum(xi^2 for xi in x)
end

# Expose each output of myfun as its own scalar function, caching the full
# result so myfun is evaluated only once per point
function memoized()
    cache = Dict{UInt, Any}()
    fi = (i, x) -> begin
        h = hash((x, typeof(x)))
        if !haskey(cache, h)
            cache[h] = myfun(x)
        end
        return cache[h][i]::Real
    end
    return (x...) -> fi(1, x), (x...) -> fi(2, x)
end

model = Model(Ipopt.Optimizer)
f1, f2 = memoized()
register(model, :f1, 3, f1; autodiff = true)
register(model, :f2, 3, f2; autodiff = true)
@variable(model, x[1:3] >= 0, start = 0.1)
@NLconstraint(model, f1(x...) <= 2)
@NLconstraint(model, f2(x...) <= 1)
@objective(model, Max, sum(x))
optimize!(model)
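Once optimize! returns, the solution can be queried in the usual JuMP way. A minimal usage sketch, assuming the model above solved successfully:
value.(x)               # optimal values of x[1:3]
objective_value(model)  # optimal objective value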

Integral changes variable subscripts

I have been away from Mathematica for quite a while and am trying to fix some old notebooks from v4 that are no longer working under v11. I'm also a tad rusty.
I am attempting to use functional minimization to fit a polynomial of variable degree to an arbitrary function (F) given a starting guess (ao) and domain of interest (d). Note that while F is arbitrary, its nature is such that the integral of the product of F and a polynomial (or F^2) can always be evaluated algebraically.
For the sake of example, I'll use the following inputs:
ao = { 1, 2, 3, 4 }
d = { -1, 1 }
F = Sin[x]
To do so, I create an array of 'indexed' variables
polyCoeff = Array[a, Length[ao], 0]
Result: polyCoeff = {a[0], a[1], a[2], a[3]}
I then create the polynomial itself using the following
genPoly[{},x_] := 0
genPoly[a_List,x_] := First[a] + x genPoly[Rest[a],x]
poly = genPoly[polyCoeff,x]
Result: poly = a[0] + x (a[1] + x (a[2] + x a[3]))
I then define my objective function as the integral of the squared difference (the error) between this poly and the function I am attempting to fit:
Q = Integrate[(poly - F)^2, {x, d[[1]], d[[2]]}]
Result: Q = 0.545351 - 2. a[0.]^2 + 0.66667 a[1.]^2 + .....
And this is where things break down. poly looks just as I expected: a polynomial in x with coefficients that look like a[0], a[1], a[2], ... But Q is not exactly what I expected. I expected and got a new polynomial, but now the coefficients contained a[0.], a[1.], a[2.], ...
The next step is to create the initial guess for FindMinimum
init = Transpose[{polyCoeff,ao}]
Result: {{a[0], 1}, {a[1], 2}, {a[2], 3}, {a[3], 4}}
This looks fine.
But when I make the call to FindMinimum, I get an error because the coefficients passed in the objective (a[0.],a[1.],...) do not match those passed in the initial guess (a[0],a[1],...).
S = FindMinimum[Q,init]
So I think my question is: how do I keep Integrate from changing the arguments to my coefficients? But I am open to other approaches as well. Keep in mind, though, that this is "legacy" work that I really don't want to have to completely revamp.
Thanks much for any/all help.

Performance of Wolfram Mathematica InverseFunction

given the constants
mu = 20.82;
ex = 1.25;
kg1 = 1202.76;
kp = 76.58;
kvb = 126.92;
I need to invert the function
f[Vpx_,Vgx_] := Vpx Log[1 + Exp[kp (1/mu + Vgx/(Vpx s[Vpx]))]];
where
s[x_] := 1 + kvb/(2 x^2);
so that I get a function of two variables, the second one being Vgx.
I tried with
t = InverseFunction[Function[{Vpx, Vgx}, f[Vpx, Vgx]], 1, 2];
and tested it with t[451, -4].
It takes so much time that every time I try it I stop the evaluation.
On the other hand, working with only one variable, everything works:
Vgx = -4;
t = InverseFunction[Function[{Vpx}, f[Vpx,Vgx]]];
t[451]
Is it my fault? Is the method inappropriate? Or is it a limitation of Wolfram Mathematica?
Thanks
Teodoro Marinucci
P.S. For everyone interested it's a problem related to the Norman Koren model of triodes.
As I said in my comment, my guess is that InverseFunction first tries to solve symbolically for the inverse, e.g. Solve[Function[{Vpx, Vgx}, f[Vpx, Vgx]][X, #2] == #1, X], which takes a very long time (I didn't let it finish). However, I came across a system option that seems to turn this off and produce a function:
With[{opts = SystemOptions["ExtendedInverseFunction"]},
Internal`WithLocalSettings[
SetSystemOptions["ExtendedInverseFunction" -> False],
t = InverseFunction[Function[{Vpx, Vgx}, f[Vpx, Vgx]], 1, 2],
SetSystemOptions[opts]
]];
t[451, -4]
(* 199.762 *)
A couple of notes:
According to the documentation, InverseFunction with exact input should produce an exact answer. Here some of the parameters are approximate (floating-point) real numbers, so the answer above is a numerical approximation.
The actual definition of t depends on f. If f changes, then a side effect will be that t changes. If that is not something you explicitly want, then it is probably better to define t this way:
t = InverseFunction[Function[{Vpx, Vgx}, Evaluate@f[Vpx, Vgx]], 1, 2]
As my late Theoretical Physics professor said, "a simple and beautiful solution is likely to be true".
Here is the piece of code that works:
mu = 20.82; ex = 1.25; kg1 = 1202.76; kp = 76.58; kvb = 126.92;
Ip[Vpx_, Vgx_] = Power[Vpx/kp Log[1 + Exp[kp (1/mu + Vgx/Sqrt[kvb + Vpx^2])]], ex] 2/kg1;
Vp[y_, z_] := x /. FindRoot[Ip[x, z] == y, {x, 80}]
The "real" amplification factor of a tube is the partial derivative of Ip[Vpx, Vgx] by respect to Vgx, with give Vpx. I would be happier if could use the Derivative, but I'm having errors.
I'll try to understand why, but for the moment the definition
\[CapitalDelta]x = 10^-6;
\[Micro][Ipx_, Vgx_] := Abs[Vp[Ipx, Vgx + \[CapitalDelta]x] - Vp[Ipx, Vgx]]/\[CapitalDelta]x
works well for me.
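If you do want a derivative instead of the finite difference, one option (a sketch only, not from the original answer; muSym is a hypothetical name) is to use the implicit function theorem: along a curve of constant Ip, dVp/dVg = -(dIp/dVg)/(dIp/dVp), so the amplification factor can be computed from the symbolic derivatives of Ip:
(* a minimal sketch, assuming the Ip and Vp definitions above and that p, g have no global values *)
muSym[Ipx_?NumericQ, Vgx_?NumericQ] :=
  Module[{vp = Vp[Ipx, Vgx], dIdVg, dIdVp},
    dIdVg = D[Ip[p, g], g] /. {p -> vp, g -> Vgx}; (* dIp/dVg at the operating point *)
    dIdVp = D[Ip[p, g], p] /. {p -> vp, g -> Vgx}; (* dIp/dVp at the operating point *)
    Abs[dIdVg/dIdVp]                               (* |dVp/dVg| at constant Ip *)
  ]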
Thanks, the starting point of FindRoot really was the problem.

How to make a graph of a three-branches function in matlab

How can I plot this function in Matlab so that all of its branches are depicted in the same graph (plot or subplot)?
t0 = 0.15
x(t) =  1,  if 0 <= t < t0/2
       -2,  if t0/2 <= t <= (3/2)*t0
        0,  otherwise
The real question you should be asking is "How to define a function that has branches?", since plotting is easy once the function is defined.
Here's a way using anonymous functions:
x_t = @(t,t0) 1*(0<=t & t<t0/2) - 2*(t0/2<=t & t<=(3/2)*t0); %// the 1* is redundant, I only left it there for clarity
Note that the element-wise & operator is used (rather than the scalar &&) because it operates on arrays, not just scalars.
Here's a way using heaviside (aka step) functions (not exactly what you wanted, due to its behavior on the transition point, but worth mentioning):
x_t = @(t,t0) 1*heaviside(t) + (-1-2)*heaviside(t-t0/2) + 2*heaviside(t-t0*3/2);
Note that in this case, you need to "negate" the previous heaviside once you leave its area of validity.
After defining this function, simply evaluate and plot.
t0 = 0.15;
tt = -0.1:0.01:0.5;
xx = x_t(tt,t0);
plot(tt,xx); %// Or scatter(), or any other plotting function
BTW, t0 does not have to be an input to x_t - if it is defined before x_t, the value of t0 that exists in the workspace at that time will be captured and used, but this also means that if t0 changes later, this will not affect x_t.
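For example, a minimal sketch of that capture behaviour (x_t_fixed is just an illustrative name, not from the answer above):
t0 = 0.15;
x_t_fixed = @(t) 1*(0<=t & t<t0/2) - 2*(t0/2<=t & t<=(3/2)*t0); %// t0 is captured here
t0 = 0.3;                %// changing t0 afterwards...
x_t_fixed(0.2)           %// ...still uses the captured t0 = 0.15, so this returns -2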
I'm not sure what you want, but would this be it?
clc
close all
clear
t0 = 0.15;
t = 0:0.01:0.15;
x = zeros(size(t));
x(0 <= t & t < (t0/2)) = 1;
x((t0/2) <= t & t <= (3/2)*t0) = -2;
figure, plot(t, x, 'rd')
which gives the plot shown below (figure omitted).
Everything depends on the final t; for example, if the time vector ends at 0.3 instead, the trailing zero branch also appears.
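For example, a minimal sketch of the same code with the time vector extended to 0.3, directly adapted from the snippet above:
t0 = 0.15;
t = 0:0.01:0.3;                        %// extend past (3/2)*t0 = 0.225
x = zeros(size(t));
x(0 <= t & t < (t0/2)) = 1;
x((t0/2) <= t & t <= (3/2)*t0) = -2;   %// everything after 0.225 stays 0
figure, plot(t, x, 'rd')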

How to apply HoldForm to a list of variables without the variables in the list being evaluated first?

I am writing a debug function, which prints a variable name, and its value. I call this debug function with a list of variables from anywhere in the program. So the idea is for it to work like this:
debug[var_List] := Module[{values = ReleaseHold[var], i},
  For[i = 1, i <= Length[values], i++,
    Print[var[[i]], " = ", values[[i]]]
  ]
];
Now I use the above, like this
x = 3; y = 5;
debug[{HoldForm[x], HoldForm[y]}]
and I see in the console the following
x = 3
y = 5
But I have a large program and a long list of variables at different places I want to debug, and I do not want to wrap HoldForm around each variable to build the list I pass to the debug[] function. It would be much easier to Map it if possible, with less typing each time. But this does not work:
debug[ Map[HoldForm,{x,y}]]
The reason is that {x,y} was evaluated before HoldForm got hold of it. So I end up with a list that has the values in it, like this:
3 = 3
5 = 5
I could not find a way to Map HoldForm without the list being evaluated.
The best I could find is this:
debug[HoldForm[Defer[{x, y}]]]
which gives the following output from the above debug[] function:
{x,y} = {3,5}
Since Defer[{x, y}] has length 1, and it is just one thing, I could not break it up to make a two-column list as in the above example.
It would be better if I could get output of the form
x = 3
y = 5
since that makes it easier to match each variable with its value, as I have many variables.
So the question is: does anyone know of a programming trick to convert HoldForm[{x, y}] to {HoldForm[x], HoldForm[y]}?
Thanks
Just use Thread:
Thread[HoldForm[{x, y}]]
alternatively,
Map[HoldForm, Unevaluated[{x, y}]]
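A quick check with the debug function from the question (a minimal usage sketch):
x = 3; y = 5;
debug[Thread[HoldForm[{x, y}]]]
(* prints:
x = 3
y = 5
*)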
Here is a longer alternative demonstrating use of Hold, found in Roman Maeder's Programming In Mathematica (3rd ed.), page 137:
e1 = Hold[{x, y}];
e2 = MapAt[Hold, e1, {1, 0}];
e3 = Map[HoldForm, e2, {2}];
e4 = MapAt[ReleaseHold, First[e3], {0}];
debug[e4]
x=3
y=5
I wrote a PrintIt function using attributes that does what you want. I posted it here: https://stackoverflow.com/a/8270643/884752. I repeat the code:
SetAttributes[System`ShowIt, HoldAll];
System`ShowIt[code__] := System`ShowIt[{code}];
System`ShowIt[code_] :=
With[{y = code},
Print[Defer[code = y]];
y
];
SetAttributes[System`PrintIt, {HoldAll,Listable}];
System`PrintIt[expr__]:=System`PrintIt[{expr}];
System`PrintIt[expr_] := System`ShowIt[expr];
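A minimal usage sketch, assuming the definitions above have been evaluated:
x = 3; y = 5;
PrintIt[x, y]
(* prints:
x = 3
y = 5
*)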
