Dymola DAE solver - ode

I am trying to figure out the steps Dymola takes to solve Modelica code. From reading some reference papers and books, I understand that Dymola:
1. Translates the Modelica code into a hybrid DAE (flattening).
2. Manipulates the DAE to convert it into ODE form (index reduction and other techniques).
3. Uses the DASSL algorithm.
My question is: why does Dymola need to use DASSL to solve the ODE? Shouldn't a general ODE solver such as BDF or Runge-Kutta be enough?
Thank you in advance for your attention.
All the Best,
Alessandro

Dymola supports several integration algorithms, including Runge-Kutta methods, but DASSL is a good default. Also note that some problems cannot be reduced to ODE form.

Index reduction "throws away" constraint information by replacing the original equations with their derivatives.
In theory this does not matter much, since the derived equations have the lost identities as conserved quantities. In practice, however, numerical integration introduces drift that can push the state off the manifold of consistent states of the DAE. Of course, the original equations can be remembered and the additional "hidden" constraints reconstructed, but one has to be careful when projecting the state back onto the manifold of consistent states not to destroy the consistency and order of the integration method.
This can be ameliorated by not reducing all the way to an index-0 ODE but stopping at the index-1 DAE stage, so that the original equations are differentiated less. The resulting numerical integration then has essentially the complexity of an implicit Runge-Kutta or implicit multi-step method.
For the index-1 system one would then need to use a DAE solver like DASSL.
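The drift argument above can be seen numerically on the classic Cartesian pendulum, whose index-3 DAE reduces to an ODE by differentiating the length constraint twice. A minimal Python sketch (scipy stands in for a Modelica tool's integrator; tolerances and the time horizon are illustrative):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Cartesian pendulum: an index-3 DAE with constraint x^2 + y^2 = L^2.
# Index reduction replaces the constraint by its second derivative and
# solves for the multiplier lam algebraically (unit mass assumed).
L, g = 1.0, 9.81

def rhs(t, s):
    x, y, vx, vy = s
    lam = (vx**2 + vy**2 - g * y) / L**2   # from the twice-differentiated constraint
    return [vx, vy, -lam * x, -lam * y - g]

s0 = [L, 0.0, 0.0, 0.0]                    # consistent initial state
sol = solve_ivp(rhs, (0.0, 20.0), s0, rtol=1e-6, atol=1e-9)

# The exact solution keeps x^2 + y^2 - L^2 = 0; the numerical one drifts.
drift = np.abs(sol.y[0]**2 + sol.y[1]**2 - L**2)
print(f"max constraint drift over [0, 20]: {drift.max():.2e}")
```

The drift is small at these tolerances but nonzero and growing, which is exactly the lost-invariant effect described above.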

Continuous action-state-space and tiling

After getting used to the Q-learning algorithm in discrete action-state spaces, I would like to extend it to continuous spaces. To do this I read the chapter "On-Policy Control with Approximation" of Sutton's introduction. There, the use of differentiable functions such as a linear function or an ANN is recommended to handle continuous action-state spaces. Nevertheless, Sutton then describes the tiling method, which maps the continuous variables onto a discrete representation. Is this always necessary?
Trying to understand these methods, I tried to implement the Hill Climbing Car example from the book without the tiling method, using a linear basis function for q. As my state space is two-dimensional and my action is one-dimensional, I used a three-dimensional weight vector w in this equation:

q̂(s, a, w) = w_0 s_0 + w_1 s_1 + w_2 a

When I now try to choose the action that maximizes this output, the obvious answer is a = 1 whenever w_2 > 0. Therefore the weight slowly converges to zero from the positive side and the agent does not learn anything useful. Since Sutton is able to solve the problem using tiling, I am wondering whether my problem is caused by the absence of the tiling method or whether I am doing something else wrong.
So: Is the tiling always necessary?
Regarding your main question about tiling: no, tiling is not always necessary.
As you tried, it's a good idea to implement an easy example such as the Hill Climbing Car in order to fully understand the concepts. Here, however, you are misunderstanding something important. When the book talks about linear methods, it is referring to methods that are linear in the parameters, which means you can extract a set of (non-linear) features and combine them linearly. This kind of approximator can represent functions much more complex than a standard linear regression.
The parametrization you have proposed is not able to represent a non-linear Q-function, and the Q-function you want to learn in the Hill Climbing problem is strongly non-linear in the state. You will need something more powerful than a function that is linear in (s, a). An easy solution for your problem could be a Radial Basis Function (RBF) network. In this case you use a set of features (basis functions, for example Gaussians) to map your state space:

φ_i(s) = exp(-||s - c_i||² / (2σ²))

and combine them linearly, q̂(s, a, w) = Σ_i w_i φ_i(s).
Additionally, if your action space is discrete and small, the easiest solution is to maintain an independent RBF network for each action. For selecting the action, simply compute the Q value for each action and select the one with higher value. In this way you avoid the (complex) optimization problem of selecting the best action in a continuous function.
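As a concrete sketch of the per-action RBF approach: the grid of centers, the widths, and the state bounds below are illustrative choices (roughly the ranges of the car task), not values from the book.

```python
import numpy as np

# Per-action RBF Q-approximator for a 2-D state space (position, velocity).
centers = np.array([[p, v]
                    for p in np.linspace(-1.2, 0.6, 5)
                    for v in np.linspace(-0.07, 0.07, 5)])
sigma = np.array([0.45, 0.035])             # per-dimension RBF width

def features(state):
    # Gaussian basis functions phi_i(s) = exp(-0.5 * ||(s - c_i) / sigma||^2)
    d = (state - centers) / sigma
    return np.exp(-0.5 * (d**2).sum(axis=1))

n_actions = 3
w = np.zeros((n_actions, len(centers)))     # one weight vector per action

def q_values(state):
    return w @ features(state)              # Q(s, a) for every discrete action

state = np.array([-0.5, 0.0])
a = int(np.argmax(q_values(state)))         # greedy action selection
print(q_values(state), a)
```

With one weight vector per action, greedy selection is just an argmax over a small array, avoiding any continuous optimization over actions.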
You can find a more detailed explanation in the book by Busoniu et al., Reinforcement Learning and Dynamic Programming Using Function Approximators, pages 49-51. It is available for free here.

Why should we compute the image mean when we train CNNs?

When I use Caffe for image classification, it often computes the image mean. Why is that?
Someone said it can improve the accuracy, but I don't understand why that should be the case.
Refer to the image whitening technique in deep learning. It has been shown to improve accuracy, although it is not widely used.
To understand why it helps, consider the idea of normalizing data before applying a machine learning method, which keeps the features in the same range. There is also another method now commonly used in CNNs: batch normalization.
Neural networks (including CNNs) are models with thousands of parameters which we try to optimize with gradient descent. Those models are able to fit a lot of different functions by having a non-linearity φ at their nodes. Without a non-linear activation function, the network collapses to a linear function in total. This means we need the non-linearity for most interesting problems.
Common choices for φ are the logistic function, tanh, or ReLU. All of them have their most interesting region around 0: this is where the gradient is big enough to learn quickly, or, in the case of ReLU, where there is any non-linearity at all. Weight initialization schemes like Glorot initialization try to make the network start at a good point for the optimization. Other techniques like batch normalization also keep the mean of a node's input around 0.
So you compute (and subtract) the mean of the image so that the first computing nodes get data which "behaves well". It has a mean of 0 and thus the intuition is that this helps the optimization process.
In theory, a network can learn to "subtract" the mean by itself, so if you train long enough this should not matter too much. In practice, however, depending on the activation function, "long enough" can be a very long time.
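As a minimal illustration of the preprocessing step: random arrays stand in for a real image set below; Caffe's precomputed mean file plays the same role (per pixel or per channel).

```python
import numpy as np

# Per-channel mean subtraction so the first layer sees inputs with mean ~0.
rng = np.random.default_rng(0)
train = rng.integers(0, 256, size=(100, 32, 32, 3)).astype(np.float32)

channel_mean = train.mean(axis=(0, 1, 2))   # shape (3,), one mean per channel

def preprocess(batch):
    # center the inputs around zero before feeding them to the network
    return batch - channel_mean

centered = preprocess(train)
print(centered.mean(axis=(0, 1, 2)))        # ~[0, 0, 0]
```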

Will non-linear regression algorithms perform better if trained with normally distributed target values?

After finding out about the many transformations that can be applied to the target values (the y column) of a data set, such as Box-Cox transformations, I learned that linear regression models need to be trained with normally distributed target values in order to be efficient (https://stats.stackexchange.com/questions/298/in-linear-regression-when-is-it-appropriate-to-use-the-log-of-an-independent-va).
I'd like to know whether the same applies to non-linear regression algorithms. So far I've seen people on Kaggle use a log transformation to mitigate heteroskedasticity when using xgboost, but they never mention whether it is also done to obtain normally distributed target values.
I've tried to do some research, and I found in Andrew Ng's lecture notes (http://cs229.stanford.edu/notes/cs229-notes1.pdf, page 11) that the least-squares cost function, used by many linear and non-linear algorithms, can be derived by assuming a normal distribution of the error. I believe that if the error should be normally distributed, then the target values should be as well.
If this is true then all the regression algorithms using least squares cost function should work better with normally distributed target values.
Since xgboost uses a least-squares cost function for node splitting (http://cilvr.cs.nyu.edu/diglib/lsml/lecture03-trees-boosting.pdf, slide 13), maybe this algorithm would work better if I transform the target values using a Box-Cox transformation for training the model and then apply the inverse Box-Cox transformation to the output in order to get the predicted values.
Will this theoretically speaking give better results?
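To make the pipeline I have in mind concrete, here is a sketch with a plain linear fit standing in for xgboost and synthetic, right-skewed data:

```python
import numpy as np
from scipy.stats import boxcox
from scipy.special import inv_boxcox

# Transform the target, fit on the transformed scale, invert predictions.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(200, 1))
y = np.exp(1.0 + 2.0 * X[:, 0] + rng.normal(0.0, 0.3, 200))  # positive, skewed

y_t, lam = boxcox(y)                  # lambda chosen by maximum likelihood

coef = np.polyfit(X[:, 0], y_t, 1)    # fit on the transformed target
pred_t = np.polyval(coef, X[:, 0])

pred = inv_boxcox(pred_t, lam)        # back to the original scale
print(lam, pred[:3])
```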
Your conjecture, "I believe if the error should be normally distributed then the target values should be as well", is simply wrong: normality of the errors does not imply normality of the target, so the question rests on a false premise.
There are no assumptions on the target variable to be Normal at all.
Getting the target variable transformed does not mean the errors are normally distributed. In fact, that may ruin normality.
I have no idea what this is supposed to mean: "linear regression models need to be trained with normally distributed target values in order to be efficient." Efficient in what way?
Linear regression models are global models. They simply fit a surface to the overall data. The operations are matrix operations, so the time to "train" the model depends only on the size of data. The distribution of the target has nothing to do with model building performance. And, it has nothing to do with model scoring performance either.
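To illustrate the point about matrix operations, here is the entire "training" as one least-squares solve, with a deliberately non-normal target (synthetic data):

```python
import numpy as np

# Fitting OLS is a single linear-algebra solve; its cost depends on the
# data's size, not on the target's distribution.
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(1000), rng.normal(size=(1000, 3))])
y = rng.exponential(size=1000)        # deliberately non-normal target

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta.shape)                     # (4,) -- one coefficient per column
```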
Because targets are generally not normally distributed, I would certainly hope that such a distribution is not required for a machine learning algorithm to work effectively.

A good parameter optimization algorithm for a limited number of points with variance

I'm trying to meta-optimize an algorithm which has almost a dozen constants. I guess some form of genetic algorithm should be used. However, the algorithm itself is quite heavy and probabilistic by nature (a version of ant colony optimization), so calculating the fitness for a given set of parameters is quite slow and the results show a lot of variance. Even the order of magnitude of some of the parameters is not exactly clear, so the search distribution for some components will likely need to be logarithmic.
Does anyone have ideas about suitable algorithms for this problem? It would need to converge with a limited number of measurement points and also be able to handle randomness in the measured fitness. Also, the easier it is to implement in Java the better, of course. :)
If you can express your model algebraically (or as differential equations), consider trying a derivative-based optimization method. These have the theoretical properties you desire and are much more computationally efficient than black-box/derivative-free methods. If you have a MATLAB license, give fmincon a try; note that fmincon will work much better if you supply derivative information. Other modeling environments include Pyomo, CasADi, and Julia/JuMP, which automatically calculate derivatives and interface with powerful optimization solvers.
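For readers without a MATLAB license, a rough Python analogue of the fmincon suggestion is scipy.optimize.minimize with a hand-coded gradient; the Rosenbrock function below stands in for an algebraic model of the tuning problem.

```python
import numpy as np
from scipy.optimize import minimize

# Derivative-based local solver on a smooth surrogate objective.
def rosen(x):
    return (1.0 - x[0])**2 + 100.0 * (x[1] - x[0]**2)**2

def rosen_grad(x):
    return np.array([
        -2.0 * (1.0 - x[0]) - 400.0 * x[0] * (x[1] - x[0]**2),
        200.0 * (x[1] - x[0]**2),
    ])

# Supplying the gradient (jac=...) helps, just as it does for fmincon.
res = minimize(rosen, x0=[-1.5, 2.0], jac=rosen_grad,
               method="L-BFGS-B", bounds=[(-2.0, 2.0), (-2.0, 2.0)])
print(res.x)   # converges to the minimum at [1, 1]
```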

Automatic probability densities

I have found automatic differentiation to be extremely useful when writing mathematical software. I now have to work with random variables and functions of the random variables, and it seems to me that an approach similar to automatic differentiation could be used for this, too.
The idea is to start with a basic random vector with a given multivariate distribution and then work with the implied probability distributions of functions of its components. One would define operators that automatically combine two probability distributions appropriately when you add, multiply, or divide two random variables, and that transform a distribution appropriately when you apply a scalar function such as exponentiation. You could then compose these to build any function you need of the original random variables and automatically have the corresponding probability distribution available.
Does this sound feasible? If not, why not? If so, and since it's not a particularly original thought, could someone point me to an existing implementation, preferably in C?
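A crude Monte-Carlo sketch of the kind of operator overloading I have in mind (samples stand in for closed-form densities, so this is weaker than what I am asking for):

```python
import numpy as np

# A RandomVariable type that overloads arithmetic so distributions of
# derived quantities come "for free", much as dual numbers carry
# derivatives along in automatic differentiation.
rng = np.random.default_rng(0)
N = 100_000

class RandomVariable:
    def __init__(self, samples):
        self.samples = np.asarray(samples)

    def _lift(self, other):
        # accept either another RandomVariable or a plain number
        return other.samples if isinstance(other, RandomVariable) else other

    def __add__(self, other):
        return RandomVariable(self.samples + self._lift(other))

    def __mul__(self, other):
        return RandomVariable(self.samples * self._lift(other))

    def exp(self):
        return RandomVariable(np.exp(self.samples))

x = RandomVariable(rng.normal(0.0, 1.0, N))   # x ~ N(0, 1)
y = RandomVariable(rng.normal(2.0, 0.5, N))   # y ~ N(2, 0.5)
z = (x + y) * 2.0                             # 2(x + y) ~ N(4, sqrt(5))

print(z.samples.mean(), z.samples.std())      # close to 4 and 2.236
```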
There has been a lot of work on probabilistic programming. One issue is that as your distribution gets more complicated you start needing more complex techniques to sample from it.
There are a number of ways this is done. Probabilistic graphical models give one vocabulary for expressing these models, and you can then sample from them using various Metropolis-Hastings-style methods. Here is a crash course.
Another approach is probabilistic programming done directly through an embedded domain-specific language; Oleg Kiselyov's HANSEI is an example. Once the program is available, the implementation can inspect the tree of decisions and expand it by a form of importance sampling to gain the most information possible at each step.
You may also want to read "Nonstandard Interpretations of Probabilistic Programs for Efficient Inference" by Wingate et al., which describes one way to use extra information about the derivative of your distribution to accelerate Metropolis-Hastings-style sampling techniques. I personally use automatic differentiation to calculate those derivatives, which brings the topic back to automatic differentiation. ;)
