Random function in Modelica

Hello, I need a function like rand() in C. You might say that I can just call a C function, but the effect is not the same in the Visual C++ tool. So I need your help.
Thanks.

See the Noise library:
https://github.com/DLR-SR/Noise
It has some models and functions to generate random numbers.

If you are using Dymola, you can use the function rand():
model rand_model
  Real a(start = rand());
  Real b(start = rand());
equation
  when sample(1, 1) then
    a = rand();
    b = rand();
  end when;
end rand_model;
The function is not documented in the Dymola user manual and it is not part of the Modelica standard. The output seems to be an integer between 0 and 32767, and the seed seems to be constant.
Perhaps the implementation is given in the moutil.c file which is shipped with Dymola, but I'm not sure.
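An output range of 0 to 32767 with a fixed seed is exactly how C's rand() behaves with RAND_MAX = 32767. As a rough illustration only, here is a minimal Python sketch of one classic rand() implementation (the LCG constants are the ones commonly attributed to the Microsoft C runtime; that Dymola's rand() actually works this way is an assumption):
# Minimal sketch of a C-style rand() as a 32-bit LCG (assumed, not
# necessarily Dymola's actual implementation).
_state = 1  # corresponds to srand(1), the default seed in C

def c_rand():
    # one LCG step; constants commonly attributed to the Microsoft C runtime
    global _state
    _state = (_state * 214013 + 2531011) & 0xFFFFFFFF
    return (_state >> 16) & 0x7FFF  # top 15 bits give 0..32767

# with a fixed seed, the sequence is identical on every run
print([c_rand() for _ in range(3)])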


IDL Integration

I'm looking to integrate a function I am building, but the function would change each iteration based on a given input. For instance:
y=4e^(mx/4)
I would want to integrate with respect to x with a lower and upper bound, but the value of m would change. I know all my values of m.
Can I work with this? My initial assumption would be to use QROMB but that seems limited and unable to handle my issue.
QROMB (and other integrators) want a function of one variable, so you have to get the m in there through the back door. One way is with a common block:
function integrand, x
  common int_common, int_m
  return, 4 * exp(int_m * x / 4)
end

function integrator, m, xlow, xhigh
  common int_common, int_m
  int_m = m
  return, qromb('integrand', xlow, xhigh)
end
integrator(m,xlow,xhigh) will return the integral you want.
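For comparison, integrators in other environments often let you pass m directly. Here is a short Python sketch of the same integral using scipy.integrate.quad, which forwards extra arguments to the integrand via args, so no common block is needed (the values of m below are made-up examples):
import numpy as np
from scipy.integrate import quad

def integrand(x, m):
    # the same integrand, with m as an explicit argument
    return 4.0 * np.exp(m * x / 4.0)

def integrator(m, xlow, xhigh):
    # quad passes args=(m,) through to the integrand
    value, abserr = quad(integrand, xlow, xhigh, args=(m,))
    return value

for m in [0.5, 1.0, 2.0]:  # loop over the known values of m
    print(m, integrator(m, 0.0, 1.0))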

How can I insert this function in Simulink? (Rotational 2 DOF)

I want to insert the function K below into my Simulink model, to calculate the dynamic transmission error (DTE).
The problem is that the function K depends on Θp. I don't know if there is any way to do that in Simulink.
I would appreciate any suggestions.
Here you can find the Simulink model:
Simulink Model (NEW)
Note: this answer has been rewritten completely.
You can't use a gain block. Any parameter like "Gain" is only evaluated once, at the start of the simulation. If you have something which changes over time, like your Θp, you have to use a signal.
In your case, with a one-line MATLAB expression to evaluate, the easiest way is a Function Block (User-Defined Functions -> Fcn), not a MATLAB Function as I originally suggested. Replace the MATLAB Function you already have in your model with a Function Block and use the code:
a0 + sum( af .* cos(n * zp * u) + bf .* sin(n * zp * u) )
The nice advantage is that all workspace variables are already initialized.
I applied some modifications to the formula, using element-wise multiplication where I expect it should be used. You can run the same code line in MATLAB to verify it really does what you expect.
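If you want to sanity-check the expression outside MATLAB as well, here is a rough Python/NumPy sketch of the same Fourier series; all the values and shapes of a0, af, bf, n, and zp below are placeholders, not the ones from the actual model:
import numpy as np

a0 = 0.1                                           # mean value (placeholder)
n  = np.arange(1, 6)                               # harmonic indices (assumed 1..5)
af = np.array([0.05, 0.02, 0.01, 0.005, 0.002])    # cosine coefficients (placeholder)
bf = np.array([0.04, 0.015, 0.008, 0.004, 0.001])  # sine coefficients (placeholder)
zp = 23                                            # e.g. pinion tooth count (placeholder)

def K(u):
    # element-wise products summed over the harmonics, as in the Fcn block
    return a0 + np.sum(af * np.cos(n * zp * u) + bf * np.sin(n * zp * u))

print(K(0.0), K(0.01))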

Can we create pure functions in Fortran which generate random numbers?

My goal is to write a pure function using random numbers which can be used in a DO CONCURRENT structure. The compiler does not seem to permit this.
mwe.f95:8:30:
call init_seed ( )
1
Error: Subroutine call to ‘init_seed’ at (1) is not PURE
mwe.f95:9:36:
call random_number ( y )
1
Error: Subroutine call to intrinsic ‘random_number’ at (1) is not PURE
mwe.f95:16:8:
use myFunction
1
Fatal Error: Can't open module file ‘myfunction.mod’ for reading at (1): No such file or directory
compilation terminated.
Why is this so and is there a way to generate random numbers in a pure routine?
The MWE follows. Compilation command is gfortran mwe.f95. Compiler version is GCC 5.1.0.
module myFunction
  implicit none
contains
  pure real function action ( ) result ( new_number )
    real :: y
    call init_seed ( )
    call random_number ( y )
    new_number = y**2
  end function
end module myFunction

program mwe
  use myFunction
  implicit none
  real :: x
  x = action ( )
end program mwe
This is completely against the concept of pureness. True pure functions, as found in true functional languages, should always return the same result for the same input. Fortran pure functions can read module variables and are therefore somewhat more complex.
It is not even a good idea for any function, not just a pure function, to return pseudo-random numbers. When the same function is called several times in one expression, the Fortran compiler is permitted to evaluate it just once. That is even more likely, or better justified, when the function is pure.
I would suggest just using regular DO loops and calling random_number or another custom PRNG subroutine. Even if you want automatic parallelization or similar, compilers are normally capable of treating regular DO loops just as well as DO CONCURRENT.
You'll need a pure random number generator. One is quite possible to write, say a linear congruential generator, where the seed (a 64-bit unsigned integer) is the same as the state and the same as the return value. In that case the state/seed is kept externally, outside the sampling routine, passed in explicitly, and stored again once the RNG hands it back.
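A minimal sketch of that idea in Python, using the well-known MMIX LCG constants (the mapping of the state to [0, 1) is one common choice, not the only one):
MASK64 = (1 << 64) - 1  # emulate 64-bit unsigned wraparound

def lcg_step(state):
    # pure step: the same state in always gives the same state out
    return (state * 6364136223846793005 + 1442695040888963407) & MASK64

def to_unit(state):
    # use the top 53 bits to form a float in [0, 1)
    return (state >> 11) / float(1 << 53)

state = 12345  # seed/state kept by the caller, outside the sampling routine
for _ in range(3):
    state = lcg_step(state)  # pass the state explicitly, store what comes back
    print(to_unit(state))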

try/catch or type conversion performance in Julia (Julia 73 seconds, Python 0.5 seconds)

I have been playing with Julia because it seems syntactically similar to Python (which I like) but claims to be faster. However, I tried writing a script similar to one I have in Python, which tests which values in a text file are numeric, and which uses this function:
function isFloat(s)
    try:
        float64(s)
        return true
    catch:
        return false
    end
end
For some reason, this takes a great deal of time for a text file with a reasonable number of rows of text (~500,000).
Why would this be? Is there a better way to do this? What general feature of the language can I understand from this to apply to other languages?
Here are the two exact scripts I ran, with the times, for reference:
python: ~0.5 seconds
import time
import numpy as np

def is_number(s):
    try:
        np.float64(s)
        return True
    except ValueError:
        return False

start = time.time()
file_data = open('SMW100.asc').readlines()
file_data = map(lambda line: line.rstrip('\n').replace(',', ' ').split(), file_data)
bools = [(all(map(is_number, x)), x) for x in file_data]
print time.time() - start
julia: ~73.5 seconds
start = time()

function isFloat(s)
    try:
        float64(s)
        return true
    catch:
        return false
    end
end

x = map(x -> split(replace(x, ",", " ")), open(readlines, "SMW100.asc"))
u = [(all(map(isFloat, i)), i) for i in x]
print(time() - start)
Note also that you can use the float64_isvalid function in the standard library to (a) check whether a string is a valid floating-point value and (b) return the value.
Note also that the colons (:) after try and catch in your isFloat code are wrong in Julia (this is a Pythonism).
A much faster version of your code should be:
const isFloat2_out = [1.0]
isFloat2(s::String) = float64_isvalid(s, isFloat2_out)

function foo(L)
    x = split(L, ",")
    (all(isFloat2, x), x)
end

u = map(foo, open(readlines, "SMW100.asc"))
On my machine, for a sample file with 100,000 rows and 10 columns of data, 50% of which are valid numbers, your Python code takes 4.21 seconds and my Julia code takes 2.45 seconds.
This is an interesting performance problem that might be worth submitting to julia-users to get more focused feedback than SO will probably provide. At first glance, I think you're hitting problems because (1) try/catch is just slightly slow to begin with, and (2) you're using try/catch in a context where there's a considerable amount of type uncertainty, because many of the function calls don't return stable types. As a result, the Julia interpreter spends its time trying to figure out the types of objects rather than doing your computation. It's a bit hard to tell exactly where the big bottlenecks are because you're doing a lot of things that are not very idiomatic in Julia. You also seem to be doing your computations in global scope, where Julia's compiler can't perform many meaningful optimizations due to additional type uncertainty.
Python is oddly ambiguous on the subject of whether using exceptions for control flow is good or bad. See Python using exceptions for control flow considered bad?. But even in Python, the consensus is that user code shouldn't use exceptions for control flow (although for some reason generators are allowed to do this). So basically, the simple answer is that you should not be doing that – exceptions are for exceptional situations, not for control flow. That is why almost zero effort has been put into making Julia's try/catch construct faster – you shouldn't be using it like that in the first place. Of course, we will probably get around to making it faster at some point.
That said, the onus is on us as the designers of Julia's standard library to make sure that we provide APIs that never force you to use exceptions for control flow. In this case, you need a function that lets you try to parse something as a floating-point value and indicate whether that was possible – not by throwing an exception, but by returning normal values. We don't provide such an API, so this is ultimately a shortcoming of Julia's standard library as it exists right now. I've opened an issue to discuss this API design question: https://github.com/JuliaLang/julia/issues/5704. We'll see how it pans out.
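To make the distinction concrete, here is a small Python sketch of the kind of API being argued for: failure is reported through the return value, so callers never branch on exceptions (the name try_parse_float is made up for illustration; in later Julia versions this role is played by tryparse, which returns nothing on failure):
def try_parse_float(s):
    # the exception stays an internal detail instead of leaking into
    # caller control flow
    try:
        return True, float(s)
    except ValueError:
        return False, None

print(try_parse_float("3.14"))  # (True, 3.14)
print(try_parse_float("abc"))   # (False, None)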

Random number generator in Boost.Python

How can I use the same random number generator in my "Python with numpy" code as my C++0x code?
I am currently using
std::ranlux64_base_01
in C++ and
numpy.random.RandomState(10)
in Python.
I exposed C++'s random number generator:
typedef std::ranlux64_base_01 RNG;
RNG g_rng;
...
class_<RNG>("RNG");
scope().attr("g_rng") = g_rng;
How do I use it with Python's methods that take a numpy.random?
There are two ways.
The first is to use Python's random number generator from C++. It will probably look something like this:
boost::python::object randmod = boost::python::import("numpy.random");
boost::python::object randfunc = randmod.attr("RandomState");
boost::python::object rng = randfunc(10);  // a RandomState seeded with 10
The second is to wrap and expose the C++ function so that it can be used from Python. The code for this is left as an exercise for the student.
Edit:
Once you have exposed the C++ function, you would have to make a Python object that mimics the interface of numpy.random.RandomState, using the C++ function for its random bits. This is probably more work than you want to do. I have not used numpy, but from the docs it looks like the RandomState object is non-trivial.
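As a rough sketch of that second approach in Python, an adapter might mimic just the one method your code needs. Everything here is hypothetical: next_double() stands in for whatever method you actually expose on the wrapped C++ generator:
import numpy as np

class CxxRandomState:
    # mimics a small slice of numpy.random.RandomState, delegating the
    # raw random bits to an exposed C++ generator (hypothetical interface)
    def __init__(self, cxx_rng):
        self._rng = cxx_rng

    def random_sample(self, size=None):
        # next_double() is an assumed method returning a float in [0, 1)
        if size is None:
            return self._rng.next_double()
        count = int(np.prod(size))
        values = [self._rng.next_double() for _ in range(count)]
        return np.array(values).reshape(size)

# usage, assuming the module exposing g_rng also exposes next_double():
# state = CxxRandomState(my_module.g_rng)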
