OMShell Simulation Times Output - time

When I simulate an OpenModelica model using OMShell, I get the following times as output:
timeFrontend
timeBackend
timeSimCode
timeTemplates
timeCompile
timeSimulation
timeTotal
I wasn't able to find any information about the meaning of each of them. Do you know what each of these times means? Do you know of any documentation that could help me figure this out?

OK, here it goes, off the top of my head:
timeFrontend: the time it takes to flatten the Modelica code (remove structure, expand connects, etc.) to get to a hybrid DAE
timeBackend: the time it takes to do a lot of symbolic manipulations on the system to bring it to ODE form (causalization, BLT transformation, index reduction, matching, etc.)
timeSimCode: the time to generate the structures for code generation
timeTemplates: the time it takes to generate the C or C++ files from the SimCode structures
timeCompile: the time it takes to compile the generated C or C++ files via gcc or clang into a simulation executable
timeSimulation: the time it takes to run the generated simulation executable to get the simulation results
timeTotal: duh :)

Related

TwinCAT fails to save data to CSV

I am part of a tractor pulling team and we have a Beckhoff CX8190-based PLC for data logging. The system works most of the time, but every now and then saving the sensor values (collected every 10 ms) to CSV fails (mostly in the middle of a CSV row). The guy who built the code is new to TwinCAT and does not know how to find what causes this. Any ideas where to look for the reason?
Writing to a file is always an asynchronous action in TwinCAT. That is to say, it is not a real-time action, and there is no guarantee that the write completes within the task cycle time of 10 ms. Therefore these function blocks always have a BUSY output which has to be evaluated, and the function block has to be called repeatedly until the BUSY output returns to FALSE. Only then can a new write command be executed.
I normally tackle this with a two-sided (ping-pong) buffer. Let's say the buffer array has 2x100 entries. Fill up the first 100 entries with sample values, then write them all to the file with one command. When that is done, clear that half. In the meantime the other half of the buffer can be filled with sample values; when the second half is full, write it all to the file, and so on. That way you have far more time for the filesystem access (in the example above, 100 x 10 ms = 1 s) than the 10 ms task cycle time.
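To make the idea concrete, here is a minimal sketch of that ping-pong scheme written in C++ rather than TwinCAT structured text (the buffer size, file name and fake sensor value are placeholders); in the real PLC code the check on the std::future would correspond to evaluating the BUSY output of the file-write function block:

#include <array>
#include <cstddef>
#include <fstream>
#include <future>

constexpr std::size_t kBufSize = 100;            // samples per half-buffer
using Buffer = std::array<double, kBufSize>;

// Stand-in for the slow, asynchronous file write: appends one buffer as CSV rows.
void writeCsv(Buffer buf) {
    std::ofstream out("samples.csv", std::ios::app);
    for (double v : buf) out << v << "\n";
}

int main() {
    std::array<Buffer, 2> buffers{};
    std::size_t active = 0;      // half currently being filled
    std::size_t fill = 0;        // next free slot in the active half
    std::future<void> pending;   // plays the role of the BUSY output

    for (int cycle = 0; cycle < 1000; ++cycle) {   // stand-in for the 10 ms task
        buffers[active][fill++] = cycle * 0.1;     // stand-in for a sensor sample

        if (fill == kBufSize) {
            // With 100 samples per write there is ~1 s of slack, so the previous
            // write should normally be finished; wait only if it is not.
            if (pending.valid()) pending.wait();

            // Hand the full half to the asynchronous writer and switch sides.
            pending = std::async(std::launch::async, writeCsv, buffers[active]);
            active = 1 - active;
            fill = 0;
        }
    }
    if (pending.valid()) pending.wait();           // flush the last write before exit
}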
But this is just a suggestion from my own experience. I agree with the others: some code could really help.

Running an external program in TCL

After developing an elaborate TCL script to do smoothing based on Gabriel Taubin's smoothing without shape shrinkage, I found that the code runs extremely slowly. This is likely due to the size of the unstructured grid I am smoothing. I have to use TCL because the grid generator I am using is Pointwise, and Pointwise's "macro language" is TCL-based. I'm still a bit new to this, but is there a way to run an external program from TCL, where TCL sends the data to the software, the software runs the smoothing operation, and the output is sent back to TCL to update the internal data inside the Pointwise grid generation tool? I will be writing the smoothing tool in another language which is significantly faster.
There are a number of options to deal with code that "runs extremely slowly". I would start by determining how fast it must run. Are we talking milliseconds, seconds, minutes, hours or days? Next it is necessary to determine which part is slow. The time command is useful here.
But assuming you have decided that more performance is necessary and you have some metrics for your current program so you will know if you are improving, here are some things to try:
Try to improve the existing code. If you are using the expr command, make sure your expressions are given to the command as a single argument enclosed in braces. Beginners sometimes forget this and the improvement can be substantial.
Use the critcl package to code parts of the program in "C". Critcl allows you to put "C" code directly into your Tcl program and have that code pulled out, compiled and loaded into your program.
Write a traditional "C" based Tcl extension. Tcl is very extensible and has a clean API for building extensions. There is sample code for extensions and source to many extensions is readily available.
Write a program to do the time-consuming part of the job, execute it as a separate process, and get the output back into your Tcl script. This is where the exec command comes in useful. Presumably you will have to write data out to somewhere the program can get it, and then read the program's output back into your Tcl script. If you want to get fancy you can do two-way communication across a localhost TCP port. The setup in Tcl is quite simple. The "C" code in a program to do it is a bit more tedious, but many examples exist out on the Internet.
Which option to choose depends very much on how much improvement is required and the amount of code that must be improved. You haven't given us much idea what those things are in your case, so all I can offer is rather vague general solutions.
For a loadable module, you can write a Tcl extension. An example is here:
File Last Modified Time with Milliseconds Precision
Alternatively, just write your program to take input from a file. Have Tcl write the input data to the file, run the program, then collect the output from the external program.
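As a rough sketch of that file-based hand-off, seen from the external tool's side and written in C++, the program below reads points from an input file, would apply the smoothing pass, and writes the results back for Tcl to pick up. The file format (one "x y z" triple per line), the argument names and the stubbed-out smoothing step are illustrative assumptions, not Pointwise specifics:

#include <fstream>
#include <iostream>
#include <sstream>
#include <string>
#include <vector>

struct Point { double x, y, z; };

int main(int argc, char* argv[]) {
    if (argc != 3) {
        std::cerr << "usage: smoother <input-file> <output-file>\n";
        return 1;
    }

    // Read the points that the Tcl script wrote out before calling
    // something like: exec smoother points.in points.out
    std::vector<Point> pts;
    std::ifstream in(argv[1]);
    for (std::string line; std::getline(in, line); ) {
        std::istringstream ss(line);
        Point p{};
        if (ss >> p.x >> p.y >> p.z) pts.push_back(p);
    }

    // ... run the (much faster) Taubin smoothing pass over pts here ...

    // Write the smoothed points back for Tcl to read and push into Pointwise.
    std::ofstream out(argv[2]);
    for (const Point& p : pts)
        out << p.x << ' ' << p.y << ' ' << p.z << '\n';
    return 0;
}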

Advice on simulation time

I just created a VANET simulation with the AODV protocol, and I am running it using OMNeT++ and SUMO. Please advise on the following:
1- Which button is it better to use (the run button or the fast button)?
2- When I use the run button, my simulation takes a long time (up to 12 hours) with a sample number of cars. If the number of cars increases it will take even longer, so how can I increase the speed of the simulation to get a shorter run time?
Thanks in advance
For running simulations (for collecting results and after verifying that your model works as intended) I recommend two things:
Run your simulation in release mode. This means compiling the code with the flag MODE=release. More details can be found in the OMNeT++ user manual.
Run your simulations on the command line - do not use any OMNeT++ GUI. If you just want to collect results this is by far the fastest way, since you do not need the GUI. How to use the command line is explained in the dedicated OMNeT++ user manual section.

Optimizing assembly generated from a .cpp file

I have a question about optimizing an assembly file that I generated from a .cpp file.
This is my homework from a computer organization class.
The assignment is as follows.
I have to write a program that calculates the dot product of two vectors and generates a .asm file. Then I have to optimize the .asm file and compare the execution times using QueryPerformanceCounter in Visual Studio. I generated the .asm file and found the loop part in it. I am trying to learn basic assembly language in order to optimize it. However, I have no idea how to execute the .asm file. My professor mentioned linking the .cpp file with the assembly, but I have no idea what that means.
Any help will be appreciated.
If I understand what your professor is asking, you need to do this in steps:
Create a function in C++ to calculate the dot product.
In main(), call this function in a loop many thousands of times, say 5000.
Add a call to QueryPerformanceCounter before and after this loop (a sketch of this timing loop follows below).
Run your program and note the total time it took to call your function 5000 times.
Use the compiler to generate assembly for your function. Save that assembly to an .asm file and then optimize it.
Assemble the .asm file with an appropriate assembler in order to generate an .obj file.
Compile your .cpp file and link it with the .obj file you generated in the step above to produce an .exe file.
Run the program again and note the total time it took to call your optimized function 5000 times.
Compare the two measurements (and note how the compiler is probably better at optimizing than you are).
You don't say what compiler, assembler or hardware platform you're using, so I can't provide more details than that.
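For steps 2-4, a minimal sketch of the timing harness might look like the following, assuming Windows and MSVC; the dotProduct function, the vector size and the 5000 repetitions are just placeholders for whatever the assignment actually uses:

#include <windows.h>
#include <cstddef>
#include <iostream>
#include <vector>

// Placeholder for the function whose generated assembly is being optimized.
double dotProduct(const std::vector<double>& a, const std::vector<double>& b) {
    double sum = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i) sum += a[i] * b[i];
    return sum;
}

int main() {
    std::vector<double> a(1000, 1.5), b(1000, 2.5);
    volatile double sink = 0.0;                 // keeps the calls from being optimized away

    LARGE_INTEGER freq, start, stop;
    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&start);            // step 3: counter before the loop

    for (int i = 0; i < 5000; ++i)              // step 2: call the function many times
        sink = dotProduct(a, b);

    QueryPerformanceCounter(&stop);             // step 3: counter after the loop
    double ms = 1000.0 * (stop.QuadPart - start.QuadPart) / freq.QuadPart;
    std::cout << "5000 calls took " << ms << " ms (last result " << sink << ")\n";
}

Run this once with the compiler-generated function and once linked against your optimized .asm, and compare the two numbers (step 9).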

Wisdom in FFTW doesn't import/export

I am using FFTW for FFTs. It's all working well, but the optimisation takes a long time with the FFTW_PATIENT flag. However, according to the FFTW docs, I can improve on this by reusing wisdom between runs, which I can import from and export to a file. (I am using the floating-point FFTW routines, hence the fftwf_ prefix below instead of fftw_.)
So, at the start of my main(), I have:
char wisdom_file[] = "optimise.fft";
fftwf_import_wisdom_from_filename(wisdom_file);
and at the end, I have:
fftwf_export_wisdom_to_filename(wisdom_file);
(I also have error checking to verify that the return values are non-zero, omitted above for simplicity, so I know the files are being read and written correctly.)
After one run I get a file optimise.fft with what looks like ASCII wisdom. However, subsequent runs do not get any faster, and if I create my plans with the FFTW_WISDOM_ONLY flag, I get a null plan, showing that it doesn't see any wisdom there.
I am using 3 different FFTs (2 real-to-complex and 1 inverse complex-to-real), so I have also tried importing/exporting around each FFT, and to separate files, but that doesn't help.
I am using FFTW 3.3.3. I can see that FFTW 2 seemed to need more setting up to reuse wisdom, but the above seems sufficient now. What am I doing wrong?
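For reference, the overall arrangement described above, collapsed into a single self-contained example, looks roughly like this (the transform size, array names and the single r2c plan are illustrative; the real code uses three plans and checks the return values of the import/export calls):

#include <fftw3.h>

int main(void) {
    const char wisdom_file[] = "optimise.fft";
    fftwf_import_wisdom_from_filename(wisdom_file);   // import before any plans are created

    const int n = 4096;
    float* in = fftwf_alloc_real(n);
    fftwf_complex* out = fftwf_alloc_complex(n / 2 + 1);

    // With usable wisdom in the file this planning step should become cheap;
    // replacing FFTW_PATIENT with FFTW_WISDOM_ONLY makes it return NULL when
    // no matching wisdom is found, which is the symptom described above.
    fftwf_plan p = fftwf_plan_dft_r2c_1d(n, in, out, FFTW_PATIENT);

    for (int i = 0; i < n; ++i) in[i] = 0.0f;   // fill after planning: FFTW_PATIENT overwrites the arrays
    fftwf_execute(p);

    fftwf_destroy_plan(p);
    fftwf_export_wisdom_to_filename(wisdom_file);      // export after the plans exist
    fftwf_free(in);
    fftwf_free(out);
    return 0;
}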
