Used versions: OMNeT++ 5.0 with INET 3.4.0
Using OMNeT++ I'm running a simulation with a large number of repetitions.
In some cases I don't understand the behaviour of my system, so I want to watch the run in Qtenv. Therefore I need to repeat some specific cases of the previously simulated repetitions.
Even though I use the exact same configuration file in combination with the corresponding seed-set, I don't get the desired repetition; instead I get completely different results. What can be the reason for that?
Comparing the headers of the generated log files, the only differences are in the following lines:
run General-107342-20170331-15:42:22-5528
attr datetime 20170331-15:42:22
attr processid 5528
All other parameters match exactly. I don't understand why the results are different. Is the processid relevant to behaviour like this?
Some tips to nail down the problem:
Check whether the difference is really caused by the graphical/non-graphical difference. Run your simulation under each user interface:
$ mysim -r 154 -u Cmdenv
$ mysim -r 154 -u Qtenv
$ mysim -r 154 -u Tkenv
Check the results. Different results may be caused by several issues:
relying on undefined or unspecified behavior in C++, e.g. iterating over a collection whose order is not guaranteed (such as a set keyed on pointer values); the iteration order can throw the simulation onto a different trajectory
Accessing uninitialized memory
Using data that is available only in the graphical runtime, e.g. the positions of the nodes defined by the display string (@display property). Node positions may change based on the layouting algorithm, and layouting is not available in Cmdenv
Changing the model state while testing whether the model is running under a graphical runtime, i.e. inside if (isGUI()) {} blocks (see the sketch below)
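To make the last point concrete, here is a minimal sketch of that pitfall (the module and field names are made up purely for illustration; hasGUI()/isGUI() is the usual check in OMNeT++ 5.x):

#include <omnetpp.h>
#include <string>
using namespace omnetpp;

// "FlowMonitor" and "framesSeen" are hypothetical names for illustration.
class FlowMonitor : public cSimpleModule
{
  private:
    long framesSeen = 0;

  protected:
    virtual void handleMessage(cMessage *msg) override
    {
        // BAD (nondeterministic): updating the counter only in graphical runs,
        //   if (hasGUI()) { framesSeen++; ... }
        // makes the model state -- and hence the trajectory -- differ between
        // Cmdenv and Qtenv/Tkenv even with the same seed.

        // GOOD: update the model state unconditionally...
        framesSeen++;
        delete msg;

        // ...and keep only the purely visual part behind the GUI check.
        if (hasGUI()) {
            std::string label = "seen: " + std::to_string(framesSeen);
            getDisplayString().setTagArg("t", 0, label.c_str());
        }
    }
};

Define_Module(FlowMonitor);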
First I would try to figure out whether this is related to GUI vs. non-GUI, or rather to the use of undefined behavior. If Tkenv and Qtenv give the same result while Cmdenv differs, it is a GUI vs. non-GUI issue. If all of them differ, I would suspect a memory issue or undefined behavior.
If everything else fails, run the simulation in both Cmdenv and Qtenv and turn on event logging. Compare the logs to see where the two trajectories start to diverge, then debug both runs around that point to find the cause of the divergence.
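For reference, event logging can be switched on from omnetpp.ini; a minimal excerpt (only the standard record-eventlog option is shown, everything else is assumed):

[General]
# each run then writes an .elog event log into the results directory
record-eventlog = true

The .elog files can be opened with the Sequence Chart tool of the IDE, which makes it easier to find the first event at which the Cmdenv and Qtenv trajectories diverge.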
Related
I'm somewhat new to Veins and I'm trying to record collision statistics within the sample "RSUExampleScenario" provided in the VM. I found this question, which describes what line to add to the .ini file, which I have done, but I'm unable to find the "ncollisions" value in the results folder, which makes me think I either ran the wrong .ini line or am looking in the wrong place.
Thanks!
Because collision statistics take time to compute (essentially: trying to decode every transmission twice: once while considering interference by other nodes as usual, then trying again while ignoring all interference), Veins 5.1 requires you to explicitly turn collision statistics on. As discussed in https://stackoverflow.com/a/52103375/4707703, this can be achieved by adding a line *.**.nic.phy80211p.collectCollisionStatistics = true to omnetpp.ini.
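For reference, the relevant part of omnetpp.ini might look like this (the [Config Default] section name matches the example simulation; the rest of the file is assumed):

[Config Default]
# enable the (more expensive) second decoding pass that counts collisions
*.**.nic.phy80211p.collectCollisionStatistics = true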
After altering the Veins 5.1 example simulation this way and running it again (e.g., by running ./run -u Cmdenv -c Default from the command line), the ncollisions field in the resulting .sca file should now (sometimes) have non-zero values.
You can quickly verify this by running (from the command line)
opp_scavetool export --filter 'module("**.phy80211p") and name("ncollisions")' results/Default-\#0.sca -F CSV-R -o collisions.csv
The resulting collisions.csv should now contain a line containing (among other information) param,,,*.**.nic.phy80211p.collectCollisionStatistics,true (indicating that the simulation was executed with the required configuration) as well as many lines containing (among other information) scalar,RSUExampleScenario.node[10].nic.phy80211p,ncollisions,,,1 (indicating that node[10] could have received one more message, had it not been for interference caused by other transmissions in the simulation).
I am pretty sure a set of sequential commands with the same input can't change its output every time you run it.
It might sound stupid, but for example when installing an application or building with CMake, at least for me, I would encounter different bugs each time I ran the installer on the same system.
I guess I might have changed the CMake settings or the system settings, but it feels so strange and I am totally paranoid about it.
You never initialize arr, but you use it in M = M - arr[i]; the behavior is undefined.
Apart from that, using an array with a dynamic size is not recommended (ISO C++ forbids variable-length arrays); allocate it on the heap instead, for example with std::vector (see the sketch below).
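A minimal sketch of the fix, assuming N, M and arr have the roles they appear to have in the question (the rest of the program is guessed):

#include <iostream>
#include <vector>

int main()
{
    int N, M;
    if (!(std::cin >> N >> M))
        return 1;

    // Instead of "int arr[N];" (a variable-length array, not valid ISO C++),
    // use std::vector: it allocates on the heap and value-initializes to 0,
    // so there are no reads of uninitialized memory.
    std::vector<int> arr(N);
    for (int i = 0; i < N; ++i)
        std::cin >> arr[i];      // give every element a defined value

    for (int i = 0; i < N; ++i)
        M = M - arr[i];          // now well-defined

    std::cout << M << '\n';
    return 0;
}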
I'd like some advice on what vocabulary to use to describe the following. Having the right vocabulary will allow me to search for tools and ideas related to the concept.
I'd like to say a script is SomeWord if it is expected to produce the same output no matter where it is run.
For example, the following script is not SomeWord:
#!/bin/bash
ls ~
because of course it depends on where it is executed.
Whereas the following (if it runs without error) is expected to always produce the same output:
#!/bin/bash
echo "hello, world"
A more useful example would be something that loads and runs a Docker or Singularity container in a way that guarantees that a very particular container image is being used, for example by retrieving the Singularity image by its content hash.
The advantages of SomeWord scripts are: (a) they may be safely run on a remote system without worrying about the environment and (b) their outputs may be cached.
The best I can think of would be "deterministic" or some variation on "environment independent" or "reproducible".
Any container should be able to do this, as that is a big part of why the technology was developed in the first place. Environment managers like conda can also do this to a certain extent, but because conda just modifies the host environment, it's possible to be using non-conda binaries without realizing it.
I've noticed on my MacBook Pro (Quad-core) that when I run make, it takes the same amount of time as make -j, and sure enough, Activity Monitor shows all four cores getting high usage. Why is this? Is there some default setting that Apple has? I mean, it would make sense for -j to be the default, but from what I've seen on the web make with no arguments should only be using one thread.
This isn't necessarily a problem, but I'd like to understand the cause nonetheless.
The -j|--jobs flag specifies/limits the number of commands that can be run simultaneously, not the number of threads to allocate to a single command. Think of this option as concurrency instead of parallelism.
For example, I can specify --jobs=2 and have both an ES6 transpiler and a SASS preprocessor running in the background, in the same terminal window, watching for any file changes I may make.
I am trying to get different results for the code I have written in Veins. I would like to run the simulation multiple times and average over all the results. The issue I am facing is that when I use repeat=5 I get exactly the same result in all 5 runs. I want to regenerate the network each time it repeats. I have written code to place the RSUs at random positions, but I still get the same result. What can I try?
First of all, see the TicToc Tutorial.
For your issue, you need to set a different seed for each run, as the OMNeT++ Manual shows.
For me, the best way is to set seed-set to the repetition number (repeat):
seed-set = ${repetition}
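Putting it together, the relevant part of omnetpp.ini could look like this (a sketch; the rest of your configuration is assumed):

[General]
repeat = 5                  # the question uses 5 repetitions
seed-set = ${repetition}    # run 0 uses seed set 0, run 1 uses seed set 1, ...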
To start all repetitions of a simulation, go to Run Configurations, set Cmdenv as the user interface, and:
for OMNeT++ 5.0 or older: set * (asterisk) in Runnumber
for OMNeT++ 5.1: set 0..4 in Run(s)
As a result you will obtain five sets of results.
Optionally, you can choose how many processes to run in parallel (the CPUs/processes to use) in Run Configurations.