I'm somewhat new to Veins and I'm trying to record collision statistics within the sample "RSUExampleScenario" provided in the VM. I found this question which describes what line to add to the .ini file, which I have added, but I'm unable to find the "ncollisions" value in the results folder, which makes me think I either added the wrong .ini line or am looking in the wrong place.
Thanks!
Because collision statistics take time to compute (essentially: trying to decode every transmission twice: once while considering interference by other nodes as usual, then trying again while ignoring all interference), Veins 5.1 requires you to explicitly turn collision statistics on. As discussed in https://stackoverflow.com/a/52103375/4707703, this can be achieved by adding a line *.**.nic.phy80211p.collectCollisionStatistics = true to omnetpp.ini.
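For context, the addition might look like this in omnetpp.ini (a sketch; it assumes the stock example's Default configuration):

[Config Default]
*.**.nic.phy80211p.collectCollisionStatistics = true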
After altering the Veins 5.1 example simulation this way and running it again (e.g., by running ./run -u Cmdenv -c Default from the command line), the ncollisions field in the resulting .sca file should now (sometimes) have non-zero values.
You can quickly verify this by running (from the command line)
opp_scavetool export --filter 'module("**.phy80211p") and name("ncollisions")' results/Default-\#0.sca -F CSV-R -o collisions.csv
The resulting collisions.csv should now contain a line containing (among other information) param,,,*.**.nic.phy80211p.collectCollisionStatistics,true (indicating that the simulation was executed with the required configuration) as well as many lines containing (among other information) scalar,RSUExampleScenario.node[10].nic.phy80211p,ncollisions,,,1 (indicating that node[10] could have received one more message, had it not been for interference caused by other transmissions in the simulation).
The confusion is because when we specify --sgd on the vw command line, it runs classic SGD, without the adaptive, normalised, and invariant updates. So, when we specify the algorithm as sgd in vw-hyperopt, does it run as classic SGD or with the special updates? Is it mandatory to specify an algorithm in vw-hyperopt? Which is the default algorithm? Thank you.
Looking at the source code confirms that passing --algorithm sgd to vw-hyperopt simply leaves vw's defaults alone.
This is different from vw --sgd: vw-hyperopt does not disable the defaults by passing --sgd to vw. In other words: yes, the adaptive, normalized, and invariant updates will still be in effect.
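To illustrate the distinction (a sketch with a hypothetical data file; these are vw's own flags, not vw-hyperopt's):

vw -d mydata.train          # defaults: adaptive, normalized and invariant updates all active
vw -d mydata.train --sgd    # classic sgd: all three special updates disabled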
Also: you can verify this further by looking at the log file created by vw-hyperopt in the current directory and checking that it has no --sgd option in it. This log includes the full vw command line it executes for training and testing, e.g.:
2020-09-08 00:58:45,053 INFO [root/vw-hyperopt:239]: executing the following command (training): vw -d mydata.train -f ./current.model --holdout_off -c ... --loss_function quantile
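A quick way to automate that check (assuming the log file is named vw-hyperopt.log; the actual file name in your run may differ):

grep -- '--sgd' vw-hyperopt.log || echo "no --sgd flag passed to vw"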
I'm new to snakemake (started trying it out in the last week or so), hoping it will handle more of the small details of workflows for me; previously I have coded up my own specific workflows in Python.
I generated a small workflow which, among other steps, takes Illumina PE reads and runs Kraken against them. I then parse the Kraken output to detect the most common species (within a set of allowable species) if a species value wasn't provided (running with snakemake -s test.snake --config R1_reads= R2_reads= species='').
I have 2 questions.
What is the recommended approach given the dynamic output/input?
Currently my strategy for this is to create a temp file which
contains the detected species and then cat {input.species} it into
other shell commands. This doesn't seem elegant but looking through
the docs I couldn't quite find an adequate alternative. I noticed
PersistentDicts would let me pass variables between run: commands
but I'm unsure if I can use that to load variables into a shell:
section. I also noticed that wrappers could allow me to handle it
however from the point I need that variable on I'd be wrapping the
remainder of my workflow.
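For reference, a minimal sketch of this temp-file strategy (rule, script, and file names are hypothetical):

rule detect_species:
    input: "kraken_report.txt"
    output: temp("detected_species.txt")
    shell: "parse_kraken_report.py {input} > {output}"

rule downstream:
    input: reads="reads_R1.fastq", species="detected_species.txt"
    output: "results/annotated.txt"
    shell: "some_tool --species $(cat {input.species}) {input.reads} > {output}"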
Is snakemake the right tool if I want to use the detected species afterwards to run a set of scripts specific to that species (with multiple species-specific workflows)?
Right now my impression of how to solve this is to have multiple workflow files, one per species, and a switch which calls the associated species workflow depending on the detected species.
Appreciate any insight on these questions.
-Kim
You can mark output as dynamic (e.g. expecting one file per species). Then, Snakemake will determine the downstream DAG of jobs after those files have been generated. See http://snakemake.readthedocs.io/en/stable/snakefiles/rules.html#dynamic-files
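A minimal sketch of how that could look for the species use case (rule, script, and file names are hypothetical; note that later Snakemake versions replace dynamic files with checkpoints):

rule all:
    input: dynamic("results/{species}.done")

rule split_by_species:
    input: "kraken_report.txt"
    output: dynamic("species/{species}.txt")
    shell: "split_report_by_species.py {input} species/"

rule per_species:
    input: "species/{species}.txt"
    output: "results/{species}.done"
    shell: "run_species_pipeline.sh {input} > {output}"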
Used versions: OMNeT++ 5.0 with INET 3.4.0
Using OMNeT++ I'm running simulations with a large number of repetitions.
In some cases I don't understand the behaviour of my system, so I want to watch the run in the Qtenv GUI. Therefore I need to repeat some specific cases of the previously simulated repetitions.
Even though I use the exact same configuration file in combination with the corresponding seed-set, I don't get the desired repetition; instead I get completely different results. What can be the reason for that?
Analyzing the header of the generated log-files, there are only differences in the following lines:
run General-107342-20170331-15:42:22-5528
attr datetime 20170331-15:42:22
attr processid 5528
All other parameters match exactly. I don't understand why the results are different. Is the processid relevant for behavior like this?
Some tips to nail down the problem:
Check whether the difference is indeed caused by the graphical/non-graphical difference. Run your simulation with each of the runtime environments:
$ mysim -r 154 -u Cmdenv
$ mysim -r 154 -u Qtenv
$ mysim -r 154 -u Tkenv
Check the results. Different results may be caused by several issues:
Relying on unspecified behavior in C++, e.g. iterating over a collection whose iteration order is not deterministic across runs (such as a set of pointers, or an unordered container); a different visit order can throw the simulation onto a different trajectory (see the sketch after this list)
Accessing uninitialized memory
Using data that is available only in a graphical runtime, like the positions of the nodes defined by the @display property. Node positions may change based on the layouting algorithm, and layouting is not available in Cmdenv
Changing the model state while testing whether the model is running under a graphical runtime, i.e. inside if (isGUI()) {} blocks
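To illustrate the first point, a minimal C++ sketch (not from the model in question) of set iteration whose order can differ between runs:

#include <cstdio>
#include <set>

struct Node { int id; };

int main() {
    // A set keyed by pointer value: the iteration order follows the
    // addresses returned by new, which can differ from run to run
    // (e.g. due to address space layout randomization).
    std::set<Node*> peers;
    for (int i = 0; i < 3; ++i)
        peers.insert(new Node{i});
    for (Node* n : peers)
        std::printf("%d ", n->id);   // visit order is not reproducible
    std::printf("\n");
}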
First I would try to figure out whether this is related to GUI vs non-GUI behavior or rather to the use of undefined behavior. If Tkenv and Qtenv give the same result while Cmdenv differs, then it is a GUI/non-GUI issue. If all of them differ, I would suspect a memory issue or undefined behavior.
If everything else fails, run the simulation in both Cmdenv and Qtenv and turn on event logging. Compare the logs, see where the two trajectories start to diverge, and debug both runs around that point to find the cause of the divergence.
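Event logging can be enabled from the command line, e.g. (reusing the hypothetical mysim binary from above):

$ mysim -r 154 -u Cmdenv --record-eventlog=true
$ mysim -r 154 -u Qtenv --record-eventlog=true

The resulting .elog files can then be compared to find the first point of divergence.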
I am trying to stream my data to vw in --daemon mode, and would like to obtain, at the end, the value of the coefficients for each variable.
Therefore I'd like vw in --daemon mode to either:
- send me back the current value of the coefficients for each line of data I send, or
- write the resulting model in the "--readable_model" format.
I know about the dummy-example trick save_namemodel | ... to get vw in daemon mode to save the model to a given file, but it isn't enough, as I can't access the coefficient values from that (binary) file.
Any idea on how I could solve my problem?
Unfortunately, on-demand saving of readable models isn't currently supported in the code, but it shouldn't be too hard to add. Open source software is there for users to improve according to their needs. You may open an issue on GitHub, or better, contribute the change.
See:
this code line, where only the binary regressor is saved using save_predictor(). One could envision a "rsave" or "saver" tag/command to store the regressor in readable form, as is done in this code line.
As a work-around you may call vw with --audit and parse every audit line for the feature names and their current weights, but this would:
make vw much slower
require parsing every line to get the values rather than on demand
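A sketch of that work-around (the port number and data are made up, and this assumes the daemon echoes the audit information back over the same connection):

vw --daemon --port 26542 --audit
echo "1 | height:1.5 weight:2.0" | nc localhost 26542

Each audit line contains entries roughly of the form namespace^feature:hash:value:weight, which can be parsed for the current weight of each feature.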
I know there are several posts asking similar things, but none address the problem I'm having.
I'm working on a script that handles connections to different Bluetooth low energy devices, reads from some of their handles using gatttool and dynamically creates a .json file with those values.
The problem I'm having is that gatttool commands take a while to execute (and are not always successful in connecting to the devices, due to "device is busy" or similar messages). These "errors" translate not only into wrong data in the .json file, but they also let later lines of the script keep writing to the file (e.g. adding an extra } or similar). An example of the commands I'm using would be the following:
sudo gatttool -l high -b <MAC_ADDRESS> --char-read -a <#handle>
How can I approach this in a way that lets me wait for a certain output? In this case, the ideal output when you --char-read using gatttool would be:
Characteristic value/description: some_hexadecimal_data
This way I can make sure I am following the script line by line instead of having these "jumps".
grep allows you to filter the output of gatttool for the data you are looking for.
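For example (keeping the placeholders from the question):

sudo gatttool -l high -b <MAC_ADDRESS> --char-read -a <#handle> | grep 'Characteristic value/description:'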
If you are actually looking for a way to wait until a specific output is encountered before continuing, expect might be what you are looking for.
From the manpage:
expect [[-opts] pat1 body1] ... [-opts] patn [bodyn]
waits until one of the patterns matches the output of a spawned
process, a specified time period has passed, or an end-of-file is
seen. If the final body is empty, it may be omitted.
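A minimal expect sketch for this case (the MAC address and handle are placeholders from the question, and the 10-second timeout is arbitrary):

#!/usr/bin/expect -f
set timeout 10
spawn gatttool -l high -b <MAC_ADDRESS> --char-read -a <#handle>
expect {
    -re {Characteristic value/description: ([0-9a-f ]+)} {
        puts "read value: $expect_out(1,string)"
    }
    timeout {
        puts stderr "gatttool did not answer in time"
        exit 1
    }
}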