When running simulations in Veins, can one dump the console output into a file? - omnet++

I am currently running simulations in Veins and/or Artery.
Is there an easy way (that perhaps I just didn't find because I'm blind/stupid) to dump the output created in the console into a file, apart from running it slower than express mode and then using copy/paste?
Can I capture this data while still running in express mode?

The short answer: if by 'console output' you mean the event log, then yes you can, but no you shouldn't, for exactly the reason you mention: express mode disables this output.
The recommended way to collect data from your simulation is by recording it using "statistics"; see also the corresponding page of the OMNeT++ tutorial.
You can log this information using the record-eventlog=true option in your omnetpp.ini (as described in more detail in the manual), but this produces huge files for Veins and Artery, because the event log is really meant as a logging system. The best way to think of it is as debug output and development support: a way to quickly figure out why something isn't working correctly. I tried to (ab)use this feature for logging data -- please, save yourself the immense pain and use the statistics mechanism instead.
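As a rough illustration, the relevant omnetpp.ini options look like this (the wildcard patterns are only an example; see the result recording chapter of the manual for the full set of options):
[General]
record-eventlog = false      # keep the event log off; it produces huge .elog files
**.scalar-recording = true   # record scalar results into the .sca file
**.vector-recording = true   # record vector results into the .vec file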

Yes. The easiest way: from the top bar, go to Run > Run Configurations > Common tab, scroll down to the output settings, and select the name and location of the output file.
The downside: each time you run a different simulation, it overwrites the previously created file, so don't forget to back it up before you run a different simulation.
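If you run from the command line under Cmdenv instead of the IDE, a similar effect can be had from omnetpp.ini. The option names below are taken from the OMNeT++ manual's Cmdenv options, but double-check them against your OMNeT++ version:
[General]
cmdenv-express-mode = false      # only needed if you actually want per-event output
cmdenv-redirect-output = true    # redirect Cmdenv's standard output to a file
cmdenv-output-file = console.out # name of that file (placeholder name)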
Good luck.

Related

Is there a recommended debugging strategy for E2E automation tests?

What is the most elegant approach for debugging a large E2E test?
I am using the TestCafe automation framework, and I'm currently facing multiple tests that are flaky and require a fix.
The problem is that every time I modify something within the test code, I need to run the entire test from the start in order to see whether the new change succeeds or not.
I would like to hear ideas about strategies for debugging an E2E test without losing your mind.
Current debug methods:
Using the built-in TestCafe debugging mechanism in the problematic area of the code and trying to comment out everything before that line.
But that really doesn't feel like the best approach.
When there is prerequisite data such as user credentials, URLs, etc., I manually declare it again just before the debug() call.
PS: I know that tests should be as focused as possible and relatively small, but this is what we have now.
Thanks in advance
You can try using the flag
--debug-on-fail
This pauses the test when it fails and allows you to view the tested page and determine the cause of the failure.
Also, use test.only to specify that only a particular test or fixture should run while all others are skipped.
https://devexpress.github.io/testcafe/documentation/using-testcafe/command-line-interface.html#--debug-on-fail
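A rough example of combining the two (the browser, file, and test names are placeholders; the test.only line goes inside your existing test file):
testcafe chrome tests/checkout.js --debug-on-fail
test.only('the flaky case under investigation', async t => { /* ... */ });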
You can use the takeScreenshot action to capture the existing state of the application during the test. TestCafe stores the screenshots inside the screenshots sub-directory and names the files with a timestamp. Alternatively, you can add the takeOnFails option on the command line to automatically capture the screen whenever a test fails, i.e. at the point of failure.
Another option is to slow down the test so it's easier to observe while it is running. You can adjust the speed using the --speed command-line flag: 1 is the fastest and 0.01 the slowest. You can then record the test run using the --video command-line flag, but you need to set up FFmpeg for this.
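For example (paths are placeholders, and the exact --screenshots syntax differs between TestCafe versions, so check the command-line reference for yours):
testcafe chrome tests/ --screenshots path=artifacts/screenshots,takeOnFails=true
testcafe chrome tests/ --speed 0.1 --video artifacts/videos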

Confirming compile time scripts execution in Xcode

I am downloading data from a remote server using curl in Build Phases > Run Script. Downloading takes 5-15 s, not that much, but multiple times a day it adds up to considerable time. Is there a better way to skip the script than commenting it out? Ideally, it would be some kind of confirmation at compile time (e.g. "Do you really need to download X?" y/n).
You can't make the run script interactive in the console, as far as I know. But you can use a shell conditional with an AppleScript interactive dialog, because AppleScript itself blocks while the dialog is shown. See for example https://cantina.co/adding-interactivity-to-the-xcode-build-process/.
However, introducing uncertainty into a build is dangerous, and you'd never be able to automate the build. In my view you'd be better off flipping a custom build setting / environment variable.
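A minimal sketch of that approach in the Run Script phase (SKIP_DATA_DOWNLOAD is a hypothetical user-defined build setting; the URL and output path are placeholders):
if [ "${SKIP_DATA_DOWNLOAD}" = "YES" ]; then
  echo "Skipping data download"
else
  curl -sSf -o "${SRCROOT}/Data/payload.json" "https://example.com/payload.json"
fi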

parser for vstest.console in Bamboo

We have to use the command configuration for vstest.console (to be able to specify the runsettings file we want to use at the time of running the job). But now it does not create the results within Bamboo like the vstest.console Bamboo task does.
First question: does the MSTest parser create that kind of result? Is there a way to do this after running from the command prompt? Also, since they both create .trx files, can I use the MSTest parser for the .trx created by vstest.console?
Second question: I don't see in my log that it kicked off that step. Is there anything special I need to do to have it kick off, especially if the previous step fails?
Also, the .trx logger setting is not renaming the .trx to my TestResult.trx file name.
I looked in the list of apps to see if there is a vstest.console version of the parser; there is not. We are using version 6.6.3, so we are a bit behind, but I'm not sure whether that matters for the parser.
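For context, the kind of command we run looks roughly like this (the assembly and file names are placeholders; check the vstest.console documentation for the exact logger syntax):
vstest.console.exe OurTests.dll /Settings:our.runsettings /Logger:trx;LogFileName=TestResult.trx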
It must have needed a reboot. I don't know why, but I came back in after the weekend, the servers had been rebooted, and with no change to what I said above, the results are showing up as expected.

SCCM OSD TS End User Summary Screen

I am looking for a good way to add a summary to an existing large build TS.
What I am working with is SCCM 2012 R2, and what I need is a hint on how to capture all the steps I want (some of them are in various groups) and put their results into some sort of variable, so that at the end the person building that PC will see a table showing, let's say, 30 applications in green and 4 in red as failures.
Can it be done in some easy way? I just need the person building the PC to see which apps didn't install so they can install them manually, or at least provide me with more information before I dive into the logs.
Thanks
I wouldn't say easy, because it requires lots of steps and you basically have to do it manually per application, but there is a TS variable, _SMSTSLastActionSucceeded, which you can check after each installation step (you have to set the step to continue on error to make this work). So basically, after you have tried to install, you check whether it worked and then set a TS variable of your choice to reflect the failure.
As a final step, you implement a script that checks all your TS variables and outputs the result.
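To make that concrete, the per-application pattern looks roughly like this (the step and variable names are placeholders):
Install Application "AppX" (step option: Continue on error).
Set Task Sequence Variable "AppX_Failed" to "true" (condition: Task Sequence Variable _SMSTSLastActionSucceeded equals "false").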
You could even use the addon OSDBackground to display your errors as the background image.
A lengthy article on how to implement a form of error handling can be found here; however, you would have to do this quite a bit differently, because in that example the TS fails at the first error, whereas you want to continue and log. Still, you should get the basic principles from it.

Is there a GUI for nosetests?

I've been using nosetests for the last few months to run my Python unit tests.
It definitely does the job, but it is not great at giving a visual view of which tests are working or breaking.
I've used several other GUI-based unit test frameworks that provide a visual snapshot of the state of your unit tests as well as drill-down features to get to detailed error messages.
Nosetests dumps most of its information to the console, leaving it to the developer to sift through the detail.
Any recommendations?
You can use the rednose plugin to color the console output. The visual feedback is much better with it.
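For example (the flag name is taken from the rednose README, so double-check it against the version you install):
pip install rednose
nosetests --rednose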
I've used Trac + Bitten for continuous integration. It was a fairly complex setup and required a substantial amount of time to RTFM, set up, and then maintain everything, but I could get nice visual reports with failed tests and error messages, and graphs over time for failed tests, pylint problems, and code coverage.
Bitten is a continuous integration plugin for Trac. It has a master-slave architecture: the Bitten master is integrated with and runs together with Trac, while a Bitten slave can run on any system that can communicate with the master. A slave regularly polls the master for build tasks. If there is a pending task (somebody has committed something recently), the master sends a "build recipe", similar to Ant's build.xml, to the slave; the slave follows the recipe and sends back the results. A recipe can contain instructions like "check out code from that repository", "execute this shell script", or "run nosetests in this directory".
The build reports and statistics then show up in Trac.
I know this question was asked 3 years ago, but I'm currently developing a GUI to make nosetests a little easier to work with on a project I'm involved in.
Our project uses PyQt, which made it really simple to start with this GUI, as it provides all you need to create interfaces. I've not been working with Python for long, but it's fairly easy to get to grips with, so if you know what you're doing it'll be perfect, provided you have the time.
You can convert .ui files created in Qt Designer to Python scripts with:
pyuic4 -x interface.ui -o interface.py
And you can get a few good tutorials to get a feel for PyQt here. Hope that helps someone :)
I like to open a second terminal, next to my editor, in which I just run a loop which re-runs nosetests (or any test command, e.g. plain old unittests) every time any file changes. Then you can keep focus in your editor window, while seeing test output update every time you hit 'save' in your editor.
I'm not sure what the OP means by 'drill down', but personally all I need from the test output is the failure traceback, which of course is displayed whenever a test fails.
This is especially effective when your code and tests are well-written, so that the vast majority of your tests only take milliseconds to run. I might run these fast unit tests in a loop as described above while I edit or debug, and then run any longer-running tests manually at the end, just before I commit.
You can re-run tests periodically using the 'watch' command (but this just runs them every X seconds, which is fine, but it isn't quite snappy enough to keep me happy.)
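For example, something like this re-runs the suite every two seconds (the interval is arbitrary):
watch -n 2 nosetests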
Alternatively I wrote a quick python package 'rerun', which polls for filesystem changes and then reruns the command you give it. Polling for changes isn't ideal, but it was easy to write, is completely cross-platform, is fairly snappy if you tell it to poll every 0.25 seconds, doesn't cause me any noticeable lag or system load even with large projects (e.g. Python source tree), and works even in complicated cases (see below.)
https://pypi.python.org/pypi/rerun/
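Typical usage looks something like the following, although the exact options may have changed, so check the package's README:
pip install rerun
rerun "nosetests"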
A third alternative is to use a more general-purpose 'wait on filesystem changes' program like 'watchdog', but this seemed heavyweight for my needs, and solutions like this which listen for filesystem events sometimes don't work as I expected (e.g. if Vim saves a file by saving a tmp somewhere else and then moving it into place, the events that happen sometimes aren't the ones you expect.) Hence 'rerun'.
