When running a large set of tests using MsTest from the command line, I can see each test executing and its outcome logged in the window like so:
Passed Some.NameSpace.Test1
Passed Some.NameSpace.Test2
And so on for thousands of tests. Once completed, MsTest spits out a summary like this:
Summary
---------
Test run failed
Passed 2000
Failed 1
------------
Total 2001
At this point I either have to scroll back through the window trying to find the needle in a haystack that is my single failing test, or open the huge XML results file and text-search for some keyword indicating a failed test.
Isn't there an easier way? Can I have MsTest report progress without dumping Passed test names to the console (still logging failed ones), or can I have a summary of just Failed tests at the end?
I think it's obvious what any command-line user wants: to follow progress AND know the outcome at the end, without having to read XML or scroll back through the cmd window history.
Answering my own question: it seems the only solution is a simple wrapper/parser script that calls MsTest.exe and parses/summarizes the output, either the stdout or the .trx file.
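For example, an untested Groovy sketch of the parsing half of such a wrapper (the .trx path and the TestRun/Results/UnitTestResult element and attribute names are assumptions based on the default MsTest results schema):
// Print only the failed tests from an MsTest .trx results file.
// XmlSlurper(false, false) = non-validating, namespace-unaware, so GPath
// matches local names even though trx files declare a default namespace.
def trx = new XmlSlurper(false, false).parse(new File('TestResults/results.trx'))
def results = trx.Results.UnitTestResult
def failed = results.findAll { it.@outcome == 'Failed' }
failed.each { r ->
    println "FAILED: ${r.@testName}"
    println "    ${r.Output.ErrorInfo.Message.text().trim()}"
}
println "${failed.size()} failed of ${results.size()} total"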
You could use TestContext.CurrentTestOutcome at the end of each test to determine whether the test failed, and then log all failed tests to a separate file.
[TestCleanup]
public void CleanUp()
{
    // Comparing against the UnitTestOutcome enum is more robust than ToString().Equals("Failed").
    if (TestContext.CurrentTestOutcome == UnitTestOutcome.Failed)
    {
        TestContext.WriteLine("{0}.{1} ==> {2}", TestContext.FullyQualifiedTestClassName,
            TestContext.TestName, TestContext.CurrentTestOutcome);
        // Log the result to a file here, e.g. with File.AppendAllText.
    }
}
I don't know if this could help you.
Related
I have two load tests, each one in its own test case. This is using the free version of SoapUI.
Currently I have to manually select a load test, run it, wait until it finishes, and then export the results before moving on to the next load test and repeating the same actions.
Is there a way (and if so, how) to automatically run all the load tests one by one and export each one's results to a file (test step, min, max, avg, etc.)? This would save the tester from manual intervention; they could just let the tests run while they do other work.
You can use the load tests command-line runner; the doc is here.
Something like
loadtestrunner -ehttp://localhost:8080/services/MyService c:\projects\my-soapui-project.xml -r -f folder_name
Using these two options:
-r : turns on exporting of a LoadTest statistics summary report
-f : specifies the root folder to which test results should be exported
Then a file like LoadTest_1-statistics.txt will appear in your specified folder with the statistics results in CSV form.
(Inspired by the answer of #aristotll.)
loadtestrunner.bat runs the following class: com.eviware.soapui.tools.SoapUITestCaseRunner
From Groovy you can call the same class like this:
// Backslashes must be escaped in Groovy strings, and main() expects a String[].
com.eviware.soapui.tools.SoapUITestCaseRunner.main([
    "-ehttp://localhost:8080/services/MyService",
    "c:\\projects\\my-soapui-project.xml",
    "-r",
    "-f",
    "folder_name"
] as String[])
But the method main calls System.exit(), so SoapUI will exit in this case.
So let's go deeper:
def res = new com.eviware.soapui.tools.SoapUITestCaseRunner().runFromCommandLine([
    "-ehttp://localhost:8080/services/MyService",
    "c:\\projects\\my-soapui-project.xml",
    "-r",
    "-f",
    "folder_name"
] as String[])
assert res == 0 : "SoapUITestCaseRunner failed with code $res"
PS: not tested, just an idea.
Main question: Would Groovy's execute() method allow me to run a command that takes a file as an argument, and maybe run the command in background mode?
Here is my issue. I was able to use Groovy's execute() for simple commands like ls, for example. Suppose now I want to start a process like Kafka from a Groovy script (the end goal is to replace bash scripts with Groovy scripts). So I start with these lines:
def kafkaHome = "/Users/mememe/kafka_2.11-0.9.0.1"
def zkStart = "$kafkaHome/bin/zookeeper-server-start.sh"
def zkPropsFile = "$kafkaHome/config/zookeeper.properties"
Now, executing the command below from my Mac terminal:
/Users/mememe/kafka_2.11-0.9.0.1/bin/zookeeper-server-start.sh /Users/mememe/kafka_2.11-0.9.0.1/config/zookeeper.properties
starts up the process just fine. And executing this statement:
println "$zkStart $zkPropsFile"
prints the above command line as is. However, executing this command from within the groovy script:
println "$zkStart $zkPropsFile".execute().text
simply hangs! And trying this:
println "$zkStart $zkPropsFile &".execute().text
where I try to make it a background process, gets further but starts complaining about the input file and throws this exception:
java.lang.NumberFormatException: For input string: "/Users/mememe/kafka_2.11-0.9.0.1/config/zookeeper.properties"
Trying this gives the same exception as above:
def proc = ["$zkStart", "$zkPropsFile", "&"].execute()
println proc.text
What am I missing please? Thank you.
Yes, try using the consumeProcessOutputStream() method. (The hang happens because .text waits for the process to exit, and a server never exits on its own; the & is not interpreted here because no shell is involved, it is simply passed to the script as an extra argument, which is likely why ZooKeeper then chokes on the properties path.)
def os = new File("/some/path/toyour/file.log").newOutputStream()
"$zkStart $zkPropsFile".execute().consumeProcessOutputStream(os)
You can find the method in the Groovy docs for the Process class:
http://docs.groovy-lang.org/docs/groovy-1.7.2/html/groovy-jdk/java/lang/Process.html
Which states:
Gets the output and error streams from a process and reads them to keep the process from blocking due to a full output buffer. The stream data is thrown away but blocking due to a full output buffer is avoided. Use this method if you don't care about the standard or error output and just want the process to run silently - use carefully however, because since the stream data is thrown away, it might be difficult to track down when something goes wrong. For this, two Threads are started, so this method will return immediately.
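Putting that together, an untested sketch for the ZooKeeper case (the log file location is an assumption):
def kafkaHome = "/Users/mememe/kafka_2.11-0.9.0.1"
def zkStart = "$kafkaHome/bin/zookeeper-server-start.sh"
def zkPropsFile = "$kafkaHome/config/zookeeper.properties"
def log = new File("/tmp/zookeeper.log").newOutputStream()
// The list form keeps the properties path as a single argument, untokenized.
def proc = [zkStart, zkPropsFile].execute()
// Drain stdout and stderr into the log so a full buffer never blocks the server.
proc.consumeProcessOutput(log, log)
// The script continues immediately; call proc.destroy() later to stop the server.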
Inside TestKitchen describe blocks I'm running a command, loading its output into a variable, then running multiple expect statements over that output to validate different parts of it. The end goal is to use this as part of CI builds to do black-box testing.
In this instance I'm calling JMeter (using it to run a remote agent to perform off-DUT tests), then running through the results it returns, checking each test (yeah yeah... it's a little nasty but it works a treat):
describe "Test Transparent Proxy (JMeter)" do
$jmeter_run = command("/usr/local/apache-jmeter-2.13/bin/jmeter -n -t /root/jmx/mytest.jmx -r -Jremote_hosts=192.168.7.252 -Gdut_ip=#$internal_ip -X -l /dev/stdout 2>&1").stdout
it 'test1' do
expect($jmeter_run).to match /text_to_match/
end
it 'test2' do
expect($jmeter_run).to match /more_text to match/
end
end
The tests themselves run fine, but I'm finding that multiple JMeter runs (different test sets) execute out of order relative to how they're defined in the test spec. I have other blocks being executed around the JMeter tests. Here is my flow:
block 1
block 2
block 3 (Jmeter1)
block 4
block 5 (Jmeter2)
What I'm getting though is this:
block5
block3
block1
block2
block4
None of the documentation I've found gives me any clue how to avoid this. I don't want to put the command execution inside a should/expect chunk of its own, because I want to be able to tell whether an individual test has failed. I'd also like to avoid running 50-odd individual JMeter tests (they're about 5 seconds each, even with an average of 20 tests per run).
Help? :D
Well, I managed to resolve this issue myself.
After a lot of tinkering I ended up running the command inside a test, so it executes when the example runs rather than when the spec file is loaded:
it 'JMeter executed correctly' do
$jmeter_run1 = command("/usr/local/apache-jmeter-2.13/bin/jmeter -n -t /root/jmx/mytest.jmx -r -Jremote_hosts=192.168.7.252 -Gdut_ip=#$internal_ip -X -l /dev/stdout 2>&1").stdout
expect($jmeter_run1).not_to be_empty
end
Everything now runs nicely in order like it is supposed to and everything is happy.
How can I run a specific thread group in a Test Plan from the command line? I have a Test Plan (project file) that contains two "thread groups": one for crawling a site and another for calling specific URLs with parameters. From the command line I execute with Maven, like so:
mvn.bat -Dnamescsv=src/test/resources/RandomLastNames.csv
-Ddomainhost=stgweb.domain.com -Dcrawlerthreads=2 -Dcrawlerloopcount=10 -Dsearchthreads=5 -Dsearchloopcount=5 -Dresultscsv=JmeterResults.csv clean test verify
I want to pass an argument to run only one of the two thread groups in that project file. Can you do that with JMeter? I don't want to use an IF controller unless I have to, because it feels like a hack. I know that SoapUI lets you do this with the '-s' option.
I asked this question on the JMeter forum also.
In our tests we use the While controller. It doesn't look like a hack to me and it works well. You can turn thread groups on and off easily with JMeter properties. Note that you can't change a thread group's status while the test is already running, though.
Add a While controller with the condition ${__P(threadActive)}
Set the JMeter property when JMeter starts (-JthreadActive=true)
Run the test
Please note that ${__P(threadActive)} equates to ${__P(threadActive)} == true; anything other than true will result in that thread group not running.
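For example, a non-GUI run enabling the group could look like this (the test plan and results file names here are made up):
jmeter -n -t my-test-plan.jmx -JthreadActive=true -l results.jtl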
In my Protractor test script, I use the usual notation:
describe("mytest") {
...
it(" should do this") {
...
it(" should do that") {
I would like to be able to see what test and what part of each is currently running when I run them. Is there any option I can use to output test descriptions to the console?
There is a reporter that should do what you are looking for. Take a look at https://www.npmjs.com/package/jasmine-spec-reporter and https://github.com/bcaudan/jasmine-spec-reporter/tree/master/examples/protractor
You can use the --verbose option to print more information about your tests, but it will not tell you which test is currently being run.
I suggest you create an issue if you want that feature: https://github.com/angular/protractor/issues/new
$ ./node_modules/protractor/bin/protractor protractor-config.js --verbose
------------------------------------
PID: 7985 (capability: chrome #1)
------------------------------------
Using the selenium server at http://localhost:4444/wd/hub
angularjs homepage
should greet the named user
todo list
should list todos
should add a todo
Finished in 5.915 seconds
3 tests, 5 assertions, 0 failures
Since Protractor runs in Node, you can use console.log as you normally would in JavaScript; for more console fun see the Node docs.
I like to place any logs after the functionality so that I know it's been completed, so wrapping the log inside a .then() function seemed to work best for me.
example:
element(elementToFind).click().then(function() {
  console.log("clicked element");
  // carry on with the rest of the test here
});
If you need to know the name of the spec currently running, you could use jasmine.getEnv().currentSpec.description and log it.
You can use console.log('\nTest spec: ' + __filename + '\n'); to log the test file, but I have no idea how to log the it block being executed.