How to automatically run multiple load tests in the SoapUI free version? - performance

I have two load tests below, each one in its own separate test case. This is using the SoapUI free version:
Currently I have to select a load test, run it, wait until it finishes and then manually export the results before moving on to the next load test and repeating the same actions.
Is there a way (and if so, how) to automatically run all the load tests one by one and export each one's results to a file (test step, min, max, avg, etc.)? This would save the tester from manual intervention; they could just let the tests run whilst they do other work.

You can use the load test command-line runner; the doc is here.
Something like:
loadtestrunner -ehttp://localhost:8080/services/MyService c:\projects\my-soapui-project.xml -r -f folder_name
Using these two options:
-r : Turns on exporting of a LoadTest statistics summary report
-f : Specifies the root folder to which test results should be exported
Then a file like LoadTest_1-statistics.txt will appear in the folder you specified, containing the statistics as CSV.
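If you have several projects (or several runs) to chain unattended, a small batch wrapper around the same command can do the stepping for you. This is only a sketch; the project paths, endpoint and results folders below are placeholders:
:: run_loadtests.bat (sketch) - run each project in turn and export its statistics
for %%P in (c:\projects\project-a-soapui-project.xml c:\projects\project-b-soapui-project.xml) do (
    loadtestrunner -ehttp://localhost:8080/services/MyService %%P -r -f results\%%~nP
)
Each run then drops its LoadTest_*-statistics.txt files into its own results sub-folder (create the folders beforehand if the runner does not do that for you).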

(Inspired by the answer of #aristotll.)
loadtestrunner.bat runs the following class: com.eviware.soapui.tools.SoapUITestCaseRunner
From Groovy you can call the same class like this:
com.eviware.soapui.tools.SoapUITestCaseRunner.main([
  "-ehttp://localhost:8080/services/MyService",
  "c:\projects\my-soapui-project.xml",
  "-r",
  "-f",
  "folder_name"
])
But the method main calls System.exit()...
and SoapUI itself will exit in this case.
So let's go deeper:
def res = new com.eviware.soapui.tools.SoapUITestCaseRunner().runFromCommandLine([
  "-ehttp://localhost:8080/services/MyService",
  "c:\projects\my-soapui-project.xml",
  "-r",
  "-f",
  "folder_name"
])
assert res == 0 : "SoapUITestCaseRunner failed with code $res"
PS: not tested - just an idea.
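Building on that idea, you could chain several load test projects from one Groovy script by calling runFromCommandLine once per project. Again just a sketch (untested; the project paths and the endpoint are placeholders), with an explicit cast so the Java method gets the String[] it expects:
// sketch: run each SoapUI project in turn and stop if one of them fails
def projects = [
    "c:\\projects\\project-a-soapui-project.xml",
    "c:\\projects\\project-b-soapui-project.xml"
]
projects.each { project ->
    def res = new com.eviware.soapui.tools.SoapUITestCaseRunner().runFromCommandLine([
        "-ehttp://localhost:8080/services/MyService",
        project,
        "-r",
        "-f",
        "folder_name"
    ] as String[])
    assert res == 0 : "SoapUITestCaseRunner failed for $project with code $res"
}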

Related

TestKitchen, Serverspec and out-of-order command execution

Inside TestKitchen describe blocks I'm running a command, loading its output into a variable, then running multiple expect statements over that output to validate different parts of it. The end goal is using this as part of CI builds to do blackbox testing.
In this instance I'm calling JMeter (using it to run a remote agent to perform off-DUT tests) and then running through the results it returns, checking each test (yeah yeah... it's a little nasty but it works a treat):
describe "Test Transparent Proxy (JMeter)" do
$jmeter_run = command("/usr/local/apache-jmeter-2.13/bin/jmeter -n -t /root/jmx/mytest.jmx -r -Jremote_hosts=192.168.7.252 -Gdut_ip=#$internal_ip -X -l /dev/stdout 2>&1").stdout
it 'test1' do
expect($jmeter_run).to match /text_to_match/
end
it 'test2' do
expect($jmeter_run).to match /more_text to match/
end
end
The tests themselves run fine, but I'm finding that multiple JMeter runs (different test sets) are being executed out of order relative to how they're defined in the test spec. I have other blocks being executed around the JMeter tests. Here is my flow:
block 1
block 2
block 3 (Jmeter1)
block 4
block 5 (Jmeter2)
What I'm getting though is this:
block 5
block 3
block 1
block 2
block 4
None of the documentation I've found gives me any clues as to how to avoid this. I don't want to put the command execution inside a should/expect block of its own, as I want/need to be able to tell if an individual test has failed. I would also like to avoid running 50-odd individual JMeter tests (they're about 5 secs each, even with an average of 20 tests in each run).
Help? :D
Well I managed to resolve this issue myself.
After a lot of tinkering I ended up running the command inside a test:
it 'JMeter executed correctly' do
  $jmeter_run1 = command("/usr/local/apache-jmeter-2.13/bin/jmeter -n -t /root/jmx/mytest.jmx -r -Jremote_hosts=192.168.7.252 -Gdut_ip=#$internal_ip -X -l /dev/stdout 2>&1").stdout
  expect($jmeter_run1).not_to be_empty
end
Everything now runs nicely in order like it is supposed to and everything is happy.

How can I run a specific 'thread group' in a JMeter 'test plan' from the command line?

How can I run a specific thread group in a Test Plan from the command line? I have a Test Plan (project file) that contains two "thread groups": one for crawling a site and another for calling specific URLs with parameters. From the command line I execute with Maven, like so:
mvn.bat -Dnamescsv=src/test/resources/RandomLastNames.csv
-Ddomainhost=stgweb.domain.com -Dcrawlerthreads=2 -Dcrawlerloopcount=10 -Dsearchthreads=5 -Dsearchloopcount=5 -Dresultscsv=JmeterResults.csv clean test verify
I want to pass an argument to run only one of the two "thread groups" in that project file. Can you do that with JMeter? I don't want to use an If Controller unless I have to, because it feels like a "hack". I know that SoapUI lets you do this with the '-s' option.
I asked this question on the JMeter forum also.
In our tests we use the While Controller. It doesn't look like a hack to me and it works well. You can easily turn thread groups on and off with JMeter properties. Note that you can't change a thread group's status once the test is already running, though.
Add a While Controller with the condition ${__P(threadActive)}
Set the JMeter property when JMeter starts (-JthreadActive=true)
Run the test
Please note that ${__P(threadActive)} equates to ${__P(threadActive)} == true; anything other than true will result in that thread group not running (see the example run below).
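For example, a non-GUI run that switches the gated thread group on could look like this (the test plan and results file names are placeholders):
jmeter -n -t TestPlan.jmx -JthreadActive=true -l results.jtl
Leaving the property out, or setting it to anything other than true, means the condition is not true, so per the note above that thread group will not run.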

How to see which Protractor test is currently executing?

In my Protractor test script, I use the usual notation:
describe("mytest") {
...
it(" should do this") {
...
it(" should do that") {
I would like to be able to see what test and what part of each is currently running when I run them. Is there any option I can use to output test descriptions to the console?
There is a reporter that should do what you are looking for. Take a look at https://www.npmjs.com/package/jasmine-spec-reporter and https://github.com/bcaudan/jasmine-spec-reporter/tree/master/examples/protractor
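A minimal sketch of wiring it into the Protractor config; the exact require form depends on the jasmine-spec-reporter version you install:
// protractor-config.js (sketch)
var SpecReporter = require('jasmine-spec-reporter'); // newer versions: require('jasmine-spec-reporter').SpecReporter
exports.config = {
  // ... your existing specs / capabilities ...
  onPrepare: function() {
    // print each describe/it to the console as it runs
    jasmine.getEnv().addReporter(new SpecReporter());
  }
};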
You can use the --verbose option to print more information about your tests, but it will not tell you which test is currently being run.
I suggest you create an issue if you want that feature: https://github.com/angular/protractor/issues/new
$ ./node_modules/protractor/bin/protractor protractor-config.js --verbose
------------------------------------
PID: 7985 (capability: chrome #1)
------------------------------------
Using the selenium server at http://localhost:4444/wd/hub
angularjs homepage
should greet the named user
todo list
should list todos
should add a todo
Finished in 5.915 seconds
3 tests, 5 assertions, 0 failures
Since Protractor runs in Node you can use console.log as you normally would in JavaScript; for more console fun see the Node docs.
I like to place any logs after functionality so that I know it's been completed, so wrapping it inside a .then() function seemed to work best for me.
Example:
element(elementToFind).click().then(function() {
  console.log("clicked element");
  continue();
});
If you need to know the name of the spec currently running you could use: jasmine.getEnv().currentSpec.description
and log it
console.log('\nTest spec: ' + __filename + '\n');
to log the test file, but I have no idea how to log the it block being executed.
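If you are on Jasmine 2 (where jasmine.getEnv().currentSpec is no longer available), a small custom reporter registered in onPrepare can log each it block as it starts. Sketch only:
// in the Protractor config's onPrepare (sketch)
jasmine.getEnv().addReporter({
  specStarted: function(result) {
    console.log('Running: ' + result.fullName); // e.g. "mytest should do this"
  }
});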

Print MSTest summary after command line execution

When running a large set of tests using MsTest from the command line, I can see each test executing and its outcome logged in the window like so:
Passed Some.NameSpace.Test1
Passed Some.NameSpace.Test2
And so on for thousands of tests. Once completed, MsTest will spit out a summary like this
Summary
---------
Test run failed
Passed 2000
Failed 1
------------
Total 2001
At this point I either have to start scrolling backwards in the window trying to find the needle in a haystack that represents my single failing test, or I can open the huge xml file that represents the result, and text-search for some keyword indicating a failed test.
Isn't there an easier way? Can I have MsTest report progress without dumping Passed test names to the console (still logging failed ones), or can I have a summary of just Failed tests at the end?
I think it's obvious what any command-line user wants to do: follow progress AND know the outcome at the end, without having to read XML or browse the cmd window history.
Answering my own question: it seems a simple wrapper/parser script that calls MsTest.exe and parses/summarizes the output (either the stdout or the .trx file) is the only solution.
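A minimal sketch of such a wrapper in PowerShell (the test container name is a placeholder): it echoes the normal progress as it happens and then repeats just the Failed lines at the end.
# run-mstest.ps1 (sketch)
$failed = @()
& MsTest.exe /testcontainer:MyTests.dll | ForEach-Object {
    Write-Host $_                                # keep the normal per-test progress output
    if ($_ -match '^Failed') { $failed += $_ }   # remember lines reporting a failed test
}
Write-Host ""
Write-Host "Failed tests:"
$failed | ForEach-Object { Write-Host $_ }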
You could use TestContext.CurrentTestOutcome at the end of each test to determine whether the test failed, and then log all failed tests to a different file.
[TestCleanup]
public void CleanUp()
{
    if (TestContext.CurrentTestOutcome.ToString().Equals("Failed"))
    {
        TestContext.WriteLine("{0}.{1} ==> {2}", TestContext.FullyQualifiedTestClassName,
            TestContext.TestName, TestContext.CurrentTestOutcome.ToString());
        // Log the result to a file.
    }
}
I don't know if this could help you.

Protect script from hacking (reading directories, modifying files)

I am trying to write a script that will run all my tests automatically and check for failures, something like this (simply running each test with "ruby file.rb" and parsing the output):
def failures?(test_file)
  io = IO.popen("ruby #{test_file}")
  log = io.readlines
  io.close
  # parsing output for failures "1 tests, 1 assertions, 0 failures, 0 errors"
  log.last.split(',').select { |s| s =~ /failures/ }.first[/\d+/] != "0"
end

puts failures?("test.rb")
But someone could easily place some malicious code in "test_file" and crash everything:
Dir.glob("*")
Dir.mkdir("HACK_DIR")
File.delete("some_file")
What is the way to protect ruby script from such hacking?
I did something similar to that but using the concept of a "sandbox".
First you create a test user that has no permissions on any of your OS files (and of course none on your test files either).
Your testing system then copies the whole tests root folder to a sandbox (created in a temp location, for example), gives the test user permission on this sandbox, and executes the tests as that user.
That way, any file creation/modification/deletion done by the tests is restricted to this sandbox. You can also analyse afterwards whatever post-mortem data the tests left behind in the sandbox.
I did this easily on Linux by creating folders in the /tmp dir and using a special user called "tester".
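A rough Ruby sketch of that flow; it assumes a restricted "tester" account already exists and that the wrapper itself has enough rights to chown the sandbox and sudo to that user:
require 'tmpdir'
require 'fileutils'

# Sketch only: copy the test into a throwaway sandbox and run it as the
# unprivileged "tester" user, so the test code can only touch the sandbox.
def run_sandboxed(test_file)
  Dir.mktmpdir('sandbox') do |sandbox|
    FileUtils.cp(test_file, sandbox)
    system('chown', '-R', 'tester', sandbox)
    # run the copied test as the restricted user and capture its output lines
    IO.popen(['sudo', '-u', 'tester', 'ruby', File.join(sandbox, File.basename(test_file))]) { |io| io.readlines }
  end
end

puts run_sandboxed('test.rb').last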
Hope this helps.
