Inside TestKitchen describe blocks I'm running a command, loading its output into a variable, then running multiple expect statements over that output to validate different parts of it. The end goal is to use this as part of CI builds to do black-box testing.
In this instance I'm calling JMeter (using it to drive a remote agent that performs off-DUT tests), then running through the results it returns, checking each test (yeah yeah... it's a little nasty, but it works a treat):
describe "Test Transparent Proxy (JMeter)" do
$jmeter_run = command("/usr/local/apache-jmeter-2.13/bin/jmeter -n -t /root/jmx/mytest.jmx -r -Jremote_hosts=192.168.7.252 -Gdut_ip=#$internal_ip -X -l /dev/stdout 2>&1").stdout
it 'test1' do
expect($jmeter_run).to match /text_to_match/
end
it 'test2' do
expect($jmeter_run).to match /more_text to match/
end
end
The tests themselves run fine, but I'm finding that multiple JMeter runs (different test sets) execute out of order relative to how they're defined in the test spec. I have other blocks being executed around the JMeter tests. Here is my flow:
block 1
block 2
block 3 (JMeter 1)
block 4
block 5 (JMeter 2)
What I'm getting though is this:
block 5
block 3
block 1
block 2
block 4
None of the documentation I've found gives me any clues as to how to avoid this. I don't want to put the command execution inside a should/expect block of its own, as I want/need to be able to tell if an individual test has failed. I would also like to avoid running 50-odd individual JMeter tests (they're about 5 secs each, even with an average of 20 tests per run).
Help? :D
Well, I managed to resolve this issue myself.
After a lot of tinkering, I ended up running the command inside a test of its own:
it 'JMeter executed correctly' do
  $jmeter_run1 = command("/usr/local/apache-jmeter-2.13/bin/jmeter -n -t /root/jmx/mytest.jmx -r -Jremote_hosts=192.168.7.252 -Gdut_ip=#$internal_ip -X -l /dev/stdout 2>&1").stdout
  expect($jmeter_run1).not_to be_empty
end
Everything now runs nicely in order, like it's supposed to, and everything is happy.
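For anyone hitting the same thing: code placed directly inside a describe block runs when RSpec loads the spec files, while the it blocks run later (and possibly in a shuffled order), which is why the JMeter commands fired out of sequence. If you'd rather keep one run per describe block, a before(:all) hook should also defer the command to run time. A rough sketch, untested in a Kitchen run:

describe 'Test Transparent Proxy (JMeter)' do
  # runs once for this block, at example-run time rather than file-load time
  before(:all) do
    @jmeter_run = command("/usr/local/apache-jmeter-2.13/bin/jmeter -n -t /root/jmx/mytest.jmx -r -Jremote_hosts=192.168.7.252 -Gdut_ip=#$internal_ip -X -l /dev/stdout 2>&1").stdout
  end

  it 'test1' do
    expect(@jmeter_run).to match /text_to_match/
  end
end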
Here is the case:
There's this app called "termux" on Android which gives me a terminal on Android, and one of its addons exposes Android's APIs - sensors, TTS engines, etc.
I wanted to write a Ruby script using this app, specifically this API, but there's a catch.
The script:
require('json')
JSON.parse(%x'termux-sensor -s "BMI160 Gyro" -n 1')
-s = the name (or part of the name) of the sensor
-n = the number of times the command will run
returns:
{
  "BMI160 Gyroscope" => {
    "values" => [
      -0.03...,
      0.00...,
      1.54...
    ]
  }
}
I didn't copy and paste the actual values, but that's not the point. The point is that this command takes almost a full second to run, though there is a way to "make it faster".
If I use the argument -d instead of -n, I can specify the time in milliseconds to delay between readings being written to STDOUT. It still takes a full second to start, but once it has, the delay works like a charm.
And since I didn't specify an -n count, it never stops - and there is the problem.
How can I retrieve the data continuously in Ruby?
I thought about using another thread so it won't block my program, but how can I tell Ruby to return the last X lines of STDOUT from a command that hasn't finished and never will, given that %x'command' in Ruby waits for the command to return?
If I understood correctly, you need to connect to the stdout of a long-running process.
See if this works for your scenario, using IO.popen:
# By running this program, then opening another terminal and
# appending some data to data.txt, e.g.:
#   $ date >> data.txt
# you will see the data appear in this program's output.
io_obj = IO.popen('tail -f ./data.txt')
until io_obj.eof?
  puts io_obj.readline
end
I found a built-in module called PTY that saved me: the PTY.spawn method, plus some thread management, let me keep a variable updated with the command's values each time it output new bytes.
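For reference, a rough sketch of that approach (the sensor name and -d value are just the ones from my earlier example, and the brace-counting assumes termux-sensor prints one pretty-printed JSON object per reading):

require 'pty'
require 'json'

latest = nil
reader = Thread.new do
  PTY.spawn('termux-sensor -s "BMI160 Gyro" -d 100') do |stdout, _stdin, _pid|
    buffer = ''
    stdout.each_line do |line|
      buffer << line
      # a reading is complete once the braces balance; parse and reset
      if !buffer.strip.empty? && buffer.count('{') == buffer.count('}')
        latest = JSON.parse(buffer)
        buffer = ''
      end
    end
  end
end

sleep 2          # the main program keeps doing its own work here
p latest         # the most recent reading, if one has arrived yet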
I have two load tests below, each in its own test case. This is using the free version of SoapUI:
Currently I have to manually select a load test, run it, wait until it finishes, and manually export the results before moving on to the next load test and repeating the same actions.
Is there a way (and if so, how) to automatically run all the load tests one by one and export each one's results to a file (test step, min, max, avg, etc.)? This would save the tester manual intervention; they could just let the tests run while they do other things.
You can use the load test command-line runner; the doc is here.
Something like
loadtestrunner -ehttp://localhost:8080/services/MyService c:\projects\my-soapui-project.xml -r -f folder_name
Using these two options:
-r : Turns on exporting of a LoadTest statistics summary report
-f : Specifies the root folder to which test results should be exported
A file like LoadTest_1-statistics.txt will then appear in the specified folder, containing the statistics results as CSV.
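If you have several project files to run back to back, a small wrapper script can loop over them and give each export its own folder. A hypothetical sketch in Ruby (the endpoint and paths just echo the example above):

Dir.glob('c:/projects/*-soapui-project.xml').each do |project|
  results = File.join('results', File.basename(project, '.xml'))
  ok = system('loadtestrunner',
              '-ehttp://localhost:8080/services/MyService',
              project, '-r', '-f', results)
  warn "loadtestrunner failed for #{project}" unless ok
end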
(Inspired by @aristotll's answer.)
loadtestrunner.bat runs the following class: com.eviware.soapui.tools.SoapUITestCaseRunner
From Groovy you can call the same class like this:
com.eviware.soapui.tools.SoapUITestCaseRunner.main([
    "-ehttp://localhost:8080/services/MyService",
    "c:\\projects\\my-soapui-project.xml",
    "-r",
    "-f",
    "folder_name"
] as String[])
But the main method calls System.exit(), so SoapUI will exit in this case.
So let's go deeper:
def res = new com.eviware.soapui.tools.SoapUITestCaseRunner().runFromCommandLine([
    "-ehttp://localhost:8080/services/MyService",
    "c:\\projects\\my-soapui-project.xml",
    "-r",
    "-f",
    "folder_name"
] as String[])
assert res == 0 : "SoapUITestCaseRunner failed with code $res"
PS: I haven't tested this - it's just an idea.
I have a script to run all the tests in a directory using require; I do:
Dir.entries(path).each
...
and then
require 'path'
...
I then use another loop to run those tests a few times, passing different arguments. I have my arguments in an array; I go through the array and run the same code as above to run all the tests in the directory. In this loop, I print a line:
puts "executing tests for #{ar[i]}"
and the next line is the require that runs the set of tests.
The problem is that the print executes, e.g., ten times (ten lines printed together), but the require only runs at the end, with only the very last element of the array. I tried different statements and they all run fine, so I don't believe it's a problem in the loop; I think it's the require. I tried load, but didn't see any difference. exec only runs the first test in the set. Any ideas?
Some more details:
Thanks for the replies! The system command is much closer to what I wanted - it runs all the tests for me.
Below is an example of what I'm trying to do.
When I run the script once, passing the specific argument 'a', I get the following results:
#### Run all tests for 'a' ####
Loaded suite
............................
Finished in 220.123 seconds
If I put my arguments in an array, e.g. ar = ['a','b','c','d'], I get:
#### Run all tests for 'a' ####
#### Run all tests for 'b' ####
#### Run all tests for 'c' ####
#### Run all tests for 'd' ####
Loaded suite
............................
Finished in 220.123 seconds
i.e. the tests run for the last option only ('d').
If I use system, every single file runs individually, which makes it hard to go through the results for, say, 100 tests across a few different runs.
The code snippet is:
for i in 0..@ar.length-1 do
  puts '## Running : ' + @ar[i] + ' ##'
  Dir.entries('./suite_dir').each do |file|
    require './suite_dir/' + file
  end
end
Don't use require like that - it's not meant to be an executor, and it only works once.
From http://www.ruby-doc.org/core-1.9.3/Kernel.html#method-i-require:
require(name) → true or false
Loads the given name, returning true if successful and false if the feature is already loaded.
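A quick session makes the once-only behaviour visible (the file name is hypothetical). It also suggests why load didn't appear to help: load does re-execute the file, but test frameworks that autorun via an at_exit hook still only run the collected tests once, at process exit:

require './my_tests'   # => true  -- file is loaded and its code runs
require './my_tests'   # => false -- already loaded, nothing runs again
load './my_tests.rb'   # re-executes the file each time it is called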
You can use something like this to run all the files from the directory:
Dir.foreach(path) do |file|
  next unless file.end_with?('.rb')  # skip '.', '..' and anything that isn't a test file
  puts "Testing #{file}"
  system('ruby', path + file)
end
I am trying to write a script that will run all my tests automatically and check for failures, something like this (it simply runs each test with "ruby file.rb" and parses the output):
def failures?(test_file)
  io = IO.popen("ruby #{test_file}")
  log = io.readlines
  io.close
  # parse the output for failures: "1 tests, 1 assertions, 0 failures, 0 errors"
  log.last.split(',').select { |s| s =~ /failures/ }.first[/\d+/] != "0"
end

puts failures?("test.rb")
But someone could easily place malicious code in test_file and trash everything:
Dir.glob("*")
Dir.mkdir("HACK_DIR")
File.delete("some_file")
What is the way to protect a Ruby script from this kind of attack?
I did something similar to that, using the concept of a "sandbox".
First you create a test user that has no permissions on any of your OS files (nor on your test files).
Your testing system first copies the whole tests root folder to a sandbox (created in a temp location, for example), gives the test user permission on that sandbox, and executes the tests as the test user.
Any file creation/modification/deletion the tests perform is thus restricted to the sandbox. You can also analyse all the post-mortem data the tests left in the sandbox afterwards.
I did this easily on Linux by creating folders under /tmp and using a special user called "tester".
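A minimal sketch of the idea (the "tester" user and the sudo setup are assumptions about your environment):

require 'fileutils'
require 'tmpdir'

def run_sandboxed(test_file)
  Dir.mktmpdir('sandbox') do |sandbox|
    FileUtils.cp(test_file, sandbox)
    FileUtils.chmod(0777, sandbox)  # "tester" may write inside the sandbox only
    # run as the unprivileged user, confined to the sandbox directory
    system('sudo', '-u', 'tester', 'ruby',
           File.basename(test_file), chdir: sandbox)
  end
end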
Hope this helps.
Can I replace an executable (accessed via a system call from Ruby) with an executable that expects certain input and supplies the expected output in a consistent amount of time? I'm mainly operating on Mac OS X 10.6 (Snow Leopard), but I also have access to Linux and Windows. I'm using MRI Ruby 1.8.7.
Background: I'm looking at doing several DNA sequence alignments, one in each thread. When I try using BioRuby for this, either BioRuby or Ruby's standard library tempfile sometimes raises exceptions (which is better than failing silently!).
I set up a test that reproduces the problem, but only some of the time. I assume the main sources of variability between tests are the threading, the tempfile system, and the executable used for alignment (ClustalW). Since ClustalW probably isn't malfunctioning, but can be a source of variability, I'm thinking that eliminating it may aid reproducibility.
For those thinking "select isn't broken" - that's what I'm wondering too. However, according to the changelog, there was concern about tempfile's thread safety in August 2009. Also, I've checked on the BioRuby mailing list that I'm calling the BioRuby code correctly, and that seems to be the case.
I don't really understand what the problem is or what exactly you're after; can't you just write something like:
#!/bin/sh
# test for the expected input
if [ "$*" != "expected input" ]; then
    echo "expected output for failure"
    exit 1
fi
# have it work in a consistent amount of time
CONSISTENT_AMOUNT_OF_TIME=20
sleep $CONSISTENT_AMOUNT_OF_TIME
echo "expected output"
You can. In cases where I'm writing a functional test for program A, I may need to "mock" a program B that A runs via system. What I do then is make program B's pathname configurable, with a default:
class ProgramA
  def initialize(argv)
    @args = parse_args(argv)
    @config = Config.new(@args.config_path || default_config_path)
  end

  def run
    command = [
      program_b_path,
      '--verbose',
      '--do_something_wonderful',
    ].join(' ')
    system(command)
    ...
  end

  def program_b_path
    @config.fetch('program_b_path', default_program_b_path)
  end
end
Program A takes a switch, "--config PATH", which can override the default config file path. The test sets up a configuration file in /tmp:
program_b_path: /home/wayne/project/tests/mock_program_b.rb
and passes that configuration file to program A:
program_a.rb --config /tmp/config.yaml
Now program A will run not the real program B, but the mock one.
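The mock itself can be tiny - something along these lines (a sketch; adjust the canned output to whatever program A expects):

#!/usr/bin/env ruby
# mock_program_b.rb - stands in for program B: ignores its arguments and
# answers instantly with fixed output, so the test gets consistent timing
puts 'expected output'
exit 0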
Have you tried the Mocha gem? It's used a lot in testing, and it does what you describe: it "fakes" a method call on an object (which covers just about anything in Ruby) and returns the result you want without actually running the method. Take this example file:
# test.rb
require 'rubygems'
require 'mocha'
self.stubs(:system).with('ls').returns('monkey')
puts system('ls')
Running this script outputs "monkey", because I stubbed out the system call. You can use this to bypass parts of an application you don't want to test and factor out irrelevant parts.