How can I performance test using shell scripts - tools and techniques?

I have a system to which I must apply load for the purpose of performance testing. Some of the load can be created via LoadRunner over HTTP.
However, in order to generate realistic load for the system I also need to simulate users of a command-line tool which uses a non-HTTP protocol* to talk to the server.
* edit: actually it is HTTP, but we've been advised by the vendor that it's not easy to record/script and replay, so we're limited to invoking it via the CLI tool.
I don't have the LoadRunner licences to do this, and I don't have the time to make the case for acquiring them.
Therefore I was wondering if there is a tool that I could use to control the concurrent execution of a collection of shell scripts (it needs to run on Solaris), which will be my transactions. Ideally it would be able to ramp up in accordance with a predetermined schedule.
I've had a look around and can't tell if JMeter will do the trick; it seems very web-oriented.

You can use the script below to generate load for HTTP/S requests:
#!/bin/bash
# Define variables
set -x                  # run in debug mode
DURATION=60             # how long load should be applied, in seconds
TPS=20                  # number of requests per second
end=$((SECONDS + DURATION))

# Start load
while [ $SECONDS -lt $end ]; do
    for ((i = 1; i <= TPS; i++)); do
        curl -X POST <url> \
            -H 'Accept: application/json' \
            -H 'Authorization: Bearer xxxxxxxxxxxxx' \
            -H 'Content-Type: application/json' \
            -d '{}' \
            --cacert /path/to/cert/cert.crt \
            -o /dev/null -s -w '%{time_starttransfer}\n' >> response-times.log &
    done
    sleep 1
done
wait

# End load
echo "Load test has been completed"
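Once a run completes, response-times.log holds one time_starttransfer value (in seconds) per request. A minimal sketch for summarising it afterwards with sort and awk (assuming the log format produced by the script above):
#!/bin/bash
# Summarise the response times collected by the load script above.
# Assumes response-times.log contains one time value (seconds) per line.
sort -n response-times.log | awk '
    { times[NR] = $1; sum += $1 }
    END {
        if (NR == 0) { print "no samples"; exit }
        idx = int(NR * 0.95); if (idx < 1) idx = 1
        printf "samples: %d\n", NR
        printf "mean:    %.3fs\n", sum / NR
        printf "p95:     %.3fs\n", times[idx]
        printf "max:     %.3fs\n", times[NR]
    }'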

If all you need is to start a bunch of shell scripts in parallel, you can quickly create something of your own in Perl with fork, exec and sleep.
#!/usr/bin/perl
use strict;
use warnings;

# Launch one copy of script.sh per second, 1000 in total.
for my $i (1 .. 1000)
{
    if (fork == 0)
    {
        # Child: replace this process with the shell script.
        exec("script.sh");
        exit;
    }
    sleep 1;
}

# Reap all the children when they finish.
1 while wait() != -1;

For anyone interested, I have written a Java tool to manage this for me. It references a few files to control how it runs:
1) Schedules File - defines various named lists of timings which control the lengths of sequential phases.
e.g. MAIN,120,120,120,120,120
This will result in a schedule named MAIN which has 5 phases each 120 seconds long.
2) Transactions File - defines the transactions that need to run. Each transaction has a name, a command to call, a boolean controlling repetition, an integer controlling the pause between repetitions in seconds, a data file reference, the schedule to use and per-phase increments.
e.g. Trans1,/path/to/trans1.ksh,true,10,trans1.data.csv,MAIN,0,10,0,10,0
This will result in a transaction running trans1.ksh repeatedly, with a pause of 10 seconds between repetitions. It will reference the data in trans1.data.csv. During phase 1 it will increase the number of parallel invocations by 0, phase 2 will add 10 parallel invocations, phase 3 adds none, and so on. Phase times are taken from the schedule named MAIN.
3) Data Files - as referenced in the transaction file, this will be a CSV with a header. Each line of data will be passed to subsequent invocations of the transaction.
e.g.
HOSTNAME,USERNAME,PASSWORD
server1,jimmy,password123
server1,rodney,ILoveHorses
These get passed to the transaction scripts via environment variables (e.g. PASSWORD=ILoveHorses), a bit clunky, but workable.
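For illustration, a transaction script on the receiving end might look something like this (just a sketch; the variable names come from the CSV header above, and the CLI tool invocation is a hypothetical placeholder):
#!/bin/ksh
# Sketch of a transaction script consuming the columns passed as environment
# variables (HOSTNAME, USERNAME, PASSWORD from the data file header above).
echo "Connecting to ${HOSTNAME} as ${USERNAME}"
# Hypothetical CLI tool invocation - replace with the real command:
# some-cli-tool --host "$HOSTNAME" --user "$USERNAME" --password "$PASSWORD"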
My Java tool simply parses the config files and sets up a manager thread per transaction, which in turn takes care of setting up and starting executor threads in accordance with the configuration. Managers take care of adding executors linearly so as not to totally overload the system.
When it runs, it just reports every second on how many workers each transaction has running and which phase it's in.
It was a fun little weekend project; it's certainly no LoadRunner and I'm sure there are some massive flaws in it that I'm currently blissfully unaware of, but it seems to do OK.
So in summary the answer here was to "roll ya own".

Related

Issues with mkdbfile in a simple "read a file > Create a hashfile job"

Hello DataStage-savvy people.
Two days in a row, the same single DataStage job failed (not stopping at all).
The job tries to create a hashfile using the command /logiciel/iis9.1/Server/DSEngine/bin/mkdbfile /[path to hashfile]/[name of hashfile] 30 1 4 20 50 80 1628
(last trace in the log)
Something to consider (or maybe not?):
The [name of hashfile] directory exists (and was last modified at the time of execution), but the file D_[name of hashfile] does not.
I am trying to understand what happened so that I can prevent the same incident from happening on the next run (tonight).
Before this, the job had been in production for a long time and we had never had an issue with it.
Using Datastage 9.1.0.1
Did you check the job log to see if it captured an error? When a DataStage job executes any system command via the command execution stage or similar methods, the stdout of the called command is captured and then added to a message in the job log. Thus if the mkdbfile command gives any output (success messages, errors, etc.) it should be captured and logged. The event may not be flagged as an error in the job log depending on the return code, but the output should be there.
If there is no logged message revealing the cause of the failed create, a couple of things to check are:
-- Was the target directory on a disk that was possibly out of space at that time?
-- Do you have any antivirus software that scans directories on your system at certain times? If so, it can interfere with I/O. If it does a scan at the same time as you had the problem, you may wish to update the AV software settings to exclude the directory you were writing the dbfile to.
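If disk space is the suspect, one way to gather evidence before tonight's run is to snapshot free space for the target filesystem around the job's schedule; a minimal sketch, assuming the hashfile path from the job (the path and log location are placeholders):
#!/bin/bash
# Hypothetical check: record the date and free space for the hashfile's filesystem.
HASHFILE_DIR="/path/to/hashfile"     # replace with the real path passed to mkdbfile
{
    date
    df -k "$HASHFILE_DIR"
} >> /tmp/hashfile_space.log
Run it from cron shortly before and after the job's scheduled time and compare the output against the failure timestamp.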

JMeter test works differently from CLI than GUI - why?

I'm creating a small test using JMeter. So far I have one Thread Group that executes an HTTP request, waits for 10 seconds, then executes another HTTP request and checks what was returned. If I start 100 such threads with a 1 second ramp-up period from the JMeter GUI, it works fine: I get the expected values and the whole test finishes in 22 seconds. However, when I start the very same jmx file from the command line, the test runs for more than 120 seconds and some threads (at the last run, 36 out of the 100) don't get the expected value. This might indicate a bug in the system I'm testing, but I don't understand why the test takes that long from the CLI and why I get errors from the CLI. What is the difference between running the test from the GUI and from the CLI? Does the CLI run the tests "more in parallel"? By the way, this is the command line I'm using:
/home/nar/apache-jmeter-3.3/bin/jmeter -n -t test_transactions.jmx -l test_transactions.out
I'm afraid I cannot share the test plan, but I can share the "outline":
+ Thread Group
+ CSV Data Set Config
+ HTTP Request
| + JSON Extractor
+ Constant timer
+ HTTP Request
| + JSON Extractor
| + Response Assertion
+ View Results Tree
+ Save Responses to a file
+ View Results in Table
+ Summary Report
The Constant timer waits for 10 seconds. The first HTTP Request sends in some data and initiates a computation, the second checks the result.
I think you should disable the following listeners in a non-GUI test:
View Results Tree
Save Responses to a file
View Results in Table
Summary Report
After disabling them you will still have results via -l test_transactions.out, which you can later view in GUI mode using the Browse button in your listener.
In non-GUI mode you can also generate a dashboard report if you want by adding -e -o /path/dashboardfolder
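For example, the full non-GUI invocation with dashboard generation would look something like this (note that the -o folder must be empty or not yet exist):
/home/nar/apache-jmeter-3.3/bin/jmeter -n -t test_transactions.jmx \
    -l test_transactions.out \
    -e -o /path/dashboardfolder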
It actually does indicate a bug in the system under test. The reason is that you must run JMeter in non-GUI mode, as the GUI creates a huge overhead in terms of resource consumption, especially when you're using listeners, and above all View Results Tree.
So my expectation is that in non-GUI mode you're simply generating a much heavier load, which your application cannot handle. You can check this using e.g. the Active Threads Over Time and Transactions Per Second listeners.

Bash script that checks website every 10 seconds

The following script checks a site's content every 10 seconds to see if it has changed. It's for a very time-sensitive application: if something on the site has changed, I have mere seconds to do something else. It will then start a new download-and-compare cycle and wait for the next change. The "do something else" has yet to be scripted and is not relevant to the question.
The question: will it be a problem for a public website to have a script downloading a single page every 10-15 seconds? If so, is there any other way to monitor a site, unmanned?
#!/bin/bash
Domain="example.com"
Ocontent=$(curl -L "$Domain")
Ncontent="$Ocontent"

# Keep re-downloading until the content differs from the original snapshot.
until [ "$Ocontent" != "$Ncontent" ]; do
    Ocontent=$(curl -L "$Domain")
    #CONTENT CHANGED TRUE
    #if [ "$Ocontent" == "$Ncontent" ]; then
    #    Ocontent=$(curl -L "$Domain")
    #fi
    echo "$Ocontent"
    sleep 10
done
The problems you're going to run into:
If the site notices and has a problem with it, you may end up on a banned IP list. Using an IP pool or other distributed resource can mitigate this.
Polling a website at precisely every x seconds is unlikely to work; network latency will cause a great deal of variance.
If you get a network partition, your code should know how to cope. (What if your connection goes down? What should happen?)
Note that the immediate response is only part of downloading a webpage. There may be changes to referenced files, such as CSS, JavaScript or images, that are not apparent from the original HTTP response alone.
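A slightly more defensive variant of the polling loop is sketched below (assumptions: the domain from the question, curl and cksum available; it bounds hung connections with a timeout, skips failed downloads instead of treating them as changes, and compares checksums rather than full bodies):
#!/bin/bash
Domain="example.com"
Interval=10

fetch_hash() {
    local body
    # --max-time bounds a hung connection; a failed download yields no hash.
    body=$(curl -L -s --max-time 8 "$Domain") || return 1
    printf '%s' "$body" | cksum
}

Baseline=$(fetch_hash) || { echo "initial fetch failed" >&2; exit 1; }
while :; do
    sleep "$Interval"
    Current=$(fetch_hash) || continue    # skip this round if the download failed
    if [ "$Current" != "$Baseline" ]; then
        echo "Content changed at $(date)"
        # the "do something else" hook goes here, then re-baseline
        Baseline="$Current"
    fi
done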

Automate the Endurance Testing using Jmeter

I want to automate endurance testing using JMeter. I have a test plan consisting of 2 thread groups, each in turn having multiple requests.
I want to run the test plan first for 10 users, then for 50, then for 100 and so on, with a 20-minute pause between runs, in an automatic manner, so that I do not have to sit and wait for 20 minutes, type 50 users on the command line, wait another 20 minutes, type 100 users, and so on.
What you are asking is possible using the Ultimate Thread Group.
As a sample, I have added steps for 10, 50 and 100 users, each held for 20 minutes. If you need more runs you can add more rows with customized settings.
I figured it out. I can have an Excel file with a column containing the thread counts, e.g. 10, 20, 30. Then, in a Java program (using Eclipse/IntelliJ), I can read those values from the Excel file in a loop and assign each one to a String variable "value". Inside the loop I can then invoke the jmx file as below:
Runtime rt = Runtime.getRuntime();
Process pr = rt.exec("C:\\apache-jmeter-2.13\\bin\\jmeter.bat -t \"C:\\jmeter scripts\\test.jmx\""
        + " -Jusers=" + value
        + " -n -l \"C:\\jmeter scripts\\nonGUI.csv\""
        + " -j \"C:\\jmeter scripts\\jmeterLogs.log\"");
Thread.sleep(120000);
Hope this helps others who want to perform endurance testing using JMeter + Java.
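For completeness, the same ramp-up can be driven from a plain shell loop instead of Java; a minimal sketch, assuming jmeter is on the PATH, the Thread Group reads its thread count from the users property (${__P(users)}), and a 20-minute pause between runs:
#!/bin/bash
# Hypothetical shell alternative: run the plan for increasing user counts,
# pausing 20 minutes between runs.
for users in 10 50 100; do
    jmeter -n -t test.jmx -Jusers="$users" -l "results_${users}.csv"
    sleep 1200   # 20-minute pause before the next run
done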

Neo4j 2.0.0 - Poor performance for dev/test in a virtual machine

I have a Neo4j server running inside a virtual machine using Ubuntu 13.10, and I am accessing it via REST using Cypher queries. The virtual machine has 4 GB of memory allocated to it.
I've changed the open file count to 40000, set the initial JVM heap to 1G and my neo4j.properties file is as follows:
neostore.nodestore.db.mapped_memory=250M
neostore.relationshipstore.db.mapped_memory=100M
neostore.propertystore.db.mapped_memory=100M
neostore.propertystore.db.strings.mapped_memory=100M
neostore.propertystore.db.arrays.mapped_memory=100M
keep_logical_logs=3 days
node_auto_indexing=true
node_keys_indexable=id
I've also updated sysctl based on the Neo4j Linux tuning guide:
vm.dirty_background_ratio = 50
vm.dirty_ratio = 80
Since I am testing queries, the basic routine is to run my suite of tests and then delete all of the nodes and run them all again. At the start of each test run, the database has 0 nodes in it. My suite of tests of about 100 queries is taking 22 seconds to run. Basic parameterized creates such as:
CREATE (x:user { email: {param0},
name: {param1},
displayname: {param2},
id: {param3},
href: {param4},
object: {param5} })
CREATE x-[:LOGIN]->(:login { password: {param6},
salt: {param7} } )
are currently taking over 170ms to execute (and that's the average, first query time is 700ms). During a test run, the CPU in the VM never exceeds 50% and memory usage is at a steady 1.4Gb.
Why would creating a single node in an empty database take 170ms? At this point unit testing is becoming almost impossible since it is so slow. This is my first time trying to tune Neo4j so I'm not really sure how to figure out where the problem is or what changes should be made.
Additional Details
I'm using Go 1.2 to make REST calls to the cypher endpoint (http://localhost:7474/db/data/cypher) of a locally installed Neo4j instance. I'm setting the request headers for content-type to "application/json", accept to "application/json" and "X-Stream" to true. I always return either an array of maps or nothing depending on the query.
It seems like the creates are the problem and are taking forever. For example:
2014/01/15 11:35:51 NewUser took 123.314938ms
2014/01/15 11:35:51 NewUser took 156.101784ms
2014/01/15 11:35:52 NewUser took 167.439442ms
2014/01/15 11:35:52 ValidatePassword took 4.287416ms
NewUser creates two new nodes and one relationship and is taking 167ms, while ValidatePassword is a read-only operation and it completes in 4ms. Also note that the three calls to NewUser are identical parameterized queries. While the creates are the big problem, I'm also a little concerned that Neo4j is taking 4ms to just find a labeled node when there are only 100 nodes in the database.
I do not restart the server in between test runs or delete the database. I issue a single delete all nodes query MATCH (n) OPTIONAL MATCH (n)-[r]-() DELETE n,r at the end of the test run. Running the same test suite multiple times back to back does not improve the query times.
Are your 100 queries all the same, just with different parameters, or actually 100 different queries?
What you see is actually setup work. The parser has to load the parsing rules initially, which takes a few ms. Also, new queries that have not been seen before are compiled, planned and put into the query cache.
So the first query always takes a bit longer, but as you parameterize, all subsequent ones should be fast.
Can you confirm that?
I think you see the transactional overhead of flushing the transaction to disk.
Did you try to batch more requests into one, e.g. with the transactional endpoint? Or /db/data/batch (but I'd rather use the new tx endpoint /db/data/transaction)? See the sketch after these questions.
Did you create an index for your lookup property for your validate query?
Can you do me a favor and test your create query without a label? I found some perf issues when testing that myself earlier this week.
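A minimal sketch of batching several parameterized creates into one request against the transactional endpoint (the URL assumes the local install from the question; statements and parameter values are placeholders):
# Batch several parameterized CREATEs into a single transaction (Neo4j 2.0 tx endpoint).
curl -s -X POST http://localhost:7474/db/data/transaction/commit \
    -H 'Content-Type: application/json' -H 'Accept: application/json' -H 'X-Stream: true' \
    -d '{
      "statements": [
        { "statement": "CREATE (x:user { email: {param0}, name: {param1} })",
          "parameters": { "param0": "a@example.com", "param1": "Jimmy" } },
        { "statement": "CREATE (x:user { email: {param0}, name: {param1} })",
          "parameters": { "param0": "b@example.com", "param1": "Rodney" } }
      ]
    }'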
Just ran a test with curl
for i in `seq 1 10`; do time curl -i -H content-type:application/json -H accept:application/json -H X-Stream:true -d @perf_test.json http://localhost:7474/db/data/cypher; done
I'm getting between 16 and 30ms per request externally including starting curl
HTTP/1.1 200 OK
Content-Type: application/json; charset=UTF-8; stream=true
Access-Control-Allow-Origin: *
Transfer-Encoding: chunked
Server: Jetty(9.0.5.v20130815)
{"columns":[],"data":[]}
real 0m0.016s
user 0m0.005s
sys 0m0.005s
Perhaps it is rather the VM (disk or network) or the cross-VM communication?
Did another test with ab and 1000 requests for both endpoints, got a mean of about 5 ms both times.
https://gist.github.com/jexp/8452037
