JMeter - URL from command line

I have been trying to pass the URL of a web page from the command line to do performance testing with JMeter.
I set up User Defined Variables like:
NumberOfUsers ${__P(NumberOfUsers,2)}
HowManyTimesToRun ${__P(HowManyTimesToRun,2)}
RampUpTime ${__P(RampUpTime, 10)}
Host ${__P(Host)}
I tried using the JMeter command as:
./jmeter.sh -n -t Performance.jmx -l old88.jtl -JNumberOfUsers=5 -JRampUpTime=10JHowManyTimesToRun=2 -JHost=www.google.com
It seems to take all the values correctly except the host name. Is there a way to pass the URL from the command line? I use this property in the HTTP Request Defaults element.

I do not see any issues here, except that a space and a '-' are missing before the 'HowManyTimesToRun' property. Maybe it's a typo!
./jmeter.sh -n -t Performance.jmx -l old88.jtl -JNumberOfUsers=5 -JRampUpTime=10 -JHowManyTimesToRun=2 -JHost=www.google.com
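One more thing worth checking (not shown in the question): the HTTP Request Defaults element must actually reference the variable, e.g. ${Host} in its Server Name or IP field. As a minimal sketch, assuming you also want a fallback host when the property is omitted (__P takes a default value as its second argument):
Host ${__P(Host,localhost)}
./jmeter.sh -n -t Performance.jmx -l old88.jtl -JHost=www.google.com
With that default in place, omitting -JHost makes the test run against localhost instead of failing on an empty host.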

Related

Bash HTTPie works when calling STRAVA script through command line but not through crontab

I'm new to shell scripting and I have a Bash script pulling in data from the Strava API and manipulating/reading it using jq.
When I copy and paste the first line of code (the one that fetches the data) into the command line, it works. When I run bash strava.sh, the entire program works. But when I execute the program through crontab, I get the following error:
usage: http [--json] [--form] [--pretty {all,colors,format,none}]
[--style STYLE] [--print WHAT] [--headers] [--body] [--verbose]
[--all] [--history-print WHAT] [--stream] [--output FILE]
[--download] [--continue]
[--session SESSION_NAME_OR_PATH | --session-read-only SESSION_NAME_OR_PATH]
[--auth USER[:PASS]] [--auth-type {basic,digest}]
[--proxy PROTOCOL:PROXY_URL] [--follow]
[--max-redirects MAX_REDIRECTS] [--timeout SECONDS]
[--check-status] [--verify VERIFY]
[--ssl {ssl2.3,tls1,tls1.1,tls1.2}] [--cert CERT]
[--cert-key CERT_KEY] [--ignore-stdin] [--help] [--version]
[--traceback] [--default-scheme DEFAULT_SCHEME] [--debug]
[METHOD] URL [REQUEST_ITEM [REQUEST_ITEM ...]]
http: error: unrecognized arguments: https://www.strava.com/oauth/token client_id=xxx client_secret=xxx refresh_token=xxx grant_type=refresh_token
Here's what the line looks like in my script:
access_token=$(http POST "https://www.strava.com/oauth/token" client_id="xxx" client_secret="xxx" refresh_token="xxx" grant_type="refresh_token" | jq -r '.access_token')
When running through crontab, the above error is printed for the first line (i.e. the line given above), so I'm fairly certain the problem lies in that line. What am I doing wrong?
The HTTPie manual (https://httpie.io/docs/cli/best-practices) advises the use of:
--ignore-stdin
for "non-interactive invocations".
Possibly a PATH issue - are there multiple copies of http installed?
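A minimal sketch of that check, assuming a vixie-cron style crontab and hypothetical paths: cron runs with a much smaller PATH than a login shell, so either set PATH at the top of the crontab or call http by the absolute path reported by which http:
PATH=/usr/local/bin:/usr/bin:/bin
*/5 * * * * /home/user/strava.sh >> /tmp/strava.log 2>&1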
Is there a "%" anywhere in your parameters? Crontab interprets % as a newline, so you'll have to escape it as "%%".
As an aside, please put your command substitution inside double quotes, lest one day Strava returns something like "AC0f4;rm * 0cd-4b203":
access_token="$( http POST ...
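Putting the suggestions together, a sketch of the corrected line (credentials elided exactly as in the question):
access_token="$(http --ignore-stdin POST "https://www.strava.com/oauth/token" \
    client_id="xxx" client_secret="xxx" refresh_token="xxx" \
    grant_type="refresh_token" | jq -r '.access_token')"
The --ignore-stdin flag stops HTTPie from waiting on standard input, which is closed or empty under cron, and the outer double quotes follow the quoting advice above.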

Parse string in sh file

For a GitLab CI/CD project, I need to find the URL of a Knative service (used to deploy a web service) so that I can use it as my base URL for load testing.
I have found that I can get the URL (and other information) with the command kubectl get ksvc helloworld-go, which outputs:
NAME            URL                                                 LATESTCREATED         LATESTREADY           READY   REASON
helloworld-go   http://helloworld-go.default.34.83.80.117.xip.io    helloworld-go-96dtk   helloworld-go-96dtk   True
Can someone please suggest an easy way to extract only the URL in an sh script? I believe the easiest way might be to grab the text between the first and second space on the second line.
kubectl get ksvc helloworld-go | grep -oP "http://\S*"
or
kubectl get ksvc helloworld-go | grep -Eo "http://[^[:space:]]*"
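If you would rather not scrape the table output at all, kubectl can print just that field. A sketch assuming the Knative Service object exposes its address at .status.url (as current Knative versions do):
kubectl get ksvc helloworld-go -o jsonpath='{.status.url}'
This avoids any dependence on column widths or ordering in the human-readable table.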

cURL call works with number but not with variable containing number

I've run into a strange issue. I'm trying to script my router to collect usage stats and other data. I'm making one cURL call to the auth URL to get a valid session id, then another, using that session id, to the page I need.
Here is my script:
SESSION_ID=$(curl --silent -D - -X POST http://10.0.0.1/login.cgi -d'admin_username=admin&admin_password=admin' | grep 'SESSION' | sed 's/Set-Cookie: SESSION=//' | sed 's/; path=\///')
echo $SESSION_ID # 1234567890
curl -v -H "Cookie: SESSION=$SESSION_ID" http://10.0.0.1/modemstatus_dslstatus.html
If I manually take the session id and insert it in place of "$SESSION_ID", everything is dandy. cURL shows the headers (via -v) and they are correct. Running the command while manually inserting the session id produces identical headers.
I'm sure it's something small. Please teach me something :)
Check for carriage returns (\r) in your variables; they wouldn't appear with a simple echo in some cases.
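A sketch of how that fix would look here - append tr -d '\r' to strip any carriage return left at the end of the header value:
SESSION_ID=$(curl --silent -D - -X POST http://10.0.0.1/login.cgi \
    -d 'admin_username=admin&admin_password=admin' \
    | grep 'SESSION' \
    | sed 's/Set-Cookie: SESSION=//' | sed 's/; path=\///' \
    | tr -d '\r')
HTTP headers are terminated with \r\n, so the sed pipeline leaves a trailing \r in the variable; it is invisible in echo output but corrupts the Cookie header in the second request.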

Details about which user from csv failed response assertion in Jmeter

I am using JMeter for web UI performance testing. I have a list of users with passwords in a CSV file. I am using a Response Assertion to check the failed-password scenario.
How can I record which user from the CSV failed?
I would recommend going for the Sample Variables property. For example, if you defined a ${username} variable which holds the user name from the CSV, you can get it added to the JMeter .jtl results file by adding the next line to the user.properties file:
sample_variables=username
If you need to store more variables - provide them separated by commas:
sample_variables=username,password
Remember that:
JMeter restart is required to pick the property up
You can pass it via the -J command-line argument as well, like:
jmeter -Jsample_variables=username,password -n -t test.jmx -l results.jtl
See the Apache JMeter Properties Customization Guide for more information on the different JMeter property types and the ways of working with them.
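For illustration only (columns abridged and values invented): with sample_variables=username set, each line of results.jtl gains a trailing column holding the value that sample used, so a failed assertion can be traced back to the CSV row that caused it:
timeStamp,elapsed,label,responseCode,success,...,username
1617184800123,345,Login,200,false,...,jdoe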

Using CURL to download file and view headers and status code

I'm writing a Bash script to download image files from Snapito's web page snapshot API. The API can return a variety of responses indicated by different HTTP response codes and/or some custom headers. My script is intended to be run as an automated Cron job that pulls URLs from a MySQL database and saves the screenshots to local disk.
I am using curl. I'd like to do these three things with a single curl command:
Extract the HTTP response code
Extract the headers
Save the file locally (if the request was successful)
I could do this using multiple curl requests, but I want to minimize the number of times I hit Snapito's servers. Any curl experts out there?
Or if someone has a Bash script that can respond to the full documented set of Snapito API responses, that'd be awesome. Here's their API documentation.
Thanks!
Use the dump-header option:
curl -D /tmp/headers.txt http://server.com
Use curl -i (include HTTP header) - which will yield the headers, followed by a blank line, followed by the content.
You can then split out the headers / content (or use -D to save directly to file, as suggested above).
There are three relevant options: -i, -I, and -D.
> curl --help | egrep '^ +\-[iID]'
-D, --dump-header FILE Write the headers to FILE
-I, --head Show document info only
-i, --include Include protocol headers in the output (H/F)
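Putting it together, a sketch that covers all three goals in one request (the endpoint is a placeholder, not Snapito's real API URL): -D saves the headers to a file, -o saves the body, and -w prints the status code after the transfer so it can be captured in a variable:
url="https://example.com/snapshot"
status=$(curl -s -D /tmp/headers.txt -o /tmp/image.png -w '%{http_code}' "$url")
if [ "$status" = "200" ]; then
    echo "image saved; headers are in /tmp/headers.txt"
fi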
