How to verify AB responses? - apachebench

Is there a way to make sure that AB gets proper responses from the server? For example:
To force it to output the response of a single request to STDOUT, or
To ask it to check that some text fragment is included in the response body.
I want to make sure that authentication worked properly and I am measuring the response time of the target page, not the login form.
Currently I just replace ab -n 100 -c 1 -C "$MY_COOKIE" $MY_REQUEST with curl -b "$MY_COOKIE" $MY_REQUEST | lynx -stdin .
If that's not possible, is there a more comprehensive alternative tool that can do it?

You can use the -v option as listed in the man doc:
-v verbosity
Set verbosity level - 4 and above prints information on headers, 3 and above prints response codes (404, 200, etc.), 2 and above prints warnings and info.
https://httpd.apache.org/docs/2.4/programs/ab.html
So it would be:
ab -n 100 -c 1 -C "$MY_COOKIE" -v 4 $MY_REQUEST
This will print the response headers and HTML content. A verbosity of 3 is enough if you only need to check for a redirect header.
I didn't try piping it to Lynx but grep worked fine.
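For example, a quick sanity check (a sketch; "Logout" is just a stand-in for whatever text appears only on the authenticated page):
ab -n 1 -c 1 -C "$MY_COOKIE" -v 4 "$MY_REQUEST" 2>&1 | grep -c "Logout"
If the count is non-zero, the cookie was accepted and ab is hitting the target page rather than the login form.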

Apache Benchmark is good for a cursory glance at your system but is not very sophisticated. I am currently attempting to tune a web service and am finding that ab does not measure the complete response time once the transfer of the body is taken into account. Also, as you mention, you cannot verify what is returned.
My current recommendation is Apache JMeter. http://jmeter.apache.org/
I am having much better success with it. You may find the Response Assertion useful for your situation. http://jmeter.apache.org/usermanual/component_reference.html#Response_Assertion
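As a sketch, once a test plan (here called testplan.jmx, an assumed file containing an HTTP Request sampler with your cookie and a Response Assertion on the expected page text) has been saved from the GUI, it can be run headlessly:
jmeter -n -t testplan.jmx -l results.jtl
The assertion failures then show up in the results log, so you can both load-test and verify the responses in one run.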

Related

Curl taking too long to send https request using command line

I have implemented a shell script which sends an HTTPS GET request to a proxy with an Authorization header.
Here is my command :
curl -s -o /dev/null -w "%{http_code}" -X GET -H "Authorization: 123456:admin05" "https://www.mywebpage/api/request/india/?ID=123456&Number=9456123789&Code=01"
It waits around 12 seconds before the request reaches the proxy, which then returns a status code like 200, 400, 500, etc.
Is it possible to reduce this time and make it faster using curl?
Please advise me on such a case.
Thanks.
Use the -v or --verbose option along with --trace-time.
It gives details of the actions being taken, along with timings.
This includes DNS resolution, the SSL handshake, etc. A line starting with '>' means a header/body is being sent; '<' means one is being received.
Based on the time differences between the operations in the sequence, you can work out whether the server is slow to respond (the gap between request and response) or whether network latency or bandwidth (the time spent receiving the response) is the problem.
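Alternatively, curl's --write-out variables give a compact timing breakdown without wading through the full trace; a sketch against the request from the question:
curl -s -o /dev/null -X GET -H "Authorization: 123456:admin05" \
  -w "dns: %{time_namelookup}s  connect: %{time_connect}s  tls: %{time_appconnect}s  first byte: %{time_starttransfer}s  total: %{time_total}s\n" \
  "https://www.mywebpage/api/request/india/?ID=123456&Number=9456123789&Code=01"
If time_namelookup dominates, DNS is the culprit; if time_starttransfer dominates, the proxy or upstream server is slow to respond.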

What is the fastest way to perform an HTTP request and check for 404?

Recently I needed to check for a huge list of filenames if they exist on a server. I did this by running a for loop which tried to wget each of those files. That was efficient enough, but took about 30 minutes in this case. I wonder if there is a faster way to check whether a file exists or not (since wget is for downloading files and not performing thousands of requests).
I don't know if that information is relevant, but it's an Apache server.
curl would be the best option in a for loop, and here is a straightforward, simple way. Run this in your for loop:
curl -I --silent http://www.yoururl/linktodetect | grep -m 1 -c 404
This simply checks the HTTP response headers for a 404 on the link. If the file/link is missing and returns a 404, the command outputs the number 1; otherwise, if the file/link is valid and does not return a 404, the output is 0.
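Putting that into a loop, a minimal sketch (files.txt and the base URL are placeholders for your own list and server):
while read -r f; do
  code=$(curl -o /dev/null --silent --head --write-out '%{http_code}' "http://www.yoururl/$f")
  [ "$code" = "404" ] && echo "missing: $f"
done < files.txt
Using --head keeps each check to a HEAD request, so nothing is actually downloaded.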

Curl range not working (downloads entire file)

curl -v -r 0-500 http://somefile -o localfile
It should download just the first 501 bytes, no? Instead, it just downloads the entire thing. All 67 megabytes. Thanks curl! Could my company's proxy servers be blocking this feature somehow? I am skeptical about that, since the downloads themselves do work, just not the range feature. Am I missing something?
As a client you could always abort the download when you have received what you want.
By using head, you will be able to limit the download to 500 bytes, even if the server does not accept the Range header:
curl -v -r 0-500 http://somefile | head -c 500 > localfile
It should download just the first 501 bytes, no?
It depends on the server. From man curl:
You should also be aware that many HTTP/1.1 servers do not have this feature enabled, so that when you attempt to get a range, you'll instead get the whole document.
As you can see in the response from the server, it's using HTTP/1.1. So it's not surprising that the range feature is not supported at the server side.
Please use the following command:
curl -H "range: bytes=354-500" -O http://example.com/file.extension

Using CURL to download file and view headers and status code

I'm writing a Bash script to download image files from Snapito's web page snapshot API. The API can return a variety of responses indicated by different HTTP response codes and/or some custom headers. My script is intended to be run as an automated Cron job that pulls URLs from a MySQL database and saves the screenshots to local disk.
I am using curl. I'd like to do these 3 things using a single curl command:
Extract the HTTP response code
Extract the headers
Save the file locally (if the request was successful)
I could do this using multiple curl requests, but I want to minimize the number of times I hit Snapito's servers. Any curl experts out there?
Or if someone has a Bash script that can respond to the full documented set of Snapito API responses, that'd be awesome. Here's their API documentation.
Thanks!
Use the dump headers option:
curl -D /tmp/headers.txt http://server.com
Use curl -i (include HTTP header) - which will yield the headers, followed by a blank line, followed by the content.
You can then split out the headers / content (or use -D to save directly to file, as suggested above).
There are three options: -i, -I, and -D
> curl --help | egrep '^ +\-[iID]'
-D, --dump-header FILE Write the headers to FILE
-I, --head Show document info only
-i, --include Include protocol headers in the output (H/F)
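Combining the options above with -o and --write-out, a single request can do all three things the question asks for; a sketch (the file paths and $URL are placeholders):
code=$(curl -s -D /tmp/headers.txt -o /tmp/snapshot.png -w '%{http_code}' "$URL")
echo "HTTP status: $code"
The headers land in /tmp/headers.txt, the body in /tmp/snapshot.png, and the status code in $code for the script to branch on.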

Changing POST data used by Apache Bench per iteration

I'm using ab to do some load testing, and it's important that the supplied querystring (or POST) parameters change between requests.
I.e. I need to make requests to URLs like:
http://127.0.0.1:9080/meth?param=0
http://127.0.0.1:9080/meth?param=1
http://127.0.0.1:9080/meth?param=2
...
to properly exercise the application.
ab seems to only read the supplied POST data file once, at startup, so changing its content during the test run is not an option.
Any suggestions?
You're going to need to use a more full-featured benchmarking tool like jMeter for this.
Add my recommendation for jMeter...it works very well!
You could also create a script that creates a second script with something like:
ab -n 1 -c 1 'http://yoursever.com/method?param=0' &
ab -n 1 -c 1 'http://yoursever.com/method?param=1' &
ab -n 1 -c 1 'http://yoursever.com/method?param=2' &
ab -n 1 -c 1 'http://yoursever.com/method?param=3' &
ab -n 1 -c 1 'http://yoursever.com/method?param=4' &
But that's only really useful if you're trying to simulate load and observe your server. The actual benchmarks will have to be collated if you want to check ab performance. At that point I'd just use jMeter. For my use, I just need to simulate load and the ab processes are light enough that running 100 like this is no problem.
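A sketch of that generator approach (the parameter count and the URL mirror the example above and are assumptions):
for i in $(seq 0 99); do
  echo "ab -n 1 -c 1 'http://yoursever.com/method?param=$i' &"
done > run_load.sh
echo "wait" >> run_load.sh
sh run_load.sh
Each ab invocation runs in the background, and the trailing wait keeps the generated script alive until all of them finish.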
Here is a patched version of ab (or the patch itself):
http://www.andboson.com/?p=1372
This version includes the patch from http://chrismiles.info/dev/testing/ab
and can also read multiple POST bodies, line by line.
Update:
Sample request:
./ab -v1 -n2 -c1 -T'application/json' -ppostfile http://api.webhookinbox.com/i/HX6mC1WS/in/
postfile content:
{"data1":1, "data2":"4"}
{"data0":0, "x":"y"}
Update 2:
There is also an alternative:
https://github.com/andboson/ab-go
