Make a (curl-like) HTTP request without the "HTTP version" for testing? - bash

I'm testing malformed HTTP requests on OS X, but I can't work out how to make a request with a missing or malformed HTTP version.
curl seems to allow only valid presets (--http1.0, --http1.1, --http1).
What's the easiest way to construct a request without an "HTTP version"?
Example:
The following commands produce the following request lines:
Ex1.
command: curl -i http://localhost:8080/cat.jpg?v=1
request: GET /cat.jpg?v=1 HTTP/1.1
Ex2.
command: curl -i http://localhost:8080/cat.jpg?v=1 --http1.0
request: GET /cat.jpg?v=1 HTTP/1.0
Wanted
How could I create the following?
command: ???
request: GET /cat.jpg?v=1 (missing HTTP version)
EDIT: ANSWER
curl only deals with valid requests; netcat is an alternative that gives more control.
See this answer.
Thanks @DanFromGermany
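For example, a minimal sketch using netcat, assuming a server listening on localhost:8080 (the path is illustrative):
# Send a raw request line with no HTTP version; nc passes stdin through verbatim.
# The blank line (the second \r\n) terminates the request.
printf 'GET /cat.jpg?v=1\r\n\r\n' | nc localhost 8080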

Related

regular expression inside a cURL call

I have a cURL call like this:
curl --silent --max-filesize 500 --write-out "%{http_code}\t%{url_effective}\n" 'http://fmdl.filemaker.com/maint/107-85rel/fmpa_17.0.2.[200-210].dmg' -o /dev/null
This call generates a list of URLs with the HTTP code (200 or 404, normally) like this:
404 http://fmdl.filemaker.com/maint/107-85rel/fmpa_17.0.2.203.dmg
404 http://fmdl.filemaker.com/maint/107-85rel/fmpa_17.0.2.204.dmg
200 http://fmdl.filemaker.com/maint/107-85rel/fmpa_17.0.2.205.dmg
404 http://fmdl.filemaker.com/maint/107-85rel/fmpa_17.0.2.206.dmg
The only valid URLs are the ones preceded by the 200 HTTP code, so I would like to put a regular expression in the cURL call so that it only downloads the URLs that return 200.
Any ideas on how to do this without using a bash script?
Thank you in advance
You can use the following:
curl --silent -f --max-filesize 500 --write-out "%{http_code}\t%{url_effective}\n" -o '#1.dmg' 'http://fmdl.filemaker.com/maint/107-85rel/fmpa_17.0.2.[200-210].dmg'
This will try to reach every URL and, when the response is neither a 404 nor too large, download it into a file whose name is based on the index in the URL.
The -f flag makes curl avoid outputting the response body when the HTTP code isn't a success one, while the -o flag specifies an output file, where #1 corresponds to the effective value of your [200-210] range (adding other [] or {} groups would let you refer to other parts of the URL by their index).
Note that during my tests, the --max-filesize 500 flag prevented the download of the only URL which didn't end up in a 404, fmpa_17.0.2.205.dmg.
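If a short pipeline turns out to be acceptable after all, a two-step sketch is another option: list the status codes first, then fetch only the URLs that returned 200. This assumes the write-out format shown in the question:
# First pass prints "code<TAB>url" per URL; awk keeps the 200 lines; xargs downloads them.
curl --silent --max-filesize 500 --write-out "%{http_code}\t%{url_effective}\n" -o /dev/null \
  'http://fmdl.filemaker.com/maint/107-85rel/fmpa_17.0.2.[200-210].dmg' \
  | awk '$1 == 200 { print $2 }' \
  | xargs -n 1 curl --silent -O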

JMeter Not Sending File with HTTP Request

I'm new to JMeter and trying to PUT a file to our API using an HTTP Request. When I PUT the file via curl using the -F flag, it works no problem.
Here's my curl request:
curl -X PUT -u uname:pword https://fakehostname.com/psr-1/controllers/vertx/upload/file/big/ADJTIME3 -F "upload1=@ADJTIME" -vis
and here's the relevant part of the request, as shown in curl's verbose output:
> User-Agent: curl/7.37.1
> Host: myfakehost.com
> Accept: */*
> Content-Length: 4190
> Expect: 100-continue
> Content-Type: multipart/form-data; boundary=------------------------d76566a6ebb651d3
When I do the same PUT via JMeter, the Content-Length is 0, which makes me think that JMeter isn't reading the file for some reason. I know the path is correct because I browsed to the file from JMeter. Little help?
In File Upload, make your file path relative to the .jmx file, or place the file next to the .jmx and specify the file name only.
Thanks to everyone who offered solutions and suggestions. It turns out that the API I was trying to load test was the issue. I can PUT a file via curl no problem, but there's something about the JMeter PUT that the API does not like. I finally tried doing a PUT to an unrelated API and was successful.

How to use the curl POST method when login info is required?

For example, say I want to issue a POST request to a server, but the website requires a username and password to log in first. How should I do these two operations?
If it requires a username and password submitted through the web page, you'd need to submit what it expects for a user logging in, then capture the cookies you get, and then send those cookies back with your POST. This can get involved if the login process spans multiple pages that redirect to one another. curl can do this, but be prepared to spend some time on it.
To get the cookie being returned by the server, use curl -i to include headers. You can also add -L to automatically follow redirects (which you otherwise would have to do manually by retrieving the URI in the Location: field of an HTTP 301 or 302 response). Example:
curl -i -L stackoverflow.com > /tmp/so.html
grep -i 'Set-Cookie:' /tmp/so.html
Yields:
Set-Cookie: prov=31c24327-c0bf-474d-b504-fc97dc69ab61; domain=.stackoverflow.com; expires=Fri, 01-Jan-2055 00:00:00 GMT; path=/; HttpOnly
(Until you get the login logic right and know how you need to submit the requests, you'll need to inspect the rest of the headers to be able to accommodate redirects, see if there are multiple cookies, etc.)
To submit a cookie, use curl -b:
curl -b "prov=31c24327-c0bf-474d-b504-fc97dc69ab61" [rest of curl command]
Be patient and good luck, and be sure to check the curl man page.
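Putting it together, a minimal sketch using curl's cookie jar (-c saves cookies, -b sends them); the URLs, field names, and credentials are placeholders:
# Log in once and store whatever cookies the server sets.
curl -c /tmp/cookies.txt -d "user=alice&pass=secret" https://example.com/login
# Reuse those cookies for the actual POST.
curl -b /tmp/cookies.txt -d "name1=value1&name2=value2" https://example.com/api/endpoint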
If the site uses HTTP Basic authentication rather than a login form, you can pass the credentials directly:
curl -u username:password -X POST --data "name1=value1&name2=value2" http://yourwebpage.com/

Using CURL to download file and view headers and status code

I'm writing a Bash script to download image files from Snapito's web page snapshot API. The API can return a variety of responses indicated by different HTTP response codes and/or some custom headers. My script is intended to be run as an automated Cron job that pulls URLs from a MySQL database and saves the screenshots to local disk.
I am using curl. I'd like to do these three things with a single curl command:
Extract the HTTP response code
Extract the headers
Save the file locally (if the request was successful)
I could do this using multiple curl requests, but I want to minimize the number of times I hit Snapito's servers. Any curl experts out there?
Or if someone has a Bash script that can respond to the full documented set of Snapito API responses, that'd be awesome. Here's their API documentation.
Thanks!
Use the dump-header option:
curl -D /tmp/headers.txt http://server.com
Use curl -i (include HTTP headers), which will yield the headers, followed by a blank line, followed by the content.
You can then split out the headers / content (or use -D to save directly to file, as suggested above).
There are three relevant options: -i, -I, and -D.
> curl --help | egrep '^ +\-[iID]'
-D, --dump-header FILE Write the headers to FILE
-I, --head Show document info only
-i, --include Include protocol headers in the output (H/F)
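Combining these, a minimal sketch that saves the body, dumps the headers, and captures the status code in a single call (the URL and file paths are placeholders):
# -s silences progress output, -D writes the headers to a file,
# -o writes the body to a file, and -w prints the status code afterwards.
status=$(curl -s -D /tmp/headers.txt -o /tmp/snapshot.jpg -w '%{http_code}' 'http://server.com/image')
echo "HTTP status: $status"
You can then branch on $status and discard the saved file when the request wasn't successful.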

Invoking SOAP request from shell command

I am using curl to send a SOAP request to a web service and get the response using shell scripting. Please find below the command I am using:
curl -H "Content-Type: text/xml; charset=utf-8" -H "SOAPAction:" -d @sample_request.txt -X POST http://someWebServiceURL
I am getting an error response which says no SOAPAction header.
Please find below the relevant part of the response body:
<soapenv:Body>
  <soapenv:Fault>
    <faultcode>Client.NoSOAPAction</faultcode>
    <faultstring>WSWS3147E: Error: no SOAPAction header!</faultstring>
  </soapenv:Fault>
</soapenv:Body>
Any help is appreciated!
You need to provide the name of the SOAP action. You have:
-H "SOAPAction:"
Supply the name of the action in there. E.g.
-H "SOAPAction: http://my_example/my_action"
Get the name of the action from the WSDL if you have one. E.g., see How do you determine a valid SoapAction?.
You can find the SOAPAction for the operation you're trying to invoke in the WSDL of the service, which you can access by opening the URL of the service in a web browser.
When invoking the SOAPAction with curl, specify the action with -H, such as -H "SOAPAction: http://tempuri.org/Execute".
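For example, a sketch of the original command with the header filled in (the action URI is a placeholder; use the one from your WSDL):
# Same request as before, but with a non-empty SOAPAction header.
curl -H "Content-Type: text/xml; charset=utf-8" -H "SOAPAction: http://my_example/my_action" -d @sample_request.txt -X POST http://someWebServiceURL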
