How do I execute an HTTP PUT in bash?

I'm sending requests to a third-party API. It says I must send an HTTP PUT to http://example.com/project?id=projectId
I tried doing this with PHP curl, but I'm not getting a response from the server. Maybe something is wrong with my code, because I've never used PUT before. Is there a way to execute an HTTP PUT from the bash command line? If so, what is the command?

With curl it would be something like
curl --request PUT --header "Content-Length: 0" "http://website.com/project?id=1"
but, as Mattias said, you'd probably want some data in the body as well, so you'd also want a Content-Type header and the data itself (and the Content-Length would then be larger).
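For example, a PUT with a JSON body might look like this (URL and payload are placeholders; with --data, curl computes the Content-Length for you):

curl --request PUT \
     --header "Content-Type: application/json" \
     --data '{"name": "example"}' \
     "http://website.com/project?id=1"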

If you really want to use only bash, it actually has some networking support.
echo -e "PUT /project?id=123 HTTP/1.1\r\nHost: website.com\r\n\r\n" > \
/dev/tcp/website.com/80
But I guess you also want to send some data in the body? Note that a plain redirection like this only writes the request; it never reads the response.
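To read the server's reply as well, you can open a bidirectional connection on a file descriptor. A minimal sketch, using the same placeholder host and path:

exec 3<>/dev/tcp/website.com/80    # open a read/write TCP connection on fd 3
printf 'PUT /project?id=123 HTTP/1.1\r\nHost: website.com\r\nContent-Length: 0\r\nConnection: close\r\n\r\n' >&3
cat <&3                            # print the raw HTTP response
exec 3>&-                          # close the connection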

Like Mattias suggested, Bash can do the job without further tools. If you want to send data, you have to set at least the Content-Length header. With variables "host", "port", "resource" and "data" defined, you can do an HTTP PUT with
echo -e "PUT /$resource HTTP/1.1\r\nHost: $host:$port\r\nContent-Length: ${#data}\r\n\r\n$data\r\n" > /dev/tcp/$host/$port
I tested this with a REST API and it works fine.
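As a usage example with hypothetical values plugged in (${#data} gives the length of the payload in characters, which matches bytes for ASCII data):

host=api.example.com        # hypothetical host
port=80
resource='project?id=123'   # hypothetical resource
data='{"name":"example"}'
echo -e "PUT /$resource HTTP/1.1\r\nHost: $host:$port\r\nContent-Length: ${#data}\r\n\r\n$data\r\n" > "/dev/tcp/$host/$port"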

Related

How to know if GraphQL returned an error when using curl?

HTTP GraphQL calls always return 200, even if the response doesn't suit the request.
I'm trying to do a GraphQL call and detect whether there's an error, e.g. using $? and --fail, but that doesn't help because of the always-200 response.
Even if GraphQL's output doesn't match the input and contains error arrays, curl only cares about the HTTP code, which is always 200.
Is there a way for curl to understand a GraphQL error? Like some built-in mechanism to compare the requested input to the actual output and detect that there's an error?
Perhaps I'm barking up the wrong tree here and should use some command-line tool more dedicated to GraphQL? Thanks.
curl doesn't know anything in particular about GraphQL. You can pipe the output of curl to grep to check for the presence of errors and draw conclusions based on that as necessary.
ex:
curl --request POST \
--header 'content-type: application/json' \
--url http://localhost:4000/ \
--data 'your query data' | grep "errors"
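A more robust sketch uses jq to inspect the JSON and set the exit status (assumes jq is installed; the endpoint and query are illustrative):

response=$(curl --silent --request POST \
  --header 'Content-Type: application/json' \
  --url http://localhost:4000/ \
  --data '{"query":"{ hello }"}')
# jq -e exits 0 only when .errors is present and non-null
if echo "$response" | jq -e '.errors' > /dev/null; then
  echo "GraphQL returned errors" >&2
fi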

How to get a custom header in bash

I'm adding a custom header in an ASP.NET app:
context.Response.Headers.Add("X-Date", DateTime.Now.ToString());
context.Response.Redirect(redirectUrl, false);
When I'm using Fiddler I can see the "X-Date" header in the response.
I need to receive it by using bash.
I tried curl -i https://my.site.com and also wget -O - -o /dev/null --save-headers https://my.site.com with no success.
In both cases I see just the regular headers like: Content-Type, Server, Date, etc...
How I can receive the "X-Date" header?
Thanks,
Lev
Protocol headers are different from file headers (just as HTTP headers and TCP headers are different). When you create a protocol header you will need a server to resolve it and expose the associated environment variables. Example:
#!/bin/bash
# Apache - CGI
echo "Content-Type: text/plain"   # CGI response header
echo ""                           # blank line ends the headers
echo "$CONTENT_TYPE"
echo "$HTTP_ACCEPT"
echo "$SERVER_PROTOCOL"
When calling this script via the web, the response in my browser was...
text/html
text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
HTTP/1.1
What you are looking for are environment variables called $HTTP_ACCEPT, $CONTENT_TYPE and maybe $SERVER_PROTOCOL too.
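That said, for reading a custom response header on the client side, curl can dump all response headers directly. A minimal sketch using the asker's URL (-L follows the redirect, in case the header is only set on the redirect response):

curl -s -L -D - -o /dev/null https://my.site.com | grep -i '^x-date:'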

curl request: response is not properly formatted (XML)

I am loading XML as a string from a remote server using curl as below:
$ curl -i -H "Accept: application/xml" -X GET "URL Here"
but the response is not pretty-printed, hence not easily readable.
<?xml version="1.0" encoding="UTF-8" standalone="yes"?><RunConfig><PipeLineXmlVersion>1.0</PipeLineXmlVersion><DateTime>20161128_160859</DateTime><Analysis><Lane>1</Lane><PipeLine>run_multiplexed_auto_start_v4.0.sh</PipeLine><Version>4.0</Version><Mismatch>1</Mismatch><MergeLane>0</MergeLane><Version>4.0</Version></Analysis></RunConfig>
When I try the same API using some REST client, I can see properly formatted XML.
From what I've read, the Accept header should work, but unfortunately not in my case.
Please help me with this.
Thanks.
If what you mean by "not proper" is that the response is not pretty-printed (i.e. it lacks spaces and indentation), there are plenty of command-line tools to format XML.
For example:
curl ... | xmllint --format -
Here, you pass the response of curl to xmllint (part of libxml2-utils), which will format your answer. The - at the end tells the tool to read the document from standard input.
Have a look at this question for more ways to achieve it.
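Putting it together with the command from the question (keeping the URL placeholder, and dropping -i, since the included response headers would otherwise break the XML parsing):

curl -s -H "Accept: application/xml" -X GET "URL Here" | xmllint --format -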

curl taking too long to send HTTPS request using command line

I have implemented a shell script which sends an HTTPS request to a proxy with an authorization header, using a GET request.
Here is my command:
curl -s -o /dev/null -w "%{http_code}" -X GET -H "Authorization: 123456:admin05" "https://www.mywebpage/api/request/india/?ID=123456&Number=9456123789&Code=01"
It waits around 12 seconds before sending the request to the proxy, then comes back with a status code like 200, 400, 500, etc.
Is it possible to reduce this time and make it faster using curl?
Please advise me for such a case.
Thanks.
Use the option -v or --verbose along with --trace-time.
It gives details of the actions being taken, along with timings.
This includes DNS resolution, the SSL handshake, etc. A line starting with '>' means a header/body being sent; '<' means one being received.
Based on the gaps in the operation sequence, you can work out whether the server is taking time to respond (the time between request and response), or whether it's network latency or bandwidth (the time the response takes to arrive).
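curl can also report the timing breakdown itself via -w variables, which makes it easy to see where the 12 seconds go (using the URL from the question):

curl -s -o /dev/null \
  -w 'dns:     %{time_namelookup}\nconnect: %{time_connect}\ntls:     %{time_appconnect}\nttfb:    %{time_starttransfer}\ntotal:   %{time_total}\n' \
  -H "Authorization: 123456:admin05" \
  "https://www.mywebpage/api/request/india/?ID=123456&Number=9456123789&Code=01"

If time_namelookup dominates, the delay is DNS resolution rather than the server.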

Using curl to download a file and view headers and status code

I'm writing a Bash script to download image files from Snapito's web page snapshot API. The API can return a variety of responses indicated by different HTTP response codes and/or some custom headers. My script is intended to be run as an automated Cron job that pulls URLs from a MySQL database and saves the screenshots to local disk.
I am using curl. I'd like to do these 3 things using a single curl command:
Extract the HTTP response code
Extract the headers
Save the file locally (if the request was successful)
I could do this using multiple curl requests, but I want to minimize the number of times I hit Snapito's servers. Any curl experts out there?
Or if someone has a Bash script that can respond to the full documented set of Snapito API responses, that'd be awesome. Here's their API documentation.
Thanks!
Use the dump headers option:
curl -D /tmp/headers.txt http://server.com
Use curl -i (include HTTP header) - which will yield the headers, followed by a blank line, followed by the content.
You can then split out the headers / content (or use -D to save directly to file, as suggested above).
There are three relevant options: -i, -I, and -D.
> curl --help | egrep '^ +\-[iID]'
-D, --dump-header FILE Write the headers to FILE
-I, --head Show document info only
-i, --include Include protocol headers in the output (H/F)
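All three goals from the question can be combined in one request; a sketch (the output paths and the $url variable are illustrative):

# one request: save the body, dump the headers, capture the status code
status=$(curl -s -D /tmp/headers.txt -o /tmp/snapshot.png -w '%{http_code}' "$url")
if [ "$status" -eq 200 ]; then
  echo "saved snapshot; headers in /tmp/headers.txt"
else
  echo "request failed with HTTP $status" >&2
fi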
