Here's the code I'm looking at:
#!/bin/bash
nc -l 8080 &
curl "http://localhost:8080" \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
--data @<(cat <<EOF
{
"me": "$USER",
"something": $(date +%s)
}
EOF
)
What does the @ do? Where is the documentation for @?
It is a curl-specific symbol. man curl shows you:
-d, --data <data>
(HTTP) Sends the specified data in a POST request to the HTTP server, in the
same way that a browser does when a user has filled in an HTML form and
presses the submit button. This will cause curl to pass the data to the
server using the content-type application/x-www-form-urlencoded. Compare to
-F, --form.
--data-raw is almost the same but does not have a special interpretation of
the @ character. To post data purely binary, you should instead use the
--data-binary option. To URL-encode the value of a form field you may use
--data-urlencode.
If any of these options is used more than once on the same command line, the
data pieces specified will be merged together with a separating &-symbol.
Thus, using '-d name=daniel -d skill=lousy' would generate a post chunk that
looks like 'name=daniel&skill=lousy'.
If you start the data with the letter @, the rest should be a file name to
read the data from, or - if you want curl to read the data from stdin.
Multiple files can also be specified. Posting data from a file named
'foobar' would thus be done with -d, --data @foobar. When --data is told to
read from a file like that, carriage returns and newlines will be stripped
out. If you don't want the @ character to have a special interpretation use
--data-raw instead.
See also --data-binary and --data-urlencode and --data-raw. This option
overrides -F, --form and -I, --head and -T, --upload-file.
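Putting the excerpt into practice, here is a minimal sketch of the @ form that writes the body to a file first. The curl invocations are shown commented out because they need a live endpoint; the localhost URL mirrors the question's nc listener and is only a placeholder.

```shell
#!/bin/bash
# Write the JSON body to a file; $USER and $(date ...) are expanded
# by the shell while the heredoc is written.
cat > payload.json <<EOF
{
  "me": "$USER",
  "something": $(date +%s)
}
EOF

# '@payload.json' makes curl read the request body from the file.
# With --data, newlines in the file are stripped before sending.
#curl -H 'Content-Type: application/json' --data @payload.json 'http://localhost:8080'

# '@-' reads the body from stdin instead:
#curl -H 'Content-Type: application/json' --data @- 'http://localhost:8080' < payload.json

grep -c '"me"' payload.json   # prints 1: the file holds the expanded body
```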
Related
I am using curl to send a POST request, and the payload variable giving me trouble is converted to a UUID instance on the backend of the API. When hardcoding the payload values, as shown below, I receive a successful response.
curl --request POST '<url>' --header 'Content-Type: application/json' \
--data-raw '{
"id": "4312-3532-1244-3413",
"otherValue": "also_hardcoded"
}'
But when reading the data for the "id" variable from a file, I get an error response "400: Bad Request". A sample of that request is below:
while read -r LineFromFile; do
curl --request POST '<url>' --header 'Content-Type: application/json' \
--data-raw '{
"id": "$LineFromFile",
"otherValue": "also_hardcoded"
}'
done < "$1"
This happens even though the text file only contains the same value used for "id" in the first example. I tried setting IFS=$'\n' in case a newline character was being silently read, but that had no effect. I thought it could be an end-of-file issue, but adding duplicate lines with the same value to the text file only produces more "400: Bad Request" responses.
Can someone help me determine why the file is not being read the way I expected it to? Thank you very much. I greatly appreciate any assistance.
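For what it's worth, the usual cause of this is quoting: inside single quotes the shell never expands $LineFromFile, so the server receives the literal text $LineFromFile rather than a UUID. A minimal sketch of the difference (the UUID-like value is made up):

```shell
LineFromFile="4312-3532-1244-3413"

# Single quotes: no expansion -- this is what the failing loop sends.
literal='{"id": "$LineFromFile"}'
echo "$literal"     # prints {"id": "$LineFromFile"}

# Double quotes with escaped inner quotes: the variable expands.
expanded="{\"id\": \"$LineFromFile\"}"
echo "$expanded"    # prints {"id": "4312-3532-1244-3413"}
```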
I have five cURL statements that work fine by themselves, and I am trying to put them together in a bash script. Each cURL statement relies on a variable generated by the cURL statement executed before it. I'm trying to figure out the smartest way to go about this. Here is the first cURL statement:
curl -i -k -b sessionid -X POST https://base/resource -H "Content-Type: application/json" -H "Authorization: Authorization: PS-Auth key=keyString; runas=userName; pwd=[password]" -d "{\"AssetName\":\"apiTest\",\"DnsName\":\"apiTest\",\"DomainName\":\"domainNameString\",\"IPAddress\":\"ipAddressHere\",\"AssetType\":\"apiTest\"}"
This works fine; it produces this output:
{"WorkgroupID":1,"AssetID":57,"AssetName":"apiTest","AssetType":"apiTest","DnsName":"apiTest","DomainName":"domainNameString","IPAddress":"ipAddressHere","MacAddress":null,"OperatingSystem":null,"LastUpdateDate":"2017-10-30T15:18:05.67-07:00"}
However, in the next cURL statement, I need to use the integer from AssetID in order to execute it. In short, how can I take the AssetID value and store it in a variable to be used in the next statement? In total, I'll be using five cURL statements, and each relies on values generated by the preceding statement. Any insight is appreciated.
Download and install jq, which is like sed for JSON data. You can use it to slice, filter, map and transform structured data with the same ease that sed, awk and grep do for unstructured data. Remember to replace '...' with your actual curl arguments:
curl '...' | jq --raw-output '.AssetID'
To store it in a variable, use command-substitution syntax to run the command and capture the result:
asset_ID=$( curl '...' | jq --raw-output '.AssetID' )
In the curl command, drop the -i flag to output only the JSON data without the header information.
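Putting it together, each call can feed the next via command substitution. A sketch using the sample response from the question (the follow-up URL is a placeholder):

```shell
# Sample response from the first cURL statement (taken from the question).
response='{"WorkgroupID":1,"AssetID":57,"AssetName":"apiTest"}'

# --raw-output (-r) prints the bare value, without JSON quoting.
asset_ID=$(printf '%s' "$response" | jq --raw-output '.AssetID')
echo "$asset_ID"    # prints 57

# The next statement can then interpolate it:
#curl -k -b sessionid -X POST "https://base/resource/$asset_ID" ...
```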
I am trying to feed a GeoJSON to this data service using the following bash code.
curl -X POST -F "shape=$(cat myfile.geojson)" \
-F 'age=69' -o reconstructed_myfile.geojson \
https://dev.macrostrat.org/reconstruct
However, I am getting an "Argument list too long" error. I see a lot of open questions on Stack Overflow related to this issue, but I do not understand how to apply the answers given in those threads to this specific case.
You should use <filename or @filename:
curl -X POST \
-F 'shape=<myfile.geojson' \
-F 'age=69' \
-o 'reconstructed_myfile.geojson' \
-- 'https://dev.macrostrat.org/reconstruct'
See man curl for details:
$ man curl | awk '$1 ~ /-F/' RS=
-F, --form <name=content>
(HTTP) This lets curl emulate a filled-in form in which a user has
pressed the submit button. This causes curl to POST data using the
Content-Type multipart/form-data according to RFC 2388. This
enables uploading of binary files etc. To force the 'content' part to
be a file, prefix the file name with an @ sign. To just get the
content part from a file, prefix the file name with the symbol <. The
difference between @ and < is then that @ makes a file get
attached in the post as a file upload, while the < makes a text field
and just gets the contents for that text field from a file.
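The reason this fixes "Argument list too long": with $(cat ...) the whole file is expanded into curl's argument list, which is capped by the kernel's ARG_MAX, whereas 'shape=<myfile.geojson' stays a short literal argument and curl opens the file itself. A quick sketch with a small stand-in file:

```shell
# Small stand-in for the real GeoJSON file.
printf '{"type":"FeatureCollection","features":[]}' > myfile.geojson

# Command substitution: the file's entire contents become part of the argument,
# so a large file can exceed the kernel's argument-size limit.
arg_expanded="shape=$(cat myfile.geojson)"

# The '<' form: the argument is just this short literal; curl reads the file.
arg_literal='shape=<myfile.geojson'

echo "${#arg_expanded}"   # grows with the file size
echo "${#arg_literal}"    # stays constant
```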
I can't seem to get jq to behave "normally" in a shell pipeline. For example:
$ curl -s https://api.github.com/users/octocat/repos | jq | cat
results in jq simply printing out its help text*. The same thing happens if I try to redirect jq's output to a file:
$ curl -s https://api.github.com/users/octocat/repos | jq > /tmp/stuff.json
Is jq deliberately bailing out if it determines that it's not being run from a tty? How can I prevent this behavior so that I can use jq in a pipeline?
Edit: it looks like this is no longer an issue in recent versions of jq. I have jq-1.6 now and the examples above work as expected.
* (I realize this example contains a useless use of cat; it's for illustration purposes only)
You need to supply a filter as an argument. To pass the JSON through unmodified other than the pretty printing jq provides by default, use the identity filter .:
curl -s https://api.github.com/users/octocat/repos | jq '.' | cat
One use case I find myself hitting frequently is "How do I construct JSON data to supply to other shell commands, for example curl?" The way I do this is with the --null-input/-n option:
Don’t read any input at all! Instead, the filter is run once using null as the input. This is useful when using jq as a simple calculator or to construct JSON data from scratch.
And an example passing it into curl:
jq -n '{key: "value"}' | curl -d @- \
--url 'https://some.url.com' \
-H 'Content-Type: application/json' \
-H 'Accept: application/json'
Some research revealed a few useful Stack Exchange posts, namely one on expanding a variable in curl, but the given answer doesn't seem to properly handle bash variables that contain spaces.
I am setting a variable to the output of awk, parsing a string for a substring (actually truncating to 150 characters). The string I am attempting to POST via curl has spaces in it.
When I use the following curl arguments, the POST variable Body is set to the part of the string before the first space.
curl -X POST 'https://api.twilio.com/2010-04-01/Accounts/GUID/SMS/Messages.xml' -d 'From=DIDfrom' -d 'To=DIDto' -d 'Body="'$smsbody'" -u SECGUID
smsbody is set as:
smsbody="$(echo $HOSTNAME$ $SERVICEDESC$ in $SERVICESTATE$\: $SERVICEOUTPUT$ | awk '{print substr($0,0,150)}')"
So the only portion of smsbody that is POSTed is $HOSTNAME$ (which happens to be a string without any space characters).
What is the curl syntax I should use to nest the bash variable properly to expand, but be taken as a single data field?
Seems pretty trivial, but I messed with quotes for a while without luck. I figure someone with better CLI-fu can handle it in a second.
Thanks!
It looks like you have an extra single quote before Body. You also need double quotes or the $smsbody won't be evaluated.
Try this:
curl -X POST 'https://api.twilio.com/2010-04-01/Accounts/GUID/SMS/Messages.xml' \
-d 'From=DIDfrom' -d 'To=DIDto' -d "Body=$smsbody" -u SECGUID
If the $s are still an issue (I don't think spaces are), try this to prepend a \ to them:
smsbody2=`echo $smsbody | sed 's/\\$/\\\\$/g'`
curl -X POST 'https://api.twilio.com/2010-04-01/Accounts/GUID/SMS/Messages.xml' \
-d 'From=DIDfrom' -d 'To=DIDto' -d "Body=$smsbody2" -u SECGUID
If I run nc -l 5000 and change the twilio address to localhost:5000, I see the smsbody variable coming in properly.
matt@goliath:~$ nc -l 5000
POST / HTTP/1.1
Authorization: Basic U0VDR1VJRDphc2Q=
User-Agent: curl/7.21.6 (x86_64-apple-darwin10.7.0) libcurl/7.21.6 OpenSSL/1.0.0e zlib/1.2.5 libidn/1.20
Host: localhost:5000
Accept: */*
Content-Length: 45
Content-Type: application/x-www-form-urlencoded
From=DIDfrom&To=DIDto&Body=goliath$ $ in $:
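The truncation to the first word can also be reproduced without curl at all: left unquoted, the shell splits $smsbody on spaces, so -d pairs with only the fragment before the first space. A minimal sketch:

```shell
smsbody="goliath web is OK"

# Unquoted expansion (as in the original command): the string splits into
# words, so '-d' pairs with only 'Body="goliath'.
set -- -d Body=\"$smsbody
unquoted_count=$#

# Quoted expansion (the fix): one argument, spaces preserved.
set -- -d "Body=$smsbody"
quoted_count=$#

echo "$unquoted_count $quoted_count"   # prints 5 2
```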