Using JSON output in curl bash script - bash

I want to upload a file automatically to Rackspace Cloud Files, which requires an auth token that is updated daily, so I want to create a script which gets the auth token and then uses it to upload the file.
This is the command to get the auth token which outputs the key perfectly:
curl -s -X POST https://auth.api.rackspacecloud.com/v2.0/tokens\
-d '{ "auth":{ "RAX-KSKEY:apiKeyCredentials":{ "username":"USER", "apiKey":"KEY" } } }'\
-H "Content-type: application/json" | python -mjson.tool |\
python -c 'import sys, json;\
print json.load(sys.stdin)[sys.argv[1]][sys.argv[2]][sys.argv[3]]'\
access token id
This is the command to upload the file:
curl -X PUT -T file.xml -D - \
-H "Content-Type: text/xml" \
-H "X-Auth-Token: TOKENGOESHERE" \
URL
I need to get the token from the first command into the TOKENGOESHERE place in the second command.
What I have tried so far is:
token = curl -s -X POST https://auth.api.rackspacecloud.com/v2.0/tokens -d '{ "auth":{ "RAX-KSKEY:apiKeyCredentials":{ "username":"USER", "apiKey":"KEY" } } }' -H "Content-type: application/json" | python -mjson.tool | python -c 'import sys, json; print json.load(sys.stdin)[sys.argv[1]][sys.argv[2]][sys.argv[3]]' access token id
curl -X PUT -T file.xml -D - \
-H "Content-Type: text/xml" \
-H "X-Auth-Token: $token" \
URL
but it didn't work. I am guessing it has something to do with the quotes, but I don't know enough about bash to pinpoint the problem.
Thanks!

This should work:
token=$(curl -s -X POST https://auth.api.rackspacecloud.com/v2.0/tokens \
-d '{ "auth":{ "RAX-KSKEY:apiKeyCredentials":{ "username":"USER", "apiKey":"KEY" } } }' \
-H "Content-type: application/json" \
| python -mjson.tool \
| python -c 'import sys, json; print json.load(sys.stdin)["access"]["token"]["id"]')
curl -X PUT -T file.xml -D - \
-H "Content-Type: text/xml" \
-H "X-Auth-Token: $token" \
URL
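Note that the inline Python in the answer above uses the Python 2 `print` statement, and the `python -mjson.tool` stage is redundant since the next stage parses the JSON anyway. A minimal sketch of the same extraction for Python 3 systems, using a hypothetical sample response in place of the real Rackspace reply:

```shell
# Hypothetical sample standing in for the real auth response
json='{"access":{"token":{"id":"abc123"}}}'
# Parse the JSON directly; no json.tool pretty-printing stage needed
token=$(printf '%s' "$json" \
  | python3 -c 'import sys, json; print(json.load(sys.stdin)["access"]["token"]["id"])')
echo "$token"
```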

I know it's a bit off topic, but I wanted to share my 'workflow' which may help a lot of people.
If you download these two cool toys (replacements for curl and Python's json module):
https://github.com/jkbr/httpie
http://stedolan.github.io/jq/
Then you can do all these fun things:
(Just replace USER and KEY with your real user and key in the 1st line; all the others are copy-and-paste-able.)
Get the json:
json=$(echo '{ "auth":{ "RAX-KSKEY:apiKeyCredentials":{ "username":"USER", "apiKey":"KEY" } } }' | http POST https://auth.api.rackspacecloud.com/v2.0/tokens)
Get token with jq:
token=$(echo "$json" | jq '.access | .token | .id' | sed 's/"//g')
Easy token usage for later:
auth="X-Auth-Token:$token"
Get endpoint for Sydney cloud files (change SYD for your favorite Datacenter) (change publicURL to internalURL if you're running from inside the DC):
url=$(echo "$json" | jq '.access | .serviceCatalog | .[] | select(.name == "cloudFiles") | .endpoints | .[] | select(.region == "SYD") | .publicURL' | sed 's/"//g')
-- Hard work is done. Now it gets easy --
Get list of containers:
http "$url" $auth
Create a container:
http PUT "$url/my_container" $auth
Upload a file:
cat python1.JPG | http PUT "$url/my_container/python1.jpg" $auth
List files:
http "$url/my_container"
Get CDN API URL (not the one for downloading, that's later):
cdn_url=$(echo "$json" | jq '.access | .serviceCatalog | .[] | select(.name == "cloudFilesCDN") | .endpoints | .[] | select(.region == "SYD") | .publicURL' | sed 's/"//g')
CDN enable the container:
http PUT "$cdn_url/my_container" $auth "X-Cdn-Enabled: True"
Get public CDN url for my_container:
pub_url=$(http -h HEAD "$cdn_url/my_container" $auth | awk '/X-Cdn-Uri/{print $2;}')
View your file:
firefox "$pub_url/python1.jpg"
All the API docs are here: http://docs.rackspace.com/files/api/v1/cf-devguide/content/API_Operations_for_Storage_Services-d1e942.html
Enjoy :)
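As a side note, the `sed 's/"//g'` steps in the workflow above can be dropped entirely: jq's `-r` flag emits raw strings without the surrounding quotes, and the filters collapse to dotted paths. A sketch with a hypothetical sample response:

```shell
# Sample JSON standing in for the real auth response
json='{"access":{"token":{"id":"abc123"}}}'
# -r (raw output) prints the string unquoted, so no sed cleanup is needed
token=$(printf '%s' "$json" | jq -r '.access.token.id')
echo "$token"
```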

This is the pattern which you should be using:
token=$(cat /etc/passwd)
echo "file contents: $token"
Note, as triplee points out, that you must not have spaces on either side of the = sign.
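A minimal demonstration of why the spacing matters, using a hypothetical `greeting` variable:

```shell
# Correct: no spaces around '=' in an assignment
greeting=$(echo hello)
echo "$greeting"

# With spaces, bash would instead try to run a command named 'greeting'
# and pass '=' and 'hello' to it as arguments:
# greeting = $(echo hello)   # -> bash: greeting: command not found
```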

I highly recommend skipping curl and using one of the language specific SDKs found on http://developer.rackspace.com
They all handle authentication easily and reauthentication for long lived processes. They all have examples of how to upload files too.

Related

Multiple variables in while loop of bash scripting

Suppose I have the below curl where I will be reading two of the variables from a file. How can we accommodate both variables in a single while loop?
while read p; do
curl -X POST --header 'Content-Type: application/json' --header 'Accept: application/json' --header 'HTTP_X_MERCHANT_CODE: STA' --header 'AK-Client-IP: 135.71.173.56' --header 'Authorization: Basic qwrewrereererweer' -d '{
"request_details": [
{
"id": "$p", #first dynamic variable which will be fetched from the file file.txt
"id_id": "$q", #second dynamic variable to be fetched from the file file.txt
"reason": "Pickup reattempts exhausted"
}
]
}' api.stack.com/ask
done<file.txt
file.txt will have two columns from which the dynamic variables will be fetched for the above curl. Please let me know how we can accommodate both variables in the above while loop.
I will need a bit of help regarding the same.
Since you'll want to use a tool like jq to construct the payload anyway, you should let jq parse the file instead of using the shell.
filter='split(" ") |
{ request_details: [
{
id: .[0],
id_id: .[1],
reason: "Pickup reattempts exhausted"
}
]
}'
jq -cR "$filter" file.txt |
while IFS= read -r payload; do
curl -X POST --header 'Content-Type: application/json' \
--header 'Accept: application/json' \
--header 'HTTP_X_MERCHANT_CODE: STA' \
--header 'AK-Client-IP: 135.71.173.56' \
--header 'Authorization: Basic qwrewrereererweer' \
-d "$payload"
done
The -c option to jq ensures the entire output appears on one line: curl doesn't need a pretty-printed JSON value.
read accepts multiple target variable names. The last one receives all the content not yet read from the line. So read p reads the whole line, read p q would read the first token (separated by whitespace) into p and the rest into q, and read p q r would read the first two tokens into p and q and any remaining junk into r (for example if you want to support comments or extra tokens in file.txt).
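The splitting behavior described above can be sketched with a sample line (the `alpha beta gamma` input is just an illustration):

```shell
# 'read p q' puts the first whitespace-separated token into p
# and everything remaining on the line into q
line='alpha beta gamma'
read -r p q <<EOF
$line
EOF
echo "p=$p q=$q"
```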

argument list too long curl

Trying to solve "argument list too long"
I have been searching for a solution and found the closest one to my issue
curl: argument list too long
however the answer there is not clear to me, as I am still getting "argument list too long":
curl -X POST -d @data.txt \
https://Path/to/attachments \
-H 'content-type: application/vnd.api+json' \
-H 'x-api-key: KEY' \
-d '{
"data": {
"type": "attachments",
"attributes": {
"attachment": {
"content": "'$(cat data.txt | base64 --wrap=0)'",
"file_name": "'"$FileName"'"
}
}
}
}'
thank you
Use jq to format your base64 encoded data string into a proper JSON string, and then pass the JSON data as standard input to the curl command.
#!/usr/bin/env sh
attached_file='img.png'
# Pipe the base64 encoded content of attached_file
base64 --wrap=0 "$attached_file" |
# into jq to make it a proper JSON string within the
# JSON data structure
jq --slurp --raw-input --arg FileName "$attached_file" \
'{
"type": "attachments",
"attributes": {
"attachment": {
"content": .,
"file_name": $FileName
}
}
}
' |
# Get the resultant JSON piped into curl
# that will read the data from the standard input
# using -d @-
curl -X POST -d @- \
'https://Path/to/attachments' \
-H 'content-type: application/vnd.api+json' \
-H 'x-api-key: KEY'
Per the linked answer
you are trying to pass the entirety of the base64'd content on the command line
This is a limitation of the shell, not curl. That is, the shell is responding with error argument list too long. The program curl is never even started.
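The limit in question can be inspected on most systems; a large base64 string inlined into the command line easily exceeds it. A quick check (output varies by OS):

```shell
# ARG_MAX is the kernel's upper bound on the combined size of
# a command's argument list plus its environment, in bytes
getconf ARG_MAX
```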
The recommendation is
curl has the ability to load in data to POST from a file
Write the JSON data to a file such as /tmp/data.json using redirection. (The commands below use piping | and file redirection > and >>, which can handle arbitrarily large amounts of data, whereas a single command line cannot exceed the kernel's argument-size limit.)
echo -n '
{
"data": {
"type": "attachments",
"attributes": {
"attachment": {
"content": "' > /tmp/data.json
cat data.txt | base64 --wrap=0 >> /tmp/data.json
echo -n '",
"file_name": "'"$FileName"'"
}
}
}
}' >> /tmp/data.json
Pass that file path /tmp/data.json to the curl command using @ so curl knows it's a file path.
curl -X POST -d @/tmp/data.json \
"https://Path/to/attachments" \
-H 'content-type: application/vnd.api+json' \
-H 'x-api-key: KEY'

Awk or other command how to get variable value on string curl results?

In a bash script I'm running a curl POST request to get data:
f_curl_get_data (){
read -p "start date : " start_date
read -p "end date : " end_date
# (!) NOTE
# - the date time format must be YYYY-MM-DD
mng_type=users
user=myuser
secret=mysecret
curl --location --request POST 'https://myapi.com/api/v2.1/rest/exports' \
--header 'Content-Type: application/json' \
--header 'SDK-APP-ID: '$user'' \
--header 'SDK-SECRET: '$secret'' \
--data-raw '{
"type":"'$mng_type'",
"start_date":"'$start_date'",
"end_date": "'$end_date'"
}'
}
and I get the following results
{"results":{"created_at":"2020-03-13T07:04:14Z","download_url":"","error_message":"","original_filename":"2020-03-13T07:04:14Z_exported_users.json","percentage":0,"resource_name":"users","size":0,"status":"started","total_rows":0,"unique_id":"37c23e60-5b83-404a-bd1f-6733ef04463b"},"status":200}
How do I get just the value of the "unique_id" field, with awk or another command?
37c23e60-5b83-404a-bd1f-6733ef04463b
Thank you
Using sed
sed -e 's/.*unique_id":"\(.*\)\"}.*/\1/'
Demo :
:>echo '{"results":{"created_at":"2020-03-13T07:04:14Z","download_url":"","error_message":"","original_filename":"2020-03-13T07:04:14Z_exported_users.json","percentage":0,"resource_name":"users","size":0,"status":"started","total_rows":0,"unique_id":"37c23e60-5b83-404a-bd1f-6733ef04463b"},"status":200}' | sed -e 's/.*unique_id":"\(.*\)\"}.*/\1/'
37c23e60-5b83-404a-bd1f-6733ef04463b
Using GNU awk and json extension:
$ gawk '
@load "json" # load extension
{
lines=lines $0 # in case of multiline json file
if(json_fromJSON(lines,data)!=0) { # explode valid json to an array
print data["results"]["unique_id"] # print the object value
lines="" # in case there is more json left
}
}' file
Output:
37c23e60-5b83-404a-bd1f-6733ef04463b
Extension can be found in there:
http://gawkextlib.sourceforge.net/json/json.html
... or you could use jq:
$ jq -r '.results.unique_id' file
37c23e60-5b83-404a-bd1f-6733ef04463b

IBM Watson speech to text output only transcript or grep only transcript

I'm using curl with IBM Watson to produce a transcript, but I can't seem to get output where just the transcript text is shown, as in the example below.
Another method might be just to grep for the text in the "transcript" field.
curl
curl -u user:password -X POST --header "Content-Type: audio/wav" --header "Transfer-Encoding: chunked" --data-binary @test.wav "https://stream.watsonplatform.net/speech-to-text/api/v1/recognize?continuous=true" > demo.txt
{
"results": [
{
"alternatives": [
{
"confidence": 0.302,
"transcript": "when to stop announced "
}
],
"final": true
},
{
"alternatives": [
{
"confidence": 0.724,
"transcript": "Russia is destroying western cheese and considering a ban Weston condoms and infection internet is reacting "
}
],
"final": true
To store the output into a file you can use the option -o
curl -u user:password -X POST --data-binary @test.wav \
-o transcript.txt \
--header "Content-Type: audio/wav" --header "Transfer-Encoding: chunked" \
"https://stream.watsonplatform.net/speech-to-text/api/v1/recognize?continuous=true"
-o, --output <file>
Write output to <file> instead of stdout. Like in:
curl http://ibm.com -o "html_output.txt"
More info.
grep alone can't do quite what you're asking for because it doesn't get more granular than a single line. However, grep + sed can - sed can be used to perform regex replacement on the lines that grep spits out.
First pipe to grep: grep transcript, then pipe to sed: sed 's/.*"transcript": "\(.*\)".*/\1/g'
Here's the complete command:
curl -u user:password -X POST --header "Content-Type: audio/wav" --header "Transfer-Encoding: chunked" --data-binary @test.wav "https://stream.watsonplatform.net/speech-to-text/api/v1/recognize?continuous=true" | grep transcript | sed 's/.*"transcript": "\(.*\)".*/\1/g'
Note that by default, curl displays several lines of status when piping its output. You can disable those with -s:
curl -s -u user:password -X POST --header "Content-Type: audio/wav" --header "Transfer-Encoding: chunked" --data-binary @test.wav "https://stream.watsonplatform.net/speech-to-text/api/v1/recognize?continuous=true" | grep transcript | sed 's/.*"transcript": "\(.*\)".*/\1/g'
Here's more info on sed if you're interested: http://www.grymoire.com/Unix/Sed.html
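The grep + sed approach depends on the pretty-printed formatting staying exactly as shown; a JSON parser is sturdier. A sketch using Python 3 on a hypothetical, trimmed-down sample of the Watson response:

```shell
# Trimmed sample of the response shape shown in the question
json='{"results":[{"alternatives":[{"confidence":0.3,"transcript":"hello world"}],"final":true}]}'
# Walk results -> alternatives and print each transcript
printf '%s' "$json" | python3 -c '
import sys, json
for result in json.load(sys.stdin)["results"]:
    for alt in result["alternatives"]:
        print(alt["transcript"])
'
```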
Update: I should also mention that there are SDKs available for a number of languages if you're interested in doing something more complex - https://github.com/watson-developer-cloud

pbcopy Specific Part of Shell Script

I am using the goo.gl URL shortener to shorten URLs with a curl command. The command is below:
curl https://www.googleapis.com/urlshortener/v1/url \
-H 'Content-Type: application/json' \
-d '{"longUrl": "http://www.google.com/"}'
This returns the response is below:
{
"kind": "urlshortener#url",
"id": "http://goo.gl/fbsS",
"longUrl": "http://www.google.com/"
}
Is there a way to use pbcopy to only copy the shortened URL? (http://goo.gl/fbsS)
I am new to posting on StackOverflow, and would appreciate any responses I can get.
Try this:
$ curl ... | grep '"id":' | cut -d\" -f 4 | pbcopy
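The grep/cut pipeline works on the pretty-printed response shown above but breaks if the formatting changes. A sturdier sketch parses the JSON instead (sample response inlined for illustration; pbcopy itself is macOS-only, so it is left as a comment):

```shell
# Hypothetical sample of the goo.gl response
json='{"kind":"urlshortener#url","id":"http://goo.gl/fbsS","longUrl":"http://www.google.com/"}'
# Extract the "id" field by parsing, not by line position
short=$(printf '%s' "$json" | python3 -c 'import sys, json; print(json.load(sys.stdin)["id"])')
echo "$short"
# On macOS you would then pipe it on: printf '%s' "$short" | pbcopy
```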