Trying to solve "argument list too long"
I have been searching for a solution and found the one closest to my issue:
curl: argument list too long
However, the answer there is not clear to me, as I am still getting the "argument list too long" error:
curl -X POST -d @data.txt \
https://Path/to/attachments \
-H 'content-type: application/vnd.api+json' \
-H 'x-api-key: KEY' \
-d '{
"data": {
"type": "attachments",
"attributes": {
"attachment": {
"content": "'$(cat data.txt | base64 --wrap=0)'",
"file_name": "'"$FileName"'"
}
}
}
}'
Thank you.
Use jq to format your base64 encoded data string into a proper JSON string, and then pass the JSON data as standard input to the curl command.
#!/usr/bin/env sh
attached_file='img.png'
# Pipe the base64 encoded content of attached_file
base64 --wrap=0 "$attached_file" |
# into jq to make it a proper JSON string within the
# JSON data structure
jq --slurp --raw-input --arg FileName "$attached_file" \
'{
  "data": {
    "type": "attachments",
    "attributes": {
      "attachment": {
        "content": .,
        "file_name": $FileName
      }
    }
  }
}
' |
# Get the resultant JSON piped into curl
# that will read the data from the standard input
# using -d @-
curl -X POST -d @- \
'https://Path/to/attachments' \
-H 'content-type: application/vnd.api+json' \
-H 'x-api-key: KEY'
Per the linked answer, you are trying to pass the entirety of the base64'd content on the command line. This is a limitation of the shell, not of curl: the shell reports the error "argument list too long" and the curl program is never even started.
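The limit itself comes from the kernel and varies by system; you can check it with getconf. (On Linux there is also a per-argument cap, commonly 128 KiB, so one very large base64 string can trip the error on its own.)
# ARG_MAX is the maximum combined size of the argument list and environment
# that exec can pass to a new process; exceeding it is what produces
# "argument list too long" (E2BIG).
getconf ARG_MAX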
The recommendation is to use curl's ability to load the data to POST from a file. Write the JSON data to a file such as /tmp/data.json using pipes (|) and file redirection (> and >>), which can handle arbitrarily large amounts of data, whereas a single command line cannot.
echo -n '
{
"data": {
"type": "attachments",
"attributes": {
"attachment": {
"content": "' > /tmp/data.json
base64 --wrap=0 data.txt >> /tmp/data.json
echo -n '",
"file_name": "'"$FileName"'"
}
}
}
}' >> /tmp/data.json
Pass that file path /tmp/data.json to the curl command using @ so curl knows it's a file path.
curl -X POST -d @/tmp/data.json \
"https://Path/to/attachments" \
-H 'content-type: application/vnd.api+json' \
-H 'x-api-key: KEY'
Related
Suppose I have the below curl command, where I will be reading two of the variables from a file. How can I accommodate both variables in a single while loop?
while read p; do
curl -X POST --header 'Content-Type: application/json' --header 'Accept: application/json' --header 'HTTP_X_MERCHANT_CODE: STA' --header 'AK-Client-IP: 135.71.173.56' --header 'Authorization: Basic qwrewrereererweer' -d '{
"request_details": [
{
"id": "$p", #first dynamic varaible which will be fetched from the file file.txt
"id_id": "$q", #second dynamic varaible to be fetched from the file file.txt
"reason": "Pickup reattempts exhausted"
}
]
}' api.stack.com/ask
done<file.txt
file.txt has two columns from which the dynamic variables will be fetched for the above curl. Please let me know how I can accommodate both variables in the above while loop; I would appreciate a bit of help with this.
Since you'll want to use a tool like jq to construct the payload anyway, you should let jq parse the file instead of using the shell.
filter='split(" ") |
{ request_details: [
{
id: .[0],
id_id: .[1],
reason: "Pickup reattempts exhausted"
}
]
}'
jq -cR "$filter" file.txt |
while IFS= read -r payload; do
curl -X POST --header 'Content-Type: application/json' \
--header 'Accept: application/json' \
--header 'HTTP_X_MERCHANT_CODE: STA' \
--header 'AK-Client-IP: 135.71.173.56' \
--header 'Authorization: Basic qwrewrereererweer' \
-d "$payload"
done
The -c option to jq ensures the entire output appears on one line: curl doesn't need a pretty-printed JSON value.
read accepts multiple target variable names. The last one receives all the content not yet read from the line. So read p reads the whole line, read p q would read the first token (separated by whitespace) into p and the rest into q, and read p q r would read the first two tokens into p and q and any remaining junk into r (for example if you want to support comments or extra tokens in file.txt).
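Applied to your loop, here is a minimal sketch of that read-based approach (reusing the curl command from the question, and assuming the values in file.txt never contain characters that need JSON escaping; the jq version above handles that for you):
# Read the first column into p and the rest of each line into q.
while read -r p q; do
  curl -X POST --header 'Content-Type: application/json' \
    --header 'Accept: application/json' \
    --header 'HTTP_X_MERCHANT_CODE: STA' \
    --header 'AK-Client-IP: 135.71.173.56' \
    --header 'Authorization: Basic qwrewrereererweer' \
    -d '{
      "request_details": [
        {
          "id": "'"$p"'",
          "id_id": "'"$q"'",
          "reason": "Pickup reattempts exhausted"
        }
      ]
    }' api.stack.com/ask
done < file.txt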
I have a parsed variable obtained after parsing some text:
parsed=$(echo "PA-232 message1 GX-1234 message2 PER-10 message3" | grep -Eo '[A-Z]+-[0-9]+')
parsed contains a bunch of ids:
echo "$parsed"
PA-232
GX-1234
PER-10
The next thing I have to do in my script is generate a json text and invoke an API with it:
The json text should be
"{\"tasks\": [{\"taskId\": \"PA-232\"}, {\"taskId\": \"GX-1234\"}, {\"taskId\": \"PER-10\"}], \"projectId\": \"$CI_PROJECT_ID\" }"
Notice CI_PROJECT_ID is an env var that I also have to send; that's why I needed to use double quotes and escape them.
And it would be called with curl:
curl -X POST -H 'Content-Type:application/json' -k -u $CLIENT_ID:$CLIENT_SECRET 'https://somewhere.com/api/tasks' -d "{\"tasks\": [{\"taskId\": \"PA-232\"}, {\"taskId\": \"GX-1234\"}, {\"taskId\": \"PER-10\"}], \"projectId\": \"$CI_PROJECT_ID\"}"
The question is how can I generate a json string like the one shown above from the parsed variable and the additional envvar?
How about doing it with jq?
CI_PROJECT_ID='I want this " to be escaped automatically'
echo 'PA-232 message1 GX-1234 message2 PER-10 message3' |
jq -R --arg ciProjectId "$CI_PROJECT_ID" '
{
tasks: [
capture( "(?<taskId>[[:upper:]]+-[[:digit:]]+)"; "g" )
],
projectId: $ciProjectId
}
'
{
"tasks": [
{
"taskiD": "PA-232"
},
{
"taskiD": "GX-1234"
},
{
"taskiD": "PER-10"
}
],
"projectId": "I want this \" to be escaped automatically"
}
Note: you can use jq -c ... to output compact JSON on a single line.
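To feed that straight into your curl call, add -c for compact output and let curl read the body from standard input with -d @- (a sketch reusing the endpoint and credentials from your question):
echo 'PA-232 message1 GX-1234 message2 PER-10 message3' |
jq -cR --arg ciProjectId "$CI_PROJECT_ID" '
  {
    tasks: [ capture( "(?<taskId>[[:upper:]]+-[[:digit:]]+)"; "g" ) ],
    projectId: $ciProjectId
  }
' |
curl -X POST -H 'Content-Type:application/json' -k \
  -u "$CLIENT_ID:$CLIENT_SECRET" \
  'https://somewhere.com/api/tasks' \
  -d @-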
And here's a solution without jq; it doesn't escape characters in the strings, so it might generate invalid JSON:
CI_PROJECT_ID='no escaping needed'
tasks_jsonArr=$(
echo "PA-232 message1 GX-1234 message2 PER-10 message3" |
grep -Eo '[A-Z]+-[0-9]+' |
sed 's/.*/{ "taskiD": "&" }/' |
paste -sd ',' |
sed 's/.*/[ & ]/'
)
curl -k 'https://somewhere.com/api/tasks' \
-X POST \
-H 'Content-Type:application/json' \
-u "$CLIENT_ID:$CLIENT_SECRET" \
-d "{\"tasks\": $tasks_jsonArr, \"projectId\": \"$CI_PROJECT_ID\"}"
N.B. For JSON-escaping strings with standard tools, take a look at function json_stringify in awk
In my bash script I'm running a curl POST request to get data:
f_curl_get_data (){
read -p "start date : " start_date
read -p "end date : " end_date
# (!) NOTE
# - the date time format must be YYYY-MM-DD
mng_type=users
user=myuser
secret=mysecret
curl --location --request POST 'https://myapi.com/api/v2.1/rest/exports' \
--header 'Content-Type: application/json' \
--header 'SDK-APP-ID: '$user'' \
--header 'SDK-SECRET: '$secret'' \
--data-raw '{
"type":"'$mng_type'",
"start_date":"'$start_date'",
"end_date": "'$end_date'"
}'
}
and I get the following results
{"results":{"created_at":"2020-03-13T07:04:14Z","download_url":"","error_message":"","original_filename":"2020-03-13T07:04:14Z_exported_users.json","percentage":0,"resource_name":"users","size":0,"status":"started","total_rows":0,"unique_id":"37c23e60-5b83-404a-bd1f-6733ef04463b"},"status":200}
How do I get just the value of "unique_id", using awk or some other command?
37c23e60-5b83-404a-bd1f-6733ef04463b
Thank you.
Using sed
sed -e 's/.*unique_id":"\(.*\)\"}.*/\1/'
Demo:
$ echo '{"results":{"created_at":"2020-03-13T07:04:14Z","download_url":"","error_message":"","original_filename":"2020-03-13T07:04:14Z_exported_users.json","percentage":0,"resource_name":"users","size":0,"status":"started","total_rows":0,"unique_id":"37c23e60-5b83-404a-bd1f-6733ef04463b"},"status":200}' | sed -e 's/.*unique_id":"\(.*\)\"}.*/\1/'
37c23e60-5b83-404a-bd1f-6733ef04463b
Using GNU awk and json extension:
$ gawk '
#load "json" # load extension
{
lines=lines $0 # in case of multiline json file
if(json_fromJSON(lines,data)!=0) { # explode valid json to an array
print data["results"]["unique_id"] # print the object value
lines="" # in case there is more json left
}
}' file
Output:
37c23e60-5b83-404a-bd1f-6733ef04463b
The extension can be found here:
http://gawkextlib.sourceforge.net/json/json.html
... or you could use jq:
$ jq -r '.results.unique_id' file
37c23e60-5b83-404a-bd1f-6733ef04463b
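If the goal is to use that value later in the script, here is a sketch of capturing it inside your f_curl_get_data function (assuming jq is available; --silent just suppresses curl's progress output):
response=$(curl --silent --location --request POST 'https://myapi.com/api/v2.1/rest/exports' \
  --header 'Content-Type: application/json' \
  --header 'SDK-APP-ID: '"$user" \
  --header 'SDK-SECRET: '"$secret" \
  --data-raw '{
    "type": "'"$mng_type"'",
    "start_date": "'"$start_date"'",
    "end_date": "'"$end_date"'"
  }')
# Extract just the unique_id field from the JSON response.
unique_id=$(printf '%s' "$response" | jq -r '.results.unique_id')
echo "$unique_id"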
I want to send a big JSON payload with a long string field via curl; how should I split it across multiple lines? For example:
curl -X POST 'localhost:3000/upload' \
-H 'Content-Type: application/json'
-d "{
\"markdown\": \"# $TITLE\\n\\nsome content with multiple lines....\\n\\nsome content with multiple lines....\\n\\nsome content with multiple lines....\\n\\nsome content with multiple lines....\\n\\n\"
}"
Use a tool like jq to generate your JSON, rather than trying to manually construct it. Build the multiline string in the shell, and let jq encode it. Most importantly, this avoids any potential errors that could arise from TITLE containing characters that would need to be correctly escaped when forming your JSON value.
my_str="# $TITLE
some content with multiple lines...
some content with multiple lines...
some content with multiple lines..."
my_json=$(jq -n --arg v "$my_str" '{markdown: $v}')
curl -X POST 'localhost:3000/upload' \
-H 'Content-Type: application/json' \
-d "$my_json"
curl has the ability to read the data for -d from standard input, which means you can pipe the output of jq directly to curl:
jq -n --arg v "$my_str" '{markdown: $v}' | curl ... -d @-
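For instance, with the URL and header from your example, the whole request could be a single pipeline:
# jq JSON-encodes the multiline string; curl reads the finished body from stdin.
jq -n --arg v "$my_str" '{markdown: $v}' |
curl -X POST 'localhost:3000/upload' \
  -H 'Content-Type: application/json' \
  -d @-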
You can split anything to multiple lines using the technique already in your post, by terminating lines with \.
If you need to split in the middle of a quoted string,
terminate the quote and start a new one.
For example these are equivalent:
echo "foobar"
echo "foo""bar"
echo "foo"\
"bar"
But for your specific example I recommend a much better way.
Creating the JSON in a double-quoted string is highly error prone,
because of having to escape all the internal double-quotes,
which becomes hard to read and maintain as well.
A better alternative is to use a here-document,
pipe it to curl, and use -d#- to make it read the JSON from stdin.
Like this:
formatJson() {
cat << EOF
{
"markdown": "some content with $variable in it"
}
EOF
}
formatJson | curl -X POST 'localhost:3000/upload' \
-H 'Content-Type: application/json' \
-d @-
If I were you, I'd save the JSON to a file:
curl -X POST 'localhost:3000/upload' \
-H 'Content-Type: application/json' \
-d "$(cat my_json.json)"
I'm using curl with IBM Watson to produce a transcript, but I can't seem to get an output where just the transcript is shown; the full JSON response I get is shown below.
Another method might be just to grep for the text in the "transcript": "..." fields.
The curl command:
curl -u user:password -X POST --header "Content-Type: audio/wav" --header "Transfer-Encoding: chunked" --data-binary @test.wav "https://stream.watsonplatform.net/speech-to-text/api/v1/recognize?continuous=true" > demo.txt
{
"results": [
{
"alternatives": [
{
"confidence": 0.302,
"transcript": "when to stop announced "
}
],
"final": true
},
{
"alternatives": [
{
"confidence": 0.724,
"transcript": "Russia is destroying western cheese and considering a ban Weston condoms and infection internet is reacting "
}
],
"final": true
To store the output into a file you can use the option -o
curl -u user:password -X POST --data-binary @test.wav \
-o transcript.txt \
--header "Content-Type: audio/wav" --header "Transfer-Encoding: chunked" \
"https://stream.watsonplatform.net/speech-to-text/api/v1/recognize?continuous=true"
-o, --output <file>
Write output to <file> instead of stdout. Like in:
curl http://ibm.com -o "html_output.txt"
More info.
grep alone can't do quite what you're asking for because it doesn't get more granular than a single line. However, grep + sed can - sed can be used to perform regex replacement on the lines that grep spits out.
First pipe to grep: grep transcript, then pipe to sed: sed 's/.*"transcript": "\(.*\)".*/\1/g'
Here's the complete command:
curl -u user:password -X POST --header "Content-Type: audio/wav" --header "Transfer-Encoding: chunked" --data-binary @test.wav "https://stream.watsonplatform.net/speech-to-text/api/v1/recognize?continuous=true" | grep transcript | sed 's/.*"transcript": "\(.*\)".*/\1/g'
Note that by default, curl displays several lines of progress and status information when piping its output. You can disable those with -s:
curl -s -u user:password -X POST --header "Content-Type: audio/wav" --header "Transfer-Encoding: chunked" --data-binary @test.wav "https://stream.watsonplatform.net/speech-to-text/api/v1/recognize?continuous=true" | grep transcript | sed 's/.*"transcript": "\(.*\)".*/\1/g'
Here's more info on sed if you're interested: http://www.grymoire.com/Unix/Sed.html
Update: I should also mention that there are SDKs available for a number of languages if you're interested in doing something more complex - https://github.com/watson-developer-cloud