S3 Upload via CURL fails when triggered via cronjob - bash

I'm using a script, dump-to-s3.sh, to put database dumps in an S3 bucket.
When triggered manually it works perfectly, but when I trigger it via this cron entry (in root's crontab) it fails with the following error message:
crontab
31 12 * * * /home/dokku/.mongodb/dump-to-s3.sh
error from CURL
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>SignatureDoesNotMatch</Code>
<Message>The request signature we calculated does not match the signature you provided.
Check your key and signing method.</Message>...
dump-to-s3.sh
PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
#cd to dump-folder
cd /dump/folder
file="mydump.tar.gz"
bucket="mybucket"
resource="/${bucket}/dumps/${file}"
contentType="application/x-compressed-tar"
dateValue=`date -R`
stringToSign="PUT\n\n${contentType}\n${dateValue}\n${resource}"
s3Key="xxxxxxxxxxxxxxxx"
s3Secret="xxxxxxxxxxxxxxxx"
signature=`echo -en ${stringToSign} | openssl sha1 -hmac ${s3Secret} -binary | base64`
curl -X PUT -T "${file}" \
-H "Host: ${bucket}.s3.amazonaws.com" \
-H "Date: ${dateValue}" \
-H "Content-Type: ${contentType}" \
-H "Authorization: AWS ${s3Key}:${signature}" \
https://${bucket}.s3.amazonaws.com/dumps/${file}

I think you need to run bash explicitly, e.g.
/bin/bash /home/dokku/.mongodb/dump-to-s3.sh
(or give the script a #!/bin/bash shebang line; the script as posted has none). cron runs commands through /bin/sh, and the -en options for echo don't work there; they are a bash extension. Under sh the literal "-en" ends up in the string being signed, so it no longer matches what S3 computes, hence SignatureDoesNotMatch.
In sh:
$ sh
sh-3.2$ echo -en foo
-en foo
sh-3.2$
In bash:
$ bash
bash-3.2$ echo -en foo
foobash-3.2$
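A more robust fix, whichever shell ends up running the script, is to build the string with printf, whose escape handling is specified by POSIX. A minimal sketch using the same variables as the script above:

# printf "%b" expands \n portably in any POSIX shell,
# so the signature no longer depends on bash's echo -en
stringToSign="PUT\n\n${contentType}\n${dateValue}\n${resource}"
signature=$(printf "%b" "${stringToSign}" | openssl sha1 -hmac "${s3Secret}" -binary | base64)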

Related

Upload data to aws s3 using curl in c++

I'm trying to upload data to AWS S3 using the following script and am getting the error below.
Example Script
#!/usr/bin/sh
file_to_upload="/home/sraj/Hello10.txt"
bucket="mybucket"
filepath="/${bucket}/${file_to_upload}"
contentType='application\/x-compressed-tar'
dateValue="`date +'%a, %d %b %Y %H:%M:%S %z'`"
signature_string="PUT\n\n${contentType}\n${dateValue}\n${filepath}"
s3_access_key=xxxxxxxxx
s3_secret_key=yyyyyyyyy
signature_hash=`echo -en ${signature_string} | openssl sha256 -hmac ${s3_secret_key} -binary | base64`
echo "${s3_access_key} : ${s3_secret_key} : ${signature_hash}"
curl -X PUT -T "${file_to_upload}" \
-H "Host: ${bucket}.s3.amazonaws.com" \
-H "Date: ${dateValue}" \
-H "Content-Type: ${contentType}" \
-H "Authorization: AWS ${s3_access_key}:${signature_hash}" \
https://${bucket}.s3.amazonaws.com/${file_to_upload}
I think this is related to the IAM user's KMS encryption option.
Kindly help me solve the above issue; I don't know much about curl or S3 either, so sample code would be very helpful.
Some people mentioned this is related to a Signature Version 4 issue, but I don't have much idea how to fix that. The error says Invalid Argument.
I found this link; I have no idea if it will help:
https://www.gyanblog.com/aws/how-upload-aws-s3-curl/#curl-the-savior
The only difference I see is the dateValue. I do not know if that matters, but just in case, I posted this.
# about the file
file_to_upload=<file you want to upload>
bucket=<your s3 bucket name>
filepath="/${bucket}/${file_to_upload}"
# metadata
contentType="application/x-compressed-tar"
dateValue=`date -R`
signature_string="PUT\n\n${contentType}\n${dateValue}\n${filepath}"
#s3 keys
s3_access_key=<your s3 access key>
s3_secret_key=<your s3 secret key>
#prepare signature hash to be sent in Authorization header
signature_hash=`echo -en ${signature_string} | openssl sha1 -hmac ${s3_secret_key} -binary | base64`
# actual curl command to do PUT operation on s3
curl -X PUT -T "${file_to_upload}" \
-H "Host: ${bucket}.s3.amazonaws.com" \
-H "Date: ${dateValue}" \
-H "Content-Type: ${contentType}" \
-H "Authorization: AWS ${s3_access_key}:${signature_hash}" \
https://${bucket}.s3.amazonaws.com/${file_to_upload}
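As a side note, if your curl is 7.75.0 or newer, it can compute a Signature Version 4 request itself via --aws-sigv4, which sidesteps the manual signing (and the SigV4 issue mentioned above) entirely. A hedged sketch, reusing the variables from the script above and assuming a bucket in us-east-1:

# curl >= 7.75.0 builds the SigV4 Authorization header itself;
# us-east-1 is an assumption -- replace with your bucket's region
curl -X PUT -T "${file_to_upload}" \
  --user "${s3_access_key}:${s3_secret_key}" \
  --aws-sigv4 "aws:amz:us-east-1:s3" \
  "https://${bucket}.s3.amazonaws.com/${file_to_upload}"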

BASH SCRIPT output doesn't export in file

I'm stuck with my self-coded application calling the wscat command (WebSocket), and I'm trying to export its output. I also want to exit wscat once it has finished, some time after it receives input from the JSON backend's API.
#!/bin/bash
while getopts a:c: flag
do
case "${flag}" in
a) accesskey=${OPTARG};;
c) clientnodeid=${OPTARG};;
esac
done
master="wscat -c ws://localhost:8091/ws/callback -H accessKey:$accesskey -H clientNodeId:$clientnodeid"
sleep 15
eval $master
final=$(eval echo "$master")
echo $final >>logfile.log
ps -ef | grep wscat | grep -v grep | awk '{print $2}' | xargs kill
#curl -X POST --data "$final" -k "https://localhost:7460/activate" -H "accept: application/json" -H "accessKey:$accesskey" -H "clientNodeId:$clientnodeid" -H "Content-Type: application/json" -H "callbackRequested:true"
exit
I want the output from wscat to be sent over curl.
When I run the script manually it succeeds, but when I call it from another application (Java) it runs without generating the log.
In other words, I want to export $final to a text file and then feed that text file to the --data option of the curl call.
Fixed based on @Barmar's comment:
You're overcomplicating this with all those variables. Just do
eval "$master" >> logfile.log

Bash script to loop through remote directory and pipe files 1 at a time to CURL

I am trying to transfer all files residing in a specified directory on Server1 to Server3 via a script running on Server2.
The transfer to Server3 has to happen through an API and thus must use the following CURL call:
curl -X POST https://content.dropboxapi.com/2/files/upload \
--header "Authorization: Bearer $token" \
--header "Dropbox-API-Arg: {\"path\": \"/xfer/$name\",\"mode\": \"add\",\"autorename\": true,\"mute\": false,\"strict_conflict\": false}" \
--header "Content-Type: application/octet-stream" \
--data-binary @$f
If it is just one file, I can do it successfully, but I'm trying to iterate through the directory on Server1 and send each file directly to the curl call. So far I've got:
files="( $(ssh me#server1 ls dir/*) )"
while read f
do
name=$(basename ${f})
curl -X POST https://content.dropboxapi.com/2/files/upload \
--header "Authorization: Bearer $token" \
--header "Dropbox-API-Arg: {\"path\": \"/xfer/$name\",\"mode\": \"add\",\"autorename\": true,\"mute\": false,\"strict_conflict\": false}" \
--header "Content-Type: application/octet-stream" \
--data-binary @$f
done <<< "$files"
The loop seems to be reading the "(" from the array of files as the first file name, which obviously causes a problem. I can't get past that to tell whether POSTing the current file in the loop via --data-binary will actually do what I think (or hope) it will.
Any ideas?
The error in the original message was enclosing the ssh command in "()". I am working on a similar issue. In the past I've used rsync, but I want a solution that doesn't require installing extra software. Here is an example I'm working with to move files off a Node.js dev server for backup, running in Bash on Debian:
files=$(ssh chris@estack ls ~/tmp/gateway)
#echo $files
for FILE in $files
do
if [[ "$FILE" = "node_modules" || "$FILE" = ".git" ]]
then
echo "skip $FILE";
continue
fi
echo Copy ~/tmp/gateway/$FILE
#scp -Cpr chris@estack:~/tmp/gateway/$FILE ~/tmp/tmp
done
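To actually stream each file from Server1 through Server2 into the Dropbox call without a temporary copy, curl can read the request body from stdin via --data-binary @-. A hedged sketch building on the corrected listing above (hostnames and paths as in the question):

# list the remote file names (no parentheses around the substitution)
files=$(ssh me@server1 ls dir/)

while read -r f; do
    name=$(basename "$f")
    # cat the file on Server1 and pipe it straight into curl;
    # ssh -n keeps ssh from eating the loop's stdin
    ssh -n me@server1 cat "dir/$name" | \
    curl -X POST https://content.dropboxapi.com/2/files/upload \
        --header "Authorization: Bearer $token" \
        --header "Dropbox-API-Arg: {\"path\": \"/xfer/$name\",\"mode\": \"add\",\"autorename\": true,\"mute\": false,\"strict_conflict\": false}" \
        --header "Content-Type: application/octet-stream" \
        --data-binary @-
done <<< "$files"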

Amazon S3 file download through curl by using IAM user credentials

I have created an IAM user with access to only one bucket. I have tested the credentials and permissions through the web and Python boto, and it's working fine.
Now I have a requirement to use these credentials to download a private file from that bucket through curl.
signature="$(echo -n "GET" | openssl sha1 -hmac "f/rHQ8yCvPthxxxxxxxXxxxx" -binary | base64)"
date="$(LC_ALL=C date -u +"%a, %d %b %Y %X %z")"
curl -H "Host: my-bucket.s3.amazonaws.com" -H "Date: $date" -H "Authorization: AWS 'XXXAJX2NY3QXXX35XXX':$signature" -H "Content-Type: 'text/plain'" https://my-bucket.s3.amazonaws.com/path/to_file.txt
but I am getting the following error:
InvalidAccessKeyId: The AWS Access Key Id you provided does not exist in our records.
Please help: how do I download the file using curl? Is there something I am missing, or is it not possible through the curl command?
Thanks!
The following is an example of how you can download a file from S3 with a curl script:
#!/bin/sh
file=path/to/file
bucket=your-bucket
resource="/${bucket}/${file}"
contentType="application/x-compressed-tar"
dateValue="`date +'%a, %d %b %Y %H:%M:%S %z'`"
stringToSign="GET
${contentType}
${dateValue}
${resource}"
s3Key=xxxxxxxxxxxxxxxxxxxx
s3Secret=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
signature=`/bin/echo -en "$stringToSign" | openssl sha1 -hmac ${s3Secret} -binary | base64`
curl -H "Host: ${bucket}.s3.amazonaws.com" \
-H "Date: ${dateValue}" \
-H "Content-Type: ${contentType}" \
-H "Authorization: AWS ${s3Key}:${signature}" \
https://${bucket}.s3.amazonaws.com/${file}
Hope it helps.
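If signing the headers keeps failing, another option under the same SigV2 scheme is a pre-signed (query-string) URL, which needs no Authorization header at all. A hedged sketch reusing the variables from the script above; note that regions launched after January 2014 require Signature Version 4 and will reject this:

# pre-signed URL, SigV2: signature and expiry travel as query parameters
expires=$(( $(date +%s) + 300 ))          # link valid for 5 minutes
stringToSign="GET\n\n\n${expires}\n${resource}"
signature=$(printf "%b" "${stringToSign}" | openssl sha1 -hmac "${s3Secret}" -binary | base64)
# percent-encode the characters that are special in a query string
encSig=$(printf '%s' "${signature}" | sed -e 's/+/%2B/g' -e 's/\//%2F/g' -e 's/=/%3D/g')
curl "https://${bucket}.s3.amazonaws.com/${file}?AWSAccessKeyId=${s3Key}&Expires=${expires}&Signature=${encSig}"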

cURL: (6) Couldn't resolve host 'GET' BASH script

I am having a problem with my bash script. It is producing the error:
curl: (6) Couldn't resolve host
What have I done wrong?
The following is my bash script.
#!/bin/bash
(set -o igncr) 2>/dev/null && set -o igncr; # this comment is needed
CookieFileName=cookies.txt
TEST="curl -k --cookie $CookieFileName --cookie-jar $CookieFileName POST -F "passUID=xxx&passUCD=xxx" https://wp1.coned.com/retailaccess/default.asp"
echo $TEST
RESPONSE=`$TEST`
echo $RESPONSE
Try this instead:
#!/bin/bash
set -o igncr
CookieFileName='cookies.txt'
curl -k \
--cookie "$CookieFileName" \
--cookie-jar "$CookieFileName" \
--data "passUID=xxx&passUCD=xxx" \
"https://wp1.coned.com/retailaccess/default.asp" # POST request
If you need to load another page after that, simply chain cURL commands like the previous one:
curl -k \
--cookie "$CookieFileName" \
--cookie-jar "$CookieFileName" \
"https://wp1.coned.com/retailaccess/another_page.asp" # GET request
Note
Command substitution: $(foo bar) causes the command foo to be executed with the argument bar, and $(...) is replaced by the command's output. See http://mywiki.wooledge.org/BashFAQ/002, http://mywiki.wooledge.org/CommandSubstitution, and http://mywiki.wooledge.org/BashFAQ/082.
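If you do need to build a command before running it, a bash array preserves the quoting that a flat string loses when the variable is expanded and word-split. A minimal sketch of the same request:

# store the command as an array: each element stays a single argument,
# so the embedded quotes and spaces survive expansion
cmd=(curl -k
     --cookie "$CookieFileName"
     --cookie-jar "$CookieFileName"
     --data "passUID=xxx&passUCD=xxx"
     "https://wp1.coned.com/retailaccess/default.asp")
response=$("${cmd[@]}")
echo "$response"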
