Upload data to AWS S3 using curl in C++ - Windows

I'm trying to upload data to AWS S3 using the following script, and I'm getting the error below.
Example Script
#!/usr/bin/sh
file_to_upload="/home/sraj/Hello10.txt"
bucket="mybucket"
filepath="/${bucket}/${file_to_upload}"
contentType='application\/x-compressed-tar'
dateValue="`date +'%a, %d %b %Y %H:%M:%S %z'`"
signature_string="PUT\n\n${contentType}\n${dateValue}\n${filepath}"
s3_access_key=xxxxxxxxx
s3_secret_key=yyyyyyyyy
signature_hash=`echo -en ${signature_string} | openssl sha256 -hmac ${s3_secret_key} -binary | base64`
echo "${s3_access_key} : ${s3_secret_key} : ${signature_hash}"
curl -X PUT -T "${file_to_upload}" \
-H "Host: ${bucket}.s3.amazonaws.com" \
-H "Date: ${dateValue}" \
-H "Content-Type: ${contentType}" \
-H "Authorization: AWS ${s3_access_key}:${signature_hash}" \
https://${bucket}.s3.amazonaws.com/${file_to_upload}
I think this is related to the IAM user KMS encryption option.
Kindly help me figure out how to solve the above issue; I don't have much experience with curl or S3 either.
Sample code would be very helpful.
Some people have mentioned that this is related to a Signature Version 4 issue, but I don't have much idea how to fix that.

The error says Invalid Argument.
I found this link; I have no idea if it will help:
https://www.gyanblog.com/aws/how-upload-aws-s3-curl/#curl-the-savior
The only difference I see is the dateValue.
I do not know if that matters, but just in case, I posted this.
# about the file
file_to_upload=<file you want to upload>
bucket=<your s3 bucket name>
filepath="/${bucket}/${file_to_upload}"
# metadata
contentType="application/x-compressed-tar"
dateValue=`date -R`
signature_string="PUT\n\n${contentType}\n${dateValue}\n${filepath}"
#s3 keys
s3_access_key=<your s3 access key>
s3_secret_key=<your s3 secret key>
#prepare signature hash to be sent in Authorization header
signature_hash=`echo -en ${signature_string} | openssl sha1 -hmac ${s3_secret_key} -binary | base64`
# actual curl command to do PUT operation on s3
curl -X PUT -T "${file_to_upload}" \
-H "Host: ${bucket}.s3.amazonaws.com" \
-H "Date: ${dateValue}" \
-H "Content-Type: ${contentType}" \
-H "Authorization: AWS ${s3_access_key}:${signature_hash}" \
https://${bucket}.s3.amazonaws.com/${file_to_upload}
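As an aside, if the curl on the machine is 7.75.0 or newer, it can produce the Authorization header itself using Signature Version 4, which avoids hand-rolling the date/openssl/base64 plumbing above. A minimal sketch, assuming placeholder credentials, a bucket named mybucket in us-east-1, and a file in the current directory:
# Sketch only: curl signs the PUT request itself with SigV4.
# Replace the keys, bucket, region and file name with your own values.
file_to_upload="Hello10.txt"
bucket="mybucket"
region="us-east-1"
curl -T "${file_to_upload}" \
--user "YOUR_ACCESS_KEY:YOUR_SECRET_KEY" \
--aws-sigv4 "aws:amz:${region}:s3" \
"https://${bucket}.s3.amazonaws.com/${file_to_upload}"
Note that -T already implies a PUT, so no explicit -X PUT is needed.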

Related

Using Curl to upload files to S3 using Signature v4

I'm trying to upload a file to S3 using this curl command (I can't use the AWS CLI):
src_file=/path/to/file.png
dest_file=test_image.png
bucket=mybucket.com
s3Key="<key>"
s3Secret="<secret>"
contentsha=`openssl sha256 ${src_file} | awk '{print $NF}'`
curl https://${bucket}.s3.amazonaws.com/${dest_file} \
-H "x-amz-content-sha256: ${contentsha}" \
--aws-sigv4 "aws:amz:us-east-1:s3" \
--user "${s3Key}:${s3Secret}" \
--upload-file "${src_file}" \
--insecure
but I keep getting this error:
<Error><Code>SignatureDoesNotMatch</Code><Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message>
What am I doing wrong?
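It is hard to say from the snippet alone what S3 is objecting to, but two things are worth double-checking: that ${contentsha} really ends up as the bare hex digest (the text output of openssl sha256 differs between OpenSSL versions), and that us-east-1 is the bucket's actual region. A hedged re-take of the same call under those assumptions:
# Sketch only: -r makes openssl print "<hex> *<filename>", so the first
# field is the bare digest regardless of OpenSSL version.
contentsha=$(openssl dgst -sha256 -r "${src_file}" | awk '{print $1}')
curl "https://${bucket}.s3.amazonaws.com/${dest_file}" \
-H "x-amz-content-sha256: ${contentsha}" \
--aws-sigv4 "aws:amz:us-east-1:s3" \
--user "${s3Key}:${s3Secret}" \
--upload-file "${src_file}" \
--insecure
If the mismatch persists, checking the curl version may also help, since the --aws-sigv4 support received fixes in later releases.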

Bash script to loop through remote directory and pipe files 1 at a time to CURL

I am trying to transfer all files residing in a specified directory on Server1 to Server3 via a script running on Server2.
The transfer to Server3 has to happen through an API and thus must use the following CURL call:
curl -X POST https://content.dropboxapi.com/2/files/upload \
--header "Authorization: Bearer $token" \
--header "Dropbox-API-Arg: {\"path\": \"/xfer/$name\",\"mode\": \"add\",\"autorename\": true,\"mute\": false,\"strict_conflict\": false}" \
--header "Content-Type: application/octet-stream" \
--data-binary @$f
If it is just 1 file, I can do it successfully, but I'm trying to iterate through the directory on Server1 and send each file directly to the curl call. So far I've got:
files="( $(ssh me#server1 ls dir/*) )"
while read f
do
name=$(basename ${f})
curl -X POST https://content.dropboxapi.com/2/files/upload \
--header "Authorization: Bearer $token" \
--header "Dropbox-API-Arg: {\"path\": \"/xfer/$name\",\"mode\": \"add\",\"autorename\": true,\"mute\": false,\"strict_conflict\": false}" \
--header "Content-Type: application/octet-stream" \
--data-binary @$f
done <<< "$files"
The loop seems to be reading the "(" from the array of files into the first file name, which obviously causes a problem. I can't get beyond that to tell whether POSTing the current file in the loop via --data-binary will actually do what I think (or am hoping) it will.
Any ideas?
The error in the original message was enclosing the ssh command in "()". I am working on a similar issue. In the past I've used rsync, but I want a solution that doesn't require installing extra software. Here is an example I'm working with to move files off of a Node.js dev server for backup, running in Bash on Debian:
files=$(ssh chris@estack ls ~/tmp/gateway)
#echo $files
for FILE in $files
do
if [[ "$FILE" = "node_modules" || "$FILE" = ".git" ]]
then
echo "skip $FILE";
continue
fi
echo Copy ~/tmp/gateway/$FILE
#scp -Cpr chris@estack:~/tmp/gateway/$FILE ~/tmp/tmp
done
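If the goal is to stream each file straight from Server1 into the Dropbox call without staging it locally, one option is to cat the file over ssh and let curl read the body from stdin with --data-binary @-. A rough sketch, assuming key-based ssh auth, that $token is already set, and that the file names contain no spaces:
# Sketch only: list the remote directory, then stream each file through ssh
# into curl; @- tells curl to read the request body from standard input.
# The inner ssh uses -n so it does not swallow the list being read by the loop.
ssh me@server1 'ls dir' | while read -r name; do
ssh -n me@server1 "cat dir/${name}" | \
curl -X POST https://content.dropboxapi.com/2/files/upload \
--header "Authorization: Bearer $token" \
--header "Dropbox-API-Arg: {\"path\": \"/xfer/${name}\",\"mode\": \"add\",\"autorename\": true,\"mute\": false,\"strict_conflict\": false}" \
--header "Content-Type: application/octet-stream" \
--data-binary @-
done
Keep in mind that --data-binary reads the whole body into memory before sending, which matters for very large files.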

file transfer from 1 remote server to another remote server without downloading file

I am trying to write a Bash script on my server (My Server) that will grab a file from one remote server (Source) and copy it to a Dropbox account (Destination). I need to get the file from Source via SFTP and will be copying it to Destination using the Dropbox API (HTTPS). So far I can get the file with:
curl -k "sftp://Source/file.txt" --user "me:mypasswd" -o "/test/file.txt" --ftp-create-dirs
and then copy it to Dropbox with
curl -X POST https://content.dropboxapi.com/2/files/upload \
--header "Authorization: Bearer " \
--header "Dropbox-API-Arg: {\"path\": \"/path/to/file.txt\",\"mode\": \"add\",\"autorename\": true,\"mute\": false,\"strict_conflict\": false}" \
--header "Content-Type: application/octet-stream" \
--data-binary @/test/file.txt
I'm guessing the "right" way to do this is to pipe the file from Source directly to Destination, but I'm just not sure how to go about putting them together.
This is definitely not my area of expertise, so I don't even know where to start - nested CURL calls? If anyone could point me in the right direction, I'd be most appreciative.
UPDATE
Here's the whole curl command I'm running:
curl -X POST https://content.dropboxapi.com/2/files/upload \
--header "Authorization: Bearer $token" \
--header "Dropbox-API-Arg: {\"path\": \"/xfer/chef.txt\",\"mode\": \"add\",\"autorename\": true,\"mute\": false,\"strict_conflict\": false}" \
--header "Content-Type: application/octet-stream" \
--data-binary "$(curl -k "http://marketing.dave0112.com/file.txt" --user "me:mypasswd")"
I was having issues with curl not supporting SFTP, so I changed to HTTP while I get that end sorted out. Not sure if that affects anything.
You can replace this line:
--data-binary @/test/file.txt
with
--data-binary @<(curl -k "sftp://Source/file.txt" --user "me:mypasswd")
If that causes problems, try:
--data-binary "$(curl -k "sftp://Source/file.txt" --user "me:mypasswd")"

Authenticate S3 request using Authorization header

I would like to authenticate my S3 requests using this method. For this purpose I have adapted the scripts presented here. This is what it looks like:
#!/bin/sh
file="$2"
bucket="$1"
resource="/${bucket}/${file}"
contentType="$3"
dateValue="`date +'%a, %d %b %Y %H:%M:%S %z'`"
stringToSign="GET\n\n${contentType}\n${dateValue}\n${resource}"
s3Key="$AWS_ACCESS_KEY"
s3Secret="$AWS_SECRET_ACCESS_KEY"
signature=`/bin/echo -en "$stringToSign" | openssl sha1 -hmac ${s3Secret} -binary | base64`
curl -v -H "Host: ${bucket}.s3.amazonaws.com" -H "Date: ${dateValue}" -H "Content-Type: ${contentType}" -H "Authorization: AWS ${s3Key}:${signature}" https://${bucket}.s3.amazonaws.com/${file}
My file permissions and the file's content type are shown in screenshots (not included here).
Here is how I call my script:
./getfile1.sh tmp666 7b5879dd9b894f7fc5a14b895f01abd1.png image/png
Unfortunately I am getting a SignatureDoesNotMatch error, although the StringToSign returned by AWS matches mine:
<StringToSign>GET

image/png
Wed, 28 Sep 2016 12:46:14 +0200
/tmp666/7b5879dd9b894f7fc5a14b895f01abd1.png</StringToSign>
My IAM user has admin permissions. What am I doing wrong? I have spent half a day trying to solve this and feel powerless.
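The snippet alone does not pin down the cause, but when AWS echoes back a StringToSign that looks identical and still rejects the signature, the difference is usually invisible: whitespace in the exported keys, or how the string and secret reach openssl. As a hedged sanity check (same variables as the script above), the signature can be rebuilt with printf, which behaves identically under any /bin/sh, and with the secret quoted:
# Sketch only: the same SigV2 signature, built with printf and quoted variables.
stringToSign=$(printf 'GET\n\n%s\n%s\n%s' "${contentType}" "${dateValue}" "${resource}")
signature=$(printf '%s' "${stringToSign}" | openssl sha1 -hmac "${s3Secret}" -binary | base64)
echo "${signature}"
If this yields the same value as before, the next things to rule out are stray whitespace in AWS_ACCESS_KEY / AWS_SECRET_ACCESS_KEY and whether the key pair is still active.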

Amazon S3 file download through curl by using IAM user credentials

I have created an IAM user with access to only one bucket. I have tested the credentials and permissions through the web console and Python boto, and they work fine.
Now I have a requirement to use these credentials to download a private file from that bucket through curl.
signature="$(echo -n "GET" | openssl sha1 -hmac "f/rHQ8yCvPthxxxxxxxXxxxx" -binary | base64)"
date="$(LC_ALL=C date -u +"%a, %d %b %Y %X %z")"
curl -H "Host: my-bucket.s3.amazonaws.com" -H "Date: $date" -H "Authorization: AWS 'XXXAJX2NY3QXXX35XXX':$signature" -H "Content-Type: 'text/plain'" https://my-bucket.s3.amazonaws.com/path/to_file.txt
but I am getting the following error:
<Error><Code>InvalidAccessKeyId</Code><Message>The AWS Access Key Id you provided does not exist in our records.</Message>
Please help: how do I download the file using curl? Is there something I am missing, or is it not possible with a curl command?
Thanks!
Following is an example of how you can download a file with an S3 curl script:
#!/bin/sh
file=path/to/file
bucket=your-bucket
resource="/${bucket}/${file}"
contentType="application/x-compressed-tar"
dateValue="`date +'%a, %d %b %Y %H:%M:%S %z'`"
stringToSign="GET
${contentType}
${dateValue}
${resource}"
s3Key=xxxxxxxxxxxxxxxxxxxx
s3Secret=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
signature=`/bin/echo -en "$stringToSign" | openssl sha1 -hmac ${s3Secret} -binary | base64`
curl -H "Host: ${bucket}.s3.amazonaws.com" \
-H "Date: ${dateValue}" \
-H "Content-Type: ${contentType}" \
-H "Authorization: AWS ${s3Key}:${signature}" \
https://${bucket}.s3.amazonaws.com/${file}
Hope it helps.
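For comparison, curl 7.75.0 and newer can also sign the GET request itself with Signature Version 4, which avoids building the string to sign by hand. A minimal sketch with placeholder credentials, assuming the bucket lives in us-east-1:
# Sketch only: curl generates the SigV4 Authorization header on its own.
curl "https://your-bucket.s3.amazonaws.com/path/to/file" \
--user "ACCESS_KEY:SECRET_KEY" \
--aws-sigv4 "aws:amz:us-east-1:s3" \
-o file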
