How to fix data import to Windows EC2 from S3 bucket

I have a Windows Server 2012 R2 EC2 instance and fail to import .txt files from an S3 bucket.
I want to set up a regular data import from an S3 bucket to the EC2 instance using the AWS CLI. To test the command, I opened the command prompt with administrator rights, navigated to the directory where I want the files to land, and ran the following command.
aws s3 cp s3://mybuckt/ . --recursive
Then I get an error like the following for every file in the bucket:
download failed: s3://mybuckt/filename.txt to .\filename.txt [Error 87] The parameter is incorrect
I end up with a list of empty files in my directory. The file names match those in the bucket, but the files themselves are completely empty.
When I try the command without --recursive, nothing happens: no error messages, no files copied.
aws s3 cp s3://mybuckt/ .
Here are my questions:
Why does the --recursive option fail when I import the files?
What can I check in the configuration of the EC2 instance to verify that it is correctly set up for the data import?

You did not specify any files to copy. Note that the S3 path does not accept shell-style wildcards, so to select objects you combine --recursive with --exclude/--include filters:
aws s3 cp s3://mybuckt/ . --recursive --exclude "*" --include "*.txt"
Or, you could use sync, which is recursive by default:
aws s3 sync s3://mybuckt/ .

The solution to my specific problem was that I had to specify the file name on the bucket as well as the destination file name on my EC2 instance.
aws s3 cp s3://mybuckt/file.txt nameOnMyEC2.txt
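For the second question, a general way to confirm the instance is set up for the CLI is to check which credentials and region it actually resolves; these are standard AWS CLI commands rather than part of the fix above:
aws configure list
aws sts get-caller-identity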

Related

Uploading file to S3 bucket

I am trying to upload a file from my local machine to an S3 bucket, but I am getting the error "The user-provided path ~Downloads/index.png does not exist."
aws s3 cp ~Downloads/index.png s3://asdfbucketasdf/Temp/index_temp.png
A file named index does exist in my Downloads folder.
This answer might be helpful to users who are new to the AWS CLI on different platforms.
If you are on Linux or Linux-like systems, you can type:
aws s3 cp ~/Downloads/index.png s3://asdfbucketasdf/Temp/index_temp.png
Note that ~Downloads refers to the home directory of a user named Downloads. What you want is ~/Downloads, which means the Downloads directory under the current user's home directory.
Or you can type out your path in full (assuming your home directory is /home/matt):
aws s3 cp /home/matt/Downloads/index.png s3://asdfbucketasdf/Temp/index_temp.png
If you are on Windows, you can type:
aws s3 cp C:\Users\matt\Downloads\index.png s3://asdfbucketasdf/Temp/index_temp.png
or you can use the Windows equivalent of ~:
aws s3 cp %USERPROFILE%\Downloads\index.png s3://asdfbucketasdf/Temp/index_temp.png
If you are using Windows and AWS CLI version 2:
aws s3 cp "helloworld.txt" s3://testbucket

Unable to upload file to S3 bucket

I am trying to send an AD backup folder to an AWS S3 bucket from a Windows Server 2016 machine via the command line.
aws s3 cp "D:\WindowsImageBackup" s3://ad-backup/
However, I get the error below.
Invalid length for parameter Key, value: 0, valid range: 1-inf
The folder I am trying to upload contains some large files, so I am not sure if it is too big. I have tested the bucket and smaller files work.
Thanks
You have to use the --recursive option to upload a folder:
aws s3 cp --recursive "D:\WindowsImageBackup" s3://ad-backup/
Or pack that folder into a single file and upload that file with plain aws s3 cp.

Is there a way to redirect from a Bash command directly to an S3 file object?

Currently, I have Bash commands redirecting output to a log file, and then a separate CLI aws s3 cp call to copy the log file up to S3.
I was wondering if there's a way to redirect output straight to S3 without the extra command/step. I tried pointing aws s3 cp at an https URL, but that doesn't seem to work, since URLs refer to objects that already exist on S3.
I never tested it, but check whether this is reasonable:
aws s3 cp <(/path/command arg1 arg2) s3://mybucket/mykey
Here /path/command arg1 arg2 is the command whose output you have been redirecting to a log file; do not redirect its output, leave it on stdout.
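If process substitution does not work, the AWS CLI can also read the upload body from standard input when the source is given as -, so the command's output can be piped straight to S3 (bucket and key here are the same placeholders as above):
/path/command arg1 arg2 | aws s3 cp - s3://mybucket/mykey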
Not sure whether it is overkill for your scenario, but with an AWS File Gateway you can write the files to a mounted disk and they will be synced automatically to S3.

How to compare the content of a local folder with Amazon S3?

I have created a script to upload files to an S3 bucket, but I got a timeout error, so I am not sure if all the files are in the bucket. I have created another function for checking the differences, but it does not seem to work because of how the local folder is listed:
If I do a find such as find $FOLDER -type f | cut -d/ -f2- | sort, I get the whole path, like /home/sop/path/to/folder/... It seems that cut -d/ -f2- does nothing.
If I do an ls -LR, I do not get a list that I can compare with the aws s3api list-objects ... result.
The AWS Command-Line Interface (CLI) has a useful aws s3 sync command that can replicate files from a local directory to an Amazon S3 bucket (or vice versa, or between buckets).
It will only copy new/changed files, so it's a great way to make sure files have been uploaded.
See: AWS CLI S3 sync command documentation
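A quick way to see what still differs, without copying anything, is the sync command's --dryrun flag, which only prints the operations it would perform (the local path and bucket name below are placeholders):
aws s3 sync /path/to/local/folder s3://mybucket --dryrun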

Passing s3cmd commands as user data to EC2

I have one AWS EC2 instance. From this EC2 instance I am creating slave EC2 instances, and while creating the slave instances I pass user data to each new instance. In that user data I have written code for creating a new directory on the instance and downloading a file from an S3 bucket.
The problem is that the script creates the new directory on the EC2 instance, but it fails to download the file from the S3 bucket.
User data script:
#! /bin/bash
cd /home
mkdir pravin
s3cmd get s3://bucket/usr.sh >> download.log
As shown above, mkdir pravin creates the directory, but s3cmd get s3://bucket/usr.sh fails to download the file; download.log also gets created but remains empty.
How can I solve this problem? (The AMI used for this is preconfigured with s3cmd.)
Are you by chance running Ubuntu? Then Shlomo Swidler's question "Python s3cmd only runs from login shell, not during startup sequence" might apply exactly:
The s3cmd Python script (this one: http://s3tools.org/s3cmd ) seems to only work when run via an interactive login session, but not when run via scripts during the boot process.
Mitch Garnaat suggests that one should always beware of environmental differences inflicted by executing code within User-Data Scripts:
It's probably related to some difference in your environment when you are logged in as opposed to when the script is running as part of the startup sequence. I have run into similar problems with cron jobs.
This indeed turned out to be the problem; Shlomo Swidler summarizes the root cause and a solution further down in that thread:
Mitch, your comment helped me realize what's different about the startup sequence: the operative user is root. When I log in, I'm the "ubuntu" user.
s3cmd looks in the current user's ~/.s3cfg - which didn't exist as /root/.s3cfg, only as /home/ubuntu/.s3cfg.
Luckily s3cmd allows you to specify the config file's location with --config /home/ubuntu/.s3cfg.
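Putting that together, a sketch of the corrected user-data script, assuming the config file really lives at /home/ubuntu/.s3cfg and redirecting stderr as well so that errors actually reach the log:
#!/bin/bash
cd /home
mkdir pravin
# User data runs as root, so point s3cmd at the ubuntu user's config explicitly;
# capture stderr too, otherwise download errors never show up in download.log.
s3cmd --config /home/ubuntu/.s3cfg get s3://bucket/usr.sh >> download.log 2>&1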
