I am trying to send an AD backup folder to an AWS S3 bucket from a Windows Server 2016 machine, via the command line.
aws s3 cp "D:\WindowsImageBackup" s3://ad-backup/
However, I get the error below.
Invalid length for parameter Key, value: 0, valid range: 1-inf
The folder I am trying to upload contains some large files, so I am not sure if it is too big. I have tested the bucket and smaller files upload fine.
Thanks
You have to use the --recursive option to upload a folder:
aws s3 cp --recursive "D:\WindowsImageBackup" s3://ad-backup/
Or pack that folder into a single file and upload that file with plain aws s3 cp.
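For example (just a sketch, not tested on your machine; the archive name is illustrative), on Windows you could zip the folder with PowerShell and then upload the single archive:
powershell Compress-Archive -Path D:\WindowsImageBackup -DestinationPath D:\WindowsImageBackup.zip
aws s3 cp D:\WindowsImageBackup.zip s3://ad-backup/
Bear in mind that Compress-Archive can hit limits on very large files, in which case you may need a different archiver.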
I am trying to upload a file from my local machine to an S3 bucket, but I am getting the error "The user-provided path ~Downloads/index.png does not exist."
aws s3 cp ~Downloads/index.png s3://asdfbucketasdf/Temp/index_temp.png
A file named index.png does exist in my Downloads folder.
This answer might be helpful to users new to the AWS CLI on different platforms.
If you are on Linux or Linux-like systems, you can type:
aws s3 cp ~/Downloads/index.png s3://asdfbucketasdf/Temp/index_temp.png
Note that ~Downloads refers to the home directory of a user named Downloads. What you want is ~/Downloads, which means the Downloads directory under the current user's home directory.
You can also type out your path fully, like so (assuming your home directory is /home/matt):
aws s3 cp /home/matt/Downloads/index.png s3://asdfbucketasdf/Temp/index_temp.png
If you are on Windows, you can type:
aws s3 cp C:\Users\matt\Downloads\index.png s3://asdfbucketasdf/Temp/index_temp.png
or you can use the equivalent of ~ on Windows:
aws s3 cp %USERPROFILE%\Downloads\index.png s3://asdfbucketasdf/Temp/index_temp.png
If you are using Windows and CLI version 2:
aws s3 cp "helloworld.txt" s3://testbucket
I have a Windows Server 2012 R2 EC2 instance and am failing to import .txt files from an S3 bucket.
I want to set up a regular data import from an S3 bucket to the EC2 instance using the AWS CLI. To test the command, I opened the command prompt with administrator rights, navigated to the directory where I want to import the files, and ran the following command.
aws s3 cp s3://mybuckt/ . --recursive
Then I get an error like the following for every file in the bucket:
download failed: s3://mybuckt/filename.txt to .\filename.txt [Error 87] The parameter is incorrect
I end up with a list of empty files in my directory. The list matches the contents of the bucket, but the text files are completely empty.
When I try the command without --recursive, nothing happens: no error messages, no files copied.
aws s3 cp s3://mybuckt/ .
Here are my questions:
Why does the --recursive option fail when I import the files?
What can I check in the configuration of the EC2 instance, to verify that it is correctly set up for the data import?
You did not specify which files to copy. Note that wildcards such as * are not expanded in s3:// paths; use --recursive together with --exclude/--include filters instead, for example:
aws s3 cp s3://mybuckt/ . --recursive --exclude "*" --include "*.txt"
Or, you could use aws s3 sync, which is recursive by default:
aws s3 sync s3://mybuckt/ .
The solution to my specific problem here was that I had to specify the file name on the bucket as well as on my EC2 instance.
aws s3 cp s3://mybuckt/file.txt nameOnMyEC2.txt
Currently, I have Bash commands redirecting output to a log file, and then a separate aws s3 cp CLI call to copy the log file up to S3.
I was wondering if there's a way to redirect output straight to S3 without the extra command/step. I tried pointing aws s3 cp at an HTTPS URL, but that doesn't seem to work, since those URLs are for objects that already exist on S3.
I never tested it, but check if it is reasonable:
aws s3 cp <(/path/command arg1 arg2) s3://mybucket/mykey
Here /path/command arg1 arg2 is the command whose output you were redirecting to a log file; don't redirect its output to a file, leave it on stdout.
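Alternatively (a sketch, not tested here), aws s3 cp can read from standard input when you pass - as the source, so you can pipe the command's output straight to an S3 object:
/path/command arg1 arg2 | aws s3 cp - s3://mybucket/mykey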
Not sure whether it's overkill for your scenario, but using an AWS File Gateway, you can put the files on a mounted share and they will be synced to S3 automatically.
Is there an Azure CLI option to upload files to Blob storage in parallel? There is a folder with lots of files. Currently the only option I have is to do a for loop with the command below, and the upload is sequential.
az storage blob upload --file $f --container-name $CONTAINERNAME --name $FILEINFO
For now, it is not possible. With Azure CLI 2.0 there is no option or argument to upload the contents of a specified directory to Blob storage recursively, so Azure CLI 2.0 does not support uploading files in parallel.
If you want to upload multiple files in parallel, you could use AzCopy.
AzCopy /Source:C:\myfolder /Dest:https://myaccount.blob.core.windows.net/mycontainer /DestKey:key /S
Specifying option /S uploads the contents of the specified directory to Blob storage recursively, meaning that all subfolders and their files will be uploaded as well.
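If you are on the newer AzCopy v10 rather than the classic AzCopy shown above, the equivalent would look something like the following (the account URL and the use of a SAS token here are assumptions; azcopy login is another option):
azcopy copy "C:\myfolder" "https://myaccount.blob.core.windows.net/mycontainer?<SAS>" --recursive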
As you mentioned, you could use a loop to upload files, but it does not upload them in parallel. Try the following script.
export AZURE_STORAGE_ACCOUNT='PUT_YOUR_STORAGE_ACCOUNT_HERE'
export AZURE_STORAGE_ACCESS_KEY='PUT_YOUR_ACCESS_KEY_HERE'
export container_name='nyc-tlc-sf'
export source_folder='/Volumes/MacintoshDisk02/Data/Misc/NYC_TLC/yellow/2012/*'
export destination_folder='yellow/2012/'
#echo "Creating container..."
#azure storage container create $container_name
for f in $source_folder
do
echo "Uploading $f file..."
azure storage blob upload $f $container_name $destination_folder$(basename $f)
done
echo "List all blobs in container..."
azure storage blob list $container_name
echo "Completed"
I have created a script to upload files to an S3 bucket, and I got a timeout error, so I am not sure whether all the files made it to the bucket. I have created another function for checking the differences, but it does not seem to work because of how the local folder is listed:
If I do a find like this, find $FOLDER -type f | cut -d/ -f2- | sort, I get the whole path, like /home/sop/path/to/folder/.... It seems that cut -d/ -f2- does nothing...
If I do an ls -LR, I do not get a flat list that I can compare with the aws s3api list-objects ... result.
The AWS Command-Line Interface (CLI) has a useful aws s3 sync command that can replicate files from a local directory to an Amazon S3 bucket (or vice versa, or between buckets).
It will only copy new/changed files, so it's a great way to make sure files have been uploaded.
See: AWS CLI S3 sync command documentation
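For example (a sketch; the local path and bucket name are placeholders based on your question):
aws s3 sync /home/sop/path/to/folder s3://your-bucket/
Running it a second time, or adding --dryrun, prints only the files that would still need to be uploaded, which tells you whether anything is missing from the bucket.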