I have a number of files to move within S3, so I have a big list of "s3cmd cp --recursive" commands (about 1200). I basically just want the script to run the first one, then the next one, and so on. It seems like it should be really simple:
#!/bin/bash
s3cmd cp --recursive s3://bucketname/fromfolder s3://bucketname/tofolder/
s3cmd cp --recursive s3://bucketname/fromfolder s3://bucketname/tofolder/
s3cmd cp --recursive s3://bucketname/fromfolder s3://bucketname/tofolder/
...
When I run this using "./s3mvcopy.sh" it just immediately returns and doesn't do anything.
Any ideas? Thanks in advance.
Make sure you have your .s3cfg file in your home directory.
Then in the file make sure you have the following:
[default]
access_key = <your access key>
secret_key = <your secret access key>
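Once the credentials are in place, a quick s3cmd ls confirms they work. If the 1200 copies are generated from a list anyway, a small loop can replace the hard-coded lines; this is only a sketch, and the manifest file copies.txt with its two-column source/destination layout is my own assumption:
#!/bin/bash
# Sketch only: read source/destination pairs from a manifest and copy them
# one at a time, stopping at the first failure. The file name copies.txt and
# its two-column layout are assumptions, not part of the original question.
set -euo pipefail
while read -r src dst; do
    s3cmd cp --recursive "$src" "$dst"
done < copies.txt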
I created a script to automatically upload my files to Google Cloud Storage; my VM is in the same project as my Google Cloud bucket...
So I created this script, but I can't run it properly:
#!/bin/bash
TIME=`date +%b-%d-%y`
FILENAME=backup-$TIME.tar.gz
SRCDIR=opt/R
DESDIR= gsutil gs cp FILENAME -$TIME.tar.gz gs://my-storage-name
tar -cpzf $DESDIR/$FILENAME $SRCDIR
Any help?
#!/bin/bash
TIME=`date +%b-%d-%y`
FILENAME=backup-$TIME.tar.gz
gsutil cp {path of the source-file} gs://my-storage-name/backup-$TIME.tar.gz
This will save your file under the name backup-$TIME.tar.gz, e.g. backup-Jun-09-21.tar.gz.
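Putting the pieces together, the whole flow could look roughly like the sketch below; /opt/R as the source directory and /tmp as a local staging area are assumptions on my part:
#!/bin/bash
# Sketch: build the archive locally, then upload it to the bucket.
TIME=$(date +%b-%d-%y)
FILENAME=backup-$TIME.tar.gz
SRCDIR=/opt/R   # directory to back up (assumed)
DESDIR=/tmp     # local staging directory (assumed)
tar -cpzf "$DESDIR/$FILENAME" "$SRCDIR"
gsutil cp "$DESDIR/$FILENAME" "gs://my-storage-name/$FILENAME"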
I have multiple files in an S3 bucket that I am trying to move to a different bucket, matching the given prefixes.
File names: test001, test002, test003, test004, example1, example2
I am using the aws s3 cp command in a Bash script to move the files, but it's not working for me.
aws s3 cp s3://example-1//test s3://example-2/test --recursive
Can you please tell me how I can move files from one bucket to another, matching these prefixes?
Run this command (with your actual s3 source and destination endpoints) to copy the files with names that begin with "test" and "example":
aws s3 cp s3://srcbucket/ s3://destbucket/ --recursive --exclude "*" --include "test*" --include "example*"
The --exclude and --include parameters are processed on the client side. Because of this, the resources of your local machine might affect the performance of the operation.
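If the end goal is to move rather than copy, aws s3 mv accepts the same --recursive/--exclude/--include filters and removes the source objects after they are copied, for example:
aws s3 mv s3://srcbucket/ s3://destbucket/ --recursive --exclude "*" --include "test*" --include "example*"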
Try something like this (note that --include only has an effect after an --exclude, since everything is included by default):
aws s3 cp s3://example-1/ s3://example-2/ --recursive --exclude "*" --include "test*"
I am creating an amazon emr cluster where one of the steps is a bash script run by script-runner.jar:
aws emr create-cluster ... --steps '[ ... {
"Args":["s3://bucket/scripts/script.sh"],
"Type":"CUSTOM_JAR",
"ActionOnFailure":"TERMINATE_CLUSTER",
"Jar":"s3://us-east-1.elasticmapreduce/libs/script-runner/script-runner.jar"
}, ... ]'...
as described in https://docs.aws.amazon.com/emr/latest/ReleaseGuide/emr-hadoop-script.html
script.sh needs other files in its commands: think awk ... -f file, sed ... -f file, psql ... -f file, etc.
On my laptop with both script.sh and files in my working directory, everything works just fine. However, after I upload everything to s3://bucket/scripts, the cluster creation fails with:
file: No such file or directory
Command exiting with ret '1'
I have found the workaround posted below, but I don't like it for the reasons specified. If you have a better solution, please post it, so that I can accept it.
I am using the following work around in script.sh:
# Download the SQL file to a temporary directory.
tmpdir=$(mktemp -d "${TMPDIR:-/tmp/}$(basename "$0").XXXXXXXXXXXX")
aws s3 cp s3://bucket/scripts/file "${tmpdir}"
# Run my command
xxx -f "${tmpdir}/file"
# Clean up
rm -r "${tmpdir}"
This approach works but:
Running script.sh locally means that I have to upload the file to S3 first, which makes development harder.
There are actually a few files involved...
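One way to ease the first drawback, sketched below under the assumption that the auxiliary file sits next to script.sh during local development, is to fall back to S3 only when the file is not already present:
# Sketch: use a local copy of the auxiliary file when it exists (local
# development), otherwise download it from S3 (on the EMR cluster).
scriptdir=$(cd "$(dirname "$0")" && pwd)
if [ -f "${scriptdir}/file" ]; then
    # Local development: the file is next to the script.
    auxfile="${scriptdir}/file"
else
    # On the cluster: pull the file down from S3 first.
    tmpdir=$(mktemp -d)
    aws s3 cp s3://bucket/scripts/file "${tmpdir}/"
    auxfile="${tmpdir}/file"
fi
xxx -f "${auxfile}"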
Good day. I am not a developer, but I run simple gsutil commands to manage my Google Cloud Storage.
I ran into an issue when I run the following command from the command line:
gsutil -m cp -r gs://bucket/ .
Scenario 1: with most buckets this goes just fine.
Scenario 2: there is one bucket where I get an error, and I really have no clue how this is possible.
The error I get is:
CommandException: NO URLs matched: gs://content-music.tapgamez.com/
I am hoping someone can share their thoughts with me. Thanks.
One scenario where this error message appears is when the bucket you're attempting to recursively copy from contains no objects, e.g.:
$ gsutil mb gs://some-random-bucket-name
$ gsutil -m cp -r gs://some-random-bucket-name/ .
CommandException: No URLs matched: gs://some-random-bucket-name/
CommandException: 1 file/object could not be transferred.
The same issue, but for the rm command, is being tracked on GitHub:
https://github.com/GoogleCloudPlatform/gsutil/issues/417
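If the script has to cope with possibly empty buckets, one option is to check for objects first and only copy when there is something to transfer; a rough sketch, using the example bucket name from above:
# Sketch: only attempt the copy when the bucket actually contains objects,
# so an empty bucket does not abort the script with CommandException.
if [ -n "$(gsutil ls gs://some-random-bucket-name/)" ]; then
    gsutil -m cp -r gs://some-random-bucket-name/ .
else
    echo "Bucket is empty, nothing to copy."
fi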
The gsutil rsync command doesn't seem to have this issue (it works fine even on empty buckets). Try it to see if it will do what you need.
Docs
gsutil rsync -r gs://mybucket1 gs://mybucket2
I also faced a similar issue; I was making the following mistake:
Mistake:
gsutil cp -r gs://<bucket-name>/src/main/resources/output/20220430 .
Correct:
gsutil cp -r gs://<bucket-name>//src/main/resources/output/20220430 .
I was missing the extra '/' after the bucket name.
To get the exact name, you can select the object and get the URL from there.
I'm trying to use a Vagrantfile I received to set up a VM on Ubuntu with VirtualBox.
After using the vagrant up command I get the following error:
File provisioner:
* File upload source file /home/c-server/tools/appDeploy.sh must exist
appDeploy.sh does exist in the correct location and looks like this:
#!/bin/bash
#
# Update the app server
#
/usr/local/bin/aws s3 cp s3://dev-build-ci-server/deploy.zip /tmp/.
cd /tmp
unzip -o deploy.zip vagrant/tools/deploy.sh
cp -f vagrant/tools/deploy.sh /tmp/.
rm -rf vagrant
chmod +x /tmp/deploy.sh
dos2unix /tmp/deploy.sh
./deploy.sh
rm -rf ./deploy.sh ./deploy.zip
#
sudo /etc/init.d/supervisor stop
sudo /etc/init.d/supervisor start
#
Since the script exists in the correct location, I'm assuming it's looking for something else (maybe something that should exist on my local computer); what that is, I am not sure.
I did some research into what the file provisioner is and what it does, but I cannot find an answer that gets me past this error.
It may very well be important that this Vagrantfile works correctly on Windows 10, but I need to get it working on Ubuntu.
In your Vagrantfile, check that the filenames are capitalized correctly. Windows isn't case-sensitive but Ubuntu is.
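A quick way to spot a casing mismatch on the Ubuntu side is a case-insensitive listing, e.g.:
# List anything in the tools directory that matches the name regardless of
# case; a hit with different capitalization points to the culprit.
ls /home/c-server/tools/ | grep -i '^appdeploy\.sh$'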