Good day, I am not a developer, but I run simple gsutil commands to manage my Google Cloud Storage.
I ran into an issue when I run the following command from the cmd:
gsutil -m cp -r gs://bucket/ .
Scenario 1: with most buckets this goes just fine.
Scenario 2: there is one bucket where I get an error, and I really have no clue how this is possible.
The error I get is:
CommandException: No URLs matched: gs://content-music.tapgamez.com/
I am hoping someone can share their thoughts with me.
Thanks
One scenario where this error message appears is when the bucket you're attempting to recursively copy from contains no objects, e.g.:
$ gsutil mb gs://some-random-bucket-name
$ gsutil -m cp -r gs://some-random-bucket-name/ .
CommandException: No URLs matched: gs://some-random-bucket-name/
CommandException: 1 file/object could not be transferred.
The same issue, but for the rm command, is being tracked on GitHub:
https://github.com/GoogleCloudPlatform/gsutil/issues/417
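If you want to confirm whether the bucket is actually empty before copying, a quick listing will show whether it contains any objects (a minimal check using the bucket name from the question):
$ gsutil ls gs://content-music.tapgamez.com/
# Prints nothing if the bucket holds no objects at the top level.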
The gsutil rsync command doesn't seem to have this issue (it works fine even on empty buckets). Try it to see if it will do what you need.
Docs
gsutil rsync -r gs://mybucket1 gs://mybucket2
I also faced a similar issue; I was making the following mistake.
Mistake:
gsutil cp -r gs://<bucket-name>/src/main/resources/output/20220430 .
Correct:
gsutil cp -r gs://<bucket-name>//src/main/resources/output/20220430 .
I was missing the extra '/' after the bucket name.
To get the exact path, you can select the object in the console and copy its URL from there.
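Alternatively, you can list the bucket from the command line to see the exact object URLs (a sketch using the placeholder bucket name from above):
gsutil ls -r gs://<bucket-name>/
# Each line of output is a full gs:// URL that can be passed to gsutil cp.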
I'm trying to set up my environment to learn Azure from the Microsoft Learn page https://learn.microsoft.com/en-us/learn/modules/microservices-data-aspnet-core/environment-setup,
but when I run . <(sudo wget -q -O - https://aka.ms/microservices-data-aspnet-core-setup) to pull the repo and run the services, I get the error below:
~/clouddrive/aspnet-learn/modules/microservices-data-aspnet-core/setup ~/clouddrive/aspnet-learn
~/clouddrive/aspnet-learn
bash: /home/username/clouddrive/aspnet-learn/src/deploy/k8s/quickstart.sh: Permission denied
bash: /home/username/clouddrive/aspnet-learn/src/deploy/k8s/create-acr.sh: Permission denied
cat: /home/username/clouddrive/aspnet-learn/deployment-urls.txt: No such file or directory
This used to work until it suddenly stopped, and I'm not sure what caused it to break or how to fix it.
I've tried deleting the 'Storage account' and the resources, but that doesn't seem to work. Also, when I delete the storage account, create a new one, and try again, it still has the old data and I need to run a remove, so somehow this data isn't really being deleted when I delete the 'Storage account'. This is the message I get:
Before running this script, please remove or rename the existing /home/username/clouddrive/aspnet-learn/ directory as follows:
Remove: rm -r /home/username/clouddrive/aspnet-learn/
Any idea what is wrong here, or how I can actually reset this to work like new storage?
Note: I saw some solutions that say to start with sudo for elevated permissions, but I didn't manage to get this to work.
I reproduced this by following the given document and was able to deploy a modified version of the eShopOnContainers reference app.
Then I executed the same command again,
. <(wget -q -O - https://aka.ms/microservices-data-aspnet-core-setup)
and got the same error you did.
If you try to run the deploy script without cleaning up the already created resources/app, you will get the above error.
If you want to re-run the setup script, run the command below first to clean up the resources:
cd ~ && \
rm -rf ~/clouddrive/aspnet-learn && \
az group delete --name eshop-learn-rg --yes
OR
Remove: rm -r /home/username/clouddrive/aspnet-learn/
Rename: mv /home/username/clouddrive/aspnet-learn/ ~/clouddrive/new-name-here/
The commands above remove or rename the existing /home/username/clouddrive/aspnet-learn/ directory.
Now you can run the script again.
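Put together, the full reset-and-rerun sequence looks something like this (a sketch that assumes the eshop-learn-rg resource group name used by the module):
# Clean up the previous run: remove the cloned files and delete the resource group
cd ~ && \
rm -rf ~/clouddrive/aspnet-learn && \
az group delete --name eshop-learn-rg --yes
# Re-run the setup script from the learning module
. <(wget -q -O - https://aka.ms/microservices-data-aspnet-core-setup)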
I am running a bash script with sudo and have tried the command below, but I am getting the error below from aws cp. I think the problem is that the script is looking for the config in /root, which does not exist. However, doesn't -E preserve the original location? Is there an option that can be used with aws cp to pass the location of the config? Thank you :).
sudo -E bash /path/to/.sh
- inside this script is an `aws cp` call
Error
The config profile (name) could not be found
I have also tried `export`ing the profile name and `source`ing the path to the `config`.
You can run the command as the original user, like:
sudo -u $SUDO_USER aws cp ...
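A minimal sketch of what that looks like inside the script, assuming the actual command is `aws s3 cp` and the bucket and paths are placeholders:
#!/bin/bash
# Run the copy as the user who invoked sudo, so the AWS CLI reads
# that user's ~/.aws/config instead of looking under /root.
sudo -u "$SUDO_USER" aws s3 cp s3://my-bucket/my-key /local/path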
You could also run the script using source instead of bash. Using source causes the script to run in the same shell as your open terminal window, which keeps the same environment (such as the user). Honestly, though, @Philippe's answer is the better, more correct one.
I am creating an Amazon EMR cluster where one of the steps is a bash script run by script-runner.jar:
aws emr create-cluster ... --steps '[ ... {
"Args":["s3://bucket/scripts/script.sh"],
"Type":"CUSTOM_JAR",
"ActionOnFailure":"TERMINATE_CLUSTER",
"Jar":"s3://us-east-1.elasticmapreduce/libs/script-runner/script-runner.jar",
}, ... ]'...
as described in https://docs.aws.amazon.com/emr/latest/ReleaseGuide/emr-hadoop-script.html
script.sh needs other files in its commands: think awk ... -f file, sed ... -f file, psql ... -f file, etc.
On my laptop with both script.sh and files in my working directory, everything works just fine. However, after I upload everything to s3://bucket/scripts, the cluster creation fails with:
file: No such file or directory
Command exiting with ret '1'
I have found the workaround posted below, but I don't like it for the reasons specified. If you have a better solution, please post it, so that I can accept it.
I am using the following work around in script.sh:
# Download the SQL file to a tmp directory.
tmpdir=$(mktemp -d "${TMPDIR:-/tmp/}$(basename "$0").XXXXXXXXXXXX")
aws s3 cp s3://bucket/scripts/file "${tmpdir}"
# Run my command
xxx -f "${tmpdir}/file"
# Clean up
rm -r "${tmpdir}"
This approach works, but:
Running script.sh locally means that I have to upload the file to S3 first, which makes development harder.
There are actually a few files involved... (a generalized sketch is shown below).
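Here is a sketch of the same workaround extended to several support files, assuming they all sit under s3://bucket/scripts/ and that script.awk, script.sed, and input.txt are hypothetical names:
#!/bin/bash
set -euo pipefail

# Download every support file the script needs into one temp directory.
tmpdir=$(mktemp -d "${TMPDIR:-/tmp/}$(basename "$0").XXXXXXXXXXXX")
aws s3 cp --recursive s3://bucket/scripts/ "${tmpdir}"

# Run the commands against the downloaded copies.
awk -f "${tmpdir}/script.awk" input.txt
sed -f "${tmpdir}/script.sed" input.txt

# Clean up.
rm -r "${tmpdir}"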
After creating multiple migrations I started editing them and sometimes testing them out. All was working well until I tried using foreign keys, taken from this example.
For some reason this wasn't working for me, so I decided to remove everything related to foreign keys. Now when I run php artisan migrate I get the following error:
[Symfony\Component\Debug\Exception\FatalErrorException] syntax error,
unexpected ';'
I know it's related to one of the migrations I edited, but how can I quickly find it without going through all the migrations I created?
My question isn't about where my problem is (so my exact code isn't necessary), but how to debug efficiently?
EDIT:
I just tried php artisan migrate:rollback and that works.
EDIT #2:
I just 'fixed' my problem, but would like to know for future reference how to debug faster.
Run the artisan command with verbose output:
php artisan -vvv migrate
This will reveal more information about the syntax error.
Edit: from my comment,
You can quickly scan for syntax issues with the following CLI command (Unix only):
find -L database/migrations -name '*.php' -print0 | xargs -0 -n 1 -P 4 php -l
For users on Windows using Git Bash:
find database/migrations -name '*.php' -print0 | xargs -0 -n 1 -P 4 php -l
I have a number of files to move within S3 and need to issue a number of "s3cmd cp --recursive" commands, so I have a big list of these commands (about 1200). I basically just want it to do the first one, then the next one, and so on. It seems like it should be really simple:
#!/bin/bash
s3cmd cp --recursive s3://bucketname/fromfolder s3://bucketname/tofolder/
s3cmd cp --recursive s3://bucketname/fromfolder s3://bucketname/tofolder/
s3cmd cp --recursive s3://bucketname/fromfolder s3://bucketname/tofolder/
...
When I run this using "./s3mvcopy.sh" it just immediately returns and doesn't do anything.
Any ideas? Thanks in advance.
Make sure you have your .s3cfg file in your home directory.
Then make sure the file contains the following:
[default]
access_key = <your access key>
secret_key = <your secret access key>
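If you don't have a .s3cfg yet, s3cmd can generate one interactively (this assumes a standard s3cmd installation; it prompts for your keys and writes ~/.s3cfg):
s3cmd --configure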