I'm trying to run the aws cli cp command from a cron job on an Ubuntu 14.04.3 AWS EC2 instance.
The ec2-user is called ubuntu and lives in /home/ubuntu
I have my aws config file in /home/ubuntu/.aws/config
[default]
output=json
region=eu-central-1
I have my aws credentials file in /home/ubuntu/.aws/credentials
[default]
aws_access_key_id=******
aws_secret_access_key=******
My crontab looks like this
* * * * * sh /home/ubuntu/test.sh
The shell script, which copies a test file over to S3, is a one-liner:
/usr/local/bin/aws s3 cp test.txt s3://<my-bucket>/test.txt >> /home/ubuntu/some-log-file.log
The cron runs the script each minute, but nothing is copied to the S3 bucket.
If I run the script manually from my shell, it works.
I tried (without success):
Putting the right path in front of aws (/usr/local/bin/aws)
Putting aws_access_key_id and aws_secret_access_key into the .aws/config file as well.
Putting the AWS env vars in the crontab and/or the shell script:
AWS_DEFAULT_REGION=eu-central-1
AWS_ACCESS_KEY_ID=******
AWS_SECRET_ACCESS_KEY=******
Defining HOME in the crontab and/or shell script
HOME="/home/ubuntu"
Putting the config and credentials file locations in the crontab:
AWS_CONFIG_FILE="/home/ubuntu/.aws/config"
AWS_CREDENTIAL_FILE="/home/ubuntu/.aws/credentials"
Putting PATH in the crontab and/or the shell script:
PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
Does anybody have an idea what I might be doing wrong?
The fix was relatively simple: when running AWS CLI commands from cron, you need to set up the user's environment variables.
In the cron command, source the user's profile first with . $HOME/.profile;
Example:
10 5 * * * . $HOME/.profile; /var/www/rds-scripts/clonedb.sh
In the shell script set the $SHELL and $PATH variables.
export SHELL=/bin/bash
export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
With these changes the AWS CLI is able to load the user credential files and locate the AWS CLI binary files.
It turned out I had forgotten the absolute path to test.txt (/home/ubuntu/test.txt).
I'll keep the question up, because it lists several options and might still be helpful to others.
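Putting the pieces above together, a minimal cron-safe version of the setup might look like this sketch (the bucket placeholder is kept from the question; paths are assumptions based on the post):

```shell
#!/bin/sh
# /home/ubuntu/test.sh -- invoked from cron as:
#   * * * * * sh /home/ubuntu/test.sh >> /home/ubuntu/some-log-file.log 2>&1
# cron starts with a minimal environment, so the binary, the source file,
# and HOME all need absolute paths.
export HOME=/home/ubuntu   # lets the CLI find ~/.aws/config and ~/.aws/credentials
/usr/local/bin/aws s3 cp /home/ubuntu/test.txt "s3://<my-bucket>/test.txt"
```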
Related
When I create an EC2 instance, I use a bash script in the user data that exports AWS credential variables and then runs a command to copy files from an S3 bucket. But this command is not executed.
#!/bin/bash
export AWS_ACCESS_KEY_ID=MYACCESSKEY
export AWS_SECRET_ACCESS_KEY=MYSECRETKEY
aws s3 cp s3://mys3bucket/ ./
How to fix it?
The user data script is run in its own bash process, which dies at the end of your script.
Exported variables are preserved only for the lifetime of that script, and they are visible only to child processes of your script.
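If the credentials really must come from user data (an IAM instance role attached to the instance is usually the cleaner option), one approach is to persist them to the root user's credentials file instead of relying on exports. A sketch, with placeholder key values from the question:

```shell
#!/bin/bash
# User data runs as root, so the CLI will look in /root/.aws
mkdir -p /root/.aws
cat > /root/.aws/credentials <<'EOF'
[default]
aws_access_key_id=MYACCESSKEY
aws_secret_access_key=MYSECRETKEY
EOF
# Note: copying a whole bucket/prefix with `aws s3 cp` needs --recursive
aws s3 cp --recursive s3://mys3bucket/ ./
```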
I am running a bash script with sudo and have tried the below, but I am getting the error below when using aws cp. I think the problem is that the script is looking for the config in /root, which does not exist. However, doesn't -E preserve the original location? Is there an option that can be used with aws cp to pass the location of the config? Thank you :).
sudo -E bash /path/to/.sh
- inside of this script is `aws cp`
Error
The config profile (name) could not be found
I have also tried `export`ing the profile name and `source`-ing the path to the `config`.
You can run the command as the original user, like:
sudo -u $SUDO_USER aws cp ...
You could also run the script using source instead of bash. Using source causes the script to run in the same shell as your open terminal window, which keeps the same environment (such as the user). Honestly, though, @Philippe's answer is the better, more correct one.
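If you'd rather keep sudo, the CLI also honours environment variables that point at alternate config and credentials files, so a sketch like this (user paths and profile name assumed) should work with sudo -E:

```shell
#!/bin/bash
# Point the CLI at the invoking user's files instead of /root/.aws
export AWS_CONFIG_FILE=/home/myuser/.aws/config
export AWS_SHARED_CREDENTIALS_FILE=/home/myuser/.aws/credentials
# the S3 copy subcommand is `aws s3 cp`
aws s3 cp --profile name /path/to/localfile s3://my-bucket/key
```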
I am trying to execute some gcloud commands in a bash script from crontab. The script executes successfully from the command shell but not from the cron job.
I have tried with:
Setting the full path to gcloud, like:
/etc/bash_completion.d/gcloud
/home/Arturo/.config/gcloud
/usr/bin/gcloud
/usr/lib/google-cloud-sdk/bin/gcloud
Setting at the beginning of the script:
/bin/bash -l
Setting in the crontab:
51 21 30 5 6 CLOUDSDK_PYTHON=/usr/bin/python2.7; /home/myuser/folder1/myscript.sh param1 param2 param3 -f >> /home/myuser/folder1/mylog.txt
Setting inside the script:
export CLOUDSDK_PYTHON=/usr/bin/python2.7
Setting inside the script:
sudo ln -s /home/myuser/google-cloud-sdk/bin/gcloud /usr/bin/gcloud
Version Ubuntu 18.04.3 LTS
command to execute: gcloud config set project myproject
but nothing is working; maybe I am doing something wrong. I hope you can help me.
You need to set your user in your crontab for it to run the gcloud command. As explained in this other post here, you need to modify your crontab to fetch the data in your Cloud SDK for the execution to occur properly; it doesn't seem that you have made this configuration.
Another option I would recommend you try is using Cloud Scheduler to run your gcloud commands. This way you can use gcloud for your cron jobs in a more integrated and easy way. You can find more information about this option here: Creating and configuring cron jobs
Let me know if the information helped you!
I found my error. The problem was only with the command "gcloud dns record-sets transaction start"; the other commands were executing successfully but logging nothing, which made me think they were not executing at all. This command creates a temp file (e.g. transaction.yaml), and that file could not be created in the default path for gcloud (snap/bin), but the log simply didn't report anything. I had to specify the path and name for that file with the flag --transaction-file=mytransaction.yaml. Thanks for your support and ideas.
I have run into the same issue before. I fixed it by forcing the profile to load in my script.sh, loading the gcloud environment variables with it. Example below:
#!/bin/bash
source /etc/profile
gcloud config set project myproject
echo "Project set to myproject."
I hope this can help others in the future with similar issues, as this also helped me when trying to set GKE nodes from 0-4 on a schedule.
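For a GKE schedule like the one mentioned above, the crontab entries could look something like this sketch (cluster name, zone, node counts and times are all hypothetical):

```shell
# Scale the cluster down at night and back up in the morning,
# sourcing /etc/profile first so cron can find gcloud and its config.
0 20 * * * . /etc/profile; gcloud container clusters resize my-cluster --zone europe-west1-b --num-nodes 0 --quiet
0 6 * * * . /etc/profile; gcloud container clusters resize my-cluster --zone europe-west1-b --num-nodes 4 --quiet
```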
Adding the below line to the shell script fixed my issue
# Execute user profile
source /root/.bash_profile
I have my crontab set up as follows (this is inside a docker container).
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
SHELL=/bin/bash
5 * * * * bash /usr/local/bin/process-logs > /proc/1/fd/1 2>/proc/1/fd/2
The /usr/local/bin/process-logs is designed to expose some MongoDB logs using mtools to a simple web server.
The problematic part of the script is fairly simple. raw_name is archive_name without the file extension.
aws s3 cp "s3://${s3_bucket}/${file_name}" "${archive_name}"
gunzip "${archive_name}"
mlogvis --no-browser "${raw_name}"
If I manually run the command as specified in the crontab config above
bash /usr/local/bin/process-logs > /proc/1/fd/1 2>/proc/1/fd/2
It all works as expected (this is the expected output from mlogvis)
...
copying /usr/local/lib/python3.5/dist-packages/mtools/data/index.html to /some/path/mongod.log-20190313-1552456862.html
...
When the script gets triggered via crontab it throws the following error
usage: mlogvis [-h] [--version] [--no-progressbar] [--no-browser] [--out OUT]
[--line-max LINE_MAX]
mlogvis: error: unrecognized arguments: mongod.log-20190313-1552460462
The mlogvis command that caused the above error (actual values, not parameters):
mlogvis --no-browser "mongod.log-20190313-1552460462"
Again if I run this command myself it all works as expected.
mlogvis: http://blog.rueckstiess.com/mtools/mlogvis.html
I don't believe this to be an issue with the file not having correct permissions or not existing, as mlogvis produces a different error in those conditions. I've also tested removing '-' from the file name, thinking it might be trying to parse the parts as arguments, but it made no difference.
I know cron execution doesn't have the same environment as the user I tested the script as. I've set the PATH to be the same as the user's, and when the container starts up I execute env >> /etc/environment so all the environment vars are properly set.
Does anyone know of a way to debug this or has anyone encountered similar? All other components of the script are functioning except mlogvis which is core to the purpose of this job.
Summary of what I've tried as a fix:
Set environment and PATH for cron execution to be the same as the user I tested the script as
Replace - in file name(s) to see if it was parsing the parts as arguments
hardcode a filename with full permissions to see if it was permissions related
Manually run the script -> this works
Manually run the mlogvis command in isolation -> this works
Try loading /home/user/.bash_profile before executing the script, then try again. I suspect that you have a missing PATH or another environment variable that is not set.
source /home/user/.bash_profile
Please post your complete script, because when executing via crontab you have to be sure your raw_name variable was properly created. As it seems to depend on archive_name, posting some more context can help us to help you.
In any case, if you are using bash, you can try something like :
aws s3 cp "s3://${s3_bucket}/${file_name}" "${archive_name}"
gunzip "${archive_name}"
# here you have to be sure that archive_name is correct
raw_name_2=${archive_name%.*}   # strip only the final extension; %%.* would cut at the first dot
mlogvis --no-browser "${raw_name_2}"
It is not going to solve your issue, but probably will take you closer to the right path.
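A general way to debug this class of problem is to replay the script under an environment as bare as cron's. A sketch (the exact variables your cron sets are an assumption; check crontab(5) on your system):

```shell
#!/bin/bash
# Run a command with roughly the environment cron would give it.
# cron typically provides only HOME, LOGNAME, SHELL and a short PATH,
# so wiping everything else with `env -i` often reproduces cron-only failures.
cron_env_run() {
    env -i HOME="$HOME" SHELL=/bin/sh PATH=/usr/bin:/bin "$@"
}

# Example: see how small PATH is inside the sandbox...
cron_env_run bash -c 'echo "PATH=$PATH"'
# ...then replay the failing script the same way:
# cron_env_run bash /usr/local/bin/process-logs
```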
New to shell scripting. I'm trying to use a shell script on an RHEL 6.9 Linux server to upload a file with whitespace in its filename to AWS S3 with the aws cli. I have tried single and double quotes and have been reading aws cli links like http://docs.aws.amazon.com/cli/latest/userguide/cli-using-param.html
Here is a simple version of my script with the problem:
#!/bin/bash
profile=" --profile XXXXXXX"
sourcefile=" '/home/my_login/data/batch4/Test File (1).zip'"
targetobject=" 's3://my-bucket/TestFolder/batch4/Test File (1).zip'"
service=" s3"
action=" cp"
encrypt=" --sse"
func="aws"
awsstring=$func$profile$service$action$sourcefile$targetobject$encrypt
echo $awsstring
$awsstring
When I run I get:
$ ./s3copy.sh
aws --profile XXXXXXX s3 cp '/home/my_login/data/batch4/Test File (1).zip' 's3://my-bucket/TestFolder/batch4/Test File (1).zip' --sse
Unknown options: (1).zip','s3://my-bucket/TestFolder/batch4/Test,File,(1).zip'
When I execute the $awsstring value from command line, it works:
$ aws --profile XXXXXXX s3 cp '/home/my_login/data/batch4/Test File (1).zip' 's3://my-bucket/TestFolder/batch4/Test File (1).zip' --sse
upload: data/batch4/Test File (1).zip to s3://my-bucket/TestFolder/batch4/Test File (1).zip
The aws cli does not seem to recognize the quotes in the shell script. I need to quote the file names in my script because they contain whitespace.
Question: Why does the string execute correctly from the command line, but not from within the shell script?
Use eval $awsstring. I faced a similar issue. You can look at my answer: https://stackoverflow.com/a/47111888/2396539
On second thought, having a space in a file name is not desirable; if you can control it, avoid it in the first place.
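As an alternative to eval, a bash array keeps each argument, spaces and all, as exactly one word. A sketch reusing the question's values:

```shell
#!/bin/bash
# Build the command as an array instead of a flat string: quote characters
# inside a string variable are treated literally after expansion, but each
# array element survives word splitting as a single argument.
profile=XXXXXXX
sourcefile='/home/my_login/data/batch4/Test File (1).zip'
targetobject='s3://my-bucket/TestFolder/batch4/Test File (1).zip'

awscmd=(aws --profile "$profile" s3 cp "$sourcefile" "$targetobject" --sse)

# Print each argument on its own line to confirm the word boundaries:
printf '%s\n' "${awscmd[@]}"
# "${awscmd[@]}"   # uncomment to actually run the upload
```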