I have a cron job set up on an AWS EC2 instance using "crontab -e" and it works fine, except that it seems to run as the ec2-user. I need the cron job to run as apache, because with the ec2-user I'm getting some permission errors.
0 0 * * 0 /usr/bin/php /var/www/html/xxxxx >/dev/null 2>&1
I had it working fine by setting up the cron job with the following command:
sudo crontab -u apache -e
However, it seems these cron jobs got deleted for some reason. Does anyone have any idea why they were deleted?
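For what it's worth, I can check whether the apache crontab is now simply empty by listing it (this only needs the same sudo access used above):
sudo crontab -u apache -l
If the jobs were removed, this should print something like "no crontab for apache".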
Related
I am having difficulties getting my docker-compose command to run on reboot on my EC2 instance. I have been through many responses to similar questions but have been unsuccessful so far.
On my EC2 instance, I have the following crontab set up (via crontab -e), which fails to execute when the instance is rebooted:
@reboot sleep 60 && sudo systemctl enable docker && cd /home/<user>/<repo_name> && docker-compose up --build -d
Running the command manually successfully runs the docker-compose file, and I have checked that other crontab entries execute successfully; a quick * * * * * echo "test" > text.txt runs as intended.
My question is: is there a way to get this crontab entry to execute successfully on reboot, or is there another, perhaps better, way to get my containers up and running?
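For completeness, this is roughly the entry I am aiming for, with <user> and <repo_name> as placeholders as above, an assumed docker-compose location of /usr/local/bin/docker-compose, and logging added so any failure is visible:
@reboot sleep 60 && cd /home/<user>/<repo_name> && /usr/local/bin/docker-compose up --build -d >> /home/<user>/compose-reboot.log 2>&1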
I'm trying to use the AWS CLI cp command in a cron job on an Ubuntu 14.04.3 AWS EC2 instance.
The EC2 user is called ubuntu and lives in /home/ubuntu.
I have my aws config file in /home/ubuntu/.aws/config
[default]
output=json
region=eu-central-1
I have my aws credentials file in /home/ubuntu/.aws/credentials
[default]
aws_access_key_id=******
aws_secret_access_key=******
My crontab looks like this
* * * * * sh /home/ubuntu/test.sh
The shell script, which tries to copy a test file over to S3, is a one-liner:
/usr/local/bin/aws s3 cp test.txt s3://<my-bucket>/test.txt >> /home/ubuntu/some-log-file.log
The cron runs the script each minute, but nothing is copied to the S3 bucket.
If I run the script manually in my shell, it works.
I tried (without success):
Putting the right path in front of aws (/usr/local/bin/aws)
Putting aws_access_key_id and aws_secret_access_key into the .aws/config file as well.
Putting aws env vars to crontab and/or shell script
AWS_DEFAULT_REGION=eu-central-1
AWS_ACCESS_KEY_ID=******
AWS_SECRET_ACCESS_KEY=******
Defining HOME in the crontab and/or shell script
HOME="/home/ubuntu"
Putting the config and credential file location to the crontab
AWS_CONFIG_FILE="/home/ubuntu/.aws/config"
AWS_CREDENTIAL_FILE="/home/ubuntu/.aws/credentials"
Putting PATH to the crontab and/or the shell script
PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:"
Does anybody have an idea what I might be doing wrong?
The fix was relatively simple. When running AWS CLI commands from cron, you need to set the user environment variables.
In the cron command, use . $HOME/.profile;
Example:
10 5 * * * . $HOME/.profile; /var/www/rds-scripts/clonedb.sh
In the shell script set the $SHELL and $PATH variables.
export SHELL=/bin/bash
export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
With these changes the AWS CLI is able to load the user credential files and locate the AWS CLI binary files.
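Applied to the crontab from the question, a minimal sketch of the combined changes could look like this (the script path is the one from the question):
* * * * * . $HOME/.profile; /bin/sh /home/ubuntu/test.sh
And at the top of test.sh:
export SHELL=/bin/bash
export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin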
I found out that I forgot the absolute path to test.txt (/home/ubuntu/test.txt).
I'll keep the question because it lists several options and might still be helpful to others.
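For reference, the corrected one-liner with the absolute path:
/usr/local/bin/aws s3 cp /home/ubuntu/test.txt s3://<my-bucket>/test.txt >> /home/ubuntu/some-log-file.log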
I've tried setting up cron to run in my Docker container, but without success thus far.
These are the cron-related parts of the Dockerfile:
FROM ruby:2.2.2
# Add crontab file in the cron directory
RUN apt-get install -y rsyslog
ADD crontab /etc/cron.d/hello-cron
# Give execution rights on the cron job
RUN chmod +x /etc/cron.d/hello-cron
# Create the log file to be able to run tail
RUN touch /var/log/cron.log
# Run the command on container startup
RUN service cron start
When I log on to the container instance, cron appears to be running:
$ service cron status
cron is running.
And /etc/cron.d has my job:
$ cat /etc/cron.d/hello-cron
* * * * * root echo "Hello world" >> /var/log/cron.log 2>&1
But nothing is appended to /var/log/cron.log, so it doesn't appear to run.
If I then run cron manually from within the container, it registers my hello-cron file and the log file has "Hello world" appended every minute.
Your analysis is correct, the cron jobs are not running. This happens because normally, and by best practices, the container only runs a single process, such as Apache, NGINX, etc. - it does not run any of the normal operating system daemons such as crond.
No crond means there is nothing that would read or execute your crontab.
There are several possibilities to solve this, but no perfect solution that I know of.
The worst one is to actually install crond, along with something like supervisord. It makes your container dramatically more complex.
You can create a separate container that runs nothing but cron. Mount whatever you need from the other containers as volumes. This is generally the recommended option, but it has limitations. The cron container needs to know a lot about the internals of your other containers, and the cron jobs don't execute in the same context as the rest of the containers.
You can create a cron job on the host, and have it execute scripts in the containers with docker exec. That works well, but creates a dependency between host and container. It may also not work at all if you don't have access to the host's operating system (for instance, in a hosted situation, or if a different team manages the host).
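As a rough sketch of that last option, assuming a container named app and a script that already exists inside it at /usr/local/bin/nightly-job.sh (both names are just examples), the host crontab could contain something like:
0 3 * * * docker exec app /usr/local/bin/nightly-job.sh >> /var/log/nightly-job.log 2>&1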
I have a ruby script that connects to an Amazon S3 bucket and downloads the latest production backup. I have tested the script (which is very simple) and it works fine.
However, when I schedule this script to be run as a cron job it seems to fail when it loads the Amazon (aws-s3) gem.
The first few lines of my script looks like this:
#!/usr/bin/env ruby
require 'aws/s3'
As I said, when I run this script manually, it works fine. When I run it via a scheduled cron job, it fails when it tries to load the gem:
`require': no such file to load -- aws/s3 (LoadError)
The crontab for this script looks like this:
0 3 * * * ~/Downloader/download.rb > ~/Downloader/output.log 2>&1
I originally thought it might be because cron is running as a different user, but when I do a 'whoami' at the start of my ruby script it tells me it's running as the same user I always use.
I have also done a bundle init and added the gem to my Gemfile, but this doesn't seem to have any effect.
Why does cron fail to load the gem? I am running Ubuntu.
As mentioned here https://coderwall.com/p/vhv8aw you can simply try
rvm cron setup # let RVM do your cron settings
Make sure that you make a copy of your crontab before running this command.
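For example, you can dump the current crontab to a file first and restore it later if needed:
crontab -l > ~/crontab.backup
# restore with: crontab ~/crontab.backup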
If you're running it manually and it works you're probably in a different shell environment than cron is executing in. Since you mention you're on Ubuntu, the cron jobs probably execute under /bin/sh, and you're manually running them under /bin/bash if you haven't changed anything.
You can debug your environment problems or you can change the shell that your job runs under.
To debug: there are several ways to figure out what shell your cron jobs are using. It can be defined in
/etc/crontab
or you can make a cron job to dump your shell and environment information, as has been mentioned in this SO answer: How to simulate the environment cron executes a script with?
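For example, a throwaway entry along these lines captures the environment cron actually runs with (the output path is just an example):
* * * * * env > /tmp/cron-env.txt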
To switch to that shell and see the actual errors causing your job to fail, do
sudo su
env -i <path to shell> (e.g. /bin/sh)
Then running your script you should see what the errors are and be able to fix them (rubygems?).
Option 2 is to switch shells. You can always try something like:
0 3 * * * /bin/bash -c '~/Downloader/download.rb > ~/Downloader/output.log 2>&1'
To force your job into bash. That might also clear things up.
You may also explicitly set your Gem path:
GEM_HOME="/usr/local/rvm/gems/ruby-1.9.2-p290#my-special-gemset"
In a non-cron environment, execute echo $PATH, copy the output, and paste it into your crontab before your command:
echo $PATH
/usr/local/bin:/usr/local/sbin:/usr/bin:/bin:/usr/sbin:/sbin
and inside crontab:
PATH=/usr/local/bin:/usr/local/sbin:/usr/bin:/bin:/usr/sbin:/sbin
0 3 * * * ~/Downloader/download.rb > ~/Downloader/output.log 2>&1
Add this at the beginning of your crontab:
PATH="/home/user/.rvm/gems/ruby-2.1.4/bin:/home/user/.rvm/gems/ruby-2.1.4#global/bin:/home/user/.rvm/rubies/ruby-2.1.4/bin:/home/user/.rvm/gems/ruby-2.1.4/bin:/home/user/.rvm/gems/ruby-2.1.4#global/bin:/home/user/.rvm/rubies/ruby-2.1.4/bin:/usr/lib64/qt-3.3/bin:/usr/local/bin:/usr/bin:/home/user/.rvm/bin:/usr/local/sbin:/usr/sbin:/home/user/.rvm/bin:/home/user/.local/bin:/home/user/bin"
GEM_HOME='/home/user/.rvm/gems/ruby-2.1.4'
GEM_PATH='/home/user/.rvm/gems/ruby-2.1.4:/home/user/.rvm/gems/ruby-2.1.4#global'
MY_RUBY_HOME='/home/user/.rvm/rubies/ruby-2.1.4'
IRBRC='/home/user/.rvm/rubies/ruby-2.1.4/.irbrc'
RUBY_VERSION='ruby-2.1.4'
I've tried all the solutions above; none of them worked until I tried:
0 12 * * * /bin/bash -l -c 'ruby /Users/simon/Desktop/script.rb'
I wondered if anyone had any idea why the following problem is occurring, or had any tips on where to look… I can run the shell script manually over ssh, but if I set it up to run from crontab I get the problems described below.
The server is FreeBSD 8, and I have full root access.
I have a shell script (Bourne) that runs with root permissions via crontab, using the following entry:
* * * * * /data/backups/scripts/server_log_check.sh > /data/backups/logs/cron_logs/server_log_check.sh_cron.log
The “server_log_check.sh” script checks to see if “the report server” is running with this command:
if ps -xauww | grep -v grep | grep java | grep www > /dev/null
then
: # reports are running, no need to try to restart it
else
/usr/local/etc/rc.d/tomcat55 start #start report server because it is not running
fi
The problem occurs on the line /usr/local/etc/rc.d/tomcat55 start when the script is run from crontab; if I run the script manually via ssh, this line runs without a problem. All of the rest of the code in the script executes fine from cron, just not this line. Alternatively, if I paste /usr/local/etc/rc.d/tomcat55 start into the ssh command prompt by itself, it runs just fine too.
I changed the ownership of server_log_check.sh to root, but that didn't make a difference; the tomcat55 script itself is owned by www. The crontab entry is being made under the root account, so I assumed there would be no problem running a file owned by a less-privileged user such as www.
Do you have any ideas why cron is doing this?
Thanks in advance
Try adding the following, which will send errors to the log file as well:
* * * * * /data/backups/scripts/server_log_check.sh > /data/backups/logs/cron_logs/server_log_check.sh_cron.log 2>&1
Also change this:
/usr/local/etc/rc.d/tomcat55 start
to:
cd /home/root
nohup /usr/local/etc/rc.d/tomcat55 start &
This should create a nohup.out in /home/root.
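Putting it together, the relevant part of server_log_check.sh would then look roughly like this (reusing the original ps test):
if ps -xauww | grep -v grep | grep java | grep www > /dev/null
then
    : # report server is running, no need to try to restart it
else
    cd /home/root
    nohup /usr/local/etc/rc.d/tomcat55 start & # start the report server; output goes to /home/root/nohup.out
fi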