Reboot Crontab on AWS EC2 not executing - amazon-ec2

I am having difficulties getting my docker-compose command to run on reboot in my EC2 instance. I have been through many responses with a similar question but have been unsuccessful so far.
In my EC2 instance, I have the following crontab set up (via crontab -e) which fails to execute when the instance is rebooted:
@reboot sleep 60 && sudo systemctl enable docker && cd /home/<user>/<repo_name> && docker-compose up --build -d
Running the command manually runs the docker-compose file successfully, and I have checked that other crontabs execute fine; a quick * * * * * echo "test" > text.txt runs as intended.
My question is, is there a way to get this crontab to execute successfully on reboot, or is there another way that is perhaps better to get my containers up and running?
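For what it's worth, a first debugging step here (my suggestion, not part of the question) is to make the job's failure visible: cron's PATH at boot is minimal and docker-compose often lives in /usr/local/bin, so use absolute paths and log the output. A sketch with placeholder paths:

```
# Hypothetical debug version of the entry: absolute paths plus a log file,
# so the failure mode is visible after the next reboot. Enabling the docker
# service is a one-time `sudo systemctl enable docker`, not a cron job.
@reboot sleep 60 && cd /home/ec2-user/myrepo && /usr/local/bin/docker-compose up --build -d >> /tmp/reboot-compose.log 2>&1
```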

Related

Cron job not executing script with docker exec

I have created a bash script to run a backup of my containerized postgres database. This script is called "do-pg-backup.sh":
#!/bin/bash
db=$(docker container ls -q --filter name=mydbcontainer* --format "{{.Names}}")
docker exec -it $db /etc/pg-backup/pg-backup.sh
echo "done pg backup"
If I execute the do-pg-backup.sh script in the ubuntu terminal ("bash do-pg-backup.sh") it runs perfectly as expected.
Now I have setup the following crontab so this script will run every minute ("sudo crontab -e"):
* * * * * bash /path/to/do-pg-backup.sh >> /tmp/cron.log
* * * * * echo "minute passed" >> /tmp/cron.log
My cron.log file looks like this after a few minutes:
minute passed
done pg backup
minute passed
done pg backup
...and so on...
However, the do-pg-backup.sh script is NOT successfully backing up the db from the context of the cron job. What am I doing wrong?
Thanks to the helpful hint from @markp-fuso, I discovered that I was getting the following error:
the input device is not a TTY
After a little googling around, I made the following edit to do-pg-backup.sh script:
docker exec -i $db /etc/pg-backup/pg-backup.sh
Note: changed "-it" to "-i"
And it now works. I don't understand why, but I am happy it is fixed and my very important data is getting backed up.
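The likely reason (my reading, not stated in the answer) is that -t asks Docker to allocate a pseudo-TTY, and cron jobs run without a controlling terminal, while -i merely keeps stdin open. You can reproduce the condition cron runs under with a small check:

```shell
# Check whether stdin is a terminal, the way a cron job would see it.
# Under cron there is no controlling terminal, which is why a command that
# insists on a TTY fails there while plain stdin piping (-i) works.
if [ -t 0 ]; then
  echo "stdin is a TTY"
else
  echo "stdin is not a TTY"
fi
```

Run it with stdin redirected from /dev/null (roughly what cron does) and it reports that there is no TTY.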

AWS EC2 cron set up

I have crons set up on an AWS EC2 instance using "crontab -e" and they work fine, except that they run as the ec2-user. I need the cron to run as apache, because as ec2-user I am getting permission errors:
0 0 * * 0 /usr/bin/php /var/www/html/xxxxx >/dev/null 2>&1
I had it working fine by setting up the crontab with the following command:
sudo crontab -u apache -e
however, it seems these crons got deleted for some reason. Does anyone have any idea why they were deleted?
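Nothing deletes crontabs on its own (an accidental `crontab -r`, a package reinstall, or the EC2 instance being replaced are the usual suspects), so a cheap safeguard (my suggestion, not from the question) is to keep a backup of each user's crontab:

```shell
# Save a copy of the current user's crontab (use `sudo crontab -u apache -l`
# for the apache user); writes a marker line if none is installed.
backup=/tmp/crontab.backup
(crontab -l 2>/dev/null || echo "# no crontab installed") > "$backup"
echo "saved to $backup"
```

The saved file can be restored later with `crontab $backup` (or `sudo crontab -u apache $backup`).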

How to automatically start your docker services on the restart of my computer?

I want to write a bash script file (.sh) file in Ubuntu so that docker services start automatically on reboot.
Use the cron service.
You need to use the special string @reboot. It will run the job once, at startup, after Linux reboots. The syntax is as follows:
@reboot /path/to/job
@reboot /path/to/shell.script
@reboot /path/to/command arg1 arg2
So, to run docker on reboot:
@reboot start docker
This is an easy way to give your users the ability to run a shell script or command at boot time without root access. First, run crontab command:
$ crontab -e
OR
# crontab -e -u UserName
# crontab -e -u vivek
So, to run a script called /home/vivek/bin/installnetkit.sh:
@reboot /home/vivek/bin/installnetkit.sh
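If the end goal is just to have Docker containers come back after a reboot, a common alternative to @reboot (an addition, not part of the original answer) is Docker's own restart policy, assuming the Docker daemon itself is enabled at boot:

```yaml
# docker-compose.yml fragment; the service name "myapp" is a placeholder.
# With the docker service enabled via systemd, the daemon restarts this
# container at boot without any cron entry.
services:
  myapp:
    restart: unless-stopped
```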

Continuously run bash script in Azure Container

I need to run a bash script continuously, for an indefinite time, inside a docker container in Azure via the Azure Container Instances service (ACI). My bash script has a while loop that keeps it running, and the Azure container has an OnFailure property to restart the container if it fails.
I see that after the container has been running for about 2 days, its status is Running. However, the bash script that was running in the foreground and sending logs to the Azure container console seems to have died and is no longer sending logs to the console. It is also no longer doing what it is supposed to do.
How can I reliably keep this bash script running for indefinite time in Azure container?
The bash script, which has an internal while loop, is started as follows:
bash my-while-loop-script.sh
To solve this issue, I replaced the while loop inside my-while-loop-script.sh with crond, executing a Python application as a cron job. Below is the line inside my-while-loop-script.sh that starts cron; it executes the contents of my-cron.cron, shown further down:
./busybox crond -f
To achieve that, I used the busybox 1.30.1 tools. To build busybox in your Docker image:
ADD busybox-1.30.1/ /busybox
WORKDIR /busybox
RUN make defconfig
RUN make
And you also need to add the cron settings to the crontabs directory:
RUN mkdir -p /var/spool/cron/crontabs/
# Copy cron settings
ADD my-cron.cron /var/spool/cron/crontabs/root
A sample my-cron.cron looks just like a normal cron file:
* * * * * python my-app.py

docker exec is not working in cron

I have a pretty simple command which works fine standalone, as a command or in a bash script, but not when I put it in crontab:
40 05 * * * bash /root/scripts/direct.sh >> /root/cron.log
which has the following lines:
PATH=$PATH:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin
SHELL=/bin/sh PATH=/bin:/sbin:/usr/bin:/usr/sbin:/root/
# Mongo Backup
docker exec -it mongodb mongodump -d meteor -o /dump/
I tried changing the path of the script to /usr/bin/scripts/ with no luck.
I even tried running the command directly in cron:
26 08 * * * docker exec -it mongodb mongodump -d meteor -o /dump/ >> /root/cron.log
with no luck, any help appreciated.
EDIT
I don't see any errors in /root/cron.log file either
Your docker exec command asks for a pseudo-terminal and interactive mode (the -it flags), while cron doesn't attach to any TTY.
Try changing your docker exec command to this and see if that works?
docker exec mongodb mongodump -d meteor -o /dump/
For what it's worth, I had this exact same problem. Fixing your PATH, changing permissions, and making sure you are running as the appropriate docker user are all good things, but that's not enough. It will continue failing because you are using "docker exec -it", which tells Docker to use an interactive shell. Change it to "docker exec -t" and it'll work fine. There will be no log output anywhere telling you this, however. Enjoy!
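To see the environment cron gives your job, compare `tty` in an interactive shell with `tty` when stdin is redirected away, a rough stand-in for cron (the redirect is my illustration, not part of the answer):

```shell
# Interactively, `tty` prints a device like /dev/pts/0; with stdin redirected
# (as under cron, which attaches no terminal) it prints "not a tty" instead.
tty </dev/null || true   # prints "not a tty"
```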
cron debugging
1. /var/log or sendmail
As crond works as a daemon, with no way to fail visibly, execution is more important than logging. So by default, if something goes wrong, cron will send a mail to $USER@localhost reporting the script output and errors.
Have a look at /var/mail or /var/spool/mail for mails, and at /etc/aliases to see where root's mail is sent.
2. crond and $PATH
When you run a command via cron, be aware that $PATH is the user's default path and not root's default path (i.e. no */sbin and other paths reserved for superuser tools).
For this, the simplest way is to print your default path in the environment where everything runs fine:
echo $PATH
or patch your script from command line:
sed -e "2aPATH='$PATH'" -i /root/scripts/direct.sh
This will add current $PATH initializer at line 2 in your script.
Or this, which will wipe all other PATH= assignments from your script:
sed -e "s/PATH=[^ ]*\( \|$\)/\1/;2aPATH='$PATH'" -i /root/scripts/direct.sh
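To see what GNU sed's `2a` (append after line 2) does here, you can try it on a throwaway copy first (the demo script below is made up for illustration):

```shell
# Demo of the `2a` append on a disposable script: after patching, the
# captured PATH sits at line 3, right under the shebang and comment.
demo=$(mktemp)
printf '#!/bin/bash\n# Mongo Backup\necho hi\n' > "$demo"
sed -e "2aPATH='$PATH'" -i "$demo"
sed -n '3p' "$demo"   # prints the inserted PATH='...' line
```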
3. Force logging
Add at top of your script:
exec 1>/tmp/cronlog-$$.log
exec 2>/tmp/cronlog-$$.err
Try this:
sed -e '1a\\nexec 1>/tmp/cronlog-$$.log\nexec 2>/tmp/cronlog-$$.err' -i ~/scripts/direct.sh
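The effect of those two exec lines can be checked in isolation: everything the shell writes after them lands in the log files rather than on the terminal (file names here are placeholders):

```shell
# Demo of the exec-redirection pattern: after the two execs, the stdout and
# stderr of the rest of the script go to the two files.
log=/tmp/cronlog-demo-$$.log
err=/tmp/cronlog-demo-$$.err
bash -c "exec 1>'$log'; exec 2>'$err'; echo 'stdout line'; echo 'stderr line' >&2"
grep -q 'stdout line' "$log" && grep -q 'stderr line' "$err" && echo "both captured"
```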
Finalized script could look like:
#!/bin/bash
# uncomment two following lines to force log to /tmp
# exec 1>/tmp/cronlog-$$.log
# exec 2>/tmp/cronlog-$$.err
PATH='....' # copied from terminal console!
docker exec -it mongodb mongodump -d meteor -o /dump/
Executable flag
If you run your script by
40 05 * * * bash /root/scripts/direct.sh
no executable flag is required, but you must add it:
chmod +x ~/scripts/direct.sh
if you want to run:
40 05 * * * /root/scripts/direct.sh
1) Make sure this task is in the root user's crontab; it's probably the case, but you didn't state it explicitly.
2) cron may be unable to find bash. I would remove it and call your script directly after making it executable:
chmod 755 /root/scripts/direct.sh
and then set your crontab entry to 40 05 * * * /root/scripts/direct.sh >> /root/cron.log 2>&1 (note the order: the 2>&1 must come after the file redirection, or stderr will not reach the log)
If it's still not working, then you should have some useful output in /root/cron.log
Are you sure your script is running? Add another command, like touch /tmp/cronok, before the docker exec call.
Don't forget that the crontab needs a newline at the end. Use crontab -e to edit it.
Restart the cron service and check the logs (grep -i cron /var/log/syslog).
If your OS is RedHat/CentOS/Fedora and you are using the system-wide /etc/crontab, add the username (root) between the frequency and the command.
Check your mails with the mail command.
Check the crontab permissions. chmod 644 /etc/crontab.
Maybe you just don't want to reinvent the wheel.
Here are a few things I'd change. First, capture STDERR along with STDOUT, and remove the shell specification in cron; use #! in your script instead.
40 05 * * * /root/scripts/direct.sh &>> /root/cron.log
Next, you are setting your PATH in the reverse order, and you are missing your shebang. I have no idea why you are defining SHELL as /bin/sh when you are running bash instead of dash. Change your script to this:
#!/usr/bin/env bash
PATH=/bin:/sbin:/usr/bin:/usr/sbin:/root
PATH=$PATH:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin
# Mongo Backup
docker exec -it mongodb mongodump -d meteor -o /dump/
See if that yields something better to work with.
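For reference, bash's `&>>` (not POSIX sh) appends both streams to one file, equivalent to `>> file 2>&1`; a quick check (the temp file is illustrative):

```shell
# Demo of `&>>`: both stdout and stderr are appended to the same file.
log=$(mktemp)
bash -c 'echo out; echo err >&2' &>> "$log"
grep -c '' "$log"   # counts 2 lines: one from each stream
```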
