I've tried setting up cron to run in my Docker container, but without success thus far.
These are the cron-related parts of the Dockerfile:
FROM ruby:2.2.2
# Add crontab file in the cron directory
RUN apt-get install -y rsyslog
ADD crontab /etc/cron.d/hello-cron
# Give execution rights on the cron job
RUN chmod +x /etc/cron.d/hello-cron
# Create the log file to be able to run tail
RUN touch /var/log/cron.log
# Run the command on container startup
RUN service cron start
When I log on to the container instance, cron appears to be running:
$ service cron status
cron is running.
And /etc/cron.d has my job:
$ cat /etc/cron.d/hello-cron
* * * * * root echo "Hello world" >> /var/log/cron.log 2>&1
But nothing is appended to /var/log/cron.log, so it doesn't appear to run.
If I then run cron from within the container, it registers my hello-cron file and the log file has "Hello world" appended every minute.
Your analysis is correct: the cron jobs are not running. This happens because, normally and by best practice, a container runs only a single process, such as Apache, NGINX, etc.; it does not run any of the normal operating-system daemons such as crond.
No crond means, there is nothing that would read or execute your crontab.
There are several possibilities to solve this, but no perfect solution that I know of.
The worst option is to actually install crond, along with something like supervisord to keep both cron and your main process running. It makes your container dramatically more complex.
You can create a separate container that runs nothing but cron. Mount whatever you need from the other containers as volumes. This is generally the recommended option, but it has limitations. The cron container needs to know a lot about the internals of your other containers, and the cron jobs don't execute in the same context as the rest of the containers.
You can create a cron job on the host, and have it execute scripts in the containers with docker exec. That works well, but creates a dependency between host and container. It may also not work at all if you don't have access to the host's operating system (for instance, in a hosted situation, or if a different team manages the host).
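As a sketch of that last option, a host-side /etc/cron.d entry could look like this (the container name my-app and the in-container script /app/job.sh are hypothetical placeholders):
# /etc/cron.d/docker-jobs -- lives on the host, not in the container
# Every five minutes, run /app/job.sh inside the running "my-app" container.
*/5 * * * * root docker exec my-app /app/job.sh >> /var/log/docker-jobs.log 2>&1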
So I am making a script that can run these commands whenever the server boots/reboots:
sudo bash
su - erp
cd frappe-bench/
bench start >/tmp/bench_log &
I found guides here and there about how to change the user in a script, and I came up with the following script:
#! /bin/sh
sudo -u erp bash
cd /home/erp/frappe-bench/
bench start >/tmp/bench_log &
And, I have created a service at /etc/systemd/system/ and set it to run automatically when the server boots up.
The problem is, whenever I run sudo systemctl start erpnextd.service and checked the status, it came up with this
May 24 17:10:05 appbsystem2 systemd[1]: Started ERPNext | Auto Restart.
May 24 17:10:05 appbsystem2 sudo[18814]: root : TTY=unknown ; PWD=/ ; USER=erp ; COMMAND=/bin/bash
May 24 17:10:05 appbsystem2 systemd[1]: erpnextd.service: Succeeded.
But it still doesn't start up ERPNext.
All I wanted to do is make a script that will start ERPNext automatically every time the server reboots.
Note: I installed frappe-bench for the user erp only.
Because you are using systemd, you already have all the features from your script available, and better. So you don't even need the script anymore:
[Unit]
Description=...
[Service]
# Run as user erp.
User=erp
# You probably also want to run as group erp, if it exists.
Group=erp
# Change to this directory before executing.
WorkingDirectory=/home/erp/frappe-bench
# Redirect standard output to the given log file.
StandardOutput=file:/tmp/bench_log
# Redirect standard error to the same log file.
StandardError=file:/tmp/bench_log
# Command line for starting the program. Make sure to use an absolute path!
ExecStart=/full/path/to/bench start
[Install]
WantedBy=multi-user.target
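After saving the unit (for example as /etc/systemd/system/erpnextd.service, the name from the question), reload systemd and enable the service so it starts on every boot:
sudo systemctl daemon-reload
sudo systemctl enable --now erpnextd.service
Note that the StandardOutput=file: and StandardError=file: directives require systemd 236 or newer.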
Using crontab (the script will start after every reboot/startup):
crontab -e
@reboot sh /full/path/to/bench start >/tmp/bench_log
The answer provided by Thomas is very helpful.
However, I found another workaround by adding the path of my script file into the bottom of /etc/rc.local file.
Both methods work, it's just a matter of preference ;)
I created a program.conf that rotates my logs hourly on an EC2 instance. The rotation works well when I force it from the command line (sudo logrotate program.conf --verbose --force), but it doesn't run every hour.
I tried several solutions I found by googling this problem, like putting my program.conf in /etc/logrotate.d and moving logrotate from cron.daily into cron.hourly, but it doesn't work.
Here is my program.conf :
/home/user_i/*.log {
hourly
missingok
dateext
rotate 1
compress
size 100M
sharedscripts
postrotate
/usr/bin/bash file.sh
endscript
}
Do you have any idea, please?
Thanks
OP states in a comment that they can't use crontab and need a solution utilizing /etc/cron.hourly.
Take the "program.conf" file you're using to define the logrotate parameters and put that file somewhere accessible by the root user but NOT in the /etc/logrotate.d/ directory. The idea is that if we're running this hourly in our own fashion, we don't want logrotate to also perform this rotation when it normally runs. This file only needs to be readable by root, not executable.
You need to make sure that ALL of the logrotate parameters you need are inside this file. We are going to be using logrotate to execute a rotation using only this configuration file, but that also means that any of the 'global' parameters you defined in /etc/logrotate.conf are not going to be taken into account. If you have any parameters in there that need to be respected by your custom rotation, they need to be duplicated into your "program.conf" file.
Inside your /etc/cron.hourly folder, create a new file (executable by root) that will be the script executing our custom rotation every hour (adjust your shell/shebang accordingly):
#!/usr/bin/bash
logrotate -f /some/dir/program.conf
This will make cron fire off an hourly, forced rotation for that configuration file without having any effect on logrotate's normal functionality.
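Before relying on the hourly cron script, you can dry-run the rotation with logrotate's debug flag, which prints what would happen without actually rotating anything:
logrotate -d /some/dir/program.conf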
On Debian Bullseye (and maybe other modern systemd-based systems) logrotate is handled by a systemd timer which runs once a day. To change the run frequency of logrotate to hourly, you have to override the default logrotate.timer unit.
Execute systemctl edit logrotate.timer and insert the following overrides:
[Timer]
OnCalendar=
OnCalendar=hourly
AccuracySec=1m
Then run systemctl reenable --now logrotate.timer to activate the changes.
The empty OnCalendar option resets the previously defined values.
The default logrotate.timer sets the AccuracySec option to one hour. Unfortunately, resetting it with an empty value is not possible, so it has to be set to one minute manually.
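You can verify the new schedule afterwards with:
systemctl list-timers logrotate.timer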
You will need to add the job to your crontab:
crontab -e
Then add a job that runs every hour at 14 minutes past:
14 * * * * /usr/sbin/logrotate /home/sammy/logrotate.conf --state /home/sammy/logrotate-state
Taken from : https://www.digitalocean.com/community/tutorials/how-to-manage-logfiles-with-logrotate-on-ubuntu-16-04
Additionally, check that the cron daemon is actually running:
service cron status
If it is stopped, you can start it:
service cron start
If you are using Linux, once you install the logrotate rpm, it automatically creates a logrotate file under /etc/cron.daily. You can move this file to the /etc/cron.hourly folder, and logrotate will then run hourly automatically.
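For example:
sudo mv /etc/cron.daily/logrotate /etc/cron.hourly/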
I need to run a bash script continuously for an indefinite time inside a Docker container in Azure via the Azure Container Instances service (ACI). My bash script has a while loop that keeps it running, and the Azure container has an OnFailure property to restart the container if it fails.
I see that after running the container for about 2 days, the container status is Running. However, the bash script that was running in the foreground and sending logs to the Azure container console seems to have died and is no longer sending logs to the console. I also see it's not doing what it's supposed to do.
How can I reliably keep this bash script running for indefinite time in Azure container?
The bash script, which has an internal while loop, runs as below:
Commands
bash
my-while-loop-script.sh
To solve this issue, I replaced the while loop inside my-while-loop-script.sh with crond, executing a Python application as a cron job. Below is the line in my-while-loop-script.sh that starts cron; it executes the my-cron.cron contents shown below:
./busybox crond -f
To achieve that, I used the busybox 1.30.1 tools. To build busybox in your Docker image:
ADD busybox-1.30.1/ /busybox
WORKDIR /busybox
RUN make defconfig
RUN make
You also need to add the cron settings to the crontabs directory:
RUN mkdir -p /var/spool/cron/crontabs/
# Copy cron settings
ADD my-cron.cron /var/spool/cron/crontabs/root
A sample my-cron.cron looks just like a normal cron file:
* * * * * python my-app.py
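For reference, a minimal sketch of what my-while-loop-script.sh might reduce to with this approach (assuming busybox was built in /busybox as above; this is an illustration, not the exact script):
#!/bin/sh
# Start busybox crond in the foreground so the container's main process
# stays alive and the jobs in /var/spool/cron/crontabs/ keep firing.
cd /busybox
exec ./busybox crond -f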
I have a build server. I'm using the Azure Build Agent script. It's a shell script that will run continuously while the server is up. The problem is that I cannot seem to get it to run on startup. I've tried /etc/init.d and /etc/rc.local, and the agent is not being run. There is nothing concerning the build agent in the boot logs.
For /etc/init.d I created the script agent.sh which contains:
#!/bin/bash
sh ~/agent/run.sh
Gave it the proper permissions with chmod 755 agent.sh and moved it to /etc/init.d.
and for /etc/rc.local, I just appended the following
sh ~/agent/run.sh &
before exit 0.
What am I doing wrong?
EDIT: added examples.
EDIT 2: Just noticed that the init.d README says that shell scripts need to start with #!/bin/sh and not #!/bin/bash. Also used absolute path, but no change.
FINAL EDIT: As @ewrammer suggested, I used cron and it worked: crontab -e and then @reboot /home/user/agent/run.sh.
It is hard to see what is wrong if you are not posting what you have done, but why not add it as a cron job with @reboot as the pattern? Then cron will run the script every time the computer starts.
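For example (the log redirection is optional but useful for debugging startup failures; /tmp/agent.log is just an illustration):
crontab -e
@reboot /home/user/agent/run.sh >> /tmp/agent.log 2>&1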
Just in case, using a process supervisor could be a good idea. On Ubuntu 14 you don't have systemd, but you can choose from others: https://en.wikipedia.org/wiki/Process_supervision.
If using immortal, after installing it, you just need to create a run.yml file in /etc/immortal with something like:
cmd: /path/to/command
log:
    file: /var/log/command.log
This will start your script/command on every start, besides ensuring your script/app is always up and running.
I have a ruby script that connects to an Amazon S3 bucket and downloads the latest production backup. I have tested the script (which is very simple) and it works fine.
However, when I schedule this script to be run as a cron job it seems to fail when it loads the Amazon (aws-s3) gem.
The first few lines of my script looks like this:
#!/usr/bin/env ruby
require 'aws/s3'
As I said, when I run this script manually, it works fine. When I run it via a scheduled cron job, it fails when it tries to load the gem:
`require': no such file to load -- aws/s3 (LoadError)
The crontab for this script looks like this:
0 3 * * * ~/Downloader/download.rb > ~/Downloader/output.log 2>&1
I originally thought it might be because cron is running as a different user, but when I do a 'whoami' at the start of my ruby script it tells me it's running as the same user I always use.
I have also done a bundle init and added the gem to my Gemfile, but this doesn't seem to have any effect.
Why does cron fail to load the gem? I am running Ubuntu.
As mentioned here https://coderwall.com/p/vhv8aw you can simply try
rvm cron setup # let RVM do your cron settings
Make sure that you make a copy of your crontab before running this command.
If you're running it manually and it works, you're probably in a different shell environment than cron is executing in. Since you mention you're on Ubuntu, the cron jobs probably execute under /bin/sh, and you're manually running them under /bin/bash if you haven't changed anything.
You can debug your environment problems or you can change the shell that your job runs under.
To debug: there are several ways to figure out what shell your cron jobs are using. It can be defined in
/etc/crontab
or you can make a cron job to dump your shell and environment information, as has been mentioned in this SO answer: How to simulate the environment cron executes a script with?
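For example, a temporary job like this captures cron's environment for comparison with your interactive shell (the output path is arbitrary):
* * * * * env > /tmp/cron-env.txt 2>&1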
To switch to that shell and see the actual errors causing your job to fail, do
sudo su
env -i <path to shell> (e.g. /bin/sh)
Then, running your script, you should see what the errors are and be able to fix them (rubygems?).
Option 2 is to switch shells. You can always try something like:
0 3 * * * /bin/bash -c '~/Downloader/download.rb > ~/Downloader/output.log 2>&1'
This forces your job into bash, which might also clear things up.
You may also explicitly set your Gem path:
GEM_HOME="/usr/local/rvm/gems/ruby-1.9.2-p290#my-special-gemset"
In a non-cron environment, execute echo $PATH, copy the path, and paste it into your crontab before your command:
echo $PATH
/usr/local/bin:/usr/local/sbin:/usr/bin:/bin:/usr/sbin:/sbin
and inside crontab:
PATH=/usr/local/bin:/usr/local/sbin:/usr/bin:/bin:/usr/sbin:/sbin
0 3 * * * ~/Downloader/download.rb > ~/Downloader/output.log 2>&1
Add this at the beginning of your crontab:
PATH="/home/user/.rvm/gems/ruby-2.1.4/bin:/home/user/.rvm/gems/ruby-2.1.4#global/bin:/home/user/.rvm/rubies/ruby-2.1.4/bin:/home/user/.rvm/gems/ruby-2.1.4/bin:/home/user/.rvm/gems/ruby-2.1.4#global/bin:/home/user/.rvm/rubies/ruby-2.1.4/bin:/usr/lib64/qt-3.3/bin:/usr/local/bin:/usr/bin:/home/user/.rvm/bin:/usr/local/sbin:/usr/sbin:/home/user/.rvm/bin:/home/user/.local/bin:/home/user/bin"
GEM_HOME='/home/user/.rvm/gems/ruby-2.1.4'
GEM_PATH='/home/user/.rvm/gems/ruby-2.1.4:/home/user/.rvm/gems/ruby-2.1.4#global'
MY_RUBY_HOME='/home/user/.rvm/rubies/ruby-2.1.4'
IRBRC='/home/user/.rvm/rubies/ruby-2.1.4/.irbrc'
RUBY_VERSION='ruby-2.1.4'
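One way to generate these values for your own setup is to print them from a normal shell where RVM is loaded, then paste the output into your crontab:
env | grep -E '^(PATH|GEM_HOME|GEM_PATH|MY_RUBY_HOME|IRBRC|RUBY_VERSION)='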
I've tried all the solutions above; none of them worked until I tried:
0 12 * * * /bin/bash -l -c 'ruby /Users/simon/Desktop/script.rb'