How to run logrotate hourly - amazon-ec2

I created a program.conf that rotates my logs hourly on an EC2 instance. The rotation works fine when I force it from the command line (sudo logrotate program.conf --verbose --force), but it doesn't run every hour.
I tried several solutions I found by googling this problem, such as putting my program.conf in /etc/logrotate.d and moving the logrotate script from cron.daily into cron.hourly, but it doesn't work.
Here is my program.conf:
/home/user_i/*.log {
    hourly
    missingok
    dateext
    rotate 1
    compress
    size 100M
    sharedscripts
    postrotate
        /usr/bin/bash file.sh
    endscript
}
Do you have any idea, please?
Thanks

The OP states in a comment that they can't use crontab and need a solution using /etc/cron.hourly.
Take the "program.conf" file you're using to define the logrotate parameters and put that file somewhere accessible by the root user but NOT in the /etc/logrotate.d/ directory. The idea is that if we're running this hourly in our own fashion, we don't want logrotate to also perform this rotation when it normally runs. This file only needs to be readable by root, not executable.
You need to make sure that ALL of the logrotate parameters you need are inside this file. We are going to be using logrotate to execute a rotation using only this configuration file, but that also means that any of the 'global' parameters you defined in /etc/logrotate.conf are not going to be taken into account. If you have any parameters in there that need to be respected by your custom rotation, they need to be duplicated into your "program.conf" file.
Inside your /etc/cron.hourly folder, create a new file (executable by root) that will be the script executing our custom rotation every hour (adjust your shell/shebang accordingly):
#!/usr/bin/bash
logrotate -f /some/dir/program.conf
This will make cron fire off an hourly, forced rotation for that configuration file without having any effect on logrotate's normal functionality.
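Putting this together, a minimal sketch could look like the following (the /etc/logrotate-hourly directory and the script name are just examples, not anything logrotate requires; also note that on some distributions run-parts skips file names containing a dot, so avoid a .sh extension for the cron.hourly script):

sudo install -d /etc/logrotate-hourly
sudo cp program.conf /etc/logrotate-hourly/program.conf

# Create the hourly cron script (name without a dot so run-parts picks it up)
sudo tee /etc/cron.hourly/rotate-program > /dev/null <<'EOF'
#!/bin/sh
# Absolute path to logrotate avoids PATH surprises under cron
/usr/sbin/logrotate -f /etc/logrotate-hourly/program.conf
EOF
sudo chmod 755 /etc/cron.hourly/rotate-program

# Run it once by hand to confirm it works before waiting for cron
sudo /etc/cron.hourly/rotate-program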

On Debian Bullseye (and possibly other modern systemd-based systems), logrotate is handled by a systemd timer which runs once a day. To change logrotate's run frequency to hourly, you have to override the default logrotate.timer unit.
Execute systemctl edit logrotate.timer and insert the following overrides:
[Timer]
OnCalendar=
OnCalendar=hourly
AccuracySec=1m
Then run systemctl reenable --now logrotate.timer to activate the changes.
The empty OnCalendar option resets the previously defined values.
The default logrotate.timer sets the AccuracySec option to one hour. Unfortunately, resetting it with an empty value is not possible, so it has to be set to one minute manually.
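To confirm the override took effect, you can inspect the merged unit and the timer schedule (both are standard systemctl subcommands):

# Show the timer unit with the drop-in override applied
systemctl cat logrotate.timer
# Show when the timer last ran and when it fires next
systemctl list-timers logrotate.timer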

You will need to add the job to your crontab:
crontab -e
Then add a job that runs every hour, at 14 minutes past the hour:
14 * * * * /usr/sbin/logrotate /home/sammy/logrotate.conf --state /home/sammy/logrotate-state
Taken from: https://www.digitalocean.com/community/tutorials/how-to-manage-logfiles-with-logrotate-on-ubuntu-16-04
Additionally, check that the cron daemon is actually running:
service cron status
(on Amazon Linux / RHEL the service is called crond, so use service crond status there). If it is stopped, you can start it with:
service cron start
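To see whether the job actually fired, check the cron log; its location depends on the distribution (the two paths below cover the common cases):

# Debian/Ubuntu: cron logs to syslog
grep CRON /var/log/syslog | tail
# Amazon Linux / RHEL / CentOS: cron has its own log file
sudo tail /var/log/cron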

On an RPM-based Linux system, installing the logrotate package automatically creates a logrotate file under /etc/cron.daily. You can move this file to the /etc/cron.hourly folder, and logrotate will then run hourly automatically.
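For reference, the file the RPM drops into /etc/cron.daily is typically just a small wrapper like the one below (exact contents vary between versions), so moving it to /etc/cron.hourly simply makes logrotate evaluate /etc/logrotate.conf, and everything it includes, once per hour; only logs whose criteria (hourly, size, etc.) are met actually get rotated:

#!/bin/sh
/usr/sbin/logrotate /etc/logrotate.conf
EXITVALUE=$?
if [ $EXITVALUE != 0 ]; then
    /usr/bin/logger -t logrotate "ALERT exited abnormally with [$EXITVALUE]"
fi
exit $EXITVALUE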

Related

Is it safe to put a command which includes a sudo command on the scheduler?

Good morning,
I am working on a link between my Laravel file server and a Synology backup. The command I am using relies on sudo to create and then disconnect the link. I want to know whether I would be able to run this command from the scheduler.
Thanks
You can use for example (to run at midnight, every day):
0 0 * * * /path/to/your/command
This is an entry you can add to the crontab of the user you use to run the command. Be aware that cron has a different environment, so you should set all the variables you need.
You may need to create a special shell script that includes your environment variables:
. ~/.bash_profile
/path/to/your/command
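For example, a wrapper script along these lines (the path and file name are hypothetical) keeps the cron entry simple; note also that cron cannot answer sudo's password prompt, so the account may need a NOPASSWD sudoers rule for that specific command to run unattended:

#!/bin/bash
# /home/deploy/bin/backup-link.sh -- hypothetical wrapper
. ~/.bash_profile            # load the environment variables cron does not provide
sudo /path/to/your/command   # needs a NOPASSWD sudoers entry to run without a prompt

The crontab entry then just calls the wrapper, e.g. 0 0 * * * /home/deploy/bin/backup-link.sh >> /home/deploy/backup-link.log 2>&1 (redirecting output makes failures much easier to debug).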

Auto Start Script

So I am making a script that can run these commands whenever the server boots/reboots:
sudo bash
su - erp
cd frappe-bench/
bench start >/tmp/bench_log &
I found guides here and there about how to change user in a script, and I came up with the following script:
#! /bin/sh
sudo -u erp bash
cd /home/erp/frappe-bench/
bench start >/tmp/bench_log &
And, I have created a service at /etc/systemd/system/ and set it to run automatically when the server boots up.
The problem is, whenever I run sudo systemctl start erpnextd.service and checked the status, it came up with this
May 24 17:10:05 appbsystem2 systemd[1]: Started ERPNext | Auto Restart.
May 24 17:10:05 appbsystem2 sudo[18814]: root : TTY=unknown ; PWD=/ ; USER=>erp ; COMMAND=/bin/bash
May 24 17:10:05 appbsystem2 systemd[1]: erpnextd.service: Succeeded.
But it still doesn't start up ERPNext.
All I wanted to do is make a script that will start ERPNext automatically every time the server reboots.
Note: frappe-bench is installed only under the erp user.
Because you are using systemd, you already have all the features from your script available, and better. So you don't even need the script anymore:
[Unit]
Description=...
[Service]
# Run as user erp.
User=erp
# You probably also want to run as group erp, if it exists.
Group=erp
# Change to this directory before executing.
WorkingDirectory=/home/erp/frappe-bench
# Redirect standard output to the given log file.
StandardOutput=file:/tmp/bench_log
# Redirect standard error to the same log file.
StandardError=file:/tmp/bench_log
# Command line for starting the program. Make sure to use an absolute path!
ExecStart=/full/path/to/bench start
[Install]
WantedBy=multi-user.target
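Assuming the unit is saved as /etc/systemd/system/erpnextd.service (the name used in the question), reload systemd and enable it so it starts on every boot:

sudo systemctl daemon-reload
sudo systemctl enable --now erpnextd.service
sudo systemctl status erpnextd.service
# If it fails, the journal usually tells you why:
journalctl -u erpnextd.service -e

One caveat: the StandardOutput=file:/... and StandardError=file:/... directives need a reasonably recent systemd (roughly version 236 or later); on older systems you can drop them and read the output with journalctl instead.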
Using crontab (the script will start after every reboot/startup)
crontab -e
@reboot sh /full/path/to/bench start >/tmp/bench_log
The answer provided by Thomas is very helpful.
However, I found another workaround by adding the path of my script file to the bottom of the /etc/rc.local file.
Both methods work; it's just a matter of preference ;)

Run a shell script on startup (not login) on Ubuntu 14.04

I have a build server. I'm using the Azure Build Agent script. It's a shell script that will run continuously while the server is up. Problem is that I cannot seem to get it to run on startup. I've tried /etc/init.d and /etc/rc.local and the agent is not being run. Nothing concerning the build agent in the boot logs.
For /etc/init.d I created the script agent.sh which contains:
#!/bin/bash
sh ~/agent/run.sh
Gave it the proper permissions with chmod 755 agent.sh and moved it to /etc/init.d.
and for /etc/rc.local, I just appended the following
sh ~/agent/run.sh &
before exit 0.
What am I doing wrong?
EDIT: added examples.
EDIT 2: Just noticed that the init.d README says that shell scripts need to start with #!/bin/sh and not #!/bin/bash. Also used absolute path, but no change.
FINAL EDIT: As ewrammer suggested, I used cron and it worked: crontab -e and then @reboot /home/user/agent/run.sh.
It is hard to see what is wrong if you are not posting what you have done, but why not add it as a cron job with @reboot as the schedule? Then cron will run the script every time the computer starts.
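For example (the log path is just a suggestion, but capturing output makes it much easier to see why a startup script did nothing):

@reboot /home/user/agent/run.sh >> /home/user/agent/run.log 2>&1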
Just in case, using a process supervisor could be a good idea. In Ubuntu 14.04 you don't have systemd, but you can choose from others: https://en.wikipedia.org/wiki/Process_supervision.
If using immortal, after installing it, you just need to create a run.yml file in /etc/immortal with something like:
cmd: /path/to/command
log:
    file: /var/log/command.log
This will start your script/command on every boot, and it also ensures your script/app is always up and running.

How to create a systemd service for a daily reboot?

I have a raspberry pi with RuneAudio. I would like to set up a daily automatic reboot. Since RuneOS uses systemd rather than cron, how can I do that with systemd?
You could do this with a bash script that runs on startup, sleeps for 24 hours, and then reboots.
Write a file that contains:
sleep 24h
sudo reboot
Save it as reboot24.sh, make it executable, and append the following line to /etc/rc.local:
sudo bash /path/to/file/reboot24.sh
Edit: this is a description for Raspbian. Not sure if it works on RuneOS
According to this installed package list cron should be installed by default.
If it's disabled just enable it by typing
sudo systemctl enable cron
then append this to your /etc/crontab file
25 6 * * * root reboot
this will reboot your system every day at 6:25.
Now restart cron
sudo systemctl restart cron
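If you want to stay entirely within systemd, as the question asks, a timer/service pair along these lines should also work (the unit names are just examples):

# /etc/systemd/system/daily-reboot.service
[Unit]
Description=Daily scheduled reboot

[Service]
Type=oneshot
ExecStart=/usr/bin/systemctl reboot

# /etc/systemd/system/daily-reboot.timer
[Unit]
Description=Reboot every day at 06:25

[Timer]
OnCalendar=*-*-* 06:25:00
# Keep Persistent off so a missed run is not "caught up" right after boot
Persistent=false

[Install]
WantedBy=timers.target

Enable it with sudo systemctl enable --now daily-reboot.timer and check the schedule with systemctl list-timers.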

How to run cron in Docker container from Ruby image

I've tried setting up cron to run in my Docker container, but without success thus far.
This is the cron-related parts of the Dockerfile:
FROM ruby:2.2.2
# Add crontab file in the cron directory
RUN apt-get install -y rsyslog
ADD crontab /etc/cron.d/hello-cron
# Give execution rights on the cron job
RUN chmod +x /etc/cron.d/hello-cron
# Create the log file to be able to run tail
RUN touch /var/log/cron.log
# Run the command on container startup
RUN service cron start
When I log on to the container instance, cron appears to be running:
$ service cron status
cron is running.
And /etc/cron.d has my job:
$ cat /etc/cron.d/hello-cron
* * * * * root echo "Hello world" >> /var/log/cron.log 2>&1
But nothing is appended to /var/log/cron.log, so it doesn't appear to run.
If I then run cron manually from within the container, it registers my hello-cron file and the log file gets "Hello world" appended every minute.
Your analysis is correct, the cron jobs are not running. This happens because normally, and as a best practice, the container only runs a single process, such as Apache, NGINX, etc. - it does not run any of the normal operating system daemons such as crond.
No crond means there is nothing to read or execute your crontab.
There are several possibilities to solve this, but no perfect solution that I know of.
The worst one is to actually install crond, along with something like supervisord. It makes your container dramatically more complex.
You can create a separate container that runs nothing but cron. Mount whatever you need from the other containers as volumes. This is generally the recommended option, but it has limitations. The cron container needs to know a lot about the internals of your other containers, and the cron jobs don't execute in the same context as the rest of the containers.
You can create a cron job on the host, and have it execute scripts in the containers with docker exec. That works well, but creates a dependency between host and container. It may also not work at all if you don't have access to the host's operating system (for instance, in a hosted situation, or if a different team manages the host).
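If you decide that cron really is this container's job, the usual pattern is to make cron itself the single foreground process (CMD runs at container start, unlike RUN, which only runs at build time). A rough sketch based on the Dockerfile above:

FROM ruby:2.2.2
# Install cron and add the job file
RUN apt-get update && apt-get install -y cron
COPY crontab /etc/cron.d/hello-cron
# cron.d files just need to be readable by root, not executable
RUN chmod 0644 /etc/cron.d/hello-cron && touch /var/log/cron.log
# Run cron in the foreground as the container's main process
CMD ["cron", "-f"]

Note that with this approach the container does nothing but run cron, so your Ruby application would have to live in a separate container, which is essentially the second option described above.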
