systemd timer: several instances, different times?

I'm using btrfs-scrub@.timer, which is a templated timer (the instance argument is the btrfs volume to scrub).
I need to scrub several volumes regularly, but I'd like the scrubs not to happen simultaneously (e.g. a scrub every day, with a different volume each day).
According to the documentation, you can only have one argument per timer.
Is there a simple way to do that with systemd timers?

You could override the timer specifications for each unit using drop-in files, for example:
# /etc/systemd/system/btrfs-scrub@-.timer.d/OnCalendar.conf
[Timer]
OnCalendar=
OnCalendar=*-*-01 00:00:00
 
# /etc/systemd/system/btrfs-scrub@usr.timer.d/OnCalendar.conf
[Timer]
OnCalendar=
OnCalendar=*-*-10 00:00:00
 
# /etc/systemd/system/btrfs-scrub@var-lib.timer.d/OnCalendar.conf
[Timer]
OnCalendar=
OnCalendar=*-*-20 00:00:00
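After creating the drop-ins, something along these lines should apply and verify them (the instance names follow the drop-in directories above; adjust them to your own mount points):
systemctl daemon-reload
systemctl enable --now btrfs-scrub@-.timer btrfs-scrub@usr.timer btrfs-scrub@var-lib.timer
systemctl list-timers 'btrfs-scrub@*'   # check the next scheduled runs
systemctl cat btrfs-scrub@usr.timer     # confirm the drop-in is picked up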

Related

How to run 1 playbook for the same group with multiple plays, aka threaded

Current setup: we have ~2000 servers (in one group).
I would like to know if there is a way to run x.yml on the whole group (where all 2k servers are), but with multiple parallel runs (threaded, or something like):
ansible-playbook -i prod.ini -l my_group[50%] x.yml
ansible-playbook -i prod.ini -l my_group[other 50%] x.yml
Solutions with AWX or Ansible Tower are not relevant.
Even using 500-1000 forks didn't give any improvement.
Try combining forks and the free strategy.
The default behavior of Ansible is:
Ansible runs each task on all hosts affected by a play before starting the next task on any host, using 5 forks.
So even if you increase the number of forks, each task still waits for every host to finish before the play moves on. The free strategy allows each host to run until the end of the play as fast as it can:
- hosts: all
  strategy: free
  tasks:
    # ...
ansible-playbook -i prod.ini -f 500 -l my_group x.yml
As mentioned above, you should preferably increase forks and set the strategy to free. Increasing forks lets you run the playbook on more servers at once, and setting the strategy to free lets each server run its tasks independently without waiting for the others.
Please refer to the Ansible docs for more clarification.
Resolved by using the patterns my_group[:1000] and my_group[999:] (see the sketch below).
Forks didn't give any time decrease in my case.
The free strategy actually multiplied the run time, which was pretty weird.
Also, debugging the free-strategy summary is really difficult when you have 2k servers and about 50 tasks in the playbook.
Thanks everyone for sharing, much appreciated.
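For reference, a rough sketch of that pattern-based split, launching both halves at once (the slice boundaries and log file names are illustrative; double-check how Ansible handles the range endpoints before relying on them):
ansible-playbook -i prod.ini -l 'my_group[:999]' x.yml > first_half.log 2>&1 &
ansible-playbook -i prod.ini -l 'my_group[1000:]' x.yml > second_half.log 2>&1 &
wait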

How to make squeue display time limits in hours only?

When viewing submitted jobs managed by Slurm, I would like the time limit column (specified by %l) to show only hours, instead of the usual days-hours:minutes:seconds format. This is the command I am currently using:
squeue --format="%.6i %.5P %.25j %.8u %.8T %.10M %.5l %.15b %.5C %.6D %R" --sort=+i --me
and this is the example output:
276350 qgpu jobname username RUNNING 1:14:14 1-00:00:00 gres:gpu:v100:1 18 1 s31n02
So, in this case, I would like the elapsed time to remain as is (1:14:14), but the time limit to change from 1-00:00:00 to 24. Is there a way to do it?
This is simply the way Slurm displays times; elapsed time will eventually be displayed the same way (days-hours:minutes:seconds) once it passes 23:59:59.
You can use a wrapper script to convert it into a different format (see the sketch below). Or, if you know the time limit is never more than a day, just set it to 23:59:00 by using --time=1439.
salloc -N1 --time=1439 bash
Using your squeue command:
166 mypartition interactive jyvet RUNNING 7:36 23:59:00 N/A 1 1 mynode
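For the wrapper-script route, a rough sketch (it assumes the time limit is the 7th column of the format string above and only handles the [days-]HH:MM:SS forms shown in this question; minutes and seconds are dropped, and the column padding collapses):
#!/bin/sh
# Hypothetical wrapper around the squeue call from the question; rewrites
# the time-limit column (field 7) from [days-]HH:MM:SS into whole hours.
squeue --format="%.6i %.5P %.25j %.8u %.8T %.10M %.5l %.15b %.5C %.6D %R" --sort=+i --me |
awk 'NR == 1 { print; next }            # keep the header line as-is
     {
       t = $7; days = 0
       if (t ~ /-/) { split(t, d, "-"); days = d[1]; t = d[2] }
       if (split(t, hms, ":") == 3)     # only rewrite [days-]HH:MM:SS values
         $7 = days * 24 + hms[1]
       print
     }'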

Weekly scheduling of circleci job?

I would like to run jmeter tests on a weekly basis.
I have the below code now:
nightly:
  triggers:
    - schedule:
        cron: "0 0 13 ? * SAT *"
        filters:
          branches:
            only:
              - master
              - beta
Does it run every day because of nightly:, or does it run weekly because of cron: "0 0 13 ? * SAT *"? If it runs daily, how can I convert it to run weekly?
"nightly" is just the name of the Circle CI workflow. It could just as easily be "foo" or "bar". The scheduling is in the cron line, where you specify 5 things: the Minute, Hour, Day of the month, Number of the month and Day of the week that you want the job to run.
For example, to run the workflow every Saturday at 1 pm, you could use "0 13 * * 6".
Using an asterisk (*) in a field means any value of that field is acceptable. Be aware that CircleCI interprets the time as UTC, so you may need to adjust it for your timezone. https://crontab.guru/ is a nice site for learning about and experimenting with cron entries.
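Applied to the workflow above, the trigger would then look something like this (weekly, Saturdays at 13:00 UTC; branch filters kept as in the question):
nightly:
  triggers:
    - schedule:
        cron: "0 13 * * 6"
        filters:
          branches:
            only:
              - master
              - beta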

chkconfig: 35 99 05? Explanation

35 is the set of runlevels (3 and 5) in which the script starts, but I don't understand the 99 and 05.
Could someone explain the chkconfig parameters? Runlevels go from 0 to 6, so how does this 99 parameter work when used in the script?
As the man page for chkconfig says,
Each service which should be manageable by chkconfig needs two or more
commented lines added to its init.d script. The first line tells
chkconfig what runlevels the service should be started in by default,
as well as the start and stop priority levels. If the service should
not, by default, be started in any runlevels, a - should be used in
place of the runlevels list. The second line contains a description
for the service, and may be extended across multiple lines with
backslash continuation.
For example, random.init has these three lines:
# chkconfig: 2345 20 80
# description: Saves and restores system entropy pool for \
# higher quality random number generation.
This says that the random script should be started in levels 2, 3, 4, and 5, that its
start priority should be 20, and that its stop priority should be 80.
Start and stop priorities determine the order the init scripts run in: within a runlevel, lower numbers run first, both for the S (start) links and for the K (stop) links, and the stop priority is conventionally chosen so that services started early are stopped late. So chkconfig: 35 99 05 means: start in runlevels 3 and 5, start priority 99 (one of the last services started), stop priority 05 (one of the first stopped).
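As a concrete illustration, a hypothetical service using the values from the question ("myservice" is a made-up name) would carry this header and be registered like so:
# In /etc/init.d/myservice:
#   chkconfig: 35 99 05
#   description: Example service managed by chkconfig.

chkconfig --add myservice    # creates the S/K symlinks from the header
chkconfig --list myservice   # e.g. myservice  0:off 1:off 2:off 3:on 4:off 5:on 6:off
ls /etc/rc3.d/*myservice*    # S99myservice here; K05myservice in the runlevels where it is off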

Heroku APScheduler: timed job OK but not scheduled job

I am following this to schedule my Django cron job on Heroku.
Procfile:
web: gunicorn tango.wsgi --log-file -
clock: python createStatistics.py
createStatistics.py:
from apscheduler.schedulers.blocking import BlockingScheduler

sched = BlockingScheduler()

@sched.scheduled_job('interval', minutes=1)
def timed_job():
    print('This job is run every minute.')

@sched.scheduled_job('cron', day=14, hour=15, minute=37)
def scheduled_job():
    print('This job is run on day 14 at minute 37, 3pm.')

sched.start()
The timed_job runs OK; however, the scheduled_job has no effect. Do I need to set up any time zone information for APScheduler (I have TIME_ZONE set in settings.py)? If so, how? Or did I miss anything?
Specific to Heroku, for reasons I have not been able to figure out yet, it seems that you need to specify the optional id field on cron jobs to make them work. So the cron job definition would now look like this:
@sched.scheduled_job('cron', id="job_1", day=14, hour=15, minute=37)
def scheduled_job():
    print('This job is run on day 14 at minute 37, 3pm.')
You must specify the timezone in every job, otherwise Heroku will run it in the UTC timezone.
@sched.scheduled_job('cron', day=14, hour=15, minute=37, timezone=YOUR_TIME_ZONE)
def scheduled_job():
    print('This job is run on day 14 at minute 37, 3pm.')
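Putting both suggestions together, a minimal sketch of createStatistics.py could look like this ('Europe/Paris' is only a placeholder; use the same value as TIME_ZONE in your settings.py):
from apscheduler.schedulers.blocking import BlockingScheduler

# Placeholder timezone; replace with your Django TIME_ZONE value.
TIME_ZONE = 'Europe/Paris'

sched = BlockingScheduler(timezone=TIME_ZONE)

@sched.scheduled_job('interval', id='timed_job', minutes=1)
def timed_job():
    print('This job is run every minute.')

@sched.scheduled_job('cron', id='scheduled_job', day=14, hour=15, minute=37,
                     timezone=TIME_ZONE)
def scheduled_job():
    print('This job is run on day 14 at minute 37, 3pm.')

sched.start()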
