Delaying a specified kernel task - linux-kernel

I am trying to write a kernel module which will delay a certain kernel task by some n seconds if it meets a certain condition.
I have found which tasks meet my condition just fine, and have the PID of said task stored in a variable. What I need is a method that takes a task's PID and a time (presumably in jiffies) and reschedules the task's execution for n seconds in the future.
I have looked in sched.c, but to no avail. I know about the likes of schedule_timeout(), however I don't want to time out the current task. I am also aware of resched_task() from sched.c, however it only takes the task to reschedule and I can't pass a timeout value.
Note that I don't want to edit any Linux system files here, just my kernel module.

Related

Difference between Async, forks, serial in ansible

I'm having trouble understanding the difference between async, forks, and serial in Ansible.
I feel they almost do the same job. Found the following from Google:
serial: decides the number of nodes processed in each task in a single run.
forks: the maximum number of simultaneous connections Ansible makes on each task.
async: runs multiple tasks in a playbook concurrently.
Serial sets a number, a percentage, or a list of numbers of hosts you want to manage at a time.
Async triggers Ansible to run the task in the background, to be checked on (or followed up) later; its value is the maximum time that Ansible will wait for that particular job or task to complete before it eventually times out.
Ansible works by spinning off forks of itself and talking to many remote systems independently. The forks parameter controls how many hosts are configured by Ansible in parallel. By default, the forks parameter in Ansible is a very conservative 5. This means that only 5 hosts will be configured at the same time, and it's expected that every user will change this parameter to something more suitable for their environment. A good value might be 25 or even 100.
SERIAL: decides the number of nodes processed in each task in a single run.
Use: when you need to roll out changes in batches (rolling changes).
FORKS: the maximum number of simultaneous connections Ansible makes on each task.
Use: when you need to control how many nodes are affected simultaneously.
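Putting the three together, here is a sketch of where each knob lives (the host group, command, and numbers below are made up for illustration):

```yaml
# forks is a global setting, in ansible.cfg or via `ansible-playbook -f`:
#   [defaults]
#   forks = 25

- hosts: webservers          # hypothetical group
  serial: 2                  # roll out to 2 hosts at a time (batches)
  tasks:
    - name: Long-running job in the background
      command: /usr/local/bin/rebuild-cache   # hypothetical command
      async: 600             # allow up to 600s before it times out
      poll: 10               # check on the job every 10s
```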

what does asyncio.get_event_loop().run_until_complete(asyncio.sleep(1)) mean

Does the code below mean anything? If it does, in what scenario would you use it? What difference does the sleep time make?
...
while True:
    asyncio.get_event_loop().run_until_complete(asyncio.sleep(1))
...
Copied from a comment, which I think answered my question:
run_until_complete() runs not only the given task, but also any other runnable tasks, until the given task completes. The above loop could have a meaning if some tasks were scheduled using asyncio.get_event_loop().create_task(xxx) prior to the loop's execution. In that case the while loop as shown might be designed not to run sleep() (which would indeed be useless), but to give the event loop a time slot of 1s to run, and then (presumably in the loop) do some non-asyncio checks.
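To see the comment's point in action, here is a minimal sketch (the background_worker coroutine and all timings are made up): a task created before the loop makes progress during each run_until_complete(asyncio.sleep(...)) time slot.

```python
import asyncio

results = []

async def background_worker():
    # Scheduled before the run_until_complete() loop below; it only makes
    # progress while the event loop is given time to run.
    for i in range(3):
        await asyncio.sleep(0.1)
        results.append(i)

loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
loop.create_task(background_worker())

# Each call gives the event loop a 0.25s time slot; the sleep itself is
# "useless", but background_worker keeps running inside the slot.
for _ in range(2):
    loop.run_until_complete(asyncio.sleep(0.25))

loop.close()
print(results)
```

After the two 0.25s slots (0.5s total) the worker's three 0.1s steps have all run, even though the loop only ever "waited" on sleeps.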

Bash script - Maintain multiple instances running

How can I ensure that multiple instances of a certain program are always running?
Let's say that I want to make sure that 4 instances of a certain program are always running.
If one instance is killed, a new one should start.
If 5 instances are running, one should be killed.
This is not really a shell question, because the approach is the same, whichever shell you are using.
I think the cleanest solution is to have a "watchdog", which checks the running processes (using ps) and, if necessary, starts a new one or kills an unnecessary one.
One way - which I have used in a similar situation - is to write a cron job which regularly (say, every 5 minutes) starts the watchdog and lets it do its work.
If such an interval is too long for your case (i.e. if you need to check more often than once a minute), you could have the watchdog run continuously, in a loop. You will still need a cron job that in turn checks on the watchdog from time to time - just in case the watchdog dies. In that case you might consider running it as a daemon.
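As a sketch of that watchdog's decision logic (the program name myprog and the target count of 4 are assumptions), something along these lines could run from cron or in a loop:

```shell
#!/bin/sh
# Watchdog sketch: keep exactly TARGET instances of NAME running.
TARGET=4
NAME="myprog"    # hypothetical program name

# Decide what to do for a given instance count.
adjust() {
    running=$1
    if [ "$running" -lt "$TARGET" ]; then
        echo "start $((TARGET - running))"
    elif [ "$running" -gt "$TARGET" ]; then
        echo "kill $((running - TARGET))"
    else
        echo "ok"
    fi
}

# A real run would count instances and act on the decision, e.g.:
#   running=$(pgrep -cx "$NAME" || true)
#   case $(adjust "$running") in
#       start*) "$NAME" & ;;                                 # repeat as needed
#       kill*)  pgrep -x "$NAME" | head -n 1 | xargs -r kill ;;
#   esac
```

The commented pgrep lines show where the real process counting and start/kill actions would go; only the pure decision function is exercised here.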

Repeated tasks - spawn new processes or run continuously?

We have about 10 different Python scripts that download data from the web, read data from a database and write data back to that database. They do so repeatedly every 10 seconds (or 10 seconds after the last task has completed).
The question is, what is the best approach at running these tasks? I can think of a few ways:
a while True loop that runs the task, then sleeps for the interval. It could be guarded by a watchdog like supervisord, making sure it is always up.
having the script execute the task just once, and having another process invoke it externally once every 10 seconds.
having the script execute the task for, let's say, 1 hour (every 10 seconds for an hour), and having a watchdog make sure that the task runs again once the hour is over.
I would like to avoid long running processes that actually do something because I don't want to deal with memory problems etc over long periods of time.
Additional Information
The scripts are different because they each retrieve data from a different source, and query, calculate and insert different data into the database.
The tasks are performed every 10 seconds since the data being retrieved is real-time, and we need to not only keep updating it very frequently, but also keep all the historical data in the database.
There are a lot of resources being used by the scripts - MySQL connections, HTTP connections, Redis connections, etc. We have encountered issues with using the long-running approach before, specifically with MySQL connections (things like MySQL server has gone away, even though all connections had been closed). Hence the inclination toward having the scripts run in shorter periods of time.
What are some common approaches at this?
Unless your scripts somehow leak memory (quite unlikely), the approaches should all work out the same. So, for sheer simplicity (your programming/debugging time is much more expensive than a few milliseconds of the machine's time, even every 10 seconds!) I'd go for the single script that checks every 10 seconds.
OTOH, checking every 10 seconds sounds like busywork. Can't you set things up so that whatever you are monitoring tells you when there are changes? Or batch the records up so you can retrieve, say, a day's worth at a time?
If you are running on Linux, cron has a granularity of one minute. We have processes we run constantly. Rather than watching them, each script takes a lock (semaphore) that gets released when the program finishes, normally or not. That way, if a run takes long and cron calls the script again, the new copy exits when it can't get the lock. You can call it as often as you need to without it stepping on a possibly still-running copy.
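The lock idea in that last answer can be sketched in Python with flock(2) (the lock-file handling and task body here are made up; in practice you'd use one fixed path per task): a second copy started by cron simply exits when it cannot take the lock.

```python
import fcntl
import os
import tempfile

# In production this would be a fixed, per-task path such as
# /var/lock/mytask.lock; a temp file keeps the demo self-contained.
fd, LOCK_PATH = tempfile.mkstemp()
os.close(fd)

def run_exclusive(task):
    """Run task() only if no other copy holds the lock; report whether it ran."""
    with open(LOCK_PATH, "w") as lock_file:
        try:
            # Non-blocking: fail immediately if another copy holds the lock.
            fcntl.flock(lock_file, fcntl.LOCK_EX | fcntl.LOCK_NB)
        except BlockingIOError:
            return False      # a previous run is still going; exit quietly
        task()
        return True           # lock is released when the file is closed

results = []

def task_that_overlaps():
    # Simulates cron firing again while the task is still running.
    results.append(run_exclusive(lambda: None))

first = run_exclusive(task_that_overlaps)
os.unlink(LOCK_PATH)
```

The outer call gets the lock and runs; the inner call, made while the lock is held, returns False, which is exactly the "copy exits when it can't get the lock" behavior described above.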

preventing process being scheduled

I am creating a kernel module for Linux. I was wondering, how can I stop a process from being scheduled for a specified time? Is there a function in sched.c that can do this? Is it possible to add a specific task_struct to a wait queue for a certain defined period of time, or to use something like schedule_timeout for a specific process?
Thanks
Delaying process scheduling for a time is equivalent to letting the process sleep. In drivers this is often done with msleep() (common in work tasks), or, for processes, by placing the process into interruptible sleep mode with
set_current_state(TASK_INTERRUPTIBLE);
schedule_timeout(x * HZ);
The kernel will not schedule the task again until the timeout has expired or a signal is received.
