I was reading through the Galvin chapters on processes and threads.
Looking at processes: multiple processes are scheduled by the CPU scheduler (the short-term scheduler). After this comes the concept that a thread is a path of execution and that a process can contain multiple threads.
Now I thought of a scenario. Suppose the CPU scheduler schedules a process for execution using the round-robin algorithm, and suppose the scheduled process has 50 threads. In this scenario, how are the threads within the same process scheduled, and how does the context switch between threads and between processes happen?
Can someone please explain the entire scenario to me in detail? I will be very thankful.
A process is a program in execution, and it is the job of the programmer to decide the number of threads in that process and how they are scheduled; it depends on the sequence in which he would like the program to run.
So as soon as the process is in the running state, it runs the thread that was scheduled to run by the programmer.
Even in the case of threads running concurrently, it is the programmer who decides which threads can run concurrently and which cannot. I hope this clears your doubt.
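To make the "one process, many threads" idea concrete, here is a minimal Ruby sketch; the thread count and the squaring work are made up for illustration, and the order in which the threads actually get CPU time is decided by the scheduler, not by the order of creation:

```ruby
# One process, several threads: each thread is a separate path of
# execution through the same address space. All threads share the
# process's memory (here, the same Queue).
results = Queue.new

threads = 5.times.map do |i|
  Thread.new do
    results << i * i   # each thread does its own bit of work
  end
end

threads.each(&:join)   # wait for every thread to finish

squares = Array.new(results.size) { results.pop }.sort
puts squares.inspect   # => [0, 1, 4, 9, 16]
```

All five results arrive regardless of the interleaving; only the completion order varies from run to run.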
What I want to do is as follows:
In a Spring Boot application,
Schedule tasks (functions, or a method of a class) with cron expressions (cron expressions can be different for each task).
When it's time to run a task, run it, concurrently with other tasks if necessary (start times are the same, running periods overlap, etc.), and without any limitation on the concurrency.
The tasks can take several minutes.
The number of tasks (and their options) and the cron expressions cannot be determined at development time. They are end-user configurable.
The scheduler must satisfy the following requirements.
It must not have a wait queue. If a scheduled time arrives, the task must be executed immediately (don't worry about the number of threads).
When the tasks are not running, the number of idle threads should be minimal - or the number should be controllable.
I've looked at ThreadPoolTaskScheduler, but it seems that it fails to satisfy the above requirements.
Thank you in advance.
I'm writing a scheduler in rufus where the scheduled tasks will overlap. This is expected behavior, but I was curious how rufus handles the overlap. Will it overlap up to n threads and then block from there? Or does it continue to overlap without regard for how many concurrent tasks run at a time?
Ideally I would like to take advantage of rufus's concurrency and not have to manage my own pool of threads. I would like to block once I've reached the max pool count.
scheduler = Rufus::Scheduler.new
# Syncs one tenant in every call. Overlapping calls will allow for multiple
# syncs to occur until all threads are expended, then blocks until a thread is available.
scheduler.every '30s', SingleTenantSyncHandler
Edit
I see from the README that rufus does use thread pools in version 3.x.
You can set the max thread count like:
scheduler = Rufus::Scheduler.new(:max_work_threads => 77)
I assume this answers my question, but I would still like confirmation from others.
Yes, I confirm, https://github.com/jmettraux/rufus-scheduler/#max_work_threads answers your question. Note that this thread pool is shared among all scheduled jobs in the rufus-scheduler instance.
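For intuition about what a bounded work-thread pool does, here is a stdlib-only Ruby sketch of the same behavior: jobs run concurrently up to the pool size, and further jobs wait for a free thread. This illustrates the blocking semantics only, not rufus-scheduler's internals; the pool size, job count, and sleep duration are made up:

```ruby
POOL_SIZE = 3
jobs  = Queue.new
mutex = Mutex.new
active = 0   # jobs currently running
peak   = 0   # highest concurrency observed

10.times { |i| jobs << i }
POOL_SIZE.times { jobs << :stop }   # one shutdown marker per worker

workers = POOL_SIZE.times.map do
  Thread.new do
    loop do
      job = jobs.pop                # blocks when the queue is empty
      break if job == :stop
      mutex.synchronize { active += 1; peak = [peak, active].max }
      sleep 0.01                    # simulate doing the work
      mutex.synchronize { active -= 1 }
    end
  end
end

workers.each(&:join)
puts peak   # never exceeds POOL_SIZE, however many jobs are queued
```

With only POOL_SIZE worker threads pulling from the queue, the eleventh job can never run until one of the first ten finishes, which is the blocking behavior described above.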
I want to start several Linux kernel threads using kthread_create (not kthread_run), but in my driver there is some probability that some of the threads will never be woken up with wake_up_process.
Is it correct to create all the threads with kthread_create and never wake them up?
I think some threads may get stuck in TASK_UNINTERRUPTIBLE.
The problem is that I can't wake up a thread before the data for that thread is ready. If I do, the thread will try to parse unavailable data. And sometimes there will not be data for all of the threads.
Also, I can't create a thread at the moment its data becomes available, because starting a thread takes too long for my requirements.
Each job I have works in short bursts, then sleeps for about an hour, then works again, and so on until it is done. Some jobs may take about 10 hours to complete, and there is nothing I can do about it.
What bothers me is that while a job is sleeping, its Resque worker stays busy; so if I have 4 workers and 5 jobs, the last job may have to wait 10 hours until it can be processed, which is grossly suboptimal, since it could run while any other worker's job is sleeping. Is there any way to make a Resque worker process another job while its current job is sleeping?
Currently I have a worker similar to this:
class ImportSongs
  def self.perform(api_token, songs)
    api = API.new(api_token)
    songs.each_with_index do |song, i|
      # make the current worker proceed with another job while this one sleeps
      sleep 60 * 60 if i != 0 && i % 100 == 0
      api.import_song(song)
    end
  end
end
It looks like the problem you're trying to solve is API rate limiting with batch processing of the import process.
You should have one job that runs as soon as it's enqueued to enumerate all the songs to be imported. You can then break those down into groups of 100 (or whatever size you have to limit it to) and schedule a deferred job using resque-scheduler in one hour intervals.
However, if you have a hard API rate limit and you execute several of these distributed imports concurrently, you may not be able to control how much API traffic is going at once. If you have that strict a rate limit, you may want to build a specialized process as a single point of control that enforces the rate limit with its own work queue.
With resque-scheduler, you'll be able to repeat discrete jobs at scheduled or delayed times as an alternative to a single, long running job that loops with sleep statements.
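The batch-and-defer idea can be sketched in plain Ruby. `Resque.enqueue_in(seconds, klass, *args)` is provided by resque-scheduler; `ImportSongBatch` is a hypothetical job class whose `perform(api_token, songs)` would import one batch, and the batch size and one-hour interval simply mirror the numbers in the question:

```ruby
BATCH_SIZE    = 100
INTERVAL_SECS = 60 * 60   # one hour between batches

# Split the song list into batches and pair each batch with the
# delay at which it should run.
def batches_with_delays(songs)
  songs.each_slice(BATCH_SIZE).with_index.map do |batch, i|
    [i * INTERVAL_SECS, batch]
  end
end

# Enqueue each batch as its own deferred job instead of sleeping
# inside one long-running worker. ImportSongBatch is a hypothetical
# job class; Resque.enqueue_in comes from resque-scheduler.
def schedule_import(api_token, songs)
  batches_with_delays(songs).each do |delay, batch|
    Resque.enqueue_in(delay, ImportSongBatch, api_token, batch)
  end
end

delays = batches_with_delays((1..250).to_a).map(&:first)
puts delays.inspect   # => [0, 3600, 7200]
```

Each batch occupies a worker only while it is actually importing, so the worker is free for other jobs during the hour-long gaps.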
I am creating a kernel module for Linux. I was wondering: how can I stop a process from being scheduled for a specified time? Is there a function in sched.c that can do this? Is it possible to add a specific task_struct to a wait queue for a certain defined period of time, or to use something like schedule_timeout for a specific process?
Thanks
Delaying process scheduling for a time is equivalent to letting it sleep. In drivers, this is often done with msleep() (common in work tasks), or, for processes, by placing the process into interruptible sleep with
set_current_state(TASK_INTERRUPTIBLE);
schedule_timeout(x * HZ);
The kernel will not schedule the task again until the timeout has expired or a signal is received.