I would like to know how the task scheduling of the OpenMP task queue is performed.
Here I read that, by default, OpenMP imposes a breadth-first scheduler, and that they ran some FIFO vs. LIFO tests, but they don't say anything about the default. Since I only have a single thread creating multiple tasks (I use the single directive), I don't think comparing their breadth-first vs. work-first scheduling makes much sense in my case.
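For concreteness, the pattern I mean is roughly the following (a minimal sketch; the printf stands in for the real work):

    #include <cstdio>
    #include <omp.h>

    int main() {
        #pragma omp parallel
        #pragma omp single          // one thread creates all the tasks
        {
            for (int i = 0; i < 8; ++i) {
                #pragma omp task firstprivate(i)
                std::printf("task %d run by thread %d\n",
                            i, omp_get_thread_num());
            }
        }                           // implicit barrier: all tasks finish here
        return 0;
    }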
So, is the default FIFO or LIFO? And is it possible to change it?
Thanks
I would like to know how the task scheduling of the OpenMP task queue is performed
Abstract version
Task scheduling in OpenMP is implementation defined, even though the standard imposes some restrictions on the algorithm. Should you need to manipulate the scheduler, the place to search is the particular OpenMP implementation you are targeting.
The long tale
The basic concept upon which all the task scheduling machinery is defined is that of task scheduling point (see section 2.11.3):
Whenever a thread reaches a task scheduling point, the implementation
may cause it to perform a task switch, beginning or resuming execution
of a different task bound to the current team.
In the notes below it, the standard gives a broader explanation of what the expected behavior should be (emphasis mine):
Task scheduling points dynamically divide task regions into parts.
Each part is executed uninterrupted from start to end. Different parts
of the same task region are executed in the order in which they are
encountered. In the absence of task synchronization constructs, the
order in which a thread executes parts of different schedulable tasks
is unspecified.
A correct program must behave correctly and consistently with all
conceivable scheduling sequences that are compatible with the rules
above
...
The standard also specifies where task scheduling points are implied:
the point immediately following the generation of an explicit task
after the point of completion of a task region
in a taskyield region
in a taskwait region
at the end of a taskgroup region
in an implicit and explicit barrier region
the point immediately following the generation of a target region
at the beginning and end of a target data region
in a target update region
and what a thread may do when it meets one of them:
begin execution of a tied task bound to the current team
resume any suspended task region, bound to the current team, to which it is tied
begin execution of an untied task bound to the current team
resume any suspended untied task region bound to the current team.
It says explicitly, though:
If more than one of the above choices is available, it is unspecified
as to which will be chosen.
leaving space for different conforming behaviors. It only imposes four constraints:
An included task is executed immediately after generation of the task.
Scheduling of new tied tasks is constrained by the set of task regions that are currently tied to the thread, and that are not
suspended in a barrier region. If this set is empty, any new tied task
may be scheduled. Otherwise, a new tied task may be scheduled only if
it is a descendent task of every task in the set.
A dependent task shall not be scheduled until its task dependences are fulfilled.
When an explicit task is generated by a construct containing an if clause for which the expression evaluated to false, and the previous
constraints are already met, the task is executed immediately after
generation of the task.
that every scheduling algorithm must fulfill to be considered conforming.
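To see several of the scheduling points listed above in one place, here is a toy program with the points marked in comments; which pending task the runtime picks at each point is exactly the implementation-defined part, so the observed order may differ between conforming implementations (a sketch, not tied to any particular runtime):

    #include <cstdio>
    #include <omp.h>

    int main() {
        #pragma omp parallel
        #pragma omp single
        {
            #pragma omp task        // scheduling point right after generation
            std::printf("child A\n");

            #pragma omp task if(0)  // if(false): undeferred, runs immediately
            std::printf("child B (undeferred)\n");

            #pragma omp taskyield   // scheduling point: may switch to child A

            #pragma omp taskwait    // scheduling point: waits for the children
            std::printf("children done\n");
        }                           // implicit barrier: one more scheduling point
        return 0;
    }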
Related
I don't quite understand the spark.task.cpus parameter. It seems to me that a "task" corresponds to a "thread" or a "process", if you will, within the executor. Suppose that I set spark.task.cpus to 2.
How can a thread utilize two CPUs simultaneously? Couldn't it require locks and cause synchronization problems?
I'm looking at launchTask() function in deploy/executor/Executor.scala, and I don't see any notion of "number of cpus per task" here. So where/how does Spark eventually allocate more than one cpu to a task in the standalone mode?
To the best of my knowledge, spark.task.cpus controls the parallelism of tasks in your cluster in cases where some particular tasks are known to have their own internal (custom) parallelism.
In more detail:
We know that spark.cores.max defines how many threads (aka cores) your application needs. If you leave spark.task.cpus = 1, then you will have spark.cores.max concurrent Spark tasks running at the same time.
You will only want to change spark.task.cpus if you know that your tasks are themselves parallelized (maybe each of your tasks spawns two threads, interacts with external tools, etc.). By setting spark.task.cpus accordingly, you become a good "citizen". Now, if you have spark.cores.max=10 and spark.task.cpus=2, Spark will only create 10/2=5 concurrent tasks. Given that your tasks need (say) 2 threads internally, the total number of executing threads will never be more than 10. This means that you never go above your initial contract (defined by spark.cores.max).
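To make the arithmetic concrete, here is the slot computation as a tiny sketch (the variable names are mine, not Spark's):

    #include <cstdio>

    int main() {
        int sparkCoresMax = 10;  // total threads the application may use
        int sparkTaskCpus = 2;   // threads each task claims internally
        // Spark runs at most this many tasks at once, so the total
        // number of threads never exceeds sparkCoresMax:
        int concurrentTasks = sparkCoresMax / sparkTaskCpus;  // 10 / 2 = 5
        std::printf("concurrent tasks: %d\n", concurrentTasks);
        return 0;
    }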
I have studied the topic of job schedulers, where there are different types (long-term, medium-term, and short-term schedulers), and I ended up confused.
So my question is: "Among these three schedulers, which scheduler type makes use of the scheduling algorithms (like FCFS, SJF, etc.)?"
My understanding so far is: "The scheduling algorithm takes a job from the ready queue (which holds the list of jobs in the ready state, waiting to be executed) and keeps the CPU as busy as possible."
And the long-term scheduler is the one which decides which jobs are admitted into the ready queue.
So, is the long-term scheduler the one which makes use of those scheduling algorithms?
I have also seen the link https://en.wikipedia.org/wiki/Scheduling_(computing), which says (excerpted from the wiki):
"Thus the short-term scheduler makes scheduling decisions much more frequently than the long-term or mid-term schedulers...."
So, will all three of these schedulers make use of the scheduling algorithms?
Finally, I got stuck at this point, confused about the difference between these types of schedulers.
Could someone briefly explain this, so that I can understand it?
Thanks in advance.
So, will all three of these schedulers make use of the scheduling algorithms?
Basically, yes: all three of them apply scheduling algorithms, each at its own level. All of them have to make scheduling decisions at some point, since all of them are schedulers; it simply depends on which one is executing at a given instant (the short-term scheduler executes much more frequently than the others).
Wikipedia is right in mentioning that. I hope you got your answer in short.
Description:
As mentioned on the Process Scheduling page on tutorialspoint:
Schedulers are special system software which handles process scheduling in various ways. Their main task is to select the jobs to be submitted into the system and to decide which process to run.
Long-Term Scheduler ------> selects processes from the job pool and loads them into memory for execution.
Medium-Term Scheduler -----> handles swapping: it can remove a process from memory and re-introduce it later, so that its execution can be continued.
Short-Term Scheduler ------> selects, from the processes that are ready to execute, which one gets the CPU next.
The list below shows the function of each of the three types of schedulers (long-term, short-term, and medium-term) for each of three types of operating systems (batch, interactive, and real-time).

batch
long-term -----> job admission based on characteristics and resource needs
medium-term -----> usually none; jobs remain in storage until done
short-term -----> processes scheduled by priority; continue until they wait voluntarily, request service, or are terminated

interactive
long-term -----> sessions and processes normally accepted unless capacity is reached
medium-term -----> processes swapped when necessary
short-term -----> processes scheduled on a rotating basis; continue until service is requested, the time quantum expires, or they are pre-empted

real-time
long-term -----> processes either permanent or accepted at once
medium-term -----> processes never swapped
short-term -----> scheduling based on strict priority with immediate preemption; may time-share processes with equal priorities
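To tie this back to the question: the algorithms named there (FCFS, SJF, and so on) are what the short-term scheduler applies when it picks the next process from the ready queue. Below is a toy sketch of that pick in C++ (not any real kernel's code; the types and numbers are made up for illustration):

    #include <algorithm>
    #include <deque>

    struct Process { int pid; int burstTime; };

    // FCFS: the short-term scheduler simply takes the head of the ready queue.
    Process pickFCFS(std::deque<Process>& ready) {
        Process p = ready.front();
        ready.pop_front();
        return p;
    }

    // SJF: it scans the ready queue for the shortest CPU burst instead.
    Process pickSJF(std::deque<Process>& ready) {
        auto it = std::min_element(ready.begin(), ready.end(),
            [](const Process& a, const Process& b) {
                return a.burstTime < b.burstTime;
            });
        Process p = *it;
        ready.erase(it);
        return p;
    }

    int main() {
        std::deque<Process> ready = {{1, 8}, {2, 3}, {3, 5}};
        Process next = pickSJF(ready);  // picks pid 2 (burst 3)
        (void)next;
        return 0;
    }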
For the sake of argument, I'm trying to define an algorithm that creates
a pool of tasks (each task has an individual time-slice in which to operate)
and manages them without the system clock.
The problem I've encountered is that, with each approach I tried,
the tasks with the higher time-slices were starved.
What I've tried to do is create a tasks vector that contains a pair of task/(int) "time".
I've sorted the vector with the lowest "time" first, then iterated over it and executed each task with a time of zero (0). While iterating through the entire vector, I've decreased the "time" of each task.
Is there a better approach to this kind of problem? With my approach, starvation will definitely occur.
Managing tasks may not need a system clock at all.
You only need to figure out a way to determine the priority of each task, then run the tasks following their priority.
You may want to pause a task to execute another task, in which case you will need to assign a new priority to the paused task. This feature (multitasking) needs an interruption based on an event (usually clock time, but you can use any other event, like temperature, a monkey pushing a button, or another process sending a signal).
You say your problem is that tasks with the higher time-slices are starving.
Since you decrease the "time" of each task as you iterate, and assuming "time" never goes negative, higher time-slice tasks will eventually reach 0 just like every other task.
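If you want to keep the clock-free design but still guard against the starvation described in the question, a common trick is aging: besides decrementing the remaining "time", track how often each task has been passed over and force long-waiting tasks to run. A rough sketch of the idea (the Task type, the threshold, and the execute() placeholder are all made up for illustration):

    #include <vector>

    struct Task {
        int id;
        int time;    // remaining "time" units before the task may run
        int waited;  // how many scheduling rounds it has been passed over
    };

    // Run every task whose time reached 0; age the rest so that
    // high-time-slice tasks are never put off forever.
    void scheduleRound(std::vector<Task>& tasks) {
        for (Task& t : tasks) {
            if (t.time <= 0) {
                // execute(t);    // placeholder for the real work
                t.waited = 0;
            } else {
                --t.time;
                ++t.waited;
                if (t.waited > 10)  // aging threshold, arbitrary here
                    t.time = 0;     // force the task to run next round
            }
        }
    }

    int main() {
        std::vector<Task> tasks = {{1, 0, 0}, {2, 5, 0}, {3, 20, 0}};
        for (int round = 0; round < 30; ++round)
            scheduleRound(tasks);
        return 0;
    }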
We have a list of tasks of different lengths, a number of CPU cores, and a context-switch time.
We want to find the scheduling of tasks among the cores that maximizes processor utilization.
How can we find it?
Is it best to pick the biggest available tasks from the list and hand them one by one to whichever cores are ready, or do we have to try all orderings to find out which one is best?
I should add that all cores are ready at time unit 0 and the tasks are supposed to run concurrently.
The idea here is that there is no silver bullet; you must consider what types of tasks are being executed and try to schedule them as nicely as possible.
CPU-bound tasks do not use much communication (I/O), and thus need to be executed continuously, interrupted only when necessary, according to the policy being used;
I/O-bound tasks may repeatedly be put aside during execution, allowing other processes to work, since they sleep for long periods while waiting for data to be brought into primary memory;
interactive tasks must be executed continuously as well, but need not run without interruption, since they generate interruptions themselves while waiting for user input; they do need a high priority, though, so that the user does not notice delays in the execution.
Considering this, and the context-switch costs, you must evaluate what types of tasks you have, and then choose one or more policies for your scheduler.
Edit:
I thought this was a simple conceptual question. Given that you have to implement a solution, you must analyze the requirements.
Since you have the lengths of the tasks and the context-switch times, and you have to keep the cores busy, this becomes an optimization problem: you want as few cores as possible to sit idle toward the end of the run, while keeping the number of context switches to a minimum, so that the overall execution time does not grow too much.
As pointed out by svick, this sounds like the partition problem, which is NP-complete: you need to divide a sequence of numbers into a given number of lists so that the sums of the lists are all equal to each other.
In your problem you have a relaxation of that objective: you no longer need all the cores to execute for the same amount of time, but you want the difference between any two cores' execution times to be as small as possible.
In the reference given by svick, you can see a dynamic-programming approach that you may be able to map onto your problem.
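If an approximation is acceptable, the greedy idea from the question (biggest task first, to whichever core is free soonest) is the classic LPT heuristic; here is a sketch that ignores context-switch costs (the names and the example numbers are mine):

    #include <algorithm>
    #include <functional>
    #include <queue>
    #include <utility>
    #include <vector>

    // Longest Processing Time first: sort tasks in descending order and
    // always hand the next task to the core that finishes earliest.
    std::vector<long long> lptSchedule(std::vector<int> tasks, int cores) {
        std::sort(tasks.begin(), tasks.end(), std::greater<int>());
        // min-heap of (current load, core index)
        std::priority_queue<std::pair<long long, int>,
                            std::vector<std::pair<long long, int>>,
                            std::greater<>> load;
        for (int c = 0; c < cores; ++c)
            load.push({0, c});
        std::vector<long long> perCore(cores, 0);
        for (int t : tasks) {
            auto [l, c] = load.top();  // least-loaded core so far
            load.pop();
            perCore[c] = l + t;
            load.push({perCore[c], c});
        }
        return perCore;  // finish time of each core; the max is the makespan
    }

    int main() {
        // LPT is only an approximation: here it yields loads 17 and 13,
        // while the optimal split {8,7} / {6,5,4} would give 15 and 15.
        std::vector<long long> finish = lptSchedule({8, 7, 6, 5, 4}, 2);
        (void)finish;
        return 0;
    }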
Based on MSDN, the Windows OS schedules threads based on base priority, and uses dynamic priority as a boost:
The system treats all threads with the same priority as equal. The
system assigns time slices in a round-robin fashion to all threads
with the highest priority. If none of these threads are ready to run,
the system assigns time slices in a round-robin fashion to all threads
with the next highest priority. If a higher-priority thread becomes
available to run, the system ceases to execute the lower-priority
thread (without allowing it to finish using its time slice), and
assigns a full time slice to the higher-priority thread.
From the above quote
The system treats all threads with the same priority as equal
Does it mean that the system schedules threads based on their dynamic priority, and that the base priority is used just as a lower limit for dynamic priority changes?
Thank you
Based on MSDN, the Windows OS schedules threads based on base priority, and uses dynamic priority as a boost

Well, you follow that with a nice text snippet that has NO SIGN OF A DYNAMIC BOOST PRIORITY.
More information about that is in the documentation - for example http://msdn.microsoft.com/en-us/library/windows/desktop/ms684828(v=vs.85).aspx is a good start.
In simple words: the scheduler schedules threads based on their current priority, and the boost changes the current priority, so boosted threads get scheduled differently.
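If you want to see the two notions in code, the Win32 API exposes both: SetThreadPriority sets the base priority, and SetThreadPriorityBoost disables the dynamic boost for a thread, after which the scheduler uses the base priority alone. A minimal sketch, with error handling omitted:

    #include <windows.h>
    #include <cstdio>

    int main() {
        HANDLE me = GetCurrentThread();
        // Base priority: the value you set explicitly.
        SetThreadPriority(me, THREAD_PRIORITY_ABOVE_NORMAL);
        // Dynamic boost: the temporary bump the system applies on top of
        // the base priority; passing TRUE disables it for this thread.
        SetThreadPriorityBoost(me, TRUE);
        std::printf("base priority: %d\n", GetThreadPriority(me));
        return 0;
    }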