Scheduling algorithm

I have a scheduling problem where new jobs (sets of tasks whose execution is sequentially connected) arrive every few seconds or so.
Each job requires some resources to be allocated at known intervals.
For example:
Job j1 is a set of tasks for which we reserve resources {r1, r2, r3}
on a known scheduling pattern:
r1: [t0 .. t1 = t0 + td1],
r2: [t2 = t1 + i1 .. t3 = t2 + td2]
t0 being the start time of execution,
td1 being the length of the resource allocation for r1,
t1 being the end time of the resource allocation for r1,
i1 being the length of the waiting period between r1 and r2, and so on.
In the example, a new job j2 is being scheduled right after j1 execution has started.
The earliest start time for j2 is t1.
A job may take several minutes to execute, most of which consists of waiting.
I have a scheduler that looks at the current reservation table, decides the earliest possible starting moment for a new job (with fixed allocation times and waiting periods), and makes the reservations accordingly.
(In reality, though, the waiting period doesn't need to be exactly fixed; it may vary within some percentage (maybe 5%), and there may be alternatives in resource usage: for example, if resource r3.1 is booked, r3.2 may be used instead to achieve the same thing.)
However, it has been suggested that the scheduler should be able to dynamically adjust all existing schedule allocations when a new job arrives, in order to maximize the total work done in a day, by exploiting the fact that the waiting times need not be exactly as given and that duplicate resources (r3.1/r3.2) allow some parallel execution. In that case I'd be looking at a completely different scheduling scheme than my current start-as-soon-as-possible approach.
What scheduling scheme would you call that then?
Any suggestions on approaching the (new) problem?
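
To make the current approach concrete, here is a minimal sketch of the start-as-soon-as-possible check against a reservation table; the data structures, resource names, and intervals are illustrative, not the real system:

```python
# Minimal sketch (illustrative data structures, not the real system).
# reservations: {resource: [(start, end), ...]} of existing bookings.
# pattern: [(resource, offset, duration)] relative to the job's start time t0.

def overlaps(a_start, a_end, b_start, b_end):
    """True if the half-open intervals [a_start, a_end) and [b_start, b_end) overlap."""
    return a_start < b_end and b_start < a_end

def is_free(reservations, resource, start, end):
    return all(not overlaps(start, end, s, e) for (s, e) in reservations.get(resource, []))

def earliest_start(reservations, pattern, not_before=0):
    """Return the earliest t0 >= not_before at which every slot of the pattern is free.

    Only "interesting" candidates are tried: not_before itself, and every start
    obtained by aligning a pattern slot with the end of an existing reservation.
    """
    offsets = [off for (_, off, _) in pattern]
    ends = {e for slots in reservations.values() for (_, e) in slots}
    candidates = sorted({not_before} | {e - off for e in ends for off in offsets if e - off >= not_before})
    for t0 in candidates:
        if all(is_free(reservations, r, t0 + off, t0 + off + dur) for (r, off, dur) in pattern):
            return t0
    return None

# j1 already holds r1 for [0, 10) and r2 for [15, 25); j2 uses the same pattern.
reservations = {"r1": [(0, 10)], "r2": [(15, 25)]}
pattern = [("r1", 0, 10), ("r2", 15, 10)]
print(earliest_start(reservations, pattern))  # -> 10
```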

As for your question regarding "alternatives to resource usage":
The pattern most commonly implemented to tackle that sort of problem is the Object Pool pattern.
The most widely known example of this is probably the ThreadPool.
I suggest you implement a ResourcePool class with an int GetResource(ResourceType type, int durationInSeconds) method.
The return value indicates when the next resource of the given ResourceType will be available.
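
Something along these lines, as a Python sketch of the suggested interface (the per-type instance counts and the not_before parameter are illustrative additions):

```python
import heapq

class ResourcePool:
    """Per resource type, keep a min-heap of the times at which each instance becomes free."""

    def __init__(self, instances_per_type):
        # e.g. {"r3": 2} means two interchangeable instances (r3.1 / r3.2)
        self.free_at = {rtype: [0] * count for rtype, count in instances_per_type.items()}

    def get_resource(self, rtype, duration_in_seconds, not_before=0):
        """Book the instance of `rtype` that frees up first; return the reservation's start time."""
        heap = self.free_at[rtype]
        start = max(heapq.heappop(heap), not_before)
        heapq.heappush(heap, start + duration_in_seconds)
        return start

# Two interchangeable r3 instances, so two 30 s reservations can run in parallel.
pool = ResourcePool({"r1": 1, "r3": 2})
print(pool.get_resource("r3", 30))  # -> 0
print(pool.get_resource("r3", 30))  # -> 0 (second instance)
print(pool.get_resource("r3", 30))  # -> 30 (waits for the first instance to free up)
```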

You could be dealing with the RCPSP (Resource-Constrained Project Scheduling Problem). Solution techniques range from integer programming and constraint programming to various heuristics. The right technique depends on details such as the planning horizon, how tasks/jobs use and share resources, how quickly you need a solution schedule, etc.
see:
https://developers.google.com/optimization/scheduling/job_shop
http://www.laas.fr/files/ROC/2014-Presentations/MILP-RCPSP-PMS2014.pdf
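
To give a feel for the constraint programming route, here is a small sketch in the spirit of the OR-Tools job shop tutorial linked above, using CP-SAT interval variables. The job data is made up, and a real RCPSP-style model would also add the waiting-time and alternative-resource constraints from the question:

```python
from ortools.sat.python import cp_model  # pip install ortools

# Illustrative data: (resource, duration) pairs per job; each resource handles one task at a time.
jobs = {
    "j1": [("r1", 10), ("r2", 10)],
    "j2": [("r1", 10), ("r2", 10)],
}
horizon = sum(d for tasks in jobs.values() for (_, d) in tasks)

model = cp_model.CpModel()
intervals_per_resource = {}
ends = []
for job, tasks in jobs.items():
    prev_end = None
    for i, (res, dur) in enumerate(tasks):
        start = model.NewIntVar(0, horizon, f"{job}_t{i}_start")
        end = model.NewIntVar(0, horizon, f"{job}_t{i}_end")
        iv = model.NewIntervalVar(start, dur, end, f"{job}_t{i}")
        intervals_per_resource.setdefault(res, []).append(iv)
        if prev_end is not None:
            model.Add(start >= prev_end)  # tasks within a job run in sequence
        prev_end = end
    ends.append(prev_end)

for ivs in intervals_per_resource.values():
    model.AddNoOverlap(ivs)  # a resource serves one task at a time

makespan = model.NewIntVar(0, horizon, "makespan")
model.AddMaxEquality(makespan, ends)
model.Minimize(makespan)

solver = cp_model.CpSolver()
if solver.Solve(model) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    print("makespan:", solver.Value(makespan))
```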

Related

How to get the lowest cost when arranging movable jobs

It's a very hard dynamic programming question, and I want to share it so we can discuss it a little and work toward its solution:
You will deploy your new application to a cloud server and have to schedule your jobs in order to get the lowest cost. You don't need to care about the number of jobs running at the same time on the same server. Every job k is given by a release time sk, a deadline fk, and a duration dk with dk ≤ fk - sk. This job needs to be scheduled for an interval of dk consecutive minutes between time sk and fk. The server company charges per minute per server. You only need one virtual server, and you can save money by moving jobs around within [sk, fk] to maximize the amount of time without running any jobs or, in other words, to minimize the amount of time running one or more jobs. Use dynamic programming to solve the problem. Your algorithm should be polynomial in n, the number of jobs.
This is the problem of minimizing busy time.
See Theorem 17 of this paper for a description of a polynomial-time algorithm:
Rohit Khandekar, Baruch Schieber, Hadas Shachnai, and Tami Tamir. Minimizing busy time in multiple machine real-time scheduling. In Proceedings of the 30th Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS), pages 169–180, 2010.
The key is:
To realize there are only certain interesting times that need to be considered (if you have a schedule, consider delaying each busy interval until you hit a deadline for one of the jobs being processed)
To consider when the longest-duration job is done. This splits the problem into two pieces, before and after, which can be solved independently in the usual dynamic programming fashion.
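
Not the paper's polynomial algorithm, but a tiny exponential brute force (assuming integer minute granularity and made-up jobs) that makes the objective, minimizing the total length of the union of busy intervals, concrete:

```python
from itertools import product

# Each job: (release s, deadline f, duration d) with d <= f - s, times in whole minutes.
jobs = [(0, 10, 3), (2, 6, 2), (8, 15, 4)]

def busy_time(starts):
    """Total length of the union of the intervals [start, start + d)."""
    intervals = sorted((st, st + d) for st, (_, _, d) in zip(starts, jobs))
    total, cur_start, cur_end = 0, None, None
    for s, e in intervals:
        if cur_end is None or s > cur_end:   # gap: close the current busy interval
            total += (cur_end - cur_start) if cur_end is not None else 0
            cur_start, cur_end = s, e
        else:                                # overlap: extend it
            cur_end = max(cur_end, e)
    return total + ((cur_end - cur_start) if cur_end is not None else 0)

# Enumerate every feasible integer start time for every job and keep the best combination.
choices = [range(s, f - d + 1) for (s, f, d) in jobs]
best = min(product(*choices), key=busy_time)
print(best, busy_time(best))
```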

Why is the Shortest Job First (SJF) algorithm not used instead of FCFS at the final level in Multilevel Feedback Scheduling?

In Multilevel Feedback Scheduling at the base level queue, the processes circulate in round robin fashion until they complete and leave the system. Processes in the base level queue can also be scheduled on a first come first served basis.
Why can't they be scheduled with the Shortest Job First (SJF) algorithm instead of First Come First Served (FCFS), which would seem to improve the average performance?
One simple reason:
The processes fall into the base-level queue after they fail to finish within the time quantum allotted to them in the higher-level queues. If you implement the SJF algorithm in the base-level queue, you may starve a process, because shorter jobs may keep arriving before a longer-running process ever gets the CPU.
The SJF algorithm gives more throughput only when processes differ a lot in their burst times; it is not always the case that it will perform better than FCFS. Take a look at this answer.
Since, in the Multilevel Feedback Scheduling algorithm, all the processes that are unable to complete execution within the defined time quantum of the first two queues are put into the last queue, which uses FCFS, it is very likely that they all have large CPU bursts and therefore won't differ much in their burst times. Hence, FCFS scheduling is preferred for the last queue.
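
A toy, non-preemptive simulation of the base-level queue (with a made-up workload) illustrating the starvation argument: with a steady stream of short arrivals, SJF never gets around to the long job, while FCFS does:

```python
def simulate(policy, horizon=60):
    # One long job (burst 40) queued at t=0, plus a short job (burst 2) arriving every 2 ticks.
    jobs = sorted([(0, 40, "long")] + [(t, 2, f"short{t}") for t in range(0, horizon, 2)])
    t, i, ready, started = 0, 0, [], []
    while t < horizon:
        while i < len(jobs) and jobs[i][0] <= t:
            ready.append(jobs[i])
            i += 1
        if not ready:
            t += 1
            continue
        # FCFS picks the earliest arrival; SJF picks the shortest burst.
        ready.sort(key=(lambda j: j[0]) if policy == "FCFS" else (lambda j: j[1]))
        arrival, burst, name = ready.pop(0)
        started.append(name)
        t += burst  # run to completion (non-preemptive at the base level)
    return started

print("FCFS runs the long job:", "long" in simulate("FCFS"))  # True: it gets its turn in arrival order
print("SJF  runs the long job:", "long" in simulate("SJF"))   # False: short jobs keep jumping ahead
```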

Enabling Univa Grid Engine Resource Reservation without a time limit on jobs

My organization has a server cluster running Univa Grid Engine 8.4.1, with users submitting various kinds of jobs, some using a single CPU core, and some using OpenMPI to utilize multiple cores, all with varying and unpredictable run-times.
We've enabled a ticketing system so that one user can't hog the entire queue, but if the grid and queue are full of single-CPU jobs, no multi-CPU job can ever start (they just sit at the top of the queue waiting for the required number of cpu slots to become free, which generally never happens). We're looking to configure Resource Reservation such that, if the MPI job is the next in the queue, the grid will hold slots open as they become free until there's enough to submit the MPI job, rather than filling them with the single-CPU jobs that are further down in the queue.
I've read (here for example) that the grid makes the decision of which slots to "reserve" based on how much time is remaining on the jobs running in those slots. The problem we have is that our jobs have unknown run-times. Some take a few seconds, some take weeks, and while we have a rough idea how long a job will take, we can never be sure. Thus, we don't want to start running qsub with hard and soft time limits through -l h_rt and -l s_rt, or else our jobs could be killed prematurely. Resource Reservation appears to be using the default_duration, which we set to infinity for lack of a better number to use, and treating all jobs equally. It's picking slots filled by month-long jobs that have already been running for a few days, instead of slots filled by minute-long jobs that have only been running for a few seconds.
Is there a way to tell the scheduler to reserve slots for a multi-CPU MPI job as they become available, rather than pre-select slots based on some perceived run-time of the jobs in them?
Unfortunately I'm not aware of a way to do what you ask - I think that the reservation is created once at the time that the job is submitted, not progressively as slots become free. If you haven't already seen the design document for the Resource Reservation feature, it's worth a look to get oriented to the feature.
Instead, I'm going to suggest some strategies for confidently setting job runtimes. The main problem when none of your jobs have runtimes is that Grid Engine can't reserve space infinitely far into the future, so even setting some really rough runtimes (within an order of magnitude of the true runtime) may get you some positive results.
If you've run a similar job previously, one simple rule of thumb is to set max runtime to 150% of the typical or maximum runtime of the job, based on historical trends. Use qacct or parse the accounting file to get hard data. Of course, tweak that percentage to whatever suits your risk threshold.
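
A rough sketch of that rule of thumb, assuming the usual qacct -j <jobname> output with one ru_wallclock line per finished run; the 150% factor, the parallel environment name, and the job names below are examples, not recommendations:

```python
import subprocess

def suggest_h_rt(jobname, factor=1.5):
    """Suggest a hard runtime limit: `factor` times the longest historical wallclock for `jobname`."""
    out = subprocess.run(["qacct", "-j", jobname],
                         capture_output=True, text=True, check=True).stdout
    # Accounting records report wallclock seconds on "ru_wallclock" lines;
    # the exact formatting can vary between versions, so adjust the parsing if needed.
    wallclocks = [float(line.split()[1].rstrip("s"))
                  for line in out.splitlines()
                  if line.strip().startswith("ru_wallclock")]
    if not wallclocks:
        raise ValueError(f"no accounting records found for {jobname}")
    return int(max(wallclocks) * factor)

limit = suggest_h_rt("my_mpi_job")  # hypothetical job name
# Resubmit with reservation enabled (-R y, as discussed below) and the estimated hard limit;
# "mpi" stands in for whatever parallel environment your site actually defines.
print(f"qsub -pe mpi 16 -R y -l h_rt={limit} my_mpi_job.sh")
```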
Another rule of thumb is to set the max runtime not based on the job's true runtime, but based on a sense around "after this date, the results won't be useful" or "if it takes this long, something's definitely wrong". If you need an answer by Friday, there's no sense in setting the runtime limit for three months out. Similarly, if you're running md5sum on typically megabyte-sized files, there's no sense in setting a 1-day runtime limit; those jobs ought to only take a few seconds or minutes, and if it's really taking a long time, then something is broken.
If you really must allow true indefinite-length jobs, then one option is to divide your cluster into infinite and finite queues. Jobs specifying a finite runtime will be able to use both queues, while infinite jobs will have fewer resources available; this will incentivize users to work a little harder at picking runtimes, without forcing them to do so.
Finally, be sure that the multi-slot jobs are submitted with the -R y qsub flag to enable the resource reservation system. This could go in the system default sge_request file, but that's generally not recommended as it can reduce scheduling performance:
Since reservation scheduling performance consumption is known to grow with the number of pending jobs, use of -R y option is recommended only for those jobs actually queuing for bottleneck resources.

A new project management algorithm wanted for a finite number of workers

I'm studying task-based parallel computing and got interested in a variation of the old project management problem -- the critical path of an activity-on-vertex (AOV) project network, which can be calculated using the topological sorting algorithm if there's no deadlock cycle. The total time of those activities on a critical path gives the minimum completion time of the project.
But this is assuming we always have enough workers simultaneously finishing the activities with no dependence on each other. If the number of workers (processors/cores) available is finite, certain activities can wait not because some activities they depend on have not yet been finished, but simply because all workers are now busy doing other activities. This is a simplified model for today's multi-core parallel computing. If there's only one worker who has to do all the activities, the project completion time is the total time of all activities. We are back to single-core serial computing that way.
Is there an efficient algorithm that gives the minimum completion time of an AOV network given a finite number of workers? How should we wisely choose which activities to do first when there are more doable activities than workers, so as to minimize the idling time of workers later on? The minimum time should be somewhere in between the critical path time (infinite workers) and the total time of all activities (one worker). It should also be greater than or equal to the total time divided by the number of workers (no idling). Is there an algorithm to get that minimum time?
I found a C++ conference video called "work stealing" that almost answers my question. At 18:40, the slide states that the problem is NP-hard if activities cannot be paused, further divided, or transferred from worker to worker. Such restrictions make the decision of which worker should finish which job (activity) too hard to make in advance. Work stealing is therefore introduced to avoid making such difficult decisions beforehand; instead, it makes those decisions no longer crucial, so long as certain simple greedy rules are followed. The whole project will always be finished as soon as possible, under the constraint of either the critical path or the no-idling time of the finite number of workers, or both. The video then goes on to talk about how to make the procedure of "work stealing" between different workers (processors) more efficient, by making the implementation distributed, cache-friendly, and so on.
According to the video, future C++ shared-memory parallel coding will be task-based rather than loop-based. To solve a problem, the programmer defines a bunch of tasks to finish and their dependence relations to respect, and then the coding language will automatically schedule the tasks on multiple cores at run time in a flexible way. This "event-driven"-like way of implementing a flexible code by a distributed task queuing system will become very useful in parallel computing.
When an optimization problem is NP-hard, the best way to solve it is to find ways to avoid it.
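
One practical baseline short of full work stealing is greedy list scheduling: whenever a worker is free, give it the ready activity with the longest remaining dependency chain (its critical-path rank). It isn't optimal in general, for exactly the NP-hardness reasons above, but it is simple and often close. A minimal sketch with made-up activities:

```python
import heapq

def list_schedule(durations, deps, workers):
    """durations: {activity: time}; deps: {activity: set of prerequisites}; returns the makespan."""
    rank = {}
    def crit(t):  # longest chain of work starting at t (its critical-path rank)
        if t not in rank:
            rank[t] = durations[t] + max((crit(s) for s, pre in deps.items() if t in pre), default=0)
        return rank[t]

    indeg = {t: len(deps.get(t, ())) for t in durations}
    ready = [(-crit(t), t) for t in durations if indeg[t] == 0]
    heapq.heapify(ready)
    running, time, free = [], 0, workers           # running: heap of (finish_time, activity)
    while ready or running:
        while ready and free > 0:                  # start as many ready activities as workers allow
            _, t = heapq.heappop(ready)
            heapq.heappush(running, (time + durations[t], t))
            free -= 1
        time, done = heapq.heappop(running)        # advance to the next completion event
        free += 1
        for s, pre in deps.items():
            if done in pre:
                indeg[s] -= 1
                if indeg[s] == 0:
                    heapq.heappush(ready, (-crit(s), s))
    return time

# a -> b -> d and a -> c -> d; with 2 workers, b and c overlap.
durations = {"a": 2, "b": 3, "c": 1, "d": 2}
deps = {"b": {"a"}, "c": {"a"}, "d": {"b", "c"}}
print(list_schedule(durations, deps, workers=2))  # -> 7 (the critical path a-b-d)
```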

What's the best Task scheduling algorithm for some given tasks?

We have a list of tasks with different length, a number of cpu cores and a Context Switch time.
We want to find the best scheduling of tasks among the cores to maximize processor utilization.
How could we find this?
Isn't it the case that picking the biggest available tasks from the list and handing them one by one to the currently ready cores is going to be best, or do you think we must try all orders to find out which is best?
I must add that all cores are ready at time unit 0 and the tasks are supposed to run concurrently.
The idea here is that there's no silver bullet: you must consider what types of tasks are being executed and try to schedule them as sensibly as possible.
CPU-bound tasks don't use much communication (I/O), and thus need to run continuously, interrupted only when necessary, according to the policy being used;
I/O-bound tasks may be repeatedly set aside during execution, allowing other processes to work, since they will sleep for many periods while waiting for data to be retrieved into primary memory;
interactive tasks must be executed continually, but need not run without interruption, since they generate interruptions themselves while waiting for user input; they do, however, need a high priority, so that the user doesn't notice delays in execution.
Considering this, and the context switch costs, you must evaluate what types of tasks you have, choosing, thus, one or more policies for your scheduler.
Edit:
I thought this was simply a conceptual question. Given that you have to implement a solution, you must analyze the requirements.
Since you have the lengths of the tasks and the context switch times, and you have to keep the cores busy, this becomes an optimization problem: you want to minimize the number of cores sitting idle toward the end of the run, while also keeping the number of context switches to a minimum so that the overall execution time does not grow too much.
As pointed out by svick, this sounds like the partition problem, which is NP-complete, and in which you need to divide a sequence of numbers into a given number of lists so that the sums of the lists are equal to each other.
In your problem you'd have a relaxation of that objective: you no longer need all the cores to execute for the same amount of time, but you want the difference between any two cores' execution times to be as small as possible.
In the reference given by svick, you can see a dynamic programming approach that you may be able to map onto your problem.
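
In the same spirit as the questioner's "biggest task first" intuition, here is a sketch of the standard LPT (longest processing time first) greedy: sort tasks by decreasing length and always hand the next one to the core that frees up earliest. It is only a heuristic, since the exact problem is NP-complete, and charging one context switch between consecutive tasks on a core is just one possible modeling assumption:

```python
import heapq

def lpt_schedule(task_lengths, cores, context_switch=1):
    """Assign each task (longest first) to the core that becomes free earliest.

    Returns (makespan, tasks per core). Each extra task on a core is charged one context switch.
    """
    heap = [(0, core, 0) for core in range(cores)]   # (finish_time, core_id, task_count)
    heapq.heapify(heap)
    assignment = {core: [] for core in range(cores)}
    for length in sorted(task_lengths, reverse=True):
        finish, core, n = heapq.heappop(heap)
        finish += length + (context_switch if n > 0 else 0)
        assignment[core].append(length)
        heapq.heappush(heap, (finish, core, n + 1))
    return max(f for f, _, _ in heap), assignment

print(lpt_schedule([7, 5, 4, 4, 3, 2], cores=2))  # -> (15, {0: [7, 4, 2], 1: [5, 4, 3]})
```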
