Shortest Remaining Time - average turnaround time - algorithm

Assuming all processes arrive at the same time, shortest job first seems to be optimal in terms of lowering the average turnaround time; I also managed to prove that.
However, when processes arrive at different times, I feel the optimal algorithm would be Shortest Remaining Time (preemptive shortest job first), but I can't find a way to prove it. Can someone help me or point me to a solution? Or am I flat out wrong?
http://en.wikipedia.org/wiki/Shortest_remaining_time
Only one process can run at a time, and context switches take no time.
EDIT:
Say we have n processes.
Each process has an execution time P(i), 1 <= i <= n.
Each process becomes available for execution at a specific time R(i).
Each process finishes at some time C(i), depending on when it started running, whether it was suspended, etc.
All times are integers. There is no specific example; I just have to find an algorithm that minimizes the average turnaround time, (C(1) + C(2) + ... + C(n)) / n, for any given input.
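For concreteness, here is a minimal simulation sketch of the SRT rule in Python (the three sample processes at the bottom are made up); it computes the average (C(1) + ... + C(n)) / n as defined above, though it only illustrates the rule rather than proving its optimality:

```python
import heapq

def srt_average_turnaround(release, burst):
    """Simulate Shortest Remaining Time (preemptive SJF) on one machine.

    release[i] = R(i), burst[i] = P(i); returns (C(1)+...+C(n))/n as
    defined in the question. All times are assumed to be integers.
    """
    n = len(release)
    order = sorted(range(n), key=lambda i: release[i])  # by arrival time
    ready = []                 # min-heap of (remaining_time, job_id)
    completion = [0] * n
    t, k = 0, 0                # current time, index of next arrival
    while k < n or ready:
        if not ready:          # machine idle: jump to the next arrival
            t = max(t, release[order[k]])
        while k < n and release[order[k]] <= t:
            i = order[k]
            heapq.heappush(ready, (burst[i], i))
            k += 1
        rem, i = heapq.heappop(ready)  # job with shortest remaining time
        next_arrival = release[order[k]] if k < n else float('inf')
        run = min(rem, next_arrival - t)  # run until done or preempted
        t += run
        if run == rem:
            completion[i] = t
        else:
            heapq.heappush(ready, (rem - run, i))  # preempted, requeue
    return sum(completion) / n

# Made-up example: R = [0, 1, 2], P = [7, 4, 1]
print(srt_average_turnaround([0, 1, 2], [7, 4, 1]))  # -> 7.0
```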

Related

A* Algorithm - termination strategy

I implemented the A* algorithm (actually a modification thereof...) to calculate wiring on a circuit board.
While it is fairly fast in finding a path, it takes painfully long (>100ms) to find no path.
From the outlines of the algorithm I have come across it is clear that it will terminate only when the queue of unvisited nodes is empty.
Are there any heuristics to terminate the search for a path early -- possibly when adding additional assumptions?
I would just define a maximum cost limit and terminate once no candidate's cost so far plus its lower-bound estimate of the remaining cost is below that limit.
On a circuit board I would guess no path should need to be longer than 5 times the Manhattan distance.
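As a sketch of that idea (in Python; `neighbors` and `heuristic` are placeholders, since the actual board representation isn't given): the open list of A* is ordered by f = g + h, so the search can give up as soon as the smallest f exceeds the limit:

```python
import heapq

def a_star_with_limit(start, goal, neighbors, heuristic, max_cost):
    """A* that terminates early once no open node can beat max_cost.

    The heap is ordered by f = g + h; with an admissible heuristic the
    cheapest possible completion is the top f-value, so once it exceeds
    max_cost no path within the limit can exist.
    """
    open_heap = [(heuristic(start, goal), 0, start)]  # (f, g, node)
    best_g = {start: 0}
    while open_heap:
        f, g, node = heapq.heappop(open_heap)
        if f > max_cost:
            return None            # early termination: limit exceeded
        if node == goal:
            return g
        if g > best_g.get(node, float('inf')):
            continue               # stale heap entry, skip
        for nxt, step_cost in neighbors(node):
            g2 = g + step_cost
            if g2 < best_g.get(nxt, float('inf')):
                best_g[nxt] = g2
                heapq.heappush(open_heap, (g2 + heuristic(nxt, goal), g2, nxt))
    return None                    # open set exhausted: no path at all
```

With the guess above, max_cost would be something like 5 times the Manhattan distance between the two endpoints.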
Another idea:
It might also be a good idea to first determine whether there is a path at all (with some kind of flood fill). That avoids calculating heuristics for each point and should therefore be much faster than searching the whole space with A*.
In a multi-threaded environment you could run this in a second (maybe low-priority) thread and terminate the A* search as soon as your fill approach finds out that there is no path at all.
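A minimal sketch of that pre-check (the `passable(cell)` predicate is a placeholder; it must return False outside the board, or the fill never terminates):

```python
from collections import deque

def path_exists(start, goal, passable):
    """Plain BFS flood fill on a grid: is goal reachable from start?

    No heuristic evaluation at all, so it is much cheaper per cell
    than A*; if it returns False, the A* search can be skipped or
    aborted entirely.
    """
    seen = {start}
    queue = deque([start])
    while queue:
        x, y = queue.popleft()
        if (x, y) == goal:
            return True
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nxt not in seen and passable(nxt):
                seen.add(nxt)
                queue.append(nxt)
    return False
```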

Interval Scheduling Problem: accepting requests in order of latest starting time is always optimal?

I learned that the interval scheduling problem is solved optimally when we accept requests in order of earliest finish time.
Is it then also true that we always get an optimal solution if we accept requests in order of latest start time?
I think it is false, because we would get a different schedule, but I am wondering how I can come up with a more mathematical proof.
Scheduling by latest starting time is the same as:
1. Reverse time (negate all the times and swap the interval ends).
2. Schedule by earliest finish time.
3. Reverse time again to restore the original intervals.
By symmetry, the maximum number of schedulable intervals is the same whether you reverse time or not, so if "earliest finish time" is optimal, then "latest start time" is optimal, too.
As a hint, imagine mirroring all the intervals, or pretending that time runs backwards. You know that the greedy “take the earliest finish time” will select the maximum number of intervals. If you sweep backwards in time, what’s the equivalent condition?
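To make the symmetry concrete, here is a small sketch (the interval data is made up) that implements both greedy rules and checks that they select the same number of intervals:

```python
def earliest_finish_first(intervals):
    """Classic greedy: sort by finish time, sweep forwards in time."""
    chosen, last_end = [], float('-inf')
    for s, f in sorted(intervals, key=lambda iv: iv[1]):
        if s >= last_end:          # compatible with everything chosen
            chosen.append((s, f))
            last_end = f
    return chosen

def latest_start_first(intervals):
    """The mirrored greedy: sort by latest start, sweep backwards in time.

    Equivalent to earliest_finish_first run on the time-reversed
    intervals (-f, -s), as argued above.
    """
    chosen, next_start = [], float('inf')
    for s, f in sorted(intervals, key=lambda iv: iv[0], reverse=True):
        if f <= next_start:
            chosen.append((s, f))
            next_start = s
    return chosen

ivs = [(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (5, 9), (6, 10), (8, 11)]
assert len(earliest_finish_first(ivs)) == len(latest_start_first(ivs))  # both pick 3
```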

Time Management Scheduling Algorithm

I am interested in solving a problem related to time management.
Suppose you are given N intervals, where the i-th interval has a start time, an end time, and an amount. Each interval represents a constraint requiring the machine to spend (at least) the specified amount of total time on task i between the start and end time of the i-th interval.
The machine can only work on one task at any time but can switch between tasks and come back to another if necessary.
How do you produce a schedule (i.e. an allocation of time to tasks) that satisfies all intervals (i.e. constraints), while reporting and minimizing the maximum lateness, as efficiently as possible?
Also, a variant of the problem:
Each interval is also given a task ID, and if that task is worked on at some time, the work counts toward every interval that covers that time and requires that task. In other words, if multiple intervals requiring the same task overlap, doing the task during the overlapping time counts toward all of those constraints at once, thus saving some time.
Is there an efficient way to solve this problem as well?
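For the base problem (without the shared-task variant), each interval maps to a job with a release time (the interval's start), a deadline (its end), and a processing requirement (the amount). Preemptive Earliest Deadline First is known to minimize maximum lateness on a single machine with release times, so a sketch of it (with made-up example data) addresses the first question; the shared-task variant is not handled here:

```python
import heapq

def edf_max_lateness(jobs):
    """Preemptive Earliest-Deadline-First on a single machine.

    jobs: list of (release, deadline, amount). Returns (max_lateness,
    schedule), where schedule is a list of (job_index, run_start,
    run_end) pieces. All constraints are satisfiable iff the returned
    max lateness is <= 0.
    """
    order = sorted(range(len(jobs)), key=lambda i: jobs[i][0])
    ready = []                     # min-heap of (deadline, job, remaining)
    schedule, t, k = [], 0, 0
    max_late = float('-inf')
    while k < len(order) or ready:
        if not ready:              # idle until the next release
            t = max(t, jobs[order[k]][0])
        while k < len(order) and jobs[order[k]][0] <= t:
            i = order[k]
            heapq.heappush(ready, (jobs[i][1], i, jobs[i][2]))
            k += 1
        d, i, rem = heapq.heappop(ready)   # most urgent deadline first
        nxt = jobs[order[k]][0] if k < len(order) else float('inf')
        run = min(rem, nxt - t)    # run until done or a new release
        schedule.append((i, t, t + run))
        t += run
        if run == rem:
            max_late = max(max_late, t - d)
        else:
            heapq.heappush(ready, (d, i, rem - run))
    return max_late, schedule

# Made-up data: e.g. 3 units of task 0 must fit inside [0, 5]
print(edf_max_lateness([(0, 5, 3), (1, 4, 2), (2, 10, 4)]))  # max lateness 0
```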

greedy algorithm, scheduling

I am trying to understand how the greedy algorithm for the scheduling problem works.
I've been reading and googling for a while, but I still could not understand it.
We have n jobs to schedule on a single resource. Job i has a requested start time s(i) and finish time f(i).
There are several greedy orders in which we could accept jobs:
Accept in increasing order of s ("earliest start time")
Accept in increasing order of f - s ("shortest job time")
Accept in increasing order of number of conflicts ("fewest conflicts")
Accept in increasing order of f ("earliest finish time")
And the book says the last one, accepting in increasing order of f, always gives an optimal solution.
However, it did not mention why it always gives an optimal solution, or why the other three do not.
It provided a figure showing why the other three do not give an optimal solution, but I could not understand what it means.
Since I have low reputation, I cannot post any image, so I will try to draw it.
 |---| |---| |---|
|-------------------------|
increasing order of s
(greedy selects fewer jobs than the optimum)
|-----------| |-----------|
   |-----|
increasing order of f-s
(greedy selects fewer jobs than the optimum)
|----|  |----| |----|  |----|
 |-----| |-----| |-----|
 |-----|    |-----|
 |-----|    |-----|
increasing order of number of conflicts
(greedy selects fewer jobs than the optimum)
This is what it looks like, and I don't see why each of these is a counterexample.
If anyone can explain why each greedy idea does or does not work, it would be very helpful.
Thank you.
I think I can explain this.
Let's say we have n jobs, with start times s[1..n] and finish times f[1..n]. If we sort by finish time, we will always be able to complete the largest number of tasks. Let's see how.
If a job finishes earlier (even if it started later, i.e. a short job), then we always have more time left for later jobs. Now assume there were some other job we could have started and completed in that interval, so that our task count would increase. That is not actually possible: if some other task completed before this one, it would be the one with the earliest finish time, so we would already be working on it; and if some task has started but not yet completed, then by selecting it we would have completed nothing so far, whereas now we have completed at least one task. So in every case this is the optimal choice.
There may be many solutions that achieve the maximum number of tasks in an interval; EFT gives one such solution, but it always achieves that maximum.
I hope I have explained it well.
Since vish4071 has already explained why selecting by earliest finish time leads to an optimal solution, I'll only explain the counterexamples. Task [a,b] starts at a and ends at b. I'll use the counterexamples you provided.
Earliest start time
Suppose tasks [1,10], [2,3], [4,5], [6,7]. The earliest start time strategy will choose [1,10] and then refuse the other 3, since they all collide with the first one. Yet we can see that [2,3], [4,5], [6,7] is the optimal solution, so the earliest start time strategy will not always yield the optimal result.
Shortest execution time
Suppose tasks [1,10], [11,20], [9,12]. This strategy would choose [9,12] and then reject the other two, but the optimal solution is [1,10], [11,20]. Therefore, the shortest execution time strategy will not always lead to the optimal result.
Least amount of collisions
This strategy seems promising, but your example with 11 tasks proves it is not optimal. Suppose tasks: [1,4], 3x[3,6], [5,8], [7,10], [9,12], 3x[11,14] and [13,16]. [7,10] has only 2 collisions with other tasks, which is fewer than any other task, so it would be selected first by the least amount of collisions strategy. Then [1,4] and [13,16] would be selected, and all the other tasks rejected because they collide with the already selected tasks. That is 3 tasks, but 4 tasks can be selected without collision: [1,4], [5,8], [9,12] and [13,16].
You can also see that the earliest finish time strategy always chooses an optimal solution in these examples. Note that more than one optimal solution with the same number of selected tasks can exist; in that case, the earliest finish time strategy will always choose one of them.
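A small sketch of the earliest finish time greedy, run against the first two counterexamples above to confirm the claims:

```python
def earliest_finish_time(tasks):
    """Greedy interval scheduling: accept compatible tasks by earliest finish."""
    chosen, last_end = [], float('-inf')
    for s, f in sorted(tasks, key=lambda t: t[1]):
        if s >= last_end:          # does not collide with chosen tasks
            chosen.append((s, f))
            last_end = f
    return chosen

print(earliest_finish_time([(1, 10), (2, 3), (4, 5), (6, 7)]))
# -> [(2, 3), (4, 5), (6, 7)]   (earliest start time picks only [1,10])
print(earliest_finish_time([(1, 10), (11, 20), (9, 12)]))
# -> [(1, 10), (11, 20)]        (shortest execution time picks only [9,12])
```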

algorithm similar to 'assignment task'

Here is the assignment problem http://en.wikipedia.org/wiki/Generalized_assignment_problem
I have a similar task, but can't find the algorithm.
We have m tasks and n laborers, m > n. When a task is done, the laborer takes the next one (if there is a free one). If a task is taken by some laborer, no one else can take it. Each laborer has his own speed, V1..Vn, and each task has its own 'volume', W1..Wm. I need to distribute tasks between the laborers so as to minimize the time to complete all tasks.
Please help me find an algorithm, or tell me what this problem is called.
This problem is scheduling jobs on parallel, uniformly related machines so as to minimize the makespan. There's a polynomial-time approximation scheme due to Hochbaum and Shmoys (Using dual approximation algorithms for scheduling problems: Theoretical and practical results, 1988). btilly is right that the bin-packing problem is closely related; the analyses of both Hochbaum--Shmoys and the previous best approximation MULTIFIT are based on techniques pioneered for bin packing.
This looks like a likely NP-complete variation of the bin packing problem: http://en.wikipedia.org/wiki/Bin_packing_problem. I would therefore not worry about an exact algorithm.
Assuming that the tasks are independent, my first try would be a greedy heuristic. Given an estimate of the finishing time, assign to each worker, at every point, the longest task that they can finish before that finishing time. Then do a binary search to find the shortest finishing time that you can get away with. Your initial upper bound is the time for the fastest worker to do everything; your initial lower bound is the time for all of the workers to complete that much work while all working simultaneously.
This is clearly not always going to be perfectly optimal, but it should work reasonably well.
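A minimal sketch of that heuristic in Python (the data at the bottom is made up, and the greedy feasibility check is itself approximate, so this is a heuristic rather than an exact algorithm):

```python
def feasible(volumes, speeds, T):
    """Greedy check: can all tasks plausibly finish by time T?

    A worker with speed v has a volume budget of v * T; give every
    worker, fastest first, the largest remaining tasks that still fit.
    """
    remaining = sorted(volumes)            # ascending; scan from the right
    for v in sorted(speeds, reverse=True):
        budget = v * T
        i = len(remaining) - 1
        while i >= 0:
            if remaining[i] <= budget + 1e-9:
                budget -= remaining.pop(i)
            i -= 1
    return not remaining                   # feasible if nothing is left

def min_makespan(volumes, speeds, iters=60):
    """Binary search for the smallest T that the greedy check accepts.

    Bounds as described above: lower = all workers busy simultaneously,
    upper = the fastest worker doing everything alone.
    """
    total = sum(volumes)
    lo = total / sum(speeds)
    hi = total / max(speeds)
    for _ in range(iters):
        mid = (lo + hi) / 2
        if feasible(volumes, speeds, mid):
            hi = mid
        else:
            lo = mid
    return hi

print(min_makespan([5, 3, 8, 4, 6], [2, 1]))  # -> approximately 9.0
```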
