I am completely stuck on a task scheduling problem.
Here is the requirement:
Implement a scheduling algorithm that adds jobs to the regular queue and pushes them through in such a way that the average wait time for all jobs in the queue is minimized. A new job isn't pushed through unless it minimizes the average waiting time.
Assume that your program starts working at 0 seconds. The request for the i-th job arrives at requestTime[i], and processing it takes jobProcess[i] seconds.
def jobScheduling(requestTime, jobProcess, timeFromStart):
    requestTimeAndDuration = {}
    for i in range(len(requestTime)):
        job = []
        job.append(requestTime[i])
        job.append(jobProcess[i])
        requestTimeAndDuration[i] = job

    taskProcessed = []
    previousEndTime = 0
    while requestTimeAndDuration:
        endTimes = {}
        for k, v in requestTimeAndDuration.items():
            if len(taskProcessed) == 0:
                previousEndTime = 0
            else:
                previousEndTime = taskProcessed[-1][1]
            # print(previousEndTime)
            if v[0] <= previousEndTime:
                endTimes[k] = previousEndTime + v[1]
            else:
                endTimes[k] = v[0] + v[1]
        endTimesSorted = sorted(endTimes.items(), key=lambda item: item[1])
        nextJobId = endTimesSorted[0][0]
        nextJobEndTime = endTimesSorted[0][1]
        nextJob = []
        nextJob.append(nextJobId)
        previousEndTime = 0
        if len(taskProcessed) > 0:
            previousEndTime = taskProcessed[-1][1]
        nextJobStartTime = nextJobEndTime - jobProcess[nextJobId]
        nextJob.append(nextJobEndTime)
        nextJob.append(nextJobStartTime)
        taskProcessed.append(nextJob)
        del requestTimeAndDuration[nextJobId]
    print(taskProcessed)
My algorithm tries to sort the tasks by their end times, where each end time is computed as previousEndTime + currentJobProcess.
requestTime = [0, 5, 8, 11], jobProcess = [9, 4, 2, 1]
iteration 1:
task = [[0,9],[5,4],[8,2],[11,1]]
previousEndTime = 0 // since we just started, there is no previous task: 0+9=9, 5+4=9, 8+2=10, 11+1=12
endTime = {0:9, 1:9, 2:10, 3:12} // take task 0 and remove it from tasks
iteration 2:
task = [[5,4],[8,2],[11,1]]
previousEndTime = 9 // 9+4=13, 9+2=11, 11+1=12
endTime = {1:13, 2:11, 3:12} // remove task 2
iteration 3:
task = [[5,4],[11,1]]
previousEndTime = 11
11+4=15, 11+1=12
endTime = {1:15, 3:12} // remove task 3
iteration 4:
task = [[5,4]]
previousEndTime = 12
12+4=16
endTime = {1:16} // remove task 1
Final Result printed is [0,2,3,1]
My problem is that my algorithm works for some cases, but not the more complicated ones.
requestTime: [4, 6, 8, 8, 15, 16, 17, 21, 22, 25]
jobProcess: [30, 25, 14, 16, 26, 10, 11, 11, 14, 8]
The answer is [9, 5, 6, 7, 2, 8, 3, 1, 4]
But my algorithm produces [5, 9, 6, 7, 8, 3, 1, 4, 0]
So does anyone know how to do this problem? I'm afraid my algorithm may be fundamentally flawed.
I don't see a really neat solution like sorting by end time, but if such a solution exists, you should be able to get the same answer by sorting the tasks with a comparator that works out which of two tasks should be scheduled first if those were the only two tasks to consider.
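For what it's worth, one well-known greedy for this kind of "minimize average waiting time" requirement is: whenever the machine is free, run the already-arrived job with the shortest processing time, and jump ahead to the next arrival if nothing is waiting. A Python sketch (the function name and the tie-break by job id are my own choices, not from the question):

```python
import heapq

def job_scheduling(request_time, job_process):
    """Non-preemptive greedy: whenever the machine is free, run the
    already-arrived job with the shortest processing time (ties broken
    by job id); if nothing has arrived yet, skip to the next arrival."""
    n = len(request_time)
    by_arrival = sorted(range(n), key=lambda i: request_time[i])
    ready = []            # heap of (process_time, job_id) for arrived jobs
    order = []
    t = 0                 # current time
    idx = 0               # next job in arrival order not yet admitted
    while len(order) < n:
        # admit everything that has arrived by time t
        while idx < n and request_time[by_arrival[idx]] <= t:
            j = by_arrival[idx]
            heapq.heappush(ready, (job_process[j], j))
            idx += 1
        if not ready:
            t = request_time[by_arrival[idx]]   # idle: jump to next arrival
            continue
        p, j = heapq.heappop(ready)
        t += p
        order.append(j)
    return order
```

On the question's small example this produces [0, 2, 3, 1], and on the harder test case it produces [0, 9, 5, 6, 7, 2, 8, 3, 1, 4], i.e. job 0 first (it is the only job available at t = 4) followed by the expected sequence 9, 5, 6, 7, 2, 8, 3, 1, 4.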
I have the following dynamic programming problem that I just can't figure out.
Basically, you have a table like this which represents the time it takes computer X to accomplish Y tasks ("ordi" is French for computer).
In this case, computer 1 will take 7 seconds to complete 1 task, 10 seconds to complete 2 tasks, etc.
Computer 2 will take 8 seconds to accomplish 1 task, 9 seconds to accomplish 2 tasks, etc.
Now, I want to write a dynamic programming algorithm that will tell me the minimum amount of time needed for Computer 1 AND 2 to accomplish 3 tasks, or the minimum time needed for Computer 1, 2 AND 3 to accomplish 5 tasks, etc.
Keep in mind 2 constraints: each computer involved must have at least 1 task assigned to it, and all 6 tasks must be distributed. For example, you couldn't use Computer 1 AND 2 to accomplish 1 task in the same way that you couldn't use 3 computers to accomplish less than 3 tasks (and each one must have a task).
This is the solution (given as a table in the original post, not reproduced here).
My almost-working Rust code is below. It doesn't give the right numbers, though; can anyone get it to give the correct solution?
let costs = [
    [7, 10, 14, 20, 21, 30],
    [8, 9, 15, 10, 18, 20],
    [9, 9, 16, 28, 30, 40],
    [11, 15, 20, 30, 35, 20],
];
let mut optimal = vec![vec![999999999; costs[0].len()]; costs.len()];
for j in 0..costs[0].len() {
    optimal[0][j] = costs[0][j];
}
for i in 1..optimal.len() {
    for j in i..optimal[i].len() {
        let mut min = 999999999;
        for k in 0..j {
            let c = optimal[i - 1][j - k] + costs[i][k];
            min = std::cmp::min(c, min);
        }
        optimal[i][j] = min;
    }
}
The fixed version:
let costs = [
    [7, 10, 14, 20, 21, 30],
    [8, 9, 15, 10, 18, 20],
    [9, 9, 16, 28, 30, 40],
    [11, 15, 20, 30, 35, 20],
];
let mut optimal = vec![vec![999999999; costs[0].len()]; costs.len()];
for j in 0..costs[0].len() {
    optimal[0][j] = costs[0][j];
}
for i in 1..optimal.len() {
    for j in i..optimal[i].len() {
        let mut min = 999999999;
        // The original interval 0..j was wrong: computer i must take
        // between 1 and j tasks, so k runs over 1..j+1.
        for k in 1..j + 1 {
            // Index shift: the cost of doing k tasks is stored at column k-1,
            // because the columns start at 0.
            let c = optimal[i - 1][j - k] + costs[i][k - 1];
            min = std::cmp::min(c, min);
        }
        optimal[i][j] = min;
    }
}
Essentially, you did not account for the fact that the cost for 1 task is stored in column 0 (and so on), so you need two index shifts: one in the loop interval and one when indexing into costs.
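To make the two index shifts easy to check end to end, here is a hedged Python transcription of the corrected recurrence (the function name and return convention are my own):

```python
def min_total_time(costs, computers, tasks):
    """costs[i][j] = time for computer i to do j+1 tasks on its own.
    optimal[i][j] = min summed time for computers 0..i to finish j+1 tasks,
    with every computer doing at least one task."""
    INF = float("inf")
    n_cols = len(costs[0])
    optimal = [[INF] * n_cols for _ in costs]
    for j in range(n_cols):
        optimal[0][j] = costs[0][j]
    for i in range(1, len(costs)):
        for j in range(i, n_cols):        # i+1 computers need at least i+1 tasks
            for k in range(1, j + 1):     # computer i handles k tasks (column k-1)
                if optimal[i - 1][j - k] < INF:
                    c = optimal[i - 1][j - k] + costs[i][k - 1]
                    optimal[i][j] = min(optimal[i][j], c)
    return optimal[computers - 1][tasks - 1]
```

With the table from the question, computers 1 and 2 finishing 3 tasks cost 7 + 9 = 16 (computer 1 does one task, computer 2 does two), which the shifted recurrence reproduces.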
First thing first, I am new to the world of statistics.
Problem statement:
I have three predicted time series. These represent three independent scores whose sum should be minimized over the timeslot being selected; the length of the timeslot is already given. I have read that there is confidence-based selection of prediction intervals for such problems, but I used an LSTM to predict the time series, which may rule out that approach; as far as I understand, computing a prediction interval applies to a single time series.
e.g: Consider below arrays represent the three predicted time series.
arr1 = [23, 34, 16, 5, 45, 10, 2, 34, 56, 11]
arr2 = [123, 100, 124, 245, 125, 120, 298, 124, 175, 200]
arr3 = [1, 3, 10, 7, 2, 2, 10, 7, 8, 12]
time slot length = 3
As you can see, the optimal timeslot for arr1 is [5, 7], for arr2 it is [0, 2], and for arr3 it is [3, 5], but I need a single timeslot for all three time series.
Questions:
Which error paradigm should I employ to select the optimal timeslot?
I am also given weights (positive real numbers in [0, 1]) that represent the importance of each time series in deciding the timeslot. How do I incorporate them into that error paradigm?
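I can't speak to the prediction-interval side, but one simple selection paradigm consistent with the example is: combine the series by a weighted sum, then slide a window of the given length and keep the start position with the smallest total. A Python sketch (equal-length series assumed; all names are my own):

```python
def best_timeslot(series_list, weights, window):
    """Return (start, end) of the length-`window` slot minimizing the
    weighted sum of all series. Assumes all series have equal length."""
    n = len(series_list[0])
    # weighted pointwise combination of the series
    combined = [sum(w * s[t] for w, s in zip(weights, series_list))
                for t in range(n)]
    best_start, best_sum = 0, float("inf")
    cur = sum(combined[:window])          # sum of the first window
    for start in range(n - window + 1):
        if start > 0:
            # slide the window: add the new element, drop the old one
            cur += combined[start + window - 1] - combined[start - 1]
        if cur < best_sum:
            best_sum, best_start = cur, start
    return best_start, best_start + window - 1
```

With all weights equal to 1 this reduces to minimizing the plain sum; the weights simply scale how strongly each series influences the choice.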
I seem to be a little confused on the proper implementation of Quick Sort.
If I wanted to find all of the pivot values of QuickSort, at what point do I stop dividing the subarrays?
QuickSort(A, p, r):
    if p < r:
        q = Partition(A, p, r)
        QuickSort(A, p, q-1)
        QuickSort(A, q+1, r)

Partition(A, p, r):
    x = A[r]
    i = p - 1
    for j = p to r-1:
        if A[j] ≤ x:
            i = i + 1
            swap(A[i], A[j])
    swap(A[i+1], A[r])
    return i + 1
Meaning, if I have an array:
A = [9, 7, 5, 11, 12, 2, 14, 3, 10, 6]
As Quick Sort breaks this into its constituent pieces...
A = [2, 5, 3] [12, 7, 14, 9, 10, 11]
One more step to reach the point of confusion...
A = [2, 5] [7, 12, 14, 9, 10, 11]
Does the subArray on the left stop here? Or does quickSort make a final call with 5 as the final pivot value?
It would make sense to me that we continue until all subarrays are single items, but one of my peers has been telling me otherwise.
Pivots for your example would be: 6, 3, 11, 10, 9, 12. Regarding:
Does the subArray on the left stop here?
It is always best to examine the source code. When your recursive subarray becomes [2, 5, 3], function QuickSort will be invoked with p = 0 and r = 2. Let's proceed: Partition(A,0,2) will return q = 1, so the next two calls will be Quicksort(A,0,0) and Quicksort(A,2,2). Therefore, Quicksort(A,0,1) will never be invoked, so you'll never have a chance to examine the subarray [2, 5] - it has already been sorted!
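One way to settle the disagreement is to instrument the code itself. Here is a Python transcription of the Lomuto pseudocode above that also records each pivot value as it is chosen (the pivot-list parameter is my own addition):

```python
def partition(a, p, r):
    """Lomuto partition: a[r] is the pivot; returns its final index."""
    x = a[r]
    i = p - 1
    for j in range(p, r):
        if a[j] <= x:
            i += 1
            a[i], a[j] = a[j], a[i]
    a[i + 1], a[r] = a[r], a[i + 1]
    return i + 1

def quicksort(a, p, r, pivots):
    """Sort a[p..r] in place, appending each chosen pivot value to `pivots`."""
    if p < r:
        q = partition(a, p, r)
        pivots.append(a[q])
        quicksort(a, p, q - 1, pivots)
        quicksort(a, q + 1, r, pivots)
```

For A = [9, 7, 5, 11, 12, 2, 14, 3, 10, 6] this records the pivots 6, 3, 11, 10, 9, 12; the recursion bottoms out on subarrays of length 0 or 1 without ever choosing their elements as pivots, which is exactly the point of the answer above.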
Given I have an array such as follows:
arr = [8, 13, 14, 10, 6, 7, 8, 14, 5, 3, 5, 2, 6, 7, 4]
I would like to count the number of consecutive number sequences. Eg in the above array the consecutive number sequences (or array-slices) are:
[13,14]
[6,7,8]
[6,7]
And hence we have 3 such slices. What is an efficient algorithm to count them? I know how to do it in O(N^2), but I'm looking for something better than that.
arr = [8, 13, 14, 10, 6, 7, 8, 14, 5, 3, 5, 2, 6, 7, 4]
p arr.each_cons(2).chunk{|a,b| a.succ == b || nil}.count #=> 3
nil has a special meaning for the chunk method: it causes those items to be dropped.
arr = [8, 13, 14, 10, 6, 7, 8, 14, 5, 3, 5, 2, 6, 7, 4]
result = []
stage = []
for i in arr:
    if len(stage) > 0 and i != stage[-1] + 1:
        if len(stage) > 1:
            result.append(stage)
        stage = []
    stage.append(i)
if len(stage) > 1:  # flush a run that reaches the end of the array
    result.append(stage)
print(result)
Output:
[[13, 14], [6, 7, 8], [6, 7]]
The time complexity of this code is O(n). (There's only one for loop. And it's not hard to see that each iteration in the loop is O(1).)
I would do as below using Enumerable#slice_before:
a = [8, 13, 14, 10, 6, 7, 8, 14, 5, 3, 5, 2, 6, 7, 4]
prev = a[0]
hash = Hash[a.slice_before do |e|
  prev, prev2 = e, prev
  prev2 + 1 != e
end.map { |e| [e, e.size] if e.size > 1 }]
hash # => {[13, 14]=>2, [6, 7, 8]=>3, [6, 7]=>2}
hash.size # => 3
I think this can be done in O(N) time. If you just want the count,
Iterate through the array. Initialize counter to 0.
If next element is one more or one less than current element, increment the counter.
Continue iterating till the next element is not one more or one less than current element.
Repeat steps 2 and 3 until you reach the end.
If you want sections of continuously increasing consecutive elements (it isn't clear from your question which you mean):
Iterate through the array. Initialize counter to 0.
If next element is one more than current element, increment the counter.
Continue iterating till the next element is not one more than current element.
Repeat steps 2 and 3 until you reach the end.
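The counting steps above (the increasing-only variant) can be sketched without storing any slices; a Python sketch with my own naming:

```python
def count_consecutive_runs(arr):
    """Count maximal runs of length >= 2 in which each element
    is exactly one more than the previous element. O(n) time."""
    count = 0
    run_len = 1
    for prev, cur in zip(arr, arr[1:]):
        if cur == prev + 1:
            run_len += 1              # still inside an increasing run
        else:
            if run_len >= 2:          # a run just ended
                count += 1
            run_len = 1
    if run_len >= 2:                  # flush a run ending at the last element
        count += 1
    return count
```

This makes a single pass with O(1) work per element, matching the O(n) claim.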
My friend posed this question to me; felt like sharing it here.
Given a deck of cards, we split it into 2 groups, and "interleave them"; let us call this operation a 'split-join'. And repeat the same operation on the resulting deck.
E.g., { 1, 2, 3, 4 } becomes { 1, 2 } & { 3, 4 } (split) and we get { 1, 3, 2, 4 } (join)
Also, if we have an odd number of cards i.e., { 1, 2, 3 } we can split it like { 1, 2 } & { 3 } (bigger-half first) leading to { 1, 3, 2 }
(i.e., n is split up as Ceil[n/2] & n-Ceil[n/2])
The question my friend asked me was:
HOW many such split-joins are needed to get the original deck back?
And that got me wondering:
If the deck has n cards, what is the number of split-joins needed if:
n is even ?
n is odd ?
n is a power of 2? [I found that we then need log2(n) split-joins...]
(Feel free to explore different scenarios like that.)
Is there a simple pattern/formula/concept correlating n and the number of split-joins required?
I believe this is a good thing to explore in Mathematica, especially since it provides the Riffle[] function.
To quote MathWorld:
The numbers of out-shuffles needed to return a deck of n=2, 4, ... to its original order are 1, 2, 4, 3, 6, 10, 12, 4, 8, 18, 6, 11, ... (Sloane's A002326), which is simply the multiplicative order of 2 (mod n-1). For example, a deck of 52 cards therefore is returned to its original state after eight out-shuffles, since 2^8 ≡ 1 (mod 51) (Golomb 1961). The smallest numbers of cards 2n that require 1, 2, 3, ... out-shuffles to return to the deck's original state are 1, 2, 4, 3, 16, 5, 64, 9, 37, 6, ... (Sloane's A114894).
The case when n is odd isn't addressed.
Note that the article also includes a Mathematica notebook with functions to explore out-shuffles.
If we have an odd number of cards n==2m-1, and if we split the cards such that during each shuffle the first group contains m cards, the second group m-1 cards, and the groups are joined such that no two cards of the same group end up next to each other, then the number of shuffles needed is equal to MultiplicativeOrder[2, n].
To show this, we note that after one shuffle the card which was at position k has moved to position 2k for 0<=k<m and to 2k-2m+1 for m<=k<2m-1, where 0<=k<2m-1. Written modulo n==2m-1, this means that the new position is Mod[2k, n] for all 0<=k<n. Therefore, for each card to return to its original position we need N shuffles, where N is such that Mod[2^N k, n]==Mod[k, n] for all 0<=k<n, from which it follows that N is any multiple of MultiplicativeOrder[2, n].
Note that due to symmetry the result would have been exactly the same if we had split the deck the other way around, i.e. the first group always contains m-1 cards and the second group m cards. I don't know what would happen if you alternate, i.e. for odd shuffles the first group contains m cards, and for even shuffles m-1 cards.
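The multiplicative-order claims for both the even and odd cases are easy to sanity-check with a small simulation. A Python sketch using the question's bigger-half-first split (function names are my own):

```python
def riffle(deck):
    """One split-join: split into ceil(n/2) and floor(n/2) halves
    and interleave them, first half's card first."""
    m = (len(deck) + 1) // 2
    a, b = deck[:m], deck[m:]
    out = []
    for i in range(m):
        out.append(a[i])
        if i < len(b):
            out.append(b[i])
    return out

def shuffles_to_restore(n):
    """Number of split-joins until a deck of n cards returns to its order."""
    deck = list(range(n))
    d, count = riffle(deck), 1
    while d != deck:
        d = riffle(d)
        count += 1
    return count
```

For n = 52 this returns 8, matching the MathWorld quote, and for odd n it agrees with MultiplicativeOrder[2, n] (e.g. 4 shuffles for n = 5).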
There's old work by magician/mathematician Persi Diaconis about restoring the order with perfect riffle shuffles. Ian Stewart wrote about that work in one of his 1998 Scientific American Mathematical Recreations columns; see, e.g.: http://www.whydomath.org/Reading_Room_Material/ian_stewart/shuffle/shuffle.html
Old question, I know, but it's strange that no one put up an actual Mathematica solution:
countrifflecards[deck_] := Module[{n = Length@deck, ct, rifdeck},
  ct = 0;
  rifdeck = Riffle @@ Partition[#, Ceiling[n/2], Ceiling[n/2], {1, 1}, {}] &;
  NestWhile[(++ct; rifdeck[#]) &, deck, #2 != deck &, 2]; ct]
This handles even and odd cases:
countrifflecards[RandomSample[Range[#], #]] & /@ Range[2, 52, 2]
{1, 2, 4, 3, 6, 10, 12, 4, 8, 18, 6, 11, 20, 18, 28, 5, 10, 12, 36,
12, 20, 14, 12, 23, 21, 8}
countrifflecards[RandomSample[Range[#], #]] & /@ Range[3, 53, 2]
{2, 4, 3, 6, 10, 12, 4, 8, 18, 6, 11, 20, 18, 28, 5, 10, 12, 36, 12,
20, 14, 12, 23, 21, 8, 52}
You can readily show that if you add a card in the odd case, the extra card stays on the bottom and does not change the sequence; hence the odd-case result is just the even result for n+1.
ListPlot[{#, countrifflecards[RandomSample[Range[#], #]]} & /@ Range[2, 1000]]