Assume you are given t1, t2, t3, ..., tn tasks to finish on each day. Once you start working, you can only finish c1, c2, c3, ..., cn tasks on the 1st, 2nd, 3rd, ... consecutive working day until you spend 1 day resting, which resets the capacity back to c1. You can spend multiple days resting too, but you can only do the tasks that are given to you on that particular day. For example:
T[] = {10, 1, 4, 8} given tasks;
C[] = {8, 4, 2, 1} is the capacity of doing tasks for each day.
For this example, the optimal solution is to take a break on the 3rd day. That way you can complete 17 tasks in 4 days:
1st day 8 (maximum 10 tasks, but c1=8)
2nd day 1 (maximum 1 task, c2=4)
3rd day 0 (rest to reset to c1)
4th day 8 (maximum 8 tasks, c1=8)
Any other schedule would result in fewer tasks getting done.
I'm trying to find the recurrence relation for this dynamic programming problem. Can anyone help me? I found this question, but mine is different because of the decreasing work capacity and the different number of tasks given each day. Reference
If I got you right, you have an amount of tasks to do, t(i), for every day i. You also have a given capacity sequence c(j) for the current streak day j, where j is reset to 0 if no task was done that day. The goal is to maximize the number of solved tasks.
The naive approach is to store, for each day i and every streak state j, how many tasks have been done. Fill in the data for the first day. Then, for every following day, fill in the values for the "no break" case, which is
value(i-1, j-1) + min(t(i), c(j)). To fill the "break" entry (j = 0), choose the maximum from the previous day. Repeat until the last day. Finally, choose the highest value and trace back the path.
Example for above
Memory consumption is pretty easy: O(number of days * number of states).
If you are only interested in the value and not the schedule, the memory consumption drops to O(number of states).
Time consumption is a bit more complex, so let's write some pseudocode:
For each day i
    For each possible state j
        add and write values
    choose maximum from previous day for break state
choose maximum
For each day
    trace back path
The choose-maximum function has a complexity of O(number of states).
So this pseudocode results in a time consumption of O(number of days * number of states) as well.
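For reference, here is a minimal Python sketch of this table-filling scheme (the function and variable names are mine, purely illustrative). It only returns the value, so memory stays at O(number of states):

    def max_tasks(t, c):
        # best[j] = most tasks finished so far when the current day ends a streak
        # of j consecutive working days (j = 0 means the current day was a rest day).
        NEG = float("-inf")
        best = [0] + [NEG] * len(c)              # before day 1 we are fully rested
        for tasks_today in t:
            new = [max(best)] + [NEG] * len(c)   # option 1: rest, streak resets
            for j in range(1, len(c) + 1):       # option 2: work, streak grows to j
                if best[j - 1] > NEG:
                    new[j] = best[j - 1] + min(tasks_today, c[j - 1])
            best = new
        return max(best)

    print(max_tasks([10, 1, 4, 8], [8, 4, 2, 1]))   # -> 17, matching the example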
Let's say you have different jobs that you need to run on a regular basis (for example, you want to make API calls to different endpoints).
Let's say you need to hit two different endpoints and you want your calls to be as far away in time from each other as possible.
Example: You have two jobs, one is run once a minute, another is run twice a minute.
Solution: Start job A with interval of 60 seconds, wait 15 seconds, start job B with interval of 30 seconds.
This way the jobs will run at seconds: 0 (job A), 15 (job B), 45 (job B), 60 (job A), 75 (job B), 105 (job B), 120 (job A), ..., making the minimum interval between API calls 15 seconds while maintaining the call frequency that we need.
Can you think of an algorithm for these cases that will give optimal start times for each job, so that the minimum time difference between calls is maximized? Ideally this algorithm could handle more than two jobs.
Assume we don't need to wait for the job to be finished to run it once again.
Thanks
Here is my solution if we allow the intervals to be slightly unequal.
Suppose that our calls are A[0], A[1], ..., A[n] with frequencies of f[0], f[1], ..., f[n] where the frequencies are all in the same unit. For example 60/hour, 120/hour, etc.
The total frequency with which events happen will be f = f[0] + f[1] + ... + f[n], which means that some event will be scheduled every hour/f time apart. The question is which one will happen when.
The way to imagine this is as a row of buckets filling with water. At each step we dump a unit of water from the fullest bucket in front of us.
Since we don't actually care where we start, let's initialize a vector of fill levels, fill[0], fill[1], ..., fill[n], with random numbers. Now our algorithm looks like this pseudocode:
Every hour/f time apart:
    for each i in 0..n:
        fill[i] += f[i]/f
    i_choice = (select i from 0..n with the largest fill[i])
    fill[i_choice] -= 1
    Do event A[i_choice]
This leads to events spaced as far apart as possible, but with repeating events happening in a slightly uneven rhythm. In your example it leads to an event every 20 seconds, following the pattern ...ABBABBABBABB....
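In case a runnable version helps, here is a small Python sketch of that pseudocode (the function and variable names are mine):

    def schedule(freqs, names, steps):
        # Every 1/sum(freqs) time units, fire the job whose bucket is fullest.
        total = sum(freqs)
        fill = [0.0] * len(freqs)            # could also start from random offsets
        events = []
        for step in range(steps):
            for i, f in enumerate(freqs):
                fill[i] += f / total         # every bucket gains its share
            i_choice = max(range(len(freqs)), key=lambda i: fill[i])
            fill[i_choice] -= 1.0            # drain the fullest bucket
            events.append((step / total, names[i_choice]))
        return events

    # Job A once a minute, job B twice a minute: one event every 20 seconds,
    # settling into the ...ABBABB... rhythm described above.
    for t, name in schedule([1 / 60, 2 / 60], ["A", "B"], 9):
        print(f"{t:5.0f} s  {name}")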
I am attempting at solving a Job Selection problem using Dynamic Programming. The problem is as follows:
- There is one job offering every day with varying payouts every day
- You cannot work three days in a row (if you work on day 1 and 2, you must take a break on day 3)
- Come up with a job schedule to work on to maximize the amount of money you make
I have formalized the input and output of the problem as follows:
Input: P[1...n] a list of n positive numbers
Output: m, the maximum possible payout, and A, a set of indices from {1, ..., n} such that if i is in A and i+1 is in A, then i+2 is not in A. m is equal to the sum of P[i] over all i in A.
I am stuck on the thought process of making a self-reduction, and subsequently a dynamic programming algorithm in order to calculate the maximum earnings.
Any assistance is highly appreciated - thanks!
Usually dynamic programming is relatively straightforward once you decide how much state you need to take account of at each point, and your solution is efficient or not depending on whether your choice of state is good.
Here I would suggest that the state at each point is whether it is 0, 1, or 2 days since the last break. So for each day and 0,1,2 days since a break I calculate the max possible payout up to and including that day, given that it is 0,1,2 days since a break.
For 0 days since a break the max payout is the max possible payout for any state on the previous day. There is no contribution for that day, since you are taking a break.
For 1 day since a break, the max payout is the payout for that day plus the max possible payout for the previous day in the state of 0 days since a break.
For 2 days since a break, the max payout is the payout for the current and previous days plus the max possible payout for two days ago in the state of 0 days since the last break.
So you can calculate the max payouts from left to right, making use of previous calculations, and the overall max is the max payout associated with any state on the final day.
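For what it's worth, here is a compact Python sketch of this three-state recurrence (the example payouts are made up for illustration):

    def max_payout(P):
        # best[k] = max payout so far when today is k days since the last break
        # (k = 0: today is a break day; k = 1 or 2: worked the last k days).
        NEG = float("-inf")
        best = [0, NEG, NEG]
        for p in P:
            best = [max(best),       # take a break today
                    best[0] + p,     # work today, yesterday was a break
                    best[1] + p]     # work today, second working day in a row
        return max(best)

    print(max_payout([5, 9, 1, 7, 8]))   # 29: work days 1, 2, 4, 5 and skip day 3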
I think it would be easier to formulate the answer in this way:
Solutions:
X = [x1, x2, ..., xn] in {0,1}^n such that xi + x(i+1) + x(i+2) <= 2 for every i with 1 <= i and i+2 <= n.
Maximize:
f(X) = Sum(xi * vi) for i in [1, n], where vi is the payout of working day i.
Then a recursive algorithm has to decide whether or not to work each day so as to maximize the function, taking into account the constraints above. That is a very simple basic schema for DP.
Mcdowella explains the choice of states and transitions pretty well for this particular DP problem. The only thing left to add is a graph representation. Hope it helps.
Job Selection DP
I have been working on this question and can't seem to find the right answer. Can someone please help me with this?
We are given N jobs [1, ..., N]. We get a salary S(i) >= 0 for getting job i done, and a deduction D(i) >= 0 that accrues for each day that passes.
We need T(i) days to complete job i. If job i is finished on day d, we get S(i) - d*D(i) in reward. The reward can be negative if d is too large.
We can switch jobs in the process and work on jobs in any order, meaning if we start job 1 that takes 5 days on day 1, we don't have to spend 5 consecutive days working on job 1.
How can we decide the best schedule of the jobs, so that we can complete all the jobs and get maximum salary?
I think shapiro is right. You need to determine an appropriate weighted cost formula for each task. It has to take into account the days remaining, the per day deduction, and maybe total deduction.
Once you have the weighted cost you can sort the task list by the weighted cost and perform one day of work on the first task in the list (should be the one that will cost the most if not completed). Then recalculate the weighted cost for all the tasks now that a day has passed, sort the list, and repeat until all tasks are complete.
Generally when you are optimizing schedules in the real world this is the approach. Figure out which task should be worked on first, do some work on it, then recalculate to see if you should switch tasks or keep working on the current one.
Following the above discussion:
For each job i, calculate the one-day delay cost as X(i) = D(i) / T(i) and order the jobs by it, largest first. You could maybe even just order by D(i), since when you choose one job you are not choosing the others, so it makes sense to do the job with the most expensive deduction first. Perform the jobs in this order to minimize the deduction fees.
Again, this assumes that S(i) is a fixed reward for each job, independent of the exact day it is finished, and that all jobs need to be performed.
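To make that ordering concrete, here is a small Python sketch of the D(i)/T(i) rule (the job data and names are invented for illustration); it assumes every job runs to completion without switching:

    def schedule_jobs(jobs):
        # jobs: list of (name, T, D) with duration T and per-day deduction D.
        # Order by D/T, largest first, and report the total deduction.
        order = sorted(jobs, key=lambda job: job[2] / job[1], reverse=True)
        day = deduction = 0
        for name, T, D in order:
            day += T              # this job finishes on day `day`
            deduction += day * D  # it was deducted D for every day until then
        return [name for name, _, _ in order], deduction

    print(schedule_jobs([("A", 5, 2), ("B", 1, 3), ("C", 3, 1)]))
    # (['B', 'A', 'C'], 24)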
First, forget about S(i): you are doing all the jobs, so you get all the rewards anyway.
Second, there is no point in interrupting a task and switching to another. Let's say you have jobs A and B. The deduction for the one that finishes last is the same either way (it will take T(A) + T(B) days to finish it regardless of how you schedule). The deduction for the other job can only increase if you switch, because it will take longer to finish it. So you are best off dropping the switch.
Now the problem is to order the tasks so that you get minimum amount of penalty. I'm not sure what's next.
You can pick the first job to minimize T(x) * sum(D) (since you commit to doing job x, every other job will incur T(x) days of delay).
Or you can pick the last job, since you know you are going to pay sum(T) * D(x) for it (you know when it is going to finish).
One says order by T(x), the other says order by D(x), and they are both wrong.
Likely the solution is some dynamic programming in this space, but it escapes me at the moment.
Every day from 9am to 5pm, I am supposed to have at least one person at the factory supervising the workers and making sure that nothing goes wrong.
There are currently n applicants to the job, and each of them can work from time si to time ci, i = 1, 2, ..., n.
My goal is to minimize the amount of time during which more than one person is keeping watch over the workers at the same time.
(The applicants' available working hours are able to cover the time period from 9am to 5pm.)
I have proved that at most two people are needed for any instant of time to fulfill my needs, but how should I get from here to the final solution?
Finding the time periods where only one person is available for the job, and keeping those people, is my first step, but finding the next step is what troubles me.
The algorithm must run in polynomial-time.
Any hints (a certain type of data structure, maybe?) or references are welcome. Many thanks.
I think you can do this with dynamic programming by solving the sub-problem:
What is the minimum overlap time given that applicant i is the last worker and we have covered all times from start of day up to ci?
Call this value of the minimum overlap time cost(i).
You can compute the value of cost(i) by considering cases:
If si is equal to the start of day, then cost(i) = 0 (no overlap is required)
Otherwise, consider all previous applicants j whose end time cj is at least si (so that coverage has no gap). Set cost(i) to the minimum over such j of cost(j) + the overlap between i and j. Also set prev(i) to the value of j that attains the minimum.
Then the answer to your problem is given by the minimum of cost(k) for all values of k where ck is equal to the end of the day. You can work out the correct choice of people by backtracking using the values of prev.
This gives an O(n^2) algorithm.
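If it helps, here is a rough Python version of this recurrence (the applicant intervals are invented for illustration; times are hours of the day):

    def min_overlap(applicants, day_start=9, day_end=17):
        # applicants: (s, c) intervals that jointly cover [day_start, day_end].
        # cost[i] = least total overlap of a selection that covers
        # [day_start, c_i] and ends with applicant i.
        INF = float("inf")
        apps = sorted(applicants, key=lambda a: a[1])       # by end time
        cost, prev = [INF] * len(apps), [None] * len(apps)
        for i, (s_i, c_i) in enumerate(apps):
            if s_i <= day_start:
                cost[i] = 0                                 # no predecessor needed
                continue
            for j in range(i):                              # previous applicants only
                s_j, c_j = apps[j]
                if c_j < s_i:                               # would leave a gap
                    continue
                overlap = min(c_i, c_j) - max(s_i, s_j)
                if cost[j] + overlap < cost[i]:
                    cost[i], prev[i] = cost[j] + overlap, j
        best_cost, k = min((cost[i], i) for i, (_, c_i) in enumerate(apps)
                           if c_i >= day_end)
        chosen = []                                         # trace back with prev
        while k is not None:
            chosen.append(apps[k])
            k = prev[k]
        return best_cost, chosen[::-1]

    print(min_overlap([(9, 12), (10, 14), (11, 17), (13, 17)]))
    # (1, [(9, 12), (11, 17)]) -> one hour where two supervisors overlap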
I have the following scenario: (since I don't know of a way to show LaTeX, here's a screenshot)
I'm having some trouble conceptualizing what's going on here. If I were to program this, I would probably attempt to structure this as some kind of heap where each node represents a worker, from earliest-to-latest, then run Prim's/Kruskal's algorithm on it. I don't know if I'm on the right track with that idea, but I need to flesh out my understanding of this problem so I can do the following:
Describe in detail the greedy choice
Show that if there's an optimal solution for which the greedy choice was not made, then an exchange can be made to conform with the greedy choice
Know how to implement a greedy algorithm solution, and its running time
So where should I be going with this idea?
This problem is very similar in nature to "Roster Scheduling" problems. Think of the committee as, say, a set of 'supervisors', and you want to have a supervisor present whenever a worker is present. In this case, the supervisors come from the same set as the workers.
Here are some modeling ideas, and an Integer Programming formulation.
Time Slicing Idea
This sounds like a bad idea initially, but it works really well in practice. We are going to create a lot of "time instants" T_i, from the start time of the first shift to the end time of the very last shift. It sometimes helps to think of
T1, T2, T3, ..., TN as time instants (say) five minutes apart. For every Ti, at least one worker is working on a shift. Therefore, that time instant has to be covered. (Coverage means there has to be at least one member of the committee also working at time Ti.)
We really need to only worry about 2n Time instants: The start and finish times of each of the n workers.
Coverage Property Requirement
For every time instant Ti, we want a worker from the Committee present.
Let w1, w2...wn be the workers, sorted by their start times s_i. (Worker w1 starts the earliest shift, and worker wn starts the very last shift.)
Introduce a new Indicator variable (boolean):
Y_i = 1 if worker i is part of the committee
Y_i = 0 otherwise.
Visualization
Now think of a 0-1 matrix, where the rows are the SORTED workers, and the columns are the time instants...
Construct a Time-Worker Matrix (0/1)
        t1  t2  t3  t4  t5  t6  ...  tN
      -------------------------------------------
w1       1   1
w2           1   1
w3           1   1   1
w4               1   1   1
...
...
wn                           1   1   1   1
      -------------------------------------------
Total    2   4   3  ...  ...     1   2   4   5
So the problem is to make sure that, for each column, at least one worker is selected to be part of the committee. The Total row shows the number of candidates for the committee at each time instant.
An Integer Programming based formulation
Objective: Minimize Sum(Y_i)
Subject to:
Y1 + Y2 >= 1 # coverage for time t1
Y1 + Y2 + Y3 >= 1 # coverage for time t2
...
More generally, the constraints are:
# Set-covering constraint for time instant T_j
Sum of Y_i over all workers i who are working at time T_j >= 1
Y_i binary for all i
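If you want to try this quickly, here is a small sketch of the formulation using PuLP in Python (the shift data is made up and the names are mine, purely illustrative):

    import pulp

    # Hypothetical shifts as (start, end) hours; the real data would come from
    # the n workers' start/finish times. The time instants are the shift
    # endpoints (the very last end time needs no coverage).
    shifts = [(9, 12), (10, 14), (11, 17), (13, 17)]
    day_end = max(e for _, e in shifts)
    instants = sorted({t for s, e in shifts for t in (s, e)} - {day_end})

    prob = pulp.LpProblem("committee", pulp.LpMinimize)
    Y = [pulp.LpVariable(f"Y_{i}", cat="Binary") for i in range(len(shifts))]
    prob += pulp.lpSum(Y)                          # minimize committee size

    for t in instants:                             # one set-covering constraint per instant
        prob += pulp.lpSum(Y[i] for i, (s, e) in enumerate(shifts)
                           if s <= t < e) >= 1

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    print([i for i in range(len(shifts)) if Y[i].value() > 0.5])   # e.g. [0, 2]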
Preprocessing
This integer program, if attempted without preprocessing, can be very difficult and end up choking the solvers. But in practice there are quite a number of preprocessing ideas that can help immensely.
- Make any forced assignments. If there is ever a time instant with only one worker working, that worker has to be in the committee C.
- Separate into nice subproblems. Look at the time-worker matrix. If there are nice 'rectangles' in it that can be cut out without impacting any other time instant, then that is a wholly separate sub-problem to solve. This makes the solver go much, much faster.
- Identical shifts: if lots of workers have exactly the same start and end times, you can simply choose ANY one of them (say, the lexicographically first worker, WLOG) and remove all the other workers from consideration. (Makes a ton of difference in real-life situations.)
- Dominating shifts: if one worker starts before and stays later than another worker, the 'dominating' worker can stay and all the 'dominated' workers can be removed from consideration for C.
- All the identical rows (and columns) in the time-worker matrix can be fused; you only need to keep one of them. (De-duping.)
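As a tiny illustration of the identical-shifts and dominating-shifts ideas, here is one possible preprocessing pass in Python (the shift data is invented):

    def preprocess(shifts):
        # Drop duplicate shifts and shifts contained in another shift
        # before handing the set-covering IP to a solver.
        unique = sorted(set(shifts), key=lambda se: (se[0], -se[1]))
        kept, latest_end = [], float("-inf")
        for s, e in unique:          # sorted by start; longer shift first on ties
            if e > latest_end:       # not contained in any already-kept shift
                kept.append((s, e))
                latest_end = e
        return kept

    print(preprocess([(9, 12), (9, 12), (10, 11), (10, 14)]))
    # [(9, 12), (10, 14)] -- the duplicate and the dominated (10, 11) are dropped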
You could throw this into an IP solver (CPLEX, Excel, lp_solve etc.) and you will get a solution, if the problem size is not an issue.
Hope some of these ideas help.