I am attempting to solve a Job Selection problem using Dynamic Programming. The problem is as follows:
- There is one job offered each day, and the payout varies from day to day
- You cannot work three days in a row (if you work on days 1 and 2, you must take a break on day 3)
- Come up with a work schedule that maximizes the amount of money you make
I have formalized the input and output of the problem as follows:
Input: P[1...n] a list of n positive numbers
Output: m, the maximum possible payout, and A, a subset of the indexes {1, ..., n} such that if i is in A and i+1 is in A, then i+2 is not in A, where m equals the sum of P[i] over all i in A.
I am stuck on the thought process of formulating a self-reduction, and subsequently a dynamic programming algorithm, to calculate the maximum earnings.
Any assistance is highly appreciated - thanks!
Usually dynamic programming is relatively straightforward once you decide how much state you need to keep track of at each point, and how efficient your solution is depends on whether your choice of state is good.
Here I would suggest that the state at each point is whether it has been 0, 1, or 2 days since the last break. So for each day, and for each of the states 0, 1, or 2 days since a break, I calculate the max possible payout up to and including that day, given that state.
For 0 days since a break, the max payout is the max possible payout for any state on the previous day. There is no contribution from that day, since you are taking a break.
For 1 day since a break, the max payout is the payout for that day plus the max possible payout for the previous day in the state of 0 days since a break.
For 2 days since a break, the max payout is the payouts for the current and previous days plus the max possible payout for two days ago in the state of 0 days since a break.
So you can calculate the max payouts from left to right, making use of previous calculations, and the overall max is the max payout associated with any state on the final day.
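To make this concrete, here is a minimal bottom-up sketch of the DP described above, assuming the payouts sit in a Python list P; it returns only the maximum total (keeping a back-pointer per state would recover the schedule A):

def max_payout(P):
    # best[s] = best total payout so far, where s is the number of
    # consecutive working days ending at the day just processed (0, 1 or 2)
    NEG = float('-inf')
    best = [0, NEG, NEG]          # before day 1: zero days worked in a row
    for p in P:
        rest = max(best)          # state 0: take a break today
        first = best[0] + p       # state 1: work today, rested yesterday
        second = best[1] + p      # state 2: second working day in a row
        best = [rest, first, second]
    return max(best)

print(max_payout([5, 9, 1, 7, 7, 7]))   # prints 28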
I think it would be easier to formulate the answer in this way:
Solutions:
X = [x1, x2, ..., xn] in {0,1}^n such that x(i) + x(i+1) + x(i+2) <= 2 for every i with 1 <= i and i+2 <= n.
Maximize:
f(X) = Sum(xi * vi) for i in [1, n], where vi is the payout for working day i.
Then the recursive algorithm has to decide, for each day, whether or not to work it so as to maximize the function, taking into account the constraints above. That is a very simple basic schema for DP.
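If it helps, this is one way that self-reduction could look as memoized recursion, under the assumption that the extra state is the length of the current working streak (the names and structure here are only illustrative):

from functools import lru_cache

def max_payout_recursive(P):
    n = len(P)

    @lru_cache(maxsize=None)
    def best(i, streak):
        # i: index of the day to decide; streak: consecutive days worked just before day i
        if i == n:
            return 0
        rest = best(i + 1, 0)                                 # do not work day i
        if streak < 2:
            return max(rest, P[i] + best(i + 1, streak + 1))  # or work day i
        return rest                                           # forced break after two working days

    return best(0, 0)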
Mcdowella explains the choice of states and transitions pretty well for this particular DP problem. The only thing left to add is a graph representation. Hope it helps.
[Image: Job Selection DP graph]
Assume you are given t1, t2, t3, ..., tn tasks to finish on each day. Once you start working, you can only finish c1, c2, c3, ..., cn tasks on successive days until you spend a day resting. You can spend multiple days resting too, but you can only do the tasks that are given to you that day. For example:
T[] = {10, 1, 4, 8} are the given tasks;
C[] = {8, 4, 2, 1} is the capacity for each successive working day since the last rest.
For this example, the optimal solution is to take a break on the 3rd day. That way you can complete 17 tasks in 4 days:
1st day 8 (maximum 10 tasks, but c1=8)
2nd day 1 (maximum 1 task, c2=4)
3rd day 0 (rest to reset to c1)
4th day 8 (maximum 8 tasks, c1=8)
Any other schedule would result in fewer tasks getting done.
I'm trying to find the recurrence relation for this dynamic programming problem. Can anyone help me? I found this question, but mine is different because of the decreasing work capacity and the different number of tasks each day. Reference
If I understood you correctly, you have an amount of tasks to do, t(i), for every day i. You also have a given capacity sequence c(j) for the current streak day j, where j is reset to 0 if no task was done that day. The goal is to maximize the number of solved tasks.
The naive approach is to store, for each day i and every state j, how many tasks were done. Fill in the data for the first day. Then, for every following day, fill in the values for the "no break" case, which is value(i-1, j-1) + min(t(i), c(j)). Choose the maximum from the previous day to fill the "break" entry. Repeat until the last day. Choose the highest value and trace back the path.
[Image: worked example of the table for the instance above]
Memory consumption is easy to state: O(number of days * number of states).
If you are only interested in the value and not the schedule, the memory consumption would be O(number of states).
Time consumption is a bit more complex, so let's write some pseudocode:
For each day i
    For each possible state j
        add and write values
    choose maximum from previous day for the break state
choose maximum
For each day
    trace back path
The choose-maximum function has a complexity of O(number of states).
This pseudocode results in a time consumption of O(number of days * number of states) as well.
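A minimal Python sketch of that table filling, assuming T holds the tasks offered each day and C the capacities per streak day; only the value is kept here, so tracing the schedule back would need the full table:

def max_tasks(T, C):
    NEG = float('-inf')
    # dp[j] = most tasks done so far, where j is the current working streak (0 = rested today)
    dp = [0] + [NEG] * len(C)
    for t in T:
        new = [max(dp)] + [NEG] * len(C)           # "break" entry: streak resets to 0
        for j in range(1, len(C) + 1):             # "no break" entries
            if dp[j - 1] > NEG:
                new[j] = dp[j - 1] + min(t, C[j - 1])
        dp = new
    return max(dp)

print(max_tasks([10, 1, 4, 8], [8, 4, 2, 1]))      # prints 17, matching the example above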
We're given an array containing the prices of apples through N days.
After an apple has been bought, it needs to be consumed within D days.
Another constraint is that we can consume ONLY 1 apple a day.
So we have to strike a balance between buying apples when the price is cheap and not buying so many that they spoil within D days before we can consume them all.
Can anyone suggest which algorithm could be the best for an optimal outcome?
Edit: Yes, it's necessary to consume an apple every day, and only 1 apple a day.
To optimize expenses, we need to know the minimum in a sliding window of length k (the D days within which a bought apple must be eaten), ending at every day.
There is an O(N) algorithm providing the needed results using a deque data structure.
Python code below. The commented-out mins list would contain the index of the minimum value in each window (for reference). Every day we increment the buy count for the day with the cheapest price within the last k days.
from collections import deque

def mininslidingwindow(A, k):
    #mins = []
    buys = [0] * len(A)
    deq = deque()
    for i in range(len(A)):
        if (len(deq) > 0) and (deq[0] <= i - k):
            deq.popleft()   # too old an index
        while len(deq) > 0 and A[deq[-1]] >= A[i]:
            deq.pop()       # remove elements having no chance to become a minimum
        deq.append(i)
        #mins.append(deq[0])   # deque head is the current min
        buys[deq[0]] += 1
    return buys

print(mininslidingwindow([2,1,5,7,2,8,4,3,4,2], 3))
>> [1, 3, 0, 0, 3, 0, 0, 2, 0, 1]
Assumption: we need to eat an apple every day.
A)
Algorithm Steps:
Preprocess the spans of prices to know up to which future day the price on day i remains the minimum (a sketch of this preprocessing is shown after these steps). This can be done in O(N) for all N days, where N is the total number of days we want to plan for.
Once we have the span into the future for every day, we can greedily buy apples on day i, keeping in mind the constraint of D days. At each day we see whether we can extend the number of days we have covered, and buy as many apples as are required for those days. We extend only if the price on the given day is the minimum for those days. This process involves a single scan of the N days, hence O(N) time.
Time Complexity: the above problem can be solved in O(N) time, where N is the number of days of data provided.
Space Complexity: O(N) for the spans.
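For step 1, here is a hedged sketch of the span preprocessing using a monotonic stack; span[i] is taken to be the first later day with a strictly lower price (so prices[i] stays the minimum over days i .. span[i]-1), which is one reasonable way to realize the step described above:

def spans(prices):
    n = len(prices)
    span = [n] * n                 # n means no cheaper day follows
    stack = []                     # indices whose prices are in increasing order
    for i, p in enumerate(prices):
        while stack and prices[stack[-1]] > p:
            span[stack.pop()] = i  # day i is the first strictly cheaper day
        stack.append(i)
    return span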
B) Better algorithm for real time: maintain a data structure that keeps the data of the past D days and returns the minimum element. Every day, buy an apple at the minimum price that is still within the last D days, discarding prices older than D days from the data structure. Time complexity: O(N), space complexity: O(D).
Every day from 9am to 5pm, I am supposed to have at least one person at the factory supervising the workers and making sure that nothing goes wrong.
There are currently n applicants to the job, and each of them can work from time si to time ci, i = 1, 2, ..., n.
My goal is to minimize the total time during which more than one person is keeping watch of the workers at the same time.
(The applicants' available working hours are able to cover the time period from 9am to 5pm.)
I have proved that at most two people are needed at any instant of time to fulfill my needs, but how should I get from here to the final solution?
My first step is to find the time periods where only one person is available for the job and keep those applicants, but finding the next step is what troubles me.
The algorithm must run in polynomial-time.
Any hints (a certain type of data structure, maybe?) or references are welcome. Many thanks.
I think you can do this with dynamic programming by solving the sub-problem:
What is the minimum overlap time given that applicant i is the last worker and we have covered all times from start of day up to ci?
Call this value of the minimum overlap time cost(i).
You can compute the value of cost(i) by considering cases:
If si is equal to the start of day, then cost(i) = 0 (no overlap is required)
Otherwise, consider all applicants j whose intervals end before ci and reach at least si (so there is no gap in coverage). Set cost(i) to the minimum over such j of cost(j) + the overlap between i and j. Also set prev(i) to the value of j that attains the minimum.
Then the answer to your problem is given by the minimum of cost(k) for all values of k where ck is equal to the end of the day. You can work out the correct choice of people by backtracking using the values of prev.
This gives an O(n^2) algorithm.
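A rough Python sketch of this O(n^2) DP, assuming times are plain numbers (hours, so the day runs from 9 to 17) and making the no-gap condition cj >= si explicit; the names are illustrative only:

DAY_START, DAY_END = 9, 17            # 9am-5pm expressed in hours (an assumption)

def min_overlap(applicants):          # applicants: list of (s, c) availability intervals
    INF = float('inf')
    order = sorted(range(len(applicants)), key=lambda i: applicants[i][1])
    cost = [INF] * len(applicants)
    prev = [None] * len(applicants)
    answer, last = INF, None
    for i in order:
        si, ci = applicants[i]
        if si <= DAY_START:
            cost[i] = 0                          # i alone covers the start of the day
        for j in order:
            sj, cj = applicants[j]
            if cj >= ci:
                break                            # only earlier-ending shifts can precede i
            if cost[j] < INF and cj >= si:       # no gap between j's coverage and i's start
                overlap = cj - max(si, sj)       # time i and j are both on duty
                if cost[j] + overlap < cost[i]:
                    cost[i], prev[i] = cost[j] + overlap, j
        if ci >= DAY_END and cost[i] < answer:
            answer, last = cost[i], i
    return answer, last, prev         # follow prev from last to recover the chosen applicants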
Interview Question by a financial software company for a Programmer position
Q1) Say you have an array for which the ith element is the price of a given stock on day i.
If you were only permitted to buy one share of the stock and sell one share of the stock, design an algorithm to find the best times to buy and sell.
My Solution:
My solution was to make an array of the differences in stock prices between day i and day i+1 (arraysize-1 entries) and then use Kadane's algorithm to return the sum of the largest contiguous subarray. I would then buy at the start of that largest contiguous subarray and sell at its end.
I am wondering whether my solution is correct, and whether there are any better solutions out there.
Upon answering, I was asked a follow-up question, which I answered in exactly the same way.
Q2) Given that you know the future closing price of Company x for the next 10 days, design an algorithm to determine whether you should BUY, SELL or HOLD on every single day (you are allowed to make only 1 decision every day) with the aim of maximizing profit.
Eg: Day 1 closing price: 2.24
Day 2 closing price: 2.11
...
Day 10 closing price: 3.00
My Solution: Same as above
I would like to know whether there is any better algorithm out there to maximize profit, given that I can make a decision every single day.
Q1 If you were only permitted to buy one share of the stock and sell one share of the stock, design an algorithm to find the best times to buy and sell.
In a single pass through the array, determine the index i with the lowest price and the index j with the highest price. You buy at i and sell at j (selling before you buy, by borrowing stock, is in general allowed in finance, so it is okay if j < i). If all prices are the same you don't do anything.
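For what it's worth, a tiny sketch of that single pass (assuming, as stated, that selling before buying is allowed):

def best_single_trade(prices):
    lo = hi = 0                    # indices of the lowest and highest prices seen so far
    for i, p in enumerate(prices):
        if p < prices[lo]:
            lo = i
        if p > prices[hi]:
            hi = i
    return lo, hi                  # buy at lo, sell at hi (hi < lo means sell short first)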
Q2 Given that you know the future closing price of Company x for the next 10 days, design an algorithm to determine whether you should BUY, SELL or HOLD on every single day (you are allowed to make only 1 decision every day) with the aim of maximizing profit
There are only 10 days, and hence there are only 3^10 = 59049 different possibilities. Hence it is perfectly possible to use brute force. I.e., try every possibility and simply select the one which gives the greatest profit. (Even if a more efficient algorithm were found, this would remain a useful way to test the more efficient algorithm.)
Some of the solutions produced by the brute force approach may be invalid, e.g. it might not be possible to own (or owe) more than one share at once. Moreover, do you need to end up owning 0 stocks at the end of the 10 days, or are any positions automatically liquidated at the end of the 10 days? Also, I would want to clarify the assumption that I made in Q1, namely that it is possible to sell before buying to take advantage of falls in stock prices. Finally, there may be trading fees to be taken into consideration, including payments to be made if you borrow a stock in order to sell it before you buy it.
Once these assumptions are clarified it could well be possible to design a more efficient algorithm. E.g., in the simplest case, if you can only own one share and you have to buy before you sell, then you would have a "buy" at the first minimum in the series, a "sell" at the last maximum, and buys and sells at any minima and maxima in between.
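If it's useful, here is a hedged brute-force sketch over the 3^10 possibilities, under one particular reading of the assumptions (at most one share held at a time, no short selling, and a flat position after the last day):

from itertools import product

def best_plan(prices):
    best_profit, best_actions = float('-inf'), None
    for actions in product(('BUY', 'SELL', 'HOLD'), repeat=len(prices)):
        shares, profit, valid = 0, 0.0, True
        for price, action in zip(prices, actions):
            if action == 'BUY':
                if shares == 1:                  # cannot hold more than one share
                    valid = False
                    break
                shares, profit = 1, profit - price
            elif action == 'SELL':
                if shares == 0:                  # cannot sell what we do not own
                    valid = False
                    break
                shares, profit = 0, profit + price
        if valid and shares == 0 and profit > best_profit:
            best_profit, best_actions = profit, actions
    return best_profit, best_actions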
The more I think about it, the more I think these interview questions are as much about seeing how and whether a candidate clarifies a problem as they are about the solution to the problem.
Here are some alternative answers:
Q1) Work from left to right in the array provided. Keep track of the lowest price seen so far. At each element of the array, note the difference between the price there and the lowest price seen so far, update the lowest price seen so far, and keep track of the highest difference seen. My answer is to take the profit given by the highest difference: sell at that point, having bought at the lowest price seen up to that time.
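A small sketch of that left-to-right pass (the names are just illustrative):

def best_buy_sell(prices):
    lowest = 0                     # index of the lowest price seen so far
    buy, sell, best = 0, 0, 0.0
    for i in range(1, len(prices)):
        if prices[i] - prices[lowest] > best:
            best = prices[i] - prices[lowest]
            buy, sell = lowest, i
        if prices[i] < prices[lowest]:
            lowest = i
    return buy, sell, best         # best == 0 means no profitable trade exists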
Q2) Treat this as a dynamic programming problem, where the state at any point in time is whether or not you own a share. Work from left to right again. At each point, find the highest possible profit given that you own a share at the end of that point in time, and given that you do not own a share at the end of that point in time. You can work this out from the results of the previous time step: in one case, compare the options of buying a share (subtracting its price from the profit given that you did not own at the end of the previous point) and holding a share that you already owned at the previous point. In the other case, compare the options of selling a share (adding its price to the profit given that you owned at the previous point) and staying pat with the profit given that you did not own at the previous point. As is standard with dynamic programming, you keep the decisions made at each point in time and recover the correct list of decisions at the end by working backwards.
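A compact sketch of that two-state DP which returns just the best profit (keeping the per-day decisions, as described, would recover the BUY/SELL/HOLD list); it assumes at most one share is held at a time and one action per day:

def max_profit(prices):
    hold = -prices[0]              # best profit so far if we own a share after today
    flat = 0.0                     # best profit so far if we own nothing after today
    for price in prices[1:]:
        hold, flat = max(hold, flat - price), max(flat, hold + price)
    return flat                    # finish the period without a share

print(max_profit([1, 3, 5, 4, 6]))   # prints 6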
Your answer to question 1 was correct.
Your answer to question 2 was not correct. To solve this problem you work backwards from the end, choosing the best option at each step. For example, given the sequence { 1, 3, 5, 4, 6 }, since 4 < 6 your last move is to sell. Since 5 > 4, the previous move to that is buy. Since 3 < 5, the move on 5 is sell. Continuing in the same way, the move on 3 is to hold and the move on 1 is to buy.
Your solution for the first problem is correct. Kadane's algorithm has O(n) runtime complexity and is an optimal solution for the maximum subarray problem. A benefit of using this algorithm is that it is easy to implement.
Your solution for the second problem is wrong, in my opinion. What you can do is store the left and right indexes of the maximum-sum subarray you find. Once you have the maximum-sum subarray and its left and right indexes, you can call the function again on the left part, i.e. 0 to left - 1, and on the right part, i.e. right + 1 to Array.size - 1. So this is basically a recursive process, and you can design the structure of this recursion, with a base case, to solve the problem. By following this process you can maximize profit.
Suppose the prices are the array P = [p_1, p_2, ..., p_n]
Construct a new array A = [p_2 - p_1, p_3 - p_2, ..., p_n - p_{n-1}],
i.e. A[i] = p_{i+1} - p_i for i = 1, ..., n-1. The sum of a sub-array A[i..j] telescopes to p_{j+1} - p_i, which is exactly the profit from buying on day i and selling on day j+1.
Now go find the maximum sum sub-array in this.
OR
Find a different algorithm, and solve the maximum sub-array problem!
The problems are equivalent.
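A minimal Kadane-style sketch of that reduction, working directly on the consecutive differences (so an entirely decreasing series correctly yields 0, i.e. no trade):

def max_profit_kadane(prices):
    best = cur = 0.0
    for i in range(1, len(prices)):
        diff = prices[i] - prices[i - 1]      # A[i] from the construction above
        cur = max(diff, cur + diff)           # best sum of a subarray ending here
        best = max(best, cur)
    return best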
We are given N ranges of date offsets when N employees are present in an
organization. Something like
1-4 (i.e. the employee will come on the 1st, 2nd, 3rd and 4th day)
2-6
8-9
..
1-14
We have to organize an event on the minimum number of days such that each employee can attend the event at least twice. Please suggest an algorithm (probably greedy) to do this.
PS: Each event is a one-day event.
If your data is small, you can just brute-force it. Pick all possible combinations of 2 days. For each combination, try it and see if everyone can attend both. If not, pick all possible combinations of 3 days, see if everyone can attend 2 out of the 3, and so on. It's exponential, but may not be so bad for your purposes.
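A hedged sketch of that brute force, trying ever larger sets of candidate days until every employee is covered twice:

from itertools import combinations

def brute_force_schedule(ranges):
    # ranges: list of (first, last) day offsets per employee
    days = sorted({d for a, b in ranges for d in range(a, b + 1)})
    for k in range(2, len(days) + 1):
        for combo in combinations(days, k):
            if all(sum(a <= d <= b for d in combo) >= 2 for a, b in ranges):
                return combo       # the first hit is a smallest feasible set of days
    return None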
The greedy approach is to count how many people are at work each day, and pick the day with the maximum number of people. Repeating, count how many people are at work each day who don't already have two events scheduled and pick the day with the maximum number of people. Of course, don't pick the same day twice.
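A sketch of that greedy, assuming every employee's range contains at least two days so a feasible schedule exists:

def greedy_schedule(ranges):
    need = [2] * len(ranges)       # events each employee still needs to attend
    days = sorted({d for a, b in ranges for d in range(a, b + 1)})
    chosen = []
    while any(need) and len(chosen) < len(days):
        # pick the unused day attended by the most employees who still need events
        day = max((d for d in days if d not in chosen),
                  key=lambda d: sum(need[i] > 0 and a <= d <= b
                                    for i, (a, b) in enumerate(ranges)))
        chosen.append(day)
        for i, (a, b) in enumerate(ranges):
            if need[i] and a <= day <= b:
                need[i] -= 1
    return chosen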
I think this can be done with the following greedy approach on the intervals sorted by end date:
Maintain a num count for every interval (initialize all to 0): the number of already-placed events that fall inside that interval.
If num = 0, place two events on the last two days of this interval.
If num = 1, place one event on the last day of this interval.
If num = 2, two events have already been covered for this interval.
Placing an event in an interval can increase the num count of succeeding intervals.
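One possible sketch of that idea, placing events as late as possible within each interval in end-date order (illustrative only; the greedy's optimality still deserves a proof):

def greedy_by_end(ranges):
    events = set()
    for a, b in sorted(ranges, key=lambda r: r[1]):      # intervals sorted by end day
        have = sum(1 for d in events if a <= d <= b)     # events already inside this interval
        day = b
        while have < 2 and day >= a:
            if day not in events:
                events.add(day)                          # place as late as possible
                have += 1
            day -= 1
    return sorted(events)

print(greedy_by_end([(1, 4), (2, 6), (8, 9), (1, 14)]))  # prints [3, 4, 8, 9]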