most efficient algorithm to choose a building to renew - algorithm
I'm trying to solve the following problem, but I'm struggling a little.
In a certain street there is a row of 𝑛≥2 old buildings. To improve the appearance of the street, it is necessary to renew the facade
of some of the buildings. Out of every two adjacent buildings, at least one needs to be renovated. Given an array of length 𝑛
indicating the cost of renewing each building, we have to choose which buildings to renew so that the total cost is as low as possible.
For example, if the given array is (50,30,40,60,10,30,10), then the optimal solution is to renew
buildings number 2, 3, 5 and 7, and the cost will be 30+40+10+10=90.
any help?
I tried looking at the first two numbers in the array:
if they are equal, I choose the second, because then I can ignore the third number in the array.
I can't get any further than that...
If I understand correctly, this is a dynamic programming problem and we should be able to achieve a linear solution.
For each house, we can either renew it or not. But at a given house, there's not enough information available greedily to be sure whether we should renew it or not, so we have to try both options at each step.
If we choose to renew it, we incur its cost, but we have the freedom to ignore the costs of its adjacent neighbors.
If we choose not to renew it, we must accept the cost of renewing its adjacent neighbors.
The self-similarity required for DP shows up in the sense that once we've found the optimal cost for a given house i, we can solidify the minimal cost of the subarray to the left of i and we never need to visit it again.
In code, this means we'll have a DP cache that keeps track of the two choices (renovate or not) for each house i. For house i, if we renew it, we can take the lesser value of either renewing the house to its left or not. If we don't renew house i, we have no choice but to incur the cost of renewing the house to its left. Store both possibilities so i+1 can make an informed decision.
const minimizeRenewalCost = costs => {
  // dp[i][0] stores the min cost of renewing house i
  // dp[i][1] stores the min cost of not renewing house i
  const dp = [...Array(costs.length + 1)].map(e => [0, 0]);
  for (let i = 1; i <= costs.length; i++) {
    // renew i (pick min cost for either decision at i-1)
    dp[i][0] = Math.min(...dp[i-1]) + costs[i-1];
    // don't renew i (must renew i-1)
    dp[i][1] = dp[i-1][0];
  }
  return Math.min(...dp.at(-1));
};
const costs = [50, 30, 40, 60, 10, 30, 10];
console.log(minimizeRenewalCost(costs));
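The DP above returns only the minimum cost. If you also want to know which buildings to renew (the question's example expects buildings 2, 3, 5 and 7), you can walk the table backwards. Here is a minimal sketch of that in Python - my own illustrative code, not part of the answer above, with made-up names:

def minimize_renewal_cost(costs):
    n = len(costs)
    # dp[i][0]: cheapest valid plan for houses 0..i with house i renewed
    # dp[i][1]: cheapest valid plan for houses 0..i with house i skipped
    dp = [[0, 0] for _ in range(n)]
    dp[0] = [costs[0], 0]
    for i in range(1, n):
        dp[i][0] = min(dp[i-1]) + costs[i]
        dp[i][1] = dp[i-1][0]  # skipping i forces i-1 to be renewed
    # backtrack from the cheaper final state to recover the chosen set
    chosen = []
    state = 0 if dp[n-1][0] <= dp[n-1][1] else 1
    for i in range(n - 1, 0, -1):
        if state == 0:
            chosen.append(i)
            state = 0 if dp[i-1][0] <= dp[i-1][1] else 1
        else:
            state = 0  # the house before a skipped house must be renewed
    if state == 0:
        chosen.append(0)
    return min(dp[n-1]), sorted(c + 1 for c in chosen)  # 1-based indices

print(minimize_renewal_cost([50, 30, 40, 60, 10, 30, 10]))  # (90, [2, 3, 5, 7])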
Related
What is a greedy algorithm for this problem that is minimally optimal + proof?
The details are a bit cringe, fair warning lol: I want to set up meters on the floor of my building to catch someone; assume my floor is a number line from 0 to length L. The specific type of meter I am designing has a radius of detection of 4.7 meters in the -x and +x directions (a detection diameter of 9.4 meters). I want to set them up in such a way that if the person I am trying to find sets foot anywhere on the floor, I will know. However, I can't just set up a meter anywhere (it may annoy other residents); therefore, there are only n valid locations where I can set up a meter. Additionally, these meters are expensive and time-consuming to make, so I would like to use as few as possible. For simplicity, you can assume the meter has 0 width, and that each valid location is just a point on the number line aforementioned. What is a greedy algorithm that places as few meters as possible while being able to detect the entire hallway of length L like I want it to, or, if detecting the entire hallway is not possible, will output false for the set of n locations I have (and, if it isn't able to detect the whole hallway, still uses as few meters as possible while attempting to do so)? Edit: some clarification on being able to detect the entire hallway or not
Given:
L (hallway length)
a list of N valid positions to place a meter (p_0 ... p_N-1) of radius 4.7

You can determine in O(N) either a valid and minimal ("good") covering of the whole hallway or a proof that no such covering exists given the constraints, as follows (pseudo-code):

// total = total length;
// start = current starting position, initially 0
// possible = list of possible meter positions
// placed = list of (optimal) meter placements, initially empty
boolean solve(float total, float start, List<Float> possible, List<Float> placed):
    if (total - start <= 0):
        return true; // problem solved with no additional meters - woo!
    else:
        Float next = extractFurthestWithinRange(start, possible, 4.7);
        if (next == null):
            return false; // no way to cover end of hall: report failure
        else:
            placed.add(next); // placement decided
            return solve(total, next + 4.7, possible, placed);

Where extractFurthestWithinRange(float start, List<Float> candidates, float range) returns null if there are no candidates within range of start, or returns the last position p in candidates such that p <= start + range -- and also removes p, and all candidates c such that p >= c.

The key here is that, by always choosing to place a meter in the next position that a) leaves no gaps and b) is furthest from the previously-placed position, we are simultaneously creating a valid covering (= no gaps) and an optimal covering (= no possible valid covering could have used fewer meters - because our gaps are already as wide as possible). At each iteration, we either completely solve the problem, or take a greedy bite to reduce it to a (guaranteed) smaller problem.

Note that there can be other optimal coverings with different meter positions, but they will use the exact same number of meters as those returned from this pseudo-code. For example, if you adapt the code to start from the end of the hallway instead of from the start, the covering would still be good, but the gaps could be rearranged. Indeed, if you need the lexicographically minimal optimal covering, you should use the adapted algorithm that places meters starting from the end:

// remaining = length (starts at hallway length)
// possible = positions to place meters at, starting by closest to end of hallway
// placed = positions where meters have been placed
boolean solve(float remaining, List<Float> possible, Queue<Float> placed):
    if (remaining <= 0):
        return true; // problem solved with no additional meters - woo!
    else:
        // extracts points p up to and including p such that p >= remaining - range
        Float next = extractFurthestWithinRange2(remaining, possible, 4.7);
        if (next == null):
            return false; // no way to cover start of hall: report failure
        else:
            placed.add(next); // placement decided
            return solve(next - 4.7, possible, placed);
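For readers who prefer something runnable, here is a small iterative Python sketch of the same greedy idea - my own code with made-up names, not the answer's: at each step, place the furthest candidate whose detection range still reaches the covered prefix.

def cover_hallway(L, positions, radius=4.7):
    positions = sorted(positions)
    placed = []
    covered = 0.0  # the hallway is detected up to this coordinate
    i = 0
    while covered < L:
        best = None
        # furthest candidate whose range still reaches the covered prefix
        while i < len(positions) and positions[i] - radius <= covered:
            best = positions[i]
            i += 1
        if best is None:
            return None  # the gap at `covered` cannot be closed: no valid covering
        placed.append(best)
        covered = best + radius
    return placed

print(cover_hallway(10.0, [2.0, 5.0, 9.0]))  # [2.0, 9.0]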
To prove that your solution is optimal if it is found, you merely have to prove that it finds the lexicographically last optimal solution. And you do that by induction on the size of the lexicographically last optimal solution. The case of a zero-length floor and no monitor is trivial. Otherwise you demonstrate that you found the first element of the lexicographically last solution. Covering the rest of the line with the remaining elements is your induction step. Technical note: for this to work, you have to be allowed to place monitoring stations outside of the line.
Understanding the concise DP solution for best time to buy and sell stocks IV
The problem is the famous LeetCode problem (appearing in similar other contexts too): Best Time to Buy and Sell Stock with at most k transactions. I am trying to make sense of this DP solution. You can ignore the first part for large k. I don't understand why the dp part works.

class Solution(object):
    def maxProfit(self, k, prices):
        """
        :type k: int
        :type prices: List[int]
        :rtype: int
        """
        # for large k, greedy approach (ignore this part)
        if k >= len(prices) / 2:
            profit = 0
            for i in range(1, len(prices)):
                profit += max(0, prices[i] - prices[i-1])
            return profit
        # Don't understand this part
        dp = [[0]*2 for i in range(k+1)]
        for i in reversed(range(len(prices))):
            for j in range(1, k+1):
                dp[j][True] = max(dp[j][True], -prices[i] + dp[j][False])
                dp[j][False] = max(dp[j][False], prices[i] + dp[j-1][True])
        return dp[k][True]

I was able to derive a similar solution, but it uses two rows (dp and dp2) instead of just one row (dp in the solution above). To me it looks like the solution is overwriting values on itself, which doesn't look right. However, the answer works and passes LeetCode.
Let's put words to it:

for i in reversed(range(len(prices))):

For each price, going backwards in time, so we already know the later prices in advance.

for j in range(1, k+1):

For each possibility of considering this price as part of one of k two-price transactions.

dp[j][True] = max(dp[j][True], -prices[i] + dp[j][False])

If we consider this might be a purchase -- since we are going backwards in time, a purchase completes a transaction -- we choose the best of (1) having considered the jth purchase already (dp[j][True]) or (2) subtracting this price as a purchase and adding the best result we already have that includes the jth sale (-prices[i] + dp[j][False]).

dp[j][False] = max(dp[j][False], prices[i] + dp[j-1][True])

Otherwise, we might consider this as a sale (the first half of a transaction, since we're going backwards), so we choose the best of (1) the jth sale already considered (dp[j][False]), or (2) adding this price as a sale plus the best result we have so far for the first (j - 1) completed transactions (prices[i] + dp[j-1][True]).

Note that the first dp[j][False] refers to the jth "half-transaction," the sale if you will, since we are going backwards in time; it would have been set in an earlier iteration on a later price. We then can possibly overwrite it with our consideration of this price as a jth sale.
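To see why the in-place overwrite is safe, one can compare it against a version that snapshots the table each iteration. The sketch below is my own illustration, not from the answer above: the extra options the in-place version considers correspond to buying and selling at the same price, which contributes zero and never changes the max.

def max_profit_inplace(k, prices):
    dp = [[0, 0] for _ in range(k + 1)]
    for i in reversed(range(len(prices))):
        for j in range(1, k + 1):
            dp[j][True] = max(dp[j][True], -prices[i] + dp[j][False])
            dp[j][False] = max(dp[j][False], prices[i] + dp[j-1][True])
    return dp[k][True]

def max_profit_snapshot(k, prices):
    dp = [[0, 0] for _ in range(k + 1)]
    for i in reversed(range(len(prices))):
        prev = [row[:] for row in dp]  # state derived from later prices only
        for j in range(1, k + 1):
            dp[j][True] = max(prev[j][True], -prices[i] + prev[j][False])
            dp[j][False] = max(prev[j][False], prices[i] + prev[j-1][True])
    return dp[k][True]

prices = [3, 2, 6, 5, 0, 3]
assert max_profit_inplace(2, prices) == max_profit_snapshot(2, prices) == 7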
Dynamic Shortest Path
Your friends are planning an expedition to a small town deep in the Canadian north next winter break. They've researched all the travel options and have drawn up a directed graph whose nodes represent intermediate destinations and edges represent the roads between them. In the course of this, they've also learned that extreme weather causes roads in this part of the world to become quite slow in the winter and may cause large travel delays. They've found an excellent travel Web site that can accurately predict how fast they'll be able to travel along the roads; however, the speed of travel depends on the time of the year. More precisely, the Web site answers queries of the following form: given an edge e = (u, v) connecting two sites u and v, and given a proposed starting time t from location u, the site will return a value fe(t), the predicted arrival time at v. The Web site guarantees that fe(t) > t for every edge e and every time t (you can't travel backward in time), and that fe(t) is a monotone increasing function of t (that is, you do not arrive earlier by starting later). Other than that, the functions fe may be arbitrary. For example, in areas where the travel time does not vary with the season, we would have fe(t) = t + le, where le is the time needed to travel from the beginning to the end of the edge e.

Your friends want to use the Web site to determine the fastest way to travel through the directed graph from their starting point to their intended destination. (You should assume that they start at time 0 and that all predictions made by the Web site are completely correct.) Give a polynomial-time algorithm to do this, where we treat a single query to the Web site (based on a specific edge e and a time t) as taking a single computational step.

def updatepath(node):
    randomvalue = random.randint(0, 3)
    print(node, "to other node:", randomvalue)
    for i in range(0, n):
        distance[node][i] = distance[node][i] + randomvalue

def minDistance(dist, flag_array, n):
    min_value = math.inf
    for i in range(0, n):
        if dist[i] < min_value and flag_array[i] == False:
            min_value = dist[i]
            min_index = i
    return min_index

def shortest_path(graph, src, n):
    dist = [math.inf] * n
    flag_array = [False] * n
    dist[src] = 0
    for cout in range(n):
        # find the node index that has min cost
        u = minDistance(dist, flag_array, n)
        flag_array[u] = True
        updatepath(u)
        for i in range(n):
            if graph[u][i] > 0 and flag_array[i] == False and dist[i] > dist[u] + graph[u][i]:
                dist[i] = dist[u] + graph[u][i]
                path[i] = u
    return dist

I applied Dijkstra's algorithm, but it is not correct. What would I change in my algorithm to make it work with dynamically changing edges?
Well, the key point is that the function is monotonically increasing, and there is an algorithm which exploits this property: A*.

Accumulated cost: your prof wants you to use two distances; one is the accumulated cost (simply the cost so far plus the cost/time needed to move to the next node).

Heuristic cost: this is some predicted cost.

The plain Dijkstra approach would not work here because you are working with both a heuristic/predicted cost and an accumulated cost. Monotonicity (consistency) means h(A) <= cost(A, B) + h(B): if you move from node A to node B, the estimated total (heuristic + accumulated) should not decrease compared to the previous node (in this case A). If this property holds, then the first path which A* chooses is always a path to the goal and it never needs to backtrack.

Note: The power of this algorithm is totally based on how you predict the value. If you underestimate the value, that will be corrected by the accumulated value, but if you overestimate it, it may choose a wrong path.

Algorithm:

Create a min priority queue.
Insert the initial city into pq.
while (!pq.isEmpty() && !goalFound):
    Node min = pq.delMin() // returns the city whose distance (heuristic + accumulated) is minimal
    put all successors of min in pq // all cities you can reach; better to keep a list of visited
                                    // cities so the queue stays efficient by not holding duplicates

Keep doing this, and at the end you will either reach the goal or your queue will be empty.

Extra: here I implemented an 8-puzzle solver using A*; it can give you an idea about how costs are defined and how it works.

private void solve(MinPQ<Node> pq, HashSet<Node> closedList) {
    while (!(pq.min().getBoad().isGoal(pq.min().getBoad()))) {
        Node e = pq.delMin();
        closedList.add(e);
        for (Board boards : e.getBoad().neighbors()) {
            Node nextNode = new Node(boards, e, e.getMoves() + 1);
            if (!equalToPreviousNode(nextNode, e.getPreviousNode()))
                pq.insert(nextNode);
        }
    }
    Node collection = pq.delMin();
    while (!(collection.getPreviousNode() == null)) {
        this.getB().add(collection.getBoad());
        collection = collection.getPreviousNode();
    }
    this.getB().add(collection.getBoad());
    System.out.println(pq.size());
}

A link to full code is here.
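A complementary note: since fe(t) is monotone increasing (the FIFO property from the problem statement), a Dijkstra-style search on earliest arrival times also works; the only change to the question's code would be relaxing an edge (u, v) through fe(dist[u]) instead of dist[u] + weight. Below is a minimal Python sketch under that assumption - my own code, with the dictionary f standing in for the Web-site queries.

import heapq

def earliest_arrival(adj, f, src, dst):
    # adj: node -> list of neighbor nodes
    # f: (u, v) -> arrival-time function (one call = one Web-site query)
    dist = {src: 0}  # earliest known arrival time at each node
    pq = [(0, src)]
    while pq:
        t, u = heapq.heappop(pq)
        if u == dst:
            return t
        if t > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v in adj[u]:
            arrival = f[(u, v)](t)
            if arrival < dist.get(v, float("inf")):
                dist[v] = arrival
                heapq.heappush(pq, (arrival, v))
    return None  # dst unreachable

# toy example with season-independent roads: fe(t) = t + length
adj = {0: [1, 2], 1: [2], 2: []}
f = {(0, 1): lambda t: t + 5, (0, 2): lambda t: t + 12, (1, 2): lambda t: t + 3}
print(earliest_arrival(adj, f, 0, 2))  # 8, via 0 -> 1 -> 2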
Algorithm to calculate sum of points for groups with varying member count [closed]
Let's start with an example. In Harry Potter, Hogwarts has 4 houses with students sorted into each house. The same happens on my website, and I don't know how many users are in each house. It could be 20 in one house, 50 in another, and 100 in the third and fourth. Now, each student can earn points on the website, and at the end of the year, the house with the most points will win. But it's not fair to "only" do a sum of the points, as the house with 100 students will have a much higher chance to win, as they have more users to earn points. So I need to come up with an algorithm which is fair. You can see an example here: https://worldofpotter.dk/points What I do now is to sum all the points for a house, and then divide by the number of users who have earned more than 10 points. This is still not fair, though. Any ideas on how to make this calculation more fair? Things we need to take into account:
* The percent of users earning points in each house
* Few users earning LOTS of points
* Many users earning FEW points (it's not bad earning few points; it still counts towards the total points of the house)
Link to MySQL dump (with users, houses and points): https://worldofpotter.dk/wop_points_example.sql
Link to CSV of points only: https://worldofpotter.dk/points.csv
I'd use something like Discounted Cumulative Gain, which is used for measuring the effectiveness of search engines. The concept is as follows:

FUNCTION evalHouseScore (0_INDEXED_SORTED_ARRAY scores):
    score = 0;
    FOR (int i = 0; i < scores.length; i++):
        score += scores[i]/log2(i);
    END_FOR
    RETURN score;
END_FUNCTION;

This must be somehow modified, as this way of measuring focuses on the first results (and, as written, log2(i) is undefined for i = 0 and zero for i = 1; the offset below fixes that). As this is subjective, you should decide on the way you would modify it. Below I'll post the code with some constants which you should try with different values:

FUNCTION evalHouseScore (0_INDEXED_SORTED_ARRAY scores):
    score = 0;
    FOR (int i = 0; i < scores.length; i++):
        score += scores[i]/log2(i+K);
    END_FOR
    RETURN L*score;
END_FUNCTION

Consider changing the logarithm.

Tests:

int[] g = new int[] {758,294,266,166,157,132,129,116,111,88,83,74,62,60,60,52,43,40,28,26,25,24,18,18,17,15,15,15,14,14,12,10,9,5,5,4,4,4,4,3,3,3,2,1,1,1,1,1};
int[] s = new int[] {612,324,301,273,201,182,176,139,130,121,119,114,113,113,106,86,77,76,65,62,60,58,57,54,54,42,42,40,36,35,34,29,28,23,22,19,17,16,14,14,13,11,11,9,9,8,8,7,7,7,6,4,4,3,3,3,3,2,2,2,2,2,2,2,1,1,1};
int[] h = new int[] {813,676,430,382,360,323,265,235,192,170,107,103,80,70,60,57,43,41,21,17,15,15,12,10,9,9,9,8,8,6,6,6,4,4,4,3,2,2,2,1,1,1};
int[] r = new int[] {1398,1009,443,339,242,215,210,205,177,168,164,144,144,92,85,82,71,61,58,47,44,33,21,19,18,17,12,11,11,9,8,7,7,6,5,4,3,3,3,3,2,2,2,1,1,1,1};

The output for different offsets:

1182 1543 1847 2286
904 1231 1421 1735
813 1120 1272 1557
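A runnable Python version of the scoring function above, for experimenting with the constants - my own sketch; K = 2 keeps the denominator defined and positive for the first two ranks:

import math

def eval_house_score(scores, K=2, L=1.0):
    # scores: one house's points, highest first (0-indexed rank)
    return L * sum(s / math.log2(i + K) for i, s in enumerate(scores))

print(eval_house_score(sorted([46, 5, 1, 40], reverse=True)))  # ~74.17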
It sounds like some sort of constraint between the houses may need to be introduced. I might suggest finding the person that earned the most points out of all the houses and using it as the denominator when rolling up the scores. This guarantees the max value of a user's contribution is 1; then all the scores for a house can be summed and divided by the number of users to normalize the house's score. That should give you a reasonable comparison. It does introduce issues with low numbers of users in a house that are high achievers, in which case you may want to consider lower limits to the number of house members. Another technique may be to introduce handicap scores for users to balance the scales. The algorithm will most likely flex over time based on the data you receive. To keep it fair it will take some responsive action after the initial iteration; players can come up with some creative ways to make scoring systems work for them. Here is some pseudo-code in PHP that you may use:

<?php
$mostPointsEarned; // Find the user that earned the most points
$houseScores = [];
foreach ($houses as $house) {
    $numberOfUsers = 0;
    $normalizedScores = [];
    foreach ($house->getUsers() as $user) {
        $normalizedScores[] = $user->getPoints() / $mostPointsEarned;
        $numberOfUsers++;
    }
    $houseScores[] = array_sum($normalizedScores) / $numberOfUsers;
}
var_dump($houseScores);
You haven't given any examples of what the preferred outcome should be, or of situations you want to be immune against (3,2,1,1 compared to 5,2 etc.). It's also a pity you haven't provided us the dataset in some nice way to play with.

scala> val input = Map( // as seen on 2016-09-09 14:10 UTC on https://worldofpotter.dk/points
  'G' -> Seq(758,294,266,166,157,132,129,116,111,88,83,74,62,60,60,52,43,40,28,26,25,24,18,18,17,15,15,15,14,14,12,10,9,5,5,4,4,4,4,3,3,3,2,1,1,1,1,1),
  'S' -> Seq(612,324,301,273,201,182,176,139,130,121,119,114,113,113,106,86,77,76,65,62,60,58,57,54,54,42,42,40,36,35,34,29,28,23,22,19,17,16,14,14,13,11,11,9,9,8,8,7,7,7,6,4,4,3,3,3,3,2,2,2,2,2,2,2,1,1,1),
  'H' -> Seq(813,676,430,382,360,323,265,235,192,170,107,103,80,70,60,57,43,41,21,17,15,15,12,10,9,9,9,8,8,6,6,6,4,4,4,3,2,2,2,1,1,1),
  'R' -> Seq(1398,1009,443,339,242,215,210,205,177,168,164,144,144,92,85,82,71,61,58,47,44,33,21,19,18,17,12,11,11,9,8,7,7,6,5,4,3,3,3,3,2,2,2,1,1,1,1)
)
// and the results on the website were: 1. R 1951, 2. H 1859, 3. S 990, 4. G 954

Here is what I thought of:

def singleValuedScore(individualScores: Seq[Int]) = individualScores
  .sortBy(-_) // sort from most to least
  .zipWithIndex // add indices e.g. (best, 0), (2nd best, 1), ...
  .map { case (score, index) => score * (1 + index) } // here is the 'logic'
  .max

input.mapValues(singleValuedScore)
res: scala.collection.immutable.Map[Char,Int] = Map(G -> 1044, S -> 1590, H -> 1968, R -> 2018)

The overall positions would be:

Ravenclaw with 2018 aggregated points
Hufflepuff with 1968
Slytherin with 1590
Gryffindor with 1044

Which corresponds to the ordering on that web page: 1. R 1951, 2. H 1859, 3. S 990, 4. G 954.

The algorithm's output is the maximal product of a user's score and that user's rank within the house. This measure is not affected by the "long tail" of users having low scores compared to the active ones. There are no hand-set cutoffs or thresholds. You could experiment with the rank attribution (score * index, or score * Math.sqrt(index), or score / Math.log(index + 1), ...).
I take it that the fair measure is the number of points divided by the number of house members. Since you have the number of points, the exercise boils down to estimating the number of members.

We are in short supply of data here, as the only hint we have on member counts is the answers on the website. This makes us vulnerable to manipulation: members can trick us into underestimating their numbers. If the suggested estimation method "count respondents with points > 10" were known, houses would only encourage the best to do the test, to hide members from our count. This is a real problem, and the only thing I will do about it is to present a "manipulation indicator".

How could we then estimate member counts? Since we do not know anything other than test results, we have to infer the propensity to do the test from the actual results. And we have little else to assume than that we would have a symmetric result distribution (of the logarithm of the points) if all members tested. Now let's say the strong would-be respondents are more likely to actually test than weak would-be respondents. Then we could measure the extra dropout ratio for the weak by comparing the numbers of respondents in corresponding weak and strong test-point quantiles.

To be specific: of the 205 answers, there are 27 in the worst half of the overall weakest quartile, but 32 in the strongest half of the best quartile. So an extra 5 of the very weakest respondents have dropped out of an assumed all-testing symmetric population, and to adjust for this, we estimate the member count for this quantile by multiplying the number of responses in it by 32/27, about 1.2. Similarly, we have 29/26 for the next less-extreme half-quartiles and 41/50 for the two mid quartiles. So we would estimate members by simply counting the number of respondents, but multiplying the counts in the weak quantiles mentioned above by 1.2, 1.1 and 0.8 respectively.

If, however, any result distribution within a house were conspicuously skewed, which is not the case now, we would have to suspect manipulation and re-design our member count. For the sample at hand, these adjustments to member counts are minor and yield the same house ranks as just counting the respondents without adjustments.
I amused myself a little with your question and some Python programming on randomly generated data. As some people mentioned in the comments, you need to define what fairness is. If, as you said, you don't know the number of people in each of the houses, you can use the number of participations of each house; this motivates participation (it can be unfair depending on the number of people in each house, but as you said, you don't have that data in the first place). The important part of the code is the following:

import numpy as np
from numpy.random import randint # import random int

# initialize random seed
np.random.seed(4)

houses = ["Gryffindor", "Slytherin", "Hufflepuff", "Ravenclaw"]
houses_points = []

# generate random data for each house
for _ in houses:
    # houses_points.append(randint(0, 100, randint(60, 100)))
    houses_points.append(randint(0, 50, randint(2, 10)))

# count participation
houses_participations = []
houses_total_points = []
for house_id in range(len(houses)):
    houses_total_points.append(np.sum(houses_points[house_id]))
    houses_participations.append(len(houses_points[house_id]))

# sum the total number of participations
total_participations = np.sum(houses_participations)

# proposed model with weighted total participation points
houses_partic_points = []
for house_id in range(len(houses)):
    tmp = houses_total_points[house_id] * houses_participations[house_id] / total_participations
    houses_partic_points.append(tmp)

The results of this method are the following:

House Points per Participant
Gryffindor: [46 5 1 40]
Slytherin: [ 8 9 39 45 30 40 36 44 38]
Hufflepuff: [42 3 0 21 21 9 38 38]
Ravenclaw: [ 2 46]

Number of Participations per House
Gryffindor: 4
Slytherin: 9
Hufflepuff: 8
Ravenclaw: 2

House Total Points
Gryffindor: 92
Slytherin: 289
Hufflepuff: 172
Ravenclaw: 48

House Points weighted by a participation factor
Gryffindor: 16
Slytherin: 113
Hufflepuff: 59
Ravenclaw: 4

You'll find the complete file with printed results here: https://gist.github.com/silgon/5be78b1ea0b55a20d90d9ec3e7c515e5
You should introduce some more rules to define the fairness.

Idea 1: You could set up the rule that anyone has to earn at least 10 points to enter the competition. Then you can calculate the average points for each house.
Positive: Everyone needs to show some motivation.

Idea 2: Another approach would be to set the rule that from each house only the 10 best students will count for the competition.
Positive: Easy rule to calculate the points.
Negative: Students might become uninterested if they see they can't reach the top 10 places of their house.
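Both ideas fit in a few lines. Here is an illustrative Python sketch - my own, with made-up names - assuming points maps each house to the list of its members' scores:

def idea1_average(points, threshold=10):
    # average over the members who earned at least `threshold` points
    result = {}
    for house, scores in points.items():
        qualified = [s for s in scores if s >= threshold]
        result[house] = sum(qualified) / len(qualified) if qualified else 0
    return result

def idea2_top_n(points, n=10):
    # sum of each house's n best scores
    return {house: sum(sorted(scores, reverse=True)[:n])
            for house, scores in points.items()}

points = {"Gryffindor": [46, 5, 1, 40], "Ravenclaw": [2, 46]}
print(idea1_average(points))  # {'Gryffindor': 43.0, 'Ravenclaw': 46.0}
print(idea2_top_n(points))    # {'Gryffindor': 92, 'Ravenclaw': 48}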
From my point of view, your problem divides into a few points:

The best thing to do would be to re-assign the players to the different houses so that each house has the same number of players (as explained by @navid-vafaei). If you don't want to do that, because you believe it may hurt your game's popularity with players who are in a house they didn't want (you can change the choice of the Sorting Hat, at least in the movies or books), then you can sum the points of each house's students and divide by the number of students. You may simply exclude the students with a very low score, and perhaps also students with very low activity, because students who skip school might be expelled.

The most important part of your algorithm, for me, is whether or not you give points for all valuable things: in the Harry Potter story, the students earn points in the different subjects they chose at school, according to their scores. At the end of the year, there is a special award event. At that moment, the Director gives points for valuable things which cannot be evaluated in the school subjects, such as personal qualities (bravery, for example).
How to find the minimum value of M?
I'm trying to solve this problem: You have N relatives. You will talk to the ith relative for exactly Ti minutes. Each minute costs you 1 dollar. After the conversation, they will add a recharge of Xi dollars to your mobile. Initially, you have M dollars of balance on your mobile phone. Find the minimum value of M that you must have initially on your phone so that you don't run out of balance during any of the calls (encounter a negative balance).

Note: You can call relatives in any order. Each relative will be called exactly once.

Input:
N
T1 X1
T2 X2

2
1 1
2 1

Output:
2

This looked easy to me at first, but I'm not able to find the exact solution.

My initial thoughts: We have no problem where Xi > Ti, as it will not reduce our initial balance. We need to take care of the situation where we run into loss, i.e. Ti > Xi. But I am unable to form an expression which will result in the minimum initial value. I need guidance in approaching this problem to find the optimal solution.
UPDATE: The binary search approach seems to lead to a wrong result (as proved by the test case provided in the comment below by user greybeard), so this is another approach.

We maintain the difference between call cost and recharge amount. We also maintain two arrays/vectors. If the recharge amount is strictly greater than the cost of a call, we put the call in the first array, else we put it in the second array. Then we sort the first array by cost and the second array by recharge amount. We update diff by adding the least recharge amount among the calls whose cost is greater than or equal to their recharge. Then we iterate through the first array and update our max requirement, the requirement for each call, and the current balance. Finally, our answer is the maximum of the max requirement and the diff we have maintained.

Example:

N = 2
T1 = 1 R1 = 1
T2 = 2 R2 = 1

Our first array contains nothing, as all the calls have cost greater than or equal to the recharge amount, so we place both calls in our second array. diff gets updated to 2 before we sort the array. Then we add the minimum recharge we can get from the calls to diff (i.e. 1). Now diff stands at 3. Then, as our first array contains no elements, our answer is equal to diff, i.e. 3.

Time complexity: O(n log n)

Working example:

#include <bits/stdc++.h>
using namespace std;
#define MAXN 100007
int n, diff;
vector<pair<int,int> > v1, v2;
int main(){
    diff = 0;
    cin >> n;
    for(int i = 0; i < n; i++){
        int cost, recharge;
        cin >> cost >> recharge;
        if(recharge > cost){
            v1.push_back(make_pair(cost, recharge));
        } else {
            v2.push_back(make_pair(recharge, cost));
        }
        diff += (cost - recharge);
    }
    sort(v1.begin(), v1.end());
    sort(v2.begin(), v2.end());
    if(v2.size() > 0) diff += v2[0].first;
    int max_req = diff, req = 0, cur = 0;
    for(int i = 0; i < (int)v1.size(); i++){
        req = v1[i].first - cur;
        max_req = max(max_req, req);
        cur += v1[i].second - v1[i].first;
    }
    cout << max(max_req, diff) << endl;
    return 0;
}
(This is a wiki post: you are invited to edit, and don't need much reputation to do so without involving a moderator.)

Working efficiently means accomplishing the task at hand with no undue effort. Aspects here:

- the OP asks for guidance in approaching this problem to find an optimal solution - not for a solution (as this entirely similar, older question does).
- the problem statement asks for the minimum value of M - not an optimal order of calls or how to find that.

To find the minimum balance initially required, categorise the relatives/(T, X)-pairs/calls (the order might have a meaning, if not for the problem as stated):

T < X: Leaves X-T more for calls to follow. Do in order of increasing cost. Start assuming an initial balance of 1. For each call, if you can afford it, subtract its cost, add its refund, and be done accounting for it. If you can't afford it (yet), put it on hold/the back burner/in a priority queue. At the end of the "rewarding calls", remove each head of the queue in turn, accounting for necessary increases in the initial balance. This part ends with a highest balance, yet.

T = X: No influence on any other call. Just do at top balance, in any order. The top balance required for the whole sequence can't be lower than the cost of any single call, including these.

T > X: Leaves T-X less for subsequent calls. Do in order of decreasing refund. (This may, as any call, go to a balance of zero before refund. As the order of calls does not change the total cost, the orders requiring the least initial balance will be those yielding the lowest final one. For the intermediate balance required by this category, don't forget that least refund.)

Combine the requirements from all categories. Remember the request for guidance.
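A condensed Python sketch of the category strategy above - my own code, using plain sorts instead of the priority-queue refinement: profitable calls (X >= T) in increasing order of cost, losing calls in decreasing order of refund, then one pass to find the largest shortfall. It reproduces the sample answer of 2 from the question.

def min_initial_balance(calls):
    # calls: list of (T, X) = (cost, recharge) pairs
    gains = sorted((c for c in calls if c[1] >= c[0]), key=lambda c: c[0])
    losses = sorted((c for c in calls if c[1] < c[0]), key=lambda c: -c[1])
    balance = 0  # running balance relative to the unknown M
    need = 0     # largest shortfall seen so far = minimum M
    for t, x in gains + losses:
        need = max(need, t - balance)  # M must cover this call's full cost
        balance += x - t
    return need

print(min_initial_balance([(1, 1), (2, 1)]))  # 2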