I'm trying to solve this problem:
You have N relatives. You will talk to the i-th relative for exactly Ti
minutes. Each minute costs you 1 dollar. After the conversation,
they will add a recharge of Xi dollars to your mobile. Initially, you
have M dollars of balance on your mobile phone.
Find the minimum value of M that you must have initially in your
phone so that you don't run out of balance during any of the calls
(i.e. never encounter a negative balance).
Note: You can call relatives in any order. Each relative will be
called exactly once.
Input format:
N
T1 X1
T2 X2
...
Sample input:
2
1 1
2 1
Sample output:
2
This looked easy to me at first, but I'm not able to find the exact solution.
My initial thoughts:
We have no problem where Xi > Ti, as such a call will not reduce our initial
balance. We need to take care of the situation where we will run
into a loss, i.e. Ti > Xi.
But I am unable to come up with an expression that yields the minimum
initial value.
I need guidance in approaching this problem to find an optimal solution.
UPDATE:
The binary search approach seems to lead to a wrong result (as shown by the
test case provided in the comment below by user greybeard), so here is another
approach. We maintain diff, the total difference between call costs and recharge amounts.
We also maintain two arrays/vectors: if a call's recharge amount is strictly greater
than its cost, we put it in the first array; otherwise we put it in the second array.
We sort the first array by cost and the second array by recharge amount. We then
update diff by adding the smallest recharge among the calls whose cost is greater
than or equal to their recharge. Finally, we iterate through the first array,
updating our maximum requirement, the requirement for each call, and the current
balance. The answer is the maximum of the maximum requirement and the diff we
have maintained.
Example:
N = 2
T1 = 1 X1 = 1
T2 = 2 X2 = 1
The first array contains nothing, as both calls have a cost greater than or
equal to the recharge amount, so we place both calls in the second array.
After reading the input, diff = (1-1) + (2-1) = 1. We then add the minimum
recharge available from those calls (i.e. 1) to diff, so diff becomes 2.
Since the first array contains no elements, the answer equals diff, i.e. 2,
which matches the expected output.
Time complexity: O(n log n)
Working example:
#include <bits/stdc++.h>
using namespace std;

int n, diff;
vector<pair<int,int>> v1, v2;   // v1: calls with recharge > cost, v2: the rest

int main() {
    diff = 0;
    cin >> n;
    for (int i = 0; i < n; i++) {
        int cost, recharge;
        cin >> cost >> recharge;
        if (recharge > cost) {
            v1.push_back(make_pair(cost, recharge));      // sorted later by cost
        } else {
            v2.push_back(make_pair(recharge, cost));      // sorted later by recharge
        }
        diff += (cost - recharge);                        // net loss over all calls
    }
    sort(v1.begin(), v1.end());
    sort(v2.begin(), v2.end());
    if (!v2.empty()) diff += v2[0].first;                 // smallest recharge among loss calls
    int max_req = diff, req = 0, cur = 0;
    for (size_t i = 0; i < v1.size(); i++) {
        req = v1[i].first - cur;                          // balance needed before this call
        max_req = max(max_req, req);
        cur += v1[i].second - v1[i].first;                // profit gained by this call
    }
    cout << max(max_req, diff) << endl;
    return 0;
}
(This is a wiki post: you are invited to edit, and don't need much reputation to do so without involving a moderator.)
Working efficiently means accomplishing the task at hand, with no undue effort. Aspects here:
the OP asks for guidance in approaching this problem to find an optimal solution - not for a solution (as this entirely similar, older question does).
the problem statement asks for the minimum value of M - not an optimal order of calls or how to find that.
To find the minimum balance initially required, categorise the relatives/(T, X)-pairs/calls (the order might have a meaning, if not for the problem as stated):
T < X: Leaves X-T more for the calls to follow. Do these in order of increasing cost.
Start assuming an initial balance of 1. For each call, if you can afford it, subtract its cost, add its refund and be done accounting for it. If you can't afford it (yet), put it on hold/on the back burner/in a priority queue. At the end of the "rewarding calls", remove each head of the queue in turn, accounting for the necessary increases in initial balance.
This part ends with the highest balance reached so far.
T = X: No influence on any other call. Just do these at the top balance, in any order.
The top balance required for the whole sequence can't be lower than the cost of any single call, including these.
T > X: Leaves T-X less for subsequent calls. Do these in order of decreasing refund.
(This may, as any call, take the balance to zero before the refund.
As the order of calls does not change the total cost, the orders requiring the least initial balance are those yielding the lowest final one. For the intermediate balance required by this category, don't forget that least refund.)
Combine the requirements from all categories (a rough sketch of this accounting follows below).
Remember the request for guidance.
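For concreteness, a minimal Python sketch of the accounting above, assuming the category orderings just described (increasing cost for T < X, any order for T = X, decreasing refund for T > X); the names are my own and the on-hold bookkeeping is compressed into one linear pass:

def min_initial_balance(calls):
    # calls: list of (T, X) pairs; returns the smallest initial balance M
    # that keeps the running balance non-negative under the ordering above.
    gain    = sorted((c for c in calls if c[0] < c[1]), key=lambda c: c[0])    # T < X: increasing cost
    neutral = [c for c in calls if c[0] == c[1]]                               # T == X: any order
    loss    = sorted((c for c in calls if c[0] > c[1]), key=lambda c: -c[1])   # T > X: decreasing refund
    balance = 0   # running balance relative to the unknown initial amount M
    need = 0      # minimum M required so far
    for t, x in gain + neutral + loss:
        need = max(need, t - balance)   # must afford the full cost before the refund arrives
        balance += x - t
    return need

print(min_initial_balance([(1, 1), (2, 1)]))   # 2, matching the sample above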
The problem is the famous LeetCode problem (also appearing in other contexts): Best Time to Buy and Sell Stock with at most k transactions.
I am trying to make sense of this DP solution. You can ignore the first part, which handles large k. What I don't understand is why the dp part works.
class Solution(object):
    def maxProfit(self, k, prices):
        """
        :type k: int
        :type prices: List[int]
        :rtype: int
        """
        # for large k greedy approach (ignore this part for large k)
        if k >= len(prices) / 2:
            profit = 0
            for i in range(1, len(prices)):
                profit += max(0, prices[i] - prices[i-1])
            return profit
        # Don't understand this part
        dp = [[0]*2 for i in range(k+1)]
        for i in reversed(range(len(prices))):
            for j in range(1, k+1):
                dp[j][True] = max(dp[j][True], -prices[i]+dp[j][False])
                dp[j][False] = max(dp[j][False], prices[i]+dp[j-1][True])
        return dp[k][True]
I was able to derive a similar solution, but it uses two rows (dp and dp2) instead of just one row (dp in the solution above). To me it looks like the solution is overwriting values on itself, which doesn't seem right here. However, the answer works and passes LeetCode.
Let's put it into words:
for i in reversed(range(len(prices))):
For each price, moving backwards in time, so that by the time we consider price i we have already processed all later prices.
for j in range (1, k+1):
For each possibility of considering this price as one of k two-price transactions.
dp[j][True] = max(dp[j][True], -prices[i]+dp[j][False])
If we consider this price as a purchase - since we are going backwards in time, a purchase completes a transaction - we choose the better of (1) what we already have for the jth purchase (dp[j][True]), or (2) subtracting this price as a purchase and adding the best result we already have that includes the jth sale (-prices[i] + dp[j][False]).
dp[j][False] = max(dp[j][False], prices[i]+dp[j-1][True])
Otherwise, we might consider this price as a sale (the first half of a transaction, since we're going backwards), so we choose the better of (1) what we already have for the jth sale (dp[j][False]), or (2) adding this price as a sale to the best result we have so far for the first (j - 1) completed transactions (prices[i] + dp[j-1][True]).
Note that the first dp[j][False] refers to the jth "half-transaction" - the sale, if you will, since we are going backwards in time - which would have been set in an earlier iteration on a later price. We can then possibly overwrite it with our consideration of this price as the jth sale.
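As a quick sanity check that the single row really does accumulate the right values, running the class above on k = 2, prices = [3, 2, 6, 5, 0, 3] gives the expected profit of 7 (buy at 2, sell at 6, then buy at 0, sell at 3):

# assumes the Solution class from the question is in scope
print(Solution().maxProfit(2, [3, 2, 6, 5, 0, 3]))   # 7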
I was recently doing a Project Euler problem (namely #31), which is basically to find how many ways we can sum to 200 using elements of the set {1,2,5,10,20,50,100,200}.
The idea that I used was this: the number of ways to sum to N is equal to
(the number of ways to sum to N-k) * (the number of ways to sum to k), summed over all possible values of k.
I realized that this approach is WRONG, namely because it creates several duplicate counts. I have tried to adjust the formula to avoid duplicates, but to no avail. I am seeking the wisdom of Stack Overflowers regarding:
whether my recursive approach is concerned with the correct subproblem to solve
If there exists one, what would be an effective way to eliminate duplicates
how should we approach recursive problems such that we are concerned with the correct subproblem? what are some indicators that we've chosen a correct (or incorrect) subproblem?
When trying to avoid duplicate permutations, a straightforward strategy that works in most cases is to only create rising or falling sequences.
In your example, if you pick a value and then recurse with the whole set, you will get duplicate sequences like 50,50,100 and 50,100,50 and 100,50,50. However, if you recurse with the rule that the next value should be equal to or smaller than the currently selected value, out of those three you will only get the sequence 100,50,50.
So an algorithm that counts only unique combinations would be e.g.:
function uniqueCombinations(set, target, previous) {
    let count = 0;
    for (const value of set) {
        if (value > previous) continue;   // only allow non-increasing sequences
        if (value === target) count++;
        if (value < target) count += uniqueCombinations(set, target - value, value);
    }
    return count;
}
uniqueCombinations([1,2,5,10,20,50,100,200], 200, 200);   // 73682
Alternatively, you can create a copy of the set before every recursion, and remove the elements from it that you don't want repeated.
The rising/falling sequence method also works with iterations. Let's say you want to find all unique combinations of three letters. This algorithm will print results like a,c,e, but not a,e,c or e,a,c:
for letter1 is 'a' to 'x' {
for letter2 is first letter after letter1 to 'y' {
for letter3 is first letter after letter2 to 'z' {
print [letter1,letter2,letter3]
}
}
}
m69 gives a nice strategy that often works, but I think it's worthwhile to better understand why it works. When trying to count items (of any kind), the general principle is:
Think of a rule that classifies any given item into exactly one of several non-overlapping categories. That is, come up with a list of concrete categories A, B, ..., Z that will make the following sentence true: An item is either in category A, or in category B, or ..., or in category Z.
Once you have done this, you can safely count the number of items in each category and add these counts together, comfortable in the knowledge that (a) any item that is counted in one category is not counted again in any other category, and (b) any item that you want to count is in some category (i.e., none are missed).
How could we form categories for your specific problem here? One way to do it is to notice that every item (i.e., every multiset of coin values that sums to the desired total N) either contains the 50-coin exactly zero times, or it contains it exactly once, or it contains it exactly twice, or ..., or it contains it exactly RoundDown(N / 50) times. These categories don't overlap: if a solution uses exactly 5 50-coins, it pretty clearly can't also use exactly 7 50-coins, for example. Also, every solution is clearly in some category (notice that we include a category for the case in which no 50-coins are used). So if we had a way to count, for any given k, the number of solutions that use coins from the set {1,2,5,10,20,50,100,200} to produce a sum of N and use exactly k 50-coins, then we could sum over all k from 0 to N/50 and get an accurate count.
How to do this efficiently? This is where the recursion comes in. The number of solutions that use coins from the set {1,2,5,10,20,50,100,200} to produce a sum of N and use exactly k 50-coins is equal to the number of solutions that sum to N-50k and do not use any 50-coins, i.e. use coins only from the set {1,2,5,10,20,100,200}. This of course works for any particular coin denomination that we could have chosen, so these subproblems have the same shape as the original problem: we can solve each one by simply choosing another coin arbitrarily (e.g. the 10-coin), forming a new set of categories based on this new coin, counting the number of items in each category and summing them up. The subproblems become smaller until we reach some simple base case that we process directly (e.g. no allowed coins left: then there is 1 item if N=0, and 0 items otherwise).
I started with the 50-coin (instead of, say, the largest or the smallest coin) to emphasise that the particular choice used to form the set of non-overlapping categories doesn't matter for the correctness of the algorithm. But in practice, passing explicit representations of sets of coins around is unnecessarily expensive. Since we don't actually care about the particular sequence of coins to use for forming categories, we're free to choose a more efficient representation. Here (and in many problems), it's convenient to represent the set of allowed coins implicitly as simply a single integer, maxCoin, which we interpret to mean that the first maxCoin coins in the original ordered list of coins are the allowed ones. This limits the possible sets we can represent, but here that's OK: If we always choose the last allowed coin to form categories on, we can communicate the new, more-restricted "set" of allowed coins to subproblems very succinctly by simply passing the argument maxCoin-1 to it. This is the essence of m69's answer.
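To make the maxCoin idea concrete, here is a small Python sketch (my own code, not from the original answer) that categorises on the last allowed coin and memoises the subproblems:

from functools import lru_cache

COINS = [1, 2, 5, 10, 20, 50, 100, 200]

@lru_cache(maxsize=None)
def count_ways(total, max_coin):
    # number of multisets summing to `total` using only COINS[:max_coin]
    if total == 0:
        return 1                      # the empty multiset
    if max_coin == 0:
        return 0                      # no coins left but total > 0
    c = COINS[max_coin - 1]
    # categorise by how many times coin c is used: k = 0 .. total // c
    return sum(count_ways(total - k * c, max_coin - 1) for k in range(total // c + 1))

print(count_ways(200, len(COINS)))    # 73682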
There's some good guidance here. Another way to think about this is as a dynamic program. For this, we must pose the problem as a simple decision among options that leaves us with a smaller version of the same problem. It boils down to a certain kind of recursive expression.
Put the coin values c0, c1, ... c_(n-1) in any order you like. Then define W(i,v) as the number of ways you can make change for value v using coins ci, c_(i+1), ... c_(n-1). The answer we want is W(0,200). All that's left is to define W:
W(i, v) = sum over k = 0 .. floor(v / c_i) of W(i+1, v - c_i * k)
In words: the number of ways we can make change with coins ci onward is the sum, over every feasible count k of coin ci, of the number of ways to make the remaining change after removing that much value from the problem.
Of course we need base cases for the recursion. This happens when i=n-1: the last coin value. At this point there's a way to make change if and only if the value we need is an exact multiple of c_(n-1).
W(n-1,v) = 1 if v % c_(n-1) == 0 and 0 otherwise.
We generally don't want to implement this as a simple recursive function. The same argument values occur repeatedly, which leads to an exponential (in n and v) amount of wasted computation. There are simple ways to avoid this. Tabular evaluation and memoization are two.
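For instance, a memoized Python sketch of W (my own code, not the answer's):

from functools import lru_cache

C = (200, 100, 50, 20, 10, 5, 2, 1)   # any order works
N = len(C)

@lru_cache(maxsize=None)              # memoization: each (i, v) pair is computed once
def W(i, v):
    if i == N - 1:
        return 1 if v % C[N - 1] == 0 else 0
    return sum(W(i + 1, v - C[i] * k) for k in range(v // C[i] + 1))

print(W(0, 200))                      # 73682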
Another point is that it is more efficient to have the values in descending order. By taking big chunks of value early, the total number of recursive evaluations is minimized. Additionally, since c_(n-1) is now 1, the base case is just W(n-1, v) = 1. Now it becomes fairly obvious that we can add a second base case as an optimization: W(n-2, v) = floor(v / c_(n-2)) + 1, which is exactly how many terms the for loop would sum, each of them being W(n-1, ·) = 1.
But this is gilding the lily. The problem is so small that the exponential behavior doesn't signify. Here is a little implementation to show that order really doesn't matter:
#include <stdio.h>

#define n 8

int cv[][n] = {
    {200,100,50,20,10,5,2,1},
    {1,2,5,10,20,50,100,200},
    {1,10,100,2,20,200,5,50},
};
int *c;

int w(int i, int v) {
    if (i == n - 1) return v % c[n - 1] == 0;
    int sum = 0;
    for (int k = 0; k <= v / c[i]; ++k)
        sum += w(i + 1, v - c[i] * k);
    return sum;
}

int main(int argc, char *argv[]) {
    unsigned p;
    if (argc != 2 || sscanf(argv[1], "%u", &p) != 1 || p > 2) p = 0;
    c = cv[p];
    printf("Ways(%u) = %d\n", p, w(0, 200));
    return 0;
}
Drumroll, please...
$ ./foo 0
Ways(0) = 73682
$ ./foo 1
Ways(1) = 73682
$ ./foo 2
Ways(2) = 73682
I am having trouble understanding the reasoning behind the solution to this question on CareerCup.
Pots of gold game: Two players A & B. There are pots of gold arranged
in a line, each containing some gold coins (the players can see how
many coins are there in each gold pot - perfect information). They get
alternating turns in which the player can pick a pot from one of the
ends of the line. The winner is the player who has a higher number
of coins at the end. The objective is to "maximize" the number of
coins collected by A, assuming B also plays optimally. A starts the
game.
The idea is to find an optimal strategy that makes A win knowing that
B is playing optimally as well. How would you do that?
At the end I was asked to code this strategy!
This was a question from a Google interview.
The proposed solution is:
function max_coin( int *coin, int start, int end ):
if start > end:
return 0
// I DON'T UNDERSTAND THESE NEXT TWO LINES
int a = coin[start] + min(max_coin(coin, start+2, end), max_coin(coin, start+1, end-1))
int b = coin[end] + min(max_coin(coin, start+1,end-1), max_coin(coin, start, end-2))
return max(a,b)
There are two specific sections I don't understand:
In the first line, why do we use the ranges [start + 2, end] and [start + 1, end - 1]? It's always leaving out one coin jar. Shouldn't it be [start + 1, end], because we took the starting coin jar out?
In the first line, why do we take the minimum of the two results and not the maximum?
Because I'm confused about why the two lines take the minimum and why we choose those specific ranges, I'm not really sure what a and b actually represent.
First of all, a and b represent, respectively, the maximum gain if start (respectively end) is played.
So let me explain this line:
int a = coin[start] + min(max_coin(coin, start+2, end), max_coin(coin, start+1, end-1))
If I play start, I immediately gain coin[start]. The other player now has to play between start+1 and end. He plays to maximize his gain; however, since the total number of coins is fixed, this amounts to minimizing mine. Note that:
if he plays start+1, I'll gain max_coin(coin, start+2, end)
if he plays end, I'll gain max_coin(coin, start+1, end-1)
Since he tries to minimize my gain, I'll get the minimum of those two.
The same reasoning applies to the other line, where I play end.
Note: this is a bad recursive implementation. First of all, max_coin(coin, start+1, end-1) is computed twice. Even if you fix that, you'll still end up computing the shorter cases many times over. This is very similar to what happens if you compute Fibonacci numbers with naive recursion. It would be better to use memoization or dynamic programming.
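For illustration, a memoized Python sketch of the same recursion (my own code; the pots are passed as a tuple so the results can be cached):

from functools import lru_cache

def max_coins(pots):
    # pots: tuple of pot values; returns the most the first player can collect
    @lru_cache(maxsize=None)
    def best(start, end):
        if start > end:
            return 0
        a = pots[start] + min(best(start + 2, end), best(start + 1, end - 1))
        b = pots[end] + min(best(start + 1, end - 1), best(start, end - 2))
        return max(a, b)
    return best(0, len(pots) - 1)

print(max_coins((5, 3, 7, 10)))   # 15 (the opponent gets the remaining 10)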
a and b here represent the maximum A can get by picking the starting pot or the ending pot, respectively.
We're actually trying to maximize A-B, but since B = TotalGold - A, we're trying to maximize 2A - TotalGold, and since TotalGold is constant, we're trying to maximize 2A, which is the same as A, so we completely ignore the values of B's picks and just work with A.
The updated parameters in the recursive calls include B picking as well - so coin[start] represents A picking the start, then B picks the next one from the start, so it's start+2. For the next call, B picks from the end, so it's start+1 and end-1. Similarly for the rest.
We're taking the min because B will try to maximize its own profit, so it will pick the choice that minimizes A's profit.
But actually I'd say this solution is lacking a bit, in the sense that it just returns a single value, not 'an optimal strategy', which, in my mind, would be a sequence of moves. It also doesn't take into account the possibility that A can't win, in which case one might want to output a message saying it's not possible - but this would really be something to clarify with the interviewer.
Let me answer your points in reverse order, somehow it seems to make more sense that way.
3 - a and b represent the amount of coins the first player will get, when he/she chooses the first or the last pot respectively
2 - we take the minimum because it is the choice of the second player - he/she will act to minimise the amount of coins the first player will get
1 - the first line presents the scenario - if the first player has taken the first pot, what will the second player do? If he/she again takes the first pot, it will leave (start+2, end). If he/she takes the last pot, it will leave (start+1, end-1)
Assume what you gain on your turn is x and what you get in all subsequent turns is y. Both a and b represent x+y, where a assumes you take your next pot from the front (x = coin[start]) and b assumes you take your next pot from the back (x = coin[end]).
Now, how do you compute y?
After your choice, the opponent will use the same optimal strategy (hence the recursive calls) to maximise his profit, and you will be left with the smaller profit for that turn. This is why y = min(best_strategy_front(), best_strategy_end()) - your value is the smaller of the two choices that are left, because the opponent will take the bigger one.
The indexing simply indicates the remaining sequence, minus one pot at the front or the back, after you have made your choice.
A penny from my end too. I have explained the steps in detail.
public class Problem08 {

    static int dp[][];

    public static int optimalGameStrategy(int arr[], int i, int j) {
        // If only a single element is left, take it.
        if (i == j) return arr[i];
        // If only two elements are left, take the larger one.
        if (i + 1 == j) return Math.max(arr[i], arr[j]);
        // If the result is already computed, return it.
        if (dp[i][j] != -1) return dp[i][j];
        /**
         * If I choose i, then the array shrinks to i+1 .. j.
         * The next move is the opponent's, and whatever he chooses, I want the
         * result to be the minimum for me. If he chooses j, the array shrinks to
         * i+1 .. j-1; if he also chooses i, the array shrinks to i+2 .. j.
         * Whatever he chooses, I take the minimum of his two choices.
         *
         * The same reasoning applies to the case when I choose j.
         *
         * Finally I take the maximum of my two cases. :)
         */
        int iChooseI = arr[i] + Math.min(optimalGameStrategy(arr, i+1, j-1),
                                         optimalGameStrategy(arr, i+2, j));
        int iChooseJ = arr[j] + Math.min(optimalGameStrategy(arr, i+1, j-1),
                                         optimalGameStrategy(arr, i, j-2));
        int res = Math.max(iChooseI, iChooseJ);
        dp[i][j] = res;
        return res;
    }

    public static void main(String[] args) {
        int[] arr = new int[]{5, 3, 7, 10};
        dp = new int[arr.length][arr.length];
        for (int i = 0; i < arr.length; i++) {
            for (int j = 0; j < arr.length; j++) {
                dp[i][j] = -1;
            }
        }
        System.out.println(" Nas: " + optimalGameStrategy(arr, 0, arr.length - 1));
    }
}
As we know from programming, sometimes a slight change in a problem can
significantly alter the form of its solution.
Firstly, I want to create a simple algorithm for solving
the following problem and classify it using big-theta
notation:
Divide a group of people into two disjoint subgroups
(of arbitrary size) such that the
difference in the total ages of the members of
the two subgroups is as large as possible.
Now I need to change the problem so that the desired
difference is as small as possible and classify
my approach to the problem.
Well, first of all, I need to create the initial algorithm.
For that, should I do some kind of sorting in order to separate the teams, and how am I supposed to continue?
EDIT: For the first problem, we have ruled out the possibility of either set being empty. So all we have to do is a linear search to find the minimum age and put it in set B. Set A then gets all the other ages, which makes the difference between the total ages of the two sets as large as possible.
The way you described the first problem, it is trivial in the sense that it only requires you to find the minimum element (in case each subgroup must contain at least one member); otherwise it is already solved.
The second problem can be solved recursively; the pseudocode would be:
// compute the sum of all elements of the array and store it in sum
min = sum;
globalVec = baseVec;
fun generate(baseVec, generatedVec, position, total) {
    if (abs(sum - 2*total) < min) {   // check if this distribution is better
        min = abs(sum - 2*total);
        globalVec = generatedVec;
    }
    if (position >= baseVec.length()) return;
    else {
        // either put the element at position in the first group:
        generate(baseVec, generatedVec.pushback(baseVec[position]), position + 1, total + baseVec[position]);
        // or put the element at position in the second group:
        generate(baseVec, generatedVec, position + 1, total);
    }
}
And now just start the function with generate(baseVec, "", 0, 0), where "" stands for an empty vector.
The algorithm can be drastically improved by applying it to a sorted array and adding a test condition to stop branching early, but the idea stays the same.
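A runnable Python sketch of the same brute force (my own code; exponential in the group size, which is fine for small groups):

def min_partition_difference(ages):
    total = sum(ages)
    best = total   # worst case: one subgroup is empty

    def generate(position, group1_sum):
        nonlocal best
        best = min(best, abs(total - 2 * group1_sum))   # is this distribution better?
        if position == len(ages):
            return
        generate(position + 1, group1_sum + ages[position])   # person joins group 1
        generate(position + 1, group1_sum)                     # person joins group 2

    generate(0, 0)
    return best

print(min_partition_difference([30, 25, 41, 18, 7]))   # 3, e.g. {41, 18} vs {30, 25, 7}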
I am a long time lurker, and just had an interview with Google where they asked me this question:
Various artists want to perform at the Royal Albert Hall and you are responsible for scheduling
their concerts. Requests for performing at the Hall are accommodated on a first come first served
policy. Only one performance is possible per day and, moreover, there cannot be any concerts
taking place within 5 days of each other.
Given a requested time d which is impossible (i.e. within 5 days of an already
scheduled performance), give an O(log n)-time algorithm to find the next
available day d2 (d2 > d).
I had no clue how to solve it, and now that the interview is over, I am dying to figure out how to solve it. Knowing how smart most of you folks are, I was wondering if you could give me a hand here. This is NOT homework, or anything of that sort; I just want to learn how to solve it for future interviews. I tried asking follow-up questions, but he said that was all he could tell me.
You need a normal binary search tree of intervals of available dates. Just search for the interval containing d. If it does not exist, take the interval next (in-order) to the point where the search stopped.
Note: contiguous intervals must be fused together in a single node. For example: the available-dates intervals {2 - 15} and {16 - 23} should become {2 - 23}. This might happen if a concert reservation was cancelled.
Alternatively, a tree of non-available dates can be used instead, provided that contiguous non-available intervals are fused together.
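A minimal Python sketch of just the lookup step, using bisect over two parallel sorted lists as a stand-in for the BST of free intervals (maintaining and fusing the intervals, the part the note above is about, is omitted; the last interval is assumed open-ended):

import bisect

def next_available(starts, ends, d):
    # starts/ends: parallel sorted lists describing the free intervals; ends[-1] is float('inf')
    i = bisect.bisect_right(starts, d) - 1   # last free interval starting at or before d
    if i >= 0 and d <= ends[i]:
        return d                             # d itself is available
    return starts[i + 1]                     # otherwise: first day of the next free interval

# free days: 2-10, 16-23, and everything from 30 on
print(next_available([2, 16, 30], [10, 23, float('inf')], 12))   # 16
print(next_available([2, 16, 30], [10, 23, float('inf')], 25))   # 30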
Store the scheduled concerts in a binary search tree and find a feasible solution by doing a binary search.
Something like this:
FindDateAfter(tree, x):
n = tree.root
if n.date < x
n = FindDateAfter(n.right, x)
else if n.date > x and n.left.date < x
return n
return FindDateAfter(n.left, x)
FindGoodDay(tree, x):
n = FindDateAfter(tree, x)
while (n.date + 10 < n.right.date)
n = FindDateAfter(n, n.date + 5)
return n.date + 5
I've used a binary search tree (BST) that holds the ranges of valid free days on which performances can be scheduled.
One of the ranges must end with int.MaxValue, because we have an infinite number of days, so it can't be bounded.
The following code searches for the closest available day to the requested day and returns it.
The time complexity is O(H), where H is the tree height (usually H = log N, but it can degrade to H = N in some cases).
The space complexity is the same as the time complexity.
public static int FindConcertTime(TreeNode<Tuple<int, int>> node, int reqDay)
{
    // Not found!
    if (node == null)
    {
        return -1;
    }
    Tuple<int, int> currRange = node.Value;
    // Found range.
    if (currRange.Item1 <= reqDay &&
        currRange.Item2 >= reqDay)
    {
        // Return requested day.
        return reqDay;
    }
    // Go left.
    else if (currRange.Item1 > reqDay)
    {
        int suggestedDay = FindConcertTime(node.Left, reqDay);
        // Didn't find appropriate range in left nodes, or found day
        // is further than current option.
        if (suggestedDay == -1 || suggestedDay > currRange.Item1)
        {
            // Return current option.
            return currRange.Item1;
        }
        else
        {
            // Return suggested day.
            return suggestedDay;
        }
    }
    // Go right.
    // Will always find because the right-most node has "int.MaxValue" as Item2.
    else //if (currRange.Item2 < reqDay)
    {
        return FindConcertTime(node.Right, reqDay);
    }
}
Store the number of used nights per year, quarter, and month. To find a free night, find the first year that is not fully booked, then the quarter within that year, then the month. Then check each of the nights in that month.
Irregularities in the calendar system make this a little tricky, so instead of using years and months you can apply the idea to units of 4 nights as a "month", 16 nights as a "quarter", and so on.
Assume that at level 1 all schedule details are available.
Group the schedule into blocks of 16 days at level 2.
Group 16 level-2 blocks at level 3.
Group 16 level-3 blocks at level 4.
Depending on how many days you want to cover, keep adding levels.
Now search from the highest level down and do a binary search at the end.
Asymptotic complexity:
It describes how the runtime changes as the input grows.
Suppose we have an input string "abcd". If we traverse each character to find the length, the time taken is proportional to the number of characters in the string, n, so it is O(n).
But if we store the length of the string "abcd" in a variable, then no matter how long the string is, we can find the length just by looking at the variable len (len = 4).
For example: return 23. No matter what the input is, the output is still 23.
Thus the complexity is O(1): the program runs in constant time with respect to the input size.
For O(log n), the number of operations grows logarithmically with the input size.
https://drive.google.com/file/d/0B7eUOnXKVyeERzdPUE8wYWFQZlk/view?usp=sharing
Observe the image at the above link: it shows the curved O(log n) line next to the straight O(n) line. As the input grows, the logarithmic curve rises far more slowly, which is why O(log n) algorithms scale better than linear ones.
There are also best-case and worst-case scenarios to consider, as in the example above.
You can also refer to this cheat sheet for algorithm complexities: http://bigocheatsheet.com/
It was already mentioned above, but basically: keep it simple with a binary search tree. You know a balanced binary search tree has O(log n) operations, so you already know what data structure to use.
All you have to do is come up with a tree node structure and use the BST insertion/search algorithm to find the next available date.
A possible one:
The tree node has two attributes: d (date of the concert) and d+5 (end date of the 5-day blocking period). Again, to keep it simple, use a timestamp for the two date attributes.
Now it is trivial to find the next available date by using the ordinary BST insertion algorithm, starting from root = null.
Why not try Union-Find? You can group each concert day plus the next 5 days into one set, and then perform a FIND on the given day, which would return the set's representative - that is, your next possible concert date.
If implemented using a tree, this gives an O(log n) time complexity.
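A rough Python sketch of that idea (my own naming): next_free[d] points past the block of days made unavailable by a concert, and find() chases those pointers with path compression. Note that the sketch also blocks the five days before a scheduled concert, since the constraint works in both directions:

def find(next_free, d):
    # follow pointers until we reach a day that is not blocked
    if d not in next_free:
        return d
    next_free[d] = find(next_free, next_free[d])   # path compression
    return next_free[d]

def schedule(next_free, requested):
    day = find(next_free, requested)               # next available day >= requested
    for b in range(day - 5, day + 6):              # a concert blocks all days within 5 of it
        next_free[b] = day + 6
    return day

booked = {}
print(schedule(booked, 10))   # 10
print(schedule(booked, 20))   # 20
print(schedule(booked, 12))   # 26 (days 12..25 are all within 5 days of the day-10 or day-20 concert)

With path compression the amortized cost per find stays small (logarithmic or better).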