Merge ranges in intervals - algorithm

Given a set of intervals {1-4, 6-7, 10-12}, add a new interval (9,11) so that the final solution is 'merged': output {1-4, 6-7, 9-12}. The merge can happen on both sides (the low end as well as the high end).
I saw this question answered in multiple places; someone even suggested using Interval Trees, but did not explain how exactly they would use one. The only solution I know of is to arrange the intervals in ascending order of their start time, then iterate over them and merge them appropriately.
If someone can help me understand how we can use interval trees in this use case, that would be great!
[I have been following interval trees in CLRS book, but they do not talk about merging, all they talk about is insertion and search.]
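For reference, the sort-and-sweep approach mentioned above can be sketched roughly like this in Python (the function name and the tuple representation of intervals are just for illustration):

def insert_and_merge(intervals, new):
    """intervals: non-overlapping (start, end) tuples sorted by start; new: (start, end)."""
    lo, hi = new
    merged = []
    i = 0
    # intervals entirely to the left of the new one
    while i < len(intervals) and intervals[i][1] < lo:
        merged.append(intervals[i])
        i += 1
    # intervals that overlap the new one get absorbed into it
    while i < len(intervals) and intervals[i][0] <= hi:
        lo = min(lo, intervals[i][0])
        hi = max(hi, intervals[i][1])
        i += 1
    merged.append((lo, hi))
    # intervals entirely to the right
    merged.extend(intervals[i:])
    return merged

# insert_and_merge([(1, 4), (6, 7), (10, 12)], (9, 11)) -> [(1, 4), (6, 7), (9, 12)]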

(I'm assuming that this means that intervals can never overlap, since otherwise they'd be merged.)
One way to do this would be to store a balanced binary search tree with one node per endpoint of a range. Each node would then be marked as either an "open" node marking the start of an interval or a "close" node marking the end of an interval.
When inserting a new range, one of two cases will occur regarding the start point of the range:
It's already inside a range, which means that you will extend an already-existing range as part of the insertion.
It's not inside a range, so you'll be creating a new "open" node.
To determine which case you're in, you can do a predecessor search in the tree for the range's start point. If you get NULL or a close node, you need to insert a new open node representing the start point of the range. If you get an open node, you will just keep extending that interval.
From there, you need to determine how far the range extends. To do this, continuously compute the successor of the initial node you inserted until one of the following occurs:
You have looked at all nodes in the tree. In that case, you need to insert a close node marking the end of this interval.
You see a close node after the end of the range. In that case, you're in the middle of an existing range when the new range ends, so you don't need to do anything more. You're done.
You see a close or open node before the end of the range. In that case, you need to remove that node from the tree, since the old range is subsumed by the new one.
You see an open node after the end of the range. In that case, insert a new close node into the tree, since you need to terminate the current range before seeing the start of this new one.
Implemented naively, the runtime of this algorithm is O(log n + k log n), where n is the number of intervals and k is the number of intervals removed during this process (since you have to do k deletes). However, you can speed this up to O(log n) by using the following trick. Since the deletion process always deletes nodes in a sequence, you can use a successor search for the endpoint to determine the end of the range to remove. Then, you can splice the subrange to remove out of the tree by doing two tree split operations and one tree join operation. On a suitable balanced tree (red-black or splay, for example), this can be done in O(log n) total time, which is much faster if a lot of ranges are going to get subsumed.
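As a rough illustration of the endpoint idea, here is a small Python sketch that uses a sorted list in place of the balanced BST (so the bulk removal is a slice delete rather than a split/join, and insertion costs O(n) instead of O(log n)); it also assumes a new endpoint never coincides exactly with an existing one:

import bisect

OPEN, CLOSE = 0, 1

def insert_interval(points, start, end):
    """points: sorted list of (coordinate, kind) pairs, alternating OPEN/CLOSE."""
    i = bisect.bisect_left(points, (start, OPEN))    # first endpoint >= start
    inside_start = i > 0 and points[i - 1][1] == OPEN
    j = i
    while j < len(points) and points[j][0] <= end:   # endpoints swallowed by the new range
        j += 1
    inside_end = points[j - 1][1] == OPEN if j > i else inside_start
    replacement = []
    if not inside_start:
        replacement.append((start, OPEN))            # start falls in a gap: new open node
    if not inside_end:
        replacement.append((end, CLOSE))             # end falls in a gap: new close node
    points[i:j] = replacement
    return points

pts = [(1, OPEN), (4, CLOSE), (6, OPEN), (7, CLOSE), (10, OPEN), (12, CLOSE)]
insert_interval(pts, 9, 11)   # -> endpoints of {1-4, 6-7, 9-12}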
Hope this helps!

import java.util.ArrayList;
import java.util.List;

public class MergeIntervals {

    public static class Interval {
        public double start;
        public double end;

        public Interval(double start, double end) {
            this.start = start;
            this.end = end;
        }
    }

    public static List<Interval> mergeInterval(List<Interval> nonOverlapInt, Interval another) {
        List<Interval> merge = new ArrayList<>();
        for (Interval current : nonOverlapInt) {
            if (current.end < another.start || another.end < current.start) {
                // no overlap: keep the existing interval unchanged
                merge.add(current);
            } else {
                // overlap: grow the new interval to absorb the current one
                another.start = current.start < another.start ? current.start : another.start;
                another.end = current.end < another.end ? another.end : current.end;
            }
        }
        // the merged interval is appended at the end, so the result may need re-sorting
        merge.add(another);
        return merge;
    }
}

Check this out; it may help you: http://www.boost.org/doc/libs/1_46_0/libs/icl/doc/html/index.html
The library offers these functionalities:
1) interval_set
2) separate_interval_set
3) split_interval_set

C#
public class Interval
{
    public Interval(int start, int end) { this.start = start; this.end = end; }
    public int start;
    public int end;
}

void AddInterval(List<Interval> list, Interval interval)
{
    int lo = 0;
    int hi = 0;
    // find where the new interval starts
    for (lo = 0; lo < list.Count; lo++)
    {
        if (interval.start < list[lo].start)
        {
            list.Insert(lo, interval);
            hi++;
            break;
        }
        if (interval.start >= list[lo].start && interval.start <= list[lo].end)
        {
            break;
        }
    }
    if (lo == list.Count)
    {
        list.Add(interval);
        return;
    }
    // find where the new interval ends
    for (hi = hi + lo; hi < list.Count; hi++)
    {
        if (interval.end < list[hi].start)
        {
            hi--;
            break;
        }
        if (interval.end >= list[hi].start && interval.end <= list[hi].end)
        {
            break;
        }
    }
    if (hi == list.Count)
    {
        hi = list.Count - 1;
    }
    // merge everything between lo and hi into list[lo]
    list[lo].start = Math.Min(interval.start, list[lo].start);
    list[lo].end = Math.Max(interval.end, list[hi].end);
    if (hi - lo > 0)
    {
        list.RemoveRange(lo + 1, hi - lo);
    }
}

This is simply done by adding the interval in question to the end of the interval set, then performing a merge on all the elements of the interval set.
The merge operation is well-detailed here: http://www.geeksforgeeks.org/merging-intervals/
If you're not in the mood for C++ code, here is the same thing in Python:
def mergeIntervals(self, intervalSet):
    # interval set is an array.
    # each interval is a dict w/ keys: startTime, endTime.
    # algorithm from: http://www.geeksforgeeks.org/merging-intervals/
    import copy
    intArray = copy.copy(intervalSet)
    if len(intArray) <= 1:
        return intArray
    intArray.sort(key=lambda x: x.get('startTime'))
    print("sorted array: %s" % (intArray))
    myStack = []  # append and pop.
    myStack.append(intArray[0])
    for i in range(1, len(intArray)):
        top = myStack[-1]  # top of the stack (the original used myStack[0], which is a bug)
        # if current interval NOT overlapping with stack top, push it on.
        if (top['endTime'] < intArray[i]['startTime']):
            myStack.append(intArray[i])
        # otherwise, if end of current is more, update top's endTime in place
        elif (top['endTime'] < intArray[i]['endTime']):
            top['endTime'] = intArray[i]['endTime']
    print("merged array: %s" % (myStack))
    return myStack
Don't forget your nosetests to verify you actually did the work right:
import unittest

class TestMyStuff(unittest.TestCase):
    def test_mergeIntervals(self):
        t = [ { 'startTime' : 33, 'endTime' : 35 }, { 'startTime' : 11, 'endTime' : 15 }, { 'startTime' : 72, 'endTime' : 76 }, { 'startTime' : 44, 'endTime' : 46 } ]
        mgs = MyClassWithMergeIntervalsMethod()
        res = mgs.mergeIntervals(t)
        assert res == [ { 'startTime' : 11, 'endTime' : 15 }, { 'startTime' : 33, 'endTime' : 35 }, { 'startTime' : 44, 'endTime' : 46 }, { 'startTime' : 72, 'endTime' : 76 } ]
        t = [ { 'startTime' : 33, 'endTime' : 36 }, { 'startTime' : 11, 'endTime' : 35 }, { 'startTime' : 72, 'endTime' : 76 }, { 'startTime' : 44, 'endTime' : 46 } ]
        mgs = MyClassWithMergeIntervalsMethod()
        res = mgs.mergeIntervals(t)
        assert res == [{'endTime': 36, 'startTime': 11}, {'endTime': 46, 'startTime': 44}, {'endTime': 76, 'startTime': 72}]

Related

Having a hard time visualizing the call order of binary tree recursion

I am trying to visualize recursion for this particular problem solution below, and have placed 2 console.logs, but I still can’t understand it.
The problem:
Node depths
Here is my solution. It works though I don't fully understand why.
function helper(root, depth) {
    console.log(depth, 'step 1')
    if (root === null) {
        console.log(depth, 'step 2')
        return 0;
    }
    if (root.left === null && root.right === null) {
        console.log(depth, 'step 3')
        return depth
    }
    console.log(depth, 'step 4')
    const leftSum = helper(root.left, depth + 1);
    const rightSum = helper(root.right, depth + 1);
    const totalSum = depth + leftSum + rightSum;
    return totalSum;
}

function nodeDepths(root) {
    return helper(root, 0);
}
In particular, for this input:
{
  "tree": {
    "nodes": [
      {"id": "1", "left": "2", "right": null, "value": 1},
      {"id": "2", "left": null, "right": null, "value": 2}
    ],
    "root": "1"
  }
}
For the case where depth is equal to 1 and node.value is 2, the logs come out in the order below:
0 step 1
0 step 4
1 step 1
1 step 3
1 step 1
1 step 2
My question is: why, at depth === 1, does the code not go to step 4 after step 3, but instead goes back up to step 1? And when a number is returned from the call stack, why is that number added to the sum of a branch (rather than subtracted, multiplied, or divided)?
Thank you in advance! I've been stumped on this for the past 3 days.
I tried console.logging the call stack and expected it to go to step 4 after step 3 when depth is equal to 1.
You recurse twice, in the code here:
const leftSum = helper(root.left, depth + 1);
const rightSum = helper(root.right, depth + 1);
The four lines that are prefixed with depth 1 come from those two calls, two lines from each.
The first call is on the left child of the root, which is a valid leaf node, so you see step 3 printed, since that's the base case for leaf nodes. The second recursion, on the right child, is running on a null value, and bails out in step 2, another base case.
It might help you better understand the recursion if you printed the root value, along with the depth.
As for why the depths are being added up, that I can't help you with. It just seems to be the task assigned in this exercise. I don't think there's any utility to that sum, unlike, for example, the maximum depth (which can be used to balance a tree, which helps make some algorithms more efficient).
To understand the execution flow, I would suggest changing what you actually log. Logging "step" may not be enough to get a clear view. I would suggest logging something when a recursive call is about to be made, and when execution comes back from that recursive call, and also indicating which of the two recursive calls it is (left or right). Finally, use depth to indent the output -- indented output gives a better clue as to what is happening.
(NB: the data structure you mentioned is not ready to be passed to your function, as it doesn't have node instances as values for left or right -- so I added a function that converts it.)
function helper(root, depth) {
    let indent = " ".repeat(depth);
    console.log(`${indent}helper(root.id==${root?.id}):`);
    indent += " ";
    if (root === null) {
        console.log(`${indent}Root is null. return 0`);
        return 0;
    }
    if (root.left === null && root.right === null) {
        console.log(`${indent}This node is a leaf; return ${depth} (depth).`);
        return depth
    }
    console.log(`${indent}Making recursive call to the left.`);
    const leftSum = helper(root.left, depth + 1);
    console.log(`${indent}Back from left recursive call. leftSum==${leftSum}.`);
    console.log(`${indent}Making recursive call to the right`);
    const rightSum = helper(root.right, depth + 1);
    console.log(`${indent}Back from right recursive call. rightSum==${rightSum}.`);
    const totalSum = depth + leftSum + rightSum;
    console.log(`${indent}All done for this node. totalSum=${totalSum}. Return it.`);
    return totalSum;
}

function nodeDepths(root) {
    return helper(root, 0);
}

// Added this function to convert data structure to nested nodes
function createTree({tree: { nodes, root }}) {
    const map = new Map(nodes.map(obj => [obj.id, { ...obj }]));
    for (const obj of map.values()) {
        obj.left = map.get(obj.left) ?? null;
        obj.right = map.get(obj.right) ?? null;
    }
    return map.get(root);
}

const data = {
    "tree": {
        "nodes": [
            {"id": "1", "left": "2", "right": null, "value": 1},
            {"id": "2", "left": null, "right": null, "value": 2}
        ],
        "root": "1"
    }
};

const root = createTree(data);
nodeDepths(root);
I hope with this kind of logging the process becomes clearer.

Knapsack: how to add item type to existing solution

I've been working with this variation of dynamic programming to solve a knapsack problem:
KnapsackItem = Struct.new(:name, :cost, :value)
KnapsackProblem = Struct.new(:items, :max_cost)

def dynamic_programming_knapsack(problem)
  num_items = problem.items.size
  items = problem.items
  max_cost = problem.max_cost
  cost_matrix = zeros(num_items, max_cost+1)
  num_items.times do |i|
    (max_cost + 1).times do |j|
      if(items[i].cost > j)
        cost_matrix[i][j] = cost_matrix[i-1][j]
      else
        cost_matrix[i][j] = [cost_matrix[i-1][j], items[i].value + cost_matrix[i-1][j-items[i].cost]].max
      end
    end
  end
  cost_matrix
end

def get_used_items(problem, cost_matrix)
  i = cost_matrix.size - 1
  currentCost = cost_matrix[0].size - 1
  marked = Array.new(cost_matrix.size, 0)
  while(i >= 0 && currentCost >= 0)
    if(i == 0 && cost_matrix[i][currentCost] > 0) || (cost_matrix[i][currentCost] != cost_matrix[i-1][currentCost])
      marked[i] = 1
      currentCost -= problem.items[i].cost
    end
    i -= 1
  end
  marked
end
This has worked great for the structure above where you simply provide a name, cost and value. Items can be created like the following:
items = [
  KnapsackItem.new('david lee', 8000, 30),
  KnapsackItem.new('kevin love', 12000, 50),
  KnapsackItem.new('kemba walker', 7300, 10),
  KnapsackItem.new('jrue holiday', 12300, 30),
  KnapsackItem.new('stephen curry', 10300, 80),
  KnapsackItem.new('lebron james', 5300, 90),
  KnapsackItem.new('kevin durant', 2300, 30),
  KnapsackItem.new('russell westbrook', 9300, 30),
  KnapsackItem.new('kevin martin', 8300, 15),
  KnapsackItem.new('steve nash', 4300, 15),
  KnapsackItem.new('kyle lowry', 6300, 20),
  KnapsackItem.new('monta ellis', 8300, 30),
  KnapsackItem.new('dirk nowitzki', 7300, 25),
  KnapsackItem.new('david lee', 9500, 35),
  KnapsackItem.new('klay thompson', 6800, 28)
]
problem = KnapsackProblem.new(items, 65000)
Now, the problem I'm having is that I need to add a position for each of these players, and the knapsack algorithm still needs to maximize value across all players, but there is a new restriction: each player has a position, and each position can only be selected a certain number of times. Some positions can be selected twice, others once. Items would ideally become this:
KnapsackItem = Struct.new(:name, :cost, :position, :value)
Positions would have a restriction such as the following:
PositionLimits = Struct.new(:position, :max)
Limits would be instantiated perhaps like the following:
limits = [PositionLimits.new('PG', 2), PositionLimits.new('C', 1), PositionLimits.new('SF', 2), PositionLimits.new('PF', 2), PositionLimits.new('Util', 2)]
What makes this a little more tricky is every player can be in the Util position. If we want to disable the Util position, we will just set the 2 to 0.
Our original items array would look something like the following:
items = [
  KnapsackItem.new('david lee', 'PF', 8000, 30),
  KnapsackItem.new('kevin love', 'C', 12000, 50),
  KnapsackItem.new('kemba walker', 'PG', 7300, 10),
  ... etc ...
]
How can position restrictions be added to the knapsack algorithm in order to still retain max value for the provided player pool provided?
There are some efficient libraries available in Ruby which could suit your task. It's clear that you are looking for constraint-based optimization; several Ruby libraries for this are open source, so they are free to use, and you just include them in your project. All you need to do is generate a linear programming (LP) model, an objective function plus your constraints, and the library's optimizer will generate a solution that satisfies all of your constraints, or report that no solution exists if nothing can be concluded from the given constraints.
Some such libraries available in Ruby are:
1) RGLPK
2) OPL
3) LP Solve
OPL follows an LP syntax similar to IBM CPLEX, which is a widely used optimization package, so you can find good references on how to model the LP using it. Moreover, it is built on top of RGLPK.
As I understand it, the additional constraint that you are specifying is the following:
There is a set of elements, out of which at most k (k = 1 or 2) elements can be selected in the solution, and there are multiple such sets.
There are two approaches that come to my mind, neither of which is efficient enough.
Approach 1:
Divide the elements into groups of positions. So if there are 5 positions, then each element shall be assigned to one of 5 groups.
Iterate (or recur) through all the combinations by selecting 1 (or 2) elements from each group and checking the total value and cost. There are ways in which you can prune (fathom) some combinations. For example, if a group contains two elements where one gives more value at a lower cost, then the other can be rejected from all solutions.
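A rough brute-force sketch of this approach in Python (the data layout and position limits are assumptions based on the question, and the Util slot is ignored for brevity):

from itertools import combinations, product

def best_lineup(players, limits, max_cost):
    """players: list of (name, position, cost, value); limits: {position: max picks}."""
    by_pos = {}
    for p in players:
        by_pos.setdefault(p[1], []).append(p)
    # for each position, every way of picking 0..limit players from that group
    per_pos_choices = []
    for pos, group in by_pos.items():
        k_max = limits.get(pos, 0)
        per_pos_choices.append([c for k in range(k_max + 1) for c in combinations(group, k)])
    best = (0, None)
    for combo in product(*per_pos_choices):
        picked = [p for chunk in combo for p in chunk]
        cost = sum(p[2] for p in picked)
        value = sum(p[3] for p in picked)
        if cost <= max_cost and value > best[0]:
            best = (value, picked)
    return best

# e.g. best_lineup([("stephen curry", "PG", 10300, 80), ("lebron james", "PG", 5300, 90)],
#                  {"PG": 2, "C": 1, "SF": 2, "PF": 2}, 65000)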
Approach 2:
Mixed Integer Linear Programming Approach.
Formulate the problem as follows:
Maximize: summation(Vi * Xi) for i = 1 to N,
    where Vi is the value and Xi is a 0/1 variable denoting the presence/absence of element i in the solution.
Subject to the constraints:
    summation(ci * Xi) <= C_MAX   (total cost)
and, for each group:
    summation(Xj) <= 1 (or 2, depending on the position)
All Xi = 0 or 1.
And then you will have to find a solver to solve the above MILP.
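For illustration, the same MILP can be written with an off-the-shelf modeling library; here is a small sketch using Python and PuLP (the player data, limits and budget are made up for the example, and the Util slot is not modeled):

import pulp

players = [  # (name, position, cost, value) -- illustrative data only
    ("david_lee", "PF", 8000, 30),
    ("kevin_love", "C", 12000, 50),
    ("stephen_curry", "PG", 10300, 80),
    ("lebron_james", "PG", 5300, 90),
]
limits = {"PG": 2, "C": 1, "SF": 2, "PF": 2}
C_MAX = 30000

prob = pulp.LpProblem("lineup", pulp.LpMaximize)
x = {name: pulp.LpVariable(name, cat="Binary") for name, _, _, _ in players}

# objective: maximize total value
prob += pulp.lpSum(value * x[name] for name, _, _, value in players)
# budget constraint
prob += pulp.lpSum(cost * x[name] for name, _, cost, _ in players) <= C_MAX
# per-position limits
for pos, k in limits.items():
    prob += pulp.lpSum(x[name] for name, p, _, _ in players if p == pos) <= k

prob.solve()
print([name for name in x if x[name].value() == 1])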
This problem is similar to a constrained vehicle routing problem. You can try a heuristic like the savings algorithm from Clarke & Wright. You can also try a brute-force algorithm with fewer players.
Considering players have five positions, your knapsack recurrence would be:
Knpsk(W, N, PG, C, SF, PF, Util) = max( Knpsk(W, N-1, PG, C, SF, PF, Util),                              // skip player N
                                        Knpsk(W-Cost[N], N-1, ...own position reduced...) + Value[N],    // take player N at his position
                                        Knpsk(W-Cost[N], N-1, PG, C, SF, PF, Util-1) + Value[N] )        // take player N as Util
where "own position reduced" means:
if (Pos[N] == "PG") then the PG capacity becomes PG-1,
if (Pos[N] == "C") then the C capacity becomes C-1,
and so on.
PG, C, SF, PF, Util are the current position capacities,
W is the current knapsack capacity, and
N is the number of items still available.
Dynamic programming can be used as before, now with a 7-D table; since the position capacities are small in your case, this only slows the algorithm down by a factor of 16, which is great for an NP-complete problem.
Following is a dynamic programming solution in Java:
public class KnapsackSolver {
HashMap CostMatrix;
// Maximum capacities for positions
int posCapacity[] = {2,1,2,2,2};
// Total positions
String[] positions = {"PG","C","SF","PF","util"};
ArrayList playerSet = new ArrayList<player>();
public ArrayList solutionSet;
public int bestCost;
class player {
int value;
int cost;
int pos;
String name;
public player(int value,int cost,int pos,String name) {
this.value = value;
this.cost = cost;
this.pos = pos;
this.name = name;
}
public String toString() {
return("'"+name+"'"+", "+value+", "+cost+", "+positions[pos]);
}
}
// Used to add player to list of available players
void additem(String name,int cost,int value,String pos) {
int i;
for(i=0;i<positions.length;i++) {
if(pos.equals(positions[i]))
break;
}
playerSet.add(new player(value,cost,i,name));
}
// Converts subproblem data to string for hashing
public String encode(int Capacity,int Totalitems,int[] positions) {
String Data = Capacity+","+Totalitems;
for(int i=0;i<positions.length;i++) {
Data = Data + "," + positions[i];
}
return(Data);
}
// Check if subproblem is in hash tables
int isDone(int capacity,int players,int[] positions) {
String k = encode(capacity,players,positions);
if(CostMatrix.containsKey(k)) {
//System.out.println("Key found: "+k+" "+(Integer)CostMatrix.get(k));
return((Integer)CostMatrix.get(k));
}
return(-1);
}
// Adds subproblem added hash table
void addEncode(int capacity,int players,int[] positions,int value) {
String k = encode(capacity,players,positions);
CostMatrix.put(k, value);
}
boolean checkvalid(int capacity,int players) {
return(!(capacity<1||players<0));
}
// Solve the Knapsack recursively with Hash look up
int solve(int capacity,int players,int[] posCapacity) {
// Check if sub problem is valid
if(checkvalid(capacity,players)) {
//System.out.println("Processing: "+encode(capacity,players,posCapacity));
player current = (player)playerSet.get(players);
int sum1 = 0,sum2 = 0,sum3 = 0;
int temp = isDone(capacity,players-1,posCapacity);
// Donot add player
if(temp>-1) {
sum1 = temp;
}
else sum1 = solve(capacity,players-1,posCapacity);
//check if current player can be added to knapsack
if(capacity>=current.cost) {
posCapacity[posCapacity.length-1]--;
temp = isDone(capacity-current.cost,players-1,posCapacity);
posCapacity[posCapacity.length-1]++;
// Add player to util
if(posCapacity[posCapacity.length-1]>0) {
if(temp>-1) {
sum2 = temp+current.value;
}
else {
posCapacity[posCapacity.length-1]--;
sum2 = solve(capacity-current.cost,players-1,posCapacity)+current.value;
posCapacity[posCapacity.length-1]++;
}
}
// Add player at its position
int i = current.pos;
if(posCapacity[i]>0) {
posCapacity[i]--;
temp = isDone(capacity-current.cost,players-1,posCapacity);
posCapacity[i]++;
if(temp>-1) {
sum3 = temp+current.value;
}
else {
posCapacity[i]--;
sum3 = solve(capacity-current.cost,players-1,posCapacity)+current.value;
posCapacity[i]++;
}
}
}
//System.out.println(sum1+ " "+ sum2+ " " + sum3 );
// Evaluate the maximum of all subproblem
int res = Math.max(Math.max(sum1,sum2), sum3);
//add current solution to Hash table
addEncode(capacity, players, posCapacity,res);
//System.out.println("Encoding: "+encode(capacity,players,posCapacity)+" Cost: "+res);
return(res);
}
return(0);
}
void getSolution(int capacity,int players,int[] posCapacity) {
if(players>=0) {
player curr = (player)playerSet.get(players);
int bestcost = isDone(capacity,players,posCapacity);
int sum1 = 0,sum2 = 0,sum3 = 0;
//System.out.println(encode(capacity,players-1,posCapacity)+" "+bestcost);
sum1 = isDone(capacity,players-1,posCapacity);
posCapacity[posCapacity.length-1]--;
sum2 = isDone(capacity-curr.cost,players-1,posCapacity) + curr.value;
posCapacity[posCapacity.length-1]++;
posCapacity[curr.pos]--;
sum3 = isDone(capacity-curr.cost,players-1,posCapacity) + curr.value;
posCapacity[curr.pos]++;
if(bestcost==0)
return;
// Check if player is not added
if(sum1==bestcost) {
getSolution(capacity,players-1,posCapacity);
}
// Check if player is added to util
else if(sum2==bestcost) {
solutionSet.add(curr);
//System.out.println(positions[posCapacity.length-1]+" added");
posCapacity[posCapacity.length-1]--;
getSolution(capacity-curr.cost,players-1,posCapacity);
posCapacity[posCapacity.length-1]++;
}
else {
solutionSet.add(curr);
//System.out.println(positions[curr.pos]+" added");
posCapacity[curr.pos]--;
getSolution(capacity-curr.cost,players-1,posCapacity);
posCapacity[curr.pos]++;
}
}
}
void getOptSet(int capacity) {
CostMatrix = new HashMap<String,Integer>();
bestCost = solve(capacity,playerSet.size()-1,posCapacity);
solutionSet = new ArrayList<player>();
getSolution(capacity, playerSet.size()-1, posCapacity);
}
public static void main(String[] args) {
KnapsackSolver ks = new KnapsackSolver();
ks.additem("david lee", 8000, 30, "PG");
ks.additem("kevin love", 12000, 50, "C");
ks.additem("kemba walker", 7300, 10, "SF");
ks.additem("jrue holiday", 12300, 30, "PF");
ks.additem("stephen curry", 10300, 80, "PG");
ks.additem("lebron james", 5300, 90, "PG");
ks.additem("kevin durant", 2300, 30, "C");
ks.additem("russell westbrook", 9300, 30, "SF");
ks.additem("kevin martin", 8300, 15, "PF");
ks.additem("steve nash", 4300, 15, "C");
ks.additem("kyle lowry", 6300, 20, "PG");
ks.additem("monta ellis", 8300, 30, "C");
ks.additem("dirk nowitzki", 7300, 25, "SF");
ks.additem("david lee", 9500, 35, "PF");
ks.additem("klay thompson", 6800, 28,"PG");
//System.out.println("Items added...");
// System.out.println(ks.playerSet);
int maxCost = 30000;
ks.getOptSet(maxCost);
System.out.println("Best Value: "+ks.bestCost);
System.out.println("Solution Set: "+ks.solutionSet);
}
}
Note: if more players of a certain position are added than that position's capacity, the extra ones are counted as Util, because a player from any position can be added as Util.

Algorithm: Determine if a combination of min/max values fall within a given range

Imagine you have 3 buckets, but each of them has a hole in it. I'm trying to fill a bath tub. The bath tub has a minimum level of water it needs and a maximum level of water it can contain. By the time you reach the tub with the bucket it is not clear how much water will be in the bucket, but you have a range of possible values.
Is it possible to adequately fill the tub with water?
Pretty much, you have 3 ranges (min, max); is there some sum of them that will fall within a 4th range?
For example:
Bucket 1 : 5-10L
Bucket 2 : 15-25L
Bucket 3 : 10-50L
Bathtub 100-150L
Is there some guaranteed combination of 1 2 and 3 that will fill the bathtub within the requisite range? Multiples of each bucket can be used.
EDIT: Now imagine there are 50 different buckets?
If the capacity of the tub is not very large (not greater than 10^6, for example), we can solve it using dynamic programming.
Approach:
Initialization: memo[X][Y] is an array used to memoize results. X = number of buckets, Y = maximum capacity of the tub. Initialize memo[][] with -1.
Code:
bool dp(int bucketNum, int curVolume){
    if(curVolume > maxCap) return false;              // pruning extra branches
    if(curVolume >= minCap && curVolume <= maxCap){   // base case on success
        return true;
    }
    int &ret = memo[bucketNum][curVolume];
    if(ret != -1){ // this state has been visited earlier
        return false;
    }
    ret = false;
    for(int i = minC[bucketNum]; i <= maxC[bucketNum]; i++){
        int newVolume = curVolume + i;
        for(int j = bucketNum; j <= 3; j++){
            ret |= dp(j, newVolume);
            if(ret == true) return ret;
        }
    }
    return ret;
}
Warning: Code not tested
Here's a naïve recursive solution in python that works just fine (although it doesn't find an optimal solution):
def match_helper(lower, upper, units, least_difference, fail = dict()):
    if upper < lower + least_difference:
        return None
    if fail.get((lower,upper)):
        return None
    exact_match = [ u for u in units if u['lower'] >= lower and u['upper'] <= upper ]
    if exact_match:
        return [ exact_match[0] ]
    for unit in units:
        if unit['upper'] > upper:
            continue
        recursive_match = match_helper(lower - unit['lower'], upper - unit['upper'], units, least_difference)
        if recursive_match:
            return [unit] + recursive_match
    else:
        # for/else: runs only when no unit led to a match
        fail[(lower,upper)] = 1
        return None

def match(lower, upper):
    units = [
        { 'name': 'Bucket 1', 'lower': 5, 'upper': 10 },
        { 'name': 'Bucket 2', 'lower': 15, 'upper': 25 },
        { 'name': 'Bucket 3', 'lower': 10, 'upper': 50 }
    ]
    least_difference = min([ u['upper'] - u['lower'] for u in units ])
    return match_helper(
        lower = lower,
        upper = upper,
        units = sorted(units, key = lambda u: u['upper']),
        least_difference = min([ u['upper'] - u['lower'] for u in units ]),
    )

result = match(100, 175)
if result:
    lower = sum([ u['lower'] for u in result ])
    upper = sum([ u['upper'] for u in result ])
    names = [ u['name'] for u in result ]
    print lower, "-", upper
    print names
else:
    print "No solution"
It prints "No solution" for 100-150, but for 100-175 it comes up with a solution of 5x bucket 1, 5x bucket 2.
Assuming you are saying that the "range" for each bucket is the amount of water that it may have when it reaches the tub, and all you care about is if they could possibly fill the tub...
Just take the "max" of each bucket and sum them. If that is in the range of what you consider the tub to be "filled" then it can.
Updated:
Given that buckets can be used multiple times, this seems to me like we're looking for solutions to a pair of equations.
Given buckets x, y and z we want to find a, b and c:
a*x.min + b*y.min + c*z.min >= bathtub.min
and
a*x.max + b*y.max + c*z.max <= bathtub.max
Re: http://en.wikipedia.org/wiki/Diophantine_equation
If bathtub.min and bathtub.max are both multiples of the greatest common divisor of x, y and z, then there are infinitely many solutions (i.e. we can fill the tub), otherwise there are no solutions (i.e. we can never fill the tub).
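A small brute-force sketch of searching for such multipliers in Python (the search bound is an arbitrary cap chosen for illustration):

def find_multipliers(buckets, tub_min, tub_max):
    """buckets: list of (min, max) per bucket; returns uses per bucket, or None."""
    # crude search bound: enough copies of the smallest bucket to overshoot the tub
    bound = tub_max // min(b_min for b_min, _ in buckets) + 1

    def search(i, need_min, allow_max, counts):
        if need_min <= 0 and allow_max >= 0:
            return counts + [0] * (len(buckets) - i)   # guaranteed fill, no overflow
        if i == len(buckets) or allow_max < 0:
            return None
        b_min, b_max = buckets[i]
        for n in range(bound + 1):                     # use bucket i exactly n times
            found = search(i + 1, need_min - n * b_min, allow_max - n * b_max, counts + [n])
            if found is not None:
                return found
        return None

    return search(0, tub_min, tub_max, [])

print(find_multipliers([(5, 10), (15, 25), (10, 50)], 100, 150))   # None
print(find_multipliers([(5, 10), (15, 25), (10, 50)], 100, 175))   # [0, 7, 0]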
This can be solved with multiple applications of the change making problem.
Each Bucket.Min value is a currency denomination, and Bathtub.Min is the target value.
When you find a solution via a change-making algorithm, then apply one more constraint:
sum(each Bucket.Max in your solution) <= Bathtub.max
If this constraint is not met, throw out this solution and look for another. This will probably require a change to a standard change-making algorithm that allows you to try other solutions when one is found to not be suitable.
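A rough Python sketch of this change-making idea (it only finds combinations whose minimums hit Bathtub.Min exactly, which is the change-making framing; as noted above, a real implementation may need to relax this and keep trying other solutions):

def change_making_fill(buckets, tub_min, tub_max):
    """buckets: list of (min, max); returns a dict bucket index -> count, or None."""
    def make(i, remaining, counts):
        if remaining == 0:
            full = counts + [0] * (len(buckets) - len(counts))
            if sum(n * buckets[j][1] for j, n in enumerate(full)) <= tub_max:
                yield dict(enumerate(full))            # the maxes also fit the tub
            return
        if i == len(buckets) or remaining < 0:
            return
        b_min = buckets[i][0]
        for n in range(remaining // b_min + 1):        # use bucket i's minimum n times
            yield from make(i + 1, remaining - n * b_min, counts + [n])
    return next(make(0, tub_min, []), None)

print(change_making_fill([(5, 10), (15, 25), (10, 50)], 100, 175))
# {0: 2, 1: 6, 2: 0} -- 2 of Bucket 1 plus 6 of Bucket 2 lands in 100..170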
Initially, your target range is Bathtub.Range.
Each time you add an instance of a bucket to the solution, you reduce the target range for the remaining buckets.
For example, using your example buckets and tub:
Target Range = 100..150
Let's say we want to add a Bucket1 to the candidate solution. That then gives us
Target Range = 95..140
because if the rest of the buckets in the solution total < 95, then this Bucket1 might not be sufficient to fill the tub to 100, and if the rest of the buckets in the solution total > 140, then this Bucket1 might fill the tub over 150.
So, this gives you a quick way to check if a candidate solution is valid:
TargetRange = Bathtub.Range
foreach Bucket in CandidateSolution
TargetRange.Min -= Bucket.Min
TargetRange.Max -= Bucket.Max
if TargetRange.Min == 0 AND TargetRange.Max >= 0 then solution found
if TargetRange.Min < 0 or TargetRange.Max < 0 then solution is invalid
This still leaves the question - How do you come up with the set of candidate solutions?
Brute force would try all possible combinations of buckets.
Here is my solution for finding the optimal solution (least number of buckets). It compares the ratio of the maximums to the ratio of the minimums, to figure out the optimal number of buckets to fill the tub.
private static void BucketProblem()
{
    Range bathTub = new Range(100, 175);
    List<Range> buckets = new List<Range> {new Range(5, 10), new Range(15, 25), new Range(10, 50)};
    Dictionary<Range, int> result;
    bool canBeFilled = SolveBuckets(bathTub, buckets, out result);
}

private static bool BucketHelper(Range tub, List<Range> buckets, Dictionary<Range, int> results)
{
    Range bucket;
    int startBucket = -1;
    int fills = -1;
    for (int i = buckets.Count - 1; i >= 0; i--)
    {
        bucket = buckets[i];
        double maxRatio = (double)tub.Maximum / bucket.Maximum;
        double minRatio = (double)tub.Minimum / bucket.Minimum;
        if (maxRatio >= minRatio)
        {
            startBucket = i;
            if (maxRatio - minRatio > 1)
                fills = (int) minRatio + 1;
            else
                fills = (int) maxRatio;
            break;
        }
    }
    if (startBucket < 0)
        return false;
    bucket = buckets[startBucket];
    tub.Maximum -= bucket.Maximum * fills;
    tub.Minimum -= bucket.Minimum * fills;
    results.Add(bucket, fills);
    return tub.Maximum == 0 || tub.Minimum <= 0 || startBucket == 0 || BucketHelper(tub, buckets.GetRange(0, startBucket), results);
}

public static bool SolveBuckets(Range tub, List<Range> buckets, out Dictionary<Range, int> results)
{
    results = new Dictionary<Range, int>();
    buckets = buckets.OrderBy(b => b.Minimum).ToList();
    return BucketHelper(new Range(tub.Minimum, tub.Maximum), buckets, results);
}

Implement a queue in which push_rear(), pop_front() and get_min() are all constant time operations

I came across this question:
Implement a queue in which push_rear(), pop_front() and get_min() are all constant time operations.
I initially thought of using a min-heap data structure which has O(1) complexity for a get_min(). But push_rear() and pop_front() would be O(log(n)).
Does anyone know what would be the best way to implement such a queue which has O(1) push(), pop() and min()?
I googled about this, and wanted to point out this Algorithm Geeks thread. But it seems that none of the solutions follow constant time rule for all 3 methods: push(), pop() and min().
Thanks for all the suggestions.
You can implement a stack with O(1) pop(), push() and get_min(): just store the current minimum together with each element. So, for example, the stack [4,2,5,1] (1 on top) becomes [(4,4), (2,2), (5,2), (1,1)].
Then you can use two stacks to implement the queue. Push to one stack, pop from another one; if the second stack is empty during the pop, move all elements from the first stack to the second one.
E.g., for a pop request, after moving all the elements from the first stack [(4,4), (2,2), (5,2), (1,1)], the second stack would be [(1,1), (5,1), (2,1), (4,1)]; now return the top element from the second stack.
To find the minimum element of the queue, look at the smallest two elements of the individual min-stacks, then take the minimum of those two values. (Of course, there's some extra logic here in case one of the stacks is empty, but that's not too hard to work around.)
It will have O(1) get_min() and push() and amortized O(1) pop().
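A minimal Python sketch of this two-stack construction (each stack entry stores the value together with the minimum of that stack so far):

class MinQueue:
    def __init__(self):
        self.in_stack = []    # push_rear goes here
        self.out_stack = []   # pop_front comes from here

    def _push(self, stack, value):
        current_min = min(value, stack[-1][1]) if stack else value
        stack.append((value, current_min))

    def push_rear(self, value):
        self._push(self.in_stack, value)

    def pop_front(self):
        if not self.out_stack:
            # refill: reverses the order, so the oldest element ends up on top
            while self.in_stack:
                self._push(self.out_stack, self.in_stack.pop()[0])
        return self.out_stack.pop()[0]

    def get_min(self):
        candidates = [s[-1][1] for s in (self.in_stack, self.out_stack) if s]
        if not candidates:
            raise IndexError("get_min from an empty queue")
        return min(candidates)

q = MinQueue()
for v in (4, 2, 5, 1):
    q.push_rear(v)
print(q.get_min())    # 1
print(q.pop_front())  # 4
print(q.get_min())    # 1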
Okay - I think I have an answer that gives you all of these operations in amortized O(1), meaning that any one operation could take up to O(n), but any sequence of n operations takes O(1) time per operation.
The idea is to store your data as a Cartesian tree. This is a binary tree obeying the min-heap property (each node is no bigger than its children) and is ordered in a way such that an inorder traversal of the nodes gives you back the nodes in the same order in which they were added. For example, here's a Cartesian tree for the sequence 2 1 4 3 5:
    1
   / \
  2   3
     / \
    4   5
It is possible to insert an element into a Cartesian tree in O(1) amortized time using the following procedure. Look at the right spine of the tree (the path from the root to the rightmost leaf formed by always walking to the right). Starting at rightmost node, scan upward along this path until you find the first node smaller than the node you're inserting.
Change that node so that its right child is this new node, then make that node's former right child the left child of the node you just added. For example, suppose that we want to insert another copy of 2 into the above tree. We walk up the right spine past the 5 and the 3, but stop below the 1 because 1 < 2. We then change the tree to look like this:
      1
     / \
    2   2
       /
      3
     / \
    4   5
Notice that an inorder traversal gives 2 1 4 3 5 2, which is the sequence in which we added the values.
This runs in amortized O(1) because we can create a potential function equal to the number of nodes in the right spine of the tree. The real time required to insert a node is 1 plus the number of nodes in the spine we consider (call this k). Once we find the place to insert the node, the size of the spine shrinks by length k - 1, since each of the k nodes we visited are no longer on the right spine, and the new node is in its place. This gives an amortized cost of 1 + k + (1 - k) = 2 = O(1), for the amortized O(1) insert. As another way of thinking about this, once a node has been moved off the right spine, it's never part of the right spine again, and so we will never have to move it again. Since each of the n nodes can be moved at most once, this means that n insertions can do at most n moves, so the total runtime is at most O(n) for an amortized O(1) per element.
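A minimal Python sketch of just this insertion step, keeping the right spine in an explicit list (the dequeue step and the leftmost-node bookkeeping are omitted):

class Node:
    def __init__(self, value):
        self.value = value
        self.left = None
        self.right = None

def cartesian_push_rear(root, right_spine, value):
    """Append value at the right end of the Cartesian tree.
    right_spine is a list of nodes on the root-to-rightmost path (root first)."""
    node = Node(value)
    while right_spine and right_spine[-1].value > value:
        right_spine.pop()                 # these nodes leave the right spine for good
    if right_spine:
        parent = right_spine[-1]
        node.left = parent.right          # parent's former right child moves under the new node
        parent.right = node
    else:
        node.left = root                  # every old node was larger: the new node becomes the root
        root = node
    right_spine.append(node)
    return root

# Build the tree for the sequence 2 1 4 3 5, then push another 2:
root, spine = None, []
for v in (2, 1, 4, 3, 5, 2):
    root = cartesian_push_rear(root, spine, v)
print(root.value)  # 1 -- the root is always the minimum, so find-min is O(1)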
To do a dequeue step, we simply remove the leftmost node from the Cartesian tree. If this node is a leaf, we're done. Otherwise, the node can only have one child (the right child), and so we replace the node with its right child. Provided that we keep track of where the leftmost node is, this step takes O(1) time. However, after removing the leftmost node and replacing it with its right child, we might not know where the new leftmost node is. To fix this, we simply walk down the left spine of the tree starting at the new node we just moved to the leftmost child. I claim that this still runs in O(1) amortized time. To see this, I claim that a node is visited at most once during any one of these passes to find the leftmost node. To see this, note that once a node has been visited this way, the only way that we could ever need to look at it again would be if it were moved from a child of the leftmost node to the leftmost node. But all the nodes visited are parents of the leftmost node, so this can't happen. Consequently, each node is visited at most once during this process, and the pop runs in O(1).
We can do find-min in O(1) because the Cartesian tree gives us access to the smallest element of the tree for free; it's the root of the tree.
Finally, to see that the nodes come back in the same order in which they were inserted, note that a Cartesian tree always stores its elements so that an inorder traversal visits them in sorted order. Since we always remove the leftmost node at each step, and this is the first element of the inorder traversal, we always get the nodes back in the order in which they were inserted.
In short, we get O(1) amortized push and pop, and O(1) worst-case find-min.
If I can come up with a worst-case O(1) implementation, I'll definitely post it. This was a great problem; thanks for posting it!
Ok, here is one solution.
First we need something which provides push_back(), push_front(), pop_back() and pop_front() in O(1). It's easy to implement with an array and 2 iterators. The first iterator will point to the front, the second to the back. Let's call this structure a deque.
Here is pseudo-code:
class MyQueue // Our data structure
{
    deque D;   // We need 2 deque objects
    deque Min;

    push(element) // pushing element to MyQueue
    {
        D.push_back(element);
        while(Min.is_not_empty() and Min.back() > element)
            Min.pop_back();
        Min.push_back(element);
    }

    pop() // popping MyQueue
    {
        if(Min.front() == D.front())
            Min.pop_front();
        D.pop_front();
    }

    min()
    {
        return Min.front();
    }
}
Explanation:
Example: let's push the numbers [12,5,10,7,11,19] onto our MyQueue
1) pushing 12
D[12]
Min[12]
2) pushing 5
D[12,5]
Min[5] // 12 > 5, so 12 is removed
3) pushing 10
D[12,5,10]
Min[5,10]
4) pushing 7
D[12,5,10,7]
Min[5,7]
5) pushing 11
D[12,5,10,7,11]
Min[5,7,11]
6) pushing 19
D[12,5,10,7,11,19]
Min[5,7,11,19]
Now let's call pop_front()
we got
D[5,10,7,11,19]
Min[5,7,11,19]
The minimum is 5
Let's call pop_front() again
Explanation: pop_front will remove 5 from D, but it will pop the front element of Min too, because it equals D's front element (5).
D[10,7,11,19]
Min[7,11,19]
And minimum is 7. :)
Use one deque (A) to store the elements and another deque (B) to store the minimums.
When x is enqueued, push_back it to A and keep pop_backing B until the back of B is smaller than x, then push_back x to B.
When dequeuing A, pop_front A and use it as the return value; if it is equal to the front of B, pop_front B as well.
When getting the minimum of A, use the front of B as the return value.
dequeue and getmin are obviously O(1). For the enqueue operation, consider the push_back of n elements. There are n push_backs to A, n push_backs to B, and at most n pop_backs of B, because each element will either stay in B or be popped out of B at most once. Overall there are O(3n) operations and therefore the amortized cost is O(1) as well for enqueue.
Lastly the reason this algorithm works is that when you enqueue x to A, if there are elements in B that are larger than x, they will never be minimums now because x will stay in the queue A longer than any elements in B (a queue is FIFO). Therefore we need to pop out elements in B (from the back) that are larger than x before we push x into B.
from collections import deque

class MinQueue(deque):
    def __init__(self):
        deque.__init__(self)
        self.minq = deque()

    def push_rear(self, x):
        self.append(x)
        while len(self.minq) > 0 and self.minq[-1] > x:
            self.minq.pop()
        self.minq.append(x)

    def pop_front(self):
        x = self.popleft()
        if self.minq[0] == x:
            self.minq.popleft()
        return(x)

    def get_min(self):
        return(self.minq[0])
If you don't mind storing a bit of extra data, it should be trivial to store the minimum value. Push and pop can update the value if the new or removed element is the minimum, and returning the minimum value is as simple as getting the value of the variable.
This is assuming that get_min() does not change the data; if you would rather have something like pop_min() (i.e. remove the minimum element), you can simply store a pointer to the actual element and the element preceding it (if any), and update those accordingly with push_rear() and pop_front() as well.
Edit after comments:
Obviously this leads to O(n) push and pop in the case that the minimum changes on those operations, and so does not strictly satisfy the requirements.
You can actually use a LinkedList to maintain the queue.
Each element in the LinkedList will be of the type:
class LinkedListElement
{
    LinkedListElement next;
    int currentMin;
}
You can have two pointers: one points to the start and the other points to the end.
If you add an element to the start of the queue, examine the start pointer and the node to insert. If the node to insert's currentMin is less than the start's currentMin, then the node to insert's currentMin is the minimum; else update its currentMin with the start's currentMin.
Repeat the same for enqueue.
JavaScript implementation
(Credit to adamax's solution for the idea; I loosely based an implementation on it. Jump to the bottom to see fully commented code or read through the general steps below. Note that this finds the maximum value in O(1) constant time rather than the minimum value--easy to change up):
The general idea is to create two Stacks upon construction of the MaxQueue (I used a linked list as the underlying Stack data structure--not included in the code; but any Stack will do as long as it's implemented with O(1) insertion/deletion). One we'll mostly pop from (dqStack) and one we'll mostly push to (eqStack).
Insertion: O(1) worst case
For enqueue, if the MaxQueue is empty, we'll push the value to dqStack along with the current max value in a tuple (the same value since it's the only value in the MaxQueue); e.g.:
const m = new MaxQueue();
m.enqueue(6);
/*
the dqStack now looks like:
[6, 6] - [value, max]
*/
If the MaxQueue is not empty, we push just the value to eqStack;
m.enqueue(7);
m.enqueue(8);
/*
  dqStack:     eqStack: 8
  [6, 6]                7 - just the value
*/
then, update the maximum value in the tuple.
/*
  dqStack:     eqStack: 8
  [6, 8]                7
*/
Deletion: O(1) amortized
For dequeue we'll pop from dqStack and return the value from the tuple.
m.dequeue();
> 6
// equivalent to:
/*
const tuple = m.dqStack.pop() // [6, 8]
tuple[0];
> 6
*/
Then, if dqStack is empty, move all values in eqStack to dqStack, e.g.:
// if we build a MaxQueue
const maxQ = new MaxQueue(3, 5, 2, 4, 1);
/*
  the stacks will look like:
  dqStack:     eqStack: 1
                        4
                        2
  [3, 5]                5
*/
As each value is moved over, we'll check if it's greater than the max so far and store it in each tuple:
maxQ.dequeue(); // pops from dqStack (now empty), so move all from eqStack to dqStack
> 3
// as dequeue moves one value over, it checks if it's greater than the ***previous max*** and stores the max at tuple[1], i.e., [data, max]:
/*
  dqStack: [5, 5] => 5 > 4 - update                            eqStack:
           [2, 4] => 2 < 4 - no update
           [4, 4] => 4 > 1 - update
           [1, 1] => 1st value moved over so max is itself     empty
*/
Because each value is moved to dqStack at most once, we can say that dequeue has O(1) amortized time complexity.
Finding the maximum value: O(1) worst case
Then, at any point in time, we can call getMax to retrieve the current maximum value in O(1) constant time. As long as the MaxQueue is not empty, the maximum value is easily pulled out of the next tuple in dqStack.
maxQ.getMax();
> 5
// equivalent to calling peek on the dqStack and pulling out the maximum value:
/*
const peekedTuple = maxQ.dqStack.peek(); // [5, 5]
peekedTuple[1];
> 5
*/
Code
class MaxQueue {
    constructor(...data) {
        // create a dequeue Stack from which we'll pop
        this.dqStack = new Stack();
        // create an enqueue Stack to which we'll push
        this.eqStack = new Stack();
        // if enqueueing data at construction, iterate through data and enqueue each
        if (data.length) for (const datum of data) this.enqueue(datum);
    }

    enqueue(data) { // O(1) constant insertion time
        // if the MaxQueue is empty,
        if (!this.peek()) {
            // push data to the dequeue Stack and indicate it's the max;
            this.dqStack.push([data, data]); // e.g., enqueue(8) ==> [data: 8, max: 8]
        } else {
            // otherwise, the MaxQueue is not empty; push data to enqueue Stack
            this.eqStack.push(data);
            // save a reference to the tuple that's next in line to be dequeued
            const next = this.dqStack.peek();
            // if the enqueueing data is > the max in that tuple, update it
            if (data > next[1]) next[1] = data;
        }
    }

    moveAllFromEqToDq() { // O(1) amortized as each value will move at most once
        // start max at -Infinity for comparison with the first value
        let max = -Infinity;
        // until enqueue Stack is empty,
        while (this.eqStack.peek()) {
            // pop from enqueue Stack and save its data
            const data = this.eqStack.pop();
            // if data is > max, set max to data
            if (data > max) max = data;
            // push to dequeue Stack and indicate the current max; e.g., [data: 7, max: 8]
            this.dqStack.push([data, max]);
        }
    }

    dequeue() { // O(1) amortized deletion due to calling moveAllFromEqToDq from time-to-time
        // if the MaxQueue is empty, return undefined
        if (!this.peek()) return;
        // pop from the dequeue Stack and save its data
        const [data] = this.dqStack.pop();
        // if there's no data left in dequeue Stack, move all data from enqueue Stack
        if (!this.dqStack.peek()) this.moveAllFromEqToDq();
        // return the data
        return data;
    }

    peek() { // O(1) constant peek time
        // if the MaxQueue is empty, return undefined
        if (!this.dqStack.peek()) return;
        // peek at dequeue Stack and return its data
        return this.dqStack.peek()[0];
    }

    getMax() { // O(1) constant time to find maximum value
        // if the MaxQueue is empty, return undefined
        if (!this.peek()) return;
        // peek at dequeue Stack and return the current max
        return this.dqStack.peek()[1];
    }
}
#include <iostream>
#include <queue>
#include <deque>
using namespace std;

queue<int> main_queue;
deque<int> min_queue;

void clearQueue(deque<int> &q)
{
    while(q.empty() == false) q.pop_front();
}

void PushRear(int elem)
{
    main_queue.push(elem);
    if(min_queue.empty() == false && elem < min_queue.front())
    {
        clearQueue(min_queue);
    }
    while(min_queue.empty() == false && elem < min_queue.back())
    {
        min_queue.pop_back();
    }
    min_queue.push_back(elem);
}

void PopFront()
{
    int elem = main_queue.front();
    main_queue.pop();
    if (elem == min_queue.front())
    {
        min_queue.pop_front();
    }
}

int GetMin()
{
    return min_queue.front();
}

int main()
{
    PushRear(1);
    PushRear(-1);
    PushRear(2);
    cout << GetMin() << endl;
    PopFront();
    PopFront();
    cout << GetMin() << endl;
    return 0;
}
This solution contains 2 queues:
1. main_q - stores the input numbers.
2. min_q - stores the minimum numbers, according to rules we'll describe (they appear in the functions MainQ.enqueue(x), MainQ.dequeue(), MainQ.get_min()).
Here's the code in Python. Queue is implemented using a List.
The main idea lies in the MainQ.enqueue(x), MainQ.dequeue(), MainQ.get_min() functions.
One key assumption is that emptying a queue takes O(1).
A test is provided at the end.
import numbers

class EmptyQueueException(Exception):
    pass

class BaseQ():
    def __init__(self):
        self.l = list()

    def enqueue(self, x):
        assert isinstance(x, numbers.Number)
        self.l.append(x)

    def dequeue(self):
        return self.l.pop(0)

    def peek_first(self):
        return self.l[0]

    def peek_last(self):
        return self.l[len(self.l)-1]

    def empty(self):
        return self.l==None or len(self.l)==0

    def clear(self):
        self.l=[]

class MainQ(BaseQ):
    def __init__(self, min_q):
        super().__init__()
        self.min_q = min_q

    def enqueue(self, x):
        super().enqueue(x)
        if self.min_q.empty():
            self.min_q.enqueue(x)
        elif x > self.min_q.peek_last():
            self.min_q.enqueue(x)
        else:  # x <= self.min_q.peek_last():
            self.min_q.clear()
            self.min_q.enqueue(x)

    def dequeue(self):
        if self.empty():
            raise EmptyQueueException("Queue is empty")
        x = super().dequeue()
        if x == self.min_q.peek_first():
            self.min_q.dequeue()
        return x

    def get_min(self):
        if self.empty():
            raise EmptyQueueException("Queue is empty, NO minimum")
        return self.min_q.peek_first()

INPUT_NUMS = (("+", 5), ("+", 10), ("+", 3), ("+", 6), ("+", 1), ("+", 2), ("+", 4), ("+", -4), ("+", 100), ("+", -40),
              ("-", None), ("-", None), ("-", None), ("+", -400), ("+", 90), ("-", None),
              ("-", None), ("-", None), ("-", None), ("-", None), ("-", None), ("-", None), ("-", None), ("-", None))

if __name__ == '__main__':
    min_q = BaseQ()
    main_q = MainQ(min_q)
    try:
        for operator, i in INPUT_NUMS:
            if operator == "+":
                main_q.enqueue(i)
                print("Added {} ; Min is: {}".format(i, main_q.get_min()))
                print("main_q = {}".format(main_q.l))
                print("min_q = {}".format(main_q.min_q.l))
                print("==========")
            else:
                x = main_q.dequeue()
                print("Removed {} ; Min is: {}".format(x, main_q.get_min()))
                print("main_q = {}".format(main_q.l))
                print("min_q = {}".format(main_q.min_q.l))
                print("==========")
    except Exception as e:
        print("exception: {}".format(e))
The output of the above test is:
"C:\Program Files\Python35\python.exe" C:/dev/python/py3_pocs/proj1/priority_queue.py
Added 5 ; Min is: 5
main_q = [5]
min_q = [5]
==========
Added 10 ; Min is: 5
main_q = [5, 10]
min_q = [5, 10]
==========
Added 3 ; Min is: 3
main_q = [5, 10, 3]
min_q = [3]
==========
Added 6 ; Min is: 3
main_q = [5, 10, 3, 6]
min_q = [3, 6]
==========
Added 1 ; Min is: 1
main_q = [5, 10, 3, 6, 1]
min_q = [1]
==========
Added 2 ; Min is: 1
main_q = [5, 10, 3, 6, 1, 2]
min_q = [1, 2]
==========
Added 4 ; Min is: 1
main_q = [5, 10, 3, 6, 1, 2, 4]
min_q = [1, 2, 4]
==========
Added -4 ; Min is: -4
main_q = [5, 10, 3, 6, 1, 2, 4, -4]
min_q = [-4]
==========
Added 100 ; Min is: -4
main_q = [5, 10, 3, 6, 1, 2, 4, -4, 100]
min_q = [-4, 100]
==========
Added -40 ; Min is: -40
main_q = [5, 10, 3, 6, 1, 2, 4, -4, 100, -40]
min_q = [-40]
==========
Removed 5 ; Min is: -40
main_q = [10, 3, 6, 1, 2, 4, -4, 100, -40]
min_q = [-40]
==========
Removed 10 ; Min is: -40
main_q = [3, 6, 1, 2, 4, -4, 100, -40]
min_q = [-40]
==========
Removed 3 ; Min is: -40
main_q = [6, 1, 2, 4, -4, 100, -40]
min_q = [-40]
==========
Added -400 ; Min is: -400
main_q = [6, 1, 2, 4, -4, 100, -40, -400]
min_q = [-400]
==========
Added 90 ; Min is: -400
main_q = [6, 1, 2, 4, -4, 100, -40, -400, 90]
min_q = [-400, 90]
==========
Removed 6 ; Min is: -400
main_q = [1, 2, 4, -4, 100, -40, -400, 90]
min_q = [-400, 90]
==========
Removed 1 ; Min is: -400
main_q = [2, 4, -4, 100, -40, -400, 90]
min_q = [-400, 90]
==========
Removed 2 ; Min is: -400
main_q = [4, -4, 100, -40, -400, 90]
min_q = [-400, 90]
==========
Removed 4 ; Min is: -400
main_q = [-4, 100, -40, -400, 90]
min_q = [-400, 90]
==========
Removed -4 ; Min is: -400
main_q = [100, -40, -400, 90]
min_q = [-400, 90]
==========
Removed 100 ; Min is: -400
main_q = [-40, -400, 90]
min_q = [-400, 90]
==========
Removed -40 ; Min is: -400
main_q = [-400, 90]
min_q = [-400, 90]
==========
Removed -400 ; Min is: 90
main_q = [90]
min_q = [90]
==========
exception: Queue is empty, NO minimum
Process finished with exit code 0
Java Implementation
import java.io.*;
import java.util.*;
public class queueMin {
static class stack {
private Node<Integer> head;
public void push(int data) {
Node<Integer> newNode = new Node<Integer>(data);
if(null == head) {
head = newNode;
} else {
Node<Integer> prev = head;
head = newNode;
head.setNext(prev);
}
}
public int pop() {
int data = -1;
if(null == head){
System.out.println("Error Nothing to pop");
} else {
data = head.getData();
head = head.getNext();
}
return data;
}
public int peek(){
if(null == head){
System.out.println("Error Nothing to pop");
return -1;
} else {
return head.getData();
}
}
public boolean isEmpty(){
return null == head;
}
}
static class stackMin extends stack {
private stack s2;
public stackMin(){
s2 = new stack();
}
public void push(int data){
if(data <= getMin()){
s2.push(data);
}
super.push(data);
}
public int pop(){
int value = super.pop();
if(value == getMin()) {
s2.pop();
}
return value;
}
public int getMin(){
if(s2.isEmpty()) {
return Integer.MAX_VALUE;
}
return s2.peek();
}
}
static class Queue {
private stackMin s1, s2;
public Queue(){
s1 = new stackMin();
s2 = new stackMin();
}
public void enQueue(int data) {
s1.push(data);
}
public int deQueue() {
if(s2.isEmpty()) {
while(!s1.isEmpty()) {
s2.push(s1.pop());
}
}
return s2.pop();
}
public int getMin(){
return Math.min(s1.isEmpty() ? Integer.MAX_VALUE : s1.getMin(), s2.isEmpty() ? Integer.MAX_VALUE : s2.getMin());
}
}
static class Node<T> {
private T data;
private T min;
private Node<T> next;
public Node(T data){
this.data = data;
this.next = null;
}
public void setNext(Node<T> next){
this.next = next;
}
public T getData(){
return this.data;
}
public Node<T> getNext(){
return this.next;
}
public void setMin(T min){
this.min = min;
}
public T getMin(){
return this.min;
}
}
public static void main(String args[]){
try {
FastScanner in = newInput();
PrintWriter out = newOutput();
// System.out.println(out);
Queue q = new Queue();
int t = in.nextInt();
while(t-- > 0) {
String[] inp = in.nextLine().split(" ");
switch (inp[0]) {
case "+":
q.enQueue(Integer.parseInt(inp[1]));
break;
case "-":
q.deQueue();
break;
case "?":
out.println(q.getMin());
default:
break;
}
}
out.flush();
out.close();
} catch(IOException e){
e.printStackTrace();
}
}
static class FastScanner {
static BufferedReader br;
static StringTokenizer st;
FastScanner(File f) {
try {
br = new BufferedReader(new FileReader(f));
} catch (FileNotFoundException e) {
e.printStackTrace();
}
}
public FastScanner(InputStream f) {
br = new BufferedReader(new InputStreamReader(f));
}
String next() {
while (st == null || !st.hasMoreTokens()) {
try {
st = new StringTokenizer(br.readLine());
} catch (IOException e) {
e.printStackTrace();
}
}
return st.nextToken();
}
String nextLine(){
String str = "";
try {
str = br.readLine();
} catch (IOException e) {
e.printStackTrace();
}
return str;
}
int nextInt() {
return Integer.parseInt(next());
}
long nextLong() {
return Long.parseLong(next());
}
double nextDoulbe() {
return Double.parseDouble(next());
}
}
static FastScanner newInput() throws IOException {
if (System.getProperty("JUDGE") != null) {
return new FastScanner(new File("input.txt"));
} else {
return new FastScanner(System.in);
}
}
static PrintWriter newOutput() throws IOException {
if (System.getProperty("JUDGE") != null) {
return new PrintWriter("output.txt");
} else {
return new PrintWriter(System.out);
}
}
}
We know that push and pop are constant time operations [O(1) to be precise].
But when we think of get_min() [i.e., finding the current minimum number in the queue], generally the first thing that comes to mind is searching the whole queue every time a request for the minimum element is made. But that will never give a constant time operation, which is the main aim of the problem.
This is asked very frequently in interviews, so you should know the trick.
To do this we have to use two more queues which keep track of the minimum element, and we have to keep modifying these 2 queues as we do push and pop operations on the main queue, so that the minimum element is obtained in O(1) time.
Here is self-descriptive pseudocode based on the approach mentioned above.
Queue q, minq1, minq2;
isMinq1Current = true;

void push(int a)
{
    q.push(a);
    if(isMinq1Current)
    {
        if(minq1.empty) minq1.push(a);
        else
        {
            while(!minq1.empty && minq1.top <= a) minq2.push(minq1.pop());
            minq2.push(a);
            while(!minq1.empty) minq1.pop();
            isMinq1Current = false;
        }
    }
    else
    {
        // mirror of the if(isMinq1Current) branch.
    }
}

int pop()
{
    int a = q.pop();
    if(isMinq1Current)
    {
        if(a == minq1.top) minq1.pop();
    }
    else
    {
        // mirror of the if(isMinq1Current) branch.
    }
    return a;
}

Project Euler #2 - Fibonacci algorithm isn't working properly

Each new term in the Fibonacci sequence is generated by adding the previous two terms. By starting with 1 and 2, the first 10 terms will be:
1, 2, 3, 5, 8, 13, 21, 34, 55, 89, ...
Find the sum of all the even-valued terms in the sequence which do not exceed four million.
Int64[] Numeros = new Int64[4000005];
Numeros[0] = 1;
Numeros[1] = 2;
Int64 Indice = 2;
Int64 Acumulador = 2;
for (int i = 0; i < 4000000; i++)
{
    Numeros[Indice] = Numeros[Indice - 2] + Numeros[Indice - 1];
    if (Numeros[Indice] % 2 == 0)
    {
        if ((Numeros[Indice] + Acumulador) > 4000000)
        {
            break;
        }
        else
        {
            Acumulador += Numeros[Indice];
        }
    }
    Indice++;
}
Console.WriteLine(Acumulador);
Console.ReadLine();
My program isn't working as it should, I guess, because Project Euler says my answer is incorrect. Maybe I'm overlooking something. Any help?
This line
if ((Numeros[Indice] + Acumulador) > 4000000)
is checking for the sum being greater than 4MM. You need to check that the term (Numeros[Indice]) is greater than 4MM. So changing that to this...
if (Numeros[Indice] > 4000000)
is probably a good place to start.
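For comparison, here is a minimal Python sketch of the corrected logic (capping the terms rather than the running sum):

def even_fib_sum(limit=4_000_000):
    a, b = 1, 2
    total = 0
    while a <= limit:          # cap the term itself, not the running sum
        if a % 2 == 0:
            total += a
        a, b = b, a + b
    return total

print(even_fib_sum())  # 4613732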
Also, your test condition in the for loop is useless: i will never reach the value of 4,000,000 before the loop breaks.
A simple
while (true)
{
    // code
}
will also do; there is no need for i, as you never use it.
