Related
Problem statement:
A company sells bowls of various integer sized diameters (inches) and often customers buy a number of these bowls at once.
The company would like to reduce shipping costs by sending the minimum number of packages for an order of bowls to a given customer by finding an optimal nesting of the bowls.
The company has also decided to restrict the nestings with the following limitations:
No more than 3 bowls should be nested in one nesting.
A bowl can be nested inside another if it's smaller but not more than 3 inches smaller than the bowl it's directly nested within.
For example, a customer orders the following bowl sizes:
One 5" bowl
One 8" bowl
Two 11" bowls
One 12" bowl
Two 15" bowls
The following is a possible (and optimal) nesting:
[15] [15,12,11] [11,8,5]
Is there an algorithm to always provide an optimal nesting?
I've looked through many similar questions here on Stack Overflow and googled around, but I can't find this exact problem, nor am I able to map any similar problem over to this problem space in a way that solves it.
This was actually posted in another forum by a real business owner. A number of the developers tried to help, ultimately finding a heuristic solution that provided an optimal solution most of the time but not always.
I can share the chosen algorithm one of the developers put forward as well as a few approaches I tried myself.
I'm just very curious about this problem and whether there is an algorithm that can actually do this, or whether the best solution will be heuristic. If you can either give an idea of how to approach this, share an algorithm, or send a link to a similar problem that can be mapped to this one, that would be awesome.
This can be solved with dynamic programming in polynomial time.
The idea is that we ONLY care about how many boxes there are in total, and how many boxes there are with each top bowl size. We don't care about the details beyond that. This is a polynomial amount of state, so we can track through the calculation and enumerate one arrangement per possible state in polynomial time. We then reconstruct the minimal packing of bowls into boxes from that arrangement.
class Arrangement:
def __init__(self, next_bowl, prev_arrangement=None):
self.prev_arrangement = prev_arrangement
self.add_rule = None
self.open1 = {}
self.open2 = {}
self.next_bowl = next_bowl
if prev_arrangement is None:
self.boxes = 0
for i in range(next_bowl, next_bowl + 4):
self.open1[i] = 0
self.open2[i] = 0
else:
self.boxes = prev_arrangement.boxes
for i in range(next_bowl, next_bowl + 4):
self.open1[i] = prev_arrangement.open1.get(i, 0)
self.open2[i] = prev_arrangement.open2.get(i, 0)
# This will be tuples of tuples.
def state(self):
open1 = tuple(self.open1[i+self.next_bowl] for i in range(4))
open2 = tuple(self.open2[i+self.next_bowl] for i in range(4))
return (open1, open2)
def next_arrangements(self, bowl):
base_arrangement = Arrangement(bowl, self)
base_arrangement.boxes += 1
base_arrangement.add_rule = ("new",)
old_count = self.open2.get(bowl, 0)
base_arrangement.open2[bowl] = old_count + 1
yield base_arrangement
for i in range(1, 4):
if 0 < self.open1.get(bowl+i, 0):
next_arrangement = Arrangement(bowl, self)
next_arrangement.open1[bowl+i] -= 1
next_arrangement.add_rule = ("open", 1, bowl+i)
yield next_arrangement
if 0 < self.open2.get(bowl+i, 0):
next_arrangement = Arrangement(bowl, self)
next_arrangement.open2[bowl+i] -= 1
next_arrangement.open1[bowl] += 1
next_arrangement.add_rule = ("open", 2, bowl+i)
yield next_arrangement
def find_boxes(self):
items = self._find_boxes()
boxes = items["full"]
for more_boxes in items["open1"].values():
boxes.extend(more_boxes)
for more_boxes in items["open2"].values():
boxes.extend(more_boxes)
return list(reversed(sorted(boxes)))
def _find_boxes(self):
if self.prev_arrangement is None:
return {
"full": [],
"open1": {},
"open2": {},
}
else:
items = self.prev_arrangement._find_boxes()
rule = self.add_rule
if rule[0] == "new":
if self.next_bowl not in items["open2"]:
items["open2"][self.next_bowl] = [[self.next_bowl]]
else:
items["open2"][self.next_bowl].append([self.next_bowl])
elif rule[0] == "open":
if rule[1] == 1:
box = items["open1"][rule[2]].pop()
box.append(self.next_bowl)
items["full"].append(box)
elif rule[1] == 2:
box = items["open2"][rule[2]].pop()
box.append(self.next_bowl)
if self.next_bowl not in items["open1"]:
items["open1"][self.next_bowl] = [box]
else:
items["open1"][self.next_bowl].append(box)
return items
def __str__ (self):
return str(self.boxes) + " open1:" + str(self.open1) + " open2:" + str(self.open2)
def bowl_nesting (bowls):
bowls = list(reversed(sorted(bowls))) # Largest to smallest.
start_arrangement = Arrangement(bowls[0])
arrange = {start_arrangement.state(): start_arrangement}
for bowl in bowls:
next_arrange = {}
for state, arrangement in arrange.items():
for next_arrangement in arrangement.next_arrangements(bowl):
state = next_arrangement.state()
if state in next_arrange and next_arrange[state].boxes <= next_arrangement.boxes:
pass # We are not an improvement.
else:
next_arrange[state] = next_arrangement
arrange = next_arrange
min_boxes = len(bowls)
min_box_list = None
for arrangement in arrange.values():
if arrangement.boxes <= min_boxes:
min_boxes = arrangement.boxes
min_box_list = arrangement.find_boxes()
return min_box_list
print(bowl_nesting([15, 15, 12, 11, 11,8,5]))
Now while the above solution works, it is inefficient. Suppose that we have up to k bowls of any given size. The number of combinations of open1[bowl] and open2[bowl] that allows is k choose 2 = k*(k-1)/2. When we consider that our state tracks 4 sizes, that's O(k^8 / 16) possible states. We do that for each of the n bowls to get O(n k^8). This doesn't scale well.
We can do better by making the following notes:
In any arrangement with an open2[bowls+3] option, you do not do worse by moving the next bowl out of whatever box you were going to put it in, and putting it there instead.
If there is an open2[bowls+2] option and an open2[bowls+1] option, you never do worse by picking open2[bowls+2].
If there is an open1[bowls+i] option and an open1[bowls+j] option with 1 <= i < j <= 3 then you never do worse picking open1[bowls+i] instead.
This optimization means fewer choices, which speeds you up by a constant factor. But it also means you cannot have both open2[bowls+3] and open2[bowls], so that O(k^8) becomes O(k^7) states. And adding to the boxes with larger bowls will reduce how much of the potential state space we actually visit, which should lead to a better constant.
Here is this logic with a minor refactor to clean up the code.
class Arrangement:
def __init__(self, next_bowl, prev_arrangement=None, choice=None, position=None):
self.prev_arrangement = prev_arrangement
self.add_rule = None
self.open1 = {}
self.open2 = {}
self.next_bowl = next_bowl
if prev_arrangement is None:
self.boxes = 0
for i in range(next_bowl, next_bowl + 4):
self.open1[i] = 0
self.open2[i] = 0
else:
self.boxes = prev_arrangement.boxes
for i in range(next_bowl, next_bowl + 4):
self.open1[i] = prev_arrangement.open1.get(i, 0)
self.open2[i] = prev_arrangement.open2.get(i, 0)
if choice is not None:
self.choice(choice, position)
# This will be tuples of tuples.
def state(self):
open1 = tuple(self.open1[i+self.next_bowl] for i in range(4))
open2 = tuple(self.open2[i+self.next_bowl] for i in range(4))
return (open1, open2)
def choice (self, rule, position=None):
self.add_rule = (rule, position)
if rule == "new":
self.boxes += 1
self.open2[self.next_bowl] += 1
elif rule == "open1":
self.open1[position] -= 1
elif rule == "open2":
self.open2[position] -= 1
self.open1[self.next_bowl] += 1
def next_arrangements(self, bowl):
if 0 < self.open2.get(bowl+3, 0):
yield Arrangement(bowl, self, "open2", bowl+3)
else:
yield Arrangement(bowl, self, "new")
for i in [3, 2, 1]:
if 0 < self.open1.get(bowl+i, 0):
yield Arrangement(bowl, self, "open1", bowl+i)
break
for i in [2, 1]:
if 0 < self.open2.get(bowl+i, 0):
yield Arrangement(bowl, self, "open2", bowl+i)
break
def find_boxes(self):
items = self._find_boxes()
boxes = items["full"]
for more_boxes in items["open1"].values():
boxes.extend(more_boxes)
for more_boxes in items["open2"].values():
boxes.extend(more_boxes)
return list(reversed(sorted(boxes)))
def _find_boxes(self):
if self.prev_arrangement is None:
return {
"full": [],
"open1": {},
"open2": {},
}
else:
items = self.prev_arrangement._find_boxes()
rule = self.add_rule
if rule[0] == "new":
if self.next_bowl not in items["open2"]:
items["open2"][self.next_bowl] = [[self.next_bowl]]
else:
items["open2"][self.next_bowl].append([self.next_bowl])
elif rule[0] == "open1":
box = items["open1"][rule[1]].pop()
box.append(self.next_bowl)
items["full"].append(box)
elif rule[0] == "open2":
box = items["open2"][rule[1]].pop()
box.append(self.next_bowl)
if self.next_bowl not in items["open1"]:
items["open1"][self.next_bowl] = [box]
else:
items["open1"][self.next_bowl].append(box)
return items
def bowl_nesting (bowls):
bowls = list(reversed(sorted(bowls))) # Largest to smallest.
start_arrangement = Arrangement(bowls[0])
arrange = {start_arrangement.state(): start_arrangement}
for bowl in bowls:
next_arrange = {}
for state, arrangement in arrange.items():
for next_arrangement in arrangement.next_arrangements(bowl):
state = next_arrangement.state()
if state in next_arrange and next_arrange[state].boxes <= next_arrangement.boxes:
pass # We are not an improvement.
else:
next_arrange[state] = next_arrangement
arrange = next_arrange
min_boxes = len(bowls)
min_box_list = None
for arrangement in arrange.values():
if arrangement.boxes <= min_boxes:
min_boxes = arrangement.boxes
min_box_list = arrangement.find_boxes()
return min_box_list
print(bowl_nesting([15, 15, 12, 11, 11,8,5]))
Yes, we can calculate an optimal nesting. As you presented, start with the bowls sorted in reverse order.
15,15,12,11,11,8,5
Assign the minimum number of starting bowls, corresponding to the count of the largest bowl.
[15] [15]
As we iterate element by element, the state we need to keep is the smallest bowl size and count in each container per index visited.
index 0, [(15, 1), (15, 1)]
(The state can be further refined to a multiset of those packages with identical count and smallest bowl size, which would add some complication.)
The choice for any element is which box (or set of boxes with similar state) to add it to or whether to start a new box with it.
index 1, [(15, 1), (12, 2)]
or
index 1, [(15, 1), (15, 1), (12, 1)]
We can explore these branches in an iterative or recursive breadth-first search, prioritised by the number of elements remaining plus the number of packages in the state, avoiding previously seen states.
We can further prune the search space by avoiding branches with the same or more count of packages than the best we've already seen.
This approach would amount to brute force in the sense of exploring all relevant branches. But hopefully the significant restrictions of package size and bowl size relationship would narrow the search space considerably.
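To make this concrete, here is a small Python sketch of that branch-and-bound search. The function name and the summary of a box as a (smallest bowl, count) pair are my own choices, and it uses a depth-first recursion with pruning rather than a priority queue, but the state, branching, and pruning follow the description above; it returns only the minimum number of packages:
def nest_bowls_bnb(bowls, max_nest=3, max_gap=3):
    bowls = sorted(bowls, reverse=True)    # largest to smallest
    best = [len(bowls)]                    # upper bound: one package per bowl
    seen = set()
    def search(i, boxes):                  # boxes: sorted tuple of (smallest bowl, count)
        if len(boxes) >= best[0]:
            return                         # prune: cannot beat the best already found
        if i == len(bowls):
            best[0] = len(boxes)           # placed every bowl with fewer packages
            return
        if (i, boxes) in seen:
            return                         # avoid previously seen states
        seen.add((i, boxes))
        bowl, tried = bowls[i], set()
        for j, (smallest, count) in enumerate(boxes):
            if (smallest, count) in tried:
                continue                   # identical box summaries give identical branches
            tried.add((smallest, count))
            if count < max_nest and bowl < smallest <= bowl + max_gap:
                nested = boxes[:j] + ((bowl, count + 1),) + boxes[j + 1:]
                search(i + 1, tuple(sorted(nested)))
        search(i + 1, tuple(sorted(boxes + ((bowl, 1),))))  # or start a new package
    search(0, ())
    return best[0]
print(nest_bowls_bnb([15, 15, 12, 11, 11, 8, 5]))  # prints 3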
This "Answer" is based on btilly's solution (the accepted answer).
Thank you @btilly for sticking with this and taking the time to revise the algorithm and fix bugs!
Since this was originally set within the context of Google Apps Script, I've rewritten this in Javascript and want to share the JS code with anyone else that might want it.
btilly's improved algorithm does indeed run much quicker than the first. Though the improvement factor depends on the bowls provided, I've noticed it running up to 50 times faster on some of my sample sets.
Below is the JS code. Some caveats:
I've kept the same structure and same naming as much as possible in copying over btilly's solution.
There's no guarantee I did not introduce bugs while porting over btilly's code.
I'm not too familiar with many modern/proper JS conventions, and I don't know Python at all, so translating some of the concepts was tough. Although I think my code is now bug free, if you spot any bugs, inefficiencies, or bad programming ideas, please let me know and I'll update the code below.
I added a count to the state creation to make each state unique. In my Apps Script implementation the JS runtime kept stringifying the arrays, so two states were sometimes considered the same even when they were not (e.g. the previous arrangement's bowl was the same size as another arrangement's bowl, but not the same bowl, the way two 10" bowls might appear to a 9" bowl). This was not needed in Python since the generators were unique based on their memory addresses. If you know a better way to do this in JS, please let me know; the way I did it seems a little sloppy.
Improved/faster code (Javascript):
class Arrangement2{
constructor(next_bowl, prev_arrangement, choice, position){
this.prev_arrangement = prev_arrangement;
this.add_rule = null;
this.open1 = {};
this.open2 = {};
this.next_bowl = next_bowl;
if (prev_arrangement == null){
this.boxes = 0;
for (let i = next_bowl; i < next_bowl + 4; i++){
this.open1[i] = 0;
this.open2[i] = 0;
}
}
else{
this.boxes = prev_arrangement.boxes;
for (let i = next_bowl; i < next_bowl + 4; i++){
this.open1[i] = prev_arrangement.open1[i] != null ? prev_arrangement.open1[i] : 0;
this.open2[i] = prev_arrangement.open2[i] != null ? prev_arrangement.open2[i] : 0;
}
}
if(choice != null){
this.choice(choice,position);
}
}
state(){
let open1 = {};
let open2 = {};
for(let i = 0; i < 4; i++){
open1[i+this.next_bowl] = this.open1[i+this.next_bowl];
open2[i+this.next_bowl] = this.open2[i+this.next_bowl];
}
var toReturn = [];
//Used to make each state unique, without this the algorithm may not always find the best solution
Arrangement2.count++;
toReturn.push(Arrangement2.count);
toReturn.push(open1);
toReturn.push(open2);
return toReturn;
}
choice(rule, position){
this.add_rule = [rule, position];
if( rule == "new" ){
this.boxes += 1;
this.open2[this.next_bowl] += 1;
}
else if( rule == "open1" ){
this.open1[position] -= 1;
}
else if( rule == "open2" ){
this.open2[position] -= 1;
this.open1[this.next_bowl] += 1;
}
}
* next_arrangements (bowl){
if( 0 < (this.open2[bowl+3] != null ? this.open2[bowl+3] : 0)){
yield new Arrangement2(bowl, this, "open2", bowl + 3);
}
else{
yield new Arrangement2(bowl, this, "new", null);
for(let i = 3; i > 0; i--){
if (this.open1[bowl+i] != null ? this.open1[bowl+i] : 0){
yield new Arrangement2(bowl, this, "open1", bowl+i);
break ;
}
}
for(let i = 2; i > 0; i--){
if (this.open2[bowl+i] != null ? this.open2[bowl+i] : 0){
yield new Arrangement2(bowl, this, "open2", bowl+i);
break ;
}
}
}
}
find_boxes(){
let items = this._find_boxes();
let boxes = items["full"];
for (const [key, more_boxes] of Object.entries(items["open1"])) {
boxes = boxes.concat(more_boxes);
}
for (const [key, more_boxes] of Object.entries(items["open2"])) {
boxes = boxes.concat(more_boxes);
}
//Max --> Min (i.e [ 12, 12, 11, 11, 10, 7, 7, 7 ])
boxes.sort(function(a, b){return b - a});
return boxes; //boxes.sort().reverse(); //list(reversed(sorted(boxes)));
}
_find_boxes(){
if (this.prev_arrangement == null){
return {
"full": [],
"open1": {},
"open2": {},
}
}
else{
let items = this.prev_arrangement._find_boxes();
let rule = this.add_rule;
if (rule[0] == "new"){
if (!(this.next_bowl in items["open2"])){
items["open2"][this.next_bowl] = [[this.next_bowl]];
}
else{
items["open2"][this.next_bowl].push([this.next_bowl]);
}
}
else if( rule[0] == "open1"){
let box = items["open1"][rule[1]].pop();
box.push(this.next_bowl);
items["full"].push(box);
}
else if( rule[0] == "open2"){
let box = items["open2"][rule[1]].pop();
box.push(this.next_bowl);
if (!(this.next_bowl in items["open1"])){
items["open1"][this.next_bowl] = [box];
}
else{
items["open1"][this.next_bowl].push(box);
}
}
return items;
}
}
__str__(){
return this.next_bowl + " " + JSON.stringify(this.boxes) + " open1:" + JSON.stringify(this.open1) + " open2:" + JSON.stringify(this.open2);
}
}
allStates_nesting_improved = function (bowls){
//Used to make each state unique, without this the algorithm may not always find the best solution
Arrangement2.count = 0;
//Max --> Min (i.e [ 12, 12, 11, 11, 10, 7, 7, 7 ])
bowls.sort(function(a, b){return b - a});
let start_arrangement = new Arrangement2(bowls[0], null);
let returnObj = start_arrangement.state();
let arrange = {[returnObj]:start_arrangement};
for (const [key, bowl] of Object.entries(bowls) ) {
let next_arrange = {};
for (let [state, arrangement] of Object.entries(arrange) ) {
let next_arrangements = arrangement.next_arrangements(bowl);
let next_arrangement = next_arrangements.next();
while(next_arrangement.value != undefined){
next_arrangement = next_arrangement.value;
let state = next_arrangement.state();
let nextArrange_state = next_arrange[state];
if ( next_arrange[state] != undefined && (nextArrange_state === state) && next_arrange[state].boxes <= next_arrangement.boxes){
continue ; // # We are not an improvement.
}
else{
next_arrange[next_arrangement.state()] = next_arrangement;
}
next_arrangement = next_arrangements.next();
}
}
arrange = next_arrange;
}
let min_boxes = bowls.length;
let min_box_list = null;
for (const [key, arrangement] of Object.entries(arrange) ) {
if (arrangement.boxes <= min_boxes){
min_boxes = arrangement.boxes;
min_box_list = arrangement.find_boxes();
}
}
console.log(min_box_list);
return min_box_list;
}
Original code (Javascript):
class Arrangement1{
constructor(next_bowl, prev_arrangement){
this.prev_arrangement = prev_arrangement;
this.add_rule = null;
this.open1 = {};
this.open2 = {};
this.next_bowl = next_bowl;
if (prev_arrangement == null){
this.boxes = 0;
for (let i = next_bowl; i < next_bowl + 4; i++){
this.open1[i] = 0;
this.open2[i] = 0;
}
}
else{
this.boxes = prev_arrangement.boxes;
for (let i = next_bowl; i < next_bowl + 4; i++){
this.open1[i] = prev_arrangement.open1[i] != null ? prev_arrangement.open1[i] : 0;
this.open2[i] = prev_arrangement.open2[i] != null ? prev_arrangement.open2[i] : 0;
}
}
}
state(){
//Used to make each state unique, without this the algorithm may not always find the best solution
Arrangement1.count++;
let open1 = {};
let open2 = {};
for(let i = 0; i < 4; i++){
open1[i+this.next_bowl] = this.open1[i+this.next_bowl];
open2[i+this.next_bowl] = this.open2[i+this.next_bowl];
}
var toReturn = [];
toReturn.push(Arrangement1.count);
toReturn.push(open1);
toReturn.push(open2);
return toReturn;
}
* next_arrangements (bowl){
let base_arrangement = new Arrangement1(bowl, this);
base_arrangement.boxes += 1;
base_arrangement.add_rule = ["new"];
let old_count = this.open2[bowl] != null ? this.open2[bowl] : 0;
base_arrangement.open2[bowl] = old_count + 1;
yield base_arrangement;
for(let i = 1; i < 4; i++){
if (0 < (this.open1[bowl+i] != null ? this.open1[bowl+i] : 0)){
let next_arrangement = new Arrangement1(bowl, this);
next_arrangement.open1[bowl+i] -= 1;
next_arrangement.add_rule = ["open", 1, bowl+i];
yield next_arrangement;
}
if (0 < (this.open2[bowl+i] != null ? this.open2[bowl+i] : 0)){
let next_arrangement = new Arrangement1(bowl, this);
next_arrangement.open2[bowl+i] -= 1;
next_arrangement.open1[bowl] += 1;
next_arrangement.add_rule = ["open", 2, bowl+i];
yield next_arrangement;
}
}
}
find_boxes(){
let items = this._find_boxes();
let boxes = items["full"];
for (const [key, more_boxes] of Object.entries(items["open1"])) {
boxes = boxes.concat(more_boxes);
}
for (const [key, more_boxes] of Object.entries(items["open2"])) {
boxes = boxes.concat(more_boxes);
}
//Max --> Min (i.e [ 12, 12, 11, 11, 10, 7, 7, 7 ])
boxes.sort(function(a, b){return b - a});
return boxes;
}
_find_boxes(){
if (this.prev_arrangement == null){
return {
"full": [],
"open1": {},
"open2": {},
}
}
else{
let items = this.prev_arrangement._find_boxes();
let rule = this.add_rule;
if (rule[0] == "new"){
if (!(this.next_bowl in items["open2"])){
items["open2"][this.next_bowl] = [[this.next_bowl]];
}
else{
items["open2"][this.next_bowl].push([this.next_bowl]);
}
}
else if( rule[0] == "open"){
if (rule[1] == 1){
let box = items["open1"][rule[2]].pop();
box.push(this.next_bowl);
items["full"].push(box);
}
else if( rule[1] == 2){
let box = items["open2"][rule[2]].pop();
box.push(this.next_bowl);
if (!(this.next_bowl in items["open1"])){
items["open1"][this.next_bowl] = [box];
}
else{
items["open1"][this.next_bowl].push(box);
}
}
}
return items;
}
}
__str__(){
return this.next_bowl + " " + JSON.stringify(this.boxes) + " open1:" + JSON.stringify(this.open1) + " open2:" + JSON.stringify(this.open2);
}
}
allStates_nesting = function (bowls){
//Used to make each state unique, without this the algorithm may not always find the best solution
Arrangement1.count = 0;
//Max --> Min (i.e [ 12, 12, 11, 11, 10, 7, 7, 7 ])
bowls.sort(function(a, b){return b - a});
let start_arrangement = new Arrangement1(bowls[0], null);
let returnObj = start_arrangement.state();
let arrange = {[returnObj]:start_arrangement};
for (const [key, bowl] of Object.entries(bowls) ) {
let next_arrange = {};
for (let [state, arrangement] of Object.entries(arrange) ) {
let next_arrangements = arrangement.next_arrangements(bowl);
let next_arrangement = next_arrangements.next();
while(next_arrangement.value != undefined){
next_arrangement = next_arrangement.value;
let state = next_arrangement.state();
let nextArrange_state = next_arrange[state];
if ( next_arrange[state] != undefined && (nextArrange_state === state) && next_arrange[state].boxes <= next_arrangement.boxes){
continue ; // # We are not an improvement.
}
else{
next_arrange[state] = next_arrangement;
}
next_arrangement = next_arrangements.next();
}
}
arrange = next_arrange;
}
let min_boxes = bowls.length;
let min_box_list = null;
for (const [key, arrangement] of Object.entries(arrange) ) {
if (arrangement.boxes <= min_boxes){
min_boxes = arrangement.boxes;
min_box_list = arrangement.find_boxes();
}
}
return min_box_list;
}
See it in action
Here is a link to a spreadsheet testbed with 3 algorithms:
Algorithm 1: A heuristic algorithm another developer provided (runs fast but doesn't always find the optimal solution and ignores some of the requirements in some of its solutions for simplicity's sake)
Algorithm 2: btilly's revised algorithm (faster)
Algorithm 3: btilly's first attempt
Bowl Nesting Spreadsheet
Feel free to make a copy and modify the code and/or add your own algorithm to compare it with the others. (The orange "Run" button won't work since the spreadsheet is in "Viewer" mode. You'll need to make a copy to run it).
To make a copy go to
File -> Make a copy.
Once you have your own copy, you can click the "Run" button or go to the code by clicking
Extensions -> Apps Script
You can then modify and/or add your own algorithm to the mix.
You'll also have to authorize the script to run, as with all Apps Script scripts.
If you're worried about authorizing it, of course check out the code before clicking run to make sure there isn't anything nefarious in there.
This question has been bugging me for days now, and I am at a loss as to how to solve it. I have tried very hard to solve it on my own, but now I would very much appreciate some help and a pointer in the right direction.
Problem:
Given a set of numbers, and the maximum limits for which each number can be greater than or less than the following number, determine the number of valid orderings of the numbers according to the limits.
Example:
Numbers: 20, 30, 36, 40
Max amount that a number can be greater than the following number: 16
Max amount that a number can be less than the following number: 8
Here there would be 3 valid orderings:
36 40 30 20
40 36 30 20
40 30 36 20
I have devised a way to generate all valid permutations using recursion and trees, but unfortunately it takes far too long in cases in which there are many valid orders of the list (approaches n! run time I believe). I feel as if there is a quicker, more mathematical way of solving this using combinatorics that I am just not seeing. Any advice would be greatly appreciated, thank you!
EDIT:
Here's the code for the permutation algorithm I came up with. The last part of the code tests it out with the sample I gave above. It is written in Python 3.6.
class block:
def __init__(self, val, children):
self.val = val
self.children = children
# Gets all the possible children of the current head within the limits
def get_children(head, nums, b, visited, u, d):
global total
if all(visited):
total += 1
return
for i in range(b):
if not visited[i]:
if head.val - nums[i] <= d and nums[i] - head.val <= u:
head.children.append(block(nums[i], []))
visited[i] = True
get_children(head.children[-1], nums, b, visited, u, d)
visited[i] = False
# Display all the valid permutations of the current head
def show(head, vals, b):
vals.append(head.val)
if head.children == [] and len(vals) == b:
print(*vals)
return
for child in head.children:
show(child, vals[:], b)
# Test it out with the sample
b, nums, u, d = 4, [20, 30, 36, 40], 8, 16
visited = [False for x in range(b)]
total = 0
heads = []
for i in range(b):
heads.append(block(nums[i], []))
visited[i] = True
get_children(heads[-1], nums, b, visited, u, d)
visited[i] = False
show(heads[-1], [], b)
print(total)
This prints:
36 40 30 20
40 30 36 20
40 36 30 20
3
Trying your approach with 10 equal numbers resulted in a run-time of 35 seconds.
The first thing I noticed is that the function only needs the last entry in the list head, so the function can be simplified to take an integer instead of a list. The following code has three simplifications:
Pass in an integer for head instead of a list
Change total to be a return value instead of a global
Avoid storing the children (as only the count of orderings is required)
The simplified code looks like:
def get_children(head, nums, b, visited, u, d):
if all(visited):
return 1
t = 0
for i in range(b):
if not visited[i]:
if head - nums[i] <= d and nums[i] - head <= u:
head2 = nums[i]
visited[i] = True
t += get_children(head2, nums, b, visited, u, d)
visited[i] = False
return t
# Test it out with the sample
nums, u, d = [20, 30, 36, 40], 8, 16
b = len(nums)
visited = [False for x in range(b)]
total = 0
for i in range(b):
head = nums[i]
visited[i] = True
total += get_children(head, nums, b, visited, u, d)
visited[i] = False
print(total)
This takes 7 seconds for a list of 10 equal numbers.
The second thing I noticed is that (for a particular test case) the return value of get_children only depends on the things that are True in visited and the value of head.
Therefore we can cache the results to avoid recomputing them:
cache={}
# Gets all the possible children of the current head within the limits
def get_children(head, nums, b, visited, u, d):
if all(visited):
return 1
key = head,sum(1<<i for i,v in enumerate(visited) if v)
result = cache.get(key,None)
if result is not None:
return result
t = 0
for i in range(b):
if not visited[i]:
if head - nums[i] <= d and nums[i] - head <= u:
head2 = nums[i]
visited[i] = True
t += get_children(head2, nums, b, visited, u, d)
visited[i] = False
cache[key] = t
return t
This version only takes 0.03 seconds for a list of 10 equal numbers (i.e. 1000 times faster than the original).
If you are doing multiple test cases with different values of b/u/d you should reset the cache at the start of each testcase (i.e. cache={}).
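The cached recursion above is equivalent to a dynamic program over (set of used positions, last position). Here is a compact iterative sketch of that formulation (my own code, not part of the answer) that counts the orderings directly; it runs in O(2^n * n^2) time, so it is practical for roughly n <= 20:
def count_orderings(nums, u, d):
    n = len(nums)
    # ways[(mask, last)] = number of valid orderings of the positions in mask ending at nums[last]
    ways = {(1 << i, i): 1 for i in range(n)}
    for mask in range(1, 1 << n):
        for last in range(n):
            cur = ways.get((mask, last), 0)
            if cur == 0:
                continue
            for nxt in range(n):
                if (mask >> nxt) & 1:
                    continue               # nums[nxt] already used
                # a number may exceed its follower by at most d, or fall below it by at most u
                if nums[last] - nums[nxt] <= d and nums[nxt] - nums[last] <= u:
                    key = (mask | (1 << nxt), nxt)
                    ways[key] = ways.get(key, 0) + cur
    full = (1 << n) - 1
    return sum(ways.get((full, last), 0) for last in range(n))
print(count_orderings([20, 30, 36, 40], 8, 16))  # prints 3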
As has been noted in the comments, finding all valid permutations here is equivalent to identifying all Hamiltonian paths in the directed graph that has your numbers as vertices and edges corresponding to each pair of numbers that are permitted to follow one another.
Here's a very simple Java (IDEOne) program to find such paths. Whether this makes your problem tractable depends on the size of your graph and the branching factor.
public static void main(String[] args)
{
int[] values = {20, 30, 36, 40};
Vertex[] g = new Vertex[values.length];
for(int i=0; i<g.length; i++)
g[i] = new Vertex(values[i]);
for(int i=0; i<g.length; i++)
for(int j=0; j<g.length; j++)
if(i != j && g[j].id >= g[i].id-16 && g[j].id <= g[i].id+8)
g[i].adj.add(g[j]);
Set<Vertex> toVisit = new HashSet<>(Arrays.asList(g));
LinkedList<Vertex> path = new LinkedList<>();
for(int i=0; i<g.length; i++)
{
path.addLast(g[i]);
toVisit.remove(g[i]);
findPaths(g[i], path, toVisit);
toVisit.add(g[i]);
path.removeLast();
}
}
static void findPaths(Vertex v, LinkedList<Vertex> path, Set<Vertex> toVisit)
{
if(toVisit.isEmpty())
{
System.out.println(path);
return;
}
for(Vertex av : v.adj)
{
if(toVisit.contains(av))
{
toVisit.remove(av);
path.addLast(av);
findPaths(av, path, toVisit);
path.removeLast();
toVisit.add(av);
}
}
}
static class Vertex
{
int id;
List<Vertex> adj;
Vertex(int id)
{
this.id = id;
adj = new ArrayList<>();
}
public String toString()
{
return String.valueOf(id);
}
}
Output:
[36, 40, 30, 20]
[40, 30, 36, 20]
[40, 36, 30, 20]
I am implementing Dijkstra's shortest path algorithm recursively in Scala, but I am having some trouble. I am getting the incorrect output for nodes 3 to 2, called like this, shortestPath(3, 2, x, BitSet.empty). This outputs 6, but the correct answer should be 7. I cannot seem to figure out what's wrong with my code.
var x = ListBuffer(ListBuffer(0, 2, 3, 4),
ListBuffer(2, 0, 0, 0),
ListBuffer(3, 0, 0, 0),
ListBuffer(4, 0, 0, 0))
My code is here shown below.
def shortestPath(cur: Int, dest: Int, graph: ListBuffer[ListBuffer[Int]], visited: BitSet) :Int = {
val newVisited = visited + cur
if(cur == dest) 0
else {
var pathLength = for(i <- graph(cur).indices; if(!visited(i) && graph(cur)(i) > 0)) yield {
graph(cur)(i) + shortestPath(i, dest, graph, newVisited)
}
if (pathLength.isEmpty) 0 else pathLength.min
}
}
As pointed out by obourgain, the critical error of the code is interpreting the min-distance as 0 when two nodes are not connected.
The min-distance between two nodes should be infinity if they are disconnected, because the cost between two disconnected nodes must be greater than the cost between any connected nodes. One simple fix to your code is to represent infinity with Int.MaxValue.
def shortestPath(cur: Int, dest: Int, graph: ListBuffer[ListBuffer[Int]], visited: BitSet) :Int = {
val newVisited = visited + cur
if(cur == dest) 0
else {
var pathLength = for(i <- graph(cur).indices; if(!visited(i) && graph(cur)(i) > 0)) yield {
val sLen = shortestPath(i, dest, graph, newVisited)
if (graph(cur)(i) > Int.MaxValue - sLen) Int.MaxValue else graph(cur)(i) + sLen // change #1
}
if (pathLength.isEmpty) Int.MaxValue else pathLength.min // change #2
}
}
This modification will give the expected answer Int = 7 when invoking shortestPath(3, 2, x, new BitSet()).
The code commented with "change #1" is to prevent integer overflow when the destination node is not reachable by the neighbor node (thus the min-distance is Int.MaxValue), and the code commented with "change #2" is to treat the min-distance between two nodes as "infinite" when they are disconnected.
The error is on the last line:
if (pathLength.isEmpty) 0 else pathLength.min
If pathLength.isEmpty, it means the two points are not connected. However, the function returns 0, which is interpreted as a connection with weight 0.
LeetCode medium 120. Triangle (Dynamic Programming)
Question:
Given a triangle, find the minimum path sum from top to bottom. Each step you may move to adjacent numbers on the row below.
For example, given the following triangle
[
[2],
[3,4],
[6,5,7],
[4,1,8,3]
]
//The minimum path sum from top to bottom is 11 (i.e., 2 + 3 + 5 + 1 = 11).
//Note:
//Bonus point if you are able to do this using only O(n) extra space, where n is the total number of rows in the triangle.
I always get
fatal error: Can't form Range with end < start
on "for i in (row-1)...0".
Thank you so much! Appreciate your time!
class Solution
{
func minimumTotal(triangle: [[Int]]) -> Int
{
if triangle.count == 0
{
return 0
}
if triangle.count == 1
{
return triangle[0][0]
}
var arr = [Int](count: triangle.last!.count, repeatedValue: 0)
let row = triangle.count
for i in (row-1)...0
{
let col = triangle[i].count
for j in 0...col-1
{
if i == row-1
{
arr[i] = triangle[i][j]
continue
}
arr[j] = min(arr[j], arr[j+1]) + triangle[i][j]
}
}
return arr[0]
}
}
var test1 = Solution()
//var input = [[10]]
//var input = [[1],[2,3]]
var input = [[-1],[2,3],[1,-1,-3]]
var result = test1.minimumTotal(input)
print(result)
for i in (0...row-1).reverse()
Swift can't read row-1...0.
It's a bad idea to create a range where the start is higher than the end: your code will compile, but it will crash at runtime, so use stride instead of a range:
for i in (row-1).stride(through: 0, by: -1) { }
I came across this question:
Implement a queue in which push_rear(), pop_front() and get_min() are all constant time operations.
I initially thought of using a min-heap data structure which has O(1) complexity for a get_min(). But push_rear() and pop_front() would be O(log(n)).
Does anyone know what would be the best way to implement such a queue which has O(1) push(), pop() and min()?
I googled about this, and wanted to point out this Algorithm Geeks thread. But it seems that none of the solutions follow constant time rule for all 3 methods: push(), pop() and min().
Thanks for all the suggestions.
You can implement a stack with O(1) pop(), push() and get_min(): just store the current minimum together with each element. So, for example, the stack [4,2,5,1] (1 on top) becomes [(4,4), (2,2), (5,2), (1,1)].
Then you can use two stacks to implement the queue. Push to one stack, pop from another one; if the second stack is empty during the pop, move all elements from the first stack to the second one.
E.g. for a pop request, after moving all the elements from the first stack [(4,4), (2,2), (5,2), (1,1)], the second stack would be [(1,1), (5,1), (2,1), (4,1)]; now return the top element from the second stack.
To find the minimum element of the queue, look at the minimum of each of the two individual min-stacks, then take the smaller of those two values. (Of course, there's some extra logic here in case one of the stacks is empty, but that's not too hard to work around.)
It will have O(1) get_min() and push() and amortized O(1) pop().
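Here is a minimal Python sketch of this idea; the class and method names are my own. Each stack entry stores a (value, minimum so far) pair, and the queue moves elements from the in-stack to the out-stack only when the out-stack runs dry:
class MinStack:
    def __init__(self):
        self._items = []                   # entries are (value, min so far) pairs
    def push(self, x):
        current_min = min(x, self._items[-1][1]) if self._items else x
        self._items.append((x, current_min))
    def pop(self):
        return self._items.pop()[0]
    def get_min(self):
        return self._items[-1][1] if self._items else float("inf")
    def __bool__(self):
        return bool(self._items)
class MinQueue:
    def __init__(self):
        self._in, self._out = MinStack(), MinStack()
    def push_rear(self, x):                # O(1)
        self._in.push(x)
    def pop_front(self):                   # amortized O(1)
        if not self._out:
            while self._in:                # each element is moved over at most once
                self._out.push(self._in.pop())
        return self._out.pop()
    def get_min(self):                     # O(1): the smaller of the two stacks' minima
        return min(self._in.get_min(), self._out.get_min())
q = MinQueue()
for x in [4, 2, 5, 1]:
    q.push_rear(x)
print(q.get_min())    # 1
print(q.pop_front())  # 4
print(q.get_min())    # 1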
Okay - I think I have an answer that gives you all of these operations in amortized O(1), meaning that any one operation could take up to O(n), but any sequence of n operations takes O(1) time per operation.
The idea is to store your data as a Cartesian tree. This is a binary tree obeying the min-heap property (each node is no bigger than its children) and is ordered in a way such that an inorder traversal of the nodes gives you back the nodes in the same order in which they were added. For example, here's a Cartesian tree for the sequence 2 1 4 3 5:
1
/ \
2 3
/ \
4 5
It is possible to insert an element into a Cartesian tree in O(1) amortized time using the following procedure. Look at the right spine of the tree (the path from the root to the rightmost leaf formed by always walking to the right). Starting at rightmost node, scan upward along this path until you find the first node smaller than the node you're inserting.
Change that node so that its right child is this new node, then make that node's former right child the left child of the node you just added. For example, suppose that we want to insert another copy of 2 into the above tree. We walk up the right spine past the 5 and the 3, but stop below the 1 because 1 < 2. We then change the tree to look like this:
1
/ \
2 2
/
3
/ \
4 5
Notice that an inorder traversal gives 2 1 4 3 5 2, which is the sequence in which we added the values.
This runs in amortized O(1) because we can create a potential function equal to the number of nodes in the right spine of the tree. The real time required to insert a node is 1 plus the number of nodes in the spine we consider (call this k). Once we find the place to insert the node, the size of the spine shrinks by length k - 1, since each of the k nodes we visited are no longer on the right spine, and the new node is in its place. This gives an amortized cost of 1 + k + (1 - k) = 2 = O(1), for the amortized O(1) insert. As another way of thinking about this, once a node has been moved off the right spine, it's never part of the right spine again, and so we will never have to move it again. Since each of the n nodes can be moved at most once, this means that n insertions can do at most n moves, so the total runtime is at most O(n) for an amortized O(1) per element.
To do a dequeue step, we simply remove the leftmost node from the Cartesian tree. If this node is a leaf, we're done. Otherwise, the node can only have one child (the right child), and so we replace the node with its right child. Provided that we keep track of where the leftmost node is, this step takes O(1) time. However, after removing the leftmost node and replacing it with its right child, we might not know where the new leftmost node is. To fix this, we simply walk down the left spine of the tree starting at the new node we just moved to the leftmost child. I claim that this still runs in O(1) amortized time. To see this, I claim that a node is visited at most once during any one of these passes to find the leftmost node. To see this, note that once a node has been visited this way, the only way that we could ever need to look at it again would be if it were moved from a child of the leftmost node to the leftmost node. But all the nodes visited are parents of the leftmost node, so this can't happen. Consequently, each node is visited at most once during this process, and the pop runs in O(1).
We can do find-min in O(1) because the Cartesian tree gives us access to the smallest element of the tree for free; it's the root of the tree.
Finally, to see that the nodes come back in the same order in which they were inserted, note that a Cartesian tree always stores its elements so that an inorder traversal visits them in sorted order. Since we always remove the leftmost node at each step, and this is the first element of the inorder traversal, we always get the nodes back in the order in which they were inserted.
In short, we get O(1) amortized push and pop, and O(1) worst-case find-min.
If I can come up with a worst-case O(1) implementation, I'll definitely post it. This was a great problem; thanks for posting it!
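Here is a rough Python sketch of the Cartesian-tree queue described above (the class and field names are mine, and error handling for popping an empty queue is omitted). It keeps parent pointers plus leftmost/rightmost pointers, so push walks up the right spine and pop walks down to the new leftmost node, as in the explanation:
class _Node:
    __slots__ = ("value", "left", "right", "parent")
    def __init__(self, value):
        self.value = value
        self.left = self.right = self.parent = None
class CartesianTreeQueue:
    def __init__(self):
        self.root = self.leftmost = self.rightmost = None
    def push(self, x):                     # amortized O(1)
        node = _Node(x)
        if self.root is None:
            self.root = self.leftmost = self.rightmost = node
            return
        cur, passed = self.rightmost, None
        while cur is not None and cur.value >= x:
            passed, cur = cur, cur.parent  # walk up the right spine
        node.left = passed                 # the part of the spine we passed hangs to the left
        if passed is not None:
            passed.parent = node
        if cur is None:                    # x is a new minimum: it becomes the root
            self.root = node
        else:
            cur.right, node.parent = node, cur
        self.rightmost = node
    def pop(self):                         # amortized O(1)
        node, child, parent = self.leftmost, self.leftmost.right, self.leftmost.parent
        if parent is None:
            self.root = child              # the leftmost node was the root
        else:
            parent.left = child
        if child is not None:
            child.parent = parent
            while child.left is not None:  # walk down to the new leftmost node
                child = child.left
            self.leftmost = child
        else:
            self.leftmost = parent
        if self.root is None:
            self.rightmost = None          # the queue became empty
        return node.value
    def find_min(self):                    # O(1): the root is a smallest element
        return self.root.value
q = CartesianTreeQueue()
for x in [2, 1, 4, 3, 5]:
    q.push(x)
print(q.find_min())                 # 1
print([q.pop() for _ in range(5)])  # [2, 1, 4, 3, 5]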
Ok, here is one solution.
First we need a structure that provides push_back(), push_front(), pop_back() and pop_front() in O(1). It's easy to implement with an array and 2 iterators: the first iterator points to the front, the second to the back. Let's call such a structure a deque.
Here is pseudo-code:
class MyQueue//Our data structure
{
deque D;//We need 2 deque objects
deque Min;
push(element)//pushing element to MyQueue
{
D.push_back(element);
while(Min.is_not_empty() and Min.back()>element)
Min.pop_back();
Min.push_back(element);
}
pop()//poping MyQueue
{
if(Min.front()==D.front() )
Min.pop_front();
D.pop_front();
}
min()
{
return Min.front();
}
}
Explanation:
For example, let's push the numbers [12,5,10,7,11,19] onto our MyQueue:
1)pushing 12
D [12]
Min[12]
2)pushing 5
D[12,5]
Min[5] //5>12 so 12 removed
3)pushing 10
D[12,5,10]
Min[5,10]
4)pushing 7
D[12,5,10,7]
Min[5,7]
5)pushing 11
D[12,5,10,7,11]
Min[5,7,11]
6)pushing 19
D[12,5,10,7,11,19]
Min[5,7,11,19]
Now let's call pop_front()
we got
D[5,10,7,11,19]
Min[5,7,11,19]
The minimum is 5
Let's call pop_front() again
Explanation: pop_front will remove 5 from D, but it will pop the front element of Min too, because it equals D's front element (5).
D[10,7,11,19]
Min[7,11,19]
And minimum is 7. :)
Use one deque (A) to store the elements and another deque (B) to store the minimums.
When x is enqueued, push_back it to A and keep pop_backing B until the back of B is smaller than x, then push_back x to B.
when dequeuing A, pop_front A as return value, and if it is equal to the front of B, pop_front B as well.
when getting the minimum of A, use the front of B as return value.
dequeue and getmin are obviously O(1). For the enqueue operation, consider the push_back of n elements. There are n push_backs to A, at most n push_backs to B, and at most n pop_backs of B, because each element either stays in B or is popped out of B once. Overall there are O(3n) operations, and therefore the amortized cost is O(1) for enqueue as well.
Lastly, the reason this algorithm works is that when you enqueue x to A, if there are elements in B that are larger than x, they can never be minimums now, because x will stay in the queue A longer than any element already in B (a queue is FIFO). Therefore we need to pop out the elements of B (from the back) that are larger than x before we push x into B.
from collections import deque
class MinQueue(deque):
def __init__(self):
deque.__init__(self)
self.minq = deque()
def push_rear(self, x):
self.append(x)
while len(self.minq) > 0 and self.minq[-1] > x:
self.minq.pop()
self.minq.append(x)
def pop_front(self):
x = self.popleft()
if self.minq[0] == x:
self.minq.popleft()
return(x)
def get_min(self):
return(self.minq[0])
If you don't mind storing a bit of extra data, it should be trivial to store the minimum value. Push and pop can update the value if the new or removed element is the minimum, and returning the minimum value is as simple as getting the value of the variable.
This is assuming that get_min() does not change the data; if you would rather have something like pop_min() (i.e. remove the minimum element), you can simply store a pointer to the actual element and the element preceding it (if any), and update those accordingly with push_rear() and pop_front() as well.
Edit after comments:
Obviously this leads to O(n) push and pop in the case that the minimum changes on those operations, and so does not strictly satisfy the requirements.
You can actually use a LinkedList to maintain the queue.
Each element in LinkedList will be of Type
class LinkedListElement
{
LinkedListElement next;
int currentMin;
}
You can have two pointers: one points to the start and the other points to the end.
If you add an element to the start of the queue, examine the start pointer and the node to insert. If the node to insert has a currentMin less than the start node's currentMin, then its own value is the minimum; otherwise update its currentMin with the start node's currentMin.
Repeat the same for enqueue.
JavaScript implementation
(Credit to adamax's solution for the idea; I loosely based an implementation on it. Jump to the bottom to see fully commented code or read through the general steps below. Note that this finds the maximum value in O(1) constant time rather than the minimum value--easy to change up):
The general idea is to create two Stacks upon construction of the MaxQueue (I used a linked list as the underlying Stack data structure--not included in the code; but any Stack will do as long as it's implemented with O(1) insertion/deletion). One we'll mostly pop from (dqStack) and one we'll mostly push to (eqStack).
Insertion: O(1) worst case
For enqueue, if the MaxQueue is empty, we'll push the value to dqStack along with the current max value in a tuple (the same value since it's the only value in the MaxQueue); e.g.:
const m = new MaxQueue();
m.enqueue(6);
/*
the dqStack now looks like:
[6, 6] - [value, max]
*/
If the MaxQueue is not empty, we push just the value to eqStack;
m.enqueue(7);
m.enqueue(8);
/*
dqStack: eqStack: 8
[6, 6] 7 - just the value
*/
then, update the maximum value in the tuple.
/*
dqStack: eqStack: 8
[6, 8] 7
*/
Deletion: O(1) amortized
For dequeue we'll pop from dqStack and return the value from the tuple.
m.dequeue();
> 6
// equivalent to:
/*
const tuple = m.dqStack.pop() // [6, 8]
tuple[0];
> 6
*/
Then, if dqStack is empty, move all values in eqStack to dqStack, e.g.:
// if we build a MaxQueue
const maxQ = new MaxQueue(3, 5, 2, 4, 1);
/*
the stacks will look like:
dqStack: eqStack: 1
4
2
[3, 5] 5
*/
As each value is moved over, we'll check if it's greater than the max so far and store it in each tuple:
maxQ.dequeue(); // pops from dqStack (now empty), so move all from eqStack to dqStack
> 3
// as dequeue moves one value over, it checks if it's greater than the ***previous max*** and stores the max at tuple[1], i.e., [data, max]:
/*
dqStack:                                          eqStack:
[5, 5] => 5 > 4 - update                          empty
[2, 4] => 2 < 4 - no update
[4, 4] => 4 > 1 - update
[1, 1] => 1st value moved over, so max is itself
*/
Because each value is moved to dqStack at most once, we can say that dequeue has O(1) amortized time complexity.
Finding the maximum value: O(1) worst case
Then, at any point in time, we can call getMax to retrieve the current maximum value in O(1) constant time. As long as the MaxQueue is not empty, the maximum value is easily pulled out of the next tuple in dqStack.
maxQ.getMax();
> 5
// equivalent to calling peek on the dqStack and pulling out the maximum value:
/*
const peekedTuple = maxQ.dqStack.peek(); // [5, 5]
peekedTuple[1];
> 5
*/
Code
class MaxQueue {
constructor(...data) {
// create a dequeue Stack from which we'll pop
this.dqStack = new Stack();
// create an enqueue Stack to which we'll push
this.eqStack = new Stack();
// if enqueueing data at construction, iterate through data and enqueue each
if (data.length) for (const datum of data) this.enqueue(datum);
}
enqueue(data) { // O(1) constant insertion time
// if the MaxQueue is empty,
if (!this.peek()) {
// push data to the dequeue Stack and indicate it's the max;
this.dqStack.push([data, data]); // e.g., enqueue(8) ==> [data: 8, max: 8]
} else {
// otherwise, the MaxQueue is not empty; push data to enqueue Stack
this.eqStack.push(data);
// save a reference to the tuple that's next in line to be dequeued
const next = this.dqStack.peek();
// if the enqueueing data is > the max in that tuple, update it
if (data > next[1]) next[1] = data;
}
}
moveAllFromEqToDq() { // O(1) amortized as each value will move at most once
// start max at -Infinity for comparison with the first value
let max = -Infinity;
// until enqueue Stack is empty,
while (this.eqStack.peek()) {
// pop from enqueue Stack and save its data
const data = this.eqStack.pop();
// if data is > max, set max to data
if (data > max) max = data;
// push to dequeue Stack and indicate the current max; e.g., [data: 7: max: 8]
this.dqStack.push([data, max]);
}
}
dequeue() { // O(1) amortized deletion due to calling moveAllFromEqToDq from time-to-time
// if the MaxQueue is empty, return undefined
if (!this.peek()) return;
// pop from the dequeue Stack and save it's data
const [data] = this.dqStack.pop();
// if there's no data left in dequeue Stack, move all data from enqueue Stack
if (!this.dqStack.peek()) this.moveAllFromEqToDq();
// return the data
return data;
}
peek() { // O(1) constant peek time
// if the MaxQueue is empty, return undefined
if (!this.dqStack.peek()) return;
// peek at dequeue Stack and return its data
return this.dqStack.peek()[0];
}
getMax() { // O(1) constant time to find maximum value
// if the MaxQueue is empty, return undefined
if (!this.peek()) return;
// peek at dequeue Stack and return the current max
return this.dqStack.peek()[1];
}
}
#include <iostream>
#include <queue>
#include <deque>
using namespace std;
queue<int> main_queue;
deque<int> min_queue;
void clearQueue(deque<int> &q)
{
while(q.empty() == false) q.pop_front();
}
void PushRear(int elem)
{
main_queue.push(elem);
if(min_queue.empty() == false && elem < min_queue.front())
{
clearQueue(min_queue);
}
while(min_queue.empty() == false && elem < min_queue.back())
{
min_queue.pop_back();
}
min_queue.push_back(elem);
}
void PopFront()
{
int elem = main_queue.front();
main_queue.pop();
if (elem == min_queue.front())
{
min_queue.pop_front();
}
}
int GetMin()
{
return min_queue.front();
}
int main()
{
PushRear(1);
PushRear(-1);
PushRear(2);
cout<<GetMin()<<endl;
PopFront();
PopFront();
cout<<GetMin()<<endl;
return 0;
}
This solution contains 2 queues:
1. main_q - stores the input numbers.
2. min_q - stores the min numbers by certain rules that we'll describe (they appear in the functions MainQ.enqueue(x), MainQ.dequeue(), MainQ.get_min()).
Here's the code in Python. Queue is implemented using a List.
The main idea lies in the MainQ.enqueue(x), MainQ.dequeue(), MainQ.get_min() functions.
One key assumption is that emptying a queue takes O(1).
A test is provided at the end.
import numbers
class EmptyQueueException(Exception):
pass
class BaseQ():
def __init__(self):
self.l = list()
def enqueue(self, x):
assert isinstance(x, numbers.Number)
self.l.append(x)
def dequeue(self):
return self.l.pop(0)
def peek_first(self):
return self.l[0]
def peek_last(self):
return self.l[len(self.l)-1]
def empty(self):
return self.l==None or len(self.l)==0
def clear(self):
self.l=[]
class MainQ(BaseQ):
def __init__(self, min_q):
super().__init__()
self.min_q = min_q
def enqueue(self, x):
super().enqueue(x)
if self.min_q.empty():
self.min_q.enqueue(x)
elif x > self.min_q.peek_last():
self.min_q.enqueue(x)
else: # x <= self.min_q.peek_last():
self.min_q.clear()
self.min_q.enqueue(x)
def dequeue(self):
if self.empty():
raise EmptyQueueException("Queue is empty")
x = super().dequeue()
if x == self.min_q.peek_first():
self.min_q.dequeue()
return x
def get_min(self):
if self.empty():
raise EmptyQueueException("Queue is empty, NO minimum")
return self.min_q.peek_first()
INPUT_NUMS = (("+", 5), ("+", 10), ("+", 3), ("+", 6), ("+", 1), ("+", 2), ("+", 4), ("+", -4), ("+", 100), ("+", -40),
("-",None), ("-",None), ("-",None), ("+",-400), ("+",90), ("-",None),
("-",None), ("-",None), ("-",None), ("-",None), ("-",None), ("-",None), ("-",None), ("-",None))
if __name__ == '__main__':
min_q = BaseQ()
main_q = MainQ(min_q)
try:
for operator, i in INPUT_NUMS:
if operator=="+":
main_q.enqueue(i)
print("Added {} ; Min is: {}".format(i,main_q.get_min()))
print("main_q = {}".format(main_q.l))
print("min_q = {}".format(main_q.min_q.l))
print("==========")
else:
x = main_q.dequeue()
print("Removed {} ; Min is: {}".format(x,main_q.get_min()))
print("main_q = {}".format(main_q.l))
print("min_q = {}".format(main_q.min_q.l))
print("==========")
except Exception as e:
print("exception: {}".format(e))
The output of the above test is:
"C:\Program Files\Python35\python.exe" C:/dev/python/py3_pocs/proj1/priority_queue.py
Added 5 ; Min is: 5
main_q = [5]
min_q = [5]
==========
Added 10 ; Min is: 5
main_q = [5, 10]
min_q = [5, 10]
==========
Added 3 ; Min is: 3
main_q = [5, 10, 3]
min_q = [3]
==========
Added 6 ; Min is: 3
main_q = [5, 10, 3, 6]
min_q = [3, 6]
==========
Added 1 ; Min is: 1
main_q = [5, 10, 3, 6, 1]
min_q = [1]
==========
Added 2 ; Min is: 1
main_q = [5, 10, 3, 6, 1, 2]
min_q = [1, 2]
==========
Added 4 ; Min is: 1
main_q = [5, 10, 3, 6, 1, 2, 4]
min_q = [1, 2, 4]
==========
Added -4 ; Min is: -4
main_q = [5, 10, 3, 6, 1, 2, 4, -4]
min_q = [-4]
==========
Added 100 ; Min is: -4
main_q = [5, 10, 3, 6, 1, 2, 4, -4, 100]
min_q = [-4, 100]
==========
Added -40 ; Min is: -40
main_q = [5, 10, 3, 6, 1, 2, 4, -4, 100, -40]
min_q = [-40]
==========
Removed 5 ; Min is: -40
main_q = [10, 3, 6, 1, 2, 4, -4, 100, -40]
min_q = [-40]
==========
Removed 10 ; Min is: -40
main_q = [3, 6, 1, 2, 4, -4, 100, -40]
min_q = [-40]
==========
Removed 3 ; Min is: -40
main_q = [6, 1, 2, 4, -4, 100, -40]
min_q = [-40]
==========
Added -400 ; Min is: -400
main_q = [6, 1, 2, 4, -4, 100, -40, -400]
min_q = [-400]
==========
Added 90 ; Min is: -400
main_q = [6, 1, 2, 4, -4, 100, -40, -400, 90]
min_q = [-400, 90]
==========
Removed 6 ; Min is: -400
main_q = [1, 2, 4, -4, 100, -40, -400, 90]
min_q = [-400, 90]
==========
Removed 1 ; Min is: -400
main_q = [2, 4, -4, 100, -40, -400, 90]
min_q = [-400, 90]
==========
Removed 2 ; Min is: -400
main_q = [4, -4, 100, -40, -400, 90]
min_q = [-400, 90]
==========
Removed 4 ; Min is: -400
main_q = [-4, 100, -40, -400, 90]
min_q = [-400, 90]
==========
Removed -4 ; Min is: -400
main_q = [100, -40, -400, 90]
min_q = [-400, 90]
==========
Removed 100 ; Min is: -400
main_q = [-40, -400, 90]
min_q = [-400, 90]
==========
Removed -40 ; Min is: -400
main_q = [-400, 90]
min_q = [-400, 90]
==========
Removed -400 ; Min is: 90
main_q = [90]
min_q = [90]
==========
exception: Queue is empty, NO minimum
Process finished with exit code 0
Java Implementation
import java.io.*;
import java.util.*;
public class queueMin {
static class stack {
private Node<Integer> head;
public void push(int data) {
Node<Integer> newNode = new Node<Integer>(data);
if(null == head) {
head = newNode;
} else {
Node<Integer> prev = head;
head = newNode;
head.setNext(prev);
}
}
public int pop() {
int data = -1;
if(null == head){
System.out.println("Error Nothing to pop");
} else {
data = head.getData();
head = head.getNext();
}
return data;
}
public int peek(){
if(null == head){
System.out.println("Error Nothing to pop");
return -1;
} else {
return head.getData();
}
}
public boolean isEmpty(){
return null == head;
}
}
static class stackMin extends stack {
private stack s2;
public stackMin(){
s2 = new stack();
}
public void push(int data){
if(data <= getMin()){
s2.push(data);
}
super.push(data);
}
public int pop(){
int value = super.pop();
if(value == getMin()) {
s2.pop();
}
return value;
}
public int getMin(){
if(s2.isEmpty()) {
return Integer.MAX_VALUE;
}
return s2.peek();
}
}
static class Queue {
private stackMin s1, s2;
public Queue(){
s1 = new stackMin();
s2 = new stackMin();
}
public void enQueue(int data) {
s1.push(data);
}
public int deQueue() {
if(s2.isEmpty()) {
while(!s1.isEmpty()) {
s2.push(s1.pop());
}
}
return s2.pop();
}
public int getMin(){
return Math.min(s1.isEmpty() ? Integer.MAX_VALUE : s1.getMin(), s2.isEmpty() ? Integer.MAX_VALUE : s2.getMin());
}
}
static class Node<T> {
private T data;
private T min;
private Node<T> next;
public Node(T data){
this.data = data;
this.next = null;
}
public void setNext(Node<T> next){
this.next = next;
}
public T getData(){
return this.data;
}
public Node<T> getNext(){
return this.next;
}
public void setMin(T min){
this.min = min;
}
public T getMin(){
return this.min;
}
}
public static void main(String args[]){
try {
FastScanner in = newInput();
PrintWriter out = newOutput();
// System.out.println(out);
Queue q = new Queue();
int t = in.nextInt();
while(t-- > 0) {
String[] inp = in.nextLine().split(" ");
switch (inp[0]) {
case "+":
q.enQueue(Integer.parseInt(inp[1]));
break;
case "-":
q.deQueue();
break;
case "?":
out.println(q.getMin());
default:
break;
}
}
out.flush();
out.close();
} catch(IOException e){
e.printStackTrace();
}
}
static class FastScanner {
static BufferedReader br;
static StringTokenizer st;
FastScanner(File f) {
try {
br = new BufferedReader(new FileReader(f));
} catch (FileNotFoundException e) {
e.printStackTrace();
}
}
public FastScanner(InputStream f) {
br = new BufferedReader(new InputStreamReader(f));
}
String next() {
while (st == null || !st.hasMoreTokens()) {
try {
st = new StringTokenizer(br.readLine());
} catch (IOException e) {
e.printStackTrace();
}
}
return st.nextToken();
}
String nextLine(){
String str = "";
try {
str = br.readLine();
} catch (IOException e) {
e.printStackTrace();
}
return str;
}
int nextInt() {
return Integer.parseInt(next());
}
long nextLong() {
return Long.parseLong(next());
}
double nextDouble() {
return Double.parseDouble(next());
}
}
static FastScanner newInput() throws IOException {
if (System.getProperty("JUDGE") != null) {
return new FastScanner(new File("input.txt"));
} else {
return new FastScanner(System.in);
}
}
static PrintWriter newOutput() throws IOException {
if (System.getProperty("JUDGE") != null) {
return new PrintWriter("output.txt");
} else {
return new PrintWriter(System.out);
}
}
}
We know that push and pop are constant time operations [O(1) to be precise].
But when we think of get_min() [i.e. finding the current minimum number in the queue], generally the first thing that comes to mind is searching the whole queue every time the minimum element is requested. But this will never give a constant time operation, which is the main aim of the problem.
This is asked very frequently in interviews, so it's worth knowing the trick.
To do this we have to use two more queues which keep track of the minimum element, and we have to keep modifying these 2 queues as we do push and pop operations on the queue, so that the minimum element is obtained in O(1) time.
Here is the self-descriptive pseudo code based on the above approach mentioned.
Queue q, minq1, minq2;
isMinq1Current=true;
void push(int a)
{
q.push(a);
if(isMinq1Current)
{
if(minq1.empty) minq1.push(a);
else
{
while(!minq1.empty && minq1.top <= a) minq2.push(minq1.pop());
minq2.push(a);
while(!minq1.empty) minq1.pop();
isMinq1Current=false;
}
}
else
{
//mirror if(isMinq1Current) branch.
}
}
int pop()
{
int a = q.pop();
if(isMinq1Current)
{
if(a==minq1.top) minq1.pop();
}
else
{
//mirror if(isMinq1Current) branch.
}
return a;
}