Bin packing algorithm

I have a kitchen heating meals from frozen; it needs to produce meals to match a head-count order. The meals come frozen in portion sizes such as 4's, 6's, etc., and the larger sizes have a lower cost per portion. So, allowing for waste, how do I calculate which sizes to use to complete an order at the lowest cost?

This problem kind of sounds like the knapsack problem to me. I'm assuming that a greedy algorithm will not work here because there appear to be overlapping subproblems. You will probably have to use a dynamic programming algorithm which determines the minimum cost for a given head count by calculating the cost for all possible combinations of meal portions satisfying that head count.
I only pointed you in the right direction because this sounds like it might be homework. Either way this problem sounds like it can be reduced to one with a well-known solution.
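As a concrete illustration of that dynamic-programming approach, here is a minimal sketch in Python; the pack sizes and prices are made up for the example, not taken from the question:

```python
def min_cost(head_count, packs):
    """Cheapest way to cover at least head_count portions.
    packs: list of (portion_size, cost); waste is allowed."""
    INF = float("inf")
    # best[n] = minimum cost to serve n or more portions
    best = [INF] * (head_count + 1)
    best[0] = 0.0
    for n in range(1, head_count + 1):
        for size, cost in packs:
            # buy one pack of `size`, plus the cheapest way to cover
            # the remaining head count (overshoot clamps to zero)
            best[n] = min(best[n], best[max(0, n - size)] + cost)
    return best[head_count]

# hypothetical prices: 4-pack at $6.00, 6-pack at $8.40
packs = [(4, 6.00), (6, 8.40)]
print(min_cost(10, packs))  # 14.4 (a 4-pack plus a 6-pack)
```

Note that greedy (always buying the cheapest-per-portion pack) can overshoot expensively, which is exactly why the table over all head counts is needed.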

Related

The greedy algorithm and implementation

Hello, I've just started learning greedy algorithms, and I first looked at the classic coin-changing problem. I could understand the greediness in the algorithm (i.e., choosing a locally optimal solution in the hope of reaching a global optimum), since I am choosing the highest-value coin such that sum + {value of chosen coin} <= total value. Then I started to solve some greedy-algorithm problems on various sites. I could solve most of the problems, but couldn't figure out exactly where the greediness was applied in each problem. I coded the only solution I could think of for each problem and got it accepted. The editorials show the same way of solving the problem, but I could not understand the application of the greedy paradigm in the algorithm.
Are greedy algorithms the only way of solving a particular range of problems? Or are they just one way of solving problems, one that could be more efficient?
Could you give me pseudo codes of a same problem with and without the application of greedy paradigm?
There are lots of real-life examples of greedy algorithms. One of the obvious ones is the coin-changing problem: to make change in a certain currency, we repeatedly dispense the largest denomination. Thus, to give out seventeen dollars and sixty-one cents in change, we give out a ten-dollar bill, a five-dollar bill, two one-dollar bills, two quarters, one dime, and one penny. By doing this, we are guaranteed to minimize the number of bills and coins. This algorithm does not work in all monetary systems...more here
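The change-making procedure just described can be sketched like this (denominations in cents, limited to the ones used in the example):

```python
def make_change(amount_cents):
    # largest-first greedy dispensing; denominations in cents
    denominations = [1000, 500, 100, 25, 10, 5, 1]
    change = {}
    for d in denominations:
        count, amount_cents = divmod(amount_cents, d)
        if count:
            change[d] = count
    return change

# $17.61 -> one $10, one $5, two $1, two quarters, one dime, one penny
print(make_change(1761))
```

This works for US denominations but, as noted, not for every coin system (e.g., with coins {1, 3, 4}, greedy makes 6 as 4+1+1 instead of 3+3).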
I think that there is always another way to solve a problem, but sometimes, as you've stated, it probably will be less efficient.
For example, you can always check all the options (coins permutations), store the results and choose the best, but of course the efficiency is terrible.
Hope it helps.
Greedy algorithms are just a class of algorithms that iteratively construct/improve a solution.
Imagine the most famous problem - TSP. You can formulate it as an Integer Linear Programming problem and give it to an ILP solver, and it will give you the globally optimal solution (if it has enough time). But you could do it in a greedy way. You can construct some solution (e.g., randomly) and then look for changes (e.g., swapping the order of two cities) that improve your solution, and you keep making these changes until no such change is possible.
So the bottom line is: greedy algorithms are only one method of solving hard problems efficiently (in time, but not necessarily in the quality of the solution), and there are other classes of algorithms for solving such problems.
For coins, greedy algorithm is also the optimal one, therefore the "greediness" is not as visible as with some other problems.
In some cases you prefer a solution which is not the best one, but which you can compute much faster (computing the real best solution can take years, for example).
Then you choose a heuristic that should give you the best results, based on the average input data, its structure, and what you want to accomplish.
On Wikipedia, there is a good example: finding the biggest sum of numbers along a path in a tree.
Imagine that you have, for example, 2^1000 nodes in this tree. To find the optimal solution, you have to visit each node once, and a personal computer today cannot do this in your lifetime, so you want some heuristic. A greedy algorithm, however, finds a solution in just 1000 steps (which takes no more than a millisecond).
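To make the contrast concrete, here is a small sketch (the tree shape and values are invented for illustration): the greedy walk descends toward the larger immediate child and can miss a large value hidden deeper, while the exhaustive recursion always finds it.

```python
# A tree node is (value, left, right); leaves have None children.
def greedy_path_sum(node):
    total = 0
    while node:
        value, left, right = node
        total += value
        # greedy: step toward the child with the larger immediate value
        node = max((left, right),
                   key=lambda c: c[0] if c else float("-inf"))
    return total

def best_path_sum(node):
    # exhaustive: examines every root-to-leaf path
    if node is None:
        return 0
    value, left, right = node
    return value + max(best_path_sum(left), best_path_sum(right))

# Greedy is misled: the smaller child 2 hides a large leaf 99.
tree = (1, (2, (99, None, None), None), (3, (4, None, None), None))
print(greedy_path_sum(tree), best_path_sum(tree))  # 8 102
```

The greedy walk touches one node per level (1000 steps for a tree of depth 1000), whereas the exhaustive version must visit every node.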

Heuristics in Graph Traversal

I'm trying to use A* to find the optimal path in a Graph.
The context is that a tourist starts at his hotel, visits landmarks and returns to his hotel at the end of the day. The nodes (landmarks) have two values: importance and time spent. Edges have two values: time spent and cost(currency).
I want to minimize cost, maximize importance and make sure total time is under a certain value. I could find a balance between cost, importance and time for the past path-cost. But what about the future cost? I know how to do it with simpler pathfinding, but is there a method I could follow to find the heuristic I need?
You have a multi-dimensional objective (cost and importance), so your problem is ill-posed: you haven't defined how to trade off cost against importance while you are "minimizing" cost and "maximizing" importance. You'll only get a partial ordering on sites to visit, because some pairs of sites may be incomparable - one site may have higher cost and higher importance, while the other has lower cost and lower importance. Look up the multi-objective knapsack problem if you want concrete ways to formalize and solve your problem. And beware - it can get a bit hairy the deeper you go into the literature.

How to design an optimal trip planner

I'm developing a trip planner program. Each city has a property called rateOfInterest. Each road between two cities has a time cost. The problem is: given the start city and the specific amount of time we want to spend, how do we output the most interesting path (i.e., the one maximizing the sum of the cities' rateOfInterest)? I'm thinking of using some greedy algorithm, but is there any algorithm that can guarantee an optimal path?
EDIT Just as #robotking said, we allow visiting places multiple times, and a place is only interesting on the first visit. We have 50 cities, and each city has approximately 5 adjacent cities. The cost function on each edge is either time or distance. We don't have to visit all cities; given the cost budget, we just need to return an optimal partial trip with the highest ROI. I hope this makes the problem clearer!
This sounds very much like an instance of a weighted TSP, meaning there are some vertices that are more desirable than others...
Now, depending on the number of cities we are talking about, you could find an optimal path by trying every possible permutation (using backtracking with some pruning to make it faster). But the TSP is an n! problem, so for n > 10 or so you can forget it...
If your number of cities is not that small, then finding an optimal path won't be doable, so drop the idea... however, there is most likely a heuristic algorithm that approximates a good-enough solution.
Steven Skiena recommends "Simulated Annealing" as the heuristic of choice for approximating such hard problems. It is very much like "Hill Climbing", but more flexible, or forgiving. What I mean is that while in "Hill Climbing" you only ever accept changes that improve your solution, in "Simulated Annealing" there are some cases where you actually accept a change even though it makes your solution worse locally, hoping that down the road you get your money back...
Either way, whatever is used to approximate a TSP-like problem is applicable here.
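Here is a minimal simulated-annealing sketch for a TSP-style tour using the acceptance rule described above (improvements always kept; worsening moves kept with probability exp(-Δ/T)). The cooling schedule, starting temperature, and move type (segment reversal, i.e. a 2-opt-style move) are arbitrary choices for illustration:

```python
import math
import random

def tour_length(tour, pts):
    # total length of the closed tour over 2-D points
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def anneal(pts, steps=20000, temp=1.0, cooling=0.9995, seed=0):
    rng = random.Random(seed)
    tour = list(range(len(pts)))
    cur = best = tour_length(tour, pts)
    for _ in range(steps):
        i, j = sorted(rng.sample(range(len(pts)), 2))
        tour[i:j + 1] = reversed(tour[i:j + 1])  # propose: reverse a segment
        new = tour_length(tour, pts)
        # accept improvements always; accept worse tours with
        # probability exp(-delta/temp), unlike pure hill climbing
        if new <= cur or rng.random() < math.exp((cur - new) / temp):
            cur = new
            best = min(best, cur)
        else:
            tour[i:j + 1] = reversed(tour[i:j + 1])  # reject: undo the move
        temp *= cooling
    return best

# unit square: the optimal closed tour has length 4
print(anneal([(0, 0), (0, 1), (1, 1), (1, 0)]))  # 4.0
```

As the temperature drops, the acceptance of worsening moves dies off and the process degenerates into plain hill climbing on the current tour.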
From http://en.wikipedia.org/wiki/Travelling_salesman_problem, note that the decision problem version is "(where, given a length L, the task is to decide whether any tour is shorter than L)". If somebody gives me a travelling salesman problem to solve I can set all the cities to have the same rate of interest and then the decision problem is whether a most interesting path for time L actually visits all the cities and returns.
So if there was an efficient solution for your problem there would be an efficient solution for the travelling salesman problem, which is unlikely.
If you want to go further than a greedy search, some of the approaches of the travelling salesman problem may be applicable - http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.26.5150 describes "Iterated Local Search" which looks interesting, with reference to the TSP.
If you want optimality, use a brute-force exhaustive search where the leaves are the states in which the time has run out. As long as the expected depth of the search tree is less than 10 and the worst case is less than 15, you can produce a practical algorithm.
Now if you think about the future and expect your city network to grow, then you cannot ensure optimality. In this case you are dealing with a local search problem.

What is the difference between "hill climbing" and "greedy" algorithms?

Please explain the difference between "hill climbing" and "greedy" algorithms.
It seems both are similar, and I suspect that "hill climbing" is not really an algorithm; it seems to be an optimization technique. Is this correct?
Hill-climbing and greedy algorithms are both heuristics that can be used for optimization problems. In an optimization problem, we generally seek some optimum combination or ordering of problem elements. A given combination or ordering is a solution. In either case, a solution can be evaluated to compare it against other solutions.
In a hill-climbing heuristic, you start with an initial solution. Generate one or more neighboring solutions. Pick the best and continue until there are no better neighboring solutions. This will generally yield one solution. In hill-climbing, we need to know how to evaluate a solution, and how to generate a "neighbor."
In a greedy heuristic, we need to know something special about the problem at hand. A greedy algorithm uses information to produce a single solution.
A good example of an optimization problem is a 0-1 knapsack. In this problem, there is a knapsack with a certain weight limit, and a bunch of items to put in the knapsack. Each item has a weight and a value. The object is to maximize the value of the objects in the knapsack while keeping the weight under the limit.
A greedy algorithm would pick objects of highest density and put them in until the knapsack is full. For example, compared to a brick, a diamond has a high value and a small weight, so we would put the diamond in first.
Here is an example of where a greedy algorithm would fail: say you have a knapsack with capacity 100. You have the following items:
Diamond, value 1000, weight 90 (density = 11.1)
5 gold coins, value 210, weight 20 (density each = 10.5)
The greedy algorithm would put in the diamond and then be done, giving a value of 1000. But the optimal solution would be to include the 5 gold coins, giving value 1050.
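That failure can be reproduced directly; the sketch below compares the density-greedy choice with an exact dynamic-programming solution, using the numbers from the example:

```python
def greedy_knapsack(items, capacity):
    # items: list of (value, weight); take highest density first
    total = 0
    for value, weight in sorted(items, key=lambda it: it[0] / it[1],
                                reverse=True):
        if weight <= capacity:
            capacity -= weight
            total += value
    return total

def dp_knapsack(items, capacity):
    # classic 0-1 knapsack table: best[c] = max value at capacity c
    best = [0] * (capacity + 1)
    for value, weight in items:
        for c in range(capacity, weight - 1, -1):
            best[c] = max(best[c], best[c - weight] + value)
    return best[capacity]

items = [(1000, 90)] + [(210, 20)] * 5  # the diamond and five gold coins
print(greedy_knapsack(items, 100), dp_knapsack(items, 100))  # 1000 1050
```

Greedy grabs the diamond, leaving only 10 units of capacity; the exact solution skips it and takes all five coins.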
The hill-climbing algorithm would generate an initial solution--just randomly choose some items (ensure they are under the weight limit). Then evaluate the solution--that is, determine the value. Generate a neighboring solution. For example, try exchanging one item for another (ensure you are still under the weight limit). If this has a higher value, use this selection and start over.
Hill climbing is not a greedy algorithm.
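For comparison, here is a sketch of hill climbing on the same knapsack. It simplifies the neighborhood described above to flipping a single item in or out (rather than exchanging one item for another) and starts from the empty solution instead of a random one; note that on the diamond-and-coins instance it stops at the local optimum of 1000:

```python
def hill_climb(items, capacity):
    # items: list of (value, weight); start from the empty (feasible) solution
    chosen = [False] * len(items)

    def value_of(sel):
        weight = sum(w for (v, w), s in zip(items, sel) if s)
        if weight > capacity:
            return -1  # infeasible neighbor, never accepted
        return sum(v for (v, w), s in zip(items, sel) if s)

    current = value_of(chosen)
    improved = True
    while improved:
        improved = False
        for i in range(len(items)):
            chosen[i] = not chosen[i]      # neighbor: flip one item
            neighbor = value_of(chosen)
            if neighbor > current:
                current = neighbor         # keep the improving move
                improved = True
            else:
                chosen[i] = not chosen[i]  # revert
    return current

items = [(1000, 90)] + [(210, 20)] * 5
# takes the diamond first and then no coin fits: stuck at 1000, not 1050
print(hill_climb(items, 100))  # 1000
```

Restarting from several random initial solutions is the usual remedy for getting stuck like this.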
Yes you are correct. Hill climbing is a general mathematical optimization technique (see: http://en.wikipedia.org/wiki/Hill_climbing). A greedy algorithm is any algorithm that simply picks the best choice it sees at the time and takes it.
An example of this is making change while minimizing the number of coins (at least with USD). You take the most of the highest denomination of coin, then the most of the next highest, until you reach the amount needed.
In this way, hill climbing is a greedy algorithm.

Find a bijection that best preserves distances

I have two spaces (not necessarily equal in dimension) with N points.
I am trying to find a bijection (pairing) of the points, such that the distances are preserved as well as possible.
I can't seem to find a discussion of possible solutions or algorithms to this question online. Can anyone suggest keywords that I could search for? Does this problem have a name, or does it come up in any domain?
I believe you are looking for a Multidimensional Scaling algorithm where you are minimizing the total change in distance. Unfortunately, I have very little experience in this area and can't be of much more help.
I haven't heard of the exact same problem. There are two similar types of problems:
Non-linear dimensionality reduction: you're given N high-dimensional points and you want to find N low-dimensional points that preserve distances as well as possible. MDS, mentioned by Michael Koval, is one such method.
This might be more promising: algorithms for the assignment problem. For example, Kuhn-Munkres (the Hungarian algorithm): you're given an NxN matrix that encodes the cost of matching p_i with p_j, and you want to find the minimum-cost bijection. There are many generalizations of this problem, for example b-matching (Kuhn-Munkres solves 1-matching).
Depending on how you define "preserves distances as well as possible", I think you want either (2) or a generalization of (2) in which the cost depends not only on the two points being matched but on the assignment of all the other points.
Finally, Kuhn-Munkres comes up everywhere in operations research.
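For very small N the problem can even be solved by brute force over all bijections. The sketch below assumes "preserves distances as well as possible" means minimizing the total absolute difference between corresponding pairwise distances, which is only one of several reasonable formalizations (and note this pairwise objective is exactly the case where the cost depends on the whole assignment, not just one pair):

```python
import itertools
import math

def best_bijection(A, B):
    """Return the permutation perm minimizing
    sum over pairs |dist(A[i], A[j]) - dist(B[perm[i]], B[perm[j]])|."""
    n = len(A)

    def distortion(perm):
        return sum(abs(math.dist(A[i], A[j]) -
                       math.dist(B[perm[i]], B[perm[j]]))
                   for i in range(n) for j in range(i + 1, n))

    # O(n!) search: only viable for a handful of points
    return min(itertools.permutations(range(n)), key=distortion)

A = [(0, 0), (0, 1), (2, 0)]
B = [(5, 5), (7, 5), (5, 6)]  # same triangle, translated and relabeled
print(best_bijection(A, B))  # (0, 2, 1): each point to its counterpart
```

Anything beyond toy sizes needs the assignment-problem machinery above or a heuristic, since the pairwise objective rules out a simple per-point cost matrix.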