If I have a number of items, each weighing less than one pound, and I want to pack them efficiently into one-pound containers, should I do that by brute force? (Figure out all the possible combinations, pack, and see which combination leads to the smallest number of packages?)
Is there a name for this sort of algorithm?
In my case, I don't have a large number of packages.
You might want to look at the knapsack problem.
You can also look for 1D or 2D bin-packing algorithms. If you don't have too many items, I suggest a brute-force algorithm, but in general it is a very hard problem.
You can look into the Algorithm Design Manual for descriptions of your problem:
Bin Packing
http://www.cs.sunysb.edu/~algorith/files/bin-packing.shtml
Knapsack problem
http://www.cs.sunysb.edu/~algorith/files/knapsack.shtml
You can probably make it easier on yourself if you can settle for a fitting solution (good enough), rather than insisting on knowing the best solution.
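For instance, first-fit decreasing is a classic "good enough" greedy heuristic for this kind of one-dimensional packing. A minimal sketch (the weights and the 16-ounce capacity are made-up example values):

```python
def first_fit_decreasing(weights, capacity):
    """Pack each item into the first bin with room, trying big items first.

    Not guaranteed optimal, but fast, and provably within a constant
    factor of the optimal bin count.
    """
    free = []    # remaining free capacity of each open bin
    packed = []  # contents of each bin, for inspection
    for w in sorted(weights, reverse=True):
        for i, room in enumerate(free):
            if w <= room:
                free[i] -= w
                packed[i].append(w)
                break
        else:  # no existing bin fits, so open a new one
            free.append(capacity - w)
            packed.append([w])
    return packed

# Items under one pound, packed into one-pound (16 oz) containers.
print(first_fit_decreasing([9, 8, 7, 6, 5, 4, 3], 16))
# [[9, 7], [8, 6], [5, 4, 3]]
```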
It's an NP-complete problem, so you don't have many better options; the best would probably be a dynamic-programming algorithm with pseudo-polynomial complexity (polynomial in the capacity, but still exponential in the size of the input).
I wrote the following Ruby program to solve this problem; it seems to work well.
https://gist.github.com/1398026
Related
I've recently implemented 2Sum and 3Sum on LeetCode and have been wondering whether it's possible to determine if elements can sum to a given target without brute force.
You're asking whether the "subset sum problem" has a non-brute-force solution. It's not entirely clear what does and doesn't count as brute force, but NP-complete problems (and subset sum is one) have no known polynomial-time algorithm in the worst case. There are, however, very sophisticated approaches to solving them that work efficiently some of the time.
The Wikipedia page has good details about solving subset sum (either approximately or exactly), and links for further reading.
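For a concrete picture, here is a minimal sketch of the standard pseudo-polynomial dynamic program for subset sum, one of the non-brute-force approaches alluded to above (it assumes non-negative integers, and its running time still grows with the target value):

```python
def subset_sum(nums, target):
    """Return True if some subset of nums sums to target.

    reachable[s] means some subset of the numbers seen so far sums to s.
    Runs in O(len(nums) * target) time instead of O(2 ** len(nums)).
    """
    reachable = [False] * (target + 1)
    reachable[0] = True  # the empty subset sums to 0
    for n in nums:
        # Iterate downwards so each number is used at most once.
        for s in range(target, n - 1, -1):
            if reachable[s - n]:
                reachable[s] = True
    return reachable[target]

print(subset_sum([3, 34, 4, 12, 5, 2], 9))  # True: 4 + 5
```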
At its most general, depending on your precise definition of "brute force", this is an open problem in computer science; nobody knows. There are some algorithms that are often fast in practice, but whether a fundamentally fast algorithm exists is an active area of research.
Look up "subset sum problem" and "NP-complete".
Hello, I've just started learning about greedy algorithms, and I first looked at the classic coin-changing problem. I could understand the greediness in the algorithm (i.e., choosing a locally optimal solution in the hope of reaching a global optimum), since I am choosing the highest-value coin such that sum + {value of chosen coin} <= total value. Then I started to solve some greedy-algorithm problems on various sites. I could solve most of the problems, but couldn't figure out exactly where the greediness was applied. I coded the only solution I could think of and got it accepted. The editorials show the same way of solving the problem, but I could not understand the application of the greedy paradigm in the algorithm.
Are greedy algorithms the only way of solving a particular range of problems, or are they just one way of solving them that can be more efficient?
Could you give me pseudocode for the same problem with and without the greedy paradigm?
There are lots of real-life examples of greedy algorithms. One of the obvious ones is the coin-changing problem: to make change in a certain currency, we repeatedly dispense the largest denomination. Thus, to give out seventeen dollars and sixty-one cents in change, we give out a ten-dollar bill, a five-dollar bill, two one-dollar bills, two quarters, one dime, and one penny. By doing this, we are guaranteed to minimize the number of bills and coins. This algorithm does not work in all monetary systems, though.
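A minimal sketch of that greedy change-making in code (amounts expressed in cents):

```python
def make_change(amount, denominations):
    """Greedy change-making: always dispense the largest denomination
    that fits. Optimal for US currency, but not for every coin system.
    """
    change = []
    for coin in sorted(denominations, reverse=True):
        while amount >= coin:
            amount -= coin
            change.append(coin)
    return change

# $17.61, with bills and coins both expressed in cents.
print(make_change(1761, [1000, 500, 100, 25, 10, 5, 1]))
# [1000, 500, 100, 100, 25, 25, 10, 1]
```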
I think there is always another way to solve a problem, but sometimes, as you've stated, it will probably be less efficient.
For example, you can always check all the options (every combination of coins), store the results, and choose the best, but of course the efficiency is terrible.
Hope it helps.
Greedy algorithms are just a class of algorithms that iteratively construct/improve a solution.
Imagine the most famous problem: TSP. You can formulate it as an Integer Linear Programming problem and give it to an ILP solver, and it will give you the globally optimal solution (if it has enough time). But you could do it in a greedy way. You can construct some solution (e.g., randomly) and then look for changes (e.g., swapping the order of two cities) that improve it, and keep making such changes until no improving change is possible.
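A sketch of that improvement loop, using a swap of two cities as the "change" (the distance matrix is a made-up example):

```python
import random

def tour_length(tour, dist):
    """Total length of the closed tour under the distance matrix."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]]
               for i in range(len(tour)))

def improve_tour(dist):
    """Start from a random tour, then greedily accept any swap of two
    cities that shortens it, until no swap helps. This finds a local
    optimum, not necessarily the global one.
    """
    n = len(dist)
    tour = list(range(n))
    random.shuffle(tour)
    best = tour_length(tour, dist)
    improved = True
    while improved:
        improved = False
        for i in range(n):
            for j in range(i + 1, n):
                tour[i], tour[j] = tour[j], tour[i]  # try a swap
                length = tour_length(tour, dist)
                if length < best:
                    best, improved = length, True    # keep it
                else:
                    tour[i], tour[j] = tour[j], tour[i]  # undo it
    return tour, best

dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0]]
print(improve_tour(dist))  # e.g. ([0, 1, 3, 2], 18)
```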
So the bottom line is: greedy algorithms are just one method of solving hard problems efficiently (in time, though not necessarily in solution quality), but there are other classes of algorithms for such problems.
For coins (in standard currency systems), the greedy algorithm is also the optimal one, so the "greediness" is not as visible as with some other problems.
In some cases you prefer a solution that is not the best one, but that you can compute much faster (computing the true best solution could take years, for example).
Then you choose a heuristic that should give you the best results, based on typical input data, its structure, and what you want to accomplish.
On Wikipedia there is a good example: finding the path with the biggest sum of numbers in a tree.
Imagine that the tree has, for example, 2^1000 nodes. To find the optimal solution you would have to visit each node once, and no personal computer today could do that in your lifetime, so you want a heuristic. A greedy algorithm, however, finds a solution in just 1000 steps (which takes no more than a millisecond).
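A sketch of that greedy descent (the tree is a made-up example, chosen so that the greedy path is fast but misses the true maximum):

```python
def greedy_path_sum(tree, node=0):
    """Walk down a binary tree stored as a list (children of node i are
    at 2*i+1 and 2*i+2), always stepping to the larger child.
    Visits one node per level, so it is fast but can miss the best path.
    """
    total = tree[node]
    children = [c for c in (2 * node + 1, 2 * node + 2) if c < len(tree)]
    if children:
        total += greedy_path_sum(tree, max(children, key=lambda c: tree[c]))
    return total

#        7
#      3   12
#    99 1  5 2
tree = [7, 3, 12, 99, 1, 5, 2]
print(greedy_path_sum(tree))  # 24 (7+12+5); the true best is 109 (7+3+99)
```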
If an optimal solution to a problem can be obtained greedily, can it also be obtained by dynamic programming? Since both greedy algorithms and DP deal with optimal solutions to sub-problems, is it safe to say that DP can solve all the problems that can be solved by greedy?
Comparing greedy and DP is like comparing apples to oranges. But an easy way to think of them is:
Greedy approach: Choose whatever you think is optimal now, assuming it will be optimal in the long run.
For example, when you are driving and see a traffic jam on one road, you may take an alternate road that looks empty. This may work, but the alternate road could have a more severe traffic jam around the corner.
Dynamic programming, on the other hand, uses memory to store calculations/results you have computed previously, to save time the next time you need them. Using the above problem again, the DP solution would be to calculate the traffic on every road and then choose the road(s) that give the best (optimal) time.
In this sense DP is more like a divide-and-conquer approach, but with memory: you do not calculate the results of sub-problems again and again.
And to answer your question:
"Is it safe to say that DP can solve all the problems that can be solved by greedy?"
I think it is safe to say that DP can solve all the problems divide and conquer can solve (though it may take more memory).
In all the examples I can think of, DP can give optimal solutions for problems that greedy solves optimally (though the DP may take exponential time, and in almost every case it will take more memory).
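A concrete case where the two differ: with the coin system {1, 3, 4}, greedy makes change for 6 as 4+1+1 (three coins), while DP finds 3+3 (two coins). A minimal DP sketch:

```python
def min_coins(amount, coins):
    """best[a] is the fewest coins summing to a, built from sub-results."""
    best = [0] + [float("inf")] * amount
    for a in range(1, amount + 1):
        for c in coins:
            if c <= a:
                best[a] = min(best[a], best[a - c] + 1)
    return best[amount]

print(min_coins(6, [1, 3, 4]))  # 2 (3+3); greedy would give 3 (4+1+1)
```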
In terms of performance, which approach is better for the knapsack problem: iterative or recursive?
Limited to one second, I need to work out which of 40 items the knapsack should be filled with to get the most valuable selection, a typical knapsack problem.
I know that brute-forcing which items to select gives 2^41 - 1 subproblems to solve, so that is clearly a poor approach; but is there a way to cut down the unneeded branches and make it as efficient as the iterative form?
On the other hand, if the weights are very big, the DP matrix would be enormous, making that approach just as inefficient as the recursive one.
With that kind of problem, asking "iterative or recursive" doesn't get you anywhere. What you need to do is write code, measure what it is doing, start understanding what takes time and why, and as your understanding of the problem grows, you'll find more effective ways of attacking the problem.
The problem is NP-complete, which means that there are at least some pathological cases that cannot be solved quickly. But in practice, many instances can be solved quickly. You want to pick items with a high value/weight ratio, and items that fill the rucksack well. And you don't want to try all possibilities: you want to find one good solution and, with its help, be able to reject large sets of possibilities quickly.
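One standard way to reject possibilities in bulk is branch and bound: keep the best solution found so far and prune every branch whose optimistic bound cannot beat it. A minimal sketch, using the usual fractional-relaxation bound (the item values and weights are made-up examples):

```python
def knapsack_branch_and_bound(items, capacity):
    """items: list of (value, weight) pairs. Depth-first search over
    take/skip decisions, pruning with an optimistic bound.
    """
    # Sort by value/weight ratio so the fractional bound is valid.
    items = sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True)
    best = 0

    def bound(i, value, room):
        # Optimistic estimate: fill the remaining room fractionally.
        for v, w in items[i:]:
            if w <= room:
                value, room = value + v, room - w
            else:
                return value + v * room / w
        return value

    def search(i, value, room):
        nonlocal best
        best = max(best, value)
        if i == len(items) or bound(i, value, room) <= best:
            return  # prune: this branch cannot improve on best
        if items[i][1] <= room:  # branch 1: take item i
            search(i + 1, value + items[i][0], room - items[i][1])
        search(i + 1, value, room)  # branch 2: skip item i

    search(0, 0, capacity)
    return best

print(knapsack_branch_and_bound([(60, 10), (100, 20), (120, 30)], 50))  # 220
```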
If it's a typical knapsack problem, isn't it possible to use dynamic programming, storing previous results in a matrix and using a recurrence to evaluate the new values iteratively?
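A minimal sketch of that table-filling approach (assuming integer weights); note the table has (n+1) x (capacity+1) entries, which is exactly why a very large capacity makes it blow up, as noted above:

```python
def knapsack_dp(items, capacity):
    """items: list of (value, weight). table[i][w] is the best value
    achievable using the first i items with weight budget w.
    """
    n = len(items)
    table = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i, (value, weight) in enumerate(items, start=1):
        for w in range(capacity + 1):
            table[i][w] = table[i - 1][w]  # skip item i
            if weight <= w:                # or take it, if it fits
                table[i][w] = max(table[i][w],
                                  table[i - 1][w - weight] + value)
    return table[n][capacity]

print(knapsack_dp([(60, 10), (100, 20), (120, 30)], 50))  # 220
```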
I am reading a tutorial about "greedy" algorithms, but I have a hard time spotting them when solving real TopCoder problems.
If I know that a given problem can be solved with a "greedy" algorithm, it is pretty easy to code the solution. However, if I am not told that a problem is "greedy", I cannot spot it.
What are the common properties and patterns of the problems solved with "greedy" algorithms? Can I reduce them to one of the known "greedy" problems (e.g. MST)?
Formally, you'd have to prove the matroid property, of course. However, I assume that for TopCoder you'd rather find out quickly whether a problem can be approached greedily or not.
In that case, the most important point is the optimal sub-structure property. For this, you have to be able to spot that the problem can be decomposed into sub-problems and that their optimal solution is part of the optimal solution of the whole problem.
Of course, greedy problems come in such a wide variety that it's next to impossible to offer a general correct answer to your question. My best advice would hence be to think somewhere along these lines:
Do I have a choice between different alternatives at some point?
Does this choice result in sub-problems that can be solved individually?
Will I be able to use the solution of the sub-problem to derive a solution for the overall problem?
Together with loads and loads of experience (just had to say that, too), this should help you to quickly spot greedy problems. Of course, you may eventually classify a problem as greedy when it is not; in that case, you can only hope to realize it before working on the code for too long.
(Again, for reference, I assume a TopCoder context; for anything more realistic and of practical consequence, I strongly advise actually verifying the matroid structure before selecting a greedy algorithm.)
Part of your problem may be caused by thinking in terms of "greedy problems". There are greedy algorithms, and there are problems for which a greedy algorithm leads to an optimal solution. There are other hard problems that can also be solved by greedy algorithms, but the result will not necessarily be optimal.
For example, for the bin-packing problem there are several greedy algorithms, all of them with much better complexity than the exponential exact algorithm, but you can only be sure that you'll get a solution within a certain factor of the optimal one.
Regarding problems where greedy algorithms lead to an optimal solution, my guess would be that an inductive correctness proof feels totally natural and easy: for every single one of your greedy steps, it is quite clear that it was the best thing to do.
Typically, problems with optimal greedy solutions are easy anyway, and the others will force you to come up with a greedy heuristic because of complexity limitations. Usually a meaningful reduction would be showing that your problem is in fact NP-hard, so you know you'll have to settle for a heuristic. For those problems, I'm a big fan of trying things out: implement your algorithm and try to find out whether its solutions are "pretty good" (ideally you also have a slow but correct algorithm you can compare results against; otherwise you might need manually created ground truths). Only once you have something that works well, try to think about why, and maybe even try to come up with proofs of bounds. Maybe it works; maybe you'll spot border cases where it doesn't and needs refinement.
"A term used to describe a family of algorithms. Most algorithms try to reach some "good" configuration from some initial configuration, making only legal moves. There is often some measure of "goodness" of the solution (assuming one is found).
The greedy algorithm always tries to perform the best legal move it can. Note that this criterion is local: the greedy algorithm doesn't "think ahead", agreeing to perform some mediocre-looking move now, which will allow better moves later.
For instance, the greedy algorithm for Egyptian fractions is trying to find a representation with small denominators. Instead of looking for a representation where the last denominator is small, it takes at each step the smallest legal denominator. In general, this leads to very large denominators at later steps.
The main advantage of the greedy algorithm is usually simplicity of analysis. It is usually also very easy to program. Unfortunately, it is often sub-optimal."
--- ariels
(http://www.everything2.com/title/greedy+algorithm?searchy=search)
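To see the behavior described in the quote, here is a minimal sketch of that greedy expansion (the Fibonacci-Sylvester method), assuming a fraction strictly between 0 and 1:

```python
from fractions import Fraction
from math import ceil

def egyptian(frac):
    """Greedy Egyptian-fraction expansion: at each step take the
    largest unit fraction (smallest legal denominator) not exceeding
    what remains. Returns the list of denominators.
    """
    denominators = []
    while frac > 0:
        d = ceil(1 / frac)  # smallest legal denominator
        denominators.append(d)
        frac -= Fraction(1, d)
    return denominators

# Denominators explode at later steps, as described above.
print(egyptian(Fraction(5, 121)))  # [25, 757, 763309, ...]
```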