Modification of the 0/1 Knapsack Algorithm

How would we approach a 0/1 Knapsack Problem if we have items that can be picked multiple times? For example, we have 5 items with weights 6, 5, 4, 2, 1 and respective values 6.59, 6.49, 6.39, 6.29, 6.16, and the allowed weight to pick up is 10.
The variation is that we can pick any item any number of times and then maximise the value. How do we approach this? Any suggestions or articles are really appreciated.

I've already solved the 0/1 Knapsack problem using a genetic algorithm; this tutorial gives an introduction to the topic, including an example (in C++) that will get you started.
If you want an idea of how to solve the mentioned problem, you can give these links a try:
Solving the Knapsack Problem with a Simple Genetic Algorithm
Genetic Algorithm for Knapsack Problem
You may use other techniques to solve the problem, but I find using a GA a very interesting approach.
Good luck.
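If you do want an exact answer rather than a GA, the unbounded variant has a standard dynamic-programming formulation. Here is a minimal sketch (the function name is mine; the data is taken from the question):

    def unbounded_knapsack(weights, values, capacity):
        # dp[c] = best value achievable with capacity c, where each item
        # may be used any number of times (the loop over c runs upward,
        # unlike the downward loop of the 0/1 version).
        dp = [0.0] * (capacity + 1)
        for c in range(1, capacity + 1):
            for w, v in zip(weights, values):
                if w <= c:
                    dp[c] = max(dp[c], dp[c - w] + v)
        return dp[capacity]

    # Data from the question:
    print(unbounded_knapsack([6, 5, 4, 2, 1], [6.59, 6.49, 6.39, 6.29, 6.16], 10))
    # 61.6 -- ten copies of the weight-1 item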

Related

Variation of Knapsack Problem: adding a lower bound

A classic variation of the 0/1 Knapsack Problem, except that I also have to specify a lower bound. That means I have to find the maximum value while making sure the total weight is equal to or higher than some given value, but not higher than the capacity of the knapsack. Basically, the solution's weight must lie between two boundaries.
I have already coded the 0/1 Knapsack problem with dynamic programming, and I could test it against the lower bound: something like, if the solution's weight is below the lower bound, find the second-best knapsack solution, and so on. But I believe there is a better way.
Thank you
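One standard way to handle the lower bound directly (a sketch under assumptions of my own, not an answer from the thread): index the DP by exact total weight instead of "weight at most", then maximize over all weights between the lower bound and the capacity.

    def knapsack_with_lower_bound(items, lower, capacity):
        # items: list of (value, weight); dp[w] = best value whose chosen
        # weights sum to exactly w, or -inf if weight w is unreachable.
        NEG = float("-inf")
        dp = [NEG] * (capacity + 1)
        dp[0] = 0
        for value, weight in items:
            for w in range(capacity, weight - 1, -1):  # downward: 0/1 semantics
                if dp[w - weight] != NEG:
                    dp[w] = max(dp[w], dp[w - weight] + value)
        feasible = [v for v in dp[lower:] if v != NEG]
        return max(feasible) if feasible else None  # None: no subset fits the bounds

    # Hypothetical data: lower bound 5, capacity 7.
    print(knapsack_with_lower_bound([(10, 4), (7, 3), (3, 2)], 5, 7))  # 17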

How to parallelize Knapsack problems?

The knapsack problem is a very famous problem: given a set of items, each with a weight and a value, determine the number of each item to include in a collection so that the total weight is less than or equal to a given limit and the total value is as large as possible.
This problem can be solved with dynamic programming and can be found on every tutorial book of algorithm. But how can I write a parallel version?
That is a very interesting question; the best way to obtain a (good) answer is to search Google Scholar for it. The following link is probably the most recent paper on the subject.
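To show where the parallelism lives (my own illustration, not taken from any particular paper): in the classic DP, every entry of one row depends only on the previous row, so an entire row can be updated in parallel. The NumPy vectorized update below stands in for a real parallel loop (OpenMP, threads, GPU):

    import numpy as np

    def knapsack_row_parallel(items, capacity):
        # items: list of (weight, value); dp[c] = best value within capacity c.
        dp = np.zeros(capacity + 1)
        for weight, value in items:
            if weight > capacity:
                continue
            # Every slot of this update reads only old values: NumPy fully
            # evaluates the right-hand side before writing back, which is
            # exactly the independence a parallel implementation exploits.
            dp[weight:] = np.maximum(dp[weight:], dp[:capacity + 1 - weight] + value)
        return dp[capacity]

    print(knapsack_row_parallel([(2, 3), (3, 4), (4, 5)], 5))  # 7.0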

Dynamic vs Greedy Algorithms: the debate regarding Neil G's answer on the same topic

I was trying to understand the differences between dynamic and greedy algorithms, and this answer by Neil G was quite helpful, but there was one statement he made that caused a debate in the comments section.
The difference between dynamic programming and greedy algorithms is that with dynamic programming, the subproblems overlap. That means that by "memoizing" solutions to some subproblems, you can solve other subproblems more quickly.
Comments aren't the best place to resolve a doubt, so I'm creating this post. My questions are:
Minimum spanning trees have optimal substructure as well as overlapping subproblems. Also, in an MST, a locally optimal choice is globally optimal. Thus both the dynamic and greedy properties hold, right? How does the quoted statement hold up to this?
How is the property of optimal substructure different from the property "locally optimal implies globally optimal"? My head is going: if something has an optimal substructure, then all locally optimal choices must also be globally optimal, right? Can someone explain where I'm going wrong here?
English is not my native language, so please feel free to correct any mistakes with the language.
I think that explanation of the difference between greedy and dynamic solutions is not good. A greedy solution makes choices using only local information, i.e. whatever looks best from the current position. As a result, greedy solutions may "get stuck" at a local optimum instead of the global one. Dynamic programming is a technique in which you break a single, more complex problem into simpler subproblems and then combine the results of the subproblems to obtain the result for the initial problem. A solution can be both greedy and dynamic. Take a look at my answer in the original thread.
However, this statement of yours:
If something has an optimal substructure, then all locally optimal choices must also be globally optimal, right?
is wrong. Take for example the 0/1 knapsack problem: you are a thief breaking into a shop at night. You have a knapsack with a fixed weight capacity. The shop has some products, each with an associated price and weight. The goal is to steal the greatest total price possible.
Now take the example where you have a knapsack of capacity 50 and products of price and weight respectively (21, 20), (30, 22), (22, 21), and (9, 9). If you make locally optimal choices (i.e. each time you select the item with the greatest price/weight ratio), you will select the products (30, 22) and (21, 20), but this solution is not optimal. The optimal solution is to take (21, 20), (22, 21) and (9, 9), resulting in a profit that is bigger by 1.
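To make the gap concrete, here is a small sketch comparing the two approaches on exactly that instance (my own code; items are given as (price, weight)):

    def greedy_by_ratio(items, capacity):
        # Repeatedly take the item with the best price/weight ratio that fits.
        total = 0
        for price, weight in sorted(items, key=lambda it: it[0] / it[1], reverse=True):
            if weight <= capacity:
                total += price
                capacity -= weight
        return total

    def knapsack_dp(items, capacity):
        # Classic 0/1 knapsack DP: best[c] = best price within capacity c.
        best = [0] * (capacity + 1)
        for price, weight in items:
            for c in range(capacity, weight - 1, -1):
                best[c] = max(best[c], best[c - weight] + price)
        return best[capacity]

    items = [(21, 20), (30, 22), (22, 21), (9, 9)]
    print(greedy_by_ratio(items, 50))  # 51 -- the locally optimal choices get stuck
    print(knapsack_dp(items, 50))      # 52 -- the true optimum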

An algorithm for the melon-selling farmer

A question I saw on the net:
A melon-selling farmer has n melons. The weight of each melon, an integer (lbs), is distinct. A customer asks for exactly m pounds of uncut melons. Now, the farmer has the following problem: If it is possible to satisfy the customer, he should do so by finding the appropriate melons as efficiently as possible, or else tell the customer that it is not possible to fulfill his request.
Note: this is not homework, btw; I just need guidance.
My Answer:
This seems similar to the coin change problem (knapsack) and the subset-sum problem (backtracking).
Coin change: I can put the weights into a set w = {5, 8, 3, 2, ...} and then solve; the same goes for the subset-sum problem.
So basically I can use either method to solve this problem?
This is exactly an integer knapsack problem where solutions have zero wastage. There is a good dynamic programming/memoization strategy to help you solve it. See either of these links:
http://www.cs.ship.edu/~tbriggs/dynamic/
https://en.wikipedia.org/wiki/Knapsack_problem
Indeed, the subset-sum problem IS the 0/1 knapsack problem where the weight equals the value of each item.
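Since exact fulfilment is what matters here, a small subset-sum sketch (my own illustration) that also reports which melons to hand over:

    def pick_melons(weights, m):
        # reachable[s] = one set of melon weights summing exactly to s.
        reachable = {0: []}
        for w in weights:
            # Snapshot the dict so each distinct melon is used at most once.
            for s, chosen in list(reachable.items()):
                if s + w <= m and s + w not in reachable:
                    reachable[s + w] = chosen + [w]
        return reachable.get(m)  # None means the request cannot be fulfilled

    print(pick_melons([5, 8, 3, 2], 10))  # e.g. [8, 2]
    print(pick_melons([5, 8, 3, 2], 12))  # None -- 12 lbs is impossible here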

Travelling Salesman with multiple salesmen?

I have a problem that has been effectively reduced to a Travelling Salesman Problem with multiple salesmen. I have a list of cities to visit from an initial location, and have to visit all cities with a limited number of salesmen.
I am trying to come up with a heuristic and was wondering if anyone could give a hand. For example, if I have 20 cities and 2 salesmen, the approach I thought of taking is a 2-step approach. First, divide the 20 cities up randomly into 10 cities for each of the 2 salesmen, and find the tour for each as if it were independent, for a few iterations. Afterwards, I'd either swap a city or assign it to the other salesman and find the tours again. Effectively, it'd be a TSP followed by a minimum-makespan problem. The problem with this is that it'd be too slow, and good neighborhood generation for swapping or reassigning a city is hard.
Can anyone give any advice on how I could improve the above?
EDIT:
The geo-location of each city is known, and the salesmen start and end at the same place. The goal is to minimize the maximum travelling time, making this sort of a minimum-makespan problem. For example, if salesman 1 takes 10 hours and salesman 2 takes 20 hours, the maximum travelling time would be 20 hours.
TSP is a difficult problem. Multi-TSP is probably much worse. I'm not sure you can find good solutions with ad-hoc methods like this. Have you tried meta-heuristic methods? I'd try the Cross-Entropy method first: it shouldn't be too hard to apply to your problem. Otherwise, look into Genetic Algorithms, Ant Colony Optimization, Simulated Annealing, ...
See "A Tutorial on the Cross-Entropy Method" from Boer et al. They explain how to use the CE method on the TSP. A simple adaptation for your problem might be to define a different matrix for each salesman.
You might want to assume that you only need to find the optimal partition of the cities between the salesmen (and delegate the shortest tour for each salesman to a classic TSP implementation). In this case, in the Cross-Entropy setting, you consider for each city Xi the probability of it being in the tour of salesman A: P(Xi in A) = pi, and you work on the space of p = (p1, ..., pn). (I'm not sure it will work very well, because you will have to solve many TSP problems.)
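A rough sketch of that partition idea (entirely my own stand-in code: a nearest-neighbour heuristic approximates each salesman's tour, which is the expensive part the answer warns about):

    import math, random

    def tour_length(cities, pts, depot):
        # Cheap TSP stand-in: nearest-neighbour tour from/to the depot.
        rest, cur, total = set(cities), depot, 0.0
        while rest:
            nxt = min(rest, key=lambda c: math.dist(pts[cur], pts[c]))
            total += math.dist(pts[cur], pts[nxt])
            rest.remove(nxt)
            cur = nxt
        return total + (math.dist(pts[cur], pts[depot]) if cities else 0.0)

    def ce_partition(pts, depot, iters=30, samples=200, elite=20):
        # p[c] = P(city c is assigned to salesman A); minimize the max of
        # the two tour lengths (the makespan objective from the EDIT).
        cities = [c for c in pts if c != depot]
        p = {c: 0.5 for c in cities}
        for _ in range(iters):
            pop = []
            for _ in range(samples):
                a = {c for c in cities if random.random() < p[c]}
                b = set(cities) - a
                cost = max(tour_length(a, pts, depot), tour_length(b, pts, depot))
                pop.append((cost, a))
            pop.sort(key=lambda t: t[0])
            top = pop[:elite]
            for c in cities:  # refit the probabilities to the elite samples
                p[c] = sum(c in a for _, a in top) / elite
        return p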
When you start talking about multiple salesmen, I start thinking about particle swarm optimization. I've found a lot of success with this using a gravitational search algorithm. Here's a (lengthy) paper I found covering the topic: http://eprints.utm.my/11060/1/AmirAtapourAbarghoueiMFSKSM2010.pdf
Why don't you convert your multiple TSP into the traditional TSP?
This is a well-known problem (transforming multiple salesman TSP into TSP) and you can find several articles on it.
For most transformations, you basically copy the depot (where the salesmen start and finish) into several depots (in your case 2), set the edge weights so as to force the TSP tour to return to the depot twice, and then merge the depot copies back into one.
Voilà! You get two min-cost tours that together cover the vertices exactly once.
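A sketch of the matrix construction behind that transformation (my own illustration, assuming a NumPy distance matrix; a plain TSP solver is then run on the expanded matrix and the tour is cut at the depot copies):

    import numpy as np

    INF = 1e9  # large cost used to forbid depot-copy-to-depot-copy edges

    def expand_for_m_salesmen(dist, depot, m):
        # Append m-1 extra copies of the depot. A single TSP tour on the
        # (n+m-1)-city matrix must leave the depot m times, encoding m routes.
        n = len(dist)
        big = np.full((n + m - 1, n + m - 1), float(INF))
        big[:n, :n] = dist
        for k in range(m - 1):
            c = n + k                    # index of the k-th extra depot copy
            big[c, :n] = dist[depot, :]  # copies inherit the depot's distances
            big[:n, c] = dist[:, depot]
            big[c, depot] = big[depot, c] = INF  # no direct depot-to-depot hop
        np.fill_diagonal(big, 0)
        return big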
I wouldn't start writing an algorithm for such a complicated problem (unless that's my day job - writing optimization algorithms). Why don't you turn to a generic solver like http://www.optaplanner.org/? You only have to define your problem, and the program uses algorithms that top developers took years to create and optimize.
My first thought on reading the problem description would be to use a standard approach for the travelling salesman problem (googling for an appropriate one, as I've never actually had to write code for it), then take the result and split it in half. Your algorithm would then decide where "half" is - maybe it is half of the cities, or maybe it is based on distance, or some combination. Or search the result for the largest distance between two consecutive cities and choose that as the separation between salesman #1's last city and salesman #2's first city. Of course this is not limited to two salesmen; you would break the tour into x pieces. But overall, the idea is that a standard single-salesman TSP solution should already have placed "nearby" cities next to each other in the travel graph, so you don't have to come up with a separate grouping algorithm.
Anyway, I'm sure there are better solutions, but this seems like a good first approach to me.
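A sketch of that splitting step (my own code; it cuts a finished single-salesman tour at the k-1 largest consecutive gaps):

    import math

    def split_tour(tour, points, k):
        # tour: city ids in visiting order; points: id -> (x, y) coordinates.
        def gap(i):
            return math.dist(points[tour[i]], points[tour[i + 1]])
        # Cut at the k-1 largest gaps between consecutive cities.
        cuts = sorted(sorted(range(len(tour) - 1), key=gap, reverse=True)[:k - 1])
        routes, start = [], 0
        for c in cuts:
            routes.append(tour[start:c + 1])
            start = c + 1
        routes.append(tour[start:])
        return routes

    points = {0: (0, 0), 1: (1, 0), 2: (2, 1), 3: (9, 9), 4: (10, 9), 5: (9, 8)}
    print(split_tour([0, 1, 2, 3, 4, 5], points, 2))  # [[0, 1, 2], [3, 4, 5]]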
Have a look at this question (562904) - while not identical to yours, there should be some good food for thought and references for further study.
As mentioned in the answer above, the hierarchical clustering solution will work very well for your problem. Instead of continuing to dissolve clusters until you have a single path, however, stop when you have n, where n is the number of salesmen. You can probably improve it somewhat by adding some "fake" stops to increase the likelihood that your clusters end up evenly spaced from the initial destination if the initial clusters are too disparate. It's not optimal - but you're not going to get an optimal solution for a problem like this. I'd create an app that visualizes the problem and then test many variants of the solution to get a feel for whether your heuristic is good enough.
In any case, I would not randomize the clusters; that would cause the majority of the clusters to be sub-optimal.
Just from starting to read your question, genetic algorithms came to mind. Just use two genetic algorithms at the same time: one can solve how to assign cities to salesmen, and the other can solve the TSP for each salesman you have.
