Assume there are N people and M tasks, and a cost matrix that tells how much it costs when a task is assigned to a person.
Assume we can assign more than one task to a person.
It means we may even assign all of the tasks to one person if that leads to the minimum cost.
I know this problem can be solved using various techniques. Some of them are listed below:
Bit Masking
Hungarian Algorithm
Min Cost Max Flow
Brute force (all M! permutations)
Question: But what if we add a constraint that only consecutive tasks can be assigned to a person?
T1 T2 T3
P1 2 2 2
P2 3 1 4
Answer: 6 rather than 5
Explanation:
We might think that P1->T1, P2->T2, P1->T3 = 2+1+2 = 5 could be the answer, but it is not, because T1 and T3 are not consecutive, so they cannot both be assigned to P1.
P1->T1, P1->T2, P1->T3 = 2+2+2 = 6
How to approach solving this problem?
You can solve this problem using ILP.
Here is some OPL-like pseudo-code:
**input:
two integers N, M // N persons, M tasks
a cost matrix C[N][M]
**decision variables:
X[N][M][M] // An array with values in {0, 1}
// X[i][j][k] = 1 <=> the person i performs the tasks j to k
**constraints:
// one person can perform at most 1 sequence of consecutive tasks
for all i in {1, ..., N}, sum(j in {1, ..., M}, k in {1, ..., M}) X[i][j][k] <= 1
// each task is performed exactly once
for all t in {1, ..., M}, sum(i in {1, ..., N}, j in {1, ..., t}, k in {t, ..., M}) X[i][j][k] = 1
// impossible task sequences are discarded
for all i in {1, ..., N}, for all j in {1, ..., M}, sum(k in {1, ..., j-1}) X[i][j][k] = 0
**objective function:
minimize sum(i, j, k) X[i][j][k] * (sum(t in {j, ..., k}) C[i][t])
I think that ILP could be the tool of choice here, since more often than not scheduling and production-planning problems are solved using it.
If you do not have experience coding LP programs, don't worry: it is much easier than it looks, and this problem is a nice and rather easy one to get started with.
There is also a Stack Exchange site dedicated to this kind of problem, Operations Research Stack Exchange.
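If you want to try the model in code, below is a rough sketch of the same ILP using Google OR-Tools' MPSolver Java wrapper (the solver choice, class name and variable names are my own, not part of the pseudo-code above). The "impossible sequences" constraint is handled implicitly by only creating variables with j <= k.
import com.google.ortools.Loader;
import com.google.ortools.linearsolver.MPConstraint;
import com.google.ortools.linearsolver.MPObjective;
import com.google.ortools.linearsolver.MPSolver;
import com.google.ortools.linearsolver.MPVariable;

public class ConsecutiveAssignmentIlp {
    public static void main(String[] args) {
        Loader.loadNativeLibraries();
        int[][] c = {{2, 2, 2}, {3, 1, 4}};            // c[i][t] = cost of person i doing task t
        int n = c.length, m = c[0].length;

        MPSolver solver = MPSolver.createSolver("SCIP");
        MPObjective obj = solver.objective();
        obj.setMinimization();

        // each task t is covered by exactly one chosen block j..k with j <= t <= k
        MPConstraint[] taskOnce = new MPConstraint[m];
        for (int t = 0; t < m; t++) taskOnce[t] = solver.makeConstraint(1, 1, "task" + t);

        for (int i = 0; i < n; i++) {
            // each person performs at most one block of consecutive tasks
            MPConstraint onePerPerson = solver.makeConstraint(0, 1, "person" + i);
            for (int j = 0; j < m; j++) {
                for (int k = j; k < m; k++) {          // only valid blocks with j <= k get a variable
                    MPVariable x = solver.makeBoolVar("x_" + i + "_" + j + "_" + k);
                    onePerPerson.setCoefficient(x, 1);
                    int blockCost = 0;
                    for (int t = j; t <= k; t++) {
                        blockCost += c[i][t];
                        taskOnce[t].setCoefficient(x, 1);
                    }
                    obj.setCoefficient(x, blockCost);
                }
            }
        }

        if (solver.solve() == MPSolver.ResultStatus.OPTIMAL) {
            System.out.println("minimum cost = " + (int) obj.value());   // prints 6 for the example above
        }
    }
}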
This looks NP-complete to me. If I am correct, there is not going to be a universally quick solution, and the best one can do is approach this problem using the best possible heuristics.
One approach you did not mention is a constructive approach using A* search. In this case, the search would move along the matrix from left to right, adding candidate items to a priority queue with every step. Each item in the queue would consist of the current column index, the total cost expended so far, and the list of people who have acted so far. The remaining-cost heuristic for any given state would be the sum of the columnar minima for all remaining columns.
I'm certain that this can find a solution, I'm just not sure it is the best approach. Some quick Googling shows that A* has been applied to several types of scheduling problems though.
Edit: Here is an implementation.
import java.util.HashSet;
import java.util.PriorityQueue;
import java.util.Set;
import java.util.function.Consumer;
import java.util.function.Function;

public class OrderedTasks {
private class State {
private final State prev;
private final int position;
private final int costSoFar;
private final int lastActed;
public State(int position, int costSoFar, int lastActed, State prev) {
super();
this.prev = prev;
this.lastActed = lastActed;
this.position = position;
this.costSoFar = costSoFar;
}
public void getNextSteps(int[] task, Consumer<State> consumer) {
Set<Integer> actedSoFar = new HashSet<>();
State prev = this.prev;
if (prev != null) {
for (; prev!=null; prev=prev.prev) {
actedSoFar.add(prev.lastActed);
}
}
for (int person=0; person<task.length; ++person) {
if (actedSoFar.contains(person) && this.lastActed!=person) {
continue;
}
consumer.accept(new State(position+1,task[person]+this.costSoFar,
person, this));
}
}
}
public int minCost(int[][] tasksByPeople) {
int[] cumulativeMinCost = getCumulativeMinCostPerTask(tasksByPeople);
// f = g + h: cost so far plus the sum of per-column minima for the remaining columns (admissible heuristic)
Function<State, Integer> totalCost = state->state.costSoFar+(state.position<cumulativeMinCost.length? cumulativeMinCost[state.position]: 0);
PriorityQueue<State> pq = new PriorityQueue<>((s1,s2)->{
return Integer.compare(totalCost.apply(s1), totalCost.apply(s2));
});
State state = new State(0, 0, -1, null);
for (; state.position<tasksByPeople.length; state = pq.poll()) {
state.getNextSteps(tasksByPeople[state.position], pq::add);
}
return state.costSoFar;
}
private int[] getCumulativeMinCostPerTask(int[][] tasksByPeople) {
int[] result = new int[tasksByPeople.length];
int cumulative = 0;
for (int i=tasksByPeople.length-1; i>=0; --i) {
cumulative += minimum(tasksByPeople[i]);
result[i] = cumulative;
}
return result;
}
private int minimum(int[] arr) {
if (arr.length==0) {
throw new RuntimeException("Not valid for empty arrays.");
}
int min = arr[0];
for (int i=1; i<arr.length; ++i) {
min = Math.min(min, arr[i]);
}
return min;
}
public static void main(String[] args) {
OrderedTasks ot = new OrderedTasks();
System.out.println(ot.minCost(new int[][]{{2, 3},{2,1},{2,4},{2,2}}));
}
}
I think your question is very similar to:
Finding the minimum value
Probably not the best approach if the number of workers is large, but an approach that is easy to understand and implement could be:
Get a list of all the possible combinations with repetition of workers W, for example using the algorithm in https://www.geeksforgeeks.org/combinations-with-repetitions/. This would give you things like [[W1,W3,W2,W3,W1], [W3,W5,W5,W4,W5], ...]
Discard combinations in which a worker's tasks are not contiguous:
bool isValid=true;
for (int kk = 0; kk < workerOrder.Length; kk++)
{
int state=0;
for (int mm = 0; mm < workerOrder.Length; mm++)
{
if (workerOrder[mm] == kk && state == 0) { state = 1; } //worker kk has appeared
if (workerOrder[mm] != kk && state == 1 ) { state = 2; } //worker kk's run has ended
if (workerOrder[mm] == kk && state == 2) { isValid = false; break; } //it appeared again, so not contiguous
}
if (isValid==false){break;}
}
Use the filtered list of combinations to compute the total cost from the cost table and keep the minimum one.
I am practicing recursive algorithms because, although I love recursion, I still have trouble when there is "double" recursion going on. So I created this brute-force 0-1 knapsack algorithm, which outputs the final weight and best value, and it's pretty good, but I decided that information is only relevant if you know which items are behind those numbers. I am stuck here, though. I want to do this elegantly, without creating a mess of code, and perhaps I am over-limiting my thinking trying to meet that goal. I thought I would post the code here and see if anyone had some nifty ideas about adding code to output the chosen items. This is Java:
public class Knapsack {
static int num_items = 4;
static int weights[] = { 3, 5, 1, 4 };
static int benefit[] = { 2, 4, 3, 6 };
static int capacity = 10;
static int new_sack[] = new int[num_items];
static int max_value = 0;
static int weight = 0;
// O(n2^n) brute force algorithm (i.e. check all combinations) :
public static void findMaxValue(int n, int currentWeight, int currentValue) {
if ((n == 0) && (currentWeight <= capacity) && (currentValue > max_value)) {
max_value = currentValue;
weight = currentWeight;
}
if (n == 0) {
return;
}
findMaxValue(n - 1, currentWeight, currentValue);
findMaxValue(n - 1, currentWeight + weights[n - 1], currentValue + benefit[n - 1]);
}
public static void main(String[] args) {
findMaxValue(num_items, 0, 0);
System.out.println("The max value you can get is: " + max_value + " with weight: " + weight);
// System.out.println(Arrays.toString(new_sack));
}
}
The point of the 0-1 Knapsack algorithm is to find if excluding or including an item in the knapsack results in a higher value. Your code doesn't compare these two possibilities. The code to do this would look like:
public int knapsack(int[] weights, int[] values, int n, int capacity) {
if (n == 0 || capacity == 0)
return 0;
if (weights[n-1] > capacity) // if item won't fit in knapsack
return knapsack(weights, values, n-1, capacity); // look at next item
// Compare if excluding or including item results in greater value
    return Math.max(
        knapsack(weights, values, n-1, capacity),                                // exclude item
        values[n-1] + knapsack(weights, values, n-1, capacity - weights[n-1]));  // include item
}
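To address the original question of which items are behind those numbers, here is a minimal sketch (the helper name chosenItems is mine) built on top of the recursive function above: an item is in an optimal sack exactly when dropping it changes the optimal value for the remaining capacity.
public java.util.List<Integer> chosenItems(int[] weights, int[] values, int capacity) {
    java.util.List<Integer> picked = new java.util.ArrayList<>();
    int cap = capacity;
    for (int i = weights.length; i >= 1; i--) {
        // if excluding item i-1 would lower the best value, it belongs to the optimal sack
        if (knapsack(weights, values, i, cap) != knapsack(weights, values, i - 1, cap)) {
            picked.add(i - 1);
            cap -= weights[i - 1];
        }
    }
    return picked;   // indices of the chosen items
}
For the question's data (weights {3, 5, 1, 4}, values {2, 4, 3, 6}, capacity 10) this returns the indices of the items with weights 5, 1 and 4, i.e. the value-13 solution.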
You are given as input an unsorted array of n distinct numbers, where n is a power of 2. Give an algorithm that identifies the second-largest number in the array, and that uses at most n+log₂(n)−2 comparisons.
Start by comparing the elements of the n-element array in odd and even positions and determining the larger element of each pair. This step requires n/2 comparisons. Now you've got only n/2 elements. Continue pairwise comparisons to get n/4, n/8, ... elements. Stop when the largest element is found. This step requires a total of n/2 + n/4 + n/8 + ... + 1 = n-1 comparisons.
During previous step, the largest element was immediately compared with log₂(n) other elements. You can determine the largest of these elements in log₂(n)-1 comparisons. That would be the second-largest number in the array.
Example: array of 8 numbers [10,9,5,4,11,100,120,110].
Comparisons on level 1: [10,9] -> 10, [5,4] -> 5, [11,100] -> 100, [120,110] -> 120.
Comparisons on level 2: [10,5] -> 10, [100,120] -> 120.
Comparisons on level 3: [10,120] -> 120.
Maximum is 120. It was immediately compared with: 10 (on level 3), 100 (on level 2), 110 (on level 1).
Step 2 should find the maximum of 10, 100, and 110. Which is 110. That's the second largest element.
sly s's answer is derived from this paper, but he didn't explain the algorithm, which means someone stumbling across this question has to read the whole paper, and his code isn't very sleek either. I'll give the crux of the algorithm from the aforementioned paper, complete with complexity analysis, and also provide a Python implementation, just because that's the language I chose while working on these problems.
Basically, we do two passes:
Find the max, and keep track of which elements the max was compared to.
Find the max among the elements the max was compared to; the result is the second largest element.
In the example traced below (the array [10, 4, 5, 8, 7, 2, 12, 3, 1, 6, 9, 11]), 12 is the largest number in the array, and was compared to 3, 1, 11, and 10 in the first pass. In the second pass, we find the largest among {3, 1, 11, 10}, which is 11, which is the second largest number in the original array.
Time Complexity:
All elements must be looked at, therefore, n - 1 comparisons for pass 1.
Since we divide the problem into two halves each time, there are at most log₂n levels of recursion; at each level the winner's comparisons sequence grows by at most one, so its size is at most log₂n. Therefore, pass 2 takes at most log₂n - 1 comparisons.
Total number of comparisons <= (n - 1) + (log₂n - 1) = n + log₂n - 2
from typing import MutableSequence, Sequence, Tuple

def second_largest(nums: Sequence[int]) -> int:
def _max(lo: int, hi: int, seq: Sequence[int]) -> Tuple[int, MutableSequence[int]]:
if lo >= hi:
return seq[lo], []
mid = lo + (hi - lo) // 2
x, a = _max(lo, mid, seq)
y, b = _max(mid + 1, hi, seq)
if x > y:
a.append(y)
return x, a
b.append(x)
return y, b
comparisons = _max(0, len(nums) - 1, nums)[1]
return _max(0, len(comparisons) - 1, comparisons)[0]
The first run for the given example is as follows:
lo=0, hi=1, mid=0, x=10, a=[], y=4, b=[]
lo=0, hi=2, mid=1, x=10, a=[4], y=5, b=[]
lo=3, hi=4, mid=3, x=8, a=[], y=7, b=[]
lo=3, hi=5, mid=4, x=8, a=[7], y=2, b=[]
lo=0, hi=5, mid=2, x=10, a=[4, 5], y=8, b=[7, 2]
lo=6, hi=7, mid=6, x=12, a=[], y=3, b=[]
lo=6, hi=8, mid=7, x=12, a=[3], y=1, b=[]
lo=9, hi=10, mid=9, x=6, a=[], y=9, b=[]
lo=9, hi=11, mid=10, x=9, a=[6], y=11, b=[]
lo=6, hi=11, mid=8, x=12, a=[3, 1], y=11, b=[9]
lo=0, hi=11, mid=5, x=10, a=[4, 5, 8], y=12, b=[3, 1, 11]
Things to note:
There are exactly n - 1 = 11 comparisons for n = 12.
From the last line, y=12 wins over x=10, and the next pass starts with the sequence [3, 1, 11, 10], which has ⌈log₂(12)⌉ = 4 elements (log₂(12) ≈ 3.58) and will require 3 comparisons to find the maximum.
I have implemented the algorithm from Evgeny Kluev's answer in Java. The total number of comparisons is n+log2(n)−2. There is also a good reference:
Alexander Dekhtyar: CSC 349: Design and Analysis of Algorithms. It is similar to the top-voted algorithm.
import java.util.Arrays;

public class op1 {
private static int findSecondRecursive(int n, int[] A){
int[] firstCompared = findMaxTournament(0, n-1, A); //n-1 comparisons;
int[] secondCompared = findMaxTournament(2, firstCompared[0]-1, firstCompared); //log2(n)-1 comparisons.
//Total comparisons: n+log2(n)-2;
return secondCompared[1];
}
private static int[] findMaxTournament(int low, int high, int[] A){
if(low == high){
int[] compared = new int[2];
compared[0] = 2;
compared[1] = A[low];
return compared;
}
int[] compared1 = findMaxTournament(low, (low+high)/2, A);
int[] compared2 = findMaxTournament((low+high)/2+1, high, A);
if(compared1[1] > compared2[1]){
int k = compared1[0] + 1;
int[] newcompared1 = new int[k];
System.arraycopy(compared1, 0, newcompared1, 0, compared1[0]);
newcompared1[0] = k;
newcompared1[k-1] = compared2[1];
return newcompared1;
}
int k = compared2[0] + 1;
int[] newcompared2 = new int[k];
System.arraycopy(compared2, 0, newcompared2, 0, compared2[0]);
newcompared2[0] = k;
newcompared2[k-1] = compared1[1];
return newcompared2;
}
private static void printarray(int[] a){
for(int i:a){
System.out.print(i + " ");
}
System.out.println();
}
public static void main(String[] args) {
//Demo.
System.out.println("Origial array: ");
int[] A = {10,4,5,8,7,2,12,3,1,6,9,11};
printarray(A);
int secondMax = findSecondRecursive(A.length,A);
Arrays.sort(A);
System.out.println("Sorted array(for check use): ");
printarray(A);
System.out.println("Second largest number in A: " + secondMax);
}
}
The problem is:
At comparison level 1 the algorithm has to remember all the array elements, because the largest is not yet known; then again at the second level, and finally at the third. Keeping track of these elements via assignments incurs additional value assignments, and when the largest is finally known you also have to trace back through them. As a result, it will not be significantly faster than the simple 2N-2 comparison algorithm. Moreover, because the code is more complicated, you also need to think about potential debugging time.
E.g., in PHP, the running time of a comparison vs. a value assignment is roughly: comparison (11-19) to value assignment: 16.
I shall give an example for better understanding:
Example 1:
>12 56 98 12 76 34 97 23
>>(12 56) (98 12) (76 34) (97 23)
>>> 56 98 76 97
>>>> (56 98) (76 97)
>>>>> 98 97
>>>>>> 98
The largest element is 98
Now compare the elements that lost to the largest element, 98. 97 will be the second largest.
nlogn implementation
public class Test {
public static void main(String...args){
int arr[] = new int[]{1,2,2,3,3,4,9,5, 100 , 101, 1, 2, 1000, 102, 2,2,2};
System.out.println(getMax(arr, 0, 16));
}
public static Holder getMax(int[] arr, int start, int end){
if (start == end)
return new Holder(arr[start], Integer.MIN_VALUE);
else {
int mid = ( start + end ) / 2;
Holder l = getMax(arr, start, mid);
Holder r = getMax(arr, mid + 1, end);
if (l.compareTo(r) > 0 )
return new Holder(l.high(), r.high() > l.low() ? r.high() : l.low());
else
return new Holder(r.high(), l.high() > r.low() ? l.high(): r.low());
}
}
static class Holder implements Comparable<Holder> {
private int low, high;
public Holder(int r, int l){low = l; high = r;}
public String toString(){
return String.format("Max: %d, SecMax: %d", high, low);
}
public int compareTo(Holder data){
if (high == data.high)
return 0;
if (high > data.high)
return 1;
else
return -1;
}
public int high(){
return high;
}
public int low(){
return low;
}
}
}
Why not use this simple linear scan over the given array[n]? It runs in c*n, where c is the constant time for the checks and assignments, and it uses O(n) comparisons.
int first = 0;
int second = 0;
for(int i = 0; i < n; i++) {
if(array[i] > first) {
second = first;
first = array[i];
    } else if(array[i] > second) {
        second = array[i];   // also update the runner-up when first does not change
    }
}
Or do I just not understand the question?
In Python 2.7: the following code runs in O(n log log n) because of the extra sort. Any optimizations?
def secondLargest(testList):
secondList = []
# Iterate through the list
while(len(testList) > 1):
left = testList[0::2]
right = testList[1::2]
if (len(testList) % 2 == 1):
right.append(0)
myzip = zip(left,right)
mymax = [ max(list(val)) for val in myzip ]
myzip.sort()
secondMax = [x for x in myzip[-1] if x != max(mymax)][0]
if (secondMax != 0 ):
secondList.append(secondMax)
testList = mymax
return max(secondList)
public static int FindSecondLargest(int[] input)
{
Dictionary<int, List<int>> dictWinnerLoser = new Dictionary<int, List<int>>();//Maps each winner to the elements that lost to it
List<int> lstWinners = null;
List<int> lstLoosers = null;
int winner = 0;
int looser = 0;
while (input.Count() > 1)//Runs till we get max in the array
{
lstWinners = new List<int>();//Keeps track of winners of each run, as we have to run with winners of each run till we get one winner
for (int i = 0; i < input.Count() - 1; i += 2)
{
if (input[i] > input[i + 1])
{
winner = input[i];
looser = input[i + 1];
}
else
{
winner = input[i + 1];
looser = input[i];
}
lstWinners.Add(winner);
if (!dictWinnerLoser.ContainsKey(winner))
{
lstLoosers = new List<int>();
lstLoosers.Add(looser);
dictWinnerLoser.Add(winner, lstLoosers);
}
else
{
lstLoosers = dictWinnerLoser[winner];
lstLoosers.Add(looser);
dictWinnerLoser[winner] = lstLoosers;
}
}
input = lstWinners.ToArray();//run the loop again with winners
}
List<int> loosersOfWinner = dictWinnerLoser[input[0]];//Gives all the elements that lost to the max element of the array; input now has only one element, which is the max of the array
winner = 0;
for (int i = 0; i < loosersOfWinner.Count(); i++)//Now the max of the losers of the winner gives the second largest
{
if (winner < loosersOfWinner[i])
{
winner = loosersOfWinner[i];
}
}
return winner;
}
The Wikipedia article about the Knapsack problem lists three kinds of it:
1-0 (one item of a type)
Bounded (several items of a type)
Unbounded (unlimited number of items of a type)
The article contains DP approaches for problem types 1 and 3, but no solution for type 2.
How can the dynamic programming algorithm for solving type 2 be described?
Use the 0-1 variant, but allow repetition of an item in the solution up to the number of times specified in its bound. You would need to maintain a vector stating how many copies of each item you already included in the partial solution.
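A rough Java sketch of this idea (method and variable names are mine): keep the usual 0/1 descending-capacity loop, but allow each item to be taken up to its bound in one step. This is the direct simulation whose cost the next answer analyses.
// Bounded knapsack by direct simulation: O(number of items * capacity * max count).
static int boundedKnapsack(int[] w, int[] v, int[] cnt, int capacity) {
    int[] dp = new int[capacity + 1];                   // dp[j] = best value using capacity j
    for (int i = 0; i < w.length; i++) {
        for (int j = capacity; j >= 0; j--) {           // descending j: dp[j - k*w] still excludes item i
            for (int k = 1; k <= cnt[i] && k * w[i] <= j; k++) {
                dp[j] = Math.max(dp[j], dp[j - k * w[i]] + k * v[i]);
            }
        }
    }
    return dp[capacity];
}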
The other DP solutions mentioned are all suboptimal, as they require you to directly simulate the problem, resulting in an O(number of items * maximum weight * total count of items) runtime complexity.
There are many ways to optimize this, and I'll mention a few of them here:
One solution is to apply a technique similar to Sqrt Decomposition and is described here: https://codeforces.com/blog/entry/59606. This algorithm runs in O(number of items * maximum weight * sqrt(maximum weight)).
However, Dorijan Lendvaj describes a much faster algorithm that runs in O(number of items * maximum weight * log(maximum weight)) here: https://codeforces.com/blog/entry/65202?#comment-492168
Another way to think of the above approach is the following:
For each type of item, let's define the following values:
w, the weight/cost of the current type of item
v, the value of the current type of item
n, the number of copies of the current type of item available to use
Phase 1
First, let us consider 2^k, the largest power of 2 less than or equal to n. We insert the following items (each inserted item is in the format (weight, value)): (w, v), (2 * w, 2 * v), (2^2 * w, 2^2 * v), ..., (2^(k-1) * w, 2^(k-1) * v). Note that the items inserted each represent 2^0, 2^1, ..., 2^(k-1) copies of the current type of item respectively.
Observe that this is the same as inserting 2^k - 1 copies of the current type of item. This is because we can simulate the taking of any number of items (represented as n') by taking the combination of the above items that corresponds to the binary representation of n' (For all whole numbers k', if the bit representing 2^k' is set, take the item that represents 2^k' copies of the current type of item).
Phase 2
Lastly, we just insert the items that correspond to the set bits of n - (2^k - 1). (For all whole numbers k', if the bit representing 2^k' is set, insert (2^k' * w, 2^k' * v)).
Now, we can simulate the taking of up to n items of the current type simply by taking a combination of the above inserted items.
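Here is a short Java sketch of the phase 1 / phase 2 construction (method and variable names are mine; the phase 2 leftover is kept as a single piece here, which gives the same coverage): each item type is split into O(log n) pseudo-items, which are then handed to a plain 0/1 knapsack DP.
// Bounded knapsack via binary splitting: O(number of items * capacity * log(max count)).
static int boundedKnapsackBinary(int[] w, int[] v, int[] cnt, int capacity) {
    java.util.List<int[]> pieces = new java.util.ArrayList<>();   // each piece = {weight, value}
    for (int i = 0; i < w.length; i++) {
        int remaining = cnt[i];
        for (int take = 1; take <= remaining; take <<= 1) {       // phase 1: pieces of 1, 2, 4, ... copies
            pieces.add(new int[]{take * w[i], take * v[i]});
            remaining -= take;
        }
        if (remaining > 0)                                        // phase 2: the leftover n - (2^k - 1) copies
            pieces.add(new int[]{remaining * w[i], remaining * v[i]});
    }
    int[] dp = new int[capacity + 1];
    for (int[] p : pieces)                                        // ordinary 0/1 DP over the pieces
        for (int j = capacity; j >= p[0]; j--)
            dp[j] = Math.max(dp[j], dp[j - p[0]] + p[1]);
    return dp[capacity];
}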
I don't currently have an exact proof of this solution, but after playing around with it for a while it seems correct. If I can figure one out I may update this post later on.
Proof
First, a proposition: All we have to prove is that inserting the above items allows us to simulate the taking of any number of items of the current type up to n.
With that in mind, let's define some variables:
Let n be the number of items of the current type available
Let x be the number of items of the current type we want to take
Let k be the greatest integer such that 2^k <= n
If x < 2^k, we can easily take x items using the method described in phase 1 of the algorithm:
... we can simulate the taking of any number of items (represented as n') by taking the combination of the above items that corresponds to the binary representation of n' (For all whole numbers k', if the bit representing 2^k' is set, take the item that represents 2^k' copies of the current type of item).
Otherwise, we do the following:
Take n - (2^k - 1) items. This is done by taking all the items inserted in phase 2. Now only the items inserted in phase 1 are available for use.
Take x - (n - (2^k - 1)) items. Since this value is always less than 2^k, we can just use the method used for the first case.
Finally, how do we know that x - (n - (2^k - 1)) < 2^k?
If we simplify the left side, we get:
x - (n - (2^k - 1))
x - n + 2^k - 1
x - (n + 1) + 2^k
If the above value was >= 2^k, then x - (n + 1) >= 0 would be true, meaning that x > n. That would be impossible as that's not a valid value of x.
Finally, there is even an approach mentioned here that runs in O(number of items * maximum weight) time.
The algorithm is similar to the brute force method ic3b3rg proposed and just uses simple DP optimizations and a sliding-window deque to bring down the run time.
My code was tested on this problem (classical bounded knapsack problem): https://dmoj.ca/problem/knapsack
My code: https://pastebin.com/acezMrMY
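For reference, here is a rough Java sketch of that deque optimization (this is not the linked code; names are mine). For each item and each residue r = j mod w, a sliding-window maximum over the previous dp row, with the window limited to the item's count, produces the new row, giving O(number of items * capacity) overall.
// Bounded knapsack with a monotonic deque: O(number of items * capacity). Assumes positive weights.
static int boundedKnapsackDeque(int[] w, int[] v, int[] cnt, int capacity) {
    int[] dp = new int[capacity + 1];
    for (int i = 0; i < w.length; i++) {
        int[] old = dp.clone();
        for (int r = 0; r < w[i] && r <= capacity; r++) {                 // split indices by j mod w[i]
            java.util.Deque<Integer> dq = new java.util.ArrayDeque<>();  // candidate copy-counts k, best at the front
            for (int k = 0; r + k * w[i] <= capacity; k++) {
                int cand = old[r + k * w[i]] - k * v[i];
                while (!dq.isEmpty() && old[r + dq.peekLast() * w[i]] - dq.peekLast() * v[i] <= cand)
                    dq.pollLast();                                        // keep the deque values decreasing
                dq.addLast(k);
                if (dq.peekFirst() < k - cnt[i]) dq.pollFirst();          // window allows at most cnt[i] extra copies
                dp[r + k * w[i]] = old[r + dq.peekFirst() * w[i]] - dq.peekFirst() * v[i] + k * v[i];
            }
        }
    }
    return dp[capacity];
}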
I posted an article on Code Project which discusses a more efficient solution to the bounded knapsack algorithm.
From the article:
In the dynamic programming solution, each position of the m array is a
sub-problem of capacity j. In the 0/1 algorithm, for each sub-problem
we consider the value of adding one copy of each item to the knapsack.
In the following algorithm, for each sub-problem we consider the value
of adding the lesser of the quantity that will fit, or the quantity
available of each item.
I've also enhanced the code so that we can determine what's in the
optimized knapsack (as opposed to just the optimized value).
ItemCollection[] ic = new ItemCollection[capacity + 1];
for(int i=0;i<=capacity;i++) ic[i] = new ItemCollection();
for(int i=0;i<items.Count;i++)
for(int j=capacity;j>=0;j--)
if(j >= items[i].Weight) {
int quantity = Math.Min(items[i].Quantity, j / items[i].Weight);
for(int k=1;k<=quantity;k++) {
ItemCollection lighterCollection = ic[j - k * items[i].Weight];
int testValue = lighterCollection.TotalValue + k * items[i].Value;
if(testValue > ic[j].TotalValue) (ic[j] = lighterCollection.Copy()).AddItem(items[i],k);
}
}
private class Item {
public string Description;
public int Weight;
public int Value;
public int Quantity;
public Item(string description, int weight, int value, int quantity) {
Description = description;
Weight = weight;
Value = value;
Quantity = quantity;
}
}
private class ItemCollection {
public Dictionary<string,int> Contents = new Dictionary<string,int>();
public int TotalValue;
public int TotalWeight;
public void AddItem(Item item,int quantity) {
if(Contents.ContainsKey(item.Description)) Contents[item.Description] += quantity;
else Contents[item.Description] = quantity;
TotalValue += quantity * item.Value;
TotalWeight += quantity * item.Weight;
}
public ItemCollection Copy() {
var ic = new ItemCollection();
ic.Contents = new Dictionary<string,int>(this.Contents);
ic.TotalValue = this.TotalValue;
ic.TotalWeight = this.TotalWeight;
return ic;
}
}
The download in the Code Project article includes a test case.
First, store all your data in a single array (with repetition).
Then use the first (0-1) method mentioned in the Wikipedia article.
For example, solving a bounded knapsack with { 2 (2 times), 4 (3 times), ... } is equivalent to solving a 0-1 knapsack with { 2, 2, 4, 4, 4, ... }.
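A tiny Java sketch of this expansion (names are mine): repeat each item according to its count, then run the ordinary 0/1 DP over the expanded list. Simple, but the item list can get large when the counts are big.
static int boundedKnapsackExpanded(int[] w, int[] v, int[] cnt, int capacity) {
    java.util.List<int[]> items = new java.util.ArrayList<>();    // each entry = {weight, value}
    for (int i = 0; i < w.length; i++)
        for (int k = 0; k < cnt[i]; k++)
            items.add(new int[]{w[i], v[i]});                     // cnt[i] copies of item i
    int[] dp = new int[capacity + 1];
    for (int[] it : items)
        for (int j = capacity; j >= it[0]; j--)                   // standard 0/1 update
            dp[j] = Math.max(dp[j], dp[j - it[0]] + it[1]);
    return dp[capacity];
}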
I suggest you use the fractional (greedy) knapsack method. Its complexity is O(n log n) and it is one of the best-known algorithms.
Below I have given its code in C#:
private static void Knapsack()
{
Console.WriteLine("************Kanpsack***************");
Console.WriteLine("Enter no of items");
int _noOfItems = Convert.ToInt32(Console.ReadLine());
int[] itemArray = new int[_noOfItems];
int[] weightArray = new int[_noOfItems];
int[] priceArray = new int[_noOfItems];
double[] fractionArray = new double[_noOfItems];
for(int i=0;i<_noOfItems;i++)
{
Console.WriteLine("[Item"+" "+(i+1)+"]");
Console.WriteLine("");
Console.WriteLine("Enter the Weight");
weightArray[i] = Convert.ToInt32(Console.ReadLine());
Console.WriteLine("Enter the Price");
priceArray[i] = Convert.ToInt32(Console.ReadLine());
Console.WriteLine("");
itemArray[i] = i+1 ;
}//for loop
int temp;
Console.WriteLine(" ");
Console.WriteLine("ITEM" + " " + "WEIGHT" + " "+"PRICE");
Console.WriteLine(" ");
for(int i=0;i<_noOfItems;i++)
{
Console.WriteLine("Item"+" "+(i+1)+" "+weightArray[i]+" "+priceArray[i]);
Console.WriteLine(" ");
}//For Loop For Printing the value.......
//Calculating the value/weight fraction for each item............
for(int i=0;i<_noOfItems;i++)
{
fractionArray[i] = (double)priceArray[i] / weightArray[i];
}
Console.WriteLine("Testing.............");
//sorting the Item on the basis of fraction value..........
//Bubble Sort To Sort the Process Priority
for (int i = 0; i < _noOfItems; i++)
{
for (int j = i + 1; j < _noOfItems; j++)
{
if (fractionArray[j] > fractionArray[i])
{
//item Array
temp = itemArray[j];
itemArray[j] = itemArray[i];
itemArray[i] = temp;
//Weight Array
temp = weightArray[j];
weightArray[j] = weightArray[i];
weightArray[i] = temp;
//Price Array
temp = priceArray[j];
priceArray[j] = priceArray[i];
priceArray[i] = temp;
//Fraction Array
double ftemp = fractionArray[j];
fractionArray[j] = fractionArray[i];
fractionArray[i] = ftemp;
}//if
}//Inner for
}//outer For
// Printing its value..............After Sorting..............
Console.WriteLine(" ");
Console.WriteLine("ITEM" + " " + "WEIGHT" + " " + "PRICE" + " "+"Fraction");
Console.WriteLine(" ");
for (int i = 0; i < _noOfItems; i++)
{
Console.WriteLine("Item" + " " + (itemArray[i]) + " " + weightArray[i] + " " + priceArray[i] + " "+fractionArray[i]);
Console.WriteLine(" ");
}//For Loop For Printing the value.......
Console.WriteLine("");
Console.WriteLine("Enter the Capacity of Knapsack");
int _capacityKnapsack = Convert.ToInt32(Console.ReadLine());
// Creating the values for the solution
int k=0;
double fractionvalue = 0;
int[] _takingItemArray=new int[100];
int sum = 0; double _totalPrice = 0;
int l = 0;
int _capacity = _capacityKnapsack;
do
{
if(k>=_noOfItems)
{
k = 0;
}
if (_capacityKnapsack >= weightArray[k])
{
_takingItemArray[l] = weightArray[k];
_capacityKnapsack = _capacityKnapsack - weightArray[k];
_totalPrice += priceArray[k];
k++;
l++;
}
else
{
fractionvalue = fractionArray[k];
_takingItemArray[l] = _capacityKnapsack;
_totalPrice += _capacityKnapsack * fractionArray[k];
k++;
l++;
}
sum += _takingItemArray[l-1];
} while (sum != _capacity);
Console.WriteLine("");
Console.WriteLine("Value in Kg Are............");
Console.WriteLine("");
for (int i = 0; i < _takingItemArray.Length; i++)
{
if(_takingItemArray[i]!=0)
{
Console.WriteLine(_takingItemArray[i]);
Console.WriteLine("");
}
else
{
break;
}
}//for loop
Console.WriteLine("Toatl Value is "+_totalPrice);
}//Method
We can use the 0/1 knapsack algorithm while tracking the number of copies left for each item.
We could likewise adapt the unbounded knapsack algorithm to solve the bounded knapsack problem.