I have read that selection sort uses a brute-force strategy. However, I think that it uses a greedy strategy.
Why I think it is greedy: the outer loop goes from 0 to n-1 and the inner loop from i+1 to n-1, which is really naive. On every iteration it selects the minimum of the remaining elements, i.e. it chooses the best option locally. That is everything like greedy, yet supposedly it is not.
Can you please explain why it is not what I think it is? I have not found any information about this issue on the Internet.
A selection sort could indeed be described as a greedy algorithm, in the sense that it:
tries to choose an output (a permutation of its inputs) that optimizes a certain measure ("sortedness", which could be measured in various ways, e.g. by number of inversions), and
does so by breaking the task into smaller subproblems (for selection sort, finding the k-th element in the output permutation) and picking the locally optimal solution to each subproblem.
As it happens, the same description could be applied to most other sorting algorithms, as well — the only real difference is the choice of subproblems. For example:
insertion sort locally optimizes the sortedness of the permutation of k first input elements;
bubble sort optimizes the sortedness of adjacent pairs of elements; it needs to iterate over the list several times to reach a global optimum, but this still falls within the broad definition of a greedy algorithm;
merge sort optimizes the sortedness of exponentially growing subsequences of the input sequence;
quicksort recursively divides its input into subsequences on either side of an arbitrarily chosen pivot, optimizing the division to maximize sortedness at each stage.
Indeed, off the top of my head, I can't think of any practical sorting algorithm that wouldn't be greedy in this sense. (Bogosort isn't, but can hardly be called practical.) Furthermore, formulating these sorting algorithms as greedy optimization problems like this rather obscures the details that actually matter in practice when comparing sorting algorithms.
Thus, I'd say that characterizing selection sort, or any other sorting algorithm, as greedy is technically valid but practically useless, since such a classification provides no useful information.
Let A be a list of integers such that: A = [5, 4, 3, 6, 1, 2, 7]
A greedy algorithm will look for the most promising direction, therefore :
we will compare 5 to 4, see that 4 is indeed smaller than 5, and set 4 as our minimum
compare 4 to 3, and set 3 as our minimum
Now we compare 3 to 6, and here is the tricky part: while a normal selection sort (brute force) will keep considering the remaining numbers, in this greedy approach we take 3 as our minimum and do not consider the remaining numbers, hence "best locally".
So sorting the list using this approach will result in a list such as:
[3, 4, 5, 6, 1, 2, 7]
Greedy and brute force describe different traits of the algorithm.
Greedy means that on each step the algorithm selects the option which is locally the best. That is, it has no look-ahead.
Brute force means that the algorithm looks for options in a straightforward manner, considering them all. If, for example, it searched for an element via binary search, it would not be brute force anymore.
So the algorithm may be both greedy and brute force. These qualities are not mutually exclusive.
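To make both traits concrete, here is a minimal selection sort sketch (my own illustration, not from any of the posts above): the inner scan is the brute-force trait, since it considers every remaining candidate, and committing to the minimum it found is the greedy trait.

```python
def selection_sort(a):
    """Sort a list in place; returns it for convenience."""
    n = len(a)
    for i in range(n - 1):
        # Brute force: consider every remaining candidate...
        min_idx = i
        for j in range(i + 1, n):
            if a[j] < a[min_idx]:
                min_idx = j
        # ...then greedily commit to the locally best one.
        a[i], a[min_idx] = a[min_idx], a[i]
    return a
```

Both traits live in the same pass, which is why the algorithm can reasonably be called both greedy and brute force.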
In the Subset Sum problem, if we don't use the Dynamic Programming approach, then we have an exponential time complexity. But if we draw the recursion tree, it seems that all the 2^n branches are unique. If we use dynamic programming, how can we assure that all the unique branches are explored? If there really exists 2^n possible solutions, how does dynamic programming reduce it to polynomial time while also ensuring all 2^n solutions are explored?
How does dynamic programming reduce it to polynomial time while also ensuring all 2^n solutions are explored?
It is pseudo-polynomial time, not polynomial time. It's a very important distinction. According to Wikipedia, a numeric algorithm runs in pseudo-polynomial time if its running time is a polynomial in the numeric value of the input, but not necessarily in the length of the input, which is the case for polynomial-time algorithms.
What does it matter?
Consider an example [1, 2, 3, 4], sum = 1 + 2 + 3 + 4 = 10.
There do in fact exist 2^4 = 16 subsequences; however, do we need to check them all? The answer is no, since we are only concerned with the sum of a subsequence. To illustrate this, let's say we're iterating from the 1st element to the 4th element:
1st element:
We can choose to take or not take the 1st element, so the possible sums will be [0, 1].
2nd element:
We can choose to take or not take the 2nd element. Same idea; the possible sums will be [0, 1, 2, 3].
3rd element:
We have [0, 1, 2, 3] now, and we consider taking the third element. But wait... if we take the third element and add it to 0, we still get 3, which is already present in the array. Do we need to store this piece of information? Apparently not. In fact, we only need to know whether each sum is possible at any stage. If there are multiple subsequences summing to the same value, we keep only one entry. This is the key to the reduction of complexity, if you consider it a reduction.
With that said, a truly polynomial solution for subset sum is not known, since the problem is NP-complete.
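The reachable-sums idea described above can be sketched in a few lines (a minimal illustration that only tracks which sums are possible, not which subsequences produce them):

```python
def subset_sums(nums):
    """Return the set of all achievable subset sums.
    Duplicate sums collapse into a single set entry, which is
    exactly the pseudo-polynomial saving described above."""
    possible = {0}  # the empty subset
    for x in nums:
        # For each element: keep every old sum (skip x)
        # and add x to every old sum (take x).
        possible |= {s + x for s in possible}
    return possible

def has_subset_sum(nums, target):
    return target in subset_sums(nums)
```

For [1, 2, 3, 4] the set contains at most sum(nums) + 1 = 11 entries even though there are 16 subsequences, and the bound O(N * sum) is polynomial in the numeric value of the input but not in its bit length.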
I've been learning data structures and algorithms from a book, in which it compares time efficiency in terms of number of steps taken by various sorting algorithms. I'm confused as to what we define as one step while doing this.
So while counting the number of steps we consider the worst-case scenarios. I understood how we come up with the number of steps for bubble sort, but for selection sort I am confused about the part where we compare every element with the current lowest value.
For example, take the worst-case array, let's say 5, 4, 3, 2, 1, and let's say we are in the first pass-through. When we start, 5 is the current lowest value. When we move to 4 and compare it to 5, we change the current lowest value to 4.
Why isn't this action of changing the current lowest value to 4 counted as a swap or an additional step? I mean, it is a step separate from the comparison step. The book I am referring to states that in the first pass-through the number of comparisons is n-1 but the number of swaps is only 1, even in the worst case, for an array of size n. Here they are assuming that the step of changing the current lowest value is part of the comparison step, which I think should not be a valid assumption, since there can be an array in which you compare but don't need to change the current lowest value, and hence your number of steps eventually reduces. The point being, we can't assume that the number of steps in the first pass-through for selection sort in the worst case is n-1 (comparisons) + 1 (swap); it should be more than (n-1) + (1).
I understand that both selection sort and bubble sort lie in the same time-complexity class per big-O methodology, but the book goes on to claim that selection sort takes fewer steps than bubble sort in worst-case scenarios, and I'm doubting that. This is what the book says: https://ibb.co/dxFP0
Generally in these kinds of exercises you’re interested in whether the algorithm is O(1), O(n), O(n^2) or something higher. You’re generally not interested in O(1) vs O(2) or in O(3n) vs O(5n) because for sufficiently large n only the power of n matters.
To put it another way, small differences in the complexity of each step, maybe factors of 2 or 3 or even 10, don't matter against choosing an algorithm that does a factor of n = 300 or more additional work.
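If you want to settle the counting question empirically, you can instrument a single pass and count comparisons, "new minimum" updates, and swaps separately (a sketch; which of these the book counts as a "step" is exactly the convention in dispute):

```python
def first_pass_counts(a):
    """Run one pass of selection sort on a copy of `a` and
    count each kind of step separately."""
    a = list(a)
    comparisons = min_updates = swaps = 0
    min_idx = 0
    for j in range(1, len(a)):
        comparisons += 1
        if a[j] < a[min_idx]:
            min_updates += 1   # the extra step the question asks about
            min_idx = j
    if min_idx != 0:           # at most one actual swap per pass
        swaps += 1
        a[0], a[min_idx] = a[min_idx], a[0]
    return comparisons, min_updates, swaps
```

For the worst-case array [5, 4, 3, 2, 1] this gives 4 comparisons, 4 minimum updates and 1 swap; for [1, 2, 3, 4, 5] it gives 4 comparisons, 0 updates and 0 swaps. The updates are real work, but they are a constant factor on top of the comparisons, which is why they vanish in the big-O view.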
I have obtained a proof that would discredit a generally held idea regarding the 0/1 knapsack problem, and I'm really having a hard time convincing myself that I am right, because I couldn't find anything anywhere to support my claims. So I am going to first state my claims and then prove them, and I would appreciate anyone trying to further substantiate my claims or to disprove them. Any collaboration is appreciated.
Assertions:
The size of the bnb (branch and bound) tree for solving the knapsack problem is not independent of K (the capacity of the knapsack).
The complete space of the bnb tree is always O(NK), with N being the number of items, and not O(2^N).
The bnb algorithm is always better than the standard dynamic programming approach both in time and space.
Pre-assumptions: the bnb algorithm prunes the invalid nodes (if the remaining capacity is less than the weight of the current item, we are not going to extend it). Also, the bnb algorithm is done in a depth-first manner.
Sloppy Proof:
Here is the recursive formula for solving the knapsack problem:
Value(i, k) = max(Value(i-1, k), Value(i-1, k - weight(i)) + value(i))
however, if k < weight(i): Value(i, k) = Value(i-1, k)
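As a sketch, this recurrence can be evaluated top-down with memoization like this (my own illustration; item i is 1-indexed to match the formula):

```python
from functools import lru_cache

def knapsack(values, weights, K):
    """Top-down evaluation of Value(i, k) with memoization."""
    @lru_cache(maxsize=None)
    def value(i, k):
        if i == 0:
            return 0
        if k < weights[i - 1]:               # item i does not fit
            return value(i - 1, k)
        return max(value(i - 1, k),          # skip item i
                   value(i - 1, k - weights[i - 1]) + values[i - 1])  # take it
    return value(len(values), K)
```

For the example below (K = 9, items with values 5, 6, 3 and weights 4, 5, 2) it returns 11, i.e. taking the first two items.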
Now imagine this example:
K = 9
N = 3
V W:
5 4
6 5
3 2
Now here is the Dynamic solution and table for this problem:
Now imagine regardless of whether it is a good idea or not we want to do this using only the recursive formula through memoization and not with the table, with something like a map/dictionary or a simple array to store the visited cells. For solving this problem using memoization we should solve the denoted cells:
Now this is exactly like the tree we would obtain using the bnb approach:
and now for the sloppy proofs:
The memoization and bnb trees have the same number of nodes.
The number of memoization nodes depends on the table size.
The table size depends on N and K.
Therefore bnb is not independent of K.
The memoization space is bounded by NK, i.e. O(NK).
Therefore the complete space of the bnb tree (or the space if we do the bnb in a breadth-first manner) is always O(NK) and not O(2^N), because the whole tree is not going to be constructed and it would be exactly like the memoization.
Memoization has better space than the standard dynamic programming.
bnb has better space than dynamic programming (even if done breadth-first).
The simple bnb without relaxation (just eliminating the infeasible nodes) would have better time than memoization (memoization has to search in the lookup table; even if the lookup were negligible, they would still be the same).
If we disregard the lookup search of memoization, it is better than dynamic programming.
Therefore the bnb algorithm is always better than dynamic programming both in time and space.
Questions:
If by any mean my proofs are correct some questions would arise that are interesting:
Why bother with dynamic programming? In my experience the best thing you can do in DP knapsack is to keep only the last two columns, and you can improve it further to one column if you fill it bottom to top, so it would have O(K) space, but it still can't (if the above assertions are correct) beat the bnb approach.
Can we still say bnb is better if we integrate it with relaxation pruning (with regard to time)?
ps: Sorry for the long long post!
Edit:
Since two of the answers focus on memoization, I just want to clarify that I'm not focused on it at all! I just used memoization as a technique to prove my assertions. My main focus is the branch-and-bound technique vs dynamic programming. Here is a complete example of another problem, solved by bnb + relaxation (source: Coursera - Discrete Optimization):
I think there is a misunderstanding on your side: that dynamic programming is the state-of-the-art solution for the knapsack problem. This algorithm is taught at universities because it is an easy and nice example of dynamic programming and pseudo-polynomial-time algorithms.
I have no expertise in the field and don't know what the state of the art is now, but branch-and-bound approaches have been used for quite some time to solve the knapsack problem: the book Knapsack Problems by Martello and Toth is already pretty old but treats branch and bound pretty extensively.
Still, this is a great observation on your side, that the branch-and-bound approach can be used for knapsack - alas, you were born too late to be the first to have this idea :)
There are some points in your proof which I don't understand and which need more explanation in my opinion:
You need memoization, otherwise your tree would have O(2^N) nodes (there will obviously be such a case, otherwise knapsack would not be NP-hard). I don't see anything in your proof that assures that the memoization memory/computation steps are less than O(NK).
Dynamic programming needs only O(K) memory space, so I don't see why you can claim "the bnb algorithm is always better than dynamic both in time and space".
Maybe your claims are true, but I'm not able to see it the way the proof goes now.
The other problem is the definition of "better". Is the branch-and-bound approach better if it is better for most of the problems, or for the common problems, or does it have to be better for the worst case (which would not play any role in real life)?
The book I have linked to also has some comparisons of the running times of the algorithms. The dynamic-programming-based algorithms (clearly more complex than the one taught at school) are even better for some kinds of problems - see section 2.10.1. Not bad for a total joke!
First of all, since you are applying memoization, you are still doing DP. That's basically the definition of DP: recursion + memoization. And that is also good. Without memoization your computation costs would explode. Just imagine if two items both have weight 2 and a third and a fourth have weight 1: they all end up at the same node in the tree, you would have to do the computation multiple times, and you'll end up with exponential running time.
The main difference is the order of computation. The way of computing the entire matrix is called "bottom-up DP", since you start with (0,0) and work yourself upwards. Your way (the tree approach) is called "top-down DP", since you start with the goal and work yourself down the tree. But they are both using dynamic programming.
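As an illustration of the two orders, here is a minimal bottom-up version that fills the entire (N+1) x (K+1) matrix; a top-down version computes the same cell values, just only for the cells it actually visits:

```python
def knapsack_bottom_up(values, weights, K):
    n = len(values)
    # V[i][k] = best value using the first i items with capacity k
    V = [[0] * (K + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for k in range(K + 1):
            V[i][k] = V[i - 1][k]              # skip item i
            if weights[i - 1] <= k:            # or take it, if it fits
                V[i][k] = max(V[i][k],
                              V[i - 1][k - weights[i - 1]] + values[i - 1])
    return V[n][K]
```

With the question's example (values 5, 6, 3; weights 4, 5, 2; K = 9) this fills a 4 x 10 matrix and returns 11, the same answer the tree approach reaches by visiting only a subset of the cells.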
Now to your questions:
You are overestimating how much you really save. N = 3 is a pretty small example. I quickly tried a bigger example, with N = 20, K = 63 (which is still pretty small) and random values and random weights. This is the first picture that I generated:
values: [4, 10, 9, 1, 1, 2, 1, 2, 6, 4, 8, 9, 8, 2, 8, 8, 4, 10, 2, 6]
weights: [6, 4, 1, 10, 1, 2, 9, 9, 1, 6, 2, 3, 10, 7, 2, 4, 10, 9, 8, 2]
111111111111111111111111111111111111111111111111111111111111111
111111111111111111111111111111111111111111111111111111111111111
111111111111111111111111111111111111111111111111111111111111111
111111111111111111111111111111111111111111111111111111111111111
111111111111111111111111111111111111111111111111111111111111111
111111111111111111111111111111111111111111111111111111111111111
111111111111111111111111111111111111111111111111111111111111111
111111111111111111111111111111111111111111111111111111111111111
111111111111111111111111111111111111111111111111111111111111111
011111111111111111111111111111111111111111111111111111111111101
000001011111111111111111111111111111111111111111111111111111101
000000010111111111111111111111111111111111111111111111111111101
000000000010101011111111111111111111111111111111111111111010101
000000000000000000001010101111111111111111111111111111111010101
000000000000000000000000000101010101111111111111111111101010101
000000000000000000000000000001010101011111111111111111101010101
000000000000000000000000000000000101000001111100001111100000101
000000000000000000000000000000000000000000010100000111100000101
000000000000000000000000000000000000000000000000000010100000101
000000000000000000000000000000000000000000000000000000000000101
000000000000000000000000000000000000000000000000000000000000001
This picture is a transposed version of your displayed matrix. Rows represent the i values (first i elements in the array), and the columns represent the k values (allowed weights). The 1s are the positions in the DP matrix that you will visit during your tree approach. Of course you'll see a lot of 0s at the bottom of the matrix, but you will visit every position in the upper half. About 68% of the positions in the matrix are visited. A bottom-up DP solution will be faster in such a situation. Recursive calls are slower, since you have to allocate a new stack frame for each call. A speedup of 2x with loops instead of recursive calls is not untypical, and this alone would already be enough to make the bottom-up approach faster. And we haven't even talked about the memoization costs of the tree approach yet.
Notice that I haven't used actual bnb here. I'm not quite sure how you would do the bound part, since you only know the value of a node once you compute it by visiting its children.
With my input data, the bottom-up approach is clearly a winner. But that doesn't mean that your approach is bad. Quite the opposite. It can actually be quite good. It all depends on the input data. Let's just imagine that K = 10^18 and all your weights are about 10^16. The bottom-up approach would not even find enough memory to allocate the matrix, while your approach will succeed in no time.
However, you could probably improve your version by performing A* instead of bnb. You can estimate the best value for each node (i, k) with int(k / max(weight[1..i]) * min(values[1..i])) and prune a lot of nodes using this heuristic.
In practice dynamic programming can be better for integer 0/1 knapsack because:
No recursion means you can never run into a stack overflow
No need to do a lookup search for each node, so often faster
As you note, storing the last two columns means that the memory requirement is lower
The code is simpler (no need for a memoization table)
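The two-column observation can be pushed further: a single array of size K+1 suffices if the capacities are iterated from high to low, so each item is counted at most once. A sketch:

```python
def knapsack_one_row(values, weights, K):
    best = [0] * (K + 1)   # best[k] = best value with capacity k
    for v, w in zip(values, weights):
        # Iterate k downwards so each item is used at most once:
        # best[k - w] still refers to the previous item's row.
        for k in range(K, w - 1, -1):
            best[k] = max(best[k], best[k - w] + v)
    return best[K]
```

Iterating upwards instead would let an item be taken repeatedly (the unbounded knapsack), which is why the downward direction matters here.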
I was asked in an exam whether binary search is a divide-and-conquer algorithm. My answer was yes, because you divide the problem into smaller subproblems until you reach your result.
But the examiners asked where the conquer part in it was, which I was unable to answer. They also denied that it actually is a divide-and-conquer algorithm.
But everywhere I look on the web it says that it is, so I would like to know why, and where the conquer part of it is.
The book:
Data Structures and Algorithm Analysis in Java (2nd Edition), by Mark Allen Weiss
Says that a D&C algorithm should have two disjoint recursive calls, just like QuickSort does.
Binary Search does not have this, even though it can be implemented recursively.
I think it is not divide and conquer; see the first paragraph of http://en.wikipedia.org/wiki/Divide_and_conquer_algorithm
recursively breaking down a problem into two or more sub-problems
which are then combined to give a solution
In binary search there is still only one problem, which is just reduced by half at every step, so no conquer (merging) phase of the results is needed.
It isn't.
To complement @Kenci's post: DnC algorithms have a few general/common properties; they:
divide the original problem instance into a set of smaller sub-instances of itself;
independently solve each sub-instance;
combine smaller/independent sub-instance solutions to build a single solution for the larger/original instance
The problem with Binary Search is that it does not really even generate a set of independent sub-instances to be solved, as per step 1; it only simplifies the original problem by permanently discarding sections it's not interested in. In other words, it only reduces the problem's size and that's as far as it ever goes.
A DnC algorithm is supposed to not only identify/solve the smaller sub-instances of the original problem independently of each other, but also use that set of partial independent solutions to "build up" a single solution for the larger problem instance as a whole.
The book Fundamentals of Algorithmics, G. Brassard, P. Bratley says the following (bold my emphasis, italics in original):
It is probably the simplest application of divide-and-conquer, so simple in fact that strictly speaking this is an application of simplification rather than divide-and-conquer: the solution to any sufficiently large instance is reduced to that of a single smaller one, in this case of half size.
Section 7.3 Binary Search on p.226.
In a divide and conquer strategy :
1. The problem is divided into parts;
2. Each of these parts is attacked/solved independently, by applying the algorithm at hand (mostly recursion is used for this purpose);
3. And then the solutions of each partition/division are combined/merged together to arrive at the final solution to the problem as a whole (this comes under conquer).
Examples: quicksort, merge sort.
Basically, the binary search algorithm just divides its work space (an input (ordered) array of size n) in half in each iteration. Therefore it is definitely deploying the divide strategy, and as a result the time complexity reduces to O(lg n). So this covers the "divide" part of it.
As can be noticed, the final solution is obtained from the last comparison made, that is, when we are left with only one element for comparison.
Binary search does not merge or combine solutions.
In short, binary search divides the size of the problem (on which it has to work) into halves, but doesn't find the solution in bits and pieces, and hence no need to merge solutions arises!
I know it's a bit lengthy, but I hope it helps :)
Also you can get some idea from : https://www.khanacademy.org/computing/computer-science/algorithms/binary-search/a/running-time-of-binary-search
Also, I realised just now that this question was posted long back!
My bad!
Apparently some people consider binary search a divide-and-conquer algorithm, and some do not. I quickly googled three references (all seemingly from academia) that call it a D&C algorithm:
http://www.cs.berkeley.edu/~vazirani/algorithms/chap2.pdf
http://homepages.ius.edu/rwisman/C455/html/notes/Chapter2/DivConq.htm
http://www.csc.liv.ac.uk/~ped/teachadmin/algor/d_and_c.html
I think it's common agreement that a D&C algorithm should have at least the first two phases of these three:
divide, i.e. decide how the whole problem is separated into sub-problems;
conquer, i.e. solve each of the sub-problems independently;
[optionally] combine, i.e. merge the results of independent computations together.
The second phase - conquer - should recursively apply the same technique to solve the subproblem by dividing it into even smaller sub-sub-problems, and so on. In practice, however, some threshold is often used to limit the recursive approach, as for small problems a different approach might be faster. For example, quicksort implementations often use e.g. bubble sort when the size of the array portion to sort becomes small.
The third phase might be a no-op, and in my opinion it does not disqualify an algorithm as D&C. A common example is recursive decomposition of a for-loop with all iterations working purely on independent data items (i.e. no reduction of any form). It might look useless at a glance, but in fact it's a very powerful way to e.g. execute the loop in parallel, and it is utilized by such frameworks as Cilk and Intel's TBB.
Returning to the original question: let's consider some code that implements the algorithm (I use C++; sorry if this is not the language you are comfortable with):
int search( int value, int* a, int begin, int end ) {
    // end is one past the last element, i.e. [begin, end) is a half-open interval.
    if (begin < end)
    {
        int m = (begin+end)/2;
        if (value==a[m])
            return m;
        else if (value<a[m])
            return search(value, a, begin, m);
        else
            return search(value, a, m+1, end);
    }
    else // begin>=end, i.e. no valid array to search
        return -1;
}
Here the divide part is int m = (begin+end)/2; and all the rest is the conquer part. The algorithm is explicitly written in a recursive D&C form, even though only one of the branches is taken. However, it can also be written in a loop form:
int search( int value, int* a, int size ) {
    int begin=0, end=size;
    while( begin<end ) {
        int m = (begin+end)/2;
        if (value==a[m])
            return m;
        else if (value<a[m])
            end = m;
        else
            begin = m+1;
    }
    return -1;
}
I think it's quite a common way to implement binary search with a loop; I deliberately used the same variable names as in the recursive example, so that commonality is easier to see. Therefore we might say that, again, calculating the midpoint is the divide part, and the rest of the loop body is the conquer part.
But of course if your examiners think differently, it might be hard to convince them it's D&C.
Update: just had a thought that if I were to develop a generic skeleton implementation of a D&C algorithm, I would certainly use binary search as one of API suitability tests to check whether the API is sufficiently powerful while also concise. Of course it does not prove anything :)
The Merge Sort and Quick Sort algorithms use the divide and conquer technique (because there are 2 sub-problems) and Binary Search comes under decrease and conquer (because there is 1 sub-problem).
Therefore, Binary Search actually uses the decrease and conquer technique and not the divide and conquer technique.
Source: https://www.geeksforgeeks.org/decrease-and-conquer/
Binary search is tricky to describe with divide-and-conquer because the conquering step is not explicit. The result of the algorithm is the index of the needle in the haystack, and a pure D&C implementation would return the index of the needle in the smallest haystack (0 in the one-element list) and then recursively add the offsets in the larger haystacks that were divided in the division step.
Pseudocode to explain:
function binary_search has arguments needle and haystack and returns index
    if haystack has size 1
        return 0
    else
        divide haystack into upper and lower half
        if needle is smaller than smallest element of upper half
            return 0 + binary_search needle, lower half
        else
            return size of lower half + binary_search needle, upper half
The addition (0 + or size of lower half +) is the conquer part. Most people skip it by providing indices into a larger list as arguments, and thus it is often not readily visible.
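The pseudocode above translates into Python roughly as follows (a sketch that, like the pseudocode, assumes the needle is present in the haystack):

```python
def binary_search(needle, haystack):
    """Pure divide-and-conquer form: each call returns an index
    *relative to its own sub-list*, and the caller adds the offset."""
    if len(haystack) == 1:
        return 0
    mid = len(haystack) // 2
    lower, upper = haystack[:mid], haystack[mid:]
    if needle < upper[0]:
        return 0 + binary_search(needle, lower)       # offset of the lower half is 0
    return len(lower) + binary_search(needle, upper)  # conquer: add the offset
```

Here the conquer step (the offset addition) is explicit, at the cost of copying sublists; the usual index-passing version folds that addition into the arguments.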
The divide part is of course dividing the set into halves.
The conquer part is determining whether and on what position in the processed part there is a searched element.
Dichotomic in computer science refers to choosing between two antithetical choices, between two distinct alternatives. A dichotomy is any splitting of a whole into exactly two non-overlapping parts: a partition of a whole (or a set) into two parts (subsets) that are:
1. Jointly Exhaustive: everything must belong to one part or the other, and
2. Mutually Exclusive: nothing can belong simultaneously to both parts.
Divide and conquer works by recursively breaking down a problem into two or more sub-problems of the same type, until these become simple enough to be solved directly.
So binary search halves the number of items to check with each iteration and determines whether it has a chance of locating the "key" item in that half, moving on to the other half if it can determine the key's absence. As the algorithm is dichotomic in nature, binary search will assume that the "key" has to be in one part until it reaches the exit condition, where it returns that the key is missing.
A Divide and Conquer algorithm is based on 3 steps, as follows:
Divide
Conquer
Combine
Binary Search problem can be defined as finding x in the sorted array A[n].
According to this information:
Divide: compare x with middle
Conquer: recurse into one subarray (finding x in this subarray).
Combine: it is not necessary.
A proper divide-and-conquer algorithm will require both parts to be processed.
Therefore, many people will not call binary search a divide-and-conquer algorithm: it does divide the problem, but it discards one of the halves.
But most likely, your examiners just wanted to see how you argue. (Good) exams aren't about the facts, but about how you react when the challenge goes beyond the original material.
So IMHO the proper answer would have been:
Well, technically it consists only of a divide step, but then it needs to conquer only half of the original task, since the other half is trivially done already.
BTW: there is a nice variation of QuickSort, called QuickSelect, which actually exploits this difference to obtain an average-case O(n) median search algorithm. It's like QuickSort, but it descends only into the half it is interested in.
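A minimal QuickSelect sketch illustrating the "descend only into the part you care about" idea (this simple version builds sublists instead of partitioning in place, so it is expected O(n) time but not O(1) extra space):

```python
import random

def quickselect(a, k):
    """Return the k-th smallest element (0-based) of list a."""
    pivot = random.choice(a)
    lower = [x for x in a if x < pivot]
    equal = [x for x in a if x == pivot]
    upper = [x for x in a if x > pivot]
    if k < len(lower):                        # answer lies in the lower part
        return quickselect(lower, k)
    if k < len(lower) + len(equal):           # the pivot itself is the answer
        return pivot
    # Only recurse into the single part that can contain the answer.
    return quickselect(upper, k - len(lower) - len(equal))
```

Unlike QuickSort, which must recurse into both partitions, only one recursive call is made per level, which is what brings the expected cost down from O(n log n) to O(n).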
Binary Search is not a divide and conquer approach. It is a decrease and conquer approach.
In the divide-and-conquer approach, each subproblem must contribute to the solution, but in binary search not every subdivision contributes to the solution. We divide into two parts and discard one part because we know that the solution does not exist in that part, and we look for the solution only in the other part.
The informal definition is more or less: divide the problem into small problems, then solve them and put them together (conquer). Solving is in fact deciding where to go next (left, right, or element found).
Here a quote from wikipedia:
The name "divide and conquer" is sometimes applied also to algorithms that reduce each problem to only one subproblem, such as the binary search algorithm for finding a record in a sorted list.
This states, it's NOT [update: misread this phrase:)] only one part of divide and conquer.
Update:
This article made it clear for me. I was confused, since the definition says you have to solve every sub-problem. But you have solved the sub-problem once you know you don't have to keep on searching.
The Binary Search is a divide and conquer algorithm:
1) In Divide and Conquer algorithms, we try to solve a problem by solving a smaller sub-problem (the divide part) and use that solution to build the solution for our bigger problem (the conquer part).
2) Here our problem is to find an element in a sorted array. We can solve this by solving a similar sub-problem. (We are creating sub-problems here based on the decision of whether the element being searched for is smaller or bigger than the middle element.) Thus, once we know that the element surely cannot exist in one half, we solve a similar sub-problem in the other half.
3) This way we recurse.
4) The conquer part here is just returning the value returned by the sub-problem to the top of the recursion tree.
I think it is Decrease and Conquer.
Here is a quote from wikipedia.
"The name decrease and conquer has been proposed instead for the
single-subproblem class"
http://en.wikipedia.org/wiki/Divide_and_conquer_algorithms#Decrease_and_conquer
According to my understanding, the "conquer" part is at the end, when you find the target element of the binary search. The "decrease" part is reducing the search space.
Binary search and ternary search are based on the decrease-and-conquer technique, because you do not divide the problem; you actually decrease it by dividing by 2 (by 3 in ternary search).
Merge sort and quicksort can be given as examples of the divide-and-conquer technique: you divide the problem into two subproblems and use the algorithm on these subproblems again to sort an array. But you discard half of the array in binary search. That means you DECREASE the size of the array, not divide it.
No, binary search is not divide and conquer; it is decrease and conquer. I believe divide-and-conquer algorithms typically have an efficiency of O(n log(n)), while decrease-and-conquer algorithms typically have an efficiency of O(log(n)). The difference is whether or not you need to evaluate both parts of the split in the data.
I have implemented the first-fit decreasing bin-packing algorithm to split a list of numbers into two 'bins' of equal size. The algorithm almost always finds the optimal packing arrangement, but occasionally it doesn't.
For example:
The set of numbers 4, 3, 2, 4, 3, 2 can obviously be split into this arrangement:
1) 4, 3, 2
2) 4, 3, 2
The first fit decreasing algorithm does not find a solution.
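A short sketch of first-fit decreasing with two fixed-capacity bins reproduces the failure (here the bin capacity is 9, half of the total 18):

```python
def first_fit_decreasing(items, capacity, num_bins=2):
    """Place each item (largest first) into the first bin it fits.
    Returns the bins, or None if some item does not fit anywhere."""
    bins = [[] for _ in range(num_bins)]
    for item in sorted(items, reverse=True):
        for b in bins:
            if sum(b) + item <= capacity:
                b.append(item)
                break
        else:
            return None   # no bin could take this item
    return bins
```

With the items 4, 3, 2, 4, 3, 2 it sorts them to 4, 4, 3, 3, 2, 2, packs both 4s into the first bin, and then cannot place the final 2 anywhere, even though the split (4, 3, 2) / (4, 3, 2) exists.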
It is not acceptable in this circumstance to NOT find the correct solution if one exists.
The original puzzle is to split a sequence of numbers into two sets that have an equal sum.
Is this just a simple bin packing problem or have I used the wrong algorithm?
Bin packing is NP-complete.
It is not acceptable in this circumstance to NOT find the correct solution if one exists.
Try the Branch and Bound algorithm, but like all the exact algorithms, it doesn't scale to medium or big problems.
First-fit decreasing is a good starting deterministic algorithm, but you can do much better by chaining it with metaheuristics such as simulated annealing, tabu search or genetic algorithms. There are a couple of open-source libs out there which can do that for you, such as Drools Planner (Java).