Understanding the difference between these two scaling properties - algorithm

I need help understanding the following paragraph from a book on algorithms -
Search spaces for natural combinatorial problems tend to grow
exponentially in the size N of the input; if the input size increases
by one, the number of possibilities increases multiplicatively. We’d
like a good algorithm for such a problem to have a better scaling
property: when the input size increases by a constant factor—say, a
factor of 2—the algorithm should only slow down by some constant
factor C.
I don't really get why one is better than the other. If anyone can formulate any examples to aid my understanding, it's greatly appreciated.

Let's consider the following problem: you're given a list of numbers, and you want to find the longest subsequence of that list where the numbers are in ascending order. For example, given the sequence
2 7 1 8 3 9 4 5 0 6
you could form the subsequence [2, 7, 8, 9] as follows:
2 7 1 8 3 9 4 5 0 6
^ ^   ^   ^
but there's an even longer one, [1, 3, 4, 5, 6] available here:
2 7 1 8 3 9 4 5 0 6
    ^   ^   ^ ^   ^
That one happens to be the longest subsequence that's in increasing order, I believe, though please let me know if I'm mistaken.
Now that we have this problem, how would we go about solving it in the general case where you have a list of n numbers? Let's start with a not-so-great option. One possibility would be to list off all the subsequences of the original list of numbers, filter out everything that isn't in increasing order, and then take the longest of the ones we find. For example, given this short list:
2 7 1 8
we'd form all the possible subsequences, which are shown here:
[]
[8]
[1]
[1, 8]
[7]
[7, 8]
[7, 1]
[7, 1, 8]
[2]
[2, 8]
[2, 1]
[2, 1, 8]
[2, 7]
[2, 7, 8]
[2, 7, 1]
[2, 7, 1, 8]
Yikes, that list is pretty long. But by looking at it, we can see that the longest increasing subsequence here is [2, 7, 8], which has length three.
Now, how well is this going to scale as our input list gets longer and longer? Here's something to think about - how many subsequences are there of this new list, which I made by adding 3 to the end of the existing list?
2 7 1 8 3
Well, every existing subsequence is still a perfectly valid subsequence here. But on top of that, we can form a bunch of new subsequences. In fact, we could take any existing subsequence and then tack a 3 onto the end of it. That means that if we had S subsequences for our length-four list, we'll have 2S subsequences for our length-five list.
More generally, you can see that if you take a list and add one more element onto the end of it, you'll double the number of subsequences available. That's a mathematical fact, and it's neither good nor bad by itself, but if we're in the business of listing all those subsequences and checking each one of them to see whether it has some property, we're going to be in trouble, because that means there's going to be a ton of subsequences. We already saw that there are 16 subsequences of a four-element list. That means there are 32 subsequences of a five-element list, 64 subsequences of a six-element list, and, more generally, 2^n subsequences of an n-element list.
With that insight, let's make a quick calculation. How many subsequences are we going to have to check if we have, say, a 300-element list? We'd have to potentially check 2^300 of them - a number that's bigger than the number of atoms in the observable universe! Oops. That's going to take way more time than we have.
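To make the brute-force idea concrete, here is a small Python sketch of it (my own illustration, not code from any answer): it walks through every one of the 2^n subsets of positions, keeps only the increasing ones, and remembers the longest.

from itertools import combinations

def longest_increasing_subsequence_bruteforce(nums):
    best = []
    # every one of the 2^n subsets of positions is a candidate subsequence
    for length in range(len(nums) + 1):
        for idxs in combinations(range(len(nums)), length):
            candidate = [nums[i] for i in idxs]
            if all(a < b for a, b in zip(candidate, candidate[1:])):
                if len(candidate) > len(best):
                    best = candidate
    return best

print(longest_increasing_subsequence_bruteforce([2, 7, 1, 8, 3, 9, 4, 5, 0, 6]))
# prints one longest increasing subsequence of length 5

This is fine for ten elements, but the outer loops enumerate 2^n subsets, which is exactly the explosion described above.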
On the other hand, there's a beautiful algorithm called patience sorting that will always find the longest increasing subsequence, and which does so quite easily. You can do this by playing a little game. You'll place each of the items in the list into one of several piles. To determine which pile to pick, look for the leftmost pile whose top number is bigger than the number in question, and place the number on top of that pile. If you can't find such a pile, put the number into its own new pile on the far right.
For example, given this original list:
2 7 1 8 3 9 4 5 0 6
after playing the game we'd end up with these piles:
0
1 3 4 5
2 7 8 9 6
And here's an amazing fact: the number of piles used equals the length of the longest increasing subsequence. Moreover, you can find that subsequence in the following way: every time you place a number on top of a pile, make a note of the number that was on top of the pile to its left. If we do this with the above numbers, here's what we'll find; the parenthesized number tells us what was on top of the pile to the left at the time we put the number down:
0
1 3 (1) 4 (3) 5 (4)
2 7 (2) 8 (7) 9 (8) 6 (5)
To find the subsequence we want, start with the top of the leftmost pile. Write that number down, then find the number in parentheses and repeat this process. Doing that here gives us 6, 5, 4, 3, 1, which, if reversed, is 1, 3, 4, 5, 6, the longest increasing subsequence! (Wow!) You can prove that this works in all cases, and it's a really beautiful exercise to actually go and do this.
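Here is a short Python sketch of the pile game just described (my own code, using a simple left-to-right scan over the piles): each card stored on a pile also records the card that was on top of the pile to its left, and the subsequence is recovered by walking those back-pointers from the rightmost pile.

def longest_increasing_subsequence(nums):
    piles = []   # piles[i] is a list of (value, back-pointer) pairs, bottom to top
    for x in nums:
        # leftmost pile whose top value is bigger than x, else start a new pile
        target = next((i for i, p in enumerate(piles) if p[-1][0] > x), len(piles))
        # remember what was on top of the pile to the left when x was placed
        back = piles[target - 1][-1] if target > 0 else None
        if target == len(piles):
            piles.append([])
        piles[target].append((x, back))
    # walk back-pointers from the top of the rightmost pile
    result, node = [], piles[-1][-1] if piles else None
    while node is not None:
        result.append(node[0])
        node = node[1]
    return result[::-1]

print(longest_increasing_subsequence([2, 7, 1, 8, 3, 9, 4, 5, 0, 6]))
# prints [1, 3, 4, 5, 6]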
So now the question is how fast this process is. Placing the first number down takes one unit of work - just place it in its own pile. Placing the second number down takes at most two units of work - we have to look at the top of the first pile, and optionally put the number into a second pile. Placing the third number takes at most three units of work - we have to look at up to two piles, and possibly place the number into its own third pile. More generally, placing the kth number down takes at most k units of work. Overall, this means that the work we're doing is roughly
1 + 2 + 3 + ... + n
if we have n total elements. That's a famous sum called Gauss's sum, and it simplifies to approximately n^2 / 2. So we can say that we'll need to do roughly n^2 / 2 units of work to solve things this way.
How does that compare to our 2^n solution from before? Well, unlike 2^n, which grows stupidly fast as a function of n, n^2 / 2 is actually a pretty nice function. If we plug in n = 300, which previously in 2^n land gave back "the number of atoms in the universe," we get back a more modest 45,000. If that's a number of nanoseconds, that's nothing; that'll take a computer under a second to do. In fact, you have to plug in a pretty big value of n before you're looking at something that's going to take the computer quite a while to complete.
The function n^2 / 2 has an interesting property compared with 2^n. With 2^n, if you increase n by one, as we saw earlier, 2^n will double. On the other hand, if you take n^2 / 2 and increase n by one, then n^2 / 2 will get bigger, but not by much (specifically, by n + 1/2).
By contrast, if you take 2^n and then double n, then 2^n squares in size - yikes! But if you take n^2 / 2 and double n, then n^2 / 2 goes up only by a factor of four - not that bad, actually, given that we doubled our input size!
This gets at the heart of what the quote you mentioned is talking about. Algorithms with runtimes like 2^n, n!, etc. scale terribly as a function of n, since increasing n by one causes a huge jump in the runtime. On the other hand, functions like n, n log n, n^2, etc. have the property that if you double n, the runtime only goes up by some constant factor. They therefore scale much more nicely as a function of the input size.

Related

Algorithm to sort X number in batches of Y

Could somebody direct me to an algorithm that I can use to sort X numbers in batches of Y? Meaning that you can only compare Y numbers at the same time, but you can do that multiple times.
E.g.
There are X=100 statements and a respondent must sort them according to how relevant they are to her in such a way that she will only see and sort Y=9 statements at a time, but will do that multiple times.
From your hypothetical, I believe you are willing to do a lot of work to figure out the next comparison set (because that is done by computer), and would like as few comparisons as possible (because that is a human).
So the idea of the approach that I will outline is a greedy heuristic that attempts to maximize how much information each comparison gives us. It is complicated, but should do very well.
The first thing we need is how to measure information. Here is the mathematical theory. Suppose that we have a biased coin with a probability p of coming up heads. The information in it coming up heads is -log2(p). The information in it coming up tails is -log2(1-p). (Note that the log of a number between 0 and 1 is negative, and the negative of a negative is positive. So information is always positive.) If you use an efficient encoding and have many flips to encode, the sum of the information of a sequence of flips is how many bits you need to send to communicate it.
The expected information of a single flip is therefore - p log2(p) - (1-p) log2(1-p).
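As a tiny illustration of that formula (my own snippet, nothing specific to the sorting problem), here it is in Python:

import math

def flip_entropy(p):
    # expected information, in bits, of one flip of a coin with heads-probability p
    if p <= 0.0 or p >= 1.0:
        return 0.0   # a certain outcome carries no information
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

print(flip_entropy(0.5))   # 1.0 bit: a fair flip is maximally informative
print(flip_entropy(0.9))   # about 0.47 bits: a predictable flip tells us less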
So the idea is to pick a comparison set such that sorting it gives us as much information as possible about the final sort that we don't already have. But how do we estimate how much is not known about a particular pair? For example if I sort 2 groups of 5, the top of one group is unlikely to be less than the bottom of the other. It could be, but there is much less information in that comparison than comparing the two middle elements with each other. How do we capture that?
My idea for how to do that is to run a series of topological sorts to get a sense of it. In particular, do the first topological sort randomly. For the second topological sort, try to make it as different as possible by, at every choice, choosing the element which had the largest rank the last time. For the third topological sort, choose the element whose sum of ranks in the previous sorts was as large as possible. And so on. Do this 20 times or so.
Now for any pair of elements we can just look at how often they disagree in our sorts to estimate a probability that one is really larger than the other. We can turn that into an expected entropy with the formula from before.
So we start the comparison set with the element with the largest difference between its maximum and minimum rank in the sorts.
The second element is the one that has the highest entropy with the first, breaking ties by the largest difference between its minimum and maximum rank in the sorts.
The third is the one whose sum of entropies with the first two is the most, again breaking ties in the same way.
The exact logic that the algorithm will follow is, of course, randomized. In fact you're doing O(k^2 n) work per comparison set that you find. But it will on average finish with surprisingly few comparison sets.
I don't have a proof, but I suspect that you will on average only need the theoretically optimal O(log(n!) / log(k!)) = O(n log(n) / (k log(k))) comparisons. For k=2 my further suspicion is that it will give a solution that is on average more efficient than merge sort.
At each round, you'll sort floor(X/Y) batches of Y elements and one batch of X mod Y elements.
Suppose for simplicity that the input is given as an array A[1...X].
At the first round, the batches will be A[1...Y], A[Y+1...2Y], ..., A[(floor(X/Y)-1)Y+1...floor(X/Y)Y], A[floor(X/Y)Y+1...X].
For the second round, shift these ranges right by Y/2 places (you can use wrap-around if you like, though for simplicity I will simply assume the first Y/2 elements will be left alone in even-numbered iterations). So, the ranges could be A[Y/2+1...3Y/2], A[3Y/2+1...5Y/2], etc. The next round will repeat the ranges of the first, and the round after that will repeat the ranges of the second, and so on. How many iterations are needed in the worst case to guarantee a fully sorted list? Well, in the worst case, the maximum element must migrate from the beginning to the end, and since it takes two iterations for an element to migrate one full odd-iteration section (see the example below), it stands to reason that it takes 2*ceiling(X/Y) iterations in total for an element at the front to get to the end.
Example:
X=11
Y=3
A = [7, 2, 4, 5, 2, 1, 6, 2, 3, 5, 6]
[7,2,4] [5,2,1] [6,2,3] [5,6] => [2,4,7] [1,2,5] [2,3,6] [5,6]
2 [4,7,1] [2,5,2] [3,6,5] [6] => 2 [1,4,7] [2,2,5] [3,5,6] [6]
[2,1,4] [7,2,2] [5,3,5] [6,6] => [1,2,4] [2,2,7] [3,5,5] [6,6]
1 [2,4,2] [2,7,3] [5,5,6] [6] => 1 [2,2,4] [2,3,7] [5,5,6] [6]
[1,2,2] [4,2,3] [7,5,5] [6,6] => [1,2,2] [2,3,4] [5,5,7] [6,6]
1 [2,2,2] [3,4,5] [5,7,6] [6] => 1 [2,2,2] [3,4,5] [5,6,7] [6]
[1,2,2] [2,3,4] [5,5,6] [7,6] => [1,2,2] [2,3,4] [5,5,6] [6,7]
1 [2,2,2] [3,4,5] [5,6,6] [7] => no change, termination condition
This might seem a little silly, but if you have an efficient way to sort small groups and a lot of parallelism available this could be pretty nifty.
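Here is a rough Python sketch of those alternating rounds (my own code, with the details simplified): each round sorts consecutive batches of at most Y elements, every other round shifts the batch boundaries by Y//2, and we stop once a full pass with each offset changes nothing. It assumes Y is at least 2.

def batch_sort(a, y):
    offsets = [0, y // 2]   # alternate between aligned and shifted batch boundaries
    stable, r = 0, 0
    while stable < len(offsets):
        start = offsets[r % len(offsets)]
        changed = False
        # leave a[0:start] alone this round, then sort each batch of at most y elements
        for i in range(start, len(a), y):
            chunk = sorted(a[i:i + y])
            if chunk != a[i:i + y]:
                a[i:i + y] = chunk
                changed = True
        stable = 0 if changed else stable + 1
        r += 1
    return a

print(batch_sort([7, 2, 4, 5, 2, 1, 6, 2, 3, 5, 6], 3))
# [1, 2, 2, 2, 3, 4, 5, 5, 6, 6, 7]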

Maximum number divisible by another one created from sum of previous numbers

We are given the numbers 1, 2, ..., b-1. Each of these numbers can be used a[1], a[2], ..., a[b-1] times respectively.
From them, the biggest possible number has to be formed by concatenation, while the sum of its digits (the partial numbers used) has to be divisible by b. The "digits" of this number can be in any base bigger than 2.
So basically the biggest base-b number has to be created by concatenating the numbers 1...b-1, up to a[1]...a[b-1] times each, while the sum of all used partial numbers/digits has to be divisible by b.
For example:
There are 5 copies of 1, 10 copies of 2, 4 copies of 3 and 2 copies of 4. As stated above, they have to be concatenated into the biggest number whose digit sum is divisible by b (here 5).
They would give:
44333322222222221111.
Concatenating from the biggest to the lowest gives the needed number, as the sum of the used digits (45) is divisible by 5.
For a single 1 (with b = 2) it is:
0
because 1 is not divisible by 2, so no digits should be used at all.
What are the algorithms or similar problems to this? How can it be approached?
At first, we can simply arrange the numbers from the biggest to the lowest, so the concatenated number will naturally be the biggest. Then, we have to remove the smallest possible set of these numbers so that the remaining sum is divisible by b. When different combinations of the same size could be removed, the one whose biggest number is smallest should be chosen (breaking further ties by the second biggest, and so on).
For example:
If combinations of (3, 3, 2) and (4, 2, 2) can be taken, then the first one should be cut out from the number.
This really looks like the change-making problem, but with a finite number of coins of each denomination, and at the end we need the actual combination, not only the minimal number of coins. In addition, with a dynamic programming approach, 2 different combinations of the same length (like 332 and 422 above) can't easily be chosen in the middle of the DP array, as in later steps they can lead to quite different values.
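For small instances, the approach described above can be brute-forced directly; here is a hypothetical Python sketch (exponential in the worst case, so only a way to sanity-check ideas, not the efficient solution being asked for). Joining the digits as strings is only unambiguous for b <= 10.

from itertools import combinations

def biggest_number(counts, b):
    # counts[d] = how many copies of digit d (1 <= d <= b-1) are available
    digits = sorted((d for d in counts for _ in range(counts[d])), reverse=True)
    total = sum(digits)
    for k in range(len(digits) + 1):   # remove as few digits as possible
        valid = [c for c in combinations(digits, k) if (total - sum(c)) % b == 0]
        if valid:
            # tie-break from above: among equally small removal sets, drop the one
            # whose biggest digit is smallest, then compare the second biggest, etc.
            for d in min(valid, key=lambda c: sorted(c, reverse=True)):
                digits.remove(d)
            return "".join(map(str, digits)) if digits else "0"
    return "0"

print(biggest_number({1: 5, 2: 10, 3: 4, 4: 2}, 5))   # 44333322222222221111
print(biggest_number({1: 1}, 2))                      # 0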

Interview Algorithm: find two largest elements in array of size n

This is an interview question I saw online and I am not sure I have the correct idea for it.
The problem is here:
Design an algorithm to find the two largest elements in a sequence of n numbers.
The number of comparisons needs to be n + O(log n).
I think I might choose quicksort and stop when the two largest elements are found?
But I'm not 100% sure about it. If anyone has an idea about it, please share.
Recursively split the array, find the largest element in each half, then find the largest element that the overall largest element was ever compared against. The first part requires n - 1 compares, the last part requires O(log n). Here is an example:
1 2 5 4 9 7 8 7 5 4 1 0 1 4 2 3
2 5 9 8 5 1 4 3
5 9 5 4
9 5
9
At each step I'm merging adjacent numbers and taking the larger of the two. It takes n - 1 compares (15 here) to get down to the largest number, 9. Then, if we look at every number that 9 was compared against (5, 5, 8, 7), we see that the largest one was 8, which must be the second largest in the array. Since there are O(log n) levels in this, it will take O(log n) compares to do this.
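Here is a Python sketch of that tournament (my own code, assuming the array has at least two elements): each surviving value carries the list of values it has beaten, so once the champion is known, the runner-up is simply the largest value on the champion's list.

def two_largest(a):
    # each entry is (value, list of values this entry has beaten so far)
    players = [(x, []) for x in a]
    while len(players) > 1:
        nxt = []
        if len(players) % 2 == 1:   # an odd element gets a bye this round
            nxt.append(players[-1])
        for i in range(0, len(players) - 1, 2):
            (v1, beaten1), (v2, beaten2) = players[i], players[i + 1]
            if v1 >= v2:
                nxt.append((v1, beaten1 + [v2]))
            else:
                nxt.append((v2, beaten2 + [v1]))
        players = nxt
    champion, beaten = players[0]
    return champion, max(beaten)   # the runner-up lost only to the champion

print(two_largest([1, 2, 5, 4, 9, 7, 8, 7, 5, 4, 1, 0, 1, 4, 2, 3]))   # (9, 8)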
For only the 2 largest elements, a normal selection pass may be good enough; it's basically O(2*n).
For the more general "select the k largest elements from an array of size n" question, quicksort is a good line of thinking, but you don't have to really sort the whole array.
Try this:
Pick a pivot and split the array into N[m] and N[n-m].
If k < m, forget the N[n-m] part and do step 1 again in N[m].
If k > m, forget the N[m] part and do step 1 again in N[n-m]; this time, you try to find the remaining k-m elements in N[n-m].
If k = m, you've got it.
It's basically like locating k in an array of size N: you need about log(N) iterations, and on average you handle N/2^i elements in iteration i, so it's roughly an N + log(N) algorithm (which meets your requirement) and has very good practical performance (faster than a plain quicksort, since it avoids any full sorting; the output is just not ordered).
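A rough Python sketch of that partition-and-recurse idea (my own code, using a random pivot; it returns the k largest values in no particular order):

import random

def k_largest(a, k):
    if k <= 0:
        return []
    if k >= len(a):
        return list(a)
    pivot = random.choice(a)
    bigger = [x for x in a if x > pivot]   # the N[m] part
    equal = [x for x in a if x == pivot]
    if k <= len(bigger):
        return k_largest(bigger, k)        # forget the rest
    if k <= len(bigger) + len(equal):
        return bigger + equal[:k - len(bigger)]
    smaller = [x for x in a if x < pivot]
    # keep all of bigger and equal, then look for the remaining elements in smaller
    return bigger + equal + k_largest(smaller, k - len(bigger) - len(equal))

print(sorted(k_largest([1, 2, 5, 4, 9, 7, 8, 7, 5, 4, 1, 0, 1, 4, 2, 3], 2)))   # [8, 9]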

Divide list into two equal parts algorithm

Related questions:
Algorithm to Divide a list of numbers into 2 equal sum lists
divide list in two parts that their sum closest to each other
Let's assume I have a list which contains exactly 2k elements. Now, I want to split it into two parts, where each part has a length of k, while trying to make the sums of the parts as equal as possible.
Quick example:
[3, 4, 4, 1, 2, 1] might be split into [1, 4, 3] and [1, 2, 4], and the sum difference will be 1
Now - if the parts can have arbitrary lengths, this is a variation of the partition problem, and we know that it's weakly NP-complete.
But does the restriction about splitting the list into equal-size parts (let's say the list always has 2k elements and each part gets exactly k) make this problem solvable in polynomial time? Any proof of that (or a proof sketch showing that it's still NP-complete)?
It is still NP-complete. Proof by reduction of PP (your unrestricted variation of the partition problem) to QPP (the equal-size-parts partition problem):
Take an arbitrary list of length k and add k additional elements, all with value zero.
We need to find the best-performing partition in terms of PP. Let us find one using an algorithm for QPP, and then forget about all of the additional k zero elements. Shifting zeroes around cannot affect this or any competing partition's sums, so this is still one of the best-performing unrestricted partitions of the arbitrary list of length k.
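In code, the reduction is just padding; a tiny sketch, where solve_qpp stands in for any hypothetical solver of the equal-size version, and the original values are assumed to be nonzero so the padding can be stripped afterwards:

def solve_pp(values, solve_qpp):
    # pad with zeros so an equal-size partition of the padded list exists for
    # every possible partition of the original values
    padded = list(values) + [0] * len(values)
    left, right = solve_qpp(padded)   # two halves of equal length
    # the zeros carry no weight, so dropping them leaves an optimal unrestricted partition
    return [x for x in left if x != 0], [x for x in right if x != 0]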

Sorting an array in minimum cost

I have an array A[] with 4 elements, A = {8, 1, 2, 4}. How do I sort it with minimum cost? The criteria are defined as follows:
a. It is possible to swap any 2 elements.
b. The cost of a swap is the sum of the two element values. For example, if I swap 8 and 4 the cost is 12, and the resulting array looks like A = {4, 1, 2, 8}, which is still unsorted, so more swaps are needed.
c. We need to find a way to sort the array with minimum cost.
From my observation, a greedy approach (in each step, place some element into its sorted position at minimum cost) will not work, so a DP solution seems needed.
Can anyone help?
Swap 2 and 1, and then 1 and 4, and then 1 and 8? Or is it a general question?
For a more general approach you could try the following steps:
1. Swap every pair of 2 elements (with the highest sum) if they are perfect swaps (i.e. swapping them will put them both in their right spots).
2. Use the lowest element as a pivot for swaps (by swapping it with the element whose spot it occupies), until it reaches its final spot.
3. Then, you have two possibilities:
3.1. Repeat step 2: use the lowest element not in its final spot as a pivot until it reaches its final spot, then go back to step 3.
3.2. Or swap the lowest element not in its final spot (l2) with the lowest element overall (l1), and repeat step 2 until l1 reaches the final spot of l2. Then:
- either swap l1 and l2 again and go to step 3.1,
- or go to step 3.2 again, with the next lowest element not in its final spot being used.
4. When all this is done, if some opposite swaps end up being performed one right after another (for example, this could happen when going from step 2 to step 3.2), remove them.
There are still some things to watch out for, but this is already a pretty good approximation. Steps one and two should always work, though; step three is the one to improve in some borderline cases.
Example of the algorithm being used:
With {8 4 5 3 2 7}: (target array {2 3 4 5 7 8})
Step 2: 2 <> 7, 2 <> 8
Array is now {2, 4, 5, 3, 7, 8}
Choice between 3.1 and 3.2:
3.1 gives 3 <> 5, 3 <> 4
3.2 gives 2 <> 3, 2 <> 5, 2 <> 4, 2 <> 3
3 <> 5, 3 <> 4 is the better result
Conclusion: 2 <> 7, 2 <> 8, 3 <> 5, 3 <> 4 is the best answer.
With {1 8 9 7 6} (target array {1 6 7 8 9})
You're beginning at step three already
Choice between 3.1 and 3.2:
3.1 gives 6 <> 9, 6 <> 7, 6 <> 8 (total: 42)
3.2 gives 1 <> 6, 1 <> 9, 1 <> 7, 1 <> 8, 1 <> 6 (total: 41)
So 1 <> 6, 1 <> 9, 1 <> 7, 1 <> 8, 1 <> 6 is the best result
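Those two options line up with the cycle-decomposition view of this problem. Here is a Python sketch (my own code, assuming distinct elements): for each cycle of misplaced values, take the cheaper of rotating the cycle with its own minimum (option 3.1) or temporarily borrowing the global minimum (option 3.2). It reproduces the totals from the two worked examples above (34 and 41).

def min_swap_cost(a):
    target = sorted(a)
    pos = {v: i for i, v in enumerate(target)}   # final position of each value
    seen = [False] * len(a)
    g = target[0]                                # globally smallest value
    total = 0
    for i in range(len(a)):
        if seen[i] or pos[a[i]] == i:
            seen[i] = True
            continue
        cycle, j = [], i                         # collect one cycle of misplaced values
        while not seen[j]:
            seen[j] = True
            cycle.append(a[j])
            j = pos[a[j]]
        s, m, length = sum(cycle), min(cycle), len(cycle)
        rotate_with_own_min = s + (length - 2) * m        # option 3.1
        borrow_global_min = s + m + (length + 1) * g      # option 3.2
        total += min(rotate_with_own_min, borrow_global_min)
    return total

print(min_swap_cost([8, 4, 5, 3, 2, 7]))   # 34
print(min_swap_cost([1, 8, 9, 7, 6]))      # 41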
This smells like homework. What you need to do is sort the array while minimizing the cost of the swaps, so it's an optimization problem rather than just a sorting problem.
A greedy algorithm would work despite this: all you do is fix the solution by swapping the cheapest element first (figuring out where in the list it belongs). This is, however, not necessarily optimal.
As long as you never swap the same element twice a greedy algorithm should be optimal though.
Anyway, back to the dynamic programming stuff: just build your solution tree using recursion and then prune the tree as you find more optimal solutions. This is pretty basic recursion.
If you use a more complicated sorting algorithm you'll have a lot more difficulty puzzling it together with the dynamic programming, so I suggest you start out with a simple, slow O(n^2) sort and build on top of that.
Rather than provide you with a solution, I'd like to explain how dynamic programming works in my own words.
The first thing you need to do, is to figure out an algorithm that will explore all possible solutions (this can be a really stupid brute force algorithm).
You then implement this using recursion, because dynamic programming is based around being able to figure out overlapping subproblems quickly; ergo, recursion.
At each recursive call you look at where you are in your solution and check whether you've computed this part of the solution tree before. If you have, you can test whether the current solution is more optimal; if it is, you continue, otherwise you're done with this branch of the problem.
When you arrive at the final solution you will have solved the problem.
Think of each recursive call as a snapshot of a partial solution. It's your job to figure how each recursive call fits together in the final optimal solution.
This is what I recommend you do:
Write a recursive sort algorithm
Add a parameter to your recursive function that maintains the cost of this execution path; as you sort the array, add to this cost. For every possible swap at any given point, do another recursive call (this will branch your solution tree)
Whenever you realize that the cost of the solution you are currently exploring exceeds what you already have somewhere else, abort (just return).
To be able to answer the last question you need to maintain a shared memory area into which you can index depending on where you are in your recursive algorithm. If there's a precomputed cost there, you just return that value and don't continue processing (this is the pruning, which makes it fast).
Using this method you can even base your solution on a brute-force permutation algorithm. It will probably be very slow or memory-intensive, because it is naive about when to branch or prune, but you don't really need a specific sort algorithm to make this work; using one is just more efficient.
Good luck!
If you do a high-low selection sort, you can guarantee that the Nth greatest element isn't swapped more than N times. This is a simple algorithm with a pretty easy and enticing guarantee... Maybe check this on a few examples and see how it could be tweaked. Note: this may not lead to an optimal answer...
To find the absolute minimum cost you'll have to try all ways to swap and then find the cheapest one.
def recsort(l, swaps, cost):
    global best_cost, best_swaps
    if l == sorted(l):                      # fully sorted: record it if it's cheaper
        if cost < best_cost:
            best_cost, best_swaps = cost, swaps
        return
    if len(swaps) > len(l):                 # small depth cutoff for this demo (or some other criteria)
        return
    for p1 in range(len(l)):
        for p2 in range(p1 + 1, len(l)):
            new_cost = cost + l[p1] + l[p2]
            if new_cost < best_cost:        # prune paths that are already too expensive
                l2 = l[:]
                l2[p1], l2[p2] = l2[p2], l2[p1]
                recsort(l2, swaps + [(p1, p2)], new_cost)

best_cost, best_swaps = float('inf'), None
recsort([8, 1, 2, 4], [], 0)
print(best_cost, best_swaps)                # prints 17 and one cheapest sequence of swaps (as index pairs)
An approach that will be pretty good is to recursively place the biggest value at the top.
