Finding a single number in a list [duplicate] - algorithm

This question already has answers here:
How to find the only number in an array that doesn't occur twice [duplicate]
(5 answers)
Closed 7 years ago.
What would be the best algorithm for finding a number that occurs only once in a list in which all other numbers occur exactly twice?
So, in a list of integers (let's take it as an array), each integer appears exactly twice except one. What is the best algorithm to find that one?

The fastest (O(n)) and most memory efficient (O(1)) way is with the XOR operation.
In C:
int arr[] = {3, 2, 5, 2, 1, 5, 3};
int num = 0, i;
for (i = 0; i < 7; i++)
    num ^= arr[i];
printf("%i\n", num);
This prints "1", which is the only one that occurs once.
This works because the first time you hit a number it marks the num variable with itself, and the second time it unmarks num with itself (more or less). The only one that remains unmarked is your non-duplicate.

By the way, you can expand on this idea to very quickly find two unique numbers among a list of duplicates.
Let's call the unique numbers a and b. First take the XOR of everything, as Kyle suggested. What we get is a^b. We know a^b != 0, since a != b. Choose any 1 bit of a^b, and use that as a mask -- in more detail: choose x as a power of 2 so that x & (a^b) is nonzero.
Now split the list into two sublists -- one sublist contains all numbers y with y&x == 0, and the rest go in the other sublist. By the way we chose x, we know that a and b are in different buckets. We also know that each pair of duplicates is still in the same bucket. So we can now apply ye olde "XOR-em-all" trick to each bucket independently, and discover what a and b are completely.
Bam.
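For illustration, here is a minimal Python sketch of that two-unique-values trick (the function name is just for this example):

def find_two_uniques(nums):
    # XOR of everything leaves a ^ b, since every duplicated pair cancels out.
    xor_all = 0
    for v in nums:
        xor_all ^= v
    # Pick any set bit of a ^ b; the lowest set bit is the easiest to isolate.
    mask = xor_all & -xor_all
    a = b = 0
    for v in nums:
        if v & mask:
            a ^= v    # bucket where that bit is set
        else:
            b ^= v    # bucket where that bit is clear
    return a, b

For example, find_two_uniques([3, 2, 5, 2, 3, 7]) returns 7 and 5 (in some order).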

O(N) time, O(N) memory
HT= Hash Table
HT.clear()
go over the list in order
for each item you see:
    if (HT.Contains(item)) -> HT.Remove(item)
    else -> HT.Add(item)
at the end, the item in the HT is the item you are looking for.
Note (credit @Jared Updike): This system will find all odd instances of items.
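In Python, a set can play the role of the hash table; a minimal sketch (the function name is illustrative):

def find_odd_occurrences(items):
    seen = set()
    for item in items:
        if item in seen:
            seen.remove(item)    # second (or fourth, ...) sighting: drop it
        else:
            seen.add(item)       # first (or third, ...) sighting: keep it
    return seen                  # everything left occurs an odd number of times

find_odd_occurrences([3, 2, 5, 2, 1, 5, 3]) returns {1}.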
comment: I don't see how people can vote up solutions that give you N log N performance. In which universe is that "better"?
I am even more shocked that you marked an N log N solution as the accepted answer...
I do agree, however, that if memory is required to be constant, then N log N would be (so far) the best solution.

Kyle's solution would obviously not catch situations where the data set does not follow the rules. If all numbers came in pairs, the algorithm would give a result of zero, exactly the same value as if zero were the only value occurring once.
If there were multiple values occurring once, or triples, the result would be erroneous as well.
Testing the data set might well end up being a more costly algorithm, in either memory or time.
Csmba's solution does detect some erroneous data (no value, or more than one value, occurring once), but not other cases (quadruples). Regarding his solution, depending on the implementation of HT, either memory and/or time is more than O(n).
If we cannot be sure about the correctness of the input set, sorting and counting, or using a hashtable to count occurrences with the integer itself as the hash key, would both be feasible.

I would say that using a sorting algorithm and then going through the sorted list to find the number is a good way to do it.
And now the problem is finding "the best" sorting algorithm. There are a lot of sorting algorithms, each of them with its strong and weak points, so this is quite a complicated question. The Wikipedia entry seems like a nice source of info on that.
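As a rough Python sketch of that sort-then-scan idea (assuming exactly one value is unpaired):

def find_single_by_sorting(nums):
    s = sorted(nums)             # O(n log n)
    i = 0
    while i + 1 < len(s):
        if s[i] != s[i + 1]:
            return s[i]          # its partner is missing, so this is the singleton
        i += 2                   # skip the matched pair
    return s[-1]                 # the singleton is the last element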

Implementation in Ruby:
a = [1,2,3,4,123,1,2,.........]
t = a.length - 1
for i in 0..t
  s = a.index(a[i]) + 1
  b = a[s..t]
  w = b.include?(a[i])
  if w == false
    puts a[i]
  end
end

You need to specify what you mean by "best" - to some, speed is all that matters and would qualify an answer as "best" - for others, they might forgive a few hundred milliseconds if the solution was more readable.
"Best" is subjective unless you are more specific.
That said:
Iterate through the numbers; for each number, search the list for that number, and when you reach the number whose search returns only one result, you are done.

Seems like the best you could do is to iterate through the list and, for every item, add it to a list of "seen" items, or else remove it from the "seen" list if it's already there; at the end, your list of "seen" items will include the singular element. This is O(n) with regard to time and O(n) with regard to space (in the worst case; it will be much better if the list is sorted).
The fact that they're integers doesn't really factor in, since there's nothing special you can do with adding them up... is there?
Question
I don't understand why the selected answer is "best" by any standard. O(N*lgN) > O(N), and it changes the list (or else creates a copy of it, which is still more expensive in space and time). Am I missing something?

Depends on how large/small/diverse the numbers are though. A radix sort might be applicable which would reduce the sorting time of the O(N log N) solution by a large degree.

The sorting method and the XOR method have the same time complexity. The XOR method is only O(n) if you assume that bitwise XOR of two strings is a constant time operation. This is equivalent to saying that the size of the integers in the array is bounded by a constant. In that case you can use Radix sort to sort the array in O(n).
If the numbers are not bounded, then bitwise XOR takes time O(k) where k is the length of the bit string, and the XOR method takes O(nk). Now again Radix sort will sort the array in time O(nk).

You could simply put the elements in the set into a hash until you find a collision. In ruby, this is a one-liner.
def find_dupe(array)
  h = {}
  array.detect { |e| h[e] || (h[e] = true; false) }
end
So, find_dupe([1,2,3,4,5,1]) would return 1.
This is actually a common "trick" interview question though. It is normally about a list of consecutive integers with one duplicate. In this case the interviewer is often looking for you to use the Gaussian sum of n-integers trick e.g. n*(n+1)/2 subtracted from the actual sum. The textbook answer is something like this.
def find_dupe_for_consecutive_integers(array)
  n = array.size - 1   # subtract one from array.size because of the dupe
  array.sum - n*(n+1)/2
end

Related

searching through a vast collection of potential solutions

I have a quite difficult problem (perhaps even an NP-hard problem ^^) of looking for a solution in a massive collection of results. Perhaps there is an algorithm for it.
The exercise below is artificial, but it is a perfect example to illustrate my issue.
There is a big array with integers. Let's say it has 100,000 elements.
int numbers[] = {-123,32,4,-234564,23,5,....}
I want to check in a relatively quick way whether the sum of any 2 numbers from this array is equal to 0. In other words, if the array has "-123" I want to find out whether there is also a "123" in it.
The easiest solution would be brute force - check everything against everything. That gives 100,000 x 100,000, a big number ;-) Obviously the brute force method can be optimised: order the numbers and check negatives against positives only. My question is - is there something better than optimised brute force to find a solution?
First, sort the array by magnitude of the value.
Then, if the data contains a pair which satisfies the conditions you're after, it contains such a pair adjacent in the array. So just sweep through looking for adjacent pairs whose sum is 0.
Overall time complexity is O(n log n) for the sort, could be O(n) if you use "cheating" sorts not based solely on comparisons. Clearly it can't be done in less than linear time, because in the worst case you can't do it without looking at all the elements. I think n log n is probably optimal in the decision tree model of computing, but only because it "feels a bit like" the element uniqueness problem.
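A rough Python sketch of that sweep (the function name is only for illustration):

def has_zero_sum_pair(nums):
    # Sort by absolute value; any pair x, -x (or two zeros) ends up adjacent.
    by_magnitude = sorted(nums, key=abs)
    return any(a + b == 0 for a, b in zip(by_magnitude, by_magnitude[1:]))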
Alternative approach:
Add the elements one at a time to a hash-based or tree-based container. Before adding each element, check whether its negative is present. If so, stop.
This is likely to be faster in the case where there are lots of suitable pairs, because you save the cost of sorting the whole data. That said, you could write a modified sort that exits early by checking for adjacent pairs as soon as any subset of the data is in its final order, but that's effort.
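Roughly, in Python (the helper name is an assumption, and a "pair" means two separate elements):

def has_zero_sum_pair_early_exit(nums):
    seen = set()
    for x in nums:
        if -x in seen:    # some earlier y satisfies x + y == 0
            return True
        seen.add(x)
    return False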
Brute force would be an O(n^2) solution. You can certainly do better.
Off the top of my head, first sort it. Heap sort will have a complexity of O(nlogn).
Now, for the first element, say a, you know you need to find an element b, such that a+b = 0. This can be found using binary search (since your array is now sorted). Binary search has a complexity of O(logn).
This gives you an overall solution of O(nlogn) complexity.
The example you provided can be brute-force solved in O(n^2) time.
You can start by ordering the numbers (O(n·log n)) from smallest to biggest. If you place one pointer at the beginning (the "most negative number") and another at the end (the "most positive"), you can check whether there is such a pair of numbers in an additional O(n) steps by following this procedure:
If the numbers at both pointers have the same absolute value, you have the solution
If not, move the pointer of the number with the bigger absolute value towards "zero" (that is, increase it if it is the pointer on the negative side, decrease it if it is the positive-side one)
Repeat until you find a solution, or the pointers cross.
Total complexity is O(n·logn)+O(n) = O(n·logn).
Sort your array (ascending) using Quicksort. After that has happened, use two indexes; let's call them negative (starting at the small end, where the negatives are) and positive (starting at the large end).
negative <- 0
positive <- size - 1
while ((array[negative] < 0) and (array[positive] > 0) and (negative < positive)) do
    delta <- array[negative] + array[positive]
    if (delta = 0) then
        return true
    else if (delta < 0) then
        negative <- negative + 1
    else
        positive <- positive - 1
    end if
end while
return (array[positive] * array[negative] = 0)
You didn't say what the algorithm should do if 0 is part of the array; I've supposed that in this case true should be returned.

Where can I use a technique from Majority Vote algorithm

As seen in the answers to Linear time majority algorithm?, it is possible to compute the majority of an array of elements in linear time and log(n) space.
It seems that everyone who sees this algorithm believes it is a cool technique. But does the idea generalize to new algorithms?
It seems the hidden power of this algorithm is in keeping a counter that plays a complex role -- such as "(count of majority element so far) - (count of second majority so far)". Are there other algorithms based on the same idea?
Umh, let's first start by understanding why the algorithm works, in order to "isolate" the ideas there.
The point of the algorithm is that if you have a majority element, then you can match each occurrence of it with "another" element, and then you still have some occurrences to spare.
So, we just keep a counter which counts the number of "spare" occurrences of our current guess.
If it reaches 0, then the guess isn't a majority element for the subsequence running from when we "elected" the current element as the guessed majority element up to the current position.
Also, since our guessed element matches every other element occurrence in the considered subsequence, there is no majority element in that subsequence.
Now, since:
our algorithm gives a correct answer only if there is a majority element, and
if there is a majority element, then it will still be one if we ignore the "current" subsequence whenever the counter goes to zero,
it is easy to see by contradiction that, if a majority element exists, then there is a suffix of the whole sequence in which the counter never gets to zero.
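For reference, a minimal Python sketch of the pass being described (the standard majority-vote form; the verification pass is omitted):

def majority_candidate(seq):
    count = 0
    candidate = None
    for x in seq:
        if count == 0:
            candidate = x    # "elect" a new guess
            count = 1
        elif x == candidate:
            count += 1       # one more "spare" occurrence
        else:
            count -= 1       # matched against some other element
    return candidate         # only a candidate; verify with a second pass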
Now: what's the idea that can be exploited in new, O(1) size O(n) time algorithms?
To me, you can apply this technique whenever you have to compute a property P on a sequence of elements which:
can be extended from seq[n, m] to seq[n, m+1] in O(1) time if Q(seq[n, m+1]) doesn't hold
P(seq[n, m]) can be computed in O(1) time and space from P(seq[n, j]) and P(seq[j, m]) if Q(seq[n, j]) holds
In our case, P is the "spare" occurrences of our "elected" major element and Q is "P is zero".
If you see things in that way, longest common subsequence exploits the same idea (dunno about its "coolness factor" ;))
Jayadev Misra and David Gries have a paper called Finding Repeated Elements (ACM page) which generalizes it to an element repeating more than n/k times (k = 2 is the majority problem).
Of course, this is probably very similar to the original problem, and you are probably looking for 'different' algorithms.
Here is an example which is possibly different.
Give an algorithm which will detect if a string of parentheses ( '(' and ')') is well formed.
I believe the standard solution is to maintain a counter.
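Roughly, in Python (assuming the string contains only '(' and ')'):

def well_formed(s):
    depth = 0
    for ch in s:
        if ch == '(':
            depth += 1
        else:
            depth -= 1
            if depth < 0:    # a ')' with no matching '(' before it
                return False
    return depth == 0        # every '(' must have been closed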
Side note:
As to answers which claim cannot be constant space etc, ask them for the model of computation. In the WORD RAM model for instance, you assume the integers/array indices etc are O(1).
A lot of folks incorrectly mix and match models. For instance, they will happily have the input array of n integers be O(n), have an array index be O(1) space, but a counter they consider Omega(log n) etc, which is nonsense. If they want to consider the size in bits, then the input itself is Omega(n log n) etc.
For people who want to understand what this algorithm does and why it works: look at my detailed answer.
Here I will describe a natural extension of this algorithm (or a generalization). In the standard majority voting algorithm you have to find an element which appears at least n/2 times in the stream, where n is the size of the stream. You can do this in O(n) time (with a tiny constant) and in O(log(n)) space (worst case, and even that is highly unlikely).
The generalized algorithm allows you to find the k most frequent items, where each appears at least n/(k+1) times in the original stream. Note that if k = 1, you end up with your original problem.
The solution to this problem is really similar to the original one, except that instead of one counter and one possible element, you maintain k counters and k possible elements. The logic goes in a similar way: you iterate through the array and, if the element is among the possible elements, you increase its counter; if one of the counters is zero, you substitute that counter's element with the new element; otherwise you just decrease the values.
As with the original majority voting algorithm, you need a guarantee that these k majority elements actually exist; otherwise you have to do another pass over the array to verify that the possible elements you found are correct. Here is my Python attempt (I have not done thorough testing).
from collections import defaultdict
def majority_element_general(arr, k=1):
    counter, i = defaultdict(int), 0
    while len(counter) < k and i < len(arr):
        counter[arr[i]] += 1
        i += 1
    for x in arr[i:]:
        if x in counter:
            counter[x] += 1
        elif len(counter) < k:
            counter[x] = 1
        else:
            fields_to_remove = []
            for el in counter:
                if counter[el] > 1:
                    counter[el] -= 1
                else:
                    fields_to_remove.append(el)
            for el in fields_to_remove:
                del counter[el]
    potential_elements = counter.keys()
    # might want to check that they are really frequent.
    return potential_elements

Finding the repeated element

In an array with integers between 1 and 1,000,000, or say some much larger value, a single value occurs twice. How do you determine which one?
I think we can use a bitmap to mark the elements, and then traverse all over again to find the repeated element. But I think that is a process with high complexity. Is there any better way?
This sounds like homework or an interview question ... so rather than giving away the answer, here's a hint.
What calculations can you do on a range of integers whose answer you can determine ahead of time?
Once you realize the answer to this, you should be able to figure it out .... if you still can't figure it out ... (and it's not homework) I'll post the solution :)
EDIT: Ok. So here's the elegant solution ... if the list contains ALL of the integers within the range.
We know that all of the values between 1 and N must exist in the list. Using Gauss's formula we can quickly compute the expected sum of a range of integers:
Sum(1..N) = 1/2 * (1 + N) * Count(1..N).
Since we know the expected sum, all we have to do is loop through all the values and sum them. The difference between this sum and the expected sum is the duplicate value.
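A quick Python rendering of that idea (the function name is illustrative; it assumes the list holds all of 1..N plus one extra copy of a single value):

def find_duplicate_full_range(nums, n):
    expected = n * (n + 1) // 2    # Gauss's formula for 1 + 2 + ... + n
    return sum(nums) - expected    # the surplus is exactly the duplicated value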
EDIT: As others have commented, the question doesn't state that the range contains all of the integers ... in this case, you have to decide whether you want to optimize for memory or time.
If you want to perform the operation using O(1) storage, you can perform an in-place sort of the list. As you're sorting you have to check adjacent elements. Once you see a duplicate, you know you can stop. Optimal sorting is an O(n log n) operation on average - which establishes an upper bound for finding the duplicate in this manner.
If you want to optimize for speed, you can use an additional O(n) storage. Using a HashSet (or similar structure), insert values from your list until you determine you are inserting a duplicate into the HashSet. Inserting n items into a HashSet is an O(n) operation on average, which establishes that as an upper bound for this method.
You may try to use bits as a hash map:
1 at position k means that number k occurred before
0 at position k means that number k did not occur before
pseudocode:
0. assume that your array is A
1. initialize a bit array (there is a nice class in C# for this) of length 1,000,000, filled with zeros
2. for each num in A:
       if bitarray[num]
           return num
       else
           bitarray[num] = 1
       end
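Roughly the same thing in Python, using a byte per value rather than a single bit for simplicity (max_value is an assumed bound on the integers):

def first_repeated(nums, max_value=1000000):
    seen = bytearray(max_value + 1)   # one flag per possible value, all zero
    for num in nums:
        if seen[num]:
            return num                # second sighting: this is the duplicate
        seen[num] = 1
    return None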
The time complexity of the bitmap solution is O(n) and it doesn't seem like you could do better than that. However it will take up a lot of memory for a generic list of numbers. Sorting the numbers is an obvious way to detect duplicates and doesn't require extra space if you don't mind the current order changing.
Assuming the array is of length n < N (i.e. not ALL integers are present -- in this case LBushkin's trick is the answer to this homework problem), there is no way to solve this problem using less than O(n) memory using an algorithm that just takes a single pass through the array. This is by reduction to the set disjointness problem.
Suppose I made the problem easier, and I promised you that the duplicate elements were in the array such that the first one was in the first n/2 elements, and the second one was in the last n/2 elements. Now we can think of playing a game in which two people each hold a string of n/2 elements, and want to know how many messages they have to send to be sure that none of their elements are the same. Since the first player could simulate the run of any algorithm that takes a pass through the array, and send the contents of its memory to the second player, a lower bound on the number of messages they need to send implies a lower bound on the memory requirements of any algorithm.
But it's easy to see in this simple game that they need to send n/2 messages to be sure that they don't hold any of the same elements, which yields the lower bound.
Edit: This generalizes to show that for algorithms that make k passes through the array and use memory m, that m*k = Omega(n). And it is easy to see that you can in fact trade off memory for time in this way.
Of course, if you are willing to use algorithms that don't simply take passes through the array, you can do better as suggested already: sort the array, then take 1 pass through. This takes time O(nlogn) and space O(1). But note curiously that this proves that any sorting algorithm that just makes passes through the array must take time Omega(n^2)! Sorting algorithms that break the n^2 bound must make random accesses.

Tricky programming problem that I'm having trouble getting my head around

First off, let me say that this is not homework (I am an A-Level student, this is nothing close to what we problem solve (this is way harder)), but more of a problem I'm trying to suss out to improve my programming logic.
I thought of a scenario where there is an array of random integers, let's for example say 10 integers. The user will input a number he wants to reach, and the algorithm will try to work out which numbers are needed to make that sum. For example, if I wanted to make the sum 44 from this array of integers:
myIntegers = array(1, 5, 9, 3, 7, 12, 36, 22, 19, 63);
The output would be:
36 + 3 + 5 = 44
Or something along those lines. I hope I make myself clear. As an added bonus I would like to make the algorithm pick as few numbers as possible to make the required sum, or give out an error if the sum cannot be made with the numbers supplied.
I thought about using recursion and iterating through the array, adding numbers over and over until the sum is met or gone past. But what I can't get my head around is what to do if the algorithm goes past the sum and needs to be selective about what numbers to pick from the array.
I'm not looking for complete code, or a complete algorithm, I just want your opinions on how I should proceed with this and perhaps share a few tips or something. I'll probably start work on this tonight. :P
As I said, not homework. Just me wanting to do something a bit more advanced.
Thanks for any help you're able to offer. :)
You are looking at the Knapsack Problem
The knapsack problem or rucksack problem is a problem in combinatorial optimization: Given a set of items, each with a weight and a value, determine the number of each item to include in a collection so that the total weight is less than a given limit and the total value is as large as possible. It derives its name from the problem faced by someone who is constrained by a fixed-size knapsack and must fill it with the most useful items.
Edit: Your special case is the Subset Sum Problem
Will subset sum do? ;]
This is the classic Knapsack problem that you would see in college level algorithms course (or at least I saw it then). Best to work this out on paper and the solution in code should be relatively easy to work out.
EDIT: One thing to consider is dynamic programming.
Your Problem is related to the subset sum problem.
You have to try all possible combinations in the worst case.
No shortcuts here I'm afraid. In addition to what other people have said, about what specific problem this is etc., here's some practical advice to offer you a starting point:
I would sort the array and given the input sum m, would find the first number in the array less than m, call it n (this is your first possible number for the sum), and start from the highest possible complement (m-n), working your way down.
If you don't find a precise match, pick the highest available, call it o, (that now is your 2nd number) and look for the 3rd one starting from (m-n-o) and work your way down again.
If you don't find a precise match, start with the next number n (index of original n at index-1) and do the same. You can keep doing this until you find a precise match for two numbers. If no match for the sum is found for two numbers, start the process again, but expand it to include a 3rd number. And so on.
That could be done recursively. At least this approach ensures that when you find a match, it will be the one with the least possible numbers in the set forming the total input sum.
Potentially though, worst case, you end up going through the whole lot.
Edit: As Venr correctly points out, my first approach was incorrect. Edited approach to reflect this.
There is a very efficient randomized algorithm for this problem. I know you already accepted an answer, but I'm happy to share anyway, I just hope people will still check this question :).
Let Used = list of numbers that you sum.
Let Unused = list of numbers that you DON'T sum.
Let tmpsum = 0.
Let S = desired sum you want to reach.
for ( each number x you read )
    toss a coin:
    if it's heads and tmpsum < S
        add x to Used and add x to tmpsum
    else
        add x to Unused
while ( tmpsum != S )
    if tmpsum < S
        MOVE one random number from Unused to Used and add it to tmpsum
    else
        MOVE one random number from Used to Unused and subtract it from tmpsum
print the Used list, containing the numbers you need to add to get S
This will be much faster than the dynamic programming solution, especially for random inputs. The only problems are that you cannot reliably detect when there is no solution (you could let the algorithm run for a few seconds and if it doesn't finish, assume there is no solution) and that you cannot be sure you will get the solution with minimum number of elements chosen. Again, you could add some logic to make the algorithm keep going and trying to find a solution with less elements until certain stop conditions are met, but this will make it slower. However, if you are only interested in a solution that works and you have a LOT of numbers and the desired sum can be VERY big, this is probably better than the DP algorithm.
Another advantage of this approach is that it will also work for negative and rational numbers with no modifications, which is not true for the DP solution, because the DP solution involves using partial sums as array indexes, and indexes can only be natural numbers. You can of course use hashtables for example, but that will make the DP solution even slower.
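A rough Python sketch of this randomized scheme, with a step budget added so it can give up (all names are illustrative, and it assumes non-negative numbers so the initial coin-toss phase makes sense):

import random

def random_subset_sum(numbers, target, max_steps=1000000):
    used, unused, total = [], [], 0
    for x in numbers:                      # coin-toss initial split
        if random.random() < 0.5 and total < target:
            used.append(x)
            total += x
        else:
            unused.append(x)
    for _ in range(max_steps):             # random local moves
        if total == target:
            return used
        if total < target and unused:
            x = unused.pop(random.randrange(len(unused)))
            used.append(x)
            total += x
        elif used:
            x = used.pop(random.randrange(len(used)))
            unused.append(x)
            total -= x
    return None                            # gave up within the step budget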
I don't know exactly what this task is called, but it seems to be a kind of http://en.wikipedia.org/wiki/Knapsack_problem.
Heh, I'll play the "incomplete specification" card (nobody said that numbers couldn't appear more than once!) and reduce this to the "making change" problem. Sort your numbers in decreasing order, find the first one less than your desired sum, then subtract that from your sum (division and remainders could speed this up). Repeat until sum = 0 or no number less than the sum is found.
For completeness, you would need to keep track of the number of addends in each sum, and of course generate the additional sequences by keeping track of the first number you use, skipping that, and repeating the process with the additional numbers. This would solve the (7 + 2 + 1) over (6 + 4) problem.
Repeating the answer of others: it is a Subset Sum problem.
It could be efficiently solved by Dynamic Programing technique.
The following has not been mentioned yet: the problem is pseudo-polynomial (i.e. NP-complete only in the weak sense).
Existence of an algorithm (based on dynamic programming) polynomial in S (where S is the sum) and n (the number of elements) proves this claim.
Regards.
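For illustration, a compact Python sketch of the pseudo-polynomial dynamic program (assuming non-negative integers and a non-negative target; it returns one combination that works, not necessarily the smallest, or None if the target cannot be made):

def subset_sum_dp(numbers, target):
    reachable = {0: ()}                    # sum -> one tuple of elements making it
    for x in numbers:
        for s, combo in list(reachable.items()):
            t = s + x
            if t <= target and t not in reachable:
                reachable[t] = combo + (x,)
    return reachable.get(target)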
Ok, I wrote a C++ program to solve the above problem. The algorithm is simple :-)
First of all, arrange whatever array you have in descending order (I have hard-coded the array in descending form, but you may apply any of the sorting algorithms).
Next I took three stacks n, pos and sum. The first one stores the number for which a possible sum combination is to be found, the second holds the index of the array from where to start the search, the third stores the elements whose addition will give you the number you enter.
The function looks for the largest number in the array which is smaller than or equal to the number entered. If it is equal, it simply pushes the number onto the sum stack. If not, then it pushes the encountered array element to the sum stack(temporarily), and finds the difference between the number to search for and number encountered, and then it performs recursion.
Let me show an example:-
to find 44 in {63,36,22,19,12,9,7,5,3,1}
first 36 will be pushed in sum(largest number less than 44)
44-36=8 will be pushed in n(next number to search for)
7 will be pushed in sum
8-7=1 will be pushed in n
1 will be pushed in sum
thus 44=36+7+1 :-)
#include <iostream>
#include <conio.h>
using namespace std;

int found = 0;

void func(int n[], int pos[], int sum[], int arr[], int &topN, int &topP, int &topS)
{
    int i = pos[topP], temp;
    while (i <= 9)
    {
        if (arr[i] <= n[topN])
        {
            pos[topP] = i;
            topS++;
            sum[topS] = arr[i];
            temp = n[topN] - arr[i];
            if (temp == 0)
            {
                found = 1;
                break;
            }
            topN++;
            n[topN] = temp;
            temp = pos[topP] + 1;
            topP++;
            pos[topP] = temp;
            break;
        }
        i++;
    }
    if (i == 10)
    {
        topP = topP - 1;
        topN = topN - 1;
        topS = topS - 1;
        if (topP != -1)
        {
            pos[topP] += 1;
            func(n, pos, sum, arr, topN, topP, topS);
        }
    }
    else if (found != 1)
        func(n, pos, sum, arr, topN, topP, topS);
}

int main()
{
    int x, n[100], pos[100], sum[100], arr[10] = {63, 36, 22, 19, 12, 9, 7, 5, 3, 1}, topN = -1, topP = -1, topS = -1;
    cout << "Enter a number: ";
    cin >> x;
    topN = topN + 1;
    n[topN] = x;
    topP = topP + 1;
    pos[topP] = 0;
    func(n, pos, sum, arr, topN, topP, topS);
    if (found == 0)
        cout << "Not found any combination";
    else {
        cout << "\n" << sum[0];
        for (int i = 1; i <= topS; i++)
            cout << " + " << sum[i];
    }
    getch();
    return 0;
}
You can copy the code and paste it in your IDE, works fine :-)

How do I write a sort worse than O(n!)

I wrote an O(n!) sort for my amusement that can't be trivially optimized to run faster without replacing it entirely. [And no, I didn't just randomize the items until they were sorted].
How might I write an even worse Big-O sort, without just adding extraneous junk that could be pulled out to reduce the time complexity?
http://en.wikipedia.org/wiki/Big_O_notation has various time complexities sorted in growing order.
Edit: I found the code, here is my O(n!) deterministic sort with amusing hack to generate list of all combinations of a list. I have a slightly longer version of get_all_combinations that returns an iterable of combinations, but unfortunately I couldn't make it a single statement. [Hopefully I haven't introduced bugs by fixing typos and removing underscores in the below code]
def mysort(somelist):
    for permutation in get_all_permutations(somelist):
        if is_sorted(permutation):
            return permutation

def is_sorted(somelist):
    # note: this could be merged into return... something like return len(foo) <= 1 or reduce(barf)
    if (len(somelist) <= 1): return True
    return 1 > reduce(lambda x, y: max(x, y), map(cmp, somelist[:-1], somelist[1:]))

def get_all_permutations(lst):
    return [[itm] + cbo for idx, itm in enumerate(lst) for cbo in get_all_permutations(lst[:idx] + lst[idx+1:])] or [lst]
There's a (proven!) worst sorting algorithm called slow sort that uses the “multiply and surrender” paradigm and runs in exponential time.
While your algorithm is slower, it doesn't progress steadily but instead performs random jumps. Additionally, slow sort's best case is still exponential while yours is constant.
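For reference, here is slow sort as it is usually presented, as a small Python sketch (in place, on a[i..j]):

def slowsort(a, i=0, j=None):
    # "Multiply and surrender": sort both halves, move the maximum of the
    # range to position j, then sort everything before it all over again.
    if j is None:
        j = len(a) - 1
    if i >= j:
        return
    m = (i + j) // 2
    slowsort(a, i, m)
    slowsort(a, m + 1, j)
    if a[m] > a[j]:
        a[m], a[j] = a[j], a[m]
    slowsort(a, i, j - 1)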
Chris and I mentioned Bozosort and Bogosort in a different question.
There's always NeverSort, which is O(∞):
def never_sort(array)
  while(true)
  end
  return quicksort(array)
end
PS: I really want to see your deterministic O(n!) sort; I can't think of any that are O(n!), but have a finite upper bound in classical computation (aka are deterministic).
PPS: If you're worried about the compiler wiping out that empty while block, you can force it not to by using a variable both in- and outside the block:
def never_sort(array)
  i = 0
  loop { i += 1 }
  puts "done with loop after #{i} iterations!"
  return quicksort(array)
end
You could always do a Random sort. It works by rearranging all the elements randomly, then checking to see if it's sorted. If not, it randomly resorts them. I don't know how it would fit in big-O notation, but it will definitely be slow!
Here is the slowest, finite sort you can get:
Link each operation of Quicksort to the Busy Beaver function.
By the time you get >4 operations, you'll need up-arrow notation :)
One way that I can think of would be to calculate the post position of each element through a function that very gradually moved the large elements to the end and the small ones to the beginning. If you used a trig-based function, you could make the elements oscillate through the list instead of going directly toward their final position. After you've processed each element in the set, then do a full traversal to determine whether the array is sorted or not.
I'm not positive that this will give you O(n!), but it should still be pretty slow.
I think that if you do lots of copying then you can get a "reasonable" brute force search (N!) to take N^2 time per case giving N!*N^2
How about looping over all arrays t of n integers (n-tuples of integers are countable, so this is doable though it's an infinite loop of course), and for each of these:
if its elements are exactly those of the input array (see algo below!) and the array is sorted (linear algo for example, but I'm sure we can do worse), then return t;
otherwise continue looping.
To check that two arrays a and b of length n contain the same elements, how about the following recursive algorithm: loop over all couples (i,j) of indices between 0 and n-1, and for each such couple
test if a[i]==b[j]:
if so, return TRUE if and only if a recursive call on the lists obtained by removing a[i] from a and b[j] from b returns TRUE;
continue looping over couples, and if all couples are done, return FALSE.
The time will depend a lot on the distribution of integers in the input array.
Seriously, though, is there a point to such a question?
Edit:
@Jon, your random sort would be in O(n!) on average (since there are n! permutations, you have probability 1/n! of finding the right one). This holds for arrays of distinct integers; it might be slightly different if some elements have multiple occurrences in the input array, and would then depend on the distribution of the elements of the input arrays (in the integers).
