An algorithm to find a permutation in a sequence

I'm asking about a simple problem: how do I find the one (and only one) swap of two elements that was applied to a sequence of numbers (with repetitions), with the lowest possible complexity?
Suppose we have the sequence: 1 1 2 3 4. Then we swap 2 and 3, so we have: 1 1 3 2 4. How can I find out that 2 and 3 have been swapped? The worst solution would be to generate all possibilities and compare each one with the original sequence, but I need something fast...
Thank you for your answer.

The problem is that, without additional constraints (such as resolving mismatches in the order they are found), there can be multiple valid answers.
What I'd do first is test that both sequences still contain the same values. If so, step through them one by one until a mismatch is found, then find the first occurrence of the mismatched value further along and mark that pair as the swap. Then continue searching for the next modification, and so on...
If you just want to know how much the sequence has changed, I'd look at the Levenshtein (edit distance) algorithm. The basis of that algorithm may even give you what you need for your own custom algorithm, or inspire other approaches.
This is fast but it won't tell you which items have changed.
The only complete solution I know of would be to record each change as it happens, so you can just look at the history of changes to know the exact answer.

function findswaps:
    old <- store old sequence in a linked list
    new <- store new sequence in a linked list
    compare elements one by one:
        if same
            next iteration until exhausted
        else
            remember old item
            iterate through the remaining `new` elements one by one:
                if old item is found
                    report its position in the new list
                else
                    error
This is my humble attempt; please correct me if it's wrong, so I can help better. I'm guessing the data is unordered, so it can't be any faster than linear?
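For reference, here is a rough runnable version of the pseudocode above (a Python sketch with illustrative names; it assumes both sequences hold the same multiset of values, and it pairs each mismatch with its partner so the second half of a swap isn't reported twice):

def find_swaps(old, new):
    # sanity check: both sequences must still contain the same values
    if sorted(old) != sorted(new):
        raise ValueError("sequences do not contain the same values")
    resolved = set()  # positions in `new` already matched to an earlier mismatch
    swaps = []
    for i, (a, b) in enumerate(zip(old, new)):
        if a == b or i in resolved:
            continue
        # mismatch: look for the partner position further along
        for j in range(i + 1, len(new)):
            if j not in resolved and new[j] == a and old[j] == b:
                swaps.append((i, j))
                resolved.add(j)
                break
        else:
            raise ValueError("change was not a set of disjoint swaps")
    return swaps

print(find_swaps([1, 1, 2, 3, 4], [1, 1, 3, 2, 4]))  # [(2, 3)]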

If there is only 1 swap between the original and derived arrays, you could try something like this in O(n) for array length n:
int count = 0;
int[] mismatches = new int[2];
foreach index in array {
    if original[index] != derived[index] {
        if count == 2 {
            fail  // more than two positions differ
        }
        mismatches[count++] = index;
    }
}
if count == 2 and
   original[mismatches[0]] == derived[mismatches[1]] and
   original[mismatches[1]] == derived[mismatches[0]] {
    succeed
}
fail
Note that this reports a fail when nothing was swapped between the arrays.
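The same check as runnable Python (a sketch; following the note above, zero mismatches is reported as "no swap" rather than as a failure):

def find_single_swap(original, derived):
    # collect the positions where the two arrays differ
    mismatches = [i for i, (a, b) in enumerate(zip(original, derived)) if a != b]
    if not mismatches:
        return None  # nothing was swapped
    if len(mismatches) == 2:
        i, j = mismatches
        if original[i] == derived[j] and original[j] == derived[i]:
            return (i, j)  # exactly one swap, at these two positions
    raise ValueError("arrays differ by something other than a single swap")

print(find_single_swap([1, 1, 2, 3, 4], [1, 1, 3, 2, 4]))  # (2, 3)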

Paired comparisons algorithm design

I have 15 groupings of 5 words. Let's say the first grouping is the 'happy' grouping and is the following: ["happy", "smile", "fun", "joy", "laugh"], and the second is the 'sad' grouping and is the following: ["sad", "frown", "bummer", "cry", "rain-cloud"]. All the other groupings are similar, with five words in an array.
I am designing a paired comparisons react app, and I need one word from each grouping to be randomly chosen and paired with a randomly chosen word from each other grouping. From the examples above, a pair for grouping 1 and 2 might be ["smile", "cry"]. There should be 105 total pairs (exactly one for each combination of two groupings).
I was thinking of using a loop and going through the groupings one by one, then for each of the remaining groupings, taking a random word from the grouping I'm looking at and one from the other and creating a pair.
I feel like this isn't very elegant or efficient, and I'm curious how I might design a better algorithm. I think recursion might be helpful, but I can't think of how I could use it in this scenario.
Any thoughts or ideas? Thanks!
I had a thought about using recursion, but unfortunately I couldn't think of any algorithm.
What I tried here is: while more than one set remains in the list, iterate through every item in the first set and pair it with a random item from any other set, then remove that chosen item. After the whole first set has been iterated, simply remove the first set from the list of all sets. This way there is no need for an extra pass to check for duplicates.
(I used javascript for implementing, and made the sets simpler)
let myArray = [["happy", "smile", "fun"],
               ["sad", "frown", "bummer"],
               [1, 2, 3],
               [4, 5, 6]];
let pairs = [];
while (myArray.length != 1) {
    for (var i = 0; i < myArray[0].length; i++) {
        // pick a random set other than the first one
        var next = myArray[Math.floor(Math.random() * (myArray.length - 1)) + 1];
        var nextindex = Math.floor(Math.random() * next.length);
        pairs.push([myArray[0][i], next[nextindex]]);
        next.splice(nextindex, 1);
    }
    myArray.splice(0, 1);
}
console.log(pairs);
Although, as you said, this code is not elegant (nor the most efficient)... There should be a better solution, so it is worth it to keep thinking about a better algorithm!
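For comparison, a minimal sketch of the direct approach (Python here, names illustrative): walk over every combination of two groupings and draw one random word from each. For 15 groupings this yields C(15,2) = 105 pairs, and since nothing is removed, words can be reused across pairs (the question doesn't seem to forbid that):

import random
from itertools import combinations

groupings = [
    ["happy", "smile", "fun", "joy", "laugh"],
    ["sad", "frown", "bummer", "cry", "rain-cloud"],
    # ... the other 13 groupings go here
]

# exactly one pair for each combination of two groupings
pairs = [[random.choice(g1), random.choice(g2)]
         for g1, g2 in combinations(groupings, 2)]
print(pairs)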

Does Enumerable.OrderBy order the complete list if only the first element is requested

If a sequence is ordered, and you only ask for the first element of the ordered sequence, is OrderBy smart enough not to order the complete sequence?
IEnumerable<MyClass> myItems = ...
MyClass minItem = myItems.OrderBy(item => item.Id).FirstOrDefault();
So when the first element is asked for, only the item with the minimum value is produced as the first element of the sequence. When the next element is asked for, the item with the minimum value of the remaining sequence is produced, etc.
Or is the complete sequence ordered even if you only want the first element?
Addition
Apparently the question is unclear. Let's give an example.
The Sort function could do the following:
Create a linked list containing all the elements
as long as the linked list contains elements:
    take the first element of the linked list as the smallest
    scan the rest of the linked list once to find any smaller elements
    remove the smallest element from the linked list
    yield return the smallest element
Code:
public static IEnumerable<TSource> Sort<TSource, TKey>(
    this IEnumerable<TSource> source, Func<TSource, TKey> keySelector)
{
    if (source == null) throw new ArgumentNullException(nameof(source));
    if (keySelector == null) throw new ArgumentNullException(nameof(keySelector));
    IComparer<TKey> comparer = Comparer<TKey>.Default;
    // create a linked list with keyValuePairs of TKey and TSource
    var keyValuePairs = source
        .Select(item => new KeyValuePair<TKey, TSource>(keySelector(item), item));
    var itemsToSort = new LinkedList<KeyValuePair<TKey, TSource>>(keyValuePairs);
    while (itemsToSort.Any())
    {   // there are still items in the list
        // select the first element as the smallest one
        var smallest = itemsToSort.First();
        // scan the rest of the linked list to find a smaller one
        foreach (var element in itemsToSort.Skip(1))
        {
            if (comparer.Compare(element.Key, smallest.Key) < 0)
            {   // element.Key is smaller than smallest.Key: element becomes the smallest
                smallest = element;
            }
        }
        // remove the smallest element from the linked list and return its value:
        itemsToSort.Remove(smallest);
        yield return smallest.Value;
    }
}
Suppose I have the following sequence of integers:
{4, 8, 3, 1, 7}
At the first iteration the iterator internally creates a linked list of key/value pairs and assigns the first element of the list as smallest
Linked List = 4 - 8 - 3 - 1 - 7
Smallest = 4
The linked list is scanned once to see if there is a smaller one.
Linked List = 4 - 8 - 3 - 1 - 7
Smallest = 1
The smallest is removed from the linked list and yielded:
Linked List = 4 - 8 - 3 - 7
return 1
In the second iteration the same is done with the shorter linked list
Linked List = 4 - 8 - 3 - 7
smallest = 4
Again the linked list is scanned once to find the smallest one
Linked List = 4 - 8 - 3 - 7
smallest = 3
Remove the smallest from the linked list and return the smallest
Linked List = 4 - 8 - 7
return 3
It's easy to see that if you only ask for the first element of the sorted list, the list is scanned only once. With every iteration the list that has to be scanned becomes smaller.
Back to my original question:
I understand that if you only want the first element, you have to scan the list at least once. If you don't ask for a second element, the rest of the list is not ordered.
Is the sort used by Enumerable.OrderBy so smart that it doesn't sort the remainder of the list if you only ask for the first ordered item?
It depends on the version.
In the framework versions (4.0, 4.5, etc.) then:
The entire source is loaded into a buffer.
A map of keys is produced (so that key production happens only once per element).
A map of integers is produced and then sorted according to those keys (using a map means swap operations have cheaper copies if the source elements are large value types).
The FirstOrDefault attempts to obtain the first item according to this mapping by using MoveNext and Current on the resulting object. Either it finds one, or (if the buffer is empty because the source was empty) returns default(TSource).
In .NET Core, then:
The FirstOrDefault operation on the IOrderedEnumerable scans through the source. If there are no elements it returns default(TSource); otherwise it holds onto the first element found and the key produced for it, compares that key against each subsequent element's key, and replaces the held-onto value and key whenever a later element compares as lower.
The held-onto value will be the same element as the Framework version would have found by first sorting, so it is returned.
This means that in the Framework version myItems.OrderBy(item => item.Id).FirstOrDefault() is O(n log n) time complexity (worst case O(n²)) and O(n) space complexity, but in the .NET Core version it is O(n) time complexity and O(1) space complexity.
The main difference here is that in .NET Core FirstOrDefault() has knowledge of how the results of OrderBy (and ThenBy etc.) differ from other possible sources and has code to handle it*, while in the framework version it does not.
Both scan the entire sequence (you can't know the last element in myItems isn't the first by the sorting rules until you've examined it) but they differ in the mechanism and efficiency after that point.
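In other words, the .NET Core path behaves like a single-pass minimum scan. A sketch of that logic (Python for brevity, purely illustrative, not the actual implementation):

def first_or_default_by_key(source, key, default=None):
    it = iter(source)
    try:
        best = next(it)
    except StopIteration:
        return default  # empty source
    best_key = key(best)
    for item in it:
        k = key(item)
        if k < best_key:  # strict comparison keeps the earliest minimum
            best, best_key = item, k
    return best

One pass over the source, O(1) extra space, and the key selector runs exactly once per element.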
When the next element is asked for, the item with the minimum value of the remaining sequence is produced, etc.
If the next element is asked for, then not only would the sorting be done again, it would have to be done again, as the contents of myItems could have changed in the meantime.
If you were trying to obtain it with myItems.OrderBy(item => item.Id).ElementAtOrDefault(i) then the framework version would find the element by first doing a sort (O(n log n)) and then a scan (O(n) relative to i), while the .NET Core version would find it with a quickselect (O(n), though the constant factors are bigger than for FirstOrDefault(), and it can degrade to O(n²) in the same cases that sorting can, so it's a slower O(n); it's smart enough to turn ElementAtOrDefault(0) into FirstOrDefault() for that reason). Both versions also use O(n) space (unless .NET Core can turn it into FirstOrDefault()).
If you were finding the first few values with myItems.OrderBy(item => item.Id).Take(k) then the Framework version would again do a sort (O(n log n)) and then put a limit on the subsequent enumeration of the results so that it stopped returning elements after k were obtained. The .NET Core version would do a partial sort, not bothering to sort elements it realised were always going to come after the portion taken, which is O(n + k log k) time complexity. .NET Core would also do a single partial sort for combinations of Take and Skip, reducing the amount of sorting necessary further.
In theory the sorting of just OrderBy(cmp) could be lazier as per:
Load the elements into the buffer.
Do a sort, probably favouring the "left" partition as partitioning is happening.
yield elements as soon as it is found that they are the next to enumerate.
This would improve time-to-first-result (low time-to-first-result is often a nice feature of other Linq operations), and would particularly benefit consumers who may stop working on the result part way through. However, it adds extra constant costs to the sorting operation, and it either prevents picking the next partition to work on in such a way as to reduce the amount of recursion (an important optimisation of partition-based sorting) or else would often not actually yield anything until near the end anyway (making the exercise rather pointless). It would also make the sorting much more complicated. When I experimented with this approach, the pay-offs in some cases didn't justify the costs in others, especially as it seemed likely to hurt more people than it benefited.
*Strictly speaking, the results of several linq operations have knowledge of how to find the first element in a way that is optimised for each of them, and FirstOrDefault() knows how to detect any of those cases.
If a sequence is ordered ...
That is fine but not a property of IEnumerable so OrderBy can never 'know' this directly.
There are precedents for this, though: Count() will check at runtime whether its IEnumerable<> source is actually pointing at a List and then take a shortcut to its Count property.
Likewise, OrderBy could look to see if it's called on a SortedList or something but there is no clear marker interface and those collections are used far too infrequently to make this worth the effort.
There are other ways to optimize this: .OrderBy(...).First() could conceivably map to a .Min(), but again, nobody had bothered until now as far as I know. See Jon's answer.
No, it's not. How can it know that the list is in order without iterating through the entire list?
Here's a simple test:
void Main()
{
    Console.WriteLine(OrderedEnumerable().OrderBy(x => x).First());
}

public IEnumerable<int> OrderedEnumerable()
{
    Console.WriteLine(1);
    yield return 1;
    Console.WriteLine(2);
    yield return 2;
    Console.WriteLine(3);
    yield return 3;
}
This, as expected, outputs:
1
2
3
1
If you look at the reference source and follow the classes, you will see that all keys are computed and then a quicksort algorithm sorts an index table according to those keys.
So the sequence is read once, all the keys are computed, an index table is sorted according to the keys, and then you get your first output.

Divide a group of people into two disjoint subgroups (of arbitrary size) and find some values

As we know from programming, sometimes a slight change in a problem can significantly alter the form of its solution.
Firstly, I want to create a simple algorithm for solving the following problem and classify it using big-theta notation:
Divide a group of people into two disjoint subgroups (of arbitrary size) such that the difference in the total ages of the members of the two subgroups is as large as possible.
Now I need to change the problem so that the desired difference is as small as possible, and classify my approach to that problem.
Well, first of all I need to create the initial algorithm. For that, should I do some kind of sorting in order to separate the groups, and how am I supposed to continue?
EDIT: for the first problem, we have ruled out the possibility of a set being empty. So all we have to do is a linear search to find the minimum age and put it in set B. Set A then holds all the other ages. That gives the maximum possible difference between the total ages of the two sets.
The way you described the first problem, it is trivial in the sense that it only requires you to find the minimum element (in case the subgroups must contain at least 1 member); otherwise it is already solved.
The second problem can be solved recursively; the pseudocode would be:
// compute the sum of all elements of the array and store it in sum
min = sum;
globalVec = baseVec;

fun generate(baseVec, generatedVec, position, total)
    if (abs(sum - 2*total) < min) {  // check if this distribution is better
        min = abs(sum - 2*total);
        globalVec = generatedVec;
    }
    if (position >= baseVec.length()) return;
    else {
        // either consider the element at position to be in the first group:
        generate(baseVec, generatedVec.pushback(baseVec[position]), position + 1, total + baseVec[position]);
        // or consider the element at position to be in the second group:
        generate(baseVec, generatedVec, position + 1, total);
    }
Now just start the function with generate(baseVec, "", 0, 0), where "" stands for an empty vector.
The algorithm can be improved drastically by applying it to a sorted array and adding a test condition to stop branching early, but the idea stays the same.
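A runnable version of the recursion (a Python sketch; it records the best split found so far and explores all 2^n assignments, so without the pruning mentioned above it is exponential):

def min_difference(ages):
    total = sum(ages)
    best = {"diff": total, "group": []}

    def generate(position, group, group_sum):
        diff = abs(total - 2 * group_sum)
        if diff < best["diff"]:  # check if this distribution is better
            best["diff"] = diff
            best["group"] = list(group)
        if position >= len(ages):
            return
        # either put ages[position] in the first group...
        group.append(ages[position])
        generate(position + 1, group, group_sum + ages[position])
        group.pop()
        # ...or in the second group
        generate(position + 1, group, group_sum)

    generate(0, [], 0)
    return best["diff"], best["group"]

print(min_difference([30, 25, 8, 42, 17]))  # (4, [30, 25, 8])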

How to "sort" elements of 2 possible values in place in linear time? [duplicate]

This question already has answers here:
Stable separation for two classes of elements in an array
(3 answers)
Closed 9 years ago.
Suppose I have a function f and an array of elements.
The function returns A or B for any element; you could visualize the elements this way: ABBAABABAA.
I need to sort the elements according to the function, so the result is: AAAAAABBBB
The number of A values doesn't have to equal the number of B values. The total number of elements can be arbitrary (not fixed). Note that you don't sort chars, you sort objects that have a single char representation.
A few more things:
the sort should take linear time - O(n),
it should be performed in place,
it should be a stable sort.
Any ideas?
Note: if the above is not possible, do you have ideas for algorithms sacrificing one of the above requirements?
If it has to be linear and in-place, you could do a semi-stable version. By semi-stable I mean that A or B could be stable, but not both. Similar to Dukeling's answer, but you move both iterators from the same side:
a = first A
b = first B
loop while a next A exists
    if b < a
        swap a,b elements
        b = next B
        a = next A
    else
        a = next A
With the sample string ABBAABABAA, you get:
ABBAABABAA
AABBABABAA
AAABBBABAA
AAAABBBBAA
AAAAABBBBA
AAAAAABBBB
On each turn, if you make a swap you move both; if not, you just move a. This will keep A stable, but B will lose its ordering. To keep B stable instead, start from the end and work your way left.
It may be possible to do it with full stability, but I don't see how.
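Concretely, the same procedure in Python (a sketch; is_a stands in for the classifying function f, and the two indices play the roles of the a and b iterators):

def semi_stable_partition(items, is_a):
    # next_b marks the boundary of the finished A-block;
    # when it lags behind i, it points at the leftmost B
    next_b = 0
    for i in range(len(items)):
        if is_a(items[i]):
            if next_b < i:
                # an A found after some B's: swap it with the leftmost B
                items[next_b], items[i] = items[i], items[next_b]
            next_b += 1

s = list("ABBAABABAA")
semi_stable_partition(s, lambda c: c == "A")
print("".join(s))  # AAAAAABBBB

Running it on the sample performs exactly the swaps traced above: the A's keep their relative order, the B's don't.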
A stable sort might not be possible with the other given constraints, so here's an unstable sort that's similar to the partition step of quick-sort.
1. Have 2 iterators, one starting on the left, one starting on the right.
2. While there's a B at the right iterator, decrement the iterator.
3. While there's an A at the left iterator, increment the iterator.
4. If the iterators haven't crossed each other, swap their elements and repeat from 2.
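A sketch of those four steps (Python again, is_a standing in for f):

def unstable_partition(items, is_a):
    left, right = 0, len(items) - 1
    while True:
        # step 2: skip B's that are already in place at the right end
        while right >= 0 and not is_a(items[right]):
            right -= 1
        # step 3: skip A's that are already in place at the left end
        while left < len(items) and is_a(items[left]):
            left += 1
        # step 4: stop once the iterators cross, otherwise swap and repeat
        if left >= right:
            return
        items[left], items[right] = items[right], items[left]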
Let's say:
Object_Array[1...N]
Type_A objects are A1, A2, ..., Ai
Type_B objects are B1, B2, ..., Bj
i + j = N

FOR i = 1 to N
    if Object_Array[i] is of Type_A
        obj_A_count = obj_A_count + 1
    else
        obj_B_count = obj_B_count + 1
LOOP

Fill the resulting array with obj_A_count objects of Type_A followed by obj_B_count objects of Type_B.
The following should work in linear time for a doubly-linked list. Because up to N insertions/deletions are involved, it may take quadratic time for arrays, though.
Find the location where the first B should be after "sorting". This can be done in linear time by counting As.
Start with 3 iterators: iterA starts from the beginning of the container, and iterB starts from the above location where As and Bs should meet, and iterMiddle starts one element prior to iterB.
With iterA skip over As, find the 1st B, and move the object from iterA to iterB->previous position. Now iterA points to the next element after where the moved element used to be, and the moved element is now just before iterB.
Continue with step 3 until you reach iterMiddle. After that all elements between first() and iterB-1 are As.
Now set iterA to iterB-1.
Skip over Bs with iterB. When an A is found, move it to just after iterA and increment iterA.
Continue step 6 until iterB reaches end().
This would work as a stable sort for any container. The algorithm includes O(N) insertions/deletions, which is linear time for containers with O(1) insertion/deletion but, alas, O(N²) for arrays. Applicability in your case depends on whether the container is an array rather than a list.
If your data structure is a linked list instead of an array, you should be able to meet all three of your constraints. You just skim through the list; accumulating and moving the "B"s amounts to trivial pointer changes. Pseudo code below:
sort(list) {
    node = list.head, blast = null, bhead = null
    while(node != null) {
        nextnode = node.next
        if(node.val == "a") {
            if(blast != null) {
                // unlink the 'a' (its predecessor is blast, the last 'b')
                blast.next = node.next
                if(node.next != null) node.next.prev = blast
                // splice the 'a' in just before bhead, the front of the 'b' block
                node.prev = bhead.prev
                if(bhead.prev != null) bhead.prev.next = node
                else list.head = node
                node.next = bhead, bhead.prev = node
            }
        }
        else if(node.val == "b") {
            if(blast == null)
                bhead = blast = node
            else // accumulate the "b"s..
                blast = node
        }
        node = nextnode
    }
}
So, you could do this in an array, but the memcopies that emulate the list splices would make it quite slow for large arrays.
Firstly, assuming the array of A's and B's is either generated or read in, I wonder why not avoid this question entirely by simply applying f as the list is being accumulated into memory, building two lists that are subsequently concatenated.
Otherwise, we can posit an alternative solution in O(n) time and O(1) space that may be sufficient depending on Sir Bohumil's ultimate needs:
Traverse the list and sort each segment of 1,000,000 elements in-place using the permutation cycles of the segment (once this step is done, the list could technically be sorted in-place by recursively swapping the inner-blocks, e.g., ABB AAB -> AAABBB, but that may be too time-consuming without extra space). Traverse the list again and use the same constant space to store, in two interval trees, the pointers to each block of A's and B's. For example, segments of 4,
ABBAABABAA => AABB AABB AA + pointers to blocks of A's and B's
Sequential access to A's or B's would be immediately available, and random access would come from using the interval tree to locate a specific A or B. One option could be to have the intervals number the A's and B's; e.g., to find the 4th A, look for the interval containing 4.
For sorting, an array of 1,000,000 four-byte elements (3.8MB) would suffice to store the indexes, using one bit in each element for recording visited indexes during the swaps; and two temporary variables the size of the largest A or B. For a list of one billion elements, the maximum combined interval trees would number 4000 intervals. Using 128 bits per interval, we can easily store numbered intervals for the A's and B's, and we can use the unused bits as pointers to the block index (10 bits) and offset in the case of B (20 bits). 4000*16 bytes = 62.5KB. We can store an additional array with only the B blocks' offsets in 4KB. Total space under 5MB for a list of one billion elements. (Space is in fact dependent on n but because it is extremely small in relation to n, for all practical purposes, we may consider it O(1).)
Time for sorting the million-element segments would be one pass to count and index (here we can also accumulate the intervals and B offsets) and one pass to sort. Constructing the interval tree is O(n log n), but n here is only 4000 (0.00005 of the one-billion list count). Total time O(2n) = O(n).
This should be possible with a bit of dynamic programming.
It works a bit like counting sort, but with a key difference. Make arrays of size n for both A and B: count_a[n] and count_b[n]. Fill these arrays with how many A's or B's there have been before index i.
After just one loop, we can use these arrays to look up the correct index for any element in O(1). Like this:
int final_index(char id, int pos) {
    if (id == 'A')
        return count_a[pos];
    else
        return count_a[n-1] + count_b[pos];
}
Finally, to meet the total O(n) requirement, the swapping needs to be done in a smart order. One simple option is to have a recursive swapping procedure that doesn't actually perform any swap until both elements would land in their correct final positions. EDIT: This is actually not necessary; even naive swapping needs only O(n) swaps. The recursive strategy merely gives you the absolute minimum number of required swaps.
Note that in the general case this would be a very bad sorting algorithm, since it has a memory requirement of O(n * element value range).
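A complete sketch of the idea (Python; it fills an output array rather than swapping in place, and it uses the total count of A's instead of count_a[n-1], which sidesteps an off-by-one when the last element is an A):

def stable_two_way_sort(items, f):
    n = len(items)
    count_a = [0] * n  # count_a[i]: number of A's before index i
    count_b = [0] * n  # count_b[i]: number of B's before index i
    a = b = 0
    for i, x in enumerate(items):
        count_a[i], count_b[i] = a, b
        if f(x) == 'A':
            a += 1
        else:
            b += 1
    total_a = a

    def final_index(i):
        if f(items[i]) == 'A':
            return count_a[i]          # A's pack to the front, in order
        return total_a + count_b[i]    # B's follow, in order

    result = [None] * n
    for i in range(n):
        result[final_index(i)] = items[i]
    items[:] = result

s = list("ABBAABABAA")
stable_two_way_sort(s, lambda c: c)
print("".join(s))  # AAAAAABBBB, stably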

How to sort an integer array into lexicographical order using only adjacent swaps, for a given max number of swaps (m)

I was asked this one during a phone interview; of course, the other questions were fine, but that one I'm still not sure of the best answer to.
At first I thought it smelled of a radix sort, but since you can only use adjacent swaps, of course it isn't.
So I think it's more of a bubble-sort-type algorithm, which is what I tried to write, but the "max number of swaps" bit makes it very tricky (along with the lexicographical part, but I guess that's just a comparison side issue).
I guess my algorithm would be something like this (of course now I have better ideas than during the interview!):
int index = 0;
while(swapsLeft > 0 && index < array.length)
{
    int smallestIndex = index;
    // look ahead at most swapsLeft positions, and never past the end of the array
    for(int i = index; i <= index + swapsLeft && i < array.length; i++)
    {
        // of course < is not correct in general, we need to compare as string or "by radix" or something
        if(array[i] < array[smallestIndex])
            smallestIndex = i;
    }
    // if we found a smaller item within swap range, bubble it to the front with adjacent swaps
    for(int i = smallestIndex; i > index; i--)
    {
        int temp = array[i];
        array[i] = array[i - 1];
        array[i - 1] = temp;
        swapsLeft--;
    }
    // continue with the next item in the array
    index++; // edit: could probably optimize to index = index + 1 + (smallestIndex - index)?
}
Does that seem about right?
Who has a better solution? I'm curious as to an efficient/proper way to do this.
I am actually working on writing this exact code for my Algorithms class in Java for my Software Engineering Bachelor's degree. So I will help you solve this by explaining the problem and the steps to solve it. You are going to need at least 2 methods to do this more than once.
First you take your first value; just to make this easy, let's keep it small and simple.
1 2 3 4
You should be using an array for sorting. To find the next number lexicographically, you start out on the far right, move to the left, and stop when you find your first decrease. You then have to replace that smaller value with the next larger value to its right. So for our example we would be replacing 3 with 4. So our next number is:
1 2 4 3
That was pretty simple, right? Don't worry, it gets harder. Let's now try to get the next number using:
1 4 3 2
Ok, so we start out on the far right and move left till our first decrease. 2 is smaller than 3, 3 is smaller than 4, but 4 is larger than 1. So we have our first decrease at 1. Now we need to move back to the right till we hit the last number that is larger than 1. 4 is larger than 1, 3 is larger than 1, and 2 is larger than 1. With 2 being the last such number, 2 needs to replace 1. But what about the rest of the numbers? Well, they are already in order; they are just backwards of what we need. So we flip their order and come up with:
2 1 3 4
So you need a method that performs that step, and another method that calls it in a loop until you have generated the correct number of permutations.
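The step described above is the classic next-permutation algorithm. A compact sketch in Python (it returns False once the array is already the last, i.e. fully descending, permutation):

def next_permutation(a):
    # 1. scan from the right for the first decrease
    i = len(a) - 2
    while i >= 0 and a[i] >= a[i + 1]:
        i -= 1
    if i < 0:
        return False  # already the last permutation
    # 2. find the rightmost element larger than a[i] and swap the two
    j = len(a) - 1
    while a[j] <= a[i]:
        j -= 1
    a[i], a[j] = a[j], a[i]
    # 3. the suffix is descending; reverse it so it becomes ascending
    a[i + 1:] = reversed(a[i + 1:])
    return True

nums = [1, 4, 3, 2]
next_permutation(nums)
print(nums)  # [2, 1, 3, 4], as in the worked example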
