Fastest sorting algorithm for a specific situation - algorithm

What is the fastest sorting algorithm for a large number (tens of thousands) of groups of 9 positive double-precision values, where each group must be sorted individually? In other words, it has to sort a small number of possibly repeated double-precision values quickly, many times in a row.
The values are in the [0..1] interval. I don't care about space complexity or stability, just about speed.

Sorting each group individually, merge sort would probably be easiest to implement with good results.
A sorting network would probably be the fastest solution:
http://en.wikipedia.org/wiki/Sorting_network
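For illustration, here is a minimal C sketch of the idea: a fixed, data-independent sequence of compare-exchanges that sorts exactly 9 doubles. It simply unrolls the insertion-sort pattern (36 compare-exchanges) rather than the best known 9-input network (25 compare-exchanges), so treat it as a starting point and substitute a hand-tuned comparator list for maximum speed:

/* Compare-exchange: afterwards *a <= *b. */
static inline void cswap(double *a, double *b)
{
    if (*b < *a) { double t = *a; *a = *b; *b = t; }
}

/* Sort exactly 9 doubles with a fixed sequence of compare-exchanges
   (insertion-network layout, 9*8/2 = 36 comparators). */
void sort9(double v[9])
{
    for (int i = 1; i < 9; i++)
        for (int j = i; j > 0; j--)
            cswap(&v[j - 1], &v[j]);
}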

Good question because this comes down to "Fastest way to sort an array of 9 elements", and most comparisons between and analysis of sorting methods are about large N. I assume the 'groups' are clearly defined and don't play a real role here.
You will probably have to benchmark a few candidates because a lot of factors (locality) come into play here.
In any case, making it parallel sounds like a good idea. Use Parallel.For() if you can use .NET 4.

I think you will need to try out a few examples to see what works best, as you have an unusual set of conditions. My guess is that the best will be one of:
sorting network
insertion sort
quick sort (one level -- insertion sort below)
merge sort
Given that double-precision numbers are relatively long, I suspect you will not do better with a radix sort, but feel free to add it in.
For what it's worth, Java uses quicksort on doubles until the number of items to be sorted drops below 7, at which point it uses insertion sort. The third option mimics that solution.
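A minimal C sketch of that third option (quicksort with an insertion-sort cutoff); the cutoff of 7 is borrowed from the Java behaviour described above and the function names are only illustrative:

static void insertion_sort(double *a, int n)
{
    for (int i = 1; i < n; i++) {
        double key = a[i];
        int j = i - 1;
        while (j >= 0 && a[j] > key) { a[j + 1] = a[j]; j--; }
        a[j + 1] = key;
    }
}

/* Sorts a[lo..hi-1]; falls back to insertion sort for small ranges. */
static void quicksort_cutoff(double *a, int lo, int hi)
{
    if (hi - lo < 7) {
        insertion_sort(a + lo, hi - lo);
        return;
    }
    double pivot = a[lo + (hi - lo) / 2];
    int i = lo, j = hi - 1;
    while (i <= j) {                        /* Hoare-style partition */
        while (a[i] < pivot) i++;
        while (a[j] > pivot) j--;
        if (i <= j) {
            double t = a[i]; a[i] = a[j]; a[j] = t;
            i++; j--;
        }
    }
    quicksort_cutoff(a, lo, j + 1);
    quicksort_cutoff(a, i, hi);
}

Note that for groups of only 9 elements this does essentially one partition before handing everything to insertion sort, which is exactly the point of the hybrid.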
Also your overall problem is embarrassingly parallel so you want to make use of parallelism when possible. The problem looks too small for a distributed solution (more time would be lost in networking than saved), but if set up right, your problem can make use of multiple cores very effectively.
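Since the groups are independent, here is a minimal sketch of the multi-core part using OpenMP in C (the contiguous array layout and the sort9 routine are assumptions for illustration; compile with -fopenmp):

void sort9(double v[9]);   /* any of the small sorts discussed above */

/* Assume groups are stored back to back: group g occupies data[9*g .. 9*g+8]. */
void sort_all_groups(double *data, long num_groups)
{
    #pragma omp parallel for schedule(static)
    for (long g = 0; g < num_groups; g++)
        sort9(&data[9 * g]);
}

With tens of thousands of groups per batch, the per-thread work is large enough that the fork/join overhead should be negligible.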

It looks like you want the most cycle-stingy way to sort 9 values. Since the number of values is limited, I would (as Kathy suggested) first do an unrolled insertion sort on the first 4 elements and the second 5 elements. Then I would merge those two groups.
Here's an unrolled insertion sort of 4 elements (swap(a, b) here stands for exchanging the two values, e.g. a macro or inline helper):
if (u[1] < u[0]) swap(u[0], u[1]);
if (u[2] < u[0]) swap(u[0], u[2]);
if (u[3] < u[0]) swap(u[0], u[3]);
if (u[2] < u[1]) swap(u[1], u[2]);
if (u[3] < u[1]) swap(u[1], u[3]);
if (u[3] < u[2]) swap(u[2], u[3]);
Here's a merge loop. The first set of 4 elements is in u, and the second set of 5 elements is in v. The result is in r.
i = j = k = 0;
while (i < 4 && j < 5) {
    if (u[i] < v[j]) r[k++] = u[i++];
    else if (v[j] < u[i]) r[k++] = v[j++];
    else {
        r[k++] = u[i++];
        r[k++] = v[j++];
    }
}
while (i < 4) r[k++] = u[i++];
while (j < 5) r[k++] = v[j++];

Related

Sorting algorithm for list of integers

I have a list of about 200 integers whose values are between 1 and 5.
I want to get into learning about sorting algorithms and knowing where to apply each because at the moment I use bubble-sort for everything which I've been told is a terrible way to do things.
What would be the fastest sorting algorithm for this integer sorting?
EDIT: It turns out that because I know the numbers are 1 to 5 then I can use a bucket sort (?) algorithm which if I'm not mistaken - and I definitely could be - means that for each integer of value 1, I put it in the 1 group, value 2 I put it in the 2 group etc, then concatenate the groups at the end. This seems like a simple and efficient way to do it.
However, since this is (currently) a learning exercise for me, I am going to remove the 1 - 5 limitation and try to implement bubble sort and merge sort, then compare the two to see which is faster.
Thanks for your help!
... which I've been told is a terrible way to do things.
First off, don't accept as gospel anything you hear from random bods on the internet (even me).
Bubble sort is fine under certain conditions, such as when the data is already mostly sorted, or the item count is relatively small (such as 200) (a), or you have no sort functionality built into the language and you're on a tight deadline where lack of performance will annoy the customer but lack of functionality will get you fired :-)
This bias against bubble sort is similar to the "only one exit point from a function" and "no goto" rules. You should understand the reasoning behind them so that you know when the rules can be ignored safely.
Anyway, on to the question proper. An efficient way for your specific case is to just count the items then output them, something like:
dim count[1..5] = {0, 0, 0, 0, 0};

for each item in list:
    count[item] = count[item] + 1

for val in 1..5:
    for quant in 1..count[val]:
        output val
That's an O(n) time and O(1) space solution, and you won't find a better big-O from a generalised sort routine; it's only possible here because of the extra information you have about the data (values limited to 1 through 5).
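A minimal C version of the same counting idea, assuming the values really are limited to 1 through 5:

/* Counting sort specialised for values 1..5, as described above. */
void sort_1_to_5(int *items, int n)
{
    int count[6] = {0};                 /* index 0 unused */
    for (int i = 0; i < n; i++)
        count[items[i]]++;
    int k = 0;
    for (int val = 1; val <= 5; val++)
        for (int q = 0; q < count[val]; q++)
            items[k++] = val;
}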
If you wanted to examine all the different sort algorithms, the Wikipedia Sorting Algorithm page is a useful starting point, including the major algorithms and their properties.
(a) As an aside, the following code (using worst-case data for bubble sort), when run under Cygwin on a not-very-powerful IBM T60 (2GHz dual core) laptop, completes in, on average, 0.157 seconds (5 samples: 0.150, 0.125, 0.192, 0.199, 0.115).
I wouldn't use it for sorting a million items (everyone knows bubble sort scales poorly) but 200 should be fine in most cases:
#include <stdio.h>

#define COUNT 200

int main (void) {
    int i, swapped, tmp, item[COUNT];

    // Set up worst case (reverse order) data.
    for (i = 0; i < COUNT; i++)
        item[i] = 200 - i;

    // Slightly optimised bubble sort.
    swapped = 1;
    while (swapped) {
        swapped = 0;
        for (i = 1; i < COUNT; i++) {
            if (item[i-1] > item[i]) {
                tmp = item[i-1];
                item[i-1] = item[i];
                item[i] = tmp;
                swapped = 1;
            }
        }
    }

    // for (i = 0; i < COUNT; i++)
    //     printf ("%d ", item[i]);
    // putchar ('\n');

    return 0;
}
You may not need sorting here, since you only have 5 possible values.
You could use 5 containers (or buckets) and as you scan your list of integers you place the values in the right bucket.
At the end, join the buckets together, in order.
Merge sort is O(n log n); I think it's way better than quicksort.
You can find some C# code here.

How to test if one set of (unique) integers belongs to another set, efficiently?

I'm writing a program where I'm having to test if one set of unique integers A belongs to another set of unique numbers B. However, this operation might be done several hundred times per second, so I'm looking for an efficient algorithm to do it.
For example, if A = [1 2 3] and B = [1 2 3 4], it is true, but if B = [1 2 4 5 6], it's false.
I'm not sure how efficient it is to just sort and compare, so I'm wondering if there are any more efficient algorithms.
One idea I came up with was to give each number n its corresponding n'th prime: that is, 1 = 2, 2 = 3, 3 = 5, 4 = 7, etc. Then I could calculate the product of A, and if that product is a factor of the corresponding product of B, we can say with certainty that A is a subset of B. For example, if A = [1 2 3] and B = [1 2 3 4], the primes are [2 3 5] and [2 3 5 7] and the products are 2*3*5=30 and 2*3*5*7=210. Since 210%30=0, A is a subset of B. I'm expecting the largest integer to be a couple of million at most, so I think it's doable.
Are there any more efficient algorithms?
The asymptotically fastest approach would be to just put each set in a hash table and query each element, which is O(N) time. You cannot do better (since it will take that much time to read the data).
Most set data structures already support expected and/or amortized O(1) query time. Some languages even support this operation directly. For example, in Python you could just do
A < B
(note that < tests for a proper subset; use <= if the two sets may be equal).
Of course the picture changes drastically depending on what you mean by "this operation is repeated". If you have the ability to do precalculations on the data as you add it to the set (which presumably you do), this will allow you to subsume the minimal O(N) time into other operations such as constructing the set. But we can't advise without knowing much more.
Assuming you have full control of the set data structure, your approach of keeping a running product (whenever you add an element, you do a single O(1) multiplication) is a very good idea IF there exists a divisibility test that is faster than O(N)... in fact your solution is really smart, because we can just do a single ALU division and hope we're within float tolerance. Do note, however, that this only allows you a speedup factor of roughly 20x at most, I think, since 21! > 2^64. There might be tricks to play with congruence modulo an integer, but I can't think of any. I have a slight hunch, though, that there is no divisibility test that is faster than O(#primes), though I'd like to be proved wrong!
If you are doing this repeatedly on duplicates, you may benefit from caching, depending on what exactly you are doing; give each set a unique ID (though since this makes updates hard, you may ironically wish to do something exactly like your scheme to make fingerprints, but mod max_int_size with collision detection). To manage memory, you can pin extremely expensive set comparisons (e.g. checking if a giant set is part of itself) in the cache, while otherwise using a most-recently-used policy if you run into memory issues. The nice thing about this is that it synergizes with an element-by-element rejection test: you will throw out sets quickly if they don't have many overlapping elements, but if they have many overlapping elements the calculations will take a long time, and if you repeat these calculations, caching could come in handy.
Let A and B be two sets, and suppose you want to check whether A is a subset of B. The first idea that pops into my mind is to sort both sets and then simply check if every element of A is contained in B, as follows:
Let n_A and n_B be the cardinality of A and B, respectively. Let i_A = 1, i_B = 1. Then the following algorithm (that is O(n_A + n_B)) will solve the problem:
// A and B assumed to be sorted, 1-indexed
i_A = 1;
i_B = 1;
n_A = size(A);
n_B = size(B);
while (i_A <= n_A) {
    while (A[i_A] > B[i_B]) {
        i_B++;
        if (i_B > n_B) return false;
    }
    if (A[i_A] != B[i_B]) return false;
    i_A++;
}
return true;
The same thing, but in a more functional, recursive way (some will find the previous easier to understand, others might find this one easier to understand):
// A and B assumed to be sorted
function subset(A, B)
    n_A = size(A)
    n_B = size(B)
    function subset0(i_A, i_B)
        if (i_A > n_A) true
        else if (i_B > n_B) false
        else
            if (A[i_A] <= B[i_B]) return (A[i_A] == B[i_B]) && subset0(i_A + 1, i_B + 1);
            else return subset0(i_A, i_B + 1);
    subset0(1, 1)
In this last example, notice that subset0 is tail recursive: if (A[i_A] == B[i_B]) is false then there is no recursive call at all (the && short-circuits), and if it is true there's no need to keep that information around, since the result of true && subset0(...) is exactly the same as subset0(...). So any smart compiler will be able to transform this into a loop, avoiding stack overflows or any performance hit from the function calls.
This will certainly work, but we might be able to optimize it a lot in the average case if you can provide more information about your sets, such as the probability distribution of the values, or whether you expect the answer to be biased (i.e., more often true, or more often false), etc.
Also, have you already written any code to actually measure its performance? Or are you trying to pre-optimize?
You should start by writing the simplest and most straightforward solution that works, and measure its performance. If it's not already satisfactory, only then you should start trying to optimize it.
I'll present an O(m+n) time-per-test algorithm. But first, two notes regarding the problem statement:
Note 1 - Your edits say that set sizes may be a few thousand, and numbers may range up to a million or two.
In following, let m, n denote the sizes of sets A, B and let R denote the size of the largest numbers allowed in sets.
Note 2 - The multiplication method you proposed is quite inefficient. Although it uses O(m+n) multiplies, it is not an O(m+n) method because the product lengths are worse than O(m) and O(n), so it would take more than O(m^2 + n^2) time, which is worse than the O(m ln(m) + n ln(n)) time required for sorting-based methods, which in turn is worse than the O(m+n) time of the following method.
For the presentation below, I suppose that sets A, B can completely change between tests, which you say can occur several hundred times per second. If there are partial changes, and you know which p elements change in A from one test to next, and which q change in B, then the method can be revised to run in O(p+q) time per test.
Step 0. (Performed one time only, at outset.) Clear an array F, containing R bits or bytes, as you prefer.
Step 1. (Initial step of per-test code.) For i from 0 to n-1, set F[B[i]], where B[i] denotes the i'th element of set B. This is O(n).
Step 2. For i from 0 to m-1, { test F[A[i]]. If it is clear, report that A is not a subset of B, and go to step 4; else continue }. This is O(m).
Step 3. Report that A is a subset of B.
Step 4. (Clear used bits) For i from 0 to n-1, clear F[B[i]]. This is O(n).
The initial step (clearing array F) is O(R) but steps 1-4 amount to O(m+n) time.
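A minimal C sketch of steps 0 to 4 above; the bound R and the use of one byte per flag (rather than one bit) are assumptions for illustration:

#include <stdbool.h>
#include <stdint.h>

#define R 2000001              /* upper bound on element values (assumption) */
static uint8_t F[R];           /* step 0: zero-initialised once (static storage) */

bool is_subset(const int *A, int m, const int *B, int n)
{
    bool result = true;
    for (int i = 0; i < n; i++) F[B[i]] = 1;     /* step 1: mark the members of B */
    for (int i = 0; i < m; i++)                  /* step 2: probe each member of A */
        if (!F[A[i]]) { result = false; break; }
    for (int i = 0; i < n; i++) F[B[i]] = 0;     /* step 4: clear the marks again */
    return result;                               /* step 3: report */
}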
Given the limit on the size of the integers, if the set of B sets is small and changes seldom, consider representing the B sets as bitsets (bit arrays indexed by integer set member). This doesn't require sorting, and the test for each element is very fast.
If the A members are sorted and tend to be clustered together, then you get another speedup by testing all the elements in one word of the bitset at a time.

Fast algorithm for finding prime numbers? [duplicate]

First of all - I searched a lot on this forum and I haven't found anything fast enough.
I'm trying to make a function that returns the prime numbers in a specified range.
For example, I wrote this function (in C#) using the sieve of Eratosthenes. I also tried Atkin's sieve, but the Eratosthenes one runs faster (in my implementation):
public static void SetPrimesSieve(int Range)
{
    Primes = new List<uint>();
    Primes.Add(2);
    int Half = (Range - 1) >> 1;
    BitArray Nums = new BitArray(Half, false);
    int Sqrt = (int)Math.Sqrt(Range);
    for (int i = 3, j; i <= Sqrt; )
    {
        for (j = ((i * i) >> 1) - 1; j < Half; j += i)
            Nums[j] = true;
        do
            i += 2;
        while (i <= Sqrt && Nums[(i >> 1) - 1]);
    }
    for (int i = 0; i < Half; ++i)
        if (!Nums[i])
            Primes.Add((uint)(i << 1) + 3);
}
It runs about twice as fast as the code & algorithms I found...
There should be a faster way to find prime numbers; could you help me?
When searching around for algorithms on this topic (for Project Euler) I don't remember finding anything faster. If speed is really the concern, have you thought about just storing the primes so you simply need to look them up?
EDIT: a quick Google search found this, confirming that the fastest method would be just to page the results and look them up as needed.
One more edit - you may find more information here, essentially a duplicate of this topic. The top post there states that Atkin's sieve was faster than Eratosthenes' as far as generating on the fly.
The fastest algorithm in my experience so far is the Sieve of Eratosthenes with wheel factorization for 2, 3 and 5, where the primes among the remaining numbers are represented as bits in a byte array. In Java, on one core of my 3-year-old laptop, it takes 23 seconds to compute the primes up to 1 billion.
With wheel factorization the Sieve of Atkin was about a factor of two slower, while with an ordinary BitSet it was about 30% faster.
See also this answer.
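As a point of reference for the bit-array representation, here is a minimal C sketch that packs only the odd numbers into bits (i.e. just the trivial wheel for 2, not the full 2-3-5 wheel described above); bit i stands for the odd number 2*i + 3:

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

static void odd_sieve(uint32_t limit)
{
    uint32_t half = (limit >= 3) ? (limit - 1) / 2 : 0;   /* odd candidates 3..limit */
    uint8_t *comp = calloc((half + 7) / 8, 1);            /* bit set => composite */

    for (uint32_t i = 0; (uint64_t)(2*i + 3) * (2*i + 3) <= limit; i++) {
        if (comp[i / 8] & (1u << (i % 8))) continue;      /* 2*i+3 already crossed out */
        uint32_t p = 2*i + 3;
        for (uint64_t j = ((uint64_t)p * p - 3) / 2; j < half; j += p)
            comp[j / 8] |= 1u << (j % 8);
    }

    if (limit >= 2) printf("2 ");
    for (uint32_t i = 0; i < half; i++)
        if (!(comp[i / 8] & (1u << (i % 8))))
            printf("%u ", 2*i + 3);
    printf("\n");
    free(comp);
}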
I wrote an algorithm in C that can find the prime numbers in the range 2 - 90,000,000 in 0.65 s on an i3-350M notebook. You have to use bitwise operations and have "code" for translating an index of your array into the index of the concrete bit you want. For example, if you mark the multiples of 2, the concrete bits will look like ....10101000..., so if you read from the left you get indices 4, 6, 8... that's it.
Several comments.
For speed, precompute then load from disk. It's super fast. I did it in Java long ago.
Don't store as an array, store as a bitsequence for odd numbers. Way more efficient on memory
If your speed question is that you want this particular computation to run fast (you need to justify why you can't precompute and load it from disk) you need to code a better Atkin's sieve. It is faster. But only slightly.
You haven't indicated the end use for these primes. We may be missing something completely because you've not told us the application. Tell us a sketch of the application and the answers will be targeted better for your context.
Why on earth do you think something faster exists? You haven't justified your hunch. This is a very hard problem (that is, finding something faster).
You can do better than that using the Sieve of Atkin, but it is quite tricky to implement it fast and correctly. A simple translation of the Wikipedia pseudo-code is probably not good enough.

random permutation

I would like to generate a random permutation as fast as possible.
The problem: the Knuth shuffle, which is O(n), involves generating n random numbers, and generating random numbers is quite expensive.
I would like to find an O(n) function involving a fixed O(1) number of random numbers.
I realize that this question has been asked before, but I did not see any relevant answers.
Just to stress a point: I am not looking for anything less than O(n), just an algorithm involving less generation of random numbers.
Thanks
Create a 1-1 mapping of each permutation to a number from 1 to n! (n factorial). Generate a random number in 1 to n!, use the mapping, get the permutation.
For the mapping, perhaps this will be useful: http://en.wikipedia.org/wiki/Permutation#Numbering_permutations
Of course, this would get out of hand quickly, as n! can become really large soon.
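A minimal C sketch of that mapping using the factorial number system (Lehmer code); it is practical only for small n, since n! must fit in 64 bits (n <= 20), and the names here are purely illustrative:

#include <stdint.h>

/* Writes the k-th permutation (0 <= k < n!) of {0,...,n-1} into out. */
void nth_permutation(uint64_t k, int n, int *out)
{
    int items[21];
    for (int i = 0; i < n; i++) items[i] = i;         /* elements not yet placed */

    for (int i = 0; i < n; i++) {
        uint64_t f = 1;                               /* f = (n-1-i)! */
        for (int j = 2; j <= n - 1 - i; j++) f *= j;

        int idx = (int)(k / f);                       /* which remaining element */
        k %= f;

        out[i] = items[idx];
        for (int j = idx; j < n - 1 - i; j++)         /* remove the chosen element */
            items[j] = items[j + 1];
    }
}

Combined with a single random draw in [0, n!), this gives a permutation from one random number, but as noted above n! outgrows any fixed-width integer very quickly.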
Generating a random number takes a long time, you say? The implementation of Java's Random.nextInt is roughly:
oldseed = seed;
nextseed = (oldseed * multiplier + addend) & mask;
return (int)(nextseed >>> (48 - bits));
Is that too much work to do for each element?
See https://doi.org/10.1145/3009909 for a careful analysis of the number of random bits required to generate a random permutation. (It's open-access, but it's not easy reading! Bottom line: if carefully implemented, all of the usual methods for generating random permutations are efficient in their use of random bits.)
And... if your goal is to generate a random permutation rapidly for large N, I'd suggest you try the MergeShuffle algorithm. An article published in 2015 claimed a factor-of-two speedup over Fisher-Yates in both parallel and sequential implementations, and a significant speedup in sequential computations over the other standard algorithm they tested (Rao-Sandelius).
An implementation of MergeShuffle (and of the usual Fisher-Yates and Rao-Sandelius algorithms) is available at https://github.com/axel-bacher/mergeshuffle. But caveat emptor! The authors are theoreticians, not software engineers. They have published their experimental code to github but aren't maintaining it. Someday, I imagine someone (perhaps you!) will add MergeShuffle to GSL. At present gsl_ran_shuffle() is an implementation of Fisher-Yates, see https://www.gnu.org/software/gsl/doc/html/randist.html?highlight=gsl_ran_shuffle.
Not exactly what you asked, but if the provided random number generator doesn't satisfy you, maybe you should try something different. Generally, pseudorandom number generation can be very simple.
Probably the best-known algorithm:
http://en.wikipedia.org/wiki/Linear_congruential_generator
More:
http://en.wikipedia.org/wiki/List_of_pseudorandom_number_generators
As other answers suggest, you can make a random integer in the range 0 to N! and use it to produce a shuffle. Although theoretically correct, this won't be faster in general since N! grows fast and you'll spend all your time doing bigint arithmetic.
If you want speed and you don't mind trading off some randomness, you will be much better off using a less good random number generator. A linear congruential generator (see http://en.wikipedia.org/wiki/Linear_congruential_generator) will give you a random number in a few cycles.
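For illustration, a minimal LCG sketch in C, using the well-known Numerical Recipes constants (a = 1664525, c = 1013904223, modulus 2^32); the seed value is an arbitrary assumption:

#include <stdint.h>

static uint32_t lcg_state = 12345u;   /* seed (arbitrary) */

static inline uint32_t lcg_next(void)
{
    lcg_state = 1664525u * lcg_state + 1013904223u;   /* wraps mod 2^32 */
    return lcg_state;
}

That is one multiply and one add per number, which is about as cheap as random number generation gets, at the price of much weaker statistical quality than a full PRNG such as the Mersenne Twister.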
Usually you don't need the full range of each random value, so to use exactly the same amount of randomness you can take the following approach (which is almost like random(0, N!), I guess):
// ...
m = 1;   // range of random buffer (single variant)
r = 0;   // random buffer (number zero)
// ...
for (/* ... */) {
    while (m < n) {                    // range of our buffer is too narrow for "n"
        r = r*RAND_MAX + random();     // add another random to our random-buffer
        m *= RAND_MAX;                 // update range of random-buffer
    }
    x = r % n;   // pull out next random with range "n"
    r /= n;      // remove it from random-buffer
    m /= n;      // fix range of random-buffer
    // ...
}
P.S. Of course there will be some bias from dividing by values that are not powers of two, but it will be distributed among the resulting samples.
Generate N numbers (N smaller than the number of random numbers you need) before the computation, or store them in an array as data, with your slow but good random generator; then pick a number by simply incrementing an index into the array inside your computing loop; if you need different seeds, create multiple tables.
Are you sure that your mathematical and algorithmical approach to the problem is correct?
I hit exactly the same problem, where the Fisher–Yates shuffle was the bottleneck in corner cases. But for me the real problem was a brute-force algorithm that doesn't scale well to all problems. The following story explains the problem and the optimizations I have come up with so far.
Dealing cards for 4 players
The number of possible deals is a 96-bit number. That puts quite a lot of stress on the random number generator to avoid statistical anomalies when selecting a play plan from the generated sample set of deals. I chose to use 2x mt19937_64 seeded from /dev/random because of the long period and the heavy advertisement on the web that it is good for scientific simulations.
The simple approach is to use the Fisher–Yates shuffle to generate deals and filter out deals that don't match the already collected information. The Knuth shuffle takes ~1400 CPU cycles per deal, mostly because I have to generate 51 random numbers and swap entries in the table 51 times.
That doesn't matter for normal cases, where I would only need to generate 10000-100000 deals in 7 minutes. But there are extreme cases where the filters may select only a very small subset of hands, requiring a huge number of deals to be generated.
Using a single number for multiple cards
When profiling with callgrind (valgrind) I noticed that the main slowdown was the C++ random number generator (after switching away from std::uniform_int_distribution, which was the first bottleneck).
Then I came up with the idea that I could use a single random number for multiple cards. The idea is to use the least significant information from the number first and then erase that information.
int number = uniform_rng(0, 52*51*50*49);
int card1 = number % 52;
number /= 52;
int card2 = number % 51;
number /= 51;
......
Of course that is only a minor optimization, because generation is still O(N).
Generation using bit permutations
The next idea was exactly the solution asked for here, but I still ended up with O(N), and with a larger cost than the original shuffle. But let's look into the solution and why it fails so miserably.
I decided to use the idea from Dealing All the Deals by John Christman:
void Deal::generate()
{
    // 52:26 split, 52!/(26!)**2 = 495,918,532,948,104
    max = 495918532948104LU;
    partner = uniform_rng(eng1, max);
    // 2x 26:13 splits, (26!/(13!)**2)**2 = 10,400,600**2
    max = 10400600LU*10400600LU;
    hands = uniform_rng(eng2, max);
    // Create 104 bit presentation of deal (2 bits per card)
    select_deal(id, partner, hands);
}
So far so good, and it's pretty good looking, but the select_deal implementation is a PITA.
void select_deal(Id &new_id, uint64_t partner, uint64_t hands)
{
    unsigned idx;
    unsigned e, n, ns = 26;
    e = n = 13;

    // Figure out partnership who owns which card
    for (idx = CARDS_IN_SUIT*NUM_SUITS; idx > 0; ) {
        uint64_t cut = ncr(idx - 1, ns);
        if (partner >= cut) {
            partner -= cut;
            // Figure out if N or S holds the card
            ns--;
            cut = ncr(ns, n) * 10400600LU;
            if (hands > cut) {
                hands -= cut;
                n--;
            } else
                new_id[idx%NUM_SUITS] |= 1 << (idx/NUM_SUITS);
        } else
            new_id[idx%NUM_SUITS + NUM_SUITS] |= 1 << (idx/NUM_SUITS);
        idx--;
    }

    unsigned ew = 26;
    // Figure out if E or W holds a card
    for (idx = CARDS_IN_SUIT*NUM_SUITS; idx-- > 0; ) {
        if (new_id[idx%NUM_SUITS + NUM_SUITS] & (1 << (idx/NUM_SUITS))) {
            uint64_t cut = ncr(--ew, e);
            if (hands >= cut) {
                hands -= cut;
                e--;
            } else
                new_id[idx%NUM_SUITS] |= 1 << (idx/NUM_SUITS);
        }
    }
}
Now that I had the O(N) permutation solution done to prove the algorithm could work, I started searching for an O(1) mapping from a random number to a bit permutation. Too bad it looks like the only solution would be to use huge lookup tables that would kill CPU caches. That doesn't sound like a good idea for an AI that will be making heavy use of the caches for the double dummy analyzer.
Mathematical solution
After all the hard work to figure out how to generate random bit permutations, I decided to go back to maths. It is entirely possible to apply the filters before dealing cards. That requires splitting the deals into a manageable number of layered sets and selecting between the sets based on their relative probabilities after filtering out impossible sets.
I don't yet have the code ready to test how many cycles I'm wasting in the common case where the filter selects the major part of the deals. But I believe this approach gives the most stable generation performance, keeping the cost below 0.1%.
Generate a 32-bit integer. For each index i (maybe only up to half the number of elements in the array), if bit i % 32 is 1, swap element i with element n - i - 1.
Of course, this might not be random enough for your purposes. You could probably improve it by not swapping with n - i - 1, but rather with another function of n and i that gives a better distribution. You could even use two functions: one for when the bit is 0 and another for when it's 1.
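A minimal C sketch of the basic bit-driven idea from the first paragraph, which uses a single 32-bit draw for the whole array; as noted above, this is only a cheap approximation of a shuffle, not a uniform permutation:

#include <stdint.h>
#include <stdlib.h>

void bit_shuffle(int *a, int n)
{
    uint32_t bits = (uint32_t)rand();          /* one random draw (rand() may give
                                                  fewer than 32 random bits) */
    for (int i = 0; i < n / 2; i++)
        if (bits & (1u << (i % 32))) {         /* bit i%32 decides the swap */
            int t = a[i]; a[i] = a[n - 1 - i]; a[n - 1 - i] = t;
        }
}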

Create a sorted array out of 2 arrays

There are 2 arrays given, for example:
A = [20,4,21,6,3]
B = [748,32,48,92,23......]
Assume B is very large and can hold all the elements of array A.
Find a way to get array B into sorted order while also containing all the elements of array A.
Design an algorithm in the most efficient way.
This sounds like the merge sort algorithm. You will find tons of examples here. You can then modify it to suit.
Given that your arrays are integer arrays, you can use the radix sort algorithm to sort B in linear time, O(n). Wikipedia has a nice write-up and sample Python code.
Radix sort is linear with respect to the number of elements. While it also has a dependence on the size of the integers, you take that as a constant, just like you take the comparison operator to be constant. When sorting bignums, for instance, the comparison operator would also depend on the integer size!
Smells like homework. Basically, write into array B starting from the end, keeping track of the place you are reading from in both A and B.
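A minimal C sketch of that approach, assuming A (length m) and the first n elements of B are each already sorted and B has room for m + n elements in total:

/* Merge A into B from the back, so nothing in B is overwritten too early. */
void merge_into_b(const int *A, int m, int *B, int n)
{
    int i = m - 1;            /* last element of A       */
    int j = n - 1;            /* last used element of B  */
    int k = m + n - 1;        /* last free slot of B     */

    while (i >= 0 && j >= 0)
        B[k--] = (A[i] > B[j]) ? A[i--] : B[j--];
    while (i >= 0)            /* leftovers from A        */
        B[k--] = A[i--];
    /* any leftovers from B are already in place */
}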
Just try it out: merge the array A into B, then use the quicksort algorithm.
Before adding elements from A to B, check whether doing that would exceed the size of the array; if not, merge A into B and then do a quicksort.
But if you just want to merge both arrays into a new array whose length is the combined length of both, here is a jump start for you; see if you can go forward from here...
public double[] combineArrays(double[] first, double[] second) {
    int totalLenth = first.length + second.length;
    double[] newDoubles = new double[totalLenth];
    for (int i = 0; i < first.length; i++) {
        newDoubles[i] = first[i];
    }
    for (int j = first.length; j < newDoubles.length; j++) {
        newDoubles[j] = second[j - first.length];
    }
    return newDoubles;
}
Hope this helps, Good Luck.
You can also modify an insertion sort idea:
0) do all necessary tests: if arrays are null, if bigger array has enough space
1) add small array at the end of the big array
2) do normal insertion sort, but start it at the beginning of the small array
If you do quicksort or some other "quickest" O(n log n) sort here, the problem is that you are not using the fact that both arrays are sorted. With insertion sort you are using the fact that array B is sorted (but not the fact that A is sorted, so maybe we should develop the idea further and modify insertion sort to use that fact as well).
