Something I don't understand about the median of medians algorithm - algorithm

There is something I don't understand about the median of medians algorithm.
One key step of this algorithm is to find an approximate median, and according to Wikipedia we have the guarantee that this approximate median is greater than 30% of the elements of the initial set.
To find this approximate median, we compute the median of each group of 5 elements, we gather these medians in a new set, and we recompute the medians until the obtained set has at most 5 elements. At that point, we take the median of that set. (See the Wikipedia page if my explanation is not clear.)
But consider the following set of 125 elements:
1 2 3 1001 1002
4 5 6 1003 1004
7 8 9 1005 1006
1020 1021 1022 1023 1034
1025 1026 1027 1028 1035
10 11 12 1007 1008
13 14 15 1009 1010
16 17 18 1011 1013
1029 1030 1031 1032 1033
1036 1037 1038 1039 1040
19 20 21 1014 1015
22 23 24 1016 1017
25 26 27 1018 1019
1041 1042 1043 1044 1045
1046 1047 1048 1049 1050
1051 1052 1053 1054 1055
1056 1057 1058 1059 1060
1061 1062 1063 1064 1065
1066 1067 1068 1069 1070
1071 1072 1073 1074 1075
1076 1077 1078 1079 1080
1081 1082 1083 1084 1085
1086 1087 1088 1089 1090
1091 1092 1093 1094 1095
1096 1097 1098 1099 1100
So we divide the set into groups of 5 elements, we compute and gather the medians, and we obtain the following set:
3 6 9 1022 1027
12 15 18 1031 1038
21 24 27 1043 1048
1053 1058 1063 1068 1073
1078 1083 1088 1093 1098
We apply the same procedure again, and we obtain the following set:
9 18 27 1063 1068
So we obtain 27 as the approximate median. But this number is greater than or equal to only 27 elements, and 27/125 = 21.6% < 30%!
So my question is: where am I wrong? Why, in my case, is the approximate median not greater than 30% of the elements?
Thank you for your replies!!

The cause of your confusion about the median-of-medians algorithm is that, while median-of-medians returns an approximate result that is only guaranteed to lie within 20 percentile points of the actual median (i.e. between the 30th and 70th percentiles), at some stages in the algorithm we also need to calculate exact medians. If you mix up the two, you will not get the expected result, as demonstrated in your example.
Median-of-medians uses three functions as its building blocks:
medianOfFive(array, first, last) {
// ...
return median;
}
This function returns the exact median of five (or fewer) elements from (part of) an array. There are several ways to code this, based on e.g. a sorting network or insertion sort. The details are not important for this question, but it is important to note that this function returns the exact median, not an approximation.
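For illustration, here is a minimal Python sketch of such a function (the name median_of_five and the use of plain sorting are choices made here for brevity; a sorting network or insertion sort would do the same job without copying):

def median_of_five(array, first, last):
    # Exact median of the at-most-five elements array[first..last] (inclusive).
    # Sorting a copy of such a tiny slice is simple and fast enough here.
    group = sorted(array[first:last + 1])
    return group[(len(group) - 1) // 2]   # lower middle element for even sizes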
medianOfMedians(array, first, last) {
// ...
return median;
}
This function returns an approximation of the median from (part of) an array, which is guaranteed to be larger than the 30% smallest elements, and smaller than the 30% largest elements. We'll go into more detail below.
select(array, first, last, n) {
// ...
return element;
}
This function returns the n-th smallest element from (part of) an array. This function too returns an exact result, not an approximation.
At its most basic, the overall algorithm works like this:
medianOfMedians(array, first, last) {
call medianOfFive() for every group of five elements
fill an array with these medians
call select() for this array to find the middle element
return this middle element (i.e. the median of medians)
}
So this is where your calculation went wrong. After creating an array with the median-of-fives, you then used the median-of-medians function again on this array, which gives you an approximation of the median (27), but here you need the actual median (1038).
This all sounds fairly straightforward, but where it becomes complicated is that the function select() calls medianOfMedians() to get a first estimate of the median, which it then uses to calculate the exact median, so you get a two-way recursion where two functions call each other. This recursion stops when medianOfMedians() is called for 25 elements or fewer, because then there are at most 5 medians, and instead of using select() to find their median, it can use medianOfFive().
The reason why select() calls medianOfMedians() is that it uses partitioning to split (part of) the array into two parts of close to equal size, and it needs a good pivot value to do that. After it has partitioned the array into two parts with the elements which are smaller and larger than the pivot, it then checks which part the n-th smallest element is in, and recurses with this part. If the size of the part with the smaller values is n-1, the pivot is the n-th value, and no further recursion is needed.
select(array, first, last, n) {
call medianOfMedians() to get approximate median as pivot
partition (the range of) the array into smaller and larger than pivot
if part with smaller elements is size n-1, return pivot
call select() on the part which contains the n-th element
}
As you see, the select() function recurses (unless the pivot happens to be the n-th element), but on ever smaller ranges of the array, so at some point (e.g. two elements) finding the n-th element will become trivial, and recursing further is no longer needed.
So finally we get, in some more detail:
medianOfFive(array, first, last) {
// some algorithmic magic ...
return median;
}
medianOfMedians(array, first, last) {
if 5 elements or fewer, call medianOfFive() and return result
call medianOfFive() for every group of five elements
store the results in an array medians[]
if medians[] has 5 elements or fewer, call medianOfFive() on it and return the result
otherwise call select(medians[]) to find its middle element
return the result (i.e. the median of medians)
}
select(array, first, last, n) {
if 2 elements, compare and return n-th element
if 5 elements or fewer, call medianOfFive() to get median as pivot
else call medianOfMedians() to get approximate median as pivot
partition (the range of) the array into smaller and larger than pivot
if part with smaller elements is size n-1, return pivot
if n-th value is in part with larger values, recalculate value of n
call select() on the part which contains the n-th element
}
EXAMPLE
Input array (125 values, 25 groups of five):
#1 #2 #3 #4 #5 #6 #7 #8 #9 #10 #11 #12 #13 #14 #15 #16 #17 #18 #19 #20 #21 #22 #23 #24 #25
1 4 7 1020 1025 10 13 16 1029 1036 19 22 25 1041 1046 1051 1056 1061 1066 1071 1076 1081 1086 1091 1096
2 5 8 1021 1026 11 14 17 1030 1037 20 23 26 1042 1047 1052 1057 1062 1067 1072 1077 1082 1087 1092 1097
3 6 9 1022 1027 12 15 18 1031 1038 21 24 27 1043 1048 1053 1058 1063 1068 1073 1078 1083 1088 1093 1098
1001 1003 1005 1023 1028 1007 1009 1011 1032 1039 1014 1016 1018 1044 1049 1054 1059 1064 1069 1074 1079 1084 1089 1094 1099
1002 1004 1006 1034 1035 1008 1010 1013 1033 1040 1015 1017 1019 1045 1050 1055 1060 1065 1070 1075 1080 1085 1090 1095 1100
Medians of groups of five (25 values):
3, 6, 9, 1022, 1027, 12, 15, 18, 1031, 1038, 21, 24, 27, 1043,
1048, 1053, 1058, 1063, 1068, 1073, 1078, 1083, 1088, 1093, 1098
Groups of five for approximate median:
#1 #2 #3 #4 #5
3 12 21 1053 1078
6 15 24 1058 1083
9 18 27 1063 1088
1022 1031 1043 1068 1093
1027 1038 1048 1073 1098
Medians of five for approximate median:
9, 18, 27, 1063, 1088
Approximate median as pivot:
27
Medians of five partitioned with pivot 27 (depends on method):
small: 3, 6, 9, 24, 21, 12, 15, 18
pivot: 27
large: 1031, 1038, 1027, 1022, 1043, 1048, 1053, 1058,
1063, 1068, 1073, 1078, 1083, 1088, 1093, 1098
The smaller group has 8 elements, the larger group 16 elements. We were looking for the middle 13th element out of 25, so now we look for the 13 - 8 - 1 = 4th element out of 16:
Groups of five:
#1 #2 #3 #4
1031 1048 1073 1098
1038 1053 1078
1027 1058 1083
1022 1063 1088
1043 1068 1093
Medians of groups of five:
1031, 1058, 1083, 1098
Approximate median as pivot:
1058
Range of medians of five partitioned with pivot 1058 (depends on method):
small: 1031, 1038, 1027, 1022, 1043, 1048, 1053
pivot: 1058
large: 1063, 1068, 1073, 1078, 1083, 1088, 1093, 1098
The smaller group has 7 elements. We were looking for the 4th element of 16, so now we look for the 4th element out of 7:
Groups of five:
#1 #2
1031 1048
1038 1053
1027
1022
1043
Medians of groups of five:
1031, 1048
Approximate median as pivot:
1031
Range of medians of five partitioned with pivot 1031 (depends on method):
small: 1022, 1027
pivot: 1031
large: 1038, 1043, 1048, 1053
The smaller part has 2 elements, and the larger has 4, so now we look for the 4 - 2 - 1 = 1st element out of 4:
Median of five as pivot:
1043
Range of medians of five partitioned with pivot 1043 (depends on method):
small: 1038
pivot: 1043
large: 1048, 1053
The smaller part has only one element, and we were looking for the first element, so we can return the small element 1038.
As you will see, 1038 is the exact median of the original 25 medians-of-five, and there are 62 smaller values in the original array of 125:
1 ~ 27, 1001 ~ 1011, 1013 ~ 1023, 1025 ~ 1037
(that is 27 + 11 + 11 + 13 = 62 values; 1012 and 1024 do not occur in the input), which not only puts it in the 30~70% range, but means it is actually the exact median of the whole array, the 63rd smallest of 125 (note that this is a coincidence of this particular example).

I'm completely with your analysis up through the point where you get the medians of each of the blocks of five elements, when you're left with this collection of elements:
3 6 9 1022 1027 12 15 18 1031 1038 21 24 27 1043 1048 1053 1058 1063 1068 1073 1078 1083 1088 1093 1098
You are correct that, at this point, we need to get the median of this collection of elements. However, the way that the median-of-medians algorithm accomplishes this is different than what you've proposed.
When you were working through your analysis, you attempted to get the median of this set of values by, once again, splitting the input into blocks of size five and taking the median of each. However, that approach won't actually give you the median of the medians. (You can see this by noting that you got back 27, which isn't the true median of that collection of values).
The way that the median-of-medians algorithm actually gets back the median of the medians is by recursively invoking the overall algorithm to obtain the median of those elements. This is subtly different from just repeatedly breaking things apart into blocks and computing the medians of each block. In particular, each recursive call will
get an estimate of the pivot by using the groups-of-five heuristic,
recursively invoke the function on itself to find the median of those medians, then
apply a partitioning step on that median and use that to determine how to proceed from there.
This algorithm is, in my opinion, something that's way too complicated to actually trace through by hand. You really need to trust that, since each recursive call you're making works on a smaller array than what you started with, each recursive call will indeed do what it says to do. So when you're left with the medians of each group, as you were before, you should just trust that when you need to get the median by a recursive call, you end up with the true median.
If you look at the true median of the medians that you've generated in the first step, you'll find that it indeed will be between the 30th and 70th percentiles of the original data set.
If this seems confusing, don't worry - you're in really good company. This algorithm is famously tricky to understand. For me, the easiest way to understand it is to just trust that recursion works and to trace through it only one layer deep, working under the assumption that all the recursive calls work, rather than trying to walk all the way down to the bottom of the recursion tree.

Related

reformulating for loop with vectorization or other approach - octave

Is there any way to vectorize (or reformulate) each body of the loop in this code:
col=load('col-deau'); %load data
h=col(:,8); % corresponding water column
dates=col(:,3); % and its dates
%removing out-of-bound data
days=days(h~=9999.000);
h=h(h~=9999.000);
dates=sort(dates(h~=9999.000));
[k,hcat]=hist(h,nbin); %making classes (k) and boundaries of classes (hcat) of water column automatically
dcat=1:15; % make boundaries for dates
for k=1:length(dcat)-1 % Loop for each date class
ii=find(dates>=dcat(k)&dates<dcat(k+1));% Counting dates corresponding to the boundaries of each date class
for j=1:length(hcat)-1 % Loop over each class of water column
ij=find(h>=hcat(j)&h<hcat(j+1)); % Count water column corresponding to the boundaries of each water column class
obs(k,j)=length(intersect(ii,ij)); % Find the size of each intersecting matrix
end
end
I've tried using vectorization, for example, to change this part:
for k=1:length(dcat)-1
ii=find(dates>=dcat(k)&dates<dcat(k+1))
endfor
with this:
nk=1:length(dcat)-1;
ii2=find(dates>=dcat(nk)&dates<dcat(nk+1));
and also using bsxfun:
ii2=find(bsxfun(@and,bsxfun(@ge,dates,nk),bsxfun(@lt,dates,nk+1)));
but to no avail. Both of these approaches produce identical output, which does not correspond to the output of the for loop (in terms of elements and vector size).
For information, h is a vector containing the water column in meters, and dates is a vector (integers with two digits) containing the dates on which the measurement of the corresponding water column was taken.
The input file can be found here: https://drive.google.com/open?id=1EomLGYleaNtiGG2iV_9LRt425blxdIsm
As for the output, I want to have ii like this:
ii =
1177
1178
1179
1180
1181
1182
1183
1184
1185
1186
1187
1188
1189
1190
1191
1192
1193
1194
1195
1196
1197
1198
1199
1200
1201
1202
1203
1204
1205
1206
1207
1208
1209
1210
1211
1212
1213
1214
1215
1216
1217
1218
1219
1220
1221
1222
1223
1224
1225
1226
1227
1228
1229
1230
1231
1232
1233
1234
1235
1236
1237
1238
1239
1240
1241
1242
1243
1244
1245
1246
1247
1248
1249
1250
1251
1252
1253
1254
1255
1256
1257
1258
1259
1260
1261
1262
1263
1264
1265
1266
1267
1268
1269
1270
1271
1272
instead with the first approach I get ii2 which is very different in terms of value and vector size (I can't post the result because the vector size is too big).
Can someone help a desperate newbie here? I just need to reformulate the loop part into a better, more concise version.
If more details need to be added, please feel free to ask me.
You can use hist3:
pkg load statistics
[obs, ~] = hist3([dates(:) h(:)], 'Edges', {dcat, hcat});

Print Maximum List

We are given a set F = {a1, a2, a3, …, aN} of N fruits. Each fruit has a price Pi and a vitamin content Vi. Now we have to arrange these fruits in such a way that the list has prices in ascending order and vitamins in descending order.
For example::
N=4
Pi: 2 5 7 10
Vi: 8 11 9 2
This is the exact question https://cs.stackexchange.com/questions/1287/find-subsequence-of-maximal-length-simultaneously-satisfying-two-ordering-constr/1289#1289
I'd try to reduce the problem to the longest increasing subsequence problem.
Sort the list according to the first criterion [vitamins].
Then, find the longest increasing subsequence in the modified list,
according to the second criterion [price].
This solution is O(n log n), since both steps (1) and (2) can be done in O(n log n) each.
Have a look at the Wikipedia article, under Efficient algorithms, for how you can implement the longest increasing subsequence.
EDIT:
If your list allows duplicates, your sort [step (1)] will have to sort by the second parameter as a secondary criterion, in case of equality of the primary criterion.
Example [your example 2]:
Pi::99 12 34 10 87 19 90 43 13 78
Vi::10 23 4 5 11 10 18 90 100 65
After step 1 you get [sorting when Vi is primary criteria, descending]:
Pi:: 013 43 78 12 90 87 87 99 10 34
Vi:: 100 90 65 23 18 11 10 10 05 04
Step two finds for longest increasing subsequence in Pi, and you get:
(13,100), (43,90), (78,65), (87,11), (99,10)
as a feasible solution, since it is an increasing subsequence [according to Pi] in the sorted list.
P.S. Here I am assuming the increasing subsequence you want is strictly increasing; otherwise the result is (13,100),(43,90),(78,65),(87,11),(87,10),(99,10), which is a longer subsequence, but it is not strictly increasing/decreasing according to Pi and Vi.
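For concreteness, here is a small Python sketch of the reduction described above (the names are mine; it sorts by vitamins descending, breaks ties by price descending so that two fruits with equal vitamins can never both appear in a strictly increasing price run, and then finds a longest strictly increasing subsequence of the prices with the usual patience/binary-search method, O(n log n) overall):

from bisect import bisect_left

def longest_chain(prices, vitamins):
    # Sort by vitamins descending; break ties by price descending so equal-vitamin
    # fruits cannot both occur in a strictly increasing price run.
    fruits = sorted(zip(prices, vitamins), key=lambda pv: (-pv[1], -pv[0]))
    seq = [p for p, v in fruits]
    if not seq:
        return []

    tails = []                 # tails[i]: smallest tail value of an increasing run of length i+1
    tail_idx = []              # index in seq of each tail
    prev = [-1] * len(seq)     # predecessor links for reconstructing one optimal chain
    for i, x in enumerate(seq):
        pos = bisect_left(tails, x)          # strict increase: an equal value replaces, never extends
        if pos == len(tails):
            tails.append(x)
            tail_idx.append(i)
        else:
            tails[pos] = x
            tail_idx[pos] = i
        prev[i] = tail_idx[pos - 1] if pos > 0 else -1
    chain, i = [], tail_idx[-1]
    while i != -1:
        chain.append(fruits[i])
        i = prev[i]
    return chain[::-1]

print(longest_chain([99, 12, 34, 10, 87, 19, 90, 43, 13, 78],
                    [10, 23, 4, 5, 11, 10, 18, 90, 100, 65]))
# [(13, 100), (43, 90), (78, 65), (87, 11), (99, 10)]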

How to calculate classification error rate

Alright, this question is pretty hard, so I am going to give you an example.
The left numbers are my algorithm's classifications and the right numbers are the original class numbers:
177 86
177 86
177 86
177 86
177 86
177 86
177 86
177 86
177 86
177 89
177 89
177 89
177 89
177 89
177 89
177 89
So here my algorithm merged 2 different classes into 1: as you can see, it merged classes 86 and 89 into one class. So what would be the error in the above example?
Or here is another example:
203 7
203 7
203 7
203 7
16 7
203 7
17 7
16 7
203 7
In the above example the left numbers are my algorithm's classifications and the right numbers are the original class ids. As can be seen, it misclassified 3 products (I am classifying commercial products). So in this example, what would be the error rate? How would you calculate it?
This question is pretty hard and complex. We have finished the classification but we are not able to find the correct way of calculating the success rate :D
Here's a longish example, a real confusion matrix with 10 input classes "0" - "9"
(handwritten digits),
and 10 output clusters labelled A - J.
Confusion matrix for 5620 optdigits:
True 0 - 9 down, clusters A - J across
-----------------------------------------------------
A B C D E F G H I J
-----------------------------------------------------
0: 2 4 1 546 1
1: 71 249 11 1 6 228 5
2: 13 5 64 1 13 1 460
3: 29 2 507 20 5 9
4: 33 483 4 38 5 3 2
5: 1 1 2 58 3 480 13
6: 2 1 2 294 1 1 257
7: 1 5 1 546 6 7
8: 415 15 2 5 3 12 13 87 2
9: 46 72 2 357 35 1 47 2
----------------------------------------------------
580 383 496 1002 307 670 549 557 810 266 estimates in each cluster
y class sizes: [554 571 557 572 568 558 558 566 554 562]
kmeans cluster sizes: [ 580 383 496 1002 307 670 549 557 810 266]
For example, cluster A has 580 data points, 415 of which are "8"s;
cluster B has 383 data points, 249 of which are "1"s; and so on.
The problem is that the output classes are scrambled, permuted;
they correspond in this order, with counts:
A B C D E F G H I J
8 1 4 3 6 7 0 5 2 6
415 249 483 507 294 546 546 480 460 257
One could say that the "success rate" is
75 % = (415 + 249 + 483 + 507 + 294 + 546 + 546 + 480 + 460 + 257) / 5620
but this throws away useful information —
here, that E and J both say "6", and no cluster says "9".
So, add up the biggest numbers in each column of the confusion matrix
and divide by the total.
But how to count overlapping / missing clusters,
like the two "6"s and no "9"s here?
I don't know of a commonly agreed-upon way
(I doubt that the Hungarian algorithm is used in practice).
Bottom line: don't throw away information; look at the whole confusion matrix.
NB: such a "success rate" will be optimistic for new data!
It's customary to split the data into say 2/3 "training set" and 1/3 "test set",
train e.g. k-means on the 2/3 alone,
then measure confusion / success rate on the test set — generally worse than on the training set alone.
Much more can be said; see e.g.
Cross-validation.
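If you want to script the "biggest number in each column" figure rather than eyeball it, here is a small Python/NumPy sketch of that purity-style success rate (the toy matrix below is made up for illustration; it is not the optdigits matrix above):

import numpy as np

def purity(confusion):
    # confusion: one row per true class, one column per cluster.
    # Purity = sum of column maxima / total count; this is the optimistic
    # "success rate" discussed above, and it ignores split or missing classes.
    confusion = np.asarray(confusion)
    return confusion.max(axis=0).sum() / confusion.sum()

# Toy 3-class / 3-cluster example (not the optdigits data):
print(purity([[5, 1, 0],
              [0, 4, 2],
              [1, 0, 6]]))   # (5 + 4 + 6) / 19 ≈ 0.79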
You have to define the error criteria if you want to evaluate the performance of an algorithm, so I'm not sure exactly what you're asking. In some clustering and machine learning algorithms you define the error metric and it minimizes it.
Take a look at this
https://en.wikipedia.org/wiki/Confusion_matrix
to get some ideas
You have to define an error metric to measure with. In your case, a simple method would be to find the properties mapping of your product, as
p = properties(id)
where id is the product id, and p is likely a vector with one entry per property. Then you can define the error function e (or distance) between two products as
e = d(p1, p2)
Of course, each property must be evaluated to a number in this function. Then this error function can be used in the classification algorithm and in learning.
In your second example, it seems that you treat the pair (203, 7) as a successful classification, so I think you already have a metric yourself. You may want to be more specific to get a better answer.
Classification Error Rate (CER) is 1 - Purity (http://nlp.stanford.edu/IR-book/html/htmledition/evaluation-of-clustering-1.html)
ClusterPurity <- function(clusters, classes) {
  sum(apply(table(classes, clusters), 2, max)) / length(clusters)
}
Code from @john-colby
Or
CER <- function(clusters, classes) {
  1 - sum(apply(table(classes, clusters), 2, max)) / length(clusters)
}

Suggest optimal algorithm to find min number of days to purchase all toys

Note: I am still looking for a fast solution. Two of the solutions below are wrong and the third one is terribly slow.
I have N toys, numbered 1...N. Each toy has an associated cost. You have to go on a shopping spree such that on a particular day, if you buy toy i, then the next toy you can buy on the same day should be i+1 or greater. Moreover, the absolute cost difference between any two consecutively bought toys should be greater than or equal to k. What is the minimum number of days in which I can buy all the toys?
I tried a greedy approach by starting with toy 1 first and then seeing how many toys can I buy on day 1. Then, I find the smallest i that I have not bought and start again from there.
Example:
Toys : 1 2 3 4
Cost : 5 4 10 15
let k be 5
On day 1, buy toys 1, 3, and 4.
On day 2, buy toy 2.
Thus, I can buy all toys in 2 days.
Note that greedy does not work for the example below: N = 151 and k = 42
the costs of the toys 1...N in that order are :
383 453 942 43 27 308 252 721 926 116 607 200 195 898 568 426 185 604 739 476 354 533 515 244 484 38 734 706 608 136 99 991 589 392 33 615 700 636 687 625 104 293 176 298 542 743 75 726 698 813 201 403 345 715 646 180 105 732 237 712 867 335 54 455 727 439 421 778 426 107 402 529 751 929 178 292 24 253 369 721 65 570 124 762 636 121 941 92 852 178 156 719 864 209 525 942 999 298 719 425 756 472 953 507 401 131 150 424 383 519 496 799 440 971 560 427 92 853 519 295 382 674 365 245 234 890 187 233 539 257 9 294 729 313 152 481 443 302 256 177 820 751 328 611 722 887 37 165 739 555 811
You can find the optimal solution by solving the asymmetric Travelling Salesman.
Consider each toy as a node, and build the complete directed graph (that is, add an edge between each pair of nodes). The edge has cost 1 (the buyer has to continue on the next day) if the index is smaller or the cost of the target node is less than k (5 in your example) plus the cost of the source node, and 0 otherwise. Now find the shortest path covering this graph without visiting a node twice - i.e., solve the Travelling Salesman problem.
This idea is not very fast (the problem is NP-hard), but it should quickly give you a reference implementation.
This is not as difficult as ATSP. All you need to do is look for increasing subsequences.
Being a mathematician, the way I would solve the problem is to apply RSK to get a pair of Young tableaux, then the answer for how many days is the height of the tableau and the rows of the second tableau tell you what to purchase on which day.
The idea is to do Schensted insertion on the cost sequence c. For the example you gave, c = (5, 4, 10, 15), the insertion goes like this:
Step 1: Insert c[1] = 5
P = 5
Step 2: Insert c[2] = 4
5
P = 4
Step 3: Insert c[3] = 10
5
P = 4 10
Step 4: Insert c[4] = 15
5
P = 4 10 15
The idea is that you insert the entries of c into P one at a time. When inserting c[i] into row j:
if c[i] is bigger than the largest element in the row, add it to the end of the row;
otherwise, find the leftmost entry in row j that is larger than c[i], call it y, replace y with c[i], and then insert y into row j+1.
P is an array in which the lengths of the rows are weakly decreasing and the entries in each row of P (these are the costs) weakly increase. The number of rows is the number of days it will take.
For a more elaborate example (made by generating 9 random numbers)
1 2 3 4 5 6 7 8 9
c = [ 5 4 16 7 11 4 13 6 5]
16
7
5 6 11
P = 4 4 5 13
So the best possible solution takes 4 days, buying 4 items on day 1, 3 on day 2, 1 on day 3, and 1 on day 4.
Handling the additional constraint that consecutive costs must increase by at least k involves redefining the (partial) order on costs: say that c[i] <k< c[j] if and only if c[j] - c[i] >= k in the usual ordering on numbers. The above algorithm works for partial orders as well as total orders.
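To make the insertion rule concrete, here is a small Python sketch of the row insertion described above, using the ordinary numeric order (the k-gap variant would swap in the shifted comparison from the previous paragraph, and, as the note at the top of the question warns, whether this construction is optimal for the original problem is disputed). It reproduces the 9-element example:

import bisect

def schensted_rows(costs):
    # Build the rows of the insertion tableau P by Schensted row insertion:
    # append x to a row if it is >= the row's last entry, otherwise bump the
    # leftmost entry strictly larger than x down into the next row.
    rows = []
    for x in costs:
        j = 0
        while True:
            if j == len(rows):
                rows.append([x])
                break
            row = rows[j]
            if x >= row[-1]:
                row.append(x)
                break
            pos = bisect.bisect_right(row, x)   # leftmost entry strictly larger than x
            row[pos], x = x, row[pos]           # replace it and carry the bumped value down
            j += 1
    return rows

print(schensted_rows([5, 4, 16, 7, 11, 4, 13, 6, 5]))
# [[4, 4, 5, 13], [5, 6, 11], [7], [16]]  -> 4 rows = 4 days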
I somewhat feel that a greedy approach would give a fairly good result.
I think your approach is not optimal just because you always pick toy 1 to start while you should really pick the least expensive toy. Doing so would give you the most room to move to the next toy.
Each move being the least expensive one, it is just a DFS problem where you always follow the least expensive path, constrained by k.

Fastest gap sequence for shell sort?

According to Marcin Ciura's Optimal (best known) sequence of increments for shell sort algorithm,
the best sequence for shellsort is 1, 4, 10, 23, 57, 132, 301, 701...,
but how can I generate such a sequence?
In Marcin Ciura's paper, he said:
Both Knuth’s and Hibbard’s sequences
are relatively bad, because they are
defined by simple linear recurrences.
but most algorithm books I found tend to use Knuth's sequence: h = 3h + 1 (1, 4, 13, 40, ...), because it's easy to generate. What's your way of generating a shellsort sequence?
Ciura's paper generates the sequence empirically -- that is, he tried a bunch of combinations and this was the one that worked the best. Generating an optimal shellsort sequence has proven to be tricky, and the problem has so far been resistant to analysis.
The best known increment sequence is Sedgewick's, which you can read about here (see p. 7).
If your data set has a definite upper bound in size, then you can hardcode the step sequence. You should probably only worry about generality if your data set is likely to grow without an upper bound.
The sequence shown seems to grow roughly as an exponential series, albeit with quirks. There seems to be a majority of prime numbers, but with non-primes in the mix as well. I don't see an obvious generation formula.
A valid question, assuming you must deal with arbitrarily large sets, is whether you need to emphasise worst-case performance, average-case performance, or almost-sorted performance. If the latter, you may find that a plain insertion sort using a binary search for the insertion step might be better than a shellsort. If you need good worst-case performance, then Sedgewick's sequence appears to be favoured. The sequence you mention is optimised for average-case performance, where the number of comparisons outweighs the number of moves.
I would not be ashamed to take the advice given in Wikipedia's Shellsort article,
With respect to the average number of comparisons, the best known gap sequences are 1, 4, 10, 23, 57, 132, 301, 701 and similar, with gaps found experimentally. Optimal gaps beyond 701 remain unknown, but good results can be obtained by extending the above sequence according to the recursive formula h_k = floor(2.25 * h_{k-1}).
Tokuda's sequence [1, 4, 9, 20, 46, 103, ...], defined by the simple formula h_k = ceil(h'_k), where h'_k = 2.25 * h'_{k-1} + 1 and h'_1 = 1, can be recommended for practical applications.
Guessing from the pseudonym, it seems Marcin Ciura edited the WP article himself.
The sequence is 1, 4, 10, 23, 57, 132, 301, 701, 1750. For every next number after 1750, multiply the previous number by 2.25 and round down.
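If you just want something to paste into code, here is a short Python sketch that hardcodes Ciura's published gaps and extends them with the floor(2.25 * h) rule quoted above (the function name and the largest-first ordering are my choices):

def ciura_gaps(n):
    # Ciura's experimentally found gaps, extended by h -> floor(2.25 * h),
    # returned largest-first for use on an array of length n.
    gaps = [1, 4, 10, 23, 57, 132, 301, 701, 1750]
    while gaps[-1] * 9 // 4 < n:        # floor(2.25 * h) == 9 * h // 4
        gaps.append(gaps[-1] * 9 // 4)
    return [g for g in reversed(gaps) if g < n]

print(ciura_gaps(100000))
# [44842, 19930, 8858, 3937, 1750, 701, 301, 132, 57, 23, 10, 4, 1]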
Sedgewick observes that coprimality is good. This rings true: if there are separate ‘streams’ not much cross-compared until the gap is small, and one stream contains mostly smalls and one mostly larges, then the small gap might need to move elements far. Coprimality maximises cross-stream comparison.
Gonnet and Baeza-Yates advise growth by a factor of about 2.2; Tokuda by 2.25. It is well known that if there is a mathematical constant between 2⅕ and 2¼ then it must† be precisely √5 ≈ 2.236.
So start {1, 3}, and then each subsequent is the integer closest to previous·√5 that is coprime to all previous except 1. This sequence can be pre-calculated and embedded in code. There follow the values up to 2⁶⁴ ≈ eighteen quintillion.
{1, 3, 7, 16, 37, 83, 187, 419, 937, 2099, 4693, 10499, 23479, 52501, 117391, 262495, 586961, 1312481, 2934793, 6562397, 14673961, 32811973, 73369801, 164059859, 366848983, 820299269, 1834244921, 4101496331, 9171224603, 20507481647, 45856123009, 102537408229, 229280615033, 512687041133, 1146403075157, 2563435205663, 5732015375783, 12817176028331, 28660076878933, 64085880141667, 143300384394667, 320429400708323, 716501921973329, 1602147003541613, 3582509609866643, 8010735017708063, 17912548049333207, 40053675088540303, 89562740246666023, 200268375442701509, 447813701233330109, 1001341877213507537, 2239068506166650537, 5006709386067537661, 11195342530833252689}
(Obviously, omit those that would overflow the relevant array index type. So if that is a signed long long, omit the last.)
On average these have ≈1.96 distinct prime factors and ≈2.07 non-distinct prime factors; 19/55 ≈ 35% are prime; and all but three are square-free (2⁴, 13·19² = 4693, 3291992692409·23³ ≈ 4.0·10¹⁶).
I would welcome formal reasoning about this sequence.
† There’s a little mischief in this “well known … must”. Choosing ∉ℚ guarantees that the closest number that is coprime cannot be a tie, but rational with odd denominator would achieve same. And I like the simplicity of √5, though other possibilities include e^⅘, 11^⅓, π/√2, and √π divided by the Chow-Robbins constant. Simplicity favours √5.
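Not an endorsement of the sequence, but for anyone who wants to experiment with it, here is a short Python sketch that reproduces the listed values from the description above (start with {1, 3}, then repeatedly take the integer closest to previous·√5 that is coprime to all previous terms except 1). The function name is mine, and for gaps anywhere near 2⁶⁴ you would need exact arithmetic rather than a floating-point √5:

from math import gcd

def coprime_sqrt5_gaps(limit):
    # After {1, 3}, each term is the integer closest to previous * sqrt(5)
    # that is coprime to every earlier term other than 1.
    gaps = [1, 3]
    while True:
        target = gaps[-1] * 5 ** 0.5
        base = round(target)
        candidates = sorted(range(base - 100, base + 101),
                            key=lambda c: abs(c - target))
        nxt = next(c for c in candidates
                   if all(gcd(c, g) == 1 for g in gaps[1:]))
        if nxt > limit:
            return gaps
        gaps.append(nxt)

print(coprime_sqrt5_gaps(10**6))
# [1, 3, 7, 16, 37, 83, 187, 419, 937, 2099, 4693, 10499, 23479,
#  52501, 117391, 262495, 586961]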
I've found a sequence similar to Marcin Ciura's sequence:
1, 4, 9, 23, 57, 138, 326, 749, 1695, 3785, 8359, 18298, 39744, etc.
For comparison, Ciura's sequence is:
1, 4, 10, 23, 57, 132, 301, 701, 1750
Mine is the mean of the first 2^i prime numbers. Python code to compute these means is here:
import numpy as np

def isprime(n):
    ''' Check if integer n is a prime '''
    n = abs(int(n))  # n is a positive integer
    if n < 2:        # 0 and 1 are not primes
        return False
    if n == 2:       # 2 is the only even prime number
        return True
    if not n & 1:    # all other even numbers are not primes
        return False
    # Range starts with 3 and only needs to go up to the square root
    # of n, for all odd numbers
    for x in range(3, int(n**0.5) + 1, 2):
        if n % x == 0:
            return False
    return True

# To apply a function to a numpy array, one has to vectorize the function
vectorized_isprime = np.vectorize(isprime)
a = np.arange(10000000)
primes = a[vectorized_isprime(a)]
#print(primes)
for i in range(2, 20):
    print(primes[0:2**i].mean())
The output is:
4.25
9.625
23.8125
57.84375
138.953125
326.1015625
749.04296875
1695.60742188
3785.09082031
8359.52587891
18298.4733887
39744.887085
85764.6216431
184011.130096
392925.738174
835387.635033
1769455.40302
3735498.24225
The ratio between consecutive terms of the sequence slowly decreases from about 2.5 to 2.
Maybe this connection could improve Shellsort in the future.
I discussed this question here yesterday, including the gap sequences I have found to work best given a specific (low) n.
In the middle I write
A nasty side-effect of shellsort is that when using a set of random
combinations of n entries (to save processing/evaluation time) to test
gaps you may end up with either the best gaps for n entries or the
best gaps for your set of combinations - most likely the latter.
The problem lies in testing the proposed gaps such that valid conclusions can be drawn. Obviously, testing the gaps against all n! orderings that a set of n unique values can be expressed as is infeasible. Testing in this manner for n=16, for example, means that 20,922,789,888,000 different combinations of n values must be sorted to determine the exact average, worst and reverse-sorted cases - just to test one set of gaps, and that set might not be the best. 2^(16-2) sets of gaps are possible for n=16, the first being {1} and the last {15,14,13,12,11,10,9,8,7,6,5,4,3,2,1}.
To illustrate how using random combinations might give incorrect results, assume n=3, which can take six different orderings: 012, 021, 102, 120, 201 and 210. You produce a set of two random sequences to test the two possible gap sets, {1} and {2,1}. Assume that these sequences turn out to be 021 and 201. For {1}, 021 can be sorted with three comparisons (02, 21 and 01) and 201 with (20, 21, 01), giving a total of six comparisons; divide by two and voilà, an average of 3 and a worst case of 3. Using {2,1} gives (01, 02, 21 and 01) for 021 and (21, 10 and 12) for 201: seven comparisons with a worst case of 4 and an average of 3.5. The actual average and worst case for {1} are 8/3 and 3, respectively. For {2,1} the values are 10/3 and 4. The averages were too high in both cases and the worst cases were correct. Had 012 been one of the cases, {1} would have given a 2.5 average - too low.
Now extend this to finding a set of random sequences for n=16 such that no set of gaps tested will be favored in comparison with the others and the results are close (or equal) to the true values, all the while keeping processing to a minimum. Can it be done? Possibly. After all, everything is possible - but is it probable? I think that for this problem random is the wrong approach. Selecting the sequences according to some system may be less bad and might even be good.
More information regarding jdaw1's post:
Gonnet and Baeza-Yates advise growth by a factor of about 2.2; Tokuda by 2.25. It is well known that if there is a mathematical constant between 2⅕ and 2¼ then it must† be precisely √5 ≈ 2.236.
It is known that √5 * √5 is 5, so I think every other index should increase by a factor of five. So, with the first index being 1 (insertion sort) and the second being 3, every second subsequent entry grows by a factor of five. There follow the values up to 2⁶⁴ ≈ eighteen quintillion.
{1, 3,, 15,, 75,, 375,, 1 875,, 9 375,, 46 875,, 234 375,, 1 171 875,, 5 859 375,, 29 296 875,, 146 484 375,, 732 421 875,, 3 662 109 375,, 18 310 546 875,, 91 552 734 375,, 457 763 671 875,, 2 288 818 359 375,, 11 444 091 796 875,, 57 220 458 984 375,, 286 102 294 921 875,, 1 430 511 474 609 375,, 7 152 557 373 046 875,, 35 762 786 865 234 375,, 178 813 934 326 171 875,, 894 069 671 630 859 375,, 4 470 348 358 154 296 875,}
The values in the gaps can simply be calculated by taking the preceding value and multiplying it by √5, rounding to whole numbers, which gives the resulting array (using 2.2360679775 * 5 ^ n * 3):
{1, 3, 7, 15, 34, 75, 168, 375, 839, 1 875, 4 193, 9 375, 20 963, 46 875, 104 816, 234 375, 524 078, 1 171 875, 2 620 392, 5 859 375, 13 101 961, 29 296 875, 65 509 804, 146 484 375, 327 549 020, 732 421 875, 1 637 745 101, 3 662 109 375, 8 188 725 504, 18 310 546 875, 40 943 627 518, 91 552 734 375, 204 718 137 589, 457 763 671 875, 1 023 590 687 943, 2 288 818 359 375, 5 117 953 439 713, 11 444 091 796 875, 25 589 767 198 563, 57 220 458 984 375, 127 948 835 992 813, 286 102 294 921 875, 639 744 179 964 066, 1 430 511 474 609 375, 3 198 720 899 820 328, 7 152 557 373 046 875, 15 993 604 499 101 639, 35 762 786 865 234 375, 79 968 022 495 508 194, 178 813 934 326 171 875, 399 840 112 477 540 970, 894 069 671 630 859 375, 1 999 200 562 387 704 849, 4 470 348 358 154 296 875, 9 996 002 811 938 524 246}
(Obviously, omit those that would overflow the relevant array index type. So if that is a signed long long, omit the last.)
