When I use MATLAB's imrotate function, the output image has missing parts, which are filled with zeros.
Can I somehow make it so the triangles in the corners (i.e. the missing parts) are filled with the opposite edge of the image? The effect would be as if I had tiled the image with 8 copies of itself around it, then rotated and cropped the bigger image.
Thanks for any help,
I think the best (and most memory-efficient) way of doing this is to use interp2 to sample the original image at the new pixel centers (the original pixel centers "rotated" by the opposite of the desired angle), and then to use mod on these sample points to ensure that they fall within the dimensions of the original image. The added benefit of mod is that new x,y coordinates that are out of range simply "wrap around" to the other side of the image.
% Load some sample image
load mri
im = double(D(:,:,12));
[rows, cols] = size(im);
% The original pixel centers
[xx,yy] = meshgrid(1:cols, 1:rows);
% Setup the rotation matrix
theta = pi/4;
R = [cos(-theta), -sin(-theta);
     sin(-theta),  cos(-theta)];
% Center of Rotation (center of the image)
center = [cols/2, rows/2];
% Determine the new pixel centers (rotated)
xy = bsxfun(@minus, [xx(:), yy(:)], center) * R;
xy = bsxfun(@plus, xy, center);
% Mod these using the dimensions of the image to "wrap" around
x = mod(xy(:,1) - 1, cols - 1) + 1;
y = mod(xy(:,2) - 1, rows - 1) + 1;
% Sample the original image at the new pixel centers
im2 = interp2(xx, yy, im, x, y);
im2 = reshape(im2, [rows, cols]);
Rotation = 45 degrees
This will work with any arbitrary aspect ratio (below is an image demonstrating the issue brought up by @BlackAdder, where a repmat [3,3] wouldn't work because the image is tall and narrow).
Rotation = 90 degrees
This also has the added benefit that it doesn't rely on the Image Processing Toolbox.
The most direct way, using the Image Processing Toolbox, is to use padarray. Given an image:
>> img = reshape(1:25, 5, 5)
img =
1 6 11 16 21
2 7 12 17 22
3 8 13 18 23
4 9 14 19 24
5 10 15 20 25
Then you can replicate the image on all sides:
>> padarray(img, size(img), 'circular')
ans =
1 6 11 16 21 | 1 6 11 16 21 | 1 6 11 16 21
2 7 12 17 22 | 2 7 12 17 22 | 2 7 12 17 22
3 8 13 18 23 | 3 8 13 18 23 | 3 8 13 18 23
4 9 14 19 24 | 4 9 14 19 24 | 4 9 14 19 24
5 10 15 20 25 | 5 10 15 20 25 | 5 10 15 20 25
--------------------------------------------------------------------------
1 6 11 16 21 | 1 6 11 16 21 | 1 6 11 16 21
2 7 12 17 22 | 2 7 12 17 22 | 2 7 12 17 22
3 8 13 18 23 | 3 8 13 18 23 | 3 8 13 18 23
4 9 14 19 24 | 4 9 14 19 24 | 4 9 14 19 24
5 10 15 20 25 | 5 10 15 20 25 | 5 10 15 20 25
--------------------------------------------------------------------------
1 6 11 16 21 | 1 6 11 16 21 | 1 6 11 16 21
2 7 12 17 22 | 2 7 12 17 22 | 2 7 12 17 22
3 8 13 18 23 | 3 8 13 18 23 | 3 8 13 18 23
4 9 14 19 24 | 4 9 14 19 24 | 4 9 14 19 24
5 10 15 20 25 | 5 10 15 20 25 | 5 10 15 20 25
(Lines added to show the original matrix in the center and the padded copies.) Once you're done rotating, you can crop the middle of the matrix for your final image.
Note that this method also works on 3-channel images.
As @Suever and @BlackAdder note in the comments, this padding can be insufficient for images with a large aspect ratio (greater than 25.456:9), particularly for rotations near odd multiples of 45°. You can make the padding more accurate by calculating the maximum you might need.
s = size(img);
s = s(1:2); % account for multi-channel images
maxext = sqrt(s * s.'); % calculate length of image diagonal
padsize = ceil((maxext - s)/2); % find amount of padding needed for each side
padarray(img, padsize, 'circular');
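Putting the pieces together, the whole pipeline might look like the sketch below. This is a minimal sketch, not the only way to do it: img and theta are placeholders for your image and rotation angle in degrees, and the 'bilinear' and 'crop' options of imrotate are one reasonable choice among several.
s = size(img);
s = s(1:2);                            % account for multi-channel images
padsize = ceil((sqrt(s * s.') - s)/2); % pad out to the image diagonal
padded = padarray(img, padsize, 'circular');
rotated = imrotate(padded, theta, 'bilinear', 'crop');
% Crop the central region back to the original size
out = rotated(padsize(1)+1 : padsize(1)+s(1), ...
              padsize(2)+1 : padsize(2)+s(2), :);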
I want to extract image patches from the input image in my TensorFlow model.
Let's say the input image is [batch, in_width, in_height, channels]; I want to output [no_patches, patch_width, patch_height, channels], where no_patches is the total number of patches that can be extracted from the input image.
I found that tf.extract_image_patches can do the job.
However, I don't understand the difference between the arguments strides and rates.
Can someone explain how to use the above function to do this?
strides is about the movement of the window on your data.
rates is about how 'spread out' the window is.
For instance, if you use strides = [1,5,5,1] your window jumps by 5 pixels in both the 1st and 2nd dimension. If you use rates = [1,1,1,1] your window is 'compact', meaning that all the pixels are contiguous. If you use rates = [1,1,2,1], then your window spreads out in the 2nd dimension and takes every 2nd pixel.
Example with ksizes = [1,3,2,1] (ignore strides for now): on the left we use rates = [1,1,1,1], in the middle we use rates = [1,1,2,1], on the right we use rates = [1,2,2,1]:
* * 3 4 5 * 2 * 4 5 * 2 * 4 5
* * 8 9 10 * 7 * 9 10 6 7 8 9 10
* * 13 14 15 * 12 * 14 15 * 12 * 14 15
16 17 18 19 20 16 17 18 19 20 16 17 18 19 20
21 22 23 24 25 21 22 23 24 25 * 22 * 24 25
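To make this concrete, here is a minimal runnable sketch. Assumptions: TensorFlow 2.x, where the function is called tf.image.extract_patches and the size argument is named sizes (tf.extract_image_patches with ksizes is the TF 1.x name used in the question). It reproduces the middle diagram above:
import tensorflow as tf

# A 5x5 single-channel image holding the values 1..25, as in the diagrams
image = tf.reshape(tf.range(1, 26, dtype=tf.float32), [1, 5, 5, 1])

patches = tf.image.extract_patches(
    images=image,
    sizes=[1, 3, 2, 1],    # a 3x2 window, as in the example above
    strides=[1, 5, 5, 1],  # move the window 5 pixels per step
    rates=[1, 1, 2, 1],    # take every 2nd pixel along the width
    padding='VALID')

print(patches.shape)              # (1, 1, 1, 6)
print(tf.reshape(patches, [-1]))  # [1, 3, 6, 8, 11, 13], the starred pixels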
This is an algorithmic problem. I can't seem to find a way to compare the relative positions of two cubes in a Rubik's Cube.
I've numbered all 20 cubes in my program, and I'm using this coordinate system, but now that I want to model two cubes in relative position I'm having trouble.
For example, say I saw the two cubes I'm watching in positions 8 and 10, then later I saw them in positions 12 and 13. In both situations they're both on the same face of the cube, and they're both across from each other, not adjacent. Relatively speaking, that's the same representation of their location.
(By the way, I'm only concerned with the "edge cubes" at this point, not the corners, so positions 8, 10, 9, 11, 12, 13, 14, 15, 16, 17, 18, 19.)
So anyway, I thought that if I listed every position in relation to each starting point, using the same algorithm to list each one, then I could compare the indexes, and if they were the same, the relative position would be the same (but I was wrong; I might be on the right track, but it doesn't always work):
08 10 18 16 12 13 14 15 09 11 19 17
09 11 19 17 13 14 15 12 10 08 16 18
10 18 16 08 14 15 12 13 11 09 17 19
11 19 17 09 15 12 13 14 08 10 18 16
12 13 14 15 11 19 17 09 16 08 10 18
13 14 15 12 08 16 18 10 17 09 11 19
14 15 12 13 09 17 19 11 18 10 08 16
15 12 13 14 10 18 16 08 19 11 09 17
16 08 10 18 19 17 09 11 13 12 15 14
17 09 11 19 16 18 10 08 14 13 12 15
18 16 08 10 17 19 11 09 15 14 13 12
19 17 09 11 18 16 08 10 12 15 14 13
Consider the following two positions: cube A is at position 19 and cube B is at 16. They're adjacent on the bottom level. Here's the row for 19 and its indices:
 0  1  2  3  4  5  6
19 17 09 11 18 16 08 10 12 15 14 13
Now compare that to the relative positions of cubes C and D at 13 and 9. C and D are adjacent on the right side, so they should have the same relative position. But my method doesn't determine that.
 0  1  2  3  4  5  6  7  8  9
13 14 15 12 08 16 18 10 17 09 11 19
Index 6 is not equal to index 9. Anyway, that was my best approach, and it took all day to come up with.
Does anyone have any other strategies that come to mind for calculating / expressing relative position between two locations on a cube?
Thanks very much for your help, and consideration on this topic!
There are two problems here:
First, I think you made a mistake when you calculated the relative positions from cube 13. I get:
 0  1  2  3  4  5  6  7  8  9 10 11
13 14 15 12 17 09 11 19 08 16 18 10
This lines up with the other one, so cube 9 occurs at position 5. Compare this with the first row:
 0  1  2  3  4  5  6
19 17 09 11 18 16 08 10 12 15 14 13
As required, cube 16 also occurs at position 5. (I think you mixed something up in your question: you mention index 6 when you mean 5, and you number the indexes up to 6, but at position 6 there is cube 08, not cube 16. Please check that again.)
The second problem is that given only a cube position without a reference cube for the orientation, there are two ways to number the cubes. Since your cube is not colored, you can rotate the cube by 180 degrees and come to another numbering for the reference cubes. Given that the relative positions for cube 19 are correct, I can also number the relative positions for cube 13 like this:
 0  1  2  3  4  5  6  7  8  9 10 11
13 12 15 14 08 16 18 10 17 09 11 19
Note that this is close to your version, but indexes 1 to 3 are in a different order. I think you were not consistent in the way you looked at the cube.
The main problem already becomes apparent in this paragraph:
For example, say I saw the two cubes I'm watching in positions 8 and 10, then later I saw them in positions 12 and 13. In both situations they're both on the same face of the cube, and they're both across from each other, not adjacent. Relatively speaking, that's the same representation of their location.
For every cube, there are two other cubes that are on the same face and across from it. To eliminate this ambiguity, you have to take orientations into account or reduce the number of relative positions (e.g. indexes 1 and 3 in your current scheme would denote the same relative position).
I'm looking for something in Julia like a comprehension, but for a matrix instead of a vector. If I have some single-variable function f(x) and I want an array filled with f(i) for i in 1..10, I can do this:
[f(i) for i = 1:10]
If I have some two-variable function g(i,j) and I want a matrix from i=[1,10]; j=[1,10] filled with the function I can do this:
M = zeros(10, 10)
for i in 1:10
for j in 1:10
M[i,j] = g(i,j)
end
end
Is there some shortcut that allows me to express that in a shorter way, without wasting time allocating all those zeros?
Just use a multidimensional comprehension directly:
julia> g(x,y) = 2x+y
g (generic function with 1 method)
julia> [g(i,j) for i=1:10, j=1:10]
10x10 Array{Int64,2}:
3 4 5 6 7 8 9 10 11 12
5 6 7 8 9 10 11 12 13 14
7 8 9 10 11 12 13 14 15 16
9 10 11 12 13 14 15 16 17 18
11 12 13 14 15 16 17 18 19 20
13 14 15 16 17 18 19 20 21 22
15 16 17 18 19 20 21 22 23 24
17 18 19 20 21 22 23 24 25 26
19 20 21 22 23 24 25 26 27 28
21 22 23 24 25 26 27 28 29 30
This works for any number of dimensions, by adding variable ranges at the end.
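For example, a quick sketch with three index variables (nothing here beyond the comprehension syntax itself):
julia> A = [i + 10j + 100k for i = 1:2, j = 1:3, k = 1:4];

julia> size(A)
(2, 3, 4)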
I have the following problem:
An image is given and I am doing some blob detection. As a limit, let's say I have a maximum of 16 blobs, and from each blob I calculate the centroid (x,y position).
If no distortion happens, these centroids are arranged in an equidistant 4x4 grid, but they could be very distorted.
The assumption is that they keep more or less the grid form, but they could be badly warped.
I need to sort the blobs such that I know which one is the nearest left, right, up and down. So the best would be to write these blobs into a matrix.
If this is not enough, it could happen that I detect fewer than 16, and then I also need to sort them into a matrix.
Does anyone know how this could be efficiently solved in MATLAB?
Thanks.
[update 1:]
I uploaded an image; the red numbers are the numbers my blob detection algorithm assigns to each blob.
The resulting matrix should look like this with these numbers:
1 2 4 3
6 5 7 8
9 10 11 12
13 16 14 15
e.g. I start with blob 11; the nearest number to the right is 12, and so on.
[update 2:]
The posted solution looks quite nice. In reality it could happen that one of the outer spots is missing, or maybe two... I know that this makes everything much more complicated, and I just want to get a feeling for whether it is worth spending time on.
These problems arise if you analyze a wavefront with a Shack-Hartmann wavefront sensor and you want to increase the dynamic range :-)
The spots could be so warped that the dividing lines are no longer orthogonal.
Maybe someone knows good literature on classification algorithms.
The best solution would be one that could be implemented on an FPGA without too much effort, but at this stage that is not so important.
This will work as long as the blobs form a square and are relatively ordered:
Image:
Code:
bw = imread('blob.jpg');
bw = im2bw(bw);
rp = regionprops(bw,'Centroid');
% Must be a square
side = sqrt(length(rp));
centroids = vertcat(rp.Centroid);
centroid_labels = cellstr(num2str([1:length(rp)]'));
figure(1);
imshow(bw);
hold on;
text(centroids(:,1),centroids(:,2),centroid_labels,'Color','r','FontSize',60);
hold off;
% Find topleft element - minimum distance from origin
[~,topleft_idx] = min(sqrt(centroids(:,1).^2+centroids(:,2).^2));
% Find bottomright element - maximum distance from origin
[~,bottomright_idx] = max(sqrt(centroids(:,1).^2+centroids(:,2).^2));
% Find bottom left element - maximum normal distance from line formed by
% topleft and bottom right blob
A = centroids(bottomright_idx,2)-centroids(topleft_idx,2);
B = centroids(topleft_idx,1)-centroids(bottomright_idx,1);
C = -B*centroids(topleft_idx,2)-A*centroids(topleft_idx,1);
[~,bottomleft_idx] = max(abs(A*centroids(:,1)+B*centroids(:,2)+C)/sqrt(A^2+B^2));
% Sort blobs based on distance from line formed by topleft and bottomleft
% blob
A = centroids(bottomleft_idx,2)-centroids(topleft_idx,2);
B = centroids(topleft_idx,1)-centroids(bottomleft_idx,1);
C = -B*centroids(topleft_idx,2)-A*centroids(topleft_idx,1);
[~,leftsort_idx] = sort(abs(A*centroids(:,1)+B*centroids(:,2)+C)/sqrt(A^2+B^2));
% Reorder centroids and redetermine bottomright_idx and bottomleft_idx
centroids = centroids(leftsort_idx,:);
bottomright_idx = find(leftsort_idx == bottomright_idx);
bottomleft_idx = find(leftsort_idx == bottomleft_idx);
% Sort blobs based on distance from line formed by bottomleft and
% bottomright blob
A = centroids(bottomright_idx,2)-centroids(bottomleft_idx,2);
B = centroids(bottomleft_idx,1)-centroids(bottomright_idx,1);
C = -B*centroids(bottomleft_idx,2)-A*centroids(bottomleft_idx,1);
[~,bottomsort_idx] = sort(abs(A*reshape(centroids(:,1),side,side)+B*reshape(centroids(:,2),side,side)+C)/sqrt(A^2+B^2),'descend');
disp(leftsort_idx(bsxfun(@plus,bottomsort_idx,0:side:side^2-1)));
Output:
2 12 13 20 25 31
4 11 15 19 26 32
1 7 14 21 27 33
3 8 16 22 28 34
6 9 17 24 29 35
5 10 18 23 30 36
Just curious, are you using this to automate camera calibration through a checkerboard or something?
UPDATE:
For skewed image
tform = maketform('affine',[1 0 0; .5 1 0; 0 0 1]);
bw = imtransform(bw,tform);
Output:
1 4 8 16 21 25
2 5 10 18 23 26
3 6 13 19 27 29
7 9 17 24 30 32
11 14 20 28 33 35
12 15 22 31 34 36
For rotated image:
bw = imrotate(bw,20);
Output:
1 4 10 17 22 25
2 5 12 18 24 28
3 6 14 21 26 31
7 9 16 23 30 32
8 13 19 27 33 35
11 15 20 29 34 36
When does the quicksort algorithm take O(n^2) time?
Quicksort works by taking a pivot, then putting all the elements lower than that pivot on one side and all the higher elements on the other; it then recursively sorts the two subgroups in the same way (all the way down until everything is sorted). Now if you pick the worst pivot each time (the highest or lowest element in the list), you'll only have one group to sort, containing everything other than the original pivot. In essence this gives you n groups that each need to be iterated through n times, hence the O(n^2) complexity.
The most common reason for this occurring is if the pivot is chosen to be the first or last element in the list in the quicksort implementation. For unsorted lists this is just as valid as any other choice; however, for sorted or nearly sorted lists (which occur quite commonly in practice) it is very likely to give you the worst-case scenario. This is why all half-decent implementations tend to take a pivot from the centre of the list.
There are modifications to the standard quicksort algorithm to avoid this edge case - one example is the dual-pivot quicksort that was integrated into Java 7.
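To see the mechanics, here is a minimal quicksort sketch (a generic illustration, not any particular library's implementation) that uses the first element as the pivot; on an already sorted array every partition splits off just one element, which is exactly the O(n^2) behaviour described above:
// Lomuto-style partition with the first element as the pivot
#include <algorithm> // swap
#include <vector>

void quicksort(std::vector<int>& a, int lo, int hi)
{
    if (lo >= hi) return;
    int const pivot = a[lo];           // worst choice for sorted input
    int i = lo;
    for (int j = lo + 1; j <= hi; ++j) // move smaller elements left
        if (a[j] < pivot)
            std::swap(a[++i], a[j]);
    std::swap(a[lo], a[i]);            // put the pivot in its final place
    quicksort(a, lo, i - 1);           // empty on sorted input ...
    quicksort(a, i + 1, hi);           // ... while this side has n-1 elements
}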
In short, Quicksort for sorting an array lowest element first works like this:
Choose a pivot element
Partition the array such that all elements smaller than the pivot are on the left side
Recursively do steps 1 and 2 for the left side and the right side
Ideally, you would want a pivot element that partitions the sequence into two equally long subsequences, but this is not so easy.
There are different schemes for choosing the pivot element. Early versions just took the leftmost element. In the worst case, the pivot element will always be the lowest element of the current range.
Leftmost element is pivot
In this case it is easy to see that the worst case is a monotonically increasing array:
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20
Rightmost element is pivot
Similarly, when choosing the rightmost element the worst case will be a decreasing sequence.
20 19 18 17 16 15 14 13 12 11 10 9 8 7 6 5 4 3 2 1 0
Center element is pivot
One possible remedy for the worst case on presorted arrays is to use the center element (or slightly left of center if the sequence is of even length). Then the worst case is rather more exotic. It can be constructed by modifying the Quicksort algorithm to set the array elements corresponding to the currently selected pivot element to a monotonically increasing value: we know the first pivot is the center, so the center must be the lowest value, e.g. 0. Next it gets swapped to the leftmost position, i.e. the leftmost value is now in the center and would be the next pivot element, so it must be 1. Now we can already guess that the array would look like this:
1 ? ? 0 ? ? ?
Here is the C++ code for the modified Quicksort to generate a worst-case sequence:
// g++ -std=c++11 worstCaseQuicksort.cpp && ./a.out
#include <algorithm> // swap
#include <iostream>
#include <vector>
#include <numeric> // iota
int main( void )
{
std::vector<int> v(20); /**< will hold the worst case later */
/* p basically saves the indices of what was the initial position of the
* elements of v. As they get swapped around by Quicksort p becomes a
* permutation */
auto p = v;
std::iota( p.begin(), p.end(), 0 );
/* in the worst case we need to work on v.size() sequences, because
* the initial sequence is always split after the first element */
for ( auto i = 0u; i < v.size(); ++i )
{
/* i can be interpreted as:
* - subsequence starting index
* - current minimum value, if we start at 0 */
/* note that in the last step iPivot == v.size()-1 */
auto const iPivot = ( v.size()-1 + i )/2;
v[ p[ iPivot ] ] = i;
std::swap( p[ iPivot ], p[i] );
}
for ( auto x : v ) std::cout << " " << x;
}
The result:
0
0 1
1 0 2
2 0 1 3
1 3 0 2 4
4 2 0 1 3 5
1 5 3 0 2 4 6
4 2 6 0 1 3 5 7
1 5 3 7 0 2 4 6 8
8 2 6 4 0 1 3 5 7 9
1 9 3 7 5 0 2 4 6 8 10
6 2 10 4 8 0 1 3 5 7 9 11
1 7 3 11 5 9 0 2 4 6 8 10 12
10 2 8 4 12 6 0 1 3 5 7 9 11 13
1 11 3 9 5 13 7 0 2 4 6 8 10 12 14
8 2 12 4 10 6 14 0 1 3 5 7 9 11 13 15
1 9 3 13 5 11 7 15 0 2 4 6 8 10 12 14 16
16 2 10 4 14 6 12 8 0 1 3 5 7 9 11 13 15 17
1 17 3 11 5 15 7 13 9 0 2 4 6 8 10 12 14 16 18
10 2 18 4 12 6 16 8 14 0 1 3 5 7 9 11 13 15 17 19
1 11 3 19 5 13 7 17 9 15 0 2 4 6 8 10 12 14 16 18 20
16 2 12 4 20 6 14 8 18 10 0 1 3 5 7 9 11 13 15 17 19 21
1 17 3 13 5 21 7 15 9 19 11 0 2 4 6 8 10 12 14 16 18 20 22
12 2 18 4 14 6 22 8 16 10 20 0 1 3 5 7 9 11 13 15 17 19 21 23
1 13 3 19 5 15 7 23 9 17 11 21 0 2 4 6 8 10 12 14 16 18 20 22 24
There is order in this. The right side is just increments of two starting with zero. The left side also has an order. Let's format the left side of the 73-element worst-case sequence nicely using ASCII art:
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35
------------------------------------------------------------------------------------------------------------
1 3 5 7 9 11 13 15 17 19 21 23 25 27 29 31 33 35
37 39 41 43 45 47 49 51 53
55 57 59 61 63
65 67
69
71
The header row is the element index. In the first row, numbers starting from 1 and increasing by 2 are given to every 2nd element. In the second row the same is done to every 4th element, in the 3rd row numbers are assigned to every 8th element, and so on. In this case the first value to be written in the i-th row is at index 2^i-1, but for certain lengths this looks a tad different.
The resulting structure is reminiscent of an inverted binary tree whose nodes are labeled bottom-up, starting from the leaves.
Median of leftmost, center and rightmost elements is pivot
Another way is to use the median of the leftmost, center and rightmost elements. In this case the worst case can only be that the (w.l.o.g.) left subsequence is of length 2 (not just length 1 as in the examples above). We also assume that the rightmost value will always be the highest of the median-of-three, which also means it is the highest of all values. Making adjustments to the program above, we now have this:
auto p = v;
std::iota( p.begin(), p.end(), 0 );
auto i = 0u;
for ( ; i < v.size(); i+=2 )
{
auto const iPivot0 = i;
auto const iPivot1 = ( i + v.size()-1 )/2;
v[ p[ iPivot1 ] ] = i+1;
v[ p[ iPivot0 ] ] = i;
std::swap( p[ iPivot1 ], p[i+1] );
}
if ( v.size() > 0 && i == v.size() )
v[ v.size()-1 ] = i-1;
The generated sequences are:
0
0 1
0 1 2
0 1 2 3
0 2 1 3 4
0 2 1 3 4 5
0 4 2 1 3 5 6
0 4 2 1 3 5 6 7
0 4 2 6 1 3 5 7 8
0 4 2 6 1 3 5 7 8 9
0 8 2 6 4 1 3 5 7 9 10
0 8 2 6 4 1 3 5 7 9 10 11
0 6 2 10 4 8 1 3 5 7 9 11 12
0 6 2 10 4 8 1 3 5 7 9 11 12 13
0 10 2 8 4 12 6 1 3 5 7 9 11 13 14
0 10 2 8 4 12 6 1 3 5 7 9 11 13 14 15
0 8 2 12 4 10 6 14 1 3 5 7 9 11 13 15 16
0 8 2 12 4 10 6 14 1 3 5 7 9 11 13 15 16 17
0 16 2 10 4 14 6 12 8 1 3 5 7 9 11 13 15 17 18
0 16 2 10 4 14 6 12 8 1 3 5 7 9 11 13 15 17 18 19
0 10 2 18 4 12 6 16 8 14 1 3 5 7 9 11 13 15 17 19 20
0 10 2 18 4 12 6 16 8 14 1 3 5 7 9 11 13 15 17 19 20 21
0 16 2 12 4 20 6 14 8 18 10 1 3 5 7 9 11 13 15 17 19 21 22
0 16 2 12 4 20 6 14 8 18 10 1 3 5 7 9 11 13 15 17 19 21 22 23
0 12 2 18 4 14 6 22 8 16 10 20 1 3 5 7 9 11 13 15 17 19 21 23 24
Pseudorandom element with random seed 0 is pivot
The worst-case sequences for the center element and median-of-three already look pretty random, but in order to make Quicksort even more robust the pivot element can be chosen randomly. If the random sequence used is at least reproducible on every Quicksort run, then we can also construct a worst-case sequence for that. We only have to adjust the line assigning iPivot in the first program, e.g. to:
srand(0); // you shouldn't use 0 as a seed
for ( auto i = 0u; i < v.size(); ++i )
{
auto const iPivot = i + rand() % ( v.size() - i );
[...]
The generated sequences are:
0
1 0
1 0 2
2 3 1 0
1 4 2 0 3
5 0 1 2 3 4
6 0 5 4 2 1 3
7 2 4 3 6 1 5 0
4 0 3 6 2 8 7 1 5
2 3 6 0 8 5 9 7 1 4
3 6 2 5 7 4 0 1 8 10 9
8 11 7 6 10 4 9 0 5 2 3 1
0 12 3 10 6 8 11 7 2 4 9 1 5
9 0 8 10 11 3 12 4 6 7 1 2 5 13
2 4 14 5 9 1 12 6 13 8 3 7 10 0 11
3 15 1 13 5 8 9 0 10 4 7 2 6 11 12 14
11 16 8 9 10 4 6 1 3 7 0 12 5 14 2 15 13
6 0 15 7 11 4 5 14 13 17 9 2 10 3 12 16 1 8
8 14 0 12 18 13 3 7 5 17 9 2 4 15 11 10 16 1 6
3 6 16 0 11 4 15 9 13 19 7 2 10 17 12 5 1 8 18 14
6 0 14 9 15 2 8 1 11 7 3 19 18 16 20 17 13 12 10 4 5
14 16 7 9 8 1 3 21 5 4 12 17 10 19 18 15 6 0 11 2 13 20
1 2 22 11 16 9 10 14 12 6 17 0 5 20 4 21 19 8 3 7 18 15 13
22 1 15 18 8 19 13 0 14 23 9 12 10 5 11 21 6 4 17 2 16 7 3 20
2 19 17 6 10 13 11 8 0 16 12 22 4 18 15 20 3 24 21 7 5 14 9 1 23
So how can we check whether those sequences are correct?
Measure the time the sequences take to sort. Plot time over the sequence length N. If the curve scales with O(N^2) instead of O(N log(N)), then these are indeed worst-case sequences.
Adjust a correct Quicksort to give debug output about the subsequence lengths and/or the chosen pivot elements. One of the subsequences should always be of length 1 (or 2 for median-of-three). The chosen pivot elements printed should be increasing.
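As a minimal sketch of the first check, assuming the leftmost-element pivot variant (whose worst case is simply an already sorted array): doubling n should roughly quadruple the measured time if the input really is a worst case.
// g++ -O2 -std=c++11 timing.cpp && ./a.out
#include <algorithm> // swap
#include <chrono>
#include <iostream>
#include <numeric>   // iota
#include <vector>

void quicksort(std::vector<int>& a, int lo, int hi) // leftmost-pivot scheme
{
    if (lo >= hi) return;
    int const pivot = a[lo];
    int i = lo;
    for (int j = lo + 1; j <= hi; ++j)
        if (a[j] < pivot)
            std::swap(a[++i], a[j]);
    std::swap(a[lo], a[i]);
    quicksort(a, lo, i - 1);
    quicksort(a, i + 1, hi);
}

int main( void )
{
    for (int n = 1000; n <= 16000; n *= 2)
    {
        std::vector<int> v(n);
        std::iota(v.begin(), v.end(), 0); // sorted = worst case here
        auto const t0 = std::chrono::steady_clock::now();
        quicksort(v, 0, n - 1);
        auto const t1 = std::chrono::steady_clock::now();
        std::cout << n << " "
                  << std::chrono::duration<double>(t1 - t0).count() << " s\n";
    }
}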
Getting a pivot equal to the lowest or highest number should also trigger the worst-case scenario of O(n^2).
Different implementations of quicksort require different datasets to give worst-case runtime. It depends on where the algorithm selects its pivot element.
And also, as Ghpst said, selecting the biggest or smallest number would give you a worst case.
If I remember correctly, quicksort normally uses a random element as the pivot to minimize the chance of getting a worst case.
I think that if the array is in reverse order, then it will be the worst case when the pivot is the last element of that array.
The factors that contribute to the worst-case scenario of quicksort are as follows:
The worst case occurs when the subarrays are completely unbalanced
That is, when there are 0 elements in one subarray and n-1 elements in the other
In other words, the worst-case running time of quicksort occurs when it takes in an already sorted array (in decreasing order), giving a time complexity of O(n^2).