Dual pivot quicksort algorithm

I was analyzing the code of the Arrays.sort() method in Java. My question is: for what values of an integer array a[] will this condition be true?
if (less < e1 && e5 < great)
After sorting the left and right parts recursively, excluding the known pivots, for what values of the array a[] does the center part become too large (comprising more than 4/7 of the array)?
Given QUICKSORT_THRESHOLD = 286, the array size cannot be more than 286.
Could you please give an example of such an int array?

It happens when all candidates for pivots are close to either the maximum or the minimum value of the array.
java.util.DualPivotQuicksort#sort() chooses the pivots from 5 positions in the array:
int seventh = (length >> 3) + (length >> 6) + 1;
int e3 = (left + right) >>> 1; // The midpoint
int e2 = e3 - seventh;
int e1 = e2 - seventh;
int e4 = e3 + seventh;
int e5 = e4 + seventh;
So, in order to construct an array that satisfies the condition, we need to fill those 5 positions with extreme values. For example:
int[] x = {
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -2, /* e1 = 10 */
0, 0, 0, 0, 0, 0, -1, /* e2 = 17 */
0, 0, 0, 0, 0, 0, 0, /* e3 = 24 */
0, 0, 0, 0, 0, 0, 1, /* e4 = 31 */
0, 0, 0, 0, 0, 0, 2, /* e5 = 38 */
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
};
Arrays.sort(x);
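For the 50-element array above, the pivot positions in the comments follow directly from those index formulas; a quick check (assuming left = 0 and right = 49, i.e. the whole array is being sorted):
int length = 50, left = 0, right = 49;
int seventh = (length >> 3) + (length >> 6) + 1; // 6 + 0 + 1 = 7
int e3 = (left + right) >>> 1;                   // 24, the midpoint
int e2 = e3 - seventh;                           // 17
int e1 = e2 - seventh;                           // 10
int e4 = e3 + seventh;                           // 31
int e5 = e4 + seventh;                           // 38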
And a non-trivial case where the method changes the boundaries of the central part before sorting it:
int[] x = {
70, 66, 11, 24, 10, 28, 58, 13, 19, 90, 15,
79, 16, 69, 39, 14, 10, 16,
40, 59, 47, 77, 90, 50, 50,
50, 16, 76, 86, 70, 33, 90,
24, 35, 73, 93, 87, 19, 91,
73, 87, 22, 15, 24, 92, 34, 35, 98, 11, 40
};

Related

How can I create a random array between a range of two other arrays?

I need to generate an array of 20 random bytes that lies between a given pair of bound arrays. Since arrays are comparable in Rust, this works:
let low = [0u8; 20];
let high = [2u8; 20];
assert_eq!(true, low < high);
assert_eq!(false, low > high);
assert_eq!(true, low == [0u8; 20]);
For these bounds:
let low: [u8; 20] = [98, 0, 1, 0, 2, 6, 99, 3, 0, 5, 23, 3, 5, 6, 11, 8, 0, 2, 0, 17];
let high: [u8; 20] = [99, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1];
These would be a valid result:
[98, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
[99, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
These are not:
[98, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
[99, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2]
I want to do something like:
use rand::prelude::*;

fn main() {
    let low = [0u8; 20];
    let high = [2u8; 20];
    let value = rand::thread_rng().gen_range(low, high);
    println!("{:?}", value);
}
but I get the following error:
error[E0277]: the trait bound `[u8; 20]: rand::distributions::uniform::SampleUniform` is not satisfied
--> src\main.rs:6:36
|
6 | let value = rand::thread_rng().gen_range(low, high);
| ^^^^^^^^^ the trait `rand::distributions::uniform::SampleUniform` is not implemented for `[u8; 20]`
I tried implementing SampleUniform and UniformSampler without much success. Is there a simple way to implement this?
If you want to treat the byte arrays as big integers, use the
num-bigint crate with the rand feature enabled:
use num_bigint::{ToBigInt, RandBigInt};

let mut rng = rand::thread_rng();
let low = -10000.to_bigint().unwrap();
let high = 10000.to_bigint().unwrap();
let b = rng.gen_bigint_range(&low, &high);
You could also use unsigned integers (BigUint) instead of signed. There are methods to convert to and from big-endian byte arrays:
from_bytes_be
to_bytes_be
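Putting those pieces together for the 20-byte bounds from the question, a minimal sketch might look like this (assuming the num-bigint crate with the rand feature and a compatible rand version):
use num_bigint::{BigUint, RandBigInt};

fn main() {
    let low = [0u8; 20];
    let high = [2u8; 20];

    // Interpret both bounds as big-endian unsigned integers.
    let low_int = BigUint::from_bytes_be(&low);
    let high_int = BigUint::from_bytes_be(&high);

    // Sample a value in the range [low_int, high_int).
    let mut rng = rand::thread_rng();
    let value = rng.gen_biguint_range(&low_int, &high_int);

    // Convert back to bytes and left-pad to 20 bytes again.
    let bytes = value.to_bytes_be();
    let mut result = [0u8; 20];
    result[20 - bytes.len()..].copy_from_slice(&bytes);
    println!("{:?}", result);
}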
See also:
How do I generate a random num::BigUint?

Why won't ruby-mcrypt accept an array as a key?

Hello, I am having trouble encrypting with the ruby-mcrypt gem when using an array as both the key and the value. The gem lets me use an array for the key fine: cipher = Mcrypt.new("rijndael-256", :ecb, secret) works. But it gives me an error when I try to encrypt. I've tried many things, but no luck. Does anyone know if Mcrypt just doesn't like encrypting with an array?
require 'mcrypt'

def encrypt(plain, secret)
  cipher = Mcrypt.new("rijndael-256", :ecb, secret)
  cipher.padding = :zeros
  encrypted = cipher.encrypt(plain)
  p encrypted
  encrypted.unpack("H*").first.to_s.upcase
end
array_to_encrypt = [16, 0, 0, 0, 50, 48, 49, 55, 47, 48, 50, 47, 48, 55, 32, 50, 50, 58, 52, 54, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
key_array = [65, 66, 67, 68, 49, 50, 51, 52, 70, 71, 72, 73, 53, 54, 55, 56]
result = encrypt(array_to_encrypt, key_array)
p "RESULT IS #{result}"
The output is as follows:
Mcrypt::RuntimeError: Could not initialize mcrypt: Key length is not legal.
I traced this error to here in the ruby-mcrypt gem but don't understand it enough to figure out why I am getting the error message. Any help or insights would be amazing. Thanks!
The library doesn't support arrays. You'll need to use Strings instead:
def binary(byte_array)
  byte_array.pack('C*')
end
array_to_encrypt = [16, 0, 0, 0, 50, 48, 49, 55, 47, 48, 50, 47, 48, 55, 32, 50, 50, 58, 52, 54, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
key_array = [65, 66, 67, 68, 49, 50, 51, 52, 70, 71, 72, 73, 53, 54, 55, 56]
result = encrypt(binary(array_to_encrypt), binary(key_array))
p "RESULT IS #{result}"

D3 Chord Diagram Not Rendering Correctly

Using the excellent guide by Nadieh Bremer I'm making a stretched chord diagram.
However, with certain data inputs the rendering goes awry.
I've made a demo to demonstrate my issue here:
https://codepen.io/benmayocode/pen/MPEwdr
Specifically, at lines 269 to 281 of the .js file I have:
var respondents = 40,
emptyPerc = 0.4,
emptyStroke = Math.round(respondents*emptyPerc);
var Names = ['BEN', 'ROSE', '', '1', '2', '6', ''];
var matrix = [
[0, 0, 0, 10, 10, 0, 0] ,
[0, 0, 0, 0, 10, 10, 0] ,
[0, 0, 0, 0, 0, 0, 24] ,
[10, 0, 0, 0, 0, 0, 0] ,
[10, 10, 0, 0, 0, 0, 0] ,
[0, 10, 0, 0, 0, 0, 0] ,
[0, 0, 0, 24, 0, 0, 0] ,
];
This renders incorrectly - but if I change it to...
var respondents = 40,
emptyPerc = 0.4,
emptyStroke = Math.round(respondents*emptyPerc);
var Names = ['BEN', 'LIB', 'ROSE', '', '1', '2', '6', ''];
var matrix = [
[0, 0, 0, 0, 10, 10, 0, 0] ,
[0, 0, 0, 0, 0, 10, 0, 0] ,
[0, 0, 0, 0, 0, 10, 10, 0] ,
[0, 0, 0, 0, 0, 0, 0, 24] ,
[10, 0, 0, 0, 0, 0, 0, 0] ,
[10, 10, 10, 0, 0, 0, 0, 0] ,
[0, 0, 10, 0, 0, 0, 0, 0] ,
[0, 0, 0, 0, 24, 0, 0, 0] ,
];
Then it works great. I obviously see the difference between the two blocks of code, but why are they producing different results, and is it possible to modify my code to accommodate both examples?
If you examine the dodgy arc, you will see you can flip it into the right place by altering the sign on the transform from (50,0) to (-50,0). If you then look at the code that assigns the transform, it is
.attr("transform", function(d, i) {
d.pullOutSize = pullOutSize * ( d.startAngle + 0.01 > Math.PI ? -1 : 1);
return "translate(" + d.pullOutSize + ',' + 0 + ")";
});
with a note in the original text saying that "the 0.01 is for rounding errors". Given that the startAngle is already 3.13, i.e. very close to Pi, it looks like this is an edge case where the value fell just the wrong side of the cutoff. Changing the allowable rounding error to 0.02 puts the arc in the correct place, or you could do something like
d.pullOutSize = pullOutSize * (
  // is the start angle (plus the tolerance) less than Pi?
  d.startAngle + 0.01 < Math.PI ? 1 :
  // if not, is the end angle less than Pi?
  d.endAngle < Math.PI ? 1 : -1 );
to prevent edge cases like that in your dataset.

How to initialize __m128i array statically in gcc?

I am porting some SSE optimization code from Windows to Linux, and I found that the following code, which works well in MSVC, won't work in GCC.
The code initializes an array of __m128i; each __m128i contains 16 int8_t values. It does compile with gcc, but the result is not as expected.
Actually, as gcc defines __m128i in terms of long long int, the code will initialize the array as if it were:
long long int coeffs_ssse3[4] = {64, 83, 64, 36};
I googled and was told that "the only portable way to initialize a vector is to use the _mm_set_XXX intrinsics". However, I want to know whether there is any other way to initialize the __m128i array, preferably statically, and without modifying the code below much (since I have tons of code in this format). Any suggestion is appreciated.
static const __m128i coeffs_ssse3[4] =
{
{ 64, 0, 64, 0, 64, 0, 64, 0, 64, 0, 64, 0, 64, 0, 64, 0},
{ 83, 0, 36, 0,-36,-1,-83,-1, 83, 0, 36, 0,-36,-1,-83, -1},
{ 64, 0,-64,-1,-64,-1, 64, 0, 64, 0,-64,-1,-64,-1, 64, 0},
{ 36, 0,-83,-1, 83, 0,-36,-1, 36, 0,-83,-1, 83, 0,-36,-1}
};
It seems that gcc doesn't treat the __m128* types as being candidates for aggregate initialization. Since they aren't standard types, this behavior will vary from compiler to compiler. One approach would be to declare the array as an aligned array of 8-bit integers, then just cast a pointer to it:
static const int8_t coeffs[64] __attribute__((aligned(16))) =
{
64, 0, 64, 0, 64, 0, 64, 0, 64, 0, 64, 0, 64, 0, 64, 0,
83, 0, 36, 0,-36,-1,-83,-1, 83, 0, 36, 0,-36,-1,-83, -1,
64, 0,-64,-1,-64,-1, 64, 0, 64, 0,-64,-1,-64,-1, 64, 0,
36, 0,-83,-1, 83, 0,-36,-1, 36, 0,-83,-1, 83, 0,-36,-1
};
static const __m128i *coeffs_ssse3 = (__m128i *) coeffs;
However, I don't think this syntax (__attribute__((aligned(x)))) is supported by Visual Studio, so you would need some #ifdef trickery in there to use the right directives to achieve the alignment that you want on all of your target platforms.
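A sketch of that kind of trickery (the ALIGNED16 macro name is just an example): MSVC spells the directive __declspec(align(16)), while GCC/Clang use __attribute__((aligned(16))).
#include <stdint.h>
#include <emmintrin.h>

/* Pick the alignment directive understood by the current compiler. */
#if defined(_MSC_VER)
  #define ALIGNED16 __declspec(align(16))
#else
  #define ALIGNED16 __attribute__((aligned(16)))
#endif

static const ALIGNED16 int8_t coeffs[64] =
{
    64, 0, 64, 0, 64, 0, 64, 0, 64, 0, 64, 0, 64, 0, 64, 0,
    83, 0, 36, 0,-36,-1,-83,-1, 83, 0, 36, 0,-36,-1,-83,-1,
    64, 0,-64,-1,-64,-1, 64, 0, 64, 0,-64,-1,-64,-1, 64, 0,
    36, 0,-83,-1, 83, 0,-36,-1, 36, 0,-83,-1, 83, 0,-36,-1
};

static const __m128i *coeffs_ssse3 = (const __m128i *) coeffs;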

difference between CV_32FC3 and CV_64FC3 in OpenCV?

I was testing around with OpenCV matrices and the display function and hit this bug. It took me more than half a day to track it down.
I originally tried to display OpenCV matrices regardless of the type of matrix, e.g. CvMat or Mat, ...
with a display method recommended by Mr vasile from another post of mine: Multi channel Mat display function
The display method simply writes all the data of the matrix to the cout stream.
This is my program:
// First: CV_32FC3 works OK
float objpts[12] = {0, 105, 105, 0, 0, 0, 105, 105, 0, 0, 0, 0};
CvMat objptsmat = cvMat( 1, 4, CV_32FC3, objpts);
CvMat* objectPoints = &objptsmat;
CvMatShow(objectPoints);
getchar();
output:
// Second: CV_64FC3 crashes
float objpts[12] = {0, 105, 105, 0, 0, 0, 105, 105, 0, 0, 0, 0};
CvMat objptsmat = cvMat( 1, 4, CV_64FC3, objpts);
CvMat* objectPoints = &objptsmat;
CvMatShow(objectPoints);
getchar();
output:
They should both be the same, right?
In the second example, you should have the array declared as
double objpts[12] = {0, 105, 105, 0, 0, 0, 105, 105, 0, 0, 0, 0};
You can read CV_xxtCn as
xx: number of bits
t: type (F = floating point type, S = signed integer, U = unsigned integer)
n: number of channels
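So CV_32FC3 means a 3-channel matrix of 32-bit floats and CV_64FC3 means a 3-channel matrix of 64-bit doubles; the backing buffer has to match the element type, as in this sketch based on the code above:
// CV_32FC3: 32-bit floating point, 3 channels -> buffer must be float
float objpts32[12] = {0, 105, 105, 0, 0, 0, 105, 105, 0, 0, 0, 0};
CvMat m32 = cvMat(1, 4, CV_32FC3, objpts32);

// CV_64FC3: 64-bit floating point, 3 channels -> buffer must be double
double objpts64[12] = {0, 105, 105, 0, 0, 0, 105, 105, 0, 0, 0, 0};
CvMat m64 = cvMat(1, 4, CV_64FC3, objpts64);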
