Java Hipparchus Eigenvector and C++ Eigen Eigenvector have different results

While comparing the eigenvector results from Java Hipparchus and C++ Eigen, I'm getting different results. They are transposed for the most part, and two elements don't match. Why are the libraries returning different values?
Java Hipparchus
import org.hipparchus.linear.Array2DRowRealMatrix;
import org.hipparchus.linear.EigenDecomposition;
import org.hipparchus.linear.RealVector;
...
double[][] example_matrix = {
    { 1, 2, 3 },
    { 3, 2, 1 },
    { 2, 1, 3 }
};
RealMatrix P = new Array2DRowRealMatrix(example_matrix, true);
EigenDecomposition eigenDecomposition = new EigenDecomposition(P);
RealVector[] eigenvectors = new RealVector[3];
for (int i = 0; i < 3; i++) {
    System.out.println(eigenDecomposition.getEigenvector(i));
}
// prints:
// {-0.7641805281; 0.6105725033; 0.2079166628}
// {0.5776342875; 0.5776342875; 0.5776342875}
// {-0.0235573892; -0.9140029063; 0.6060826741}
C++ Eigen
Eigen::Matrix<double, 3, 3> matrix;
matrix <<
    1, 2, 3,
    3, 2, 1,
    2, 1, 3;
Eigen::EigenSolver<Eigen::Matrix<double, 3, 3>> eigen_decomposition{ matrix };
eigen_decomposition.compute(matrix, true);
const auto eigen_vectors = eigen_decomposition.eigenvectors().real();
std::cout << eigen_vectors.matrix() << "\n";
// prints:
// -0.764181 0.57735 -0.0214754
// 0.610573 0.57735 -0.833224
// 0.207917 0.57735 0.552518

While not identical, the results of the two libraries are actually both correct:
In Java Hipparchus the eigenvectors are not normalised (the vector norm is not 1!). This is particularly apparent for the last one, whose norm is approximately 1.10. If you normalise it, you will see that it corresponds to the last column of the result Eigen gives you. Physically this does not matter, as any scalar multiple of an eigenvector is again an eigenvector. Instead, the library seems to normalise the matrix spanned by the eigenvectors: the magnitude of its determinant appears to be unity. A matrix with determinant 1 preserves volume, so this seems like a logical choice.
The Eigen eigenvectors, on the other hand, are normalised. The matrix you print gives you the eigenvectors as its columns. Normalising the individual eigenvectors seems like an equally reasonable choice to me.
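As a quick sanity check (using NumPy as a neutral third party; the numbers are simply copied from the outputs above), rescaling each Hipparchus vector to unit norm reproduces Eigen's columns up to sign:

import numpy as np

# One Hipparchus eigenvector per row, exactly as printed above
hipparchus = np.array([
    [-0.7641805281,  0.6105725033, 0.2079166628],
    [ 0.5776342875,  0.5776342875, 0.5776342875],
    [-0.0235573892, -0.9140029063, 0.6060826741],
])

# Rescale each row to unit norm, then transpose so the eigenvectors sit in
# columns, matching the layout of Eigen's eigenvectors() output
normalised = hipparchus / np.linalg.norm(hipparchus, axis=1, keepdims=True)
print(normalised.T)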

Related

Is there an algorithm to redistribute an array equally?

I have a one-dimensional array of non-negative integers. When plotted as a histogram, the range is dominated by smaller numbers.
Is there an algorithm to redistribute the values more equally while maintaining the original order? By order I mean the minimum and maximum values would retain their original places in the array and everything else in between would scale up or down. When plotted afterwards the histogram would be more or less flat.
Searching the web I came across a "probability integral transform" in statistics, but that required sorting the data.
EDIT - Apologies for omitting why I don't want to sort it. The array is a plot and each integer represents a pixel. If I sort it that would destroy the plot. I'm dividing each integer by the maximum value and using that as an index into a palette. Because there's so much bias towards smaller values, only a small amount of the palette is visible in the final image. I thought if I was able to redistribute the values somehow, it'd use the full range of the palette.
You could apply this algorithm:
Let c be a freely chosen coefficient between 0 and 1: the closer it is to 1, the closer the resulting values will be to each other. If it is exactly 1, all values will be equal; if it is 0, the result will be the original data set. A candidate value for c could for instance be 0.9.
Let avg be the average of the input values
Apply the following transformation to each value in the input set:
new value := value * (1 − c) + avg * c
Here is an interactive implementation in a JavaScript snippet:
let a = [150, 100, 40, 33, 9, 3, 5, 13, 8, 1, 3, 2, 1, 1, 0, 0];
let avg = a.reduce((acc, val) => acc + val) / a.length;

function refresh(c) {
    // Apply transformation from a to b using c:
    display(a.map(val => val * (1 - c) + avg * c));
}

// I/O management
var display = (function (b) {
    this.clearRect(0, 0, this.canvas.width, this.canvas.height); // Clear display
    for (let i = 0; i < b.length; i++) {
        this.beginPath();
        this.rect(20 + i * 20, 150 - b[i], 19, b[i]);
        this.fill();
    }
}).bind(document.querySelector("canvas").getContext("2d"));

document.querySelector("input").oninput = e => refresh(+e.target.value);
refresh(0);
Coefficient: <input type="number" min="0" max="1" step="0.01" value="0"><br>
<canvas height="150" width="400"></canvas>
Use the Coefficient input box to experiment with different values for it on a sample data set.
In Python the transformation, for a given list a and coefficient c, could look like this:
avg = sum(a) / len(a)
b = [value * (1 - c) + avg * c for value in a]

Linear problem solving with matrix constraints in Rust

I am trying to rewrite a fairness ranking algorithm (source: https://arxiv.org/abs/1802.07281) from Python to Rust. The objective is to find a document-ranking probability matrix that is doubly stochastic and, using a utility vector (i.e. the document relevance in this case), gives fair exposure to all document types.
The objective is thus to maximise the expected utility under the following constraints:
sum of probabilities for each position equals 1;
sum of probabilities for each document equals 1;
every probability is valid (i.e. 0 <= P[i,j] <= 1);
P is fair (disparate treatment constraints).
In Python we have done this using CVXPY:
u = documents[['rel']].iloc[:n].values.ravel() # utility vector
v = np.array([1.0 / (np.log(2 + i)) for i in range(n)]) # position discount vector
P = cp.Variable((n, n)) # linear maximizing problem uᵀPv s.t. P is doubly stochastic and fair.
# Construct f in fᵀPv such that for P every group's exposure divided by mean utility is
# equal (i.e. enforcing DTC). Do this for every pair of groups:
# example: calculated f for three groups {a, b, c}
# resulting constraints: [a - b == 0, a - c == 0, b - c == 0]
groups = {k: group.index.values for k, group in documents.iloc[:n].groupby('document_type')}
fairness_constraints = []
for k0, k1 in combinations(groups, 2):
    g0, g1 = groups[k0], groups[k1]
    f_i = np.zeros(n)
    f_i[g0] = 1 / u[g0].sum()
    f_i[g1] = -1 / u[g1].sum()
    fairness_constraints.append(f_i)
# Create convex problem to solve for finding the probabilities that
# a document is at a certain position/rank, matching the fairness criteria
objective = cp.Maximize(cp.matmul(cp.matmul(u, P), v))
constraints = ([cp.matmul(np.ones((1, n)), P) == np.ones((1, n)), # ┤
cp.matmul(P, np.ones((n,))) == np.ones((n,)), # ┤
0.0 <= P, P <= 1] + # └┤ doubly stochastic matrix constraints
[cp.matmul(cp.matmul(c, P), v) == 0 for c in fairness_constraints]) # DTC
prob = cp.Problem(objective, constraints)
prob.solve(solver=cp.CBC)
This works great for multiple solvers, including SCS, ECOS and CBC.
Now, trying to implement the algorithm above in Rust, I have turned to crates like good_lp and lp_modeler. These should both be able to solve linear problems using CBC, as also demonstrated in the Python example above. I am struggling, however, to find examples of how to define the needed constraints on my matrix variable P.
The code below is my work in progress for rewriting the Python code in Rust, using in this case the lp_modeler crate as an example. The code below compiles but panics when run. Furthermore, I don't know how to add the disparate treatment constraints in a way Rust likes, as no crate seems to be able to accept equality constraints on two vectors.
let u: Array<f32, Ix1> = array![...]; // utility vector filled with dummy data
let n = cmp::min(u.len(), 25);
// position discount vector
let v: Array<f32, Ix1> = (0..n)
    .map(|i| 1.0 / ((2 + i) as f32).ln())
    .collect();
let P: Array<f32, Ix2> = Array::ones((n, n));
// dummy data for document indices and their types
let groups = vec![
    vec![23],                                                // type A
    vec![8, 10, 16, 19],                                     // type B
    vec![0, 1, 2, 3, 4, 5, 6, 7, 9, 11, 12, 13, 15, 21, 24], // type C
    vec![14, 17, 18, 20, 22],                                // type D
];
let mut fairness_constraints: Vec<Vec<f32>> = Vec::new();
for combo in groups.iter().combinations(2).unique() {
    let mut f_i: Vec<f32> = vec![0f32; n];
    { // f_i[g0] = 1 / u[g0].sum()
        let usum_g0: f32 = combo[0].iter()
            .map(|&i| u[i])
            .sum();
        for &i in combo[0].iter() {
            f_i[i] = 1f32 / usum_g0;
        }
    }
    { // f_i[g1] = -1 / u[g1].sum()
        let usum_g1: f32 = combo[1].iter()
            .map(|&i| u[i])
            .sum();
        for &i in combo[1].iter() {
            f_i[i] = -1.0 / usum_g1;
        }
    }
    fairness_constraints.push(f_i);
}

let mut problem = LpProblem::new("Fairness", LpObjective::Maximize);
problem += u.dot(&P).dot(&v); // Expected utility objective

// Doubly stochastic constraints
for col in P.columns() { // Sum of probabilities for each position
    problem += sum(&col.to_vec(), |&el| el).equal(1);
}
for row in P.rows() { // Sum of probabilities for each document
    problem += sum(&row.to_vec(), |&el| el).equal(1);
}
// Valid probability constraints
for el in P.iter() {
    problem += lp_sum(&vec![el]).ge(0);
    problem += lp_sum(&vec![el]).le(1);
}
// TODO: implement DTC fairness constraints

let solver = CbcSolver::new();
let result = solver.run(&problem);
Can anybody give me a nudge in the right direction on this specific problem? Thanks in advance!
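For reference, it may help to see what a DTC constraint looks like at the level of scalar variables: fᵀPv == 0 is a single linear equation over the n² entries of P, where P[i][j] gets the coefficient f[i] * v[j]. Whatever crate you pick only has to accept that weighted sum of its scalar variables, built term by term. A small NumPy sketch of the expansion (the numbers are dummies, not taken from the question):

import numpy as np

n = 3
f = np.array([0.5, -0.25, -0.25])                        # dummy fairness vector (sums to 0)
v = np.array([1.0 / np.log(2 + i) for i in range(n)])    # position discount vector

# coefficient of the scalar variable P[i, j] in the constraint f^T P v == 0
coeffs = np.outer(f, v)

# the constraint then reads: sum over i, j of coeffs[i, j] * P[i, j] == 0
P = np.full((n, n), 1.0 / n)                             # any doubly stochastic P, for checking
assert np.isclose((coeffs * P).sum(), f @ P @ v)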

optimization of pairwise L2 distance computations

I need help optimizing this loop. Matrix_1 is an (n x 2) int matrix and Matrix_2 is an (m x 2) int matrix; m and n vary.
index_j = 1;
for index_k = 1:size(Matrix_1,1)
    for index_l = 1:size(Matrix_2,1)
        M2_Index_Dist(index_j,:) = [index_l, sqrt(bsxfun(@plus, sum(Matrix_1(index_k,:).^2,2), sum(Matrix_2(index_l,:).^2,2)') - 2*(Matrix_1(index_k,:)*Matrix_2(index_l,:)'))];
        index_j = index_j + 1;
    end
end
I need M2_Index_Dist to provide a ((n*m) x 2) matrix with the index of matrix_2 in the first column and the distance in the second column.
Output example:
M2_Index_Dist = [ 1, 5.465
2, 56.52
3, 6.21
1, 35.3
2, 56.52
3, 0
1, 43.5
2, 9.3
3, 236.1
1, 8.2
2, 56.52
3, 5.582]
Here's how to apply bsxfun with your formula (||A-B|| = sqrt(||A||^2 + ||B||^2 - 2*A*B)):
d = real(sqrt(bsxfun(@plus, dot(Matrix_1,Matrix_1,2), ...
    bsxfun(@minus, dot(Matrix_2,Matrix_2,2).', 2 * Matrix_1*Matrix_2.')))).';
You can avoid the final transpose if you change your interpretation of the matrix.
Note: There shouldn't be any complex values to handle with real but it's there in case of very small differences that may lead to tiny negative numbers.
Edit: It may be faster without dot:
d = sqrt(bsxfun(@plus, sum(Matrix_1.*Matrix_1,2), ...
    bsxfun(@minus, sum(Matrix_2.*Matrix_2,2)', 2 * Matrix_1*Matrix_2.'))).';
Or with just one call to bsxfun:
d = sqrt(bsxfun(@plus, sum(Matrix_1.*Matrix_1,2), sum(Matrix_2.*Matrix_2,2)') ...
    - 2 * Matrix_1*Matrix_2.').';
Note: this last order of operations gives results identical to yours, rather than differing by an error of about 1e-14.
Edit 2: To replicate M2_Index_Dist:
II = ndgrid(1:size(Matrix_2,1), 1:size(Matrix_1,1));
M2_Index_Dist = [II(:) d(:)];
If I understand correctly, this does what you want:
ind = repmat((1:size(Matrix_2,1)).', size(Matrix_1,1), 1); %// first column: index
d = pdist2(Matrix_2, Matrix_1); %// compute distance between each pair of rows
d = d(:); %// second column: distance
result = [ind d]; %// build result from first column and second column
As you can see, this code calls pdist2 to compute the distance between every pair of rows of your matrices. By default this function uses the Euclidean distance.
If you don't have pdist2 (which is part of the Statistics Toolbox), you can replace the second line above with bsxfun:
d = squeeze(sqrt(sum(bsxfun(@minus, Matrix_2, permute(Matrix_1, [3 2 1])).^2, 2)));
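For reference, the same ||A||^2 + ||B||^2 - 2*A*B' expansion is easy to reproduce outside MATLAB; here is a rough NumPy equivalent (not part of the answers above), with the same clamping of tiny negative values that real() handles in the MATLAB version:

import numpy as np

def pairwise_l2(matrix_1, matrix_2):
    # squared row norms, shaped for broadcasting: (n x 1) and (1 x m)
    sq1 = (matrix_1 ** 2).sum(axis=1)[:, None]
    sq2 = (matrix_2 ** 2).sum(axis=1)[None, :]
    d2 = sq1 + sq2 - 2.0 * matrix_1 @ matrix_2.T
    return np.sqrt(np.maximum(d2, 0.0))   # clamp tiny negatives from round-off

matrix_1 = np.random.rand(4, 2)
matrix_2 = np.random.rand(3, 2)
d = pairwise_l2(matrix_1, matrix_2)       # (n x m) matrix of distances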

In-place transposition of a matrix

Is it possible to transpose an (m,n) matrix in-place, given that the matrix is represented as a single array of size m*n?
The usual algorithm
transpose(Matrix mat, int rows, int cols) {
    // construction step
    Matrix tmat;
    for (int i = 0; i < rows; i++) {
        for (int j = 0; j < cols; j++) {
            tmat[j][i] = mat[i][j];
        }
    }
}
doesn't apply to a single array unless the matrix is a square matrix.
If not, what is the minimum amount of additional memory needed?
EDIT:
I have already tried all flavors of
for (int i = 0; i < n; ++i) {
    for (int j = 0; j < i; ++j) {
        var swap = m[i][j];
        m[i][j] = m[j][i];
        m[j][i] = swap;
    }
}
And it is not correct. In this specific example, m doesn't even exist. In a single line:
matrix mat[i][j] = mat[i*m + j], where trans[j][i] = trans[i*n + j]
Inspired by the Wikipedia - Following the cycles algorithm description, I came up with following C++ implementation:
#include <iostream>   // std::cout
#include <iterator>   // std::ostream_iterator
#include <algorithm>  // std::swap (until C++11)
#include <vector>

template<class RandomIterator>
void transpose(RandomIterator first, RandomIterator last, int m)
{
    const int mn1 = (last - first - 1);
    const int n   = (last - first) / m;
    std::vector<bool> visited(last - first);
    RandomIterator cycle = first;
    while (++cycle != last) {
        if (visited[cycle - first])
            continue;
        int a = cycle - first;
        do {
            a = a == mn1 ? mn1 : (n * a) % mn1;
            std::swap(*(first + a), *cycle);
            visited[a] = true;
        } while ((first + a) != cycle);
    }
}

int main()
{
    int a[] = { 0, 1, 2, 3, 4, 5, 6, 7 };
    transpose(a, a + 8, 4);
    std::copy(a, a + 8, std::ostream_iterator<int>(std::cout, " "));
}
The program makes the in-place matrix transposition of the 2 × 4 matrix
0 1 2 3
4 5 6 7
represented in row-major ordering {0, 1, 2, 3, 4, 5, 6, 7} into the 4 × 2 matrix
0 4
1 5
2 6
3 7
represented by the row-major ordering {0, 4, 1, 5, 2, 6, 3, 7}.
The argument m of transpose represents the row size; the column size n is determined from the row size and the sequence length. The algorithm needs m × n bits of auxiliary storage to record which elements have been swapped. The indexes of the sequence are mapped with the following scheme:
0 → 0
1 → 2
2 → 4
3 → 6
4 → 1
5 → 3
6 → 5
7 → 7
The mapping function in general is:
idx → (idx × n) mod (m × n − 1) if idx < m × n − 1, idx → idx otherwise
We can identify four cycles within this sequence: { 0 }, { 1, 2, 4 }, { 3, 5, 6 } and { 7 }. Each cycle can be transposed independently of the other cycles. The variable cycle initially points to the second element (the first does not need to be moved because 0 → 0). The bit-array visited holds the already transposed elements and indicates that index 1 (the second element) needs to be moved. Index 1 gets swapped with index 2 (mapping function). Now index 1 holds the element of index 2, and this element gets swapped with the element of index 4. Now index 1 holds the element of index 4. The element of index 4 should go to index 1; it is already in the right place, so transposing of the cycle has finished and all touched indexes have been marked visited. The variable cycle gets incremented until the first unvisited index, which is 3. The procedure continues with this cycle until all cycles have been transposed.
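For the curious, here is a small Python sketch (not part of the C++ answer above) that just enumerates the mapping and its cycles for the 2 × 4 example, reproducing the four cycles listed above:

# m is the row size (4) and n the number of rows (2), as in the C++ code
m, n = 4, 2
last = m * n - 1
dest = [idx * n % last if idx < last else idx for idx in range(m * n)]

seen = set()
cycles = []
for start in range(m * n):
    if start in seen:
        continue
    cycle, idx = [], start
    while idx not in seen:
        seen.add(idx)
        cycle.append(idx)
        idx = dest[idx]
    cycles.append(cycle)

print(cycles)   # [[0], [1, 2, 4], [3, 6, 5], [7]]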
The problem is that the task is stated incompletely. If by "in-place" you meant reuse of the same matrix, it is a well-posed task. But when you are talking about writing to the same area in memory ("the matrix is represented as a single array of size m*n"), you have to add how it is represented there. Otherwise it is enough to change nothing except the function that reads that matrix: simply swap the indexes in it.
You want to transpose the matrix representation in memory so that the reading/setting function for this matrix by indexes remains the same, don't you?
Also, we can't write down the algorithm without knowing whether the matrix is stored in memory by rows or by columns. OK, let's say it is stored by rows, isn't it?
If we settle these two missing conditions, the task becomes well defined and is not hard to solve.
We simply take every element of the matrix by its linear index, find its row/column pair, transpose it, find the resulting linear index and put the value into the new place. The problem is that this transformation is self-inverse only in the case of square matrices, so it really cannot be done in situ element by element. Or it can, if we find the whole index transformation map first and later apply it to the matrix.
Starting matrix A:
m- number of rows
n- number of columns
nm - number of elements
li - linear index
i - column number
j - row number
resulting matrix B:
lir - resulting linear index
Transforming array trans
// preparation: build the index-transformation map
for (li = 0; li < nm; li++) {
    j = li / n;            // row in A
    i = li - j * n;        // column in A
    lir = i * m + j;       // linear index of that element in B
    trans[li] = lir;
}
// transposition: follow each permutation cycle once
// (check[] is an auxiliary flag array, initialised to 0)
for (li = 0; li < nm; li++) {
    if (check[li] || trans[li] == li)
        continue;          // already moved, or a fixed point
    cur = li;
    temp1 = a[cur];        // value waiting to be placed
    do {
        lir = trans[cur];  // where that value belongs
        temp2 = a[lir];    // save what is currently there
        a[lir] = temp1;
        temp1 = temp2;
        check[lir] = 1;
        cur = lir;
    } while (cur != li);
}
Such in-place transposing only makes sense if the cells hold heavy elements.
It is also possible to realise trans[] as a function instead of an array.
Doing this efficiently in the general case requires some effort; the non-square and in-place versus out-of-place algorithms differ. Save yourself much effort and just use FFTW. I previously prepared a more complete write-up on the matter, including sample code.

How to round floats to integers while preserving their sum?

Let's say I have an array of floating point numbers, in sorted (let's say ascending) order, whose sum is known to be an integer N. I want to "round" these numbers to integers while leaving their sum unchanged. In other words, I'm looking for an algorithm that converts the array of floating-point numbers (call it fn) to an array of integers (call it in) such that:
the two arrays have the same length
the sum of the array of integers is N
the difference between each floating-point number fn[i] and its corresponding integer in[i] is less than 1 (or equal to 1 if you really must)
given that the floats are in sorted order (fn[i] <= fn[i+1]), the integers will also be in sorted order (in[i] <= in[i+1])
Given that those four conditions are satisfied, an algorithm that minimizes the rounding variance (sum((in[i] - fn[i])^2)) is preferable, but it's not a big deal.
Examples:
[0.02, 0.03, 0.05, 0.06, 0.07, 0.08, 0.09, 0.1, 0.11, 0.12, 0.13, 0.14]
=> [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1]
[0.1, 0.3, 0.4, 0.4, 0.8]
=> [0, 0, 0, 1, 1]
[0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1]
=> [0, 0, 0, 0, 0, 0, 0, 0, 0, 1]
[0.4, 0.4, 0.4, 0.4, 9.2, 9.2]
=> [0, 0, 1, 1, 9, 9] is preferable
=> [0, 0, 0, 0, 10, 10] is acceptable
[0.5, 0.5, 11]
=> [0, 1, 11] is fine
=> [0, 0, 12] is technically not allowed but I'd take it in a pinch
To answer some excellent questions raised in the comments:
Repeated elements are allowed in both arrays (although I would also be interested to hear about algorithms that work only if the array of floats does not include repeats)
There is no single correct answer - for a given input array of floats, there are generally multiple arrays of ints that satisfy the four conditions.
The application I had in mind was - and this is kind of odd - distributing points to the top finishers in a game of MarioKart ;-) Never actually played the game myself, but while watching someone else I noticed that there were 24 points distributed among the top 4 finishers, and I wondered how it might be possible to distribute the points according to finishing time (so if someone finishes with a large lead they get a larger share of the points). The game tracks point totals as integers, hence the need for this kind of rounding.
For the curious, here is the test script I used to identify which algorithms worked.
One option you could try is "cascade rounding".
For this algorithm you keep track of two running totals: one of floating point numbers so far, and one of the integers so far.
To get the next integer you add the next fp number to your running total, round the running total, then subtract the integer running total from the rounded running total:-
number   running total   integer   integer running total
1.3      1.3             1         1
1.7      3.0             2         3
1.9      4.9             2         5
2.2      7.1             2         7
2.8      9.9             3         10
3.1      13.0            3         13
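For what it's worth, a minimal Python sketch of cascade rounding (the function name is mine; note that Python 3's round() uses banker's rounding for exact .5 ties, which does not matter for this example):

def cascade_round(values):
    result = []
    running_total = 0.0
    int_running_total = 0
    for v in values:
        running_total += v
        rounded = round(running_total)
        # next integer = rounded running total minus integer running total so far
        result.append(rounded - int_running_total)
        int_running_total = rounded
    return result

print(cascade_round([1.3, 1.7, 1.9, 2.2, 2.8, 3.1]))   # [1, 2, 2, 2, 3, 3]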
Here is one algorithm which should accomplish the task. The main difference from other algorithms is that this one always rounds the numbers in the correct order, minimising the round-off error.
The language is some pseudo-language, probably derived from JavaScript or Lua, but it should get the point across. Note the one-based indexing (which is nicer with x to y for loops. :p)
// Temp array with same length as fn.
tempArr = Array(fn.length)

// Calculate the expected sum.
arraySum = sum(fn)

lowerSum = 0
// Populate temp array.
for i = 1 to fn.length
    tempArr[i] = { result: floor(fn[i]),               // Lower bound
                   difference: fn[i] - floor(fn[i]),   // Roundoff error
                   index: i }                          // Original index

    // Calculate the lower sum
    lowerSum = lowerSum + tempArr[i].result
end for

// Sort the temp array on the roundoff error
sort(tempArr, "difference")

// Now arraySum - lowerSum gives us the difference between sums of these
// arrays. tempArr is ordered in such a way that the numbers closest to the
// next one are at the top.
difference = arraySum - lowerSum

// Add 1 to those most likely to round up to the next number so that
// the difference is nullified.
for i = (tempArr.length - difference + 1) to tempArr.length
    tempArr[i].result = tempArr[i].result + 1
end for

// Optionally sort the array based on the original index.
sort(tempArr, "index")
One really easy way is to take all the fractional parts and sum them up. That number by the definition of your problem must be a whole number. Distribute that whole number evenly starting with the largest of your numbers. Then give one to the second largest number... etc. until you run out of things to distribute.
Note this is pseudocode... and may be off by one in an index... it's late and I am sleepy.
float accumulator = 0;

for (i = 0; i < num_elements; i++)  /* assumes 0-based array */
{
    accumulator += (fn[i] - floor(fn[i]));
    fn[i] = floor(fn[i]);
}

i = num_elements;
while ((accumulator > 0) && (i > 0))
{
    fn[i-1] += 1;                   /* assumes 0-based array */
    accumulator -= 1;
    i--;
}
Update: There are other methods of distributing the accumulated values based on how much truncation was performed on each value. This would require keeping a separate list called loss[i] = fn[i] - floor(fn[i]). You can then iterate over the fn[i] list and repeatedly give 1 to the item with the greatest loss (setting loss[i] to 0 afterwards). It's complicated, but I guess it works.
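A rough Python sketch of that update (the names are mine; like the approach it describes, it ignores the sorted-order requirement from the question):

import math

def round_by_loss(fn):
    ints = [math.floor(x) for x in fn]
    loss = [x - i for x, i in zip(fn, ints)]
    # hand out the accumulated whole units, one at a time, to the greatest loss
    for _ in range(round(sum(loss))):
        k = max(range(len(loss)), key=loss.__getitem__)
        ints[k] += 1
        loss[k] = 0
    return ints

print(round_by_loss([0.4, 0.4, 0.4, 0.4, 9.2, 9.2]))   # [1, 1, 0, 0, 9, 9]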
How about:
a) start: array is [0.1, 0.2, 0.4, 0.5, 0.8], N=3, presuming it's sorted
b) round them all the usual way: array is [0 0 0 1 1]
c) get the sum of the new array and subtract it from N to get the remainder.
d) while remainder>0, iterate through elements, going from the last one
- check if the new value would break rule 3.
- if not, add 1
e) in case that remainder<0, iterate from first one to the last one
- check if the new value would break rule 3.
- if not, subtract 1
Essentially what you'd do is distribute the leftovers after rounding to the most likely candidates.
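A rough Python sketch of steps (b)-(d) for the remainder > 0 case (using the question's second example, and checking only rule 3; rule 4 would need the same kind of check):

def distribute_up(fn, N):
    out = [round(x) for x in fn]           # step (b): round the usual way
    remainder = N - sum(out)               # step (c)
    i = len(out) - 1
    while remainder > 0 and i >= 0:        # step (d): walk back from the last element
        if out[i] + 1 - fn[i] <= 1:        # rule 3: stay within 1 of the original float
            out[i] += 1
            remainder -= 1
        i -= 1
    return out

print(distribute_up([0.1, 0.3, 0.4, 0.4, 0.8], 2))   # [0, 0, 0, 1, 1]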
Round the floats as you normally would, but keep track of the delta from rounding and associated index into fn and in.
Sort the second array by delta.
While sum(in) < N, work forwards from the largest negative delta, incrementing the rounded value (making sure you still satisfy rule #3).
Or, while sum(in) > N, work backwards from the largest positive delta, decrementing the rounded value (making sure you still satisfy rule #3).
Example:
[0.02, 0.03, 0.05, 0.06, 0.07, 0.08, 0.09, 0.1, 0.11, 0.12, 0.13, 0.14] N=1
1. [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] sum=0
and [[-0.02, 0], [-0.03, 1], [-0.05, 2], [-0.06, 3], [-0.07, 4], [-0.08, 5],
[-0.09, 6], [-0.1, 7], [-0.11, 8], [-0.12, 9], [-0.13, 10], [-0.14, 11]]
2. sorting will reverse the array
3. working from the largest negative remainder, you get [-0.14, 11].
Increment `in[11]` and you get [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1] sum=1
Done.
Can you try something like this?
in[i] = fn[i] - int(fn[i]);
fn_res[i] = fn[i] - in[i];
fn_res → the resultant fraction.
(I thought this was basic...) Are we missing something?
Well, rule 4 is the pain point. Otherwise you could do things like "usually round down and accumulate the leftover; round up when the accumulator >= 1". (edit: actually, that might still be OK as long as you swapped their positions?)
There might be a way to do it with linear programming? (that's maths "programming", not computer programming - you'd need some maths to find the feasible solution, although you could probably skip the usual "optimisation" part).
As an example of the linear programming - with the example [1.3, 1.7, 1.9, 2.2, 2.8, 3.1] you could have the rules:
1 <= i < 2
1 <= j < 2
1 <= k < 2
2 <= l < 3
2 <= m < 3
3 <= n < 4
i <= j <= k <= l <= m <= n
i + j + k + l + m + n = 13
Then apply some linear/matrix algebra ;-p Hint: there are products to do the above based on things like the "Simplex" algorithm. Common university fodder, too (I wrote one at uni for my final project).
The problem, as I see it, is that the sorting algorithm is not specified. Or more like - whether it's a stable sort or not.
Consider the following array of floats:
[ 0.2 0.2 0.2 0.2 0.2 ]
The sum is 1. The integer array then should be:
[ 0 0 0 0 1 ]
However, if the sorting algorithm isn't stable, it could sort the "1" somewhere else in the array...
Keep the summed differences under 1, and check that the result stays sorted.
Something like:

int i = 0;
double res = 0;

while (i < sizeof(fn) / sizeof(float)) {
    res += fn[i] - floor(fn[i]);
    if (res >= 1) {
        res -= 1;
        in[i] = ceil(fn[i]);
    } else {
        in[i] = floor(fn[i]);
    }
    if (i > 0 && in[i-1] > in[i])
        swap(in[i-1], in[i]);
    i++;
}

(It's paper code, so I didn't check the validity.)
Below is a Python and NumPy implementation of @mikko-rantanen's code above. It took me a bit to put this together, so it may be helpful to future Googlers despite the age of the topic.
import numpy as np
from math import floor

original_array = np.array([1.2, 1.5, 1.4, 1.3, 1.7, 1.9])

# Calculate length of original array
# Need to subtract 1, as indices start at 0, but the product of dimensions
# results in a count starting at 1
array_len = original_array.size - 1  # Index starts at 0, but product at 1

# Calculate expected sum of original values (must be an integer)
expected_sum = np.sum(original_array)

# Collect values for temporary array population
array_list = []
lower_sum = 0
for i, j in enumerate(np.nditer(original_array)):
    array_list.append([i, floor(j), j - floor(j)])  # Original index, lower bound, roundoff error
    # Calculate the lower sum of values
    lower_sum += floor(j)

# Populate temporary array
temp_array = np.array(array_list)

# Sort temporary array based on roundoff error
temp_array = temp_array[temp_array[:, 2].argsort()]

# Calculate difference between expected sum and the lower sum
# This is the number of integers that need to be rounded up from the lower sum
# The sort order (roundoff error) ensures that the values closest to being
# rounded up are at the bottom of the array
difference = int(expected_sum - lower_sum)

# Add one to the numbers most likely to round up to eliminate the difference
temp_array_len, _ = temp_array.shape
for i in range(temp_array_len - difference, temp_array_len):
    temp_array[i, 1] += 1

# Re-sort the array based on original index
temp_array = temp_array[temp_array[:, 0].argsort()]

# Return array to the one-dimensional format of the original array
array_list = []
for i in range(temp_array_len):
    array_list.append(int(temp_array[i, 1]))
new_array = np.array(array_list)
Calculate the sum of the floors and the sum of the numbers.
Round the sum of the numbers and subtract the sum of the floors from it; the difference is how many ceilings we need to patch (how many +1s we need).
Sort the array by each number's difference from its ceiling, from small to large.
For diff iterations (diff is how many ceilings we need to patch), set the result to the ceiling of the number; for the others, set the result to the floor of the number.
import java.util.Arrays;

public class Float_Ceil_or_Floor {

    public static int[] getNearlyArrayWithSameSum(double[] numbers) {
        NumWithDiff[] numWithDiffs = new NumWithDiff[numbers.length];
        double sum = 0.0;
        int floorSum = 0;
        for (int i = 0; i < numbers.length; i++) {
            int floor = (int) numbers[i];
            int ceil = floor;
            if (floor < numbers[i]) ceil++; // check if a number like 4.0 has the same floor and ceiling
            floorSum += floor;
            sum += numbers[i];
            numWithDiffs[i] = new NumWithDiff(ceil, floor, ceil - numbers[i]);
        }

        // sort the array by its diffWithCeil
        Arrays.sort(numWithDiffs, (a, b) -> {
            if (a.diffWithCeil < b.diffWithCeil) return -1;
            else return 1;
        });

        int roundSum = (int) Math.round(sum);
        int diff = roundSum - floorSum;

        int[] res = new int[numbers.length];
        for (int i = 0; i < numWithDiffs.length; i++) {
            if (diff > 0 && numWithDiffs[i].floor != numWithDiffs[i].ceil) {
                res[i] = numWithDiffs[i].ceil;
                diff--;
            } else {
                res[i] = numWithDiffs[i].floor;
            }
        }
        return res;
    }

    public static void main(String[] args) {
        double[] arr = { 1.2, 3.7, 100, 4.8 };
        int[] res = getNearlyArrayWithSameSum(arr);
        for (int i : res) System.out.print(i + " ");
    }
}

class NumWithDiff {
    int ceil;
    int floor;
    double diffWithCeil;

    public NumWithDiff(int c, int f, double d) {
        this.ceil = c;
        this.floor = f;
        this.diffWithCeil = d;
    }
}
Without minimizing the variance, here's a trivial one:
Sort values from left to right.
Round all down to the next integer.
Let the sum of those integers be K. Increase the N-K rightmost values by 1.
Restore original order.
This obviously satisfies your conditions 1.-4. Alternatively, you could round to the closest integer, and increase N-K of the ones you had rounded down. You can do this greedily by the difference between the original and rounded value, but each run of rounded-down values must only be increased from right to left, to maintain sorted order.
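A minimal Python sketch of that trivial scheme (assuming, as the question does, that the input is already sorted ascending, so steps 1 and 4 are no-ops):

import math

def round_preserving_sum(fn, N):
    ints = [math.floor(x) for x in fn]     # step 2: round everything down
    K = sum(ints)
    for i in range(len(ints) - (N - K), len(ints)):
        ints[i] += 1                       # step 3: bump the N - K rightmost values
    return ints

print(round_preserving_sum([0.1, 0.3, 0.4, 0.4, 0.8], 2))   # [0, 0, 0, 1, 1]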
If you can accept a small change in the total while improving the variance, this will probabilistically preserve totals in Python:
import math
import random
integer_list = [int(x) + int(random.random() <= math.modf(x)[0]) for x in my_list]
To explain: it rounds all numbers down and adds one with a probability equal to the fractional part, i.e. one in ten values of 0.1 will become 1 and the rest 0.
This works for statistical data where you are converting a large number of fractional persons into either 1 person or 0 persons.
