tf.keras.metrics.MeanIoU with sigmoid layer - validation

I have a network for semantic segmentation whose last layer applies a sigmoid activation, so all predictions are scaled to the range 0-1. There is the validation metric tf.keras.metrics.MeanIoU(num_classes), which compares classified predictions (0 or 1) with validation labels (0 or 1). So if I make a prediction and apply this metric, will it automatically map the continuous predictions to binary with threshold = 0.5? And is there any way to define the threshold manually?

No, tf.keras.metrics.MeanIoU will not automatically map the continuous predictions to binary with a threshold of 0.5.
Instead, it converts each continuous prediction to a class id by truncating it at the decimal point, so 0.99 becomes 0, 0.50 becomes 0, 0.01 becomes 0, 1.99 becomes 1, 1.01 becomes 1, and so on when num_classes=2. In other words, if your predicted values lie between 0 and 1 and num_classes=2, everything is counted as class 0 unless the prediction is exactly 1.
Below are experiments that demonstrate this behavior in TensorFlow 2.2.0:
All-binary predictions:
import tensorflow as tf
m = tf.keras.metrics.MeanIoU(num_classes=2)
_ = m.update_state([0, 0, 1, 1], [0, 0, 1, 1])
m.result().numpy()
Output -
1.0
Change one prediction to the continuous value 0.99 - here 0.99 is treated as 0.
import tensorflow as tf
m = tf.keras.metrics.MeanIoU(num_classes=2)
_ = m.update_state([0, 0, 1, 1], [0, 0, 1, 0.99])
m.result().numpy()
Output -
0.5833334
Change one prediction to the continuous value 0.01 - here 0.01 is treated as 0.
import tensorflow as tf
m = tf.keras.metrics.MeanIoU(num_classes=2)
_ = m.update_state([0, 0, 1, 1], [0, 0.01, 1, 1])
m.result().numpy()
Output -
1.0
Change one prediction to the continuous value 1.99 - here 1.99 is treated as 1.
import tensorflow as tf
m = tf.keras.metrics.MeanIoU(num_classes=2)
_ = m.update_state([0, 0, 1, 1], [0, 0, 1, 1.99])
m.result().numpy()
Output -
1.0
So the ideal way is to define a function that converts the continuous predictions to binary before evaluating MeanIoU.
Hope this answers your question. Happy learning.

Try this (it uses the TF 1.x-style metrics API):
import tensorflow as tf
from tensorflow.keras import backend as K

def mean_iou(y_true, y_pred):
    th = 0.5
    y_pred_ = tf.to_int32(y_pred > th)
    score, up_opt = tf.metrics.mean_iou(y_true, y_pred_, 2)
    K.get_session().run(tf.local_variables_initializer())
    with tf.control_dependencies([up_opt]):
        score = tf.identity(score)
    return score
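In TF 2.2 (the version used in the experiments above) MeanIoU has no threshold argument, but one option (a sketch of my own, not from the original answers; ThresholdedMeanIoU is a hypothetical name) is to subclass tf.keras.metrics.MeanIoU and binarize inside update_state:

import tensorflow as tf

class ThresholdedMeanIoU(tf.keras.metrics.MeanIoU):
    def __init__(self, num_classes, threshold=0.5, name=None, dtype=None):
        super().__init__(num_classes=num_classes, name=name, dtype=dtype)
        self.threshold = threshold  # assumes sigmoid outputs in [0, 1]

    def update_state(self, y_true, y_pred, sample_weight=None):
        # Binarize the sigmoid outputs before the confusion-matrix update.
        y_pred = tf.cast(tf.greater(y_pred, self.threshold), tf.int32)
        return super().update_state(y_true, y_pred, sample_weight)

m = ThresholdedMeanIoU(num_classes=2, threshold=0.5)
m.update_state([0, 0, 1, 1], [0.1, 0.4, 0.9, 0.99])
print(m.result().numpy())  # 1.0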

Related

LightGBM predict with pred_contrib=True for multiclass: order of SHAP values in the returned array

LightGBM's predict method with pred_contrib=True returns an array of shape (n_samples, (n_features + 1) * n_classes).
What is the order of data in the second dimension of this array?
In other words, there are two questions:
What is the correct way to reshape this array to use it: shape = (n_samples, n_features + 1, n_classes) or shape = (n_samples, n_classes, n_features + 1)?
In the feature dimension, there are n_features entries, one for each feature, and a (useless) entry for the contribution not related to any feature. What is the order of these entries: feature contributions in the entries 1,..., n_features in the same order they appear in the dataset, with the remaining (useless) entry at index 0, or some other way?
The answers are as follows:
The correct shape is (n_samples, n_classes, n_features + 1).
The feature contributions sit in the entries 0, ..., n_features - 1, in the same order the features appear in the dataset; the remaining (useless) entry, at the last index, is the expected value (the per-class base score).
The following code shows it convincingly:
import lightgbm, pandas, numpy
params = {'objective': 'multiclass', 'num_classes': 4, 'num_iterations': 10000,
'metric': 'multiclass', 'early_stopping_rounds': 10}
train_df = pandas.DataFrame({'f0': [0, 1, 2, 3] * 50, 'f1': [0, 0, 1] * 66 + [1, 2]}, dtype=float)
val_df = train_df.copy()
train_target = pandas.Series([0, 1, 2, 3] * 50)
val_target = pandas.Series([0, 1, 2, 3] * 50)
train_set = lightgbm.Dataset(train_df, train_target)
val_set = lightgbm.Dataset(val_df, val_target)
model = lightgbm.train(params=params, train_set=train_set, valid_sets=[val_set, train_set])
feature_contribs = model.predict(val_df, pred_contrib=True)
print('Shape of SHAP:', feature_contribs.shape)
# Shape of SHAP: (200, 12)
print('Averages over samples:', numpy.mean(feature_contribs, axis=0))
# Averages over samples: [ 3.99942301e-13 -4.02281771e-13 -4.30029167e+00 -1.90606677e-05
# 1.90606677e-05 -4.04157656e+00 2.24205077e-05 -2.24205077e-05
# -4.04265615e+00 -3.70370401e-15 5.20335728e-18 -4.30029167e+00]
feature_contribs.shape = (200, 4, 3)
print('Mean feature contribs:', numpy.mean(feature_contribs, axis=(0, 1)))
# Mean feature contribs: [ 8.39960111e-07 -8.39960113e-07 -4.17120401e+00]
(Each output appears as a comment on the following line.)
The explanation is as follows.
I have created a dataset with two features, with labels identical to the first of these features.
On a per-sample basis I would therefore expect a significant contribution from the first feature only; averaged over the whole dataset, however, each feature's SHAP contributions cancel out to roughly zero, and only the constant expected-value entry survives.
After averaging the SHAP output over the samples, we get an array of shape (12,) with large values at the positions 2, 5, 8, 11 (zero-based): one entry per group of three, always in the same place within the group.
This shows that the correct shape of this array is (4, 3).
After reshaping this way and averaging over the samples and the classes, we get an array of shape (3,) with the nonzero entry at the end.
This shows that the last entry of each per-class triple is the expected value, which does not correspond to any feature; the preceding entries correspond to the features in the order they appear in the dataset.
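With that layout, downstream use is straightforward (a sketch of my own, not from the original answer; model, X, n_classes and n_features are assumed to come from your setup):

import numpy as np

contribs = model.predict(X, pred_contrib=True)
contribs = contribs.reshape(len(X), n_classes, n_features + 1)
shap_values = contribs[:, :, :-1]  # per-feature contributions, dataset order
base_values = contribs[:, :, -1]   # expected value (bias) for each class

# Sanity check: contributions plus base value reconstruct the raw scores.
raw_scores = model.predict(X, raw_score=True)
assert np.allclose(contribs.sum(axis=2), raw_scores)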

Pairwise Cosine Similarity using TensorFlow

How can we efficiently calculate pairwise cosine distances in a matrix using TensorFlow? Given an MxN matrix, the result should be an MxM matrix, where the element at position [i][j] is the cosine distance between i-th and j-th rows/vectors in the input matrix.
This can be done with Scikit-Learn fairly easily as follows:
from sklearn.metrics.pairwise import pairwise_distances
pairwise_distances(input_matrix, metric='cosine')
Is there an equivalent method in TensorFlow?
There is an answer for getting a single cosine distance here: https://stackoverflow.com/a/46057597/288875 . It is based on tf.losses.cosine_distance.
Here is a solution which does this for matrices:
import tensorflow as tf
import numpy as np

with tf.Session() as sess:
    M = 3

    # input
    input = tf.placeholder(tf.float32, shape=(M, M))

    # normalize each row
    normalized = tf.nn.l2_normalize(input, dim=1)

    # multiply row i with row j using transpose
    # element wise product
    prod = tf.matmul(normalized, normalized,
                     adjoint_b=True  # transpose second matrix
                     )

    dist = 1 - prod

    input_matrix = np.array(
        [[1, 1, 1],
         [0, 1, 1],
         [0, 0, 1]],
        dtype='float32')

    print("input_matrix:")
    print(input_matrix)

    from sklearn.metrics.pairwise import pairwise_distances
    print("sklearn:")
    print(pairwise_distances(input_matrix, metric='cosine'))

    print("tensorflow:")
    print(sess.run(dist, feed_dict={input: input_matrix}))
which gives me:
input_matrix:
[[ 1. 1. 1.]
[ 0. 1. 1.]
[ 0. 0. 1.]]
sklearn:
[[ 0. 0.18350345 0.42264974]
[ 0.18350345 0. 0.29289323]
[ 0.42264974 0.29289323 0. ]]
tensorflow:
[[ 5.96046448e-08 1.83503449e-01 4.22649741e-01]
[ 1.83503449e-01 5.96046448e-08 2.92893231e-01]
[ 4.22649741e-01 2.92893231e-01 0.00000000e+00]]
Note that this solution may not be the optimal one, as it calculates all entries of the (symmetric) result matrix, i.e. it does almost twice the necessary work. This is unlikely to be a problem for small matrices; for large matrices a combination of loops may be faster.
Note also that this does not have a minibatch dimension, so it works for a single matrix only.
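If a minibatch dimension is needed, one option (a sketch of my own, assuming TF 2.x, not part of the original answers) is to rely on the batched behavior of tf.matmul:

import tensorflow as tf

def batched_cosine_distances(x):
    # x: (batch, M, dim) -> (batch, M, M) pairwise cosine distances,
    # computed independently for each batch entry.
    normalized = tf.nn.l2_normalize(x, axis=-1)
    return 1.0 - tf.matmul(normalized, normalized, transpose_b=True)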
An elegant solution (the output is the same as that of the scikit-learn pairwise_distances function):
import tensorflow as tf
import numpy as np

def compute_cosine_distances(a, b):
    # a shape is n_a * dim
    # b shape is n_b * dim
    # result shape is n_a * n_b
    normalize_a = tf.nn.l2_normalize(a, 1)
    normalize_b = tf.nn.l2_normalize(b, 1)
    distance = 1 - tf.matmul(normalize_a, normalize_b, transpose_b=True)
    return distance

Test:
input_matrix = np.array([[1, 1, 1],
                         [0, 1, 1],
                         [0, 0, 1]], dtype='float32')
compute_cosine_distances(input_matrix, input_matrix)
output:
<tf.Tensor: id=442, shape=(3, 3), dtype=float32, numpy=
array([[5.9604645e-08, 1.8350345e-01, 4.2264974e-01],
[1.8350345e-01, 5.9604645e-08, 2.9289323e-01],
[4.2264974e-01, 2.9289323e-01, 0.0000000e+00]], dtype=float32)>

How to sample without replacement and reweigh each time (conditional sampling)?

Consider a dataset of N rows with weights. This is the basic algorithm:
1. Normalize the weights so that they sum to 1.
2. Back up the weights into another column to record the sample probabilities.
3. Randomly choose 1 row (without replacement), given the sample probabilities, and add it to the sample dataset.
4. Remove the drawn weight from the original dataset, and recompute the sample probabilities by normalizing the weights of the remaining rows.
5. Repeat steps 3 and 4 till the sum of weights in the sample reaches or exceeds the threshold (assume 0.6).
Here is a toy example:
import pandas as pd
import numpy as np
def sampler(n):
    df = pd.DataFrame(np.random.rand(n), columns=['weight'])
    df['weight'] = df['weight'] / df['weight'].sum()
    df['samp_prob'] = df['weight']
    samps = pd.DataFrame(columns=['weight'])
    while True:
        choice = np.random.choice(df.index, 1, replace=False, p=df['samp_prob'])[0]
        samps.loc[choice, 'weight'] = df.loc[choice, 'weight']
        df.drop(choice, axis=0, inplace=True)
        df['samp_prob'] = df['weight'] / df['weight'].sum()
        if samps['weight'].sum() >= 0.6:
            break
    return samps
The problem with the toy example is the rapid growth in run time as n increases.
Starting-off approach
A few observations:
Dropping rows per iteration, which creates new dataframes, isn't helping performance.
It doesn't look easy to vectorize, BUT it should be easy to work with the underlying array data for performance. The idea is to use masks and avoid re-creating dataframes or arrays. To start, we use a two-column array corresponding to the columns named 'weight' and 'samp_prob'.
So, with those in mind, the starting approach would be something like this -
def sampler2(n):
    a = np.random.rand(n, 2)
    a[:, 0] /= a[:, 0].sum()
    a[:, 1] = a[:, 0]
    N = len(a)
    idx = np.arange(N)
    mask = np.ones(N, dtype=bool)
    while True:
        choice = np.random.choice(idx[mask], 1, replace=False, p=a[mask, 1])[0]
        mask[choice] = 0
        a_masked = a[mask, 0]
        a[mask, 1] = a_masked / a_masked.sum()
        if a[~mask, 0].sum() >= 0.6:
            break
    out = a[~mask, 0]
    return out
Improvement #1
A later observation revealed that the first column of the array doesn't change across iterations. So we can optimize the masked summations for the first column by pre-computing its total sum; then, at each iteration, a[~mask,0].sum() is simply that total minus a_masked.sum(). This leads us to the first improvement, listed below -
def sampler3(n):
    a = np.random.rand(n, 2)
    a[:, 0] /= a[:, 0].sum()
    a[:, 1] = a[:, 0]
    N = len(a)
    idx = np.arange(N)
    mask = np.ones(N, dtype=bool)
    a0_sum = a[:, 0].sum()
    while True:
        choice = np.random.choice(idx[mask], 1, replace=False, p=a[mask, 1])[0]
        mask[choice] = 0
        a_masked = a[mask, 0]
        a_masked_sum = a_masked.sum()
        a[mask, 1] = a_masked / a_masked_sum
        if a0_sum - a_masked_sum >= 0.6:
            break
    out = a[~mask, 0]
    return out
Improvement #2
Now, slicing and masking into the columns of a 2D array can be improved by using two separate arrays instead, given that the first column doesn't change between iterations. That gives us a modified version, like so -
def sampler4(n):
    a = np.random.rand(n)
    a /= a.sum()
    b = a.copy()
    N = len(a)
    idx = np.arange(N)
    mask = np.ones(N, dtype=bool)
    a_sum = a.sum()
    while True:
        choice = np.random.choice(idx[mask], 1, replace=False, p=b[mask])[0]
        mask[choice] = 0
        a_masked = a[mask]
        a_masked_sum = a_masked.sum()
        b[mask] = a_masked / a_masked_sum
        if a_sum - a_masked_sum >= 0.6:
            break
    out = a[~mask]
    return out
Runtime test -
In [250]: n = 1000
In [251]: %timeit sampler(n) # original app
...: %timeit sampler2(n)
...: %timeit sampler3(n)
...: %timeit sampler4(n)
1 loop, best of 3: 655 ms per loop
10 loops, best of 3: 50 ms per loop
10 loops, best of 3: 44.9 ms per loop
10 loops, best of 3: 38.4 ms per loop
In [252]: n = 2000
In [253]: %timeit sampler(n) # original app
...: %timeit sampler2(n)
...: %timeit sampler3(n)
...: %timeit sampler4(n)
1 loop, best of 3: 1.32 s per loop
10 loops, best of 3: 134 ms per loop
10 loops, best of 3: 119 ms per loop
10 loops, best of 3: 100 ms per loop
Thus, we are getting 17x+ and 13x+ speedups with the final version over the original method for n=1000 and n=2000 sizes!
I think you can rewrite this while loop to do it in a single pass:
while True:
    choice = np.random.choice(df.index, 1, replace=False, p=df['samp_prob'])[0]
    samps.loc[choice, 'weight'] = df.loc[choice, 'weight']
    df.drop(choice, axis=0, inplace=True)
    df['samp_prob'] = df['weight'] / df['weight'].sum()
    if samps['weight'].sum() >= 0.6:
        break
to something more like:
n = len(df.index)
ind = np.random.choice(n, n, replace=False, p=df["samp_prob"])
res = df.iloc[ind]
i = (res["weight"].cumsum() >= 0.6).values.argmax()  # first position where the cumulative sum reaches 0.6
samps = res.iloc[:i + 1]
The key parts are that choice can take multiple elements (indeed the entire array) whilst still respecting the probabilities, and that the cumulative sum lets you cut off once the 0.6 threshold is passed.
In these examples you can see that the array is randomly ordered, but that 4 is most likely to be chosen near the top.
In [11]: np.random.choice(5, 5, replace=False, p=[0.05, 0.05, 0.1, 0.2, 0.6])
Out[11]: array([0, 4, 3, 2, 1])
In [12]: np.random.choice(5, 5, replace=False, p=[0.05, 0.05, 0.1, 0.2, 0.6])
Out[12]: array([3, 4, 1, 2, 0])
In [13]: np.random.choice(5, 5, replace=False, p=[0.05, 0.05, 0.1, 0.2, 0.6])
Out[13]: array([0, 4, 3, 1, 2])
In [14]: np.random.choice(5, 5, replace=False, p=[0.05, 0.05, 0.1, 0.2, 0.6])
Out[14]: array([4, 3, 0, 2, 1])
In [15]: np.random.choice(5, 5, replace=False, p=[0.05, 0.05, 0.1, 0.2, 0.6])
Out[15]: array([4, 2, 3, 0, 1])
In [16]: np.random.choice(5, 5, replace=False, p=[0.05, 0.05, 0.1, 0.2, 0.6])
Out[16]: array([3, 4, 2, 0, 1])
Note: replace=False ensures the probabilities are "reweighed" in the sense that an element can't be picked again.
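Putting the one-pass idea together as a drop-in replacement for the original sampler (a sketch of my own; sampler_single_pass is a hypothetical name, reusing the toy weights from the question):

import numpy as np
import pandas as pd

def sampler_single_pass(n):
    df = pd.DataFrame(np.random.rand(n), columns=['weight'])
    df['weight'] = df['weight'] / df['weight'].sum()
    # One weighted permutation replaces the whole draw/renormalize loop.
    order = np.random.choice(n, n, replace=False, p=df['weight'])
    res = df.iloc[order]
    stop = (res['weight'].cumsum() >= 0.6).values.argmax()
    return res.iloc[:stop + 1]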

Generating a random number with weighted probability - 'Distribution' gem

I would like to create a random number generator, that generates a random decimal number:
Greater than 0.0
Less than 15.0
Where the probability of that number being close to 2.0 is relatively high
The probability of it being near 15.0 or very close to zero is very low
I'm terrifically poor at mathematics but my research seems to tell me I want to pull a random number from a Cumulative Distribution Function resembling a Fisher–Snedecor (F) pattern, a bit like this one:
http://cdn.app.compendium.com/uploads/user/458939f4-fe08-4dbc-b271-efca0f5a2682/742d7708-efd3-492c-abff-6044d78e3bbd/Image/6303a2314437d8fcf2f72d9a56b1293a/f_distribution_probability.png
I am using a Ruby gem called Distribution (https://github.com/sciruby/distribution) to try and achieve this. It looks like the right tool, but I'm having a terrible time trying to understand how to use it to achieve the desired outcome :( Any help please.
I take it back - there is no rng call for F. So, if you want to use the Distribution gem, what I would propose is to use Chi2 with 4 degrees of freedom.
The mode of Chi2 with k degrees of freedom is equal to k-2, so for 4 d.f. you'll get the mode at 2, see here. My Ruby is rusty, bear with me:
require 'distribution'
normal = Distribution::Normal.rng(0)
g1 = normal.call
g2 = normal.call
g3 = normal.call
g4 = normal.call
chi2 = g1*g1 + g2*g2 + g3*g3 + g4*g4
UPDATE
You have to truncate it at 15: if a generated chi2 value is greater than 15, just reject it and generate another one. Though I would say you won't see a lot of values above 15; check the graphs for the PDF/CDF.
UPDATE II
And if you want to get samples from F, make a generic Chi2 generator for d degrees of freedom from the code above, and just sample the ratio of chi2s, check here:
def dchi2(d)
  normal = Distribution::Normal.rng(0)
  -> { Array.new(d) { normal.call**2 }.sum }
end
f = (dchi2(d1).call / d1) / (dchi2(d2).call / d2)
UPDATE III
And, frankly, I don't see how you could get the F distribution working for you. It is ok at 0, but the mode is equal to (d1-2)/d1 * d2/(d2+2), and it is hard to make that equal to 2. The graph you provided has its mode at about 1/3.
Here's a very crude, unscientific, non-mathy attempt at using the F-distribution with the parameters you gave in the F-function image (3 and 36).
First I calculate what F-value is needed for the CDF to be 0.975 (100% minus 2.5% for the upper end of the range, which maps to your number 15). To calculate that we can use the p_value method like so:
> F_15 = Distribution::F.p_value(0.975, 3, 36)
=> 3.5046846420861977
Next we simply use a multiplier so that when we calculate the CDF it will return the value 15 when the F-value is F_15.
> M = 15 / F_15
=> 4.27998565687528
And now we can generate random numbers with rand, which has a range of 0..1 like so:
[M * Distribution::F.p_value(rand, 3, 36), 15].min
The question is: will this function be close to the number 2 with a 45% probability? Well... sort of. You need to pick the right parameters for the F-distribution to tweak the curve (or just adjust the multiplier M). But here's a sample with the parameters from your image:
0.step(0.99, 0.02).map { |n|
  sprintf("%0.2f", M * Distribution::F.p_value(n, 3, 36))
}
Gives you:
["0.00", "0.26", "0.42", "0.57", "0.70", "0.83", "0.95", "1.07",
"1.20", "1.31", "1.43", "1.55", "1.67", "1.80", "1.92", "2.04",
"2.17", "2.30", "2.43", "2.56", "2.70", "2.84", "2.98", "3.13",
"3.28", "3.44", "3.60", "3.77", "3.95", "4.13", "4.32", "4.52",
"4.73", "4.95", "5.18", "5.43", "5.69", "5.97", "6.28", "6.61",
"6.97", "7.37", "7.81", "8.32", "8.90", "9.60", "10.45", "11.56",
"13.14", "15.90"]
Sometimes you know which distribution applies because of the nature of the data. If, for example, the random variable is the sum of independent, identical Bernoulli (two-state) random variables, you know the former has a binomial distribution, which can be approximated by a normal distribution. When, as here, that does not apply, you can use a continuous distribution shaped by its parameters, or simply use a discrete distribution. Others have made suggestions for using various continuous distributions, so I'll pass on some remarks about using a discrete distribution.
Suppose the discrete probability density function were the following:
pdf = [[ 0.5, 0.03], [ 1.0, 0.06], [ 1.5, 0.10], [ 2.0, 0.15], [ 2.5, 0.15], [ 3.0, 0.10],
       [ 4.0, 0.11], [ 6.0, 0.14], [ 9.0, 0.10], [12.0, 0.03], [14.0, 0.02], [15.0, 0.01]]
pdf.map(&:last).reduce(:+)
#=> 1.0
This can be interpreted as follows: there is a probability of 0.03 that the random variable will be less than 0.5, a probability of 0.06 that it will be greater than or equal to 0.5 and less than 1.0, and so on.
A discrete pdf might be constructed from historical data or by sampling, an advantage it has over using a continuous distribution. It can be made arbitrarily fine by increasing the number of intervals.
Next convert the pdf to a cumulative distribution function:
cum = 0.0
cdf = pdf.map { |k,v| [k, cum += v] }
#=> [[0.5, 0.03], [1.0, 0.09], [1.5, 0.19], [2.0, 0.34], [2.5, 0.49], [3.0, 0.59],
# [4.0, 0.7], [6.0, 0.84], [9.0, 0.94], [12.0, 0.97], [14.0, 0.99], [15.0, 1.0]]
Now use Kernel#rand to generate pseudo random variates between 0.0 and 1.0 and use Enumerable#find to associate the random variate with a cdf key:
def rnd(cdf)
  r = rand
  cdf.find { |k, v| r < v }.first
end
Note that cdf.find { |k,v| rand < v }.first would produce erroneous results, since rand is executed for each key-value pair of cdf.
Let's try it 100,000 times, recording the relative frequencies:
n = 100_000
inc = 1.0/n
n.times.with_object(Hash.new(0.0)) { |_, h| h[rnd(cdf)] += inc }.
  sort.
  map { |k, v| [k, v.round(5)] }.to_h
#=> { 0.5=>0.03053, 1.0=>0.05992, 1.5=>0.10084, 2.0=>0.14959, 2.5=>0.15024,
# 3.0=>0.10085, 4.0=>0.10946, 6.0=>0.13923, 9.0=>0.09919, 12.0=>0.03073,
# 14.0=>0.01931, 15.0=>0.01011}

How to round floats to integers while preserving their sum?

Let's say I have an array of floating point numbers, in sorted (let's say ascending) order, whose sum is known to be an integer N. I want to "round" these numbers to integers while leaving their sum unchanged. In other words, I'm looking for an algorithm that converts the array of floating-point numbers (call it fn) to an array of integers (call it in) such that:
the two arrays have the same length
the sum of the array of integers is N
the difference between each floating-point number fn[i] and its corresponding integer in[i] is less than 1 (or equal to 1 if you really must)
given that the floats are in sorted order (fn[i] <= fn[i+1]), the integers will also be in sorted order (in[i] <= in[i+1])
Given that those four conditions are satisfied, an algorithm that minimizes the rounding variance (sum((in[i] - fn[i])^2)) is preferable, but it's not a big deal.
Examples:
[0.02, 0.03, 0.05, 0.06, 0.07, 0.08, 0.09, 0.1, 0.11, 0.12, 0.13, 0.14]
=> [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1]
[0.1, 0.3, 0.4, 0.4, 0.8]
=> [0, 0, 0, 1, 1]
[0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1]
=> [0, 0, 0, 0, 0, 0, 0, 0, 0, 1]
[0.4, 0.4, 0.4, 0.4, 9.2, 9.2]
=> [0, 0, 1, 1, 9, 9] is preferable
=> [0, 0, 0, 0, 10, 10] is acceptable
[0.5, 0.5, 11]
=> [0, 1, 11] is fine
=> [0, 0, 12] is technically not allowed but I'd take it in a pinch
To answer some excellent questions raised in the comments:
Repeated elements are allowed in both arrays (although I would also be interested to hear about algorithms that work only if the array of floats does not include repeats)
There is no single correct answer - for a given input array of floats, there are generally multiple arrays of ints that satisfy the four conditions.
The application I had in mind was - and this is kind of odd - distributing points to the top finishers in a game of MarioKart ;-) Never actually played the game myself, but while watching someone else I noticed that there were 24 points distributed among the top 4 finishers, and I wondered how it might be possible to distribute the points according to finishing time (so if someone finishes with a large lead they get a larger share of the points). The game tracks point totals as integers, hence the need for this kind of rounding.
For the curious, here is the test script I used to identify which algorithms worked.
One option you could try is "cascade rounding".
For this algorithm you keep track of two running totals: one of floating point numbers so far, and one of the integers so far.
To get the next integer, you add the next fp number to your running total, round the running total, then subtract the integer running total from the rounded running total:

number   running total   integer   integer running total
1.3      1.3             1         1
1.7      3.0             2         3
1.9      4.9             2         5
2.2      7.1             2         7
2.8      9.9             3         10
3.1      13.0            3         13
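In Python the same cascade looks roughly like this (my own illustration, not part of the original answer):

def cascade_round(fn):
    # Track a float running total and an int running total; each output
    # integer is the rounded running total minus the integers emitted so far.
    ints, float_total, int_total = [], 0.0, 0
    for x in fn:
        float_total += x
        r = round(float_total) - int_total
        ints.append(r)
        int_total += r
    return ints

print(cascade_round([1.3, 1.7, 1.9, 2.2, 2.8, 3.1]))  # [1, 2, 2, 2, 3, 3]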
Here is one algorithm which should accomplish the task. The main difference from the other algorithms is that this one always rounds the numbers in the correct order, minimizing the roundoff error.
The language is a pseudo-language, probably derived from JavaScript or Lua. It should explain the point. Note the one-based indexing (which is nicer with x-to-y for loops :p).
// Temp array with same length as fn.
tempArr = Array(fn.length)

// Calculate the expected sum.
arraySum = sum(fn)

lowerSum = 0
// Populate the temp array.
for i = 1 to fn.length
    tempArr[i] = { result: floor(fn[i]),             // Lower bound
                   difference: fn[i] - floor(fn[i]), // Roundoff error
                   index: i }                        // Original index
    // Accumulate the lower sum.
    lowerSum = lowerSum + tempArr[i].result
end for

// Sort the temp array on the roundoff error.
sort(tempArr, "difference")

// Now arraySum - lowerSum gives us the difference between the sums of these
// arrays. tempArr is ordered in such a way that the numbers closest to the
// next integer are at the top.
difference = arraySum - lowerSum

// Add 1 to those most likely to round up to the next number so that
// the difference is nullified.
for i = (tempArr.length - difference + 1) to tempArr.length
    tempArr[i].result = tempArr[i].result + 1
end for

// Optionally, sort the array back on the original index.
sort(tempArr, "index")
One really easy way is to take all the fractional parts and sum them up. By the definition of your problem, that number must be a whole number. Distribute that whole number, one unit at a time, starting with the largest of your numbers, then the second largest, and so on, until you run out of things to distribute.
Note this is pseudocode... and may be off by one in an index... it's late and I am sleepy.
float accumulator = 0;

for (i = 0; i < num_elements; i++)  /* assumes 0-based array */
{
    accumulator += fn[i] - floor(fn[i]);
    fn[i] = floor(fn[i]);
}

i = num_elements;
while ((accumulator > 0) && (i > 0))
{
    fn[i-1] += 1;  /* assumes 0-based array */
    accumulator -= 1;
    i--;
}
Update: There are other methods of distributing the accumulated values based on how much truncation was performed on each value. This requires keeping a separate list called loss[i] = fn[i] - floor(fn[i]). You can then repeatedly scan the fn[i] list and give 1 to the item with the greatest loss (setting its loss[i] to 0 afterwards). It's complicated, but I guess it works; see the sketch below.
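For instance (my own illustration of that update; distribute_by_loss is a hypothetical name):

import math

def distribute_by_loss(fn):
    ints = [math.floor(x) for x in fn]
    loss = [x - math.floor(x) for x in fn]  # truncation performed on each value
    budget = round(sum(fn)) - sum(ints)     # the whole number to hand back out
    for _ in range(budget):
        i = max(range(len(loss)), key=loss.__getitem__)  # greatest remaining loss
        ints[i] += 1
        loss[i] = 0
    return ints

# Sum is preserved, but sorted order (condition 4) is not guaranteed:
print(distribute_by_loss([0.4, 0.4, 0.4, 0.4, 9.2, 9.2]))  # [1, 1, 0, 0, 9, 9]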
How about:
a) start: array is [0.1, 0.2, 0.4, 0.5, 0.8], N=3, presuming it's sorted
b) round them all the usual way: array is [0 0 0 1 1]
c) get the sum of the new array and subtract it from N to get the remainder.
d) while remainder>0, iterate through elements, going from the last one
- check if the new value would break rule 3.
- if not, add 1
e) in case that remainder<0, iterate from first one to the last one
- check if the new value would break rule 3.
- if not, subtract 1
Essentially what you'd do is distribute the leftovers after rounding to the most likely candidates.
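A Python sketch of steps a)-e) (my own illustration; distribute_leftovers is a hypothetical helper that rounds half up to match step b and assumes a feasible input so the loops terminate):

import math

def distribute_leftovers(fn, N):
    ints = [math.floor(x + 0.5) for x in fn]  # round half up, as in step b)
    remainder = N - sum(ints)
    while remainder > 0:  # step d): add 1, starting from the last element
        for i in reversed(range(len(ints))):
            if remainder > 0 and ints[i] + 1 - fn[i] < 1:  # rule 3 check
                ints[i] += 1
                remainder -= 1
    while remainder < 0:  # step e): subtract 1, starting from the first element
        for i in range(len(ints)):
            if remainder < 0 and fn[i] - (ints[i] - 1) < 1:  # rule 3 check
                ints[i] -= 1
                remainder += 1
    return ints

print(distribute_leftovers([0.1, 0.2, 0.4, 0.5, 0.8], 3))  # [0, 0, 1, 1, 1]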
Round the floats as you normally would, but keep track of the delta from rounding and associated index into fn and in.
Sort the second array by delta.
While sum(in) < N, work forwards from the largest negative delta, incrementing the rounded value (making sure you still satisfy rule #3).
Or, while sum(in) > N, work backwards from the largest positive delta, decrementing the rounded value (making sure you still satisfy rule #3).
Example:
[0.02, 0.03, 0.05, 0.06, 0.07, 0.08, 0.09, 0.1, 0.11, 0.12, 0.13, 0.14] N=1
1. in = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] sum=0
   and the deltas are [[-0.02, 0], [-0.03, 1], [-0.05, 2], [-0.06, 3], [-0.07, 4], [-0.08, 5],
   [-0.09, 6], [-0.1, 7], [-0.11, 8], [-0.12, 9], [-0.13, 10], [-0.14, 11]]
2. Sorting will reverse the array.
3. Working from the largest negative delta, you get [-0.14, 11].
   Increment in[11] and you get [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1] sum=1
Done.
Can you try something like this?
in[i] = fn[i] - int(fn[i]);
fn_res[i] = fn[i] - in[i];
fn_res is the resulting fraction. (I thought this was basic...) Are we missing something?
Well, condition 4 is the pain point. Otherwise you could do things like "usually round down and accumulate the leftover; round up when the accumulator >= 1". (Edit: actually, that might still be OK as long as you swapped their positions?)
There might be a way to do it with linear programming? (that's maths "programming", not computer programming - you'd need some maths to find the feasible solution, although you could probably skip the usual "optimisation" part).
As an example of the linear programming - with the example [1.3, 1.7, 1.9, 2.2, 2.8, 3.1] you could have the rules:
1 <= i < 2
1 <= j < 2
1 <= k < 2
2 <= l < 3
2 <= m < 3
3 <= n < 4
i <= j <= k <= l <= m <= n
i + j + k + l + m + n = 13
Then apply some linear/matrix algebra ;-p Hint: there are products to do the above based on things like the "Simplex" algorithm. Common university fodder, too (I wrote one at uni for my final project).
The problem, as I see it, is that the sorting algorithm is not specified. Or more like - whether it's a stable sort or not.
Consider the following array of floats:
[ 0.2 0.2 0.2 0.2 0.2 ]
The sum is 1. The integer array then should be:
[ 0 0 0 0 1 ]
However, if the sorting algorithm isn't stable, it could sort the "1" somewhere else in the array...
Keep the accumulated fractional differences under 1, and check that the result stays sorted.
Something like:
int i = 0;
float res = 0;
while (i < n) {  /* n = sizeof(fn) / sizeof(float) */
    res += fn[i] - floor(fn[i]);
    if (res >= 1) {
        res -= 1;
        in[i] = ceil(fn[i]);
    } else {
        in[i] = floor(fn[i]);
    }
    if (i > 0 && in[i-1] > in[i])
        swap(&in[i-1], &in[i]);
    i++;
}
(It's paper code, so I didn't check the validity.)
Below is a Python/NumPy implementation of Mikko Rantanen's code above. It took me a bit to put this together, so it may be helpful to future Googlers despite the age of the topic.
import numpy as np
from math import floor

original_array = np.array([1.2, 1.5, 1.4, 1.3, 1.7, 1.9])

# Expected sum of the original values (assumed to be an integer)
expected_sum = int(round(np.sum(original_array)))

# Collect [original index, lower bound, roundoff error] for each value
array_list = []
lower_sum = 0
for i, j in enumerate(original_array):
    array_list.append([i, floor(j), j - floor(j)])
    lower_sum += floor(j)

# Populate the temporary array
temp_array = np.array(array_list)

# Sort the temporary array by roundoff error (ascending), so the values
# closest to rounding up end up at the bottom
temp_array = temp_array[temp_array[:, 2].argsort()]

# Difference between the expected sum and the lower sum: the number of
# values that need to be rounded up rather than down
difference = expected_sum - lower_sum

# Add one to the entries most likely to round up, eliminating the difference
temp_array_len = temp_array.shape[0]
for i in range(temp_array_len - difference, temp_array_len):
    temp_array[i, 1] += 1

# Re-sort the array based on the original index
temp_array = temp_array[temp_array[:, 0].argsort()]

# Return to the one-dimensional format of the original array
new_array = temp_array[:, 1].astype(int)
Calculate the sum of the floors and the sum of the numbers.
Round the sum of the numbers and subtract the sum of the floors; the difference is how many ceilings we need to patch (how many +1s we need).
Sort the array by the difference between each number's ceiling and the number itself, from small to large.
For diff entries (diff being how many ceilings we need to patch), set the result to the ceiling of the number; for the others, set the result to the floor.
import java.util.Arrays;

public class Float_Ceil_or_Floor {

    public static int[] getNearlyArrayWithSameSum(double[] numbers) {
        NumWithDiff[] numWithDiffs = new NumWithDiff[numbers.length];
        double sum = 0.0;
        int floorSum = 0;
        for (int i = 0; i < numbers.length; i++) {
            int floor = (int) numbers[i];
            int ceil = floor;
            if (floor < numbers[i]) ceil++; // a number like 4.0 has the same floor and ceiling
            floorSum += floor;
            sum += numbers[i];
            numWithDiffs[i] = new NumWithDiff(i, ceil, floor, ceil - numbers[i]);
        }
        // sort by the gap between each number and its ceiling, smallest first
        Arrays.sort(numWithDiffs, (a, b) -> Double.compare(a.diffWithCeil, b.diffWithCeil));

        int roundSum = (int) Math.round(sum);
        int diff = roundSum - floorSum;

        int[] res = new int[numbers.length];
        for (NumWithDiff nwd : numWithDiffs) {
            if (diff > 0 && nwd.floor != nwd.ceil) {
                res[nwd.index] = nwd.ceil; // write back to the original position
                diff--;
            } else {
                res[nwd.index] = nwd.floor;
            }
        }
        return res;
    }

    public static void main(String[] args) {
        double[] arr = { 1.2, 3.7, 100, 4.8 };
        int[] res = getNearlyArrayWithSameSum(arr);
        for (int i : res) System.out.print(i + " "); // prints: 1 4 100 5
    }
}

class NumWithDiff {
    int index;
    int ceil;
    int floor;
    double diffWithCeil;

    public NumWithDiff(int i, int c, int f, double d) {
        this.index = i;
        this.ceil = c;
        this.floor = f;
        this.diffWithCeil = d;
    }
}
Without minimizing the variance, here's a trivial one:
Sort values from left to right.
Round all down to the next integer.
Let the sum of those integers be K. Increase the N-K rightmost values by 1.
Restore original order.
This obviously satisfies your conditions 1.-4. Alternatively, you could round to the closest integer, and increase N-K of the ones you had rounded down. You can do this greedily by the difference between the original and rounded value, but each run of rounded-down values must only be increased from right to left, to maintain sorted order.
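A minimal sketch of that in Python (my own; floor_and_bump is a hypothetical name, assuming fn is already sorted ascending with an integer sum):

import math

def floor_and_bump(fn):
    ints = [math.floor(x) for x in fn]
    bumps = round(sum(fn)) - sum(ints)  # N - K: how many values to increase
    for i in range(len(ints) - bumps, len(ints)):
        ints[i] += 1  # bump the rightmost (largest) values
    return ints

print(floor_and_bump([0.4, 0.4, 0.4, 0.4, 9.2, 9.2]))  # [0, 0, 0, 0, 10, 10]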
If you can accept a small change in the total while improving the variance, this will probabilistically preserve the total in Python:
import math
import random
integer_list = [int(x) + int(random.random() <= math.modf(x)[0]) for x in my_list]
To explain: it rounds each number down, then adds one with probability equal to the fractional part, i.e. one in ten 0.1s will become 1 and the rest 0.
This works for statistical data where you are converting large numbers of fractional persons into either 1 person or 0 persons.
