Binary search / bisection for floating point numbers - algorithm

It is easy to find an integer with binary search even if it can be arbitrarily large: first guess the order of magnitude, then keep dividing the interval.
This answer describes how to find an arbitrary rational number.
Having set the scene, my question is similar: how can we guess an IEEE 754 floating point number? Assume it is not NaN, but everything else is fair game. For each guess, your program will be told whether the number in question is higher, equal or lower. Minimize the number of guesses required in the worst case.
(This is not a homework assignment. Though, I might make it one, if this turns out to have an interesting answer that's not just "beat the floating point numerical difficulties to death with lots and lots of special case handling.")
Edit: if I were better at searching I could have found the answer---but that only works if you already know that reinterpretation as int works (with certain caveats). So leaving this up. Thanks to Harold for a great answer!

IEEE-754 64-bit floating point numbers are really 64-bit representations. Furthermore, with the exception of NaN values, there is no difference between floating point comparison and integer comparison of positive values. (That is, two bit patterns with the sign bit unset will produce the same comparison result regardless of whether you compare them as int64_t or double, unless one of the bit patterns is a floating point NaN.)
That means you can find a number in 64 guesses by guessing one bit at a time, even if the number is ±∞. Start by comparing the number with 0; if the target is "less", then produce the guesses in the same way as below, but negate them before guessing. (Since IEEE-754 floats are sign/magnitude, you can negate the number by setting the sign bit to 1. Or you could do the positive bit-pattern reinterpretation and then floating point negate the result.)
After that, guess one bit at a time, starting with the highest-order value bit. Set that bit to 1 if the number is greater than or equal to the guess; set that bit to 0 if the number is less; and continue with the next bit until there aren't any more. To construct the guess, reinterpret the bit pattern as a double.
There are two caveats:
You cannot distinguish between ±0 with comparison tests. That means that if your opponent wants you to distinguish between them, they will have to supply you with a way to ask about equality with −0, and you'll have to use that mechanism after you've apparently established that the number is 0 (which will happen on the 64th guess). This would add one guess, for a total of 65.
If you are assured that the target is not a NaN, then there is no other problem. If it might be a NaN, you need to be careful how you compare: things will work out fine if you always ask "is X less than this guess?", because a NaN comparison will always return false. That means that after 11 successive "no" answers (not counting the one to establish the sign), you will find yourself guessing ∞, with the assumption that if the number is not less than ∞, it must be equal. However, in this case alone you need to explicitly test for equality as well, because that will also be false if the target is a NaN. This doesn't add an additional guess to the count, because it will always happen long before 64 guesses have been used up.
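As a minimal illustration of the scheme (my sketch, not part of the original answer): assume a hypothetical oracle compare(guess) that returns a negative, zero, or positive value as the hidden number is lower than, equal to, or higher than the guess, and assume the target is positive and not NaN, with the sign already established as described above. Then the bit-at-a-time search is just:

using System;

static double FindPositive(Func<double, int> compare)
{
    long bits = 0;
    for (int bit = 62; bit >= 0; bit--)   // 63 value bits; the sign bit stays 0
    {
        long candidate = bits | (1L << bit);
        double guess = BitConverter.Int64BitsToDouble(candidate);
        int c = compare(guess);
        if (c == 0) return guess;         // hit exactly (also stops at +infinity before
                                          // any NaN bit pattern could be guessed)
        if (c > 0) bits = candidate;      // target above guess: keep the bit
    }
    return BitConverter.Int64BitsToDouble(bits);
}

Since positive doubles and their bit patterns sort identically, this is an ordinary binary search over the non-negative patterns: at most 63 comparisons after the initial sign probe, and the early equality return keeps it from ever guessing a NaN pattern once it has climbed to +∞.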

The same approach can be applied to a floating point number. Worst-case run time is O(log n).
public class GuessComparer
{
    private float random;

    public GuessComparer() // generate a random (non-NaN) float and keep it private
    {
        Random rnd = new Random();
        var buffer = new byte[4];
        do
        {
            rnd.NextBytes(buffer);
            random = BitConverter.ToSingle(buffer, 0);
        } while (float.IsNaN(random)); // random bytes can form a NaN, which would never compare equal
    }

    public int CheckGuess(float guess) // answer whether the number is higher, lower or the same
    {
        return random.CompareTo(guess);
    }
}

public class FloatFinder
{
    public static int Find(GuessComparer checker)
    {
        float guess = 0;
        int result = checker.CheckGuess(guess);
        int guesscount = 1;
        var high = float.MaxValue;
        var low = float.MinValue;
        while (result != 0)
        {
            if (result > 0) // random is higher than guess
                low = guess;
            else // random is lower than guess
                high = guess;
            // halve each bound separately: (high + low) / 2 overflows to
            // infinity once high and low are both near float.MaxValue
            guess = high / 2 + low / 2;
            guesscount++;
            result = checker.CheckGuess(guess);
        }
        Console.WriteLine("Found answer in {0} guesses", guesscount);
        return guesscount;
    }

    public static void Find()
    {
        var checker = new GuessComparer();
        int guesses = Find(checker);
    }
}

Related

Sampling from all possible floats in D

In the D programming language, the standard random (std.random) module provides a simple mechanism for generating a random number in some specified range.
auto a = uniform(0, 1024, gen);
What is the best way in D to sample from all possible floating point values?
For clarification, sampling from all possible 32-bit integers can be done as follows:
auto l = uniform!int(); // randomly selected int from all possible integers
Depends on the kind of distribution you want.
A uniform distribution over all possible values could be done by generating a random ulong and then casting the bits into floating point. For T being float or double:
union both { ulong input; T output; }
both val;
val.input = uniform!"[]"(ulong.min, ulong.max);
return val.output;
Since roughly half of the positive floating point numbers are between 0 and 1, this method will often give you numbers near zero. It will also give you infinity and NaN values.
Aside: This code should be fine with D, but would be undefined behavior in C/C++. Use memcpy there.
If you prefer a uniform distribution over all possible numbers in floating point (equal probability for 0..1 and 1..2 etc), you need something like the normal uniform!double, which unfortunately does not work very well for large numbers. It also will not generate infinity or NaN. You could generate double numbers and convert them to float, but I have no answer for generating random large double numbers.
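For comparison, here is a rough C# analogue of the same bit-reinterpretation trick (not part of the original answer, just an illustration): fill 8 random bytes and reinterpret them as a double. Like the D version, it samples uniformly over bit patterns, so infinities and NaNs can come out.

using System;

var rng = new Random();
var buffer = new byte[8];
rng.NextBytes(buffer);                            // 64 uniformly random bits
double sample = BitConverter.ToDouble(buffer, 0); // reinterpret, don't convert
Console.WriteLine(sample);                        // may print Infinity or NaN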

How to tell if float has non zero decimals?

Is there a way to determine if a float has non-zero decimal values? I'd like to avoid a string conversion and then splitting on the decimal point, but I'm not sure if there is some other way.
You can't.
Floating point variables (both Float and Double) store values with limited precision. Very rarely will a number be stored as exactly .000...
See Is floating point math broken?
The workaround:
First, determine an epsilon value you deem to be "as far from .000... as a number can be for me to still consider it 'whole'". This number will depend on your problem domain. Suppose values within 0.0001 of a whole number are acceptably "whole".
Second, determine the closest whole number by rounding.
Finally, subtract the original value from its rounded counterpart and check whether the difference is less than the epsilon value.
import Foundation

extension Double {
    private static var epsilon = 0.0001

    var isWhole: Bool { return abs(self - round(self)) < Double.epsilon }
}

let input = 1.0
print(input.isWhole)
This is very similar to the recommended technique for comparing equality of two Float/Double values.
In Swift 3.0
Since Swift 3.0 is fast approaching, I'll include an answer for it, even if the question specifically covers Swift 2.
In Swift 3.0 the Enhanced Floating Point Protocols have been implemented, making it easier to work with floating point arithmetic. We can, e.g., use the isEqual(to:) method, which implements the IEEE 754 equality predicate, to compare two floating point numbers:
import Foundation // for access to the round function

extension Double {
    var isAsGoodAsIntegerValuedAsItGets: Bool {
        return isEqual(to: round(self))
    }
}

var input = 1.01
print(input.isAsGoodAsIntegerValuedAsItGets) // false

input = 1
print(input.isAsGoodAsIntegerValuedAsItGets) // true

/* increase to the least representable value that compares
   greater than current `self` */
input = input.nextUp
print(input.isAsGoodAsIntegerValuedAsItGets) // false

/* decrease to the greatest representable value that
   compares less than current `self` */
input = input.nextDown
print(input.isAsGoodAsIntegerValuedAsItGets) // true

"interval is empty", Lua math.random isn't working for large numbers?

I don't know if this is a bug in Lua itself or if I am doing something wrong; I couldn't find anything about it anywhere. I am using Lua for Windows (Lua 5.1.4):
>return math.random(0, 1000000000)
1251258
This returns a random integer between 0 and 1000000000, as expected. This seems to work for all other values. But if I add a single 0:
>return math.random(0, 10000000000)
stdin:1: bad argument #2 to 'random' (interval is empty)
Any number higher than that does the same thing.
I tried to figure out exactly how high a number has to be to cause this and found something even weirder:
>return math.random(0, 2147483647)
-75617745
If the value is 2147483647 then it gives me negative numbers. Any higher than that and it throws an error. Any lower than that and it works fine.
That's 0b1111111111111111111111111111111 in binary, 31 binary digits exactly. I am not sure what that means though.
This unexpected behavior (bug?) is due to how math.random treats the input arguments passed in Lua 5.1. From lmathlib.c:
case 2: {  /* lower and upper limits */
    int l = luaL_checkint(L, 1);
    int u = luaL_checkint(L, 2);
    luaL_argcheck(L, l <= u, 2, "interval is empty");
    lua_pushnumber(L, floor(r * (u - l + 1)) + l);  /* int between `l' and `u' */
    break;
}
As you may know in C, a standard int can represent values -2,147,483,648 to 2,147,483,647. Adding +1 to 2,147,483,647, like in your use-case, will overflow and wrap around the value giving -2,147,483,648. The end result is negative since you're multiplying a positive with a negative number.
Furthermore, anything above 2,147,483,647 will fail the luaL_argcheck due to overflow wraparound.
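The wraparound is easy to reproduce in C#, where unchecked integer overflow is well-defined, to see exactly what value u - l + 1 takes (this little demo is mine, not part of the original answer):

using System;

class OverflowDemo
{
    static void Main()
    {
        unchecked
        {
            int l = 0, u = 2147483647;   // the arguments from the question
            int width = u - l + 1;       // wraps around past int.MaxValue
            Console.WriteLine(width);    // prints -2147483648
        }
    }
}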
There are a few ways to address this problem:
Upgrade to Lua 5.2. That one has since fixed this issue by treating the input arguments as lua_Number instead.
Switch to LuaJIT which does not have this integer overflow issue.
Patch the Lua 5.1 source yourself with the fix and recompile.
Modify your random range so it does not overflow.
If you need a range larger than what math.random supports (32-bit signed integers, i.e. up to 2^31 - 1, because math.random works at the C level; as seen above it misbehaves right at that edge) but smaller than the range of the Lua number type (based on What is the maximum value of a number in Lua?, 2^52, or maybe even 2^53), you could try generating two random numbers: scale the first to the desired range, then add the second to fill the gaps. For example, say you want a range of 0 to 2^36. A safe maximum argument for math.random is 2^30. So you could do:
-- 2^36 = 2^30 * 2^6, and math.random(0, 2^30 - 1) stays safely below the overflow, so:
scale = 2^6
baseRand = scale * math.random(0, 2^30 - 1)
-- baseRand is now between 0 and 2^36 - 2^6, but there are gaps of 2^6 in the set
-- of possible values; fill the gaps with a second random number:
fillGap = math.random(0, 2^6 - 1)
randNum = baseRand + fillGap -- uniform over 0 .. 2^36 - 1
This will work as long as the desired range is less than the Lua interpreter's maximum for Lua numbers, which is a configurable compile-time parameter; in a stock build it is 2^52, a very large number (although not as large as the largest long integer, 2^63).
Note also that the largest positive N-bit integer is 2^N - 1 (not 2^N), but the above technique can be applied to any range; you could for instance have scale = 10^6 and then randNum = 10^6 * math.random(0, 10^8) + math.random(0, 10^6 - 1).

Are floats a secure alternative to generating an un-biased random number

I have always generated un-biased random numbers by throwing away any numbers in the biased range, similar to this:
int biasCount = MAX_INT % max;
int maxSafeNumber = MAX_INT - biasCount;
int generatedNumber = 0;
do
{
    generatedNumber = GenerateNumber();
} while (generatedNumber > maxSafeNumber);
return generatedNumber % max;
Today a friend showed me how he generated random numbers by converting the generated number into a float, then multiplying that against the max.
float percent = generatedNumber / (float)MAX_INT;
return (int)(percent * max);
This seems to solve the bias issue by not having to use a modulus in the first place. It also looks simple and fast. Is there any reason why the float approach would not be as secure (unbiased) as the first one?
The float method with a floor (i.e. your cast) introduces a bias against the largest value in your range. In order to return max, generatedNumber == MAX_INT must be true. So max has probability 1/MAX_INT, while every other number in the range has probability of roughly 1/max, because about MAX_INT/max of the possible generator outputs map to each of them.
As Henry points out, there's also the issue of aliasing if MAX_INT is not a multiple of max. This makes some values in the range more likely than others; the larger the difference between max and MAX_INT, the smaller this bias is.
(Assuming you get, and want, a uniform distribution.)
This presentation by Stephan T. Lavavej from GoingNative 2013 goes over a lot of common fallacies with random numbers, including these range schemes. It's C++ centric in the implementations, but all the concepts carry over to any language:
http://channel9.msdn.com/Events/GoingNative/2013/rand-Considered-Harmful
The float method may not generate uniformly distributed output numbers even when the input numbers are uniformly distributed. To see where it breaks down, work through some examples with small numbers, e.g. max = 6, MAX_INT = 8, as in the sketch below. It gets better when MAX_INT is large, but it is almost never perfect.
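A brute-force check makes the non-uniformity visible (a quick C# sketch of my own, using the toy sizes suggested above): count how many generator values map to each output of the float method.

using System;

const int MaxInt = 8, Max = 6;
var counts = new int[Max + 1];
for (int g = 0; g <= MaxInt; g++)            // every possible generator output
{
    float percent = g / (float)MaxInt;
    counts[(int)(percent * Max)]++;          // the float method from the question
}
for (int v = 0; v <= Max; v++)
    Console.WriteLine($"{v}: {counts[v]}");  // 0 and 3 occur twice; 6 only for g == 8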

How to generate a number in arbitrary range using random()={0..1} preserving uniformness and density?

Generate a random number in range [x..y] where x and y are any arbitrary floating point numbers. Use function random(), which returns a random floating point number in range [0..1] from P uniformly distributed numbers (call it "density"). Uniform distribution must be preserved and P must be scaled as well.
I think there is no easy solution to such a problem. To simplify it a bit, I ask you how to generate a number in the interval [-0.5 .. 0.5], then in [0 .. 2], then in [-2 .. 0], preserving uniformness and density? Thus, for [0 .. 2] it must generate a random number from P*2 uniformly distributed numbers.
The obvious simple solution random() * (x - y) + y will not generate all possible numbers because of the lower density for all abs(x-y) > 1.0 cases. Many possible values will be missed. Remember that random() returns only a number from P possible numbers. Then, if you multiply such a number by Q, it will give you only one of P possible values, scaled by Q, but you have to scale the density P by Q as well.
If I understand your problem correctly, here is a solution, though I would exclude 1 from the range:
N = numbers_in_your_random // [0, 0.2, 0.4, 0.6, 0.8] will be 5

// This turns your random number generator into one returning integer values in [0..N[:
function randomInt()
{
    return random() * N;
}

// This turns the integer random number generator into one returning an arbitrary integer:
function getRandomInt(maxValue)
{
    if (maxValue < N)
    {
        return randomInt() % maxValue;
    }
    else
    {
        baseValue = randomInt();
        bRate = maxValue DIV N;
        bMod = maxValue % N;
        if (baseValue < bMod)
        {
            bRate++;
        }
        return N * getRandomInt(bRate) + baseValue;
    }
}

// This will return a random number in range [lower, upper[ with the same density as random():
function extendedRandom(lower, upper)
{
    diff = upper - lower;
    ndiff = diff * N;
    baseValue = getRandomInt(ndiff);
    baseValue /= N;
    return lower + baseValue;
}
If you really want to generate all possible floating point numbers in a given range with uniform numeric density, you need to take into account the floating point format. For each possible value of your binary exponent, you have a different numeric density of codes. A direct generation method will need to deal with this explicitly, and an indirect generation method will still need to take it into account. I will develop a direct method; for the sake of simplicity, the following refers exclusively to IEEE 754 single-precision (32-bit) floating point numbers.
The most difficult case is any interval that includes zero. In that case, to produce an exactly even distribution, you will need to handle every exponent down to the lowest, plus denormalized numbers. As a special case, you will need to split zero into two cases, +0 and -0.
In addition, if you are paying such close attention to the result, you will need to make sure that you are using a good pseudorandom number generator with a large enough state space that you can expect it to hit every value with near-uniform probability. This disqualifies the C/Unix rand() and possibly the *rand48() library functions; you should use something like the Mersenne Twister instead.
The key is to dissect the target interval into subintervals, each of which is covered by a different combination of binary exponent and sign: within each subinterval, floating point codes are uniformly distributed.
The first step is to select the appropriate subinterval, with probability proportional to its size. If the interval contains 0, or otherwise covers a large dynamic range, this may potentially require a number of random bits up to the full range of the available exponent.
In particular, for a 32-bit IEEE-754 number, there are 256 possible exponent values. Each exponent governs a range which is half the size of the next greater exponent, except for the denormalized case, which is the same size as the smallest normal exponent region. Zero can be considered the smallest denormalized number; as mentioned above, if the target interval straddles zero, the probability of each of +0 and -0 should perhaps be cut in half, to avoid doubling its weight.
If the subinterval chosen covers the entire region governed by a particular exponent, all that is necessary is to fill the mantissa with random bits (23 bits, for 32-bit IEEE-754 floats). However, if the subinterval does not cover the entire region, you will need to generate a random mantissa that covers only that subinterval.
The simplest way to handle both the initial and secondary random steps may be to round the target interval out to include the entirety of all exponent regions partially covered, then reject and retry numbers that fall outside it. This allows the exponent to be generated with simple power-of-2 probabilities (e.g., by counting the number of leading zeroes in your random bitstream), as well as providing a simple and accurate way of generating a mantissa that covers only part of an exponent interval. (This is also a good way of handling the +/-0 special case.)
As another special case: to avoid inefficient generation for target intervals which are much smaller than the exponent regions they reside in, the "obvious simple" solution will in fact generate fairly uniform numbers for such intervals. If you want exactly uniform distributions, you can generate the sub-interval mantissa by using only enough random bits to cover that sub-interval, while still using the aforementioned rejection method to eliminate values outside the target interval.
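To make the binade selection concrete, here is a minimal C# sketch (my own, not the answer author's code, and it skips denormals by rounding everything below the normal range to zero). It draws a uniform-density double in [0, 1) that can hit every representable value there: a run of fair coin flips picks the binade [2^-k, 2^-(k-1)) with probability 2^-k, and 52 random bits fill the mantissa.

using System;

class UniformDensityDouble
{
    static readonly Random Rng = new Random();

    static ulong RandomBits()
    {
        var buf = new byte[8];
        Rng.NextBytes(buf);
        return BitConverter.ToUInt64(buf, 0);
    }

    // Uniform numeric density over [0, 1), able to produce every double there
    // (apart from the denormals, which this sketch rounds to zero).
    public static double NextUniform01()
    {
        int k = 1;                              // candidate binade [2^-k, 2^-(k-1))
        while ((RandomBits() & 1) == 0)         // fair coin: P(binade k) = 2^-k
        {                                       // (wasteful, one 64-bit draw per flip, but clear)
            k++;
            if (k > 1022) return 0.0;           // below the normal range: round to 0
        }
        ulong mantissa = RandomBits() >> 12;    // 52 uniformly random mantissa bits
        ulong bits = ((ulong)(1023 - k) << 52) | mantissa; // sign 0, biased exponent, mantissa
        return BitConverter.Int64BitsToDouble((long)bits);
    }
}

Restricting the draw to a target interval [x, y) then works exactly as described: round the interval out to whole binades, generate, and reject values that fall outside.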
well, [0..1] * 2 == [0..2] (still uniform)
[0..1] - 0.5 == [-0.5..0.5] etc.
I wonder where have you experienced such an interview?
Update: well, if we want to start caring about losing precision on multiplication (which is weird, because somehow you did not care about that in the original task) and pretend we care about the "number of values", we can start iterating. In order to do that, we need one more function, which returns uniformly distributed random values in [0..1); this can be done by dropping the 1.0 value should it ever appear. After that, we can slice the whole range into equal parts small enough that we need not worry about losing precision, choose one part at random (we have enough randomness to do that), and choose a number in this bucket using the [0..1) function for all parts but the last one.
Or, you can come up with a way to encode enough values to care about, and just generate random bits for this code, in which case you don't really care whether it's [0..1] or just {0, 1}.
Let me rephrase your question:
Let random() be a random number generator with a discrete uniform distribution over [0,1). Let D be the number of possible values returned by random(), each of which is precisely 1/D greater than the previous. Create a random number generator rand(L, U) with a discrete uniform distribution over [L, U) such that each possible value is precisely 1/D greater than the previous.
--
A couple of quick notes.
The problem in this form, as you phrased it, is unsolvable. That is, if D = 1 there is nothing we can do.
I don't require that 0.0 be one of the possible values for random(). If it is not, then it is possible that the solution below will fail when U - L < 1 / D. I'm not particularly worried about that case.
I use all half-open ranges because it makes the analysis simpler. Using your closed ranges would be simple, but tedious.
Finally, the good stuff. The key insight here is that the density can be maintained by independently selecting the whole and fractional parts of the result.
First, note that given random() it is trivial to create randomBit(). That is,
randomBit() { return random() >= 0.5; }
Then, if we want to select one of {0, 1, 2, ..., 2^N - 1} uniformly at random, that is simple using randomBit(), just generate each of the bits. Call this random2(N).
Using random2() we can select one of {0, 1, 2, ..., N - 1}:
randomInt(N) { while ((val = random2(ceil(log2(N)))) >= N); return val; }
Now, if D is known, then the problem is trivial as we can reduce it to simply choosing one of floor((U - L) * D) values uniformly at random and we can do that with randomInt().
So, let's assume that D is not known. Now, let's first make a function to generate random values in the range [0, 2^N) with the proper density. This is simple.
rand2D(N) { return random2(N) + random(); }
rand2D() is where we require that the difference between consecutive possible values for random() be precisely 1/D. If not, the possible values here would not have uniform density.
Next, we need a function that selects a value in the range [0, V) with the proper density. This is similar to randomInt() above.
randD(V) { while ((val = rand2D(ceil(log2(V)))) >= V); return val; }
And finally...
rand(L, U) { return L + randD(U - L); }
We now may have offset the discrete positions if L is not an integer multiple of 1/D, but that is unimportant.
--
A last note, you may have noticed that several of these functions may never terminate. That is essentially a requirement. For example, random() may have only a single bit of randomness. If I then ask you to select from one of three values, you cannot do so uniformly at random with a function that is guaranteed to terminate.
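For readers who want to run it, here is a direct C# transcription of the construction above (a sketch: Random.NextDouble() merely stands in for the grid-valued random() the construction assumes, so the density claim holds only to the extent that the stand-in does):

using System;

class DensityPreservingRand
{
    static readonly Random Rng = new Random();

    // Stand-in for random(): assumed to return multiples of 1/D in [0, 1).
    static double RandomUnit() => Rng.NextDouble();

    static int RandomBit() => RandomUnit() >= 0.5 ? 1 : 0;

    // One of {0, 1, ..., 2^n - 1}, built one random bit at a time.
    static long Random2(int n)
    {
        long v = 0;
        for (int i = 0; i < n; i++) v = (v << 1) | (long)RandomBit();
        return v;
    }

    // One of {0, 1, ..., n - 1}, by rejection (used when D is known).
    static long RandomInt(long n)
    {
        int bits = (int)Math.Ceiling(Math.Log(n, 2));
        long val;
        do { val = Random2(bits); } while (val >= n);
        return val;
    }

    // Uniform over [0, 2^n) with the same grid spacing as random().
    static double Rand2D(int n) => Random2(n) + RandomUnit();

    // Uniform over [0, v) on that grid, again by rejection.
    static double RandD(double v)
    {
        int bits = (int)Math.Ceiling(Math.Log(v, 2));
        double val;
        do { val = Rand2D(bits); } while (val >= v);
        return val;
    }

    // The goal: uniform over [lower, upper), grid density preserved.
    public static double Rand(double lower, double upper) => lower + RandD(upper - lower);
}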
Consider this approach:
I'm assuming the base random number generator in the range [0..1] generates among the numbers 0, 1/(p-1), 2/(p-1), ..., (p-2)/(p-1), (p-1)/(p-1).
If the target interval length is less than or equal to 1, return random()*(y-x) + x.
Else, map each number r from the base RNG to an interval in the target range: [r*(p-1)*(y-x)/p, (r + 1/(p-1))*(p-1)*(y-x)/p] (i.e. for each of the p numbers assign one of p intervals of length (y-x)/p).
Then recursively generate another random number in that interval and add it to the interval's beginning.
Pseudocode:
const p;

function rand(x, y)
    r = random()
    if y - x <= 1
        return x + r * (y - x)
    else
        low = r * (p - 1) * (y - x) / p
        high = low + (y - x) / p
        // rand(low, high) already returns a value inside [low, high],
        // so only the target offset x is added here
        return x + rand(low, high)
In real math, the solution is just the one provided:
return random() * (upper - lower) + lower
The problem is that floating point numbers have only a certain resolution. So what you can do is apply the above function and then add another random() value, scaled down to the missing part.
A practical example makes clear what I mean:
Take random() to return values from 0..1 with 2 digits of accuracy, i.e. 0.XY, and let lower be 100 and upper be 1100.
With the above algorithm you get as result 0.XY * (1100-100) + 100 = XY0.0 + 100.
You will never see 201 as a result, because the final digit has to be 0.
The solution here is to generate another random value and add it scaled by 10, so you gain one more digit of accuracy (here you have to take care that you don't exceed your given range, which can happen; in that case you have to discard the result and generate a new number).
Maybe you have to repeat this; how often depends on how many places the random() function delivers and how many you expect in your final result.
A standard IEEE format has limited precision (e.g. 53 bits for double), so when you generate a number this way, you never need to generate more than one additional number.
But you have to be careful that when you add the new number, you don't exceed your given upper limit. There are multiple solutions to this: first, if you exceed your limit, start over and generate a new number (don't cut off or similar, as this changes the distribution).
A second possibility is to check the interval size of the missing lower bit range, find its middle value, and generate an appropriate value that guarantees the result will fit.
You have to consider the amount of entropy that comes from each call to your RNG. Here is some C# code I just wrote that demonstrates how you can accumulate entropy from low-entropy source(s) and end up with a high-entropy random value.
using System;
using System.Collections.Generic;
using System.Security.Cryptography;

namespace SO_8019589
{
    class LowEntropyRandom
    {
        public readonly double EffectiveEntropyBits;
        public readonly int PossibleOutcomeCount;
        private readonly double interval;
        private readonly Random random = new Random();

        public LowEntropyRandom(int possibleOutcomeCount)
        {
            PossibleOutcomeCount = possibleOutcomeCount;
            EffectiveEntropyBits = Math.Log(PossibleOutcomeCount, 2);
            interval = 1.0 / PossibleOutcomeCount;
        }

        public LowEntropyRandom(int possibleOutcomeCount, int seed)
            : this(possibleOutcomeCount)
        {
            random = new Random(seed);
        }

        public int Next()
        {
            return random.Next(PossibleOutcomeCount);
        }

        public double NextDouble()
        {
            return interval * Next();
        }
    }

    class EntropyAccumulator
    {
        private List<byte> currentEntropy = new List<byte>();

        public double CurrentEntropyBits { get; private set; }

        public void Clear()
        {
            currentEntropy.Clear();
            CurrentEntropyBits = 0;
        }

        public void Add(byte[] entropy, double effectiveBits)
        {
            currentEntropy.AddRange(entropy);
            CurrentEntropyBits += effectiveBits;
        }

        public byte[] GetBytes(int count)
        {
            using (var hasher = new SHA512Managed())
            {
                count = Math.Min(count, hasher.HashSize / 8);
                var bytes = new byte[count];
                var hash = hasher.ComputeHash(currentEntropy.ToArray());
                Array.Copy(hash, bytes, count);
                return bytes;
            }
        }

        public byte[] GetPackagedEntropy()
        {
            // Returns a compact byte array that represents almost all of the entropy.
            return GetBytes((int)(CurrentEntropyBits / 8));
        }

        public double GetDouble()
        {
            // Returns a uniformly distributed number on [0, 1).
            return (double)BitConverter.ToUInt64(GetBytes(8), 0) / ((double)UInt64.MaxValue + 1);
        }

        public int GetInt(int maxValue)
        {
            // Returns a uniformly distributed integer on [0, maxValue).
            return (int)(maxValue * GetDouble());
        }
    }

    class Program
    {
        static void Main(string[] args)
        {
            var random = new LowEntropyRandom(2); // this only provides 1 bit of entropy per call
            var desiredEntropyBits = 64; // enough for a double
            while (true)
            {
                var adder = new EntropyAccumulator();
                while (adder.CurrentEntropyBits < desiredEntropyBits)
                {
                    adder.Add(BitConverter.GetBytes(random.Next()), random.EffectiveEntropyBits);
                }
                Console.WriteLine(adder.GetDouble());
                Console.ReadLine();
            }
        }
    }
}
Since I'm using a 512-bit hash function, 512 bits is the maximum amount of entropy that you can get out of the EntropyAccumulator. This could be fixed, if necessary.
If I understand your problem correctly, it's that rand() generates finely spaced but ultimately discrete random numbers, and if we multiply by (y-x), which is large, this spreads those finely spaced values out in a way that misses many of the floating point values in the range [x,y]. Is that right?
If so, I think we have a solution already given by Dialecticus. Let me explain why he is right.
First, we know how to generate a random float and then add another floating point value to it. This may produce a round-off error due to addition, but it will be in the last decimal place only. Use doubles or something with finer numerical resolution if you want better precision. So, with that caveat, the problem is no harder than finding a random float in the range [0, y-x] with uniform density. Let's say y-x = z. Obviously, since z is a floating point number it may not be an integer.

We handle the problem in two steps: first we generate the random digits to the left of the decimal point, and then we generate the random digits to the right of it. Doing both uniformly means their sum is uniformly distributed across the range [0, z] too. Let w be the largest integer <= z. To answer our simplified problem, we can first pick a random integer from the range {0, 1, ..., w}. Then, step #2 is to add a random float from the unit interval to this random number. This isn't multiplied by any possibly large values, so it has as fine a resolution as the numerical type can have. (Assuming you're using an ideal random floating point number generator.)
So what about the corner case where the random integer was the largest one (i.e. w) and the random float we added to it was larger than z - w, so that the random number exceeds the allowed maximum? The answer is simple: do all of it again and check the new result. Repeat until you get a result in the allowed range. It's an easy proof that a uniformly generated random number which is tossed out and generated again if it's outside an allowed range results in a uniformly generated random number in the allowed range. Once you make this key observation, you see that Dialecticus met all your criteria.
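In code, the scheme looks like this (a C# sketch; uniformUnit stands in for the ideal generator of uniform doubles in [0, 1) that the argument assumes, and Random.NextInt64 requires .NET 6 or later):

using System;

static double RandRange(double x, double y, Random rng, Func<double> uniformUnit)
{
    double z = y - x;
    long w = (long)Math.Floor(z);                 // largest integer <= z
    while (true)
    {
        // integer part uniform on {0..w}, fractional part uniform on [0,1)
        double candidate = rng.NextInt64(w + 1) + uniformUnit();
        if (candidate <= z)                       // reject-and-retry keeps uniformity
            return x + candidate;
    }
}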
When you generate a random number with random(), you get a floating point number between 0 and 1 with some unknown precision (or density, call it what you will).
And when you multiply it by a number NUM, you lose lg(NUM) digits of this precision (lg = 10-based logarithm). So if you multiply by 1000 (NUM = 1000), you lose the last 3 digits (lg(1000) = 3).
You may correct this by adding a smaller random number to the original, one which supplies the missing 3 digits. But you don't know the precision, so you can't determine exactly where they are.
I can imagine two scenarios:
(X = range start, Y = range end)
1: you define the precision (PREC, e.g. 20 digits, so PREC = 20) and consider it enough to generate a random number, so the expression will be:
( random() * (Y-X) + X ) + ( random() / 10 ^ (PREC-trunc(lg(Y-X))) )
with numbers: (X = 500, Y = 1500, PREC = 20)
( random() * (1500-500) + 500 ) + ( random() / 10 ^ (20-trunc(lg(1000))) )
( random() * 1000 + 500 ) + ( random() / 10 ^ (17) )
There are some problems with this:
two-phase random generation (how random will it be?)
if the first random() returns 1, the result can be out of range
2: guess the precision from random numbers
You define some number of tries (e.g. 4) to estimate the precision by generating random numbers and counting the digits each time:
- 0.4663164 -> PREC=7
- 0.2581916 -> PREC=7
- 0.9147385 -> PREC=7
- 0.129141 -> PREC=6 -> 7, correcting by the average of the other tries
That's my idea.
