Generate uniform pseudo-random numbers in a closed interval

What's the best way to generate pseudo-random numbers in the closed interval [0,1] instead of the usual [0,1)? One idea I've come up with is to reject values in (1/2,1), then double the number, so that [0,1/2] maps onto [0,1]. I wonder if there is a better method.
real :: x
do
   call random_number(x)   ! x in [0,1)
   if (x <= 0.5) exit      ! reject values in (1/2,1)
end do
x = 2*x                    ! map [0,1/2] onto [0,1]
print *, x
end
The most important requirement is that the algorithm should not make a worse distribution (in terms of uniformity and correlation) than that generated by random_number(). Also, I'd favour simplicity: a wrapper around random_number() would be perfectly good; I'm not looking to implement a whole new generator.
As @francescalus points out in the comments, with the algorithm above lots of numbers in [0,1] will have zero probability of appearing. The following code implements a slightly different approach: the interval is enlarged a bit, then values in excess of 1 are rejected. It should behave better in that respect.
real :: x
do
   call random_number(x)
   x = x*(1 + 1e-6)     ! stretch [0,1) slightly past 1
   if (x <= 1.) exit    ! reject the overshoot
end do
print *, x
end

What about alternating between x and 1-x? Sorry, my Fortran is rusty:
real function RNG()
   real :: x
   logical, save :: swap = .TRUE.
   call random_number(x)
   if (swap) then
      RNG = x
      swap = .FALSE.
   else
      RNG = 1.0 - x   ! maps [0,1) onto (0,1], so 1 becomes reachable
      swap = .TRUE.
   end if
end function RNG
And if you want to use Box-Muller, use 1-U everywhere and it should work, since 1-U1 lies in (0,1] and the argument of the logarithm never becomes zero:
z0 = sqrt(-2.0*log(1.0-U1))*sin(TWOPI*U2)
z1 = sqrt(-2.0*log(1.0-U1))*cos(TWOPI*U2)
The same substitution works for the rejection (polar) version of Box-Muller.
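For concreteness, a minimal sketch of the first transform (assuming single precision; the constant TWOPI and the function name gauss are mine):
real function gauss()
   real, parameter :: TWOPI = 6.2831853
   real :: U1, U2
   call random_number(U1)
   call random_number(U2)
   ! 1.0 - U1 lies in (0,1], so log never receives a zero argument
   gauss = sqrt(-2.0*log(1.0 - U1))*sin(TWOPI*U2)
end function gauss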

Avoiding Conditional Statements in Loops

There is a portion of my f90 program that is taking up a significant amount of compute time. I am basically looping through three matrices (of the same size, with dimensions as large as 250-by-250), and trying to make sure values stay bounded within the interval [-1.0, 1.0]. I know that it is best practice to avoid conditionals in loops, but I am having trouble figuring out how to re-write this block of code for optimal performance. Is there a way to "unravel" the loop or use a built-in function of some sort to "vectorize" the conditional statements?
do ind2 = 1, size(u_mat,2)
   do ind1 = 1, size(u_mat,1)
      ! Dot product 1 must be bounded between [-1,1]
      if (b1_dotProd(ind1,ind2) .GT. 1.0_dp) then
         b1_dotProd(ind1,ind2) = 1.0_dp
      else if (b1_dotProd(ind1,ind2) .LT. -1.0_dp) then
         b1_dotProd(ind1,ind2) = -1.0_dp
      end if
      ! Dot product 2 must be bounded between [-1,1]
      if (b2_dotProd(ind1,ind2) .GT. 1.0_dp) then
         b2_dotProd(ind1,ind2) = 1.0_dp
      else if (b2_dotProd(ind1,ind2) .LT. -1.0_dp) then
         b2_dotProd(ind1,ind2) = -1.0_dp
      end if
      ! Dot product 3 must be bounded between [-1,1]
      if (b3_dotProd(ind1,ind2) .GT. 1.0_dp) then
         b3_dotProd(ind1,ind2) = 1.0_dp
      else if (b3_dotProd(ind1,ind2) .LT. -1.0_dp) then
         b3_dotProd(ind1,ind2) = -1.0_dp
      end if
   end do
end do
For what it's worth, I am compiling with ifort.
You can use the intrinsic min and max functions for this.
As they are both elemental, you can use them on the whole array, as
b1_dotProd = max(-1.0_dp, min(b1_dotProd, 1.0_dp))
There are processor instructions which allow min and max to be implemented without branches; whether the compiler actually emits them for these intrinsics, and whether that is any faster, is implementation-dependent, but the code is at least a lot more concise.
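Applied to the arrays in the question, the whole double loop then collapses to three array assignments:
b1_dotProd = max(-1.0_dp, min(b1_dotProd, 1.0_dp))
b2_dotProd = max(-1.0_dp, min(b2_dotProd, 1.0_dp))
b3_dotProd = max(-1.0_dp, min(b3_dotProd, 1.0_dp))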
The answer by @veryreverie is definitely correct, but there are two things to consider. A where statement is another sensible choice. Since it is still a conditional construct, the same caveat ("whether or not this actually avoids branches and if it's actually any faster, but it is at least a lot more concise") applies.
One example is:
pure function clamp(X) result(res)
   real, intent(in) :: X(:)
   real :: res(size(X))
   where (X < -1.0)
      res = -1.0
   elsewhere (X > 1.0)
      res = 1.0
   elsewhere
      res = X
   end where
end function
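Since clamp as written takes a rank-1 argument, applying it to one of the rank-2 arrays from the question needs a flatten-and-reshape step. A sketch, assuming the kinds match (the question's arrays are real(dp), while clamp above is written for default real); the array constructor [b1_dotProd] flattens the matrix in element order:
b1_dotProd = reshape(clamp([b1_dotProd]), shape(b1_dotProd))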
If you want to normalize to strictly 1 or -1, I would actually think about changing the datatype to integer. Then you can use a == 1 etc. without worrying about floating-point equality problems. Depending on your code, I would also think about cases where the dot product gets close to zero. Of course, this only applies if you are only interested in the sign.
pure function get_sign(X) result(res)
   real, intent(in) :: X(:)
   integer :: res(size(X))
   ! Or use another appropriate choice to test for near-zero
   where (abs(X) < epsilon(X) * 10.)
      res = 0
   elsewhere (X < 0.0)
      res = -1
   elsewhere (X > 0.0)
      res = +1
   end where
end function

A faster alternative to all(a(:,i)==a,1) in MATLAB

It is a straightforward question: is there a faster alternative to all(a(:,i)==a,1) in MATLAB?
I'm thinking of an implementation that benefits from short-circuit evaluation in the whole process. I mean, all() definitely benefits from short-circuit evaluation, but the intermediate a(:,i)==a doesn't.
I tried the following code,
% example for the input matrix
m = 3;    % m and n aren't necessarily equal to those values.
n = 5000; % It's only possible to know in advance that 'm' << 'n'.
a = randi([0,5],m,n); % the maximum value of 'a' isn't necessarily equal to
                      % 5, but it's possible to state that every element of
                      % 'a' is a non-negative integer.
% all, equal solution
tic
for i = 1:n % stepping up the elapsed time in orders of magnitude
    %%%%%%%%%% all and equal solution %%%%%%%%%
    ax_boo = all(a(:,i)==a,1);
    %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
end
toc
% alternative solution
tic
for i = 1:n % stepping up the elapsed time in orders of magnitude
    %%%%%%%%%%% alternative solution %%%%%%%%%%%
    ax_boo = a(1,i) == a(1,:);
    for k = 2:m
        ax_boo(ax_boo) = a(k,i) == a(k,ax_boo);
    end
    %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
end
toc
but it's intuitive that any for-loop solution within the MATLAB environment will naturally be slower. I'm wondering if there is a MATLAB built-in function written in a faster language.
EDIT:
After running more tests I found out that implicit expansion does have a performance impact when evaluating a(:,i)==a. If the matrix a has more than one row, all(repmat(a(:,i),[1,n])==a,1) may be faster than all(a(:,i)==a,1), depending on the number of columns n. For n=5000 the explicit expansion with repmat proved to be faster, as spelled out below.
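For reference, the explicitly expanded variant reads (a one-line restatement of the expression above):
ax_boo = all( repmat(a(:,i), [1,n]) == a, 1 );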
But I think that a generalization of Kenneth Boyd's answer is the "ultimate solution" if all elements of a are non-negative integers. Instead of dealing with a (an m-by-n matrix) in its original form, I will store and deal with adec (a 1-by-n vector):
exps = ((0):(m-1)).';
base = max(a,[],[1,2]) + 1;        % strictly greater than any element of 'a'
adec = sum( a .* base.^exps , 1 ); % one integer code per column
In other words, each column will be encoded to one integer. And of course adec(i)==adec is faster than all(a(:,i)==a,1).
EDIT 2:
I forgot to mention that the adec approach has a functional limitation. At best, storing adec as uint64, the inequality base^m <= 2^64 must hold.
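In practice it may be worth asserting that bound before trusting the codes; note also that if adec is computed in double precision, as above, exact integer arithmetic is only guaranteed up to flintmax = 2^53. This check is my addition, not part of the original post:
% refuse to encode if column codes cannot be represented exactly
assert( base^m <= flintmax, 'column encoding would lose precision' );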
Since your goal is to count the number of columns that match, my example converts the binary encoding to decimal integers; then you just loop over the possible values (with 3 rows there are 8 possible values) and count the number of matches.
a_dec = 2.^(0:(m-1)) * a;
num_poss_values = 2 ^ m;
num_matches = zeros(num_poss_values, 1);
for i = 1:num_poss_values
    num_matches(i) = sum(a_dec == (i - 1));
end
On my computer, using R2020a, here are the execution times for your first two options and the code above:
Elapsed time is 0.246623 seconds.
Elapsed time is 0.553173 seconds.
Elapsed time is 0.000289 seconds.
So my code is 853 times faster!
I wrote my code so it will work with m being an arbitrary integer.
The num_matches variable contains the number of columns whose encoding equals 0, 1, 2, ..., 7 in decimal.
As an alternative you can use the third output of unique:
[~, ~, iu] = unique(a.', 'rows');
for i = 1:n
    ax_boo = iu(i) == iu;
end
As indicated in a comment:
ax_boo isolates the indices of the columns I have to sum in a row vector b. So, basically the next line would be something like c = sum(b(ax_boo),2);
It is a typical usage of accumarray:
[~, ~, iu] = unique(a.', 'rows');
C = accumarray(iu,b);
for i = 1:n
    c = C(i);
end

Logitech Gaming Software lua script code for pure random numbers

I've been trying for days to find a way to generate random numbers in Logitech Gaming Software (LGS) scripts. I know there are
math.random()
math.randomseed()
but the thing is, I need a changing value for the seed, and the solutions from others are to add os.time or tick() or GetRunningTime stuff, which is NOT supported in the LGS scripts.
I was hoping some kind soul could help me by showing me a piece of code that makes purely random numbers. I don't want pseudo-random numbers, because they are only random once; I need it to be random every time the command runs, so that if I loop math.random() one hundred times it will show a different number every time.
Thanks in advance!
Having a different seed won't guarantee a different number every time.
It will only ensure that you don't get the same random sequence every time you run your code.
A simple and most likely sufficient solution would be to use the mouse position as a random seed.
On a 4K screen that's over 8 million different possible seeds, and it is very unlikely that you hit the same coordinates within a reasonable time, unless your game demands clicking the same position over and over while you run that script.
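A minimal sketch of that idea, using the same GetMousePosition call (and the same 2^16 packing of the coordinates) as the answer below:
local x, y = GetMousePosition()
math.randomseed(x * 2^16 + y)  -- one seed per distinct cursor position
local r = math.random(1, 100)  -- then use math.random as usual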
This RNG receives entropy from all events, so its initial state will be different on every run. Just use random instead of math.random in your code.
local mix
do
   local K53 = 0
   local byte, tostring, GetMousePosition, GetRunningTime = string.byte, tostring, GetMousePosition, GetRunningTime

   function mix(data1, data2)
      local x, y = GetMousePosition()
      local tm = GetRunningTime()
      local s = tostring(data1)..tostring(data2)..tostring(tm)..tostring(x * 2^16 + y).."#"
      for j = 2, #s, 2 do
         local A8, B8 = byte(s, j - 1, j)
         local L36 = K53 % 2^36
         local H17 = (K53 - L36) / 2^36
         K53 = L36 * 126611 + H17 * 505231 + A8 + B8 * 3083
      end
      return K53
   end

   mix(GetDate())
end

local function random(m, n)  -- replacement for math.random
   local h = mix()
   if m then
      if not n then
         m, n = 1, m  -- random(n) behaves like random(1, n)
      end
      return m + h % (n - m + 1)
   else
      return h * 2^-53  -- no arguments: float in [0, 1)
   end
end
EnablePrimaryMouseButtonEvents(true)

function OnEvent(event, arg)
   mix(event, arg)  -- this line adds entropy to the RNG
   -- insert your code here:
   -- if event == "MOUSE_BUTTON_PRESSED" and arg == 3 then
   --    local k = random(5, 10)
   --    ....
   -- end
end

Optimizing computational cost on a task involving a multi-nested loop

I am just a beginner at programming, so sorry in advance for bothering you with a (presumably) basic question.
I would like to perform the following task (I apologize for the inconvenience; I don't know how to input a TeX-y formula on Stack Overflow). In words: given a vector x of length n, evaluate Function(y) for every binary vector y in the set Ax = { y in {0,1}^n : y_i = 1 whenever x_i > 0 }. I am primarily considering an implementation in MATLAB or Scilab, but the language does not matter so much.
The most naive approach to perform this, I think, is to form an n-nested for loop, that is (the case n=2 in MATLAB is shown as an example):
n = 2;
x = [x1, x2];
for u = 0:1
    y(1) = u;
    if x(1) > 0
        y(1) = 1;
    end
    for v = 0:1
        y(2) = v;
        if x(2) > 0
            y(2) = 1;
        end
        z = Function(y);
    end
end
However, this implementation is too laborious for large n and, more importantly, it causes 2^n - 2^k redundant evaluations of the function, where k is the number of non-positive elements in x. Also, naively forming a k-nested for loop with knowledge of which elements in x are non-positive, e.g.
n = 2;
x = [-1, 2];
y = [1, 1];
for u = 0:1
    y(1) = u;
    z = Function(y);
end
doesn't seem to be a good way either; if we want to perform the task for a different x, we have to rewrite the code.
I would be grateful if you could provide an idea for implementing code that (a) evaluates the function only 2^k times (the minimum possible number of evaluations) and (b) does not have to be rewritten when x changes.
You can evaluate Function on y in Ax easily using recursion
function eval(Function, x, y, i, n) {
    if (i == n) {
        // break condition, evaluate Function
        Function(y);
    } else {
        // always evaluate the branch y(i) == 1
        y(i) = 1;
        eval(Function, x, y, i + 1, n);
        // evaluate y(i) == 0 only if x(i) <= 0
        if (x(i) <= 0) {
            y(i) = 0;
            eval(Function, x, y, i + 1, n);
        }
    }
}
Turning that into efficient Matlab code is another problem.
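For reference, a direct (1-based, if not maximally efficient) MATLAB translation might look like the sketch below; the name evalAll and the example call are mine, not part of the original answer:
function evalAll(F, x, y, i, n)
    % leaf reached: every component of y is fixed, so evaluate once
    if i > n
        F(y);
    else
        % the branch y(i) == 1 is always admissible
        y(i) = 1;
        evalAll(F, x, y, i + 1, n);
        % the branch y(i) == 0 is admissible only when x(i) <= 0
        if x(i) <= 0
            y(i) = 0;
            evalAll(F, x, y, i + 1, n);
        end
    end
end

% example call:
% evalAll(@Function, x, ones(1, numel(x)), 1, numel(x))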
As you've stated, the number of evaluations is 2^k. Let's sort x so that only the last k elements are non-positive. To evaluate Function, index y with the inverse of the permutation that sorts x: Function(y(perm)). Even better, the same method allows us to build Ax directly using dec2bin:
% every column of the resulting matrix is a member of Ax: y_i = Ax(:,i)
function Ax = getAx(x)
    n = length(x);
    % find the k indices of non-positives in x
    is = find(x <= 0);
    k = length(is);
    % construct Y (the last k rows are all possible combinations of [0 1])
    Y = [ones(n - k, 2 ^ k); (dec2bin(0:2^k-1)' - '0')];
    % re-order the rows of Y to get Ax, using the inverse of the sorting permutation
    perm([setdiff(1:n, is) is]) = 1:n;
    Ax = Y(perm, :);
end
Now rewrite Function to accept a matrix, or iterate over the columns of Ax = getAx(x) to evaluate every Function(y).
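For example, the iteration variant could look like this (a sketch; the transpose matches the row-vector y used in the question):
% evaluate Function once per admissible y, i.e. 2^k times in total
Ax = getAx(x);
for j = 1:size(Ax, 2)
    z = Function(Ax(:, j).');
end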

Fastest way to sort vectors by angle without actually computing that angle

Many algorithms (e.g. Graham scan) require points or vectors to be sorted by their angle (perhaps as seen from some other point, i.e. using difference vectors). This order is inherently cyclic, and where this cycle is broken to compute linear values often doesn't matter that much. But the real angle value doesn't matter much either, as long as cyclic order is maintained. So doing an atan2 call for every point might be wasteful. What faster methods are there to compute a value which is strictly monotonic in the angle, the way atan2 is? Such functions apparently have been called “pseudoangle” by some.
I started to play around with this and realised that the spec is kind of incomplete. atan2 has a discontinuity, because as dx and dy are varied, there's a point where atan2 will jump between -pi and +pi. The graph below shows the two formulas suggested by @MvG, and in fact they both have the discontinuity in a different place compared to atan2. (NB: I added 3 to the first formula and 4 to the alternative so that the lines don't overlap on the graph.) If I added atan2 to that graph, it would be the straight line y=x. So it seems to me that there could be various answers, depending on where one wants to put the discontinuity. If one really wants to replicate atan2, the answer (in this genre) would be:
# Input: dx, dy: coordinates of a (difference) vector.
# Output: a number from the range [-2 .. 2] which is monotonic
#         in the angle this vector makes against the x axis,
#         and with the same discontinuity as atan2.
def pseudoangle(dx, dy):
    p = dx/(abs(dx)+abs(dy)) # -1 .. 1 increasing with x
    if dy < 0: return p - 1  # -2 .. 0 increasing with x
    else:      return 1 - p  #  0 .. 2 decreasing with x
This means that if the language you're using has a sign function, you could avoid branching by returning sign(dy)*(1-p), which has the effect of putting an answer of 0 at the discontinuity between returning -2 and +2. And the same trick would work with @MvG's original methodology: one could return sign(dx)*(p-1).
Update: in a comment below, @MvG suggests a one-line C implementation of this, namely
pseudoangle = copysign(1. - dx/(fabs(dx)+fabs(dy)),dy)
@MvG says it works well, and it looks good to me :-).
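In Python, the same one-liner is available through math.copysign (a direct transcription of the C expression above):
from math import copysign

def pseudoangle(dx, dy):
    # range [-2 .. 2], same discontinuity as atan2
    return copysign(1. - dx/(abs(dx) + abs(dy)), dy)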
I know one possible such function, which I will describe here.
# Input: dx, dy: coordinates of a (difference) vector.
# Output: a number from the range [-1 .. 3] (or [0 .. 4] with the comment enabled)
#         which is monotonic in the angle this vector makes against the x axis.
def pseudoangle(dx, dy):
    ax = abs(dx)
    ay = abs(dy)
    p = dy/(ax+ay)
    if dx < 0: p = 2 - p
    # elif dy < 0: p = 4 + p
    return p
So why does this work? One thing to note is that scaling all input lengths will not affect the output. So the length of the vector (dx, dy) is irrelevant; only its direction matters. Concentrating on the first quadrant, we may for the moment assume dx == 1. Then dy/(1+dy) grows monotonically from zero for dy == 0 to one for infinite dy (i.e. for dx == 0). Now the other quadrants have to be handled as well. If dy is negative, then so is the initial p. So for positive dx we already have a range -1 <= p <= 1 monotonic in the angle. For dx < 0 we change the sign and add two. That gives a range 1 <= p <= 3 for dx < 0, and a range of -1 <= p <= 3 on the whole. If negative numbers are for some reason undesirable, the elif comment line can be included, which will shift the 4th quadrant from -1…0 to 3…4.
I don't know if the above function has an established name, or who might have published it first. I got it quite a while ago and copied it from one project to the next. I have however found occurrences of it on the web, so I'd consider this snippet public enough for re-use.
There is a way to obtain the range [0 … 4] (for real angles [0 … 2π]) without introducing a further case distinction:
# Input: dx, dy: coordinates of a (difference) vector.
# Output: a number from the range [0 .. 4] which is monotonic
#         in the angle this vector makes against the x axis.
def pseudoangle(dx, dy):
    p = dx/(abs(dx)+abs(dy)) # -1 .. 1 increasing with x
    if dy < 0: return 3 + p  #  2 .. 4 increasing with x
    else:      return 1 - p  #  0 .. 2 decreasing with x
I kinda like trigonometry, so I know the best way of mapping an angle to some values we usually have is the tangent. Of course, if we want a finite number in order to avoid the hassle of comparing {sign(x), y/x} pairs, it gets a bit more confusing.
But there is a function that maps [1,+inf[ to ]0,1], known as the inverse, which gives us a finite range to which we can map angles. The inverse of the tangent is the well-known cotangent, thus x/y (yes, it's as simple as that).
A little illustration, showing the values of tangent and cotangent on a unit circle:
You see the values are the same when |x| = |y|, and you see also that if we color the parts that output a value between [-1,1] on both circles, we manage to color the full circle. To make this mapping of values continuous and monotonic, we can do the following:
use the opposite of the cotangent to have the same monotonicity as the tangent
add 2 to -cotan, to have the values coincide where tan = 1
add 4 to one half of the circle (say, below the x = -y diagonal) to have the values fit at one of the discontinuities.
That gives the following piecewise function, which is continuous and monotonic in the angle, with only one discontinuity (which is the minimum):
double pseudoangle(double dx, double dy)
{
    // 1 for above, 0 for below the diagonal/anti-diagonal
    int diag  = dx > dy;
    int adiag = dx > -dy;

    double r = !adiag ? 4 : 0;

    if (dy == 0)
        return r;

    if (diag ^ adiag)
        r += 2 - dx / dy;
    else
        r += dy / dx;

    return r;
}
Note that this is very close to Fowler angles, with the same properties. Formally, (pseudoangle(dx,dy) + 1) % 8 == Fowler(dx,dy).
To talk performance: it's much less branchy than Fowler's code (and generally less complicated, imo). Compiled with -O3 on gcc 6.1.1, the above function generates assembly with 4 branches, two of which come from dy == 0 (one checking whether the operands are "unordered", i.e. whether dy was NaN, and the other checking whether they are equal).
I would argue this version is more precise than the others, since it only uses mantissa-preserving operations until shifting the result into the right interval. This should be especially visible when |x| << |y| or |x| >> |y|, where the operation |x| + |y| loses quite some precision.
As you can see on the graph, the angle-pseudoangle relation is also nicely close to linear.
Looking where branches come from, we can make the following remarks:
My code doesn't rely on abs or copysign, which makes it look more self-contained. However, playing with sign bits on floating-point values is actually rather trivial, since it's just flipping a separate bit (no branch!), so this is more of a disadvantage.
Furthermore, other solutions proposed here do not check whether abs(dx) + abs(dy) == 0 before dividing by it, but this version would fail as soon as only one component (dy) is 0 -- so that check throws in a branch (or 2 in my case).
If we choose to get roughly the same result (up to rounding errors) but without branches, we can abuse copysign and write:
double pseudoangle(double dx, double dy)
{
    double s = dx + dy;
    double d = dx - dy;
    double r = 2 * (1.0 - copysign(1.0, s));
    double xor_sign = copysign(1.0, d) * copysign(1.0, s);

    r += (1.0 - xor_sign);
    r += (s - xor_sign * d) / (d + xor_sign * s);

    return r;
}
Bigger errors may happen than with the previous implementation, due to cancellation in either d or s when dx and dy are close in absolute value. There is no check for division by zero, to stay comparable with the other implementations presented; here it can only happen when both dx and dy are 0.
If you can feed the original vectors instead of angles into a comparison function when sorting, you can make it work with:
Just a single branch.
Only floating point comparisons and multiplications.
Avoiding addition and subtraction makes it numerically much more robust. A double can actually always exactly represent the product of two floats, but not necessarily their sum. This means for single precision input you can guarantee a perfect flawless result with little effort.
This is basically Cimbali's solution repeated for both vectors, with branches eliminated and divisions multiplied away. It returns an integer, with sign matching the comparison result (positive, negative or zero):
signed int compare(double x1, double y1, double x2, double y2) {
    unsigned int d1 = x1 > y1;
    unsigned int d2 = x2 > y2;
    unsigned int a1 = x1 > -y1;
    unsigned int a2 = x2 > -y2;

    // Quotients of both angles.
    unsigned int qa = d1 * 2 + a1;
    unsigned int qb = d2 * 2 + a2;

    if(qa != qb) return((0x6c >> qa * 2 & 6) - (0x6c >> qb * 2 & 6));

    d1 ^= a1;

    double p = x1 * y2;
    double q = x2 * y1;

    // Numerator of each remainder, multiplied by the denominator of the other.
    double na = q * (1 - d1) - p * d1;
    double nb = p * (1 - d1) - q * d1;

    // Return signum(na - nb).
    return((na > nb) - (na < nb));
}
The simplest thing I came up with is making normalized copies of the points and splitting the circle around them in half along the x or y axis. Then use the opposite axis as a linear value between the beginning and end of the top or bottom buffer (one buffer will need to be filled in reverse linear order). Then you can read the first and then the second buffer linearly and it will be clockwise, or the second and then the first in reverse for counter-clockwise.
That might not be a good explanation, so I put some code up on GitHub that uses this method to sort points, with an epsilon value to size the arrays.
https://github.com/Phobos001/SpatialSort2D
This might not be good for your use case, because it's built for performance in graphics-effects rendering, but it's fast and simple (O(N) complexity). If you're working with really small changes in points or very large (hundreds of thousands) data sets, then this won't work, because the memory usage might outweigh the performance benefits.
Nice.. here is a variant that covers -Pi .. Pi like many atan2 functions (scaled by 2/Pi, so the returned values lie in -2 .. 2).
Edit note: changed my pseudocode to proper Python; the argument order changed for compatibility with Python's math module atan2(). Edit 2: a bit more code to catch the case dx=0.
def pseudoangle(dy, dx):
    """returns an approximation to math.atan2(dy, dx)*2/pi"""
    sign = lambda a: (a > 0) - (a < 0)  # cmp(a, 0) in Python 2; "sign" in many other languages
    if dx == 0:
        s = sign(dy)
    else:
        s = sign(dx*dy)
    if s == 0: return 0  # doesn't hurt performance much, but can be omitted if (0,0) never happens
    p = dy/(dx + s*dy)
    if dx < 0: return p - 2*s
    return p
In this form the max error is only ~0.07 radian for all angles.
(Of course leave out the Pi/2 if you don't care about the magnitude.)
Now for the bad news: on my system, using math.atan2 is about 25% faster; obviously, replacing simple interpreted code doesn't beat a compiled intrinsic.
If angles are not needed by themselves, but only for sorting, then the @jjrv approach is the best one. Here is a comparison in Julia:
using StableRNGs
using BenchmarkTools

# Definitions
struct V{T}
    x::T
    y::T
end

function pseudoangle(v)
    copysign(1. - v.x/(abs(v.x) + abs(v.y)), v.y)
end

function isangleless(v1, v2)
    a1 = abs(v1.x) + abs(v1.y)
    a2 = abs(v2.x) + abs(v2.y)
    a2*copysign(a1 - v1.x, v1.y) < a1*copysign(a2 - v2.x, v2.y)
end

# Data
rng = StableRNG(2021)
vectors = map(x -> V(x...), zip(rand(rng, 1000), rand(rng, 1000)))

# Comparison
res1 = sort(vectors, by = x -> pseudoangle(x));
res2 = sort(vectors, lt = (x, y) -> isangleless(x, y));
@assert res1 == res2

@btime sort($vectors, by = x -> pseudoangle(x));
# 110.437 μs (3 allocations: 23.70 KiB)
@btime sort($vectors, lt = (x, y) -> isangleless(x, y));
# 65.703 μs (3 allocations: 23.70 KiB)
So, by avoiding division, the time is almost halved without losing result quality. Of course, for more precise calculations isangleless should be equipped with BigFloat from time to time, but the same can be said about pseudoangle.
Just use a cross-product function. The direction in which one segment rotates relative to the other will give either a positive or negative number. No trig functions and no division. Fast and simple. Just Google it.
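For reference, one common way to turn the cross-product idea into a sorting comparator is to split vectors by half-plane first and let a single cross product decide within a half-plane. This is a sketch of that standard technique, not code from the answer above; the name angle_less is mine:
#include <stdbool.h>

// true if (ax, ay) comes before (bx, by), sweeping CCW from the positive x axis
bool angle_less(double ax, double ay, double bx, double by) {
    // lower half-plane: y < 0, or y == 0 with x < 0
    bool a_lower = (ay < 0) || (ay == 0 && ax < 0);
    bool b_lower = (by < 0) || (by == 0 && bx < 0);
    if (a_lower != b_lower)
        return b_lower;               // vectors in the upper half-plane sort first
    return ax * by - ay * bx > 0;     // cross product orders within one half-plane
}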
