Generate initial guess for any function? - algorithm

Here is the Newton's method code from the Wikipedia page:
x0 = 1                  # The initial guess
f(x) = x^2 - 2          # The function whose root we are trying to find
fprime(x) = 2x          # The derivative of the function
tolerance = 1e-7        # 7 digit accuracy is desired
epsilon = 1e-14         # Do not divide by a number smaller than this
maxIterations = 20      # Do not allow the iterations to continue indefinitely
solutionFound = false   # Have not converged to a solution yet

for i = 1:maxIterations
    y = f(x0)
    yprime = fprime(x0)

    if abs(yprime) < epsilon            # Stop if the denominator is too small
        break
    end

    global x1 = x0 - y/yprime           # Do Newton's computation

    if abs(x1 - x0) <= tolerance        # Stop when the result is within the desired tolerance
        global solutionFound = true
        break
    end

    global x0 = x1                      # Update x0 to start the process again
end

if solutionFound
    println("Solution: ", x1)           # x1 is a solution within tolerance and maximum number of iterations
else
    println("Did not converge")         # Newton's method did not converge
end
When I implemented this I found cases where I need to apply a new initial guess:
When the functions (i.e. f, fPrime) give an Infinity or NaN result (e.g. in C#, this happens for 1/x when x = 0, or √x when x = -1, ...).
When abs(yprime) < epsilon.
When x0 is too large compared to y/yprime (e.g. x0 = 1e99 but y/yprime = 1e25: in double precision the subtraction rounds back to x1 == x0, which is mathematically wrong and leads the algorithm nowhere).
My app allows the user to input the math function and the initial guess (e.g. the initial guess for x can be 1e308, the function can be 9=√(-81+x), 45=InverseSin(x), 3=√(x-1e99), ...).
So when the initial guess is bad, my app will automatically apply a new initial guess in the hope that it can give a result.
My current solution: the initial guess is an array of values:
double[] arrInitialGuess =
{
    [User's initial guess], 0, 1, -1, 2, -2, ... (you know, factorial n!) ..., 7.257416E+306, -7.257416E+306,
};
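To make the retry idea concrete, here is a minimal sketch in Python (my illustration, not the app's actual C# code) of a validate-and-retry loop; the newton and is_usable helpers and the fallback guess list are illustrative assumptions:

import math

def is_usable(v):
    # A result is unusable if the function returned NaN or +/-Infinity.
    return not (math.isnan(v) or math.isinf(v))

def newton(f, fprime, x0, tolerance=1e-7, epsilon=1e-14, max_iterations=20):
    # Plain Newton iteration; returns None to signal "try another guess".
    for _ in range(max_iterations):
        try:
            y, yprime = f(x0), fprime(x0)
        except (ValueError, ZeroDivisionError, OverflowError):
            return None
        if not (is_usable(y) and is_usable(yprime)) or abs(yprime) < epsilon:
            return None
        x1 = x0 - y/yprime
        if x1 == x0 and y != 0:
            return None          # update absorbed by rounding: x0 was too large
        if abs(x1 - x0) <= tolerance:
            return x1
        x0 = x1
    return None

def solve(f, fprime, user_guess):
    # Try the user's guess first, then fall back to a predefined list.
    for guess in [user_guess, 0, 1, -1, 2, -2, 6, -6, 24, -24]:
        root = newton(f, fprime, guess)
        if root is not None:
            return root
    return None

print(solve(lambda x: x**2 - 2, lambda x: 2*x, 1e99))  # ~1.4142136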
I have the following questions:
1. Is the big number (e.g. 7.257416E+306) even needed? Because I see that in x1 = x0 - y/yprime, if the initial guess x0 is too big compared to y/yprime, the subtraction programmatically leads nowhere. If the big number is pointless, what is the cap for the initial guess (e.g. 1e17)?
2. What is better for the array of initial guesses: factorials n! {+-1, +-2, +-6, ...}, powers of two {+-2^0, +-2^1, +-2^2, ...}, or powers of ten {+-1e0, +-1e1, +-1e2, ...}, ...?
3. If my predefined-array-of-initial-guesses method is not good, is there a better way to get a new initial guess for Newton's method (e.g. an algorithm to generate the next initial guess)?
Update:
Change of thought: the predefined array of initial guesses doesn't work.
For example, I have the formula 8 = 3/x => y = 8 - 3/x, which gives this graph.
In this case, I can only find the solution when the initial guess is in the range [0.1; 0.7], so a predefined initial-guess array like {0, 1, 2, ..., Inf} does me no good but wastes my precious resources.
So my new thought is: steer the next initial guess based on the graph. The idea is: compare the last guess with the current guess to see whether the value of y is heading toward 0 or not, so I can decide to increase or decrease the next initial guess to steer y toward 0. But I still keep the predefined initial-guess idea for the case where the guesses all give Infinity values.
Update 2:
New thought: pick the new initial guess in a range [x0; x1] where:
There is no error between x0 and x1 (e.g. no divide-by-zero error for any value in [x0; x1]), so I can form the line AB from A(x0, y0) and B(x1, y1).
y0 and y1 have different signs: (y0 > 0 && y1 < 0) || (y0 < 0 && y1 > 0), so the line AB cuts the x-axis, which makes it very likely there is a y = 0 somewhere between y0 and y1 (if the graph isn't too weird).
Try to narrow the range [x0; x1] as much as possible, then run a few initial guesses inside it; a sketch of this follows below.
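A minimal sketch of that Update 2 idea: scan a coarse grid for a sign change between valid points (resetting whenever a point errors out, which approximates the "no error between x0 and x1" rule), then shrink the bracket by bisection and use its midpoint as the next initial guess. find_bracket, its default bounds, and guess_from_bracket are illustrative choices, not a definitive implementation:

import math

def find_bracket(f, lo=-10.0, hi=10.0, samples=1000):
    # Scan a coarse grid for two adjacent valid points where f changes
    # sign. Whenever a point errors out or gives NaN/Inf, restart the
    # pair, so a sign change across a pole is not mistaken for a root.
    step = (hi - lo) / samples
    prev = None
    for i in range(samples + 1):
        x = lo + i * step
        try:
            y = f(x)
        except (ValueError, ZeroDivisionError):
            prev = None
            continue
        if math.isnan(y) or math.isinf(y):
            prev = None
            continue
        if prev is not None and (prev[1] > 0) != (y > 0):
            return prev[0], x               # bracket with a sign change
        prev = (x, y)
    return None

def guess_from_bracket(f, bracket, shrink_steps=20):
    # Narrow the bracket by bisection, then hand back its midpoint
    # as the new initial guess for Newton's method.
    a, b = bracket
    for _ in range(shrink_steps):
        mid = (a + b) / 2
        if (f(a) > 0) != (f(mid) > 0):
            b = mid
        else:
            a = mid
    return (a + b) / 2

f = lambda x: 8 - 3 / x                        # the example above; root at x = 0.375
print(guess_from_bracket(f, find_bracket(f)))  # ~0.375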

Related

Is there a binary function f(x,y), where x,y are integers and the result is 0 or 1, such that the result-1 region of the 2D plane is "continuous" and "irregular" enough?

That is, if f(x,y) = 1, then at least one of f(x-1,y), f(x+1,y), f(x,y-1), f(x,y+1) is also 1.
I'm thinking about a technique for defining a game map that is neither predefined nor randomly generated each time, but bound to a 2D binary function, so the map data never needs to be saved to disk, and each time the game is entered the map stays the same.
If 1 means land and 0 means ocean, I want the land to stay continuous and all of it reachable: no islands. And of course, the map must be irregular enough.
I'm not good at maths; is this possible? Thanks.
What I need is only a simple function, no recursion: once x and y are given, the result comes out, depending on nothing but x and y.
Guaranteeing connectivity with local considerations only is a very strong constraint on what we can do. I agree with the comments that suggest traditional map generation from a fixed seed.
Nevertheless, to answer the question as framed, my first thought would be star-shaped land. This idea requires a continuous function f(θ) > 0 with period 2π. We take every point (x, y) such that hypot(x, y) < f(atan2(y, x)).
This works great if x and y are real numbers -- every (x, y) in the land is connected by a straight line segment back to the origin (0, 0), hence "star-shaped". Over the integers, we have to put an extra condition on f: the function log(f(θ)) should be Lipschitz continuous (can't wiggle too much).
(You can skip this paragraph.) Assume without loss of generality that x > 0 and y > 0 are integers. If (x, y) is land, then we need (x-1, y) or (x, y-1) to be land. On one hand, one of these squares is closer, which is good since we're using a threshold: min(hypot(y, x-1), hypot(y-1, x)) <= hypot(y, x) - (sqrt(2) - 1), which is tight for (x, y) = (1, 1). On the other hand, the angle changes. We've deviated from the line segment by distance at most 1/sqrt(2). Let r = hypot(x, y). The change in angle is at most 1/sqrt(2) / (r - 1/sqrt(2)), which since r >= sqrt(2) is at most 1/sqrt(2) / (r - r/2) == sqrt(2) / r. Therefore a Lipschitz constant of (sqrt(2) - 1) / sqrt(2) = 1 - 1/sqrt(2) should suffice (probably this can be tightened).
So far this is very abstract. The classic way to get a periodic function that doesn't wiggle too much is by adding sine waves (with varying phases). I've provided a Python implementation and sample output below. The land is not 100% guaranteed to be connected, but it should be extremely unlikely.
import math
import os
import random

def make_parameters(n=20):
    return [
        (random.random() / (i + 1), 2 * math.pi * random.random()) for i in range(n)
    ]

width = 100

def is_land(parameters, x, y):
    if (x, y) == (0, 0):
        return True
    theta = math.atan2(y, x)
    return math.hypot(x, y) < 0.1 * width * math.exp(
        sum(
            amp * math.sin((i + 1) * theta + phase)
            for i, (amp, phase) in enumerate(parameters)
        )
    )

def main():
    dir = "lands"
    os.mkdir(dir)
    for i in range(10):
        for j in range(10):
            with open(os.path.join(dir, "%02d_%02d.pbm" % (i, j)), "w") as f:
                parameters = make_parameters()
                x0 = y0 = width // 2
                print("P1", file=f)
                print(width, width, file=f)
                for y in range(width):
                    print(
                        *(
                            int(is_land(parameters, x - x0, y - y0))
                            for x in range(width)
                        ),
                        file=f
                    )

if __name__ == "__main__":
    main()
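Since connectedness is only "extremely likely" rather than guaranteed, a quick flood-fill check is one way to verify a map before using it. This is a sketch I'm adding here (not part of the original answer), reusing is_land, make_parameters, and width from the code above:

from collections import deque

def is_connected(parameters, width):
    # Collect all land cells, then BFS from one of them; the map is
    # connected iff the BFS reaches every land cell.
    offset = width // 2
    land = {
        (x, y)
        for x in range(width)
        for y in range(width)
        if is_land(parameters, x - offset, y - offset)
    }
    if not land:
        return False
    seen = {next(iter(land))}
    queue = deque(seen)
    while queue:
        x, y = queue.popleft()
        for nxt in ((x-1, y), (x+1, y), (x, y-1), (x, y+1)):
            if nxt in land and nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen == land

params = make_parameters()
print(is_connected(params, width))  # regenerate params when this is False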

Writing a vector sum in MATLAB

Suppose I have a function phi(x1,x2) = k1*x1 + k2*x2 which I have evaluated over a grid, where the grid is a square with boundaries at -100 and 100 on both the x1 and x2 axes, with some step size, say h = 0.1. Now I want to calculate this sum over the grid, which I'm struggling with:
M(x1,x2) = 1/(pi*D) * sum over m = (m1,m2) in Z^2 of phi(m*h) * exp(-|x - m*h|^2 / (h^2*D))
What I was trying:
clear all
close all
clc

D = 1; h = 0.1;
D1 = -100;
D2 = 100;
X = D1 : h : D2;
Y = D1 : h : D2;
[x1, x2] = meshgrid(X, Y);
k1 = 2; k2 = 2;
phi = k1.*x1 + k2.*x2;
figure(1)
surf(X, Y, phi)
m1 = -500:500;
m2 = -500:500;
[M1, M2, X1, X2] = ndgrid(m1, m2, X, Y)
sys = @(m1, m2, X, Y) (k1*h*m1 + k2*h*m2).*exp((-([X Y] - h*[m1 m2]).^2)./(h^2*D))
sum1 = sum(sys(M1, M2, X1, X2))
MATLAB reports an error in ndgrid; any idea how I should code this?
MATLAB shows:
Error using repmat
Requested 10001x1001x2001x2001 (298649.5GB) array exceeds maximum array size preference. Creation of arrays greater than this limit may take a long time and cause MATLAB to become unresponsive. See array size limit or preference panel for more information.
Error in ndgrid (line 72)
varargout{i} = repmat(x,s);
Error in new_try1 (line 16)
[M1,M2,X1,X2]=ndgrid(m1,m2,X,Y)
Judging by your comments and your code, it appears as though you don't fully understand what the equation is asking you to compute.
To obtain the value M(x1,x2) at some given (x1,x2), you have to compute that sum over Z^2. Of course, using a numerical toolbox such as MATLAB, you could only ever hope to compute over some finite range of Z^2. In this case, since (x1,x2) covers the range [-100,100] x [-100,100], and h=0.1, it follows that m should cover the range [-1000, 1000] x [-1000, 1000], so that mh covers your domain. Example: m = (-1000, -1000) gives you mh = (-100, -100), which is the bottom-left corner of your domain. So really, phi(mh) is just phi(x1,x2) evaluated on all of your discretised points.
As an aside, since you need to compute |x-hm|^2, you can treat x = x1 + i x2 as a complex number to make use of MATLAB's abs function. If you were strictly working with vectors, you would have to use norm, which is OK too, but a bit more verbose. Thus, for some given x=(x10, x20), you would compute x-hm over the entire discretised plane as (x10 - x1) + i (x20 - x2).
Finally, you can compute one term of M at a time:
D = 1; h = 0.1;
D1 = -100;
D2 = 100;
X = (D1 : h : D2);  % X is in rows (dim 2)
Y = (D1 : h : D2)'; % Y is in columns (dim 1)
k1 = 2; k2 = 2;
phi = k1*X + k2*Y;
M = zeros(length(Y), length(X));
for j = 1:length(X)
    for i = 1:length(Y)
        % treat (x - hm) as a complex number
        x_hm = (X(j)-X) + 1i*(Y(i)-Y); % this computes x-hm for all m
        M(i,j) = 1/(pi*D) * sum(sum(phi .* exp(-abs(x_hm).^2/(h^2*D)), 1), 2);
    end
end
By the way, this computation takes quite a long time. You can consider either increasing h, reducing D1 and D2, or changing all three of them.
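For comparison, here is a rough NumPy port of the same per-point computation (my sketch, not part of the original answer); evaluating every output point this way is still slow, so only a single point is computed here:

import numpy as np

D, h = 1.0, 0.1
grid = np.arange(-100, 100 + h, h)
k1, k2 = 2.0, 2.0

X = grid[np.newaxis, :]   # rows (dim 2)
Y = grid[:, np.newaxis]   # columns (dim 1)
phi = k1 * X + k2 * Y     # phi evaluated at every lattice point m*h

def M_at(x10, x20):
    # One entry of M: 1/(pi*D) * sum over m of phi(m*h) * exp(-|x - m*h|^2 / (h^2*D)).
    sq_dist = (x10 - X) ** 2 + (x20 - Y) ** 2
    return np.sum(phi * np.exp(-sq_dist / (h ** 2 * D))) / (np.pi * D)

print(M_at(0.0, 0.0))  # ~0, since phi is antisymmetric about the origin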

Searching a 3D array for closest point satisfying a certain predicate

I'm looking for an enumeration algorithm to search through a 3D array "sphering" around a given starting point.
Given an array a of size NxNxN, where each N is 2^k for some k, and a point p in that array, the algorithm I'm looking for should do the following: if a[p] satisfies a certain predicate, the algorithm stops and p is returned. Otherwise the next point q is checked, where q is another point in the array that is closest to p and hasn't been visited yet. If that doesn't match either, the next q' is checked, and so on, until in the worst case the whole array has been searched.
By "closest" here, the perfect solution would be the point q that has the smallest Euclidean distance to p. As only discrete points have to be considered, perhaps some clever enumeration algorithm would make that possible. However, if this gets too complicated, the smallest Manhattan distance would be fine too. If there are several nearest points, it doesn't matter which one is considered next.
Is there already an algorithm that can be used for this task?
You can search for increasing squared distances, so you won't miss a point. This Python code should make it clear:
import math
import itertools

# Calculates all points at a certain squared distance.
# Coordinate constraint: z <= y <= x
def get_points_at_squared_euclidean_distance(d):
    result = []
    x = int(math.floor(math.sqrt(d)))
    while 0 <= x:
        y = x
        while 0 <= y:
            target = d - x*x - y*y
            lower = 0
            upper = y + 1
            while lower < upper:
                middle = (lower + upper) // 2
                current = middle * middle
                if current == target:
                    result.append((x, y, middle))
                    break
                if current < target:
                    lower = middle + 1
                else:
                    upper = middle
            y -= 1
        x -= 1
    return result

# Creates all possible reflections of a point
def get_point_reflections(point):
    result = set()
    for p in itertools.permutations(point):
        for n in range(8):
            result.add((
                p[0] * (1 if n % 8 < 4 else -1),
                p[1] * (1 if n % 4 < 2 else -1),
                p[2] * (1 if n % 2 < 1 else -1),
            ))
    return sorted(result)

# Enumerates all points around a center (excluding the center itself),
# in order of increasing distance
def get_next_point_near(center):
    d = 0
    points_at_d = []
    while True:
        while not points_at_d:
            d += 1
            points_at_d = get_points_at_squared_euclidean_distance(d)
        point = points_at_d.pop()
        for reflection in get_point_reflections(point):
            yield (
                center[0] + reflection[0],
                center[1] + reflection[1],
                center[2] + reflection[2],
            )

# The function you asked for
def get_nearest_point(center, predicate):
    for point in get_next_point_near(center):
        if predicate(point):
            return point

# Example usage
print(get_nearest_point((1, 2, 3), lambda p: sum(p) == 10))
Basically you consume points from the generator until one of them fulfills your predicate.
This is pseudocode for a simple algorithm that will search in increasing-radius spherical husks until it either finds a point or it runs out of array. Let us assume that condition returns either true or false and has access to the x, y, z coordinates being tested and the array itself, returning false (instead of exploding) for out-of-bounds coordinates:
def find_from_center(center, max_radius, condition) returns a point
    let radius = 0
    while radius < max_radius,
        let point = find_in_spherical_husk(center, radius, condition)
        if (point != null) return point
        radius ++
    return null
The hard part is inside find_in_spherical_husk. We are interested in checking out points such that
dist(center, p) >= radius AND dist(center, p) < radius+1
which will be our operating definition of husk. We could iterate over the whole 3D array in O(n^3) looking for those, but that would be really expensive in terms of time. A better pseudocode is the following:
def find_in_spherical_husk(center, radius, condition)
    let z = center.z - radius   // current slice height
    let r = 0                   // current circle radius; maxes at equator, then decreases
    while z <= center.z + radius,
        let z_center = (z, center.x, center.y)
        let point = find_in_z_circle(z_center, r, condition)
        if (point != null) return point
        // prepare for the next z-sliced circle
        z ++
        r = sqrt(max(0, radius*radius - (z-center.z)*(z-center.z)))
    return null
The idea here is to slice each husk into circles along the z-axis (any axis will do), and then look at each slice separately. If you were looking at the Earth, with the poles on the z-axis, you would be slicing from north to south. Finally, you would implement find_in_z_circle(z_center, r, condition) to look at the circumference of each of those circles. You can avoid some math there by using the Bresenham circle-drawing algorithm; but I assume the savings are negligible compared with the cost of checking the condition.

Fastest way to sort vectors by angle without actually computing that angle

Many algorithms (e.g. Graham scan) require points or vectors to be sorted by their angle (perhaps as seen from some other point, i.e. using difference vectors). This order is inherently cyclic, and where this cycle is broken to compute linear values often doesn't matter that much. But the real angle value doesn't matter much either, as long as cyclic order is maintained. So doing an atan2 call for every point might be wasteful. What faster methods are there to compute a value which is strictly monotonic in the angle, the way atan2 is? Such functions apparently have been called “pseudoangle” by some.
I started to play around with this and realised that the spec is kind of incomplete. atan2 has a discontinuity, because as dx and dy are varied, there's a point where atan2 will jump between -pi and +pi. The graph below shows the two formulas suggested by @MvG, and in fact they both have the discontinuity in a different place compared to atan2. (NB: I added 3 to the first formula and 4 to the alternative so that the lines don't overlap on the graph). If I added atan2 to that graph then it would be the straight line y=x. So it seems to me that there could be various answers, depending on where one wants to put the discontinuity. If one really wants to replicate atan2, the answer (in this genre) would be
# Input:  dx, dy: coordinates of a (difference) vector.
# Output: a number from the range [-2 .. 2] which is monotonic
#         in the angle this vector makes against the x axis,
#         and with the same discontinuity as atan2
def pseudoangle(dx, dy):
    p = dx/(abs(dx)+abs(dy)) # -1 .. 1 increasing with x
    if dy < 0: return p - 1  # -2 .. 0 increasing with x
    else:      return 1 - p  #  0 .. 2 decreasing with x
This means that if the language you're using has a sign function, you could avoid branching by returning sign(dy)*(1-p), which has the effect of putting an answer of 0 at the discontinuity between returning -2 and +2. The same trick would work with @MvG's original methodology: one could return sign(dx)*(p-1).
Update: In a comment below, @MvG suggests a one-line C implementation of this, namely
pseudoangle = copysign(1. - dx/(fabs(dx)+fabs(dy)),dy)
@MvG says it works well, and it looks good to me :-).
I know one possible such function, which I will describe here.
# Input:  dx, dy: coordinates of a (difference) vector.
# Output: a number from the range [-1 .. 3] (or [0 .. 4] with the comment enabled)
#         which is monotonic in the angle this vector makes against the x axis.
def pseudoangle(dx, dy):
    ax = abs(dx)
    ay = abs(dy)
    p = dy/(ax+ay)
    if dx < 0: p = 2 - p
    # elif dy < 0: p = 4 + p
    return p
So why does this work? One thing to note is that scaling all input lengths will not affect the output. So the length of the vector (dx, dy) is irrelevant; only its direction matters. Concentrating on the first quadrant, we may for the moment assume dx == 1. Then dy/(1+dy) grows monotonically from zero for dy == 0 towards one for infinite dy (i.e. for dx == 0). Now the other quadrants have to be handled as well. If dy is negative, then so is the initial p. So for positive dx we already have a range -1 <= p <= 1 monotonic in the angle. For dx < 0 we change the sign and add two. That gives a range 1 <= p <= 3 for dx < 0, and a range of -1 <= p <= 3 on the whole. If negative numbers are for some reason undesirable, the elif comment line can be included, which will shift the 4th quadrant from -1…0 to 3…4.
I don't know if the above function has an established name, or who might have published it first. I got it quite a while ago and have copied it from one project to the next. I have however found occurrences of it on the web, so I'd consider this snippet public enough for re-use.
There is a way to obtain the range [0 … 4] (for real angles [0 … 2π]) without introducing a further case distinction:
# Input:  dx, dy: coordinates of a (difference) vector.
# Output: a number from the range [0 .. 4] which is monotonic
#         in the angle this vector makes against the x axis.
def pseudoangle(dx, dy):
    p = dx/(abs(dx)+abs(dy)) # -1 .. 1 increasing with x
    if dy < 0: return 3 + p  #  2 .. 4 increasing with x
    else:      return 1 - p  #  0 .. 2 decreasing with x
I kinda like trigonometry, so I know the best way of mapping an angle to some values we usually have is a tangent. Of course, if we want a finite number, in order to avoid the hassle of comparing {sign(x), y/x}, it gets a bit more confusing.
But there is a function that maps [1, +inf[ to ]0, 1], known as the inverse, that will allow us to have a finite range to which we map angles. The inverse of the tangent is the well-known cotangent, thus x/y (yes, it's as simple as that).
A little illustration, showing the values of tangent and cotangent on a unit circle:
You see the values are the same when |x| = |y|, and you also see that if we color the parts that output a value between [-1, 1] on both circles, we cover a full circle. To make this mapping of values continuous and monotonic, we can do the following:
use the opposite of the cotangent to have the same monotonicity as the tangent;
add 2 to -cotan, so the values coincide where tan = 1;
add 4 to one half of the circle (say, below the x = -y diagonal) so the values fit together at one of the discontinuities.
That gives the following piecewise function, which is continuous and monotonic in the angle, with only one discontinuity (which is the minimum):
double pseudoangle(double dx, double dy)
{
    // 1 for above, 0 for below the diagonal/anti-diagonal
    int diag  = dx > dy;
    int adiag = dx > -dy;

    double r = !adiag ? 4 : 0;

    if (dy == 0)
        return r;

    if (diag ^ adiag)
        r += 2 - dx / dy;
    else
        r += dy / dx;

    return r;
}
Note that this is very close to Fowler angles, with the same properties. Formally, (pseudoangle(dx,dy) + 1) % 8 == Fowler(dx,dy).
Performance-wise, it's much less branchy than Fowler's code (and generally less complicated, imo). Compiled with -O3 on gcc 6.1.1, the above function generates assembly with 4 branches, two of which come from dy == 0 (one checking whether the operands are "unordered", i.e. whether dy was NaN, and the other checking whether they are equal).
I would argue this version is more precise than the others, since it only uses mantissa-preserving operations until shifting the result to the right interval. This should be especially visible when |x| << |y| or |x| >> |y|; there the operation |x| + |y| loses quite some precision.
As you can see on the graph the angle-pseudoangle relation is also nicely close to linear.
Looking at where the branches come from, we can make the following remarks:
My code doesn't rely on abs or copysign, which makes it look more self-contained. However, playing with sign bits on floating-point values is actually rather trivial, since it's just flipping a separate bit (no branch!), so this is more of a disadvantage.
Furthermore, other solutions proposed here do not check whether abs(dx) + abs(dy) == 0 before dividing by it, whereas this version would fail as soon as only one component (dy) is 0 -- so that throws in a branch (or 2 in my case).
If we choose to get roughly the same result (up to rounding errors) but without branches, we can abuse copysign and write:
double pseudoangle(double dx, double dy)
{
    double s = dx + dy;
    double d = dx - dy;
    double r = 2 * (1.0 - copysign(1.0, s));
    double xor_sign = copysign(1.0, d) * copysign(1.0, s);

    r += (1.0 - xor_sign);
    r += (s - xor_sign * d) / (d + xor_sign * s);

    return r;
}
Bigger errors may happen than with the previous implementation, due to cancellation in either d or s if dx and dy are close in absolute value. There is no check for division by zero, to stay comparable with the other implementations presented; in any case it only happens when both dx and dy are 0.
If you can feed the original vectors instead of angles into a comparison function when sorting, you can make it work with:
Just a single branch.
Only floating point comparisons and multiplications.
Avoiding addition and subtraction makes it numerically much more robust: a double can always exactly represent the product of two floats, but not necessarily their sum. This means that for single-precision input you can guarantee a perfect, flawless result with little effort.
This is basically Cimbali's solution repeated for both vectors, with branches eliminated and divisions multiplied away. It returns an integer whose sign matches the comparison result (positive, negative, or zero):
signed int compare(double x1, double y1, double x2, double y2) {
    unsigned int d1 = x1 > y1;
    unsigned int d2 = x2 > y2;
    unsigned int a1 = x1 > -y1;
    unsigned int a2 = x2 > -y2;

    // Quotients of both angles.
    unsigned int qa = d1 * 2 + a1;
    unsigned int qb = d2 * 2 + a2;

    if(qa != qb) return((0x6c >> qa * 2 & 6) - (0x6c >> qb * 2 & 6));

    d1 ^= a1;

    double p = x1 * y2;
    double q = x2 * y1;

    // Numerator of each remainder, multiplied by denominator of the other.
    double na = q * (1 - d1) - p * d1;
    double nb = p * (1 - d1) - q * d1;

    // Return signum(na - nb)
    return((na > nb) - (na < nb));
}
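To show how such a comparator plugs into a sort, here is a rough Python port of the C function above (my sketch, following the same logic) used with functools.cmp_to_key:

import functools

def compare(x1, y1, x2, y2):
    # Same logic as the C version: compare quadrant indices first,
    # then cross-multiplied remainders within a quadrant.
    d1 = int(x1 > y1); d2 = int(x2 > y2)
    a1 = int(x1 > -y1); a2 = int(x2 > -y2)
    qa = d1 * 2 + a1
    qb = d2 * 2 + a2
    if qa != qb:
        return (0x6c >> qa * 2 & 6) - (0x6c >> qb * 2 & 6)
    d1 ^= a1
    p = x1 * y2
    q = x2 * y1
    na = q * (1 - d1) - p * d1
    nb = p * (1 - d1) - q * d1
    return (na > nb) - (na < nb)

points = [(1.0, 1.0), (-2.0, 3.0), (0.0, -1.0), (4.0, 0.0), (-1.0, -1.0)]
points.sort(key=functools.cmp_to_key(lambda u, v: compare(*u, *v)))
print(points)  # the vectors in cyclic angular order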
The simplest thing I came up with is making normalized copies of the points and splitting the circle around them in half along the x or y axis. Then use the opposite axis as a linear value between the beginning and end of the top or bottom buffer (one buffer will need to be filled in reverse linear order). Then you can read the first buffer and then the second linearly, and it will be clockwise; or the second and then the first in reverse, for counter-clockwise.
That might not be a good explanation, so I put some code up on GitHub that uses this method to sort points, with an epsilon value to size the arrays.
https://github.com/Phobos001/SpatialSort2D
This might not be good for your use case because it's built for performance in graphics-effects rendering, but it's fast and simple (O(N) complexity). If you're working with really small changes in points or very large (hundreds of thousands) data sets, then this won't work, because the memory usage might outweigh the performance benefits.
Nice.. here is a variant that returns -Pi .. Pi like many atan2 functions.
Edit note: changed my pseudocode to proper Python; the argument order changed for compatibility with Python's math module atan2(). Edit 2: more code to catch the case dx = 0.
def pseudoangle(dy, dx):
    """Returns an approximation to math.atan2(dy, dx)*2/pi."""
    sign = lambda a: (a > 0) - (a < 0)  # replaces Python 2's cmp(); == "sign" in many other languages
    if dx == 0:
        s = sign(dy)
    else:
        s = sign(dx*dy)
    if s == 0:
        return 2 if dx < 0 else 0  # dy == 0: the negative x axis maps to 2; (0, 0) gives 0
    p = dy/(dx + s*dy)
    if dx < 0:
        return p - 2*s
    return p
In this form the max error is only ~0.07 radians for all angles (of course, leave out the Pi/2 if you don't care about the magnitude).
Now for the bad news: on my system, using python's math.atan2 is about 25% faster. Obviously, replacing simple interpreted code doesn't beat a compiled intrinsic.
If angles are not needed by themselves, but only for sorting, then the @jjrv approach is the best one. Here is a comparison in Julia:
using StableRNGs
using BenchmarkTools

# Definitions
struct V{T}
    x::T
    y::T
end

function pseudoangle(v)
    copysign(1. - v.x/(abs(v.x) + abs(v.y)), v.y)
end

function isangleless(v1, v2)
    a1 = abs(v1.x) + abs(v1.y)
    a2 = abs(v2.x) + abs(v2.y)
    a2*copysign(a1 - v1.x, v1.y) < a1*copysign(a2 - v2.x, v2.y)
end

# Data
rng = StableRNG(2021)
vectors = map(x -> V(x...), zip(rand(rng, 1000), rand(rng, 1000)))

# Comparison
res1 = sort(vectors, by = x -> pseudoangle(x));
res2 = sort(vectors, lt = (x, y) -> isangleless(x, y));
@assert res1 == res2

@btime sort($vectors, by = x -> pseudoangle(x));
# 110.437 μs (3 allocations: 23.70 KiB)

@btime sort($vectors, lt = (x, y) -> isangleless(x, y));
# 65.703 μs (3 allocations: 23.70 KiB)
So, by avoiding division, the time is almost halved without losing result quality. Of course, for more precise calculations, isangleless should be equipped with bigfloat from time to time, but the same can be said about pseudoangle.
Just use a cross-product function. The direction you rotate one segment relative to the other will give either a positive or negative number. No trig functions and no division. Fast and simple. Just Google it.
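A sketch of that idea (my addition, not part of the answer above): the sign of the cross product orders two vectors only when they lie within the same half-plane, so a comparator typically splits by half-plane first and uses the cross product within each half:

import functools

def half(v):
    # 0 for the upper half-plane (including the positive x axis),
    # 1 for the lower half (including the negative x axis).
    x, y = v
    return 0 if (y > 0 or (y == 0 and x > 0)) else 1

def angle_compare(u, v):
    # Order by half-plane first; within a half-plane the sign of the
    # cross product decides which vector comes first counter-clockwise.
    if half(u) != half(v):
        return half(u) - half(v)
    cross = u[0] * v[1] - u[1] * v[0]
    return (cross < 0) - (cross > 0)   # u before v if u x v > 0

points = [(1, 1), (-2, 3), (0, -1), (4, 0), (-1, -1)]
print(sorted(points, key=functools.cmp_to_key(angle_compare)))
# [(4, 0), (1, 1), (-2, 3), (-1, -1), (0, -1)]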

Efficient algorithm to determine range [a, b] of sin wave with interval

I have a sine wave whose parameters I can determine (they are user-input). It's of the form y=a*sin(m*x + t)
I'd like to know whether anyone knows an efficient algorithm to figure out the range of y for a given interval [0, x] (x is, again, another input).
For example:
for y = sin(x) (i.e. a=1, t=0, m=1), for the interval [0, 4] I'd like an output like [1, -0.756802]
Please keep in mind, m and t can be anything. Thus, the y-curve does not have to start (or end) at 0 (or 1). It could start anywhere.
Also, please note that x will be discrete.
Any ideas?
PS: I'll use python for implementing the algorithm.
Since the function y(x) = a*sin(m*x + t) is continuous, the maximum will be either at one of the interval's ends or at a maximum inside the interval, where dy/dx equals zero.
So:
1. Find the values of y(x) at the ends of the interval.
2. Find out whether dy/dx = a*m*cos(m*x + t) has zero(s) in the interval, and find the values of y(x) at those zero(s).
3. Choose the point where y(x) has its maximum value.
If you have greater than one period then the result is just +/- a.
For less than one period you can evaluate y at the start/end points and then find any maxima between the start/end points by solving for y' = 0, i.e. cos(m*x + t) = 0.
All the answers are more or less the same. Thanks guys =)
I think I'd go with something like the following (note that I am renaming the variable I called "x" to "end"; the original "x" denoted the end of my interval on the X-axis):
1) Evaluate y at 0 and "end"; use an if-block to assign the two values to the correct preliminary "min" and "max" of the range.
2) Evaluate the number of evolutions: evolNr = (m*end)/(2*Pi). If evolNr > 1, return [-a, a].
3) If evolNr < 1: first find the root of the derivative, which is at firstRoot = (1/(2m))*Pi - phase + q*(1/m)*Pi, where q = ceil((m/Pi) * ((1/(2m))*Pi - phase)); this gives the first root at some position x > 0. From then on, all other extrema lie between firstRoot and "end", with a new root every (1/m)*Pi.
In code: for (x = firstRoot; x < end; x += (1/m)*Pi) { evaluate y at x; if y > 0 it's a maximum, update "max"; otherwise update "min" }
return [min, max]
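A minimal Python sketch of that endpoints-plus-critical-points plan (my addition; sine_range and its parameter handling are illustrative):

import math

def sine_range(a, m, t, end):
    """Min and max of y = a*sin(m*x + t) over [0, end] (assumes m > 0)."""
    # The endpoints are always candidates.
    candidates = [a * math.sin(t), a * math.sin(m * end + t)]
    # Interior critical points solve m*x + t = pi/2 + q*pi for integer q,
    # i.e. x = (pi/2 + q*pi - t)/m; keep those with 0 <= x <= end.
    q_min = math.ceil((t - math.pi/2) / math.pi)
    q_max = math.floor((m*end + t - math.pi/2) / math.pi)
    for q in range(q_min, q_max + 1):
        x = (math.pi/2 + q*math.pi - t) / m
        candidates.append(a * math.sin(m*x + t))
    return [min(candidates), max(candidates)]

print(sine_range(1, 1, 0, 4))  # [-0.7568024953079282, 1.0] for y = sin(x) on [0, 4]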
