Reducing number of nodes in polygons using Python - algorithm

I have a shapefile with several contiguous polygons and I want to reduce their number of nodes keeping the adjacent polygons topologically consistent.
I was thinking of deleting nodes based on the angle formed by the two segments on either side of the node; in particular, deleting nodes whose angle is between 175° and 180° (i.e. nearly collinear vertices).
I have seen a comment referring to the same idea, but I have very basic knowledge of coding. How could this be implemented in Python?
https://stackoverflow.com/a/2624475/8435715

Here is an example of how you can do that based on two criteria - distance between vertices and the angle you described above:
import numpy as np
def reduce_polygon(polygon, angle_th=0, distance_th=0):
    angle_th_rad = np.deg2rad(angle_th)
    points_removed = [0]
    while len(points_removed):
        points_removed = list()
        for i in range(0, len(polygon)-2, 2):
            v01 = polygon[i-1] - polygon[i]
            v12 = polygon[i] - polygon[i+1]
            d01 = np.linalg.norm(v01)
            d12 = np.linalg.norm(v12)
            if d01 < distance_th and d12 < distance_th:
                points_removed.append(i)
                continue
            angle = np.arccos(np.sum(v01*v12) / (d01 * d12))
            if angle < angle_th_rad:
                points_removed.append(i)
        polygon = np.delete(polygon, points_removed, axis=0)
    return polygon
example:
from matplotlib import pyplot as plt
from time import time
tic = time()
reduced_polygon = reduce_polygon(original_polygon, angle_th=5, distance_th=4)
toc = time()
plt.figure()
plt.scatter(original_polygon[:, 0], original_polygon[:, 1], c='r', marker='o', s=2)
plt.scatter(reduced_polygon[:, 0], reduced_polygon[:, 1], c='b', marker='x', s=20)
plt.plot(reduced_polygon[:, 0], reduced_polygon[:, 1], c='black', linewidth=1)
plt.show()
print(f'original_polygon length: {len(original_polygon)}\n'
      f'reduced_polygon length: {len(reduced_polygon)}\n'
      f'running time: {round(toc - tic, 4)} seconds')
This produces a plot with the original vertices in red, the reduced vertices as blue crosses, and the reduced polygon drawn as a black line.

Related

SciPy: von Mises distribution on a half circle?

I'm trying to figure out the best way to define a von-Mises distribution wrapped on a half-circle (I'm using it to draw directionless lines at different concentrations). I'm currently using SciPy's vonmises.rvs(). Essentially, I want to be able to put in, say, a mean orientation of pi/2 and have the distribution truncated to no more than pi/2 either side.
I could use a truncated normal distribution, but I will lose the wrapping of the von-mises (say if I want a mean orientation of 0)
I've seen this done in research papers looking at mapping fibre orientations, but I can't figure out how to implement it (in python). I'm a bit stuck on where to start.
If my von Mises distribution is defined as (from numpy.vonmises):
np.exp(kappa*np.cos(x-mu))/(2*np.pi*i0(kappa))
with:
mu, kappa = 0, 4.0
x = np.linspace(-np.pi, np.pi, num=51)
How would I alter it to use a wrap around a half-circle instead?
Could anyone with some experience with this offer some guidance?
It is useful to have direct numerical inverse-CDF sampling; it works well for a distribution with a bounded domain. Here is a code sample that builds PDF and CDF tables and samples using the inverse-CDF method. It could be optimized and vectorized, of course.
Code, Python 3.8, x64 Windows 10
import numpy as np
import matplotlib.pyplot as plt
import scipy.integrate as integrate
def PDF(x, μ, κ):
    return np.exp(κ*np.cos(x - μ))

N = 201
μ = np.pi/2.0
κ = 4.0

xlo = μ - np.pi/2.0
xhi = μ + np.pi/2.0

# PDF normalization
I = integrate.quad(lambda x: PDF(x, μ, κ), xlo, xhi)
print(I)
I = I[0]

x = np.linspace(xlo, xhi, N, dtype=np.float64)
step = (xhi-xlo)/(N-1)

p = PDF(x, μ, κ)/I # PDF table

# making CDF table
c = np.zeros(N, dtype=np.float64)
for k in range(1, N):
    c[k] = integrate.quad(lambda x: PDF(x, μ, κ), xlo, x[k])[0] / I
c[N-1] = 1.0 # so random() in [0...1) range would work right

#%%
# sampling from tabular CDF via inverse CDF method
def InvCDFsample(c, x, gen):
    r = gen.random()
    i = np.searchsorted(c, r, side='right')
    q = (r - c[i-1]) / (c[i] - c[i-1])
    return (1.0 - q) * x[i-1] + q * x[i]

# sampling test
RNG = np.random.default_rng()
s = np.empty(20000)
for k in range(0, len(s)):
    s[k] = InvCDFsample(c, x, RNG)

# plotting PDF, CDF and sampling density
plt.plot(x, p, 'b^') # PDF
plt.plot(x, c, 'r.') # CDF
n, bins, patches = plt.hist(s, x, density = True, color ='green', alpha = 0.7)
plt.show()
The resulting graph shows the PDF (blue triangles), the CDF (red dots), and the sampling histogram (green).
You could discard the values outside the desired range via numpy's boolean filtering (theta = theta[(theta >= 0) & (theta <= np.pi)], which shortens the array of samples). So you could first generate more samples than needed, then filter, and then take a subarray of the desired size.
Or you could add/subtract pi to put them all into that range (via theta = np.where(theta < 0, theta + np.pi, np.where(theta > np.pi, theta - np.pi, theta))). As noted by @SeverinPappadeux, such a change alters the distribution and is probably not desired.
import matplotlib.pyplot as plt
from matplotlib.collections import LineCollection
import numpy as np
from scipy.stats import vonmises

mu = np.pi / 2
kappa = 4

orig_theta = vonmises.rvs(kappa, loc=mu, size=(10000))

fig, axes = plt.subplots(ncols=2, sharex=True, sharey=True, figsize=(12, 4))
for ax in axes:
    theta = orig_theta.copy()
    if ax == axes[0]:
        ax.set_title(f"$Von Mises, \\mu={mu:.2f}, \\kappa={kappa}$")
    else:
        theta = theta[(theta >= 0) & (theta <= np.pi)]
        print(len(theta))
        ax.set_title(f"$Von Mises, angles\\ filtered\\ ({100 * len(theta) / (len(orig_theta)):.2f}\\ \\%)$")
    segs = np.zeros((len(theta), 2, 2))
    segs[:, 1, 0] = np.cos(theta)
    segs[:, 1, 1] = np.sin(theta)
    line_segments = LineCollection(segs, linewidths=.1, colors='blue', alpha=0.5)
    ax.add_collection(line_segments)
    ax.autoscale()
    ax.set_aspect('equal')
plt.show()
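For completeness, here is a minimal sketch of the oversample-then-filter idea described above; the oversampling factor of 2 and the variable names are illustrative, not from the original answer:

import numpy as np
from scipy.stats import vonmises

mu, kappa = np.pi / 2, 4
n_wanted = 10000

# draw more samples than needed, keep only those in [0, pi], then trim
theta = vonmises.rvs(kappa, loc=mu, size=2 * n_wanted)
theta = theta[(theta >= 0) & (theta <= np.pi)]
theta = theta[:n_wanted]  # may still fall short for extreme kappa/mu; redraw if so

For a concentrated distribution centered inside [0, pi], most samples already fall in range, so a small oversampling factor is usually enough.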

Approximate a nonlinear function by piece-wise linear segments

I'm thinking if I can use GEKKO for the following problem. Please feel free to share your comments. Thank you in advance.
I'd like to approximate some nonlinear functions by piece-wise linear (PWL) segments. For instance, I'd like to use N PWL segments to approximate a Gaussian function. Is it possible to leverage GEKKO for this problem? What available examples do you suggest studying?
Thank you
The link that Junho sent is good if you have discontinuous functions that are linear or nonlinear with switching conditions. If you have data then there is a PWL function in Gekko that you can use without binary or MPCC switching conditions. Below is a simple PWL example in Python. Instead of the data points I included, you can use PWL segments to approximate the Gaussian function.
import matplotlib.pyplot as plt
from gekko import GEKKO
import numpy as np
m = GEKKO(remote=False)
m.options.SOLVER = 1
x = m.FV(value = 4.5)
y = m.Var()
xp = np.array([1, 2, 3, 3.5, 4, 5])
yp = np.array([1, 0, 2, 2.5, 2.8, 3])
m.pwl(x,y,xp,yp)
m.solve()
plt.plot(xp,yp,'rx-',label='PWL function')
plt.plot(x,y,'bo',label='Data')
plt.show()
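To approximate the Gaussian mentioned in the question, the same m.pwl object can be used with breakpoints sampled from the Gaussian. The following is a minimal sketch; the unit Gaussian and the choice of 7 breakpoints on [-3, 3] are illustrative assumptions, not part of the original answer:

import numpy as np
from gekko import GEKKO

m = GEKKO(remote=False)
m.options.SOLVER = 1

# breakpoints for the PWL approximation of a unit Gaussian (illustrative choice)
xp = np.linspace(-3, 3, 7)
yp = np.exp(-xp**2 / 2)

x = m.FV(value=1.2)   # point at which to evaluate the PWL approximation
y = m.Var()
m.pwl(x, y, xp, yp)   # y is the piece-wise linear interpolation of (xp, yp) at x
m.solve(disp=False)
print(x.value[0], y.value[0])

More breakpoints give a closer approximation at the cost of a larger model.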
If there is a data set with many points, sometimes it is desirable to fit just a few PWL segments to the data. Here is another example that shows how to fit a PWL approximation. In this case you can't use the PWL object in Gekko.
from scipy import optimize
import matplotlib.pyplot as plt
from gekko import GEKKO
import numpy as np
m = GEKKO()
m.options.SOLVER = 3
m.options.IMODE = 2
xzd = np.linspace(1,5,100)
yzd = np.sin(xzd)
xz = m.Param(value=xzd)
yz = m.CV(value=yzd)
yz.FSTATUS = 1
xp_val = np.array([1, 2, 3, 3.5, 4, 5])
yp_val = np.array([1, 0, 2, 2.5, 2.8, 3])
xp = [m.FV(value=xp_val[i],lb=xp_val[0],ub=xp_val[-1]) for i in range(6)]
yp = [m.FV(value=yp_val[i]) for i in range(6)]
for i in range(6):
    xp[i].STATUS = 0
    yp[i].STATUS = 1
for i in range(5):
    m.Equation(xp[i+1]>=xp[i]+0.05)
x = [m.Var(lb=xp[i],ub=xp[i+1]) for i in range(5)]
x[0].lower = -1e20
x[-1].upper = 1e20
# Variables
slk_u = [m.Var(value=1,lb=0) for i in range(4)]
slk_l = [m.Var(value=1,lb=0) for i in range(4)]
# Intermediates
slope = []
for i in range(5):
    slope.append(m.Intermediate((yp[i+1]-yp[i]) / (xp[i+1]-xp[i])))
y = []
for i in range(5):
    y.append(m.Intermediate((x[i]-xp[i])*slope[i]))
for i in range(4):
    m.Obj(1000*(slk_u[i] + slk_l[i]))
m.Equation(xz == x[0] + slk_u[0])
for i in range(3):
    m.Equation(xz == x[i+1] + slk_u[i+1] - slk_l[i])
m.Equation(xz == x[4] - slk_l[3])
m.Equation(yz == yp[0] + y[0] + y[1] + y[2] + y[3] + y[4])
m.solve()
#y_val = yz.value
#print(y_val)
import matplotlib.pyplot as plt
plt.plot(xp,yp,'rx-',label='PWL function')
plt.plot(xzd,yzd,'b.',label='Data')
plt.show()
Please check out the link below for examples of PWL using binary decision variables.
Logical conditions in Optimization

Faster approach for decomposing a rotation into rotations around arbitrary orthogonal axes

I have a rotation and I want to decompose it into a series of rotations around 3 orthogonal arbitrary axes. It's a bit like a generalisation of Euler decomposition where the rotations are not around the X, Y and Z axes.
I've tried to find a closed-form solution but have not been successful, so I have produced a numerical solution based on minimising the difference between the rotation I want and the product of 3 quaternions representing the 3 axis rotations, with the 3 angles being the unknowns. 'SimplexMinimize' is just an abstraction of the code that finds the 3 angles minimising the error.
double GSUtil::ThreeAxisDecomposition(const Quaternion &target, const Vector &ax1, const Vector &ax2, const Vector &ax3, double *ang1, double *ang2, double *ang3)
{
    DataContainer data = {target, ax1, ax2, ax3};
    VaraiablesContainer variables = {ang1, ang2, ang3};
    double error = SimplexMinimize(ThreeAxisDecompositionError, data, variables);
    return error;
}

double GSUtil::ThreeAxisDecompositionError(const Quaternion &target, const Vector &ax1, const Vector &ax2, const Vector &ax3, double ang1, double ang2, double ang3)
{
    Quaternion product = MakeQFromAxisAngle(ax3, ang3) * MakeQFromAxisAngle(ax2, ang2) * MakeQFromAxisAngle(ax1, ang1);
    // now we need a distance metric between product and target. I could just calculate the angle between them:
    // theta = acos(2*<q1,q2>^2 - 1) where <q1,q2> is the inner product (n1*n2 + x1*x2 + y1*y2 + z1*z2)
    // but there are other quantities that will do a similar job in less time
    // 1 - <q1,q2>^2 should be faster to calculate and is 0 when they are identical and 1 when they are 180 degrees apart
    double innerProduct = target.n * product.n + target.v.x * product.v.x + target.v.y * product.v.y + target.v.z * product.v.z;
    double error = 1 - innerProduct * innerProduct;
    return error;
}
It works (I think) but obviously it is quite slow. My feeling is there ought to be a closed form solution. At the very least there ought to be a gradient to the function so I can use a faster optimiser.
There is indeed a closed-form solution. Since the axes form an orthonormal basis A (each axis is a column of the matrix), you can decompose a rotation R onto the three axes by transforming R into the basis A and then doing an Euler angle decomposition around the main axes:
R = A*R'*A^t = A*X*Y*Z*A^t = (A*X*A^t)*(A*Y*A^t)*(A*Z*A^t)
This translates into the following algorithm:
Compute R' = A^t*R*A
Decompose R' into Euler angles around the main axes to obtain matrices X, Y, Z
Compute the three rotations around the given axes:
X' = A*X*A^t
Y' = A*Y*A^t
Z' = A*Z*A^t
As a reference, here's the Mathematica code I used to test my answer
(*Generate random axes and a rotation matrix for testing purposes*)
a = RotationMatrix[RandomReal[{0, \[Pi]}],
Normalize[RandomReal[{-1, 1}, 3]]];
t1 = RandomReal[{0, \[Pi]}];
t2 = RandomReal[{0, \[Pi]}];
t3 = RandomReal[{0, \[Pi]}];
r = RotationMatrix[t1, a[[All, 1]]].
RotationMatrix[t2, a[[All, 2]]].
RotationMatrix[t3, a[[All, 3]]];
(*Decompose rotation matrix 'r' into the axes of 'a'*)
rp = Transpose[a].r.a;
{a1, a2, a3} = EulerAngles[rp, {1, 2, 3}];
xp = a.RotationMatrix[a1, {1, 0, 0}].Transpose[a];
yp = a.RotationMatrix[a2, {0, 1, 0}].Transpose[a];
zp = a.RotationMatrix[a3, {0, 0, 1}].Transpose[a];
(*Test that the generated matrix is equal to 'r' (should give 0)*)
xp.yp.zp - r // MatrixForm
(*Test that the individual rotations preserve the axes (should give 0)*)
xp.a[[All, 1]] - a[[All, 1]]
yp.a[[All, 2]] - a[[All, 2]]
zp.a[[All, 3]] - a[[All, 3]]
I was doing the same thing in Python and found @Gilles-PhilippePaillé's answer really helpful, although I had to tweak a couple of things, mostly getting the Euler angles out in reverse. Thought I would add my Python version here for reference in case it helps anyone.
import numpy as np
from numpy.linalg import norm
from scipy.spatial.transform import Rotation

def normalise(v: np.ndarray) -> np.ndarray:
    """Normalise an array along its final dimension."""
    return v / norm(v, axis=-1, keepdims=True)

# Generate random basis
A = Rotation.from_rotvec(normalise(np.random.random(3)) * np.random.rand() * np.pi).as_matrix()

# Generate random rotation matrix
t0 = np.random.rand() * np.pi
t1 = np.random.rand() * np.pi
t2 = np.random.rand() * np.pi
R = Rotation.from_rotvec(A[:, 0] * t0) * Rotation.from_rotvec(A[:, 1] * t1) * Rotation.from_rotvec(A[:, 2] * t2)
R = R.as_matrix()

# Decompose rotation matrix R into the axes of A
rp = Rotation.from_matrix(A.T @ R @ A)
a3, a2, a1 = rp.as_euler('zyx')
xp = A @ Rotation.from_rotvec(a1 * np.array([1, 0, 0])).as_matrix() @ A.T
yp = A @ Rotation.from_rotvec(a2 * np.array([0, 1, 0])).as_matrix() @ A.T
zp = A @ Rotation.from_rotvec(a3 * np.array([0, 0, 1])).as_matrix() @ A.T

# Test that the generated matrix is equal to 'R' (should give 0)
assert np.allclose(xp @ yp @ zp, R)

# Test that the individual rotations preserve the axes (should give 0)
assert np.allclose(xp @ A[:, 0], A[:, 0])
assert np.allclose(yp @ A[:, 1], A[:, 1])
assert np.allclose(zp @ A[:, 2], A[:, 2])

How do I perform a curve fit with an array of points while touching a specific point in that array?

I need help with curve fitting a given set of points. The points form a parabola and I need to find the peak point of the result. The issue is that when I do a curve fit, the fitted curve sometimes doesn't touch the maximum y-coordinate even though the actual point is given in the input array.
Following is the code snippet. Here 1.88 is the actual peak y-coordinate (13.05,1.88). But the graph generated by the code does not touch the point due to curve fitting. So is there a way to fit the curve making sure that it touches the max point given in the input array?
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit, minimize_scalar
fig = plt.gcf()
#fig.set_size_inches(18.5, 10.5)
x = [4.59,9.02,13.05,18.47,20.3]
y = [1.7,1.84,1.88,1.7,1.64]
def f(x, p1, p2, p3):
    return p3*(p1/((x-p2)**2 + (p1/2)**2))
plt.plot(x,y,"ro")
popt, pcov = curve_fit(f, x, y)
# find the peak
fm = lambda x: -f(x, *popt)
r = minimize_scalar(fm, bounds=(1, 5))
print( "maximum:", r["x"], f(r["x"], *popt) ) #maximum: 2.99846874275 18.3928199902
plt.text(1,1.9,'maximum '+str(round(r["x"],2))+'( #'+str(round(f(r["x"], *popt),2)) + ' )')
x_curve = np.linspace(min(x), max(x), 50)
plt.plot(x_curve, f(x_curve, *popt))
plt.plot(r['x'], f(r['x'], *popt), 'ko')
plt.show()
Here is a graphical code example using your equation with weighted fitting, where I have made the max point larger to more easily see the effect of the weighting. In non-weighted curve fitting, all weights are implicitly 1.0, as all data points have equal weight. Scipy's curve_fit routine uses weights in the form of uncertainties, so giving a point a very small uncertainty (which I have done) is like giving that point a very large weight. This technique can be used to make a fit pass arbitrarily close to any single data point with any software that can perform weighted fitting.
import numpy, scipy, matplotlib
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
x = [4.59,9.02,13.05,18.47,20.3]
y = [1.7,1.84,2.0,1.7,1.64]
# note the single very small uncertainty - try making this value 1.0
uncertainties = numpy.array([1.0, 1.0, 1.0E-6, 1.0, 1.0])
# rename data to use previous example
xData = numpy.array(x)
yData = numpy.array(y)
def func(x, p1, p2, p3):
    return p3*(p1/((x-p2)**2 + (p1/2)**2))
# these are the same as the scipy defaults
initialParameters = numpy.array([1.0, 1.0, 1.0])
# curve fit the test data, first without uncertainties to
# get us closer to initial starting parameters
ssqParameters, pcov = curve_fit(func, xData, yData, p0 = initialParameters)
# now that we have better starting parameters, use uncertainties
fittedParameters, pcov = curve_fit(func, xData, yData, p0 = ssqParameters, sigma=uncertainties, absolute_sigma=True)
modelPredictions = func(xData, *fittedParameters)
absError = modelPredictions - yData
SE = numpy.square(absError) # squared errors
MSE = numpy.mean(SE) # mean squared errors
RMSE = numpy.sqrt(MSE) # Root Mean Squared Error, RMSE
Rsquared = 1.0 - (numpy.var(absError) / numpy.var(yData))
print('Parameters:', fittedParameters)
print('RMSE:', RMSE)
print('R-squared:', Rsquared)
print()
##########################################################
# graphics output section
def ModelAndScatterPlot(graphWidth, graphHeight):
    f = plt.figure(figsize=(graphWidth/100.0, graphHeight/100.0), dpi=100)
    axes = f.add_subplot(111)

    # first the raw data as a scatter plot
    axes.plot(xData, yData, 'D')

    # create data for the fitted equation plot
    xModel = numpy.linspace(min(xData), max(xData))
    yModel = func(xModel, *fittedParameters)

    # now the model as a line plot
    axes.plot(xModel, yModel)

    axes.set_xlabel('X Data') # X axis data label
    axes.set_ylabel('Y Data') # Y axis data label

    plt.show()
    plt.close('all') # clean up after using pyplot
graphWidth = 800
graphHeight = 600
ModelAndScatterPlot(graphWidth, graphHeight)

Results from my thin plate spline interpolation implementation are dependent on the independent variables

I implemented the thin plate spline algorithm (see also this description) in order to interpolate scattered data using Python.
My algorithm seems to work correctly when the bounding box of the initial scattered data has an aspect ratio close to 1. However, scaling one of the coordinates of the data points changes the interpolation result. I created a minimal working example that is representative of what I am trying to accomplish. Below are two plots showing the results of the interpolation of 50 random points.
First, the interpolation of z = x^2 on the domain x = [0, 3], y = [0, 120]:
As you can see, the interpolation fails. Now, executing the same process but after scaling the x values by a factor of 40, I get:
This time, the result looks better. Choosing a slightly different scaling factor would have resulted in a slightly different interpolation. This shows that something is wrong in my algorithm but I can't find what exactly. Here is the algorithm:
import numpy as np
import numba as nb
# pts1 = Mx2 matrix (original coordinates)
# z1 = Mx1 column vector (original values)
# pts2 = Nx2 matrix (interpolation coordinates)
def gen_K(n, pts1):
    K = np.zeros((n,n))
    for i in range(0,n):
        for j in range(0,n):
            if i != j:
                r = ( (pts1[i,0] - pts1[j,0])**2.0 + (pts1[i,1] - pts1[j,1])**2.0 )**0.5
                K[i,j] = r**2.0*np.log(r)
    return K

def compute_z2(m, n, pts1, pts2, coeffs):
    z2 = np.zeros((m,1))
    x_min = np.min(pts1[:,0])
    x_max = np.max(pts1[:,0])
    y_min = np.min(pts1[:,1])
    y_max = np.max(pts1[:,1])
    for k in range(0,m):
        pt = pts2[k,:]
        # If point is located inside bounding box of pts1
        if (pt[0] >= x_min and pt[0] <= x_max and pt[1] >= y_min and pt[1] <= y_max):
            z2[k,0] = coeffs[-3,0] + coeffs[-2,0]*pts2[k,0] + coeffs[-1,0]*pts2[k,1]
            for i in range(0,n):
                r2 = ( (pts1[i,0] - pts2[k,0])**2.0 + (pts1[i,1] - pts2[k,1])**2.0 )**0.5
                if r2 != 0:
                    z2[k,0] += coeffs[i,0]*( r2**2.0*np.log(r2) )
        else:
            z2[k,0] = np.nan
    return z2

gen_K_nb = nb.jit(nb.float64[:,:](nb.int64, nb.float64[:,:]), nopython = True)(gen_K)
compute_z2_nb = nb.jit(nb.float64[:,:](nb.int64, nb.int64, nb.float64[:,:], nb.float64[:,:], nb.float64[:,:]), nopython = True)(compute_z2)

def TPS(pts1, z1, pts2, factor):
    n, m = pts1.shape[0], pts2.shape[0]
    P = np.hstack((np.ones((n,1)),pts1))
    Y = np.vstack((z1, np.zeros((3,1))))
    K = gen_K_nb(n, pts1)
    K += factor*np.identity(n)
    L = np.zeros((n+3,n+3))
    L[0:n, 0:n] = K
    L[0:n, n:n+3] = P
    L[n:n+3, 0:n] = P.T
    L_inv = np.linalg.inv(L)
    coeffs = L_inv.dot(Y)
    return compute_z2_nb(m, n, pts1, pts2, coeffs)
Finally, here is the code snippet I used to create the two plots:
import matplotlib.pyplot as plt
import numpy as np
N = 50 # Number of random points
pts = np.random.rand(N,2)
pts[:,0] *= 3.0 # initial x values
pts[:,1] *= 120.0 # initial y values
z1 = (pts[:,0])**2.0
for scale in [1.0, 40.0]:
    pts1 = pts.copy()
    pts1[:,0] *= scale
    x2 = np.linspace(np.min(pts1[:,0]), np.max(pts1[:,0]), 40)
    y2 = np.linspace(np.min(pts1[:,1]), np.max(pts1[:,1]), 40)
    x2, y2 = np.meshgrid(x2, y2)
    pts2 = np.vstack((x2.flatten(), y2.flatten())).T
    z2 = TPS(pts1, z1.reshape(z1.shape[0], 1), pts2, 0.0)

    # Display
    fig = plt.figure(figsize=(4,3))
    ax = fig.add_subplot(111)
    C = ax.contourf(x2, y2, z2.reshape(x2.shape), np.linspace(0,9,10), extend='both')
    ax.plot(pts1[:,0], pts1[:,1], 'ok')
    ax.set_xlabel('x')
    ax.set_ylabel('y')
    plt.colorbar(C, extendfrac=0)
    plt.tight_layout()
    plt.show()
Thin plate spline interpolation is scale invariant only under uniform scaling: if you scale x and y by the same factor, the result is the same. However, if you scale x and y by different factors, the result will be different. This is a common characteristic among radial basis functions; some radial basis functions are not even invariant to uniform scaling.
When you say it "fails", what do you mean? The big question is: does it still exactly interpolate at the construction points? Assuming your code is correct and you do not have ill-conditioning, it should, in which case it does not really fail.
What I think is happening is that the addition of the scale is making the behavior in the x direction more dominant so you do not see the wiggles that come naturally from the interpolation.
As an aside, you can greatly speed up your code without using Numba by vectorizing.
import scipy.spatial.distance
import scipy.special
def gen_K(n, pts1):
    # No need for n but kept to maintain compatibility
    pts1 = np.atleast_2d(pts1)
    r = scipy.spatial.distance.cdist(pts1, pts1)
    return scipy.special.xlogy(r**2, r)
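In the same spirit, here is a sketch of a vectorized replacement for the evaluation step; this is my own rewrite rather than part of the original answer (compute_z2_vec is an illustrative name), and it takes (pts1, pts2, coeffs) directly while reproducing the bounding-box/NaN behaviour of compute_z2 above:

import numpy as np
import scipy.spatial.distance
import scipy.special

def compute_z2_vec(pts1, pts2, coeffs):
    # affine part: a0 + a1*x + a2*y (last three coefficients)
    z2 = coeffs[-3, 0] + pts2 @ coeffs[-2:, 0]
    # radial part: sum_i w_i * r^2 * log(r); xlogy returns 0 where r == 0
    r = scipy.spatial.distance.cdist(pts2, pts1)
    z2 += scipy.special.xlogy(r**2, r) @ coeffs[:-3, 0]
    # match the original behaviour: NaN outside the bounding box of pts1
    lo, hi = pts1.min(axis=0), pts1.max(axis=0)
    outside = np.any((pts2 < lo) | (pts2 > hi), axis=1)
    z2[outside] = np.nan
    return z2.reshape(-1, 1)

Note that scipy.special.xlogy handles the r == 0 case automatically, which replaces the explicit if r2 != 0 check in the loop version.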
It means you will get horrible ridges running through the surface, resulting in a sub-optimal model fit; your model is experiencing the same effect, although plotted in 2D.
