FuncAnimation to teach about SVD - image

I have a question about animations. I would like to animate what it looks like to take an increasingly higher rank approximation of an image using the Singular Value Decomposition to show students an example of image compression. The animate function updates the array within im to a higher and higher rank approximation of the image X using the function X_reduced.
import numpy as np
import matplotlib.pyplot as plt
from numpy.linalg import svd
from matplotlib.animation import FuncAnimation, ArtistAnimation

# Read image and convert to grayscale by averaging the colour channels
image_path = 'images/bellie.jpg'
img = plt.imread(image_path)
X = np.asarray(img).mean(axis=2)

# Do economy svd (note: numpy returns the third factor already transposed, i.e. Vh)
U, S, V = svd(X, full_matrices=False)

def X_reduced(U, S, V, r):
    # Compute rank = r approximation: U_r * diag(S_r) * V_r
    return U[:, :r] @ np.diag(S)[:r, :r] @ V[:r, :]

# Create figure and ax objects
fig, ax = plt.subplots()
X_0 = np.random.rand(*X.shape)
im = ax.imshow(X_0, cmap='gray')

def init():
    im.set_data(np.random.rand(*X.shape))
    return [im]

def animate(r):
    im.set_array(X_reduced(U, S, V, r))
    return [im]

anim = FuncAnimation(fig, animate, init_func=init, frames=100, interval=20)
plt.rcParams['animation.html'] = 'jshtml'
anim
Here is the output. It's animating something, but not what I want. If I just plot a single figure with a few subplots for different ranks it does work. The result is here.
fig, axs = plt.subplots(2, 2)
ranks = [10, 25, 100, 200]
for i, ax in enumerate(axs.reshape(-1)):
    ax.imshow(X_reduced(U, S, V, ranks[i]))
Is there anyone who could help me out? The students I am going to teach about the SVD and I will be very grateful!
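One thing that may be worth checking (a guess on my part, not something established in the post above): imshow fixes its colour limits from the random 0-1 init image, so the 0-255 grayscale reconstructions could simply be clipped to white. A minimal sketch of rescaling the limits inside animate, with the frame index shifted so the first frame is a rank-1 rather than a rank-0 approximation:
def animate(r):
    # hedged sketch: reset the colour limits each frame so the 0-255 data
    # is not clipped to the 0-1 range of the random init image
    Xr = X_reduced(U, S, V, r + 1)
    im.set_array(Xr)
    im.set_clim(Xr.min(), Xr.max())
    return [im]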

Related

From numpy to GeoTIFF with a georeferenced box

I created a numpy array by calculating the density of dwellings within an area through the following code:
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import cm
from scipy.ndimage import gaussian_filter

def myplot(x, y, z, s, bins=10000):
    heatmap, xedges, yedges = np.histogram2d(x, y, bins=bins, weights=z)
    heatmap = gaussian_filter(heatmap, sigma=s)
    extent = [xedges[0], xedges[-1], yedges[0], yedges[-1]]
    return heatmap.T, extent

fig, axs = plt.subplots(2, 2)

# Generate some test data
x = buildings["x"]
y = buildings["y"]
weights = buildings["Area"]
sigmas = [0, 16, 32, 64]

for ax, s in zip(axs.flatten(), sigmas):
    if s == 0:
        ax.plot(x, y, weights, 'k.', markersize=5)
        ax.set_title("Scatter plot")
    else:
        img, extent = myplot(x, y, weights, s)
        ax.imshow(img, extent=extent, origin='lower', cmap=cm.jet)
        ax.set_title(r"Smoothing with $\sigma$ = %d" % s)
    plt.savefig('export_' + str(s) + '.png', dpi=150, bbox_inches='tight')
plt.show()
This is the result and works fine:
Now I need to save it as a GeoTIFF, and I know the extreme coordinates of the box corners. I tried to do that using the following code:
import rasterio as rio
from rasterio.transform import from_bounds

# create a georeferenced box
transform = from_bounds(extent[0], extent[1], extent[2], extent[3], 10000, 10000)
# save the georeferenced tif
with rio.open('data.tif', 'w', driver='GTiff', height=10000, width=10000, count=1,
              dtype='float64', nodata=0, crs=32632, transform=transform) as dst:
    dst.write(img, 1)
The problem is that the result is transposed and not in the right position. Could you help me find the solution?
I tried to adapt the code but it did not work.
You should simply use numpy.transpose on your array - it is a very fast operation that does not copy the array.
GDAL uses traditional C style raster coordinates. In numpy an array with shape (x, y) is x lines of y pixels, while in GDAL it is the other way around.
# save the georeferenced tif
with rio.open('data.tif', 'w', driver='GTiff', height=10000, width=10000, count=1,
              dtype='float64', nodata=0, crs=32632, transform=transform) as dst:
    dst.write(img.transpose(), 1)

SciPy: von Mises distribution on a half circle?

I'm trying to figure out the best way to define a von-Mises distribution wrapped on a half-circle (I'm using it to draw directionless lines at different concentrations). I'm currently using SciPy's vonmises.rvs(). Essentially, I want to be able to put in, say, a mean orientation of pi/2 and have the distribution truncated to no more than pi/2 either side.
I could use a truncated normal distribution, but I will lose the wrapping of the von-mises (say if I want a mean orientation of 0)
I've seen this done in research papers looking at mapping fibre orientations, but I can't figure out how to implement it (in python). I'm a bit stuck on where to start.
If my von Mises distribution is defined as (from numpy.random.vonmises):
np.exp(kappa*np.cos(x-mu))/(2*np.pi*i0(kappa))
with:
mu, kappa = 0, 4.0
x = np.linspace(-np.pi, np.pi, num=51)
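For reference, a minimal sketch of evaluating that full-circle density with the values above (taking i0 from scipy.special is my assumption; numpy.i0 would work as well):
import numpy as np
from scipy.special import i0  # modified Bessel function of the first kind, order 0

mu, kappa = 0, 4.0
x = np.linspace(-np.pi, np.pi, num=51)
pdf = np.exp(kappa * np.cos(x - mu)) / (2 * np.pi * i0(kappa))
print(np.trapz(pdf, x))  # integrates to approximately 1 over the full circle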
How would I alter it to use a wrap around a half-circle instead?
Could anyone with some experience with this offer some guidance?
It is useful to have direct numerical inverse CDF sampling; it should work great for a distribution with a bounded domain. Here is a code sample, building PDF and CDF tables and sampling using the inverse CDF method. It could be optimized and vectorized, of course.
Code, Python 3.8, x64 Windows 10
import numpy as np
import matplotlib.pyplot as plt
import scipy.integrate as integrate

def PDF(x, μ, κ):
    return np.exp(κ*np.cos(x - μ))

N = 201
μ = np.pi/2.0
κ = 4.0
xlo = μ - np.pi/2.0
xhi = μ + np.pi/2.0

# PDF normalization
I = integrate.quad(lambda x: PDF(x, μ, κ), xlo, xhi)
print(I)
I = I[0]

x = np.linspace(xlo, xhi, N, dtype=np.float64)
step = (xhi-xlo)/(N-1)
p = PDF(x, μ, κ)/I  # PDF table

# making CDF table
c = np.zeros(N, dtype=np.float64)
for k in range(1, N):
    c[k] = integrate.quad(lambda x: PDF(x, μ, κ), xlo, x[k])[0] / I
c[N-1] = 1.0  # so random() in [0...1) range would work right

#%%
# sampling from tabular CDF via inverse CDF method
def InvCDFsample(c, x, gen):
    r = gen.random()
    i = np.searchsorted(c, r, side='right')
    q = (r - c[i-1]) / (c[i] - c[i-1])
    return (1.0 - q) * x[i-1] + q * x[i]

# sampling test
RNG = np.random.default_rng()
s = np.empty(20000)
for k in range(0, len(s)):
    s[k] = InvCDFsample(c, x, RNG)

# plotting PDF, CDF and sampling density
plt.plot(x, p, 'b^')  # PDF
plt.plot(x, c, 'r.')  # CDF
n, bins, patches = plt.hist(s, x, density=True, color='green', alpha=0.7)
plt.show()
And here is the graph with the PDF, CDF and sampling histogram.
You could discard the values outside the desired range via numpy's filtering (theta=theta[(theta>=0)&(theta<=np.pi)], shortening the array of samples). So, you could first increase the number of generated samples, then filter, and then take a subarray of the desired size.
Or you could add/subtract pi to put them all into that range (via theta = np.where(theta < 0, theta + np.pi, np.where(theta > np.pi, theta - np.pi, theta))). As noted by @SeverinPappadeux, such a change alters the distribution and is probably not desired.
import matplotlib.pyplot as plt
from matplotlib.collections import LineCollection
import numpy as np
from scipy.stats import vonmises

mu = np.pi / 2
kappa = 4
orig_theta = vonmises.rvs(kappa, loc=mu, size=(10000))
fig, axes = plt.subplots(ncols=2, sharex=True, sharey=True, figsize=(12, 4))
for ax in axes:
    theta = orig_theta.copy()
    if ax == axes[0]:
        ax.set_title(f"$Von Mises, \\mu={mu:.2f}, \\kappa={kappa}$")
    else:
        theta = theta[(theta >= 0) & (theta <= np.pi)]
        print(len(theta))
        ax.set_title(f"$Von Mises, angles\\ filtered\\ ({100 * len(theta) / (len(orig_theta)):.2f}\\ \\%)$")
    segs = np.zeros((len(theta), 2, 2))
    segs[:, 1, 0] = np.cos(theta)
    segs[:, 1, 1] = np.sin(theta)
    line_segments = LineCollection(segs, linewidths=.1, colors='blue', alpha=0.5)
    ax.add_collection(line_segments)
    ax.autoscale()
    ax.set_aspect('equal')
plt.show()

How do I perform a curve fit with an array of points while making it touch a specific point in that array?

I need help with curve fitting a given set of points. The points form a parabola and I need to find the peak point of the result. The issue is that when I do a curve fit, it sometimes doesn't touch the max y-coordinate even if the actual point is given in the input array.
Following is the code snippet. Here 1.88 is the actual peak y-coordinate (13.05, 1.88). But the graph generated by the code does not touch the point due to curve fitting. So is there a way to fit the curve while making sure that it touches the max point given in the input array?
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit, minimize_scalar

fig = plt.gcf()
#fig.set_size_inches(18.5, 10.5)

x = [4.59,9.02,13.05,18.47,20.3]
y = [1.7,1.84,1.88,1.7,1.64]

def f(x, p1, p2, p3):
    return p3*(p1/((x-p2)**2 + (p1/2)**2))

plt.plot(x,y,"ro")
popt, pcov = curve_fit(f, x, y)

# find the peak
fm = lambda x: -f(x, *popt)
r = minimize_scalar(fm, bounds=(1, 5))
print( "maximum:", r["x"], f(r["x"], *popt) ) #maximum: 2.99846874275 18.3928199902

plt.text(1,1.9,'maximum '+str(round(r["x"],2))+'( #'+str(round(f(r["x"], *popt),2)) + ' )')
x_curve = np.linspace(min(x), max(x), 50)
plt.plot(x_curve, f(x_curve, *popt))
plt.plot(r['x'], f(r['x'], *popt), 'ko')
plt.show()
Here is a graphical code example using your equation with weighted fitting, where I have made the max point larger to more easily see the effect of the weighting. In non-weighted curve fitting, all weights are implicitly 1.0 as all data points have equal weight. Scipy's curve_fit routine uses weights in the form of uncertainties, so that giving a point a very small uncertainty (which I have done) is like giving the point a very large weight. This technique can be used to make a fit pass arbitrarily close to any single data point by any software that can perform weighted fitting.
import numpy, scipy, matplotlib
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit

x = [4.59,9.02,13.05,18.47,20.3]
y = [1.7,1.84,2.0,1.7,1.64]

# note the single very small uncertainty - try making this value 1.0
uncertainties = numpy.array([1.0, 1.0, 1.0E-6, 1.0, 1.0])

# rename data to use previous example
xData = numpy.array(x)
yData = numpy.array(y)

def func(x, p1, p2, p3):
    return p3*(p1/((x-p2)**2 + (p1/2)**2))

# these are the same as the scipy defaults
initialParameters = numpy.array([1.0, 1.0, 1.0])

# curve fit the test data, first without uncertainties to
# get us closer to initial starting parameters
ssqParameters, pcov = curve_fit(func, xData, yData, p0 = initialParameters)

# now that we have better starting parameters, use uncertainties
fittedParameters, pcov = curve_fit(func, xData, yData, p0 = ssqParameters, sigma=uncertainties, absolute_sigma=True)

modelPredictions = func(xData, *fittedParameters)
absError = modelPredictions - yData

SE = numpy.square(absError) # squared errors
MSE = numpy.mean(SE) # mean squared errors
RMSE = numpy.sqrt(MSE) # Root Mean Squared Error, RMSE
Rsquared = 1.0 - (numpy.var(absError) / numpy.var(yData))

print('Parameters:', fittedParameters)
print('RMSE:', RMSE)
print('R-squared:', Rsquared)
print()

##########################################################
# graphics output section
def ModelAndScatterPlot(graphWidth, graphHeight):
    f = plt.figure(figsize=(graphWidth/100.0, graphHeight/100.0), dpi=100)
    axes = f.add_subplot(111)

    # first the raw data as a scatter plot
    axes.plot(xData, yData, 'D')

    # create data for the fitted equation plot
    xModel = numpy.linspace(min(xData), max(xData))
    yModel = func(xModel, *fittedParameters)

    # now the model as a line plot
    axes.plot(xModel, yModel)

    axes.set_xlabel('X Data') # X axis data label
    axes.set_ylabel('Y Data') # Y axis data label

    plt.show()
    plt.close('all') # clean up after using pyplot

graphWidth = 800
graphHeight = 600
ModelAndScatterPlot(graphWidth, graphHeight)

How to break y-axis using seaborn? [duplicate]

I'm trying to create a plot using pyplot that has a discontinuous x-axis. The usual way this is drawn is that the axis will have something like this:
(values)----//----(later values)
where the // indicates that you're skipping everything between (values) and (later values).
I haven't been able to find any examples of this, so I'm wondering if it's even possible. I know you can join data over a discontinuity for, eg, financial data, but I'd like to make the jump in the axis more explicit. At the moment I'm just using subplots but I'd really like to have everything end up on the same graph in the end.
Paul's answer is a perfectly fine method of doing this.
However, if you don't want to make a custom transform, you can just use two subplots to create the same effect.
Rather than put together an example from scratch, there's an excellent example of this written by Paul Ivanov in the matplotlib examples (It's only in the current git tip, as it was only committed a few months ago. It's not on the webpage yet.).
This is just a simple modification of this example to have a discontinuous x-axis instead of the y-axis. (Which is why I'm making this post a CW)
Basically, you just do something like this:
import matplotlib.pylab as plt
import numpy as np
# If you're not familiar with np.r_, don't worry too much about this. It's just
# a series with points from 0 to 1 spaced at 0.1, and 9 to 10 with the same spacing.
x = np.r_[0:1:0.1, 9:10:0.1]
y = np.sin(x)
fig,(ax,ax2) = plt.subplots(1, 2, sharey=True)
# plot the same data on both axes
ax.plot(x, y, 'bo')
ax2.plot(x, y, 'bo')
# zoom-in / limit the view to different portions of the data
ax.set_xlim(0,1) # most of the data
ax2.set_xlim(9,10) # outliers only
# hide the spines between ax and ax2
ax.spines['right'].set_visible(False)
ax2.spines['left'].set_visible(False)
ax.yaxis.tick_left()
ax.tick_params(labeltop=False) # don't put tick labels at the top
ax2.yaxis.tick_right()
# Make the spacing between the two axes a bit smaller
plt.subplots_adjust(wspace=0.15)
plt.show()
To add the broken axis lines // effect, we can do this (again, modified from Paul Ivanov's example):
import matplotlib.pylab as plt
import numpy as np
# If you're not familiar with np.r_, don't worry too much about this. It's just
# a series with points from 0 to 1 spaced at 0.1, and 9 to 10 with the same spacing.
x = np.r_[0:1:0.1, 9:10:0.1]
y = np.sin(x)
fig,(ax,ax2) = plt.subplots(1, 2, sharey=True)
# plot the same data on both axes
ax.plot(x, y, 'bo')
ax2.plot(x, y, 'bo')
# zoom-in / limit the view to different portions of the data
ax.set_xlim(0,1) # most of the data
ax2.set_xlim(9,10) # outliers only
# hide the spines between ax and ax2
ax.spines['right'].set_visible(False)
ax2.spines['left'].set_visible(False)
ax.yaxis.tick_left()
ax.tick_params(labeltop=False) # don't put tick labels at the top
ax2.yaxis.tick_right()
# Make the spacing between the two axes a bit smaller
plt.subplots_adjust(wspace=0.15)
# This looks pretty good, and was fairly painless, but you can get that
# cut-out diagonal lines look with just a bit more work. The important
# thing to know here is that in axes coordinates, which are always
# between 0-1, spine endpoints are at these locations (0,0), (0,1),
# (1,0), and (1,1). Thus, we just need to put the diagonals in the
# appropriate corners of each of our axes, and so long as we use the
# right transform and disable clipping.
d = .015 # how big to make the diagonal lines in axes coordinates
# arguments to pass plot, just so we don't keep repeating them
kwargs = dict(transform=ax.transAxes, color='k', clip_on=False)
ax.plot((1-d,1+d),(-d,+d), **kwargs) # top-left diagonal
ax.plot((1-d,1+d),(1-d,1+d), **kwargs) # bottom-left diagonal
kwargs.update(transform=ax2.transAxes) # switch to the bottom axes
ax2.plot((-d,d),(-d,+d), **kwargs) # top-right diagonal
ax2.plot((-d,d),(1-d,1+d), **kwargs) # bottom-right diagonal
# What's cool about this is that now if we vary the distance between
# ax and ax2 via f.subplots_adjust(hspace=...) or plt.subplot_tool(),
# the diagonal lines will move accordingly, and stay right at the tips
# of the spines they are 'breaking'
plt.show()
I see many suggestions for this feature but no indication that it's been implemented. Here is a workable solution for the time-being. It applies a step-function transform to the x-axis. It's a lot of code, but it's fairly simple since most of it is boilerplate custom scale stuff. I have not added any graphics to indicate the location of the break, since that is a matter of style. Good luck finishing the job.
from matplotlib import pyplot as plt
from matplotlib import scale as mscale
from matplotlib import transforms as mtransforms
import numpy as np

def CustomScaleFactory(l, u):
    class CustomScale(mscale.ScaleBase):
        name = 'custom'

        def __init__(self, axis, **kwargs):
            mscale.ScaleBase.__init__(self)
            self.thresh = None #thresh

        def get_transform(self):
            return self.CustomTransform(self.thresh)

        def set_default_locators_and_formatters(self, axis):
            pass

        class CustomTransform(mtransforms.Transform):
            input_dims = 1
            output_dims = 1
            is_separable = True
            lower = l
            upper = u

            def __init__(self, thresh):
                mtransforms.Transform.__init__(self)
                self.thresh = thresh

            def transform(self, a):
                aa = a.copy()
                aa[a>self.lower] = a[a>self.lower]-(self.upper-self.lower)
                aa[(a>self.lower)&(a<self.upper)] = self.lower
                return aa

            def inverted(self):
                return CustomScale.InvertedCustomTransform(self.thresh)

        class InvertedCustomTransform(mtransforms.Transform):
            input_dims = 1
            output_dims = 1
            is_separable = True
            lower = l
            upper = u

            def __init__(self, thresh):
                mtransforms.Transform.__init__(self)
                self.thresh = thresh

            def transform(self, a):
                aa = a.copy()
                aa[a>self.lower] = a[a>self.lower]+(self.upper-self.lower)
                return aa

            def inverted(self):
                return CustomScale.CustomTransform(self.thresh)

    return CustomScale

mscale.register_scale(CustomScaleFactory(1.12, 8.88))
x = np.concatenate((np.linspace(0,1,10), np.linspace(9,10,10)))
xticks = np.concatenate((np.linspace(0,1,6), np.linspace(9,10,6)))
y = np.sin(x)
plt.plot(x, y, '.')
ax = plt.gca()
ax.set_xscale('custom')
ax.set_xticks(xticks)
plt.show()
Check the brokenaxes package:
import matplotlib.pyplot as plt
from brokenaxes import brokenaxes
import numpy as np

fig = plt.figure(figsize=(5,2))
bax = brokenaxes(
    xlims=((0, .1), (.4, .7)),
    ylims=((-1, .7), (.79, 1)),
    hspace=.05
)
x = np.linspace(0, 1, 100)
bax.plot(x, np.sin(10 * x), label='sin')
bax.plot(x, np.cos(10 * x), label='cos')
bax.legend(loc=3)
bax.set_xlabel('time')
bax.set_ylabel('value')
A very simple hack is to scatter plot rectangles over the axes' spines and draw the "//" as text at that position. Worked like a charm for me:
import matplotlib.pyplot as plt

# FAKE BROKEN AXES
# plot a white rectangle on the x-axis-spine to "break" it
xpos = 10  # x position of the "break"
ypos = plt.gca().get_ylim()[0]  # y position of the "break"
plt.scatter(xpos, ypos, color='white', marker='s', s=80, clip_on=False, zorder=100)
# draw "//" on the same place as text (label_size is assumed to be defined elsewhere)
plt.text(xpos, ypos - 0.125, r'//', fontsize=label_size, zorder=101,
         horizontalalignment='center', verticalalignment='center')
Example Plot:
For those interested, I've expanded upon @Paul's answer and added it to the matplotlib wrapper proplot. It can do axis "jumps", "speedups", and "slowdowns".
There is no way currently to add "crosses" that indicate the discrete jump like in Joe's answer, but I plan to add this in the future. I also plan to add a default "tick locator" that sets sensible default tick locations depending on the CutoffScale arguments.
Addressing Frederick Nord's question about how to enable parallel orientation of the diagonal "breaking" lines when using a gridspec with ratios unequal to 1:1, the following changes, based on the proposals of Paul Ivanov and Joe Kington, may be helpful. The width ratio can be varied using the variables n and m.
import matplotlib.pylab as plt
import numpy as np
import matplotlib.gridspec as gridspec
x = np.r_[0:1:0.1, 9:10:0.1]
y = np.sin(x)
n = 5; m = 1;
gs = gridspec.GridSpec(1,2, width_ratios = [n,m])
plt.figure(figsize=(10,8))
ax = plt.subplot(gs[0,0])
ax2 = plt.subplot(gs[0,1], sharey = ax)
plt.setp(ax2.get_yticklabels(), visible=False)
plt.subplots_adjust(wspace = 0.1)
ax.plot(x, y, 'bo')
ax2.plot(x, y, 'bo')
ax.set_xlim(0,1)
ax2.set_xlim(10,8)
# hide the spines between ax and ax2
ax.spines['right'].set_visible(False)
ax2.spines['left'].set_visible(False)
ax.yaxis.tick_left()
ax.tick_params(labeltop=False) # don't put tick labels at the top
ax2.yaxis.tick_right()
d = .015 # how big to make the diagonal lines in axes coordinates
# arguments to pass plot, just so we don't keep repeating them
kwargs = dict(transform=ax.transAxes, color='k', clip_on=False)
on = (n+m)/n; om = (n+m)/m;
ax.plot((1-d*on,1+d*on),(-d,d), **kwargs) # bottom-left diagonal
ax.plot((1-d*on,1+d*on),(1-d,1+d), **kwargs) # top-left diagonal
kwargs.update(transform=ax2.transAxes) # switch to the bottom axes
ax2.plot((-d*om,d*om),(-d,d), **kwargs) # bottom-right diagonal
ax2.plot((-d*om,d*om),(1-d,1+d), **kwargs) # top-right diagonal
plt.show()
This is a hacky but pretty solution for x-axis breaks.
The solution is based on https://matplotlib.org/stable/gallery/subplots_axes_and_figures/broken_axis.html, which gets rid of the problem of positioning the break above the spine, solved by "How can I plot points so they appear over top of the spines with matplotlib?".
from matplotlib.patches import Rectangle
import matplotlib.pyplot as plt
import numpy as np

def axis_break(axis, xpos=[0.1, 0.125], slant=1.5):
    d = slant  # proportion of vertical to horizontal extent of the slanted line
    anchor = (xpos[0], -1)
    w = xpos[1] - xpos[0]
    h = 1
    kwargs = dict(marker=[(-1, -d), (1, d)], markersize=12, zorder=3,
                  linestyle="none", color='k', mec='k', mew=1, clip_on=False)
    axis.add_patch(Rectangle(
        anchor, w, h, fill=True, color="white",
        transform=axis.transAxes, clip_on=False, zorder=3)
    )
    axis.plot(xpos, [0, 0], transform=axis.transAxes, **kwargs)

fig, ax = plt.subplots(1,1)
plt.plot(np.arange(10))
axis_break(ax, xpos=[0.1, 0.12], slant=1.5)
axis_break(ax, xpos=[0.3, 0.31], slant=-10)
If you want to replace an axis label, this would do the trick:
from matplotlib import ticker

def replace_pos_with_label(fig, pos, label, axis):
    fig.canvas.draw()  # this is needed to set up the x-ticks
    labs = axis.get_xticklabels()

    labels = []
    locs = []
    for text in labs:
        x = text._x
        lab = text._text
        if x == pos:
            lab = label
        labels.append(lab)
        locs.append(x)

    axis.xaxis.set_major_locator(ticker.FixedLocator(locs))
    axis.set_xticklabels(labels)

fig, ax = plt.subplots(1,1)
plt.plot(np.arange(10))
replace_pos_with_label(fig, 0, "-10", axis=ax)
replace_pos_with_label(fig, 6, "$10^{4}$", axis=ax)
axis_break(ax, xpos=[0.1, 0.12], slant=2)

Results from my thin plate spline interpolation implementation are dependent on the independent variables

I implemented the thin plate spline algorithm (see also this description) in order to interpolate scattered data using Python.
My algorithm seems to work correctly when the bounding box of the initial scattered data has an aspect ratio close to 1. However, scaling one of the coordinates of the data points changes the interpolation result. I created a minimal working example that is representative of what I am trying to accomplish. Below are two plots showing the results of the interpolation of 50 random points.
First, the interpolation of z = x^2 on the domain x = [0, 3], y = [0, 120]:
As you can see, the interpolation fails. Now, executing the same process but after scaling the x values by a factor of 40, I get:
This time, the result looks better. Choosing a slightly different scaling factor would have resulted in a slightly different interpolation. This shows that something is wrong in my algorithm but I can't find what exactly. Here is the algorithm:
import numpy as np
import numba as nb

# pts1 = Mx2 matrix (original coordinates)
# z1 = Mx1 column vector (original values)
# pts2 = Nx2 matrix (interpolation coordinates)

def gen_K(n, pts1):
    K = np.zeros((n,n))
    for i in range(0,n):
        for j in range(0,n):
            if i != j:
                r = ( (pts1[i,0] - pts1[j,0])**2.0 + (pts1[i,1] - pts1[j,1])**2.0 )**0.5
                K[i,j] = r**2.0*np.log(r)
    return K

def compute_z2(m, n, pts1, pts2, coeffs):
    z2 = np.zeros((m,1))
    x_min = np.min(pts1[:,0])
    x_max = np.max(pts1[:,0])
    y_min = np.min(pts1[:,1])
    y_max = np.max(pts1[:,1])
    for k in range(0,m):
        pt = pts2[k,:]
        # If point is located inside bounding box of pts1
        if (pt[0] >= x_min and pt[0] <= x_max and pt[1] >= y_min and pt[1] <= y_max):
            z2[k,0] = coeffs[-3,0] + coeffs[-2,0]*pts2[k,0] + coeffs[-1,0]*pts2[k,1]
            for i in range(0,n):
                r2 = ( (pts1[i,0] - pts2[k,0])**2.0 + (pts1[i,1] - pts2[k,1])**2.0 )**0.5
                if r2 != 0:
                    z2[k,0] += coeffs[i,0]*( r2**2.0*np.log(r2) )
        else:
            z2[k,0] = np.nan
    return z2

gen_K_nb = nb.jit(nb.float64[:,:](nb.int64, nb.float64[:,:]), nopython = True)(gen_K)
compute_z2_nb = nb.jit(nb.float64[:,:](nb.int64, nb.int64, nb.float64[:,:], nb.float64[:,:], nb.float64[:,:]), nopython = True)(compute_z2)

def TPS(pts1, z1, pts2, factor):
    n, m = pts1.shape[0], pts2.shape[0]
    P = np.hstack((np.ones((n,1)),pts1))
    Y = np.vstack((z1, np.zeros((3,1))))
    K = gen_K_nb(n, pts1)
    K += factor*np.identity(n)
    L = np.zeros((n+3,n+3))
    L[0:n, 0:n] = K
    L[0:n, n:n+3] = P
    L[n:n+3, 0:n] = P.T
    L_inv = np.linalg.inv(L)
    coeffs = L_inv.dot(Y)
    return compute_z2_nb(m, n, pts1, pts2, coeffs)
Finally, here is the code snippet I used to create the two plots:
import matplotlib.pyplot as plt
import numpy as np

N = 50 # Number of random points
pts = np.random.rand(N,2)
pts[:,0] *= 3.0   # initial x values
pts[:,1] *= 120.0 # initial y values
z1 = (pts[:,0])**2.0

for scale in [1.0, 40.0]:
    pts1 = pts.copy()
    pts1[:,0] *= scale
    x2 = np.linspace(np.min(pts1[:,0]), np.max(pts1[:,0]), 40)
    y2 = np.linspace(np.min(pts1[:,1]), np.max(pts1[:,1]), 40)
    x2, y2 = np.meshgrid(x2, y2)
    pts2 = np.vstack((x2.flatten(), y2.flatten())).T
    z2 = TPS(pts1, z1.reshape(z1.shape[0], 1), pts2, 0.0)

    # Display
    fig = plt.figure(figsize=(4,3))
    ax = fig.add_subplot(111)
    C = ax.contourf(x2, y2, z2.reshape(x2.shape), np.linspace(0,9,10), extend='both')
    ax.plot(pts1[:,0], pts1[:,1], 'ok')
    ax.set_xlabel('x')
    ax.set_ylabel('y')
    plt.colorbar(C, extendfrac=0)
    plt.tight_layout()
    plt.show()
Thin plate spline is scale invariant, which means that if you scale x and y by the same factor, the result should be the same. However, if you scale x and y differently, then the result will be different. This is a common characteristic among radial basis functions; some radial basis functions are not even scale invariant.
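For instance, scaling both coordinates by the same factor should give back the same interpolated values. Here is a minimal check reusing the TPS function and the pts1, pts2 and z1 arrays from the question (the factor 7 is an arbitrary choice of mine):
import numpy as np

s = 7.0  # uniform scaling of x and y should leave the TPS interpolant unchanged
z_ref = TPS(pts1, z1.reshape(-1, 1), pts2, 0.0)
z_scaled = TPS(pts1 * s, z1.reshape(-1, 1), pts2 * s, 0.0)
print(np.allclose(z_ref, z_scaled, equal_nan=True))  # expected: True, up to round-off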
When you say it "fails", what do you mean? The big question is: does it still exactly interpolate at the construction points? Assuming your code is correct and you do not have ill-conditioning, it should, in which case it does not fail.
What I think is happening is that the addition of the scale is making the behavior in the x direction more dominant so you do not see the wiggles that come naturally from the interpolation.
As an aside, you can greatly speed up your code without using Numba by vectorizing.
import numpy as np
import scipy.spatial.distance
import scipy.special

def gen_K(n, pts1):
    # No need for n but kept to maintain compatibility
    pts1 = np.atleast_2d(pts1)
    r = scipy.spatial.distance.cdist(pts1, pts1)
    return scipy.special.xlogy(r**2, r)
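As a quick sanity check (a sketch; it assumes both the Numba-compiled gen_K_nb from the question and the vectorized gen_K above are defined), the two kernel matrices should agree:
import numpy as np

pts_test = np.random.rand(20, 2)
K_vec = gen_K(20, pts_test)        # vectorized version above
K_loop = gen_K_nb(20, pts_test)    # Numba-compiled double loop from the question
print(np.allclose(K_vec, K_loop))  # expected: True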
It means you will get horrible ridges running through the surface, resulting in a sub-optimal model fit. Read the caption below the images; your model is experiencing the same effect, although plotted in 2D.
