I am generating 2D arrays on log-spaced axes (for instance, the x pixel coordinates are generated using logspace(log10(0.95), log10(2.08), n)).
I want to display the image using a plain old imshow, in its native resolution and scaling (I don't need to stretch it; the data itself is already log scaled), but I want to add ticks, labels, lines that are in the correct place on the log axes. How do I do this?
Ideally I could just use commands like axvline(1.5) and the line would be in the correct place (58% from the left), but if the only way is to manually translate between log-scale coordinates and image coordinates, that's OK, too.
For linear axes, using extent= in the call to imshow does what I want, but I don't see a way to do the same thing with a log axis.
Example:
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import LogNorm

x = np.logspace(np.log10(10), np.log10(1000), 5)
plt.imshow(np.vstack((x, x)), extent=[10, 1000, 0, 100],
           cmap='gray', norm=LogNorm(), interpolation='nearest')
plt.axvline(100, color='red')
This example does not work, because extent= only applies to linear scales, so when you do axvline at 100, it does not appear in the center. I'd like the x axis to show 10, 100, 1000, and axvline(100) to put a line in the center at the 100 point, while the pixels remain equally spaced.
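(For what it's worth, the manual translation I mentioned is just log-linear interpolation; a minimal sketch of the arithmetic, using the axis range from my first example:)

import numpy as np

# Fractional position of a data value x along a log axis spanning [lo, hi].
lo, hi = 0.95, 2.08
x = 1.5
frac = (np.log10(x) - np.log10(lo)) / (np.log10(hi) - np.log10(lo))
print(frac)  # ~0.58, i.e. 58% of the way from the left edge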
In my view, it is better to use pcolor with the regular (non-converted) x and y values. pcolor gives you more flexibility, and regular x and y axes are less confusing.
import matplotlib.pyplot as plt
import numpy as np
from matplotlib.colors import LogNorm
from matplotlib.ticker import LogFormatterMathtext

# Cell edges: one more edge than cells in each direction
x = np.logspace(1, 3, 6)
y = np.logspace(0, 2, 3)
X, Y = np.meshgrid(x, y)
# Cell values: shape (2, 5), matching the (3, 6) edge grid
z = np.logspace(np.log10(10), np.log10(1000), 5)
Z = np.vstack((z, z))

im = plt.pcolor(X, Y, Z, cmap='gray', norm=LogNorm())
plt.axvline(100, color='red')
plt.xscale('log')
plt.yscale('log')
plt.colorbar(im, orientation='horizontal', format=LogFormatterMathtext())
plt.show()
As pcolor is slow, a faster solution is to use pcolormesh instead.
im = plt.pcolormesh(X, Y, Z, cmap='gray', norm=LogNorm())
Actually, it works fine. I'm confused.
Previously I was getting errors about "Images are not supported on non-linear axes", which is why I asked this question. But now when I try it, it works:
import matplotlib.pyplot as plt
import numpy as np

z = np.linspace(0, 1, 4)
Z = np.vstack((z, z))
plt.imshow(Z, extent=[10, 1000, 0, 1], cmap='gray')
plt.xscale('log')  # log-scale the x-axis after imshow; extent stays [10, 1000]
plt.axvline(100, color='red')
plt.show()
This is better than pcolor() and pcolormesh() because it's not insanely slow, and it's interpolated nicely without misleading artifacts when the image is not shown at native resolution.
To display imshow with a logarithmic abscissa (x-axis), set the scale on the axes object:
ax = fig.add_subplot(nrow, ncol, i+1)  # fig, nrow, ncol, i defined elsewhere
ax.set_xscale('log')
Related
An exciting animation was posted on Twitter recently: https://twitter.com/thomas_rackow/status/1392509885883944960. One of the authors explained how a frame is created in this Jupyter Notebook: https://nbviewer.jupyter.org/github/koldunovn/FESOM_SST_shaded_by_U/blob/main/FESOM_SST_shaded_by_U.ipynb
Regarding the simple code shown in this notebook, my question is: when we call imshow twice on the same ax:
ax.imshow(np.flipud(sst.sst.values), cmap=cm.RdBu_r, vmin=12, vmax=24)
ax.imshow(np.flipud(u.u_surf.values), alpha=0.3, cmap=cm.gray, vmin=-.3, vmax=0.3)
what operations does matplotlib perform behind the scenes to produce the layered image?
I have worked with alpha blending in OpenCV-Python, but here it starts with two arrays of the same shape (1000, 1000), and via ax.imshow, called twice for the two arrays, it displays the resulting image. I'd like to know how this is possible. What arithmetic operations between the images are involved?
I searched the matplotlib github repository to understand what's going on, but I couldn't find something relevant.
I managed to illustrate that the two imshow calls boil down to alpha blending of the two images.
import numpy as np
import xarray as xr
import matplotlib.pyplot as plt
import matplotlib.cm as cm

sst = xr.open_dataset('GF_FESOM2_testdata/sst.nc')
u = xr.open_dataset('GF_FESOM2_testdata/u_surf.nc')
v = xr.open_dataset('GF_FESOM2_testdata/v_surf.nc')

# Define the heatmap from the SST data and extract the array representing it as an image:
fig1, ax1 = plt.subplots(1, 1, constrained_layout=True, figsize=(10, 10))
f1 = ax1.imshow(np.flipud(sst.sst.values), cmap=cm.RdBu_r, vmin=12, vmax=24)
ax1.axis('off')
arr1 = f1.make_image('notebook')[0]  # array representing the above image

# Repeat the same procedure for the u data set:
fig2, ax2 = plt.subplots(1, 1, constrained_layout=True, figsize=(10, 10))
f2 = ax2.imshow(np.flipud(u.u_surf.values), cmap=cm.gray, vmin=-0.3, vmax=0.3)
ax2.axis('off')
arr2 = f2.make_image('notebook')[0]

# Alpha blending of the two images amounts to a convex combination of the associated arrays
alpha1 = 1    # background image alpha
alpha2 = 0.3  # foreground image alpha
arr = np.asarray((alpha2*arr2 + alpha1*(1-alpha2)*arr1) / (alpha2 + alpha1*(1-alpha2)),
                 dtype=np.uint8)

fig, ax = plt.subplots(1, 1, constrained_layout=True, figsize=(10, 10))
ax.imshow(np.flipud(arr))
ax.axis('off')
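With alpha1 = 1, the expression above reduces to the convex combination alpha2*arr2 + (1 - alpha2)*arr1, which is the standard "over" compositing operator with an opaque background. A minimal self-contained sketch of the same arithmetic on synthetic arrays (no data files needed; shapes and values are arbitrary):

import numpy as np

rng = np.random.default_rng(0)
arr1 = rng.integers(0, 256, (4, 4, 3)).astype(float)  # background RGB, alpha1 = 1
arr2 = rng.integers(0, 256, (4, 4, 3)).astype(float)  # foreground RGB
alpha2 = 0.3                                           # foreground alpha

# "over" with an opaque background is a per-pixel convex combination:
blended = (alpha2*arr2 + (1 - alpha2)*arr1).astype(np.uint8)
print(blended[0, 0])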
I'm struggling a bit to figure out how to make sure all lines get recognized with the straight-line Hough transform from the scikit-image library.
https://scikit-image.org/docs/dev/auto_examples/edges/plot_line_hough_transform.html#id3
In the image below, all lines were recognized. But if I apply the same script to a similar image, one line gets ignored after applying the Hough transform. I have read the documentation, which says:
The Hough transform constructs a histogram array representing the parameter space (i.e., an M × N matrix, for M different values of the radius and N different values of θ). For each parameter combination, r and θ, we then find the number of non-zero pixels in the input image that would fall close to the corresponding line, and increment the array at position (r, θ) appropriately.
We can think of each non-zero pixel "voting" for potential line candidates. The local maxima in the resulting histogram indicate the parameters of the most probable lines.
So my conclusion is that the line was dropped because it didn't get enough "votes" (I have tested it with different precisions (0.05, 0.5, 0.1 degrees), but still got the same issue).
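(To make the voting concrete, here is a tiny sanity check, assuming hough_line's default angle bins: a single 5-pixel diagonal should collect one vote per pixel at its best (r, θ) bin.)

import numpy as np
from skimage.transform import hough_line

img = np.eye(5, dtype=bool)  # one diagonal line of 5 non-zero pixels
h, theta, d = hough_line(img)
print(h.max())  # expected: 5, i.e. one vote per pixel on the line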
Here is the code:
import numpy as np
from skimage.transform import hough_line, hough_line_peaks
import matplotlib.pyplot as plt
from matplotlib import cm
from skimage import io

# Load the test image
image = io.imread("my_image.png")

# Classic straight-line Hough transform, with a precision of 0.05 degrees
tested_angles = np.linspace(-np.pi / 2, np.pi / 2, 3600)
h, theta, d = hough_line(image, theta=tested_angles)

# Generating figure 1
fig, axes = plt.subplots(1, 3, figsize=(15, 6))
ax = axes.ravel()

ax[0].imshow(image, cmap=cm.gray)
ax[0].set_title('Input image')
ax[0].set_axis_off()

ax[1].imshow(np.log(1 + h),
             extent=[np.rad2deg(theta[-1]), np.rad2deg(theta[0]), d[-1], d[0]],
             cmap=cm.gray, aspect=1/1.5)
ax[1].set_title('Hough transform')
ax[1].set_xlabel('Angles (degrees)')
ax[1].set_ylabel('Distance (pixels)')
ax[1].axis('image')

ax[2].imshow(image, cmap=cm.gray)
origin = np.array((0, image.shape[1]))
for _, angle, dist in zip(*hough_line_peaks(h, theta, d)):
    y0, y1 = (dist - origin * np.cos(angle)) / np.sin(angle)
    ax[2].plot(origin, (y0, y1), '-r')
ax[2].set_xlim(origin)
ax[2].set_ylim((image.shape[0], 0))
ax[2].set_axis_off()
ax[2].set_title('Detected lines')

plt.tight_layout()
plt.show()
How should I "catch" this line too,
any suggestion?
Shorter lines have lower accumulator values in the Hough transform, so you have to adjust the threshold appropriately. If you know how many line segments you are looking for, you can set the threshold fairly low and then limit the number of peaks detected.
Here's a condensed version of the code above, with modified threshold, for reference:
import numpy as np
from skimage.transform import hough_line, hough_line_peaks
from skimage import io, color
import matplotlib.pyplot as plt
from matplotlib import cm

# Load the test image as grayscale
image = color.rgb2gray(io.imread("my_image.png"))

# Classic straight-line Hough transform, with a precision of 0.05 degrees
tested_angles = np.linspace(-np.pi / 2, np.pi / 2, 3600)
h, theta, d = hough_line(image, theta=tested_angles)
hpeaks = hough_line_peaks(h, theta, d, threshold=0.2 * h.max())

fig, ax = plt.subplots()
ax.imshow(image, cmap=cm.gray)
for _, angle, dist in zip(*hpeaks):
    (x0, y0) = dist * np.array([np.cos(angle), np.sin(angle)])
    ax.axline((x0, y0), slope=np.tan(angle + np.pi/2))
plt.show()
(Note: axline requires Matplotlib 3.3 or later.)
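Following the second suggestion above (set the threshold low and cap the number of peaks), something like this should work; num_peaks is a real hough_line_peaks parameter, but the value 4 is only an assumed example:

# Low threshold plus a cap on the number of detected lines;
# num_peaks=4 is an illustrative assumption.
hpeaks = hough_line_peaks(h, theta, d,
                          threshold=0.05 * h.max(),
                          num_peaks=4)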
Below are two scatter plots. The first one is for data points that have values of x and y, and I would like to know if there is a clustering algorithm that will automatically recognize that there are two clusters. They are concentric and not linearly separable. K-means is not right for several reasons. The other plot is similar but it has x, y and color values, and I would like to know what learning algorithm would be best at classifying or predicting the correct color from the values of x and y.
I got good classification results for this problem using the sklearn MLPClassifier algorithm. Here are the scatter and contour plots:
Detailed code at: https://www.linkedin.com/pulse/couple-scikit-learn-classifiers-peter-thorsteinson. The simplified code below shows how it works:
import math
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report, confusion_matrix
# Generate the artificial data set and display the resulting scatter plot
x = []
y = []
z = []
for i in range(500):
    rand = np.random.uniform(0.0, 2*math.pi)
    randx = np.random.normal(0.0, 30.0)
    randy = np.random.normal(0.0, 30.0)
    if np.random.random() > 0.5:
        # Inner ring of radius 100
        z.append(0)
        x.append(100*math.cos(rand) + randx)
        y.append(100*math.sin(rand) + randy)
    else:
        # Outer ring of radius 300
        z.append(1)
        x.append(300*math.cos(rand) + randx)
        y.append(300*math.sin(rand) + randy)
plt.axis('equal')
plt.axis([-500, 500, -500, 500])
plt.scatter(x, y, c=z)
plt.show()
# Run the MLPClassifier algorithm on the training data
XY = pd.DataFrame({'x': x, 'y': y})
print(XY.head())
Z = pd.DataFrame({'z': z})
print(Z.head())
XY_train, XY_test, Z_train, Z_test = train_test_split(XY, Z, test_size=0.20)
mlp = MLPClassifier(hidden_layer_sizes=(10, 10, 10), max_iter=1000)
mlp.fit(XY_train, Z_train.values.ravel())
# Make predictions on the test data and display resulting scatter plot
predictions = mlp.predict(XY_test)
print(confusion_matrix(Z_test, predictions))
print(classification_report(Z_test, predictions))
plt.axis('equal')
plt.axis([-500, 500, -500, 500])
plt.scatter(XY_test.x, XY_test.y, c=predictions)
plt.show()
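The contour plot mentioned above is not produced by the simplified code; one way to sketch the decision regions, continuing from the fitted mlp above (the 200-point grid resolution and the alpha value are arbitrary choices):

# Evaluate the fitted MLP over a grid and shade the decision regions.
xx, yy = np.meshgrid(np.linspace(-500, 500, 200),
                     np.linspace(-500, 500, 200))
grid = pd.DataFrame({'x': xx.ravel(), 'y': yy.ravel()})
zz = mlp.predict(grid).reshape(xx.shape)
plt.contourf(xx, yy, zz, alpha=0.3)
plt.scatter(XY_test.x, XY_test.y, c=predictions)
plt.axis('equal')
plt.show()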
I am using imsave() sequentially to make many PNGs that I will combine as an AVI and I would like to add moving text annotations. I use ImageJ to make AVIs or GIFs.
I don't want the axes, numbers, borders or anything, just the color image (as imsave() provides for example) with text (and maybe arrows) inside. These will change frame by frame. Pardon the use of jet.
I could use savefig() with ticks off and then do cropping as post-processing, but is there a more convenient, direct, or "matplotlibithic" way to do this that wouldn't be so hard on my hard drive? (The final thing will be pretty big.)
A code snippet, added by request:
import numpy as np
import matplotlib.pyplot as plt

nx, ny = 101, 101
phi = np.zeros((ny, nx), dtype='float')
do_me = np.ones_like(phi, dtype='bool')

# A circular region held at phi = 1, with the outer border held at 0
x0, y0, r0 = 40, 65, 12
x = np.arange(nx, dtype='float')[None, :]
y = np.arange(ny, dtype='float')[:, None]
rsq = (x-x0)**2 + (y-y0)**2
circle = rsq <= r0**2
phi[circle] = 1.0
do_me[circle] = False
do_me[0, :], do_me[-1, :], do_me[:, 0], do_me[:, -1] = False, False, False, False

n, nper = 100, 100
phi_hold = np.zeros((n+1, ny, nx))
phi_hold[0] = phi
for i in range(n):
    for j in range(nper):
        # Relax phi toward the average of its four nearest neighbours
        phi2 = 0.25*(np.roll(phi,  1, axis=0) +
                     np.roll(phi, -1, axis=0) +
                     np.roll(phi,  1, axis=1) +
                     np.roll(phi, -1, axis=1))
        phi[do_me] = phi2[do_me]
    phi_hold[i+1] = phi

change = phi_hold[1:] - phi_hold[:-1]
places = [(32, 20), (54, 25), (11, 32), (3, 12)]

plt.figure()
plt.imshow(change[50])
for (x, y) in places:
    plt.text(x, y, "WOW", fontsize=16)
plt.text(5, 95, "Don't use Jet!", color="white", fontsize=20)
plt.show()
Method 1
Using an excellent answer to another question as a reference, I came up with the following simplified variant, which seems to work nicely. Just make sure the figsize (which is given in inches) has an aspect ratio matching the size ratio of the plot data:
import numpy as np
import matplotlib.pyplot as plt

test_image = np.eye(100)
fig = plt.figure(figsize=(4, 4))
ax = plt.axes(frameon=False, xticks=[], yticks=[])
ax.imshow(test_image)
plt.savefig('test.png', bbox_inches='tight', pad_inches=0)
Note that I am using imshow with a test_image, which might behave differently from other plotting functions... please let me know in a comment in case you'd like to do something else.
Also note that the image will be (re-) sampled, so the figsize will influence the resolution of the written image.
As pointed out in the comments, the figsize setting doesn't match the size of the output image (or the size on screen, for that matter). To overcome this, use...
Method 2
Reading the FAQ entry Move the edge of an axes to make room for tick labels, I found a way to make the figsize parameter set the output image size directly, by moving the axes' ticks out of the visible area:
import numpy as np
import matplotlib.pyplot as plt

test_image = np.eye(100)
fig = plt.figure(figsize=(4, 4))
ax = fig.add_axes([0, 0, 1, 1])  # axes spanning the entire figure
ax.imshow(test_image)
plt.savefig('test.png')
Note that savefig has a default DPI setting (100 in my case) which - in combination with figsize - determines the number of pixels in x and y directions of the saved image. You can override this with the dpi keyword argument to savefig.
If you want to display the image on screen rather than saving it (by using plt.show() instead of the plt.savefig line in the code above), the size of the figure is dependent on (apart from the already familiar figsize parameter) the figure's DPI setting, which also has a default (80 on my system). This value can be overridden by passing the dpi keyword argument to the plt.figure() call.
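For example, continuing Method 2 with explicit DPI values, the saved pixel size is simply figsize multiplied by dpi:

# 4 inches x 100 dpi = 400 pixels per side in the saved PNG.
fig = plt.figure(figsize=(4, 4), dpi=100)  # dpi here controls the on-screen size
ax = fig.add_axes([0, 0, 1, 1])
ax.imshow(test_image)
plt.savefig('test.png', dpi=100)           # dpi here controls the file's pixel size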
I am plotting tiled images in a similar way to the working code shown below:
from PIL import Image
import matplotlib.pyplot as plt
import random
import numpy

def r():
    return random.randrange(50, 200)

imsize = 100
rngsize = 5
rng = range(rngsize)

for i in rng:
    for j in rng:
        im = Image.new('RGB', (imsize, imsize), (r(), r(), r()))
        plt.imshow(im, aspect='equal',
                   extent=numpy.array([i, i+1, j, j+1])*imsize)

plt.xlim(-5, imsize * rngsize + 5)
plt.ylim(-5, imsize * rngsize + 5)
plt.show()
The problem is: as you pan and zoom, zoom-scale-independent white stripes appear between the image edges, which is very undesirable. I guess this has to do with resampling and antialiasing, but I have no idea how to solve it "the right way", especially since I don't know the exact implementation details of matplotlib's rendering engine.
With Cairo and HTML Canvas, you can draw "to the pixel corner" or "to the pixel center" (translating by 0.5 pixel) thus avoiding anti-aliasing effects. Would there be a way to do that with Matplotlib?
Thanks for any help!
You can simply fill in the values to a larger numpy array and plot the entire composite image in one shot. I've adapted your code above for a minimal example, but with different-sized images you'll need to take a different step size.
# Build one composite array and draw it with a single imshow call.
# Use uint8 so the 0-255 colour values are interpreted correctly
# (float RGB arrays are clipped to the [0, 1] range by imshow).
F = numpy.zeros((imsize*rngsize, imsize*rngsize, 3), dtype=numpy.uint8)
for i in rng:
    for j in rng:
        F[i*imsize:(i+1)*imsize,
          j*imsize:(j+1)*imsize, :] = (r(), r(), r())
plt.imshow(F, interpolation='nearest')
plt.show()