intersect image with mask in skimage - scikit-image

I've loaded an image into the scikit-image library in Python.
I've also created a mask of the same dimensions as the image, and now I want to intersect the mask with the image to only show pixels from the image where the mask is non-zero. How can I do that without iterating pixel by pixel?
Do I just apply the mask to the image, like:
new_image = image[mask]
Or can I just multiply the two arrays together to get an element-wise (pixel-by-pixel) product?

Element-wise multiplication indeed works perfectly:
from skimage import data
from matplotlib import pyplot as plt
image = data.coins()         # sample grayscale image
mask = image > 128           # boolean mask: True where the image is bright
masked_image = image * mask  # zeroes out pixels where the mask is False
fig, (ax0, ax1) = plt.subplots(nrows=1, ncols=2)
ax0.imshow(image, cmap='gray')
ax1.imshow(masked_image, cmap='gray')
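Note that this works as-is for single-channel images; if the image has a channel axis (e.g. RGB), the 2-D mask needs an extra axis to broadcast. A minimal sketch, using scikit-image's astronaut sample:
import numpy as np
from skimage import data

rgb = data.astronaut()                      # (H, W, 3) RGB sample image
mask2d = rgb[..., 0] > 128                  # 2-D boolean mask from the red channel
masked_rgb = rgb * mask2d[..., np.newaxis]  # broadcast the mask across all channels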
Note 1: your code example is not a scikit-image question but a NumPy indexing question, and it will not do what you want; it will instead return a 1-D array of all the pixels where mask is True. For more information, see the NumPy documentation on boolean indexing.
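A minimal sketch of the difference:
import numpy as np
img = np.arange(9).reshape(3, 3)
m = img > 4
print(img[m])   # 1-D array of the selected pixels only: [5 6 7 8]
print(img * m)  # same shape as img, zeros where m is False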
Note 2: you can also use scikit-image to save images:
from skimage import io
io.imsave('masked_image.png', masked_image)

You can do it like this:
from skimage import data
import numpy as np
from PIL import Image
# Load coins data-set
im = data.coins()
# Make mask of where image is less than mid-grey
mask = im < 128
# Set image black everywhere it was less than mid-grey
im[mask] = 0
# Set image mid-grey everywhere it was mid-grey or brighter
im[~mask] = 128
# Convert to PIL image and save
Image.fromarray(im).save('result.png')
Starting image of coins:
Resulting image:
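Reusing the imports above, the same result can be written as a single np.where call (a sketch with the same semantics as the two masked assignments):
im2 = np.where(data.coins() < 128, 0, 128).astype(np.uint8)
Image.fromarray(im2).save('result.png')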

Related

What is the difference between cropping an image and applying an ROI (region of interest) to the image

I'm using OpenCV.
Given a base image (640x640), I want to know whether applying a rectangular white mask of (100x100), in other words an ROI, is more time-consuming than cropping the image to the same rectangular shape (100x100).
What I consider to be an ROI:
mask = np.zeros_like(original_image)
shape = np.array([[a,b,c,d]])  # a, b, c, d: the corner points of the rectangle
cv2.fillPoly(mask, shape, (255,255,255))
roi = cv2.bitwise_and(original_image, mask)
The way of cropping
cropped = original_image[x:y, z:t]
I want to apply a function (e.g. converting to grayscale) or a color filter to the image.
I think that doing so on the ROI'd image will be more time-consuming, since there are many more pixels (the resulting image has the same dimensions as the original one, 640x640).
On the other hand, applying the same function to the cropped image will take less time.
The question is: wouldn't cropping pay for itself by avoiding the computation of the remaining 640x640 - 100x100 pixels?
Hope I was clear enough. Thanks in advance.
Edit
I did use timeit as Belal Homaidan suggested:
print("CROP TIMING: ", timeit.timeit(test_crop, setup=setup, number=10000) * 1e6, "us")
print("ROI TIMING: ", timeit.timeit(test_ROI, setup=setup, number=10000) * 1e6, "us")
Where setup was:
setup = """\
import cv2
import numpy as np
original_image = cv2.imread(r"C:\\Users\\path\\path\\test.png")
"""
Where test_crop was:
test_crop = """\
cropped = original_image[0:150, 0:150]
"""
And where test_ROI was:
test_ROI = """\
mask = np.zeros_like(original_image)
shape = np.array([[(0,0),(150,0),(150,150),(0,150)]])
cv2.fillPoly(mask, shape, (255,255,255))
final = cv2.bitwise_and(original_image, mask)
"""
The image was a BGR one with a size of 131 KB and a resolution of 384x208 pixels.
The result was:
CROP TIMING: 11560.400000007576 us
ROI TIMING: 780580.8000000524 us
I think applying an ROI fits when you want a specific (non-rectangular) shape, while cropping is the better fit when a rectangle is all you need.
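The gap is expected: a NumPy slice is a view, so cropping copies no pixel data, while fillPoly and bitwise_and each touch every pixel of the full frame. A minimal sketch of the view behaviour:
import numpy as np
original_image = np.zeros((640, 640, 3), dtype=np.uint8)
cropped = original_image[0:100, 0:100]
print(cropped.base is original_image)  # True: the crop shares the original's memory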

Python: how to create 3 single channel images

I want to create 3 single-channel images, red, green and blue, each with random scatters.
So far I have managed to create one-color images, but they are in RGB format.
import matplotlib.pyplot as plt
import numpy as np
### blue
cells = np.random.RandomState(400)
x = cells.randn(10)
y = cells.randn(10)
colors = "blue"
sizes = 250 * cells.rand(10)  # one size per plotted point
plt.scatter(x, y, c=colors, s=sizes, alpha=0.7, cmap='viridis')
plt.axis("off")
plt.show()
I want to use the single-channel files and their combinations to test an image-analysis tool for object detection.
Thanks for any help!
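One possible approach (a sketch; the scatter_channel helper below is hypothetical): draw the dots directly into single-channel uint8 arrays, and stack the channels only when you want to view them together:
import numpy as np
import matplotlib.pyplot as plt

def scatter_channel(rng, size=256, n_points=10, radius=5):
    # Draw n_points random filled circles into one grayscale channel.
    img = np.zeros((size, size), dtype=np.uint8)
    yy, xx = np.mgrid[0:size, 0:size]
    for _ in range(n_points):
        y = rng.randint(radius, size - radius)
        x = rng.randint(radius, size - radius)
        img[(yy - y) ** 2 + (xx - x) ** 2 <= radius ** 2] = 255
    return img

rng = np.random.RandomState(400)
red, green, blue = [scatter_channel(rng) for _ in range(3)]
plt.imshow(np.dstack([red, green, blue]))  # stack the channels only for inspection
plt.show()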

Image processing with numpy arrays

In grayscale mode, 255 indicates white. So if all elements of the NumPy array are 255, shouldn't it be a white image?
l = np.full((184,184),255)
j = Image.fromarray(l,'L')
j.show()
I am getting a black-and-white vertically striped image as output instead of a pure white image. Why is that?
The issue is the 'L' mode: L means 8-bit pixels, black and white. The array you created holds larger (32- or 64-bit, depending on the platform) integers, not 8-bit values.
Try j = Image.fromarray(l, 'I') ## (32-bit signed integer pixels) - and cast the array to np.int32 first so the buffer matches.
See the Pillow documentation on image modes for reference.
(note: Many thanks to you for introducing me to the Pillow Image module for Python with this posting...)
Complete test code:
from PIL import Image
import numpy as np
l = np.full((184, 184), 255, dtype=np.int32)  # 32-bit values to match mode 'I'
j = Image.fromarray(l, 'I')
j.show()
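Alternatively, a minimal sketch that casts the array to 8-bit so mode 'L' matches the data exactly:
from PIL import Image
import numpy as np
l = np.full((184, 184), 255, dtype=np.uint8)  # 8-bit values match mode 'L'
Image.fromarray(l, 'L').show()                # pure white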

Greyscale in Python - incorrect colors changing from dark grey to light grey to dark grey

I am plotting a greyscale version of this image:
SOURCE: http://matplotlib.org/examples/pylab_examples/griddata_demo.html
I have used the following code:
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.cm as cm
from PIL import Image
file_name = 'griddata_demo.png'
def func_grey(fname):
    image = Image.open(fname).convert("L")
    arr = np.asarray(image)
    plt.imshow(arr, cmap=cm.Greys_r)
    plt.show()
func_grey(file_name)
Related: Display image as grayscale using matplotlib.
The setup I am working with has Python 2.7 and Pandas, and I have installed Pillow with easy_install.
Background information about the image and the requirements:
- The image comes from data found here. Ideally, the greyscale version of this image should be generated directly from this raw data, i.e. do not save it as a colored image and then try to convert to greyscale; rather, just produce a greyscale version of the plot.
- I do not know the colors that correspond to the z-values; these colors can be set arbitrarily.
- The color map of the image can also be chosen arbitrarily; there is no preference. It is the greyscale version that is of concern.
My question is related to the color scheme shown in the colorbar. I need to display a color scheme where the color bar has colors from light grey (lowest intensity) to dark grey (highest intensity).
After running the above code, a greyscale image is produced. In the color bar of the greyscale image, the intensity level -0.36 is dark grey. At 0.00, it is light grey. But then 0.48 is also dark grey.
Question:
Is it possible to change the colormap such that -0.36 is light grey and 0.48 is dark grey? I mean, is it possible to display the colorbar from light to dark?
I think this question may be about how to use a grayscale colormap in matplotlib. If so, it's straightforward. Here's an example using different colormaps (based on the code for the OP's image):
from numpy.random import uniform, seed
from matplotlib.mlab import griddata
import matplotlib.pyplot as plt
import numpy as np
# make up data.
#npts = int(raw_input('enter # of random points to plot:'))
def f(spi, the_colormap):
    plt.subplot(spi)
    seed(0)
    npts = 200
    x = uniform(-2, 2, npts)
    y = uniform(-2, 2, npts)
    z = x * np.exp(-x**2 - y**2)
    xi = np.linspace(-2.1, 2.1, 100)
    yi = np.linspace(-2.1, 2.1, 200)
    zi = griddata(x, y, z, xi, yi, interp='linear')
    CS = plt.contour(xi, yi, zi, 15, linewidths=0.5, colors='k')
    CS = plt.contourf(xi, yi, zi, 15, cmap=the_colormap,
                      vmax=abs(zi).max(), vmin=-abs(zi).max())
    plt.colorbar()  # draw colorbar
    # plot data points.
    plt.scatter(x, y, marker='o', c='b', s=5, zorder=10)
    plt.xlim(-2, 2)
    plt.ylim(-2, 2)
    plt.title('griddata test (%d points)' % npts)
f(131, plt.cm.rainbow)
f(132, plt.cm.gray)
f(133, plt.cm.hot)
plt.show()
If one actually wants to convert to grayscale using PIL (a far less favorable, but sometimes necessary task), it's best to start with a colormap that has monotonic brightness, like hot above, but not rainbow. Also, in the comments I suggested using cubehelix but that's not standard with matplotlib, instead see here. See here for an image of the available matplotlib colormaps.
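For the specific light-to-dark request in the question, note that matplotlib ships both directions: Greys runs from light to dark and Greys_r is its reverse. A minimal sketch:
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.cm as cm
arr = np.random.rand(10, 10)
plt.imshow(arr, cmap=cm.Greys)  # low values light grey, high values dark grey
plt.colorbar()
plt.show()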
This solution works for me, and is a lot simpler:
from PIL import Image
im = Image.open("image.png")
im.convert('L').show()
im.convert('L').save("image.png")
Note that if you want to mix up the file types, you can (.png to .jpg, for example).
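For example, a one-line sketch converting a PNG to a greyscale JPEG:
from PIL import Image
Image.open("image.png").convert('L').save("image.jpg")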

Matplotlib imshow adjacent images with anomalous whitespace - is there a way to correct it?

I am plotting tiled images in a similar way to the working code shown below:
from PIL import Image
import matplotlib.pyplot as plt
import random
import numpy
def r():
    return random.randrange(50, 200)

imsize = 100
rngsize = 5
rng = range(rngsize)
for i in rng:
    for j in rng:
        im = Image.new('RGB', (imsize, imsize), (r(), r(), r()))
        plt.imshow(im, aspect='equal', extent=numpy.array([i, i+1, j, j+1]) * imsize)
plt.xlim(-5, imsize * rngsize + 5)
plt.ylim(-5, imsize * rngsize + 5)
plt.show()
The problem is: as you pan and zoom, zoom-scale-independent white stripes appear between the image edges, which is very undesirable. I guess this has to do with resampling and antialiasing, but I have no idea how to solve it "the right way", especially since I don't know the exact implementation details of matplotlib's rendering engine.
With Cairo and HTML Canvas, you can draw "to the pixel corner" or "to the pixel center" (translating by 0.5 pixel) thus avoiding anti-aliasing effects. Would there be a way to do that with Matplotlib?
Thanks for any help!
You can simply fill the values into a larger NumPy array (as uint8, so imshow treats them as 0-255 RGB) and plot the entire composite image in one shot. I've adapted your code above into a minimal example; with different-sized images you'll need a different step size.
F = numpy.zeros((imsize * rngsize, imsize * rngsize, 3), dtype=numpy.uint8)
for i in rng:
    for j in rng:
        F[i*imsize:(i+1)*imsize,
          j*imsize:(j+1)*imsize, :] = (r(), r(), r())
plt.imshow(F, interpolation='nearest')
plt.show()
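The same idea works with real image tiles rather than flat colours; a minimal sketch under the same assumptions (equal-sized square tiles):
import numpy
from PIL import Image
import matplotlib.pyplot as plt

imsize, rngsize = 100, 5
F = numpy.zeros((imsize * rngsize, imsize * rngsize, 3), dtype=numpy.uint8)
for i in range(rngsize):
    for j in range(rngsize):
        # Any (imsize, imsize) RGB image works here; a flat colour stands in for one.
        tile = Image.new('RGB', (imsize, imsize), (50 + 30 * i, 120, 50 + 30 * j))
        F[i * imsize:(i + 1) * imsize, j * imsize:(j + 1) * imsize] = numpy.asarray(tile)
plt.imshow(F, interpolation='nearest')
plt.show()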
