I'm using hp.mollview() to draw a map where the projection is not fully populated. This makes it unclear where the boundaries of the map are, because the white background just blends into the white of the figure area. Is it possible to draw a border around the map area?
I had the same problem and unfortunately do not have a direct solution, but I did find a workaround. You can change the background colour of your figure and of the masked pixels separately. See below an example using the inferno colour map, with white background but grey masked pixels.
import healpy as hp
import numpy as np
from pylab import cm
# Some map with masked pixels
npix = hp.nside2npix(4)
m = np.arange(npix, dtype=float)
m[50:100] = hp.UNSEEN
# adjusting the colour map
cmap = cm.inferno.copy()  # copy, since registered colormaps should not be modified in place
cmap.set_under('w')
cmap.set_bad('grey')
hp.mollview(m, cmap=cmap)
(Resulting image: healpy map with a white figure background and grey masked pixels.)
You can use an empty graticule to add a border:
import healpy as hp
import numpy as np
from pylab import cm
npix = hp.nside2npix(4)
m = np.arange(npix, dtype=float)
m[50:100] = hp.UNSEEN
cmap = cm.Blues.copy()  # copy, since registered colormaps should not be modified in place
cmap.set_under('w')
hp.mollview(m, cmap=cmap)
hp.graticule(dmer=360, dpar=360, alpha=0)
I need to extend an image array that currently only holds grey-scale values, from the shape (640, 480) to (640, 480, 3). Ultimately I need to concatenate both - an RGB numpy array with the greyscale numpy array.
greyscalearray[..., np.newaxis] results in (640, 480, 1) - is there a way I can add np.zeros(3,1) for the last axis?
I think you want:
RGB = np.dstack((grey, np.zeros_like(grey), np.zeros_like(grey)))
The benefit of using np.zeros_like() is that you get an array matching the dimensions and the dtype of your single-channel, grey image without having to specify either!
So here is the full code:
import numpy as np
grey = np.ones((64,48),dtype=np.uint8)
print(grey.shape) # prints (64, 48)
# Make 3-channel from single-channel
RGB = np.dstack((grey, np.zeros_like(grey), np.zeros_like(grey)))
print(RGB.shape) # prints (64, 48, 3)
print(RGB[0,0]) # prints [1 0 0]
np.dstack() stacks in the depth axis. Its friends are np.vstack(), which stacks vertically to concatenate one image below another, and np.hstack(), which stacks horizontally to concatenate one image beside another.
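For instance, a quick sketch of how the three stacking functions change shapes, using a dummy 64x48 array like the one above:
import numpy as np

grey = np.ones((64, 48), dtype=np.uint8)

print(np.dstack((grey, grey)).shape)  # (64, 48, 2) - stacked along a new depth axis
print(np.vstack((grey, grey)).shape)  # (128, 48)   - one image below the other
print(np.hstack((grey, grey)).shape)  # (64, 96)    - one image beside the other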
Using PIL, I can transform an image's color by first converting it to grayscale and then applying the colorize transform. Is there a way to do the same with scikit-image?
The difference from e.g. the question at Color rotation in HSV using scikit-image is that there the black stays black, while with PIL's colorize function I can define what both black and white are mapped to.
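For reference, what I currently do with PIL is roughly this (the filenames and the green/magenta endpoints are just examples):
from PIL import Image, ImageOps

im = Image.open('input.png').convert('L')           # convert to grayscale
result = ImageOps.colorize(im, black=(0, 255, 0),   # what black maps to
                           white=(255, 0, 255))     # what white maps to
result.save('result.png')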
I think you want something like this, which avoids any dependency on PIL/Pillow for the colorize step itself (PIL is used below only for loading and saving):
#!/usr/bin/env python3
import numpy as np
from PIL import Image
def colorize(im, black, white):
    """Do the equivalent of PIL's colorize() function"""
    # Pick up low and high ends of the ranges for R, G and B
    Rlo, Glo, Blo = black
    Rhi, Ghi, Bhi = white
    # Interpolate each channel linearly between its black and white endpoints
    R = im/255 * (Rhi - Rlo) + Rlo
    G = im/255 * (Ghi - Glo) + Glo
    B = im/255 * (Bhi - Blo) + Blo
    return (np.dstack((R, G, B))).astype(np.uint8)
# Create black-white left-right gradient image, 256 pixels wide and 100 pixels tall
grad = np.repeat(np.arange(256,dtype=np.uint8).reshape(1,-1), 100, axis=0)
Image.fromarray(grad).save('start.png')
# Colorize from green to magenta
result = colorize(grad, [0,255,0], [255,0,255])
# Save result - using PIL because I don't know skimage that well
Image.fromarray(result).save('result.png')
That will turn the black-to-white gradient (start.png) into a green-to-magenta gradient (result.png).
Note that this is the equivalent of ImageMagick's -level-colors BLACK,WHITE operator which you can do in Terminal like this:
convert input.png -level-colors lime,magenta result.png
That converts the same black-to-white gradient into the same green-to-magenta result.
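If you'd rather stay entirely in scikit-image for the file handling, its io module can stand in for PIL here (a sketch reusing the colorize() function above; 'start.png' is the gradient we just saved):
import skimage.io

im = skimage.io.imread('start.png')  # 2-D grayscale array
skimage.io.imsave('result.png', colorize(im, [0, 255, 0], [255, 0, 255]))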
I want my code to find the corners of a square lego plate in an image like the one attached.
I also want to find its dimensions, i.e. the number of "blops" in both dimensions (48x48 in the attached image).
I am currently looking at detecting the individual "blops", and the result so far is pretty good: a combination of blur, adaptiveThreshold, findContours and selection based on area finds the contours rendered in the second attached image (coloring is random).
I'm now looking for an algorithm to find the "grid" loosely represented by these contours (or their mid-points), but I lack the google-fu. Any ideas?
(Suggestions for different approaches are also very welcome.)
(The sample image shows bricks placed in the corners - an algorithm could expect this, if it helps.)
(The sample image has a rather wild background. I'd prefer to cope with that, if possible.)
Update 8 July 2016: I'm trying to write an algorithm that looks for streaks of adjacent contours forming lines. The algo should be able to find a number of these and, from that, deduce the form of the whole plate, even with perspective. Will update if it works...
Update December 2017: The above algorithm sort of worked, although it was a bit too unpredictable. Also I got problems with perspective (adding a "thick" lego brick changes the surface) and color recognition (shadows, camera peculiarities etc). This endeavor is on hold for now. If I resume it I will try with fixed camera positions immediately above the plate and consistent lights.
Here's a potential approach using color thresholding. The idea is to convert the image to HSV format, then color threshold using lower and upper bounds, on the assumption that the baseplate is gray. This gives us a mask image. From here we morph open to remove noise, find contours, and sort for the largest contour. Next we obtain the rotated bounding box coordinates and draw them onto a new blank mask. Finally we bitwise-and the mask with the input image to get our result. To find the corner coordinates, we can use cv2.goodFeaturesToTrack() on the mask. Take a look at how to find accurate corner positions of a distorted rectangle from blurry image in python? and Shi-Tomasi Corner Detector & Good Features to Track for more details.
Here's a visualization of each step:
We load the image, convert to HSV format, define a lower/upper bound, and perform color thresholding using cv2.inRange()
import cv2
import numpy as np
# Load image, convert to HSV, and color threshold
image = cv2.imread('1.png')
blank_mask = np.zeros(image.shape, dtype=np.uint8)
original = image.copy()
hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
lower = np.array([0, 0, 109])
upper = np.array([179, 36, 255])
mask = cv2.inRange(hsv, lower, upper)
Next we create a rectangular kernel using cv2.getStructuringElement() and perform morphological operations using cv2.morphologyEx(). This step removes small particles of noise.
# Morph open to remove noise
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5,5))
opening = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel, iterations=1)
From here we find contours on the mask using cv2.findContours() and filter by contour area to obtain the largest contour. We then obtain the rotated bounding box coordinates using cv2.minAreaRect() and cv2.boxPoints(), and draw this onto a new blank mask with cv2.fillPoly(). This step gives us a "perfect" outer contour of the baseplate. Here's the detected outer contour highlighted in green, along with the resulting mask.
# Find contours and sort for largest contour
cnts = cv2.findContours(opening, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
cnts = sorted(cnts, key=cv2.contourArea, reverse=True)[0]
# Obtain rotated bounding box and draw onto a blank mask
rect = cv2.minAreaRect(cnts)
box = cv2.boxPoints(rect)
box = box.astype(np.int32)  # cast to integer pixel coordinates for drawing
cv2.drawContours(image, [box], 0, (36, 255, 12), 3)
cv2.fillPoly(blank_mask, [box], (255, 255, 255))
Finally we bitwise-and the mask with our original input image to obtain our result. Depending on what you need, you can change the background to black or white.
# Bitwise-and mask with input image
blank_mask = cv2.cvtColor(blank_mask, cv2.COLOR_BGR2GRAY)
result = cv2.bitwise_and(original, original, mask=blank_mask)
# result[blank_mask==0] = (255,255,255) # Color background white
To detect the corner coordinates, we can use cv2.goodFeaturesToTrack(). Here's the detected corners highlighted in purple:
Coordinates:
(91.0, 525.0)
(463.0, 497.0)
(64.0, 152.0)
(436.0, 125.0)
# Detect corners
corners = cv2.goodFeaturesToTrack(blank_mask, maxCorners=4, qualityLevel=0.5, minDistance=150)
for corner in corners:
    x, y = corner.ravel()
    # cv2.circle needs integer pixel coordinates
    cv2.circle(image, (int(x), int(y)), 8, (155, 20, 255), -1)
    print("({}, {})".format(x, y))
Full Code
import cv2
import numpy as np
# Load image, convert to HSV, and color threshold
image = cv2.imread('1.png')
blank_mask = np.zeros(image.shape, dtype=np.uint8)
original = image.copy()
hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
lower = np.array([0, 0, 109])
upper = np.array([179, 36, 255])
mask = cv2.inRange(hsv, lower, upper)
# Morph open to remove noise
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5,5))
opening = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel, iterations=1)
# Find contours and sort for largest contour
cnts = cv2.findContours(opening, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
cnts = sorted(cnts, key=cv2.contourArea, reverse=True)[0]
# Obtain rotated bounding box and draw onto a blank mask
rect = cv2.minAreaRect(cnts)
box = cv2.boxPoints(rect)
box = box.astype(np.int32)  # cast to integer pixel coordinates for drawing
cv2.drawContours(image, [box], 0, (36, 255, 12), 3)
cv2.fillPoly(blank_mask, [box], (255, 255, 255))
# Bitwise-and mask with input image
blank_mask = cv2.cvtColor(blank_mask, cv2.COLOR_BGR2GRAY)
result = cv2.bitwise_and(original, original, mask=blank_mask)
result[blank_mask==0] = (255,255,255) # Color background white
# Detect corners
corners = cv2.goodFeaturesToTrack(blank_mask, maxCorners=4, qualityLevel=0.5, minDistance=150)
for corner in corners:
    x, y = corner.ravel()
    # cv2.circle needs integer pixel coordinates
    cv2.circle(image, (int(x), int(y)), 8, (155, 20, 255), -1)
    print("({}, {})".format(x, y))
cv2.imwrite('mask.png', mask)
cv2.imwrite('opening.png', opening)
cv2.imwrite('blank_mask.png', blank_mask)
cv2.imwrite('image.png', image)
cv2.imwrite('result.png', result)
cv2.waitKey()
I am plotting a greyscale version of this image:
SOURCE: http://matplotlib.org/examples/pylab_examples/griddata_demo.html
I have used the following code:
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.cm as cm
from PIL import Image
file_name = 'griddata_demo.png'
def func_grey(fname):
    image = Image.open(fname).convert("L")
    arr = np.asarray(image)
    plt.imshow(arr, cmap=cm.Greys_r)
    plt.show()

func_grey(file_name)
(Code based on Display image as grayscale using matplotlib.)
The setup I am working with has Python 2.7 and Pandas, and I have installed Pillow with easy_install.
Background information about the image and the requirements:
- The image comes from data found here. Ideally, the greyscale version of this image should be generated directly from this raw data, i.e. do not save it as a colored image and then try to convert to greyscale - rather, just produce a greyscale version of the plot.
- I do not know the colors that correspond to the z-values - these colors can be set arbitrarily.
- The color map of the image can also be chosen arbitrarily - there is no preference. It is the greyscale version that is of concern.
My question is related to the color scheme shown in the colorbar. I need to display a color scheme where the color bar has colors from light grey (lowest intensity) to dark grey (highest intensity).
After running the above code, a greyscale image is produced. In the color bar of the greyscale image, the intensity level -0.36 is dark grey. At 0.00, it is light grey. But then 0.48 is also dark grey.
Question:
Is it possible to change the colormap such that -0.36 is light grey and 0.48 is dark grey? I mean, is it possible to display the colorbar from light to dark?
I think this question may be about how to use a grayscale colormap in matplotlib. If so, it's straightforward. Here's an example using different colormaps (based on the code for the OP's image):
from numpy.random import uniform, seed
from scipy.interpolate import griddata  # scipy's griddata replaces the removed matplotlib.mlab.griddata
import matplotlib.pyplot as plt
import numpy as np

# make up data.
def f(spi, the_colormap):
    plt.subplot(spi)
    seed(0)
    npts = 200
    x = uniform(-2, 2, npts)
    y = uniform(-2, 2, npts)
    z = x*np.exp(-x**2 - y**2)
    xi = np.linspace(-2.1, 2.1, 100)
    yi = np.linspace(-2.1, 2.1, 200)
    # interpolate the scattered points onto a regular grid
    XI, YI = np.meshgrid(xi, yi)
    zi = griddata((x, y), z, (XI, YI), method='linear')
    CS = plt.contour(xi, yi, zi, 15, linewidths=0.5, colors='k')
    # points outside the convex hull come back as NaN, so use nanmax
    CS = plt.contourf(xi, yi, zi, 15, cmap=the_colormap,
                      vmax=np.nanmax(np.abs(zi)), vmin=-np.nanmax(np.abs(zi)))
    plt.colorbar()  # draw colorbar
    # plot data points.
    plt.scatter(x, y, marker='o', c='b', s=5, zorder=10)
    plt.xlim(-2, 2)
    plt.ylim(-2, 2)
    plt.title('griddata test (%d points)' % npts)
f(131, plt.cm.rainbow)
f(132, plt.cm.gray)
f(133, plt.cm.hot)
plt.show()
If one actually wants to convert to grayscale using PIL (a far less favorable, but sometimes necessary task), it's best to start with a colormap that has monotonic brightness, like hot above, but not rainbow. Also, in the comments I suggested using cubehelix, which is now included with matplotlib; see here for the original implementation. See here for an image of the available matplotlib colormaps.
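To get the light-to-dark ordering asked about above, note that every matplotlib colormap also has a reversed twin with an _r suffix. A minimal sketch (the sample data range just mirrors the values mentioned in the question):
import numpy as np
import matplotlib.pyplot as plt

data = np.linspace(-0.36, 0.48, 256).reshape(16, 16)
# 'gray_r' maps the lowest values to light grey and the highest to dark grey
plt.imshow(data, cmap='gray_r')
plt.colorbar()
plt.show()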
This solution works for me, and is a lot simpler:
from PIL import Image
im = Image.open("image.png")
im.convert('L').show()
im.convert('L').save("image.png")
Note that you can also switch file types when saving (.png to .jpg, for example).
I am plotting tiled images in a similar way to the working code shown below:
from PIL import Image
import matplotlib.pyplot as plt
import random
import numpy

def r():
    return random.randrange(50, 200)

imsize = 100
rngsize = 5
rng = range(rngsize)

for i in rng:
    for j in rng:
        im = Image.new('RGB', (imsize, imsize), (r(), r(), r()))
        plt.imshow(im, aspect='equal', extent=numpy.array([i, i+1, j, j+1])*imsize)
plt.xlim(-5, imsize * rngsize + 5)
plt.ylim(-5, imsize * rngsize + 5)
plt.show()
The problem is: as you pan and zoom, zoom-scale-independent white stripes appear between the image edges, which is very undesirable. I guess this has to do with resampling and antialiasing, but I have no idea how to solve it "the right way", especially without knowing the exact implementation details of matplotlib's rendering engine.
With Cairo and HTML Canvas, you can draw "to the pixel corner" or "to the pixel center" (translating by 0.5 pixel) thus avoiding anti-aliasing effects. Would there be a way to do that with Matplotlib?
Thanks for any help!
You can simply fill in the values to a larger numpy array and plot the entire composite image in one shot. I've adapted your code above into a minimal example, but with different-sized images you'll need a different step size.
# Build one composite array and draw it with a single imshow call
# (uint8 so imshow treats the 0-255 values as RGB intensities)
F = numpy.zeros((imsize*rngsize, imsize*rngsize, 3), dtype=numpy.uint8)
for i in rng:
    for j in rng:
        F[i*imsize:(i+1)*imsize,
          j*imsize:(j+1)*imsize, :] = (r(), r(), r())
plt.imshow(F, interpolation='nearest')
plt.show()
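Plotting one array with interpolation='nearest' sidesteps the per-tile resampling at image edges, which is what produced the white seams between separately drawn images.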