I have two figures: one is a data plot resulting from some calculations, made with matplotlib, and the other is a world map figure taken from Google Maps. I would like to reduce the matplotlib figure to some percentage value and superpose it over the map picture at a certain position, to get a final "mixed" picture. I know it can be done with graphics programs and so on, but I would like to do it automatically on the shell for thousands of different cases, so I wonder if you could propose some methodology/ideas for this.
Just in case you want to do it directly using matplotlib when you're plotting your data (ImageMagick is great otherwise):
from PIL import Image
import matplotlib.pyplot as plt
import numpy as np
dpi = 100.0
im = Image.open('Dymaxion_map_unfolded.png')
width, height = im.size
fig = plt.figure(figsize=(width / dpi, height / dpi))
fig.figimage(np.array(im) / 255.0)
# Make an axis in the upper left corner that takes up 20% of the width and 30%
# of the height of the figure
ax = fig.add_axes([0, 0.7, 0.2, 0.3])
ax.plot(range(10))
plt.show()
ImageMagick can do the job, specifically with the composite command. For usage, check this URL for examples: http://www.imagemagick.org/Usage/annotating/#overlay
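If you'd rather stay in Python for the batch processing, here is a minimal Pillow sketch of the same overlay idea; the filenames, the 30% scale, and the paste position are assumptions for illustration:
from PIL import Image
base = Image.open('map.png')      # hypothetical background map
overlay = Image.open('plot.png')  # hypothetical matplotlib figure
# Shrink the overlay to 30% of its original size
w, h = overlay.size
overlay = overlay.resize((int(0.3 * w), int(0.3 * h)))
# Paste it at a pixel position (x, y); pass overlay again as the third
# (mask) argument if it has an alpha channel you want to respect
base.paste(overlay, (50, 100))
base.save('mixed.png')
Looping this over thousands of map/plot pairs is then an ordinary Python loop over filenames.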
This sounds like something ImageMagick would be well suited for, especially the -layers switch.
Trying to segment out the lung region, I am having a lot of trouble. The incoming image is like this (this is essentially a JPG conversion, and each pixel is 8 bits):
I = dicomread('000019.dcm');
I8 = uint8(I / 256);
B = im2bw(I8, 0.007);
segmented = imclearborder(B);
The above script generates:
Q-1
I am interested in the entire inner black part, with the white matter as well. I started MATLAB a couple of days ago, so I'm not quite sure how to do it. If it is not clear to you what kind of output I want, let me know and I will upload an image. But I think there is no need.
Q-2
In B = im2bw(I8, 0.007);, why do I need to give such a low threshold? With higher thresholds, everything is white or black. I have read the documentation, and as I understand it, pixels with a value less than 0.007 are marked black and everything above is white. Is it because of my 16-to-8 bit conversion?
Another automatic solution that I did quickly using ImageJ (the same algorithms exist in MATLAB):
Automatic thresholding using Huang or Li in the color space of your choice (all of them work)
Opening with a disk-shaped structuring element (deletes the small components)
Connected components labeling.
Delete the components that touch the border of the image.
Fill holes.
And you have a clean result.
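For reference, here is a rough Python translation of those steps using scikit-image and scipy; the filename and the disk radius are assumptions:
import numpy as np
from scipy import ndimage as ndi
from skimage import io, filters, morphology, segmentation
img = io.imread('000019.png', as_gray=True)  # hypothetical 8-bit export of the DICOM slice
thresh = filters.threshold_li(img)           # automatic thresholding (Li's method)
binary = img < thresh                        # lungs are darker than the surrounding tissue
opened = morphology.opening(binary, morphology.disk(3))  # opening with a disk: removes small components
cleared = segmentation.clear_border(opened)  # delete components touching the image border
filled = ndi.binary_fill_holes(cleared)      # fill holes
io.imsave('lungs_mask.png', (filled * 255).astype(np.uint8))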
Here's a working solution in Python using OpenCV:
import cv2  # OpenCV
filename = 'zFrkx.jpg'  # name of your file; assumes it is in the same dir as the .py file
img_gray = cv2.imread(filename, 0)  # reads the JPG image as a grayscale array
min_val = 100  # threshold: pixels at or below this value are selected; try shifting it to expand or collapse the area of interest
max_val = 150  # value written into the mask for selected pixels (any nonzero value works)
ret, lung_mask = cv2.threshold(img_gray, min_val, max_val, cv2.THRESH_BINARY_INV)  # inverse fixed threshold using the values defined above
lung_layer = cv2.bitwise_and(img_gray, img_gray, mask=lung_mask)  # keep the original pixels wherever the mask is nonzero
cv2.imwrite('cake.tif', lung_layer)  # writes the desired layer to the current working dir
I tried running the script with the values arbitrarily set to 100 and 150 and got the following result, from which you could select the largest continuous element using dilation and segmentation techniques (http://docs.opencv.org/master/d3/db4/tutorial_py_watershed.html#gsc.tab=0).
Also, I suggest you crop the top and bottom X pixels to cut out the text, since no lung will fill the top or bottom of the picture.
Use TIF instead of JPG format to avoid compression-related artifacts.
I know you noted that you'd like the medullar(?) white matter, too. I would be glad to help with that, but could you first explain in plain English how your shared MATLAB code works? It seems to work pretty well for the WM.
Hope this helps!
I have a 565 × 584 image as shown:
I want to reduce the radius of the circle by a certain number of pixels without changing the size of the image. How can I do it? Please explain or give some ideas. Thank you.
I would use ImageMagick and an erosion like this:
convert http://i.stack.imgur.com/c8lfe.jpg -morphology erode octagon:8 out.png
If you know that the background of the image is a constant, as in your example, this is easy.
Resize the entire image by the ratio you wish to shrink by. Then create a new image at the size of the original, fill it with the background color, and paste the resized image into the center of it.
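For instance, a minimal Pillow sketch of that idea, assuming the background is black as in the example image and an arbitrary 75% shrink:
from PIL import Image
im = Image.open('c8lfe.jpg')
w, h = im.size
# Shrink the whole image by the desired ratio
small = im.resize((int(0.75 * w), int(0.75 * h)))
# New canvas at the original size, filled with the background colour
out = Image.new(im.mode, (w, h), 'black')
# Paste the shrunken image into the centre
out.paste(small, ((w - small.size[0]) // 2, (h - small.size[1]) // 2))
out.save('out.png')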
Here's how you'd do it in OpenCV Python. Going with Mark Setchell's approach, simply specify a round structuring element so that you can maintain or respect the round edges of the object. The closest thing that OpenCV has to offer is the elliptical mask.
As such:
import numpy as np # Import relevant packages - numpy and OpenCV
import cv2
# Read in image, convert to grayscale, and threshold to a binary uint8 image
# (cv2.erode does not accept boolean arrays, so convert to 0/255 up front)
im = ((cv2.imread('c8lfe.jpg', 0) > 128) * 255).astype('uint8')
# Specify the size (width and height) of the elliptical structuring element
radius = 21
# Obtain structuring element, then erode image
se = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (radius, radius))
# The image is already a 0/255 uint8 mask, so erode it directly
out = cv2.erode(im, se)
# Show the image, wait for user key, then close window and write image
cv2.imshow('Reduced shape', out)
cv2.waitKey(0)
cv2.destroyAllWindows()
cv2.imwrite('out.png', out)
We get:
Be advised that the small bump at the top right corner of your shape will mutate. As we are essentially shrinking the perimeter of the object, that bump will shrink as well. If you wish to preserve the structure of the object while maintaining the image resolution, use Mark Ransom's approach or my slightly modified version of it. Both are shown below.
However, to be self-contained, we can certainly do what Mark Ransom has suggested: resize the image, initialize a blank image that is the size of the original, and place the shrunken image in its centre:
import numpy as np # Import relevant packages - numpy and OpenCV
import cv2
im = cv2.imread('c8lfe.jpg', 0) # Read in the image - grayscale
scale_factor = 0.75 # Set scale factor - We are shrinking the image by 25%
# Get the desired size (row and columns) of the shrunken image
desired_size = np.floor(scale_factor*np.array(im.shape)).astype('int')
# Make sure desired size is ODD for easier placement
if desired_size[0] % 2 == 0:
desired_size[0] += 1
if desired_size[1] % 2 == 0:
desired_size[1] += 1
# Resize the image. Columns come first, followed by rows, which is why we
# reverse the desired_size array
rsz = cv2.resize(im, tuple(desired_size[::-1]))
# Determine half width of both dimensions of shrunken image
half_way = np.floor(desired_size/2.0).astype('int')
# Create output image that is the same size as the input and find its centre
out = np.zeros_like(im, dtype='uint8')
centre = np.floor(np.array(im.shape)/2.0).astype('int')
# Place shrunken image in the centre of the larger output image
out[centre[0]-half_way[0]:centre[0]+half_way[0]+1, centre[1]-half_way[1]:centre[1]+half_way[1]+1] = rsz
# Show the image, wait for user key, then close window and write image
cv2.imshow('Reduced shape', out)
cv2.waitKey(0)
cv2.destroyAllWindows()
cv2.imwrite('out.png', out)
We get:
Another suggestion
What I can also recommend is padding the array with zeroes, then shrinking the image back to its original size. You would essentially extend the borders of the original image so that the borders contain zeroes. In this case we do what Mark Ransom also suggested, but working from the inside out.
Here's a way to pad a matrix with zeroes using OpenCV C++: Pad array with zeros - OpenCV. However, in Python, simply use numpy's pad function:
import numpy as np # Import relevant packages - numpy and OpenCV
import cv2
# Read in image and threshold - convert to grayscale first
im = cv2.imread('c8lfe.jpg', 0)
# Set how many pixels along the border you want to add on each side
pad_radius = 75
# Pad the image
out = np.pad(im, pad_radius, 'constant', constant_values=0)
# Shrink it back to what the original size was
out = cv2.resize(out, im.shape[::-1])
# Show the image, wait for user key, then close window and write image
cv2.imshow('Reduced shape', out)
cv2.waitKey(0)
cv2.destroyAllWindows()
cv2.imwrite('out.png', out)
We thus get:
There exist several ways to evaluate an image: brightness, saturation, hue, intensity, contrast, etc. And we always hear about the operations of smoothing or sharpening an image. From this, there must exist a way to evaluate the overall smoothness of an image, and an exact way to figure out this value in one formula, probably based on wavelets. Or, even better, could anyone provide the MATLAB function, or a combination of functions, to directly calculate this value?
Thanks in advance!
Smoothness is a vague term. What considered smooth for one application might not be considered smooth for another.
In the common case, smoothness is a function of the color gradients. Take a 2D gradient of the 3 color channels, then take its magnitude, sqrt(dx^2 + dy^2), and average, sum, or apply some other function over the 3 channels. That gives you local smoothness, which you can then sum, average, or least-squares over the image.
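A minimal numpy sketch of that first-derivative measure (the function name is mine; it assumes a float H x W x 3 image scaled to [0, 1]):
import numpy as np
def gradient_smoothness(img):
    # 2D gradient of each channel, then the local gradient magnitude
    dy, dx = np.gradient(img, axis=(0, 1))
    mag = np.sqrt(dx ** 2 + dy ** 2)
    # Average over pixels and channels; lower means smoother
    return np.average(mag)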
A linear change in color is also smooth, however (think two-color gradients, or how light might be reflected from an object). For that, a second derivative is more suitable; a Laplacian does exactly that.
I've had much luck using the Laplacian operator to calculate smoothness in Python with the scipy/numpy libraries. Similar utilities exist for MATLAB and other tools.
Note that the resulting value isn't something absolute from the math books, you should only use it relative to itself and using constants you deem fit.
Specific how to:
First get scipy. If you are on Linux, it's available on PyPI. For Windows you'll have to use a precompiled version from here. You should open the image using scipy.ndimage.imread and then use scipy.ndimage.filters.laplace on the image you read. You don't actually have to mix the channels; you can simply call numpy.average and it should be close enough.
import numpy as np
import scipy.ndimage as ndi
path = 'image.png'  # hypothetical path to your image file
# Note: scipy's imread was deprecated and removed in SciPy 1.2;
# on newer versions read the image with imageio.imread instead
print(np.average(np.absolute(ndi.filters.laplace(ndi.imread(path).astype(float) / 255.0))))
This would give the average smoothness (for some meaning of smoothness) of the image. I use np.absolute since values can be positive or negative and we don't want them to even out when averaging. I convert to float and divide by 255 to have values between 0.0 and 1.0 instead of 0 to 255, since it's easier to work with.
If you want to see what the Laplacian found, you can use matplotlib:
import matplotlib.pyplot as plt
v = np.absolute(ndi.filters.laplace(ndi.imread(path).astype(float) / 255.0))
v2 = np.average(v, axis=2)  # Mix the channels down
plt.imshow(v2)
plt.figure()
plt.imshow(v2 > 0.05)
plt.show()
A very similar question, solved the same way: how to use 'extent' in matplotlib.pyplot.imshow
I have a list of geographical coordinates (a "tracklog") that describe a geographical trajectory. Also, I have the means of obtaining an image spanning the tracklog coverage, where I know the "geographical coordinates" of the corners of the image.
My plot currently looks like this (notice the ticks - x=longitudes, y=latitudes, in UTM, WGS84):
Then suppose I know the corner coordinates of the following image (or a version of it without the blue track), and would like to plot it SO THAT IT FITS THE COORDINATE SYSTEM of the plot.
How would I do it?
(as a side note, in case that matters, I plan to use tiles)
As per the comment of Joe Kington (waiting for his actual answer so that I can accept it), the following code works as expected, giving a pannable and zoomable fixed-aspect "georeferenced" tile over which I am able to plot tracklogs:
import matplotlib.pyplot as plt
from PIL import Image
import numpy
imarray = numpy.asarray(Image.open('map.jpg'))
plt.plot([0,1], [0,1], 'o', c='red', ms=20) ## some reference circles for debugging
plt.imshow(imarray, extent=[0,1,0,1]) ## some random map whose corners have known coordinates
plt.axis('equal')
plt.show()
There is really not much of an answer here, but if you are using matplotlib and you do geo-stuff, take a look at matplotlib.basemap.
By default all operations are done on UTM maps, but you can choose your own projection.
Also take a look at the list of good tutorials at http://www.geophysique.be, for example.
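A minimal basemap sketch, assuming mpl_toolkits.basemap is installed; the projection, corner coordinates, and tracklog points are made up:
from mpl_toolkits.basemap import Basemap
import matplotlib.pyplot as plt
# Mercator map around a hypothetical area of interest
m = Basemap(projection='merc', llcrnrlon=-10, llcrnrlat=35,
            urcrnrlon=5, urcrnrlat=45, resolution='i')
m.drawcoastlines()
# Project lon/lat tracklog points into map coordinates and plot them
x, y = m([-8.6, -3.7, 2.2], [41.1, 40.4, 41.4])
m.plot(x, y, '-o', color='red')
plt.show()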
I would like to plot a set of points using pyplot in matplotlib but have none of the points be on the edge of my axes. The autoscale (or something) sets the xlim and ylim such that often the first and last points lie at x = xmin or xmax making it difficult to read in some situations.
This is more often problematic with loglog() or semilogx() plots, because autoscale would like xmin and xmax to be exact powers of ten, but if my data contains only three points, e.g. at xdata = [10**2,10**3,10**4], then the first and last points will lie on the border of the plot.
Attempted Workaround
This is my solution to add a 10% buffer to either side of the graph. But is there a way to do this more elegantly or automatically?
from numpy import array, log10
from matplotlib.pyplot import *
xdata = array([10**2,10**3,10**4])
ydata = xdata**2
figure()
loglog(xdata,ydata,'.')
xmin,xmax = xlim()
xbuff = 0.1*log10(xmax/xmin)
xlim(xmin*10**(-xbuff),xmax*10**(xbuff))
I am hoping for a one- or two-line solution that I can easily use whenever I make a plot like this.
Linear Plot
To make clear what I'm doing in my workaround, I should add an example in linear space (instead of log space):
plot(xdata,ydata)
xmin,xmax = xlim()
xbuff = 0.1*(xmax-xmin)
xlim(xmin-xbuff,xmax+xbuff)
which is identical to the previous example but for a linear axis.
Limits too large
A related problem is that sometimes the limits are too large. Say my data is something like ydata = xdata**0.25, so that the variation in the range is much less than a decade but ends at exactly 10**1. Then the autoscaled ylim is 10**0 to 10**1, though the data are only in the top portion of the plot. Using my workaround above, I can increase ymax so that the third point is fully within the limits, but I don't know how to increase ymin so that there is less whitespace at the lower portion of my plot. That is, I don't always want to spread my limits apart, but would just like to have some constant (or proportional) buffer around all my points.
@askewchan I just successfully managed to change the matplotlib settings by editing the matplotlibrc configuration file and running Python directly from the terminal. I don't know the reason yet, but matplotlibrc does not work when I run Python from Spyder 3 (my IDE). Just follow the steps here: matplotlib.org/users/customizing.html.
1) Solution one (default for all plots)
Try putting this in matplotlibrc and you will see the buffer increase:
axes.xmargin : 0.1 # x margin. See `axes.Axes.margins`
axes.ymargin : 0.1 # y margin See `axes.Axes.margins`
Values must be between 0 and 1.
Note: due to bugs, the scale is not working correctly yet. It will be fixed in matplotlib 1.5 (mine is still 1.4.3...). More info:
axes.xmargin/ymargin rcParam behaves differently than pyplot.margins() #2298
Better auto-selection of axis limits #4891
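If you prefer not to edit matplotlibrc, the same defaults can also be set from inside a script via rcParams; a minimal sketch:
import matplotlib.pyplot as plt
# Same effect as the matplotlibrc entries above, but per-script
plt.rcParams['axes.xmargin'] = 0.1
plt.rcParams['axes.ymargin'] = 0.1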
2) Solution two (individually for each plot inside the code)
There is also the margins function (to put directly in the code). Example:
import numpy as np
from matplotlib import pyplot as plt
t = np.linspace(-6,6,1000)
plt.plot(t,np.sin(t))
plt.margins(x=0.1, y=0.1)
plt.savefig('plot.png')
Note: here the scale works correctly (0.1 adds a 10% buffer before and after the x-range and the y-range).
A similar question was posed to the matplotlib-users list earlier this year. The most promising solution involves implementing a Locator (based on MaxNLocator in this case) to override MaxNLocator.view_limits.
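A rough sketch of that idea (PaddedLocator is a hypothetical name; it lets MaxNLocator pick round limits and then pads them by 10% on each side):
import matplotlib.pyplot as plt
from matplotlib.ticker import MaxNLocator
class PaddedLocator(MaxNLocator):
    def view_limits(self, dmin, dmax):
        # Let MaxNLocator choose nice round limits, then pad them
        vmin, vmax = MaxNLocator.view_limits(self, dmin, dmax)
        pad = 0.1 * (vmax - vmin)
        return vmin - pad, vmax + pad
fig, ax = plt.subplots()
ax.plot([1, 2, 3], [1, 4, 9], 'o')
ax.xaxis.set_major_locator(PaddedLocator())
ax.autoscale_view()  # recompute the view limits with the new locator
plt.show()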