What is the fastest way to draw circles in python and save as png? - performance

Background
I'm optimizing a project.
Profiling the code I found that 50% of the time is spend on a function in which a set of circles (different radii, colors and locations) are drawn to a choosen sector of fixed size (white canvas) if their center corrdinates is within the sector bounds. Depending on the usage the function saves the figure as a png and returns the path or returns the image as an numpy array.
The build-in method matplotlib._png.write_png from savefig is the most expensive But there is also some overhead from creating the figures, etc.
Generally the code is used with multiprocessing / parallel programming.
Example output
Code
import matplotlib.pyplot as plt
import numpy as np
import cv2

def get_top_view(sector, circles, file_path, save_image_flag):
    # get the sector bounds.
    x_low, y_low, x_high, y_high = get_sector_bounds(sector)
    # init figure
    fig, ax = plt.subplots()
    ax.set_xlim(y_low, y_high)
    ax.set_ylim(x_low, x_high)
    ax.set_yticklabels([])
    ax.set_xticklabels([])
    ax.set_yticks([])
    ax.set_xticks([])
    ax.set_aspect('equal')
    ax.axis('off')
    shapes = []  # keep references to the drawn artists
    # c is a circle object with all relevant data (center coordinates,
    # radius, RGB color tuple)
    for c in circles:
        if x_low <= c.x_coord <= x_high and y_low <= c.y_coord <= y_high:
            shape = plt.Circle((c.x_coord, c.y_coord), c.radius, color=c.color)
            shape_plot = ax.add_artist(shape)
            shapes.append(shape_plot)
    plt.gca().invert_yaxis()
    if save_image_flag:
        plt.savefig(file_path + '_cc.png', bbox_inches='tight', pad_inches=0.02)
        plt.close()
        return file_path
    else:
        ax.margins(0)
        fig.tight_layout()
        fig.canvas.draw()
        image_from_plot = np.frombuffer(fig.canvas.tostring_rgb(), dtype=np.uint8)
        image_from_plot = image_from_plot.reshape(
            fig.canvas.get_width_height()[::-1] + (3,))
        image_from_plot = image_from_plot[:, 13:-14]
        resized = cv2.resize(image_from_plot, (499, 391))
        cropped = resized[78:-78]
        plt.close()
        return cropped
Questions
There is the issue that the array version and the png image are slightly different; I think that relates to the DPI of the image. I want to fix that, and I'm considering different options for speeding up this function:
Speed up the process while keeping matplotlib, similar to this example from GitHub.
Get rid of matplotlib and draw with Pillow, e.g. something like:
from PIL import Image, ImageDraw
import numpy as np

def get_top_view(sector, circles, file_path, save_image_flag):
    # get the sector bounds.
    x_low, y_low, x_high, y_high = get_sector_bounds(sector)
    im = Image.new('RGB', (499, 235), (255, 255, 255))
    draw = ImageDraw.Draw(im)
    # there needs to be some rescaling so that the coordinates match,
    # which I don't account for at the moment (see the sketch below).
    for c in circles:
        if x_low <= c.x_coord <= x_high and y_low <= c.y_coord <= y_high:
            draw.ellipse((c.x_coord - c.radius, c.y_coord - c.radius,
                          c.x_coord + c.radius, c.y_coord + c.radius),
                         fill=c.color)
    if save_image_flag:
        im.save(file_path + '.png')
        return file_path
    else:
        image_as_array = np.asarray(im)  # PIL images convert directly to arrays
        return image_as_array
A different approach that is faster (and reasonably convenient)...
I'd be glad for any feedback on the two issues.
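Regarding the rescaling mentioned in the Pillow snippet above, here is a minimal sketch of a linear world-to-pixel mapping; the helper name and the fixed 499x235 canvas size are my assumptions, not part of the original code:

def world_to_pixel(x, y, bounds, size=(499, 235)):
    # hypothetical linear mapping from sector coordinates to canvas pixels;
    # the matplotlib version swaps the axes (xlim gets the y-bounds) and
    # inverts the y-axis, so a faithful port would mirror both
    x_low, y_low, x_high, y_high = bounds
    sx = size[0] / (x_high - x_low)
    sy = size[1] / (y_high - y_low)
    return (x - x_low) * sx, (y - y_low) * sy

# inside the drawing loop the radius has to be scaled as well, e.g.:
# px, py = world_to_pixel(c.x_coord, c.y_coord, (x_low, y_low, x_high, y_high))
# pr = c.radius * sx  # exact only if sx == sy, i.e. equal aspect ratio
# draw.ellipse((px - pr, py - pr, px + pr, py + pr), fill=c.color)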

I'll share my findings here. I'm calling the function 2200 times for each output type within the simulation framework. All computation is serial, as the multiprocessing code is not part of the example.
There are 220 sectors with 200 circles randomly distributed amongst the sectors. The simulation runs for 10 steps, in which the circles' radii get updated and new figures are drawn. Hence, 2200 calls to the function.
Generating and saving png images previously took 293483 ms.
Generating numpy arrays previously took 92715 ms.
Speedup with Matplotlib
Generating and saving images (save_image_flag = True) now takes 126485ms.
Generating numpy arrays now takes 57029ms.
import io
import matplotlib.pyplot as plt
import numpy as np

def get_top_view(sector, circles, file_path, save_image_flag):
    # get the sector bounds.
    x_low, y_low, x_high, y_high = get_sector_bounds(sector)
    # init figure
    fig, ax = plt.subplots()
    ax.set_xlim(y_low, y_high)
    ax.set_ylim(x_low, x_high)
    # These were unnecessary
    # ax.set_yticklabels([])
    # ax.set_xticklabels([])
    # ax.set_yticks([])
    # ax.set_xticks([])
    ax.set_aspect('equal')
    ax.axis('off')
    shapes = []  # keep references to the drawn artists
    # c is a circle object with all relevant data (center coordinates,
    # radius, RGB color tuple)
    for c in circles:
        if x_low <= c.x_coord <= x_high and y_low <= c.y_coord <= y_high:
            shape = plt.Circle((c.x_coord, c.y_coord), c.radius, color=c.color)
            shape_plot = ax.add_artist(shape)
            shapes.append(shape_plot)
    plt.gca().invert_yaxis()
    # I added this to get rid of the discrepancies between the generated image
    # and the numpy array, as bbox_inches='tight' only applies to the saved image.
    bbox0 = fig.get_tightbbox(fig.canvas.get_renderer()).padded(0.02)
    if save_image_flag:
        plt.savefig(file_path + '_cc.png', bbox_inches=bbox0)
        plt.close()
        return file_path
    else:
        buf = io.BytesIO()
        fig.savefig(buf, format="rgba", dpi=100, bbox_inches=bbox0)
        buf.seek(0)
        img = np.reshape(np.frombuffer(buf.getvalue(), dtype=np.uint8),
                         newshape=(235, 499, -1))
        img = img[..., :3]
        buf.close()
        plt.close()
        return img
Speedup without Matplotlib
Research is in progress...
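A first sketch in that direction: drawing the circles straight into a numpy array with OpenCV avoids the figure overhead entirely. This is a hypothetical variant (function name and canvas size are my assumptions; cv2.circle wants integer pixel coordinates and BGR colors in the 0-255 range):

import cv2
import numpy as np

def get_top_view_cv(circles, bounds, file_path, save_image_flag, size=(499, 235)):
    # hypothetical OpenCV variant: white canvas, no matplotlib figure overhead
    x_low, y_low, x_high, y_high = bounds
    canvas = np.full((size[1], size[0], 3), 255, dtype=np.uint8)
    sx = size[0] / (x_high - x_low)
    sy = size[1] / (y_high - y_low)
    for c in circles:
        if x_low <= c.x_coord <= x_high and y_low <= c.y_coord <= y_high:
            px = int((c.x_coord - x_low) * sx)
            py = int((c.y_coord - y_low) * sy)
            # OpenCV uses BGR, so an RGB tuple has to be reversed; 0-1 float
            # colors (matplotlib style) would need scaling to 0-255 first
            cv2.circle(canvas, (px, py), int(c.radius * sx), c.color[::-1], -1)
    if save_image_flag:
        cv2.imwrite(file_path + '.png', canvas)
        return file_path
    return canvas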

Related

Fast Radial Symmetry Transform (FRST) implementation (python) results in unusual cross-hair looking artifacts

I am trying to implement FRST in Python to detect centroids of elliptical objects (e.g. cells in microscopy images), but my implementation does not find the seed points (more or less the center points) of the elliptical objects. This effort comes from duplicating FRST from Segmentation of Overlapping Elliptical Objects in Silhouette Images (https://ieeexplore.ieee.org/document/7300433). I don't know why I get these artifacts. An interesting thing is that I see these patterns (crosses) all in the same direction per object. Any pointer in the right direction to reproduce the result from the paper (just finding the seed points) would be most welcome.
Original Paper: A Fast Radial Symmetry Transform for Detecting Points of Interest by Loy and Zelinsky (ECCV 2002)
I have also tried the pre-existing python package for FRST: https://pypi.org/project/frst/. This somehow results in the same artifacts. Weird.
First image: Original Image
Second image: Sobel-operated Image
Third image: Magnitude Projection Image
Fourth image: Magnitude Projection Image with positively affected pixels only
Fifth image: FRST'd image: end-product with original image overlaid (shadowed)
Sixth image: FRST'd image by the pre-existing python package with original image overlaid (shadowed).
from scipy.ndimage import gaussian_filter
from scipy.signal import convolve
import numpy as np

# Get orientation projection image
def get_proj_img(image, radius):
    workingDims = tuple((e + 2*radius) for e in image.shape)
    h, w = image.shape
    ori_img = np.zeros(workingDims)  # Orientation Projection Image
    mag_img = np.zeros(workingDims)  # Magnitude Projection Image
    # Kernels for the Sobel operator
    a1 = np.array([1, 2, 1])
    a2 = np.array([-1, 0, 1])
    Kx = np.outer(a1, a2)
    Ky = np.outer(a2, a1)
    # Apply the Sobel operator
    sobel_x = convolve(image, Kx)
    sobel_y = convolve(image, Ky)
    sobel_norms = np.hypot(sobel_x, sobel_y)
    # Distances to afpx, afpy (affected pixels)
    dist_afpx = np.multiply(np.divide(sobel_x, sobel_norms, out=np.zeros(sobel_x.shape),
                                      where=sobel_norms != 0), radius)
    dist_afpx = np.round(dist_afpx).astype(int)
    dist_afpy = np.multiply(np.divide(sobel_y, sobel_norms, out=np.zeros(sobel_y.shape),
                                      where=sobel_norms != 0), radius)
    dist_afpy = np.round(dist_afpy).astype(int)
    for cords, sobel_norm in np.ndenumerate(sobel_norms):
        i, j = cords
        pos_aff_pix = (i + dist_afpx[i, j], j + dist_afpy[i, j])
        neg_aff_pix = (i - dist_afpx[i, j], j - dist_afpy[i, j])
        ori_img[pos_aff_pix] += 1
        ori_img[neg_aff_pix] -= 1
        mag_img[pos_aff_pix] += sobel_norm
        mag_img[neg_aff_pix] -= sobel_norm
    ori_img = ori_img[:h, :w]
    mag_img = mag_img[:h, :w]
    print("Did it go back to the original image size? ")
    print(ori_img.shape == image.shape)
    # try normalizing ori and mag img
    return ori_img, mag_img

def get_sn(ori_img, mag_img, radius, kn, alpha):
    ori_img_limited = np.minimum(ori_img, kn)
    fn = np.multiply(np.divide(mag_img, kn),
                     np.power((np.absolute(ori_img_limited) / kn), alpha))
    # convolve fn with a Gaussian filter.
    sn = gaussian_filter(fn, 0.25 * radius)
    return sn

def do_frst(image, radius, kn, alpha, ksize=3):
    ori_img, mag_img = get_proj_img(image, radius)
    sn = get_sn(ori_img, mag_img, radius, kn, alpha)
    return sn
Parameters:
radius = 50
kn = 10
alpha = 2
beta = 0
stdfactor = 0.25
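A side note on the accumulation loop in get_proj_img: it can be vectorized with np.add.at / np.subtract.at, which accumulate correctly even when indices repeat. This sketch reproduces the same logic (including the wrap-around of negative indices), so it only helps speed, not the artifacts:

import numpy as np

def accumulate_projections(sobel_norms, dist_afpx, dist_afpy, working_dims):
    # vectorized equivalent of the ndenumerate loop in get_proj_img
    ori_img = np.zeros(working_dims)
    mag_img = np.zeros(working_dims)
    ii, jj = np.indices(sobel_norms.shape)
    pos = (ii + dist_afpx, jj + dist_afpy)
    neg = (ii - dist_afpx, jj - dist_afpy)
    # ufunc.at handles repeated indices; negative indices wrap around,
    # exactly like the original loop
    np.add.at(ori_img, pos, 1)
    np.subtract.at(ori_img, neg, 1)
    np.add.at(mag_img, pos, sobel_norms)
    np.subtract.at(mag_img, neg, sobel_norms)
    return ori_img, mag_img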

how can I keep the last frame when I use matplotlib animation function with blit = True

I have to set blit=True, since the plotting is much faster. But after the animation finishes (repeat=False), if I zoom in the figure, the figure just disappears. I need to keep the last frame so that I can zoom into it.
Thanks!
One workaround is to initialize the animation using the last frame. The obvious downside is that you have to precompute the last frame. Adapting this example would be:
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.animation as animation

fig, ax = plt.subplots()
x = np.arange(0, 2*np.pi, 0.01)
line, = ax.plot(x, np.sin(x))
cnt = 50  # Define so we know what the last frame will be.

def init():
    # Note the function is the same as `animate` with `i` set to the last value
    line.set_ydata(np.sin(x + cnt / 100))
    return line,

def animate(i):
    line.set_ydata(np.sin(x + i / 100))  # update the data.
    return line,

ani = animation.FuncAnimation(
    fig, animate, init_func=init, interval=2, blit=True, save_count=cnt)
ani.save("mwe.mov")
fig.savefig("mwe.png")

Matplotlib animate space vs time plot

I'm currently working on traffic jam analysis and was wondering if there's a way to animate the generation of a plot of such jams.
Such a plot grows from the top to the lower end of the figure; each 'row' is a time instance. The horizontal axis is just the road, indicating at each point the position of each vehicle and, as a numeric value, its velocity. Applying different colors to different velocities, you get a plot that shows how a jam evolves through time on a given road.
My question is, how can I use matplotlib to generate an animation of each instance of the road in time to get such a plot?
The plot is something like this:
I'm simulating a road with vehicles with certain velocities through time, so I wish to animate a plot showing how the traffic jams evolve...
EDIT:
I've added some code to make clear what I'm already doing:
import numpy as np
from matplotlib import pyplot as plt
from matplotlib import animation, rc

plt.rcParams['animation.ffmpeg_path'] = u'/usr/bin/ffmpeg'

# model params
vmax = 5
lenroad = 50
prob = 0.4
# sim params
numiters = 10

# traffic model
def nasch():
    gaps = np.full(road.shape, -1)
    road_r4 = np.full(road.shape, -1)
    for n, x in enumerate(road):
        if x > -1:
            d = 1
            while road[(n+d) % len(road)] < 0:
                d += 1
            d -= 1
            gaps[n] = d
    road_r1 = np.where(road != -1, np.minimum(road+1, vmax), -1)
    road_r2 = np.where(road_r1 != -1, np.minimum(road_r1, gaps), -1)
    road_r3 = np.where(road_r2 != -1, np.where(np.random.rand() < prob, np.maximum(road-1, 0), road), -1)
    for n, x in enumerate(road_r3):
        if x > -1:
            road_r4[(n+x) % len(road_r3)] = x
    return road_r4

def plot_nasch(*args):
    road = nasch()
    plot.set_array([road])
    return plot,

# init road
road = np.random.randint(-10, vmax+1, [lenroad])
road = np.where(road > -1, road, -1)

# simulate
fig = plt.figure()
plot = plt.imshow([road], cmap='Pastel2', interpolation='nearest')
for i in range(numiters):
    ani = animation.FuncAnimation(fig, plot_nasch, frames=100, interval=500, blit=True)
plt.show()
And I get the following figure: just one road, instead of each road painted at the bottom of the previous one:
This is possibly what you want, although I'm not sure why you want to animate the time, since time is already one of the axes in the plot.
The idea here is to store the simulation result of each time step row by row in an array and replot this array. That way, previous simulation results are not lost.
import numpy as np
from matplotlib import pyplot as plt
from matplotlib import animation, rc

# model params
vmax = 5
lenroad = 50
prob = 0.4
# sim params
numiters = 25

# traffic model
def nasch():
    global road
    gaps = np.full(road.shape, -1)
    road_r4 = np.full(road.shape, -1)
    for n, x in enumerate(road):
        if x > -1:
            d = 1
            while road[(n+d) % len(road)] < 0:
                d += 1
            d -= 1
            gaps[n] = d
    road_r1 = np.where(road != -1, np.minimum(road+1, vmax), -1)
    road_r2 = np.where(road_r1 != -1, np.minimum(road_r1, gaps), -1)
    road_r3 = np.where(road_r2 != -1, np.where(np.random.rand() < prob, np.maximum(road-1, 0), road), -1)
    for n, x in enumerate(road_r3):
        if x > -1:
            road_r4[(n+x) % len(road_r3)] = x
    return road_r4

def plot_nasch(i):
    print(i)
    global road
    road = nasch()
    # store result in array
    road_over_time[i+1, :] = road
    # plot complete array
    plot.set_array(road_over_time)

# init road
road = np.random.randint(-10, vmax+1, [lenroad])
road = np.where(road > -1, road, -1)

# initiate array
road_over_time = np.zeros((numiters+1, lenroad)) * np.nan
road_over_time[0, :] = road

fig = plt.figure()
plot = plt.imshow(road_over_time, cmap='Pastel2', interpolation='nearest', vmin=-1.5, vmax=6.5)
plt.colorbar()

ani = animation.FuncAnimation(fig, plot_nasch, frames=numiters, init_func=lambda: 1,
                              interval=400, blit=False, repeat=False)
plt.show()

Plot over an image background in MATLAB

I'd like to plot a graph over an image. I followed this tutorial to Plot over an image background in MATLAB and it works fine:
% replace with an image of your choice
img = imread('myimage.png');
% set the range of the axes
% The image will be stretched to this.
min_x = 0;
max_x = 8;
min_y = 0;
max_y = 6;
% make data to plot - just a line.
x = min_x:max_x;
y = (6/8)*x;
imagesc([min_x max_x], [min_y max_y], img);
hold on;
plot(x,y,'b-*','linewidth',1.5);
But when I apply the procedure to my study case, it doesn't work. I'd like to do something like:
I = imread('img_png.png'); % here I load the image
DEM = GRIDobj('srtm_bigtujunga30m_utm11.tif');
FD = FLOWobj(DEM,'preprocess','c');
S = STREAMobj(FD,flowacc(FD)>1000);
% with the last 3 lines I calculated the stream network on a geographic area using the TopoToolBox
imagesc(I);
hold on
plot(S)
The aim is to plot the stream network over the satellite image of the same area.
The only difference between the two examples that keeps the code from working is the plot line: in the first case plot(x,y) works, in the other plot(S) doesn't.
Thanks guys.
This is the satellite image, imagesc(I)
It is possible that the plot method of the STREAMobj performs its own custom plotting, including creating new figures and axes, toggling hold states, etc. Because you can't easily control what its plot routine does, it's likely easier to flip the order of your plotting so that you plot your stuff after the toolbox plots the STREAMobj. This way you have complete control over how your image is added.
% Plot the STREAMobj
hlines = plot(S);
% Make sure we plot on the same axes
hax = ancestor(hlines, 'axes');
% Make sure that we can add more plot objects
hold(hax, 'on')
% Plot your image data on the same axes
imagesc(I, 'Parent', hax)
Maybe I am preaching to the choir or overlooking something here, but the example you used actually mapped the image to the data range of the plot, hence the lines:
% set the range of the axes
% The image will be stretched to this.
min_x = 0;
max_x = 8;
min_y = 0;
max_y = 6;
imagesc([min_x max_x], [min_y max_y], img);
whereas you plot your image directly:
imagesc(I);
If your data coordinates and your image coordinates are vastly different, you will only see one or the other.
Thanks guys, I solved it this way:
I = imread('orto.png'); % satellite image loading
DEM = GRIDobj('demF1.tif');
FD = FLOWobj(DEM,'preprocess','c');
S = STREAMobj(FD,flowacc(FD)>1000); % Stream network extraction
x = S.x; % [node attribute] x-coordinate vector
y = S.y; % [node attribute] y-coordinate vector
min_x = min(x);
max_x = max(x);
min_y = min(y);
max_y = max(y);
imagesc([min_x max_x], [min_y max_y], I);
hold on
plot(S);
Here's the resulting image: stream network over the satellite image
The stream network doesn't match the satellite image only because I'm temporarily using a different image and DEM.

How to extract color shade from a given sample image to convert another image using color of sample image?

I have a sample image and a target image. I want to transfer the color shades of the sample image to the target image. Please tell me how to extract the color from the sample image.
Here the images:
input source image:
input map for desired output image
output image
You can use a technique called "histogram matching" (another description).
Basically, you use the histogram of your source image as a goal and transform the values of each input map pixel to get the output histogram as close to the source as possible. You do this for each RGB channel of the image.
Here is my Python code for that:
from scipy.misc import imsave, imread
import numpy as np

imsrc = imread("source.jpg")
imtint = imread("tint_target.jpg")
nbr_bins = 255
imres = imsrc.copy()
for d in range(3):
    imhist, bins = np.histogram(imsrc[:, :, d].flatten(), nbr_bins, normed=True)
    tinthist, bins = np.histogram(imtint[:, :, d].flatten(), nbr_bins, normed=True)
    cdfsrc = imhist.cumsum()  # cumulative distribution function
    cdfsrc = (255 * cdfsrc / cdfsrc[-1]).astype(np.uint8)  # normalize
    cdftint = tinthist.cumsum()  # cumulative distribution function
    cdftint = (255 * cdftint / cdftint[-1]).astype(np.uint8)  # normalize
    # map the source values through the source CDF, then through the
    # inverse of the tint CDF
    im2 = np.interp(imsrc[:, :, d].flatten(), bins[:-1], cdfsrc)
    im3 = np.interp(im2, cdftint, bins[:-1])
    imres[:, :, d] = im3.reshape((imsrc.shape[0], imsrc.shape[1]))
imsave("histnormresult.jpg", imres)
The output for your samples will look like this:
You could also try doing the same in HSV colorspace; it might give better results.
I think the hardest part is to determine the dominant color of the first image. Just looking at it, with all the highlights and shadows, the best overall color will be the one that has the highest combination of brightness and saturation. I start with a blurred image to reduce the effects of noise and other anomalies, then convert each pixel to the HSV color space for the brightness and saturation measurement. Here's how it looks in Python with PIL and colorsys:
import colorsys
from PIL import ImageFilter

# im1 is the source image, loaded with PIL
blurred = im1.filter(ImageFilter.BLUR)
ld = blurred.load()
max_hsv = (0, 0, 0)
for y in range(blurred.size[1]):
    for x in range(blurred.size[0]):
        r, g, b = tuple(c / 255. for c in ld[x, y])
        h, s, v = colorsys.rgb_to_hsv(r, g, b)
        if s + v > max_hsv[1] + max_hsv[2]:
            max_hsv = h, s, v
r, g, b = tuple(int(c * 255) for c in colorsys.hsv_to_rgb(*max_hsv))
For your image I get a color of (210, 61, 74) which looks like:
From that point it's just a matter of transferring the hue and saturation to the other image.
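That transfer step is not shown above; here is a minimal sketch of one way to do it with the same PIL/colorsys approach, keeping each target pixel's own brightness (the helper and its name are mine, not from the answer):

import colorsys
from PIL import Image

def apply_hue_saturation(im, h, s):
    # hypothetical helper: recolor an image with the dominant hue/saturation
    # found above, keeping each pixel's own brightness (value)
    out = Image.new('RGB', im.size)
    src = im.load()
    dst = out.load()
    for y in range(im.size[1]):
        for x in range(im.size[0]):
            r, g, b = (c / 255. for c in src[x, y][:3])
            v = colorsys.rgb_to_hsv(r, g, b)[2]
            dst[x, y] = tuple(int(c * 255)
                              for c in colorsys.hsv_to_rgb(h, s, v))
    return out

# e.g.: recolored = apply_hue_saturation(target_im, max_hsv[0], max_hsv[1])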
The histogram matching solutions above did not work for me. Here is my own, based on OpenCV:
import cv2
import numpy as np

def match_image_histograms(image, reference):
    chans1 = cv2.split(image)
    chans2 = cv2.split(reference)
    new_chans = []
    for ch1, ch2 in zip(chans1, chans2):
        hist1 = cv2.calcHist([ch1], [0], None, [256], [0, 256])
        hist1 /= hist1.sum()
        hist2 = cv2.calcHist([ch2], [0], None, [256], [0, 256])
        hist2 /= hist2.sum()
        lut = np.searchsorted(hist1.cumsum(), hist2.cumsum())
        # cv2.LUT needs an 8-bit lookup table
        new_chans.append(cv2.LUT(ch1, lut.astype('uint8')))
    return cv2.merge(new_chans).astype('uint8')
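For completeness, a hypothetical usage example (file names are placeholders):

import cv2

src = cv2.imread("source.jpg")       # image whose histogram we adjust
ref = cv2.imread("tint_target.jpg")  # image providing the target histogram
matched = match_image_histograms(src, ref)
cv2.imwrite("matched.jpg", matched)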
1. obtain the average color from the color map
2. ignore saturated white/black colors
3. convert the light map to grayscale
4. change the dynamic range of the light map to match your desired output (I use the maximum dynamic range; you could compute the range of the color map and set it for the light map)
5. multiply the light map by the average color
This is how it looks:
And this is the C++ source code
//picture pic0,pic1,pic2;
// pic0 - source color
// pic1 - source light map
// pic2 - output
int x,y,rr,gg,bb,i,i0,i1;
double r,g,b,a;
// init output as source light map in grayscale i=r+g+b
pic2=pic1;
pic2.rgb2i();
// change light map dynamic range to maximum
i0=pic2.p[0][0].dd; // min
i1=pic2.p[0][0].dd; // max
for (y=0;y<pic2.ys;y++)
for (x=0;x<pic2.xs;x++)
{
i=pic2.p[y][x].dd;
if (i0>i) i0=i;
if (i1<i) i1=i;
}
for (y=0;y<pic2.ys;y++)
for (x=0;x<pic2.xs;x++)
{
i=pic2.p[y][x].dd;
i=(i-i0)*767/(i1-i0);
pic2.p[y][x].dd=i;
}
// extract average color from color map (normalized to unit vecotr)
for (r=0.0,g=0.0,b=0.0,y=0;y<pic0.ys;y++)
for (x=0;x<pic0.xs;x++)
{
rr=BYTE(pic0.p[y][x].db[picture::_r]);
gg=BYTE(pic0.p[y][x].db[picture::_g]);
bb=BYTE(pic0.p[y][x].db[picture::_b]);
i=rr+gg+bb;
if (i<400) // ignore saturated colors (whiteish) 3*255=white
if (i>16) // ignore too dark colors (whiteish) 0=black
{
r+=rr;
g+=gg;
b+=bb;
}
}
a=1.0/sqrt((r*r)+(g*g)+(b*b)); r*=a; g*=a; b*=a;
// recolor output
for (y=0;y<pic2.ys;y++)
for (x=0;x<pic2.xs;x++)
{
a=DWORD(pic2.p[y][x].dd);
rr=r*a; if (rr>255) rr=255; pic2.p[y][x].db[picture::_r]=BYTE(rr);
gg=g*a; if (gg>255) gg=255; pic2.p[y][x].db[picture::_g]=BYTE(gg);
bb=b*a; if (bb>255) bb=255; pic2.p[y][x].db[picture::_b]=BYTE(bb);
}
I am using own picture class so here some members:
xs,ys size of image in pixels
p[y][x].dd is pixel at (x,y) position as 32 bit integer type
p[y][x].db[4] is pixel access by color bands (r,g,b,a)
[notes]
If this does not meet your needs then please specify more and add more images. Because your current example is really not self explanatonary
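Since the question is tagged Python, here is a rough numpy sketch of the same pipeline, assuming 8-bit RGB arrays (the function name is mine, not from the answer above):

import numpy as np

def recolor_with_average(color_map, light_map):
    # rough numpy port of the C++ above; assumes 8-bit RGB numpy arrays
    # 1. grayscale intensity of the light map, stretched to the 0..767 range
    i = light_map.astype(float).sum(axis=2)
    i = (i - i.min()) * 767.0 / (i.max() - i.min())
    # 2. average color of the color map, ignoring near-black/near-white pixels
    s = color_map.astype(float).sum(axis=2)
    mask = (s > 16) & (s < 400)
    avg = color_map[mask].astype(float).sum(axis=0)
    avg /= np.sqrt((avg ** 2).sum())  # normalize to a unit vector
    # 3. multiply the stretched light map by the average color and clip
    out = np.clip(i[..., None] * avg[None, None, :], 0, 255)
    return out.astype(np.uint8)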
Regarding the previous answer, one thing to be careful with: once the CDF reaches its maximum, the interpolation gets misled and matches your values wrongly. To avoid this, you should provide the interpolation function only the meaningful part of the CDF (before it reaches its maximum) and the corresponding bins. Here is the answer adapted:
from scipy.misc import imsave, imread
import numpy as np

imsrc = imread("source.jpg")
imtint = imread("tint_target.jpg")
nbr_bins = 255
imres = imsrc.copy()
for d in range(3):
    imhist, bins = np.histogram(imsrc[:, :, d].flatten(), nbr_bins, normed=True)
    tinthist, bins = np.histogram(imtint[:, :, d].flatten(), nbr_bins, normed=True)
    cdfsrc = imhist.cumsum()  # cumulative distribution function
    cdfsrc = (255 * cdfsrc / cdfsrc[-1]).astype(np.uint8)  # normalize
    cdftint = tinthist.cumsum()  # cumulative distribution function
    cdftint = (255 * cdftint / cdftint[-1]).astype(np.uint8)  # normalize
    im2 = np.interp(imsrc[:, :, d].flatten(), bins[:-1], cdfsrc)
    # the CDF is scaled to 0..255 here, so saturation means a value of 255
    if (cdftint == 255).sum() > 0:
        idx_max = np.where(cdftint == 255)[0][0]
        im3 = np.interp(im2, cdftint[:idx_max+1], bins[:idx_max+1])
    else:
        im3 = np.interp(im2, cdftint, bins[:-1])
    imres[:, :, d] = im3.reshape((imsrc.shape[0], imsrc.shape[1]))
imsave("histnormresult.jpg", imres)
Enjoy!
