Related
I am trying to animate a simulation and want to display the simulation time in each frame. I have written the following code:
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.animation as animation

fig, ax = plt.subplots()
ims = []
for i in range(40):
    im = plt.imshow(np.log10(D[0, i, :, :]), cmap=plt.get_cmap("Spectral"),
                    extent=[0, 28, 0, 14], animated=True)
    plt.text(10, 2, "t=" + str(t[i]) + "Myr", c='w', fontsize='large')
    ims.append([im])
ani = animation.ArtistAnimation(fig, ims, interval=50, blit=True,
                                repeat_delay=1000)
ani.save("rhdjet1.mp4")
plt.show()
But all the text gets dumped at once at the beginning. This is a still from the animation; the gibberish in white is the overlaid text.
How can I correct this?
This can be done using the celluloid package in Python:
import matplotlib.pyplot as plt
from celluloid import Camera

# Create the matplotlib figure and camera object
fig = plt.figure()
camera = Camera(fig)

# Loop over the data and capture a frame at each iteration
for i in range(0, 90):
    plt.imshow(K[i])
    # plt.text needs x/y coordinates as well as the string
    plt.text(10, 2, "t=" + str(t[i]), c='w')
    camera.snap()

# Create the animation from the captured frames
animation = camera.animate(interval=200, repeat=True,
                           repeat_delay=500)
animation.save("restart.mp4")
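For reference, the original ArtistAnimation approach from the question also works if the timestamp is created as its own artist and stored in the per-frame artist list, so each frame shows only its own text. A minimal sketch, assuming D and t as defined in the question:
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.animation as animation

fig, ax = plt.subplots()
ims = []
for i in range(40):
    im = ax.imshow(np.log10(D[0, i, :, :]), cmap="Spectral",
                   extent=[0, 28, 0, 14], animated=True)
    # The timestamp is a separate artist; listing it together with the image
    # makes ArtistAnimation draw (and clear) both for every frame.
    txt = ax.text(10, 2, "t=" + str(t[i]) + "Myr", c='w', fontsize='large')
    ims.append([im, txt])
ani = animation.ArtistAnimation(fig, ims, interval=50, blit=True,
                                repeat_delay=1000)
ani.save("rhdjet1.mp4")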
I have an RGB image, from which I immediately take the red component. I then convert the resulting grayscale image into bytes and display it in a Graph element using draw_image. However, only the background is shown and the red-component image is not displayed. Let img be my RGB image. Here is my code:
import cv2
import PySimpleGUI as sg
from PIL import Image, ImageTk

r, g, b = cv2.split(img)
data = bytes(Image.fromarray(r).tobytes())
width = len(b)
length = len(b[0])

layout = [[sg.Graph(
    canvas_size=(length, width),
    graph_bottom_left=(0, 0),
    graph_top_right=(length, width),
    key="-GRAPH-",
    change_submits=True,
    background_color='black',
    drag_submits=True)]]

window = sg.Window(layout, finalize=True)
window.Maximize()
graph = window["-GRAPH-"]
graph.draw_image(data=data, location=(0, width))

while True:
    event, values = window.read()
    if event == sg.WIN_CLOSED:
        break
The result is nothing but a black background. I have checked that the image img and the red component r are both correct (i.e. statements like imshow give the right image). The problem therefore lies in either the line data = bytes(Image.fromarray(r).tobytes()) or graph.draw_image(data=data, location=(0, width)). However, both seem correct to me. What am I missing? Are there any workarounds? As a side note, I am not allowed to save any images.
Image.tobytes(encoder_name='raw', *args)
This method returns the raw image data from the internal storage. For compressed image data (e.g. PNG, JPEG) use save(), with a BytesIO parameter for in-memory data.
import io
import cv2
import PySimpleGUI as sg
from PIL import Image, ImageTk

img = cv2.imread('D:/images.jpg')
r, g, b = cv2.split(img)

im = Image.fromarray(r)
width, height = im.size

# Encode the grayscale image as PNG in memory; draw_image expects
# encoded image data (e.g. PNG), not raw pixel bytes.
buffer = io.BytesIO()
im.save(buffer, format='PNG')
data = buffer.getvalue()

layout = [[sg.Graph(
    canvas_size=(width, height),
    graph_bottom_left=(0, 0),
    graph_top_right=(width, height),
    key="-GRAPH-",
    change_submits=True,
    background_color='black',
    drag_submits=True)]]

window = sg.Window('Title', layout, finalize=True)
# window.Maximize()
graph = window["-GRAPH-"]
graph.draw_image(data=data, location=(0, height))

while True:
    event, values = window.read()
    if event == sg.WIN_CLOSED:
        break

window.close()
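As a side note (not part of the original answer), the same in-memory PNG encoding can also be done with OpenCV alone via cv2.imencode, which avoids the PIL round-trip; a minimal sketch:
import cv2
import PySimpleGUI as sg

img = cv2.imread('D:/images.jpg')
red = cv2.split(img)[2]            # cv2.imread loads BGR, so index 2 is the red channel
height, width = red.shape

# cv2.imencode produces the encoded PNG bytes in memory, so nothing is saved to disk
ok, png = cv2.imencode('.png', red)
data = png.tobytes()

layout = [[sg.Graph(canvas_size=(width, height),
                    graph_bottom_left=(0, 0),
                    graph_top_right=(width, height),
                    key="-GRAPH-",
                    background_color='black')]]
window = sg.Window('Title', layout, finalize=True)
window["-GRAPH-"].draw_image(data=data, location=(0, height))

while True:
    event, values = window.read()
    if event == sg.WIN_CLOSED:
        break
window.close()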
I have a panorama image, and a smaller image of buildings seen within that panorama image. What I want to do is recognise whether the buildings in that smaller image are in the panorama image, and how the two images line up.
For this first example, I'm using a cropped version of my panorama image, so the pixels are identical.
import cv2
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import math
# Load images
cwImage = cv2.imread('cw1.jpg',0)
panImage = cv2.imread('pan1.jpg',0)
# Prepare for SURF image analysis
surf = cv2.xfeatures2d.SURF_create(4000)
# Find keypoints and point descriptors for both images
cwKeypoints, cwDescriptors = surf.detectAndCompute(cwImage, None)
panKeypoints, panDescriptors = surf.detectAndCompute(panImage, None)
Then I use OpenCV's FlannBasedMatcher to find good matches between the two images:
FLANN_INDEX_KDTREE = 0
index_params = dict(algorithm=FLANN_INDEX_KDTREE, trees=5)
search_params = dict(checks=50)
flann = cv2.FlannBasedMatcher(index_params, search_params)

# Find matches between the descriptors
matches = flann.knnMatch(cwDescriptors, panDescriptors, k=2)

good = []
for m, n in matches:
    if m.distance < 0.7 * n.distance:
        good.append(m)
You can see that in this example it matches the points between the images perfectly. I then find the homography and apply a perspective warp:
cwPoints = np.float32([cwKeypoints[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
panPoints = np.float32([panKeypoints[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

h, status = cv2.findHomography(cwPoints, panPoints)
warpImage = cv2.warpPerspective(cwImage, h, (panImage.shape[1], panImage.shape[0]))
Result is that it perfectly places the smaller image within the larger image.
Now, I want to do this where the smaller image isn't a pixel-perfect version of the larger image.
For the new smaller image, the keypoints look like this:
You can see that in some cases, it matches correctly, and in some cases it doesn't.
If I call findHomography with these matches, it will take all of these data points into account and come up with a nonsensical warp perspective, because it is based on both the correct and the incorrect matches.
What I'm looking for is a missing step between detecting the good matches and calling findHomography, where I can look at the relationships between the matches and determine which matches are actually correct.
I'm wondering if there's a function within OpenCV that I should be looking at for this step, or if this is something I'll need to work out on my own, and if so how I should go about doing that?
I wrote a blog post about finding an object in a scene last year (2017.11.11). Maybe it helps. Here is the link: https://zhuanlan.zhihu.com/p/30936804
Env: OpenCV 3.3 + Python 3.5
Found matches:
The found object in the scene:
The code:
#!/usr/bin/python3
# 2017.11.11 01:44:37 CST
# 2017.11.12 00:09:14 CST
"""
Use SIFT keypoint detection and matching to find a specific object in a scene.
"""
import cv2
import numpy as np

MIN_MATCH_COUNT = 4

imgname1 = "box.png"
imgname2 = "box_in_scene.png"

## (1) prepare data
img1 = cv2.imread(imgname1)
img2 = cv2.imread(imgname2)
gray1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
gray2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)

## (2) Create SIFT object
sift = cv2.xfeatures2d.SIFT_create()

## (3) Create flann matcher
matcher = cv2.FlannBasedMatcher(dict(algorithm=1, trees=5), {})

## (4) Detect keypoints and compute keypoint descriptors
kpts1, descs1 = sift.detectAndCompute(gray1, None)
kpts2, descs2 = sift.detectAndCompute(gray2, None)

## (5) knnMatch to get the top 2 matches per descriptor
matches = matcher.knnMatch(descs1, descs2, 2)
# Sort by their distance.
matches = sorted(matches, key=lambda x: x[0].distance)

## (6) Ratio test, to get good matches.
good = [m1 for (m1, m2) in matches if m1.distance < 0.7 * m2.distance]

canvas = img2.copy()

## (7) find homography matrix
## when there are enough robust matched point pairs (at least 4)
if len(good) > MIN_MATCH_COUNT:
    ## extract the corresponding point pairs from the matches
    ## (queryIdx for the small object, trainIdx for the scene)
    src_pts = np.float32([kpts1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst_pts = np.float32([kpts2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    ## find the homography matrix with cv2.RANSAC using the good match points
    M, mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 5.0)
    ## mask, marking the point pairs used when computing the homography matrix
    #matchesMask2 = mask.ravel().tolist()
    ## compute the transformed corners of img1, i.e. its corresponding position in img2
    h, w = img1.shape[:2]
    pts = np.float32([[0, 0], [0, h-1], [w-1, h-1], [w-1, 0]]).reshape(-1, 1, 2)
    dst = cv2.perspectiveTransform(pts, M)
    ## draw the bounding box
    cv2.polylines(canvas, [np.int32(dst)], True, (0, 255, 0), 3, cv2.LINE_AA)
else:
    print("Not enough matches are found - {}/{}".format(len(good), MIN_MATCH_COUNT))

## (8) drawMatches
matched = cv2.drawMatches(img1, kpts1, canvas, kpts2, good, None)  #,**draw_params)

## (9) Crop the matched region from the scene
h, w = img1.shape[:2]
pts = np.float32([[0, 0], [0, h-1], [w-1, h-1], [w-1, 0]]).reshape(-1, 1, 2)
dst = cv2.perspectiveTransform(pts, M)
perspectiveM = cv2.getPerspectiveTransform(np.float32(dst), pts)
found = cv2.warpPerspective(img2, perspectiveM, (w, h))

## (10) save and display
cv2.imwrite("matched.png", matched)
cv2.imwrite("found.png", found)
cv2.imshow("matched", matched)
cv2.imshow("found", found)
cv2.waitKey()
cv2.destroyAllWindows()
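Regarding the original question's missing step: passing cv2.RANSAC to cv2.findHomography (step 7 above) is what makes the estimate robust to wrong matches, and the returned mask marks which of the good matches were kept as inliers. A short sketch of how that mask could be used to draw only the inlier matches, building on the variables above; the draw_params values are illustrative:
# mask comes from cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 5.0):
# 1 marks inliers consistent with the homography, 0 marks outliers.
matchesMask = mask.ravel().tolist()

draw_params = dict(matchColor=(0, 255, 0),    # draw inlier matches in green
                   singlePointColor=None,
                   matchesMask=matchesMask,   # only draw the inliers
                   flags=2)
inliers_only = cv2.drawMatches(img1, kpts1, canvas, kpts2, good, None, **draw_params)
cv2.imwrite("matched_inliers.png", inliers_only)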
Here is a code snippet
tips = sns.load_dataset("tips")
g = sns.FacetGrid(tips, col = 'time')
g = g.map(plt.hist, "tip")
with the following output
I want to apply a despine offset to these plots while keeping everything else unchanged, so I inserted the despine call into the existing code:
tips = sns.load_dataset("tips")
g = sns.FacetGrid(tips, col = 'time')
g.despine(offset=10)
g = g.map(plt.hist, "tip")
which results in the following plots
The offset is now applied to the axes. However, the y-tick labels on the right plot are back, which I don't want.
Could anyone help me on this?
To remove the y-axis tick labels, you can use the code below:
The libs:
import seaborn as sns
import matplotlib.pyplot as plt
sns.set_style('ticks')
The adjusted code:
tips = sns.load_dataset("tips")
g = sns.FacetGrid(tips, col='time')
g.despine(offset=10)
g = g.map(plt.hist, "tip")

# IMPORTANT: I assume that you use col_wrap=None in the FacetGrid constructor
# loop over the non-left axes:
for ax in g.axes[:, 1:].flat:
    # get the yticklabels from the axis and set visibility to False
    for label in ax.get_yticklabels():
        label.set_visible(False)
    ax.yaxis.offsetText.set_visible(False)
A bit more general: imagine you now have a 2x2 FacetGrid and want to despine with an offset, but the x- and y-tick labels return:
Remove them all using this code:
tips = sns.load_dataset("tips")
g = sns.FacetGrid(tips, col='time', row='sex')
g.despine(offset=10)
g = g.map(plt.hist, "tip")

# IMPORTANT: I assume that you use col_wrap=None in the FacetGrid constructor
# loop over the non-left axes:
for ax in g.axes[:, 1:].flat:
    # get the yticklabels from the axis and set visibility to False
    for label in ax.get_yticklabels():
        label.set_visible(False)
    ax.yaxis.offsetText.set_visible(False)

# loop over the top axes:
for ax in g.axes[:-1, :].flat:
    # get the xticklabels from the axis and set visibility to False
    for label in ax.get_xticklabels():
        label.set_visible(False)
    ax.xaxis.offsetText.set_visible(False)
UPDATE:
For completeness, mwaskom (ref to the GitHub issue) gave an explanation of why this issue occurs:
So this happens because matplotlib calls axis.reset_ticks() internally when moving the spine. Otherwise, the spine gets moved but the ticks stay in the same place. It's not configurable in matplotlib and, even if it were, I don't know if there is a public API for moving individual ticks. Unfortunately I think you'll have to remove the tick labels yourself after offsetting the spines.
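A slightly more compact variant of the loops above (not from the original answer, but using only standard matplotlib calls) hands each axis' tick labels to plt.setp, assuming the same grid g:
import matplotlib.pyplot as plt

# hide y-tick labels on every column except the first
for ax in g.axes[:, 1:].flat:
    plt.setp(ax.get_yticklabels(), visible=False)
# hide x-tick labels on every row except the last
for ax in g.axes[:-1, :].flat:
    plt.setp(ax.get_xticklabels(), visible=False)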
I have a 3D array, of which the first two dimensions are spatial, so say (x,y). The third dimension contains point-specific information.
print(H.shape)  # --> (200, 480, 640); spatial extents (200, 480)
Now, by selecting a certain plane in the third dimension, I can display an image with
imdat = H[:,:,100] # shape (200, 480)
img = ax.imshow(imdat, cmap='jet',vmin=imdat.min(),vmax=imdat.max(), animated=True, aspect='equal')
I want to now rotate the cube, so that I switch from (x,y) to (y,x).
H = np.rot90(H) # could also use H.swapaxes(0,1) or H.transpose((1,0,2))
print(H.shape)  # --> (480, 200, 640)
Now, when I call:
imdat = H[:,:,100] # shape (480,200)
img.set_data(imdat)
ax.relim()
ax.autoscale_view(tight=True)
I get weird behavior. Along the rows the image displays the data up to the 200th row and is then black until the end of the y-axis (480). The x-axis extends from 0 to 200 and shows the rotated data. After another 90-degree rotation, the image displays correctly (just rotated by 180 degrees, of course).
It seems to me that after rotating the data, the axis limits (or image extents?) are not refreshing correctly. Can somebody help?
PS: to indulge in bad hacking, I also tried to regenerate a new image (by calling ax.imshow) after each rotation, but I still get the same behavior.
Below I include a solution to your problem. The method resetExtent uses the data and the image to explicitly set the extent to the desired values. Hopefully I correctly emulated the intended outcome.
import matplotlib.pyplot as plt
import numpy as np

def resetExtent(data, im):
    """
    Using the data and axes from an AxesImage, im, force the extent and
    axis values to match the shape of data.
    """
    ax = im.get_axes()
    dataShape = data.shape

    if im.origin == 'upper':
        im.set_extent((-0.5, dataShape[0]-.5, dataShape[1]-.5, -.5))
        ax.set_xlim((-0.5, dataShape[0]-.5))
        ax.set_ylim((dataShape[1]-.5, -.5))
    else:
        im.set_extent((-0.5, dataShape[0]-.5, -.5, dataShape[1]-.5))
        ax.set_xlim((-0.5, dataShape[0]-.5))
        ax.set_ylim((-.5, dataShape[1]-.5))

def main():
    fig = plt.gcf()
    ax = fig.gca()

    H = np.zeros((200, 480, 10))
    # make distinguishing corner of data
    H[100:, ...] = 1
    H[100:, 240:, :] = 2

    imdat = H[:, :, 5]
    datShape = imdat.shape

    im = ax.imshow(imdat, cmap='jet', vmin=imdat.min(),
                   vmax=imdat.max(), animated=True,
                   aspect='equal',
                   # origin='lower'
                   )

    resetExtent(imdat, im)
    fig.savefig("img1.png")

    H = np.rot90(H)
    imdat = H[:, :, 0]
    im.set_data(imdat)
    resetExtent(imdat, im)
    fig.savefig("img2.png")

if __name__ == '__main__':
    main()
This script produces two images:
First un-rotated:
Then rotated:
I thought just explicitly calling set_extent would do everything resetExtent does, because it should adjust the axes limits if autoscaling is on. But for some unknown reason, calling set_extent alone does not do the job.
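As an alternative (not from the original answer), the stale-limits problem can also be sidestepped entirely by clearing the axes and creating a fresh image for each orientation, so imshow recomputes the extent itself; a minimal sketch using the same dummy data:
import numpy as np
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
H = np.zeros((200, 480, 10))
H[100:, ...] = 1
H[100:, 240:, :] = 2

ax.imshow(H[:, :, 5], cmap='jet', aspect='equal')
fig.savefig("img1_alt.png")

H = np.rot90(H)
# Clearing the axes discards the old image together with its extent and limits,
# so the new imshow call autoscales to the rotated data.
ax.clear()
ax.imshow(H[:, :, 0], cmap='jet', aspect='equal')
fig.savefig("img2_alt.png")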