How do I detect eye movement when the eye is closed? I am able to detect the closed-eye region in a thermal video by finding the hottest spot and drawing a circle around that point. By trial and error I was roughly able to estimate the eye corner coordinates, and then I cropped the eye region [2] out of the video. The next task is to detect its movement.
import numpy as np
import cv2

video = cv2.VideoCapture('12.avi')
while True:
    ret, frame = video.read()
    if not ret:
        break
    frame = frame[:, 1:600]
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (15, 15), 0)
    # The hottest spot in the thermal frame marks the closed eye
    (minVal, maxVal, minLoc, maxLoc) = cv2.minMaxLoc(gray)
    image = frame
    (x, y) = maxLoc
    cv2.circle(image, maxLoc, 15, (255, 0, 0), 2)
    cv2.rectangle(image, maxLoc, (390, 190), (0, 255, 0), 2)
    roi = frame[y:190, x:390]
    try:
        roi = cv2.resize(roi, None, fx=4, fy=4, interpolation=cv2.INTER_AREA)
        cv2.imshow("Eye", roi)
        cv2.imshow("Eyecorner", image)
    except cv2.error:
        pass
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
video.release()
cv2.destroyAllWindows()
[1]: https://i.stack.imgur.com/jayfp.jpg (detected eye in a single frame)
[2]: https://i.stack.imgur.com/MgEe7.jpg (the cropped eye region from the thermal video)
Since you already have the hotspot containing the eye and the coordinates of the eye corners, can't you simply measure the distance between the center of the hotspot and the center point between the two corners at every frame and examine its rate of change? You didn't mention how precise the measurements are, but provided that they are 100% correct and that the head doesn't move too much (perspective doesn't change), then it should work. Otherwise you need another way to determine a frame of reference.
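As a minimal sketch of that measurement, assuming you already have the per-frame hotspot center (maxLoc from the code above) and two estimated corner coordinates (all the coordinate values below are made up for illustration):
import numpy as np

def eye_displacement(hotspot, corner_left, corner_right):
    # Distance from the hotspot center to the midpoint between the eye corners.
    mid = (np.asarray(corner_left, float) + np.asarray(corner_right, float)) / 2.0
    return np.linalg.norm(np.asarray(hotspot, float) - mid)

# Collect one distance per frame (hotspot positions here are invented),
# then examine the frame-to-frame rate of change.
hotspots = [(190, 150), (195, 152), (210, 149)]
distances = [eye_displacement(h, (120, 140), (260, 145)) for h in hotspots]
rates = np.diff(distances)
print(distances, rates)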
I made an algorithm to measure an object using a reference, like this:
The reference is the frame and the other (AOL) is the desired object. My code obtained this result:
But the real AOL is 78.6 cm². This is because of the perspective/angle of the photograph. So I used deep learning in my code to obtain the reference and AOL masks, and I made a simple calculation based on the number of pixels in each mask to obtain the AOL area (cm²), since I know the actual size of the reference.
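A minimal sketch of that pixel-count calculation, assuming both masks are binary (0/255) images; the file names and the reference area below are made up:
import cv2

# Hypothetical file names; both masks are assumed to be binary (0/255) images.
ref_mask = cv2.imread('ref_mask.png', cv2.IMREAD_GRAYSCALE)
aol_mask = cv2.imread('aol_mask.png', cv2.IMREAD_GRAYSCALE)

ref_area_cm2 = 100.0  # assumed known real-world area of the reference
cm2_per_pixel = ref_area_cm2 / cv2.countNonZero(ref_mask)
aol_area_cm2 = cv2.countNonZero(aol_mask) * cm2_per_pixel
print('AOL area: %.1f cm^2' % aol_area_cm2)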
I tried to correct the angle/perspective based on the reference, using the reference mask. I calculated the quadrangle vertices from the reference mask to correct the perspective, and created this code based on the answer Perspective correction in OpenCV using python:
# import the necessary packages
from scipy.spatial import distance as dist
from imutils import perspective
from imutils import contours
import numpy as np
import imutils
import cv2
import math
import matplotlib.pyplot as plt
# get the single external contours
# load the image, convert it to grayscale, and blur it slightly
image = cv2.imread("./ref/20210702_114527.png") ## Mask Image
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
gray = cv2.GaussianBlur(gray, (7, 7), 0)
# perform edge detection, then perform a dilation + erosion to
# close gaps in between object edges
edged = cv2.Canny(gray, 50, 100)
edged = cv2.dilate(edged, None, iterations=1)
edged = cv2.erode(edged, None, iterations=1)
# find contours in the edge map
cnts = cv2.findContours(edged.copy(), cv2.RETR_EXTERNAL,
cv2.CHAIN_APPROX_SIMPLE)
cnts = imutils.grab_contours(cnts)
# sort the contours from left-to-right and initialize the
# 'pixels per metric' calibration variable
(cnts, _) = contours.sort_contours(cnts)
pixelsPerMetric = None
orig = image.copy()
box = cv2.minAreaRect(min(cnts, key=cv2.contourArea))
box = cv2.cv.BoxPoints(box) if imutils.is_cv2() else cv2.boxPoints(box)
box = np.array(box, dtype="int")
# order the points in the contour such that they appear
# in top-left, top-right, bottom-right, and bottom-left
# order, then draw the outline of the rotated bounding
# box
box = perspective.order_points(box)
cv2.drawContours(orig, [box.astype("int")], -1, (0, 255, 0), 2)
# loop over the original points and draw them
for (x, y) in box:
    cv2.circle(orig, (int(x), int(y)), 5, (0, 0, 255), -1)
print('Box: ',box)
cv2.imshow('Orig', orig)
img = cv2.imread("./meat/sair/20210702_114527.jpg") #original image
rows,cols,ch = img.shape
#pts1 = np.float32([[185,9],[304,80],[290, 134],[163,64]])  # looked good for 6e.jpg
### Collect the source points from the reference mask
pts1 = np.float32(box)
### Draw the vertices on the original image
for (x, y) in pts1:
    cv2.circle(img, (int(x), int(y)), 5, (0, 0, 255), -1)
ratio= 1.6
moldH = math.hypot(pts1[2][0] - pts1[1][0], pts1[2][1] - pts1[1][1])
moldW=ratio*moldH
pts2 = np.float32([[pts1[0][0],pts1[0][1]], [pts1[0][0]+moldW, pts1[0][1]], [pts1[0][0]+moldW, pts1[0][1]+moldH], [pts1[0][0], pts1[0][1]+moldH]])
#print('cardH: ',cardH,cardW)
M = cv2.getPerspectiveTransform(pts1,pts2)
print('M:', M)
print('pts1:', pts1)
print('pts2:', pts2)
offsetSize= 320
transformed = np.zeros((int(moldW+offsetSize), int(moldH+offsetSize)), dtype=np.uint8)
dst = cv2.warpPerspective(img, M, transformed.shape)
plt.subplot(121),plt.imshow(img),plt.title('Input')
plt.subplot(122),plt.imshow(dst),plt.title('Output')
plt.show()
And I got this:
No perspective correction. I have a lot of information, like the vertices and the correct size of the reference. Is it possible to do a mathematical correction based on the quadrangle vertices, like a regression? It doesn't necessarily have to correct the image directly, unless there is a good method for correcting the image's perspective. Or maybe a different approach based on math? Thanks for your patience.
For Christoph:
This is the result position too:
pts1: [[ 9. 51.]
[392. 56.]
[388. 336.]
[ 5. 331.]]
I'm using OpenCV.
Given a base image (640x640), I want to know whether applying a rectangular white mask of 100x100 (in other words, a ROI) is more time consuming than cropping the image to the same rectangular shape (100x100).
What I consider to be a ROI:
mask = np.zeros_like(original_image)
shape = np.array([[a,b,c,d]])
cv2.fillPoly(mask, shape, (255,255,255))
cv2.bitwise_and(original_image, mask)
The way of cropping:
cropped = original_image[x:y, z:t]
I want to apply a function (e.g. converting to grayscale) or a color filter to the image.
I think that doing so on the ROIed image will be more time consuming, as there are many more pixels (the resulting image has the same dimensions as the original one, 640x640).
On the other hand, applying the function to the cropped image should take less time.
The question is: would the time saved by not processing the remaining 640x640 - 100x100 pixels outweigh the cost of the cropping operation itself?
Hope I was clear enough. Thanks in advance.
Edit
I did use timeit as Belal Homaidan suggested:
print("CROP TIMING: ", timeit.timeit(test_crop, setup=setup, number=10000) * 1e6, "us")
print("ROI TIMING: ", timeit.timeit(test_ROI, setup=setup, number=10000) * 1e6, "us")
Where setup was:
setup = """\
import cv2
import numpy as np
original_image = cv2.imread(r"C:\\Users\\path\\path\\test.png")
"""
Where test_crop was:
test_crop = """\
cropped = original_image[0:150, 0:150]
"""
And where test_ROI was:
test_ROI = """\
mask = np.zeros_like(original_image)
shape = np.array([[(0,0),(150,0),(150,150),(0,150)]])
cv2.fillPoly(mask, shape, (255,255,255))
final = cv2.bitwise_and(original_image, mask)
"""
The image was a BGR one with a size of 131 KB and a resolution of 384x208 pixels.
The result was:
CROP TIMING: 11560.400000007576 us
ROI TIMING: 780580.8000000524 us
I think that using a ROI fits when you want a specific (non-rectangular) shape, while cropping fits best when you need rectangular shapes.
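If a non-rectangular region is really needed, one compromise (a sketch, not benchmarked; the file name is hypothetical) is to crop to the polygon's bounding rectangle first and apply the mask only inside that crop, so the masked image stays small:
import cv2
import numpy as np

original_image = cv2.imread('test.png')

# Polygon of interest and its bounding rectangle
shape = np.array([[(0, 0), (150, 0), (150, 150), (0, 150)]], dtype=np.int32)
x, y, w, h = cv2.boundingRect(shape)

cropped = original_image[y:y + h, x:x + w]          # cheap rectangular crop
mask = np.zeros(cropped.shape[:2], dtype=np.uint8)
shifted = (shape - (x, y)).astype(np.int32)         # polygon in crop coordinates
cv2.fillPoly(mask, shifted, 255)
roi = cv2.bitwise_and(cropped, cropped, mask=mask)  # masked, but only 150x150 pixels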
I want to detect the zebra crossing lines. I have tried to find the coordinates of the zebra crossing in the image using contours, but it gives output for the distinct white boxes (only the white lines in the zebra crossing). I need the coordinates of the entire zebra crossing.
Please let me know a way to group the contours, or suggest another method to detect the zebra crossing.
Input image
Output image obtained
Expected output
import cv2
import numpy as np
image = cv2.imread('d.jpg',-1)
paper = cv2.resize(image,(500,500))
ret, thresh_gray = cv2.threshold(cv2.cvtColor(paper, cv2.COLOR_BGR2GRAY),
200, 255, cv2.THRESH_BINARY)
image, contours, hier = cv2.findContours(thresh_gray, cv2.RETR_EXTERNAL,cv2.CHAIN_APPROX_NONE)
for c in contours:
    rect = cv2.minAreaRect(c)
    box = cv2.boxPoints(rect)
    # convert all coordinates floating point values to int
    box = np.int0(box)
    cv2.drawContours(paper, [box], 0, (0, 255, 0), 1)
cv2.imshow('paper', paper)
cv2.imwrite('paper.jpg',paper)
cv2.waitKey(0)
You can use a closing morphological operation to close the gaps.
I can suggest the following stages:
Find contours in thresh_gray.
Erase contours with a very small area (noise).
Erase contours with a low aspect ratio (assume a zebra line must be long and narrow).
Use morphologyEx to perform a closing morphological operation - closing merges close components.
Find contours again in the image after erasing and closing.
At the last stage, ignore small contours.
Here is a working code sample:
import cv2
import numpy as np

image = cv2.imread('d.jpg', -1)
paper = cv2.resize(image, (500, 500))
ret, thresh_gray = cv2.threshold(cv2.cvtColor(paper, cv2.COLOR_BGR2GRAY), 200, 255, cv2.THRESH_BINARY)

image, contours, hier = cv2.findContours(thresh_gray, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)

# Erase small contours, and contours with a small aspect ratio (close to a square)
for c in contours:
    area = cv2.contourArea(c)
    # Fill very small contours with zero (erase small contours).
    if area < 10:
        cv2.fillPoly(thresh_gray, pts=[c], color=0)
        continue
    # https://stackoverflow.com/questions/52247821/find-width-and-height-of-rotatedrect
    rect = cv2.minAreaRect(c)
    (x, y), (w, h), angle = rect
    aspect_ratio = max(w, h) / min(w, h)
    # Assume a zebra line must be long and narrow (long part at least 1.5 times the narrow part).
    if aspect_ratio < 1.5:
        cv2.fillPoly(thresh_gray, pts=[c], color=0)
        continue

# Use "close" morphological operation to close the gaps between contours
# https://stackoverflow.com/questions/18339988/implementing-imcloseim-se-in-opencv
thresh_gray = cv2.morphologyEx(thresh_gray, cv2.MORPH_CLOSE, cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (51, 51)))

# Find contours in thresh_gray after closing the gaps
image, contours, hier = cv2.findContours(thresh_gray, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)

for c in contours:
    area = cv2.contourArea(c)
    # Small contours are ignored.
    if area < 500:
        cv2.fillPoly(thresh_gray, pts=[c], color=0)
        continue
    rect = cv2.minAreaRect(c)
    box = cv2.boxPoints(rect)
    # convert all coordinates floating point values to int
    box = np.int0(box)
    cv2.drawContours(paper, [box], 0, (0, 255, 0), 1)

cv2.imshow('paper', paper)
cv2.imwrite('paper.jpg', paper)
cv2.waitKey(0)
cv2.destroyAllWindows()
thresh_gray before erasing small and squared contours:
thresh_gray after erasing small and squared contours:
thresh_gray after close operation:
Final result:
Remark:
I have some doubts about the benefit of using a morphological operation to close the gaps.
It might be better to use smart logic based on geometry instead.
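As a sketch of one such geometry-based idea (an assumption on my part, not tested on these images): instead of closing, collect the stripe contours that survive the area and aspect-ratio filtering, stack their points, and fit a single rotated rectangle around them. This assumes the image contains a single crossing.
import cv2
import numpy as np

def merge_stripes(stripe_contours):
    # Stack the points of all surviving stripe contours and wrap them in
    # one rotated rectangle - geometrically, the whole crossing.
    points = np.vstack([c.reshape(-1, 2) for c in stripe_contours])
    rect = cv2.minAreaRect(points)
    return np.int0(cv2.boxPoints(rect))

# Usage with the filtered contours from the first loop above (hypothetical list):
# box = merge_stripes(kept_contours)
# cv2.drawContours(paper, [box], 0, (0, 255, 0), 2)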
I want my code to find the corners of a square lego plate in an image like the one attached.
I also want to find its dimensions, i.e. the number of "blops" in both dimensions (48x48 in the attached image).
I am currently looking at detecting the individual "blops", and the result so far is pretty good: a combination of blur, adaptiveThreshold, findContours and selection based on area finds the contours rendered in the second attached image (coloring is random).
I'm now looking for an algorithm to find the "grid" loosely represented by these contours (or their mid-points), but I lack the Google-fu. Any ideas?
(Suggestions for different approaches are also very welcome.)
(The sample image shows bricks placed in the corners - an algorithm could expect this, if it helps.)
(The sample image has a rather wild background. I'd prefer to cope with that, if possible.)
Update 8 July 2016: I'm trying to write an algorithm that looks for streaks of adjacent contours forming lines. The algorithm should be able to find a number of these and, from that, deduce the form of the whole plate, even under perspective. Will update if it works...
Update December 2017: The above algorithm sort of worked, although it was a bit too unpredictable. I also ran into problems with perspective (adding a "thick" lego brick changes the surface) and color recognition (shadows, camera peculiarities, etc.). This endeavor is on hold for now. If I resume it, I will try fixed camera positions immediately above the plate and consistent lighting.
Here's a potential approach using color thresholding. The idea is to convert the image to HSV format then color threshold using a lower and upper bound with the assumption that the baseplate is in gray. This will give us a mask image. From here we morph open to remove noise, find contours, and sort for the largest contour. Next we obtain the rotated bounding box coordinates and draw this onto a new blank mask. Finally we bitwise-and the mask with the input image to get our result. To find the corner coordinates, we can use cv2.goodFeaturesToTrack() to find the points on the mask. Take a look at how to find accurate corner positions of a distorted rectangle from blurry image in python? and Shi-Tomasi Corner Detector & Good Features to Track for more details
Here's a visualization of each step:
We load the image, convert to HSV format, define a lower/upper bound, and perform color thresholding using cv2.inRange()
import cv2
import numpy as np
# Load image, convert to HSV, and color threshold
image = cv2.imread('1.png')
blank_mask = np.zeros(image.shape, dtype=np.uint8)
original = image.copy()
hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
lower = np.array([0, 0, 109])
upper = np.array([179, 36, 255])
mask = cv2.inRange(hsv, lower, upper)
Next we create a rectangular kernel using cv2.getStructuringElement() and perform morphological operations using cv2.morphologyEx(). This step removes small particles of noise.
# Morph open to remove noise
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5,5))
opening = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel, iterations=1)
From here we find contours on the mask using cv2.findContours() and filter using contour area to obtain the largest contour. We then obtain the rotated bounding box coordinates using cv2.minAreaRect() and cv2.boxPoints(), and draw this onto a new blank mask with cv2.fillPoly(). This step gives us a "perfect" outer contour of the baseplate. Here's the detected outer contour highlighted in green and the resulting mask.
# Find contours and sort for largest contour
cnts = cv2.findContours(opening, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
cnts = sorted(cnts, key=cv2.contourArea, reverse=True)[0]
# Obtain rotated bounding box and draw onto a blank mask
rect = cv2.minAreaRect(cnts)
box = cv2.boxPoints(rect)
box = np.int0(box)
cv2.drawContours(image,[box],0,(36,255,12),3)
cv2.fillPoly(blank_mask, [box], (255,255,255))
Finally we bitwise-and the mask with our original input image to obtain our result. Depending on what you need, you can change the background to black or white.
# Bitwise-and mask with input image
blank_mask = cv2.cvtColor(blank_mask, cv2.COLOR_BGR2GRAY)
result = cv2.bitwise_and(original, original, mask=blank_mask)
# result[blank_mask==0] = (255,255,255) # Color background white
To detect the corner coordinates, we can use cv2.goodFeaturesToTrack(). Here's the detected corners highlighted in purple:
Coordinates:
(91.0, 525.0)
(463.0, 497.0)
(64.0, 152.0)
(436.0, 125.0)
# Detect corners
corners = cv2.goodFeaturesToTrack(blank_mask, maxCorners=4, qualityLevel=0.5, minDistance=150)
for corner in corners:
    x, y = corner.ravel()
    cv2.circle(image, (int(x), int(y)), 8, (155, 20, 255), -1)
    print("({}, {})".format(x, y))
Full Code
import cv2
import numpy as np
# Load image, convert to HSV, and color threshold
image = cv2.imread('1.png')
blank_mask = np.zeros(image.shape, dtype=np.uint8)
original = image.copy()
hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
lower = np.array([0, 0, 109])
upper = np.array([179, 36, 255])
mask = cv2.inRange(hsv, lower, upper)
# Morph open to remove noise
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5,5))
opening = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel, iterations=1)
# Find contours and sort for largest contour
cnts = cv2.findContours(opening, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
cnts = sorted(cnts, key=cv2.contourArea, reverse=True)[0]
# Obtain rotated bounding box and draw onto a blank mask
rect = cv2.minAreaRect(cnts)
box = cv2.boxPoints(rect)
box = np.int0(box)
cv2.drawContours(image,[box],0,(36,255,12),3)
cv2.fillPoly(blank_mask, [box], (255,255,255))
# Bitwise-and mask with input image
blank_mask = cv2.cvtColor(blank_mask, cv2.COLOR_BGR2GRAY)
result = cv2.bitwise_and(original, original, mask=blank_mask)
result[blank_mask==0] = (255,255,255) # Color background white
# Detect corners
corners = cv2.goodFeaturesToTrack(blank_mask, maxCorners=4, qualityLevel=0.5, minDistance=150)
for corner in corners:
    x, y = corner.ravel()
    cv2.circle(image, (int(x), int(y)), 8, (155, 20, 255), -1)
    print("({}, {})".format(x, y))
cv2.imwrite('mask.png', mask)
cv2.imwrite('opening.png', opening)
cv2.imwrite('blank_mask.png', blank_mask)
cv2.imwrite('image.png', image)
cv2.imwrite('result.png', result)
cv2.waitKey()
I am plotting tiled images in a similar way to the working code shown below:
from PIL import Image
import matplotlib.pyplot as plt
import random
import numpy

def r():
    return random.randrange(50, 200)

imsize = 100
rngsize = 5
rng = range(rngsize)

for i in rng:
    for j in rng:
        im = Image.new('RGB', (imsize, imsize), (r(), r(), r()))
        plt.imshow(im, aspect='equal', extent=numpy.array([i, i+1, j, j+1])*imsize)

plt.xlim(-5, imsize * rngsize + 5)
plt.ylim(-5, imsize * rngsize + 5)
plt.show()
The problem is: as you pan and zoom, zoom-scale-independent white stripes appear between the image edges, which is very undesirable. I guess this has to do with resampling and antialiasing, but I have no idea how to solve it "the right way", especially since I don't know the exact implementation details of matplotlib's rendering engine.
With Cairo and HTML Canvas, you can draw "to the pixel corner" or "to the pixel center" (translating by 0.5 pixel) thus avoiding anti-aliasing effects. Would there be a way to do that with Matplotlib?
Thanks for any help!
You can simply fill the values into a larger numpy array and plot the entire composite image in one shot. I've adapted your code above for a minimal example, but with different-sized images you'll need a different step size (a sketch for that case follows the code below).
# uint8 so imshow treats the 50-200 values as RGB intensities instead of clipping floats to [0, 1]
F = numpy.zeros((imsize*rngsize, imsize*rngsize, 3), dtype=numpy.uint8)
for i in rng:
    for j in rng:
        F[i*imsize:(i+1)*imsize,
          j*imsize:(j+1)*imsize, :] = (r(), r(), r())
plt.imshow(F, interpolation='nearest')
plt.show()
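For the different-step-size case mentioned above, here is a sketch of the same composite approach with tiles of varying sizes (the tile grid below is invented; it assumes every tile in a row shares its height and every tile in a column shares its width):
import numpy
import random

def r():
    return random.randrange(50, 200)

# Hypothetical input: a 2D grid of RGB uint8 tiles with varying sizes.
sizes = [60, 100, 80]
tiles = [[numpy.full((h, w, 3), (r(), r(), r()), dtype=numpy.uint8)
          for w in sizes] for h in sizes]

heights = [row[0].shape[0] for row in tiles]
widths = [tile.shape[1] for tile in tiles[0]]
F = numpy.zeros((sum(heights), sum(widths), 3), dtype=numpy.uint8)
y = 0
for row in tiles:
    x = 0
    for tile in row:
        F[y:y + tile.shape[0], x:x + tile.shape[1]] = tile
        x += tile.shape[1]
    y += row[0].shape[0]
# plt.imshow(F, interpolation='nearest') then renders the composite as before.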