Motion detection using only video clips

I am trying to detect whether a person is walking or running. However, I have only 40 short video clips, each of a single person walking or running. How can I do motion detection with this video data? Can anyone point me to any papers or implementations?

OpenCV has many trackers (https://docs.opencv.org/3.4.1/d9/df8/group__tracking.html), for instance cv2.TrackerKCF_create().
You may find some comprehensive tutorials on the subject, like this one: https://www.pyimagesearch.com/2015/09/21/opencv-track-object-movement/
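A minimal sketch of that idea, assuming OpenCV 3.4.x with the contrib tracking module (newer builds may expose the tracker as cv2.legacy.TrackerKCF_create). The input file name, manual ROI selection, and speed threshold are placeholders, not tuned values:

```python
# Track a manually selected region with KCF, then use the per-frame
# horizontal displacement of the box as a crude walking-vs-running cue.
import cv2

cap = cv2.VideoCapture("clip.mp4")                   # hypothetical input clip
ok, frame = cap.read()
bbox = cv2.selectROI("select person", frame, False)  # draw a box around the person
tracker = cv2.TrackerKCF_create()
tracker.init(frame, bbox)

displacements = []
prev_cx = bbox[0] + bbox[2] / 2.0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    ok, bbox = tracker.update(frame)
    if not ok:
        continue                                     # tracker lost the target
    cx = bbox[0] + bbox[2] / 2.0
    displacements.append(abs(cx - prev_cx))
    prev_cx = cx

mean_speed = sum(displacements) / max(len(displacements), 1)
print("running" if mean_speed > 5.0 else "walking")  # placeholder threshold
```

With only 40 clips, a simple hand-crafted feature like this, plus a threshold or a small classifier over per-clip speed statistics, is more realistic than training a deep network from scratch.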

Related

Looking for an audio analysis library for information extraction

Hey guys, I'm a beginner in audio analysis and am trying to find a library that gives me insights like amplitude, classification of sound, and detection of background noise. I have tried out paura/pyAudioAnalysis (pAura: Python AUdio Recording and Analysis), which analyzes some of this information for live recordings. Is there any good audio analysis library on GitHub?
There are many. Search GitHub for the DTLN model for audio noise removal; DTLN is a pretrained, lightweight noise-removal model.
If you're not planning to use any models, then try to solve this problem with audio signal processing: use audio features like the zero-crossing rate for noise/speech activity detection, as in the sketch below.
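A small sketch of the zero-crossing-rate idea in plain NumPy; the frame size, hop, and threshold below are illustrative assumptions, not tuned values:

```python
# Per-frame zero-crossing rate: high ZCR tends to indicate noisy/unvoiced
# frames, low ZCR voiced speech.
import numpy as np

def zero_crossing_rate(signal, frame_len=1024, hop=512):
    rates = []
    for start in range(0, len(signal) - frame_len, hop):
        frame = signal[start:start + frame_len]
        signs = np.signbit(frame).astype(np.int8)     # 1 where sample < 0
        rates.append(np.count_nonzero(np.diff(signs)) / frame_len)
    return np.array(rates)

samples = np.random.randn(16000)   # stand-in for one second of mono audio
zcr = zero_crossing_rate(samples)
print("frames flagged as noisy:", int(np.sum(zcr > 0.3)))  # placeholder threshold
```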

Is it possible to detect that motion is happening from only an image (no reference is given)?

I have searched around the internet and have only seen motion detection done on video or on two consecutive images. I wonder whether it is possible to detect motion from a single image (like jumping, running, or swimming). By motion I mean any significant body movement. If it can be done, please tell me the algorithm and how to learn it. Thank you.
As others have commented, for the general case you probably can't. But there are still avenues to explore if you have control over some of the parameters.
One idea that comes to mind is detecting the motion blur of fast movement. You can accentuate that if you have control over the camera type/exposure.
You can find academic papers on the subject, and can start with:
https://www.google.com/search?q=detecting+motion+blur+in+one+image
A technique that can be helpful to you is called scene understanding. Basically, you train a deep neural net on images paired with labels that describe them; that way the net can recognize that a person is running, swimming, or doing any other activity. A minimal training sketch follows.
There is a good presentation on the subject by Prof. LeCun.
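A minimal sketch of that training idea, assuming PyTorch with torchvision >= 0.13 and a hypothetical folder of labeled frames (data/running/..., data/swimming/..., etc.); only the classifier head of a pretrained ResNet-18 is fine-tuned here:

```python
# Fine-tune a pretrained ResNet-18 head to classify frames by activity label.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
ds = datasets.ImageFolder("data", transform=tfm)   # hypothetical labeled frames
loader = torch.utils.data.DataLoader(ds, batch_size=16, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(ds.classes))  # new class head
opt = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:      # a single epoch, for illustration
    opt.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    opt.step()
```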
What you are implying is an implicit comparison with an image of a person standing in a stable, non-moving pose. So there is a two-image comparison there notwithstanding.

Is there an AForge.NET algorithm for human activity recognition?

I was wondering if there is an AForge.NET algorithm intended for human activity recognition.
For example, I would like to recognize drowning while capturing frames from a surveillance camera on a beach.
I saw there are algorithms for motion detection, but what I need is motion detection plus logic to process that motion, so the computer can conclude whether it fits into the drowning category or any other category I define.
Comments would be appreciated.
You might need to develop your own algorithms; I do that too with AForge.
AForge basically handles simple video acquisition for me, while my own math does the interesting stuff.
In your case:
Detect spots with people.
Zoom in to them?
Then it becomes tricky: how do you distinguish someone who dives from someone who sinks?
Also, there are waves that can get in front of the person you're trying to follow.
Usually this kind of recognition comes down to simple observations, e.g. someone pulling his hands up does not look like the circle of a swimming head.
You have to think about how a beach lifeguard sees the difference, what the main visual clues are, and how you can convert them to bitmap math.
Consider using Accord.NET - it's a library based on AForge.NET that contains many machine learning algorithms. However, you must write all the logic, as you call it, yourself.
Another possibility is to use Emgu CV, which has some motion detection algorithms; a sketch of that half of the pipeline follows.
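The motion-detection half of such a pipeline looks much the same in any OpenCV wrapper (Emgu CV exposes these same primitives for .NET); here is a minimal sketch in Python/OpenCV 4, with a placeholder file name and area threshold:

```python
# Background subtraction -> foreground mask -> contours of moving blobs.
# The drowning/activity "logic" on top of this is still up to you.
import cv2

cap = cv2.VideoCapture("beach.mp4")    # hypothetical surveillance clip
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=True)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)     # foreground = moving pixels
    mask = cv2.medianBlur(mask, 5)     # suppress wave/noise speckle
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    movers = [c for c in contours if cv2.contourArea(c) > 200]  # placeholder area
    # ...track each mover over time and apply your category heuristics here
```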

OpenCV: how to detect if video has fast moving object in it?

What would be the best way to detect a fast moving object using OpenCV?
Say, I have 5 random video files:
1) Video of a crowd, people walking, static camera.
2) Video of a cat playing with a ball, shaky iPhone camera.
3) Video of a person being interviewed. Static camera.
4) Animation (3D) of a fast moving car, background is blurred etc. etc.
5) A blurred out video shot with iPhone camera (just camera waved around, nothing is visible).
So I would like to isolate video5 and detect that there is a lot of movement in video4 and video2.
What would be the best approach to do that? I am thinking of using OpenCV 2, but if there is a better solution, I'd be happy to learn about it.
Any input is greatly appreciated - pseudo-code or just recommendations of specific algorithms.
Thank you
Optical flow - this is one of many ways of detecting motion.
I don't know if you are still working on this, but I found it interesting enough to answer.
Approach 1:
As suggested by user349026, one of the most intuitive ways is to work with optical flow. It will give you the dominating motion, but optical flow always comes with noise, so you will have to apply some filtering before using it; see the sketch below.
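A sketch of that approach, using dense Farneback flow with a median filter on the flow magnitude; the parameters are illustrative, not tuned:

```python
# Average filtered optical-flow magnitude over a whole clip as a single
# "how much motion" score; rank the five videos by this score.
import cv2
import numpy as np

def motion_score(path):
    cap = cv2.VideoCapture(path)
    ok, prev = cap.read()
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    scores = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        mag = np.linalg.norm(flow, axis=2).astype(np.float32)
        mag = cv2.medianBlur(mag, 5)      # filter out flow noise
        scores.append(float(mag.mean()))
        prev_gray = gray
    return sum(scores) / max(len(scores), 1)

print(motion_score("video2.mp4"))   # hypothetical file; higher = more motion
```

The waved-around video should score highest of all, since camera motion moves every pixel, so it can be separated from videos with fast subjects by also checking whether the flow is globally uniform.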
Approach 2:
This one is more difficult but gives good results.
It is from a CVPR 2013 paper: http://www.irisa.fr/texmex/people/jain/w-Flow/motion_cvpr13.pdf
I think just the introduction of this paper will solve your problem.

Real-time video(image) stitching

I'm thinking of stitching images from 2 or more (currently maybe 3 or 4) cameras in real-time, using OpenCV 2.3.1 on Visual Studio 2008.
However, I'm curious about how it is done.
Recently I've studied some techniques of feature-based image stitching method.
Most of them require at least the following steps (a minimal sketch of this pipeline appears after the list):
1. Feature detection
2. Feature matching
3. Finding a homography
4. Transformation of target images to reference images
...etc
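A minimal sketch of steps 1-4 for a single pair of frames, using ORB so it runs without the patented SIFT/SURF modules of that OpenCV era; file names and match counts are placeholders:

```python
# Detect features, match them, estimate a homography, warp the target
# onto the reference canvas with a naive overlay (no blending).
import cv2
import numpy as np

ref = cv2.imread("cam1.jpg")   # hypothetical reference frame
tgt = cv2.imread("cam2.jpg")   # hypothetical target frame
ref_g = cv2.cvtColor(ref, cv2.COLOR_BGR2GRAY)
tgt_g = cv2.cvtColor(tgt, cv2.COLOR_BGR2GRAY)

orb = cv2.ORB_create(2000)                              # 1. feature detection
k1, d1 = orb.detectAndCompute(ref_g, None)
k2, d2 = orb.detectAndCompute(tgt_g, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(d2, d1), key=lambda m: m.distance)  # 2. matching

src = np.float32([k2[m.queryIdx].pt for m in matches[:100]]).reshape(-1, 1, 2)
dst = np.float32([k1[m.trainIdx].pt for m in matches[:100]]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)    # 3. homography

h, w = ref.shape[:2]
pano = cv2.warpPerspective(tgt, H, (w * 2, h))          # 4. warp target
pano[0:h, 0:w] = ref
cv2.imwrite("stitched.jpg", pano)
```

Note that for fixed cameras the homography only needs to be estimated once (or rarely), which is what makes the real-time case feasible: the per-frame work reduces to warping and blending.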
Now, most of the techniques I've read about only deal with images ONCE, while I would like to deal with a series of images captured from a few cameras, and I want it to be REAL-TIME.
So far this may still sound confusing, so here is the detail:
Put 3 cameras at different angles and positions; each of them must have an overlapping area with its adjacent one, so as to build REAL-TIME video stitching.
What I would like to do is similar to the content in the following link, where ASIFT is used:
http://www.youtube.com/watch?v=a5OK6bwke3I
I tried to consult the owner of that video but I got no reply from him:(.
Can I use image-stitching methods to deal with video stitching?
Video itself is composed of a series of images so I wonder if this is possible.
However, detecting feature points seems to be very time-consuming whatever feature detector (SURF, SIFT, ASIFT, etc.) you use. This makes me doubt the possibility of doing real-time video stitching.
I have worked on a real-time video stitching system, and it is a difficult problem. I can't disclose the full solution we used due to an NDA, but I implemented something similar to the one described in this paper. The biggest problem is coping with objects at different depths; simple homographies are not sufficient. Depth disparities must be determined and the video frames appropriately warped so that common features are aligned. This is essentially a stereo vision problem: the images must first be rectified so that common features appear on the same scan line. A small sketch of the disparity step follows.
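A minimal sketch of that disparity step, assuming the input pair has already been rectified (a real system would get there via calibration and cv2.stereoRectify):

```python
# Estimate per-pixel disparity on a rectified pair with OpenCV's block
# matcher; larger disparity = closer object.
import cv2

left = cv2.imread("left_rect.png", cv2.IMREAD_GRAYSCALE)    # hypothetical pair
right = cv2.imread("right_rect.png", cv2.IMREAD_GRAYSCALE)

stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right)

vis = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX).astype("uint8")
cv2.imwrite("disparity.png", vis)
```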
You might also be interested in my project from a few years back. It's a program which lets you experiment with different stitching parameters and watch the results in real-time.
Project page - https://github.com/lukeyeager/StitcHD
Demo video - https://youtu.be/mMcrOpVx9aY?t=3m38s
