OpenCV: how to detect if video has fast moving object in it? - algorithm

What would be the best way to detect a fast moving object using OpenCV?
Say, I have 5 random video files:
1) Video of a crowd, people walking, static camera.
2) Video of a cat playing with a ball, shaky iPhone camera.
3) Video of a person being interviewed. Static camera.
4) Animation (3D) of a fast moving car, background blurred, etc.
5) A blurred out video shot with iPhone camera (just camera waved around, nothing is visible).
So I would like to isolate video 5 and detect that there is a lot of movement in video 4 and video 2.
What would be the best approach to do that? I'm thinking of using OpenCV 2, but if there is a better solution, I'd be happy to learn about it.
Any input is greatly appreciated: pseudo-code or just recommendations of specific algorithms.
Thank you

Optical flow: this will be one of many ways of detecting motion.

I don't know if you are still working on this, but I found it interesting enough to answer.
Approach 1:
As suggested by user349026, one of the most intuitive ways is to work with optical flow. It will give you the dominant motion, but optical flow always comes with noise, so you will have to apply some filtering before using it.
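A minimal sketch of that idea with OpenCV in Python (the Farneback flow call, the median filtering and the per-video scoring are my own choices, not something this answer prescribes):

```python
import cv2
import numpy as np

def motion_score(video_path, max_frames=300):
    """Rank a video by how much motion its frames contain."""
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    if not ok:
        return 0.0
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    scores = []
    for _ in range(max_frames):
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Dense optical flow between consecutive frames
        flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        mag, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])
        # Median filtering suppresses isolated noisy flow vectors
        mag = cv2.medianBlur(mag.astype(np.float32), 5)
        scores.append(float(np.mean(mag)))
        prev_gray = gray
    cap.release()
    return float(np.median(scores)) if scores else 0.0

# Videos with a lot of movement (fast car, playing cat, waved-around camera)
# should rank well above the static interview footage.
```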
Approach 2:
This one is more difficult but gives good results.
It is from a CVPR 2013 paper: http://www.irisa.fr/texmex/people/jain/w-Flow/motion_cvpr13.pdf
I think just the introduction of this paper will solve your problem.

Related

My Pixel Art Looks Really Pixelated in Unity

I don't know what it is, but my pixel art looks really pixelated and weirdly optimized. I've turned off compression and filters, but still, something isn't right. Any help would be appreciated! By the way, this problem only occurs on rotated sprites.
(I would also enjoy some general tips on making pixel art games in Unity, since I am quite new.)
To answer your inquiry about general tips for making pixel art games in the Unity engine, check these out:
A post on Unity's blog about best practices of developing pixel art games with Unity
2D Pixel Perfect is a Unity package available at your disposal, "which ensures your pixel art remains crisp and clear at different resolutions, and stable in motion".

Detecting the release of a ball in real time

I'm working on a project where I'm capturing people making free throw shots via a video camera. I need a way to detect, as fast as possible, the instant the ball is released from a player's hand. I tried researching a lot of detection/tracking algorithms, but everything I've found seemed more suited to tracking the ball itself. While I may eventually want to do that, right now all I need to know is the release timing.
I'm also open to other solutions that don't use the camera (I have a decent budget), but of course I'd like to use the camera if possible/fast enough. I'm also able to mess with the camera positioning/setup, and what I even want in the FOV.
Does anyone have any ideas? I'm pretty stuck right now, and haven't been able to find anything online that can help.
A solution is to use visual markers (motion trackers) on the throwing hand and on the ball. The precision depends on the FPS of the camera.
The assumption is that you know the ball's dimensions; the hand's grip on the ball may vary. By using visual markers/trackers you can know the position of the ball relative to the hand. When the distance between the hand and its initial grip point on the ball becomes bigger than the distance between the center of the ball and its edge (the ball's radius), that is when you have your release. Schema of the method
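One way to express that distance test in code, assuming you already obtain per-frame 2-D positions of the hand marker and the ball center from whatever tracker you use (the inputs here are placeholders):

```python
import numpy as np

def detect_release(hand_positions, ball_positions, ball_radius_px):
    """Index of the first frame where the hand-to-ball-center distance
    exceeds the ball radius, i.e. the hand no longer touches the ball.

    hand_positions, ball_positions: per-frame (x, y) marker coordinates
    coming from whatever marker tracker is used (placeholder inputs).
    """
    for i, (hand, ball) in enumerate(zip(hand_positions, ball_positions)):
        dist = np.linalg.norm(np.asarray(hand, float) - np.asarray(ball, float))
        if dist > ball_radius_px:
            return i  # release frame; timing precision is limited by the camera FPS
    return None
```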
A better solution is to use a graded meter bar (alternating black and white bars like the ones used on the MythBusters show to track the speed of objects). The moment there is a color gap between the hand and the ball, you have your release. The downside of this approach is that you have to capture the image at a side angle or top-down angle and use panels to hold the grading.
Your problem is similar to billiard ball collision detection. I hope you find this paper helpful.
Edit:
There is a powerful tool used for motion capture that is not that expensive, the Microsoft Kinect. The downside of this tool is that its camera works at 30 fps and you cannot use it accurately in a very sunny scene. However, I have found a scientific paper about using the Kinect to record athletes, including free throws in basketball. Paper here
It's my first answer on SO. Any feedback on how to improve my future answers is appreciated.

Is it possible to detect that motion is happening from only one image (no reference is given)?

I have searched around the internet and have only seen motion detection done on video or on two consecutive images. I wonder whether it is possible to detect motion from a single image (like jumping, running, swimming). The motion refers to any significant body movement. If it can be done, please tell me the algorithm and ways to learn it. Thank you.
As others have commented, for the general case, you probably can't. But, there are still avenues to explore, if you have control over some of the parameters.
One idea that comes to mind is detecting motion blur from fast movement. You can accentuate that if you have control over the camera type/exposure.
You can find academic papers on the subject, and can start with:
https://www.google.com/search?q=detecting+motion+blur+in+one+image
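As a concrete, hedged illustration of the blur idea (a standard trick, not something those papers prescribe), the variance of the Laplacian is a cheap single-image sharpness measure:

```python
import cv2

def blur_score(image_path):
    """Lower values mean a blurrier image (variance of the Laplacian)."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    return cv2.Laplacian(gray, cv2.CV_64F).var()
```

Comparing the score of patches around detected people against the rest of the image can hint at local motion blur caused by fast movement.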
A technique that can be helpful to you is called scene understanding. Basically, you train a deep neural network on images with labels that describe each image. That way you can tell that a person is running, swimming or doing some other activity.
There is a good presentation about the subject by Prof. LeCun.
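A rough sketch of that setup using a pretrained backbone from torchvision; the action labels and the fine-tuning loop are placeholders you would have to supply yourself:

```python
import torch.nn as nn
from torchvision import models

ACTION_LABELS = ["running", "swimming", "jumping", "standing"]  # placeholder labels

# Pretrained ImageNet backbone, re-headed for the activity categories
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(ACTION_LABELS))

# Fine-tune `model` on images annotated with these labels; at inference time
# the argmax over the outputs gives the predicted activity for a single image.
```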
What you are implying is an implicit comparison with an image of a person standing in a "stable/not moving" way. So there is a two-image comparison there nonetheless.

Is there Aforge.NET algorithm for human activity recognition?

I was wondering if there is an AForge.NET algorithm that is intended for human activity recognition?
For example, I would like to recognize drowning while capturing frames from a surveillance camera on the beach.
I saw there are algorithms for motion detection, but what I need is motion detection plus logic to process that motion, so that the computer can conclude whether the motion fits into the drowning category or any other category I define.
Comments would be appreciated.
You might need to develop your own algorithms; I do that too with AForge.
AForge basically gives me simple video acquisition, while my own math does the interesting stuff.
In your case:
Detect spots with people (a sketch of this first step is below)
Zoom in on them?
Then it becomes tricky: how do you distinguish someone who dives from someone who sinks?
Also, there are waves that can get in front of the person you're trying to follow.
Usually this recognition comes down to simple observations, for example someone pulling his hands up does not look like the circle of a swimming head.
You have to think about how a beach guard sees the difference, what the main visual clues are, and how you can convert them to bitmap math.
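The answer above is about AForge/.NET, but as a rough sketch of that first step (finding moving "spots" that could be people), here is the same idea with background subtraction in Python/OpenCV; the file name and the area threshold are placeholders:

```python
import cv2

cap = cv2.VideoCapture("beach.mp4")  # placeholder video source
backsub = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = backsub.apply(frame)
    # Keep only reasonably large moving blobs ("spots with people")
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    people_like = [c for c in contours if cv2.contourArea(c) > 500]
    # The higher-level logic (dive vs. sink, wave occlusion) still has to be
    # built on top of these detections, as the answer points out.
cap.release()
```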
Consider using Accord.NET - it's a library based on AForge.NET that contains many machine learning algorithms. However you must write all the logic, as you call it, by yourself.
Another possibility is to use Emgu CV which has some motion detection algorithms.

Matlab image analysis, trying to detect direction of movement

I'm trying to solve a problem I'm facing in detecting the direction of movement in an image.
So I have this video which I'm trying to analyze. It is composed of contracting objects (they continually shrink and expand), and I'm trying to detect whether the object in the current frame is shrinking or expanding.
Here is an example of 2 frames: in one the object is expanded, in the other it is shrunk.
Note: you can't see the difference when they are on top of each other; try saving them and viewing one after the other on your computer.
So, is there a way I can detect the direction of movement in the video (inward or outward)?
Thanks a lot.
This can be solved with "optical flow", which has been studied for several decades now.
The classical method is Horn-Schunck (http://en.wikipedia.org/wiki/Horn%E2%80%93Schunck_method), which you can download here: http://www.mathworks.com/matlabcentral/fileexchange/22756-horn-schunck-optical-flow-method. It's fast but not the most accurate way to solve the problem, as it tends to blur the regions you are interested in detecting, since it minimizes the L2 norm of the gradients. Here's what I got on your images using Horn-Schunck off the shelf:
Since your images have lots of edges it's probably worthwhile to try out some more modern algorithms. http://people.csail.mit.edu/celiu/OpticalFlow/ might help.
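One concrete way to turn the flow field into an inward/outward decision (my own addition, not part of this answer) is the sign of the flow's mean divergence: expansion tends to give a positive value, contraction a negative one. A sketch with OpenCV in Python rather than Matlab:

```python
import cv2
import numpy as np

def expansion_sign(prev_gray, gray):
    """> 0 suggests the object is expanding, < 0 that it is shrinking.
    prev_gray and gray are consecutive grayscale frames (uint8 arrays)."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    # Divergence of the flow field: du/dx + dv/dy
    du_dx = np.gradient(flow[..., 0], axis=1)
    dv_dy = np.gradient(flow[..., 1], axis=0)
    return float(np.mean(du_dx + dv_dy))
```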
