I am using OpenCV and Visual Studio Ultimate 2010. My goal is to detect cars on a road and count them; I am using an edge-detection approach for this.
Does anyone have any ideas for the detection, counting, and speed computation?
What is the solution for overlapping cars (when counting)?
We want to perform these tasks for any object that crosses a virtual line.
Your question is still too broad.
However, I'll try to give you some pointers.
You can use a background subtraction method; you can find some info and code here and here. Then work on your moving objects (the cars); you can find some hints here.
Set up your car classifier and try to detect the cars in your scene. You can find a nice tutorial on Support Vector Machines (SVM) here, or you can start from the OpenCV people-detector example and train it to detect cars instead of people.
For counting cars I recommend the first approach.
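A toy sketch of the counting step, in Python for brevity (the question uses C++, but the logic carries over). It assumes frames arrive as plain 2-D lists of grayscale values and that a single object's centroid row is already being tracked per frame; all names are made up for the example:

```python
def moving_mask(frame, background, thresh=30):
    """Foreground mask: pixels that differ from the background frame by
    more than `thresh` (a stand-in for real background subtraction)."""
    return [[abs(f - b) > thresh for f, b in zip(frow, brow)]
            for frow, brow in zip(frame, background)]

def count_line_crossings(centroid_rows, line_y):
    """Count how often a tracked centroid moves from above to below a
    horizontal virtual line; `centroid_rows` holds one row value per frame."""
    count = 0
    for prev, cur in zip(centroid_rows, centroid_rows[1:]):
        if prev < line_y <= cur:
            count += 1
    return count
```

Speed can then be estimated from the number of frames it takes an object to cross two such lines whose real-world distance is known.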
Good luck!
I have a small team of developers with the task of identifying farm animals in a series of aerial photographs (from a small UAV) of farmland and/or river settings.
I understand this is an open-ended question but can anyone suggest starting places to look for techniques/packages or any other advice for attempting this kind of project?
Either image detection or machine learning techniques (or both) could serve this purpose.
Any advice is appreciated.
This is actually a two part problem:
Separating 'objects' from background
Recognizing 'objects' as animals
If you separate this into two problems, you can start building a solution for problem one and figure out problem two along the way.
Problem one is a typical detection problem. Obviously it depends on the images you have, their resolution, and the size of the objects relative to the whole image, but this should certainly be doable; it's a fairly basic computer-vision problem, depending on your exact situation of course. Search for blocks of pixels that are significantly different from their surroundings, then separate those blocks to use them as input for problem number two.
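A minimal pure-Python sketch of that step (finding connected blocks of pixels that stand out), assuming a 2-D list of grayscale values and a fixed brightness threshold; in practice you would threshold a difference image against a background model, or use a library's connected-components routine:

```python
from collections import deque

def find_blobs(image, thresh):
    """Label 4-connected regions of pixels brighter than `thresh`.
    Returns a list of pixel-coordinate lists, one per blob."""
    h, w = len(image), len(image[0])
    seen = [[False] * w for _ in range(h)]
    blobs = []
    for y in range(h):
        for x in range(w):
            if image[y][x] > thresh and not seen[y][x]:
                blob, queue = [], deque([(y, x)])
                seen[y][x] = True
                while queue:  # breadth-first flood fill of one region
                    cy, cx = queue.popleft()
                    blob.append((cy, cx))
                    for ny, nx in ((cy+1, cx), (cy-1, cx), (cy, cx+1), (cy, cx-1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and image[ny][nx] > thresh and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                blobs.append(blob)
    return blobs
```

Each returned blob can then be cropped out and fed to the classifier in problem two.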
Number two also depends heavily on the same factors as problem one, but they are much more influential here. It might very well be impossible to properly separate actual animals from, say, a water container standing in the same field of grass. I'd try to solve this with machine learning, probably a neural network, but you'd have to solve problem one first and have a large enough result set to train with. First separate the animals from the non-animals yourself, then train the network.
Not really a complete answer, but as you said, it's an open-ended question. Too many unknowns at this point.
I have a robot that I need to write an autonomous program for. The program is to play on this field: http://www.vexforum.com/wiki/index.php/Gateway.
The robot has to pick up the balls and barrels and put them in the cylinders (goals). I have sensors such as light detection (best for following a white line on the ground, or keeping track of location by noticing when you cross a white line), ultrasonic sonar, bump sensors, and encoders (which count wheel rotations). I want to write a program that learns the field and learns how best to navigate for the tasks at hand. I am thinking a neural net is my best choice, but I can't think of what inputs I would use. The main thing is that I don't want scripted paths. I know this is pretty vague, but with too much detail no one would read it. Anyone have any ideas?
Check out the Udacity course CS373 by Prof. Thrun at http://www.udacity.com/overview/Course/cs373.
He has successfully applied particle filters to program the Google driverless car.
You need to use Simultaneous Localization and Mapping (SLAM).
It is a pretty standard and successful technique for robot localization.
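To make the particle-filter idea mentioned above concrete, here is a toy one-dimensional predict/weight/resample cycle in Python. The motion and sensor models (Gaussian noise with the given sigmas) are made-up assumptions for illustration; a real robot would use its encoder and sonar models here:

```python
import math
import random

def particle_filter_step(particles, control, measurement,
                         motion_noise=0.1, sensor_sigma=0.5, rng=random):
    """One predict-weight-resample cycle of a 1-D particle filter."""
    # Predict: move every particle by the control input plus motion noise.
    moved = [x + control + rng.gauss(0, motion_noise) for x in particles]
    # Weight: Gaussian likelihood of the measurement given each particle.
    weights = [math.exp(-((x - measurement) ** 2) / (2 * sensor_sigma ** 2))
               for x in moved]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # Resample: draw a new particle set proportionally to the weights.
    return rng.choices(moved, weights=weights, k=len(particles))
```

Repeating this step as the robot moves makes the particle cloud converge on the robot's true position.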
I have a database of images. When I take a new picture, I want to compare it against the images in this database and receive a similarity score (using OpenCV). This way I want to detect, if I have an image, which is very similar to the fresh picture.
Is it possible to create a fingerprint/hash of my database images and match new ones against it?
I'm searching for an algorithm, a code snippet, or a technical demo, not a commercial solution.
Best,
Stefan
As Paul R has commented, this "fingerprint/hash" is usually a set of feature vectors or feature descriptors. But most feature vectors used in computer vision are too computationally expensive for searching against a database, so this task needs a special kind of feature descriptor: descriptors such as SURF and SIFT would take too much time to search, even with various optimizations.
The only thing OpenCV has for your task (object categorization) is an implementation of Bag of Visual Words (BoW).
It can compute a special kind of image feature and train a visual-words vocabulary. You can then use this vocabulary to find similar images in your database and compute a similarity score.
Here is OpenCV documentation for bag of words. Also OpenCV has a sample named bagofwords_classification.cpp. It is really big but might be helpful.
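To make the last step concrete, here is a toy pure-Python version of what happens after the vocabulary is trained: each descriptor is quantized to its nearest visual word, and images are compared by their word histograms. Real systems use SIFT/SURF descriptors and a k-means-trained vocabulary rather than these tiny hand-made vectors:

```python
import math

def bow_histogram(descriptors, vocabulary):
    """Quantize each descriptor to its nearest visual word (by squared
    Euclidean distance) and build a normalized word histogram."""
    hist = [0] * len(vocabulary)
    for d in descriptors:
        nearest = min(range(len(vocabulary)),
                      key=lambda i: sum((a - b) ** 2
                                        for a, b in zip(d, vocabulary[i])))
        hist[nearest] += 1
    total = sum(hist) or 1
    return [h / total for h in hist]

def cosine_similarity(h1, h2):
    """Similarity score between two histograms: 1.0 means identical."""
    dot = sum(a * b for a, b in zip(h1, h2))
    n1 = math.sqrt(sum(a * a for a in h1))
    n2 = math.sqrt(sum(b * b for b in h2))
    return dot / (n1 * n2) if n1 and n2 else 0.0
```

Storing only the histogram per database image is what makes the search cheap compared to matching raw descriptors.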
Content-based image retrieval systems are still a field of active research: http://citeseerx.ist.psu.edu/search?q=content-based+image+retrieval
First you have to be clear about what constitutes "similar" in your context:
Similar color distribution: use something like color descriptors for subdivisions of the image; you should get fairly satisfying results.
Similar objects: since the computer does not know what an object is, you will not get very far unless you have some extensive domain knowledge about the objects (or few object classes). A good overview of the current state of research can be seen here (results) and soon here.
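A minimal sketch of the first option, per-region color descriptors, using grayscale intensity histograms over a 2×2 grid as a stand-in for real color descriptors (the bin count and grid size here are arbitrary choices for the example):

```python
def grid_histograms(image, bins=4, grid=2):
    """Concatenate per-cell, normalized intensity histograms over a
    grid x grid subdivision of a 2-D list of 0-255 pixel values."""
    h, w = len(image), len(image[0])
    feats = []
    for gy in range(grid):
        for gx in range(grid):
            hist = [0] * bins
            for y in range(gy * h // grid, (gy + 1) * h // grid):
                for x in range(gx * w // grid, (gx + 1) * w // grid):
                    hist[image[y][x] * bins // 256] += 1
            total = sum(hist) or 1
            feats.extend(v / total for v in hist)
    return feats

def l1_distance(f1, f2):
    """Smaller distance means more similar color distribution."""
    return sum(abs(a - b) for a, b in zip(f1, f2))
```

The subdivision keeps some spatial information, so a sky-on-top image does not match a sky-on-bottom one as strongly as a global histogram would suggest.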
There is no "serve all needs"-algorithm for the problem you described. The more you can share about the specifics of your problem, the better answers you might get. Posting some representative images (if possible) and describing the desired outcome is also very helpful.
This would be a good question for computer-vision.stackexchange.com, if it already existed.
You can use a pHash-style algorithm (OpenCV's img_hash module), store the hash values in your database, and then compare a new image's hash against them with code like this:
double const mismatch = algo->compare(image1Hash, image2Hash);
Here the 'mismatch' value can easily tell you the similarity between the two images.
Available hash functions:
AverageHash
PHASH
MarrHildrethHash
RadialVarianceHash
BlockMeanHash
ColorMomentHash
These functions are good enough to evaluate image similarity in most respects.
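To show the idea behind the simplest of these, here is a pure-Python sketch of what AverageHash does conceptually: one bit per pixel, set by comparing against the mean intensity. The real OpenCV implementation first resizes the image to a small fixed size (e.g. 8×8), which is omitted here:

```python
def average_hash(image):
    """One bit per pixel of a small grayscale image (2-D list):
    1 if the pixel is brighter than the image mean, else 0."""
    flat = [p for row in image for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming_distance(h1, h2):
    """Number of differing bits; a small distance means similar images."""
    return sum(a != b for a, b in zip(h1, h2))
```

Because the hash is a short bit string, comparing a new image against a whole database is just a cheap Hamming-distance scan.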
I am a beginner in image processing. I want to write an application in C++ or in C# for
Searching an image in a list of images
Searching for a particular feature (e.g. a face) in a list of images.
Can anybody suggest where should I start from?
What all should I learn before doing this?
Where can I find the correct information regarding this?
In terms of the second one, you should start off with learning how to solve the decision problem of whether a square patch contains a face (or whatever kind of object you are interested in). For that, I suggest you study a little bit of machine learning, the AdaBoost algorithm, Haar features, and Viola-Jones.
Once you know how to do that, the trick is really just to take a sliding window across your image, feeding the contents of that window into your detector. Then you shrink your main input image and repeat the process until your input image has gotten smaller than the minimum size input for your detector. There are, of course, several clever ways to parallelize the computation and speed it up, but the binary detector is really the interesting part of the process.
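The sliding-window-plus-shrinking loop can be sketched like this (Python for brevity). The classifier is passed in as a plain function, and the shrink step here simply drops every other pixel, a crude stand-in for proper resampling:

```python
def sliding_window_detect(image, win, step, classify):
    """Run a binary patch classifier over every window position at every
    scale. `classify` takes a `win` x `win` patch (list of rows) and returns
    True for a detection. Returns (scale, y, x) triples."""
    detections, scale = [], 0
    while len(image) >= win and len(image[0]) >= win:
        for y in range(0, len(image) - win + 1, step):
            for x in range(0, len(image[0]) - win + 1, step):
                patch = [row[x:x + win] for row in image[y:y + win]]
                if classify(patch):
                    detections.append((scale, y, x))
        image = [row[::2] for row in image[::2]]  # shrink, then repeat
        scale += 1
    return detections
```

A trained Viola-Jones cascade would play the role of `classify` here; the loop structure around it stays the same.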
You may find some of the material linked from the CSE 517: Machine Learning - Syllabus helpful in getting into machine learning and understanding AdaBoost. You will certainly find the Viola-Jones paper of interest.
I would like to know what algorithm is used to take an image, find the objects present in it, and process (give information about) them. Also, how is this done?
I agree with Sid Farkus, there is no simple answer to this question.
Maybe you can get started by checking out the Open Computer Vision Library. There is a Wiki page on object detection with links to a How-To and to papers.
You may find other examples and approaches (i.e. algorithms); it's likely that the algorithms differ by application (i.e. depending on what you actually want to detect).
There are many ways to do object detection, and it is still an open problem.
You can start with template matching. It is probably the simplest approach: you convolve the known image (IA) over the new image (IB). The idea is fairly simple because it is like applying a filter to a signal; the filter produces a maximum in the output where it finds the object, as shown in the video. But the technique has several cons: it does not handle variations in scale or rotation, so it has little real application on its own.
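A toy pure-Python version of that correlation search over 2-D lists; note that practical implementations (e.g. OpenCV's matchTemplate) use normalized cross-correlation, since the raw score below is biased toward bright image regions:

```python
def match_template(image, template):
    """Slide the template over the image and return the (y, x) of the
    highest raw correlation score (sum of element-wise products)."""
    th, tw = len(template), len(template[0])
    best, best_pos = None, None
    for y in range(len(image) - th + 1):
        for x in range(len(image[0]) - tw + 1):
            score = sum(image[y + i][x + j] * template[i][j]
                        for i in range(th) for j in range(tw))
            if best is None or score > best:
                best, best_pos = score, (y, x)
    return best_pos
```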
Another, more robust option is feature matching: create a dataset of features such as SIFT, SURF, or ORB for different objects, and train an SVM on them to recognize the objects.
You can also check out deformable part models. However, the state of the art in object detection is based on deep learning, e.g. Faster R-CNN and AlexNet, which learn the features that will be used to detect/recognize the objects.
Well, this is hardly an answerable question, but for many computer-vision applications a good starting point is the Hough transform.
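A minimal sketch of the Hough transform for lines: each point votes for every (theta, rho) line it could lie on, and accumulator peaks reveal lines shared by many points. The discretization choices (180 angle steps, unit-width rho bins) are arbitrary for the example:

```python
import math

def hough_lines(points, angle_steps=180, rho_res=1.0):
    """Vote each (x, y) point into a (theta, rho) accumulator; a peak means
    many points satisfy rho = x*cos(theta) + y*sin(theta) for that bin.
    Returns the winning (theta_index, rho_bin) key and its vote count."""
    acc = {}
    for x, y in points:
        for t in range(angle_steps):
            theta = math.pi * t / angle_steps
            rho = x * math.cos(theta) + y * math.sin(theta)
            key = (t, round(rho / rho_res))
            acc[key] = acc.get(key, 0) + 1
    best = max(acc, key=acc.get)
    return best, acc[best]
```

In practice the points come from an edge detector, and several accumulator peaks (not just the single best one) are read off as detected lines.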