How to design an approximate solution algorithm

I want to write an algorithm that can take parts of a picture and match them to another picture of the same object.
For example, if I gave the computer a picture of a vase and a picture of a scene with the vase in it, I'd expect it to determine where in the image the vase is.
How would I begin to develop an algorithm like this?
The eventual use for this algorithm will be an application that, given a picture of somebody's face, could tell whether they were in a crowd of people. This algorithm would eventually be applied to video streams.
edit: I'm not expecting an actual solution to this problem as I don't hope to solve it anytime soon. The real question was how do you define something like this to a computer so that you could make an algorithm to do it.
Thanks

A former teacher of mine wrote his doctoral thesis on a similar sort of problem, except his input was a detailed 3D model of something, which he would use to find that object in 2D images. This is a VERY non-trivial problem; there is no single 'answer', and certainly nothing that would fit the Stack Overflow format.
My best answer: gather a ton of money and hire a very experienced programmer.
Best of luck to you.

The two problems you describe are quite different.
A major part of each is solved by the numerous machine vision libraries available. You may need a combination of techniques to achieve any success at either task.
For the first one, you would need something that generically recognizes objects. I'd probably use a number of algorithms in concert to identify the foreground object in the model image and then do some kind of weighted comparison against the partitioned target image.
The second case, examining faces, is a much more difficult problem than the general recognition above. Faces all look the same, or nearly so. The things a general recognizer would notice aren't likely to be good for differentiating faces. You need an algorithm already tuned to facial recognition. Fortunately this is a rapidly maturing field, and you can probably do this as well as the first case, but with a different set of functions.
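For the first task, a classical starting point is local feature matching. Here is a minimal sketch using OpenCV's ORB features with a RANSAC homography; the filenames, ratio-test constant, and RANSAC threshold are placeholders you would tune:

```python
# Minimal sketch: locate a known object (e.g. the vase) inside a scene image
# using ORB feature matching and a RANSAC homography. Filenames are placeholders.
import cv2
import numpy as np

obj = cv2.imread("vase.png", cv2.IMREAD_GRAYSCALE)     # the object alone
scene = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)  # scene containing it

orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(obj, None)
kp2, des2 = orb.detectAndCompute(scene, None)

# Match descriptors and keep only distinctive matches (Lowe's ratio test).
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
good = []
for pair in matcher.knnMatch(des1, des2, k=2):
    if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
        good.append(pair[0])

if len(good) >= 4:
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    # The homography maps the object's corners into the scene: its location.
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    h, w = obj.shape
    corners = np.float32([[0, 0], [0, h], [w, h], [w, 0]]).reshape(-1, 1, 2)
    print(cv2.perspectiveTransform(corners, H))
```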

The simple answer is: find a mathematical way to describe faces that can account for angles and partially missing data, then refine and train it.
Apparently Apple has done something like this; however, it still makes mistakes and has to be retrained as it goes.
I expect it will be more about the math than about the programming.
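As one concrete example of "a mathematical way to describe faces", the classic eigenfaces idea projects aligned face images into a low-dimensional subspace and compares them there. A minimal sketch, assuming you already have aligned, equal-sized grayscale face crops (random data stands in for them here):

```python
# Minimal sketch of the eigenfaces idea: describe each face as a short vector
# of principal-component coefficients, then compare faces by distance.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
faces = rng.random((100, 64 * 64))   # placeholder: real aligned grayscale crops

pca = PCA(n_components=20)           # each face becomes 20 "eigenface" weights
codes = pca.fit_transform(faces)

def most_similar(query_code, codes):
    # Nearest neighbour in the low-dimensional face space.
    return int(np.argmin(np.linalg.norm(codes - query_code, axis=1)))

print(most_similar(codes[3], codes))  # prints 3: a face matches itself best
```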

I think you will find this to be quite a challenge. This is an extremely difficult problem and is one of the many areas of computing that fall under the domain of artificial intelligence (AI). Facial recognition would certainly be the most popular variant of this problem, and in spite of what you may read in the media, any claimed successes are not what they are made out to be. I think the closest solutions involve neural nets, and they usually require very clear and carefully selected images.
You could try reading here though. Good luck!

How can I isolate and recolor specific color range?

Given an image of the region containing the lips and other "noise" (teeth, skin), how can we isolate and recolor only the lips (simulating a "lipstick" effect)?
Attached is a photo describing the lips/mouth states.
What we have tried so far is a three-part process:
Color matching the lips using a stable point on the lips (provided by internal API).
Use this color as the base color for the lips isolation.
Recolor the lips (lipstick behavior).
We tried a few algorithms, like hue difference, HSV difference, and ΔE after converting to a CIE color space. Unfortunately, nothing has panned out: everything either fails or produces artifacts, due to the skin's similarity in color to the lips and the discoloration from shadows cast by the nose and mouth.
What are we missing? Is there a better way to approach it?
We are looking for a solution/direction from a classic Computer Vision color algorithm, not a solution from the Machine Learning/Deep Learning domain. Thanks!
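For reference, the ΔE approach described above might look like the following minimal sketch, which thresholds the Euclidean distance in Lab space from a seed lip color; the filename, seed coordinates, threshold, and lipstick shade are all placeholders:

```python
# Minimal sketch: isolate pixels whose CIE Lab distance (a simple Euclidean
# Delta-E) from a seed lip color falls under a threshold, then recolor them.
import cv2
import numpy as np

bgr = cv2.imread("mouth.png")
lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB).astype(np.float32)

seed = lab[120, 160]                          # stable lip point (from your API)
delta_e = np.linalg.norm(lab - seed, axis=2)  # Euclidean Delta-E per pixel
mask = (delta_e < 25).astype(np.uint8)        # tuned threshold

# Clean up speckle from skin/teeth with a morphological opening.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

# Recolor: blend toward the lipstick color only inside the mask.
lipstick_bgr = np.array([60, 20, 180], np.float32)  # placeholder shade
out = bgr.astype(np.float32)
out[mask > 0] = 0.5 * out[mask > 0] + 0.5 * lipstick_bgr
cv2.imwrite("recolored.png", out.astype(np.uint8))
```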
You probably won't like this answer, but your question is ill-posed (there is no measurably best solution, only people's opinions).
In a case like this, the best answer you can hope for is usually:
Ask an expert for a large set of examples that would be acceptable in practice.
Your problem can easily be solved by an appropriate artist (whom you trust to produce usable results) with access to the right tools (for example, Photoshop), but a single artist (or even a group of them) can't possibly scale to millions (or whatever large number you care about) of examples.
To address the shortcoming of the artist-based solution, you can use the following strategy:
Collect a sufficiently large set of before and after images created by artists whom you deem trustworthy.
Apply your favorite machine learning algorithm to learn a mapping from the before images to the after images. There are many possible choices, and it matters surprisingly little which you choose, as long as you know how to use it well.
Note that the above two steps are usually not one-and-done. In using the product, you will come across pathological or badly behaved examples that your ML solution handles poorly. The key is to collect these examples, pass them back through the artists, and retrain or update your ML model. Repeat this enough times and you will have a state-of-the-art solution to your problem (a minimal sketch of the learning step follows below).
Whether you have the funding, time, motivation and resources to accomplish this is another matter.
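In its very simplest per-pixel form, the learning step of that loop might look like the sketch below. This is an illustration under strong assumptions (a pixelwise color mapping, synthetic stand-in data), not a recommendation of a specific model:

```python
# Minimal sketch of the strategy above: learn a per-pixel color mapping from
# artist-made (before, after) image pairs. Synthetic data stands in for a
# real collection of artist examples.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
# Each row: one pixel's (R, G, B) before editing; target: same pixel after.
before = rng.random((10000, 3))
after = np.clip(before * [1.2, 0.8, 0.9], 0, 1)  # placeholder "artist edit"

model = RandomForestRegressor(n_estimators=50).fit(before, after)

# At inference, map every pixel of a new image through the learned function;
# collect failures, send them back to the artists, and retrain.
new_pixels = rng.random((5, 3))
print(model.predict(new_pixels))
```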
You should try semantic segmentation techniques; they generalize well and would give you very good results.

Understanding figures in The Algorithm Design Manual

I want to start learning about algorithms, so I began reading The Algorithm Design Manual by Steven Skiena because it is recommended in some threads I read on SO. However, I just stopped here and can't understand most of it, because many explanations are presented in figures or images, which my screen reader recognizes but can't read.
For example,
"The nearest neighbor rule is very efficient, for it looks at each pair of points
tex2html_wrap_inline23349 //That's how my screen reader reads it; I assume it's an image.
at most twice, once when adding
tex2html_wrap_inline23351 //image
to the tour, the other when adding
tex2html_wrap_inline23353 //another image
Against all these positives there is only one problem. This algorithm is completely wrong."
This is really frustrating for me because I'm beginning to enjoy the book, though I can understand why those images help a lot of readers.
So is there a way to understand this material without seeing the figures? Or should I read another book?
Thanks in advance and happy new year everyone.
Considering these algorithms are dealing with geometrical analysis, I am afraid it would be difficult to understand them without the images, and even more difficult to replace these images with an equivalent textual description.
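As a textual stand-in for the missing figure: the passage describes the nearest-neighbor heuristic for building a tour through a set of points, and the book's counterexample is a set of points on a line that forces the heuristic to zig-zag. This short sketch reproduces the effect numerically (the coordinates are illustrative, not necessarily the book's exact figure):

```python
# Minimal sketch of the nearest-neighbor tour heuristic the passage describes,
# on collinear points. Starting at 0, the heuristic keeps crossing back and
# forth, giving a much longer tour than simply sweeping left to right.
points = [-21, -5, -1, 0, 1, 3, 11]   # illustrative 1-D coordinates

def nearest_neighbor_tour(points, start):
    unvisited = set(points)
    tour = [start]
    unvisited.remove(start)
    while unvisited:
        # Always jump to the closest point not yet visited.
        nxt = min(unvisited, key=lambda p: abs(p - tour[-1]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def tour_length(tour):
    return sum(abs(b - a) for a, b in zip(tour, tour[1:]))

nn = nearest_neighbor_tour(points, 0)
print(nn, tour_length(nn))                          # zig-zags: length 63
print(sorted(points), tour_length(sorted(points)))  # simple sweep: length 32
```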

Is there an algorithm for positioning nodes on a link chart?

I'm a member of a small but fairly sociable online forum, and just for fun we've been plotting a chart of who's met who in real life. Here's what it looked like fairly recently.
(The colour is the "distance" from the currently-selected user; e.g., yellow is someone who's met someone who's met them. And no, I'm not Zak.) Apologies for the faded lines; they don't seem to have weathered the SO upload process very well.
It's generated as SVG, with a big block of JSON defining who's met who. The position (x,y) of each member on the chart is hard-coded into that JSON. Until now, it's been fairly easy to cope when someone meets someone else - at worst, maybe two or three people need to be shuffled around - but it does involve editing the co-ordinates manually. And now that the European and North American contingents are meeting up, and a few on the periphery are showing up at meets, all hell is breaking loose...
We can put some effort into making all the nodes draggable, which would make the job of re-arranging a bit less tiresome. But it seems more sensible to let the computer take care of positioning them, especially as the problem will only get harder with more members.
So, does anyone know of an algorithm for positioning these nodes on the chart, based on which other nodes they're linked with?
Ideally, it would
minimise or avoid long links
avoid having lines run underneath unrelated nodes
take account of the fact that well-connected nodes are bigger
do its best to show the wider "all these guys met each other" relationships (the big circle at the bottom is largely the result of one meet, for example, though the chart has no idea of when any two people met)
but if it gets us close enough to tweak it, that's progress.
And, what's the real name for these charts? I believe they're called "link charts", but I'm not getting good results from Google using that name or anything else I can think of.
We'll likely be implementing this in PHP or Javascript, but right now it's how to begin approaching the problem that's the bigger question.
Edit: Some great answers coming already. I would be very interested in the actual algorithm(s) used, though, as well as tools that do the job.
What you are looking for are, for example, force-based (force-directed) algorithms. There are quite a few libraries, and some have been named already, like prefuse and yWorks. Here are a few more: jung, gvf, jGraph.
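To give a feel for what those libraries do internally, here is a minimal force-directed layout sketch: edges pull their endpoints together like springs, and all node pairs repel. Real implementations add cooling schedules, node sizes, and crossing-reduction heuristics, none of which are shown here:

```python
# Minimal force-directed layout sketch: edges attract their endpoints,
# all node pairs repel. Libraries like prefuse or Gephi do this far better.
import math
import random

def layout(nodes, edges, iterations=500, k=1.0, step=0.02):
    pos = {n: [random.random(), random.random()] for n in nodes}
    for _ in range(iterations):
        force = {n: [0.0, 0.0] for n in nodes}
        for a in nodes:                      # repulsion between every pair
            for b in nodes:
                if a == b:
                    continue
                dx = pos[a][0] - pos[b][0]
                dy = pos[a][1] - pos[b][1]
                d = math.hypot(dx, dy) + 1e-9
                f = k * k / (d * d)
                force[a][0] += f * dx / d
                force[a][1] += f * dy / d
        for a, b in edges:                   # spring attraction along edges
            dx = pos[b][0] - pos[a][0]
            dy = pos[b][1] - pos[a][1]
            d = math.hypot(dx, dy) + 1e-9
            f = d * d / k
            force[a][0] += f * dx / d
            force[a][1] += f * dy / d
            force[b][0] -= f * dx / d
            force[b][1] -= f * dy / d
        for n in nodes:                      # move a clamped step along the force
            fx, fy = force[n]
            mag = math.hypot(fx, fy) + 1e-9
            cap = min(mag, 1.0)
            pos[n][0] += step * cap * fx / mag
            pos[n][1] += step * cap * fy / mag
    return pos

print(layout(["ann", "bob", "cat", "dan"], [("ann", "bob"), ("bob", "cat")]))
```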
The real name for these is a "graph". To generate the graph with a good layout algorithm, the best option is to use software that will do the job for you.
I advise you to use Gephi.
This software can do all the things you want.
Have a look at the yWorks tools.
You can google for "graph visualization". There are more libraries for this, including GraphViz, though probably not all of your requirements will be met.
If you can deal w/ Java, take a look at prefuse.
Have a look at NodeXL
Also, this book may be relevant.

Is an algorithm to judge the age of person in a photo feasible?

My friend works for a non-profit organization working to stop the illegal exploitation of minors over sites such as craigslist.org, which is one of the more popular mediums. The question is whether or not it is possible, now or in the near future, to develop an algorithm to analyze a photo of a person and return a prediction of their relative age.
It sounds like a mammoth task. My only thought was some sort of Bayesian probability system. I know even people often have trouble judging someone's age, but Bayesian spam filters are advertised as being "10 times as accurate as a human", so maybe it's possible?
I am pretty inexperienced though. I would appreciate it if someone else could suggest whether or not this is feasible and if so how and when?
EDIT: Thank you everyone for the responses. Smoore that study was very helpful but I think Hal's solution is the most practical for the time being.
Here's a possible (left-field) solution. Perhaps you could tie it into some type of captcha solution for the site itself. Prompt new users with images of other new users with the question: "Is this person over 18?". It's true that a 50% success rate is not a very effective captcha system, but it's a start.
Coupled with some other checks, or repeated checks, it could work. You could display the image to a number of new users and base the result on a certain threshold. If 8 out of 10 people flagged a certain image as not a minor, then it's probably pretty safe that they are of age.
But, this whole system can be circumvented by simply uploading someone else's image so I'm not sure how effective any of this really is. :)
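To make the thresholding step concrete, a tiny sketch (the minimum vote count and 80% cutoff are placeholders):

```python
# Minimal sketch: aggregate yes/no "over 18?" votes per image and only trust
# a verdict once enough people have voted and the majority is strong enough.
def verdict(votes, min_votes=10, threshold=0.8):
    """votes: list of booleans, True = 'looks over 18'."""
    if len(votes) < min_votes:
        return "undecided"
    ratio = sum(votes) / len(votes)
    if ratio >= threshold:
        return "probably of age"
    if ratio <= 1 - threshold:
        return "probably a minor"
    return "undecided"

print(verdict([True] * 8 + [False] * 2))  # probably of age
```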
I expect it would be pretty hard to get right. Consider this set of photos, where the same model is made up to look like a range of very different ages.
There are algorithms to reliably determine the attractiveness of a face. See acm.org and uni-regensburg.de. It wouldn't be too much of a stretch to imagine an algorithm that could predict age.
Characteristics such as skin smoothness would probably have a strong correlation with age. It would probably take a great deal of effort to be more reliable than your average carny, though.
I think you would need some input from a forensic anthropologist (or at least an anatomist).
Different parts of the body grow at different rates, so it might be possible to do something like compare the size of the head vs. shoulder width, or arm length vs. body width.
Unfortunately, it sounds like he is trying to differentiate between, say, 14-year-olds and 18-year-olds, which is only a four-year difference. Variations in genetic makeup and nutrition would probably give any system an accuracy of +/- 20%, which would equate to about three years for this age group.
On the other hand, if you had a large sample of photos, then you could account for the variance statistically and get a pretty good idea of whether a site was likely to be exploiting minors systematically.
The direct answer to your question is that no, no such algorithm will exist in the near future, and it is probably impossible to achieve with any accuracy without strong AI.
That said, a practical solution to your problem is probably the Amazon Mechanical Turk:
http://mturk.com
There, you can pay a small fee to have real people complete a task for you. I'd probably set your task up so that you paid $0.02 to have a person estimate the age of maybe 5 faces at a time. You could double or triple check your results with other workers, particularly for those faces who seemed close to your age limit. This is probably your only practical solution other than hiring minimum wage interns to manually review all submissions.
Use Mechanical Turk.
In this study, they tried it by analysing facial geometry and wrinkle features. The problem is that this would be affected by shot angle, lighting, etc.
In some theoretical sense it is probably possible. For all practical purposes though, it is currently impossible.
Mammoth is an understatement I think. "Giant glacier" or "moon" might be more appropriate.
This isn't to say it wouldn't be worth looking into but I have a feeling you'd be in for a lot of man hours before you came up with something remotely useful.
I don't think it's something that a computer could do with any degree of accuracy. It's really hard even for people to do. I mean, have you been to the liquor store lately? They are supposed to ask for ID from anybody who looks under 25 (the drinking age is 19 here). Apparently some 40-year-olds don't look old enough. Telling somebody's age just by looking at them is a very hard thing to do, especially when you get into the erotic picture arena, where they are trying to make models seem younger than they really are.
I think you will also have difficulties with differently composed pictures: angles on a face, different lighting, as well as context, and probably most of all... image quality/resolution. It's a lot easier to work with an 800x600 pic than with a 320x240 one. The algorithm is only as good as its subject.
I cannot see this approach (a software solution to measuring age) being very effective. I like the idea of users flagging images; a human being can discern age many times more effectively than any algorithm.
Practical approaches aside, I'd advise against trying to develop anything in that direction for now.
A few reasons:
1. guessing someone's age is a thankless task
2. "biological" age and "calendar" age of people vary greatly - I know people who are 30 and are still asked for an ID when buying liquor, and some who are barely 18 and already look over 30
3. some people's looks don't change over time - they just have that kind of looks
4. nowadays, everyone's working to look as young as they can - so basically, you've got the whole industry working against you :(
Anyway, to cut a long story short, I don't think it's feasible for now.
A neural net is a reasonable approach; you would need a training set of pictures of people with known ages, and a bit of image processing to remove hats, etc.
edit: Question changed?
You might be able to classify someone as 20-30 or 40-50 on CCTV, but you aren't going to be able to tell whether a model is 17 or 18 in a posed photo.
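For what it's worth, the training step of that approach might look like this minimal scikit-learn sketch; the random arrays are placeholders for a real set of preprocessed face crops with known ages:

```python
# Minimal sketch: regress age from flattened, aligned grayscale face crops
# with a small neural net. Random data stands in for a real labeled set.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((500, 32 * 32))      # placeholder for preprocessed face crops
y = rng.uniform(10, 60, size=500)   # placeholder for known ages

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
net = MLPRegressor(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
net.fit(X_train, y_train)

# On random data this number is meaningless; on a real, large labeled set
# you would look at the mean absolute error in years.
print(np.abs(net.predict(X_test) - y_test).mean())
```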
Like nearly all advanced tasks in image classification, this topic is still an area of active research. Judging from this paper, it is possible but non-trivial, and you would also need a lot of (manually) annotated training data. Without any knowledge of this field and no experience in image processing, this task is going to take you several months.
Develop a classification algorithm that bases a heuristic on many values from the picture: the number of dark pixels within the face area (possibly wrinkles), the color of the hair, and so on. These values should fall within a general range for any profile-like picture. If you want to be fancy, attach weights to these values and build a tree that can search hundreds of thousands of images quickly, finding where a given image "falls" within age-specific ranges of values.
Some Japanese cigarette vending machines do this. Not terribly well by all accounts, but then it probably doesn't matter since, as Hal mentioned, the easiest hack is just to use someone else's image...
Nothing is impossible; only the amount of effort changes.
I think it would be near impossible if you targeted one particular feature of the face.
You have to consider multiple factors, so the decision will lie in a matrix: feed in multiple features and you will get your answer. I would list some features:
1) Beard (detect the face, then detect a beard on the face; helpful in distinguishing male/female/children)
2) Hair
3) Wrinkles
4) Size of face
5) Ratio between height and breadth of face
It would be a tough assignment, but an algorithm can be developed.
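Of the features listed, face detection is the most tractable step today. A minimal sketch using OpenCV's bundled Haar cascade (the image filename is a placeholder; measuring beard, wrinkles, and proportions inside each rectangle would be your own work):

```python
# Minimal sketch of step 1, face detection, using OpenCV's bundled Haar
# cascade. The remaining features (beard, wrinkles, proportions) would be
# measured inside each returned rectangle.
import cv2

img = cv2.imread("photo.png")  # placeholder filename
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    print("face at", (x, y), "aspect ratio", w / h)  # e.g. feature 5 above
```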
Yes, as of now this is possible with 90% accuracy. Please refer to the following link:
http://www.omron.com/r_d/coretech/vision/okao.html

How would you implement a perfect line-of-sight algorithm?

Disclaimer: I'm not actually trying to make one; I'm just curious how it could be done.
When I say "Most Accurate" I include the basics
wall
distance
light levels
and the more complicated:
Dust in Atmosphere
rain, sleet, snow
clouds
vegetation
smoke
fire
If I were to want to program this, what resources should I look into and what things should I watch out for?
Also, are there any relevant books on the theory behind line of sight including all these variables?
I personally don't know too much about this topic, but a couple of quick Google searches turn up some formal papers that contain some very relevant information:
http://www.tecgraf.puc-rio.br/publications/artigo_1999_efficient_lineofsight_algorithms.pdf - Provides a detailed description of two different methods of efficiently performing an LOS calculation, along with issues involved
http://www.agc.army.mil/operations/programs/LOS/LOS%20Compendium.doc - This one aims to maintain "a current list of unique LOS algorithms"; it has a section listing quite a few and describing them in detail with a focus on military applications.
Hope this helps!
Typically, one represents the world as a set of volumes of space held in some kind of space-partitioning data structure, then intersects the ray representing your "line of sight" with that structure to find the set of objects it hits. These are then walked in order from the ray origin to determine the overall result. Reflective objects cause further rays to be fired, opaque objects stop the walk, and semi-transparent objects partially contribute to the result.
You might like to read up on ray tracing; there is a great body of literature on the subject, with well-understood ways of solving what are basically the same problems you list.
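For the simplest case (walls and distance only), the core of such a walk can be sketched on a 2D grid: step along the ray and stop at the first blocking cell. A minimal illustration, not the exact traversal a production engine would use:

```python
# Minimal sketch: line of sight on a 2D grid by sampling along the ray and
# stopping at the first wall cell. Real engines use exact grid traversal
# (e.g. Amanatides & Woo) and 3D space partitioning instead of fixed steps.
def line_of_sight(grid, a, b, samples=200):
    """grid[y][x] == 1 means a wall; a and b are (x, y) points."""
    (x0, y0), (x1, y1) = a, b
    for i in range(1, samples):
        t = i / samples
        x = int(round(x0 + t * (x1 - x0)))
        y = int(round(y0 + t * (y1 - y0)))
        if (x, y) not in (a, b) and grid[y][x] == 1:
            return False
    return True

grid = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0]]
print(line_of_sight(grid, (0, 0), (3, 2)))  # blocked by the wall
print(line_of_sight(grid, (0, 2), (3, 2)))  # clear along the bottom row
```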
The obvious question is: do you really want the most accurate, and why?
I've worked on games that depended on line of sight and you really need to think clearly about what kind of line of sight you want.
First, can the AI see any part of your body? Or are you talking about "eye to eye" LOS?
Second, if the player's camera view is not his avatar's eye view, the player will not perceive your highly accurate LOS as highly accurate. At which point inaccuracies are fine.
I'm not trying to dissuade you, but remember that player experience is #1, and that might mean not having the best LOS.
A good friend of mine has done the AI for a long-running series of popular console games. He often tells a story about how the AIs were most interesting (and fun) in the first game, because they stumbled into you rather than seeing you from afar. Now he has great LOS and spends his time trying to dumb the AIs down to make them as fun as they were in the first game.
So why are you doing this? Does the game need it? Or do you just want the challenge?
There is no "one algorithm" for these since the inputs are not well defined.
If you treat Dust-In-Atmosphere as a constant value then there is an algorithm that can take it into account, but the fact is that dust levels will vary from point to point, and thus the algorithm you want needs to be aware of how your dust-data is structured.
The most-used algorithm in today's ray tracers is just incremental ray marching, which is by definition not correct, but it does approximate the Ultimate Answer to a fair degree.
Even if you managed to incorporate all these properties into a single master algorithm, you'd still have to deal somehow with how different people perceive the same scene. Some people are near-sighted, some far-sighted. Then there's the colour-blind. Not to mention that dust-in-atmosphere levels also affect the tear glands, which in turn affect visibility. And then there's the whole dichotomy between what people are actually seeing and what they think they are seeing...
There are far too many variables here to aim for a unified solution. Treat your environment as a voxelated space and shoot your rays through it. I suspect that's the only solution you'll be able to complete within a single lifetime...
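To make the ray-marching idea concrete: step along the ray, sample the local density (dust, smoke, fog) at each step, and accumulate transmittance with Beer-Lambert attenuation. A minimal sketch, with a toy density function standing in for real voxel data:

```python
# Minimal sketch of incremental ray marching: accumulate transmittance through
# a density field (dust, smoke, fog) sampled at fixed steps along the ray.
import math

def density(x, y, z):
    # Toy field: a band of thick smoke between x = 4 and x = 6.
    return 0.3 if 4.0 < x < 6.0 else 0.01

def visibility(p0, p1, step=0.1):
    dist = math.dist(p0, p1)
    n = max(1, int(dist / step))
    transmittance = 1.0
    for i in range(n):
        t = (i + 0.5) / n
        x, y, z = (a + t * (b - a) for a, b in zip(p0, p1))
        # Beer-Lambert attenuation over one marching step.
        transmittance *= math.exp(-density(x, y, z) * (dist / n))
    return transmittance

print(visibility((0, 0, 0), (10, 0, 0)))  # dimmed by the smoke band
print(visibility((0, 5, 0), (3, 5, 0)))   # nearly clear air
```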
