A salient object in an image is the part that draws all of a human viewer's attention, while the rest of the image is mostly ignored by human vision.
I wonder why so many people are researching saliency. I can't find a good illustration of how saliency would be of any use in the real world. Could you give me an example that shows the usefulness of saliency?
This is not really the place to ask this kind of question, because here we help with problems in your code...
But anyway, I will try to explain it. From Wikipedia:
Saliency detection is considered to be a key attentional mechanism
that facilitates learning and survival by enabling organisms to focus
their limited perceptual and cognitive resources on the most pertinent
subset of the available sensory data.
So you use visual saliency in order to survive. In a primitive world, suppose you were hunting: if anything moved in the forest, the saliency generated by the movement would make you react immediately.
Examples:
Suppose you are driving and suddenly a truck appears.
You react immediately because of the saliency generated by the moving truck.
Suppose you are driving and suddenly a cyclist appears
(wearing yellow clothes). The saliency focuses your attention on the cyclist and you slow down your car.
You can learn a little bit more here.
I am getting the GPS position and time of a volunteer who moves around. I acquire the position every second with Matlab and save it in a matrix.
Now I would like to be able to say whether the person is moving normally or not. For example, running in circles is not normal for a person who usually only walks around.
I am not looking for a complete solution, because I would like to learn through my project and understand every aspect. I would be very grateful if you could point me in the right direction. Good literature, tutorials, and simple catchwords would also be very helpful, because at the moment I don't know how to approach my problem.
Thank you very much in advance!
Kind regards,
Tom
What you're looking for is anomaly detection. The primary commercial application of this technology is fraud detection. As for resources, any book that covers data mining should have a section on anomaly detection.
Something to forewarn you about: it sounds from your description that you will be working with time series data, which is its own branch of data mining.
Catchwords: Anomaly Detection, and Time Series Data.
Books: Introduction to Data Mining, ISBN-13 978-0321321367 (a good starting point if you don't have a lot of background in the subject)
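To make those catchwords concrete, here is a minimal sketch of one simple test, written in Python rather than Matlab: flag sampling intervals whose speed is a statistical outlier for this particular person. It assumes the positions are already projected to metres (not raw latitude/longitude), and the z_thresh cut-off of 3 is an arbitrary choice.

    import numpy as np

    def flag_speed_anomalies(positions, dt=1.0, z_thresh=3.0):
        """Flag intervals whose speed deviates strongly from this person's norm.

        positions: (N, 2) array of x/y coordinates in metres, sampled every dt seconds.
        Returns a boolean array of length N-1 marking anomalous intervals.
        """
        steps = np.diff(positions, axis=0)            # displacement per interval
        speeds = np.linalg.norm(steps, axis=1) / dt   # metres per second
        mu, sigma = speeds.mean(), speeds.std()
        return np.abs(speeds - mu) > z_thresh * sigma

A z-score on speed only catches one kind of anomaly; detecting "running in circles" would need features such as heading change per second, but the overall pattern (derive features, model "normal", flag deviations) stays the same.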
I need to implement an algorithm that takes a picture (JPEG) as input and creates a new picture as output containing only the bodies, with the background removed completely. The input is a vacation photo with people in it, and I need to recognize the human bodies and remove the background. Can someone suggest which algorithm to use, or which book to buy to learn such algorithms?
Check this link; it will answer your question about removing the background and performing further processing.
Neural networks are particularly useful for this kind of task, but the theory is a universe of its own; if you're doing it from scratch, that's a lot of work.
This is a segmentation problem. In the general case, segmenting images is a hard research problem (I just spent five years doing a doctorate on segmenting greyscale medical images, for example) and the way you go about it is strongly tied to the type of images with which you have to deal. The best advice I can give is to go and read the appropriate literature on segmenting colour images (e.g. use Google Scholar). In terms of books, this one's a good general-purpose introduction to image processing:
http://www.amazon.co.uk/Digital-Image-Processing-Rafael-Gonzalez/dp/0130946508/ref=sr_1_7?ie=UTF8&qid=1326236038&sr=8-7
Searching for "segmenting people in colour images" on Google seems to turn up some good links, incidentally.
I have a question for you: do you want to implement this yourself as an algorithm? If so, it might require a lot of work (especially if you are new to the field of image processing).
Otherwise, you could try masking techniques in image-editing software like Adobe Photoshop (that would take barely 15 minutes, depending on how well you know it).
A good book to start with image processing techniques is: "Digital Image Processing" by Gonzalez and Woods; it starts from the basics, and explains stuff in depth.
Still, it may take a lot of time to develop an algorithm for this job, so I recommend you use a library instead. OpenCV (open-source computer vision) is an excellent choice. The library comes with demos that include programs for face detection, and its built-in functions provide a variety of features you may need here (edge detection, feature identification and extraction). Here's the link:
http://opencv.willowgarage.com/wiki/
The link provides a lot of reference material that you can make use of! :)
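As a concrete illustration of the OpenCV suggestion, here is a minimal background-removal sketch in Python using the library's GrabCut segmentation. GrabCut is just one of several routines OpenCV offers for this, and the file names and initial rectangle below are placeholders:

    import cv2
    import numpy as np

    img = cv2.imread("vacation.jpg")                  # hypothetical input photo
    mask = np.zeros(img.shape[:2], np.uint8)
    bgd = np.zeros((1, 65), np.float64)               # internal model buffers
    fgd = np.zeros((1, 65), np.float64)

    # Rough rectangle around the people; in practice let the user drag one.
    rect = (50, 50, img.shape[1] - 100, img.shape[0] - 100)
    cv2.grabCut(img, mask, rect, bgd, fgd, 5, cv2.GC_INIT_WITH_RECT)

    # Keep definite and probable foreground pixels; zero out the background.
    fg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0).astype(np.uint8)
    cv2.imwrite("bodies_only.png", img * fg[:, :, None])

The result is usually rough around hair and limbs, which is why a segmentation pass like this is normally followed by manual touch-up or matting.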
Start with facial recognition software and algorithms; they have been the most refined over the years. As long as all of your bodies have heads, you can use EXIF data to figure out the image capture orientation (of course you can't rely on that completely), sample the facial skin to get skin-tone ranges, and find the attached body. Anything that is not head or body gets deleted. This process assumes that a person has roughly the same skin tone on their face as on their body, and that the camera flash isn't washing it out. You could grab the flash duration and some other attributes from EXIF and adjust your ranges accordingly.
A lot of software out there can recognize faces (look at iPhoto, for example), so you'll have to use the face as a reference point, along with skin tone, to find your body edges. Your result isn't going to be perfect, but as long as your approach is sound, you'll end up with something useful.
And release your software as open source when you're done so I can use it... :)
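For the skin-tone sampling step described above, a rough sketch in Python/OpenCV might look like this. The face-patch coordinates are placeholders standing in for the output of a real face detector, and a production version would build the range from percentiles (or the flash-adjusted ranges mentioned above) rather than a raw min/max:

    import cv2
    import numpy as np

    img = cv2.imread("photo.jpg")                     # hypothetical input photo
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

    # Sample a small patch from the detected face region (placeholder coords).
    face_patch = hsv[120:160, 200:240].reshape(-1, 3)
    lo = face_patch.min(axis=0).astype(np.uint8)
    hi = face_patch.max(axis=0).astype(np.uint8)

    # Keep every pixel whose colour falls inside the sampled skin-tone range,
    # then close small holes so the body comes out as one connected region.
    mask = cv2.inRange(hsv, lo, hi)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, np.ones((15, 15), np.uint8))
    cv2.imwrite("skin_mask.png", mask)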
You can download a free PDF of the book Computer Vision by Richard Szeliski from the author's website. Not only do you have a free book on algorithms, but it's a book that addresses this specific problem.
http://szeliski.org/Book/
You'll see this image at the top of that page of the author's website.
Used copies of the hardcover are available for about $62 if you check addall.com. If you spend some time doing image processing, you'll appreciate having a paper copy of at least one good general reference book.
It's tough, but not impossible. I can't give you any code, but Peter Norvig gave a great talk on the value of data in which he shows how he was able to take a picture of a lake, remove all the houses blocking the view, and fill in the expanded lake with boats, etc.
The computer basically learned how lakes look and that boats go on lakes, then removed the houses and filled the area in accordingly. He explains his process (but shows no code or anything).
Here it is:
Peter Norvig - The Unreasonable Effectiveness of Data
http://www.youtube.com/watch?v=yvDCzhbjYWs
I am developing software that depends on musical chord detection. I know some algorithms for pitch detection based on cepstral analysis or autocorrelation, but they are mainly focused on monophonic material. I need polyphonic recognition, that is, multiple pitches at the same time, as in a chord; does anyone know of good studies or solutions on that matter?
I am currently developing some algorithms based on the FFT, but if anyone has ideas for other algorithms or techniques I could use, it would be a great help.
This is quite a good Open Source Project:
https://patterns.enm.bris.ac.uk/hpa-software-package
It detects chords based on a chromagram - a good solution. It breaks a window of the whole spectrum down onto an array of 12 pitch classes with float values; chords can then be detected with a Hidden Markov Model.
It should provide you with everything you need. :)
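To illustrate the chromagram idea itself (this is not the HPA package's actual code), here is a minimal sketch in Python: FFT magnitudes are folded onto the 12 pitch classes by converting each bin's frequency to a MIDI note number.

    import numpy as np

    def chromagram(frame, sample_rate):
        """Fold one windowed audio frame onto the 12 pitch classes."""
        spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
        freqs = np.fft.rfftfreq(len(frame), 1.0 / sample_rate)
        chroma = np.zeros(12)
        for f, mag in zip(freqs, spectrum):
            if f < 20.0:                             # skip DC and sub-audible bins
                continue
            midi = 69 + 12 * np.log2(f / 440.0)      # frequency -> MIDI note number
            chroma[int(round(midi)) % 12] += mag     # fold all octaves together
        return chroma / max(chroma.max(), 1e-9)      # normalise for comparison

A chord can then be guessed by correlating the 12-vector against simple binary templates (a C major chord activates pitch classes 0, 4 and 7), with the Hidden Markov Model smoothing those frame-by-frame guesses over time.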
The author of Capo, a transcription program for the Mac, has a pretty in-depth blog. The entry "A Note on Auto Tabbing" has some good jumping off points:
I started researching different methods of automatic transcription in mid-2009, because I was curious about how far along this technology was, and if it could be integrated into a future version of Capo.
Each of these automatic transcription algorithms starts out with some kind of intermediate representation of the audio data, and then transfers that into a symbolic form (i.e. note onsets and durations).
This is where I encountered some computationally expensive spectral representations (The Continuous Wavelet Transform (CWT), Constant Q Transform (CQT), and others.) I implemented all of these spectral transforms so that I could also implement the algorithms presented by the papers I was reading. This would give me an idea of whether they would work in practice.
Capo has some impressive technology. The standout feature is that its main view is not a frequency spectrogram like most other audio programs. It presents the audio like a piano roll, with the notes visible to the naked eye.
(source: supermegaultragroovy.com)
(Note: The hard note bars were drawn by a user. The fuzzy spots underneath are what Capo displays.)
There's significant overlap between chord detection and key detection, and so you may find some of my previous answer to that question useful, as it has a few links to papers and theses. Getting a good polyphonic recogniser is incredibly difficult.
My own viewpoint on this is that applying polyphonic recognition to extract the notes and then trying to detect chords from those notes is the wrong way to go about it, because the problem is ambiguous. If you have two complex tones exactly an octave apart, it's impossible to tell whether one or two notes are playing (unless you have extra context, such as knowing the harmonic profile): every harmonic of C5 is also a harmonic of C4 (and of C3, C2, etc.). So if you feed a major chord into a polyphonic recogniser, you are likely to get out a whole collection of notes that are harmonically related to your chord, but not necessarily the notes you played. If you use an autocorrelation-based pitch detection method, you'll see this effect quite clearly.
Instead, I think it's better to look for the patterns that are made by certain chord shapes (Major, Minor, 7th, etc).
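The octave claim is easy to check numerically. In this tiny Python sketch the fundamentals are approximate, but because C5 is defined as exactly 2 x C4, the containment comes out exact:

    # Every harmonic of C5 is also a harmonic of C4, so a note extractor
    # cannot distinguish C4 alone from C4 plus C5.
    c4 = 261.63                                  # approximate fundamental of C4 in Hz
    c5 = 2 * c4                                  # C5 is exactly one octave up
    harmonics_c4 = {n * c4 for n in range(1, 17)}
    harmonics_c5 = {n * c5 for n in range(1, 9)}
    print(harmonics_c5 <= harmonics_c4)          # True: C5's spectrum hides inside C4's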
See my answer to this question:
How can I do real-time pitch detection in .Net?
The reference to this IEEE paper is mainly what you're looking for: http://ieeexplore.ieee.org/Xplore/login.jsp?reload=true&url=/iel5/89/18967/00876309.pdf?arnumber=876309
The harmonics are throwing you off. Plus, humans can perceive a fundamental even when it isn't physically present! Think of reading with half of each letter covered: the brain fills in the gaps.
The context of other sounds in the mix, and what came before, is very important to how we perceive notes.
This is a very difficult pattern matching problem, probably suitable for an AI technique such as training neural nets or genetic algorithms.
Basically, at every point in time, you guess the number of notes being played, the notes themselves, the instruments that played them, the amplitudes, and the durations. Then you sum the magnitudes of all the harmonics and overtones that all those instruments would generate when played at that volume at that point in their envelope (attack, decay, etc.). Subtract the sum of all those harmonics from the spectrum of your signal, then minimize the difference over all possibilities. Pattern recognition of the thump/squeak/pluck transient noise at the very onset of a note might also be important. Then do some decision analysis to make sure your choices make sense (e.g. a clarinet didn't suddenly change into a trumpet playing another note and back again 80 ms later), to minimize the error probability.
If you can constrain your choices (e.g. only 2 flutes playing only quarter notes, etc.), especially to instruments with very limited overtone energy, it makes the problem a lot easier.
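Here is a toy Python sketch of that subtract-and-minimise idea, ignoring the envelopes and onset transients discussed above: each candidate fundamental gets an idealised decaying-harmonic template, and a greedy loop explains away as much of the magnitude spectrum as it can. The harmonic count and decay factor are arbitrary assumptions, and a real system would search jointly rather than greedily:

    import numpy as np

    def note_template(f0, freqs, n_harmonics=8, decay=0.8):
        """Idealised magnitude spectrum of one note: decaying peaks at f0, 2*f0, ..."""
        template = np.zeros_like(freqs)
        for k in range(1, n_harmonics + 1):
            template[np.argmin(np.abs(freqs - k * f0))] += decay ** (k - 1)
        return template

    def greedy_transcribe(spectrum, freqs, candidates, max_notes=4):
        """Repeatedly pick the candidate whose template best explains the residual."""
        residual, found = spectrum.copy(), []
        for _ in range(max_notes):
            score, f0 = max((residual @ note_template(c, freqs), c) for c in candidates)
            if score <= 0.0:
                break
            found.append(f0)
            t = note_template(f0, freqs)
            # Subtract the explained energy; clip so the residual stays non-negative.
            residual = np.maximum(residual - (score / (t @ t)) * t, 0.0)
        return found

    # e.g. candidates = 440.0 * 2 ** (np.arange(-24, 25) / 12.0)  # 4 octaves around A4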
Also http://www.schmittmachine.com/dywapitchtrack.html
The dywapitchtrack library computes the pitch of an audio stream in real time. The pitch is the main frequency of the waveform (the 'note' being played or sung). It is expressed as a float in Hz.
And http://clam-project.org/ may help a little.
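For comparison, a bare-bones autocorrelation pitch estimator in Python looks like this; as discussed above, it is only reliable on monophonic material:

    import numpy as np

    def detect_pitch(frame, sample_rate, fmin=50.0, fmax=1000.0):
        """Estimate the fundamental of a monophonic frame via autocorrelation."""
        frame = frame - frame.mean()
        ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
        lo = int(sample_rate / fmax)             # smallest lag worth considering
        hi = int(sample_rate / fmin)             # largest lag worth considering
        lag = lo + np.argmax(ac[lo:hi])          # lag of strongest self-similarity
        return sample_rate / lag                 # convert the lag to Hz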
This post is a bit old, but I thought I'd add the following paper to the discussion:
Klapuri, Anssi; "Multipitch Analysis of Polyphonic Music and Speech Signals Using an Auditory Model"; IEEE Transactions on Audio, Speech, and Language Processing, vol. 16, no. 2, February 2008, p. 255.
The paper acts somewhat like a literature review of multipitch analysis and discusses a method based on an auditory model:
(The image is from the paper. I don't know if I have to get permission to post it.)
Disclaimer: I'm not actually trying to build one; I'm just curious how it could be done.
When I say "Most Accurate" I include the basics:
walls
distance
light levels
and the more complicated:
dust in the atmosphere
rain, sleet, snow
clouds
vegetation
smoke
fire
If I were to want to program this, what resources should I look into and what things should I watch out for?
Also, are there any relevant books on the theory behind line of sight including all these variables?
I personally don't know too much about this topic, but a couple of quick Google searches turn up some formal papers with very relevant information:
http://www.tecgraf.puc-rio.br/publications/artigo_1999_efficient_lineofsight_algorithms.pdf - Provides a detailed description of two different methods of efficiently performing an LOS calculation, along with issues involved
http://www.agc.army.mil/operations/programs/LOS/LOS%20Compendium.doc - This one aims to maintain "a current list of unique LOS algorithms"; it has a section listing quite a few and describing them in detail with a focus on military applications.
Hope this helps!
Typically, one represents the world as a set of volumes of space held in some kind of space-partitioning data structure, then intersects the ray representing your line of sight with that structure to find the set of objects it hits. These are then walked in order from the ray origin to determine the overall result: reflective objects cause further rays to be fired, opaque objects stop the walk, and semi-transparent objects partially contribute to the result.
You might like to read up on ray tracing; there is a great body of literature on the subject, and well-understood ways of solving what are basically the problems you list already exist.
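A minimal sketch of that walk in Python, with a flat list of axis-aligned boxes standing in for a real space-partitioning structure, a single opacity value per volume, and reflection omitted:

    from dataclasses import dataclass

    @dataclass
    class Box:
        lo: tuple        # minimum corner (x, y, z)
        hi: tuple        # maximum corner (x, y, z)
        opacity: float   # 0.0 = clear air, 1.0 = solid wall

    def entry_distance(origin, direction, box):
        """Slab test: distance at which the ray enters the box, or None on a miss."""
        t_near, t_far = 0.0, float("inf")
        for o, d, lo, hi in zip(origin, direction, box.lo, box.hi):
            if abs(d) < 1e-12:                   # ray parallel to this slab pair
                if not lo <= o <= hi:
                    return None
            else:
                t1, t2 = (lo - o) / d, (hi - o) / d
                t_near = max(t_near, min(t1, t2))
                t_far = min(t_far, max(t1, t2))
        return t_near if t_near <= t_far else None

    def visibility(origin, direction, boxes):
        """Walk the hit volumes front to back, attenuating the line of sight."""
        hits = [(t, b) for b in boxes
                if (t := entry_distance(origin, direction, b)) is not None]
        hits.sort(key=lambda hit: hit[0])
        vis = 1.0
        for _, box in hits:
            vis *= 1.0 - box.opacity             # semi-transparent volumes dim the view
            if vis <= 0.0:
                break                            # an opaque volume ends the walk early
        return vis

visibility() returns 1.0 for a completely clear line of sight and 0.0 for a blocked one; a real ray tracer would spawn secondary rays for reflective surfaces at the point where this sketch just keeps walking.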
The obvious question is: do you really want the most accurate, and why?
I've worked on games that depended on line of sight and you really need to think clearly about what kind of line of sight you want.
First, can the AI see any part of your body? Or are you talking about "eye to eye" LOS?
Second, if the player's camera view is not his avatar's eye view, the player will not perceive your highly accurate LOS as highly accurate, at which point inaccuracies are fine.
I'm not trying to dissuade you, but remember that player experience is #1, and that might mean not having the best LOS.
A good friend of mine did the AI for a long-running series of popular console games. He often tells a story about how the AIs were most interesting (and fun) in the first game, because they stumbled into you rather than seeing you from afar. Now he has great LOS and spends his time trying to dumb the AIs down to make them as fun as they were in the first game.
So why are you doing this? Does the game need it? Or do you just want the challenge?
There is no "one algorithm" for these since the inputs are not well defined.
If you treat Dust-In-Atmosphere as a constant value then there is an algorithm that can take it into account, but the fact is that dust levels will vary from point to point, and thus the algorithm you want needs to be aware of how your dust-data is structured.
The most used algorithm in todays ray-tracers is just incremental ray-marching, which is by definition not correct, but it does approximate the Ultimate Answer to a fair degree.
Even if you managed to incorporate all these properties into a single master-algorithm, you'd still have to somehow deal with how different people perceive the same setting. Some people are near-sighted, some far-sighted. Then there's the colour-blind. Not to mention that Dust-In-Atmosphere levels also affect tear-glands, which in turn affects visibility. And then there's the whole dichotomy between what people are actually seeying and what they think they are seeying...
There are far too many variables here to aim for a unified solution. Treat your environment as a voxelated space and shoot your rays through it. I suspect that's the only solution you'll be able to complete within a single lifetime...
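A sketch of that voxel approach in Python, assuming a 3-D grid of dust extinction coefficients and a fixed step length; accumulated optical depth is turned into a transmittance via the Beer-Lambert law, T = exp(-integral of density along the ray):

    import numpy as np

    def march_visibility(density, start, end, step=0.5):
        """Ray-march a voxel grid of extinction coefficients; return transmittance."""
        start, end = np.asarray(start, float), np.asarray(end, float)
        length = np.linalg.norm(end - start)
        direction = (end - start) / length
        optical_depth, t = 0.0, 0.0
        while t < length:
            i, j, k = (start + t * direction).astype(int)   # voxel under the sample
            if (0 <= i < density.shape[0] and
                    0 <= j < density.shape[1] and
                    0 <= k < density.shape[2]):
                optical_depth += density[i, j, k] * step    # accumulate along the ray
            t += step
        return np.exp(-optical_depth)                       # 1.0 = clear sight line

As the answer says, this is an approximation: halving the step size improves accuracy at double the cost, which is exactly the trade-off incremental ray marching makes.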
Someone told me about swamp diagrams, explaining that they were useful for predicting code quality by measuring the rate of incoming defects and outgoing fixes on a given product.
Unfortunately, I am unable to find additional information on those diagrams and I am wondering if it is a jargon term specific to one company.
Can you explain what a swamp diagram is?
You can see an example of a "swamp diagram" in this article, "The Commissioning and Performance Characteristics of CESR" (page 5 of the PDF, p. 1988 of the document).
(CESR is the Cornell Electron Storage Ring, designed to provide colliding electron and positron beams up to a center-of-mass energy of 16 GeV.)
(obviously, I am just copying stuff from the article here, I am just a coder, not a physicist ;) )
Now what is interesting about a swamp diagram is the repartition-ratio aspect: the way you can easily see groups ("swamps") of data against two axes.
If your two axes are:
the rate of incoming defects and
the rate of outgoing fixes
You can visualize the nature of the fixing: is the fixing process efficient when a lot of bugs are found, or not?
And that can tell you a lot about the nature of the defects found. One "lot", or swamp, may correspond to many very easy bugs (say, a stupid typo repeated in lots of files and easily corrected), while another lot just as large but fixed really slowly may indicate a more structural problem, perhaps affecting the architectural choices of your application: that is the swamp you want to focus on.
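A quick way to produce such a diagram in Python with matplotlib, using made-up weekly rates; the cluster with a high incoming rate but a low outgoing rate is the swamp worth worrying about:

    import matplotlib.pyplot as plt

    # Hypothetical weekly rates for several components of one product.
    incoming = [12, 14, 30, 28, 31, 5, 6, 29, 4, 13]   # defects reported per week
    outgoing = [11, 13, 8, 7, 9, 5, 6, 8, 4, 12]       # fixes shipped per week

    plt.scatter(incoming, outgoing)
    plt.xlabel("Incoming defect rate (per week)")
    plt.ylabel("Outgoing fix rate (per week)")
    plt.title("Defect swamps: does fixing keep up with finding?")
    plt.show()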
Finally, no, it is not an internal term from some company, just an old term referring to old diagrams of... actual geographic swamp distribution over a given territory (as this 1977 article about "Forest destruction by early Polynesians" shows; go back a few pages to see the map).
As David Segonds says in a comment, the modern name would be a "binomial trendline" (although the ratio aspect tends to get a bit lost in that kind of diagram).
You can see an example in this Graph of voter turnout by age.
Another modern example of a swamp diagram would be this diagram showing each county as an ellipse, with the size of the ellipse proportional to the population of the county (more precisely, the voter turnout) in the two elections. (Weird, only political examples seem to pop up in relation to this kind of data representation ;)
[Disclaimer: the following is just an example, and in no way illustrates any kind of political opinion here ;) ]