Particle Filter Resampling - image

I implemented a bootstrap particle filter in C++ by reading a few papers, and I first implemented a 1D mouse tracker which performed really well. I used a normal Gaussian for weighting in this example.
I extended the algorithm to track a face using two features: local motion and a 32-bin HSV histogram. In this example my weighting function becomes the probability of motion x the probability of the histogram. (Is this correct?)
If that is correct, then I am confused about the resampling function. At the moment my resampling function is as follows:
For each of N = 50 particles:
Compute the CDF
Generate a random number X (via a Gaussian)
Update the particle at index X
Repeat for all N particles.
This is my resampling function at the moment. Note: in the second step I am using a random number drawn from a Gaussian distribution to get the index, while my weighting function is the probability of motion times the probability of the histogram.
My question is: should I generate the random number using the probability of motion and histogram, or is a random number drawn from a Gaussian OK?

In the SIR (Sequential Importance Resampling) particle filter, resampling aims to replicate particles that have gained high weight, while removing those with low weight.
So, once your particles are weighted (typically with the likelihood you have used), one way to do resampling is to build the cumulative distribution of the weights, then generate a random number from a uniform distribution and pick the particle whose slot of the CDF it falls into. This way particles with higher weight are more likely to be selected.
Also, don't forget to add some noise after generating replicas of particles, otherwise your point-estimate might be biased for a period of time.
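For concreteness, here is a minimal NumPy sketch of that resampling step, assuming the particle states live in an (N, D) array and the weights are the motion x histogram likelihoods; the function and argument names and the jitter scale are placeholders, not from the original post.

    import numpy as np

    def resample(particles, weights, jitter_std=1.0, rng=None):
        """Multinomial resampling via the CDF of the normalized weights.

        particles:  (N, D) array of particle states (assumed layout).
        weights:    (N,) array of unnormalized weights, e.g.
                    p(motion) * p(histogram) for each particle.
        jitter_std: std-dev of the Gaussian noise added after copying,
                    so duplicated particles do not stay identical (tune this).
        """
        if rng is None:
            rng = np.random.default_rng()
        w = np.asarray(weights, dtype=float)
        w /= w.sum()                   # normalize the weights
        cdf = np.cumsum(w)             # cumulative distribution of the weights

        # Draw N uniform numbers in [0, 1) and find which CDF slot each falls into.
        u = rng.random(len(w))
        idx = np.searchsorted(cdf, u)  # index of the selected particle

        new_particles = particles[idx].copy()
        # Add a little noise so replicated particles spread out again.
        new_particles += rng.normal(scale=jitter_std, size=new_particles.shape)

        # After resampling every particle carries equal weight again.
        new_weights = np.full(len(w), 1.0 / len(w))
        return new_particles, new_weights

Note that the only random draw during resampling is uniform; the motion and histogram probabilities enter only through the weights, and therefore through the CDF.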

Related

Algorithm: How to smoothly interpolate/reconstruct sparse samples with noise?

This question is not directly related to a particular programming language but is an algorithmic question.
What I have is a lot of samples of a 2D function. The samples are at random locations, they are not uniformly distributed over the domain, the sample values contain noise and each sample has a confidence-weight assigned to it.
What I'm looking for is an algorithm to reconstruct the original 2D function based on the samples, i.e. a function y' = G(x0, x1) that approximates the original well and smoothly interpolates areas where samples are sparse.
It goes into the direction of what scipy.interpolate.griddata is doing, but with the added difficulty that:
the sample values contain noise - meaning that samples should not just be interpolated, but nearby samples also averaged in some way to average out the sampling noise.
the samples are weighted, so samples with higher weight should contribute more strongly to the reconstruction than those with lower weight.
scipy.interpolate.griddata seems to do a Delaunay triangulation and then use the barycentric coordinates of the triangles to interpolate values. This doesn't seem to be compatible with my requirements of weighting samples and averaging out noise, though.
Can someone point me in the right direction on how to solve this?
Based on the comments, the function is defined on a sphere. That simplifies life because your region is both well-studied and nicely bounded!
First, decide how many Spherical Harmonic functions you will use in your approximation. The fewer you use, the more you smooth out noise. The more you use, the more accurate it will be. But if you use any of a particular degree, you should use all of them.
And now you just impose the condition that the sum of the squares of the weighted errors should be minimized. That will lead to a system of linear equations, which you then solve to get the coefficients of each harmonic function.
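A rough sketch of that weighted least-squares fit in Python/SciPy follows, assuming the samples arrive as arrays theta, phi (SciPy's azimuthal/polar convention), values and weights; the degree cutoff l_max and the use of the real and imaginary parts of the complex harmonics as a real basis are choices made here for illustration, not prescribed by the answer.

    import numpy as np
    from scipy.special import sph_harm

    def _basis(theta, phi, l_max):
        """Real basis built from spherical harmonics up to degree l_max."""
        cols = []
        for l in range(l_max + 1):
            for m in range(l + 1):
                Y = sph_harm(m, l, theta, phi)   # theta: azimuth, phi: polar
                cols.append(Y.real)
                if m > 0:
                    cols.append(Y.imag)
        return np.column_stack(cols)

    def fit_sphere(theta, phi, values, weights, l_max=8):
        """Weighted least squares: minimize sum_i w_i * (G(x_i) - y_i)^2."""
        A = _basis(theta, phi, l_max)
        sw = np.sqrt(np.asarray(weights, dtype=float))
        # Scaling each row by sqrt(w_i) turns the weighted problem into an
        # ordinary least-squares problem solved by lstsq.
        coeffs, *_ = np.linalg.lstsq(A * sw[:, None], values * sw, rcond=None)

        def G(theta_q, phi_q):
            return _basis(theta_q, phi_q, l_max) @ coeffs

        return G

A smaller l_max smooths out more of the noise; a larger one tracks the data more closely, exactly as described above.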

Livewire algorithm (intelligent scissors) step by step

I'm trying to code the livewire algorithm, but I'm a little stuck because the algorithm explained in the article "Intelligent Scissors for Image Composition" is a little messy and I don't completely understand how to apply certain things, for example how to calculate the local cost map.
So please can anyone give a hand and explain it step by step in just simple words?
I would appreciate any help.
Thanks.
You should read Mortensen, Eric N., and William A. Barrett, "Interactive segmentation with intelligent scissors," Graphical Models and Image Processing 60.5 (1998): 349-384, which contains more details about the algorithm than the shorter paper "Intelligent Scissors for Image Composition."
Here is a high-level overview:
The Intelligent Scissors algorithm uses a variant of Dijkstra's graph search algorithm to find a minimum cost path from a seed pixel to a destination pixel (the position of the mouse cursor during interactive segmentation).
1) Local costs
Each edge from a pixel p to a pixel q has a local cost, which is a weighted sum of the following component costs (adjusted by the distance between p and q to account for diagonal pixels):
Laplacian zero-crossing f_Z(q)
Gradient magnitude f_G(q)
Gradient direction f_D(p,q)
Edge pixel value f_P(q)
Inside pixel value f_I(q)
Outside pixel value f_O(q)
Some of these local costs are static and can be computed offline. f_Z and f_G are computed at different scales (meaning with different kernel sizes) to better represent the edge at a pixel q. f_P, f_I, and f_O are computed dynamically (and f_G has a dynamic component) for on-the-fly training.
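As a rough, single-scale illustration (not the paper's exact formulation), two of the static cost maps could be computed with SciPy along the following lines; the kernel choices and the simple sign-change test for zero-crossings are assumptions made here.

    import numpy as np
    from scipy import ndimage

    def static_costs(image, sigma=1.0):
        """Sketch of two static local cost maps, f_Z and f_G, at a single scale."""
        img = image.astype(float)

        # Gradient magnitude cost: strong edges get low cost, f_G = 1 - G / max(G).
        gx = ndimage.sobel(img, axis=1)
        gy = ndimage.sobel(img, axis=0)
        grad = np.hypot(gx, gy)
        f_G = 1.0 - grad / grad.max()

        # Laplacian zero-crossing cost: 0 on a zero-crossing, 1 elsewhere.
        lap = ndimage.gaussian_laplace(img, sigma=sigma)
        sign = np.sign(lap)
        zero_cross = np.zeros(img.shape, dtype=bool)
        # Mark pixels whose Laplacian changes sign w.r.t. the pixel above or to the left.
        zero_cross[:, 1:] |= sign[:, 1:] != sign[:, :-1]
        zero_cross[1:, :] |= sign[1:, :] != sign[:-1, :]
        f_Z = np.where(zero_cross, 0.0, 1.0)

        return f_Z, f_G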
2) On-the-fly training
To prevent snapping to a different edge with a lower cost than the current one being followed, the algorithm uses on-the-fly training to assign a lower cost to neighboring pixels that "look like" past pixels along the current edge.
This is done by building histograms of image value features along the last 64 or 128 edge pixels. The image value features are computed by scaling and rounding f'_G (where f_G = 1 - f'_G), f_P, f_I, and f_O so as to have integer values in [0, 255] or [0, 1023], which can be used to index the histograms.
The histograms are inverted and scaled to compute dynamic cost maps m_G, m_P, m_I, and m_O. The idea is that a low-cost neighbor q should fit the histograms of the 64 or 128 pixels previously seen.
The paper gives pseudo code showing how to compute these dynamic costs given a list of previously chosen pixels on the path.
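As a small sketch of that invert-and-scale step (assuming the feature values have already been scaled and rounded to integers as described above; the bin count and the maximum cost are placeholders):

    import numpy as np

    def dynamic_cost_map(recent_features, n_bins=256, max_cost=1.0):
        """Turn a histogram of recently seen feature values into a cost map.

        recent_features: integer feature values (e.g. the scaled f_P) of the
        last 64 or 128 pixels along the current boundary segment.
        Returns m such that m[feature_value] is the training-based cost.
        """
        hist = np.bincount(np.asarray(recent_features), minlength=n_bins).astype(float)
        # Invert and scale: feature values seen often along the current edge
        # get a low cost, unseen values get the maximum cost.
        return max_cost * (1.0 - hist / hist.max())

    # Usage sketch for a candidate neighbor q:
    # m_P = dynamic_cost_map(recent_fP_values)
    # cost_P = m_P[fP_of_q]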
3) Graph search
The static and dynamic costs are combined together into a single cost to move from pixel p to one of its 8 neighbors q. Finding the lowest cost path from a seed pixel to a destination pixel is done by essentially using Dijkstra's algorithm with a min-priority queue. The paper gives pseudo code.
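A compact sketch of that expansion, assuming a hypothetical cost_to_move(p, q) callable that wraps the combined static and dynamic costs:

    import heapq
    import math

    def live_wire_paths(cost_to_move, seed, shape):
        """Dijkstra-style expansion from a seed pixel over an 8-connected grid.

        cost_to_move(p, q): combined cost of stepping from pixel p to neighbor q,
        where p and q are (row, col) tuples. Returns a predecessor map that
        encodes the minimum-cost path from every reached pixel back to the seed.
        """
        rows, cols = shape
        dist = {seed: 0.0}
        prev = {}
        pq = [(0.0, seed)]                 # min-priority queue of (cost, pixel)
        offsets = [(-1, -1), (-1, 0), (-1, 1),
                   ( 0, -1),          ( 0, 1),
                   ( 1, -1), ( 1, 0), ( 1, 1)]
        while pq:
            d, p = heapq.heappop(pq)
            if d > dist.get(p, math.inf):
                continue                   # stale queue entry
            for dr, dc in offsets:
                q = (p[0] + dr, p[1] + dc)
                if not (0 <= q[0] < rows and 0 <= q[1] < cols):
                    continue
                nd = d + cost_to_move(p, q)
                if nd < dist.get(q, math.inf):
                    dist[q] = nd
                    prev[q] = p
                    heapq.heappush(pq, (nd, q))
        return prev

The interactive tool then follows the predecessor map from the pixel under the cursor back to the seed to draw the live wire.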

Inverse of Laplacian and Gaussian Noise

Given a set of data points, I modify the data points by adding a Laplacian or a Gaussian Noise to them.
I am wondering if there exist mathematical inverse functions able to derive the original data points from the ones with noise.
My understanding is that we can only reconstruct an estimate of the original data points, one that has a certain probability p of being equal to the original data points.
If this is the case, how to calculate such a probability p?

3D randomized generation of planets

For a project, I'm creating planets in 3D space, based around a central "homeworld"; the planets are randomly generated in all directions from the origin.
I've looked at procedural generation and Perlin noise, but I couldn't find a decent way to make them applicable, but I'm new to randomized generation of any kind.
Are there any good starting points for an algorithm for 3D point generation, centered around the origin, preferably based on a seed (so the same seed makes the same universe)?
Thanks!
Try using a set of different random numbers rather than trying for a specific algorithm to do this with a single seed.
The first, in the range 1-360, is the rotation around the y axis.
The second, in the range 1-180, is the deviation from the y axis (wobble).
The third, in the range 1 to <really big number>, is the distance from your centre point (homeworld).
The fourth (optional) randomizes the radius of the planet.
The fifth (optional) randomizes the colour of the object.
To plot each planet, it's then just some simple trigonometry to work out its location in 3D space (x, y, z) from your origin (homeworld).
And so long as your seed values for each are the same you will be able to generate a very large planet space.
If you want to do this with a single seed, then use that seed to generate 'random' seed numbers for all the subsequent random number generators.
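A small Python sketch of this scheme, with a single seeded generator driving all of the per-planet values; the ranges, the stand-in for the "really big number", and the output layout are placeholder choices made here.

    import math
    import random

    def generate_planets(seed, count, max_distance=1_000_000):
        """Generate `count` planets around the homeworld at the origin."""
        rng = random.Random(seed)                    # same seed -> same universe
        planets = []
        for _ in range(count):
            yaw = math.radians(rng.uniform(0, 360))      # rotation around the y axis
            wobble = math.radians(rng.uniform(0, 180))   # deviation from the y axis
            dist = rng.uniform(1, max_distance)          # distance from the homeworld
            radius = rng.uniform(100, 10_000)            # optional: planet radius
            colour = (rng.random(), rng.random(), rng.random())  # optional: colour

            # Spherical -> Cartesian, origin at the homeworld.
            x = dist * math.sin(wobble) * math.cos(yaw)
            z = dist * math.sin(wobble) * math.sin(yaw)
            y = dist * math.cos(wobble)
            planets.append({"pos": (x, y, z), "radius": radius, "colour": colour})
        return planets

Here one seeded generator stands in for the "'random' seed numbers for all the subsequent random number generators": drawing the five values in a fixed order keeps the whole universe reproducible from the single seed.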
I had an idea in the long time it took to load this page which I don't see represented yet.
You could start with a tetrahedron and then, for a specified number of iterations,
select a triangular face at random
replace the face with a new tetrahedron erected on that base.
With a completely uniform random number distribution, this should approximate a sphere. With a deterministic PRNG, the result should be reproducible by using the same initial seed.
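A sketch of that growth loop with NumPy; the starting tetrahedron, the fixed apex height, and the outward-normal test are illustrative choices rather than part of the original suggestion.

    import random
    import numpy as np

    def grow_shape(iterations, seed=0, height=0.5):
        """Grow a blob by repeatedly erecting tetrahedra on random faces."""
        rng = random.Random(seed)                    # reproducible from the seed
        verts = [np.array(v, dtype=float) for v in
                 [(1, 1, 1), (1, -1, -1), (-1, 1, -1), (-1, -1, 1)]]
        faces = [(0, 1, 2), (0, 3, 1), (0, 2, 3), (1, 3, 2)]

        for _ in range(iterations):
            i = rng.randrange(len(faces))            # pick a triangular face at random
            a, b, c = (verts[k] for k in faces[i])
            centroid = (a + b + c) / 3.0
            normal = np.cross(b - a, c - a)
            normal /= np.linalg.norm(normal)
            # Push the apex away from the shape's centre so it points outward.
            if np.dot(normal, centroid - np.mean(verts, axis=0)) < 0:
                normal = -normal
            apex = centroid + height * normal

            verts.append(apex)
            ai, bi, ci = faces.pop(i)
            vi = len(verts) - 1
            # Replace the chosen face with the three sides of the new tetrahedron.
            faces += [(ai, bi, vi), (bi, ci, vi), (ci, ai, vi)]
        return verts, faces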

Perlin\Fractal noise jump for just one unit between values

First of all, sorry for my poor English.
I'm trying to make a virtual world with terrain just like in SimCity 2000 or Transport Tycoon, where the terrain is made from tiles and tile heights can't jump more than one level between tiles, so there are no cliffs.
For terrain generation I'm using Perlin/simplex noise, but I'm getting slopes that are too steep with that.
I took a look at the source code of Open Transport Tycoon, and there, after terrain generation, all tiles on the map are looped through and smoothed out so that elevation changes by just one unit.
But it won't work this way for me, because my map will be much bigger and I cannot afford to smooth all of it in a loop. It's also not possible to smooth just the visible part of the terrain, because the result would differ depending on which tile the smoothing was started from.
I've tried to write my own noise function which returns a linearly interpolated value between two points whose distance equals the maximum height of those points; that way the slope can't be more than 45 degrees. It worked, but only until you try to sum such functions together.
How can I pseudo-randomly generate terrain with mountain slopes of at most 45 degrees, approaching it in some way other than just smoothing out a previously generated map?
Right now I'm out of ideas, and I'm hoping that Perlin noise may have some option like a "max slope angle", but Google didn't help me with that.
Perlin noise is inherently slope-limited, since the values within each grid cell are interpolated between four gradients that all have slope 1/gridSize (or some other fixed value depending on your implementation).
If you generate a limited number of octaves with a fairly wide grid relative to your tile size, you should be able to find a scaling factor experimentally that ensures a maximum slope of 1.
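One way to find that scaling factor in practice is to measure the steepest step of a representative noise patch once, offline, and reuse the factor everywhere, so the result does not depend on where generation starts. In this sketch the names noise, amplitude, and sample_patch are placeholders for whatever noise setup you already have.

    import numpy as np

    def calibrate_scale(sample_heightmap):
        """Return s such that heights * s differ by at most 1 between neighbors."""
        h = np.asarray(sample_heightmap, dtype=float)
        max_step = max(np.abs(np.diff(h, axis=0)).max(),
                       np.abs(np.diff(h, axis=1)).max())
        return 1.0 / max_step

    # Usage sketch:
    # s = calibrate_scale(sample_patch)                 # done once, offline
    # tile_height = round(noise(x, y) * amplitude * s)  # applied to any tile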
