I am new to MATLAB, so forgive me if I am asking for the obvious here: what I have is a collection of color photographic images (all with the same dimensions). What I want to do is calculate the median color value for each pixel.
I know there is a median filter in MATLAB, but as far as I know it does not do exactly what I want, because I want to calculate the median across the entire collection of images, for each separate pixel.
So, for example, if I have three images, I want MATLAB to calculate (for each pixel) which color value out of those three images is the median value. How would I go about doing this, does anyone know?
Edit: From what I can come up with, I would have to load all the images into a single matrix. The matrix would have to have 4 dimensions (height, width, RGB, images), and for each pixel and each color channel find the median along the 4th dimension (across the images).
Is that correct (and possible)? And how can I do this?
Your intuition is correct. If you have images image_1, image_2, image_3, for example, you can assign them to a 4-dimensional array:
X(:,:,:,1) = image_1;
X(:,:,:,2) = image_2;
X(:,:,:,3) = image_3;
Then use:
Y=median(X,4);
to get the median along the 4th (image) dimension.
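If your images live in files on disk, a short loop builds X directly; the filenames below are just placeholders for however your collection is actually stored:
% Hypothetical example: build the 4-D array from image files, then take the
% per-pixel, per-channel median across the collection (4th dimension).
files = {'image_1.png', 'image_2.png', 'image_3.png'};   % your own filenames here
X = [];
for k = 1:numel(files)
    X(:,:,:,k) = double(imread(files{k}));   % assumes all images share the same size
end
Y = uint8(median(X, 4));                     % median over the image dimension
imshow(Y);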
Expanding my comments into a full answer:
@prototoast's answer is elegant, but since the medians for the R, G and B values of each pixel are calculated separately, the output image will look very strange.
To get a well-defined median that makes visual sense, the easiest thing to do is cast the images to black-and-white before you try to take the median.
rgb2gray() from the Image Processing toolbox will do this in a way that preserves the luminance of each pixel while discarding the hue and saturation.
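A rough sketch of that route, reusing the 4-D array X from @prototoast's answer (rgb2gray() again needs the Image Processing Toolbox):
% Convert each image to luminance first, then take the per-pixel median.
numImages = size(X, 4);
G = zeros(size(X, 1), size(X, 2), numImages);
for k = 1:numImages
    G(:,:,k) = rgb2gray(uint8(X(:,:,:,k)));   % one grayscale layer per image
end
Ygray = median(G, 3);                          % one well-defined median per pixel
imshow(uint8(Ygray));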
EDIT:
If you want to define the "RGB median" as "the middle value in cartesian coordinates" this is easy enough to do for three images.
Consider a single pixel with three possible choices for the median colour, C1=(r1,g1,b1), C2=(r2,g2,b2), C3=(r3,g3,b3). Generally these form a triangle in 3D space.
Take the Euclidean (Pythagorean) distance between each pair of colours: D1_2 = ||C2 - C1||, D2_3 = ||C3 - C2||, D1_3 = ||C3 - C1||.
Pick the "median" to be the colour that has lowest distance to the other two. Defining D1=D1_2+D1_3, etc. and taking min(D1,D2,D3) should work, courtesy of the Triangle Inequality. Note the degenerate cases: equilateral triangle (C1, C2, C3 equidistant), line (C1, C2, C3 linear with each other), or point (C1=C2=C3).
Note that this simple way of thinking about a 3D median is hard to extend to more than three images, because "the median" of a set of four or more 3D points is a bit harder to define.
Edit 2
If you want to define the "median" of N points as the centre of the smallest sphere that encloses them in 3D space, you could try:
Find the two points N1 and N2 in {N} that are furthest apart. The distance between N1 and N2 is the diameter of the smallest sphere that encloses all the points. (Proof: Any smaller and the sphere would not be able to enclose both N1 and N2 at the same time.)
The median is then halfway between N1 and N2: M = (N1+N2)/2.
Edit 3: The above only works if no three points are equidistant. Maybe you need to ask math.stackexchange.com?
Edit 4: Wikipedia delivers again! Smallest circle problem, Bounding sphere.
I've been working for some time on an XNA roguelike game and I can't get my head around the following problem: developing an algorithm to divide a matrix of non-binary values into the fewest rectangles grouping equal values.
Example: given the following matrix
  01234567
0 ---##*##
1 ---##*##
2 --------
The algorithm should return:
3x3 rectangle of '-'s starting at (0,0)
2x2 rectangle of '#'s starting at (3, 0)
1x2 rectangle of '*'s starting at (5, 0)
2x2 rectangle of '#'s starting at (6, 0)
5x1 rectangle of '-'s starting at (3, 2)
Why am I doing this: I've got a pretty big dungeon map, approximately 500x500 tiles. If I were to individually call the "Draw" method for each tile's Sprite, my FPS would be far too low. It is possible to optimize this process by grouping similar-textured tiles and applying texture repetition to them, which would dramatically decrease the number of GPU draw calls. For example, if my map were the previous matrix, instead of calling Draw 24 times (once per tile), I'd call it only 5 times.
I've looked at some algorithms which can give you the biggest rectangle of a type inside a given binary matrix, but that doesn't fit my problem.
Thanks in advance!
You can use breadth-first searches to separate the areas of different tile types.
Picking a partitioning within the individual shapes is an NP-hard problem (see https://en.wikipedia.org/wiki/Graph_partition), so you can't find an efficient solution that guarantees the minimum number of rectangles. However if you don't mind an extra rectangle or two for each shape and your shapes are relatively small, you can come up with algorithms that split the shape into a number of rectangles close to the minimum.
An off-the-top-of-my-head guess for something that could potentially work would be to pick a tile with the maximum number of connecting tiles and start growing a rectangle from it, using a recursive algorithm to maximize the size. Remove the resulting rectangle from the shape, then repeat until there are no more tiles left outside a rectangle. Again, this won't produce perfect results; there are inputs on which this will return more than the minimum number of rectangles, but it's an easy-to-implement ballpark solution. With a little more effort I'm sure you will be able to find better heuristics to use and get better results too.
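A rough sketch of that greedy idea (it simply takes the first uncovered tile rather than the one with the most connecting tiles, so treat it as a starting point rather than a finished algorithm):
% M is a 2-D matrix of tile codes; returns one rectangle per row as [row, col, height, width].
function rects = greedyRects(M)
    [H, W] = size(M);
    covered = false(H, W);
    rects = zeros(0, 4);
    while any(~covered(:))
        [r, c] = find(~covered, 1);             % first uncovered tile
        t = M(r, c);
        w = 1;                                  % grow to the right
        while c + w <= W && ~covered(r, c + w) && M(r, c + w) == t
            w = w + 1;
        end
        h = 1;                                  % grow downward one full row at a time
        while r + h <= H && all(~covered(r + h, c:c + w - 1)) ...
                         && all(M(r + h, c:c + w - 1) == t)
            h = h + 1;
        end
        covered(r:r + h - 1, c:c + w - 1) = true;
        rects(end + 1, :) = [r, c, h, w];       %#ok<AGROW>
    end
end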
One possible building block is a routine to check, given two points, whether the rectangle formed by using those points as opposite corners is all of the same type. I think that a fast (but unreliable) means of testing this can be based on mapping each type to a large random number, and then working out the sum of the numbers within a rectangle modulo a large prime. Take one of the numbers within the rectangle. If the sum of the numbers within the rectangle is the size of the rectangle times the one number sampled, assume that all of the numbers in the rectangle are the same.
In one dimension we can work out all of the cumulative sums a, a+b, a+b+c, a+b+c+d,... in time O(N) and then, for any two points, work out the sum for the interval between them by subtracting cumulative sums: b+c+d = a+b+c+d - a. In two dimensions, we can use cumulative sums to work out, for each point, the sum of all of the numbers from positions which have x and y co-ordinates no greater than the (x, y) coordinate of that position. For any rectangle we can work out the sum of the numbers within that rectangle by working out A-B-C+D where A,B,C,D are two-dimensional cumulative sums.
So with pre-processing O(N) we can work out a table which allows us to compute the sum of the numbers within a rectangle specified by its opposite corners in time O(1). Unless we are very unlucky, checking this sum against the size of the rectangle times a number extracted from within the rectangle will tell us whether the rectangle is all of the same type.
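A sketch of that table, assuming the random values are kept small enough that double-precision sums stay exact (so the modulo step can be skipped for maps of this size):
% Map each tile type to a large random value, build 2-D cumulative sums,
% then get any rectangle's sum in O(1) and compare it with area * one sample.
M = randi(5, 20, 30);                     % example map of tile-type codes 1..5
vals = randi(2^31, max(M(:)), 1);         % one large random value per type
V = vals(M);                              % each tile replaced by its type's value

S = cumsum(cumsum(V, 1), 2);              % 2-D cumulative sums
S = [zeros(1, size(V, 2) + 1); zeros(size(V, 1), 1), S];    % zero-pad top row / left column

% Sum over V(r1:r2, c1:c2) via A - B - C + D on the padded table:
rectSum = @(r1, c1, r2, c2) S(r2+1, c2+1) - S(r1, c2+1) - S(r2+1, c1) + S(r1, c1);

r1 = 3; c1 = 4; r2 = 6; c2 = 9;           % some rectangle to test
probablyUniform = rectSum(r1, c1, r2, c2) == (r2 - r1 + 1) * (c2 - c1 + 1) * V(r1, c1);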
Based on this, repeatedly start with a random point not covered. Take a point just to its left and move that point left as long as the interval between the two points is of the same type. Then move that point up as long as the rectangle formed by the two points is of the same type. Now move the first point to the right and down as long as the rectangle formed by the two points is of the same type. Now you think you have a large rectangle covering the original point. Check it. In the unlikely event that it is not all of the same type, add that rectangle to a list of "fooled me" rectangles you can check against in future and try again. If it is all of the same type, count that as one extracted rectangle and mark all of the points in it as covered. Continue until all points are covered.
This is a greedy algorithm that makes no attempt at producing the optimal solution, but it should be reasonably fast - the most expensive part is checking that the rectangle really is all of the same type, and - assuming you pass that test - each cell checked is also a cell covered so the total cost for the whole process should be O(N) where N is the number of cells of input data.
I have a bitmap world map where each country is drawn in a unique color. Upon loading the map I have stored all border pixels in an array per country.
Next I calculate the distance between two countries (A and B). I do this by looping over every pixel in A's border array and calculating the distance between it and every pixel in B's border array. After finding the shortest distance I store it in a lookup table.
To optimize this I have:
Filtered out all immediate neighbors beforehand
When trying to find the shortest distance between two pixels I only compare the squared distance (only when I've found the closest one do I calculate the actual distance using square root).
When storing the distance I store it for both A->B and B->A so B will then only calculate distance against C to Z and C only against D to Z, etc.
With a large map this still takes quite a lot of time, so are there any other optimizations that I could do?
Store the border pixel data in a quadtree or another hierarchical structure that exploits the actual geometry (perhaps a triangular tree). Instead of calculating true distances for N*N/2 pixel pairs, you will calculate ranges of min/max distances for log2(N)*log2(N)/2 areas containing the border pixels, ruling out large sets of impossible candidates, then refining at the next level.
Here in sample A, there are 12 squares to be compared with 4 candidate squares of sample B, probably leading to 4*5 next-level candidates (all B squares and the 5 closest regions in A).
Consider calculating the distance not between every border pixel of A and B but, say, only between every 10th. This will give you a rough solution; if the precision is not enough for your purpose, you can make it more accurate with more comparisons.
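A rough sketch of the subsampling idea, assuming borderA and borderB are N-by-2 arrays of [x y] pixel coordinates (the names are placeholders):
k = 10;
a = borderA(1:k:end, :);              % keep only every k-th border pixel
b = borderB(1:k:end, :);
dx = a(:, 1) - b(:, 1)';              % implicit expansion: all pairs (R2016b+)
dy = a(:, 2) - b(:, 2)';
d2 = dx.^2 + dy.^2;                   % compare squared distances only
approxDist = sqrt(min(d2(:)));        % take the square root once, at the end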
Another approach may be to introduce a new model for the border data structure: store it not as every individual point but as a set of 'characteristic' points.
Suppose I have two boxes (each of them is a rectangular cuboid, aka rectangular parallelepiped). I need to write a function that decides if the box with dimensions (a, b, c) can fit into the box with dimensions (A, B, C), assuming any rotations by any angles are allowed (not only by 90°).
The tricky part is the edges of the inner box may be not parallel to corresponding edges of the outer one. For example, a box that is very thin in the dimensions (a, b) but with length 1 < c < √3 can fit into a unit cube (1, 1, 1) if placed along its main diagonal.
I've seen questions [1], [2] but they seem to cover only rotations by 90°.
Not a complete answer, but a good start is to determine the maximum diameter that fits inside the larger box (inscribe the box in a sphere) and the minimum diameter needed for the smaller box. That gives a first filter for possibility. This also tells you how to orient the smaller box within the larger one.
If one box can fit inside the other, then it can also fit when both boxes have the same center. So it is enough to check rotations only; translation does not need to be checked.
2D case: take boxes X = (2A, 2B) and x = (2a, 2b) positioned around (0,0). That means the corners of X are (±A, ±B).
+------------------------+  <- corner (A, B)
|                        |
|    +--------------+    |  <- corner (a, b)
|    |    (0,0)     |    |
|    +--------------+    |  <- corner (a,-b)
|                        |
+------------------------+  <- corner (A,-B)
By rotating x around (0,0), its corners always stay on a circle C with radius sqrt(a^2+b^2). If part of circle C lies inside box X, and if the part inside X has enough arc length to place 2 points at a distance of 2a or 2b from each other, then x can fit inside X. For that we need to calculate the intersections of C with the lines x=A and y=B, and calculate the distance between these intersections. If the distance is equal to or greater than 2a or 2b, then x can fit inside X.
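Here is a sketch that transcribes this 2D test, plus the obvious degenerate cases; I have not verified that it is airtight for every input:
% Outer box X is 2A x 2B, inner box x is 2a x 2b, both centered at the origin.
function fits = canFit2D(A, B, a, b)
    if A < B, [A, B] = deal(B, A); end        % sort so that A >= B
    if a < b, [a, b] = deal(b, a); end        % and a >= b
    if a <= A && b <= B
        fits = true;                          % fits with no rotation at all
        return
    end
    r = hypot(a, b);                          % corners of x lie on circle C
    if r >= hypot(A, B) || r <= A
        % Either C never enters X, or C never reaches the side x = A;
        % since the axis-aligned test already failed, x cannot fit.
        fits = false;
        return
    end
    % C crosses the sides x = A and y = B; chord between the two
    % first-quadrant intersection points.
    p1 = [A, sqrt(r^2 - A^2)];
    p2 = [sqrt(r^2 - B^2), B];
    fits = norm(p1 - p2) >= 2*b;              % room for the short side of x (covers "2a or 2b")
end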
3D case: similar. Take boxes X=(2A,2B,2C) and x=(2a,2b,2c) positioned around (0,0,0). Rotating x around (0,0,0), all corners move on a sphere with radius sqrt(a^2+b^2+c^2). To see whether there is enough space on the sphere-box intersection, find the intersection of the sphere with the planes x=A, y=B and z=C, and check whether there is enough room to fit any of the quads (2a,2b), (2a,2c) or (2b,2c) on that part of the sphere. It is enough to check whether points on the border of that part are at a sufficient distance. For this part I'm not sure about an efficient approach; maybe finding the 'center' of the intersection part and checking its distance to the border can help.
You basically have to check several cases, some trivial and some needing minimization searches.
First, there are 4 3-parallel-axis cases. If any of them passes (with @jean's test), you fit. Otherwise, continue to the next test cases:
Next, there are 18 2D diagonal cases, where one axis is parallel and the other two are diagonal, with one angular degree of freedom. Discard a case if the parallel axis doesn't fit; otherwise find the minimum of some "impingement" metric as a function of the single rotation angle. Then check for any actual impingement at that angle. The impingement metric has to be some continuous measure of how well the inner box's 4 corners stay inside the 2 faces of the outer box, allowing that they may sometimes go outside during the search for the minimum-impingement angle. Hopefully a) there is a predictable maximum number of minima, and hopefully b) if a fit is possible at all, then a fit is guaranteed at the angle of one of these minima.
If none of those cases passes without impingement, then move on to the larger number of 3d no-parallel-axes cases, where the rotation parameter is now three angles instead of one, and you have to search for a (hopefully limited number of) minima of the impingement metric, and test there for actual impingement.
Not really elegant, I think. This is similar to another thread asking how long of a line of given width can fit inside a 2d box of given dimensions. I didn't consider the parallel-axis case there, but the diagonal case requires solving a quartic equation (much worse than a quadratic equation). You may have a similar problem for your one-parallel-axis cases, if you want to go analytic instead of searching for minima of an impingement metric. The analytic solution for the no-parallel-axis 3d diagonal cases probably involves solving (for the correct root of) an even higher order equation.
In fact, any box A with dimensions (a1, a2, a3) can fit in another box B with dimensions (b1, b2, b3) under the following conditions:
i) Every ai is less than or equal to every bi, with i = 1, 2, 3;
ii) Any ai has to be less than or equal to sqrt(b1^2+b2^2+b3^2), the main diagonal of B (diagB). Any box A with one of its dimensions equal to diagB has the other two dimensions equal to 0, since any plane orthogonal to it would extend outside box B.
iii) The sum of a1, a2 and a3 must be less than or equal to diagB.
From these, we can see that the greatest dimension ai of a box A for it to fit box B, given ai > bi, should lie in the interval (bi, diagB).
Thus, any box with one dimension bigger than any dimension of a box containing it will necessarily be placed along the latter's main diagonal.
Put simply:
A(a1, a2, a3) fits in B(b1, b2, b3) iff a1+a2+a3 <= diagB.
Can you get box dimensions? Say a0,a1,a2 are the dimensions of box A ordered by size and b0,b1,b2 are the dimensions of box B ordered by size.
A fits inside B if (a0 <= b0 AND a1 <= b1 AND a2 <= b2)
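As a one-line sketch, with boxA and boxB as 1x3 vectors of dimensions in any order (placeholder names):
% Axis-aligned (no rotation) check only: sort both and compare componentwise.
fitsAligned = all(sort(boxA) <= sort(boxB));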
How many squares of size a×a can be packed into a circle of radius R?
I don't need a solution. I just need some kind of a starting idea.
I apologise for writing such a long answer. My approach is to start with a theoretical maximum and a guaranteed minimum. When you approach the problem, you can use these values to determine how good any algorithm you use is. If you can think of a better minimum then you can use that instead.
We can define an upper limit for the problem by simply using the area of the circle:
Upper Limit = floor((pi * r^2) / (L * L))
where L is the width or height of the squares you are packing and r is the radius of the circle you are packing the squares into. We are sure this is an upper limit because a) we must have a discrete number of boxes and b) we cannot take up more space than the area of the circle. (A formal proof would run along the lines of: assume we had one more box than this; then the sum of the areas of the boxes would be greater than the area of the circle.)
So with an upper limit in hand, we can now take any solution that exists for all circles and call it a minimum solution.
So, let's consider a solution that exists for all circles by taking a look at the largest square we can fit inside the circle.
The largest square you can fit inside the circle has its 4 corners on the perimeter, and has a width and height of sqrt(2) * radius (by Pythagoras' theorem, using the radius for the length of the two shorter edges).
So the first thing we note is that if sqrt(2) * radius is less than the dimension of your squares, then you cannot fit any squares in the circle, because after all, this is the largest square you can fit.
Now we can do a straightforward computation to divide this large square into a regular grid of squares using the L you specified, which will give us at least one solution to the problem. So you have a grid of squares inside this maximum square. The number of squares you can fit into one row of this grid is
floor((sqrt(2) * radius)/ L)
And so this minimum solution asserts that you can have at least
Lower Limit = floor((sqrt(2) * radius) / L)^2
number of squares inside the circle.
So in case you got lost, all I did was take the largest square I could fit inside the circle and then pack as many squares as possible into a regular grid inside that, to give me at least one solution.
If you get an answer of 0 at this stage then you cannot fit any squares inside the circle.
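For concreteness, both bounds as a small sketch (L and r below are example values):
L = 1.0;  r = 4.0;                           % square side and circle radius
upperLimit = floor(pi * r^2 / L^2);          % area bound
innerSide  = sqrt(2) * r;                    % side of the largest inscribed square
lowerLimit = floor(innerSide / L)^2;         % grid packed into that square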
Now armed with a theoretical maximum and an absolute minimum, you can start trying any sort of heuristic algorithm you like for packing squares. A simple algorithm would be to just split the circle up into rows and fit as many squares as you can into each row. You can then take this minimum as a guideline to ensure that you came up with a better solution. If you want to spend more processing power looking for a better solution, use the theoretical maximum as a guideline for how close you are to the theoretical best.
And if you care about this, you could work out the maximum and minimum theoretical percentage of cover that the minimum algorithm I identified gives you. The largest inscribed square always covers a fixed ratio (2/pi, or about 63.7%, of the internal area of the circle). So the maximum theoretical minimum is about 63.7% cover. And the minimum non-trivial (i.e. non-zero) theoretical minimum is when you can only fit 1 square inside the largest square, which happens when the squares you are packing are just larger than half the width and height of the largest square you can fit in the circle. Basically you take up just over 25% of the inner square with 1 packed square, which means you get an approximate cover of about 16%.
Rasterise the circle using something like the midpoint circle algorithm, with a pixel size equal to your square size a. The number of filled pixels is then the number of squares you can fit in the circle. Of course, since you're not actually filling the pixels, just counting them, this should take time proportional to the circumference of the circle, not its area.
You'll have to pick the radius for rasterisation carefully, so that you only count pixels that are strictly inside the circle.
Edit: This may not be exactly correct, as it's possible that applying a sub-pixel offset to the grid could change the result. I'll leave the answer here as it may provide a useful starting point for an exact solution.
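A sketch of the counting idea, assuming the grid is aligned with the circle's centre (which is exactly the sub-pixel caveat above):
% Count grid cells of size a that lie entirely inside a circle of radius R,
% one quadrant at a time.
a = 1.0;  R = 5.3;                           % example values
count = 0;
row = 0;
while (row + 1) * a <= R                     % rows of cells in the first quadrant
    yFar = (row + 1) * a;                    % outermost edge of this row of cells
    xMax = sqrt(R^2 - yFar^2);               % horizontal room left at that height
    count = count + floor(xMax / a);         % whole cells that still fit in this row
    row = row + 1;
end
count = 4 * count;                           % the four quadrants are symmetric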
You can pack as many squares as you like into a circle. If you doubt this statement, draw a large circle on a piece of paper, then draw a square with side length 10^(-18)m inside it, repeat. When you get near to the boundary of the circle, start drawing squares with side length of 10^(-21)m.
So your first step must be to refine your question and state your problem more accurately.
Just a shot in the dark after a few minutes of thought...
What if you worked with half the circle and doubled the result at the end? I would start with a grid of squares the length of the diameter and the width of the radius, essentially blanketing the semicircle. Then check all 4 corners of each square and make sure their coordinates are within the radius of the circle. This would of course require that you plot the circle and squares on some sort of coordinate system or grid.
I hope this makes sense... It's in my head and it seems a bit difficult to articulate :)
EDIT:
After drawing it out, I think this method would work with a little tweaking. I would line up the squares along the diameter, but slide the first one down until it fits. Set that one in place and start lining up squares next to it until they no longer fit. Move out to the edge of this line of squares and repeat the same steps until your rows of squares reach the radius.
I need to initialize some three dimensional points, and I want them to be equally spaced throughout a cube. Are there any creative ways to do this?
I am using an iterative Expectation Maximization algorithm and I want my initial vectors to "span" the space evenly.
For example, suppose I have eight points that I want to space equally in a cube sized 1x1x1. I would want the points at the corners of a cube with a side length of 0.333, centered within the larger cube.
A 2D example is below. Notice that the red points are equidistant from each other and from the edges. I want the same for 3D.
In cases where the number of points does not have an integer cube root, I am fine with leaving some "gaps" in the arrangement.
Currently I am taking the cube root of the number of points and using that to calculate the number of points per dimension and the desired distance between them. Then I iterate through the points and increment the X, Y and Z coordinates (staggered so that Y doesn't increment until X loops back to 0, and likewise for Z with regard to Y).
If there's an easy way to do this in MATLAB, I'd gladly use it.
The sampling strategy you are proposing is known as a Sukharev grid, which is the optimal low dispersion sampling strategy, http://planning.cs.uiuc.edu/node204.html. In cases where the number of samples is not n^3, the selection of which points to omit from the grid is unimportant from a sampling standpoint.
In practice, it's possible to use low discrepancy (quasi-random) sampling techniques to achieve very good results in three dimensions, http://planning.cs.uiuc.edu/node210.html. You might want to look at using Halton and Hammersley sequences.
http://en.wikipedia.org/wiki/Halton_sequence
http://en.wikipedia.org/wiki/Constructions_of_low-discrepancy_sequences
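If you have the Statistics and Machine Learning Toolbox, generating a 3-D Halton set is only a couple of lines; this is just an illustrative sketch:
p = haltonset(3, 'Skip', 1);        % skip the degenerate first point at the origin
pts = net(p, 100);                  % first 100 points in the unit cube, well spread out
scatter3(pts(:,1), pts(:,2), pts(:,3), '.');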
You'll have to define the problem in more detail for the cases where the number of points isn't a perfect cube. However, for the cases where the number of points is a cube, you can use:
l = linspace(0, 1, n+2);          % n interior points plus the two boundary values
x = l(2:n+1);  y = x;  z = x;     % keep only the interior points
[X, Y, Z] = meshgrid(x, y, z);
Then for each position in the matrices, the coordinates of that point are given by the corresponding elements of X, Y, and Z. If you want the points listed in a single matrix, such that each row represents a point, with the three columns for x, y, and z coordinates, then you can say:
points(:,1) = reshape(X, [], 1);
points(:,2) = reshape(Y, [], 1);
points(:,3) = reshape(Z, [], 1);
You now have a list of n^3 points on a grid throughout the unit cube, excluding the boundaries. As others have suggested, you can probably randomly remove some of the points if you want fewer points. This would be easy to do by using randi([1, n^3], a, 1) to generate a indices of points to remove. (Don't forget to check for duplicates in the vector returned by randi(), otherwise you might not delete enough points.)
This looks related to sphere packing.
Choose the points randomly within the cube, and then compute vectors to the nearest neighbor or wall. Then, extend the endpoints of the smallest vector by an exponentially decaying step size. If you do this iteratively, the points should converge to the optimal solution. This even works if the number of points is not a perfect cube.
A good random generator could be a usable first approximation, maybe with a later filter to reposition (again randomly) the worst offenders.