I'm trying to plot an XY graph in real time using Java. Functions that depend only on X are easy: just iterate over x0...xn, get each value and draw lines between the points. There are a lot of guides on that and it's intuitive.
But there is literally no guide on plotting graphs where both x AND y are variables.
Consider this equation: sin(x^3 * y^2) = cos(x^2 * y^3)
Using an online graph plotter I get this:
While my best result plotting the same function is this:
I just iterate over every pixel on screen and pass the pixel position as parameters to the function. If the function's output is close to 0, I color the pixel. As you can see, the result is bad. It also takes a huge amount of processing power: it only redraws once every couple of seconds. And if I try to increase precision, all the lines just become thicker, especially around intersections.
My question is: how can I make my program faster and make it produce better-looking graphs? Maybe there are algorithms for that purpose?
The challenge is to choose the correct threshold. Pixels where abs(f(x,y)) is below the threshold should be colored; pixels above the threshold should be white.
The problem is that if the threshold is too low, gaps appear in places where no pixel lands exactly on the line. On the other hand, if the threshold is too high, the lines widen in places where the function is near zero and changing slowly.
So what's the correct threshold? The answer is the magnitude of the gradient, multiplied by the radius of a pixel. In other words, the pixel should be colored when
abs(f(x,y)) < |g(x,y)| * pixelRadius
The reason is that the magnitude of the gradient is equal to the maximum slope of the surface (at a given point). So a zero crossing occurs within a pixel if the slope is large enough to reduce the function to zero, inside the pixel.
That of course is only a rough approximation. It assumes that the gradient doesn't change significantly within the area bounded by the pixel. The function in the question conforms to that assumption reasonably well, except in the upper right corner. Notice that in the graph below there are Moiré patterns in the upper right. I believe those are due to a shortcoming in my anti-aliasing calculation: I don't compensate for a rapidly changing gradient.
In the graph below, pixels are white if
abs(f(x,y)) > |g(x,y)| * pixelRadius
Otherwise the pixel intensity is a number from 0 to 1, with 0 being black and 1 being white:
intensity = abs(f(x,y)) / (|g(x,y)| * pixelRadius)
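Not Java, but for reference, here is a minimal NumPy/Matplotlib sketch of this rule; the plot window, the resolution and the finite-difference gradient are my own choices:

import numpy as np
import matplotlib.pyplot as plt

def f(x, y):
    # the implicit function from the question, written as sin(...) - cos(...) = 0
    return np.sin(x**3 * y**2) - np.cos(x**2 * y**3)

w, h = 800, 800
xs = np.linspace(-3, 3, w)
ys = np.linspace(-3, 3, h)
X, Y = np.meshgrid(xs, ys)
Z = f(X, Y)

# finite-difference estimate of the gradient magnitude |g(x,y)|
gy, gx = np.gradient(Z, ys, xs)
gmag = np.hypot(gx, gy)

pixel_radius = 0.5 * (xs[1] - xs[0])   # half the sample spacing

# abs(f) / (|g| * pixelRadius), clipped to 1 so everything above the threshold is white
intensity = np.clip(np.abs(Z) / (gmag * pixel_radius + 1e-12), 0.0, 1.0)

plt.imshow(intensity, cmap='gray', origin='lower', extent=(-3, 3, -3, 3))
plt.show()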
I don't know exactly how the online plotter does it, but here are some suggestions.
Simplify your equation. For this specific one, sin(x^3 * y^2) = cos(x^2 * y^3) reduces to x^2 * y^2 * (x ± y) = (2 * n + 1/2) * pi, where n is any integer. That is much clearer than the original form.
Draw lines rather than points. Each n here corresponds to 4 curves; you can now loop over x, solve for y, and draw a line between adjacent points (see the sketch below).
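For illustration, a rough Python sketch of this idea; it solves the cubic in y with numpy.roots for each sampled x and just plots the resulting points (the x-range and the set of n values are arbitrary, and joining adjacent points into proper polylines still requires grouping the roots by branch):

import numpy as np
import matplotlib.pyplot as plt

def branch_points(n, sign, xs):
    # x^2 * y^2 * (x + sign*y) = (2n + 1/2) * pi, solved for y at each x
    c = (2 * n + 0.5) * np.pi
    pts = []
    for x in xs:
        if abs(x) < 1e-9:
            continue
        # cubic in y: sign*x^2*y^3 + x^3*y^2 - c = 0
        for y in np.roots([sign * x**2, x**3, 0.0, -c]):
            if abs(y.imag) < 1e-9:
                pts.append((x, y.real))
    return pts

xs = np.linspace(-3, 3, 1200)
for n in range(-3, 4):
    for sign in (+1, -1):
        pts = branch_points(n, sign, xs)
        if pts:
            px, py = zip(*pts)
            plt.plot(px, py, '.', markersize=1)
plt.xlim(-3, 3)
plt.ylim(-3, 3)
plt.show()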
Hope it helps!
Suppose I have an image and I want to find the 3x3 subarray with the maximum sum among all 3x3 subarrays.
How do I do that efficiently in Python (as fast as possible)? If you can provide sample code, that would be great.
My specific problem:
I want to extract the location of the center of the blob in this heatmap
I don't want to just take the maximum point, because that coordinate would not be very precise. The true center of the blob could actually lie between two pixels, so it's better to take a weighted average over several points to obtain subpixel precision. For example, if there are two points (x1, y1) and (x2, y2) with values 200 and 100, then the averaged coordinate will be x = (200*x1 + 100*x2)/300, y = (200*y1 + 100*y2)/300.
One of my solutions is to do a convolution. But I think it's not efficient enough because it requires multiplying by the kernel (which contains only ones). I'm looking for a fast implementation, so I can't just write my own loops because I'm not sure they would be fast enough.
I want to run this algorithm on 50 images every few milliseconds (the images come in as a batch). Concretely, think of these images as the output of a machine learning model that produces heatmaps. To obtain coordinates from these heatmaps, I need some kind of weighted average between the coordinates with high intensity. My idea is to do a weighted average over a 3x3 area of the image, roughly as in the sketch below. I am also open to other approaches that are faster or more elegant.
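To illustrate, something like this rough NumPy sketch is what I have in mind (just the 3x3 weighted average described above, for a single blob; names are placeholders):

import numpy as np

def subpixel_peak(heatmap):
    # integer location of the brightest pixel
    ym, xm = np.unravel_index(np.argmax(heatmap), heatmap.shape)
    # 3x3 window around it, clipped at the image border
    y0, y1 = max(ym - 1, 0), min(ym + 2, heatmap.shape[0])
    x0, x1 = max(xm - 1, 0), min(xm + 2, heatmap.shape[1])
    win = heatmap[y0:y1, x0:x1].astype(np.float64)
    ys, xs = np.mgrid[y0:y1, x0:x1]
    total = win.sum()
    # intensity-weighted average of the coordinates
    return (xs * win).sum() / total, (ys * win).sum() / total

# the two-pixel example from above: values 200 and 100
h = np.zeros((5, 5))
h[2, 2], h[2, 3] = 200, 100
print(subpixel_peak(h))   # x = (200*2 + 100*3)/300 = 2.33..., y = 2.0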
Looking for the "subarray of shape 3x3 with the maximum sum" is the same as looking for the maximum of the image after it has been filtered with an un-normalized 3x3 box filter. So it boils down to efficiently finding the maximum of an image, which you assume is a (perhaps "noisy") discrete sample of an underlying continuous and smooth signal - hence your desire to find a sub-pixel location.
You really need to split the problem into two parts:
Find the pixel location m = (xm, ym) of the maximum value of the image. This requires no more than a visit to every pixel in the image, and one comparison per pixel, so it's O(N) and hence optimal as long as you are operating at the native image resolution. In OpenCV it is done using the minMaxLoc function.
Apply whatever model of the image you are using to find its (subpixel-interpolated) maximum in a neighborhood of m.
To clarify point (2): you write
I don't want to just get the maximum point because that would cause the coordinate to not be very precise. The true center of the blob could actually be between 2 pixels
While intuitively plausible, this assertion needs to be made more precise in order to be computable. That is, you need to express mathematically what assumptions you make about the image that lead you to search for a "true" maximum between the pixel-sampled locations.
A simple example of such an assumption is quadratic smoothness. In this scenario you assume that, in a small (say, 3x3 or 5x5) neighborhood of the "true" maximum location, the image signal z is well approximated by a quadratic:
z = A00 dx^2 + A01 dx dy + A11 dy^2 + A02 dx + A12 dy + A22
where:
dx = x - xm; dy = y - ym
This assumption makes sense if the underlying signal is expected to be at least three times continuously differentiable, because of Taylor's theorem. Geometrically, it means that you assume (hope?) that the signal looks like a quadric (a paraboloid, or an ellipsoid) near its maximum.
You can then evaluate the above equation for each of the pixels in a neighborhood of m, replacing the actual image values for z, and thus obtain a linear system in the unknown Aij, with as many equations as there are neighbor pixels (so even a 3x3 neighborhood will yield an over-constrained system). Solving the system in the least-squares sense gives you the "optimal" coefficients Aij. The theoretical maximum as predicted by this model is where the first partial derivatives vanish:
del z / del dx = 2 A00 dx + A01 dy + A02 = 0
del z / del dy = A01 dx + 2 A11 dy + A12 = 0
This is a linear system in the two unknowns (dx, dy), and solving it yields the estimated location of the maximum and, through the above equation for z, the predicted image value at the maximum.
In terms of computational cost, all such model estimations are extremely fast, compared with traversing an image of even moderate size.
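To make the two steps concrete, here is a hedged Python sketch: cv2.minMaxLoc for step (1) and a 3x3 least-squares quadratic fit for step (2). The 3x3 neighborhood size and the assumption that the peak sits away from the image border are simplifications of mine.

import numpy as np
import cv2

def subpixel_max(img):
    img = np.asarray(img, dtype=np.float64)
    # step 1: integer-pixel location of the maximum
    _, _, _, (xm, ym) = cv2.minMaxLoc(img)
    # step 2: least-squares fit of
    #   z = A00 dx^2 + A01 dx dy + A11 dy^2 + A02 dx + A12 dy + A22
    # over the 3x3 neighborhood of (xm, ym), assumed away from the border
    A_rows, z_vals = [], []
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            A_rows.append([dx * dx, dx * dy, dy * dy, dx, dy, 1.0])
            z_vals.append(img[ym + dy, xm + dx])
    a00, a01, a11, a02, a12, a22 = np.linalg.lstsq(
        np.array(A_rows), np.array(z_vals), rcond=None)[0]
    # stationary point of the fitted quadratic:
    #   2*A00*dx + A01*dy + A02 = 0
    #   A01*dx + 2*A11*dy + A12 = 0
    dx, dy = np.linalg.solve(np.array([[2 * a00, a01], [a01, 2 * a11]]),
                             -np.array([a02, a12]))
    return xm + dx, ym + dy

(a22 is the fitted value at the integer maximum and is not needed for the location; plugging dx and dy back into the quadratic gives the predicted peak value.)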
I am sorry I did not exactly understand the meaning of your last paragraph, so I have just stopped at the point where I got all the coordinates having the maximum value. I have used cv2.filter2D for convolution on a thresholded image, and then used np.amax and np.where to find the coordinates holding that maximum.
import cv2
import numpy as np
from timeit import default_timer as timer
img = cv2.imread('blob.png', 0)                             # read as greyscale
start = timer()
_, thresh = cv2.threshold(img, 240, 1, cv2.THRESH_BINARY)   # keep only the bright pixels as 1s
mask = np.ones((3, 3), np.uint8)                            # un-normalized 3x3 box filter
res = cv2.filter2D(thresh, -1, mask)                        # each pixel now holds its 3x3 neighborhood sum
result = np.where(res == np.amax(res))                      # coordinates of the maximum sum
end = timer()
print(end - start)
I don't know whether it is as efficient as you want or not, but the measured time was 0.0013461999999435648 s.
P.S. The image you have provided had a white border which I had to crop out for this method.
One way is to sub-sample the image to find the neighborhood of the desired point. You can do this by looping not over all the pixels but over, e.g., every 5th pixel (row = row + 5 and col = col + 5 in the loop). After finding that approximate location, take a specific neighborhood around it and loop over all the pixels of that crop to find the exact location.
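A rough NumPy sketch of that idea (the step of 5 and the window size are just example values, and note that the coarse pass can miss a very narrow peak entirely):

import numpy as np

def coarse_to_fine_max(img, step=5, window=5):
    # coarse pass: look only at every step-th pixel
    coarse = img[::step, ::step]
    cy, cx = np.unravel_index(np.argmax(coarse), coarse.shape)
    cy, cx = cy * step, cx * step
    # fine pass: check every pixel in a small window around the coarse hit
    y0, y1 = max(cy - window, 0), min(cy + window + 1, img.shape[0])
    x0, x1 = max(cx - window, 0), min(cx + window + 1, img.shape[1])
    crop = img[y0:y1, x0:x1]
    wy, wx = np.unravel_index(np.argmax(crop), crop.shape)
    return y0 + wy, x0 + wx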
Based on my knowledge of image processing, to get a reliable result that works for any single blob, follow these steps:
Make the image greyscale if it isn’t already (pixel values 0-255)
Normalise the image so that pixel intensities cover the full range of 0-255
Convert the image to binary (each pixel is either 0 or 1). This can be achieved by thresholding, for example with the rule that any pixel with intensity less than or equal to 127 is set to 0 and anything else is set to 1
Find the weighted average of all the pixels that hold the value of “1”
or
Apply an erosion to the image until you are left with either 2 pixels or 1 pixel.
Case 1
If you have two pixels then you need to find the u and v co-ordinates of both pixels. The centre of the blob will be the halfway point between the two pixels' coordinates.
Case 2
If you have one pixel left then that pixel’s co-ordinates is the centre point.
—————
You mentioned about achieving this quickly in Python:
Python is by design an interpreted language, so it executes line by line, making it less suitable for highly iterative tasks like image processing. However, you can make use of libraries like OpenCV (https://docs.opencv.org/2.4/index.html), which is written in C/C++, to mitigate this, apart from making the task at hand a lot easier for you.
OpenCV also provides solutions for all the steps I listed above, so you should be able to achieve a reliable solution fairly quickly, though I can't say for sure whether it will hit your target of 50 images every few milliseconds. Another factor to take into account is the size of the images you are processing; the processing load grows with the total number of pixels.
UPDATE
I just found a good article that practically echoes my step-process:
https://www.learnopencv.com/find-center-of-blob-centroid-using-opencv-cpp-python/
More importantly it also denotes the formula for finding the centroid mathematically as:
c_x = (1/n) * (x_1 + x_2 + ... + x_n), and likewise c_y = (1/n) * (y_1 + y_2 + ... + y_n)
but this is written out better in the article than I can do here.
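For what it's worth, here is a short sketch of the first path above (greyscale, normalise, threshold, centroid), using cv2.moments as in the linked article; the file name and the threshold of 127 are placeholders, and a non-empty blob is assumed:

import cv2

img = cv2.imread('blob.png', cv2.IMREAD_GRAYSCALE)           # step 1: greyscale
norm = cv2.normalize(img, None, 0, 255, cv2.NORM_MINMAX)     # step 2: stretch intensities to 0-255
_, binary = cv2.threshold(norm, 127, 1, cv2.THRESH_BINARY)   # step 3: 0/1 image

# step 4: centroid of all the pixels holding the value 1
# (image moments give the same result as averaging the coordinates of those pixels)
M = cv2.moments(binary, binaryImage=True)
cx = M['m10'] / M['m00']
cy = M['m01'] / M['m00']
print(cx, cy)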
I have some area X by Y pixels and I need to fill it up pixel by pixel. The problem is that at any given moment the drawn shape should be as round as possible.
I think that this algorithm is subset of Ordered Dithering, when converting grayscale images to one-bit, but I could not find any references nor could I figure it out myself.
I am aware of Bresenham's circle algorithm, but it is used to draw a circle of a certain radius, not of a certain area.
I created an animation of all filling percentages for a 10 by 10 pixel grid. As the full area is 10x10 = 100 px, each frame is exactly a 1% increment.
A filled disk has the equation
(X - Xc)² + (Y - Yc)² ≤ C.
When you increase C, the number of points that satisfies the equation increases, but because of symmetry it increases in bursts.
To obtain the desired filling effect, you can compute (X - Xc)² + (Y - Yc)² for every pixel, sort on this value, and let the pixels appear one by one (or in a single go if you know the desired number of pixels).
You can break ties in different ways:
keep the original order as when you computed the pixels, by using a stable sort;
shuffle the runs of equal values;
slightly alter the center coordinates so that there are no ties.
Filling with the de-centering trick. (The images of the per-pixel values and of the resulting fill order are omitted here.)
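For reference, a small Python sketch of this ordering (the grid size and the de-centering offsets are arbitrary choices of mine):

import numpy as np

w, h = 10, 10
# slightly de-centered so that no two pixels get exactly the same value
xc, yc = (w - 1) / 2 + 0.01, (h - 1) / 2 + 0.02

ys, xs = np.mgrid[0:h, 0:w]
d2 = (xs - xc) ** 2 + (ys - yc) ** 2

# order[k] is the (y, x) of the k-th pixel to switch on
order = np.dstack(np.unravel_index(np.argsort(d2, axis=None), d2.shape))[0]

def fill(percent):
    # mask after `percent` of the area has been filled
    mask = np.zeros((h, w), bool)
    k = int(round(percent / 100 * w * h))
    for y, x in order[:k]:
        mask[y, x] = True
    return mask

print(fill(25).astype(int))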
How many squares of size a×a can be packed into a circle of radius R?
I don't need a solution. I just need some kind of a starting idea.
I apologise for writing such a long answer. My approach is to start with a theoretical maximum and a guaranteed minimum. When you approach the problem, you can use these values to determine how good any algorithm you use is. If you can think of a better minimum then you can use that instead.
We can define an upper limit for the problem by simply using the area of the circle
Upper Limit = floor( (PI * r^2) / (L * L) )
Where L is the width (and height) of the squares you are packing and r is the radius of the circle you are packing the squares into. We are sure this is an upper limit because a) we must have a whole number of boxes and b) the boxes cannot take up more space than the area of the circle. (A formal proof would go along these lines: assume we had one more box than this; then the total area of the boxes would be greater than the area of the circle.)
So with an upper limit in hand, we can now take any solution that exists for all circles and call it a minimum solution.
So, let's consider a solution that exists for all circles by taking a look at the largest square we can fit inside the circle.
The largest square you can fit inside the circle has its 4 corners on the perimeter, and has a width and length of sqrt(2) * radius (by Pythagoras' theorem, using the radius for the length of the two shorter edges).
So the first thing we note is that if sqrt(2) * radius is less than the dimension of your squares, then you cannot fit any squares in the circle, because afterall, this is the largest square you can fit.
Now we can do a straightforward computation to divide this large square into a regular grid of squares using the L you specified, which will give us at least one solution to the problem. So you have a grid of squares inside this maximal square. The number of squares you can fit into one row of this grid is
floor((sqrt(2) * radius)/ L)
And so this minimum solution asserts that you can have at least
Lower Limit = ( floor((sqrt(2) * radius) / L) )^2
number of squares inside the circle.
So in case you got lost, all I did was take the largest square I could fit inside the circle and then pack as many squares as possible into a regular grid inside that, to give me at least one solution.
If you get an answer at 0 for this stage then you cannot fit any squares inside the circle.
Now armed with a theoretical maximum and an absolute minimum, you can start trying any sort of heuristic algorithm you like for packing squares. A simple algorithm would be to just split the circle up into rows and fit as many squares as you can into each row. You can then take this minimum as a guideline to ensure that you come up with a better solution. If you want to spend more processing power looking for a better solution, you can use the theoretical maximum as a guideline for how close you are to the theoretical best.
And if you care about this, you could work out the maximum and minimum theoretical percentage of cover that the minimum algorithm I identified gives you. The largest inscribed square always covers a fixed ratio of the circle's area (2/pi, or about 63.7%), so the maximum theoretical minimum is about 63.7% cover. The minimum non-trivial (i.e. non-zero) theoretical minimum occurs when you can only fit 1 square inside the largest square, which happens when the squares you are packing are just larger than half the width and height of the largest square you can fit in the circle. In that case the single packed square takes up just over 25% of the inner square, which means you get an approximate cover of about 16%.
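Both bounds are cheap to compute; a small sketch (names are mine):

import math

def square_packing_bounds(R, L):
    # theoretical maximum: circle area divided by the area of one square
    upper = math.floor(math.pi * R * R / (L * L))
    # guaranteed minimum: a regular grid inside the largest inscribed square,
    # whose side is sqrt(2) * R
    per_row = math.floor(math.sqrt(2) * R / L)
    lower = per_row * per_row
    return lower, upper

print(square_packing_bounds(10, 3))   # (16, 34) for a radius-10 circle and 3x3 squares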
Rasterise the circle using something like the midpoint circle algorithm. The number of filled pixels is the number of squares you can fit in the circle. Of course, since you're not actually filling the pixels, just counting them, this should take time proportional to the circumference of the circle, not its area.
You'll have to pick the radius for rasterisation carefully, so that you only count pixels that are strictly inside the circle.
Edit: This may not be exactly correct, as it's possible that applying a sub-pixel offset to the grid could change the result. I'll leave the answer here as it may provide a useful starting point for an exact solution.
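As a rough illustration (not the midpoint algorithm itself, but the same row-by-row spirit), here is a count of the a-by-a grid cells that lie entirely inside the circle; it only does work proportional to the number of rows, and it assumes the grid is aligned on the circle's centre, which is exactly the caveat in the edit above:

import math

def cells_inside_circle(R, a):
    # count a-by-a grid cells whose farthest corner is still inside the circle
    # (corners landing exactly on the circle are counted; subtract a tiny
    #  epsilon from R if you need strictly-inside cells only)
    count = 0
    j = 0
    while (j + 1) * a < R:                 # rows of cells above the centre line
        y_far = (j + 1) * a                # this row's corner farthest from the centre
        half_width = math.sqrt(R * R - y_far * y_far)
        count += 2 * int(half_width // a)  # symmetric columns left and right of centre
        j += 1
    return 2 * count                       # mirror the rows below the centre line

print(cells_inside_circle(5.0, 1.0))   # 60 unit cells for a radius-5 circle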
You can pack as many squares as you like into a circle. If you doubt this statement, draw a large circle on a piece of paper, then draw a square with side length 10^(-18)m inside it, repeat. When you get near to the boundary of the circle, start drawing squares with side length of 10^(-21)m.
So your first step must be to refine your question and state your problem more accurately.
Just a shot in the dark after a few minutes of thought...
What if you worked with half the circle and doubled the result at the end? I would start with a grid of squares as long as the diameter and as wide as the radius, essentially blanketing the semicircle. Then check all 4 corners of each square and make sure their coordinates are within the radius of the circle. This would of course require that you plot the circle and squares on some sort of coordinate system or grid.
I hope this makes sense... It's in my head and it seems a bit difficult to articulate :)
EDIT:
After drawing it out, I think this method would work with a little tweaking. I would line up the squares along the diameter, but slide the first one down until it fits. Set that one in place and start lining up squares next to it until they no longer fit. Move out to the edge of this line of squares and repeat the same steps until your rows of squares reach the radius.
I am writing a function to draw an approximate circle on a square array (in Matlab, but the problem is mainly algorithmic).
The goal is to produce a mask for integrating the light that falls on a portion of a CCD sensor from a diffraction-limited point source (whose diameter corresponds to a few pixels on the CCD array). In summary, the CCD sensor sees a rotationally symmetric pattern, which of course has no obligation to be centered on one particular pixel of the CCD (see the example image below).
Here is the algorithm that I currently use to produce my discretized circular mask, and which works partially (Matlab/Octave code):
xt = linspace(-xmax, xmax, npixels_cam); % in physical coordinates (meters)
[X Y] = meshgrid(xt-center(1), xt-center(2)); % shifted coordinate matrices
[Theta R] = cart2pol(X,Y);
R = R'; % cart2pol uses a different convention for rows/columns
mask = (R<=radius);
As you can see, my algorithm selects (sets to 1) all the pixels whose physical distance (in meters) is smaller or equal to a radius, which doesn't need to be an integer.
I feel like my algorithm may not be the best solution to this problem. In particular, I would like it to include the pixel in which the center is present, even when the radius is very small.
Any ideas ?
(See http://i.stack.imgur.com/3mZ5X.png for an example image of a diffraction-limited spot on a CCD camera).
If you want to select pixels if and only if they contain any part of the circle C:
In each pixel, place a small circle A with radius = half the pixel size (the inscribed circle), and another circle B around it with radius = sqrt(2) * half the pixel size (the circumscribed circle).
To test whether two circles touch each other, you just calculate the centre-to-centre distance and subtract the sum of the two radii.
If C touches A then you select the pixel. If C touches B but not A, you need to test all four pixel sides for overlap, as in Circle line-segment collision detection algorithm? (If C does not even touch B, the pixel can be rejected outright.)
A brute force approximate method is to make a much finer grid within each pixel and test each center point in that grid.
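A sketch of the A/B test above, treating C as a filled disk (which matches the masking use case in the question); the function and the return labels are mine:

import math

def classify_pixel(px, py, pixel_size, cx, cy, R):
    # inscribed circle A and circumscribed circle B of the square pixel
    rA = pixel_size / 2.0
    rB = math.sqrt(2) * rA
    d = math.hypot(px - cx, py - cy)   # pixel centre to disk centre
    if d <= R + rA:
        return 'overlap'               # disk C certainly reaches the pixel
    if d > R + rB:
        return 'no overlap'            # disk C certainly misses the pixel
    return 'test the sides'            # ambiguous: fall back to the 4 side tests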
This is a well-studied problem. Several levels of optimization are possible:
You can brute-force check if every pixel is inside the circle. (r^2 >= (x-x0)^2 + (y-y0)^2)
You can brute-force check if every pixel in a square bounding the circle is inside the circle. (r^2 >= (x-x0)^2 + (y-y0)^2 where |x-x0| < r and |y-y0| < r)
You can go line by line (where |y-y0| < r), calculate the starting and ending x, and fill the whole span in between (see the sketch after this list). (Although square roots aren't cheap.)
There's an endless variety of more sophisticated algorithms. Here's a common one: http://en.wikipedia.org/wiki/Midpoint_circle_algorithm (filling in the circle is left as an exercise)
It really depends on how sophisticated you want to be, which comes down to how important good performance is.
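For instance, a minimal sketch of option 3 above (the line-by-line fill), with one square root per row; the callback style is just for illustration:

import math

def fill_circle(set_pixel, x0, y0, r):
    # scanline fill: for each row, compute the half-width and fill the span
    for y in range(y0 - r, y0 + r + 1):
        half = int(math.sqrt(r * r - (y - y0) * (y - y0)))
        for x in range(x0 - half, x0 + half + 1):
            set_pixel(x, y)

pixels = set()
fill_circle(lambda x, y: pixels.add((x, y)), 0, 0, 3)
print(len(pixels))   # 29 pixels for r = 3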
I'm working on a project that requires me to accurately control the number of pixels that are used to draw (roughly) circular stimuli, and although Bresenham's algorithms are great, they don't draw circles of an arbitrary area (to my knowledge). I've tried scripts that interrupt Bresenham's algorithm when the desired area has been plotted, but the results are decidedly hit-or-miss. Does anyone know of a way to plot the "best" circle (somewhat subjective, I know) using a given number of pixels? Many thanks!
A rough way of doing it, for example:
The radius of a circle of area 1000 sq px is sqrt(1000/pi) = 17.8... That circle then fits in a matrix of roughly 36x36 (with a pixel or two of margin). If you make "indices" for that matrix where the central pixel is (0,0), you can easily check whether a pixel falls in the circle or not by substituting into the equation of a circle, x^2 + y^2 = r^2 (or the alternative equation for a circle centered at (a,b)). If it evaluates to TRUE, the pixel is inside; if not, it's outside the circle.
As a pseudocode/example, in Python I would do an optimized version of:
import numpy, math
target_area = 1000.0
r = (target_area / math.pi) ** 0.5            # radius that gives the target area
n = int(2 * r) + 2                            # matrix just large enough to hold the circle
m = numpy.zeros((n, n))
a, b = r, r                                   # centre of the circle
for row in range(0, m.shape[0]):
    for col in range(0, m.shape[1]):
        if (col - a) ** 2 + (row - b) ** 2 <= r ** 2:   # is this pixel inside the circle?
            m[row, col] = 1
numpy.sum(m)
#>>> 999
Here is the result when the target area is 100,000 pixels (the actual number of pixels generated is 99988):
You could also write a routine to find which areas can be matched more closely than others with this algorithm, and select those values to ensure conformity.
The area of a circle is A = Pi * r^2. You're starting from the area and (apparently) want the radius, so we divide both sides by Pi to get r^2 = A / Pi. Taking the square root of both sides then gives us r = sqrt(A / Pi). Once you have the radius, drawing with most of the normal algorithms should be straightforward.
A simple (but somewhat naive approach) would be to simply count the number of pixels drawn by Bresenham's algorithm for a given radius, and then use binary search to find the radius that produces the desired number of pixels.
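A sketch of that idea, assuming the pixel count of a midpoint/Bresenham circle grows (roughly) monotonically with the radius; the routine below is the standard midpoint circle, the set removes duplicate pixels at the octant seams, and the search bounds are arbitrary:

def circle_outline_pixels(r):
    # midpoint circle algorithm, returning the set of outline pixels
    pts = set()
    x, y, d = r, 0, 1 - r
    while x >= y:
        for px, py in ((x, y), (y, x), (-x, y), (-y, x),
                       (x, -y), (y, -x), (-x, -y), (-y, -x)):
            pts.add((px, py))
        y += 1
        if d < 0:
            d += 2 * y + 1
        else:
            x -= 1
            d += 2 * (y - x) + 1
    return pts

def radius_for_pixel_count(target, r_max=10000):
    # binary search on the (roughly monotone) pixel count
    lo, hi = 1, r_max
    while lo < hi:
        mid = (lo + hi) // 2
        if len(circle_outline_pixels(mid)) < target:
            lo = mid + 1
        else:
            hi = mid
    return lo

print(radius_for_pixel_count(2000))   # roughly 2000 / (4 * sqrt(2)), i.e. around 354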
My first thought is to use an algorithm with sub-pixel precision. Consider what happens if your center has irrational coordinates and you gradually increase the radius. This would fill seemingly random pixels around the perimeter as they became included in the circle. You want to avoid the symmetry that makes the 4 quadrants of the circle add pixels at the same time, so that you get closer to single pixels being added one at a time. How something like this could be implemented, I haven't a clue.
I had to solve a single instance of the 3D version once. I needed the number of lattice points inside a sphere to be less than or equal to 255. IIRC if r*r = 15 there are 240 points inside the sphere. I was not concerned with getting 255 exactly though.
Suppose you have 2000 pixels in total that should make up your complete circle. By complete I mean there should be no breaks and the pixels must be connected to each other. Since circumference = 2*Pi*R, the running length of the circle's outline, this is the total number of pixels you have. Now simply write R = 2000 / (2*Pi) and this will give you the radius. You should then be able to draw a circle comprised of 2000 pixels. I hope this is what you wanted.
Let's forget about pixels for a second and let's work through the basic math/geometry.
We all know that
Area of a Circle = Pi * Radius ^2
which is the same as saying
Area of a Circle = Pi * (Diameter / 2) ^2
We all know that
Area of the Square Enclosing the Circle (i.e. each side of the square is tangent to the circle) = Diameter * Diameter
Thus
Ratio of the Circle Area to the Square Area = Circle Area / Square Area = (Pi * (Diameter / 2) ^2) / (Diameter * Diameter) = Pi / 4
Now let's assume that we have a circle and square with a pixel count large enough so that we don't have to worry about the troublesome edge cases around the border of the circle. In fact let's assume for a second that we have a very large diameter (maybe 10,000 or maybe even infinite). With this assumption the following holds:
Number of Pixels in the Circle = (Number of Pixels in the Square) * (Ratio of the Circle Area to the Square Area)
In other words for a sufficiently large number of pixels, the ratio of the areas of a perfectly drawn circle to a perfectly drawn square will approximate the ratio of the number of pixels in a pixelated circle to the number of pixels in the enclosing pixelated square.
Now in a pixelated square, the number of pixels in that square is the number of pixels across times the number of pixels high; in other words, it is the square's width (in pixels) squared. Let's call the square's pixel width d. So substituting into the formulas above we have:
Number of Pixels in the Circle = (d * d) * (Pi /4)
So now let's solve for d
d = Sqrt(4 * (Num of Pixels in the Circle) / Pi)
We said earlier that d was the width of the square in pixels. It also happens to be the diameter of the circle, since the square's sides are tangent to the circle. So when you want to draw a circle with a certain number of pixels, you draw a circle whose diameter is:
Diameter of Circle = Sqrt(4 * (Desired Number of Pixels in Circle Area) / Pi)
Now obviously you have to make some choices about rounding and so forth (there is no such thing as a fractional pixel), but you get the point. Also, this formula is more accurate as the desired number of pixels for the area of the circle goes up. For small amounts of pixels the roundoff error may not give you exactly the right number of pixels.
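As a quick sanity check of the formula (the brute-force count of pixel centres below is my own, purely for verification):

import math

def diameter_for_area_pixels(n):
    # Diameter of Circle = Sqrt(4 * n / Pi), from the derivation above
    return math.sqrt(4.0 * n / math.pi)

def filled_pixel_count(d):
    # brute-force count of pixel centres inside a circle of diameter d
    r = d / 2.0
    n = int(math.ceil(d)) + 2
    cx = cy = n / 2.0
    return sum(1 for x in range(n) for y in range(n)
               if (x + 0.5 - cx) ** 2 + (y + 0.5 - cy) ** 2 <= r * r)

target = 100000
d = diameter_for_area_pixels(target)
print(d, filled_pixel_count(d))   # the count should land close to 100000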