Perlin Noise algorithm - Why subtract one from the points?

I was looking at this doc: https://code.google.com/p/fractalterraingeneration/wiki/Perlin_Noise
And in the document it says:
top-left: just x, y
top-right: x-1, y
bottom-left: x, y-1
bottom-right: x-1, y-1
This may seem counter-intuitive, but it is to adjust for the possibly negative values in the gradients. Gradients pointing left or down will have negative values, so subtracting 1 from the values compensates for this. I admit I'm still not 100% clear on this, but it is a necessary step.
Am I right in assuming that this only applies if we plan to draw pixels using the final value as the weight? Otherwise, there is no need to subtract 1, correct?

As shown by the last image on the page you've linked, the author is quite wrong about the -1's in the quoted paragraph. Their actual purpose is to generate the four corners of a grid region from a single input point; this axis-aligned square grid cell is precisely the region Perlin noise is designed to operate on.
If the four points are generated in a way that does not correctly align with adjacent patches (like x+1 in a left-to-right count, or y-1 in a bottom-up count), the image breaks sharply at the patch boundaries, as that author describes at the bottom of the page. Choosing the same point four times (not adding or subtracting anything) collapses the entire patch to a single point, generating a white-noise image with wasted computation.
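For concreteness, here is a minimal Python sketch (my own, not the linked article's code) of how one sample point yields both the four corners of its cell and the offset vector measured from each corner; the -1 terms are simply those offsets seen from the far corners:

    import math

    def cell_corner_offsets(x, y):
        """Map a sample point (x, y) to its grid cell: each of the four
        corners is paired with the offset vector from that corner to the
        point. Subtracting 1 from a fractional coordinate is just the
        offset as seen from the corner one step over."""
        x0, y0 = math.floor(x), math.floor(y)   # integer corner of the cell
        fx, fy = x - x0, y - y0                 # fractional position, in [0, 1)
        return {
            (x0,     y0):     (fx,     fy),      # "just x, y"
            (x0 + 1, y0):     (fx - 1, fy),      # "x-1"
            (x0,     y0 + 1): (fx,     fy - 1),  # "y-1"
            (x0 + 1, y0 + 1): (fx - 1, fy - 1),  # "x-1, y-1"
        }

Because adjacent cells share their corners (and hence their gradients), the interpolated values agree on the cell boundaries, which is exactly the alignment property described above.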

Related

Seeking approximate algorithm to find largest clear circle in an area

Related: Is there a simple algorithm for calculating the maximum inscribed circle into a convex polygon?
I'm writing a graphics program whose goals are artistic rather than mathematical. It composes a picture step by step, using geometric primitives such as line segments or arcs of small angle. As it goes, it looks for open areas to fill in with more detail; as the available open areas get smaller, the detail gets finer, so it's loosely fractal.
At a given step, in order to decide what to do next, we want to find out: where is the largest circular area that's still free of existing geometric primitives?
Some constraints of the problem
It does not need to be exact. A close-enough answer is fine.
Imprecision should err on the conservative side: an almost-maximal circle is acceptable, but a circle that's not quite empty isn't acceptable.
CPU efficiency is a priority, because it will be called often.
The program will run in a browser, so memory efficiency is a priority too.
I'll have to set a limit on level of detail, constrained presumably by memory space.
We can keep track of the primitives already drawn in any way desired, e.g. a spatial index. Exactness of these is not required; e.g. storing bounding boxes instead of arcs would be OK. However the more precision we have, the better, because it will allow the program to draw to a higher level of detail. But, given that the number of primitives can increase exponentially with the level of detail, we'd like storage of past detail not to increase linearly with the number of primitives.
To summarize the order of priorities
Memory efficiency
CPU efficiency
Precision
P.S.
I framed this question in terms of circles, but if it's easier to find the largest clear golden rectangle (or golden ellipse), that would work too.
P.P.S.
This image gives some idea of what I'm trying to achieve. Here is the start of a tendril-drawing program, in which decisions about where to sprout a tendril, and how big, are made without regard to remaining open space. But now we want to know, where is there room to draw a tendril next, and how big? And where after that?
One very efficient way would be to recursively divide your area into rectangular sub-areas, splitting them when necessary to divide occupied areas from unoccupied areas. Then you would simply need to keep track of the largest unoccupied area at each time. See https://en.wikipedia.org/wiki/Quadtree - but you needn't split into squares.
Given any rectangle, you can draw a line inside it so that at least one of the rectangles to either side of the line is a golden rectangle. Therefore you can recursively erect partitions within a rectangle so that all but one of the rectangles formed by the partitions are golden rectangles, and the odd rectangle left over is vanishingly small. You could do this to create a quadtree-like structure where almost all of the rectangles left over are golden rectangles.
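To make the construction concrete, here is a hedged Python sketch of that recursive partition; the (x, y, w, h) tuple layout and the depth cutoff are my own choices, not part of the answer:

    PHI = (1 + 5 ** 0.5) / 2  # golden ratio, ~1.618

    def split_off_golden(x, y, w, h):
        """Cut one golden rectangle off a w x h rectangle, returning
        (golden_piece, leftover), both as (x, y, w, h) tuples."""
        if w >= h:
            if w >= h * PHI:
                cut = h * PHI            # vertical cut; left piece has ratio PHI
                return (x, y, cut, h), (x + cut, y, w - cut, h)
            cut = w / PHI                # horizontal cut; top piece has ratio PHI
            return (x, y, w, cut), (x, y + cut, w, h - cut)
        if h >= w * PHI:                 # tall rectangle: same logic, roles swapped
            cut = w * PHI
            return (x, y, w, cut), (x, y + cut, w, h - cut)
        cut = h / PHI
        return (x, y, cut, h), (x + cut, y, w - cut, h)

    def golden_partition(x, y, w, h, depth):
        """All pieces but the final leftover are golden rectangles; the
        leftover's area shrinks as depth grows."""
        pieces, rest = [], (x, y, w, h)
        for _ in range(depth):
            golden, rest = split_off_golden(*rest)
            pieces.append(golden)
        return pieces + [rest]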
This seems like the kind of situation where a randomized algorithm might be helpful. Choose points at random, reject and choose more if they're inappropriate for some reason, then find the min distance from your choices to each of the figures already included. The random point with the max of the mins would be your choice.
The number of sample points might have to increase as the complexity of the figure increases.
The random algorithm could be improved by checking points nearby good choices. Keep checking neighbors until no more improvement is possible.
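A hedged Python sketch of this sampling-plus-refinement idea; dist_to(point, primitive) is an assumed helper returning the distance from a point to one primitive (zero or negative when the point touches it):

    import random

    def best_random_point(primitives, dist_to, width, height, samples=200):
        # Keep the candidate whose nearest primitive is farthest away:
        # the max of the min distances. Assumes `primitives` is non-empty.
        best_p, best_c = None, float("-inf")
        for _ in range(samples):
            p = (random.uniform(0, width), random.uniform(0, height))
            c = min(dist_to(p, prim) for prim in primitives)
            if c > best_c:
                best_p, best_c = p, c
        return best_p, best_c

    def refine(p, clearance, primitives, dist_to, step=8.0, min_step=0.5):
        # Hill-climb on the four axis neighbours, halving the step when stuck.
        while step >= min_step:
            moved = False
            for dx, dy in ((step, 0), (-step, 0), (0, step), (0, -step)):
                q = (p[0] + dx, p[1] + dy)
                c = min(dist_to(q, prim) for prim in primitives)
                if c > clearance:
                    p, clearance, moved = q, c, True
            if not moved:
                step /= 2
        return p, clearance

The sample count and step sizes are arbitrary; as noted above, both probably need to grow with the complexity of the figure.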
Here's a simple way that uses a fixed amount of memory and time per update, regardless of how many drawing primitives you use. How much memory (and time per update) is needed can be controlled according to how high a "resolution" you need:
Divide the space up into a grid of points. We will maintain a 2D array, d[], whose entry d[x, y] records the minimum distance from the grid point (x, y) to any already-drawn primitive. Initially, set every element in this array to infinity (or some huge number).
Whenever you draw some primitive, iterate over all grid points (x, y) calculating the minimum distance (or some conservative approximation to it) from (x, y) to the just-drawn primitive. E.g., if the primitive just drawn was a circle of radius r centered at (p, q), then this distance would be sqrt((x-p)^2 + (y-q)^2) - r. Then update d[x, y] with this new distance value if it is smaller than its current value.
The grid point at which the largest circle can be drawn without touching any already-drawn primitive is the grid point that is the farthest away from any primitive drawn so far. To find it, simply scan through d[] to find its maximum value, and note the corresponding indices (x, y). d[x, y] will be the maximum radius you could safely use for this circle.
Repeat steps 2 and 3 as necessary.
A couple of points:
For primitives that have area, you can assign 0 or a negative value to all d[x, y] corresponding to grid points inside the primitive.
For any convex primitive, you can often avoid updating most of the d[] array by scanning rows (or columns) "outward" from the just-drawn primitive's border: the distance from the current grid point to the primitive only increases as you move outward, so once this distance exceeds the current maximum value in d[] you can stop scanning the row, because nothing you would compute further along it could be smaller than a value already stored there.
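As an illustration only, here is a minimal Python sketch of steps 1-3 for circle primitives, without the row-scan optimisation from the last bullet:

    import math

    INF = float("inf")

    class ClearanceGrid:
        """d[y][x] holds the minimum distance from grid point (x, y)
        to any primitive drawn so far (step 1: everything at infinity)."""

        def __init__(self, width, height):
            self.d = [[INF] * width for _ in range(height)]

        def add_circle(self, p, q, r):
            # Step 2: fold in the distance to a just-drawn circle of
            # radius r centred at (p, q); negative inside the circle.
            for y, row in enumerate(self.d):
                for x in range(len(row)):
                    dist = math.hypot(x - p, y - q) - r
                    if dist < row[x]:
                        row[x] = dist

        def largest_clear_circle(self):
            # Step 3: the grid point farthest from everything drawn.
            best = (-INF, 0, 0)
            for y, row in enumerate(self.d):
                for x, dist in enumerate(row):
                    if dist > best[0]:
                        best = (dist, x, y)
            radius, x, y = best
            return (x, y), radius  # centre and maximum safe radius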

Curvature estimation from image

I have images like these:
In these images, the red line is what I want to get from the image. The original images do not have the red lines, only the green road.
What I want is to estimate the curve from the image in the form of the coefficients of an equation: y = A x^2 + B x + C. The images can contain noise (black holes on the edges, as you see above).
I have tried to solve this by using least squares method (LSM), but there are two problems with this approach:
The method is too slow even on a PC, because the number of points is high.
The road is too wide in the following case:
The curve in the left image is recognized correctly, but in the right image it is not. The reason, I suppose, is that the road is too wide and too short.
As a solution to both cases I want to make the road narrow; in the ideal case it is the red line in the images above. Alternatively, I want to use LSM for line detection (y = A x + B) to cut down the processing time.
I have tried eroding the image - it is the wrong approach.
A skeleton is also not the right solution.
Any ideas on how to achieve the desired result (making the road narrow)?
Or any ideas for another approach to this problem?
If you can rely on always having one axis as the dependent variable in your fit (it looks like it should be the x axis in the "correct" examples above, although your bottom-right failure seems to be using y), then you could do something like this:
for each scanline y, pick the median x of the non-black pixels
if there are no non-black pixels (or fewer than some chosen noise threshold), skip the line
You now have a list of (x,y) pairs, at most as many as there are scan lines. These represent guesses as to the midpoint of the road at each level. Fit a low order polynomial x=f(y) (I'd go for linear or cubic, but you could do quadratic if you prefer) to these points by least squares.
For the sorts of images you've shown, the detail is very coarse, so you might be able to manage with just a subset of points. But even without that the processing cost should be reasonable unless you're using very constrained hardware.
If left-right paths occur often then you could fit both ways and then apply some kind of goodness of fit criterion. If paths loop back on themselves often, then this sort of midpoint approach won't give you a good answer, but then you're onto a loser with the fitting anyway.
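A minimal NumPy sketch of the scanline-median fit described above; the boolean mask input (True for non-black road pixels), the noise threshold, and the default quadratic degree are my assumptions:

    import numpy as np

    def fit_road_midline(mask, min_pixels=5, degree=2):
        """Fit x = f(y) through per-scanline midpoints. Returns polynomial
        coefficients, highest power first (numpy's convention)."""
        ys, xs = [], []
        for y in range(mask.shape[0]):
            cols = np.flatnonzero(mask[y])   # non-black pixels on this scanline
            if len(cols) < min_pixels:       # noise threshold: skip sparse lines
                continue
            ys.append(y)
            xs.append(np.median(cols))       # midpoint guess for the road
        return np.polyfit(ys, xs, degree)    # least squares, one point per line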

Area divide algorithm

Is there any algorithm to find a distribution of an area into n sub-regions, where each sub-region might have a different area?
To put the problem statement formally: suppose you have a rectangular plot. How will you divide the region into n rectangles? The sum of the areas of these sub-rectangles will be equal to the area of the original rectangular plot (so there won't be any overlaps between the rectangles),
and the area of each of these smaller n rectangles is given beforehand.
The restriction is on the width of each sub-rectangle.
This subdivision has to be displayed on, say, a computer screen, which is divided into pixels. So I don't want any dimension to be smaller than a pixel (or maybe 10), as it would be of no use to display.
I was looking at a rectangle packing algorithm here, but it seems to waste space, which I don't want. Does an algorithm to solve this problem exist?
Backtracking doesn't seem to be a good solution in this case, as only the sub-rectangles' areas are specified, not their dimensions - or is it?
Example 1:
Example 2:
The integral of a function is the area bounded by the limits, the curve of the function, and the x-axis. Define one side of the rectangle as the x-axis, then find the boundaries for the others. There are plenty of numerical integration libraries around in the language of your choice.
EDIT: some difficulties in trying to illustrate in words...
Assuming, at least, that the containing rectangle has an area larger than the sum of the areas of the sub-regions; and there is no requirement of a certain order of containment:
Contain the largest sub-region first with edges on the axes.
Pick the next smaller sub-region.
Create the function (integral) to calculate the free area as seen from each axis.
With windows/limits equal to the length on the sub-region's sides (facing the axes), slide these windows along the axes away from the origin.
Create the function for finding the free space bounded by the outside arms of the cross formed by the windows as they slide along the axes. Efficiency in the use of space is found in the region where free space is minimal (differentiation).
Rotate the sub-region by 90 degrees and repeat from step 3.
Place the sub-region in the orientation and location where most efficient.
Repeat from step 2. Stop when the sliding windows report negative free space for the entire domain (allocated space overlaps the placeholder made by the windows).
In theory, this will systematically try to squeeze in sub-regions. Sketch and pseudocode to follow if time permits.
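In lieu of the promised pseudocode, here is a much-simplified Python sketch of the sliding-window integrals from steps 3-4; tracking occupancy as a boolean NumPy grid is my assumption, and this only computes the per-axis free-area profiles, not the full cross test:

    import numpy as np

    def free_area_profiles(occupied, w, h):
        """For a candidate w x h sub-region, integrate the free area seen
        inside a w-wide window sliding along the x-axis and an h-tall
        window sliding along the y-axis. `occupied` is a boolean grid."""
        free = ~occupied
        col_free = free.sum(axis=0)              # free cells per column
        row_free = free.sum(axis=1)              # free cells per row
        # cumulative sums make each window integral an O(1) lookup
        col_cum = np.concatenate(([0], np.cumsum(col_free)))
        row_cum = np.concatenate(([0], np.cumsum(row_free)))
        x_profile = col_cum[w:] - col_cum[:-w]   # free area per w-wide band
        y_profile = row_cum[h:] - row_cum[:-h]   # free area per h-tall band
        return x_profile, y_profile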

Efficient approximation of rotation

I am trying to write an algorithm that rotates one square around its centre in 2D until it matches, or is "close enough" to, a target square that started in the same position, is the same size, and has the same centre; that much is fairly easy.
However the corners of the square need to match up, thus to get a match the top right corner of the square to rotate must be close enough to what was originally the top right corner of the rotated square.
I am trying to make this as efficient as possible, so if the closeness of the two squares based on the above criteria gets worse I know I need to try and rotate back in the opposite direction.
I have already written the methods to rotate the squares and to test how close they are to one another.
My main problem is how I should change the amount to rotate on each iteration based on how close I get.
E.g. if the current measurement is closer than the previous one, halve the angle and keep the same direction; otherwise double the angle and rotate in the opposite direction?
However, I suspect this is quite a poor solution in terms of efficiency.
Any ideas would be much appreciated.
How about this scheme:
Rotate by 0, 90, 180 and 270 degrees (note that there are more efficient algorithms for these special rotations than for a generic rotation); compare each of them to find the quadrant you need to be searching. In other words, try to find the two axes with the highest match.
Then do a binary search: for example, when you have determined that your rotated square is in the 90-180 quadrant, partition the search area into two octants, 90-135 and 135-180. Rotate by 90+45/2 and 180-45/2 and compare. If the 90+45/2 rotation has a higher match value than the 180-45/2 one, continue searching in the 90-135 octant, otherwise continue in the 135-180 octant. Lather, rinse, repeat.
Each time in the recursion, you do this:
partition the search space into two orthants (if the search space is from A to B, the first orthant is A to (A + B) / 2 and the second orthant is (A + B) / 2 to B)
check the left orthant: rotate by A + (B - A) / 4. Compare.
check the right orthant: rotate by B - (B - A) / 4. Compare.
Adjust the search space to either the left or the right orthant, based on which one has the higher match value.
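A small Python sketch of that recursion; match(angle) is an assumed scoring function (rotate by angle degrees, return similarity), and the match value is assumed to be unimodal over the chosen quadrant:

    def binary_angle_search(match, lo, hi, tolerance=0.1):
        """Seed lo and hi with the best quadrant from the 0/90/180/270
        comparison, then repeatedly keep the better-scoring orthant."""
        while hi - lo > tolerance:
            quarter = (hi - lo) / 4
            left = match(lo + quarter)      # midpoint of the left orthant
            right = match(hi - quarter)     # midpoint of the right orthant
            if left >= right:
                hi = (lo + hi) / 2          # keep the left orthant
            else:
                lo = (lo + hi) / 2          # keep the right orthant
        return (lo + hi) / 2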
Another scheme I can think of is, instead of trying to rotate and search, to try to locate the "corners" of the rotated image.
If your image does not contain any transparency, then there are four points located sqrt(width^2+height^2)/2 away from the center whose colors are exactly the same as those of the corners of the unrotated image. This will limit the number of rotations you will need to search.
...also, to build upon the other suggestions here, remember that for any rectangle you rotate around its centre, you only need to calculate the rotation of a single corner. You can infer the other three corners by adding or subtracting the same offset that you calculated to get the first corner. This should speed up your calculations a bit (assuming, though I doubt it, that this is a bottleneck here).
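For the square in this question the single-corner trick looks something like the following sketch: one real rotation, then 90-degree swaps of the same offset (a square-only shortcut; a general rectangle needs a second rotated offset for the other diagonal):

    import math

    def square_corners(cx, cy, half_side, angle_deg):
        a = math.radians(angle_deg)
        ox = half_side * (math.cos(a) - math.sin(a))  # rotated (s, s) offset, x
        oy = half_side * (math.sin(a) + math.cos(a))  # rotated (s, s) offset, y
        return [
            (cx + ox, cy + oy),   # the only corner actually rotated
            (cx - oy, cy + ox),   # +90 degrees: (x, y) -> (-y, x)
            (cx - ox, cy - oy),   # +180 degrees: negate
            (cx + oy, cy - ox),   # +270 degrees
        ]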

Efficient algorithm to find a point not touched by a set of rectangles

Input: a set of rectangles within the area (0, 0) to (1600, 1200).
Output: a point which none of the rectangles contains.
What's an efficient algorithm for this? The only two I can currently think of are:
Create a 1600x1200 array of booleans. Iterate through the area of each rectangle, marking those bits as True. Iterate at the end and find a False bit. Problem is that it wastes memory and can be slow.
Iterate randomly through points. For each point, iterate through the rectangles and see if any of them contain the point. Return the first point that none of the rectangles contain. Problem is that it is really slow for densely populated problem instances.
Why am I doing this? It's not for homework or for a programming competition, although I think that a more complicated version of this question was asked at one (each rectangle had a 'color', and you had to output the color of a few points they gave you). I'm just trying to programmatically disable the second monitor on Windows, and I'm running into problems with a more sane approach. So my goal is to find an unoccupied spot on the desktop, then simulate a right-click, then simulate all the clicks necessary to disable it from the display properties window.
For each rectangle, create a list of runs along the horizontal direction. For example a rectangle of 100x50 will generate 50 runs of 100. Write these with their left-most X coordinate and Y coordinate to a list or map.
Sort the list, Y first then X.
Go through the list. Overlapping runs should be adjacent, so you can merge them.
When you find the first run that doesn't stretch across the whole screen, you're done.
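A rough Python rendition of that plan; it keeps the runs per scanline rather than in one flat sorted list, and assumes half-open (x1, y1, x2, y2) rectangles:

    from collections import defaultdict

    def find_uncovered_point(rects, width=1600, height=1200):
        runs = defaultdict(list)                  # y -> list of (x_start, x_end)
        for x1, y1, x2, y2 in rects:
            for y in range(max(y1, 0), min(y2, height)):
                runs[y].append((max(x1, 0), min(x2, width)))
        for y in range(height):
            x = 0                                 # frontier of merged coverage
            for start, end in sorted(runs[y]):
                if start > x:
                    return (x, y)                 # gap before this run
                x = max(x, end)
            if x < width:
                return (x, y)                     # row doesn't span the screen
        return None                               # fully covered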
I would allocate an image with my favorite graphics library, and let it do rectangle drawing.
You can try a low-res version first (scale down by a factor of 8); that will work if there is at least a 15x15 free area. If it fails, you can try high res.
Use Windows HRGNs (Region in .NET). They were kind of invented for this. But that's not language-agnostic, of course.
Finally, you can do rectangle subtraction. The only problem is that you can get up to 4 rectangles each time you subtract one rect from another. If there are lots of small ones, this can get out of hand.
P.S.: Consider optimizing for maximized windows. Then you can tell there are no pixels visible without hit testing.
Sort all X-coordinates (start and ends of rectangles), plus 0 & 1600, remove duplicates. Denote this Xi (0 <= i <= n).
Sort all Y-coordinates (start and ends of rectangles), plus 0 & 1200, remove duplicates. Denote this Yj (0 <= j <= m).
Make an n * m grid with the Xi and Yj from the previous steps; this should be much smaller than the original 1600x1200 one (unless you have a thousand rectangles, in which case this idea doesn't apply). Each cell in this grid maps to a rectangle in the original 1600x1200 image.
Paint the rectangles into this grid: find each rectangle's coordinates in the sets from the first steps and paint it in the grid. Each rectangle will be of the form (Xi1, Yj1, Xi2, Yj2), so you paint in the small grid all points (x, y) such that i1 <= x < i2 && j1 <= y < j2.
Find the first unpainted cell in the grid and take any point from it, its center for example.
Note: rectangles are assumed to be of the form (x1, y1, x2, y2), representing all points (x, y) such that x1 <= x < x2 && y1 <= y < y2.
Note 2: the sets of Xi & Yj may be stored in a sorted array or tree for O(log n) access, if the number of rectangles is big.
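A compact Python sketch of those four steps, using the half-open convention from the note; taking the cell's centre with integer division is my own choice:

    import bisect

    def free_point_compressed(rects, width=1600, height=1200):
        xs = sorted({0, width}  | {v for r in rects for v in (r[0], r[2])})
        ys = sorted({0, height} | {v for r in rects for v in (r[1], r[3])})
        # painted[j][i] stands for the cell [xs[i], xs[i+1]) x [ys[j], ys[j+1])
        painted = [[False] * (len(xs) - 1) for _ in range(len(ys) - 1)]
        for x1, y1, x2, y2 in rects:
            i1, i2 = bisect.bisect_left(xs, x1), bisect.bisect_left(xs, x2)
            j1, j2 = bisect.bisect_left(ys, y1), bisect.bisect_left(ys, y2)
            for j in range(j1, j2):
                for i in range(i1, i2):
                    painted[j][i] = True
        for j, row in enumerate(painted):
            for i, covered in enumerate(row):
                if not covered:               # any point of the cell will do
                    return ((xs[i] + xs[i + 1]) // 2, (ys[j] + ys[j + 1]) // 2)
        return None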
If you know the minimum x and y dimensions of the rectangles, you can use the first approach (a 2D array of booleans) using fewer pixels.
Take into account that 1600x1200 is less than 2M pixels. Is that really so much memory? If you use a bit vector, you only need 235 KB.
Your first idea is not so bad... you should just change the representation of the data.
You may be interested in a sparse array of booleans.
A language-dependent solution is to use the Area class (Java).
If I had to do this myself, I'd probably go for the 2d array of booleans (particularly downscaled as jdv suggests, or using accelerated graphics routines) or the random point approach.
If you really wanted to do a more clever approach, though, you can just consider rectangles. Start with a rectangle with corners (0,0),(1600,1200) = (lx,ly),(rx,ry) and "subtract" the first window (wx1,wy1)(wx2,wy2).
This can generate at most 4 new "still available" rectangles if it is completely contained within the original free rectangle (i.e., all 4 corners of the new window are inside the old one): they are (lx,ly)-(rx,wy1), (lx,wy1)-(wx1,wy2), (wx2,wy1)-(rx,wy2), and (lx,wy2)-(rx,ry). If just a corner of the window overlaps (only 1 corner is inside the free rectangle), it breaks it into two new rectangles; if a side (2 corners) juts in, it breaks it into 3; and if there's no overlap, nothing changes. (Since everything is axis-aligned, you can't have exactly 3 corners inside.)
So then keep looping through the windows, testing for intersection and sub-dividing rectangles, until you have a list (if any) of all remaining free space in terms of rectangles.
This is probably going to be slower than any of the graphics-library powered approaches above, but it'd be more fun to write :)
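Here is what the subdivision might look like in Python; the four strips follow the decomposition given above, and clipping the window first handles the corner and side overlap cases uniformly:

    def subtract(free, win):
        """Subtract window `win` from free rectangle `free`, both given as
        (lx, ly, rx, ry); returns up to four remaining free rectangles."""
        lx, ly, rx, ry = free
        wx1, wy1, wx2, wy2 = win
        wx1, wy1 = max(wx1, lx), max(wy1, ly)   # clip the window to `free`
        wx2, wy2 = min(wx2, rx), min(wy2, ry)
        if wx1 >= wx2 or wy1 >= wy2:
            return [free]                       # no overlap: nothing changes
        pieces = [
            (lx, ly, rx, wy1),                  # strip before wy1
            (lx, wy1, wx1, wy2),                # strip left of the window
            (wx2, wy1, rx, wy2),                # strip right of the window
            (lx, wy2, rx, ry),                  # strip after wy2
        ]
        return [p for p in pieces if p[0] < p[2] and p[1] < p[3]]

    def free_space(windows, width=1600, height=1200):
        free = [(0, 0, width, height)]
        for win in windows:
            free = [piece for rect in free for piece in subtract(rect, win)]
        return free          # any point inside any remaining piece is uncovered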
Keep a list of rectangles that represent uncovered space. Initialize it to the entire area.
For each of the given rectangles
For each rectangle in uncovered space
If they intersect, divide the uncovered space into smaller rectangles around the covering rectangle, and add the smaller rectangles (if any) to your list of uncovered ones.
If your list of uncovered space still has any entries, they contain all points not covered by the given rectangles.
This doesn't depend on the number of pixels in your area, so it will work for large (or infinite) resolution. Each new rectangle in the uncovered list will have corners at unique intersections of pairs of other rectangles, so there will be at most O(n^2) of them in the list, giving a total runtime of O(n^3). You can make it more efficient by keeping your list of uncovered rectangles in a better structure to check each covering rectangle against.
This is a simple solution with only 1600+1200 space complexity; it is similar in concept to creating a 1600x1200 matrix, but without using the whole matrix:
Start with two boolean arrays W[1600] and H[1200] set to true.
Then for each visible window rectangle with coordinate ranges w1..w2 and h1..h2, mark W[w1..w2] and H[h1..h2] to false.
To check if a point with coordinates (w, h) falls in an empty space just check that
(W[w] && H[h]) == true
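A sketch of the scheme, with one caveat worth stating: the test is sufficient but not necessary, since a free point that shares a column with one window and a row with another will fail it.

    def make_masks(rects, width=1600, height=1200):
        W = [True] * width
        H = [True] * height
        for x1, y1, x2, y2 in rects:        # half-open window rectangles
            for w in range(x1, x2):
                W[w] = False
            for h in range(y1, y2):
                H[h] = False
        return W, H

    # (W[w] and H[h]) == True guarantees the point (w, h) is uncovered.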
