For Loop - Circle instead of Linear

I have an array of pixel data taken from a Texture2D (512 x 512) within Unity.
Currently, I'm adding a border around this texture by using a for loop and setting the outer pixels to Color.black.
But some Texture2Ds are actually circular (in terms of the image/photo itself). And I would like to add a circular border around these images.
The typical for-loops (r++ & c++) give linear values (from bottom left to top right), which won't work for a circle, right? Is there some kind of algorithm I can use to make the for-loop circular (in terms of the r and c values)?
Does this make any sense?
Thanks.

Funny question :)
I can't think of a solution that makes the for-loop itself 'circular', because x and y depend on each other, so a simple x++ y++ is not possible.
A simple solution is to calculate x and y inside the loop, using x = r*cos(t); y = r*sin(t); and iterating over t.
I made a fiddle that draws a circle on an HTML canvas using a for-loop :D
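Here's that parametric idea sketched in Unity C#. A minimal sketch only, not tested against your setup; the texture, centre and radius names are placeholders, and it assumes the circle fits inside the texture.

using UnityEngine;

public static class CircleBorder
{
    // Draws a 1px-thick circular border by stepping an angle t and
    // plotting (cx + r*cos t, cy + r*sin t), as described above.
    public static void Draw(Texture2D tex, Vector2 center, float radius, Color color)
    {
        // One sample per pixel of circumference is enough to avoid gaps.
        int steps = Mathf.CeilToInt(2f * Mathf.PI * radius);
        for (int i = 0; i < steps; i++)
        {
            float t = (i / (float)steps) * 2f * Mathf.PI;
            int x = Mathf.RoundToInt(center.x + radius * Mathf.Cos(t));
            int y = Mathf.RoundToInt(center.y + radius * Mathf.Sin(t));
            tex.SetPixel(x, y, color);
        }
        tex.Apply(); // push the changed pixels back to the GPU
    }
}

For a 512 x 512 texture, something like CircleBorder.Draw(tex, new Vector2(256, 256), 250f, Color.black) would draw the border; for a thicker ring, repeat the call over a small range of radii.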

Depending on what you want the end result to look like, you might be better off using a shader to do a form of edge detection (it does depend on what your texture is, though).
Another, possibly easier solution is just to layer a second texture on top of your original where the second is just the border with the rest alpha'd out.

Related

Plumb bob camera distortion in Three.js

I'm trying to replicate a real-world camera within Three.js, where I have the camera's distortion specified as parameters for a "plumb bob" model. In particular I have P, K, R and D as specified here:
If I understand everything correctly, I want to set up Three to do the translation from "X" on the right to the "input image" in the top left. One approach I'm considering is making a Three.js Camera of my own, setting the projectionMatrix to ... something involving K and D. Does this make sense? What exactly would I set it to, considering K and D are specified as one-dimensional arrays of length 9 and 5 respectively? I'm a bit lost as to how to combine all the numbers :(
I notice in this answer that complicated things are necessary to render straight lines as curved, the way they would be with certain camera distortions (like a fisheye lens). I do not need that for my purposes if it is more complicated; simply rendering each vertex in the correct spot is sufficient.
This document shows the step by step derivation of the camera matrix (Matlab reference).
See: http://www.vision.caltech.edu/bouguetj/calib_doc/htmls/parameters.html
So, yes: you can calculate the matrix using this procedure and use it to map a real-world 3D point to the 2D output image.
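For reference, if your K (length 9, a row-major 3x3 intrinsic matrix) and D follow the usual ROS/OpenCV "plumb_bob" convention with D = [k1, k2, p1, p2, k3] (worth verifying against your calibration source), that document's model boils down to: apply R and the translation to get camera coordinates, normalize to x = X/Z, y = Y/Z, let r^2 = x^2 + y^2, and then

% Plumb bob ("Brown-Conrady") distortion; the ordering
% D = [k1, k2, p1, p2, k3] is the ROS/OpenCV convention -- verify it.
\begin{aligned}
x' &= x\,(1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + 2 p_1 x y + p_2 (r^2 + 2x^2) \\
y' &= y\,(1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + p_1 (r^2 + 2y^2) + 2 p_2 x y \\
\begin{pmatrix} u \\ v \\ 1 \end{pmatrix} &= K \begin{pmatrix} x' \\ y' \\ 1 \end{pmatrix}
\end{aligned}

Note the distortion step is nonlinear in x and y, which is exactly why it cannot be folded into a single projectionMatrix; a matrix can only express the K part.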

OpenGL 3.0 ES - Disjoint read and write from same texture

I'm in a situation where I have a (n+1)x(n+1) texture attached to a framebuffer.
My idea is to update this texture in 2 passes.
First pass: draw a full-screen quad and use the scissor test to mask out the outermost 1-pixel border, so I write only to the n x n 'inner' part of the texture.
Second pass: draw line primitives to write to the outermost pixels, updating the same texture and using the result of pass 1 as input. The computation in this step depends on the state of the inner (n x n) grid computed in the first pass.
In the second pass I would bind the result of the first pass for both reading and writing. I know that according to OpenGL this yields undefined behaviour, but since I am never reading and writing the same texels simultaneously, I think it could work.
Is it a good idea to do it like this, or is it perhaps better to draw a full-screen quad and do a check like this in my GLSL shader:
if (gl_FragCoord.x == 0.5 || gl_FragCoord.x == n + 0.5){
...
}
As you say, OpenGL says it's undefined behaviour, so it won't work. Your best bet for things like this is to ping-pong between two render targets.
"Since I am never reading and writing simultaneously from the same texels, I think it could work."
Unfortunately this doesn't help. Actually, there are extensions common on mobile (particularly Mali and PowerVR GPUs) that allow you to read and write simultaneously, but only if it's the same texels. Google 'Pixel Local Storage' for details.
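For illustration, here is the ping-pong pattern sketched in Unity C# (matching the C#/Unity context elsewhere in this thread); in raw OpenGL ES it is the same idea with two FBO-attached textures you alternate between. The material names are placeholders.

using UnityEngine;

public class PingPongExample : MonoBehaviour
{
    public Material passOneMat; // writes the inner n x n region
    public Material passTwoMat; // writes the border, sampling pass 1's output
    RenderTexture texA, texB;

    void Start()
    {
        texA = new RenderTexture(512, 512, 0);
        texB = new RenderTexture(512, 512, 0);
    }

    // Call once per simulation step.
    void Step()
    {
        // Each pass reads one target and writes the other, so no
        // texture is ever bound for reading and writing at once.
        Graphics.Blit(texA, texB, passOneMat);
        Graphics.Blit(texB, texA, passTwoMat);
        // texA now holds the fully updated grid for the next Step().
    }
}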

Field of view/ convexity map

Given a shape in a logical (binary) image, I am trying to extract the field of view from any point inside the shape in Matlab:
I tried testing each line going through the point, but it is really, really slow. (I hope to do this for every point of the shape, or at least every point of its contour, which is still quite a few points.)
I think a faster method would be to work iteratively by expanding a disk from the considered point, but I am not sure how to do it.
How can I find this field of view in an efficient way?
Any ideas or solution would be appreciated, thanks.
Here is a possible approach (the principle behind the function I wrote, available on Matlab Central):
I created this test image and an arbitrary point of view:
testscene=zeros(500);
testscene(80:120,80:120)=1;
testscene(200:250,400:450)=1;
testscene(380:450,200:270)=1;
viewpoint=[250, 300];
imsize=size(testscene); % checks the size of the image
It looks like this (the circle marks the view point I chose):
The next line computes the longest distance to the edge of the image from the viewpoint:
maxdist=max([norm(viewpoint), norm(viewpoint-[1 imsize(2)]), norm(viewpoint-[imsize(1) 1]), norm(viewpoint-imsize)]);
angles=1:360; % use smaller increment to increase resolution
Then generate a set of points uniformly distributed around the viewpoint:
endpoints=bsxfun(@plus, maxdist*[cosd(angles)' sind(angles)'], viewpoint);
intersec=zeros(numel(angles),2); % preallocate the intersection list
for k=1:numel(angles)
[CX,CY,C] = improfile(testscene,[viewpoint(1), endpoints(k,1)],[viewpoint(2), endpoints(k,2)]);
idx=find(C); % nonzero samples: an obstacle, or samples past the image edge
intersec(k,:)=[CX(idx(1)), CY(idx(1))]; % keep the first hit along the ray
end
What this does is draw a line from the viewpoint in each direction specified in the array angles and look for the position of the first intersection with an obstacle or with the edge of the image.
This should help visualizing the process:
Finally, let's use the built-in roipoly function to create a binary mask from a set of coordinates:
FieldofView = roipoly(testscene,intersec(:,1),intersec(:,2));
Here is what it looks like (obstacles in white, visible field in gray, viewpoint in red):

Raytracing (LoS) on 3D hex-like tile maps

Greetings,
I'm working on a game project that uses a 3D variant of hexagonal tile maps. Tiles are actually cubes, not hexes, but are laid out just like hexes (because a square can be extruded into a cube to go from 2D to 3D, but there is no 3D version of a hex). Rather than a verbose description, here is an example of a 4x4x4 map:
(I have highlighted an arbitrary tile (green) and its adjacent tiles (yellow) to help describe how the whole thing is supposed to work; but the adjacency functions are not the issue, that's already solved.)
I have a struct type to represent tiles, and maps are represented as a 3D array of tiles (wrapped in a Map class to add some utility methods, but that's not very relevant).
Each tile is supposed to represent a perfectly cubic space, and they are all exactly the same size. Also, the offset between adjacent "rows" is exactly half the size of a tile.
That's enough context; my question is:
Given the coordinates of two points A and B, how can I generate a list of the tiles (or, rather, their coordinates) that a straight line between A and B would cross?
That would later be used for a variety of purposes, such as determining Line-of-sight, charge path legality, and so on.
BTW, this may be useful: my maps use (0,0,0) as a reference position. The 'jagging' of the map can be defined as offsetting each tile ((y+z) mod 2) * tileSize/2.0 to the right from the position it would have in a "sane" cartesian system. For non-jagged rows that yields 0; for rows where (y+z) mod 2 is 1, it yields half a tile.
I'm working in C#4 targeting the .NET Framework 4.0, but I don't really need specific code, just an algorithm to solve this weird geometric/mathematical problem. I have been trying for several days to no avail, and trying to draw the whole thing on paper to "visualize" it didn't help either :( .
Thanks in advance for any answer
Until one of the clever SOers turns up, here's my dumb solution. I'll explain it in 2D 'cos that makes it easier to explain, but it will generalise to 3D easily enough. I think any attempt to try to work this entirely in cell index space is doomed to failure (though I'll admit it's just what I think and I look forward to being proved wrong).
So you need to define a function to map from cartesian coordinates to cell indices. This is straightforward, if a little tricky. First, decide whether point(0,0) is the bottom left corner of cell(0,0) or the centre, or some other point. Since it makes the explanations easier, I'll go with bottom-left corner. Observe that any point(x,floor(y)==0) maps to cell(floor(x),0). Indeed, any point(x,even(floor(y))) maps to cell(floor(x),floor(y)).
Here, I invent the boolean function even, which returns True if its argument is an even integer. I'll use odd next: any point(x, odd(floor(y))) maps to cell(floor(x-0.5), floor(y)).
Now you have the basics of the recipe for determining lines-of-sight.
You will also need a function to map from cell(m,n) back to a point in cartesian space. That should be straightforward once you have decided where the origin lies.
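As a sketch under those assumptions (point (0,0) at the bottom-left corner of cell (0,0), odd rows shifted right by half a cell, non-negative coordinates, and modern C# tuples for brevity):

using System;

public static class JaggedGrid
{
    // Cartesian point -> cell indices, per the mapping described above.
    public static (int col, int row) PointToCell(double x, double y)
    {
        int row = (int)Math.Floor(y);
        double shift = (row % 2 != 0) ? 0.5 : 0.0; // odd rows are offset
        int col = (int)Math.Floor(x - shift);
        return (col, row);
    }

    // Cell indices -> the cartesian bottom-left corner of that cell.
    public static (double x, double y) CellToPoint(int col, int row)
    {
        double shift = (row % 2 != 0) ? 0.5 : 0.0;
        return (col + shift, row);
    }
}

The 3D version applies the same half-cell shift when (y + z) is odd, as the question's own offset formula suggests.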
Now, unless I've misplaced some brackets, I think you are on your way. You'll need to:
decide where in cell(0,0) you position point(0,0); and adjust the function accordingly;
decide where points along the cell boundaries fall; and
generalise this into 3 dimensions.
Depending on the size of the playing field you could store the cartesian coordinates of the cell boundaries in a lookup table (or other data structure), which would probably speed things up.
Perhaps you can avoid all the complex math if you look at your problem another way:
I see that you only shift your blocks (alternating) along the first axis by half the block size. If you split your blocks along this axis, the above example becomes (with the shifts) a 9x4x4 simple cartesian coordinate system with regularly stacked blocks. Now the raytracing becomes much simpler and less error-prone.
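To make that concrete: once the map is viewed as a regular cartesian grid, "which cells does the segment A-B cross" is the classic voxel traversal problem (Amanatides & Woo). A minimal, unoptimised sketch, assuming unit-sized cells and ignoring floating-point edge cases such as the segment passing exactly through cell corners:

using System;
using System.Collections.Generic;

public static class GridRaycast
{
    // Yields every cell a segment from a to b passes through.
    public static IEnumerable<(int x, int y, int z)> CellsOnSegment(
        double ax, double ay, double az, double bx, double by, double bz)
    {
        double dx = bx - ax, dy = by - ay, dz = bz - az;
        int x = (int)Math.Floor(ax), y = (int)Math.Floor(ay), z = (int)Math.Floor(az);
        int endX = (int)Math.Floor(bx), endY = (int)Math.Floor(by), endZ = (int)Math.Floor(bz);
        int stepX = Math.Sign(dx), stepY = Math.Sign(dy), stepZ = Math.Sign(dz);

        // tMax*: parameter t at which the segment crosses the next cell
        // boundary on each axis; tDelta*: t needed to cross a whole cell.
        double tMaxX = Boundary(ax, dx), tMaxY = Boundary(ay, dy), tMaxZ = Boundary(az, dz);
        double tDeltaX = dx != 0 ? Math.Abs(1 / dx) : double.PositiveInfinity;
        double tDeltaY = dy != 0 ? Math.Abs(1 / dy) : double.PositiveInfinity;
        double tDeltaZ = dz != 0 ? Math.Abs(1 / dz) : double.PositiveInfinity;

        yield return (x, y, z);
        while (x != endX || y != endY || z != endZ)
        {
            // Step along whichever axis crosses its boundary first.
            if (tMaxX <= tMaxY && tMaxX <= tMaxZ) { x += stepX; tMaxX += tDeltaX; }
            else if (tMaxY <= tMaxZ) { y += stepY; tMaxY += tDeltaY; }
            else { z += stepZ; tMaxZ += tDeltaZ; }
            yield return (x, y, z);
        }
    }

    // t at which a coordinate starting at p with velocity d first
    // crosses an integer grid boundary.
    static double Boundary(double p, double d)
    {
        if (d > 0) return (Math.Floor(p) + 1 - p) / d;
        if (d < 0) return (p - Math.Floor(p)) / -d;
        return double.PositiveInfinity;
    }
}

Convert A and B into the reshaped grid's coordinates first (splitting each block in two along the shifted axis), run the traversal, then map the split cells back to the original tiles.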

Liquify filter/iwarp

I'm trying to build something like the Liquify filter in Photoshop. I've been reading through image distortion code but I'm struggling with finding out what will create similar effects. The closest reference I could find was the iWarp filter in Gimp but the code for that isn't commented at all.
I've also looked at places like ImageMagick, but they don't have anything in this area.
Any pointers or a description of algorithms would be greatly appreciated.
Excuse me if I make this sound a little simplistic; I'm not sure how much you know about gfx programming or even what techniques you're using (I'd do it with HLSL myself).
The way I would approach this problem is to generate a texture which contains offsets of x/y coordinates in the r/g channels. Then the output colour of a pixel would be:
Texture inputImage
Texture distortionMap
colour(x,y) = inputImage(x + distortionMap(x, y).R, y + distortionMap(x, y).G)
(To tell the truth this isn't quite right: using the colours as offsets directly means you can only represent positive vectors. It's simple enough to subtract 0.5 so that you can represent negative vectors.)
Now the only problem that remains is how to generate this distortion map, which is a different question altogether (any image would generate a distortion of some kind, obviously, working on a proper liquify effect is quite complex and I'll leave it to someone more qualified).
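To make the lookup concrete, here is the same idea CPU-side in C# with Unity's API (the answer would do it in an HLSL pixel shader, but the math is identical). The strength parameter is mine, added for illustration:

using UnityEngine;

public static class DistortionLookup
{
    // Samples 'input' at uv displaced by the offsets stored in the
    // distortion map's R/G channels, with the 0.5 bias removed so the
    // map can encode negative vectors.
    public static Color Sample(Texture2D input, Texture2D distortionMap,
                               float u, float v, float strength)
    {
        Color d = distortionMap.GetPixelBilinear(u, v);
        float du = (d.r - 0.5f) * strength;
        float dv = (d.g - 0.5f) * strength;
        return input.GetPixelBilinear(u + du, v + dv);
    }
}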
I think Liquify works by altering a grid.
Imagine each pixel is defined by its location on the grid.
Now, when the user clicks on a location and moves the mouse, he's changing the grid location.
The new grid is then projected back into the 2D viewable space of the user.
Check this tutorial about a way to implement the liquify filter with JavaScript. Basically, in the tutorial, the effect is done by transforming the pixel Cartesian coordinates (x, y) to polar coordinates (r, α) and then applying Math.sqrt to r.
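For reference, that transform boils down to something like the following sketch (normalising r by a brush radius is my assumption, so that Math.Sqrt keeps values in [0, 1] and pixels get pulled toward the centre):

using System;

public static class PolarWarp
{
    // Maps a pixel position through the r -> sqrt(r) polar warp
    // around centre (cx, cy), affecting only points within 'radius'.
    public static (double x, double y) Apply(double x, double y,
                                             double cx, double cy, double radius)
    {
        double dx = x - cx, dy = y - cy;
        double r = Math.Sqrt(dx * dx + dy * dy) / radius; // normalised radius
        if (r >= 1) return (x, y); // outside the brush: untouched
        double a = Math.Atan2(dy, dx);
        double r2 = Math.Sqrt(r) * radius; // the remap: r -> sqrt(r)
        return (cx + r2 * Math.Cos(a), cy + r2 * Math.Sin(a));
    }
}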
