If I draw three rectangles to the surface, and "listen" to the onTouch event in a 2d Ortho world, then how can I identify which was the rectangle that was clicked?
If the rectangles don't overlap, you could keep track of each rectangle's corner points in a rectangle class object, and keep a list of those objects to compare against later. When the onTouch event fires, take the x/y position of the finger and compare it with the bounds of each rectangle to see whether it is contained in one.
If it is within the bounds of one, you know that rectangle is selected. If they overlap, you just have to decide which one is in front. You can also keep track of a draw order in the rectangle objects in case more than one rectangle occupies the same space; then you would just choose the one that is topmost with respect to the screen.
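For concreteness, here's a minimal sketch of that idea, assuming you track each rectangle's bounds and draw order yourself (the Rect and HitTester classes are hypothetical, not an existing API):

```java
import java.util.ArrayList;
import java.util.List;

class Rect {
    float x, y, width, height; // top-left corner plus size
    int z;                     // larger z = drawn later = on top

    Rect(float x, float y, float width, float height, int z) {
        this.x = x; this.y = y; this.width = width; this.height = height; this.z = z;
    }

    boolean contains(float px, float py) {
        return px >= x && px <= x + width && py >= y && py <= y + height;
    }
}

class HitTester {
    List<Rect> rects = new ArrayList<>();

    // Call this from your onTouch handler with the touch position already
    // converted into the same 2D ortho world space as the rectangles.
    Rect pick(float touchX, float touchY) {
        Rect best = null;
        for (Rect r : rects) {
            if (r.contains(touchX, touchY) && (best == null || r.z > best.z)) {
                best = r; // keep the topmost hit
            }
        }
        return best; // null if the touch missed all rectangles
    }
}
```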
As the title suggests, my problem lies in representing a sphere's surface in computer memory. For simplicity, let's say we are making a chess game where the board is on a sphere. If the board were a classic flat board, the solution would be simple: use a 2D table.
But I don't know what kind of memory structure I should choose for a sphere. Namely, what I want from this representation is:
if I keep moving a pawn in one direction, I should eventually return to the point where I started,
during such a "journey" I should pass through the point directly on the other side of the sphere (I want to avoid the common "error" in 2D games where moving past an edge of the board moves an object to the opposite edge, which makes the board a torus, not a real sphere),
the area of one board cell should be approximately equal to that of any other cell,
a cell should have associated longitude-latitude coordinates (I wrote "associated" because I only want the representation to provide some way to obtain these coordinates from a cell's position, not to be e.g. a table indexed by latitude and longitude).
There's no simple geometric solution to this. The crux of the problem: say you have n columns at the equator, you're currently near the north pole, and you're heading north. Then the combination of the direction and the column number in the top row (and the second-from-top row) must uniquely identify which of the n positions at the equator that path will cross. Therefore, the direction cannot be an integer unless you have n columns in the top (or second-from-top) row. Notice that if the polygons have more than three sides, they must have common edges (and triangles won't work, for other reasons). So now you have a grid, but if you have more than three rows (i.e. a cube, or another regular prism), then moving sideways on the second-to-top row will never navigate you to the southern hemisphere.
Your best bet might be to create a regular polyhedron, keep the position and direction as floating-point vectors, calculate the actual position when you move, and figure out which polygon you land in (note that with this method you could move to non-adjacent polygons, and you might have issues if you land exactly on an edge or vertex, etc.).
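A minimal sketch of the floating-point part of that idea: keep the position as a unit vector and the heading as a unit tangent vector, and rotate both along the great circle when the pawn moves. Mapping the resulting point back to a board cell (a polyhedron face) is a separate lookup, and the names here are made up:

```java
class SphereWalker {
    // Unit position on the sphere and unit direction tangent to it.
    double[] pos = {0, 0, 1};  // start at the "north pole"
    double[] dir = {1, 0, 0};  // current heading

    // Move 'angle' radians along the great circle defined by pos and dir.
    void step(double angle) {
        double c = Math.cos(angle), s = Math.sin(angle);
        double[] newPos = new double[3];
        double[] newDir = new double[3];
        for (int i = 0; i < 3; i++) {
            newPos[i] = pos[i] * c + dir[i] * s;
            newDir[i] = dir[i] * c - pos[i] * s; // stays tangent to the sphere
        }
        pos = newPos;
        dir = newDir;
    }

    // Latitude/longitude in radians, derived from the position on demand.
    double latitude()  { return Math.asin(pos[2]); }
    double longitude() { return Math.atan2(pos[1], pos[0]); }
}
```

Note that stepping by pi passes through the antipodal point and stepping by 2*pi returns to the start, which covers the first two requirements in the question.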
I have many game objects with line renderers attached to them. They are roughly in the shape of rectangles. How do I go about snapping these rectangles together along their edges when the objects are dragged and brought close to each other?
I have referred to this question, but it doesn't explain how to snap at specific positions.
Here is a sample image of the objects I want to latch.
There are many ways to do this task. The simple way: calculate the position of the second shape and, when it comes close to the first along the x or y axis, just set its start position to the first shape's end position. The second way: add 2D colliders near the first object and, when one triggers, move the second shape into position. I would strongly recommend the first way.
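A minimal sketch of the first way, assuming axis-aligned rectangles described by a top-left position and a size (all names here are hypothetical):

```java
class RectShape {
    float x, y, width, height; // top-left corner plus size
}

class Snapper {
    static final float SNAP_DISTANCE = 10f; // world units; tune to taste

    // Snap the dragged rectangle's left edge onto the anchor's right edge;
    // the other three edge pairs work the same way.
    static void snapLeftToRight(RectShape dragged, RectShape anchor) {
        float gap = Math.abs(dragged.x - (anchor.x + anchor.width));
        if (gap < SNAP_DISTANCE) {
            dragged.x = anchor.x + anchor.width; // edges now touch exactly
            dragged.y = anchor.y;                // optionally align vertically too
        }
    }
}
```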
I'm currently dealing with OpenGL ES (2, iOS 6)… and I have a question
i. Let there be a mesh that has to be drawn. Moreover,
ii. I can ask for a rotation/translation so that the point of view changes.
So,
how can I know (in real time) the position of any vertex that is displayed?
Thank you in advance.
jgapc
It's not entirely clear what you are after, but if you want to know where your object is after a series of rotations and translations, one very easy option, if you perform these transformations in your program code instead of in the shader, is to take the last row or column of your transformation matrix (depending on whether you treat vectors as rows or as columns), which is the final translation of your object's center as a coordinate vector.
This last row or column is the same thing as multiplying your final transformation matrix by your object's local coordinate center vector, which is (0,0,0,1).
If you want to know where an object's vertex is, rather than the object's center, then multiply that vertex in local coordinate space by the final transformation matrix, and you will get the new coordinate where that vertex is positioned.
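As a sketch, here are both computations in code, assuming 4x4 column-major matrices in the usual OpenGL convention:

```java
class Transforms {
    // m is column-major: m[column * 4 + row]; the vertex is (x, y, z, 1).
    static float[] transformPoint(float[] m, float x, float y, float z) {
        float[] out = new float[4];
        for (int row = 0; row < 4; row++) {
            out[row] = m[0 * 4 + row] * x
                     + m[1 * 4 + row] * y
                     + m[2 * 4 + row] * z
                     + m[3 * 4 + row];      // w = 1
        }
        return out;
    }

    // Multiplying by the local centre (0,0,0,1) just picks out the last
    // column, i.e. the translation part of the matrix.
    static float[] centre(float[] m) {
        return new float[] { m[12], m[13], m[14], m[15] };
    }
}
```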
There are two things I'd like to point out:
Back-face culling discards triangles, not vertices.
Triangles are also clipped so that they're within the viewing frustum.
I'm curious as to why you care about what is not displayed?
I want to create a shader to outline 2D geometry. I'm using OpenGL ES 2.0. I don't want to use a convolution filter, as the outline is not dependent on the texture, and it is too slow (I tried rendering the textured geometry to another texture and then drawing that with the convolution shader). I've also tried doing two passes, the first being single-colored, over-scaled geometry to represent an outline, with normal drawing on top, but this results in varying thicknesses and misaligned outlines. I've looked into how silhouettes in cel-shading are done, but they are all calculated using normals and lights, which I don't use at all.
I'm using Box2D for physics and have "destructible" objects with multiple fixtures. At any point an object can be broken down (fixtures deleted), and I want the outline to follow the new outer contour.
I'm doing the drawing with a vertex buffer that matches the vertices of the fixtures, preset texture coordinates, and indices to draw triangles. When a fixture is removed, its associated indices in the index buffer are set to 0, so no triangles are drawn there anymore.
The following image shows what this looks like for one object when it is fully intact.
The red points are the vertex positions (texturing isn't shown), the black lines are the fixtures, and the blue lines show how the triangles are divided. The gray outline is what I would like the outline to look like in any case.
This image shows the same object with a few fixtures removed.
Is it possible to do this in a vertex shader (or in combination with other simple methods)? Any help would be appreciated.
Thanks :)
Assuming you're able to do something about those awkward points that are slightly inset from the corners (e.g., if you numbered the points in English-reading order, with the first being '1', point 6 would be one)...
If a point is interior then if you list all the polygon edges connected to it in clockwise order, each pair of edges in sequence will have a polygon in common. If any two edges don't have a polygon in common then it's an exterior point.
Starting from any exterior point you can then get the whole outline by first walking in any direction and subsequently along any edge that connects to an exterior point you haven't visited yet (or, alternatively, that isn't the edge you walked along just now).
Starting from an existing outline and removing some parts, you can obviously start from either exterior point that used to connect to another but no longer does and just walk from there until you get to the other.
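Here is a minimal CPU-side sketch of the exterior-edge detection that drives all of this, using only the index buffer (the names are made up): an edge referenced by exactly one drawn triangle lies on the outline.

```java
import java.util.HashMap;
import java.util.Map;

class OutlineFinder {
    // indices: 3 entries per triangle; zeroed-out (degenerate) triangles
    // from removed fixtures are skipped automatically.
    static Map<Long, Integer> countEdges(int[] indices) {
        Map<Long, Integer> edgeUse = new HashMap<>();
        for (int t = 0; t + 2 < indices.length; t += 3) {
            int a = indices[t], b = indices[t + 1], c = indices[t + 2];
            if (a == b || b == c || a == c) continue; // removed fixture
            addEdge(edgeUse, a, b);
            addEdge(edgeUse, b, c);
            addEdge(edgeUse, c, a);
        }
        return edgeUse; // entries with count == 1 are outline edges
    }

    // Order-independent key so (a,b) and (b,a) count as the same edge.
    static void addEdge(Map<Long, Integer> edgeUse, int a, int b) {
        long key = a < b ? ((long) a << 32) | b : ((long) b << 32) | a;
        edgeUse.merge(key, 1, Integer::sum);
    }
}
```

The walks described above then just chain together the edges whose use count is 1.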
You can't handle this stuff in a shader under ES because you don't get connectivity information.
I think the best you could do in a shader is to expand the geometry by pushing vertices outward along their surface normals. Supposing that your data structure is a list of rectangles, each described by, say, a centre, a width and a height, you could achieve the same thing by drawing each with the same centre but with a small amount added to the width and height.
To be completely general you'd need to store normals at vertices, but also to update them as geometry is removed. So there'd be some pushing of new information from the CPU but it'd be relatively limited.
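As a sketch of the rectangle variant (the RectPiece and Renderer types are stand-ins for your actual data and draw call): draw each piece slightly inflated in the outline colour first, then the real geometry on top.

```java
class RectPiece {
    float cx, cy, w, h; // centre plus size
}

// Hypothetical renderer interface; substitute your own draw call.
interface Renderer {
    int OUTLINE = 0, NORMAL = 1;
    void fillRect(float cx, float cy, float w, float h, int style);
}

class OutlinePass {
    static final float THICKNESS = 2f; // outline thickness in world units

    static void drawWithOutline(Renderer r, RectPiece[] pieces) {
        for (RectPiece p : pieces) { // pass 1: fattened silhouettes
            r.fillRect(p.cx, p.cy, p.w + 2 * THICKNESS, p.h + 2 * THICKNESS,
                       Renderer.OUTLINE);
        }
        for (RectPiece p : pieces) { // pass 2: the real geometry on top
            r.fillRect(p.cx, p.cy, p.w, p.h, Renderer.NORMAL);
        }
    }
}
```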
I have a board, a canvas with several shapes drawn on it; some are triangles, circles, or rectangles, but each is contained inside its own bounding rectangle.
"The circle will be inside a rectangle"
I put two circles A and B on the board, where A is over B and they share some overlapping area.
If I click inside A's container box but outside the actual circle shape of A, I shouldn't select circle A; however, that click could still prevent me from selecting B, since A's container overlaps B's and sits above it.
In an event-based framework, the event goes from the child to the parent, not to siblings, I guess.
So my choice was to check every shape container that covers the clicked point, ordered by z-index, and then for each container check whether the shape inside it actually contains the point.
It does not seem super efficient, but are there any other ways?
+--------+
|   +----+---+
|   |    |   |
+---+----+   |
    +--------+
You're handling it about as well as it can be handled - windowing systems generally obey Z order (layers).
This will be better in the long run anyway, especially if you want to be able to select multiple items by drawing a selection box around them.
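For reference, a minimal sketch of the two-phase, z-ordered test the question describes (the class names are hypothetical; the bounding box is the cheap first phase, the exact shape test the second):

```java
import java.util.Comparator;
import java.util.List;

abstract class Shape {
    float left, top, right, bottom; // bounding box of the container
    int z;                          // higher z = on top

    boolean boxContains(float x, float y) {
        return x >= left && x <= right && y >= top && y <= bottom;
    }

    abstract boolean shapeContains(float x, float y); // exact test
}

class Circle extends Shape {
    float cx, cy, radius;

    @Override
    boolean shapeContains(float x, float y) {
        float dx = x - cx, dy = y - cy;
        return dx * dx + dy * dy <= radius * radius; // inside the circle itself
    }
}

class Picker {
    static Shape pick(List<Shape> shapes, float x, float y) {
        return shapes.stream()
                .filter(s -> s.boxContains(x, y))       // phase 1: container box
                .filter(s -> s.shapeContains(x, y))     // phase 2: actual shape
                .max(Comparator.comparingInt(s -> s.z)) // topmost wins
                .orElse(null);
    }
}
```

A click inside A's box but outside its circle fails phase 2 for A and falls through to B, which is exactly the behaviour the question asks for.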
There are algorithms for finding whether rectangles overlap by projecting them onto both the x and y axes as 1D intervals. You can do the same thing, then compare your point to see which objects it overlaps:
Algorithm to detect intersection of two rectangles?
Just treat your point selection (or rectangle selection if you draw a bounding box to select multiple items) as another rectangle to be compared as overlapping to the others.
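A minimal sketch of that reduction: rectangle overlap is just interval overlap on each axis, and a point is a zero-size rectangle:

```java
class Overlap {
    static boolean intervals(float minA, float maxA, float minB, float maxB) {
        return minA <= maxB && minB <= maxA;
    }

    // Rectangles given as {left, top, right, bottom}.
    static boolean rects(float[] a, float[] b) {
        return intervals(a[0], a[2], b[0], b[2])  // x axis
            && intervals(a[1], a[3], b[1], b[3]); // y axis
    }

    static boolean pointInRect(float x, float y, float[] r) {
        return rects(new float[] { x, y, x, y }, r); // degenerate rectangle
    }
}
```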
-Adam
There are tricks you can play if you really need speed. For example:
If you are working with a deep palette you can use the low-order bits of the color to tag the objects. Then peeking at the pixel gives you the object, or at least lets you quickly and drastically cull the list (see the sketch after this list).
Even at low bit depth, if the objects are all monochrome you can use the whole color.
If you're in low enough resolution you can keep an array that says what object owns what pixel.
At higher res you can do the same but use RLE to keep the size down (also look into quad trees)
And so forth...
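Here's a minimal sketch of the colour-tagging trick using an offscreen ID buffer (the ShapeLike interface is hypothetical; an OpenGL version would render to an offscreen framebuffer and read the pixel back with glReadPixels instead):

```java
import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

interface ShapeLike {
    void fill(Graphics2D g); // draw the object's silhouette
}

class ColorPicker {
    BufferedImage idBuffer;

    void render(int width, int height, ShapeLike[] objects) {
        idBuffer = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
        Graphics2D g = idBuffer.createGraphics();
        for (int i = 0; i < objects.length; i++) {
            g.setColor(new Color(i + 1)); // index packed into RGB; 0 = background
            objects[i].fill(g);           // same geometry as the visible render
        }
        g.dispose();
    }

    // Returns the index of the object under the pixel, or -1 for background.
    int pick(int x, int y) {
        return (idBuffer.getRGB(x, y) & 0xFFFFFF) - 1;
    }
}
```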
If it's just a simple implementation you're after, one quick trick is to record the X & Y, repaint the screen, and note which object paints that pixel.
-- MarkusQ