I have many game objects with line renderers attached to them. They are roughly rectangular in shape. How do I go about snapping these rectangles together at their edges when the objects are dragged close to each other?
I have referred to this question, but it doesn't explain how to snap at specific positions.
Here is a sample image of the objects I want to latch.
There are several ways to do this. The first is to calculate the position of the second shape and, when it comes close to the first along the x or y axis, set its start position to the first shape's end position. The second is to add 2D colliders near the first object and move the second shape into place when a trigger fires. I strongly recommend the first way.
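Here is a minimal sketch of the first approach. Unity scripts would be written in C#, but the vector math is identical; the `snapDistance` value and the list of candidate edge points are assumptions for illustration (Swift):

```swift
import CoreGraphics

// A sketch of the "check distance, then snap" idea. In Unity this would be
// C# with Vector2, but the math is the same. `snapDistance` and the list of
// candidate edge points are hypothetical.
let snapDistance: CGFloat = 0.5  // how close two edges must be before latching

func snapped(_ dragged: CGPoint, toEdgePoints targets: [CGPoint]) -> CGPoint {
    for t in targets {
        // If the dragged corner is within snapDistance of a stationary
        // corner, latch exactly onto it.
        if hypot(dragged.x - t.x, dragged.y - t.y) < snapDistance {
            return t
        }
    }
    return dragged  // nothing close enough; leave the point where it is
}
```

You would run this check in the drag handler for each corner of the dragged rectangle, against the corners of every nearby rectangle.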
As the title suggests, my problem lies in the representation of a sphere's surface in computer memory. For simplicity, let's say we are making a chess game where the board is on a sphere. If the board were a classic flat board, the solution would be simple: use a 2D table.
But I don't know what kind of memory structure I should choose for a sphere. Namely, what I want from this representation is:
If I move a pawn stubbornly in one direction, I should eventually return to the point where I started;
during such a "journey" I should cross the point directly on the other side of the sphere (I mean to avoid the common "error" in 2D games where moving past an edge of the board teleports an object to the opposite edge, making the board a torus, not a real sphere);
the area of any one board cell should be approximately equal to that of any other cell;
a cell should have associated longitude-latitude coordinates (I write "associated" because I only want the representation to provide some way of obtaining these coordinates from a cell's position, not to be, e.g., a table with lat-long indexes).
There's no simple geometric solution to this. The crux of the problem is this: say you have n columns at the equator, you're currently near the north pole, and you're heading north. Then the combination of the direction and the column number in the top row (and the second-from-top row) must be able to uniquely identify which of the n positions at the equator that path is going to cross. Therefore, the direction cannot be an integer unless you have n columns in the top (or second-from-top) row. Notice that if the polygons have more than three sides, then they must share common edges (and triangles won't work for other reasons). So now you have a grid; but if you have more than three rows (i.e., a cube or another regular prism), then moving sideways along the second-from-top row will never take you to the southern hemisphere.
The best bet might be to create a regular polyhedron, keep the position and direction as floating-point vectors, calculate the actual position whenever a piece moves, and then figure out which polygon it lands in. (Note that with this method a piece could move to a non-adjacent polygon, and you might have issues if it lands exactly on an edge or vertex, etc.)
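A minimal sketch of that idea, assuming a unit sphere and a hypothetical faceCenters array holding one unit vector per board cell (Swift, using simd):

```swift
import simd

// Keep the piece's position as a unit vector and its heading as a tangent
// vector; a move is a rotation of both about their mutual perpendicular.
func step(position: inout SIMD3<Float>,
          heading: inout SIMD3<Float>,
          by angle: Float,
          faceCenters: [SIMD3<Float>]) -> Int {
    let axis = simd_normalize(simd_cross(position, heading))
    let rotation = simd_quatf(angle: angle, axis: axis)
    position = simd_normalize(rotation.act(position))
    heading = simd_normalize(rotation.act(heading))

    // The cell landed in is the face whose center is nearest the new
    // position (largest dot product). Assumes faceCenters is non-empty.
    var best = 0
    for i in 1..<faceCenters.count
        where simd_dot(faceCenters[i], position) > simd_dot(faceCenters[best], position) {
        best = i
    }
    return best
}
```

Note that this finds the containing cell only approximately (nearest face center); an exact point-in-polygon test would be needed to handle the edge and vertex cases mentioned above.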
I'm currently dealing with OpenGL ES (2, iOS 6)… and I have a question
i. Suppose there is a mesh that has to be drawn. Moreover,
ii. I can ask for a rotation/translation so that the point of view changes.
So,
how can I know (in real time) the position of any vertex that is displayed?
Thank you in advance.
jgapc
It's not entirely clear what you are after, but if you want to know where your object is after a series of rotations and translations, then one very easy option, provided you perform these transformations in your program code rather than in the shader, is to take the entire last row or column of your transformation matrix (depending on whether you use row-major or column-major matrices). That row or column is the final translation of your object's center, as a coordinate vector.
This last row or column is the same thing as multiplying your final transformation matrix by your object's local center, the vector (0, 0, 0, 1).
If you want to know where an object's vertex is, rather than the object's center, then multiply that vertex in local coordinate space by the final transformation matrix, and you will get the new coordinate where that vertex is positioned.
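As a minimal sketch (Swift with simd, column-major as in OpenGL; the matrix and vertex values are placeholders):

```swift
import simd

let model = matrix_identity_float4x4  // stand-in for your final transform

// 1. The last column is where the object's local origin ends up, i.e. it
//    equals model * (0, 0, 0, 1).
let center = model.columns.3

// 2. Any other vertex in local coordinates: multiply it by the same matrix.
let localVertex = SIMD4<Float>(1, 2, 3, 1)  // hypothetical vertex
let worldVertex = model * localVertex
```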
There are two things I'd like to point out:
Back-face culling discards triangles, not vertices.
Triangles are also clipped so that they're within the viewing frustum.
I'm curious as to why you care about what is not displayed?
I want to know how I can draw transparent dots or lines over a CGPath or NSBezierPath.
Here are more details about the problem.
I have a solid line, say of width 30, drawn using NSBezierPath or CGPath. Now I want to draw transparent dots over it, or transparent lines (of thickness 2, or something else smaller than 30).
You can enumerate the elements of an NSBezierPath or CGPath, and do something for each one.
For NSBezierPath, use elementCount, elementAtIndex:associatedPoints:, and a for loop. elementAtIndex:associatedPoints: requires a C array that can hold up to three NSPoints.
For CGPath, use CGPathApply. This takes a pointer to a C function that you have written. One of the two arguments to the function is a structure that contains the same information returned by elementAtIndex:associatedPoints:, except that it will create the array of points for you.
The element types are mostly the same between them:
A moveto or lineto carries one point.
You might wonder why a lineto doesn't have two points. The point associated with the element is the destination point (the "to" in lineto), which becomes the new current point immediately afterward. The other point, the one you're coming from, is implicit; in cases where you want to use it, you will simply have to remember the last current point.
A (cubic) curveto uses all three points.
As with lineto, the source point is implicit, being simply the last current point. The last point in the array is the destination anchor point; the two other points are the control points.
Core Graphics has quadratic curveto elements, which only have two points.
A cubic curveto has two control points and one anchor point; a quadratic one has only one control point and one anchor point.
NSBezierPath does not have quadratic curveto elements. All curveto elements in an NSBezierPath are cubic.
A closepath has no points. It returns to the point of the last moveto.
Either way, for each element, draw whatever anchor-point indicator you want. You could, for example, draw a blue circle at the destination point, and not draw anything for a closepath (since you already drew that when you encountered the matching moveto). For curveto elements, you might also want to draw an indicator for each of the two control points.
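A minimal sketch of the NSBezierPath side in Swift; the dot(at:radius:) helper is hypothetical, standing in for whatever indicator you want to draw:

```swift
import AppKit

// Walk the path's elements and mark each destination (anchor) point.
func drawAnchorDots(on path: NSBezierPath) {
    var points = [NSPoint](repeating: .zero, count: 3)  // up to 3 per element
    for i in 0..<path.elementCount {
        switch path.element(at: i, associatedPoints: &points) {
        case .moveTo, .lineTo:
            dot(at: points[0], radius: 2)  // the one associated point
        case .curveTo:
            dot(at: points[2], radius: 2)  // destination anchor; points
                                           // 0 and 1 are the control points
        case .closePath:
            break  // no points; the matching moveTo was already marked
        default:
            break
        }
    }
}

// Hypothetical indicator: a small filled circle centered at p.
func dot(at p: NSPoint, radius r: CGFloat) {
    NSBezierPath(ovalIn: NSRect(x: p.x - r, y: p.y - r,
                                width: 2 * r, height: 2 * r)).fill()
}
```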
Use -bezierPathByFlatteningPath.
Once you have a flattened copy of the receiver, compute its length.
Then, iterate through the flattened copy, which is basically an array of points. Keep track of the distance between successive points, so that you can see where you are exactly on the curve.
For example, if you want to draw multiple copies of an object, you have to find on which segment of the flattened copy the object will reside. Once you have found the segment, linear interpolate between the two ends of that segment to find the exact spot.
If this is what you want to achieve, I can elaborate a little and post the category I wrote that does this.
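In the meantime, here is a minimal sketch of that idea in Swift, assuming a target arc length measured from the start of the path:

```swift
import AppKit

// Flatten the path, then walk its segments to find the point `target`
// units along the curve. Returns nil if the path is shorter than that.
func point(atDistance target: CGFloat, along path: NSBezierPath) -> NSPoint? {
    let flat = path.flattened  // -bezierPathByFlatteningPath
    var points = [NSPoint](repeating: .zero, count: 3)
    var previous = NSPoint.zero
    var travelled: CGFloat = 0

    for i in 0..<flat.elementCount {
        switch flat.element(at: i, associatedPoints: &points) {
        case .moveTo:
            previous = points[0]
        case .lineTo:
            let p = points[0]
            let segment = hypot(p.x - previous.x, p.y - previous.y)
            if travelled + segment >= target, segment > 0 {
                // Linearly interpolate within this segment.
                let f = (target - travelled) / segment
                return NSPoint(x: previous.x + f * (p.x - previous.x),
                               y: previous.y + f * (p.y - previous.y))
            }
            travelled += segment
            previous = p
        default:
            break  // a flattened path contains no curveto elements
        }
    }
    return nil
}
```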
This is about how to create an irregular sphere in OpenGL. I've searched the web, but all the documents explain how to create a regular sphere.
The effect I need is to simulate a bubble: when the user touches the bubble, it should react to the touch, and the sphere should change its shape at the touch position, say by denting inward where it was touched.
I can't figure out a feasible way to do this kind of simulation. Should I change the vertex positions of the touched part, or can I use a shader to implement this effect?
At the same time, I don't know how to simulate the concavity realistically. Is there any mathematical procedure that describes such a process?
Thanks!
First, you'll want to use a geodesic-style sphere rather than one created via latitude/longitude vertices. It will deform more predictably.
From there, there are several ways to do it. One way would be to create a graph where each node indexes a vertex in your mesh and contains links to its neighbors. Then, when a vertex is pressed, it can "pull" its neighbors in with it. A cheap way would be to simply relocate the pressed vertex and then pull its neighbors toward the new position, maintaining the original distance between them (very simple vector math). Then repeat for those neighbors' neighbors, until the distance each vertex is pulled falls below a sufficiently small threshold.
Once complete, the mesh will likely have to be re-uploaded to the GPU.
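A minimal sketch of that propagation, assuming simd types, a rest (undeformed) copy of the positions, and a hypothetical neighbors table built from the mesh's triangle list:

```swift
import simd

// Move the pressed vertex, then pull each ring of neighbors toward it so
// the original separations are restored, stopping once the motion is tiny.
func pull(vertices: inout [SIMD3<Float>],
          rest: [SIMD3<Float>],            // undeformed positions
          neighbors: [[Int]],              // adjacency per vertex index
          pressed: Int,
          to target: SIMD3<Float>,
          threshold: Float = 1e-4) {
    vertices[pressed] = target
    var queue = [pressed]
    var visited: Set<Int> = [pressed]

    while !queue.isEmpty {
        let i = queue.removeFirst()
        for j in neighbors[i] where !visited.contains(j) {
            visited.insert(j)
            // Pull j toward i so their original separation is restored.
            let restLength = simd_length(rest[j] - rest[i])
            let direction = simd_normalize(vertices[j] - vertices[i])
            let pulled = vertices[i] + direction * restLength
            let moved = simd_length(pulled - vertices[j])
            vertices[j] = pulled
            if moved > threshold {
                queue.append(j)  // keep propagating outward
            }
        }
    }
    // Re-upload the vertex buffer to the GPU after this.
}
```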
When I morph an object, I just use an animation from the start vertices to the end vertices. The animation can have about 200 frames or so. I'm not sure how to calculate the steps from the start vertex to the end vertex; maybe there is some trigonometric function? In your example, I would create a sphere with the dent and use it as the target frame. I'm not sure how a shader can help you here.
If I draw three rectangles to the surface and "listen" for the onTouch event in a 2D ortho world, how can I identify which rectangle was touched?
If the rectangles don't overlap, you could keep track of the four corner points of each rectangle in a rectangle class, and keep a list of those objects to compare against later. Then, when the onTouch event is called, take the xy position of the finger and compare it with the bounds of each rectangle to see whether it is contained in one.
If it is within the bounds of one, then you know that it is selected. If they overlap, you just have to decide which is in front. You can also keep track of a draw order in the rectangle objects in case more than one rectangle occupies the same space; then you would choose the one whose order is closest with respect to the screen.
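A minimal sketch of that containment-plus-order test (Swift; the TouchableRect type and its order field are assumptions):

```swift
import CoreGraphics

// Hypothetical record: axis-aligned bounds plus a draw order, where a
// higher order means closer to the viewer.
struct TouchableRect {
    let bounds: CGRect
    let order: Int
}

// Returns the frontmost rectangle containing the touch point, if any.
func hitTest(_ point: CGPoint, in rects: [TouchableRect]) -> TouchableRect? {
    rects.filter { $0.bounds.contains(point) }
         .max { $0.order < $1.order }
}
```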