Morph an OpenGL sphere - opengl-es

This is about how to create an irregular OpenGL sphere. I've searched the web, but all the documents I've found describe how to create a regular sphere.
The effect I need is to simulate a bubble: when the user touches the bubble, it should react to the touch, and the sphere should change its shape at the touch position, say by denting the touched part inward.
I can't figure out a feasible way to do this kind of simulation. Should I change the vertex positions of the touched part, or can I use a shader to implement this effect?
At the same time, I don't know how I can simulate the concavity realistically. Is there a mathematical procedure that describes such a process?
Thanks!

First, you'll want to use a geodesic-style sphere rather than one created via lat/long vertices. That will deform more predictably.
From there, there are several ways to do it. One way I can think of would be to create a graph where each node indexes into a vertex in your mesh, and each node contains links to its neighbors. Then, when a vertex is pressed, it can "pull" its neighbors in with it. A cheap way would be to simply relocate the pressed vertex and then pull its neighbors toward the new position, maintaining the original distance (very simple vector math). Then repeat for those neighbors until the distance each neighbor is pulled falls below a sufficiently small threshold.
Once complete, the mesh will likely have to be re-uploaded to the GPU.
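A minimal sketch of that neighbor-pulling pass, assuming vertices are stored as {x, y, z} objects and that the neighbor lists and original edge lengths (restLengths) were precomputed when the sphere was built; every name here is a placeholder, not something from the answer:

// Re-place v at its original distance from anchor, along their current direction.
function pullToward(anchor, v, restLength) {
  const dx = v.x - anchor.x, dy = v.y - anchor.y, dz = v.z - anchor.z;
  const len = Math.sqrt(dx * dx + dy * dy + dz * dz) || 1;
  const s = restLength / len;
  return { x: anchor.x + dx * s, y: anchor.y + dy * s, z: anchor.z + dz * s };
}

// Breadth-first "pull": move the pressed vertex, then drag each ring of neighbors
// toward it, stopping once a vertex barely moves.
function deform(vertices, neighbors, restLengths, pressed, newPos, threshold) {
  vertices[pressed] = newPos;
  let frontier = [pressed];
  const visited = new Set(frontier);
  while (frontier.length) {
    const next = [];
    for (const i of frontier) {
      for (const j of neighbors[i]) {
        if (visited.has(j)) continue;
        visited.add(j);
        const target = pullToward(vertices[i], vertices[j], restLengths[i][j]);
        const moved = Math.hypot(target.x - vertices[j].x,
                                 target.y - vertices[j].y,
                                 target.z - vertices[j].z);
        if (moved > threshold) {   // keep propagating only while the pull is noticeable
          vertices[j] = target;
          next.push(j);
        }
      }
    }
    frontier = next;
  }
}

After the loop finishes, the modified vertex array is what gets re-uploaded to the GPU.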

When I morph an object I just use an animation from the start vertex to the end vertex. The animation can have about 200 frames or so. I'm not sure how I can calculate the steps from the start vertex to the end vertex. Maybe there is some trigonometric function? In your example I would create a sphere with the button and use it as a target frame. I'm not sure how a shader can help you here.
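For the steps themselves you don't need trigonometry; a plain linear interpolation per frame already gives the straight-line motion (a small sketch with placeholder names):

// Position of one vertex at frame f of an animation lasting frameCount frames.
function morphStep(startVertex, endVertex, f, frameCount) {
  const t = f / (frameCount - 1);      // 0 at the first frame, 1 at the last
  return {
    x: startVertex.x + (endVertex.x - startVertex.x) * t,
    y: startVertex.y + (endVertex.y - startVertex.y) * t,
    z: startVertex.z + (endVertex.z - startVertex.z) * t
  };
}

A trigonometric function would only come in if you want easing, e.g. replacing t with 0.5 - 0.5 * Math.cos(Math.PI * t) so the motion starts and stops smoothly.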

Related

Perform 2D vertices rotation in update function or render function

I'm starting to build a 2D game and have some confusion around whether I should perform rotation in the update function or the render function.
The problem is this:
I have a triangle, consisting of three vertices. The triangle has a rotation value in degrees.
If I rotate the vertices in the update function, the triangle rotates forever because each update applies the rotation over and over.
Therefore, I decided to not rotate the vertices in the update function and instead perform the rotation, based on the original vertices, in the render function.
This works, however now I have a different problem. The vertices are not actually where they appear to be. Therefore I can not use the vertices to perform collision detection, etc.
The only idea that I have to resolve this is that I could perform the rotation in the update function but keep two sets of vertices: one for the original vertices, one for the rotated vertices. Then I'd use the rotated vertices in collision detection calculations. This smells hacky and inefficient, though!
I've put together a codepen demonstrating applying rotation in the render function: https://codepen.io/anon/pen/pPRjLq
Use arrow keys to rotate
So, should I rotate in render or update? If render then how do I keep vertices up to date? If update then how do I prevent infinite rotations?
Any help from experienced people would be greatly appreciated - thanks!
Have the rotation occur in the update method, since you are updating the rotation of the actor.
You do not need to remember two sets of vertices either; it is better to keep just the actor's current vertices and its rotation in radians. The original vertices do not need to be remembered.
Thinking ahead, collision detection should not depend on the actor's position vectors; better still, define a rectangle/AABB for it instead, which is updated/scaled when rotation is applied to the actor.
If you did use a custom AABB for detection it could default to the actor's position, but separating the constructs opens up a lot of potential, such as changing the actor's hit bounds x/y/width/height at run-time; it should not be coupled with the actor's x/y/width/height, which relate to its graphical representation.
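A minimal sketch of keeping the current vertices in sync each update (the {x, y} vertex format and the helper name are mine, not from the answer):

// Return the actor's vertices rotated by deltaAngle (radians) about its centre,
// so the stored vertices always match what is rendered and can be used for collisions.
function rotateVertices(vertices, center, deltaAngle) {
  const cos = Math.cos(deltaAngle);
  const sin = Math.sin(deltaAngle);
  return vertices.map(v => ({
    x: center.x + (v.x - center.x) * cos - (v.y - center.y) * sin,
    y: center.y + (v.x - center.x) * sin + (v.y - center.y) * cos
  }));
}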
To stop the infinite rotation you could keep a boolean stating whether the left key is down, and similarly one for the right key, and do something like the following:
update(delta) {
  if (keyLeftDown) {
    // apply the rotation scaled by the frame time so the speed is frame-rate independent
    this.rotation -= rotationSpeed * delta / desiredFPS;
  } else if (keyRightDown) {
    this.rotation += rotationSpeed * delta / desiredFPS;
  }
}
Hope this helps.

How can I efficiently implement raycasting through a 2D mesh?

I'm implementing a nav mesh pathfinding system and I need to be able to raycast between two points in the mesh and get a list of all edges crossed by the ray. Obviously I'll need to be able to test for individual line intersections, but I expect there to be an efficient way of picking which lines actually need to be checked rather than brute force iterating over every edge in the entire mesh. Does anyone know how I might go about that?
If your mesh is a rectangular grid, consider the efficient method of Amanatides and Woo from the paper "A Fast Voxel Traversal Algorithm for Ray Tracing".
Implementation example
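For reference, a compact 2D version of that traversal, assuming unit-sized grid cells (this sketch is illustrative, not taken from the linked implementation):

// Return every grid cell the segment from start to end passes through,
// stepping one cell boundary at a time (Amanatides & Woo style).
function traverseGrid(start, end) {
  let x = Math.floor(start.x), y = Math.floor(start.y);
  const endX = Math.floor(end.x), endY = Math.floor(end.y);
  const dx = end.x - start.x, dy = end.y - start.y;
  const stepX = Math.sign(dx), stepY = Math.sign(dy);
  // Parametric distance along the ray to the next x / y cell boundary.
  let tMaxX = dx !== 0 ? ((stepX > 0 ? x + 1 : x) - start.x) / dx : Infinity;
  let tMaxY = dy !== 0 ? ((stepY > 0 ? y + 1 : y) - start.y) / dy : Infinity;
  // Distance along the ray between successive boundaries of the same kind.
  const tDeltaX = dx !== 0 ? Math.abs(1 / dx) : Infinity;
  const tDeltaY = dy !== 0 ? Math.abs(1 / dy) : Infinity;
  const cells = [{ x, y }];
  while (x !== endX || y !== endY) {
    if (tMaxX < tMaxY) { tMaxX += tDeltaX; x += stepX; }
    else               { tMaxY += tDeltaY; y += stepY; }
    cells.push({ x, y });
  }
  return cells;   // only the edges stored in these cells need an intersection test
}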

WebGL Change Shape Animation

I'm creating a 3D globe with a map on it which is supposed to unravel and fill the screen after a few seconds.
I've managed to create the globe using three.js and WebGL, but I'm having trouble finding any information about animating a shape change. Can anyone provide any help? Is it even possible?
(Abstract Algorithm's and Kevin Reid's answers are good, and only one thing is missing: some actual Three.js code.)
You basically need to calculate where each point of the original sphere will be mapped to after it flattens out into a plane. This data is an attribute of the shader: a piece of data attached to each vertex that differs from vertex to vertex of the geometry. Then, to animate the transition from the original position to the end position, in your animation loop you will need to update the amount of time that has passed. This data is a uniform of the shader: a piece of data that remains constant for all vertices during each frame of the animation, but may change from one frame to the next. Finally, there exists a convenient function called "mix" that will linearly interpolate between the original position and the end/goal position of each vertex.
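As a rough illustration of that attribute/uniform/mix combination (names like targetPosition and uProgress are mine, and this assumes a reasonably recent three.js with BufferGeometry):

// The per-vertex end position is supplied as a custom attribute, e.g.
// geometry.setAttribute('targetPosition', new THREE.BufferAttribute(flatPositions, 3));
const material = new THREE.ShaderMaterial({
  uniforms: { uProgress: { value: 0.0 } },   // 0 = sphere, 1 = flattened
  vertexShader: `
    attribute vec3 targetPosition;
    uniform float uProgress;
    void main() {
      vec3 p = mix(position, targetPosition, uProgress);  // linear blend per vertex
      gl_Position = projectionMatrix * modelViewMatrix * vec4(p, 1.0);
    }`,
  fragmentShader: `void main() { gl_FragColor = vec4(1.0); }`
});
// In the animation loop, drive the uniform with elapsed time:
// material.uniforms.uProgress.value = Math.min(elapsed / duration, 1.0);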
I've written two examples for you: the first just "flattens" a sphere, sending the point (x,y,z) to the point (x,0,z).
http://stemkoski.github.io/Three.js/Shader-Attributes.html
The second example follows Abstract Algorithm's suggestion in the comments: "unwrapping the sphere's vertices back on plane surface, like inverse sphere UV mapping." In this example, we can easily calculate the ending position from the UV coordinates, and so we actually don't need attributes in this case.
http://stemkoski.github.io/Three.js/Sphere-Unwrapping.html
Hope this helps!
In 3D, anything and everything is possible. ;)
Your sphere geometry has its own vertices, and basically you just need to animate their positions so that after the animation they are all sitting on one planar surface.
Try creating a sphere and a plane geometry with the same number of vertices, and animating the sphere's vertices with values interpolated between the sphere's and the plane's original positions. That way you would have the sphere shape at the start and the plane shape at the end.
Hope this helps; tell me if you need more direction on how to do it.
myGlobe.geometry.vertices[index].position = something_calculated;
// myGlobe is an instance of THREE.Mesh, and something_calculated would be a THREE.Vector3 instance that you can calculate in some manner (sphere-plane interpolation over time)
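In code, that per-vertex interpolation could look roughly like this (sphereVerts and planeVerts hold the two original shapes; this assumes a three.js version where geometry.vertices[i] is a THREE.Vector3):

// Blend every vertex between the stored sphere and plane positions as t goes 0 -> 1.
function morphGlobe(geometry, sphereVerts, planeVerts, t) {
  for (let i = 0; i < geometry.vertices.length; i++) {
    geometry.vertices[i].lerpVectors(sphereVerts[i], planeVerts[i], t);
  }
  geometry.verticesNeedUpdate = true;   // tell three.js to re-upload the positions
}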
(Abstract Algorithm's answer is good, but I think one thing needs improvement: namely using vertex shaders.)
You make a set of vertices textured with the map image. Then, design a calculation for interpolating between the sphere shape and the flat shape. It doesn't have to be linear interpolation — for example, one way that might be good is to put the map on a small portion of a sphere of increasing radius until it looks flat (getting it all the way will be tricky).
Then, write that calculation in your vertex shader. The position of each vertex can be computed entirely from the texture coordinates (since that determines where-on-the-map the vertex goes and implies its position) and a uniform variable containing a time value.
Using the vertex shader will be much more efficient than recomputing and re-uploading the coordinates using JavaScript, allowing perfectly smooth animation with plenty of spare resources to do other things as well.
Unfortunately, I'm not familiar enough with Three.js to describe how to do this in detail, but all of the above is straightforward in basic WebGL and should be possible in any decent framework.
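Sketching the idea in raw GLSL (wrapped in a JS string; aUv, uTime, uMvp and the mapping below are my choices, and a straight mix is used even though, as noted above, a fancier interpolation path may look better):

const vertexShaderSource = `
  attribute vec2 aUv;          // texture coordinate, also used to derive the position
  uniform float uTime;         // 0 = sphere, 1 = flat map
  uniform mat4 uMvp;
  const float PI = 3.141592653589793;
  void main() {
    float lon = (aUv.x - 0.5) * 2.0 * PI;                  // longitude from u
    float lat = (aUv.y - 0.5) * PI;                        // latitude from v
    vec3 spherePos = vec3(cos(lat) * cos(lon), sin(lat), cos(lat) * sin(lon));
    vec3 planePos  = vec3((aUv.x - 0.5) * 2.0, (aUv.y - 0.5) * 2.0, 0.0);
    gl_Position = uMvp * vec4(mix(spherePos, planePos, uTime), 1.0);
  }
`;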

2D geometry outline shader

I want to create a shader to outline 2D geometry. I'm using OpenGL ES 2.0. I don't want to use a convolution filter, as the outline is not dependent on the texture, and it is too slow (I tried rendering the textured geometry to another texture, and then drawing that with the convolution shader). I've also tried doing two passes, the first being single-colored overscaled geometry to represent an outline, and then normal drawing on top, but this results in different thicknesses or unaligned outlines. I've looked into how silhouettes in cel-shading are done, but they are all calculated using normals and lights, which I don't use at all.
I'm using Box2D for physics, and have "destructible" objects with multiple fixtures. At any point an object can be broken down (fixtures deleted), and I want the outline to follow the new outer contour.
I'm doing the drawing with a vertex buffer that matches the vertices of the fixtures, preset texture coordinates, and indices to draw triangles. When a fixture is removed, its associated indices in the index buffer are set to 0, so no triangles are drawn there anymore.
The following image shows what this looks like for one object when it is fully intact.
The red points are the vertex positions (texturing isn't shown), the black lines are the fixtures, and the blue lines show the separation of how the triangles are drawn. The gray outline is what I would like the outline to look like in any case.
This image shows the same object with a few fixtures removed.
Is this possible to do this in a vertex shader (or in combination with other simple methods)? Any help would be appreciated.
Thanks :)
Assuming you're able to do something about those awkward points that are slightly inset from the corners (e.g., if you numbered the points in English-reading order, with the first being '1', point 6 would be one of them)...
If a point is interior then if you list all the polygon edges connected to it in clockwise order, each pair of edges in sequence will have a polygon in common. If any two edges don't have a polygon in common then it's an exterior point.
Starting from any exterior point you can then get the whole outline by first walking in any direction and subsequently along any edge that connects to an exterior point you haven't visited yet (or, alternatively, that isn't the edge you walked along just now).
Starting from an existing outline and removing some parts, you can obviously start from either exterior point that used to connect to another but no longer does and just walk from there until you get to the other.
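One closely related way to get the outline on the CPU, given the triangle index buffer described in the question: any edge used by exactly one drawn triangle lies on the outer contour. This is only a sketch and it does not deal with the inset-corner points mentioned above.

// Collect boundary edges: edges referenced by exactly one drawn triangle.
function boundaryEdges(indices) {
  const counts = new Map();
  const key = (a, b) => (a < b ? a + '_' + b : b + '_' + a);
  for (let i = 0; i < indices.length; i += 3) {
    const a = indices[i], b = indices[i + 1], c = indices[i + 2];
    if (a === 0 && b === 0 && c === 0) continue;   // fixture removed, indices zeroed out
    for (const [u, v] of [[a, b], [b, c], [c, a]]) {
      const k = key(u, v);
      counts.set(k, (counts.get(k) || 0) + 1);
    }
  }
  const edges = [];
  for (const [k, n] of counts) {
    if (n === 1) edges.push(k.split('_').map(Number));
  }
  return edges;   // chain these end-to-end to walk the whole outline
}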
You can't handle this stuff in a shader under ES because you don't get connectivity information.
I think the best you could do in a shader is to expand the geometry by pushing vertices outward along their surface normals. Supposing that your data structure is a list of rectangles, each described by, say, a centre, a width and a height, you could achieve the same thing by drawing each with the same centre but with a small amount added to the width and height.
To be completely general you'd need to store normals at vertices, but also to update them as geometry is removed. So there'd be some pushing of new information from the CPU but it'd be relatively limited.
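For the rectangle special case mentioned above, the two-pass drawing could be as simple as this (drawRect and OUTLINE_COLOR are placeholder names):

// Draw a slightly enlarged flat-coloured copy first, then the normal rectangle on top.
function drawOutlined(rect, thickness) {
  drawRect(rect.cx, rect.cy, rect.w + 2 * thickness, rect.h + 2 * thickness, OUTLINE_COLOR);
  drawRect(rect.cx, rect.cy, rect.w, rect.h, rect.texture);
}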

Is a closed polygonal mesh flipped?

I have a 3d modeling application. Right now I'm drawing the meshes double-sided, but I'd like to switch to single sided when the object is closed.
If the polygonal mesh is closed (no boundary edges/completely periodic), it seems like I should always be able to determine if the object is currently flipped, and automatically correct.
Being flipped means that my normals point into the object instead of out of the object. Being flipped is a result of a mismatch between my winding rules and the current frontface setting, but I compute the normals directly from the geometry, so looking at the normals is a simple way to detect it.
One thing I was thinking was to take the bounding box, find the highest point, and see if its normal points up or down - if it's down, then the object is flipped.
But it seems like this solution might be prone to errors with degenerate geometry, or floating point error, as I'd only be looking at a single point. I guess I could get all 6 axis-aligned extents, but that seems like a slightly better kludge, and not a proper solution.
Is there a robust, simple way to do this? Robust and hard would also work.. :)
This is a robust, but slow way to get there:
Take a corner of the bounding box, offset away from the centroid (so it is guaranteed to be outside your closed polygonal mesh), then create a line segment from that corner to the center point of any triangle on your mesh.
Measure the angle between that line segment and the normal of the triangle.
Intersect that line segment with each triangle face of your mesh (including the tri you used to generate the segment).
If there is an odd number of intersections, the segment reaches the triangle from outside, so the angle between the segment's direction (from the corner toward the triangle) and the triangle's normal should be greater than 90 degrees; if there is an even number of intersections, it should be less than 90 degrees.
If the angles come out the other way, the normals are flipped.
This should work for very complex surfaces, but they must be closed, or it breaks down.
"I'm drawing the meshes double-sided"
Why are you doing that? If you're using OpenGL then there is a much better way to go that saves you all the work. Use:
glLightModeli(GL_LIGHT_MODEL_TWO_SIDE, 1);
With this, all the polygons are always two-sided.
The only reason why you would want to use one-sided lighting is if you have an open or partially inverted mesh and you want to somehow indicate which parts belong to the inside by leaving them unlit.
Generally, the problem you're posing is an open problem in geometry processing and AFAIK there is no sure-fire general way that can always determine the orientation. As you suggest, there are heuristics that work almost always.
Another approach is reminiscent of a famous point-in-polygon algorithm: choose a vertex on the mesh and shoot a ray from it in the direction of the normal. If the ray hits an even number of faces then the normal points to the outside; if it is odd, the normal points to the inside. Take care not to count the point of origin as an intersection. This approach will only work if the mesh is a closed manifold, and it can hit edge cases if the ray happens to pass exactly between two polygons, so you might want to do it a number of times and take the majority vote.
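A rough sketch of that vote's core test, assuming the mesh is a list of triangles whose corners are {x, y, z} objects (the helpers and the epsilon are mine):

// Small vector helpers.
const sub = (a, b) => ({ x: a.x - b.x, y: a.y - b.y, z: a.z - b.z });
const dot = (a, b) => a.x * b.x + a.y * b.y + a.z * b.z;
const cross = (a, b) => ({
  x: a.y * b.z - a.z * b.y,
  y: a.z * b.x - a.x * b.z,
  z: a.x * b.y - a.y * b.x
});

// Möller-Trumbore ray/triangle test; returns the hit distance or null.
function rayTriangle(orig, dir, a, b, c) {
  const EPS = 1e-7;
  const e1 = sub(b, a), e2 = sub(c, a);
  const p = cross(dir, e2);
  const det = dot(e1, p);
  if (Math.abs(det) < EPS) return null;            // ray parallel to the triangle
  const inv = 1 / det;
  const t0 = sub(orig, a);
  const u = dot(t0, p) * inv;
  if (u < 0 || u > 1) return null;
  const q = cross(t0, e1);
  const v = dot(dir, q) * inv;
  if (v < 0 || u + v > 1) return null;
  const t = dot(e2, q) * inv;
  return t > EPS ? t : null;                       // skips the face the ray starts on
}

// Shoot a ray from a vertex along its normal and count the faces it passes through.
// Even count: the normal points outward. Odd count: the mesh looks flipped.
function looksFlipped(vertex, normal, triangles) {
  let hits = 0;
  for (const [a, b, c] of triangles) {
    if (rayTriangle(vertex, normal, a, b, c) !== null) hits++;
  }
  return hits % 2 === 1;
}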
