Is there an easy way to animate my own tree models? I can't use the Tree Creator, because it's useless for me since I am using my own terrain. My tree models are low-poly and I would like to animate the trees like here. Do I have to write a shader for this? I found this one, but I am not sure if I can use it or how to apply it to all my trees.
You can animate them in your 3D software (Blender?) and it's probably the easiest way to do this.
I want to create a function that can quickly 'paint' a volume in a 3D binary array.
The first approach I tried was extending a standard flood-fill algorithm to 3D, which was easy to do, but I was interested in making it faster. I read up on how to optimize flood fill and found the 'scanline' flood-fill algorithm. Implementing it in 2D gave me great results. I wanted to extend this to 3D, but it was not clear to me how to do so while maintaining the spirit of scanline by minimising the number of voxel checks.
I have searched for an existing implementation or explanation of scanline in 3D, but found nothing on it. I managed to extend the algorithm by essentially splitting the 3D grid into 2D planes and running the 2D scanline function on each slice. This is an improvement, but I feel like there's a better way.
Can scanline be extended into 3D, or is there a better approach entirely to all of this?
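For concreteness, here is a rough, untested sketch of the kind of 3D extension I have in mind (plain Python/NumPy, names are my own): grow a maximal run along x, then seed at most one voxel per open run in the four neighbouring rows (y +/- 1, z +/- 1).

import numpy as np
from collections import deque

def scanline_fill_3d(grid, seed):
    # grid: 3D array where nonzero means solid; seed: (x, y, z) of an empty voxel.
    nx, ny, nz = grid.shape
    filled = np.zeros(grid.shape, dtype=bool)
    stack = deque([seed])
    while stack:
        x, y, z = stack.pop()
        if grid[x, y, z] or filled[x, y, z]:
            continue
        # Grow a maximal run along x from (x, y, z).
        x0 = x
        while x0 > 0 and not grid[x0 - 1, y, z] and not filled[x0 - 1, y, z]:
            x0 -= 1
        x1 = x
        while x1 < nx - 1 and not grid[x1 + 1, y, z] and not filled[x1 + 1, y, z]:
            x1 += 1
        filled[x0:x1 + 1, y, z] = True
        # Seed at most one voxel per contiguous open run in the four neighbouring rows.
        for yy, zz in ((y - 1, z), (y + 1, z), (y, z - 1), (y, z + 1)):
            if not (0 <= yy < ny and 0 <= zz < nz):
                continue
            in_run = False
            for xi in range(x0, x1 + 1):
                open_voxel = not grid[xi, yy, zz] and not filled[xi, yy, zz]
                if open_voxel and not in_run:
                    stack.append((xi, yy, zz))
                in_run = open_voxel
    return filled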
I'm trying to write a little graph visualizer in P5js but I can't find a simple(-ish) algorithm to follow.
I've found ways to do it with D3 and I've found some dense textbook snippets (like this) but I'm looking for something in between the two.
Can someone explain the simplest algorithm for drawing a graph (force-directed or otherwise) or point me to a good resource?
Thanks for your help!
I literally just started something similar.
It's fairly easy to code: think about the three separate forces acting on each node, add them together, and divide by the node's mass to get that node's movement (a rough sketch follows the list below).
Gravity: apply a simple force towards the centre of the canvas so the nodes don't launch themselves out of frame.
Node-node repulsion: you can either use Coulomb's force (which describes particle-particle repulsion) or take the gravitational attraction equation and just reverse it.
Connection forces: this one's a little tricky. Define a connection as two nodes and a rest distance between them. When the actual distance between them differs from the rest distance, add a force along the direction of the connection, scaled by the difference between the rest and the actual distance.
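A minimal sketch of those three forces, in plain Python rather than P5js (all the constants are arbitrary tuning values, and the node/edge structures are just my own assumption):

import math

def step(nodes, edges, dt=0.02, gravity=0.5, repulsion=5000.0,
         spring=0.1, rest_length=80.0, centre=(400, 300)):
    # nodes: list of dicts with 'pos' [x, y] and 'mass'; edges: list of (i, j) pairs.
    forces = [[0.0, 0.0] for _ in nodes]

    for i, n in enumerate(nodes):
        # 1. Gravity: pull every node gently towards the centre of the canvas.
        forces[i][0] += gravity * (centre[0] - n['pos'][0])
        forces[i][1] += gravity * (centre[1] - n['pos'][1])

        # 2. Node-node repulsion: Coulomb-style 1/d^2 push between every pair.
        for j, m in enumerate(nodes):
            if i == j:
                continue
            dx = n['pos'][0] - m['pos'][0]
            dy = n['pos'][1] - m['pos'][1]
            d2 = dx * dx + dy * dy or 1e-6   # avoid division by zero
            d = math.sqrt(d2)
            forces[i][0] += repulsion * dx / (d2 * d)
            forces[i][1] += repulsion * dy / (d2 * d)

    # 3. Connection forces: spring towards the rest length along each edge.
    for i, j in edges:
        dx = nodes[j]['pos'][0] - nodes[i]['pos'][0]
        dy = nodes[j]['pos'][1] - nodes[i]['pos'][1]
        d = math.sqrt(dx * dx + dy * dy) or 1e-6
        f = spring * (d - rest_length)       # positive = pull the pair together
        fx, fy = f * dx / d, f * dy / d
        forces[i][0] += fx; forces[i][1] += fy
        forces[j][0] -= fx; forces[j][1] -= fy

    # Move each node by force / mass.
    for i, n in enumerate(nodes):
        n['pos'][0] += dt * forces[i][0] / n['mass']
        n['pos'][1] += dt * forces[i][1] / n['mass']

Call step() once per frame of your draw loop and then render the edges and nodes at their updated positions; porting this to P5js is mostly a matter of syntax.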
I'm implementing a nav mesh pathfinding system and I need to be able to raycast between two points in the mesh and get a list of all edges crossed by the ray. Obviously I'll need to be able to test for individual line intersections, but I expect there to be an efficient way of picking which lines actually need to be checked rather than brute force iterating over every edge in the entire mesh. Does anyone know how I might go about that?
If your mesh is a rectangular grid, consider the efficient method of Amanatides and Woo from the article "A Fast Voxel Traversal Algorithm..."
Implementation example
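Not the linked code, but a rough 2D sketch of that traversal in Python to show the idea (the function name and the uniform-grid assumption are mine; a navmesh is not a uniform grid, so you would adapt the marching step to walk triangle/edge neighbours instead):

import math

def grid_traversal(start, end, cell_size=1.0):
    # Cells visited by the segment start->end, in crossing order (Amanatides & Woo DDA).
    x0, y0 = start
    x1, y1 = end
    dx, dy = x1 - x0, y1 - y0

    ix, iy = int(math.floor(x0 / cell_size)), int(math.floor(y0 / cell_size))
    ix_end, iy_end = int(math.floor(x1 / cell_size)), int(math.floor(y1 / cell_size))
    step_x = 1 if dx > 0 else -1
    step_y = 1 if dy > 0 else -1

    def init(p, i, d, step):
        # Distance (as a fraction of the segment) to the first grid line,
        # and the spacing between successive crossings along this axis.
        if d == 0:
            return math.inf, math.inf
        next_boundary = (i + (1 if step > 0 else 0)) * cell_size
        return (next_boundary - p) / d, cell_size / abs(d)

    t_max_x, t_delta_x = init(x0, ix, dx, step_x)
    t_max_y, t_delta_y = init(y0, iy, dy, step_y)

    cells = [(ix, iy)]
    while (ix, iy) != (ix_end, iy_end):
        if t_max_x < t_max_y:
            ix += step_x
            t_max_x += t_delta_x
        else:
            iy += step_y
            t_max_y += t_delta_y
        cells.append((ix, iy))
    return cells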
I have an object made of points, let's say it's a point cloud, and I want to render an object from those points; I want it to look like the points were wrapped in a sheet of paper. I also want to animate it. The first thing that came to my mind was marching cubes, but my object will not be a ball or a cube, it will morph. Is there any simpler approach than marching cubes?
Depending on what you mean by "wrapped", a 3D convex hull may produce the effect that you want.
Animate your vertices however you want and re-run the hull algorithm each time.
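If you don't want to write the hull code yourself, here is a rough sketch of the re-run-per-frame idea using SciPy (my choice of library for illustration; a real-time engine would likely use an incremental hull instead):

import numpy as np
from scipy.spatial import ConvexHull

def hull_mesh(points):
    # points: (N, 3) array. Returns the points plus hull triangles indexing into them.
    hull = ConvexHull(points)
    return points, hull.simplices   # for 3D input, simplices are triangles

points = np.random.rand(200, 3)
for frame in range(10):
    points += 0.01 * np.random.randn(*points.shape)  # stand-in for your animation
    verts, tris = hull_mesh(points)
    # hand verts/tris to your renderer here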
The Marching Cubes algorithm seems like the best fit for what you're looking for -- not all point clouds are convex. The algorithm may seem intimidating because of the large lookup table, but it is actually pretty straightforward. I have posted an example (using Three.js) at:
http://stemkoski.github.com/Three.js/Marching-Cubes.html
This seems like what you're looking for:
http://nehe.gamedev.net/data/lessons/lesson.asp?lesson=25
I am new to culling. At first glance, it seems that most occlusion culling algorithms are object-level, not examining single meshes, which would be practical for game rendering.
What I am looking for is an algorithm that culls all meshes within a single object that are occluded for a given viewpoint, with high accuracy. It needs to be O(n log n) or better; a naive mesh-by-mesh comparison (O(n^2)) is too slow.
I notice that the Blender GUI identifies the occluded meshes for you in real-time, even if you work with large objects of 10,000+ meshes. What algorithm is used there, pray tell?
Blender performs both view frustum culling and occlusion culling, based on the dynamic AABB tree acceleration structures from the Bullet physics library. Occluders and objects are represented by their bounding volume (axis aligned bounding box).
View frustum culling just traverses the AABB tree, given a camera frustum.
The occlusion culling is based on an occlusion buffer, a kind of depth buffer that is initialized using a very simple software renderer: a basic ray tracer using the dynamic AABB tree acceleration structures. You can choose the resolution of the occlusion buffer to trade accuracy against efficiency.
See also the documentation: http://wiki.blender.org/index.php/Dev:Ref/Release_Notes/2.49/Game_Engine#Performance
Blender implementation from the Blender source tree:
blender\source\gameengine\Physics\Bullet\CcdPhysicsEnvironment.cpp method
bool CcdPhysicsEnvironment::cullingTest(PHY_CullingCallback callback, void* userData, PHY__Vector4 *planes, int nplanes, int occlusionRes)
and the Bullet Physics SDK has some C++ sample code in Bullet/Extras/CDTestFramework.
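Not Blender's actual code, but a rough Python sketch of the frustum-culling building blocks described above; the node fields (aabb_min, aabb_max, children, payload) are a hypothetical tree layout:

def aabb_outside_frustum(aabb_min, aabb_max, planes):
    # planes: list of (nx, ny, nz, d), with n.x + d >= 0 meaning "inside" the plane.
    for nx, ny, nz, d in planes:
        # "Positive vertex": the box corner furthest along the plane normal.
        px = aabb_max[0] if nx >= 0 else aabb_min[0]
        py = aabb_max[1] if ny >= 0 else aabb_min[1]
        pz = aabb_max[2] if nz >= 0 else aabb_min[2]
        if nx * px + ny * py + nz * pz + d < 0:
            return True   # even the most favourable corner is behind this plane
    return False

def cull_aabb_tree(node, planes, visible):
    # Recursively collect the payloads of leaves whose boxes intersect the frustum.
    if aabb_outside_frustum(node.aabb_min, node.aabb_max, planes):
        return
    if not node.children:
        visible.append(node.payload)
        return
    for child in node.children:
        cull_aabb_tree(child, planes, visible)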
Have you looked into things like Octrees?
The reason culling is not done at the mesh level is that a mesh is a very dumb renderer-level object, while a game object lives at the scene level, where all the culling occurs. No culling decisions are made at the mesh level; meshes are simply a way to batch primitives with similar shader states when their object needs to be rendered.
If you really need to cull at mesh level, then you might want to create one object per mesh and then create a group object that contains the mesh-objects. Be aware that you might actually lose performance since culling at mesh level is usually not worth it; it might break the flow of data transfers to the hardware.
Usually the first thing you do is check whether the meshes have any chance of occluding each other at all, often with a simple bounding primitive (bounding boxes or bounding spheres). If I recall correctly, octrees will help with this.
I tried the naive approach which was actually sufficiently fast for my app.
For each triangle in the mesh, check whether it is occluded by any other triangle in the mesh, thus O(n^2). (I got high-accuracy results from only checking whether the center point of each triangle was occluded, though you should at least check the triangle's corner vertices, too, if accuracy is important.)
On a Pentium 4 machine (C++), a binary STL object of ~10,000 triangles took about 1 minute to complete.
The algorithm can be optimized by up to ~40 percent by sorting the triangles first, for example by area or by distance from the viewpoint (more likely to occlude, so you can skip more checks).
The next optimization step would be to put the triangles in an octree, which should greatly reduce the number of occlusion checks.
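Roughly what that naive center-point test looks like, reconstructed in Python with a Moller-Trumbore ray-triangle intersection (the original was C++; the function names are mine):

import numpy as np

def ray_triangle_t(orig, direction, v0, v1, v2, eps=1e-9):
    # Moller-Trumbore: return the parametric distance t of the hit, or None.
    e1, e2 = v1 - v0, v2 - v0
    h = np.cross(direction, e2)
    a = np.dot(e1, h)
    if abs(a) < eps:
        return None          # ray is parallel to the triangle plane
    f = 1.0 / a
    s = orig - v0
    u = f * np.dot(s, h)
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = f * np.dot(direction, q)
    if v < 0.0 or u + v > 1.0:
        return None
    t = f * np.dot(e2, q)
    return t if t > eps else None

def occluded_triangles(triangles, eye):
    # triangles: (N, 3, 3) array of vertex positions; eye: (3,) viewpoint.
    hidden = []
    for i, tri in enumerate(triangles):
        centre = tri.mean(axis=0)
        to_centre = centre - eye
        dist = np.linalg.norm(to_centre)
        direction = to_centre / dist
        for j, other in enumerate(triangles):
            if i == j:
                continue
            t = ray_triangle_t(eye, direction, other[0], other[1], other[2])
            if t is not None and t < dist - 1e-6:
                hidden.append(i)   # something sits between the eye and this centre
                break
    return hidden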