This is more of a GPU-to-CPU question, but I'd like to ask three.js people if they have any insights. In summary:
I have a large mesh requiring GPU calculations.
After the user interacts with the mesh, the user may elect to export the mesh based on the vertex positions computed in the GLSL shader (using OBJExporter). These need to be world positions, not screen positions (imagine a 3D print of the model).
I realize that it is expensive to go from GPU to CPU, but can anyone suggest how to do it? I'm not asking for code, but broad ideas would be very helpful, as I haven't found much regarding WebGL-specific workflows (see here).
How does one get the world XYZ of each vertex in a shader in order to recreate the geometry on the CPU for export?
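One broad idea, sketched below with placeholder names (positionMaterial and vertexCount are assumptions; mesh, renderer and camera are your existing objects), is a GPGPU-style readback: bake each vertex's world position into one texel of a floating-point render target, read the target back with readRenderTargetPixels, and rebuild a plain geometry for OBJExporter.

    // One texel per vertex: render the geometry as points with a material whose
    // vertex shader runs your existing deformation, positions vertex i at texel i
    // of the target (gl_PointSize = 1), and passes the world position to the
    // fragment shader, which writes it out as the color.
    const texSize = Math.ceil(Math.sqrt(vertexCount));
    const target = new THREE.WebGLRenderTarget(texSize, texSize, {
      type: THREE.FloatType,                 // requires float render/readback support
      minFilter: THREE.NearestFilter,
      magFilter: THREE.NearestFilter
    });

    const bakePoints = new THREE.Points(mesh.geometry, positionMaterial);
    const bakeScene = new THREE.Scene();
    bakeScene.add(bakePoints);
    renderer.setRenderTarget(target);
    renderer.render(bakeScene, camera);      // camera is irrelevant here, texel positions are set explicitly
    renderer.setRenderTarget(null);

    // Pull the baked world positions back to the CPU.
    const data = new Float32Array(texSize * texSize * 4);
    renderer.readRenderTargetPixels(target, 0, 0, texSize, texSize, data);

    // Rebuild a plain geometry from the read-back positions and export it.
    const positions = new Float32Array(vertexCount * 3);
    for (let i = 0; i < vertexCount; i++) {
      positions[i * 3 + 0] = data[i * 4 + 0];
      positions[i * 3 + 1] = data[i * 4 + 1];
      positions[i * 3 + 2] = data[i * 4 + 2];
    }
    const exportGeometry = new THREE.BufferGeometry();
    exportGeometry.setAttribute('position', new THREE.BufferAttribute(positions, 3));
    exportGeometry.setIndex(mesh.geometry.getIndex());   // reuse the original triangle indices
    const objText = new OBJExporter().parse(new THREE.Mesh(exportGeometry));  // examples/jsm/exporters/OBJExporter.js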
As far as I know, all the techniques mentioned in the title are rendering algorithms that seem quite similar. All ray-based techniques seem to revolve around casting rays through each pixel of an image, which are supposed to represent rays of real light. This makes it possible to render very realistic images.
As a matter of fact, I am writing a simple program myself that renders such images, based on Ray Tracing in One Weekend.
Now I wanted to give this program a name. I used the term “ray tracer”, as that is the term used in the book.
I have heard a lot of different terms, however, and I would be interested to know what exactly the difference is between ray tracing, ray marching, ray casting, path tracing and potentially any other common ray-related algorithms. I was able to find some comparisons of these techniques online, but they all compared only two of them and some definitions overlapped, so I wanted to ask this question about all four techniques.
My understanding of this is:
ray cast
uses a raster image to hold the scene, usually stops on the first hit (no reflections or ray splitting), and does not necessarily cast a ray for every pixel (often one per row or column of the screen). The 3D version of this is called voxel-space ray casting; however, the map is not an actual voxel space: instead, two raster images (RGB and height) are used.
For more info see:
ray cast
Voxel space ray casting
(back) ray trace
This usually follows the physical properties of light, so rays split into reflected and refracted parts, and we usually stop after some number of hits. The scene is represented either with boundary-representation (BR) meshes, with analytical equations, or both.
For more info see:
GLSL 3D Mesh back raytracer
GLSL 3D Volumetric back raytracer
The "back" means we cast the rays from the camera into the scene (on a per-pixel basis) instead of from the light source out in all directions. This speeds up the process a lot, at the cost of physically wrong lighting (but that can be remedied with additional methods on top of this).
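To make that structure concrete, here is a minimal JavaScript skeleton for a hypothetical scene containing a single analytical sphere; it shows the per-hit split (refraction omitted for brevity) and the hit-count limit mentioned above.

    // Small vector helpers so the sketch is self-contained.
    const dot   = (a, b) => a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
    const sub   = (a, b) => [a[0] - b[0], a[1] - b[1], a[2] - b[2]];
    const add   = (a, b) => [a[0] + b[0], a[1] + b[1], a[2] + b[2]];
    const scale = (a, s) => [a[0] * s, a[1] * s, a[2] * s];
    const mix   = (a, b, k) => a.map((v, i) => v * (1 - k) + b[i] * k);
    const reflect = (d, n) => sub(d, scale(n, 2 * dot(d, n)));

    function intersectSphere(origin, dir, center, radius) {
      // Nearest positive hit distance along the (normalized) ray, or Infinity on a miss.
      const oc = sub(origin, center);
      const b = dot(oc, dir);
      const disc = b * b - (dot(oc, oc) - radius * radius);
      if (disc < 0) return Infinity;
      const t = -b - Math.sqrt(disc);
      return t > 0 ? t : Infinity;
    }

    function trace(origin, dir, depth) {
      if (depth <= 0) return [0, 0, 0];                    // stop after some number of hits
      const center = [0, 0, -3], radius = 1;
      const t = intersectSphere(origin, dir, center, radius);
      if (t === Infinity) return [0.2, 0.3, 0.8];          // background color
      const hit = add(origin, scale(dir, t));
      const n = scale(sub(hit, center), 1 / radius);       // sphere normal
      const reflected = trace(add(hit, scale(n, 1e-4)), reflect(dir, n), depth - 1);
      return mix([1, 0, 0], reflected, 0.5);               // surface color blended with the reflection
    }

    // One ray per pixel would be cast from the camera, e.g.:
    const color = trace([0, 0, 0], [0, 0, -1], 4);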
The other terms I am not so sure about, as I do not use those techniques (at least not knowingly):
path tracing
is an optimization technique that avoids the recursive ray splitting of ray tracing by using a Monte Carlo (stochastic) approach. So it does not actually split the ray, but chooses randomly between the two options (similar to how photons behave in the real world), and multiple rendered frames are then blended together.
ray marching
is an optimization technique that speeds up ray tracing by using an SDF (signed distance function) to determine a safe advance along the ray, so it does not hit anything on the way. But it is confined to analytically described scenes.
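A minimal sketch of that stepping loop, with a single hypothetical sphere as the SDF scene:

    // Sphere tracing: step along the ray by the SDF value, which is the
    // largest step guaranteed not to pass through any surface.
    function sceneSDF(p) {
      // Hypothetical scene: one sphere of radius 1 centered at (0, 0, -3).
      const dx = p[0], dy = p[1], dz = p[2] + 3;
      return Math.sqrt(dx * dx + dy * dy + dz * dz) - 1;
    }

    function rayMarch(origin, dir, maxSteps = 128, maxDist = 100, eps = 1e-3) {
      let t = 0;
      for (let i = 0; i < maxSteps; i++) {
        const p = [origin[0] + dir[0] * t, origin[1] + dir[1] * t, origin[2] + dir[2] * t];
        const d = sceneSDF(p);              // distance to the nearest surface
        if (d < eps) return t;              // close enough: report a hit
        t += d;                             // safe advance along the ray
        if (t > maxDist) break;             // left the scene
      }
      return Infinity;                      // no hit
    }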
My question is about old GPUs that have separate vertex shaders and pixel shaders: I don't know how I can measure GFLOPS with that kind of GPU.
I know you can measure GFLOPS using Core Speed x ALUs x 2 (I don't know what this "2" is; if someone can answer that too, it would be great!). But for a GPU that doesn't have unified shaders, how can I measure it?
Thanks in advance.
I think the two can be measured independently. I mean, you should measure vertex-shader performance without pixel-shader work (for example, draw backface-culled triangles), then measure pixel-shader performance without (significant) vertex-shader work (like full-screen triangles), and then add the two numbers together.
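As a worked example with made-up numbers (and assuming the factor 2 in the question's formula stands for a multiply-add counted as two floating-point operations per cycle):

    // Hypothetical non-unified GPU: all numbers are made up for illustration.
    const coreClockGHz = 0.5;        // 500 MHz
    const vertexALUs   = 8;          // separate vertex-shader units
    const pixelALUs    = 24;         // separate pixel-shader units
    const flopsPerALUPerCycle = 2;   // one multiply-add counted as 2 FLOPs

    const vertexGFLOPS = coreClockGHz * vertexALUs * flopsPerALUPerCycle; //  8 GFLOPS
    const pixelGFLOPS  = coreClockGHz * pixelALUs  * flopsPerALUPerCycle; // 24 GFLOPS
    console.log(vertexGFLOPS + pixelGFLOPS);                              // 32 GFLOPS total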
I am using three.js to render a voxel representation as a set of triangles. I have got it to render 5 million triangles comfortably, but that seems to be the limit. You can view it online here.
Select the Dublin model at resolution 3 to see a lot of triangles being drawn.
I have used every trick to get it this far (buffer geometry, voxel culling, multiple buffers), but I think it has hit the maximum that OpenGL triangle rendering can accomplish.
Large numbers of voxels are normally rendered as a set of images in a 3D texture, and while there are several posts on how to hack 2D textures into 3D textures, they seem to have a maximum limit on the texture size.
I have searched for tutorials or examples using this approach but haven't found any. Has anyone used this approach before with three.js?
Your scene is rendered twice, because SSAO needs a depth texture. You could use the WEBGL_depth_texture extension, which has pretty good support, so you only need a single render pass. You can still fall back to the low-performance double pass if the extension is unavailable.
Your voxels' material is double-sided. That may be on purpose, but it can create a huge amount of overdraw.
In your demo, you use a MeshPhongMaterial and directional lights. It's a needlessly complex material, and your geometries don't have any normals, so you can't get any lighting from it anyway. Try to use a simpler unlit material.
Your goal is to render a huge number of vertices, so assuming the framerate is bound by the vertex shader:
Try tools like https://github.com/GPUOpen-Tools/amd-tootle to preprocess your geometries, focusing on the vertex prefetch cache and the vertex cache.
Reduce the bandwidth used by your vertex buffers. Since your vertices are aligned on a "grid", you could store vertex positions as 3 shorts instead of 3 floats, halving your VBO size. You could use the same trick for normals if you had them, since all normals of cubes are axis-aligned.
Generally, reduce the number of varyings needed by the fragment shader.
If you need more attributes than just the vec3 position, use a single interleaved VBO instead of one per attribute (see the sketch after this list).
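A rough three.js sketch of the material and vertex-buffer suggestions above (vertexCount and the attribute layout are placeholders):

    // Unlit, front-face-only material instead of the double-sided MeshPhongMaterial.
    const material = new THREE.MeshBasicMaterial({
      color: 0x888888,
      side: THREE.FrontSide                     // avoids double-sided overdraw
    });

    // Grid-aligned positions stored as 16-bit integers instead of 32-bit floats,
    // halving the VBO size (assumes voxel coordinates fit in a signed short).
    const shortPositions = new Int16Array(vertexCount * 3);   // fill with voxel-grid coordinates
    const geometry = new THREE.BufferGeometry();
    geometry.setAttribute('position', new THREE.BufferAttribute(shortPositions, 3));
    // Scale the resulting mesh back to world units, e.g. mesh.scale.setScalar(voxelSize).

    // If more per-vertex attributes are needed, interleave them in one buffer
    // ([x, y, z, u, v] per vertex) instead of using one VBO per attribute:
    const stride = 5;
    const interleaved = new THREE.InterleavedBuffer(new Float32Array(vertexCount * stride), stride);
    const geometry2 = new THREE.BufferGeometry();
    geometry2.setAttribute('position', new THREE.InterleavedBufferAttribute(interleaved, 3, 0));
    geometry2.setAttribute('uv',       new THREE.InterleavedBufferAttribute(interleaved, 2, 3));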
I am curious about the limits of three.js. The following question is asked mainly as a challenge, not because I actually need the specific knowledge/code right away.
Say you have a game/simulation world model around a sphere geometry representing a planet, like the worlds of the game Populous. The resolution of polygons and textures is sufficient to look smooth when the globe fills the view of an ordinary camera. There are animated macroscopic objects on the surface.
The challenge is to project everything from the model to a global map projection on the screen in real time. The choice of projection is yours, but it must be seamless/continuous, and it must be possible for the user to rotate it, placing any point on the planet surface in the center of the screen. (It is not an option to maintain an alternative model of the world only for visualization.)
There are no limits on the number of cameras etc. allowed, but the performance must be expected to be "realtime", say double-digit FPS or more.
I don't expect any proof in the form of a running application (although that would be cool), but some explanation as to how it could be done.
My own initial idea is to place a lot of cameras, in fact one for every pixel of the map projection, around the globe, within a Group object that is attached to some kind of orbit controls (with rotation only), but I expect the number of object-culling operations to become a huge performance issue. I am sure there must exist more elegant (and faster) solutions. :-)
Why not just use a spherical camera model (think of a 360° camera) and virtually put it at the center of the sphere? This camera would (if it were physically possible) be wrapped all around the sphere, looking toward the center from all directions.
This camera could be implemented in shaders (instead of the regular projection matrix) and would produce an equirectangular image of the planet surface (or in fact any other projection you want, like a spherical Mercator projection).
As far as I can tell, the vertex shader can implement any projection you want, and it doesn't need to represent a camera that is physically possible. It just needs to produce consistent clip-space coordinates for all vertices. Fragment shaders for lighting would still need to operate on the original coordinates, normals etc., but that should be achievable. So the vertex shader would just need to compute (x, y, z) => (phi, theta, r) and go on from there.
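A rough sketch of such a vertex shader as a three.js ShaderMaterial (the exact mapping, the depth scaling and the placeholder fragment shading are all illustrative, not a finished projection):

    // Replaces the regular perspective projection with an equirectangular mapping:
    // world position -> (longitude, latitude, radius) -> clip space.
    const equirectMaterial = new THREE.ShaderMaterial({
      vertexShader: `
        varying vec3 vWorldPos;
        void main() {
          vec4 worldPos = modelMatrix * vec4(position, 1.0);
          vWorldPos = worldPos.xyz;
          float r     = length(worldPos.xyz);
          float theta = atan(worldPos.z, worldPos.x);              // longitude, -PI..PI
          float phi   = asin(clamp(worldPos.y / r, -1.0, 1.0));    // latitude, -PI/2..PI/2
          // Map longitude/latitude to clip-space x/y, radius to depth.
          gl_Position = vec4(theta / 3.14159265, phi / 1.57079633, r / 100.0, 1.0);
        }
      `,
      fragmentShader: `
        varying vec3 vWorldPos;
        void main() {
          gl_FragColor = vec4(normalize(vWorldPos) * 0.5 + 0.5, 1.0);   // placeholder shading
        }
      `
    });

Triangles that cross the longitude seam (theta jumping from +PI to -PI) would need special handling, such as duplicated vertices, which this sketch ignores.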
Occlusion-culling would need to be disabled, but iirc three.js doesn't do that anyway.
I've got a fairly simple implementation of normal-map lighting working for 2D sprites in WebGL (GLSL shaders), which I was able to adapt and optimize from an example. It uses just one directional light and works fine for my purposes. Sprites are rendered flat (2D); only the light direction and the normals are 3D vectors. Vertex rotation only happens around the z axis, so it's fairly easy-peasy.
I was hoping to add a bump (height) map to cast shadows. There are 3D bump map shadow casting examples and papers available online, but they're more complex than I need and the math goes over my head; I haven't found an example or explanation of how one might do a simple 2D case.
My first inclination is as follows: for the current pixel in the fragment shader, trace back along the direction of the light and check the altitude of the neighbouring bump map pixel. If it's higher than the light direction vector at that point, then that pixel is in the shade. However since "tall" pixels on the bump map may cast shadow across > 1 pixel distance, I'd have to keep testing pixel by pixel in that direction until I find one tall enough to cast a shadow (or reach the edge of the texture, or reach some arbitrary limit.)
This doesn't sound very optimal, especially for larger textures. I've read that if statements in shaders aren't so fast. Is there a faster/better method?
What you are looking for is called parallax (occlusion) mapping.
It's a technique that does exactly what you described, and it can be understood as on-bumpmap ray tracing in tangent space.
Here are some articles:
nVidia - Per-Pixel displacement (w/ sphere tracing)
nVidia - Cone Tracing for PM
AMD - POM
The ways to optimize the search are similar to ordinary ray tracing and include sphere tracing, cone tracing, binary search and the like, instead of a constant stepping function.
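For the 2D sprite case from the question, the core of the shadow test is just a short march over the height map toward the light. A CPU-style sketch of that loop (constant stepping, no optimizations; heightMap and lightDir are assumed inputs):

    // Returns true if the height-map texel at (x, y) is shadowed by a taller
    // texel along the direction toward the light. lightDir points from the
    // surface toward the light, is normalized, and has lightDir.z > 0.
    function inShadow(heightMap, x, y, lightDir, maxSteps = 32) {
      let px = x, py = y;
      let rayH = heightMap[y][x];                // the shadow ray starts at this texel's height
      for (let i = 0; i < maxSteps; i++) {
        px   += lightDir.x;                      // walk toward the light in the sprite plane
        py   += lightDir.y;
        rayH += lightDir.z;                      // the ray climbs as it goes
        const ix = Math.round(px), iy = Math.round(py);
        if (iy < 0 || iy >= heightMap.length || ix < 0 || ix >= heightMap[0].length) {
          return false;                          // walked off the texture: lit
        }
        if (heightMap[iy][ix] > rayH) {
          return true;                           // a taller texel blocks the light
        }
      }
      return false;                              // no blocker found within the step limit
    }

In a fragment shader the same loop would sample the height texture instead of indexing an array, and the optimizations above replace the fixed per-texel stepping.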
P.S. If you know the name of a rendering technique, it's generally a good idea to Google it with 'nVidia', 'crytek' or 'gpu' added in front of the name; it will show you much more relevant results.
Hope this helps.