Debug Geometry Shader in DrawInstancedIndirect draw call

My program is a rain particle system that uses a compute shader to advance the rain drops and a rendering pipeline (vertex shader, geometry shader, pixel shader) to draw the advanced drops.
I use the DrawInstancedIndirect draw call to feed the results of the compute shader into the rendering step.
My problem is in the rendering step, in the geometry shader, where I'm trying to draw a billboard for each rain drop. If I just draw a normal rectangle, it renders fine, but when I change to a billboard, nothing appears in the render target. I'm trying to find a way to debug this geometry shader. I used the following tools, but they did not work out for me.
Graphics Debugger in VS2012. It seems that this tool does not support the DrawInstancedIndirect draw call.
GPU PerfStudio. It supports vertex and pixel shaders, but not geometry shaders. I tried passing intermediate values from the geometry shader through to the pixel shader so I could inspect them, and they are all zero. But I need to dig into the geometry shader itself to find the error.
NVIDIA Nsight. My graphics card is a GeForce 720M, and sadly Nsight only supports the 730M and above. Maybe that is why the shader list is empty while I am debugging.
I'm desperate now and see no way to track down the problem. I hope you can suggest a way to debug this geometry shader. Thanks so much!

You can try RenderDoc by Crytek; it's really easy to use and you can inspect every buffer at every stage.

Related

How to detect interaction with a line expanded in vertex shader in three.js?

I'm working on a three.js project that requires crisp, thick 2D lines.
Because of limitations in the ANGLE layer, the WebGL renderer on Windows doesn't allow thick lines with LineBasicMaterial.
To get around this, I'm expanding polylines in a vertex shader using three-line-2d. This works by pairing BufferGeometry with a simple ShaderMaterial.
Visually, I'm happy with the result.
Now, I'd like to detect mouse interactions with these lines. The usual Raycaster techniques don't work. I suspect that this is because my lines lack geometry that three.js understands (because I'm expanding in a shader).
My question: What are my options for picking these lines? Do I need to extrude outside of the shader, or are there other good options?
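One option (a sketch, not a drop-in solution) is to keep an invisible proxy THREE.Line built from the same centreline points you feed to three-line-2d, and raycast against that with a widened Line threshold so hits roughly match the visible thickness. The names `points`, `camera`, and `scene` below are placeholders, and `Raycaster.params.Line.threshold` assumes a three.js version that exposes it:

```javascript
// Sketch: pick against an invisible proxy line that shares the same
// centreline points as the shader-expanded line. `points`, `camera`,
// and `scene` are placeholders for your own objects.
import * as THREE from 'three';

// `points` is the array of THREE.Vector3 centreline points already used
// to build the three-line-2d geometry for the visible line.
const proxyGeometry = new THREE.BufferGeometry().setFromPoints(points);
const proxyLine = new THREE.Line(proxyGeometry, new THREE.LineBasicMaterial());
proxyLine.visible = false;            // never drawn, only raycast against
scene.add(proxyLine);

const raycaster = new THREE.Raycaster();
// Widen the pick tolerance (in world units) so hits roughly match the
// visual thickness of the expanded line.
raycaster.params.Line.threshold = 0.5;

function pick(mouseNdc) {             // mouseNdc: THREE.Vector2 in [-1, 1]
  raycaster.setFromCamera(mouseNdc, camera);
  const hits = raycaster.intersectObject(proxyLine);
  return hits.length > 0 ? hits[0] : null;
}
```

If you need pixel-exact hits instead, GPU picking is another route: render the expanded lines to an off-screen target with a per-line ID color and read back the pixel under the mouse.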

How to store and access per fragment attributes in WebGL

I am doing a particle system in WebGL using Three.js, and I want to do all the computation of the particles in the shaders. To achieve that, the positions (for example) of the particles are stored in a texture which is sampled by the vertex shader of each particle (POINT primitive).
The position texture is in fact two render targets which are swapped each frame after being updated off screen. Each pixel of this texture represents a particle.
To update a position, I read one of the render targets (texture2D), do some computation, and write to the other render target (fragment output).
To perform the "do some computation" step, I need some per-particle attributes, like velocity (and a lot of others). Since this step is done in the fragment shader, I can't use the vertex attribute buffers, so I have to store these properties in separate textures and sample each of them in the fragment shader.
It works, but sampling textures is slow as far as I know, and I wonder if there are better ways to do this, like having one vertex per particle, each rendering a single fragment of the position texture.
I know that OpenGL 4 has some alternative ways to deal with this, like UBOs or SSBOs, but I'm not sure what is available in WebGL.
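A minimal sketch of the setup described above, assuming THREE.DataTexture with float data for the extra per-particle attributes; the texture size, uniform names, and the integration step are illustrative:

```javascript
// Minimal sketch of the "attributes in textures" approach described above.
// SIZE, the uniform names and the integration step are placeholders.
import * as THREE from 'three';

const SIZE = 256;                                      // SIZE * SIZE particles
const velocities = new Float32Array(SIZE * SIZE * 4);  // xyz velocity + spare

const velocityTexture = new THREE.DataTexture(
  velocities, SIZE, SIZE, THREE.RGBAFormat, THREE.FloatType
);
velocityTexture.needsUpdate = true;

// Simulation pass: a full-screen quad whose fragment shader writes the new
// positions into the "other" render target of the ping-pong pair.
const simulationMaterial = new THREE.ShaderMaterial({
  uniforms: {
    positionTexture: { value: null },  // set to the current read target each frame
    velocityTexture: { value: velocityTexture },
    delta:           { value: 0.016 },
  },
  vertexShader: /* glsl */ `
    varying vec2 vUv;
    void main() {
      vUv = uv;
      gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
    }
  `,
  fragmentShader: /* glsl */ `
    uniform sampler2D positionTexture;
    uniform sampler2D velocityTexture;
    uniform float delta;
    varying vec2 vUv;                  // one texel per particle

    void main() {
      vec3 position = texture2D(positionTexture, vUv).xyz;
      vec3 velocity = texture2D(velocityTexture, vUv).xyz;
      gl_FragColor = vec4(position + velocity * delta, 1.0);
    }
  `,
});
```

To cut down on samplers, several scalar attributes can also be packed into the unused channels of one RGBA float texture (for example position in .xyz and lifetime in .w), so a single texture2D call returns more than one attribute.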

Three.js doesn't use different shader programs for different mesh objects, why?

I've been trying to figure out how three.js works and have used a shader debugger on it.
I added two simple planes with a basic material (single color, no shading model), which rotate during rendering.
First of all, my question was: why does three.js use a single shader program (see the WebGL context function .useProgram()) for both meshes?
I supposed that since the objects are the same, a single shader program is used for similar objects for performance reasons.
But... I changed my three.js application source code, and now there are a plane and a cube in the scene, both rotating.
And let's look in the shader debugger again:
Here you can see that three.js is again using one shader program, even though the objects are now different. This is the part that isn't clear to me.
Looking at that shader, it seems to be a very generic, huge shader program, and there are also two other shader programs that were compiled but not used.
So, why does three.js use a single shader program? What are the (correct or maybe not) reasons for this?
Most of the work done in a shader is related to the material part of the mesh, not the geometry.
In WebGL (or OpenGL for that matter), the geometry as you understand it (whether it is a cube, a sphere, or whatever) is pretty irrelevant.
It would be a little more relevant if you talked about how the geometry is constructed. But these days, when faces of more than 3 vertices are gone and triangle strips are seldom used, there are only a few different kinds of geometry: face3 geometries, line geometries, particle geometries, and buffer geometries.
Most of the time, the deciding factor for whether a different shader is needed is the material.
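A small sketch of that point, assuming a three.js version that exposes renderer.info.programs: two different geometries sharing one material type compile a single program, while introducing a different material type adds a second one.

```javascript
// Sketch: the shader program count follows the materials, not the geometries.
// Assumes a three.js version that exposes renderer.info.programs.
import * as THREE from 'three';

const renderer = new THREE.WebGLRenderer();
const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(60, 1, 0.1, 100);
camera.position.z = 5;

const basic = new THREE.MeshBasicMaterial({ color: 0xff0000 });

// Different geometries, same kind of material.
scene.add(new THREE.Mesh(new THREE.PlaneGeometry(1, 1), basic));
scene.add(new THREE.Mesh(new THREE.BoxGeometry(1, 1, 1), basic));

renderer.render(scene, camera);
console.log(renderer.info.programs.length);   // expect 1 program

// A different *material type* forces a different program.
scene.add(new THREE.Mesh(
  new THREE.SphereGeometry(1, 16, 16),
  new THREE.MeshPhongMaterial({ color: 0x00ff00 })
));

renderer.render(scene, camera);
console.log(renderer.info.programs.length);   // expect 2 programs
```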

First steps with shaders and THREEjs

I'm working on a vector field over Perlin noise, and it was suggested that I speed it up using shaders. My graphics knowledge is still very basic, but I would like to ask whether my idea of how to do it is correct.
Here is what I have (it is not the latest version with the 3rd dimension, but you will get the concept, I guess).
So I will pass the time and the noise value as attributes to the vertex shader. Unfortunately, I'm using a noise function from a library that requires positions, which would have to be calculated every frame in the shader. Is it possible to output a variable from the shader with the position calculated inside for every particle?
I've also found https://github.com/ashima/webgl-noise/wiki for generating the noise inside the shader, but how do I update each particle's x, y, z position after moving it by the field value, and keep it for the next frame? GLSL is also supposed to have built-in functions for noise generation, but I don't think you can use them with three.js?
Thank you in advance for any advice!
Have a look at this example: http://threejs.org/examples/#webgl_terrain_dynamic
It will give you some idea of how to create noise in shaders and drive positions from it dynamically.
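For the "keep it for the next frame" part of the question, the usual pattern (a sketch under assumed names, not code from the linked example) is to store positions in a pair of float render targets and ping-pong between them: the simulation shader, which is where the webgl-noise field would live, reads last frame's target and writes the new positions into the other one, and then the two are swapped.

```javascript
// Ping-pong sketch: positions survive between frames because they live in
// float render targets that are swapped after every simulation pass.
// All names here are illustrative.
import * as THREE from 'three';

const SIZE = 256;                        // SIZE * SIZE particles
const options = {
  type: THREE.FloatType,
  minFilter: THREE.NearestFilter,
  magFilter: THREE.NearestFilter,
};
let currentTarget  = new THREE.WebGLRenderTarget(SIZE, SIZE, options);
let previousTarget = new THREE.WebGLRenderTarget(SIZE, SIZE, options);

// Full-screen quad that runs the simulation fragment shader; the noise /
// vector-field GLSL (e.g. from webgl-noise) would go into this material.
const simulationMaterial = new THREE.ShaderMaterial({
  uniforms: { positionTexture: { value: null } },
  // vertexShader / fragmentShader omitted here for brevity
});
const simScene = new THREE.Scene();
const simCamera = new THREE.OrthographicCamera(-1, 1, 1, -1, 0, 1);
simScene.add(new THREE.Mesh(new THREE.PlaneGeometry(2, 2), simulationMaterial));

function step(renderer) {
  // Read last frame's positions, write this frame's positions.
  simulationMaterial.uniforms.positionTexture.value = previousTarget.texture;
  renderer.setRenderTarget(currentTarget);
  renderer.render(simScene, simCamera);
  renderer.setRenderTarget(null);

  // Swap, so what was just written becomes the input of the next frame.
  [currentTarget, previousTarget] = [previousTarget, currentTarget];

  // previousTarget.texture now holds the latest positions; the particle
  // material samples it in its vertex shader to place each POINT.
  return previousTarget.texture;
}
```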

Basic approach to pupil constriction/dilation of eye model in OpenGL

I'm new to OpenGL ES and looking for the best approach for creating a realistic model of an eye whose pupil can dilate and constrict, so I have a plan in mind while running through tutorials.
I've made a mesh in Blender that is basically a sphere with a hole (the 'pole', or central vertex, is removed along with a couple of the surrounding edge loops).
I plan to add an iris texture directly to the sphere's polys surrounding the hole.
To change pupil size, do I just need a function to reposition the vertices of the hole so the hole dilates or contracts?
I'm going to use OpenGL within an Objective-C app. I have Jeff Lamarche's Objective-C export script. Is it standard to export only the mesh from Blender and add textures in code later in Xcode? Or is it easier/better to set up the textures on the meshes in Blender first and export the more finished product's data to Xcode?
Your question is a bit old, so I'm not sure how much progress you've made, but as I've been climbing up the learning curve myself I thought I'd take a shot at answering.
If you want to animate the individual vertices of your model, I believe the method you'll want is vertex skinning. I can't speak much on that front as I haven't yet had reason to experiment with it, although it's a technique only available in OpenGL ES 2.0. (That is probably where you want to start anyway; the increased flexibility over 1.1 is more than worth any additional steepness in the learning curve.)
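If full vertex skinning feels like overkill for a single dilating ring of vertices, a lighter option, sketched below in GLSL ES 2.0 (the shading language of OpenGL ES 2.0), is to scale the pupil-rim vertices radially in the vertex shader. The per-vertex `pupilWeight` attribute and the uniform names are hypothetical; you would author the weights yourself (for example, bake them in Blender).

```glsl
// Sketch only: a GLSL ES 2.0 vertex shader that dilates/constricts the pupil
// by scaling vertices radially around the eye's forward axis. The attribute
// `pupilWeight` (1.0 at the pupil rim, falling to 0.0 on the outer sclera)
// and the uniform names are illustrative, not part of any existing code.
attribute vec3 position;
attribute vec2 texCoord;
attribute float pupilWeight;      // authored per vertex, e.g. baked in Blender

uniform mat4 modelViewProjection;
uniform float pupilScale;         // 1.0 = neutral, <1.0 constrict, >1.0 dilate

varying vec2 vTexCoord;

void main() {
  // Assume the pupil is centred on the local z axis; scale the x/y distance
  // from that axis, blended by the per-vertex weight.
  float scale = mix(1.0, pupilScale, pupilWeight);
  vec3 displaced = vec3(position.xy * scale, position.z);

  vTexCoord = texCoord;
  gl_Position = modelViewProjection * vec4(displaced, 1.0);
}
```

On the CPU side you would then animate only the single pupilScale uniform each frame, which keeps the mesh data static.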
The answer to your texturing question is somewhat mixed. You'll need to actually apply the texture in OpenGL, but what Blender can do for you is determine the texture coordinates. Each vertex of your mesh will have a texture coordinate associated with it. The texture coordinate is an X, Y pair that maps to a location on the texture image. The coordinates range from 0.0 to 1.0; since your image texture is a rectangle, the texture coordinate {0, 0} maps to the bottom left corner, {1, 1} maps to the top right corner, and {0.5, 0.5} maps to the exact center of the image.
So in Blender, you'd want to go ahead and texture the object with UV mappings. When you export, although your exported mesh won't contain any of the image content, it will retain the texture coordinates that map to your image content. This will allow you to apply the texture in OpenGL so that it is applied the same way it appeared in Blender.
I've personally had some trouble getting Jeff Lamarche's script to spit out the texture coordinates, as the Blender API seems to change significantly with each release. I've had more success with an .obj converter, so I've been exporting from Blender to .obj and using a command-line tool to go from .obj to a C header file.
If you encounter similar problems with Lamarche's script, this post might help solve it: http://38leinad.wordpress.com/2012/05/29/blender-2-6-exporting-uv-texture-coordinates/
And this is a good resource for a .obj to .h script:
http://heikobehrens.net/2009/08/27/obj2opengl/
