I'm fairly new to shader development and am currently working on an SCNProgram to replace the rendering of a plane geometry.
Within the program's vertex shader I'd like to access the position (basically the anchor position) of the node/mesh as a clip-space coordinate. Is there an easy way to accomplish that, maybe through the supplied node buffer?
I got kinda close with:
xCoordinate = scn_node.modelViewProjectionTransform[3].x / povZPosition
yCoordinate = scn_node.modelViewProjectionTransform[3].y / povZPosition
The POV z position is injected from outside through a custom buffer.
This breaks, though, when the POV is facing the scene at an angle.
I figured that I could probably just calculate the node position by myself via:
renderer.projectPoint(markerNode.presentation.worldPosition)
and then passing that to my shader via "program.handleBinding(ofBufferNamed: …" on every frame. I hope there is a better way, though.
From digging around on Google, the Unity equivalent would probably be: https://docs.unity3d.com/Packages/com.unity.shadergraph#6.9/manual/Screen-Position-Node.html
I would be really thankful for any hints. Attached is a little visualization.
If I'm reading you correctly, it sounds like you actually want the NDC position of the center of the node. This differs subtly from the clip-space position, but both are computable in the vertex shader as:
float4 clipSpaceNodeCenter = scn_node.modelViewProjectionTransform[3];
float2 ndcNodeCenter = clipSpaceNodeCenter.xy / clipSpaceNodeCenter.w;
Related
For a personal project, I've created a simple 3D engine in Python using as few libraries as possible. I did what I wanted: I am able to render simple polygons and have a movable camera. However, there is a problem:
I implemented a simple flat shader, but in order for it to work, I need to know the camera location (the camera is my light source). The problem is that I have no way of knowing the camera's location in world space. At any point, I am able to display my view matrix, but I am unsure how to extract the camera's location from it, especially after I rotate the camera. Here is a screenshot of my engine with the view matrix. The camera has not been rotated yet, and it is very simple to extract its location (0, 1, 4).
However, upon moving the camera to a point between the X and Z axes and pointing it upwards (and staying at the same height), the view matrix changes to this:
It is obvious now that the last column cannot be taken directly as the camera location (it should be something like (4, 1, 4) in the last picture).
I have tried a lot of math, but I can't figure out how to determine the camera's x, y, z location from the view matrix. I would appreciate any and all help in solving this, as it seems to be a simple problem, yet its solution eludes me. Thank you.
EDIT:
I was advised to transform a vertex (0,0,0,1) by my view matrix. This, however, does not work. See the example (the vertex obviously is not located at the printed coordinates):
Just take the transform of the vector (0,0,0,1) with the modelview matrix, which is simply the rightmost column of the modelview matrix.
EDIT: #ampersander: I wonder why you're trying to work with the camera location in the first place, if you assume the source of illumination to be located at the camera's position. In that case, just be aware that in OpenGL there is no such thing as a camera; what the "view" transform actually does is move everything in the world around so that wherever you assume your camera to be ends up at the coordinate origin (0,0,0).
Or in other words: after the modelview transform, the transformed vertex position is in fact the vector from the camera to the vertex, in view space. That means that for your assumed illumination calculation, the direction toward the light source is simply the negated vertex position. Take that, normalize it to unit length, and stick it into the illumination term.
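To make that concrete, here is a minimal GLSL-style vertex shader sketch of the idea (the original engine is in Python, so this is purely illustrative, and the uniform/attribute names are made up):

// Headlight-style lighting without ever knowing the camera's world position.
uniform mat4 u_modelView;    // model space -> view space
uniform mat4 u_projection;
attribute vec3 a_position;
attribute vec3 a_normal;
varying float v_intensity;

void main()
{
    vec4 viewPos = u_modelView * vec4(a_position, 1.0);
    // In view space the camera sits at the origin, so the direction from the
    // vertex toward the light (= camera) is just the negated vertex position.
    vec3 lightDir = normalize(-viewPos.xyz);
    // Rotate the normal into view space (assumes no non-uniform scaling).
    vec3 viewNormal = normalize(mat3(u_modelView[0].xyz, u_modelView[1].xyz, u_modelView[2].xyz) * a_normal);
    v_intensity = max(dot(viewNormal, lightDir), 0.0);
    gl_Position = u_projection * viewPos;
}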
I am writing a particle engine for iOS using MonoTouch and OpenTK. My approach is to project the coordinate of each particle, and then draw a correctly scaled, textured rectangle at that screen location.
It works fine, but I have trouble calculating the correct depth value, so that the sprite correctly overdraws and is overdrawn by the 3D objects in the scene.
This is the code I am using today:
// d = distance to the projection plane
float d = (float)(1.0 / Math.Tan(MathHelper.DegreesToRadians(fovy / 2f)));
Vector3 screenPos;
Vector3.Transform(ref objPos, ref viewMatrix, out screenPos);
float depth = 1 - d / -screenPos.Z;
Then I draw a triangle strip at that screen coordinate, using the depth value calculated above as the z coordinate.
The results are almost correct, but not quite. I guess I need to take the near and far clipping planes into account somehow (near is 1 and far is 10000 in my case), but I am not sure how. I tried various ways and algorithms without getting accurate results.
I'd appreciate some help on this one.
What you really want to do is take your source position and pass it through the modelview and projection matrices (or whatever you've got set up instead, if you're not using the fixed pipeline). Supposing you've used one of the standard calls to set up the stack, such as glFrustum, and otherwise left things at identity, you can get the relevant formula directly from the man page. Reading directly from that, you'd transform as:
z_clip = -((far + near) / (far - near)) * z_eye - (2 * far * near) / (far - near)
w_clip = -z_eye
Then, finally:
z_device = z_clip / w_clip;
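Put together as a small helper (written here in GLSL-style syntax as a sketch; it transcribes directly to the C# above, and near/far are assumed to be the values you passed to glFrustum):

// Maps an eye-space z value to normalized device depth, per the formula above.
float depthFromEyeZ(float zEye, float near, float far)
{
    float zClip = -((far + near) / (far - near)) * zEye - (2.0 * far * near) / (far - near);
    float wClip = -zEye;
    return zClip / wClip;    // z_device, in the range [-1, 1]
}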
EDIT: since you're working in ES 2.0, you can actually avoid the issue entirely. Supply your geometry for rendering as GL_POINTS and perform a normal transform in your vertex shader, but set gl_PointSize to the size in pixels that you want that point to be.
In your fragment shader you can then read gl_PointCoord to get a texture coordinate for each fragment that's part of your point, allowing you to draw a point sprite if you don't want just a single colour.
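A minimal ES 2.0 shader pair along those lines might look like this (a sketch; the uniform/attribute names and the size-scaling heuristic are my own, not something prescribed by GL):

// Vertex shader
uniform mat4 u_modelViewProjection;
uniform float u_viewportHeight;   // used to convert the particle size to pixels
attribute vec3 a_position;
attribute float a_size;           // per-particle size in world units

void main()
{
    gl_Position = u_modelViewProjection * vec4(a_position, 1.0);
    // Crude perspective scaling: points get bigger as they get closer.
    gl_PointSize = a_size * u_viewportHeight / gl_Position.w;
}

// Fragment shader
precision mediump float;
uniform sampler2D u_texture;

void main()
{
    // gl_PointCoord runs from (0, 0) to (1, 1) across the point sprite.
    gl_FragColor = texture2D(u_texture, gl_PointCoord);
}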
I am trying to write a particle system for OpenGL ES 2.0. Each particle is made up of 4 vertices, forming the little square on which a transparent texture is drawn.
The problem is that each particle has its own properties (color, position, size) that are constant across the 4 vertices of that particle. The only thing that varies per vertex is which corner of the square it is.
If I am to send the properties of the particle via uniform variables, I must do:
for (each particle) {   // do maaaany times
    glUniform*(...);    // set this particle's color/position/size
    glDrawArrays(...);  // draw only 4 vertices
}
This is clearly inefficient, since each glDrawArrays call draws only 4 vertices.
If I send these properties via attribute variables, I must fill in the same information 4 times, once per vertex of each particle, in the attribute buffer:
struct particle buf[4 * n];   // 4 copies of every particle's data
for (each particle i) {
    struct particle p;
    p = ...;                  // update particle i
    // duplicate the same data into all 4 corner vertices
    buf[4*i + 0] = buf[4*i + 1] = buf[4*i + 2] = buf[4*i + 3] = p;
}
glBufferData(..., buf, ...);
// then draw everything once afterwards...
which is memory-inefficient and seems very ugly to me. So what is the solution to this problem? What is the right way to pass parameters that change every few vertices to the shader?
Use point sprites. The introduction is very explicit about how to solve your problem.
You can also combine the use of point sprites with another extension, point_size_array.
...
As Christian Rau has commented, point_size_array is no longer useful with the programmable pipeline: set the maximum point size as usual, then discard fragments based on their distance from the point center, derived from the texture coordinates generated by OpenGL. The particle size should be sent via an additional attribute.
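In shader terms that boils down to something like the following fragment shader (a sketch; gl_PointSize itself would be set in the vertex shader from the per-particle size attribute, as described above, and u_texture is an illustrative name):

precision mediump float;
uniform sampler2D u_texture;

void main()
{
    // gl_PointCoord is the texture coordinate GL generates across the point.
    vec2 fromCenter = gl_PointCoord - vec2(0.5);
    if (dot(fromCenter, fromCenter) > 0.25)   // outside the inscribed circle
        discard;
    gl_FragColor = texture2D(u_texture, gl_PointCoord);
}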
GL ES doesn't really have a good solution to this. Desktop OpenGL allows for instancing and various other tricks, but ES just doesn't have those.
You can use a Uniform Buffer Object. Note that this feature is only available on D3D10+ hardware.
Send the information via a texture. I'm not sure that texture sampling is supported in OpenGL ES 2.0 vertex shaders, but if it is, then that would be optimal.
I'm a little bit lost, and this is somewhat related to another question I've asked about fragment shaders, but goes beyond it.
I have an orthographic scene (although that may not be relevant), with the scene drawn here as black, and I have one billboarded sprite that I draw using a shader, shown in red. I have a point that I know and define myself, A, represented by the blue dot, at some x,y coordinate in the 2D coordinate space (the lower left of the screen is the origin). I need to mask the red billboard in a programmatic fashion where I specify 0% to 100%, with 0% being fully intact and 100% being fully masked. I can either pass 0-100% (0.0 to 1.0) into the shader, or I could precompute an angle; either solution would be fine.
( Here you can see the scene drawn with '0%' masking )
So when I set "15%" I want the following to show up:
( Here you can see the scene drawn with '15%' masking )
And when I set "45%" I want the following to show up:
( Here you can see the scene drawn with '45%' masking )
And here's an example of "80%":
The general idea, I think, is to pass in a uniform vec2 'A', and within the fragment shader determine whether the fragment lies in the area swept from the line running from 'A' to the bottom of the screen, clockwise to a line at the correct angle offset from there. If it is within that area, discard the fragment. (Discarding makes more sense than setting alpha to 0.0, or 1.0 if keeping, right?)
But how can I actually achieve this? I don't understand how to implement that algorithm as a shader. (I'm using OpenGL ES 2.0.)
One solution to this would be to calculate the difference between gl_FragCoord (I hope that exists under ES 2.0!) and the point (make sure the point is in screen coordinates), and feed it to the two-argument atan function, which gives you an angle. If the angle is not within the range you want (greater than a minimum and less than a maximum), kill the fragment.
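As a rough sketch of that (the uniform/varying names, the "clockwise from straight down" convention, and the 0-1 progress mapping are all assumptions on my part, not something from the original post):

precision mediump float;
uniform sampler2D u_texture;
uniform vec2 u_point;       // the point 'A', in window coordinates
uniform float u_progress;   // 0.0 = nothing masked, 1.0 = fully masked
varying vec2 v_texCoord;

void main()
{
    vec2 d = gl_FragCoord.xy - u_point;
    // Angle of this fragment around 'A', measured clockwise from straight down.
    float angle = atan(-d.x, -d.y);
    if (angle < 0.0)
        angle += 6.28318530718;            // wrap into [0, 2*pi)
    if (angle < u_progress * 6.28318530718)
        discard;                           // inside the masked wedge
    gl_FragColor = texture2D(u_texture, v_texCoord);
}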
Of course, killing fragments is not precisely the most performant thing to do. A (somewhat more complicated) triangle solution may still be faster.
EDIT:
To better explain "not precisely the most performant thing", consider that killing fragments still causes the fragment shader to run (it only discards the result afterwards) and interferes with early depth/stencil fragment rejection.
Constructing a triangle fan like whoplisp suggested is more work, but will not process any fragments that are not visible, will not interfere with depth/stencil rejection, and may look better in some situations, too (MSAA for example).
Why don't you just draw some black triangles on top of the red rectangle?
I am learning GLSL by going through a tutorial on the Web.
The tutorial has an example called Toon Shading. Here is the link to Toon Shading - Version I.
In this example, the vertex shader is written as follows:
uniform vec3 lightDir;
varying float intensity;
void main()
{
    intensity = dot(lightDir, gl_Normal);
    gl_Position = ftransform();
}
To the best of my understanding, if a surface is rotated, then the normal vectors of the vertices of that surface should also be rotated by the same amount, so that the normals reflect the new orientation of the surface. However, in the code above, the modelview matrix is not applied to the normal vector. The normal vector is used directly to calculate the light intensity.
Regarding my concern, here is what the tutorial says:
"lets assume that the light’s direction is defined in world space."
and
"If no rotations or scales are performed on the model in the OpenGL application, then the normal defined in world space, provided to the vertex shader as gl_Normal, coincides with the normal defined in the local space."
These explanations give me several questions:
1. What are world space and local space? How are they different? (This question seems a little bit elementary, but I need to understand...)
2. I figure the fact that "the light's direction is defined in world space" has something to do with not applying the Model View Matrix to the normal vector of a vertex. But what exactly is the connection?
3. Finally, if we don't apply the Model View Matrix to the normal vector, then wouldn't the normal point in a direction different from the actual direction of the surface? How do we solve this problem?
I hope I made my questions clear.
Thanks!
World space is, well, what it sounds like: the space of the world. It is the common space that all objects that exist in your virtual world reside in. The main purpose of world space is to define a common space that the camera (or eye) can also be positioned within.
Local space, also called "model space" by some, is the space that your vertex attribute data is in. If your meshes come from someone using a tool like 3DS Max, Blender, etc, then the space of those positions and normals is not necessarily the same as world space.
Just to cap things off, eye-space (also called camera-space or view-space) is essentially world space, except everything is relative to the position and orientation of the camera. When you move the camera in world space, you're really just changing the world-to-eye space transformation. The camera in eye-space is always at the origin.
Personally, I get the impression that the Lighthouse3D tutorial people were getting kind of lazy. Rare is the situation in which your vertex normals are in world space, so not showing that you have to transform the normals as well as the positions (transforming positions is what ftransform() does) is misleading.
The tutorial is correct in that if you have a normal in world-space and a light direction in world-space (and you're doing directional lighting, not point-lighting), then you don't need to transform anything. The purpose of transforming the normal is to transform it from local space to the same space as your light direction. Here, they just define that they are in the same space.
Sadly, actual users will not generally have the luxury of defining that their vertex normals are in any space other than local. So they will have to transform them.
Finally, If we don't apply the Model View Matrix on the normal vector then wouldn't the normal be pointing to a direction different from the actual direction of the surface?
Who cares? All that matters is that the two directions are in the same space. It doesn't matter if that space is the same space as the vertex positions' space. As you get farther into graphics, you will find that a space that is convenient for lighting may not be a space that you ever transform positions into.
And that's OK. Because the lighting equation only takes a direction towards the light and a surface normal. It doesn't take a position, unless you're doing point-lighting, and even then, the position is only useful insofar as it lets you calculate a light direction and attenuation factor. You can take the light direction and transform it into whatever space you want.
Some people do lighting in local space. If you're doing bump-mapping, you will often want to do lighting in the space tangent to the plane of the texture.
Addendum:
The standard way to handle normals (assuming that your matrices all still come through OpenGL's standard matrix commands) is to assume that the normals are in the same space as your position data. Therefore, they need to be transformed as well.
However, for reasons that are better explained here (note: in the interest of full disclosure, I wrote that page), you cannot just transform the normals with the matrix you would use for the positions. Fortunately, OpenGL was written expecting this, so if you're using the standard OpenGL matrix stack, it gives you a pre-defined matrix for handling it: gl_NormalMatrix.
A typical lighting scenario in GLSL would look like this:
uniform vec3 eyeSpaceLightDir;
varying float intensity;
void main()
{
    vec3 eyeSpaceNormal = gl_NormalMatrix * gl_Normal;
    intensity = dot(eyeSpaceLightDir, eyeSpaceNormal);
    gl_Position = ftransform();
}
I tend to prefix my variable names with the space they are in, so that it's obvious what's going on.