I want to render GUI components in my OpenGL program. They are still just simple textured (VBO) rectangles.
I would like to get the following things done, the right way.
Drawing using screen coordinates, or at least a coordinate system that isn't based on normalized floating-point values. For example: right now the coordinate system runs from -1.0 to 1.0 (left to right of the screen). It would be more logical to use screen/pixel coordinates.
If it's easy to do, I'd like the GUI not to stretch when the window (viewport) is resized.
I know that previously you could do a lot of this using the deprecated function glOrtho. But since I want to do it the modern way, which is hopefully also better for performance, I don't know where to start.
After searching the internet, I came to the conclusion that I have to use a shader. I'm not very familiar with shaders.
And another question: does performance increase when doing this using a shader?
What you do with modern OpenGL is essentially the same as using glOrtho to set up an orthographic projection matrix: create a transformation (matrix) that maps coordinates 1:1 into viewport coordinates and use that to transform your vertices.
For example, you could create a vec2 uniform called viewport and set it to the viewport width/height. Then, in the vertex shader, you can use it to transform your pixel-coordinate vertex positions into the range [-1,1], like this:
gl_Position = vec4(2*vpos.xy / viewport.xy - 1, 0, 1);
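A minimal vertex shader built around that idea could look like this (a sketch: the names viewport, vpos and vtex are placeholders for your own bindings, and the y flip makes pixel (0,0) the top-left corner, which is usually what you want for a GUI):

#version 330 core
in vec2 vpos;            // vertex position in pixel coordinates
in vec2 vtex;            // texture coordinate
out vec2 tex;
uniform vec2 viewport;   // viewport width/height in pixels

void main()
{
    // map [0,width] x [0,height] to [-1,1] x [-1,1]
    vec2 ndc = 2.0 * vpos / viewport - 1.0;
    gl_Position = vec4(ndc.x, -ndc.y, 0.0, 1.0); // flip y for a top-left origin
    tex = vtex;
}

On the application side you update the uniform whenever the window is resized, e.g. glUniform2f(viewportLoc, width, height). Since the same pixel coordinates then always map to the same screen positions, the GUI no longer stretches on resize. As for performance: the transform is a single multiply-add per vertex, so don't expect a measurable gain over what glOrtho did; the benefit is staying on the modern, non-deprecated path.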
I'm currently working on a project that uses shadow textures to render shadows.
It was pretty easy for spotlights, since only one texture in the direction of the spotlight is needed, but point lights are a little more difficult, since they need either six textures in all directions or one texture that somehow captures all the objects around the point light.
And that's where my problem is: how can I generate a projection matrix that somehow renders all the objects in a 360° view around the point light?
Basically, how do I create a fisheye (or any other 360-degree camera) vertex shader?
How can I generate a projection matrix that somehow renders all the objects in a 360° view around the point light?
You can't. A 4x4 projection matrix in homogeneous space cannot represent any operation that would bend the edges of polygons. A straight line stays a straight line.
Basically, how do I create a fisheye (or any other 360-degree camera) vertex shader?
You can't do that either, at least not in the general case. And this is not a limit of the projection matrix in use, but a general limitation of the rasterizer. You could of course put the formula for fisheye distortion into the vertex shader. But the rasterizer will still rasterize each triangle with straight edges; you would just distort the positions of the corner points of each triangle. This means the result will only be correct for tiny triangles covering a single pixel. For larger triangles, you completely screw up the image. If you have stuff like T-joints, this even results in holes or overlaps in objects which should actually be perfectly closed.
It was pretty easy for spotlights, since only one texture in the direction of the spotlight is needed, but point lights are a little more difficult, since they need either six textures in all directions or one texture that somehow captures all the objects around the point light.
The correct solution for this is a single cube map texture, which provides 6 faces. Each face can then be rendered with a standard symmetric perspective projection with a field of view of 90 degrees both horizontally and vertically.
In modern OpenGL, you can use layered rendering. In that case, you attach the cube map texture to an FBO as a layered attachment (each of the 6 faces becomes one layer) and use the geometry shader to amplify your geometry 6 times, transforming it according to the 6 different projection matrices, so that you still need only one render pass for the complete shadow map.
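A minimal sketch of such a geometry shader (assuming the vertex shader passes world-space positions through unchanged, and a uniform array shadowMatrices holding one view-projection matrix per cube face; both names are this example's, not a fixed convention):

#version 330 core
layout(triangles) in;
layout(triangle_strip, max_vertices = 18) out; // 6 faces * 3 vertices

uniform mat4 shadowMatrices[6]; // one 90-degree view-projection per face

void main()
{
    for (int face = 0; face < 6; ++face) {
        gl_Layer = face; // route the triangle to this cube map face
        for (int i = 0; i < 3; ++i) {
            gl_Position = shadowMatrices[face] * gl_in[i].gl_Position;
            EmitVertex();
        }
        EndPrimitive();
    }
}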
There are some other vendor-specific extensions which might be used to further optimize the cube map rendering, like Nvidia's NV_viewport_swizzle (available on Maxwell and newer GPUs), but I mention this only for completeness.
Is there any way to render to a different set of screen coordinates than the standard equidistant grid from (-1,-1) to (1,1)?
I am not talking about something that can be accomplished by transformations in the vertex shader.
Specifically, ES2 would be nice, but any version is fine.
Is this even directly OpenGL related, or is the standard grid typically provided by plumbing libraries?
No, there isn't any other way. The values you write to gl_Position in the vertex (or tessellation or geometry) shader are clip space coordinates. The GPU will convert these to normalized device space (the "[-1,1] grid") by dividing by the w coordinate (after the actual primitive clipping, of course) and will finally use the viewport parameters to transform the results to window space.
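Spelled out, the fixed steps after the vertex shader are (with x, y, width and height being the glViewport parameters, and ignoring the depth range for brevity):

ndc = clip.xyz / clip.w                  // perspective divide
window.x = x + (ndc.x + 1) * width / 2   // viewport transform
window.y = y + (ndc.y + 1) * height / 2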
There is no way to directly use window coordinates when you want to use the rendering pipeline. There are some operations which bypass large parts of that pipeline, like framebuffer blitting, which provides a limited way to draw some things directly to the framebuffer.
However, working with pixel coordinates isn't hard to achieve. You basically have to invert the transformations the GPU will be doing. While typically "ortho" matrices are used for this, the only required operations are a scale and a translation, which boils down to a fused multiply-add per component, so you can implement this extremely efficiently in the vertex shader, as in the sketch below.
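For example, a sketch of such a vertex shader in GLSL ES 2.0 (the uniform names are placeholders; you would precompute scale = (2/width, -2/height) and offset = (-1, 1) on the CPU for a top-left pixel origin):

attribute vec2 pos;    // position in pixel coordinates
uniform vec2 scale;    // precomputed: (2/width, -2/height)
uniform vec2 offset;   // precomputed: (-1, 1)

void main()
{
    // one fused multiply-add per component, as described above
    gl_Position = vec4(pos * scale + offset, 0.0, 1.0);
}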
I'm writing a DirectX 11 overlay for a game. Creating textures is quite simple and I have good knowledge of C/C++.
The problem I am having is that in my test window I can draw the texture, but as soon as I change the camera angle the texture moves with it. That is what most people want, but not what I need here.
What I want to know is: how do I draw something in 2D so that it always appears at the same place on screen, whether the camera moves or not?
Basically, since you use DX11, you use shaders to render your elements.
So standard 3D objects generally follow this guideline:
- Use 3 transforms: world (positions the object), view (transforms into camera space), projection (transforms into screen space).
In your vertex shader you multiply by all that lot to convert from 3D to 2D.
Since what you want now is to display your elements in 2D (not relative to the camera), you can easily create a new shader that doesn't take view/projection into account, so you just don't use those matrices in your vertex shader. (You can still use world for 2D transformations.)
That's pretty much the easiest way. If you need pixel-precise 2D elements, you need to create a billboard transform/shader: you have your render target resolution, and standard render space is -1 to 1, so you modify scale/translation to convert between those two spaces, as in the sketch below.
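A sketch of such a vertex shader in HLSL (the constant buffer layout and the name ScreenSize are assumptions of this example; positions are given in pixels with the origin at the top-left):

cbuffer PerFrame : register(b0)
{
    float2 ScreenSize; // render target resolution in pixels
};

struct VSIn  { float2 pos : POSITION; float2 uv : TEXCOORD0; };
struct VSOut { float4 pos : SV_POSITION; float2 uv : TEXCOORD0; };

VSOut VSMain(VSIn input)
{
    VSOut output;
    // pixel coordinates -> [-1,1] clip space, flipping y for a top-left origin
    float2 ndc = input.pos / ScreenSize * 2.0 - 1.0;
    output.pos = float4(ndc.x, -ndc.y, 0.0, 1.0);
    output.uv = input.uv;
    return output;
}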
When you render your overlay, also ensure that you disable depth testing completely.
If you need a sample, let me know and I'll make one up quickly, but it should be quite simple.
I'm making a small OpenGL Mac app that uses point sprites. I'm using a vertex array to draw them, and I want to use a similar "array" function to give them all different sizes.
In OpenGL ES, there is a client state called GL_POINT_SIZE_ARRAY_OES, and a corresponding function glPointSizePointerOES() which do exactly what I want, but I can't seem to find an equivalent in standard OpenGL.
Does OpenGL support this in any way?
To expand a little on Fen's answer, the fixed-function OpenGL pipeline can't do exactly what you want. It can do 'perspective' points which get smaller as the Z distance increases, but that's all.
For arbitrary point size at each vertex you need a custom vertex shader to set the size for each point. Pass the point sizes either as an attribute array (re-use surface normals or tex coords, or use your own attribute index) or in a texture map, say a 1D texture with width equal to the size of the point array. The shader code example referred to by Fen uses the texture map technique.
Desktop OpenGL does not support this OpenGL ES extension, but you can do it another way:
For the fixed pipeline (OpenGL 1.4 and above), you need to set up the point parameters:
float attenuation[3] = {0.0f, 1.0f, 0.0f}; // a, b, c in size*sqrt(1/(a + b*d + c*d*d))
// these entry points are core since OpenGL 1.4 as glPointParameterfv/glPointParameterf
glPointParameterfvEXT(GL_POINT_DISTANCE_ATTENUATION, attenuation);
glPointParameterfEXT(GL_POINT_SIZE_MIN, 1.0f);   // clamp the attenuated size
glPointParameterfEXT(GL_POINT_SIZE_MAX, 128.0f);
glEnable(GL_POINT_SPRITE);
OpenGL will then calculate the point size for you: the derived size is size * sqrt(1/(a + b*d + c*d*d)) for eye-space distance d, so with the attenuation above points shrink with the square root of their distance.
Shaders
Here is some info for rendering using shaders:
http://en.wikibooks.org/wiki/OpenGL_Programming/Scientific_OpenGL_Tutorial_01
If by "Does OpenGL support this", you mean "Can I do something like that in OpenGL", absolutely.
Use shaders. Pass a 1-dimensional generic vertex attribute that represents your point size, and in your vertex shader write that value to the gl_PointSize output. It's really quite simple.
If you meant, "Does fixed-function OpenGL support this," no.
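A sketch of that vertex shader (attribute and uniform names are placeholders; on desktop OpenGL, also call glEnable(GL_PROGRAM_POINT_SIZE) so the value written in the shader is actually used):

#version 330 core
in vec3 position;
in float pointSize;   // generic per-vertex attribute holding the size in pixels
uniform mat4 mvp;     // your model-view-projection matrix

void main()
{
    gl_Position = mvp * vec4(position, 1.0);
    gl_PointSize = pointSize; // per-vertex replacement for glPointSize
}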
I'm using WIC and Direct2D (via SharpDX) to composite photos into video frames. For each frame I have the exact coordinates where each corner will be found. While the photos themselves are a standard aspect ratio (e.g. 4:3 or 16:9) the insertion points are not -- they may be rotated, scaled, and skewed.
Now I know in Direct2D I can apply matrix transformations to accomplish this... but I'm not exactly sure how. The examples I've seen are more about applying specific transformations (e.g. rotate 30 degrees) than trying to match an exact destination.
Given that I know the exact coordinates (A,B,C,D) above, is there an easy way to map the source image onto the target? Alternately how would I generate the matrix given the source and destination coordinates?
If Direct3D is an option, all you need to do is render the quadrilateral as two triangles (with the frog texture mapped onto it).
To make sure there are no artifacts, render the quad as an indexed mesh, like in the example here (note that it shares vertex 0 and vertex 2 across both triangles). Of course, you can replace the actual vertex coordinates with A, B, C and D.
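For illustration, the vertex and index data could look roughly like this (the Vertex struct is hypothetical, ax..dy stand for the pixel coordinates of corners A..D, and you may need to adjust the winding order to match your cull mode):

// hypothetical vertex layout: 2D position plus texture coordinate
struct Vertex { float x, y, u, v; };

Vertex quad[4] = {
    { ax, ay, 0.0f, 0.0f },   // vertex 0 -> texture corner (0,0)
    { bx, by, 1.0f, 0.0f },   // vertex 1 -> texture corner (1,0)
    { cx, cy, 1.0f, 1.0f },   // vertex 2 -> texture corner (1,1)
    { dx, dy, 0.0f, 1.0f },   // vertex 3 -> texture corner (0,1)
};

// two triangles sharing vertices 0 and 2, as described above
unsigned short indices[6] = { 0, 1, 2,   0, 2, 3 };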
To begin, you can check out these tutorials for SlimDX, an excellent set of .NET bindings to DirectX.
It is not really possible to achieve this with Direct2D. It could be possible with Direct2D 1.1 (from Win8 Metro) with a custom vertex shader, but in the end, as ananthonline suggests, it will be much easier to do it with Direct3D 11.
Also, you can use a triangle strip primitive, which is easier to set up (you don't need to create an index buffer). For the coordinates, you can send them to the vertex shader without any transforms (the vertex shader copies the input position straight through to SV_POSITION for the pixel shader). You just have to map your coordinates into x in [-1,1] and y in [-1,1]. I suggest you start with the SharpDX MiniCubeTexture sample and change the matrix to perform an orthographic projection (instead of the sample's perspective one).