Limiting the drawing to a rectangle in OpenGL ES

I need to limit the drawing of an object to a rectangle. I can't just change the viewport to match the rectangle because the ModelView matrix (which should transform the rectangle, but not its content) may not be identity. A solution that would work is to draw to an FBO that matches the rectangle and then draw the FBO to the screen, but that seems too slow. Is there a better option?

If I understood you correctly, glScissor should be the function you are looking for. It restricts rendering to a selected sub-rectangle of the viewport without modifying the viewport itself: objects keep their size on screen, you are simply prevented from drawing any pixels outside the scissor region. If this is not what you want, and you instead want the sub-rectangle to contain the whole scene (and thus your objects to shrink), then changing the viewport is the solution of choice.
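A minimal sketch of the scissor approach (x, y, w and h describe the target rectangle in window coordinates, with the origin at the lower-left corner):

    // Fragments outside the rect are discarded; the viewport, the ModelView
    // and the Projection matrices all stay untouched.
    glEnable(GL_SCISSOR_TEST);
    glScissor(x, y, w, h);
    // ... draw the object as usual ...
    glDisable(GL_SCISSOR_TEST);
    // Note: while enabled, the scissor test also limits glClear to the rect.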
EDIT: If you want the rectangle to be transformable, and especially rotatable (so that it is no longer a rectangle on screen), then rendering into an FBO and using that as a texture on a quad is probably the best solution. Otherwise you could probably also modify the vertex coordinates after projection, i.e. multiply the transformation matrix of the target rectangle with the projection matrix and use the result as the new projection matrix, but I'm not completely sure about that (at least something similar should do it).
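If you want to experiment with the latter idea under OpenGL ES 1.x, the fixed-function matrix stack lets you pre-multiply the rectangle's transform into the projection; a sketch, where rectTransform and projection are hypothetical column-major float[16] arrays:

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glMultMatrixf(rectTransform);   // applied last: squeezes the scene into the rect
    glMultMatrixf(projection);      // the original projection matrix
    glMatrixMode(GL_MODELVIEW);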

Related

Is it possible to select multiple objects with box selection?

I know that with QObjectPicker I can mouse pick a single entity. But how can I select multiple objects by drawing a rectangle on the screen?
I think this is actually pretty complicated. But here are my two cents:
If you only need to be able to select unoccluded objects
(i.e. don't need to select occluded ones) you could add a second frame graph branch to your existing one and draw each object with a unique color, but to an offscreen texture. Then retrieve this texture, check which colors lie within the drawn rectangle, and select the corresponding objects (compare to this question/answer); a small sketch of the color-ID bookkeeping follows below.
I'm not sure how well this works in Qt3D because I've always had some issues with QRenderCapture. Where I added it in the frame graph didn't seem to have any effect, i.e. it always captured the last state, so even if you have multiple render targets it might capture the wrong one. Qt3D is still in a pretty rough state, I'd say.
If you need an example of how to render to an offscreen texture, check out my example on GitHub.
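A sketch of the color-ID bookkeeping in plain C++ (the names are illustrative; in Qt3D the readback itself would go through QRenderCapture or similar):

    #include <cstdint>
    #include <set>
    #include <vector>

    // Encode an entity index into the flat color used for the offscreen pass.
    // Index 0 is reserved for the background.
    void idToColor(uint32_t id, uint8_t& r, uint8_t& g, uint8_t& b)
    {
        r = uint8_t((id >> 16) & 0xFF);
        g = uint8_t((id >> 8) & 0xFF);
        b = uint8_t(id & 0xFF);
    }

    // pixels: RGBA8 readback of the offscreen ID texture, row-major.
    // Returns the entity indices covered by the rectangle (x0,y0)-(x1,y1).
    std::set<uint32_t> idsInRect(const std::vector<uint8_t>& pixels, int texWidth,
                                 int x0, int y0, int x1, int y1)
    {
        std::set<uint32_t> ids;
        for (int y = y0; y < y1; ++y)
            for (int x = x0; x < x1; ++x)
            {
                const uint8_t* p = &pixels[4 * (y * texWidth + x)];
                uint32_t id = (uint32_t(p[0]) << 16) | (uint32_t(p[1]) << 8) | p[2];
                if (id != 0)               // 0 = background, nothing there
                    ids.insert(id);
            }
        return ids;
    }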
If you need to be able to select occluded objects too
then it gets pretty complicated. I'm just providing some ideas here. I don't know if they will work.
If you don't have that many objects maybe you could implement the idea from above for each single object. I.e. for each object you have an offscreen frame graph branch that filters out all other objects. Then you could check each rendered texture for the rectangle drawn with the mouse. But again I'm not sure how well this works with Qt3D and if you have many objects (like in a game) it will probably crash because of the many offscreen textures.
You could also implement something like "inverse" frustum culling. In frustum culling, you omit rendering objects that lie outside the view frustum of the camera; here, you instead compute a frustum from the rectangle coordinates drawn with the mouse. Check out the QFrustumCulling code. You would of course need to compute the planes differently, using a modified view matrix. When the user draws the rectangle, compute the frustum and check all objects against it. Unfortunately, this also selects objects whose bounding sphere merely intersects the frustum, even though no visible part of the object touches the rectangle. If that bothers you, you could directly select all objects whose sphere lies completely within the frustum and, for the objects that only partly intersect, do the intersection computation on a per-triangle basis, exiting the computation for the current object as soon as one triangle intersects the frustum. Depending on the number of triangles this could be computationally very costly. A sketch of the plane construction follows below.
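One way to build such a sub-frustum, sketched here with GLM (not from the original answer): construct a "crop" matrix that maps the selection rectangle, given in normalized device coordinates, onto the full NDC cube; the standard plane-extraction trick applied to crop * viewProj then yields the planes of the smaller frustum.

    #include <array>
    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_access.hpp>   // glm::row

    // (x0,y0)-(x1,y1): selection rectangle in NDC, each coordinate in [-1, 1].
    std::array<glm::vec4, 6> frustumFromRect(const glm::mat4& viewProj,
                                             float x0, float y0, float x1, float y1)
    {
        glm::mat4 crop(1.0f);                         // maps the rect onto full NDC
        crop[0][0] = 2.0f / (x1 - x0);
        crop[1][1] = 2.0f / (y1 - y0);
        crop[3][0] = -(x1 + x0) / (x1 - x0);
        crop[3][1] = -(y1 + y0) / (y1 - y0);

        const glm::mat4 m = crop * viewProj;
        std::array<glm::vec4, 6> planes = {
            glm::row(m, 3) + glm::row(m, 0),          // left
            glm::row(m, 3) - glm::row(m, 0),          // right
            glm::row(m, 3) + glm::row(m, 1),          // bottom
            glm::row(m, 3) - glm::row(m, 1),          // top
            glm::row(m, 3) + glm::row(m, 2),          // near
            glm::row(m, 3) - glm::row(m, 2),          // far
        };
        for (glm::vec4& p : planes)
            p /= glm::length(glm::vec3(p));           // normalize for distance tests
        return planes;
    }

    // A bounding sphere intersects the sub-frustum if it is not entirely
    // behind any single plane (the conservative test mentioned above).
    bool sphereIntersects(const std::array<glm::vec4, 6>& planes,
                          const glm::vec3& center, float radius)
    {
        for (const glm::vec4& p : planes)
            if (glm::dot(glm::vec3(p), center) + p.w < -radius)
                return false;
        return true;
    }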
I'd definitely stick to selecting only unoccluded objects, especially because picking in OpenGL seems to be realized by drawing the objects with colors these days.

how do I get a projection matrix I can use for a pointlight shadow map?

I'm currently working on a project that uses shadow textures to render shadows.
It was pretty easy for spotlights, since only one texture in the direction of the spotlight is needed, but it's a little more difficult for point lights, since they need either 6 textures in all directions or one texture that somehow captures all the objects around the point light.
And that's where my problem is. How can I generate a projection matrix that somehow renders all the objects in a 360° view around the point light?
Basically: how do I create a fisheye (or any other 360-degree camera) vertex shader?
How can I generate a projection matrix that somehow renders all the objects in a 360° view around the point light?
You can't. A 4x4 projection matrix in homogeneous space cannot represent any operation that would bend the edges of polygons: a straight line stays a straight line.
Basically: how do I create a fisheye (or any other 360-degree camera) vertex shader?
You can't do that either, at least not in the general case. And this is not a limitation of the projection matrix in use, but a general limitation of the rasterizer. You could of course put the formula for a fisheye distortion into the vertex shader, but the rasterizer will still rasterize each triangle with straight edges; you merely distort the positions of the corner points of each triangle. This means it will only be correct for tiny triangles covering a single pixel each. For larger triangles, you completely screw up the image, and if you have things like T-joints, this even results in holes or overlaps in objects which should actually be perfectly closed.
It was pretty easy for spotlights, since only one texture in the direction of the spotlight is needed, but it's a little more difficult for point lights, since they need either 6 textures in all directions or one texture that somehow captures all the objects around the point light.
The correct solution for this would be a single cube map texture, which provides 6 faces. In a perfect cube, each face can then be rendered with a standard symmetric perspective projection with a field of view of 90 degrees both horizontally and vertically.
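A minimal sketch of the per-face matrices, assuming GLM (lightPos, nearZ and farZ are placeholders; the up vectors follow the usual cube map face orientation convention):

    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>

    // One shared 90-degree square projection plus a look-at view per face.
    void buildCubeShadowMatrices(const glm::vec3& lightPos,
                                 float nearZ, float farZ, glm::mat4 out[6])
    {
        const glm::mat4 proj =
            glm::perspective(glm::radians(90.0f), 1.0f, nearZ, farZ);
        const glm::vec3 dirs[6] = { { 1, 0, 0}, {-1, 0, 0},   // +X, -X
                                    { 0, 1, 0}, { 0,-1, 0},   // +Y, -Y
                                    { 0, 0, 1}, { 0, 0,-1} }; // +Z, -Z
        const glm::vec3 ups[6]  = { { 0,-1, 0}, { 0,-1, 0},
                                    { 0, 0, 1}, { 0, 0,-1},
                                    { 0,-1, 0}, { 0,-1, 0} };
        for (int face = 0; face < 6; ++face)
            out[face] = proj * glm::lookAt(lightPos, lightPos + dirs[face], ups[face]);
    }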
In modern OpenGL, you can use layered rendering. In that case, you attach each of the 6 faces of the cube map as a layer of a single FBO and use the geometry shader to amplify your geometry 6 times, transforming it according to the 6 different projection matrices, so that you still need only one render pass for the complete shadow map.
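A sketch of such a geometry shader, embedded here as a C++ string literal (it assumes the vertex shader passes world-space positions through, and uShadowMatrices holds the six matrices built above):

    const char* kCubeFaceGS = R"glsl(
    #version 330 core
    layout(triangles) in;
    layout(triangle_strip, max_vertices = 18) out;

    uniform mat4 uShadowMatrices[6];    // one view-projection per cube face

    void main()
    {
        for (int face = 0; face < 6; ++face)
        {
            gl_Layer = face;            // route this copy of the triangle to face N
            for (int i = 0; i < 3; ++i)
            {
                gl_Position = uShadowMatrices[face] * gl_in[i].gl_Position;
                EmitVertex();
            }
            EndPrimitive();
        }
    }
    )glsl";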
There are some other vendor-specific extensions which might be used to further optimize the cube map rendering, like Nvidia's NV_viewport_swizzle (available on Maxwell and newer GPUs), but I only mention this for completeness.

Zoom invariant objects in webgl

WebGL doesn't support line thickness, so when I need to highlight a line I just draw a rectangle around it. But when I zoom the scene it looks pretty scary.
There are two ways I see now:
1) Recalculate the rectangle's width from canvas.width into model coordinates.
2) Place all zoom-invariant objects under a separate matrix (I use scenejs) and recalculate their positions after each mouse wheel event.
I don't like either of these solutions. So I wonder: is there a good workaround to make items zoom invariant?
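For reference, option 1) boils down to a single conversion per zoom change; a minimal sketch, assuming an orthographic projection whose visible world width is orthoWidth (all names here are placeholders):

    // Desired on-screen thickness in pixels -> model units. As you zoom in,
    // orthoWidth (right minus left of the projection) shrinks, so the result
    // shrinks with it and the on-screen thickness stays constant. Recompute
    // after every zoom change.
    float pixelsToModelUnits(float pixels, float orthoWidth, float canvasWidthPx)
    {
        return pixels * orthoWidth / canvasWidthPx;
    }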
Another way around that (not the most efficient one, though) might be to use shaders. In our WebGL app, we render the highlighted primitives into a texture and then blur it back onto the screen to add a "selection glow" effect.

How to get consistent gradient fill in GDI+ when using a rotated LinearGradientBrush?

I'm using GDI+ in my application, and I need to use a rotated LinearGradientBrush to paint several rects in the exact same way. However, although I'm calling the same code to fill each rect, the results aren't what I expect. Here's the code to create the gradient fill, where rcDraw is the rect containing the area to paint for each rect. These coordinates are in the parent window's coordinates, so they are not identical for the 2 rects.
g_hbrLinear = new LinearGradientBrush( Rect( 0, rcDraw.top, 0, rcDraw.bottom - rcDraw.top ),
                                       clrStart, clrEnd, (REAL) 80, FALSE );
What I see on screen looks like this (http://www.nnanime.com/bugs/LinGradBrush-rotate10.png). You can see that it's as if the fill from the first rect continues into the second one. What I really want is to have the 2 rects look identical. I think I can do that if I paint each rect separately using its own client coordinates, but for the purposes of my app, I need to use the parent window's coordinates.
I guess what I'm asking is, how does GDI+ calculate the "origin" of a fill? Is it always based on 0,0 in the coordinate system you use? Is there a way to shift it? I tried TranslateTransform, but it doesn't seem to shift the fill in a way that I find predictable or understandable.
The rect passed to the linear gradient brush determines where the start and end colors will sit, and the gradient will be painted within this rectangle.
So, I think you need to create a brush for each rectangle you are painting, passing the rectangle being painted to the constructor of the linear gradient brush as well; a minimal sketch follows below.
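A minimal sketch (rcPaint is a hypothetical Gdiplus::Rect for the rect being filled; the colors and the 80-degree angle are taken from the question):

    #include <windows.h>
    #include <gdiplus.h>
    using namespace Gdiplus;

    // Build the gradient from the same rect being filled, so the gradient's
    // origin travels with the rectangle and every rect is painted identically.
    void FillWithLocalGradient(Graphics& g, const Rect& rcPaint,
                               const Color& clrStart, const Color& clrEnd)
    {
        LinearGradientBrush brush(rcPaint, clrStart, clrEnd, 80.0f, FALSE);
        g.FillRectangle(&brush, rcPaint);
    }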
My experience with the "transform" of linear gradient brushes matches yours; I haven't been able to understand what it's supposed to do.
You can think of a brush in GDI+ as a function mapping world co-ordinates to a color. What the brush looks like at a given point does not change based on the shape being filled.
It does change with the transform of the Graphics object you're drawing on. So, if you don't want to change the brush, you could temporarily change the transform of the Graphics object so that the rectangle you're drawing has a specific, known size and position in world coordinates. The BeginContainer and EndContainer methods should make this easy.
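A sketch of that container trick, assuming the shared brush g_hbrLinear was created for the normalized rect Rect(0, 0, Width, Height) and rcPaint is the hypothetical target rect in parent coordinates:

    // Temporarily shift the origin so the rect spans (0,0)-(W,H) in world
    // coordinates; the shared gradient brush then paints every target the same.
    GraphicsContainer state = g.BeginContainer();
    g.TranslateTransform((REAL) rcPaint.X, (REAL) rcPaint.Y);
    g.FillRectangle(g_hbrLinear, Rect(0, 0, rcPaint.Width, rcPaint.Height));
    g.EndContainer(state);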
(There is also the RenderingOrigin property but it only affects hatch brushes, which oddly are unaffected by world transforms.)

Does OpenGL (ES) draw objects completely outside of the viewport?

I have a viewport set up with an orthographic projection. If I ask OpenGL to draw a quad outside of the viewport's x/y bounds using glDrawArrays(), does it ignore the quad, or still draw it?
OpenGL will still process your vertices (modelview transform etc.), because that's how it figures out where the pixels would end up. But when it comes to actually rendering, a primitive that lies entirely outside the view volume is clipped away, so no fragments are ever written to the framebuffer. Depending on exactly where the coordinates are, and on other factors, OpenGL may be able to stop processing your vertices sooner, but in general it will at least perform all the coordinate transformations.
So in a word: no, it will not 'draw' them.
