How do I draw a line on a Surface? - go

I know there's a function to fill rectangles, surface.FillRect(&Rect, uint32), but is there a way to draw a line on the surface, like the renderer's renderer.DrawLine(x1, y1, x2, y2)?

In SDL2, a Surface is just a structure that represents a bitmap (a 2D matrix of pixels) in a driver-agnostic format. In terms of drawing, the Surface API is limited to setting pixels, or easily defined groups of them (rectangles), and to blitting (copying) rectangles from one surface to another. With that API you can technically draw a line by calculating the coordinates of each pixel on the line between (x1, y1) and (x2, y2) and setting each one to the desired color, but SDL2 itself ships no such utility function.
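For illustration, here is a minimal sketch of that pixel-by-pixel approach in Go, using Bresenham's line algorithm and go-sdl2's Surface.Set (available because *sdl.Surface implements Go's image interfaces); the drawLine helper itself is not part of SDL2:

```go
package main

import (
	"image/color"

	"github.com/veandco/go-sdl2/sdl"
)

// drawLine colors every pixel on the line from (x1,y1) to (x2,y2)
// using the integer-only Bresenham algorithm.
func drawLine(s *sdl.Surface, x1, y1, x2, y2 int, c color.Color) {
	dx, dy := abs(x2-x1), abs(y2-y1)
	sx, sy := 1, 1
	if x1 > x2 {
		sx = -1
	}
	if y1 > y2 {
		sy = -1
	}
	err := dx - dy
	for {
		s.Set(x1, y1, c) // Surface implements Go's draw.Image
		if x1 == x2 && y1 == y2 {
			return
		}
		e2 := 2 * err
		if e2 > -dy { // step in x
			err -= dy
			x1 += sx
		}
		if e2 < dx { // step in y
			err += dx
			y1 += sy
		}
	}
}

func abs(n int) int {
	if n < 0 {
		return -n
	}
	return n
}
```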
Renderer is a much more advanced API that takes care of many underlying aspects of accelerated rendering. On top of single pixels and rectangles, it also offers line primitives. Instead of a Surface it uses a Texture, which stores pixels in a format optimized for the underlying graphics driver (and thus draws faster). If you are working on something new, you should stick with Renderer. If you need to plug Surface-based code into Renderer-based code, you can create a Texture from Surface data using the Renderer.CreateTextureFromSurface() method.
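A rough sketch of both routes with go-sdl2, assuming renderer and surface were created elsewhere (error handling mostly elided):

```go
// Drawing a line directly with the Renderer API:
renderer.SetDrawColor(255, 0, 0, 255) // opaque red
renderer.DrawLine(10, 10, 200, 120)

// Plugging Surface-based code into Renderer-based code:
texture, err := renderer.CreateTextureFromSurface(surface)
if err != nil {
	panic(err)
}
defer texture.Destroy()
renderer.Copy(texture, nil, nil) // nil rects copy the whole texture to the whole target
renderer.Present()
```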

Related

OpenGL 2d (orthographic) rendering for GUI, the modern way

I want to render GUI components in my OpenGL program. They are still just simple textured (vbo) rectangles.
I would like to get the following things done, the right way.
Drawing using screen coordinates, or at least a coordinate system that isn't based on perspective-style floating-point values. For example, the coordinate system currently runs from -1f to 1f (left to right of the screen); it would be more logical to use screen/pixel coordinates.
If it's easy to do, I'd like that the GUI doesn't stretch when the window (viewport) resizes.
I know that previously you could do a lot with the deprecated glOrtho function. But since I want to do it the modern way, which is hopefully also better for performance, I don't know where to start.
After searching on the internet, I came to the conclusion that I have to use a shader. I'm not very familiar with shaders.
And another question: does performance increase when doing this using a shader?
What you do with modern OpenGL is essentially the same as using glOrtho to set up an orthographic projection matrix: create a transformation (matrix) that maps coordinates 1:1 into viewport coordinates and use that to transform the coordinates.
For example, you could create a vec2 uniform viewport and set it to the viewport width/height. Then in the vertex shader you can use it to transform your pixel-coordinate vertex positions into the range [-1,1], like this:
gl_Position = vec4(2*vpos.xy / viewport.xy - 1, 0, 1);
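A hedged sketch of how this could look end to end, with the shader embedded as a Go string the way go-gl programs typically carry GLSL; the vpos name follows the snippet above, while program, windowWidth, and windowHeight are assumed to exist:

```go
const guiVertexShader = `#version 330 core
uniform vec2 viewport; // window size in pixels
in vec2 vpos;          // vertex position in pixel coordinates
void main() {
	gl_Position = vec4(2.0 * vpos / viewport - 1.0, 0.0, 1.0);
}
` + "\x00" // go-gl expects null-terminated source strings

// After compiling and linking the program, upload the current window size:
loc := gl.GetUniformLocation(program, gl.Str("viewport\x00"))
gl.UseProgram(program)
gl.Uniform2f(loc, float32(windowWidth), float32(windowHeight))
```

Re-uploading viewport whenever the window is resized keeps positions in pixels, so the GUI no longer stretches with the viewport.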

How do I get a projection matrix I can use for a point light shadow map?

I'm currently working on a project that uses shadow textures to render shadows.
It was pretty easy for spotlights, since only one texture facing the direction of the spotlight is needed, but a point light is a little more difficult, since it needs either 6 textures covering all directions or 1 texture that somehow captures all the objects around the point light.
And that's where my problem is. How can I generate a projection matrix that somehow renders all the objects in a 360-degree view around the point light?
Basically, how do I create a fisheye (or any other 360-degree camera) vertex shader?
How can I generate a projection matrix that somehow renders all the objects in a 360-degree view around the point light?
You can't. A 4x4 projection matrix in homogeneous space cannot represent any operation that would result in bending the edges of polygons. A straight line stays a straight line.
Basically, how do I create a fisheye (or any other 360-degree camera) vertex shader?
You can't do that either, at least not in the general case. And this is not a limit of the projection matrix in use, but a general limit of the rasterizer. You could of course put the formula for fisheye distortion into the vertex shader, but the rasterizer will still rasterize each triangle with straight edges; you merely distort the positions of its corner points. This means it will only be correct for tiny triangles covering a single pixel; for larger triangles, you completely screw up the image. If you have things like T-joints, this even produces holes or overlaps in objects that should actually be perfectly closed.
It was pretty easy for spotlights, since only one texture facing the direction of the spotlight is needed, but a point light is a little more difficult, since it needs either 6 textures covering all directions or 1 texture that somehow captures all the objects around the point light.
The correct solution for this is a single cube map texture, which provides 6 faces. Each face can then be rendered with a standard symmetric perspective projection using a field of view of 90 degrees both horizontally and vertically.
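As a sketch, the six per-face cameras could be built on the CPU with the go-gl/mathgl package; the face orientations below follow the usual OpenGL cube map convention, and the function itself is just illustrative:

```go
package main

import "github.com/go-gl/mathgl/mgl32"

// cubeShadowMatrices builds one proj*view matrix per cube map face,
// all sharing a 90-degree perspective projection with aspect ratio 1.
func cubeShadowMatrices(lightPos mgl32.Vec3, farPlane float32) [6]mgl32.Mat4 {
	proj := mgl32.Perspective(mgl32.DegToRad(90), 1.0, 0.1, farPlane)
	faces := [6]struct{ dir, up mgl32.Vec3 }{
		{mgl32.Vec3{1, 0, 0}, mgl32.Vec3{0, -1, 0}},  // +X
		{mgl32.Vec3{-1, 0, 0}, mgl32.Vec3{0, -1, 0}}, // -X
		{mgl32.Vec3{0, 1, 0}, mgl32.Vec3{0, 0, 1}},   // +Y
		{mgl32.Vec3{0, -1, 0}, mgl32.Vec3{0, 0, -1}}, // -Y
		{mgl32.Vec3{0, 0, 1}, mgl32.Vec3{0, -1, 0}},  // +Z
		{mgl32.Vec3{0, 0, -1}, mgl32.Vec3{0, -1, 0}}, // -Z
	}
	var m [6]mgl32.Mat4
	for i, f := range faces {
		view := mgl32.LookAtV(lightPos, lightPos.Add(f.dir), f.up)
		m[i] = proj.Mul4(view)
	}
	return m
}
```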
In modern OpenGL you can use layered rendering: attach the cube map to an FBO so that its 6 faces form 6 layers, then use the geometry shader to amplify your geometry 6 times, transforming each copy by one of the 6 different projection matrices, so you still need only one render pass for the complete shadow map.
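A sketch of such a geometry shader, stored here as a Go source string; the shadowMatrices uniform is assumed to hold the six proj*view matrices (one per face), and the vertex shader is assumed to pass world-space positions through in gl_Position:

```go
const cubeShadowGeomShader = `#version 330 core
layout(triangles) in;
layout(triangle_strip, max_vertices = 18) out;

uniform mat4 shadowMatrices[6]; // one proj*view matrix per cube face

void main() {
	for (int face = 0; face < 6; ++face) {
		gl_Layer = face; // route this triangle to cube map face 'face'
		for (int i = 0; i < 3; ++i) {
			gl_Position = shadowMatrices[face] * gl_in[i].gl_Position;
			EmitVertex();
		}
		EndPrimitive();
	}
}
` + "\x00"
```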
There are also some vendor-specific extensions which can further optimize cube map rendering, like Nvidia's NV_viewport_swizzle (available on Maxwell and newer GPUs), but I mention this only for completeness.

How do I render a custom set of screen coordinates in OpenGL

Is there any way to render a different set of screen coordinates than the standard equidistant grid between -1,-1 to 1,1?
I am not talking about a transformation that can be accomplished by transformations in the vertex shader.
Specifically ES2 would be nice, but any version is fine.
Is this even directly OpenGL related, or is the standard grid typically provided by plumbing libraries?
No, there isn't any other way. The values you write to gl_Position in the vertex (or tessellation or geometry) shader are clip-space coordinates. The GPU converts these to normalized device space (the "[-1,1] grid") by dividing by the w coordinate (after the actual primitive clipping, of course) and finally uses the viewport parameters to transform the result to window space.
There is no way to use window coordinates directly when you want to use the rendering pipeline. There are some operations which bypass large parts of that pipeline, like framebuffer blitting, which provides a limited way to draw some things directly to the framebuffer.
However, working with pixel coordinates isn't hard to achieve. You basically have to invert the transformations the GPU will be doing. While typically "ortho" matrices are used for this, the only required operations are a scale and a translation, which boil down to a single fused multiply-add per component, so you can implement this extremely efficiently in the vertex shader.
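A minimal sketch of that scale-and-translate, where scaleLoc/offsetLoc are assumed uniform locations and the y-flip (so pixel (0,0) lands at the top-left corner) is an illustrative choice:

```go
// Per component in the vertex shader: ndc = pixel * scale + offset,
// i.e. one fused multiply-add, e.g.
//     gl_Position = vec4(pixelPos * scale + offset, 0.0, 1.0);
w, h := float32(800), float32(600) // current viewport size in pixels
gl.Uniform2f(scaleLoc, 2/w, -2/h)  // negative y maps pixel (0,0) to NDC (-1,1)
gl.Uniform2f(offsetLoc, -1, 1)
```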

Is it possible to always draw a 1 pixel (device independent) line in Direct2D?

I'm using the DrawLine function on the render target and would like to always draw a line that is one (device independent) pixel thick.
My problem is that I have a transform which has wildly different horizontal and vertical dimensions, and it appears that I can only scale the strokeWidth for one of these dimensions.
I can just set the transform to the identity and use the matrix's TransformPoint to map each point into my device-independent coordinates, which achieves the correct result, but then the work isn't being offloaded to the GPU.
Is there a way to do this so that I can let the transform on the render target do the work?
I'm using SharpDX from C#, but I'm happy to translate any C++ answer.
You should be able to take advantage of the fact that the transform on ID2D1RenderTarget is absolute: there is no push/pop system, and you can always set the transform back to the identity matrix. With that in mind, you can:
1. create the geometry that you want;
2. transform it by the matrix currently on the render target (ID2D1Factory::CreateTransformedGeometry(); you're correct that this step is not hardware accelerated);
3. set the render target's transform to the identity matrix;
4. draw the geometry with a 1px stroke width;
5. restore the original transform on the render target.
Also, the version of Direct2D that comes with Win8 has a feature that lets you always draw a 1px-wide line regardless of the transform: create a stroke style and specify D2D1_STROKE_TRANSFORM_TYPE_HAIRLINE for its transformType.

Image smoothing in OpenGL?

Does OpenGL provide any facilities to help with image smoothing?
My project converts scientific data to textures, each of which is a single line of colored pixels which is then mapped onto the appropriate area of the image. Lines are mapped next to each other.
I'd like to do simple image smoothing on this, but am wondering if OpenGL can do any of it for me.
By smoothing, I mean applying a two-dimensional averaging filter to the image - effectively increasing the number of pixels and filling them with averages of nearby actual colors - basically normal image smoothing.
You can do it with a custom shader if you want. Essentially you bind your input texture, draw it as a fullscreen quad, and in the fragment shader take multiple samples around each fragment, average them together, and write the result out to a new texture. The new texture can have an arbitrarily higher resolution than the input texture if you want that as well.
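A hedged sketch of such an averaging (3x3 box filter) fragment shader, embedded as a Go source string; the src sampler and uv input names are illustrative:

```go
const smoothFragShader = `#version 330 core
uniform sampler2D src; // the input texture
in vec2 uv;            // texcoords interpolated across the fullscreen quad
out vec4 fragColor;

void main() {
	vec2 texel = 1.0 / vec2(textureSize(src, 0)); // size of one input texel
	vec4 sum = vec4(0.0);
	// average the 3x3 neighborhood around this fragment
	for (int x = -1; x <= 1; ++x)
		for (int y = -1; y <= 1; ++y)
			sum += texture(src, uv + vec2(x, y) * texel);
	fragColor = sum / 9.0;
}
` + "\x00"
```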
