I am using a framebuffer with 2 color attachments, and I want to render into both color attachments in a single render call. In my fragment shader I have:
layout (location = 0) out vec3 _color;
layout (location = 1) out vec3 _depth;
_color = texture(_colorImage, coord).xyz;
_depth = texture(_depthImage, coord).xyz;
I tested my application on the computer, but now I want to use the same approach in a mobile application. How can I render into more than one color attachment in OpenGL ES?
The preferred version would be OpenGL ES 2.0, but that is not a hard requirement.
You can't render to more than one color attachment in OpenGL ES 2.0; the API doesn't support it.
From OpenGL ES 3.0 onwards it works exactly the same way as multiple render targets in desktop OpenGL: you attach several textures to the framebuffer, select them with glDrawBuffers, and write one fragment shader output per location.
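A minimal sketch of the ES 3.0 setup (fbo, colorTex and depthTex are illustrative names; the fragment shader would then use #version 300 es with the two layout(location = ...) outputs shown in the question):
GLenum bufs[2] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 };
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, colorTex, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1,
                       GL_TEXTURE_2D, depthTex, 0);
glDrawBuffers(2, bufs);   // available from OpenGL ES 3.0 onwards
// ... draw; the shader's location 0 and 1 outputs go to the two attachments.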
I am using SDL2 and my pixel format is SDL_PIXELFORMAT_RGB888. I want an alpha channel in my textures, but this format doesn't support alpha. My window is created with
m_windowOwnPtr = SDL_CreateWindow(windowTitle,
SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
m_screenWidth * magnificationLevel,
m_screenHeight * magnificationLevel, 0);
and my surface is m_SurfaceNoOwnPtr = SDL_GetWindowSurface(m_optrWindow);
How can I change the pixel format to SDL_PIXELFORMAT_RGBA8888 (for example) or something else to enable alpha channel support?
Thanks.
I tried to set the fields of the SDL_PixelFormat struct manually, but this did not solve my problem.
I also tried to create a new surface from my surface with SDL_ConvertSurfaceFormat and the SDL_PIXELFORMAT_RGBA8888 enum member, free the old surface, and assign the new surface to the window, but this did not work either.
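That second attempt looked roughly like this (a sketch only, reusing the variable names above; as said, it did not work for the window surface):
SDL_Surface *converted = SDL_ConvertSurfaceFormat(m_SurfaceNoOwnPtr,
                                                  SDL_PIXELFORMAT_RGBA8888, 0);
if (converted != NULL) {
    SDL_FreeSurface(m_SurfaceNoOwnPtr);  // drop the old surface
    m_SurfaceNoOwnPtr = converted;       // keep the converted one instead
}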
I am trying to implement multiview using OpenGL ES. I have created a vertex shader in which I specified
#extension GL_OVR_multiview : enable
layout(num_views=2) in;
And I am sending 2 MVPs (model view projection matrices).
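For reference, a vertex shader along these lines typically looks roughly like this (u_mvp and a_position are placeholder names, not taken from my code):
#version 300 es
#extension GL_OVR_multiview : enable
layout(num_views = 2) in;

uniform mat4 u_mvp[2];   // one MVP per view
in vec4 a_position;

void main()
{
    // gl_ViewID_OVR selects the matrix for the view currently being rendered
    gl_Position = u_mvp[gl_ViewID_OVR] * a_position;
}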
I have created an FBO with an array texture as an attachment. I bind this FBO and render to it, and everything works as expected.
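The attachment is made with glFramebufferTextureMultiviewOVR from the same extension; a minimal sketch, assuming a 2-layer array texture named arrayTex and an FBO named fbo (illustrative names):
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo);
// Attach layers 0..1 of the array texture, one layer per view.
glFramebufferTextureMultiviewOVR(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                                 arrayTex, 0 /* level */,
                                 0 /* baseViewIndex */, 2 /* numViews */);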
Now my question is:
Is it possible to render to only one view in one draw call even though I specified 2 views?
I want to render to a single view per draw call and change the view index in the vertex shader.
Is it possible to change gl_ViewID_OVR in the vertex shader?
Please help me with this.
Thank you.
My attempts at High-DPI rendering on macOS for my game, Bitfighter, always end up looking bad, like a scaled-up version of a low-res game.
The game uses SDL2 + OpenGL. I have correctly enabled the SDL_WINDOW_ALLOW_HIGHDPI window flag and made the app High-DPI aware in the Info.plist; this all works, and I get the higher-res title bar just fine. SDL_GL_GetDrawableSize correctly returns a pixel size 2x larger than the window size, but the following techniques for rescaling don't yield good results:
Using glViewport with window coords
Using glViewport with drawable coords, then glOrtho to scale (as suggested in the SDL doc README-ios.md)
Both show pixelated vector graphics. What can I do to get OpenGL to draw properly with macOS High-DPI?
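For concreteness, the drawable-coords variant might look roughly like this (window stands for the SDL_Window*; the fixed-function matrix calls are assumed):
int winW, winH, drawW, drawH;
SDL_GetWindowSize(window, &winW, &winH);          // size in points
SDL_GL_GetDrawableSize(window, &drawW, &drawH);   // size in pixels (2x on Retina)

glViewport(0, 0, drawW, drawH);                   // viewport in pixels
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, winW, winH, 0, -1, 1);                 // keep game coordinates in points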
Thanks.
I have a framebuffer object bound to a texture, which has black and white pixels spread across it at different places. I create the framebuffer object at the new iPad's resolution. From this FBO I want to read only the white pixels, and I would like to know how to do that. I am using the glReadPixels function, which reads all the pixels, but I only want the white ones. Please suggest a way to do this if there is one.
I am using OpenGL ES 2.0.
Thanks
I don't believe there's any OpenGL-specific functionality that will do anything like that. You'll just have to read back the whole buffer and iterate over it on the CPU.
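Something like this (a sketch, assuming an RGBA8888 color attachment of size width x height):
GLubyte *pixels = (GLubyte *)malloc(width * height * 4);
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);

for (int y = 0; y < height; ++y) {
    for (int x = 0; x < width; ++x) {
        const GLubyte *p = &pixels[(y * width + x) * 4];
        if (p[0] == 255 && p[1] == 255 && p[2] == 255) {
            // (x, y) is a white pixel; handle it here
        }
    }
}
free(pixels);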
I'm trying to save a QGraphicsScene with OpenGL as an image (PNG or JPEG), but I don't want the image to depend on the current view (zoom). That's why I'm not using grabFrameBuffer but render() instead:
QImage imgToSave(1024,768,QImage::Format_ARGB32_Premultiplied);
// fill the image
// and define rectbuffer(), the QRect() containing what I want to save
QPainter painter(&imgToSave);
m_scene = new QGraphicsScene;
// fill the Scene
m_scene->render(&painter,imgToSave.rect(),rectbuffer());
It does work. My question is: is it using OpenGL capabilities or not?
If not, how to do so?
N.B.: I am using a QGLWidget as the viewport for my QGraphicsView, and the display using OpenGL is working. My concern is only about saving the image.
I am going to guess: no, because for the QGraphicsScene to render through OpenGL you need to specify a QGLWidget-derived object as the viewport of the scene, and here you haven't, so it's almost certainly using the raster engine. Secondly, the QPainter uses whatever paint device you construct it with as its backend, and you have specified a plain QImage, which does not use OpenGL.
If you can't/won't use a QGLWidget through a QGraphicsView, then you might be able to render onto a QGLFramebufferObject. But this brings its own complications; namely, you will have to create a hidden context beforehand.
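A minimal sketch of that route, using the old QGL classes from the question (the FBO size and output filename are illustrative):
// A hidden QGLWidget provides the current GL context the FBO needs.
QGLWidget hidden;
hidden.makeCurrent();

QGLFramebufferObject fbo(1024, 768, QGLFramebufferObject::CombinedDepthStencil);

QPainter painter(&fbo);               // paints through OpenGL into the FBO
m_scene->render(&painter, QRectF(0, 0, 1024, 768), rectbuffer());
painter.end();

fbo.toImage().save("scene.png");      // read back as a QImage and save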