Is HDR rendering possible in OpenGL ES? - opengl-es

I'm NOT talking about actually rendering to HDR displays here.
I'm trying to make my game look better, and one of the ways I've found online is to use an HDR pipeline in post-processing. According to this tutorial, to accomplish this you need to render to a framebuffer whose texture has an internal format of GL_RGB16F, GL_RGBA16F, GL_RGB32F or GL_RGBA32F. Unfortunately, the OpenGL ES 3.0 docs tell me (on page 132) that no floating-point internal format is color-renderable, which leads to an incomplete framebuffer. Am I missing something obvious, or is an HDR pipeline in OpenGL ES 3.0 impossible?

I got it working by generating the texture this way:
FloatBuffer floatBuffer = ByteBuffer.allocateDirect(w * h * 4 * 4)
        .order(ByteOrder.nativeOrder()).asFloatBuffer();
// allocateDirect(width * height * componentCount * bytesPerFloat)
GLES30.glTexImage2D(GLES30.GL_TEXTURE_2D, 0, GLES30.GL_RGBA16F, w, h, 0,
        GLES30.GL_RGBA, GLES30.GL_FLOAT, floatBuffer);
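For completeness, here is the same idea as a C-style sketch that also attaches the texture to a framebuffer and checks completeness. The createHdrTarget/hdrFbo/hdrTex names are placeholders, and note that on many ES 3.0 devices rendering to RGBA16F only works when the EXT_color_buffer_half_float or EXT_color_buffer_float extension is exposed:

#include <GLES3/gl3.h>

GLuint hdrFbo = 0, hdrTex = 0;

// Allocates a 16-bit float colour target and wraps it in a framebuffer.
void createHdrTarget(int w, int h)
{
    glGenTextures(1, &hdrTex);
    glBindTexture(GL_TEXTURE_2D, hdrTex);
    // Passing NULL just allocates storage; no pixel data is uploaded.
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, w, h, 0, GL_RGBA, GL_FLOAT, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    glGenFramebuffers(1, &hdrFbo);
    glBindFramebuffer(GL_FRAMEBUFFER, hdrFbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, hdrTex, 0);

    // Without EXT_color_buffer_(half_)float this is typically incomplete.
    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
        // fall back to a GL_RGBA8 target here
    }
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
}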

Related

Trouble Getting Depth Testing To Work With Apple's Metal Graphics API

I'm spending some time in the evenings trying to learn Apple's Metal graphics API. I've run into a frustrating problem and so must be missing something pretty fundamental: I can only get rendered objects to appear on screen when depth testing is disabled, or when the depth function is changed to "Greater". What could possibly be going wrong? Also, what kinds of things can I check in order to debug this problem?
Here's what I'm doing:
1) I'm using SDL to create my window. When setting up Metal, I manually create a CAMetalLayer and insert it into the layer hierarchy. To be clear, I am not using MTKView and I don't want to use MTKView. Staying away from Objective-C and Cocoa as much as possible seems to be the best strategy for keeping this application cross-platform. The intention is to write platform-agnostic C++ code with SDL and a rendering engine which can be swapped at run-time; behind this interface is where all the Apple-specific code will live. However, I strongly suspect that part of what's going wrong has something to do with setting up the layer:
SDL_SysWMinfo windowManagerInfo;
SDL_VERSION(&windowManagerInfo.version);
SDL_GetWindowWMInfo(&window, &windowManagerInfo);
// Create a metal layer and add it to the view that SDL created.
NSView *sdlView = windowManagerInfo.info.cocoa.window.contentView;
sdlView.wantsLayer = YES;
CALayer *sdlLayer = sdlView.layer;
CGFloat contentsScale = sdlLayer.contentsScale;
NSSize layerSize = sdlLayer.frame.size;
_metalLayer = [[CAMetalLayer layer] retain];
_metalLayer.contentsScale = contentsScale;
_metalLayer.drawableSize = NSMakeSize(layerSize.width * contentsScale,
                                      layerSize.height * contentsScale);
_metalLayer.device = device;
_metalLayer.pixelFormat = MTLPixelFormatBGRA8Unorm;
_metalLayer.frame = sdlLayer.frame;
_metalLayer.framebufferOnly = true;
[sdlLayer addSublayer:_metalLayer];
2) I create a depth texture to use as a depth buffer. My understanding is that this step is necessary in Metal, whereas in OpenGL the framework creates a depth buffer for me automatically:
CGSize drawableSize = _metalLayer.drawableSize;
MTLTextureDescriptor *descriptor =
    [MTLTextureDescriptor texture2DDescriptorWithPixelFormat:MTLPixelFormatDepth32Float_Stencil8
                                                        width:drawableSize.width
                                                       height:drawableSize.height
                                                    mipmapped:NO];
descriptor.storageMode = MTLStorageModePrivate;
descriptor.usage = MTLTextureUsageRenderTarget;
_depthTexture = [_metalLayer.device newTextureWithDescriptor:descriptor];
_depthTexture.label = #"DepthStencil";
3) I create a depth-stencil state object which will be set at render time:
MTLDepthStencilDescriptor *depthDescriptor = [[MTLDepthStencilDescriptor alloc] init];
depthDescriptor.depthWriteEnabled = YES;
depthDescriptor.depthCompareFunction = MTLCompareFunctionLess;
_depthState = [device newDepthStencilStateWithDescriptor:depthDescriptor];
4) When creating my render pass object, I explicitly attach the depth texture:
_metalRenderPassDesc = [[MTLRenderPassDescriptor renderPassDescriptor] retain];
MTLRenderPassColorAttachmentDescriptor *colorAttachment = _metalRenderPassDesc.colorAttachments[0];
colorAttachment.texture = _drawable.texture;
colorAttachment.clearColor = MTLClearColorMake(0.2, 0.4, 0.5, 1.0);
colorAttachment.storeAction = MTLStoreActionStore;
colorAttachment.loadAction = desc.clear ? MTLLoadActionClear : MTLLoadActionLoad;
MTLRenderPassDepthAttachmentDescriptor *depthAttachment = _metalRenderPassDesc.depthAttachment;
depthAttachment.texture = _depthTexture;
depthAttachment.clearDepth = 1.0;
depthAttachment.storeAction = MTLStoreActionDontCare;
depthAttachment.loadAction = desc.clear ? MTLLoadActionClear : MTLLoadActionLoad;
MTLRenderPassStencilAttachmentDescriptor *stencilAttachment = _metalRenderPassDesc.stencilAttachment;
stencilAttachment.texture = depthAttachment.texture;
stencilAttachment.storeAction = MTLStoreActionDontCare;
stencilAttachment.loadAction = desc.clear ? MTLLoadActionClear : MTLLoadActionLoad;
5) Finally, at render time, I set the depth-stencil object before drawing my object:
[_encoder setDepthStencilState:_depthState];
Note that if I go into step 3 and change depthCompareFunction to MTLCompareFunctionAlways or MTLCompareFunctionGreater then I see polygons on the screen, but ordering is (expectedly) incorrect. If I leave depthCompareFunction set to MTLCompareFunctionLess then I see nothing but the background color. It acts AS IF all fragments fail the depth test at all times.
The Metal API validator reports no errors and has no warnings...
I've tried a variety of combinations of settings for things like the depth-stencil texture format and have not made any forward progress. Honestly, I'm not sure what to try next.
EDIT: GPU Frame Capture in Xcode displays a green outline of my polygons, but none of those fragments are actually drawn.
EDIT 2: I've learned that the Metal API validator has an "Extended" mode. When this is enabled, I get these two warnings:
warning: Texture Usage Should not be Flagged as MTLTextureUsageRenderTarget: This texture is not a render target. Clear the MTLTextureUsageRenderTarget bit flag in the texture usage options. Texture = DepthStencil. Texture is used in the Depth attachment.
warning: Resource Storage Mode Should be MTLStorageModePrivate and it Should be Initialized with a Blit: This resource is rarely accessed by the CPU. Changing the storage mode to MTLStorageModePrivate and initializing it with a blit from a shared buffer may improve performance. Texture = 0x102095000.
When I heed these two warnings, I get these two errors. (The warnings and errors seem to contradict one another.)
error 'MTLTextureDescriptor: Depth, Stencil, DepthStencil, and Multisample textures must be allocated with the MTLResourceStorageModePrivate resource option.'
failed assertion `MTLTextureDescriptor: Depth, Stencil, DepthStencil, and Multisample textures must be allocated with the MTLResourceStorageModePrivate resource option.'
EDIT 3: When I run a sample Metal app and use the GPU frame capture tool then I see a gray scale representation of the depth buffer and the rendered object is clearly visible. This doesn't happen for my app. There, the GPU frame capture tool always shows my depth buffer as a plain white image.
Okay, I figured this out. I'm going to post the answer here to help the next guy. There was no problem writing to the depth buffer. This explains why spending time mucking with depth texture and depth-stencil-state settings was getting me nowhere.
The problem is differences in the coordinate systems used for Normalized Device Coordinates in Metal versus OpenGL. In Metal, NDC are in the space [-1,+1]x[-1,+1]x[0,1]. In OpenGL, NDC are [-1,+1]x[-1,+1]x[-1,+1]. If I simply take the projection matrix produced by glm::perspective and shove it through Metal, then the results will not be as expected. To compensate for the NDC space differences when rendering with Metal, that projection matrix must be left-multiplied by a matrix that remaps z from [-1,+1] to [0,1], i.e. one that scales z by 0.5 and adds 0.5·w (a pure diagonal scale is not enough, since that would only map z into [-0.5,+0.5]).
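With glm, that adjustment can be expressed as a fixed matrix applied on top of glm::perspective. A minimal sketch (the metalProjection helper name is just for illustration, not from the original code):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Builds a Metal-friendly projection: remaps clip-space z from [-1,+1] (GL)
// to [0,1] (Metal) via z' = 0.5 * z + 0.5 * w.
glm::mat4 metalProjection(float fovyRadians, float aspect, float zNear, float zFar)
{
    // glm matrices are column-major; this is the row-major matrix
    // [1 0 0   0  ]
    // [0 1 0   0  ]
    // [0 0 0.5 0.5]
    // [0 0 0   1  ]
    const glm::mat4 glToMetal(
        1.0f, 0.0f, 0.0f, 0.0f,   // column 0
        0.0f, 1.0f, 0.0f, 0.0f,   // column 1
        0.0f, 0.0f, 0.5f, 0.0f,   // column 2
        0.0f, 0.0f, 0.5f, 1.0f);  // column 3
    return glToMetal * glm::perspective(fovyRadians, aspect, zNear, zFar);
}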
I found these links to be helpful:
1. http://blog.athenstean.com/post/135771439196/from-opengl-to-metal-the-projection-matrix
2. http://www.songho.ca/opengl/gl_projectionmatrix.html
EDIT: Replaced the explanation with a more complete and accurate one, and the solution with a better one.

Capture screenshot: native API vs opengl

In my program I am using Qt's function: qApp->primaryScreen()->grabWindow(qApp->desktop()->winId(), x_offset, y_offset, w, h);
But it's a bit too slow for the main task, which is what led me to the question above. The program has to work under both Windows and Mac OS X. I've heard that OpenGL makes a nice screen grabber, since it is closer to the GPU than the native APIs and is a cross-platform solution. So the first thing I want to know: can OpenGL really grab the desktop, like a "print screen" button?
If it can, how? If it can't:
Windows: can you please give advice on how to do it? BitBlt, GetDC, something like that?
Mac OS X: AVFoundation? Can you describe this or give some link about how to capture a screenshot using that class? (It's the hard way for me, since I know almost nothing about Objective-C(++).)
UPDATE: I have read a lot about ways to capture a screenshot. Here is what I've learned so far:
1. OpenGL may work as a screen grabber, but it would be the wrong approach for this software. That said, I don't really mind; if there is a working solution I will accept it.
2. DirectX is not a way to solve my problem, since it is not available on Mac OS X.
Just to expand on @Zhenyi Luo's answer, here is a code snippet I have used in the past.
It also uses FreeImage for exporting the screenshot.
void Display::SaveScreenShot(std::string FilePath, SCREENSHOT_FORMAT Format) {
    // Create pixel array
    GLubyte* pixels = new GLubyte[3 * Window::width * Window::height];

    // Read pixels from the framebuffer into the array
    // (glReadPixels is affected by the *pack* alignment, not the unpack alignment)
    glPixelStorei(GL_PACK_ALIGNMENT, 1);
    glReadPixels(0, 0, Window::width, Window::height, GL_BGR, GL_UNSIGNED_BYTE, pixels);

    // Convert to FreeImage and save
    FIBITMAP* image = FreeImage_ConvertFromRawBits(pixels, Window::width,
                                                   Window::height, 3 * Window::width, 24,
                                                   0x0000FF, 0xFF0000, 0x00FF00, false);
    FreeImage_Save((FREE_IMAGE_FORMAT) Format, image, FilePath.c_str(), 0);

    // Free resources
    FreeImage_Unload(image);
    delete[] pixels;
}
glReadPixels(GLint x, GLint y, GLsizei width, GLsizei height, GLenum format, GLenum type, GLvoid *data) reads a block of pixels from the framebuffer into client memory, starting at location data.
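As for the native Windows route mentioned in the question (GetDC/BitBlt), the classic GDI capture looks roughly like this. This is only a sketch with no error handling, CaptureScreenGdi is a made-up name, and converting or saving the resulting HBITMAP is left out:

#include <windows.h>

// Copies the primary screen into an HBITMAP using plain GDI.
// The caller owns the bitmap and must DeleteObject() it.
HBITMAP CaptureScreenGdi()
{
    int w = GetSystemMetrics(SM_CXSCREEN);
    int h = GetSystemMetrics(SM_CYSCREEN);

    HDC screenDc = GetDC(NULL);                   // DC for the whole screen
    HDC memDc    = CreateCompatibleDC(screenDc);  // in-memory DC
    HBITMAP bmp  = CreateCompatibleBitmap(screenDc, w, h);

    HGDIOBJ old = SelectObject(memDc, bmp);
    BitBlt(memDc, 0, 0, w, h, screenDc, 0, 0, SRCCOPY);  // copy the pixels
    SelectObject(memDc, old);

    DeleteDC(memDc);
    ReleaseDC(NULL, screenDc);
    return bmp;
}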

OpenGL ES 2.0 attach texture to stencil buffer

It is known that OpenGL ES 2.0 does not have GL_STENCIL_INDEX8, GL_DEPTH24_STENCIL8_OES, etc. as texture formats.
Is it possible to use GL_LUMINANCE or GL_ALPHA textures for this purpose?
glGenTextures(1, &byteTex);
glBindTexture(GL_TEXTURE_2D, byteTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, width(), height(), 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_STENCIL_ATTACHMENT, GL_TEXTURE_2D, byteTex, 0);
In other words, is it possible to have the stencil buffer rendered to a texture?
P.S. It is possible to "cover" the stenciled area with a quad, but....
No. In ES 2.0, you can only use renderbuffers for the stencil attachment.
From section "Framebuffer Attachment Completeness" in the ES 2.0 spec (page 117):
If attachment is STENCIL_ATTACHMENT, then image must have a stencil renderable internal format.
Table 4.5 on the same page lists STENCIL_INDEX8 as the only internal format to be stencil-renderable. And on the previous page, it says:
Formats not listed in table 4.5, including compressed internal formats, are not color-, depth-, or stencil-renderable, no matter which components they contain.
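So within ES 2.0 the only option is a STENCIL_INDEX8 renderbuffer attachment, which you cannot sample afterwards. Roughly (a sketch; the attachStencil name is illustrative, and the fbo is assumed to already have a colour attachment of the same size):

#include <GLES2/gl2.h>

void attachStencil(GLuint fbo, GLsizei width, GLsizei height)
{
    GLuint stencilRb = 0;
    glGenRenderbuffers(1, &stencilRb);
    glBindRenderbuffer(GL_RENDERBUFFER, stencilRb);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_STENCIL_INDEX8, width, height);

    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_STENCIL_ATTACHMENT,
                              GL_RENDERBUFFER, stencilRb);
    // Note: many ES 2.0 implementations only report a complete framebuffer
    // with a packed depth/stencil renderbuffer (GL_DEPTH24_STENCIL8_OES from
    // OES_packed_depth_stencil) bound to both the depth and stencil points.
}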
ES 2.0 is a very minimal version of OpenGL. You're clearly exceeding the scope of ES 2.0. ES 3.0 introduces depth/stencil textures, but still does not support sampling the stencil part of those textures. ES 3.1 introduces sampling the stencil part of depth/stencil textures.
There is an OES_texture_stencil8 extension defined, but it looks like a mess to me. It says that it is based on ES 3.0, but then partly references the ES 3.1 spec. And it says that it has a dependency on an OES_stencil_texturing extension, which is nowhere to be found on www.khronos.org, or anywhere else in the Google-visible part of the internet. But since it's for ES 3.x, it wouldn't help you anyway.

Create new .png Image in D

I am trying to create a .png image that is X pixels tall and Y pixels wide. I am not finding what I am looking for on dlang.org, and am struggling to find any other resources via Google.
Can you please provide an example of how to create a .png image in D?
For example, BufferedImage off_Image = new BufferedImage(100, 50, BufferedImage.TYPE_INT_ARGB); from http://docs.oracle.com/javase/tutorial/2d/images/drawonimage.html is what I am looking for (I think), except in the D programming language.
I wrote a little lib that can do this too. Grab png.d and color.d from here:
https://github.com/adamdruppe/misc-stuff-including-D-programming-language-web-stuff
import arsd.png;

void main() {
    // width * height
    TrueColorImage image = new TrueColorImage(100, 50);

    // fill it in with a gradient
    auto colorData = image.imageData.colors; // get a ref to the color array
    foreach(y; 0 .. image.height)
        foreach(x; 0 .. image.width)
            colorData[y * image.width + x] = Color(x * 2, 0, 0); // fill in (r,g,b,a=255)

    writePng("test.png", image); // save it to a file
}
There is nothing in the standard library for image work, but you should be able to use DevIL or FreeImage to do what you want. Both of them have Derelict bindings.
DevIL (derelict-il)
FreeImage (derelict-fi)
Just use the C API documentation for either of them.
There is no standard 2D or 3D graphics API in Phobos, nor is there anything similar to the ImageIO API from Java. However, there are plenty of D libraries written by various individuals, as well as various bindings to C/C++ libraries that could aid you in what you are doing. I am sure you should be able to accomplish what you need by using some parts of GtkD.
I'd like to offer an alternative to Adam's solution - dlib has quite a few modules that come in handy when writing multi-media applications - image manipulation, linear algebra as well as geometry processing, I/O streams done right, basic XML parsing and others. It's still getting some development on the core interfaces (as of February 2014), but that should get pretty stable within a few weeks.
With dlib, that example code would translate to:
import dlib.image;

// width * height
auto image = new Image!(PixelFormat.RGB8)(100, 50);

// fill it in with a gradient
foreach(y; 0 .. image.height)
    foreach(x; 0 .. image.width)
        image[x, y] = Color4f(x * 2 / 255.0f, 0, 0);

savePNG(image, "test.png");
Grabbing the bytes directly is of course possible too, but why not do it the easier way? Premature optimization, etc.
If you're building your application with dub (which you probably should), using the latest and best of dlib is as simple as adding "dlib": "~master" to your dependencies.

Getting pixel colour not accurate

I'm currently using colour picking in my application.
This works on the PC; however, I'm having trouble getting it to work on a variety of devices.
This is probably due to the context being set up differently depending on the device. For example, as far as I'm aware the PC is set up with an 888 colour format, whereas a device might default to 565.
I was wondering if there's a way in OpenGL to get the current pixel/colour format, so that I can retrieve the colour data properly?
This is the function I'm using which works fine on the PC:
inline void ProcessColourPick(GLubyte *out, KDfloat32 x, KDfloat32 y)
{
    GLint viewport[4];
    GLubyte pixel[3];
    glGetIntegerv(GL_VIEWPORT, viewport);

    // Read the colour of the pixel at a specific point in the framebuffer
    glReadPixels(x, viewport[3] - y, 1, 1,
                 GL_RGB, GL_UNSIGNED_BYTE, (void *)pixel);

    out[0] = pixel[0];
    out[1] = pixel[1];
    out[2] = pixel[2];
}
Any ideas?
Yes, but it's a bit complicated.
Querying the bitdepth of the current framebuffer is fairly easy in ES 2.0 (note: this is also legal in Desktop GL, but this functionality was removed in GL 3.1 core. It's still accessible from a compatibility profile). You have to get the bitdepth of each color component:
GLint bitdepth;
glGetIntegerv(GL_x_BITS, &bitdepth);
Where x is one of RED, GREEN, BLUE, or ALPHA (i.e. GL_RED_BITS, GL_GREEN_BITS, GL_BLUE_BITS, or GL_ALPHA_BITS).
Once you have the bitdepth, you can test to see if it's 565 and use appropriate pixel transfer parameters and color values.
The format parameter for glReadPixels must be either GL_RGBA (always supported) or the implementation-defined GL_IMPLEMENTATION_COLOR_READ_FORMAT_OES (which differs between devices). It's an OpenGL ES restriction.
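If you want to keep a tight read, you can query what the implementation prefers and fall back to the always-supported GL_RGBA/GL_UNSIGNED_BYTE combination otherwise. A rough sketch (ReadPixelPortable is a made-up helper; it assumes an 8-bit-per-channel result is acceptable):

#include <GLES2/gl2.h>

void ReadPixelPortable(GLint x, GLint y, GLubyte out[4])
{
    GLint fmt = 0, type = 0;
    glGetIntegerv(GL_IMPLEMENTATION_COLOR_READ_FORMAT, &fmt);
    glGetIntegerv(GL_IMPLEMENTATION_COLOR_READ_TYPE, &type);

    if (fmt == GL_RGB && type == GL_UNSIGNED_BYTE) {
        GLubyte rgb[3];
        glReadPixels(x, y, 1, 1, GL_RGB, GL_UNSIGNED_BYTE, rgb);
        out[0] = rgb[0]; out[1] = rgb[1]; out[2] = rgb[2]; out[3] = 255;
    } else {
        // GL_RGBA + GL_UNSIGNED_BYTE must be accepted by every ES 2.0 device.
        glReadPixels(x, y, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, out);
    }
}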
