In my program I am using Qt's function: qApp->primaryScreen()->grabWindow(qApp->desktop()->winId(),x_offset,y_offset,w,h);
But it is a bit too slow for the main task, which is what led me to this question. The program has to work under both Windows and Mac OS X. I have heard about OpenGL as a nice screen grabber, since it is closer to the GPU than the native APIs and is a cross-platform solution. So the first thing I want to know: can OpenGL be used as a desktop screen grabber, like pressing the Print Screen button?
If it can, how? If it cannot:
Windows: can you please give some advice on how to do it? BitBlt, GetDC, something like that? (A minimal GDI sketch is included below.)
Mac OS X: AVFoundation? Could you please describe this or give a link on how to capture a screenshot with it? (That would be the hard way for me, since I know almost nothing about Objective-C(++).)
UPDATE: I have read a lot about ways to capture a screenshot. What I have learned so far: 1. OpenGL may (or may not) work as a screen grabber, but using it would be incorrect for this software. That said, I don't really care; if there is a working solution I will accept it.
2. DirectX is not a way to solve my problem, since it is not available under Mac OS X.
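For reference (this is not from the original answers), the BitBlt/GetDC route mentioned above usually looks roughly like the sketch below. It grabs the whole primary screen into a GDI bitmap; error handling and multi-monitor handling are omitted, and CaptureDesktop is just an illustrative name.
#include <windows.h>

// Minimal GDI sketch: copy the primary screen into a memory bitmap.
HBITMAP CaptureDesktop()
{
    int w = GetSystemMetrics(SM_CXSCREEN);
    int h = GetSystemMetrics(SM_CYSCREEN);

    HDC screenDC = GetDC(NULL);                  // DC for the whole screen
    HDC memDC    = CreateCompatibleDC(screenDC); // memory DC to copy into
    HBITMAP bmp  = CreateCompatibleBitmap(screenDC, w, h);

    HGDIOBJ old = SelectObject(memDC, bmp);
    BitBlt(memDC, 0, 0, w, h, screenDC, 0, 0, SRCCOPY);
    SelectObject(memDC, old);

    DeleteDC(memDC);
    ReleaseDC(NULL, screenDC);
    return bmp;                                  // caller frees with DeleteObject()
}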
Just to expand on #Zhenyi Luo's answer, here is a code snippet I have used in the past.
It also uses FreeImage for exporting the screenshot.
void Display::SaveScreenShot(std::string FilePath, SCREENSHOT_FORMAT Format){
    // Create pixel array (3 bytes per pixel, BGR order)
    GLubyte* pixels = new GLubyte[3 * Window::width * Window::height];
    // Read pixels from the framebuffer into the array.
    // Note: glReadPixels is controlled by the *pack* alignment, not the unpack alignment.
    glPixelStorei(GL_PACK_ALIGNMENT, 1);
    glReadPixels(0, 0, Window::width, Window::height, GL_BGR, GL_UNSIGNED_BYTE, pixels);
    // Convert to a FreeImage bitmap and save
    FIBITMAP* image = FreeImage_ConvertFromRawBits(pixels, Window::width,
                          Window::height, 3 * Window::width, 24,
                          FI_RGBA_RED_MASK, FI_RGBA_GREEN_MASK, FI_RGBA_BLUE_MASK, false);
    FreeImage_Save((FREE_IMAGE_FORMAT) Format, image, FilePath.c_str(), 0);
    // Free resources
    FreeImage_Unload(image);
    delete[] pixels;
}
glReadPixels(GLint x, GLint y, GLsizei width, GLsizei height, GLenum format, GLenum type, GLvoid *data) reads a block of pixels from the framebuffer into client memory starting at location data.
Related
I am trying to set up an OpenGL window with an alpha channel in its color buffer. Unfortunately, my current setup is creating a GL_RGB back and front buffer (as reported by gDEBugger, and as shown by my experiments).
I set up the window like so:
PIXELFORMATDESCRIPTOR pfd;
ZeroMemory(&pfd,sizeof(pfd));
pfd.nSize = sizeof(pfd);
pfd.nVersion = 1;
pfd.dwFlags = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
pfd.iPixelType = PFD_TYPE_RGBA;
pfd.cColorBits = 24; //note by docs that alpha doesn't go here (though putting 32 changes nothing)
pfd.cDepthBits = 24;
pfd.iLayerType = PFD_MAIN_PLANE;
I have also tried more specifically:
PIXELFORMATDESCRIPTOR pfd = {
sizeof(PIXELFORMATDESCRIPTOR),
1,
PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER,
PFD_TYPE_RGBA,
24,
8,0, 8,8, 8,16, 8,24, //A guess for lil endian; usually these are 0 (making them 0 doesn't help)
0, 0,0,0,0,
24, //depth
8, //stencil
0, //aux
PFD_MAIN_PLANE,
0,
0, 0, 0
};
My understanding is that when I call ChoosePixelFormat later, it's returning a format without an alpha channel (though why, I don't know).
I should clarify that when I say alpha buffer, I mean just a simple alpha buffer for rendering purposes (each color has an alpha value that fragments can be tested against, and so on). I do NOT mean a semi-transparent window or some other effect. [EDIT: So no, I am not at this time interested in making the window itself transparent. I just want an alpha channel for the default framebuffer.]
[EDIT: This is part of a cross-platform windowing backend I'm writing. My code is always portable. I am not using a library that provides this functionality since I need more control.]
There are two key points to consider here:
You can do this with the default framebuffer. However, you need to request it properly. Windows's default selection mechanism doesn't seem to weight having RGBA very highly. The best course seems to be to enumerate all available pixel formats and then select the one you want "manually", as it were (see the sketch after these two points). By doing this, I was also able to specify that I wanted a 24-bit depth buffer, an accumulation buffer, and an 8-bit stencil buffer.
The OpenGL context must be fully valid for alpha blending, depth testing, and any even remotely advanced techniques to work. Curiously, I was able to get rendered triangles without having a fully valid OpenGL context, leading to my confusion! Maybe it was being emulated in software? I figured it out when glewInit (not to mention making a few VBOs) failed miserably. The key point is that the OpenGL context must be valid. In my case, I wasn't setting it up properly.
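For illustration (this is not from the original answer), the enumeration in point 1 might look something like the sketch below; hdc is assumed to be the window's device context, ChooseAlphaPixelFormat is just an illustrative name, and the exact bit depths to insist on are up to you.
#include <windows.h>

// Sketch: walk every pixel format the driver exposes and pick one with alpha.
int ChooseAlphaPixelFormat(HDC hdc)
{
    PIXELFORMATDESCRIPTOR pfd;
    // Passing NULL for the descriptor just returns the highest pixel format index.
    int count = DescribePixelFormat(hdc, 1, sizeof(pfd), NULL);
    for (int i = 1; i <= count; ++i)
    {
        DescribePixelFormat(hdc, i, sizeof(pfd), &pfd);
        if ((pfd.dwFlags & PFD_DRAW_TO_WINDOW) &&
            (pfd.dwFlags & PFD_SUPPORT_OPENGL) &&
            (pfd.dwFlags & PFD_DOUBLEBUFFER) &&
            pfd.iPixelType == PFD_TYPE_RGBA &&
            pfd.cColorBits >= 24 && pfd.cAlphaBits >= 8 &&
            pfd.cDepthBits >= 24 && pfd.cStencilBits >= 8)
        {
            return i; // hand this index to SetPixelFormat(hdc, i, &pfd)
        }
    }
    return 0; // no suitable format found
}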
This problem was in the context of my currently writing a lightweight cross-platform windowing toolkit. Currently, it supports Windows through the Win32 API, Linux through X11, and I just started porting to Mac OS through X11.
At the request of some in the comments, I hereby present my humble effort so that others may benefit. Mac support doesn't work yet, menus aren't implemented on Linux, and user input on Linux is only half there. As of 1/18/2013, it may temporarily be found here (people of the future, I may have put new versions on my website (look for "Portcullis")).
I have some C# (SharpGL-esque) code which abstracts OpenGL frame buffer handling away to simple "set this texture as a 'render target'" calls. When a texture is first set as a render target, I create an FBO with matching depth buffer for that size of texture; that FBO/depth-buffer combo will then be reused for all same-sized textures.
I have a curious error as follows.
Initially the app runs and renders fine. But if I increase my window size, this can cause some code to need to resize its 'render target' texture, which it does via glDeleteTextures() and glGenTextures() (then bind, glTexImage2D, and texparams so MIN_FILTER and MAG_FILTER are both GL_NEAREST). I've observed I tend to get the same name (ID) back when doing so (as GL reuses the just-freed name).
We then hit the following code (with apologies for the slightly bastardised GL-like syntax):
void SetRenderTarget(Texture texture)
{
if (texture != null)
{
var size = (texture.Width << 16) | texture.Height;
FrameBufferInfo info;
if (!_mapSizeToFrameBufferInfo.TryGetValue(size, out info))
{
info = new FrameBufferInfo();
info.Width = texture.Width;
info.Height = texture.Height;
GL.GenFramebuffersEXT(1, _buffer);
info.FrameBuffer = _buffer[0];
GL.BindFramebufferEXT(GL.FRAMEBUFFER_EXT, info.FrameBuffer);
GL.FramebufferTexture2DEXT(GL.FRAMEBUFFER_EXT, GL.COLOR_ATTACHMENT0_EXT, GL.TEXTURE_2D, texture.InternalID, 0);
GL.GenRenderbuffersEXT(1, _buffer);
info.DepthBuffer = _buffer[0];
GL.BindRenderBufferEXT(GL.RENDERBUFFER_EXT, info.DepthBuffer);
GL.RenderbufferStorageEXT(GL.RENDERBUFFER_EXT, GL.DEPTH_COMPONENT16, texture.Width, texture.Height);
GL.BindRenderBufferEXT(GL.RENDERBUFFER_EXT, 0);
GL.FramebufferRenderbufferEXT(GL.FRAMEBUFFER_EXT, GL.DEPTH_ATTACHMENT_EXT, GL.RENDERBUFFER_EXT, info.DepthBuffer);
_mapSizeToFrameBufferInfo.Add(size, info);
}
else
{
GL.BindFramebufferEXT(GL.FRAMEBUFFER_EXT, info.FrameBuffer);
GL.FramebufferTexture2DEXT(GL.FRAMEBUFFER_EXT, GL.COLOR_ATTACHMENT0_EXT, GL.TEXTURE_2D, texture.InternalID, 0);
}
GL.CheckFrameBufferStatus(GL.FRAMEBUFFER_EXT);
}
else
{
GL.FramebufferTexture2DEXT(GL.FRAMEBUFFER_EXT, GL.COLOR_ATTACHMENT0_EXT, GL.TEXTURE_2D, 0, 0);
GL.BindFramebufferEXT(GL.FRAMEBUFFER_EXT, 0);
}
ProjectStandardOrthographic();
}
After said window resize, GL returns a GL_INVALID_VALUE error from the glFramebufferTexture2DEXT() call (identified with glGetError() and gDEBugger). If I ignore this, glCheckFrameBufferStatus() later fails with GL_FRAMEBUFFER_INCOMPLETE_ATTACHMENT. If I ignore this too, I can see the expected "framebuffer too dubious to do anything" errors if I check for them, and a black screen if I don't.
I'm running on an NVidia GeForce GTX 550 Ti, Vista 64 (32 bit app), 306.97 drivers. I'm using GL 3.3 with the Core profile.
Workaround and curiosity: if, when reallocating textures, I call glGenTextures() before glDeleteTextures() - to avoid getting the same ID back - the problem goes away. I don't want to do this as it's a stupid kluge and increases my chances of out of memory errors. I'm theorising it's because GL was/is using the texture in a recent FBO and has now decided that the texture ID is in use, or is no longer valid in some way, and so isn't acceptable? Maybe?
After the problem gDEBugger shows that both FBOs (the original one with the smaller depth buffer and previous texture, and the new one with the larger combination) have the same texture ID attached.
I've tried detaching the texture from the frame buffer (via glFramebufferTexture2DEXT again) before deallocation, but to no avail (gDEBugger reflects the change but the problem still occurs). I've tried taking out the depth buffer entirely. I've tried checking the texture sizes via glGetTexLevelParameter() before I use the texture; it does indeed exist.
This sounds like a bug in NVIDIA's OpenGL implementation. Once you delete an object name, that object name becomes invalid, and thus should be a legitimate candidate for glGen* to return.
You should file a bug report, with a minimal case that reproduces the issue.
I don't want to do this as it's a stupid kluge and increases my chances of out of memory errors.
No, it doesn't. glGenTextures doesn't allocate storage for textures (which is where any real OOM errors might come from). It only creates the texture name. It's unfortunate that you have to use a workaround, but it's not any real concern.
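For illustration only (plain C-style GL rather than the question's C# wrapper; the function name, GL_RGBA8 format and GL_NEAREST filters are just placeholders), the gen-before-delete workaround described above amounts to something like this:
// Create the replacement texture before deleting the old one, so the driver
// cannot hand back the just-freed name for the new render target.
GLuint ResizeRenderTargetTexture(GLuint oldTex, int newWidth, int newHeight)
{
    GLuint newTex = 0;
    glGenTextures(1, &newTex);            // reserve a fresh name first...
    glBindTexture(GL_TEXTURE_2D, newTex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, newWidth, newHeight, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);

    glDeleteTextures(1, &oldTex);         // ...then release the old one
    return newTex;
}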
There is SDL_WM_ToggleFullScreen. However, on Mac, its implementation destroys the OpenGL context, which destroys your textures along with it. Ok, annoying, but I can reload my textures. However, when I reload my textures after toggling, it crashes on certain textures inside of a memcpy being called by glTexImage2D. Huh, it sure didn't crash when I loaded those textures the first time around. I even try deleting all my textures before the toggle, but I get the same result.
As a test, I reload textures without toggling fullscreen, and the textures reload fine. So, toggling the fullscreen does something funny to OpenGL. (And, by "toggling", I mean going in either direction: windowed->fullscreen or fullscreen->windowed - I get the same results.)
As an alternative, this code seems to toggle fullscreen as well:
SDL_Surface *surface = SDL_GetVideoSurface();
Uint32 flags = surface->flags;
flags ^= SDL_FULLSCREEN;
SDL_SetVideoMode(surface->w, surface->h, surface->format->BitsPerPixel, flags);
However, calling SDL_GetError after this code says that the "Invalid window" error was set during the SDL_SetVideoMode. And if I ignore it and try to load textures, there is no crashing, but my textures don't show up either (perhaps OpenGL is just immediately returning from glTexImage2D without actually doing anything). As a test, at startup time, I try calling SDL_SetVideoMode twice in a row to perform a toggle right before I even load my textures the first time. In that particular case, the textures do actually show up. Huh, what?
I am using SDL 1.3.0-6176 (posted on the SDL website January 7, 2012).
Update:
My texture uploading code is below (nothing surprising here). For clarification, this application (including the code below) is already working without any issues as a finished application for iOS, Android, PSP, and Windows. Windows is the only other version, other than Mac, that is using SDL under the hood.
unsigned int Texture::init(const void *data, int width, int height)
{
unsigned int textureID;
glGenTextures(1, &textureID);
glBindTexture(GL_TEXTURE_2D, textureID);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, data);
return textureID;
}
it crashes on certain textures inside of a memcpy being called by glTexImage2D.
This shouldn't happen. There's some bug somewhere, either in your code or that of the OpenGL implementation. And I'd say it's yours. So please post your texture allocation and upload code for us to see.
However, on Mac, its implementation destroys the OpenGL context, which destroys your textures along with it.
Yes, unfortunately this is the way it works on MacOS X. And due to the design of its OpenGL driver model, there's little you can do about it (did I mention that MacOS X sucks; oh yes, I did. On occasion. Like over a hundred times). The same badly designed driver model also makes MacOS X so slow in catching up with OpenGL development.
For my graphics class, I need to match an OpenGL sample output in a pixel-perfect way.
I figured it would be cool if I could spawn the sample, send it some input, then take a screenshot of the exact OpenGL area, do the same for mine, and then just compare those screenshots. I also figured something like AutoIT would be the easiest way to do something like this.
I know that I can use the screencapture function, but I'm unsure of how to get the exact coordinates and size of the OpenGL area of the window (not the title bar/surrounding window stuff).
If anybody could help me out that would be awesome.
Or if anybody can think of an easier solution than AutoIt, and can point me in the right direction, that'd be great too.
EDIT: I also don't have access to the source code of the sample output program.
AutoIt is a pretty good tool for this job. I think you have already found _ScreenCapture in the help file; it has parameters for the X left, Y top, X right and Y bottom coordinates. However, the _ScreenCapture function stores to a file. I've made a library with which you can capture part of the screen, or a window, and save it to memory. Then you can get the pixel colors from memory and compare them to your existing pixels. You can find it here: http://www.autoitscript.com/forum/topic/63318-get-or-read-pixel-from-memory-udf-pixelgetcolor-au3/
The part of a window that does not include the title bar and the borders is called the 'client area'. You can get the width and height of the client area with WinGetClientSize. Alternatively, you can use ControlGetPos on the OpenGL control to get its X and Y relative to the window, plus the width and height of the OpenGL control. Combined with WinGetPos, you should be able to calculate the values you need for _ScreenCapture. The "AutoIt Window Info" tool should help you work out a good approach.
Finally, a simple and short solution that gives you little control, but might be just what you need, is the PixelChecksum function. Once you have the coordinates of the OpenGL part, you can use PixelChecksum to get a value corresponding to the pixels on the screen (a checksum of the pixels). You can then compare this value to a pre-recorded value to tell whether the pixels on the screen are exactly the same. Check the AutoIt help file entry for PixelChecksum for an example.
If you want to capture data from an OpenGL buffer, here is how you can do it with legacy OpenGL (version 1.2 or so):
glFinish();
glReadBuffer(GL_FRONT);
glPixelStorei(GL_PACK_ALIGNMENT, 4);
glPixelStorei(GL_PACK_ROW_LENGTH, 0);
glPixelStorei(GL_PACK_SKIP_ROWS, 0);
glPixelStorei(GL_PACK_SKIP_PIXELS, 0);
glReadPixels(ox, oy, w, h, mode, GL_UNSIGNED_BYTE, img);
where:
ox and oy are the origin x and y
w and h are the width and height
mode is the pixel format, e.g. GL_BGR if you want to save to BMP
img is a pointer to the buffer the image is copied into
You can search for glReadPixels on the internet and see the reference for more information. A filled-in example follows below.
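As an illustration (not from the original answer), a filled-in call that grabs a w-by-h BGR snapshot into a byte buffer might look like this; note that GL_PACK_ALIGNMENT is set to 1 here so the rows are tightly packed, whereas the snippet above uses an alignment of 4.
#include <vector>

// Hypothetical helper: grab a w-by-h BGR snapshot of the front buffer.
// Assumes GL headers that expose GL_BGR (OpenGL 1.2+).
std::vector<unsigned char> GrabFrontBuffer(int w, int h)
{
    std::vector<unsigned char> img(static_cast<size_t>(w) * h * 3);
    glFinish();
    glReadBuffer(GL_FRONT);
    glPixelStorei(GL_PACK_ALIGNMENT, 1); // rows are tightly packed in the buffer
    glReadPixels(0, 0, w, h, GL_BGR, GL_UNSIGNED_BYTE, img.data());
    return img;
}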
I'm starting to code up my own window manager, and was wondering how to use the Xorg API to get from raw image data (such as the data given by libpng) to an Xorg Pixmap or something else drawable by Xorg?
You probably discovered this at some point since 2008, but for the benefit of future readers...
XCreatePixmapFromBitmapData() will load literal bitmap (i.e. 1-bit, black&white) data into a pixmap. This is most likely not what you want, if the goal is to load from a PNG.
A newer way to do this is to use Cairo or GdkPixbuf. The old-school Xlib APIs such as XCreatePixmapFromBitmapData() and XDrawWhatever() are all pretty much deprecated (not that they will actually be removed ever, but they are outdated and out of sync with how modern apps work).
The way people would generally recommend doing things these days is:
prefer libxcb to libX11; libxcb is just a very thin wrapper around the X protocol and lacks calls that perform multiple X protocol requests (for example, CreatePixmapFromBitmapData does CreatePixmap, CreateGC, PutImage, FreeGC)
prefer cairo (or comparable library, Skia is one) to the server-side drawing APIs
You could use cairo_image_surface_create_from_png() for simple purposes or GdkPixbuf if you need to support more formats, etc.
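For instance, a minimal sketch of the Cairo route might look like the following; the dpy, window, visual, width and height values are assumed to come from your usual Xlib setup, and draw_png is just an illustrative name.
#include <X11/Xlib.h>
#include <cairo/cairo.h>
#include <cairo/cairo-xlib.h>

// Load a PNG with Cairo and paint it onto an X11 window.
void draw_png(Display *dpy, Window window, Visual *visual, int width, int height)
{
    cairo_surface_t *png  = cairo_image_surface_create_from_png("filename.png");
    cairo_surface_t *dest = cairo_xlib_surface_create(dpy, window, visual, width, height);

    cairo_t *cr = cairo_create(dest);
    cairo_set_source_surface(cr, png, 0, 0); // place the PNG at the window origin
    cairo_paint(cr);

    cairo_destroy(cr);
    cairo_surface_destroy(png);
    cairo_surface_destroy(dest);
}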
XCreatePixmapFromBitmapData should do just that. Remember that you need to feed in data of the same bit depth as your X server is using.
There's a little dance with XCreateImage, XCreatePixmap and XCopyArea you have to do. It goes a little like this:
struct Image img = get_pixels_and_geometry_from_libpng("filename.png");
/* Wrap the raw pixels in an XImage (XCreateImage takes a small mountain of parameters;
   visual and img.pixels are assumed to come from your setup / image struct) */
XImage *ximage = XCreateImage(dpy, visual, 24, ZPixmap, 0,
                              (char *)img.pixels, img.width, img.height, 32, 0);
Pixmap pixmap = XCreatePixmap(dpy, window, img.width, img.height, 24);
/* Upload the image into the server-side pixmap, then blit it to the window */
XPutImage(dpy, pixmap, gc, ximage, 0, 0, 0, 0, img.width, img.height);
XCopyArea(dpy, pixmap, window, gc, 0, 0, img.width, img.height, x, y);