Xorg loading an image - x11

I'm starting to code up my own window manager, and was wondering how to use the Xorg API to get from raw image data (such as the data given by libpng) into an Xorg Pixmap, or something else drawable by Xorg?

You probably discovered this at some point since 2008, but for the benefit of future readers...
XCreatePixmapFromBitmapData() will load literal bitmap (i.e. 1-bit, black&white) data into a pixmap. This is most likely not what you want, if the goal is to load from a PNG.
A newer way to do this is to use Cairo or GdkPixbuf. The old-school Xlib APIs such as XCreatePixmapFromBitmapData() and XDrawWhatever() are all pretty much deprecated (not that they will actually be removed ever, but they are outdated and out of sync with how modern apps work).
The way people would generally recommend doing things these days is:
prefer libxcb to libX11, libxcb is just a very thin wrapper around the X protocol and lacks calls that do multiple X protocol requests (for example CreatePixmapFromBitmapData does CreatePixmap, CreateGC, PutImage, FreeGC)
prefer cairo (or comparable library, Skia is one) to the server-side drawing APIs
You could use cairo_image_surface_create_from_png() for simple purposes or GdkPixbuf if you need to support more formats, etc.
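For instance, a minimal sketch of the cairo route might look like the following. The helper name is made up, and the assumption that you already have a mapped window, its Visual, and its size comes from your own setup code:

#include <cairo.h>
#include <cairo-xlib.h>
#include <X11/Xlib.h>

/* Paint a PNG file onto an existing X11 window with cairo (a sketch).
   dpy, win, visual, width and height are assumed to come from the
   window manager's own setup code. */
void paint_png(Display *dpy, Window win, Visual *visual,
               int width, int height, const char *path)
{
    /* Load the PNG into a client-side image surface. */
    cairo_surface_t *png = cairo_image_surface_create_from_png(path);

    /* Wrap the X11 window in a cairo surface and paint onto it. */
    cairo_surface_t *dst = cairo_xlib_surface_create(dpy, win, visual,
                                                     width, height);
    cairo_t *cr = cairo_create(dst);
    cairo_set_source_surface(cr, png, 0, 0);
    cairo_paint(cr);

    cairo_destroy(cr);
    cairo_surface_destroy(dst);
    cairo_surface_destroy(png);
}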

XCreatePixmapFromBitmapData should do just that. Remember that you need to feed in data of the same bit depth as your X server is using.
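If you are not sure what depth that is, querying it is a one-liner (a sketch, assuming an already-open Display):

#include <X11/Xlib.h>

/* Return the default depth of the default screen, e.g. 24 on most desktops. */
int server_default_depth(Display *dpy)
{
    return DefaultDepth(dpy, DefaultScreen(dpy));
}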

There's a little dance with XCreateImage, XCreatePixmap and XCopyArea you have to do. It goes a little like this:
struct Image img = get_pixels_and_geometry_from_libpng("filename.png");
XImage *ximg = XCreateImage(dpy, DefaultVisual(dpy, DefaultScreen(dpy)), 24,
                            ZPixmap, 0, img.pixels, img.width, img.height,
                            32, 0); /* yes, it really takes that many parameters */
Pixmap pixmap = XCreatePixmap(dpy, window, img.width, img.height, 24);
GC gc = XCreateGC(dpy, pixmap, 0, NULL);
XPutImage(dpy, pixmap, gc, ximg, 0, 0, 0, 0, img.width, img.height);
XCopyArea(dpy, pixmap, window, gc, 0, 0, img.width, img.height, x, y);

Related

Capture screenshot: native API vs OpenGL

In my program I am using Qt's function: qApp->primaryScreen()->grabWindow(qApp->desktop()->winId(), x_offset, y_offset, w, h);
But it is a bit too slow for the main task, which is what led me to the question above. The program has to work under both Windows and Mac OS X. I have heard of OpenGL as a nice screen grabber, since it is closer to the GPU than the native APIs, and it is a cross-platform solution. So the first thing I want to know: is OpenGL realistic as a desktop screen grabber, like a "print screen" button?
If it is, how? If it is not:
Windows: can you please give advice on how to do it? BitBlt, GetDC, something like that?
Mac OS X: AVFoundation? Can you describe this or give some link about how to capture a screenshot using this class? (It is the hard way for me, since I know almost nothing about Objective-C(++).)
UPDATE: I have read a lot about ways to capture a screenshot. What I have learned so far:
1. OpenGL is (maybe) usable as a screen grabber, but it would not be the right fit for this software. That said, I don't mind; if there is a working solution I will accept it.
2. DirectX is not a way to solve my problem, since this software must also work under Mac OS X.
Just to expand on #Zhenyi Luo's answer, here is a code snippet I have used in the past.
It also uses FreeImage for exporting the screenshot.
void Display::SaveScreenShot(std::string FilePath, SCREENSHOT_FORMAT Format)
{
    // Create pixel array
    GLubyte* pixels = new GLubyte[3 * Window::width * Window::height];

    // Read pixels from the framebuffer into the array.
    // Note: glReadPixels is governed by the *pack* alignment, not the unpack alignment.
    glPixelStorei(GL_PACK_ALIGNMENT, 1);
    glReadPixels(0, 0, Window::width, Window::height, GL_BGR, GL_UNSIGNED_BYTE, pixels);

    // Convert to a FreeImage bitmap and save
    FIBITMAP* image = FreeImage_ConvertFromRawBits(pixels, Window::width,
                                                   Window::height, 3 * Window::width, 24,
                                                   0x0000FF, 0xFF0000, 0x00FF00, false);
    FreeImage_Save((FREE_IMAGE_FORMAT)Format, image, FilePath.c_str(), 0);

    // Free resources
    FreeImage_Unload(image);
    delete[] pixels;
}
void glReadPixels(GLint x, GLint y,
                  GLsizei width, GLsizei height,
                  GLenum format, GLenum type,
                  GLvoid *data);
reads a block of pixels from the framebuffer into client memory, starting at location data.
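For the Windows-native route the question mentions (GetDC/BitBlt), a minimal GDI sketch might look roughly like this; the function name is made up, and error handling and DPI awareness are omitted:

#include <windows.h>

/* Grab a w x h region of the desktop into a GDI bitmap (a sketch).
   The caller owns the returned HBITMAP and should DeleteObject() it. */
HBITMAP grab_desktop(int x, int y, int w, int h)
{
    HDC screen = GetDC(NULL);                        /* DC covering the whole screen */
    HDC mem = CreateCompatibleDC(screen);            /* memory DC to copy into */
    HBITMAP bmp = CreateCompatibleBitmap(screen, w, h);
    HGDIOBJ old = SelectObject(mem, bmp);

    BitBlt(mem, 0, 0, w, h, screen, x, y, SRCCOPY);  /* the actual copy */

    SelectObject(mem, old);
    DeleteDC(mem);
    ReleaseDC(NULL, screen);
    return bmp;
}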

Create new .png Image in D

I am trying to create a .png image that is X pixels tall and Y pixels wide. I am not finding what I am looking for on dlang.org, and am struggling to find any other resources via Google.
Can you please provide an example of how to create a .png image in D?
For example, BufferedImage off_Image = new BufferedImage(100, 50, BufferedImage.TYPE_INT_ARGB); from http://docs.oracle.com/javase/tutorial/2d/images/drawonimage.html is what I am looking for (I think), except in the D programming language.
I wrote a little lib that can do this too. Grab png.d and color.d from here:
https://github.com/adamdruppe/misc-stuff-including-D-programming-language-web-stuff
import arsd.png;

void main() {
    // width * height
    TrueColorImage image = new TrueColorImage(100, 50);

    // fill it in with a gradient
    auto colorData = image.imageData.colors; // get a ref to the color array

    foreach(y; 0 .. image.height)
        foreach(x; 0 .. image.width)
            colorData[y * image.width + x] = Color(x * 2, 0, 0); // fill in (r,g,b,a=255)

    writePng("test.png", image); // save it to a file
}
There is nothing in standard library for image work but you should be able to use DevIL or FreeImage to do what you want. Both of them have Derelict bindings.
DevIL (derelict-il)
FreeImage (derelict-fi)
Just use the C API documentation for either of them.
There is no standard 2D or 3D graphics API in Phobos, nor is there anything similar to Java's ImageIO API. However, there are plenty of D libraries written by various individuals, as well as various bindings to C/C++ libraries, that could aid you in what you are doing. I am sure you should be able to accomplish what you need by using some parts of GtkD.
I'd like to offer an alternative to Adam's solution - dlib has quite a few modules that come in handy when writing multi-media applications - image manipulation, linear algebra as well as geometry processing, I/O streams done right, basic XML parsing and others. It's still getting some development on the core interfaces (as of February 2014), but that should get pretty stable within a few weeks.
With dlib, that example code would translate to:
import dlib.image;

// width * height
auto image = new Image!(PixelFormat.RGB8)(100, 50);

// fill it in with a gradient
foreach(y; 0 .. image.height)
    foreach(x; 0 .. image.width)
        image[x, y] = Color4f(x * 2 / 255.0f, 0, 0);

savePNG(image, "test.png");
Grabbing the bytes directly is of course possible too, but why not do it the easier way? Premature optimization, etc.
If you're building your application with dub (which you probably should), using the latest and best of dlib is as simple as adding "dlib": "~master" to your dependencies, as in the sketch below.
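For instance, a minimal dub.json along those lines might look like this (the project name and description are just placeholders):

{
    "name": "myapp",
    "description": "A hypothetical example project using dlib",
    "dependencies": {
        "dlib": "~master"
    }
}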

OpenGL 3.1+ with Ruby

I followed this post to play with OpenGL (programmable pipeline) on Ruby
Basically, I'm just trying to create a blue window, and here's the code.
Ray::GL.major_version = 3
Ray::GL.minor_version = 2
Ray::GL.core_profile = true # if you want/need one
window = Ray::Window.new("Test Window", [800, 600])
window.make_current
glClearColor(0, 0, 1, 1);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glEnable(GL_DEPTH_TEST);
Instead, I got a white window created. This indicated that I was missing something, but I couldn't figure out what I was missing as the resources for OpenGL on Ruby seemed limited. I have been searching all over the web, but all I found was fixed-pipeline OpenGL stuff for Ruby.
Yes, I could use Ray's built-in functions to set the background color and draw stuff, but I didn't want to do that. I just wanted to use Ray to setup the window, then called OpenGL APIs directly. However, I couldn't figure out what I was missing in the code above.
I would greatly appreciate any hint or pointer to this (maybe I needed to swap the buffer? but then I didn't know how to do it with Ray). Is there any body familiar with using Ray that can give me some hints on this?
Or, are there any other tools that would allow me to set up an OpenGL binding (for the non-fixed pipeline)?
It would appear that you set the clear color to be blue, then cleared the back buffer to make it blue. But, as you suspected, you have not swapped the buffers to put the back buffer onto your screen. As far as swapping buffers goes, here's another answer from Stack Overflow:
"Swapping the front and back buffer of a double buffered window is a function provided by the underlying graphics system, i.e. Win32 GDI, or X11 GLX. The function's you're looking for are wglSwapBuffers and/or glXSwapBuffers. On MacOS X NSOpenGLViews are automatically swapped.
However most likely you're using some framework, like GLUT, GLFW or Qt, which provide a portable wrapper around those functions. Read the framework's documentation."
I've never used Ray, so I'd say just keep rooting around in the documentation or look through example projects to see how buffer swapping is done.
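For reference, outside of Ray the raw clear-then-swap sequence with GLX looks something like this (a C sketch, not Ray-specific; dpy and win come from whatever created the window and context):

#include <GL/gl.h>
#include <GL/glx.h>

/* Clear the back buffer to blue, then present it (a sketch). */
void present_blue(Display *dpy, Window win)
{
    glClearColor(0.0f, 0.0f, 1.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glXSwapBuffers(dpy, win);  /* without this, the back buffer never reaches the screen */
}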

Setting up a Win32 OpenGL Window with a GL_RGBA Color Buffer

I am trying to set up an OpenGL window with an alpha channel in its color buffer. Unfortunately, my current setup is creating a GL_RGB back and front buffer (as reported by gDEBugger, and as shown by my experiments).
I set up the window like so:
PIXELFORMATDESCRIPTOR pfd;
ZeroMemory(&pfd,sizeof(pfd));
pfd.nSize = sizeof(pfd);
pfd.nVersion = 1;
pfd.dwFlags = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
pfd.iPixelType = PFD_TYPE_RGBA;
pfd.cColorBits = 24; //note by docs that alpha doesn't go here (though putting 32 changes nothing)
pfd.cDepthBits = 24;
pfd.iLayerType = PFD_MAIN_PLANE;
I have also tried more specifically:
PIXELFORMATDESCRIPTOR pfd = {
sizeof(PIXELFORMATDESCRIPTOR),
1,
PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER,
PFD_TYPE_RGBA,
24,
8,0, 8,8, 8,16, 8,24, //A guess for lil endian; usually these are 0 (making them 0 doesn't help)
0, 0,0,0,0,
24, //depth
8, //stencil
0, //aux
PFD_MAIN_PLANE,
0,
0, 0, 0
};
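For context, the selection calls referred to next are the standard ones, roughly like this (illustrative fragment; hdc is the window's device context and pfd is one of the descriptors above):

/* The usual selection sequence; ChoosePixelFormat may silently pick a
   format without the requested alpha bits. */
int fmt = ChoosePixelFormat(hdc, &pfd);
SetPixelFormat(hdc, fmt, &pfd);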
My understanding is that when I call ChoosePixelFormat later, it's returning a format without an alpha channel (though why, I don't know).
I should clarify that when I say alpha buffer, I mean just a simple alpha buffer for rendering purposes (each color has an alpha value that fragments can be tested against, and so on). I do NOT mean a semi-transparent window or some other effect. [EDIT: So no, I am not at this time interested in making the window itself transparent. I just want an alpha channel for the default framebuffer.]
[EDIT: This is part of a cross-platform windowing backend I'm writing. My code is always portable. I am not using a library that provides this functionality since I need more control.]
There are two key points to consider here:
You can do this with the default framebuffer. However, you need to request it properly. Windows's default selection mechanism doesn't seem to weight having RGBA very highly. The best course seems to be to enumerate all possible pixel formats and then select the one you want "manually", as it were (a rough sketch of that enumeration follows these two points). By doing this, I was also able to specify that I wanted a 24-bit depth buffer, an accumulation buffer, and an 8-bit stencil buffer.
The OpenGL context must be fully valid for alpha blending, depth testing, and any even remotely advanced techniques to work. Curiously, I was able to get rendered triangles without having a fully valid OpenGL context, leading to my confusion! Maybe it was being emulated in software? I figured it out when glewInit (not to mention making a few VBOs) failed miserably. The key point is that the OpenGL context must be valid. In my case, I wasn't setting it up properly.
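A rough sketch of that manual enumeration (Win32; the selection criteria here are illustrative, and real code should probably examine more fields):

#include <windows.h>

/* Enumerate the pixel formats of a device context and return the index of
   the first double-buffered RGBA format with an 8-bit alpha channel.
   Returns 0 if nothing suitable was found. */
int choose_rgba_format(HDC hdc)
{
    PIXELFORMATDESCRIPTOR pfd;
    int count = DescribePixelFormat(hdc, 1, sizeof(pfd), &pfd); /* returns the max index */
    int i;

    for (i = 1; i <= count; ++i) {
        DescribePixelFormat(hdc, i, sizeof(pfd), &pfd);
        if ((pfd.dwFlags & (PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER))
                == (PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER)
            && pfd.iPixelType == PFD_TYPE_RGBA
            && pfd.cColorBits >= 24 && pfd.cAlphaBits >= 8
            && pfd.cDepthBits >= 24 && pfd.cStencilBits >= 8)
            return i;  /* pass this index to SetPixelFormat(hdc, i, &pfd) */
    }
    return 0;
}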
This problem was in the context of my currently writing a lightweight cross-platform windowing toolkit. Currently, it supports Windows through the Win32 API, Linux through X11, and I just started porting to Mac OS through X11.
At the request of some in the comments, I hereby present my humble effort thereof that others may benefit. Mac support doesn't work yet, menus aren't implemented on Linux, and user input on Linux is only half-there. As of 1/18/2013, it may temporarily be found here (people of the future, I may have put new versions on my website (look for "Portcullis")).

How to toggle fullscreen on Mac OSX using SDL 1.3 so that it actually works?

There is SDL_WM_ToggleFullScreen. However, on Mac, its implementation destroys the OpenGL context, which destroys your textures along with it. Ok, annoying, but I can reload my textures. However, when I reload my textures after toggling, it crashes on certain textures inside of a memcpy being called by glTexImage2D. Huh, it sure didn't crash when I loaded those textures the first time around. I even try deleting all my textures before the toggle, but I get the same result.
As a test, I reload textures without toggling fullscreen, and the textures reload fine. So, toggling the fullscreen does something funny to OpenGL. (And, by "toggling", I mean going in either direction: windowed->fullscreen or fullscreen->windowed - I get the same results.)
As an alternative, this code seems to toggle fullscreen as well:
SDL_Surface *surface = SDL_GetVideoSurface();
Uint32 flags = surface->flags;
flags ^= SDL_FULLSCREEN;
SDL_SetVideoMode(surface->w, surface->h, surface->format->BitsPerPixel, flags);
However, calling SDL_GetError after this code says that the "Invalid window" error was set during the SDL_SetVideoMode. And if I ignore it and try to load textures, there is no crashing, but my textures don't show up either (perhaps OpenGL is just immediately returning from glTexImage2D without actually doing anything). As a test, at startup time, I try calling SDL_SetVideoMode twice in a row to perform a toggle right before I even load my textures the first time. In that particular case, the textures do actually show up. Huh, what?
I am using SDL 1.3.0-6176 (posted on the SDL website January 7, 2012).
Update:
My texture uploading code is below (nothing surprising here). For clarification, this application (including the code below) is already working without any issues as a finished application for iOS, Android, PSP, and Windows. Windows is the only other version, other than Mac, that is using SDL under the hood.
unsigned int Texture::init(const void *data, int width, int height)
{
    unsigned int textureID;
    glGenTextures(1, &textureID);
    glBindTexture(GL_TEXTURE_2D, textureID);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, data);
    return textureID;
}
it crashes on certain textures inside of a memcpy being called by glTexImage2D.
This shouldn't happen. There's some bug somewhere, either in your code or that of the OpenGL implementation. And I'd say it's yours. So please post your texture allocation and upload code for us to see.
However, on Mac, its implementation destroys the OpenGL context, which destroys your textures along with it.
Yes, unfortunately this is the way it works on MacOS X. And due to the design of its OpenGL driver model, there's little you can do about it (did I mention that MacOS X sucks; oh yes, I did. On occasion. Like over a hundred times). The same badly designed driver model also makes MacOS X so slow in catching up with OpenGL development.
