I'm new to GTK+. Is there a way to render images very fast in GTK? I mean, is there a way to directly alter the image data in the framebuffer or video RAM, or something like that? Do pixbufs do that? I need this for a scrolling example. Is Cairo good for fast rendering of images?
GTK is a high-level toolkit, which means that you can't do that sort of low-level manipulation. However, GTK has plenty of facilities for fast scrolling built in. You can also render images to Cairo surfaces in memory. What are you trying to do exactly?
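In the meantime, here is a minimal GTK3 sketch of drawing a GdkPixbuf through the Cairo context a widget's "draw" handler receives (the file name, window setup, and lack of error handling are placeholders):

    /* Minimal GTK3 sketch: paint a GdkPixbuf via cairo in a "draw" handler.
     * "image.png" is a placeholder path; error handling is omitted. */
    #include <gtk/gtk.h>

    static gboolean on_draw(GtkWidget *widget, cairo_t *cr, gpointer data)
    {
        GdkPixbuf *pixbuf = data;
        /* Use the pixbuf as the cairo source and paint it at the origin. */
        gdk_cairo_set_source_pixbuf(cr, pixbuf, 0, 0);
        cairo_paint(cr);
        return FALSE;
    }

    int main(int argc, char *argv[])
    {
        gtk_init(&argc, &argv);

        GtkWidget *window = gtk_window_new(GTK_WINDOW_TOPLEVEL);
        GtkWidget *area   = gtk_drawing_area_new();
        GdkPixbuf *pixbuf = gdk_pixbuf_new_from_file("image.png", NULL);

        gtk_container_add(GTK_CONTAINER(window), area);
        g_signal_connect(area, "draw", G_CALLBACK(on_draw), pixbuf);
        g_signal_connect(window, "destroy", G_CALLBACK(gtk_main_quit), NULL);

        gtk_widget_show_all(window);
        gtk_main();
        return 0;
    }

For smooth scrolling, a GtkScrolledWindow plus a "draw" handler that only repaints the exposed region is usually fast enough without touching the framebuffer directly.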
I need to write an application where the main content will be OpenGL-rendered (something like a game engine), but there is no good OpenGL-based GUI library similar to what Qt widgets offer (and those are software-rendered).
As I browsed the Qt source code, all painting is done via QPainter, and there is even an OpenGL implementation of QPainter, but support for multiple graphics backends was dropped in Qt 5, so you can't render Qt widgets with OpenGL anymore (I don't know why).
The problem is that you can't paint to a window surface using both software and hardware rendering: the window is either associated with an OpenGL context or uses software rendering. That means that if I want an app with a complex GUI around OpenGL-based content, I need to either paint everything using OpenGL (which is hard because, as I said, there is no good GUI library for it), or render the GUI to an image using software rendering (for example Qt) and then load that image as an OpenGL texture (probably a big performance loss).
Does anyone know a good application that uses a software-rendered GUI loaded as an OpenGL texture? I need to be sure it will work without a big performance loss, but I can't find a good example showing that it works well even for apps like game engines.
If you take the "render the UI to a texture, then draw a textured quad over my game" route and are worried about performance, try to avoid transferring the whole texture each frame.
If you think about it:
60 fps is not necessary for a UI: 30 fps is enough, so update it only every other frame.
Most of the time the UI doesn't change between frames, and when it does, only a small portion of it changes.
UI frameworks often keep track of which parts of the UI are "dirty" and need to be redrawn. If you can get your hands on that information, you can stream to the texture only the parts that need to be updated (glTexSubImage2D), as in the sketch below.
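A rough sketch of that last point, assuming an RGBA8 texture that holds the UI and a dirty rectangle reported by the UI framework (all names here are placeholders):

    /* Sketch: upload only the dirty rectangle of a CPU-side UI surface into
     * the GL texture the quad samples. ui_pixels/ui_width/dirty_* stand in
     * for whatever the UI framework actually exposes. */
    #include <GL/gl.h>

    void upload_dirty_region(GLuint ui_texture,
                             const unsigned char *ui_pixels, int ui_width,
                             int dirty_x, int dirty_y, int dirty_w, int dirty_h)
    {
        glBindTexture(GL_TEXTURE_2D, ui_texture);

        /* Describe how to step through the full-width source buffer so GL
         * can read the sub-rectangle in place instead of copying it out. */
        glPixelStorei(GL_UNPACK_ROW_LENGTH, ui_width);
        glPixelStorei(GL_UNPACK_SKIP_PIXELS, dirty_x);
        glPixelStorei(GL_UNPACK_SKIP_ROWS, dirty_y);

        glTexSubImage2D(GL_TEXTURE_2D, 0,
                        dirty_x, dirty_y, dirty_w, dirty_h,
                        GL_RGBA, GL_UNSIGNED_BYTE, ui_pixels);

        /* Restore the defaults so later uploads are unaffected. */
        glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);
        glPixelStorei(GL_UNPACK_SKIP_PIXELS, 0);
        glPixelStorei(GL_UNPACK_SKIP_ROWS, 0);
    }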
I've developed an interactive audio visualization engine. I need to make its GUI scale to various screen sizes with various PPIs (this includes both very large screens and mobile devices). The designer simply sent me a PSD with graphical representations of the supported widgets, which I'm exporting into PNGs. The problem is that those bitmaps are of course not scalable and look ugly.
I've thought about several ways to achieve a resolution- and PPI-independent GUI:
Export PNGs at various sizes and select the appropriate set at runtime (a waste of space simply for storing bitmaps at various resolutions)
Use scale-9 (9-slice) images only (no fancy stuff)
Use SVG (not supported by the rendering APIs; I could use something like nanovg for OpenGL, but what do I do with a raw framebuffer then? Also performance problems and too much complexity for what I need)
I came to the idea of pregenerating bitmaps at runtime once for the specific device and using them afterwards. Are there any libraries for that, and maybe already available themes I could employ for now? I imagine the tool would work similarly to the cairo graphics library or the JavaScript canvas: reading a command list and outputting a bitmap. Any other ideas?
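One way that pregeneration could look, sketched here with cairo purely for illustration (the button shape, sizes, and scale factor are all made up), is to rasterize each widget once at the device's scale when the app starts:

    /* Sketch: pregenerate a widget bitmap with cairo at a device-specific
     * scale. The drawing "command list" below is just a placeholder button. */
    #include <cairo.h>

    cairo_surface_t *render_button(double scale)
    {
        int w = (int)(128 * scale), h = (int)(40 * scale);
        cairo_surface_t *surface =
            cairo_image_surface_create(CAIRO_FORMAT_ARGB32, w, h);
        cairo_t *cr = cairo_create(surface);

        /* Draw in logical units; cairo rasterizes at the device resolution. */
        cairo_scale(cr, scale, scale);
        cairo_rectangle(cr, 2, 2, 124, 36);
        cairo_set_source_rgb(cr, 0.3, 0.5, 0.8);
        cairo_fill_preserve(cr);
        cairo_set_source_rgb(cr, 0.1, 0.1, 0.1);
        cairo_set_line_width(cr, 2);
        cairo_stroke(cr);

        cairo_destroy(cr);
        return surface;  /* cairo_image_surface_get_data() yields raw pixels */
    }

The resulting ARGB pixels can then be uploaded as a texture or copied into a raw framebuffer like any other bitmap.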
One possible solution is this:
CPlayer is a procedural graphics player with an IMGUI toolkit. It can be used for anything from quick demos, prototyping graphics apps, to full-fledged apps and games.
http://luapower.com/cplayer.html
I know this has been asked before (I did search) but I promise you this one is different.
I am making an app for Mac OS X Mountain Lion, and I need to add a little bit of a bloom effect. I need to render the entire scene to a texture the size of the screen, reduce the size of the texture, pass it through a pixel buffer, and then use it as a texture for a quad.
I ask this again because a few of the usual techniques do not seem to work. I cannot use #version, layout, or out in my fragment shader, as they do not compile. If I just use gl_FragColor as normal, I get random pieces of the screen behind my app rather than the scene I am trying to render. The documentation doesn't say anything about such things.
So, basically, how can I render to a texture properly with the Mac implementation of OpenGL? Do I need to use extensions to do this?
I use the code from here
Rendering to a texture is best done using FBOs, which let you render directly into the texture. If your hardware/driver doesn't support OpenGL 3+, you will have to use the FBO functionality through the ARB_framebuffer_object core extension or the EXT_framebuffer_object extension.
If FBOs are not supported at all, you will either have to resort to a simple glCopyTexSubImage2D (which involves a copy, though only GPU-to-GPU) or use the more flexible but rather intricate (and deprecated) PBuffers.
This tutorial on FBOs provides a simple example of rendering to a texture and then using that texture for further rendering. Since the question lacks specific information about the particular problems you encountered with your approach, these rather general, googlable pointers to the usual render-to-texture resources will have to suffice for now.
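As a starting point, here is a minimal FBO setup sketch, assuming OpenGL 3+ (or ARB_framebuffer_object) and a function loader such as GLEW, with error handling kept to a bare minimum:

    /* Sketch: create a texture and an FBO that renders into it. */
    #include <GL/glew.h>

    GLuint create_render_target(int width, int height, GLuint *out_texture)
    {
        GLuint fbo, tex;

        /* Colour texture the scene will be rendered into. */
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, NULL);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

        /* Framebuffer object with the texture as its colour attachment. */
        glGenFramebuffers(1, &fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, tex, 0);

        if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
            return 0;  /* incomplete: bad format, missing attachment, ... */

        glBindFramebuffer(GL_FRAMEBUFFER, 0);  /* back to the window */
        *out_texture = tex;
        return fbo;
    }

To use it: bind the FBO, set the viewport to the texture size, draw the scene, then bind framebuffer 0 again and sample the texture on your quad. For a bloom pass you would normally also attach a depth renderbuffer, which is omitted here for brevity.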
I've been using SDL to render graphics in C. I know there are several options for creating graphics at the pixel level on Windows, including SDL and OpenGL. But how do these libraries do it? Fine, I can use SDL, but I'd like to know what SDL is doing so I don't feel like an ignorant fool. Am I the only one slightly frustrated by the opaque layer of frosting on modern computers?
A short explanation as to how this is done on other operating systems would also be interesting, but I am most concerned with Windows.
Edit: Since this question seems to be somehow unclear, this is precisely what I want:
I would like to know how pixel-level graphics manipulation (drawing on the screen pixel by pixel) works on Windows. What do libraries like SDL do with the operating system to make this happen? I can manipulate the screen pixel by pixel using SDL, so what magic does SDL perform to let me do this?
Windows has many graphics APIs. Some are layers built on top of others (e.g., GDI+ on top of GDI), and others are completely independent stacks (like the Direct3D family).
In an API like GDI, there are functions like SetPixel that let you change the value of a single pixel on the screen (or within a region of the screen that you have access to). But using SetPixel to set lots of pixels is generally slow.
If you were to build a photorealistic renderer, like a ray tracer, then you'd probably build up a bitmap in memory (pixel by pixel), and use an API like BitBlt that sends the entire bitmap to the screen at once. This is much faster.
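A rough Win32/GDI sketch of that pattern, filling a 32-bit DIB section pixel by pixel and then blitting it to a window in one call (hwnd is assumed to be an existing window; error handling is omitted):

    /* Sketch: build a bitmap in memory, then BitBlt it to the screen. */
    #include <windows.h>

    void blit_gradient(HWND hwnd, int width, int height)
    {
        BITMAPINFO bmi = {0};
        bmi.bmiHeader.biSize        = sizeof(bmi.bmiHeader);
        bmi.bmiHeader.biWidth       = width;
        bmi.bmiHeader.biHeight      = -height;   /* negative: top-down rows */
        bmi.bmiHeader.biPlanes      = 1;
        bmi.bmiHeader.biBitCount    = 32;
        bmi.bmiHeader.biCompression = BI_RGB;

        void *pixels = NULL;
        HDC screen  = GetDC(hwnd);
        HDC memdc   = CreateCompatibleDC(screen);
        HBITMAP dib = CreateDIBSection(memdc, &bmi, DIB_RGB_COLORS,
                                       &pixels, NULL, 0);
        HGDIOBJ old = SelectObject(memdc, dib);

        /* Write pixels directly into the DIB memory (0x00RRGGBB). */
        unsigned int *p = pixels;
        for (int y = 0; y < height; ++y)
            for (int x = 0; x < width; ++x)
                p[y * width + x] = ((x & 0xFF) << 16) | (y & 0xFF);

        /* One call moves the whole bitmap to the window. */
        BitBlt(screen, 0, 0, width, height, memdc, 0, 0, SRCCOPY);

        SelectObject(memdc, old);
        DeleteObject(dib);
        DeleteDC(memdc);
        ReleaseDC(hwnd, screen);
    }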
But it still may not be fast enough for rendering something like video. Moving all that data from system memory to the video card memory takes time. For video, it's common to use a graphics stack that's closer to the low-level graphics drivers and hardware. If the graphics card can do the video decompression directly, then sending the compressed video stream to the card will be much more efficient than sending the decompressed data from system memory to the video card--and that's often the limiting factor.
But conceptually, it's the same thing: you're manipulating a bitmap (or texture or surface or raster or ...), but that bitmap lives in graphics memory, and you're issuing commands to the GPU to set the pixels the way you want, and then to display that bitmap at some portion of the screen (often with some sort of transformation).
Modern graphics processors actually run little programs--called shaders--that can (among other things) do calculations to determine the pixel values. The GPUs are optimized to do these types of calculations and can do many of them in parallel. But ultimately, it boils down to getting the pixel values into some sort of bitmap in video memory.
I have a huge image (234 megapixels) that I want to display in a way that it is dynamically resampled depending on the size of the area the user wishes to see. Is there any tool that supports doing this, or will I need to do this myself?
Typically the techniques to do this are referred to as (Image) Tile Rendering. Basically this is how applications such as Google Maps work.
A quick search on google turned up an OpenGL library for doing this for you (http://www.mesa3d.org/brianp/TR.html). I'm sure there are others that you should be able to discover if this library doesn't fit your technology needs.
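If you end up rolling your own, the heart of such a viewer is just arithmetic: pick the pyramid level whose resolution roughly matches the current zoom, then load only the tiles that intersect the viewport. A purely illustrative sketch (the tile size, level scheme, and all names are made up):

    /* Sketch of tile selection for a pre-built image pyramid: level 0 is
     * full resolution and each level above it halves the resolution. */
    #include <math.h>

    #define TILE_SIZE 256

    typedef struct { int level, x0, y0, x1, y1; } TileRange;

    TileRange tiles_for_view(double view_x, double view_y,   /* in image px */
                             double view_w, double view_h,
                             double zoom /* screen px per image px */)
    {
        TileRange r;
        r.level = (int)fmax(0.0, floor(-log2(zoom)));
        double scale = pow(2.0, r.level);  /* image px per level px */

        r.x0 = (int)floor(view_x / scale / TILE_SIZE);
        r.y0 = (int)floor(view_y / scale / TILE_SIZE);
        r.x1 = (int)ceil((view_x + view_w) / scale / TILE_SIZE);
        r.y1 = (int)ceil((view_y + view_h) / scale / TILE_SIZE);
        return r;
    }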