Procedurally generated GUI

I've developed an interactive audio visualization engine. I need to make its GUI scalable to various screen sizes with various PPIs (this includes both very large screens and mobile devices). The designer simply sent me a PSD with graphical representations of the supported widgets, which I'm exporting into PNGs. The problem is that those bitmaps are of course not scalable and look ugly.
I've thought of several ways to achieve a resolution- and PPI-independent GUI:
Export PNGs in various sizes and select the appropriate set at runtime (a waste of space simply for storing bitmaps in various resolutions)
Use scale-9 (9-slice) images only (no fancy stuff)
Use SVG (not supported by rendering APIs; I could use something like nanovg for OpenGL, but what to do with a raw framebuffer then? Also performance problems and too much complexity for what I need)
I came up with the idea of pregenerating the bitmaps at runtime once, for the specific device, and using them afterwards. Are there any specific libraries for that, and maybe already-available themes which I could employ for now? I imagine the tool could work similarly to the cairo graphics library or the JavaScript canvas: reading a command list and outputting a bitmap. Any other ideas?
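To make the idea concrete, here is a minimal sketch of what I have in mind using the cairo C API; the widget, its design-space size, and the render_button/bake_button names are made up for illustration. The widget is described once as drawing commands, rasterized once per device at the detected scale factor, and the result is cached:

#include <cairo/cairo.h>

// The "command list": plain drawing calls, resolution-independent.
static void render_button(cairo_t* cr, double w, double h) {
    cairo_rectangle(cr, 0.5, 0.5, w - 1.0, h - 1.0);
    cairo_set_source_rgb(cr, 0.2, 0.4, 0.8);
    cairo_fill_preserve(cr);                 // keep the path for the outline
    cairo_set_source_rgb(cr, 0.1, 0.1, 0.1);
    cairo_set_line_width(cr, 1.0);
    cairo_stroke(cr);
}

cairo_surface_t* bake_button(double scale) {
    const double w = 80, h = 24;             // design-space size in points
    cairo_surface_t* s = cairo_image_surface_create(
        CAIRO_FORMAT_ARGB32, (int)(w * scale), (int)(h * scale));
    cairo_t* cr = cairo_create(s);
    cairo_scale(cr, scale, scale);           // one transform, crisp at any PPI
    render_button(cr, w, h);
    cairo_destroy(cr);
    return s;                                // cache and blit this from now on
}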

One possible solution is this:
CPlayer is a procedural graphics player with an IMGUI toolkit. It can
be used for anything from quick demos and prototyping graphics apps to
full-fledged apps and games.
http://luapower.com/cplayer.html

Related

2D graphics without OpenGL / DirectX, as for a GUI toolkit

I am wondering how to create 2D graphics without using OpenGL or DirectX. For example, what do Qt or GTK use to draw what is basically colored rectangles (and text)?
I know that with KDE 5 and GNOME 3 there were some complaints that OpenGL is now required (for certain effects, including 3D stuff like the desktop cube that was trendy for a while). So apparently something simpler was used before, yet I can't find out what. Here the answers are basically: OpenGL or SDL …
Well, the most basic way to draw a window is Xlib on Linux or Win32 on Windows. These are very low-level window-drawing APIs that also handle events, but it would probably be a lot of work to use them on their own.
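To give a feel for that level of abstraction, here is a minimal Xlib sketch that opens a window and paints one rectangle; error handling is omitted:

#include <X11/Xlib.h>

int main() {
    Display* dpy = XOpenDisplay(NULL);       // connect to the X server
    int scr = DefaultScreen(dpy);
    Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, scr),
                                     0, 0, 320, 240, 1,
                                     BlackPixel(dpy, scr),
                                     WhitePixel(dpy, scr));
    XSelectInput(dpy, win, ExposureMask | KeyPressMask);
    XMapWindow(dpy, win);
    for (;;) {                               // the event loop you manage yourself
        XEvent ev;
        XNextEvent(dpy, &ev);
        if (ev.type == Expose)               // redraw on every expose event
            XFillRectangle(dpy, win, DefaultGC(dpy, scr), 20, 20, 100, 60);
        if (ev.type == KeyPress)
            break;
    }
    XCloseDisplay(dpy);
    return 0;
}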
SDL, SFML, or OpenGL are probably better options in most cases, since the window-rendering protocols can draw rectangles and images but lack a lot of quality-of-life features that make your life as a dev easier. If you are looking for the absolute best performance, Xlib (or Wayland) might be the way to go, but if you are looking for a simple way to code a GUI application, it's probably a bad idea.
If you want a great and easy-to-use GUI library for menus and such, dear ImGui is very impressive and can run on a variety of rendering backends, including SDL and DirectX.
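For a taste of the style, the per-frame dear ImGui code is roughly this; the backend setup for SDL/OpenGL/DirectX is omitted (see the library's example projects), and the draw_menu name is made up:

#include "imgui.h"

// Called once per frame, between the backend's NewFrame and Render calls.
void draw_menu(bool& show_tools) {
    ImGui::Begin("Main menu");                 // a movable, resizable window
    if (ImGui::Button("Start"))                // returns true when clicked
        show_tools = true;
    ImGui::Checkbox("Show tools", &show_tools);
    ImGui::End();
}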
This answer could also help you; it seems closely related:
Does OpenGL use Xlib to draw windows and render things, or is it the other way around?
You'll notice that at the end they talk about other ways to draw windows, namely AGG and Cairo. It's a bit of a wall of text, but a greatly detailed answer.

Using SWF animation in Haxe/OpenFL applications

As great as Haxe has become with NME/OpenFL, the big problem when transitioning from AS3 development is assets. As much as Haxe is similar to AS3, and as much as OpenFL tries to provide a familiar API, the lack of SWF support scares away many developers.
My research on this topic led me to understand that current SWF support is rather weak and buggy, with many edits necessary to a SWF file in order to run it in Haxe.
The question is: how do you use SWF animation in OpenFL apps? Or, if you don't, what's the best solution you've found with regard to rendering time, processor time and file size?
Having spent more time on research and asking other developers, I put together a small list of possible alternatives to using SWF assets for animation. Hopefully it will help other developers who have a similar problem while SWF animation support is weak.
NOTE: all methods were selected considering three factors important to me: availability on all platforms, performance, and file size of assets. Therefore not all possible methods were included.
Tested on: HTML5, Android, iOS
SWF animation is possible with Haxe/OpenFL, but there are a few rules: no tweens, all animations frame-by-frame. Vector art should be cached as bitmap, saved as bitmap, or pre-rendered as a bitmap sequence, since on some platforms (e.g. neko) vector art is transformed to raster with ugly jagged edges. Some bugs were reported when representing MovieClips as graphics or vice versa, but I didn't notice any on the HTML5, Flash, iOS, or Android releases. Nested animations sometimes might skip frames when looped (I didn't see that either; maybe an older release of NME/OpenFL did that). I'd say this is a fairly good way to animate content as far as file size and platform availability are concerned, but it's a headache to edit all the assets to meet the requirements of Haxe support. And it's not fun to reuse these assets later, since they're all frame-by-frame animations and it's a mess.
Sprite sheet animations. Primarily used for HTML5 targets due to higher rendering performance. This follows standard OpenGL practice, so the method should apply to all OpenGL targets: the idea is to have one big file and save time over opening/loading multiple smaller files (the frame-lookup arithmetic is sketched just after this list). The performance is good and it works on all tested platforms, but the file size quickly gets out of hand, and the method can hardly be used for animations where the object changes size: it leaves unnecessarily large transparent areas, and rotating the image to best fit the space costs rendering performance through run-time edits of the transformation matrix.
Frame sequence, aka PNG sequence, animation. My personal favorite. It works well and fast on all platforms; it's possible to pre-render the animation (just like any other method above), transform it to a BitmapData array, stream-load it, etc. It does take a lot of disk space, but that can be softened by loading the assets before using them (HTML5, SWF), and it doesn't really matter for mobile devices, as even 1-2GB apps are OK for the market. A large advantage I found for myself is that this type of asset can be used with any other development stack (C++, Java, cocos2d) and saved as a sprite sheet if needed (cocos2d, like HTML5, prefers sprite sheets over anything else, as written in the official book on COCOS2DX by Roger Engelbert). With this flexibility, good performance and tolerable file size, I prefer this method over any other listed above.
Bone animation: PNG array + property list. Another approach is to keep separate images of the animated object's parts plus matrix data for every image on every frame. That way, with minimal disk space, it's possible to make thousands of animations. The downsides: it's harder (though not impossible) to nest animations for more complex cases, the constant matrix transformations limit the number of active animations on the display list (a horrible method for HTML5; the other platforms held up well), and the assets have little reusability. Usually it's the same good old SWF assets exported to this format, so it makes more sense to edit the FLA than the bone animation itself.
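As an aside on the sprite-sheet method above: the per-frame lookup is just grid arithmetic. A small C++ sketch, where the names and the uniform-grid layout are assumptions for illustration:

struct Rect { int x, y, w, h; };

// Uniform-grid sheet: frames of equal size, laid out left-to-right,
// top-to-bottom. Returns the source rectangle of frame i in pixels.
Rect frameRect(int i, int sheetWidth, int frameW, int frameH) {
    int cols = sheetWidth / frameW;      // frames per row
    return { (i % cols) * frameW,        // column -> x offset
             (i / cols) * frameH,        // row -> y offset
             frameW, frameH };
}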
Surely I've missed some good points; there are many ways to animate graphics, and some might work better for you than others, so feel free to leave comments and criticize. I still hope this topic was helpful.
This question might be obsolete. I compiled a C++ app in Haxe/OpenFL just 5 minutes ago and had no trouble getting SWF animations (with tweens) to work.
Here's a gif recording:
https://imgflip.com/gif/7l02f
I had an asset called "library.swf" containing that animation, exported as class "Oluv"
This requires the "swf" library, which is now free and can be installed with "haxelib install swf"
In my example, I added this to my application.xml file:
<haxelib name="swf" />
<library id="oluvLib" path="assets/library.swf" type="swf"/>
Then put this in a standard OpenFL template project:
// Load the SWF library declared in application.xml; the callback runs once it's ready.
Assets.loadLibrary("oluvLib", swfAssetsLoaded);

private function swfAssetsLoaded(library:AssetLibrary):Void {
    // "oluvLib:Oluv" is "<library id>:<exported class name>"
    var oluv = Assets.getMovieClip("oluvLib:Oluv");
    addChild(oluv);
    // Center the clip on the stage.
    oluv.x = (stage.stageWidth - oluv.width) / 2;
    oluv.y = (stage.stageHeight - oluv.height) / 2;
}
Tweens do not seem to work on the neko target, but they work fine in C++ and Flash (of course).

Store resource in Direct2D on GPU

Is there some way to store a "scene" in Direct2D on the GPU?
I'm looking for something like ID2D1Mesh (i.e. storing the resource in vector format, not as a bitmap) but where I can configure whether the mesh/scene/resource should be rendered with anti-aliasing or not.
Rick is correct: you can apply antialiasing at two different levels, either through Direct2D or through Direct3D. You can do both, but that's pointless; it would only waste resources and lead to poor results. Direct2D antialiasing is suitable if you want per-primitive, geometry-aware antialiasing. Direct3D antialiasing is useful if you want to sacrifice a bit of quality for better overall performance in some scenarios.
The Direct2D 1.1 command list literally stores/records a list of drawing commands that can be played back against different targets. This may be what you're after, as it's not rasterized; conceptually, it's like storing a vector image in device memory. Command lists are somewhat limited in that you cannot modify one once created, and the resources being drawn may not be changed either, but they're still quite handy nonetheless.
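A sketch of that record/playback pattern; the ctx, brush, and backBufferBitmap names are assumed to exist already (d2d1_1.h and wrl/client.h included):

// Record once: redirect the device context's target into a command list.
Microsoft::WRL::ComPtr<ID2D1CommandList> cmdList;
ctx->CreateCommandList(&cmdList);
ctx->SetTarget(cmdList.Get());
ctx->BeginDraw();
ctx->DrawEllipse(D2D1::Ellipse(D2D1::Point2F(50, 50), 40, 40), brush.Get());
ctx->EndDraw();
cmdList->Close();                       // immutable from here on

// Play back against any target, any number of times, at any transform.
ctx->SetTarget(backBufferBitmap.Get());
ctx->BeginDraw();
ctx->DrawImage(cmdList.Get());          // a command list is an ID2D1Image
ctx->EndDraw();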
There is a way to get antialiasing with ID2D1Mesh, but it's non-trivial. You have to create the Direct3D device yourself and then use ID2D1Factory::CreateDxgiSurfaceRenderTarget(). This allows you to configure the multisampling/antialiasing settings of the D3D device directly, and then meshes play along just fine (in fact, I think you'd just always tell Direct2D to use aliased rendering). I haven't done this myself, but there is an MSDN sample that shows how to do it. It's not for the faint of heart, and in order to do software rendering you have to initialize a WARP device. It does work, however.
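For reference, the render-target part of that interop looks roughly like this. It's a sketch: it assumes you've already created the D3D device and a multisampled swap chain yourself (e.g. DXGI_SWAP_CHAIN_DESC::SampleDesc.Count = 4) and queried its back buffer as an IDXGISurface, here called surface; factory is your ID2D1Factory:

D2D1_RENDER_TARGET_PROPERTIES props = D2D1::RenderTargetProperties(
    D2D1_RENDER_TARGET_TYPE_DEFAULT,
    D2D1::PixelFormat(DXGI_FORMAT_B8G8R8A8_UNORM,
                      D2D1_ALPHA_MODE_PREMULTIPLIED));
Microsoft::WRL::ComPtr<ID2D1RenderTarget> rt;
factory->CreateDxgiSurfaceRenderTarget(surface.Get(), &props, &rt);
// D3D's multisampling now does the smoothing, so Direct2D can run aliased:
rt->SetAntialiasMode(D2D1_ANTIALIAS_MODE_ALIASED);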
Also, in Direct2D 1.1 (Windows 8, or Windows 7 + Platform Update), you can use the ID2D1CommandList interface for record/playback. I'm not sure whether it's implemented as "compile to GPU" (a la mesh) or as simple record/playback of commands.
In Windows 8.1, Direct2D introduced geometry realizations, which let you store a tessellated version of a geometry and later render it back with or without anti-aliasing, just as you asked. These are highly recommended over meshes. Command lists, while convenient, don't have the same caching abilities as creating and storing geometry realizations yourself.
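A sketch of the realization workflow, assuming an ID2D1DeviceContext1 named ctx1, an ID2D1PathGeometry named geometry, a brush, and the dpiX/dpiY/maxZoom values you expect to render at; as I understand the API, the context's antialias mode at creation time is baked into the realization:

// Bake the tessellation once, at a flattening tolerance matched to the DPI/zoom.
FLOAT tol = D2D1::ComputeFlatteningTolerance(
    D2D1::Matrix3x2F::Identity(), dpiX, dpiY, maxZoom);
ctx1->SetAntialiasMode(D2D1_ANTIALIAS_MODE_PER_PRIMITIVE); // or ALIASED
Microsoft::WRL::ComPtr<ID2D1GeometryRealization> realization;
ctx1->CreateFilledGeometryRealization(geometry.Get(), tol, &realization);

// Cheap to draw every frame; the tessellation is cached on the GPU.
ctx1->BeginDraw();
ctx1->DrawGeometryRealization(realization.Get(), brush.Get());
ctx1->EndDraw();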

Magnify a streaming webcam image in OS X

I need to take a streaming image from a USB / UVC-compliant web camera and digitally magnify it. The application is for people with low vision, to enable them to read.
The application must run on a Mac under OS X.
I am trying to figure out which framework to use. I found that I can use a CALayer and apply an affine transformation; however, the image is grainy, as you would expect. I need to smooth it out with anti-aliasing or some other method.
I know this is a very general question, but I need to know how to focus my efforts. At the moment I am chasing my tail reading docs, etc.
Does anybody have a suggestion for which OS X frameworks to use, and which algorithms or methods to smooth out grainy magnified images?
Thanks!
You're basically talking about interpolation: resizing images with some intelligent filling-in of information rather than just chopping pixels. There are various methods you could look into, such as "bilinear" and "bicubic". In the Apple API you might want to look at CGContextSetInterpolationQuality, which could help with some of the noise you're seeing. OpenCV is a very extensive image-processing library that is readily available on OS X and has a C++ API.
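As a concrete example of those interpolation modes, a 3x digital zoom with OpenCV's C++ API is essentially one call; the magnify name is made up:

#include <opencv2/imgproc.hpp>

// Magnify a captured frame 3x. INTER_CUBIC (or INTER_LANCZOS4) smooths
// the result compared to the blocky pixels of INTER_NEAREST.
cv::Mat magnify(const cv::Mat& frame) {
    cv::Mat out;
    cv::resize(frame, out, cv::Size(), 3.0, 3.0, cv::INTER_CUBIC);
    return out;
}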
Keep in mind, though, that you will ultimately be limited by the quality of the web cam, and the environment (especially lighting). There are "super resolution" techniques to stitch multiple images together, but they may not be applicable if this is a live-video application.
I can't overemphasize the importance of good lighting for applications like this. If you think about doing an iOS version, look at what apps like TurboScan do using the flash and a "best-of-three" technique. It's amazing what high-quality images of printed text you can get that way.

Equivalent of Windows GDI regions in Linux

I sometimes use Windows GDI regions for graphics drawing and invalidation/validation. For example, the program http://www.maxerist.net/main/soft-for-win/tubicus (OSS) was made using only regions (no bitmaps or off-screen buffers). The drawing is done with FillRgn and FrameRgn, invalidation and painting with InvalidateRgn and CombineRgn, and every cell (see screenshot) is a simple region created with CreateEllipticRgn, CreatePolygonRgn and CombineRgn.
I have plans to port it to Linux. As I understand it, there are many graphics libraries on Linux. Are there any that support Windows-like regions?
Thanks
You want to use the X Window System, a.k.a. X11, as your graphics platform. Its client library is called Xlib. The platform supports polygonal regions.
There are many libraries written on top of Xlib (Gtk, Qt, wxWindows and more), but you can always use the low-level Xlib API directly alongside any of them. Qt even supports elliptic regions. I don't know the details, but I guess they're implemented on top of X11 polygonal regions.
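The mapping from GDI is fairly direct: XPolygonRegion plays the role of CreatePolygonRgn, and XUnionRegion/XIntersectRegion/XSubtractRegion the role of CombineRgn (there is no elliptic-region call in Xlib; you'd approximate an ellipse with a polygon). A sketch, assuming an open Display* dpy and a GC gc:

#include <X11/Xutil.h>   /* Region, XCreateRegion, XPolygonRegion, ... */

XPoint tri[3] = { {10, 10}, {110, 10}, {60, 90} };
Region a = XPolygonRegion(tri, 3, EvenOddRule);   // cf. CreatePolygonRgn
XRectangle rc = { 40, 40, 80, 60 };
Region b = XCreateRegion();
XUnionRectWithRegion(&rc, b, b);                  // rectangle region
Region combined = XCreateRegion();
XUnionRegion(a, b, combined);                     // cf. CombineRgn(RGN_OR)
XSetRegion(dpy, gc, combined);                    // clip drawing to the region
XDestroyRegion(a);
XDestroyRegion(b);                                // keep "combined" while in use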
Qt has a lot of options for painting, and using QPainter with QPainterPath objects might suit you well (there are samples in the Qt distribution). You can combine (add/intersect/subtract) paths.
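For instance, a sketch of composing a ring shape from two ellipses with QPainterPath; the paintCell name is illustrative:

#include <QPainter>
#include <QPainterPath>

// Compose an elliptic "cell" by combining paths
// (cf. CreateEllipticRgn + CombineRgn in GDI).
void paintCell(QPainter& p) {
    QPainterPath ellipse, hole;
    ellipse.addEllipse(QRectF(10, 10, 100, 60));
    hole.addEllipse(QRectF(40, 25, 40, 30));
    QPainterPath ring = ellipse.subtracted(hole);  // also: united(), intersected()
    p.setRenderHint(QPainter::Antialiasing);
    p.fillPath(ring, Qt::darkCyan);
}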
The QGraphicsView framework is a good alternative too.
