I need to take a streaming image from a USB/UVC-compliant web camera and digitally magnify it. The application is meant to let people with low vision read.
The application must run on a Mac under OS X.
I am trying to figure out which framework to use. I did find that I can use a CALayer and apply an affine transform, but the magnified image is grainy, as you would expect. I need to smooth it out with anti-aliasing or some other method.
I know this is a very general question, but I need to know how to focus my efforts. At the moment I am chasing my tail reading docs, etc.
Does anybody have a suggestion about which OS X frameworks to use, and which algorithms or methods can smooth out grainy magnified images?
Thanks!
You're basically talking about interpolation: resizing images with some intelligent filling-in of information rather than just chopping the pixels. There are various methods you could look into, such as bilinear and bicubic filtering. In the Apple API you might want to look at CGContextSetInterpolationQuality, which could help with some of the noise you're seeing. OpenCV is a very extensive image-processing library that is readily available on OS X and has a C++ API.
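If you go the OpenCV route, a rough sketch might look like this (the 2x factor and window name are just placeholders, and the exact interpolation flag is worth experimenting with):

    // Hedged sketch: grab frames from the default camera and enlarge them with
    // bicubic interpolation. The 2.0x factor and window name are arbitrary.
    #include <opencv2/opencv.hpp>

    int main() {
        cv::VideoCapture cap(0);                 // default UVC camera
        if (!cap.isOpened()) return 1;

        cv::Mat frame, zoomed;
        while (true) {
            cap >> frame;                        // grab one frame
            if (frame.empty()) break;

            // Bicubic interpolation smooths the blockiness you get from a
            // plain affine scale; INTER_LANCZOS4 is another option to try.
            cv::resize(frame, zoomed, cv::Size(), 2.0, 2.0, cv::INTER_CUBIC);

            cv::imshow("magnified", zoomed);
            if (cv::waitKey(1) == 27) break;     // Esc to quit
        }
        return 0;
    }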
Keep in mind, though, that you will ultimately be limited by the quality of the web cam, and the environment (especially lighting). There are "super resolution" techniques to stitch multiple images together, but they may not be applicable if this is a live-video application.
I can't overemphasize the importance of good lighting for applications like this. If you think about doing an iOS version of this, look at what apps like Turboscan do using the flash and a "best-of-three" technique. It's amazing what high-quality images you can get of printed text that way.
Related
I am wondering how to create some 2D graphics without using OpenGL or DirectX. What do, for example, Qt or GTK use to draw what are basically colored rectangles (and text)?
I know that with KDE 5 and GNOME 3 there were some complaints that OpenGL is now required (for certain effects, including 3D stuff like the desktop cube that was trendy for a while). So apparently something simpler was used before, yet I can't find out what. Here the answers are basically: OpenGL or SDL …
Well, the most basic way to draw a window is Xlib on Linux or Win32 on Windows. These are very basic window-drawing APIs that also handle events, but it would probably be a lot of work to use them on their own.
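To give a sense of what "on their own" means, here is a hedged sketch of the bare Xlib route (sizes and colors are arbitrary; link against libX11):

    // Minimal sketch of the raw Xlib approach: open a window and draw one
    // filled rectangle on every Expose event. Sizes and colors are arbitrary.
    #include <X11/Xlib.h>

    int main() {
        Display *dpy = XOpenDisplay(nullptr);
        if (!dpy) return 1;

        int screen = DefaultScreen(dpy);
        Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, screen),
                                         0, 0, 400, 300, 1,
                                         BlackPixel(dpy, screen),
                                         WhitePixel(dpy, screen));
        XSelectInput(dpy, win, ExposureMask | KeyPressMask);
        XMapWindow(dpy, win);

        GC gc = DefaultGC(dpy, screen);
        for (;;) {
            XEvent ev;
            XNextEvent(dpy, &ev);
            if (ev.type == Expose) {
                XSetForeground(dpy, gc, BlackPixel(dpy, screen));
                XFillRectangle(dpy, win, gc, 50, 50, 120, 80);
            }
            if (ev.type == KeyPress) break;      // any key quits
        }
        XCloseDisplay(dpy);
        return 0;
    }

As you can see, even "a colored rectangle and an event loop" takes a fair amount of ceremony, which is why the libraries below exist.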
SDL, SFML, or OpenGL are probably better options in most cases, since the window-rendering protocols can draw rectangles and images but lack a lot of quality-of-life features that make your life as a dev easier. If you are looking for the absolute best performance, Xlib (or Wayland) might be the way to go, but if you are looking for a simple way to code a GUI application it's probably a bad idea.
If you want a great and easy-to-use GUI for menus and such, Dear ImGui is very impressive and can run on a variety of rendering backends, including SDL and DirectX.
This answer could also help you; it seems fairly close:
Does OpenGL use Xlib to draw windows and render things, or is it the other way around?
You'll notice that at the end they mention other ways to draw, namely AGG and Cairo. It's a bit of a wall of text, but a very detailed answer.
I've developed an interactive audio visualization engine. I need to make its GUI scale to various screen sizes with various PPIs (this includes both very large screens and mobile devices). The designer simply sent me a PSD with graphical representations of the supported widgets, which I'm exporting to PNGs. The problem is that those bitmaps are of course not scalable and look ugly.
I've thought about several ways to achieve a resolution- and PPI-independent GUI:
Export PNGs at various sizes and select the appropriate set at runtime (a waste of space simply for storing bitmaps at various resolutions)
Use scale 9 images only (no fancy stuff)
Use SVG (not supported by the rendering APIs; I could use something like NanoVG for OpenGL, but then what do I do with a raw framebuffer? There are also performance problems and too much complexity for what I need)
I came up with the idea of pregenerating the bitmaps at runtime, once per specific device, and using them afterwards. Are there any libraries for that, and maybe already-available themes I could employ for now? I imagine the tool could work similarly to the cairo graphics library or the JavaScript canvas, reading a command list and outputting a bitmap. Any other ideas?
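Roughly what I have in mind, sketched with cairo (the scale factor, widget size, and output file name are just placeholders):

    // Hedged sketch of the "pregenerate at runtime" idea: describe a widget as
    // vector commands and rasterize it once at the device's scale factor.
    // The 2.0 scale, sizes, and output file are placeholders.
    #include <cairo.h>

    int main() {
        const double scale = 2.0;                  // e.g. derived from screen PPI
        const int w = 120, h = 40;                 // widget size in logical units

        cairo_surface_t *surface = cairo_image_surface_create(
            CAIRO_FORMAT_ARGB32, (int)(w * scale), (int)(h * scale));
        cairo_t *cr = cairo_create(surface);
        cairo_scale(cr, scale, scale);             // draw in logical units

        // "Command list" for a plain button: a background fill plus a border.
        cairo_set_source_rgb(cr, 0.25, 0.25, 0.30);
        cairo_rectangle(cr, 0, 0, w, h);
        cairo_fill(cr);
        cairo_set_source_rgb(cr, 0.80, 0.80, 0.85);
        cairo_set_line_width(cr, 2.0);
        cairo_rectangle(cr, 1, 1, w - 2, h - 2);
        cairo_stroke(cr);

        // The resulting bitmap would be cached and reused by the GUI.
        cairo_surface_write_to_png(surface, "button@2x.png");
        cairo_destroy(cr);
        cairo_surface_destroy(surface);
        return 0;
    }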
One possible solution is this:
CPlayer is a procedural graphics player with an IMGUI toolkit. It can be used for anything from quick demos, prototyping graphics apps, to full-fledged apps and games.
http://luapower.com/cplayer.html
I have a huge image (234 megapixels) that I want to plot in such a way that it is dynamically resampled depending on the size of the area the user wishes to see. Is there any tool that supports doing this, or will I need to do it myself?
Typically the techniques to do this are referred to as (Image) Tile Rendering. Basically this is how applications such as Google Maps work.
A quick search on Google turned up an OpenGL library that does this for you (http://www.mesa3d.org/brianp/TR.html). I'm sure there are others that you should be able to discover if this library doesn't fit your technology needs.
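If you end up rolling it yourself, the core bookkeeping is simple: given the visible region in image coordinates, compute which fixed-size tiles intersect it and load/draw only those. A hedged sketch (the 256-pixel tile size, struct name, and example numbers are arbitrary):

    // Hedged sketch of the tile-selection math behind tiled rendering:
    // given a viewport in image coordinates, list the 256x256 tiles it covers.
    #include <algorithm>
    #include <cstdio>
    #include <vector>

    struct Tile { int col, row; };

    std::vector<Tile> visibleTiles(double viewX, double viewY,
                                   double viewW, double viewH,
                                   int imageW, int imageH,
                                   int tileSize = 256) {
        int firstCol = std::max(0, (int)(viewX / tileSize));
        int firstRow = std::max(0, (int)(viewY / tileSize));
        int lastCol  = std::min((imageW - 1) / tileSize,
                                (int)((viewX + viewW) / tileSize));
        int lastRow  = std::min((imageH - 1) / tileSize,
                                (int)((viewY + viewH) / tileSize));

        std::vector<Tile> tiles;
        for (int r = firstRow; r <= lastRow; ++r)
            for (int c = firstCol; c <= lastCol; ++c)
                tiles.push_back({c, r});         // load/draw only these tiles
        return tiles;
    }

    int main() {
        // Example: a 1280x720 viewport over the middle of an 18000x13000 image.
        for (Tile t : visibleTiles(9000, 6500, 1280, 720, 18000, 13000))
            std::printf("tile col=%d row=%d\n", t.col, t.row);
        return 0;
    }

In practice you would also precompute downsampled pyramid levels of the image and pick the level that best matches the current zoom before selecting tiles.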
I've created a relatively simple tower defense game using C++ and SFML. I'm very interested in creating a nice GUI overlay for it, i.e. HUD, menus, etc. I know there are a lot of GUI libraries out there, but I would like to make my own (for learning purposes).
I'm very familiar with working with graphics, but I'm not as familiar with GUI systems (I just render my frames, and don't worry about widgets, title bars, etc.).
Are there any good articles out there, or perhaps suggestions regarding how to lay out such an interface?
There are a couple of ways I know of (at least for Java) to get a nice HUD in your game. One method is to have a separate 3D world with the UI elements placed directly in front of the camera, then overlay that camera's view over the main view. This can be buggy at times, especially when you don't have a good color filter or when you have a high number of objects being displayed. Another method is to look at a GUI library designed for this purpose (such as NiftyGUI for Java).
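Since your game is C++/SFML, the first approach translates there too: draw the world through a scrolling game view, then switch back to a fixed view so the HUD stays glued to the screen. A hedged sketch for SFML 2.x (sizes, colors, and positions are placeholders):

    // Hedged sketch of the overlay idea in SFML 2.x: render the world through a
    // scrolling game view, then switch to the default view so HUD elements stay
    // fixed on screen. Sizes, colors, and positions are placeholders.
    #include <SFML/Graphics.hpp>

    int main() {
        sf::RenderWindow window(sf::VideoMode(800, 600), "TD HUD sketch");
        sf::View gameView(sf::FloatRect(0.f, 0.f, 800.f, 600.f));

        sf::RectangleShape tower(sf::Vector2f(40.f, 40.f));      // stand-in world object
        tower.setPosition(380.f, 280.f);

        sf::RectangleShape healthBar(sf::Vector2f(200.f, 20.f)); // stand-in HUD widget
        healthBar.setFillColor(sf::Color::Green);
        healthBar.setPosition(10.f, 10.f);

        while (window.isOpen()) {
            sf::Event event;
            while (window.pollEvent(event))
                if (event.type == sf::Event::Closed) window.close();

            gameView.move(0.01f, 0.f);               // pretend the camera scrolls

            window.clear();
            window.setView(gameView);                // world pass
            window.draw(tower);
            window.setView(window.getDefaultView()); // HUD pass: fixed to the screen
            window.draw(healthBar);
            window.display();
        }
        return 0;
    }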
A quick Google search led me to a listing on Wikipedia of many open-source GUI libraries you could use. There are many others out there, so you should probably do the search yourself, or look around on GitHub and SourceForge.
I'm writing a 2D game and I'm looking to play a sound in a lightweight manner while being able to pan it left and right as needed. NSSound is fine for everything, including volume adjustments, but it can't pan.
One other wrinkle: I'm using MonoMac, and AVFoundation is not available. So AVAudioPlayer is a no-go.
From looking at the available APIs the only answer I've found seems to be "use OpenAL", but I'm interested to see if there's any other alternatives. It's a fairly simple 2D game, and I'd rather avoid mucking with positional audio if I can avoid it. (Even panning is sort of optional, it's just a nice-to-have that I'd like to work in if it's not going to ruin my day.)
This late in the (OpenGL) game I'd be using it even for a 2D (sprite-based) game; gamers expect things like parallax, shadowing, textures, etc., so why re-invent the wheel? That said, if you understand OpenGL, then OpenAL is actually simpler. You can probably get everything you need with less than a half-dozen API calls.
If you choose to ignore the above advice… there's always Core Audio.
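For what it's worth, the OpenAL route really is short: a mono buffer attached to a source whose X position you nudge left or right acts as a pan control, regardless of which binding you call it through. A hedged C/C++ sketch of the underlying calls (the generated sine wave is only a stand-in for your real sound data):

    // Hedged sketch of panning with OpenAL: play a mono buffer and move the
    // source along the X axis (-1 = left, +1 = right). The generated sine wave
    // is only a stand-in for real sound data.
    #include <OpenAL/al.h>
    #include <OpenAL/alc.h>
    #include <cmath>
    #include <vector>
    #include <unistd.h>

    int main() {
        ALCdevice *device = alcOpenDevice(nullptr);
        ALCcontext *context = alcCreateContext(device, nullptr);
        alcMakeContextCurrent(context);

        // One second of 440 Hz mono audio as placeholder data.
        const double pi = 3.14159265358979;
        std::vector<short> samples(44100);
        for (size_t i = 0; i < samples.size(); ++i)
            samples[i] = (short)(32000 * std::sin(2.0 * pi * 440.0 * i / 44100.0));

        ALuint buffer, source;
        alGenBuffers(1, &buffer);
        alBufferData(buffer, AL_FORMAT_MONO16, samples.data(),
                     (ALsizei)(samples.size() * sizeof(short)), 44100);
        alGenSources(1, &source);
        alSourcei(source, AL_BUFFER, buffer);

        // The source must be mono for positioning to pan it between speakers.
        alSource3f(source, AL_POSITION, -1.0f, 0.0f, 0.0f);  // pan hard left
        alSourcePlay(source);
        sleep(1);

        alDeleteSources(1, &source);
        alDeleteBuffers(1, &buffer);
        alcMakeContextCurrent(nullptr);
        alcDestroyContext(context);
        alcCloseDevice(device);
        return 0;
    }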