I'm developing an application on a relatively restricted embedded Linux platform, meaning it has 256 MB of flash; RAM is not a problem, however. The application uses an SPI TFT screen, exposed through a framebuffer driver. The only thing required from the UI is text presentation with various fonts and sizes, including text animations (fade, slide, etc.). On the prototype, which ran on an RPi 3, I used libcairo and it went well. Now, given the tight space constraints of the real platform, it doesn't seem feasible to use libcairo anymore, since from what I've seen it requires more than 100 MB of space with all its dependencies. Note, however, that I come from the bare-metal world and have never dealt with complex UI, so I might be completely wrong about libcairo and its size. So guys, please suggest what 2D library I could pick for my case (C++ is preferred, but C is also OK), and just in case there is a way to use libcairo with a footprint of a few megabytes, please point me in the right direction.
Regards
For example, as you type, which library tells the computer screen to display the corresponding ASCII character and to move the cursor accordingly?
Imagine something like the old-school computers (with no GUI) running DOS or BASIC... which library is responsible for the UI?
Links to source code would be great for understanding how said library (or libraries) works.
The photo you have posted is of a BBC Micro running in Mode 7. This was an exception to most rules. Mode 7 was a low-memory mode, in which there were no pixels, just 256 text characters. 1K of memory was reserved in RAM to contain what was displayed on the screen at that moment. A special chip on the circuit board, called the Video ULA (Uncommitted Logic Array), read the contents of that memory and encoded it to the output. The ULA was ROM and could not be changed by the programmer.
The ZX81 worked in a similar way: 256 possible text characters and no pixels. However, the ZX81 had fewer dedicated chips and the main CPU did most of the work.
A more common setup was that every pixel was represented by a number of bits in memory (often more than one bit per pixel was needed because colours had to be indicated). Examples are the BBC in modes 1-6, the Acorn Electron, the Spectrum, the C64, and many others. When the user placed text on the screen, the computer's ROM would convert this to the correct pixels. Graphics could often be written directly to the RAM, or 'plotted' via BASIC. Once again, dedicated ROM chips and circuitry would then render this memory to the output. This approach required much more memory for the display.
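For illustration, here is a rough sketch in C (with made-up names and a 40x25 character layout, not any particular machine's hardware) of the kind of work that ROM did: look up each character code from screen memory in a font table and expand its glyph rows into pixels.

```c
#include <stdint.h>

#define COLS 40
#define ROWS 25

/* Hypothetical data: an 8x8 one-bit glyph per character code, a byte of
   screen RAM per character cell, and one byte per output pixel. */
extern const uint8_t font_rom[256][8];
extern uint8_t screen_ram[ROWS][COLS];
extern uint8_t framebuffer[ROWS * 8][COLS * 8];

void render_text_screen(void)
{
    for (int row = 0; row < ROWS; row++) {
        for (int col = 0; col < COLS; col++) {
            const uint8_t *glyph = font_rom[screen_ram[row][col]];
            for (int y = 0; y < 8; y++) {
                for (int x = 0; x < 8; x++) {
                    /* Each bit of the glyph row selects foreground (1)
                       or background (0) for one pixel. */
                    framebuffer[row * 8 + y][col * 8 + x] =
                        (glyph[y] >> (7 - x)) & 1;
                }
            }
        }
    }
}
```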
Every 8-bit computer had its own way of representing the display in RAM. You need to get the manuals for the machine you are trying to program (easy to find on the internet for the better-known micros).
Many emulators are open source, if you want to see the internals. For example: https://github.com/stardot/beebem
If you're interested in seeing the internals of a terminal to better understand how it works and renders input/output, Bash is completely open source; its latest source code is available from the GNU project.
I've got an iMac whose VRAM appears to have gone on the fritz. On boot, things are mostly fine for a while, but eventually, as more and more windows are opened (i.e. textures are created on the GPU), I eventually hit the glitchy VRAM, and I get these bizarre "noisy" grid-like patterns of red and green in the windows.
I had an idea, but I'm mostly a newb when it comes to OpenGL and GPU programming in general, so I figured I'd ask here to see if it was plausible:
What if I wrote a little app that ran on boot and would allocate GPU textures (of some reasonable quantum -- I dunno, maybe 256K?) until it consumed all available VRAM (i.e. it can't allocate any more textures). Then have it upload a specific pattern of data into each texture. Next it would read back the texture from the GPU and checksum the data against the original pattern. If it checks out, then release it (for the rest of the system to use). If it doesn't checksum, hang onto it (forever).
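To make that concrete, here is a rough sketch of one test cycle in C using plain desktop OpenGL calls; the 256x256 RGBA size and the test pattern are just illustrative, and a real test would probably also have to draw with each texture first, since the driver may otherwise satisfy the read-back from a cached copy in system RAM rather than from VRAM.

```c
#include <OpenGL/gl.h>   /* macOS; use <GL/gl.h> elsewhere */
#include <stdlib.h>
#include <string.h>

#define TEX_DIM 256      /* 256x256 RGBA = 256 KiB, the "quantum" above */

/* Returns 0 if the texture verified and was released,
   1 if it failed and is being held (handle returned via *bad_tex). */
static int test_one_texture(GLuint *bad_tex)
{
    static unsigned char pattern[TEX_DIM * TEX_DIM * 4];
    static unsigned char readback[TEX_DIM * TEX_DIM * 4];
    GLuint tex;

    for (size_t i = 0; i < sizeof(pattern); i++)
        pattern[i] = (unsigned char)(i ^ (i >> 8));   /* arbitrary test pattern */

    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, TEX_DIM, TEX_DIM, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pattern);            /* upload */
    glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_UNSIGNED_BYTE,
                  readback);                                     /* read back */

    if (memcmp(pattern, readback, sizeof(pattern)) == 0) {
        glDeleteTextures(1, &tex);   /* checks out: give it back to the system */
        return 0;
    }
    *bad_tex = tex;                  /* mismatch: hang onto it forever */
    return 1;
}
```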
Flaws I can see: a user space app is not going to be able to definitively run through ALL the VRAM, since the system will have grabbed some, but really, I'm just trying to squeeze some extra life out of a dying machine here, so anything that helps in that regard is welcome. I'm also aware that reading back from VRAM is comparatively slow, but I'm not overly concerned with performance -- this is a practical endeavor, to be sure.
Does this sound plausible, or is there some fundamental truth about GPUs that I'm missing here?
Your approach is interesting, although I think there are other ways that might be easier to implement if you're looking for a quick fix or workaround. If your VRAM is on the fritz, then it's likely there is a specific location where the corruption is taking place. If you're able to determine consistently that it happens at a certain point (VRAM consuming x amount of memory, etc.), then you can work with it.
It's quite easy to create a RAM disk, and another possibility would be to allocate regular memory for VRAM. I know both of these are very possible, because I've done it. If someone says something "won't work" (no offense Pavel), it shouldn't discourage you from at least trying. If you're interested in the techniques that I mentioned I'd be happy to provide more info, however, this is about your idea and I'd like to know if you can make it work.
If you are able to write an app that runs on boot, even before an OS loads, that would be in the bootloader. Why wouldn't you just do a self-test of memory at that point?
Or did you mean a userland app after the OS boots to the login? A userland app will not be able to cycle through every address the way you describe, simply because not every page is mapped directly into userland.
If you are sure that the RAM is the problem, did you try replacing it?
I want to see some examples of OES_get_program_binary in use; in other words, scenarios in which program binaries are really useful. Thanks.
The utility of OES_get_program_binary is outlined pretty clearly in the extension specification itself.
On OpenGL ES devices, a common method for using shaders is to precompile them for each specific device. However, there are a lot of GPUs out there. Even if we assume that each GPU within a specific generation can run the same precompiled shaders (which is almost certainly not true in many cases), that still means you need one set of precompiled shaders for Tegra 2, one for PowerVR Series 5 GPUs, one for PowerVR's Series 5X, and one for Qualcomm's current GPU. And that doesn't take into account next-gen mobile GPUs, like PowerVR Series 6 and Tegra 3, or whatever Qualcomm's coming out with next. Or any number of other GPUs I haven't mentioned.
The only alternative is to ship text shaders and compile them as needed. As you might imagine, running a compiler on low-power ARM chips is rather expensive.
OES_get_program_binary provides a reasonable alternative. It lets you take a compiled, linked program object and save a compiled binary image to local storage. This means that, when you go to load that program again, you don't have to load it from text shaders (unless the version has changed); you can load it from the binary directly. This should make applications start up faster on subsequent executions.
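As a rough sketch of what that looks like in code (assuming the GL_OES_get_program_binary entry points are available, e.g. obtained via eglGetProcAddress, and skipping error handling):

```c
#include <GLES2/gl2.h>
#include <GLES2/gl2ext.h>
#include <stdio.h>
#include <stdlib.h>

/* Save a linked program's binary together with its driver-specific format. */
void save_program_binary(GLuint prog, const char *path)
{
    GLint len = 0;
    GLsizei written = 0;
    GLenum format = 0;

    glGetProgramiv(prog, GL_PROGRAM_BINARY_LENGTH_OES, &len);
    void *blob = malloc(len);
    glGetProgramBinaryOES(prog, len, &written, &format, blob);

    FILE *f = fopen(path, "wb");
    fwrite(&format, sizeof(format), 1, f);  /* the format must be stored too */
    fwrite(blob, 1, written, f);
    fclose(f);
    free(blob);
}

/* Load a previously saved binary; returns the link status. On failure
   (e.g. the driver or GPU changed), fall back to compiling the text shaders. */
GLint load_program_binary(GLuint prog, GLenum format,
                          const void *blob, GLsizei size)
{
    GLint ok = GL_FALSE;
    glProgramBinaryOES(prog, format, blob, size);
    glGetProgramiv(prog, GL_LINK_STATUS, &ok);
    return ok;
}
```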
I have been reading https://stackoverflow.com/questions/158756/what-is-the-best-image-manipulation-library and have tried a few libraries, and I am now looking for input on what best fits our needs. I will start by describing our current setup and problems.
We have a system that needs to resize and crop a large number of images from big originals. We handle 50,000+ images every day on 2 powerful servers. Today we use ImageGlue from WebSupergoo, but we don't like it at all: it is slow and hangs the service now and then (that's in another, unanswered Stack Overflow question). We have a threaded Windows service that uses the Microsoft ThreadPool to resize as much as possible on the 8-core machines.
I have tried AForge and it went very well: it was loads faster and never crashed. But I had quality problems with a few images. That is down to which algorithms I used, of course, so it can be tweaked. But I want to widen our view and see if that's the right way to go.
so:
It needs to be C# .NET and run in a Windows service (since we won't change the rest of the service, only the image handling).
It needs to handle a threaded environment well.
It needs to be fast, since today's solution is too slow. But we also want good quality and small file sizes, since the images are later displayed on a web page with lots of visitors and need to look good.
So we have a lot of demands on getting good quality at a fast pace, and, secondarily, on keeping file sizes down, even if that can be adjusted a bit with compression.
Any comments or suggestions on what library to use?
I understand you said that you want to keep using C#, but I'm providing an alternative.
Depending on the amount of work you are doing, the fastest way to manipulate images is to do it entirely on a GPU (that would offload most of the pixel work). You can interoperate with CUDA from Managed C++, which you can call from your service. Or use DirectX surfaces and render targets (you get antialiasing and all the high-quality stuff out of the box).
However, before doing anything, make sure your workload is dominated by the trilinear/bilinear resizing and not by the encoding/decoding of the images. BTW, you will need at least one fast NVIDIA video card in each server to do the offloading (a cheap GTX 460 would be more than enough).
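For reference, the per-pixel work in question is roughly the following bilinear loop (sketched in plain C rather than C# or CUDA); every output pixel is computed independently of the others, which is exactly why it maps so well onto a GPU:

```c
/* Bilinear resize of an 8-bit RGBA image (illustrative; no SIMD or clamping tricks). */
void resize_bilinear_rgba(const unsigned char *src, int sw, int sh,
                          unsigned char *dst, int dw, int dh)
{
    for (int y = 0; y < dh; y++) {
        float fy = (y + 0.5f) * sh / dh - 0.5f;
        if (fy < 0) fy = 0;
        int y0 = (int)fy, y1 = y0 + 1 < sh ? y0 + 1 : sh - 1;
        float wy = fy - y0;

        for (int x = 0; x < dw; x++) {
            float fx = (x + 0.5f) * sw / dw - 0.5f;
            if (fx < 0) fx = 0;
            int x0 = (int)fx, x1 = x0 + 1 < sw ? x0 + 1 : sw - 1;
            float wx = fx - x0;

            for (int c = 0; c < 4; c++) {
                /* Blend the four neighbouring source pixels. */
                float top = src[(y0 * sw + x0) * 4 + c] * (1 - wx)
                          + src[(y0 * sw + x1) * 4 + c] * wx;
                float bot = src[(y1 * sw + x0) * 4 + c] * (1 - wx)
                          + src[(y1 * sw + x1) * 4 + c] * wx;
                dst[(y * dw + x) * 4 + c] =
                    (unsigned char)(top * (1 - wy) + bot * wy + 0.5f);
            }
        }
    }
}
```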
Hi
Is it possible to uninstall the X server and use XDirectFB with a tiny window manager, like awesome?
Do I need to compile from source every application I want to use with XDirectFB?
From these links, it isn't clear to me:
http://en.wikipedia.org/wiki/DirectFB
http://directfb.org/index.php?path=Projects%2FXDirectFB
Pretty much: yes, you can; no, you don't have to. I'm not sure you'll save anything, though.
A normal X server contains both raw hardware access support (the framebuffer) and the X abstraction layer for windowed apps and the window manager.
The X abstraction layer is quite heavyweight due to its support for multiple displays on multiple hosts, window geometry, ordering, palettes and so on, plus a generally rather overly complex API. Running it uses up a lot of resources but (arguably) makes programming easier.
OTOH, framebuffer usage is very simple: change a byte in memory, call one function, and the corresponding pixel is set. That's all. There is no overhead on the API side, but it's up to your application to draw every single pixel, manage cooperation with other applications, create windows, and so on.
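For example, on Linux the whole "change a byte in memory" workflow looks roughly like this (a sketch assuming a 32-bpp mode, with error handling omitted):

```c
#include <fcntl.h>
#include <linux/fb.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/fb0", O_RDWR);
    struct fb_var_screeninfo vinfo;
    struct fb_fix_screeninfo finfo;
    ioctl(fd, FBIOGET_VSCREENINFO, &vinfo);   /* resolution, bits per pixel */
    ioctl(fd, FBIOGET_FSCREENINFO, &finfo);   /* stride, buffer length */

    uint8_t *fb = mmap(NULL, finfo.smem_len, PROT_READ | PROT_WRITE,
                       MAP_SHARED, fd, 0);

    /* Set the pixel at (100, 50) to white: a plain store into mapped memory. */
    size_t offset = 50 * finfo.line_length + 100 * (vinfo.bits_per_pixel / 8);
    *(uint32_t *)(fb + offset) = 0x00FFFFFF;

    munmap(fb, finfo.smem_len);
    close(fd);
    return 0;
}
```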
DirectFB is a raw framebuffer access API that is fast and simple, with minimal overhead, but it provides no extras.
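To give an idea of the API, a minimal DirectFB "fill the screen" program looks roughly like this (following the usual DirectFB tutorial pattern, error checking omitted):

```c
#include <directfb.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    IDirectFB            *dfb;
    IDirectFBSurface     *primary;
    DFBSurfaceDescription desc;
    int width, height;

    DirectFBInit(&argc, &argv);
    DirectFBCreate(&dfb);
    dfb->SetCooperativeLevel(dfb, DFSCL_FULLSCREEN);

    desc.flags = DSDESC_CAPS;
    desc.caps  = DSCAPS_PRIMARY;              /* the visible screen surface */
    dfb->CreateSurface(dfb, &desc, &primary);
    primary->GetSize(primary, &width, &height);

    primary->SetColor(primary, 0x00, 0x00, 0x80, 0xFF);
    primary->FillRectangle(primary, 0, 0, width, height);

    sleep(2);                                  /* keep the result visible */

    primary->Release(primary);
    dfb->Release(dfb);
    return 0;
}
```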
XDirectFB is an app that runs on top of DirectFB, providing all the complexity of an X server without a hardware layer of its own.
You can then run any WM and app on top of XDirectFB, just as on top of any other X server.
Now, while DirectFB alone is of course much more lightweight than any X server, whether the combination of DirectFB + XDirectFB is lighter than a dedicated X server is not so certain.