Embedded Linux framebuffer rotation

I have to integrate an LCD screen into my embedded Linux (ARM9) system. The LCD is portrait 320x480 and I need to run the screen in landscape orientation, 480x320.
Using the LCD config register I can rotate it in hardware, so that (x,y) = (0,0) is rotated 90 degrees.
Here my problem starts: the screen's wide side is squeezed from 480 pixels down to 320, and the long side of the picture runs off the screen. AFAIK this should be fixed by changing the framebuffer dimensions, but I have tried a few ways to do it with no success yet.
Using fbset, below are the settings for portrait:
mode "480x320-55"
# D: 9.091 MHz, H: 18.182 kHz, V: 55.096 Hz
geometry 480 320 480 320 16
timings 110000 4 4 4 4 12 2
rgba 5/0,6/5,5/11,0/0
endmode
Sending the command:
fbset --geometry 480 320 480 320 16
Results in:
mode "480x320-55"
# D: 9.091 MHz, H: 18.182 kHz, V: 55.096 Hz
geometry 480 320 480 320 16
timings 110000 4 4 4 4 12 2
rgba 5/0,6/5,5/11,0/0
endmode
This makes the picture appear several times, overlapping, but the screen width is still too narrow.
I also tried providing double the screen size for the virtual xres and yres, with no change:
fbset --geometry 480 320 960 640 16
I also tried a framebuffer rotation utility I found on the web, "saFbdevRotation.c", which uses the framebuffer ioctls, but the active screen size is still incorrect.
Rotating by 90 degrees gives the output below (a sketch of the underlying ioctl calls follows it):
$> ./fb_rotate -r 90
## Before rotation
### Fix Screen Info:
Line Length - 640
Physical Address = 81a00000
Buffer Length = 1048576
### Var Screen Info:
Xres - 320
Yres - 480
Xres Virtual - 320
Yres Virtual - 480
Bits Per Pixel - 16
Pixel Clk - 110000
Rotation - 0
## after rotation
### Fix Screen Info:
Line Length - 960
Physical Address = 81a00000
Buffer Length = 1048576
### Var Screen Info:
Xres - 480
Yres - 320
Xres Virtual - 480
Yres Virtual - 320
Bits Per Pixel - 16
Pixel Clk - 110000
Rotation - 90
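The fb_rotate source isn't shown here, but the geometry change it performs boils down to the standard FBIOGET_VSCREENINFO / FBIOPUT_VSCREENINFO ioctls. A minimal sketch, assuming /dev/fb0 and a driver that honours the rotate field:

#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fb.h>
#include <cstdio>

int main() {
    int fd = open("/dev/fb0", O_RDWR);
    if (fd < 0) { perror("open /dev/fb0"); return 1; }

    fb_var_screeninfo var{};
    if (ioctl(fd, FBIOGET_VSCREENINFO, &var) < 0) { perror("FBIOGET_VSCREENINFO"); return 1; }

    // Ask for landscape geometry plus a 90-degree rotation.
    // Whether 'rotate' actually takes effect is entirely driver-dependent.
    var.xres = var.xres_virtual = 480;
    var.yres = var.yres_virtual = 320;
    var.rotate = FB_ROTATE_CW; // 1 = 90 degrees clockwise

    if (ioctl(fd, FBIOPUT_VSCREENINFO, &var) < 0) { perror("FBIOPUT_VSCREENINFO"); return 1; }

    close(fd);
    return 0;
}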
I can also add that the system is very limited in free memory; can this cause the framebuffer to not allocate a new buffer? There were, however, no errors in dmesg.
I will appreciate your advice.

> I can also add that the system is very limited with free memory, can this cause the fb to NOT allocate a new buffer? However there were no errors in dmesg.
Normally, the standard way to allocate the video buffer is to pre-allocate a large buffer (sized for the maximum video resolution you support) at boot time, by passing a mem= argument to the kernel so that the kernel won't claim that region for itself.
Later on, the video driver can call
void *ioremap(unsigned long phys_addr, unsigned long size)
which creates a mapping through which the driver operates on the framebuffer directly.
You can check this by doing cat /proc/iomem.
Thus the video driver's memory is pre-allocated and separate from the kernel's system memory (kmalloc(), get_free_pages(), vmalloc(), and so on), so the concern you raised is ruled out.
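You can also confirm this from userspace: FBIOGET_FSCREENINFO reports the buffer's fixed physical address and length (compare them with the fb_rotate output above), and mmap() simply maps that pre-allocated region rather than allocating anything new. A minimal sketch:

#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/fb.h>
#include <cstdio>

int main() {
    int fd = open("/dev/fb0", O_RDWR);
    if (fd < 0) { perror("open /dev/fb0"); return 1; }

    fb_fix_screeninfo fix{};
    if (ioctl(fd, FBIOGET_FSCREENINFO, &fix) < 0) { perror("FBIOGET_FSCREENINFO"); return 1; }
    printf("phys addr = %lx, length = %u\n", fix.smem_start, fix.smem_len);

    // Maps the existing buffer; no new framebuffer memory is created here.
    void *fb = mmap(nullptr, fix.smem_len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (fb == MAP_FAILED) { perror("mmap"); return 1; }

    munmap(fb, fix.smem_len);
    close(fd);
    return 0;
}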

I think your issue is related to the LCD you are using. I have seen several embedded LCDs that claim to support 90-degree rotation, where the result was exactly as you describe.
My problems were always with the RGB display interface; I suspect the rotation might have worked over the CPU interface. I have seen only one embedded display that did the rotation correctly on the RGB interface.
The point is, you can try to do the rotation with the LCD hardware, with your processor's hardware, or purely in software. I don't know whether the Linux framebuffer uses pure software or your processor's hardware; it probably depends on your driver.

Have you tried rotating the display by adding a line to config.txt (on the DOS boot partition)? The line you need is:
display_rotate=0..3
(where 0 is the default, 1 is 90 degrees clockwise, 2 is 180 degrees, and so on.)
There's also an lcd_rotate setting, but I'm not sure what the difference is.

Related

How does memory usage in browsers work for images - can I do one large sprite?

I currently display 115 (!) different sponsor icons at the bottom of many web pages on my website. They're lazy-loaded, but even so, that's quite a lot.
At present, these icons are loaded separately, and are sized 75x50 (or x2 or x3, depending on the screen of the device).
I'm toying with the idea of making them all into one sprite rather than 115 separate files: instead of lots of tiny little files, I'd have one large PNG or WEBP file. The way I'm considering doing it would make the smallest file 8,625 pixels across, and the x3 version would be 25,875 pixels across, which seems like a very large image (albeit only 225 px high).
Will an image of this pixel size cause a browser to choke?
Is a sprite the right way to achieve a faster-loading page here, or is there something else I should be considering?
115 icons at 75 pixels wide does indeed work out to a very wide 8625-pixel image that is only 50 px high...
But you don't have to use a low-height (50 px), very wide (8625 px) strip.
You can make a suitably sized rectangular image with a grid of icons, say 12 rows of 10 icons per row (a quick calculation follows below):
10 x 75 = 750, plus 5 px gaps between the 10 columns: roughly 800 pixels wide.
12 x 50 = 600, plus 5 px gaps between the 12 rows: roughly 660 pixels tall.
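A quick sanity check on that grid arithmetic, with the icon size and gap assumed above:

#include <cstdio>

int main() {
    // 12 rows x 10 columns = 120 cells, enough for 115 icons of 75x50 px.
    const int cols = 10, rows = 12;
    const int iconW = 75, iconH = 50, gap = 5;
    const int sheetW = cols * iconW + (cols - 1) * gap; // 750 + 45 = 795 px
    const int sheetH = rows * iconH + (rows - 1) * gap; // 600 + 55 = 655 px
    printf("sheet: %d x %d px\n", sheetW, sheetH);
    return 0;
}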

macOS doesn't change dppx with scaling - bug or feature?

I've run some tests on macOS 10.13 with dppx.
macOS has 4 scaled resolutions:
1680 x 1050
1440 x 900 (the default beginning in 2016)
1280 x 800 (the default until 2016)
1024 x 640
I've checked dppx (using this online service) and as a result I always get 2 dppx!
Why? What am I doing wrong? Is this a bug?
The MacBook's native resolution is 2560x1600, so assuming I have 2 dppx, the real scaling is: 2560 / 2 dppx x 1600 / 2 dppx = 1280 x 800.
The other scalings should be:
1680 x 1050 = 1.52 dppx
1440 x 900 = 1.77 dppx
1280 x 800 = 2 dppx
1024 x 640 = 2.5 dppx
Am I right? If so, why do the tools for checking dppx always report 2 dppx?
This is consistent with how macOS treats Retina as a whole. The scaling factor is locked at 2x and the virtual resolution of the screen changes, instead of the screen resolution being fixed and the scaling factor changing. The web browser is faithfully exposing this fact to the web page.
At 1680x1050, for instance, the entire screen is rendered at 3360x2100 with a 2x scaling factor and then scaled down by a factor of 1.3125 to fit the panel, but this downscaling is invisible to apps. If a Retina MacBook is set to any resolution other than exactly half the physical display, the user has no hope of seeing a pixel-perfect image.
For a platform where devicePixelRatio can change, test your site against the display scaling in Android N, or change the zoom level in Chrome or Firefox (but not Safari).
Some links:
Understanding points and the user space in Cocoa Drawing as they interact with screen resolution
What resolution programmers will get in Macbook Pro Retina
https://www.anandtech.com/show/6023/the-nextgen-macbook-pro-with-retina-display-review/6

Rasterize QML SVG Image in Alpha8 format

Importing a set of 80 icons in QML as non-visible images to be used for shader sources (for arbitrary colorization):
property Image action: Image { sourceSize.width: 512; sourceSize.height: 512; source: "icons/action.svg"; mipmap: true; antialiasing: true }
// 79 more of those
I discovered that my memory consumption skyrocketed from 45 MB to 128 MB, a whopping ~185% increase; that's 83 extra MB for the 80 icons.
It was to be expected: after all, 512 * 512 * 4 / 1024 / 1024 works out to exactly 1 MB of memory per icon. However, that cost is not acceptable, especially when targeting mobile phones with the app.
I could reduce the rasterization size; however, I want the icons to be nice and crisp. The icon size itself varies with the device display, but for an optimal user experience it needs to be about an inch or so. Given that most new high-end phones are already pushing above 500 dpi, I'd really hate to scale the icons down and have them appear blurry.
Since the SVGs are supposed to be colorized in arbitrary colors, they are in effect simply alpha masks, meaning that all I really need is one of those 4 channels. Unfortunately, QML's Image doesn't seem to offer any control over this; the SVG is rasterized to an RGBA image.
And indeed, if I do the rasterization manually:

#include <QDirIterator>
#include <QImage>
#include <QPainter>
#include <QSvgRenderer>
#include <QVector>

QVector<QImage> imgs;
imgs.reserve(80);
QDirIterator it(":/icons", QDirIterator::Subdirectories);
while (it.hasNext()) {
    QSvgRenderer renderer(it.next());
    // Alpha8: a single 8-bit channel per pixel instead of 4-byte RGBA.
    QImage ic(512, 512, QImage::Format_Alpha8);
    ic.fill(Qt::transparent);
    QPainter pp(&ic);
    renderer.render(&pp);
    imgs.append(ic);
}
I get the expected, modest 20 MB increase in memory usage.
Any ideas how to save some memory?
Two hours later - good news. First and foremost, a custom QQuickImageProvider works out of the box with QImage::Format_Alpha8, no problems there; the unnecessary channels are eliminated.
But more importantly, I realized that the icons lend themselves very well to a distance-field representation. So I now use 128x128 SDF textures, which scale beautifully to well above 512x512, reducing the memory usage for the icons to a measly 1.25 MB, a 64x reduction in RAM usage compared to the initial implementation.
The only downside is the extra 3-4 seconds of startup time (on a Note 3 phone), because the distance fields are computed on startup, but there are remedies for this as well; I might switch to pre-computed SDFs or offload it to shaders, which would be a little more complex but should reduce the time at least 5-fold.
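The provider code isn't included in the update above, but a minimal Alpha8 image provider could look something like this; the class name, resource path, and provider URI are illustrative, not from the original post:

#include <QQuickImageProvider>
#include <QSvgRenderer>
#include <QPainter>

class AlphaIconProvider : public QQuickImageProvider {
public:
    AlphaIconProvider() : QQuickImageProvider(QQuickImageProvider::Image) {}

    QImage requestImage(const QString &id, QSize *size, const QSize &requestedSize) override {
        const int w = requestedSize.width()  > 0 ? requestedSize.width()  : 512;
        const int h = requestedSize.height() > 0 ? requestedSize.height() : 512;
        QSvgRenderer renderer(QStringLiteral(":/icons/") + id + QStringLiteral(".svg"));
        QImage mask(w, h, QImage::Format_Alpha8); // one byte per pixel
        mask.fill(0);                             // fully transparent
        QPainter p(&mask);
        renderer.render(&p);
        if (size)
            *size = mask.size();
        return mask;
    }
};

// Registration, e.g. in main():
//   engine.addImageProvider(QStringLiteral("alphaicons"), new AlphaIconProvider);
// QML usage:
//   Image { source: "image://alphaicons/action" }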
The thing with the Image component is that it appears to render the SVG to a bitmap before it is drawn, hence the requirement to set sourceSize to force it to render your SVG at different resolutions. This forced rendering can lead to poor memory usage and poor performance.
As an alternative, I found a number of QML components that render SVGs perfectly via the icon property:
Item {
    width: 256
    height: 256
    Button {
        anchors.centerIn: parent
        background: Item { }
        icon.source: "https://raw.githubusercontent.com/Esri/calcite-ui-icons/master/icons/biking-32.svg"
        icon.width: 256
        icon.height: 256
        icon.color: "blue"
        enabled: false
    }
}
This renders my biking-32.svg at 256x256 resolution in blue.

Image sizing in cocos2d

I am playing through an animation of about 90 images at 480x320 each, and I am wondering: with the images not being powers of two, will this be a big performance hit? I am programming for devices as far back as the iPhone 3GS. Also, I am using cocos2d.
Assuming you load all of these images at the start and they are 16-bit images:
you will have 512 x 512 x 90 = 23,592,960 pixels.
With 16-bit images, that's 377,487,360 bits.
377,487,360 bits = 45 megabytes of RAM.
So yes, that is a big performance hit.
Let's see...
A 480x320 image will create a texture of 512x512 (the next nearest power of two). 512x512 times 32-bit color depth (4 bytes) equals 1 megabyte. 90 times 1 megabyte equals 90 megabytes.
Your biggest problem will be allocating this much memory and caching it. Without caching the animation images, you'll have to load each image every time the animation frame changes. Loading 1 megabyte from flash memory is pretty darn slow, I'm guessing > 100 milliseconds, so every time a frame changes your app will stall for a fraction of a second. This will be a problem on the 3GS, and possibly on Retina devices as well if you use Retina images (each 960x640 image then requires 4 megabytes of texture memory).
You can only improve this by using PVR-compressed images (use TexturePacker). Even halving the texture size to 16 bit will probably not be enough for a smooth animation.
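For reference, the two estimates above expressed in code, with the numbers taken straight from the answers (a rough sketch):

#include <cstdio>

int main() {
    // 90 frames, each padded up to a 512x512 texture.
    const long long frames = 90, w = 512, h = 512;
    const long long mb16 = frames * w * h * 2 / (1024 * 1024); // 16-bit: 45 MB
    const long long mb32 = frames * w * h * 4 / (1024 * 1024); // 32-bit: 90 MB
    printf("16-bit: %lld MB, 32-bit: %lld MB\n", mb16, mb32);
    return 0;
}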

Simple 2D UV mapping problem in openGLES

I'm fairly new to low-level OpenGL coding and I have a problem that perhaps I'm just too tired to find a solution to. I'm sure the answer is simple and I'm doing something utterly stupid, but here goes...
I have a texture of 1024 x 512, and I'm rendering a 2D quad which is meant to use a portion of that texture. The quad is the same size as the texture portion I am trying to render, e.g. 100 x 100 between vertices to render a 100 x 100 texture portion.
The texture pixels run from 0 (s) and 104 (t), assuming zero-based pixel coords, and therefore the UV (st) is being calculated as 0 / 1024 and 104 / 512 to the extents of 100 / 1024 and 204 / 512.
However, when I render the quad, I get a line from the pixel coords at 103 (t), which is part of a different section of the image, so it really stands out. I can work around the problem by setting the UV to 0 / 1024 and 105 / 512, but this seems very wrong to me.
I've been searching for a while and can't find what I'm doing wrong. I've tried GL_LINEAR, GL_NEAREST, clamping, etc. in the gl params, to no avail. Can someone please point out my error?
That is because texture coordinates don't address pixels, so your assumption
> and therefore the UV(st) is being calculated as 0 / 1024 and 104 / 512 to the extents of 100 / 1024 and 204 / 512
is wrong!
This is kind of a FAQ; I recently answered it in OpenGL Texture Coordinates in Pixel Space.
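The practical upshot: with GL_LINEAR, a coordinate of 104/512 lands exactly on the border between texel rows 103 and 104, so the filter blends both rows. Sampling at texel centers avoids this. A small sketch of the usual (i + 0.5) / size conversion; the helper names are made up for illustration:

// Texel centers sit at (i + 0.5) / size, not at i / size.
float texelU(int x, int texWidth)  { return (x + 0.5f) / texWidth;  }
float texelV(int y, int texHeight) { return (y + 0.5f) / texHeight; }

// For the 100x100 region starting at pixel (0, 104) of a 1024x512 texture:
//   u from texelU(0, 1024)  to texelU(99, 1024)
//   v from texelV(104, 512) to texelV(203, 512)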
