X11 struts geometry

Can somebody explain (or link to explanation) of how X11 struts are constructed? From the only description that I was able to find here it is not clear to me what the twelve values in _NET_WM_STRUT_PARTIAL property represent. For example, in the situation below where I have two Xinerama displays aligned on the left edge, how would I define the strut (marked xxx)? I suppose that the origin of the coordinates is at the top left hand corner.
+-------------------+
|                   |
|                   |
|                   |
|                   |
+-------+---+-------+
|       |xxx|
|       +---+
|           |
|           |
+-----------+

This page explains _NET_WM_STRUT_PARTIAL:
_NET_WM_STRUT_PARTIAL, left, right, top, bottom, left_start_y, left_end_y,
right_start_y, right_end_y, top_start_x, top_end_x, bottom_start_x,
bottom_end_x, CARDINAL[12]/32
[...]
For example, for a panel-style Client appearing at the bottom of the screen, 50 pixels tall, and occupying the space from 200-600 pixels from the left of the screen edge would set a bottom strut of 50, and set bottom_start_x to 200 and bottom_end_x to 600. Another example is a panel on a screen using the Xinerama extension. Assume that the set up uses two monitors, one running at 1280x1024 and the other to the right running at 1024x768, with the top edge of the two physical displays aligned. If the panel wants to fill the entire bottom edge of the smaller display with a panel 50 pixels tall, it should set a bottom strut of 306, with bottom_start_x of 1280, and bottom_end_x of 2303. Note that the strut is relative to the screen edge, and not the edge of the xinerama monitor.
(emphasis mine, on the final sentence).
Now, how does this work? Think of it as a feature triggered by a non-zero value in the first four integers. So if you want to reserve space at the bottom, you set left, right and top to 0 and bottom to 50.
The *_start_* and *_end_* pairs then define the extent of the reserved area along that side of the screen.
In your example, you want to reserve space at the right side of the screen. If your main (upper) monitor is 2000 pixels wide, the smaller one is 1200 pixels wide, and the area should be 150 pixels wide, then you need right = 2000 - 1200 + 150 = 950 (the virtual screen in this setup is 2000 pixels wide everywhere, so you have to offset the value by the difference in width between the two real monitors).
right_start_y == height of the upper monitor.
right_end_y == right_start_y + height of the area you want to reserve.
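A minimal sketch of how the twelve values could be filled in for this layout. All monitor and area sizes here are assumed example numbers, and the property would still have to be set on the window with a tool or library such as xprop or python-xlib:

```python
# Sketch of the 12-value _NET_WM_STRUT_PARTIAL array for the layout above.
# The monitor and area sizes are assumptions for illustration.
upper_width  = 2000   # width of the wider (upper) monitor
lower_width  = 1200   # width of the narrower (lower) monitor
upper_height = 1024   # height of the upper monitor (assumed)
area_width   = 150    # width of the reserved xxx area
area_height  = 100    # height of the reserved xxx area (assumed)

# The right strut is measured from the right edge of the *virtual* screen,
# which is as wide as the widest monitor everywhere.
right = upper_width - lower_width + area_width            # 950
right_start_y = upper_height
right_end_y = right_start_y + area_height

strut_partial = [
    0, right, 0, 0,              # left, right, top, bottom
    0, 0,                        # left_start_y,   left_end_y
    right_start_y, right_end_y,  # right_start_y,  right_end_y
    0, 0,                        # top_start_x,    top_end_x
    0, 0,                        # bottom_start_x, bottom_end_x
]
print(strut_partial)
```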

Related

Maximally Stable Extremal Regions (MSER) Implementation in Document Image Character Patch Identification

My task is to identify character patches within the document image. Consider the image below:
Based on the paper, an MSER-based method is adopted to detect character candidates and extract character patches.
"The main advantage of the MSER based method is that such algorithm is
able to find most legible characters even when the document image is in
low quality."
Another paper also discusses MSER. I'm having a hard time understanding the latter paper. Can anyone explain to me in simple terms the steps I should take to implement MSER and extract character patches from my sample document? I will implement it in Python, and I need to fully grasp how MSER works.
Below are the steps to identify character patches in the document image (based on how I understand it; please correct me if I am wrong):
"First, pixels are sorted by intensity"
My comprehension:
Say, for example, I have 5 pixels in an image with intensities (Pixel 1) 1, (Pixel 2) 9, (Pixel 3) 255, (Pixel 4) 3, (Pixel 5) 4. Sorted by increasing intensity, this yields Pixels 1, 4, 5, 2 and 3.
"After sorting, pixels are placed in the image (either in decreasing or increasing order) and the list of connected components and their areas is maintained using the efficient union-find algorithm."
My Comprehension:
Using the example in number 1. Pixels will be arranged like below. Pixel component/group and Image X,Y coordinates are just examples.
Pixel Number | Intensity Level | Pixel Component/Group | Image X,Y Coordinates
-------------|-----------------|-----------------------|----------------------
1            | 1               | Pixel Component #5    | (14,12)
4            | 3               | Pixel Component #1    | (234,213)
5            | 4               | Pixel Component #2    | (231,14)
2            | 9               | Pixel Component #3    | (23,21)
3            | 255             | Pixel Component #1    | (234,214)
"The process produces a data structure storing the area of each connected component as a function of intensity."
My comprehension:
A column will be added to the table in #2, called Area. It counts the number of pixels in a specific component at a given intensity level; it is an aggregation of the pixels within the component group at that intensity level.
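The bookkeeping in steps 2-3 can be sketched with union-find. This is a toy, not a full MSER implementation: it uses a 1-D "image" so each pixel has only left/right neighbours, and it records the area of the grown component after each placement:

```python
# Toy sketch of the union-find bookkeeping described above (not a full MSER).
def mser_areas(intensities):
    """Place pixels in increasing intensity order; track component areas.

    Returns a history of (intensity, area of the component just grown).
    """
    n = len(intensities)
    parent = list(range(n))
    size = [0] * n                        # 0 means "not yet placed"

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    history = []
    order = sorted(range(n), key=lambda i: intensities[i])
    for i in order:
        size[i] = 1
        for j in (i - 1, i + 1):          # neighbours in the 1-D image
            if 0 <= j < n and size[j]:    # neighbour already placed?
                ri, rj = find(i), find(j)
                if ri != rj:              # union, summing the areas
                    parent[rj] = ri
                    size[ri] += size[rj]
        history.append((intensities[i], size[find(i)]))
    return history

# The 5-pixel example from step 1:
print(mser_areas([1, 9, 255, 3, 4]))
```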
"Finally, intensity levels that are local minima of the rate of change of the area function are selected as thresholds producing MSER. In the output, each MSER is represented by position of a local intensity minimum (or maximum) and a threshold."
How do I get the local minima of the rate of change of the area function?
Please help me understand what MSER is and how to implement it. Feel free to correct my understanding. Thanks.
In one article the authors track a value they call "stability" (roughly, the rate of change of area when going from region to region in their data structure), and then find regions corresponding to local minima of that value (a local minimum is a point at which the value of interest is smaller than at its closest neighbors). If it is of any help, here is a C++ implementation of MSER (based on another article).
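As a hedged sketch of that idea: given the area of one component as a function of intensity (the structure from step 3), a stability value can be computed and its local minima picked out. The exact formula and the delta parameter vary between papers; this follows the common q(i) = (area(i+d) - area(i-d)) / area(i) form with an assumed d = 1:

```python
# Sketch: pick intensities where the relative rate of change of area
# (the "stability" value q) is at a local minimum.
def stable_thresholds(area, delta=1):
    """`area[i]` = component area at intensity i; return local minima of q."""
    q = {}
    for i in range(delta, len(area) - delta):
        q[i] = (area[i + delta] - area[i - delta]) / area[i]
    # A local minimum: q[i] is no larger than at its immediate neighbours.
    return [i for i in q
            if all(q[i] <= q.get(j, float("inf")) for j in (i - 1, i + 1))]

# Area grows slowly around i=1 and i=5, so those levels are "stable".
print(stable_thresholds([1, 2, 3, 10, 11, 12, 13, 40]))   # -> [1, 5]
```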

What is the difference between 1x vs r4 or 2x vs r4 or 3x vs r4

If the 1x image is 100 * 100, then
the 2x image is 200 * 200, and
the 3x image is 300 * 300.
What should the r4 dimensions be (xxx * xxx)?
There is no documentation on this.
FYI:
It's not about the launch screen image; the image can be anything, like a back button, etc.
The answer isn't that straightforward.
The important thing to remember is that different iPhone models automatically use different images from the image set. The resolution of iPhone A is not always a simple multiple of iPhone B's, so the size of image A can't always be a simple multiple of image B's.
Here is a table showing the image automatically selected from x.imageset for each iPhone:
iPhone Model | Screen Size | Ratio | x.ImageSet
-------------|-------------|-------|-----------
XS Max       | 1242x2688   | 0.46  | 3x
X, XS        | 1125x2436   | 0.46  | 3x
XR           | 828x1792    | 0.46  | 2x
6+,6s+,7+,8+ | 1242x2208   | 0.56  | 3x
6,6s,7,8     | 750x1334    | 0.56  | 2x
5,5s         | 640x1136    | 0.56  | R4
As the table shows, the same image is selected for multiple screen sizes and multiple aspect ratios, which can lead to a bit of a mess...
That's where the Content Mode property of the view showing the image comes in handy. It decides how the image stretches inside its boundaries (a launch-screen image's boundaries are the screen size; a back button's boundaries are its ImageView's size).
If the Content Mode is Aspect Fill,
then the selected image from the image set is scaled, preserving its original aspect ratio, until it completely fills the boundary - the axis that overflows (for example, the top and bottom edges of a tall image in a squarer boundary) is cropped because it exceeds the boundary.
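The Aspect Fill math can be sketched like this (all sizes are made-up examples): the scale factor is the larger of the two width/height ratios, so the image covers the whole boundary and overflows on one axis:

```python
# Aspect Fill: scale so the image *covers* the boundary; one axis may
# overflow and get cropped. Sizes below are arbitrary examples.
def aspect_fill(img_w, img_h, box_w, box_h):
    scale = max(box_w / img_w, box_h / img_h)
    return img_w * scale, img_h * scale

# A tall 100x200 image in a 50x50 box: the width fits exactly, the height
# becomes 100, so the top and bottom edges are hidden.
print(aspect_fill(100, 200, 50, 50))   # -> (50.0, 100.0)
```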
Other Content Modes will have other effects on the image.
Take a look: Understanding How Images Are Scaled

Why is a maximized Delphi form 8 pixels wider and higher than the GetSystemMetrics values?

If I maximize a Delphi form, the width and height values are 8 pixels greater than the corresponding GetSystemMetrics SM_CXSCREEN and SM_CYSCREEN values. Why?
For Example:
When I right-click on my desktop and open display properties, I have a 1680 x 1050 screen resolution. Those are the same values returned by GetSystemMetrics(SM_CXSCREEN) and GetSystemMetrics(SM_CYSCREEN).
When I maximize the form in my Delphi application, I get a width of 1688 and a height of 1058 - an 8-pixel difference on each axis. What causes this difference?
When maximized windows were originally implemented, the designers wanted to remove the resizing borders. Rather than removing them, they instead decided to draw those borders beyond the edges of the screen, where they would not be seen. Hence the rather surprising window rectangle of a maximized window.
This implementation decision became a problem with the advent of multi-monitor systems. By that time there were applications that relied on this behaviour and so the Windows team decided to retain the behaviour for the sake of compatibility. This meant that maximized windows leaked onto neighbouring screens. In time the window manager acquired capabilities that meant it could suppress that leakage.
Raymond Chen, as usual, has an article that covers the details: Why does a maximized window have the wrong window rectangle?
I wrote a simple program which catches WM_GETMINMAXINFO. This message lets you modify the position and size of a maximized window before the actual maximization takes place. The default values provided by the system were:
Position.x: -8
Position.y: -8
Size.x: 1456 (= 8 + width of screen + 8)
Size.y: 916 (= 8 + height of screen + 8)
The resolution of my screen is 1440x900.
It seems that Windows positions the maximized window in such a way that the client area covers most of the available space while the window's chrome is hidden outside the screen area.
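The arithmetic behind those numbers can be sketched as follows. The 8-pixel border is an assumption for this particular machine; a real program should query GetSystemMetrics(SM_CXSIZEFRAME) / GetSystemMetrics(SM_CYSIZEFRAME) instead of hard-coding it:

```python
# Maximized window rectangle on a 1440x900 screen, assuming an 8 px
# resize border (query SM_CXSIZEFRAME / SM_CYSIZEFRAME in real code).
screen_w, screen_h = 1440, 900
border = 8

pos_x, pos_y = -border, -border       # borders are pushed off-screen
size_w = border + screen_w + border   # 1456
size_h = border + screen_h + border   # 916
print(pos_x, pos_y, size_w, size_h)
```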

Algorithms for placing images on a screen "nicely"

Note: This is not for a webpage, it is a simple program that holds a set of images and will randomly pick a number of images and display them on the screen. Imagine working with an image editor and manually positioning imported images on the canvas.
I am having difficulty coming up with a way to position a set of arbitrary images on a screen of fixed dimensions (it's just a window).
So for example, if I have one image, I would probably just position it in the center of the screen.
|
If I have two images, I would try to place them in the center of the screen, but then spread them apart horizontally so that they look centered relative to each other and also the screen.
| |
But what if one image is larger than the other two? I might have something like
|-----|
| |
Similarly, maybe I have two larger ones and two smaller ones
|-----| |-----|
| |
So that the large one appears "in the back" while the small ones are up front.
It is inevitable that some images will end up covering parts of other images, but the best I can do is make the layout as orderly as possible.
I can quickly grab the dimensions of each image object that is to be drawn, and there is a limit on how many images will be drawn (from 1 to 8 inclusive).
Images can be drawn anywhere on the screen, and if any part of the image is outside of the screen those parts will just be cut off. All images have dimensions smaller than the dimensions of the screen, and are typically no bigger than 1/4 of the entire screen.
What is a good way to approach this problem? Even handling the base cases like having two images (of possibly different sizes) is already pretty confusing.
You could treat this as the 2D bin packing problem, which will optimise for non-overlapping rectangles in a "compact" way, though aesthetics won't be a consideration.
If you want to roll your own, you could try placing all images on the canvas on a grid, with the centre-to-centre spacing being large enough that no images overlap. Then "squash" the images closer together, left to right and top to bottom, to reduce the amount of whitespace.
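A minimal sketch of the grid idea (the window size, image sizes, and the square-ish grid shape are all assumptions, and the "squash" pass is left out):

```python
import math

def grid_positions(images, win_w, win_h):
    """Return a centre (x, y) per image on a uniform grid.

    `images` is a list of (w, h); cells are assumed big enough that no
    image overlaps its neighbours (i.e. each image fits in one cell).
    """
    n = len(images)
    cols = math.ceil(math.sqrt(n))            # roughly square grid
    rows = math.ceil(n / cols)
    cell_w, cell_h = win_w / cols, win_h / rows
    centres = []
    for i in range(n):
        r, c = divmod(i, cols)
        centres.append(((c + 0.5) * cell_w, (r + 0.5) * cell_h))
    return centres

# Four equally sized images in an 800x600 window land on a 2x2 grid.
print(grid_positions([(100, 80)] * 4, 800, 600))
```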
HTML tables with 100% width and height (and overflow disabled) are a good starting point, IMO - in a first iteration, just order the pictures by size and make 8 templates like:
<tr><td><img></td></tr>
<tr><td><img></td><td><img></td></tr>
2 rows, first with colspan=2
...
then find the ugly cases and make special rules for them (e.g. for 3 vertical images, make 1 row, ...)

How to calculate screen resolution change

I have an application on a mobile device. I am moving my app from a resolution of 240(W) x 320(H) to 640(W) x 480(H).
I have a bunch of columns whose widths are given in pixels (say 55, for example). I need to convert these to the new resolution.
"Easy", I thought: 640/240 = 2 2/3. So I just take 55 * 2.6666667 and that is my new width.
Alas, that did not work. My columns (all together) are now wider than my screen.
I tried 55 * 2 and that is too small. I am sure I could get there by trial and error, but I want to know the exact ratio.
So, what am I missing? How do I calculate my new column widths (other than by trial and error)?
Rounding is your problem. Suppose you have 24 columns of 10 pixels on the 240-pixel display. You calculate the new width: round(10 * 2.667) = 27, so the total width sums to 24 * 27 = 648 > 640. Oops...
To get this right you need to scale the absolute locations of the column edges. That is, if column k begins at x-coordinate X, then after scaling it should begin at round(X * 2.667). Then subtract the rounded left-side X from the rounded right-side X to get the width. This way some widths round down and some round up, but the total width stays inside your limits.
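The edge-scaling trick can be sketched like this (the column count and widths are the example numbers from the answer above):

```python
# Scale column *edges*, not widths, so the rounding errors cancel out.
old_total, new_total = 240, 640
scale = new_total / old_total            # 2.666...
old_widths = [10] * 24                   # 24 columns of 10 px

# Naive per-width rounding overflows the screen: 24 * 27 = 648 > 640.
naive = [round(w * scale) for w in old_widths]

# Edge-based: round the cumulative positions, then take differences.
edges = [0]
for w in old_widths:
    edges.append(edges[-1] + w)
new_edges = [round(e * scale) for e in edges]
new_widths = [b - a for a, b in zip(new_edges, new_edges[1:])]
print(sum(naive), sum(new_widths))       # 648 vs 640
```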
The screen DPI changes when the resolution changes, so you need to take that into account.
Check this link about DPI-aware apps and search according to your platform (native or CF).
I think your logic is good, but maybe you had rounding errors? If you want to make sure the total width is less than the screen resolution, then after multiplying by the scale factor you should always round down to the nearest integer to get the width in pixels.
Also, if your columns have any padding, borders, or other space between them, you have to take that into account as well.
If you can run in a desktop environment, there are "pixel ruler" tools for measuring things on the screen; you can search Google for them.