What is the best value for Xft.dpi in .Xresources - x11

From Arch Wiki:
For Xft.dpi, using integer multiples of 96 usually works best, e.g. 192 for 200% scaling.
I know that 200%, 300%, ... scaling works best, because every pixel is replaced by an integer number of pixels and we never end up in a situation where we need to display 1.5 pixels.
But what if I don't have a 4K monitor and instead have, for example, a 2.5K (2560x1440) monitor, or a monitor with some non-standard resolution or aspect ratio? In that case doubling the scale factor is too much.
I have only two ideas (a rough sketch of both follows below):
Scale by 1.25, 1.5 or 1.75, so that 16x16 and 32x32 objects still map to a whole number of pixels.
Scale by (vertical_pixels * horizontal_pixels) / (1920 * 1080) * 96, so that objects end up about the same size as on a normal display.
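
For concreteness, here is a rough sketch of what the two ideas would compute for the 2560x1440 example (plain Python, purely illustrative; the reference resolution and helper names are my own assumptions, not anything standard):

# Sketch of the two ideas above (hypothetical helpers, not a standard tool).
BASE_DPI = 96
REFERENCE = (1920, 1080)  # assumed "normal" Full HD display

def dpi_quarter_steps(scale):
    """Idea 1: snap the desired scale factor to 1.25 / 1.5 / 1.75 / 2.0 steps."""
    return int(BASE_DPI * (round(scale * 4) / 4))

def dpi_area_ratio(width, height):
    """Idea 2: (vertical_pixels * horizontal_pixels) / (1920 * 1080) * 96, taken literally."""
    return int(width * height / (REFERENCE[0] * REFERENCE[1]) * BASE_DPI)

print(dpi_quarter_steps(1.25))      # 120 -> Xft.dpi: 120
print(dpi_area_ratio(2560, 1440))   # 170 -> Xft.dpi: 170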

Related

Change training dataset aspect ratio and size

I have a training dataset of 640x512 images that I would like to use with a 320x240 camera.
Is it ok to change the aspect ratio and the size of the training images to that of the camera?
Would it be better to upscale the camera frames?
It is better to keep the aspect ratio of the images; otherwise you will be artificially modifying the composition of the objects in them. What you can do is downscale the image by a factor of 2, so it becomes 320 x 256, then crop from the center so you have a 320 x 240 image. You can do this by simply removing the first 8 and last 8 rows of the image. Removing 8 rows from the top and bottom should be safe, because it is very unlikely you will see meaningful information within an 8 pixel band at either edge of the image.
If you are using a deep learning framework such as TensorFlow or PyTorch, there are pre-processing methods that will crop from the center and downscale the image by a factor of 2 for you. You just need to set up a pre-processing pipeline with these two steps in place. You haven't posted any code, so I can't help with implementation details, but hopefully this is enough to get you started.
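
For example, with torchvision (assuming PyTorch; TensorFlow's preprocessing utilities offer the same operations), the pipeline could look roughly like this, where the file name is just a placeholder:

from PIL import Image
import torchvision.transforms as T

# Sizes below are (height, width): 640x512 training images -> 320x240 camera frames.
preprocess = T.Compose([
    T.Resize((256, 320)),      # downscale by 2: 640x512 -> 320x256
    T.CenterCrop((240, 320)),  # drop 8 rows from top and bottom -> 320x240
    T.ToTensor(),
])

tensor = preprocess(Image.open("train_0001.png"))  # hypothetical file name
print(tensor.shape)  # torch.Size([3, 240, 320]) for an RGB image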
Finally, do not upsample the images. There is no benefit, because you would only be interpolating existing information into a larger space, which is inaccurate. You can scale down, but never scale up. The only situation where upscaling could be useful is super-resolution, but that applies to specific cases and depends heavily on the images you use. In general, I do not recommend it. Take your training set and downscale it to the resolution of the camera, since camera frames at that resolution are what will be used at inference.

Image file size to area ratio

I'm writing a tool to detect images on our website that should be flagged for manual intervention to reduce file size. If a "large" image is 100K that might be fine, but if a "small" image is 100K, someone forgot to flatten it or compress it.
I'm looking at the "file density" of an image as the ratio filesize/(height x width). Is there a term for this? Is there some guidance about what a reasonable range for this density should be, so that I can flag images? Or am I thinking about this wrong?
Yes, if the file size is given in bits, then that ratio is known as the bitrate in bits per pixel (bpp), as sascha points out. For example, an uncompressed image is usually 24 bpp (8 bits/channel x 3 channels: R, G, B). Anything at this bitrate or higher is (most often) not compressed.
In general, lossless compression can be achieved at bitrates of about 12 bpp (a 2:1 compression ratio). Normally you can aim for much lower bitrates (e.g. 1 bpp, a 24:1 compression ratio) and expect decent quality, but it depends on the images you're dealing with.
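
A minimal sketch of such a check (Python with Pillow; the threshold and file name are placeholders you would tune to your own site's images):

import os
from PIL import Image

def bits_per_pixel(path):
    """File size in bits divided by pixel count (width * height)."""
    with Image.open(path) as im:
        width, height = im.size
    return os.path.getsize(path) * 8 / (width * height)

THRESHOLD_BPP = 2.0  # assumed cut-off; flag anything above it for manual review

bpp = bits_per_pixel("hero-banner.png")  # made-up file name
if bpp > THRESHOLD_BPP:
    print(f"flag for review: {bpp:.2f} bpp")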

How do I prevent ImageMagick from doubling the image size during rotation?

I have an optimally compressed png that I'm rotating by 1 degree using ImageMagick -
convert -rotate 1 crab.png crab-rotated.png
The size goes from 74 KB to 167 KB. How do I minimize that increase?
Original:
Rotated:
The increase in file size is probably due to less efficient compression. You won't be able to do much about that unless you tweak the compression settings (the -quality option) or use a more efficient but lossy format (e.g. JPEG).
Here's why I think this happens (I hope somebody will correct me if I am wrong). By rotating the image, you are introducing spatial frequencies that were not present in the original image. If these frequencies are not suppressed during compression, then the file size will inevitably increase. However, suppressing these frequencies may degrade the quality of your image. It's a delicate balance.
The amount of increase (or decrease) in filesize after rotation depends on the frequencies already present in the image, i.e. it is image-specific.
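
To illustrate the trade-off, here is a small experiment using Pillow instead of ImageMagick (a sketch only; the file names are the ones from the question and the JPEG quality is an arbitrary choice):

import os
from PIL import Image

rotated = Image.open("crab.png").rotate(1, expand=True)

rotated.save("crab-rotated.png", optimize=True)              # lossless PNG, typically still larger
rotated.convert("RGB").save("crab-rotated.jpg", quality=85)  # lossy JPEG, usually much smaller

for path in ("crab.png", "crab-rotated.png", "crab-rotated.jpg"):
    print(path, os.path.getsize(path) // 1024, "KB")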

Calculating pixel length of an image

What are the ways to calculate the length of one pixel in centimeters? The images I have are 640x480. I would like to compare two pixels at different places in the image and find the distance between them, so I need to find out the length of a pixel in centimeters.
Thank you.
A pixel is a relative unit of measure; it does not have an absolute size.
Edit. With regard to your edit: again, you can only calculate the distance between two pixels in an image in pixels, not in centimeters. As a simple example, think video projectors: if you project, say, a 3×3px image onto a wall, the distance between the leftmost and the rightmost pixels could be anything from a few millimeters to several meters. If you moved the projector closer to the wall or farther away from it, the pixel size would change, and whatever distance you had calculated earlier would become wrong.
Same goes for computer monitors and other devices (as Johannes Rössel has explained in his answer). There, the pixel size in centimeters depends on factors such as the physical resolution of the screen, the resolution of the graphical interface, and the zooming factor at which the image is displayed.
A pixel does not have a fixed physical size, by definition. It is simply the smallest addressable unit of picture, however large or small.
This is fully dependent on the screen resolution and screen size:
pixel width = width of monitor viewable area / number of horizontal pixels
pixel height = height of monitor viewable area / number of vertical pixels
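
A direct translation of those two formulas (Python; the monitor numbers are just an example, substitute your own viewable area and resolution):

viewable_width_cm, viewable_height_cm = 53.1, 29.9  # e.g. a 24-inch 16:9 panel
horizontal_pixels, vertical_pixels = 1920, 1080

pixel_width_cm = viewable_width_cm / horizontal_pixels
pixel_height_cm = viewable_height_cm / vertical_pixels
print(f"{pixel_width_cm:.4f} cm x {pixel_height_cm:.4f} cm per pixel")
# roughly 0.0277 cm x 0.0277 cm, i.e. about a 0.28 mm pixel pitch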
Actually, the answer depends on where exactly your real-world units are.
It comes down to dpi (dots per inch) which is the number of image pixels along a length of 2.54 cm. That's the resolution of an image or a target device (printer, screen, &c.).
Image files usually have a resolution embedded within them which specifies their real-world size. It doesn't alter their pixel dimensions, it just says how large they are if printed or how large a “100 %” view on a display would be.
Then there is the resolution of your screen, as others have mentioned, as well as the specified resolution your graphical interface uses (usually 96 dpi, sometimes 120)—and then it's all a matter of whether programs actually honor that setting ...
The OS will assume some DPI (usually 96 on Windows), but the screen's real DPI depends on the physical size of the display and its resolution.
E.g. a 15-inch (4:3) monitor has a viewable width of about 12 inches, so depending on the horizontal resolution you will get a different horizontal DPI; assuming a screen width of 1152 pixels, the real DPI genuinely comes out at 96.
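
Spelled out (Python; assuming a 4:3 panel, so the width is 4/5 of the diagonal):

diagonal_in = 15.0
width_in = diagonal_in * 4 / 5     # 4:3 aspect ratio -> 12 inches of viewable width
horizontal_pixels = 1152

real_dpi = horizontal_pixels / width_in
print(real_dpi)  # 96.0, matching the DPI Windows assumes by default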

What scaling factor to use for mapping the Font size on a high resolution monitor?

We have a requirement that our application support high-resolution monitors. Currently, when the application comes up on a high-resolution monitor, the text it displays is too small. We use an Arial 12 point font by default.
Now, to make the text visible, I need to change the font size proportionally. I am finding it hard to come up with a formula that would give me the target font size for a given monitor resolution.
Here is my understanding of the problem.
1) On Windows, by default 96 pixels correspond to 1 logical inch. This means that when the monitor resolution increases, the screen size in logical inches also increases.
2) A 1 point font is 1/72 of a logical inch. Combined with the fact that there are 96 pixels per logical inch, it turns out there are 96/72 pixels per point.
This means that a 12 point font occupies 12 * 96/72 = 16 pixels.
Now I need to know the scaling factor by which to increase this pixel count so that the resulting font is properly visible. If I know the scaled pixel count, I can get the font size simply by dividing it by 96/72.
What is the suggested scaling factor which would ensure properly scaled Fonts on all monitor resolutions?
Also, please correct if my understanding is wrong.
There's an example on the MSDN page for the LOGFONT structure. Your understanding is correct: you need to scale the point size by the vertical DPI (LOGPIXELSY) divided by 72.
lfHeight = -PointSize * GetDeviceCaps(hDC, LOGPIXELSY) / 72;
If you set the resolution in Windows to match that of the physical monitor, no adjustment should be needed. Any well written program will do the multiplication and division necessary to scale the font properly, and in the newest versions of Windows the OS will lie about the resolution and scale the fonts automatically.
If you wish to handle this outside of the Windows settings, simply multiply your font size by your actual DPI and divide by 96.
Edit: Beginning with Windows Vista, Windows will not report your actual configured DPI unless you write a DPI-aware program. Microsoft has some guidance on the subject. You might find that the default scaling that Microsoft provides for non-DPI-aware programs is good enough for your purposes.
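
To make the arithmetic concrete, here is a small sketch of both formulas in plain Python (not actual Win32 calls; the DPI values are examples):

def points_to_pixels(point_size, dpi):
    """1 point = 1/72 of a logical inch, so pixels = points * dpi / 72."""
    return point_size * dpi / 72

def scaled_point_size(point_size, actual_dpi, assumed_dpi=96):
    """Scale a point size chosen for 96 DPI up to the monitor's real DPI."""
    return point_size * actual_dpi / assumed_dpi

print(points_to_pixels(12, 96))    # 16.0 px: 12 pt at the assumed 96 DPI
print(points_to_pixels(12, 144))   # 24.0 px: the height actually needed at 144 DPI
print(scaled_point_size(12, 144))  # 18.0 pt: 18 pt at 96 DPI is also 24 px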
