I have two physical displays. Currently xmonad thinks they are separated by some physical distance, so when I move my cursor from one display to the next, it disappears off the edge of the first display and I have to keep moving it some distance before it appears on the second.
Another sign that they're misconfigured: when the screensaver shows an animation across both screens, there's a huge section missing from the middle (not that I actually care what the screensaver looks like).
In GNOME, there's the display settings panel where you can drag one display's position relative to the other to match your physical reality (for example, if one monitor is slightly higher than the other).
What is the equivalent in xmonad?
If it helps, here is the output of xrandr:
Screen 0: minimum 320 x 200, current 3840 x 1920, maximum 16384 x 16384
DisplayPort-0 connected 1200x1920+1920+0 left (normal left inverted right x axis y axis) 518mm x 324mm
1920x1200 59.95*+
1920x1080 59.99
1600x1200 60.00
1680x1050 59.95
1280x1024 60.02
1280x960 60.00
1024x768 60.00
800x600 60.32
640x480 60.00
720x400 70.08
DVI-0 connected primary 1200x1920+0+0 left (normal left inverted right x axis y axis) 518mm x 324mm
1920x1200 59.95*+
1920x1080 59.99
1600x1200 60.00
1680x1050 59.88
1280x1024 60.02
1280x960 60.00
1024x768 60.00
800x600 60.32
640x480 60.00
720x400 70.08
Thanks in advance for any help!
In your previous question, Eric recommended using arandr (a graphical frontend for xrandr), which lets you position the screens easily. I guess this can be done with xrandr directly as well, but here the GUI simplifies your life.
Again, this question is independent of xmonad. Since you were unsure which window manager you were using (in your previous question), I think you need to understand: xmonad is a window manager, as opposed to a desktop environment like KDE or GNOME. With a bare window manager you have to rely on separate tools provided by GNU/Linux for certain things that a desktop environment often provides for you, e.g. a screensaver, screen locking, status bars and so on.
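For what it's worth, the gap is visible in the xrandr output you posted: DVI-0 is 1200 pixels wide and starts at +0+0, but DisplayPort-0 starts at +1920+0, leaving a 720-pixel dead zone between them. A minimal sketch of closing it with plain xrandr, assuming you want DisplayPort-0 flush against the right edge of DVI-0:
# place DisplayPort-0 directly to the right of the 1200 px wide DVI-0
xrandr --output DisplayPort-0 --pos 1200x0
# or let xrandr work out the offset itself
xrandr --output DisplayPort-0 --right-of DVI-0
arandr essentially generates a script containing a command like this after you drag the screens into place and save the layout.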
system: Ubuntu 20.04.3 LTS
The default resolution is 1280x720 and the DPI is 96.
When I adjust the 'fractional scaling' to 125%, I have two options to get the DPI:
Use the command xrdb -query | grep dpi; the DPI is 192?!
Xft.dpi: 192
Use the command xdpyinfo; the DPI is 120:
screen #0:
dimensions: 2048x1152 pixels (433x244 millimeters)
resolution: 120x120 dots per inch
Why do the two commands return different DPI values?
When scaling to 125%, why are the dimensions 2048x1152? (2048/1280 = 1.6, 1152/720 = 1.6)
Is the X11 API wrong, or is it some other problem?
Thanks.
Strictly speaking, neither xrdb nor xdpyinfo is the right place to query the screen's pixel density.
xrdb shows you a DPI value since it's the place where one can (but is not required to!) set an overriding DPI value for Xft, and some desktop environments do, just "because". xdpyinfo mostly shows values that already existed waaay back in the original X11 core protocol, where one could also specify the physical dimensions of a screen. The problem is that on modern systems, which are capable of dynamically attaching and removing displays, things are done through XRandR, and the ability to drive multiple X11 screens on the same X11 display is no longer used (it's all just one large X11 screen now). So depending on how you configured your monitors, the values reported by xdpyinfo are off. (As an aside, the two numbers you saw are consistent with each other: 192 is exactly 96 x 2, and 2048x1152 is 1280x720 scaled by 1.6 = 2/1.25; Ubuntu's X11 fractional scaling renders at 200% into an enlarged virtual screen and then scales it back down onto the panel, which is where both values come from.)
To arrive at the correct pixel density, one must use XRandR (the CLI query/set tool is xrandr) to retrieve information about the physically connected displays. However, be advised that it is perfectly possible for several displays of different pixel density to show overlapping regions of the X11 screen, and within those regions there is no unambiguous DPI value available.
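As a rough sketch of that, you can derive a per-monitor density from the geometry xrandr reports (pixel size plus physical size in millimetres); the output name and numbers below are only an example, not taken from your machine:
# list connected outputs with their pixel geometry and physical size
xrandr | grep -w connected
# hypothetical output:  eDP-1 connected primary 1920x1080+0+0 (normal left inverted right x axis y axis) 344mm x 194mm
# horizontal density = 1920 px / (344 mm / 25.4 mm per inch), about 142 dpi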
I have never used a Raspberry Pi before, and I am having trouble configuring it. I've been searching for an answer to my problem but have had no success in finding it. Maybe the answer is out there and I just haven't been able to understand it, which is possible.
I have a Raspberry Pi 3 that outputs to a very tall LED wall. It's supposed to show some dynamic content built with HTML5. The code works correctly, as I have tested it on a regular screen with success.
The problem comes when I connect the Raspberry Pi to the LED wall. It has a very specific resolution (192 px wide by 1216 px tall).
Tinkering with the configuration, I have set the resolution to the highest I could find, but I'm a few pixels shy of 1216, and with LEDs that big it's very noticeable.
As far as I can tell, there is only a limited list of resolutions to choose from. Is there any way to set a custom resolution of 192x1216? Or at least a bigger resolution that my screen fits in?
Thanks.
Since 1216 pixels in height is a little unconventional, I would suggest rotating the screen 90 degrees and setting the resolution to 1216x192 instead.
Add these lines to /boot/config.txt and comment out any other occurrences of them if they already exist (hdmi_cvt only defines the custom mode; hdmi_group=2 and hdmi_mode=87 tell the firmware to actually use it). You can edit the file with sudo nano /boot/config.txt
display_rotate=1 # Rotate the screen 90 degrees
hdmi_group=2 # Select the custom CVT mode defined below
hdmi_mode=87
hdmi_cvt=1216 192 60 # Define the custom mode as 1216x192 at 60 Hz
And then reboot the unit with sudo reboot
If the custom resolution doesn't work, you could still rotate the screen with the first line and fall back to 1280x720 instead.
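After rebooting, you can check which mode the firmware actually picked; a quick sketch, assuming the stock Raspberry Pi OS firmware tools are present:
# report the current HDMI state, including the mode in use
tvservice -s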
How do you connect? The highest supported resolution is 1920x1200 in HDMI_MODE.
I think that might be a limitation with the Raspberry Pi.
I haven't used the Asus Tinkerboard but it seems similar to the Pi and supports a higher resolution.
This particular combination of parameters for /boot/config.txt was the one that worked for me:
disable_overscan=1
hdmi_group=2
hdmi_mode=87
hdmi_cvt=1024 600 60 6 0 0 0
display_hdmi_rotate=3
I was looking for expressions for zoom in/out and pan.
Basically the use case is this: consider a rectangle of 1280x720, and I need to zoom in on it down to 640x480. The zoom time is configurable, say x seconds. The output of the expression should be all the intermediate rectangles (format = x,y,w,h) down to 640x480 at 30 fps, which means that if the zoom time is 5 seconds, I should get 150 well-spaced, smooth output rectangles (at 30 fps, total rectangles = 30 x 5).
After that, I'll crop to those rectangles, rescale them all to a constant resolution, and finally feed the result to the encoder.
The same requirement applies to zoom out and pan/scan.
Thanks.
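One straightforward way to generate those intermediate rectangles is plain linear interpolation between the start and end crop. A minimal sketch, assuming a centred zoom and the 5-second / 30 fps numbers above (all of the values can be swapped out):
# print N linearly interpolated, centred crop rectangles as x,y,w,h
awk -v N=150 'BEGIN {
    W0 = 1280; H0 = 720;            # starting rectangle
    W1 = 640;  H1 = 480;            # final rectangle
    for (i = 0; i < N; i++) {
        t = i / (N - 1);            # progress from 0 to 1
        w = W0 + t * (W1 - W0);     # interpolated width
        h = H0 + t * (H1 - H0);     # interpolated height
        x = (W0 - w) / 2;           # keep the crop centred
        y = (H0 - h) / 2;
        printf "%d,%d,%d,%d\n", x, y, w, h
    }
}'
Linear interpolation gives evenly spaced rectangles; if the zoom should feel smoother at the ends, you can replace t with an easing function (e.g. t*t*(3-2*t)) before interpolating. Panning works the same way, except you interpolate x and y between two positions while keeping w and h fixed.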
If you are using a mobile development platform (Xcode, the Android SDK), then gestures are built-in functions of the OS and are configurable through drag and drop.
If you're on a web development platform, I recommend jQuery plugins such as Hammer.js or Touch Punch. You can find links to them on this question.
If you give more information on your platform, I'd be happy to give you more specific examples!
I want to display the Kinect color frame in WPF at full screen, but when I try it,
I only get very low quality video frames.
Any idea how to do this?
The Kinect camera doesn't have great resolutions. Only 640x480 and 1280x960 are supported. Forcing these images to take up the entire screen, especially if you're using a high definition monitor (1920x1080, for example), will cause the image to be stretched, which generally looks awful. It's the same problem you run into if you try to make any image larger; each pixel in the original image has to fill up more pixels in the expanded image, causing the image to look blocky.
Really, the only thing to minimize this is to make sure you're using the Kinect's maximum color stream resolution. You can do that by specifying a ColorImageFormat when you enable the ColorStream. Note that this resolution has a significantly lower number of frames per second than the 640x480 stream (12 FPS vs 30 FPS). However, it should look better in a fullscreen mode than the alternative.
sensor.ColorStream.Enable(ColorImageFormat.RgbResolution1280x960Fps12);
Let's say I have a rMBP, and an image that is 1000x1000 pixels.
If I display the image onscreen at 1:1 while running the MBP in "Best for Retina" mode, it will be displayed 1:1 on the actual retina display pixels (i.e. it will take up the same screen real estate as a 500x500 image on a 1440x900 screen).
However, if I then switch to one of the "scaled" resolution modes, e.g. 1680x1050, the system no longer displays the image 1:1, but scales it down (it occupies the same screen real estate as a 500x500 image on a 1680x1050 screen).
I would like a way to have the image continue to display 1:1 on the retina display, regardless of the system resolution in use. I realize that I could calculate an appropriate "scaled" size, and scale the image up so that when it is scaled back down it corresponds to a 1:1 mapping, but this results in a noticeable quality degradation.
When running the MBP in the "scaled" resolutions, does Apple not provide any way to control the on-screen pixels directly (bypassing the scaling for just a part of the screen)?
No. Display scaling occurs at a very low level within the GPU and affects the entire display; there is no way to bypass it for part of the screen.
Look at it this way: If you set the resolution of an ordinary laptop's display to, say, 800x600, there is no way to display an image at the native resolution of the LCD, or to render content inside the black pillarboxes on the sides of the display. For all intents and purposes, the LCD is 800x600 while it's set to that resolution; the fact that it's actually (say) a 1440x900 display is temporarily forgotten.
The same principle applies to the MacBook Pro Retina display. The nature of the scaling is a little more complicated, but the "original" resolution of the display is still forgotten when you apply scaling, and there is no way to render directly to it.
Here are the APIs for addressing the pixels directly:
https://developer.apple.com/library/mac/documentation/GraphicsAnimation/Conceptual/HighResolutionOSX/CapturingScreenContents/CapturingScreenContents.html#//apple_ref/doc/uid/TP40012302-CH10-SW1