I want to occasionally VNC from an old laptop to my main PC. The old laptop's screen resolution (1024x768) is much lower than the PC's resolution (which at the moment is 1280x1024).
A few months ago, I set up x11vnc on my PC so that it would automatically lower the screen resolution when I connected and restore the optimum resolution when I was done. This worked incredibly well (seriously, it was awesome)... until I noticed that the backlight in the LCD attached to the PC seemed to be having issues.
Turns out, all the (very very frequent) screen resolution changes were killing the backlight lifespan, because the backlight on this LCD (like most LCDs) switches off and back on whenever the resolution changes.
Once I realized this I immediately stopped VNCing, which was extremely inconvenient and a major workflow disruption. However, I couldn't risk killing my LCD.
So, I am looking for a way to lower the resolution in X11 in a way that does not cause the backlight to flicker.
In other words, I want to change the resolution X11 reports to programs without adjusting the actual screen resolution.
I've already been lowering and restoring the screen resolution for some time, so I am used to windows getting thrown around a bit.
And I fully expect that when the "effective" resolution is lowered, all the windows will probably bunch toward the top-right with a giant empty black area covering a lot of the screen. This is also fine.
I'm aware Openbox has a "margins" system that will affect the state of maximized windows, but I'm not using Openbox. (I'm currently using i3 but this is likely to change in future.)
Ideally I want the solution to be window-manager-independent: something that sits between X11 and the WM. A program that watches floating windows and automatically moves them within a constrained area would be trivial to write; it's adjusting the state of maximized windows that I'm stumped on!
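For illustration, here's roughly what I mean by "trivial", as a minimal, untested sketch using Xlib (the 1024x768 bound matches the laptop's resolution; a real WM might fight these moves, which is exactly the problem with maximized windows):

    /* Minimal sketch: watch top-level windows and push any that stray
     * outside a constrained area back inside it. Untested; a WM may
     * re-position windows and fight these moves. */
    #include <X11/Xlib.h>

    int main(void)
    {
        Display *dpy = XOpenDisplay(NULL);
        if (!dpy)
            return 1;

        /* Report configure (move/resize) events for all top-level windows. */
        XSelectInput(dpy, DefaultRootWindow(dpy), SubstructureNotifyMask);

        const int max_w = 1024, max_h = 768;   /* constrained area */
        XEvent ev;
        for (;;) {
            XNextEvent(dpy, &ev);
            if (ev.type != ConfigureNotify)
                continue;
            XConfigureEvent *ce = &ev.xconfigure;
            int x = ce->x, y = ce->y;

            /* Clamp the window's origin so it stays inside the area. */
            if (x + ce->width  > max_w) x = max_w - ce->width;
            if (y + ce->height > max_h) y = max_h - ce->height;
            if (x < 0) x = 0;
            if (y < 0) y = 0;
            if (x != ce->x || y != ce->y)
                XMoveWindow(dpy, ce->window, x, y);
        }
    }

(Build with `cc watcher.c -lX11`.)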
I realize this is a particularly tricky question, and appreciate any ideas and suggestions!
Related
Mostly out of curiosity, I was wondering how I should approach this. I'd like to make it so that when I turn off my screen, the computer shuts down. In essence, I'd like to make a service that constantly checks whether the screen is on or off.
Any ideas/suggestions are welcome. Thank you for taking the time, and I wish you all a pleasant day.
Well, even if there were a way to check the status of the connected monitors so you could shut your computer down when the screen goes off, how do you plan to turn your computer on again? I expect you would have to use the normal hardware button on your tower. Why not stick to those buttons? They have worked for the last 30 years or so...
To make an actual suggestion:
Set up a webcam to monitor your screen. Using simple picture analysis you can detect whether the screen is on or off (see the sketch at the end of this answer).
Shut your computer down when the screen goes off.
You can also monitor the scenario where you turn your screen on: you should see a manufacturer's logo for a few seconds or something like that. (Now you need a second computer with a second webcam.)
With the second computer you can build a little machine that pushes the power button on your first computer to turn it on.
But how to turn on/off the second computer? Well, you need another one for that...
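As for the "simple picture analysis" step, it could be as crude as a brightness threshold. A minimal sketch; grab_frame() is a hypothetical stand-in for whatever webcam capture API you use, and the threshold value is made up:

    /* Sketch: treat the screen as off when the webcam frame is mostly
     * dark. grab_frame() is hypothetical; wire it to V4L2 or similar. */
    #include <stddef.h>
    #include <stdint.h>

    extern size_t grab_frame(uint8_t *buf, size_t cap);  /* hypothetical */

    int screen_is_on(void)
    {
        static uint8_t frame[640 * 480];   /* grayscale pixels */
        size_t n = grab_frame(frame, sizeof frame);

        unsigned long sum = 0;
        for (size_t i = 0; i < n; i++)
            sum += frame[i];

        /* Threshold of 40/255 is arbitrary; calibrate for your monitor. */
        return n > 0 && sum / n > 40;
    }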
I am using BitBlt heavily in my project. I create a number of threads, and in each thread I capture the screen with BitBlt. It works great and as expected so far, except for the following problem.
The problem happens when the user clicks a running program on the taskbar, for example an already-opened Explorer window. As you know, when you click a running program on the taskbar, it either minimizes or appears on the screen. The issue I'm talking about happens exactly during this transition: at that moment, as if interrupted, all threads stop capturing the screen for a fraction of a second and then continue capturing. The same thing happens when you drag the slider up or down on the volume control window. Could you please shed some light on why this is happening and how I can prevent it?
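For context, each capture thread essentially does the standard GDI dance once per frame (simplified sketch; error handling and the details of my real code omitted):

    /* Simplified sketch of one capture pass per thread. */
    #include <windows.h>

    void CaptureOneFrame(void)
    {
        int w = GetSystemMetrics(SM_CXSCREEN);
        int h = GetSystemMetrics(SM_CYSCREEN);

        HDC hdcScreen = GetDC(NULL);                  /* display DC */
        HDC hdcMem    = CreateCompatibleDC(hdcScreen);
        HBITMAP hbm   = CreateCompatibleBitmap(hdcScreen, w, h);
        HGDIOBJ old   = SelectObject(hdcMem, hbm);

        /* Copy the current screen contents into the memory bitmap. */
        BitBlt(hdcMem, 0, 0, w, h, hdcScreen, 0, 0, SRCCOPY);

        /* ... hand hbm off to the consumer ... */

        SelectObject(hdcMem, old);
        DeleteObject(hbm);
        DeleteDC(hdcMem);
        ReleaseDC(NULL, hdcScreen);
    }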
Thanks.
Jay
It could be a scheduling issue. When you activate an app, it gets a small, momentary boost in its priority (so that it can seem responsive in the UI). This boost might last about as long as the animation and momentarily pre-empt your screen capture threads.
It's also possible that the desktop window manager is serializing things, and your BitBlt calls are simply stalled until the animation is over. Even if you've turned Aero off, I believe the Desktop Window Manager may still be in compositing mode, which has the effect Hans Passant was describing in the comments.
If you're trying to make a video from the screen, I think it's going to be impossible to rely on GDI. I strongly suggest reading about the Desktop Window Manager. For example, this caveat directly applies to what you're trying to do:
Avoid reading from or writing to a display DC. Although supported by DWM, we do not recommend it because of decreased performance.
When you use GDI to try to read the screen, DWM has to stop what it's doing, possibly render a fresh copy of the desktop to video memory, and then copy the data from video memory back to system memory. It's possible that DWM treats these as lower-priority requests than an animation in progress, so by the time it responds to the BitBlt, the animation is over.
This question suggests that DirectShow with a screen capture filter might be the way to go.
In my OpenGL application I switch between windowed and fullscreen mode using Raymond Chen's solution:
http://blogs.msdn.com/b/oldnewthing/archive/2010/04/12/9994016.aspx
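For reference, the toggle from that post boils down to the following (condensed; my code differs only in details):

    /* Condensed from the linked post: toggle between windowed and
     * borderless fullscreen without changing the display mode. */
    #include <windows.h>

    void ToggleFullscreen(HWND hwnd)
    {
        static WINDOWPLACEMENT wpPrev = { sizeof(wpPrev) };
        DWORD dwStyle = GetWindowLong(hwnd, GWL_STYLE);

        if (dwStyle & WS_OVERLAPPEDWINDOW) {
            MONITORINFO mi = { sizeof(mi) };
            if (GetWindowPlacement(hwnd, &wpPrev) &&
                GetMonitorInfo(MonitorFromWindow(hwnd,
                               MONITOR_DEFAULTTOPRIMARY), &mi)) {
                SetWindowLong(hwnd, GWL_STYLE,
                              dwStyle & ~WS_OVERLAPPEDWINDOW);
                /* Cover the whole monitor the window is on. */
                SetWindowPos(hwnd, HWND_TOP,
                             mi.rcMonitor.left, mi.rcMonitor.top,
                             mi.rcMonitor.right - mi.rcMonitor.left,
                             mi.rcMonitor.bottom - mi.rcMonitor.top,
                             SWP_NOOWNERZORDER | SWP_FRAMECHANGED);
            }
        } else {
            /* Restore the saved windowed style and placement. */
            SetWindowLong(hwnd, GWL_STYLE,
                          dwStyle | WS_OVERLAPPEDWINDOW);
            SetWindowPlacement(hwnd, &wpPrev);
            SetWindowPos(hwnd, NULL, 0, 0, 0, 0,
                         SWP_NOMOVE | SWP_NOSIZE | SWP_NOZORDER |
                         SWP_NOOWNERZORDER | SWP_FRAMECHANGED);
        }
    }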
This works, apart from two very annoying side effects when used in a multi-monitor setup (only):
1. After the window mode is switched, BOTH screens flicker/flash at the moment glViewport is called to accommodate the changed window dimensions.
2. Windows from other applications on the desktop are not painted correctly after the switch until I force a refresh, e.g. by minimizing/maximizing them.
Does anyone know these effects, and maybe also a solution?
PS: Further tests showed that this only happens on my PC with an AMD card, but not with my Nvidia card. With only one monitor active it doesn't happen at all.
On my system, this is a problem if you have a 3D application running which takes some time to redraw, and you drag any window over it. It causes very jerky movement. This also happens if you drag a dialog from the 3D app over its own 3D window. The application actually gets a redraw message (WM_PAINT?) which causes it to do a full redraw. Shouldn't the background window be cached by Windows as a bitmap or something?
I've pasted the NVIDIA system information dump below; note that I have 2 GPUs. I don't know if that's significant, but we're seeing this problem on another machine in the office which also has 2 GPUs and Windows 7. Other machines with 1 GPU don't have this problem.
Found out what the issue was. I was running the Windows Vista Basic color scheme instead of Aero. In Basic, Windows probably only has one buffer for the whole screen, so whenever a window is moved, any window it overlaps must be redrawn. In Aero, each window's buffer is cached, which enables GPU-accelerated blending (for the transparent parts of the window). So in Aero there's no redraw of underlying windows when another window is dragged across them.
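If you want to detect this case programmatically rather than be surprised by it, DWM can be asked whether composition (Aero) is active. A minimal sketch (Vista or later; link against dwmapi.lib):

    /* Sketch: query at runtime whether DWM composition is enabled. */
    #include <windows.h>
    #include <dwmapi.h>

    int IsAeroActive(void)
    {
        BOOL enabled = FALSE;
        /* Fails (leaving enabled FALSE) on pre-Vista systems. */
        return SUCCEEDED(DwmIsCompositionEnabled(&enabled)) && enabled;
    }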
I am programmatically animating a window to make it bounce around the desktop, using MoveWindow. It's leaving a temporary ghosting effect over the portion of the desktop that the window previously occupied.
How does one prevent this from happening?
That's somewhat inevitable on older versions of Windows. The processes whose windows you overlap need time to repaint the area that is revealed when you move your window. Do check with taskmgr.exe that your program isn't burning 100% of a core; that would make the effect much more noticeable.
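One way to avoid burning a core is to pace the animation with a timer instead of a tight loop, which also gives other processes time to repaint what your window uncovers. A minimal sketch; the timer interval and velocity values are made up:

    /* Sketch: drive the bounce from WM_TIMER instead of a busy loop. */
    #include <windows.h>

    static int g_dx = 4, g_dy = 3;   /* velocity, pixels per tick */

    LRESULT CALLBACK BounceProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp)
    {
        switch (msg) {
        case WM_CREATE:
            SetTimer(hwnd, 1, 16, NULL);       /* ~60 ticks/second */
            return 0;

        case WM_TIMER: {
            RECT rc, desk;
            GetWindowRect(hwnd, &rc);
            GetClientRect(GetDesktopWindow(), &desk);

            int w = rc.right - rc.left, h = rc.bottom - rc.top;
            int x = rc.left + g_dx, y = rc.top + g_dy;

            /* Reverse direction when hitting a desktop edge. */
            if (x < 0 || x + w > desk.right)  { g_dx = -g_dx; x = rc.left; }
            if (y < 0 || y + h > desk.bottom) { g_dy = -g_dy; y = rc.top; }

            MoveWindow(hwnd, x, y, w, h, TRUE);
            return 0;
        }

        case WM_DESTROY:
            KillTimer(hwnd, 1);
            PostQuitMessage(0);
            return 0;
        }
        return DefWindowProc(hwnd, msg, wp, lp);
    }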
A real solution for this problem requires Aero, available on Vista and up.