Lower screen resolution while maintaining original pixel density - windows-7

I'd like to play some older games on Windows 7. Running them isn't an issue, but the increased size and pixel density of modern monitors is. Pre-rendered games intended to be played full-screen at e.g. 640x480 are now "blown up" to fill the whole screen, making everything look blurry. I've been looking at different solutions, but so far to no avail for a selection of games:
Running the game in "windowed mode" is an option for those games that support it.
DxWnd could be used to force some games in "windowed mode", but it causes some applications to crash as well.
VirtualBox works nicely since it will automatically resize to the applications desired full-screen resolution, but this is no option if VirtualBox's 3D support is insufficient to play the game.
Drivers like those of AMD or NVIDIA provide options to preserve the pixel aspect ratio, if aspect ratio is the issue on wide-screen monitors.
None of the above works for me for one game: it does not provide a "windowed mode", DxWnd makes it crash, VirtualBox's 3D support is insufficient, and aspect ratio isn't an issue on my monitor.
Which brings me to the question: is there a way to lower the screen resolution while maintaining the original pixel density of the monitor, instead of having the image fill the whole screen? That would essentially create a smaller viewport for the Windows environment to use, filling up the rest of the screen with big black borders.

Right-click on the game, click Properties, and on the Compatibility tab try ticking the option to run in a reduced screen resolution (e.g. "Run in 640 x 480 screen resolution").

Related

How do GUI developers deal with variable pixel densities?

Today's displays have quite a huge range of sizes and resolutions. For example, my 34.5cm × 19.5cm display (resulting in a diagonal of 39.6cm or 15.6") has 1366 × 768 pixels, whereas the MacBook Pro (3rd generation) with a 15" diagonal has 2880 × 1800 pixels.
Multiple people complained that everything is too small with such high resolution displays (see example). That is simple to explain when developers use pixels to define their GUI. For "traditional displays", this is not a big problem as the pixels might have about the same size on most monitors. But on the new monitors with much higher pixel density the pixels are simply smaller.
So how can / should user interface developers deal with that problem? Is it possible to get the physical size of the screen? Is it possible to set physical sizes instead of pixel-based ones? Is that still a problem (it's been a while since I last read about it) or was that fixed meanwhile?
(While CSS seems to support cm, when I try it here, the rendered size is not the size I set.)
how can / should user interface developers deal with that problem?
Use a toolkit or framework that supports resolution independence. WPF is built from the ground up to be resolution-independent, but even an old framework like Windows Forms can learn new tricks. OSX/iOS and Windows (or the browser, if we're talking about the web) may try to take care of the problem with automatic scaling, but if bitmap graphics are involved, developers might need to provide different bitmaps, as in Android (which faces the widest range of resolutions and densities of any OS).
Is it possible to get the physical size of the screen?
No, and developers shouldn't care about it. Developers should only care about the class of the device (say, a different UI for tablet and smartphone), and perhaps the DPI to decide which bitmap resource to use. Vector resources and fonts should be scaled by the framework.
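As a rough illustration of the "use DPI to pick a bitmap resource" point, here is a minimal Win32/C++ sketch; the asset names are hypothetical, and the process is assumed to be DPI-aware (a DPI-unaware process always sees a virtualized 96 DPI):

    #include <windows.h>
    #include <string>

    // Pick a bitmap asset based on the effective screen DPI.
    // Assumes the process is DPI-aware; otherwise Windows reports 96.
    std::wstring PickIconAsset() {
        HDC screen = GetDC(nullptr);
        int dpi = GetDeviceCaps(screen, LOGPIXELSX);   // 96 = 100%, 144 = 150%, 192 = 200%
        ReleaseDC(nullptr, screen);

        if (dpi >= 192) return L"icon@2x.png";         // hypothetical asset names
        if (dpi >= 144) return L"icon@1.5x.png";
        return L"icon.png";
    }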
Is that still a problem (it's been a while since I last read about it) or was that fixed meanwhile?
Depends on when you last read about it. Windows support is still spotty, even for its own built-in apps, and while anyone developing in WPF or UWP has it easy, don't expect major third-party apps to join in soon. OSX display scaling seems to work a bit better, while modern mobile OSes either run on a limited range of resolutions (iOS and Windows Phone) or handle every resolution imaginable quite nicely (Android).
There are a few ways to deal with different screen sizes. For example, when I make mobile apps in Java, I either use DIP (density-independent pixels, which stay at a fixed physical size) or make objects occupy a percentage of the screen with simple math. For web development, you can use vw and vh (viewport width and viewport height): adding these to the end of a value instead of px makes the object take up a percentage of the viewport, e.g. 100vh takes 100% of the viewport height. What I think is the best way, though time-consuming, is to use a library like Bootstrap that automatically resizes elements, even when the window is resized. W3Schools has a good tutorial on Bootstrap, and more detailed explanations of any of these options can be found with a quick Google search.
Designing a GUI in today's era of display diversity is a real challenge. I would suggest several hints, mainly about GUI application design:
Never set or expect a constant pixel size for text - the user can change it from the OS system settings. Use real-world measures for the text and check its pixel size when drawing. Provide some way to fit arbitrarily sized text within the boundaries of the window.
Never set or expect a constant pixel size for GUI widgets. Try to position them on the window adaptively - according to the size of the window. Most GUI widget toolkits today have such instruments.
Never set or expect a constant pixel size for dialog windows. Let the OS choose the size for you and then use what you get (X). Or, if you need to set a size and position yourself (Windows), define it as a percentage of the screen size (see the sketch after this list).
If possible, use scalable image formats for icons; SVG is actually great for icons. Using sets of bitmap icons at different sizes is acceptable, but far from optimal in memory use, and it still will not provide perfect scaling in most cases.
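To make the "percentage of the screen size" hint concrete, here is a small Win32/C++ sketch, assuming plain GetSystemMetrics/SetWindowPos is acceptable on the platform in question:

    #include <windows.h>

    // Size a top-level window to a fraction of the primary monitor and center it,
    // instead of hard-coding a pixel size.
    void SizeToScreenFraction(HWND hwnd, double fraction) {
        int screenW = GetSystemMetrics(SM_CXSCREEN);
        int screenH = GetSystemMetrics(SM_CYSCREEN);
        int w = static_cast<int>(screenW * fraction);
        int h = static_cast<int>(screenH * fraction);
        SetWindowPos(hwnd, nullptr,
                     (screenW - w) / 2, (screenH - h) / 2, w, h,
                     SWP_NOZORDER | SWP_NOACTIVATE);
    }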

Windows device coordinates vs virtual coordinates

I've tried to find an answer for this on MSDN, but I'm not getting a clear picture of how this is intended to work. All of my work is on Windows 8.1.
Here is my issue. I am working on a Laptop with a high resolution monitor, 3200x1800. I've been using EnumDisplayMonitors to get the bounding rectangle of my screen.
This seems to work fine if my display settings are default. But I've noticed that when I change the Windows display settings to provide larger text, the resolution returned by EnumDisplayMonitors changes. Rather than getting 3200x1800 I will get 2133x1200.
I'm guessing since I asked for larger text, Windows chooses to represent the screen as a smaller resolution.
It seems that if I look at the virtual screen properties, everything is represented in the actual coordinates of my screen, i.e. 3200x1800. But the APIs for getting the window and monitor rectangles seem to operate on this "other" coordinate space.
Is there any documentation or Windows API to handle the conversion between these "other coordinates" and the "virtual coordinates"? I.e. if I want EnumDisplayMonitors or GetMonitorInfo to give me the true screen coordinates, how could I convert 2133x1200 to 3200x1800?
You have increased the DPI of the video adapter to 150% (144 dots per inch) to keep text readable and avoid having windows the size of a postage stamp. Quite necessary on such high resolution displays. But you haven't told Windows that your program knows how to deal with it.
So it assumes your program is an old one that was never designed to run on such monitors. It helps and lies to you. It gets your program to render its output to a memory buffer, then takes that output, rescales it by 150% and copies it to the video adapter. This is something you can see, text looks fuzzier if you put your program's output next to a program that doesn't ask for this kind of scaling, like Notepad.
And of course, it lies to you when you ask for the size of the screen. It tells you the screen is smaller than it really is, scaled down by that 150% factor, so that, after rescaling, a window you create will fill the screen.
Which is all just fine, but of course not ideal: your program doesn't look as good as it should. You have to tell Windows that you know how to deal with the higher resolution. Do beware that this looks easier than it is in practice. Getting text to look crisp is trivial; it is bitmaps that are problematic. And in general this is a fertile source of bugs - even the big companies can get this wrong.
Before I start with an answer, let me ask: what are you really trying to do? Or more specifically - why do you need to know the monitor resolution? The standard way to do this is to call GetWindowRect(GetDesktopWindow(), &rect). I'm not sure if the screen coordinates change based on DPI settings - but you should try that instead of GetMonitorInfo, as the latter is for more advanced stuff. And if GetWindowRect still returns a scaled rect, just call DPtoLP, LPtoDP or another coordinate-mapping function as appropriate.
When you adjust the display settings as you described, you are actually changing the DPI settings of the screen. As such, certain APIs go into compatibility mode so that they allow the app to create larger elements and windows without knowing anything about this setting.
Why do you need to know the actual screen resolution since most of the windowing APIs will behave accordingly when the DPI scaling changes?
I suspect you could call SetProcessDPIAware or the manifest file equivalent. But do read this MSDN article first to understand DPI scaling.
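For reference, a minimal sketch of the SetProcessDPIAware route (it must be called before any windows or DCs are created); after this, GetMonitorInfo should report the physical 3200x1800 rather than the virtualized 2133x1200:

    #include <windows.h>
    #include <cstdio>

    int main() {
        // Opt out of DPI virtualization; the <dpiAware> manifest entry is the preferred way.
        SetProcessDPIAware();

        // Query the primary monitor's bounding rectangle in true pixel coordinates.
        HMONITOR mon = MonitorFromPoint(POINT{0, 0}, MONITOR_DEFAULTTOPRIMARY);
        MONITORINFO mi = { sizeof(mi) };
        GetMonitorInfo(mon, &mi);

        printf("primary monitor: %ldx%ld\n",
               mi.rcMonitor.right - mi.rcMonitor.left,
               mi.rcMonitor.bottom - mi.rcMonitor.top);
        return 0;
    }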

Game on widescreen display

I'm new to OpenGL development for MacOS.
I'm making a game at 1024x768 resolution. In fullscreen mode on widescreen monitors my game looks stretched, which is not good.
Is there any function in OpenGL to get the pixels-per-inch value? If I find it, I can decide whether to add bars to the sides of the screen.
OpenGL is a graphics library, which means that it is not meant to perform such tasks; it's only for rendering something onto the screen. It is quite low level. You could use the Cocoa API NSScreen in order to get the correct information about the connected screens of your Mac.
I'm making a game at 1024x768 resolution.
That's the wrong approach. Never hardcode any resolutions. If you want to make a fullscreen game, use the fullscreen resolution. If you want to adjust the rendering resolution, switch the screen resolution and let the display do the proper scaling. By using the resolutions offered to you by the display and OS you'll always get proper aspect ratio.
Note that it may still be necessary to take the pixel aspect ratio into account. However, neither switching the display resolution nor determining the pixel aspect ratio is part of OpenGL. Those are facilities provided by the OS.
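One common approach, sketched below in C++-style OpenGL: keep the game's fixed 4:3 image and letterbox/pillarbox it by computing a centered viewport from the actual window size (the 4:3 target and the function name are illustrative):

    #include <OpenGL/gl.h>   // macOS; use <GL/gl.h> elsewhere

    // Fit 4:3 content (e.g. 1024x768) into an arbitrary window without stretching;
    // the unused area is left as black bars.
    void applyLetterboxViewport(int window_w, int window_h) {
        const float target_aspect = 4.0f / 3.0f;
        int view_w = window_w;
        int view_h = (int)(window_w / target_aspect);
        if (view_h > window_h) {            // window is wider than 4:3: pillarbox
            view_h = window_h;
            view_w = (int)(window_h * target_aspect);
        }
        glViewport((window_w - view_w) / 2, (window_h - view_h) / 2, view_w, view_h);
    }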

What is the typical way of going about monitor resolution when making a full screen GUI?

I'm making a program that will have a widget that has to be fixed in size. Is there an industry standard for the smallest resolution width?
What are some common way of dealing with this problem?
On traditional PCs (i.e. no mobile, no "custom", no specialized hardware) you usually will not find a display with a resolution below 640x480x256, so that is the "technical" de-facto standard.
However, if you try to design for that resolution, your controls will look ugly and uneconomically designed, wasting lots of available space on real-world platforms.
I'd say 800x600x16 is an absolute minimum requirement. Even Windows safe mode usually is able to come up with (or can be switched to) 800x600. So I usually design resizeable apps for 800x600, and if done right, they look and behave great even at the largest resolutions. In contrast, if you design a resizeable app for 640x480, you will make an awful lot of compromises in layout etc. due to the limited space available, and that while "nobody" uses that resolution in the real world.
Furthermore, I love applications that resize intelligently. Depending on your GUI framework/toolkit, that is a requirement that can be met easily, or not so easily. It's worth the hassle, though.
You might also consider the font scaling setting. On large-resolution displays, many users prefer the "large fonts" setting, or something else different from the original font scaling. Then your app must scale accordingly, and the minimum-resolution criterion becomes less important, while the app's ability to resize intelligently gains much more significance.
In short:
a) Design for 800x600x16
a.1) Let your app terminate with an error message if the resolution is smaller than that
b) Make sure all resizeable dialogs resize intelligently
c) Test all layouts on large and small font scaling settings as well
d) Saying "800x600" by itself is not enough, since your app usually cannot use the whole screen, even if maximized. (We are not talking about fullscreen apps, are we?) So you should account for the taskbar and possibly other fixed screen elements that cannot be used by a normal window, and for the window's title bar when maximized. You will want the window to fit into the desktop area in all cases. (Well, maybe you will.) Windows can tell you the dimensions of that area, taking into account the taskbar and whatever else the user happens to use, so you can alert/abort if the usable space is smaller than the minimum resolution you designed for - see the sketch below.
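A minimal Win32/C++ sketch of point d), assuming SPI_GETWORKAREA is acceptable for finding the usable desktop area (the screen minus the taskbar and other reserved elements):

    #include <windows.h>

    // Returns true if the usable desktop area is at least the designed minimum;
    // the caller can warn or abort if it is not.
    bool WorkAreaFitsDesign(int minW = 800, int minH = 600) {
        RECT work = {};
        SystemParametersInfo(SPI_GETWORKAREA, 0, &work, 0);
        return (work.right - work.left) >= minW &&
               (work.bottom - work.top) >= minH;
    }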
For PCs (excluding embedded stuff like mobile phones, wristwatches, MP3 players, washing machines, etc.) the smallest resolution is 640x480, otherwise known as VGA resolution.
There may be some PC-class computers like early Macs, Ataris or TRS-80s with smaller resolutions, but nobody uses them nowadays. Conventional wisdom says the smallest monitor width is 640 pixels.
In the last 10 years a lot of developers have upped the assumed minimum resolution to 1024x768, otherwise known as XGA (by the way, nobody has called these modes VGA or XGA since the mid-1990s). All graphics cards manufactured since 1999 can handle at least 1024 pixels as the minimum width.
768 pixels was assumed as the minimum height by a lot of developers over the last 10 years, until about 3 years ago when Asus invented the netbook category. Most netbooks have a resolution of 1024x600, so a lot of software cannot fit on netbook screens (much to the annoyance of netbook owners).
Currently (since I'm one of those netbook owners) my own standard minimum is 1024x600, that is, 1024 pixels wide by 600 pixels high (actually more like 560 pixels, because I usually have to account for the menu bar and the taskbar).
Note: wikipedia has a nice summary of standard monitor resolutions: http://en.wikipedia.org/wiki/Graphic_display_resolutions

Why Direct3D application performs better in full screen mode?

The performance of a Direct3D application seems to be significantly better in full screen mode compared to windowed mode. What are the technical reasons behind this?
I guess it has something to do with the fact that a full-screen application can gain exclusive control of the display. But why can't the application gain exclusive control over part of the screen (i.e. a window) and get the same performance benefits?
Here are the cliff notes on how things work underneath.
The monitor always needs to be associated with a so-called primary surface in order to display anything, i.e. the video card can only scan out of one surface in video memory.
When an application is fullscreen (and everything was set up correctly to enable flipping), the primary surface is just one of the application's backbuffers, and it is flipped to another backbuffer every frame. This is the most efficient way of presenting to the screen, but it requires the application to own the entire monitor area (i.e. the entire primary surface).
When there's no fullscreen application and DWM is off, the primary surface is owned by the OS, and every windowed application performs a blit from its backbuffer to the primary surface. This blit takes some GPU time to complete (as do the blits from the other applications visible on the screen), so it's not as efficient as fullscreen presentation. XP worked that way.
When DWM is composing the screen, things get even more complicated.
Here, DWM owns the primary surface and needs to draw application windows onto it. To make that possible, every window has an associated surface holding its contents, called the redirection surface (which allows DWM to enable window ghosting, glass effects, and all that good stuff). Every time a D3D application presents a frame, it adds a blit to its redirection surface.
That way, several blits need to happen: a blit to the redirection surface by the app, then a blit from the redirection surface to the primary surface by DWM, which is, again, extra overhead compared to fullscreen.
Note all of that additional work is on the GPU, so it doesn't affect CPU performance.
Stuff to read further:
http://blogs.msdn.com/greg_schechter/archive/2006/03/19/555087.aspx
http://blogs.msdn.com/greg_schechter/archive/2006/05/02/588934.aspx
http://blogs.msdn.com/greg_schechter/archive/2006/03/05/544314.aspx
There's a bit on MSDN that says full screen mode uses buffer flipping, if set up correctly, as opposed to blitting. It makes sense.
Of course you can (and in a way, do) give exclusive control for part of the screen to an application, but what happens to the rest of the screen? You still have to blit, do occlusion checking, etc. on the rest of the windows, and I think that's what causes the performance hit.
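For what it's worth, here is a hedged Direct3D 9 sketch of the "set up correctly" part mentioned above: requesting an exclusive fullscreen swap chain so the backbuffer can be flipped to the display rather than blitted (the 1024x768 mode and the helper name are illustrative; error handling is omitted):

    #include <d3d9.h>

    // d3d comes from Direct3DCreate9(D3D_SDK_VERSION); hwnd is the game's window.
    IDirect3DDevice9* CreateFullscreenDevice(IDirect3D9* d3d, HWND hwnd) {
        D3DPRESENT_PARAMETERS pp = {};
        pp.Windowed             = FALSE;                  // exclusive fullscreen
        pp.BackBufferWidth      = 1024;                   // illustrative display mode
        pp.BackBufferHeight     = 768;
        pp.BackBufferFormat     = D3DFMT_X8R8G8B8;
        pp.BackBufferCount      = 1;
        pp.SwapEffect           = D3DSWAPEFFECT_DISCARD;  // lets the driver flip, not blit
        pp.hDeviceWindow        = hwnd;
        pp.PresentationInterval = D3DPRESENT_INTERVAL_ONE;

        IDirect3DDevice9* device = nullptr;
        d3d->CreateDevice(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, hwnd,
                          D3DCREATE_HARDWARE_VERTEXPROCESSING, &pp, &device);
        return device;   // nullptr if creation failed
    }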
I'll add to #aib's answer that the rest of the screen is being managed by the OS. So, if anything else needs to be drawn/worked upon simultaneously, there has to be a performance hit.
For example, if you have a video playing in Windows Media Player in one window and then start Civilization in another, when Civ starts doing its fancy graphics it will need to share screen space with everything else (like the video).
Whereas if the DirectX app has the full-screen, everything else might be "updating" or "playing", but not being drawn.
Basically, the video hardware is completely dedicated to the exclusive mode application.
There is no contention for video resources (pipeline, texture memory, etc...)
In particular, texture upload can be a big bottleneck. The less you have to do it (because you have it all), the better.
