I'm new to OpenGL development for macOS.
I'm making a game at 1024x768 resolution. In fullscreen mode on widescreen monitors my game looks stretched, which isn't good.
Is there any function in OpenGL to get the pixels-per-inch value? If I can find it, I can decide whether to add bars to the sides of the screen.
OpenGL is a graphics library, which means it is not meant to perform such tasks; it's only for rendering something onto the screen, and it is quite low level. You could use the Cocoa NSScreen API to get the correct information about the screens connected to your Mac.
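For example, here is a minimal sketch that derives a pixels-per-inch value from NSScreen together with CGDisplayScreenSize, which reports the physical size in millimetres (note that some displays and projectors do not report a sensible physical size):

    import AppKit

    // Rough sketch: compute pixels-per-inch for the main screen by combining
    // the pixel width with the physical width reported in millimetres.
    func mainScreenPPI() -> CGFloat? {
        guard let screen = NSScreen.main,
              let screenNumber = screen.deviceDescription[NSDeviceDescriptionKey("NSScreenNumber")] as? NSNumber
        else { return nil }

        let displayID = CGDirectDisplayID(screenNumber.uint32Value)
        let physicalSize = CGDisplayScreenSize(displayID)         // millimetres
        guard physicalSize.width > 0 else { return nil }          // not all displays report this

        let pixelWidth = CGFloat(CGDisplayPixelsWide(displayID))  // pixels
        return pixelWidth / (physicalSize.width / 25.4)           // 25.4 mm per inch
    }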
I'm making a game at 1024x768 resolution.
That's the wrong approach. Never hardcode any resolutions. If you want to make a fullscreen game, use the fullscreen resolution. If you want to adjust the rendering resolution, switch the screen resolution and let the display do the proper scaling. By using the resolutions offered to you by the display and the OS, you'll always get the proper aspect ratio.
Note that it may still be necessary to take the pixel aspect ratio into account. However, neither switching the display resolution nor determining the pixel aspect ratio is part of OpenGL; those are facilities provided by the OS.
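If you do end up adding bars instead of stretching, the usual approach is to aspect-fit the viewport to whatever drawable size the OS actually gives you. A rough sketch (the drawable size parameters are placeholders for whatever your windowing layer reports):

    // Given the real drawable size from the OS, compute a letterboxed /
    // pillarboxed viewport that preserves a 4:3 game aspect ratio.
    func aspectFitViewport(drawableWidth: Int, drawableHeight: Int,
                           targetAspect: Double = 4.0 / 3.0) -> (x: Int, y: Int, w: Int, h: Int) {
        let drawableAspect = Double(drawableWidth) / Double(drawableHeight)
        if drawableAspect > targetAspect {
            // Display is wider than the game: pillarbox (bars on the sides).
            let w = Int(Double(drawableHeight) * targetAspect)
            return ((drawableWidth - w) / 2, 0, w, drawableHeight)
        } else {
            // Display is taller than the game: letterbox (bars on top and bottom).
            let h = Int(Double(drawableWidth) / targetAspect)
            return (0, (drawableHeight - h) / 2, drawableWidth, h)
        }
    }

    // Pass the result to glViewport(x, y, w, h) each frame.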
Since most devices today have a CPU and a GPU, the usual advice for programmers wishing to do animated vector graphics (like making a circle grow or move around) is to define the graphical item once and then use linear transformations to animate it. This way, (on most platforms and frameworks) the GPU can do the animation work, because rasterization with linear transformations can be done very fast on a GPU. If the programmer chooses to draw each frame on the CPU, it would most likely be much slower and consume more energy.
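For example, on iOS this pattern roughly looks like the following (a sketch with UIKit, which is exactly what WatchKit does not seem to offer):

    import UIKit

    // Illustration of "define once, animate with a transform": the circle is
    // rasterized once, and the render server interpolates the scale transform.
    let circle = UIView(frame: CGRect(x: 0, y: 0, width: 40, height: 40))
    circle.layer.cornerRadius = 20
    circle.backgroundColor = .green

    UIView.animate(withDuration: 1.0,
                   delay: 0,
                   options: [.repeat, .autoreverse],
                   animations: { circle.transform = CGAffineTransform(scaleX: 2.0, y: 2.0) },
                   completion: nil)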
I understand that the Watch is not a device you want to overload with complex animations, but the Home Screen, at least, certainly seems to use exactly this kind of animated linear transformation.
Also, most Watch Faces are animated in some way, e.g. the moving second and minute hands.
However, the WatchKit controls do not have a .transform property, and I could not find much in the documentation - the words "animation" and "graphics" are not even mentioned there.
So, the only way I currently see is to draw the vector graphics into a CGContext and then put the result as a UIImage into an image control, as described here. But this does not really seem energy-efficient. It is exactly the kind of "CPU pixel drawing" that we usually want to avoid if possible. I think it is not energy-efficient because if I draw into a 100x100 pixel image buffer, the image has to be scaled to the actual Watch screen size, so we have two actual drawing passes per frame.
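Concretely, what I mean by that approach is something like this (a sketch; the WKInterfaceImage outlet name is just a placeholder):

    import WatchKit
    import UIKit

    // "CPU pixel drawing": render one frame of a growing circle into a bitmap
    // context, then hand the result to a WKInterfaceImage.
    func drawFrame(radius: CGFloat, into imageView: WKInterfaceImage) {
        let size = CGSize(width: 100, height: 100)
        UIGraphicsBeginImageContextWithOptions(size, false, 0)
        defer { UIGraphicsEndImageContext() }

        guard let ctx = UIGraphicsGetCurrentContext() else { return }
        ctx.setFillColor(UIColor.green.cgColor)
        ctx.fillEllipse(in: CGRect(x: 50 - radius, y: 50 - radius,
                                   width: radius * 2, height: radius * 2))

        imageView.setImage(UIGraphicsGetImageFromCurrentImageContext())
    }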
Is there an officially recommended, energy-efficient way to do animations on the Apple Watch?
Or, in other words, can we animate things like they are animated on the Home Screen or Watch Faces?
It seems SpriteKit is the answer. You can create an SKScene and node objects and then display them in a WKInterfaceSKScene.
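For illustration, a minimal sketch (assuming an interface controller whose storyboard contains a WKInterfaceSKScene outlet named skInterface) might look like this:

    import WatchKit
    import SpriteKit

    class CircleInterfaceController: WKInterfaceController {
        @IBOutlet weak var skInterface: WKInterfaceSKScene!

        override func awake(withContext context: Any?) {
            super.awake(withContext: context)

            let scene = SKScene(size: CGSize(width: 100, height: 100))
            scene.scaleMode = .aspectFit

            let circle = SKShapeNode(circleOfRadius: 10)
            circle.position = CGPoint(x: 50, y: 50)
            circle.fillColor = .green
            scene.addChild(circle)

            // Grow and shrink forever -- a scale transform animated by SpriteKit,
            // so the per-frame work stays off the CPU drawing path.
            let pulse = SKAction.sequence([
                SKAction.scale(to: 2.0, duration: 0.5),
                SKAction.scale(to: 1.0, duration: 0.5)
            ])
            circle.run(SKAction.repeatForever(pulse))

            skInterface.presentScene(scene)
        }
    }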
Today's displays cover a huge range of sizes and resolutions. For example, my 34.5cm × 19.5cm display (a diagonal of 39.6cm or 15.6") has 1366 × 768 pixels, whereas the MacBook Pro (3rd generation) with a 15" diagonal has 2880 × 1800 pixels.
Multiple people have complained that everything is too small on such high-resolution displays (see example). That is easy to explain when developers use pixels to define their GUIs. On "traditional" displays this is not a big problem, as the pixels have roughly the same size on most monitors, but on the new monitors with much higher pixel density the pixels are simply smaller.
So how can / should user interface developers deal with that problem? Is it possible to get the physical size of the screen? Is it possible to set physical sizes instead of pixel-based ones? Is that still a problem (it's been a while since I last read about it), or has it been fixed in the meantime?
(While CSS seems to support cm, when I try it here, the rendered size is not the size I set.)
how can / should user interface developers deal with that problem?
Use a toolkit or framework that supports resolution independence. WPF is built from the ground up to be resolution-independent, but even an old framework like Windows Forms can learn new tricks. OS X/iOS and Windows (or the browser, if we're talking about the web) may try to take care of the problem themselves by automatic scaling, but if bitmap graphics are involved, developers might need to provide different bitmaps, as on Android (which faces the most varied resolutions and densities compared to other OSes).
Is it possible to get the physical size of the screen?
No, and developers shouldn't care about it. Developers should only care about the class of the device (say, a different UI for tablet and smartphone), and perhaps the DPI to decide which bitmap resource to use. Vector resources and fonts should be scaled by the framework.
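As an illustration, on macOS that decision can be driven by the screen's backing scale factor rather than its physical size (a rough sketch; the asset names are made up):

    import AppKit

    // Pick a bitmap variant based on pixel density; vector assets and text are
    // laid out in points and left to the framework to scale.
    let scale = NSScreen.main?.backingScaleFactor ?? 1.0
    let iconName = scale >= 2.0 ? "toolbar-icon@2x" : "toolbar-icon"
    let icon = NSImage(named: iconName)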
Is that still a problem (it's been a while since I last read about it), or has it been fixed in the meantime?
That depends on when you last read about it. Windows support is still spotty, even for its own built-in apps, and while anyone developing in WPF or UWP has it easy, don't expect major third-party apps to join in soon. OS X display scaling seems to work a bit better, while modern mobile OSes either run on a limited range of resolutions (iOS and Windows Phone) or handle every resolution imaginable quite nicely (Android).
There are a few ways to deal with different screen sizes. For example, when I make mobile apps in Java, I either use DIP (density-independent pixels, which stay at a fixed apparent size) or make objects occupy a percentage of the screen with simple math. For web development you can use vw and vh (viewport width and viewport height): by adding these to the end of a value instead of px, the objects take up a percentage of the viewport; for example, 100vh takes up 100% of the viewport height. What I think is the best way, though time-consuming, is to use a library like Bootstrap that automatically resizes elements, even when the window is resized. W3Schools has a good tutorial on Bootstrap, and more detailed explanations of any of these options are an easy Google search away.
Designing a GUI in today's era of display diversity is a real challenge. I would suggest several hints, mainly about the design of GUI applications:
Never set or expect a constant pixel size for text; the user can change it from the system settings of the OS. Use real-world measures for the text and check its pixel size when drawing. Provide some way to fit text of arbitrary size within the boundaries of the window.
Never set or expect a constant pixel size for GUI widgets. Try to position them in the window in some adaptive way, according to the size of the window; most GUI widget toolkits today have instruments for this (see the sketch after this list).
Never set or expect a constant pixel size for dialog windows. Let the OS choose the size for you and then use what you get (X), or, if you need to set some size and position yourself (Windows), define it as a percentage of the screen size.
If possible, use scalable image formats for icons; SVG is actually great for icons. Using sets of bitmap icons in different sizes is acceptable, but it is far from optimal in memory use and still will not provide perfect scaling in most cases.
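As a sketch of the adaptive-positioning point above (AppKit with Auto Layout; contentView is assumed to be the window's content view), sizing a widget relative to its container instead of in pixels could look like this:

    import AppKit

    // The button takes a quarter of the content view's width wherever the
    // window goes, so it scales with window size rather than being pinned to pixels.
    func layoutButton(in contentView: NSView) {
        let button = NSButton(title: "OK", target: nil, action: nil)
        button.translatesAutoresizingMaskIntoConstraints = false
        contentView.addSubview(button)

        NSLayoutConstraint.activate([
            button.widthAnchor.constraint(equalTo: contentView.widthAnchor, multiplier: 0.25),
            button.centerXAnchor.constraint(equalTo: contentView.centerXAnchor),
            button.centerYAnchor.constraint(equalTo: contentView.centerYAnchor)
        ])
    }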
My game engine currently uses UIScreen bounds for its rendering resolution. On an iPhone 6 Plus this reports the virtual resolution of 2208x1242.
On the simulator, UIScreen nativeBounds also reports that same resolution.
However, on a real device nativeBounds will be 1920x1080. I am unsure which to use for correct OpenGL rendering on an iPhone 6 Plus, and I can find no official documentation on it.
Which one is correct?
Use nativeBounds and nativeScale to determine the size to set for your framebuffer or drawable. (Don't hardcode the size.)
For GPU-heavy, performance-sensitive work — games with OpenGL ES or Metal — you really want to minimize the number of pixels going through the fragment shader. One good way to do that is to not render more pixels than the display hardware has.
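A minimal sketch of what that means in practice (not the only way to set it up):

    import UIKit

    // Size the framebuffer from physical pixels rather than the scaled-up
    // virtual bounds. On an iPhone 6 Plus, bounds is 414x736 points and
    // nativeScale is about 2.61, which multiplies out to the panel's 1080x1920.
    let screen = UIScreen.main

    let pointSize   = screen.bounds.size     // logical points
    let nativeScale = screen.nativeScale     // physical pixels per point

    let framebufferWidth  = Int((pointSize.width  * nativeScale).rounded())
    let framebufferHeight = Int((pointSize.height * nativeScale).rounded())

    // With a CAEAGLLayer-backed view, setting the view's contentScaleFactor to
    // nativeScale gives a drawable of roughly this size; use the same values for glViewport.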
I can now confirm that on a real device, correct rendering is produced when using the virtual resolution of 2208x1242 reported by [UIScreen bounds].
I'd like to play some older games on Windows 7. Running them isn't an issue, but the increase in monitor size and pixel density of newer monitors is. Pre-rendered games intended to be played full-screen at e.g. 640x480 are now "blown up" to fill the whole screen, making everything look blurry. I've been looking at different solutions, but so far to no avail for a selection of games:
Running the game in "windowed mode" is an option for those games that support it.
DxWnd can be used to force some games into "windowed mode", but it also causes some applications to crash.
VirtualBox works nicely, since it will automatically resize to the application's desired full-screen resolution, but this is not an option if VirtualBox's 3D support is insufficient to play the game.
Drivers like those from AMD or NVIDIA provide a way to force the pixel aspect ratio to be maintained, if pixel aspect ratio is an issue on widescreen monitors.
None of the above works for me for one particular game: it does not provide a "windowed mode", DxWnd makes it crash, VirtualBox's 3D support is insufficient, and aspect ratio isn't an issue on my monitor.
Which brings me to the question: is there a way to lower the screen resolution while maintaining the monitor's original pixel density, instead of having the image fill up the whole screen? That would essentially create a smaller viewport for the Windows environment to use and fill the rest of the screen with big black borders.
Right-click on the game, select Properties, and then try ticking the relevant option there.
I need to render a QuickTime movie into a GWorld that may not be the movie's natural size, on Windows. When the movie is scaled up, say by a factor of 1.5, I see some jaggies. If I open the same movie in QuickTime Player (version 7.6.6 on Windows Vista) and stretch the window, I see jaggies while stretching, but when I release the mouse, the image smooths out, so it must be using some smarter scaling algorithm or antialiasing. What do I need to do to render at a bigger size, besides SetMovieGWorld and SetMovieBox?
Here's a little of the smooth version:
(image: frameforge3d.com)
And here's the slightly jaggy counterpart:
(image: frameforge3d.com)
(Although this shows text, it's not a text track, just part of the image.)
I tried calling SetMoviePlayHints with various flags such as hintsHighQuality, with no effect.
Here's the big picture, in case you might have a suggestion for a whole different approach. The movie is side-by-side stereo: the left and right halves of the image need to be composited (in one of several ways) and then drawn to the screen. For instance, a movie I'm testing with has a natural size of 2560 x 720 but is displayed at 1280 x 720. The way I'm currently doing it is rendering to a GWorld, then, in a SetMovieDrawingCompleteProc callback, uploading the left and right halves to OpenGL textures with glTexSubImage2D and rendering to the screen using a GLSL shader.
I tried using an Apple Developer Technical Support incident to get help with this, but their reply was basically "we don't do Windows".
Can you use DirectX, and more specifically DirectShow, to display your movie instead of using Apple's SDK? I know that DirectX can play QuickTime movies.
If you can use DirectShow then you can search for or create your own video transform filters to do antialiasing and smoothing.