Properly handling HighDPI on macOS with SDL and OpenGL

My attempts at high-DPI rendering on macOS for my game, Bitfighter, always end up looking bad, like a scaled-up version of a low-res game.
The game uses SDL2 + OpenGL, and I have correctly enabled the SDL_WINDOW_ALLOW_HIGHDPI window flag as well as marked the app as high-DPI aware in the Info.plist. This all works, and I get the higher-res title bar just fine. SDL_GL_GetDrawableSize correctly returns a pixel size 2x larger than the window size, but the following techniques for rescaling don't yield good results:
Calling glViewport with window coordinates
Calling glViewport with drawable coordinates, then glOrtho to scale (as suggested in the SDL doc README-ios.md)
Both show pixelated vector graphics. What can I do to get OpenGL to draw properly with macOS high-DPI?
Thanks.
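For reference, here is a minimal sketch of the viewport-in-pixels / ortho-in-window-coordinates combination described above. It assumes a fixed-function desktop GL context and an 800x600 window; both are placeholders for illustration, not the game's actual setup.

    // Minimal sketch (assumed names/sizes): viewport in pixels, projection in window coords.
    #include <SDL.h>
    #include <SDL_opengl.h>

    int main(int, char**) {
        SDL_Init(SDL_INIT_VIDEO);
        SDL_Window* win = SDL_CreateWindow("hidpi test",
            SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, 800, 600,
            SDL_WINDOW_OPENGL | SDL_WINDOW_ALLOW_HIGHDPI);
        SDL_GLContext ctx = SDL_GL_CreateContext(win);

        int winW, winH, drawW, drawH;
        SDL_GetWindowSize(win, &winW, &winH);         // logical size ("points")
        SDL_GL_GetDrawableSize(win, &drawW, &drawH);  // physical pixels (2x on Retina)

        glViewport(0, 0, drawW, drawH);               // cover every pixel of the backbuffer
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glOrtho(0, winW, winH, 0, -1, 1);             // game logic stays in window coords
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();

        // ... render loop: geometry specified in window coords now rasterizes
        // at full pixel density ...

        SDL_GL_DeleteContext(ctx);
        SDL_DestroyWindow(win);
        SDL_Quit();
        return 0;
    }

The point of keeping the projection in window coordinates is that game logic doesn't change; only the viewport, and anything specified in raw pixels such as glLineWidth or glScissor, needs to know about the pixel ratio (drawW / winW).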

Related

Force a retina iPad to use non-retina images

I'm creating an app using OpenGL...
I have a retina iPad, and I'm using a texture that stores @2x images. When my app starts up, the OpenGL framebuffer is created, and querying its size comes back as 1536x2048 - so far so good.
I also have a texture for non-retina iPad display (768x1024), but I do not have a non-retina iPad. I'd like to force my retina iPad to use the non-retina graphics (i.e, for it to use scale == 1.0). So I'd like it to create a 768x1024 OpenGL framebuffer. The problem is that it always creates a 1536x2048 frame buffer, and scale is always 2.0.
Is there a way of forcing it to use a scale of 1.0 and creating a smaller framebuffer? The base iOS version for the app is 8.0, but since iOS 8.0 still works on the iPad 2, I'd like to test that resolution as well.
I've tried using UILaunchImages, but that doesn't seem to work? In the past, when an app was written for a non-retina screen, a retina device used to scale the lower resolution to fit the higher-resolution screen, and that's what I want - at least so I can test....
Is UILaunchImages the right way to go to try and get iOS to think that only low-res graphics are available?
IIRC, simply having an @2x resource (splash, icon, etc.) used to signal to iOS that your app supports Retina. I think Apple has since added an NSHighResolutionCapable key that you can try setting to false in the plist. So try removing all @2x resources from your build and setting that key to false.
Another approach is to change your glViewport and/or the projection matrix to scale your logical resolution to 1024x768. I have the opposite problem, in that I need to scale non-retina images up to the retina backing scale; I use a scaling matrix in my matrix stack to fix this.
Ask me how I mix and match retina and non-retina images!
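For what it's worth, a rough sketch of that viewport/projection-scaling idea (fixed-function OpenGL ES 1.x shown; the function name, the 768x1024 logical size, and the 2.0 factor are assumptions for illustration, not code from the answer):

    // Sketch: draw in 768x1024 logical coordinates regardless of the real framebuffer size.
    #include <OpenGLES/ES1/gl.h>

    void setupProjection(int fbWidth, int fbHeight)   // e.g. 1536x2048 on a retina iPad
    {
        glViewport(0, 0, fbWidth, fbHeight);          // always cover the whole framebuffer

        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glOrthof(0.0f, 768.0f, 1024.0f, 0.0f, -1.0f, 1.0f);  // logical coords in, pixels out

        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        // Going the other way (1x assets on a 2x backing store), push the
        // opposite scale instead, e.g. glScalef(2.0f, 2.0f, 1.0f).
    }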

how to change the size of the rendering surface in opengl/egl?

I am working in OpenGL ES 2.0 with C. (Not Android)
I want to change the size of the EGL surface so that I can render two different contexts on the screen at the same time.
Is it possible to resize the EGL surface?
What platform / window system is being used? The window surface comes from the "window", and hence depends on the window system: if using X, it will come from a client window; if using the fullscreen "NULL" window system, from the size of the framebuffer; if using Qt, from a widget surface or similar.
An example using a NULL window system is below:
https://github.com/prabindh/sgxperf/blob/master/sgxperf_gles20_vg.cpp
To answer: EGL only refers to a window that has already been created, and hence cannot resize it by itself. When a client window is resized, EGL then has to update its internals, not the other way round.
You can use glViewport to target different areas of the screen.
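A small sketch of the glViewport approach (OpenGL ES 2.0 assumed; drawSceneA/drawSceneB and the surface-size arguments are placeholders):

    // Sketch: split one EGL surface between two renderings with glViewport,
    // rather than trying to resize the surface itself.
    #include <GLES2/gl2.h>

    void drawSceneA();   // placeholder: first rendering
    void drawSceneB();   // placeholder: second rendering

    // surfW/surfH can be queried via eglQuerySurface(dpy, surf, EGL_WIDTH / EGL_HEIGHT, ...)
    void renderFrame(int surfW, int surfH)
    {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

        glViewport(0, 0, surfW / 2, surfH);                   // left half
        drawSceneA();

        glViewport(surfW / 2, 0, surfW - surfW / 2, surfH);   // right half
        drawSceneB();

        // one eglSwapBuffers(dpy, surf) per frame, after both passes
    }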

Skinned Window: Win32 API and DirectX

I'm trying to create a borderless window using a WS_EX_LAYERED style window. The objective is to render graphics using DirectX directly to the desktop, using alpha to blend onto the current desktop windows.
Now on my system this technique seems to work perfectly. I can set various alpha levels and achieve different levels of transparency. Unfortunately several users have reported severe performance problems and low frame rate, making this technique unusable.
The code setup is as follows:
Create a layered (WS_EX_LAYERED extended-style) window.
Initialize DirectX using the window HWND.
Create a render target using the CreateRenderTarget DirectX method.
Then during the render loop:
Render graphics to the render target using DirectX calls.
Get the HDC handle to the DirectX render target surface using the GetDC method.
Update the window contents using the UpdateLayeredWindow function, specifying the DirectX surface HDC.
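For concreteness, a rough sketch of that per-frame update (Direct3D 9 assumed; pRenderTarget, hWnd, and the sizes are placeholders from the setup above, and error handling is trimmed):

    #include <windows.h>
    #include <d3d9.h>

    // Push the rendered D3D surface into the layered window with per-pixel alpha.
    void PresentToLayeredWindow(IDirect3DSurface9* pRenderTarget, HWND hWnd,
                                LONG width, LONG height)
    {
        HDC hdcSurface = nullptr;
        if (FAILED(pRenderTarget->GetDC(&hdcSurface)))   // DC for the D3D surface
            return;

        POINT ptSrc = { 0, 0 };
        SIZE  size  = { width, height };
        BLENDFUNCTION blend = {};
        blend.BlendOp             = AC_SRC_OVER;
        blend.SourceConstantAlpha = 255;
        blend.AlphaFormat         = AC_SRC_ALPHA;        // use the surface's alpha channel

        // ULW_ALPHA blends the surface onto whatever is behind the window on the desktop.
        UpdateLayeredWindow(hWnd, nullptr, nullptr, &size,
                            hdcSurface, &ptSrc, 0, &blend, ULW_ALPHA);

        pRenderTarget->ReleaseDC(hdcSurface);
    }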
My question is: am I doing something wrong? Is there a way to improve the performance of the window update? I have tried various things, such as locking the render target and manually copying the bits to a DIB section to display in the window area, without success.
How big is your window? Note MSDN's documentation at http://msdn.microsoft.com/en-us/library/windows/desktop/ms633556%28v=vs.85%29.aspx says "For best drawing performance by the layered window and any underlying windows, the layered window should be as small as possible."
You may be getting a performance boost if compositing (Aero) is enabled. If Windows is already compositing, it won't have to do as much extra work to draw layered windows.
If you're not seeing any difference in performance depending on compositing, then I am probably completely off base here.

Why does this text look "less bold" in OS X 10.5 compared to 10.7?

I built a very simple application with nothing but a single NSTextView in it in Xcode / Interface Builder. I've done nothing to the text view other than change the font face to "Arial" and increase the font size. However, it looks a lot less bold on OS X 10.5 than it does on 10.7. What's going on here?
(10.5 in the top window, 10.7 in the bottom window)
I've tested with Helvetica and got the same results, so it's not something to do with this specific font.
If you enlarge that image enough, you'll see color fringing on the 10.7 sample. This indicates that the text was rendered with subpixel antialiasing, more specifically using an RGB subpixel ordering. The 10.5 sample uses grayscale antialiasing. While it is generally a good idea to use subpixel rendering, it can look bad on low-resolution screens or CRTs, and it can't be used in a multi-monitor setup where there is more than one subpixel arrangement to contend with. This means that some users will have subpixel rendering enabled for their system, and some won't.
Don't try to circumvent that setting in any way, either by overriding the system preference for font smoothing or by pre-rendering the text into an image. Users are far more likely to notice the one app whose text looks wrong on their system than they are to notice the difference in rendering between two different machines.

how to Map a window to 3D

I'm looking for a way to render a window to a texture in 3D with D3D,
for example, the way the Windows Aero Glass preview does.
A window, or a part of a window that has a window handle, gets rendered to a D3D device (I guess Aero Glass is made with D3D).
My project is a piece of 3D interactive media; it's an AR project using an HMD and hand recognition (like a 3D touch interaction). My part is the 3D rendering. WPF can do this, but I haven't found a way to do it with D3D.
Does anyone know how to do this, or is it impossible with D3D? If you know, please point me to a keyword I can use to Google it.
Thanks for reading and for your attention. I'm not a native English speaker, and I'm sorry if my English seems rough.
I would suggest using dynamic textures. You first create a texture of the desired size and format, then get its surface, obtain an HDC from it, and pass that to the window you want drawn. Showing the texture through the D3D device shouldn't be a problem.
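Very roughly, a sketch of that dynamic-texture idea (Direct3D 9 assumed; using PrintWindow to make the source window paint into the DC is my own suggestion rather than part of the answer, and the format/usage choice is an assumption, since GetDC only works on certain lockable formats such as X8R8G8B8):

    #include <windows.h>
    #include <d3d9.h>

    // Sketch: capture a window into a D3D9 texture via the surface's GDI DC.
    // pDevice and hWndSource are placeholders; error handling is minimal.
    IDirect3DTexture9* CaptureWindowToTexture(IDirect3DDevice9* pDevice,
                                              HWND hWndSource,
                                              UINT width, UINT height)
    {
        IDirect3DTexture9* pTexture = nullptr;
        if (FAILED(pDevice->CreateTexture(width, height, 1, D3DUSAGE_DYNAMIC,
                                          D3DFMT_X8R8G8B8, D3DPOOL_DEFAULT,
                                          &pTexture, nullptr)))
            return nullptr;

        IDirect3DSurface9* pSurface = nullptr;
        if (SUCCEEDED(pTexture->GetSurfaceLevel(0, &pSurface)))
        {
            HDC hdc = nullptr;
            if (SUCCEEDED(pSurface->GetDC(&hdc)))
            {
                // Ask the source window to paint itself into the texture's DC.
                PrintWindow(hWndSource, hdc, 0);
                pSurface->ReleaseDC(hdc);
            }
            pSurface->Release();
        }
        return pTexture;   // bind this texture to a quad in the 3D scene
    }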
