We are developing a skinned application, and under Vista/Windows 7, on some machines, skinned applications sometimes lose their skin. Here's an example of the problem, and here's how the application looks when it's good.
This happens to us whether we develop with the native Win32 API or in Qt. It happens spontaneously, with no event that might explain it. By the way, we sometimes see it happen to some other applications, too.
We work around it by repainting everything every 2-3 seconds, but this is an ugly hack...
Any ideas why this could happen?
thanks _very_much_ for any lead -
Lior
Shot in the dark, but it sounds like a graphics driver problem. I'd check whether the problematic machines all have the same graphics card or the same version of the graphics driver, and how the driver collection on those machines compares with the OK ones.
Shot in the dark #2: You're running out of GDI resources because your app (or another app running on the same machine) is leaking GDI handles.
It's been a while since I've had to use any tools for detecting GDI handle leaks (Google or Bing for them).
Here are some links to read up on:
http://msdn.microsoft.com/en-us/magazine/cc301756.aspx
http://www.nirsoft.net/utils/gdi_handles.html
http://msdn.microsoft.com/en-us/magazine/cc188782.aspx
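If you want to rule out a leak in your own process, a quick check is to log the process's GDI and USER handle counts and watch whether they keep climbing. A minimal sketch, assuming you just want the numbers printed somewhere visible:

```cpp
// Minimal sketch: log this process's GDI and USER handle counts so a leak
// shows up as a steadily climbing number over time.
#include <windows.h>
#include <stdio.h>

void LogGuiHandleCounts()
{
    DWORD gdi  = GetGuiResources(GetCurrentProcess(), GR_GDIOBJECTS);
    DWORD user = GetGuiResources(GetCurrentProcess(), GR_USEROBJECTS);
    printf("GDI handles: %lu  USER handles: %lu\n", gdi, user);
    // If the GDI count keeps growing toward the default per-process quota
    // (10,000 by default), something is creating pens/brushes/bitmaps/DCs
    // without releasing them.
}
```

Call this on a timer or at key points in the app; a flat count rules this theory out quickly, while a steady climb points at a leak.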
I work on a desktop application that we sometimes have to run on a virtual machine accessed via Windows Remote Desktop. Fonts and gradients are noticeably degraded in appearance when running through Remote Desktop: the fonts are clearly not anti-aliased (they normally are), and the gradients degenerate into much larger bands of solid color, losing their smoother look. Initially, I assumed Windows was doing this to improve performance, but when I compared fonts in our product with those in other applications (Visual Studio specifically), I saw that Qt is definitely rendering fonts in dialogs and QGraphicsScene differently.
In my application's title bar, the font exactly matches the appearance of other applications' title bars, which makes sense because Windows draws that. Within my application, all of the top menu items and the fonts on dialogs are not anti-aliased and look terrible. We use QGraphicsScene extensively, and those fonts are degraded as well.
I don't have another application that generates gradients to compare those, but I viewed a high resolution image through the Remote Desktop connection using the Windows image viewer, and it looks just as good as on a local desktop.
The degraded appearance means that we can't do screen shots for documentation while using the VM. We are also frequently required to do demos using VMs and Remote Desktop, and the appearance is not appealing to show to customers. In our industry and within our company, there's increasing pressure to use VMs instead of local, physical machines, so this is becoming a bigger problem.
Both symptoms lead me to believe that Qt knows that I'm visualizing through Remote Desktop and that it is choosing to degrade appearance in favor of performance. I don't want that, or at the very least, I need to control it.
I suspect this is buried somewhere in Qt's style/theme system, but I haven't had any luck finding clues that would point me to the correct place to do something about this, or at least an answer that indicates whether or not it's even possible. Any advice is greatly appreciated.
With QGraphicsScene, rendering can go through OpenGL, and on some VMs we mostly rely on OpenGL being emulated on top of MS DirectX, i.e. software rather than hardware-accelerated rendering. The most popular way of providing OpenGL like this is based on ANGLE, which translates OpenGL ES calls to DirectX.
To improve the rendering on a VM, I would try building a custom Qt for your app, using one of the proposed Qt build configurations to produce a Windows-specific Qt build.
With Qt evolving, it gets a bit confusing which configuration is best. I was told that since Qt 5.5, -opengl dynamic is optimal for most environments. I used to build the -opengl es2 configuration with Qt 5.3, and that worked well without degrading the graphics. Mind that the VMs we used were from VMware, not MS Hyper-V; under Hyper-V the app would not even load because OpenGL failed to initialize, and I could not get ANGLE to help there with that specific Qt.
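If rebuilding Qt is not an option, a dynamically-built Qt (the -opengl dynamic configuration mentioned above) also lets the application request a rendering backend at startup via application attributes. A minimal sketch, assuming Qt 5.4 or later for the software-OpenGL attribute:

```cpp
#include <QApplication>

int main(int argc, char *argv[])
{
    // These attributes must be set before QApplication is constructed and
    // only take effect with a Qt built with -opengl dynamic.
    // Qt::AA_UseOpenGLES routes GL through ANGLE/Direct3D;
    // Qt::AA_UseSoftwareOpenGL (Qt 5.4+) forces a software rasterizer,
    // which tends to behave predictably inside VMs with no GPU passthrough.
    QCoreApplication::setAttribute(Qt::AA_UseSoftwareOpenGL);
    // QCoreApplication::setAttribute(Qt::AA_UseOpenGLES); // alternative

    QApplication app(argc, argv);
    // ... create and show your widgets / QGraphicsView here ...
    return app.exec();
}
```

Which attribute gives the best-looking result will depend on what the VM's virtual GPU and Remote Desktop expose, so it's worth trying both.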
I was able to address the issue with fonts in QGraphicsScene. Because of the nature of our product, the font handling for graphics items was fairly specialized, and very early in development, when I was very new to Qt, I had set the style strategy on those fonts to QFont::ForceOutline because I didn't want the font matching to use any bitmapped fonts. Through experimentation, I found that this strategy results in the fonts not being anti-aliased when running through Remote Desktop. Changing to QFont::PreferAntialias fixed the problem for the fonts in the scene, and that's a substantial and welcome improvement.
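For reference, the change amounted to something like the following sketch (the font and the helper here are placeholders, not our actual code):

```cpp
#include <QFont>
#include <QGraphicsSimpleTextItem>

// Hypothetical helper illustrating the change: PreferAntialias instead of
// ForceOutline so scene fonts stay anti-aliased over Remote Desktop.
void applySceneFont(QGraphicsSimpleTextItem *item)
{
    QFont font("Arial", 10);
    // font.setStyleStrategy(QFont::ForceOutline);  // old: not anti-aliased over RDP
    font.setStyleStrategy(QFont::PreferAntialias);  // new: anti-aliased
    item->setFont(font);
}
```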
Unfortunately, I haven't been able to find a solution for the general application fonts, nor for the gradient degradation, but at least with the fonts, I have something more to go on. My next step will be to start inspecting the fonts that Qt is using by default on some of the widgets and seeing what their attributes are.
I need to figure out whether porting an application to Mac OS X (not iOS) is feasible. I wrote some code for the Mac around 20 years ago, but what I'm looking at now is completely different and may require a complete rewrite, which I cannot afford. After googling for some time, I found a variety of APIs, which appear and get deprecated so often that I feel completely lost.
The application draws by copying small fragments of bitmaps to the window. This is accomplished with BitBlt() on Windows or XCopyArea() on X11. In both cases, the source is stored in video memory, so copying is really fast: 500K copies per second on a decent card, possibly more. On the Mac, there used to be a CopyBits() function which did the same, but it is now deprecated. I found CGContextDrawImage(), which looks like it's getting deprecated too, but it copies from user memory and can only copy the whole image (not fragments). Is there any way to accomplish bitmap copying at decent speed?
I see everything is 64-bit. I would want to keep it 32-bit for a number of reasons. 32-bit applications still seem to be supported, but with the fast pace of deprecation, Apple may stop supporting them at any time. Is this a correct assessment?
Software distribution: I cannot find any information on this. It looks like you need to be a member of the Apple Developer program to be able to install your software on users' computers. Is this true? In some other places, I have read that any software must undergo Apple approval. Is this correct?
Thank you for your help.
So much has changed in the past twenty years that it may indeed be quite difficult to port your app directly to modern OS X. You may be better served by taking the general design concept and application objectives and creating a fresh implementation using up-to-date software technology.
Your drawing system might be much easier to do with modern APIs, but the first step is deciding which framework to use. Invest some time in reading the documentation and watching the many videos available on the Developer website. A logical place to start is Getting Started with Graphics & Animation, but you may also wish to explore Metal Programming Guide and SpriteKit.
The notion of 64-bit vs 32-bit is irrelevant. All Mac computers run 64-bit code.
If you don't purchase a Developer program membership, you can still create an unsigned application with Xcode. It can be installed on another user's computer, but they'll need to specifically change the setting in System Preferences -> Security to "Allow apps downloaded from: Anywhere".
The WWDC videos are very useful in understanding the concepts and benefits of advancements in these frameworks made over the past few years.
After some investigation, it appears that OS X graphics is completely different from the others: the screen is regarded as a target for vector graphics, not bitmaps.
When the user changes the screen resolution, the resolution doesn't really change (as it would on Linux or Windows) and remains native; instead, the scale at which the vector graphics are rendered changes. Consequently, it's perfectly possible to set the screen "resolution" higher than the native one - you just see things rendered smaller.
When you take a screenshot, the system simply renders everything to an off-screen bitmap (which can be any size), so you can get nice smooth screenshots at any size.
Since everything is vector-based, applications that use bitmap graphics are at a huge disadvantage. It is very hard to get at native pixels without much overhead, and worse yet, an application that uses native pixels will behave strangely because it won't scale when the user changes the screen resolution. It will also have problems when screenshots are taken. Is it possible to make it work? I guess I won't find out until I fork over $2K for a MacBook Pro and try it.
32-bit apps seem to be supported, and I don't think there's an intent to drop that support.
As to code distribution, my Thawte Authenticode certificate is supposed to work on OS X as well, so I probably don't need to become a member of Apple Developer program to distribute software, but again there's no definitive answer to that until I try.
I'm currently porting a 3D C++ game from iOS to Android using NDK. The rendering is done with GLES2. When I finished rewriting all the platform specific stuff and ran the full game for the first time I noticed rendering bugs - sometimes only parts of the geometry would render, sometimes huge triangles would flicker across the screen, and so on and so on...
I tested it on a Galaxy Nexus running 4.1.2. glGetError() returned nothing. Also, the game ran beautifully on all iOS devices. I started suspecting a driver bug and after hunting for many hours I found out that using VAOs (GL_OES_vertex_array_object) caused the trouble. The same renderer worked fine without VAOs and produced rubbish with VAOs.
I found this bug report at Google Code. Also I saw the same report at IMG forums and a staff member confirmed that it's indeed a driver bug.
All this made me think - how do I handle cases of confirmed driver bugs? I see 2 options:
Not using VAOs on Android devices.
Blacklisting specific devices and driver revisions, and not using VAOs on these devices.
I don't like either option.
Option number 1 will punish all users who have a good driver. VAOs really boost performance and I think it's a really bad thing to ignore them because one device has a bug.
Option number 2 is pretty hard to do right. I can't test every Android device for broken drivers and I expect the list to constantly change, making it hard to keep up.
Any suggestions? Is there perhaps a way to detect such driver bugs at runtime without testing every device manually?
Bugs in OpenGL ES drivers on Android are a well-known thing, so it is entirely possible that you've hit a driver bug, especially if you are using advanced (not-so-well-tested) features like GL extensions.
In a large Android project we usually fight these issues using the following checklist:
Test and debug our own code thoroughly and check it against the OpenGL specifications to make sure we are not misusing the API.
Google for the problem (!!!)
Contact the chipset vendor (usually they have a form on their website for developers to submit bugs, but once you have successfully submitted 2-3 real bugs you will know the direct emails of people who can help) and show them your code. Sometimes they find bugs in the driver; sometimes they find API misuse...
If the feature doesn't work on a couple of devices, just create a workaround or fall back to a traditional rendering path (see the sketch after this list).
If the feature is not supported by the majority of top-notch devices, just don't use it; you will be able to add it later once the market is ready for it.
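As a concrete illustration of the workaround/fallback idea, here is a minimal sketch of gating VAO usage at runtime. The renderer string in the known-bad list is a placeholder for whatever device/driver combinations you have actually confirmed as broken:

```cpp
#include <GLES2/gl2.h>
#include <cstring>

// Sketch: only use VAOs when the extension is advertised AND the renderer
// is not on a small known-bad list. The list entry below is a placeholder.
bool useVertexArrayObjects()
{
    const char *ext      = reinterpret_cast<const char *>(glGetString(GL_EXTENSIONS));
    const char *renderer = reinterpret_cast<const char *>(glGetString(GL_RENDERER));

    if (!ext || !std::strstr(ext, "GL_OES_vertex_array_object"))
        return false;                        // not supported at all -> classic VBO path

    // Known-bad renderers (placeholder entries): devices where VAOs are
    // confirmed broken fall back to plain VBO binding every draw call.
    static const char *badRenderers[] = { "PowerVR SGX 540" };
    for (const char *bad : badRenderers) {
        if (renderer && std::strstr(renderer, bad))
            return false;
    }
    return true;
}
```

The nice property of this shape is that the blacklist stays tiny (only confirmed-broken entries) while every other device keeps the VAO speedup, and you can refine the check with GL_VERSION once a fixed driver ships for a listed device.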
I want my program to be able to launch any Windows game, and while the user is playing it, intermittently display some text or pictures in some part of the game window. The game may be in windowed or full-screen mode. From what I have been able to figure out from online resources, this could be done using a graphics library that supports overlays and using Windows Hooks to keep track of the target application's window. In this context I have some questions.
Will the overlays affect the game's performance?
How will hooking the application affect performance?
Is there any other way one could achieve this? For example, how do you think PIX, the DirectX debugging and analysis tool, works?
Fraps is the archetypal example of doing this sort of thing to a fullscreen DirectX application from a third-party app. It works by hooking some system calls and inserting itself into the call chain between the app and DirectX. There is some performance hit, but in general it's minimal.
This page seems to have some details and sample code on how to hook the app in this way.
If I recall correctly from other forum discussions (can't find the link at the moment; search for things like "how does fraps work", it's a popular question), Fraps hooks a few things to force the app to load its DLL, then hooks Present() calls and executes a device->Clear() call before calling the real Present(), with a list of small rectangles to set to a different color, which can spell out the FPS number it displays. This has a minimal performance impact and is widely compatible with whatever rendering the app is doing. Overlaying a bitmap would be more complicated, since it wouldn't be as easy to do at Present time. Perhaps if you could hook EndScene you could do more, but you would have to be careful not to change the device state.
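As a rough illustration of that Present-time idea (not Fraps' actual code; the hook installation itself, via vtable patching or something like Detours, is assumed to already be in place):

```cpp
#include <d3d9.h>

// Hypothetical replacement for IDirect3DDevice9::Present, installed by
// whatever hooking mechanism you use (vtable patch, Detours, ...).
typedef HRESULT (WINAPI *PresentFn)(IDirect3DDevice9*, const RECT*, const RECT*,
                                    HWND, const RGNDATA*);
static PresentFn g_realPresent = nullptr;   // saved pointer to the original

HRESULT WINAPI HookedPresent(IDirect3DDevice9 *device, const RECT *src,
                             const RECT *dst, HWND wnd, const RGNDATA *dirty)
{
    // Paint a few small rectangles into the back buffer just before it is
    // presented -- enough to spell out digits, the way an FPS counter does.
    D3DRECT box = { 10, 10, 60, 30 };       // placeholder overlay rectangle
    device->Clear(1, &box, D3DCLEAR_TARGET, D3DCOLOR_XRGB(255, 255, 0), 1.0f, 0);

    return g_realPresent(device, src, dst, wnd, dirty);
}
```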
PIX has privileged access to the DirectX driver, so I wouldn't expect to be able to use that as a model to emulate.
If the target app is running in windowed mode, hooking DirectX still works, but you could also just use GDI instead.
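For the windowed-mode GDI route, a bare-bones sketch might look like this; the window-title lookup is a placeholder, and the text would need redrawing whenever the game paints over it:

```cpp
#include <windows.h>

// Draw a line of text directly onto another process's window using GDI.
// Only sensible for windowed targets; the game paints over it on its next
// frame, so this has to be repeated (e.g. on a timer).
void DrawOverlayText(const wchar_t *windowTitle, const wchar_t *text)
{
    HWND target = FindWindowW(nullptr, windowTitle);   // placeholder lookup
    if (!target) return;

    HDC dc = GetDC(target);
    if (!dc) return;

    SetBkMode(dc, TRANSPARENT);
    SetTextColor(dc, RGB(255, 255, 0));
    TextOutW(dc, 10, 10, text, lstrlenW(text));
    ReleaseDC(target, dc);
}
```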
Edit: I think this is the link I was originally thinking of.
When writing DirectX applications, obviously it's desirable to support the user suspending the application via Alt-Tab in a way that's fast and error-free. What is the best set of practices for ensuring this? Things that need to be addressed include:
The best methods of detecting when your application has been alt-tabbed out of and when it has been returned to.
What DirectX resources are lost when the user alt-tabs, and the best ways to cope with this.
Major things to do and things to avoid in application architecture for purposes of alt-tab support.
Any significant differences between major DirectX versions as they apply to the above.
Interesting tricks and gotchas are also good to hear about.
I will assume you are using C++ for the purposes of my answers, but if you can afford to use C#, XNA (http://creators.xna.com/) is an excellent game platform that handles all of these issues for you.
1]
This article is helpful for handling Windows events in the window procedure to detect when a window loses or gains focus; you could handle this on your main window: http://www.functionx.com/win32/Lesson05.htm. Also, check out the WM_ACTIVATEAPP message here: http://msdn.microsoft.com/en-us/library/ms632614(VS.85).aspx
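A minimal sketch of catching that message in the window procedure (g_appActive is an assumed flag that the rest of the game would read):

```cpp
#include <windows.h>

static bool g_appActive = true;   // assumed flag read by the render loop

LRESULT CALLBACK WndProc(HWND hWnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg)
    {
    case WM_ACTIVATEAPP:
        // wParam is TRUE when the application is being activated,
        // FALSE when it is being deactivated (e.g. Alt-Tab away).
        g_appActive = (wParam != FALSE);
        return 0;
    }
    return DefWindowProc(hWnd, msg, wParam, lParam);
}
```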
2]
The graphics device is lost when the application loses focus from full screen mode. Microsoft offers an article on how to handle this: http://msdn.microsoft.com/en-us/library/bb174717(VS.85).aspx This article also has a lost device tutorial: http://www.codesampler.com/dx9src/dx9src_6.htm
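The usual Direct3D 9 pattern boils down to something like this sketch (the release/recreate helpers are assumed placeholders for your own resource management):

```cpp
#include <d3d9.h>

// Assumed helpers that release and recreate everything allocated in
// D3DPOOL_DEFAULT (render targets, dynamic buffers, and so on).
void ReleaseDefaultPoolResources();
void RecreateDefaultPoolResources();

// Typical Direct3D 9 lost-device handling: skip rendering while the device
// is lost, and Reset() it once the runtime reports it can be restored.
bool RestoreDeviceIfNeeded(IDirect3DDevice9 *device, D3DPRESENT_PARAMETERS *pp)
{
    HRESULT hr = device->TestCooperativeLevel();
    if (hr == D3D_OK)
        return true;                         // device is fine, render normally
    if (hr == D3DERR_DEVICELOST)
        return false;                        // still lost: skip this frame
    if (hr == D3DERR_DEVICENOTRESET)
    {
        ReleaseDefaultPoolResources();       // must free default-pool resources first
        if (SUCCEEDED(device->Reset(pp)))
        {
            RecreateDefaultPoolResources();  // rebuild them on the restored device
            return true;
        }
    }
    return false;                            // some other error, or Reset failed
}
```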
DirectInput can also have a device-lost error state; here is a link about that: http://www.toymaker.info/Games/html/directinput.html
DirectSound can also have a device-lost error state; this article has code that handles it: http://www.eastcoastgames.com/directx/chapter2.html
3]
I would make sure never to disable Alt-Tab. You probably want minimal CPU load while the application is not active, since the user most likely Alt-Tabbed because they want to do something else; you could pause the application completely or reduce the number of frames rendered per second. If the application is minimized, you of course don't need to render anything at all. After thinking about a networked game, my best suggestion is that you should still reduce the frames rendered per second as well as the number of network packets handled, possibly even throwing away many of the incoming packets until the game is reactivated.
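A minimal sketch of that kind of inactive-aware loop (g_appActive and UpdateAndRender are assumed to be defined elsewhere in the application):

```cpp
#include <windows.h>

extern bool g_appActive;   // assumed flag set from WM_ACTIVATEAPP
void UpdateAndRender();    // assumed per-frame update/draw function

void RunMessageLoop()
{
    MSG msg = {};
    for (;;)
    {
        while (PeekMessage(&msg, nullptr, 0, 0, PM_REMOVE))
        {
            if (msg.message == WM_QUIT) return;
            TranslateMessage(&msg);
            DispatchMessage(&msg);
        }
        if (g_appActive)
        {
            UpdateAndRender();   // full frame rate while in the foreground
        }
        else
        {
            Sleep(100);          // throttle to a trickle while Alt-Tabbed away
            UpdateAndRender();   // or skip rendering entirely when minimized
        }
    }
}
```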
4]
Honestly I would just stick to DirectX 9.0c (or DirectX 10 if you want to limit your target operating system to Vista and newer) if at all possible :)
Finally, the DirectX sdk has numerous tutorials and samples: http://www.microsoft.com/downloads/details.aspx?FamilyID=24a541d6-0486-4453-8641-1eee9e21b282&displaylang=en
We solved it by not using a fullscreen DirectX device at all - instead we used a full-screen window with the top-most flag to make it hide the task bar. If you Alt-Tab out of that, you can remove the flag and minimize the window. The texture resources are kept alive by the window.
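A minimal sketch of that borderless top-most window approach (assuming a plain Win32 window; the style and flag choices are illustrative, not our exact code):

```cpp
#include <windows.h>

// "Fake fullscreen": a WS_POPUP window sized to the whole screen, marked
// top-most so it covers the task bar. On Alt-Tab you can drop HWND_TOPMOST
// and minimize instead of losing the device.
void EnterFakeFullscreen(HWND hWnd)
{
    int w = GetSystemMetrics(SM_CXSCREEN);
    int h = GetSystemMetrics(SM_CYSCREEN);
    SetWindowLongPtr(hWnd, GWL_STYLE, WS_POPUP | WS_VISIBLE);
    SetWindowPos(hWnd, HWND_TOPMOST, 0, 0, w, h, SWP_FRAMECHANGED);
}

void LeaveFakeFullscreen(HWND hWnd)
{
    SetWindowPos(hWnd, HWND_NOTOPMOST, 0, 0, 0, 0,
                 SWP_NOMOVE | SWP_NOSIZE | SWP_FRAMECHANGED);
    ShowWindow(hWnd, SW_MINIMIZE);
}
```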
However, this approach doesn't handle the device lost event happening due to 'lock screen', Ctrl+Alt+Delete, remote desktop connections, user switching or similar. But those don't need to be handled extremely fast or efficiently (at least that was the case in our application)
All serious D3D apps should be able to handle lost devices as this is something that can happen for a variety of reasons.
In DX10 under Vista there is a new "Timeout Detection and Recovery" feature that, in my experience, makes it common for the graphics device to be reset, which causes a lost device for your app. This seems to be improving as drivers mature, but you need to handle it anyway.
In DX8 and 9 (and 10?) if you create your resources (vertex and index buffers and textures mainly) using D3DPOOL_MANAGED they will persist across lost devices and will not need reloading. This is because they are stored in system memory and the DX runtime copies to video memory automatically. However there is a performance cost due to the copying and this is not recommended for rapidly changing vertex data. Of course you would profile first to determine if there is a speed issue :-)
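For illustration, creating a vertex buffer in the managed pool looks something like this sketch (the size, usage, and FVF values are placeholders):

```cpp
#include <d3d9.h>

// Sketch: create a vertex buffer in D3DPOOL_MANAGED so it survives a device
// reset without being recreated by the application.
IDirect3DVertexBuffer9 *CreateManagedVB(IDirect3DDevice9 *device, UINT sizeBytes)
{
    IDirect3DVertexBuffer9 *vb = nullptr;
    HRESULT hr = device->CreateVertexBuffer(
        sizeBytes,
        D3DUSAGE_WRITEONLY,        // note: D3DUSAGE_DYNAMIC can't be combined
                                   // with the managed pool
        D3DFVF_XYZ | D3DFVF_TEX1,  // placeholder vertex format
        D3DPOOL_MANAGED,           // persists across lost devices
        &vb,
        nullptr);
    return SUCCEEDED(hr) ? vb : nullptr;
}
```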