Just to save some time: perhaps somebody has already tried this or has seen related information somewhere.
I'm asking about DirectDraw and not about DirectX because I need to support Win2000 and up, and I cannot install DirectX on the target PC.
I tried capturing via Direct3D and compared it with the GDI way. The results are ambiguous. On my Win7 x64 machine with a fairly good video card, the D3D way shows roughly a 2x performance boost. On my WinXP 32-bit laptop with a discrete but old video card, D3D takes far, far longer. On another WinXP machine (I don't know any of its hardware details) D3D is almost 2 times slower.
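For reference, a minimal sketch of the kind of GDI capture being compared here, assuming a single monitor and omitting error handling (the Direct3D path would instead use something like IDirect3DDevice9::GetFrontBufferData):

```cpp
#include <windows.h>

// Capture the primary screen with GDI: BitBlt from the desktop DC into a
// compatible memory bitmap. The caller owns (and must DeleteObject) the bitmap.
HBITMAP CaptureScreenGdi()
{
    int w = GetSystemMetrics(SM_CXSCREEN);
    int h = GetSystemMetrics(SM_CYSCREEN);

    HDC screenDC = GetDC(NULL);                   // DC covering the screen
    HDC memDC    = CreateCompatibleDC(screenDC);
    HBITMAP bmp  = CreateCompatibleBitmap(screenDC, w, h);
    HGDIOBJ old  = SelectObject(memDC, bmp);

    BitBlt(memDC, 0, 0, w, h, screenDC, 0, 0, SRCCOPY);   // the actual capture

    SelectObject(memDC, old);
    DeleteDC(memDC);
    ReleaseDC(NULL, screenDC);
    return bmp;
}
```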
Related
I have an application that plots quarter-degree blocks on a map using TImage components stacked on each other. I then add records by drawing them on a separate layer.
The problem I have is that FireMonkey (or Windows) scrambles the graphics, but only on some computers, and I think all the affected computers are laptops. See the following links for screenshots:
The correct image should look like this:
On laptops this scrambling may take 3 repaints of the layers to show up, but sometimes (with exactly the same code) it happens after one or two. While it is inconsistent in exactly how many repaints it takes, it is guaranteed to happen within 3 paints.
So I have come to the conclusion that it must be a graphics driver issue. I have an NVidia GeForce 950M in my laptop (Asus NJ551 with Windows 10), but if I understand the code correctly I am using the Windows Direct2D acceleration, so the NVidia drivers shouldn't affect things?
I set the following flag by default: GlobalUseDX10Software := true; // Use DirectX to generate graphics. But this does not seem to make any difference, as it still scrambles even when set to false.
I would prefer the Windows acceleration, as my users may not all have a graphics card installed. A friend using an HP laptop (not sure of the model, but running Windows 8) does not experience the issue, yet another friend with a brand new HP laptop (low spec, but with Windows 10) is also experiencing the issue.
Can someone please help out here? I am out of ideas, and I'm not even sure what to Google. Is it Windows 10, is it the graphics driver, etc.? Is there a way I can force my laptop to use the graphics card for testing? While this will not help other users without proper graphics cards, it may help isolate the issue.
Any advice is appreciated!
From the EDN forum, I got a number of other graphics-related global variables to set. The one that sorted out the issue is:
GlobalUseDXSoftware := True;
It now makes sense, as the issue started happening after moving from XE5 to XE8, and the GlobalUseDX10Software flag is now deprecated.
I own a laptop with nVidia Optimus.
I tried everything to get rid of it or to make it work, and it refuses to work.
One problem in particular is that when the WinAPI is asked for information about the hardware (for example, queries for capabilities, device ID, device name, and so on), apps always get the information for the integrated Intel card, which is terrible and doesn't exactly match the nVidia card's capabilities either; this makes some games and apps misbehave or crash.
I was wondering: can I somehow override those WinAPI calls and make them lie? For example, when the app asks for the GPU device ID, I tell it that it is whatever arbitrary device I want.
Bonus question: can this also be applied to ASM instructions, like CPUID and RDTSC? Many older games rely on those... Also, code produced by the Intel Compiler (infamously tuned to work well only on the P4) tends to treat newer CPUs (Core i7 of any generation) as AMD and picks poor code paths.
EDIT: Some people are misunderstanding what I want to code.
I want to make a launcher app to work around a common nVidia Optimus bug, like those apps that make games borderless or make them use a different, more compatible version of DirectX than their original one.
nVidia Optimus usually works (it can be done differently) by the machine having an integrated Intel chip and an nVidia discrete GPU. The computer treats the discrete GPU as a sort of video co-processor: the actual video chip is always the Intel one, but when Optimus kicks in, the hardware-accelerated rendering is handed off to the discrete GPU, which, after finishing its work, copies the results into the Intel chip's framebuffer, which finally shows them on screen.
The bug in this implementation is that it never considered what happens when an app queries the video capabilities: because the video chip is always the Intel one, any query gets a reply describing the Intel chip, even if the chip that will actually receive this app's draw calls is the nVidia one.
As a result, any DX or OpenGL extensions mismatched between the GPUs can cause bugs or crashes; programs may assume wrong things about the available computing power and memory, may have timing problems, and so on.
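To illustrate the kind of query involved, here is a minimal Direct3D 9 sketch (my own illustration, not from the question) that enumerates adapters and prints what each reports about itself; on an Optimus machine this is typically only the integrated Intel adapter:

```cpp
#include <d3d9.h>
#include <cstdio>
#pragma comment(lib, "d3d9.lib")

// Enumerate the Direct3D 9 adapters the system exposes and print their
// identifiers; this is the information an Optimus setup misreports.
int main()
{
    IDirect3D9* d3d = Direct3DCreate9(D3D_SDK_VERSION);
    if (!d3d) return 1;

    UINT count = d3d->GetAdapterCount();
    for (UINT i = 0; i < count; ++i) {
        D3DADAPTER_IDENTIFIER9 id = {};
        if (SUCCEEDED(d3d->GetAdapterIdentifier(i, 0, &id)))
            printf("adapter %u: %s (vendor 0x%04X, device 0x%04X)\n",
                   i, id.Description, (unsigned)id.VendorId, (unsigned)id.DeviceId);
    }
    d3d->Release();
    return 0;
}
```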
I've been fighting with this tech for years and have found no practical solution. This is my "final stand" idea: make an "Optimus Launcher" app that lets you launch any game with Optimus and have it work, hopefully without ugly hacks like disabling Secure Boot (I disabled Secure Boot to play Age of Decadence; on machines with Optimus, AoD and other Torque3D games don't work if Secure Boot is enabled, and I have no idea why).
You can hook WinAPI calls and make them do whatever you like, but it is not something that is implemented easily. Furthermore, I suspect that some antivirus programs will get very nervous if your application does stuff like that...
Take a look at this article, which is a good start: API hooking revealed
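To make the idea concrete, here is a hedged sketch of one common user-mode technique, patching the import address table (IAT) of the target module. It uses GetSystemMetrics as a stand-in target; device-query APIs would be hooked the same way. The Fake/Real names are illustrative and error handling is minimal:

```cpp
#include <windows.h>
#include <cstdio>
#include <cstring>

// Pointer to the genuine function, resolved before patching so the fake can
// forward to it without recursing into itself.
static int (WINAPI* RealGetSystemMetrics)(int) = nullptr;

// Replacement that lies about one value and forwards everything else.
static int WINAPI FakeGetSystemMetrics(int index)
{
    if (index == SM_CXSCREEN)
        return 1234;                       // pretend the screen is 1234 px wide
    return RealGetSystemMetrics(index);
}

// Redirect `module`'s imports of dllName!funcName to `replacement`.
static bool PatchIat(HMODULE module, const char* dllName,
                     const char* funcName, void* replacement)
{
    BYTE* base = reinterpret_cast<BYTE*>(module);
    auto* dos  = reinterpret_cast<IMAGE_DOS_HEADER*>(base);
    auto* nt   = reinterpret_cast<IMAGE_NT_HEADERS*>(base + dos->e_lfanew);
    IMAGE_DATA_DIRECTORY dir =
        nt->OptionalHeader.DataDirectory[IMAGE_DIRECTORY_ENTRY_IMPORT];
    auto* desc = reinterpret_cast<IMAGE_IMPORT_DESCRIPTOR*>(base + dir.VirtualAddress);

    void* target = reinterpret_cast<void*>(
        GetProcAddress(GetModuleHandleA(dllName), funcName));
    if (!target) return false;

    for (; desc->Name; ++desc) {
        if (_stricmp(reinterpret_cast<char*>(base + desc->Name), dllName) != 0)
            continue;
        auto* thunk = reinterpret_cast<IMAGE_THUNK_DATA*>(base + desc->FirstThunk);
        for (; thunk->u1.Function; ++thunk) {
            void** slot = reinterpret_cast<void**>(&thunk->u1.Function);
            if (*slot != target) continue;
            DWORD oldProtect;
            VirtualProtect(slot, sizeof(void*), PAGE_READWRITE, &oldProtect);
            *slot = replacement;                           // the actual hook
            VirtualProtect(slot, sizeof(void*), oldProtect, &oldProtect);
            return true;
        }
    }
    return false;
}

int main()
{
    RealGetSystemMetrics = reinterpret_cast<int (WINAPI*)(int)>(
        GetProcAddress(GetModuleHandleA("user32.dll"), "GetSystemMetrics"));

    printf("before: %d\n", GetSystemMetrics(SM_CXSCREEN));
    PatchIat(GetModuleHandleA(nullptr), "user32.dll", "GetSystemMetrics",
             reinterpret_cast<void*>(&FakeGetSystemMetrics));
    printf("after:  %d\n", GetSystemMetrics(SM_CXSCREEN));
    return 0;
}
```

For hooking another process (a launcher scenario), the same patching code would typically live in a DLL injected into the game's process rather than in the launcher itself.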
If one were to code a game for most versions of Windows, which API should be used?
I know DirectDraw works from NT4 and up (although DirectDraw is emulated on NT4 with GDI). However, I am told DirectDraw is deprecated in newer versions of Windows?
I could revert to just GDI, but then it is hard to completely eliminate flicker and tearing, since there is no double buffering with flipping between buffers.
Should I go for Direct3D or DirectDraw? Or is there some way of completely eliminating flicker in GDI?
If Direct3D is the answer, which version of it is supported on most platforms?
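As background for the GDI option above: flicker can usually be tamed by drawing the whole frame into an off-screen memory DC and copying it to the window in a single BitBlt (and by returning 1 from WM_ERASEBKGND so the background isn't cleared first). A minimal sketch, with illustrative names and no error handling:

```cpp
#include <windows.h>

// Render one frame "double buffered" in plain GDI: draw everything into a
// memory DC, then copy it to the window in one BitBlt. Call from WM_PAINT.
void PaintDoubleBuffered(HWND hwnd)
{
    PAINTSTRUCT ps;
    HDC screen = BeginPaint(hwnd, &ps);

    RECT rc;
    GetClientRect(hwnd, &rc);

    HDC back    = CreateCompatibleDC(screen);
    HBITMAP bmp = CreateCompatibleBitmap(screen, rc.right, rc.bottom);
    HGDIOBJ old = SelectObject(back, bmp);

    // ... draw the entire frame into `back` here ...
    FillRect(back, &rc, (HBRUSH)GetStockObject(WHITE_BRUSH));

    // One blit to the visible window; no partially drawn, flickering frames.
    BitBlt(screen, 0, 0, rc.right, rc.bottom, back, 0, 0, SRCCOPY);

    SelectObject(back, old);
    DeleteObject(bmp);
    DeleteDC(back);
    EndPaint(hwnd, &ps);
}
```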
Unless you are sure you will never want to port your game to any non-Windows platform, I would recommend OpenGL. It should work on all versions from 2000 upwards, and some lucky NT4 or Win98 users may be able to run it (but don't advertise those versions as "supported.") Hardware acceleration won't always work, but the impact on performance won't be noticeable for a simple 2D game. And you will be able to port it reasonably cheaply to other platforms (e.g. iPhone) if necessary.
I'm developing a very simple application for the Mac OS X platform using Qt and OpenGL (and QtOpenGL) so that going cross-platform is easier.
The application receives a variable number of video streams that have to be rendered to the screen. Each frame of these video streams is used as a texture mapped onto a rectangle in 3D space (very similar to a video wall).
Apart from things such as receiving, locking and uploading video data, synchronizing threads, and so on, I consider it clear that it's a quite simple application.
The fact is that everything behaves OK when using the Cocoa-based Qt 4.7 binaries (the default ones) on a 10.5 Mac.
But my code has to run fine on all OS X versions from 10.4 (inclusive) upwards. So I tried the code on a 10.4 machine and it crashed right at startup. After a few hours of reading on the internet, I discovered that for a Qt application to target 10.4, the Carbon-based Qt has to be used. So I rebuilt the whole project with the new framework.
When the resulting binary is run, everything works well except for the fact that the application's frame rate drops to about 2 fps!! And it behaves the same on both machines (the 10.5 computer has noticeably better specs).
I've spent quite some time working on this but I have not reached a solution. Any suggestions?
More information about the application and things I've tried:
the code was not modified when recompiling for the Carbon-based Qt
only two videos (256x256 textures) are used, to make sure it's not a bandwidth problem (although I know it shouldn't be, because the first build worked)
the 2 video streams arrive over the (local) network
when a video frame arrives, a signal is emitted and the data is uploaded to an OpenGL texture with glTexSubImage2D (see the sketch after this list)
a timer triggers rendering (paintGL) about every 20 ms (~50 fps)
the render code uses the textures (updated or not) to draw the rectangles
rendering only when a video frame arrives won't work, because there are 2 (asynchronous) video streams; besides, more things have to be drawn on screen
only basic OpenGL commands are used (no PBOs, FBOs, VBOs, ...). The only potentially problematic thing could be the use of shaders (available only from Qt 4.7), but their code is trivial
I've made use of OpenGL Profiler and Instruments. Nothing special/strange was observed.
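For clarity, here is a rough sketch of the upload/render structure described above (class and member names are made up for illustration; the real code is more involved):

```cpp
#include <QGLWidget>
#include <QTimer>

// Rough sketch: one texture per stream, glTexSubImage2D on frame arrival,
// and a timer-driven repaint at ~50 fps, as described in the question.
// Names (VideoQuadWidget, onFrameReady, frameData) are illustrative only.
class VideoQuadWidget : public QGLWidget
{
public:
    explicit VideoQuadWidget(QWidget* parent = 0) : QGLWidget(parent), tex_(0)
    {
        QTimer* timer = new QTimer(this);
        connect(timer, SIGNAL(timeout()), this, SLOT(updateGL()));
        timer->start(20);                      // ~50 fps, as in the question
    }

    // Called when the network code has a new 256x256 RGBA frame ready.
    void onFrameReady(const unsigned char* frameData)
    {
        makeCurrent();                         // context must be current to upload
        glBindTexture(GL_TEXTURE_2D, tex_);
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 256, 256,
                        GL_RGBA, GL_UNSIGNED_BYTE, frameData);
    }

protected:
    void initializeGL()
    {
        glGenTextures(1, &tex_);
        glBindTexture(GL_TEXTURE_2D, tex_);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        // Allocate storage once; per-frame updates use glTexSubImage2D only.
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 256, 256, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, 0);
    }

    void paintGL()
    {
        glClear(GL_COLOR_BUFFER_BIT);
        glEnable(GL_TEXTURE_2D);
        glBindTexture(GL_TEXTURE_2D, tex_);
        glBegin(GL_QUADS);                     // one textured rectangle
        glTexCoord2f(0, 0); glVertex2f(-1, -1);
        glTexCoord2f(1, 0); glVertex2f( 1, -1);
        glTexCoord2f(1, 1); glVertex2f( 1,  1);
        glTexCoord2f(0, 1); glVertex2f(-1,  1);
        glEnd();
    }

private:
    GLuint tex_;
};
```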
Some things I suspect (conclusions):
it's clear it's not a hardware issue; the same computer behaves differently
it feels like a threading/locking problem, but why?
Carbon is 32-bit. The 10.5 application was 64-bit. It's not possible to build 64-bit with Carbon.
to rule out 32-bit as the possible cause, I also rebuilt the first project as 32-bit; it worked practically the same
I've read something about Carbon having problems (more than usual) with context switching
maybe the OpenGL implementation is multithreaded and the code is not (or the opposite)? That could cause a lot of stalls
maybe Carbon handles events differently from Cocoa (I mean signal/event dispatching, the main loop, ...)?
OK, this is (sorry for the long write-up) my actual headache. Any suggestion or idea would be very much appreciated.
Thanks in advance.
May I ask a diagnostic question? Can you ensure that it's not being passed to the software renderer?
I remember that when 10.4 was released, there was some confusion about Quartz Extreme, Quartz and Carbon, with some of it disabled, and hardware renderers disabled by default in some configurations, which required configuration by the end user to get things working correctly. I'm not sure whether this information is pertinent, because you say that, having targeted 10.4, the problem shows up on both the 10.4 and the 10.5 machine, yes?
It's possible (though admittedly I'm grasping at straws here) that even on 10.5 Carbon doesn't use the hardware renderers by default. I'd like to think that OS X prefers hardware renderers to software renderers in all scenarios, but it may be worth spending a little time looking into, given how thoroughly you're already looking into other options.
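One quick way to check, assuming a QGLWidget-style setup like the one sketched above: print the renderer strings from a place where the GL context is current (for example initializeGL). A software-renderer string such as "Apple Software Renderer" would mean the context is not hardware accelerated:

```cpp
#include <cstdio>
#ifdef __APPLE__
#include <OpenGL/gl.h>
#else
#include <GL/gl.h>
#endif

// Print which OpenGL implementation the current context is actually using.
// Call with a current GL context, e.g. from initializeGL().
void reportRenderer()
{
    printf("GL_VENDOR:   %s\n", glGetString(GL_VENDOR));
    printf("GL_RENDERER: %s\n", glGetString(GL_RENDERER));
    printf("GL_VERSION:  %s\n", glGetString(GL_VERSION));
}
```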
Good luck.
If you are using Qt, I guess your code would also work on Windows or Linux. Have you tried your application on those platforms?
That would quickly reveal whether the problem comes from Qt or from the Mac OS X version.
Has anyone out there created a version of GDI32.dll that takes advantage of the hardware acceleration available on the machine? Or a version of gdiplus.dll?
Starting with Windows Vista, GDI is no longer hardware accelerated (GDI+ was never hardware accelerated). Without Microsoft fixing GDI (and GDI+) to run well on modern machines, native applications (C++ MFC, Delphi, etc.) and managed WinForms applications will continue to run poorly forever.
While I could use Direct2D for business applications, I cannot control the fact that the development environment still creates controls, with decades of library support code, that assume the presence of GDI.
Application Compatibility: Graphical Device Interface (GDI):

GDI primitives such as LineTo and Rectangle are now rendered in software rather than video hardware, which greatly simplify the display drivers.

Windows And Video Memory:

In XP GDI is GPU accelerated to various degrees depending on how the OS is configured or the device driver (for details see Hooking Versus Punting). In Vista, GDI is not GPU accelerated.

Comparing Direct2D and GDI:

As a result, in Windows Vista, the GDI DDI display driver was changed to be only implemented by a Microsoft supplied driver, the Canonical Display Driver (CDD). GDI rendered to a system memory bitmap. Dirty regions were used to update the video memory texture which the window manager uses to composite the desktop.
It seems that Vista was a special case in the history of GDI performance.
Both articles below show that the future for GDI looks bright again.
http://msdn.microsoft.com/en-us/library/ff729480%28VS.85%29.aspx
GDI is hardware accelerated on Windows XP, and accelerated on Windows 7 when the Desktop Window Manager is running and a WDDM 1.1 driver is in use. Direct2D is hardware accelerated on almost any WDDM driver and regardless of whether DWM is in use. On Vista, GDI will always render on the CPU.
http://blogs.msdn.com/b/e7/archive/2009/04/25/engineering-windows-7-for-graphics-performance.aspx
Based on real-world application statistics, ... we worked with our graphics IHV partners to provide support in their drivers to accelerate the most commonly used GDI operations.
Well, yes, GDI is the "it works anywhere, anytime" API for rendering graphics. It puts very low demands on the video driver. Everybody got that right a long time ago. Which took a while; I have a distinct memory of an ATI Mach video card that gave me no end of trouble. It stopped me from buying ATI products for quite a while.
Everybody got DirectX right somewhat more recently, too. It is taken advantage of in the WPF rendering model, which completely relies on DirectX to get the job done. Milcore is the name of the shim. You won't get it until you buy into the WPF programming model.
What do you mean by hardware acceleration?
I mean, GDI doesn't do a lot other than raster blits, but those were hardware accelerated. And, given that Vista and Windows 7 aren't terribly slower with desktop apps, they presumably still are.
GDI still gets the video drivers to do all the heavy lifting, so if GDI isn't hardware accelerated, then it's the driver vendors' fault, not GDI's.