Why does Direct3D work only on Windows?

What is Direct3D? It is an API, isn't it? Is it implemented by Windows or by the graphics cards?
If the graphics cards implement the Direct3D API, why can't other operating systems use the Direct3D features of the graphics card?
If Direct3D is implemented by Windows, it has to reach the graphics card through something else, such as OpenGL or OpenCL. And if Direct3D calls do not go directly to the graphics card, it should be slower because of the intermediate calls.
Please help me understand what Direct3D is.

What is Direct3D? It is an API, isn't it? Is it implemented by Windows or by the graphics cards?
Yes, Direct3D is an API. It is implemented (mostly) by Windows itself. However, Windows offloads a considerable part of the actual work to the graphics card's drivers and ultimately to the card itself, so one can also say that the graphics card "implements" D3D.
If the graphics cards implement the Direct3D API, why can't other operating systems use the Direct3D features of the graphics card?
They can, and they do, but only a relatively small (though often critical) part of D3D functionality is implemented directly by the card's hardware, so a lot of additional work in software is required to implement D3D.
If Direct3D is implemented by Windows, it has to reach the graphics card through something else, such as OpenGL or OpenCL.
No, that's a misunderstanding. OpenGL and OpenCL are also APIs, and they too are only partially implemented by the graphics hardware (just like D3D). The graphics hardware usually has a proprietary "native" interface, which is what the drivers (both the D3D and the OpenGL drivers) use.
And if Direct3D calls do not go directly to the graphics card, it should be slower because of the intermediate calls.
There are not necessarily any "intermediate calls": the D3D driver uses the card's native interface, as explained above.
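To make the layering concrete, here is a hedged sketch using the Direct3D 11 API (my choice of version, purely for illustration): the application calls into the Windows runtime, and the driver-type argument decides whether the work goes to the vendor's driver or to Microsoft's WARP software rasterizer.

    // Minimal sketch (assumes the Direct3D 11 SDK headers; link with d3d11.lib).
    // The application talks to the D3D runtime; with D3D_DRIVER_TYPE_HARDWARE the
    // runtime forwards work to the GPU vendor's driver, with D3D_DRIVER_TYPE_WARP
    // it uses Microsoft's software rasterizer instead.
    #include <d3d11.h>
    #include <wrl/client.h>
    #include <cstdio>

    using Microsoft::WRL::ComPtr;

    int main() {
        ComPtr<ID3D11Device> device;
        ComPtr<ID3D11DeviceContext> context;
        D3D_FEATURE_LEVEL obtained = {};

        // Ask for a hardware device first; fall back to WARP (software) if that fails.
        HRESULT hr = D3D11CreateDevice(
            nullptr,                      // default adapter
            D3D_DRIVER_TYPE_HARDWARE,     // use the GPU vendor's driver
            nullptr, 0, nullptr, 0,
            D3D11_SDK_VERSION,
            &device, &obtained, &context);

        if (FAILED(hr)) {
            hr = D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_WARP, nullptr, 0,
                                   nullptr, 0, D3D11_SDK_VERSION,
                                   &device, &obtained, &context);
        }

        std::printf("Device created: %s, feature level 0x%x\n",
                    SUCCEEDED(hr) ? "yes" : "no", obtained);
        return 0;
    }

Either way the call goes through the same runtime; only the back end behind it changes.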

Direct3D is a graphics API created by Microsoft. It's similar in function to OpenGL, a competing 3D graphics API created as an open standard. D3D doesn't need OpenGL to function.
Card manufacturers decide which APIs they want to implement; nearly all include DirectX (which includes Direct3D), and often OpenGL as well.
There is some indirection, since calls don't go straight to the card: the D3D runtime in turn calls the driver, but this overhead is typically insignificant.

Direct3D is an API developed by Microsoft to help developers render 3D graphics. OpenGL and Direct3D are two separate APIs, but both must interface with the video card through drivers developed by the companies that manufacture the video cards. Both APIs must go through the driver to access the video card, and their speed depends on their design and on their implementation in those drivers.
OpenCL is something different: it's designed to help developers write programs which perform general-purpose computing on the GPU (not just graphics). OpenCL is comparable to CUDA, but the latter is only supported on NVIDIA cards. Using CUDA instead of OpenCL may have some advantages, depending on your target system, since NVIDIA can make new features available in the CUDA API before they are accepted into the OpenCL API. However, even OpenCL and CUDA must interface with the driver in order to get anything done on the GPU.
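As a small illustration of that last point, here is a hedged sketch using the standard OpenCL host API (my own example; assumes the Khronos headers and an ICD loader, e.g. OpenCL.lib on Windows): even this "portable" API resolves to whatever vendor drivers (ICDs) are installed, which is all the program ends up enumerating.

    // Lists the OpenCL platforms (i.e. the installed vendor drivers) and their GPUs.
    #define CL_TARGET_OPENCL_VERSION 120
    #include <CL/cl.h>
    #include <cstdio>
    #include <vector>

    int main() {
        cl_uint numPlatforms = 0;
        clGetPlatformIDs(0, nullptr, &numPlatforms);           // how many ICDs are installed?
        std::vector<cl_platform_id> platforms(numPlatforms);
        clGetPlatformIDs(numPlatforms, platforms.data(), nullptr);

        for (cl_platform_id p : platforms) {
            char name[256] = {};
            clGetPlatformInfo(p, CL_PLATFORM_NAME, sizeof(name), name, nullptr);

            cl_uint numDevices = 0;
            clGetDeviceIDs(p, CL_DEVICE_TYPE_GPU, 0, nullptr, &numDevices);
            std::printf("Platform: %s, GPU devices: %u\n", name, numDevices);
        }
        return 0;
    }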
As you already know, Direct3D only works on Microsoft Windows (and, for the most part, on Wine), but its structure as an API is vastly different from that of OpenGL. Direct3D makes use of structures and includes more OOP elements in its API, while OpenGL acts as a state machine, lacking any structures or OOP features. Direct3D can often progress a little faster than OpenGL in terms of the features it officially supports in its API, because it is not designed for maximum compatibility with a wide range of devices; OpenGL, on the other hand, has typically shown more inertia when adopting new features because of the inherent difficulty of adding them to its API (the Khronos Group is heavily influenced by the CAD industry, among many others, so it must cater to a wide range of needs). The time it took for the Khronos Group to finally adopt asynchronous API calls in OpenGL is a testament to this, and caused many people to lose faith in OpenGL.
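A hedged, compile-only illustration of that style difference (the function names are mine; it assumes the D3D11 headers and a GL loader such as GLEW): Direct3D hands you explicit objects with methods, while OpenGL binds an object into global state and subsequent calls implicitly affect whatever is currently bound.

    #include <d3d11.h>
    #include <GL/glew.h>

    // Direct3D: explicit COM interface pointers, explicit method calls.
    void SetFilteringD3D(ID3D11Device* device, ID3D11SamplerState** outSampler) {
        D3D11_SAMPLER_DESC desc = {};
        desc.Filter = D3D11_FILTER_MIN_MAG_MIP_LINEAR;
        desc.AddressU = desc.AddressV = desc.AddressW = D3D11_TEXTURE_ADDRESS_WRAP;
        device->CreateSamplerState(&desc, outSampler);   // returns a distinct object
    }

    // OpenGL: bind into global state, then mutate "the currently bound texture".
    void SetFilteringGL(GLuint texture) {
        glBindTexture(GL_TEXTURE_2D, texture);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    }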
However, OpenGL is cross-platform, endorsed by Apple, and works on every operating system for which an implementation exists. You can easily use it with many popular window toolkits (Qt, SDL, FreeGLUT, JogAmp, GTK, etc.) and have confidence that your application will compile on other operating systems if you wrote it properly. The OpenGL API, unlike Direct3D, is an open industry standard.
As far as performance goes, it's still debatable which one is faster: depending on how you structure your program or batch your calls, the answer could change. However, performance should not really be a consideration in choosing an API unless you have tested your application and have evidence that the choice of API is the cause of your bottleneck.

From Wikipedia:
Direct3D is a Microsoft DirectX API subsystem component. The aim of Direct3D is to abstract the communication between a graphics application and the graphics hardware drivers. It is presented as a thin abstract layer at a level comparable to GDI. Direct3D contains numerous features that GDI lacks.
Direct3D is an immediate-mode graphics API. It provides a low-level interface to every video card 3D function (transformations, clipping, lighting, materials, textures, depth buffering and so on). It also had a higher-level retained-mode component, which has now been officially discontinued.
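To illustrate the "immediate mode" point, here is a hedged Direct3D 11 sketch (my own function and parameter names; the device, swap chain and render-target view are assumed to have been created elsewhere): nothing is retained by the API, so every frame the application re-issues its state changes and draw calls itself.

    #include <d3d11.h>

    // One frame in an immediate-mode API: set state, draw, present. No scene graph
    // is kept by D3D; the application repeats this every frame.
    void RenderFrame(ID3D11DeviceContext* ctx,
                     IDXGISwapChain* swapChain,
                     ID3D11RenderTargetView* rtv,
                     UINT vertexCount) {
        const float clearColor[4] = {0.0f, 0.2f, 0.4f, 1.0f};
        ctx->OMSetRenderTargets(1, &rtv, nullptr);          // bind the back buffer
        ctx->ClearRenderTargetView(rtv, clearColor);        // clear it
        ctx->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
        ctx->Draw(vertexCount, 0);                          // issue the draw immediately
        swapChain->Present(1, 0);                           // show the frame (vsync)
    }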

Related

How to implement the OpenGL API in a custom ARM OS

I'm developing an operating system targeting the ARM architecture, more specifically the Raspberry Pi 4B. I've already managed to use the "Mailbox Property Interface" to draw some shapes on the screen. Out of curiosity, I would like to know whether it is possible to use OpenGL (or preferably OpenGL ES) to render more complex graphics in the future. If so, how do I do it?
You want to find out which driver the usual Raspberry Pi software uses, then adapt that driver to work on your OS. That driver is the code that interprets your OpenGL commands and translates them into the GPU's native language. Note that there is both a kernel part and a user-space part.
It's probably not worth trying to write your own. Graphics is a whole field of study; it's like writing another OS just for the graphics card.
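To give an idea of what a finished port needs to expose to applications, here is a hedged sketch assuming a working EGL + OpenGL ES 2 user-space stack (for example a ported Mesa driver; link with -lEGL -lGLESv2). The pbuffer surface is just a stand-in for whatever windowing your OS provides.

    #include <EGL/egl.h>
    #include <GLES2/gl2.h>
    #include <cstdio>

    int main() {
        // EGL sets up the context; GLES then issues the actual rendering commands.
        EGLDisplay dpy = eglGetDisplay(EGL_DEFAULT_DISPLAY);
        eglInitialize(dpy, nullptr, nullptr);

        const EGLint cfgAttribs[] = { EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
                                      EGL_SURFACE_TYPE, EGL_PBUFFER_BIT, EGL_NONE };
        EGLConfig cfg; EGLint numCfg = 0;
        eglChooseConfig(dpy, cfgAttribs, &cfg, 1, &numCfg);

        const EGLint ctxAttribs[] = { EGL_CONTEXT_CLIENT_VERSION, 2, EGL_NONE };
        EGLContext ctx = eglCreateContext(dpy, cfg, EGL_NO_CONTEXT, ctxAttribs);

        const EGLint pbAttribs[] = { EGL_WIDTH, 64, EGL_HEIGHT, 64, EGL_NONE };
        EGLSurface surf = eglCreatePbufferSurface(dpy, cfg, pbAttribs);
        eglMakeCurrent(dpy, surf, surf, ctx);

        // On a working port this should name the VideoCore driver, not a software renderer.
        std::printf("GL_RENDERER: %s\n", (const char*)glGetString(GL_RENDERER));
        return 0;
    }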

What do WindowsDX and WindowsGL mean, and what is the basic difference?

I want to install some libraries, namely MonoGame, on my Windows 10 computer. There appear to be WindowsDX and WindowsGL (or DesktopGL) variants. What is the basic difference between the WindowsDX and GL references? Thanks.
I assume WindowsGL refers to the Windows implementation of OpenGL. OpenGL is an open, industry standard library and interface for using/programming graphics hardware.
WindowsDX probably refers to Windows DirectX. DirectX is a suite of libraries for multimedia programming in general, including D3D for interfacing with graphics hardware, in particular.
The capabilities are comparable, but the interfaces differ.
Windows supports both, with OpenGL possibly being handled by translating commands into their DirectX equivalents at some level of the driver stack. I don't think any non-Windows platforms support DirectX.

Are GDI, GDI+ and OpenGL really obsolete/deprecated? [closed]

If you open the page "Graphics and Gaming (Windows)" on microsoft.com, the last category is described as
Legacy Graphics: Technologies that are obsolete and should not be used in new applications.
This category includes (among others) the following APIs:
GDI
GDI+
OpenGL
What's your opinion? If I want to roll out new software today, it must support Windows XP (still about 50% of all installed systems). Direct2D requires Windows Vista/7. What else should be used?
I suspect that Microsoft's definition of "legacy" has little to do with what any sensible developer should do, and is instead based on some Grand Rewrite of the Windows API.
Starting at around Windows Vista, Microsoft has been redesigning many of their APIs. We now have MMDevAPI as the One True Sound API, WIC as the One True Image File API, etc. From what I've seen and heard, these new APIs are much better than the old ones, and the "legacy" systems all work on top of the new ones. In Windows Vista and later, DirectSound is entirely based on MMDevAPI, and components that need to read image files do it via WIC.
Windows 8 will have an ARM version, which it appears will support only a subset of the current Windows API. We won't know for sure until Windows on ARM is released, but, based on the libraries included for the ARM platform in Visual Studio 11 (ref: http://www.winehq.org/pipermail/wine-devel/2012-March/094559.html), it's looking like GDI+ and OpenGL will not be available. GDI is available for linking, but that doesn't necessarily mean it's intact.
These new APIs from Vista and later roughly correspond to the libraries in the VS11 ARM target. I'm guessing that anything on that list is there because it's either the latest and greatest way to do what it does, or too technically important to discard (for now). Thus, "legacy" is anything that's not the latest and greatest way to do at least one thing.
I'm not sure what the One True Graphics API is. We already have Direct2D, Direct3D, DirectComposition (which, by the way, is not available until Windows 8), DirectWrite, and DXGI. DXGI seems the closest, but I don't have a deep enough understanding of the graphics APIs to say. I suspect gdi32 is technically very difficult to get rid of. How are non-legacy applications meant to find out when part of a window has been revealed and therefore must be painted, without using WM_PAINT, which involves an HDC? And how could a library do that on an application's behalf without replacing its window procedure? How are we meant to make semi-transparent windows without using UpdateLayeredWindow, which takes an HDC? How much does user32 depend on gdi32, and can they really be separated?
From a technical standpoint, Windows can easily get rid of GDI+ and OpenGL, but I'm not convinced that getting rid of OpenGL will work out, even on a new platform that doesn't promise any backward compatibility. It seems too valuable to developers. GDI+ isn't so important, but it's very easy for a third party to provide a replacement.
I would say use any of the APIs you listed, and the worst that's likely to happen is that you have to rewrite your UI if you want to port your app to Metro or Windows on ARM. GDI is a fine choice if your needs are simple and you'll be coding directly against the Windows API. There aren't many situations where I'd recommend GDI+ over OpenGL as a drawing API: GDI+ is slow, limited, and only available on Windows. The GDI+ API is simpler because it's 2D, so maybe it's worthwhile if you need to do something very simple but with anti-aliasing.
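For reference, a minimal sketch of that classic GDI path (plain Win32, links against user32/gdi32; the class and window names are my own): the window procedure handles WM_PAINT, BeginPaint hands back an HDC, and GDI primitives draw into it.

    #include <windows.h>

    LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp) {
        switch (msg) {
        case WM_PAINT: {
            PAINTSTRUCT ps;
            HDC hdc = BeginPaint(hwnd, &ps);      // covers the region that was revealed/invalidated
            Rectangle(hdc, 10, 10, 200, 120);     // plain GDI primitives
            MoveToEx(hdc, 10, 10, nullptr);
            LineTo(hdc, 200, 120);
            TextOutA(hdc, 20, 130, "Hello, GDI", 10);
            EndPaint(hwnd, &ps);
            return 0;
        }
        case WM_DESTROY:
            PostQuitMessage(0);
            return 0;
        }
        return DefWindowProc(hwnd, msg, wp, lp);
    }

    int WINAPI WinMain(HINSTANCE hInst, HINSTANCE, LPSTR, int nCmdShow) {
        WNDCLASSA wc = {};
        wc.lpfnWndProc = WndProc;
        wc.hInstance = hInst;
        wc.hCursor = LoadCursor(nullptr, IDC_ARROW);
        wc.hbrBackground = (HBRUSH)(COLOR_WINDOW + 1);
        wc.lpszClassName = "GdiDemo";
        RegisterClassA(&wc);

        HWND hwnd = CreateWindowA("GdiDemo", "GDI demo", WS_OVERLAPPEDWINDOW,
                                  CW_USEDEFAULT, CW_USEDEFAULT, 320, 240,
                                  nullptr, nullptr, hInst, nullptr);
        ShowWindow(hwnd, nCmdShow);

        MSG msg;
        while (GetMessage(&msg, nullptr, 0, 0) > 0) {
            TranslateMessage(&msg);
            DispatchMessage(&msg);
        }
        return 0;
    }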
OpenGL isn't deprecated; Microsoft's implementation of it is. Microsoft's implementation is stuck at version 1.1, which is old; the current version of the standard is past version 4. If you want to use OpenGL, it is fully supported by NVIDIA, ATI, and Intel graphics cards on the Windows desktop (but not in Windows Modern UI/Metro apps), it is an industry standard, and it also works on Mac and Linux. If you need a software fallback implementation, Mesa has you covered, and it even works on DOS. (Since Mesa can render into memory buffers, there's no reason it won't work in Modern UI apps, but you probably don't want to do this because it can be slow.) One thing to note is that WGL, the API for accessing OpenGL functionality on the Windows desktop, depends on GDI (which is deprecated), so you probably want to use something like FreeGLUT or SDL instead if you want to future-proof your application; that also nets you platform independence.
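If it helps, here is a hedged sketch of the SDL2 route suggested above (SDL2, the window title, and the requested GL 3.3 version are my assumptions, not part of the original answer); SDL hides the platform layer, WGL on Windows and GLX/EGL elsewhere, behind one portable API. Link with SDL2 and the platform's OpenGL library.

    #include <SDL.h>
    #include <SDL_opengl.h>
    #include <cstdio>

    int main(int argc, char* argv[]) {
        SDL_Init(SDL_INIT_VIDEO);
        SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 3);
        SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 3);

        SDL_Window* win = SDL_CreateWindow("GL via SDL",
                                           SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
                                           640, 480, SDL_WINDOW_OPENGL);
        SDL_GLContext ctx = SDL_GL_CreateContext(win);   // no WGL calls in application code

        std::printf("GL_VERSION: %s\n", (const char*)glGetString(GL_VERSION));

        SDL_GL_DeleteContext(ctx);
        SDL_DestroyWindow(win);
        SDL_Quit();
        return 0;
    }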
OpenGL ES is a variant of OpenGL which works on Android and Apple iOS. It is also accessible in JavaScript via WebGL, which Internet Explorer 11 will support (and pretty much every other browser already does.) ANGLE provides a hardware-accelerated implementation of GLES for Windows which piggybacks off of DirectX (version 9 or 11) and thus should work in Modern UI apps as well. Once again, Mesa's got the software implementation covered.
TL;DR: OpenGL is not only not deprecated, it is cross-platform, standard, and has tremendous momentum in the industry. GDI and GDI+, well, not so much.
If you want to support Windows XP, then you're supporting a "legacy" operating system, and as such, using a "legacy" graphics framework is the logical choice.
Even if that weren't true, let's just say that I disagree with the advice given by the linked MSDN article. The "legacy" status here has more to do with which technology the Windows team thinks is cool this week. The status designation of "obsolete" just means that the group responsible is no longer accepting or fulfilling bug reports (except for critical security issues). Not too big of a deal—these technologies have been around long enough that they're fairly feature-complete and stable.
GDI isn't going anywhere, so if you need something rock-solid that is guaranteed to be supported anywhere and everywhere, that's what I would go with.
If you need a bit more 2D capabilities than GDI offers (e.g., alpha channel transparency), then you could consider using GDI+. It's nearly an order of magnitude slower than GDI, but that's not too big of an issue on modern machines with more power than you could ever want. This, too, is going to be supported for a very long time to come.
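If it helps, here is a small hedged sketch of the kind of alpha-blended drawing GDI+ adds over GDI (links with gdiplus.lib; the function name and coordinates are mine, purely illustrative, and GdiplusStartup is assumed to have been called once at program start-up).

    #include <windows.h>
    #include <gdiplus.h>

    // Draws two overlapping circles; the second one is 50% transparent and blends
    // over the first, which plain GDI cannot do.
    void PaintWithAlpha(HDC hdc) {
        Gdiplus::Graphics g(hdc);
        g.SetSmoothingMode(Gdiplus::SmoothingModeAntiAlias);

        Gdiplus::SolidBrush red(Gdiplus::Color(255, 255, 0, 0));     // opaque red
        Gdiplus::SolidBrush blue(Gdiplus::Color(128, 0, 0, 255));    // 50% transparent blue
        g.FillEllipse(&red, 20, 20, 120, 120);
        g.FillEllipse(&blue, 80, 20, 120, 120);
    }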
That said, if I were writing a new app today, I probably wouldn't bother with OpenGL. There's very little that it offers in benefits over Direct2D and DirectWrite, which are both what Microsoft is pushing as the replacements for GDI/GDI+. There might be some benefit to using OpenGL if you absolutely must target Windows XP because as far as I can tell, Direct2D/DirectWrite are only supported on Vista and later, but that's because (as I mentioned originally), Windows XP falls squarely into the "legacy" or "obsolete" camp itself. Alternatively, if you already know OpenGL well and don't have time or the desire to learn Direct2D/DirectWrite, then it might make sense to continue using it in a new application.
Don't let the verbiage of the MSDN article scare you. Choose whatever technology makes the most sense for your specific use case given all of the available information. By the time any of these technologies go away completely, you'll have to re-write the app completely for a dozen other reasons.
Edit: Hmm, it looks like DirectWrite has also been declared "obsolete" (by some people at least), having been replaced by Direct2D. That's funny; it hasn't even been around long enough for me to bother learning it. I guess that only goes to support my earlier argument that "obsolete" simply designates that a particular technology is not what the Microsoft devs currently consider to be in vogue.
I'm personally waiting until all the bugs get worked out of this stuff (and we decide on a semi-permanent standard) before I make the switch for any of my applications. Everything I've seen written in DirectDraw or Direct2D has had serious rendering bugs and is a performance nightmare, even on reasonably competent machines. Sure, they only show up sometimes, under the right conditions, but that's too much for me. And I swear, the blurry text shows up all the time. Not being able to read what's on screen is a deal-killer for me and my users. GDI doesn't have this problem, and it's not going anywhere.
Are GDI, GDI+ and OpenGL really obsolete/deprecated?
This is not true for OpenGL. OpenGL 4 allows you to use geometry shaders on Windows XP, which isn't possible with DirectX (DirectX 10 and up is not supported on Windows XP). It is also one of the only cross-platform 3D APIs out there.
From a business point of view, MS is interested in promoting DirectX, since it is their technology and it locks developers into the Windows platform (they're also interested in making DirectX more attractive to developers, but that's another story). So it makes sense that they aren't keen on promoting OpenGL.
What else should be used?
I'd advise you to stop using platform-specific technologies when possible. Grab a cross-platform framework and use it for your application. There are Qt, GTK, wxWidgets and other toolkits for GUI apps, and SDL (and alternatives) for games. This way, when the platform developer makes a ridiculous decision you dislike (like not supporting DX10 on WinXP), you'll be able to move elsewhere with minimal development cost. Qt is also ridiculously powerful, and at the moment I have no reason to use anything else for GUI development. Still, the situation may change in the future.
In short, while developing for a certain platform, keep in mind that the platform developer may have goals that are not compatible with your wishes. Discovering that your source code has become locked into a single platform is not a pleasant experience. Your own goals should be the first priority, and if the OS developer tries to push you towards a technology you don't like, then you shouldn't support that technology.
Because OpenGL is a standard, it should be considered about as deprecated as C or C++; it is only a matter of time before the entire Windows API, which today has become a compile-once, run-on-every-x86-machine API thanks to Wine, is considered deprecated in favour of .NET and C#.
I use GDI for simple graphics and OpenGL when I need accelerated 3D.
Another aspect is that Microsoft's built-in implementation of OpenGL should definitely be considered deprecated, since it is stuck at version 1.1 or so, but that has been the case for a long time.
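A quick, hedged way to check which implementation you actually got at run time (assuming a current GL context already created by whatever toolkit you use; the function name is mine): query the version and renderer strings. Microsoft's software 1.1 fallback typically reports its renderer as "GDI Generic", whereas a vendor ICD reports the actual GPU.

    #include <windows.h>   // required before <GL/gl.h> on Windows
    #include <GL/gl.h>
    #include <cstdio>
    #include <cstring>

    void ReportOpenGLImplementation() {
        const char* version  = (const char*)glGetString(GL_VERSION);
        const char* renderer = (const char*)glGetString(GL_RENDERER);
        std::printf("GL_VERSION: %s\nGL_RENDERER: %s\n", version, renderer);

        if (renderer && std::strstr(renderer, "GDI Generic")) {
            std::printf("No vendor driver found: this is the software 1.1 fallback.\n");
        }
    }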
Yeah, about OpenGL: it actually outperforms DirectX in many ways, both resource- and display-wise. It will never be promoted by Microsoft, because Microsoft can't own OpenGL; not to mention most people don't do their research, and Microsoft can claim it is old. The truth is that OpenGL is an open standard and evolves at a much faster rate than a closed one does, because far more than one room full of developers is paid to work on it. Microsoft also has contracts with many companies to release software using only Microsoft technology; this drives more business to Microsoft and less to the more advanced OpenGL standard. It is an interesting lock-in, if you will: Microsoft creates these contracts so that many programs are written in DirectX, which keeps business with Microsoft, and no company will refuse because Microsoft holds roughly 80%+ of the home-user market.

Win32: Is there a replacement GDI32.dll that uses hardware acceleration?

Has anyone out there created a version of GDI32.dll that takes advantage of the hardware acceleration available on the machine? Or a version of gdiplus.dll?
Starting with Windows Vista, GDI is no longer hardware accelerated. (GDI+ was never hardware accelerated.) Without Microsoft fixing GDI (and GDI+) so that it runs well, native applications (C++ MFC, Delphi, etc.) and managed WinForms applications will continue to run poorly forever.
While I could use Direct2D for business applications, I cannot control the fact that the development environment still creates controls, with decades of library support code, that assume the presence of GDI.
Application Compatibility: Graphical Device Interface (GDI):
GDI primitives such as LineTo and Rectangle are now rendered in software rather than video hardware, which greatly simplifies the display drivers.
Windows And Video Memory
In XP, GDI is GPU accelerated to various degrees depending on how the OS is configured or on the device driver (for details see Hooking Versus Punting). In Vista, GDI is not GPU accelerated.
Comparing Direct2D and GDI
As a result, in Windows Vista, the GDI DDI display driver was changed to be implemented only by a Microsoft-supplied driver, the Canonical Display Driver (CDD). GDI rendered to a system memory bitmap. Dirty regions were used to update the video memory texture which the window manager uses to composite the desktop.
It seems that Vista was a special case in the history of GDI performance.
Both articles below show that the future for GDI looks bright again.
http://msdn.microsoft.com/en-us/library/ff729480%28VS.85%29.aspx
GDI is hardware accelerated on Windows XP, and accelerated on Windows 7 when the Desktop Window Manager is running and a WDDM 1.1 driver is in use. Direct2D is hardware accelerated on almost any WDDM driver and regardless of whether DWM is in use. On Vista, GDI will always render on the CPU.
http://blogs.msdn.com/b/e7/archive/2009/04/25/engineering-windows-7-for-graphics-performance.aspx
Based on real-world application statistics, ... we worked with our graphics IHV partners to provide support in their drivers to accelerate the most commonly used GDI operations.
Well, yes, GDI is the "it works anywhere, anytime" API for rendering graphics. It puts very low demands on the video driver. Everybody got that right a long time ago, although it took a while: I have a distinct memory of an ATI Mach video card that gave me no end of trouble. It stopped me from buying ATI products for quite a while.
Everybody got DirectX right somewhat less long ago, too. It is taken advantage of in the WPF rendering model, which relies completely on DirectX to get the job done; Milcore is the name of the shim. You won't get that, however, until you buy into the WPF programming model.
What do you mean by hardware acceleration?
I mean, GDI doesn't do a lot other than raster blits, but those were hardware accelerated. And, given that Vista and Windows 7 aren't terribly slower with desktop apps, they still are.
GDI still gets the video drivers to do all the heavy lifting, so if GDI isn't hardware accelerated, then it's the driver vendor's fault, not GDI's.

Hardware acceleration / performance and linkage of different Mac OS X graphics APIs, frameworks and layers

The more I read about the different types of views/contexts/rendering backends, the more confused I get.
According to http://en.wikipedia.org/wiki/Quartz_%28graphics_layer%29
Mac OS X offers Quartz (Extreme) as a rendering backend, which itself is part of Core Graphics.
The Apple docs and some books say that, one way or another, you end up using OpenGL (obviously, since this operating system uses OpenGL to render all of its UI).
I currently have an application that should capture real-time video from a camera (via QTKit, which is based on QuickTime but is Cocoa) and I would like to further process the frames (via Core Image, GLSL shaders, etc.).
So far so good. Now my question is: does it matter performance-wise whether you
a) draw the captured frame via Quartz and implicitly via OpenGL, or
b) set up an OpenGL context and a DisplayLink and draw the buffered image explicitly via OpenGL?
What would be the advantages or disadvantages of going either way?
I've looked at the different examples (especially CoreImage101 and CoreVideo101) and documents from Apple's developer pages, but I can't see why they go (or have to go) that way.
And I really don't get where Core Video and Core Animation come into play.
Does going with way b) automatically mean I use Core Video? And with which way can I use Core Animation?
additional info:
http://developer.apple.com/leopard/overview/graphicsandmedia.html
http://theocacao.com/document.page/306
http://lists.apple.com/archives/quartz-dev/2007/Jun/msg00059.html
P.S.: By the way, I am on Leopard, so no QuickTime X confusion yet :)
Generally speaking, OpenGL just gives you more flexibility than the higher-level APIs. If the higher-level APIs do not offer a feature you need, then it is very likely that you will need to drop down to the OpenGL layer.
If they do offer everything you need, then you should see comparable speed, perhaps with a small (almost negligible) degradation from the Objective-C overhead.
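To make option (b) from the question concrete, here is a hedged sketch of a CVDisplayLink setup using the Core Video C API (compile as C++ or Objective-C++ and link the CoreVideo and CoreFoundation frameworks; the callback name and the printf are mine, and the actual OpenGL drawing of the captured frame is only indicated in a comment).

    #include <CoreVideo/CoreVideo.h>
    #include <CoreFoundation/CoreFoundation.h>
    #include <cstdio>

    // Called on each display refresh, on a Core Video worker thread.
    static CVReturn OnDisplayRefresh(CVDisplayLinkRef /*link*/,
                                     const CVTimeStamp* /*now*/,
                                     const CVTimeStamp* /*outputTime*/,
                                     CVOptionFlags /*flagsIn*/,
                                     CVOptionFlags* /*flagsOut*/,
                                     void* /*userInfo*/) {
        // Here you would lock your OpenGL context, upload/bind the newest camera
        // frame, run your Core Image filter or GLSL shader, and swap buffers.
        std::printf("display refresh tick\n");
        return kCVReturnSuccess;
    }

    int main() {
        CVDisplayLinkRef link = nullptr;
        CVDisplayLinkCreateWithActiveCGDisplays(&link);
        CVDisplayLinkSetOutputCallback(link, OnDisplayRefresh, nullptr);
        CVDisplayLinkStart(link);

        CFRunLoopRun();                 // keep the process alive; callbacks arrive on another thread
        CVDisplayLinkRelease(link);
        return 0;
    }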
