I am running a C++ 3D real-time application on Windows XP, which itself runs in VMware Workstation 9.0. I have 3D acceleration disabled in VMware, so the only OpenGL implementation available to me is the Windows (GDI) one.
I am rendering high-resolution screenshots from my application using a hidden Win32 window, but it seems impossible to render at a resolution higher than the Windows XP desktop resolution. What is the reason for this? Is an OpenGL context constrained to the desktop resolution in GDI OpenGL? The area of the screenshot image that lies outside the screen resolution is simply black.
I cannot tell you the reason for this implementation decision. But note that it fully conforms to the OpenGL 1.1 specification:
4.1.1 Pixel Ownership Test
"The first test is to determine if the pixel at location (x_w, y_w) in the framebuffer is currently owned by the GL (more precisely, by this GL context). If it is not, the window system decides the fate of the incoming fragment. Possible results are that the fragment is discarded..."
In a sense you are even "lucky" that a hidden window works at all, because technically it doesn't own its pixels. (If I may speculate about the reason: the OpenGL 1.1 implementation has been around since at least Windows 98, and graphics resources used to be really expensive...)
Maybe a software OpenGL implementation like Mesa3D is an option for you? As far as I remember, it supports framebuffer objects, which are the preferred method for off-screen rendering nowadays. (Depending on the required resolution and the limits of your GL implementation, you might still be forced to render and assemble tiles.)
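As a rough sketch of the FBO route (assuming a context that actually exposes framebuffer objects, e.g. through Mesa3D, with the entry points loaded via GLEW or a similar loader, and a target size within GL_MAX_RENDERBUFFER_SIZE):

    // Sketch: render one high-resolution frame off-screen and read it back.
    // Assumes a current GL context whose implementation provides FBOs
    // (core since GL 3.0, or GL_EXT_framebuffer_object), loaded e.g. via GLEW.
    #include <GL/glew.h>
    #include <vector>

    std::vector<unsigned char> renderHighResFrame(int width, int height)
    {
        GLuint fbo = 0, color = 0, depth = 0;

        // Color and depth renderbuffers, sized independently of the desktop.
        glGenRenderbuffers(1, &color);
        glBindRenderbuffer(GL_RENDERBUFFER, color);
        glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, width, height);

        glGenRenderbuffers(1, &depth);
        glBindRenderbuffer(GL_RENDERBUFFER, depth);
        glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, width, height);

        glGenFramebuffers(1, &fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                                  GL_RENDERBUFFER, color);
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                                  GL_RENDERBUFFER, depth);

        std::vector<unsigned char> pixels(width * height * 4);
        if (glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE) {
            glViewport(0, 0, width, height);
            // ... draw the scene here ...
            glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, &pixels[0]);
        }

        // Clean up and return to the default framebuffer.
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
        glDeleteFramebuffers(1, &fbo);
        glDeleteRenderbuffers(1, &color);
        glDeleteRenderbuffers(1, &depth);
        return pixels;
    }

If the requested size exceeds GL_MAX_RENDERBUFFER_SIZE, you would render the scene in several tiles with adjusted projection and stitch the read-back pixels together.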
Related
I recently moved the rendering part of a program of mine from GDI+ to OpenGL.
Now I'm wondering: are there any downsides to doing so?
For example, are there any versions of Windows (XP or later) that support GDI+ but not OpenGL?
Or, for example, is it possible for a lack of drivers (or poor drivers), or a lack of a graphics card, etc. to make OpenGL rendering impossible on a system on which GDI+ works fine?
(I understand that OpenGL might need to resort to software rendering on less capable systems, but aside from slowness, I'm wondering if it would ever simply not work correctly in a situation in which GDI+ would.)
It depends on the OpenGL version/profile you're using. Up to and including Windows XP, OpenGL 1.1 is available by default without additional drivers. Since Windows Vista, the minimum available OpenGL version is 1.4.
However, if you need anything more than that, you're relying on the user installing the drivers that come from the GPU vendor; the drivers installed by default in a standard Windows installation don't cover OpenGL (for reasons that are not perfectly sane).
Programs and libraries that strongly depend on OpenGL or OpenGL ES have resorted to incorporating fallbacks like ANGLE.
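Whether you ended up with a vendor driver or with Microsoft's unaccelerated default can be checked at runtime. A minimal sketch (the function name is mine; the check itself only relies on standard glGetString):

    #include <cstdio>
    #include <GL/gl.h>

    // Requires a current OpenGL context.
    void reportOpenGLImplementation()
    {
        std::printf("GL_VENDOR  : %s\n", reinterpret_cast<const char*>(glGetString(GL_VENDOR)));
        std::printf("GL_RENDERER: %s\n", reinterpret_cast<const char*>(glGetString(GL_RENDERER)));
        std::printf("GL_VERSION : %s\n", reinterpret_cast<const char*>(glGetString(GL_VERSION)));
        // "Microsoft Corporation" / "GDI Generic" / "1.1.0" means you got the
        // default software implementation rather than a vendor driver.
    }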
There are some idiosyncrasies. For example, you cannot create a transparent OpenGL window if desktop compositing is disabled (which, under XP, means not at all). Otherwise, as datenwolf notes, there's ANGLE, but even that does not always work. Another option might be Mesa3D compiled for the Windows target with software rendering enabled. That option might be the safest one, and faster than the software OpenGL 1.1 implementation from Microsoft.
So, we've got a little graphical doohickey that needs to run in a server environment without a real video card. All it really needs is framebuffer objects and maybe some vector/font anti-aliasing. It will be slow, I know. It just needs to output single frames.
I see this post about how to force software rendering mode, but it seems to apply to machines that already have OpenGL enabled cards (like NVidia).
So, for fear of trying to install OpenGL on a machine three time zones away with a bunch of live production sites on it-- has anybody tried this and/or know how to "emulate" an OpenGL environment? Unfortunately our dev server HAS a video card, so I can't really show "what I've tried".
The relevant code is all in Cinder, but I think our actual OpenGL utilization is lightweight for this purpose.
This would run on Windows Server 2008 Standard.
I see MS has a software implementation for OpenGL 1.1, but I can't seem to find one for 2.0.
Build/find some Mesa DLLs.
It will be slow.
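Before relying on FBOs in such a setup, it may be worth verifying that the renderer you ended up with actually advertises them. A small sketch (the helper name is mine):

    #include <cstring>
    #include <GL/gl.h>

    // Requires a current OpenGL context. GL 2.x implementations expose FBOs
    // through GL_EXT_framebuffer_object (newer ones through GL_ARB_framebuffer_object).
    bool hasFramebufferObjects()
    {
        const char* ext = reinterpret_cast<const char*>(glGetString(GL_EXTENSIONS));
        return ext != NULL &&
               (std::strstr(ext, "GL_EXT_framebuffer_object") != NULL ||
                std::strstr(ext, "GL_ARB_framebuffer_object") != NULL);
    }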
I am trying to figure out the relationship between CGL and OpenGL on the Mac platform.
More specifically, about the context: do they share a context? If yes, how? Please point me to some related examples.
If not, are there two contexts at work in Core Animation applications that make use of OpenGL?
I am very confused by the way the Mac uses OpenGL. Can somebody clarify?
CGL sets up device-specific contexts suitable for OpenGL to render to; compare WGL and GLX on Windows and X11 respectively. CGL knows how to query the graphics hardware for its pixel formats and how to set up and configure a context (e.g. double- or single-buffered, what depth, stencil and accumulation buffer sizes, etc.), but it doesn't provide functions to draw into that context. Once you have created the context with CGL, you make it current, and then you can call OpenGL to render into it.
In Core Graphics (not to be confused with CGL), both context initialization and drawing into the context are handled by the same framework. But because OpenGL is an open standard designed to be cross-platform, the rendering functionality and the device-context functionality have been abstracted into separate frameworks.
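A minimal sketch of the CGL flow described above (error checking omitted; the pixel-format attributes are only an example):

    #include <OpenGL/OpenGL.h>   // CGL
    #include <OpenGL/gl.h>

    int main()
    {
        // Describe the pixel format we want (double-buffered, 24-bit depth).
        CGLPixelFormatAttribute attrs[] = {
            kCGLPFADoubleBuffer,
            kCGLPFADepthSize, (CGLPixelFormatAttribute)24,
            (CGLPixelFormatAttribute)0
        };
        CGLPixelFormatObj pix = NULL;
        GLint npix = 0;
        CGLChoosePixelFormat(attrs, &pix, &npix);

        // Create the context from the pixel format and make it current.
        CGLContextObj ctx = NULL;
        CGLCreateContext(pix, NULL, &ctx);
        CGLDestroyPixelFormat(pix);
        CGLSetCurrentContext(ctx);

        // From here on, plain OpenGL calls render into this context.
        glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT);

        CGLSetCurrentContext(NULL);
        CGLDestroyContext(ctx);
        return 0;
    }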
CGL is the low-level interface to OpenGL on the Mac. You probably don't want to use it if you are writing an OpenGL Mac app. I am currently in the process of creating an intuitive OpenGL Mac application template for Xcode 4, but in the meantime you can look at https://github.com/mk12/Pong-Ultimate, a Pong clone I made using OpenGL. It uses NSOpenGL, a higher-level Cocoa interface to OpenGL.
You may also find the Apple docs helpful: http://developer.apple.com/library/mac/#documentation/GraphicsImaging/Conceptual/OpenGL-MacProgGuide/opengl_intro/opengl_intro.html.
Has anyone out there created a version of GDI32.dll that takes advantage of the hardware acceleration available on the machine? Or of gdiplus.dll?
Starting with Windows Vista, GDI is no longer hardware accelerated (GDI+ never was). Unless Microsoft fixes GDI (and GDI+) so that it runs well on current machines, native applications (C++ MFC, Delphi, etc.) and managed WinForms applications will continue to run poorly forever.
While I could use Direct2D for business applications, I cannot control the fact that the development environment still creates controls, with decades of library support code, that assume the presence of GDI.
Application Compatibility: Graphical Device Interface (GDI):
GDI primitives such as LineTo and Rectangle are now rendered in software rather than video hardware, which greatly simplifies the display drivers.
Windows And Video Memory:
In XP, GDI is GPU accelerated to various degrees depending on how the OS is configured or the device driver (for details see Hooking Versus Punting). In Vista, GDI is not GPU accelerated.
Comparing Direct2D and GDI:
As a result, in Windows Vista, the GDI DDI display driver was changed to be implemented only by a Microsoft-supplied driver, the Canonical Display Driver (CDD). GDI rendered to a system memory bitmap. Dirty regions were used to update the video memory texture which the window manager uses to composite the desktop.
It seems that Vista was a special case in the history of GDI performance.
Both articles below show that the future for GDI looks bright again.
http://msdn.microsoft.com/en-us/library/ff729480%28VS.85%29.aspx
GDI is hardware accelerated on Windows XP, and accelerated on Windows 7 when the Desktop Window Manager is running and a WDDM 1.1 driver is in use. Direct2D is hardware accelerated on almost any WDDM driver and regardless of whether DWM is in use. On Vista, GDI will always render on the CPU.
http://blogs.msdn.com/b/e7/archive/2009/04/25/engineering-windows-7-for-graphics-performance.aspx
Based on real-world application statistics, ... we worked with our graphics IHV partners to provide support in their drivers to accelerate the most commonly used GDI operations.
Well, yes, GDI is the "it works anywhere, anytime" API for rendering graphics. It puts very low demands on the video driver. Everybody got that right a long time ago, although it took a while; I have a distinct memory of an ATI Mach video card that gave me no end of trouble and kept me from buying ATI products for quite some time.
Everybody got DirectX right somewhat more recently, too. It is taken full advantage of in the WPF rendering model, which relies completely on DirectX to get the job done; Milcore is the name of the shim. You won't get it unless you buy into the WPF programming model.
What do you mean by hardware acceleration?
I mean, GDI doesn't do a lot other than raster blits, but those were hardware accelerated. And given that Vista and Windows 7 aren't terribly slower with desktop apps, they still are.
GDI still gets the video drivers to do all the heavy lifting, so if GDI isn't hardware accelerated, it's the driver vendors' fault, not GDI's.
The more I read about the different types of views/contexts/rendering backends, the more confused I get.
According to http://en.wikipedia.org/wiki/Quartz_%28graphics_layer%29,
Mac OS X offers Quartz (Extreme) as a rendering backend, which itself is part of Core Graphics.
In the Apple docs, and in some books too, they say that in any case you somehow end up using OpenGL (obviously, since this operating system uses OpenGL to render all of its UI).
I currently have an application that should capture real-time video from a camera (via QTKit, which is based on QuickTime but is Cocoa), and I would like to process the frames further (via Core Image, GLSL shaders, etc.).
So far so good. Now my question is: does it matter, performance-wise, whether you
a) draw the captured frame via Quartz and implicitly via OpenGL, or
b) set up an OpenGL context and a DisplayLink and draw the buffered image explicitly via OpenGL?
What would be the advantages or disadvantages of going either way?
I've looked at the different examples (especially CoreImage101 and CoreVideo101) and documents from Apple's developer pages, but I can't see why they go (or have to go) that way.
And I really don't get where Core Video and Core Animation come into play.
Does going with b) automatically mean I use Core Video? And with which way can I use Core Animation?
Additional info:
http://developer.apple.com/leopard/overview/graphicsandmedia.html
http://theocacao.com/document.page/306
http://lists.apple.com/archives/quartz-dev/2007/Jun/msg00059.html
P.S.: By the way, I am on Leopard, so no QuickTime X confusion yet :)
Generally speaking, OpenGL just gives you more flexibility than the higher-level APIs. If the higher-level APIs do not offer a feature you need, then it is very likely that you will need to drop down to the OpenGL layer.
If they do offer everything you need, then you should get comparable speed, perhaps with a small (almost negligible) degradation due to the Objective-C overhead.
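If you go the explicit route b), the usual pattern is a CVDisplayLink driving your own OpenGL drawing. A hedged sketch (assumes you have already created a CGL/NSOpenGL context; the actual frame drawing inside the callback is left out):

    #include <CoreVideo/CoreVideo.h>
    #include <OpenGL/OpenGL.h>

    // Called by Core Video on its own high-priority thread, once per display refresh.
    static CVReturn frameCallback(CVDisplayLinkRef link,
                                  const CVTimeStamp* now,
                                  const CVTimeStamp* outputTime,
                                  CVOptionFlags flagsIn,
                                  CVOptionFlags* flagsOut,
                                  void* userInfo)
    {
        CGLContextObj ctx = (CGLContextObj)userInfo;
        CGLSetCurrentContext(ctx);
        // ... pull the newest captured frame and draw it with OpenGL here ...
        CGLFlushDrawable(ctx);   // swap buffers for a double-buffered context
        return kCVReturnSuccess;
    }

    void startDisplayLink(CGLContextObj ctx)
    {
        CVDisplayLinkRef link = NULL;
        CVDisplayLinkCreateWithActiveCGDisplays(&link);
        CVDisplayLinkSetOutputCallback(link, &frameCallback, ctx);
        CVDisplayLinkStart(link); // remember to CVDisplayLinkStop/Release on teardown
    }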