I'm trying to write a simple app that will do a screen capture of an application and then render that capture in the 'main' application (i.e., the application that took the screen capture).
I've figured out how to get the window handle and get the application's screen capture, but I'm having trouble rendering the captured screen in the 'main' application.
Using GDI, I have the following code to render:
Bitmap bit(hSrcbmp,hpal);
graphics.DrawImage(&bit,Gdiplus::PointF(0,0));
where hSrcbmp is a bitmap of the captured screen and graphics is a GDI+ 'Graphics' object.
I get the following error after the constructor call to Bitmap:
Gdiplus::Image = {nativeImage=0x00000000 lastResult=Win32Error loadStatus=-858993460 }
* Using Visual Studio 2005
* Windows XP
* Visual C++ (non-managed)
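For reference, a minimal capture-and-draw sketch (assumed code, not the exact code above) that avoids two common causes of that Win32Error: GDI+ never being initialized with GdiplusStartup, and the HBITMAP still being selected into the memory DC when the Gdiplus::Bitmap is constructed:

#include <windows.h>
#include <gdiplus.h>

// Assumes GdiplusStartup() has already been called at application startup.
void CaptureAndDraw(HWND hwndSource, Gdiplus::Graphics& graphics)
{
    RECT rc;
    GetClientRect(hwndSource, &rc);
    int w = rc.right - rc.left, h = rc.bottom - rc.top;

    HDC srcDC = GetDC(hwndSource);
    HDC memDC = CreateCompatibleDC(srcDC);
    HBITMAP hSrcbmp = CreateCompatibleBitmap(srcDC, w, h);

    HGDIOBJ old = SelectObject(memDC, hSrcbmp);
    BitBlt(memDC, 0, 0, w, h, srcDC, 0, 0, SRCCOPY);
    SelectObject(memDC, old);               // deselect before wrapping in Gdiplus::Bitmap

    {
        Gdiplus::Bitmap bit(hSrcbmp, NULL); // NULL palette is fine for >8bpp captures
        graphics.DrawImage(&bit, Gdiplus::PointF(0.0f, 0.0f));
    }                                       // Bitmap destroyed before the HBITMAP is freed

    DeleteObject(hSrcbmp);
    DeleteDC(memDC);
    ReleaseDC(hwndSource, srcDC);
}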
Any ideas?
Another question: is there a better approach? C#, DirectX, or OpenGL?
Thanks
Screen capture has been a Win32 FAQ for 18 years.
See the win32 newsgroup for standard code (from MS and others), in C and C++.
I have to copy the image of a windows application (form based, not full screen) that is rendered with opengl to a region(panel) of the window of an other application (using GDI functions).
The OpenGL window is rendered on one monitor with the first graphics card of the system, and the destination panel is rendered on another monitor that is controlled by another graphics card.
Both applications are "third party application". The only information I have are the window handles of both, the opengl application and the destination panel.
I did some tests with Delphi using the "dglOpenGL" unit - but without success. "glReadPixels" always results in an empty data array.
I do not understand how I can get access to an existing rendering context that was created by another application.
My project is written in Delphi - but if there are hints, code snippets, in C# or C++, that will be helpful, too.
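Another process's OpenGL rendering context cannot be made current in your own process, so glReadPixels against it cannot work. One workaround is a plain GDI copy from the source window's DC into the destination panel's DC, using only the two window handles. A hedged C++ sketch (C++ since such snippets were welcomed); note that GL-rendered areas may still come out black depending on the driver and OS:

#include <windows.h>

void CopyWindowToPanel(HWND hwndGLSource, HWND hwndDestPanel)
{
    RECT src, dst;
    GetClientRect(hwndGLSource, &src);
    GetClientRect(hwndDestPanel, &dst);

    HDC srcDC = GetDC(hwndGLSource);   // DC of the OpenGL window's client area
    HDC dstDC = GetDC(hwndDestPanel);  // DC of the target panel

    // Stretch the source client area onto the panel.
    SetStretchBltMode(dstDC, COLORONCOLOR);
    StretchBlt(dstDC, 0, 0, dst.right, dst.bottom,
               srcDC, 0, 0, src.right, src.bottom, SRCCOPY);

    ReleaseDC(hwndDestPanel, dstDC);
    ReleaseDC(hwndGLSource, srcDC);
}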
What is the best way to access the rendering area of every single window in Microsoft Windows, so I can render them myself in 3D? I know the Desktop Window Manager (DWM) gives every window a render target and composites them all together on the screen.
There are tons of 3D window managers out there, but finding some code or other information is hard for me, so I'm asking you.
I would like to do it in OpenGL, but I can imagine it's only possible in DirectX; that's fine too.
Thanks in advance!
You have to use the operating system / graphics system specific API dedicated to that task. OpenGL is operating system and graphics system agnostic and has no notion of "windows".
In Windows the API you must use is the DWM API. For Windows versions predating the DWM (XP and earlier), some "dirty" hacks have to be employed to get the contents of the windows: essentially hooking the WM_PAINT and WM_ERASEBKGND messages, and the GDI BeginPaint and EndPaint functions, to copy each window's contents into a DIBSECTION that can then be used to update a texture.
In X11 the API to use is XComposite plus the GLX_ARB_texture_from_pixmap extension.
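A hedged sketch of the "copy a window into a DIBSECTION, then update a texture" idea, using PrintWindow per window instead of the message hooking described above; hwndTarget and the GL texture object are assumed to exist already:

#include <windows.h>
#include <GL/gl.h>

void UpdateTextureFromWindow(HWND hwndTarget, GLuint texture)
{
    RECT rc;
    GetClientRect(hwndTarget, &rc);
    int w = rc.right, h = rc.bottom;

    // 32-bpp top-down DIB section that the window can paint into.
    BITMAPINFO bmi = {};
    bmi.bmiHeader.biSize = sizeof(bmi.bmiHeader);
    bmi.bmiHeader.biWidth = w;
    bmi.bmiHeader.biHeight = -h;           // negative height = top-down rows
    bmi.bmiHeader.biPlanes = 1;
    bmi.bmiHeader.biBitCount = 32;
    bmi.bmiHeader.biCompression = BI_RGB;

    void* pixels = NULL;
    HDC memDC = CreateCompatibleDC(NULL);
    HBITMAP dib = CreateDIBSection(memDC, &bmi, DIB_RGB_COLORS, &pixels, NULL, 0);
    HGDIOBJ old = SelectObject(memDC, dib);

    PrintWindow(hwndTarget, memDC, 0);     // ask the window to render into our DC
    GdiFlush();                            // make sure GDI has finished writing the DIB

    glBindTexture(GL_TEXTURE_2D, texture);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
                 GL_BGRA_EXT, GL_UNSIGNED_BYTE, pixels);

    SelectObject(memDC, old);
    DeleteObject(dib);
    DeleteDC(memDC);
}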
I have a native C++ application that is rendering 3D using DirectX.
The app can switch between windowed and fullscreen using IDXGISwapChain::SetFullscreenState().
However, some of my UI is in a .NET WinForm in a managed DLL.
In windowed mode I just call into the .NET DLL and that opens the secondary WinForm, so there are two windows: the main native MFC/DirectX window and the .NET WinForm control.
But when DirectX is in fullscreen mode, can I get the WinForm to show above the DirectX surface?
Is it even possible?
Merely setting the WinForm property
this.TopMost = true;
is not sufficient; the WinForm stays under the DirectX surface.
I just want "my" WinForm shown over the fullscreen window.
You could copy your WinForm's DC onto a bitmap in memory, copy that onto a DirectX rendering surface, and pass messages back to the WinForm. It might be slow, though, if you try to do very much.
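A hedged sketch of that idea, assuming a D3D11 swap chain: render the form into a 32-bit DIB with PrintWindow, then copy the pixels into a dynamic texture that the native renderer draws as an overlay quad. The names (UpdateOverlayFromWinForm, overlayTex) and the texture setup (D3D11_USAGE_DYNAMIC, DXGI_FORMAT_B8G8R8A8_UNORM, sized to the form's client area) are assumptions, not your original code:

#include <windows.h>
#include <d3d11.h>
#include <string.h>

void UpdateOverlayFromWinForm(HWND hwndForm,
                              ID3D11DeviceContext* ctx,
                              ID3D11Texture2D* overlayTex)
{
    RECT rc;
    GetClientRect(hwndForm, &rc);
    int w = rc.right - rc.left, h = rc.bottom - rc.top;

    // Render the form into a top-down 32-bpp DIB section.
    HDC memDC = CreateCompatibleDC(NULL);
    BITMAPINFO bmi = {};
    bmi.bmiHeader.biSize = sizeof(bmi.bmiHeader);
    bmi.bmiHeader.biWidth = w;
    bmi.bmiHeader.biHeight = -h;           // negative height = top-down rows
    bmi.bmiHeader.biPlanes = 1;
    bmi.bmiHeader.biBitCount = 32;
    bmi.bmiHeader.biCompression = BI_RGB;
    void* bits = NULL;
    HBITMAP dib = CreateDIBSection(memDC, &bmi, DIB_RGB_COLORS, &bits, NULL, 0);
    HGDIOBJ old = SelectObject(memDC, dib);

    PrintWindow(hwndForm, memDC, 0);       // ask the form to paint itself into our DC
    GdiFlush();

    // Copy the pixels into the dynamic overlay texture (assumed to match w x h).
    D3D11_MAPPED_SUBRESOURCE mapped;
    if (SUCCEEDED(ctx->Map(overlayTex, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped)))
    {
        const BYTE* src = (const BYTE*)bits;
        BYTE* dst = (BYTE*)mapped.pData;
        for (int y = 0; y < h; ++y)
            memcpy(dst + y * mapped.RowPitch, src + y * w * 4, w * 4);
        ctx->Unmap(overlayTex, 0);
    }

    SelectObject(memDC, old);
    DeleteObject(dib);
    DeleteDC(memDC);
}

The native side would then draw overlayTex as a screen-space quad each frame and forward mouse/keyboard input back to the WinForm.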
I'm using C++(Visual Studio 10) on Windows XP.
I want to make an application similar to a video player where there is a window inside an outer window where the actual video is displayed. On the outer window I will have the GUI, buttons, etc...
Basically I'm composing two windows together, and the frames in the inner window are updated by another thread that does image processing (I will use OpenCV for this).
Any pointers? I just need to know the basic structure for this.
Build the entire GUI using Win32 and then convert the IplImage to a BITMAP so you can display it. It seems someone posted a quick and dirty solution to do that, but I haven't tried it.
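A hedged sketch of that conversion, assuming an 8-bit, 3-channel BGR IplImage (the usual OpenCV layout) and the legacy C API header; it draws straight into the child window's client area with StretchDIBits rather than building an HBITMAP first:

#include <windows.h>
#include <opencv/cv.h>   // legacy C API header that declares IplImage

void DrawIplImage(HWND hwndChild, const IplImage* img)
{
    BITMAPINFO bmi = {};
    bmi.bmiHeader.biSize = sizeof(bmi.bmiHeader);
    bmi.bmiHeader.biWidth = img->width;
    bmi.bmiHeader.biHeight = -img->height;  // negative = top-down, like IplImage
    bmi.bmiHeader.biPlanes = 1;
    bmi.bmiHeader.biBitCount = 24;          // 3 channels * 8 bits (BGR)
    bmi.bmiHeader.biCompression = BI_RGB;

    RECT rc;
    GetClientRect(hwndChild, &rc);
    HDC hdc = GetDC(hwndChild);
    // DIB rows must be DWORD-aligned; cvCreateImage pads widthStep to 4 bytes by
    // default, so imageData can usually be passed through directly for 24-bit frames.
    StretchDIBits(hdc,
                  0, 0, rc.right, rc.bottom,            // destination rectangle
                  0, 0, img->width, img->height,        // source rectangle
                  img->imageData, &bmi, DIB_RGB_COLORS, SRCCOPY);
    ReleaseDC(hwndChild, hdc);
}

The processing thread shouldn't touch the window directly; have it notify the UI thread (for example with PostMessage) and do the drawing there, since windows have thread affinity.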
I have an MFC dialog application that I am using as the front end for some image processing with OpenCV 2.1. I would like to move away from using cvShowImage and place my image directly on the dialog box or a suitable container. I've found examples of a technique using an MFC SDI application with the Doc/View model, but I can't figure out how to adapt that to a dialog-based app.
I'm curious if anyone has done this and/or knows where an example may live that does this?
Also, this is my first MFC application.
Thanks all.
There is a CvvImage class in highgui with a DrawToHDC (or similar) method that will draw the image into a device context.
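A hedged sketch of that approach, assuming a picture/static control with a made-up resource ID IDC_IMAGE_AREA on the dialog and the CvvImage helper that ships with OpenCV 2.1's highgui (it was removed in later releases); CMyDialog is a placeholder class name:

// Draw an IplImage onto a picture control inside an MFC dialog.
void CMyDialog::ShowImage(IplImage* img)
{
    CWnd* pWnd = GetDlgItem(IDC_IMAGE_AREA);   // static/picture control on the dialog
    if (!pWnd || !img)
        return;

    CRect rc;
    pWnd->GetClientRect(&rc);

    CDC* pDC = pWnd->GetDC();
    CvvImage cvImg;
    cvImg.CopyOf(img);                         // copy the IplImage into the helper
    cvImg.DrawToHDC(pDC->GetSafeHdc(), &rc);   // stretch-draw into the control's DC
    pWnd->ReleaseDC(pDC);
}

CvvImage is declared in highgui.h in 2.1; with newer OpenCV versions the same effect can be had by calling StretchDIBits on the raw image data.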