SDL2 2.0.8. Windows 10. msys64. Radeon Card.
I'm converting a DirectX (draw) application to SDL2.
In the code, I open both a DirectX window and an SDL window and then verify the pixel format of both.
So on the same machine, in the same program, DirectX reports an ARGB8888 window but SDL2 reports an RGB888 window. So 32-bit vs. 24-bit?
Is this correct? How do I get SDL to return an ARGB8888-formatted window?
There is probably no actual problem here: despite the name, SDL_PIXELFORMAT_RGB888 is a packed 32-bit XRGB format (4 bytes per pixel, with the top byte unused), not a 24-bit surface. You can check whether the rendering process works either way.
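A minimal sketch of that check, assuming SDL 2.0.8 on Windows (window size and flags here are arbitrary): it prints the window's format and shows one way to keep drawing in ARGB8888 through a streaming texture if your ported pixel data is laid out that way.
#include <SDL.h>
#include <cstdio>

int main(int, char**) {
    if (SDL_Init(SDL_INIT_VIDEO) != 0) return 1;
    SDL_Window* win = SDL_CreateWindow("format check", SDL_WINDOWPOS_CENTERED,
                                       SDL_WINDOWPOS_CENTERED, 640, 480, 0);
    Uint32 fmt = SDL_GetWindowPixelFormat(win);
    /* On many Windows machines this prints SDL_PIXELFORMAT_RGB888:
       24 colour bits but 4 bytes per pixel, i.e. XRGB8888 storage. */
    printf("window format: %s (%u bits, %u bytes per pixel)\n",
           SDL_GetPixelFormatName(fmt),
           (unsigned)SDL_BITSPERPIXEL(fmt), (unsigned)SDL_BYTESPERPIXEL(fmt));

    /* If the ported drawing code produces ARGB8888 pixels, render them into
       a texture with that exact format; SDL converts on present. */
    SDL_Renderer* ren = SDL_CreateRenderer(win, -1, 0);
    SDL_Texture* tex = SDL_CreateTexture(ren, SDL_PIXELFORMAT_ARGB8888,
                                         SDL_TEXTUREACCESS_STREAMING, 640, 480);
    /* ... SDL_UpdateTexture / SDL_RenderCopy / SDL_RenderPresent ... */

    SDL_DestroyTexture(tex);
    SDL_DestroyRenderer(ren);
    SDL_DestroyWindow(win);
    SDL_Quit();
    return 0;
}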
I'm developing a test application with DirectX 11 and feature level 10.1.
Everything works as expected, but when I maximize the window containing my graphics, the time per frame increases drastically, e.g. from 1 ms to 40 ms.
NVS 300 graphics card
Windows 7 32-bit
Application that draws a few sine curves with Direct3D, C# via SharpDX
Windows Forms with a control and a SharpDX-initialized swap chain, programmed to change the backbuffer on the resize event (the problem would occur without that too, though)
I used a System.Diagnostics.Stopwatch to track the issue down to this line:
mSwapChain.Present(1, PresentFlags.None);
where the time it needs suddenly increases by a lot when the window is maximized.
Any clues?
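For reference, the same measurement expressed as a hedged C++/DXGI sketch (the original code is C#/SharpDX; swapChain is assumed to be an already-created IDXGISwapChain*):
#include <windows.h>
#include <dxgi.h>
#include <cstdio>

// Times a single Present() call, mirroring the Stopwatch measurement above.
void PresentAndMeasure(IDXGISwapChain* swapChain) {
    LARGE_INTEGER freq, t0, t1;
    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&t0);

    // SyncInterval = 1 waits for vertical sync, so any extra composition
    // work over a maximized window shows up as time spent inside this call.
    swapChain->Present(1, 0);

    QueryPerformanceCounter(&t1);
    double ms = 1000.0 * double(t1.QuadPart - t0.QuadPart) / double(freq.QuadPart);
    printf("Present took %.2f ms\n", ms);
}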
In my specific case, switching to the Windows Classic theme with Aero disabled solved the issue, because frame performance got worse whenever the Windows Start button overlapped the resized window.
I'm developing a cross-platform photo retouching application based on Gtk-2 (but already able to support Gtk-3 with minor modifications).
In my program, the result of the image retouching is previewed in a scrollable area that is implemented through a Gtk::DrawingArea inserted into a Gtk::ScrolledWindow. The drawing itself is performed using Cairo.
Recently I had the chance to test the software on a MacBook Pro laptop with a retina display, and I immediately realised that the preview image gets magnified by a factor of 2, like all the rest of the GUI elements.
Is there a way to tell Cairo and the DrawingArea to use the native screen resolution, instead of applying a 2x magnification? Is this supported in recent Gtk-3 versions?
Thanks for your help.
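One direction that might be worth testing with recent Gtk-3 (a minimal sketch, assuming gtkmm-3 >= 3.10, where gtk_widget_get_scale_factor() is available; the class and member names below are made up for illustration): query the HiDPI scale factor and cancel it inside the draw handler, so one image pixel maps to one device pixel.
#include <gtkmm.h>

class PreviewArea : public Gtk::DrawingArea {
public:
    explicit PreviewArea(const Cairo::RefPtr<Cairo::ImageSurface>& image)
        : m_image(image) {}

protected:
    bool on_draw(const Cairo::RefPtr<Cairo::Context>& cr) override {
        const int scale = get_scale_factor();   // 2 on a retina display
        cr->scale(1.0 / scale, 1.0 / scale);    // undo the automatic magnification
        cr->set_source(m_image, 0, 0);          // image is in native device pixels
        cr->paint();
        return true;
    }

private:
    Cairo::RefPtr<Cairo::ImageSurface> m_image;
};
The widget's size request would still be in logical (application) pixels, so the scrollable extent inside the Gtk::ScrolledWindow may need to be divided by the same factor.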
I am using ILNumerics. I just tested some simple examples from the ILNumerics website.
If I choose to use the GDI renderer in the properties panel of the ILPanel control it works fine.
If I choose the OpenGL renderer, the scene is plotted incorrectly: it blinks, or is drawn only partially or not at all.
I use VS2010 Pro, Windows 7 64-bit, and a Dell XPS 17 with GeForce 550M graphics.
Fogcity runs without problems using OpenGL. Any idea?
You are probably not using the GeForce card but some onboard graphics chip for rendering. Make sure the NVIDIA card is used; check the NVIDIA Control Panel.
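As an aside for native C++ executables on Optimus laptops like this XPS 17 (it does not apply directly to a managed ILNumerics/.NET application, where the Control Panel route above is the practical one), NVIDIA documents an exported symbol that asks the driver to prefer the discrete GPU:
extern "C" {
    // Documented by NVIDIA ("Optimus rendering policies"); when exported from
    // the executable, a non-zero value requests the high-performance GPU.
    __declspec(dllexport) unsigned long NvOptimusEnablement = 0x00000001;
}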
I want to run two applications simultaneously: one that analyzes the image from a webcam, written using OpenCV (the image is acquired through a callback function), and an application that goes into fullscreen mode (let's say a 3D game). The problem is that while the fullscreen mode is active the webcam image stream stops: the frames simply don't turn up and the callback function isn't called. This seems to be an issue with OpenCV, so to test it a simple application displaying the image from the camera was prepared (sketched below).
Why could the image stream be blocked by the fullscreen mode? How can I bypass this?
Thanks for any hints.
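The simple test application mentioned above might look roughly like this (a minimal sketch using OpenCV's C++ API; the original test used the C interface, e.g. cvNamedWindow(), and the device index may differ):
#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    cv::VideoCapture cap(0);                     // default webcam
    if (!cap.isOpened()) {
        std::cerr << "cannot open camera\n";
        return 1;
    }
    cv::namedWindow("camera");
    cv::Mat frame;
    for (;;) {
        if (!cap.read(frame) || frame.empty()) {
            // This is the symptom under test: frames stop arriving while
            // another application holds the screen in fullscreen mode.
            std::cerr << "no frame received\n";
            continue;
        }
        cv::imshow("camera", frame);
        if (cv::waitKey(1) == 27) break;         // Esc quits
    }
    return 0;
}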
Your question does not say whether you have already searched for the problem in the OpenCV community, so I'm posting this as a hint just in case: http://tech.groups.yahoo.com/group/OpenCV/
Also check the list of issues; maybe it's a known bug: https://code.ros.org/trac/opencv/report/1
I'm not an OpenCV expert, so this is closer to a suggestion than an answer, but I've experienced something similar on my multi-monitor setup, using a number of media players on the second monitor and some fullscreen apps on the first.
In my limited testing, it comes down to what method is used to render the 3D app: DirectX seems to stop media players, OpenGL doesn't.
So it might not be OpenCV that has a problem; it may be what DirectX does to the hardware during a full-screen game.
Actually, the behaviour of the OpenCV camera stream is strange. It seems to depend on the native OpenCV window (cvNamedWindow()) that shows the output image from the webcam. If that window is on the same screen that went fullscreen, the streaming continues; if the camera window is placed on another screen, the stream stops.
Another curious thing happens with screen resolution changes. If you change the screen resolution while the camera window is not visible (closed, or even minimized), the image stream gets blocked.
These are just my observations on the topic; maybe they'll be helpful for someone.
I'm trying to write a simple app that takes a screen capture of another application and then renders that capture in the 'main' application (i.e., the application that took the screen capture).
I've figured out how to get the window handle and get the application's screen capture, but I'm having trouble rendering the captured screen in the 'main' application.
Using GDI, I have the following code to render:
Bitmap bit(hSrcbmp, hpal);   // Gdiplus::Bitmap from the captured HBITMAP (hpal may be NULL)
graphics.DrawImage(&bit, Gdiplus::PointF(0.0f, 0.0f));
where hSrcbmp is a bitmap of the captured screen and graphics is a GDI+ 'Graphics' object.
I get the following error after the constructor call to Bitmap:
Gdiplus::Image = {nativeImage=0x00000000 lastResult=Win32Error loadStatus=-858993460 }
* Using Visual Studio 2005
* Windows XP
* Visual C++ (non-managed)
Any ideas?
Another question: is there a better approach? C#, DirectX, or OpenGL?
Thanks
Screen capture has been a Win32 FAQ for 18 years.
See the win32 group for standard code (from MS and others), in C and C++.
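A rough sketch of that standard approach (hedged: error handling is omitted, hTarget and the capture size are assumed to come from the window-handle code already written, and GdiplusStartup must have been called once at application startup):
#include <windows.h>
#include <gdiplus.h>
#pragma comment(lib, "gdiplus.lib")

// Copies the window's client area into a device-dependent bitmap via BitBlt.
HBITMAP CaptureWindow(HWND hTarget, int width, int height)
{
    HDC hdcWindow = GetDC(hTarget);
    HDC hdcMem    = CreateCompatibleDC(hdcWindow);
    HBITMAP hBmp  = CreateCompatibleBitmap(hdcWindow, width, height);
    HGDIOBJ old   = SelectObject(hdcMem, hBmp);

    BitBlt(hdcMem, 0, 0, width, height, hdcWindow, 0, 0, SRCCOPY);

    SelectObject(hdcMem, old);
    DeleteDC(hdcMem);
    ReleaseDC(hTarget, hdcWindow);
    return hBmp;                         // caller owns the HBITMAP
}

// Draws the captured bitmap into the main application's DC with GDI+.
void DrawCapture(HDC hdcDest, HBITMAP hBmp)
{
    Gdiplus::Graphics graphics(hdcDest);
    Gdiplus::Bitmap bit(hBmp, NULL);     // NULL palette is fine for a DDB
    graphics.DrawImage(&bit, Gdiplus::PointF(0.0f, 0.0f));
}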