I have a Windows application that scrapes pixels from the screen for recording (in the form of a video) to a custom screen-sharing format. The problem is that on machines using a software cursor, blitting from the screen with SRCCOPY|CAPTUREBLT (so that layered windows also show up in the image) causes the cursor to blink, as described in Case of the Disappearing Cursor.
For single screen shots, this is not a problem, but when multiple screen shots are taken in rapid succession, the cursor blinks so fast that it sometimes seems to disappear altogether.
I have looked into using the Windows Media Encoder SDK (as described in a CodeProject article, see below) because it doesn't cause the cursor to blink, but there seems to be no way to directly access the frame data. Unfortunately, real-time encoding and the custom format are both requirements, which makes Windows Media Encoder unusable for this purpose.
I have also tried the DirectX way (described in the same article, see below), and it seems to suffer from the same problem.
Has anyone else run into this problem? There must be a way around it - many commercial screen sharing programs have no such problem.
article: www.codeproject.com/KB/dialog/screencap.aspx
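For reference, the GDI capture path in question looks roughly like the sketch below; the function name CaptureScreen and the primary-monitor-only dimensions are my own choices, not taken from the article.

```c
/* Rough sketch of the GDI capture path described above; CaptureScreen and
 * the primary-monitor assumption are illustrative, not from the article. */
#include <windows.h>

HBITMAP CaptureScreen(void)
{
    int w = GetSystemMetrics(SM_CXSCREEN);   /* primary monitor only */
    int h = GetSystemMetrics(SM_CYSCREEN);

    HDC screenDC = GetDC(NULL);
    HDC memDC    = CreateCompatibleDC(screenDC);
    HBITMAP bmp  = CreateCompatibleBitmap(screenDC, w, h);
    HGDIOBJ old  = SelectObject(memDC, bmp);

    /* CAPTUREBLT pulls in layered windows, but on machines with a software
     * cursor it is also what makes the cursor blink on every call. */
    BitBlt(memDC, 0, 0, w, h, screenDC, 0, 0, SRCCOPY | CAPTUREBLT);

    SelectObject(memDC, old);
    DeleteDC(memDC);
    ReleaseDC(NULL, screenDC);
    return bmp;   /* caller owns the bitmap; DeleteObject() when done */
}
```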
You can use the Magnification API on Windows Vista or later.
I cannot find a good approach for Windows XP.
What about using a mirror driver?
You are right, a mirror driver would certainly work. However, at the moment I am trying to stay away from that approach because of the security and permissions concerns when installing under a user without admin rights. Correct me if I am wrong, but I don't think there is any way to install a driver without such rights. Besides that, it seems needlessly complex: there should be a simpler, less invasive way to do this. (I should have mentioned this in my original question.)
Just copy the screen and the cursor separately and overlay them.
The thought I had to overcome the flicker is to "manually" draw your own copy of the mouse cursor and then make the BitBlt call, or to call BitBlt with just SRCCOPY and then manually capture any visible transparent windows over the top of it. I don't know how the commercial tools do it (or how Windows Media Encoder apparently does).
ref: http://us.generation-nt.com/xp-bitblt-captureblt-option-help-26970632.html
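A minimal sketch of the overlay idea from the two answers above, assuming the capture already sits in a memory DC and covers the full screen starting at (0,0): query the current cursor with GetCursorInfo and draw it into the bitmap yourself. The helper name DrawCursorOnCapture is mine.

```c
/* Sketch of "capture the screen and the cursor separately, then overlay":
 * after the BitBlt, draw the current cursor into the bitmap by hand.
 * Assumes the capture starts at the upper-left corner of the screen. */
#include <windows.h>

void DrawCursorOnCapture(HDC memDC)     /* memDC already holds the capture */
{
    CURSORINFO ci = { sizeof(ci) };
    if (!GetCursorInfo(&ci) || !(ci.flags & CURSOR_SHOWING))
        return;

    /* Subtract the hot spot so the cursor is drawn exactly where
     * the user sees it on screen. */
    ICONINFO ii;
    int hotX = 0, hotY = 0;
    if (GetIconInfo(ci.hCursor, &ii)) {
        hotX = ii.xHotspot;
        hotY = ii.yHotspot;
        if (ii.hbmMask)  DeleteObject(ii.hbmMask);
        if (ii.hbmColor) DeleteObject(ii.hbmColor);
    }

    DrawIconEx(memDC, ci.ptScreenPos.x - hotX, ci.ptScreenPos.y - hotY,
               ci.hCursor, 0, 0, 0, NULL, DI_NORMAL);
}
```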
I am looking for a way to reproduce this behaviour from Windows Explorer's drag & drop:
The cursor itself isn't visible with the default screen-capture feature of Windows, but the file's drag image beside it is.
I'm looking to reproduce that image following the cursor in a program of my own, setting the image myself. Having it be removed automatically on MOUSE_UP would be a big plus.
I'd like to use the Windows API directly so this process could work in the same way across languages, but if anyone has a better way I'm perfectly willing to hear it, of course.
Thanks.
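For what it's worth, this is not the shell's own drag-image mechanism (IDragSourceHelper), just a manual sketch of one way to make a bitmap follow the cursor and disappear on mouse-up: a topmost, click-through layered window moved on a timer. The class name DragGhost, the 16-pixel offset, the 10 ms timer and the 180/255 opacity are arbitrary choices, and the caller is assumed to run a normal message loop.

```c
/* Sketch: a "ghost" layered window that follows the cursor and destroys
 * itself on mouse-up. Names and constants here are illustrative only. */
#include <windows.h>

LRESULT CALLBACK GhostProc(HWND h, UINT m, WPARAM w, LPARAM l)
{
    switch (m)
    {
    case WM_TIMER:
    {
        POINT pt;
        GetCursorPos(&pt);
        /* Keep the ghost just below/right of the cursor, like Explorer does. */
        SetWindowPos(h, HWND_TOPMOST, pt.x + 16, pt.y + 16, 0, 0,
                     SWP_NOSIZE | SWP_NOACTIVATE);
        /* Remove the ghost as soon as the left button is released. */
        if (!(GetAsyncKeyState(VK_LBUTTON) & 0x8000))
            DestroyWindow(h);
        return 0;
    }
    case WM_DESTROY:
        KillTimer(h, 1);
        return 0;
    }
    return DefWindowProc(h, m, w, l);
}

/* hbmp is the image to show next to the cursor (width x height pixels). */
void ShowDragGhost(HINSTANCE hinst, HBITMAP hbmp, int width, int height)
{
    WNDCLASS wc = {0};
    wc.lpfnWndProc   = GhostProc;
    wc.hInstance     = hinst;
    wc.lpszClassName = TEXT("DragGhost");
    RegisterClass(&wc);

    /* WS_EX_TRANSPARENT keeps the ghost from stealing mouse input. */
    HWND ghost = CreateWindowEx(
        WS_EX_LAYERED | WS_EX_TRANSPARENT | WS_EX_TOPMOST | WS_EX_TOOLWINDOW,
        TEXT("DragGhost"), NULL, WS_POPUP, 0, 0, width, height,
        NULL, NULL, hinst, NULL);

    /* Paint the bitmap into the layered window at roughly 70% opacity. */
    HDC screen = GetDC(NULL);
    HDC mem = CreateCompatibleDC(screen);
    HGDIOBJ old = SelectObject(mem, hbmp);
    POINT zero = { 0, 0 };
    SIZE size = { width, height };
    BLENDFUNCTION blend = { AC_SRC_OVER, 0, 180, 0 };
    UpdateLayeredWindow(ghost, screen, NULL, &size, mem, &zero, 0, &blend, ULW_ALPHA);
    SelectObject(mem, old);
    DeleteDC(mem);
    ReleaseDC(NULL, screen);

    ShowWindow(ghost, SW_SHOWNOACTIVATE);
    SetTimer(ghost, 1, 10, NULL);   /* moved every 10 ms from GhostProc */
}
```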
Well, the title almost says it all: why should I not move a GUI (e.g. Gtk) window on screen from code? In Gtk 3 there was an API for moving windows on screen, but it was removed in Gtk 4 because it is not considered good practice to move a window from code; only the user should do so (don't ask me to provide sources for that, I read it somewhere but have forgotten where and cannot find it). But I cannot think of any reason why it would be bad, and I can think of several reasons why it could be useful, for example restoring the position of a window between application restarts. Could you please shed some light on this?
The major reason is that it can't possibly work cross-platform, so it is broken API by definition. That's also why it was removed in GTK4. For example, it is impossible to implement when running on top of a Wayland session, since the protocol doesn't allow getting or setting global coordinates. If you still want something similar to work, you'll have to call the specific platform API (for example, X11) for each platform you want to support.
On the reason why it's not supported by some display protocols: it's bad for UX and security. In terms of UX: some compositors have special behaviour because they need to work on a small device, or because they have a kiosk mode in which everything should always run fullscreen, or because they provide a tiling experience. Applications that position their own windows then tend to behave unexpectedly. In terms of security: if you allow this, it's technically possible for an application to reposition and resize itself so that it covers your screens while making itself transparent, without this being noticeable, which means it could scrape all input.
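To make the first point concrete, here is a sketch of what calling the platform API looks like with GTK4 on the X11 backend; the helper name move_window_if_x11 is mine, and on Wayland (or any other backend) the function deliberately does nothing.

```c
/* Sketch: move a GTK4 window, but only on the X11 backend.
 * move_window_if_x11 is an illustrative helper; on Wayland there is
 * simply no equivalent, so the function does nothing there. */
#include <gtk/gtk.h>
#ifdef GDK_WINDOWING_X11
#include <gdk/x11/gdkx.h>
#include <X11/Xlib.h>
#endif

static void move_window_if_x11(GtkWindow *window, int x, int y)
{
#ifdef GDK_WINDOWING_X11
    GdkSurface *surface = gtk_native_get_surface(GTK_NATIVE(window));
    GdkDisplay *display = gdk_surface_get_display(surface);

    if (GDK_IS_X11_DISPLAY(display)) {
        /* Global coordinates only exist at the Xlib level. */
        Display *xdisplay = gdk_x11_display_get_xdisplay(GDK_X11_DISPLAY(display));
        Window   xwindow  = gdk_x11_surface_get_xid(GDK_X11_SURFACE(surface));
        XMoveWindow(xdisplay, xwindow, x, y);
        XFlush(xdisplay);
    }
#else
    (void)window; (void)x; (void)y;   /* no-op on other backends */
#endif
}
```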
I am trying to set up individual window sharing for a project in Unity for Windows. The way I'm currently going about this is by using EnumWindows(), IsWindowVisible(), and GetWindowText() to create a dictionary of window titles and handles, then calling StartScreenCaptureByWindowId() to share the selected window.
This works relatively well for most processes: the window of the process, and only that window, is streamed. However, for certain programs (like Google Chrome, Discord, and Windows Photos) the captured area is set correctly, but overlapping windows are not culled out.
Does anyone know what could be causing this problem? Is there something wrong with the way I'm grabbing the handles for these windows? Or is there something about starting a screen capture that I am missing?
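For comparison, the enumeration half of this is plain Win32 and looks roughly like the sketch below; the wrapper name ListTopLevelWindows and the 256-character title buffer are my own choices, and the Agora call that actually shares the window is not shown.

```c
/* Sketch of enumerating visible, titled top-level windows with Win32;
 * the Agora call that actually shares a window is not shown here. */
#include <windows.h>
#include <stdio.h>

static BOOL CALLBACK OnWindow(HWND hwnd, LPARAM lparam)
{
    char title[256];

    /* Skip invisible windows and windows without a caption, the same
     * filtering the question describes. */
    if (!IsWindowVisible(hwnd))
        return TRUE;
    if (GetWindowTextA(hwnd, title, sizeof(title)) == 0)
        return TRUE;

    /* In the real application this pair would go into the dictionary of
     * titles and handles that feeds StartScreenCaptureByWindowId. */
    printf("%p  %s\n", (void *)hwnd, title);
    return TRUE;    /* keep enumerating */
}

void ListTopLevelWindows(void)
{
    EnumWindows(OnWindow, 0);
}
```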
You certainly did the correct things. However, you have also hit a limitation in the Windows part of the SDK. To understand this better: the programs in question are UWP applications, which expose their visible pixels differently. Previous versions of the Agora SDK could not show such windows at all. Starting from 3.0.1, the SDK uses a rectangle-cutting method to get the window contents. You may read more in the online documentation about that API here.
There isn't much Agora can do in the near term, so you will just need to manage the user experience (e.g. by warning users) or look at alternatives such as the Web SDK.
Up to now I have been using the PDA emulator in Visual Studio 2008 (I am using the Windows Mobile 6.1 Professional SDK).
So I just dragged and dropped most of my GUI components onto the form. In one case I made a panel and dynamically generated labels inside it at specific positions.
I then put it on my HP iPAQ 110 Classic PDA and it looked fine. Then, while looking through the emulators, I noticed one called Professional Square. When I ran my program on it, it looked awful.
I had missing labels, missing controls and it just looked horrible.
I thought maybe it would do some resizing for me, but it seems it either does a poor job or doesn't do it at all.
So how do you make a GUI that will work well on all mobile devices (or at least the vast majority of them)?
Is there a fixed number of device types? The emulator emulates a PDA and my program works on my HP device, so I had assumed that all Windows Mobile PDAs have the same screen size.
The next question is how do you make the controls position themselves properly from one device to another? I have heard of people using XML files that hold all the locations, sizes, and so on, which they load and then essentially use to generate the GUI dynamically.
But I could not find any examples of what such an XML file would look like, or of how to detect the device type so that I could load the right node of the file for that device.
I am not sure if there are other ways, but this seems better than maintaining a separate set of GUI forms for each device.
Also, would it be recommended to keep most things in a panel, so that even if the content is bigger you can at least turn auto-scrolling on?
thanks
I spent a good amount of time looking at different solutions for this problem (see my question here as well) and ended up with a pragmatic approach: consistent use of docking. You have to restrict yourself to the least common denominator, i.e. the lowest resolution you want to support, in terms of how much you can fit on the screen. The good news is that grids always use the entire available real estate, my forms flow correctly on all devices, and the screens don't look broken.
This is far from an easy task. You can follow some guidelines, but the only thing that will actually work is to always test the User Interface in all possible screen resolutions. Emulators are a good way to start; however, it is better to have an actual device. Some things, like font sizes and text readability, can only be tested on a real device. So, this is my advice:
Try to use docking for positioning your controls.
You need to be able to handle orientation changes correctly. Using docking helps, but again you always need to test in different screen resolutions.
At some point you will find out that it is inevitable to detect the screen size and adapt the User Interface dynamically. I don't agree that you should restrict yourself to displaying only what fits on the smallest screen. A professional application should adapt itself to the available screen size and take full advantage of it.
Structure your application so that it is easy to support new screen resolutions. Make the main User Interface code screen-size agnostic. Have it obtain all information about dynamic resizing and positioning from a configuration class. This way you only need to change a single item in your code in order to support a new screen resolution.
And of course:
Test in all possible screen resolutions. After even a minor change to the User Interface, retest.
Even though the above posts were helpful, this video I found solves all my problems, and you don't have to develop for the lowest screen resolution.
http://www.microsoft.com/events/series/detail/webcastdetails.aspx?seriesid=86&webcastid=5112
I was wondering if anyone could give me a starting point for capturing the entire screen in Windows Vista/7. I know how to do it in previous versions of Windows, but would really like to keep everything in the D3D stack, without resorting to GDI/BitBlt calls.
I realize that you can get a live thumbnail of a given window if you have the HWND using the DWM API, but how do you get a "thumbnail" of the entire desktop?
Thanks,
Alex
Unfortunately, the functions to do this are in dwmapi.dll and are undocumented. Someone figured out how to get the DirectX surface of another window on Vista and use it to capture the screen, but those functions don't work on Windows 7.
The best you can do is get thumbnails of individual windows; at least, that's all I've found.
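For reference, the documented per-window route is the DWM thumbnail API. A sketch, assuming hwndDest is a window in your own process and using an arbitrary 320x240 destination rectangle:

```c
/* Sketch: show a live DWM thumbnail of another window inside your own.
 * Link against dwmapi.lib; hwndDest is a window you own, hwndSource is
 * the window being "captured". Note this only renders into your window;
 * it does not hand you the pixels. */
#include <windows.h>
#include <dwmapi.h>

HTHUMBNAIL ShowThumbnail(HWND hwndDest, HWND hwndSource)
{
    HTHUMBNAIL thumb = NULL;
    if (FAILED(DwmRegisterThumbnail(hwndDest, hwndSource, &thumb)))
        return NULL;

    DWM_THUMBNAIL_PROPERTIES props;
    ZeroMemory(&props, sizeof(props));
    props.dwFlags = DWM_TNP_RECTDESTINATION | DWM_TNP_VISIBLE;
    props.fVisible = TRUE;
    SetRect(&props.rcDestination, 0, 0, 320, 240);   /* arbitrary size */

    DwmUpdateThumbnailProperties(thumb, &props);
    return thumb;   /* DwmUnregisterThumbnail(thumb) when done */
}
```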