I have an image application and I want to release it so that unregistered users can view the files but can't save until they've registered.
I'm looking for a way to prevent the user from using the built in screenshot functionality so I don't have to watermark the images. How might I accomplish this?
-- Edit Below --
I decided to watermark the images. I had been trying to avoid watermarking since the images are stereoscopic, but I'm rather happy with how the watermark looks now. I put a logo in the corner and offset it enough on each image so that it appears in the foreground.
Whether people agree with it in practice or not, my question is still valid. Apple's DVD Player hides the video in its screenshots, which doesn't altogether stop the user from taking screenshots but accomplishes my original goal.
I would still very much like to know how to do this (the DVD Player way).
Based on a symbol search through DVD Player, it likely uses the private API CGSSetWindowCaptureExcludeShape. Richard Heard has been kind enough to reverse engineer it and wrap it for easy use.
Being private, it may stop working (or have already stopped working) at any time.
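For what it's worth, a rough C sketch of calling it might look like the following. Everything here is an assumption: the prototypes are taken from reverse-engineering notes rather than any header Apple ships, and the names, signatures, and behavior could be wrong or stop working with any OS update.

```c
// Hypothetical sketch only: CGSSetWindowCaptureExcludeShape is a private,
// undocumented CoreGraphics call. The prototypes below are assumptions based
// on reverse-engineering notes and may be wrong or change at any time.
#include <ApplicationServices/ApplicationServices.h>

typedef int   CGSConnectionID;   // assumed
typedef void *CGSRegionRef;      // assumed

// Assumed private prototypes -- not part of any public header.
extern CGSConnectionID _CGSDefaultConnection(void);
extern CGError CGSNewRegionWithRect(const CGRect *rect, CGSRegionRef *region);
extern CGError CGSSetWindowCaptureExcludeShape(CGSConnectionID cid,
                                               CGWindowID wid,
                                               CGSRegionRef shape);

// Exclude a window's whole frame from screen captures (assumed behavior).
static void ExcludeWindowFromCapture(CGWindowID windowID, CGRect windowFrame)
{
    CGSRegionRef region = NULL;
    if (CGSNewRegionWithRect(&windowFrame, &region) == kCGErrorSuccess) {
        CGSSetWindowCaptureExcludeShape(_CGSDefaultConnection(),
                                        windowID, region);
    }
}
```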
But ultimately the answer to your question is "yes, but not in any publicly documented way". Some other takeaways from this lengthy thread are:
Asking this question inevitably excites a lot of myopic moral outrage.
Given there's no public method, reverse engineering DVD Player is a useful path to pursue.
A request to Apple DTS might be the only reliable method to find an answer.
DVD Player does this (the user can still take the screenshot, but the player window doesn't appear in it), so I'm sure there's a way. Maybe setting the window's sharing type to NSWindowSharingNone?
One option that is very user hostile is to change the folder in which screen captures are stored to a /dev/null style directory by changing the com.apple.screencapture setting.
A huge downside of this is that you might mess up the user's settings and not be able to restore them if your application doesn't exit cleanly.
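For completeness, a minimal sketch of that preference swap, assuming the relevant key is "location" in the com.apple.screencapture domain (and remembering that SystemUIServer usually has to be relaunched before it notices the change):

```c
// Sketch of the (user-hostile) preference swap described above. Assumes the
// "location" key in the com.apple.screencapture domain is what the screenshot
// machinery reads; save the old value so it can be restored on clean exit.
#include <CoreFoundation/CoreFoundation.h>

static CFPropertyListRef gSavedLocation = NULL;

static void RedirectScreenshots(CFStringRef throwawayDir)
{
    gSavedLocation = CFPreferencesCopyAppValue(CFSTR("location"),
                                               CFSTR("com.apple.screencapture"));
    CFPreferencesSetAppValue(CFSTR("location"), throwawayDir,
                             CFSTR("com.apple.screencapture"));
    CFPreferencesAppSynchronize(CFSTR("com.apple.screencapture"));
    // SystemUIServer usually has to be relaunched before it honors the change.
}

static void RestoreScreenshots(void)
{
    // Passing NULL as the value removes the key, i.e. restores the default.
    CFPreferencesSetAppValue(CFSTR("location"), gSavedLocation,
                             CFSTR("com.apple.screencapture"));
    CFPreferencesAppSynchronize(CFSTR("com.apple.screencapture"));
    if (gSavedLocation) CFRelease(gSavedLocation);
    gSavedLocation = NULL;
}
```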
Another option is to keep track of the files that are created in the screen capture location, check whether they match the screenshot naming pattern, and then remove them.
This method is still quite hostile though.
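A rough sketch of that watcher, assuming the default Desktop location and the English "Screen Shot" file name prefix (both vary between systems and localizations):

```c
// Sketch of the file-watcher approach: watch the screenshot folder and delete
// anything matching the default naming pattern. Assumes the default Desktop
// location and the English "Screen Shot" prefix -- both can differ per system.
#include <CoreServices/CoreServices.h>
#include <string.h>
#include <unistd.h>

static void ScreenshotCallback(ConstFSEventStreamRef stream, void *info,
                               size_t numEvents, void *eventPaths,
                               const FSEventStreamEventFlags flags[],
                               const FSEventStreamEventId ids[])
{
    char **paths = (char **)eventPaths;
    for (size_t i = 0; i < numEvents; i++) {
        // With kFSEventStreamCreateFlagFileEvents the paths are individual
        // files, so a simple substring match is enough for a sketch.
        if (strstr(paths[i], "Screen Shot") != NULL) {
            unlink(paths[i]);   // hostile: silently remove the screenshot
        }
    }
}

static FSEventStreamRef WatchScreenshotFolder(CFStringRef folder)
{
    CFArrayRef pathsToWatch = CFArrayCreate(NULL, (const void **)&folder, 1,
                                            &kCFTypeArrayCallBacks);
    FSEventStreamRef stream = FSEventStreamCreate(
        NULL, ScreenshotCallback, NULL, pathsToWatch,
        kFSEventStreamEventIdSinceNow, 0.3,
        kFSEventStreamCreateFlagFileEvents);   // per-file events (10.7+)
    CFRelease(pathsToWatch);
    FSEventStreamScheduleWithRunLoop(stream, CFRunLoopGetCurrent(),
                                     kCFRunLoopDefaultMode);
    FSEventStreamStart(stream);
    return stream;
}
```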
I also investigated whether it was possible to kill the process that handles the screen capture; unfortunately, that process, SystemUIServer, just relaunches after being killed.
SystemUIServer seems to refuse to take screenshots if DVD Player is currently playing a DVD. I have no idea how the DVD playback detection works, but it might be a lead for preventing screenshots.
Links
Technical details about Screenshots in Mac OS X
com.apple.screencapture details
ScreenCapture.strings - List of error messages from ScreenCapture
Disclaimer before people start ranting: I have a legit reason to solve this problem, but I won't use the com.apple.screencapture -> /dev/null method due to its downsides.
You could try to run your application fullscreen and then capture all the keystrokes. But please listen to siride.
No; that's a system feature.
Related
Well, the title almost says it all: why should I not move a GUI (e.g. Gtk) window on screen from code? In Gtk 3 there was an API for moving windows on screen, but it was removed in Gtk 4 because it is not good to move a window from code; only the user should do so (don't ask me to provide sources for that, I read it somewhere but have forgotten where and cannot find it). I cannot think of any reason why it would be bad, but I can think of several reasons why it could be good, for example restoring the position of a window between application restarts. Could you please shed some light on this?
The major reason why is that it can't possibly work cross-platform, so it is broken API by definition. That’s also why it was removed in GTK4. For example: this is impossible to implement when running on top of a Wayland session, since the protocol doesn't allow getting/setting global coordinates. If you still want to have something similar working, you'll have to call the specific platform API (for example, X11) for those platforms that you want to support.
On the reason why it's not supported by some display protocols: it's bad for UX and security. In terms of UX: some compositors can have special behaviour because they need to work on a small device, or because they have a kiosk mode in which everything should always run fullscreen, or because they provide a tiling experience. Applications that position their windows themselves then tend to behave unexpectedly. In terms of security: if you allow this, it's technically possible for an application to reposition and resize itself so that it covers your screens while making itself transparent, without it being noticeable, which means it can scrape all input.
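If you do go down the platform-specific route suggested above, a minimal GTK4 sketch for the X11 case might look like this (it deliberately does nothing on Wayland, where global coordinates simply aren't available):

```c
// Minimal GTK4 sketch: move a top-level window via the X11 backend only.
// Compile with the x11 GDK backend available; no-op on other backends.
#include <gtk/gtk.h>
#ifdef GDK_WINDOWING_X11
#include <gdk/x11/gdkx.h>
#endif

static void move_window_if_x11(GtkWindow *window, int x, int y)
{
#ifdef GDK_WINDOWING_X11
    GdkDisplay *display = gtk_widget_get_display(GTK_WIDGET(window));
    if (GDK_IS_X11_DISPLAY(display)) {
        GdkSurface *surface = gtk_native_get_surface(GTK_NATIVE(window));
        Display *xdisplay = gdk_x11_display_get_xdisplay(display);
        Window xwindow = gdk_x11_surface_get_xid(surface);
        XMoveWindow(xdisplay, xwindow, x, y);   // global coordinates, X11 only
    }
#endif
}
```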
WP7 mango now supports background agents (with some limitations): http://blogs.msdn.com/b/danielegan/archive/2011/10/18/background-agents-in-wp7.aspx
How can I inspect the user's current activity? Specifically, here is what I'd like to determine:
what is the active application
when was the last user interaction
I think the Microsoft way of doing this would be to provide developers with an API to modify Lock Screen system settings. This is not possible at the moment, but there is a petition to enable setting the wallpaper from an app here: http://wpdev.uservoice.com/forums/110705-app-platform/suggestions/1720049-provide-a-wallpaper-api-to-enable-in-app-setting-o?ref=title
This combined with the ability to change the screen timeout (not possible yet) would achieve the desired effect.
I take it you are probably after a custom screensaver with some other information on it and/or rendered/animated content. This wouldn't make sense. The whole point of a screen saver is to save the screen.
There is no better way to save the screen than to turn it off, which mobile devices handle pretty well.
You can't do either. Windows Phone does not support inspecting the user's activity or anything else that goes beyond the boundary of a 3rd party application's own domain.
The background agents are limited to the application that created them and the data related to it (in its isolated storage).
And this is a good thing, since it means creating mal- and spyware is a lot more difficult. Also, an application like the one you're attempting to make would essentially be considered spyware.
I have a little application here which deals with QuickTime video using QTKit.
This is my first Cocoa app, so I'm still pretty new to programming for OS X.
As the main functionality is now working, I was wondering how I could use and support external video hardware.
I just don't have a clue how to get started as I have never worked with external hardware before.
So, if there is a BlackMagic card installed in the machine the program is running on for example, how would I get to know that and how would I possibly have my QuickTime movie played out on this card instead of a QTMovieView on the computer monitor?
I would be glad if someone could point me in a direction!
Thank you very much.
The kind of graphics card you have installed shouldn't matter to QTKit when it comes to playing things back (it might look smoother and sexier to you, but it doesn't change what you call in the operating system).
To display content on a second monitor (or "external hardware", as you call it), you can get information on the various screens hooked to your Macintosh by calling [NSScreen screens]. Take a look at the rest of the NSScreen reference, too.
And once you get the hang of that, you can decide if you want to do full screen on the deepest screen (presumably the one with your expensive graphics card), or if you want to render on the largest screen (which you can determine from NSScreen's "frame" method), or the screen that isn't the "main screen" with the menu bar.
There's also lower level stuff available for you to use in Quartz. Here is Apple's "Quartz Display Services Reference" guide. I'd only recommend going this route when you feel sufficiently smart with Macintosh programming to go deeper.
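As a starting point for the Quartz route, here's a small C sketch that enumerates the active displays and picks one that isn't the main (menu-bar) display:

```c
// Minimal Quartz Display Services sketch: list the active displays and pick
// one that isn't the main (menu-bar) display for full-screen playback.
#include <ApplicationServices/ApplicationServices.h>
#include <stdio.h>

static CGDirectDisplayID SecondaryDisplay(void)
{
    CGDirectDisplayID displays[8];
    uint32_t count = 0;
    CGGetActiveDisplayList(8, displays, &count);

    for (uint32_t i = 0; i < count; i++) {
        if (displays[i] != CGMainDisplayID()) {
            CGRect bounds = CGDisplayBounds(displays[i]);
            printf("secondary display: %.0f x %.0f\n",
                   bounds.size.width, bounds.size.height);
            return displays[i];
        }
    }
    return CGMainDisplayID();   // fall back to the only display
}
```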
Hope this helps you out!
I recently got myself a second monitor and I have been looking at software which offers the possibility to extend the taskbar to the second monitor. Software such as UltraMon and MultiMon offers this possibility.
I'd be interested to know what method they are using to replicate the taskbar. More precisely:
Is the second taskbar completely generated and managed by the software, or is it some sort of extension/modification of how Windows behaves?
How are the additional buttons on each window's title bar added? Is there some sort of templating system similar to what Stardock does?
How can you replicate the taskbar feel?
How can you remove the taskbar buttons of open programs from the main taskbar in order to move them to the software's own taskbar?
Would creating a second Start button actually involve some sort of image of said button, with the software making possible calls to the Windows API? (By possible, I mean I have no idea if such calls exist.)
Finally, I'd be interested to know what field of knowledge is required to program such software.
I'd be glad to receive any pointers to articles or information that would lead to answers. If you have in-depth knowledge that you'd gladly share, I'd really appreciate it.
Thanks to all for your replies.
They completely re-create the experience. DisplayFusion uses the Desktop Window Manager API to capture live thumbnails. Scott Hanselman has a very good rundown on just how close they got and where they're different.
I would imagine there is a lot of ugly code required to get it as close as they've gotten it.
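For the live-thumbnail piece specifically, here's a minimal sketch of the DWM thumbnail calls, assuming you already have the source window's HWND and a window of your own to draw into:

```c
// Minimal DWM live-thumbnail sketch: render a live preview of another window
// inside your own window (this is the API DisplayFusion reportedly uses).
// Assumes you already have both HWNDs; link against dwmapi.lib.
#include <windows.h>
#include <dwmapi.h>

static HTHUMBNAIL ShowLiveThumbnail(HWND destination, HWND source, RECT dest)
{
    HTHUMBNAIL thumb = NULL;
    if (SUCCEEDED(DwmRegisterThumbnail(destination, source, &thumb))) {
        DWM_THUMBNAIL_PROPERTIES props;
        ZeroMemory(&props, sizeof(props));
        props.dwFlags = DWM_TNP_RECTDESTINATION | DWM_TNP_VISIBLE;
        props.rcDestination = dest;   // where to draw inside `destination`
        props.fVisible = TRUE;
        DwmUpdateThumbnailProperties(thumb, &props);
    }
    return thumb;   // release later with DwmUnregisterThumbnail(thumb)
}
```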
I have a windows application that scrapes pixels from the screen for recording (in the form of a video) to a custom screen-sharing format. The problem is that on machines using a software cursor, blitting from the screen with SRCCOPY|CAPTUREBLIT (so that layered windows also show up in the image) causes the cursor to blink, as described in Case of the Disappearing Cursor.
For single screen shots, this is not a problem, but when multiple screen shots are taken in rapid succession, the cursor blinks so fast that it sometimes seems to disappear altogether.
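For reference, the capture path that triggers the blink looks roughly like this; the CAPTUREBLT flag is what pulls in layered windows and also what makes the software cursor flicker:

```c
// The capture call that exhibits the problem: CAPTUREBLT pulls in layered
// windows, but on machines with a software cursor it also hides/redraws the
// cursor on every blit, which reads as a blink when called repeatedly.
#include <windows.h>

static HBITMAP CaptureScreen(void)
{
    int w = GetSystemMetrics(SM_CXSCREEN);
    int h = GetSystemMetrics(SM_CYSCREEN);

    HDC screenDC = GetDC(NULL);
    HDC memDC = CreateCompatibleDC(screenDC);
    HBITMAP bmp = CreateCompatibleBitmap(screenDC, w, h);
    HBITMAP old = (HBITMAP)SelectObject(memDC, bmp);

    BitBlt(memDC, 0, 0, w, h, screenDC, 0, 0, SRCCOPY | CAPTUREBLT);

    SelectObject(memDC, old);
    DeleteDC(memDC);
    ReleaseDC(NULL, screenDC);
    return bmp;
}
```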
I have looked into using the Windows Media Encoder SDK (as described in a CodeProject article, see below) because it doesn't cause the cursor to blink, but there seems to be no way to directly access the frame data. Unfortunately, real-time encoding and the custom format are both requirements, which makes Windows Media Encoder unusable for this purpose.
I have also tried the DirectX way (described in the same article, see below), and it seems to suffer from the same problem.
Has anyone else run into this problem? There must be a way around it - many commercial screen sharing programs have no such problem.
article: www.codeproject.com/KB/dialog/screencap.aspx
You can use the Magnification API on Windows Vista or later.
I cannot find a good approach for Windows XP.
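A heavily condensed sketch of how that capture path is usually wired up (the exact host/child window plumbing is glossed over, and MagSetImageScalingCallback is deprecated after Windows 7, but it is the call that exposes the raw bits):

```c
// Rough sketch of screen capture through the Magnification API (Vista+), which
// hands you the pixels in a callback without toggling the software cursor.
// Host/magnifier window setup is heavily condensed; link Magnification.lib.
#include <windows.h>
#include <magnification.h>

static BOOL CALLBACK OnScaledImage(HWND hwnd, void *srcdata,
                                   MAGIMAGEHEADER srcheader, void *destdata,
                                   MAGIMAGEHEADER destheader, RECT unclipped,
                                   RECT clipped, HRGN dirty)
{
    // srcdata points at the captured pixels (srcheader describes the format);
    // copy them into your encoder here.
    return TRUE;
}

static HWND gMagnifier;   // child control of class WC_MAGNIFIER

static BOOL SetUpMagnifier(HWND hostWindow)
{
    if (!MagInitialize())
        return FALSE;
    gMagnifier = CreateWindow(WC_MAGNIFIER, TEXT("capture"),
                              WS_CHILD | MS_SHOWMAGNIFIEDCURSOR,
                              0, 0, 1, 1, hostWindow, NULL,
                              GetModuleHandle(NULL), NULL);
    return gMagnifier && MagSetImageScalingCallback(gMagnifier, OnScaledImage);
}

static void CaptureFrame(RECT screenRect)
{
    // Re-source the magnifier for each frame; in practice you may also need to
    // invalidate the magnifier window to force the repaint that fires the
    // callback.
    MagSetWindowSource(gMagnifier, screenRect);
    InvalidateRect(gMagnifier, NULL, TRUE);
}
```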
What about using a mirror driver?
You are right, a mirror driver would certainly work. However, at the moment I am trying to stay away from that approach because of the security and permissions concerns when installing under a user without admin rights. Correct me if I am wrong, but I don't think there is any way to install a driver without such rights. Besides that, it seems that it would be needlessly complex: there should be a simpler / less invasive way to do this. (I should have mentioned this in my original question.)
Just copy the screen and the cursor separately and overlay them.
The thought I had to overcome the flicker is to "manually" draw your own copy of the mouse and then make the BitBlt call, or to call BitBlt with just SRCCOPY and then manually capture any visible transparent windows over the top of it. I don't know how the commercial stuff does it (or how Windows Media Encoder apparently does it).
ref: http://us.generation-nt.com/xp-bitblt-captureblt-option-help-26970632.html
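A sketch of the "copy the screen and the cursor separately" idea from above: blit with plain SRCCOPY (no CAPTUREBLT, so no flicker, at the cost of missing layered windows), then paint the current cursor into the memory DC yourself:

```c
// Sketch of the overlay approach: capture with plain SRCCOPY (no cursor
// flicker, but layered windows are skipped), then draw the cursor manually.
#include <windows.h>

static void DrawCursorOnCapture(HDC memDC)
{
    CURSORINFO ci;
    ci.cbSize = sizeof(ci);
    if (GetCursorInfo(&ci) && (ci.flags & CURSOR_SHOWING)) {
        ICONINFO ii;
        if (GetIconInfo(ci.hCursor, &ii)) {
            // Offset by the hotspot so the cursor lands where it really is.
            DrawIconEx(memDC,
                       ci.ptScreenPos.x - (int)ii.xHotspot,
                       ci.ptScreenPos.y - (int)ii.yHotspot,
                       ci.hCursor, 0, 0, 0, NULL, DI_NORMAL);
            if (ii.hbmMask)  DeleteObject(ii.hbmMask);
            if (ii.hbmColor) DeleteObject(ii.hbmColor);
        }
    }
}

static void CaptureWithCursor(HDC memDC, int w, int h)
{
    HDC screenDC = GetDC(NULL);
    BitBlt(memDC, 0, 0, w, h, screenDC, 0, 0, SRCCOPY);  // no CAPTUREBLT
    ReleaseDC(NULL, screenDC);
    DrawCursorOnCapture(memDC);
}
```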