I would like to access the whole contents of a Mac OS X screen, not to take a screenshot, but to modify the final (or as close to final as possible) rendering of the screen.
Can anyone point me in the direction of any Cocoa, Quartz, or other API documentation on this? In short, I would like to access and manipulate part of the OS X render pipeline, not just for one app but for the whole screen.
Thanks
Ross
Edit: I have found CGGetDisplaysWithOpenGLDisplayMask. I'm wondering if I can use OpenGL shaders on the main screen.
You can't install a shader on the screen, as you can't get to the screen's underlying GL representation. You can, however, access the pixel data.
Take a look at CGDisplayCreateImageForRect(). In the same documentation set you'll find information about registering callback functions to find out when certain areas of the screen are being updated.
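For a flavor of the API, here is a minimal sketch that grabs one region of the main display and writes it to disk (plain C; the region and output path are just examples; build with something like cc grab.c -framework ApplicationServices -framework CoreServices):

#include <ApplicationServices/ApplicationServices.h>

int main(void) {
    // Grab a 400x300 region from the top-left corner of the main display.
    CGImageRef image = CGDisplayCreateImageForRect(CGMainDisplayID(),
                                                   CGRectMake(0, 0, 400, 300));
    if (!image) return 1;

    // Write the pixels out as a PNG via ImageIO (path is arbitrary).
    CFURLRef url = CFURLCreateWithFileSystemPath(NULL, CFSTR("/tmp/grab.png"),
                                                 kCFURLPOSIXPathStyle, false);
    CGImageDestinationRef dest = CGImageDestinationCreateWithURL(url, kUTTypePNG, 1, NULL);
    CGImageDestinationAddImage(dest, image, NULL);
    CGImageDestinationFinalize(dest);

    CFRelease(dest);
    CFRelease(url);
    CGImageRelease(image);
    return 0;
}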
I've been looking through the Xlib docs attempting to create a simple XWindows application. I'm able to get a window up and running, modify its background colour using pixel colours, etc.
Unfortunately, when I try to create a graphics context and render some of the primitives (rectangles, arcs, etc.), nothing is rendered.
I then built and ran the example here to make sure I wasn't missing something and it also just rendered the background with none of the primitives.
Can anyone explain what I may be missing here?
If it matters I'm running Fedora 23 on kernel 4.4.1 using Gnome shell.
You need to add an event loop and move your drawing so that it happens after you receive an Expose event (also make sure you set the event mask when you create the window, or with an XSelectInput call). Most likely the result of your drawing is discarded at some point, and because you don't react to the "window area is damaged, needs re-painting" notification, all you see is the window background.
Take a look at this example
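For reference, a minimal self-contained sketch of that event-loop structure in plain Xlib C (build with cc demo.c -lX11; window size and shapes are arbitrary):

#include <X11/Xlib.h>

int main(void) {
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy) return 1;

    int screen = DefaultScreen(dpy);
    Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, screen),
                                     10, 10, 300, 200, 1,
                                     BlackPixel(dpy, screen),
                                     WhitePixel(dpy, screen));

    // Ask the server for Expose (damage) and key events.
    XSelectInput(dpy, win, ExposureMask | KeyPressMask);
    XMapWindow(dpy, win);

    GC gc = XCreateGC(dpy, win, 0, NULL);

    for (;;) {
        XEvent ev;
        XNextEvent(dpy, &ev);
        if (ev.type == Expose) {
            // Redraw every time the window is damaged, not just once at startup.
            XDrawRectangle(dpy, win, gc, 20, 20, 100, 60);
            XDrawArc(dpy, win, gc, 150, 40, 80, 80, 0, 360 * 64);
        } else if (ev.type == KeyPress) {
            break;  // any key quits
        }
    }

    XFreeGC(dpy, gc);
    XCloseDisplay(dpy);
    return 0;
}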
Is it possible to have enaml as a target for OpenCV?
I'm thinking about how to set up the GUI and what to use.
Nothing too complicated, I need to be able to set some bitmap background, draw rectangles and circles over it, but also have the possibility to select/move these graphics objects.
Also, I would like not to have to manage all these elements myself when I stretch the window, etc.; they should adjust automatically, since they would be defined in some "absolute" space. I think I could easily make it work for the bitmaps (even from memory) by overriding request_image in an ImageProvider object (even though I see some strange caching happening in the provider/enaml view).
The problem I'm having now with OpenCV (OS X, 64-bit) is that even when I get resizing to work with the Qt backend and CV_WINDOW_NORMAL, the content does not stretch.
I like OpenCV because I get basic UI functions easily.
On the other hand, I have started to like enaml, so I'm wondering whether anyone has managed to get these two to work together.
I'm thinking that if the matplotlib (MPL) integration works, coupling with OpenCV should be possible too :)
Thanks!
If you can get your image into argb32 or png format, you can use an Enaml ImageView to display it.
Take a look at the ImageView example:
https://github.com/nucleic/enaml/blob/master/examples/widgets/image_view.enaml
This should do it:
from enaml.image import Image
from cv2 import imread, imencode

# Read the image with OpenCV, encode it to PNG in memory,
# then hand the raw bytes to Enaml's Image.
open_cv_image = imread('./cat.png')
png_image = imencode('.png', open_cv_image)[1].tobytes()
enaml_image = Image(data=png_image)
I am developing an application for viewing images.
I used Apple's PhotoScroller example to implement this application.
In my application I want to be able to draw on the image.
I had the idea to put a UIView with a transparent background on top and draw the lines via touch events. This solution turned out to be very slow because the images involved are very large, around 3700x2000 pixels.
I also tried a solution based on Apple's GLPaint example, which uses OpenGL, but it has a size limitation of 2048x2048 pixels.
Does anyone have an idea or an example of how I can implement this?
I think you should try and tile your image.
One option is using CATiledLayer. Have a look at this short tutorial.
Or you could try using CGContextDrawTiledImage to get your stuff done. Possibly this post from S.O. could help you get started.
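For a flavor of the CGContextDrawTiledImage route, here is a minimal C-level Quartz sketch (callable, for example, from a UIView's drawRect: with UIGraphicsGetCurrentContext(); the tile is assumed to be a CGImageRef you loaded yourself):

#include <CoreGraphics/CoreGraphics.h>

void DrawTiledBackground(CGContextRef ctx, CGImageRef tile, CGRect visibleRect) {
    CGContextSaveGState(ctx);
    // Clip so Quartz only draws tiles that intersect the visible area.
    CGContextClipToRect(ctx, visibleRect);
    // The rect given here is the size and placement of ONE tile;
    // Quartz repeats it to fill the entire clip region.
    CGRect tileRect = CGRectMake(0, 0,
                                 CGImageGetWidth(tile),
                                 CGImageGetHeight(tile));
    CGContextDrawTiledImage(ctx, tileRect, tile);
    CGContextRestoreGState(ctx);
}

Note that CGContextDrawTiledImage repeats one tile as a pattern; for scrolling a single huge image in pieces, CATiledLayer is usually the better fit.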
I would like to write a program that can mirror a portion of the main display into a new window. Ideally this new window could then be displayed on an external monitor. I have seen a utility for a flight sim that does this on a PC (a multifunction display extractor).
Click here for a screenshot of the program (MFD Extractor).
This would be a live window, i.e. a constantly updated video display, not just a static graphic.
I have looked at screen magnifiers and VNC clients for ideas, but I think I need to write something from scratch. I have tried to do some reading on OS X programming, but where do I start in terms of gaining access to the display? I somehow need to extract the graphics from a particular program. Is it best to go near the final output stage (the individual pixels sent to the display) or somewhere nearer the window management stage?
Any ideas or pointers would be much appreciated. I just need somewhere to start from.
Regards,
There are a few ways to do this:
Quartz Display Services will let you get access to the video memory for a screen.
Quartz Window Services (a.k.a. CGWindow) will let you create an image of everything that lies below a window. If you create a borderless, transparent, empty, high-level window whose frame occupies an entire screen, everything below it will be everything on that screen. (Of course, you could create a smaller window in order to copy a section of the screen.)
There's also a way to do it using OpenGL that I never fully understood. That technique is demonstrated by a couple of code samples, OpenGLScreenSnapshot and OpenGLCaptureToMovie. It's more or less obsoleted by CGWindow, though.
Each of those will get you an image that you can then show or write to a file or something.
To show an image, use NSImageView or IKImageView. If you want to magnify it, IKImageView has a zoomFactor property, but if you want nearest-neighbor scaling (like Pixie, DigitalColor Meter, or xScope), I think you'll need to write a custom view for that (but even that isn't all that hard).
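If you go the CGWindow route, the core call is a one-liner; a minimal sketch (plain C; build with cc mirror.c -framework ApplicationServices; overlayWindow is assumed to be the window ID of your borderless full-screen window):

#include <ApplicationServices/ApplicationServices.h>

CGImageRef CaptureBelowWindow(CGWindowID overlayWindow, CGRect screenRect) {
    // kCGWindowListOptionOnScreenBelowWindow composites every on-screen
    // window that sits below overlayWindow in z-order into one image.
    return CGWindowListCreateImage(screenRect,
                                   kCGWindowListOptionOnScreenBelowWindow,
                                   overlayWindow,
                                   kCGWindowImageDefault);
}

Calling this repeatedly (from a timer, say) gives you the live, constantly updated view the question asks for.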
Pardon my frustration. I've asked about this in many places, and I seriously can't believe there's no way in the Windows 7 SDK to accomplish this.
All I want is to capture part of a 'child window' (SetParent()) created by a parent. I used to do this with BitBlt(), but the catch is that the child window can be any type of application, and in my case it has OpenGL running in a section of it. If I BitBlt() that, the OpenGL part comes out blank; it doesn't get written to the BMP.
DWM, particularly DwmRegisterThumbnail(), doesn't allow thumbnail generation of child windows. So please give me a direction.
Thanks.
It's been a while since I did any of this, so my explanation might be a bit vague, but from what I remember, Windows doesn't "see" the OpenGL content rendered inside the window.
What Windows does is create the window at the specified size and then "hands it over" to OpenGL for rendering. This means that you can't get at the pixels as rendered from the Windows side of the code.
When we wanted to capture the 3D content we had to re-render the scene to an off-screen bitmap, which was then saved (or printed).
Obviously a whole screen capture (Print Screen) works because it's reading the final pixels.
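For completeness, if you do control a GL context at some point, the re-render-and-read-back approach boils down to glReadPixels; a hedged sketch (assumes a current context of size w x h):

#include <windows.h>   // must come before GL/gl.h on Windows
#include <GL/gl.h>
#include <stdlib.h>

unsigned char *ReadBackFramebuffer(int w, int h) {
    unsigned char *pixels = (unsigned char *)malloc((size_t)w * h * 4);
    glPixelStorei(GL_PACK_ALIGNMENT, 1);  // rows are tightly packed
    glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    return pixels;  // rows come back bottom-up; flip for a top-down BMP
}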
I suggest that you:
Forget the Thumbnail part of the task (in terms of capture).
Calculate where your window is.
Capture full screen.
Excise the area you are interested in (using data from step 2).
Rescale to the appropriate thumbnail size.
Sorry, it's more work, but it should work, which is better than what you have right now.
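A minimal sketch of steps 2 through 5 in plain Win32 C (error handling omitted; the child HWND is assumed to be one you already have; links against user32 and gdi32):

#include <windows.h>

HBITMAP CaptureWindowAsThumbnail(HWND child, int thumbW, int thumbH) {
    // Step 2: where is the window, in screen coordinates?
    RECT rc;
    GetWindowRect(child, &rc);
    int w = rc.right - rc.left, h = rc.bottom - rc.top;

    // Step 3: read from the screen DC; this sees the final composed
    // pixels, so the OpenGL area is included.
    HDC screenDC = GetDC(NULL);
    HDC thumbDC  = CreateCompatibleDC(screenDC);
    HBITMAP thumb = CreateCompatibleBitmap(screenDC, thumbW, thumbH);
    HGDIOBJ old = SelectObject(thumbDC, thumb);

    // Steps 4 and 5 in one call: copy just the window's rectangle out of
    // the screen and scale it down to the thumbnail size.
    SetStretchBltMode(thumbDC, HALFTONE);
    StretchBlt(thumbDC, 0, 0, thumbW, thumbH,
               screenDC, rc.left, rc.top, w, h, SRCCOPY);

    SelectObject(thumbDC, old);
    DeleteDC(thumbDC);
    ReleaseDC(NULL, screenDC);
    return thumb;  // caller frees with DeleteObject()
}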
This may help:
http://code.google.com/p/telekinesis/source/browse/trunk/Mac/Source/glgrab.c?r=140
http://www.codeproject.com/KB/dialog/screencap.aspx
Also Java's Robot class (http://java.sun.com/javase/6/docs/api/java/awt/Robot.html#createScreenCapture%28java.awt.Rectangle%29)
I don't have access to the source code of any child window that may be open, including the one with OpenGL.