L.S.,
A year ago I made a very simple screen saver using Quartz Composer on Snow Leopard (SL).
The screen saver captures the input of the built-in camera with the "video capture" patch and uses the images as input for the environment parameter of the "GLSL shader" patch, as found in the GLSL Environment Map.qtz stock example. The shader in turn maps the video capture onto the famous 3D teapot, creating the illusion of a chrome teapot mirroring the person in front of the iMac or MacBook. You can find the screen saver here: Compressed QC source
Under Mountain Lion (ML) the output of the video capture fails to function as input for the environment of the GLSL shader patch.
The video capture itself still works, though, because you can still use it as the input for the image parameter of the teapot patch.
Furthermore, it doesn't matter whether I run the screen saver as a screen saver or in the QC runner.
Does anybody have any idea what's happening? The question boils down to: why is it no longer possible under ML to use the video capture output as the environment for the GLSL shader patch?
The screen saver, as simple as it is, is quite popular and it would be a shame if people can't enjoy it any more.
I'm eagerly looking forward to a solution!
Related
I have a question. Is there a way to take a picture or save the current frame while a Tango app is running, and how can I achieve this kind of behaviour?
I just used ReadPixels with the dimensions of the screen as parameters; it worked in Unity.
I'm trying to do pretty much what TangoARScreen does but with multithreaded rendering on in Unity. I did some experiments and I'm stuck.
I tried several things, such as letting Tango render into the OES texture that would then be blitted into a regular Texture2D in Unity, but OpenGL keeps complaining about an invalid display when I try to use it. Probably OnTangoCameraTextureAvailable is not even called in the correct GL context? Hard to say when you have no idea how Tango Core works internally.
Is registering a YUV texture via TangoService_Experimental_connectTextureIdUnity the way to go? I'd have to deal with YUV-to-RGB conversion, I assume. Or should I use OnTangoImageMultithreadedAvailable and deal with the buffer? Render it with a custom shader, for instance? The documentation is pretty blank in these areas and every experiment means several wasted days at least. Did anyone get this working? Could you point me in the right direction? All I need is the live camera image rendered into Unity's camera background.
From the April 2017 (Gankino) release notes: "The C API now supports getting the latest camera image's timestamp outside a GL thread .... Unity multithreaded rendering support will get added in a future release." So I guess we need to wait a little bit.
Multithreaded rendering can still be used in applications without a camera feed (motion tracking only), by choosing "YUV Texture and Raw Bytes" as the overlay method in the Tango Application script.
How can I capture the screen with Haskell on Mac OS X?
I've read Screen capture in Haskell?, but I'm working on a Mac Mini, so the Windows solution is not applicable, and the GTK solution does not work because on Macs it only captures a black screen.
How can I capture the screen with … and OpenGL?
Only with some luck. OpenGL is primarily a drawing API, and the contents of the main framebuffer are undefined unless they are drawn to by OpenGL functions themselves. That OpenGL could be abused for this at all was due to the way graphics systems used to manage their on-screen windows' framebuffers: after a window without a predefined background color/brush was created, its initial framebuffer content was simply whatever was on the screen right before the window's creation. If an OpenGL context was created on top of this, the framebuffer could be read out using glReadPixels, creating a screenshot that way.
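For illustration, a minimal sketch of that old trick, assuming a current OpenGL context whose framebuffer covers the screen area of interest (window and context creation omitted):

/* Minimal sketch of the glReadPixels trick described above.
 * Assumes a current OpenGL context covering the screen area of interest;
 * on a compositing window system the result is undefined, as explained below. */
#include <stdlib.h>
#include <OpenGL/gl.h>   /* <GL/gl.h> outside of OS X */

unsigned char *grab_front_buffer(int width, int height)
{
    unsigned char *pixels = malloc((size_t)width * height * 4);
    if (!pixels)
        return NULL;

    glReadBuffer(GL_FRONT);               /* read what is on screen, not the back buffer */
    glPixelStorei(GL_PACK_ALIGNMENT, 1);  /* tightly packed rows */
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);

    return pixels;                        /* bottom-up RGBA; caller frees */
}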
Today window compositing has become the norm, which makes abusing OpenGL for taking screenshots almost impossible. With compositing, each window has its own off-screen framebuffer and the screen's contents are composited only at the end. If you use the method outlined above, which relies on uninitialized memory containing the desired content, on a compositing window system, the results will vary wildly: from a solid clear color, over distorted junk fragments, to plain noise.
Since taking a screenshot reliably must take into account a lot of idiosyncrasies of the system it is to happen on, it's virtually impossible to write a truly portable screenshot program.
And OpenGL is definitely the wrong tool for it, no matter that people (including myself) were able to abuse it for this in the past.
I programmed this C code to capture the screen of Macs and to show it in an OpenGL window through the function glDrawPixels:
opengl-capture.c
http://pastebin.com/pMH2rDNH
Coding the FFI for Haskell is quite trivial. I'll do it soon.
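In case the pastebin link goes away, here is a rough sketch of the same approach (not the exact pastebin code): grab the main display with CGDisplayCreateImage and copy the pixels into a plain buffer that glDrawPixels, or a Haskell FFI wrapper, could consume. Compile with clang capture.c -framework ApplicationServices.

/* Rough sketch: capture the main display into a BGRA buffer. */
#include <stdlib.h>
#include <ApplicationServices/ApplicationServices.h>

unsigned char *capture_main_display(size_t *out_width, size_t *out_height)
{
    CGImageRef image = CGDisplayCreateImage(CGMainDisplayID());
    if (!image)
        return NULL;

    size_t width  = CGImageGetWidth(image);
    size_t height = CGImageGetHeight(image);
    unsigned char *pixels = malloc(width * height * 4);
    if (!pixels) {
        CGImageRelease(image);
        return NULL;
    }

    /* Draw the screenshot into a bitmap context with a known BGRA layout. */
    CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(pixels, width, height, 8, width * 4,
                                             space,
                                             kCGImageAlphaPremultipliedFirst |
                                             kCGBitmapByteOrder32Little);
    if (ctx)
        CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), image);

    CGContextRelease(ctx);
    CGColorSpaceRelease(space);
    CGImageRelease(image);

    *out_width  = width;
    *out_height = height;
    return pixels;   /* caller frees; rows are top-down, unlike what glDrawPixels expects */
}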
This might be useful to find the solution in C:
NeHe Productions - Using gluUnProject
http://nehe.gamedev.net/article/using_gluunproject/16013/
Apple Mailing Lists - Re: Screen snapshot example code posted
http://lists.apple.com/archives/cocoa-dev/2005/Aug/msg00901.html
Compiling OpenGL programs on Windows, Linux and OS X
http://goanna.cs.rmit.edu.au/~gl/teaching/Interactive3D/2012/compiling.html
Grab Mac OS Screen using GL_RGB format
I am attempting to grab frames and preview the video from a Bodelin Proscope HR USB microscope. I have a simple Cocoa app using an AVCaptureSession with an AVCaptureDeviceInput for the Proscope HR and an AVCaptureVideoPreviewLayer displaying the output.
All of this works fine with the built-in iSight camera, but the output from the Proscope HR is garbled beyond recognition.
Using the bundled Proscope software, I sometimes see the same garbling when trying to use the higher resolutions. My suspicion is that the hardware is rather under-spec'd. This is bolstered by the fact that at the lowest resolution of 320x200 the bundled software grabs at 30 fps, but as you bump up the resolution the frame rate drops dramatically: down to 15 fps at 640x480, all the way down to 3.75 fps at the maximum resolution of 1600x1200.
EDIT: I originally thought that perhaps the frame rate being attempted by the AVCaptureSession was too high, but I have since confirmed that (at least in theory) the capture session is requesting the frame rate advertised by the AVCaptureDevice.
I should note that I have already tried all of the standard AVCaptureSessionPreset* constant presets defined in the headers, and none of them improved the results from the Proscope HR. (They did however appear to affect the built-in iSight in approximately the expected manner.)
Here is a screen capture showing the garbled output from the ProScope HR:
And just for comparison, the output from a generic WebCam:
According to the documentation you should configure AVCaptureDevice rather than AVCaptureSession.
EDIT:
The AV framework is built on top of IOKit and it fully relies on the hardware behaving correctly. In your case it looks like the root of your problem is hardware-related, so you should consider using IOKit directly.
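As a starting point only (this is nowhere near a capture pipeline), a minimal sketch of what "using IOKit directly" might begin with: enumerating the attached USB devices so the Proscope HR can be located and its properties inspected. Compile with clang usb-list.c -framework IOKit -framework CoreFoundation.

/* Starting-point sketch: list attached USB devices via IOKit. */
#include <stdio.h>
#include <CoreFoundation/CoreFoundation.h>
#include <IOKit/IOKitLib.h>
#include <IOKit/usb/IOUSBLib.h>

int main(void)
{
    io_iterator_t iter;
    kern_return_t kr = IOServiceGetMatchingServices(kIOMasterPortDefault,
                                                    IOServiceMatching(kIOUSBDeviceClassName),
                                                    &iter);
    if (kr != KERN_SUCCESS)
        return 1;

    io_service_t device;
    while ((device = IOIteratorNext(iter))) {
        /* "USB Product Name" is the registry key for the device's product string. */
        CFStringRef name = (CFStringRef)IORegistryEntryCreateCFProperty(
            device, CFSTR("USB Product Name"), kCFAllocatorDefault, 0);
        if (name) {
            char buf[256];
            if (CFStringGetCString(name, buf, sizeof buf, kCFStringEncodingUTF8))
                printf("USB device: %s\n", buf);
            CFRelease(name);
        }
        IOObjectRelease(device);
    }
    IOObjectRelease(iter);
    return 0;
}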
I would like to access the whole contents of a Mac OS X screen, not to take a screenshot, but to modify the final (or as final as possible) rendering of the screen.
Can anyone point me in the direction of any Cocoa / Quartz or other API documentation on this? In short, I would like to access and manipulate part of the OS X render pipeline, not for just one app but for the whole screen.
Thanks
Ross
Edit: I have found CGGetDisplaysWithOpenGLDisplayMask. I'm wondering if I can use OpenGL shaders on the main screen.
You can't install a shader on the screen, as you can't get to the screen's underlying GL representation. You can, however, access the pixel data.
Take a look at CGDisplayCreateImageForRect(). In the same documentation set you'll find information about registering callback functions to find out when certain areas of the screen are being updated.
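A minimal sketch of that approach, grabbing a region of the main display and getting at its pixel bytes (the update callbacks referred to are presumably the screen-refresh callback functions in the same Quartz Display Services reference):

/* Minimal sketch: capture a 256x256 region of the main display. */
#include <stdio.h>
#include <ApplicationServices/ApplicationServices.h>

int main(void)
{
    CGRect rect = CGRectMake(0, 0, 256, 256);
    CGImageRef image = CGDisplayCreateImageForRect(CGMainDisplayID(), rect);
    if (!image)
        return 1;

    /* Copy out the raw pixel data for inspection or processing. */
    CFDataRef data = CGDataProviderCopyData(CGImageGetDataProvider(image));
    printf("captured %ld bytes\n", (long)CFDataGetLength(data));

    CFRelease(data);
    CGImageRelease(image);
    return 0;
}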