I have a question: is there a way to take a picture, or save the current frame, while a Tango app is running, and how can I achieve this kind of behaviour?
I just used ReadPixels and passed the screen dimensions as parameters; it worked in Unity.
I am experimenting with using AR markers in my Tango app. The example Java and C applications are great for getting this working with the color camera; however, I want to try this with the fisheye camera (for its added field of view).
I tried the naive approach of simply changing the camera callback so that I was getting the fisheye image. Then, I passed this into the function TangoSupport.detectMarkers. This resulted in a TangoInvalid exception (presumably due to the arguments that I passed to the function being invalid).
Based on what I've tried thus far, it appears that the fisheye image is not supported by the detectMarkers function. Can someone connected to the project verify this? I couldn't find this in the documentation.
Assuming it's not supported by detectMarkers, does anyone have an idea of how to proceed? I am currently streaming the fisheye camera data to my laptop, where I undistort the fisheye image using some OpenCV code I wrote. Using this undistorted image, I am able to quite successfully find AprilTags (a bit different from Tango's tags) in the image.
Any pointers would be much appreciated.
I never found an easy way to do this, so I implemented my own version by using OpenCV Android SDK to undistort the fisheye image and then using apriltags (ported over to Android).
Here is a link to my code if anyone is interested: https://github.com/occamLab/MobilityGamesAndroid/tree/master/cane_game
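For anyone who wants a rough picture of what that pipeline looks like before digging through the repo, here is a minimal Java sketch using the OpenCV Android SDK. It is not code from the repo above: the undistortion maps and the tag-detector call are placeholders (Tango's fisheye camera uses an FOV distortion model, so you have to build the remap tables from your own calibration, and the detector API depends on whichever AprilTag port you use).

```java
// Sketch only: undistort a Tango fisheye frame with OpenCV, then hand the
// rectified image to an AprilTag detector. mapX/mapY and the detector call
// are assumptions -- build them from your own calibration and AprilTag port.
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.imgproc.Imgproc;

public class FisheyeTagPipeline {

    // Undistortion lookup tables, computed once from the fisheye intrinsics.
    private final Mat mapX;
    private final Mat mapY;

    public FisheyeTagPipeline(Mat mapX, Mat mapY) {
        this.mapX = mapX;
        this.mapY = mapY;
    }

    /** yPlane: luminance bytes of the fisheye frame (it is effectively grayscale). */
    public Mat undistort(byte[] yPlane, int width, int height) {
        Mat distorted = new Mat(height, width, CvType.CV_8UC1);
        distorted.put(0, 0, yPlane);

        Mat undistorted = new Mat();
        // Rectify the frame using the precomputed maps.
        Imgproc.remap(distorted, undistorted, mapX, mapY, Imgproc.INTER_LINEAR);

        // Pass `undistorted` to whatever AprilTag detector you ported, e.g.
        // detector.detect(undistorted); // hypothetical API, depends on the port
        return undistorted;
    }
}
```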
I'm trying to do pretty much what TangoARScreen does, but with multithreaded rendering enabled in Unity. I did some experiments and I'm stuck.
I tried several things, such as letting Tango render into the OES texture that would then be blitted into a regular Texture2D in Unity, but OpenGL keeps complaining about an invalid display when I try to use it. Probably OnTangoCameraTextureAvailable isn't even called in the correct GL context? Hard to say when you have no idea how Tango Core works internally.
Is registering a YUV texture via TangoService_Experimental_connectTextureIdUnity the way to go? I'd have to deal with YUV-to-RGB conversion, I assume. Or should I use OnTangoImageMultithreadedAvailable and deal with the buffer, rendering it with a custom shader, for instance? The documentation is pretty blank in these areas, and every experiment costs several wasted days at least. Did anyone get this working? Could you point me in the right direction? All I need is the live camera image rendered into Unity's camera background.
From the April 2017 (Gankino) release notes: "The C API now supports getting the latest camera image's timestamp outside a GL thread .... Unity multithreaded rendering support will get added in a future release." So I guess we need to wait a little bit.
Multithreaded rendering can still be used in applications without the camera feed (motion tracking only) by choosing "YUV Texture and Raw Bytes" as the overlay method in the Tango Application Script.
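For what it's worth, if you do go down the raw-bytes route, the YUV-to-RGB conversion mentioned above is just per-pixel arithmetic (in Unity you would normally do it in a fragment shader, but the math is the same). A minimal Java sketch, assuming an NV21 (YCrCb 4:2:0 semi-planar) buffer, which is what the Tango color camera callback delivers as far as I know:

```java
/** Sketch: convert an NV21 (YCrCb 4:2:0 semi-planar) buffer to packed ARGB.
 *  Floating-point BT.601 approximation; clamp keeps values in [0, 255]. */
public static int[] nv21ToArgb(byte[] nv21, int width, int height) {
    int[] argb = new int[width * height];
    int frameSize = width * height;
    for (int row = 0; row < height; row++) {
        for (int col = 0; col < width; col++) {
            int y = Math.max(0, (nv21[row * width + col] & 0xFF) - 16);
            // Chroma is subsampled 2x2 and interleaved as V,U after the Y plane.
            int uvIndex = frameSize + (row >> 1) * width + (col & ~1);
            int v = (nv21[uvIndex] & 0xFF) - 128;
            int u = (nv21[uvIndex + 1] & 0xFF) - 128;

            int r = clamp((int) (1.164f * y + 1.596f * v));
            int g = clamp((int) (1.164f * y - 0.813f * v - 0.391f * u));
            int b = clamp((int) (1.164f * y + 2.018f * u));
            argb[row * width + col] = 0xFF000000 | (r << 16) | (g << 8) | b;
        }
    }
    return argb;
}

private static int clamp(int c) {
    return Math.max(0, Math.min(255, c));
}
```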
I am developing an application for viewing images.
I used Apple's PhotoScroller example to implement this application.
In my application I want to be able to draw on the image.
I had the idea of putting a UIView with a transparent background on top and drawing the lines via touch events. This solution turned out to be very slow because the images involved are very large, around 3700x2000 pixels.
I also tried a solution based on Apple's GLPaint example, which uses OpenGL, but it has a size limitation of 2048x2048 pixels.
Does anyone have an idea or an example of how to implement this?
I think you should try and tile your image.
One option is using CATiledLayer. Have a look at this short tutorial.
Or you could try using CGContextDrawTiledImage to get your stuff done. This post from S.O. could help you get started.
I would like to access the whole contents of a Mac OS X screen, not to take a screenshot, but to modify the final (or as close to final as possible) rendering of the screen.
Can anyone point me in the direction of any Cocoa / Quartz or other API documentation on this? Ideally I would like to access and manipulate part of the OS X render pipeline, not just for one app but for the whole screen.
Thanks
Ross
Edit: I have found CGGetDisplaysWithOpenGLDisplayMask. Wondering if I can use OpenGL shaders on the main screen.
You can't install a shader on the screen, as you can't get to the screen's underlying GL representation. You can, however, access the pixel data.
Take a look at CGDisplayCreateImageForRect(). In the same documentation set you'll find information about registering callback functions to find out when certain areas of the screen are being updated.
I am currently writing a small graphical performance test benchmark for JavaFX.
Thus, I need to get the current FPS at which the JavaFX scene is being refreshed.
So far, I haven't found a solution how to accomplish this.
Does anyone know if there is some kind of event that I could use in order to get the FPS?
I don't think there is a specific event that gives the frame rate. This reference/example might help: JavaFX FPS Meter shows the frame rate while running, and it has a link to the source code.
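If you only need the number rather than the full meter from that example, a minimal sketch using AnimationTimer also works: handle() is invoked once per JavaFX pulse, so counting invocations per second approximates the refresh rate (note this measures pulses on the FX thread, not GPU timing).

```java
import javafx.animation.AnimationTimer;

/** Prints an FPS estimate once per second; handle() runs once per JavaFX frame. */
public class FpsCounter extends AnimationTimer {
    private long windowStart = 0;
    private int frames = 0;

    @Override
    public void handle(long nowNanos) {
        if (windowStart == 0) {          // first pulse: start the measurement window
            windowStart = nowNanos;
            return;
        }
        frames++;
        if (nowNanos - windowStart >= 1_000_000_000L) {
            System.out.println("FPS: " + frames);
            frames = 0;
            windowStart = nowNanos;
        }
    }
}

// Usage, e.g. in Application.start(): new FpsCounter().start();
```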