detecting AR markers in the fisheye camera? - google-project-tango

I am experimenting with using AR markers in my Tango app. The example Java and C applications are great for getting this working with the color camera; however, I want to try it with the fisheye camera (for its added field of view).
I tried the naive approach of simply changing the camera callback so that I was getting the fisheye image. I then passed this image into TangoSupport.detectMarkers, which threw a TangoInvalidException (presumably because the arguments I passed were invalid).
Based on what I've tried so far, it appears that the fisheye image is not supported by the detectMarkers function. Can someone connected to the project verify this? I couldn't find it in the documentation.
Assuming it's not supported by detectMarkers, does anyone have an idea of how to proceed? I am currently streaming the fisheye camera data to my laptop, where I undistort the fisheye image using some OpenCV code I wrote. Using this undistorted image, I am able to find AprilTags (a bit different from Tango's markers) quite reliably.
Any pointers would be much appreciated.

I never found an easy way to do this, so I implemented my own version using the OpenCV Android SDK to undistort the fisheye image (sketched below) and AprilTags ported over to Android.
Here is a link to my code if anyone is interested: https://github.com/occamLab/MobilityGamesAndroid/tree/master/cane_game
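For anyone who wants to try the same route, here is a minimal sketch of just the undistortion step using the OpenCV Android (Java) bindings; it is not the code from the repo above. It assumes you have already built the 3x3 camera matrix K and the distortion coefficients D from the device's fisheye intrinsics, and that OpenCV's fisheye (equidistant) model is a close enough fit for the Tango lens, which you should verify for your device.

    import org.opencv.calib3d.Calib3d;
    import org.opencv.core.CvType;
    import org.opencv.core.Mat;
    import org.opencv.core.Size;
    import org.opencv.imgproc.Imgproc;

    public class FisheyeUndistorter {
        private final Mat map1 = new Mat();
        private final Mat map2 = new Mat();

        // K: 3x3 camera matrix, D: 4x1 fisheye distortion coefficients.
        // Both are device-specific and treated as inputs here.
        public FisheyeUndistorter(Mat K, Mat D, Size imageSize) {
            Mat R = Mat.eye(3, 3, CvType.CV_64F); // no rectification rotation
            // Precompute the remap tables once; per-frame remapping is then cheap.
            Calib3d.fisheye_initUndistortRectifyMap(
                    K, D, R, K, imageSize, CvType.CV_16SC2, map1, map2);
        }

        // Undistort one grayscale fisheye frame so a tag detector can run on it.
        public Mat undistort(Mat grayFisheye) {
            Mat rectified = new Mat();
            Imgproc.remap(grayFisheye, rectified, map1, map2, Imgproc.INTER_LINEAR);
            return rectified;
        }
    }

The rectified image is then handed to the tag detector (AprilTag in my case), which only ever sees a pinhole-like image.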

Related

Saving a frame/picture using Tango Camera

I have a question: is there a way to take a picture or save the current frame while a Tango app is running, and how can I achieve this kind of behaviour?
I just used ReadPixels and passed the screen dimensions as parameters; it worked in Unity.

Camera texture in Unity with multithreaded rendering

I'm trying to do pretty much what TangoARScreen does, but with multithreaded rendering enabled in Unity. I did some experiments and I'm stuck.
I tried several things, such as letting Tango render into the OES texture that would then be blitted into a regular Texture2D in Unity, but OpenGL keeps complaining about an invalid display when I try to use it. Probably OnTangoCameraTextureAvailable isn't even called in the correct GL context? Hard to say when you have no idea how Tango Core works internally.
Is registering a YUV texture via TangoService_Experimental_connectTextureIdUnity the way to go? I'd have to deal with YUV2RGB conversion, I assume. Or should I use OnTangoImageMultithreadedAvailable and deal with the buffer, rendering it with a custom shader, for instance? The documentation is pretty blank in these areas and every experiment costs several days at least. Did anyone get this working? Could you point me in the right direction? All I need is the live camera image rendered into Unity's camera background.
From the April 2017 (Gankino) release notes: "The C API now supports getting the latest camera image's timestamp outside a GL thread ... Unity multithreaded rendering support will get added in a future release." So I guess we need to wait a little bit.
Multithreaded rendering can still be used in applications without a camera feed (motion tracking only) by choosing "YUV Texture and Raw Bytes" as the overlay method in the Tango Application script.

Detect Hand/Finger Gesture in javacv

I'm a beginner in JavaCV. I want to take a snapshot based on a hand gesture and then process the image. How can I detect hand/finger gestures in JavaCV? What technique should be used?
Where can I get JavaCV documentation [help/tutorials]?
There is no documentation for JavaCV, but OpenCV has it, and most of the method definitions are similar, so it should help.
To detect a hand you first need to capture an image; see this link: JAVACV: Webcam capturing using javacv.
Then use cvThreshold to convert the image to binary, and cvFindContours to get a boundary around the detected shape.
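Here is a rough sketch of that capture, threshold, and contour pipeline. It uses the newer Mat-based JavaCV API (JavaCV 1.5+, org.bytedeco.opencv packages) rather than the legacy cvThreshold/cvFindContours wrappers, and the camera index and threshold settings are placeholders to tune.

    import org.bytedeco.javacv.Frame;
    import org.bytedeco.javacv.OpenCVFrameConverter;
    import org.bytedeco.javacv.OpenCVFrameGrabber;
    import org.bytedeco.opencv.opencv_core.Mat;
    import org.bytedeco.opencv.opencv_core.MatVector;
    import static org.bytedeco.opencv.global.opencv_imgproc.*;

    public class HandContourDemo {
        public static void main(String[] args) throws Exception {
            // Grab one frame from the default webcam (index 0 is a placeholder).
            OpenCVFrameGrabber grabber = new OpenCVFrameGrabber(0);
            grabber.start();
            Frame frame = grabber.grab();
            Mat bgr = new OpenCVFrameConverter.ToMat().convert(frame);

            // Grayscale, then binarize; Otsu picks the threshold automatically.
            Mat gray = new Mat();
            cvtColor(bgr, gray, COLOR_BGR2GRAY);
            Mat binary = new Mat();
            threshold(gray, binary, 0, 255, THRESH_BINARY | THRESH_OTSU);

            // Outer contours only; the largest one is a candidate hand boundary.
            MatVector contours = new MatVector();
            findContours(binary, contours, RETR_EXTERNAL, CHAIN_APPROX_SIMPLE);
            System.out.println("Found " + contours.size() + " contours");

            grabber.stop();
        }
    }

From the hand contour you can go further with convex hulls and convexity defects to count fingers, but thresholding and contour extraction are the first two steps either way.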

Auto-cropping image with detection of crop-lines

I am working on a project: an Android app that uses the camera to capture a photo of a ticket and does OCR recognition on only part of it. I have no previous experience in image processing, but I know it has to be done in some memory-conscious way, because Android applications have small RAM limits.
I don't have enough reputation points to post images, so I'm giving URLs instead.
Below, I attach the image before any processing:
My aim is to automatically detect these lines of dashes (---) and crop the image so that the final image looks like this one:
What's more, it's important to stay open source and do it without sending the photo to some external image-processing service.
You can try using the Hough transform to find the lines. OpenCV has an implementation that is open source and works on Android.
HoughLinesP is a very efficient version of the Hough transform for finding line segments.
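For illustration, here is a minimal sketch of that idea with the OpenCV Java bindings (the same calls exist in the Android SDK); the Canny and Hough parameters below are guesses that would need tuning for real ticket photos.

    import org.opencv.core.Mat;
    import org.opencv.imgcodecs.Imgcodecs;
    import org.opencv.imgproc.Imgproc;

    public class CropLineFinder {
        // Returns an Nx1 Mat of line segments, each row holding (x1, y1, x2, y2).
        public static Mat findDashedLines(String path) {
            Mat gray = Imgcodecs.imread(path, Imgcodecs.IMREAD_GRAYSCALE);

            // HoughLinesP expects a binary edge image, so run Canny first.
            Mat edges = new Mat();
            Imgproc.Canny(gray, edges, 50.0, 150.0);

            // Probabilistic Hough transform; maxLineGap lets the short dashes
            // of the --- rows merge into longer, mostly horizontal segments.
            Mat lines = new Mat();
            Imgproc.HoughLinesP(edges, lines, 1.0, Math.PI / 180.0, 80, 30.0, 10.0);
            return lines;
        }
    }

Keeping only the near-horizontal segments and cropping between their y coordinates gives the region between the two dashed lines.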
Olena is definitely the way to go! It's a generic image processing library, but the interesting part is a module called Scribo.
Scribo will do document analysis on the picture to extract text and/or image regions, and optionally send text regions to tesseract for recognition.
Whether it's feasible on Android is something I couldn't tell. I've tried it on OS X and Linux systems and it shows great potential.

Easiest way to import image file to OpenGL ES 2.0 cross platform

I am learning to use OpenGL ES 2.0 by using MoSync to write cross platform C code. I have already managed to draw basic shapes such as a triangle, square and circle so the next stage is to draw some text to the screen. After reading various books, tutorials and forum posts I realise I have to create a texture atlas bitmap.
I have a file with the text I want to use, i.e. an image file covering 0-9 and a-z. Before I can bind it to a texture object I first need to load the image into OpenGL. Various tutorials use UIImage or BitmapFactory to load the image, but I cannot use these as MoSync does not include their header files. Could anyone suggest a way to load my image file into OpenGL?
To use MoSync on the Android platform you are probably going to have to build a native library for MoSync and write your OpenGL ES code in C++. Most OpenGL ES projects on Android are done in native code, for reasons detailed in this article:
http://software.intel.com/en-us/articles/porting-opengl-games-to-android-on-intel-atom-processors-part-1/
I ended up using maOpenGLTexImage(MAHandle image), which works just like glTexImage2D() but takes an image resource instead and figures out pixel formats, etc.
