Detect Hand/Finger Gesture in JavaCV

I'm a beginner in JavaCV. I want to take a snapshot based on a hand gesture and then process the image. How can I detect hand/finger gestures in JavaCV? What technique should be used?
Where can I get JavaCV documentation [help/tutorials]?

There is no dedicated documentation for JavaCV, but OpenCV has documentation, and most of the method definitions are similar, so it will help.
To detect a hand you first need to capture an image; see this link: JAVACV : Webcam capturing using javacv
Then use cvThreshold to convert the image to binary, and cvFindContours to get a boundary around the detected shape.
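Putting those steps together, here is a minimal sketch using the current bytedeco JavaCV API, where threshold and findContours are the modern equivalents of cvThreshold and cvFindContours. The camera index and the Otsu thresholding choice are assumptions you will need to tune for your lighting and background.

```java
import org.bytedeco.javacv.Frame;
import org.bytedeco.javacv.OpenCVFrameConverter;
import org.bytedeco.javacv.OpenCVFrameGrabber;
import org.bytedeco.opencv.opencv_core.Mat;
import org.bytedeco.opencv.opencv_core.MatVector;
import static org.bytedeco.opencv.global.opencv_imgproc.*;

public class HandContour {
    public static void main(String[] args) throws Exception {
        // Grab one frame from the default webcam (device 0 is an assumption).
        OpenCVFrameGrabber grabber = new OpenCVFrameGrabber(0);
        grabber.start();
        Frame frame = grabber.grab();
        Mat color = new OpenCVFrameConverter.ToMat().convert(frame);

        // Convert to grayscale, then to binary with Otsu's threshold
        // (the modern equivalent of the old cvThreshold call).
        Mat gray = new Mat();
        cvtColor(color, gray, COLOR_BGR2GRAY);
        Mat binary = new Mat();
        threshold(gray, binary, 0, 255, THRESH_BINARY + THRESH_OTSU);

        // Find contours (modern equivalent of cvFindContours); the hand is
        // usually the contour with the largest area.
        MatVector contours = new MatVector();
        Mat hierarchy = new Mat();
        findContours(binary, contours, hierarchy, RETR_EXTERNAL, CHAIN_APPROX_SIMPLE);

        long biggest = -1;
        double maxArea = 0;
        for (long i = 0; i < contours.size(); i++) {
            double area = contourArea(contours.get(i));
            if (area > maxArea) { maxArea = area; biggest = i; }
        }
        System.out.println("Largest contour area: " + maxArea + " (index " + biggest + ")");
        // From here, convexHull/convexityDefects on the largest contour can be
        // used to estimate fingertip positions and recognize simple gestures.
        grabber.stop();
    }
}
```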

Related

canvas for pixel drawing in GTK3 (gtk-rs)

I just started to use GTK 3, and I am overwhelmed. I want to draw a pixel rendering (a function graph, drawn in real time) in a window. I was able to create a window by following some examples, but I can't find information on pixel drawing. I need keywords to google (is it called a 'canvas'? A pixel map? A drawing area?), and some advice on how to do it right. Should I keep a separate buffer and copy it over every time I need to update the window? Or can I just draw into an existing GTK object?
For examples, you can generally check the documentation page of the crate itself; examples are shown in the crate documentation.
You can check the documentation of the crate here.
You can draw your shapes onto the drawing area, which is shown as an example in the documentation itself.
As far as I have seen, the Rust bindings are pretty much a port, with the same function and struct names as GTK 3.0.
I need keywords to Google
You can basically google for the GTK library itself and its examples to get insight into GTK, and then implement things easily with the help of the documentation.
Getting started with GTK 3.0

detecting AR markers in the fisheye camera?

I am experimenting with using AR markers in my Tango app. The example Java and C applications are great for getting this working with the color camera, however, I want to try this with the fisheye camera (for added field of view).
I tried the naive approach of simply changing the camera callback so that I was getting the fisheye image. Then I passed this into the function TangoSupport.detectMarkers. This resulted in a TangoInvalid exception (presumably because the arguments I passed to the function were invalid).
Based on what I've tried so far, it appears that the fisheye image is not supported by the detectMarkers function. Can someone connected to the project verify this? I couldn't find it in the documentation.
Assuming it's not supported by detectMarkers, does anyone have an idea of how to proceed? I am currently streaming the fisheye camera data to my laptop, where I undistort the fisheye image using some OpenCV code I wrote. Using this undistorted image, I am able to quite successfully find AprilTags (a bit different from the Tango's tags) in the image.
Any pointers would be much appreciated.
I never found an easy way to do this, so I implemented my own version by using the OpenCV Android SDK to undistort the fisheye image and then running AprilTags (ported over to Android) on the result; a rough sketch of the undistortion step follows the link below.
Here is a link to my code if anyone is interested: https://github.com/occamLab/MobilityGamesAndroid/tree/master/cane_game
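For anyone trying the same route, here is a hedged sketch of the undistortion step using the fisheye model in the OpenCV Java bindings. The intrinsics K and distortion coefficients D below are placeholders; real values would come from calibrating the fisheye camera (e.g. with Calib3d.fisheye_calibrate).

```java
import org.opencv.calib3d.Calib3d;
import org.opencv.core.CvType;
import org.opencv.core.Mat;

public class FisheyeUndistort {
    // Undistorts one fisheye frame. K (3x3 intrinsics) and D (4x1 fisheye
    // distortion coefficients) must come from calibrating your own camera;
    // the numbers used here are placeholders only.
    public static Mat undistort(Mat distorted) {
        Mat K = Mat.eye(3, 3, CvType.CV_64F);
        K.put(0, 0, 250.0);                     // fx (placeholder)
        K.put(1, 1, 250.0);                     // fy (placeholder)
        K.put(0, 2, distorted.cols() / 2.0);    // cx: assume principal point at center
        K.put(1, 2, distorted.rows() / 2.0);    // cy

        Mat D = Mat.zeros(4, 1, CvType.CV_64F); // k1..k4 (placeholders)

        Mat undistorted = new Mat();
        // Applies the fisheye (equidistant) distortion model; passing K again
        // as Knew keeps the original image scale.
        Calib3d.fisheye_undistortImage(distorted, undistorted, K, D, K);
        return undistorted;
    }
}
```

The undistorted frames can then be handed to an AprilTag detector, as described above.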

GUI to view values in an image using OpenCV in Ubuntu 12.04

Is it possible to simultaneously display an image and the pixel/coordinate values at the mouse pointer position?
I am asking for an OpenCV equivalent of the imview function in MATLAB.
You don't need Qt to do that. Just use the default OpenCV function imshow to show the image and setMouseCallback to set a callback for mouse events.
It can be done using mouse callback events. You can find a good example in \opencv\samples\cpp\grabcut.cpp
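Since this page is about JavaCV, the same idea can be sketched in Java without Qt: show the image in a JavaCV CanvasFrame and attach a plain Swing mouse-motion listener to its canvas. Note this swaps OpenCV's setMouseCallback for Swing events; the file name is a placeholder, and it assumes the image is displayed at 1:1 scale.

```java
import java.awt.event.MouseEvent;
import java.awt.event.MouseMotionAdapter;
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;
import org.bytedeco.javacv.CanvasFrame;

public class PixelViewer {
    public static void main(String[] args) throws Exception {
        // Load the image with plain Java; JavaCV's frame converters could be
        // used instead if the image already lives in an OpenCV Mat.
        BufferedImage image = ImageIO.read(new File("input.png")); // path is a placeholder

        CanvasFrame canvas = new CanvasFrame("Pixel viewer");
        canvas.setDefaultCloseOperation(javax.swing.JFrame.EXIT_ON_CLOSE);
        canvas.showImage(image);

        // Show the RGB value under the cursor in the window title,
        // assuming the canvas displays the image at 1:1 scale.
        canvas.getCanvas().addMouseMotionListener(new MouseMotionAdapter() {
            @Override
            public void mouseMoved(MouseEvent e) {
                int x = e.getX(), y = e.getY();
                if (x < image.getWidth() && y < image.getHeight()) {
                    int rgb = image.getRGB(x, y);
                    canvas.setTitle(String.format("(%d, %d) R=%d G=%d B=%d",
                            x, y, (rgb >> 16) & 0xFF, (rgb >> 8) & 0xFF, rgb & 0xFF));
                }
            }
        });
    }
}
```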
I had a few problems trying to do this with OpenCV alone using some old code I wrote a while back. At this point I'm not sure if I missed something or if it's a bug in OpenCV; I'll investigate further.
But I shared a short, self-contained, correct (compilable) example in my repository; check cvImage. It's written in C++ with Qt and OpenCV: a Qt application that loads an image with OpenCV and displays the RGB values as the title of the Qt window.
Move the mouse around and place the cursor on top of the pixel you are interested in to see its RGB value.
OpenCV with Qt support will do that.

Drawing large images for iPad

I am developing an application for viewing images.
I used Apple's PhotoScroller example to implement this application.
In my application I want to be able to draw on the image.
I had the idea to put a UIView with a transparent background on top and draw the lines via touch events. This solution turned out to be very slow because the images are very large, around 3700x2000 pixels.
I also tried a solution based on Apple's GLPaint example, which uses OpenGL, but it has a size limitation of 2048x2048 pixels.
Anyone have any idea or example of how I implement this?
I think you should try to tile your image.
One option is using CATiledLayer; have a look at this short tutorial.
Or you could try using CGContextDrawTiledImage to get this done. This post from S.O. could help you get started.

Is it possible to intercept the image data of Mac OSX Screen?

I would like to access the whole contents of a Mac OS X screen, not to take a screenshot, but to modify the final (or as final as possible) rendering of the screen.
Can anyone point me in the direction of any Cocoa/Quartz or other API documentation on this? In short, I would like to access and manipulate part of the OS X render pipeline, not for just one app but for the whole screen.
Thanks
Ross
Edit: I have found CGGetDisplaysWithOpenGLDisplayMask. I am wondering if I can use OpenGL shaders on the main screen.
You can't install a shader on the screen, as you can't get to the screen's underlying GL representation. You can, however, access the pixel data.
Take a look at CGDisplayCreateImageForRect(). In the same documentation set you'll find information about registering callback functions to find out when certain areas of the screen are being updated.
