Google Project Tango, taking pictures while tracking motion

I'd like to use Google Tango motion tracking while taking pictures. The Tango service precludes using the Android camera API. From what I can tell, it's not possible to control the camera (ISO, exposure, white balance) or take still shots while the Tango service is running. Is that true?
The online Java API documents show that TangoConfig has a constant called KEY_BOOLEAN_COLORMODEAUTO, but the TangoConfig class declaration in TangoSDK_Leibnitz.jar does not have it. Is there a way to control the camera? If the Java API does not support this, does the C API?

Theoretically, the OnFrameAvailable callback is what you want: it returns a regular image stream from the color or fisheye camera via callbacks. That said, it was broken several releases ago, and for the latest release (Leibniz) there seems to be some confusion over the format and some concern over stability. Do NOT try to acquire the camera yourself, or Tango loses access to it.
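If you do go the C route, a rough sketch of hooking that callback looks like the following (TangoService_connectOnFrameAvailable and TANGO_CAMERA_COLOR come from the C headers; the surrounding service setup is omitted, so treat this as an outline):

#include <tango_client_api.h>

// Invoked by the Tango service whenever a new color frame is ready.
static void onFrameAvailable(void *context, TangoCameraId id,
                             const TangoImageBuffer *buffer)
{
    // Inspect buffer->width, buffer->height, buffer->format, buffer->data.
    // Mind the format/stability caveats mentioned above.
}

void subscribe_color_frames(void *context)
{
    TangoService_connectOnFrameAvailable(TANGO_CAMERA_COLOR, context,
                                         onFrameAvailable);
}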

From what I can tell, the C API (in the latest Leibniz release) gives you access to disabling the automatic camera configuration and then setting ISO parameters (I tried it yesterday), and both point cloud acquisition and color frame acquisition keep running fine. But this seems to apply to the color camera only (not explicitly stated, but the fisheye camera does not appear to be tunable this way).
Extract of the comment from tango_client_api.h:
"
The supported configuration parameters that can be set are:
///
/// <table>
/// <tr><td class="indexkey">boolean config_color_mode_auto</td><td
/// class="indexvalue">
/// Use auto-exposure/auto-whitebalance with the color camera. Defaults to
/// true, and
/// if true, the values for config_color_iso and config_color_exp are ignored.
/// </td></tr>
///
/// <tr><td class="indexkey">int32 config_color_iso</td><td
/// class="indexvalue">ISO value for the color camera.
/// One of 100, 200, 400 or 800. Default is 100. Only applied if
/// config_color_mode_auto is set to false.</td></tr>
///
/// <tr><td class="indexkey">int32 config_color_exp</td><td class="indexvalue">
/// Exposure value for the color camera, in nanoseconds. Default is
/// 11100000 (11.1 ms). Valid from 0 to 30000000. Only applied if
/// config_color_mode_auto is set to false.</td></tr>
/// ...

Related

How to set the comments generator to the English locale in Visual Studio?

I want to merge a resources file, but user 1 has a resources file with Russian comments (his Visual Studio is in English, but it generates Russian comments), like
/// <summary>
/// Ищет локализованную строку, похожую на ANPR camera images receiving error.
/// </summary>
but I need
/// <summary>
/// Looks up a localized string similar to ANPR camera images receiving error.
/// </summary>
There are a lot of such spots in the resource file to merge in SmartGit.
What could be suggested to resolve my issue?
Appreciate any help.

How can I determine if a webcam is in use on Windows 10 without activating the camera?

On Windows 10, how can I determine if an attached web camera is currently active without turning the camera on if it's off?
Currently, I'm able to attempt to take a photo with the camera, and if it fails, assume that the camera is in use. However, this means that the activity LED for the camera will turn on (since the camera is now in use). Since I'd like to check the status of the camera every few seconds, it's not feasible to use this method to determine if the camera is in use.
I've used both the Win32 and UWP tags, and will accept a solution that uses either API.
I found a tip here that mentions that registry keys under Computer\HKEY_CURRENT_USER\SOFTWARE\Microsoft\Windows\CurrentVersion\CapabilityAccessManager\ConsentStore\webcam\NonPackaged change when the webcam is in use.
There are keys under here that track timestamps of when an application uses the webcam. While the webcam is actively in use, 'LastUsedTimeStop' appears to be 0, so to tell whether the webcam is actively in use we may be able to simply check whether any application has LastUsedTimeStop == 0.
Here's a quick Python class to poll the registry for webcam usage:
https://gist.github.com/cobryan05/8e191ae63976224a0129a8c8f376adc6
Example usage:
import time
from webcamDetect import WebcamDetect

webcamDetect = WebcamDetect()
while True:
    print("Applications using webcam:")
    for app in webcamDetect.getActiveApps():
        print(app)
    print("---")
    time.sleep(1)
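Since the question also accepts a Win32 solution, the same check can be done directly against the registry API. A minimal C sketch (the key and value names come from the tip above; consider it an illustration rather than a hardened implementation):

#include <windows.h>

/* Returns nonzero if any application under the NonPackaged consent-store
 * key has LastUsedTimeStop == 0, i.e. the webcam appears to be in use. */
int webcamInUse(void)
{
    const char *path = "SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\"
                       "CapabilityAccessManager\\ConsentStore\\webcam\\NonPackaged";
    HKEY hRoot;
    if (RegOpenKeyExA(HKEY_CURRENT_USER, path, 0, KEY_READ, &hRoot) != ERROR_SUCCESS)
        return 0;

    int inUse = 0;
    for (DWORD i = 0; ; ++i)
    {
        char name[256];
        DWORD nameLen = sizeof(name);
        if (RegEnumKeyExA(hRoot, i, name, &nameLen, NULL, NULL, NULL, NULL) != ERROR_SUCCESS)
            break;  /* no more application subkeys */

        HKEY hApp;
        if (RegOpenKeyExA(hRoot, name, 0, KEY_READ, &hApp) == ERROR_SUCCESS)
        {
            ULONGLONG stop = 0;
            DWORD size = sizeof(stop);
            if (RegQueryValueExA(hApp, "LastUsedTimeStop", NULL, NULL,
                                 (LPBYTE)&stop, &size) == ERROR_SUCCESS && stop == 0)
                inUse = 1;
            RegCloseKey(hApp);
        }
    }
    RegCloseKey(hRoot);
    return inUse;
}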
Great question, but I'm afraid there is no such API to return the camera state every second. After some research I found a similar case here, where one of our members provided a sample way to detect the camera state from a FileLoadException; I think this is currently the only way to check the camera state.

Tracking motion of object (by itself or by colour)

Is it possible to use the Google Tango Java API to track an object by itself (say, I tap on the object in the captured video feed and then the device tracks it continuously) or to track an object by color (e.g. a black sphere against a white background)?
Unfortunately, the Tango API seems designed only to track the observer (i.e. the Tango tablet/phone), at present. If you'd like to track other objects, I recommend using it with a library like OpenCV:
See:
http://opencv.org/platforms/android.html
http://docs.opencv.org/2.4/doc/tutorials/introduction/android_binary_package/dev_with_OCV_on_Android.html
You'll need some way of detecting or selecting an object, and then some way of tracking it:
http://docs.opencv.org/3.2.0/d5/d54/group__objdetect.html
http://docs.opencv.org/3.2.0/d9/df8/group__tracking.html

Comments in Doxygen .dox file

I have a .dox file and want to insert some comments that will not be
displayed in the actual documentation.
I have tried using only //; however, when I tried, all the text after it was removed.
/// \defgroup desc_polyhedra Polyhedra
///
// If you want to prevent that a word that corresponds to a
// documented class is replaced by a link you should put a % in
// front of the word.
///
/// \section Polyhedron
///
/// A polyhedron is a solid in three dimensions with flat faces,
/// straight edges and sharp corners or vertices.
///
/// Amongst the many examples of polyhedra, the Cuboid and
/// Prismatoid structures will be considered in more detail.
/// Here is a brief summary of cuboids and prismatoids.
A couple of options here:
Surround the text you want left alone with \cond ... \endcond
Have a completely separate comment section with only // - i.e. don't mix // in the middle of the /// section. What you are seeing is that the first // ends the doxygen comment block, so it stops coming out in the output.
Use HTML comment syntax within the doxygen section: /// <!-- A comment is here -->
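For example, option 1 applied to the snippet above (a minimal sketch):

/// \defgroup desc_polyhedra Polyhedra
///
/// \cond
/// If you want to prevent that a word that corresponds to a
/// documented class is replaced by a link you should put a % in
/// front of the word.
/// \endcond
///
/// \section Polyhedron
/// ...

Everything between \cond and \endcond is skipped by Doxygen, so the note stays in the source but never reaches the generated output.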

Should I use NSOperation or NSRunLoop?

I am trying to monitor a stream of video output from a FireWire camera. I have created an Interface Builder interface with buttons and an NSImageView. While image monitoring is occurring within an endless loop, I want to:
change some camera parameters on the fly (gain, gamma, etc.)
tell the monitoring to stop so I can save an image to a file (set a flag that stops the while loop)
Using the button features, I have been unable to loop the video frame monitor while still watching for a button press (much like using the keypressed feature from C). Two options present themselves:
Initiate a new run loop (for which I cannot get an autoreleasepool to function ...)
Initiate an NSOperation - how do I do this in a way which allows me to connect with an Xcode button push?
The documentation is very obtuse about the creation of such objects. If I create an NSOperation as per the examples I've found, there seems to be no way to communicate with it from an Interface Builder object. When I create an NSRunLoop, I get an object-leak error, and I can find no example of how to create an autorelease pool that actually responds to the run loop I've created. Never mind that I haven't even attempted to choose which objects get sampled by the secondary run loop ...
Because Objective-C is (obviously!) not my native tongue, I am looking for solutions with baby steps, sorry to say ...
Thanks in advance
I've needed to do almost exactly the same as you, only with a continuous video display from the FireWire camera. In my case, I used the libdc1394 library to perform the frame capture and camera property adjustment for our FireWire cameras. I know you can also do this using some of the Carbon Quicktime functions, but I found libdc1394 to be a little easier to understand.
For the video capture loop, I tried a number of different approaches, from a separate thread that polls the camera and has locks around shared resources, to using one NSOperationQueue for interaction with the camera, and finally settled on using a CVDisplayLink to poll the camera in a way that matches the refresh rate of the screen.
The CVDisplayLink is configured using the following code:
CGDirectDisplayID displayID = CGMainDisplayID();
CVReturn error = kCVReturnSuccess;
error = CVDisplayLinkCreateWithCGDisplay(displayID, &displayLink);
if (error)
{
    NSLog(@"DisplayLink created with error:%d", error);
    displayLink = NULL;
}
CVDisplayLinkSetOutputCallback(displayLink, renderCallback, self);
and it calls the following function to trigger the retrieval of a new camera frame:
static CVReturn renderCallback(CVDisplayLinkRef displayLink,
                               const CVTimeStamp *inNow,
                               const CVTimeStamp *inOutputTime,
                               CVOptionFlags flagsIn,
                               CVOptionFlags *flagsOut,
                               void *displayLinkContext)
{
    return [(SPVideoView *)displayLinkContext renderTime:inOutputTime];
}
The CVDisplayLink is started and stopped using the following:
- (void)startRequestingFrames
{
    CVDisplayLinkStart(displayLink);
}

- (void)stopRequestingFrames
{
    CVDisplayLinkStop(displayLink);
}
Rather than using a lock on the FireWire camera communications, whenever I need to adjust the exposure, gain, etc., I change the corresponding instance variables and set the appropriate bits within a flag variable to indicate which settings to change. On the next retrieval of a frame, the callback method from the CVDisplayLink changes the appropriate settings on the camera to match the locally stored instance variables and clears that flag; a rough sketch of this pattern follows.
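In plain C terms, the handoff is roughly this (the names and the two driver stubs are illustrative, not from the actual code, which predates C11 atomics):

#include <stdatomic.h>

enum {
    CAM_SET_EXPOSURE = 1 << 0,
    CAM_SET_GAIN     = 1 << 1,
};

static atomic_uint pendingFlags;      /* bits for settings awaiting upload */
static _Atomic int desiredExposure;   /* written by the UI thread */
static _Atomic int desiredGain;

static void cameraWriteExposure(int v) { (void)v; /* stand-in for the libdc1394 call */ }
static void cameraWriteGain(int v)     { (void)v; /* stand-in for the libdc1394 call */ }

/* UI thread: record the new value and mark it dirty; no lock required. */
void setExposure(int value)
{
    desiredExposure = value;
    atomic_fetch_or(&pendingFlags, CAM_SET_EXPOSURE);
}

/* Frame callback: apply and clear any pending settings before the grab. */
void applyPendingSettings(void)
{
    unsigned flags = atomic_exchange(&pendingFlags, 0);
    if (flags & CAM_SET_EXPOSURE) cameraWriteExposure(desiredExposure);
    if (flags & CAM_SET_GAIN)     cameraWriteGain(desiredGain);
}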
Display to the screen is handled through an NSOpenGLView (CAOpenGLLayer introduced too many visual artifacts when updating at this rate, and its update callbacks ran on the main thread). Apple has some extensions you can use to provide these frames as textures using DMA for better performance.
Unfortunately, nothing that I've described here is introductory-level stuff. I have about 2,000 lines of code for these camera-handling functions in our software and this took a long time to puzzle out. If Apple could add the manual camera settings adjustments to the QTKit Capture APIs, I could remove almost all of this.
If all you're trying to do is see/grab the output of a connected camera, the answer is probably neither.
Use QTKit's QTCaptureView. Problem solved. Want to grab a frame? Also no problem. Don't try to roll your own - QTKit's stuff is optimized and part of the OS. I'm pretty sure you can affect camera properties as you wanted but if not, plan B should work.
Plan B: use a scheduled, recurring NSTimer to ask QTKit to grab a frame every so often ("how" is linked above) and apply your image manipulations to the frame (maybe with Core Image) before displaying it in your NSImageView.
