Getting coordinates and depth data of the point on video overlay using Project Tango - google-project-tango

I'm looking to capture the coordinate data of a point on the video overlay in Project Tango, much like how MeasureIt does it. How is this done?

Related

Using ARCore for Measurement

I would like to know if it is possible to measure the dimensions of an object by just pointing the camera at the object without moving the camera from left to right like we do in Google Measurement.
A depth map cannot be calculated from just a 2D camera image. A smartphone does not have a distance sensor, but it does have motion sensors, so by combining the movement of the device with changes in the input from the camera(s), ARCore can calculate depth. To put it simply, objects close to the camera move around on screen more than objects further away.
To get depth data from a fixed position would require technologies not found on current phones, such as LiDAR or an infrared beam projector paired with an infrared camera.
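To put some numbers on the parallax idea (this is a deliberately simplified two-view model, not ARCore's actual pipeline, which tracks many features and fuses them with IMU data over time), the depth of a tracked point follows from how far it shifts on screen as the camera translates sideways. The helper below and its inputs are hypothetical:

```typescript
// Simplified two-view parallax sketch (hypothetical helper, not an ARCore API).
// Assumes the camera moved sideways by `baselineMeters` between two frames
// and the same feature was observed at horizontal pixel positions x1 and x2.
function estimateDepthFromParallax(
  x1Px: number,           // feature's x position in frame 1 (pixels)
  x2Px: number,           // feature's x position in frame 2 (pixels)
  baselineMeters: number, // how far the phone moved sideways (from motion tracking)
  focalLengthPx: number   // camera focal length expressed in pixels
): number | null {
  const disparityPx = Math.abs(x1Px - x2Px); // nearby points shift more than distant ones
  if (disparityPx < 1e-6) return null;       // no measurable shift, depth unknown
  // Classic stereo relation: depth = focal length * baseline / disparity.
  return (focalLengthPx * baselineMeters) / disparityPx;
}

// Example: a point that shifts 40 px while the phone moves 5 cm,
// with a focal length of 1600 px, is roughly 2 m away.
console.log(estimateDepthFromParallax(820, 780, 0.05, 1600)); // ~2
```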

Project Tango C API mapping specific 3D point to color

I am using the Project Tango C API. I subscribed to the color image and point cloud callbacks (TangoService_connectOnPointCloudAvailable and TangoService_connectOnFrameAvailable), so I have a TangoPointCloud and a matching TangoImageBuffer. I've rendered them separately and they are valid, so I understand that I can essentially loop through each 3D point in the TangoPointCloud and do whatever I want with it. The trouble is that I also want the corresponding color for each 3D point.
The standard Tango examples cover a lot, such as drawing depth and color separately or applying an OpenGL texture to the depth image, but they don't include a simple sample that maps a 3D point to its color.
I tried TangoSupport_projectCameraPointToDistortedPixel, but it gives weird results. I also tried the TangoXYZij approach, but it is obsolete.
If you have achieved this, please help; I've wasted two days going back and forth on this.
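For what it's worth, the usual recipe is: (1) pair a point cloud with the color frame closest in time, (2) transform each point from the depth camera frame into the color camera frame (e.g. with the support library's TangoSupport_calculateRelativePose), and (3) project the point using the color camera intrinsics from TangoService_getCameraIntrinsics(TANGO_CAMERA_COLOR, ...) and sample the TangoImageBuffer at the resulting pixel. The sketch below shows only the projection step, written in TypeScript for readability; in a real app this math would live in your C callback, and the inputs are assumed:

```typescript
// Pinhole intrinsics as reported by TangoService_getCameraIntrinsics
// (fx, fy, cx, cy in pixels). Lens distortion is ignored here for simplicity;
// TangoSupport_projectCameraPointToDistortedPixel additionally applies it.
interface CameraIntrinsics {
  fx: number; fy: number;
  cx: number; cy: number;
  width: number; height: number;
}

// Project a 3D point (already expressed in the COLOR camera frame:
// x right, y down, z forward, in meters) to pixel coordinates.
// Returns null if the point is behind the camera or outside the image.
function projectPointToPixel(
  p: { x: number; y: number; z: number },
  k: CameraIntrinsics
): { u: number; v: number } | null {
  if (p.z <= 0) return null; // behind the camera
  const u = k.fx * (p.x / p.z) + k.cx;
  const v = k.fy * (p.y / p.z) + k.cy;
  if (u < 0 || u >= k.width || v < 0 || v >= k.height) return null;
  return { u: Math.round(u), v: Math.round(v) };
}

// With (u, v) you can index into the TangoImageBuffer; note the color frame
// is typically YCrCb (NV21), so the sampled values need a YUV-to-RGB conversion.
```

A common source of "weird results" is skipping the depth-camera-to-color-camera transform, or pairing a point cloud with a color frame whose timestamp doesn't match.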

How to render side by side videos in OculusRiftEffect or VREffect

I'm experimenting with videojs-vr, which uses THREE.OculusRiftEffect to render the video in an Oculus-friendly way.
I downloaded a side-by-side video from YouTube and played it within the videojs-vr example.html.
Now I'm searching for a way to show only the left half of the video in the left camera of OculusRiftEffect / VREffect and only the right half for the right eye.
I think I have to find/hook the step that draws the movie onto the mesh and identify which camera is currently being rendered, so I can copy only the left or the right half of the video.
If you're using three.js, I would make two spheres, one for the left eye and one for the right eye. Then split the video using a shader on those spheres so that each one maps only half of the texture, and attach each sphere to one of the two cameras.
I'm not sure how to do it in three.js, as I come from Unity and I'm still a noob with three, but I did exactly that in Unity. Maybe the idea helps you.
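To make that concrete in three.js terms: a custom shader isn't strictly needed, because setting the texture's repeat/offset so each sphere samples only one horizontal half achieves the same thing. The per-eye assignment is the fiddly part: current three.js with WebXR renders layer 1 only to the left-eye camera and layer 2 only to the right-eye camera, while the older OculusRiftEffect/VREffect would need per-eye visibility toggling or separate scenes. Treat this as a sketch of the idea, not a drop-in patch for videojs-vr:

```typescript
import * as THREE from 'three';

// `video` is the HTMLVideoElement that plays the side-by-side (left|right) movie.
function makeEyeSphere(video: HTMLVideoElement, eye: 'left' | 'right'): THREE.Mesh {
  const texture = new THREE.VideoTexture(video);
  // Show only one horizontal half of the video on this sphere.
  texture.repeat.set(0.5, 1);
  texture.offset.set(eye === 'left' ? 0 : 0.5, 0);

  const geometry = new THREE.SphereGeometry(50, 64, 32);
  geometry.scale(-1, 1, 1); // flip so the texture is visible from inside the sphere

  const mesh = new THREE.Mesh(geometry, new THREE.MeshBasicMaterial({ map: texture }));
  // With WebXR, layer 1 is drawn only by the left-eye camera and layer 2 only
  // by the right-eye camera; with OculusRiftEffect you would instead toggle
  // each sphere's visibility around the per-eye render pass.
  mesh.layers.set(eye === 'left' ? 1 : 2);
  return mesh;
}

// Usage (assuming `scene` and `video` already exist):
// scene.add(makeEyeSphere(video, 'left'));
// scene.add(makeEyeSphere(video, 'right'));
```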

Generate equirectangular images from 6 fisheye images

Hello,
I am new to this area and your input would be highly appreciated. I have fisheye images which together are supposed to cover the full 360-degree horizontal and 180-degree vertical field of view. I want to create an equirectangular image which I will later project onto a sphere. Can somebody walk me through the steps, or point me to some tutorials or terms to google?
I want to implement and code the functions myself. So I am thinking along these lines:
1- project the fisheye images onto some planes
2- stitch the projected images in some way
3- convert the stitched image into an equirectangular image
Is this the right direction? If yes, what type of projection should I project my fisheye images onto? Is there any specific way to stitch the images to create the equirectangular image?
Any input, tutorials, and direction would be highly appreciated,
Thank you
P.S.
I saw someone's project.
https://docs.google.com/file/d/0B9H-Mgy5ePCaY0JyR0dnVDYyMEU/edit?usp=sharing
This is a picture taken with a smartphone camera and a fisheye lens.
https://docs.google.com/file/d/0B9H-Mgy5ePCaQ1NrbWhfMzZVUjg/edit?usp=sharing
He said this is the same picture using an equirectangular projection.
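Those three steps are roughly the standard pipeline. An approach that is often easier to implement is the inverse mapping: for every pixel of the output equirectangular image, compute the viewing direction it represents, pick the fisheye camera that sees that direction, and sample that fisheye image (blending where lenses overlap). The sketch below assumes an equidistant ("f-theta") fisheye model and a single camera looking along +Z; for six cameras you would first rotate the direction into each camera's frame using your rig's known orientations. All names and parameters here are hypothetical:

```typescript
// Map one equirectangular output pixel to a pixel in a single fisheye image.
// Assumptions: equidistant fisheye model (r grows linearly with the angle from
// the optical axis), camera looking along +Z, image center at (cx, cy), and a
// circular image of radius `radiusPx` covering the full field of view `fovDeg`.
// Returns null if the direction falls outside this lens's field of view.
function equirectToFisheye(
  uOut: number, vOut: number,           // pixel in the equirectangular output
  outWidth: number, outHeight: number,  // size of the equirectangular output
  cx: number, cy: number,               // fisheye image center (pixels)
  radiusPx: number,                     // fisheye image radius (pixels)
  fovDeg: number                        // fisheye field of view (e.g. 185)
): { x: number; y: number } | null {
  // 1. Equirectangular pixel -> longitude/latitude -> unit direction vector.
  const lon = (uOut / outWidth) * 2 * Math.PI - Math.PI;  // [-pi, pi]
  const lat = Math.PI / 2 - (vOut / outHeight) * Math.PI; // [pi/2, -pi/2]
  const dx = Math.cos(lat) * Math.sin(lon);
  const dy = Math.sin(lat);
  const dz = Math.cos(lat) * Math.cos(lon);

  // 2. Angle between the viewing direction and the camera's optical axis (+Z).
  const theta = Math.acos(dz);
  const maxTheta = (fovDeg * Math.PI) / 360; // half field of view, in radians
  if (theta > maxTheta) return null;         // this camera doesn't see the point

  // 3. Equidistant projection: radial distance grows linearly with theta.
  const r = (theta / maxTheta) * radiusPx;
  const phi = Math.atan2(dy, dx);            // angle around the optical axis
  return { x: cx + r * Math.cos(phi), y: cy + r * Math.sin(phi) };
}
```

Stitching then amounts to running this for each of the six cameras (after rotating the direction into each camera's frame) and feathering the result where two lenses see the same direction.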

three.js converting equirectangular video to cubic panorama video?

Came across this example here
http://mrdoob.github.com/three.js/examples/webgl_panorama_equirectangular.html
I was wondering if the same thing is possible if the source is a video with equirectangular projection, not a still image?
Yes. And this example shows how:
http://mrdoob.github.com/three.js/examples/webgl_materials_cubemap_dynamic2.html
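The same dynamic-cubemap trick from that example works with a video source: texture the inside of a sphere with a THREE.VideoTexture and let a CubeCamera re-render it into a cube render target every frame. A minimal sketch using the current three.js API (older releases from the OculusRiftEffect era passed a resolution to CubeCamera instead of a render target); `video` is assumed to be an existing HTMLVideoElement:

```typescript
import * as THREE from 'three';

// `video` is an HTMLVideoElement playing an equirectangular (360°) movie.
function buildVideoCubemap(video: HTMLVideoElement) {
  const scene = new THREE.Scene();

  // Sphere textured with the equirectangular video, viewed from the inside.
  const texture = new THREE.VideoTexture(video);
  const geometry = new THREE.SphereGeometry(100, 64, 32);
  geometry.scale(-1, 1, 1);
  scene.add(new THREE.Mesh(geometry, new THREE.MeshBasicMaterial({ map: texture })));

  // A cube camera at the center captures the sphere into a cubemap each frame.
  const cubeRenderTarget = new THREE.WebGLCubeRenderTarget(1024);
  const cubeCamera = new THREE.CubeCamera(0.1, 1000, cubeRenderTarget);
  scene.add(cubeCamera);

  return { scene, cubeCamera, cubeRenderTarget };
}

// Per frame, refresh the cubemap and use it wherever a cubemap texture is
// expected, e.g. as a background or environment map:
//   cubeCamera.update(renderer, videoScene);
//   mainScene.background = cubeRenderTarget.texture;
```

Recent three.js versions also offer WebGLCubeRenderTarget.fromEquirectangularTexture(renderer, texture), which performs the same conversion without the intermediate sphere.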
