Project Tango C API mapping specific 3D point to color - google-project-tango

I am using the Project Tango C API. I subscribed to the color image and depth callbacks (TangoService_connectOnFrameAvailable and TangoService_connectOnPointCloudAvailable), so I have a TangoPointCloud and a matching TangoImageBuffer. I've rendered them separately and both are valid. So I understand that I can now loop over each 3D point in the TangoPointCloud and do whatever I want with it. The trouble is that I also want the corresponding color for each 3D point.
The standard Tango examples cover a lot of ground, such as drawing depth and color separately or applying an OpenGL texture to the depth image, but there is no simple sample that maps a 3D point to its color.
I tried TangoSupport_projectCameraPointToDistortedPixel, but it gives weird results. I also tried the TangoXYZij approach, but it is obsolete.
If anyone has achieved this, please help; I have wasted two days going back and forth on this.
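For reference, the general recipe is to project each point into the color image with the color camera intrinsics (obtained via TangoService_getCameraIntrinsics for TANGO_CAMERA_COLOR) and then sample the image buffer at the resulting pixel. Below is a minimal C/C++ sketch that ignores lens distortion and the small depth-to-color extrinsic offset, and assumes the buffer uses the NV21 (YCrCb 4:2:0 semi-planar) layout; the helper name is made up and the field accesses follow the Tango C API headers as best I recall, so treat it as a starting point rather than a drop-in solution.

    #include <stdint.h>
    #include <tango_client_api.h>  // TangoPointCloud, TangoImageBuffer, TangoCameraIntrinsics

    // Approximate RGB lookup for one point-cloud point (hypothetical helper).
    // Assumes the point is close enough to the color camera frame, ignores
    // distortion, and assumes an NV21 image layout.
    static int PointToColor(const TangoPointCloud* cloud, uint32_t i,
                            const TangoImageBuffer* image,
                            const TangoCameraIntrinsics* color_cam,
                            uint8_t* r, uint8_t* g, uint8_t* b) {
      const float x = cloud->points[i][0];
      const float y = cloud->points[i][1];
      const float z = cloud->points[i][2];
      if (z <= 0.0f) return 0;

      // Pinhole projection with the color camera intrinsics.
      const int u = (int)(color_cam->fx * (x / z) + color_cam->cx + 0.5f);
      const int v = (int)(color_cam->fy * (y / z) + color_cam->cy + 0.5f);
      if (u < 0 || v < 0 || u >= (int)image->width || v >= (int)image->height) return 0;

      // NV21: full-resolution Y plane, then an interleaved V/U plane at half resolution.
      const uint8_t* data = image->data;
      const int stride = (int)image->stride;
      const int luma = data[v * stride + u];
      const int uv = stride * (int)image->height + (v / 2) * stride + (u / 2) * 2;
      const int cr = data[uv] - 128;      // V
      const int cb = data[uv + 1] - 128;  // U

      // Standard YCrCb -> RGB conversion, clamped to [0, 255].
      int ri = luma + (int)(1.402f * cr);
      int gi = luma - (int)(0.714f * cr) - (int)(0.344f * cb);
      int bi = luma + (int)(1.772f * cb);
      *r = (uint8_t)(ri < 0 ? 0 : ri > 255 ? 255 : ri);
      *g = (uint8_t)(gi < 0 ? 0 : gi > 255 ? 255 : gi);
      *b = (uint8_t)(bi < 0 ? 0 : bi > 255 ? 255 : bi);
      return 1;
    }

For accurate results you would still need to pair the point cloud with the image buffer whose timestamp matches it (or transform the points with the depth-to-color pose for those timestamps) and account for lens distortion, which is what TangoSupport_projectCameraPointToDistortedPixel is intended for; mismatched timestamps are a common source of results that look "weird".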

Related

How to create splats from points with normals in three.js?

I am a newbie in both OpenGL and Three.js. I am working on a streaming-based online viewer that uses WebSockets to transmit points (with surface normals) from a system application to a remote web interface. Long story short, I have a modified BufferGeometry and use THREE.PointsMaterial to visualize the incoming data as points.
Since I am already sending point locations [xyz], colors [rgb], and normals [abc], I would love to use a technique such as surface splatting. Unfortunately, due to my limited knowledge and the lack of internet resources, I'm stuck. Can anyone guide me toward implementing a very basic surface-splatting technique using three.js?
Question: How do I draw elliptical surfaces instead of points in three.js?
Any help will be highly appreciated.
It would probably work using points if you compute the point size per point such that the whole ellipse fits inside it, and then use the fragment shader to shade the elliptical area based on the viewing angle (I suppose this is what you want to do, right?). A sketch of the point-size math follows after this answer.
Alternatively, you can use instancing based on a simple quad and use instance attributes for the position and orientation of the quads. In this case, you just need to render a circle into each of the quads.
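As a rough illustration of the first suggestion, here is a small C++ sketch with made-up names (the real implementation would live in a three.js vertex/fragment shader): the point size follows from the perspective projection, and the ellipse comes from squashing the splat along the view direction.

    #include <cmath>

    // Pixel size of a point sprite that fully contains a splat of radius
    // `world_radius` at view-space depth `view_z` (negative in front of the
    // camera), for a symmetric perspective projection.
    float SplatPointSizePixels(float world_radius, float view_z,
                               float viewport_height_px, float fov_y_radians) {
      // Focal length in pixels.
      const float focal_px = 0.5f * viewport_height_px / std::tan(0.5f * fov_y_radians);
      // Projected radius in pixels; the sprite must be at least twice that wide.
      const float radius_px = focal_px * world_radius / -view_z;
      return 2.0f * radius_px;
    }

    // In the fragment shader, with n = unit surface normal and v = unit view
    // direction, the splat's short semi-axis is scaled by |dot(n, v)| and any
    // fragment outside (x/a)^2 + (y/b)^2 <= 1 is discarded.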

Unity, fresnel shader on raw image

Hello, I'm trying to achieve the effect in the image below (a shine of light, but only on top of the raw image).
Unfortunately, I cannot figure out how to do it. I have tried some shaders and assets from the Asset Store, but so far none of them has worked, and I don't know much about shaders.
The raw image is a UI element that displays a render texture captured by a camera.
I'm totally lost here; any kind of help with creating that effect would be appreciated.
Fresnel shaders use the difference between the surface normal and the view vector to detect which pixels are facing the viewer and which aren't. A UI plane will always face the user, so no luck there.
Solving this with shaders can be done in a couple of ways: either you bake a normal map of the imagined "curvature" of the outer edge (example), or you create a signed distance field (example) or use some similar method that maps the distance to the edge. A normal map would probably allow for the most complex effects, and I am sure that some fresnel shaders could work with it too. It does, however, require you to make a model of the shape and bake the normals from that.
A signed distance field, on the other hand, can be generated with a script from an image (a rough sketch follows after this answer), so if you have a lot of images, it might be the fastest approach. Getting the edge distance in real time inside the shader would not really work, since you'd have to sample a very large number of neighboring pixels, which might make the shader 10-20 times slower depending on how thick you need the edge to be.
If you don't need the image to be that dynamic, then maybe just creating an inner glow black/white texture in Photoshop and overlaying it using an additive shader would work better for you. If you don't know how to write shaders, then maybe the two above approaches are a bit of a tall order.
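To make the distance-field idea concrete, here is a deliberately naive offline generator, sketched in C++ rather than a Unity script (the SignedDistanceField name is made up): for every pixel of a binary alpha mask it brute-forces the distance to the nearest edge pixel and signs it by inside/outside. This is preprocessing only; as noted above, it is far too slow to run per frame.

    #include <algorithm>
    #include <cmath>
    #include <limits>
    #include <utility>
    #include <vector>

    // Brute-force signed distance field from a binary mask (true = inside the shape).
    // Positive values are inside, negative outside, measured in pixels.
    std::vector<float> SignedDistanceField(const std::vector<bool>& inside,
                                           int width, int height) {
      auto at = [&](int x, int y) {
        return x >= 0 && y >= 0 && x < width && y < height && inside[y * width + x];
      };

      // Edge pixels: inside pixels with at least one outside 4-neighbour.
      std::vector<std::pair<int, int>> edge;
      for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x)
          if (at(x, y) && (!at(x - 1, y) || !at(x + 1, y) || !at(x, y - 1) || !at(x, y + 1)))
            edge.emplace_back(x, y);

      std::vector<float> sdf(width * height, 0.0f);
      for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
          float best = std::numeric_limits<float>::max();
          for (const auto& e : edge) {
            const float dx = float(x - e.first), dy = float(y - e.second);
            best = std::min(best, dx * dx + dy * dy);
          }
          const float d = std::sqrt(best);
          sdf[y * width + x] = inside[y * width + x] ? d : -d;
        }
      }
      return sdf;
    }

The values would then be normalized and baked into a texture channel, which the UI shader samples to drive the edge glow.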

Unity script: scene to 3D led cube

I am a total beginner with Unity, but I'm getting familiar with the interface and the way scripts work with Game Objects. Some days ago, I came across an article about a 3D LED matrix controlled by Unity, and since then I've been trying to make it work with my project.
Original article: http://philippseifried.com/blog/2014/10/29/3d-led-matrix-with-unity/
Basically, once the script is attached to an orthographic camera (or at least that's what I understood from the article), the camera "slices" the scene into layers, transforms it into a pixel matrix, and paints the result into some dynamically generated preview layers.
I have managed to attach the camera and get the preview layers to show up. However, I'm unable to get the final result the article shows, as the preview layers show absolutely nothing. I think it has to do with the fact that the author is using some kind of transparent planes that I have been unable to replicate.
It would be great if someone could guide me a bit toward getting the exact same result as the article, by reading it and watching the last Vine, which shows the author's Unity screen with the transparent layers up and running.
The script was looking at the background color to decide whether a pixel had to be painted or not.
Changing the camera background to transparent (RGBA) was enough to see the final result.

Find my camera's 3D position and orientation according to a 2D marker

I am currently building an augmented reality application and am stuck on a problem that seems quite easy but is very hard for me ... The problem is as follows:
My device's camera is calibrated and detects a 2D marker (such as a QR code). I know the focal length, the sensor's position, the distance between my camera and the center of the marker, the real size of the marker, and the coordinates of the marker's 4 corners and of its center on the 2D image I got from the camera. See the following image:
In the image, we know the distances a, b, c, d and the coordinates of the red dots.
What I need to know is the position and the orientation of the camera relative to the marker (as represented in the image, the origin is the center of the marker).
Is there an easy and fast way to do this? I tried a method I came up with myself (using Al-Kashi's formula, i.e. the law of cosines), but it ended up with too many errors :(. Could someone point out a way to get me out of this?
You can find some example code for the EPnP algorithm on this webpage. The code consists of one header file and one source file, plus one file for the usage example, so it shouldn't be too hard to include in your project.
Note that this code is released for research/evaluation purposes only, as mentioned on this page.
EDIT:
I just realized that this code needs OpenCV to work. That said, although it would add a pretty big dependency to your project, the current version of OpenCV has a built-in function called solvePnP, which does what you want.
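For completeness, here is a minimal C++ sketch of that solvePnP route (the function name, variables, and intrinsics are placeholders): the marker's four corners, expressed in its own frame using the known real size, are matched against the detected pixel corners, and the resulting pose is inverted to get the camera position in the marker frame.

    #include <opencv2/calib3d.hpp>
    #include <opencv2/core.hpp>
    #include <vector>

    // Camera pose in the marker's frame, with the marker centred at the origin
    // in its own Z = 0 plane.
    void CameraPoseFromMarker(const std::vector<cv::Point2f>& image_corners,  // detected corners, in pixels
                              float marker_size,                              // edge length of the marker
                              const cv::Mat& camera_matrix,                   // 3x3 intrinsics (fx, fy, cx, cy)
                              const cv::Mat& dist_coeffs,                     // distortion coefficients (may be empty)
                              cv::Mat& R_marker_to_cam, cv::Mat& camera_position) {
      const float h = marker_size / 2.0f;
      // 3D corners in the marker frame, in the same order as image_corners.
      const std::vector<cv::Point3f> object_corners = {
          {-h, h, 0}, {h, h, 0}, {h, -h, 0}, {-h, -h, 0}};

      cv::Mat rvec, tvec;
      cv::solvePnP(object_corners, image_corners, camera_matrix, dist_coeffs, rvec, tvec);

      // rvec/tvec map marker coordinates into camera coordinates.
      cv::Rodrigues(rvec, R_marker_to_cam);
      // Camera position in the marker frame: C = -R^T * t.
      camera_position = -R_marker_to_cam.t() * tvec;
    }

The rotation can then be converted to whatever representation your AR framework expects (Euler angles, a quaternion, or a 4x4 view matrix).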
You can compute the homography between the image points and the corresponding world points. Then, from the homography, you can compute the rotation and translation mapping a point from the marker's coordinate system into the camera's coordinate system. The math is described in Zhang's paper on camera calibration.
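A sketch of that homography route in C++ with OpenCV (function and variable names are mine): for a marker lying in its own Z = 0 plane, H is proportional to K [r1 r2 t], so normalizing the columns of K^-1 H recovers the first two rotation columns and the translation, as in Zhang's derivation.

    #include <opencv2/calib3d.hpp>
    #include <opencv2/core.hpp>
    #include <vector>

    // Recover R and t (marker frame -> camera frame) from a plane-induced
    // homography. K is assumed to be a 3x3 CV_64F intrinsics matrix.
    void PoseFromHomography(const std::vector<cv::Point2f>& marker_xy,  // marker corners in marker units (Z = 0 dropped)
                            const std::vector<cv::Point2f>& image_pts,  // corresponding pixel coordinates
                            const cv::Mat& K, cv::Mat& R, cv::Mat& t) {
      const cv::Mat H = cv::findHomography(marker_xy, image_pts);

      // H ~ K [r1 r2 t]  =>  K^-1 H = lambda [r1 r2 t]
      cv::Mat M = K.inv() * H;
      cv::Mat r1 = M.col(0).clone();
      cv::Mat r2 = M.col(1).clone();
      const double lambda = 1.0 / cv::norm(r1);  // r1 must end up a unit vector

      r1 *= lambda;
      r2 *= lambda;
      cv::Mat r3 = r1.cross(r2);                 // complete the right-handed basis
      t = lambda * M.col(2);

      cv::hconcat(std::vector<cv::Mat>{r1, r2, r3}, R);
      // Noise makes r1 and r2 only approximately orthogonal, so in practice R
      // should be re-orthonormalized (e.g. via SVD) before use.
    }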
Here's an example in MATLAB using the Computer Vision System Toolbox, which does most of what you need. It uses the extrinsics function, which computes a 3D rotation and a translation from matching image and world points. The points need not come from a checkerboard.

Google Maps-style quad-tree of materials on a single plane in Three.js – 1x1, 2x2, 4x4 and 8x8

I'm trying and failing to work out how to achieve a quad-tree of materials (images) on a single plane, much like a Google Maps-style zoomable tile that gets more accurate the closer you get.
In short, I want to be able to have a 1x1 image texture (covering a plane that is 256 units wide and tall) that can then be replaced with a 2x2 texture, that can then be replaced with a 4x4 texture, and so on.
Like the image example below…
Ideally, I want to avoid having to create a different plane for each zoom level / number of segments. A perfect solution would allow me to break a single plane into 8x8 segments (highest zoom) and update the number of textures on the fly. So it would start with a 1x1 texture across all 64 (8x8) segments, then change into a 2x2 texture with each texture covering 4x4 segments, and so on.
Unfortunately, I can't work out how to do this. I explored setting the materialIndex for each face, but you aren't able to update those after the first render, so that wouldn't work. I've tried looking into UV coordinates, but I don't understand how they would work in this situation, nor how to actually implement them in Three.js – there is little in the way of documentation or examples for this specific case.
A vertex shader is another option that came up in research, but again I don't know enough to understand how to construct that.
I'd appreciate any and all help with this, it will be a technique that proves valuable for other Three.js users I'm sure.
I'm not 100% sure what you are trying to do, i.e. whether you are talking about texture atlasing (looking up different textures based on the current setting/zoom), but if you are looking for quad-tree-based texturing that increases in detail as you zoom in, then this is essentially what mipmapping is and does.
(It can also be used to do all sorts of weird things because of that, but that's another adventure entirely.)
Generally, mipmapping is automatic based on the filtering you use - however, it sounds like you need more control over it.
I created an example hidden away in the three.js source tree which may help:
http://mrdoob.github.com/three.js/examples/webgl_materials_texture_manualmipmap.html
It shows you how to load each mipmap level manually, rather than having them generated automatically.
HTH

Resources