I just found that my Tango tablet's depth sensor is not working as expected in my project. I then took the depth test as instructed in https://developers.google.com/project-tango/hardware/depth-test and found that the top-right area (about a quarter of the rectangle) of the point cloud on the screen always reads nearer than the rest and is full of errors. I tested under different lighting conditions and also on some clean floor areas, and found similar errors in that region. So what is wrong with the depth sensor?
Thank you so much!
Siyuan Chen
Sorry to hear that your depth test failed. From your description, you may need to have your device's depth calibration redone. For now, please send a request email to project-tango-help#google.com and let us see what we can do.
Try to do the test again, but this time lock screen rotation first. Then turn the tablet upside down. If the errant pixels now appear in the lower left, it is likely a hardware issue. If they remain in the upper right, it is an environmental issue. For example, what looks like a flat wall could be slightly bowed.
I am getting my first Tango in the next day or so; I have worked a little with Occipital's Structure Sensor, which is where my background in depth-perceiving cameras comes from.
Has anyone used multiple Tangos at once (let's say 6-10) looking at the same part of a room, using depth for identification and placement of 3D characters/content? I have been told that multiple devices looking at the same part of a room will confuse each Tango, as they will see the other Tangos' IR dots.
Thanks for your input.
Grisly
I have not tried to use several Tangos, but I have tried to use my Tango in a room where I had a Kinect 2 sensor, which caused the Tango to go bananas. The Tango's IR projector does seem to have lower intensity in comparison, but I would still say it is reasonable to assume that it will not work.
It might work at certain angles, but I doubt you will be able to find a configuration of that many cameras without any of them interfering with each other. If you do make it work, however, I would be very interested to know how.
You could lower the depth camera rate (defaults to 5/second I believe) to avoid conflicts, but that might not be desirable given what you're using the system for.
Alternatively, only enable the depth camera when placing your 3D models on surfaces, then disable said depth camera when it is not needed. This can also help conserve CPU and battery power.
It did not work. The Occipital Structure Sensor, on the other hand, did work (multiple devices in one place)!
I'm looking to develop an outdoor application, but I'm not sure whether the Tango tablet will work outdoors. Other depth devices out there tend not to work well outside because they depend on IR light being projected from the device and then observed after it bounces off the objects in the scene. I've been looking for information on this, and all I've found is this video: https://www.youtube.com/watch?v=x5C_HNnW_3Q. Based on the video, it appears the tablet can work outside by doing some IR compensation and/or using the depth sensor, but I just wanted to make sure before getting one.
If the sun is out, it will only work in the shade, and darker shade is better. I tested this morning using the Java Point Cloud sample app, and I only get more than 10k points in my point cloud in the center of my building's shadow, close to the building. Toward the edge of the shadow the depth point cloud frame rate goes way down and I get the "Few depth points" message. If it's overcast, I'm guessing your results will vary depending on how dark it is; I haven't tested that yet.
The Tango (Yellowstone) tablet also works by projecting IR light patterns, like the other depth-sensing devices you mentioned.
You can expect the pose tracking and area learning to work as well as they do indoors. The depth perception, however, will likely not work well outside in direct sunlight.
I'd like to program detection of a rectangular sheet of paper whose sides don't absolutely need to be perfectly straight, since I may take the picture "in the air", which means the individual sides of the paper might get distorted a bit.
The app CamScanner (iOS and Android) does this very well, and I'm wondering how it might be implemented. My first thought was the following pipeline:
Smoothing / noise reduction
Edge detection (Canny, etc.) OR thresholding (global / adaptive)
Hough transform
Line detection (only vertical / horizontal lines allowed)
Calculating the intersection points of the four found lines
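A minimal sketch of that pipeline in OpenCV's Python bindings, just to make the steps concrete (the file name and all thresholds are placeholder assumptions, not CamScanner's actual values):

    # Sketch of the five steps above; "paper.jpg" and the thresholds are
    # illustrative placeholders only.
    import cv2
    import numpy as np

    img = cv2.imread("paper.jpg")
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)   # 1. smoothing / noise reduction
    edges = cv2.Canny(blurred, 50, 150)           # 2. edge detection
    # 3. Hough transform, returning line segments as (x1, y1, x2, y2)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                            minLineLength=100, maxLineGap=10)
    # 4./5. Next: keep near-vertical / near-horizontal segments and intersect
    # the four strongest ones to get corner candidates for the paper.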
But this approach gives me a lot of problems with different types of images.
And I'm wondering whether there's a better approach that directly detects a rectangle-like shape in an image, and if so, whether CamScanner implements it that way as well.
Here are some images taken in CamScanner.
These are detected quite nicely, even though in (a) one side is distorted (the corner is still shown in the overlay but doesn't really fit the corner of the white paper), and in (b) the background is quite close in appearance to the actual paper, yet it still gets recognized correctly:
It even handles rotated pictures correctly:
And when I introduce some deliberate errors, it fails, but at least it detects part of the contour and always tries to fit it to a rectangle:
And here it fails completely:
I suppose that in the last three examples, if it used a Hough transform, it could have detected at least two of the four sides of the rectangle.
Any ideas and tips?
Thanks a lot in advance
The OpenCV framework may help with your problem. Also, you can look at this document for the Android platform.
The full source code is available on GitHub.
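As one concrete possibility for the "directly detect a rectangle-like shape" approach asked about above, here is a minimal contour-based sketch with OpenCV in Python. It assumes the paper is the largest contour that simplifies to four corners; this is a common technique, not necessarily what CamScanner actually does:

    # Sketch: treat the largest contour that reduces to four corners as the page.
    import cv2

    img = cv2.imread("page.jpg")                  # placeholder file name
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(cv2.GaussianBlur(gray, (5, 5), 0), 50, 150)
    # OpenCV 4.x returns (contours, hierarchy); 3.x prepends the image.
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    page = None
    for c in sorted(contours, key=cv2.contourArea, reverse=True):
        # Simplify the contour; a sheet of paper should reduce to ~4 corners.
        approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
        if len(approx) == 4:
            page = approx
            break
    if page is not None:
        cv2.drawContours(img, [page], -1, (0, 255, 0), 3)  # draw the detected quad

This tolerates slightly bowed sides better than strict line fitting, since approxPolyDP only needs the contour to be approximately four-cornered.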
(I'll answer my own question here for general knowledge)
In Tesseract OCR, how do you detect an image that is upside down?
People who have worked with Tesseract may or may not know that it can read images that are presented upside down.
The issue, however, is that with hOCR output you cannot tell that the image was upside down, as this is stated nowhere in the document.
So how do you detect it?
After double-checking, I noticed that it really is not directly in the hOCR output; I would have expected some attribute on the ocr_page div denoting the orientation.
What I did figure out is that you can read the y-values of the bounding boxes of all ocr_carea elements per page:
If the values go from low to high, then the page is in normal orientation.
If the values go from high to low, then the page is upside down.
This may or may not work for 90- and 270-degree rotations, but it could very well be that you see a similar pattern in the x-values.
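A minimal sketch of that heuristic in Python (assuming beautifulsoup4 is installed, the hOCR markup is in a string called hocr_html, and each ocr_carea carries the standard title="bbox x0 y0 x1 y1" attribute):

    # Heuristic from above: ascending y-values of ocr_carea boxes mean normal
    # orientation; descending means the page was read upside down.
    import re
    from bs4 import BeautifulSoup

    def page_is_upside_down(hocr_html):
        soup = BeautifulSoup(hocr_html, "html.parser")
        tops = []
        for area in soup.find_all("div", class_="ocr_carea"):
            # hOCR title attributes look like: "bbox 36 92 618 361"
            match = re.search(r"bbox (\d+) (\d+) (\d+) (\d+)", area.get("title", ""))
            if match:
                tops.append(int(match.group(2)))  # y0 of the bounding box
        # Need at least two areas to compare; high-to-low y means upside down.
        return len(tops) >= 2 and tops[0] > tops[-1]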
I'm trying to solve a problem I'm facing in detecting the direction of movement in an image.
I have a video I'm trying to analyze; it is composed of objects that continually shrink and expand, and I'm trying to detect whether, in the current frame, the object is shrinking or expanding.
Here is an example of two frames: in one the object is expanded, and in the other it is shrunk.
Note: you can't see the difference when they are on top of each other; try saving them and viewing one after the other on your computer.
So is there a way I can detect the direction of movement in the video (inward or outward)?
Thanks a lot.
This can be solved with "optical flow", which has been studied for several decades now.
The classical method is Horn-Schunck (http://en.wikipedia.org/wiki/Horn%E2%80%93Schunck_method), which you can download here: http://www.mathworks.com/matlabcentral/fileexchange/22756-horn-schunck-optical-flow-method. It's fast but not the most accurate way to solve the problem, as it tends to blur the regions you are interested in detecting, since it minimizes the L2 norm of the gradients. Here's what I got on your images using Horn-Schunck off the shelf:
Since your images have lots of edges, it's probably worthwhile to try out some more modern algorithms; http://people.csail.mit.edu/celiu/OpticalFlow/ might help.
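If you want a quick experiment outside MATLAB, here is a rough sketch using OpenCV's built-in Farneback flow in Python, as a stand-in for Horn-Schunck (which OpenCV does not ship). The function radial_motion and the idea of measuring expansion as mean outward flow are my own illustration:

    # Projects dense optical flow onto the outward radial direction from the
    # image centre: positive mean suggests expansion, negative mean shrinking.
    import cv2
    import numpy as np

    def radial_motion(prev_gray, next_gray):
        flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        h, w = prev_gray.shape
        y, x = np.mgrid[0:h, 0:w].astype(np.float32)
        rx, ry = x - w / 2.0, y - h / 2.0   # vectors pointing away from centre
        norm = np.sqrt(rx ** 2 + ry ** 2) + 1e-6
        # Dot product of the flow with the outward unit vector, averaged.
        radial = (flow[..., 0] * rx + flow[..., 1] * ry) / norm
        return radial.mean()                # > 0: expanding, < 0: shrinking

Feed it two consecutive grayscale frames (e.g. loaded with cv2.imread(..., cv2.IMREAD_GRAYSCALE)) and check the sign of the result.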