I used this tutorial to create a working example project. But when I move around with the device, the placed object also drifts slightly with me (even in Lowe's Vision app), whereas ARKit keeps objects a lot more stable than Tango. Is there a guide to fixing this issue, or is Tango simply not ready for real-world applications (other than cases where slightly unstable objects are tolerable, such as games)?
What "Tango" device, if it is the Dev Kit, then that 3 year old Tegra chip and older hardware is probably the bottle-neck as the Phab 2 Pro can compute and track way better then the old Dev Kit as I have compared them next to each other.
I have also compared my Phab 2 Pro running a Tango C API demo against the standard ARKit demo, and Tango tracks noticeably better because it has a depth camera, whereas ARKit is essentially good software on top of a normal RGB camera. That depth camera loses much of its advantage, though, if you bury it under the abstraction layer that Unity adds.
Also, I am not sure how you can really quantify "more stable"; it might be the application's fault rather than the hardware's.
Tango is developed by Google and provides an API used for motion tracking on mobile devices. I was wondering if it could be used in a standalone Java application without Android (i.e., Java SE). If not, are there any APIs similar to Tango that track motion and perceive depth?
I am trying to capture motion data from a video file, not from a camera/webcam, if that is possible at all.
Google's Tango API is only compatible with Tango-enabled devices, so it does not work on all mobile devices. If you try to use the API on a device that is not Tango-enabled, it won't work.
I think you should look into OpenCV. It is an open-source computer vision library with bindings for Java and many other languages, and it lets you analyze videos without needing extra sensors (such as the raw depth sensors found primarily on Tango-enabled devices).
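For the video-file case specifically, here is a rough sketch of what that can look like with OpenCV's Java bindings, using dense optical flow between consecutive frames as a simple motion measure. The file name and parameter values are just placeholders, not anything from the question.

```java
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.imgproc.Imgproc;
import org.opencv.video.Video;
import org.opencv.videoio.VideoCapture;

public class VideoMotion {
    public static void main(String[] args) {
        // Requires the OpenCV native library on java.library.path.
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        VideoCapture cap = new VideoCapture("input.mp4"); // hypothetical file name
        Mat frame = new Mat(), gray = new Mat(), prevGray = new Mat(), flow = new Mat();

        while (cap.read(frame)) {
            Imgproc.cvtColor(frame, gray, Imgproc.COLOR_BGR2GRAY);
            if (!prevGray.empty()) {
                // Dense optical flow: every pixel gets a (dx, dy) motion vector.
                Video.calcOpticalFlowFarneback(prevGray, gray, flow, 0.5, 3, 15, 3, 5, 1.2, 0);
                // Mean (dx, dy) over the frame gives a crude global motion estimate.
                System.out.println("mean motion: " + Core.mean(flow));
            }
            gray.copyTo(prevGray);
        }
        cap.release();
    }
}
```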
The Tango API is only available on Tango-enabled devices, which there aren't that many of. That being said, it is possible to create your own motion-tracking and depth-sensitive app with standard Java.
For motion tracking, all you need is an accelerometer and a gyroscope, which most phones now ship with as standard. You then essentially integrate those readings over time to get an estimate of the device's position and orientation. Note that the accuracy will depend on your hardware and implementation, but be prepared for it to be fairly inaccurate due to sensor drift and integration errors (see the answer here).
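As a minimal Android sketch of that integration idea (names are illustrative; register the listener with SensorManager for TYPE_GYROSCOPE and TYPE_LINEAR_ACCELERATION, and expect the position estimate to drift within seconds for the reasons above):

```java
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;

// Integrates raw gyroscope and linear-acceleration readings over time.
// This only illustrates the principle; real dead reckoning needs filtering.
public class DeadReckoningListener implements SensorEventListener {
    private long lastGyroNs, lastAccelNs;
    private final float[] angle = new float[3];     // integrated rotation per axis (rad)
    private final float[] velocity = new float[3];  // integrated velocity (m/s)
    private final float[] position = new float[3];  // doubly integrated position (m)

    @Override
    public void onSensorChanged(SensorEvent event) {
        if (event.sensor.getType() == Sensor.TYPE_GYROSCOPE) {
            if (lastGyroNs != 0) {
                float dt = (event.timestamp - lastGyroNs) * 1e-9f; // timestamps are in ns
                for (int i = 0; i < 3; i++) angle[i] += event.values[i] * dt;   // rad/s -> rad
            }
            lastGyroNs = event.timestamp;
        } else if (event.sensor.getType() == Sensor.TYPE_LINEAR_ACCELERATION) {
            if (lastAccelNs != 0) {
                float dt = (event.timestamp - lastAccelNs) * 1e-9f;
                for (int i = 0; i < 3; i++) {
                    velocity[i] += event.values[i] * dt;   // m/s^2 -> m/s
                    position[i] += velocity[i] * dt;       // m/s   -> m
                }
            }
            lastAccelNs = event.timestamp;
        }
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { }
}
```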
Depth perception is more complex and depends on your hardware setup. I'd recommend looking into the excellent OpenCV library, which already has Java bindings, and making sure you have a good grasp of the basics of computer vision (calibration, the camera matrix, the pinhole model, etc.). The first two answers in this SO question should get you started on determining depth with a single camera.
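To make the pinhole model concrete, here is a tiny hedged example of the classic similar-triangles estimate for a single camera: if you know an object's real-world width and the camera's focal length in pixels (from calibration), its apparent width in pixels gives you its distance. All numbers below are made up for illustration.

```java
// Single-camera depth from a known-size object via the pinhole model:
//   perceivedWidthPx = focalLengthPx * realWidthMeters / distanceMeters
// rearranged for distance. focalLengthPx comes from camera calibration.
public class PinholeDepth {
    static double distanceMeters(double focalLengthPx, double realWidthMeters, double perceivedWidthPx) {
        return focalLengthPx * realWidthMeters / perceivedWidthPx;
    }

    public static void main(String[] args) {
        double focalLengthPx = 1000.0; // illustrative value from calibration
        double realWidth = 0.21;       // e.g. an A4 sheet is 0.21 m wide
        double widthInImage = 300.0;   // measured width of the object in the frame, in pixels
        System.out.printf("estimated distance: %.2f m%n",
                distanceMeters(focalLengthPx, realWidth, widthInImage)); // ~0.70 m here
    }
}
```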
I'm developing an app that uses device sensors to determine the user's x-axis rotation and y-axis pitch (essentially the user spins in a circle and looks up at the sky or down at the ground). I developed the app for a phone using Android's SensorManager.getRotationMatrix and SensorManager.getOrientation functions and then using the first two resulting orientation values. I've now moved my app to a Project Tango tablet and these values no longer seem to be valid. I've looked into Project Tango a bit and it seems to report orientation as quaternions. Does this mean that Project Tango is not meant to work with the standard Android SDK sensor APIs?
The Project Tango APIs (which are Android-only) and the Android SDK are both required to build Project Tango apps. The Tango APIs offer higher-level interfaces to the device's sensors than the Android SDK's direct access to sensor state: they combine sensor readings to deliver a complete "pose" (position and orientation in six degrees of freedom), as well as 3D (X, Y, depth) scene points and even feature recognition in scenes. The crucial benefit of the Tango APIs is that they sync several different sensors very precisely in real time, so the pose state is very accurate; indeed, the latest Tango devices support that sync inside the CPU circuitry itself. An app collecting data from the sensors through the plain (non-Tango) Android SDK APIs will not be able to correlate the sensors as tightly as the Tango APIs do. So perhaps you're getting sensor data that is out of sync, which shows up as offsets.
Also, a known bug in the Tango APIs is that the device's compass sensor returns garbage values. I don't know whether that bug affects the quality of the data returned by the Android SDK's direct calls to the compass, but those calls are going to return state that is at least somewhat out of sync with the state returned by the Tango API calls.
In theory, the Android SDK should still work, so your app should run without any change, but it won't take advantage of the improvements that Project Tango provides.
To get the advantages of Tango (the fisheye camera for improved motion tracking, etc.), you need to use the Tango API to activate the Tango service and then, yes, work with the pose as quaternions.
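If all you need are the yaw/pitch angles you were getting from getOrientation, you can convert the pose quaternion back to Euler angles yourself. Here is a rough sketch; it assumes the Tango Java API's TangoPoseData with its rotation stored as (x, y, z, w), and the axis mapping depends on the base/target coordinate frames you request, so verify it against the Tango docs on the device.

```java
// Converts a unit quaternion (x, y, z, w) to roll/pitch/yaw in radians.
// Which angle corresponds to "heading" vs "pitch" depends on the coordinate
// frames you ask Tango for, so check the mapping empirically.
public final class QuaternionUtil {
    public static double[] toEulerAngles(double x, double y, double z, double w) {
        // roll (rotation about X)
        double roll = Math.atan2(2.0 * (w * x + y * z), 1.0 - 2.0 * (x * x + y * y));
        // pitch (rotation about Y), clamped to avoid NaN at the poles
        double sinPitch = 2.0 * (w * y - z * x);
        double pitch = Math.asin(Math.max(-1.0, Math.min(1.0, sinPitch)));
        // yaw (rotation about Z)
        double yaw = Math.atan2(2.0 * (w * z + x * y), 1.0 - 2.0 * (y * y + z * z));
        return new double[] { roll, pitch, yaw };
    }
}
```

With the Tango Java API you would feed in the four rotation components from the TangoPoseData delivered to your pose callback.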
So, we've got a little graphical doohickey that needs to run in a server environment without a real video card. All it really needs is framebuffer objects and maybe some vector/font anti-aliasing. It will be slow, I know. It just needs to output single frames.
I see this post about how to force software rendering mode, but it seems to apply to machines that already have OpenGL-enabled cards (like NVIDIA).
So, for fear of trying to install OpenGL on a machine three time zones away with a bunch of live production sites on it: has anybody tried this, and/or does anybody know how to "emulate" an OpenGL environment? Unfortunately our dev server HAS a video card, so I can't really show "what I've tried".
The relevant code is all in Cinder, but I think our actual OpenGL utilization is lightweight for this purpose.
This would run on Windows Server 2008 Standard.
I see Microsoft has a software implementation of OpenGL 1.1, but I can't seem to find one for 2.0.
Build or find a Mesa software-rendering build of opengl32.dll and drop it next to your executable; Mesa's software rasterizer implements a far newer OpenGL version than the 1.1 renderer built into Windows.
It will be slow.
The more I read about the different types of views/contexts/rendering backends, the more confused I get.
According to http://en.wikipedia.org/wiki/Quartz_%28graphics_layer%29, Mac OS X offers Quartz (Extreme) as a render backend, which itself is part of Core Graphics.
The Apple docs, and some books too, say that in any case you end up using OpenGL somehow (obviously, since the operating system uses OpenGL to render all of its UI).
I currently have an application that captures real-time video from a camera (via QTKit, which is based on QuickTime but is Cocoa), and I would like to further process the frames (via Core Image, GLSL shaders, etc.).
So far so good. Now my question is: does it matter performance-wise whether you
a) draw the captured frame via Quartz (and thus implicitly via OpenGL), or
b) set up an OpenGL context and a DisplayLink and draw the buffered image explicitly via OpenGL?
What are the advantages or disadvantages of going either way?
I've looked at the different examples (especially CoreImage101 and CoreVideo101) and the documents on Apple's developer pages, but I can't see why they go (or have to go) that way.
And I really don't get where Core Video and Core Animation come into play.
Does going route b) automatically mean I'm using Core Video? And with which route can I use Core Animation?
Additional info:
http://developer.apple.com/leopard/overview/graphicsandmedia.html
http://theocacao.com/document.page/306
http://lists.apple.com/archives/quartz-dev/2007/Jun/msg00059.html
P.S.: BTW, I am on Leopard, so no QuickTime X confusion yet :)
Generally speaking, OpenGL just gives you more flexibility than the higher-level APIs. If the higher-level APIs do not offer a feature you need, then it is very likely you will need to drop down to the OpenGL layer.
If they do offer everything you need, then you should get comparable speed, perhaps with a small (almost negligible) degradation given the Objective-C overhead.
I am new to OpenGL and am using C# and OpenTK for development. My application is very lightweight (just 2D graphics), and I plan to use software rendering when hardware rendering is not available.
How do I make sure software rendering works on all computers when hardware rendering is not available?
Should I distribute software rendering libraries like Mesa myself, or will they already be available on all (Windows) systems?
In other words, is opengl32.dll always available on all modern Windows versions (> XP SP2), or should I distribute that as well?
(My application is very simple (just 2D graphics) for now. I chose OpenGL instead of GDI+/WPF because I may extend it to 3D in the future.)
OpenGL is a system library; you should not distribute it with your application. This is especially true on Unix/Linux systems, where it should be installed through the distribution's package manager.
Since opengl32.dll is included with Windows, it falls back to software rendering automatically if the pixel format you choose in your application isn't hardware-accelerated by the graphics driver.
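You can also verify at runtime which path you ended up on by reading the GL_RENDERER string from the created context: Microsoft's built-in fallback reports itself as "GDI Generic" and only offers OpenGL 1.1. Below is a rough sketch, written in Java with LWJGL/GLFW purely for illustration; OpenTK wraps the same glGetString entry point, so the check translates directly to C#.

```java
import org.lwjgl.glfw.GLFW;
import org.lwjgl.opengl.GL;
import org.lwjgl.opengl.GL11;

public class RendererCheck {
    public static void main(String[] args) {
        if (!GLFW.glfwInit()) throw new IllegalStateException("GLFW init failed");
        // Hidden window: we only need a current OpenGL context to query strings.
        GLFW.glfwWindowHint(GLFW.GLFW_VISIBLE, GLFW.GLFW_FALSE);
        long window = GLFW.glfwCreateWindow(64, 64, "probe", 0, 0);
        GLFW.glfwMakeContextCurrent(window);
        GL.createCapabilities();

        // Microsoft's software fallback identifies itself as "GDI Generic".
        String renderer = GL11.glGetString(GL11.GL_RENDERER);
        String version = GL11.glGetString(GL11.GL_VERSION);
        System.out.println("Renderer: " + renderer + ", GL version: " + version);
        boolean software = renderer != null && renderer.contains("GDI Generic");
        System.out.println(software ? "Software fallback in use" : "Hardware-accelerated context");

        GLFW.glfwDestroyWindow(window);
        GLFW.glfwTerminate();
    }
}
```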
I tried leveraging OpenTK as well, but it creates a dependency of its own and, particularly for a newbie, does little besides blurring the line between learning OpenGL and learning someone else's interpretation of the framework.
OpenGL is, as the other answer suggests, a system library: its functions live in a C DLL whose entry points you import through the API.
OpenTK imports these functions for you; that's the only real benefit it adds. In doing so, though, many of the types and function calls are reinterpreted according to the author of OpenTK.
This creates an additional learning curve, since most of the references you'll find online use plain OpenGL. Not only will you be struggling to understand OpenGL, which isn't easy, but you'll also be dealing with OpenTK's interpretation of the OpenGL standard.
Keep in mind that many open-source projects such as OpenTK stay open source only until they build up a sufficient user base and then convert to a for-profit model. So say you learn and become dependent on OpenTK: if and when they switch to a for-profit model and you're short on cash, you're out of luck, or you have to pay their price.
What I did was take the source of OpenTK's OpenGL API mapping and rename everything to my own tastes. It's a bit of work, but it's worth the labor and it helped me understand OpenGL.
As for distribution: I rely on absolutely no external dependencies other than the OpenGL DLL, which should already be on the system.
All the DLLs you need for OpenGL will already be preinstalled on any Windows OS you're dealing with. I can't speak for other OS flavors, but I suspect the same is true there.
On a final note: OpenGL handles 'toggling between software and hardware rendering' innately, so libraries like Mesa and OpenTK add very little value at potentially high cost.
What are those costs?
1) Redistributable packaging and licensing. They still come with a license, and most licenses are subject to change at any time.
2) Conversion from open source or free distribution to a for profit model.
Invest in yourself. The OpenGL documentation is vast and at times confusing, and my advice is to resist the knee-jerk temptation to 'take the easy path' by leaning on someone else's model, for one simple reason.
I know you're using this for a 2D application, and even if you're using an orthographic view, the fact of the matter is that you're learning OpenGL, which is patterned with 3D in mind. So give yourself the gift up front of teaching yourself, because there really is no 'easy path' to understanding 3D, and thus no real added value in the external dependencies you're leaning toward.
One thing to keep in mind: modeling. If and when you switch to 3D modeling, creating vertices by hand-coding them in OpenGL is a pain. I use Blender to create my OBJ models and read those into my own C# application, which loads the 3D models and lets me manipulate them from there.
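For what it's worth, reading the geometry out of a Blender-exported OBJ file only takes a few lines. Here is a minimal, hedged sketch (shown in Java to match the earlier examples; it ports to C# almost line for line) that handles just the 'v' and 'f' records; the file path is hypothetical.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.List;

// Reads only the "v x y z" (vertex) and "f i j k ..." (face) records of a
// Wavefront OBJ file, which is enough to get Blender geometry on screen.
public class ObjLoader {
    public final List<float[]> vertices = new ArrayList<>();
    public final List<int[]> faces = new ArrayList<>();

    public static ObjLoader load(String path) throws IOException {
        ObjLoader model = new ObjLoader();
        for (String line : Files.readAllLines(Paths.get(path))) {
            String[] parts = line.trim().split("\\s+");
            if (parts.length == 0) continue;
            if (parts[0].equals("v") && parts.length >= 4) {
                model.vertices.add(new float[] {
                        Float.parseFloat(parts[1]),
                        Float.parseFloat(parts[2]),
                        Float.parseFloat(parts[3]) });
            } else if (parts[0].equals("f")) {
                int[] face = new int[parts.length - 1];
                for (int i = 1; i < parts.length; i++) {
                    // "f 1/1/1 2/2/2 3/3/3" -> take the vertex index before the first '/'
                    face[i - 1] = Integer.parseInt(parts[i].split("/")[0]) - 1; // OBJ indices are 1-based
                }
                model.faces.add(face);
            }
        }
        return model;
    }
}
```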
I had been using C++, which sure is faster, but once I converted the APIs to C# and let the garbage collector manage memory, it became so much easier than having to learn someone else's library.
Don't use redistributables. Leverage the code from OpenTK, with modification, but don't include OpenTK itself as a redistributable.
That's my advice.