Tango is developed by Google and provides an API for motion tracking on mobile devices. I was wondering whether it could be used in a standalone Java application without Android (Java SE). If not, are there any similar APIs out there that track motion and depth perception?
I am trying to capture the motion data from a video file, not from a camera/webcam, if that is possible at all.
Google's Tango API is only compatible with Tango-enabled devices, so it does not work on all mobile devices, only on those that are Tango-enabled. If you try to use the API on a device that is not Tango-enabled, it won't work.
I think you should look into OpenCV. It's an open-source computer vision library that is compatible with Java and many other languages, and it lets you analyze videos without needing many sensors (such as the raw depth sensors primarily found on Tango-enabled devices).
The Tango API is only available on Tango-enabled devices, which there aren't that many of. That being said, it is possible to create your own motion-tracking and depth-sensitive app with standard Java.
For motion tracking, all you need is an accelerometer and a gyroscope, which most phones now come equipped with as standard. You then basically integrate those readings over time, which gives you an estimate of the device's position and orientation. Note that the accuracy will depend on your hardware and implementation, but be prepared for it to be fairly inaccurate thanks to sensor drift and integration errors (see the answer here).
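That integrate-over-time step can be sketched in plain Java. This is a hypothetical one-dimensional illustration (the class and field names are mine; real code would integrate 3-axis vectors and subtract gravity first), and it will drift exactly as warned above:

```java
public class DeadReckoning {
    public double velocity; // m/s, from integrating acceleration
    public double position; // m, from integrating velocity

    /** Feed one accelerometer sample (m/s^2) taken dt seconds after the previous one. */
    public void step(double acceleration, double dt) {
        velocity += acceleration * dt; // first integration: acceleration -> velocity
        position += velocity * dt;     // second integration: velocity -> position
    }
}
```

Because every sample's noise is accumulated twice, even a small constant bias grows quadratically in position, which is why drift becomes noticeable so quickly without correction from other sensors.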
Depth perception is more complex and depends on your hardware setup. I'd recommend looking into the excellent OpenCV library, which already has Java bindings, and making sure you have a good grasp of the basics of computer vision (calibration, the camera matrix, the pinhole model, etc.). The first two answers in this SO question should get you started on determining depth using a single camera.
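As a quick illustration of the pinhole model mentioned above, here is the projection math in plain Java (the intrinsics fx, fy, cx, cy are the usual focal lengths and principal point; this is just the textbook equation, not OpenCV code):

```java
public class Pinhole {
    /** Project a 3D point (X, Y, Z) in camera coordinates to pixel
     *  coordinates (u, v): u = fx*X/Z + cx, v = fy*Y/Z + cy. */
    public static double[] project(double X, double Y, double Z,
                                   double fx, double fy, double cx, double cy) {
        if (Z <= 0) throw new IllegalArgumentException("point must be in front of the camera");
        return new double[]{ fx * X / Z + cx, fy * Y / Z + cy };
    }
}
```

Depth estimation is essentially this equation run backwards: with a known baseline between two views, the shift in u between them lets you solve for Z.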
Related
So recently, casting technology seems to be popping up everywhere, and smart TVs and TV boxes have started to "claim" support for it. I found myself wondering: how do they work? What protocols do they use?
Then one day (I'm a Nexus 6P user, which has a built-in "Casting" feature), a friend claimed his TV supported casting, and he used his iPhone to mirror his screen onto the TV. But when I opened my casting feature it wouldn't find the TV. I tried every device I had on hand, such as my PC (Chrome), and even some third-party software, with no luck (mostly they just use the phone's built-in casting API, which was useless here).
The funny thing is that some random video platform app from China, BiliBili, found the device and worked like a charm with my Nexus 6P. Completely confused, I did some research on the protocols behind casting technology. Here is the information I've found.
Here is the list of casting protocols I got:
DLNA/UPNP casting (Media streaming)
This is probably the oldest casting protocol. DLNA was originally designed for media transfer, much like FTP: it presumably sends a signal to the receiver device and asks it to pull a data feed from the source. Since it was not designed as a casting technology, it lacks features such as playback control and live video streaming, and each application has to write its own data-feeding code, which is why only a few big-brand apps on the market support it.
(Fun fact: Windows also quietly supports this protocol.)
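For the curious, DLNA devices are discovered via UPnP's SSDP mechanism: a plain-text M-SEARCH request multicast over UDP to 239.255.255.250:1900. A minimal sketch (the message format comes from the UPnP spec; error handling and response parsing are omitted):

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

public class SsdpDiscover {
    /** Build an SSDP M-SEARCH request; "ssdp:all" asks every UPnP device to respond. */
    public static String buildMSearch(int mxSeconds, String searchTarget) {
        return "M-SEARCH * HTTP/1.1\r\n"
             + "HOST: 239.255.255.250:1900\r\n"
             + "MAN: \"ssdp:discover\"\r\n"
             + "MX: " + mxSeconds + "\r\n"
             + "ST: " + searchTarget + "\r\n"
             + "\r\n";
    }

    /** Multicast the request; devices reply with unicast HTTP-over-UDP responses
     *  listing their description URLs. */
    public static void send(String request) throws Exception {
        byte[] bytes = request.getBytes(StandardCharsets.US_ASCII);
        try (DatagramSocket socket = new DatagramSocket()) {
            socket.send(new DatagramPacket(bytes, bytes.length,
                    InetAddress.getByName("239.255.255.250"), 1900));
        }
    }
}
```

This discovery step is how apps like BiliBili find a renderer on the network even when the phone's built-in casting feature (which speaks a different protocol) sees nothing.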
Chromecast Built-in (Google Cast)
Chromecast, obviously developed by Google, was designed specifically for casting. It seems to be the leading protocol on the market. One fun fact: Chromecast built-in projection devices also seem able to cast to devices that use Miracast.
Miracast
Miracast, developed by the Wi-Fi Alliance as an extension of Intel WiDi, is probably the second choice on the market. It is widely used in China since it has no relation to Google, and Windows also ships with it built in. It's about as powerful as Chromecast.
[Deprecated] Intel WiDi
No longer supported
AirPlay
Developed by Apple. Well, they have their own market that's isolated from the rest of us... whatever. But this protocol is the most useful one: AirPlay projection devices are able to receive from devices using any of the protocols above, which is really powerful. One like for Apple this time, just this once.
Sources
https://newsletter.icto.um.edu.mo/wireless-display-and-screen-mirroring-technology/
https://en.wikipedia.org/wiki/Google_Cast
some random Chinese website I forget
If any of this information is incorrect or missing, please let me know; I'm willing to update it. I'm just trying to tidy up all the protocols we have, since the internet seems to lack information in this area.
So now come my questions:
Since DLNA/UPnP is the oldest screen-mirroring technology, why did Google and the Wi-Fi Alliance have to develop new casting protocols? You might tell me it's because DLNA lacks controls such as playback, but if you have ever used BiliBili, or "Video & TV Cast for DLNA Player: UPnP Movie Mirror" by 2kit consulting from the Play Store, you will notice that there actually is some way to control it, and even to stream YouTube. I also can't currently find any open-source DLNA casting player. Why does nobody want to update DLNA instead of heading off in a new direction?
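On the playback-control point: UPnP does in fact define control, via the AVTransport service that DLNA renderers expose, and apps like the ones mentioned above most likely drive it with SOAP requests. A hedged sketch of what such a request body looks like (the service type and action names are from the UPnP AVTransport:1 spec; the control URL it gets POSTed to is device-specific):

```java
public class AvTransport {
    static final String SERVICE = "urn:schemas-upnp-org:service:AVTransport:1";

    /** Wrap a UPnP action (e.g. "Play", "Pause", "SetAVTransportURI") in a SOAP
     *  envelope. The result is POSTed to the renderer's control URL with a
     *  SOAPACTION: "<SERVICE>#<action>" header. */
    public static String soapEnvelope(String action, String innerArgs) {
        return "<?xml version=\"1.0\"?>"
             + "<s:Envelope xmlns:s=\"http://schemas.xmlsoap.org/soap/envelope/\" "
             + "s:encodingStyle=\"http://schemas.xmlsoap.org/soap/encoding/\">"
             + "<s:Body>"
             + "<u:" + action + " xmlns:u=\"" + SERVICE + "\">"
             + "<InstanceID>0</InstanceID>" + innerArgs
             + "</u:" + action + ">"
             + "</s:Body></s:Envelope>";
    }
}
```

For example, `soapEnvelope("SetAVTransportURI", "<CurrentURI>http://...</CurrentURI><CurrentURIMetaData></CurrentURIMetaData>")` followed by `soapEnvelope("Play", "<Speed>1</Speed>")` is roughly how a DLNA controller starts playback, so control does exist; it just has to be implemented per app.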
Can I use a Miracast-enabled projection device to cast to a Chromecast device? Say I want to use Windows' built-in Miracast to cast to a Chromecast built-in device.
Currently, casting technology also seems to lack security: you can easily project onto a neighbor's TV without permission, and people even play adult or horror videos to prank others. Why does no one care about that, not even Google?
Miracast from Ethernet to Wi-Fi: why does such a thing still not exist? According to the information I found online, the projection device must be using Wi-Fi. Why?
I have used this tutorial to create a working example project. But when I move around with the device, the object also moves slightly with me (even in Lowe's Vision app), whereas ARKit keeps objects much more stable than Tango. Is there any guide to fixing this issue, or is Tango not ready for real-world applications (other than cases where slightly unstable objects are tolerable, like games)?
Which "Tango" device? If it is the Dev Kit, then that three-year-old Tegra chip and older hardware are probably the bottleneck; the Phab 2 Pro computes and tracks far better than the old Dev Kit, as I have compared them side by side.
I have also compared my Phab 2 Pro running a Tango C API demo against the standard ARKit demo, and Tango has much better tracking since it has a depth camera, whereas ARKit is just good software on top of a normal RGB camera. But the depth camera loses a lot of its advantage if you bog it down with the abstraction layer Unity adds.
Also, to my knowledge, I'm not sure how you can really quantify "more stable": it might be the application's fault, not the hardware's.
I'm developing an app that uses device sensors to determine the user's x-axis rotation and y-axis pitch (essentially the user spins in a circle and looks up at the sky or down at the ground). I developed this app for a phone using the Android SensorManager.getRotationMatrix and SensorManager.getOrientation functions, then using the first two resulting orientation values. I've now moved my app to a Project Tango tablet and these values no longer seem to be valid. I've looked into Tango a bit and it seems it measures things in quaternions. Does this mean that Project Tango is not meant to implement the Android SDK?
The Project Tango APIs (which are Android-only) and the Android SDK are both required to build Project Tango apps. The Tango APIs offer higher-level interfaces to the device's sensors than the Android SDK's direct access to raw sensor state: they combine sensor readings to deliver a complete "pose" (6-degrees-of-freedom position and orientation), as well as 3D (X, Y, depth) scene points and even feature recognition in scenes. The crucial benefit of the Tango APIs is that they sync the different sensors very precisely in real time, so the pose is very accurate; indeed, the latest Tango devices support that sync in the hardware itself. An app collecting sensor data through the (non-Tango) Android SDK APIs will not be fast enough to correlate the sensors the way the Tango APIs do. So perhaps you're getting sensor data that's not synced, which shows up as offsets.
Also, a known bug in the Tango APIs is that the device's compass sensor is returning garbage values. I don't know if that bug affects the quality of data returned by the Android SDK's calls directly to the compass. But the Android SDK's calls to the compass are going to return state at least somewhat out of sync with the state returned by the Tango API calls.
In theory, the Android SDK should still work, so your app should run without any changes, but it won't take advantage of the improvements Project Tango provides.
To get the advantages of Tango (e.g. the fisheye camera for improved motion tracking), you need to use the Tango API to start the Tango service and then, yes, work with the pose in quaternions.
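To bridge the two worlds: the azimuth/pitch pair you previously got from getOrientation can be recovered from a quaternion with standard conversion math. A sketch (the axis conventions here are an assumption; Tango's frame and Android's sensor frame differ, so you may need to remap axes for your device):

```java
public class QuaternionOrientation {
    /** Convert a unit quaternion (x, y, z, w) to yaw (rotation about Z) and
     *  pitch (rotation about Y), both in radians. */
    public static double[] toYawPitch(double x, double y, double z, double w) {
        double yaw = Math.atan2(2.0 * (w * z + x * y),
                                1.0 - 2.0 * (y * y + z * z));
        double sinPitch = 2.0 * (w * y - z * x);
        sinPitch = Math.max(-1.0, Math.min(1.0, sinPitch)); // clamp for numeric safety
        return new double[]{ yaw, Math.asin(sinPitch) };
    }
}
```

For example, the identity quaternion (0, 0, 0, 1) yields yaw 0 and pitch 0, and a quaternion representing a 90-degree turn about Z yields yaw of pi/2.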
As far as I understand, Eddystone (just released by Google) is effectively a new 'operating system' for Bluetooth 4.0 Low Energy devices (iBeacons). I have been experimenting with iBeacons for some time now and want to try out a few things with Eddystone. Has anyone had a go with it yet? I've read a few sites that say it can be installed on some devices... Can anyone share how to do this?
If you want to start out by playing with Eddystone, you have a couple of options:
You can use a software transmitter. Just download my free Locate App in the Google Play store which will both act as an Eddystone transmitter and decode other Eddystone-compatible beacons in the vicinity. Google also has posted an Android app that can transmit the Eddystone-UID frame here, but you have to compile it yourself.
You can get a few hardware beacons for testing with a Developer Kit from Radius Networks (my company) here.
Once you have a transmitter, you can try writing some software to work with it. Here's a tutorial I wrote on how to build a basic Eddystone-capable Android app.
One other thing that might be useful is an Eddystone detector tool. You can use the free Android Locate app to detect and decode all of the frames transmitted by Eddystone.
So:
Eddystone is a specification for Bluetooth Smart (usually just called BLE) devices to behave like beacons — it defines the Bluetooth frames and content they need to broadcast to be seen as beacons.
iBeacon is not a generic term. iBeacon is actually Apple's specification for Bluetooth beacons. Eddystone and iBeacon are both examples of beacon specifications for BLE devices.
There are a few ways to get started with Eddystone beacons.
a. A number of hardware manufacturers sell developer kits that will let you get started with Eddystone beacons right away, and there is plenty of example software out there, either from those vendors or from the Google pages on GitHub: github.com/google/eddystone and github.com/google/beacon-platform.
b. Some people have had good luck with Arduinos and Raspberry Pis. You can see an Arduino example here (Note: I have no idea how well that project works, I've just seen it used a few times.)
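Once a detector picks up a frame, decoding it is straightforward: Eddystone frames are broadcast as BLE service data under the 16-bit UUID 0xFEAA, and a UID frame is frame type 0x00, one signed TX-power byte, a 10-byte namespace, and a 6-byte instance. A sketch of the parsing (the layout follows the Eddystone spec on GitHub; the class is mine, not from any particular library):

```java
public class EddystoneUid {
    public final int txPowerDbm;      // calibrated signal strength at 0 m, signed
    public final String namespaceHex; // 10-byte namespace id
    public final String instanceHex;  // 6-byte instance id

    /** Parse the service-data bytes advertised under UUID 0xFEAA. */
    public EddystoneUid(byte[] serviceData) {
        if (serviceData.length < 18 || serviceData[0] != 0x00)
            throw new IllegalArgumentException("not an Eddystone-UID frame");
        txPowerDbm = serviceData[1]; // Java bytes are already signed
        namespaceHex = hex(serviceData, 2, 10);
        instanceHex = hex(serviceData, 12, 6);
    }

    private static String hex(byte[] b, int off, int len) {
        StringBuilder sb = new StringBuilder();
        for (int i = off; i < off + len; i++) sb.append(String.format("%02x", b[i]));
        return sb.toString();
    }
}
```

The TX-power byte is what ranging code compares against the received signal strength to estimate distance, which is the same trick iBeacon uses.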
I am new to WP7 development and working on a project (a WP7 app) where I need to get ONLY the gravity force using the acceleration API. (I think I could do something with the Motion API, but it requires the phone to support a compass and gyroscope as well.) So, is there any way to separate gravity from acceleration, or to get only the gravity forces on the X, Y, and Z axes using acceleration alone (as I want my app to run on WP devices that have no compass or gyroscope)?
Also, Android has methods like:
Linear acceleration
Low-pass / high-pass filters, etc.
Do we have that kind of support in WP7?
Thanks
The phone itself doesn't know which force is caused by acceleration and which by gravity. You would need information from other sensors to do the math to separate the values; that's what the Motion API is for.
So your only option is the Motion API. It will fail gracefully if the device doesn't have the necessary sensors, but will work if they are present:
The Motion API used by this sample requires all of the supported Windows Phone sensors, and therefore these sample applications will fail gracefully but will not work properly on devices without the necessary sensors or on the device emulator.
There was a post on the Windows Phone Team blog about implementing a high-pass/low-pass filter on the accelerometer data. I've used this with fairly good results.
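The filter itself is tiny and platform-independent; here it is sketched in Java rather than WP7's C#, with a hypothetical smoothing factor (the same exponential low-pass trick appears in Android's sensor documentation for isolating gravity):

```java
public class GravityFilter {
    private final double alpha;       // smoothing factor in (0, 1); higher = smoother but slower
    private final double[] gravity = new double[3];

    public GravityFilter(double alpha) { this.alpha = alpha; }

    /** Feed a raw accelerometer sample; returns the current gravity estimate.
     *  Linear acceleration is then simply the raw sample minus this estimate. */
    public double[] update(double ax, double ay, double az) {
        gravity[0] = alpha * gravity[0] + (1 - alpha) * ax;
        gravity[1] = alpha * gravity[1] + (1 - alpha) * ay;
        gravity[2] = alpha * gravity[2] + (1 - alpha) * az;
        return gravity.clone();
    }
}
```

Holding the device still, the estimate converges to roughly (0, 0, 9.81); fast shakes mostly pass through the subtraction as linear acceleration. No compass or gyroscope is needed, which is exactly the constraint in the question.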