Google Glass SDK for Epson Moverio [epson]

What I understand from the technical specs of Google Glass is that it displays a 2D plane on a projector over one eye. The Android SDK, together with the GDK, provides tools for writing apps for the device, including features that can sense eye and voice actions. But it does not provide 3D stereoscopic vision, as this would require a projector for each eye.
On the other hand, the Epson Moverio promises a true 3D augmented reality experience. Having used the Moverio, I can see it has two projectors, one per eye, which can project stereoscopic images.
Perhaps I should have done more extensive research on the range of products/toolkits available, but I still have some questions/doubts for which I have so far been unable to find any information.
Q1. Does google provide any 2-eye-projector kind of glasses product?
ANS: No
Q2. Does the Google Glass development kit (the API) provide features for generating left and right views of a 3D object for the Epson Moverio? I have seen that Wikitude and Metaio come with this kind of feature. Does Google provide any such support in the GDK?
ANS: No, not from Google.
Q3. Does Epson plan to roll out any developer tools for easily creating 3D markers and plotting them in the projected space?
ANS: Not yet announced by Epson.

There is no current support in Google Glass for stereoscopic views.
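Since no SDK does this for you, a side-by-side stereo view has to be assembled by hand: render the scene twice per frame, once per half of the panel, with the camera shifted sideways by half the interpupillary distance. A minimal sketch of the bookkeeping, with assumed IPD and panel-resolution values (nothing here comes from the GDK or an Epson SDK):

```python
# Sketch of manual side-by-side stereo setup. The IPD and the panel
# resolution below are assumed illustrative values, not vendor specs.

IPD_M = 0.063                    # assumed average interpupillary distance, metres
SCREEN_W, SCREEN_H = 960, 540    # assumed panel resolution

def stereo_viewports(width, height):
    """Side-by-side viewports: (x, y, w, h) for the left and right eyes."""
    half = width // 2
    return (0, 0, half, height), (half, 0, half, height)

def eye_translation(eye):
    """Horizontal camera shift for one eye ('left' or 'right').
    Apply this along the camera's right vector before building the
    view matrix, then render into the matching viewport."""
    half_ipd = IPD_M / 2.0
    return -half_ipd if eye == "left" else half_ipd

left_vp, right_vp = stereo_viewports(SCREEN_W, SCREEN_H)
# Per frame: set left_vp, shift the camera by eye_translation("left"),
# draw; then repeat with right_vp and eye_translation("right").
```

In an Android renderer the viewport split would map onto two `glViewport` calls per frame; toolkits such as Wikitude and Metaio wrap exactly this kind of per-eye rendering.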

Related

DJI-Mobile-SDK/position control

I am using the DJI Mobile SDK to create an app in Android Studio. I want to know how to use the GPS signals of the aircraft and the phone to implement position control. Is there an API in the DJI Mobile SDK I can use?
You can follow this sample to implement simple, higher-level GPS position control: https://developer.dji.com/mobile-sdk/documentation/ios-tutorials/GSDemo.html
If you stop at a waypoint, the aircraft automatically holds its position. The sample is a simple recreation of the waypoint planning in the DJI Pilot app.
Low-level GPS position control requires a deeper understanding of the system. It enables interesting applications such as having the drone follow a person, precision-landing on a marker, or circling a tower. There are not many open-source implementations available on the internet. You have to search the MSDK for the APIs for basic control, and you also need a deep understanding of the field you are working in, e.g. real-time object detection, low-level control frameworks, visual-inertial SLAM, etc.
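The geometry underneath simple GPS position control is just distance and bearing from the aircraft's fix to the target fix, fed into a capped proportional speed command. A sketch of that math (function names and gains are mine, not DJI's; in the MSDK you would get the fixes from the location callbacks and send the command via virtual sticks):

```python
import math

# Haversine distance and initial bearing between two GPS fixes, plus a
# toy proportional controller. Gains (kp, max_speed) are assumed values.

EARTH_R = 6371000.0  # mean Earth radius, metres

def distance_m(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance in metres."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * EARTH_R * math.asin(math.sqrt(a))

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial bearing from point 1 to point 2, degrees clockwise from north."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dl = math.radians(lon2 - lon1)
    y = math.sin(dl) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dl)
    return (math.degrees(math.atan2(y, x)) + 360.0) % 360.0

def hold_position(current, target, kp=0.5, max_speed=2.0):
    """Capped proportional speed command (m/s) and heading toward the target."""
    d = distance_m(*current, *target)
    return min(kp * d, max_speed), bearing_deg(*current, *target)
```

This is only the outer loop; the real work in applications like person-following or precision landing is in producing a good target fix in the first place.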

Customize 3D environment for Assistant 2 Simulator [dji-sdk]

At the very end of the DJI Simulator Tutorial it says this:
This demo is a simple demonstration of using DJISimulator. For a better user experience, you can create a 3D simulated environment using a 3D game engine like Unity3D to show the simulated data and the aircraft's flight behaviour inside your mobile application (like the Flight Simulator in the DJI GO app)!
I have been looking for a way to integrate custom 3D models/environment objects into the simulation environment, but there does not seem to be one. I can't find any forum posts regarding this either.
Does anyone have experience doing something like this, or can anyone point me in the right direction?
Thanks!
It looks like there is no sample that integrates a custom 3D model.
However, DJISimulatorState exposes fields such as roll, yaw, and pitch, which are enough to drive a customized simulator built in a 3D game engine.
https://developer.dji.com/api-reference/android-api/Components/Simulator/DJISimulator_DJISimulatorState.html#djisimulator_djisimulatorstate
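To drive a game-engine model from those fields, the usual step is converting the reported roll/pitch/yaw into a quaternion the engine can consume. A sketch of that conversion (the angle order and sign conventions here are assumptions; verify them against the simulator output before trusting the result):

```python
import math

# Euler angles (degrees) -> quaternion (w, x, y, z), intrinsic Z-Y-X
# (yaw-pitch-roll) order. The convention is an assumption; check it
# against the values DJISimulatorState actually reports.

def euler_to_quaternion(roll_deg, pitch_deg, yaw_deg):
    r = math.radians(roll_deg) / 2
    p = math.radians(pitch_deg) / 2
    y = math.radians(yaw_deg) / 2
    cr, sr = math.cos(r), math.sin(r)
    cp, sp = math.cos(p), math.sin(p)
    cy, sy = math.cos(y), math.sin(y)
    return (
        cr * cp * cy + sr * sp * sy,   # w
        sr * cp * cy - cr * sp * sy,   # x
        cr * sp * cy + sr * cp * sy,   # y
        cr * cp * sy - sr * sp * cy,   # z
    )
```

In Unity you would assign the result to the aircraft model's `transform.rotation` each time the simulator state callback fires.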

Turn Philips HUE Lights on When the Ambient Light is Below X

I have Philips Hue lights throughout my office, and I've been trying to find a way to have them turn on when the ambient light drops below a set lux level, so we don't have to wait until someone realises it's far too dark and turns them all on. It would be even better if they supplemented the sunlight, so the office always stays at the same brightness.
I have looked at ifttt.com and at integration with a hub like SmartThings, but I am struggling to find any working examples or a sensor that will definitely work with my Philips Hue bulbs.
Any suggestions would be greatly appreciated!
The recently released Hue Motion Sensor (http://www2.meethue.com/en-us/productdetail/philips-hue-motion-sensor) also includes a light sensor, so the lights only turn on when there is motion and it is dark.
If motion detection is also useful for you, you can just use the Hue app to set the trigger light level, and it works out of the box.
If you don't want the motion-detection part, you can set up your own rules in the bridge to trigger on light-level changes, but this requires using the Hue API (see http://developers.meethue.com; free registration required).
There are also third-party apps (iConnectHue for iOS, all 4 hue for Android, possibly others) that support the motion sensor and are more flexible than the official Hue app. However, I have no experience with these apps.
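When writing your own bridge rule, note that the Hue light sensor does not report lux directly: per the Hue developer documentation, its `lightlevel` value is on a log scale (`10000 * log10(lux) + 1`). Treat that formula as an assumption and verify it against your bridge; a sketch of converting a lux threshold into those units, with a hysteresis band so the rule doesn't flap at dusk:

```python
import math

# Convert a lux threshold into Hue `lightlevel` units and decide whether
# to switch on. The log-scale formula is taken from the Hue developer
# docs but should be verified; the hysteresis width is an assumed value.

def lux_to_lightlevel(lux):
    """Hue lightlevel units for a given lux value (0 for no light)."""
    return round(10000 * math.log10(lux) + 1) if lux > 0 else 0

def should_turn_on(lightlevel, threshold_lux=100, hysteresis=1000):
    """True when the reading is clearly below the threshold; the
    hysteresis band stops flapping when light hovers at the threshold."""
    return lightlevel < lux_to_lightlevel(threshold_lux) - hysteresis
```

A bridge rule would then compare the sensor's `state.lightlevel` against the computed value using the rules engine's comparison operators.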

Getting detected features from Google Tango Motion Tracking API

I would like to know how to get the feature points currently used for motion tracking, as well as the ones present in the learned area (whether currently detected or not).
There is an older, related post without a useful answer:
How is it possible to get the tracked features from the Tango APIs used for motion tracking? I'm using Tango so that I don't have to do SLAM and IMU integration on my own.
What do I need to do to visualize the tracked features the way they did in some of the presentation videos? https://www.youtube.com/watch?v=2y7NX-HUlMc (0:35 - 0:55)
What I want, in general, is some kind of measure or visual guidance of how well the device has learned the current environment. I know there is the Inspector app, but I need this information on the fly.
Thanks for your help ;)
If you want to check which areas are present in your learned area model and which are not, you can use the Tango Debug Overlay App. It has a field 'Tracking Success' that only counts up when the device sees learned feature points (ADF on) or finds new feature points (ADF off) (http://grauonline.de/alexwww/tmp/tango_debug_overlay_app.jpg). Additionally, you can request the same debug information the Tango Debug Overlay App uses (as simple text) via UDP port 29361 in your app and parse the returned debug text (although this is not recommended for a production app, as the interface is undocumented).
PS: As of Tango Core 01-19-2017, this counter no longer seems to work.
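If you do go the UDP route, the flow is: send a datagram to port 29361, read back the debug text, and pull out the counter you care about. A sketch under those assumptions; since the text format is undocumented, the label and regex below are guesses based on the overlay app's field names and must be adapted to what your device actually returns:

```python
import re
import socket

# Poll the (undocumented) Tango debug-text interface and extract the
# 'Tracking Success' counter. The trigger datagram and the text format
# are assumptions; adjust both to the replies you observe.

def fetch_debug_text(host, port=29361, timeout=1.0):
    """Request the debug text once and return it as a string."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.sendto(b"", (host, port))   # assumed: any datagram triggers a reply
        data, _ = s.recvfrom(65535)
        return data.decode("utf-8", errors="replace")

def parse_tracking_success(debug_text):
    """Extract the 'Tracking Success' counter, or None if absent."""
    m = re.search(r"Tracking Success\s*[:=]\s*(\d+)", debug_text)
    return int(m.group(1)) if m else None
```

On the device itself you would poll this from a background thread and render the counter in your own overlay, keeping in mind that the interface can disappear in any Tango Core update.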

Creating a 3D house and walking through it on the web

I'm trying to build a 3D house and host it on the web. It should be possible for the user to walk around inside the house. Could Sandy 3D and Flash support this application, or do I have to go for VRML?
I don't know about Sandy and Flash for 3D, but I can tell you that VRML isn't an option. I'm afraid you will struggle to find any modern browsers and/or browser plugins that can display VRML models.
Update: maybe you could try Java 3D: http://wiki.java.net/bin/view/Javadesktop/Java3D
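Whatever engine ends up rendering the house, the walkthrough itself reduces to the same camera update each frame: advance the camera along its current heading. A minimal sketch of that step, independent of the rendering technology (names and the yaw convention are mine):

```python
import math

# One movement step of a first-person walkthrough camera on the ground
# plane. yaw is degrees clockwise from the +z axis; forward=-1 walks
# backwards. Conventions here are illustrative assumptions.

def step(x, z, yaw_deg, speed, forward=1):
    """Advance the camera `speed` units along its heading."""
    yaw = math.radians(yaw_deg)
    return (x + forward * speed * math.sin(yaw),
            z + forward * speed * math.cos(yaw))
```

Keyboard handling then just maps forward/back keys to `forward=1`/`forward=-1` and turn keys to changes in `yaw_deg`, with collision checks against the house walls before accepting the new position.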
