Turn Philips HUE Lights on When the Ambient Light is Below X - homekit

I have Philips Hue lights throughout my office and I've been trying to find a way to get them to turn on when the ambient light drops below a set lux level, so we don't have to wait until someone realises it's far too dark and turns them all on. It would be far better if they supplemented the light provided by the sun so that the office always stays at the same brightness.
I have looked at ifttt.com and at integration with a hub like SmartThings, but I am struggling to find any working examples or a sensor that will definitely work with my Philips Hue bulbs.
Any suggestions would be greatly appreciated!

The recently released Hue Motion sensor (http://www2.meethue.com/en-us/productdetail/philips-hue-motion-sensor) also includes a light sensor, so the lights only turn on when there is motion and it is dark.
If motion detection is also useful for you, then you can just use the Hue app to set the trigger light level and it works out of the box.
If you don't want to use the motion detection part, you can set up your own rules on the bridge that trigger on light-level changes, but this requires using the Hue API (see http://developers.meethue.com - free registration required); a rough example is sketched below.
There are also third-party apps (iConnectHue for iOS, all-4-hue for Android, possibly others) that support the motion sensor and are more flexible than the official Hue app. However, I have no experience with these apps.
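If you go down the API route, the bridge's own rule engine can do the triggering for you, so no extra hub is needed. Below is a minimal sketch in Java (plain HttpURLConnection, no extra libraries) that posts such a rule to the bridge. The bridge IP, the whitelisted API username, the sensor id (the motion sensor's light part shows up as a separate ZLLLightLevel sensor under /sensors) and the group id are placeholders; look up the real values on your own bridge first.

```java
// Sketch: create a Hue bridge rule that turns a light group on when the
// motion sensor's built-in light sensor reports "dark". All ids and addresses
// below are placeholders for illustration; query /sensors and /groups on your
// bridge to find the real ones.
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class HueDarkRule {
    public static void main(String[] args) throws Exception {
        String bridge = "192.168.1.2";          // placeholder: your bridge IP
        String username = "YOUR-API-USERNAME";  // placeholder: whitelisted API username

        // Rule: when the ZLLLightLevel sensor (id 11 here) reports dark,
        // switch group 1 on at full brightness.
        String rule = "{"
                + "\"name\": \"Dark -> office lights on\","
                + "\"conditions\": ["
                + "  {\"address\": \"/sensors/11/state/dark\", \"operator\": \"eq\", \"value\": \"true\"}"
                + "],"
                + "\"actions\": ["
                + "  {\"address\": \"/groups/1/action\", \"method\": \"PUT\","
                + "   \"body\": {\"on\": true, \"bri\": 254}}"
                + "]"
                + "}";

        URL url = new URL("http://" + bridge + "/api/" + username + "/rules");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "application/json");
        try (OutputStream os = conn.getOutputStream()) {
            os.write(rule.getBytes(StandardCharsets.UTF_8));
        }
        System.out.println("Bridge responded with HTTP " + conn.getResponseCode());
    }
}
```

The sensor's dark/daylight flags are derived from its tholddark and tholdoffset config values, so if the default threshold does not suit your office you can tune it with a PUT to /sensors/<id>/config. A second rule with a condition on /sensors/<id>/state/daylight can switch the lights back off again.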

Related

DJI-Mobile-SDK/position control

I am using the DJI Mobile SDK to create an app with Android Studio. I want to know how to use the GPS signal of the aircraft and the phone to implement position control. Is there any API in the DJI Mobile SDK I can use?
You can follow this sample to run simple, higher-level GPS position control: https://developer.dji.com/mobile-sdk/documentation/ios-tutorials/GSDemo.html
If you stop at a waypoint, the aircraft automatically holds its position. It is a simple recreation of the waypoint planning in the DJI Pilot app.
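To make that concrete, here is a minimal sketch of the same waypoint idea for the Android version of the Mobile SDK. The class and method names follow DJI's Android SDK 4.x and its GSDemo tutorial as I remember them, so treat them as assumptions and check them against the current API reference; the coordinates, altitude and speeds are placeholders.

```java
// Sketch (DJI Android Mobile SDK 4.x, names assumed from the GSDemo tutorial):
// build a small waypoint mission; the aircraft flies to each waypoint, holds
// position there, and hovers at the last one when the mission finishes.
import dji.common.error.DJIError;
import dji.common.mission.waypoint.Waypoint;
import dji.common.mission.waypoint.WaypointMission;
import dji.common.mission.waypoint.WaypointMissionFinishedAction;
import dji.common.mission.waypoint.WaypointMissionFlightPathMode;
import dji.common.mission.waypoint.WaypointMissionHeadingMode;
import dji.sdk.mission.waypoint.WaypointMissionOperator;
import dji.sdk.sdkmanager.DJISDKManager;

public class SimpleWaypointHold {

    /** Flies to two waypoints near (lat, lon) at 20 m altitude and hovers at the end. */
    public static void flyAndHold(double lat, double lon) {
        WaypointMission mission = new WaypointMission.Builder()
                .autoFlightSpeed(5f)                                      // m/s
                .maxFlightSpeed(10f)                                      // m/s
                .headingMode(WaypointMissionHeadingMode.AUTO)
                .finishedAction(WaypointMissionFinishedAction.NO_ACTION)  // hover when done
                .flightPathMode(WaypointMissionFlightPathMode.NORMAL)
                .addWaypoint(new Waypoint(lat, lon, 20f))                 // placeholder points
                .addWaypoint(new Waypoint(lat + 0.0005, lon, 20f))
                .build();

        WaypointMissionOperator operator =
                DJISDKManager.getInstance().getMissionControl().getWaypointMissionOperator();

        DJIError loadError = operator.loadMission(mission);
        if (loadError != null) {
            return; // invalid mission (e.g. too few waypoints); log loadError in a real app
        }
        operator.uploadMission(uploadError -> {
            if (uploadError == null) {
                operator.startMission(startError -> {
                    // startError == null means the aircraft accepted and started the mission
                });
            }
        });
    }
}
```

The SDK validates the mission when you load it (as far as I remember it rejects missions with fewer than two waypoints or waypoints placed too close together), so always check the returned DJIError values instead of assuming the mission started.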
Low-level GPS position control requires a deeper understanding of the system. It enables interesting applications such as having the drone follow a person, land precisely on a marker, or circle around a tower. There are not many open-source implementations available on the internet. You have to search the MSDK for the APIs that give you basic control, and you also need a deep understanding of the field you are targeting, e.g. real-time object detection, low-level control frameworks, visual-inertial SLAM, etc.

Why don't Threejs.org examples run on my computer anymore?

My computer's hardware is no longer compatible with some newer WebGL features, including many Three.js examples as well as webcam streaming. Can anyone explain why this is? I'd love to learn more about how the graphics card works, if it is related to that.
It's not a browser issue.
It's not an https:// issue.
The Three.js examples that do not work: anything using PlaneGeometry does not show up, the hemisphere lighting demo renders all white, and the lookAt demo renders almost entirely white.
My specs:

Getting detected features from Google Tango Motion Tracking API

I would like to know how to get the current feature points used in motion tracking and the ones that are present in the learned area (detected or not).
There is an older, related post without a useful answer:
How is it possible to get the tracked features from the Tango APIs used for motion tracking? I'm using Tango so that I don't have to do SLAM and IMU integration on my own.
What do I need to do to visualize the tracked features like they did in some of the presentation videos? https://www.youtube.com/watch?v=2y7NX-HUlMc (0:35 - 0:55)
What I want in general is some kind of measure or visual guidance on how well the device has learned the current environment. I know there is the Inspector app, but I need this information on the fly.
Thanks for your help ;)
If you want to check which parts of an area are present in your learned area model and which are not, you can use the Tango Debug Overlay App. It has a field 'Tracking Success' that only counts up if the device sees learned feature points (ADF on) or finds new feature points (ADF off) (http://grauonline.de/alexwww/tmp/tango_debug_overlay_app.jpg). Additionally, you can request the same debug information the Tango Debug Overlay App shows (as simple text) via UDP port 29361 in your app and parse the returned debug text (although this is not recommended at all for a real app, as this interface is not documented).
PS: In Tango Core 01-19-2017 this counter does not seem to work anymore.

Kinect 2 hand detection with Candescent NUI

Does anyone know whether the new Kinect has support for Candescent NUI?
I want to detect fingers and hands with Candescent, but I can't find out whether the new OpenNI, NiTE or Microsoft SDK supports the new Kinect and will also work with Candescent NUI.
You can find a port of Candescent NUI to Kinect v2 here, but you have to set up your dependencies correctly to run it: you need OpenNI.net.dll, OpenNI64.dll, XnVNITE.net.dll and Microsoft.Kinect.dll (the Kinect SDK v2 DLL).
It seems that nobody has ported Candescent NUI to Kinect v2, but you can do it yourself; the author's code is pretty good and clear.
A number of months ago I really wanted to port this code to Kinect v2 and even started working on it, but I realized that I don't want front-facing finger tracking but a top-down setup, something more similar to RetroDepth (https://www.youtube.com/watch?v=96CZ_QPBx0s), and that is what I am implementing now.
If you only need finger tracking similar to Leap Motion, you could use the Nimble SDK. It works pretty well (not perfectly) with Kinect v2 in front-facing mode, like Candescent does, but it gives you a full 3D hand skeleton. With Kinect v1 it works in a top-down setup. I am not sure whether they still provide free licenses, though; check it.
If they don't provide a free license, you can re-implement Candescent's hand-tracking features, and you could even make them more robust so they support other depth cameras with different ranges (near, far) and different resolutions; one of the most annoying things about Candescent (in my opinion) is its hard-coded resolution for the depth and color images.
Moreover, at CHI 2015 (http://chi2015.acm.org/) Microsoft will present a new hand-tracking technique for Kinect v2 (https://www.youtube.com/watch?v=A-xXrMpOHyc); maybe they will integrate it into the Kinect SDK v2 soon. The paper will probably also be published on acm.org (or even in some public library) after the conference, so you can see how they did it, and hopefully somebody will implement it soon as well.

Google glass sdk for Epson moverio

What I understand from the technical specs of Google Glass is that it displays a 2D plane on a projector for one eye. The Android SDK together with the GDK provides tools for writing apps for the device, with features that can sense eye and voice actions. But it does not provide 3D stereoscopic vision, as that would require a projector for both eyes.
On the other hand, the Epson Moverio promises a true 3D augmented reality experience. Having used the Moverio, I can see it has two projectors, one for each eye, which can project stereoscopic images.
Perhaps I should have done more extensive research on the spectrum of products/toolkits available, but I still have some questions/doubts for which I could not find any information so far.
Q1. Does Google provide any two-eye-projector kind of glasses product?
ANS: No.
Q2. Does the Google Glass development kit (the API) provide features for generating left and right views of a 3D object for the Epson Moverio? I have seen that Wikitude and Metaio come with these kinds of features. Does Google provide any support for this in the GDK?
ANS: No, not from Google.
Q3. Does Epson plan to roll out any developer tool for easily creating 3D markers and plotting them in the projected space?
ANS: Not announced yet by Epson.
There is no current support in Google Glass for stereoscopic views.
