Apple HomeKit - server driven

Is it possible, with Apple HomeKit and therefore Siri, to communicate with a server via intents rather than communicating with the device itself, like a light?
I would like to say "Siri, turn on the light in Room X" and have the light fixture in that room controlled by a server, so the intent goes to the cloud rather than to the light itself.

You can use Homebridge for that:
Homebridge is a lightweight Node.js server you can run on your home network that emulates the iOS HomeKit API.
It works quite well. I use it, running on a Raspberry Pi, to control (from Siri and also from the "Home" app) a 433 MHz transmitter that drives lights and other remote-controlled devices in my home.
There are a lot of plugins that you can install to control devices, but it's also possible to create your own plugins to receive the intents and perform an action based on them.

Related

Access HTTP server running on the Onboard computer

I was wondering if it is possible to access an HTTP server running on the onboard computer from a device (mobile or laptop) connected to the remote control on the ground. From the documentation, the uplink/downlink speeds of 24 kbps / 16 Mbps are satisfactory for our application.
Going over the available SDKs, the "SDK Interconnection" (or "MOP") caught my attention, which offers send and receive functions for the Onboard and Mobile SDKs (and the Payload SDK as well). However, this means that the send/receive functions on the ground side are exposed through an Android-based SDK, i.e.
Onboard PC --UART--> OSDK --> DJI drone --Lightbridge--> R.C. --USB--> Android --> MSDK
From this alone, it seems that we would need to develop network interfaces that send and receive via the corresponding OSDK and MSDK methods. This might be easier said than done - especially on the Android device.
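For what it's worth, the "network interface on top of send/receive" idea boils down to framing: split an arbitrary byte stream into packets small enough for the SDK's transparent-transmission channel, tag them, and reassemble on the other side. A minimal sketch (the 100-byte MTU and 1-byte counters are assumptions; check the OSDK/MSDK documentation for the real limits):

```typescript
// Split a byte stream into MTU-sized packets and reassemble them. Each packet
// carries a 2-byte header: its sequence number and the total packet count.
const MTU = 100;   // assumed channel MTU; verify against the SDK docs
const HEADER = 2;  // 1 byte sequence number + 1 byte total-packet count

function chunk(data: Uint8Array): Uint8Array[] {
  const payloadSize = MTU - HEADER;
  const total = Math.max(1, Math.ceil(data.length / payloadSize));
  if (total > 255) throw new Error("message too large for 1-byte counters");
  const packets: Uint8Array[] = [];
  for (let i = 0; i < total; i++) {
    const body = data.subarray(i * payloadSize, (i + 1) * payloadSize);
    const pkt = new Uint8Array(HEADER + body.length);
    pkt[0] = i;     // sequence number of this packet
    pkt[1] = total; // how many packets make up the whole message
    pkt.set(body, HEADER);
    packets.push(pkt);
  }
  return packets;
}

function reassemble(packets: Uint8Array[]): Uint8Array {
  // Packets may arrive out of order; sort by sequence number first.
  const sorted = [...packets].sort((a, b) => a[0] - b[0]);
  const bodies = sorted.map((p) => p.subarray(HEADER));
  const out = new Uint8Array(bodies.reduce((n, b) => n + b.length, 0));
  let offset = 0;
  for (const b of bodies) {
    out.set(b, offset);
    offset += b.length;
  }
  return out;
}
```

A real bridge would also need acknowledgements and retransmission on the lossy uplink, but the framing layer is the core of it.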
My questions are:
Is there a smarter way to do this?
Is the implementation of the Mobile SDK available? If so, we could port the send/receive code to a Linux box to simplify things.
The MSDK is heavily encrypted.
I've been doing some reverse engineering on it. It's not easy; I recommend older versions, since they are not encrypted.
There's no open source, if that's what you're asking. There never will be.
Everything sent from the drone is DUML messages. You can decode them without the MSDK, but it's not completely straightforward.
The messages are partly documented here:
https://github.com/o-gs/dji-firmware-tools/blob/master/comm_dissector/wireshark/dji-dumlv1-proto.lua
If I were you, I would connect a 4G modem to the onboard computer. It saves you a lot of time.
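Decoding those DUML frames mostly means walking a fixed header layout. Here is a rough sketch based on my reading of the dissector linked above; the offsets should be verified against it, and CRC validation is omitted:

```typescript
// Rough parser for a DUML v1 frame header. Field offsets follow my reading of
// the o-gs Wireshark dissector; CRC8/CRC16 checks are deliberately omitted.
interface DumlFrame {
  version: number;
  length: number;     // total frame length including header and trailing CRC16
  sender: number;
  receiver: number;
  sequence: number;
  cmdSet: number;
  cmdId: number;
  payload: Uint8Array;
}

function parseDuml(buf: Uint8Array): DumlFrame {
  if (buf[0] !== 0x55) throw new Error("bad magic byte");
  const length = buf[1] | ((buf[2] & 0x03) << 8); // 10-bit frame length
  const version = buf[2] >> 2;                    // 6-bit protocol version
  if (buf.length < length) throw new Error("truncated frame");
  return {
    version,
    length,
    sender: buf[4],
    receiver: buf[5],
    sequence: buf[6] | (buf[7] << 8),             // little-endian sequence number
    cmdSet: buf[9],
    cmdId: buf[10],
    payload: buf.subarray(11, length - 2),        // body before trailing CRC16
  };
}
```

In practice you would validate the header CRC8 (byte 3) and trailing CRC16 before trusting a frame; the dissector documents both polynomials.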

How can I imitate DJI's Lightbridge system?

I would like to know how the controllers and drones of Lightbridge-based aircraft such as the Phantom 4 Pro and Inspire 2 communicate.
I have many drones from DJI including Phantom 4 Pro, Inspire 2, and Matrice 100.
I want to recreate, through PC programming, the Lightbridge link that is built into the controller.
The DJI drones I purchased connect the mobile device to the controller with a USB cable, while the controller and the drone communicate over Lightbridge, so the controller must sit in the middle of the communication chain - but I just want to control the drones directly from my PC.
So, how can I imitate the Lightbridge system to communicate with my PC, control the drone (takeoff, landing, etc.), and capture live images? Any information about Lightbridge that helps with this would be appreciated.
Lightbridge is a proprietary protocol. That means we (I work at DJI) do not disclose or document the details of its implementation.
On the other hand, removing the remote controller would mean you have to create or provide your own transmitter, which adds quite a lot of complexity.
The best way for you to control the aircraft from the PC at this point is to write a mobile app as a bridge and control the app through your PC.
Now, this can be done in a wired manner:
You could write an Android app with the Mobile SDK on a device with Ethernet, such as an Odroid, and chain it all together.

UWP Mediabox - a few questions

I have a question for you, and I really hope you can provide me with some information.
I wish to build a media center because I have not found any way to cast my stuff straight to the big screen from my Windows mobile phone.
Of course, there is the wireless display adapter from Microsoft, but I do not wish to cast my whole display to my TV.
After testing a few products (Amazon Fire TV box, Apple TV 3, the Display Dock and the wireless dock), I came to the conclusion that I cannot have an all-in-one solution which fits my expectations.
From that point I thought that I have to build my own "tv application".
OK, OK... there is Kodi (XBMC) and so on, but I think that is just a detour.
The following features must be included:
running on Windows 10
Cast music, videos and pictures.
Ability to launch and download Windows Store apps.
Project Rome implementation to share data across devices.
It seems possible, but here's one big problem...
When we talk about media boxes, we talk about those small boxes beside your TV. Instead of building a micro-ATX setup, I want to take this to the next level and use Windows 10 IoT (on a Raspberry Pi 3).
Using IoT may have some advantages, but there are a few disadvantages I have to worry about:
Will Windows 10 work properly on IoT (advantages/disadvantages)?
Media streaming?
ARM architecture
Bluetooth, Wi-Fi, Ethernet connectivity
I have never worked with IoT before, so I am kind of a noob again. I am asking for some advice to make this possible:
[UWP] How can I stream data (e.g. video, music, images) to another application?
[UWP] How do I implement a remote control, just like the Amazon Fire TV controller?
Advantages/disadvantages of using Windows 10 on a Raspberry Pi?
Can I use the Windows 10 default applications (Groove Music, Images, Videos) to play incoming data?
What do you think? Is it possible to create a media center running on a Raspberry Pi using Windows 10?
Thank you in advance.
The most straightforward idea would be to create an always-running app with a MediaPlayerElement whose Source property can be set programmatically by a remote control app. The remote control app could also trigger the pause, play, next, and previous actions.
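One way to shape that remote-control channel is as small JSON commands that the media box maps onto player actions. The command names and the player interface below are invented for illustration; in the UWP app they would call into the MediaPlayerElement:

```typescript
// Sketch of a remote-control command protocol: the remote app sends small
// JSON messages; the media box dispatches them onto player actions.
type Command =
  | { kind: "setSource"; uri: string }
  | { kind: "play" }
  | { kind: "pause" }
  | { kind: "next" }
  | { kind: "previous" };

interface Player {
  // Thin abstraction over MediaPlayerElement-style actions (assumed names).
  setSource(uri: string): void;
  play(): void;
  pause(): void;
  skip(offset: number): void;
}

function dispatch(raw: string, player: Player): void {
  const cmd = JSON.parse(raw) as Command;
  switch (cmd.kind) {
    case "setSource": player.setSource(cmd.uri); break;
    case "play":      player.play(); break;
    case "pause":     player.pause(); break;
    case "next":      player.skip(+1); break;
    case "previous":  player.skip(-1); break;
  }
}
```

The transport could be anything simple (an app service, a socket, or HTTP on the local network); the protocol stays the same either way.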
Be aware that there is no hardware video acceleration support for the Raspberry Pi on Windows IoT Core yet, and it probably won't come soon either. There are other devices that do have proper video drivers (see the hardware support page of Windows IoT Core).
Also be aware that there is no Windows Store on Windows IoT Core, unless you are an OEM (then you can publish your properly signed apps in an official way to devices that are managed by you).
A simpler way would be to buy a Windows 10 box from AliExpress. Then you can use Miracast to stream your screen, install apps from the Microsoft Store, and play films directly on it, for example using Kodi, for which remote control apps exist.

How to setup dji L2 api demo?

I intend to create an Android app in which you can control the drone (in my case, a Phantom 2 Vision) using a virtual joystick; I already have the Level 2 API. To do that, I wanted to see a working app with my own eyes, to understand how I should use the API. I tried to run the DJI demo application (I followed the steps pointed out in the documentation, like putting the API key in the manifest). The application seems to work OK, but I can only control the gimbal; the virtual joystick does not work for some reason. Is there any limitation in terms of Android OS version, device, Phantom firmware version, etc.? I asked some questions on DJI's forum, but no one gave me a concrete answer. I hope someone here can give me a hint :)
I'm using a Samsung Galaxy Note 10.1 and working on the DJI-SDK-Android-V2.4.0 project.
While debugging, I could see "D/GsProtocolJoystickDemoActivity: GroundStationResult GS_Result_Failed".
Since you can invoke the gimbal and camera APIs correctly, I am assuming that you have already activated your app. Here is my point: the virtual joystick can be used only in ground station mode. My suggestions are as follows:
Switch the remote controller to F mode.
Invoke DJIDrone.getDJIGroundStation().openGroundStation().
Invoke the joystick methods.
Please note: the app key must have LEVEL2 access so that you can invoke the Ground Station-related methods.

Hiding monitor from windows, working with it from my app only

I need to use a monitor as a "private" device for my special application; I want to use it as a flashlight of sorts and draw special patterns on it in full screen. I don't want this monitor to be recognized by the OS (Windows 7) as a usual monitor, i.e. the user should not be able to move the mouse to that monitor, change its resolution, run a screensaver on it, or whatever. But I want to be able to interact with it from my application. The monitor is plugged via an HDMI cable into a video card (most probably NVIDIA).
What is the simplest way to do this? All solutions are appreciated, including purchasing additional adapters, simple video cards, or any other special devices. The only solution I can imagine for now is to plug the monitor into another computer, run a daemon on that computer, connect it to my computer via Ethernet or whatever, and communicate with that daemon from my computer. That is pretty ugly and requires an additional computer. But I need to solve this problem.
To do this, detach the monitor from the desktop. Detaching a monitor from the desktop prevents Windows from using it for normal UI.
Sample code for attaching and detaching monitors is in this KB article. Once you've done that, you can use the monitor as an independent display.
Building upon your own idea of using an external PC, and Mark's comment on using a VM as this "external" device:
You could buy an external USB-to-VGA video adapter like one of these, approx. USD40:
http://www.newegg.com/USB-Display-Adapters/SubCategory/ID-3046
Almost every VM software supports some kind of USB passthrough. VirtualBox is a great example.
Only the VM sees the USB device, the host ignores it completely.
So the steps would be:
Buy said USB-to-VGA adapter.
Configure a slim virtual machine and cook up a little utility that receives, over the network, the images to show on the screen.
Configure VirtualBox to connect the USB-to-VGA adapter directly to the virtual machine.
Here is another simple solution to monitor your application.
Your app should provide an API monitor service, served over HTTP on any port you want (for example, http://{userip}:{port}/{appname}/monitor).
Your app monitors itself, keeping monitoring data in memory, in a local file, or in a database, hidden from the user. The monitor API serves this data to any device you want that has a browser (tablet, phone, netbook, Android mini-PC, low-cost Linux device, any PC on any OS... from the internet, your LAN, or a direct connection to the PC hosting the app).
Pros:
Data to monitor is collected (and served) within your app: only one executable
Display can be done remotely: from anywhere!
Access security is easily handled using standard HTTP authentication mechanisms
You can monitor several applications (i.e., several monitoring URLs)
You are free to use any browser to monitor (even a local window browser on the same PC for testing purposes)
Monitor from any hardware and OS you want
Simple and flexible!
Cons:
There are few, but tell me...
Choosing this solution depends on what kind of data you need to monitor (text, images, video...), and also on the refresh rate you expect, depending on your system and network configuration.
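For concreteness, here is a minimal sketch of such an in-app monitor endpoint, using Node's built-in http module. The path, port, and metrics fields are examples; real monitoring data would come from wherever your app keeps it:

```typescript
// Minimal in-app HTTP monitor endpoint: the app collects metrics in memory
// and serves them as JSON to any device with a browser.
import { createServer, IncomingMessage, ServerResponse } from "node:http";

const metrics = { startedAt: Date.now(), requestsServed: 0 };

function monitorHandler(req: IncomingMessage, res: ServerResponse): void {
  if (req.url === "/myapp/monitor") {
    metrics.requestsServed += 1;
    res.writeHead(200, { "Content-Type": "application/json" });
    res.end(JSON.stringify(metrics)); // readable from any browser on the network
  } else {
    res.writeHead(404).end();
  }
}

// Listen on a port of your choice, e.g. http://<host-ip>:8080/myapp/monitor
const server = createServer(monitorHandler);
server.listen(0); // 0 = pick any free port here; pin it (e.g. 8080) in practice
```

Adding HTTP basic authentication or binding only to the LAN interface covers the access-security point above.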
Hope it helps :)
