Live streaming Google Tango's data - google-project-tango

I am working with Google Tango to extract data from the tablet and use it at the same time on another device. I am trying to record the data and use it on another laptop via live streaming.
I've looked at various topics about this and found the ParaView topic. However, that app records the data, saves it as a ZIP file, and sends it via Bluetooth (Bluetooth itself is fine for me). I do not want to save the data as a ZIP file and send it to another device; I want to record and use the data via live streaming (Bluetooth or Wi-Fi).
Is that possible? How can I do it?
ParaView shared the source code, so I think I can change it to make it work for me. However, I am not really experienced with programming.
Thank you very much for your help. I really appreciate it.

You might want to use sockets: set up a socket server on your computer and connect to it from your tablet. This is a way to transfer information over, for example, the TCP protocol.
However, there might be a more efficient way to achieve what you want using USB debugging. I have not yet been able to make USB debugging work on the tablet and computer I worked with.
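For illustration, a minimal sketch of the receiving side in Python (the host, the port, and the idea that the tablet pushes raw bytes are all assumptions; the tablet side would open a client socket to the laptop's address):

```python
import socket

HOST = "0.0.0.0"  # listen on all interfaces of the laptop
PORT = 5000       # arbitrary placeholder port

# Accept one connection from the tablet and read the incoming stream.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind((HOST, PORT))
    server.listen(1)
    print(f"Waiting for the tablet on port {PORT}...")
    conn, addr = server.accept()
    with conn:
        print(f"Connected by {addr}")
        while True:
            data = conn.recv(4096)  # one chunk of the stream
            if not data:
                break               # tablet closed the connection
            print(f"Received {len(data)} bytes")  # process the Tango data here
```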

You could use TangoAnywhere, which I developed so you can broadcast Google Tango position and orientation data to any device.
https://play.google.com/store/apps/details?id=de.grauonline.tangoanywhere
You connect with a TCP client on your PC/Mac (a simple TCP client written in Python, for example) to port 8080 of your Tango device and get the position data in real time.
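For example, a minimal Python client along those lines (the device IP is a placeholder, and the line-delimited format is an assumption; check the app's description for the exact output format):

```python
import socket

TANGO_IP = "192.168.1.50"  # placeholder: your Tango device's IP address
PORT = 8080                # the port TangoAnywhere listens on

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as client:
    client.connect((TANGO_IP, PORT))
    buffer = b""
    while True:
        chunk = client.recv(1024)
        if not chunk:
            break  # the app closed the connection
        buffer += chunk
        # Assuming one pose record per line; adjust to the actual format.
        while b"\n" in buffer:
            line, buffer = buffer.split(b"\n", 1)
            print("pose:", line.decode(errors="replace"))
```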

I recommend the Tango ROS Streamer app, which lets you choose which data (position, point cloud, RGB image) to stream.
You will need ROS to retrieve the data. On Linux, ROS is easy to install; otherwise, use a Docker image of ROS.
Caution: the theoretical Wi-Fi bandwidth is not enough to stream all the RGB frames at full resolution without dropping some.
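Once the data arrives in ROS, a minimal rospy listener might look like this (the topic name is an assumption; run "rostopic list" to see what Tango ROS Streamer actually publishes on your setup):

```python
#!/usr/bin/env python
import rospy
from sensor_msgs.msg import PointCloud2

# Called for every point cloud message that arrives from the tablet.
def on_cloud(msg):
    rospy.loginfo("point cloud with %d bytes of data", len(msg.data))

if __name__ == "__main__":
    rospy.init_node("tango_listener")
    # Topic name is an assumption; run "rostopic list" to see what
    # Tango ROS Streamer actually publishes on your setup.
    rospy.Subscriber("/tango/point_cloud", PointCloud2, on_cloud)
    rospy.spin()  # keep the node alive and processing callbacks
```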

Related

Real-time data on a web server (AT commands)

I succeeded in making a web server (access point) that serves an HTML page, using an ESP32 and an STM32 that sends AT commands to it. What do I need to do to show real-time data from a sensor on a web page? I heard that I need to use WebSockets, but I haven't found much about it.
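The WebSocket approach mentioned works roughly like this; here is a toy sketch in Python (third-party websockets package) just to show the flow - on an ESP32 you would use an equivalent embedded WebSocket library, and the sensor reading here is faked:

```python
import asyncio
import json
import random

import websockets  # third-party: pip install websockets

# Push one sensor reading per second to each connected browser.
# The reading is faked here; replace it with your real sensor value.
async def push_readings(ws, path=None):  # `path` kept for older versions
    try:
        while True:
            reading = {"temperature": random.uniform(20.0, 25.0)}
            await ws.send(json.dumps(reading))
            await asyncio.sleep(1)
    except websockets.ConnectionClosed:
        pass  # browser navigated away

async def main():
    async with websockets.serve(push_readings, "0.0.0.0", 8765):
        await asyncio.Future()  # run forever

asyncio.run(main())
```

On the page itself, new WebSocket("ws://<board-ip>:8765") plus an onmessage handler is enough to update the DOM with each reading.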

How to make a Google Home (Mini) publish what it hears to an MQTT topic (and broker)?

I have a Google Home Mini and I'm trying to use it as a speech-to-text device. The way I intend to do so is by having the device listen to what is said and publish that input to an MQTT broker so that my application can listen to it.
I have found this, which returns the input as text, but all it gives me is the certainty that I can get this data. I have little to no clue how to make it publish this data as an MQTT message.
I also found this, but can't make it work, because it states "There's a very easy way to recognize custom phrases in Google Assistant, [...] I won't cover it here". And even Google's instructions (open "Create an Applet") seem to be outdated in relation to IFTTT, because the steps simply can't be followed in IFTTT's interface.
Here is a quick sketch of the architecture:
There are 5 arrows. The first one is, obviously, a physical process. The "Audio" and "Text" arrows are handled automatically by the hardware. The right-hand "MQTT Message" arrow is already working. So what I want help with is the "MQTT Message" arrow from "Google Home" to "MQTT Broker".
Thanks in advance.
The short answer to this is you don't (as you've described it).
The slightly longer answer is that you first have to move the arrow you are interested in to the cloud, and it's not an MQTT message.
The Action box needs to be hosted on a publicly accessible machine (e.g. AWS/GCP/Azure/IBM Cloud) so that the Google platform knows where to find it.
Google have 2 different types of actions, one for conversational interactions and one for controlling smart home devices. You've not mentioned what you are trying to do, so I can't say which one you really want.
Google have recently announced the Local SDK for interacting with smart home devices, which is slightly closer to the diagram you have included. This can only be used for device control and still can't send MQTT messages; it supports HTTP, raw UDP, or TCP (you might be able to implement an MQTT client using the raw TCP, but it would be a lot of work and I'm not convinced the keep-alive would work).
I think I've got what you need:
Configure the Google Assistant to parse your speech, then connect it to IFTTT (I have done this in the past; it's very easy) to send HTTP requests.
Now create a local web server that understands these requests from IFTTT and publishes them to your broker.
And that's all!
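A minimal sketch of that local web server in Python, assuming Flask and paho-mqtt; the endpoint URL, the payload field, the broker address, and the topic name are all placeholders you'd match to your IFTTT webhook configuration:

```python
from flask import Flask, request
import paho.mqtt.publish as publish

app = Flask(__name__)

BROKER = "localhost"           # placeholder: your MQTT broker
TOPIC = "google-home/speech"   # placeholder topic name

# IFTTT's Webhooks service can be configured to POST the recognized
# phrase here; the endpoint URL and the "text" field are assumptions.
@app.route("/ifttt", methods=["POST"])
def ifttt_hook():
    data = request.get_json(silent=True) or {}
    publish.single(TOPIC, payload=data.get("text", ""), hostname=BROKER)
    return "OK", 200

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```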

Building an embedded system with no reliable internet connection

I'm not really sure how to search for this on the internet; I tried some searches but never got the help I needed, so I'll just ask here. (Sorry if it's already answered!)
I'm building an embedded system that runs on Windows. I'll gather some data and send it over the internet to read at home. I'm most probably using a 3G connection to connect my system (which will keep moving) to the internet and send the data over. I planned to use an FTP server with a Hamachi connection to send the files to another PC. It will be automated, so the only human action will be to read the data at home. I tested this and it all works fine when I use a reliable connection, like when I'm at home.
My question is: will it work when my 3G connection drops, and how can I make this system reliable?
I want to keep storing the data while the connection is down and send it all when it's back online, but I don't know if the system will automatically reconnect (I can't have a person manually clicking 'connect') to Hamachi or to the FTP server (this is my first time using these technologies).
Also, is there a better, more reliable, or simpler way than Hamachi+FTP to send the data?
Thanks,
EDIT: Adding more info. I'm gathering data with a LabVIEW VI. The plan was to save this data into a file (txt, csv, or whatever), send the file over, and have another VI read the file and display some graphs and so on. There is a DataSocket in LabVIEW for sending data over the internet, but I'm not familiar with these internet protocols; it says I can use FTP, HTTP, and others. What is paid and what can I do for free?
Also, is there a better, more reliable, or simpler way than Hamachi+FTP to send the data?
Might it be simpler to use e-mail (SMTP)? That has the advantage that the sender and receiver need not be up at the same time.
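A minimal store-and-forward sketch of that idea in Python (the server name, addresses, and outbox directory are placeholders; the point is that a file is only deleted after a send succeeds, so nothing is lost while the 3G link is down):

```python
import os
import smtplib
import time
from email.message import EmailMessage

OUTBOX = "outbox"               # directory where the VI drops data files
SMTP_HOST = "smtp.example.com"  # placeholder SMTP server
SENDER = "rig@example.com"      # placeholder addresses
RECIPIENT = "home@example.com"

def try_send(path):
    msg = EmailMessage()
    msg["From"], msg["To"] = SENDER, RECIPIENT
    msg["Subject"] = os.path.basename(path)
    with open(path, "rb") as f:
        msg.add_attachment(f.read(), maintype="text", subtype="csv",
                           filename=os.path.basename(path))
    with smtplib.SMTP(SMTP_HOST, timeout=30) as smtp:
        smtp.send_message(msg)

os.makedirs(OUTBOX, exist_ok=True)
while True:
    for name in sorted(os.listdir(OUTBOX)):
        try:
            try_send(os.path.join(OUTBOX, name))
            os.remove(os.path.join(OUTBOX, name))  # delete only after success
        except (OSError, smtplib.SMTPException):
            break  # link is down: keep the files and retry later
    time.sleep(60)  # poll the outbox once a minute
```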

Peer-to-peer libraries to broadcast real-time video using WebSockets?

First of all, is it a good and viable idea to use peer-to-peer to broadcast real-time video? I know it will make the application scalable and allow more users to get the real-time video without affecting the server much, but are there drawbacks performance-wise and video-quality-wise?
Now the specific question: my intention is to share real-time video and use peer-to-peer at the web-client level using WebSockets. Are there any libraries for this purpose?
I know that streaming should be better over UDP, but the following post says that even using WebSockets (TCP) at 30 fps is fast enough (Video streaming over websockets using JavaScript).
XSockets.NET provides a WebRTC API.
This gives you a JavaScript API for P2P communication. You can actually set up a video chat with 2 or more participants really easily.
If you are a .NET dev, you can install the sample from NuGet. That sample contains an example of a multi-video chat.
The video will be of high quality, but you can set parameters to get lower resolution if you have low bandwidth.
WebRTC works in Chrome and Firefox today (as well as Chrome 29 on Android). You can try this site with Chrome (not updated for Firefox or mobile): http://browsermeeting.com/
NuGet Package
You can check out IceLink (disclaimer: I work @ FM); it'll help you do this.
I've actually built something along these lines for a client of ours, where each successive client becomes a potential "distribution" node. So X clients connect to the main server, and from there other clients can connect to those clients (provided they have appropriate bandwidth/CPU/etc.) for a re-broadcast version. It's sort of a supernode/mesh concept, and it works reasonably well.
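Not WebRTC itself, but to illustrate that fan-out idea in the simplest possible terms, here is a toy WebSocket relay sketch in Python (third-party websockets package; a real system would add authentication, backpressure handling, and peer selection):

```python
import asyncio

import websockets  # third-party: pip install websockets

viewers = set()  # sockets of currently connected viewers

# Each client announces its role in its first message: a "broadcaster"
# then sends frames, while a "viewer" just receives them.
async def handle(ws, path=None):  # `path` kept for older library versions
    role = await ws.recv()
    if role == "broadcaster":
        async for frame in ws:  # fan each frame out to every viewer
            await asyncio.gather(*(v.send(frame) for v in viewers),
                                 return_exceptions=True)  # skip dead viewers
    else:
        viewers.add(ws)
        try:
            async for _ in ws:  # viewers send nothing; wait for disconnect
                pass
        finally:
            viewers.discard(ws)

async def main():
    async with websockets.serve(handle, "0.0.0.0", 8765):
        await asyncio.Future()  # run forever

asyncio.run(main())
```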

Access webcam from multiple applications simultaneously

The problem background: there are two different Windows applications that are trying to access the webcam on the computer at the same time. Currently, only one application is able to access it. I want to allow both applications to access the webcam simultaneously. A common example of my problem is Skype and Yahoo Messenger trying to access the webcam on the computer at the same time.
I found a few programs (manycam.com, http://www.splitcamera.com/) that allow this on Windows, but I am not sure how they implemented it. I want to write the code myself, since my code needs to be integrated with other APIs.
I'd appreciate it if anyone could shed light on how to write a device wrapper to achieve this.
The kernel camera driver registers several OS-defined callbacks. One of the callbacks is used for the output stream. Dedicated Windows applications have an interface to this stream - you'll need to do some reading on this subject; it's not something that can be covered within the scope of SO. You need a component that is layered in between the client applications and the camera driver. This component should intercept your camera driver's output and duplicate it for the registered clients. This can be achieved either in kernel mode (a filter driver) or in user mode (preferable). http://msdn.microsoft.com/en-us/library/windows/hardware/ff557573%28v=vs.85%29.aspx is a good place to start.
Note: this functionality might already be supported by your camera software (though I think the chances are very slim), and in that case you should dig into the corresponding documentation.
