I have a question for you and I really hope you can provide me with some information.
I want to build a media center because I have not found any way to cast my content straight to the big screen from my Windows phone.
Of course there is the Wireless Display Adapter from Microsoft, but I do not want to mirror my whole display to my TV.
After testing a few products (Amazon Fire TV box, Apple TV 3, the Display Dock and the wireless dock) I came to the conclusion that there is no all-in-one solution which fits my expectations.
From that point I decided that I have to build my own "TV application".
OK, OK... there is Kodi (XBMC) and so on, but I think that would just be a detour.
The following features must be included:
Running on Windows 10.
Cast music, videos and pictures.
Ability to download and launch Windows Store apps.
Project Rome implementation to share data across devices.
It seems possible, but here's one big problem...
When we talk about media boxes, we mean those small boxes beside your TV. Instead of building a micro-ATX setup, I want to take this to the next level... using IoT (a Raspberry Pi 3).
Using IoT may have some advantages, but there are a few disadvantages I have to worry about:
Will Windows 10 work properly on IoT (advantages/disadvantages)?
Media streaming?
ARM architecture
Bluetooth, Wi-Fi, Ethernet connectivity
I have never worked with IoT before, so I am kind of a noob again. I am asking for some advice to make this possible.
[UWP] How can I stream data (e.g. video, music, images) to another application?
[UWP] How can I implement a remote control, just like the Amazon Fire TV controller?
What are the advantages/disadvantages of using Windows 10 on a Raspberry Pi?
Can I use the default Windows 10 applications (Groove Music, Photos, Videos) to play incoming data?
What do you think? Is it possible to create a media center running on a Raspberry Pi using Windows 10?
Thank you in advance.
The most straightforward idea would be to create an always-running app containing a MediaPlayerElement whose Source property can be set programmatically by a remote control app. The remote control app could also trigger the pause, play, next and previous actions.
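For illustration, here is a minimal sketch of what the remote-control side of that idea could look like, written as plain Java only for brevity. The host address, the port number and the command strings ("setSource", "play", "pause") are all made up for this example; the media-center app would need a matching listener that maps them onto its MediaPlayerElement.

```java
import java.io.OutputStreamWriter;
import java.io.Writer;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

// Hypothetical remote-control client: opens a TCP connection to the media-center
// app and sends one text command per line. The command vocabulary is invented
// for this sketch; the media-center app would parse it and drive playback.
public class RemoteControlClient {

    private static final int MEDIA_CENTER_PORT = 8085; // assumed port of the listener

    public static void send(String host, String command) throws Exception {
        try (Socket socket = new Socket(host, MEDIA_CENTER_PORT);
             Writer out = new OutputStreamWriter(socket.getOutputStream(), StandardCharsets.UTF_8)) {
            out.write(command + "\n");
            out.flush();
        }
    }

    public static void main(String[] args) throws Exception {
        String mediaCenter = "192.168.1.50"; // example address of the Pi / media box
        send(mediaCenter, "setSource|http://example.com/video.mp4"); // media center sets the playback source
        send(mediaCenter, "play");
        send(mediaCenter, "pause");
    }
}
```

On the media-center side, a background socket listener would parse these commands and, for example, set MediaPlayerElement.Source for "setSource" or call the element's MediaPlayer Play/Pause methods for the transport commands.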
Be aware that there is no hardware video acceleration support for the Raspberry Pi on Windows IoT Core yet, and it probably won't come soon either. There are other devices that do have proper video drivers (see the hardware support page of Windows IoT Core).
Also be aware that there is no Windows Store on Windows IoT Core, unless you are an OEM (then you can publish your properly signed apps in an official way to devices that are managed by you).
A simpler way would be to buy a Windows 10 box from AliExpress. Then you can use Miracast to stream your screen, install apps from the Windows Store and play films directly on it, for example using Kodi, for which remote control apps exist.
Is it possible with the Mobile SDK to write an application that receives waypoints from a web service, then starts the drone and monitors its operation?
The use case is as follows:
- Start drone
- Fly to a height of 2m
- Take picture/video and send/stream picture/video to the app
- Land again
Is it possible to simulate my code in the DJI Simulator and then when I know everything works use a Spark or Mavic for a real-life demonstration?
Yes, absolutely, although it's not necessarily Mobile SDK specific. Here's an example:
1/ You create a desktop (native or web) app that does the mission planning. This app can save the mission in a known format (my advice is to create a framework/library to manage this format).
2/ A mobile app built on top of the Mobile SDK reads the mission in that format, using the said framework that manages it.
3/ The mobile app translates the mission requirements into the mission systems available in the Mobile SDK, either through WaypointMissions, MissionControl or even VirtualStick commands (see the sketch after this list).
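As an illustration of step 3, here is a rough sketch of how a mission read from the shared format might be translated into a WaypointMission with the DJI Android Mobile SDK 4.x. The hard-coded waypoint array stands in for whatever your mission-format framework would return, and builder/operator method names can differ between SDK versions, so treat this as a sketch under those assumptions rather than production code.

```java
import dji.common.error.DJIError;
import dji.common.mission.waypoint.Waypoint;
import dji.common.mission.waypoint.WaypointMission;
import dji.common.mission.waypoint.WaypointMissionFinishedAction;
import dji.common.mission.waypoint.WaypointMissionFlightPathMode;
import dji.common.mission.waypoint.WaypointMissionHeadingMode;
import dji.common.util.CommonCallbacks;
import dji.sdk.mission.waypoint.WaypointMissionOperator;
import dji.sdk.sdkmanager.DJISDKManager;

public class MissionRunner {

    // Waypoints as they might come out of the shared mission file: latitude, longitude, altitude (m).
    // For the use case above this would be a couple of points at 2 m altitude.
    private static final double[][] PLANNED_WAYPOINTS = {
            {48.137154, 11.576124, 2.0},
            {48.137500, 11.576500, 2.0},
    };

    public static void runPlannedMission() {
        // Translate the planned points into a WaypointMission.
        WaypointMission.Builder builder = new WaypointMission.Builder()
                .autoFlightSpeed(2f)
                .maxFlightSpeed(5f)
                .finishedAction(WaypointMissionFinishedAction.GO_HOME) // return and land when done
                .headingMode(WaypointMissionHeadingMode.AUTO)
                .flightPathMode(WaypointMissionFlightPathMode.NORMAL);
        for (double[] wp : PLANNED_WAYPOINTS) {
            builder.addWaypoint(new Waypoint(wp[0], wp[1], (float) wp[2]));
        }
        WaypointMission mission = builder.build();

        // Load, upload and start the mission through the waypoint mission operator.
        final WaypointMissionOperator operator =
                DJISDKManager.getInstance().getMissionControl().getWaypointMissionOperator();
        DJIError loadError = operator.loadMission(mission);
        if (loadError != null) {
            return; // mission rejected, e.g. waypoints too close together
        }
        operator.uploadMission(new CommonCallbacks.CompletionCallback() {
            @Override
            public void onResult(DJIError uploadError) {
                if (uploadError == null) {
                    operator.startMission(new CommonCallbacks.CompletionCallback() {
                        @Override
                        public void onResult(DJIError startError) {
                            // startError == null means the aircraft has begun executing the mission
                        }
                    });
                }
            }
        });
    }
}
```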
As for simulation, once the drone is in simulator mode, the mission will execute and you can watch how it behaves.
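For completeness, a hedged sketch of putting the connected aircraft into simulator mode with the Android Mobile SDK; the home location, update frequency and satellite count here are arbitrary test values, and the exact Simulator/InitializationData API may also differ per SDK version.

```java
import dji.common.error.DJIError;
import dji.common.flightcontroller.simulator.InitializationData;
import dji.common.model.LocationCoordinate2D;
import dji.common.util.CommonCallbacks;
import dji.sdk.products.Aircraft;
import dji.sdk.sdkmanager.DJISDKManager;

public class SimulatorSetup {

    /** Puts the connected aircraft into simulator mode so missions can be tested without flying. */
    public static void enableSimulator() {
        Aircraft aircraft = (Aircraft) DJISDKManager.getInstance().getProduct();
        if (aircraft == null || aircraft.getFlightController() == null) {
            return; // no aircraft connected yet
        }
        // Simulated home point, state-update frequency (Hz) and GPS satellite count are test values.
        InitializationData data = InitializationData.createInstance(
                new LocationCoordinate2D(48.137154, 11.576124), 10, 12);

        aircraft.getFlightController().getSimulator().start(data,
                new CommonCallbacks.CompletionCallback() {
                    @Override
                    public void onResult(DJIError error) {
                        // error == null: simulator is running and mission execution happens virtually
                    }
                });
    }
}
```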
If you want to take things further, you can even stream data back from the mobile app to your desktop app to overlay the actual path against the planned path.
I can't drop source code for this as it's extensive, but hopefully this helps.
Can Phantom 4 and Inspire 2 be programmed on a PC?
I wonder if I can program the drones directly through the PC and acquire the images.
The Phantom 4 has a smart device attached. I wonder if I can use another smart device (iPhone, iPad) to control the drone without using the smart device that comes installed.
In the case of the Inspire 2, I use a smart device and controller with a USB connection. I wonder if the smart device and controller can be connected wirelessly without using USB.
I am curious about the communication method of the Phantom 4 and Inspire 2.
I wonder what communication frequency should be used to directly control the drone via PC.
I wonder if I can program the Phantom 4 and Inspire 2 using the DJI Onboard SDK.
Thank you.
No, you would need to use the Mobile SDK as a bridge (the PC talks to the mobile app, and the mobile app controls the aircraft).
I'm not sure which smart device you mean. If you mean the Phantom 4 Pro remote controller with the attached screen, you can just swap to a regular remote without a screen and use an iOS or Android device. In the case of the CrystalSky, you can simply remove it and use another device.
Unfortunately no.
For PC control see #1; as for frequency, 2.4 GHz and 5.8 GHz are commonly used and configurable using DJI GO.
Unfortunately no; for Onboard SDK supported products, see the bottom of this page.
I cannot comment yet, so this is rather an additional option for getting video/images from the Phantom 4 to a PC directly.
1. In the menu of the DJI GO app you can set up video streaming through RTMP, so video at 720p resolution will be streamed. See this tutorial: https://afsyaw.wordpress.com/2017/07/06/processing-images-from-the-dji-matrice-100-and-zenmuse-x3-without-the-manifold/
2. On your Linux machine you can set up a video stream server such as NGINX. Tutorial here: https://obsproject.com/forum/resources/how-to-set-up-your-own-private-rtmp-server-using-nginx.50/
3. Then you can use the OpenCV usb-video lib for ROS to process the video or images (see the sketch below).
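As a small illustration of step 3 (not ROS-specific), here is a sketch that reads frames from the RTMP stream with the plain OpenCV Java bindings. The stream URL and application name are whatever you configured on your NGINX server, and OpenCV needs to be built with FFmpeg support for RTMP input to work.

```java
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.videoio.VideoCapture;

public class RtmpFrameGrabber {

    public static void main(String[] args) {
        // Load the native OpenCV library (requires the OpenCV Java bindings to be installed).
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        // URL of the stream that DJI GO publishes to the private NGINX RTMP server;
        // the application name ("live") and stream key are whatever you configured.
        VideoCapture capture = new VideoCapture("rtmp://192.168.1.10/live/drone");
        if (!capture.isOpened()) {
            System.err.println("Could not open RTMP stream (OpenCV must have FFmpeg support).");
            return;
        }

        Mat frame = new Mat();
        int saved = 0;
        while (saved < 10 && capture.read(frame)) {
            // Process or save each 720p frame; here we just dump a few to disk.
            Imgcodecs.imwrite("frame_" + saved + ".png", frame);
            saved++;
        }
        capture.release();
    }
}
```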
If you have a Matrice 100, there is also an HDMI port on the controller. If you use a USB-HDMI grabber such as Immilet or Magwel (both have Mac, Windows and Linux drivers), you can skip step 1, and the video stream is better.
Evening community.
I'm in the process of developing a Windows-based application which heavily revolves around mobile devices being connected to a machine via USB. Currently, the communication with Android using Google's ADB drivers works without a problem (currently, that is). The problem is getting said application to integrate well with iOS users.
What the application does: I'm basically reinventing the wheel of stock control for a client, who wants a completely customized application based around their current mobile barcode scanner app, which scans and saves the scanned items to a text file whose name is created from the date and time. This application exists on both iOS and Android devices.
What I'm looking to do is have their current machine automatically map the connected device to a drive letter, to allow easier browsing of the device through the application, and pull the necessary file and save it locally to then make other changes as needed.
So, the overall question is: without having a jailbroken/rooted mobile device to allow mass storage, is it possible to have a Windows XP based machine automatically map connected iOS and Android devices to a drive letter? There will be only one device connected at a time.
I have long been interested in developing on the platform. I even have the tools installed on my desktop already, but I can't upgrade my WDDM from 1.0 to 1.1. To make things simple: my graphics chip is not up to the task of running the emulator.
If I still buy a Windows Phone (e.g. a Nokia Lumia) for development purposes, can I sideload and test my apps there efficiently instead of going against the emulator?
Yes, of course. It's very easy and convenient. You have the debugger and all the goodies. The advantage of the emulator is the test option for 256 MB devices.
That's exactly what I used to do prior to upgrading my dev station. The nominal minimum spec says 3 GB, but with a real phone it worked fine with 2 GB, and as you say this also gets around the graphics limitations.
Note that the setting for whether the emulator or physical device is used is stored in the project, so if you accept a project from someone else you will have to set it once prior to debugging.
Well, there are two sides to the coin. With the physical device you can test most things, but with a few limitations:
You will not be able to test internet-related test cases. For example, if you have an app which uses internet connectivity, you will not be able to test it on the device easily because:
The device does not use the machine's internet connectivity.
When connected to the PC, the device's own internet connectivity (3G data connection / Wi-Fi / GPRS) is broken.
You will have to purchase an account right from the first day you want to test your app on a device. If you have the emulator working, you could postpone this for at least a few days.
Does the Windows Phone Test Framework by Expensify support testing on real mobile devices running the Windows Phone 7 OS?
If yes, which devices does it support? Please reply.
It depends on what functionality you want to test.
The framework uses three different APIs to talk to the apps:
1. a COM API to install/uninstall and start/stop apps;
2. Silverlight automation peer support (communicated with over HTTP) to talk to the Silverlight controls within the apps - this allows getting and setting values, some list manipulation, and inspection of the visual tree;
3. mouse and keyboard emulation to control the emulator device - this is needed for things like physical touches, hard button presses, and other emulator interactions when the app isn't running (e.g. taking photos).
For devices attached using USB: 1 and 2 are available.
For devices attached using a network: only 2 is available.
In summary, you can do some things if you want to... but I don't use the test framework to test real phones; I stick to the emulator. When external inputs (e.g. camera or GPS) are needed, I find a way to mock them.
According to the first few seconds of http://www.youtube.com/watch?v=2JkJfHZDd2g, "there is some support for devices".
I would hope/expect all devices to behave the same way, subject to how they're configured.