Does anyone know if the new Kinect has support for Candescent NUI?
I want to detect fingers and hands with Candescent, but I can't find out whether the new OpenNI, NiTE, or Microsoft Kinect SDK supports the new Kinect, or whether any of them is known to work with Candescent NUI.
You can find a port of Candescent NUI to Kinect V2 here, but you have to set up your dependencies correctly: you need OpenNI.net.dll, OpenNI64.dll, XnVNITE.net.dll, and Microsoft.Kinect.dll (the Kinect SDK V2 dll).
It seems that nobody has ported Candescent NUI to Kinect v2.
You can do it yourself, though.
The author's code is pretty good and clear.
A few months ago I really wanted to port this code to Kinect v2 and even started working on it, but I realized that I don't want front-facing finger tracking but rather a top-down setup, something more similar to RetroDepth (https://www.youtube.com/watch?v=96CZ_QPBx0s), which I am now implementing.
If you only need finger tracking similar to Leap Motion, you may use the Nimble SDK. It works pretty well (not perfectly) with Kinect v2 in front-facing mode, like Candescent does, but it gives you a full 3D hand skeleton. With Kinect v1 it works in a top-down setup. I am not sure whether they still provide free licenses, though; check that.
If they don't provide free licenses, you can re-implement Candescent's hand-tracking features yourself; you could even make them more robust, so that they support other depth cameras with different ranges (near, far) and different resolutions. Actually, one of the most annoying things about Candescent (in my opinion) is its hard-coded resolution for the depth and color images.
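A rough sketch of what "supporting different resolutions and depth ranges" could look like structurally. Candescent itself is C#, but the idea is language-agnostic; this C++ outline uses entirely hypothetical names and only illustrates passing the camera geometry in instead of hard-coding it:

    // Hypothetical sketch: pass the camera's frame geometry and usable depth
    // range into the tracker instead of hard-coding 640x480 and a fixed range.
    #include <cstdint>
    #include <vector>

    struct DepthCameraConfig {
        int width      = 640;   // depth frame width in pixels
        int height     = 480;   // depth frame height in pixels
        int minDepthMm = 500;   // nearest usable depth (near range), millimetres
        int maxDepthMm = 1000;  // farthest depth still treated as a hand, millimetres
    };

    class HandTracker {          // hypothetical interface, not Candescent's API
    public:
        explicit HandTracker(DepthCameraConfig cfg) : cfg_(cfg) {}

        // depthMm must hold cfg.width * cfg.height samples, in millimetres.
        void ProcessFrame(const std::vector<uint16_t>& depthMm) {
            // 1) keep only pixels inside [minDepthMm, maxDepthMm]
            // 2) extract contours, convexity defects and fingertips,
            //    using cfg_.width / cfg_.height instead of constants
        }

    private:
        DepthCameraConfig cfg_;
    };

With the resolution and depth range injected like this, the same tracking code could be reused for Kinect v1, Kinect v2 or another depth camera just by changing the configuration.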
Moreover, at CHI 2015 (http://chi2015.acm.org/) Microsoft will present a new hand-tracking technique for Kinect v2 (https://www.youtube.com/watch?v=A-xXrMpOHyc); maybe they will integrate it into the Kinect SDK v2 soon. After the conference the paper will probably be published on acm.org or even in some public library, so you can see how they did it, and hopefully somebody will implement it soon as well.
I notice that desktop support isn't mentioned in NativeScript's future roadmap any more.
Has this been dropped for good, or is it still on the cards?
If it is still on the cards, for when is it planned?
NativeScript under Progress ownership
While NativeScript was owned by Progress, desktop support was never a priority; developer surveys did not show strong enough demand for it, and the NativeScript Core team were stretched too thin to take it on as a side project.
Of interest: before the death of Windows Phone, the team did get very far with implementing a Universal Windows Platform runtime for NativeScript: https://github.com/NativeScript/windows-runtime
The NativeScript iOS runtime (https://github.com/NativeScript/ios-runtime for JSC, https://github.com/NativeScript/ns-v8ios-runtime for V8) is also close to delivering Catalyst support, although it's essentially undocumented for now.
I spoke with the NativeScript iOS runtime team and they said it would be pretty trivial to generate JS bindings to macOS (AppKit/Cocoa), too – though one would still have to implement all the UI components as AppKit ones, so it would only be the start of the journey.
Unofficial support
Kamen Bundev (on the Progress/Telerik NativeScript team) has been building a Qt-based desktop implementation of NativeScript as a hobby project for a long time:
https://github.com/bundyo/nativescript-platform-desktop
It has access to Node.js's APIs rather than, say, the Obj-C runtime on macOS, however.
NativeScript under nStudio ownership
NativeScript was recently handed over to nStudio, who may have a different stance. This question did in fact receive an official answer recently on Twitter:
They have also expressed love for the idea of creating Windows 10 apps with it (the tweet links to this issue, https://github.com/NativeScript/NativeScript/issues/8643):
My personal speculation
Note that I do not work for nStudio, and the dust is still settling after the NativeScript handover, so everything from here is just speculation:
So I think there's no question that the passion is there – the real question is whether they have the resources to back it. I personally think that there won't be any movement on it anytime soon, as nStudio need at least a few months just to get used to driving the NativeScript ecosystem and sorting out the long-standing open-source frictions. I think that they'd absolutely welcome a community-driven effort on this, of course. I imagine that by 2021 they'll feel more ready to take on projects of that scale.
The readme.md at https://github.com/NativeScript/windows-runtime says that the Windows runtime for NativeScript is at the proof-of-concept stage, and then lists what I understand to be very deep language features that are not implemented yet.
The tone of the announcement at https://www.nativescript.org/blog/nativescript-runtime-preview-for-windows-10 seems a bit more enthusiastic about the current feature set.
Being able to use NativeScript on Windows Phone (and any other platform) is incredibly appealing.
TJ, a core team member, recently posted on the forums about this:
Hey #NezzaGrey,
Thanks for reaching out, and awesome that you’re liking NativeScript :smile:. Straight to the point though—we’re not actively working on UWP support because 1) it’s a ton of work to add a new platform and commit to supporting that platform indefinitely, and 2) we’re not seeing nearly enough demand from our community to justify taking on that work.
That doesn’t mean that UWP support in NativeScript will never happen, but it’s not coming in the short term because we’re just not seeing the demand. That can always change though. I’d encourage you to add your use case to the GitHub issue open for adding UWP support in NativeScript: https://github.com/NativeScript/NativeScript/issues/254. Yes, the issue is somewhat ancient, but we really do pay attention to well-thought-out comments during roadmap discussions.
I’ll note two other things. First, our initial work on making a Windows runtime is completely open source and available on GitHub: https://github.com/NativeScript/windows-runtime. We’d love to have community help to make the new runtime a reality.
Second, one option you have is to build your iOS and Android apps with NativeScript and Angular, and to use our code sharing approaches (see https://www.nativescript.org/blog/code-sharing-between-web-and-mobile-with-angular-and-nativescript) to share your Angular code with other apps. You could take that approach to share Angular code between your NativeScript apps and your UWP apps if you use something like Electron. This approach isn’t ideal, as you’d probably prefer to build a completely native UWP app, but it’s something to consider if you’re open to using Electron.
Anyways, hopefully you found some of this helpful. If you have any other questions feel free to follow up.
Source: https://discourse.nativescript.org/t/windows-uwp-support/2659/3
I would like to know how to get the feature points that are currently used for motion tracking, as well as the ones that are present in the learned area description (whether currently detected or not).
There is an older, related post without a useful answer:
How is it possible to get the tracked features that the Tango APIs use for motion tracking? I'm using the Tango precisely so that I don't have to do SLAM and IMU integration on my own.
What do I need to do to visualize the tracked features like they did in some of the presentation videos? https://www.youtube.com/watch?v=2y7NX-HUlMc (0:35 - 0:55)
What I want, in general, is some kind of measure or visual guidance of how well the device has learned the current environment. I know there is the Inspector App, but I need this information on the fly.
Thanks for your Help ;)
If you want to check which parts of an area are present in your learned area model and which are not, you can use the Tango Debug Overlay App. It has a field 'Tracking Success' that only counts up if the device sees learned feature points (ADF on) or finds new feature points (ADF off) (http://grauonline.de/alexwww/tmp/tango_debug_overlay_app.jpg). Additionally, you can request the same debug information that the Tango Debug Overlay App uses (as plain text) via UDP port 29361 from your own app and parse the returned debug text (although this is not recommended at all for a real app, as this interface is not documented); see the sketch below.
PS: In Tango Core 01-19-2017 this counter does not seem to work anymore.
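If you want to try the UDP route from your own app, here is a minimal sketch using plain POSIX sockets (so it also works from NDK/native code). Because the interface is undocumented, the request payload, the use of 127.0.0.1 and the exact field names you would parse out (e.g. 'Tracking Success') are all assumptions here:

    // Minimal sketch: ask the undocumented Tango debug-text service on UDP port
    // 29361 for its status text and print whatever comes back.
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <sys/time.h>
    #include <unistd.h>
    #include <cstdio>
    #include <cstring>

    int main() {
        int sock = socket(AF_INET, SOCK_DGRAM, 0);
        if (sock < 0) { perror("socket"); return 1; }

        sockaddr_in addr{};
        addr.sin_family = AF_INET;
        addr.sin_port   = htons(29361);                    // Tango debug port
        inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);   // same device (assumption)

        // Assumption: any short datagram triggers a debug-text reply.
        const char request[] = "get";
        sendto(sock, request, sizeof(request), 0,
               reinterpret_cast<sockaddr*>(&addr), sizeof(addr));

        // Wait up to two seconds for the reply.
        timeval tv{2, 0};
        setsockopt(sock, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));

        char buf[4096];
        ssize_t n = recvfrom(sock, buf, sizeof(buf) - 1, 0, nullptr, nullptr);
        if (n > 0) {
            buf[n] = '\0';
            printf("%s\n", buf);   // parse fields such as "Tracking Success" here
        } else {
            fprintf(stderr, "no debug text received\n");
        }
        close(sock);
        return 0;
    }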
Maybe I haven't looked hard enough, but I spent some time googling yesterday and found no relevant projects on hacking the DJI Phantom drone in order to create new coordination apps, besides the coordination app DJI currently uses for their drone. I'm trying to see if there's a way to communicate with the drone using a specific protocol so that it will accept a set of procedures.
Any help would be awesome,
Thanks.
Great news for you and all of us droneys! DJI has launched their SDK since you asked this question. They released it last November, and you can now apply for a license and write your own apps for the Phantom 2 Vision+ using their SDK.
Check it out at https://developer.dji.com/
I am already building a project using the SDK - you can follow my progress on my blog / product site. I will also try to keep it updated with good DJI-related development links and tips.
This post is old, but I think it is good to leave a footprint for others :)
There is a new company called NVdrones, which created a piece of hardware that you can attach to any drone (you need physical access to the flight controller). Once you do that, you can use their SDK (Arduino, Java, Android and JavaScript) to write your app without any hacking, soldering or anything else. It is just plug and play.
Another benefit is that you are not locked to a specific drone (as with the DJI SDK or the 3DRobotics SDK); you can use the board on anything you want, which gives you lots of flexibility.
The developer site is http://developers.NVdrones.com
Hope this helps.
This is a great topic!
You could check how to hack your copter here: https://github.com/flyver/Flyver-SDK/wiki/-2.2--How-To:-Flyver-Hack-a-Copter
By opening the drone, taking out the original controller, soldering a few wires and attaching an Android phone to it, you gain the ability to program your Phantom in a modern manner, with an open-source SDK and application-based development. This means that you could add computer vision, automation or additional hardware to it. You could also use smartphones, the web and other interactive devices to control the copter remotely instead of the standard remote control.
The Phantom, however, is balanced off-center, because most people use a gimbal with it. Without the gimbal it is a lot less stable in my experiments, so you will have to put some extra work into balancing it.
Okay, so I am not sure how many of you have started working with Microsoft Kinect for Windows, which was released in February 2012. I am trying to start developing, and I wanted to know whether there are any tutorials I can find on how to use the SDK, or whether someone can show me how the RGB stream can be captured using the Kinect.
There are many tutorials. Some can be found in Channel 9's Kinect Quick Start Series, and Channel 9 also has many articles on the Kinect. All of the classes and variables in the SDK are documented on MSDN, and there are many tutorials on Rob Relyea's blog. And if you are ever struggling, you can visit the Kinect Development Chatroom (assuming you have 20 rep).
Hope this helps!
Personally, I wouldn't start with Channel 9, or any tutorials for that matter. The most enjoyable way to jump into the Kinect and start messing around is to install the Developer Toolkit. It was updated 3 days ago to include some really cool 3D point cloud stuff. Download and install the toolkit, run the Kinect Studio application it comes with, and spend some time checking out what the Kinect is capable of. If you see something of interest, install it to your computer and open it in Visual Studio. If you don't have Visual Studio, you can download the C# Express version for free. The source code is all very well commented, and I find that learning by example is the best way. You don't have to sit through Channel 9's sometimes painful videos or spend time reading a blog; you can just jump in and have fun with it. If you get stuck, then refer back to Channel 9 or come back to Stack Overflow.
The best place to start learning is MSDN, which is also where you got the driver for the Kinect. They offer many tutorials and videos that explain most concepts for the Kinect.
You can refer to Kinect 1.0 for the Kinect for Windows SDK 1.0.
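To address the RGB-stream part of the question concretely, here is a minimal sketch using the native C++ API of the Kinect for Windows SDK 1.x (NuiApi.h, link against Kinect10.lib). It loosely follows the pattern of the SDK's ColorBasics sample; error handling is mostly omitted, and the resolution is just one of the supported options:

    // Minimal sketch: poll the Kinect color (RGB) stream with the native
    // Kinect for Windows SDK 1.x. Error handling is mostly omitted.
    #include <windows.h>
    #include <NuiApi.h>
    #include <cstdio>

    int main() {
        if (FAILED(NuiInitialize(NUI_INITIALIZE_FLAG_USES_COLOR))) return 1;

        HANDLE colorStream = nullptr;
        if (FAILED(NuiImageStreamOpen(NUI_IMAGE_TYPE_COLOR,
                                      NUI_IMAGE_RESOLUTION_640x480,
                                      0, 2, nullptr, &colorStream))) {
            NuiShutdown();
            return 1;
        }

        for (int i = 0; i < 100; ++i) {                // grab roughly 100 frames
            NUI_IMAGE_FRAME frame;
            if (FAILED(NuiImageStreamGetNextFrame(colorStream, 1000, &frame)))
                continue;                              // no frame within 1 second

            INuiFrameTexture* texture = frame.pFrameTexture;
            NUI_LOCKED_RECT lockedRect;
            texture->LockRect(0, &lockedRect, nullptr, 0);
            if (lockedRect.Pitch != 0) {
                // pBits points to 640*480 pixels, 4 bytes each (BGRX layout).
                const BYTE* pixels = reinterpret_cast<const BYTE*>(lockedRect.pBits);
                printf("frame %d: first pixel B=%d G=%d R=%d\n",
                       i, pixels[0], pixels[1], pixels[2]);
            }
            texture->UnlockRect(0);
            NuiImageStreamReleaseFrame(colorStream, &frame);
        }

        NuiShutdown();
        return 0;
    }

The managed C# API offers the same thing through KinectSensor.ColorStream.Enable(...) and the ColorFrameReady event, which is what most of the Channel 9 samples use.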