Kinect V2 "Body Index" tracking stopped working - Windows

I am in the process of developing a Kinect v2 desktop app for an R&D project.
Roughly a month ago I was provided with the Kinect sensor. I connected it to a USB 3.0 port (motherboard), installed the SDK, and everything was working properly: depth, color, body index tracking (skeleton joints), etc. After I confirmed everything was working, I put the project on pause.
So yesterday I decided to continue working on the project, when I realised the "Body Tracking" feature was not working - not in Kinect Studio, not even in the examples provided.
I uninstalled/reinstalled the drivers and the Kinect SDK and tried different USB ports; nothing seems to fix this issue. I scoured Google for possible solutions and found nothing.
I am running Windows 10; I cannot recall if during these 30 days Windows installed some sort of update that may have messed up the drivers.
Just to clarify: the sensor appears to be working when I open Kinect Studio; the only feature that does not work is the body index one.
Also, when I run the "Kinect v2 Configuration Verifier", everything is "Green" except the "USB controller" section, which is "Orange" (although I believe it was always like this, even when it was working - not 100% sure).
Can anyone help me solve this issue?
Cheers!

So I solved the issue, and I am ashamed. It seems that I was not keeping the appropriate distance between myself and the Kinect. When I moved a couple of meters back, the body index worked. So to anyone having this problem: make sure you are standing at least 1.5 meters away from the Kinect sensor.

Related

No video feed from M600 Pro with mobile SDK

I'm having trouble getting the video feed working from the M600 Pro. It works fine in the DJI Go app, so I know the feed is there, just not in my iOS app. When we initially set the app up for the M210, we needed to set the bandwidth allocation to make it work; is there something like that needed for the M600 Pro? Has anyone got that working?
Below is my code:
if product?.model == DJIAircraftModelNameMatrice600Pro {
    // Listen on the secondary video feed for the M600 Pro.
    DJISDKManager.videoFeeder()?.secondaryVideoFeed.add(self, with: nil)
}
// Start decoding/rendering the feed.
VideoPreviewer.instance().start()
Yes, bandwidth could be an issue, but other aspects of your setup could also be blocking the feed. Without knowing your entire setup this is a difficult question to answer, since the M600 has a variety of variables. The best thing to do is send a ticket to dev@dji.com with this issue, but include: what cameras you are using, how you have made the connections, your bandwidth setup, the reason you are using secondaryVideoFeed vs. the primary feed, and any other details you can think of.
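If you want to experiment before filing the ticket, here is a minimal sketch of the kind of bandwidth/feed setup that may help. It assumes DJI Mobile SDK 4.x naming (the lightbridgeLink property and the setBandwidthAllocationForLBVideoInputPort call in particular are assumptions and may differ in your SDK version), so treat it as an outline rather than a confirmed fix:

import DJISDK

// Sketch only: shift most of the Lightbridge downlink bandwidth to the
// video input port and listen on the primary feed. Verify the property
// and method names against the headers of the SDK version you build with.
func configureM600VideoFeed(listener: DJIVideoFeedListener) {
    guard let product = DJISDKManager.product() else { return }

    if product.model == DJIAircraftModelNameMatrice600Pro {
        // Assumption: the M600 Pro exposes a Lightbridge air link whose
        // bandwidth allocation can be adjusted, as on the M210.
        product.airLink?.lightbridgeLink?
            .setBandwidthAllocationForLBVideoInputPort(0.8) { error in
                if let error = error {
                    print("Bandwidth allocation failed: \(error.localizedDescription)")
                }
            }

        // Try the primary feed first; the secondary feed is often empty
        // unless a second video source is physically connected.
        DJISDKManager.videoFeeder()?.primaryVideoFeed.add(listener, with: nil)
    }

    VideoPreviewer.instance().start()
}

If the primary feed then delivers data, the bandwidth split (or listening on the wrong feed) was the problem; if not, the support ticket with your full camera and connection setup is still the way to go.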

New Tango device OTA not upgrading

New device unboxed a couple of days ago. Cannot use OTA to upgrade; it says the current software is up to date even though it is not. Without a current kernel I cannot download Tango Core, etc. So the device is basically non-functional (other than as a plain tablet :-)) out of the box.
Same problem as this question:
Cannot update Tango Core - "Package file was not signed correctly"
Factory resets did not fix the problem. Unlike the previous question, waiting 48 hours provided no resolution. Several users on the Google+ developer group are having similar issues with this batch of devices, so this seems to be a common problem.
Thanks!
We had an error in our OTA server configuration that we fixed this morning. This might take some time to propagate, but you should start seeing updates soon. Sorry for the inconvenience!

Kinect 2 hand detection with Candescent NUI

Does anyone know if the new Kinect has support for Candescent NUI?
I want to detect fingers and hands with Candescent, but I can't find out whether the new OpenNI, Kinect, NITE or Microsoft SDK supports the new Kinect, or works with Candescent NUI.
You can find a port of Candescent NUI to Kinect V2 here, but you have to set up your dependencies to run correctly: you need OpenNI.net.dll, OpenNI64.dll, XnVNITE.net.dll and Microsoft.Kinect.dll (the Kinect SDK V2 DLL).
It seems that nobody has ported Candescent NUI to Kinect v2.
You can do it yourself; his code is pretty good and clear.
A number of months ago I really wanted to port this code to Kinect v2, and even started working on it, but realized that I don't want front-facing finger tracking but top-down tracking, something more similar to RetroDepth (https://www.youtube.com/watch?v=96CZ_QPBx0s), and that is what I am implementing now.
If you only need finger tracking, similar to Leap Motion, you may use the Nimble SDK. It works pretty well (not perfectly) with Kinect v2 in front-facing mode, like Candescent does, but it gives you a full 3D hand skeleton. With Kinect v1 it works using a top-down setup. But I am not sure if they still provide free licenses; check it.
If they don't provide a free license, you could re-implement Candescent's hand tracking features; moreover, you could make them more robust, so they could support other depth cameras with different ranges (near, far) and different resolutions. Actually, one of the most annoying things (in my opinion) about Candescent is its hard-coded resolution of the depth and color images.
Moreover, at CHI 2015 (http://chi2015.acm.org/) Microsoft will present a new hand tracking technique for Kinect v2 (https://www.youtube.com/watch?v=A-xXrMpOHyc); maybe they will integrate it into the Kinect SDK v2 soon. Also, the paper will probably be published after the conference and uploaded to acm.org or even to some public library, so you could see how they have done it, and hopefully somebody will implement it soon as well.

DJI Phantom API or hackable procedure

Maybe I haven't looked hard enough, but I spent yesterday googling for a bit and found no relevant projects on hacking the DJI Phantom drone in order to create new coordinating apps, besides the coordination app DJI currently provides for their drone. I'm trying to see if there's a way to communicate with the drone using a specific protocol so that it accepts a set of procedures.
Any help would be awesome,
Thanks.
Great news for you and all of us droneys! DJI has launched their SDK since you asked this question. They released it last November, and you can now apply for a license and write your own apps for the Phantom 2 Vision+ using their SDK.
Check it out at https://developer.dji.com/
I am already building a project using the SDK - you can follow my progress on my blog / product site. I will also try to update it with good DJI related development links and tips.
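To give a sense of what "writing your own app with their SDK" looks like on iOS, here is a rough registration sketch. It uses DJI Mobile SDK 4.x-era Swift naming, which is an assumption - the original 2014 SDK and newer releases name things differently, and newer versions require extra delegate methods - and the class name is hypothetical:

import Foundation
import DJISDK

// Minimal outline of registering an app with the DJI Mobile SDK and
// connecting to an aircraft. The app key from developer.dji.com goes
// into Info.plist under the "DJISDKAppKey" entry.
class DroneConnectionManager: NSObject, DJISDKManagerDelegate {

    func connect() {
        DJISDKManager.registerApp(with: self)
    }

    // Called once DJI's servers have validated the app key.
    func appRegisteredWithError(_ error: Error?) {
        if let error = error {
            print("SDK registration failed: \(error.localizedDescription)")
            return
        }
        // Start looking for a connected aircraft.
        DJISDKManager.startConnectionToProduct()
    }

    // Called when an aircraft (e.g. a Phantom) is detected.
    func productConnected(_ product: DJIBaseProduct?) {
        print("Connected to: \(product?.model ?? "unknown product")")
    }

    func productDisconnected() {
        print("Product disconnected")
    }
}

From there the SDK exposes the camera, gimbal, and flight controller components, which is where the actual coordination logic goes.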
This post is old, but I think it is good to leave a footprint for others :)
There is a new company called NVdrones, which created a piece of hardware that you can attach to any drone (you need physical access to the flight controller), and once you do that you can use their SDK (Arduino, Java, Android and JavaScript) to write your app without any hacking, soldering or anything else. It is just plug and play.
Another benefit is that you are not locked to a specific drone (DJI SDK or 3DRobotics SDK); you can use the board on anything you want, which gives you a lot of flexibility.
The developer site is http://developers.NVdrones.com
Hope this helps.
This is a great topic!
You could check how to hack your copter here: https://github.com/flyver/Flyver-SDK/wiki/-2.2--How-To:-Flyver-Hack-a-Copter
By opening the drone, taking out the original controller, soldering a few wires and sticking an Android phone to it, you will have the ability to program your Phantom in a modern manner with an open source SDK and application-based development. This means that you could add computer vision, automation or additional hardware to it. You could also use smartphones, the web and other interactive devices for remotely controlling the copter instead of using the standard remote controls.
The Phantom, however, is off-center balanced because most people use a gimbal with it. Without the gimbal it is a lot less stable, from my experiments, so you will have to put some extra work into center balancing it.

Microsoft Kinect (For Windows)

Okay, so I am not sure if a lot of you have started to work on the Microsoft Kinect for Windows that was released in February 2012. I am trying to start developing, and I wanted to know if there are any tutorials I can find on how to use the SDK, or if someone can guide me on how the RGB stream can be captured using the Kinect?
There are many tutorials. Some can be found in Channel 9's Kinect Quick Start Series, and Channel 9 also has many articles on the Kinect. All of the classes and variables found in the SDK are documented on MSDN, and there are many tutorials on Rob Relyea's blog. And if you are ever struggling, you can visit the Kinect Development Chatroom (assuming you have 20 rep).
Hope this helps!
Personally, I wouldn't start with Channel 9, or any tutorials for that matter. The most enjoyable way to jump into the Kinect and start messing around with stuff is to install the Developer Toolkit. It was updated 3 days ago to include some really cool 3D point cloud stuff. Download/install the toolkit, run the Kinect Studio application it comes with, and spend some time checking out what the Kinect is capable of. If you see something of interest, install it to your computer and open it in Visual Studio. If you don't have Visual Studio, you can download the C# Express version for free. The source code is all very well commented, and I find that's the best way to learn: by example. You don't have to sit through Channel 9's sometimes painful videos or spend time reading a blog; you can just jump in and have fun with it. If you get stuck, then refer back to Channel 9 or come back to Stack Overflow.
The best place to start learning is MSDN, which is also where you got the driver for the Kinect. They offer many tutorials and videos that explain most concepts for the Kinect.
You can refer to Kinect 1.0 for the Kinect for Windows SDK 1.0.
