The documentation for Project Tango Explorer states that it is vital to hold the unit steady before beginning area learning. The C examples provided don't include this. What exactly is the interaction here? Does the device have to be stable prior to enabling area learning? How do I determine when the device has been stable for a sufficient time period?
My impression is that holding the device steady at the moment you connect to the Tango service is the key point - that is when the device-to-IMU and camera-to-IMU frames are being set up. I have found that if you don't hold the device steady in the Explorer app when asked to do so, it just keeps waiting until you do - it really is watching :-)
I haven't seen any reference to the "hold steady" animation they use in the demo apps - I would reuse it if I had found one.
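To put a number on "steady for a sufficient time period", here is a minimal sketch of one approach (my own, not anything from the docs): keep the recent position samples you receive - pose translations once the service is connected, or integrated accelerometer data before that - and call the device steady once they have stayed within a small radius for a window of time. The class name and both thresholds below are made up; tune them for your device.

```cpp
#include <cmath>
#include <deque>

// Hypothetical helper: feed each position sample + timestamp (e.g. from your
// pose callback) and ask whether the device has been still for the last
// kWindowSeconds. The thresholds are guesses, not values from the docs.
class SteadinessDetector {
 public:
  void AddSample(double timestamp, const double translation[3]) {
    samples_.push_back(
        {timestamp, {translation[0], translation[1], translation[2]}});
    // Drop samples older than the window.
    while (!samples_.empty() &&
           timestamp - samples_.front().t > kWindowSeconds) {
      samples_.pop_front();
    }
  }

  // True once we have (nearly) a full window of samples and none of them
  // moved more than kMaxDriftMeters from the oldest sample in the window.
  bool IsSteady() const {
    if (samples_.empty() ||
        samples_.back().t - samples_.front().t < kWindowSeconds * 0.9) {
      return false;
    }
    const Sample& ref = samples_.front();
    for (const Sample& s : samples_) {
      double dx = s.p[0] - ref.p[0];
      double dy = s.p[1] - ref.p[1];
      double dz = s.p[2] - ref.p[2];
      if (std::sqrt(dx * dx + dy * dy + dz * dz) > kMaxDriftMeters) {
        return false;
      }
    }
    return true;
  }

 private:
  struct Sample { double t; double p[3]; };
  std::deque<Sample> samples_;
  static constexpr double kWindowSeconds = 1.0;    // assumed "long enough"
  static constexpr double kMaxDriftMeters = 0.01;  // ~1 cm of allowed wobble
};
```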
I'm trying to control the drone to fly autonomously, but in an area without GPS access. Will I be able to use the SDK to tell it to fly x meters forward/backward/up/down, etc., without GPS?
The answer is YES. I rarely use GPS in my navigation tasks. The only difference is how complex the hardware and software need to be.
DJI OSDK
For most of my projects in recent years, I have used DJI OSDK ROS to fly the drone with pure LiDAR or cameras; see the example video below. Inside, it is running visual-inertial navigation with a stereo node. I have tried it with the DJI A3/N3/M100/M210/M600, and all work fine. Complex onboard hardware, but the software is simple and straightforward.
https://www.youtube.com/watch?v=1AbfRENy3OQ&t=90s
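To give a concrete flavour of the OSDK ROS route, here is a rough sketch of a node that streams a relative ENU position/yaw setpoint - the kind of command you would use to say "fly 1 m forward" without GPS coordinates. The topic name and sensor_msgs/Joy layout follow the dji_sdk demo nodes as I remember them, and you still need to obtain control authority first; verify everything against the dji_sdk wiki for your OSDK version before flying.

```cpp
// Minimal ROS node that streams an ENU position/yaw setpoint to the flight
// controller. Assumes the dji_sdk node is running and control authority has
// already been obtained; topic name may differ between OSDK versions.
#include <ros/ros.h>
#include <sensor_msgs/Joy.h>

int main(int argc, char** argv) {
  ros::init(argc, argv, "relative_move_example");
  ros::NodeHandle nh;

  ros::Publisher setpoint_pub = nh.advertise<sensor_msgs::Joy>(
      "dji_sdk/flight_control_setpoint_ENUposition_yaw", 10);

  // Desired offset in metres in the ENU frame: x (east), y (north), z (up),
  // plus yaw in radians. How the offset is referenced (current position vs.
  // local origin) depends on the OSDK version - check the docs.
  const float dx = 1.0f, dy = 0.0f, dz = 0.0f, yaw = 0.0f;

  ros::Rate rate(50);  // the flight controller expects a steady setpoint stream
  while (ros::ok()) {
    sensor_msgs::Joy cmd;
    cmd.axes = {dx, dy, dz, yaw};
    setpoint_pub.publish(cmd);
    ros::spinOnce();
    rate.sleep();
  }
  return 0;
}
```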
DJI MSDK or PSDK
For other cases, like the DJI MSDK or PSDK (if you have access), you can use other methods, such as streaming the video down, doing the localisation on the ground, and then sending the control commands back up. See my video below (this is not using a DJI A3, but it is a similar concept; I dropped this idea after a school project as it was deemed unsuitable for actual commercial application). It is PTAM with an EKF for IMU fusion.
https://www.youtube.com/watch?v=6xNINp7nnDge
The code running behind it is from here: https://github.com/tum-vision/tum_ardrone.
The DJI MSDK is meant to take the place of this package referenced by tum_ardrone: https://github.com/AutonomyLab/ardrone_autonomy.
All you have to do is modify the source code's input and output system into an Android C++ library. It is not an easy job, but I have already seen other people do it. It is simple in hardware but more work in software.
DJI WSDK
Even with the DJI Windows SDK you can still use a pure PTAM-based approach in a feature-rich area, as shown in the image below. It is running semi-direct visual odometry (SVO) from the ETH group. It is minimal effort in both hardware and software; the only problem is that you need the area to be feature-rich.
I somewhat disagree with @Ken, as optical flow is only meant for low-level/microcontroller position hold. It is not meant for dynamic odometry/state estimation. High-level general localization and mapping requires at least a visual odometry/SLAM output. And it is not only low altitude; medium to high altitudes will also work, as shown in the figure below.
The code used to produce this image is available here: https://github.com/uzh-rpg/rpg_svo
Since positioning relies very heavily on the aircraft's GPS system, I cannot see how you can accurately control movements. You will also have other issues, since most autonomous flight operations will not start until there are 6 (or 7) locked satellites.
You will also find that the aircraft will lack the ability to hold position (except when it is low enough for the vision systems to do so).
My suggestion would be to look into the virtual joystick parts of the SDK, but honestly I feel you will not be happy, and what you ask above may simply not be achievable.
I'm going to be tasked with making sure that an animation created in Unity3D can run on a Microsoft HoloLens. I don't have any further information about the animation yet, but I wanted to ask in advance if there are any big things I should keep in mind.
In the animation you play a "character" in first-person mode, controlled by WASD or the arrow keys, and you can look up, down, left, and right with the mouse. As far as I know, there are no special interactions besides colliders.
And another question: is it easier to test the animation on the actual HoloLens or to use a HoloLens emulator on my laptop?
I know it's a lot to ask right now without any code, but I still hope that some of you can give me a little advice :)
In my experience it is difficult to say. The HoloLens, although it is an awesome device with nice specs for its size, has quite limited graphics power. Try to reduce your models' vertices to a reasonably low count (e.g. using Blender's Decimate modifier). Turn down the quality in Unity's quality settings as proposed in the dev guide.
For your emulation question: the emulator does not emulate the HoloLens' specs (processor, memory, and so on); it emulates the input concepts etc. while running a Hyper-V virtual machine. So the performance in the emulator depends on your computer's hardware and is not representative of the actual performance on a HoloLens.
Also take a look at the performance guidelines from Microsoft
I have worked on the HoloLens for a couple of projects. A few points that may be useful for you:
The first big thing I would keep in mind is understanding whether the character has to move in a fully virtual (VR) environment. In that case the HoloLens is almost useless, because its transparent lenses let you see the real surroundings, distracting you from the virtual world. This is exactly what happens with the pre-installed HoloTour: a nice attempt, but you never completely feel that you are in Rome or Machu Picchu.
The second big thing I would consider is that - at least for the first release - the HoloLens has a very limited field of view, which "amounts to the size of a monitor in front of you – equivalent to 15 inches" [source]. It is likely that - in a situation where the character will look in every direction - the objects you place in the AR space will end up being cut off or invisible.
About testing: the emulator is really excellent; I didn't find great differences between it and the real device. Of course, if you already have a real HoloLens I would use that, but if not, I would first develop and test on the emulator to work out whether the project is worth the purchase.
(Wow, did SO just select a lot of non-sequitur questions - the joy of being on the edge :-)
I find that often, when I run this app multiple times from Android Studio, subsequent invocations that cause a resume instead of a cold start (a real cold start, as in the camera permission is needed again) leave the app unable to acquire depth data - it still gets attitude and position data, but it never gets any point clouds, because the onXyzIjAvailable callback registered in setTangoListeners never gets called again. Often I have to reboot the device; sometimes running Google's own app makes everything better, and other times only a reboot helps.
I'm pretty sure this is because the logic for connecting to and disconnecting from Tango in the pause and resume handlers is not quite right - however, even when the app is completely rebuilt and reinstalled due to code changes, this irritating behavior remains. Anyone have any experience with this?
I think there are two possible causes for this issue: one is the one you mentioned above (the connect/disconnect lifecycle), and the other could be the IR frame out-of-sync issue mentioned in the Project Tango known issues, which say:
"Occasionally, or when under high CPU load, the depth flash may appear in the color image, or no depth points are returned. Let the device cool down and/or reboot"
One way to diagnose the problem is to observe the device's IR projector (see the Project Tango Tablet Development Kit hardware section). First, launch a depth application; if everything works correctly, you will see a sequence of very dim red flashes coming from the IR projector, pulsing at around 3 Hz. If the problem is a failed connection, the IR projector won't give the red pulses. If it is depth out of sync, you will see the red pulses, but no depth coming out (no callbacks).
Hope it helps.
So I ran into the dreaded "Unfortunately, ... has stopped working" issue where ART loads 2 classes and the debugger promptly tanks out - see this.
So, in utter desperation, I switched from ART to Dalvik, half dreading a long session with ADB if the tablet got sour about the switch. It seemed to work. Tango works, albeit with a whole new set of head-scratchers (it's flakier about getting XyzIj data - the flash is running, the surface binding is working, hell, I can see the camera flashes in the surface showing the camera view - and if I try again and again, I do get Tango point data :-)
Can I assume all the Tango issues are of my own doing and keep using Dalvik, or must I switch back to ART and try to do all of my debugging through logcat?
To answer the question in the title: can we use Dalvik with Tango?
You should always use ART instead of Dalvik on Tango. Dalvik will work, but it is NOT stable on the Tango device, and it can cause issues like the one you experienced, such as depth going out of sync.
Same problem here.
What helps is switching to Dalvik for debugging non-Tango-related issues, but this really slows the development process down, as all apps have to be re-optimized on each switch between debugging and testing sessions.
When writing DirectX applications, obviously it's desirable to support the user suspending the application via Alt-Tab in a way that's fast and error-free. What is the best set of practices for ensuring this? Things that need to be addressed include:
The best methods of detecting when your application has been alt-tabbed out of and when it has been returned to.
What DirectX resources are lost when the user alt-tabs, and the best ways to cope with this.
Major things to do and things to avoid in application architecture for purposes of alt-tab support.
Any significant differences between major DirectX versions as they apply to the above.
Interesting tricks and gotchas are also good to hear about.
I will assume you are using C++ for the purposes of my answers, but if you can afford to use C#, XNA (http://creators.xna.com/) is an excellent game platform that handles all of these issues for you.
1]
This article is helpful for handling Windows events in the window procedure to detect when a window loses or gains focus; you could handle this on your main window: http://www.functionx.com/win32/Lesson05.htm. Also, check out the WM_ACTIVATEAPP message here: http://msdn.microsoft.com/en-us/library/ms632614(VS.85).aspx
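For 1], a minimal window-procedure sketch; g_appActive is just a placeholder flag for however you track activation in your own code:

```cpp
#include <windows.h>

// Minimal focus tracking. WM_ACTIVATEAPP fires once when the whole
// application gains or loses activation (e.g. on Alt-Tab), which is usually
// a cleaner signal than per-window WM_ACTIVATE.
bool g_appActive = true;  // placeholder global; use whatever state object you prefer

LRESULT CALLBACK WndProc(HWND hWnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg)
    {
    case WM_ACTIVATEAPP:
        // wParam is TRUE when we are being activated, FALSE when deactivated.
        g_appActive = (wParam != FALSE);
        return 0;

    case WM_DESTROY:
        PostQuitMessage(0);
        return 0;
    }
    return DefWindowProc(hWnd, msg, wParam, lParam);
}
```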
2]
The graphics device is lost when the application loses focus while in full-screen mode. Microsoft offers an article on how to handle this: http://msdn.microsoft.com/en-us/library/bb174717(VS.85).aspx This article also has a lost-device tutorial: http://www.codesampler.com/dx9src/dx9src_6.htm
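For the Direct3D 9 case specifically, the usual recovery loop looks roughly like this. It's a sketch: g_pDevice and g_d3dpp stand in for your device and creation parameters, and OnLostDevice/OnResetDevice are your own hooks that release and re-create D3DPOOL_DEFAULT resources.

```cpp
#include <windows.h>
#include <d3d9.h>

extern IDirect3DDevice9*     g_pDevice;  // your device
extern D3DPRESENT_PARAMETERS g_d3dpp;    // parameters used at creation time
void OnLostDevice();   // release render targets, dynamic buffers, etc.
void OnResetDevice();  // re-create them after a successful Reset()

// Call once per frame before rendering; returns true if it is safe to render.
bool HandleLostDevice()
{
    HRESULT hr = g_pDevice->TestCooperativeLevel();
    if (hr == D3DERR_DEVICELOST)
    {
        // Device lost and cannot be reset yet (e.g. still Alt-Tabbed away).
        Sleep(50);
        return false;  // skip rendering this frame
    }
    if (hr == D3DERR_DEVICENOTRESET)
    {
        OnLostDevice();
        if (FAILED(g_pDevice->Reset(&g_d3dpp)))
            return false;  // try again next frame
        OnResetDevice();
        return true;       // device is usable again
    }
    return SUCCEEDED(hr);  // D3D_OK: render normally
}
```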
DirectInput can also have a device lost error state, here is a link about that: http://www.toymaker.info/Games/html/directinput.html
DirectSound can also have a device lost error state, this article has code that handles that: http://www.eastcoastgames.com/directx/chapter2.html
3]
I would make sure to never disable Alt-Tab. You probably want minimal CPU load while the application is not active, because the user most likely Alt-Tabbed to do something else, so you could completely pause the application or reduce the frames rendered per second. If the application is minimized, you of course don't need to render anything at all. For a network game, my best suggestion is to still reduce the frames rendered per second as well as the number of network packets handled, possibly even throwing away many of the packets that come in until the game is re-activated.
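The throttling itself can be as simple as branching the main loop on the activation flag from the earlier snippet, blocking on GetMessage while inactive (UpdateAndRender is a placeholder for your per-frame work):

```cpp
#include <windows.h>

extern bool g_appActive;   // set from WM_ACTIVATEAPP (see earlier snippet)
void UpdateAndRender();    // placeholder for your per-frame update/render

void RunMessageLoop()
{
    MSG msg = {};
    bool running = true;
    while (running)
    {
        if (g_appActive)
        {
            // Active: drain any pending messages, then render a frame.
            while (PeekMessage(&msg, nullptr, 0, 0, PM_REMOVE))
            {
                if (msg.message == WM_QUIT)
                    running = false;
                TranslateMessage(&msg);
                DispatchMessage(&msg);
            }
            UpdateAndRender();
        }
        else
        {
            // Inactive: block until the next message arrives, burning ~0% CPU.
            // GetMessage returns 0 on WM_QUIT and -1 on error.
            if (GetMessage(&msg, nullptr, 0, 0) <= 0)
                running = false;
            TranslateMessage(&msg);
            DispatchMessage(&msg);
        }
    }
}
```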
4]
Honestly I would just stick to DirectX 9.0c (or DirectX 10 if you want to limit your target operating system to Vista and newer) if at all possible :)
Finally, the DirectX sdk has numerous tutorials and samples: http://www.microsoft.com/downloads/details.aspx?FamilyID=24a541d6-0486-4453-8641-1eee9e21b282&displaylang=en
We solved it by not using a fullscreen DirectX device at all - instead we used a full-screen window with the top-most flag to make it hide the task bar. If you Alt-Tab out of that, you can remove the flag and minimize the window. The texture resources are kept alive by the window.
However, this approach doesn't handle the device lost event happening due to 'lock screen', Ctrl+Alt+Delete, remote desktop connections, user switching or similar. But those don't need to be handled extremely fast or efficiently (at least that was the case in our application)
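If you want to try the same approach, the window setup is roughly the following sketch; it covers the primary monitor only, and the class and function names are just placeholders:

```cpp
#include <windows.h>

// Create a borderless, top-most window covering the primary monitor so it
// behaves like fullscreen without an exclusive-mode DirectX device.
// WndProc and hInstance come from your normal Win32 setup.
HWND CreateBorderlessFullscreenWindow(HINSTANCE hInstance, WNDPROC wndProc)
{
    WNDCLASS wc = {};
    wc.lpfnWndProc   = wndProc;
    wc.hInstance     = hInstance;
    wc.lpszClassName = TEXT("BorderlessFullscreen");
    RegisterClass(&wc);

    int width  = GetSystemMetrics(SM_CXSCREEN);
    int height = GetSystemMetrics(SM_CYSCREEN);

    // WS_POPUP removes the border/caption; WS_EX_TOPMOST keeps the window
    // above the task bar. On Alt-Tab you can clear the flag and minimize:
    //   SetWindowPos(hWnd, HWND_NOTOPMOST, 0, 0, 0, 0, SWP_NOMOVE | SWP_NOSIZE);
    //   ShowWindow(hWnd, SW_MINIMIZE);
    return CreateWindowEx(
        WS_EX_TOPMOST, wc.lpszClassName, TEXT("Game"),
        WS_POPUP | WS_VISIBLE,
        0, 0, width, height,
        nullptr, nullptr, hInstance, nullptr);
}
```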
All serious D3D apps should be able to handle lost devices as this is something that can happen for a variety of reasons.
In DX10 under Vista there is a new "Timeout Detection and Recovery" feature that, in my experience, makes it common for graphics devices to be reset, which causes a lost device for your app. This seems to be improving as drivers mature, but you need to handle it anyway.
In DX8 and 9 (and 10?), if you create your resources (mainly vertex and index buffers and textures) using D3DPOOL_MANAGED, they will persist across lost devices and will not need reloading. This is because they are stored in system memory and the DX runtime copies them to video memory automatically. However, there is a performance cost due to the copying, and this is not recommended for rapidly changing vertex data. Of course, you would profile first to determine whether there is a speed issue :-)
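For illustration, the pool is just a parameter at resource-creation time; a managed vertex buffer might be created like this (buffer size and FVF are whatever your app uses):

```cpp
#include <d3d9.h>

// Creating a vertex buffer in the managed pool: the runtime keeps a system
// memory copy and restores the video memory copy after a device reset, so
// this buffer does not have to be re-filled when the device is lost.
IDirect3DVertexBuffer9* CreateManagedVB(IDirect3DDevice9* device,
                                        UINT sizeInBytes, DWORD fvf)
{
    IDirect3DVertexBuffer9* vb = nullptr;
    HRESULT hr = device->CreateVertexBuffer(
        sizeInBytes,
        0,                // usage: D3DUSAGE_DYNAMIC is not allowed with MANAGED
        fvf,
        D3DPOOL_MANAGED,  // survives lost devices; D3DPOOL_DEFAULT would not
        &vb,
        nullptr);
    return SUCCEEDED(hr) ? vb : nullptr;
}
```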