How can we access the point cloud in the Leap Motion API? One feature that led me to purchase it was the point cloud demo from their promo video, but I can't seem to locate documentation regarding it and user replies on the forums seem mixed. Am I just missing something?
I'm looking to use the Leap Motion as a sort of cheap 3D scanner.
That demo was clearly a mockup which simulated a 3-D model of the human hand, not actual point cloud data. You can tell because points were displayed that could not possibly have been captured by the sensor, since they were occluded from its view.
orion78fr points to one forum post on this, but the transcript of an interview with the founders provides more information straight from the source:
Can you please allow access to cloud points in SDK?
David: So I think sometimes people have a misperception as to really
how things work in our hardware. It’s very different from other things
like the Kinect, and in normal device operation we have very different
priorities than most other technologies. Our priority is precision,
small movements, very low latency, very low CPU usage - so in order to
do that we will often be making sacrifices that make what the device
is doing completely not applicable to what I think you’re getting at,
which is 3D scanning.
What we’re working on are sort of alternative device modes that will
let you use it for those sorts of purposes, but that’s not what it was
originally built for. You know, it's our goal to let it be able to do
those things, and the hardware can do many things. But our
priority right now is of course human computer interaction, which we
think is really the missing component in technology, and that’s our
core passion.
Michael: We really believe in trying to squeeze every ounce of
optimization and performance out of the devices for the purpose they
were built. So in this case the Leap today is intended to be a great
human computer interface. And we have made thousands of little
optimizations along the way to make it better, that might sacrifice
things in the process that might be useful for things like 3D scanning
objects. But those are intentional decisions, but they don’t mean that
we think 3D scanning isn’t exciting and isn’t a good use case. There
will be other things we build as a company in the future, and other
devices that might be able to do both or maybe there will be two
different devices. One that is fully optimized for 3D scanning, and
one that continues to be optimized and as great as it can be at
tracking fingers and hands.
If we haven’t done a good job communicating that the device isn’t
about 3D scanning or isn’t going to be able to 3D scan, that’s
unfortunate and it’s a mistake on our part - but that’s something that
we’ve had to sacrifice. The good news is that those sacrifices have
made the main device really exceptional at tracking hands and fingers.
I have developed with the Leap Motion Controller as well as several other 3-D scanning systems, and from what I've seen I seriously doubt that we're ever going to get point cloud data out of the currently shipping hardware. If we do, the fidelity will be far below what we see for gross finger and hand tracking from that device.
There are some low-cost alternatives for 3-D scanning that have started to emerge. SoftKinetic has their DepthSense 325 camera for $250 (which is effectively the same as the Creative Gesture Camera that is only $150 right now). The DS 325 is a time-of-flight IR camera that gives you a 320x240 point cloud map of the 3-D space in front of it. In my tests, it worked well with opaque materials, but anything with a little gloss or shininess gave it trouble.
The PrimeSense Carmine 1.09 ($200) uses structured light to get point cloud data in front of it, as an advancement of the technology they supplied for the original Kinect. It has a lower effective spatial resolution than the SoftKinetic cameras, but it seems to provide less depth noise and to work on a wider variety of materials.
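Whichever depth camera you use, turning its depth image into a point cloud is just the pinhole camera model run backwards. A minimal sketch in C++ (the intrinsics fx, fy, cx, cy below are placeholder values; substitute the calibration that ships with your device):

```cpp
#include <cstdint>
#include <vector>

struct Point3 { float x, y, z; };

// Back-project a depth image into a point cloud using the pinhole model.
// 'depth' is row-major, width*height values in millimetres; 0 means "no reading".
// fx, fy, cx, cy are camera intrinsics -- the defaults below are placeholders,
// use the calibration that comes with your device.
std::vector<Point3> depthToPointCloud(const std::vector<uint16_t>& depth,
                                      int width, int height,
                                      float fx = 285.0f, float fy = 285.0f,
                                      float cx = 160.0f, float cy = 120.0f)
{
    std::vector<Point3> cloud;
    cloud.reserve(depth.size());
    for (int v = 0; v < height; ++v) {
        for (int u = 0; u < width; ++u) {
            uint16_t d = depth[v * width + u];
            if (d == 0) continue;                 // no depth reading at this pixel
            float z = d * 0.001f;                 // millimetres -> metres
            cloud.push_back({ (u - cx) * z / fx,  // X
                              (v - cy) * z / fy,  // Y
                              z });               // Z
        }
    }
    return cloud;
}
```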
The DUO was also a promising project, but unfortunately its Kickstarter campaign failed. It was using stereoscopic imaging from an IR source to return a point cloud from a couple of PS3 Eye cameras. They may restart that project at some point in the future.
While the Leap may not do what you want, it looks like more and more devices are coming out in the consumer price range to enable 3-D scanning.
See this link
It says that yes, the Leap Motion can theoretically produce a point cloud, and it was temporarily part of the visualiser during the beta, but no, you cannot access it through the Leap Motion APIs right now.
It may appear in the future, but it's not a priority for the Leap Motion team.
As of Leap Motion SDK 2.x you can at least access the raw stereo camera images. In my experience this is a convenient substitute for many of the tasks where point cloud data was wanted, which is why I mention it here, even though it does not expose the point cloud the driver generates internally to extract the pointer metadata. It does, however, give you the ability to generate your own point cloud from the images, which is why I think it is strongly related to the question.
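For reference, grabbing the raw stereo frames in SDK 2.x looks roughly like the sketch below. This is written from memory of the C++ API (POLICY_IMAGES, Image::data(), etc.), so treat the exact names as assumptions and check them against the official docs:

```cpp
#include <Leap.h>
#include <iostream>

int main() {
    Leap::Controller controller;
    // Raw image access has to be opted into explicitly in SDK 2.x.
    controller.setPolicy(Leap::Controller::POLICY_IMAGES);

    // Poll until the device delivers a stereo pair (a real app would use a Listener).
    for (;;) {
        Leap::ImageList images = controller.frame().images();
        if (images.count() < 2) continue;

        Leap::Image left  = images[0];
        Leap::Image right = images[1];

        // Each image is an 8-bit greyscale buffer of width() * height() bytes.
        const unsigned char* leftPixels = left.data();
        std::cout << "stereo pair: " << left.width() << "x" << left.height() << std::endl;

        // From here you would undistort (each Image carries a distortion map) and
        // run your own stereo matching to turn the pair into depth / a point cloud.
        (void)leftPixels; (void)right;
        break;
    }
    return 0;
}
```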
Currently there is no access to the point cloud in the public API. But I think this video is not a mock-up, so it should be possible:
http://www.youtube.com/watch?v=MYgsAMKLu7s#t=40s
Roadtovr recently reviewed the Nimble Sense Kickstarter, which uses a point cloud.
It’s the same technology that the Kinect 2 uses, and it’s supposed to have some advantages over the Leap Motion.
Because it’s a depth sensing camera, you can point the camera top-down like the Touch+, although their product will not ship till next year.
I am in a similar situation as caspertm was when asking this question: How do I export Point Cloud Data (Project Tango)?
I apologize that I cannot comment on other questions yet or I would have just done so on that question. I too was looking for the functionality the mapper app provided (specifically the capturing and saving of 3d environments) and have found through searching and reading that question that it is not available for the tablet. The answer provided to caspertm's question was to use the point cloud data sample code as a starting point and modify it to log the data to a file.
I am wondering if anyone would be willing to go into more detail about what needs to be modified to the point cloud sample (I am using the Java version) to save that data and retrieve it later on my computer so I can manipulate it in a program like blender or unity.
I am very new to the Android development process. I can read the sample point cloud Java code and get a very basic understanding of what is going on, but I definitely have a lot of learning to do. I realize I am asking for a lot of help and don't expect any one person (or even several) to paint me the entire picture, but tips on things like whether this data should be saved internally or externally, which Java file needs the saving code, how to format the file so it is readable in other 3D programs, and how to see more than just the current snapshot of the point cloud would be greatly appreciated. If anyone could point me in the right direction on how to get the actual environment colors projected onto the cloud data, that would be amazing too, but any help or links on any of these requests would be welcome.
Thanks so much!
This answer addresses only computational geometry aspects - issues involved in getting the point cloud, phoning home with it, stuffing it in a file, etc are considered 'self evident' in order to more quickly go play with the math :-)
The nice, shallow, pretty answer: if you're scanning something where the point cloud represents an object with a fair, curvy, or straight surface, then the suggestions here will help -- https://blender.stackexchange.com/questions/7028/wrapping-a-mesh-around-point-cloud-with-cavities Please note that 'fair' is a loaded word.
The more detailed answer isn't pretty - and reality will have a way of handing you point clouds that make the preceding algorithms very irritated. If you are looking to take a random cloud of points (yes, I know it's a meaningful cloud of points to you, but mathematicians make much of these details) and reconstruct a geometry from it, i.e. define the topology that relates those points in a meaningful way, you're talking about a very nasty problem. Check the internet for discussions of Delaunay triangulation and Voronoi diagrams, which are the more traditional approaches to solving this issue. Sort of. It's pretty straightforward if you were scanning a model of a volcano. Assuming Tango could see it (I think probably not), scanning the Calder mobile at JFK would give pretty much anyone a drinking problem. The algorithms themselves assume a planar basis and do not react well to fiddling with that assumption. Explaining this requires talking about manifolds, and, reading between the lines in your question, I'm assuming you'd rather not have me go any further.
You should be able to find some open source implementations - if it builds and passes all of its unit tests, then you should be OK using it as a black box. If you have to reach inside, be careful. Those things bite :-)
I think I can partially answer the question:
In terms of saving the points, it should be fairly simple: you could open a file and keep writing the point data into it each time the callback is called. However, as the Project Tango developer website mentions, the data provided by the API is just points, not a mesh. That means after getting the points you will need to figure out your own way to construct indices (i.e. a mesh) from them.
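To make the file part concrete: an ASCII PLY file is about the simplest format that Blender and MeshLab will open directly. The asker is working from the Java sample, but the format is the same either way; below is a sketch in C++, where writePly and its arguments are hypothetical glue to be fed from whatever buffer of XYZ floats the point cloud callback hands you:

```cpp
#include <cstdio>
#include <vector>

// Dump a flat buffer of XYZ triples (x0,y0,z0, x1,y1,z1, ...) to an ASCII PLY
// file that Blender/MeshLab can import directly. Points only -- no faces,
// since the Tango API gives you points, not a mesh.
bool writePly(const char* path, const std::vector<float>& xyz)
{
    const size_t numPoints = xyz.size() / 3;
    FILE* f = std::fopen(path, "w");
    if (!f) return false;

    std::fprintf(f,
        "ply\nformat ascii 1.0\n"
        "element vertex %zu\n"
        "property float x\nproperty float y\nproperty float z\n"
        "end_header\n", numPoints);

    for (size_t i = 0; i < numPoints; ++i)
        std::fprintf(f, "%f %f %f\n", xyz[3 * i], xyz[3 * i + 1], xyz[3 * i + 2]);

    std::fclose(f);
    return true;
}
```

Writing one such file per callback (or appending to a growing buffer) gives you snapshots you can pull off the device and merge; reconstructing a surface from the merged cloud is the harder problem covered in the other answer.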
For my job I've been using a Java version of ARToolkit (NyARToolkit). So far it has proven good enough for our needs, but my boss is starting to want the framework ported to other platforms such as the web (Flash, etc.) and mobiles. While I suppose I could use other ports, I'm increasingly annoyed by not knowing how the kit works and, beyond that, by some of its limitations. Later I'll also need to extend the kit's abilities to add things like interaction (virtual buttons on cards, etc.), which as far as I've seen isn't supported in NyARToolkit.
So basically, I need to replace ARToolkit with a custom marker detector (and, in the case of NyARToolkit, try to get rid of JMF and use a better solution via JNI). However, I don't know how these detectors work. I know about 3D graphics and I've built a nice framework around it, but I need to know how to build the underlying tech :-).
Does anyone know of any sources on how to implement a marker-based augmented reality application from scratch? When searching on Google I only find "applications" of AR, not the underlying algorithms :-/.
'From scratch' is a relative term. Truly doing it from scratch, without using any pre-existing vision code, would be very painful and you wouldn't do a better job of it than the entire computer vision community.
However, if you want to do AR with existing vision code, this is more reasonable. The essential sub-tasks are:
Find the markers in your image or video.
Make sure they are the ones you want.
Figure out how they are oriented relative to the camera.
The first task is keypoint localization. Techniques for this include SIFT keypoint detection, the Harris corner detector, and others. Some of these have open-source implementations - I think OpenCV has the Harris corner detector available through the function goodFeaturesToTrack.
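As a concrete example of the first task, corner detection with OpenCV's C++ API looks roughly like this (the thresholds are arbitrary values you would tune for your markers):

```cpp
#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>
#include <vector>

int main() {
    // Load a camera frame as greyscale (the path is just an example).
    cv::Mat grey = cv::imread("frame.png", cv::IMREAD_GRAYSCALE);
    if (grey.empty()) return 1;

    std::vector<cv::Point2f> corners;
    cv::goodFeaturesToTrack(grey, corners,
                            100,            // max corners to return
                            0.01,           // quality level relative to the best corner
                            10.0,           // minimum distance between corners, in pixels
                            cv::noArray(),  // no mask
                            3,              // block size for the corner measure
                            true,           // use the Harris detector, not min-eigenvalue
                            0.04);          // Harris free parameter k

    // 'corners' now holds candidate marker corners to feed into matching and pose.
    return 0;
}
```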
The second task is making region descriptors. Techniques for this include SIFT descriptors, HOG descriptors, and many many others. There should be an open-source implementation of one of these somewhere.
The third task is also done by keypoint localizers. Ideally you want an affine transformation, since this will tell you how the marker is sitting in 3-space. The Harris affine detector should work for this. For more details go here: http://en.wikipedia.org/wiki/Harris_affine_region_detector
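As an aside: for square fiducial markers of the ARToolkit kind, a very common alternative to affine-covariant detectors is to take the four detected corners of the square and solve a perspective-n-point problem for the full 3D pose, which OpenCV exposes as cv::solvePnP. This is a different route from the Harris affine detector mentioned above, and the marker size and camera matrix in the sketch are placeholder values:

```cpp
#include <opencv2/calib3d.hpp>
#include <vector>

// Recover a square marker's rotation and translation relative to the camera from its
// four detected corner pixels. Marker size and the camera matrix are placeholders --
// use your real marker dimensions and lens calibration.
void markerPose(const std::vector<cv::Point2f>& imageCorners)
{
    const float s = 0.04f;   // e.g. a 4 cm marker
    std::vector<cv::Point3f> objectCorners = {
        {-s / 2,  s / 2, 0}, { s / 2,  s / 2, 0},
        { s / 2, -s / 2, 0}, {-s / 2, -s / 2, 0}
    };

    cv::Mat cameraMatrix = (cv::Mat_<double>(3, 3) <<
        800, 0, 320,
        0, 800, 240,
        0, 0, 1);
    cv::Mat distCoeffs = cv::Mat::zeros(4, 1, CV_64F);   // assume no lens distortion

    cv::Mat rvec, tvec;   // rotation (Rodrigues vector) and translation in camera space
    cv::solvePnP(objectCorners, imageCorners, cameraMatrix, distCoeffs, rvec, tvec);

    // rvec/tvec describe how the marker sits in 3-space; convert with cv::Rodrigues to
    // a 3x3 rotation and build the model-view matrix for your rendering framework.
}
```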
I have been asked to investigate porting Wii games and some (Sony) PSOne games to OpenGL ES (can you guess what platform?).
I have never undertaken a game port like this before (and will be hiring someone to do it) but I'd like to understand the process.
Does the Wii use OpenGL? If not what does it use and how easy is it to port to OpenGL / OpenGL ES?
Are there any resources/books/blogs that will help me in understanding the process?
Will my company have to become an official Wii developer? If so where do I start that process?
Porting from the Wii or the PSOne is a complex and involved task that can be broken down into multiple separate engineering efforts working in parallel to produce a working end product. The best possible thing you can do before moving to the target hardware is to compartmentalize all of the non-portable code while ensuring that the game continues to run as expected. When you commit to moving to the new platform, your effort switches to reimplementing the non-portable compartmentalized parts.
So, to answer your question, yes, you will need to become or work with a Sony- and Nintendo-licensed developer in order to take this approach. In the case of Sony, I don't even know if they still offer a PSOne development program, which presents issues. Your Sony account rep can help clarify.
The major subsystems that are likely to be the focus of your porting effort are:
Rendering: Graphics code contains fundamental assumptions about the hardware it is being run on in order to perform optimally. API-level compatibility is superficial compatibility and does not get you as much as you may hope it does. Plan on finding the entry point to the renderer, determining what data you need to render a scene, and rewriting all the render code from there for your target hardware.
Game Saving: Game state serialization and archival will need to be separated out. Older games often fwrite() structs with #pragma pack'ed fields. Is that still going to work for you? (There is a serialization sketch after this list.)
Networking: Wii games write to high-level services that are unavailable on your target hardware. At the low level, sockets are still sockets. What network services do your Wii games rely on?
Controls: Given where you are coming from and where you are going, anything short of a full redesign or reimagining of input will result in poor reviews of the software.
Memory Management: Console games often make fundamental assumptions about the rate at which the system software returns memory from the heap, how much fragmentation it will cause, and how long the game needs to operate under these conditions. These memory management assumptions are obsolete on the new platform. It is wise to write your own memory manager that provides a cushion between the game and the operating system (a minimal sketch also follows this list). Also, console games compiled for release are stripped of most error handling and don't gracefully handle running out of memory -- just a heads up.
Content: Your bottleneck will be system memory. Can you fit the necessary assets into memory? With textures, you can reduce mip levels where necessary, and with graphics hardware timing you can pull in the far clipping plane. With assets resident in memory, you may need a technical artist to go through and reduce the face density of your models, or an animation programmer to implement a more size-friendly animation codec. This is very game specific.
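On the game-saving point above, the sketch below shows what "fwrite() of a #pragma pack'ed struct" typically looks like and the portable direction to move in (the struct and its fields are invented for illustration):

```cpp
#include <cstdio>
#include <cstdint>
#include <cstring>

// Console-era pattern: the whole struct dumped as raw bytes. The file layout then
// depends on compiler padding, endianness and field order -- fine when the game only
// ever ran on one compiler and one CPU, fragile when you move it.
#pragma pack(push, 1)
struct SaveGame {            // fields invented for illustration
    uint32_t checkpoint;
    uint32_t health;
    float    posX, posY, posZ;
};
#pragma pack(pop)

void saveLegacy(const SaveGame& s, FILE* f) {
    std::fwrite(&s, sizeof(s), 1, f);   // layout is whatever this compiler decided
}

// Portable direction: write each field explicitly in a fixed byte order, with a
// version number so later builds can still read old saves.
static void putU32(FILE* f, uint32_t v) {
    unsigned char b[4] = { (unsigned char)v,         (unsigned char)(v >> 8),
                           (unsigned char)(v >> 16), (unsigned char)(v >> 24) };
    std::fwrite(b, 1, 4, f);            // always little-endian, regardless of platform
}

static void putF32(FILE* f, float v) {
    uint32_t bits;
    std::memcpy(&bits, &v, sizeof bits);
    putU32(f, bits);
}

void savePortable(const SaveGame& s, FILE* f) {
    putU32(f, 2);                       // save-format version
    putU32(f, s.checkpoint);
    putU32(f, s.health);
    putF32(f, s.posX);
    putF32(f, s.posY);
    putF32(f, s.posZ);
}
```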
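And on the memory-management point, a "cushion" usually means grabbing one large block from the OS up front and sub-allocating from it yourself, so the game never depends on the new platform's allocator behaviour. A minimal arena-style sketch; a real console memory manager adds free lists, multiple pools, and out-of-memory handling:

```cpp
#include <cstddef>
#include <cstdlib>
#include <cassert>

// One large block requested from the OS at startup; the game sub-allocates from it,
// so heap behaviour no longer depends on the host platform's allocator.
class Arena {
public:
    explicit Arena(std::size_t bytes)
        : base_(static_cast<unsigned char*>(std::malloc(bytes))),
          size_(bytes), used_(0) { assert(base_ != nullptr); }
    ~Arena() { std::free(base_); }

    // 'align' must be a power of two.
    void* alloc(std::size_t bytes, std::size_t align = 16) {
        std::size_t start = (used_ + align - 1) & ~(align - 1);  // align the cursor
        if (start + bytes > size_) return nullptr;               // cushion exhausted
        used_ = start + bytes;
        return base_ + start;
    }

    // Throw away everything allocated so far, e.g. all per-level data at once.
    void reset() { used_ = 0; }

private:
    unsigned char* base_;
    std::size_t size_, used_;
};

// Usage:
//   Arena levelArena(64u * 1024u * 1024u);   // 64 MB cushion grabbed once
//   float* verts = static_cast<float*>(levelArena.alloc(vertCount * sizeof(float)));
```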
You also run into the standard set of problems with things like bit compatibility (though the Wii and PSOne are both 32-bit), compiler idiosyncrasies, build script incompatibilities and proprietary compiler extensions.
Games are relatively challenging to test. A good rule of thumb is you want to have enough testers on your team to run through the game in a maximum of two days, covering all major aspects of play. In games that take a long time to beat (RPGs with 30+ hours of gameplay), your testing team needs to be quite large to offer full coverage. Because you are just doing a port, you can come up with a testing plan that maximizes coverage of your new code without having a testing team punch every wall in your game to make sure it (still) has clipping. The game shipped once.
Becoming a licensed developer requires you to apply. The turnaround time, from experience, is not good. Generally speaking, priority is given to studios with shipped titles and organized offices with reasonably good security and the ability to buy the (relatively) expensive development kits. You may be better off working with a licensed developer if you do not meet these criteria.
Console and game development is challenging for people already experienced in it. There is no book that covers it all. My recommendation is to attempt to recruit an expert who has experience shipping titles in a position of systems or engine programmer. What types of programmers and skillsets exist in games is a whole different question for Stack, though.
Games consoles don't use OpenGL but their own, custom libraries. The main reason is that they are pretty slow and have little RAM. So you need to squeeze out every drop of performance you can get. And that means: Custom code. Usually, you get a framework with the developer kit which gets you started and then, you build your code from that. Eventually, you'll start replacing parts from the developer kit with your own special code to get all the speed and special effects you need.
There is a reason why PSOne games are so ugly on the PS3 despite the fact that the developers have access to the sources: the revenue just doesn't justify touching the code.
Which is one reason why game development is so expensive: Every game is (more or less) a completely new product. Sometimes, game companies can reuse a bit of code from the last version but more often than not, they have to develop everything again. They also don't talk much with each other.
In recent years, kits have become more complex and powerful and you can get complete game engines (with all kinds of effects and 3D support) but each engine is a completely different kind of beast, so you can't even copy code from engine A to B.
Today, media content (video, audio and rendered sequences) is so expensive that the actual game engine is often a minor detail, so this isn't going to change any time soon.
Net result: If you want to port a game, write an emulator for the hardware (which is usually pretty simple and allows you to run all kinds of games).
[EDIT] To develop software for the Wii, see here: http://www.warioworld.com/
For a Wii emulator, see http://wiiemulator.net/
I ported a couple of games, when I was a new game programmer, from working with one version of our engine to a newer version (where backwards compatibility was neither ignored nor pursued). Even copying (and possibly renaming) the files and placing them in a home in the new project was a bit of work. Following that, the procedure was:
recompile
fix many of the hundreds of errors [in many places, with the same error occurring over and over again]
"wire up" calls from the new game engine to the appropriate calls in the old code
"wire up" function calls from the old code into the new game engine
deal with other oddities (e.g. in the old game engine, the 2D game would "swizzle" textures itself; in the new version, the engine did it, on specific platforms)
and, while I don't recall this clearly, it was probably mixed in with a bunch of #ifdef-ing out portions of code so the thing would actually compile, and possibly creating function stubs to be filled in later.
As I recall, it was three or four days until I had something that compiled. (But, it did help when we ported other games from the old version to the new one!)
The magnitude of the task will come down to what the code you are getting is like. If it has generic 3D calls that you can intercept -- add a thunking layer to -- then you are in business. It depends on the level of abstraction in the code. If it is well-behaved and has things like "RenderModel" and "RenderWorld" calls, you can replace those functions, and even the structures that they work with. If drawing is occurring all over the place, and calls are more like "Draw Polygon" and "Draw Line" or "Draw using this highly optimised data structure", then you are likely in for a long slog.
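To make the thunking-layer idea concrete: if the old code funnels drawing through a few high-level calls like RenderModel and RenderWorld, you can hide them behind an interface and supply a new implementation per platform. The names below, other than those two, are hypothetical scaffolding, not any particular engine's API:

```cpp
// Hypothetical scaffolding around the high-level draw calls mentioned above.
struct Model;   // whatever the old engine's model structure is
struct World;

// The thin interface ("thunking layer") that the ported game code talks to.
class Renderer {
public:
    virtual ~Renderer() = default;
    virtual void RenderModel(const Model& model) = 0;
    virtual void RenderWorld(const World& world) = 0;
};

// Implementation for the new target; the original build would wrap the console's
// native graphics library behind the same interface.
class GlEsRenderer : public Renderer {
public:
    void RenderModel(const Model& model) override {
        // translate the old engine's model data into OpenGL ES draw calls here
        (void)model;
    }
    void RenderWorld(const World& world) override {
        (void)world;
    }
};

// Old call sites like   RenderModel(player);   become   gRenderer->RenderModel(player);
// so the game logic compiles unchanged while the rendering back end is swapped out.
```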
You shouldn't need a Wii dev kit. Sometimes it is nice to verify that the code you are given does indeed compile in the original environment (and matches the shipping code!), but sometimes you can just take it on faith and make it work in its new environment.
Lastly, I don't think the Wii uses OpenGL, and I really don't know where to point you for further help.
What you may want to do is start by designing the architecture of the game and writing up a detailed specification of what the new game should be like.
Once you have this, since you will be rewriting the code, you may find that some of the business logic that doesn't deal with the console can be ported over. But, anything dealing with I/O, user interaction or graphics/sounds will be rewritten, so you might as well do that from scratch.
A specification is very important, to make certain that you know how the current game is working so that the new port will give the same user experience, if that is what is desired.
You may even want to keep the same bugs, if they are part of the experience: if I know that on the Wii I can jump down and bounce off a wall to land safely, it may be bothersome if I can't do that in the new version.
Porting a PS1 game to an iPhone would be quite a task; they work in very different ways. I'm sure it's doable, but it will be a LOT of work to go from fixed-point maths and rendering without a Z-buffer to a real graphics chip.
The Wii would be a lot easier. The Wii API is very similar to OpenGL. However, the Wii has some very nice fixed-function features that just are not available on any other GL-based platform. It should be doable, though ...
I'm not really sure I can say anything more than that. Have signed far too many NDAs over the years to be 100% sure of what I can and cannot say ;)
Still if you want to hire someone to do some porting work and are prepared to supply the required hardware then I might be free ;)
Along with all the buzz about the wonderful BumpTop desktop environment, I'm now wondering: what is the relation between physics and the techniques BumpTop uses? Basically, I am interested in learning the techniques/algorithms followed in this desktop environment. For example:
Collision detection -- used when one icon is about to collide with another.
Any other known techniques?
It probably uses a quite common rigid body dynamics simulator as used in (simple/older) computer games. If you want to play with one yourself, have a look at Open Dynamics Engine.
I'd say that it uses mechanics (the branch of physics that describes motion) to determine/calculate the outcome of object interaction.
I remember seeing a very early demo of this months ago - it looks very impressive!
Well, it looks as though it's using friction and velocity as well. The friction slows the animations down - if it didn't then things would just fly out of the way. Velocity is used to have things move at speed in certain directions.
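That friction-plus-velocity behaviour falls out of a very small integration step: each icon carries a velocity, the velocity decays a little every frame (friction/damping), and the position advances by the velocity. A toy sketch, not BumpTop's actual code:

```cpp
#include <cmath>

struct Icon {
    float x = 0, y = 0;      // position on the desktop
    float vx = 0, vy = 0;    // current velocity, e.g. set by a "toss" gesture
};

// Advance one icon by dt seconds. 'keepPerSecond' is the fraction of velocity that
// survives each second -- an arbitrary value chosen only to look plausible.
void step(Icon& icon, float dt, float keepPerSecond = 0.15f)
{
    icon.x += icon.vx * dt;
    icon.y += icon.vy * dt;

    // Exponential damping (friction): tossed icons glide, slow down and settle
    // instead of flying off the desktop forever.
    float damping = std::pow(keepPerSecond, dt);
    icon.vx *= damping;
    icon.vy *= damping;
}
```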
More info here
BumpTop
Extensive use of physics effects like bumping and tossing is applied to documents when they interact, for a more realistic experience.
I was wondering how the Half-Life 2 multiplayer protocol works in mods like Counter-Strike: Source or Day Of Defeat: Source. I believe that they use some kind of obfuscation and proprietary compression algorithm. I would like to know how different kinds of messages are encoded in a packet.
Half-Life 2, Counter-Strike: Source, etc. all use Valve's Source engine. Valve has a developer wiki which covers a lot of this stuff (it's pretty cool, check it out!)...
These articles might interest you:
Latency Compensating Methods in Client/Server In-game Protocol, Design and Optimization
Source Multiplayer Networking
You should check out Luigi Auriemma's papers on Half-Life. You'll find a packet decoder and some disassembled algorithms there, too.
Reverse engineering information on Half-Life 2 may be hard to come by, because of its relevance for cheating. I'd guess boards like mpcforum are your best bet.
This is a really complicated question, my suggestion would be to look at some of the open source network game engines:
http://www.hawksoft.com/hawknl/
http://www.zoidcom.com/
http://sourceforge.net/projects/opentnl
http://www.gillius.org/gne/
You could also look at the source code for the Quake series, upon which the original Half-Life engine is based.
Though details might differ, the general framework is pretty old. Here's a quick overview:
In early FPS games such as Doom and Quake, the player's position was updated only on the server's response to your move command. That is, you pressed the move-forward button, the client communicated that to the server, the server updated your position in its memory and then relayed a new game state to your client with your new position. This led to very laggy play: shooting, and even moving in narrow corridors, was a game of predicting lag.
Newer games let the client handle the player's shooting and movement by itself. Though this led to lag-free movement and firing, it opened up more possibilities for cheating by hacking the client code. Now every player moves and fires independently on their own computer and communicates to the server what they have done. This only breaks down when two players bump into one another or try to grab a power-up at the same time.
Now the server has this stream of client state coming from each player and has to sync them and make a coherent game out of them. The trick is to measure each player's latency. The ultimate goal is to be able to fire a very low-latency weapon (such as a sniper rifle or railgun) at an enemy moving sideways and have it hit correctly. If the latency of each player is known, suppose player A (latency 50 ms) fires at B (latency 60 ms). To make a hit, the shot has to hit B where B was 60 ms ago, as seen from where A was 50 ms ago.
That's a very rough overview but should give you the general idea.
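A sketch of that rewinding idea: the server keeps a short history of every player's position and, when a hitscan shot arrives, tests it against where the target was when the shooter actually fired (roughly the shooter's latency ago) rather than where the target is now. This shows the general technique, not Valve's actual implementation:

```cpp
#include <deque>
#include <cmath>

struct Snapshot { double time; float x, y; };   // one recorded position of a player

// Short history of recent positions for one player, recorded by the server each tick.
struct History {
    std::deque<Snapshot> snaps;

    void record(double t, float x, float y) {
        snaps.push_back({t, x, y});
        while (!snaps.empty() && t - snaps.front().time > 1.0)   // keep ~1 s of history
            snaps.pop_front();
    }

    // Snapshot closest to the requested past time (assumes at least one was recorded).
    Snapshot at(double t) const {
        Snapshot best = snaps.front();
        for (const Snapshot& s : snaps)
            if (std::abs(s.time - t) < std::abs(best.time - t)) best = s;
        return best;
    }
};

// Server-side hit test for an instant-hit weapon: evaluate the shot against the target
// as it was when the shooter fired, i.e. 'now' minus the shooter's measured latency.
bool resolveShot(const History& target, double now, double shooterLatency,
                 float shotX, float shotY, float hitRadius)
{
    Snapshot past = target.at(now - shooterLatency);
    float dx = past.x - shotX, dy = past.y - shotY;
    return dx * dx + dy * dy <= hitRadius * hitRadius;
}
```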
I suggest that you look into the Quake 1-3 engines. They are available with source code. Half-Life's protocol might be a bit different, but most likely close enough.