Using Accelerometer in Wiimote for Physics Practicals

I have to develop some software at my school to use the accelerometer in the Wiimote for recording data from experiments, for example finding the acceleration and velocity of a moving object. I understand how the accelerometer values will be used, but I am somewhat stuck on the programming front. There is a set of things that I would like to do:
Live streaming of data from the Wiimote via bluetooth
Use the accelerometer values to find velocity and displacement via integration
Plot a set of results
Avoid the use of the infrared sensor on the Wiimote
Please can anyone give me their thoughts on how to go about this? It would also be great if people could direct me to existing projects that utilise the Wiimote. Can someone also suggest the best programming language to use for this? My current bet is on Visual Basic.
Any sort of help is greatly appreciated.

There are some famous projects using the Wii remote by Johnny Chung Lee.
They use C# and you can download the source.
By and large they are the reverse of what you want - they use the camera - but you should be able to use the source as a starting point and to analyse the data coming from the remote.
NOTE: At the time of writing the Wiimote library linked to is unavailable, but as it's an MSDN site it should be back soon.
Addendum: It looks like this is now available on CodePlex
This also has a link to various applications built on the library. Wii Drum High looks like it just reads the accelerometer.

I have written some software to do some of what you ask. Check out wiiphysics.site88.net.
You will find it very tricky to get any decent results from integrating the acceleration data.
It is written in C#.

One problem is your initial conditions (fine if you start at rest); the other is that by the time you get to displacement you will have accumulated a lot of noise (the acceleration data from a Wiimote is only 8-bit).
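
To make the integration step concrete, here is a minimal sketch (my own, not from the answer) of the usual approach: trapezoid-rule integration of the sampled acceleration. It assumes a[] already holds calibrated acceleration in m/s^2 with gravity subtracted, sampled every dt seconds, and that the object starts at rest - the values and the 100 Hz rate are made up for illustration.

    // Integrate sampled acceleration twice: a -> v -> x (trapezoid rule).
    // a[] is assumed calibrated (m/s^2, gravity removed); dt is the sample period.
    public class Integrate {
        public static void main(String[] args) {
            double dt = 0.01;                        // assumed 100 Hz sampling
            double[] a = {0.0, 0.5, 1.0, 0.5, 0.0};  // made-up accelerometer samples
            double v = 0.0, x = 0.0;                 // initial conditions: at rest
            for (int i = 1; i < a.length; i++) {
                v += 0.5 * (a[i - 1] + a[i]) * dt;   // acceleration -> velocity
                x += v * dt;                         // velocity -> displacement
                System.out.printf("t=%.2f v=%.4f x=%.6f%n", i * dt, v, x);
            }
        }
    }

Note how every quantization error in a is summed once into v and twice into x - that is why 8-bit data drifts so badly by the time you reach displacement.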

Related

Is it possible to detect several parts/joints of human body using Project Tango?

Is there any way to detect human body parts and joints like the shoulder, wrist, neck, thigh joints, etc. using Project Tango?
There's no way to do this with the Tango API alone. But Tango provides enough data to process, and it's possible to leverage a third-party library to do this, potentially OpenNI. Again, there's no example or tutorial on how to do that right now, so you might need to explore that frontier a little bit.

How do I get started capturing and saving Project Tango point cloud data as a mesh?

I am in a similar situation as caspertm was when asking this question: How do I export Point Cloud Data (Project Tango)?
I apologize that I cannot comment on other questions yet, or I would have just done so on that question. I too was looking for the functionality the mapper app provided (specifically the capturing and saving of 3D environments) and have found, through searching and reading that question, that it is not available for the tablet. The answer provided to caspertm's question was to use the point cloud data sample code as a starting point and modify it to log the data to a file.
I am wondering if anyone would be willing to go into more detail about what needs to be modified in the point cloud sample (I am using the Java version) to save that data and retrieve it later on my computer, so I can manipulate it in a program like Blender or Unity.
I am very new to the Android development process. I can read the sample point cloud Java code and get a very basic understanding of what is going on, but I definitely have a lot of learning to do. I realize I am asking for a lot of help and don't expect any one person (or even several) to paint me the entire picture, but tips on things like the following would be greatly appreciated: whether this data should be saved internally or externally, which Java file requires the saving code, how to format the file to be readable by other 3D programs, and how to see more than just the current snapshot of the point cloud. If anyone could point me in the right direction on how to get the actual environment colors projected onto the cloud data, that would be amazing too, but any help or links for any of these requests would be greatly appreciated.
Thanks so much!
This answer addresses only the computational geometry aspects - the issues involved in getting the point cloud, phoning home with it, stuffing it in a file, etc. are considered 'self-evident' in order to more quickly go play with the math :-)
The nice, shallow, pretty answer: if you're scanning something where the point cloud represents an object with a fair curvy or straight surface, then the suggestions here will help: https://blender.stackexchange.com/questions/7028/wrapping-a-mesh-around-point-cloud-with-cavities. Please note that 'fair' is a loaded word.
The more detailed answer isn't pretty - and reality will have a way of handing you point clouds that make the preceding algorithms very irritated. If you are looking to take a random cloud of points (yes, I know it's a meaningful cloud of points to you, but mathematicians make much of these details) and reconstruct a geometry from it, i.e. define the topology that relates those points in a meaningful way, you're talking about a very nasty problem. Check the internet for discussions of Delaunay triangulation and Voronoi diagrams, which are the more traditional approaches to solving this issue. Sort of. It's pretty straightforward if you were scanning a model of a volcano. Assuming Tango could see it (I think probably not), scanning the Calder mobile at JFK would give pretty much anyone a drinking problem. The algorithms themselves assume a planar basis, and do not react well to fiddling with that assumption. Explaining this requires talking about manifolds, and reading between the lines of your question, I'm assuming you'd rather not have me go any further.
You should be able to find some open source implementations - if it builds and passes all of its unit tests, then you should be OK using it as a black box. If you have to reach inside, be careful. Those things bite :-)
I think I can partially answer the question:
In terms of saving the points, it should be fairly simple: you could open a file and keep writing the point data into it as the callback is called. However, as the Project Tango developer website mentions, the data provided by the API is just the points, not a mesh. That means after getting the points you will need to figure out your own way to construct indices.
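
As a concrete illustration, here is a hedged sketch of that file-writing idea. The buffer handling follows the old Tango Java samples, where the point cloud callback hands you a FloatBuffer of packed x,y,z triples; the ASCII .xyz format and file path are my own choices (MeshLab imports .xyz point lists directly, and from there you can export something Blender or Unity understands).

    // Append one point cloud (packed x,y,z float triples) to an ASCII .xyz file.
    import java.io.FileWriter;
    import java.io.IOException;
    import java.nio.FloatBuffer;

    public class PointCloudLogger {
        public static void savePoints(FloatBuffer xyz, String path) throws IOException {
            try (FileWriter out = new FileWriter(path, true)) {   // append mode
                while (xyz.remaining() >= 3) {
                    // Java evaluates left to right, so this writes x, y, z in order
                    out.write(xyz.get() + " " + xyz.get() + " " + xyz.get() + "\n");
                }
            }
        }
    }

On the device you would call this from the point cloud callback with a path on external storage, then pull the file to your computer with adb.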

Advice for interfacing strain gauges to PC

I'm using an Arduino to excite and amplify strain gauges on a rod - the resulting voltage will be picked up by the analog inputs available on the Arduino. I need to plot the 'torque' taken by that rod with respect to time on a graph, and the easiest way I see to do this is using the Processing language, as the basic Arduino environment does not provide for graphical display.
Any tips on where to start? I only have prior experience with MATLAB, and a bit with Java.
EDIT: I should add a specific question - how do I assign a variable in Processing to the physical values read on the Arduino (varying voltage through an analog input)?
Thanks.
Since you have experience with MATLAB, consider using the ArduinoIO API provided by The MathWorks. It basically lets you interface your Arduino with MATLAB - all the pin I/O features are available. So let MATLAB do the plotting and other work for you, and just use your Arduino to collect your data.
I can personally vouch for how useful this API is. It's powering my master's thesis (building Arduino-powered vehicles and doing control on them).
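
If you'd rather stay in Processing, here is a minimal sketch answering the EDIT question. It assumes the Arduino side does nothing more than Serial.println(analogRead(A0)) in its loop; the port name and baud rate are guesses you will need to adjust. Processing is Java-based, so your Java experience carries over.

    // Minimal Processing sketch: read one analog value per serial line.
    // Assumes the Arduino runs roughly: Serial.println(analogRead(A0)); delay(10);
    import processing.serial.*;

    Serial port;          // serial link to the Arduino
    float voltage = 0;    // latest reading, scaled to volts

    void setup() {
      size(400, 200);
      // "COM3" and 9600 baud are assumptions; print Serial.list() to find yours
      port = new Serial(this, "COM3", 9600);
      port.bufferUntil('\n');        // fire serialEvent once per full line
    }

    void serialEvent(Serial p) {
      String line = trim(p.readStringUntil('\n'));
      if (line != null && line.length() > 0) {
        int raw = int(line);         // 0..1023 from the Arduino's 10-bit ADC
        voltage = raw * 5.0 / 1023;  // rescale to 0..5 V
      }
    }

    void draw() {
      background(0);
      text("V = " + nf(voltage, 1, 3), 20, 100);
    }

From there, drawing the voltage (or torque, after your calibration) against time is just a matter of plotting successive readings in draw().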

3D modeling for data structures

I'm looking for 3D modeling/animation software. Honestly, I don't know if this is something achievable - but what I want to have is some kind of visual representation of various ideas.
Speaking in the future tense: if I were to read about the boot process of an OS, I would visualize the various data structures building up, and I could step through the process with a sliding bar or so. If I were to think about a complex data structure, I would have a 3D representation of the various links and relations between its parts. Another example would be a Git repository at work - how commits/trees/blobs are linked in space, and how they progress as time passes. And all of these would be interactive.
The reason I want to do this is that it'd be very easy to explain the process - not just to others, but also to myself. I could revisit my model, and it'd be a quick brush-up.
I'm sure there is no ready-to-use software for this. What I could think of was Flash with ActionScript, Blender 3D (Python scripting?), or Synfig. Whichever it is, I'll have to learn it from scratch, and I'm looking for suggestions as to which one (even if not on my list) is the right one to choose.
Thanks
I've used Blender, but it requires a large upfront investment of time, especially to learn the UI. Blender is all about the hotkeys. Once you have them memorized, it's great. But getting there takes a while.
Alice might be worth a look. It looks easy to use and supports scripting.
There are many tools available for 3D modeling. I'm a fan of 3D Studio Max, but there are also Blender, Maya, and trueSpace.
You may want to take a look at the field of visualization to help with illustrating your message.
I suspect that packages such as 3D Studio Max and Blender are too powerful, in the sense that your relatively simple requirements will force you onto too long a learning path. Try Googling for data structure animations to get an idea of what others have used. Also, head over to Information Aesthetics; they recently featured a tool for visualising commits to and checkouts from repositories, and similar things.
Nearly my favourite is the Lego Designer: very good for 3D block animations, but so far I haven't figured out how to add text to the blocks.

How does a marker-based augmented reality algorithm (like ARToolkit's) work?

For my job I've been using a Java version of ARToolkit (NyARToolkit). So far it has proven good enough for our needs, but my boss is starting to want the framework ported to other platforms such as the web (Flash, etc.) and mobiles. While I suppose I could use other ports, I'm increasingly annoyed by not knowing how the kit works and, beyond that, by some of its limitations. Later I'll also need to extend the kit's abilities to add things like interaction (virtual buttons on cards, etc.), which as far as I've seen aren't supported in NyARToolkit.
So basically, I need to replace ARToolkit with a custom marker detector (and in the case of NyARToolkit, try to get rid of JMF and use a better solution via JNI). However, I don't know how these detectors work. I know about 3D graphics and I've built a nice framework around it, but I need to know how to build the underlying tech :-).
Does anyone know of any sources about how to implement a marker-based augmented reality application from scratch? When searching on Google I only find "applications" of AR, not the underlying algorithms :-/.
'From scratch' is a relative term. Truly doing it from scratch, without using any pre-existing vision code, would be very painful and you wouldn't do a better job of it than the entire computer vision community.
However, if you want to do AR with existing vision code, this is more reasonable. The essential sub-tasks are:
Find the markers in your image or video.
Make sure they are the ones you want.
Figure out how they are oriented relative to the camera.
The first task is keypoint localization. Techniques for this include SIFT keypoint detection, the Harris corner detector, and others. Some of these have open-source implementations - I think OpenCV has the Harris corner detector in the function GoodFeaturesToTrack.
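
As a concrete starting point, here is a hedged sketch using OpenCV's Java bindings (assuming OpenCV 3.x; the image name and the tuning parameters are made up). goodFeaturesToTrack is the function mentioned above, and it has an optional Harris mode.

    // Detect strong corners in a grayscale image with OpenCV's Java bindings.
    import org.opencv.core.Core;
    import org.opencv.core.Mat;
    import org.opencv.core.MatOfPoint;
    import org.opencv.core.Point;
    import org.opencv.imgcodecs.Imgcodecs;
    import org.opencv.imgproc.Imgproc;

    public class CornerDemo {
        public static void main(String[] args) {
            System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
            Mat gray = Imgcodecs.imread("marker.png", Imgcodecs.IMREAD_GRAYSCALE);
            MatOfPoint corners = new MatOfPoint();
            // 100 corners max, 0.01 quality level, 10 px minimum spacing: tuning guesses
            Imgproc.goodFeaturesToTrack(gray, corners, 100, 0.01, 10);
            for (Point p : corners.toArray()) {
                System.out.println("corner at (" + p.x + ", " + p.y + ")");
            }
        }
    }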
The second task is making region descriptors. Techniques for this include SIFT descriptors, HOG descriptors, and many, many others. There should be an open-source implementation of one of these somewhere.
The third task is also done by keypoint localizers. Ideally you want an affine transformation, since this will tell you how the marker is sitting in 3-space. The Harris affine detector should work for this. For more details go here: http://en.wikipedia.org/wiki/Harris_affine_region_detector
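
For the flat markers ARToolkit uses, one common shortcut for this third step - a different technique from the Harris affine detector named above, swapped in here because it is simpler for planar targets - is to estimate a homography from the four detected marker corners. A hedged sketch with OpenCV's Java bindings:

    // Map a unit-square marker to its four detected image corners with a
    // homography (a 3x3 planar transform).
    import org.opencv.calib3d.Calib3d;
    import org.opencv.core.Mat;
    import org.opencv.core.MatOfPoint2f;
    import org.opencv.core.Point;

    public class MarkerPose {
        // imageCorners: the marker's corners found in the camera frame,
        // listed in the same order as the unit square below
        public static Mat markerHomography(Point[] imageCorners) {
            MatOfPoint2f src = new MatOfPoint2f(
                new Point(0, 0), new Point(1, 0),
                new Point(1, 1), new Point(0, 1));
            MatOfPoint2f dst = new MatOfPoint2f(imageCorners);
            return Calib3d.findHomography(src, dst);
        }
    }

Decomposing that homography with the camera intrinsics yields the marker's rotation and translation, which is what the rendering side of an AR framework needs.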
