Currently I am using an accelerometer, gyroscope, and magnetometer for a motion tracking application. I have 9-DOF sensor fusion functionality to calculate the orientation, and gravity cancellation of the accelerometer data. How do I now calculate the position of the object in three dimensions? Kindly suggest an algorithm that could give good accuracy.
An Extended Kalman filter can give you the best results for motion tracking if you are working on a real-time application. I would suggest referring to the book Multi-Sensor Data Fusion with MATLAB (CRC Press).
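As a starting point, the textbook way to get position once gravity is cancelled is to rotate the linear acceleration into the world frame using the fused orientation and integrate twice. The sketch below is only a minimal illustration (the function and variable names are mine, not from any library); pure double integration drifts badly within seconds because of accelerometer bias and noise, which is exactly why an EKF with extra measurements, zero-velocity updates, or an external position reference is needed for usable accuracy.

```python
import numpy as np

def integrate_position(lin_acc_body, quats, dt):
    """Naive dead reckoning: rotate gravity-free acceleration into the
    world frame (using the fused orientation) and integrate twice.
    lin_acc_body: (N, 3) linear acceleration in the sensor frame [m/s^2]
    quats:        (N, 4) orientation quaternions (w, x, y, z), body -> world
    dt:           sample period [s]
    """
    vel = np.zeros(3)
    pos = np.zeros(3)
    positions = []
    for a_body, (w, x, y, z) in zip(lin_acc_body, quats):
        # Rotation matrix from the unit quaternion (w, x, y, z), body -> world.
        R = np.array([
            [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
            [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
            [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
        ])
        a_world = R @ a_body
        vel += a_world * dt   # first integration: velocity
        pos += vel * dt       # second integration: position
        positions.append(pos.copy())
    return np.array(positions)
```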
Related
I'm working on a project that requires rotational data (yaw, pitch, roll) from a few different sensors to be combined in code.
The first sensor gives me good angles, but it has a very bad drift problem.
The second sensor has very good angles with minimal drift, but only covers a -90 to 90 degree range of motion.
My question is: how can I combine the data from these two sensors so that I have minimal drift and a full 360° range?
I can provide sample data if needed.
Thanks in advance!
I have two separate point clouds (type sensor_msgs/PointCloud2) from two different sensors, a 3D stereo camera and a 2D LiDAR. How can I fuse these two point clouds if the stereo point cloud is 3D with a fixed length and the 2D LiDAR point cloud has a variable length?
If someone has worked on this, please help; it will be highly appreciated.
Thanks
I studied this in my research.
The first step is to calibrate the two sensors to know their extrinsics. There are a few open-source packages you can play with, which I have listed below.
The second step is to fuse the data. The simple way is to apply the calibration transform and publish it with tf. The complicated way is to deploy pipelines such as depth-image-to-LiDAR alignment and depth map variance estimation and fusion. You can choose to do it the easy way, such as an easier landmark-based EKF estimation, or you can follow Ji Zhang's (CMU) visual-LiDAR-inertial fusion work for direct 3D-feature-to-LiDAR alignment. The choice is yours.
(1) http://wiki.ros.org/velo2cam_calibration
Guindel, C., Beltrán, J., Martín, D. and García, F. (2017). Automatic Extrinsic Calibration for Lidar-Stereo Vehicle Sensor Setups. IEEE International Conference on Intelligent Transportation Systems (ITSC), 674–679.
Pros: pretty accurate and an easy-to-use package. Cons: you have to make a rigid calibration board with cutouts.
(2) https://github.com/ankitdhall/lidar_camera_calibration
LiDAR-Camera Calibration using 3D-3D Point correspondences, arXiv 2017
Pros: easy to use and easy to make the hardware. Cons: may not be very accurate.
There were a couple of others I listed in my thesis; I'll go back, check, and update here if I remember.
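To make the "simple way" above concrete: once the extrinsic calibration is known, you can publish it as a static transform and let tf express both clouds in one frame. Below is a rough ROS 1 Python sketch; the frame names and the numeric transform are placeholders to be replaced with your own calibration result.

```python
#!/usr/bin/env python
# Publish the LiDAR->camera extrinsic as a static transform so that any
# node (or tf2 itself) can express both point clouds in a common frame.
import rospy
import tf2_ros
from geometry_msgs.msg import TransformStamped

rospy.init_node("lidar_camera_static_tf")

t = TransformStamped()
t.header.stamp = rospy.Time.now()
t.header.frame_id = "stereo_camera_link"   # placeholder parent frame
t.child_frame_id = "laser_link"            # placeholder child frame

# Placeholder extrinsics: replace with the values from the calibration tool.
t.transform.translation.x = 0.10
t.transform.translation.y = 0.00
t.transform.translation.z = -0.05
t.transform.rotation.x = 0.0
t.transform.rotation.y = 0.0
t.transform.rotation.z = 0.0
t.transform.rotation.w = 1.0

broadcaster = tf2_ros.StaticTransformBroadcaster()
broadcaster.sendTransform(t)
rospy.spin()
```

With this transform available, a node can bring the LiDAR cloud into the camera frame (for example with do_transform_cloud from tf2_sensor_msgs in ROS 1) and then concatenate or associate the points, regardless of how many points each cloud carries.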
We are using a MEMS tri-axial sensor which has an accelerometer, magnetometer, and gyroscope. We have also done accelerometer and magnetometer calibration. This sensor is used in a borehole application. We have calculated the deviation of the borehole using the accelerometer. Now we are stuck on calculating a direction that is free from rotation (i.e., even if the sensor rotates, the direction should not change). Is it possible to calculate a rotation-free direction using the above three (accelerometer, magnetometer, gyroscope) readings? If yes, please let me know.
Thanks,
S.Naseer
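One common approach to a direction that does not change when the tool rotates about its own axis is tilt compensation: take roll and pitch from the calibrated accelerometer, de-rotate the magnetometer into the horizontal plane, and read the azimuth from the horizontal field components. The sketch below assumes NED-style body axes (x forward, y right, z down) with the accelerometer reporting +1 g on the downward axis when level; axis and sign conventions differ between devices, so verify against known headings, and add the local magnetic declination if true north is needed.

```python
import math

def tilt_compensated_heading(ax, ay, az, mx, my, mz):
    """Azimuth (heading) in degrees from calibrated accelerometer and
    magnetometer readings, independent of rotation about the tool axis.
    Assumes x forward, y right, z down body axes; other conventions
    need sign changes."""
    # Roll and pitch from the gravity vector measured by the accelerometer.
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, math.sqrt(ay * ay + az * az))

    # Rotate the magnetic field vector back into the horizontal plane.
    mx_h = (mx * math.cos(pitch)
            + my * math.sin(pitch) * math.sin(roll)
            + mz * math.sin(pitch) * math.cos(roll))
    my_h = my * math.cos(roll) - mz * math.sin(roll)

    # Heading is the angle of the horizontal field relative to magnetic north.
    heading = math.degrees(math.atan2(-my_h, mx_h))
    return heading % 360.0
```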
I want to develop an app for gesture recognition using Kinect and hidden Markov models. I watched a tutorial here: HMM lecture
But I don't know how to start. What is the state set and how to normalize the data to be able to realize HMM learning? I know (more or less) how it should be done for signals and for simple "left-to-right" cases, but 3D space makes me a little confused. Could anyone describe how it should be begun?
Could anyone describe the steps for how to do this? In particular, I need to know how to build the model and what the steps of the HMM algorithm should be.
One set of methods for applying HMMs to gesture recognition would be to use an architecture similar to the one commonly used for speech recognition.
The HMM would not be over space but over time, and each video frame (or set of extracted features from the frame) would be an emission from an HMM state.
Unfortunately, HMM-based speech recognition is a rather large area. Many books and theses have been written describing different architectures. I recommend starting with Jelinek's "Statistical Methods for Speech Recognition" (http://books.google.ca/books?id=1C9dzcJTWowC&pg=PR5#v=onepage&q&f=false) then following the references from there. Another resource is the CMU sphinx webpage (http://cmusphinx.sourceforge.net).
Another thing to keep in mind is that HMM-based systems are probably less accurate than discriminative approaches like conditional random fields or max-margin recognizers (e.g. SVM-struct).
For an HMM-based recognizer the overall training process is usually something like the following:
1) Perform some sort of signal processing on the raw data
For speech this would involve converting raw audio into mel-cepstrum format, while for gestures, this might involve extracting image features (SIFT, GIST, etc.)
2) Apply vector quantization (VQ) (other dimensionality reduction techniques can also be used) to the processed data
Each cluster centroid is usually associated with a basic unit of the task. In speech recognition, for instance, each centroid could be associated with a phoneme. For a gesture recognition task, each VQ centroid could be associated with a pose or hand configuration.
3) Manually construct HMMs whose state transitions capture the sequence of different poses within a gesture.
Emission distributions of these HMM states will be centered on the VQ vector from step 2.
In speech recognition these HMMs are built from phoneme dictionaries that give the sequence of phonemes for each word.
4) Construct a single HMM that contains transitions between each individual gesture HMM (or, in the case of speech recognition, each phoneme HMM). Then, train the composite HMM with videos of gestures (a sketch of steps 2-4 follows this list).
It is also possible at this point to train each gesture HMM individually before the joint training step. This additional training step may result in better recognizers.
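As a rough illustration of steps 2-4, the sketch below uses scikit-learn's KMeans for the vector quantization and hmmlearn for discrete-emission HMMs, training one HMM per gesture and leaving out the composite model for brevity. The codebook size, state count, and helper names are assumptions, and recent hmmlearn versions call the discrete-emission model CategoricalHMM (older ones used MultinomialHMM).

```python
import numpy as np
from sklearn.cluster import KMeans
from hmmlearn import hmm

N_CODEWORDS = 32   # size of the VQ codebook (assumed; tune for your data)
N_STATES = 5       # HMM states per gesture, roughly "poses per gesture"

def build_codebook(all_frames):
    """Step 2: vector quantization. all_frames is (n_frames, n_features)."""
    return KMeans(n_clusters=N_CODEWORDS, n_init=10).fit(all_frames)

def train_gesture_hmms(codebook, sequences_by_gesture):
    """Steps 3-4 (simplified): one discrete-emission HMM per gesture label.
    sequences_by_gesture maps a label to a list of (n_frames, n_features)
    arrays, one array per training video."""
    models = {}
    for label, sequences in sequences_by_gesture.items():
        # Quantize every frame to the index of its nearest codeword.
        symbol_seqs = [codebook.predict(seq) for seq in sequences]
        X = np.concatenate(symbol_seqs).reshape(-1, 1)
        lengths = [len(s) for s in symbol_seqs]
        model = hmm.CategoricalHMM(n_components=N_STATES, n_iter=50)
        model.fit(X, lengths)   # Baum-Welch training on the symbol sequences
        models[label] = model
    return models
```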
For the recognition process, apply the signal processing step, find the nearest VQ entry for each frame, then find a high scoring path through the HMM (either the Viterbi path, or one of a set of paths from an A* search) given the quantized vectors. This path gives the predicted gestures in the video.
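Continuing the sketch above, recognition can be approximated by quantizing the frames of a new video with the same codebook and scoring them under each per-gesture HMM; hmmlearn's decode returns the Viterbi state path if the internal pose sequence is also of interest.

```python
def recognize(codebook, models, frames):
    """Quantize a new (n_frames, n_features) video and pick the gesture
    HMM with the highest log-likelihood."""
    symbols = codebook.predict(frames).reshape(-1, 1)
    scores = {label: m.score(symbols) for label, m in models.items()}
    best = max(scores, key=scores.get)
    logprob, state_path = models[best].decode(symbols)   # Viterbi path
    return best, scores[best], state_path
```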
I implemented the 2D version of this for the Coursera PGM class, which has Kinect gestures as the final unit.
https://www.coursera.org/course/pgm
Basically, the idea is that you can't use an HMM to decide poses very well on its own. In our unit, I used some variation of K-means to segment the poses into probabilistic categories. The HMM was then used to decide which sequences of poses were actually viable as gestures. But any clustering algorithm run on a set of poses is a good candidate, even if you don't know what kind of pose they are or something similar.
From there you can create a model which trains on the aggregate probabilities of each possible pose for each point of Kinect data.
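If you want soft ("probabilistic") pose assignments rather than hard K-means labels, one option, not necessarily what was used in the class, is a Gaussian mixture, which yields a probability for each pose cluster for every frame. A brief sketch with an assumed cluster count and a placeholder input file:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# poses: (n_frames, n_features) array of skeleton/joint features per frame.
poses = np.load("pose_features.npy")               # placeholder input file

gmm = GaussianMixture(n_components=8).fit(poses)   # 8 pose clusters (assumed)
pose_probs = gmm.predict_proba(poses)              # (n_frames, 8) soft assignments
# These per-frame pose probabilities are what the HMM (or another sequence
# model) can then consume to judge which pose sequences form a viable gesture.
```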
I know this is a bit of a sparse answer. That class gives an excellent overview of the state of the art, but the problem in general is a bit too difficult to be condensed into an easy answer. (I'd recommend taking it in April if you're interested in this field.)
According to the reference,
The deviceMotion property is only available on devices having both an accelerometer and a gyroscope. This is because its sub-properties are the result of a sensor fusion algorithm i.e. both signals are evaluated together in order to decrease the estimation errors.
My question is: what is the internal implementation, or algorithm, that CMMotionManager uses to do the calculation? I want some detail about this so-called "sensor fusion algorithm".
Popular fusion algorithms include, for instance, the Kalman filter and its derivatives, but I guess CMMotionManager's internal implementation is based on simpler and thus faster algorithms. I expect some simple but good-enough math applied to the sensor data from the accelerometer and gyroscope to finally calculate roll, yaw, and pitch.
It is not clear what is actually implemented in Core Motion.
As for filters other than the Kalman filter: I have implemented sensor fusion for Shimmer 2 devices based on this manuscript.
You may also find this answer on Complementary Filters helpful; see especially filter.pdf.
I would not use roll, pitch and yaw for two reasons: (1) it messes up the stability of your app and (2) you cannot use it for interpolation.
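For what it's worth, the simplest of the "simpler and thus faster" candidates mentioned above is a complementary filter. The one-angle sketch below only illustrates the blending idea (the 0.98 coefficient is a typical but arbitrary choice); as noted above, a production implementation such as Core Motion's presumably works with quaternions rather than individual Euler angles.

```python
import math

ALPHA = 0.98   # typical but arbitrary blend factor: trust the gyro short-term

def complementary_filter_step(angle, gyro_rate, ax, ay, az, dt):
    """One update of a single-axis complementary filter.
    angle:     current pitch estimate [rad]
    gyro_rate: angular rate around the pitch axis [rad/s]
    ax,ay,az:  accelerometer reading [any consistent unit]
    dt:        time step [s]
    """
    # The accelerometer gives an absolute but noisy pitch reference.
    accel_pitch = math.atan2(-ax, math.sqrt(ay * ay + az * az))
    # Gyro integration is smooth but drifts; blend the two estimates.
    return ALPHA * (angle + gyro_rate * dt) + (1.0 - ALPHA) * accel_pitch
```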