How does Sphero calculate velocity? - sphero-api

Sphero, a product of Orbotix, has the ability to calculate its velocity. How does it do this? Is it via rotations of the orb, or does it have another internal component which can derive its velocity?
NOTE: it can report the velocity separated into x and y vector components.

It uses encoders on its internal drive wheels, combined with estimation algorithms in the control system.
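For illustration only (this is not Sphero's firmware; the wheel radius, encoder resolution and heading convention below are made-up values), a minimal Java sketch of the general idea - integrate encoder ticks into a linear speed over a sampling interval, then split it into x and y components using the current heading:

```java
// A minimal sketch of the idea, not Sphero's actual implementation.
// Tick count, wheel radius and ticks-per-revolution are illustrative values.
public class OdometryVelocity {

    static final double WHEEL_RADIUS_M = 0.015;   // hypothetical internal drive wheel
    static final int TICKS_PER_REV = 360;         // hypothetical encoder resolution

    /** Returns {vx, vy} in m/s from encoder ticks counted over dtSeconds,
     *  given the heading in radians (0 = +x axis). */
    static double[] velocityFromTicks(int ticks, double dtSeconds, double headingRad) {
        double revolutions = (double) ticks / TICKS_PER_REV;
        double distance = revolutions * 2.0 * Math.PI * WHEEL_RADIUS_M;
        double speed = distance / dtSeconds;
        return new double[] { speed * Math.cos(headingRad), speed * Math.sin(headingRad) };
    }
}
```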

Related

Kalman filter with acceleration. State or Control vector?

I have a basic understanding question in Kalman filter which I haven't found an answer yet.
Assume I want to implement a Kalman filter with constant-acceleration dynamics.
I can either add the acceleration to the state vector and the F matrix, i.e. Xt = X(t-1) + V(t-1)*Δt + 0.5*a*Δt^2,
OR I can add the acceleration to the U control vector.
What is the profound difference between these two methods, and given that I have acceleration measurements, what is the best policy?
You can find both approaches on Google.
All the best,
Roi
U refers to the input to the system in the standard control-theory state-space representation (for more information see here), and therefore what you do depends on the context of your specific problem.
It sounds like you are trying to estimate the position of a target moving with constant acceleration. This means that the target's position has a defined model of motion, and this model of motion will be encoded in F. Think of a bicyclist looking at the speedometer while rolling down a hill and not pedaling at all (the bicyclist provides no input to the system). If the acceleration is not constant (the hill has a varying slope) and you have access to real-time measurements of the acceleration, then you can modify your system to estimate acceleration as well. If the acceleration is unknown, then see the Stack Overflow post linked here.
If you were tracking the target while having direct control of the acceleration, then you would put it in U. Think of YOU being the bicyclist, looking at the speedometer (estimating the speed) and deciding whether you should pedal faster or slower.
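For concreteness, here is a sketch of the two formulations in one dimension, using the textbook constant-acceleration matrices (Δt is the sampling interval; nothing below is specific to the asker's system):

```latex
% (a) Acceleration as part of the state vector -- the filter estimates it:
%     x_t = [p_t, v_t, a_t]^T,   x_t = F x_{t-1} + w_t
\[
F =
\begin{bmatrix}
1 & \Delta t & \tfrac{1}{2}\Delta t^{2}\\
0 & 1        & \Delta t\\
0 & 0        & 1
\end{bmatrix}
\]
% (b) Acceleration as a known control input u_t = a_t -- the filter does not estimate it:
%     x_t = [p_t, v_t]^T,   x_t = F x_{t-1} + B u_t + w_t
\[
F =
\begin{bmatrix}
1 & \Delta t\\
0 & 1
\end{bmatrix},
\qquad
B =
\begin{bmatrix}
\tfrac{1}{2}\Delta t^{2}\\
\Delta t
\end{bmatrix}
\]
```

In (a) the filter estimates the acceleration as part of the state; in (b) the measured acceleration is treated as a known input and the filter only estimates position and velocity.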

DJI Mobile SDK - how is distance between adjacent waypoints calculated?

When using the DJI Mobile SDK to upload waypoint missions, the upload is rejected if two adjacent waypoints are determined by the DJI to be too close (within 0.5 meters).
Does anyone know the algorithm used to determine the distance between adjacent waypoints in a waypoint mission?
Specifically, is the DJI algorithm using a haversine calculation for the distance between lat/lon coordinates, and if so, what earth radius is used? Is it the IUGG mean radius of 6371008.8 meters, or some other radius?
Or does it use the ellipsoidal Vincenty formula (WGS-84)?
This information would be useful for more precise waypoint decimation prior to mission upload.
First off, I would comment that it is very unlikely DJI will answer an internal implementation question, since doing so would expose them to having to support that implementation over time and across aircraft. Different aircraft and different technologies may result in varying implementations.
What has always worked for me is to use standard "distance between points" calculations, either the common map formulas or the ones built into the platform SDKs (iOS, Android, etc.). I have found these to be sufficiently accurate to plan even complex flights.
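For reference, a minimal Java sketch of such a standard calculation - a plain haversine with the IUGG mean radius. This is only an illustration for pre-flight waypoint spacing, not DJI's internal computation (which is exactly what the question asks about), and the 0.5 m threshold is just the limit quoted above:

```java
// Illustrative only: plain haversine great-circle distance, IUGG mean radius.
public final class GeoDistance {

    private static final double EARTH_RADIUS_M = 6371008.8; // IUGG mean radius

    /** Great-circle distance in meters between two lat/lon points (degrees). */
    public static double haversineMeters(double lat1, double lon1, double lat2, double lon2) {
        double phi1 = Math.toRadians(lat1);
        double phi2 = Math.toRadians(lat2);
        double dPhi = Math.toRadians(lat2 - lat1);
        double dLambda = Math.toRadians(lon2 - lon1);

        double a = Math.sin(dPhi / 2) * Math.sin(dPhi / 2)
                 + Math.cos(phi1) * Math.cos(phi2)
                 * Math.sin(dLambda / 2) * Math.sin(dLambda / 2);
        return 2 * EARTH_RADIUS_M * Math.asin(Math.sqrt(a));
    }

    /** Example check: flag waypoints closer than the 0.5 m limit quoted above. */
    public static boolean tooClose(double lat1, double lon1, double lat2, double lon2) {
        return haversineMeters(lat1, lon1, lat2, lon2) < 0.5;
    }
}
```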
Based on several tests, I can now confirm that the current DJI internal distance computation depends on latitude and/or longitude, meaning that you will get different results for the same (!) waypoint distance pair depending on where the two points are anchored.
A waypoint pair 1.5 meters apart was accepted as a mission at a location in central Europe but was rejected with WAYPOINT_DISTANCE_TOO_CLOSE at a location in the central US.
(We verified with https://gps-coordinates.org/distance-between-coordinates.php that both waypoint distance pairs had the same 1.5 meter distance between them.)
So it's safe to assume that DJI has a bug in their distance calculation.

Data structure for circular sector in robot vision

I'm trying to build a model of a 360-degree view of the surrounding environment from a continuously rotating distance sensor (radar). I need a data structure that supports a quickly computable strategy to steer the robot toward the first obstacle-free direction (or the direction where the obstacle is farthest away).
I thought of an array of 360 numerical elements in which each element represents the detected distance at that degree of the circumference.
Do you know a name for this data structure (used in this way)?
Are there better representations for the situation I described?
The main language for the controller is Java.
It sounds to me like you are aware that your range data is effectively in polar coordinates.
The unique aspect of working with such 360° data is its circular, "wrap-around" nature.
Many people end up writing their own custom implementation around this data. There is lots of theory in the robotics literature built on it for smoothing, segmenting, finding features, etc. (for example: "Line Extraction in 2D Range Images for Mobile Robotics").
Practically speaking, you might then want to consider checking out some robotics libraries, something like ARIA. Another very good place to start is to use WeBots to emulate/model things - including range data - before transferring to a physical robotics platform.
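To make the wrap-around point concrete, here is a small Java sketch of the 360-element polar range array from the question; the class and method names and the clearance threshold are illustrative, not taken from ARIA, WeBots or any other library:

```java
// Sketch of a circular range buffer: index = bearing in degrees, value = distance.
public class RangeScan {

    private final double[] rangeMeters = new double[360];

    /** Wrap-around accessor, so rangeAt(-10) == rangeAt(350). */
    public double rangeAt(int bearingDeg) {
        return rangeMeters[Math.floorMod(bearingDeg, 360)];
    }

    public void update(int bearingDeg, double distance) {
        rangeMeters[Math.floorMod(bearingDeg, 360)] = distance;
    }

    /** Returns the bearing closest to the current heading whose measured range
     *  exceeds clearanceMeters, or -1 if every direction is blocked. */
    public int firstClearBearing(int headingDeg, double clearanceMeters) {
        for (int offset = 0; offset <= 180; offset++) {
            if (rangeAt(headingDeg + offset) > clearanceMeters) {
                return Math.floorMod(headingDeg + offset, 360);
            }
            if (rangeAt(headingDeg - offset) > clearanceMeters) {
                return Math.floorMod(headingDeg - offset, 360);
            }
        }
        return -1;
    }
}
```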

How can I get a random distribution that "clusters" objects?

I'm working on a game, and I want to place some objects randomly throughout the world. However, I want the objects to be "clustered" in clumps. Is there any random distribution that clusters like this? Or is there some other technique I could use?
Consider using a bivariate normal (a.k.a. Gaussian) distribution. Generate separate normal values for the X and Y locations. Bivariate normals are denser towards the center and sparser farther out, so your choice of standard deviation determines how tight the clustering is - along each axis, about 2/3 of the items will fall within 1 standard deviation of the distribution's center, 95% within 2 standard deviations, and almost all within 3 standard deviations.
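A short Java sketch of that suggestion, assuming a hypothetical 2-D game world; the cluster count, world size and sigma are arbitrary example values, and java.util.Random.nextGaussian() supplies the per-axis normal samples:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Sketch: pick uniform cluster centers, then scatter objects around each center
// with Gaussian offsets. Smaller sigma produces tighter clumps.
public class ClusteredPlacement {

    public static List<double[]> place(int clusters, int objectsPerCluster,
                                       double worldSize, double sigma, long seed) {
        Random rng = new Random(seed);
        List<double[]> points = new ArrayList<>();
        for (int c = 0; c < clusters; c++) {
            double cx = rng.nextDouble() * worldSize;   // uniform cluster center
            double cy = rng.nextDouble() * worldSize;
            for (int i = 0; i < objectsPerCluster; i++) {
                double x = cx + rng.nextGaussian() * sigma;  // independent normal
                double y = cy + rng.nextGaussian() * sigma;  // offsets for X and Y
                points.add(new double[] { x, y });
            }
        }
        return points;
    }

    public static void main(String[] args) {
        List<double[]> objects = place(5, 40, 1000.0, 15.0, 42L);
        System.out.println("placed " + objects.size() + " objects");
    }
}
```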

medial axis transform implementation

How do I implement the Medial Axis Transform algorithm to transform the first image into the second?
(two images, the original shape and its medial axis: source algorith at www.cs.sunysb.edu)
What library in C++/C# has support for the Medial Axis Transform?
There are many implementations of the medial axis transform on the Internet (personally I don't use the OpenCV library, but I'm sure it has a decent implementation). However, you could easily implement it yourself.
In order to perform medial axis transform, we need to define only one term: simple point.
A point P is a simple point iff removing P doesn't affect the number of connected components of either the foreground or the background. So you have to decide the connectivity (4 or 8) for the background and for the foreground - for this to work, pick a different one for each (if you are interested in why, look up the Jordan property on Google).
The medial axis transform can be implemented by sequentially deleting simple points. You get the final skeleton when there are no more simple points. You get the curved skeleton (I don't know the English name for it, which is rare - please correct me) if you stop when every remaining point is either an endpoint OR a non-simple point. You provided examples of the latter in your question.
Finding simple points can be easily implemented with morphological operators or a look-up table. Hint: a point is a simple point iff the number of connected components in the background is 1 and the number of connected components in the foreground is 1 in its 3x3 local window.
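As a rough Java illustration of that hint, taken literally (count components by flood fill over the eight neighbours, 8-connectivity for the foreground, 4-connectivity for the background): a production thinning routine would typically use the precise topological-number test or a precomputed lookup table, and this simplified check can disagree with it on some corner configurations.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of the "simple point" check from the hint above.
// Grid values: true = foreground, false = background.
// Assumes (x, y) is not on the image border.
public class SimplePointCheck {

    private static final int[][] RING = {      // the 8 neighbours, clockwise
        {-1, -1}, {0, -1}, {1, -1}, {1, 0}, {1, 1}, {0, 1}, {-1, 1}, {-1, 0}
    };

    /** Counts connected components among the 8 neighbours of (x, y) that have
     *  the given value, using 8- or 4-connectivity between neighbour cells. */
    static int components(boolean[][] img, int x, int y, boolean value, boolean eightConnected) {
        boolean[] cell = new boolean[8];
        for (int i = 0; i < 8; i++) {
            cell[i] = img[y + RING[i][1]][x + RING[i][0]] == value;
        }
        boolean[] seen = new boolean[8];
        int count = 0;
        for (int i = 0; i < 8; i++) {
            if (!cell[i] || seen[i]) continue;
            count++;
            Deque<Integer> stack = new ArrayDeque<>();   // flood fill over the ring
            stack.push(i);
            seen[i] = true;
            while (!stack.isEmpty()) {
                int c = stack.pop();
                for (int j = 0; j < 8; j++) {
                    if (cell[j] && !seen[j] && adjacent(c, j, eightConnected)) {
                        seen[j] = true;
                        stack.push(j);
                    }
                }
            }
        }
        return count;
    }

    /** True if ring positions a and b are adjacent under the chosen connectivity. */
    static boolean adjacent(int a, int b, boolean eightConnected) {
        int dx = RING[a][0] - RING[b][0], dy = RING[a][1] - RING[b][1];
        if (eightConnected) return Math.abs(dx) <= 1 && Math.abs(dy) <= 1;
        return Math.abs(dx) + Math.abs(dy) == 1;
    }

    /** A foreground pixel is treated as simple if its 3x3 window contains exactly
     *  one 8-connected foreground component and one 4-connected background component. */
    static boolean isSimple(boolean[][] img, int x, int y) {
        return components(img, x, y, true, true) == 1
            && components(img, x, y, false, false) == 1;
    }
}
```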
There is a medial axis transform available in this C library: http://www.pinkhq.com/
There are lots of other related functionalities.
Check out this function: http://www.pinkhq.com/medialaxis_8c.html
