Using DJIWaypointMissionHeadingTowardPointOfInterest as the heading mode in a DJIWaypointMission, I can automatically rotate the drone to head toward a POI, but is there a way to also automatically tilt the camera to frame the POI?
(Unfortunately, the "pointOfInterest" also has no altitude property.)
Also, I think it would be better to define the heading mode in each DJIWaypoint instead of as a property of the entire DJIWaypointMission. Is that possible?
I'm adding to what Ken said.
When you use 'isGimbalPitchRotationEnabled' you can set a pitch angle in each waypoint. The drone will change the pitch angle in a linear motion between each pair of waypoints.
Of course this will not work when, during the flight between the two points, the drone gets closer to the POI and then backs away.
What I'm doing in my app is dividing the straight line between the two points into several straight sections and calculating the correct pitch at each point. As I divide the original line, I calculate the error (the difference between the calculated pitch and the linearly interpolated pitch). If the error is greater than some value (5 degrees, for example), I divide the line recursively and recalculate the pitch, until the error at each point is small enough. It takes some geometric calculation when preparing the mission, but it produces amazing fly-by shots.
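A minimal Python sketch of that subdivision idea, assuming flat local coordinates in meters and illustrative names (the real mission would turn each sample into an extra waypoint with its gimbalPitch set):

```python
import math

def pitch_to_poi(x, y, alt, poi_x, poi_y, poi_alt):
    """Gimbal pitch in degrees (negative = camera down) that frames the POI."""
    ground_dist = math.hypot(poi_x - x, poi_y - y)
    return math.degrees(math.atan2(poi_alt - alt, ground_dist))

def subdivide(p1, p2, poi, max_error=5.0):
    """Sample points between p1 and p2 (both (x, y, alt)) so that linear
    pitch interpolation never deviates more than max_error degrees from
    the true pitch toward the POI."""
    mid = tuple((a + b) / 2 for a, b in zip(p1, p2))
    true_pitch = pitch_to_poi(*mid, *poi)
    lerp_pitch = (pitch_to_poi(*p1, *poi) + pitch_to_poi(*p2, *poi)) / 2
    if abs(true_pitch - lerp_pitch) <= max_error:
        return [p1, p2]
    # Error too large: split at the midpoint and recurse on both halves.
    return subdivide(p1, mid, poi, max_error)[:-1] + subdivide(mid, p2, poi, max_error)

# Fly past a POI 20 m up, 30 m off a level flight line at 40 m altitude.
poi = (50.0, 30.0, 20.0)
for p in subdivide((0.0, 0.0, 40.0), (100.0, 0.0, 40.0), poi):
    print(p, round(pitch_to_poi(*p, *poi), 1))
```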
Take a look at these settings:
isGimbalPitchRotationEnabled and gimbalPitch
This isn't a 'true' look-at location, though, because the pitch is interpolated evenly over the distance between the waypoints, but it's close.
In my app I manually control the gimbal, but that means a connection must be maintained or the gimbal stops moving.
Yes, it would be nice if the POI in waypoints were true POI functionality; I've made the suggestion many times, but to no avail so far. I'd like to see the POI be per waypoint (as a LocationCoordinate3D) rather than the current single POI for the whole mission.
I'd also like the location in a waypoint to be consistent and of type LocationCoordinate3D.
I was wondering what the basic units of velocity are in Pymunk. If I put in a velocity of (50,50) does that correspond to 50 pixels/second in each direction? The API says that the units of angular velocity are rad/s but doesn't say anything about linear velocity.
Thanks.
Chipmunk avoids enforcing any particular units or scales. If you are using pixels for distance, and seconds for time, then velocity is in pixels per second. If you want to use meters and hours that’s fine too. As long as you drive it with the right input and interpret the output correctly you are good.
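For example (a minimal pymunk sketch): with no gravity, a body given velocity (50, 50) moves 50 distance units per second along each axis, whatever you decide a distance unit means:

```python
import pymunk

space = pymunk.Space()            # default gravity is (0, 0)
body = pymunk.Body(mass=1, moment=10)
body.velocity = (50, 50)          # 50 distance-units per time-unit, per axis
space.add(body)

for _ in range(60):               # simulate one "second" in 60 steps
    space.step(1 / 60.0)

print(body.position)              # ~(50, 50): interpret the units as you chose
```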
I'm making my first game in Game Maker.
In the game I need the user to draw a figure, for example a rectangle, and the game has to recognize the figure. How can I do this?
Thanks!
Well, that is a pretty complex task. To simplify it, you could ask the user to place a succession of points, using the mouse coordinates in the click event, and automatically connect them with lines. If you store every point in the same ds_list structure, you will be able to check conditions on angles, distances, etc., and this way determine the shape. May I ask why you want to do this?
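For instance, the angle condition could look like this Python sketch (illustrative only; in GML you would run the same loop over the ds_list of clicked points):

```python
import math

def interior_angles(points):
    """Angle in degrees at each vertex of the closed polygon through points."""
    angles = []
    n = len(points)
    for i in range(n):
        ax, ay = points[i - 1]           # previous vertex
        bx, by = points[i]               # current vertex
        cx, cy = points[(i + 1) % n]     # next vertex
        v1, v2 = (ax - bx, ay - by), (cx - bx, cy - by)
        dot = v1[0] * v2[0] + v1[1] * v2[1]
        norm = math.hypot(*v1) * math.hypot(*v2)
        angles.append(math.degrees(math.acos(dot / norm)))
    return angles

def looks_like_rectangle(points, tolerance=15):
    """Four clicked points whose corner angles are all roughly 90 degrees."""
    return len(points) == 4 and all(abs(a - 90) < tolerance
                                    for a in interior_angles(points))

print(looks_like_rectangle([(0, 0), (100, 3), (102, 60), (-2, 58)]))  # True
```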
The way I would solve this problem is pretty simple. I would create a variable for each point; when someone clicked on one of the points it would be set to true, and I'd wait for the player to click on the next point. If the player clicked the next point, I would bring in a sprite as a line, using image_angle to line both points up, and wait for the player to click the next point.
Next I would have a step event waiting to see if all points were clicked, and when they were, either draw a triangle at those coordinates or place a sprite at the correct coordinates to fill in the triangle.
Another way you could do it would be to decide in advance where those points are and check mouse_x and mouse_y against them to see whether the player hit one, and if so, proceed as above. There are many ways to solve this problem. Just keep trying; you will find one that works for your skill level and what you want to do.
You need to use the draw_rectangle(x1, y1, x2, y2, outline) function. As for recognition of the figure, use point_in_rectangle(px, py, x1, y1, x2, y2).
I'm just playing around with ideas because I can't code right now. But listen to this; I think it could work.
We suppose that the user must keep their finger on the touchscreen, or else an event is triggered and all data from the touch event is cleared.
I assume that in the future you may need to recognize other simple geometric figures too.
1: Define a fixed amount of movement in pixels, dependent on the viewport dimensions (I'll call this constant MOV from now on). For every MOV of movement, store in a buffer (pointsBuf) the coordinates of the point where the finger is.
2: Every time a point is stored, calculate the running average of the X and Y coordinates over all points (keep the previous average and a counter to reduce the time complexity). By comparing them we can now tell the direction and sense of the line. Store them in a 2D buffer (dirVerBuf).
3: If a point is drastically different from the running average of the X and Y coordinates, we can assume the finger changed direction. This is where tuning MOV becomes critical, because we must now compute an angle. Since only a very shaky hand would draw really distorted lines, we can be reasonably safe taking as the vertex the 2nd point that didn't change the coordinate average, and using, say, the last point and the 2nd point before the vertex to compute the angle. You now have one angle. Test against the user's error margin to decide whether the angle is close to 90, 60, 45, etc. degrees. Store it in a new buffer (angBuf).
4: Delete the values from pointsBuf and repeat steps 2 and 3 until the user's finger leaves the screen.
5: If four of the angles are about 90 degrees, the four senses and two of the directions differ, the last point is reasonably near (depending on MOV) the first stored corner, and the two X-direction lines and the two Y-direction lines are roughly equal in length to each other (though the X and Y pairs may differ), then you can connect the four corners, using the four best values next to the four coordinates, to make a perfect rectangular shape.
It's late and I could have forgotten something, but with this method I think you could even recognize a triangle, a circle, etc., with just some tweaking and comparison.
EDIT: If you'd rather keep it simple, you could instead use a strategy with a much heavier space complexity. Just create a grid of rectangles, or even triangles, of a fixed size and check which cells the finger has touched; connect their centers after you've figured out the shape, obviously ignoring the cells touched by mistake. This would make it extremely easy to handle even circles using the native drawing functions.
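A bare-bones Python sketch of that grid idea (cell size and the mistake filter are illustrative):

```python
CELL = 32  # grid cell size in pixels -- tune it to the viewport

def touched_cells(trace, min_hits=3):
    """Map a finger trace (list of (x, y) pixels) to grid cells, dropping
    cells grazed fewer than min_hits times ('touched by mistake')."""
    hits = {}
    for x, y in trace:
        cell = (int(x) // CELL, int(y) // CELL)
        hits[cell] = hits.get(cell, 0) + 1
    return {c for c, n in hits.items() if n >= min_hits}

def cell_center(cell):
    """Center of a cell; connect these to redraw the cleaned-up shape."""
    return (cell[0] * CELL + CELL / 2, cell[1] * CELL + CELL / 2)
```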
We can retrieve the acceleration data from CMAcceleration.
It provides 3 values, namely x, y and z.
I have been reading up on this and I seem to have gotten different explanations of these values.
Some say they are the acceleration values with respect to gravity.
Others have said they are not; they are the acceleration values along the device's axes as it rotates around them.
Which is the correct version here? For example, does x represent the acceleration rate for pitch, or for movement from left to right?
In addition, let's say we want to get the acceleration rate (how fast) for yaw; how could we derive that value when the callback is constantly feeding us values? Would we need to set up another timer for the calculation?
Edit (in response to @Kay):
Yes, that was basically it - I just wanted to make sure that x, y, z on the one hand and pitch, roll and yaw on the other are represented by different frames.
1.)
How are these related in certain situations? Would there be cases where getting a value, for example for yaw, needs additional information from x, y and z?
2.)
Can you explain a little more on this:
(deviceMotion.rotationRate.z - previousRotationRateZ) / (currentTime - previousTime)
Would we need to use a timer for the time values? And how does the expression above produce an angular acceleration? I thought angular acceleration entailed more complex maths.
3.)
In a real-world situation we can hardly rely on a single value among pitch, roll and yaw, because it would be impossible for us to rotate around only one axis (our hands are not that "stable", especially after 5 cups of coffee...).
Let's say I would like to get the values of yaw (yes, rotation on the z-axis), but while yaw spins I also want to check it against pitch (x-axis).
Yes, two motions combine here (imagine the phone rotating around z with a slight movement toward and away from the user's face).
So: is there a mathematical model (or one from your own personal experience) to derive a value by combining readings from different axes? (Sample case: if the user is spinning on the z-axis and at the same time also moving on the x-axis - good; if not, it's not a motion we want. Sample case just off the top of my head.)
I hope my sample case above, with both yaw and pitch, makes sense to you. If not, please feel free to cite a better use case for explanation.
4.)
Lastly, time. How can we use time as a reference to check how fast a movement is relative to the last one? Should we provide a tolerance (example: "less than 1/50 of a second since the last movement - do something; if not, do nothing")? Where and when do we set up a timer?
The class reference of CMAccelerometerData says:
X-axis acceleration in G's (gravitational force)
The acceleration is measured in local coordinates as shown in figure 4-1 of the Event Handling Guide. It is always a translation and must not be confused with radial or circular motions, which are measured in angles.
Anyway, every rotation, even one with constant angular velocity, involves a change in direction, and thus an acceleration is reported as well; see Circular Motion.
What do you mean by get the acceleration rate (how fast) for yaw?
Based on figure 4-2 in Handling Rotation Rate Data, yaw rotation occurs around the Z axis. That means there is a continuous linear acceleration in the X,Y plane. If you are interested in angular acceleration, you need to take CMDeviceMotion.rotationRate and divide it by the time delta, e.g.:
(deviceMotion.rotationRate.z - previousRotationRateZ) / (currentTime - previousTime)
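In plain Python, that finite difference looks like this (pure arithmetic, no CoreMotion; in the app you would feed in rotationRate.z and the CMDeviceMotion timestamp from each update):

```python
class AngularAccelEstimator:
    """Finite-difference estimate of angular acceleration around one axis
    (rad/s^2) from successive (rotation rate, timestamp) samples."""

    def __init__(self):
        self.prev_rate = None
        self.prev_time = None

    def update(self, rotation_rate_z, timestamp):
        accel = None
        if self.prev_time is not None and timestamp > self.prev_time:
            accel = (rotation_rate_z - self.prev_rate) / (timestamp - self.prev_time)
        self.prev_rate, self.prev_time = rotation_rate_z, timestamp
        return accel  # None until two samples have arrived

est = AngularAccelEstimator()
print(est.update(0.10, 1.00))  # None (first sample)
print(est.update(0.25, 1.02))  # ~7.5 rad/s^2
```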
Update:
It depends on what you want to do and which motions you are interested in tracking. I hope you don't want to get the exact device position in x, y, z when doing a translation, as this is impossible. The orientation, i.e. the rotation relative to g, can be determined very well, of course.
I think in >99% of all cases you won't need additional information from accelerations when working with angles.
Don't use your own timer. CMDeviceMotion inherits from CMLogItem and thus provides a perfectly matching timestamp of the sensor data, or respectively the interpolated time for the result of the sensor fusion algorithm.
I assume that you don't need angular acceleration.
You are totally right, even without coffee ;-) If you look at the motions shown in this video, there is exactly the situation you describe. The maths and algorithms were the result of some heavy R&D, and I am bound by an NDA.
But most use cases are covered by the properties available in CMAttitude. Be cautious with Euler angles when doing calculations, because of Gimbal Lock.
Again this totally depends on what you are up to.
I'm working on a project where I need to track two points in an image. So far, the best way I have of identifying these points is to have the user click on them when the program is first run. I'm using the pyramidal Lucas-Kanade method built into OpenCV (documented here), but as is to be expected, this doesn't work too well. Is there a better alternative algorithm for tracking points in OpenCV, or alternatively some other way of verifying the points I already have?
I'm currently considering using GoodFeaturesToTrack, getting the distance from each detected feature to the point I want to track, and maybe some sort of vector describing the relationship between the two points, then using this information to determine my new point.
I'm looking for suggestions of ways to go about this, not necessarily code samples.
Thanks
EDIT: I'm tracking small movements, if that helps
If you are looking for a solution that is implemented in OpenCV, the pyramidal Lucas-Kanade (PLK) method is quite good; otherwise I would prefer a particle-filter-based tracker.
To improve your tracking performance with the PLK, be sure that you have set up the parameters correctly. E.g. for large motion you need about 3 or 4 pyramid levels. The window should not be too small (I prefer 17x17 to 27x27). Also keep in mind that the method needs textured areas to be able to track the points, i.e. corner-like image content (aperture problem).
I would propose to seed a set of points (ps) in a grid around the points (P) you want to track, and then use a forward-backward threshold to reject falsely tracked points. The motion of your points (P) is then computed as the mean motion of the surviving point set (ps).
The forward-backward confidence is computed by estimating the motion from frame 1 to frame 2 (ptList1 -> ptList2), and then from frame 2 back to frame 1 using the points of ptList2 (ptList2 -> ptListRef). A motion vector is rejected if || ptRef - pt1 || > fb_threshold.
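A condensed OpenCV/Python sketch of that forward-backward check (window and level values as suggested above; fb_threshold is something you tune):

```python
import cv2
import numpy as np

LK = dict(winSize=(21, 21), maxLevel=3,
          criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))

def track_fb(frame1, frame2, pts1, fb_threshold=1.0):
    """Track pts1 (Nx2) from frame1 to frame2 (grayscale uint8 images) and
    keep only points whose backward track lands near where they started."""
    pts1 = np.asarray(pts1, np.float32).reshape(-1, 1, 2)
    pts2, st_fwd, _ = cv2.calcOpticalFlowPyrLK(frame1, frame2, pts1, None, **LK)
    pts_ref, st_bwd, _ = cv2.calcOpticalFlowPyrLK(frame2, frame1, pts2, None, **LK)
    fb_err = np.linalg.norm((pts_ref - pts1).reshape(-1, 2), axis=1)
    good = (st_fwd.ravel() == 1) & (st_bwd.ravel() == 1) & (fb_err < fb_threshold)
    # The mean motion of the surviving grid points approximates the motion of P.
    motion = (pts2 - pts1).reshape(-1, 2)[good].mean(axis=0)
    return pts2.reshape(-1, 2)[good], motion
```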
I currently have a robot with some sensors: a GPS, an accelerometer and a compass. What I would like is for my robot to reach a GPS coordinate that I enter. I wondered if an algorithm to do that already exists. I don't want source code, which wouldn't have any point; just the procedure to follow, so that I can understand what I'm doing... For the moment, let's imagine that I can read the GPS coordinate at any time, so there's no need for a Kalman filter. I know that's unrealistic, but I would like to program it step by step, and Kalman is the next step.
If anyone has an idea...
To get a bearing (positive angle east of north) between two lat-long points use:
bearing=mod(atan2(sin(lon2-lon1)*cos(lat2),(lat1)*sin(lat2)-sin(lat1)*cos(lat2)*cos(lon2-lon1)),2*pi)
Note - angles probably have to be in radians depending on your math package.
But for small distances you can just calculate how many meters there are in one degree of latitude and longitude at your position and then treat them as flat X,Y coordinates.
For typical 45deg latitudes it's around 111.132 km/deg lat, 78.847 km/deg lon.
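In code, the flat approximation is just one scale factor per axis (a sketch using the constants above, so only valid near 45 degrees latitude):

```python
# Local flat-earth offset in meters from (lat1, lon1) to (lat2, lon2),
# using the per-degree scales quoted above for ~45 degrees latitude.
M_PER_DEG_LAT = 111_132.0
M_PER_DEG_LON = 78_847.0

def flat_offset_m(lat1, lon1, lat2, lon2):
    dy = (lat2 - lat1) * M_PER_DEG_LAT   # north, in meters
    dx = (lon2 - lon1) * M_PER_DEG_LON   # east, in meters
    return dx, dy
```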
1) Orient your robot toward its destination.
2) Move forward until the distance between you and your destination starts increasing, at which point you go back to 1).
3) BUT ... if you are close enough (under a threshold), consider that you have arrived at the destination.
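As a loop, with hypothetical robot primitives (bearing_to, distance_to, turn_to and forward_step are stand-ins for whatever your compass, GPS and motor interfaces provide):

```python
ARRIVED_M = 2.0  # "close enough" threshold in meters

def goto(robot, target):
    while robot.distance_to(target) > ARRIVED_M:       # 3) stop when close enough
        robot.turn_to(robot.bearing_to(target))        # 1) orient toward target
        prev = robot.distance_to(target)
        robot.forward_step()
        # 2) keep moving while the distance keeps shrinking
        while ARRIVED_M < robot.distance_to(target) <= prev:
            prev = robot.distance_to(target)
            robot.forward_step()
        # distance started growing (or we arrived): re-orient and try again
```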
You can use the Location class. Its bearingTo() function computes the bearing you have to follow to reach another location.
There is a very nice page explaining the formulas between GPS-based distance, bearing, etc. calculation, which I have been using:
http://www.movable-type.co.uk/scripts/latlong.html
I am currently trying to do these calculations myself, and just found out that there is an error in Martin Becket's answer. If you compare it with the information on that webpage, you will see that the part in the middle:
(lat1)*sin(lat2)
should actually be:
cos(lat1)*sin(lat2)
Would have left a comment, but don't have the reputation yet...
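For reference, here is the corrected formula in Python (inputs in radians, per the correction above):

```python
import math

def bearing_rad(lat1, lon1, lat2, lon2):
    """Initial bearing from point 1 to point 2, in radians east of north.
    Note the cos(lat1) factor that was missing in the earlier answer."""
    y = math.sin(lon2 - lon1) * math.cos(lat2)
    x = (math.cos(lat1) * math.sin(lat2)
         - math.sin(lat1) * math.cos(lat2) * math.cos(lon2 - lon1))
    return math.atan2(y, x) % (2 * math.pi)

# Heading due east from the equator should be pi/2 (90 degrees east of north).
print(bearing_rad(0.0, 0.0, 0.0, math.radians(1.0)))  # ~1.5708
```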