Calculate LatLong of point at distance x,y metres from origin LatLong using Boost Geometry - boost

I'm using Boost.Geometry (v1.75) to do forward and inverse coordinate transforms, and this works OK:
namespace bg = boost::geometry;
namespace bm = bg::model::d2;

bg::srs::projection<bg::srs::static_epsg<3785>> transform;  // Web Mercator (EPSG:3785)
bm::point_xy<double, bg::cs::geographic<bg::degree>> origin = { -3.04081, 53.4427 }, longLatOut;
bm::point_xy<double, bg::cs::cartesian> xy;
transform.forward(origin, xy);      // lon/lat (degrees) -> projected x/y (metres)
transform.inverse(xy, longLatOut);  // projected x/y (metres) -> lon/lat (degrees)
I would like to calculate a new point that results from adding an offset of x,y metres to my geographic origin (lat/long); accuracy is important. I'm stuck on how to do this.
Can anyone provide any guidance on the best way to approach this using the Boost libs?
Regards

As far as I know, Boost has no cartographic projection function for this; maybe you can try the PROJ lib.
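For what it's worth, here is a minimal, untested sketch of the projected-offset workaround using nothing beyond the transform the question already sets up: project the origin to EPSG:3785, add the metre offsets in projected space, and project back. Bear in mind that Web Mercator metres are stretched by roughly 1/cos(latitude) (about 1.68x at 53.44°N), so if strict metric accuracy matters, a local projection such as the appropriate UTM zone, or a geodesic direct computation (e.g. with PROJ, as suggested above), is the safer route.

#include <boost/geometry.hpp>
#include <boost/geometry/srs/epsg.hpp>
#include <boost/geometry/srs/projection.hpp>

namespace bg = boost::geometry;
namespace bm = bg::model::d2;

using GeoPoint = bm::point_xy<double, bg::cs::geographic<bg::degree>>;
using XyPoint = bm::point_xy<double, bg::cs::cartesian>;

// Offset a geographic point by (dx, dy) metres measured in projected space.
GeoPoint offsetLonLat(GeoPoint const& origin, double dx, double dy)
{
    bg::srs::projection<bg::srs::static_epsg<3785>> transform;
    XyPoint xy;
    transform.forward(origin, xy);        // lon/lat -> projected x/y
    bg::set<0>(xy, bg::get<0>(xy) + dx);  // shift east
    bg::set<1>(xy, bg::get<1>(xy) + dy);  // shift north
    GeoPoint out;
    transform.inverse(xy, out);           // projected x/y -> lon/lat
    return out;
}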

Related

ArcGIS Runtime: How to convert a point's units from degrees to meters

I have two geometries with the same coordinate system (WGS84), but their data units are different: one is in degrees and the other is in meters.
I need to perform some operations on them, like:
var g1 = GeometryEngine.Difference(geometry1, geometry2);
But I got an error:
System.ArgumentException: 'Invalid argument: geometry1 and geometry2 must have equivalent spatial references.'
So I want to convert the data in degrees to data in meters, but I don't know how to do it.
The data in meters comes from the shp file. This shp file is loaded into SceneView.
The data in degrees comes from the PreviewMouseLeftButtonDown event of SceneView:
// Get the mouse position.
Point cursorScreenPoint = mouseEventArgs.GetPosition(MySceneView);
// Get the corresponding MapPoint.
MapPoint onMapLocation = MySceneView.ScreenToBaseSurface(cursorScreenPoint);
Then I thought about whether the unit can be modified by setting SceneView.SpatialReference.Unit, but it is read-only.
A .NET solution is best, but other languages are also acceptable.
Most geometry engine operations require all geometries to be in the same spatial reference. As the error points out, that is not the case here. Before performing any geometry engine operation, you could use the following code to bring geometry2 over to match the spatial reference of geometry1 (or vice versa):
if (!geometry1.SpatialReference.IsEqual(geometry2.SpatialReference))
geometry2 = GeometryEngine.Project(geometry2, geometry1.SpatialReference);
The SceneView always returns coordinates in WGS84 lat/long.
var point1 = ...;
var point2 = GeometryEngine.Project(point1, YourNewSpatialReference) as MapPoint;
The relevant GeometryEngine.Project overloads are:
public static Geometry? Project(Geometry geometry, SpatialReference outputSpatialReference);
public static Geometry? Project(Geometry geometry, SpatialReference outputSpatialReference, DatumTransformation? datumTransformation);
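Putting the pieces together for this question, a short sketch (here shpGeometry is a hypothetical stand-in for a geometry taken from the loaded shapefile; every call used appears earlier in this thread):

// Project the clicked point into the shapefile geometry's spatial reference
// before calling Difference, so both inputs share one spatial reference.
MapPoint onMapLocation = MySceneView.ScreenToBaseSurface(cursorScreenPoint); // WGS84 degrees
Geometry clicked = onMapLocation;
if (!clicked.SpatialReference.IsEqual(shpGeometry.SpatialReference))
    clicked = GeometryEngine.Project(clicked, shpGeometry.SpatialReference);
var g1 = GeometryEngine.Difference(shpGeometry, clicked);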

Is there a simple way of handling (transforming) a group of objects in SkiaSharp?

In a nutshell, let's say I need to draw a complex object (an arrow) which consists of a certain number of primitives, for instance five (or more) lines. What's more important, that object must be transformed with particular (dynamic) coordinates, possibly including scaling.
My question is whether SkiaSharp has anything I can use to manipulate the transformation of this complex object as a whole (some sort of grouping, etc.), or do I still need to calculate every single point manually (with a matrix, for instance)?
This question relates particularly to SkiaSharp, as I use it on Xamarin, but maybe some general answers about Skia can also help.
I realize the question might be too broad (and possibly not an exact fit for Stack Overflow), but I just can't find any specific information on Google.
Yes, I know how to use SkiaSharp for drawing primitives.
Create an SKPath and add lines and other shapes to it:
SKPath path = new SKPath();
path.LineTo(...);
...
...
Then draw the SKPath on your canvas:
canvas.DrawPath(path, paint);
You can apply a transform to the entire path before drawing:
var rot = new SKMatrix();
SKMatrix.RotateDegrees(ref rot, 45.0f);
path.Transform(rot);
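Since the question also asks about dynamic translation and scaling, here is an untested sketch composing all three transforms into one SKMatrix before applying it to the path (the Make*/PreConcat helpers below are from the same SkiaSharp 1.x-era API as RotateDegrees above, and the offset/angle/scale values are placeholders; check against your version):

// Compose translate * rotate * scale into a single matrix and
// transform the whole path with it in one call.
var m = SKMatrix.MakeIdentity();
SKMatrix.PreConcat(ref m, SKMatrix.MakeTranslation(offsetX, offsetY));
SKMatrix.PreConcat(ref m, SKMatrix.MakeRotationDegrees(45.0f));
SKMatrix.PreConcat(ref m, SKMatrix.MakeScale(2.0f, 2.0f));
path.Transform(m);  // scales first, then rotates, then translates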
If you are drawing something more complex than a path, SKPicture is perfect for this. You can set it up so that you construct it once and then reuse it easily and efficiently. In the example below, the SKPicture's origin is in the center of a 100 x 100 rectangle, but that is arbitrary.
SKPicture myPicture;
SKPicture MyPicture {
    get {
        if (myPicture != null) {
            return myPicture;
        }
        using (SKPictureRecorder recorder = new SKPictureRecorder())
        using (SKCanvas canvas = recorder.BeginRecording(new SKRect(-50, -50, 50, 50))) {
            // draw using primitives on 'canvas'
            ...
            myPicture = recorder.EndRecording();
        }
        return myPicture;
    }
}
Then you apply your transforms to the canvas, draw the picture, and restore the canvas state. offsetX and offsetY correspond to where the origin of the SKPicture will be rendered:
canvas.Save();
canvas.Translate(offsetX, offsetY);
canvas.Scale(scaleAmount);
canvas.RotateDegrees(degrees);
canvas.DrawPicture(MyPicture);
canvas.Restore();

Change facing direction of CSS3DObject

I have a 3D scene with a bunch of CSS objects that I want to rotate so that they are all pointing towards a point in space.
My CSS objects are simple rectangles that are a lot wider than they are high:
var element = document.createElement('div');
element.innerHTML = "test";
element.style.width = "75px";
element.style.height = "10px";
var object = new THREE.CSS3DObject(element);
object.position.x = x;
object.position.y = y;
object.position.z = z;
By default, the created objects are defined as if they are "facing" the z-axis. This means that if I use the lookAt() function, the objects will rotate so that the "test" text faces the point.
My problem is that I would rather rotate so that the right edge of the div points towards the desired point. I've tried fiddling with the up-vector, but I feel like that won't work, because I still want the up-vector to point up. I also tried rotating the object Math.PI/2 along the y-axis first, but lookAt() seems to ignore any previously set rotation.
It seems like I need to redefine the object's local z-vector instead, so that it runs along the global x-vector. That way the object's "looking at" direction would be to the right in the scene, and then lookAt() would orient it properly.
Sorry for probably mangling terminology, newbie 3D programmer here.
Object.lookAt( point ) will orient the object so that the object's internal positive z-axis points in the direction of the desired point.
If you want the object's internal positive x-axis to point in the direction of the desired point, you can use this pattern:
object.lookAt( point );
object.rotateY( - Math.PI / 2 );
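Applied to the question's setup, the pattern looks like this sketch, where target is assumed to be a THREE.Vector3 holding the point the objects should face:

var object = new THREE.CSS3DObject(element);
object.position.set(x, y, z);
object.lookAt(target);         // local +z now points at target
object.rotateY(-Math.PI / 2);  // swings local +x around to point at target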
three.js r.84

Make a rigged character's head rotate in sync with a quaternion in Unity

I have a face detection app, and I want a character's head to rotate according to the detected face's pose.
I've managed to get the rotation of the detected face in the form of a quaternion, but I'm unsure how to translate the data from the quaternion to the rigged character's reference points, which I believe determine the rotation.
Let's say I have this character: http://i.imgur.com/3pcRoYx.png
One solution could be to just cut off the head, make it a separate object, and then set the rotation of that object according to the quaternion, but I don't want that. I want an intact character.
Is it possible to move the reference points in the head with the data from a quaternion? Or have I gotten it wrong how rigged characters turn their heads? I haven't animated before.
You can apply rotation to a single bone. Get that bone in your script, keep a variable in your class to store the last quaternion, and every update compare the new value against it and rotate by the difference. I don't have the actual editor here, but try this pseudocode:
using UnityEngine;

class NeckRotator : MonoBehaviour {
    public GameObject Neck;       // the neck/head bone of the rig
    private Quaternion LastFace;

    void Start() {
        LastFace = Neck.transform.rotation;
    }

    void Update() {
        var DetectedFace = ... // Whatever you do to get this
        // Delta that takes the previously seen face pose to the new one.
        var Change = DetectedFace * Quaternion.Inverse(LastFace);
        Neck.transform.rotation = Change * Neck.transform.rotation;
        LastFace = DetectedFace;
    }
}
I've done something like this before to rotate the neck of an NPC to look at the player. It should work for your case as well.

kineticjs regular polygon setFillPatternImage alignment issue

I am using a KineticJS regular polygon (a hexagon in this case) and filling it with an image via setFillPatternImage. This works. I'm creating a dynamic implementation, so I need to scale the source image depending on the current size of the polygon. This involves calculating the setFillPatternOffset and the setFillPatternScale, since the dimensions of a regular polygon are relative to its center. I can find no clear documentation on the reference point for the fill image, nor on whether the scaling factor should use the radius as a proxy for the width and height ratios. The following code results in a misplaced image on the polygon. Does anyone know what the alignment rules are for fillPatternImage?
imageObj.onload = function() {
    var whex = hexagon.getRadius() * 2;
    var xratio = whex / imageObj.width;
    var yratio = whex / imageObj.height;
    hexagon.setFillPatternImage(imageObj);
    hexagon.setFillPatternOffset(-whex/2, -whex/2);
    hexagon.setFillPatternScale([xratio, yratio]);
};
Thanks!
Looks like I was over-thinking this. Rather than using the width of the destination polygon when setting the offset, you give the offset in image pixels and KineticJS handles the scaling of that offset for you. As a result you simply set the offset with:
hexagon.setFillPatternOffset(-imageObj.width/2, -imageObj.height/2);
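For completeness, here is the question's loader with that one change applied (untested; the scale calculation is unchanged from the question):

imageObj.onload = function() {
    var whex = hexagon.getRadius() * 2;  // bounding-box size of the hexagon
    hexagon.setFillPatternImage(imageObj);
    // Scale the image to the hexagon's bounding box...
    hexagon.setFillPatternScale([whex / imageObj.width, whex / imageObj.height]);
    // ...but give the centering offset in image pixels; KineticJS scales it.
    hexagon.setFillPatternOffset(-imageObj.width / 2, -imageObj.height / 2);
};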
