ArcGIS Runtime: How to convert a point's unit from degrees to meters

I have two geometries with the same coordinate system (WGS84), but their data units are different: one is in degrees and the other is in meters.
I need to perform some operations on them, like:
var g1 = GeometryEngine.Difference(geometry1, geometry2);
But I got an error:
System.ArgumentException: 'Invalid argument: geometry1 and geometry2 must have equivalent spatial references.'
So I want to convert the data in degrees to meters, but I don't know how to do it.
The data in meters comes from a shapefile (.shp), which is loaded into the SceneView.
The data in degrees comes from the SceneView's PreviewMouseLeftButtonDown event:
// Get the mouse position.
Point cursorScreenPoint = mouseEventArgs.GetPosition(MySceneView);
// Get the corresponding MapPoint.
MapPoint onMapLocation = MySceneView.ScreenToBaseSurface(cursorScreenPoint);
Then I thought about whether the unit can be modified by setting SceneView.SpatialReference.Unit, but it is read-only.
A .NET solution would be best, but other languages are also acceptable.

Most geometry engine operations require all geometries to be in the same spatial reference. As the error points out, that is not the case here. Before performing any geometry engine operation, you could use the following code to bring geometry2 over to match the spatial reference of geometry1 (or vice versa):
if (!geometry1.SpatialReference.IsEqual(geometry2.SpatialReference))
geometry2 = GeometryEngine.Project(geometry2, geometry1.SpatialReference);
The SceneView always returns coordinates in WGS84 latitude/longitude.

var point1 = ...;
var point2 = GeometryEngine.Project(point1, YourNewSpatialReference) as MapPoint;
public static Geometry? Project(Geometry geometry, SpatialReference outputSpatialReference);
public static Geometry? Project(Geometry geometry, SpatialReference outputSpatialReference, DatumTransformation? datumTransformation);
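Putting it together for the original question, a minimal sketch (assuming geometry1 is the meter-based geometry read from the shapefile and onMapLocation is the WGS84 MapPoint returned by ScreenToBaseSurface):
// Bring the degree-based point into the meter-based spatial reference of the shapefile geometry.
var projectedPoint = GeometryEngine.Project(onMapLocation, geometry1.SpatialReference) as MapPoint;
// Now both inputs share a spatial reference, so the operation succeeds.
var g1 = GeometryEngine.Difference(geometry1, projectedPoint);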

Related

Is there a simple way of handling (transforming) a group of objects in SkiaSharp?

In a nutshell, let's say I need to draw a complex object (an arrow) which consists of a certain number of objects, for instance five (or more) lines. More importantly, that object must be transformed to particular (dynamic) coordinates, possibly including scaling.
My question is whether SkiaSharp has anything I can use for manipulating this complex object's transformation (some sort of grouping, etc.), or do I still need to calculate every single point manually (with a matrix, for instance)?
This question relates specifically to SkiaSharp, as I use it on Xamarin, but maybe some general answers from Skia can also help.
I think the question might be too general (and possibly not a great fit for Stack Overflow), but I just can't find any specific information on Google.
Yes, I know how to use SkiaSharp for drawing primitives.
Create an SKPath and add lines and other shapes to it:
SKPath path = new SKPath();
path.LineTo(...);
...
...
Then draw the SKPath on your canvas:
canvas.DrawPath(path, paint);
You can apply a transform to the entire path before drawing:
var rot = new SKMatrix();
SKMatrix.RotateDegrees(ref rot, 45.0f);
path.Transform(rot);
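Since the transform in your case is dynamic and may include scaling, one option (a sketch with placeholder values) is to compose the rotation, scale and translation into a single matrix and apply it to the whole path:
// Compose rotate -> scale -> translate into one matrix (placeholder values).
SKMatrix transform = SKMatrix.MakeIdentity();
SKMatrix.PostConcat(ref transform, SKMatrix.MakeRotationDegrees(45.0f));
SKMatrix.PostConcat(ref transform, SKMatrix.MakeScale(2.0f, 2.0f));
SKMatrix.PostConcat(ref transform, SKMatrix.MakeTranslation(100.0f, 50.0f));
path.Transform(transform);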
If you are drawing something more complex than a path, SKPicture is perfect for this. You can set it up so that you construct it once and then reuse it easily and efficiently. In the example below, the SKPicture's origin is in the center of a 100 x 100 rectangle, but that is arbitrary.
SKPicture myPicture;
SKPicture MyPicture {
    get {
        if (myPicture != null) {
            return myPicture;
        }
        using (SKPictureRecorder recorder = new SKPictureRecorder())
        using (SKCanvas canvas = recorder.BeginRecording(new SKRect(-50, -50, 50, 50))) {
            // draw using primitives
            ...
            myPicture = recorder.EndRecording();
        }
        return myPicture;
    }
}
Then you apply your transforms to the canvas, draw the picture and restore the canvas state. offsetX and offsetY correspond to where the origin of the SKPicture will be rendered.
canvas.Save();
canvas.Translate(offsetX, offsetY);
canvas.Scale(scaleAmount);
canvas.RotateDegrees(degrees);
canvas.DrawPicture(MyPicture);
canvas.Restore();

ThreeJS apply properties from one camera to another camera

In my webapp I'm using ThreeJS scenes in different modals/popups/dialogs with different width/height ratios.
Furthermore, I want to use multiple user defined camera settings (rotation, position, lookAt etc.) among these different scenes.
Therefore, I save the camera object via camera.toJSON() when the user clicks a capture camera settings button.
(Before this, I saved just the camera object itself, but unfortunately these objects are quite big and slow down performance when multiple camera objects are stored. Nevertheless, that approach worked, since I was able to copy all the desired values between the saved camera object and the currently used camera [e.g. current_camera.position.x = saved_camera.position.x and so on].)
In every scene where I now want to use the saved properties, I tried the following:
let m = new THREE.Matrix4();
m.fromArray(saved_camera.object.matrix);
current_camera.applyMatrix(m)
current_camera.updateMatrix();
Unfortunately this doesn't work.
[Images in the original post: the "normal" camera object vs. the camera.toJSON() object]
If you're comfortable using matrices, then you can turn off the matrix auto-update that three.js does during the render process, and keep the world matrix up-to-date yourself. (This includes any time you change the camera's orientation, so keep that in mind if you're using some form of mouse interaction to control the camera angle.)
First, turn off automatic matrix updating for your camera by setting the matrixAutoUpdate property to false. You can still use the convenience properties (position, rotation, scale), but you'll have to update the matrices manually by calling camera.updateMatrix() followed by camera.updateMatrixWorld(true);.
Finally, when you're ready to restore a particular camera orientation, simply copy the matrix values using the matrixWorld's copy method.
var origin = new THREE.Vector3();
var theCamera = new THREE.PerspectiveCamera(35, 1, 1, 1000);
theCamera.matrixAutoUpdate = false; // turn off auto-update
theCamera.position.set(10, 10, 10);
theCamera.lookAt(origin);
theCamera.updateMatrix(); // recompose .matrix from position/rotation (no longer automatic)
theCamera.updateMatrixWorld(true); // manually update the world matrix!
console.log("Camera original matrix: ", theCamera.matrixWorld.elements.toString());
var saveMatrix = new THREE.Matrix4();
saveMatrix.copy(theCamera.matrixWorld);
// saveMatrix now contains the current value of theCamera.matrixWorld
theCamera.position.set(50, -50, 75);
theCamera.lookAt(origin);
theCamera.updateMatrix(); // recompose .matrix from position/rotation
theCamera.updateMatrixWorld(true); // manually update the world matrix!
console.log("Camera moved matrix: ", theCamera.matrixWorld.elements.toString());
// theCamera.matrixWorld now holds a value that's different from saveMatrix.
theCamera.matrixWorld.copy(saveMatrix);
// Don't update the matrix, because you just SET it.
console.log("Camera restored matrix: ", theCamera.matrixWorld.elements.toString());
// theCamera.matrixWorld once again contains the saved value.
<script src="https://threejs.org/build/three.js"></script>
Edit to address OrbitControls:
It looks like OrbitControls uses the convenience properties, rather than gathering the information from the matrix. As such, when you restore a camera position, you'll also need to restore those properties. This is easily done by using decompose on the matrix, and copying the resulting values into the appropriate properties:
var d = new THREE.Vector3(),
q = new THREE.Quaternion(),
s = new THREE.Vector3();
camera.matrixWorld.decompose( d, q, s );
camera.position.copy( d );
camera.quaternion.copy( q );
camera.scale.copy( s );
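Combined, a restore step might look like this (a sketch; restoreCamera is just an illustrative helper name and controls is your OrbitControls instance):
function restoreCamera(camera, savedMatrix, controls) {
    camera.matrixWorld.copy(savedMatrix);
    // Push the restored matrix back into the convenience properties.
    camera.matrixWorld.decompose(camera.position, camera.quaternion, camera.scale);
    if (controls) controls.update(); // let OrbitControls pick up the restored position/orientation
}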

Change facing direction of CSS3DObject

I have a 3D scene with a bunch of CSS objects that I want to rotate so that they are all pointing towards a point in space.
My CSS objects are simple rectangles that are a lot wider than they are high:
var element = document.createElement('div');
element.innerHTML = "test";
element.style.width = "75px";
element.style.height = "10px";
var object = new THREE.CSS3DObject(element);
object.position.x = x;
object.position.y = y;
object.position.z = z;
By default, the created objects are defined as if they are "facing" the z-axis. This means that if I use the lookAt() function, the objects will rotate so that the "test" text faces the point.
My problem is that I would rather rotate them so that the "right edge" of the div points towards the desired point. I've tried fiddling with the up-vector, but I feel like that won't work because I still want the up-vector to point up. I also tried rotating the object Math.PI/2 along the y-axis first, but lookAt() seems to ignore any previously set rotation.
It seems like I need to redefine the object's local z-vector instead, so that it runs along the global x-axis. That way the object's "looking at" direction would be to the right in the scene, and then lookAt() would orient it properly.
Sorry for probably mangling terminology, newbie 3D programmer here.
Object.lookAt( point ) will orient the object so that the object's internal positive z-axis points in the direction of the desired point.
If you want the object's internal positive x-axis to point in the direction of the desired point, you can use this pattern:
object.lookAt( point );
object.rotateY( - Math.PI / 2 );
three.js r.84

Performance problems with SceneKit

I've got a two-dimensional array of values that I want to visualize in 3D, and I'm using SceneKit under OS X for it. I've done it in a clumsy manner by using each column as a point on the X axis, each row as a point on the Z axis, and each value as a normalized point on the Y axis, placing a sphere at the vector defined by each data point. It works, but it doesn't look too good.
I've also done this by building a mesh of lines based on @Matthew's function in "Drawing a line between two points using SceneKit" (the answer he posted, not the original question). For each point I use his function to draw two lines: one between the current point and the next point to the right, and another between the current point and the next point towards the front (except when there is no additional column/row, of course).
Using the second method, my results look much better... however, the performance is quite hideous! It takes quite a long time to complete the initial rendering, and if I use a trackpad/mouse to rotate or translate the scene, I might as well get a cup of coffee and wait until my system is usable again (and that is not much of an exaggeration). Using the sphere method, things render and update very quickly.
Any advice on how to improve the performance when using the lines method? (Note that I am not trying to add both lines and spheres at the same time.) Code-wise, the only difference between the two approaches is which of the following methods gets called (and that, for each point, addPixelAt... is called once, but addLineAt... is called twice for most points).
- (SCNNode *)addPixelAtRow:(CGFloat)row Column:(CGFloat)column size:(CGFloat)size color:(NSColor *)color
{
CGFloat radius = 0.5;
SCNSphere *ball = [SCNSphere sphereWithRadius:radius*1.5];
SCNMaterial *material = [SCNMaterial material];
[[material diffuse] setContents:color];
[[material specular] setContents:color];
[ball setMaterials:@[material]];
SCNNode *ballNode = [SCNNode nodeWithGeometry:ball];
[ballNode setPosition:SCNVector3Make(column, size, row)];
[_baseNode addChildNode:ballNode];
return ballNode;
}
- (SCNNode *)addLineFromRow:(CGFloat)row1 Column:(CGFloat)column1 size:(CGFloat)size1
toRow2:(CGFloat)row2 Column2:(CGFloat)column2 size2:(CGFloat)size2 color:(NSColor *)color
{
SCNVector3 positions[] = {
SCNVector3Make(column1, size1, row1),
SCNVector3Make(column2, size2, row2)
};
int indices[] = {0, 1};
SCNGeometrySource *vertexSource = [SCNGeometrySource geometrySourceWithVertices:positions count:2];
NSData *indexData = [NSData dataWithBytes:indices length:sizeof(indices)];
SCNGeometryElement *element = [SCNGeometryElement geometryElementWithData:indexData
primitiveType:SCNGeometryPrimitiveTypeLine
primitiveCount:1
bytesPerIndex:sizeof(int)];
SCNGeometry *line = [SCNGeometry geometryWithSources:@[vertexSource] elements:@[element]];
SCNMaterial *material = [SCNMaterial material];
[[material diffuse] setContents:color];
[[material specular] setContents:color];
[line setMaterials:@[material]];
SCNNode *lineNode = [SCNNode nodeWithGeometry:line];
[_baseNode addChildNode:lineNode];
return lineNode;
}
From the data that you've shown in your question, I would say that your main problem is the number of draw calls. Yours is in the tens of thousands, which is far too many. It should probably be a lot closer to ~100.
The reason you have so many draw calls is that you have so many distinct objects in your scene (one per line). The better (but more advanced) solution would probably be to generate a single geometry element for the entire mesh that consists of all the lines (a rough sketch of this appears at the end of this answer). If you want to achieve the same rendering with that mesh (with a color from cold to warm based on the height), then you could do that in a shader modifier.
However, in your case I would start by flattening all the lines, since that is the smallest code change and should still give a significant performance improvement.
(Optimizing performance is always an iterative process. Once you fix one thing, there will be another thing that becomes the most expensive operation. Without your full code I can only say what would help with the current performance problem.)
Create an empty node (without adding it to your scene) and generate all the lines, adding them to this node. Then create a flattened copy of that node by calling flattenedClone on the node that contains all the lines:
SCNNode *nodeWithAllTheLines = [SCNNode node];
// create all the lines and add them to it...
SCNNode *flattenedNode = [nodeWithAllTheLines flattenedClone];
[_baseNode addChildNode:flattenedNode];
When you do this you should see a significant drop in the number of draw calls (the number after the diamond in the statistics) and hopefully a big increase in performance.
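For reference, a rough sketch of the single-geometry approach mentioned above (the method name and parameters are illustrative, not from the original post): pack every segment's two endpoints into one vertex source and one line element, so the whole grid becomes a single draw call.
- (SCNNode *)nodeForLineSegments:(SCNVector3 *)endpoints segmentCount:(int)segmentCount color:(NSColor *)color
{
    // `endpoints` holds the segment endpoints pairwise: segment i uses endpoints[2*i] and endpoints[2*i + 1].
    int vertexCount = 2 * segmentCount;
    int *indices = malloc(vertexCount * sizeof(int));
    for (int i = 0; i < vertexCount; i++) {
        indices[i] = i; // one index per endpoint, in order
    }
    SCNGeometrySource *vertexSource = [SCNGeometrySource geometrySourceWithVertices:endpoints count:vertexCount];
    NSData *indexData = [NSData dataWithBytes:indices length:vertexCount * sizeof(int)];
    free(indices);
    SCNGeometryElement *element = [SCNGeometryElement geometryElementWithData:indexData
                                                                primitiveType:SCNGeometryPrimitiveTypeLine
                                                               primitiveCount:segmentCount
                                                                bytesPerIndex:sizeof(int)];
    SCNGeometry *lines = [SCNGeometry geometryWithSources:@[vertexSource] elements:@[element]];
    SCNMaterial *material = [SCNMaterial material];
    [[material diffuse] setContents:color];
    [lines setMaterials:@[material]];
    return [SCNNode nodeWithGeometry:lines];
}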

THREE.Raycaster: Possible to use it for enemy AI?

I've just started learning THREE and have been messing about with the three.js example of controllable MD2 characters to try and fashion it into a 3rd-person-shooter kind of game. I've been trying to write a simple algorithm for the enemy characters, and I'm pretty sure that raycasting would be ideal. The whole idea is that the enemies should stop rotating once they're facing the player. But here's the problem that's giving me sleepless nights:
Let's say the enemy object is the origin for the raycaster's ray. No matter what direction I set for that ray (even, for example, (1,0,0), the positive x-axis), the ray's direction always ends up pointing towards the center of the scene!
Please help! I haven't been able to find any example online for this kind of use of the raycaster (apart from collision detection, which I really don't need at the moment).
If all you want is for enemies to stop rotating when they are looking at the player, I would consider just checking the direction between them, as it's a lot faster than casting a ray to see if it intersects:
// Assuming `enemy` and `player` are THREE.Mesh instances
var targetDir = player.position.clone().sub(enemy.position).normalize(); // direction from the enemy to the player
var currentDir = new THREE.Vector3(0, 0, 1).applyMatrix4(enemy.matrixWorld).sub(enemy.position).normalize(); // the enemy's local +z axis in world space
var amountToRotate = currentDir.clone().sub(targetDir);
var offset = amountToRotate.length();
Then, if offset is greater than some threshold, rotate each axis by no more than the corresponding component of amountToRotate.
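As a concrete (simplified) sketch of that check, assuming the enemies face their local +z axis and turnSpeed is a placeholder step size per frame (this uses a single angle threshold rather than the per-axis clamp described above):
// Keep turning until the enemy's forward vector is within a small angle of the direction to the player.
var forward = enemy.getWorldDirection(new THREE.Vector3());
var toPlayer = player.position.clone().sub(enemy.position).normalize();
if (forward.angleTo(toPlayer) > 0.05) { // roughly 3 degrees of tolerance
    enemy.rotateY(turnSpeed);
}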
That said, here is how you use a Raycaster, given the variables above:
var raycaster = new THREE.Raycaster(enemy.position, targetDir);
var intersections = raycaster.intersectObject(player);
Note that if you are running any of the above code in an animation loop, it will create a lot of garbage collection churn because you are constantly creating a bunch of new objects and then immediately throwing them away. A better pattern, which is used a lot in the library itself, is to initialize objects once, copy values to them if you need to, and then use those copies for computation. For example, you could create a function to do your raycasting for you like this:
var isEnemyLookingAtPlayer = (function() {
var raycaster = new THREE.Raycaster();
var pos = new THREE.Vector3();
return function(enemy) {
raycaster.ray.origin.copy(enemy.position);
raycaster.ray.direction.copy(pos.copy(player.position).sub(enemy.position).normalize()); // direction from the enemy to the player
return !!raycaster.intersectObject(player).length;
};
})();

Resources