I am trying to create a 3D Visualization of an RC airplane in Threebox. The RC plane sends live telemetry, including:
GPS Coordinates
Gyro sensor data, showing the pitch, roll and heading of the plane.
I have now loaded a model of an airplane in Threebox; no problems with that.
My problem comes down to the rotation of the plane. I want the plane object to represent the current orientation of the RC plane. Since I have the live telemetry from the flight controller, this should be possible.
In the documentation, I have found this method, which seemed like exactly what I needed:
plane.setRotation({x: roll, y: pitch, z: yaw/heading})
And it basically works: I can rotate the plane around its axes. But things get messed up when I combine the rotations.
For example: when I just update the roll axis, the object behaves just like I want it to. However, when I change the heading of the plane by 90 degrees, the roll axis suddenly becomes the pitch axis. It seems to me that the axes of the plane object don't rotate with the plane itself.
I've prepared a recreation of the issue on jsfiddle. You can change the heading of the plane using the slider in the bottom right.
I've been stuck on this for days, would be super happy for any help!
There are lots of issues with your jsfiddle that prevent it from running. To isolate an issue and make it easier to test, you should eliminate as many variables as possible: you're using two third-party libraries that play a big part in how transformations behave, particularly Threebox.
I would recommend sticking with three.js's built-in transformation tools unless you specifically need lat/lng transformations, or other conversions between a local Cartesian space and a global coordinate system. In this case, a very basic plane.setRotationFromEuler(new THREE.Euler(yaw, pitch, roll)) should do the trick. Be aware of how much the order of Euler rotations can affect the outcome, and that three.js uses radians for all its rotations, not degrees.
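For illustration, here is a minimal sketch of that approach (my own variable and function names; telemetry assumed to be in degrees, and the model assumed Y-up, the three.js default):

```javascript
const d2r = THREE.MathUtils.degToRad; // THREE.Math.degToRad in older builds

function applyTelemetry(plane, rollDeg, pitchDeg, headingDeg) {
  // With order 'YXZ', three.js applies heading (about Y) first, then
  // pitch (about the new X), then roll (about the new Z); each rotation
  // uses the plane's own already-rotated axes, which is the behavior
  // the question is missing.
  const euler = new THREE.Euler(
    d2r(pitchDeg),   // rotation about X
    d2r(headingDeg), // rotation about Y (three.js is Y-up)
    d2r(rollDeg),    // rotation about Z
    'YXZ'
  );
  plane.setRotationFromEuler(euler);
}
```

Depending on how your model is oriented, you may need a different axis mapping or order string.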
I am curious about the limits of three.js. The following question is asked mainly as a challenge, not because I actually need the specific knowledge/code right away.
Say you have a game/simulation world model around a sphere geometry representing a planet, like the worlds of the game Populous. The resolution of polygons and textures is sufficient to look smooth when the globe fills the view of an ordinary camera. There are animated macroscopic objects on the surface.
The challenge is to project everything from the model to a global map projection on the screen in real time. The choice of projection is yours, but it must be seamless/continuous, and it must be possible for the user to rotate it, placing any point on the planet surface in the center of the screen. (It is not an option to maintain an alternative model of the world only for visualization.)
There are no limits on the number of cameras etc. allowed, but the performance must be expected to be "realtime", say double-digit FPS or more.
I don't expect any proof in the form of a running application (although that would be cool), but some explanation as to how it could be done.
My own initial idea is to place a lot of cameras, in fact one for every pixel in the map projection, around the globe, within a Group object that is attached to some kind of orbit controls (with rotation only), but I expect the number of object culling operations to become a huge performance issue. I am sure there must exist more elegant (and faster) solutions. :-)
Why not just use a spherical camera model (think of a 360° camera) and virtually put it in the center of the sphere? This camera would (if it were physically possible) be wrapped all around the sphere, looking toward the center from all directions.
This camera could be implemented in shaders (instead of the regular projection matrix) and would produce an equirectangular image of the planet surface (or in fact any other projection you want, like the spherical Mercator projection).
As far as I can tell, the vertex shader can implement any projection you want, and it doesn't need to represent a camera that is physically possible. It just needs to produce consistent clip-space coordinates for all vertices. Fragment shaders for lighting would still need to operate on the original coordinates, normals, etc., but that should be achievable. So the vertex shader would just need to compute (x, y, z) => (phi, theta, r) and go on with that.
Occlusion culling would need to be disabled, but iirc three.js doesn't do that anyway. (Per-object frustum culling, which three.js does do, would also need to be turned off, e.g. object.frustumCulled = false, since the bounding-sphere test assumes a regular camera frustum.)
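To make the idea concrete, here is a rough, untested sketch of such a material. The uniform names and the depth mapping are my own assumptions, and triangles straddling the longitude seam would still need to be split or duplicated:

```javascript
// Vertex shader replaces the perspective projection with an
// equirectangular mapping around a planet centered at the origin.
const equirectMaterial = new THREE.ShaderMaterial({
  uniforms: {
    rMin: { value: 0.9 }, // nearest expected radius (for depth mapping)
    rMax: { value: 1.1 }  // farthest expected radius
  },
  vertexShader: `
    const float PI = 3.141592653589793;
    uniform float rMin;
    uniform float rMax;
    void main() {
      vec3 p = (modelMatrix * vec4(position, 1.0)).xyz;
      float r = length(p);
      float lon = atan(p.z, p.x);                   // -PI .. PI
      float lat = asin(clamp(p.y / r, -1.0, 1.0));  // -PI/2 .. PI/2
      // Longitude -> clip x, latitude -> clip y, radius -> depth.
      float depth = 2.0 * (r - rMin) / (rMax - rMin) - 1.0;
      gl_Position = vec4(lon / PI, 2.0 * lat / PI, depth, 1.0);
    }
  `,
  fragmentShader: `void main() { gl_FragColor = vec4(1.0); }`
});
```

Meshes using it would also need object.frustumCulled = false, as noted above.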
I'm new to three.js and WebGL in general.
The sample at http://css.dzone.com/articles/threejs-render-real-world shows how to use raster GIS terrain data in three.js
Is it possible to use vector GIS data in a scene? For example, I have a series of points representing locations (including height) stored in real-world coordinates (meters). How would I go about displaying those in three.js?
The basic sample at http://threejs.org/docs/59/#Manual/Introduction/Creating_a_scene shows how to create a geometry using coordinates - could I use a similar approach with real-world coordinates such as
"x" : 339494.5,
"y" : 1294953.7,
"z": 0.75
or do I need to convert these into page units? Could I use my points to create a surface on which to drape an aerial image?
I tried modifying the simple sample but I'm not seeing anything (or any error messages): http://jsfiddle.net/slead/KpCfW/
Thanks for any suggestions on what I'm doing wrong, or whether this is indeed possible.
I did a number of things to get the JSFiddle to show something, here: http://jsfiddle.net/HxnnA/
You did not specify any faces in your geometry. In this case I just hard-coded a face with all three of your data points acting as corners. Alternatively, you can look into using particles to display your data as points instead of faces.
Set the material's side to THREE.DoubleSide. This is not usually needed or recommended, but it helps debugging in the early phases, when you can see both sides of a face.
Your camera was probably looking in the wrong direction. I added a lookAt() to point it at the center and made the field of view wider (this just makes it easier to find things while coding).
Your camera near and far planes were likely off-range for the camera position and terrain dimensions. So I increased the far plane distance.
Your coordinate values were quite huge, so I modified them by hand a bit to make sense in relation to the camera, and to make sure they form a big enough triangle to be seen by the camera. You could consider dividing your coordinates by something like 100 to make the units smaller. But adjusting the camera to account for the huge scale should be enough too.
Nothing wrong with your approach, just make sure you feed the data so that it makes sense considering the camera location, direction, and near + far planes. Pay attention to how you make the faces: the parameters to Face3 are the indices of the points in your vertices array. Later on you might need to take winding order, normals, and UVs into account. You can study the geometry classes included in Three.js for reference.
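As an illustrative sketch of the whole setup (using the old Geometry/Face3 API that the r59-era docs in the question describe; modern three.js uses BufferGeometry instead), assuming a `scene` and `camera` already exist and the coordinates have been scaled down:

```javascript
const geometry = new THREE.Geometry();
geometry.vertices.push(
  new THREE.Vector3(-50, 0, 0),
  new THREE.Vector3(50, 0, 0),
  new THREE.Vector3(0, 75, 0)
);
// Face3 takes the indices of three entries in the vertices array.
geometry.faces.push(new THREE.Face3(0, 1, 2));
geometry.computeFaceNormals();

// DoubleSide so the triangle stays visible from behind while debugging.
const material = new THREE.MeshBasicMaterial({
  color: 0x44aa88,
  side: THREE.DoubleSide
});
scene.add(new THREE.Mesh(geometry, material));

// Make sure the camera can actually see the triangle: far plane beyond
// the geometry, and lookAt pointed at it.
camera.position.set(0, 25, 300);
camera.far = 10000;
camera.updateProjectionMatrix();
camera.lookAt(new THREE.Vector3(0, 25, 0));
```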
Three.js does not assign any meaning to units. It's just floating-point numbers, and you can decide for yourself what a unit (1.0) represents: whether it's 1 mm, 1 inch, or 1 km depends on what makes the most sense for the application and its scale. Floating-point numbers can bring precision problems when the actual numbers are extremely small or extremely large. My own applications typically deal with stuff in the range from a couple of centimeters to a couple hundred meters, and use units such that 1.0 = 1 meter; that has been working fine.
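For example, one common way to tame huge real-world coordinates is to subtract a local origin so the vertices end up near (0, 0, 0), which preserves floating-point precision (a sketch; mapping GIS northing to three.js -Z is just one possible convention):

```javascript
// Pick an origin from the data itself, e.g. the first point.
const origin = { x: 339494.5, y: 1294953.7 };

// GIS (easting, northing, height) -> three.js (x east, y up, -z north).
function toLocal(p) {
  return new THREE.Vector3(p.x - origin.x, p.z, -(p.y - origin.y));
}
```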
Having fun with D3 geo orthographic projection to build an interactive globe, based on all the great examples I found.
You can see my simple mockup at http://bl.ocks.org/patricksurry/5721459
I want the user to manipulate the globe like a trackball (http://www.opengl.org/wiki/Trackball). I started with one of Mike's examples (http://mbostock.github.io/d3/talk/20111018/azimuthal.html) and improved it slightly to use canvas coordinates and to express the mouse locations in 'trackball coordinates' (i.e. rotation around the canvas's horizontal and vertical axes), so that a fixed mouse movement gives more rotation near the edges of the globe (and works outside the globe if you use the hyperbolic extension explained above), rather than Mike's one-to-one correspondence.
It works nicely when the globe starts at an unrotated position (north pole vertical), but when the globe is already rotated (manipulate the example so the north pole is facing out of the page) then the trackball controls become non-intuitive because you can't simply express a change in trackball coordinates as a delta in the d3.geo.rotate lat/lon coordinates. D3's 3-axis rotation involves applying a longitude rotation (spin around north pole), then a latitude rotation (spin around a horizontal axis in the canvas plane), and then a 'yaw' rotation (spin around an axis perpendicular to the plane) - see http://bl.ocks.org/mbostock/4282586.
I guess what I need is a method for composing my two rotation matrices (the one currently in the projection, with a new one to rotate the trackball slightly), but I can't see a way to do that in D3, other than digging into the source (https://github.com/mbostock/d3/blob/master/src/geo/rotation.js) and trying to do the math to define the rotation matrix. The code looks elegant but comment-free and I'm not sure I can correctly decipher the closures with the orthographic projection instance.
On the last point, if someone knows the rotation matrix form of d3.geo.projection that would probably solve my problem too.
Any ideas?
There is an alternative to patricksurry's solution, using quaternion representations, as inspired by Jason Davies. I, too, thought D3 would have already supported this composition natively! And I hoped Jason Davies had posted his code...
It took some time to figure out the math. A demo is uploaded here, with an attempt to explain the math too: http://bl.ocks.org/ivyywang/7c94cb5a3accd9913263
With my limited math knowledge, I think one of the advantages of quaternions over Euler angles is the ability to compound multiple rotations over and over without worrying about coordinate references. So it will always work, no matter where your north pole faces and no matter how many rotations you apply. (Someone please correct me if I got this wrong.)
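A bare-bones sketch of that idea (my own helper functions, not code from the linked demo; getting the Euler extraction to match the [λ, φ, γ] convention that projection.rotate() expects is exactly where the careful work in the demo lies):

```javascript
// Hamilton product of quaternions represented as [w, x, y, z].
function multiply(a, b) {
  return [
    a[0]*b[0] - a[1]*b[1] - a[2]*b[2] - a[3]*b[3],
    a[0]*b[1] + a[1]*b[0] + a[2]*b[3] - a[3]*b[2],
    a[0]*b[2] - a[1]*b[3] + a[2]*b[0] + a[3]*b[1],
    a[0]*b[3] + a[1]*b[2] - a[2]*b[1] + a[3]*b[0]
  ];
}

// Quaternion for a rotation of `angle` radians about unit axis [x, y, z].
function fromAxisAngle(axis, angle) {
  const s = Math.sin(angle / 2);
  return [Math.cos(angle / 2), axis[0] * s, axis[1] * s, axis[2] * s];
}

// One standard quaternion-to-Euler conversion, in degrees; the axis
// conventions must be matched to d3.geo.rotate's [lambda, phi, gamma].
function toEuler([w, x, y, z]) {
  const toDeg = 180 / Math.PI;
  return [
    Math.atan2(2 * (w*x + y*z), 1 - 2 * (x*x + y*y)) * toDeg,
    Math.asin(Math.max(-1, Math.min(1, 2 * (w*y - z*x)))) * toDeg,
    Math.atan2(2 * (w*z + x*y), 1 - 2 * (y*y + z*z)) * toDeg
  ];
}

// Compounding: left-multiply each incremental trackball rotation onto
// the accumulated orientation, then hand the result to the projection.
let q = [1, 0, 0, 0]; // identity
function applyDrag(axis, angle, projection) {
  q = multiply(fromAxisAngle(axis, angle), q);
  projection.rotate(toEuler(q));
}
```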
I decided that solving for the combined rotation matrix might not be so hard. I got http://sagemath.org to do most of the hard work, so that I could express the composition of the original projection rotate() orientation plus a trackball rotation as a single equivalent rotate().
This gives much more natural behavior regardless of the orientation of the globe.
I updated the mockup so that it has the improved version - see http://bl.ocks.org/patricksurry/5721459
The sources are at http://bl.ocks.org/patricksurry/5721459 and include an explanation of the math. It's cool that you can use proper Greek letters in JavaScript for almost readable math source code!
It would still be good if D3 supported composition of rotate operations natively (or maybe it does already?!)
A quick introduction:
We're developing a positioning system that works the following way. Our camera is situated on a robot and is pointed upwards (looking at the ceiling). On the ceiling we have something like landmarks, thanks to which we can compute the position of the robot. It looks like this:
Our problem:
The camera is tilted a bit (0-4 degrees, I think), because the surface of the robot is not perfectly even. That means that when the robot turns around but stays at the same coordinates, the camera looks at a different position on the ceiling, and therefore our positioning program yields a different position for the robot, even though it only turned around and didn't move at all.
Our current (hardcoded) solution:
We've taken some test photos from the camera, turning it around the lens axis. From the pictures we've deduced that it's tilted ca. 4 degrees in the "up" direction of the picture. Using some simple geometrical transformations we've managed to reduce the tilt effect and find the real camera position. In the following pictures the grey dot marks the center of the picture, and the black dot is the real place on the ceiling under which the camera is situated. The black dot was transformed from the grey dot (its position was computed by correcting the grey dot's position). As you can easily notice, the grey dots form a circle on the ceiling, and the black dot is the center of this circle.
The problem with our solution:
Our approach is completely unportable. If we moved the camera to a new robot, the angle and direction of tilt would have to be completely recalibrated. Therefore we wanted to leave the calibration phase to the user, which would require taking some pictures, assessing the tilt parameters himself, and then setting them in the program. My question to you: can you think of any better (more automatic) solution for computing the tilt parameters or correcting the tilt in the pictures?
Nice work. To have an automatic calibration is a nice challenge.
An idea would be to use the parallel lines from the roof tiles:
If the camera is perfectly level, then all lines will be parallel in the picture too.
If the camera is tilted, then all lines will be secant (they intersect in the vanishing point).
Now, this is probably very hard to implement. With the camera you're using, distortion needs to be corrected first so that lines are indeed straight.
Your practical approach is probably simpler and more robust. As you describe it, it seems it can be automated to become user-friendly: make the robot turn in place and identify programmatically which point remains at the same place in the picture.
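For instance, the turn-in-place calibration could fit a circle to the tracked point's pixel positions; the circle's center is where the point would sit if the camera were level. A sketch using a simple algebraic (Kåsa) circle fit, with my own function names:

```javascript
// 3x3 determinant, used to solve the fit's normal equations below.
function det3(m) {
  return m[0][0] * (m[1][1]*m[2][2] - m[1][2]*m[2][1])
       - m[0][1] * (m[1][0]*m[2][2] - m[1][2]*m[2][0])
       + m[0][2] * (m[1][0]*m[2][1] - m[1][1]*m[2][0]);
}

// Fit x^2 + y^2 + Ax + By + C = 0 to the points by least squares;
// the circle center is (-A/2, -B/2).
function fitCircleCenter(points) { // points: [{x, y}, ...]
  let sx = 0, sy = 0, sxx = 0, syy = 0, sxy = 0, sxz = 0, syz = 0, sz = 0;
  for (const { x, y } of points) {
    const z = x * x + y * y;
    sx += x; sy += y; sxx += x * x; syy += y * y; sxy += x * y;
    sxz += x * z; syz += y * z; sz += z;
  }
  const n = points.length;
  const d = det3([[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, n]]);
  const A = det3([[-sxz, sxy, sx], [-syz, syy, sy], [-sz, sy, n]]) / d;
  const B = det3([[sxx, -sxz, sx], [sxy, -syz, sy], [sx, -sz, n]]) / d;
  return { x: -A / 2, y: -B / 2 };
}
```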
Context: trying to take THREE.js and use it to display conic sections.
Method: create a mesh of vertices and then connect Face4s to all of them. I used two faces to produce a front and a back side, so that when the conic section rotates it won't matter from which angle the camera views it.
Problems encountered:
1. Trying to find a good way to create an intuitive mouse rotation scheme. If you think in spherical coordinates, then it feels like making up/down change phi and left/right change theta would work. But that requires being able to move the camera. As far as I can tell, there is no way to actively change the rotation of anything besides the objects. Does anyone know how to change the rotation of the camera or scene?
2. Is there a way to graph functions that is better than creating a mesh? If the mesh has many points then it is too slow, and if the mesh has few points then you cannot easily make out the shape of the conic sections.
Any sort of help would be most excellent.
I'm still starting to learn Three.js, so I'm not sure about the second part of your question.
For the first part, to change the camera, there is a very good way, which could also include zooming and moving the scene: the trackball camera.
For the exact code and how to use it, you can view:
https://github.com/mrdoob/three.js/blob/master/examples/webgl_trackballcamera_earth.html
At the bottom of this page (http://mrdoob.com/122/Threejs) you can see the example in action (the globe in the third row from the bottom).
There is an orbit control script for the three.js camera.
I'm not sure if I understand the rotation bit. You do want to rotate an object, but you are correct, the rotation is relative.
When you rotate or move your camera, a matrix is calculated for that position/rotation, and it does indeed rotate the scene while keeping the camera static.
This is irrelevant, though, because you work in model/world space and position your camera in it; the engine takes care of the rotations under the hood.
What you probably want is to set up an object, hook up your rotation with spherical coordinates, and link your camera as a child of this object. A translation along the camera's Z axis relative to the object should mimic your dolly (zoom is an FOV change).
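A minimal sketch of that setup (the names are mine):

```javascript
// The pivot is the object the camera orbits; the camera is its child.
const pivot = new THREE.Object3D();
scene.add(pivot);
camera.position.set(0, 0, 10); // 10 units back along the pivot's Z
pivot.add(camera);
camera.lookAt(pivot.position);

// Hook spherical-style angles up to mouse deltas, e.g.:
function onDrag(dTheta, dPhi) {
  pivot.rotation.y += dTheta; // left/right orbits around the Y axis
  pivot.rotation.x += dPhi;   // up/down tilts around the X axis
}

// Dolly by translating the camera along its local Z axis.
function dolly(delta) {
  camera.translateZ(delta);
}
```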
You can rotate the camera by changing its position. See the code I pasted here: https://gamedev.stackexchange.com/questions/79219/three-js-camera-turning-leftside-right
As others are saying OrbitControls.js is an intuitive way for users to manage the camera.
I tackled many of the same issues when building formulatoy.net. I used morphing geometries, since I found that mapping 3D math functions to a UV surface requires very little code, and it allowed an easy way to implement different coordinate systems (Cartesian, spherical, cylindrical).
You could use particles instead of a mesh, I suppose, but a mesh seems best. The lattice material is not too useful if you're trying to understand a surface mathematically. At this point I'm thinking of drawing my own X, Y lines (or phi, theta lines, etc.) on the surface to better demonstrate cross-sections.
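For the function-graphing part of the question, one option along these lines is three.js's parametric geometry helper (a sketch; in recent releases ParametricGeometry lives in the examples/addons folder rather than the core, and older releases expect the callback to return a Vector3 instead of filling `target`):

```javascript
// Map (u, v) in [0,1]^2 to a double cone, one of the conic-section
// surfaces from the question.
function cone(u, v, target) {
  const theta = u * 2 * Math.PI; // angle around the axis
  const h = (v - 0.5) * 4;       // height along the axis
  const r = Math.abs(h);         // cone: radius grows with |height|
  target.set(r * Math.cos(theta), h, r * Math.sin(theta));
}

const geometry = new THREE.ParametricGeometry(cone, 64, 64);
const material = new THREE.MeshNormalMaterial({ side: THREE.DoubleSide });
scene.add(new THREE.Mesh(geometry, material));
```

The slices/stacks counts (64 here) give direct control over the speed-versus-smoothness trade-off the question mentions.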
Hope that helps.
You can use trackball controls, with which you can zoom in and out of an object, rotate it, and pan. With trackball controls you are moving the camera around the object; the object still rotates with respect to the screen or renderer center (0, 0, 0).
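Typical wiring looks something like this (the import path varies by three.js release; older builds expose THREE.TrackballControls via a separate script instead):

```javascript
import { TrackballControls } from 'three/examples/jsm/controls/TrackballControls.js';

const controls = new TrackballControls(camera, renderer.domElement);

function animate() {
  requestAnimationFrame(animate);
  controls.update(); // required every frame for the controls to apply
  renderer.render(scene, camera);
}
animate();
```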