How to rotate a model on a plane?

I am using a plane geometry to represent a terrain model with different "y" values (altitude). Using the raycaster function, I am also able to move the model on the plane.
I need a way to rotate the model to be parallel with the current face it's on, without changing its path orientation.
Is there a way to define rotation by a face of a geometry?

Correct me if I'm wrong here, but it sounds like you want to combine both local (on the face) and global (in the direction of your "path orientation") rotations. This is, in general, one of those tricky and somewhat context-specific problems that require you to mix two different sources of rotation. In a typical Euler-style rotation, it sounds like you want to rotate around Y according to the path (I'm assuming the path is in top-down 2D here -- it's these assumptions that make the problem impossible to answer definitively!), while rotating around X and Z according to the normal of the surface. Try assembling a THREE.Euler that way -- does it get you in the neighborhood?
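A minimal sketch of that idea, using quaternions instead of a raw Euler (often easier to get right); `mesh`, `faceNormal`, and `pathAngle` are assumed names, not from the original question:

```js
// Align the model's up axis to the face normal while keeping the
// path heading around Y. `faceNormal` is a world-space THREE.Vector3,
// `pathAngle` is the top-down heading in radians.
function alignToFace(mesh, faceNormal, pathAngle) {
  const up = new THREE.Vector3(0, 1, 0);

  // Quaternion that tilts the default up vector onto the face normal.
  const tilt = new THREE.Quaternion().setFromUnitVectors(
    up,
    faceNormal.clone().normalize()
  );

  // Heading around Y taken from the 2D path.
  const heading = new THREE.Quaternion().setFromAxisAngle(up, pathAngle);

  // Apply the heading first, then tilt the result onto the face.
  mesh.quaternion.copy(tilt).multiply(heading);
}
```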

Related

three.js: When to move / rotate geometry and when mesh?

I compose multiple STLs for 3D printing / milling. For that I also use CSG and need some raytracing for detecting features of the models.
My scene is pretty much static. Just have to move around the models to arrange them. For this use case I'm not really sure which approach for moving / rotating the models is right.
Currently I manipulate the BufferGeometries directly. So everything in the geometry is like in the real world. Each position, each normal. No calculation from / to local or world coordinates.
On the other hand, I could do the same thing by changing the meshes, which means changing just a matrix.
To me, working with the mesh feels more suited to animation and the like, while working with the geometry manipulates the real object, which is my intention.
I'm wondering when one would translate / rotate the geometry and when the mesh. I know that manipulating the geometry is not best for CPU, which is not a problem for my use case.
Geometry can be translated so that subsequent transformations (such as scale or rotation) originate from a more convenient point. Meshes can share a geometry, which you give up if you bake a different transform into each copy's vertex data. There are unique use cases for either, if you care to memorize the list; sometimes the decision is made for me by a preexisting code sample or by some other aspect of the process, and otherwise it comes down to which is more convenient. One pattern I like is modifying a dummy Object3D with the usual position/rotation/scale methods and then updating the geometry from its matrix. As for normals, there's a whole book to be written there; just remember that transforming geometry directly affects them too.
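A sketch of that dummy-Object3D pattern, assuming `geometry` is a THREE.BufferGeometry and a three.js version recent enough to have `applyMatrix4`:

```js
// Pose a throwaway Object3D, then bake its transform into the
// geometry's vertex data.
const dummy = new THREE.Object3D();
dummy.position.set(10, 0, 5);       // example transform, made up here
dummy.rotation.y = Math.PI / 4;
dummy.updateMatrix();               // compose position/rotation/scale into .matrix

// Transforms positions (and normals) in place.
geometry.applyMatrix4(dummy.matrix);
```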

Threejs - can you use circleBufferGeometry with Points material?

I am setting up a particle system in threejs by adapting the buffer geometry drawcalls example in threejs. I want to create a series of points, but I want them to be round.
The documentation for threejs points says it accepts geometry or buffer geometry, but I also noticed there is a circleBufferGeometry. Can I use this?
Or is there another way to make the points round besides using sprites? I'm not sure, but it seems like loading an image for each particle would cause a lot of unnecessary overhead.
So, in short, is there a more performant or simple way to make a particle system of round particles (spheres or discs) in threejs without sprites?
If you want to draw each "point"/"particle" as a geometric circle, you can use THREE.InstancedBufferGeometry, or take a look at this.
The geometry of a Points object defines where the points exist in 3D space. It does not define the shape of the points. Points are also drawn as quads, so they're always going to be a square, though they don't have to appear that way.
Your first option is to (as you pointed out) load a texture for each point. I don't really see how this would introduce "a lot" of overhead, because the texture would only be loaded once, and would be applied to all points. But, I'm sure you have your reasons.
Your other option is to create your own shader to draw the point as a circle. This method takes the point's square area and discards any fragments (the pixel-sized pieces a primitive is broken into during rasterization) outside the circle.
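A sketch of that discard approach (the shader code here is an illustration, not from the original answer); `projectionMatrix`, `modelViewMatrix`, and `position` are built-ins that three.js injects into ShaderMaterial shaders:

```js
// A ShaderMaterial for THREE.Points that throws away fragments outside
// the circle inscribed in each point's quad.
const roundPointsMaterial = new THREE.ShaderMaterial({
  vertexShader: `
    void main() {
      gl_PointSize = 16.0; // quad size in pixels, made up here
      gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
    }
  `,
  fragmentShader: `
    void main() {
      // gl_PointCoord runs from 0..1 across the point's quad.
      vec2 fromCenter = gl_PointCoord - vec2(0.5);
      if (length(fromCenter) > 0.5) discard; // outside the circle
      gl_FragColor = vec4(1.0);
    }
  `
});
// Usage: new THREE.Points(yourBufferGeometry, roundPointsMaterial)
```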

Using three.js, how would you project a globe world to a map on the screen?

I am curious about the limits of three.js. The following question is asked mainly as a challenge, not because I actually need the specific knowledge/code right away.
Say you have a game/simulation world modeled around a sphere geometry representing a planet, like the worlds of the game Populous. The resolution of polygons and textures is sufficient to look smooth when the globe fills the view of an ordinary camera. There are animated macroscopic objects on the surface.
The challenge is to project everything from the model to a global map projection on the screen in real time. The choice of projection is yours, but it must be seamless/continuous, and it must be possible for the user to rotate it, placing any point on the planet surface in the center of the screen. (It is not an option to maintain an alternative model of the world only for visualization.)
There are no limits on the number of cameras etc. allowed, but the performance must be expected to be "realtime", say two-figured FPS or more.
I don't expect any proof in the form of a running application (although that would be cool), but some explanation as to how it could be done.
My own initial idea is to place a lot of cameras, in fact one for every pixel in the map projection, around the globe, within a Group object that is attached to some kind of orbit controls (with rotation only), but I expect the number of object culling operations to become a huge performance issue. I am sure there must exist more elegant (and faster) solutions. :-)
Why not just use a spherical camera model (think of a 360° camera) and virtually put it in the center of the sphere? So this camera would (if it were physically possible) be wrapped all around the sphere, looking toward the center from all directions.
This camera could be implemented in shaders (instead of the regular projection-matrix) and would produce an equirectangular image of the planet-surface (or in fact any other projection you want, like spherical mercator-projection).
As far as I can tell, the vertex shader can implement any projection you want, and it doesn't need to represent a camera that is physically possible. It just needs to produce consistent clip-space coordinates for all vertices. Fragment shaders for lighting would still need to operate on the original coordinates, normals, etc., but that should be achievable. So the vertex shader would just need to compute (x, y, z) => (phi, theta, r) and go on from there.
Occlusion-culling would need to be disabled, but iirc three.js doesn't do that anyway.
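A rough sketch of such a vertex shader, under two stated assumptions: the planet is centered at the world origin, and the depth term is a placeholder you'd normalize against your own near/far range. `modelMatrix` is another built-in three.js injects into ShaderMaterial shaders:

```js
// Maps world positions to (longitude, latitude) and emits them directly
// as clip-space x/y; the radius stands in for depth.
const equirectangularVertexShader = `
  void main() {
    vec4 worldPos = modelMatrix * vec4(position, 1.0);
    float r = length(worldPos.xyz);
    float theta = atan(worldPos.z, worldPos.x);          // longitude, -PI..PI
    float phi = asin(clamp(worldPos.y / r, -1.0, 1.0));  // latitude, -PI/2..PI/2
    // Longitude/latitude fill clip space (-1..1); r * 0.001 is a stand-in
    // for a properly normalized depth value.
    gl_Position = vec4(theta / 3.14159265, phi / 1.57079633, r * 0.001, 1.0);
  }
`;
```

Note that a real implementation would also have to deal with triangles straddling the ±180° longitude seam, which would otherwise smear across the whole screen.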

Is it possible to use GIS terrain vector data in three.js?

I'm new to three.js and WebGL in general.
The sample at http://css.dzone.com/articles/threejs-render-real-world shows how to use raster GIS terrain data in three.js
Is it possible to use vector GIS data in a scene? For example, I have a series of points representing locations (including height) stored in real-world coordinates (meters). How would I go about displaying those in three.js?
The basic sample at http://threejs.org/docs/59/#Manual/Introduction/Creating_a_scene shows how to create a geometry using coordinates - could I use a similar approach with real-world coordinates such as
"x" : 339494.5,
"y" : 1294953.7,
"z": 0.75
or do I need to convert these into page units? Could I use my points to create a surface on which to drape an aerial image?
I tried modifying the simple sample but I'm not seeing anything (or any error messages): http://jsfiddle.net/slead/KpCfW/
Thanks for any suggestions on what I'm doing wrong, or whether this is indeed possible.
I did a number of things to get the JSFiddle to show something; here: http://jsfiddle.net/HxnnA/
You did not specify any faces in your geometry. In this case I just hard-coded a face with all three of your data points acting as corners. Alternatively, you can look into using particles to display your data as points instead of faces.
Set the material's side to THREE.DoubleSide. This is not usually needed or recommended, but it helps debugging in the early phases, when you can see both sides of a face.
Your camera was probably looking in the wrong direction. I added a lookAt() to point it at the center and made the field of view wider (this just makes it easier to find things while coding).
Your camera's near and far planes were likely out of range for the camera position and terrain dimensions, so I increased the far plane distance.
Your coordinate values were quite huge, so I modified them by hand a bit to make sense in relation to the camera, and to make sure they form a big enough triangle to be seen. You could consider dividing your coordinates by something like 100 to make the units smaller, but adjusting the camera to account for the huge scale should be enough too.
Nothing is wrong with your approach; just make sure you feed the data so that it makes sense considering the camera location, direction, and near and far planes. Pay attention to how you build the faces: the parameters to Face3 are the indices of each point in your vertices array. Later on you might need to take winding order, normals, and UVs into account. You can study the geometry classes included in Three.js for reference.
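A hedged sketch of those fixes together, using the legacy THREE.Geometry / THREE.Face3 API that existed at the time; the second and third vertices are made up for illustration, and `camera` is assumed to be an existing THREE.PerspectiveCamera:

```js
// Build a single triangle from real-world-scale coordinates.
const geometry = new THREE.Geometry();
geometry.vertices.push(
  new THREE.Vector3(339494.5, 1294953.7, 0.75),
  new THREE.Vector3(339510.0, 1294953.7, 5.0),  // assumed second point
  new THREE.Vector3(339500.0, 1294970.0, 2.0)   // assumed third point
);
geometry.faces.push(new THREE.Face3(0, 1, 2));  // Face3 takes vertex indices
geometry.computeFaceNormals();

const material = new THREE.MeshBasicMaterial({
  color: 0x88aa55,
  side: THREE.DoubleSide   // render both sides while debugging
});
const mesh = new THREE.Mesh(geometry, material);

// Point the camera at the data and push the far plane out to cover it.
camera.position.set(339500, 1294930, 50);
camera.far = 1e7;
camera.updateProjectionMatrix();
camera.lookAt(new THREE.Vector3(339500, 1294960, 0));
```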
Three.js does not assign any meaning to units. It's just floating point numbers, and you can decide for yourself what a unit (1.0) represents. Whether it's 1 mm, 1 inch, or 1 km depends on what makes the most sense for the application and its scale. Floating point numbers can bring precision problems when the actual numbers are extremely small or extremely big. My own applications typically deal with things in the range from a couple of centimeters to a couple hundred meters, and use units such that 1.0 = 1 meter; that has been working fine.
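A small illustration of dodging those precision issues: recenter large real-world coordinates around a local origin before building geometry. The origin choice and axis mapping here are assumptions, not from the original answer:

```js
// GIS data is typically z-up with huge easting/northing values;
// subtract a local origin and map height onto three.js's y-up axis.
const origin = new THREE.Vector2(339494.5, 1294953.7); // easting, northing
function toLocal(easting, northing, height) {
  return new THREE.Vector3(easting - origin.x, height, -(northing - origin.y));
}
```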

Orthographic 3D Backface Culling using Surface Normals

I'm creating an HTML5 canvas 3D renderer, and I'd say I've gotten pretty far without the help of SO, but I've run into a showstopper of sorts. I'm trying to implement backface culling on a cube with the help of some normals calculations. Also, I've tagged this as WebGL, as this is a general enough question that it could apply to both my use case and a 3D-accelerated one.
At any rate, as I'm rotating the cube, I've found that the wrong faces are being hidden.
I'm using the following vertices:
https://developer.mozilla.org/en/WebGL/Creating_3D_objects_using_WebGL#Define_the_positions_of_the_cube%27s_vertices
The general procedure I'm using is:
Create a transformation matrix by which to transform the cube's vertices
For each face, and for each point on each face, I convert these to vec3s and multiply them by the matrix made in step 1.
I then get the surface normal of the face using Newell's method, then get a dot-product from that normal and some made-up vec3, e.g., [-1, 1, 1], since I couldn't think of a good value to put in here. I've seen some folks use the position of the camera for this, but...
Skipping the usual step of using a camera matrix, I pull the x and y values from the resulting vectors to send to my line and face renderers, but only if they have a dot-product above 0. I realize it's rather arbitrary which ones I pull, really.
I'm wondering two things: whether my procedure in step 3 is correct (it most likely isn't), and whether the order of the points I'm drawing on the faces is incorrect (very likely). If the latter is true, I'm not quite sure how to visualize the problem. I've seen people say that normals aren't pertinent, that what matters is the direction the line is being drawn, but it's hard for me to wrap my head around that, or whether that's the source of my problem.
It probably doesn't matter, but the matrix library I'm using is gl-matrix:
https://github.com/toji/gl-matrix
Also, the particular file in my open source codebase I'm using is here:
http://code.google.com/p/nanoblok/source/browse/nb11/app/render.js
Thanks in advance!
I haven't reviewed your entire system, but the “made-up vec3” should not be arbitrary; it should be the “out of the screen” vector, which (since your projection is ⟨x, y, z⟩ → ⟨x, y⟩) is either ⟨0, 0, -1⟩ or ⟨0, 0, 1⟩ depending on your coordinate system's handedness and screen axes. You don't have an explicit "camera matrix" (that is usually called a view matrix), but your camera (view and projection) is implicitly defined by your step 4 projection!
However, note that this approach will only work for orthographic projections, not perspective ones (consider a face on the left side of the screen, facing rightward and parallel to the view direction; the dot product would be 0 but it should be visible). The usual approach, used in actual 3D hardware, is to first do all of the transformation (including projection), then check whether the resulting 2D triangle is counterclockwise or clockwise wound, and keep or discard based on that condition.
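A sketch of that winding-order test: after projecting a triangle to 2D, its signed area gives the winding, and you cull based on the sign. The `a`, `b`, `c` point shape is an assumption for illustration:

```js
// `a`, `b`, `c` are {x, y} points in screen space.
function isFrontFacing(a, b, c) {
  // Twice the signed area; the sign flips with the winding direction.
  const signedArea = (b.x - a.x) * (c.y - a.y) - (c.x - a.x) * (b.y - a.y);
  // Which sign counts as "front" depends on your winding convention and
  // on the canvas y-axis pointing down.
  return signedArea < 0;
}
```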
