I'm trying to create a zoom box. So far I've managed to translate the cursor position from local to world coordinates and create a box object around the cursor with the right UVs.
Here is the fiddle of my attempt: https://jsfiddle.net/2ynfedvk/2/
Without scaling, the box is perfectly centered around the cursor, but if you toggle the scaling checkbox to set the scale with zoomMesh.scale.set(1.5, 1.5, 1), the box position shifts more and more the farther you move the cursor from the scene center.
Am I missing some CSS-like "transform-origin" equivalent in three.js to center the scale around the object? Is this the right approach to get this kind of zoom effect?
I'm new to three.js and 3D in general, so thanks for any help.
When you scale your mesh by 1.5, it means a transform matrix is applied that scales the vertex coordinates.
The issue comes from where those vertices live: they are in the local space of the mesh. When you set the top-left vertex of the square to [10, 10, 0], for example, and then apply .scale.set(1.5, 1.5, 1) to the mesh, that vertex coordinate becomes [15, 15, 0]. The same happens to the other three vertices, and that's why the center of the square ends up at 1.5 times its distance from the center of the picture instead of under the mouse pointer.
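To see this concretely, here is a minimal sketch (the mesh and the numbers are illustrative, not taken from your fiddle):
// A vertex at (10, 10, 0) in local space lands at (15, 15, 0) in world space once the mesh is scaled.
const mesh = new THREE.Mesh(new THREE.PlaneGeometry(20, 20), new THREE.MeshBasicMaterial());
mesh.scale.set(1.5, 1.5, 1);
mesh.updateMatrixWorld(true);
console.log(mesh.localToWorld(new THREE.Vector3(10, 10, 0))); // Vector3(15, 15, 0)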
So, an option is not to scale the mesh, but to change the size of the square.
I changed your fiddle a bit, so maybe it will be more explanatory:
const
  [width, height] = [500, 300],
  canvas = document.querySelector('canvas'),
  scaleCheckBox = document.querySelector('input')
;
canvas.width = width;
canvas.height = height;
const
  scene = new THREE.Scene(),
  renderer = new THREE.WebGLRenderer({canvas}),
  camDistance = 5,
  // FOV chosen so that, at camDistance, one world unit spans one pixel
  camFov = 2 * Math.atan( height / ( 2 * camDistance ) ) * ( 180 / Math.PI ),
  camera = new THREE.PerspectiveCamera( camFov, width/height, 0.1, 1000 )
;
camera.position.z = camDistance;
const
  texture = new THREE.TextureLoader().load( "https://picsum.photos/500/300" ),
  imageMaterial = new THREE.MeshBasicMaterial( { map: texture, side: 0 } ) // side: 0 = THREE.FrontSide
;
texture.minFilter = THREE.LinearFilter;
texture.magFilter = THREE.LinearFilter;
texture.format = THREE.RGBFormat;
const
  planeGeometry = new THREE.PlaneGeometry( width, height ),
  planeMesh = new THREE.Mesh( planeGeometry, imageMaterial )
;
const
  zoomGeometry = new THREE.BufferGeometry(),
  zoomMaterial = new THREE.MeshBasicMaterial( { map: texture, side: 0 } ),
  zoomMesh = new THREE.Mesh( zoomGeometry, zoomMaterial )
;
zoomMaterial.color.set(0xff0000); // tint the zoom box so it stands out
zoomGeometry.setAttribute('position', new THREE.BufferAttribute(new Float32Array([
  0, 0, 0,
  0, 0, 0,
  0, 0, 0,
  0, 0, 0
]), 3));
zoomGeometry.setIndex([
  0, 1, 2,
  2, 1, 3
]);
scene.add( planeMesh );
scene.add( zoomMesh );
var zoom = 1;
function setZoomBox(e){
  const
    size = 50 * zoom,
    x = e.clientX - (size/2),
    y = -(e.clientY - height) - (size/2),
    coords = [
      x,
      y,
      x + size,
      y + size
    ]
  ;
  // shift from canvas space (origin at a corner) to world space (origin at the center)
  const [x1, y1, x2, y2] = [
    coords[0] - (width/2),
    coords[1] - (height/2),
    coords[2] - (width/2),
    coords[3] - (height/2)
  ];
  zoomGeometry.setAttribute('position', new THREE.BufferAttribute(new Float32Array([
    x1, y1, 0,
    x2, y1, 0,
    x1, y2, 0,
    x2, y2, 0
  ]), 3));
  const [u1, v1, u2, v2] = [
    coords[0]/width,
    coords[1]/height,
    coords[2]/width,
    coords[3]/height
  ];
  // one uv pair per vertex
  zoomGeometry.setAttribute('uv',
    new THREE.BufferAttribute(new Float32Array([
      u1, v1,
      u2, v1,
      u1, v2,
      u2, v2
    ]), 2));
}
function setScale(e){
  //zoomMesh.scale.set(...(scaleCheckBox.checked ? [1.5, 1.5, 1] : [1, 1, 1]));
  zoom = scaleCheckBox.checked ? 1.5 : 1;
}
function render(){
  renderer.render(scene, camera);
  requestAnimationFrame(render);
}
render();
canvas.addEventListener('mousemove', setZoomBox);
scaleCheckBox.addEventListener('change', setScale);
html, body {
  margin: 0;
  height: 100%;
}
body {
  background: #333;
  color: #FFF;
  font: bold 16px arial;
}
<script src="https://threejs.org/build/three.min.js"></script>
<canvas></canvas>
<div>Toggle scale <input type="checkbox" /></div>
Thanks for the answer. It's not quite what I was looking for (I want to not only resize the square but also zoom in on the image), but you pointed me in the right direction.
Like you said, the position coordinates shift with the scale, so I have to recalculate the new position relative to the scale.
Added these new lines, with new scale and offset variables :
if(scaleCheckBox.checked){
  const offset = scale - 1;
  zoomMesh.position.set(
    -((x1 * offset) - (size*scale)/2) - (size/2),
    -((y1 * offset) + (size*scale)/2) - (size/2),
    0
  );
}
Here is the working fiddle: https://jsfiddle.net/dc9f5v0m/
It's a bit messy, with a lot of recalculation (especially to center the cursor around the square), but it gets the job done, and the zoom effect can be achieved with any shape, not only a square.
Thanks again for your help.
I'm trying to figure out a way to use latitude and longitude to put a pin on a map of the USA.
I'm using a perspective camera btw.
This is my mesh, which basically adds a color map and a displacement map to give it some height:
const mapMaterial = new MeshStandardMaterial({
  map: colorTexture,
  displacementMap: this.app.textures[displacementMap],
  metalness: 0,
  roughness: 1,
  displacementScale: 3,
  color: 0xffffff,
  //wireframe: true
})
const mapTextureWidth = 100
const mapTextureHeight = 100
const planeGeom = new PlaneGeometry(mapTextureWidth, mapTextureHeight, mapTextureWidth - 1, mapTextureHeight - 1)
this.mapLayer = new Mesh(planeGeom, mapMaterial)
this.mapLayer.rotation.x = -1
this.mapLayer.position.set(0, 0, 0); // set the original position
I've also angled the camera to give the scene a slight tilt, so we can see the height in the mountains and such.
In the end it looks like this:
What I need to do is add a map pin on the map by using latitude and longitude.
I've played around with converting lat and long to pixels, but that gives me an x and y relative to the screen, not to the map itself (I found this in a different SO post):
convertGeoToPixelPosition(
  latitude, longitude,
  mapWidth,           // in pixels
  mapHeight,          // in pixels
  mapLonLeft,         // in degrees
  mapLonDelta,        // in degrees (mapLonRight - mapLonLeft)
  mapLatBottom,       // in degrees
  mapLatBottomDegree  // despite the name: mapLatBottom converted to radians
) {
  const x = (longitude - mapLonLeft) * (mapWidth / mapLonDelta);
  latitude = latitude * Math.PI / 180; // to radians
  const worldMapWidth = ((mapWidth / mapLonDelta) * 360) / (2 * Math.PI);
  const mapOffsetY = (worldMapWidth / 2) * Math.log((1 + Math.sin(mapLatBottomDegree)) / (1 - Math.sin(mapLatBottomDegree)));
  const y = mapHeight - ((worldMapWidth / 2) * Math.log((1 + Math.sin(latitude)) / (1 - Math.sin(latitude))) - mapOffsetY);
  return { x: x, y: y };
}
Any thoughts on how I can transform the latitude and longitude to world coordinates?
I've already created the sprite for the map pin, and adding it works great; I just have to figure out the proper place to put it...
Add your marker as a child of the mapLayer...
this.mapLayer.add( marker )
then set its position:
marker.position.set( ((x/4096)-0.5)*100, ((y/4096)-0.5)*100, 0 )
where x and y are what you get from your convertGeo function.
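Putting the two steps together, a sketch could look like this (assuming the 4096x4096 map texture implied by the answer and the 100x100 plane from the question; pinMaterial and the y-flip are assumptions, since pixel y grows downward while the plane's local y grows upward):
// Hedged sketch: lat/lon -> map pixels -> plane-local units.
const { x, y } = convertGeoToPixelPosition(
  lat, lon,
  4096, 4096,                       // assumed map texture size
  mapLonLeft, mapLonDelta,
  mapLatBottom, mapLatBottomDegree
);
const marker = new THREE.Sprite( pinMaterial ); // pinMaterial: your existing pin material
marker.position.set(
  ((x / 4096) - 0.5) * 100,         // pixel x -> [-50, 50] across the 100-unit plane
  -((y / 4096) - 0.5) * 100,        // pixel y flipped into plane-local y
  0
);
this.mapLayer.add( marker );        // child of the map, so it inherits the map's tilt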
I have a problem with flickering of THREE.Points depending on their UV coordinates, as seen in the following codepen: http://codepen.io/anon/pen/qrdQeY?editors=0010
The code in the codepen is condensed down as much as possible (171 lines),
but to summarize what I'm doing:
Rendering sprites using THREE.Points
BufferGeometry contains spritesheet index and position for each sprite
RawShaderMaterial with custom vertex and pixel shader to lookup up the UV coordinates of the sprite for the given index
a 128x128px spritesheet with 4x4 cells contains the sprites
Here's the code:
/// FRAGMENT SHADER ===========================================================
const fragmentShader = `
  precision highp float;

  uniform sampler2D spritesheet;

  // number of spritesheet subdivisions both vertically and horizontally,
  // e.g. for a 4x4 spritesheet this number is 4
  uniform float spritesheetSubdivisions;

  // vParams.x = sprite index
  // vParams.z = sprite alpha
  varying vec3 vParams;

  /**
   * Maps regular UV coordinates spanning the entire spritesheet
   * to a specific sprite within the spritesheet based on the given index,
   * which points into a spritesheet cell (depending on spritesheetSubdivisions
   * and assuming that the spritesheet is regular and square).
   */
  vec2 spriteIndexToUV(float idx, vec2 uv) {
    float cols = spritesheetSubdivisions;
    float rows = spritesheetSubdivisions;
    float x = mod(idx, cols);
    float y = floor(idx / cols);
    return vec2(x / cols + uv.x / cols, 1.0 - (y / rows + uv.y / rows));
  }

  void main() {
    vec2 uv = spriteIndexToUV(vParams.x, gl_PointCoord);
    vec4 diffuse = texture2D(spritesheet, uv);
    float alpha = diffuse.a * vParams.z;
    if (alpha < 0.5) discard;
    gl_FragColor = vec4(diffuse.xyz, alpha);
  }
`
// VERTEX SHADER ==============================================================
const vertexShader = `
  precision highp float;

  uniform mat4 modelViewMatrix;
  uniform mat4 projectionMatrix;
  uniform float size;
  uniform float scale;

  attribute vec3 position;
  attribute vec3 params; // x = sprite index, y = unused, z = sprite alpha
  attribute vec3 color;

  varying vec3 vParams;

  void main() {
    vParams = params;
    vec4 mvPosition = modelViewMatrix * vec4( position, 1.0 );
    gl_Position = projectionMatrix * mvPosition;
    gl_PointSize = size * ( scale / - mvPosition.z );
  }
`
// THREEJS CODE ===============================================================
const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(75, window.innerWidth / window.innerHeight, 0.1, 1000);
const renderer = new THREE.WebGLRenderer({canvas: document.querySelector("#mycanvas")});
renderer.setSize(window.innerWidth, window.innerHeight);
renderer.setClearColor(0xf0f0f0)
const pointGeometry = new THREE.BufferGeometry()
pointGeometry.addAttribute("position", new THREE.BufferAttribute(new Float32Array([
-1.5, -1.5, 0,
-0.5, -1.5, 0,
0.5, -1.5, 0,
1.5, -1.5, 0,
-1.5, -0.5, 0,
-0.5, -0.5, 0,
0.5, -0.5, 0,
1.5, -0.5, 0,
-1.5, 0.5, 0,
-0.5, 0.5, 0,
0.5, 0.5, 0,
1.5, 0.5, 0,
-1.5, 1.5, 0,
-0.5, 1.5, 0,
0.5, 1.5, 0,
1.5, 1.5, 0,
]), 3))
pointGeometry.addAttribute("params", new THREE.BufferAttribute(new Float32Array([
0, 0, 1, // sprite index 0 (row 0, column 0)
1, 0, 1, // sprite index 1 (row 0, column 1)
2, 0, 1, // sprite index 2 (row 0, column 2)
3, 0, 1, // sprite index 3 (row 0, column 3)
4, 0, 1, // sprite index 4 (row 1, column 0)
5, 0, 1, // sprite index 5 (row 1, column 1)
6, 0, 1, // ...
7, 0, 1,
8, 0, 1,
9, 0, 1,
10, 0, 1,
11, 0, 1,
12, 0, 1,
13, 0, 1,
14, 0, 1,
15, 0, 1
]), 3))
const img = document.querySelector("img")
const texture = new THREE.TextureLoader().load(img.src);
const pointMaterial = new THREE.RawShaderMaterial({
  transparent: true,
  vertexShader: vertexShader,
  fragmentShader: fragmentShader,
  uniforms: {
    spritesheet: { type: "t", value: texture },
    spritesheetSubdivisions: { type: "f", value: 4 },
    size: { type: "f", value: 1 },
    scale: { type: "f", value: window.innerHeight / 2 }
  }
})
const points = new THREE.Points(pointGeometry, pointMaterial)
scene.add(points)
const render = function (timestamp) {
  requestAnimationFrame(render);
  camera.position.z = 5 + Math.sin(timestamp / 1000.0);
  renderer.render(scene, camera);
};
requestAnimationFrame(render); // start the loop with a real timestamp
// resize viewport
window.addEventListener( 'resize', onWindowResize, false );
function onWindowResize(){
  camera.aspect = window.innerWidth / window.innerHeight;
  camera.updateProjectionMatrix();
  renderer.setSize( window.innerWidth, window.innerHeight );
}
If you have an Nvidia card you will see three sprites flicker while the camera
is moving back and forth along the Z axis. On integrated Intel graphics chips
the problem does not occur.
I'm not sure how to solve this problem. The affected uv coordinates seem kind of random. I'd be grateful for any pointers.
The mod()/floor() calculations inside your spriteIndexToUV() function are causing problems in certain cases (when the sprite index is a multiple of spritesheetSubdivisions).
I could fix it by tweaking the cols variable with a small epsilon:
vec2 spriteIndexToUV(float idx, vec2 uv)
{
  float cols = spritesheetSubdivisions - 1e-6; // subtract epsilon
  float rows = spritesheetSubdivisions;
  float x = mod(idx, cols);
  float y = floor(idx / cols);
  return vec2(x / cols + uv.x / cols, 1.0 - (y / rows + uv.y / rows));
}
PS: That codepen stuff is really cool, didn't know that this existed :-)
edit: It might be even better/clearer to write it like this:
float cols = spritesheetSubdivisions;
float rows = spritesheetSubdivisions;
float y = floor ((idx+0.5) / cols);
float x = idx - cols * y;
That way, we keep totally clear of any critical situations in the floor operation -- plus we get rid of the mod() call.
As to why floor (idx/4) is sometimes producing 0 instead of 1 when idx should be exactly 4.0, I can only speculate that the varying vec3 vParams is subjected to some interpolation when it goes from the vertex-shader to the fragment-shader stage, thus leading to the fragment-shader seeing e.g. 3.999999 instead of exactly 4.0.
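A quick CPU-side sanity check (a JavaScript sketch of the same arithmetic, just to illustrate the idea) shows that the rounded form still picks the right cell even when an index arrives as 3.999999:
// Sketch: robust index -> row/column mapping from the snippet above.
const cols = 4;
[3, 4, 3.999999, 7.999999].forEach(idx => {
  const row = Math.floor((idx + 0.5) / cols); // never sits right on a cell boundary
  const col = idx - cols * row;               // replaces mod(idx, cols)
  console.log(idx, '-> row', row, ', col', Math.round(col));
});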
I'm doing a spring physics simulation, using 2D samplers to house and pre-process some position data in a fragment shader, and I'm getting very odd results. If I start with 16 individually located springs (each one a point at the end of an invisible spring originating from an invisible anchor), the visualization ends up with eight pairs, each pair hanging from the same spring anchor point. However, if I run the visualization placing the points using only the tOffsets values, all the information needed to calculate each of the anchor points is there and displays correctly (though with no physics, of course). It's once I add the spring physics back in that I end up with pairs again. Also, from watching the visualization, I can tell that the pairs' anchor point values are none of the original 16 anchor point values. Any idea what's going on here? (See both the fiddle and the starred inline comments below.)
(three.js v 80)
See the fiddle using v79 here.
uniform sampler2D tPositions;
uniform sampler2D tOffsets;
varying vec2 vUv;

void main() {
  float damping = 0.98;
  vec4 nowPos = texture2D( tPositions, vUv ).xyzw;
  vec4 offsets = texture2D( tOffsets, vUv ).xyzw;
  vec2 velocity = vec2(nowPos.z, nowPos.w);
  vec2 anchor = vec2( offsets.x, 130.0 );

  // Newton's law: F = M * A
  float mass = 24.0;
  vec2 acceleration = vec2(0.0, 0.0);

  // 1. apply gravity's force: **this works fine
  vec2 gravity = vec2(0.0, 2.0);
  gravity /= mass;
  acceleration += gravity;

  // 2. apply the spring force ** something goes wrong once I add the spring physics - the springs display in pairs
  float restLength = length(anchor.y - offsets.y);
  float springConstant = 0.2;

  // vector pointing from anchor to point position
  vec2 springForce = vec2(nowPos.x - anchor.x, nowPos.y - anchor.y);
  // length of the vector
  float distance = length( springForce );
  // stretch is the difference between the current distance and restLength
  float stretch = distance - restLength;

  // calculate springForce according to Hooke's law
  springForce = normalize( springForce );
  springForce *= (1.0 * springConstant * stretch);
  springForce /= mass;
  acceleration += springForce; // ** If I comment out this line, all points display where expected, and fall according to gravity. If I add it back in, the springs work properly but display in 8 pairs as opposed to 16 independent locations

  velocity += acceleration;
  velocity *= damping;
  vec2 newPosition = vec2(nowPos.x - velocity.x, nowPos.y - velocity.y);

  // write the new position out to a texture for the next shader
  gl_FragColor = vec4(newPosition.x, newPosition.y, velocity.x, velocity.y); // **the pair problem shows up with this line active

  // sanity checks with comments:
  // gl_FragColor = vec4(newPosition.x, newPosition.y, 0.0, 0.0); // **the pair problem also shows up in this case
  // gl_FragColor = vec4( offsets.x, offsets.y, velocity ); // **all points display in the correct position, though no physics
  // gl_FragColor = vec4(nowPos.x, nowPos.y, 0.0, 0.0); // **all points display in the correct position, though no physics
}
UPDATE 1:
Could the problem be with the number of values (rgba, xyzw) agreeing between all of the pieces of my program? I've specified rgba values wherever I can think to, but perhaps I've missed one somewhere. Here is a snippet from my javascript:
if ( ! renderer.context.getExtension( 'OES_texture_float' ) ) {
  alert( 'OES_texture_float is not supported :(' );
}
var width = 4, height = 4;
particles = width * height;
// Start creation of DataTexture
var positions = new Float32Array( particles * 4 );
var offsets = new Float32Array( particles * 4 );
// hardcoded dummy values for the sake of debugging:
var somePositions = [10.885510444641113, -6.274578094482422, 0, 0, -10.12020206451416, 0.8196354508399963, 0, 0, 35.518341064453125, -5.810637474060059, 0, 0, 3.7696402072906494, -3.118760347366333, 0, 0, 9.090447425842285, -7.851400375366211, 0, 0, -32.53229522705078, -26.4628849029541, 0, 0, 32.3623046875, 22.746187210083008, 0, 0, 7.844726085662842, -15.305091857910156, 0, 0, -32.65345001220703, 22.251712799072266, 0, 0, -25.811357498168945, 32.4153938293457, 0, 0, -28.263731002807617, -31.015430450439453, 0, 0, 2.0903847217559814, 1.7632032632827759, 0, 0, -4.471604347229004, 8.995194435119629, 0, 0, -12.317420959472656, 12.19576358795166, 0, 0, 36.77312469482422, -14.580523490905762, 0, 0, 36.447078704833984, -16.085195541381836, 0, 0];
for ( var i = 0, i4 = 0; i < particles; i++, i4 += 4 ) {
  positions[ i4 + 0 ] = somePositions[ i4 + 0 ]; // x
  positions[ i4 + 1 ] = somePositions[ i4 + 1 ]; // y
  positions[ i4 + 2 ] = 0.0; // x velocity
  positions[ i4 + 3 ] = 0.0; // y velocity
  offsets[ i4 + 0 ] = positions[ i4 + 0 ];// - gridPositions[ i4 + 0 ]; // width offset
  offsets[ i4 + 1 ] = positions[ i4 + 1 ];// - gridPositions[ i4 + 1 ]; // height offset
  offsets[ i4 + 2 ] = 0; // not used
  offsets[ i4 + 3 ] = 0; // not used
}
positionsTexture = new THREE.DataTexture( positions, width, height, THREE.RGBAFormat, THREE.FloatType );
positionsTexture.minFilter = THREE.NearestFilter;
positionsTexture.magFilter = THREE.NearestFilter;
positionsTexture.needsUpdate = true;
offsetsTexture = new THREE.DataTexture( offsets, width, height, THREE.RGBAFormat, THREE.FloatType );
offsetsTexture.minFilter = THREE.NearestFilter;
offsetsTexture.magFilter = THREE.NearestFilter;
offsetsTexture.needsUpdate = true;
rtTexturePos = new THREE.WebGLRenderTarget( width, height, {
  wrapS: THREE.RepeatWrapping,
  wrapT: THREE.RepeatWrapping,
  minFilter: THREE.NearestFilter,
  magFilter: THREE.NearestFilter,
  format: THREE.RGBAFormat,
  type: THREE.FloatType,
  stencilBuffer: false
});
rtTexturePos2 = rtTexturePos.clone();
simulationShader = new THREE.ShaderMaterial({
  uniforms: {
    tPositions: { type: "t", value: positionsTexture },
    tOffsets: { type: "t", value: offsetsTexture }
  },
  vertexShader: document.getElementById('texture_vertex_simulation_shader').textContent,
  fragmentShader: document.getElementById('texture_fragment_simulation_shader').textContent
});
fboParticles = new THREE.FBOUtils( width, renderer, simulationShader );
fboParticles.renderToTexture( rtTexturePos, rtTexturePos2 );
fboParticles.in = rtTexturePos;
fboParticles.out = rtTexturePos2;
UPDATE 2:
Perhaps the problem has to do with how the texels are being read from these textures? Somehow it may be reading between two texels and coming up with an averaged position shared by two springs. Is this possible? If so, where would I look to fix it?
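One way to rule that out, as a sketch only (it assumes the lookup UVs are the culprit, and particleGeometry is a hypothetical stand-in for whatever geometry feeds the simulation), is to build the per-particle UVs at texel centers so a NearestFilter sample can never straddle two texels:
// Hypothetical sketch: per-particle lookup UVs at texel centers.
var lookupUvs = new Float32Array( particles * 2 );
for ( var j = 0; j < particles; j++ ) {
  lookupUvs[ j * 2 + 0 ] = ( ( j % width ) + 0.5 ) / width;            // x at texel center
  lookupUvs[ j * 2 + 1 ] = ( Math.floor( j / width ) + 0.5 ) / height; // y at texel center
}
particleGeometry.addAttribute( 'uv', new THREE.BufferAttribute( lookupUvs, 2 ) );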
I never discovered the problem with the fiddle in my question above; however, I did eventually find the newer version of the THREE.FBOUtils script I was using above - it is now called THREE.GPUComputationRenderer. After implementing it, my script finally worked!
For those who find themselves trying to solve a similar problem, here is the new and improved fiddle using the GPUComputationRenderer in place of the old FBOUtils.
Here, from the script documentation, is a basic use case of GPUComputationRenderer:
//Initialization...
// Create computation renderer
var gpuCompute = new GPUComputationRenderer( 1024, 1024, renderer );
// Create initial state float textures
var pos0 = gpuCompute.createTexture();
var vel0 = gpuCompute.createTexture();
// and fill in here the texture data...
// Add texture variables
var velVar = gpuCompute.addVariable( "textureVelocity", fragmentShaderVel, pos0 );
var posVar = gpuCompute.addVariable( "texturePosition", fragmentShaderPos, vel0 );
// Add variable dependencies
gpuCompute.setVariableDependencies( velVar, [ velVar, posVar ] );
gpuCompute.setVariableDependencies( posVar, [ velVar, posVar ] );
// Add custom uniforms
velVar.material.uniforms.time = { value: 0.0 };
// Check for completeness
var error = gpuCompute.init();
if ( error !== null ) {
console.error( error );
}
// In each frame...
// Compute!
gpuCompute.compute();
// Update texture uniforms in your visualization materials with the gpu renderer output
myMaterial.uniforms.myTexture.value = gpuCompute.getCurrentRenderTarget( posVar ).texture;
// Do your rendering
renderer.render( myScene, myCamera );
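The "and fill in here the texture data" step is left open in the documentation snippet; as a rough sketch (assuming the RGBA float layout of the DataTexture that createTexture() returns, with illustrative random values), filling pos0 could look like this:
// Sketch: pos0.image.data is a Float32Array with 4 floats (RGBA) per texel.
var posData = pos0.image.data;
for ( var k = 0; k < posData.length; k += 4 ) {
  posData[ k + 0 ] = Math.random() * 100 - 50; // x
  posData[ k + 1 ] = Math.random() * 100 - 50; // y
  posData[ k + 2 ] = 0;                        // z (unused in a 2D sim)
  posData[ k + 3 ] = 1;                        // w
}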
I'm trying to build a CH4 molecule with three.js, but I'm having trouble building the 109.5° angle:
methanum = function(x, y, z) {
  molecule = new THREE.Object3D();
  var startPosition = new THREE.Vector3( 0, 0, 0 );
  molecule.add(atom(startPosition, "o"));
  var secondPosition = new THREE.Vector3( -20, 10, 0 );
  molecule.add(atom(secondPosition, "h"));
  var angle = 109.5;
  var matrix = new THREE.Matrix4().makeRotationAxis( new THREE.Vector3( 0, 1, 0 ), angle * ( Math.PI / 180 ));
  var thirdPosition = secondPosition.applyMatrix4( matrix );
  molecule.add(atom(thirdPosition, "h"));
  var fourthPosition = thirdPosition.applyMatrix4( matrix );
  molecule.add(atom(thirdPosition, "h"));
  molecule.position.set(x, y, z);
  molecule.rotation.set(x, y, z);
  scene.add( molecule );
}
Demo: https://dl.dropboxusercontent.com/u/6204711/3d/ch4.html
But my atoms are not uniformly distributed as in the drawing
Any ideas?
Well, there are 3 errors in your molecule code:
You place an oxygen at the center of the CH4 instead of a carbon.
When you add your fourth hydrogen, you pass thirdPosition to atom() even though you have just created a fourthPosition.
You rotate around the wrong axis when you place your third hydrogen. My hints are the following: first place your carbon, then move along the Z-axis and place your first hydrogen; rotate by 109.5° around the X-axis and place your second hydrogen; rotate the position of your second hydrogen by 120° around the Z-axis and place your third hydrogen; finally, rotate the position of your third hydrogen by 120° around the Z-axis once more and place your last hydrogen.
Here is the CH4 I tried:
methanum3 = function(x, y, z) {
  molecule = new THREE.Object3D();
  var startPosition = new THREE.Vector3( 0, 0, 0 );
  molecule.add(atom(startPosition, "c"));
  var axis = new THREE.AxisHelper( 50 );
  axis.position.set( 0, 0, 0 );
  molecule.add( axis );
  var secondPosition = new THREE.Vector3( 0, 0, -40 );
  molecule.add(atom(secondPosition, "h"));
  var angle = 109.5;
  var matrixX = new THREE.Matrix4().makeRotationAxis( new THREE.Vector3( 1, 0, 0 ), angle * ( Math.PI / 180 ));
  var thirdPosition = secondPosition.applyMatrix4( matrixX );
  molecule.add(atom(thirdPosition, "h"));
  var matrixZ = new THREE.Matrix4().makeRotationAxis( new THREE.Vector3( 0, 0, 1 ), 120 * ( Math.PI / 180 ));
  var fourthPosition = thirdPosition.applyMatrix4( matrixZ );
  molecule.add(atom(fourthPosition, "h"));
  var fifthPosition = fourthPosition.applyMatrix4( matrixZ );
  molecule.add(atom(fifthPosition, "h"));
  molecule.position.set(x, y, z);
  //molecule.rotation.set(x, y, z);
  scene.add( molecule );
}
//water(0,0,0);
//water(30,60,0);
methanum3(-30,60,0);
Explanation:
Let's call H1 one hydrogen and H2 another one. The given angle of 109.5° is defined in the (CH1, CH2) plane, i.e. the plane spanned by the vectors from the carbon to H1 and from the carbon to H2. Therefore, when you look in the direction of the normal of that plane, you can see the 109.5° angle (cf. the right part of the image below), BUT when you look in the direction of the normal of another plane, you get the projection of that angle onto that plane. In your case, when you look in the direction of the Z-axis, you see an angle of 120° (cf. the left part of the image below).
The two angles are different depending on the direction of the camera.
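As a quick numeric check (a sketch, not part of the construction above), three.js vector math confirms that every pair of C-H directions built this way is about 109.5° apart:
// Sketch: rebuild the rotations on unit vectors and measure the angles.
var h1 = new THREE.Vector3( 0, 0, -1 ); // first C-H direction
var h2 = h1.clone().applyAxisAngle( new THREE.Vector3( 1, 0, 0 ), 109.5 * Math.PI / 180 );
var h3 = h2.clone().applyAxisAngle( new THREE.Vector3( 0, 0, 1 ), 120 * Math.PI / 180 );
console.log( h1.angleTo( h2 ) * 180 / Math.PI ); // 109.5
console.log( h2.angleTo( h3 ) * 180 / Math.PI ); // ~109.5, the tetrahedral angle again
console.log( h1.angleTo( h3 ) * 180 / Math.PI ); // ~109.5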
Hope this helps.