Creating a 3D free-camera in WebGL - why do neither of these methods work? - opengl-es

EDIT
OK, I've tried a camera using quaternions:
qyaw = [Math.cos(rot[0]/2), 0, Math.sin(rot[0]/2), 0];
qpitch = [Math.cos(rot[1]/2), 0, 0, Math.sin(rot[1]/2)];
rotQuat = quat4.multiply (qpitch, qyaw);
camRot = quat4.toMat4(rotQuat);
camMat = mat4.multiply(camMat,camRot);
and I get exactly the same problem, so I'm guessing it's not gimbal lock. I've tried changing the order I multiply my matrices, but it just goes camera matrix * model-view matrix, then object matrix * model-view. That's right, isn't it?
I'm trying to build a 3d camera in webGL that can move about the world and be rotated around the x and y (right and up) axes.
I'm getting the familiar problem (possibly gimbal lock?) that once one of the axes is rotated, rotation around the other is screwed up; for example, when you rotate around the Y axis 90 degrees, rotation around X becomes a spin around Z.
I appreciate this is a common problem, and there are copious guides to building a camera that avoid this problem, but as far as I can tell, I've implemented two different solutions and I'm still getting the same problem. Frankly, it's doing my head in...
One solution I'm using is this (adapted from http://www.toymaker.info/Games/html/camera.html):
function updateCam(){
    yAx = [0,1,0];
    xAx = [1,0,0];
    zAx = [0,0,1];

    mat4.identity(camMat);

    xRotMat = mat4.create();
    mat4.identity(xRotMat);
    mat4.rotate(xRotMat,rot[0],xAx);
    mat4.multiplyVec3(xRotMat,zAx);
    mat4.multiplyVec3(xRotMat,yAx);

    yRotMat = mat4.create();
    mat4.identity(yRotMat);
    mat4.rotate(yRotMat,rot[1],yAx);
    mat4.multiplyVec3(yRotMat,zAx);
    mat4.multiplyVec3(yRotMat,xAx);

    zRotMat = mat4.create();
    mat4.identity(zRotMat);
    mat4.rotate(zRotMat,rot[2],zAx);
    mat4.multiplyVec3(zRotMat,yAx);
    mat4.multiplyVec3(zRotMat,xAx);

    camMat[0] = xAx[0];
    camMat[1] = yAx[0];
    camMat[2] = zAx[0];
    //camMat[3] =
    camMat[4] = xAx[1];
    camMat[5] = yAx[1];
    camMat[6] = zAx[1];
    //camMat[7] =
    camMat[8] = xAx[2];
    camMat[9] = yAx[2];
    camMat[10]= zAx[2];
    //camMat[11]=
    camMat[12]= -1 * vec3.dot(camPos, xAx);
    camMat[13]= -1 * vec3.dot(camPos, yAx);
    camMat[14]= -1 * vec3.dot(camPos, zAx);
    //camMat[15]=

    var movSpeed = 1.5 * forward;
    var movVec = vec3.create(zAx);
    vec3.scale(movVec, movSpeed);
    vec3.add(camPos, movVec);

    movVec = vec3.create(xAx);
    movSpeed = 1.5 * strafe;
    vec3.scale(movVec, movSpeed);
    vec3.add(camPos, movVec);
}
I also tried this method using
mat4.rotate(camMat, rot[1], yAx);
instead of explicitly building the camera matrix - same result.
My second (actually first...) method looks like this (rot is an array containing the current rotations around x, y and z; z is always zero):
function updateCam(){
    mat4.identity(camRot);
    mat4.identity(camMat);
    camRot = fullRotate(rot);
    mat4.set(camRot, camMat);
    mat4.translate(camMat, camPos);
}

function fullRotate(angles){
    var cosX = Math.cos(angles[0]);
    var sinX = Math.sin(angles[0]);
    var cosY = Math.cos(angles[1]);
    var sinY = Math.sin(angles[1]);
    var cosZ = Math.cos(angles[2]);
    var sinZ = Math.sin(angles[2]);

    rotMatrix = mat4.create([cosZ*cosY, -1*sinZ*cosX + cosZ*sinY*sinX, sinZ*sinX + cosZ*sinY*cosX, 0,
                             sinZ*cosY, cosZ*cosX + sinZ*sinY*sinX, -1*cosZ*sinX + sinZ*sinY*cosX, 0,
                             -1*sinY, cosY*sinX, cosY*cosX, 0,
                             0, 0, 0, 1]);

    mat4.transpose(rotMatrix);
    return (rotMatrix);
}
The code (I've taken out most of the boilerplate gl lighting stuff etc and just left the transformations) to actually draw the scene is:
function drawScene() {
    gl.viewport(0, 0, gl.viewportWidth, gl.viewportHeight);
    gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
    mat4.perspective(45, gl.viewportWidth / gl.viewportHeight, 0.1, 2000.0, pMatrix);
    mat4.identity(mvMatrix);

    for(var i=0; i<planets.length; i++){
        if (planets[i].type == "sun"){
            currentProgram = perVertexSunProgram;
        } else {
            currentProgram = perVertexNormalProgram;
        }
        alpha = planets[i].alphaFlag;

        mat4.identity(planets[i].rotMat);
        mvPushMatrix();

        //all the following puts planets in orbit around a central sun, but it's not really relevant to my current problem
        var rot = [0, rotCount*planets[i].orbitSpeed, 0];
        var planetMat;
        planetMat = mat4.create(fullRotate(rot));
        mat4.multiply(planets[i].rotMat, planetMat);
        mat4.translate(planets[i].rotMat, planets[i].position);

        if (planets[i].type == "moon"){
            var rot = [0, rotCount*planets[i].moonOrbitSpeed, 0];
            moonMat = mat4.create(fullRotate(rot));
            mat4.multiply(planets[i].rotMat, moonMat);
            mat4.translate(planets[i].rotMat, planets[i].moonPosition);
            mat4.multiply(planets[i].rotMat, mat4.inverse(moonMat));
        }
        mat4.multiply(planets[i].rotMat, mat4.inverse(planetMat));
        mat4.rotate(planets[i].rotMat, rotCount*planets[i].spinSpd, [0, 1, 0]);

        //this bit does the work - multiplying the model view by the camera matrix, then by the matrix of the object we want to render
        mat4.multiply(mvMatrix, camMat);
        mat4.multiply(mvMatrix, planets[i].rotMat);

        gl.useProgram(currentProgram);
        setMatrixUniforms();
        gl.drawElements(gl.TRIANGLES, planets[i].VertexIndexBuffer.numItems, gl.UNSIGNED_SHORT, 0);

        mvPopMatrix();
    }
}
However, most of those transformations can be ignored; the same effect can be seen simply by displaying a sphere at world coordinates 0,0,0.
I thought my two methods - either rotating the axes one at a time as you go, or building up the rotation matrix in one go - avoided the problem of applying two rotations one after the other. Any ideas where I'm going wrong?
PS - I'm still very much starting to learn WebGL and 3D maths, so be gentle and talk to me like someone who hadn't heard of a matrix until a couple of months ago... Also, I know quaternions are a good solution to 3D rotation, and that would be my next attempt; however, I think I need to understand why these two methods don't work first...

For the sake of clarification, think about gimbal lock this way: You've played Quake/Unreal/Call of Duty/Any First Person Shooter, right? You know how when you are looking forward and move the mouse side to side your view swings around in a nice wide arc, but if you look straight up or down and move your mouse side to side you basically just spin tightly around a single point? That's gimbal lock. It's something that pretty much any FPS game uses because it happens to mimic what we would do in real life, and thus most people don't usually think of it as a problem.
For something like a space flight sim, however, or (more commonly) skeletal animation, that type of effect is undesirable, and so we use things like quaternions to help us get around it. Whether or not you care about gimbal lock for your camera depends on the effect that you are looking to achieve.
I don't think you're experiencing that, however. What it sounds like is that your order of matrix multiplication is messed up, and as a result your view is rotating in a way that you don't expect. I would try playing with the order that you do your X/Y/Z rotations in and see if you can find an order that gives you the desired results.
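To make "playing with the order" concrete, here is a minimal sketch of the FPS-style order using the same glMatrix 0.9 calls the question uses (variable names are illustrative): pitch is applied to the view matrix first, then yaw, then the negated camera position, so a sideways mouse move always spins around the world's up axis:
var view = mat4.create();
mat4.identity(view);
mat4.rotateX(view, pitch);    // look up/down (camera-local X)
mat4.rotateY(view, yaw);      // look left/right (always world Y)
mat4.translate(view, [-camPos[0], -camPos[1], -camPos[2]]);
Swapping the two rotate calls gives the "pitch turns into a roll once you've yawed 90 degrees" behaviour described in the question.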
Now, I hate doing code dumps, but this may be useful to you so here we go: This is the code that I use in most of my newer WebGL projects to manage a free-floating camera. It is gimbal locked, but as I mentioned earlier it doesn't really matter in this case. Basically it just gives you FPS style controls that you can use to fly around your scene.
/**
 * A Flying Camera allows free motion around the scene using FPS style controls (WASD + mouselook)
 * This type of camera is good for displaying large scenes
 */
var FlyingCamera = Object.create(Object, {
    _angles: {
        value: null
    },

    angles: {
        get: function() {
            return this._angles;
        },
        set: function(value) {
            this._angles = value;
            this._dirty = true;
        }
    },

    _position: {
        value: null
    },

    position: {
        get: function() {
            return this._position;
        },
        set: function(value) {
            this._position = value;
            this._dirty = true;
        }
    },

    speed: {
        value: 100
    },

    _dirty: {
        value: true
    },

    _cameraMat: {
        value: null
    },

    _pressedKeys: {
        value: null
    },

    _viewMat: {
        value: null
    },

    viewMat: {
        get: function() {
            if(this._dirty) {
                var mv = this._viewMat;
                mat4.identity(mv);
                mat4.rotateX(mv, this.angles[0] - Math.PI/2.0);
                mat4.rotateZ(mv, this.angles[1]);
                mat4.rotateY(mv, this.angles[2]);
                mat4.translate(mv, [-this.position[0], -this.position[1], -this.position[2]]);
                this._dirty = false;
            }
            return this._viewMat;
        }
    },

    init: {
        value: function(canvas) {
            this.angles = vec3.create();
            this.position = vec3.create();
            this.pressedKeys = new Array(128);

            // Initialize the matrices
            this.projectionMat = mat4.create();
            this._viewMat = mat4.create();
            this._cameraMat = mat4.create();

            // Set up the appropriate event hooks
            var moving = false;
            var lastX, lastY;
            var self = this;

            window.addEventListener("keydown", function(event) {
                self.pressedKeys[event.keyCode] = true;
            }, false);

            window.addEventListener("keyup", function(event) {
                self.pressedKeys[event.keyCode] = false;
            }, false);

            canvas.addEventListener('mousedown', function(event) {
                if(event.which == 1) {
                    moving = true;
                }
                lastX = event.pageX;
                lastY = event.pageY;
            }, false);

            canvas.addEventListener('mousemove', function(event) {
                if (moving) {
                    var xDelta = event.pageX - lastX;
                    var yDelta = event.pageY - lastY;
                    lastX = event.pageX;
                    lastY = event.pageY;

                    self.angles[1] += xDelta * 0.025;
                    while (self.angles[1] < 0)
                        self.angles[1] += Math.PI * 2;
                    while (self.angles[1] >= Math.PI * 2)
                        self.angles[1] -= Math.PI * 2;

                    self.angles[0] += yDelta * 0.025;
                    while (self.angles[0] < -Math.PI * 0.5)
                        self.angles[0] = -Math.PI * 0.5;
                    while (self.angles[0] > Math.PI * 0.5)
                        self.angles[0] = Math.PI * 0.5;

                    self._dirty = true;
                }
            }, false);

            canvas.addEventListener('mouseup', function(event) {
                moving = false;
            }, false);

            return this;
        }
    },

    update: {
        value: function(frameTime) {
            var dir = [0, 0, 0];
            var speed = (this.speed / 1000) * frameTime;

            // This is our first person movement code. It's not really pretty, but it works
            if(this.pressedKeys['W'.charCodeAt(0)]) {
                dir[1] += speed;
            }
            if(this.pressedKeys['S'.charCodeAt(0)]) {
                dir[1] -= speed;
            }
            if(this.pressedKeys['A'.charCodeAt(0)]) {
                dir[0] -= speed;
            }
            if(this.pressedKeys['D'.charCodeAt(0)]) {
                dir[0] += speed;
            }
            if(this.pressedKeys[32]) { // Space, moves up
                dir[2] += speed;
            }
            if(this.pressedKeys[17]) { // Ctrl, moves down
                dir[2] -= speed;
            }

            if(dir[0] != 0 || dir[1] != 0 || dir[2] != 0) {
                var cam = this._cameraMat;
                mat4.identity(cam);
                mat4.rotateX(cam, this.angles[0]);
                mat4.rotateZ(cam, this.angles[1]);
                mat4.inverse(cam);
                mat4.multiplyVec3(cam, dir);

                // Move the camera in the direction we are facing
                vec3.add(this.position, dir);

                this._dirty = true;
            }
        }
    }
});
This camera assumes that Z is your "Up" axis, which may or may not be true for you. It's also using ECMAScript 5 style objects, but that shouldn't be an issue for any WebGL-enabled browser, and it utilizes my glMatrix library but it looks like you're already using that anyway. Basic usage is pretty simple:
// During your init code
var camera = Object.create(FlyingCamera).init(canvasElement);
// During your draw loop
camera.update(16); // 16ms per-frame == 60 FPS
// Bind a shader, etc, etc...
gl.uniformMatrix4fv(shaderUniformModelViewMat, false, camera.viewMat);
Everything else is handled internally for you, including keyboard and mouse controls. May not fit your needs exactly, but hopefully you can glean what you need to from there. (Note: This is essentially the same as the camera used in my Quake 3 demo, so that should give you an idea of how it works.)
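The viewMat getter above is written for a Z-up world; if your world is Y-up (as it appears to be in the planet scene from the question), a hedged adaptation would be to pitch about X and yaw about Y with no -PI/2 offset. Treat this as a sketch rather than a drop-in change, and note that the movement axes in update() would need the same swap:
if(this._dirty) {
    var mv = this._viewMat;
    mat4.identity(mv);
    mat4.rotateX(mv, this.angles[0]);   // pitch
    mat4.rotateY(mv, this.angles[1]);   // yaw
    mat4.translate(mv, [-this.position[0], -this.position[1], -this.position[2]]);
    this._dirty = false;
}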
Okay, that's enough babbling from me for one post! Good luck!

It doesn't matter how you build your matrices; using Euler angle rotations (as both of your code snippets do) will always result in a transformation that exhibits the gimbal lock problem.
You may want to have a look at https://en.wikipedia.org/wiki/Quaternions_and_spatial_rotation as a starting point for creating transformations that avoid gimbal lock.
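For reference, a minimal sketch of an incremental quaternion camera using the glMatrix 0.9 quat4 API already present in the question (the hand-built axisAngle helper and the [x, y, z, w] component order are assumptions about that library version, so check them against your copy):
// Build a quaternion from an axis and an angle ([x, y, z, w]).
function axisAngle(axis, angle) {
    var s = Math.sin(angle / 2);
    return [axis[0] * s, axis[1] * s, axis[2] * s, Math.cos(angle / 2)];
}

var camQuat = [0, 0, 0, 1]; // identity orientation, camera-to-world

function turnCamera(dYaw, dPitch) {
    // Pre-multiplying applies the yaw in world space (about world up);
    // post-multiplying applies the pitch in the camera's local space.
    // Accumulating small per-frame deltas this way is what avoids gimbal lock.
    quat4.multiply(axisAngle([0, 1, 0], dYaw), camQuat, camQuat);
    quat4.multiply(camQuat, axisAngle([1, 0, 0], dPitch), camQuat);
    quat4.normalize(camQuat);
}

// When building the view matrix, invert the orientation and then apply
// the -camPos translation as in the existing code:
var camRot = mat4.inverse(quat4.toMat4(camQuat));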

Try my new project (the WebGL2 part of the visual-js game engine), based on glMatrix 2.0.
Activate events for camera use: App.camera.FirstPersonController = true;
Live examples
Important camera functions:
Camera interaction
App.operation.CameraPerspective = function() {
    this.GL.gl.viewport(0, 0, wd, ht);
    this.GL.gl.clear(this.GL.gl.COLOR_BUFFER_BIT | this.GL.gl.DEPTH_BUFFER_BIT);
    // mat4.identity( world.mvMatrix )
    // mat4.translate(world.mvMatrix , world.mvMatrix, [ 10 , 10 , 10] );
    /* Field of view, width/height ratio, min distance of viewpoint, max distance of viewpoint */
    mat4.perspective(this.pMatrix, degToRad(App.camera.viewAngle), (this.GL.gl.viewportWidth / this.GL.gl.viewportHeight), App.camera.nearViewpoint, App.camera.farViewpoint);
};
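The snippet relies on a degToRad helper that isn't shown; a one-line version (an assumption about what the engine provides, not copied from it) would be:
function degToRad(degrees) { return degrees * Math.PI / 180; }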
manifest.js:
var App = {
    name : "webgl2 experimental",
    version : 0.3,
    events : true,
    logs : false,
    draw_interval : 10,
    antialias : false,
    camera : { viewAngle : 45,
               nearViewpoint : 0.1,
               farViewpoint : 1000,
               edgeMarginValue : 100,
               FirstPersonController : false },
    textures : [], //readOnly in manifest
    tools : {}, //readOnly in manifest
Download the source from:
webGL 2 part of visual-js GE project
Old:
opengles 1.1
https://stackoverflow.com/a/17261523/1513187
A very fast first-person controller with glMatrix 0.9, based on the http://learningwebgl.com/ examples.

Related

How do I scale translate x,y values?

I am working on a 2D grid with touch-scaling functionality. I've managed to set the translate boundaries so that the screen viewport doesn't go beyond the grid boundaries. I'm now struggling with the algorithm for determining the new translate values when scaling, on both two-finger touch and mouse wheel events.
touchStarted sets the vector angle between the two initial touches. lastTouchAngle is for comparison in touchMoved.
function touchStarted() {
    if(touches.length == 2) {
        let touchA = createVector(touches[0].x, touches[0].y);
        let touchB = createVector(touches[1].x, touches[1].y);
        lastTouchAngle = touchA.angleBetween(touchB);
    }
    return false;
}
touchMoved makes vectors from the current touches, compares the angle, and then scales accordingly.
t_MinX and t_MinY set the lowest possible translate value for the constraints, but determining what the new translate value should be is where I'm lost. I know it's going to require the current scale, the centre point between the two touches, and the width and height of the canvas.
function touchMoved() {
    if(touches.length == 1) {
        panTranslate(translateX, translateY, mouseX, mouseY, pmouseX, pmouseY);
    } else if (touches.length == 2) {
        let touchA = createVector(touches[0].x, touches[0].y);
        let touchB = createVector(touches[1].x, touches[1].y);

        scl = (abs(lastTouchAngle) < abs(touchA.angleBetween(touchB)) ? (scl+sclStep < sclMax ? scl+sclStep : sclMax) : (scl-sclStep > sclMin ? scl-sclStep : sclMin));

        let t_MinX = (screenH/sclMin) * (sclMin-scl);
        let t_MinY = (screenW/sclMin) * (sclMin-scl);
        let tX = translateX;
        let tY = translateY;

        if(abs(lastTouchAngle) > abs(touchA.angleBetween(touchB))) {
            console.log("Scale out");
            translateX = constrain(tX+mX, t_MinX, 0);
            translateY = constrain(tY+mY, t_MinY, 0);
        } else {
            console.log("Scale in");
            if(scl != sclMax) {
                translateX = constrain(tX-mX, t_MinX, 0);
                translateY = constrain(tY-mY, t_MinY, 0);
            }
        }

        // Set current touch angle to lastTouchAngle
        lastTouchAngle = touchA.angleBetween(touchB);
    }
    return false;
}
Here is the bit getting me confused:
translateX = constrain(tX+mX, t_MinX, 0);
translateY = constrain(tY+mY, t_MinY, 0);
Full code: https://editor.p5js.org/OMTI/sketches/9ux6Rq6n5
https://stackoverflow.com/questions/5713174
I found the answer at the above link and was able to get this working from it.
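The linked answer isn't reproduced here, but the usual approach is to choose the new translate so that the grid point under the zoom centre stays put. A rough sketch in the question's own variable names (zoomAbout and its call are hypothetical, not taken from the linked answer):
function zoomAbout(cx, cy, oldScl, newScl) {
    // The grid point under (cx, cy) is (cx - translateX) / oldScl;
    // solve for the translate that maps it back to (cx, cy) at the new scale.
    translateX = cx - (cx - translateX) * (newScl / oldScl);
    translateY = cy - (cy - translateY) * (newScl / oldScl);

    // Then re-apply the same boundary constraints used in touchMoved().
    let t_MinX = (screenH / sclMin) * (sclMin - newScl);
    let t_MinY = (screenW / sclMin) * (sclMin - newScl);
    translateX = constrain(translateX, t_MinX, 0);
    translateY = constrain(translateY, t_MinY, 0);
}

// For a two-finger gesture the centre is the midpoint of the touches, e.g.
// zoomAbout((touchA.x + touchB.x) / 2, (touchA.y + touchB.y) / 2, oldScl, scl);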

How to dynamically change texture of PIXI.Sprite when PIXI.Sprite reaches certain position - Pixi.js?

I have a class which extends PIXI.Sprite. Here I create the sprite initially. The texture I use is a spritesheet, and I create sprites from random sections of this spritesheet.png by creating random frames for the texture. I add 10,000 of these sprites to another class which extends PIXI.ParticleContainer and move them in random directions.
createTexture() {
    this.textureWidth = 2048;

    this.rectX = () => {
        let number;
        while (number % 32 !== 0) number = Math.floor(Math.random() * this.textureWidth) + 0;
        return number;
    }

    this.rectY = () => {
        let number;
        while (number % 32 !== 0) number = Math.floor(Math.random() * 128) + 0;
        return number;
    }

    this.initialTexture = PIXI.Texture.from(this.resources[assets.images[0].src].name);
    this.rectangle = new PIXI.Rectangle(this.rectX(), this.rectY(), 32, 32);
    this.initialTexture.frame = this.rectangle;

    this.texture = new PIXI.Texture(this.initialTexture.baseTexture, this.initialTexture.frame);
    this.texture.requiresUpdate = true;
    this.texture.updateUvs();

    this.timesChangedVy = 0;
}
When a sprite hits the window borders, I call the changeTexture method in the PIXI.Sprite class:
changeTexture() {
    let newTexture = PIXI.Texture.from(this.resources[assets.images[0].src].name);
    let rectangle = new PIXI.Rectangle(this.rectX(), this.rectY(), 32, 32);
    newTexture.frame = rectangle;
    // this.texture.frame = rectangle
    this.texture = newTexture;
    // this.texture = new PIXI.Texture.from(this.resources[assets.images[0].src].name)
    // this.texture._frame = rectangle
    // this.texture.orig = rectangle
    // this._texture = newTexture
    // this.texture = new PIXI.Texture(newTexture.baseTexture, rectangle)
    this.texture.update();
    this.texture.requiresUpdate = true;
    this.texture.updateUvs();
}
I tried different approaches. When I console.log the texture after changing it, I see that the frame and origins have been changed, but the new texture is not being rendered.
Does someone know where the problem lies and how I can fix it?
Finally, I found the reason for my sprites not updating on texture change.
It is because I add them as children of PIXI.ParticleContainer, which has less functionality than PIXI.Container and does not update the UVs of its children by default.
The solution is to set uvs to true when creating the PIXI.ParticleContainer.
It looks like this: new PIXI.ParticleContainer(10000, { uvs: true }).
This solves the problem of changed textures not being updated: the UVs will now be uploaded and applied.
https://pixijs.download/dev/docs/PIXI.ParticleContainer.html
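A minimal sketch of the fix in context (the app, container and file names are illustrative, and it assumes a PIXI version with Texture.from):
const container = new PIXI.ParticleContainer(10000, { uvs: true }); // uvs: true is the key part
app.stage.addChild(container);

const base = PIXI.Texture.from("spritesheet.png").baseTexture;
const sprite = new PIXI.Sprite(new PIXI.Texture(base, new PIXI.Rectangle(0, 0, 32, 32)));
container.addChild(sprite);

// Later, when the sprite hits a border, point its texture at another 32x32 cell:
sprite.texture.frame = new PIXI.Rectangle(32, 0, 32, 32);
sprite.texture.updateUvs(); // with uvs: true the ParticleContainer picks this up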

How to perform full rotation of 3d model in threejs from pitch, roll and heading?

I have a chip that gives me pitch (-90° to 90°), roll (-180° to 180°) and heading (0° to 360°).
I want to mirror any rotations of the object on a model in three.js.
I have made a three.js app that receives pitch, roll and heading, but I am struggling with understanding how I should rotate the model, and whether it is even possible given the ranges of pitch and roll. I have not found a clear answer to this on the internet.
Let's say I want to rotate z: -450°, x: 250° and y: -210° at the same time over a 2 second period. My app will receive pitch, roll and heading every 100 ms with the current rotation and heading.
Is it possible to visualize this rotation?
If yes, what would be the best approach to setting the rotation, using local/global axes, etc.?
I am using tweenjs to perform animations like below.
new TWEEN.Tween(this.model.rotation)
    .to(
        {
            x: THREE.Math.degToRad(pitch),
            y: THREE.Math.degToRad(roll),
            z: THREE.Math.degToRad(heading)
        },
        150
    )
    .easing(TWEEN.Easing.Linear.None)
    .start();
I have good knowledge of frontend programming, but my knowledge with 3d/threejs is not so good.
You could use tween.js (https://github.com/tweenjs/tween.js/) to achieve the desired result and do something like:
function animateVector3(vectorToAnimate, target, options) {
    options = options || {}

    // get targets from options or set to defaults
    let to = target || new THREE.Vector3(),
        easing = options.easing || TWEEN.Easing.Exponential.InOut,
        duration = options.duration || 2000

    // create the tween
    let tweenVector3 = new TWEEN.Tween(vectorToAnimate)
        .to({x: to.x, y: to.y, z: to.z}, duration)
        .easing(easing)
        .onStart(function(d) {
            if (options.start) {
                options.start(d)
            }
        })
        .onUpdate(function(d) {
            if (options.update) {
                options.update(d)
            }
        })
        .onComplete(function() {
            if (options.finish) options.finish()
        })

    // start the tween
    tweenVector3.start()

    // return the tween in case we want to manipulate it later on
    return tweenVector3
}

const animationOptions = {
    duration: 2000,
    start: () => {
        this.cameraControls.enable(false)
    },
    finish: () => {
        this.cameraControls.enable(true)
    }
}

// Adjust Yaw object rotation
animateVector3(
    // current rotation of the 3d object
    yawObject.rotation,
    // desired rotation of the object
    new THREE.Vector3(0, 0, annotation.rotation.z + degToRad(90)),
    animationOptions
)

// Adjust Pitch object rotation
animateVector3(
    pitchObject.rotation,
    new THREE.Vector3(0, degToRad(45), 0),
    animationOptions
)
Does this answer your question?
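If the tweened Euler angles still misbehave, another approach worth sketching (not part of the answer above, and the axis order is an assumption that depends on how the chip defines pitch/roll/heading) is to build the target orientation as a quaternion and slerp towards it each time a 100 ms sample arrives:
// Map the sensor angles onto an explicit Euler order, then convert to a quaternion.
const target = new THREE.Quaternion().setFromEuler(
    new THREE.Euler(
        THREE.Math.degToRad(pitch),    // about X
        THREE.Math.degToRad(heading),  // about Y
        THREE.Math.degToRad(roll),     // about Z
        'YXZ'                          // heading, then pitch, then roll - adjust to match the chip
    )
);

// In the render loop, ease the model towards the latest sample.
model.quaternion.slerp(target, 0.1);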

Three.js combining particles

I have a problem with three.js. I have two particle system setups that seem to be conflicting with each other.
The first scene loads up without problem, but when the second set loads, the first set of particles vanishes. This wouldn't be too confusing if it weren't for the fact that the rest of the first scene still appears in the overall setup.
Is there an easy way to rename or call in the two sets of particles?
I've looked around but can't find a reference to this.
The one thing that I think might be causing this is the PARTICLE_COUNT variable, which features in both scripts...
In one it is:
var PARTICLE_COUNT = 15000;
var MAX_DISTANCE = 1500;
var IMAGE_SCALE = 5;
followed by
for(var i = 0; i < PARTICLE_COUNT; i++) {
    geometry.vertices.push(new THREE.Vertex());
    var star = new Star();
    stars.push(star);
}
and the second
AUDIO_FILE = 'songs/zircon_devils_spirit',
PARTICLE_COUNT = 250,
MAX_PARTICLE_SIZE = 12,
MIN_PARTICLE_SIZE = 2,
GROWTH_RATE = 5,
DECAY_RATE = 0.5,
BEAM_RATE = 0.5,
BEAM_COUNT = 20,
GROWTH_VECTOR = new THREE.Vector3( GROWTH_RATE, GROWTH_RATE, GROWTH_RATE ),
DECAY_VECTOR = new THREE.Vector3( DECAY_RATE, DECAY_RATE, DECAY_RATE ),
beamGroup = new THREE.Object3D(),
particles = group.children,
colors = [ 0xaaee22, 0x04dbe5, 0xff0077, 0xffb412, 0xf6c83d ],
t, dancer, kick;
followed by
dancer = new Dancer();
kick = dancer.createKick({
    onKick: function () {
        var i;
        if ( particles[ 0 ].scale.x > MAX_PARTICLE_SIZE ) {
            decay();
        } else {
            for ( i = PARTICLE_COUNT; i--; ) {
                particles[ i ].scale.addSelf( GROWTH_VECTOR );
            }
        }
        if ( !beamGroup.children[ 0 ].visible ) {
            for ( i = BEAM_COUNT; i--; ) {
                beamGroup.children[ i ].visible = true;
            }
        }
    },
    offKick: decay
});

dancer.onceAt( 0, function () {
    kick.on();
}).onceAt( 8.2, function () {
    scene.add( beamGroup );
}).after( 8.2, function () {
    beamGroup.rotation.x += BEAM_RATE;
    beamGroup.rotation.y += BEAM_RATE;
}).onceAt( 50, function () {
    changeParticleMat( 'white' );
}).onceAt( 66.5, function () {
    changeParticleMat( 'pink' );
}).onceAt( 75, function () {
    changeParticleMat();
}).fft( document.getElementById( 'fft' ) )
    .load({ src: AUDIO_FILE, codecs: [ 'ogg', 'mp3' ]});

Dancer.isSupported() || loaded();
!dancer.isLoaded() ? dancer.bind( 'loaded', loaded ) : loaded();
Bit of a "needle lost in a haystack", I know...
But maybe someone can see the error of my ways!
I've tried updating the revision of three.js, but r47 was as up to date as I could get it - my knowledge of three.js and dancer.js is very limited...
I also tried to create a jsfiddle - but as the earliest version on there is r54, jsfiddle won't work when I put it together... it shows only parts of the whole thing, not the working version...
But maybe just the bare bones might be the thing...
This one http://jsfiddle.net/wwfc/3L5z5mx…
is for the file min.'s (which drives the animated/moving particles that go from one shape to another),
and this one http://jsfiddle.net/wwfc/v96L3kq…
is the one that calls and sets up the audio-reactive particles...
This is the one that the particles (not the beams, just the particles) vanish from when min.'s loads up...
I can see where both scripts create the particles that are clashing, but I have no idea how to remedy it :-(
Is there anything glaringly obvious that I need to be addressing?
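One thing worth trying (a sketch, not a confirmed fix): both files appear to declare PARTICLE_COUNT and friends at the top level, so whichever script runs second overwrites the first one's values. Wrapping each file in its own function scope keeps the two particle systems' settings separate while they still share the one scene:
// first script
(function () {
    var PARTICLE_COUNT = 15000,
        MAX_DISTANCE = 1500,
        IMAGE_SCALE = 5;
    // ... build the shape-morphing particles and add them to the scene ...
})();

// second script
(function () {
    var PARTICLE_COUNT = 250,
        BEAM_COUNT = 20;
    // ... build the audio-reactive particles and beams and add them to the scene ...
})();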

Animating D3 globe (d3.geo.azimuthal)

I have a question about the d3 JavaScript library. I want to use the azimuthal globe, insert points from longitude and latitude coordinates onto the globe, and make the globe animate without ever using mouse events.
Do you think this is possible? Can you give me some good tips on how to do this?
Cheers
Thor
To make the example rotate on its own I implemented:
var newX = 185;
var newY = -200;

function setupRotate() {
    m0 = [0, 0];
    o0 = projection.origin();
}

function rotate() {
    if (m0) {
        var m1 = [newX, newY]; //d3.event.pageX, d3.event.pageY],
        o1 = [o0[0] + (m0[0] - m1[0]) / 8, o0[1] + (m1[1] - m0[1]) / 8];
        projection.origin(o1);
        //console.log(m1);
        circle.origin(o1);
        refresh();
        //console.log("rotate");
        //console.log("newX: "+newX+" newY: "+newY);
    }
}

function rotateInterval() {
    var theRotationInterval = setInterval(rotateAndIncrement, 1);
    function rotateAndIncrement() {
        //console.log("rotateAndIncrement");
        if (newX === 3) { // 3065
            //console.warn("!!Reset Rotation!!");
            clearInterval(theRotationInterval);
            newX = 185;
            rotateInterval();
        } else {
            newX++;
            rotate();
        }
        //console.log("newX: "+newX+" newY: "+newY);
    }
}
I'm working on adding points to the map; it's much more complicated. If I can't get it working, I'll post back here.
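For the points, a rough sketch of the usual pattern with an old-style d3.geo projection (the selection and coordinates are illustrative): the projection object already driving the globe turns [longitude, latitude] into pixel coordinates, so a point just needs to be re-projected whenever refresh() runs:
var point = [-21.9, 64.1]; // longitude, latitude

svg.append("circle")
    .attr("class", "point")
    .attr("r", 3);

function redrawPoint() {
    var xy = projection(point);
    svg.select("circle.point")
        .attr("cx", xy[0])
        .attr("cy", xy[1]);
    // Note: points on the far side of the globe still get projected, so they
    // may need hiding based on their angular distance from the projection origin.
}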
