Animating D3 globe (d3.geo.azimuthal) - animation

I have a question about the d3 JavaScript library. I want to use the azimuthal globe, insert points on it from longitude and latitude coordinates, and have the globe animate without ever using mouse events.
Do you think this is possible? Can you give me some good tips on how to do this?
Cheers
Thor

To make the example rotate on its own I implemented:
var newX = 185;
var newY = -200;

function setupRotate() {
    m0 = [0, 0];
    o0 = projection.origin();
}

function rotate() {
    if (m0) {
        var m1 = [newX, newY]; // instead of [d3.event.pageX, d3.event.pageY]
        o1 = [o0[0] + (m0[0] - m1[0]) / 8, o0[1] + (m1[1] - m0[1]) / 8];
        projection.origin(o1);
        circle.origin(o1);
        refresh();
        //console.log("newX: " + newX + " newY: " + newY);
    }
}

function rotateInterval() {
    var theRotationInterval = setInterval(rotateAndIncrement, 1);

    function rotateAndIncrement() {
        if (newX === 3) { // originally 3065
            //console.warn("!!Reset Rotation!!");
            clearInterval(theRotationInterval);
            newX = 185;
            rotateInterval();
        } else {
            newX++;
            rotate();
        }
        //console.log("newX: " + newX + " newY: " + newY);
    }
}
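As an aside, instead of driving rotate() from a 1 ms setInterval, the same loop could be driven by d3.timer (if your d3 build includes it), which ties the updates to the browser's repaints. A minimal sketch reusing the variables above and the reset value hinted at in the 3065 comment:

// Alternative driver: call rotate() once per animation frame.
d3.timer(function() {
    newX = (newX >= 3065) ? 185 : newX + 1;
    rotate();
    return false; // returning true would stop the timer; false keeps it running
});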
I'm working on adding points to the map; it's much more complicated. If I can't get it working I'll post back here.
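In case it helps, plotting points from longitude/latitude generally amounts to feeding [lon, lat] pairs through the same projection and re-positioning the markers inside refresh(). A minimal sketch, assuming the d3 v2/v3-era d3.geo.azimuthal API used above and an existing svg selection (the place names are just examples, and clipping points on the far side of the globe still needs handling):

var places = [
    { name: "Reykjavik", coords: [-21.9, 64.1] }, // [longitude, latitude]
    { name: "London",    coords: [-0.1, 51.5] }
];

function drawPoints() {
    var dots = svg.selectAll("circle.place").data(places);
    dots.enter().append("circle").attr("class", "place").attr("r", 3);
    // projection([lon, lat]) returns the projected [x, y] pixel position
    dots.attr("cx", function(d) { return projection(d.coords)[0]; })
        .attr("cy", function(d) { return projection(d.coords)[1]; });
}
// call drawPoints() from refresh() so the dots follow the rotating globe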

Related

How do I scale translate x,y values?

I am working on a 2D grid with pinch-to-scale touch functionality. I've managed to set the translate boundaries so that the screen viewport doesn't go beyond the grid boundaries. I'm now struggling with the algorithm for determining the new translate values when scaling, on both two-finger touch and mouse-wheel events.
touchStarted sets the vector angle between the two initial touches. lastTouchAngle is for comparison in touchMoved.
function touchStarted() {
    if (touches.length == 2) {
        let touchA = createVector(touches[0].x, touches[0].y);
        let touchB = createVector(touches[1].x, touches[1].y);
        lastTouchAngle = touchA.angleBetween(touchB);
    }
    return false;
}
touchMoved makes vectors from the current touches, compares the angle, and then scales accordingly.
t_MinX and t_MinY set the lowest possible translate values for the constraints, but determining what the new translate value should be is where I'm lost. I know it's going to require the current scale, the center point between the two touches, and the width and height of the canvas.
function touchMoved() {
    if (touches.length == 1) {
        panTranslate(translateX, translateY, mouseX, mouseY, pmouseX, pmouseY);
    } else if (touches.length == 2) {
        let touchA = createVector(touches[0].x, touches[0].y);
        let touchB = createVector(touches[1].x, touches[1].y);
        scl = (abs(lastTouchAngle) < abs(touchA.angleBetween(touchB))
            ? (scl + sclStep < sclMax ? scl + sclStep : sclMax)
            : (scl - sclStep > sclMin ? scl - sclStep : sclMin));
        let t_MinX = (screenH / sclMin) * (sclMin - scl);
        let t_MinY = (screenW / sclMin) * (sclMin - scl);
        let tX = translateX;
        let tY = translateY;
        if (abs(lastTouchAngle) > abs(touchA.angleBetween(touchB))) {
            console.log("Scale out");
            translateX = constrain(tX + mX, t_MinX, 0);
            translateY = constrain(tY + mY, t_MinY, 0);
        } else {
            console.log("Scale in");
            if (scl != sclMax) {
                translateX = constrain(tX - mX, t_MinX, 0);
                translateY = constrain(tY - mY, t_MinY, 0);
            }
        }
        // Set current touch angle to lastTouchAngle
        lastTouchAngle = touchA.angleBetween(touchB);
    }
    return false;
}
Here is the bit getting me confused:
translateX = constrain(tX+mX, t_MinX, 0);
translateY = constrain(tY+mY, t_MinY, 0);
Full code: https://editor.p5js.org/OMTI/sketches/9ux6Rq6n5
I found the answer at https://stackoverflow.com/questions/5713174 and was able to get this working from the answer there.
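For anyone landing here, the idea from that answer boils down to scaling about a fixed point: pick the screen point that should stay put (the midpoint between the two touches, or the mouse position), convert it to grid coordinates at the old scale, then choose the new translate so the same grid point maps back to the same screen point at the new scale. A rough sketch in p5.js terms (the zoomAbout name and its arguments are mine, not from the original sketch):

function zoomAbout(centerX, centerY, oldScl, newScl) {
    // grid coordinate currently under the zoom center: (screen - translate) / scale
    let gridX = (centerX - translateX) / oldScl;
    let gridY = (centerY - translateY) / oldScl;
    // new translate that maps that grid point back to the same screen point
    translateX = centerX - gridX * newScl;
    translateY = centerY - gridY * newScl;
    // then re-apply the usual boundary constraints, e.g.
    // translateX = constrain(translateX, t_MinX, 0);
    // translateY = constrain(translateY, t_MinY, 0);
}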

d3fc - Crosshair with snapping using latest version 14

In a previous version of d3fc, my code used fc.util.seriesPointSnapXOnly for snapping the crosshair.
This appears to be gone in the latest version of d3fc (or maybe I'm missing it in one of the standalone packages?).
I'm using the canvas implementation (annotationCanvasCrosshair) and it also seems to be missing the "snap" function, which was previously used like so:
fc.tool.crosshair()
    .snap(fc.util.seriesPointSnapXOnly(line, series))
Additionally, "on" is also not available, so I can't attach events like trackingstart, trackingend, etc.
How can I implement a snapping crosshair now? The canvas versions of the components are badly lacking examples. Does anyone have an example showing a snapping crosshair in the latest version of d3fc using canvas rendering?
Here's what I have so far https://codepen.io/parliament718/pen/xxbQGgp
I understand you've raised the issue on the d3fc GitHub, so I'll assume you're aware that util/snap.js has been deprecated.
Since this functionality is unsupported now, it seems the only feasible way to work around it is to implement your own.
I took your pen and the original snap.js code as a starting point and applied the method outlined in the Simple Crosshair example from the documentation.
I ended up having to add the missing functions and their dependencies verbatim (you can surely refactor and package them up into a separate module):
function defined() {
    var outerArguments = arguments;
    return function(d, i) {
        for (var c = 0, j = outerArguments.length; c < j; c++) {
            if (outerArguments[c](d, i) == null) {
                return false;
            }
        }
        return true;
    };
}

function minimum(data, accessor) {
    return data.map(function(dataPoint, index) {
        return [accessor(dataPoint, index), dataPoint, index];
    }).reduce(function(accumulator, dataPoint) {
        return accumulator[0] > dataPoint[0] ? dataPoint : accumulator;
    }, [Number.MAX_VALUE, null, -1]);
}

function pointSnap(xScale, yScale, xValue, yValue, data, objectiveFunction) {
    // a default function that computes the distance between two points
    objectiveFunction = objectiveFunction || function(x, y, cx, cy) {
        var dx = x - cx,
            dy = y - cy;
        return dx * dx + dy * dy;
    };
    return function(point) {
        var filtered = data.filter(function(d, i) {
            return defined(xValue, yValue)(d, i);
        });
        var nearest = minimum(filtered, function(d) {
            return objectiveFunction(point.x, point.y, xScale(xValue(d)), yScale(yValue(d)));
        })[1];
        return [{
            datum: nearest,
            x: nearest ? xScale(xValue(nearest)) : point.x,
            y: nearest ? yScale(yValue(nearest)) : point.y
        }];
    };
}

function seriesPointSnap(series, data, objectiveFunction) {
    return function(point) {
        var xScale = series.xScale(),
            yScale = series.yScale(),
            xValue = series.crossValue(),
            yValue = (series.openValue).call(series);
        return pointSnap(xScale, yScale, xValue, yValue, data, objectiveFunction)(point);
    };
}

function seriesPointSnapXOnly(series, data) {
    function objectiveFunction(x, y, cx, cy) {
        var dx = x - cx;
        return Math.abs(dx);
    }
    return seriesPointSnap(series, data, objectiveFunction);
}
The working end result can be seen here: https://codepen.io/timur_kh/pen/YzXXOOG. I basically defined two series and used a pointer component to update that second series data and trigger a re-render:
const data = {
    series: stream.take(50), // your candle stick chart
    crosshair: []            // second series to hold the crosshair position
};
.............
const crosshair = fc.annotationCanvasCrosshair(); // define your crosshair
const multichart = fc.seriesCanvasMulti()
    .series([candlesticks, crosshair]) // we've got two series now
    .mapping((data, index, series) => {
        switch (series[index]) {
            case candlesticks:
                return data.series;
            case crosshair:
                return data.crosshair;
        }
    });
.............
function render() {
    d3.select('#zoom-chart')
        .datum(data)
        .call(chart);
    // add the pointer component to the plot-area, re-rendering each time the event fires
    var pointer = fc.pointer()
        .on('point', (event) => {
            data.crosshair = seriesPointSnapXOnly(candlesticks, data.series)(event[0]); // and when we update the crosshair position - we snap it to the other series using the old library code.
            render();
        });
    d3.select('#zoom-chart .plot-area')
        .call(pointer);
}
Update:
The functionality can be simplified like so (I also updated the pen):
function minimum(data, accessor) {
    return data.map(function(dataPoint, index) {
        return [accessor(dataPoint, index), dataPoint, index];
    }).reduce(function(accumulator, dataPoint) {
        return accumulator[0] > dataPoint[0] ? dataPoint : accumulator;
    }, [Number.MAX_VALUE, null, -1]);
}

function seriesPointSnapXOnly(series, data, point) {
    if (point == undefined) return []; // short circuit if data point was empty
    var xScale = series.xScale(),
        xValue = series.crossValue();
    var filtered = data.filter((d) => (xValue(d) != null));
    var nearest = minimum(filtered, (d) => Math.abs(point.x - xScale(xValue(d))))[1];
    return [{
        x: xScale(xValue(nearest)),
        y: point.y
    }];
}
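With the simplified three-argument signature, the pointer handler from the earlier render() changes accordingly; a sketch using the same names as above:

var pointer = fc.pointer()
    .on('point', (event) => {
        // pass the raw pointer position straight into the simplified helper
        data.crosshair = seriesPointSnapXOnly(candlesticks, data.series, event[0]);
        render();
    });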
This is far from polished, but I'm hoping it conveys the general idea.

UPDATED: Javascript logic to fix in a small function (SVG, obtaining absolute coords)

NEW:
Here is the code on CodePen:
http://codepen.io/cmer41k/pen/pRJNww/
Currently the function UpdateCoords(draggable) is commented out in the code.
What I want is, on the mouseup event, to update the coordinates of the path (a circle drawn as a path here) to absolute ones and remove the transform attribute.
But I am failing to do that; sorry, I'm only learning.
OLD:
In my code I have an SVG element (a path) that gets dragged around the root SVG object via the transform="translate(x,y)" attribute.
I want to update such a path element's "d" attribute (the string that describes all its coordinates) to use absolute coordinates and get rid of the transform/translate altogether.
Basically:
was: d="M10,10 30,10 20,30" + transform="translate(20,0)"
to be: d="M30,10 50,10 40,30" + transform="translate(0,0)" (or, if we can delete the transform, even better)
So I wrote code that does this for me, but there is a bug that prevents the proper result.
I am sure I am doing something wrong in here:
var v = Object.keys(path.controlPoints).length;
// controlPoints is just a place in the path object where I store the coords for the path
var matrix = path.transform.baseVal.consolidate();
// I validated that the above line gives me the proper transform matrix with the proper
// translated x,y values. Below I loop through and update all control points (coordinates) of the path.
for (i = 0; i < v; i++) {
    var position = svg.createSVGPoint();
    position.x = path.controlPoints["p" + i].x;
    position.y = path.controlPoints["p" + i].y;
    // for each of the path's control points I create an intermediate SVGPoint that can
    // leverage the matrix data (or so I think) to "convert" the old coords into the new ones
    position = position.matrixTransform(matrix);
    path.controlPoints["p" + i].x = position.x;
    path.controlPoints["p" + i].y = position.y;
}
// Maybe it's because I am not "cleaning"/resetting this position thing in the loop, or something?
Sorry, I am not a programmer, just learning, and the question is: in the code snippet provided, is something wrong with how I handle "position", given the goal I described?
Alright, the code snippet is now functioning properly!
After I figured out how to obtain the matrix properly, I still had a weird displacement for any subsequent draggables.
It became clear that those displacements happen even before my function.
I debugged it a bit and realized that I was not clearing the ._x and ._y params that I use for dragging.
Now the code works!
http://codepen.io/cmer41k/pen/XpbpQJ
var svgNS = "http://www.w3.org/2000/svg";
var draggable = null;
var canvas = {};
var inventory = {};
var elementToUpdate = {};
//debug
var focusedObj = {};
var focusedObj2 = {};
// to be deleted

window.onload = function() {
    canvas = document.getElementById("canvas");
    inventory = document.getElementById("inventory");
    AddListeners();
}

function AddListeners() {
    document.getElementById("svg").addEventListener("mousedown", Drag);
    document.getElementById("svg").addEventListener("mousemove", Drag);
    document.getElementById("svg").addEventListener("mouseup", Drag);
}

// Drag function //
function Drag(e) {
    var t = e.target, id = t.id, et = e.type; m = MousePos(e); // MousePos to ensure we obtain proper mouse coordinates
    if (!draggable && (et == "mousedown")) {
        if (t.className.baseVal == "inventory") { // if its inventory class item, this should get cloned into draggable
            copy = t.cloneNode(true);
            copy.onmousedown = copy.onmouseup = copy.onmousemove = Drag;
            copy.removeAttribute("id");
            copy._x = 0;
            copy._y = 0;
            canvas.appendChild(copy);
            draggable = copy;
            dPoint = m;
        }
        else if (t.className.baseVal == "draggable") { // if its just draggable class - it can be dragged around
            draggable = t;
            dPoint = m;
        }
    }
    // drag the spawned/copied draggable element now
    if (draggable && (et == "mousemove")) {
        draggable._x += m.x - dPoint.x;
        draggable._y += m.y - dPoint.y;
        dPoint = m;
        draggable.setAttribute("transform", "translate(" + draggable._x + "," + draggable._y + ")");
    }
    // stop drag
    if (draggable && (et == "mouseup")) {
        draggable.className.baseVal = "draggable";
        UpdateCoords(draggable);
        console.log(draggable);
        draggable._x = 0;
        draggable._y = 0;
        draggable = null;
    }
}
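The UpdateCoords function itself isn't shown in the snippet above. For completeness, here is a minimal sketch of the idea it implements: bake the consolidated transform into the stored control points, rebuild the d attribute, and drop the transform. The controlPoints structure and the simple "M x,y x,y ..." path format are assumptions based on the question, so adjust to your actual data:

function UpdateCoords(el) {
    var transform = el.transform.baseVal.consolidate();
    if (!transform) return;            // nothing to bake in
    var matrix = transform.matrix;     // the net translate applied while dragging
    var svgRoot = document.getElementById("svg");
    var pts = [];
    for (var key in el.controlPoints) {
        var p = svgRoot.createSVGPoint();
        p.x = el.controlPoints[key].x;
        p.y = el.controlPoints[key].y;
        p = p.matrixTransform(matrix); // convert to absolute coordinates
        el.controlPoints[key].x = p.x;
        el.controlPoints[key].y = p.y;
        pts.push(p.x + "," + p.y);
    }
    el.setAttribute("d", "M" + pts.join(" ") + " Z"); // rebuild the path data (assumed format)
    el.removeAttribute("transform");                  // absolute coords now, no translate needed
}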

Creating a 3D free-camera in WebGL - why do neither of these methods work?

EDIT
OK, I've tried a camera using quaternions:
qyaw = [Math.cos(rot[0]/2), 0, Math.sin(rot[0]/2), 0];
qpitch = [Math.cos(rot[1]/2), 0, 0, Math.sin(rot[1]/2)];
rotQuat = quat4.multiply (qpitch, qyaw);
camRot = quat4.toMat4(rotQuat);
camMat = mat4.multiply(camMat,camRot);
and I get exactly the same problem. So I'm guessing it's not gimbal lock. I've tried changing the order I multiply my matrices in, but it just goes camera matrix * model-view matrix, then object matrix * model-view. That's right, isn't it?
I'm trying to build a 3d camera in webGL that can move about the world and be rotated around the x and y (right and up) axes.
I'm getting the familiar problem (possibly gimbal lock?) that once one of the axes is rotated, rotation around the other is screwed up; for example, when you rotate 90 degrees around the Y axis, rotation around X becomes a spin around Z.
I appreciate this is a common problem, and there are copious guides to building a camera that avoid this problem, but as far as I can tell, I've implemented two different solutions and I'm still getting the same problem. Frankly, it's doing my head in...
One solution I'm using is this (adapted from http://www.toymaker.info/Games/html/camera.html):
function updateCam() {
    yAx = [0, 1, 0];
    xAx = [1, 0, 0];
    zAx = [0, 0, 1];
    mat4.identity(camMat);

    xRotMat = mat4.create();
    mat4.identity(xRotMat);
    mat4.rotate(xRotMat, rot[0], xAx);
    mat4.multiplyVec3(xRotMat, zAx);
    mat4.multiplyVec3(xRotMat, yAx);

    yRotMat = mat4.create();
    mat4.identity(yRotMat);
    mat4.rotate(yRotMat, rot[1], yAx);
    mat4.multiplyVec3(yRotMat, zAx);
    mat4.multiplyVec3(yRotMat, xAx);

    zRotMat = mat4.create();
    mat4.identity(zRotMat);
    mat4.rotate(zRotMat, rot[2], zAx);
    mat4.multiplyVec3(zRotMat, yAx);
    mat4.multiplyVec3(zRotMat, xAx);

    camMat[0] = xAx[0];
    camMat[1] = yAx[0];
    camMat[2] = zAx[0];
    //camMat[3] =
    camMat[4] = xAx[1];
    camMat[5] = yAx[1];
    camMat[6] = zAx[1];
    //camMat[7] =
    camMat[8] = xAx[2];
    camMat[9] = yAx[2];
    camMat[10] = zAx[2];
    //camMat[11] =
    camMat[12] = -1 * vec3.dot(camPos, xAx);
    camMat[13] = -1 * vec3.dot(camPos, yAx);
    camMat[14] = -1 * vec3.dot(camPos, zAx);
    //camMat[15] =

    var movSpeed = 1.5 * forward;
    var movVec = vec3.create(zAx);
    vec3.scale(movVec, movSpeed);
    vec3.add(camPos, movVec);
    movVec = vec3.create(xAx);
    movSpeed = 1.5 * strafe;
    vec3.scale(movVec, movSpeed);
    vec3.add(camPos, movVec);
}
I also tried this approach using
mat4.rotate(camMat, rot[1], yAx);
instead of explicitly building the camera matrix - same result.
My second (actually first...) method looks like this (rot is an array containing the current rotations around x, y, and z; z is always zero):
function updateCam() {
    mat4.identity(camRot);
    mat4.identity(camMat);
    camRot = fullRotate(rot);
    mat4.set(camRot, camMat);
    mat4.translate(camMat, camPos);
}

function fullRotate(angles) {
    var cosX = Math.cos(angles[0]);
    var sinX = Math.sin(angles[0]);
    var cosY = Math.cos(angles[1]);
    var sinY = Math.sin(angles[1]);
    var cosZ = Math.cos(angles[2]);
    var sinZ = Math.sin(angles[2]);

    rotMatrix = mat4.create([cosZ * cosY, -1 * sinZ * cosX + cosZ * sinY * sinX, sinZ * sinX + cosZ * sinY * cosX, 0,
                             sinZ * cosY, cosZ * cosX + sinZ * sinY * sinX, -1 * cosZ * sinX + sinZ * sinY * cosX, 0,
                             -1 * sinY, cosY * sinX, cosY * cosX, 0,
                             0, 0, 0, 1]);
    mat4.transpose(rotMatrix);
    return (rotMatrix);
}
The code (I've taken out most of the boilerplate gl lighting stuff etc and just left the transformations) to actually draw the scene is:
function drawScene() {
    gl.viewport(0, 0, gl.viewportWidth, gl.viewportHeight);
    gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
    mat4.perspective(45, gl.viewportWidth / gl.viewportHeight, 0.1, 2000.0, pMatrix);
    mat4.identity(mvMatrix);

    for (var i = 0; i < planets.length; i++) {
        if (planets[i].type == "sun") {
            currentProgram = perVertexSunProgram;
        } else {
            currentProgram = perVertexNormalProgram;
        }
        alpha = planets[i].alphaFlag;
        mat4.identity(planets[i].rotMat);
        mvPushMatrix();

        // all the following puts planets in orbit around a central sun,
        // but it's not really relevant to my current problem
        var rot = [0, rotCount * planets[i].orbitSpeed, 0];
        var planetMat;
        planetMat = mat4.create(fullRotate(rot));
        mat4.multiply(planets[i].rotMat, planetMat);
        mat4.translate(planets[i].rotMat, planets[i].position);

        if (planets[i].type == "moon") {
            var rot = [0, rotCount * planets[i].moonOrbitSpeed, 0];
            moonMat = mat4.create(fullRotate(rot));
            mat4.multiply(planets[i].rotMat, moonMat);
            mat4.translate(planets[i].rotMat, planets[i].moonPosition);
            mat4.multiply(planets[i].rotMat, mat4.inverse(moonMat));
        }
        mat4.multiply(planets[i].rotMat, mat4.inverse(planetMat));
        mat4.rotate(planets[i].rotMat, rotCount * planets[i].spinSpd, [0, 1, 0]);

        // this bit does the work - multiplying the model view by the camera matrix,
        // then by the matrix of the object we want to render
        mat4.multiply(mvMatrix, camMat);
        mat4.multiply(mvMatrix, planets[i].rotMat);

        gl.useProgram(currentProgram);
        setMatrixUniforms();
        gl.drawElements(gl.TRIANGLES, planets[i].VertexIndexBuffer.numItems, gl.UNSIGNED_SHORT, 0);
        mvPopMatrix();
    }
}
However, most of the transformations can be ignored; the same effect can be seen simply by displaying a sphere at world coords 0,0,0.
I thought my two methods (either rotating the axes one at a time as you go, or building up the rotation matrix in one go) avoided the problem of doing two rotations one after the other. Any ideas where I'm going wrong?
PS - I'm still very much starting to learn WebGL and 3D maths, so be gentle and talk to me like someone who hadn't heard of a matrix until a couple of months ago... Also, I know quaternions are a good solution to 3D rotation, and that would be my next attempt; however, I think I need to understand why these two methods don't work first...
For the sake of clarification, think about gimbal lock this way: You've played Quake/Unreal/Call of Duty/Any First Person Shooter, right? You know how when you are looking forward and move the mouse side to side your view swings around in a nice wide arc, but if you look straight up or down and move your mouse side to side you basically just spin tightly around a single point? That's gimbal lock. It's something that pretty much any FPS game uses because it happens to mimic what we would do in real life, and thus most people don't usually think of it as a problem.
For something like a space flight sim, however, or (more commonly) skeletal animation, that type of effect is undesirable, and so we use things like quaternions to help us get around it. Whether or not you care about gimbal lock for your camera depends on the effect that you are looking to achieve.
I don't think you're experiencing that, however. What it sounds like is that your order of matrix multiplication is messed up, and as a result your view is rotating in a way that you don't expect. I would try playing with the order that you do your X/Y/Z rotations in and see if you can find an order that gives you the desired results.
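To make the ordering point concrete, here is a rough sketch of the usual FPS-style view matrix built with the old glMatrix API the question already uses: apply the pitch rotation, then the yaw rotation, then translate by the negated camera position. The sign conventions depend on how rot is accumulated, so treat this as an illustration rather than a drop-in fix:

mat4.identity(camMat);
mat4.rotate(camMat, rot[0], [1, 0, 0]); // pitch around the camera's X axis
mat4.rotate(camMat, rot[1], [0, 1, 0]); // yaw around the world Y axis
mat4.translate(camMat, [-camPos[0], -camPos[1], -camPos[2]]);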
Now, I hate doing code dumps, but this may be useful to you so here we go: This is the code that I use in most of my newer WebGL projects to manage a free-floating camera. It is gimbal locked, but as I mentioned earlier it doesn't really matter in this case. Basically it just gives you FPS style controls that you can use to fly around your scene.
/**
 * A Flying Camera allows free motion around the scene using FPS style controls (WASD + mouselook)
 * This type of camera is good for displaying large scenes
 */
var FlyingCamera = Object.create(Object, {
    _angles: {
        value: null
    },
    angles: {
        get: function() {
            return this._angles;
        },
        set: function(value) {
            this._angles = value;
            this._dirty = true;
        }
    },
    _position: {
        value: null
    },
    position: {
        get: function() {
            return this._position;
        },
        set: function(value) {
            this._position = value;
            this._dirty = true;
        }
    },
    speed: {
        value: 100
    },
    _dirty: {
        value: true
    },
    _cameraMat: {
        value: null
    },
    _pressedKeys: {
        value: null
    },
    _viewMat: {
        value: null
    },
    viewMat: {
        get: function() {
            if (this._dirty) {
                var mv = this._viewMat;
                mat4.identity(mv);
                mat4.rotateX(mv, this.angles[0] - Math.PI / 2.0);
                mat4.rotateZ(mv, this.angles[1]);
                mat4.rotateY(mv, this.angles[2]);
                mat4.translate(mv, [-this.position[0], -this.position[1], -this.position[2]]);
                this._dirty = false;
            }
            return this._viewMat;
        }
    },
    init: {
        value: function(canvas) {
            this.angles = vec3.create();
            this.position = vec3.create();
            this.pressedKeys = new Array(128);

            // Initialize the matricies
            this.projectionMat = mat4.create();
            this._viewMat = mat4.create();
            this._cameraMat = mat4.create();

            // Set up the appropriate event hooks
            var moving = false;
            var lastX, lastY;
            var self = this;

            window.addEventListener("keydown", function(event) {
                self.pressedKeys[event.keyCode] = true;
            }, false);

            window.addEventListener("keyup", function(event) {
                self.pressedKeys[event.keyCode] = false;
            }, false);

            canvas.addEventListener('mousedown', function(event) {
                if (event.which == 1) {
                    moving = true;
                }
                lastX = event.pageX;
                lastY = event.pageY;
            }, false);

            canvas.addEventListener('mousemove', function(event) {
                if (moving) {
                    var xDelta = event.pageX - lastX;
                    var yDelta = event.pageY - lastY;
                    lastX = event.pageX;
                    lastY = event.pageY;

                    self.angles[1] += xDelta * 0.025;
                    while (self.angles[1] < 0)
                        self.angles[1] += Math.PI * 2;
                    while (self.angles[1] >= Math.PI * 2)
                        self.angles[1] -= Math.PI * 2;

                    self.angles[0] += yDelta * 0.025;
                    while (self.angles[0] < -Math.PI * 0.5)
                        self.angles[0] = -Math.PI * 0.5;
                    while (self.angles[0] > Math.PI * 0.5)
                        self.angles[0] = Math.PI * 0.5;

                    self._dirty = true;
                }
            }, false);

            canvas.addEventListener('mouseup', function(event) {
                moving = false;
            }, false);

            return this;
        }
    },
    update: {
        value: function(frameTime) {
            var dir = [0, 0, 0];
            var speed = (this.speed / 1000) * frameTime;

            // This is our first person movement code. It's not really pretty, but it works
            if (this.pressedKeys['W'.charCodeAt(0)]) {
                dir[1] += speed;
            }
            if (this.pressedKeys['S'.charCodeAt(0)]) {
                dir[1] -= speed;
            }
            if (this.pressedKeys['A'.charCodeAt(0)]) {
                dir[0] -= speed;
            }
            if (this.pressedKeys['D'.charCodeAt(0)]) {
                dir[0] += speed;
            }
            if (this.pressedKeys[32]) { // Space, moves up
                dir[2] += speed;
            }
            if (this.pressedKeys[17]) { // Ctrl, moves down
                dir[2] -= speed;
            }

            if (dir[0] != 0 || dir[1] != 0 || dir[2] != 0) {
                var cam = this._cameraMat;
                mat4.identity(cam);
                mat4.rotateX(cam, this.angles[0]);
                mat4.rotateZ(cam, this.angles[1]);
                mat4.inverse(cam);
                mat4.multiplyVec3(cam, dir);

                // Move the camera in the direction we are facing
                vec3.add(this.position, dir);
                this._dirty = true;
            }
        }
    }
});
This camera assumes that Z is your "Up" axis, which may or may not be true for you. It's also using ECMAScript 5 style objects, but that shouldn't be an issue for any WebGL-enabled browser, and it utilizes my glMatrix library but it looks like you're already using that anyway. Basic usage is pretty simple:
// During your init code
var camera = Object.create(FlyingCamera).init(canvasElement);
// During your draw loop
camera.update(16); // 16ms per-frame == 60 FPS
// Bind a shader, etc, etc...
gl.uniformMatrix4fv(shaderUniformModelViewMat, false, camera.viewMat);
Everything else is handled internally for you, including keyboard and mouse controls. May not fit your needs exactly, but hopefully you can glean what you need to from there. (Note: This is essentially the same as the camera used in my Quake 3 demo, so that should give you an idea of how it works.)
Okay, that's enough babbling from me for one post! Good luck!
It doesn't matter how you build your matrices: using Euler angle rotations (like both of your code snippets do) will always result in a transformation that exhibits the gimbal lock problem.
You may want to have a look at https://en.wikipedia.org/wiki/Quaternions_and_spatial_rotation as a starting point for creating transformations that avoid gimbal lock.
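As a rough, library-free illustration of that approach (an [x, y, z, w] layout, not code from either answer): accumulate yaw and pitch as axis-angle quaternions and combine them with the Hamilton product before converting the result to a rotation matrix.

// Quaternion from an axis-angle pair, [x, y, z, w] layout.
function axisAngle(ax, ay, az, angle) {
    var s = Math.sin(angle / 2);
    return [ax * s, ay * s, az * s, Math.cos(angle / 2)];
}

// Hamilton product a * b.
function qmul(a, b) {
    return [
        a[3] * b[0] + a[0] * b[3] + a[1] * b[2] - a[2] * b[1],
        a[3] * b[1] - a[0] * b[2] + a[1] * b[3] + a[2] * b[0],
        a[3] * b[2] + a[0] * b[1] - a[1] * b[0] + a[2] * b[3],
        a[3] * b[3] - a[0] * b[0] - a[1] * b[1] - a[2] * b[2]
    ];
}

// pitch about X applied on top of yaw about Y; convert the result to a
// rotation matrix (e.g. quat4.toMat4 in old glMatrix) for the camera.
var q = qmul(axisAngle(1, 0, 0, pitch), axisAngle(0, 1, 0, yaw));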
Try my new project (the WebGL2 part of the visual-js game engine), based on glMatrix 2.0.
To activate camera events, use: App.camera.FirstPersonController = true;
Live examples
Important functions for the camera:
Camera interaction
App.operation.CameraPerspective = function() {
    this.GL.gl.viewport(0, 0, wd, ht);
    this.GL.gl.clear(this.GL.gl.COLOR_BUFFER_BIT | this.GL.gl.DEPTH_BUFFER_BIT);
    // mat4.identity( world.mvMatrix )
    // mat4.translate(world.mvMatrix , world.mvMatrix, [ 10 , 10 , 10] );
    /* Field of view, width/height ratio, min distance of viewpoint, max distance of viewpoint */
    mat4.perspective(this.pMatrix, degToRad(App.camera.viewAngle), (this.GL.gl.viewportWidth / this.GL.gl.viewportHeight), App.camera.nearViewpoint, App.camera.farViewpoint);
};
manifest.js:
var App = {
    name: "webgl2 experimental",
    version: 0.3,
    events: true,
    logs: false,
    draw_interval: 10,
    antialias: false,
    camera: {
        viewAngle: 45,
        nearViewpoint: 0.1,
        farViewpoint: 1000,
        edgeMarginValue: 100,
        FirstPersonController: false
    },
    textures: [], // readOnly in manifest
    tools: {},    // readOnly in manifest
    // ... (remaining settings omitted)
};
Download the source from:
webGL 2 part of visual-js GE project
Old:
OpenGL ES 1.1
https://stackoverflow.com/a/17261523/1513187
A very fast first-person controller with glMatrix 0.9, based on the http://learningwebgl.com/ examples.

Farseer/XNA Assertion Failed, Vector2 position for body modified by camera matrix

I created a camera with a matrix and used it to move the view point in 2D. Basically I started from this template:
http://torshall.se/?p=272
I also had, in one of my classes, some simple code to spawn boxes with the mouse:
public void CreateBodies()
{
    mouse = Mouse.GetState();
    if (mouse.RightButton == ButtonState.Pressed)
    {
        Bodies += 1;
        if (Bodies >= MaxBodies)
            Bodies = 0;
        rectBody[Bodies] = BodyFactory.CreateRectangle(world, ConvertUnits.ToSimUnits(rectangle.Width), ConvertUnits.ToSimUnits(rectangle.Height), 1);
        rectBody[Bodies].Position = ConvertUnits.ToSimUnits(mouse.X, mouse.Y);
        rectBody[Bodies].BodyType = BodyType.Dynamic;
    }
}
This worked perfectly fine, but when I moved the "camera" the mouse coordinates no longer mapped to the right location. So I made this little modification in game1.cs and in my method to get the world coordinates of my mouse:
mouse = Mouse.GetState();
Matrix inverse = Matrix.Invert(camera.transform);
Vector2 mousePos = Vector2.Transform(new Vector2(mouse.X, mouse.Y), inverse);
TE.CreateBodies(mousePos);
public void CreateBodies(Vector2 mousePosition)
{
    mouse = Mouse.GetState();
    MousePosition = mousePosition;
    if (mouse.RightButton == ButtonState.Pressed)
    {
        Bodies += 1;
        if (Bodies >= MaxBodies)
        {
            Bodies = 0;
        }
        rectBody[Bodies] = BodyFactory.CreateRectangle(world, ConvertUnits.ToSimUnits(rectangle.Width), ConvertUnits.ToSimUnits(rectangle.Height), 1);
        rectBody[Bodies].BodyType = BodyType.Dynamic;
        rectBody[Bodies].Position = ConvertUnits.ToSimUnits(MousePosition);
    }
}
Now this is supposed to give me the world coordinates of my mouse, but I have a problem: when I run the program and click somewhere on the screen to create a box, I get this error:
http://img68.xooimage.com/files/6/a/4/bob-2c526f4.png
What's going on? :/
Edit:
This is at line 439 of body.cs:
Debug.Assert(!float.IsNaN(value.X) && !float.IsNaN(value.Y));
