I am using SharpDX to render the browser (Chromium) output buffer in a DirectX process.
The process is relatively simple: I intercept the CEF buffer (by overriding the OnPaint method) and write it to a Texture2D.
The code is straightforward:
Texture creation:
Texture creation:
public void BuildTextureWrap() {
    var oldTexture = texture;

    texture = new D3D11.Texture2D(DxHandler.Device, new D3D11.Texture2DDescription() {
        Width = overlay.Size.Width,
        Height = overlay.Size.Height,
        MipLevels = 1,
        ArraySize = 1,
        Format = DXGI.Format.B8G8R8A8_UNorm,
        SampleDescription = new DXGI.SampleDescription(1, 0),
        Usage = D3D11.ResourceUsage.Default,
        BindFlags = D3D11.BindFlags.ShaderResource,
        CpuAccessFlags = D3D11.CpuAccessFlags.None,
        OptionFlags = D3D11.ResourceOptionFlags.None,
    });

    var view = new D3D11.ShaderResourceView(
        DxHandler.Device,
        texture,
        new D3D11.ShaderResourceViewDescription {
            Format = texture.Description.Format,
            Dimension = D3D.ShaderResourceViewDimension.Texture2D,
            Texture2D = { MipLevels = texture.Description.MipLevels },
        }
    );

    textureWrap = new D3DTextureWrap(view, texture.Description.Width, texture.Description.Height);

    if (oldTexture != null) {
        obsoleteTextures.Add(oldTexture);
    }
}
That piece of code is executed at startup and whenever a resize happens.
Then, when CEF's OnDraw fires, I copy its buffer into the texture:
var destinationRegion = new D3D11.ResourceRegion {
    Top = Math.Min(r.dirtyRect.y, texDesc.Height),
    Bottom = Math.Min(r.dirtyRect.y + r.dirtyRect.height, texDesc.Height),
    Left = Math.Min(r.dirtyRect.x, texDesc.Width),
    Right = Math.Min(r.dirtyRect.x + r.dirtyRect.width, texDesc.Width),
    Front = 0,
    Back = 1,
};

// Draw to the target
var context = targetTexture.Device.ImmediateContext;
context.UpdateSubresource(targetTexture, 0, destinationRegion, sourceRegionPtr, rowPitch, depthPitch);
There is some more code, but this is the only relevant piece. The whole thing works until OnDraw happens frequently.
Apparently, if I force CEF to paint frequently, the whole host process dies.
The crash happens at UpdateSubresource.
So my question is: is there another, safer way to do this (update the texture frequently)?
The solution to this problem was relatively simple, yet not so obvious at the beginning.
I simply moved the code responsible for updating the texture into the render loop and kept the internal buffer pointer cached.
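A rough sketch of that approach (the member and method names here are placeholders chosen for illustration, not actual CEF or SharpDX API): OnPaint only caches the buffer pointer, and the render loop performs the UpdateSubresource call on the thread that owns the immediate context. It assumes the CEF buffer remains valid between paints, and for simplicity it re-uploads the whole frame rather than only the dirty rect.

// Sketch only: OnPaint just caches the buffer pointer; the GPU copy happens in the render loop.
private IntPtr cachedBuffer = IntPtr.Zero;
private int cachedRowPitch;
private readonly object bufferLock = new object();

// Called from CEF's OnPaint: no D3D work here.
public void OnCefPaint(IntPtr buffer, int width, int height) {
    lock (bufferLock) {
        cachedBuffer = buffer;
        cachedRowPitch = width * 4; // B8G8R8A8, 4 bytes per pixel
    }
}

// Called once per frame from the render loop, on the thread that owns the immediate context.
public void FlushToTexture() {
    lock (bufferLock) {
        if (cachedBuffer == IntPtr.Zero) return;
        var context = texture.Device.ImmediateContext;
        context.UpdateSubresource(texture, 0, null, cachedBuffer, cachedRowPitch, 0);
    }
}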
First, I'm somewhat new to Mapbox. I've made some fun things work ok, but that doesn't mean I'm doing things the correct/best way. Always happy to learn how to do things better.
I'm trying to set up a page that loads a 3D model and gives you on-screen controls to manipulate the 3D model after it loads.
I've gotten x/y/z movement and rotation to work ok but scale isn't working correctly. I've tried a few different ways (detailed below) and scale just doesn't change.
I started with the standard Mapbox 3D model code example here:
https://docs.mapbox.com/mapbox-gl-js/example/add-3d-model/
And I'm using jscastro76's threebox fork from here:
https://github.com/jscastro76/threebox/
Note: Regardless of what the initial model options scale is set to, the console.log/.dir shows 0.0262049, yet changing that initial scale does affect the initial size of the model. That makes me think I'm reading the wrong scale, but none of the scale-setting attempts visibly changed the model's scale either. And doing a console.dir(defaultModel) and looking through the properties, everything that looks scale-related is also always set to 0.0262049, including matrix.elements 0/5/10 and scale x/y/z.
Any thoughts/comments? Thanks in advance!
Code I'm using....
Adding the 3D object:
map.addLayer({
    id: 'custom_layer',
    type: 'custom',
    renderingMode: '3d',
    onAdd: function (map, mbxContext) {

        window.tb = new Threebox(
            map,
            mbxContext,
            {
                defaultLights: true,
                enableSelectingObjects: true,
                enableDraggingObjects: true,
                enableRotatingObjects: true
            }
        );

        var options = {
            obj: 'model.glb',
            type: 'gltf',
            scale: 1, // I get 0.0262049 later regardless of what this is set to. Models with different initial scale set here work correctly, but still can't change it later
            units: 'meters',
            rotation: { x: 90, y: 0, z: 0 },
            anchor: 'center'
        }

        tb.loadObj(options, function (model) {
            defaultModel = model.setCoords(origin);
            defaultModel.addEventListener('ObjectDragged', onDraggedObject, false);
            tb.add(defaultModel);
        })
    },
    render: function (gl, matrix) {
        tb.update();
    }
});
// Attempt #1, scale.set
scale = defaultModel.scale;
console.log('Original scale:');
console.dir(scale);
// x: 0.0262049
// y: 0.0262049
// z: 0.0262049

defaultModel.scale.set(1, 1, 1);
scale = defaultModel.scale;
console.log('New scale:');
console.dir(scale);
// x: 0.0262049
// y: 0.0262049
// z: 0.0262049
I also tried all of these with the same before/after results:
defaultModel.matrix.makeScale(1, 1, 1);
defaultModel.setScale(1);
defaultModel.scale.x = 1;
defaultModel.scale.y = 1;
defaultModel.scale.z = 1;
defaultModel.matrix.scale(1);
defaultModel.matrix.scale(1, 1, 1);
I saw reference to using a THREE.Vector3 object so I tried this, with the same results:
var threeV3 = new THREE.Vector3(
    1,
    1, // also tried -1 on some
    1
);
defaultModel.scale.set(threeV3);
defaultModel.matrix.makeScale(threeV3);
defaultModel.setScale(threeV3);
defaultModel.matrix.scale(threeV3);
I am rendering OSM map tiles onto a web page using HTML canvas drawImage. However where an end user has selected dark mode, I would like to reduce the luminosity of these displayed maps, yet still allow them to make sense to the user.
So far I have had moderate success, as follows:
1. Plot the map tile using drawImage
2. Set globalCompositeOperation to "difference"
3. Over-plot the map tile with a white rectangle of the same size
4. Set globalCompositeOperation back to "source-over"
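In code, those steps look roughly like this (a sketch only; canvas, tileImage, x and y are placeholders):

// Sketch of the steps above.
const ctx = canvas.getContext('2d');

// 1. Draw the tile normally.
ctx.drawImage(tileImage, x, y);

// 2-3. Invert it by over-plotting a white rectangle with "difference" compositing.
ctx.globalCompositeOperation = 'difference';
ctx.fillStyle = 'white';
ctx.fillRect(x, y, tileImage.width, tileImage.height);

// 4. Restore normal compositing for anything drawn afterwards.
ctx.globalCompositeOperation = 'source-over';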
But this simple colour inversion is perhaps not the best solution. Does anyone have any other suggestions?
You could switch to a different tile server with a different map style. Check for example "CartoDB.DarkMatter" from Leaflet Provider Demo or MapBox Light & Dark.
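If you go that route while sticking with plain canvas drawing (no Leaflet), it is just a matter of pointing the existing tile-fetching code at the dark tile URLs. A rough sketch, with the URL template shown only for illustration (check the provider's current template and attribution requirements before using it):

// Fetch a dark-styled tile and draw it with the existing 2D context (ctx).
function drawDarkTile(z, x, y, destX, destY) {
    const img = new Image();
    img.crossOrigin = 'anonymous';
    img.onload = () => ctx.drawImage(img, destX, destY);
    img.src = 'https://a.basemaps.cartocdn.com/dark_all/' + z + '/' + x + '/' + y + '.png';
    return img;
}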
I have found a pretty good solution to this and it is as follows:
1. Set the canvas context filter to "hue-rotate(180deg)"
2. Plot the map tile on the canvas using drawImage
3. Set the canvas context filter back to "none"
4. Set the canvas context globalCompositeOperation to "difference"
5. Over-plot the map tile with a white rectangle of the same size
6. Set the canvas context globalCompositeOperation back to "source-over"
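For reference, a minimal sketch of those steps, reusing the same 2D context (ctx) and with tileImage, x and y as placeholders. The hue rotation up front compensates for the hue flip that the inversion introduces, so colours stay roughly recognisable after the difference pass.

// Sketch of the hue-rotate + invert approach.
ctx.filter = 'hue-rotate(180deg)';
ctx.drawImage(tileImage, x, y);
ctx.filter = 'none';

ctx.globalCompositeOperation = 'difference';
ctx.fillStyle = 'white';
ctx.fillRect(x, y, tileImage.width, tileImage.height);
ctx.globalCompositeOperation = 'source-over';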
Maybe someone will still find this useful; it's some code I'm using for this purpose in my tar1090 project.
Negative and positive contrast are probably self-explanatory, and dim is basically just a brightness modification with an inverted sign.
toggle function:
function setDim(layer, state) {
    if (state) {
        layer.dimKey = layer.on('postrender', dim);
    } else {
        ol.Observable.unByKey(layer.dimKey);
    }
    OLMap.render();
}
postrender function:
function dim(evt) {
    const dim = mapDimPercentage * (1 + 0.25 * toggles['darkerColors'].state);
    const contrast = mapContrastPercentage * (1 + 0.1 * toggles['darkerColors'].state);
    if (dim > 0.0001) {
        evt.context.globalCompositeOperation = 'multiply';
        evt.context.fillStyle = 'rgba(0,0,0,' + dim + ')';
        evt.context.fillRect(0, 0, evt.context.canvas.width, evt.context.canvas.height);
    } else if (dim < -0.0001) {
        evt.context.globalCompositeOperation = 'screen';
        console.log(evt.context.globalCompositeOperation);
        evt.context.fillStyle = 'rgba(255, 255, 255,' + (-dim) + ')';
        evt.context.fillRect(0, 0, evt.context.canvas.width, evt.context.canvas.height);
    }
    if (contrast > 0.0001) {
        evt.context.globalCompositeOperation = 'overlay';
        evt.context.fillStyle = 'rgba(0,0,0,' + contrast + ')';
        evt.context.fillRect(0, 0, evt.context.canvas.width, evt.context.canvas.height);
    } else if (contrast < -0.0001) {
        evt.context.globalCompositeOperation = 'overlay';
        evt.context.fillStyle = 'rgba(255, 255, 255,' + (-contrast) + ')';
        evt.context.fillRect(0, 0, evt.context.canvas.width, evt.context.canvas.height);
    }
    evt.context.globalCompositeOperation = 'source-over';
}
toggle function when using LayerSwitcher:
function setDimLayerSwitcher(state) {
    if (!state) {
        ol.control.LayerSwitcher.forEachRecursive(layers_group, function(lyr) {
            if (lyr.get('type') != 'base')
                return;
            ol.Observable.unByKey(lyr.dimKey);
        });
    } else {
        ol.control.LayerSwitcher.forEachRecursive(layers_group, function(lyr) {
            if (lyr.get('type') != 'base')
                return;
            lyr.dimKey = lyr.on('postrender', dim);
        });
    }
    OLMap.render();
}
I would like to use the MediaStream captureStream() method, but it is either rendered useless by the specification and bugs, or I am using it totally wrong.
I know that captureStream takes a maximum frame rate as its parameter, not a constant one, and it does not even guarantee that. However, it is possible to change the MediaStream's currentTime (currently in Chrome; in Firefox it has no effect, but in return there is requestFrame, which is not available in Chrome), so the idea was that manual frame requests, or setting the placement of a frame in the MediaStream, should override this effect. It doesn't.
In Firefox it smoothly renders the video, frame by frame, but the resulting video is as long as the wall-clock time used for processing.
In Chrome there are some dubious black or reordered frames (for now I do not care about that until the FPS matches), and manually setting currentTime achieves nothing; the result is the same as in Firefox.
I use modified code from the "MediaStream Capture Canvas and Audio Simultaneously" answer.
const FPS = 30;
var cStream, vid, recorder, chunks = [], go = true,
    Q = 61, rec = document.getElementById('rec'),
    canvas = document.getElementById('canvas'),
    ctx = canvas.getContext('2d');

ctx.strokeStyle = 'rgb(255, 0, 0)';

function clickHandler() {
    this.textContent = 'stop recording';
    // it has no effect no matter if it is empty or set to 30
    cStream = canvas.captureStream(FPS);

    recorder = new MediaRecorder(cStream);
    recorder.ondataavailable = saveChunks;
    recorder.onstop = exportStream;

    this.onclick = stopRecording;
    recorder.start();
    draw();
}

function exportStream(e) {
    if (chunks.length) {
        var blob = new Blob(chunks)
        var vidURL = URL.createObjectURL(blob);
        var vid2 = document.createElement('video');
        vid2.controls = true;
        vid2.src = vidURL;
        vid2.onend = function() {
            URL.revokeObjectURL(vidURL);
        }
        document.body.insertBefore(vid2, vid);
    } else {
        document.body.insertBefore(document.createTextNode('no data saved'), canvas);
    }
}

function saveChunks(e) {
    e.data.size && chunks.push(e.data);
}

function stopRecording() {
    go = false;
    this.parentNode.removeChild(this);
    recorder.stop();
}

var loadVideo = function() {
    vid = document.createElement('video');
    document.body.insertBefore(vid, canvas);

    vid.oncanplay = function() {
        rec.onclick = clickHandler;
        rec.disabled = false;
        canvas.width = vid.videoWidth;
        canvas.height = vid.videoHeight;
        vid.oncanplay = null;
        ctx.drawImage(vid, 0, 0);
    }

    vid.onseeked = function() {
        ctx.drawImage(vid, 0, 0);
        /*
          Here I want to include additional drawing per each frame,
          for sure taking more than 180ms
        */
        if (cStream && cStream.requestFrame) cStream.requestFrame();
        draw();
    }

    vid.crossOrigin = 'anonymous';
    vid.src = 'https://dl.dropboxusercontent.com/s/bch2j17v6ny4ako/movie720p.mp4';
    vid.currentTime = 0;
}

function draw() {
    if (go && cStream) {
        ++Q;
        cStream.currentTime = Q / FPS;
        vid.currentTime = Q / FPS;
    }
};

loadVideo();
<button id="rec" disabled>record</button><br>
<canvas id="canvas" width="500" height="500"></canvas>
Is there a way to make it operational?
The goal is to load video, process every frame (which is time consuming in my case) and return the processed one.
Footnote: I do not want to use ffmpeg.js, an external server, or other technologies. I can process it with classic ffmpeg without using JavaScript at all, but that is not the point of this question; it is more about MediaStream usability / maturity. The context is Firefox/Chrome here, but it may be node.js or nw.js as well. If this is possible at all, or is just awaiting bug fixes, the next question would be feeding audio to it, but I think that would be better as a separate question.
EDIT
OK, I've tried a camera using quaternions:
qyaw = [Math.cos(rot[0]/2), 0, Math.sin(rot[0]/2), 0];
qpitch = [Math.cos(rot[1]/2), 0, 0, Math.sin(rot[1]/2)];
rotQuat = quat4.multiply (qpitch, qyaw);
camRot = quat4.toMat4(rotQuat);
camMat = mat4.multiply(camMat,camRot);
and I get exactly the same problem, so I'm guessing it's not gimbal lock. I've tried changing the order in which I multiply my matrices, but it just goes camera matrix * model view matrix, then object matrix * model view. That's right, isn't it?
I'm trying to build a 3d camera in webGL that can move about the world and be rotated around the x and y (right and up) axes.
I'm getting the familiar problem (possibly gimbal lock?) that once one of the axes is rotated, rotation around the other is screwed up; for example, when you rotate around the Y axis 90 degrees, rotation around the X axis becomes a spin around Z.
I appreciate this is a common problem, and there are copious guides to building a camera that avoid this problem, but as far as I can tell, I've implemented two different solutions and I'm still getting the same problem. Frankly, it's doing my head in...
One solution I'm using is this (adapted from http://www.toymaker.info/Games/html/camera.html):
function updateCam() {
    yAx = [0, 1, 0];
    xAx = [1, 0, 0];
    zAx = [0, 0, 1];

    mat4.identity(camMat);

    xRotMat = mat4.create();
    mat4.identity(xRotMat);
    mat4.rotate(xRotMat, rot[0], xAx);
    mat4.multiplyVec3(xRotMat, zAx);
    mat4.multiplyVec3(xRotMat, yAx);

    yRotMat = mat4.create();
    mat4.identity(yRotMat);
    mat4.rotate(yRotMat, rot[1], yAx);
    mat4.multiplyVec3(yRotMat, zAx);
    mat4.multiplyVec3(yRotMat, xAx);

    zRotMat = mat4.create();
    mat4.identity(zRotMat);
    mat4.rotate(zRotMat, rot[2], zAx);
    mat4.multiplyVec3(zRotMat, yAx);
    mat4.multiplyVec3(zRotMat, xAx);

    camMat[0] = xAx[0];
    camMat[1] = yAx[0];
    camMat[2] = zAx[0];
    //camMat[3] =
    camMat[4] = xAx[1];
    camMat[5] = yAx[1];
    camMat[6] = zAx[1];
    //camMat[7] =
    camMat[8] = xAx[2];
    camMat[9] = yAx[2];
    camMat[10] = zAx[2];
    //camMat[11] =
    camMat[12] = -1 * vec3.dot(camPos, xAx);
    camMat[13] = -1 * vec3.dot(camPos, yAx);
    camMat[14] = -1 * vec3.dot(camPos, zAx);
    //camMat[15] =

    var movSpeed = 1.5 * forward;
    var movVec = vec3.create(zAx);
    vec3.scale(movVec, movSpeed);
    vec3.add(camPos, movVec);

    movVec = vec3.create(xAx);
    movSpeed = 1.5 * strafe;
    vec3.scale(movVec, movSpeed);
    vec3.add(camPos, movVec);
}
I also tried this method using
mat4.rotate(camMat, rot[1], yAx);
instead of explicitly building the camera matrix, with the same result.
My second (actually first...) method looks like this (rot is an array containing the current rotations around x, y and z; z is always zero):
function updateCam() {
    mat4.identity(camRot);
    mat4.identity(camMat);
    camRot = fullRotate(rot);
    mat4.set(camRot, camMat);
    mat4.translate(camMat, camPos);
}
function fullRotate(angles) {
    var cosX = Math.cos(angles[0]);
    var sinX = Math.sin(angles[0]);
    var cosY = Math.cos(angles[1]);
    var sinY = Math.sin(angles[1]);
    var cosZ = Math.cos(angles[2]);
    var sinZ = Math.sin(angles[2]);

    rotMatrix = mat4.create([cosZ*cosY, -1*sinZ*cosX + cosZ*sinY*sinX, sinZ*sinX + cosZ*sinY*cosX, 0,
                             sinZ*cosY, cosZ*cosX + sinZ*sinY*sinX, -1*cosZ*sinX + sinZ*sinY*cosX, 0,
                             -1*sinY, cosY*sinX, cosY*cosX, 0,
                             0, 0, 0, 1]);

    mat4.transpose(rotMatrix);
    return (rotMatrix);
}
The code (I've taken out most of the boilerplate gl lighting stuff etc and just left the transformations) to actually draw the scene is:
function drawScene() {
    gl.viewport(0, 0, gl.viewportWidth, gl.viewportHeight);
    gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
    mat4.perspective(45, gl.viewportWidth / gl.viewportHeight, 0.1, 2000.0, pMatrix);
    mat4.identity(mvMatrix);

    for (var i = 0; i < planets.length; i++) {
        if (planets[i].type == "sun") {
            currentProgram = perVertexSunProgram;
        } else {
            currentProgram = perVertexNormalProgram;
        }
        alpha = planets[i].alphaFlag;

        mat4.identity(planets[i].rotMat);

        mvPushMatrix();

        // all the following puts planets in orbit around a central sun, but it's not really relevant to my current problem
        var rot = [0, rotCount * planets[i].orbitSpeed, 0];
        var planetMat;
        planetMat = mat4.create(fullRotate(rot));
        mat4.multiply(planets[i].rotMat, planetMat);
        mat4.translate(planets[i].rotMat, planets[i].position);

        if (planets[i].type == "moon") {
            var rot = [0, rotCount * planets[i].moonOrbitSpeed, 0];
            moonMat = mat4.create(fullRotate(rot));
            mat4.multiply(planets[i].rotMat, moonMat);
            mat4.translate(planets[i].rotMat, planets[i].moonPosition);
            mat4.multiply(planets[i].rotMat, mat4.inverse(moonMat));
        }

        mat4.multiply(planets[i].rotMat, mat4.inverse(planetMat));
        mat4.rotate(planets[i].rotMat, rotCount * planets[i].spinSpd, [0, 1, 0]);

        // this bit does the work - multiplying the model view by the camera matrix, then by the matrix of the object we want to render
        mat4.multiply(mvMatrix, camMat);
        mat4.multiply(mvMatrix, planets[i].rotMat);

        gl.useProgram(currentProgram);
        setMatrixUniforms();
        gl.drawElements(gl.TRIANGLES, planets[i].VertexIndexBuffer.numItems, gl.UNSIGNED_SHORT, 0);

        mvPopMatrix();
    }
}
However, most of the transformations can be ignored; the same effect can be seen simply by displaying a sphere at world coords 0,0,0.
I thought my two methods (either rotating the axes one at a time as you go, or building up the rotation matrix in one go) avoided the problem of doing two rotations one after the other. Any ideas where I'm going wrong?
PS: I'm still very much starting to learn WebGL and 3D maths, so be gentle and talk to me like someone who hadn't heard of a matrix until a couple of months ago... Also, I know quaternions are a good solution to 3D rotation, and that would be my next attempt; however, I think I need to understand why these two methods don't work first.
For the sake of clarification, think about gimbal lock this way: You've played Quake/Unreal/Call of Duty/Any First Person Shooter, right? You know how when you are looking forward and move the mouse side to side your view swings around in a nice wide arc, but if you look straight up or down and move your mouse side to side you basically just spin tightly around a single point? That's gimbal lock. It's something that pretty much any FPS game uses because it happens to mimic what we would do in real life, and thus most people don't usually think of it as a problem.
For something like a space flight sim, however, or (more commonly) skeletal animation, that type of effect is undesirable, and so we use things like quaternions to help us get around it. Whether or not you care about gimbal lock for your camera depends on the effect that you are looking to achieve.
I don't think you're experiencing that, however. What it sounds like is that your order of matrix multiplication is messed up, and as a result your view is rotating in a way that you don't expect. I would try playing with the order that you do your X/Y/Z rotations in and see if you can find an order that gives you the desired results.
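If it helps, here is a tiny sketch of what "playing with the order" means in practice, using the same old-style glMatrix calls as the code below; viewMat, pitch and yaw are placeholder variables:

// Build the view rotation one way...
mat4.identity(viewMat);
mat4.rotateX(viewMat, pitch);
mat4.rotateY(viewMat, yaw);

// ...and then try the other way round:
// mat4.identity(viewMat);
// mat4.rotateY(viewMat, yaw);
// mat4.rotateX(viewMat, pitch);

// Swapping the order changes which axis stays aligned with the world and which
// one follows the camera, which is usually the behaviour being chased here.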
Now, I hate doing code dumps, but this may be useful to you so here we go: This is the code that I use in most of my newer WebGL projects to manage a free-floating camera. It is gimbal locked, but as I mentioned earlier it doesn't really matter in this case. Basically it just gives you FPS style controls that you can use to fly around your scene.
/**
 * A Flying Camera allows free motion around the scene using FPS style controls (WASD + mouselook)
 * This type of camera is good for displaying large scenes
 */
var FlyingCamera = Object.create(Object, {
    _angles: {
        value: null
    },

    angles: {
        get: function() {
            return this._angles;
        },
        set: function(value) {
            this._angles = value;
            this._dirty = true;
        }
    },

    _position: {
        value: null
    },

    position: {
        get: function() {
            return this._position;
        },
        set: function(value) {
            this._position = value;
            this._dirty = true;
        }
    },

    speed: {
        value: 100
    },

    _dirty: {
        value: true
    },

    _cameraMat: {
        value: null
    },

    _pressedKeys: {
        value: null
    },

    _viewMat: {
        value: null
    },

    viewMat: {
        get: function() {
            if (this._dirty) {
                var mv = this._viewMat;
                mat4.identity(mv);
                mat4.rotateX(mv, this.angles[0] - Math.PI / 2.0);
                mat4.rotateZ(mv, this.angles[1]);
                mat4.rotateY(mv, this.angles[2]);
                mat4.translate(mv, [-this.position[0], -this.position[1], -this.position[2]]);
                this._dirty = false;
            }

            return this._viewMat;
        }
    },

    init: {
        value: function(canvas) {
            this.angles = vec3.create();
            this.position = vec3.create();
            this.pressedKeys = new Array(128);

            // Initialize the matricies
            this.projectionMat = mat4.create();
            this._viewMat = mat4.create();
            this._cameraMat = mat4.create();

            // Set up the appropriate event hooks
            var moving = false;
            var lastX, lastY;
            var self = this;

            window.addEventListener("keydown", function(event) {
                self.pressedKeys[event.keyCode] = true;
            }, false);

            window.addEventListener("keyup", function(event) {
                self.pressedKeys[event.keyCode] = false;
            }, false);

            canvas.addEventListener('mousedown', function(event) {
                if (event.which == 1) {
                    moving = true;
                }
                lastX = event.pageX;
                lastY = event.pageY;
            }, false);

            canvas.addEventListener('mousemove', function(event) {
                if (moving) {
                    var xDelta = event.pageX - lastX;
                    var yDelta = event.pageY - lastY;

                    lastX = event.pageX;
                    lastY = event.pageY;

                    self.angles[1] += xDelta * 0.025;
                    while (self.angles[1] < 0)
                        self.angles[1] += Math.PI * 2;
                    while (self.angles[1] >= Math.PI * 2)
                        self.angles[1] -= Math.PI * 2;

                    self.angles[0] += yDelta * 0.025;
                    while (self.angles[0] < -Math.PI * 0.5)
                        self.angles[0] = -Math.PI * 0.5;
                    while (self.angles[0] > Math.PI * 0.5)
                        self.angles[0] = Math.PI * 0.5;

                    self._dirty = true;
                }
            }, false);

            canvas.addEventListener('mouseup', function(event) {
                moving = false;
            }, false);

            return this;
        }
    },

    update: {
        value: function(frameTime) {
            var dir = [0, 0, 0];

            var speed = (this.speed / 1000) * frameTime;

            // This is our first person movement code. It's not really pretty, but it works
            if (this.pressedKeys['W'.charCodeAt(0)]) {
                dir[1] += speed;
            }
            if (this.pressedKeys['S'.charCodeAt(0)]) {
                dir[1] -= speed;
            }
            if (this.pressedKeys['A'.charCodeAt(0)]) {
                dir[0] -= speed;
            }
            if (this.pressedKeys['D'.charCodeAt(0)]) {
                dir[0] += speed;
            }
            if (this.pressedKeys[32]) { // Space, moves up
                dir[2] += speed;
            }
            if (this.pressedKeys[17]) { // Ctrl, moves down
                dir[2] -= speed;
            }

            if (dir[0] != 0 || dir[1] != 0 || dir[2] != 0) {
                var cam = this._cameraMat;
                mat4.identity(cam);
                mat4.rotateX(cam, this.angles[0]);
                mat4.rotateZ(cam, this.angles[1]);
                mat4.inverse(cam);

                mat4.multiplyVec3(cam, dir);

                // Move the camera in the direction we are facing
                vec3.add(this.position, dir);

                this._dirty = true;
            }
        }
    }
});
This camera assumes that Z is your "Up" axis, which may or may not be true for you. It's also using ECMAScript 5 style objects, but that shouldn't be an issue for any WebGL-enabled browser, and it utilizes my glMatrix library but it looks like you're already using that anyway. Basic usage is pretty simple:
// During your init code
var camera = Object.create(FlyingCamera).init(canvasElement);
// During your draw loop
camera.update(16); // 16ms per-frame == 60 FPS
// Bind a shader, etc, etc...
gl.uniformMatrix4fv(shaderUniformModelViewMat, false, camera.viewMat);
Everything else is handled internally for you, including keyboard and mouse controls. May not fit your needs exactly, but hopefully you can glean what you need to from there. (Note: This is essentially the same as the camera used in my Quake 3 demo, so that should give you an idea of how it works.)
Okay, that's enough babbling from me for one post! Good luck!
It doesn't matter how you build your matrices: using Euler angle rotations (like both of your code snippets do) will always result in a transformation that shows the gimbal lock problem.
You may want to have a look at https://en.wikipedia.org/wiki/Quaternions_and_spatial_rotation as a starting point for creating transformations that avoid gimbal lock.
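As a very rough illustration (this assumes glMatrix's quat4 stores components as [x, y, z, w]; pitch and yaw are placeholder angles), the idea is to build the incremental rotations as quaternions, combine them, and only convert to a matrix at the end:

// Rough sketch: compose pitch (about X) and yaw (about Y) as quaternions.
var qPitch = [Math.sin(pitch / 2), 0, 0, Math.cos(pitch / 2)];
var qYaw   = [0, Math.sin(yaw / 2), 0, Math.cos(yaw / 2)];

var qRot = quat4.multiply(qYaw, qPitch, quat4.create()); // combined rotation
var camRot = quat4.toMat4(qRot, mat4.create());          // convert once, at the end

// To actually avoid gimbal lock, accumulate each frame's small rotation into a
// persistent orientation quaternion instead of rebuilding it from Euler angles.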
Try my new project (the WebGL2 part of the visual-js game engine), based on glMatrix 2.0.
To activate camera events, use: App.camera.FirstPersonController = true;
Live examples
The important functions for the camera:
Camera interaction:
App.operation.CameraPerspective = function() {
    this.GL.gl.viewport(0, 0, wd, ht);
    this.GL.gl.clear(this.GL.gl.COLOR_BUFFER_BIT | this.GL.gl.DEPTH_BUFFER_BIT);
    // mat4.identity( world.mvMatrix )
    // mat4.translate(world.mvMatrix, world.mvMatrix, [10, 10, 10]);
    /* Field of view, width/height ratio, min distance of viewpoint, max distance of viewpoint */
    mat4.perspective(this.pMatrix, degToRad(App.camera.viewAngle), (this.GL.gl.viewportWidth / this.GL.gl.viewportHeight), App.camera.nearViewpoint, App.camera.farViewpoint);
};
manifest.js :
var App = {
    name : "webgl2 experimental",
    version : 0.3,
    events : true,
    logs : false,
    draw_interval : 10,
    antialias : false,
    camera : {
        viewAngle : 45,
        nearViewpoint : 0.1,
        farViewpoint : 1000,
        edgeMarginValue : 100,
        FirstPersonController : false
    },
    textures : [], // readOnly in manifest
    tools : {},    // readOnly in manifest
    // ... (remainder of the manifest omitted here)
};
Download the source from: the webGL 2 part of the visual-js GE project.
Old:
OpenGL ES 1.1: https://stackoverflow.com/a/17261523/1513187
A very fast first-person controller with glMatrix 0.9, based on the http://learningwebgl.com/ examples.