I'm currently using three.js for my projects with animation. I'm using morph targets and wanted to ask why my frames are often skipped. My animation has 7 morph targets and it cycles through all of them, from 0 to 6 (output of console.log('frame: ' + lastKeyframe)), but sometimes the frame jumps from 0 to 3 or from 1 to 4. What is really happening here? By the way, the animation code itself is working well:
if ( Mesh && playBack ) // exists / is loaded
{
    time = new Date().getTime() % duration; // or Date.now()
    keyframe = Math.floor( time / interpolation ) + animOffset;

    if ( keyframe != currentKeyframe )
    {
        Mesh.morphTargetInfluences[ lastKeyframe ] = 0;
        Mesh.morphTargetInfluences[ currentKeyframe ] = 1;
        Mesh.morphTargetInfluences[ keyframe ] = 0;
        //console.log(Mesh.morphTargetInfluences[ 0 ]);

        lastKeyframe = currentKeyframe;
        currentKeyframe = keyframe;
    }

    // The two lines below interpolate between frames: the influence at
    // lastKeyframe decreases from 1 while the influence at keyframe increases.
    Mesh.morphTargetInfluences[ keyframe ] = ( time % interpolation ) / interpolation;
    Mesh.morphTargetInfluences[ lastKeyframe ] = 1 - Mesh.morphTargetInfluences[ keyframe ];

    //console.log('current: ' + Mesh.morphTargetInfluences[ keyframe ]);
    console.log('frame: ' + lastKeyframe);
}
I think it's because you are selecting the new frame based on reading a wall clock: the keyframe index is computed from the elapsed time, so if your frame rate drops, the computed index can jump past intermediate values and you lose a frame.
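If never skipping a target matters more than exact wall-clock timing, one possible variation (a sketch, not the original code; it reuses Mesh, interpolation, lastKeyframe and currentKeyframe from the snippet above and adds a lastStepTime variable as an assumption) is to advance at most one keyframe per rendered frame:

// Sketch: step through the 7 morph targets one at a time, so a slow frame
// delays the animation instead of skipping targets.
var elapsed = Date.now() - lastStepTime;              // lastStepTime: assumed, initialised once at startup
if ( elapsed >= interpolation ) {
    Mesh.morphTargetInfluences[ lastKeyframe ] = 0;   // fully release the previous target
    lastKeyframe = currentKeyframe;
    currentKeyframe = ( currentKeyframe + 1 ) % 7;    // 7 morph targets, as in the question
    lastStepTime = Date.now();
    elapsed = 0;
}
var t = Math.min( elapsed / interpolation, 1 );       // 0..1 blend between the two active targets
Mesh.morphTargetInfluences[ lastKeyframe ]    = 1 - t;
Mesh.morphTargetInfluences[ currentKeyframe ] = t;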
I am trying to visualize a grand strategy (EU4, CK3, HOI) style map in Three.js. I started by creating a mesh for every cell. The results are fine (screenshots 1 & 2).
Separate mesh approach - simple land / water differentiation:
Separate mesh approach - random cell color:
However, with a lot of cells, performance becomes an issue (I am getting 15 fps with 10k cells).
In order to improve performance I would like to combine all these separate index & vertex arrays into 2 big arrays, which will then be used to create a single mesh.
I am looping through all my cells to push their indices, vertices & colors into the big arrays like so:
addCellGeometryToMapGeometry(cell) {
    let startIndex = this.mapVertices.length;
    let cellIndices = cell.indices.length;
    let cellVertices = cell.vertices.length;
    let color = new THREE.Color( Math.random(), Math.random(), Math.random() );

    for (let i = 0; i < cellIndices; i++) {
        this.mapIndices.push(startIndex + cell.indices[i]);
    }

    for (let i = 0; i < cellVertices; i++) {
        this.mapVertices.push(cell.vertices[i]);
        this.mapColors.push(color);
    }
}
I then generate the combined mesh:
generateMapMesh() {
    let geometry = new THREE.BufferGeometry();
    const material = new THREE.MeshPhongMaterial( {
        side: THREE.DoubleSide,
        flatShading: true,
        vertexColors: true,
        shininess: 0
    } );

    geometry.setIndex( this.mapIndices );
    geometry.setAttribute( 'position', new THREE.Float32BufferAttribute( this.mapVertices, 3 ) );
    geometry.setAttribute( 'color', new THREE.Float32BufferAttribute( new Float32Array(this.mapColors.length), 3 ) );

    for ( let i = 0; i < this.mapColors.length; i ++ ) {
        geometry.attributes.color.setXYZ(i, this.mapColors[i].r, this.mapColors[i].g, this.mapColors[i].b);
    }

    return new THREE.Mesh( geometry, material );
}
Unfortunately the results are underwhelming:
While the data in the combined arrays looks okay, only every third cell is rendered. In some cases the indices seem to get mixed up too.
Combined approach - random cell colors:
In other similar topics it is recommended to merge existing meshes. However, I figured that my approach should let me better understand what is actually happening and potentially save on performance as well.
Does my code have obvious flaws that I cannot see?
Or am I generally on the wrong path? If so, how should it be done instead?
I actually found the issue in my code. Wrong:
let startIndex = this.mapVertices.length;
The issue is that the values in the indices array always reference a vertex, which consists of 3 consecutive entries in the vertices array. Correct:
let startIndex = this.mapVertices.length / 3;
Additionally, I should only push one color per vertex instead of one per vertex-array entry (= one per coordinate), but make sure that the length of the backing array for the geometry's color attribute stays as it is (3 floats per vertex).
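To make the arithmetic concrete with hypothetical numbers: if the first cell has 4 vertices, mapVertices already holds 12 entries (x, y, z per vertex) when the second cell is added, so the second cell's indices must be offset by the vertex index 4 (= 12 / 3), not by 12, and only 4 colors, one per vertex, should be pushed for it.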
With these 2 changes, the result for the combined mesh looks exactly the same as when creating a separate mesh for every cell. The performance improvement is impressive.
separate meshes:
60 - 65 ms needed to render a frame
144 MB allocated memory

combined mesh:
0 - 1 ms needed to render a frame
58 MB allocated memory
Here are the fixed snippets:
addCellGeometryToMapGeometry(cell) {
    let startIndex = this.mapVertices.length / 3;
    let cellIndices = cell.indices.length;
    let cellVertices = cell.vertices.length;

    console.log('Vertex -- map length: ' + startIndex + ' cell length: ' + cellVertices);
    console.log('Indices -- map length: ' + this.mapIndices.length + ' cell length: ' + cellIndices);
    console.log({cell});

    let color = new THREE.Color( Math.random(), Math.random(), Math.random() );

    for (let i = 0; i < cellIndices; i++) {
        this.mapIndices.push(startIndex + cell.indices[i]);
    }

    for (let i = 0; i < cellVertices; i++) {
        this.mapVertices.push(cell.vertices[i]);
        if (i % 3 === 0) { this.mapColors.push(color); }
    }
}
generateMapMesh() {
    let geometry = new THREE.BufferGeometry();
    const material = new THREE.MeshPhongMaterial( {
        side: THREE.DoubleSide,
        flatShading: true,
        vertexColors: true,
        shininess: 0
    } );

    geometry.setIndex( this.mapIndices );
    geometry.setAttribute( 'position', new THREE.Float32BufferAttribute( this.mapVertices, 3 ) );
    geometry.setAttribute( 'color', new THREE.Float32BufferAttribute( new Float32Array(this.mapVertices.length), 3 ) );

    for ( let i = 0; i < this.mapColors.length; i ++ ) {
        geometry.attributes.color.setXYZ(i, this.mapColors[i].r, this.mapColors[i].g, this.mapColors[i].b);
    }

    return new THREE.Mesh( geometry, material );
}
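For reference, a minimal usage sketch of how the two methods above could be tied together; the cells array, the scene variable and the fact that mapIndices / mapVertices / mapColors live on the same class are assumptions, not from the original code:

// inside the same (assumed) map class:
this.mapIndices = [];
this.mapVertices = [];
this.mapColors = [];

for (const cell of this.cells) {           // this.cells: assumed list of cell objects
    this.addCellGeometryToMapGeometry(cell);
}

const mapMesh = this.generateMapMesh();
scene.add(mapMesh);                        // scene: assumed THREE.Scene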
I would like to make a simple animation of the character rotating when it jumps. I'm making an indie platformer, so this should be simple to do, I think, but I'm too much of a newbie for this.
Here's the movement code.
//------------------------- MOVEMENT INPUT
xMove = kRight - kLeft;
xSpd = xMove * mSpd;
ySpd += 0.65;

//------------------------- JUMP
onGround = place_meeting(x, y + 1, oSolid);
if (onGround) airJump = 1;

if (kJump) {
    if (onGround or airJump > 0) {
        ySpd = -12;
        airJump = 0;
    }
}

//------------------------- FINAL MOVEMENT
if (place_meeting(x + xSpd, y, oSolid)) {
    while (!place_meeting(x + sign(xSpd), y, oSolid)) x += sign(xSpd);
    xSpd = 0;
}

if (place_meeting(x + xSpd, y + ySpd, oSolid)) {
    while (!place_meeting(x + xSpd, y + sign(ySpd), oSolid)) y += sign(ySpd);
    ySpd = 0;
}

x += xSpd;
y += ySpd;

if (xSpd < 0) dir = -1;
if (xSpd > 0) dir = 1;
The player is a simple square, so I would like to make it rotate 360 degrees while in the air.
You should be able to use image_angle for this: changing that value changes the angle of the sprite, and continuously increasing/decreasing it will simulate a rotation.
However, keep in mind that if you rotate the sprite, the hitbox of the sprite will rotate as well. You can probably define the hitbox separately from the sprite so they won't interfere with each other.
Example:
https://manual.yoyogames.com/GameMaker_Language/GML_Reference/Asset_Management/Sprites/Sprite_Instance_Variables/image_angle.htm
For player movement and collision handling you want to avoid the image_angle variable; instead, use your own variable for the visual rotation together with the draw_sprite_ext function. Also, if by chance you end up wanting to use the angle for anything else later (for example a field of view), it's good to wrap the value so it stays in range.
For example
function Scr_Player_Create() {
    image_offset = 0;
}

function Scr_Player_Step() {
    image_offset += (keyboard_check(vk_right) - keyboard_check(vk_left)) * 10;
    image_offset = wrap(image_offset, 0, 359);
}

function Scr_Player_Draw() {
    draw_sprite_ext( sprite_index, image_index, x, y, image_xscale, image_yscale,
                     image_angle + image_offset, image_blend, image_alpha );
    draw_text(10, 10, image_offset);
}

function wrap(wrap_value, wrap_minimum, wrap_maximum) {
    // Credit: Juju from the GMLscripts forums!
    var _mod = ( wrap_value - wrap_minimum ) mod ( wrap_maximum - wrap_minimum );
    if ( _mod < 0 ) return _mod + wrap_maximum; else return _mod + wrap_minimum;
}
Another approach you could take to avoid image_angle affecting your collisions is this:
var _angle = image_angle;
image_angle += image_offset;
draw_self();
image_angle = _angle;
I recently stretched a gradient across the canvas using the ImageData data array, i.e. the ctx.getImageData() and ctx.putImageData() methods, and thought to myself, "this could be a really efficient way to animate a canvas full of moving objects". So I wrapped it into my main function along with the requestAnimationFrame(callback) statement, but that's when things got weird. The best I can do at describing it is to say it's like the left-most column of pixels on the canvas is duplicated in the right-most column, and depending on what coordinates you specify for the get and put ctx methods, this can have bizarre consequences.
I started out with the get and put methods targeting the canvas at 0, 0 like so:
imageData = ctx.getImageData( 0, 0, cvs.width, cvs.height );

// here I set the pixel colors according to their location
// on the canvas, using nested for loops to target the
// correct imageData array indexes.

ctx.putImageData( imageData, 0, 0 );
But I immediately noticed the right side of the canvas was wrong, like the gradient had started over and the last pixel just didn't get touched for some reason:
So I scaled back my draw region, changed the putImageData coordinates to get some space between the drawn image and the edge of the canvas, and changed the get coordinates to eliminate that line on the right edge of the canvas:
imageData = ctx.getImageData( 1, 1, cvs.width, cvs.height );

for ( var x = 0; x < cvs.width - 92; x++ ) {
    for ( var y = 0; y < cvs.height - 92; y++ ) {
        // array[ x + y * width ] = value / x; // or similar
    }
}

ctx.putImageData( imageData, 2, 2 );
Pretty! But wrong... So I reproduced it in codepen. Can someone help me understand and overcome this behavior?
Note: The codepen has the scaled back draw area. If you change the get coordinates to 0 you'll see it basically behaves the same way as the first example but with white-space in between the expected square and the unexpected line. That said, I left the get at 1 and the put at zero for the most interesting behavior yet.
I've changed your code a little. In your double loop I declare a variable, var i = (x + y*cvs.width)*4, purely to reduce the verbosity of your code so that I can see it better. The i variable represents the index of your pixel in the imageData.data array. Since you are doing
imageData.data[i - 4 ] ...
imageData.data[i - 3 ] ...
imageData.data[i - 2 ] ...
imageData.data[i - 1 ] ...
you are going one pixel backwards and the first pixel from every row appears as the last pixel of the previous row. So I've changed it from var i = (x + y*cvs.width)*4; to var i = 4 + (x + y*cvs.width)*4;.
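For example, with cvs.width = 256: the pixel at x = 0, y = 1 gives i = (0 + 1 * 256) * 4 = 1024 under the old formula, so i - 4 … i - 1 address bytes 1020–1023, which belong to the last pixel of row 0. With the extra + 4 the same pixel writes bytes 1024–1027, the first pixel of row 1, as intended.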
When you are animating it, since the getImageData call is inside the test() function, you are recalculating the values of the imageData.data array based on the last frame. So in the second frame you have that 1px line from the first frame copied again and moved 1px upward and 1px to the left.
I hope this is what you were asking.
var ctx, cvs, imageData;

cvs = document.getElementById('canv');
ctx = cvs.getContext('2d');

function test() {
    // imageData = ctx.getImageData( 0, 0, cvs.width, cvs.height );
    // produces a line on the right side of the screen

    imageData = ctx.getImageData( 1, 1, cvs.width, cvs.height );
    // bizarre reverse cascading

    for (var x = 0; x < cvs.width - 92; x++) {
        for (var y = 0; y < cvs.height - 92; y++) {
            var i = 4 + (x + y * cvs.width) * 4;
            imageData.data[i - 4] = Math.floor((255 - y) - Math.floor(x / 55) * 55);
            imageData.data[i - 3] = Math.floor(255 / (cvs.height - 92) * y);
            imageData.data[i - 2] = Math.floor(255 / (cvs.width - 92) * x);
            imageData.data[i - 1] = 255;
        }
    }

    ctx.putImageData( imageData, 0, 0 );
    requestAnimationFrame( test );
}

test();
canvas {
box-shadow: 0 0 2.5px 0 black;
}
<canvas id="canv" height="256" width="256"></canvas>
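If the feedback effect described above is not wanted, one variation (a sketch, keeping roughly the same gradient idea but with example color values only) is to write into an ImageData created once with ctx.createImageData instead of re-reading the canvas every frame, so each frame starts from the same blank buffer:

var buffer = ctx.createImageData( cvs.width, cvs.height ); // created once, never read back
function draw() {
    for ( var x = 0; x < cvs.width; x++ ) {
        for ( var y = 0; y < cvs.height; y++ ) {
            var i = ( x + y * cvs.width ) * 4;     // first byte of pixel (x, y)
            buffer.data[ i     ] = 255 - y;        // R (example values only)
            buffer.data[ i + 1 ] = y;              // G
            buffer.data[ i + 2 ] = x;              // B
            buffer.data[ i + 3 ] = 255;            // A (opaque)
        }
    }
    ctx.putImageData( buffer, 0, 0 );
    requestAnimationFrame( draw );
}
draw();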
I'm trying to morph the vertices of a loaded .obj file like in this example: https://threejs.org/docs/#api/materials/MeshDepthMaterial - when 'wireframe' and 'morphTargets' are activated in THREE.MeshDepthMaterial.
But I can't achieve the desired effect. In the example above the geometry can be morphed via geometry.morphTargets.push( { name: 'target1', vertices: vertices } ); however, it seems that morphTargets is not available for my loaded 3D object, as it is a BufferGeometry.
Instead I tried to change each vertex position independently via myMesh.child.child.geometry.attributes.position.array[i]. It kind of works (the vertices of my mesh are moving), but not as well as the example above.
Here is a Codepen of what I could do.
How can I reach the desired effect on my loaded .obj file?
Adding morph targets to THREE.BufferGeometry is a bit different than THREE.Geometry. Example:
// after loading the mesh:
var morphAttributes = mesh.geometry.morphAttributes;
morphAttributes.position = [];

mesh.material.morphTargets = true;

var position = mesh.geometry.attributes.position.clone();

for ( var j = 0, jl = position.count; j < jl; j ++ ) {
    position.setXYZ(
        j,
        position.getX( j ) * 2 * Math.random(),
        position.getY( j ) * 2 * Math.random(),
        position.getZ( j ) * 2 * Math.random()
    );
}

morphAttributes.position.push( position ); // I forgot this earlier.

mesh.updateMorphTargets();
mesh.morphTargetInfluences[ 0 ] = 0;

// later, in your render() loop:
mesh.morphTargetInfluences[ 0 ] += 0.001;
three.js r90
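In case it helps, a sketch of where this could sit for a loaded .obj; the file path is a placeholder, and it assumes THREE.OBJLoader is included and that the loaded group's first child is the mesh to morph:

var loader = new THREE.OBJLoader();
loader.load( 'model.obj', function ( object ) {         // 'model.obj': placeholder path
    var mesh = object.children[ 0 ];                    // assumes a single mesh in the file
    mesh.material.morphTargets = true;
    // ...build mesh.geometry.morphAttributes.position as shown above...
    mesh.updateMorphTargets();
    scene.add( object );                                // scene: assumed THREE.Scene
} );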
I am splitting a 1024 x 1024 texture across 32 x 32 tiles (32 of them). I'm not sure if it's possible to share the texture with an offset, or whether I would need to create a new texture for each tile with its own offset.
To create the offset I am using a uniform value = 32 * i and updating the uniform in each loop iteration while creating the tiles, yet all the tiles seem to end up with the same offset. Basically I want the image to appear as if it were one image, not broken up into little tiles, but the current output shows the same x/y offset on all 32 tiles. I'm using the vertex shader with three.js r71.
Would I need to create a new texture for each tile with the offset?
for ( j = 0; j < row; j ++ ) {
    for ( t = 0; t < col; t ++ ) {
        customUniforms.tX.value = tX;
        customUniforms.tY.value = tY;

        console.log(customUniforms.tX.value);

        customUniforms.tX.needsUpdate = true;
        customUniforms.tY.needsUpdate = true;

        mesh = new THREE.Mesh( geometry, mMaterial ); // or new material
    }
}

// vertex shader:
vec2 uvOffset = vUV + vec2( tX, tY );
Image example:
Each image should have an offset of 10 or 20 px, but they are all the same... this is from using one texture.
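One likely cause (an assumption based on the loop above, not something confirmed in the question): all 32 meshes share the same mMaterial and therefore the same customUniforms object, so at render time every tile sees whatever tX/tY were set to in the last loop iteration. A sketch of giving each tile its own uniforms copy; the tX/tY values shown are hypothetical UV-space offsets:

for ( j = 0; j < row; j ++ ) {
    for ( t = 0; t < col; t ++ ) {
        // per-tile copies, so later iterations don't overwrite earlier tiles
        var tileMaterial = mMaterial.clone();
        tileMaterial.uniforms = THREE.UniformsUtils.clone( customUniforms );
        tileMaterial.uniforms.tX.value = t * 32 / 1024;   // hypothetical: offset in UV space (0..1)
        tileMaterial.uniforms.tY.value = j * 32 / 1024;

        mesh = new THREE.Mesh( geometry, tileMaterial );
        scene.add( mesh );                                // scene: assumed
    }
}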
As suggested, I have tried to manipulate the UVs on each object without luck; it seems to make all the corresponding vertices get the same values, for example on a 10x10-segment plane all faces will be the same:
var geometry = [
    [ new THREE.PlaneGeometry( w, w, 64, 64 ),   50 ],
    [ new THREE.PlaneGeometry( w, w, 40, 40 ),  500 ],
    [ new THREE.PlaneGeometry( w, w, 30, 30 ),  850 ],
    [ new THREE.PlaneGeometry( w, w, 16, 16 ), 1200 ]
];

geometry[0][0].faceVertexUvs[0] = [];

for (var p = 0; p < geometry[0][0].faces.length; p++) {
    geometry[0][0].faceVertexUvs[0].push([
        new THREE.Vector2( 0.0, 0.0 ),
        new THREE.Vector2( 0.0, 1.0 ),
        new THREE.Vector2( 1.0, 1.0 ),
        new THREE.Vector2( 1.0, 0.0 )
    ]);
}
Image of this result; you will notice all the vertices are the same when they shouldn't be.
Update again:
I have to go through each vertex of each face, as two triangles make a quad, to avoid the above issue. I think I may have this solved... will update.
Last update, hopefully:
Below is the source code, but I am lost on making the algorithm display the texture as expected.
/*
   j and t are rows & columns, looping over a 4x4 grid
   row = 4; col = 4;
*/
for ( i = 0; i < geometry.length; i ++ ) {
    var mesh = new THREE.Mesh( geometry[ i ][ 0 ], customMaterial );
    mesh.geometry.computeBoundingBox();

    var max = mesh.geometry.boundingBox.max;
    var min = mesh.geometry.boundingBox.min;

    var offset = new THREE.Vector2( 0 - min.x * t * j + w, 0 - min.y * j + w ); // here is my issue
    var range  = new THREE.Vector2( max.x - min.x * row * 2, max.y - min.y * col * 2 );

    mesh.geometry.faceVertexUvs[0] = [];
    var faces = mesh.geometry.faces;

    for ( p = 0; p < mesh.geometry.faces.length; p++ ) {
        var v1 = mesh.geometry.vertices[ faces[p].a ];
        var v2 = mesh.geometry.vertices[ faces[p].b ];
        var v3 = mesh.geometry.vertices[ faces[p].c ];

        mesh.geometry.faceVertexUvs[0].push([
            new THREE.Vector2( ( v1.x + offset.x ) / range.x, ( v1.y + offset.y ) / range.y ),
            new THREE.Vector2( ( v2.x + offset.x ) / range.x, ( v2.y + offset.y ) / range.y ),
            new THREE.Vector2( ( v3.x + offset.x ) / range.x, ( v3.y + offset.y ) / range.y )
        ]);
    }
}
You will notice in the image below that the tile in red is seamless, while the other tiles are not aligned with the texture.
Here is the answer:
var offset = new THREE.Vector2( w - min.x - w + ( w * t ), w - min.y + w + ( w * -j + w ) );
var range  = new THREE.Vector2( max.x - min.x * 7, max.y - min.y * 7 );
If you could simplify the answer, I will award a bounty too.
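As for simplifying, one possible alternative (a sketch, assuming each tile is w units wide, the grid is row by col tiles, each mesh.position places the tile in that grid, and the map's lower-left corner sits at the world origin) is to derive every UV straight from the vertex's world position divided by the total map size, so no per-tile offset bookkeeping is needed:

var mapWidth  = col * w;   // total world extent covered by all tiles
var mapHeight = row * w;

mesh.geometry.faceVertexUvs[0] = [];

for ( var p = 0; p < mesh.geometry.faces.length; p++ ) {
    var face  = mesh.geometry.faces[ p ];
    var verts = [ mesh.geometry.vertices[ face.a ],
                  mesh.geometry.vertices[ face.b ],
                  mesh.geometry.vertices[ face.c ] ];

    mesh.geometry.faceVertexUvs[0].push( verts.map( function ( v ) {
        // world position of the vertex = local vertex position + mesh position
        return new THREE.Vector2( ( v.x + mesh.position.x ) / mapWidth,
                                  ( v.y + mesh.position.y ) / mapHeight );
    } ) );
}

mesh.geometry.uvsNeedUpdate = true;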