glTF wrong rotations - animation

I have a glTF asset that I have verified in multiple places. The asset is correct.
I am trying to load this asset into a custom engine I am making. If I load all animation information except rotations, I get this:
Which is correct.
If, instead of loading the animation rotations, I load the rotation of the target node the animation is targeting, i.e. this:
gltf.nodes[target_node].rotation.quaternion();
Instead of:
gltf.data_from_accessor::<Quatf>(sampler.output).unwrap()
I get this:
i.e. the same thing.
But if I try loading the data from the accessor I get this abomination:
I am not sure what to test. As mentioned, I know the data itself is fine, so it must be the code. But it's not that the target node order is incorrect; I know for a fact that I have the correct node-animation mapping.
This is how I am loading the gltf data:
let animation = &gltf.animations[animation_id];
let channel = &animation.channels[channel_id];
let sampler = &animation.samplers[channel.sampler];
match channel.target.path
{
    ChannelPath::ROTATION => {
        debug_assert!(self.rotations.is_empty());
        for rot in gltf.data_from_accessor::<Quatf>(sampler.output).unwrap()
        {
            self.rotations.push(*rot);
        }
        for time in gltf.data_from_accessor::<f32>(sampler.input).unwrap()
        {
            self.rotation_times.push(*time);
        }
        // ......
    }
I have a different gltf asset that I am loading with rotations, and that one is fine:
The animation plays just as expected.
So I have two assets, both fine according to glTF validators; one goes through the pipeline just fine, the other comes out degenerate. I don't understand how rotations can cause this problem :\
I have also checked that all the quaternion data has a norm of 1, which it does.

In case someone runs into something like this: the issue was a byte-alignment problem.
The asset that loaded properly specifies its joints as bytes; the other asset uses unsigned shorts. The code assumed the data was always in byte format and thus was not properly initializing the joint vectors of the mesh.
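For illustration, a minimal sketch of the component-type branching the fix boils down to, written here in C++ with hypothetical names rather than the engine's actual Rust API; 5121 and 5123 are the UNSIGNED_BYTE and UNSIGNED_SHORT component types the glTF spec allows for JOINTS_0:

#include <cstddef>
#include <cstdint>
#include <cstring>
#include <vector>

// glTF component types permitted for JOINTS_0 accessors.
constexpr int GLTF_UNSIGNED_BYTE  = 5121;
constexpr int GLTF_UNSIGNED_SHORT = 5123;

// Widen joint indices to one common type regardless of how the asset stores them.
// `data` points at the accessor's raw bytes; `count` is the number of scalar
// indices (accessor count * 4, since JOINTS_0 is a VEC4 accessor).
std::vector<std::uint16_t> read_joint_indices(const std::uint8_t* data,
                                              std::size_t count,
                                              int component_type)
{
    std::vector<std::uint16_t> joints;
    joints.reserve(count);

    if (component_type == GLTF_UNSIGNED_BYTE) {
        for (std::size_t i = 0; i < count; ++i)
            joints.push_back(data[i]);                        // 1 byte per index
    } else if (component_type == GLTF_UNSIGNED_SHORT) {
        for (std::size_t i = 0; i < count; ++i) {
            std::uint16_t v;
            std::memcpy(&v, data + i * sizeof(v), sizeof(v)); // 2 bytes per index
            joints.push_back(v);
        }
    }
    return joints;
}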

Related

PIXI js - Pixel Data is Zero

I'm currently working on a little game written in JS using Pixi.js (https://github.com/pixijs). One problem has occurred: I'm trying to implement pixel-exact collision between shapes, and while programming I noticed that all the pixel RGBA values of my images are just 0.
I searched the web for a while, but the only cause for this problem I could find was that the canvas was tainted because of CORS (Pixel RGB values are all zero).
But this can't be the reason in my case, because I created the sprites myself; I'm not loading them from other (or any) domains or anything like that.
Could this be a problem with the images? How do I avoid it? I will append some code that works if I use other images (some images I downloaded for tests).
const app = new PIXI.Application({width: 500, height: 500});
document.body.appendChild(app.view);
PIXI.loader.add("sprites/test.png")
    .load(() => {
        let img = new PIXI.Sprite(PIXI.loader.resources["sprites/test.png"].texture);
        app.stage.addChild(img);
        console.log(app.renderer.extract.pixels(img));
    });
Edit: I also tried getting the RGBA values using a short Java program, by the way; same problem, every single value is zero.

Three.js calling clipAction.play() makes animated objects vanish

In Three.js, calling action.play() makes objects just vanish, without any error or warning on the console.
I use THREE.ObjectLoader to load a JSON file created in Blender. The SRT (position/scale/quaternion) animation is in the generated file, as are the morph targets. To optimise file size I animated the SRT as a series of null objects. The morph target tracks are in the main object, which I clone 5 times to build the characters (balloons, to be exact).
I previously did extensive testing to introduce shape/morph animation. After being successful I finalised all the animations, only to be trumped by the disappearing models. The SRT (position/scale/quaternion) animation was working fine before, but after refactoring the code to be less spaghettied, the objects vanish exactly when action.play() is called. Echoing the mixers and the array containing the clips, everything looks correct (i.e. I see the tracks, the names are right, etc.). Also, examining the newly generated JSON, it seems the same and correct (and I have not changed the SRT animations, only introduced shape animation).
So I am lost, and think this looks more and more like a bug. From previous experience I do know it works (or has worked).
I created a jsfiddle: https://jsfiddle.net/oompol/3ya6sqed/
[edit] I turned on action.play() and call the function from the link in the div. [/edit] Please note I had commented out the call to action.play(), so you can see that the load and init work. See the function listed below:
function playScene(scene) {
    for (var parentName in srtMixers) {
        var clpName = "balloon1_fly";
        var clp = THREE.AnimationClip.findByName(animLib, clpName);
        var action = srtMixers[parentName].clipAction(clp);
        action.clampWhenFinished = true;
        console.log("playScene:", clpName, clp, parentName, srtMixers);
        //this is when the problem happens
        action.play();
    }
}
This is the JSON I am loading:
https://rawgit.com/bakajin/2e3d2f6a722103ed4aefd76f6250ec08/raw/28cad35c20060d478499c0cd40a2753611993720/oomp-scene_balloons-oomp-6.9.4.json
OK, there was something very wrong with the scaling indeed.
The io_three JSON exporter for Blender (r87 dev) writes incorrect matrix transformation data in the geometry object (really tiny scaling values). The animation track with the scaling keys was correctly written as 1,1,1, so all the objects just scaled out of view immediately.
This is hard to see because the geometry has no separate scaling value, only a matrix. It seems to happen when you set "Scene" to true on export.
I worked around the problem by entering the scaling value in the keyframe tracks. But this will only work if you have no scaling animation (so the keys are all 1).
Meanwhile I have extensively edited the JSON by hand, because this is not the only incorrect data: the formatting of the animation object is also wrong, the durations for the morphTargetInfluence keys are incorrect, and the formatting of these keys is not always correct.
Hope this helps some other people.

THREE JS: loading large model crashes browser

If I load a small PLY file (4-10 MB) using the following code:
this.loader.load('assets/data/GuyFawkesMask.ply', function (geometry) {
    var bufferGeometry = new THREE.BufferGeometry().fromGeometry(geometry);
    console.log(bufferGeometry);
    // Create object
    let object = new THREE.Mesh(
        bufferGeometry,
        new THREE.MeshPhongMaterial({
            color: 0xFFFFFF,
            //vertexColors: THREE.VertexColors,
            shading: THREE.FlatShading,
            shininess: 0
        })
    );
    _this.add(object);
});
Everything works fine.
If I load large files (50 MB+), the browser sometimes crashes, or, if the model does load successfully, interaction with the object using the orbit controls is painfully slow on some computers.
I appreciate that 3D models are complex beasts, but do you know of ways to optimize memory usage and model loading in Three.js without decimating the file, an operation that I cannot do without losing vital information?
I've had a similar issue with larger models. What I suggest is that you stream and load the model in small chunks (for example, 1 MB at a time).
The problem is, Three.js does not support loading chunked models.
What I did myself was to rewrite a loader to work on chunks of data and build the internal representation from those chunks.
Another way would be to export the scene in smaller pieces and then recombine them after loading.
There is also the third possibility, that the model is simply very complex.
Edit: Okay, I played around a bit, and if you convert the file into a binary STL it works a bit better and does not crash Chrome, while maintaining the same vertex count.
I'll include a link here to 2 converted files, where one is a decimated version (70% reduction in vertex count) and the other is a conversion of the original, both of them worked for me.

Using a FrameBufferObject with several Color Texture attachments

I'm implementing the Gaussian blur effect in my program. To do the job I need to render the first blur pass (the one on the Y axis) into a specific texture (let's call it tex_1) and use the information contained in tex_1 as input for a second render pass (for the X axis) that fills another texture (let's call it tex_2) containing the final Gaussian blur result.
Good practice would be to create 2 framebuffer objects (FBOs), each with a texture attached and bound to GL_COLOR_ATTACHMENT0 (for example). But I just wonder one thing:
Is it possible to fill these 2 textures using the same FBO?
So I would have to enable GL_COLOR_ATTACHMENT0 and GL_COLOR_ATTACHMENT1 and bind the desired texture for the correct render pass, as follows:
Pseudo code:
FrameBuffer->Bind()
{
    FrameBuffer->GetTexture(GL_COLOR_ATTACHMENT0)->Bind(); //tex_1
    {
        //BIND external texture to blur
        //DRAW code (Y axis blur pass) here...
        //-> Write the result into texture COLOR_ATTACHMENT0 (tex_1)
    }
    FrameBuffer->GetTexture(GL_COLOR_ATTACHMENT1)->Bind(); //tex_2
    {
        //BIND here the first texture (tex_1) filled above in the first render pass
        //DRAW code (X axis blur pass) here...
        //-> Use this texture in the FS to compute the final result
        //   within COLOR_ATTACHMENT1 (tex_2) -> the final result
    }
}
FrameBuffer->Unbind()
But in my mind there is a problem, because for each render pass I need to bind an external texture as an input to my fragment shader. Consequently, the first binding of the texture (the color attachment) is lost!
So is there a way to solve my problem using one FBO, or do I need to use 2 separate FBOs?
I can think of at least 3 distinct options to do this. The 3rd one will actually not work in OpenGL ES, but I'll explain it anyway because you might be tempted to try it otherwise, and it is supported in desktop OpenGL.
I'm going to use pseudo-code as well to cut down on typing and improve readability.
2 FBOs, 1 attachment each
This is the most straightforward approach. You use a separate FBO for each texture. During setup, you would have:
attach(fbo1, ATTACHMENT0, tex1)
attach(fbo2, ATTACHMENT0, tex2)
Then for rendering:
bindFbo(fbo1)
render pass 1
bindFbo(fbo2)
bindTexture(tex1)
render pass 2
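In concrete GL calls, a minimal sketch of this first approach might look like the following (tex1, tex2 and inputTex are assumed to be already-created texture names; shader setup and the actual draw calls are omitted):

// Setup: one FBO per target texture.
GLuint fbo1, fbo2;
glGenFramebuffers(1, &fbo1);
glBindFramebuffer(GL_FRAMEBUFFER, fbo1);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex1, 0);

glGenFramebuffers(1, &fbo2);
glBindFramebuffer(GL_FRAMEBUFFER, fbo2);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex2, 0);

// Pass 1: render the Y-axis blur of inputTex into tex1.
glBindFramebuffer(GL_FRAMEBUFFER, fbo1);
glBindTexture(GL_TEXTURE_2D, inputTex);
// ... draw ...

// Pass 2: render the X-axis blur of tex1 into tex2.
glBindFramebuffer(GL_FRAMEBUFFER, fbo2);
glBindTexture(GL_TEXTURE_2D, tex1);
// ... draw ...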
1 FBO, 1 attachment
In this approach, you use one FBO, and attach the texture you want to render to each time. During setup, you only create the FBO, without attaching anything yet.
Then for rendering:
bindFbo(fbo1)
attach(fbo1, ATTACHMENT0, tex1)
render pass 1
attach(fbo1, ATTACHMENT0, tex2)
bindTexture(tex1)
render pass 2
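Under the same assumptions, a sketch of this second approach simply re-attaches the render target between passes:

// Setup: only the FBO, no attachments yet.
GLuint fbo;
glGenFramebuffers(1, &fbo);

// Pass 1: render the Y-axis blur of inputTex into tex1.
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex1, 0);
glBindTexture(GL_TEXTURE_2D, inputTex);
// ... draw ...

// Pass 2: swap the attachment to tex2 and read from tex1.
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex2, 0);
glBindTexture(GL_TEXTURE_2D, tex1);
// ... draw ...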
1 FBO, 2 attachments
This seems to be what you had in mind. You have one FBO, and attach both textures to different attachment points of this FBO. During setup:
attach(fbo1, ATTACHMENT0, tex1)
attach(fbo1, ATTACHMENT1, tex2)
Then for rendering:
bindFbo(fbo1)
drawBuffer(ATTACHMENT0)
render pass 1
drawBuffer(ATTACHMENT1)
bindTexture(tex1)
render pass 2
This renders to tex2 in pass 2 because it is attached to ATTACHMENT1, and we set the draw buffer to ATTACHMENT1.
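In desktop OpenGL, a sketch of this third approach could use glDrawBuffer() to select the attachment (same hypothetical texture names as before; as explained next, this exact call is not available in OpenGL ES):

// Setup: both textures attached to the same FBO.
GLuint fbo;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex1, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_TEXTURE_2D, tex2, 0);

// Pass 1: render into tex1.
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glDrawBuffer(GL_COLOR_ATTACHMENT0);
glBindTexture(GL_TEXTURE_2D, inputTex);
// ... draw ...

// Pass 2: render into tex2, reading from tex1.
glDrawBuffer(GL_COLOR_ATTACHMENT1);
glBindTexture(GL_TEXTURE_2D, tex1);
// ... draw ...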
The major caveat is that this does not work with OpenGL ES. In ES 2.0 (without using extensions) it's a non-starter because it only supports a single color buffer.
In ES 3.0/3.1, there is a more subtle restriction: They do not have the glDrawBuffer() call from full OpenGL, only glDrawBuffers(). The call you would try is:
GLenum bufs[1] = {GL_COLOR_ATTACHMENT1};
glDrawBuffers(1, bufs);
This is totally valid in full OpenGL, but will produce an error in ES 3.0/3.1 because it violates the following constraint from the spec:
If the GL is bound to a draw framebuffer object, the ith buffer listed in bufs must be COLOR_ATTACHMENTi or NONE.
In other words, the only way to render to GL_COLOR_ATTACHMENT1 is to have at least two draw buffers. The following call is valid:
GLenum bufs[2] = {GL_NONE, GL_COLOR_ATTACHMENT1};
glDrawBuffers(2, bufs);
But to make this actually work, you'll need a fragment shader that produces two outputs, where the first one will not be used. By now, you hopefully agree that this approach is not appealing for OpenGL ES.
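For completeness, here is a sketch of what such a two-output fragment shader could look like, with the GLSL ES 3.00 source embedded as a C++ string literal; vUV and uInput are hypothetical names, and output 0 exists only to match the draw-buffer layout:

// Output 0 is never written because draw buffer 0 is GL_NONE;
// output 1 is routed to GL_COLOR_ATTACHMENT1.
const char* blurFragmentSrc = R"(#version 300 es
precision mediump float;
in vec2 vUV;
uniform sampler2D uInput;
layout(location = 0) out vec4 unusedColor;   // mapped to GL_NONE
layout(location = 1) out vec4 blurredColor;  // mapped to GL_COLOR_ATTACHMENT1
void main() {
    blurredColor = texture(uInput, vUV);     // actual blur computation omitted
}
)";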
Conclusion
For OpenGL ES, the first two approaches above will work, and are both absolutely fine to use. I don't think there's a very strong reason to choose one over the other. I would recommend the first approach, though.
You might think that using only one FBO would save resources. But keep in mind that FBOs are objects that contain only state, so they use very little memory. Creating an additional FBO is insignificant.
Most people would probably prefer the first approach. The thinking is that you can configure both FBOs during setup, and then only need glBindFramebuffer() calls to switch between them. Binding a different object is generally considered cheaper than modifying an existing object, which you need for the second approach.
Consequently, the first binding of the texture (the color_attachment) is lost!
No, it isn't. Maybe your framebuffer class works that way, but then, it would be a very bad abstraction. The GL won't detach a texture from an FBO just because you bind this texture to some texture unit. You might get some undefined results if you create a feedback loop (rendering to a texture you are reading from).
EDIT
However, as @Reto Koradi pointed out in his excellent answer (and his comment to this one), you can't simply pick which color attachment to render to in unextended GLES 1/2, and you need some tricks in GLES 3. As a result, the fact I'm pointing out here is still true, but not really helpful for the ultimate goal you are trying to achieve.

How to generate texture mapping images?

I want to put/wrap images onto 3D objects. To keep things simple and fast, instead of using (and learning) a 3D library I want to use mapping images. Mapping images are used in the following way:
So you generate the mapping images once for each object and use the same mapping for all images you want to wrap.
My question is: how can I generate such mapping images (given the 3D model)? Since I don't know the terminology, my searches failed me. Sorry if I am using the wrong jargon.
Below you can see a description of the workflow.
I have the 3D model of the object and the input image; I want to generate mapping images that I can use to produce the textured image.
I don't even know where to start, any pointers are appreciated.
More info
My initial idea was to somehow warp identity mappings (see below) using an external program. I have generated horizontal and vertical gradient images in Photoshop just to see if the mapping works using Photoshop-generated images. The result doesn't look good. I wasn't hopeful, but it was worth a shot.
input
mappings (x and y), they just resize the image, they don't do anything fancy.
result
As you can see, there are lots of artifacts. Custom mapping images I have generated by warping the gradients look even worse.
Here is some more information on mappings: http://www.imagemagick.org/Usage/mapping/#distortion_maps
I am using the OpenCV remap() function for the mapping.
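For reference, this is roughly how such map images are consumed; a minimal sketch using OpenCV's remap() with an identity mapping (the file names are placeholders, and a real mapping image would encode the projected geometry instead of the identity):

#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat src = cv::imread("input.png");   // texture to wrap (placeholder file name)

    // map_x/map_y hold, for every destination pixel, the source coordinate to sample.
    // Here they are filled with an identity mapping; a real mapping image would
    // encode the 3D object's projected UVs instead.
    cv::Mat map_x(src.size(), CV_32FC1);
    cv::Mat map_y(src.size(), CV_32FC1);
    for (int y = 0; y < src.rows; ++y) {
        for (int x = 0; x < src.cols; ++x) {
            map_x.at<float>(y, x) = static_cast<float>(x);
            map_y.at<float>(y, x) = static_cast<float>(y);
        }
    }

    cv::Mat dst;
    cv::remap(src, dst, map_x, map_y, cv::INTER_LINEAR);
    cv::imwrite("result.png", dst);
    return 0;
}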
If I understand you right here, you want to do all of it in 2D?
Calling warpPerspective() for each of your cube surfaces will be much more successful than using remap().
pseudocode outline:
// for each surface:
// get the desired src and dst polygon
// the src one is your texture-image, so that's:
vector<Point2f> p_src(4), p_dst(4);
p_src[0] = Point2f(0, 0);
p_src[1] = Point2f(0, src.rows - 1);
p_src[2] = Point2f(src.cols - 1, 0);
p_src[3] = Point2f(src.cols - 1, src.rows - 1);
// the dst poly is the one you want textured, a 3d->2d projection of the cube surface.
// sorry, you've got to do that on your own ;(
// let's say, you've come up with this for the cube top:
p_dst[0] = Point2f(15, 15);
p_dst[1] = Point2f(44, 19);
p_dst[2] = Point2f(56, 30);
p_dst[3] = Point2f(33, 44);
// now you need the projection matrix to transform from one to another
// (getPerspectiveTransform expects floating-point points, hence Point2f):
Mat proj = getPerspectiveTransform(p_src, p_dst);
// finally, you can warp your texture to the dst polygon
// (dst is assumed to be the target image, already allocated at the scene size):
warpPerspective(src, dst, proj, dst.size());
If you can get hold of the 'Learning OpenCV' book, it's described around p. 170.
A final word of warning, since you're complaining about artefacts: yes, it'll all look pretty cheesy. 'Real' 3D engines do a lot of work here: subpixel UV mapping, filtering, mipmapping, etc. If you want it to look nice, consider using the 'real' thing.
By the way, there's nice OpenGL support built into OpenCV.
To achieve what you are trying to do, you need to render the 3D model's UVs to a texture. It will be easier to learn to render 3D than to do things this way, especially since there are a lot of weaknesses in your approach: lighting will be difficult to do, and problems with the depth buffer will be abundant.
Assuming all your objects will only ever be viewed from one angle, you need to render each of them to 3 textures:
UV-map
Normal-map
Depth-map (to correct the depth-buffer)
You will still have to do shading in order to make these look like your object, and I don't even know how to do the depth-buffer thing; I just know it can be done.
So in order to avoid learning 3D, you will have to learn all the difficult parts of 3D rendering. That does not seem like the easier route...

Resources