Autodesk Forge Viewer - smoke particle effect in Forge Viewer - three.js

I've been trying to make a smoke particle effect in the Forge Viewer for several days. I want to achieve something like this in the Forge Viewer. I found some three.js particle engine samples, but none of them work with r71 (the version used by the Forge Viewer), so I decided to write my own particle engine. However, I've hit a problem I can't figure out.
At first I tried it in plain three.js (not in the Forge Viewer, with r71 of course) and I can do something like this. That looked good to me, so I figured I could start writing my particle engine. But when I tested it in the Forge Viewer, things didn't go well.
Back in the Forge Viewer, I tested a point cloud with a custom shader and it worked well. I can customize every attribute (color, size, position) for every single point. But when I try to add an image texture using texture2D in my fragment shader, the browser shows some warnings and nothing appears in the viewer.
Here are the warnings shown by the browser:
WebGL: INVALID_OPERATION: getUniformLocation: program not linked
WebGL: INVALID_OPERATION: getAttribLocation: program not linked
WebGL: INVALID_OPERATION: useProgram: program not valid
vertexShader:
attribute float customSize;
void main() {
    vec4 mvPosition = modelViewMatrix * vec4( position, 1.0 );
    gl_Position = projectionMatrix * mvPosition;
    gl_PointSize = customSize;
}
fragmentShader:
uniform sampler2D texture;
void main() {
    vec4 color = texture2D(texture, gl_PointCoord); // when I comment out this line, everything works well
    gl_FragColor = color;
}
Function for creating the points:
createPoints() {
    const width = 100;
    const height = 100;
    const pointCount = width * height;
    const positions = new Float32Array(pointCount * 3);
    //const colors = new Float32Array(pointCount * 4);
    const sizes = new Float32Array(pointCount);
    const geometry = new THREE.BufferGeometry();
    const material = new THREE.ShaderMaterial({
        uniforms: {
            texture: { type: "t", value: this.particleTexture }
        },
        vertexShader: this.vertexShader,
        fragmentShader: this.fragmentShader,
        transparent: true,
        depthTest: true,
        blending: THREE.NormalBlending
    });
    material.supportsMrtNormals = true;
    let i = 0;
    for (let x = 0; x < width; x++) {
        for (let y = 0; y < height; y++) {
            const u = x / width, v = y / height;
            positions[i * 3] = u * 20;
            positions[i * 3 + 1] = v * 20;
            positions[i * 3 + 2] = Math.sin(u * 20) + Math.cos(v * 20);
            sizes[i] = 1 + THREE.Math.randFloat(1, 5);
            //colors[i * 4] = Math.random();
            //colors[i * 4 + 1] = Math.random();
            //colors[i * 4 + 2] = Math.random();
            //colors[i * 4 + 3] = 1;
            i++;
        }
    }
    //const colorsAttribute = new THREE.BufferAttribute(colors, 4);
    //colorsAttribute.normalized = true;
    geometry.addAttribute("position", new THREE.BufferAttribute(positions, 3));
    geometry.addAttribute("customSize", new THREE.BufferAttribute(sizes, 1));
    //geometry.addAttribute("customColor", colorsAttribute);
    geometry.computeBoundingBox();
    geometry.isPoints = true;
    const points = new THREE.PointCloud(geometry, material);
    viewer.impl.createOverlayScene('pointclouds');
    viewer.impl.addOverlay('pointclouds', points);
}
In the createPoints() function, this.particleTexture comes from:
THREE.ImageUtils.loadTexture("../img/smokeparticle.png")
The vertexShader, fragmentShader, and createPoints() function are all identical between the plain three.js test app (running in the browser outside the Forge Viewer) and the Forge Viewer app, but it only works when it's not running in the Forge Viewer.
I have searched a lot of tutorials and blogs, but just can't find a solution that fits. Can anyone help? Or maybe there's a better way to make a smoke effect in the Forge Viewer? Thanks for the help!
(If I missed some information, just tell me and I'll update the question!)

I tried your code and I managed to make it work by changing the uniform name from texture to tex.
I think texture is a reserved word in WebGL2.
If you switch to WebGL1 (as described in this article: Custom shader materials in Forge) it works.
Fragment shader :
uniform sampler2D tex;
void main() {
    vec4 color = texture2D(tex, gl_PointCoord);
    gl_FragColor = color;
}
Replace the uniform name in the material constructor:
const material = new THREE.ShaderMaterial({
    uniforms: {
        tex: { type: "t", value: this.particleTexture }
    },
    vertexShader: this.vertexShader,
    fragmentShader: this.fragmentShader,
    transparent: true,
    depthTest: true,
    blending: THREE.NormalBlending
});
Can you try it?

My colleague blogged about adding 3D markups to the viewer using THREE.Points and custom shader with textures: https://forge.autodesk.com/blog/3d-markup-icons-and-info-card. This sounds quite close to what you're trying to do.

Related

Transparent point is not transparent at some degrees in THREE.js

demo link
// vertexShader
attribute float percent;
varying float vPercent;

void main() {
    vec4 mvPosition = modelViewMatrix * vec4( position, 1.0 );
    gl_Position = projectionMatrix * mvPosition;
    gl_PointSize = 10.0;
    vPercent = percent;
}

// fragmentShader
varying float vPercent;

void main() {
    gl_FragColor = vec4( vec3(0.0), vPercent );
}
let points = [];
for (let i = 0; i < 1000; i++) {
    points.push(new THREE.Vector3(i / 1000, 1, 0));
}
let geometry = new THREE.BufferGeometry().setFromPoints(points);
let percents = new Float32Array(1000);
for (let i = 0; i < 1000; i++) {
    percents[i] = i / 1000;
}
geometry.addAttribute('percent', new THREE.BufferAttribute(percents, 1));
let line = new THREE.Points(geometry,
    new THREE.ShaderMaterial({
        vertexShader: shader_content["vertexShader"],
        fragmentShader: shader_content["fragmentShader"],
        vertexColors: THREE.VertexColors,
        transparent: true
    })
);
scene.add(line);
Rotate the scene and the points are all black at some angles (THREE.js r109).
Is it a bug, or is it just what it's supposed to be?
Screenshots may explain better.
all black picture
normal picture
What you see is a depth sorting issue that can be avoided by setting depthWrite of your shader material to false. The points no longer flicker and you always see the expected result (which actually corresponds to your black picture).
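In three.js terms the fix is a one-line addition to the material. A minimal sketch, assuming the same kind of ShaderMaterial setup as in the question (the shader sources are placeholders for whatever you already use):

```javascript
const material = new THREE.ShaderMaterial({
    vertexShader: shader_content["vertexShader"],
    fragmentShader: shader_content["fragmentShader"],
    transparent: true,
    depthWrite: false // points no longer write depth, so overlapping transparent
                      // points can't occlude each other in a draw-order-dependent way
});
```

With depthWrite: false the points still test against other geometry (depthTest stays on by default) but don't write their own depth, so their internal draw order stops mattering.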

Performantly render tens of thousands of spheres of variable size/color/position in Three.js?

This question is picking up from my last question where I found that using Points leads to problems: https://stackoverflow.com/a/60306638/4749956
To solve this you'll need to draw your points using quads instead of points. There are many ways to do that. Draw each quad as a separate mesh or sprite, or merge all the quads into another mesh, or use InstancedMesh where you'll need a matrix per point, or write custom shaders to do points (see the last example on this article)
I've been trying to figure this answer out. My questions are
What is 'instancing'? What is the difference between merging geometries and instancing? And, if I were to do either one of these, what geometry would I use and how would I vary color? I've been looking at this example:
https://github.com/mrdoob/three.js/blob/master/examples/webgl_instancing_performance.html
And I see that for each sphere you would have a geometry which would apply the position and the size (scale?). Would the underlying geometry be a SphereBufferGeometry of unit radius, then? But, how do you apply color?
Also, I read about the custom shader method, and it makes some vague sense. But, it seems more complex. Would the performance be any better than the above?
Based on your previous question...
First off, instancing is a way to tell three.js to draw the same geometry multiple times while changing one or more things for each "instance". IIRC the only thing three.js supports out of the box is setting a different matrix (position, orientation, scale) for each instance. Past that, for example to have different colors, you have to write custom shaders.
Instancing allows you to ask the system to draw many things with one "ask" instead of an "ask" per thing. That means it ends up being much faster. You can think of it like anything: if you want 3 hamburgers you could ask someone to make you one; when they finished you could ask them to make another; when they finished you could ask them to make a third. That would be much slower than just asking them to make 3 hamburgers at the start. It's not a perfect analogy, but it does point out how asking for multiple things one at a time is less efficient than asking for multiple things all at once.
Merging meshes is yet another solution. Following the bad analogy above, merging meshes is like making one big one-pound hamburger instead of three 1/3-pound hamburgers. Flipping one larger burger and putting toppings and buns on one large burger is marginally faster than doing the same to 3 small burgers.
As for which is the best solution for you, that depends. In your original code you were just drawing textured quads using Points. Points always draw their quad in screen space. Meshes, on the other hand, rotate in world space by default, so if you made instances of quads, or a merged set of quads, and tried to rotate them, they would turn and not face the camera like Points do. If you used sphere geometry you'd have the issue that instead of computing only 6 vertices per quad with a circle drawn on it, you'd be computing 100s or 1000s of vertices per sphere, which would be slower than 6 vertices per quad.
So again it requires a custom shader to keep the points facing the camera.
To do it with instancing, the short version is: you decide which vertex data are repeated for each instance. For example, for a textured quad we need 6 vertex positions and 6 UVs. For these you make normal BufferAttributes.
Then you decide which vertex data are unique to each instance. In your case the size, the color, and the center of the point. For each of these we make an InstancedBufferAttribute.
We add all of those attributes to an InstancedBufferGeometry and, as the last argument, we tell it how many instances.
At draw time you can think of it like this:

    for each instance
        set size to the next value in the size attribute
        set color to the next value in the color attribute
        set center to the next value in the center attribute
        call the vertex shader 6 times, with position and uv set to the nth value in their attributes.
In this way you get the same geometry (the positions and uvs) used multiple times but each time a few values (size, color, center) change.
body {
margin: 0;
}
#c {
width: 100vw;
height: 100vh;
display: block;
}
#info {
position: absolute;
right: 0;
bottom: 0;
color: red;
background: black;
}
<canvas id="c"></canvas>
<div id="info"></div>
<script type="module">
// Three.js - Picking - RayCaster w/Transparency
// from https://threejsfundamentals.org/threejs/threejs-picking-gpu.html
import * as THREE from "https://threejsfundamentals.org/threejs/resources/threejs/r113/build/three.module.js";
function main() {
const infoElem = document.querySelector("#info");
const canvas = document.querySelector("#c");
const renderer = new THREE.WebGLRenderer({ canvas });
const fov = 60;
const aspect = 2; // the canvas default
const near = 0.1;
const far = 200;
const camera = new THREE.PerspectiveCamera(fov, aspect, near, far);
camera.position.z = 30;
const scene = new THREE.Scene();
scene.background = new THREE.Color(0);
const pickingScene = new THREE.Scene();
pickingScene.background = new THREE.Color(0);
// put the camera on a pole (parent it to an object)
// so we can spin the pole to move the camera around the scene
const cameraPole = new THREE.Object3D();
scene.add(cameraPole);
cameraPole.add(camera);
function randomNormalizedColor() {
return Math.random();
}
function getRandomInt(n) {
return Math.floor(Math.random() * n);
}
function getCanvasRelativePosition(e) {
const rect = canvas.getBoundingClientRect();
return {
x: e.clientX - rect.left,
y: e.clientY - rect.top
};
}
const textureLoader = new THREE.TextureLoader();
const particleTexture =
"https://raw.githubusercontent.com/mrdoob/three.js/master/examples/textures/sprites/ball.png";
const vertexShader = `
attribute float size;
attribute vec3 customColor;
attribute vec3 center;
varying vec3 vColor;
varying vec2 vUv;
void main() {
vColor = customColor;
vUv = uv;
vec3 viewOffset = position * size ;
vec4 mvPosition = modelViewMatrix * vec4(center, 1) + vec4(viewOffset, 0);
gl_Position = projectionMatrix * mvPosition;
}
`;
const fragmentShader = `
uniform sampler2D texture;
varying vec3 vColor;
varying vec2 vUv;
void main() {
vec4 tColor = texture2D(texture, vUv);
if (tColor.a < 0.5) discard;
gl_FragColor = mix(vec4(vColor.rgb, 1.0), tColor, 0.1);
}
`;
const pickFragmentShader = `
uniform sampler2D texture;
varying vec3 vColor;
varying vec2 vUv;
void main() {
vec4 tColor = texture2D(texture, vUv);
if (tColor.a < 0.25) discard;
gl_FragColor = vec4(vColor.rgb, 1.0);
}
`;
const materialSettings = {
uniforms: {
texture: {
type: "t",
value: textureLoader.load(particleTexture)
}
},
vertexShader: vertexShader,
fragmentShader: fragmentShader,
blending: THREE.NormalBlending,
depthTest: true,
transparent: false
};
const createParticleMaterial = () => {
const material = new THREE.ShaderMaterial(materialSettings);
return material;
};
const createPickingMaterial = () => {
const material = new THREE.ShaderMaterial({
...materialSettings,
fragmentShader: pickFragmentShader,
blending: THREE.NormalBlending
});
return material;
};
const geometry = new THREE.InstancedBufferGeometry();
const pickingGeometry = new THREE.InstancedBufferGeometry();
const colors = [];
const sizes = [];
const pickingColors = [];
const pickingColor = new THREE.Color();
const centers = [];
const numSpheres = 30;
const positions = [
-0.5, -0.5,
0.5, -0.5,
-0.5, 0.5,
-0.5, 0.5,
0.5, -0.5,
0.5, 0.5,
];
const uvs = [
0, 0,
1, 0,
0, 1,
0, 1,
1, 0,
1, 1,
];
for (let i = 0; i < numSpheres; i++) {
colors[3 * i] = randomNormalizedColor();
colors[3 * i + 1] = randomNormalizedColor();
colors[3 * i + 2] = randomNormalizedColor();
const rgbPickingColor = pickingColor.setHex(i + 1);
pickingColors[3 * i] = rgbPickingColor.r;
pickingColors[3 * i + 1] = rgbPickingColor.g;
pickingColors[3 * i + 2] = rgbPickingColor.b;
sizes[i] = getRandomInt(5);
centers[3 * i] = getRandomInt(20);
centers[3 * i + 1] = getRandomInt(20);
centers[3 * i + 2] = getRandomInt(20);
}
geometry.setAttribute(
"position",
new THREE.Float32BufferAttribute(positions, 2)
);
geometry.setAttribute(
"uv",
new THREE.Float32BufferAttribute(uvs, 2)
);
geometry.setAttribute(
"customColor",
new THREE.InstancedBufferAttribute(new Float32Array(colors), 3)
);
geometry.setAttribute(
"center",
new THREE.InstancedBufferAttribute(new Float32Array(centers), 3)
);
geometry.setAttribute(
"size",
new THREE.InstancedBufferAttribute(new Float32Array(sizes), 1));
const material = createParticleMaterial();
const points = new THREE.InstancedMesh(geometry, material, numSpheres);
// setup geometry and material for GPU picking
pickingGeometry.setAttribute(
"position",
new THREE.Float32BufferAttribute(positions, 2)
);
pickingGeometry.setAttribute(
"uv",
new THREE.Float32BufferAttribute(uvs, 2)
);
pickingGeometry.setAttribute(
"customColor",
new THREE.InstancedBufferAttribute(new Float32Array(pickingColors), 3)
);
pickingGeometry.setAttribute(
"center",
new THREE.InstancedBufferAttribute(new Float32Array(centers), 3)
);
pickingGeometry.setAttribute(
"size",
new THREE.InstancedBufferAttribute(new Float32Array(sizes), 1)
);
const pickingMaterial = createPickingMaterial();
const pickingPoints = new THREE.InstancedMesh(pickingGeometry, pickingMaterial, numSpheres);
scene.add(points);
pickingScene.add(pickingPoints);
function resizeRendererToDisplaySize(renderer) {
const canvas = renderer.domElement;
const width = canvas.clientWidth;
const height = canvas.clientHeight;
const needResize = canvas.width !== width || canvas.height !== height;
if (needResize) {
renderer.setSize(width, height, false);
}
return needResize;
}
class GPUPickHelper {
constructor() {
// create a 1x1 pixel render target
this.pickingTexture = new THREE.WebGLRenderTarget(1, 1);
this.pixelBuffer = new Uint8Array(4);
}
pick(cssPosition, pickingScene, camera) {
const { pickingTexture, pixelBuffer } = this;
// set the view offset to represent just a single pixel under the mouse
const pixelRatio = renderer.getPixelRatio();
camera.setViewOffset(
renderer.getContext().drawingBufferWidth, // full width
renderer.getContext().drawingBufferHeight, // full top
(cssPosition.x * pixelRatio) | 0, // rect x
(cssPosition.y * pixelRatio) | 0, // rect y
1, // rect width
1 // rect height
);
// render the scene
renderer.setRenderTarget(pickingTexture);
renderer.render(pickingScene, camera);
renderer.setRenderTarget(null);
// clear the view offset so rendering returns to normal
camera.clearViewOffset();
//read the pixel
renderer.readRenderTargetPixels(
pickingTexture,
0, // x
0, // y
1, // width
1, // height
pixelBuffer
);
const id =
(pixelBuffer[0] << 16) | (pixelBuffer[1] << 8) | pixelBuffer[2];
infoElem.textContent = `You clicked sphere number ${id}`;
return id;
}
}
const pickHelper = new GPUPickHelper();
function render(time) {
time *= 0.001; // convert to seconds;
if (resizeRendererToDisplaySize(renderer)) {
const canvas = renderer.domElement;
camera.aspect = canvas.clientWidth / canvas.clientHeight;
camera.updateProjectionMatrix();
}
cameraPole.rotation.y = time * 0.1;
renderer.render(scene, camera);
requestAnimationFrame(render);
}
requestAnimationFrame(render);
function onClick(e) {
const pickPosition = getCanvasRelativePosition(e);
const pickedID = pickHelper.pick(pickPosition, pickingScene, camera);
}
function onTouch(e) {
const touch = e.touches[0];
const pickPosition = getCanvasRelativePosition(touch);
const pickedID = pickHelper.pick(pickPosition, pickingScene, camera);
}
window.addEventListener("mousedown", onClick);
window.addEventListener("touchstart", onTouch);
}
main();
</script>
This is quite a broad topic. In short, both merging and instancing are about reducing the number of draw calls when rendering something.
If you bind your sphere geometry once but keep re-rendering it, it costs you more to tell your computer to draw it many times than it takes your computer to compute what's needed to draw it. You end up with the GPU, a powerful parallel processing device, sitting idle.
Obviously, if you create a unique sphere at each point in space and merge them all, you pay the price of telling the GPU to render only once, and it will be busy rendering thousands of your spheres.
However, merging will increase your memory footprint, and has some overhead when you're actually creating the unique data. Instancing is a built-in, clever way of achieving the same effect at a lower memory cost.
I have an article written on this topic.
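To make the merging option and its memory trade-off concrete, here is a small sketch of what "merge all the quads into another mesh" can look like for point sprites. The helper name mergeQuads and its vertex layout are my own illustration, not a three.js API; the returned array would become the position attribute of a single BufferGeometry drawn in one call:

```javascript
// Expand N point centers (flat [x, y, z, ...] array) into one merged
// position buffer of quads (2 triangles = 6 vertices each).
// Note: unlike Points or an instanced billboard shader, these quads are
// baked facing +Z and will NOT turn to face the camera.
function mergeQuads(centers, size) {
    const corners = [
        [-0.5, -0.5], [0.5, -0.5], [-0.5, 0.5],  // triangle 1
        [-0.5, 0.5], [0.5, -0.5], [0.5, 0.5],    // triangle 2
    ];
    const quadCount = centers.length / 3;
    const positions = new Float32Array(quadCount * 6 * 3); // 6 verts * xyz per quad
    let o = 0;
    for (let i = 0; i < quadCount; i++) {
        const cx = centers[3 * i], cy = centers[3 * i + 1], cz = centers[3 * i + 2];
        for (const [dx, dy] of corners) {
            positions[o++] = cx + dx * size;
            positions[o++] = cy + dy * size;
            positions[o++] = cz;
        }
    }
    return positions; // e.g. feed to new THREE.BufferAttribute(positions, 3)
}
```

Compare memory: merging stores 6 full vertices for every point, while instancing stores the 6-vertex quad once plus one center/size/color per point, which is why instancing is usually the lighter option for tens of thousands of points.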

Particle system design using Three.js and Shader

I'm very new to this community. As I'm asking a question, if I claim something that isn't right, please correct me.
Now to the point: I'm designing a particle system using the Three.js library. In particular, I'm using THREE.Geometry() and controlling the vertices with a shader. I want the particle movement restricted to the inside of a box, which means that when a particle crosses a face of the box, its new position will be on the opposite side of that face.
Here's my approach, in the vertex shader:
uniform float elapsedTime;
void main() {
    gl_PointSize = 3.2;
    vec3 pos = position;
    pos.y -= elapsedTime * 2.1;
    if (pos.y < -100.0) {
        pos.y = 100.0;
    }
    gl_Position = projectionMatrix * modelViewMatrix * vec4(pos, 1.0);
}
The elapsedTime is sent from the JavaScript animation loop via a uniform, and the y position of each vertex is updated according to the time. As a test, I want a particle that drops below the bottom plane (y = -100) to move to the top plane. That was my plan. And this is the result after they all reach the bottom:
Start to fall
After reach the bottom
So, what am I missing here?
You can achieve it using the mod() function:
var scene = new THREE.Scene();
var camera = new THREE.PerspectiveCamera(60, innerWidth / innerHeight, 1, 1000);
camera.position.set(0, 0, 300);
var renderer = new THREE.WebGLRenderer({
    antialias: true
});
renderer.setSize(innerWidth, innerHeight);
document.body.appendChild(renderer.domElement);

var controls = new THREE.OrbitControls(camera, renderer.domElement);

var gridTop = new THREE.GridHelper(200, 10);
gridTop.position.y = 100;
var gridBottom = new THREE.GridHelper(200, 10);
gridBottom.position.y = -100;
scene.add(gridTop, gridBottom);

var pts = [];
for (let i = 0; i < 1000; i++) {
    pts.push(new THREE.Vector3(Math.random() - 0.5, Math.random() - 0.5, Math.random() - 0.5).multiplyScalar(100));
}
var geom = new THREE.BufferGeometry().setFromPoints(pts);
var mat = new THREE.PointsMaterial({
    size: 2,
    color: "aqua"
});
var uniforms = {
    time: { value: 0 },
    highY: { value: 100 },
    lowY: { value: -100 }
};
mat.onBeforeCompile = shader => {
    shader.uniforms.time = uniforms.time;
    shader.uniforms.highY = uniforms.highY;
    shader.uniforms.lowY = uniforms.lowY;
    console.log(shader.vertexShader);
    shader.vertexShader = `
        uniform float time;
        uniform float highY;
        uniform float lowY;
    ` + shader.vertexShader;
    shader.vertexShader = shader.vertexShader.replace(
        `#include <begin_vertex>`,
        `#include <begin_vertex>
        float totalY = highY - lowY;
        transformed.y = highY - mod(highY - (transformed.y - time * 20.), totalY);
        `
    );
};
var points = new THREE.Points(geom, mat);
scene.add(points);

var clock = new THREE.Clock();
renderer.setAnimationLoop(() => {
    uniforms.time.value = clock.getElapsedTime();
    renderer.render(scene, camera);
});
body {
    overflow: hidden;
    margin: 0;
}
<script src="https://threejs.org/build/three.min.js"></script>
<script src="https://threejs.org/examples/js/controls/OrbitControls.js"></script>
You cannot change state in a shader. A vertex shader's only outputs are gl_Position (to generate points/lines/triangles) and varyings that get passed to the fragment shader. A fragment shader's only output is gl_FragColor (in general). So trying to change pos.y will do nothing: the moment the shader exits, your change is forgotten.
For your particle code, though, you could make the position a repeating function of the time:
const float duration = 5.0;
float t = fract(elapsedTime / duration);
pos.y = mix(-100.0, 100.0, t);
Assuming elapsedTime is in seconds, pos.y will go from -100 to 100 over 5 seconds and repeat.
Note that in this case all the particles will fall at the same time. You could add an attribute to give each particle a different time offset, or you could work their position into your own formula. Related to that, you might find this article useful.
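As a sanity check on that idea, the repeating motion plus a per-particle offset can be modeled in plain JavaScript (fract and mix below mirror their GLSL counterparts; timeOffset stands in for a hypothetical attribute float timeOffset; you would add to the geometry):

```javascript
// Plain-JS mirrors of the GLSL built-ins used in the snippet above.
const fract = (x) => x - Math.floor(x);    // fractional part, always in [0, 1)
const mix = (a, b, t) => a + (b - a) * t;  // linear interpolation

// y position of one particle: the phase repeats every `duration` seconds,
// shifted by that particle's own timeOffset so they don't all move in step.
function particleY(elapsedTime, timeOffset, duration = 5.0) {
    const t = fract((elapsedTime + timeOffset) / duration);
    return mix(-100.0, 100.0, t);
}
```

The GLSL equivalent would read float t = fract((elapsedTime + timeOffset) / duration); pos.y = mix(-100.0, 100.0, t); with timeOffset declared as a per-vertex attribute.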
You could also do the particle movement in JavaScript, like this example and this one, updating the positions in the Geometry (or better, a BufferGeometry).
Yet another solution is to do the movement in a separate shader by storing the positions in a texture and updating them into a new texture, then using that texture as input to another set of shaders that draws the particles.

Synching two meshes, one with ShaderMaterial

I've got two meshes created from the same geometry and running the same animation. If I do absolutely nothing to the meshes they stay in perfect lockstep, which is what I want. But if I change their position or rotation they go out of sync.
Here's a jsfiddle of an example. There's a blob of minified JS at the top which contains the contents of EffectComposer.js, ShaderPass.js, RenderPass.js, MaskPass.js, and CopyShader.js from the r77 source (the three.js CDN doesn't host them, and jsfiddle won't work with links to them in the three.js GitHub repo). The start of the example problem is the definition of THREE.OutlineShader:
THREE.OutlineShader = {
uniforms: {
"offset": {
type: "f",
value: 2.0
},
"boneTexture": {
type: "t",
value: null
},
"boneTextureWidth": {
type: "i",
value: null
},
"boneTextureHeight": {
type: "i",
value: null
},
},
vertexShader: [
"uniform sampler2D boneTexture;",
"uniform int boneTextureWidth;",
"uniform int boneTextureHeight;",
"uniform float offset;",
"mat4 getBoneMatrix(const in float i) {",
"float j = i * 4.0;",
"float x = mod(j, float(boneTextureWidth));",
"float y = floor(j / float(boneTextureWidth));",
"float dx = 1.0 / float(boneTextureWidth);",
"float dy = 1.0 / float(boneTextureHeight);",
"y = dy * (y + 0.5);",
"vec4 v1 = texture2D(boneTexture, vec2(dx * (x + 0.5), y));",
"vec4 v2 = texture2D(boneTexture, vec2(dx * (x + 1.5), y));",
"vec4 v3 = texture2D(boneTexture, vec2(dx * (x + 2.5), y));",
"vec4 v4 = texture2D(boneTexture, vec2(dx * (x + 3.5), y));",
"mat4 bone = mat4(v1, v2, v3, v4);",
"return bone;",
"}",
"void main() {",
"mat4 boneMatX = getBoneMatrix(skinIndex.x);",
"mat4 boneMatY = getBoneMatrix(skinIndex.y);",
"mat4 boneMatZ = getBoneMatrix(skinIndex.z);",
"mat4 boneMatW = getBoneMatrix(skinIndex.w);",
"vec4 skinVertex = vec4(position + normal * offset, 1.0);",
"vec4 skinned = boneMatX * skinVertex * skinWeight.x;",
"skinned += boneMatY * skinVertex * skinWeight.y;",
"skinned += boneMatZ * skinVertex * skinWeight.z;",
"skinned += boneMatW * skinVertex * skinWeight.w;",
"vec4 mvPosition = modelViewMatrix * skinned;",
"gl_Position = projectionMatrix * mvPosition;",
"}"
].join("\n"),
fragmentShader: [
"uniform int boneTextureWidth;",
"void main() {",
"gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);",
"}"
].join("\n")
};
var camera, light, renderer, composer, clock;
var sceneMain, sceneOutline;
var meshMain = null,
meshOutline = null;
var mixerMain, mixerOutline;
var animMain, animOutline;
var height = 500,
width = 500;
var objData = '{"metadata":{"formatVersion":3.1,"generatedBy":"Blender 2.7 Exporter","vertices":24,"faces":22,"normals":18,"colors":0,"uvs":[],"materials":1,"morphTargets":0,"bones":2},"scale":1.000000,"materials":[{"DbgColor":15658734,"DbgIndex":0,"DbgName":"Material","blending":"NormalBlending","colorDiffuse":[0.1569801711586143,0.17312412519937936,0.6400000190734865],"colorEmissive":[0.0,0.0,0.0],"colorSpecular":[0.2535329759120941,0.0,0.007157782092690468],"depthTest":true,"depthWrite":true,"shading":"Lambert","specularCoef":50,"opacity":1.0,"transparent":false,"vertexColors":false}],"vertices":[1.51034,-1,-1,1.51034,-1,1,-0.489661,-1,1,-0.489661,-1,-1,1.51034,1,-1,1.51034,1,1,-0.489662,1,1,-0.489661,1,-1,3.23233,-1,-0.999999,3.23233,-1,1,3.23233,1,-0.999999,3.23233,1,1,-1.98848,-1,1,-1.98848,-1,-1,-1.98848,1,0.999999,-1.98848,1,-1,1.51034,-5.70811,-1,1.51034,-5.70811,1,3.23233,-5.70811,-0.999999,3.23233,-5.70811,1,-0.489661,-5.62708,1,-0.48966,-5.62708,-1,-1.98848,-5.62708,1,-1.98848,-5.62708,-1],"morphTargets":[],"normals":[-0.301492,-0.301492,-0.904508,-0.301492,-0.301492,0.904508,0.301492,-0.301492,0.904508,0.301492,-0.301492,-0.904508,0,0.707083,-0.707083,0,0.707083,0.707083,0.577349,0.577349,0.577349,0.577349,0.577349,-0.577349,-0.577349,0.577349,-0.577349,-0.577349,0.577349,0.577349,0.707083,0,-0.707083,0.707083,0,0.707083,0.577349,-0.577349,-0.577349,-0.577349,-0.577349,-0.577349,-0.707083,0,0.707083,-0.707083,0,-0.707083,-0.577349,-0.577349,0.577349,0.577349,-0.577349,0.577349],"colors":[],"uvs":[],"faces":[35,0,1,2,3,0,0,1,2,3,35,4,5,11,10,0,4,5,6,7,35,1,5,6,2,0,1,5,5,2,35,6,7,15,14,0,5,4,8,9,35,4,0,3,7,0,4,0,3,4,35,8,10,11,9,0,10,7,6,11,35,5,1,9,11,0,5,1,11,6,35,0,4,10,8,0,0,4,7,10,35,0,8,18,16,0,0,10,12,13,35,12,14,15,13,0,14,9,8,15,35,7,3,13,15,0,4,3,15,8,35,2,6,14,12,0,2,5,9,14,35,2,12,22,20,0,2,14,16,17,35,17,16,18,19,0,16,13,12,17,35,9,1,17,19,0,11,1,16,17,35,8,9,19,18,0,10,11,17,12,35,1,0,16,17,0,1,0,13,16,35,21,20,22,23,0,12,17,16,13,35,13,3,21
,23,0,15,3,12,13,35,12,13,23,22,0,14,15,13,16,35,3,2,20,21,0,3,2,17,12,35,4,7,6,5,0,4,4,5,5],"bones":[{"parent":-1,"name":"leg.R","pos":[-1.24994,0.43791,0.191651],"rotq":[-0.00523508,-0.706875,-0.707296,-0.00579363],"scl":[1,1,1]},{"parent":-1,"name":"leg.L","pos":[2.49995,0.280193,0.066556],"rotq":[-0.00523507,-0.706875,-0.707296,-0.00579363],"scl":[1,1,1]}],"skinIndices":[1,0,1,0,0,1,0,1,1,0,1,0,0,1,0,1,1,-1,1,-1,1,-1,1,-1,0,-1,0,-1,0,-1,0,1,1,-1,1,-1,1,-1,1,-1,0,-1,0,-1,0,-1,0,-1],"skinWeights":[0.928373,0.0680346,0.937978,0.0587701,0.949888,0.0463839,0.937937,0.0591265,0.821856,0.122838,0.79233,0.145709,0.876929,0.0825711,0.830405,0.115734,0.989868,0,0.992278,0,0.968805,0,0.966368,0,0.993762,0,0.989439,0,0.978637,0,0.962526,0.00173758,0.997334,0,0.997776,0,0.999229,0,0.999402,0,0.998345,0,0.997508,0,0.99955,0,0.999106,0],"animations":[{"name":"ArmatureAction","fps":24,"length":0.416667,"hierarchy":[{"parent":-1,"keys":[{"time":0,"pos":[-1.24994,0.43791,0.191651],"rot":[-0.00643926,-0.522937,-0.852335,-0.0044168],"scl":[1,1,1]},{"time":0.0416667,"pos":[-1.24994,0.43791,0.191651],"rot":[-0.00683821,-0.561328,-0.827555,-0.00415746],"scl":[1,1,1]},{"time":0.0833333,"pos":[-1.24994,0.43791,0.191651],"rot":[-0.00791262,-0.665775,-0.746103,-0.00335735],"scl":[1,1,1]},{"time":0.125,"pos":[-1.24994,0.43791,0.191651],"rot":[0.00910612,0.78443,0.620147,0.00222404],"scl":[1,1,1]},{"time":0.166667,"pos":[-1.24994,0.43791,0.191651],"rot":[0.00983298,0.859093,0.511724,0.00131562],"scl":[1,1,1]},{"time":0.208333,"pos":[-1.24994,0.43791,0.191651],"rot":[0.0100438,0.881367,0.472325,0.000997505],"scl":[1,1,1]},{"time":0.25,"pos":[-1.24994,0.43791,0.191651],"rot":[0.00983298,0.859093,0.511724,0.00131562],"scl":[1,1,1]},{"time":0.291667,"pos":[-1.24994,0.43791,0.191651],"rot":[0.00910612,0.78443,0.620147,0.00222404],"scl":[1,1,1]},{"time":0.333333,"pos":[-1.24994,0.43791,0.191651],"rot":[-0.00791262,-0.665775,-0.746103,-0.00335735],"scl":[1,1,1]},{"time":0.375,"pos":[-1.24994,0.437
91,0.191651],"rot":[-0.00683821,-0.561328,-0.827555,-0.00415746],"scl":[1,1,1]},{"time":0.416667,"pos":[-1.24994,0.43791,0.191651],"rot":[-0.00643926,-0.522937,-0.852335,-0.0044168],"scl":[1,1,1]}]},{"parent":0,"keys":[{"time":0,"pos":[2.49995,0.280193,0.066556],"rot":[0.0033329,0.881416,0.472275,0.00706144],"scl":[1,1,1]},{"time":0.0416667,"pos":[2.49995,0.280193,0.066556],"rot":[0.00316317,0.858922,0.512045,0.00734349],"scl":[1,1,1]},{"time":0.0833333,"pos":[2.49995,0.280193,0.066556],"rot":[0.00263566,0.783706,0.621074,0.00807219],"scl":[1,1,1]},{"time":0.125,"pos":[2.49995,0.280193,0.066556],"rot":[-0.00187897,-0.664887,-0.74689,-0.00880854],"scl":[1,1,1]},{"time":0.166667,"pos":[2.49995,0.280193,0.066556],"rot":[-0.00126496,-0.561008,-0.827759,-0.00919252],"scl":[1,1,1]},{"time":0.208333,"pos":[2.49995,0.280193,0.066556],"rot":[-0.00104832,-0.522977,-0.852295,-0.009288],"scl":[1,1,1]},{"time":0.25,"pos":[2.49995,0.280193,0.066556],"rot":[-0.00126496,-0.561007,-0.827759,-0.00919252],"scl":[1,1,1]},{"time":0.291667,"pos":[2.49995,0.280193,0.066556],"rot":[-0.00187897,-0.664887,-0.74689,-0.00880854],"scl":[1,1,1]},{"time":0.333333,"pos":[2.49995,0.280193,0.066556],"rot":[0.00263566,0.783706,0.621074,0.00807219],"scl":[1,1,1]},{"time":0.375,"pos":[2.49995,0.280193,0.066556],"rot":[0.00316317,0.858921,0.512045,0.0073435],"scl":[1,1,1]},{"time":0.416667,"pos":[2.49995,0.280193,0.066556],"rot":[0.0033329,0.881416,0.472275,0.00706144],"scl":[1,1,1]}]}]}]}';
load();
function load() {
var loader = new THREE.JSONLoader();
clock = new THREE.Clock();
sceneMain = new THREE.Scene();
sceneOutline = new THREE.Scene();
var obj = loader.parse(JSON.parse(objData));
for (var k in obj.materials) {
obj.materials[k].skinning = true;
}
setModel(obj.geometry, obj.materials);
init();
animate();
}
function init() {
    camera = new THREE.PerspectiveCamera(40, height / width, 1, 10000);
    camera.position.set(0, 0, 25);
    light = new THREE.DirectionalLight(0xffffff);
    light.position.set(1, 1, 1);
    sceneMain.add(light);
    renderer = new THREE.WebGLRenderer({
        width: width,
        height: height,
        antialias: true,
    });
    renderer.setSize(width, height);
    renderer.setClearColor(0x666666);
    renderer.autoClear = false;
    renderer.gammaInput = true;
    renderer.gammaOutput = true;
    document.body.appendChild(renderer.domElement);
    var renderTarget = new THREE.WebGLRenderTarget(width, height, {
        minFilter: THREE.LinearFilter,
        magFilter: THREE.LinearFilter,
        format: THREE.RGBAFormat,
        stencilBuffer: true,
    });
    composer = new THREE.EffectComposer(renderer, renderTarget);
    composer.renderTarget1.stencilBuffer = true;
    composer.renderTarget2.stencilBuffer = true;
    var pMain = new THREE.RenderPass(sceneMain, camera);
    var pOut = new THREE.RenderPass(sceneOutline, camera);
    pOut.clear = false;
    var pCopy = new THREE.ShaderPass(THREE.CopyShader);
    pCopy.renderToScreen = true;
    composer.addPass(pMain);
    composer.addPass(pOut);
    composer.addPass(pCopy);
    animMain.play();
    animOutline.play();
}
function setModel(geometry, materials) {
    meshMain = new THREE.SkinnedMesh(geometry,
        new THREE.MeshFaceMaterial(materials));
    sceneMain.add(meshMain);
    mixerMain = new THREE.AnimationMixer(meshMain);
    animMain = mixerMain.clipAction(geometry.animations[0]);
    var shader = THREE.OutlineShader;
    var shaderMaterial = new THREE.ShaderMaterial({
        uniforms: THREE.UniformsUtils.clone(shader.uniforms),
        vertexShader: shader.vertexShader,
        fragmentShader: shader.fragmentShader,
        skinning: true,
        side: THREE.BackSide,
    });
    meshOutline = new THREE.SkinnedMesh(geometry, shaderMaterial);
    shaderMaterial.uniforms['boneTextureWidth'].value = meshOutline.skeleton.boneTextureWidth;
    shaderMaterial.uniforms['boneTextureHeight'].value = meshOutline.skeleton.boneTextureHeight;
    shaderMaterial.uniforms['boneTexture'].value = meshOutline.skeleton.boneTexture;
    shaderMaterial.uniforms['offset'].value = 0.5;
    shaderMaterial.uniforms['boneTextureWidth'].value.needsUpdate = true;
    shaderMaterial.uniforms['boneTextureHeight'].value.needsUpdate = true;
    shaderMaterial.uniforms['boneTexture'].value.needsUpdate = true;
    shaderMaterial.uniforms['offset'].value.needsUpdate = true;
    sceneOutline.add(meshOutline);
    mixerOutline = new THREE.AnimationMixer(meshOutline);
    animOutline = mixerOutline.clipAction(geometry.animations[0]);
}
function animate() {
    var delta = clock.getDelta();
    requestAnimationFrame(animate);
    update(delta);
    render(delta);
}

function update(delta) {
    if (meshMain && meshOutline) {
        meshMain.rotation.y += 1 * delta;
        meshOutline.rotation.y += 1 * delta;
        mixerMain.update(delta);
        mixerOutline.update(delta);
    }
}

function render(delta) {
    composer.render(delta);
}
The problem is evidently due to the ShaderMaterial and/or the shader itself, as changing the second mesh's material to e.g. MeshBasicMaterial results in the expected behaviour (the two meshes staying in lockstep).
The shader was lifted from this jsfiddle posted some time ago. It uses an ancient version of three.js. I'm not entirely clear on the expected/correct way of populating the boneTexture, boneTextureWidth, and boneTextureHeight uniforms when creating the ShaderMaterial instance. I do it manually from the values in the mesh's skeleton, but I wouldn't be surprised if that's wrong.
Again, I'm just trying to understand why transforming both meshes in the same way at the same time causes them to go out of sync, as illustrated in the first jsfiddle example.
Edit: I observe that the mesh using ShaderMaterial (meshOutline) syncs with the other mesh (meshMain) if meshOutline is rotated exactly half as much as meshMain. E.g., in the update() function:
meshMain.rotation.y += 1 * delta;
meshOutline.rotation.y += 1 * delta / 2;
...will result in the two meshes apparently rotating in sync. The same is true if the rotation is replaced with a coordinate (e.g. x) movement:
//meshMain.rotation.y += 1 * delta;
//meshOutline.rotation.y += 1 * delta / 2;
var dx = Math.random() - 0.5;
meshMain.position.x += dx;
meshOutline.position.x += dx / 2;
...will result in both meshes moving back and forth together. But if both are combined, that is:
meshMain.rotation.y += 1 * delta;
meshOutline.rotation.y += 1 * delta / 2;
var dx = Math.random() - 0.5;
meshMain.position.x += dx;
meshOutline.position.x += dx / 2;
They go wildly out of sync.
This clearly means there's something I'm not understanding about how the shader gets vertex positions from three.js. I understand that the shader computes the vertex positions and uses them; that's what happens when you use a ShaderMaterial. What I don't understand is how to keep the data the shader uses current with what's happening to the mesh in three.js, which apparently does happen in the second jsfiddle example I linked above.
Answering my own question: it appears that something in the ShaderMaterial (or skinning) implementation has changed since r66, the version used in the second, working, jsfiddle example from which I got the shader code.
What I ended up doing was going through the ShaderChunk source to see whether I could reproduce what I wanted using chunks of shader code from the three.js source (thinking that perhaps some default was being set in the background that I wasn't replicating in my custom shader). What I ended up with for the vertex shader is:
vertexShader: [
    "uniform float offset;",
    THREE.ShaderChunk["common"],
    THREE.ShaderChunk["skinning_pars_vertex"],
    "void main() {",
    "    vec3 transformed = vec3(position + normal * offset);",
    THREE.ShaderChunk["skinbase_vertex"],
    THREE.ShaderChunk["skinning_vertex"],
    THREE.ShaderChunk["project_vertex"],
    "}"
].join("\n"),
The important difference is hidden away in skinning_vertex.glsl, the source for the skinning_vertex shader chunk:
#ifdef USE_SKINNING
    vec4 skinVertex = bindMatrix * vec4( transformed, 1.0 );
    vec4 skinned = vec4( 0.0 );
    skinned += boneMatX * skinVertex * skinWeight.x;
    skinned += boneMatY * skinVertex * skinWeight.y;
    skinned += boneMatZ * skinVertex * skinWeight.z;
    skinned += boneMatW * skinVertex * skinWeight.w;
    skinned = bindMatrixInverse * skinned;
#endif
What's happening there that wasn't in my custom shader (and wasn't in the example that worked with r66) is in the first and last lines: first multiplying by bindMatrix, then later by bindMatrixInverse. I'm a little puzzled why this is required, since according to the docs these two uniforms are only defined if the SkinnedMesh has bindMode set to "detached" (instead of "attached", the default).
But at any rate, that change (either using the ShaderChunk-based shader or editing my custom shader to include the differences) produces the desired result.
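A plausible reading of the half-rotation observation above: without the bindMatrixInverse multiplication, the mesh's transform ends up applied to each vertex twice (once through the skinning path and once through modelViewMatrix), and applying a rotation twice is the same as one rotation by twice the angle. This is a hypothesis, not confirmed by the docs; a minimal, three.js-free sketch of the doubling in plain JavaScript (the rotate2D helper is just for illustration):

```javascript
// Rotate a 2D point by `angle` radians about the origin.
function rotate2D(angle, [x, y]) {
  const c = Math.cos(angle), s = Math.sin(angle);
  return [c * x - s * y, s * x + c * y];
}

const theta = 0.7;
const v = [1, 0];

// meshMain: the rotation is applied once by the fixed pipeline.
const main = rotate2D(theta, v);

// meshOutline without the bindMatrixInverse fix: the rotation is effectively
// applied twice, so authoring a half rotation lands on the same result.
const outline = rotate2D(theta / 2, rotate2D(theta / 2, v));

console.log(main, outline); // the two vectors agree (up to float rounding)
```

This matches the symptom exactly: rotating meshOutline by theta/2 makes it appear rotated by theta, in lockstep with meshMain.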
That answers my question, but I'd still welcome any pointers to where the documentation covers this or an explanation of the changes from r66 to r77 that explain the different behaviour.

How to map texture on a custom non square quad in THREE JS

Playing around with ThreeJS, I encountered the classic problem of texturing a non-square quad:
http://www.xyzw.us/~cass/qcoord/
The problem is that ThreeJS only lets you set texture coordinates with a vec2 (as far as I know).
After spending hours on this and finally finding a working solution, I wanted to share it with the community, and maybe get some better ideas?
So here is the code:
First, the JavaScript that builds my quad with ThreeJS:
var planeGeom = new THREE.Geometry();
planeGeom.vertices.push(new THREE.Vector3(0, 0, 10));
planeGeom.vertices.push(new THREE.Vector3(10, 0, 10));
planeGeom.vertices.push(new THREE.Vector3(20, 0, 0));
planeGeom.vertices.push(new THREE.Vector3(-10, 0, 0));
// create the 2 faces; maybe I should play with CW or CCW order... ;)
planeGeom.faces.push(new THREE.Face3(0, 1, 3));
planeGeom.faces.push(new THREE.Face3(1, 2, 3));
// compute the top/bottom width ratio (Plane.TL/TR/BL/BR are the quad's corner points, defined elsewhere)
var topWidth = Math.abs(Plane.TR.x - Plane.TL.x);
var bottomWidth = Math.abs(Plane.BR.x - Plane.BL.x);
var ratio = topWidth / bottomWidth;
// create UVs as barely explained in the link above (www.xyzw.us)
var UVS = [
    new THREE.Vector2(0, ratio),
    new THREE.Vector2(0, 0),
    new THREE.Vector2(1.0, 0),
    new THREE.Vector2(ratio, ratio)
];
// faceVertexUvs[materialID][face index][vertex index within face]
planeGeom.faceVertexUvs[0][0] = [UVS[0], UVS[1], UVS[3]];
planeGeom.faceVertexUvs[0][1] = [UVS[1], UVS[2], UVS[3]];
// load the image
var checkerTexture = THREE.ImageUtils.loadTexture('./resources/images/checker_largeColor.gif');
// now create the custom shader parts
customUniforms = {
    uSampler: { type: "t", value: checkerTexture },
};
var customMaterial = new THREE.ShaderMaterial({
    uniforms: customUniforms,
    vertexShader: document.getElementById('vertexShader').textContent,
    fragmentShader: document.getElementById('fragmentShader').textContent,
    side: THREE.DoubleSide
});
// create the mesh with the custom geometry and material
var planeMesh = new THREE.Mesh(planeGeom, customMaterial);
// add the object to the threeJS scene
this.m_Scene.add(planeMesh);
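The UV construction above can be sketched in isolation: each corner gets a premultiplied coordinate (u*q, v*q) together with a divisor q, where q is the top/bottom width ratio for the top corners and 1 for the bottom corners. The helper name below (trapezoidUVs) is hypothetical, just for illustration:

```javascript
// Build homogeneous UVs for a trapezoid whose top edge is `topWidth` wide
// and whose bottom edge is `bottomWidth` wide. Each entry stores the
// premultiplied uv = (u*q, v*q) and the divisor q.
function trapezoidUVs(topWidth, bottomWidth) {
  const q = topWidth / bottomWidth;
  return [
    { uv: [0, q], q: q }, // top-left     (plain uv: 0, 1)
    { uv: [0, 0], q: 1 }, // bottom-left  (plain uv: 0, 0)
    { uv: [1, 0], q: 1 }, // bottom-right (plain uv: 1, 0)
    { uv: [q, q], q: q }, // top-right    (plain uv: 1, 1)
  ];
}

// Dividing each corner by its q recovers the plain unit-square coordinates,
// which is exactly what the fragment shader's division does after the GPU
// has interpolated the premultiplied values.
const corners = trapezoidUVs(20, 30);
const recovered = corners.map(c => c.uv.map(x => x / c.q));
console.log(recovered); // [[0, 1], [0, 0], [1, 0], [1, 1]]
```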
and now the custom shader code:
Vertex shader:
varying vec4 textureCoord;
void main()
{
    // here I recreate the vec4 I would have liked to have in ThreeJS
    textureCoord = vec4(uv, 0.0, 1.0);
    if (uv.y != 0.0)
    {
        textureCoord.w *= uv.y;
    }
    gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );
}
and the fragment shader:
uniform sampler2D uSampler;
varying vec4 textureCoord;
void main()
{
    gl_FragColor = texture2D(uSampler, vec2(textureCoord.x / textureCoord.w, textureCoord.y / textureCoord.w));
}
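To see why the per-fragment division matters: the rasterizer interpolates varyings linearly, so interpolating the premultiplied (u*q, v*q, q) and dividing afterwards gives a genuinely different result from interpolating the plain (u, v). A small numeric sketch, assuming a top/bottom ratio of 0.5, at the midpoint of the edge between a bottom corner and a top corner:

```javascript
const r = 0.5; // top/bottom width ratio, assumed for illustration

// Premultiplied (u*q, v*q, q) at two corners of the quad's right edge:
const bottom = [1 * 1, 0 * 1, 1]; // plain uv (1, 0), q = 1
const top = [1 * r, 1 * r, r];    // plain uv (1, 1), q = r

// Linear interpolation at t = 0.5, as the rasterizer does for varyings.
const mid = bottom.map((b, i) => (b + top[i]) / 2);

const naiveV = (0 + 1) / 2;    // interpolating plain v directly
const projV = mid[1] / mid[2]; // dividing by interpolated q afterwards

console.log(naiveV, projV); // 0.5 vs 0.3333... = r / (1 + r)
```

With r != 1 the two results disagree, which is exactly the distortion you see when texturing a trapezoid with plain vec2 UVs, and what the divisor in textureCoord.w corrects.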
Voilà. I hope this helps someone, or maybe myself in a few years... ;)

Resources