three.js texture across InstanceGeometry - three.js

I'm using InstancedBufferGeometry to render thousands of base geometries (boxes) in a scene. It's efficient and uses only one material/texture, but the image texture repeats on every instance.
I'm trying to figure out how to spread the texture out over x number of instances. Say, for example, there are 8 box instances: I'd like 1/8 of the texture to appear on each box.
I think the transformUv function on THREE.Texture is what I'd want to use, but I'm not sure how to use it in this context. Or would the texture mapping happen in the shader itself?
UPDATE
My own code is pretty involved and uses the built-in three.js materials adapted for instances, so let's just use one of the three.js examples as a starting point: https://github.com/mrdoob/three.js/blob/master/examples/webgl_buffergeometry_instancing_dynamic.html
It's also pasted in brief below:
Vertex shader:
precision highp float;
uniform mat4 modelViewMatrix;
uniform mat4 projectionMatrix;
attribute vec3 position;
attribute vec3 offset;
attribute vec2 uv;
attribute vec4 orientation;
varying vec2 vUv;
// http://www.geeks3d.com/20141201/how-to-rotate-a-vertex-by-a-quaternion-in-glsl/
vec3 applyQuaternionToVector( vec4 q, vec3 v ){
return v + 2.0 * cross( q.xyz, cross( q.xyz, v ) + q.w * v );
}
void main() {
vec3 vPosition = applyQuaternionToVector( orientation, position );
vUv = uv;
gl_Position = projectionMatrix * modelViewMatrix * vec4( offset + vPosition, 1.0 );
}
Fragment shader
precision highp float;
uniform sampler2D map;
varying vec2 vUv;
void main() {
gl_FragColor = texture2D( map, vUv );
}
JS:
var instances = 50;
var bufferGeometry = new THREE.BoxBufferGeometry( 2, 2, 2 );
var geometry = new THREE.InstancedBufferGeometry();
geometry.index = bufferGeometry.index;
geometry.attributes.position = bufferGeometry.attributes.position;
geometry.attributes.uv = bufferGeometry.attributes.uv;
// per instance data
var offsets = [];
var orientations = [];
var vector = new THREE.Vector4();
var x, y, z, w;
for ( var i = 0; i < instances; i ++ ) {
// offsets
x = Math.random() * 100 - 50;
y = Math.random() * 100 - 50;
z = Math.random() * 100 - 50;
vector.set( x, y, z, 0 ).normalize();
vector.multiplyScalar( 5 );
offsets.push( x + vector.x, y + vector.y, z + vector.z );
// orientations
x = Math.random() * 2 - 1;
y = Math.random() * 2 - 1;
z = Math.random() * 2 - 1;
w = Math.random() * 2 - 1;
vector.set( x, y, z, w ).normalize();
orientations.push( vector.x, vector.y, vector.z, vector.w );
}
offsetAttribute = new THREE.InstancedBufferAttribute( new Float32Array( offsets ), 3 );
orientationAttribute = new THREE.InstancedBufferAttribute( new Float32Array( orientations ), 4 ).setDynamic( true );
geometry.addAttribute( 'offset', offsetAttribute );
geometry.addAttribute( 'orientation', orientationAttribute );
// material
var material = new THREE.ShaderMaterial( {
uniforms: {
map: { value: new THREE.TextureLoader().load( 'textures/crate.gif' ) } },
vertexShader: document.getElementById( 'vertexShader' ).textContent,
fragmentShader: document.getElementById( 'fragmentShader' ).textContent
} );
mesh = new THREE.Mesh( geometry, material );
scene.add( mesh );

You're going to have to create an additional custom attribute that holds the UV offset of each instance, just like the existing attribute that holds the x, y, z position offsets, but with u, v components.
First, you add it in JavaScript:
var uvOffsets = [];
var u, v;
for ( var i = 0; i < instances; i ++ ) {
//... inside the loop
u = Math.random(); // I'm assigning random, but you can do the math...
v = Math.random(); // ... to make it discrete 1/8th amounts
uvOffsets.push(u, v);
}
// Add new attribute to BufferGeometry
var uvOffsetAttribute = new THREE.InstancedBufferAttribute( new Float32Array( uvOffsets ), 2 );
geometry.addAttribute( 'uvOffset', uvOffsetAttribute );
Then, in your Vertex shader:
// [...]
attribute vec2 uv;
attribute vec2 uvOffset;
varying vec2 vUv;
void main() {
vec3 vPosition = applyQuaternionToVector( orientation, position );
// Divide uvs by 8, and add assigned offsets
vUv = (uv / 8.0) + uvOffset;
gl_Position = projectionMatrix * modelViewMatrix * vec4( offset + vPosition, 1.0 );
}
Finally, in your frag shader:
precision highp float;
uniform sampler2D map;
varying vec2 vUv; // <- these UVs have been transformed by vertex shader.
void main() {
gl_FragColor = texture2D( map, vUv ); // <- Transformation is applied to texture
}

Related

Decompose a GLSL mat4 to original RTS values within vertex shader to calculate a View UV Offset

I need to get the rotation difference between the model and the camera, convert the values to radians/degrees, and pass it to the fragment shader.
For that I need to decompose the model rotation matrix, and maybe the camera view matrix as well. I cannot find a decomposition mechanism suitable for use within a shader.
The rotation details go to the fragment shader to calculate a UV offset:
original_rotation + viewing_angles gives a final sprite-like offset into the following texture, shown as billboards.
Ultimately the UV should offset downwards when looking from below (e.g. H3 to A3), upwards when looking from above (e.g. A3 to H3), and left to right and vice versa when looking from the sides (e.g. D1 to D8 and vice versa).
const vertex_shader = `
precision highp float;
uniform mat4 modelViewMatrix;
uniform mat4 projectionMatrix;
attribute vec3 position;
attribute vec2 uv;
attribute mat4 instanceMatrix;
attribute float index;
attribute float texture_index;
uniform vec2 rows_cols;
uniform vec3 camera_location;
varying float vTexIndex;
varying vec2 vUv;
varying vec4 transformed_normal;
float normal_to_orbit(vec3 rotation_vector, vec3 view_vector){
rotation_vector = normalize(rotation_vector);
view_vector = normalize(view_vector);
vec3 x_direction = vec3(1.0,0,0);
vec3 y_direction = vec3(0,1.0,0);
vec3 z_direction = vec3(0,0,1.0);
float rotation_x_length = dot(rotation_vector, x_direction);
float rotation_y_length = dot(rotation_vector, y_direction);
float rotation_z_length = dot(rotation_vector, z_direction);
float view_x_length = dot(view_vector, x_direction);
float view_y_length = dot(view_vector, y_direction);
float view_z_length = dot(view_vector, z_direction);
//TOP
float top_rotation = degrees(atan(rotation_x_length, rotation_z_length));
float top_view = degrees(atan(view_x_length, view_z_length));
float top_final = top_view-top_rotation;
float top_idx = floor(top_final/(360.0/rows_cols.x));
//FRONT
float front_rotation = degrees(atan(rotation_x_length, rotation_z_length));
float front_view = degrees(atan(view_x_length, view_z_length));
float front_final = front_view-front_rotation;
float front_idx = floor(front_final/(360.0/rows_cols.y));
return abs((front_idx*rows_cols.x)+top_idx);
}
vec3 extractEulerAngleXYZ(mat4 mat) {
vec3 rotangles = vec3(0,0,0);
rotangles.x = atan(mat[2].z, -mat[1].z);
float cosYangle = sqrt(pow(mat[0].x, 2.0) + pow(mat[0].y, 2.0));
rotangles.y = atan(cosYangle, mat[0].z);
float sinXangle = sin(rotangles.x);
float cosXangle = cos(rotangles.x);
rotangles.z = atan(cosXangle * mat[1].y + sinXangle * mat[2].y, cosXangle * mat[1].x + sinXangle * mat[2].x);
return rotangles;
}
float view_index(vec3 position, mat4 mv_matrix, mat4 rot_matrix){
vec4 posInView = mv_matrix * vec4(0.0, 0.0, 0.0, 1.0);
// posInView /= posInView[3];
vec3 VinView = normalize(-posInView.xyz); // (0, 0, 0) - posInView
// vec4 NinView = normalize(rot_matrix * vec4(0.0, 0.0, 1.0, 1.0));
// float NdotV = dot(NinView, VinView);
vec4 view_normal = rot_matrix * vec4(VinView.xyz, 1.0);
float view_x_length = dot(view_normal.xyz, vec3(1.0,0,0));
float view_y_length = dot(view_normal.xyz, vec3(0,1.0,0));
float view_z_length = dot(view_normal.xyz, vec3(0,0,1.0));
// float radians = atan(-view_x_length, -view_z_length);
float radians = atan(view_x_length, view_z_length);
// float angle = radians/PI*180.0 + 180.0;
float angle = degrees(radians);
if (radians < 0.0) { angle += 360.0; }
if (0.0<=angle && angle<=360.0){
return floor(angle/(360.0/rows_cols.x));
}
return 0.0;
}
void main(){
vec4 original_normal = vec4(0.0, 0.0, 1.0, 1.0);
// transformed_normal = modelViewMatrix * instanceMatrix * original_normal;
vec3 rotangles = extractEulerAngleXYZ(modelViewMatrix * instanceMatrix);
// transformed_normal = vec4(rotangles.xyz, 1.0);
transformed_normal = vec4(camera_location.xyz, 1.0);
vec4 v = (modelViewMatrix* instanceMatrix* vec4(0.0, 0.0, 0.0, 1.0)) + vec4(position.x, position.y, 0.0, 0.0) * vec4(1.0, 1.0, 1.0, 1.0);
vec4 model_center = (modelViewMatrix* instanceMatrix* vec4(0.0, 0.0, 0.0, 1.0));
vec4 model_normal = (modelViewMatrix* instanceMatrix* vec4(0.0, 0.0, 1.0, 1.0));
vec4 cam_loc = vec4(camera_location.xyz, 1.0);
vec4 view_vector = normalize((cam_loc-model_center));
//float findex = normal_to_orbit(model_normal.xyz, view_vector.xyz);
float findex = view_index(position, base_matrix, combined_rot);
vTexIndex = texture_index;
vUv = vec2(mod(findex,rows_cols.x)/rows_cols.x, floor(findex/rows_cols.x)/rows_cols.y) + (uv / rows_cols);
//vUv = vec2(mod(index,rows_cols.x)/rows_cols.x, floor(index/rows_cols.x)/rows_cols.y) + (uv / rows_cols);
gl_Position = projectionMatrix * v;
// gl_Position = projectionMatrix * modelViewMatrix * instanceMatrix * vec4(position, 1.0);
}
`
const fragment_shader = (texture_count) => {
var fragShader = `
precision highp float;
uniform sampler2D textures[${texture_count}];
varying float vTexIndex;
varying vec2 vUv;
varying vec4 transformed_normal;
void main() {
vec4 finalColor;
`;
for (var i = 0; i < texture_count; i++) {
if (i == 0) {
fragShader += `if (vTexIndex < ${i}.5) {
finalColor = texture2D(textures[${i}], vUv);
}
`
} else {
fragShader += `else if (vTexIndex < ${i}.5) {
finalColor = texture2D(textures[${i}], vUv);
}
`
}
}
//fragShader += `gl_FragColor = finalColor * transformed_normal; }`;
fragShader += `gl_FragColor = finalColor; }`;
// fragShader += `gl_FragColor = startColor * finalColor; }`;
// int index = int(v_TexIndex+0.5); //https://stackoverflow.com/questions/60896915/texture-slot-not-getting-picked-properly-in-shader-issue
//console.log('frag shader: ', fragShader)
return fragShader;
}
function reset_instance_positions() {
const dummy = new THREE.Object3D();
const offset = 500*4
for (var i = 0; i < max_instances; i++) {
dummy.position.set(offset-(Math.floor(i % 8)*500), offset-(Math.floor(i / 8)*500), 0);
dummy.updateMatrix();
mesh.setMatrixAt(i, dummy.matrix);
}
mesh.instanceMatrix.needsUpdate = true;
}
function setup_geometry() {
const geometry = new THREE.InstancedBufferGeometry().copy(new THREE.PlaneBufferGeometry(400, 400));
const index = new Float32Array(max_instances * 1); // index
for (let i = 0; i < max_instances; i++) {
index[i] = (i % max_instances) * 1.0 /* index[i] = 0.0 */
}
geometry.setAttribute("index", new THREE.InstancedBufferAttribute(index, 1));
const texture_index = new Float32Array(max_instances * 1); // texture_index
const max_maps = 1
for (let i = 0; i < max_instances; i++) {
texture_index[i] = (Math.floor(i / max_instances) % max_maps) * 1.0 /* index[i] = 0.0 */
}
geometry.setAttribute("texture_index", new THREE.InstancedBufferAttribute(texture_index, 1));
const textures = [texture]
const grid_xy = new THREE.Vector2(8, 8)
mesh = new THREE.InstancedMesh(geometry,
new THREE.RawShaderMaterial({
uniforms: {
textures: {
type: 'tv',
value: textures
},
rows_cols: {
value: new THREE.Vector2(grid_xy.x * 1.0, grid_xy.y * 1.0)
},
camera_location: {
value: camera.position
}
},
vertexShader: vertex_shader,
fragmentShader: fragment_shader(textures.length),
side: THREE.DoubleSide,
// transparent: true,
}), max_instances);
scene.add(mesh);
reset_instance_positions()
}
var camera, scene, mesh, renderer;
const max_instances = 64
function init() {
camera = new THREE.PerspectiveCamera(60, window.innerWidth / window.innerHeight,1, 10000 );
camera.position.z = 1024;
scene = new THREE.Scene();
scene.background = new THREE.Color(0xffffff);
setup_geometry()
var canvas = document.createElement('canvas');
var context = canvas.getContext('webgl2');
renderer = new THREE.WebGLRenderer({
canvas: canvas,
context: context
});
renderer.setPixelRatio(window.devicePixelRatio);
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);
window.addEventListener('resize', onWindowResize, false);
var controls = new THREE.OrbitControls(camera, renderer.domElement);
}
function onWindowResize() {
camera.aspect = window.innerWidth / window.innerHeight;
camera.updateProjectionMatrix();
renderer.setSize(window.innerWidth, window.innerHeight);
}
function animate() {
requestAnimationFrame(animate);
renderer.render(scene, camera);
}
var dataurl = "https://i.stack.imgur.com/accaU.png"
var texture;
var imageElement = document.createElement('img');
imageElement.onload = function(e) {
texture = new THREE.Texture(this);
texture.needsUpdate = true;
init();
animate();
};
imageElement.src = dataurl;
JSFiddle of work so far
So you've got a 4x4 transform matrix M applied to an xy-plane quad, and you want to map its 4 corners (p0, p1, p2, p3) to your texture in a "repeat"-like manner (crossing the border on the left/right/top/bottom wraps around to the right/left/bottom/top), based on the direction of the Z axis of the matrix.
You face 2 problems...
M's rotation has 3 DOF, but you want just 2 DOF (yaw, pitch), so if roll is present the result may be questionable
if the texture crosses borders you need to handle this in GLSL to avoid seams
so either do this in a geometry shader and subdivide the quad further if needed, or use an enlarged texture that contains the needed overlaps ...
Now, if I did not miss something, the conversion is like this:
const float pi = 3.1415926535897932384626433832795;
vec3 d = normalize(M[2].xyz); // Z axis of M
vec2 dd = normalize(d.xy);
float u = atan(dd.y, dd.x); // GLSL's two-argument atan(y, x) is the atan2 equivalent
float v = acos(d.z);
u = (u + pi) / (2.0 * pi);
v = v / pi;
The Z axis extraction is just a simple copy of the 3rd column/row (depending on your notation) of your matrix M, or transforming (0,0,1,0) by it. For more info see:
Understanding 4x4 homogenous transform matrices
In case of overlapped texture you need to add also this:
const float ov = 1.0/8.0; // overlap size
u = ov + (u/(ov+ov+1.0));
v = ov + (v/(ov+ov+1.0));
And the texture would look like:
In case your quads cover more than 1/8 of your original texture you need to enlarge the overlap ...
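The direction-to-UV conversion above can be checked numerically on the CPU. A sketch in plain JavaScript (a hypothetical `dirToUV` helper reproducing the same formulas, including the overlap remap):

```javascript
// Map a direction vector to (u, v) in [0, 1]: u from the azimuth of the
// xy part, v from the polar angle against z; then remap into the interior
// of a texture that has `overlap`-sized borders (the answer uses 1/8).
function dirToUV(x, y, z, overlap) {
  const len = Math.hypot(x, y, z);
  const d = [x / len, y / len, z / len];
  const ddLen = Math.hypot(d[0], d[1]); // degenerate when d points straight along z
  let u = Math.atan2(d[1] / ddLen, d[0] / ddLen); // [-pi, pi]
  let v = Math.acos(d[2]);                        // [0, pi]
  u = (u + Math.PI) / (2.0 * Math.PI);
  v = v / Math.PI;
  // same remap as the answer: squeeze [0, 1] into the non-overlapped region
  u = overlap + u / (overlap + overlap + 1.0);
  v = overlap + v / (overlap + overlap + 1.0);
  return [u, v];
}
```

With overlap = 0 this is a plain equirectangular mapping: the +x direction lands at (0.5, 0.5), the centre of the texture.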
Now, to handle the corners of the quad instead of just the axis, you could translate the quad by distance l in the Z+ direction in mesh-local coordinates, apply M to them, and use those 4 points as directions to compute u, v in the vertex shader. The l controls how much of the texture area is used for the quad... This approach might even handle roll, but I have not tested any of this yet...
After implementing it, my fears were well grounded: any 2 Euler angles affect each other, so the result is OK in most directions, but in edge cases things get mirrored and/or jump in one or both axes, probably due to the area-coverage difference between 3 DOF and 2 DOF (unless I made a bug in my code, or the math was not computed correctly in the vertex shader, which has happened to me before due to a driver bug).
If you are going for azimuth/elevation, that should be fine, as it's 2 DOF too; the equations above should work for it, plus or minus some range conversion if needed.

How to add opacity map to ShaderMaterial

I've applied a ShaderMaterial to a glb model that has an opacity map (the model is a human body, and the opacity map is used to create hair and eyelashes); the reference for the model material was this -
So as you can see, the material is some sort of glow effect. I managed to find This Example, which is pretty much what I need. The problem is that I can't figure out how to apply the model's opacity map: if you look closely at the difference between my result (left picture) and the right picture, you'll see that the hair doesn't look as it should, since the opacity map is not applied... I wonder whether ShaderMaterial is the right choice for this look, or whether I should use another kind of shader.
Here is my material code:
let m = new THREE.MeshStandardMaterial({
roughness: 0.25,
metalness: 0.75,
opacity: 0.3,
map: new THREE.TextureLoader().load(
"/maps/opacity.jpg",
(tex) => {
tex.wrapS = THREE.RepeatWrapping;
tex.wrapT = THREE.RepeatWrapping;
tex.repeat.set(16, 1);
}
),
onBeforeCompile: (shader) => {
shader.uniforms.s = uniforms.s;
shader.uniforms.b = uniforms.b;
shader.uniforms.p = uniforms.p;
shader.uniforms.glowColor = uniforms.glowColor;
shader.vertexShader = document.getElementById("vertexShader").textContent;
shader.fragmentShader = document.getElementById(
"fragmentShader"
).textContent;
shader.side = THREE.FrontSide;
shader.transparent = true;
// shader.uniforms['alphaMap'].value.needsUpdate = true;
console.log(shader.vertexShader);
console.log(shader.fragmentShader);
},
});
Shader setting:
<script id="vertexShader" type="x-shader/x-vertex">
varying vec3 vNormal;
varying vec3 vPositionNormal;
void main()
{
vNormal = normalize( normalMatrix * normal ); // transform to view space
vPositionNormal = normalize(( modelViewMatrix * vec4(position, 1.0) ).xyz);
gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );
}
</script>
<!-- fragment shader a.k.a. pixel shader -->
<script id="fragmentShader" type="x-shader/x-vertex">
uniform vec3 glowColor;
uniform float b;
uniform float p;
uniform float s;
varying vec3 vNormal;
varying vec3 vPositionNormal;
void main()
{
float a = pow( b + s * abs(dot(vNormal, vPositionNormal)), p );
gl_FragColor = vec4( mix(vec3(0), glowColor, a), 1. );
}
</script>
You're creating a MeshStandardMaterial, but then you're overriding all its shader code when you assign new vertex and fragment shaders, making the Standard material useless. You should stick to ShaderMaterial like the demo you linked. It would make your code cleaner:
// Get shader code
let vertShader = document.getElementById("vertexShader").textContent;
let fragShader = document.getElementById("fragmentShader").textContent;
// Build texture
let alphaTex = new THREE.TextureLoader().load("/maps/opacity.jpg");
alphaTex.wrapS = THREE.RepeatWrapping;
alphaTex.wrapT = THREE.RepeatWrapping;
// alphaTex.repeat.set(16, 1); <- repeat won't work in a custom shader
// Build material
let m = new THREE.ShaderMaterial({
transparent: true,
// side: THREE.FrontSide, <- this is already default. Not needed
uniforms: {
s: {value: 1},
b: {value: 2},
p: {value: 3},
alphaMap: {value: alphaTex},
glowColor: {value: new THREE.Color(0x0099ff)},
// we create a Vec2 to manually handle repeat
repeat: {value: new THREE.Vector2(16, 1)}
},
vertexShader: vertShader,
fragmentShader: fragShader
});
This helps you build your material in a cleaner way, since you're using ShaderMaterial's native options without having to override anything. Then, you can sample the alphaMap texture in your fragment shader:
uniform float s;
uniform float b;
uniform float p;
uniform vec3 glowColor;
uniform vec2 repeat;
// Declare the alphaMap uniform if we're gonna use it
uniform sampler2D alphaMap;
// Don't forget to declare UV coordinates
varying vec2 vUv;
varying vec3 vNormal;
varying vec3 vPositionNormal;
void main()
{
float a = pow( b + s * abs(dot(vNormal, vPositionNormal)), p );
// Sample map with UV coordinates. Multiply by uniform to get repeat
float a2 = texture2D(alphaMap, vUv * repeat).r;
// Combine both alphas
float opacity = a * a2;
gl_FragColor = vec4( mix(vec3(0), glowColor, opacity), 1. );
}
Also, don't forget to carry over the UVs from your vertex shader:
// Don't forget to declare UV coordinates
varying vec2 vUv;
varying vec3 vNormal;
varying vec3 vPositionNormal;
void main()
{
// convert uv attribute to vUv varying
vUv = uv;
vNormal = normalize( normalMatrix * normal ); // transform to view space
vPositionNormal = normalize(( modelViewMatrix * vec4(position, 1.0) ).xyz);
gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );
}
Update
The error
'=' : cannot convert from 'lowp 4-component vector of float' to 'highp float'
means I made a mistake when taking the texture2D() sample in the fragment shader. It should have been texture2D().r so we only read the red channel to get a float instead of cramming all RGBA channels (yielding a vec4) into a float. See the following snippet for the final result:
var container, scene, camera, renderer, controls, torusKnot;
init()
function init() {
initBase()
initObject()
render()
}
function initBase () {
container = document.getElementById( 'ThreeJS' )
// SCENE
scene = new THREE.Scene();
// CAMERA
var SCREEN_WIDTH = window.innerWidth, SCREEN_HEIGHT = window.innerHeight
var VIEW_ANGLE = 45, ASPECT = SCREEN_WIDTH / SCREEN_HEIGHT, NEAR = 0.1, FAR = 20000
camera = new THREE.PerspectiveCamera( VIEW_ANGLE, ASPECT, NEAR, FAR)
camera.position.set(0,0,50)
camera.lookAt(scene.position)
// RENDERER
renderer = new THREE.WebGLRenderer( {antialias:true} )
renderer.setSize(SCREEN_WIDTH, SCREEN_HEIGHT)
renderer.setClearColor(0x333333)
container.appendChild( renderer.domElement )
// CONTROLS
controls = new THREE.OrbitControls( camera, renderer.domElement )
// Resize
window.addEventListener("resize", onWindowResize);
}
function onWindowResize() {
var w = window.innerWidth;
var h = window.innerHeight;
renderer.setSize(w, h);
camera.aspect = w / h;
camera.updateProjectionMatrix();
}
function initObject () {
let vertShader = document.getElementById("vertexShader").textContent;
let fragShader = document.getElementById("fragmentShader").textContent;
// Build texture
let alphaTex = new THREE.TextureLoader().load("https://threejs.org/examples/textures/floors/FloorsCheckerboard_S_Diffuse.jpg");
alphaTex.wrapS = THREE.RepeatWrapping;
alphaTex.wrapT = THREE.RepeatWrapping;
var customMaterial = new THREE.ShaderMaterial({
uniforms: {
s: {value: -1},
b: {value: 1},
p: {value: 2},
alphaMap: {value: alphaTex},
glowColor: {value: new THREE.Color(0x00ffff)},
// we create a Vec2 to manually handle repeat
repeat: {value: new THREE.Vector2(16, 1)}
},
vertexShader: vertShader,
fragmentShader: fragShader
})
var geometry = new THREE.TorusKnotBufferGeometry( 10, 3, 100, 32 )
torusKnot = new THREE.Mesh( geometry, customMaterial )
scene.add( torusKnot )
}
function render() {
torusKnot.rotation.y += 0.01;
renderer.render( scene, camera );
requestAnimationFrame(render);
}
body{
overflow: hidden;
margin: 0;
}
<script src="https://threejs.org/build/three.js"></script>
<script src="https://threejs.org/examples/js/controls/OrbitControls.js"></script>
<!-- vertex shader -->
<script id="vertexShader" type="x-shader/x-vertex">
varying vec2 vUv;
varying vec3 vNormal;
varying vec3 vPositionNormal;
void main()
{
// convert uv attribute to vUv varying
vUv = uv;
vNormal = normalize( normalMatrix * normal ); // 转换到视图空间
vec4 mvPosition = modelViewMatrix * vec4( position, 1.0 );
vPositionNormal = normalize(( mvPosition ).xyz);
gl_Position = projectionMatrix * mvPosition;
}
</script>
<!-- fragment shader a.k.a. pixel shader -->
<script id="fragmentShader" type="x-shader/x-vertex">
uniform float s;
uniform float b;
uniform float p;
uniform vec3 glowColor;
uniform vec2 repeat;
// Declare the alphaMap uniform if we're gonna use it
uniform sampler2D alphaMap;
// Don't forget to declare UV coordinates
varying vec2 vUv;
varying vec3 vNormal;
varying vec3 vPositionNormal;
void main()
{
float a = pow( b + s * abs(dot(vNormal, vPositionNormal)), p );
// Sample map with UV coordinates. Multiply by uniform to get repeat
float a2 = texture2D(alphaMap, vUv * repeat).r;
// Combine both alphas
float opacity = a * a2;
gl_FragColor = vec4( mix(vec3(0), glowColor, opacity), 1. );
}
</script>
<div id="ThreeJS" style="position: absolute; left:0px; top:0px"></div>

Storing data as a texture for use in Vertex Shader for Instanced Geometry (THREE JS / GLSL)

I'm using a THREE.InstancedBufferGeometry, and I wish to access data in the Vertex Shader, encoded into a Texture.
What I want to do, is create a Data Texture with one pixel per instance, which will store position data for each instance (then at a later stage, I can update the texture using a simulation with a flow field to animate the positions).
I'm struggling to access the data from the texture in the Vertex Shader.
const INSTANCES_COUNT = 5000;
// FOR EVERY INSTANCE, GIVE IT A RANDOM X, Y, Z OFFSET, AND SAVE IT IN DATA TEXTURE
const data = new Uint8Array(4 * INSTANCES_COUNT);
for (let i = 0; i < INSTANCES_COUNT; i++) {
const stride = i * 4;
data[stride] = (Math.random() - 0.5);
data[stride + 1] = (Math.random() - 0.5);
data[stride + 2] = (Math.random() - 0.5);
data[stride + 3] = 0.0;
}
const offsetTexture = new THREE.DataTexture( data, INSTANCES_COUNT, 1, THREE.RGBAFormat, THREE.FloatType );
offsetTexture.minFilter = THREE.NearestFilter;
offsetTexture.magFilter = THREE.NearestFilter;
offsetTexture.generateMipmaps = false;
offsetTexture.needsUpdate = true;
// CREATE MY INSTANCED GEOMETRY
const geometry = new THREE.InstancedBufferGeometry();
geometry.maxInstancedCount = INSTANCES_COUNT;
geometry.addAttribute( 'position', new THREE.Float32BufferAttribute([5, -5, 0, -5, 5, 0, 0, 0, 5], 3 )); // SIMPLE TRIANGLE
const vertexShader = `
precision highp float;
uniform vec3 color;
uniform sampler2D offsetTexture;
uniform mat4 modelViewMatrix;
uniform mat4 projectionMatrix;
attribute vec3 position;
varying vec3 vPosition;
varying vec3 vColor;
void main(){
vPosition = position;
vec4 orientation = vec4(.0, .0, .0, .0);
vec3 vcV = cross( orientation.xyz, vPosition );
vPosition = vcV * ( 2.0 * orientation.w ) + ( cross( orientation.xyz, vcV ) * 2.0 + vPosition );
vec2 uv = position.xy;
vec4 data = texture2D( offsetTexture, uv );
vec3 particlePosition = data.xyz * 1000.0;
gl_Position = projectionMatrix * modelViewMatrix * vec4( vPosition + particlePosition, 1.0 );
}
`;
const fragmentShader = `
precision highp float;
varying vec3 vColor;
void main() {
gl_FragColor = vec4(vColor, 1.0);
}
`;
const uniforms = {
size: { value: 1.0 },
color: {
type: 'c',
value: new THREE.Color(0x3db230),
},
offsetTexture: {
type: 't',
value: offsetTexture,
},
};
// CREATE MY MATERIAL
const material = new THREE.RawShaderMaterial({
uniforms,
vertexShader,
fragmentShader,
side: THREE.DoubleSide,
transparent: false,
});
scene.add(new THREE.Mesh(geometry, material));
At the moment it seems that the data from the image isn't accessible in the vertex shader (if I just set the uv to vec2(1.0, 0.0), for example, and change the offset positions, nothing changes), and I'm also not sure how to make each instance reference the correct texel in the texture.
So, my two issues are:
1) How to correctly set up the data texture, and access that data in the vertex shader
2) How to correctly reference the texel storing the data for each particular instance (e.g. instance 1000 should use vec2(1000, 1), etc.)
Also, do I have to normalize the data (0.0-1.0, 0-255, or -1 to +1)?
Thanks
You need to compute some kind of an index into the texture per instance.
Meaning, you need an attribute that is going to be shared by each instance.
if your triangle is
[a,b,c]
your index should be
[0,0,0]
Let's say you have 1024 instances and a 1024x1 px texture:
attribute float aIndex;
vec2 myIndex = vec2( (aIndex + 0.5) / 1024., 0.5 );
vec4 myRes = texture2D( mySampler, myIndex );
(The 0.5 offsets land the sample on the centre of each texel.)
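On the JavaScript side, the shared per-instance index is just a 1-component InstancedBufferAttribute filled with 0..N-1, and the texel-centre lookup can be sanity-checked without a GPU. A sketch (hypothetical names, assuming a 1024x1 texture):

```javascript
// One float per instance, 0..N-1, shared by every vertex of that instance.
// In three.js this would become:
//   geometry.addAttribute('aIndex', new THREE.InstancedBufferAttribute(indices, 1));
const INSTANCE_COUNT = 1024;
const indices = new Float32Array(INSTANCE_COUNT);
for (let i = 0; i < INSTANCE_COUNT; i++) indices[i] = i;

// Mirrors the shader's (aIndex + 0.5) / 1024.: the +0.5 lands the sample
// on the texel centre, so NearestFilter fetches exactly texel `index`.
function texelCenterU(index, width) {
  return (index + 0.5) / width;
}
```

Every computed coordinate stays strictly inside (0, 1), so no instance ever samples a neighbouring texel through edge rounding.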

Per instance UV texture mapping in Three.js InstancedBufferGeometry

I have a InstancedBufferGeometry made up of a single Plane:
const plane = new THREE.PlaneBufferGeometry(100, 100, 1, 1);
const geometry = new THREE.InstancedBufferGeometry();
geometry.maxInstancedCount = 100;
geometry.attributes.position = plane.attributes.position;
geometry.index = plane.index;
geometry.attributes.uv = plane.attributes.uv;
geometry.addAttribute( 'offset', new THREE.InstancedBufferAttribute( new Float32Array( offsets ), 3 ) ); // an offset position
I am applying a texture to each plane, which is working as expected, however I wish to apply a different region of the texture to each instance, and I'm not sure about the correct approach.
At the moment I have tried to build up uv's per instance, based on the structure of the uv's for a single plane:
let uvs = [];
for (let i = 0; i < 100; i++) {
const tl = [0, 1];
const tr = [1, 1];
const bl = [0, 0];
const br = [1, 0];
uvs = uvs.concat(tl, tr, bl, br);
}
...
geometry.addAttribute( 'uv', new THREE.InstancedBufferAttribute( new Float32Array( uvs ), 2) );
When I do this, I don't have any errors, but every instance is just a single colour (all instances are the same colour). I have tried changing the instance size, and also the meshes-per-attribute value (which I don't fully understand; I'm struggling to find a good explanation in the docs).
I feel like I'm close, but I'm missing something, so a point in the right direction would be fantastic!
(For reference, here are my shaders):
const vertexShader = `
precision mediump float;
uniform vec3 color;
uniform sampler2D tPositions;
uniform mat4 modelViewMatrix;
uniform mat4 projectionMatrix;
attribute vec2 uv;
attribute vec2 dataUv;
attribute vec3 position;
attribute vec3 offset;
attribute vec3 particlePosition;
attribute vec4 orientationStart;
attribute vec4 orientationEnd;
varying vec3 vPosition;
varying vec3 vColor;
varying vec2 vUv;
void main(){
vPosition = position;
vec4 orientation = normalize( orientationStart );
vec3 vcV = cross( orientation.xyz, vPosition );
vPosition = vcV * ( 2.0 * orientation.w ) + ( cross( orientation.xyz, vcV ) * 2.0 + vPosition );
vec4 data = texture2D( tPositions, vec2(dataUv.x, 0.0));
vec3 particlePosition = (data.xyz - 0.5) * 1000.0;
vUv = uv;
vColor = data.xyz;
gl_Position = projectionMatrix * modelViewMatrix * vec4( position + particlePosition + offset, 1.0 );
}
`;
const fragmentShader = `
precision mediump float;
uniform sampler2D map;
varying vec3 vPosition;
varying vec3 vColor;
varying vec2 vUv;
void main() {
vec3 color = texture2D(map, vUv).xyz;
gl_FragColor = vec4(color, 1.0);
}
`;
As all my instances need to take the same size rectangular area, but offset (like a sprite sheet), I have added a UV offset and UV scale attribute to each instance, and use this to define which area of the map to use:
const uvOffsets = [];
for (let i = 0; i < INSTANCES; i++) {
const u = i % textureWidthHeight;
const v = ~~ (i / textureWidthHeight);
uvOffsets.push(u, v);
}
...
geometry.attributes.uv = plane.attributes.uv;
geometry.addAttribute( 'uvOffsets', new THREE.InstancedBufferAttribute( new Float32Array( uvOffsets ), 2 ) );
uniforms: {
...
uUvScale: { value: 1 / textureWidthHeight }
}
And in the fragment shader:
void main() {
vec4 color = texture2D(map, (vUv * uUvScale) + (vUvOffsets * uUvScale));
gl_FragColor = vec4(1.0, 1.0, 1.0, color.a);
}
\o/
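The cell math in this answer can be verified on the CPU: `(vUv * uUvScale) + (vUvOffsets * uUvScale)` is the same as scaling the plane's own uv into one atlas cell and shifting by the cell index. A sketch in plain JavaScript (hypothetical `atlasUV` helper, assuming a square n x n atlas):

```javascript
// Each instance i occupies cell (i % n, floor(i / n)) of an n x n atlas.
// Scaling the plane's uv into one cell and shifting by the cell index
// is algebraically the same as (uv + cell) * (1 / n).
function atlasUV(u, v, instance, n) {
  const cellU = instance % n;
  const cellV = Math.floor(instance / n);
  const scale = 1 / n;
  return [(u + cellU) * scale, (v + cellV) * scale];
}
```

For example, instance 5 in a 4x4 atlas sits in cell (1, 1), so its uv range covers the quarter of the texture starting at (0.25, 0.25).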

Three.JS Cannot change HDR RGBELoader Offset.y Value

I have an object which when a normal texture is applied allows me to change the offset.y value, however when using RGBELoader and a HDR file I can no longer change the offset.y.
My code is as follows:
var loader3 = new THREE.ObjectLoader();
loader3.load("model/dome/dome2.json",function ( obj ) {
obj.scale.x = 7;
obj.scale.y = 7;
obj.scale.z = 7;
obj.position.x = 0;
obj.position.z = 0;
obj.position.y = 0;
var loader = new THREE.RGBELoader();
var texture = loader.load( "maps/scene2.hdr", function( texture, textureData ){
materialHDR = new THREE.ShaderMaterial( {
uniforms: {
tDiffuse: { type: "t", value: texture },
exposure: { type: "f", value: textureData.exposure },
brightMax: { type: "f", value: textureData.gamma }
},
vertexShader: getText( 'vs-hdr' ),
fragmentShader: getText( 'fs-hdr' )
} );
texture.offset.y = 0.5; // HERE LIES THE PROBLEM
texture.flipY = true;
obj.traverse( function ( child ) {
if ( child instanceof THREE.Mesh ) {
child.material = materialHDR;
child.receiveShadow = true;
//child.material.side = THREE.BackSide;
child.material.side = THREE.DoubleSide;
}
});
scene.add( obj );
} );
});
The HDR image loads just fine and is applied to the object just as it is when I use a normal texture however I just cannot move the texture around the model at all.
I have tried wrap with repeat and all sorts of combinations but the stubborn offset will not work!
I would also like to add I am currently learning three.js (awesome!) so excuse the code above if it has any additional errors.
Thanks for any help in advance it is driving me nuts!
Shader code below
<script id="fs-hdr" type="x-shader/x-fragment">
uniform sampler2D tDiffuse;
uniform float exposure;
uniform float brightMax;
varying vec2 vUv;
vec3 decode_pnghdr( const in vec4 color ) {
vec4 rgbcolor = vec4( 0.0, 0.0, 0.0, 0.0 );
if ( color.w > 0.0 ) {
float f = pow(2.0, 127.0*(color.w-0.5));
rgbcolor.xyz = color.xyz * f;
}
return rgbcolor.xyz;
/*
// remove gamma correction
vec4 res = color * color;
// decoded RI
float ri = pow( 2.0, res.w * 32.0 - 16.0 );
// decoded HDR pixel
res.xyz = res.xyz * ri;
return res.xyz;
*/
}
void main() {
vec4 color = texture2D( tDiffuse, vUv );
color.xyz = decode_pnghdr( color );
// apply gamma correction and exposure
//gl_FragColor = vec4( pow( exposure * color.xyz, vec3( 0.474 ) ), 1.0 );
// Perform tone-mapping
float Y = dot(vec4(0.30, 0.59, 0.11, 0.0), color);
float YD = exposure * (exposure/brightMax + 1.0) / (exposure + 1.0);
color *= YD;
gl_FragColor = vec4( color.xyz, 1.0 );
}
</script>
<!-- HDR vertex shader -->
<script id="vs-hdr" type="x-shader/x-vertex">
varying vec2 vUv;
void main() {
vUv = uv;
gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 3 );
}
</script>
Your custom shader must handle the offset/repeat.
Add an offsetRepeat to your uniforms:
offsetRepeat: { type: "v4", value: new THREE.Vector4( 0, 0.5, 1, 1 ) }
And modify your vertex shader
uniform vec4 offsetRepeat;
varying vec2 vUv;
void main() {
vUv = uv * offsetRepeat.zw + offsetRepeat.xy;
gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );
}
Your texture must be power-of-two, and set
texture.wrapS = texture.wrapT = THREE.RepeatWrapping;
three.js r.75
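The offsetRepeat convention used here matches what three.js applies internally for `texture.offset`/`texture.repeat`: `uv * offsetRepeat.zw + offsetRepeat.xy`. A sketch of the same transform on the CPU (hypothetical `transformUV` helper):

```javascript
// offsetRepeat packs [offsetX, offsetY, repeatX, repeatY], matching the
// uniform in the answer: vUv = uv * offsetRepeat.zw + offsetRepeat.xy
function transformUV(u, v, offsetRepeat) {
  const [ox, oy, rx, ry] = offsetRepeat;
  return [u * rx + ox, v * ry + oy];
}

// With the answer's value (offset.y = 0.5, no repeat), every v coordinate
// is shifted up by half the texture, which is the asker's missing offset.y.
const [u, v] = transformUV(0.5, 0.25, [0, 0.5, 1, 1]);
```

Updating the `offsetRepeat` uniform's Vector4 at runtime then moves the texture around the model, exactly as `texture.offset` would on a built-in material.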
