I'm using a ShaderMaterial to generate noise into a render texture.
Then I'm using the resulting texture as a displacementMap on a plane with a Phong shading material.
What happens is that the displacement map goes from [0, displacementScale].
Meaning that my noise texture is interpreted per pixel as: 0x000000 pixels mean 0 vertex offset, and 0xffffff pixels mean the displacementScale value...
I thought the displacementBias would allow me to map the range, but it kind of just offsets the "average".
I would like to know how I could map the range of the displacementMap so 0x000000 could mean e.g. -100 and 0xffffff could mean 100...
In the source file displacementmap_vertex.glsl, you will find the displacement formula:
transformed += normal * ( texture2D( displacementMap, uv ).x * displacementScale + displacementBias );
So in your case, set displacementBias = -100 and displacementScale = 200.
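For example, a minimal sketch of that setup (noiseTexture stands in for your render-target noise texture):
// black (0x000000): 0 * 200 + (-100) = -100
// white (0xffffff): 1 * 200 + (-100) = +100
var material = new THREE.MeshPhongMaterial({
    displacementMap: noiseTexture, // the noise render texture from the question
    displacementScale: 200,        // total range: 100 - (-100)
    displacementBias: -100         // value mapped to black pixels
});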
three.js r.73
In Three.js, how can I change the way in which a texture gets mapped onto a plane?
Let's assume we have a 1x1 plane and a 16:9 image. How can I control the way in which that image gets mapped onto the plane?
By default, the image gets "squished". I would like it to maintain its aspect ratio and have any overlap get "cut off". Is there a way to configure the material or texture to do this, or would I use a shader? If so, what would it need to look like?
const planeMesh = new THREE.Mesh(
    new THREE.PlaneBufferGeometry(1, 1),
    new THREE.MeshBasicMaterial({
        map: texture,
    })
);
PS: In the future, I would also like to be able to zoom into and out of the image on mouse hover without affecting the size of the plane, so I'm thinking a shader might be better?
A Texture already has several properties built-in that can do what you're looking for.
const texture = textureLoader.load("whatever.png");
const texture = textureLoader.load("whatever.png");
const planeMesh = new THREE.Mesh(
    new THREE.PlaneBufferGeometry(1, 1),
    new THREE.MeshBasicMaterial({
        map: texture,
    })
);
// Sets the pivot point to the center of the texture
texture.center.set(0.5, 0.5);
// Make the texture repeat 0.5625 times in the x-axis to match 16:9 ratio
let ratio = 9 / 16;
texture.repeat.set(ratio, 1);
// Scale texture up to "zoom" into it
let zoom = 0.5;
texture.repeat.set(ratio * zoom, 1 * zoom);
You can read more about the .repeat, .center, and even .rotation properties in the Texture docs. Just keep in mind that repeating a texture is a bit counter-intuitive, because you're doing the inverse of scaling it: to scale a texture by 2, you have to tell it to repeat 1/2 times.
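For the hover zoom mentioned in the question, a minimal sketch along the same lines (the zoomFactor value and the event wiring are illustrative assumptions, not part of the original code):
const ratio = 9 / 16;
function setZoom(zoomFactor) {
    // Smaller repeat values show a smaller region of the image,
    // i.e. they zoom in; texture.center keeps the zoom centered.
    texture.repeat.set(ratio / zoomFactor, 1 / zoomFactor);
}
// e.g. zoom to 2x while hovering, back to 1x on leave
renderer.domElement.addEventListener("mouseenter", () => setZoom(2));
renderer.domElement.addEventListener("mouseleave", () => setZoom(1));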
I'm trying to make an FBO particle system by calculating positions in a separate pass, currently using the code from this post: http://barradeau.com/blog/?p=621.
I render a sphere of particles, without any movement:
The only thing I'm adding so far is a texture lookup in the simulation fragment shader:
void main() {
    vec3 pos = texture2D( texture, vUv ).xyz;
    // THIS LINE: pos is approximately in the -200..200 range
    float map = texture2D( texture1, abs( pos.xy / 200. ) ).r;
    ...
    // save the map value in the ping-pong texture as alpha
    gl_FragColor = vec4( pos, map );
}
texture1 is half black, half white.
Then in the render vertex shader I read this map parameter:
map = texture2D( positions, position.xy ).a;
and use it in the render fragment shader to pick the color:
vec3 finalColor = mix(vec3(1.,0.,0.),vec3(0.,1.,0.),map);
gl_FragColor = vec4( finalColor, .2 );
So what I hope to see is this (made by setting the same texture in the render shaders):
But what I really see is this (by setting the texture in the simulation shaders):
The colors are mixed up; though you can mostly see more red ones where they should be, there are a lot of green particles in between.
I also tried to make my own demo with a simplified texture and the same idea, and I got this:
Also mixed up, but you can still make out the image.
Same error.
I think I am missing something obvious, but I have been struggling with this for a couple of days now and haven't been able to find the mistake myself.
I would be very grateful if someone could point me in the right direction. Thank you in advance!
Demo with error: http://cssing.org.ua/examples/fbo-error/
The full code I'm referring to: https://github.com/akella/fbo-test
You should disable texture filtering by using GL_NEAREST min/mag filters.
My guess is that THREE.TextureLoader() loads the texture with mipmaps, and the texture2D call in the vertex shader uses the lowest-resolution mipmap. In vertex shaders you should use texture2DLod(texture, texCoord, 0.0) - note the 3rd param, lod, which specifies mipmap level 0.
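In three.js terms, a minimal sketch of that fix (the file name is just a placeholder):
// NearestFilter corresponds to GL_NEAREST; disabling mipmap
// generation guarantees the vertex-shader lookup can't hit a
// low-resolution mip level.
var texture = new THREE.TextureLoader().load("particles-map.png");
texture.minFilter = THREE.NearestFilter;
texture.magFilter = THREE.NearestFilter;
texture.generateMipmaps = false;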
Coming from this question, I'm trying to generate UV mappings programmatically with Three.js for some models. I need this because my models are being generated programmatically too, and I need to apply a simple texture to them. I have read here and successfully generated a UV mapping for some simple 3D text, but when applying the same mapping to more complex models it just doesn't work.
The texture I'm trying to apply is something like this:
The black background is just transparent in the PNG image. I need to apply this to my models; it's just a glitter effect, so I don't care about the exact position on the model. Is there any way to create a simple UV map programmatically for these cases?
I'm using this code from the linked question, which works great for planar models but doesn't work for non-planar ones:
var assignUVs = function ( geometry ) {
    geometry.computeBoundingBox();
    var max = geometry.boundingBox.max;
    var min = geometry.boundingBox.min;
    var offset = new THREE.Vector2( 0 - min.x, 0 - min.y );
    var range = new THREE.Vector2( max.x - min.x, max.y - min.y );
    geometry.faceVertexUvs[0] = [];
    var faces = geometry.faces;
    for (var i = 0; i < faces.length; i++) {
        var v1 = geometry.vertices[faces[i].a];
        var v2 = geometry.vertices[faces[i].b];
        var v3 = geometry.vertices[faces[i].c];
        geometry.faceVertexUvs[0].push([
            new THREE.Vector2( ( v1.x + offset.x ) / range.x, ( v1.y + offset.y ) / range.y ),
            new THREE.Vector2( ( v2.x + offset.x ) / range.x, ( v2.y + offset.y ) / range.y ),
            new THREE.Vector2( ( v3.x + offset.x ) / range.x, ( v3.y + offset.y ) / range.y )
        ]);
    }
    geometry.uvsNeedUpdate = true;
};
You need to be more specific. Here, I'll apply UV mapping programmatically:
for (var i = 0; i < geometry.faces.length; i++) {
    geometry.faceVertexUvs[0].push([
        new THREE.Vector2( 0, 0 ),
        new THREE.Vector2( 0, 0 ),
        new THREE.Vector2( 0, 0 ),
    ]);
}
Happy?
There are infinite ways of applying UV coordinates. How about this:
for (var i = 0; i < geometry.faces.length; i++) {
    geometry.faceVertexUvs[0].push([
        new THREE.Vector2( Math.random(), Math.random() ),
        new THREE.Vector2( Math.random(), Math.random() ),
        new THREE.Vector2( Math.random(), Math.random() ),
    ]);
}
There's no RIGHT answer. Whatever you want to do is up to you. It's kind of like asking how to apply pencil to paper.
Sorry to be so snarky; I'm just pointing out that the question is, in one sense, nonsensical.
Anyway, there are a few common methods for applying a texture.
Spherical mapping
Imagine your model is translucent, there's a sphere of film inside it, and inside the sphere is a point light, so that it projects (like a movie projector) from the sphere in all directions. You then do the math to compute the correct UVs for that situation.
To get a point on the sphere, multiply your points by the inverse of the world matrix for the sphere, then normalize the result. After that, though, there's still the problem of how the texture itself is mapped to the imaginary sphere, for which again there are an infinite number of ways.
The simplest way is, I guess, called mercator projection, which is how most 2D maps of the world work. They have the problem that lots of space is wasted at the north and south poles. Assuming x, y, z are the normalized coordinates mentioned in the previous paragraph, then
U = Math.atan2(z, x) / Math.PI * 0.5 + 0.5;
V = 0.5 - Math.asin(y) / Math.PI;
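As a minimal sketch of spherical mapping using the legacy Geometry API from the question (sphereWorldMatrix is an assumed THREE.Matrix4 placing the imaginary sphere):
var inverse = new THREE.Matrix4().getInverse( sphereWorldMatrix );
function sphericalUV( vertex ) {
    // Move the point into the sphere's local space and normalize,
    // so it lies on the unit sphere.
    var p = vertex.clone().applyMatrix4( inverse ).normalize();
    return new THREE.Vector2(
        Math.atan2( p.z, p.x ) / Math.PI * 0.5 + 0.5,
        0.5 - Math.asin( p.y ) / Math.PI
    );
}
geometry.faceVertexUvs[0] = geometry.faces.map(function ( face ) {
    return [
        sphericalUV( geometry.vertices[ face.a ] ),
        sphericalUV( geometry.vertices[ face.b ] ),
        sphericalUV( geometry.vertices[ face.c ] )
    ];
});
geometry.uvsNeedUpdate = true;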
Projection Mapping
This is just like a movie: you have a 2D image being projected from a point. Imagine you pointed a movie projector (or a projection TV) at a chair; you compute a UV for each vertex from where it lands in the projected image.
Computing these points is exactly like computing the 2D image from 3D data that nearly all WebGL apps do. Usually they have a line in their vertex shader like this:
gl_Position = matrix * position;
Where matrix = worldViewProjection. You can then do
clipSpace = gl_Position.xy / gl_Position.w
You now have x,y values that go from -1 to +1. You then convert them to 0 to 1 for UV coords:
uv = clipSpace * 0.5 + 0.5;
Of course normally you'd compute UV coordinates at init time in JavaScript but the concept is the same.
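A minimal sketch of doing that at init time (projectorCamera is an assumed THREE.PerspectiveCamera standing in for the projector, with its matrices up to date):
// worldViewProjection from the projector's point of view
var viewProjection = new THREE.Matrix4().multiplyMatrices(
    projectorCamera.projectionMatrix,
    projectorCamera.matrixWorldInverse
);
function projectedUV( vertex ) {
    // the equivalent of gl_Position, then the perspective divide
    var p = new THREE.Vector4( vertex.x, vertex.y, vertex.z, 1.0 )
        .applyMatrix4( viewProjection );
    // clip space (-1..+1) to UV space (0..1)
    return new THREE.Vector2(
        ( p.x / p.w ) * 0.5 + 0.5,
        ( p.y / p.w ) * 0.5 + 0.5
    );
}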
Planar Mapping
This is almost the same as projection mapping, except imagine the projector, instead of being a point, is the same size as the area you want to project onto. In other words, with projection mapping, as you move your model closer to the projector the picture being projected will get smaller, but with planar mapping it won't.
Following the projection mapping example, the only difference here is using an orthographic projection instead of a perspective projection. (The assignUVs function from the question is effectively planar mapping along the Z axis.)
Cube Mapping?
This is effectively planar mapping from 6 directions. It's up to you to decide which UV coordinates get which of the 6 planes. I'd guess most of the time you'd take the normal of the triangle to see which plane it most faces, then do planar mapping from that plane.
Actually I might be getting my terms mixed up. You can also do real cube mapping where you have a cube texture, but that requires U,V,W instead of just U,V. For that it's the same as the sphere example, except you just use the normalized coordinates directly as U,V,W.
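A minimal sketch of the "pick a plane from the face normal" idea (it assumes geometry.computeFaceNormals() has been called, and deliberately ignores scaling/offsetting the resulting UVs):
function faceUV( face, vertex ) {
    var ax = Math.abs( face.normal.x ),
        ay = Math.abs( face.normal.y ),
        az = Math.abs( face.normal.z );
    // planar-map from whichever axis the face most directly faces
    if ( ax >= ay && ax >= az ) return new THREE.Vector2( vertex.z, vertex.y );
    if ( ay >= ax && ay >= az ) return new THREE.Vector2( vertex.x, vertex.z );
    return new THREE.Vector2( vertex.x, vertex.y );
}
geometry.faceVertexUvs[0] = geometry.faces.map(function ( face ) {
    return [
        faceUV( face, geometry.vertices[ face.a ] ),
        faceUV( face, geometry.vertices[ face.b ] ),
        faceUV( face, geometry.vertices[ face.c ] )
    ];
});
geometry.uvsNeedUpdate = true;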
Cylindrical mapping
This is like sphere mapping, except assume there's a tiny cylinder projecting onto your model. Unlike a sphere, a cylinder has an orientation, but basically you move the points of the model into the orientation of the cylinder. Then, assuming x, y, z are now relative to the cylinder (in other words, you multiplied them by the inverse of the matrix that represents the orientation of the cylinder):
U = Math.atan2(x, z) / Math.PI * 0.5 + 0.5
V = y
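The same per-vertex pattern as the spherical sketch above applies; only the formula changes (cylinderWorldMatrix is an assumed placement matrix):
var cylInverse = new THREE.Matrix4().getInverse( cylinderWorldMatrix );
function cylindricalUV( vertex ) {
    // move the point into the cylinder's local space
    var p = vertex.clone().applyMatrix4( cylInverse );
    return new THREE.Vector2(
        Math.atan2( p.x, p.z ) / Math.PI * 0.5 + 0.5,
        p.y
    );
}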
2 more solutions
Maybe you want Environment Mapping?
Here's 1 example and Here's another.
Maybe you should consider using a modeling package like Maya or Blender that have UV editors and UV projectors built in.
I have an extremely simple PNG texture: a grey circle with a transparent background.
I use it as a uniform map for a THREE.ShaderMaterial:
var uniforms = THREE.UniformsUtils.merge( [basicShader.uniforms] );
uniforms['map'].value = THREE.ImageUtils.loadTexture( "img/particle.png" );
uniforms['size'].value = 100;
uniforms['opacity'].value = 0.5;
uniforms['psColor'].value = new THREE.Color( 0xffffff );
Here is my fragment shader (just part of it):
gl_FragColor = vec4( psColor, vOpacity );
gl_FragColor = gl_FragColor * texture2D( map,vec2( gl_PointCoord.x, 1.0 - gl_PointCoord.y ) );
gl_FragColor = gl_FragColor * vec4( vColor, 1.0 );
I applied the material to some particles (a THREE.PointCloud mesh) and it works quite well:
But if I turn the camera more than 180 degrees I see this:
I understand that the fragment shader is not correctly taking into account the alpha value of the PNG texture.
What is the best approach in this case, to get the right color and opacity (from custom attributes) and still get the alpha right from the PNG?
And why is it behaving correctly on one side?
Transparent objects must be rendered from back to front -- from furthest to closest. This is because of the depth buffer.
But PointCloud particles are not sorted based on distance from the camera. That would be too inefficient. The particles are always rendered in the same order, regardless of the camera position.
You have several work-arounds.
The first is to discard fragments for which the alpha is low. You can use a pattern like so:
if ( textureColor.a < 0.5 ) discard;
Another option is to set material.depthTest = false or material.depthWrite = false. You might not like the side effects, however, if you have other objects in the scene.
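For example, a minimal sketch of the second work-around applied to the question's material (uniforms and the shader sources are assumed to be the ones from the question):
var material = new THREE.ShaderMaterial( {
    uniforms: uniforms,
    vertexShader: vertexShader,     // as in the question
    fragmentShader: fragmentShader, // optionally with the discard test added
    transparent: true,
    depthWrite: false // particles no longer write to the depth buffer
} );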
three.js r.71
I'm using orthographic projection.
I have 2 triangles creating one long quad.
On this quad I put a texture that repeats itself along the way.
The world zoom is constantly being changed by the user, which makes the quad shorter or longer accordingly. The height is calculated in the shader so it is always the same size (in pixels).
My problem is that I want the texture to repeat according to its real (pixel) size and the length of the quad. In other words, the texture should always be the same size (in pixels) and should fill the quad by repeating more or fewer times depending on the quad's length.
The rotation is important.
For Example
My texture is
I've added texture coordinates to my vertices to duplicate it 20 times, as you can see below.
Because it's zoomed too far out, we see the texture squeezed.
Now I'm zooming in, and the texture is stretched. It will always repeat 20 times.
I'm sure I have to play with the texture coordinates in the frag shader, but I don't see the solution. Or perhaps there is a better solution to my problem.
---- ADDITION ----
Solved it by:
Calculating the repeat S value at the current zoom (when I'm adding the vertices) and sending the map width (in world units) as an attribute. On every draw I send the current map width as a uniform for calculating the scale.
But I'm not happy with this solution.
OK, found a way to do it with a minimum of attributes and minimal code in the shader.
Do once:
Calculate the repeat count for each line. As my world and my screen are 1:1 (1 unit in my world is 1 pixel), that is lineDistance(inWorldUnits) / picWidth(inScreenUnits).
Save it as an attribute.
Every draw:
Calculate the scale (world to screen): worldWidth / screenWidth.
Set it as a uniform.
Draw the buffer.
In the frag shader, simply multiply this scale with the repeat attribute.
Works perfectly and looks good. Resizing the window is supported as well.
The general solution is to include a texture matrix. So your vertex shader might look something like
attribute vec4 a_position;
attribute vec2 a_texcoord;
varying vec2 v_texcoord;
uniform mat4 u_matrix;
uniform mat4 u_texMatrix;
void main() {
    gl_Position = u_matrix * a_position;
    v_texcoord = (u_texMatrix * vec4(a_texcoord, 0, 1)).xy;
}
Now you can set up the texture matrix to scale your texture coordinates however you need. If your texture coordinates go from 0 to 1 across the texture and your pattern is 16 pixels wide, then if you're drawing a line 100 pixels long you'd need 100/16 as your X scale.
var pixelsLong = 100;
var pixelsTall = 8;
var textureWidth = 16;
var textureHeight = 16;
var xScale = pixelsLong / textureWidth;
var yScale = pixelsTall / textureHeight;
var texMatrix = [
xScale, 0, 0, 0,
0, yScale, 0, 0,
0, 0, 1, 0,
0, 0, 0, 1,
];
gl.uniformMatrix4fv(texMatrixLocation, false, texMatrix);
That seems like it would work. Because you're using a matrix, you can also easily offset or rotate the texture. See matrix math.
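For instance, a sketch of folding an offset and a rotation into the same matrix (the angle and offset values are arbitrary examples; the layout is column-major, as WebGL expects):
var angle = Math.PI / 4;           // rotate the texture 45 degrees
var c = Math.cos(angle), s = Math.sin(angle);
var xOffset = 0.25;                // slide the pattern a quarter-texture in X
var texMatrix = [
   c * xScale, s * xScale, 0, 0,
  -s * yScale, c * yScale, 0, 0,
            0,          0, 1, 0,
      xOffset,          0, 0, 1,
];
gl.uniformMatrix4fv(texMatrixLocation, false, texMatrix);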