three.js premultipliedAlpha=true does not affect the output of gl.readPixels() - three.js

Okay, I'm probably missing something obvious here. What I'm trying to accomplish is storing a three.js scene that has premultipliedAlpha: true to a buffer. However, when reading the WebGL context using gl.readPixels(), the pixel color values are always as if premultipliedAlpha was set to false. Or, put differently, the premultipliedAlpha flag does not affect the output of gl.readPixels().
Code example:
var width = 512;
var height = 512;
// Browser renderer
var renderer = new THREE.WebGLRenderer({alpha: true, antialias: true, premultipliedAlpha: true});
renderer.setSize(width, height);
document.body.appendChild(renderer.domElement);
// Setup basic scene
var scene = new THREE.Scene();
var camera = new THREE.OrthographicCamera(-width/2, width/2, height/2, -height/2, 0.001, 1000);
camera.position.z = 1;
// Draw a semi-transparent red square
var redSquare = new THREE.Mesh(
new THREE.PlaneGeometry(width, height),
new THREE.MeshBasicMaterial({
side: THREE.DoubleSide,
transparent : true,
color: 0xFF0000,
opacity : .5
})
);
scene.add(redSquare);
renderer.render( scene, camera );
// Check color of first pixel
var gl = renderer.getContext();
var buf = new Uint8Array(4);
gl.readPixels(0, 0, 1, 1, gl.RGBA, gl.UNSIGNED_BYTE, buf);
console.log(buf[0], buf[1], buf[2], buf[3]);
<script src="https://cdnjs.cloudflare.com/ajax/libs/three.js/r83/three.min.js"></script>
The output color is 127 0 0 127; I expected it to be 255 0 0 127. Toggling premultipliedAlpha changes the value on screen, but not in the output. This might be by design, but then my question is: how do I convert the color in the buffer so that I get the expected value?
Solution
To convert the premultiplied values in the buffer back to straight (non-premultiplied) alpha, divide each color channel by the normalized alpha value, like so:
buf[0] /= buf[3] / 0xff
buf[1] /= buf[3] / 0xff
buf[2] /= buf[3] / 0xff
Thanks to #Kirill Dmitrenko
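For a whole readPixels buffer, the same conversion can be applied per pixel; note that the division is undefined for fully transparent pixels, so those need a guard. A sketch (the helper name is mine):

```javascript
// Convert a premultiplied-alpha RGBA buffer (as returned by gl.readPixels)
// back to straight (non-premultiplied) alpha, in place.
function unpremultiply(buf) {
  for (let i = 0; i < buf.length; i += 4) {
    const a = buf[i + 3];
    if (a === 0) continue; // fully transparent: RGB is undefined, leave as-is
    buf[i]     = Math.min(255, Math.round(buf[i]     * 255 / a));
    buf[i + 1] = Math.min(255, Math.round(buf[i + 1] * 255 / a));
    buf[i + 2] = Math.min(255, Math.round(buf[i + 2] * 255 / a));
  }
  return buf;
}
```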

premultipliedAlpha: true is the default, but setting it to true or false has nothing to do with what values are in the canvas (i.e., returned from readPixels). It's only a flag to the browser on how to composite the canvas with the rest of the page.
It's your responsibility to put values into the canvas that match whatever you set premultipliedAlpha to.
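In other words, the two conventions differ only in that each color channel is scaled by alpha. Converting a straight-alpha pixel to premultiplied form looks like this (an illustrative helper, not a three.js API):

```javascript
// Straight-alpha -> premultiplied-alpha for one 8-bit RGBA pixel.
function premultiply([r, g, b, a]) {
  const f = a / 255; // normalized alpha
  return [Math.round(r * f), Math.round(g * f), Math.round(b * f), a];
}
```

A half-transparent pure red [255, 0, 0, 127] becomes [127, 0, 0, 127], matching the values read back in the question.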

That's because premultipliedAlpha: true is the default value.

Related

ThreeJs: Add a Gridhelper which always face the perspective camera

I have a threejs scene view containing a mesh, a perspective camera, and in which I move the camera with OrbitControls.
I need to add a measurement grid to a threejs view which "faces" my perspective camera.
It works on "start up" with the following code by applying a xRotation of Pi/2 on the grid helper
window.camera = new THREE.PerspectiveCamera( 45, window.innerWidth / window.innerHeight, 0.01, 300 );
window.camera.position.z = 150;
window.grid1 = new THREE.GridHelper(500, ~~(500 * 2))
window.grid1.material.transparent = true;
window.grid1.material.depthTest = false;
window.grid1.material.blending = THREE.NormalBlending;
window.grid1.renderOrder = 100;
window.grid1.rotation.x = Math.PI / 2;
window.scene.add(window.grid1);
window.controls = new OrbitControls(window.camera, window.renderer.domElement );
window.controls.target.set( 0, 0.5, 0 );
window.controls.update();
window.controls.enablePan = false;
window.controls.enableDamping = true;
But once I start moving with OrbitControls, the grid helper doesn't stay aligned with the camera.
I tried using, in the render loop:
window.grid1.quaternion.copy(window.camera.quaternion);
And
window.grid1.lookAt(window.camera.position)
which seems to work partially: the grid helper is aligned on the "floor" but not facing the camera.
How can I achieve that?
Be gentle I'm starting with threejs :)
This is a bit of a hack, but you could wrap your grid in a THREE.Group and rotate it instead:
const camera = new THREE.PerspectiveCamera( 45, window.innerWidth / window.innerHeight, 0.01, 300 );
camera.position.z = 150;
const grid1 = new THREE.GridHelper(500, ~~(500 * 2));
grid1.material.transparent = true;
grid1.material.depthTest = false;
grid1.material.blending = THREE.NormalBlending;
grid1.renderOrder = 100;
grid1.rotation.x = Math.PI / 2;
const gridGroup = new THREE.Group();
gridGroup.add(grid1);
scene.add(gridGroup);
// ...
And then, in your render loop, you make your group face to the camera (and not the grid):
gridGroup.lookAt(camera.position)
This works because it kind of simulates the behaviour of setting the normal in a THREE.Plane. The GridHelper is rotated to be perpendicular to the camera, and then it is wrapped in a group with no rotation. So by rotating the group, the grid will always be offset so that it stays perpendicular to the camera.

How do I make all objects visible at any zoom level/distance from the camera?

I have a scene with hundreds of objects, however not all of them are visible unless I move the camera towards them.
Here's part of the scene from above - note how the yellow wireframes seem to cut off halfway:
The more I zoom in, the more becomes visible:
I would like for all objects to be visible at all times, not just when the camera is near them.
Is there a setting/property for this or is it not that simple?
I am using a PerspectiveCamera with the following settings (code abridged for brevity):
const camera = new THREE.PerspectiveCamera(50, 1280 / 720);
camera.far = Infinity;
camera.position.x = 0;
camera.position.y = 1024;
camera.position.z = 1024;
const controls = new OrbitControls(camera, renderer.domElement);
scene.add(camera);
Thanks to user #prisoner849's comment, I realised that Infinity is not a valid far value for a PerspectiveCamera.
Fixed code:
const camera = new THREE.PerspectiveCamera(50, 1280 / 720, 0.1, 0x10000);
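If a suitable far value isn't known up front, one option is to derive it from the camera's distance to the scene origin plus the scene's bounding radius. A sketch with hypothetical numbers (the scene radius here is made up):

```javascript
// Hypothetical setup: camera at (0, 1024, 1024), scene content contained
// in a sphere of radius 1500 around the origin.
const cameraPos = { x: 0, y: 1024, z: 1024 };
const sceneRadius = 1500;

// Far plane must reach past the most distant point of the scene.
const dist = Math.hypot(cameraPos.x, cameraPos.y, cameraPos.z);
const far = Math.ceil(dist + sceneRadius);
```

The resulting far can then be passed as the fourth PerspectiveCamera constructor argument (or assigned to camera.far followed by camera.updateProjectionMatrix()).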

Best way to paint rectangles in three.js

EDIT: I solved my problem and this is what it was for. It now uses raw WebGL and two triangles for each rectangle.
I'm a seasoned developer, but know next to nothing about 3d development.
I need to animate a million small rectangles where I set the coordinates in Javascript (rather than through a shader). (EDIT: It's a 2D job and I'm looking at webgl for performance reasons only.) I tweaked an existing threejs sample that uses "Points" to modify the coordinates in a BufferGeometry via Javascript and that performs really well, even with a million points.
The three.js concept of "Points", however, is a bit weird in that it appears they have to be squares - my rectangles can't be quite squares though, and they are of slightly different dimensions each.
I can think of a couple of workarounds, such as having foreground-colored squares partially overlap with squares of a background-color, thereby molding them into the correct rectangle. That's quite hacky though.
Another possibility would be to not do it with points but rather with proper triangles; but then I need to set 12 values from Javascript (2 triangles, 3 vertices each, 2 coordinates per vertex) rather than just the needed 4 (x, y, width, height). I suppose that could be improved with a vertex shader somehow, but that will be tricky for a noob like me.
I'm looking for some suggestions or, alternatively, a sample on how to set a large number of vertex coordinates from Javascript in threejs (the existing samples all appear to assume that manipulation is done in shaders, but that doesn't work so well for my use case).
EDIT - Here's a picture of how the rectangles could be laid out:
The rectangles' top and bottom edges are arbitrary, but they are organized into columns of arbitrary widths.
The rectangles of each column all have the same, uniform color.
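The 12-value expansion described in the question is mechanical and easy to factor out. A sketch of a helper (name and vertex order are my choice) that turns (x, y, width, height) into the two triangles' 2D positions, ready to be written into a BufferGeometry position attribute:

```javascript
// Expand one rectangle into two triangles: 6 vertices x 2 coords = 12 floats.
function rectToTriangles(x, y, w, h) {
  const x2 = x + w, y2 = y + h;
  return [
    x, y,   x2, y,   x2, y2, // triangle 1: bottom-left, bottom-right, top-right
    x, y,   x2, y2,  x, y2   // triangle 2: bottom-left, top-right, top-left
  ];
}
```

Per frame you would still only update the 4 logical values per rectangle and regenerate the 12 floats on the JavaScript side before uploading the attribute.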
Just an option with canvas and .map:
var scene = new THREE.Scene();
var camera = new THREE.PerspectiveCamera(60, innerWidth / innerHeight, 1, 1000);
camera.position.set(0, 0, 10);
camera.lookAt(scene.position);
var renderer = new THREE.WebGLRenderer();
renderer.setSize(innerWidth, innerHeight);
document.body.appendChild(renderer.domElement);
var gh = new THREE.GridHelper(10, 10, "black", "black");
gh.rotation.x = Math.PI * 0.5;
gh.position.z = 0.01;
scene.add(gh);
var canvas = document.createElement("canvas");
var map = new THREE.CanvasTexture(canvas);
canvas.width = 512;
canvas.height = 512;
var ctx = canvas.getContext("2d");
ctx.fillStyle = "gray";
ctx.fillRect(0, 0, canvas.width, canvas.height);
function drawRectangle(x, y, width, height, color) {
let xUnit = canvas.width / 10;
let yUnit = canvas.height / 10;
let x_ = x * xUnit;
let y_ = y * yUnit;
let w_ = width * xUnit;
let h_ = height * yUnit;
ctx.fillStyle = color;
ctx.fillRect(x_, y_, w_, h_);
map.needsUpdate = true;
}
drawRectangle(1, 1, 4, 3, "aqua");
drawRectangle(0, 6, 6, 3, "magenta");
drawRectangle(3, 2, 6, 6, "yellow");
var plane = new THREE.Mesh(new THREE.PlaneBufferGeometry(10, 10), new THREE.MeshBasicMaterial({
color: "white",
map: map
}));
scene.add(plane);
renderer.setAnimationLoop(() => {
renderer.render(scene, camera);
});
body {
overflow: hidden;
margin: 0;
}
<script src="https://threejs.org/build/three.min.js"></script>
Read the source for these samples:
https://threejs.org/examples/?q=buffer#webgl_buffergeometry_custom_attributes_particles
https://threejs.org/examples/?q=buffer#webgl_buffergeometry_instancing
https://threejs.org/examples/?q=buffer#webgl_buffergeometry_instancing_billboards
https://threejs.org/examples/?q=buffer#webgl_buffergeometry_points

How to cast a visible ray threejs

I want to aim at objects with the camera's vision (as the user would look at the object, not point at it with the mouse).
I'm casting a ray from the camera like this
rotation.x = camera.rotation.x;
rotation.y = camera.rotation.y;
rotation.z = camera.rotation.z;
raycaster.ray.direction.copy( direction ).applyEuler(rotation);
raycaster.ray.origin.copy( camera.position );
var intersections = raycaster.intersectObjects( cubes.children );
This gets me the intersections, but it seems to wander off sometimes. So I'd like to add an aim (crosshair): some kind of object (mesh) at the end or in the middle of the ray.
How can I add it? When I created a regular line it sat right in front of the camera, so the screen went black.
You can add a crosshair constructed from simple geometry to your camera like this:
var material = new THREE.LineBasicMaterial({ color: 0xAAFFAA });
// crosshair size
var x = 0.01, y = 0.01;
var geometry = new THREE.Geometry();
// crosshair
geometry.vertices.push(new THREE.Vector3(0, y, 0));
geometry.vertices.push(new THREE.Vector3(0, -y, 0));
geometry.vertices.push(new THREE.Vector3(0, 0, 0));
geometry.vertices.push(new THREE.Vector3(x, 0, 0));
geometry.vertices.push(new THREE.Vector3(-x, 0, 0));
var crosshair = new THREE.Line( geometry, material );
// place it in the center
var crosshairPercentX = 50;
var crosshairPercentY = 50;
var crosshairPositionX = (crosshairPercentX / 100) * 2 - 1;
var crosshairPositionY = (crosshairPercentY / 100) * 2 - 1;
crosshair.position.x = crosshairPositionX * camera.aspect;
crosshair.position.y = crosshairPositionY;
crosshair.position.z = -0.3;
camera.add( crosshair );
scene.add( camera );
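The percent-to-position conversion above is just a linear remap from [0, 100] to normalized device coordinates [-1, 1]; as a standalone helper (a sketch):

```javascript
// Map a screen percentage (0..100) to normalized device coordinates (-1..1).
function percentToNDC(pct) {
  return (pct / 100) * 2 - 1;
}
```

At 50% this yields 0, so the crosshair sits at the exact center of the view; the x coordinate is then scaled by camera.aspect to compensate for the non-square viewport.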
Three.js r107
http://jsfiddle.net/5ksydn6u/2/
In case you don't have a special use case where you need to retrieve the position and rotation from your camera like you are doing, I guess your "wandering off" could be fixed by calling your raycaster with these arguments:
raycaster.set( camera.getWorldPosition( new THREE.Vector3() ), camera.getWorldDirection( new THREE.Vector3() ) );
var intersections = raycaster.intersectObjects( cubes.children );
Cast visible ray
Then you can visualize your raycast in 3D space by drawing an arrow with the arrow helper. Do this after your raycast:
scene.remove ( arrow );
arrow = new THREE.ArrowHelper( camera.getWorldDirection( new THREE.Vector3() ), camera.getWorldPosition( new THREE.Vector3() ), 100, Math.random() * 0xffffff );
scene.add( arrow );

Three.js: How to Repeat a Texture

With the following code, I want to set the rectangle's texture, but the problem is that the texture image does not repeat over the whole rectangle:
var penGeometry = new THREE.CubeGeometry(length, 15, 120);
var wallTexture = THREE.ImageUtils.loadTexture('../../3D/brick2.jpg');
wallTexture.wrapS = wallTexture.wrapT = THREE.MirroredRepeatWrapping;
wallTexture.repeat.set(50, 1);
var wallMaterial = new THREE.MeshBasicMaterial({ map: wallTexture });
var line = new THREE.Mesh(penGeometry, wallMaterial);
line.position.x = PenArray.lastPosition.x + (PenArray.currentPosition.x - PenArray.lastPosition.x) / 2;
line.position.y = PenArray.lastPosition.y + (PenArray.currentPosition.y - PenArray.lastPosition.y) / 2;
line.position.z = PenArray.lastPosition.z + 60;
line.rotation.z = angle;
The texture image is http://wysnan.com/NightClubBooth/brick1.jpg
The result is http://wysnan.com/NightClubBooth/brick2.jpg
Only a piece of the texture is rendered correctly, but not the whole rectangle. Why?
And how do I render the whole rectangle with this texture image?
For repeat wrapping, your texture's dimensions must be a power of two (POT).
For example ( 512 x 512 ) or ( 512 x 256 ).
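You can check a texture's dimensions before enabling repeat wrapping; a power-of-two test is a one-liner in plain JavaScript (a sketch; newer three.js versions ship a similar helper as MathUtils.isPowerOfTwo):

```javascript
// True when n is a positive power of two (1, 2, 4, ..., 256, 512, ...).
function isPowerOfTwo(n) {
  return n > 0 && (n & (n - 1)) === 0;
}
```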
three.js r.58
