How to determine the visible objects on the screen? - three.js

I need to find the objects that are fully or partly visible on the rendered screen. I know this can be done by coloring each object uniquely, rendering the scene, and detecting the colors that end up on the screen. This is a screen-space operation that would involve fiddling with the framebuffer. Are there any special functions/helpers within three.js that do this more easily?

You can check whether an object is in the camera's view frustum. See Frustum in the Three.js documentation.
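A minimal sketch of such a check (assuming a recent three.js; in older releases setFromProjectionMatrix() was named setFromMatrix()):
// Build the frustum from the camera's combined projection * view matrix
const frustum = new THREE.Frustum();
const projScreenMatrix = new THREE.Matrix4();
projScreenMatrix.multiplyMatrices(camera.projectionMatrix, camera.matrixWorldInverse);
frustum.setFromProjectionMatrix(projScreenMatrix);

scene.traverse((obj) => {
  if (obj.isMesh && frustum.intersectsObject(obj)) {
    // the object's bounding sphere intersects the frustum, so it is at
    // least partly in view (it may still be occluded by other objects)
  }
});
Note this is a bounding-volume test, so it also reports objects hidden behind others, which is where the colour-coding approach below comes in.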

One way to achieve this is to render your scene once with constant shading, colour-coding your objects as you need, with any anti-aliasing and other effects turned off, so that you can easily map a read pixel back to its object by its colour.
Then, you can read pixels from your render target, for which you can use three.js' WebGLRenderer.readRenderTargetPixels() (see docs). You can then read the colours out of the buffer you pass to it.
Something like this:
// Render your scene first, into a renderTarget. Then:
const buffer = new Uint8Array(width * height * 4);
this.renderer.readRenderTargetPixels(renderTarget, 0, 0, width, height, buffer);
for (let i = 0; i < buffer.length / 4; ++i) {
  const r = buffer[i * 4];
  const g = buffer[i * 4 + 1];
  const b = buffer[i * 4 + 2];
  const rgb = (r << 16) | (g << 8) | b;
  // Do your mapping
}
This is very much just plain WebGL though, and I don't know whether there might be a better way to do this within three.js.
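For the colour-coding pass itself, a minimal sketch (assumptions: a recent three.js with renderer.setRenderTarget(), every pickable object is a Mesh, and mesh ids fit into 24 bits so each id can double as an RGB colour):
// Swap each mesh's material for a flat, unlit one whose colour encodes its id
const pickingMaterials = new Map();
scene.traverse((obj) => {
  if (obj.isMesh) {
    pickingMaterials.set(obj.id, obj.material);
    obj.material = new THREE.MeshBasicMaterial({ color: obj.id });
  }
});

// Render into the render target, then put the original materials back
renderer.setRenderTarget(renderTarget);
renderer.render(scene, camera);
renderer.setRenderTarget(null);
scene.traverse((obj) => {
  if (obj.isMesh) obj.material = pickingMaterials.get(obj.id);
});
Each rgb value the loop above reads back then maps to the mesh with that .id (pick a clear colour no id can produce, e.g. white, so background pixels don't match anything).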

Related

In A-Frame/THREE.js, is there a method like the Camera.ScreenToWorldPoint() from Unity?

I know a method from Unity which is very useful for converting a screen position to a world position: https://docs.unity3d.com/ScriptReference/Camera.ScreenToWorldPoint.html
I've been looking for something similar in A-Frame/THREE.js, but I didn't find anything.
Is there an easy way to convert a screen position to a world position on a plane positioned a given distance from the camera?
This is typically done using Raycaster. An equivalent function using three.js would be written like this:
function screenToWorldPoint(screenSpaceCoord, target = new THREE.Vector3()) {
  // convert the screen-space coordinates to normalized device coordinates
  // (x and y ranging from -1 to 1):
  const ndc = new THREE.Vector2();
  ndc.x = 2 * screenSpaceCoord.x / screenWidth - 1;
  ndc.y = 2 * screenSpaceCoord.y / screenHeight - 1;
  // `Raycaster` can be used to convert this into a ray:
  const raycaster = new THREE.Raycaster();
  raycaster.setFromCamera(ndc, camera);
  // finally, apply the distance:
  return raycaster.ray.at(screenSpaceCoord.z, target);
}
Note that coordinates in browsers are usually measured from the top/left corner with y pointing downwards. In that case, the NDC calculation should be:
ndc.y = 1 - 2 * screenSpaceCoord.y / screenHeight;
Another note: instead of using a set distance in screenSpaceCoord.z, you could also let three.js compute an intersection with any object in your scene. For that you can use raycaster.intersectObject() and get a precise depth for the point of intersection with that object. See the documentation and the various examples linked here: https://threejs.org/docs/#api/core/Raycaster
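For example, to find the world point 5 units in front of the camera under the mouse (hypothetical usage; assumes the flipped-y variant above and that screenWidth/screenHeight match the mouse event's coordinate space):
const worldPoint = screenToWorldPoint(new THREE.Vector3(event.clientX, event.clientY, 5));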

Drawing image(PGraphics) gives unwanted double image mirrored about x-axis. Processing 3

The code is supposed to fade and copy the window's image to a buffer f, then draw f back onto the window but translated, rotated, and scaled. I am trying to create an effect like a feedback loop when you point a camera plugged into a TV at the TV.
I have tried everything I can think of, logged every variable I could think of, and still it just seems like image(f,0,0) is doing something wrong or unexpected.
What am I missing?
Pic of the double image mirrored about the x-axis:
PGraphics f;
int rect_size;
int midX;
int midY;

void setup() {
  size(1000, 1000, P2D);
  f = createGraphics(width, height, P2D);
  midX = width/2;
  midY = height/2;
  rect_size = 300;
  imageMode(CENTER);
  rectMode(CENTER);
  smooth();
  background(0, 0, 0);
  fill(0, 0);
  stroke(255, 255);
}

void draw() {
  fade_and_copy_pixels(f); // fades window pixels and then copies pixels to f
  background(0, 0, 0); // without this the corners don't get repainted
  // transform the display window (instead of f)
  pushMatrix();
  float scaling = 0.90; // >1 makes the image bigger
  float rot = 5; // angle in degrees
  translate(midX, midY); // makes it so rotations are always around the center
  rotate(radians(rot));
  scale(scaling);
  imageMode(CENTER);
  image(f, 0, 0); // weird double image; must have something not working around here
  popMatrix(); // returns the window matrix to normal
  int x = mouseX;
  int y = mouseY;
  rectMode(CENTER);
  rect(x, y, rect_size, rect_size);
}
// fades window pixels and then copies pixels to f
void fade_and_copy_pixels(PGraphics f) {
  loadPixels(); // load the window's pixels. don't need this because I am only reading pixels?
  f.loadPixels(); // loads the feedback buffer's pixels
  // Loop through every pixel in the window.
  // It is faster to grab data from the pixels[] array, so don't use get() and set(); use this:
  for (int i = 0; i < pixels.length; i++) {
    ////////////// FADE PIXELS in window and COPY to f ///////////////
    color p = pixels[i];
    // get the color values: mask, then shift
    int r = (p & 0x00FF0000) >> 16;
    int g = (p & 0x0000FF00) >> 8;
    int b = p & 0x000000FF; // no need for shifting
    // reduce the value of each color proportionally;
    // fade_percent is between 0 and 1: 0 leaves the pixel unchanged, 1 fades it fully.
    // min is 0.0039 (when using the floor function and 255 as the colorMode for colors)
    float fade_percent = 0.005; // 0.05 = 5%
    int r_new = floor(float(r) - (float(r) * fade_percent));
    int g_new = floor(float(g) - (float(g) * fade_percent));
    int b_new = floor(float(b) - (float(b) * fade_percent));
    // maybe later rewrite this to track the remainder and round it differently, like faster at first and slower later
    // round doesn't work because it never first subtracts one to get the ball rolling
    // floor always subtracts at least 1 from each value each time; can't just subtract 1 every n loops
    // keep a list of all the pixels as floats? too much memory?
    // I'll stick with floor for now
    // the lowest percent that makes a difference with floor is 0.0039?... because that's slightly more than 1/255
    // shift back and OR together
    p = 0xFF000000 | (r_new << 16) | (g_new << 8) | b_new; // OR-ing the new values back together into AARRGGBB
    f.pixels[i] = p;
    //////// pixels now copied
  }
  f.updatePixels();
}
This is a weird one. But let's start with a simpler MCVE that isolates the problem:
PGraphics f;

void setup() {
  size(500, 500, P2D);
  f = createGraphics(width, height, P2D);
}

void draw() {
  background(0);
  rect(mouseX, mouseY, 100, 100);
  copyPixels(f);
  image(f, 0, 0);
}

void copyPixels(PGraphics f) {
  loadPixels();
  f.loadPixels();
  for (int i = 0; i < pixels.length; i++) {
    color p = pixels[i];
    f.pixels[i] = p;
  }
  f.updatePixels();
}
This code exhibits the same problem as your code, without any of the extra logic. I would expect this code to show a rectangle wherever the mouse is, but instead it shows a rectangle at a position reflected over the X axis. If the mouse is on the top of the window, the rectangle is at the bottom of the window, and vice-versa.
I think this is caused by the P2D renderer being OpenGL, which has an inverted Y axis (0 is at the bottom instead of the top). So it seems like when you copy the pixels over, it's going from screen space to OpenGL space... or something. That definitely seems buggy though.
For now, there are two things that seem to fix the problem. First, you could just use the default renderer instead of P2D. That seems to fix the problem.
Or you could get rid of the for loop inside the copyPixels() function and just do f.pixels = pixels; for now. That also seems to fix the problem, but again it feels pretty buggy.
If somebody else (paging George) doesn't come along with a better explanation by tomorrow, I'd file a bug on Processing's GitHub. (I can do that for you if you want.)
Edit: I've filed an issue here, so hopefully we'll hear back from a developer in the next few days.
Edit Two: Looks like a fix has been implemented and should be available in the next release of Processing. If you need it now, you can always build Processing from source.
An easier one that works like a charm:
add f.beginDraw(); before and f.endDraw(); after using f:
loadPixels(); // load the window's pixels. don't need this because I am only reading pixels?
f.loadPixels(); // loads the feedback buffer's pixels
// Loop through every pixel in the window.
// It is faster to grab data from the pixels[] array, so don't use get() and set(); use this:
f.beginDraw();
and
f.updatePixels();
f.endDraw();
Processing must know when it's drawing in a buffer and when not.
In this image you can see that it works.

CSS3DRenderer ignores projectionMatrix property?

I'm doing augmented reality with Three.js, and recently I tried to combine WebGL and CSS3 rendering to render both 3D content and DOM objects (mostly for video playback) at the same time. I started with the Closing the gap between html and webgl tutorial, but I cannot get a correct visualization using CSS (although WebGL works fine).
Basically, when doing AR, we have two matrices to apply to our scene: the projection matrix and the camera matrix. The projection matrix (row-major) usually looks like this:
var projectionMatrix = [ 1.820090055466, 0, -0.000550820783, 0,
0, 3.227676868439, -0.036605358124, 0,
0, 0, -1.000199913979,-0.200020000339,
0, 0, -1, 0
];
And the camera matrix (row-major) is a rigid 3D transform (an R|t composition) representing the camera's placement in the virtual world:
var cameraMatrix = [ 0.790828585625,0.296402275562,-0.535477280617,-0.309822082520,
-0.612037420273,0.382129371166,-0.692378044128,-0.447699964046,
-0.000600785017,0.875284433365,0.483608126640,-0.637073278427,
0.000000000000,0.000000000000,0.000000000000,1.000000000000];
With WebGL it's pretty easy to apply these matrices to a pipeline:
self.wglCamera.matrixAutoUpdate = false;
self.wglCamera.projectionMatrix.set(
    pm[0],  pm[1],  pm[2],  pm[3],
    pm[4],  pm[5],  pm[6],  pm[7],
    pm[8],  pm[9],  pm[10], pm[11],
    pm[12], pm[13], pm[14], pm[15]);
self.wglCamera.matrix.set(
    cm[0],  cm[1],  cm[2],  cm[3],
    cm[4],  cm[5],  cm[6],  cm[7],
    cm[8],  cm[9],  cm[10], cm[11],
    cm[12], cm[13], cm[14], cm[15]);
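(Note: Matrix4.set() takes its arguments in row-major order, which is why these row-major arrays can be passed straight through. A sketch of an equivalent using fromArray(), which reads column-major and therefore needs a transpose:)
var pm4 = new THREE.Matrix4().fromArray(projectionMatrix).transpose();
var cm4 = new THREE.Matrix4().fromArray(cameraMatrix).transpose();
self.wglCamera.matrixAutoUpdate = false;
self.wglCamera.projectionMatrix.copy(pm4);
self.wglCamera.matrix.copy(cm4);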
When I do the same for the CSS3 camera, I get an incorrect rendering result (VIDEO):
There are two issues:
The red texture (the CSS3DObject) is non-uniformly scaled (it is in fact square).
It always sits in the screen center; however, it should be located where the blue grid is.
After analyzing the CSS3DRenderer implementation, I found that only the camera's fov property is used to set the perspective effect, while the projectionMatrix property is totally ignored when rendering with CSS3DRenderer. Is that intended?
// https://github.com/mrdoob/three.js/blob/master/examples/js/renderers/CSS3DRenderer.js#L225
this.render = function ( scene, camera ) {
  var fov = 0.5 / Math.tan( THREE.Math.degToRad( camera.fov * 0.5 ) ) * _height;
  ...
  camera.matrixWorldInverse.getInverse( camera.matrixWorld );
  // Why don't we use camera.projectionMatrix here?
  var style = "translate3d(0,0," + fov + "px)" + getCameraCSSMatrix( camera.matrixWorldInverse ) +
    " translate3d(" + _widthHalf + "px," + _heightHalf + "px, 0)";
  ...
};
And if so, how can I achieve the desired result?
I've tried to pass PM * CM to the camera matrix, but both problems still exist. Mainly I'm worried about the ignored translation, since the rotation looks good.
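For what it's worth, an equivalent vertical fov can be recovered from a standard perspective matrix (element (1,1) is 1 / tan(fov / 2)) and handed to the CSS camera, since fov is the one parameter CSS3DRenderer does use. A sketch (cssCamera is hypothetical, standing for the camera passed to the CSS3DRenderer):
var fovRad = 2 * Math.atan(1 / projectionMatrix[5]); // row-major: index 5 is element (1,1), 3.2277 above
cssCamera.fov = THREE.Math.radToDeg(fovRad);
cssCamera.updateProjectionMatrix();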
I'd appreciate any ideas/suggestions! Thanks.

Converting Sketchup Transformation to Three.js rotation+scale+position or Matrix

I'm writing an export script (Ruby) in SketchUp, and I'm having trouble applying the same transformation on the Three.js side, so that objects have the same rotation in Three.js as they have in SketchUp.
I can read the rotation using the SketchUp Transformation class: http://www.sketchup.com/intl/en/developer/docs/ourdoc/transformation.php
I can get these kinds of values from a rotated component that I pass to my Three.js code. All are vectors in the form X, Y, Z:
xaxis: 0.0157771536190692,-0.0,-0.0199058138160762
yaxis: -0.0199058138160762,0.0,-0.0157771536190692
zaxis: 0.0,0.0254,-0.0
origin: 1.4975125146729,0.0,-1.25735397455338
Objects are positioned correctly if I just copy the values from origin to Object3D.position. But I have no idea how to apply the xaxis, yaxis and zaxis values to Object3D.rotation.
Three.js has various ways to rotate a model: via matrix manipulation, quaternions, angles, radians, and whatnot. But how do I set the object's rotation using those axis values?
EDIT:
SketchUp's Transformation also provides a .to_a (to array) method, which I think is supposed to return a 16-element matrix. I tried to use that in Three.js:
// tm is from SketchUp Transformation#to_a
var tm = "0.621147780278315,0.783693457325836,-0.0,0.0,-0.783693457325836,0.621147780278315,0.0,0.0,0.0,0.0,1.0,0.0,58.9571856170433,49.5021249824165,0.0,1.0";
tm = tm.split(",");
for (var i = 0; i < tm.length; i++) {
  tm[i] = tm[i] * 1.0;
}
var matrix = new THREE.Matrix4(tm[0], tm[1], tm[2], tm[3], tm[4], tm[5], tm[6], tm[7], tm[8], tm[9], tm[10], tm[11], tm[12], tm[13], tm[14], tm[15]);
obj.applyMatrix(matrix);
This results in a total mess however, so there's still something wrong.
Based on information here: http://sketchucation.com/forums/viewtopic.php?f=180&t=46944&p=419606&hilit=matrix#p419606
I was able to construct a working Matrix4. I think the problem was both in unit scales (see the .to_m conversion in some of the elements) and the order of matrix array elements. In Sketchup:
tr = transformation.to_a
trc = [tr[0],tr[8],-(tr[4]),tr[12].to_m, tr[2],tr[10],-(tr[6]),tr[14].to_m, -(tr[1]),-(tr[9]),tr[5],-(tr[13].to_m), 0.0, 0.0, 0.0, 1.0] # the last 4 values are unused in Sketchup
el.attributes["tm"] = trc.join(",") # rotation and scale matrix
el.attributes["to"] = convertscale(transformation.origin) # position
In Three.js:
var origin = this.parsevector3(node.getAttribute("to"));
obj.position = origin;
var tm = node.getAttribute("tm");
tm = tm.split(",");
for (var i = 0; i < tm.length; i++) {
  tm[i] = tm[i] * 1.0;
}
var matrix = new THREE.Matrix4(tm[0], tm[1], tm[2], tm[3], tm[4], tm[5], tm[6], tm[7], tm[8], tm[9], tm[10], tm[11], tm[12], tm[13], tm[14], tm[15]);
obj.applyMatrix(matrix);
Sorry, there is some application-specific logic in the code, but I think the idea comes across regardless, if someone runs into similar problems.
SketchUp Transformation provides also a .to_a (to array) method, which I think is supposed to return a 16 element matrix.
It has been a while since you posted this, but here's a useful link for people who bump into this in the future: http://www.martinrinehart.com/models/tutorial/tutorial_t.html
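And for the original axis-vector form of the question, a minimal sketch using three.js' Matrix4.makeBasis(), which builds a rotation matrix straight from three axis vectors (xaxis, yaxis and zaxis are arrays of the values quoted in the question; the normalize() calls strip SketchUp's unit scale out of the axes, and this ignores the Z-up to Y-up remapping that the matrix-based answer above handles):
var basis = new THREE.Matrix4().makeBasis(
    new THREE.Vector3().fromArray(xaxis).normalize(),
    new THREE.Vector3().fromArray(yaxis).normalize(),
    new THREE.Vector3().fromArray(zaxis).normalize());
obj.quaternion.setFromRotationMatrix(basis);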

How do I create resizable objects in THREE.JS

Let's say I have a simple cube and some variables specifying its width, height, and depth.
How do I update my THREE mesh when the w/h/d changes? (I have everything else, like change listeners etc.) Do I update the vertices directly? Or is it easier to just redraw everything?
I think the easiest would be to create your cube with unit dimensions (1x1x1), then set its dimensions by scaling it:
mesh.scale.x = width;
mesh.scale.y = height;
mesh.scale.z = depth;
I'm not actually sure if Mesh supports scale; if not, you can wrap it in an Object3D:
var obj = new THREE.Object3D();
obj.add(mesh);
obj.scale.x = width;
obj.scale.y = height;
obj.scale.z = depth;
Nothing stops you from modifying the vertices directly. I think you need to specify geometry.dynamic = true; and then geometry.verticesNeedUpdate = true; in that case.
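A sketch of that direct-vertex approach, assuming the legacy THREE.Geometry API this answer refers to (geometry.vertices and these flags were removed in later three.js releases):
var geometry = new THREE.BoxGeometry(1, 1, 1);
var mesh = new THREE.Mesh(geometry, new THREE.MeshNormalMaterial());
geometry.dynamic = true;

function resize(width, height, depth) {
  // BoxGeometry(1, 1, 1) vertices sit at +/-0.5, so the sign of each
  // component identifies the corner the vertex belongs to
  for (var i = 0; i < geometry.vertices.length; i++) {
    var v = geometry.vertices[i];
    v.set(Math.sign(v.x) * width / 2,
          Math.sign(v.y) * height / 2,
          Math.sign(v.z) * depth / 2);
  }
  geometry.verticesNeedUpdate = true;
}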
