EDIT: I solved my problem and this is what it was for. It now uses raw WebGL and two triangles for each rectangle.
I'm a seasoned developer, but I know next to nothing about 3D development.
I need to animate a million small rectangles where I set the coordinates in JavaScript (rather than through a shader). (EDIT: It's a 2D job and I'm looking at WebGL for performance reasons only.) I tweaked an existing three.js sample that uses "Points" to modify the coordinates in a BufferGeometry via JavaScript, and that performs really well, even with a million points.
The three.js concept of "Points", however, is a bit weird in that they appear to have to be squares. My rectangles aren't quite squares, though, and each one has slightly different dimensions.
I can think of a couple of workarounds, such as having foreground-colored squares partially overlap with squares of a background-color, thereby molding them into the correct rectangle. That's quite hacky though.
Another possibility would be to not do it with points but rather with proper triangles; but then I need to set 12 values from JavaScript (2 triangles × 3 vertices × 2 dimensions) rather than just the needed 4 (x, y, width, height). I suppose that could be improved with a vertex shader somehow, but that will be tricky for a noob like me.
I'm looking for some suggestions or, alternatively, a sample of how to set a large number of vertex coordinates from JavaScript in three.js (the existing samples all appear to assume that manipulation is done in shaders, but that doesn't work so well for my use case).
EDIT - Here's a picture of how the rectangles could be laid out:
The rectangles' top and bottom edges are arbitrary, but they are organized into columns of arbitrary widths.
The rectangles of each column all have the same, uniform color.
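A minimal sketch of the triangle-based approach mentioned in the EDIT above, assuming a flat 2D layout (all names here are illustrative, not from any three.js sample):

// One BufferGeometry holding two triangles (6 vertices) per rectangle,
// with positions written from JavaScript; z stays 0 for a 2D layout.
var RECT_COUNT = 1000000;
var positions = new Float32Array(RECT_COUNT * 6 * 3);

function setRect(i, x, y, w, h) {
  var o = i * 18;
  // vertex order: bl, br, tl, br, tr, tl (two counter-clockwise triangles)
  var xs = [x, x + w, x, x + w, x + w, x];
  var ys = [y, y, y + h, y, y + h, y + h];
  for (var v = 0; v < 6; v++) {
    positions[o + v * 3]     = xs[v];
    positions[o + v * 3 + 1] = ys[v];
    positions[o + v * 3 + 2] = 0;
  }
}

var geometry = new THREE.BufferGeometry();
geometry.addAttribute('position', new THREE.BufferAttribute(positions, 3)); // setAttribute in newer releases

var mesh = new THREE.Mesh(geometry, new THREE.MeshBasicMaterial({ color: 'aqua' }));
mesh.frustumCulled = false; // positions change every frame, so skip bounding-sphere culling

// after rewriting rectangles in the render loop:
geometry.attributes.position.needsUpdate = true;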
Just an option, using a canvas as the material's .map:
var scene = new THREE.Scene();
var camera = new THREE.PerspectiveCamera(60, innerWidth / innerHeight, 1, 1000);
camera.position.set(0, 0, 10);
camera.lookAt(scene.position);
var renderer = new THREE.WebGLRenderer();
renderer.setSize(innerWidth, innerHeight);
document.body.appendChild(renderer.domElement);
var gh = new THREE.GridHelper(10, 10, "black", "black");
gh.rotation.x = Math.PI * 0.5;
gh.position.z = 0.01;
scene.add(gh);
// draw the rectangles into a 2D canvas that serves as the texture
var canvas = document.createElement("canvas");
canvas.width = 512;
canvas.height = 512;
var map = new THREE.CanvasTexture(canvas);
var ctx = canvas.getContext("2d");
ctx.fillStyle = "gray";
ctx.fillRect(0, 0, canvas.width, canvas.height);
// x, y, width, height are in grid units: the 512px canvas maps onto the 10x10 plane
function drawRectangle(x, y, width, height, color) {
  let xUnit = canvas.width / 10;
  let yUnit = canvas.height / 10;
  let x_ = x * xUnit;
  let y_ = y * yUnit;
  let w_ = width * xUnit;
  let h_ = height * yUnit;
  ctx.fillStyle = color;
  ctx.fillRect(x_, y_, w_, h_);
  map.needsUpdate = true; // re-upload the canvas to the GPU
}
drawRectangle(1, 1, 4, 3, "aqua");
drawRectangle(0, 6, 6, 3, "magenta");
drawRectangle(3, 2, 6, 6, "yellow");
var plane = new THREE.Mesh(new THREE.PlaneBufferGeometry(10, 10), new THREE.MeshBasicMaterial({
color: "white",
map: map
}));
scene.add(plane);
renderer.setAnimationLoop(() => {
renderer.render(scene, camera);
});
body {
overflow: hidden;
margin: 0;
}
<script src="https://threejs.org/build/three.min.js"></script>
Read the source for these samples:
https://threejs.org/examples/?q=buffer#webgl_buffergeometry_custom_attributes_particles
https://threejs.org/examples/?q=buffer#webgl_buffergeometry_instancing
https://threejs.org/examples/?q=buffer#webgl_buffergeometry_instancing_billboards
https://threejs.org/examples/?q=buffer#webgl_buffergeometry_points
I display a "curved tube" and color its vertices based on their distance to the plane the curve lies on.
It works mostly fine; however, when I reduce the resolution of the tube, artifacts start to appear in the tube colors.
Those artifacts seem to depend on the camera position. If I move the camera around, sometimes the artifacts disappear. Not sure if that makes sense.
Live demo: http://jsfiddle.net/gz1wu369/15/
I do not know if there is actually a problem in the interpolation or if it is just a "screen" artifact.
Afterwards I render the scene to a texture, looking at it from the "top". It then looks like a "deformation" field that I use in another shader, hence the need for continuous color.
I do not know if it is the expected behavior or if there is a problem in my code while setting the vertices color.
Would using the three.js extrusion tools instead of the tube geometry solve my issue?
const tubeGeo = new THREE.TubeBufferGeometry(closedSpline, steps, radius, curveSegments, false);
const count = tubeGeo.attributes.position.count;
tubeGeo.addAttribute('color', new THREE.BufferAttribute(new Float32Array(count * 3), 3));
const colors = tubeGeo.attributes.color;
const color = new THREE.Color();
for (let i = 0; i < count; i++) {
  // position of vertex i
  const pp = new THREE.Vector3(
    tubeGeo.attributes.position.array[3 * i],
    tubeGeo.attributes.position.array[3 * i + 1],
    tubeGeo.attributes.position.array[3 * i + 2]);
  const distance = plane.distanceToPoint(pp);
  const normalizedDist = Math.abs(distance) / radius;
  // index of the ring along the tube that vertex i belongs to
  const t2 = Math.floor(i / (curveSegments + 1));
  color.setHSL(0.5 * t2 / steps, .8, .5);
  const green = 1 - Math.cos(Math.asin(normalizedDist));
  colors.setXYZ(i, color.r, green, 0);
}
A low-resolution tube with the "normals" material shows a different artifact:
A high-resolution tube hides the artifacts:
I've run into an issue after switching to a logarithmic depth buffer in Three.js. Everything runs nicely except for nearby culling of the ground as described in the following photos:
As you can see, the camera is elevated above the ground significantly. The character box that is shown is about 2 units above the ground, and my camera is set up as such:
var WIDTH = window.innerWidth
, HEIGHT = window.innerHeight;
var VIEW_ANGLE = 70
, ASPECT = WIDTH / HEIGHT
, NEAR = 1e-6
, FAR = 9000;
var aspect = WIDTH / HEIGHT;
var camera = new THREE.PerspectiveCamera(VIEW_ANGLE, ASPECT, NEAR, FAR);
camera.rotation.order = 'YXZ';
So my NEAR parameter is nowhere near 2, the distance between the camera and the ground. You can see in the second image that I even move up the camera with my PointerLockControls and still run into the issue.
Can anyone diagnose my issue?
I also tested my issue by seeing if this bug occurred with a static camera as well. It does.
Additionally, this problem only happens with the logarithmic depth buffer, as it doesn't happen with the default depth buffer.
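(For reference, the logarithmic depth buffer is enabled through a renderer constructor option; this line isn't in the snippets here, but it's the standard way to turn it on:)

renderer = new THREE.WebGLRenderer({ logarithmicDepthBuffer: true });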
I have my camera as a child to a controls object, which is defined as follows:
controls = new THREE.PointerLockControls(camera);
controls.getObject().position.set(strtx, 50, strtz);
scene.add(controls.getObject());
camera.position.z += 2;
camera.position.y += .1;
Here's the relevant code for PointerLockControls:
var pitchObject, yawObject;
var v = new THREE.Vector3(0, 0, -1);
THREE.PointerLockControls = function(camera){
var scope = this;
camera.rotation.set(0, 0, 0);
pitchObject = new THREE.Object3D();
pitchObject.rotation.x -= 0.3;
pitchObject.add(camera);
yawObject = new THREE.Object3D();
yawObject.position.y = 10;
yawObject.add(pitchObject);
var PI_2 = Math.PI / 2;
var onMouseMove = function(event){
if (scope.enabled === false) return;
var movementX = event.movementX || event.mozMovementX || event.webkitMovementX || 0;
var movementY = event.movementY || event.mozMovementY || event.webkitMovementY || 0;
yawObject.rotation.y -= movementX * 0.002;
pitchObject.rotation.x -= movementY * 0.002;
pitchObject.rotation.x = Math.max( - PI_2, Math.min( PI_2, pitchObject.rotation.x ) );
};
this.dispose = function() {
document.removeEventListener( 'mousemove', onMouseMove, false );
};
document.addEventListener( 'mousemove', onMouseMove, false );
this.enabled = false;
this.getObject = function () {
return yawObject;
};
this.getDirection = function() {
// assumes the camera itself is not rotated
var rotation = new THREE.Euler(0, 0, 0, "YXZ");
var direction = new THREE.Vector3(0, 0, -1);
return function() {
rotation.set(pitchObject.rotation.x, yawObject.rotation.y, 0);
v.copy(direction).applyEuler(rotation);
return v;
};
}();
};
You'll also notice that it's only the ground that is being culled, not other objects.
Edit:
I've whipped up an isolated environment that shows the larger issue. In the first image, I have a flat PlaneBufferGeometry that has 400 segments for both width and height, defined by var g = new THREE.PlaneBufferGeometry(380, 380, 400, 400);. Even getting very close to the surface, no clipping is present:
However, if I provide only 1 segment, var g = new THREE.PlaneBufferGeometry(380, 380, 1, 1);, the clipping is present
I'm not sure if this intended in Three.js/WebGL, but it seems that I'll need to do something to work around it.
I don't think this is a bug; I think this is a feature of how the depth buffer works under the different settings. Look at this example. On the right, the depth buffer can't make up its mind between the letters in "microscopic" and the sphere. This is because it has lower precision at very small scales and starts doing rounding that oscillates between one object and another, favoring draw order over z-depth.
It's always a tradeoff. If you want to avoid this issue, you can try raising the scale of your scene overall, so that the camera's 'near' will never be so close to something that it gets rounded off; in other words, work in a number range that won't be rounded in the exponential model of the logarithmic z-buffer.
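A rough sketch of that idea (the factor is arbitrary, purely illustrative):

// author the world in larger units so the near plane never has to be microscopic
var SCALE = 1000; // arbitrary; pick whatever keeps your nearest geometry well above 'near'
var camera = new THREE.PerspectiveCamera(70, WIDTH / HEIGHT, 0.1, 9000 * SCALE);
scene.scale.set(SCALE, SCALE, SCALE); // or multiply your coordinates directly when building geometry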
Also, another question: how is the blue defined? Maybe what you're seeing is not clipping from being too close, but confusion over whether the blue or the ground is closer. If it's just a blue box encompassing everything, you could try making it bigger and more distant from the ground.
EDIT:
Okay, this looks like it should work, so I would start looking for edge cases. What can you do to change the scene so that it does work? What can you do to make other things start breaking?
try moving the landscape far down/ far up (does the issue persist when looking up instead of down at it, does it persist even when it's unquestionably far away?)
try rotating the landscape
try changing the camera FOV
try changing the camera far plane
try changing the camera near plane from 1e-x notation to .000001, .0001, .01, .1, etc., and see what effect it has.
console.log the camera object in your render function, and make sure that the fov, near, far, etc. are as you set them on setup and not being overwritten and reset to defaults. Check what it prints out in Chrome's developer tools; you can browse the whole object, check position, parent name, all that stuff.
Basically, I don't see a blatant mistake, so I would guess it's something hard to spot, or it's working exactly as it should. Figure out what you can do to improve the effect / make it worse, and that will clarify a direction to go.
A good rule of thumb for debugging is to try and just take things to an extreme, without trying to fix it, or keep the code true to its purpose, and just see in what way it breaks further/changes. report back when you find something.
(Please also refer to my illustration of the problem: http://i.stack.imgur.com/SfwwP.png)
problem description and ideas
I am creating several objects in the standard XYZ coordinate system.
Those are added to a THREE.Group.
Please think of the group as a wall with several framed images hung on it.
I want to create my frame objects with, e.g., dimensions of (40, 20, 0.5), so I get a rather flat, landscape-format frame/artwork. I create and place several of those. Then I add them to the group, which I want to freely rotate in the world along two vectors, start and end.
The problem I am struggling with is how to rotate and position the group from a given vector start to a give vector end.
So far I tried to solve it with a THREE.Matrix4().lookAt():
var group = new THREE.Group();
startVec = new THREE.Vector3( 100, 0, -100 );
endVec = new THREE.Vector3( -200, 0, 200 );
matrix = new THREE.Matrix4().lookAt(startVec, endVec, new THREE.Vector3( 0, 1, 0 ));
group.matrixAutoUpdate = false;
var object1 = new THREE.Mesh(new THREE.BoxGeometry(0.5, 20, 40), mat);
// etc. -> notice the swapping of X and Z coordinates I have to do.
group.add(object1);
group.applyMatrix(matrix);
You can see the example on jsfiddle:
http://jsfiddle.net/y6b9Lumw/1/
If you open the jsfiddle, you can see that the objects are not placed along the line from start to end, although they are placed along the group's internal X axis like: addBox(new THREE.Vector3( i * 30, 0 , 0 ));
Full code:
<html>
<head>
<title>testing a rotation matrix</title>
<style>body { margin: 0; } canvas { width: 100%; height: 100% } </style>
</head> <body>
<script src="https://cdnjs.cloudflare.com/ajax/libs/three.js/r70/three.min.js"></script>
<script>
var scene, camera, renderer, light, matrix;
var startVec, endVec;
var boxes;
function addBox(v) {
var boxmesh;
var boxgeom = new THREE.BoxGeometry( 15, 5, 1 );
var boxmaterial = new THREE.MeshLambertMaterial( {color: 0xdd2222} );
boxmesh = new THREE.Mesh( boxgeom, boxmaterial );
//boxmesh.matrix.makeRotationY(Math.PI / 2);
boxmesh.matrix.setPosition(v);
boxmesh.matrixAutoUpdate = false;
boxes.add(boxmesh);
}
function init() {
renderer = new THREE.WebGLRenderer();
renderer.setClearColor( 0x222222 );
renderer.setPixelRatio( window.devicePixelRatio );
renderer.setSize( window.innerWidth, window.innerHeight );
document.body.appendChild( renderer.domElement );
scene = new THREE.Scene();
camera = new THREE.PerspectiveCamera( 45, window.innerWidth / window.innerHeight, 1, 1000 );
camera.position.set( -20, 30, 300 );
var light = new THREE.PointLight (0xCCCCCC, 0.5 );
scene.add(light);
startVec = new THREE.Vector3( 100, 0, -100 );
endVec = new THREE.Vector3( -200, 0, 200 );
matrix = new THREE.Matrix4().lookAt(startVec, endVec, new THREE.Vector3( 0, 1, 0 ));
boxes = new THREE.Group();
for (var i = -100; i < 100; i++) {
addBox(new THREE.Vector3( i * 30, 0 , 0 ));
}
boxes.matrixAutoUpdate = false;
boxes.applyMatrix(matrix);
scene.add(boxes);
var linegeometry = new THREE.Geometry();
linegeometry.vertices.push( startVec, endVec);
var line = new THREE.Line(linegeometry, new THREE.LineBasicMaterial({color: 0x33eeef}));
scene.add(line);
render();
}
function render(){
requestAnimationFrame(render);
renderer.render(scene, camera);
}
init();
</script>
</body> </html>
This only works nicely to some extent, because the look vector is oriented along the Z axis (I think it is (0, 0, 1)), so unfortunately the objects inside the group get rotated like that as well.
This is actually what you would expect from a lookAt() rotational transformation. It's not what I would like to have, though, as this places all the children in the group along their Z axis instead of their X axis.
In order to have things look properly I had to initialize my groups children with X and Z swapped.
Instead of:
var object1 = new THREE.BoxGeometry( 40, 20, 0.5 );
I have to do:
var object1 = new THREE.BoxGeometry( 0.5, 20, 40 );
The same applies if I want to translate objects in the group on the X axis: I have to use the Z axis instead, as that is the look vector along which the whole wall is oriented by the matrix transformation.
my question is:
How does my matrix have to be constructed to accomplish what I want: normally create objects, and then have their X axis placed along vector start and vector end, like placing artworks on a wall, which can be moved around?
I thought about creating a matrix whose X axis is end.sub(start), i.e. the vector from start to end. Might this be what I need to do? If so, how would I construct it?
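For what it's worth, such a matrix could be sketched with Matrix4.makeBasis (this construction is illustrative, not from the original post, and assumes the direction is not parallel to up):

// local X is the start-to-end direction; Y and Z complete an orthonormal basis
var xAxis = new THREE.Vector3().subVectors(end, start).normalize();
var up = new THREE.Vector3(0, 1, 0);
var zAxis = new THREE.Vector3().crossVectors(xAxis, up).normalize(); // perpendicular to both
var yAxis = new THREE.Vector3().crossVectors(zAxis, xAxis); // recomputed so all three axes are orthogonal
var matrix = new THREE.Matrix4().makeBasis(xAxis, yAxis, zAxis);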
problem illustration with an image
I tried to illustrate my situation in two images: one being the wall, one being the wall inside the world, with the same objects attached to the wall (see top of the post).
In the first figure you see the local coordinate system of the group, with two added children, one translated along X.
In the second figure, you can see the same local system inside the world, how I would like it to be. The green axes are the world axes. The start and end vectors are shown as well. You can see that both boxes are properly placed along that line.
I would like to answer my own question by disregarding the idea of manipulating the matrix myself. Thanks to @WestLangley I adapted my idea to the following, setting the group's quaternion via .setFromUnitVectors.
So the rotation is derived from the rotation from the x-axis to the direction vector of start and end, as explained in three.js' documentation:
"Sets this quaternion to the rotation required to rotate direction vector vFrom to direction vector vTo."
(http://threejs.org/docs/#Reference/Math/Quaternion.setFromUnitVectors)
Below is the relevant part of my solution:
// define the starting and ending vector of the wall
start = new THREE.Vector3( -130, -40, 300 );
end = new THREE.Vector3( 60, 20, -100 );
// dir is the direction from start to end, normalized
var dir = new THREE.Vector3().copy(end).sub(start).normalize();
// position wall in the middle of start and end
var middle = new THREE.Vector3().copy(start).lerp(end, 0.5);
wall.position.copy(middle);
// rotate wall by applying rotation from X-Axis to dir
wall.quaternion.setFromUnitVectors( new THREE.Vector3(1, 0, 0), dir );
The result can be seen here: http://jsfiddle.net/L9dmqqvy/1/
I'm working my way through this book, and I'm doing okay I guess, but I've hit something I do not really get.
Below is how you can log to the console an object in 3D space that you click on:
renderer.domElement.addEventListener('mousedown', function(event) {
var vector = new THREE.Vector3(
renderer.devicePixelRatio * (event.pageX - this.offsetLeft) / this.width * 2 - 1,
-renderer.devicePixelRatio * (event.pageY - this.offsetTop) / this.height * 2 + 1,
0
);
projector.unprojectVector(vector, camera);
var raycaster = new THREE.Raycaster(
camera.position,
vector.sub(camera.position).normalize()
);
var intersects = raycaster.intersectObjects(OBJECTS);
if (intersects.length) {
console.log(intersects[0]);
}
}, false);
Here's the book's explanation on how this code works:
The previous code listens to the mousedown event on the renderer's canvas.
Get that: we're finding the domElement the renderer is using via renderer.domElement. We're then binding an event listener to it with addEventListener, and specifying we want to listen for a mousedown. When the mouse is clicked, we launch an anonymous function and pass the event variable into the function.
Then,
it creates a new Vector3 instance with the mouse's coordinates on the screen
relative to the center of the canvas as a percent of the canvas width.
What? I get how we're creating a new instance with new THREE.Vector3, and I get that the three arguments Vector3 takes are its x, y and z coordinates, but that's where my understanding completely and utterly breaks down.
Firstly, I'm making an assumption here, but to plot a vector, surely you need two points in space in order to project? If you give it just one set of coords, how does it know what direction to project from? My guess is that you actually use the Raycaster to plot the "vector"...
Now onto the arguments we're passing to Vector3... I get how z is 0. Because we're only interested in where we're clicking on the screen. We can either click up or down, left or right, but not into or out of the screen, so we set that to zero. Now let's tackle x:
renderer.devicePixelRatio * (event.pageX - this.offsetLeft) / this.width * 2 - 1,
We're getting the PixelRatio of the device, timesing it by where we clicked along the x axis, dividing by the renderer's domElement width, timesing this by two and taking away one.
When you don't get something, you need to say what you do get so people can best help you out. So I feel like such a fool when I say:
I don't get why we even need the pixel ratio
I don't get why we times that by where we've clicked along the x
I don't get why we divide that by the width
I utterly do not get why we need to times by 2 and take away 1. Times by 2, take away 1. That could genuinely be times by an elephant, take away a peanut and it would make as much sense.
I get y even less:
-renderer.devicePixelRatio * (event.pageY - this.offsetTop) / this.height * 2 + 1,
Why are we now randomly using -devicePixelRatio? Why are we now deciding to add one rather than minus one?
That vector is then un-projected (from 2D into 3D space) relative to the camera.
What?
Once we have the point in 3D space representing the mouse's location,
we draw a line to it using the Raycaster. The two arguments that it
receives are the starting point and the direction to the ending point.
Okay, I get that, it's what I was mentioning above. How we need two points to plot a "vector". In THREE talk, a vector appears to be called a "raycaster".
However, the two points we're passing to it as arguments don't make much sense. If we were passing in the camera's position and the vector's position and drawing the projection from those two points I'd get that, and indeed we are using camera.position for the first point, but
vector.sub(camera.position).normalize()
Why are we subtracting the camera.position? Why are we normalizing? Why does this useless f***** book not think to explain anything?
We get the direction by subtracting the mouse and camera positions and
then normalizing the result, which divides each dimension by the
length of the vector to scale it so that no dimension has a value
greater than 1.
What? I'm not being lazy, not a word past by makes sense here.
Finally, we use the ray to check which objects are located in the
given direction (that is, under the mouse) with the intersectObjects
method. OBJECTS is an array of objects (generally meshes) to check; be
sure to change it appropriately for your code. An array of objects
that are behind the mouse are returned and sorted by distance, so the
first result is the object that was clicked. Each object in the
intersects array has an object, point, face, and distance property.
Respectively, the values of these properties are the clicked object
(generally a Mesh), a Vector3 instance representing the clicked
location in space, the Face3 instance at the clicked location, and the
distance from the camera to the clicked point.
I get that. We grab all the objects the vector passes through, put them to an array in distance order and log the first one, i.e. the nearest one:
console.log(intersects[0]);
And, in all honesty, do you think I should give up with THREE? I mean, I've gotten somewhere with it certainly, and I understand all the programming aspects of it, creating new instances, using data objects such as arrays, using anonymous functions and passing in variables, but whenever I hit something mathematical I seem to grind to a soul-crushing halt.
Or is this actually difficult? Did you find this tricky? It's just that the book doesn't feel it's necessary to explain in much detail, and neither do other answers, as though this stuff is just normal for most people. I feel like such an idiot. Should I give up? I want to create 3D games. I really, really want to, but I am drawn to the poetic idea of creating an entire world. Not math. If I said I didn't find math difficult, I would be lying.
I understand your troubles and I'm here to help. It seems you have one principal question: what operations are performed on the vector to prepare it for click detection?
Let's look back at the original declaration of vector:
var vector = new THREE.Vector3(
renderer.devicePixelRatio * (event.pageX - this.offsetLeft) / this.width * 2 - 1,
-renderer.devicePixelRatio * (event.pageY - this.offsetTop) / this.height * 2 + 1,
0
);
renderer.devicePixelRatio relates to a ratio of virtual site pixels /
real device pixels
event.pageX and .pageY are mouseX, mouseY
The this context is renderer.domElement, so .width, .height, .offsetLeft/.offsetTop relate to that
The trailing * 2 - 1 (and * 2 + 1) remaps a 0..1 value into the -1..+1 range of normalized device coordinates, which is what unprojectVector expects
We don't care about the z-value, THREE will handle that for us. X and Y are our chief concern. Let's derive them:
We first find the distance of the mouse to the edge of the canvas: event.pageX - this.offsetLeft
We divide that by this.width to get the mouseX as a percentage of the screen width
We multiply by renderer.devicePixelRatio to convert from device pixels to site pixels
We multiply by 2 and subtract 1 to remap that 0..1 fraction into the -1..+1 range of normalized device coordinates (the coordinate system WebGL and unprojectVector work in)
The - 1 is that same remapping, not a magic offset: 0..1 doubled is 0..2, and shifting it down by one gives -1..+1
For y, we multiply the whole expression by -1 to compensate for the inverted coordinate system (0 is top, this.height is bottom)
Thus you get the following arguments for the vector:
renderer.devicePixelRatio * (event.pageX - this.offsetLeft) / this.width * 2 - 1,
-renderer.devicePixelRatio * (event.pageY - this.offsetTop) / this.height * 2 + 1,
0
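Annotated, the same computation reads (variable names here are mine):

// mouse offset within the canvas as a 0..1 fraction; devicePixelRatio converts
// CSS-pixel offsets to match this.width/this.height, which are drawing-buffer sizes
var fx = renderer.devicePixelRatio * (event.pageX - this.offsetLeft) / this.width;
var fy = renderer.devicePixelRatio * (event.pageY - this.offsetTop) / this.height;
// remap 0..1 to -1..+1 normalized device coordinates; y is negated because
// page coordinates grow downward while NDC y grows upward
var ndcX = fx * 2 - 1;
var ndcY = -fy * 2 + 1;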
Now, for the next bit, a few terms:
Normalizing a vector means scaling it so that its length becomes exactly 1 while its direction stays the same. To do so, you simply divide the x, y, and z components of the vector by the vector's magnitude. It seems useless, but it's important because it creates a unit vector (magnitude = 1) in the direction of the mouse vector! (See the quick example after this list.)
A Raycaster casts a vector through the 3D landscape produced in the canvas. Its constructor is THREE.Raycaster( origin, direction )
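A quick numeric example of normalizing:

var v = new THREE.Vector3(3, 4, 0);
v.length();    // 5
v.normalize(); // v is now (0.6, 0.8, 0): same direction, length exactly 1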
With these terms in mind, I can explain why we do this: vector.sub(camera.position).normalize(). First, we get the vector describing the distance from the mouse position vector to the camera position vector, vector.sub(camera.position). Then, we normalize it to make it a direction vector (again, magnitude = 1). This way, we're casting a vector from the camera to the 3D space in the direction of the mouse position! This operation allows us to then figure out any objects that are under the mouse by comparing the object position to the ray's vector.
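Incidentally, newer three.js versions fold the whole unproject-subtract-normalize dance into a single call (a sketch, reusing the NDC values from the snippet above):

var raycaster = new THREE.Raycaster();
raycaster.setFromCamera(new THREE.Vector2(ndcX, ndcY), camera); // replaces unprojectVector + sub + normalize
var intersects = raycaster.intersectObjects(OBJECTS);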
I hope this helps. If you have any more questions, feel free to comment and I will answer them as soon as possible.
Oh, and don't let the math discourage you. THREE.js is by nature a math-heavy language because you're manipulating objects in 3D space, but experience will help you get past these kinds of understanding roadblocks. I would continue learning and return to Stack Overflow with your questions. It may take some time to develop an aptitude for the math, but you won't learn if you don't try!
This is more universal: it works no matter where the renderer's DOM element is located, regardless of the padding and margins of the element and its ancestors.
var rect = renderer.domElement.getBoundingClientRect();
mouse.x = ( ( event.clientX - rect.left ) / ( rect.right - rect.left ) ) * 2 - 1;
mouse.y = - ( ( event.clientY - rect.top ) / ( rect.bottom - rect.top ) ) * 2 + 1;
Here is a demo; scroll to the bottom to click the cube.
<!DOCTYPE html>
<html>
<head>
<script src="http://threejs.org/build/three.min.js"></script>
<link rel="stylesheet" href="http://libs.baidu.com/bootstrap/3.0.3/css/bootstrap.min.css" />
<style>
body {
font-family: Monospace;
background-color: #fff;
margin: 0px;
}
#canvas {
background-color: #000;
width: 200px;
height: 200px;
border: 1px solid black;
margin: 10px;
padding: 0px;
top: 10px;
left: 100px;
}
.border {
padding:10px;
margin:10px;
height:3000px;
overflow:scroll;
}
</style>
</head>
<body>
<div class="border">
<div style="min-height:1000px;"></div>
<div class="border">
<div id="canvas"></div>
</div>
</div>
<script>
// Three.js ray.intersects with offset canvas
var container, camera, scene, renderer, mesh,
objects = [],
count = 0,
CANVAS_WIDTH = 200,
CANVAS_HEIGHT = 200;
// info
info = document.createElement( 'div' );
info.style.position = 'absolute';
info.style.top = '30px';
info.style.width = '100%';
info.style.textAlign = 'center';
info.style.color = '#f00';
info.style.backgroundColor = 'transparent';
info.style.zIndex = '1';
info.style.fontFamily = 'Monospace';
info.innerHTML = 'INTERSECT Count: ' + count;
info.style.userSelect = "none";
info.style.webkitUserSelect = "none";
info.style.MozUserSelect = "none";
document.body.appendChild( info );
container = document.getElementById( 'canvas' );
renderer = new THREE.WebGLRenderer();
renderer.setSize( CANVAS_WIDTH, CANVAS_HEIGHT );
container.appendChild( renderer.domElement );
scene = new THREE.Scene();
camera = new THREE.PerspectiveCamera( 45, CANVAS_WIDTH / CANVAS_HEIGHT, 1, 1000 );
camera.position.y = 250;
camera.position.z = 500;
camera.lookAt( scene.position );
scene.add( camera );
scene.add( new THREE.AmbientLight( 0x222222 ) );
var light = new THREE.PointLight( 0xffffff, 1 );
camera.add( light );
mesh = new THREE.Mesh(
new THREE.BoxGeometry( 200, 200, 200, 1, 1, 1 ),
new THREE.MeshPhongMaterial( { color : 0x0080ff } )
);
scene.add( mesh );
objects.push( mesh );
// find intersections
var raycaster = new THREE.Raycaster();
var mouse = new THREE.Vector2();
// mouse listener
document.addEventListener( 'mousedown', function( event ) {
var rect = renderer.domElement.getBoundingClientRect();
mouse.x = ( ( event.clientX - rect.left ) / ( rect.right - rect.left ) ) * 2 - 1;
mouse.y = - ( ( event.clientY - rect.top ) / ( rect.bottom - rect.top ) ) * 2 + 1;
raycaster.setFromCamera( mouse, camera );
intersects = raycaster.intersectObjects( objects );
if ( intersects.length > 0 ) {
info.innerHTML = 'INTERSECT Count: ' + ++count;
}
}, false );
function render() {
mesh.rotation.y += 0.01;
renderer.render( scene, camera );
}
(function animate() {
requestAnimationFrame( animate );
render();
})();
</script>
</body>
</html>