Using Three.js, is there any way to limit x, y, z movement? - three.js

In Three.js I am using FirstPersonControls to move the camera around. I would like to set limits (max and min) on how far the viewer can move in the x and z directions. Is there a property that I can use?
Currently I have tried amending the code in FirstPersonControls.js, though the effect is not great:
var targetPosition = this.target,
    position = this.object.position;

// Keep the look-ahead point inside a roughly circular boundary of
// radius 6000, centred near ( -500, -500 ) in the x/z plane.
var maxx = Math.sqrt( Math.pow( 6000, 2 ) - Math.pow( position.z + 100 * Math.sin( this.phi ) * Math.sin( this.theta ) + 500, 2 ) ) - 500;
var minx = - Math.sqrt( Math.pow( 6000, 2 ) - Math.pow( position.z + 100 * Math.sin( this.phi ) * Math.sin( this.theta ) + 500, 2 ) ) + 500;
var maxz = Math.sqrt( Math.pow( 6000, 2 ) - Math.pow( position.x + 100 * Math.sin( this.phi ) * Math.cos( this.theta ) - 500, 2 ) ) - 500;
var minz = - Math.sqrt( Math.pow( 6000, 2 ) - Math.pow( position.x + 100 * Math.sin( this.phi ) * Math.cos( this.theta ) - 500, 2 ) ) + 500;

var lookX = position.x + 100 * Math.sin( this.phi ) * Math.cos( this.theta );
var lookZ = position.z + 100 * Math.sin( this.phi ) * Math.sin( this.theta );

if ( lookX <= maxx && lookX >= minx ) {
    targetPosition.x = lookX;
} else {
    // out of bounds: step the camera back and look from there
    position.x = position.x - 100 * Math.sin( this.phi ) * Math.cos( this.theta );
    targetPosition.x = position.x;
}

targetPosition.y = position.y + 100 * Math.cos( this.phi );

if ( lookZ <= maxz && lookZ >= minz ) {
    targetPosition.z = lookZ;
} else {
    position.z = position.z - 100 * Math.sin( this.phi ) * Math.sin( this.theta );
    targetPosition.z = position.z;
}

this.object.lookAt( targetPosition );

Ultimately the question was aimed at people familiar with FirstPersonControls in the three.js library.
I wanted to place max and min values on x and z, to prevent the user/camera moving too far in either direction; the solutions I found (the one above and a later improved version) were not sophisticated, and resulted in juddering and other problems.
Eventually I removed FirstPersonControls from the code and used a path instead, so that the user would be guided automatically around the 3D space. This was not ideal; I would have preferred the user to guide themselves, with boundaries to stop them straying too far from the defined scene.
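For reference, a simpler route that avoids editing the library is to let FirstPersonControls move the camera freely and clamp its position back into bounds after each update. A minimal sketch, assuming the usual controls/clock/renderer setup and an illustrative square boundary of ±2000 world units:
var BOUND = 2000; // hypothetical half-width of the allowed region
var clock = new THREE.Clock();

function animate() {
    requestAnimationFrame( animate );
    controls.update( clock.getDelta() ); // let the controls move the camera first
    // then clamp x and z back inside the boundary
    camera.position.x = Math.max( -BOUND, Math.min( BOUND, camera.position.x ) );
    camera.position.z = Math.max( -BOUND, Math.min( BOUND, camera.position.z ) );
    renderer.render( scene, camera );
}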

Related

How to choose spiral parameters?

Could someone help me understand what parameters to choose to get a spiral like the one in this question: Draw equidistant points on a spiral?
I don't understand this parameter: rotation - overall rotation of the spiral ('0' = no rotation, '1' = 360 degrees, '180/360' = 180 degrees). I would be grateful if someone could write some sets of parameters (sides, coils, rotation) that produce a spiral.
Here is the code in Matlab:
clc
clear all

centerX = 0;
centerY = 0;
radius = 10;
coils = 30;
rotation = 360;   % added directly to theta below, so effectively in radians
chord = 2;

thetaMax = coils * 2 * pi;
awayStep = radius / thetaMax;   % radial growth per radian

i = 1;
theta = chord / awayStep;
while theta <= thetaMax         % a for loop would ignore updates to theta
    away = awayStep * theta;
    around = theta + rotation;
    x(i) = centerX + cos( around ) * away;
    y(i) = centerY + sin( around ) * away;
    i = i + 1;
    % Step theta so consecutive points are about one chord apart.
    % A first approximation is theta = theta + chord / away; solving
    %   awayStep * delta^2 + 2 * away * delta - 2 * chord == 0
    % for delta (quadratic formula with a = awayStep, b = 2*away,
    % c = -2*chord) gives a better step:
    delta = ( -2 * away + sqrt( 4 * away * away + 8 * awayStep * chord ) ) / ( 2 * awayStep );
    theta = theta + delta;
end

v = [0 x];
w = [0 y];
scatter( v, w )
Thank you in advance.
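For what it's worth, the quoted description measures rotation in turns ('1' = 360 degrees), while the Matlab code adds rotation directly to theta, which is in radians, so rotation = 360 means 360 radians rather than one full turn. A minimal JavaScript sketch of the same stepping scheme with rotation treated as turns (my reading of the quoted description, not verified against the original answer):
// Equidistant points on an Archimedean spiral; rotation given in turns.
function spiralPoints( radius, coils, rotationTurns, chord ) {
    var points = [];
    var thetaMax = coils * 2 * Math.PI;
    var awayStep = radius / thetaMax;        // radial growth per radian
    var phase = rotationTurns * 2 * Math.PI; // '1' = one full turn
    var theta = chord / awayStep;
    while ( theta <= thetaMax ) {
        var away = awayStep * theta;
        points.push( {
            x: Math.cos( theta + phase ) * away,
            y: Math.sin( theta + phase ) * away
        } );
        // advance so consecutive points are roughly one chord apart
        theta += ( -2 * away + Math.sqrt( 4 * away * away + 8 * awayStep * chord ) ) / ( 2 * awayStep );
    }
    return points;
}

// e.g. spiralPoints( 10, 30, 0.5, 2 ) gives 30 coils rotated half a turn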

Circle around the mouse? In three.js

When I move the mouse over an object, it flies away. How do I make the object move out of a circle around the mouse? The radius is 100, and I have attached a circle to the mouse.
Here is my code:
function onDocumentMouseMove( event ) {
    // keep the on-screen circle centred on the cursor
    fly.style.left = ( event.clientX - maxR ) + 'px';
    fly.style.top = ( event.clientY - maxR ) + 'px';
    event.preventDefault();
    // mouse position in normalized device coordinates
    mouse.x = ( event.clientX / renderer.domElement.width ) * 2 - 1;
    mouse.y = - ( event.clientY / renderer.domElement.height ) * 2 + 1;
    raycaster.setFromCamera( mouse, camera );
    var intersects = raycaster.intersectObjects( scene.children );
    if ( intersects.length > 0 ) {
        // tween the hit object to a random position
        new TWEEN.Tween( intersects[ 0 ].object.position ).to( {
            x: Math.random() * 750 - 375,
            y: Math.random() * 750 - 375,
            /* z: Math.random() * 400 - 200 */ }, 10000 )
            .easing( TWEEN.Easing.Elastic.Out ).start();
    }
}
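One way to get the described behaviour is to tween the hit object away from the intersection point to just outside a fixed radius, instead of to a random position. A minimal sketch, assuming the action stays near the x/y plane; the world-space radius here is a hypothetical parameter, distinct from the on-screen maxR, which is in pixels:
// Sketch: push the hit object just outside a circle around the
// cursor's intersection point. `radius` is in world units.
function pushOutOfCircle( object, intersectPoint, radius ) {
    var dir = new THREE.Vector3().subVectors( object.position, intersectPoint );
    dir.z = 0;                                      // keep the motion in the x/y plane
    if ( dir.lengthSq() === 0 ) dir.set( 1, 0, 0 ); // cursor dead-centre: pick a direction
    dir.normalize().multiplyScalar( radius );
    var target = intersectPoint.clone().add( dir );
    new TWEEN.Tween( object.position )
        .to( { x: target.x, y: target.y }, 1000 )
        .easing( TWEEN.Easing.Elastic.Out )
        .start();
}

// inside the if ( intersects.length > 0 ) block:
// pushOutOfCircle( intersects[ 0 ].object, intersects[ 0 ].point, 100 );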

gl_PointSize to screen coordinates

When I calculate gl_PointSize the same way I do in the vertex shader, I get a value "in pixels" (according to http://www.opengl.org/sdk/docs/manglsl/xhtml/gl_PointSize.xml). Yet this value doesn't match the measured width and height of the point on the screen.
The difference between the calculated and measured sizes is not constant, it seems. Calculated values range from 1 (very far away) to 4 (very near).
Current code (with three.js, but nothing magic) that tries to calculate the size of a point on screen:
var width = window.innerWidth, height = window.innerHeight;
var widthHalf = width / 2, heightHalf = height / 2;

var projector = new THREE.Projector();
var vector = new THREE.Vector3();

var matrixWorld = new THREE.Matrix4();
matrixWorld.setPosition( focusedArtCluster.object3D.localToWorld( position ) );

var modelViewMatrix = camera.matrixWorldInverse.clone().multiply( matrixWorld );
var mvPosition = ( new THREE.Vector4( position.x, position.y, position.z, 1.0 ) ).applyMatrix4( modelViewMatrix );

// same attenuation formula as in the vertex shader below
var gl_PointSize = zoomLevels.options.zoom * ( 180.0 / Math.sqrt( mvPosition.x * mvPosition.x + mvPosition.y * mvPosition.y + mvPosition.z * mvPosition.z ) );

// project to normalized device coordinates, then map to screen space
projector.projectVector( vector.getPositionFromMatrix( matrixWorld ), camera );
vector.x = ( vector.x * widthHalf ) + widthHalf;
vector.y = - ( vector.y * heightHalf ) + heightHalf;

console.log( vector.x, vector.y, gl_PointSize );
Let me clarify:
The goal is to get the screen size of a point, in pixels.
My vertex shader:
vec4 mvPosition = modelViewMatrix * vec4( position, 1.0 );
gl_PointSize = zoom * ( 180.0 / length( mvPosition.xyz ) );
gl_Position = projectionMatrix * mvPosition;
Since in GLSL matrices are column-major and in three.js row-major I needed to transpose the matrices in order to have the correct matrix multiplications:
var modelViewMatrix = camera.matrixWorldInverse.clone().transpose().multiply( matrixWorld).transpose();
Further, there's always an offset of 20px from the actual screen position. I haven't figured out why yet, but I had to do:
vector.x = ( vector.x * widthHalf ) + widthHalf - 20;
vector.y = - ( vector.y * heightHalf ) + heightHalf - 20;
Thirdly, we'll have to take browser zoom into account. For the width and height we probably have to work with renderer.devicePixelRatio somehow. I hope to figure out how soon, and I'll post it here.
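For reference, a sketch of that last step, assuming the canvas backing store is scaled by the device pixel ratio (for example via renderer.setPixelRatio); gl_PointSize is specified in framebuffer pixels, so an on-screen measurement in CSS pixels needs the same division:
// NDC -> CSS-pixel screen coordinates, accounting for devicePixelRatio.
// renderer.domElement.width/height are framebuffer (physical) pixels,
// so divide them back down to CSS pixels before mapping.
var dpr = window.devicePixelRatio || 1;
var cssWidth = renderer.domElement.width / dpr;
var cssHeight = renderer.domElement.height / dpr;

vector.x = ( vector.x + 1 ) * cssWidth / 2;
vector.y = ( 1 - vector.y ) * cssHeight / 2;

// a point drawn at gl_PointSize framebuffer pixels measures
// gl_PointSize / dpr CSS pixels with a ruler on screen
var cssPointSize = gl_PointSize / dpr;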
Thanks for the help nonetheless. Glad it's solved.

edge detection on depth buffer [cel shading]

I am currently writing a cel-shading shader, but I'm having issues with edge detection. At the moment I am using the following code, which applies Laplacian edge detection to non-linear depth-buffer values:
uniform sampler2D depth_tex; // note: "sampler2d" is invalid; GLSL type names are case-sensitive

void main() {
    vec4 color_out;
    float znear = 1.0;
    float zfar = 50000.0;
    float depthm = texture2D( depth_tex, gl_TexCoord[0].xy ).r;
    // make the lines thicker at close range
    float lineAmp = mix( 0.001, 0.0, clamp( ( 500.0 / ( zfar + znear - ( 2.0 * depthm - 1.0 ) * ( zfar - znear ) ) / 2.0 ), 0.0, 1.0 ) );
    float depthn = texture2D( depth_tex, gl_TexCoord[0].xy + vec2( ( 0.002 + lineAmp ) * 0.625, 0.0 ) ).r;
    depthn = depthn / depthm;
    float depths = texture2D( depth_tex, gl_TexCoord[0].xy - vec2( ( 0.002 + lineAmp ) * 0.625, 0.0 ) ).r;
    depths = depths / depthm;
    float depthw = texture2D( depth_tex, gl_TexCoord[0].xy + vec2( 0.0, 0.002 + lineAmp ) ).r;
    depthw = depthw / depthm;
    float depthe = texture2D( depth_tex, gl_TexCoord[0].xy - vec2( 0.0, 0.002 + lineAmp ) ).r;
    depthe = depthe / depthm;
    // Laplacian: sums to zero on smooth depth, deviates at discontinuities
    float Contour = -4.0 + depthn + depths + depthw + depthe;
    float lineAmp2 = 100.0 * clamp( depthm - 0.99, 0.0, 1.0 );
    lineAmp2 = lineAmp2 * lineAmp2;
    Contour = ( 512.0 + lineAmp2 * 204800.0 ) * Contour;
    if ( Contour > 0.15 ) {
        Contour = ( 0.15 - Contour ) / 1.5 + 0.5;
    } else {
        Contour = 1.0;
    }
    color_out.rgb = color_out.rgb * Contour; // note: reads color_out before it is ever written
    color_out.a = 1.0;
    gl_FragColor = color_out;
}
but it is hackish (note lineAmp2), and details at large distances are lost. So I made up another algorithm (Laplacian edge detection is still in use):
1. Get five samples from the depth buffer: depthm, depthn, depths, depthw, depthe, where depthm is exactly at the processed fragment, depthn is slightly above it, depths slightly below, and so on.
2. Calculate their real coordinates in camera space (converting to linear depth along the way).
3. Compare each side sample to the middle sample by subtracting, normalize each difference by dividing by the distance between the two camera-space points, and add the four results. In theory this should help in the situation where, at large distances from the camera, two fragments are very close on the screen but very far apart in camera space, which is fatal for linear depth testing.
where:
2.a convert the non-linear depth to linear using the algorithm from http://stackoverflow.com/questions/6652253/getting-the-true-z-value-from-the-depth-buffer
exact code:
uniform sampler2D depthBuffTex;
uniform float zNear;
uniform float zFar;
varying vec2 vTexCoord;

void main( void ) {
    float z_b = texture2D( depthBuffTex, vTexCoord ).x; // raw depth-buffer value
    float z_n = 2.0 * z_b - 1.0;                        // back to NDC z
    float z_e = 2.0 * zNear * zFar / ( zFar + zNear - z_n * ( zFar - zNear ) ); // linear eye-space depth
}
2.b convert the screen coordinates to [tan a, tan b], where a is the horizontal angle and b is the vertical one. There is probably better terminology involving spherical coordinates, but I don't know it yet.
2.c create a 3D vector (converted screen coordinates, 1.0) and scale it by the linear depth. I assume this gives the estimated camera-space coordinates of the fragment. It looks like it does.
3.a each difference is as follows: (depthm - sidedepth) / length(positionm - sideposition)
And I may have messed something up at any point. The code looks fine, but the algorithm may not be, as I made it up myself.
My code:
uniform sampler2D depth_tex; // was "sampler2d"; GLSL type names are case-sensitive
uniform vec2 distort;        // used below but never declared in the original

const float znear = 1.0;
const float zfar = 10000000000.0;

// convert a raw depth-buffer sample to linear eye-space depth
float linearize( float d ) {
    return 2.0 * zfar * znear / ( zfar + znear - ( 2.0 * d - 1.0 ) * ( zfar - znear ) );
}

// estimated camera-space position of the fragment at uv with linear depth
vec3 eyePos( vec2 uv, float depth ) {
    vec2 scor = ( uv - 0.5 ) * 2.0 * 0.5; // to (-1,1), times tan(FOV/2); default FOV is IIRC 60 degrees
    scor.x = scor.x * 1.6;                // 1.6 is the aspect ratio 16/10
    return vec3( scor, 1.0 ) * depth;     // scale by linearized depth
}

void main() {
    vec2 uv = gl_TexCoord[0].xy + distort;
    vec2 dx = vec2( 0.002 * 0.625, 0.0 ); // 0.625 is the aspect ratio 10/16
    vec2 dy = vec2( 0.0, 0.002 );

    float depthm = linearize( texture2D( depth_tex, uv ).r );
    vec3 posm = eyePos( uv, depthm );

    float depthn = linearize( texture2D( depth_tex, uv + dx ).r );
    vec3 posn = eyePos( uv + dx, depthn );

    float depths = linearize( texture2D( depth_tex, uv - dx ).r );
    vec3 poss = eyePos( uv - dx, depths );

    float depthw = linearize( texture2D( depth_tex, uv + dy ).r );
    vec3 posw = eyePos( uv + dy, depthw );

    float depthe = linearize( texture2D( depth_tex, uv - dy ).r );
    vec3 pose = eyePos( uv - dy, depthe );

    // normalize each depth difference by the camera-space distance between samples
    float Contour = ( depthn - depthm ) / length( posm - posn )
                  + ( depths - depthm ) / length( posm - poss )
                  + ( depthw - depthm ) / length( posm - posw )
                  + ( depthe - depthm ) / length( posm - pose );
    Contour = 0.25 * Contour;

    vec4 color_out;              // was used but never declared in the original
    color_out.rgb = vec3( Contour, Contour, Contour );
    color_out.a = 1.0;
    gl_FragColor = color_out;
}
The exact issue with the second version is that it exhibits some awful artifacts at larger distances.
My goal is to make either of them work properly. Are there any tricks I could use to improve precision/quality with both the linearized and the non-linearized depth buffer? Is anything wrong with my algorithm for the linearized depth buffer?

Find angle from 3x3 matrix components

I have a 3x3 rotation matrix:
[ cos( angle ) sin( angle ) 0 ]
[ -sin( angle ) cos( angle ) 0 ]
[ 0 0 1 ]
How do I work out angle?
The methods I'm using to do this are:
void Mat3::SetAngle( const float angle ) {
    m[ 0 + 0 * 3 ] = cos( angle );
    m[ 1 + 0 * 3 ] = sin( angle );
    m[ 0 + 1 * 3 ] = -sin( angle );
    m[ 1 + 1 * 3 ] = cos( angle );
}
And to retrieve it I'm using:
float Mat3::GetAngle( void ) {
    return atan2( m[ 1 + 0 * 3 ], m[ 0 + 0 * 3 ] );
}
I'm testing it like this:
Mat3 m;
m.SetAngle( 179.0f );
float a = m.GetAngle();
And a ends up being 3.0708115, which is not correct.
sin and cos take arguments in radians, while atan2 returns an angle in radians.
179 rad = 3.070811 rad + 28 * 2 * pi rad,
which is the same angle as 3.070811 rad.
You could either pass in the required angle as radians and convert the GetAngle result:
m.SetAngle( 179.0f * M_PI / 180.0f );
float a = m.GetAngle() * 180.0f / M_PI;
or modify the class to take degrees:
void Mat3::SetAngle( const float angleDeg ) {
    const float angleRad = angleDeg / 180.0f * M_PI;
    m[ 0 + 0 * 3 ] = cos( angleRad );
    // etc
}
float Mat3::GetAngle( void ) {
    return atan2( m[ 1 + 0 * 3 ], m[ 0 + 0 * 3 ] ) * 180.0f / M_PI;
}
Either way I'd suggest documenting which unit your class expects.
Use an atan( sin / cos ) function, or atan2( sin, cos ) if it is available.
The simple single-argument atan( s / c ) is the basic method. It covers angles from nearly -PI/2 to +PI/2 (approaching -90 degrees to approaching +90 degrees), but you will need to special-case a divide-by-zero at +/-90 degrees exactly.
You also need to adjust the result angle when cosine(X) < 0, that is, for the other 180 degrees of the circle.
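A quick numeric illustration of the quadrant problem, sketched in JavaScript for brevity:
// atan( s / c ) loses the quadrant: ( s, c ) and ( -s, -c ) give the same
// ratio. atan2 keeps the signs of both arguments and handles c == 0.
var angle = 2.5; // radians, second quadrant (cos is negative there)
var s = Math.sin( angle ), c = Math.cos( angle );
console.log( Math.atan( s / c ) );  // -0.6416: off by PI, wrong quadrant
console.log( Math.atan2( s, c ) );  //  2.5: the original angle recovered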
See:
http://en.wikipedia.org/wiki/Atan2
http://docs.oracle.com/javase/6/docs/api/java/lang/Math.html#atan2(double,%20double)
