When I calculate the gl_PointSize the same way I do it in the vertex shader, I get a value "in pixels" (according to http://www.opengl.org/sdk/docs/manglsl/xhtml/gl_PointSize.xml). Yet this value doesn't match the measured width and height of the point on the screen.
The difference between the calculated and the measured size doesn't seem to be constant.
Calculated values range from 1 (very far away) to 4 (very near).
Current code (with three.js, but nothing magic), trying to calculate the size of a point on screen:
var projector = new THREE.Projector();
var width = window.innerWidth, height = window.innerHeight;
var widthHalf = width / 2, heightHalf = height / 2;
var vector = new THREE.Vector3();
var matrixWorld = new THREE.Matrix4();
matrixWorld.setPosition(focusedArtCluster.object3D.localToWorld(position));
var modelViewMatrix = camera.matrixWorldInverse.clone().multiply( matrixWorld );
var mvPosition = (new THREE.Vector4( position.x, position.y, position.z, 1.0 )).applyMatrix4(modelViewMatrix);
var gl_PointSize = zoomLevels.options.zoom * ( 180.0 / Math.sqrt( mvPosition.x * mvPosition.x + mvPosition.y * mvPosition.y + mvPosition.z * mvPosition.z ) );
projector.projectVector( vector.getPositionFromMatrix( matrixWorld ), camera );
vector.x = ( vector.x * widthHalf ) + widthHalf;
vector.y = - ( vector.y * heightHalf ) + heightHalf;
console.log(vector.x, vector.y, gl_PointSize);
Let me clarify:
The goal is to get the screen size of a point, in pixels.
My vertex shader:
vec4 mvPosition = modelViewMatrix * vec4( position, 1.0 );
gl_PointSize = zoom * ( 180.0 / length( mvPosition.xyz ) );
gl_Position = projectionMatrix * mvPosition;
Since in GLSL matrices are column-major and in three.js they are row-major, I needed to transpose the matrices in order to get the correct matrix multiplications:
var modelViewMatrix = camera.matrixWorldInverse.clone().transpose().multiply( matrixWorld).transpose();
Further, there's always an offset of 20px from the actual screen position. I haven't figured out why yet, but I had to do:
vector.x = ( vector.x * widthHalf ) + widthHalf - 20;
vector.y = - ( vector.y * heightHalf ) + heightHalf - 20;
Thirdly, we'll have to take browser zoom into account. For the width and height we probably have to somehow work with renderer.devicePixelRatio. I hope to figure out how soon enough, and I'll post it here.
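In the meantime, a minimal sketch of what that adjustment could look like (an assumption, not tested: gl_PointSize is specified in drawing-buffer pixels, so dividing by the renderer's pixel ratio should give the size in CSS pixels):
var pixelRatio = renderer.getPixelRatio ? renderer.getPixelRatio() : window.devicePixelRatio;
var pointSizeCss = gl_PointSize / pixelRatio; // approximate on-screen size in CSS pixels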
Thanks for the help nonetheless. Glad it's solved.
Related
Using three.js I have the following:
A scene containing several Object3D instances
Several predefined camera Vector3 positions
A dynamic width/height of the canvas if the screen resizes
A user can select an object (from above)
A user can select a camera position (from above)
Given an object being viewed and the camera position they have chosen, how do I compute the final camera position to "best fit" the object on screen?
If the camera positions are used "as is", on some screens the objects bleed over the edge of my viewport, whilst on others they appear smaller. I believe it is possible to fit the object to the camera frustum, but I haven't been able to find anything suitable.
I am assuming you are using a perspective camera.
You can set the camera's position, field-of-view, or both.
The following calculation is exact for an object that is a cube, so think in terms of the object's bounding box, aligned to face the camera.
If the camera is centered and viewing the cube head-on, define
dist = distance from the camera to the _closest face_ of the cube
and
height = height of the cube.
If you set the camera field-of-view as follows
fov = 2 * Math.atan( height / ( 2 * dist ) ) * ( 180 / Math.PI ); // in degrees
then the cube height will match the visible height.
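For example, a quick sanity check (assuming a cube of height 2 viewed from a distance of 5 to its closest face):
fov = 2 * Math.atan( 2 / ( 2 * 5 ) ) * ( 180 / Math.PI ); // ≈ 22.6 degrees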
At this point, you can back the camera up a bit, or increase the field-of-view a bit.
If the field-of-view is fixed, then use the above equation to solve for the distance.
EDIT: If you want the cube width to match the visible width, let aspect be the aspect ratio of the canvas ( canvas width divided by canvas height ), and set the camera field-of-view like so
fov = 2 * Math.atan( ( width / aspect ) / ( 2 * dist ) ) * ( 180 / Math.PI ); // in degrees
three.js r.69
Based on WestLangley's answer, here is how you calculate the distance with a fixed camera field-of-view:
dist = height / 2 / Math.tan(Math.PI * fov / 360);
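A hypothetical usage sketch (assumptions: the object is centered at the origin, height and depth are its bounding-box dimensions, and the camera looks down the negative z-axis; remember that dist is measured to the closest face, hence the extra depth / 2):
var dist = height / 2 / Math.tan( Math.PI * camera.fov / 360 );
camera.position.set( 0, 0, dist + depth / 2 ); // back up by half the depth so dist is measured from the closest face
camera.lookAt( new THREE.Vector3( 0, 0, 0 ) );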
To calculate how far away to place your camera to fit an object to the screen, you can use this formula (in Javascript):
// Convert camera fov degrees to radians
var fov = camera.fov * ( Math.PI / 180 );
// Calculate the camera distance
var distance = Math.abs( objectSize / Math.sin( fov / 2 ) );
Here objectSize is the height or width of the object. For cube or sphere objects you can use either the height or the width. For other objects, use var objectSize = Math.max( width, height ) to get the larger of the two.
Note that if your object position isn't at 0, 0, 0, you need to adjust your camera position to include the offset.
Here's a CodePen showing this in action. The relevant lines:
var fov = cameraFov * ( Math.PI / 180 );
var objectSize = 0.6 + ( 0.5 * Math.sin( Date.now() * 0.001 ) );
var cameraPosition = new THREE.Vector3(
0,
sphereMesh.position.y + Math.abs( objectSize / Math.sin( fov / 2 ) ),
0
);
You can see that if you grab the window handle and resize it, the sphere still takes up 100% of the screen height. Additionally, the object scales up and down in a sine-wave fashion (0.6 + ( 0.5 * Math.sin( Date.now() * 0.001 ) )) to show that the camera position takes the scale of the object into account.
Assuming that an object fits into the screen if its bounding sphere fits, we reduce the task to fitting the sphere into the camera view.
In the given example we keep PerspectiveCamera.fov constant while changing the camera rotation to achieve the best point of view for the object. The zoom effect is achieved by moving the camera along the .lookAt direction vector.
The problem definition: given the bounding sphere and camera.fov, find L so that the bounding sphere touches the camera's frustum planes.
Here's how you calculate the desired distance from the sphere to the camera:
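This is the geometric relationship the zoomExtents function below relies on: when the bounding sphere (radius R) just touches the frustum planes, the distance from the camera to the sphere's center is
L = R / Math.sin( fovInRadians / 2 )
where fovInRadians is the smaller of the vertical and horizontal fields of view, converted to radians.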
Complete solution: https://jsfiddle.net/mmalex/h7wzvbkt/
var renderer;
var camera;
var scene;
var orbit;
var object1;
function zoomExtents() {
// use the smaller of the vertical and (approximate) horizontal fields of view
let vFoV = camera.getEffectiveFOV();
let hFoV = camera.fov * camera.aspect;
let FoV = Math.min(vFoV, hFoV);
let FoV2 = FoV / 2;
let dir = new THREE.Vector3();
camera.getWorldDirection(dir);
// bounding sphere of the object, with its center transformed to world space
let bb = object1.geometry.boundingBox;
let bs = object1.geometry.boundingSphere;
let bsWorld = bs.center.clone();
object1.localToWorld(bsWorld);
// distance at which the sphere just touches the frustum planes: R / sin(FoV / 2)
let th = FoV2 * Math.PI / 180.0;
let sina = Math.sin(th);
let R = bs.radius;
let FL = R / sina;
// move the camera back from the sphere center along its viewing direction
let cameraDir = new THREE.Vector3();
camera.getWorldDirection(cameraDir);
let cameraOffs = cameraDir.clone();
cameraOffs.multiplyScalar(-FL);
let newCameraPos = bsWorld.clone().add(cameraOffs);
camera.position.copy(newCameraPos);
camera.lookAt(bsWorld);
// keep OrbitControls in sync with the new target
orbit.target.copy(bsWorld);
orbit.update();
}
scene = new THREE.Scene();
camera = new THREE.PerspectiveCamera(54, window.innerWidth / window.innerHeight, 0.1, 1000);
camera.position.x = 15;
camera.position.y = 15;
camera.position.z = 15;
camera.lookAt(0, 0, 0);
renderer = new THREE.WebGLRenderer({
antialias: true
});
renderer.setSize(window.innerWidth, window.innerHeight);
renderer.setClearColor(new THREE.Color(0xfefefe));
document.body.appendChild(renderer.domElement);
orbit = new THREE.OrbitControls(camera, renderer.domElement);
// create light
{
var spotLight = new THREE.SpotLight(0xffffff);
spotLight.position.set(0, 100, 50);
spotLight.castShadow = true;
spotLight.shadow.mapSize.width = 1024;
spotLight.shadow.mapSize.height = 1024;
spotLight.shadow.camera.near = 500;
spotLight.shadow.camera.far = 4000;
spotLight.shadow.camera.fov = 30;
scene.add(spotLight);
}
var root = new THREE.Object3D();
scene.add(root);
function CustomSinCurve(scale) {
THREE.Curve.call(this);
this.scale = (scale === undefined) ? 1 : scale;
}
CustomSinCurve.prototype = Object.create(THREE.Curve.prototype);
CustomSinCurve.prototype.constructor = CustomSinCurve;
CustomSinCurve.prototype.getPoint = function(t) {
var tx = t * 3 - 1.5;
var ty = Math.sin(2 * Math.PI * t);
var tz = 0;
return new THREE.Vector3(tx, ty, tz).multiplyScalar(this.scale);
};
var path = new CustomSinCurve(10);
var geometry = new THREE.TubeGeometry(path, 20, 2, 8, false);
var material = new THREE.MeshPhongMaterial({
color: 0x20f910,
transparent: true,
opacity: 0.75
});
object1 = new THREE.Mesh(geometry, material);
object1.geometry.computeBoundingBox();
object1.position.x = 22.3;
object1.position.y = 0.2;
object1.position.z = -1.1;
object1.rotation.x = Math.PI / 3;
object1.rotation.z = Math.PI / 4;
root.add(object1);
object1.geometry.computeBoundingSphere();
var geometry = new THREE.SphereGeometry(object1.geometry.boundingSphere.radius, 32, 32);
var material = new THREE.MeshBasicMaterial({
color: 0xffff00
});
material.transparent = true;
material.opacity = 0.35;
var sphere = new THREE.Mesh(geometry, material);
object1.add(sphere);
var size = 10;
var divisions = 10;
var gridHelper = new THREE.GridHelper(size, divisions);
scene.add(gridHelper);
var animate = function() {
requestAnimationFrame(animate);
renderer.render(scene, camera);
};
animate();
Try this for OrbitControls:
// objectLength/objectWidth/objectHeight are assumed to be the object's bounding-box dimensions,
// and aspectX/aspectY the canvas width and height
let padding = 48;
let w = Math.max(objectLength, objectWidth) + padding;
let h = objectHeight + padding;
let fovX = camera.fov * (aspectX / aspectY);
let fovY = camera.fov;
let distanceX = (w / 2) / Math.tan(Math.PI * fovX / 360) + (w / 2);
let distanceY = (h / 2) / Math.tan(Math.PI * fovY / 360) + (w / 2);
let distance = Math.max(distanceX, distanceY);
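A possible way to apply that distance (a sketch; center is assumed to hold the object's center and controls the OrbitControls instance):
let center = new THREE.Vector3(); // object's bounding-box center (assumption)
let dir = new THREE.Vector3();
camera.getWorldDirection( dir );
camera.position.copy( center ).addScaledVector( dir, -distance ); // back away along the current viewing direction
controls.target.copy( center );
controls.update();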
From user151496's suggestion about using the aspect ratio, this seems to work, although I've only tested with a few different parameter sets.
var maxDim = Math.max(w, h);
var aspectRatio = w / h;
var distance = maxDim / 2 / aspectRatio / Math.tan(Math.PI * fov / 360);
I had the same question, but I expected that the object(s) (represented by a Box3 as a whole) could be rotated on my phone if the whole box was wider than my screen, so that I could view it by zooming in as near as possible.
const objectSizes = bboxMap.getSize();
console.log('centerPoint', centerPoint, bboxMap, objectSizes, tileMap);
//setupIsometricOrthographicCamera(bboxMap);
//https://gamedev.stackexchange.com/questions/43588/how-to-rotate-camera-centered-around-the-cameras-position
//https://threejs.org/docs/#api/en/cameras/PerspectiveCamera
//https://stackoverflow.com/questions/14614252/how-to-fit-camera-to-object
// Top
// +--------+
// Left | Camera | Right
// +--------+
// Bottom
// canvas.height/2 / distance = tan(fov); canvas.width/2 / distance = tan(fovLR);
// => canvas.width / canvas.height = tan(fovLR)/tan(fov);
// => tan(fovLR) = tan(fov) * aspectRatio;
//If rotating the camera around z-axis in local space by 90 degrees.
// Left
// +---+
// Bottom | | Top
// | |
// +---+
// Right
// => tan(fovLR) = tan(fov) / aspectRatio;
const padding = 0, fov = 50;
let aspectRatio = canvas.width / canvas.height;
let tanFOV = Math.tan(Math.PI * fov / 360);
let viewWidth = padding + objectSizes.x, viewHeight = padding + objectSizes.y;
//The distances are proportional to the view's width or height
let distanceH = viewWidth / 2 / (tanFOV * aspectRatio);
let distanceV = viewHeight / 2 / tanFOV;
const camera = this.camera = new THREE.PerspectiveCamera(fov, aspectRatio, 0.1, 10000); //VIEW_ANGLE, ASPECT, NEAR, FAR
if (aspectRatio > 1 != viewWidth > viewHeight) {
console.log('screen is more narrow than the objects to be viewed');
// viewWidth / canvas.width => viewHeight / canvas.width
// viewHeight / canvas.height => viewWidth / canvas.height;
distanceH *= viewHeight / viewWidth;
distanceV *= viewWidth / viewHeight;
camera.rotateZ(Math.PI / 2);
}
camera.position.z = Math.max(distanceH, distanceV) + bboxMap.max.z;
//camera.lookAt(tileMap.position);
I tested two different aspects of the Box3 in two different orientations (landscape and portrait) on my phone, and it worked well.
References
Box3.getSize ( target : Vector3 ) : Vector3
target — the result will be copied into this Vector3.
Returns the width, height and depth of this box.
Object3D.rotateZ ( rad : Float ) : this (PerspectiveCamera)
rad - the angle to rotate in radians.
Rotates the object around z axis in local space.
Other answers
While building my responsive website, I would like to add a fun timeline, but I cannot come up with a solution.
It would be a sprite, such as a rocket or flying saucer, taking off from the bottom middle of the page and leaving smoke behind.
The smoke would more or less remain and reveal my timeline.
Sketch
Does anyone have an idea how to make that possible?
To simulate smoke, you have to use a particle system.
As you maybe know, WebGL is able to draw triangles, lines and points.
This last one is what we need. The smoke is made of hundreds of semi-transparent white disks of slightly different sizes. Each point is defined by 7 attributes:
x, y: starting position.
vx, vy: direction.
radius: maximal radius.
life: number of milliseconds before it disappears.
delay: number of milliseconds to wait before its birth.
One trick is to create points along a vertical, centered axis: the higher you go, the more the delay increases. The other trick is to make each point more and more transparent as it approaches its end of life.
Here is how you create such vertices:
function createVertices() {
var x, y, vx, vy, radius, life, delay;
var vertices = [];
for( delay=0; delay<1; delay+=0.01 ) {
for( var loops=0; loops<5; loops++ ) {
// Going left.
x = rnd(0.01);
y = (2.2 * delay - 1) + rnd(-0.01, 0.01);
vx = -rnd(0, 1.5) * 0.0001;
vy = -rnd(0.001);
radius = rnd(0.1, 0.25) / 1000;
life = rnd(2000, 5000);
vertices.push( x, y, vx, vy, radius, life, delay );
// Going right.
x = -rnd(0.01);
y = (2.2 * delay - 1) + rnd(-0.01, 0.01);
vx = rnd(0, 1.5) * 0.0001;
vy = -rnd(0.001);
radius = rnd(0.1, 0.25) / 1000;
life = rnd(2000, 5000);
vertices.push( x, y, vx, vy, radius, life, delay );
}
}
var buff = gl.createBuffer();
gl.bindBuffer( gl.ARRAY_BUFFER, buff );
gl.bufferData( gl.ARRAY_BUFFER, new Float32Array(vertices), gl.STATIC_DRAW );
return Math.floor( vertices.length / 7 );
}
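The rnd() helper is not shown above. A plausible implementation (an assumption; the linked example may define it differently) is that rnd(max) returns a random number in [0, max) and rnd(min, max) returns one in [min, max):
function rnd( min, max ) {
if ( max === undefined ) { max = min; min = 0; }
return min + Math.random() * ( max - min );
}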
As you can see, I created points going right and points going left to get a growing fuzzy triangle.
Then you need a vertex shader controlling the position and size of the points.
WebGL provides the output variable gl_PointSize, which is the size (in pixels) of the square drawn for the current point.
uniform float uniWidth;
uniform float uniHeight;
uniform float uniTime;
attribute vec2 attCoords;
attribute vec2 attDirection;
attribute float attRadius;
attribute float attLife;
attribute float attDelay;
varying float varAlpha;
const float PERIOD = 10000.0;
const float TRAVEL_TIME = 2000.0;
void main() {
float time = mod( uniTime, PERIOD );
time -= TRAVEL_TIME * attDelay;
if( time < 0.0 || time > attLife) return;
vec2 pos = attCoords + time * attDirection;
gl_Position = vec4( pos.xy, 0, 1 );
gl_PointSize = time * attRadius * min(uniWidth, uniHeight);
varAlpha = 1.0 - (time / attLife);
}
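Since each point packs 7 interleaved floats, the attributes have to be bound with a matching stride and offsets. A sketch of that setup (assumptions: prg is the compiled program and buff is the buffer filled by createVertices()):
var BPE = Float32Array.BYTES_PER_ELEMENT; // 4 bytes per float
var stride = 7 * BPE; // x, y, vx, vy, radius, life, delay
gl.bindBuffer( gl.ARRAY_BUFFER, buff );
[
  [ "attCoords", 2, 0 ],
  [ "attDirection", 2, 2 ],
  [ "attRadius", 1, 4 ],
  [ "attLife", 1, 5 ],
  [ "attDelay", 1, 6 ]
].forEach( function( att ) {
  var loc = gl.getAttribLocation( prg, att[ 0 ] );
  gl.enableVertexAttribArray( loc );
  gl.vertexAttribPointer( loc, att[ 1 ], gl.FLOAT, false, stride, att[ 2 ] * BPE );
} );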
Finally, the fragment shader will display the point in white, but the farther you get from the center, the more transparent the fragments become.
To know where you are in the square drawn for the current point, you can read the global WebGL variable gl_PointCoord.
precision mediump float;
varying float varAlpha;
void main() {
float x = gl_PointCoord.x - 0.5;
float y = gl_PointCoord.y - 0.5;
float radius = x * x + y * y;
if( radius > 0.25 ) discard;
float alpha = varAlpha * 0.8 * (0.25 - radius);
gl_FragColor = vec4(1, 1, 1, alpha);
}
Here is a live example : https://jsfiddle.net/m1a9qry6/1/
My goal is to draw a circle around my mouse cursor over a plane.
I get NDC coordinates (-1 to +1) that represent my cursor position:
const rect = targetHTML.getBoundingClientRect();
const mousePositionX = event.clientX - rect.left;
const mousePositionY = event.clientY - rect.top;
this._currentPoint = {
x: (mousePositionX / targetHTML.clientWidth * 2 - 1),
y: (mousePositionY / targetHTML.clientHeight * -2 + 1),
};
I pass it to my fragment shader via uniforms:
this._cursorMaterial.uniforms.uBrushPosition.value =
new window.THREE.Vector2(this._currentPoint.x, this._currentPoint.y);
In my fragment shader, I want to convert it to a world coordinate in order to compare it to the fragment world location.
// vertex shader
varying vec4 vPos;
void main() {
vPos = modelMatrix * vec4(position, 1.0 );
gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0 );
}
// fragment shader
varying vec4 vPos;
uniform vec2 uBrushPosition;
void main() {
// convert uBrush position to world space
// uBrushPosition
vec3 brushWorldPosition = ?
//
if (distance(brushWorldPosition, vPos.xyz) < 10.) {
gl_FragColor = vec4(1., 0., 0., .5);
}
discard;
}
Not in the shader, but you can send it in as a uniform.
var mouseWorld = new THREE.Vector3( mouse.x, mouse.y, distanceFromCamera )
mouseWorld.unproject( camera )
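Then hand the unprojected point to the fragment shader through a vec3 uniform (a sketch; uBrushWorldPosition is a made-up name, assumed to have been declared in the ShaderMaterial's uniforms as { value: new THREE.Vector3() }):
this._cursorMaterial.uniforms.uBrushWorldPosition.value.copy( mouseWorld );
In the fragment shader, brushWorldPosition then simply becomes that uniform, compared against vPos.xyz with distance().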
How can you ray trace to a Point Cloud with a custom vertex shader in three.js?
This is my vertex shader:
void main() {
vUvP = vec2( position.x / (width*2.0), position.y / (height*2.0)+0.5 );
colorP = vec2( position.x / (width*2.0)+0.5 , position.y / (height*2.0) );
vec4 pos = vec4(0.0,0.0,0.0,0.0);
depthVariance = 0.0;
if ( (vUvP.x<0.0)|| (vUvP.x>0.5) || (vUvP.y<0.5) || (vUvP.y>0.0)) {
vec2 smp = decodeDepth(vec2(position.x, position.y));
float depth = smp.x;
depthVariance = smp.y;
float z = -depth;
pos = vec4(( position.x / width - 0.5 ) * z * (1000.0/focallength) * -1.0,( position.y / height - 0.5 ) * z * (1000.0/focallength),(- z + zOffset / 1000.0) * 2.0,1.0);
vec2 maskP = vec2( position.x / (width*2.0), position.y / (height*2.0) );
vec4 maskColor = texture2D( map, maskP );
maskVal = ( maskColor.r + maskColor.g + maskColor.b ) / 3.0 ;
}
gl_PointSize = pointSize;
gl_Position = projectionMatrix * modelViewMatrix * pos;
}
In the Points class, ray tracing is implemented as follows:
function testPoint( point, index ) {
var rayPointDistanceSq = ray.distanceSqToPoint( point );
if ( rayPointDistanceSq < localThresholdSq ) {
var intersectPoint = ray.closestPointToPoint( point );
intersectPoint.applyMatrix4( matrixWorld );
var distance = raycaster.ray.origin.distanceTo( intersectPoint );
if ( distance < raycaster.near || distance > raycaster.far ) return;
intersects.push( {
distance: distance,
distanceToRay: Math.sqrt( rayPointDistanceSq ),
point: intersectPoint.clone(),
index: index,
face: null,
object: object
} );
}
}
var vertices = geometry.vertices;
for ( var i = 0, l = vertices.length; i < l; i ++ ) {
testPoint( vertices[ i ], i );
}
However, since I'm using a vertex shader, the geometry.vertices don't match up with the vertices on the screen, which prevents the ray trace from working.
Can we get the points back from the vertex shader?
I didn't dive into what your vertex shader actually does, and I assume there are good reasons for you to do it in the shader, so it's likely not feasible to redo the calculations in JavaScript when doing the raycasting.
One approach could be to have some sort of estimate for where the points are, use those for a preselection and do some more involved calculation for the points that are closest to the ray.
If that won't work, your best bet would be to render a lookup map of your scene, where the color values are the id of the point rendered at those coordinates (this is also referred to as GPU picking; there are various examples and even libraries for it, although none does exactly what you will need).
To do that, you need to render your scene twice: create a lookup-map in the first pass and render it regularly in the second pass. The lookup-map will store for every pixel which particle was rendered there.
To get that information you need to set up a THREE.WebGLRenderTarget (this might be downscaled to half the width/height for better performance) and a different material. The vertex shader stays as it is, but the fragment shader will just output a single, unique color value for every particle (or anything else that you can use to identify them). Then render the scene (or better: only the parts that should be raycast targets) into the renderTarget:
var size = renderer.getSize();
var renderTarget = new THREE.WebGLRenderTarget(size.width / 2, size.height / 2);
renderer.render(pickingScene, camera, renderTarget);
After rendering, you can obtain the content of this lookup-texture using the renderer.readRenderTargetPixels-method:
var pixelData = new Uint8Array(width * height * 4);
renderer.readRenderTargetPixels(renderTarget, 0, 0, width, height, pixelData);
(the layout of pixelData here is the same as for a regular canvas imageData.data)
Once you have that, the raycaster only needs to look up a single coordinate, read and interpret the color value as an object id, and do something with it.
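A sketch of that final lookup (assumptions: the picking fragment shader encoded the point index into the R, G and B channels, and x/y are pointer coordinates already scaled to the render target's resolution and flipped vertically to match the texture origin):
function pickIndexAt( x, y, width, pixelData ) {
  var i = ( y * width + x ) * 4;
  // reassemble the 24-bit index from the R, G and B bytes
  return ( pixelData[ i ] << 16 ) | ( pixelData[ i + 1 ] << 8 ) | pixelData[ i + 2 ];
}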
I am writing a cel shading shader, but I'm having issues with edge detection. I am currently using the following code, which applies Laplacian edge detection to non-linear depth buffer values:
uniform sampler2D depth_tex;
void main(){
vec4 color_out;
float znear = 1.0;
float zfar = 50000.0;
float depthm = texture2D(depth_tex, gl_TexCoord[0].xy).r;
float lineAmp = mix( 0.001, 0.0, clamp( (500.0 / (zfar + znear - ( 2.0 * depthm - 1.0 ) * (zfar - znear) )/2.0), 0.0, 1.0 ) );// make the lines thicker at close range
float depthn = texture2D(depth_tex, gl_TexCoord[0].xy + vec2( (0.002 + lineAmp)*0.625 , 0.0) ).r;
depthn = depthn / depthm;
float depths = texture2D(depth_tex, gl_TexCoord[0].xy - vec2( (0.002 + lineAmp)*0.625 , 0.0) ).r;
depths = depths / depthm;
float depthw = texture2D(depth_tex, gl_TexCoord[0].xy + vec2(0.0 , 0.002 + lineAmp) ).r;
depthw = depthw / depthm;
float depthe = texture2D(depth_tex, gl_TexCoord[0].xy - vec2(0.0 , 0.002 + lineAmp) ).r;
depthe = depthe / depthm;
float Contour = -4.0 + depthn + depths + depthw + depthe;
float lineAmp2 = 100.0 * clamp( depthm - 0.99, 0.0, 1.0);
lineAmp2 = lineAmp2 * lineAmp2;
Contour = (512.0 + lineAmp2 * 204800.0 ) * Contour;
if(Contour > 0.15){
Contour = (0.15 - Contour) / 1.5 + 0.5;
} else
Contour = 1.0;
color_out.rgb = color_out.rgb * Contour;
color_out.a = 1.0;
gl_FragColor = color_out;
}
but it is hackish [note the lineAmp2], and the details at large distances are lost. So I made up another algorithm:
[Note that Laplacian edge detection is in use]
1. Get 5 samples from the depth buffer: depthm, depthn, depths, depthw, depthe, where depthm is exactly where the processed fragment is, depthn is slightly to the top, depths is slightly to the bottom, etc.
2. Calculate their real coordinates in camera space [as well as converting them to linear depth].
3. Compare the side samples to the middle sample by subtracting, then normalize each difference by dividing by the distance between the two camera-space points, and add all four results. This should in theory help with the situation where, at large distances from the camera, two fragments are very close on the screen but very far apart in camera space, which is fatal for linear depth testing.
where:
2.a Convert the non-linear depth to linear using an algorithm from http://stackoverflow.com/questions/6652253/getting-the-true-z-value-from-the-depth-buffer
exact code:
uniform sampler2D depthBuffTex;
uniform float zNear;
uniform float zFar;
varying vec2 vTexCoord;
void main(void)
{
float z_b = texture2D(depthBuffTex, vTexCoord).x;
float z_n = 2.0 * z_b - 1.0;
float z_e = 2.0 * zNear * zFar / (zFar + zNear - z_n * (zFar - zNear));
}
2.b Convert the screen coordinates to [tan a, tan b], where a is the horizontal angle and b the vertical one. There is probably better terminology involving spherical coordinates, but I don't know it yet.
2.c Create a 3D vector (converted screen coordinates, 1.0) and scale it by the linear depth. I assume this gives the estimated camera-space coordinates of the fragment. It looks like it does.
3.a Each difference is computed as follows: (depthm - sidedepth) / length(positionm - sideposition)
I may have messed something up at any point. The code looks fine, but the algorithm may not be, as I made it up myself.
My code:
uniform sampler2D depth_tex;
void main(){
vec4 color_out;
// 'distort' below is assumed to be a vec2 offset defined elsewhere (e.g. a uniform)
float znear = 1.0;
float zfar = 10000000000.0;
float depthm = texture2D(depth_tex, gl_TexCoord[0].xy + distort ).r;
depthm = 2.0 * zfar * znear / (zfar + znear - ( 2.0 * depthm - 1.0 ) * (zfar - znear) ); //convert to linear
vec2 scorm = (gl_TexCoord[0].xy + distort) -0.5; //conversion to desired coordinates space. This line returns value from range (-0.5,0.5)
scorm = scorm * 2.0 * 0.5; // normalize to (-1, 1) and multiply by tan FOV/2, and default fov is IIRC 60 degrees
scorm.x = scorm.x * 1.6; //1.6 is aspect ratio 16/10
vec3 posm = vec3( scorm, 1.0 );
posm = posm * depthm; //scale by linearized depth
float depthn = texture2D(depth_tex, gl_TexCoord[0].xy + distort + vec2( 0.002*0.625 , 0.0) ).r; //0.625 is aspect ratio 10/16
depthn = 2.0 * zfar * znear / (zfar + znear - ( 2.0 * depthn - 1.0 ) * (zfar - znear) );
vec2 scorn = (gl_TexCoord[0].xy + distort + vec2( 0.002*0.625, 0.0) ) -0.5;
scorn = scorn * 2.0 * 0.5;
scorn.x = scorn.x * 1.6;
vec3 posn = vec3( scorn, 1.0 );
posn = posn * depthn;
float depths = texture2D(depth_tex, gl_TexCoord[0].xy + distort - vec2( 0.002*0.625 , 0.0) ).r;
depths = 2.0 * zfar * znear / (zfar + znear - ( 2.0 * depths - 1.0 ) * (zfar - znear) );
vec2 scors = (gl_TexCoord[0].xy + distort - vec2( 0.002*0.625, 0.0) ) -0.5;
scors = scors * 2.0 * 0.5;
scors.x = scors.x * 1.6;
vec3 poss = vec3( scors, 1.0 );
poss = poss * depths;
float depthw = texture2D(depth_tex, gl_TexCoord[0].xy + distort + vec2(0.0 , 0.002) ).r;
depthw = 2.0 * zfar * znear / (zfar + znear - ( 2.0 * depthw - 1.0 ) * (zfar - znear) );
vec2 scorw = ( gl_TexCoord[0].xy + distort + vec2( 0.0 , 0.002) ) -0.5;
scorw = scorw * 2.0 * 0.5;
scorw.x = scorw.x * 1.6;
vec3 posw = vec3( scorw, 1.0 );
posw = posw * depthw;
float depthe = texture2D(depth_tex, gl_TexCoord[0].xy + distort - vec2(0.0 , 0.002) ).r;
depthe = 2.0 * zfar * znear / (zfar + znear - ( 2.0 * depthe - 1.0 ) * (zfar - znear) );
vec2 score = ( gl_TexCoord[0].xy + distort - vec2( 0.0 , 0.002) ) -0.5;
score = score * 2.0 * 0.5;
score.x = score.x * 1.6;
vec3 pose = vec3( score, 1.0 );
pose = pose * depthe;
float Contour = ( depthn - depthm )/length(posm - posn) + ( depths - depthm )/length(posm - poss) + ( depthw - depthm )/length(posm - posw) + ( depthe - depthm )/length(posm - pose);
Contour = 0.25 * Contour;
color_out.rgb = vec3( Contour, Contour, Contour );
color_out.a = 1.0;
gl_FragColor = color_out;
}
The exact issue with the second code is that it exhibits some awful artifacts at larger distances.
My goal is to make either of them work properly. Are there any tricks I could use to improve the precision/quality with both the linearized and the non-linearized depth buffer? Is anything wrong with my algorithm for the linearized depth buffer?