GLSL trouble with sine curve - three.js

I am trying to alter vertex positions in a vertex shader to form a sine curve along a shape's surface.
As seen in the middle of the page here, a sine wave displacing vertices along the z axis can be generated with the simple pattern z = sin(u_time + y); for every new y position, the z position moves inward or outward, tracing a sine path.
For some reason the outcome is different in my vertex shader. The surface of the shape is changing, but it always stays flat instead of conforming to a sine curve. See the vertex shader in the example here: https://jsfiddle.net/35w7fsqo/1/, namely this line:
p.z += sin((time + position.y) / duration) * amplitude;
Here's a diagram showing what I mean:
What do I need to do to get this surface to conform to a sine curve?

The BoxGeometry this shader was running on didn't have enough vertices along the face; it only had one at each corner. I added more heightSegments in the constructor, like so: var geometry = new THREE.BoxGeometry(200, 200, 200, 1, 10, 1); and now the sine curves are visible: https://jsfiddle.net/35w7fsqo/2/
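For reference, here is a minimal sketch of that setup (the uniform values and the fragment shader are placeholder assumptions, not taken from the fiddle):

var geometry = new THREE.BoxGeometry(200, 200, 200, 1, 10, 1); // 10 height segments
var material = new THREE.ShaderMaterial({
    uniforms: {
        time: { value: 0.0 },
        duration: { value: 20.0 },  // placeholder value
        amplitude: { value: 50.0 }  // placeholder value
    },
    vertexShader: `
        uniform float time;
        uniform float duration;
        uniform float amplitude;
        void main() {
            vec3 p = position;
            // one z offset per y position: the curve only shows up if there
            // are vertices along y for the sine to displace
            p.z += sin((time + position.y) / duration) * amplitude;
            gl_Position = projectionMatrix * modelViewMatrix * vec4(p, 1.0);
        }
    `,
    fragmentShader: `
        void main() {
            gl_FragColor = vec4(1.0, 0.5, 0.0, 1.0);
        }
    `
});
var mesh = new THREE.Mesh(geometry, material);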

Related

Create a ring on the surface of a sphere in threejs

I have a sphere in threejs, and I'd like a ring to animate over the top of it.
I have the following progress:
https://codepen.io/EightArmsHQ/pen/zYRdQOw/2919f1a1bdcd2643390efc33bd4b73c9?editors=0010
In the animate function, I call:
const scale = Math.cos((circlePos / this.globeRadius) * Math.PI * 0.5);
console.log(scale);
this.ring.scale.set(scale, scale, 1);
My understanding is that the sin and cos functions are exactly what I need to work out how far around the circle the ring has gotten. However, the animation actually shows the ring falling inside the sphere before eventually hitting zero scale at the outside of the sphere.
Ideally, I'd also like to just be changing the radius of the sphere, but I cannot work out how to do that either, so I think it may be an issue with using the scale function.
How can I keep the ring on the surface of the sphere?
Not quite. Consider this:
You have a right triangle whose legs are your x and y, with a hypotenuse of r = globeRadius. So by Pythagoras' theorem, we have:
x² + y² = r².
So if we solve for the height, y, we get:
y = √(r² − x²).
Thus, in your code, you could write it e.g. like this:
const scale = Math.sqrt(this.globeRadius * this.globeRadius - circlePos * circlePos);
However, this is the scale in world units, not relative to the object's own size. So for this to work, you need to either divide by your radius again, or just initialise your ring with radius 1:
this.ringGeometry = new THREE.RingGeometry(1, 1.03, 32);
Here I gave it an arbitrary ring width of 0.03 - you may of course adjust it to your own needs.
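Putting the two pieces together, the animate call would then look something like this (a sketch reusing the question's variable names):

this.ringGeometry = new THREE.RingGeometry(1, 1.03, 32); // unit radius: world scale equals world radius

// in animate():
const scale = Math.sqrt(this.globeRadius * this.globeRadius - circlePos * circlePos);
this.ring.scale.set(scale, scale, 1);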

Rotation of an object in the tangent space of a globe

Given the two following inputs:
a point on a sphere (like an observer on Earth);
and the world matrix of an object in space (the position and attitude of a satellite),
how to get the azimuth and elevation of the object in the tangent space of the point on the sphere (the elevation and azimuth of where the observer should look)? In particular, when the object is exactly at the zenith, the yaw rotation (rotation around the vertical axis) should account for the azimuth (so that, though the observer is looking straight up, his shoulders would be facing the same azimuth as the object).
The math I've tried so far is:
to put the satellite in tangent space (multiplying its world matrix by the inverse of the matrix of the tangent space on the globe), or the same with quaternions. An Euler rotation is then deduced from the resulting matrix (or the resulting quaternion) with "ZXY" priority, and the Z and X are interpreted as azimuth and elevation. But this gives incorrect numbers, as part of the rotation often seems to be interpreted as roll (Y-axis rotation), which I want to be zero.
an intuitive approach is also to compute the angle between the observer-to-object vector and the vertical axis to deduce the elevation, while the azimuth is given by the angle between the tangent north and the object's position projected onto the "tangent ground" (plus some more math to hone this particular deduction). But this approach does not work when the object is at the zenith.
Resources exist online, but not with these specific inputs and the necessity of supporting the zenith case.
Incidentally, the program is written in TypeScript for three.js, so the code for the first solution described above goes as follows:
function getRotationAtPoint(
    object: THREE.Object3D,
    point: THREE.Vector3
): { azimuth: number, elevation: number } {
    // 1. Get the matrix of the tangent space of the observer.
    const tangentSpaceMatrix = new THREE.Matrix4();
    const baseTangentSpaceAxes = getBaseTangentAxesOnSphere(point);
    tangentSpaceMatrix.makeBasis(...baseTangentSpaceAxes);
    // 2. Transform the object's matrix into the tangent space of the observer.
    const inverseMatrix = new THREE.Matrix4().getInverse(tangentSpaceMatrix);
    const objectMatrix = object.matrixWorld.clone().multiply(inverseMatrix);
    // 3. Get the angles (with the "ZXY" priority mentioned above).
    const euler = new THREE.Euler().setFromRotationMatrix(objectMatrix, 'ZXY');
    return {
        azimuth: euler.z,
        elevation: euler.x
    };
}
Also, Three.js offers references to the up axis of THREE.Object3D instances; however, the program I deal with computes everything directly into the objects' matrices, so the up axis can't be trusted.
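For what it's worth, the second (vector-based) approach from the question could be sketched like this; the [east, north, up] axis order is an assumption, and the zenith degeneracy remains visible in the atan2 arguments:

function getLookAngles(
    observer: THREE.Vector3,
    target: THREE.Vector3,
    east: THREE.Vector3,   // assumed unit tangent axes at the observer,
    north: THREE.Vector3,  // e.g. from getBaseTangentAxesOnSphere(point)
    up: THREE.Vector3
): { azimuth: number, elevation: number } {
    const dir = target.clone().sub(observer).normalize();
    // elevation: angle between the direction and the tangent plane
    const elevation = Math.asin(Math.max(-1, Math.min(1, dir.dot(up))));
    // azimuth: bearing of the horizontal projection, measured from north;
    // at the zenith both dot products are ~0 and the azimuth is undefined
    // from position alone, which is why the object's attitude has to be
    // consulted there, as the question notes
    const azimuth = Math.atan2(dir.dot(east), dir.dot(north));
    return { azimuth, elevation };
}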

OpenGL reverse image texturing logic

I'm about to project an image into a cylindrical panorama. But first I need to get the pixel (or the color from the pixel) I'm going to draw, then do some math in shaders with polar coordinates to get the new position of the pixel, and then finally draw the pixel.
This way I'll be able to change the shape of the image from a polygon into whatever I want.
But I cannot find anything about this method (get the pixel first, then do the math to get the pixel's new position).
Is there something like this, please?
OpenGL historically doesn't work that way around; it forward renders, from geometry to pixels, rather than backwards, from pixels to geometry.
The most natural way to achieve what you want to do is to calculate texture coordinates based on geometry, then render as usual. For a cylindrical mapping:
establish a mapping from cylindrical coordinates to texture coordinates;
with your actual geometry, imagine it placed within the cylinder, then from each vertex proceed along the normal until you intersect the cylinder. Use that location to determine the texture coordinate for the original vertex.
The latter is most easily and conveniently done within your vertex shader; it's a simple ray intersection test, with attributes therefore being only vertex location and vertex normal, and the texture location being a varying that is calculated purely from the location and normal.
Extemporaneously, something like:
// get intersection as if ray hits the circular region of the cylinder,
// i.e. where |(position + n*normal).xy| = 1
float planarLengthOfPosition = length(position.xy);
float planarLengthOfNormal = length(normal.xy);
float planarDistanceToPerimeter = 1.0 - planarLengthOfPosition;
vec3 circularIntersection = position +
        (planarDistanceToPerimeter/planarLengthOfNormal)*normal;
// get intersection as if ray hits the bottom or top of the cylinder,
// i.e. where |(position + n*normal).z| = 1
float linearLengthOfPosition = abs(position.z);
float linearLengthOfNormal = abs(normal.z);
float linearDistanceToEdge = 1.0 - linearLengthOfPosition;
vec3 endIntersection = position +
        (linearDistanceToEdge/linearLengthOfNormal)*normal;
// pick whichever of those was lesser
vec3 cylindricalIntersection = mix(circularIntersection,
                                   endIntersection,
                                   step(linearDistanceToEdge,
                                        planarDistanceToPerimeter));
// ... do something to map cylindrical intersection to texture coordinates ...
textureCoordinateVarying =
        coordinateFromCylindricalPosition(cylindricalIntersection);
A common implementation of coordinateFromCylindricalPosition might be as simple as return vec2(atan(cylindricalIntersection.y, cylindricalIntersection.x) / 6.28318530717959 + 0.5, cylindricalIntersection.z * 0.5 + 0.5); (the + 0.5 terms remap each result into the [0, 1] texture range).

Geometry Shader Quad Post Processing

Using DirectX 11, I'm working on a graphics effect system that uses a geometry shader to build quads in world space. These quads then use a fragment shader in which the main texture is the rendered scene texture, effectively producing post-process effects on world space quads. The simplest of these is a tint effect.
The vertex shader only passes the data through to the geometry shader.
The geometry shader calculates extra vertices based on a normal. Using cross products, I find the x and z axes and append to the tri-stream 4 new verts, one in each diagonal direction from the original position (generating a quad from the given position and size).
The pixel shader (tint effect) simply multiplies the scene texture colour with the colour variable set.
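As a CPU-side illustration of that quad construction (TypeScript purely for readability; the names and the choice of reference axis are assumptions, the real work happens in the geometry shader):

type Vec3 = [number, number, number];

const cross = (a: Vec3, b: Vec3): Vec3 =>
    [a[1] * b[2] - a[2] * b[1], a[2] * b[0] - a[0] * b[2], a[0] * b[1] - a[1] * b[0]];
const normalize = (v: Vec3): Vec3 => {
    const l = Math.hypot(v[0], v[1], v[2]);
    return [v[0] / l, v[1] / l, v[2] / l];
};

// derive two axes perpendicular to the normal, then step out diagonally to the 4 corners
function quadCorners(center: Vec3, normal: Vec3, halfSize: number): Vec3[] {
    const ref: Vec3 = Math.abs(normal[1]) < 0.99 ? [0, 1, 0] : [1, 0, 0];
    const xAxis = normalize(cross(ref, normal));
    const zAxis = normalize(cross(normal, xAxis));
    const corners: Vec3[] = [];
    for (const sx of [-1, 1]) {
        for (const sz of [-1, 1]) {
            corners.push([
                center[0] + halfSize * (sx * xAxis[0] + sz * zAxis[0]),
                center[1] + halfSize * (sx * xAxis[1] + sz * zAxis[1]),
                center[2] + halfSize * (sx * xAxis[2] + sz * zAxis[2]),
            ]);
        }
    }
    return corners; // tri-strip order: (-,-), (-,+), (+,-), (+,+)
}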
The quad generates and displays correctly on screen. However, the problem I am facing is that the mapping of the uv coordinates fails to align with the image on the back buffer. That is, when using the tint shader with half alpha as the given colour, you can see that the image displayed on the quad does not overlay the image on the back buffer perfectly unless the quad is facing towards the camera. The closer the quad normal matches the camera's y axis, the more the image is skewed.
I am currently using the formula below to calculate the uv coordinates:
float2 uv = vert0.position.xy / vert0.position.w;
vert0.uv.x = uv.x * 0.5f + 0.5f;
vert0.uv.y = -uv.y * 0.5f + 0.5f;
I have also used the formula below, which resulted (IMO) in the uv's not taking perspective into consideration.
float2 uv = vert0.position.xy / SourceTextureResolution;
vert0.uv.x = uv.x * ASPECT_RATIO + 0.5f;
vert0.uv.y = -uv.y + 0.5f;
Question:
How can I obtain screen space uv coordinates based on a vertex position generated in the geometry shader?
If you would like me to elaborate on any points, please ask and I will try my best :)
Thanks in advance.
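For what it's worth, the first formula expressed as a self-contained function (TypeScript for readability). One caveat worth checking, since it matches the view-dependent skew described: the divide by w only yields correct screen positions if it happens per pixel, after interpolation; doing it per vertex in the geometry shader and letting the interpolated result stand produces exactly this kind of skew on oblique quads.

// clip space -> back-buffer uv, matching the first formula above
function clipToScreenUV(clip: { x: number, y: number, w: number }): { u: number, v: number } {
    // perspective divide: clip space -> normalized device coordinates, -1..1
    const ndcX = clip.x / clip.w;
    const ndcY = clip.y / clip.w;
    // NDC -> texture space 0..1, with y flipped
    return { u: ndcX * 0.5 + 0.5, v: -ndcY * 0.5 + 0.5 };
}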

Direct3D9 Calculating view space point light position

I am working on my own deferred rendering engine. I am rendering the scene to the g-buffer, containing diffuse color, view space normals and depth (for now). I have implemented a directional light for the second rendering stage and it works great. Now I want to render a point light, which is a bit harder.
I need the point light position for the shader in view space, because I have only depth in the g-buffer and I can't afford a matrix multiplication in every pixel. I took the light position and transformed it by the same matrix by which I transform every vertex in the shader, so it should align with the vertices in the scene (using D3DXVec3Transform). But that isn't the case: the transformed position hardly represents the view space position at all. Its x,y coordinates are off the charts; they are often way out of the (-1,1) range. The transformed position respects the camera orientation somewhat, but the light moves too quickly and the y axis is inverted. Only if the camera is at (0,0,0) does the light stand at (0,0) in the center of the screen. Here is my relevant rendering code, executed every frame:
D3DXMATRIX matView;        // the view transform matrix
D3DXMATRIX matProjection;  // the projection transform matrix
D3DXMatrixLookAtLH(&matView,
                   &D3DXVECTOR3 (x,y,z),     // the camera position
                   &D3DXVECTOR3 (xt,yt,zt),  // the look-at position
                   &D3DXVECTOR3 (0.0f, 0.0f, 1.0f));  // the up direction
D3DXMatrixPerspectiveFovLH(&matProjection,
                           fov,    // the horizontal field of view
                           asp,    // aspect ratio
                           znear,  // the near view-plane
                           zfar);  // the far view-plane
D3DXMATRIX vysl=matView*matProjection;
eff->SetMatrix("worldViewProj",&vysl); // vertices are transformed ok in shader
// render g-buffer
D3DXVECTOR4 lpos; D3DXVECTOR3 lpos2(0,0,0);
D3DXVec3Transform(&lpos,&lpos2,&vysl); // transforming lpos2 into lpos using vysl, still the same matrix
eff->SetVector("poslight",&lpos); // but there is already a mess in lpos at this time
// render the fullscreen quad with wrong lighting
The shader code is not that relevant, but still, this is how I use the light position (passing IN.texture0 is just me being lazy):
float dist=length(float2(IN.texture0*2-1)-float2(poslight.xy));
OUT.col=tex2D(Sdiff,IN.texture0)/dist;
I have tried transforming the light only by matView, without the projection, but the problem is still the same. If I transform the light in the shader, the result is the same, so the problem is the matrix itself. But it is the same matrix that transforms the vertices! How differently are vertices treated?
Can you please take a look at the code and tell me where the mistake is? It seems to me it should work ok, but it doesn't. Thanks in advance.
You don't need a matrix multiplication to reconstruct the view position; here is a code snippet (from Andrew Lauritzen's deferred light example).
tP is the projection transform, positionScreen is the -1..1 pixel coordinate and viewSpaceZ is the linear depth that you sample from your texture.
float3 ViewPosFromDepth(float2 positionScreen,
                        float viewSpaceZ)
{
    float2 screenSpaceRay = float2(positionScreen.x / tP._11,
                                   positionScreen.y / tP._22);
    float3 positionView;
    positionView.z = viewSpaceZ;
    positionView.xy = screenSpaceRay.xy * positionView.z;
    return positionView;
}
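To see why the division by tP._11 and tP._22 undoes the projection (assuming a standard D3D perspective matrix): projection gives x_ndc = (x_view * tP._11) / z_view, so x_view = (x_ndc / tP._11) * z_view, which is exactly screenSpaceRay.x * positionView.z in the snippet above; the same holds for y with tP._22.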
The result of this transform, D3DXVec3Transform(&lpos,&lpos2,&vysl);, is a vector in homogeneous space (i.e. a projected vector, but not divided by w). But in your shader you use its xy components without respecting this w. This is (quite probably) the problem. You could divide the vector by its w yourself, or use D3DXVec3Project instead of D3DXVec3Transform.
It works fine for vertices because (I suppose) you multiply them by the same view-projection matrix in the vertex shader and pass the transformed values to the interpolator, where the hardware eventually divides their xyz by the interpolated w.
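In other words, keeping and using w explicitly; a generic sketch (row-vector, row-major D3D conventions assumed):

// full 4x4 transform of (x, y, z, 1) followed by the divide that
// D3DXVec3Transform leaves out
type Mat4 = number[]; // 16 entries, row-major, v' = v * M
function projectToNdc(m: Mat4, x: number, y: number, z: number): { x: number, y: number, z: number } {
    const cx = x * m[0] + y * m[4] + z * m[8]  + m[12];
    const cy = x * m[1] + y * m[5] + z * m[9]  + m[13];
    const cz = x * m[2] + y * m[6] + z * m[10] + m[14];
    const cw = x * m[3] + y * m[7] + z * m[11] + m[15];
    return { x: cx / cw, y: cy / cw, z: cz / cw };
}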
