As a way of teaching myself WebGL and Three.js, I'm building a simple demo/game. The idea is a 3D version of this: https://www.youtube.com/watch?v=cYpE8-4_YBk&t=75s - navigate the opening, and don't crash.
What I want to do is create a dynamic "tube" using vertices calculated on the fly, like this:
for ( let p = 0; p < LIMIT; p++ ) {
    let r = RADIUS + Math.random();
    let x = CX + ( r * Math.cos( ANGLE ) );
    let y = CY + ( r * Math.sin( ANGLE ) );
    let z = - ( p / DENSITY );
    let v = new THREE.Vector3( x, y, z );
    tube.vertices.push( v );
    tube.faces.push( new THREE.Face3( p, p - offset[0], p - offset[2] ) );
    tube.faces.push( new THREE.Face3( p, p - offset[2], p - offset[1] ) );

    // update calculation parameters
    CX += ( Math.random() - 0.5 ) * 0.1;
    CY += ( Math.random() - 0.5 ) * 0.1;
    RADIUS += ( Math.random() - 0.5 ) * 0.1;
    if ( RADIUS < MIN_RADIUS ) RADIUS = MIN_RADIUS;
    if ( RADIUS > MAX_RADIUS ) RADIUS = MAX_RADIUS;
    ANGLE += INCREMENT;
}
I have it working where the vertices are calculated once as a static mesh which is then uploaded to the GPU and animated towards the camera. No interactivity, yet. https://codepen.io/jarrowwx/pen/gKraVm
Next step is to make it infinite. Which means I'm going to have to do it differently.
One way I could do it is to keep a sufficient number of vertices in an array, calculating new vertices as I go, enough to make up for the distance traveled since the last time step. Then, once a second or so, rebuild the face array, and rebuild the scene. But that means once a second, there will probably be a noticeable lag.
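That first approach can be structured as a fixed-size ring buffer of cross-section rings that gets recycled as the camera advances. A minimal plain-JavaScript sketch (`SEGMENTS`, `SIDES`, `makeRing`, and `TubeBuffer` are all my own illustrative names, not from the pen):

```javascript
// Fixed-size ring buffer of tube cross-sections. SEGMENTS rings are kept
// alive; when the camera passes a ring, it is dropped and a freshly
// generated one is appended at the far end.
const SEGMENTS = 8;   // rings kept in memory (assumption)
const SIDES = 16;     // vertices per ring (assumption)

function makeRing(z) {
  // stand-in for the random-walk cross-section generator above
  const ring = [];
  for (let i = 0; i < SIDES; i++) {
    const a = (i / SIDES) * Math.PI * 2;
    ring.push({ x: Math.cos(a), y: Math.sin(a), z });
  }
  return ring;
}

class TubeBuffer {
  constructor() {
    this.rings = [];
    this.nextZ = 0;
    for (let i = 0; i < SEGMENTS; i++) this.advance();
  }
  advance() {
    // drop the ring behind the camera, append one ahead
    if (this.rings.length === SEGMENTS) this.rings.shift();
    this.rings.push(makeRing(this.nextZ));
    this.nextZ -= 1;
  }
}

const tube = new TubeBuffer();
tube.advance();
console.log(tube.rings.length); // 8
```

A real version would overwrite rings in place and re-upload only the changed range of the vertex buffer, rather than shifting JavaScript arrays every step.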
But because all the vertices are calculated, it makes sense to do all the work on the GPU and avoid the task of transferring data in the first place.
How would one go about building this data structure on the GPU so that the data never has to be transferred from CPU to GPU? And then, once you have pulled that off, how do you continuously extend it as the camera flies through it?
I think these specular highlights should be circular. I assume there is something wrong with my normals, but I haven't found anything wrong with them. Then again, finding a good test for the normals is difficult.
Here is the image:
Here is my shading code for each light, leaving out the recursive part for reflections:
lighting = ( hit.obj.ambient + hit.obj.emission );
const glm::vec3 view_direction = glm::normalize( eye - hit.pos );
const glm::vec3 reflection = glm::normalize( ( 2.0f * glm::dot( view_direction, hit.normal ) * hit.normal ) - view_direction );
for ( int i = 0; i < numused; ++i )
{
    glm::vec3 hit_to_light = lights[i].pos - hit.pos;
    float dist = glm::length( hit_to_light );
    glm::vec3 light_direction = glm::normalize( hit_to_light );
    Ray lightray( hit.pos, light_direction );
    Intersection blocked = Intersect( lightray, scene, verbose );
    if ( blocked.dist >= dist ) // not in shadow
    {
        glm::vec3 halfangle = glm::normalize( view_direction + light_direction );
        float specular_multiplier = pow( std::max( glm::dot( halfangle, hit.normal ), 0.f ), shininess );
        glm::vec3 attenuation_term = lights[i].rgb * ( 1.0f / ( attenuation + dist * linear + dist * dist * quad ) );
        glm::vec3 diffuse_term = hit.obj.diffuse * std::max( glm::dot( light_direction, hit.normal ), 0.f );
        glm::vec3 specular_term = hit.obj.specular * specular_multiplier;
        lighting += attenuation_term * ( diffuse_term + specular_term );
    }
}
And here is the line where I transform the object space normal to world space:
*norm = glm::normalize(transinv * glm::vec4(glm::normalize(p - sphere_center), 0));
Using the full phong model, instead of blinn-phong, I get teardrop highlights:
If I color pixels according to the (absolute value of the) normal at the intersection point I get the following image (r = x, g = y, b = z):
I've solved this issue. It turns out that the normals were all just slightly off, but not enough that the image colored by normals could depict it.
I found this out by computing the normals on spheres with a uniform scale and a translation.
The problem occurred in the line where I transformed the normals to world space:
*norm = glm::normalize(transinv * glm::vec4(glm::normalize(p - sphere_center), 0));
I assumed that the homogeneous coordinate would be 0 after the transformation because it was zero beforehand (rotations and scales do not affect it, and because it is 0, neither can translations). However, it is not 0 because the matrix is transposed, so the bottom row was filled with the inverse translations, causing the homogeneous coordinate to be nonzero.
The 4-vector is then normalized and the result is assigned to a 3-vector. The constructor for the 3-vector simply removes the last entry, so the normal was left unnormalized.
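The pitfall can be reproduced without glm. A plain-JavaScript sketch (the `w` value is made up) showing why truncating a normalized 4-vector leaves an unnormalized 3-vector, and the fix of discarding w before normalizing:

```javascript
// Normalizing a 4-vector whose w picked up translation terms, then
// dropping w, leaves the 3-vector shorter than unit length.
function length3(v) { return Math.hypot(v[0], v[1], v[2]); }

const n = [0, 0, 1, 0.5]; // transformed normal with w != 0 (made-up numbers)

// buggy path: normalize over all four components, then truncate to 3
const len4 = Math.hypot(...n);
const buggy = n.slice(0, 3).map(c => c / len4);

// fixed path: discard w first (it carries no direction), then normalize
const len3 = length3(n);
const fixed = n.slice(0, 3).map(c => c / len3);

console.log(length3(buggy)); // ~0.894, not unit length
console.log(length3(fixed)); // 1
```

Equivalently, in glm one can transform with the upper-left 3×3 of the inverse-transpose, which avoids the w component entirely.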
Here's the final picture:
How can you raycast to a Point Cloud rendered with a custom vertex shader in three.js?
This is my vertex shader
void main() {
    vUvP = vec2( position.x / ( width * 2.0 ), position.y / ( height * 2.0 ) + 0.5 );
    colorP = vec2( position.x / ( width * 2.0 ) + 0.5, position.y / ( height * 2.0 ) );

    vec4 pos = vec4( 0.0, 0.0, 0.0, 0.0 );
    depthVariance = 0.0;

    if ( ( vUvP.x < 0.0 ) || ( vUvP.x > 0.5 ) || ( vUvP.y < 0.5 ) || ( vUvP.y > 0.0 ) ) {
        vec2 smp = decodeDepth( vec2( position.x, position.y ) );
        float depth = smp.x;
        depthVariance = smp.y;
        float z = -depth;
        pos = vec4(
            ( position.x / width - 0.5 ) * z * ( 1000.0 / focallength ) * -1.0,
            ( position.y / height - 0.5 ) * z * ( 1000.0 / focallength ),
            ( -z + zOffset / 1000.0 ) * 2.0,
            1.0 );
        vec2 maskP = vec2( position.x / ( width * 2.0 ), position.y / ( height * 2.0 ) );
        vec4 maskColor = texture2D( map, maskP );
        maskVal = ( maskColor.r + maskColor.g + maskColor.b ) / 3.0;
    }

    gl_PointSize = pointSize;
    gl_Position = projectionMatrix * modelViewMatrix * pos;
}
In the Points class, raycasting is implemented as follows:
function testPoint( point, index ) {
    var rayPointDistanceSq = ray.distanceSqToPoint( point );
    if ( rayPointDistanceSq < localThresholdSq ) {
        var intersectPoint = ray.closestPointToPoint( point );
        intersectPoint.applyMatrix4( matrixWorld );
        var distance = raycaster.ray.origin.distanceTo( intersectPoint );
        if ( distance < raycaster.near || distance > raycaster.far ) return;
        intersects.push( {
            distance: distance,
            distanceToRay: Math.sqrt( rayPointDistanceSq ),
            point: intersectPoint.clone(),
            index: index,
            face: null,
            object: object
        } );
    }
}

var vertices = geometry.vertices;
for ( var i = 0, l = vertices.length; i < l; i ++ ) {
    testPoint( vertices[ i ], i );
}
However, since I'm using a vertex shader, the geometry.vertices don't match up with the vertices on the screen, which prevents the raycast from working.
Can we get the points back from the vertex shader?
I didn't dive into what your vertex-shader actually does, and I assume there are good reasons for you to do it in the shader, so it's likely not feasible to redo the calculations in javascript when doing the ray-casting.
One approach could be to have some sort of estimate for where the points are, use those for a preselection and do some more involved calculation for the points that are closest to the ray.
If that won't work, your best bet would be to render a lookup-map of your scene, where each pixel's color-value encodes the id of the point rendered at those coordinates (this is also referred to as GPU picking; examples here, here and even some library here, although that doesn't really do what you will need).
To do that, you need to render your scene twice: create a lookup-map in the first pass and render it regularly in the second pass. The lookup-map will store for every pixel which particle was rendered there.
To get that information you need to set up a THREE.WebGLRenderTarget (this might be downscaled to half the width/height for better performance) and a different material. The vertex-shader stays as it is, but the fragment-shader will just output a single, unique color-value for every particle (or anything else that you can use to identify them). Then render the scene (or better: only the parts that should be raycast-targets) into the renderTarget:
var size = renderer.getSize();
var renderTarget = new THREE.WebGLRenderTarget(size.width / 2, size.height / 2);
renderer.render(pickingScene, camera, renderTarget);
After rendering, you can obtain the content of this lookup-texture using the renderer.readRenderTargetPixels-method:
var pixelData = new Uint8Array(width * height * 4);
renderer.readRenderTargetPixels(renderTarget, 0, 0, width, height, pixelData);
(the layout of pixelData here is the same as for a regular canvas imageData.data)
Once you have that, the raycaster will only need to lookup a single coordinate, read and interpret the color-value as object-id and do something with it.
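That lookup step can be sketched like this, assuming the picking fragment-shader packed a 1-based particle id into the RGB channels, low byte in red (`pickId` and the encoding are my assumptions, not from the answer above):

```javascript
// Read the particle id under a screen coordinate from the picking buffer.
// Assumes ids were encoded in the fragment shader as
//   r = id & 255, g = (id >> 8) & 255, b = (id >> 16) & 255
function pickId(pixelData, x, y, width) {
  const i = (y * width + x) * 4;
  const id = pixelData[i] + (pixelData[i + 1] << 8) + (pixelData[i + 2] << 16);
  // treat 0 as "no particle here" (assumption: ids start at 1)
  return id === 0 ? null : id - 1;
}

// fake 2x2 buffer: particle index 5 (encoded id 6) rendered at (1, 0)
const data = new Uint8Array(2 * 2 * 4);
data[(0 * 2 + 1) * 4] = 6;
console.log(pickId(data, 1, 0, 2)); // 5
console.log(pickId(data, 0, 0, 2)); // null
```

Note that readRenderTargetPixels returns rows bottom-up, so the mouse y coordinate typically needs flipping (and halving, if the render target was downscaled).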
The Problem
I am making a game where enemies appear at some point on the screen then follow a smooth curvy path and disappear at some point. I can make them follow a straight path but can't figure out the way to make them follow the paths depicted in the image.
Attempts
I started with a parabolic curve and implemented it successfully. I just used the equation of a parabola to calculate the coordinates gradually. I have no clue what the equation for the desired paths is supposed to be.
What I want
I am not asking for the code. I just want someone to explain the general technique. If you still want to show some code, I don't have a special preference of programming language for this particular question; you can use C, Java, or even pseudo-code.
First you need to represent each curve with a set of points over time. For example:
- At T(0) the object should be at (X0, Y0).
- At T(1) the object should be at (X1, Y1).
The more points you have, the smoother the curve you will get.
Then you use that set of points to generate two formulas, one for X and another one for Y, using any interpolation method, like the Lagrange interpolation formula:
Note that you should replace 'y' with the time T, and replace 'x' with your X for the X formula, and with Y for the Y formula.
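As an illustration, here is a direct transcription of the Lagrange formula as described above, with one interpolant per coordinate (the sample points are made up):

```javascript
// Lagrange interpolation: returns a function f(t) that passes through
// every sample (ts[i], vs[i]).
function lagrange(ts, vs) {
  return function (t) {
    let sum = 0;
    for (let i = 0; i < ts.length; i++) {
      let basis = 1;
      for (let j = 0; j < ts.length; j++) {
        if (j !== i) basis *= (t - ts[j]) / (ts[i] - ts[j]);
      }
      sum += vs[i] * basis;
    }
    return sum;
  };
}

// one interpolant for X and one for Y, both parameterised by time
const ts = [0, 1, 2];
const xOfT = lagrange(ts, [0, 10, 40]);
const yOfT = lagrange(ts, [0, 5, 0]);
console.log(xOfT(1), yOfT(1)); // 10 5 (passes through the samples)
```

Between the samples the interpolant is a smooth polynomial, which is exactly the curvy path behaviour described above.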
I know you hoped for a simple equation, but unfortunately simplifying each equation will take a huge effort, and my advice is: DON'T do it unless it's worth it.
If you are seeking a simpler equation that performs well every frame of your game, you should read about splines. The spline method splits your curve into smaller segments and uses a simple equation for each segment, for example:
Linear Spline:
Every segment contains 2 points; this will draw a line between every two points.
The result will be something like this:
Or you could use a quadratic spline, or a cubic spline for smoother curves, but it will slow your game's performance. You can read more about those methods here.
I think a linear spline will be great for you, with a reasonable set of points for each curve.
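A linear spline is just a piecewise lerp between consecutive waypoints. A small sketch (`linearSpline` and the waypoints are my own illustrative names and values):

```javascript
// Piecewise-linear spline: position along a list of waypoints for a
// parameter u in [0, 1].
function linearSpline(points, u) {
  const segs = points.length - 1;
  const s = Math.min(Math.floor(u * segs), segs - 1); // which segment
  const t = u * segs - s;                             // local 0..1 within it
  const a = points[s], b = points[s + 1];
  return { x: a.x + (b.x - a.x) * t, y: a.y + (b.y - a.y) * t };
}

const path = [{ x: 0, y: 0 }, { x: 10, y: 5 }, { x: 20, y: 0 }];
console.log(linearSpline(path, 0.25)); // { x: 5, y: 2.5 } (mid first segment)
console.log(linearSpline(path, 1));    // { x: 20, y: 0 }  (last waypoint)
```

Advancing u with time then moves the enemy along the whole path at a steady parametric rate.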
Please change the question title to be more generic.
If you want to generate a spiral path you need:
- Total time
- How many full rotations
- Largest radius
So, total time T_f = 5 sec, rotations R_f = 2.5 * 2 * PI, and the final distance from the start D_f = 200px.
function SpiralEnemy(spawnX, spawnY, time) {
    this.startX = spawnX;
    this.startY = spawnY;
    this.startTime = time;
    // these will change and be used for rendering
    this.x = this.startX;
    this.y = this.startY;
    this.done = false;
    // constants we figured out above
    var TFinal = 5.0;
    var RFinal = -2.6 * 2 * Math.PI;
    var RStart = -Math.PI / 2;
    var DFinal = 100;
    // the update function called every animation tick with the current time
    this.update = function(t) {
        var delta = t - this.startTime;
        if (delta > TFinal) {
            this.done = true;
            return;
        }
        // find out how far along you are in the animation
        var percent = delta / TFinal;
        // what is your current angle of rotation (in radians)
        var angle = RStart + RFinal * percent;
        // how far from your start point should you be
        var dist = DFinal * percent;
        // update your coordinates
        this.x = this.startX + Math.cos(angle) * dist;
        this.y = this.startY + Math.sin(angle) * dist;
    };
}
EDIT Here's a jsfiddle to mess with http://jsfiddle.net/pxb3824z/
EDIT 2 Here's a loop (instead of spiral) version http://jsfiddle.net/dpbLxuz7/
The loop code splits the animation into 2 parts: the beginning half and the end half.
Beginning half: angle = Math.tan(T_percent) * 2 and dist = Speed + Speed * (1 - T_percent)
End half: angle = -Math.tan(1 - T_percent) * 2 and dist = Speed + Speed * T_percent
T_percent is normalized to (0, 1.0) for both halves.
function LoopEnemy(spawnX, spawnY, time) {
    this.startX = spawnX;
    this.startY = spawnY;
    this.startTime = time;
    // these will change and be used for rendering
    this.x = this.startX;
    this.y = this.startY;
    this.last = time;
    this.done = false;
    // constants we figured out above
    var TFinal = 5.0;
    var RFinal = -2 * Math.PI;
    var RStart = 0;
    var Speed = 50; // px per second
    // the update function called every animation tick with the current time
    this.update = function(t) {
        var delta = t - this.startTime;
        if (delta > TFinal) {
            this.done = true;
            return;
        }
        // find out how far along you are in the animation
        var percent = delta / TFinal;
        var localDelta = t - this.last;
        // what is your current angle of rotation (in radians)
        var angle = RStart;
        var dist = Speed * localDelta;
        if (percent <= 0.5) {
            percent = percent / 0.5;
            angle -= Math.tan(percent) * 2;
            dist += dist * (1 - percent);
        } else {
            percent = (percent - 0.5) / 0.5;
            angle -= -Math.tan(1 - percent) * 2;
            dist += dist * percent;
        }
        // update your coordinates
        this.last = t;
        this.x = this.x + Math.cos(angle) * dist;
        this.y = this.y + Math.sin(angle) * dist;
    };
}
Deriving the exact distance traveled and the height of the loop for this one is a bit more work. I arbitrarily chose a Speed of 50px/sec, which gives a final x offset of ~+145 and a loop height of ~+114. The distance and height will scale linearly from those values (e.g. Speed = 25 will have a final x of ~73 and a loop height of ~57).
I don't understand how your curve is given. If you need the curve depicted in the picture, you may find that the curve is given analytically and use it directly. If you don't have any curves, you can write to me here: hedgehogues#bk.ru, and I will help you find one. I leave my e-mail here because I don't get any notifications about users' answers from Stack Overflow; I don't know why.
If you have a curve in parametric form on [A, B], you can write code like this:
typedef struct
{
    double x, y;
} SPoint;

double coord = A;
double step = 0.001;
double eps = 1e-6;
while (coord + step - eps < B)
{
    SPoint p1, p2;
    p1.x = x(coord);
    p1.y = y(coord);
    coord += step;
    p2.x = x(coord);
    p2.y = y(coord);
    drawline(p1, p2);
}
I use a function like this in three.js r69:
function Point3DToScreen2D(point3D, camera) {
    var p = point3D.clone();
    var vector = p.project(camera);
    vector.x = (vector.x + 1) / 2 * window.innerWidth;
    vector.y = -(vector.y - 1) / 2 * window.innerHeight;
    return vector;
}
It works fine when I keep the scene still.
But when I rotate the scene it returns a wrong position on the screen. It occurs when I rotate about 180 degrees: the point shouldn't have a position on the screen, but it is still shown.
I set a position var tmpV = Point3DToScreen2D(new THREE.Vector3(-67, 1033, -2500), camera); in update and show it with CSS3D. When I rotate about 180 degrees (but less than 360), the point shows up on the screen again. Obviously it's a wrong position, as can be told from the scene, since I haven't rotated 360 degrees.
I know little about matrices, so I don't know how project works.
Here is the source of project in three.js:
project: function () {
    var matrix;
    return function ( camera ) {
        if ( matrix === undefined ) matrix = new THREE.Matrix4();
        matrix.multiplyMatrices( camera.projectionMatrix, matrix.getInverse( camera.matrixWorld ) );
        return this.applyProjection( matrix );
    };
}()
Is the matrix.getInverse( camera.matrixWorld ) redundant? I tried deleting it, and it didn't work.
Can anyone help me? Thanks.
You are projecting a 3D point from world space to screen space using a pattern like this one:
var vector = new THREE.Vector3();
var canvas = renderer.domElement;
vector.set( 1, 2, 3 );
// map to normalized device coordinate (NDC) space
vector.project( camera );
// map to 2D screen space
vector.x = Math.round( ( vector.x + 1 ) * canvas.width / 2 );
vector.y = Math.round( ( - vector.y + 1 ) * canvas.height / 2 );
vector.z = 0;
However, using this approach, points behind the camera are projected to screen space, too.
You said you want to filter out points that are behind the camera. To do that, you can use this pattern first:
var matrix = new THREE.Matrix4(); // create once and reuse
...
// get the matrix that maps from world space to camera space
matrix.getInverse( camera.matrixWorld );
// transform your point from world space to camera space
p.applyMatrix4( matrix );
Since the camera is located at the origin in camera space, and since the camera is always looking down the negative-z axis in camera space, points behind the camera will have a z-coordinate greater than zero.
// check if point is behind the camera
if ( p.z > 0 ) ...
three.js r.71
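The two patterns above combine into one function. Here is a library-free sketch with hand-rolled row-major matrices (three.js itself stores matrices column-major; `worldToScreen`, `mul`, and all the matrix numbers below are made-up illustrations, not three.js API):

```javascript
// Minimal world -> screen projection with a behind-the-camera test.
function worldToScreen(p, view, proj, width, height) {
  const cam = mul(view, [p.x, p.y, p.z, 1]);
  if (cam[2] > 0) return null; // behind the camera in camera space
  const clip = mul(proj, cam);
  const ndcX = clip[0] / clip[3], ndcY = clip[1] / clip[3];
  return { x: Math.round((ndcX + 1) * width / 2),
           y: Math.round((-ndcY + 1) * height / 2) };
}

function mul(m, v) { // row-major 4x4 times column vector
  return [0, 1, 2, 3].map(r =>
    m[r * 4] * v[0] + m[r * 4 + 1] * v[1] + m[r * 4 + 2] * v[2] + m[r * 4 + 3] * v[3]);
}

// identity view (camera at origin looking down -z) and a made-up
// perspective matrix, just to exercise the function
const I = [1, 0, 0, 0,  0, 1, 0, 0,  0, 0, 1, 0,  0, 0, 0, 1];
const P = [1, 0, 0, 0,  0, 1, 0, 0,  0, 0, -1, -0.2,  0, 0, -1, 0];
console.log(worldToScreen({ x: 0, y: 0, z: -5 }, I, P, 800, 600)); // { x: 400, y: 300 }
console.log(worldToScreen({ x: 0, y: 0, z: 5 }, I, P, 800, 600));  // null (behind)
```

Doing the camera-space z test before the perspective divide is what avoids the mirrored "ghost" points discussed in the question.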
Like the example above but you can check vector.z to determine if it's in front.
var vector = new THREE.Vector3();
var canvas = renderer.domElement;
vector.set( 1, 2, 3 );
// map to normalized device coordinate (NDC) space
vector.project( camera );
// map to 2D screen space
vector.x = Math.round( ( vector.x + 1 ) * canvas.width / 2 );
vector.y = Math.round( ( - vector.y + 1 ) * canvas.height / 2 );
// behind the camera if z isn't in 0..1 [frustum range]
if ( vector.z > 1 ) {
    vector = null;
}
To delve a little deeper into this answer:
// behind the camera if z isn't in 0..1 [frustum range]
if ( vector.z > 1 ) {
    vector = null;
}
This is not true. The mapping is not continuous. Points beyond the far plane also map to z-values greater than 1.
What exactly does the z-value of a projected vector stand for? X and Y are in normalised clip space [-1, 1]; what about z?
Would this be true?
projectVector.project(camera);
var inFrontOfCamera = projectVector.z < 1;
Since the camera is located at the origin in camera space, and since the camera is always looking down the negative-z axis in camera space, points behind the camera will have a z-coordinate greater than 1.
//check if point is behind the camera
if ( p.z > 1 ) ...
NOTICE: If this condition is satisfied (the point is behind the camera), the projected x and y come out mirrored through the NDC origin and need their signs flipped:
{x: 0.233, y: -0.566, z: 1.388}
// after flipping the signs of x and y
{x: -0.233, y: 0.566, z: 1.388}
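The mirroring comes from the perspective divide: a point behind the camera has negative clip-space w, and dividing x and y by a negative w flips their signs. A minimal illustration (the clip-space numbers are made up):

```javascript
// Perspective divide: with negative w the projected point is mirrored
// through the NDC origin.
function ndc(clip) { return { x: clip.x / clip.w, y: clip.y / clip.w }; }

const inFront = ndc({ x: 1, y: -2, w: 4 });  // { x: 0.25, y: -0.5 }
const behind  = ndc({ x: 1, y: -2, w: -4 }); // { x: -0.25, y: 0.5 } (mirrored)
console.log(inFront, behind);
```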
This code calculates the distance between 2 points using the distance formula, Math.sqrt((x1 - x2)^2 + (y1 - y2)^2). My first point has mmx and mmy coordinates and the second one has ox and oy coordinates. My question is simple: is there any FASTER way to calculate this?
private function dist(mmx:int, mmy:int, ox:int, oy:int):Number {
    return Math.sqrt((mmx - ox) * (mmx - ox) + (mmy - oy) * (mmy - oy));
}
This is my code; thanks for the help.
public function moveIT(Xmouse, Ymouse):void {
    f = Point.distance(new Point(Xmouse, Ymouse), new Point(mainSP.x, mainSP.y)); // distance between mouse and instance
    distancePro = Point.distance(pointO, new Point(mainSP.x, mainSP.y)); // distance from start point
    if (f < strtSen) { // move forward
        tt.stop(); tt.reset(); // delay timer on destination
        mF = true; mB = false;
        ag = Math.atan2((Ymouse - mainSP.y), (Xmouse - mainSP.x)); // move-forward angle, between mouse and instance
    }
    if (mF) { /// shoot loop
        if (f > 5) { // 5 pixel
            mainSP.x -= Math.round((400 / f) + .5) * Math.cos(ag);
            mainSP.y -= Math.round((400 / f) + .5) * Math.sin(ag);
        }
        if (distancePro > backSen) { // (backSen = max distance)
            mF = false;
            tt.start(); // delay timer on destination
        }
    }
    if (mB) { /// return loop
        if (distancePro < 24) { // back angle re-calculation
            agBACK = Math.atan2((y1 - mainSP.y), (x1 - mainSP.x));
        }
        mainSP.x += (Math.cos(agBACK) * rturnSpeed);
        mainSP.y += (Math.sin(agBACK) * rturnSpeed);
        if (distancePro < 4) { // fix position to start point (x1,y1)
            mB = false;
            mainSP.x = x1; mainSP.y = y1;
        }
    }
}
private function scTimer(evt:TimerEvent):void { // timer
    tt.stop();
    agBACK = Math.atan2((y1 - mainSP.y), (x1 - mainSP.x)); // move-back angle between start point and instance
    mB = true;
}
Also: pointO = new Point(x1, y1); sets the start point. I cannot use mouseX and mouseY because of the way the application is called by the parent class, so I can only pass x and y to my loop.
I think that if you inline your function instead of making an actual function call, it is the fastest way possible.
f = Math.sqrt((Xmouse-mainSP.x)*(Xmouse-mainSP.x)+(Ymouse-mainSP.y)*(Ymouse-mainSP.y));
distancePro = Math.sqrt((x1-mainSP.x)*(x1-mainSP.x)+(y1-mainSP.y)*(y1-mainSP.y));
Using Point.distance is WAY more readable, but it is several times slower. If you want speed, you want to inline your math directly.
Use Point.distance
d = Point.distance( new Point( x1, y1 ), new Point( x2, y2 ) );
It'll be executed in native code which is typically faster than interpreted code.
If you're in 3D space, use Vector3D.distance
If you're doing collision detection, comparing the lengths of vectors (2D or 3D) is quite common and can be resource intensive due to the use of the sqrt function. If you compare the lengthSquared instead, it will be much more performant.
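That comparison trick can be sketched as follows (plain JavaScript for brevity; AS3's Point and Vector3D expose the same idea via lengthSquared):

```javascript
// Compare squared distance against a squared threshold: no sqrt needed,
// and the result is identical because sqrt is monotonic on non-negatives.
function withinRange(x1, y1, x2, y2, range) {
  const dx = x1 - x2, dy = y1 - y2;
  return dx * dx + dy * dy <= range * range;
}

console.log(withinRange(0, 0, 3, 4, 5));   // true  (distance is exactly 5)
console.log(withinRange(0, 0, 3, 4, 4.9)); // false
```

This would fit the f < strtSen and distancePro > backSen checks above directly, since they only compare distances and never display them.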
Calling a static function is a bit expensive. You can save that overhead by doing this:
private var sqrtFunc:Function = Math.sqrt;
private function dist(mmx:int, mmy:int, ox:int, oy:int):Number {
    return sqrtFunc((mmx - ox) * (mmx - ox) + (mmy - oy) * (mmy - oy));
}