THREE Light 360 degree - three.js

How do I make a light that shines like the Sun in space, that is, radiating from one central point in the middle of the scene evenly in all directions?
A spotlight has a limited cone angle.
A directional light cannot be used to shine outward from the star in both directions.
The only idea I had was to place a huge number of spotlights at a single position, rotated in all directions, but that is not an optimal solution and is not CPU- or GPU-friendly.

A THREE.PointLight casts light in all directions.
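For example, a minimal sketch (the color and intensity values are placeholders, not taken from the question):

import * as THREE from 'three';

const scene = new THREE.Scene();

// A point light radiates evenly in all directions from its position,
// like a star at the center of the scene.
const sun = new THREE.PointLight(0xffffff, 2); // color, intensity
sun.position.set(0, 0, 0);
scene.add(sun);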

Related

How do I find the corners of a plane in 3d space if I know three points

Apologies in advance for my feeble maths.
I'm trying to find the corners of a plane in space based on the equation of that plane. Here's what I know: I know three points on the plane, where they fall in the plane's own 2D coordinate space (x, y), and where they are in 3D space. I also know the width and height of the plane, and I can now calculate the equation of the plane. The plane sits on the inside of a large sphere that surrounds the origin, so in theory it should more or less face the camera (though in my diagram it doesn't face the origin, as the diagram is just for illustration).
But it's not clear to me how I can use that to figure out another point. One thought I had was to find the transform that rotates the plane about one of the points (so it stays in the same place) until it is parallel to the xy plane, find the position of the new point there, and then apply the inverse of that transform. But it's not clear to me how I would find that transform matrix or how to use it. Could I do this using normals and vector maths? I understand what normals are, but I'm fuzzy about how to use them.
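One way to make the vector-maths route concrete is sketched below. The function and parameter names are made up, three.js vectors are used purely for convenience, and it assumes the three known points are not collinear in the plane's 2D coordinates and that width/height are in the same units as those 2D coordinates:

import * as THREE from 'three';

// Given three points with known plane-space (u, v) coordinates and known 3D positions,
// recover the plane's 3D origin and axis vectors, then the four corners of a
// width x height rectangle on that plane. Model: P_i = O + u_i * X + v_i * Y.
function planeCorners(
  p: THREE.Vector3[],          // 3D positions of the three known points
  uv: [number, number][],      // their (u, v) coordinates in the plane's 2D space
  width: number,
  height: number
): THREE.Vector3[] {
  const d1 = p[1].clone().sub(p[0]);   // P1 - P0
  const d2 = p[2].clone().sub(p[0]);   // P2 - P0
  const du1 = uv[1][0] - uv[0][0], dv1 = uv[1][1] - uv[0][1];
  const du2 = uv[2][0] - uv[0][0], dv2 = uv[2][1] - uv[0][1];
  const det = du1 * dv2 - dv1 * du2;   // non-zero if the (u, v) points are not collinear

  // Solve the 2x2 system for the in-plane axis vectors X and Y (each a 3D vector).
  const X = d1.clone().multiplyScalar(dv2 / det).sub(d2.clone().multiplyScalar(dv1 / det));
  const Y = d2.clone().multiplyScalar(du1 / det).sub(d1.clone().multiplyScalar(du2 / det));

  // The 3D position of the plane-space origin (u, v) = (0, 0).
  const O = p[0].clone()
    .sub(X.clone().multiplyScalar(uv[0][0]))
    .sub(Y.clone().multiplyScalar(uv[0][1]));

  // Corners at (0,0), (w,0), (w,h), (0,h) in plane space.
  const cornersUV: [number, number][] = [[0, 0], [width, 0], [width, height], [0, height]];
  return cornersUV.map(([u, v]) =>
    O.clone().add(X.clone().multiplyScalar(u)).add(Y.clone().multiplyScalar(v))
  );
}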

Algorithm for finding the maximum sphere in the gap between a set of spheres

I have a set of non-intersecting spheres with known centers and radii, and I want to find the maximum sphere in the gap between these spheres.
Currently my approach is a 'nudge-and-bulge' method: I pick a point P in the void near the center of all the spheres and find the largest ball centered at P. I then let the ball's center take a small random step from P to a new position P' and check whether the ball at P' with the old radius intersects any other sphere; if there is no intersection, the sphere grows until it hits one of them, otherwise the center walks again, and so on.
This approach is quite time-consuming, and I wonder if there is a better way to address the problem. I have searched online, and the only relevant discussions I found are about finding the maximum inscribed circle/sphere among points or inside a polygon/polyhedron.
If the sphere can't be made bigger by your 'nudge-and-bulge' method, then it must be tangent to at least 4 other spheres.
Given 4 spheres, there is only one sphere that is tangent to all of them. (the position and radius are 4 unknowns, determined by the 4 distance equations to the given spheres)
So try all combinations of 4 spheres. For each combination, find the tangent sphere and see if it intersects any other ones.
That idea breaks your problem into a finite number of possibilities. The simple algorithm is O(n^5), though, which is still a long time if you have a lot of spheres. You can use the 3D Voronoi diagram to greatly accelerate this, as @YvesDaoust suggests.
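Written out, with the unknown center $\mathbf{c} = (x, y, z)$ and radius $r$, and the four chosen spheres having centers $\mathbf{c}_i$ and radii $r_i$, the four distance (external-tangency) equations are

$$\lVert \mathbf{c} - \mathbf{c}_i \rVert = r + r_i, \qquad i = 1, \dots, 4.$$

Squaring each equation and subtracting pairs cancels the $\lVert \mathbf{c} \rVert^2$ and $r^2$ terms, leaving three linear equations in $(\mathbf{c}, r)$ plus one quadratic, so each combination of four spheres can be solved in closed form rather than numerically.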

Circle inscribed in a rectangle

I am given a lot of points, and the points are supposed to come from a rectangle. I'm required to calculate the boundary of the rectangle (which is relatively easy), but I also have to figure out the radius of a candle (a cylinder) that could be anywhere in the rectangle, all from just the given points. I would appreciate it if someone could suggest ways to achieve this.
The points in my case are measurements from a robot wandering through this rectangle, and the empty circle is a pole of a certain unknown radius inside the rectangle that the robot can hit. So I need to figure out the radius of the pole in order to avoid it. The estimate doesn't have to be exact; I'm expecting the robot measurements to be dense enough to give me a pretty good idea of where the pole is.
If you can afford it, just rasterize the area, mark the affected pixels, then start "growing" areas around them until they meet (this will give you raster versions of their Voronoi regions). The pixel (inside the rectangle) with the largest distance to any point will likely be the empty circle's center.
Alternatively, you can do a Delaunay triangulation, find the largest triangle inside the rectangle, and add its neighbours while they form a convex shape. That shape will be an approximation of the circle.
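A brute-force sketch of the first idea: instead of actually growing regions, it evaluates, on a grid, the distance from each cell to the nearest measurement and keeps the farthest cell. All names are made up, and a proper distance transform or the Delaunay route above scales much better:

// points: the robot's measurements; rect: [xmin, ymin, xmax, ymax]; step: grid resolution.
type Pt = { x: number; y: number };

function largestEmptyCircle(points: Pt[], rect: [number, number, number, number], step: number) {
  const [xmin, ymin, xmax, ymax] = rect;
  let best = { x: xmin, y: ymin, r: -Infinity };
  for (let x = xmin; x <= xmax; x += step) {
    for (let y = ymin; y <= ymax; y += step) {
      // Distance from this grid cell to the nearest measured point...
      let d = Infinity;
      for (const p of points) d = Math.min(d, Math.hypot(p.x - x, p.y - y));
      // ...the cell farthest from every measurement approximates the pole's center,
      // and that distance approximates its radius.
      if (d > best.r) best = { x, y, r: d };
    }
  }
  return best;
}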

Z-fighting: inaccurate coordinates of faces / shaking

When the camera gets sufficiently close to the surface of the sphere, which is a model of the Earth, I get inaccurate vertex coordinates. Because of this, noticeable shaking occurs when moving the camera.
How do I get rid of it? The solutions I found on the Internet suggest inverting zNear and zFar, but I have no idea how to do that in three.js.
THREE.PerspectiveCamera has near and far parameters. These define the distances of the near and far clipping planes.
You have to choose the clipping planes to suit your scene. For example, if you have a large scene and the near plane is very small, most of the depth buffer's precision is spent right next to the camera, which can cause exactly what you experienced.
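For example (the numbers are placeholders; the point is simply to keep the far/near ratio as small as your scene allows):

import * as THREE from 'three';

// fov, aspect, near, far -- the last two are the clipping planes.
const camera = new THREE.PerspectiveCamera(45, window.innerWidth / window.innerHeight, 1, 100000);

// near/far can also be adjusted at runtime, e.g. as the camera approaches the surface:
camera.near = 10;
camera.far = 50000;
camera.updateProjectionMatrix(); // required after changing near or far

A huge far/near ratio (say near = 0.0001 with far = 1e9) leaves too few depth-buffer bits for distant geometry, which is what produces the z-fighting; for scenes that genuinely need a wide depth range, three.js also offers the logarithmicDepthBuffer option on WebGLRenderer.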

Skewed: a rotating camera in a simple CPU-based voxel raycaster/raytracer

I'm trying to write a simple voxel raycaster as a learning exercise. This is purely CPU-based for now, until I figure out how things work exactly; OpenGL is just (ab)used to blit the generated bitmap to the screen as often as possible.
Now I have gotten to the point where a perspective-projection camera can move through the world and I can render (mostly, minus some artifacts that need investigation) perspective-correct 3-dimensional views of the "world", which is basically empty but contains a voxel cube of the Stanford Bunny.
So I have a camera that I can move up and down, strafe left and right and "walk forward/backward" -- all axis-aligned so far, no camera rotations. Herein lies my problem.
Screenshots 1-3: raycasting voxels while the camera remains strictly axis-aligned.
Now I have for a few days been trying to get rotation to work. The basic logic and theory behind matrices and 3D rotations is, in theory, very clear to me. Yet I have only ever achieved a "2.5D" rendering when the camera rotates... fish-eyey, a bit like in Google Street View: even though I have a volumetric world representation, it seems -- no matter what I try -- as if I first created a rendering from the "front view" and then rotated that flat rendering according to the camera rotation. Needless to say, I'm by now aware that rotating the rays is not strictly necessary and is error-prone.
Still, in my most recent setup, with the most simplified raycast ray-position-and-direction algorithm possible, my rotation still produces the same fish-eyey, flat-render-rotated look:
Camera "rotated to the right by 39 degrees" -- note how the blue-shaded left-hand side of the cube from screen #2 is not visible in this rotation, yet by now "it really should" be!
Now of course I'm aware of this: in a simple axis-aligned, no-rotation setup like I had in the beginning, the ray simply traverses the positive z direction in small steps, diverging to the left or right and top or bottom only depending on pixel position and the projection matrix. As I "rotate the camera to the right or left" -- i.e. rotate it around the Y axis -- those very steps should simply be transformed by the proper rotation matrix, right? So for forward traversal the Z step gets a bit smaller the more the camera rotates, offset by an "increase" in the X step. Yet for the pixel-position-based horizontal and vertical divergence, increasing fractions of the X step need to be "added" to the Z step. Somehow, none of the many matrices I experimented with, nor my experiments with matrix-less, hard-coded, verbose sin/cos calculations, really get this part right.
Here's my basic per-ray pre-traversal algorithm -- syntax in Go, but take it as pseudocode:
fx and fy: pixel positions x and y
rayPos: vec3 for the ray starting position in world-space (calculated as below)
rayDir: vec3 for the xyz-steps to be added to rayPos in each step during ray traversal
rayStep: a temporary vec3
camPos: vec3 for the camera position in world space
camRad: vec3 for camera rotation in radians
pmat: typical perspective projection matrix
The algorithm / pseudocode:
// 1: rayPos is for now "this pixel, as a vector on the view plane in 3d, at The Origin"
rayPos.X, rayPos.Y, rayPos.Z = ((fx / width) - 0.5), ((fy / height) - 0.5), 0
// 2: rotate around Y axis depending on cam rotation. No prob since view plane still at Origin 0,0,0
rayPos.MultMat(num.NewDmat4RotationY(camRad.Y))
// 3: a temp vec3. planeDist is -0.15 or some such -- fov-based dist of view plane from eye and also the non-normalized, "in axis-aligned world" traversal step size "forward into the screen"
rayStep.X, rayStep.Y, rayStep.Z = 0, 0, planeDist
// 4: rotate this too -- 0,zstep should become some meaningful xzstep,xzstep
rayStep.MultMat(num.NewDmat4RotationY(camRad.Y))
// set up direction vector from still-origin-based-ray-position-off-rotated-view-plane plus rotated-zstep-vector
rayDir.X, rayDir.Y, rayDir.Z = -rayPos.X - rayStep.X, -rayPos.Y, rayPos.Z + rayStep.Z
// perspective projection
rayDir.Normalize()
rayDir.MultMat(pmat)
// before traversal, the ray starting position has to be transformed from origin-relative to campos-relative
rayPos.Add(camPos)
I'm skipping the traversal and sampling parts -- as per screens #1 through #3, those are "basically mostly correct" (though not pretty) -- when axis-aligned / unrotated.
It's a lot easier if you picture the system as a pinhole camera rather than anything else. Instead of shooting rays from the surface of a rectangle representing your image, shoot the rays from a point, through the rectangle that will be your image plane, into the scene. All the primary rays should have the same point of origin, only with slightly different directions. The directions are determined using basic trig by which pixel in the image plane you want them to go through.
To make the simplest example, let's imagine your point is at the camera, and your image plane is one unit along the z axis, and two units tall and wide. That way, the pixel at the upper-left corner wants to go from (0,0,0) through (-1, -1, 1). Normalize (-1, -1, 1) to get the direction. (You don't actually need to normalize the direction just to do ray intersection, but if you decide not to, remember that your directions are non-normalized before you try to compute the distance the ray has travelled or anything like that.) For every other pixel, compute the point on the plane it wants to go through the way you've already been doing, by dividing the size of the plane by the number of pixels, in each direction.
Then, and this is the most important thing, don't try to do a perspective projection. That's necessary for scan-conversion techniques, to map every vertex to a point on the screen, but in ray-tracing, your rays accomplish that just by spreading out from one point into space. The direction from your start point (camera position, the origin in this example), through your image plane, is exactly the direction you need to trace with. If you were to want an orthographic projection instead (and you almost never want this), you'd accomplish this by having the direction be the same for all the rays, and the starting positions vary across the image plane.
If you do that, you'll have a good starting point. Then you can try again to add camera rotation, either by rotating the image plane about the origin before you iterate over it to compute ray directions, or by rotating the ray directions directly. There's nothing wrong with rotating directions directly! When you bear in mind that a direction is just the position your ray goes through if it starts from the origin, it's easy to see that rotating the direction, and rotating the point it goes through, do exactly the same thing.
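A minimal sketch of that recipe (primaryRay and its parameters are invented names; three.js vectors are only used here as a convenient CPU-side vector type, and the image plane sits one unit along +Z as in the (-1, -1, 1) example above):

import * as THREE from 'three';

const Y_AXIS = new THREE.Vector3(0, 1, 0);

// Build the primary ray for pixel (fx, fy) of a width x height image.
// fov is the vertical field of view in radians; camYaw is the camera's rotation about the Y axis.
function primaryRay(fx: number, fy: number, width: number, height: number,
                    fov: number, camPos: THREE.Vector3, camYaw: number) {
  const halfH = Math.tan(fov / 2);
  const halfW = halfH * (width / height);

  // Point on the image plane one unit in front of the eye, in camera space;
  // the upper-left pixel maps to roughly (-halfW, -halfH, 1), matching the (-1, -1, 1) example.
  const dir = new THREE.Vector3(
    ((fx + 0.5) / width * 2 - 1) * halfW,
    ((fy + 0.5) / height * 2 - 1) * halfH,
    1
  );

  // Rotate the direction by the camera's yaw (no projection matrix involved), then normalize.
  dir.applyAxisAngle(Y_AXIS, camYaw).normalize();

  // All primary rays share the same origin: the camera position.
  return { origin: camPos.clone(), dir };
}

Traversal then just steps along origin + t * dir; the projection matrix never appears.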
