How do I get a projection matrix I can use for a point light shadow map?

I'm currently working on a project that uses shadow textures to render shadows.
It was pretty easy for spotlights, since only one texture in the direction of the spotlight is needed, but it's a little more difficult for point lights, since they need either 6 textures in all directions or one texture that somehow captures all the objects around the point light.
And that's where my problem is. How can I generate a projection matrix that somehow renders all the objects in a 360 degree angle around the point light?
Basically, how do I create a fisheye (or any other 360 degree camera) vertex shader?

How can I generate a projection matrix that somehow renders all the objects in a 360 degree angle around the point light?
You can't. A 4x4 projection matrix in homogeneous space cannot represent any operation which would result in bending the edges of polygons; a straight line stays a straight line.
Basically, how do I create a fisheye (or any other 360 degree camera) vertex shader?
You can't do that either, at least not in the general case. And this is not a limit of the projection matrix in use, but a general limit of the rasterizer. You could of course put the formula for fisheye distortion into the vertex shader, but the rasterizer will still rasterize each triangle with straight edges; you just distort the positions of the corner points of each triangle. This means that it will only be correct for tiny triangles covering a single pixel. For larger triangles, you completely screw up the image. If you have stuff like T-joints, this even results in holes or overlaps in objects which actually should be perfectly closed.
It was pretty easy for spotlights, since only one texture in the direction of the spotlight is needed, but it's a little more difficult since it needs either 6 textures in all directions or one texture that somehow renders all the objects around the point light.
The correct solution for this is to use a single cube map texture, which provides 6 faces. Since the faces form a perfect cube, each face can be rendered with a standard symmetric perspective projection with a field of view of 90 degrees both horizontally and vertically.
In modern OpenGL, you can use layered rendering. In that case, you attach the 6 faces of the cube map as the layers of an FBO attachment, and you can use the geometry shader to amplify your geometry 6 times and transform it according to the 6 different projection matrices, so that you still only need one render pass for the complete shadow map.
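As a rough sketch of what the geometry shader part of that could look like (the uniform name uShadowMatrices and the world-space input are illustrative assumptions; the 6 matrices would be the 90 degree face view-projection matrices set up on the CPU side):

#version 330 core
// Sketch only: emit each incoming triangle once per cube map face.
layout(triangles) in;
layout(triangle_strip, max_vertices = 18) out;

uniform mat4 uShadowMatrices[6];   // one view-projection matrix per cube face

out vec4 vWorldPos;                // e.g. for distance-based shadow depth in the fragment shader

void main()
{
    for (int face = 0; face < 6; ++face)
    {
        gl_Layer = face;                        // route this primitive to cube face 'face'
        for (int i = 0; i < 3; ++i)
        {
            vWorldPos   = gl_in[i].gl_Position; // assumes the vertex shader outputs world-space positions
            gl_Position = uShadowMatrices[face] * vWorldPos;
            EmitVertex();
        }
        EndPrimitive();
    }
}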
There are some other vendor-specific extensions which might be used to further optimize the cube map rendering, like Nvidia's NV_viewport_swizzle (available on Maxwell and newer GPUs), but I only mention this for completeness.

Related

Raymarching on rasterized shape

I have been wondering how I can combine two methods of rendering in such a way that the rasterized on-screen shape serves as a canvas for ray-march based rendering in the fragment shader.
Take these beautiful examples: https://www.shadertoy.com/view/XsjXRm or https://www.shadertoy.com/view/MtXSzS
The visible part of them can be roughly represented as a sphere. Now what I'd like to do is place, say, two spheres somewhere in the world and run the regular rasterization pass. The rasterization will yield which pixels are occupied by the models, and for those pixels I'd like to run the Shadertoy ray-marching algorithms to get the desired look (so my two spheres look like the Shadertoy "spheres" in the examples above).
Is this something doable?
P.S. I know rasterization and matrix/space transformations quite well, but I have a very vague understanding of how ray-marching works. Pardon my ignorance.
This is definitely possible.
The idea is to use the same camera for ray tracing and for rasterization.
You can get the camera's position from the camera matrix in the fragment shader, and you can get the ray direction by subtracting the camera's position from the fragment's world position and normalizing the result.
This way, rays are only cast from the camera through the visible fragments.
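A minimal GLSL sketch of that ray setup, assuming the application supplies the camera position as a uniform and the vertex shader passes the fragment's world-space position (uCameraPos and vWorldPos are made-up names; the placeholder rayMarch is where your Shadertoy-style code would go):

precision highp float;

uniform vec3 uCameraPos;   // camera position in world space
varying vec3 vWorldPos;    // world-space position of this fragment (from the vertex shader)

// Placeholder marcher: replace with the Shadertoy-style effect you want.
vec4 rayMarch(vec3 ro, vec3 rd)
{
    return vec4(0.5 + 0.5 * rd, 1.0);   // just visualize the ray direction for now
}

void main()
{
    vec3 rayOrigin    = uCameraPos;
    vec3 rayDirection = normalize(vWorldPos - uCameraPos);   // camera -> fragment
    gl_FragColor = rayMarch(rayOrigin, rayDirection);
}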

THREE.js adding bullets as sprites and rotating each individually

I have been working on programming a game where everything is rendered in 3D, though the bullets are 2D sprites. This poses a problem: I have to rotate the bullet sprite by rotating the material, which rotates every bullet sharing that material rather than just the individual sprite I want to turn. It is also kind of inefficient to create a new sprite clone for every bullet. Is there a better way to do this? Thanks in advance.
Rotate the sprite itself instead of the texture.
edit:
as the OP mentioned, the SpriteMaterial controls the sprite's rotation, so setting the sprite's rotation.y manually does nothing...
So instead of using the Sprite type, you could use a regular PlaneGeometry mesh with a MeshBasicMaterial or similar, and update the matrices yourself to both keep the sprite facing the camera and rotate it toward its trajectory.
Then at least you can share the material amongst all instances.
Then the performance bottleneck becomes the number of draw calls (one per sprite).
You can improve on that by using a single BufferGeometry and computing the 4 screen-space vertices for each sprite, each frame. This moves the bottleneck away from draw calls, and will be limited by the speed at which you can transform vertices in JavaScript, which is slow but not the end of the world. This is also how many THREE.js particle systems are implemented.
The next step beyond that is to use a custom vertex shader to do the heavy vertex computation. You still update the BufferGeometry each frame, but instead of transforming verts you just write the position of the sprite into each of the 4 verts, and let the vertex shader figure out which of the 4 corners it is transforming (based on the UV coordinate, or stored in one of the vertex color channels, .r for instance) and which sprite to render from your sprite atlas (a single texture/canvas with all your sprites laid out on a grid), encoded in the .g of the vertex color.
The next step beyond that is to not update the BufferGeometry every frame, but to store both the position and velocity of the sprite in the vertex data and only pass a time offset uniform into the vertex shader; the vertex shader can then handle integrating the sprite position over a longer time period. This only works for sprites that have deterministic behavior, or behavior that can be derived from a texture data source like a noise or warping texture. Things like smoke, explosions, etc.
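A rough GLSL sketch of such a vertex shader (attribute and uniform names here are made up, not a real THREE.js API; the corner index 0..3 is assumed to be stored directly in color.r and the atlas tile index in color.g):

// Every sprite is 4 vertices that all carry the same center and velocity;
// the shader figures out which corner it is expanding from color.r.
attribute vec3 center;      // sprite center at t = 0
attribute vec3 velocity;    // sprite velocity
attribute vec4 color;       // .r = corner index 0..3, .g = atlas tile index

uniform mat4 modelViewMatrix;
uniform mat4 projectionMatrix;
uniform float uTime;        // time offset passed in each frame
uniform float uSize;        // half-size of the sprite in world units
uniform float uAtlasTiles;  // tiles per row in the (square) sprite atlas

varying vec2 vUv;

void main()
{
    // Integrate the motion on the GPU: fine for deterministic behavior.
    vec3 pos = center + velocity * uTime;

    // Pick the quad corner: 0 = (-1,-1), 1 = (1,-1), 2 = (1,1), 3 = (-1,1).
    float c = floor(color.r + 0.5);
    vec2 corner = vec2((c > 0.5 && c < 2.5) ? 1.0 : -1.0,
                       (c > 1.5)            ? 1.0 : -1.0);

    // Billboard: offset in view space so the quad always faces the camera.
    vec4 viewPos = modelViewMatrix * vec4(pos, 1.0);
    viewPos.xy += corner * uSize;

    // Look up the tile in the atlas from color.g.
    float tile = floor(color.g + 0.5);
    vec2 tileOrigin = vec2(mod(tile, uAtlasTiles), floor(tile / uAtlasTiles)) / uAtlasTiles;
    vUv = tileOrigin + (corner * 0.5 + 0.5) / uAtlasTiles;

    gl_Position = projectionMatrix * viewPos;
}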
You can extend these techniques to draw gigantic scrolling tilemaps. I've used them to make multilayer scrolling/zoomable hexmaps that were 2048 hexes square (which is a pretty huge map, around 4 million triangles), with multiple layers of sprites on top of that, at 60 Hz.
Here is the original Stemkoski particle system for reference:
http://stemkoski.github.io/Three.js/Particle-Engine.html
and:
https://stemkoski.github.io/Three.js/ParticleSystem-Dynamic.html

rendering millions of voxels using 3D textures with three.js

I am using three.js to render a voxel representation as a set of triangles. I have got it to render 5 million triangles comfortably, but that seems to be the limit. You can view it online here.
Select the Dublin model at resolution 3 to see a lot of triangles being drawn.
I have used every trick to get it this far (buffer geometry, voxel culling, multiple buffers), but I think it has hit the maximum of what drawing OpenGL triangles can accomplish.
Large amounts of voxels are normally rendered as a set of images in a 3D texture, and while there are several posts on how to hack 2D textures into 3D textures, they seem to have a maximum limit on the texture size.
I have searched for tutorials or examples using this approach but haven't found any. Has anyone used this approach before with three.js?
Your scene is rendered twice, because SSAO needs a depth texture. You could use the WEBGL_depth_texture extension - which has pretty good support - so you only need a single render pass. You can still fall back to the low-performance double pass if the extension is unavailable.
Your voxels' material is double sided. That may be on purpose, but it can create a huge amount of overdraw.
In your demo, you use a MeshPhongMaterial and directional lights. That's a needlessly complex material; your geometries don't have any normals, so you can't get any lighting anyway. Try to use a simpler unlit material.
Your goal is to render a huge amount of vertices, so assuming the framerate is bound by the vertex shader:
Try tools like https://github.com/GPUOpen-Tools/amd-tootle to preprocess your geometries, focusing on the vertex prefetch cache and the vertex cache.
Reduce the bandwidth used by your vertex buffers. Since your vertices are aligned on a grid, you could store vertex positions as 3 shorts instead of 3 floats, halving your VBO size (see the sketch after this list). You could use the same trick for normals if you had them, since the normals of axis-aligned cubes can only take a handful of values.
Generally reduce the number of varyings needed by the fragment shader.
If you need more attributes than just the vec3 position, use one single interleaved VBO instead of one per attribute.
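A tiny sketch of the quantized position idea (uGridOrigin and uVoxelSize are made-up uniform names): the shorts hold integer grid coordinates, WebGL exposes them to GLSL as floats, and the vertex shader just rescales them into world space.

attribute vec3 position;       // integer voxel grid coordinates stored as 3 shorts
uniform vec3 uGridOrigin;      // world-space position of voxel (0,0,0)
uniform float uVoxelSize;      // world-space edge length of one voxel
uniform mat4 modelViewMatrix;
uniform mat4 projectionMatrix;

void main()
{
    // Rescale the quantized grid coordinates back into world space.
    vec3 worldPos = uGridOrigin + position * uVoxelSize;
    gl_Position = projectionMatrix * modelViewMatrix * vec4(worldPos, 1.0);
}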

Webgl texture atlas

I would like to ask for help concerning the making of a WebGL engine. I am stuck at texture atlases. There is a texture containing 2x2 pictures, and I map its upper left quarter onto a surface (the texture coordinates are 0-0.5, 0-0.5).
This works properly, but when I look at the surface from afar, all the pictures blur together and give strange looking colours. I think this is caused by the automatically generated mipmaps: when viewed from afar, the texture unit uses the 1x1 mipmap level, where the 4 pictures are blurred together into one pixel.
It was suggested that I generate the mipmaps myself with a maximum level setting (GL_TEXTURE_MAX_LEVEL), but that is not supported by WebGL. It was also suggested to use the textureLod function in the fragment shader, but WebGL only lets me use it in the vertex shader.
The only solution seems to be the bias, the value that can be given as the 3rd parameter of the fragment shader's texture2D function, but with this I can only offset the mipmap LOD, not set its actual value.
My idea is to use the depth value (the distance from the camera) to drive the bias (making it more and more negative with distance), which ensures that the last mipmap levels are never used at greater distances and samples are always taken from a higher resolution mipmap level. The issue with this is that I must also account for the angle of the given surface to the camera, because the LOD value depends on that as well.
So bias = depth + some combination of the angle. I would like to ask for help calculating this. If someone has any ideas concerning WebGL texture atlases, I would gladly use them.
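For what it's worth, a minimal GLSL sketch of the distance-driven part of that bias idea (ignoring the angle term; the names and the clamp limit are only illustrative):

precision mediump float;

uniform sampler2D uAtlas;
varying vec2 vUv;           // already offset/scaled into one atlas cell
varying float vViewDepth;   // distance from the camera, passed from the vertex shader

void main()
{
    // The further away the surface, the more negative the bias, so the sampler
    // never drops to the lowest mip levels where the atlas cells bleed together.
    float bias = -clamp(log2(max(vViewDepth, 1.0)), 0.0, 4.0);   // 4.0 ~ lowest mip level tolerated
    gl_FragColor = texture2D(uAtlas, vUv, bias);
}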

I have an OpenGL Tessellated Sphere and I want to cut a cylindrical hole in it

I am working on a piece of software which generates a polygon mesh to represent a sphere, and I want to cut a hole through the sphere. This polygon mesh is only an overlay across the surface of the sphere. I have a good idea of how to determine which polygons will intersect my hole, and I can remove them from my collection, but after that point I am getting a little confused. I was wondering if anyone could help me with the high-level concepts?
Basically, I envision three situations:
1.) The cylindrical hole does not intersect my sphere.
2.) The cylindrical hole partially goes through my sphere.
3.) The cylindrical hole goes all the way through my sphere.
For #1, I can test for this (no polygons removed) and act accordingly (do nothing). For #2 and #3, I am not sure how to re-tessellate my sphere to account for the hole. For #3, I have somewhat of an idea that is basically along the following lines:
a.) Find your entry point (a circle)
b.) Find your exit point (a circle)
c.) Remove the necessary polygons
d.) Make new polygons along the 4* 'sides' of the hole to keep my sphere a manifold.
This extremely simplified algorithm has some 'holes' I would like to fill in. For example, I don't actually want to have 4 sides to my hole - it should be a cylinder, or at least a tessellated representation of a cylinder. I'm also not sure how to make these new polygons so that my sphere with a hole remains a tessellated surface.
I have no idea how to approach scenario #2.
Sounds like you want constructive solid geometry.
Carve might do what you want. If you just want run-time rendering, OpenCSG will work.
Well, if you just want to render (visualize) this, then maybe you do not need to change the generated meshes at all. Instead, use the stencil buffer to render your sphere with the holes. For example, I am rendering a disc (thin cylinder) with circular holes near its outer edge (as a base plate for machinery), with a combination of solid and transparent objects around it, so I need the holes to be real holes. As I was too lazy to triangulate the shape as it is generated at runtime, I chose the stencil buffer for this.
Create OpenGL context with Stencil buffer
I am using 8 bits for the stencil, but this technique needs just a single bit.
Clear stencil with 0 and turn off Depth&Color masks
This has to be done before rendering your mesh with the stencil. So if you have more objects rendered in this way, you need to do this before each one of them.
Set stencil with 1 for solid mesh
Clear stencil with 0 for hole meshes
Turn on Depth&Color masks and render solid mesh where stencil is 1
In code it looks like this:
// [stencil]
glEnable(GL_STENCIL_TEST);
// whole stencil=0
glClearStencil(0);
glClear(GL_STENCIL_BUFFER_BIT);
// enable stencil writes, turn off color and depth writes
glStencilMask(0xFF);
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
glDepthMask(GL_FALSE);
// stencil=1 for solid mesh
glStencilFunc(GL_ALWAYS,1,0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
glCylinderxz(0.0,y,0.0,r,qh);
// stencil=0 for hole meshes
glStencilFunc(GL_ALWAYS,0,0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
for(b=0.0,j=0;j<12;j++,b+=db)
{
    x=dev_R*cos(b);
    z=dev_R*sin(b);
    glCylinderxz(x,y-0.1,z,dev_r,qh+0.2);
}
// turn on color,depth
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glDepthMask(GL_TRUE);
// render solid mesh the holes will be created by the stencil test
glStencilFunc(GL_NOTEQUAL,0,0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
glColor3f(0.1,0.3,0.4);
glCylinderxz(0.0,y,0.0,r,qh);
glDisable(GL_STENCIL_TEST);
where glCylinderxz(x,y,z,r,h) is just a function that renders a cylinder at (x,y,z) with radius r and height h, with the y-axis as its rotation axis. db is the angle step (2*Pi/12). The radii are: r is the big disc radius, dev_r is the hole radius, and dev_R is the radius on which the hole centers lie. qh is the thickness of the plate.
The result looks like this (each of the 2 plates is rendered with this):
This approach is more suited for thin objects. If your cuts lead to thick enough sides, then you need to add rendering of the cut sides, otherwise the lighting could be wrong on those parts.
I implemented CSG operations using scalar fields earlier this year. It works well if performance isn't important. That is, the calculations aren't real time. The problem is that the derivative isn't defined everywhere, so you can forget about computing cheap vertex-normals that way. It has to be done as a post-step.
See here for the paper I used (in the first answer), and some screenshots I made:
CSG operations on implicit surfaces with marching cubes
Also, CSG this way requires the initial mesh to be represented using implicit surfaces. While any geometric mesh can be split into planes, it wouldn't give good results. So spheres would have to be represented by a radius and an origin, and cylinders would be represented by a radius, origin and base height.
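For intuition, here is a small GLSL-style sketch of what those scalar fields and the subtraction look like as signed distance functions (the primitives and names are only illustrative; a marching cubes step would then polygonize sceneSDF over a grid):

// Each primitive is a scalar field: negative inside, positive outside.
float sdSphere(vec3 p, vec3 origin, float radius)
{
    return length(p - origin) - radius;
}

// Infinite cylinder along the y axis through 'origin'; a finite cylinder
// would additionally be clamped against its base height.
float sdCylinderY(vec3 p, vec3 origin, float radius)
{
    return length(p.xz - origin.xz) - radius;
}

// CSG subtraction is just max(a, -b): here, a sphere with a cylindrical hole.
float sceneSDF(vec3 p)
{
    float sphere   = sdSphere(p, vec3(0.0), 1.0);
    float cylinder = sdCylinderY(p, vec3(0.0), 0.3);
    return max(sphere, -cylinder);
}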
