Light 2D and Performance issues

I want to make a 2D game that uses two kinds of lights (spotlight and point light). I need the game to be completely dark unless there is a light in the area, and the light must not pass through walls. I have tried lots of approaches and plugins to make this possible, but nothing worked for me.
So I thought I would try adding 3D walls with an orthographic camera (top-down view), so that the light stops at the walls and casts shadows while the game still looks 2D.
My questions are:
Is there a better way to do this without needing the 3D geometry? (One common 2D-only alternative is sketched after these questions.)
I suspect I will have performance issues if I keep the 3D geometry. Is there a way to avoid that, given that the final output of the orthographic camera is 2D? For example, an option to render it as 2D so the game doesn't have to process all those triangles? (I want it as lightweight as possible because I also want to port to phones.)
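For reference, one common way to get hard 2D shadows without any 3D geometry is to extrude each wall segment away from the light and fill the resulting quad with darkness. A minimal sketch using the HTML canvas API (all names hypothetical, not tied to any particular engine):

```typescript
// Minimal 2D shadow-volume sketch: each wall segment is extruded away
// from the light, and the resulting quad is filled with darkness.
type Point = { x: number; y: number };
type Segment = { a: Point; b: Point };

// Push a point away from the light until it is far off-screen.
function project(light: Point, p: Point, reach: number): Point {
  const dx = p.x - light.x;
  const dy = p.y - light.y;
  const len = Math.hypot(dx, dy) || 1;
  return { x: p.x + (dx / len) * reach, y: p.y + (dy / len) * reach };
}

function drawShadows(
  ctx: CanvasRenderingContext2D,
  light: Point,
  walls: Segment[],
  reach = 10000 // farther than any screen dimension
): void {
  ctx.fillStyle = "black";
  for (const { a, b } of walls) {
    const pa = project(light, a, reach);
    const pb = project(light, b, reach);
    ctx.beginPath();
    ctx.moveTo(a.x, a.y);
    ctx.lineTo(pa.x, pa.y);
    ctx.lineTo(pb.x, pb.y);
    ctx.lineTo(b.x, b.y);
    ctx.closePath();
    ctx.fill();
  }
}
```

Drawing each light as a radial gradient first and the shadow quads on top (or into a separate darkness layer composited over the scene) gives the fully-dark-except-near-lights look without processing any 3D triangles.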

Related

Three.js - shadow is passing through wall

Disclaimer: complete noob to 3D programming and three.js
I'm prototyping an eventual scene that will contain multiple walls, devices, and so on. As you can see in the screenshots, the cube representing one of our devices casts a shadow on the wall next to it, but the shadow also passes through the wall. The walls are deliberately set to accept shadows but not to cast them: they are displayed shorter than in reality but are assumed to reach the "ceiling", so they should not cast shadows. Any ideas what I am doing wrong that lets the shadow pass through the wall? Again, I apologize for the likely trivial nature of my question, but at this point I've been looking at this for a couple of days with no solution in sight. Thanks!
[Screenshot showing the issue]
[Another screenshot showing a different view]
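For context, a three.js shadow map only contains objects that cast into it, so a wall with `castShadow = false` cannot block another object's shadow: the cube's shadow projects straight through onto whatever receiver lies behind. A minimal sketch of the relevant flags (scene wiring omitted):

```typescript
import * as THREE from "three";

// A mesh only occludes light if it writes into the shadow map,
// so a wall that merely receives cannot block the cube's shadow.
const renderer = new THREE.WebGLRenderer();
renderer.shadowMap.enabled = true;

const light = new THREE.DirectionalLight(0xffffff, 1);
light.castShadow = true;

const cube = new THREE.Mesh(
  new THREE.BoxGeometry(1, 1, 1),
  new THREE.MeshStandardMaterial()
);
cube.castShadow = true;

const wall = new THREE.Mesh(
  new THREE.BoxGeometry(4, 2, 0.1),
  new THREE.MeshStandardMaterial()
);
wall.receiveShadow = true;
wall.castShadow = true; // required for the wall to block shadows at all
```

If the wall's own shadow is unwanted because the wall is drawn shorter than reality, one possible workaround (not from the original post) is an invisible full-height proxy wall that casts shadows while the visible short wall does not.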

Transparency with complex shapes in three.js

I'm trying to render a fairly complex lamp using Three.js: https://sayduck.com/3d/xhcn
The product is split into multiple meshes similar to this one:
The main issue is that I also need to use transparent PNG textures (to achieve the complex shape while keeping polygon counts low), like this:
As you can see in the live demo, this gives really weird results, especially when rotating the camera around the lamp - I believe due to the z-ordering of the meshes.
I've been reading answers to similar questions on SO, like https://stackoverflow.com/a/15995475/5974754 or https://stackoverflow.com/a/37651610/5974754 to get an understanding of the underlying mechanism of how transparency is handled in Three.js and WebGL.
I think that, in theory, what I need to do is explicitly define a renderOrder each frame for every mesh with a transparent texture (because the distance-based order changes as the camera moves around), so that Three.js draws them correctly from back to front.
However, even ignoring for the moment that explicitly setting the order each frame seems far from trivial, I am not sure I understand how this order should be defined in theory.
My meshes have fairly complex shapes and are quite intertwined, which means that from a given camera angle, some parts of mesh A can be closer to the camera than some parts of mesh B, while elsewhere parts of mesh B are closer.
In this situation, it seems impossible to define a closer mesh, and thus a proper renderOrder.
Have I understood correctly, and is this basically the limit of what WebGL can handle?
Otherwise, if this is doable, is the approach with two render scenes (one for opaque meshes first, then one for transparent meshes ordered back to front) the right one? How should I go about defining the back-to-front renderOrder the way Three.js expects? (A minimal sketch of the idea follows this question.)
Thanks a lot for your help!
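For illustration, here is a minimal sketch of the per-frame renderOrder idea described in the question, assuming the transparent meshes are collected in an array (names hypothetical):

```typescript
import * as THREE from "three";

// Re-derive renderOrder every frame: farthest transparent mesh drawn
// first, closest last, so alpha blending composites back to front.
function updateRenderOrder(
  transparentMeshes: THREE.Mesh[],
  camera: THREE.Camera
): void {
  const camPos = new THREE.Vector3();
  camera.getWorldPosition(camPos);

  const meshPos = new THREE.Vector3();
  transparentMeshes
    .map((mesh) => {
      mesh.getWorldPosition(meshPos);
      return { mesh, dist: meshPos.distanceTo(camPos) };
    })
    .sort((x, y) => y.dist - x.dist) // farthest first
    .forEach(({ mesh }, i) => {
      mesh.renderOrder = i; // lower values render earlier
    });
}
```

Note that this orders whole meshes, not pixels, so intertwined meshes as described above can still render incorrectly; binary-alpha cutouts via `material.alphaTest` avoid the sorting problem entirely, at the cost of hard edges.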

Shadow Mapping - artifacts on thin wall orthogonal to light

I'm having an issue with back faces (relative to the light) and shadow mapping that I can't seem to get past. I'm still at a relatively early stage of optimizing my engine, yet even with everything hand-tuned for this one piece of geometry it still looks like garbage.
The geometry is a thin wall that is "curved" via about five separate chunks of wall. When I create my depth map, I cull front faces (relative to the light). This definitely helps, but the front faces on the far side of the wall seem to be causing the z-fighting/projective shadowing.
Some notes on the screenshot:
Front faces are culled when the depth texture (from the light) is being drawn
I have the near and far planes tuned just for this chunk of geometry (set at 20 and 25 respectively)
One directional light source, coming down at a slight angle toward the right side of the scene - enough to indicate that the wall should be shadowed, but mostly straight down
Using a ludicrously large 4096x4096 shadow map texture
All lighting is disabled, but note that I am doing soft lighting (and hence passing vertex normals) even on this wall
The article linked here concludes that you should not shadow polygons that face away from the light. I'm struggling with this particular point because I don't want to pass the face normals all the way through to the fragment shader just to rule out the true back faces there - though if anyone feels this is the best or only solution for this geometry, that's what I'll have to do. Given that the pipeline doesn't make it easy or obvious to pass face normals through, it doesn't feel like the path of least resistance. Note that the normals I am passing are vertex normals, to allow for softer lighting effects around the edges (which will likely span both non-shadowed and shadowed surfaces).
Note that I am also getting some nasty perspective aliasing. My next step is to work on cascaded shadow maps, but without fixing this first I feel like I'm just delaying the inevitable, as I've already hand-tightened the view as much as I can (or so I think).
Anyway, I feel like I'm missing something, so any thoughts or help at all would be most appreciated!
EDIT
To be clear, the wall technically should NOT be in shadow, based on where the light is coming from.
Below is an image with shadowing turned off, using only the vertex normals to calculate diffuse lighting. It's not pretty (too much geometry is visible), but it does show that some of the edges are somewhat visible.
So yes, the wall SHOULD be in shadow, but I'm hoping I can get the smoothing working better so the edges can receive some diffuse lighting. If it needs to be completely in shadow, then whether the shadow map puts it in shadow or my code explicitly shadows it because the face normal points away from the light, I'm fine with either - but passing the face normal through to my vertex/fragment shaders does not seem like the path of least resistance.
Perhaps these will help illustrate my problem better, or perhaps bring to light some fundamental understanding I am missing.
EDIT #2
I've included the depth texture below. You can see the wall in question in the bottom left, and from the screenshot you can see how I've trimmed the depth values to roughly 0.4-1.0, which means the depth values of that wall start around 0.4. So it's not PERFECTLY clipped, but it's close. Does that seem reasonable? I'm pretty sure it's a full 24- or 32-bit depth buffer, via the DEPTH_COMPONENT extension on iOS. For #starmole: does this help determine whether it's a scaling error in my projection? Do you think the area covered by my map is too large, and that focusing it closer might help?
The problem seems to be that you are:
1. Culling the front faces
2. Looking at the back face
3. Not removing the light from the back face, because it's actually not lit according to the normal - or there is some inaccuracy in the computation
4. Probably not adding an epsilon
(1) and (2) mean that there will be Z-fighting between the shadow map and the back faces.
Also, the shadow map resolution is not going to help you - just look at the wall in the shadow map: it's one pixel thick.
Recommendations:
Epsilons. Make sure that Z > lightZ + epsilon.
Epsilons. Make sure the wall is facing the light (dot with the normal > epsilon), so that a wall that is very nearly orthogonal to the light still counts as shadowed. (Both tests are sketched below.)
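The two epsilon tests might look like the following sketch, written here as plain TypeScript for readability even though the real logic would live in the fragment shader (names and epsilon values are hypothetical):

```typescript
// Shader-style shadow test sketched as plain code; in practice this
// would run per fragment in GLSL.
function isShadowed(
  fragmentLightDepth: number, // fragment depth in light space, 0..1
  shadowMapDepth: number, // depth stored in the shadow map texel
  nDotL: number, // dot(surface normal, direction to light), both unit
  depthEpsilon = 0.002,
  normalEpsilon = 0.01
): boolean {
  // A face turned away from (or nearly orthogonal to) the light is
  // shadowed outright; never consult the noisy shadow map for it.
  if (nDotL <= normalEpsilon) return true;

  // Depth comparison biased by an epsilon so a surface does not
  // z-fight with its own entry in the shadow map.
  return fragmentLightDepth > shadowMapDepth + depthEpsilon;
}
```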

Implementing terrains in XNA similar to Battle Zone (1980)

I am developing a 3D game for Windows Phone that includes terrain and volcanoes at an infinite distance, similar to Battle Zone (1980) by Atari Inc. The player can never reach the terrain no matter how far they drive. Currently I implement this by mapping a 2D texture onto the inside wall of a cylinder that moves with the player, so the player can never reach the terrain. I am not sure whether this is a good method, as I am running into problems such as texture distortion when mapping onto the cylinder wall.
Can anyone suggest methods for implementing a terrain view in XNA similar to Battle Zone?
Normally, instead of a cylinder, developers use a box (a so-called skybox).
It has fewer polygons and in general less distortion (there can be some at the edges).
To make it look more real, some developers such as Valve use an off-screen render in a first pass that includes the skybox plus some distant low-detail models, moving cloud sprites, or a textured ring with alpha. Both points of view (the main camera and the off-screen camera) are synchronized; then, without clearing the colour buffer, the final scene is rendered on top. Thanks to that, far buildings shift slightly as you move and the scene's surroundings look less flat. To avoid clearing the z-buffer between passes, they simply do the first pass under the floor (literally) of the main pass's scene. A sketch of the two-pass idea follows.
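Since this document leans on three.js elsewhere, here is the two-pass idea sketched in three.js terms rather than XNA; treat it purely as an illustration of the technique:

```typescript
import * as THREE from "three";

// Two-pass "3D skybox": a distant, low-detail scene rendered first,
// then the main scene on top without clearing the colour buffer.
const renderer = new THREE.WebGLRenderer();
renderer.autoClear = false; // clearing is managed manually below

const farScene = new THREE.Scene(); // skybox + low-detail distant models
const mainScene = new THREE.Scene(); // the playable scene
const farCamera = new THREE.PerspectiveCamera(60, 16 / 9, 0.1, 5000);
const mainCamera = new THREE.PerspectiveCamera(60, 16 / 9, 0.1, 100);

function render(): void {
  // Synchronize rotation only, so the distant scenery never gets closer.
  farCamera.quaternion.copy(mainCamera.quaternion);

  renderer.clear(); // clear colour + depth once per frame
  renderer.render(farScene, farCamera); // pass 1: distant scenery
  renderer.clearDepth(); // drop depth, keep colour
  renderer.render(mainScene, mainCamera); // pass 2: main scene on top

  requestAnimationFrame(render);
}
requestAnimationFrame(render);
```

Here `renderer.clearDepth()` stands in for the "render the first pass under the floor" trick; either way, the second pass never z-fights with the first.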

Is there a way to pre-render a virtual panoramic scene?

I would like to put a photorealistic virtual scene on a tablet, so that when the user rotates the tablet, it appears as if the tablet were a window into a virtual world.
Pre-rendered scenes can be photorealistic, while real-time rendering tends to have a "computer-made" look. Given that for one scene the point of view can be rotated but not translated in space, is it possible for a pre-rendered virtual panoramic scene to give an immersive impression?
I doubt that this is easy, since rotating the viewpoint will cause some sort of distortion. This kind of distortion is easy for apps like Starwalk, but difficult for photos. Can anyone point me in the right direction?
I know this would be tremendously easier if motion were restricted to a single axis, but I would like the user to have a full 3D experience.
You need to either warp the photographs before applying them as textures to your "sky dome" or use non-uniform texture coordinates. Done right, this will even out most of the distortion, giving a more realistic appearance.
Another alternative is to use more photographs, so that you only actually use the central area of each one.
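As a concrete illustration of the sky-dome idea, here is a sketch in three.js, assuming an equirectangular photo (the projection most stitching tools produce; the file name is hypothetical):

```typescript
import * as THREE from "three";

// An equirectangular photo mapped onto the inside of a sphere; the
// projection's non-uniform mapping evens out most of the distortion.
const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(75, 16 / 9, 0.1, 100);

const geometry = new THREE.SphereGeometry(50, 64, 32);
geometry.scale(-1, 1, 1); // flip the faces inward, toward the camera

const texture = new THREE.TextureLoader().load("panorama.jpg");
const dome = new THREE.Mesh(
  geometry,
  new THREE.MeshBasicMaterial({ map: texture })
);
scene.add(dome);

// Rotate `camera` from device-orientation input; never translate it,
// matching the rotate-but-not-translate constraint in the question.
```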
I've found that http://code.google.com/p/panoramagl/ can render cubic, spherical, and cylindrical panoramic images, so the problem reduces to rendering a panorama, which can be solved by stitching. I will still leave this open to see if anyone else has better answers.
