Drawing 3D in front/behind sprites in XNA/WP7? - windows-phone-7

This is kind of frustrating me as I've been grizzling over it for a couple of hours now.
Basically I'm drawing 2D sprites through SpriteBatch and 3D orthographically projected geometry using the BasicEffect class.
My problem is controlling what gets rendered on top of what. At first I thought it would simply be a matter of controlling the render order, i.e. if I do:
Draw3DStuff()
SpriteBatch.Begin(...)
Draw2DStuff();
SpriteBatch.End();
It would mean the 2D stuff would render over the 3D stuff. However, since I don't control when the device begins/ends renders, this isn't the result. The 3D always renders on top of the 2D elements, no matter the projection settings, world translation, the Z components of the 3D geometry's vertex definitions, or the layer depth of the 2D elements.
Is there something I'm not looking into here?
What's the correct way to handle the depth here?

OK I figured it out 2 seconds after posting this question. I don't know if it was coincidence or if StackOverflow has a new feature granting the ability to see future answers.
The Z positions of SpriteBatch elements are between 0 and 1, so they're not directly comparable to the Z positions of orthographic geometry being rendered.
When you create an orthographic matrix, however, you define a near and far clip plane, and the Z positions you set should fall within that range. I had a hunch that the SpriteBatch class is effectively drawing quads orthographically, so by extension that 0 to 1 range would mean 0 represents the near clip and 1 the far clip, and the depth was probably being written to the same place the 3D geometry's depth is.
Soooo, to make it work I figured the near/far clips I define for the orthographic render will be measured against the 0-to-1 range of the sprites being rendered, so it was simply a matter of setting the right Z value. For example:
If I have a near clip of 0 and a far clip of 10000, and I want geometry to correspond to a 0.5f layer depth (rendering in front of sprites drawn at 0.6 and behind sprites drawn at 0.4), I do:
float zpos = 0.5f;
float orthoGraphicZPos = LinearInterpolate(0, 10000, zpos);
Or just zpos * 10000 :D
I guess it would make more sense to set your orthographic renderer's near/far clips to 0 and 1, to directly compare with the sprites' layer depths.
Hopefully my reasoning for this solution was correct (more or less).
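The mapping in that example is ordinary linear interpolation; as a quick sketch (in JavaScript for illustration, with `lerp` standing in for the `LinearInterpolate` helper above):

```javascript
// Linear interpolation: maps t in [0, 1] onto the range [a, b].
function lerp(a, b, t) {
  return a + (b - a) * t;
}

// A sprite drawn at layerDepth 0.5 corresponds to this orthographic Z
// when the ortho near/far planes are 0 and 10000:
var orthoZ = lerp(0, 10000, 0.5); // 5000, same as 0.5 * 10000
```

With a near plane of 0 the lerp collapses to the `zpos * 10000` shortcut mentioned above; the lerp form also handles a non-zero near plane.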

As an aside, since you mentioned you had a hunch about how the sprite batch draws quads: you can see the source code for all the default/included shaders and the SpriteBatch class if you are curious, or need help solving a problem like this:
http://create.msdn.com/en-US/education/catalog/sample/stock_effects

The problem is that the SpriteBatch messes with some of the render states that are used when you draw your 3D objects. To fix this, you just have to reset them before rendering your 3D objects, like so:
GraphicsDevice.BlendState = BlendState.Opaque;
GraphicsDevice.DepthStencilState = DepthStencilState.Default;
GraphicsDevice.SamplerStates[0] = SamplerState.LinearWrap;
Draw3DStuff()
SpriteBatch.Begin(...)
Draw2DStuff();
SpriteBatch.End();
Note that this is for XNA 4.0, which I am pretty sure you're using anyway. More info can be found on Shawn Hargreaves' blog. This will reset the render states, draw the 3D objects, then the 2D objects over them. Without resetting the render states you get weird effects like you're seeing.

Related

THREE.js adding bullets as sprites and rotating each individually

I have been working on programming a game where everything is rendered in 3D, though the bullets are 2D sprites. This poses a problem: I have to rotate the bullet sprite by rotating the material, which turns every bullet possessing that material rather than just the individual sprite I want to turn. It is also kind of inefficient to create a new clone for every bullet. Is there a better way to do this? Thanks in advance.
Rotate the sprite itself instead of the texture.
Edit:
As OP mentioned, the SpriteMaterial controls the sprite's rotation, so setting the sprite's rotation manually does nothing...
So instead of using the Sprite type, you could use a regular PlaneGeometry mesh with a MeshBasicMaterial or similar, and update the matrices yourself to both keep the sprite facing the camera and rotate it toward its trajectory.
Then at least you can share the material amongst all instances.
The performance bottleneck then becomes the number of draw calls (one per sprite).
You can improve on that by using a single BufferGeometry and computing the 4 screen-space vertices for each sprite, each frame. This moves the bottleneck away from draw calls; you'll instead be limited by the speed at which you can transform vertices in JavaScript, which is slow but not the end of the world. This is also how many THREE.js particle systems are implemented.
The next step beyond that is to use a custom vertex shader to do the heavy vertex computation. You still update the BufferGeometry each frame, but instead of transforming verts, you just write the position of the sprite into each of the 4 verts and let the vertex shader figure out which of the 4 verts it's transforming (possibly based on the UV coordinate, or stored in one of the vertex color channels, .r for instance) and which sprite to render from your sprite atlas (a single texture/canvas with all your sprites laid out on a grid), encoded in the .g of the vertex color.
The next step beyond that is to not update the BufferGeometry every frame, but store both the position and velocity of the sprite in the vertex data, and only pass a time-offset uniform into the vertex shader. The vertex shader can then handle integrating the sprite position over a longer time period. This only works for sprites with deterministic behavior, or behavior that can be derived from a texture data source like a noise or warping texture: things like smoke, explosions, etc.
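The per-sprite quad expansion described above can be sketched like this (names are illustrative; plain {x, y, z} objects are used instead of THREE.Vector3 so the sketch stands alone):

```javascript
// Expand one sprite into the 4 corners of a camera-facing quad.
// center is the sprite's world position, right/up are the camera's
// world-space basis vectors, and size is the quad's half-extent.
function spriteCorners(center, right, up, size) {
  // a + b * s, componentwise
  function addScaled(a, b, s) {
    return { x: a.x + b.x * s, y: a.y + b.y * s, z: a.z + b.z * s };
  }
  var r = { x: right.x * size, y: right.y * size, z: right.z * size };
  var u = { x: up.x * size, y: up.y * size, z: up.z * size };
  // bottom-left, bottom-right, top-right, top-left
  return [
    addScaled(addScaled(center, r, -1), u, -1),
    addScaled(addScaled(center, r, 1), u, -1),
    addScaled(addScaled(center, r, 1), u, 1),
    addScaled(addScaled(center, r, -1), u, 1)
  ];
}
```

Each frame you would write these 4 positions per sprite into the BufferGeometry's position attribute; because the corners are built from the camera's right/up vectors, the quad always faces the camera.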
You can extend these techniques to draw gigantic scrolling tilemaps. I've used them to make multilayer scrolling/zoomable hexmaps that were 2048 hexes square (a pretty huge map, ~4M triangles), with multiple layers of sprites on top of that, at 60 Hz.
Here's the original Stemkoski particle system for reference:
http://stemkoski.github.io/Three.js/Particle-Engine.html
and:
https://stemkoski.github.io/Three.js/ParticleSystem-Dynamic.html

Find which object3D's the camera can see in Three.js - Raycast from each camera to object

I have a grid of points (object3D's using THREE.Points) in my Three.js scene, with a model sat on top of the grid, as seen below. In code the model is called defaultMesh and uses merged geometry for performance reasons:
I'm trying to work out which of the points in the grid my perspective camera can see at any given point, i.e. every time the camera position is updated using my orbital controls.
My first idea was to use raycasting to create a ray between the camera and each point in the grid. Then I can find which rays are being intersected with the model and remove the points corresponding to those rays from a list of all the points, thus leaving me with a list of points the camera can see.
So far so good, but the ray creation and intersection code is placed in the render loop (as it has to be updated whenever the camera moves), and therefore it's horrendously slow (obviously).
gridPointsVisible = gridPoints.geometry.vertices.slice(0);
startPoint = camera.position.clone();
// create a ray from the camera position toward each point in the grid
for (var i = 0; i < gridPoints.geometry.vertices.length; i++) {
    var vertex = gridPoints.geometry.vertices[i];
    var direction = new THREE.Vector3().subVectors(vertex, startPoint);
    var ray = new THREE.Raycaster(startPoint, direction.normalize());
    if (ray.intersectObject(defaultMesh).length > 0) {
        // the mesh blocks this point, so remove it from the visible list
        gridPointsVisible.splice(gridPointsVisible.indexOf(vertex), 1);
    }
}
In the example model shown there are around 2300 rays being created, and the mesh has 1500 faces, so the rendering takes forever.
So I have 2 questions:
Is there a better way of finding which objects the camera can see?
If not, can I speed up my raycasting/intersection checks?
Thanks in advance!
Take a look at this example of GPU picking.
You can do something similar; it's especially easy since you have a finite and ordered set of spheres. The idea is that you'd use a shader to calculate (probably based on position) a flat color for each sphere, and render to an off-screen render target. You'd then parse the render target data for colors, and be able to map back to your spheres. Any colors that are visible are also visible spheres; any leftover spheres are hidden. This method should produce results faster than raycasting.
WebGLRenderTarget lets you draw to a buffer without drawing to the canvas. You can then access the render target's image buffer pixel-by-pixel (really color-by-color in RGBA).
For the mapping, you'll parse that buffer and create a list of all the unique colors you see (all non-sphere objects should be some other flat color). Then you can loop through your points--and you should know what color each sphere should be by the same color calculation as the shader used. If a point's color is in your list of found colors, then that point is visible.
To optimize this idea, you can reduce the resolution of your render target. You may lose points only visible by slivers, but you can tweak your resolution to fit your needs. Also, if you have fewer than 256 points, you can use only red values, which reduces the number of checked values to 1 in every 4 (only check R of the RGBA pixel). If you go beyond 256, include checking green values, and so on.
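The color bookkeeping from the last two paragraphs can be sketched as follows (a hypothetical encoding scheme, not code from the linked sample):

```javascript
// Encode a sphere index as a flat RGB color (supports up to 2^24 ids).
// The same formula would be used in the picking shader and here on the
// CPU side, so colors read back from the render target map to indices.
function indexToColor(i) {
  return { r: (i >> 16) & 0xff, g: (i >> 8) & 0xff, b: i & 0xff };
}

// Decode a color read from the render target's RGBA buffer back to an
// index. With fewer than 256 spheres, only the r byte is ever non-zero,
// which is the "check only red" optimization described above.
function colorToIndex(c) {
  return (c.r << 16) | (c.g << 8) | c.b;
}
```

Reading the render target pixel-by-pixel, collecting the unique decoded indices into a set, gives exactly the set of visible spheres.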

how do I get a projection matrix I can use for a pointlight shadow map?

I'm currently working on a project that uses shadow textures to render shadows.
It was pretty easy for spotlights, since only one texture in the direction of the spotlight is needed, but a point light is a little more difficult, since it needs either six textures in all directions or one texture that somehow renders all the objects around the point light.
And that's where my problem is: how can I generate a projection matrix that somehow renders all the objects in a 360° arc around the point light?
Basically, how do I create a fisheye (or any other 360-degree camera) vertex shader?
How can I generate a projection matrix that somehow renders all the objects in a 360° arc around the point light?
You can't. A 4x4 projection matrix in homogeneous space cannot represent any operation that would result in bending the edges of polygons: a straight line stays a straight line.
Basically, how do I create a fisheye (or any other 360-degree camera) vertex shader?
You can't do that either, at least not in the general case. And this is not a limit of the projection matrix in use, but a general limit of the rasterizer. You could of course put the formula for fisheye distortion into the vertex shader. But the rasterizer will still rasterize each triangle with straight edges; you just distort the positions of the corner points of each triangle. This means it will only be correct for tiny triangles covering a single pixel. For larger triangles, you completely screw up the image. If you have stuff like T-joints, this even results in holes or overlaps in objects that should actually be perfectly closed.
It was pretty easy for spotlights, since only one texture in the direction of the spotlight is needed, but a point light is a little more difficult, since it needs either six textures in all directions or one texture that somehow renders all the objects around the point light.
The correct solution for this is a single cube map texture, which provides six faces. In a perfect cube, each face can then be rendered with a standard symmetric perspective projection with a field of view of 90 degrees both horizontally and vertically.
In modern OpenGL, you can use layered rendering. In that case, you attach each of the six faces of the cube map as a layer of an FBO, and use the geometry shader to amplify your geometry six times, transforming it according to the six different projection matrices, so that you still only need one render pass for the complete shadow map.
There are some other vendor-specific extensions which might be used to further optimize the cube map rendering, like Nvidia's NV_viewport_swizzle (available on Maxwell and newer GPUs), but I only mention this for completeness.
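For the cube-map route, each face's projection is just a symmetric 90° frustum, which is easy to verify numerically. A plain-math sketch (no GL calls; standard column-major OpenGL clip conventions):

```javascript
// Build a right-handed perspective projection matrix, column-major,
// with OpenGL-style clip space. For a cube-map face, fovY is 90
// degrees and aspect is 1, which makes the frustum perfectly square.
function perspective(fovYDeg, aspect, near, far) {
  var f = 1 / Math.tan((fovYDeg * Math.PI / 180) / 2);
  return [
    f / aspect, 0, 0, 0,
    0, f, 0, 0,
    0, 0, (far + near) / (near - far), -1,
    0, 0, (2 * far * near) / (near - far), 0
  ];
}

// One such matrix per face; only the view (look-at) matrix changes
// between the six faces.
var faceProjection = perspective(90, 1, 0.1, 100);
```

With fovY = 90° the focal term f = 1/tan(45°) = 1, so the X and Y scale entries are equal: the face covers exactly one sixth of the sphere of directions with no gaps or overlap.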

Drawing transparent sprites in a 3D world in OpenGL(LWJGL)

I need to draw transparent 2D sprites in a 3D world. I tried rendering a quad, texturing it (using slick_util), and rotating it to face the camera, but when there are many of them the transparency doesn't really work: the sprite closest to the camera blocks the ones behind it if it's rendered before them.
I think it's because OpenGL only draws the object that is closest to the viewer without checking the alpha value.
This could be fixed by sorting them from furthest to closest, but I don't know how to do that. And wouldn't I have to use a square root to get the distance? (I've heard it's slow.)
I wonder if there's an easy way to get transparency in 3D to work correctly (enabling something in OpenGL, for example).
Disable depth testing and render transparent geometry back to front.
Or switch to additive blending and hope that looks OK.
Or use depth peeling.
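For the first option, the sort the question worries about never needs a square root; squared distance orders points the same way as true distance. A minimal sketch:

```javascript
// Sort sprites back to front relative to the camera. Squared distance
// is monotonic in true distance, so no square root is needed.
function sortBackToFront(sprites, cam) {
  function distSq(p) {
    var dx = p.x - cam.x, dy = p.y - cam.y, dz = p.z - cam.z;
    return dx * dx + dy * dy + dz * dz;
  }
  return sprites.slice().sort(function (a, b) {
    return distSq(b) - distSq(a); // farthest drawn first
  });
}
```

Drawing the sorted list in order lets each sprite blend over everything behind it, which is what alpha blending requires.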

Are SpriteBatch drawcalls culled by XNA?

I have a very subtle problem with XNA, specifically the SpriteBatch.
In my game I have a Camera class. It can Translate the view (obviously) and also zoom in and out.
I apply the Camera to the scene when I call the "Begin" function of my spritebatch instance (the last parameter).
The problem: when the camera's zoom factor is bigger than 1.0f, the SpriteBatch stops drawing.
I tried to debug my scene but I couldn't find the point where it goes wrong.
I tried to just render with "Matrix.CreateScale(2.0f);" as the last parameter for "Begin".
All other parameters were null and the first "SpriteSortMode.Immediate", so no custom shader or something.
But SpriteBatch still didn't want to draw.
Then I tried to only call "DrawString" and DrawString worked flawlessly with the provided scale (2.0f).
However, through a lot of trial and error, I found out that multiplying the scale matrix by "Matrix.CreateTranslation(0, 0, -1)" somehow changed the "safe" value to 1.1f.
So all scale values up to 1.1f worked. For everything above that, SpriteBatch does not render a single pixel in normal "Draw" calls (DrawString is still unaffected and working).
Why is this happening?
I did not setup any viewport or other matrices.
It appears to me that this could be some kind of strange near/far clipping.
But I usually only know those parameters from 3d stuff.
If anything is unclear please ask!
It is near/far clipping.
Everything you draw is transformed into and then rasterised in projection space. That space runs from (-1,-1) at the bottom left of the screen, to (1,1) at the top right. But that's just the (X,Y) coordinates. In Z coordinates it goes from 0 to 1 (front to back). Anything outside this volume is clipped. (References: 1, 2, 3.)
When you're working in 3D, the projection matrix you use will compress the Z coordinates down so that the near plane lands at 0 in projection space, and the far plane lands at 1.
When working in 2D you'd normally use Matrix.CreateOrthographic, which has near and far plane parameters that do exactly the same thing. It's just that SpriteBatch specifies its own matrix and leaves the near and far planes at 0 and 1.
The vertices of sprites in a SpriteBatch do, in fact, have a Z-coordinate, even though it's not normally used. It is specified by the layerDepth parameter. So if you set a layer depth greater than 0.5, and then scale up by 2, the Z-coordinate will be outside the valid range of 0 to 1 and won't get rendered.
(The documentation says that 0 to 1 is the valid range, but does not specify what happens when you apply a transformation matrix.)
The solution is pretty simple: Don't scale your Z-coordinate. Use a scaling matrix like:
Matrix.CreateScale(2f, 2f, 1f)
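The clipping behavior described above can be sanity-checked with a little arithmetic (an illustrative sketch in JavaScript, not XNA code):

```javascript
// A sprite vertex's Z after the Begin() transform must stay in the
// projection-space range [0, 1] or it is clipped. A uniform scale of 2
// (CreateScale(2)) also scales Z, pushing layerDepth > 0.5 out of
// range; scaling only X and Y (CreateScale(2, 2, 1)) leaves Z alone.
function isClipped(layerDepth, zScale) {
  var z = layerDepth * zScale;
  return z < 0 || z > 1;
}
```

So a sprite at layerDepth 0.6 vanishes under a uniform 2x scale but survives when the Z scale stays at 1, matching the behavior in the question.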
