Are SpriteBatch draw calls culled by XNA?

I have a very subtle problem with XNA, specifically the SpriteBatch.
In my game I have a Camera class. It can translate the view and also zoom in and out.
I apply the camera to the scene via the last parameter of my SpriteBatch instance's "Begin" call.
The problem: when the camera's zoom factor is greater than 1.0f, the SpriteBatch stops drawing.
I tried to debug my scene but I couldn't find the point where it goes wrong.
I tried to just render with "Matrix.CreateScale(2.0f);" as the last parameter for "Begin".
All other parameters were null and the first was "SpriteSortMode.Immediate", so no custom shader or anything.
But SpriteBatch still didn't want to draw.
Then I tried to only call "DrawString" and DrawString worked flawlessly with the provided scale (2.0f).
However, through a lot of trial and error, I found that additionally multiplying the scale matrix by "Matrix.CreateTranslation(0, 0, -1)" somehow changed the "safe" value to 1.1f.
So all scale values up to 1.1f worked; for everything above that, SpriteBatch does not render a single pixel in normal "Draw" calls (DrawString was still unaffected and working).
Why is this happening?
I did not set up any viewport or other matrices.
It appears to me that this could be some kind of strange near/far clipping, but I usually only know those parameters from 3D work.
If anything is unclear please ask!

It is near/far clipping.
Everything you draw is transformed into and then rasterised in projection space. That space runs from (-1,-1) at the bottom left of the screen to (1,1) at the top right. But that's just the (X,Y) coordinates; in Z it goes from 0 to 1 (front to back). Anything outside this volume is clipped.
When you're working in 3D, the projection matrix you use will compress the Z coordinates down so that the near plane lands at 0 in projection space, and the far plane lands at 1.
When working in 2D you'd normally use Matrix.CreateOrthographic, which has near and far plane parameters that do exactly the same thing. It's just that SpriteBatch specifies its own matrix and leaves the near and far planes at 0 and 1.
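For comparison, here is a hedged sketch of an equivalent explicit 2D projection with the same planes SpriteBatch leaves in place, using XNA's Matrix.CreateOrthographicOffCenter (near 0, far 1):

// Y grows downward, matching SpriteBatch's screen-space convention.
Matrix projection = Matrix.CreateOrthographicOffCenter(
    0, GraphicsDevice.Viewport.Width,      // left, right
    GraphicsDevice.Viewport.Height, 0,     // bottom, top
    0, 1);                                 // near and far planes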
The vertices of sprites in a SpriteBatch do, in fact, have a Z-coordinate, even though it's not normally used. It is specified by the layerDepth parameter. So if you set a layer depth greater than 0.5, and then scale up by 2, the Z-coordinate will be outside the valid range of 0 to 1 and won't get rendered.
(The documentation says that 0 to 1 is the valid range, but does not specify what happens when you apply a transformation matrix.)
The solution is pretty simple: Don't scale your Z-coordinate. Use a scaling matrix like:
Matrix.CreateScale(2f, 2f, 1f)
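In context, a minimal sketch of the corrected Begin call, matching the question's setup of immediate sorting with all other parameters defaulted:

// Scale X and Y only, leaving Z alone so layerDepth stays inside the 0-1 clip range.
spriteBatch.Begin(SpriteSortMode.Immediate, null, null, null, null, null,
                  Matrix.CreateScale(2f, 2f, 1f));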

Related

OpenGL ES 2.0 coordinate system

I'm using OpenGL ES 2.0 on iPhone to render a primitive. I use a shader and glFramebufferTexture2D to render to a texture, but the result is not what I expect: the whole coordinate system seems scrambled. The vertex data I give is V0 (-1, -1, -1), V1 (0, -1, 01), V2 (-1, 0, -1). I expected the result to look like the first image, but instead it looks like the second image; the whole coordinate system appears folded along the green arrows. Can you tell me why this happens?
OK, I found the problem: if I don't use glFramebufferTexture2D, the result is correct, like the third image. So can anyone tell me why?
Well, the true coordinate system in OpenGL puts left and bottom at -1 and right and top at +1. The render buffer's origin is also in the bottom-left corner. So when you present an image to the screen you want the Y coordinate inverted so that the top is at -1 (or rather at 0), but this applies only to the display.
For textures, you should not try to invert; rather, keep the origin at the bottom-left. Understand that once drawn to the texture, those pixels are simply RGBA data, and when the texture is reused for display (or if you read those pixels back) they will look upside-down from a natural perspective.
I am not sure all of this sounds logical, but what you usually do is invert the scene so that what you see is correct; none of this changes the actual order of the buffer on the GPU. For instance, if you were able to simply replace the RGBA data of the first pixel in the buffer, that pixel would in fact be at the bottom-left of your screen, not the top-left.
So it is best to keep the OpenGL order of things when dealing with the FBO, and invert only when drawing to the buffer that will be presented to the screen.
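For illustration only, one common way to do that final flip is in the presentation pass; a GLSL ES sketch, with assumed names:

precision mediump float;
uniform sampler2D u_fboTexture;  // the texture the FBO rendered into
varying vec2 v_uv;               // standard 0..1 quad texture coordinates

void main() {
    // Flip V only here, at display time; the FBO itself keeps its bottom-left origin.
    gl_FragColor = texture2D(u_fboTexture, vec2(v_uv.x, 1.0 - v_uv.y));
}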

Matrix.frustumM causing problems not being able to set the near draw distance to zero

When setting the OpenGL ES draw distance using Matrix.frustumM, I notice that you can't set the near draw distance to zero, and any value less than 1 gives really weird distortion. Setting the near distance to 1 works fine most of the time, but when the camera moves closer to objects than this distance it looks horrible, because they are not drawn (or a portion of them is not drawn). Is there anything that can be done about this?
Many thanks for your time.
Not much can be done, actually. The near and far clipping planes clip pixels closer than near or farther than far. Beyond this, the near plane is a bit special, as it defines your field of view in combination with the border parameters (left, right, top and bottom). If you had a quad with the same coordinates as those borders, it would be exactly full-screen when it was near units away. Because of this, the near plane cannot be zero or negative: an object zero units away from a frustum camera would appear infinitely scaled.
Still, you can use values smaller than 1 without strange artifacts. What you should do is look at examples of defining the frustum from a field of view. Generally you pick an angle (the field of view) for one dimension, say 45 degrees across the width, then choose near and far as you please (both must be positive). Then use trigonometry to compute left and right from the angle and near, and use the same values for bottom and top, scaled by the screen (view) aspect ratio. Done this way, changing the near parameter produces no change in distortion. A sketch of this approach follows.
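For illustration, a minimal sketch of that field-of-view approach using Android's android.opengl.Matrix; the variable names (aspectRatio, projectionMatrix) are assumptions:

// Derive the frustum borders from a horizontal field of view instead of hard-coding them.
float fovX = 45f;                       // field of view across the width, in degrees
float near = 0.1f, far = 100f;          // both must be positive
float right = near * (float) Math.tan(Math.toRadians(fovX) / 2.0);
float left = -right;
float top = right / aspectRatio;        // aspectRatio = view width / view height
float bottom = -top;
Matrix.frustumM(projectionMatrix, 0, left, right, bottom, top, near, far);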

glTranslatef() is moving both, my origin and my sprite

Here is the deal: I'm programming a 2D framework/game engine with OpenGL ES. I am using VBOs and an ortho projection to draw an arrangement of sprites across the screen (as part of testing), and everything was going nice and smooth until I had to play with translations and rotations. The specific problem is that when I apply a translation with glTranslatef() prior to the rotation, the function not only moves the sprite but also my origin, messing up my whole transformation. I am 100% sure it is working this way, because I used glTranslatef() to move the sprite right and down by half the size of the screen (yes, my origin is in the top left) and then applied a constant rotation, and the thing just keeps moving in a circular path around the center of the screen (actually rotating, but not as I expect).
If you want some code, here we go:
gl.glTranslatef(-(x+width/2), -(y+height/2), -layer);
gl.glRotatef(angle, 0.0f, 0.0f, -1.0f);
gl.glTranslatef(x+width/2, y+height/2, layer);
In this fragment of code, x and y are the position of the sprite, height and width are the size of the sprite, angle is the angle of rotation, and layer is just a way of organizing the sprites into several layers. Pretty straightforward, right?
Again, my problem is that glTranslatef() is moving both the sprite and the origin. Am I doing something wrong, or misunderstanding something about the translation?
Thanks in advance.
You might need to use glPushMatrix and glPopMatrix, since anything you do after those translations and rotations will be affected by them.
But what you are describing is actually how it works: if you use a translate, that effectively becomes your new origin, because everything after the translate is affected by it. That's why you need to push and pop, so that you can go push -> translate and/or rotate the object -> draw -> pop, and then carry on with whatever other transformations you need without that previous translate affecting everything else.
It's a bit confusing at first, but google around and you'll see how to use them properly.
http://www.khronos.org/opengles/sdk/1.1/docs/man/glPushMatrix.xml
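For illustration, a minimal sketch of that push/pop pattern in the question's gl.* (GL10) style; drawSprite() is a hypothetical stand-in for your own draw call, with the quad's vertices at (0,0) to (width,height):

gl.glPushMatrix();                                    // save the current matrix
gl.glTranslatef(x + width/2, y + height/2, layer);    // move the pivot to the sprite's center
gl.glRotatef(angle, 0.0f, 0.0f, 1.0f);                // rotate around that center
gl.glTranslatef(-width/2, -height/2, 0);              // step back to the sprite's corner
drawSprite();                                         // hypothetical draw call
gl.glPopMatrix();                                     // restore, so later draws are unaffected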
I think you misunderstood how matrices work in OpenGL. When you do a matrix operation such as glRotatef or glTranslatef, the matrices are multiplied together, which affects the base vectors. For instance, let's say we are only drawing a point that starts at (0,0,0). If you call translate(1,0,0) the point will be at (1,0,0); after that you call rotate(90, 0, 0, 1) and your point will be in the same place as before, but rotated. Now the last call is translate(-1,0,0), and your point ends up at (1,-1,0) (and not where it started)!
And that is what you did in your "fragment of code". The thing is, you did not specify what you really want to do, and how you define your vertices matters as well. If you want something like a view with an image whose position and rotation you control, you might want to create a square vertex buffer with values from -1 to 1 in both dimensions (or from (-width/2, -height/2) to (width/2, height/2)). In that case the center of your object is at (0,0,0), and that is probably the point you want to rotate it around (or am I wrong here?). So when you want to position the object by its origin point, you will need to write translatef(x+width/2, y+height/2, ...).
As for the whole process of drawing in this case: if you want the origin at (x,y,z), with size (width, height), rotated by (angle), here is the sequence:
glTranslatef(x, y, z)                  // place the origin
glTranslatef(width/2, height/2, 0)     // move the pivot to the object's center
glRotatef(angle, 0, 0, 1)              // rotate around that center
glScalef(width/2, height/2, 1)         // only if vertices are defined from (-1,-1) to (1,1); specified last so the unit quad is scaled before it is rotated, avoiding skew
Do note in this case that, since you rotate the object around its center, its origin will not be at (x,y,z) anymore once rotated.
In general I would suggest staying away from glRotate, glTranslate and glScale if possible; they tend to make things very nasty. Another way is to construct a matrix directly from base vectors: with a little math you can compute all 4 points of your "square view" from parameters such as origin, width, height and rotation. With the 4 points being A (origin), B (lower-left point), C (lower-right point) and D (upper-right point), your base vectors are (B-A), (D-A) and normalized(crossProduct((B-A), (D-A))). These 3 vectors can be inserted into the top-left 3x3 part of the GL matrix (which is 4x4, or float[16]); they represent both rotation and scale, so all you need to add is the translation part (just google around a bit for this approach).
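A rough sketch of that construction, assuming column-major order as GL expects; subtract, cross and normalize are hypothetical float[3] helpers:

// Build the model matrix directly from the quad's corner points A, B, D.
float[] u = subtract(B, A);              // first base vector (B - A)
float[] v = subtract(D, A);              // second base vector (D - A)
float[] n = normalize(cross(u, v));      // third base vector, perpendicular to the quad
float[] m = {
    u[0], u[1], u[2], 0,                 // column 0: rotation and scale
    v[0], v[1], v[2], 0,                 // column 1: rotation and scale
    n[0], n[1], n[2], 0,                 // column 2: the quad's normal
    A[0], A[1], A[2], 1,                 // column 3: translation (origin point A)
};
gl.glLoadMatrixf(m, 0);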

How to create a shader to mask using a degree offset from a central point?

I'm a little bit lost, and this is somewhat related to another question I've asked about fragment shaders, but goes beyond it.
I have an orthographic scene (although that may not be relevant), with the scene drawn here as black, and I have one billboarded sprite that I draw using a shader, which I show in red. I have a point that I know and define myself, A, represented by the blue dot, at some x,y coordinate in the 2d coordinate space. (Lower-left of screen is origin). I need to mask the red billboard in a programmatic fashion where I specify 0% to 100%, with 0% being fully intact and 100% being fully masked. I can either pass 0-100% (0 to 1.0) in to the shader, or I could precompute an angle, either solution would be fine.
( Here you can see the scene drawn with '0%' masking )
So when I set "15%" I want the following to show up:
( Here you can see the scene drawn with '15%' masking )
And when I set "45%" I want the following to show up:
( Here you can see the scene drawn with '45%' masking )
And here's an example of "80%":
The general idea, I think, is to pass in a vec2 uniform 'A', and within the fragment shader determine whether the fragment is within the area swept from the line running from 'A' to the bottom of the screen, around to a line at the correct angle offset clockwise from there. If it is within that area, discard the fragment. (Discarding makes more sense than setting alpha to 0.0, or 1.0 when keeping, right?)
But how can I actually achieve this?? I don't understand how to implement that algorithm in terms of a shader. (I'm using OpenGL ES 2.0)
One solution would be to calculate the difference between gl_FragCoord (which does exist under ES 2.0) and the point (make sure the point is in screen coordinates), then use the two-parameter atan function to get an angle. If the angle is not a value you like (greater than a minimum and less than a maximum), kill the fragment.
Of course, killing fragments is not precisely the most performant thing to do. A (somewhat more complicated) triangle solution may still be faster.
EDIT:
To better explain "not precisely the most performant thing", consider that killing fragments still causes the fragment shader to run (it only discards the result afterwards) and interferes with early depth/stencil fragment rejection.
Constructing a triangle fan like whoplisp suggested is more work, but will not process any fragments that are not visible, will not interfere with depth/stencil rejection, and may look better in some situations, too (MSAA for example).
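For illustration only, here is a hedged sketch of the atan-based discard approach described above, in GLSL ES 2.0; the uniform names and the convention of sweeping clockwise from straight below A are assumptions, not from the question:

precision mediump float;
uniform sampler2D u_texture;   // the billboard's texture (assumed)
uniform vec2 u_point;          // point A, in screen/pixel coordinates
uniform float u_mask;          // 0.0 = fully intact, 1.0 = fully masked
varying vec2 v_texCoord;

void main() {
    vec2 d = gl_FragCoord.xy - u_point;
    float angle = atan(d.y, d.x);          // this fragment's angle around A, in (-pi, pi]
    float start = -1.5707963;              // straight down from A (screen origin is lower-left)
    // How far clockwise of 'start' this fragment lies, wrapped into [0, 2*pi).
    float swept = mod(start - angle, 6.2831853);
    if (swept < u_mask * 6.2831853) discard;   // inside the masked wedge
    gl_FragColor = texture2D(u_texture, v_texCoord);
}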
Why don't you just draw some black triangles on top of the red rectangle?

Drawing 3D in front/behind sprites in XNA/WP7?

This is kind of frustrating me as I've been grizzling over it for a couple of hours now.
Basically I'm drawing 2D sprites through SpriteBatch, and 3D orthographically projected geometry using the BasicEffect class.
My problem is controlling what gets rendered on top of what. At first I thought it would be simply controlling the render order, i.e. if I do:
Draw3DStuff()
SpriteBatch.Begin(...)
Draw2DStuff();
SpriteBatch.End();
It would mean the 2D stuff renders over the 3D stuff; however, since I don't control when the device begins/ends rendering, this isn't the result. The 3D always renders on top of the 2D elements, no matter the projection settings, world translation, the Z components of the 3D geometry's vertex definitions, or the layer depth of the 2D elements.
Is there something I'm not looking into here?
What's the correct way to handle the depth here?
OK I figured it out 2 seconds after posting this question. I don't know if it was coincidence or if StackOverflow has a new feature granting the ability to see future answers.
The Z positions of SpriteBatch elements are between 0 and 1, so they're not directly comparable to the Z positions of orthographic geometry being rendered.
When you create an orthographic matrix, however, you define near and far clip planes. The Z position you set should be within those planes. I had a hunch that the SpriteBatch class effectively draws quads orthographically, so by extension 0 would represent the near clip and 1 the far clip, with sprite depth written to the same place the 3D geometry's depth is rendered to.
Soooo, to make it work, I figured the near/far clips I was defining for the orthographic render would be measured against that 0-to-1 sprite depth range, so it was simply a matter of setting the right Z value. For example:
If I have a near clip of 0 and a far clip of 10000, and I want geometry to correspond to a 0.5f layer depth (rendering in front of sprites drawn at 0.6 and behind sprites drawn at 0.4), I do:
float zpos = 0.5f;
float orthoGraphicZPos = LinearInterpolate(0, 10000, zpos);
Or just zpos * 10000 :D
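(For reference, a minimal sketch: XNA's built-in MathHelper.Lerp does the same interpolation as the LinearInterpolate helper above; nearClip and farClip here are assumed to be your orthographic planes.)

float layerDepth = 0.5f;                                       // SpriteBatch-style depth, 0..1
float orthoZ = MathHelper.Lerp(nearClip, farClip, layerDepth); // e.g. Lerp(0, 10000, 0.5f) = 5000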
I guess it would make more sense to set your orthographic renderer's near/far clips to 0 and 1 so they compare directly with the sprites' layer depths.
Hopefully my reasoning for this solution was correct (more or less).
As an aside, since you mentioned you had a hunch about how the sprite batch draws quads: you can see the source code for all the default/included shaders and the SpriteBatch class if you are curious, or need help solving a problem like this:
http://create.msdn.com/en-US/education/catalog/sample/stock_effects
The problem is that SpriteBatch changes some of the render states that are used when you draw your 3D objects. To fix this, you just have to reset them before rendering your 3D objects, like so:
GraphicsDevice.BlendState = BlendState.Opaque;                // SpriteBatch leaves alpha blending enabled
GraphicsDevice.DepthStencilState = DepthStencilState.Default; // SpriteBatch turns the depth buffer off
GraphicsDevice.SamplerStates[0] = SamplerState.LinearWrap;    // SpriteBatch leaves clamped sampling set
Draw3DStuff()
SpriteBatch.Begin(...)
Draw2DStuff();
SpriteBatch.End();
Note that this is for XNA 4.0, which I am pretty sure you're using anyway. More info can be found on Shawn Hargreaves's blog. This resets the render states, draws the 3D objects, then draws the 2D objects over them. Without resetting the render states you get weird effects like the ones you're seeing.
