Sprite Sheet Animation in DirectX 11.0

Recently, I've taken up building a 2-D rendering 'engine' of sorts, based on http://www.rastertek.com/dx11tut11.html but converted to use DirectX 11.0.
The point of this 'engine' is to build a small 2-D game as a personal project, to gain more familiarity with the raw API -- meaning no outside libraries such as http://directxtk.codeplex.com/.
My problem lies in the rendering of Sprite Sheet Animations.
Since I'm already using dynamic vertex buffers to re-position my images, I don't see the necessity of doing the transformations in the vertex shader, as detailed here: https://gamedev.stackexchange.com/questions/28287/how-to-set-sprite-source-coordinates.
So instead I opted for this solution:
for (size_t i = 0; i < 6; i++)
{
    // Having not yet overloaded the operators, I have to be a barbarian.
    vertices[i].texture.x *= offset;
    vertices[i].texture.y *= offset;
    vertices[i].texture.x += offset * (tile % tiles_x);
    vertices[i].texture.y += offset * floorf(tile / tiles_x);
}
where tile is the current tile to render, and tiles_x is the number of tiles across the bitmap horizontally.
EDIT: offset is computed as 1.0f / tiles_x
I believe the problem is that my sprite sheets are decidedly not square.
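For illustration, here is a minimal sketch of the per-frame UV maths with separate horizontal and vertical steps (assuming a tiles_y row count is also available; offset_x, offset_y, col and row are hypothetical names, not the code above):

float offset_x = 1.0f / tiles_x;   // width of one frame in texture space
float offset_y = 1.0f / tiles_y;   // height of one frame in texture space
size_t col = tile % tiles_x;       // column of the current frame
size_t row = tile / tiles_x;       // row of the current frame (integer division)

for (size_t i = 0; i < 6; i++)
{
    // Sketch only: assumes each quad's UVs start out spanning the full 0..1 range.
    vertices[i].texture.x = vertices[i].texture.x * offset_x + offset_x * col;
    vertices[i].texture.y = vertices[i].texture.y * offset_y + offset_y * row;
}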
My question is this: How, using only the D3D11 API, could I achieve drawing simply a section of a texture for animation?
Why just a section of a texture?
I'm loading the sheet from an .xml file containing the various Render Rectangles for the individual animation frames (Anchor Point Animation).
Swapping out the Render Rectangles should achieve what I'm after.
In addition to sprite animation, it could then be applied to very large backgrounds, where only the section currently in view actually gets drawn.
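As a sketch of that render-rectangle idea (not the asker's code; VertexType, FrameRect and an XMFLOAT2-style texture member are assumptions), a per-frame source rectangle in pixels maps onto a quad's UVs by dividing by the sheet size:

struct FrameRect { float left, top, right, bottom; };   // one entry per frame, read from the .xml

void ApplyFrameUVs(VertexType* vertices, const FrameRect& rc, float sheetWidth, float sheetHeight)
{
    float u0 = rc.left   / sheetWidth;
    float v0 = rc.top    / sheetHeight;
    float u1 = rc.right  / sheetWidth;
    float v1 = rc.bottom / sheetHeight;

    // Two triangles, matching the usual 6-vertex quad layout.
    vertices[0].texture = XMFLOAT2(u0, v0);   // top left
    vertices[1].texture = XMFLOAT2(u1, v1);   // bottom right
    vertices[2].texture = XMFLOAT2(u0, v1);   // bottom left
    vertices[3].texture = XMFLOAT2(u0, v0);   // top left
    vertices[4].texture = XMFLOAT2(u1, v0);   // top right
    vertices[5].texture = XMFLOAT2(u1, v1);   // bottom right
}

The same function covers the large-background case: the rectangle simply becomes the portion of the background currently in view.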

Related

In SceneKit, how can one tile a texture on differently sized objects while keeping draw calls minimal?

To improve performance/fps in a SceneKit scene, I would like to minimise the number of draw calls. The scene contains a procedurally generated city, for which I generate houses of random heights (each an SCNBox) and tile them with a single, identical repeating facade texture.
The proper way to apply the textures appears to be as follows:
let material = SCNMaterial()
material.diffuse.contents = image
material.diffuse.wrapS = SCNWrapMode.repeat
material.diffuse.wrapT = SCNWrapMode.repeat
buildingGeometry.firstMaterial = material
This works. But as written, it stretches the material to fit the size of the faces of the box. To resize the textures to maintain aspect ratio, one needs to add the following code:
material.diffuse.contentsTransform = SCNMatrix4MakeScale(sx, sy, sz)
where sx, sy and sz are appropriate scale factors derived from size of the faces in the geometry. This also works.
But that latter approach implies that every node needs a custom material, which in turn means that I cannot re-use a single material for all of the houses, which in turn means that every single node requires an extra draw call.
Is there a way to use a single texture material to tile all of the houses (without stretching the texture)?
Using a surface shader modifier (SCNShaderModifierEntryPointSurface), you could modify _surface.diffuseTexcoord based on scn_node.boundingBox.
Since the bounding box is dynamically fed to the shader, all the objects will use the same shader and will benefit from instancing (reducing the number of draw calls).
The SCNShadable.h header file has more details on that.
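A rough sketch of that idea (not from the answer itself; the GLSL-style syntax, the use of only the x/y extents, and the repeatsPerUnit constant are all assumptions):

let surfaceModifier = """
// Size of the node, from its bounding box (boundingBox[0] = min, boundingBox[1] = max).
vec3 size = scn_node.boundingBox[1] - scn_node.boundingBox[0];
// Hypothetical tiling density: how many texture repeats per scene unit.
float repeatsPerUnit = 1.0;
_surface.diffuseTexcoord *= vec2(size.x, size.y) * repeatsPerUnit;
"""
material.shaderModifiers = [.surface: surfaceModifier]

Because the modifier only reads per-node data, one material instance can be shared by every house.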

WebGL – Stretching non power of two textures or adding padding

I'm using images of varying sizes and aspect ratios, uploaded through a CMS in Three.js / A-Frame. Of course, these aren't power of two textures. It seems like I have two options for processing them.
The first is to stretch the image, as Three.js does, with the transformation undone when the texture is applied to the plane.
The second is to pad the image with extra pixels, which are then hidden by custom UVs.
Would one approach be better than the other? Based on image quality, I'd imagine not doing any stretching would be preferred.
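For reference, a minimal sketch of the padding route (function and variable names are made up): draw the image into the top-left corner of a power-of-two canvas and use that canvas as the texture, then trim the padding off with custom UVs like the ones in the edit below.

function padToPowerOfTwo(image) {
    var nextPow2 = function (v) { return Math.pow(2, Math.ceil(Math.log2(v))); };
    var canvas = document.createElement('canvas');
    canvas.width = nextPow2(image.width);
    canvas.height = nextPow2(image.height);
    // The image sits in the top-left corner; the rest of the canvas is unused padding.
    canvas.getContext('2d').drawImage(image, 0, 0);
    return new THREE.CanvasTexture(canvas);
}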
EDIT:
For those interested, I couldn't spot a difference between the two approaches. Here's the code for altering the UVs to cut off the unused texture padding:
var uvX = 1;
var uvY = 0;

if (this.orientation === 'portrait') {
    uvX = (1.0 / (this.data.textureWidth / this.data.imageWidth));
} else {
    uvY = 1.0 - (this.data.imageHeight / this.data.textureHeight);
}

var uvs = new Float32Array([
    0, uvY,
    uvX, uvY,
    uvX, 1,
    0, 1
]);
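Applied to a four-vertex plane, these UVs replace the default attribute, e.g. (assuming a BufferGeometry-based plane; older Three.js releases use addAttribute instead of setAttribute):

geometry.setAttribute('uv', new THREE.BufferAttribute(uvs, 2));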
EDIT 2:
I hadn't set the texture up properly.
Side by side, the non-stretched (padded) image does look better up close, but the difference is not huge:
Left: stretched to fit the power-of-two texture. Right: non-stretched, with padding.
Custom UVs can be a bit of a pain (especially when users can modify the texturing), and padding can break tiling when the texture repeats (unless it's handled very carefully).
Just stretch the images (or let Three.js do it for you). That's what most engines (like Unity) do anyway. There -might- be a tiny bit of visual degradation if the stretch algorithm and texel sampling do not 100% match, but it will be fine.
The general idea is that if your users -really- cared about sampling quality at that level, they'd carefully handcraft POT textures anyway. Usually, they just want to throw texture images at their models and have them look about right... and they will.

How to drop shadows under sprites in a top-down view with cocos2d

I just started learning OpenGL and cocos2d and I need some advice.
I'm writing a game in which the player is allowed to touch and move rectangles on the screen in a top-down view. Every time a rectangle is touched, it moves up (towards the screen) in the z direction and is scaled a bit so it looks closer than the rest. It drops back down to z = 0 when the touch ends.
I'd like the risen rectangles to drop shadow under them, but can't get it to work. What approach would you recommend for the best result?
Here's what I have so far.
During setup I turn on the depth buffer and then:
1. all the textures are generated with CCRenderTexture
2. the generated textures are used as an atlas to create CCSpriteBatchNode
3. when a rectangle (tile) is touched:
static const float _raisedScale = 1.2;
static const float _raisedVertexZ = 30;
...
- (void)makeRaised
{
    _state = TileStateRaised;
    self.scale = _raisedScale;
    self.vertexZ = _raisedVertexZ;
    _glowOverlay.vertexZ = _raisedVertexZ;
    _glowOverlay.opacity = 255;
}
The glow overlay is used to "light up" the rectangle.
After that I animate it using -(void)update:(ccTime)delta.
Is there a way to make OpenGL cast the shadow for me using cocos2d, for example using shaders or OpenGL shadowing? Or do I have to use a texture overlay to simulate the shadow?
What do you recommend? How would you do it?
Sorry for a newbie question, but it's all really new to me and I really need your help.
EDIT 6th of March
I managed to get sprites with a shadow overlay to show under the tiles, and it looks OK until one tile has to drop a shadow on another tile that has a non-zero vertexZ value. I tried to create additional shadow sprites which would be scaled and shown on top of the other tiles (usually while rising or falling), but I have problems with the animation (tile up, tile down).
Why complicate the problem?
Simply create a projection of how the shadow would look using your favourite graphics editing program and save it as a PNG. When the object is lifted, insert your shadowSprite behind the lifted object (you can shift it left/right depending on where you think your light source is).
When the user drops the object down, the shadow can remain under the object and move with it, making itself visible whenever the item is lifted again.
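A rough sketch of that approach in cocos2d (not the asker's code; the file name, offset and opacity are made up, and makeRaised is extended purely for illustration):

- (void)makeRaised
{
    _state = TileStateRaised;
    self.scale = _raisedScale;
    self.vertexZ = _raisedVertexZ;

    if (_shadowSprite == nil) {
        // Pre-rendered shadow PNG; if the tiles live in a CCSpriteBatchNode,
        // the shadow must come from the same atlas (or be added to an ordinary layer instead).
        _shadowSprite = [CCSprite spriteWithFile:@"tileShadow.png"];
        [self.parent addChild:_shadowSprite z:self.zOrder - 1];
    }
    // Shift toward the assumed light direction and scale with the lift.
    _shadowSprite.position = ccpAdd(self.position, ccp(8.0f, -8.0f));
    _shadowSprite.scale = _raisedScale;
    _shadowSprite.opacity = 128;
    _shadowSprite.visible = YES;
}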

3D sprites, writing correct depth buffer information

I am writing a particle engine for iOS using Monotouch and openTK. My approach is to project the coordinate of each particle, and then write a correctly scaled textured rectangle at this screen location.
It works fine, but I have trouble calculating the correct depth value so that the sprite will correctly overdraw, and be overdrawn by, 3D objects in the scene.
This is the code I am using today:
// d = distance to the projection plane
float d = (float)(1.0 / Math.Tan(MathHelper.DegreesToRadians(fovy / 2f)));
Vector3 screenPos;
Vector3.Transform(ref objPos, ref viewMatrix, out screenPos);
float depth = 1 - d / -screenPos.Z;
Then I am drawing a triangle strip at that screen coordinate, using the depth value calculated above as the z coordinate.
The results are almost correct, but not quite. I guess I need to take the near and far clipping planes into account somehow (near is 1 and far is 10000 in my case), but I am not sure how. I tried various ways and algorithms without getting accurate results.
I'd appreciate some help on this one.
What you really want to do is take your source position and pass it through the modelview and projection matrices (or whatever you've got set up instead, if you're not using the fixed pipeline). Supposing you've used one of the standard calls to set up the stack, such as glFrustum, and otherwise left things at identity, you can get the relevant formula directly from the man page. Reading straight from that, you'd transform as:
z_clip = -( (far + near) / (far - near) ) * z_eye - ( (2 * far * near) / (far - near) )
w_clip = -z_eye
Then, finally:
z_device = z_clip / w_clip;
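In the question's MonoTouch/OpenTK code that could look roughly like this (names are illustrative; zEye is the view-space Z, which is negative in front of the camera):

// Sketch: normalized-device depth for a standard perspective projection.
static float NdcDepth(float zEye, float near, float far)
{
    float zClip = -((far + near) / (far - near)) * zEye
                  - (2f * far * near) / (far - near);
    float wClip = -zEye;
    return zClip / wClip;   // -1 at the near plane, +1 at the far plane
}

// With the question's planes (near = 1, far = 10000):
//   float depth = NdcDepth(screenPos.Z, 1f, 10000f);
// If the depth buffer expects [0, 1], remap with 0.5f * depth + 0.5f.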
EDIT: as you're working in ES 2.0, you can actually avoid the issue entirely. Supply your geometry for rendering as GL_POINTS and perform a normal transform in your vertex shader but set gl_PointSize to be the size in pixels that you want that point to be.
In your fragment shader you can then read gl_PointCoord to get a texture coordinate for each fragment that's part of your point, allowing you to draw a point sprite if you don't want just a single colour.
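A minimal ES 2.0 shader pair for that point-sprite approach could look like this (attribute and uniform names are made up):

// Vertex shader: transform the particle centre and set its on-screen size.
attribute vec3 aPosition;
uniform mat4 uModelViewProjection;
uniform float uPointSizePixels;
void main()
{
    gl_Position = uModelViewProjection * vec4(aPosition, 1.0);
    gl_PointSize = uPointSizePixels;
}

// Fragment shader: gl_PointCoord is a built-in 0..1 coordinate across the point.
precision mediump float;
uniform sampler2D uTexture;
void main()
{
    gl_FragColor = texture2D(uTexture, gl_PointCoord);
}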

Drawing 3D in front/behind sprites in XNA/WP7?

This is kind of frustrating me as I've been grizzling over it for a couple of hours now.
Basically I'm drawing 2D sprites through SpriteBatch and orthographically projected 3D geometry using the BasicEffect class.
My problem is controlling what gets rendered on top of what. At first I thought it would simply be a matter of controlling the render order, i.e. if I do:
Draw3DStuff()
SpriteBatch.Begin(...)
Draw2DStuff();
SpriteBatch.End();
It would mean the 2D stuff renders over the 3D stuff. However, since I don't control when the device begins/ends rendering, this isn't the result: the 3D always renders on top of the 2D elements, no matter the projection settings, the world translation, the z components of the 3D geometry's vertex definitions, or the layer depth of the 2D elements.
Is there something I'm not looking into here?
What's the correct way to handle the depth here?
OK I figured it out 2 seconds after posting this question. I don't know if it was coincidence or if StackOverflow has a new feature granting the ability to see future answers.
The Z position of SpriteBatch elements is between 0 and 1, so it's not directly comparable to the z positions of the orthographic geometry being rendered.
When you create an orthographic matrix, however, you define near and far clip planes, and the Z position you set should fall between them. I had a hunch that the SpriteBatch class is effectively drawing quads orthographically, so by extension that 0-to-1 range would mean 0 represents the near clip and 1 the far clip, with the depth written to the same place the 3D geometry's depth is written to.
Soooo, to make it work I figured the near/far clips I was defining for the orthographic render would be measured against the near/far clips of the sprites being rendered, so it was simply a matter of setting the right z value. For example:
If I have a near clip of 0 and a far clip of 10000, and I want to draw something that corresponds to a 0.5f layer depth (in front of sprites drawn at 0.6 and behind sprites drawn at 0.4), I do:
float zpos = 0.5f;
float orthoGraphicZPos = LinearInterpolate(0, 10000, zpos);
Or just zpos * 10000 :D
I guess it would make more sense to have your orthographic renderers near/far clip to be 0 and 1 to directly compare with the sprites layer depths.
Hopefully my reasoning for this solution was correct (more or less).
As an aside, since you mentioned you had a hunch about how the sprite batch draws quads: you can see the source code for all the default/included shaders and the SpriteBatch class if you are curious, or need help solving a problem like this:
http://create.msdn.com/en-US/education/catalog/sample/stock_effects
The problem is that the SpriteBatch messes with some of the render states that are used when you draw your 3D objects. To fix this you just have to reset them before rendering your 3D objects, like so:
GraphicsDevice.BlendState = BlendState.Opaque;
GraphicsDevice.DepthStencilState = DepthStencilState.Default;
GraphicsDevice.SamplerStates[0] = SamplerState.LinearWrap;
Draw3DStuff()
SpriteBatch.Begin(...)
Draw2DStuff();
SpriteBatch.End();
Note that this is for XNA 4.0, which I am pretty sure you're using anyway. More info can be found on Shawn Hargreaves' blog. This will reset the render states, draw the 3D objects, then the 2D objects over them. Without resetting the render states you get weird effects like the ones you're seeing.
