How are vertex shader and pixel shader related? - directx-11

I know this has been asked a million times, but I just don't get some details.
As an example, let's say that I have a swap chain created and one staging ID3D11Texture2D.
What I can do with it is load a bitmap into this 2D texture and then copy it to the render target (assuming the size and format of both resources are the same).
Now I would like to display a sprite over that. One solution is to use vertex and pixel shaders. My understanding problem starts here.
Vertex shader:
I guess I should draw 2 triangles (two triangles make a rectangle, or a quad).
DirectX uses a left-handed coordinate system, but I guess that's irrelevant here because I'm dealing with 2D sprites, right?
For the same reason, I assume I can ignore the world->view->projection transformations, right? Actually, I only need a translation here in order to place the sprite at the right place on the screen, right?
Should the coordinates of these two triangles match the sprite dimensions, plus translation?
In what order should I provide these vertices? Should the origin of the coordinate system be in the center of the sprite, or should the origin be at the top left corner of the sprite?
As an example, if I have 80x70 sprite, what would be the vertex values?
The real vertices: (X, Y, Z - with no translation applied)
1. -40, -35, 0
2. -40, 35, 0
3. 40, -35, 0
4. 40, 35, 0
5. -40, 35, 0
6. 40, -35, 0
Is that correct?
The rasterization step should call the pixel shader once for each pixel covered by the sprite. The output of the vertex shader will be the input to the pixel shader. Does that mean the pixel shader should have access to the sprite in order to return the correct pixel value when called?

For 2D drawing, you generally use a world->view->projection that converts classic 'pixel space' to the clip space (-1..1).
float xScale = 2.0f / ViewPortWidth;
float yScale = 2.0f / ViewPortHeight;
[  xScale,   0,       0,  0 ]
[  0,       -yScale,  0,  0 ]
[  0,        0,       1,  0 ]
[ -1,        1,       0,  1 ]
The -1 and -yScale values are negated so that 0,0 is at the upper-left.
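As a host-side sketch, that matrix could be built with DirectXMath (the helper function and its name here are illustrative, not part of the answer):
#include <DirectXMath.h>
using namespace DirectX;
// Builds the pixel-space -> clip-space matrix shown above for a given viewport:
// x is scaled into [-1, 1], y is flipped and scaled so (0, 0) maps to the upper-left.
XMMATRIX MakeSpriteProjection(float viewportWidth, float viewportHeight)
{
    float xScale = 2.0f / viewportWidth;
    float yScale = 2.0f / viewportHeight;
    return XMMatrixSet(
         xScale,  0.0f,   0.0f, 0.0f,
         0.0f,   -yScale, 0.0f, 0.0f,
         0.0f,    0.0f,   1.0f, 0.0f,
        -1.0f,    1.0f,   0.0f, 1.0f);
}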
The Vertex Shader would do the transform to clip space:
void SpriteVertexShader(inout float4 color : COLOR0,
inout float2 texCoord : TEXCOORD0,
inout float4 position : SV_Position)
{
position = mul(position, MatrixTransform);
}
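MatrixTransform itself comes from a constant buffer; the answer doesn't show the declaration, but a minimal sketch (the buffer name and register are assumptions) might be:
cbuffer SpriteConstants : register(b0)
{
    row_major float4x4 MatrixTransform; // row_major so the matrix laid out above can be uploaded as-is
};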
The quad is then rasterized, and the Pixel Shader is invoked for each covered pixel:
Texture2D<float4> Texture : register(t0);
sampler TextureSampler : register(s0);
float4 SpritePixelShader(float4 color : COLOR0,
float2 texCoord : TEXCOORD0) : SV_Target0
{
return Texture.Sample(TextureSampler, texCoord) * color;
}
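On the host side, the texture and sampler that the pixel shader reads from slots t0 and s0 have to be bound before drawing; a minimal C++ sketch (the function and variable names are illustrative):
void BindSprite(ID3D11DeviceContext* context,
                ID3D11ShaderResourceView* spriteSRV,
                ID3D11SamplerState* spriteSampler)
{
    context->PSSetShaderResources(0, 1, &spriteSRV); // matches register(t0)
    context->PSSetSamplers(0, 1, &spriteSampler);    // matches register(s0)
}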
See SpriteBatch in the DirectX Tool Kit.


Repeat texture like stipple

I'm using orthographic projection.
I have 2 triangles creating one long quad.
On this quad I put a texture that repeats itself along the way.
The world zoom is constantly being changed by the user, which makes the quad shorter or longer accordingly. The height is calculated in the shader so it is always the same size (in pixels).
My problem is that I want the texture to repeat according to its real (pixel) size and the length of the quad. In other words, the texture should always be the same size (in pixels) and should fill the quad by repeating more or fewer times depending on the quad length.
The rotation is important.
For example:
My texture is [image].
I've added texture coordinates to my vertices to repeat it 20 times, as you can see below [image].
Because the view is zoomed out too far, the texture looks squeezed.
Now I zoom in and the texture is stretched. It will always repeat 20 times.
I'm sure I have to play with the texture coordinates in the frag shader, but I don't see the solution. Or perhaps there is a better solution to my problem.
---- ADDITION ----
Solved it by:
Calculating the repeat S value at the current zoom (when I'm adding the vertices) and sending the map width (in world values) as an attribute. Every draw, I'm sending the current map width as a uniform for calculating the scale.
But I'm not happy with this solution.
OK, found a way to do it with minimal attributes and minimal code in the shader.
Do once:
Calculate the repeat count for each line; since my world and my screen are 1:1 (1 unit in my world is 1 pixel), this is LineDistance(InWorldUnits)/picWidth(inScreenUnits).
Save it as an attribute.
Every draw:
Calculate the world-to-screen scale: WorldWidth/ScreenWidth.
Set it as a uniform.
Draw the buffer.
In the frag shader:
Simply multiply this scale with the repeat attribute (see the sketch after this list).
Works perfectly and looks good. Resizing the window is supported as well.
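A minimal GLSL sketch of that fragment shader, with illustrative names: v_texcoord.s carries the precomputed repeat count (LineDistance/picWidth) from the vertex data, u_worldToScreen is the WorldWidth/ScreenWidth uniform set every draw, and the texture's wrap mode is assumed to be GL_REPEAT so values above 1.0 tile.
precision mediump float;
uniform sampler2D u_texture;
uniform float u_worldToScreen;
varying vec2 v_texcoord;
void main() {
    // Scale the repeat count by the current world-to-screen factor; keep the cross coordinate.
    gl_FragColor = texture2D(u_texture, vec2(v_texcoord.s * u_worldToScreen, v_texcoord.t));
}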
The general solution is to include a texture matrix. So your vertex shader might look something like
attribute vec4 a_position;
attribute vec2 a_texcoord;
varying vec2 v_texcoord;
uniform mat4 u_matrix;
uniform mat4 u_texMatrix;
void main() {
gl_Position = u_matrix * a_position;
v_texcoord = (u_texMatrix * vec4(a_texcoord, 0.0, 1.0)).xy;
}
Now you can set up texture matrix to scale your texture coordinates however you need. If your texture coordinates go from 0 to 1 across the texture and your pattern is 16 pixels wide then if you're drawing a line 100 pixels long you'd need 100/16 as your X scale.
var pixelsLong = 100;
var pixelsTall = 8;
var textureWidth = 16;
var textureHeight = 16;
var xScale = pixelsLong / textureWidth;
var yScale = pixelsTall / textureHeight;
var texMatrix = [
xScale, 0, 0, 0,
0, yScale, 0, 0,
0, 0, 1, 0,
0, 0, 0, 1,
];
gl.uniformMatrix4fv(texMatrixLocation, false, texMatrix);
That seems like it would work. Because you're using a matrix you can also easily offset or rotate the texture. See matrix math

How to use fragment shader to draw sphere illusion in OpenGL ES?

I am using this simple function to draw a quad in 3D space that is facing the camera. Now, I want to use a fragment shader to draw the illusion of a sphere inside. But the problem is that I'm new to OpenGL ES, so I don't know how.
void draw_sphere(view_t view) {
set_gl_options(COURSE);
glPushMatrix();
{
glTranslatef(view.plyr_pos.x, view.plyr_pos.y, view.plyr_pos.z - 1.9);
#ifdef __APPLE__
#undef glEnableClientState
#undef glDisableClientState
#undef glVertexPointer
#undef glTexCoordPointer
#undef glDrawArrays
static const GLfloat vertices []=
{
0, 0, 0,
1, 0, 0,
1, 1, 0,
0, 1, 0,
0, 0, 0,
1, 1, 0
};
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, vertices);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 6);
glDisableClientState(GL_VERTEX_ARRAY);
#else
#endif
}
glPopMatrix();
}
More exactly, I want to achieve this:
There might be quite a few things you need to do to achieve this... The sphere drawn in the last image you posted is the result of using lighting, shine and color. In general you need a shader that can process all of that and that works for any shape.
This specific case (and some others that can be expressed mathematically) can be drawn with a single quad, without even needing to push normal coordinates to the program. What you need to do is create the normal in the fragment shader: if you receive the vectors sphereCenter and fragmentPosition and the float sphereRadius, then sphereNormal is a vector such that
sphereNormal = (fragmentPosition - sphereCenter)/sphereRadius; //taking into account all have .z = .0
sphereNormal.z = -sqrt(1.0 - dot(sphereNormal.xy, sphereNormal.xy)); //only if length(sphereNormal.xy) < 1.0, i.e. the fragment lies inside the projected sphere
and real sphere position:
spherePosition = sphereCenter + sphereNormal*sphereRadius;
Now all you need to do is add your lighting. Static or not, it is most common to use an ambient factor, linear and square distance factors, and a shine factor:
color = ambient*materialColor; //apply ambient
vec3 fragmentToLight = lightPosition - spherePosition;
float lightDistance = length(fragmentToLight);
fragmentToLight = normalize(fragmentToLight); //can also just divide by lightDistance
float dotFactor = dot(sphereNormal, fragmentToLight); //the dot factor takes into account the angle between the light and the surface normal
if(dotFactor > .0) {
color += (materialColor*dotFactor)/(1.0 + lightDistance*linearFactor + lightDistance*lightDistance*squareFactor); //apply the dot factor and distance factors (in many cases the distance factors are 0)
}
vec3 shineVector = (sphereNormal*(2.0*dotFactor)) - fragmentToLight; //the light direction mirrored through the normal, i.e. a reflection vector
float shineFactor = dot(shineVector, normalize(cameraPosition-spherePosition)); //represents how strong the light reflection is towards the viewer
if(shineFactor > .0) {
color += materialColor*(shineFactor*shineFactor * shine); //or some other power than 2 (shineFactor*shineFactor)
}
This pattern for creating lights in the fragment shader is one of very many. If you don't like it or you can't make it work, I suggest you find another one on the web; otherwise, I hope you will understand it and be able to play around with it.
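Put together, a GLSL ES fragment shader based on the pieces above might look like the sketch below. All names are illustrative, the distance factors are omitted for brevity, and the sign of sphereNormal.z follows the convention used above (flip it if your setup has +z pointing toward the viewer).
precision mediump float;
uniform vec3  u_sphereCenter;
uniform float u_sphereRadius;
uniform vec3  u_lightPosition;
uniform vec3  u_cameraPosition;
uniform vec3  u_materialColor;
uniform float u_ambient;
uniform float u_shine;
varying vec3  v_fragmentPosition; // interpolated quad position, in the same space as the uniforms
void main() {
    vec3 sphereNormal = (v_fragmentPosition - u_sphereCenter) / u_sphereRadius;
    float r2 = dot(sphereNormal.xy, sphereNormal.xy);
    if (r2 > 1.0) discard;                      // outside the projected sphere
    sphereNormal.z = -sqrt(1.0 - r2);
    vec3 spherePosition = u_sphereCenter + sphereNormal * u_sphereRadius;

    vec3 color = u_ambient * u_materialColor;   // ambient term
    vec3 toLight = normalize(u_lightPosition - spherePosition);
    float dotFactor = dot(sphereNormal, toLight);
    if (dotFactor > 0.0) {
        color += u_materialColor * dotFactor;   // diffuse term
        vec3 shineVector = sphereNormal * (2.0 * dotFactor) - toLight;
        float shineFactor = dot(shineVector, normalize(u_cameraPosition - spherePosition));
        if (shineFactor > 0.0) {
            color += u_materialColor * (shineFactor * shineFactor * u_shine); // specular term
        }
    }
    gl_FragColor = vec4(color, 1.0);
}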

Objects look weird with first-person camera in DirectX

I'm having problems creating a 3D first-person camera in DirectX 11.
I have a camera at (0, 0, -2) looking at (0, 0, 100). There is a box at (0, 0, 0) and the box is rendered correctly. See this image below:
When the position of the box (not the camera) changes, it is rendered correctly. For example, the next image shows the box at (1, 0, 0) and the camera still at (0, 0, -2):
However, as soon as the camera moves left or right, the box should go to the opposite direction, but it looks twisted instead. Here is an example when the camera is at (1, 0, -2) and looking at (1, 0, 100). The box is still at (0, 0, 0):
Here is how I set my camera:
// Set the world transformation matrix.
D3DXMATRIX rotationMatrix; // A matrix to store the rotation information
D3DXMATRIX scalingMatrix; // A matrix to store the scaling information
D3DXMATRIX translationMatrix; // A matrix to store the translation information
D3DXMatrixIdentity(&translationMatrix);
// Make the scene being centered on the camera position.
D3DXMatrixTranslation(&translationMatrix, -camera.GetX(), -camera.GetY(), -camera.GetZ());
m_worldTransformationMatrix = translationMatrix;
// Set the view transformation matrix.
D3DXMatrixIdentity(&m_viewTransformationMatrix);
D3DXVECTOR3 cameraPosition(camera.GetX(), camera.GetY(), camera.GetZ());
// ------------------------
// Compute the lookAt position
// ------------------------
const FLOAT lookAtDistance = 100;
FLOAT lookAtXPosition = camera.GetX() + lookAtDistance * cos((FLOAT)D3DXToRadian(camera.GetXZAngle()));
FLOAT lookAtYPosition = camera.GetY() + lookAtDistance * sin((FLOAT)D3DXToRadian(camera.GetYZAngle()));
FLOAT lookAtZPosition = camera.GetZ() + lookAtDistance * (sin((FLOAT)D3DXToRadian(camera.GetXZAngle())) * cos((FLOAT)D3DXToRadian(camera.GetYZAngle())));
D3DXVECTOR3 lookAtPosition(lookAtXPosition, lookAtYPosition, lookAtZPosition);
D3DXVECTOR3 upDirection(0, 1, 0);
D3DXMatrixLookAtLH(&m_viewTransformationMatrix,
&cameraPosition,
&lookAtPosition,
&upDirection);
RECT windowDimensions = GetWindowDimensions();
FLOAT width = (FLOAT)(windowDimensions.right - windowDimensions.left);
FLOAT height = (FLOAT)(windowDimensions.bottom - windowDimensions.top);
// Set the projection matrix.
D3DXMatrixIdentity(&m_projectionMatrix);
D3DXMatrixPerspectiveFovLH(&m_projectionMatrix,
(FLOAT)(D3DXToRadian(45)), // Horizontal field of view
width / height, // Aspect ratio
1.0f, // Near view-plane
100.0f); // Far view-plane
Here is how the final matrix is set:
D3DXMATRIX finalMatrix = m_worldTransformationMatrix * m_viewTransformationMatrix * m_projectionMatrix;
// Set the new values for the constant buffer
mp_deviceContext->UpdateSubresource(mp_constantBuffer, 0, 0, &finalMatrix, 0, 0);
And finally, here is the vertex shader that uses the constant buffer:
VOut VShader(float4 position : POSITION, float4 color : COLOR, float2 texcoord : TEXCOORD)
{
VOut output;
output.color = color;
output.texcoord = texcoord;
output.position = mul(position, finalMatrix); // Transform the vertex from 3D to 2D
return output;
}
Do you see what I'm doing wrong? If you need more information on my code, feel free to ask: I really want this to work.
Thanks!
The problem is that you are uploading finalMatrix as a row-major matrix, but HLSL packs constant-buffer matrices as column-major by default. The solution is to use D3DXMatrixTranspose before updating the constants, or to declare the matrix as row_major in the HLSL file like this:
cbuffer ConstantBuffer
{
row_major float4x4 finalMatrix;
}
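The transpose alternative is a small change on the C++ side (a sketch; the variable names follow the question's code):
// Transpose the combined matrix before uploading so the default column-major
// packing of the HLSL constant buffer sees the expected layout.
D3DXMATRIX finalMatrix = m_worldTransformationMatrix * m_viewTransformationMatrix * m_projectionMatrix;
D3DXMATRIX transposed;
D3DXMatrixTranspose(&transposed, &finalMatrix);
mp_deviceContext->UpdateSubresource(mp_constantBuffer, 0, 0, &transposed, 0, 0);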

Vertex Coordinates in OpenGL

How come vertex coordinates in OpenGL range from -1 to 1? Is there a way to be able to specify vertex coordinates using the same coordinates as the screen?
So instead of:
float triangleCoords[] = {
// X, Y, Z
-0.5f, -0.25f, 0,
0.5f, -0.25f, 0,
0.0f, 0.559016994f, 0
};
I could have
float triangleCoords[] = {
// X, Y, Z
80, 60, 0,
240, 60, 0,
0, 375, 0
};
Just seems a bit much that I need to get out a calculator just so I can hard code in some vertex coordinates. It's not like I'm gonna be trying to lay things out and think "yeah, that should go right around (0, 0.559016994), that'll look perfect..."
This is what projection matrices are for. They project from your coordinate frame to the normalized coordinates.
You're free to set up vertices in pixels if you want, but then you have to pair them with a proper projection matrix to tell OpenGL how to transform those coordinates to the screen.
Given your example:
float triangleCoords[] = {
// X, Y, Z
80, 60, 0,
240, 60, 0,
0, 375, 0
};
If you pair this with an orthographic projection matrix (similar to one generated by glOrtho(0, width, 0, height, -1, 1)), then your triangle will be drawn at the pixel coordinates described.
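With the fixed-function pipeline that pairing could look like the sketch below (width and height are assumed to be the window size in pixels; in a shader-based pipeline you would build the equivalent orthographic matrix and pass it as a uniform):
// Map x to [0, width] and y to [0, height] so vertices are specified in pixels.
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0.0, (GLdouble)width, 0.0, (GLdouble)height, -1.0, 1.0);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
// A vertex such as (80, 60, 0) now lands on that pixel position.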

OpenGL quads smaller than expected

Running on iPad.
Mapping a texture that is 256x256 onto a quad. I'm trying to render it exactly the same size as the actual image. The quad looks correct (shape is right, texture mapped correctly), but it is only ~75% of the size of the actual .png.
Not sure why.
The code is characterized as follows (excerpts below):
Screen is 768x1024. Window is 768x1024 as well.
glViewport(0, 0, 768, 1024); // aspect ratio 1:1.333
glOrthof(-0.5f, 0.5f, -0.666f, 0.666f, -1.0f, 1.0f); // matching aspect ratio with 0,0 centered
// Sets up an array of values to use as the sprite vertices.
//.25 of 1024 is 256 pixels so the quad (centered on 0,0) spans -0.125,
//-0.125 to 0.125, 0.125 (bottom left corner and upper right corner)
GLfloat spriteVertices[] = {
-0.125f, -0.125f,
0.125f, -0.125f,
-0.125f, 0.125f,
0.125f, 0.125f,
};
// Sets up an array of values for the texture coordinates.
const GLshort spriteTexcoords[] = {
0, 0,
1, 0,
0, 1,
1, 1,
};
followed by the appropriate calls to:
glVertexPointer(2, GL_FLOAT, 0, spriteVertices);
glTexCoordPointer(2, GL_SHORT, 0, spriteTexcoords);
then
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
Why is my sprite smaller than 256x256 when rendered?
Your output is approximately 192x192 because your quad is the wrong size. It's 0.25x0.25, and the "unit length" direction is X, which spans 768 pixels, so 0.25 * 768 = 192. If you switched your glOrthof so that top/bottom were -0.5 and +0.5 (with the appropriate correction to X), it would work.
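A sketch of that correction for the 768x1024 viewport above: with a vertical range of 1.0 mapped to 1024 pixels, the 0.25-unit quad becomes 0.25 * 1024 = 256 pixels tall, and a horizontal range of 0.75 mapped to 768 pixels makes it 0.25 * 768 / 0.75 = 256 pixels wide.
glViewport(0, 0, 768, 1024);
glOrthof(-0.375f, 0.375f, -0.5f, 0.5f, -1.0f, 1.0f);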
