CPP:
https://github.com/walbourn/directx-sdk-samples/blob/master/Direct3D11TutorialsDXUT/Tutorial08/Tutorial08.cpp
HLSL:
https://github.com/walbourn/directx-sdk-samples/blob/master/Direct3D11TutorialsDXUT/Tutorial08/Tutorial08.fx
In the DirectX samples (Tutorial 8), they initialize the vertices with local coordinates (I think — not sure of the correct terminology):
// Create vertex buffer
SimpleVertex vertices[] =
{
{ XMFLOAT3( -1.0f, 1.0f, -1.0f ), XMFLOAT2( 1.0f, 0.0f ) },
{ XMFLOAT3( 1.0f, 1.0f, -1.0f ), XMFLOAT2( 0.0f, 0.0f ) },
{ XMFLOAT3( 1.0f, 1.0f, 1.0f ), XMFLOAT2( 0.0f, 1.0f ) },
{ XMFLOAT3( -1.0f, 1.0f, 1.0f ), XMFLOAT2( 1.0f, 1.0f ) },
{ XMFLOAT3( -1.0f, -1.0f, -1.0f ), XMFLOAT2( 0.0f, 0.0f ) },
{ XMFLOAT3( 1.0f, -1.0f, -1.0f ), XMFLOAT2( 1.0f, 0.0f ) },
{ XMFLOAT3( 1.0f, -1.0f, 1.0f ), XMFLOAT2( 1.0f, 1.0f ) },
{ XMFLOAT3( -1.0f, -1.0f, 1.0f ), XMFLOAT2( 0.0f, 1.0f ) },
{ XMFLOAT3( -1.0f, -1.0f, 1.0f ), XMFLOAT2( 0.0f, 1.0f ) },
{ XMFLOAT3( -1.0f, -1.0f, -1.0f ), XMFLOAT2( 1.0f, 1.0f ) },
{ XMFLOAT3( -1.0f, 1.0f, -1.0f ), XMFLOAT2( 1.0f, 0.0f ) },
{ XMFLOAT3( -1.0f, 1.0f, 1.0f ), XMFLOAT2( 0.0f, 0.0f ) },
{ XMFLOAT3( 1.0f, -1.0f, 1.0f ), XMFLOAT2( 1.0f, 1.0f ) },
{ XMFLOAT3( 1.0f, -1.0f, -1.0f ), XMFLOAT2( 0.0f, 1.0f ) },
{ XMFLOAT3( 1.0f, 1.0f, -1.0f ), XMFLOAT2( 0.0f, 0.0f ) },
{ XMFLOAT3( 1.0f, 1.0f, 1.0f ), XMFLOAT2( 1.0f, 0.0f ) },
{ XMFLOAT3( -1.0f, -1.0f, -1.0f ), XMFLOAT2( 0.0f, 1.0f ) },
{ XMFLOAT3( 1.0f, -1.0f, -1.0f ), XMFLOAT2( 1.0f, 1.0f ) },
{ XMFLOAT3( 1.0f, 1.0f, -1.0f ), XMFLOAT2( 1.0f, 0.0f ) },
{ XMFLOAT3( -1.0f, 1.0f, -1.0f ), XMFLOAT2( 0.0f, 0.0f ) },
{ XMFLOAT3( -1.0f, -1.0f, 1.0f ), XMFLOAT2( 1.0f, 1.0f ) },
{ XMFLOAT3( 1.0f, -1.0f, 1.0f ), XMFLOAT2( 0.0f, 1.0f ) },
{ XMFLOAT3( 1.0f, 1.0f, 1.0f ), XMFLOAT2( 0.0f, 0.0f ) },
{ XMFLOAT3( -1.0f, 1.0f, 1.0f ), XMFLOAT2( 1.0f, 0.0f ) },
};
But then they transform the position in the vertex shader, using:
XMMATRIX mWorldViewProjection = g_World * g_View * g_Projection;
My questions are:
1: If this were a static object (not moving/animating), would you still want to transform it to its world position (not sure of the correct terminology), or would you want to place the object in the correct 3D space/scene when initializing the vertices? Example:
{ XMFLOAT3( CorrectWorldX, CorrectWorldY, CorrectWorldZ ), XMFLOAT2( 1.0f, 0.0f ) },
A perfect example of this would be a randomly generated level/terrain.
2: What is the performance hit if you use the shader to position the object in the correct 3D world space rather than doing it when initializing the vertices?
3: For animation, is this the preferred method for character animation/bone transformations?
4: When building a level (placing objects), I guess you have no choice but to transform via the shader, but then you could also save the final position too.
Hope this is clear, as I am learning DirectX 11 and correct practices!
Static objects should only really be placed in real-world coordinates if there is no reuse; then it's just geometry. Best practice is to pass a transformation matrix in to position/orient the object from local space into world space. For instance, trees, which you would use many times, would each have an assigned matrix for rotation, scale and translation. Not only would you use this technique, but you would also use an "instance" buffer to rapidly feed the matrices to the GPU and avoid stalling the GPU each time you place a model (updating constant buffers per model introduces a slight stall, so using them for many models does become a hit).
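A minimal sketch of what such an instance buffer might look like in D3D11 (InstanceData, numTrees, instanceVB and the device pointer are my own names, not from the tutorial):
#include <d3d11.h>
#include <DirectXMath.h>
#include <vector>
// Hypothetical per-instance payload: one world matrix per placed tree.
struct InstanceData
{
    DirectX::XMFLOAT4X4 world; // scale * rotate * translate, prepped on the CPU
};
std::vector<InstanceData> instances(numTrees);
// ... fill each instances[i].world from the tree's position/orientation ...
D3D11_BUFFER_DESC bd = {};
bd.Usage = D3D11_USAGE_DEFAULT;
bd.ByteWidth = UINT(sizeof(InstanceData) * instances.size());
bd.BindFlags = D3D11_BIND_VERTEX_BUFFER; // bound as a second, per-instance vertex stream
D3D11_SUBRESOURCE_DATA initData = {};
initData.pSysMem = instances.data();
ID3D11Buffer* instanceVB = nullptr;
device->CreateBuffer(&bd, &initData, &instanceVB);
// The matching input layout marks these elements D3D11_INPUT_PER_INSTANCE_DATA,
// and the draw call becomes DrawIndexedInstanced(indexCount, numTrees, 0, 0, 0).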
The performance hit on the GPU is way less than doing it on the CPU. A lot less — matrix math is the GPU's bread and butter. Obviously, the scale/rotate/translate matrix needs to be prepped in your code before loading it into the GPU. The GPU will then rapidly multiply your vertices from local to world space (in the vertex shader, or the domain shader if you are tessellating).
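For example, the CPU-side prep is just a few DirectXMath calls per object (sx/sy/sz, angle and px/py/pz stand in for whatever your scene data holds):
// Build scale * rotate * translate once on the CPU...
XMMATRIX world = XMMatrixScaling(sx, sy, sz)
               * XMMatrixRotationY(angle)
               * XMMatrixTranslation(px, py, pz);
// ...then the vertex shader does the cheap per-vertex part:
//   output.Pos = mul(input.Pos, WorldViewProjection);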
Animation has a few more things involved, such as the weighting and influence of each bone on a vertex. If a vertex is influenced by more than one bone, then blending between them is needed. But yes, the GPU is the best place to do this. A compute shader streaming into an instance buffer that the vertex shader consumes is the more complete and fastest way to calculate this.
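Conceptually, the blend for a vertex influenced by two bones is just a weighted sum of the bone palette matrices before the transform. A CPU-side illustration, assuming boneMatrices, the indices i0/i1, the weights w0/w1 (summing to 1) and localPos are given:
// Linear blend skinning for one vertex with two influences:
// blend the two bone matrices by weight, then transform once.
XMMATRIX skin = boneMatrices[i0] * w0 + boneMatrices[i1] * w1;
XMVECTOR skinnedPos = XMVector3Transform(localPos, skin);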
If you want to calculate once, then the CPU is fine, but it's best to just hold onto the matrix instance for each model and let the GPU do the per-vertex work. I wouldn't bother trying to save the transformed vertex data, as the memory usage will grow quickly and you aren't saving much on performance. In fact, it could be worse: you have to load that data into the GPU each time, and you lose the benefit of caching as well.
good luck
Related
I'm loading an image to the screen using DirectX 11, but the image becomes more saturated (left is the loaded image, right is the original image).
The strange thing is that this happens only when I'm loading large images. The resolution of the image I'm trying to display is 1080 x 675, and my window size is 1280 x 800. Also, although the original image has a high resolution, the displayed image becomes a little pixelated. This is solved if I use a LINEAR filter, but I'm curious why it's happening. I'm fairly new to DirectX and I'm struggling.
Vertex data:
_vertices[0].p = { -1.0f, 1.0f, 0.0f };
//_vertices[0].c = { 1.0f, 1.0f, 1.0f, 1.0f };
_vertices[0].t = { 0.0f, 0.0f };
_vertices[1].p = { 1.0f, 1.0f, 0.0f };
//_vertices[1].c = { 1.0f, 1.0f, 1.0f, 1.0f };
_vertices[1].t = { 1.0f, 0.0f };
_vertices[2].p = { -1.0f, -1.0f, 0.0f };
//_vertices[2].c = { 1.0f, 1.0f, 1.0f, 1.0f };
_vertices[2].t = { 0.0f, 1.0f };
_vertices[3].p = { 1.0f, -1.0f, 0.0f };
//_vertices[3].c = { 1.0f, 1.0f, 1.0f, 1.0f };
_vertices[3].t = { 1.0f, 1.0f };
Vertex layout:
D3D11_INPUT_ELEMENT_DESC elementDesc[] = {
{ "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0, D3D11_INPUT_PER_VERTEX_DATA, 0},
//{ "COLOR", 0, DXGI_FORMAT_R32G32B32A32_FLOAT, 0, 12, D3D11_INPUT_PER_VERTEX_DATA, 0},
{ "TEXTURE", 0, DXGI_FORMAT_R32G32_FLOAT, 0, 28, D3D11_INPUT_PER_VERTEX_DATA, 0},
};
Sampler state:
D3D11_SAMPLER_DESC samplerDesc;
ZeroMemory(&samplerDesc, sizeof(samplerDesc));
samplerDesc.Filter = D3D11_FILTER_MIN_MAG_MIP_POINT;
samplerDesc.AddressU = D3D11_TEXTURE_ADDRESS_WRAP;
samplerDesc.AddressV = D3D11_TEXTURE_ADDRESS_WRAP;
samplerDesc.AddressW = D3D11_TEXTURE_ADDRESS_WRAP;
device->CreateSamplerState(&samplerDesc, &g_defaultSS);
Vertex shader:
struct VS_IN
{
float3 p : POSITION;
//float4 c : COLOR;
float2 t : TEXTURE;
};
struct VS_OUT
{
float4 p : SV_POSITION;
//float4 c : COLOR0;
float2 t : TEXCOORD0;
};
VS_OUT VSMain(VS_IN input)
{
VS_OUT output = (VS_OUT)0;
output.p = float4(input.p, 1.0f);
//output.c = input.c;
output.t = input.t;
return output;
}
Pixel shader:
Texture2D g_texture : register(t0);
SamplerState g_sampleWrap : register(s0);
float4 PSMain(VS_OUT input) : SV_Target
{
float4 vColor = g_texture.Sample(g_sampleWrap, input.t);
return vColor; //* input.c;
}
This is most likely a colorspace issue. If you are rendering using 'linear colors' (which is recommended), then your image is likely in the sRGB colorspace. You can let the texture hardware deal with the gamma by using DXGI_FORMAT_*_SRGB formats for your texture, or you can do it directly in the shader.
See these resources:
Linear-Space Lighting (i.e. Gamma)
Chapter 24. The Importance of Being Linear, GPU Gems 3
Gamma-correct rendering
In the DirectX Tool Kit, you can do various load-time tricks as well. See DDSTextureLoader and WICTextureLoader.
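If you create the texture yourself, the hardware-side fix is one format change; a minimal sketch (the width/height just match the image from the question):
// Tell the sampler hardware the texels are sRGB-encoded; it converts them
// to linear on read (use a *_SRGB render target format to convert back on write).
D3D11_TEXTURE2D_DESC desc = {};
desc.Width = 1080;
desc.Height = 675;
desc.MipLevels = 1;
desc.ArraySize = 1;
desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM_SRGB; // instead of DXGI_FORMAT_R8G8B8A8_UNORM
desc.SampleDesc.Count = 1;
desc.Usage = D3D11_USAGE_DEFAULT;
desc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
// ... then CreateTexture2D / CreateShaderResourceView as before ...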
I am trying to draw a cube with a different color on each face using OpenGL ES 2.0. Right now I can only draw the cube in one color. I know I need to use VertexAttribPointer in this case instead of a uniform, but I probably added it wrongly: the screen shows nothing after I implement my changes. Here is my code — can anybody give me a hand? Thank you so much!
public class MyCube {
private FloatBuffer vertexBuffer;
private ShortBuffer drawListBuffer;
private ShortBuffer[] ArrayDrawListBuffer;
private FloatBuffer colorBuffer;
private int mProgram;
//For Projection and Camera Transformations
private final String vertexShaderCode =
// This matrix member variable provides a hook to manipulate
// the coordinates of the objects that use this vertex shader
"uniform mat4 uMVPMatrix;" +
"attribute vec4 vPosition;" +
"void main() {" +
// the matrix must be included as a modifier of gl_Position
// Note that the uMVPMatrix factor *must be first* in order
// for the matrix multiplication product to be correct.
" gl_Position = uMVPMatrix * vPosition;" +
"}";
// Use to access and set the view transformation
private int mMVPMatrixHandle;
private final String fragmentShaderCode =
"precision mediump float;" +
"uniform vec4 vColor;" +
"void main() {" +
" gl_FragColor = vColor;" +
"}";
// number of coordinates per vertex in this array
static final int COORDS_PER_VERTEX = 3;
float cubeCoords[] = {
-0.5f, 0.5f, 0.5f, // front top left 0
-0.5f, -0.5f, 0.5f, // front bottom left 1
0.5f, -0.5f, 0.5f, // front bottom right 2
0.5f, 0.5f, 0.5f, // front top right 3
-0.5f, 0.5f, -0.5f, // back top left 4
0.5f, 0.5f, -0.5f, // back top right 5
-0.5f, -0.5f, -0.5f, // back bottom left 6
0.5f, -0.5f, -0.5f, // back bottom right 7
};
// Set color with red, green, blue and alpha (opacity) values
float color[] = { 0.63671875f, 0.76953125f, 0.22265625f, 1.0f };
float red[] = { 1.0f, 0.0f, 0.0f, 1.0f };
float blue[] = { 0.0f, 0.0f, 1.0f, 1.0f };
private short drawOrder[] = {
0, 1, 2, 0, 2, 3,//front
0, 4, 5, 0, 5, 3, //Top
0, 1, 6, 0, 6, 4, //left
3, 2, 7, 3, 7 ,5, //right
1, 2, 7, 1, 7, 6, //bottom
4, 6, 7, 4, 7, 5};//back (order to draw vertices)
final float[] cubeColor =
{
// Front face (red)
1.0f, 0.0f, 0.0f, 1.0f,
1.0f, 0.0f, 0.0f, 1.0f,
1.0f, 0.0f, 0.0f, 1.0f,
1.0f, 0.0f, 0.0f, 1.0f,
1.0f, 0.0f, 0.0f, 1.0f,
1.0f, 0.0f, 0.0f, 1.0f,
// Top face (green)
0.0f, 1.0f, 0.0f, 1.0f,
0.0f, 1.0f, 0.0f, 1.0f,
0.0f, 1.0f, 0.0f, 1.0f,
0.0f, 1.0f, 0.0f, 1.0f,
0.0f, 1.0f, 0.0f, 1.0f,
0.0f, 1.0f, 0.0f, 1.0f,
// Left face (blue)
0.0f, 0.0f, 1.0f, 1.0f,
0.0f, 0.0f, 1.0f, 1.0f,
0.0f, 0.0f, 1.0f, 1.0f,
0.0f, 0.0f, 1.0f, 1.0f,
0.0f, 0.0f, 1.0f, 1.0f,
0.0f, 0.0f, 1.0f, 1.0f,
// Right face (yellow)
1.0f, 1.0f, 0.0f, 1.0f,
1.0f, 1.0f, 0.0f, 1.0f,
1.0f, 1.0f, 0.0f, 1.0f,
1.0f, 1.0f, 0.0f, 1.0f,
1.0f, 1.0f, 0.0f, 1.0f,
1.0f, 1.0f, 0.0f, 1.0f,
// Bottom face (cyan)
0.0f, 1.0f, 1.0f, 1.0f,
0.0f, 1.0f, 1.0f, 1.0f,
0.0f, 1.0f, 1.0f, 1.0f,
0.0f, 1.0f, 1.0f, 1.0f,
0.0f, 1.0f, 1.0f, 1.0f,
0.0f, 1.0f, 1.0f, 1.0f,
// Back face (magenta)
1.0f, 0.0f, 1.0f, 1.0f,
1.0f, 0.0f, 1.0f, 1.0f,
1.0f, 0.0f, 1.0f, 1.0f,
1.0f, 0.0f, 1.0f, 1.0f,
1.0f, 0.0f, 1.0f, 1.0f,
1.0f, 0.0f, 1.0f, 1.0f
};
public MyCube() {
// initialize vertex byte buffer for shape coordinates
ByteBuffer bb = ByteBuffer.allocateDirect(
// (# of coordinate values * 4 bytes per float)
cubeCoords.length * 4);
bb.order(ByteOrder.nativeOrder());
vertexBuffer = bb.asFloatBuffer();
vertexBuffer.put(cubeCoords);
vertexBuffer.position(0);
// initialize byte buffer for the draw list
ByteBuffer dlb = ByteBuffer.allocateDirect(
// (# of coordinate values * 2 bytes per short)
drawOrder.length * 2);
dlb.order(ByteOrder.nativeOrder());
drawListBuffer = dlb.asShortBuffer();
drawListBuffer.put(drawOrder);
drawListBuffer.position(0);
// initialize byte buffer for the color list
ByteBuffer cb = ByteBuffer.allocateDirect(
// (# of coordinate values * 2 bytes per short)
cubeColor.length * 4);
cb.order(ByteOrder.nativeOrder());
colorBuffer = cb.asFloatBuffer();
colorBuffer.put(cubeColor);
colorBuffer.position(0);
int vertexShader = MyRenderer.loadShader(GLES20.GL_VERTEX_SHADER,
vertexShaderCode);
int fragmentShader = MyRenderer.loadShader(GLES20.GL_FRAGMENT_SHADER,
fragmentShaderCode);
// create empty OpenGL ES Program
mProgram = GLES20.glCreateProgram();
// add the vertex shader to program
GLES20.glAttachShader(mProgram, vertexShader);
// add the fragment shader to program
GLES20.glAttachShader(mProgram, fragmentShader);
// creates OpenGL ES program executables
GLES20.glLinkProgram(mProgram);
}
private int mPositionHandle;
private int mColorHandle;
private final int vertexCount = cubeCoords.length / COORDS_PER_VERTEX;
private final int vertexStride = COORDS_PER_VERTEX * 4; // 4 bytes per vertex
public void draw(float[] mvpMatrix) { // pass in the calculated transformation matrix
// Add program to OpenGL ES environment
GLES20.glUseProgram(mProgram);
// get handle to vertex shader's vPosition member
mPositionHandle = GLES20.glGetAttribLocation(mProgram, "vPosition");
// get handle to fragment shader's vColor member
mColorHandle = GLES20.glGetUniformLocation(mProgram, "vColor");
// Enable a handle to the cube vertices
GLES20.glEnableVertexAttribArray(mPositionHandle);
// Prepare the cube coordinate data
GLES20.glVertexAttribPointer(mPositionHandle, COORDS_PER_VERTEX,
GLES20.GL_FLOAT, false,
vertexStride, vertexBuffer);
// Set color for drawing the triangle
//mColorHandle = GLES20.glGetUniformLocation(mProgram, "vColor");
// Enable a handle to the cube colors
GLES20.glEnableVertexAttribArray(mColorHandle);
// Prepare the cube color data
GLES20.glVertexAttribPointer(mColorHandle, 4, GLES20.GL_FLOAT, false, 16, colorBuffer);
// Set the color for each of the faces
//GLES20.glUniform4fv(mColorHandle, 1, blue, 0);
//***When I add this line of code above, it can show a cube totally in blue.***
// get handle to shape's transformation matrix
mMVPMatrixHandle = GLES20.glGetUniformLocation(mProgram, "uMVPMatrix");
// Pass the projection and view transformation to the shader
GLES20.glUniformMatrix4fv(mMVPMatrixHandle, 1, false, mvpMatrix, 0);
// Draw the cube
GLES20.glDrawElements(GLES20.GL_TRIANGLES, drawOrder.length, GLES20.GL_UNSIGNED_SHORT, drawListBuffer);
// Disable vertex array
GLES20.glDisableVertexAttribArray(mPositionHandle);
GLES20.glDisableVertexAttribArray(mColorHandle);
GLES20.glDisableVertexAttribArray(mMVPMatrixHandle);
}
}
Remove the uniform variable vColor declaration from the fragment shader. Define a new per-vertex attribute input variable in the vertex shader, write that value to a varying variable output by the vertex shader, and add the same varying as an input read by the fragment shader.
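A minimal sketch of the modified shaders (the aColor/vColorVarying names are my own). On the Java side you would also fetch the color location with glGetAttribLocation(mProgram, "aColor") rather than glGetUniformLocation, and drop the glDisableVertexAttribArray(mMVPMatrixHandle) call, since uMVPMatrix is a uniform, not an attribute:
// Vertex shader: read the per-vertex color and pass it through.
uniform mat4 uMVPMatrix;
attribute vec4 vPosition;
attribute vec4 aColor;      // new per-vertex attribute
varying vec4 vColorVarying; // interpolated across each triangle
void main() {
    gl_Position = uMVPMatrix * vPosition;
    vColorVarying = aColor;
}
// Fragment shader: the uniform is gone, only the varying remains.
precision mediump float;
varying vec4 vColorVarying;
void main() {
    gl_FragColor = vColorVarying;
}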
Hey guys, I have a problem with std::vector and I need your help.
I am currently programming a rendering engine with the new Vulkan API, and I want to support different vertex layouts for different meshes.
The problem is that std::vector::data() doesn't return the raw data I need. Here are my vertex structs:
struct Vertex
{
};
struct VertexColor : public Vertex
{
public:
VertexColor(Vec3f pos, Vec3f col) : position(pos), color(col) {}
Vec3f position;
Vec3f color;
};
This is what I currently have, and it works:
std::vector<VertexColor> cubeVertexBuffer = {
VertexColor{ Vec3f( -1.0f, -1.0f, 1.0f ), Vec3f( 0.0f, 0.0f, 0.0f ) },
VertexColor{ Vec3f( 1.0f, -1.0f, 1.0f ), Vec3f( 1.0f, 0.0f, 0.0f ) },
VertexColor{ Vec3f( 1.0f, 1.0f, 1.0f ), Vec3f( 1.0f, 1.0f, 0.0f ) },
VertexColor{ Vec3f( -1.0f, 1.0f, 1.0f ), Vec3f( 0.0f, 1.0f, 0.0f ) },
VertexColor{ Vec3f( -1.0f, -1.0f, -1.0f), Vec3f( 0.0f, 0.0f, 1.0f ) },
VertexColor{ Vec3f( 1.0f, -1.0f, -1.0f ), Vec3f( 0.0f, 1.0f, 0.0f ) },
VertexColor{ Vec3f( 1.0f, 1.0f, -1.0f ), Vec3f( 1.0f, 1.0f, 0.0f ) },
VertexColor{ Vec3f( -1.0f, 1.0f, -1.0f ), Vec3f( 1.0f, 0.0f, 0.0f ) }
};
uint32_t size = 8 * sizeof(VertexColor);
Vertex* vertices = (Vertex*)malloc(size);
memcpy(vertices, cubeVertexBuffer.data(), size);
std::vector<uint32_t> cubeIndexBuffer = { 1,2,0, 2,3,0,
0,3,4, 3,7,4,
5,1,4, 1,0,4,
2,6,3, 6,7,3,
5,6,1, 6,2,1,
6,5,7, 5,4,7 };
Cube::cubeMesh = new Mesh(vertices, size, cubeIndexBuffer);
What I want:
std::vector<Vertex> cubeVertexBuffer = {
VertexColor{ Vec3f( -1.0f, -1.0f, 1.0f ), Vec3f( 0.0f, 0.0f, 0.0f ) },
VertexColor{ Vec3f( 1.0f, -1.0f, 1.0f ), Vec3f( 1.0f, 0.0f, 0.0f ) },
VertexColor{ Vec3f( 1.0f, 1.0f, 1.0f ), Vec3f( 1.0f, 1.0f, 0.0f ) },
VertexColor{ Vec3f( -1.0f, 1.0f, 1.0f ), Vec3f( 0.0f, 1.0f, 0.0f ) },
VertexColor{ Vec3f( -1.0f, -1.0f, -1.0f), Vec3f( 0.0f, 0.0f, 1.0f ) },
VertexColor{ Vec3f( 1.0f, -1.0f, -1.0f ), Vec3f( 0.0f, 1.0f, 0.0f ) },
VertexColor{ Vec3f( 1.0f, 1.0f, -1.0f ), Vec3f( 1.0f, 1.0f, 0.0f ) },
VertexColor{ Vec3f( -1.0f, 1.0f, -1.0f ), Vec3f( 1.0f, 0.0f, 0.0f ) }
};
std::vector<uint32_t> cubeIndexBuffer = { 1,2,0, 2,3,0,
0,3,4, 3,7,4,
5,1,4, 1,0,4,
2,6,3, 6,7,3,
5,6,1, 6,2,1,
6,5,7, 5,4,7 };
Cube::cubeMesh = new Mesh(cubeVertexBuffer, cubeIndexBuffer);
As I mentioned, I need the whole vertex data in a raw format, contiguous in memory, for mapping to the GPU, but std::vector's .data() function doesn't return the "real data". I don't know how to use inheritance with std::vector and still get the subclass's raw data out of it.
Hope you can help me!
Thanks
EDIT: I checked the memory, and the std::vector that I put my VertexColor(...) data into doesn't set any data in memory at all. Is it because the Vertex struct does not have any members?
What you are experiencing is called slicing. When you declare a vector<Vertex>, it holds instances of Vertex; each "cell" is only large enough for the data in Vertex. Since Vertex is a base class of VertexColor, it is possible to assign a VertexColor object to a Vertex, but this will only copy the data members of Vertex. Vertex has no members, so what do you expect the content of a vector<Vertex> to be?
Inheritance is the wrong design approach in this case. From what I see, Mesh should be a class template (template<typename V> class Mesh) to support different vertex types, restricted to standard-layout types (e.g. via enable_if<is_standard_layout<V>::value, V>::type or a static_assert).
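A minimal sketch of that template, assuming the Mesh interface from the question (the static_assert is my own way of expressing the standard-layout restriction):
#include <cstddef>
#include <cstdint>
#include <type_traits>
#include <utility>
#include <vector>
template <typename V>
class Mesh
{
    static_assert(std::is_standard_layout<V>::value,
                  "vertex type must be standard-layout for GPU upload");
public:
    Mesh(std::vector<V> vertices, std::vector<uint32_t> indices)
        : m_vertices(std::move(vertices)), m_indices(std::move(indices)) {}
    // .data() on a vector of concrete (non-sliced) vertices already is the
    // raw, contiguous blob the GPU mapping code needs.
    const void* rawVertexData() const { return m_vertices.data(); }
    std::size_t rawVertexBytes() const { return m_vertices.size() * sizeof(V); }
private:
    std::vector<V> m_vertices;
    std::vector<uint32_t> m_indices;
};
// Usage stays strongly typed, with no base class and no slicing:
// auto* cubeMesh = new Mesh<VertexColor>(cubeVertexBuffer, cubeIndexBuffer);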
Firstly, never inherit from std::vector. It is not recommended and probably bad practice.
Also, have a look at Vec3f. Does it have any virtual methods or a virtual destructor? You might be sitting with more data in your classes than just the members you defined. The structures you use should be plain old C++ objects without any polymorphic properties (i.e. plain C-style structures).
Also, check your member byte alignment. You can remove all padding from structures with:
#pragma pack(push)
#pragma pack(1)
struct ...
{
...
}
#pragma pack(pop)
Let me know if you get it going.
Additionally, vector<Vertex> and vector<VertexColor> are not comparable, since the individual elements have different sizes. In my experience, 3D APIs expect the data in a very specific format, and you can't do it generically if the API does not already provide the format for you.
Slicing can also occur, as Jens suggests.
Here is how the cube looks:
I am using a single color for all vertices of each face before drawing the cube, but the cube does not turn out as I wished. I have enabled the depth test (GL_DEPTH_TEST), and I also clear GL_COLOR_BUFFER_BIT and GL_DEPTH_BUFFER_BIT before drawing.
Here is the code:
https://github.com/ufo22940268/Android_RollingBall/blob/master/src/hongbosb/rollingball/model/GLEnvironmentEntity.java
The decimal separator here:
static public final float VERTEX_COLOR_ARRAY[] = {
1.0f, 0,0f, 0.0f, 1.0f,
1.0f, 0,0f, 0.0f, 1.0f,
1.0f, 0,0f, 0.0f, 1.0f,
1.0f, 0,0f, 0.0f, 1.0f,
0.0f, 1,0f, 0.0f, 1.0f,
0.0f, 1,0f, 0.0f, 1.0f,
0.0f, 1,0f, 0.0f, 1.0f,
0.0f, 1,0f, 0.0f, 1.0f,
0.0f, 0,0f, 1.0f, 1.0f,
0.0f, 0,0f, 1.0f, 1.0f,
0.0f, 0,0f, 1.0f, 1.0f,
0.0f, 0,0f, 1.0f, 1.0f,
1.0f, 0,0f, 0.0f, 1.0f,
1.0f, 0,0f, 0.0f, 1.0f,
1.0f, 0,0f, 0.0f, 1.0f,
1.0f, 0,0f, 0.0f, 1.0f,
};
should be a period (.), not a comma. Right now those entries are being treated as separate array elements, throwing your indices out of whack.
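For example, the first row should read:
1.0f, 0.0f, 0.0f, 1.0f,
which is four floats (opaque red), not the five elements that "1.0f, 0,0f, 0.0f, 1.0f," currently produces.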
I am trying to understand perspective projection in OpenGL.
What I am trying to do is render two identical triangles at different z coordinates, so I assume they should appear at different sizes. Here is my code:
CUSTOMVERTEX Vertices[] =
{
{ 0.5f, 1.0f, 0.5f, 1.0f, 0.0f, 0.0f, 1.0f }, // x, y, z, color
{ 0.0f, 0.0f, 0.5f, 0.0f, 1.0f, 0.0f, 1.0f },
{ 1.0f, 0.0f, 0.5f, 0.0f, 0.0f, 1.0f, 1.0f },
};
and for drawing
glDrawArrays(GL_TRIANGLES,0, 3);
glTranslatef(0.0f,-1.0f,-1.5f);
glDrawArrays(GL_TRIANGLES,0, 3);
and here is how I init some attributes
glShadeModel(GL_SMOOTH);
glClearDepthf(1.0f);
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LEQUAL);
glEnable(GL_CULL_FACE);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glFrustumf(-1.0f, 1.0f, -1.0f, 1.0f, 0.0f, 100.0f);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
but the triangles appear at the same size, just at different locations matching the translation.
Can someone please explain this to me?
You cannot use 0.0 for the perspective projection's near-Z value. It must be a positive number greater than zero, preferably on the order of 1.0 or so.
As Nicol said, you should use a number greater than 0 for the frustum construction. I strongly suggest you read this article to understand why that is.
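For reference, the same projection with a valid near plane — the only change from the question's glFrustumf call is the fifth argument:
glFrustumf(-1.0f, 1.0f, -1.0f, 1.0f, 1.0f, 100.0f); // near = 1.0f, not 0.0f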