How do I change the triangle into a square in my D3D11 app? (I don't have much experience, so I'm not sure how to do it.) I got the code from this http://www.directxtutorial.com/Lesson.aspx?lessonid=11-4-5 tutorial to render a green triangle, but I need to make it a square.
Instead of drawing a single triangle, you draw TWO triangles that share two vertices. The key challenge is making sure you specify them with the correct winding order for your rendering setup.
// Single multi-colored triangle
static const Vertex s_vertexData[3] =
{
{ { 0.0f, 0.5f, 0.5f, 1.0f },{ 1.0f, 0.0f, 0.0f, 1.0f } }, // Top / Red
{ { 0.5f, -0.5f, 0.5f, 1.0f },{ 0.0f, 1.0f, 0.0f, 1.0f } }, // Right / Green
{ { -0.5f, -0.5f, 0.5f, 1.0f },{ 0.0f, 0.0f, 1.0f, 1.0f } } // Left / Blue
};
...
context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
context->Draw(3, 0);
SimpleTriangle sample on GitHub
You can draw the two triangles in two different ways.
The FIRST and most straightforward way is to use an index buffer (IB), which also makes it easier to support arbitrary cull settings.
// Four vertices (position + texture coordinate) forming a quad from two triangles
static const Vertex s_vertexData[4] =
{
{ { -0.5f, -0.5f, 0.5f, 1.0f }, { 0.f, 1.f } },
{ { 0.5f, -0.5f, 0.5f, 1.0f }, { 1.f, 1.f } },
{ { 0.5f, 0.5f, 0.5f, 1.0f }, { 1.f, 0.f } },
{ { -0.5f, 0.5f, 0.5f, 1.0f }, { 0.f, 0.f } },
};
static const uint16_t s_indexData[6] =
{
3,1,0,
2,1,3,
};
...
context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
context->DrawIndexed(6, 0, 0);
SimpleTexture sample on GitHub
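The snippet above only shows the index data and the draw call, so here is a minimal sketch of creating and binding a static index buffer for those indices (device, context, and indexBuffer are placeholder names, and error handling is omitted):
// Create an immutable index buffer holding the 6 indices, then bind it as 16-bit indices
D3D11_BUFFER_DESC ibDesc = {};
ibDesc.ByteWidth = sizeof(s_indexData);          // 6 * sizeof(uint16_t)
ibDesc.Usage = D3D11_USAGE_IMMUTABLE;
ibDesc.BindFlags = D3D11_BIND_INDEX_BUFFER;
D3D11_SUBRESOURCE_DATA ibData = {};
ibData.pSysMem = s_indexData;
ID3D11Buffer* indexBuffer = nullptr;
HRESULT hr = device->CreateBuffer(&ibDesc, &ibData, &indexBuffer);
// ... check hr ...
context->IASetIndexBuffer(indexBuffer, DXGI_FORMAT_R16_UINT, 0);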
The SECOND method is a bit trickier: use just 4 vertices without an index buffer and draw them as a triangle strip. The catch is that this constrains your winding-order choices quite a bit. If you turn off backface culling, it's really simple using the same 4 vertices:
// Create rsState with D3D11_RASTERIZER_DESC.CullMode
// set to D3D11_CULL_NONE
...
context->RSSetState(rsState);
context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLESTRIP);
context->Draw(4, 0);
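A minimal sketch of the rasterizer state that comment refers to (device and rsState are placeholder names):
// Rasterizer state with backface culling disabled so the strip's winding order does not matter
D3D11_RASTERIZER_DESC rsDesc = {};
rsDesc.FillMode = D3D11_FILL_SOLID;
rsDesc.CullMode = D3D11_CULL_NONE;
rsDesc.DepthClipEnable = TRUE;
ID3D11RasterizerState* rsState = nullptr;
HRESULT hr = device->CreateRasterizerState(&rsDesc, &rsState);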
As you are new to DirectX, you may want to take a look at DirectX Tool Kit.
PS: For the special case of wanting a quad that fills an entire render viewport (such as a full-screen quad), with Direct3D Hardware Feature Level 10.0 or better hardware, you can skip using an IB or VB entirely and just generate the quad inside the vertex shader itself. See GitHub.
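On the C++ side, such a draw can look like the sketch below; the vertex shader (not shown) derives the clip-space positions and UVs from SV_VertexID, and fullScreenVS / fullScreenPS are placeholder shader objects, not names from the linked sample:
// No vertex or index buffer is bound; the shader generates the quad corners itself
context->IASetInputLayout(nullptr);   // no per-vertex data to describe
context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLESTRIP);
context->VSSetShader(fullScreenVS, nullptr, 0);
context->PSSetShader(fullScreenPS, nullptr, 0);
context->Draw(4, 0);                  // four strip corners computed from SV_VertexID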
I'm loading an image to the screen using DirectX 11, but the image becomes more saturated. Left is the loaded image and right is the original image.
The strange thing is that this happens only when I'm loading large images. The resolution of the image I'm trying to display is 1080 x 675 and my window size is 1280 x 800. Also, although the original image has a high resolution, it becomes a little pixelated. This is solved if I use a LINEAR filter, but I'm curious why it happens. I'm fairly new to DirectX and I'm struggling.
Vertex data:
_vertices[0].p = { -1.0f, 1.0f, 0.0f };
//_vertices[0].c = { 1.0f, 1.0f, 1.0f, 1.0f };
_vertices[0].t = { 0.0f, 0.0f };
_vertices[1].p = { 1.0f, 1.0f, 0.0f };
//_vertices[1].c = { 1.0f, 1.0f, 1.0f, 1.0f };
_vertices[1].t = { 1.0f, 0.0f };
_vertices[2].p = { -1.0f, -1.0f, 0.0f };
//_vertices[2].c = { 1.0f, 1.0f, 1.0f, 1.0f };
_vertices[2].t = { 0.0f, 1.0f };
_vertices[3].p = { 1.0f, -1.0f, 0.0f };
//_vertices[3].c = { 1.0f, 1.0f, 1.0f, 1.0f };
_vertices[3].t = { 1.0f, 1.0f };
Vertex layout:
D3D11_INPUT_ELEMENT_DESC elementDesc[] = {
{ "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0, D3D11_INPUT_PER_VERTEX_DATA, 0},
//{ "COLOR", 0, DXGI_FORMAT_R32G32B32A32_FLOAT, 0, 12, D3D11_INPUT_PER_VERTEX_DATA, 0},
{ "TEXTURE", 0, DXGI_FORMAT_R32G32_FLOAT, 0, 28, D3D11_INPUT_PER_VERTEX_DATA, 0},
};
Sampler state:
D3D11_SAMPLER_DESC samplerDesc;
ZeroMemory(&samplerDesc, sizeof(samplerDesc));
samplerDesc.Filter = D3D11_FILTER_MIN_MAG_MIP_POINT;
samplerDesc.AddressU = D3D11_TEXTURE_ADDRESS_WRAP;
samplerDesc.AddressV = D3D11_TEXTURE_ADDRESS_WRAP;
samplerDesc.AddressW = D3D11_TEXTURE_ADDRESS_WRAP;
device->CreateSamplerState(&samplerDesc, &g_defaultSS);
Vertex shader:
struct VS_IN
{
float3 p : POSITION;
//float4 c : COLOR;
float2 t : TEXTURE;
};
struct VS_OUT
{
float4 p : SV_POSITION;
//float4 c : COLOR0;
float2 t : TEXCOORD0;
};
VS_OUT VSMain(VS_IN input)
{
VS_OUT output = (VS_OUT)0;
output.p = float4(input.p, 1.0f);
//output.c = input.c;
output.t = input.t;
return output;
}
Pixel shader:
Texture2D g_texture : register(t0);
SamplerState g_sampleWrap : register(s0);
float4 PSMain(VS_OUT input) : SV_Target
{
float4 vColor = g_texture.Sample(g_sampleWrap, input.t);
return vColor; //* input.c;
}
This is most likely a colorspace issue. If you are rendering with 'linear colors' (which is recommended), then your image is likely in the sRGB colorspace. You can let the texture hardware deal with the gamma conversion by using a DXGI_FORMAT_*_SRGB format for your texture, or you can do the conversion directly in the shader.
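For example, a minimal sketch of creating the texture with an sRGB format so the hardware converts sRGB to linear when the pixel shader samples it (texDesc, initData, pixels, and device are placeholder names, not from your code):
// Describe an 8-bit RGBA texture whose data is sRGB-encoded
D3D11_TEXTURE2D_DESC texDesc = {};
texDesc.Width = 1080;
texDesc.Height = 675;
texDesc.MipLevels = 1;
texDesc.ArraySize = 1;
texDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM_SRGB;   // instead of DXGI_FORMAT_R8G8B8A8_UNORM
texDesc.SampleDesc.Count = 1;
texDesc.Usage = D3D11_USAGE_DEFAULT;
texDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
D3D11_SUBRESOURCE_DATA initData = {};
initData.pSysMem = pixels;                          // raw 32-bit RGBA image data
initData.SysMemPitch = 1080 * 4;
ID3D11Texture2D* texture = nullptr;
HRESULT hr = device->CreateTexture2D(&texDesc, &initData, &texture);
If you go this route, keep in mind that the render target generally also needs to be an sRGB format (or you convert back before output) for the final image to be gamma-correct.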
See these resources:
Linear-Space Lighting (i.e. Gamma)
Chapter 24. The Importance of Being Linear, GPU Gems 3
Gamma-correct rendering
In the DirectX Tool Kit, you can do various load-time tricks as well. See DDSTextureLoader and WICTextureLoader.
I am trying to make my 3D object move. Using the default VS2022 template, I managed to import a model from a .ply file and feed it to the vertex shader / pixel shader. The object displays on screen, but I have no idea how to move it. I know I could edit all the vertices, but I think there is another way of doing it using a matrix.
I was trying to do
XMStoreFloat4x4(&m_constantBufferData.model, XMMatrixTranslation(0.5f, 0.0f, 0.0f));
But it shredded the textures.
The upper image is after the move.
Can someone help me and explain how to do this?
Camera Initialize:
void Sample3DSceneRenderer::CreateWindowSizeDependentResources()
{
Size outputSize = m_deviceResources->GetOutputSize();
float aspectRatio = outputSize.Width / outputSize.Height;
float fovAngleY = 70.0f * XM_PI / 180.0f;
if (aspectRatio < 1.0f)
{
fovAngleY *= 2.0f;
}
XMMATRIX perspectiveMatrix = XMMatrixPerspectiveFovRH(
fovAngleY,
aspectRatio,
0.01f,
100.0f
);
XMFLOAT4X4 orientation = m_deviceResources->GetOrientationTransform3D();
XMMATRIX orientationMatrix = XMLoadFloat4x4(&orientation);
XMStoreFloat4x4(
&m_constantBufferData.projection,
XMMatrixTranspose(perspectiveMatrix * orientationMatrix)
);
static const XMVECTORF32 eye = { 0.0f, 0.7f, 8.0f, 0.0f };
static const XMVECTORF32 at = { 0.0f, -0.1f, 0.0f, 0.0f };
static const XMVECTORF32 up = { 0.0f, 1.0f, 0.0f, 0.0f };
XMStoreFloat4x4(&m_constantBufferData.view, XMMatrixTranspose(XMMatrixLookAtRH(eye, at, up)));
}
void Sample3DSceneRenderer::Move()
{
XMStoreFloat4x4(&m_constantBufferData.model, XMMatrixTranslation(0.5f, 0.0f, 0.0f));
}
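One detail worth checking, judging only from the template code above: the view and projection matrices are stored with XMMatrixTranspose before being written to the constant buffer, so the model matrix presumably needs the same treatment. A hedged sketch of Move() using that convention (not a confirmed fix for the texture artifacts):
void Sample3DSceneRenderer::Move()
{
    // Match the transpose convention used for the view and projection matrices above
    XMStoreFloat4x4(
        &m_constantBufferData.model,
        XMMatrixTranspose(XMMatrixTranslation(0.5f, 0.0f, 0.0f)));
}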
I am trying to draw a cube with a different color on each face using OpenGL ES 2.0. Right now I can only draw the cube in one color. I know I need to use glVertexAttribPointer in this case instead of a uniform, but I probably added it wrongly: the screen shows nothing after my changes. Here is my code, can anybody give me a hand? Thank you so much!
public class MyCube {
private FloatBuffer vertexBuffer;
private ShortBuffer drawListBuffer;
private ShortBuffer[] ArrayDrawListBuffer;
private FloatBuffer colorBuffer;
private int mProgram;
//For Projection and Camera Transformations
private final String vertexShaderCode =
// This matrix member variable provides a hook to manipulate
// the coordinates of the objects that use this vertex shader
"uniform mat4 uMVPMatrix;" +
"attribute vec4 vPosition;" +
"void main() {" +
// the matrix must be included as a modifier of gl_Position
// Note that the uMVPMatrix factor *must be first* in order
// for the matrix multiplication product to be correct.
" gl_Position = uMVPMatrix * vPosition;" +
"}";
// Use to access and set the view transformation
private int mMVPMatrixHandle;
private final String fragmentShaderCode =
"precision mediump float;" +
"uniform vec4 vColor;" +
"void main() {" +
" gl_FragColor = vColor;" +
"}";
// number of coordinates per vertex in this array
static final int COORDS_PER_VERTEX = 3;
float cubeCoords[] = {
-0.5f, 0.5f, 0.5f, // front top left 0
-0.5f, -0.5f, 0.5f, // front bottom left 1
0.5f, -0.5f, 0.5f, // front bottom right 2
0.5f, 0.5f, 0.5f, // front top right 3
-0.5f, 0.5f, -0.5f, // back top left 4
0.5f, 0.5f, -0.5f, // back top right 5
-0.5f, -0.5f, -0.5f, // back bottom left 6
0.5f, -0.5f, -0.5f, // back bottom right 7
};
// Set color with red, green, blue and alpha (opacity) values
float color[] = { 0.63671875f, 0.76953125f, 0.22265625f, 1.0f };
float red[] = { 1.0f, 0.0f, 0.0f, 1.0f };
float blue[] = { 0.0f, 0.0f, 1.0f, 1.0f };
private short drawOrder[] = {
0, 1, 2, 0, 2, 3,//front
0, 4, 5, 0, 5, 3, //Top
0, 1, 6, 0, 6, 4, //left
3, 2, 7, 3, 7 ,5, //right
1, 2, 7, 1, 7, 6, //bottom
4, 6, 7, 4, 7, 5};//back (order to draw vertices)
final float[] cubeColor =
{
// Front face (red)
1.0f, 0.0f, 0.0f, 1.0f,
1.0f, 0.0f, 0.0f, 1.0f,
1.0f, 0.0f, 0.0f, 1.0f,
1.0f, 0.0f, 0.0f, 1.0f,
1.0f, 0.0f, 0.0f, 1.0f,
1.0f, 0.0f, 0.0f, 1.0f,
// Top face (green)
0.0f, 1.0f, 0.0f, 1.0f,
0.0f, 1.0f, 0.0f, 1.0f,
0.0f, 1.0f, 0.0f, 1.0f,
0.0f, 1.0f, 0.0f, 1.0f,
0.0f, 1.0f, 0.0f, 1.0f,
0.0f, 1.0f, 0.0f, 1.0f,
// Left face (blue)
0.0f, 0.0f, 1.0f, 1.0f,
0.0f, 0.0f, 1.0f, 1.0f,
0.0f, 0.0f, 1.0f, 1.0f,
0.0f, 0.0f, 1.0f, 1.0f,
0.0f, 0.0f, 1.0f, 1.0f,
0.0f, 0.0f, 1.0f, 1.0f,
// Right face (yellow)
1.0f, 1.0f, 0.0f, 1.0f,
1.0f, 1.0f, 0.0f, 1.0f,
1.0f, 1.0f, 0.0f, 1.0f,
1.0f, 1.0f, 0.0f, 1.0f,
1.0f, 1.0f, 0.0f, 1.0f,
1.0f, 1.0f, 0.0f, 1.0f,
// Bottom face (cyan)
0.0f, 1.0f, 1.0f, 1.0f,
0.0f, 1.0f, 1.0f, 1.0f,
0.0f, 1.0f, 1.0f, 1.0f,
0.0f, 1.0f, 1.0f, 1.0f,
0.0f, 1.0f, 1.0f, 1.0f,
0.0f, 1.0f, 1.0f, 1.0f,
// Back face (magenta)
1.0f, 0.0f, 1.0f, 1.0f,
1.0f, 0.0f, 1.0f, 1.0f,
1.0f, 0.0f, 1.0f, 1.0f,
1.0f, 0.0f, 1.0f, 1.0f,
1.0f, 0.0f, 1.0f, 1.0f,
1.0f, 0.0f, 1.0f, 1.0f
};
public MyCube() {
// initialize vertex byte buffer for shape coordinates
ByteBuffer bb = ByteBuffer.allocateDirect(
// (# of coordinate values * 4 bytes per float)
cubeCoords.length * 4);
bb.order(ByteOrder.nativeOrder());
vertexBuffer = bb.asFloatBuffer();
vertexBuffer.put(cubeCoords);
vertexBuffer.position(0);
// initialize byte buffer for the draw list
ByteBuffer dlb = ByteBuffer.allocateDirect(
// (# of coordinate values * 2 bytes per short)
drawOrder.length * 2);
dlb.order(ByteOrder.nativeOrder());
drawListBuffer = dlb.asShortBuffer();
drawListBuffer.put(drawOrder);
drawListBuffer.position(0);
// initialize byte buffer for the color list
ByteBuffer cb = ByteBuffer.allocateDirect(
// (# of color values * 4 bytes per float)
cubeColor.length * 4);
cb.order(ByteOrder.nativeOrder());
colorBuffer = cb.asFloatBuffer();
colorBuffer.put(cubeColor);
colorBuffer.position(0);
int vertexShader = MyRenderer.loadShader(GLES20.GL_VERTEX_SHADER,
vertexShaderCode);
int fragmentShader = MyRenderer.loadShader(GLES20.GL_FRAGMENT_SHADER,
fragmentShaderCode);
// create empty OpenGL ES Program
mProgram = GLES20.glCreateProgram();
// add the vertex shader to program
GLES20.glAttachShader(mProgram, vertexShader);
// add the fragment shader to program
GLES20.glAttachShader(mProgram, fragmentShader);
// creates OpenGL ES program executables
GLES20.glLinkProgram(mProgram);
}
private int mPositionHandle;
private int mColorHandle;
private final int vertexCount = cubeCoords.length / COORDS_PER_VERTEX;
private final int vertexStride = COORDS_PER_VERTEX * 4; // 4 bytes per vertex
public void draw(float[] mvpMatrix) { // pass in the calculated transformation matrix
// Add program to OpenGL ES environment
GLES20.glUseProgram(mProgram);
// get handle to vertex shader's vPosition member
mPositionHandle = GLES20.glGetAttribLocation(mProgram, "vPosition");
// get handle to fragment shader's vColor member
mColorHandle = GLES20.glGetUniformLocation(mProgram, "vColor");
// Enable a handle to the cube vertices
GLES20.glEnableVertexAttribArray(mPositionHandle);
// Prepare the cube coordinate data
GLES20.glVertexAttribPointer(mPositionHandle, COORDS_PER_VERTEX,
GLES20.GL_FLOAT, false,
vertexStride, vertexBuffer);
// Set color for drawing the triangle
//mColorHandle = GLES20.glGetUniformLocation(mProgram, "vColor");
// Enable a handle to the cube colors
GLES20.glEnableVertexAttribArray(mColorHandle);
// Prepare the cube color data
GLES20.glVertexAttribPointer(mColorHandle, 4, GLES20.GL_FLOAT, false, 16, colorBuffer);
// Set the color for each of the faces
//GLES20.glUniform4fv(mColorHandle, 1, blue, 0);
//***When I add this line of code above, it can show a cube totally in blue.***
// get handle to shape's transformation matrix
mMVPMatrixHandle = GLES20.glGetUniformLocation(mProgram, "uMVPMatrix");
// Pass the projection and view transformation to the shader
GLES20.glUniformMatrix4fv(mMVPMatrixHandle, 1, false, mvpMatrix, 0);
// Draw the cube
GLES20.glDrawElements(GLES20.GL_TRIANGLES, drawOrder.length, GLES20.GL_UNSIGNED_SHORT, drawListBuffer);
// Disable vertex array
GLES20.glDisableVertexAttribArray(mPositionHandle);
GLES20.glDisableVertexAttribArray(mColorHandle);
GLES20.glDisableVertexAttribArray(mMVPMatrixHandle);
}
}
Remove the uniform vColor declaration from the fragment shader. Instead, declare a new per-vertex color attribute in the vertex shader, copy its value into a varying that the vertex shader outputs, and read that varying as an input in the fragment shader. On the application side, look the attribute up with glGetAttribLocation (not glGetUniformLocation) before pointing it at your color buffer with glVertexAttribPointer.
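A minimal sketch of those changes, shown here with the OpenGL ES 2.0 C API (the same calls exist on GLES20 in Java); aColor is an assumed attribute name, and program / colorData are placeholders rather than names from your code:
// Vertex shader: add a per-vertex color attribute and pass it on through a varying
const char* vertexShaderCode =
    "uniform mat4 uMVPMatrix;\n"
    "attribute vec4 vPosition;\n"
    "attribute vec4 aColor;\n"
    "varying vec4 vColor;\n"
    "void main() {\n"
    "  gl_Position = uMVPMatrix * vPosition;\n"
    "  vColor = aColor;\n"
    "}\n";
// Fragment shader: read the varying instead of a uniform
const char* fragmentShaderCode =
    "precision mediump float;\n"
    "varying vec4 vColor;\n"
    "void main() {\n"
    "  gl_FragColor = vColor;\n"
    "}\n";
// At draw time the color is an attribute, so query its location with glGetAttribLocation
GLint colorHandle = glGetAttribLocation(program, "aColor");
glEnableVertexAttribArray(colorHandle);
glVertexAttribPointer(colorHandle, 4, GL_FLOAT, GL_FALSE, 4 * sizeof(float), colorData);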
Consider that I have used the OpenGL control class as follows (no need to read all of the code; I have just made slight changes so that it can be used in more than one OpenGL window):
OpenGLControl.cpp
#include "stdafx.h"
#include "OpenGLControl.h"
COpenGLControl::COpenGLControl(void)
{
m_fPosX = 0.0f; // X position of model in camera view
m_fPosY = 0.0f; // Y position of model in camera view
m_fZoom = 10.0f; // Zoom on model in camera view
m_fRotX = 0.0f; // Rotation on model in camera view
m_fRotY = 0.0f; // Rotation on model in camera view
m_bIsMaximized = false;
}
COpenGLControl::~COpenGLControl(void)
{
}
BEGIN_MESSAGE_MAP(COpenGLControl, CWnd)
ON_WM_PAINT()
ON_WM_SIZE()
ON_WM_CREATE()
ON_WM_TIMER()
ON_WM_MOUSEMOVE()
END_MESSAGE_MAP()
void COpenGLControl::OnPaint()
{
//CPaintDC dc(this); // device context for painting
ValidateRect(NULL);
}
void COpenGLControl::OnSize(UINT nType, int cx, int cy)
{
wglMakeCurrent(hdc, hrc);
CWnd::OnSize(nType, cx, cy);
if (0 >= cx || 0 >= cy || nType == SIZE_MINIMIZED) return;
// Map the OpenGL coordinates.
glViewport(0, 0, cx, cy);
// Projection view
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
// Set our current view perspective
gluPerspective(35.0f, (float)cx / (float)cy, 0.01f, 2000.0f);
// Model view
glMatrixMode(GL_MODELVIEW);
wglMakeCurrent(NULL, NULL);
}
int COpenGLControl::OnCreate(LPCREATESTRUCT lpCreateStruct)
{
if (CWnd::OnCreate(lpCreateStruct) == -1) return -1;
oglInitialize();
return 0;
}
void COpenGLControl::OnDraw(CDC *pDC)
{
wglMakeCurrent(hdc,hrc);
// If the current view is perspective...
glLoadIdentity();
glTranslatef(0.0f, 0.0f, -m_fZoom);
glTranslatef(m_fPosX, m_fPosY, 0.0f);
glRotatef(m_fRotX, 1.0f, 0.0f, 0.0f);
glRotatef(m_fRotY, 0.0f, 1.0f, 0.0f);
wglMakeCurrent(NULL, NULL);
}
void COpenGLControl::OnTimer(UINT nIDEvent)
{
wglMakeCurrent(hdc,hrc);
switch (nIDEvent)
{
case 1:
{
// Clear color and depth buffer bits
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// Draw OpenGL scene
oglDrawScene();
// Swap buffers
SwapBuffers(hdc);
break;
}
default:
break;
}
CWnd::OnTimer(nIDEvent);
wglMakeCurrent(NULL, NULL);
}
void COpenGLControl::OnMouseMove(UINT nFlags, CPoint point)
{
wglMakeCurrent(hdc,hrc);
int diffX = (int)(point.x - m_fLastX);
int diffY = (int)(point.y - m_fLastY);
m_fLastX = (float)point.x;
m_fLastY = (float)point.y;
// Left mouse button
if (nFlags & MK_LBUTTON)
{
m_fRotX += (float)0.5f * diffY;
if ((m_fRotX > 360.0f) || (m_fRotX < -360.0f))
{
m_fRotX = 0.0f;
}
m_fRotY += (float)0.5f * diffX;
if ((m_fRotY > 360.0f) || (m_fRotY < -360.0f))
{
m_fRotY = 0.0f;
}
}
// Right mouse button
else if (nFlags & MK_RBUTTON)
{
m_fZoom -= (float)0.1f * diffY;
}
// Middle mouse button
else if (nFlags & MK_MBUTTON)
{
m_fPosX += (float)0.05f * diffX;
m_fPosY -= (float)0.05f * diffY;
}
OnDraw(NULL);
CWnd::OnMouseMove(nFlags, point);
wglMakeCurrent(NULL, NULL);
}
void COpenGLControl::oglCreate(CRect rect, CWnd *parent,CString windowName)
{
CString className = AfxRegisterWndClass(CS_HREDRAW | CS_VREDRAW | CS_OWNDC, NULL, (HBRUSH)GetStockObject(BLACK_BRUSH), NULL);
CreateEx(0, className,windowName, WS_CHILD | WS_VISIBLE | WS_CLIPSIBLINGS | WS_CLIPCHILDREN, rect, parent, 0);
// Set initial variables' values
m_oldWindow = rect;
m_originalRect = rect;
hWnd = parent;
}
void COpenGLControl::oglInitialize(void)
{
// Initial Setup:
//
static PIXELFORMATDESCRIPTOR pfd =
{
sizeof(PIXELFORMATDESCRIPTOR),
1,
PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER,
PFD_TYPE_RGBA,
32, // bit depth
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
24, // z-buffer depth
8,0,PFD_MAIN_PLANE, 0, 0, 0, 0,
};
// Get device context only once.
hdc = GetDC()->m_hDC;
// Pixel format.
m_nPixelFormat = ChoosePixelFormat(hdc, &pfd);
SetPixelFormat(hdc, m_nPixelFormat, &pfd);
// Create the OpenGL Rendering Context.
hrc = wglCreateContext(hdc);
wglMakeCurrent(hdc, hrc);
// Basic Setup:
//
// Set color to use when clearing the background.
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glClearDepth(1.0f);
// Turn on backface culling
glFrontFace(GL_CCW);
glCullFace(GL_BACK);
// Turn on depth testing
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LEQUAL);
// Send draw request
OnDraw(NULL);
wglMakeCurrent(NULL, NULL);
}
void COpenGLControl::oglDrawScene(void)
{
wglMakeCurrent(hdc, hrc);
// Wireframe Mode
glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);
glBegin(GL_QUADS);
// Front Side
glVertex3f( 1.0f, 1.0f, 1.0f);
glVertex3f(-1.0f, 1.0f, 1.0f);
glVertex3f(-1.0f, -1.0f, 1.0f);
glVertex3f( 1.0f, -1.0f, 1.0f);
// Back Side
glVertex3f(-1.0f, -1.0f, -1.0f);
glVertex3f(-1.0f, 1.0f, -1.0f);
glVertex3f( 1.0f, 1.0f, -1.0f);
glVertex3f( 1.0f, -1.0f, -1.0f);
// Top Side
glVertex3f( 1.0f, 1.0f, 1.0f);
glVertex3f( 1.0f, 1.0f, -1.0f);
glVertex3f(-1.0f, 1.0f, -1.0f);
glVertex3f(-1.0f, 1.0f, 1.0f);
// Bottom Side
glVertex3f(-1.0f, -1.0f, -1.0f);
glVertex3f( 1.0f, -1.0f, -1.0f);
glVertex3f( 1.0f, -1.0f, 1.0f);
glVertex3f(-1.0f, -1.0f, 1.0f);
// Right Side
glVertex3f( 1.0f, 1.0f, 1.0f);
glVertex3f( 1.0f, -1.0f, 1.0f);
glVertex3f( 1.0f, -1.0f, -1.0f);
glVertex3f( 1.0f, 1.0f, -1.0f);
// Left Side
glVertex3f(-1.0f, -1.0f, -1.0f);
glVertex3f(-1.0f, -1.0f, 1.0f);
glVertex3f(-1.0f, 1.0f, 1.0f);
glVertex3f(-1.0f, 1.0f, -1.0f);
glEnd();
wglMakeCurrent(NULL, NULL);
}
MyOpenGLTestDlg.h
COpenGLControl m_oglWindow;
COpenGLControl m_oglWindow2;
MyOpenGLTestDlg.cpp
// TODO: Add extra initialization here
CRect rect;
// Get size and position of the picture control
GetDlgItem(ID_OPENGL)->GetWindowRect(rect);
// Convert screen coordinates to client coordinates
ScreenToClient(rect);
// Create OpenGL Control window
CString s1("OPEN_GL");
m_oglWindow.oglCreate(rect, this,s1);
// Setup the OpenGL Window's timer to render
m_oglWindow.m_unpTimer = m_oglWindow.SetTimer(1, 1, 0);
CRect rect2;
GetDlgItem(ID_OPENGL2)->GetWindowRect(rect2);
ScreenToClient(rect2);
CString s2("OPEN_GL2");
m_oglWindow2.oglCreate(rect2, this,s2);
m_oglWindow2.m_unpTimer = m_oglWindow2.SetTimer(1, 1, 0);
The problem is that when I only create one OpenGL window, the system shows:
physical memory: 48%
CPU usage: 54%
and when I create two windows, it shows:
physical memory: 48%
CPU usage: 95%
What concerns me is that this is only for such simple geometry!
What will the usage be like for two OpenGL windows showing textures?
Is there any way to reduce the usage?
BTW: why is the usage so high in the first place?
CPU usage doesn't actually indicate anything about the complexity of your application. If you draw in a tight loop, one frame after another with no delay and no VSYNC, you can reach 100% CPU utilization. What this tells you is that you are not GPU bound. In the same way, if your GPU usage (yes, you can measure this with vendor-specific APIs) is >95%, then you are not CPU bound.
In short, you should expect to see very high CPU usage if the GPU is not doing anything particularly complicated :) You can always increase the sleep/timer interval to reduce CPU utilization. Remember that CPU utilization is measured as the time spent doing work versus the total time the OS gave a thread/process. Time spent working is inversely related to time spent waiting (for I/O, sleep, etc.). If you increase the time spent waiting, it will reduce the time spent working, and therefore your reported utilization.
You can also reduce CPU usage just by enabling VSYNC, since that blocks the calling thread until the next VBLANK interval comes around (often every 16.666 ms).
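A minimal sketch of enabling VSYNC through the WGL_EXT_swap_control extension (this assumes the driver exposes the extension and an OpenGL context is current; it is not part of the code in the question):
// Enable VSYNC: SwapBuffers will then wait for one vertical blank per frame
typedef BOOL (WINAPI *PFNWGLSWAPINTERVALEXTPROC)(int interval);
void EnableVSync()
{
    PFNWGLSWAPINTERVALEXTPROC wglSwapIntervalEXT =
        (PFNWGLSWAPINTERVALEXTPROC)wglGetProcAddress("wglSwapIntervalEXT");
    if (wglSwapIntervalEXT)
        wglSwapIntervalEXT(1);   // 1 = sync presentation to the vertical blank
}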
It should also be noted that a 1 ms interval on your OpenGL timer is excessively low. I cannot think of many applications that need to draw 1000 times a second :) Try something slightly below your target frame time (e.g. for a 60 Hz monitor, a 10-15 ms timer interval).
This needs more investigation, but you may have problems with your main loop.
This is probably not a problem with OpenGL but with how the WinAPI is used. When you add textures, models, shaders, and so on, your CPU usage should stay similar.
You use SetTimer(1, 1, 0); as I understand it, that means a 1 millisecond delay. Can you change it to 33 milliseconds (roughly 30 FPS)?
That way you will not overwhelm the message pump of your MFC app. Note that this timer is very imprecise.
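For example, in the dialog initialization from the question that would just be (33 ms is an assumed interval, pick whatever rate you need):
// Fire the render timer roughly 30 times per second instead of ~1000
m_oglWindow.m_unpTimer = m_oglWindow.SetTimer(1, 33, 0);
m_oglWindow2.m_unpTimer = m_oglWindow2.SetTimer(1, 33, 0);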
Link to a basic MFC + OpenGL message loop using OnIdle(): http://archive.gamedev.net/archive/reference/articles/article2204.html
Here is a great tutorial about MFC + OpenGL + threading - songho.
https://gamedev.stackexchange.com/questions/8623/a-good-way-to-build-a-game-loop-in-opengl - a discussion of the main loop in GLUT.
I am trying to understand the perspective view in OpenGL.
What I am trying to do is render two identical triangles, but at different z coordinates, so I assume they should appear at different sizes. Here is my code:
CUSTOMVERTEX Vertices[] =
{
{ 0.5f, 1.0f, 0.5f, 1.0f, 0.0f, 0.0f, 1.0f }, // x, y, z, color
{ 0.0f, 0.0f, 0.5f, 0.0f, 1.0f, 0.0f, 1.0f },
{ 1.0f, 0.0f, 0.5f, 0.0f, 0.0f, 1.0f, 1.0f },
};
and for drawing
glDrawArrays(GL_TRIANGLES,0, 3);
glTranslatef(0.0f,-1.0f,-1.5f);
glDrawArrays(GL_TRIANGLES,0, 3);
and here is how I initialize some attributes
glShadeModel(GL_SMOOTH);
glClearDepthf(1.0f);
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LEQUAL);
glEnable(GL_CULL_FACE);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glFrustumf(-1.0f, 1.0f, -1.0f, 1.0f, 0.0f, 100.0f);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
but the triangles appear at the same size, just at different locations as I translated Y.
Can someone please explain this to me?
You cannot use 0.0 for the perspective projection's near-Z value. It must be strictly greater than zero, preferably on the order of 1.0 or so.
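For example, the projection setup from the question with the near plane moved off zero (1.0f here is just a reasonable choice, not a required value):
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glFrustumf(-1.0f, 1.0f, -1.0f, 1.0f, 1.0f, 100.0f);   // near = 1.0f instead of 0.0f
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();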
As Nicol said, you should use numbers greater than 0 for frustum construction. I strongly suggest you read this article to understand why that is.