DirectX11 Loaded texture colour becomes more saturated

I'm loading an image to the screen using DirectX11, but the image becomes more saturated. Left is the loaded image and right is the original image.
The strange thing is that this happens only when I'm loading large images. The resolution of the image I'm trying to display is 1080 x 675 and my window size is 1280 x 800. Also, although the original image has a high resolution, the loaded image becomes a little pixelated. This is solved if I use a LINEAR filter, but I'm curious why it happens. I'm fairly new to DirectX and I'm struggling.
Vertex data:
_vertices[0].p = { -1.0f, 1.0f, 0.0f };
//_vertices[0].c = { 1.0f, 1.0f, 1.0f, 1.0f };
_vertices[0].t = { 0.0f, 0.0f };
_vertices[1].p = { 1.0f, 1.0f, 0.0f };
//_vertices[1].c = { 1.0f, 1.0f, 1.0f, 1.0f };
_vertices[1].t = { 1.0f, 0.0f };
_vertices[2].p = { -1.0f, -1.0f, 0.0f };
//_vertices[2].c = { 1.0f, 1.0f, 1.0f, 1.0f };
_vertices[2].t = { 0.0f, 1.0f };
_vertices[3].p = { 1.0f, -1.0f, 0.0f };
//_vertices[3].c = { 1.0f, 1.0f, 1.0f, 1.0f };
_vertices[3].t = { 1.0f, 1.0f };
Vertex layout:
D3D11_INPUT_ELEMENT_DESC elementDesc[] = {
{ "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0, D3D11_INPUT_PER_VERTEX_DATA, 0},
//{ "COLOR", 0, DXGI_FORMAT_R32G32B32A32_FLOAT, 0, 12, D3D11_INPUT_PER_VERTEX_DATA, 0},
{ "TEXTURE", 0, DXGI_FORMAT_R32G32_FLOAT, 0, 28, D3D11_INPUT_PER_VERTEX_DATA, 0},
};
Sampler state:
D3D11_SAMPLER_DESC samplerDesc;
ZeroMemory(&samplerDesc, sizeof(samplerDesc));
samplerDesc.Filter = D3D11_FILTER_MIN_MAG_MIP_POINT;
samplerDesc.AddressU = D3D11_TEXTURE_ADDRESS_WRAP;
samplerDesc.AddressV = D3D11_TEXTURE_ADDRESS_WRAP;
samplerDesc.AddressW = D3D11_TEXTURE_ADDRESS_WRAP;
device->CreateSamplerState(&samplerDesc, &g_defaultSS);
Vertex shader:
struct VS_IN
{
float3 p : POSITION;
//float4 c : COLOR;
float2 t : TEXTURE;
};
struct VS_OUT
{
float4 p : SV_POSITION;
//float4 c : COLOR0;
float2 t : TEXCOORD0;
};
VS_OUT VSMain(VS_IN input)
{
VS_OUT output = (VS_OUT)0;
output.p = float4(input.p, 1.0f);
//output.c = input.c;
output.t = input.t;
return output;
}
Pixel shader:
Texture2D g_texture : register(t0);
SamplerState g_sampleWrap : register(s0);
float4 PSMain(VS_OUT input) : SV_Target
{
float4 vColor = g_texture.Sample(g_sampleWrap, input.t);
return vColor; //* input.c;
}

This is most likely a colorspace issue. If you are rendering with 'linear colors' (which is recommended), then your image is likely in the sRGB colorspace. You can let the texture hardware deal with the gamma by using DXGI_FORMAT_*_SRGB formats for your texture, or you can do the conversion directly in the shader.
See these resources:
Linear-Space Lighting (i.e. Gamma)
Chapter 24. The Importance of Being Linear, GPU Gems 3
Gamma-correct rendering
In the DirectX Tool Kit, you can do various load-time tricks as well. See DDSTextureLoader and WICTextureLoader.

Related

How to move object DirectX11

I am trying to make my 3D object move. Using the default VS2022 template I managed to import a model from a .ply file and feed it to the pixel shader / vertex shader. The object displays on screen, but I have no idea how to move it. I know I could edit all the vertices, but I think there is another way of doing it using a matrix.
I was trying to do
XMStoreFloat4x4(&m_constantBufferData.model, XMMatrixTranslation(0.5f, 0.0f, 0.0f));
But it shredded the textures.
Upper image is after move.
Can someone help me and explain how to do this?
Camera Initialize:
void Sample3DSceneRenderer::CreateWindowSizeDependentResources()
{
Size outputSize = m_deviceResources->GetOutputSize();
float aspectRatio = outputSize.Width / outputSize.Height;
float fovAngleY = 70.0f * XM_PI / 180.0f;
if (aspectRatio < 1.0f)
{
fovAngleY *= 2.0f;
}
XMMATRIX perspectiveMatrix = XMMatrixPerspectiveFovRH(
fovAngleY,
aspectRatio,
0.01f,
100.0f
);
XMFLOAT4X4 orientation = m_deviceResources->GetOrientationTransform3D();
XMMATRIX orientationMatrix = XMLoadFloat4x4(&orientation);
XMStoreFloat4x4(
&m_constantBufferData.projection,
XMMatrixTranspose(perspectiveMatrix * orientationMatrix)
);
static const XMVECTORF32 eye = { 0.0f, 0.7f, 8.0f, 0.0f };
static const XMVECTORF32 at = { 0.0f, -0.1f, 0.0f, 0.0f };
static const XMVECTORF32 up = { 0.0f, 1.0f, 0.0f, 0.0f };
XMStoreFloat4x4(&m_constantBufferData.view, XMMatrixTranspose(XMMatrixLookAtRH(eye, at, up)));
}
void Sample3DSceneRenderer::Move()
{
XMStoreFloat4x4(&m_constantBufferData.model, XMMatrixTranslation(0.5f, 0.0f, 0.0f));
}

OpenGL ES 2.0 Setting different colors for a cube

I am trying to draw a cube with a different color on each face using OpenGL ES 2.0. At the moment I can only draw the cube in one color. I know I need to use VertexAttribPointer in this case instead of a uniform, but I have probably added it wrongly: the screen shows nothing after I implement my code. Here is my code; can anybody give me a hand? Thank you so much!
public class MyCube {
private FloatBuffer vertexBuffer;
private ShortBuffer drawListBuffer;
private ShortBuffer[] ArrayDrawListBuffer;
private FloatBuffer colorBuffer;
private int mProgram;
//For Projection and Camera Transformations
private final String vertexShaderCode =
// This matrix member variable provides a hook to manipulate
// the coordinates of the objects that use this vertex shader
"uniform mat4 uMVPMatrix;" +
"attribute vec4 vPosition;" +
"void main() {" +
// the matrix must be included as a modifier of gl_Position
// Note that the uMVPMatrix factor *must be first* in order
// for the matrix multiplication product to be correct.
" gl_Position = uMVPMatrix * vPosition;" +
"}";
// Use to access and set the view transformation
private int mMVPMatrixHandle;
private final String fragmentShaderCode =
"precision mediump float;" +
"uniform vec4 vColor;" +
"void main() {" +
" gl_FragColor = vColor;" +
"}";
// number of coordinates per vertex in this array
static final int COORDS_PER_VERTEX = 3;
float cubeCoords[] = {
-0.5f, 0.5f, 0.5f, // front top left 0
-0.5f, -0.5f, 0.5f, // front bottom left 1
0.5f, -0.5f, 0.5f, // front bottom right 2
0.5f, 0.5f, 0.5f, // front top right 3
-0.5f, 0.5f, -0.5f, // back top left 4
0.5f, 0.5f, -0.5f, // back top right 5
-0.5f, -0.5f, -0.5f, // back bottom left 6
0.5f, -0.5f, -0.5f, // back bottom right 7
};
// Set color with red, green, blue and alpha (opacity) values
float color[] = { 0.63671875f, 0.76953125f, 0.22265625f, 1.0f };
float red[] = { 1.0f, 0.0f, 0.0f, 1.0f };
float blue[] = { 0.0f, 0.0f, 1.0f, 1.0f };
private short drawOrder[] = {
0, 1, 2, 0, 2, 3,//front
0, 4, 5, 0, 5, 3, //Top
0, 1, 6, 0, 6, 4, //left
3, 2, 7, 3, 7 ,5, //right
1, 2, 7, 1, 7, 6, //bottom
4, 6, 7, 4, 7, 5};//back (order to draw vertices)
final float[] cubeColor =
{
// Front face (red)
1.0f, 0.0f, 0.0f, 1.0f,
1.0f, 0.0f, 0.0f, 1.0f,
1.0f, 0.0f, 0.0f, 1.0f,
1.0f, 0.0f, 0.0f, 1.0f,
1.0f, 0.0f, 0.0f, 1.0f,
1.0f, 0.0f, 0.0f, 1.0f,
// Top face (green)
0.0f, 1.0f, 0.0f, 1.0f,
0.0f, 1.0f, 0.0f, 1.0f,
0.0f, 1.0f, 0.0f, 1.0f,
0.0f, 1.0f, 0.0f, 1.0f,
0.0f, 1.0f, 0.0f, 1.0f,
0.0f, 1.0f, 0.0f, 1.0f,
// Left face (blue)
0.0f, 0.0f, 1.0f, 1.0f,
0.0f, 0.0f, 1.0f, 1.0f,
0.0f, 0.0f, 1.0f, 1.0f,
0.0f, 0.0f, 1.0f, 1.0f,
0.0f, 0.0f, 1.0f, 1.0f,
0.0f, 0.0f, 1.0f, 1.0f,
// Right face (yellow)
1.0f, 1.0f, 0.0f, 1.0f,
1.0f, 1.0f, 0.0f, 1.0f,
1.0f, 1.0f, 0.0f, 1.0f,
1.0f, 1.0f, 0.0f, 1.0f,
1.0f, 1.0f, 0.0f, 1.0f,
1.0f, 1.0f, 0.0f, 1.0f,
// Bottom face (cyan)
0.0f, 1.0f, 1.0f, 1.0f,
0.0f, 1.0f, 1.0f, 1.0f,
0.0f, 1.0f, 1.0f, 1.0f,
0.0f, 1.0f, 1.0f, 1.0f,
0.0f, 1.0f, 1.0f, 1.0f,
0.0f, 1.0f, 1.0f, 1.0f,
// Back face (magenta)
1.0f, 0.0f, 1.0f, 1.0f,
1.0f, 0.0f, 1.0f, 1.0f,
1.0f, 0.0f, 1.0f, 1.0f,
1.0f, 0.0f, 1.0f, 1.0f,
1.0f, 0.0f, 1.0f, 1.0f,
1.0f, 0.0f, 1.0f, 1.0f
};
public MyCube() {
// initialize vertex byte buffer for shape coordinates
ByteBuffer bb = ByteBuffer.allocateDirect(
// (# of coordinate values * 4 bytes per float)
cubeCoords.length * 4);
bb.order(ByteOrder.nativeOrder());
vertexBuffer = bb.asFloatBuffer();
vertexBuffer.put(cubeCoords);
vertexBuffer.position(0);
// initialize byte buffer for the draw list
ByteBuffer dlb = ByteBuffer.allocateDirect(
// (# of coordinate values * 2 bytes per short)
drawOrder.length * 2);
dlb.order(ByteOrder.nativeOrder());
drawListBuffer = dlb.asShortBuffer();
drawListBuffer.put(drawOrder);
drawListBuffer.position(0);
// initialize byte buffer for the color list
ByteBuffer cb = ByteBuffer.allocateDirect(
// (# of color values * 4 bytes per float)
cubeColor.length * 4);
cb.order(ByteOrder.nativeOrder());
colorBuffer = cb.asFloatBuffer();
colorBuffer.put(cubeColor);
colorBuffer.position(0);
int vertexShader = MyRenderer.loadShader(GLES20.GL_VERTEX_SHADER,
vertexShaderCode);
int fragmentShader = MyRenderer.loadShader(GLES20.GL_FRAGMENT_SHADER,
fragmentShaderCode);
// create empty OpenGL ES Program
mProgram = GLES20.glCreateProgram();
// add the vertex shader to program
GLES20.glAttachShader(mProgram, vertexShader);
// add the fragment shader to program
GLES20.glAttachShader(mProgram, fragmentShader);
// creates OpenGL ES program executables
GLES20.glLinkProgram(mProgram);
}
private int mPositionHandle;
private int mColorHandle;
private final int vertexCount = cubeCoords.length / COORDS_PER_VERTEX;
private final int vertexStride = COORDS_PER_VERTEX * 4; // 4 bytes per float
public void draw(float[] mvpMatrix) { // pass in the calculated transformation matrix
// Add program to OpenGL ES environment
GLES20.glUseProgram(mProgram);
// get handle to vertex shader's vPosition member
mPositionHandle = GLES20.glGetAttribLocation(mProgram, "vPosition");
// get handle to fragment shader's vColor member
mColorHandle = GLES20.glGetUniformLocation(mProgram, "vColor");
// Enable a handle to the cube vertices
GLES20.glEnableVertexAttribArray(mPositionHandle);
// Prepare the cube coordinate data
GLES20.glVertexAttribPointer(mPositionHandle, COORDS_PER_VERTEX,
GLES20.GL_FLOAT, false,
vertexStride, vertexBuffer);
// Set color for drawing the triangle
//mColorHandle = GLES20.glGetUniformLocation(mProgram, "vColor");
// Enable a handle to the cube colors
GLES20.glEnableVertexAttribArray(mColorHandle);
// Prepare the cube color data
GLES20.glVertexAttribPointer(mColorHandle, 4, GLES20.GL_FLOAT, false, 16, colorBuffer);
// Set the color for each of the faces
//GLES20.glUniform4fv(mColorHandle, 1, blue, 0);
//***When I add this line of code above, it can show a cube totally in blue.***
// get handle to shape's transformation matrix
mMVPMatrixHandle = GLES20.glGetUniformLocation(mProgram, "uMVPMatrix");
// Pass the projection and view transformation to the shader
GLES20.glUniformMatrix4fv(mMVPMatrixHandle, 1, false, mvpMatrix, 0);
// Draw the cube
GLES20.glDrawElements(GLES20.GL_TRIANGLES, drawOrder.length, GLES20.GL_UNSIGNED_SHORT, drawListBuffer);
// Disable vertex array
GLES20.glDisableVertexAttribArray(mPositionHandle);
GLES20.glDisableVertexAttribArray(mColorHandle);
GLES20.glDisableVertexAttribArray(mMVPMatrixHandle);
}
}
Remove the uniform variable vColor declaration from the fragment shader. Define a new per-vertex attribute input variable in the vertex shader, write that value to a varying variable which is output by the vertex shader, and add it as a varying input which is read by the fragment shader.
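A sketch of the reworked shaders under that plan (the attribute and varying names aColor / vInterpColor are illustrative, not from the original code):

```glsl
// Vertex shader: accept the colour per vertex and pass it through.
uniform mat4 uMVPMatrix;
attribute vec4 vPosition;
attribute vec4 aColor;       // new per-vertex attribute
varying vec4 vInterpColor;   // interpolated and read by the fragment shader
void main() {
    gl_Position = uMVPMatrix * vPosition;
    vInterpColor = aColor;
}
```

```glsl
// Fragment shader: read the interpolated varying instead of a uniform.
precision mediump float;
varying vec4 vInterpColor;
void main() {
    gl_FragColor = vInterpColor;
}
```

On the Java side, fetch the location with glGetAttribLocation(mProgram, "aColor") (not glGetUniformLocation) before enabling it and calling glVertexAttribPointer. Note also that glDrawElements indexes the colour attribute by vertex index: with only 8 shared vertices a cube cannot have distinct per-face colours, so the 36-entry cubeColor array only lines up if the cube is built from duplicated vertices (24 or 36) instead.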

OpenGL ES 2.0 Texture Mapping to Fill Quad on Raspberry Pi

I am trying to set up my vertices and texture coordinates so that I have a single quad (2 triangles) covering the entire OpenGL window. I then want to update the texture as I receive images from my camera.
It is working, except that my streaming texture never fills the entire screen and the texture pixels are repeated outside of the intended texture (see images below).
How do I set up my vertices and texture coordinates so that my texture fills the quad without repeating?
Here is how I generate my texture:
glGenTextures(1, &m_dataFrame);
glBindTexture(GL_TEXTURE_2D, m_dataFrame);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, WIDTH, HEIGHT, 0, GL_RGBA, GL_UNSIGNED_BYTE, frame.bits());
// CUSTOM
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR );
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR );
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
Initial Attempts:
// Setup the vertices and texture coordinates
float scaler = 1.0f;
static const GLfloat squareVertices[] = {
-(1.0f/scaler), -(1.0f/scaler), 0.0f, // First triangle bottom left
(1.0f/scaler), -(1.0f/scaler), 0.0f,
-(1.0f/scaler), (1.0f/scaler), 0.0f,
(1.0f/scaler), -(1.0f/scaler), 0.0f, // Second triangle bottom right
(1.0f/scaler), (1.0f/scaler), 0.0f,
-(1.0f/scaler), (1.0f/scaler), 0.0f,
};
static const GLfloat textureVertices[] = {
-1.0f, -1.0f,
1.0f, -1.0f,
-1.0f, 1.0f,
1.0f, -1.0f,
1.0f, 1.0f,
-1.0f, 1.0f,
};
This results in the following image:
// Setup the vertices and texture coordinates
float x = 1.0f;
float y = 1.0f;
static const GLfloat squareVertices[] = {
-(x*2), -(y*2), 0.0f, // First triangle bottom left
(x*2), -(y*2), 0.0f,
-(x*2), (y*2), 0.0f,
(x*2), -(y*2), 0.0f, // Second triangle bottom right
(x*2), (y*2), 0.0f,
-(x*2), (y*2), 0.0f,
};
static const GLfloat textureVertices[] = {
-1.0f, -1.0f,
1.0f, -1.0f,
-1.0f, 1.0f,
1.0f, -1.0f,
1.0f, 1.0f,
-1.0f, 1.0f,
};
This results in the following image:
This is what I've used for rendering a textured quad on the Raspberry Pi (in Swift, but the syntax is similar to what you'd use for C). Setting up the texture parameters:
glEnable(GLenum(GL_TEXTURE_2D))
glGenTextures(1, &cameraOutputTexture)
glActiveTexture(GLenum(GL_TEXTURE0));
glBindTexture(GLenum(GL_TEXTURE_2D), cameraOutputTexture)
glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MIN_FILTER), GL_NEAREST)
glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MAG_FILTER), GL_NEAREST)
uploading the texture:
glActiveTexture(GLenum(GL_TEXTURE0))
glBindTexture(GLenum(GL_TEXTURE_2D), cameraOutputTexture)
glTexImage2D(GLenum(GL_TEXTURE_2D), 0, GL_RGB, 1280, 720, 0, GLenum(GL_RGB), GLenum(GL_UNSIGNED_BYTE), buffers[Int(currentBuffer)].start)
and rendering it:
glClear(GLenum(GL_COLOR_BUFFER_BIT))
shader.use()
let squareVertices:[GLfloat] = [
-1.0, -1.0,
1.0, -1.0,
-1.0, 1.0,
1.0, 1.0,
]
let textureCoordinates:[GLfloat] = [
0.0, 1.0,
1.0, 1.0,
0.0, 0.0,
1.0, 0.0,
]
glVertexAttribPointer(positionAttribute, 2, GLenum(GL_FLOAT), 0, 0, squareVertices)
glVertexAttribPointer(textureCoordinateAttribute, 2, GLenum(GL_FLOAT), 0, 0, textureCoordinates)
glUniform1i(GLint(textureCoordinateAttribute), 1)
glDrawArrays(GLenum(GL_TRIANGLE_STRIP), 0, 4)
eglSwapBuffers(display, surface)
I might be rotating the image to account for rotation in the camera that I'm using as an input source, so you may need to tweak the vertices or coordinates if you see your image rendering upside-down.

CPU usage is high when using the OpenGL control class?

Consider that I have used the OpenGL Control class as follows (no need to read the code closely; I have just made slight changes to be able to use it in more than one OpenGL window):
OpenGLControl.cpp
#include "stdafx.h"
#include "OpenGLControl.h"
COpenGLControl::COpenGLControl(void)
{
m_fPosX = 0.0f; // X position of model in camera view
m_fPosY = 0.0f; // Y position of model in camera view
m_fZoom = 10.0f; // Zoom on model in camera view
m_fRotX = 0.0f; // Rotation on model in camera view
m_fRotY = 0.0f; // Rotation on model in camera view
m_bIsMaximized = false;
}
COpenGLControl::~COpenGLControl(void)
{
}
BEGIN_MESSAGE_MAP(COpenGLControl, CWnd)
ON_WM_PAINT()
ON_WM_SIZE()
ON_WM_CREATE()
ON_WM_TIMER()
ON_WM_MOUSEMOVE()
END_MESSAGE_MAP()
void COpenGLControl::OnPaint()
{
//CPaintDC dc(this); // device context for painting
ValidateRect(NULL);
}
void COpenGLControl::OnSize(UINT nType, int cx, int cy)
{
wglMakeCurrent(hdc, hrc);
CWnd::OnSize(nType, cx, cy);
if (0 >= cx || 0 >= cy || nType == SIZE_MINIMIZED) return;
// Map the OpenGL coordinates.
glViewport(0, 0, cx, cy);
// Projection view
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
// Set our current view perspective
gluPerspective(35.0f, (float)cx / (float)cy, 0.01f, 2000.0f);
// Model view
glMatrixMode(GL_MODELVIEW);
wglMakeCurrent(NULL, NULL);
}
int COpenGLControl::OnCreate(LPCREATESTRUCT lpCreateStruct)
{
if (CWnd::OnCreate(lpCreateStruct) == -1) return -1;
oglInitialize();
return 0;
}
void COpenGLControl::OnDraw(CDC *pDC)
{
wglMakeCurrent(hdc,hrc);
// If the current view is perspective...
glLoadIdentity();
glTranslatef(0.0f, 0.0f, -m_fZoom);
glTranslatef(m_fPosX, m_fPosY, 0.0f);
glRotatef(m_fRotX, 1.0f, 0.0f, 0.0f);
glRotatef(m_fRotY, 0.0f, 1.0f, 0.0f);
wglMakeCurrent(NULL, NULL);
}
void COpenGLControl::OnTimer(UINT nIDEvent)
{
wglMakeCurrent(hdc,hrc);
switch (nIDEvent)
{
case 1:
{
// Clear color and depth buffer bits
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// Draw OpenGL scene
oglDrawScene();
// Swap buffers
SwapBuffers(hdc);
break;
}
default:
break;
}
CWnd::OnTimer(nIDEvent);
wglMakeCurrent(NULL, NULL);
}
void COpenGLControl::OnMouseMove(UINT nFlags, CPoint point)
{
wglMakeCurrent(hdc,hrc);
int diffX = (int)(point.x - m_fLastX);
int diffY = (int)(point.y - m_fLastY);
m_fLastX = (float)point.x;
m_fLastY = (float)point.y;
// Left mouse button
if (nFlags & MK_LBUTTON)
{
m_fRotX += (float)0.5f * diffY;
if ((m_fRotX > 360.0f) || (m_fRotX < -360.0f))
{
m_fRotX = 0.0f;
}
m_fRotY += (float)0.5f * diffX;
if ((m_fRotY > 360.0f) || (m_fRotY < -360.0f))
{
m_fRotY = 0.0f;
}
}
// Right mouse button
else if (nFlags & MK_RBUTTON)
{
m_fZoom -= (float)0.1f * diffY;
}
// Middle mouse button
else if (nFlags & MK_MBUTTON)
{
m_fPosX += (float)0.05f * diffX;
m_fPosY -= (float)0.05f * diffY;
}
OnDraw(NULL);
CWnd::OnMouseMove(nFlags, point);
wglMakeCurrent(NULL, NULL);
}
void COpenGLControl::oglCreate(CRect rect, CWnd *parent,CString windowName)
{
CString className = AfxRegisterWndClass(CS_HREDRAW | CS_VREDRAW | CS_OWNDC, NULL, (HBRUSH)GetStockObject(BLACK_BRUSH), NULL);
CreateEx(0, className,windowName, WS_CHILD | WS_VISIBLE | WS_CLIPSIBLINGS | WS_CLIPCHILDREN, rect, parent, 0);
// Set initial variables' values
m_oldWindow = rect;
m_originalRect = rect;
hWnd = parent;
}
void COpenGLControl::oglInitialize(void)
{
// Initial Setup:
//
static PIXELFORMATDESCRIPTOR pfd =
{
sizeof(PIXELFORMATDESCRIPTOR),
1,
PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER,
PFD_TYPE_RGBA,
32, // bit depth
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
24, // z-buffer depth
8,0,PFD_MAIN_PLANE, 0, 0, 0, 0,
};
// Get device context only once.
hdc = GetDC()->m_hDC;
// Pixel format.
m_nPixelFormat = ChoosePixelFormat(hdc, &pfd);
SetPixelFormat(hdc, m_nPixelFormat, &pfd);
// Create the OpenGL Rendering Context.
hrc = wglCreateContext(hdc);
wglMakeCurrent(hdc, hrc);
// Basic Setup:
//
// Set color to use when clearing the background.
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glClearDepth(1.0f);
// Turn on backface culling
glFrontFace(GL_CCW);
glCullFace(GL_BACK);
// Turn on depth testing
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LEQUAL);
// Send draw request
OnDraw(NULL);
wglMakeCurrent(NULL, NULL);
}
void COpenGLControl::oglDrawScene(void)
{
wglMakeCurrent(hdc, hrc);
// Wireframe Mode
glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);
glBegin(GL_QUADS);
// Front Side
glVertex3f( 1.0f, 1.0f, 1.0f);
glVertex3f(-1.0f, 1.0f, 1.0f);
glVertex3f(-1.0f, -1.0f, 1.0f);
glVertex3f( 1.0f, -1.0f, 1.0f);
// Back Side
glVertex3f(-1.0f, -1.0f, -1.0f);
glVertex3f(-1.0f, 1.0f, -1.0f);
glVertex3f( 1.0f, 1.0f, -1.0f);
glVertex3f( 1.0f, -1.0f, -1.0f);
// Top Side
glVertex3f( 1.0f, 1.0f, 1.0f);
glVertex3f( 1.0f, 1.0f, -1.0f);
glVertex3f(-1.0f, 1.0f, -1.0f);
glVertex3f(-1.0f, 1.0f, 1.0f);
// Bottom Side
glVertex3f(-1.0f, -1.0f, -1.0f);
glVertex3f( 1.0f, -1.0f, -1.0f);
glVertex3f( 1.0f, -1.0f, 1.0f);
glVertex3f(-1.0f, -1.0f, 1.0f);
// Right Side
glVertex3f( 1.0f, 1.0f, 1.0f);
glVertex3f( 1.0f, -1.0f, 1.0f);
glVertex3f( 1.0f, -1.0f, -1.0f);
glVertex3f( 1.0f, 1.0f, -1.0f);
// Left Side
glVertex3f(-1.0f, -1.0f, -1.0f);
glVertex3f(-1.0f, -1.0f, 1.0f);
glVertex3f(-1.0f, 1.0f, 1.0f);
glVertex3f(-1.0f, 1.0f, -1.0f);
glEnd();
wglMakeCurrent(NULL, NULL);
}
MyOpenGLTestDlg.h
COpenGLControl m_oglWindow;
COpenGLControl m_oglWindow2;
MyOpenGLTestDlg.cpp
// TODO: Add extra initialization here
CRect rect;
// Get size and position of the picture control
GetDlgItem(ID_OPENGL)->GetWindowRect(rect);
// Convert screen coordinates to client coordinates
ScreenToClient(rect);
// Create OpenGL Control window
CString s1("OPEN_GL");
m_oglWindow.oglCreate(rect, this,s1);
// Setup the OpenGL Window's timer to render
m_oglWindow.m_unpTimer = m_oglWindow.SetTimer(1, 1, 0);
CRect rect2;
GetDlgItem(ID_OPENGL2)->GetWindowRect(rect2);
ScreenToClient(rect2);
CString s2("OPEN_GL2");
m_oglWindow2.oglCreate(rect2, this,s2);
m_oglWindow2.m_unpTimer = m_oglWindow2.SetTimer(1, 1, 0);
The problem is that when I create only one OpenGL window, the system shows:
physical memory: 48%
CPU usage: 54%
and when I create two windows, it shows:
physical memory: 48%
CPU usage: 95%
What concerns me is that this is for such simple geometry!
What will the usage be for two OpenGL windows showing textures?
Is there any way to reduce the usage?
BTW: why is the usage as high as mentioned?
CPU usage doesn't actually indicate anything about the complexity of your application. If you draw in a tight loop, one frame after the other, with no delay and no VSYNC enabled, you can reach 100% CPU utilization. What this tells you is that you are not GPU bound. Likewise, if your GPU usage (yes, you can measure this with vendor-specific APIs) is >95%, then you are not CPU bound.
In short, you should expect to see very high CPU usage if the GPU is not doing anything particularly complicated :) You can always increase the sleep/timer interval to reduce CPU utilization. Remember that CPU utilization is measured as the time spent doing work versus the total time the OS gave a thread/process. Time spent working is inversely related to time spent waiting (for I/O, sleep, etc.). If you increase the time spent waiting, it will reduce the time spent working, and therefore your reported utilization.
You can also reduce the CPU usage just by enabling VSYNC, since that will block the calling thread until the next VBLANK interval comes around (often 16.666 ms).
It should also be noted that a 1 ms interval on your OpenGL timer is excessively short. I cannot think of many applications that need to draw 1000 times a second :) Try an interval slightly below your target refresh period (e.g. for a 60 Hz monitor, try a 10-15 ms timer interval).
This needs more investigation, but you may have problems with your main loop.
This is probably not a problem with OpenGL, but with the usage of the WinAPI. When you add textures, models, shaders... your CPU usage should stay similar.
You use SetTimer(1, 1, 0); that means 1 millisecond of delay, as I understand it? Can you change it to 33 milliseconds (about 30 FPS)?
That way you will not kill your message pump in the MFC app. Note that this timer is very imprecise.
Link to Basic MFC + OpenGL Message Loop, using OnIdle(): http://archive.gamedev.net/archive/reference/articles/article2204.html
Here is a great tutorial about MFC + OpenGL + threading - #songho
https://gamedev.stackexchange.com/questions/8623/a-good-way-to-build-a-game-loop-in-opengl - a discussion regarding the main loop in GLUT

Only half a triangle shows when rotating in OpenGL ES 2.0

I'm trying to rotate a triangle around the Y axis. When I rotate it about the Z axis, everything is fine. But when I try rotating about the Y axis, all I get is half a triangle rotating about the Y axis. I'm using PowerVR's OpenGL ES 2.0 SDK. My Init and Draw functions are below.
int Init(ESContext* esContext)
{
UserData* userData = (UserData *)esContext->userData;
const char *vShaderStr =
"attribute vec4 vPosition; \n"
"uniform mat4 MVPMatrix;"
"void main() \n"
"{ \n"
" gl_Position = MVPMatrix * vPosition;\n"
"} \n";
const char *fShaderStr =
"precision mediump float; \n"
"void main() \n"
"{ \n"
" gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0); \n"
"} \n";
GLuint vertexShader;
GLuint fragmentShader;
GLuint programObject;
GLint linked;
GLfloat ratio = 320.0f/240.0f;
vertexShader = LoadShader(GL_VERTEX_SHADER, vShaderStr);
fragmentShader = LoadShader(GL_FRAGMENT_SHADER, fShaderStr);
programObject = glCreateProgram();
if (programObject == 0)
return 0;
glAttachShader(programObject, vertexShader);
glAttachShader(programObject, fragmentShader);
glBindAttribLocation(programObject, 0, "vPosition");
glLinkProgram(programObject);
glGetProgramiv(programObject, GL_LINK_STATUS, &linked);
if (!linked)
{
GLint infoLen = 0;
glGetProgramiv(programObject, GL_INFO_LOG_LENGTH, &infoLen);
if (infoLen > 1)
{
char* infoLog = (char *)malloc(sizeof(char) * infoLen);
glGetProgramInfoLog(programObject, infoLen, NULL, infoLog);
free(infoLog);
}
glDeleteProgram(programObject);
return FALSE;
}
userData->programObject = programObject;
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glViewport(0, 0, esContext->width, esContext->height);
glClear(GL_COLOR_BUFFER_BIT);
glUseProgram(userData->programObject);
userData->angle = 0.0f;
userData->start = time(NULL);
userData->ProjMatrix = PVRTMat4::Perspective(ratio*2.0f, 2.0f, 3.0f, 7.0f, PVRTMat4::eClipspace::OGL, false, false);
userData->ViewMatrix = PVRTMat4::LookAtLH(PVRTVec3(0.0f, 0.0f, -3.0f), PVRTVec3(0.0f, 0.0f, 0.0f), PVRTVec3(0.0f, 1.0f, 0.0f));
return TRUE;
}
void Draw(ESContext *esContext)
{
GLfloat vVertices[] = {0.0f, 0.5f, 0.0f,
-0.5f, -0.5f, 0.0f,
0.5f, -0.5f, 0.0f};
GLint MVPHandle;
double timelapse;
PVRTMat4 MVPMatrix = PVRTMat4::Identity();
UserData* userData = (UserData *)esContext->userData;
timelapse = difftime(time(NULL), userData->start) * 1000;
if(timelapse > 16.0f) //Maintain approx 60FPS
{
if (userData->angle > 360.0f)
{
userData->angle = 0.0f;
}
else
{
userData->angle += 0.1f;
}
}
userData->ModelMatrix = PVRTMat4::RotationY(userData->angle);
MVPMatrix = userData->ViewMatrix * userData->ModelMatrix;
MVPMatrix = userData->ProjMatrix * MVPMatrix;
MVPHandle = glGetUniformLocation(userData->programObject, "MVPMatrix");
glUniformMatrix4fv(MVPHandle, 1, FALSE, MVPMatrix.ptr());
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, vVertices);
glEnableVertexAttribArray(0);
glClear(GL_COLOR_BUFFER_BIT);
glDrawArrays(GL_TRIANGLES, 0, 3);
eglSwapBuffers(esContext->eglDisplay, esContext->eglSurface);
}
PVRTMat4::Perspective(ratio*2.0f, 2.0f, 3.0f, 7.0f, PVRTMat4::eClipspace::OGL, false, false); puts the near clipping plane at 3.0f units away from the camera (via the third argument).
PVRTMat4::LookAtLH(PVRTVec3(0.0f, 0.0f, -3.0f), PVRTVec3(0.0f, 0.0f, 0.0f), PVRTVec3(0.0f, 1.0f, 0.0f)); places the camera at (0, 0, -3), looking back at (0, 0, 0).
You generate a model matrix directly using PVRTMat4::RotationY(userData->angle); so that matrix does no translation. The triangle you're drawing remains positioned on (0, 0, 0) as per its geometry.
So what's happening is that the parts of the triangle that get closer to the camera than 3 units are being clipped by the near clipping plane. The purpose of the near clipping plane is effectively to place the lens of the camera relative to where the image will be perceived. Or it's like specifying the distance from user to screen, if you prefer.
You therefore want either to bring the near clip plane closer to the camera or to move the camera further away from the triangle.

Resources