DirectX 11 Map deviceContext failed - visual-studio

I have an application that renders a cube and does some other stuff. Unfortunately, when I try to move some vertices with the mouse, I run into a problem: when I map the vertex buffer, the vertices copied into another struct array are empty. To test the application, I put a Map right after creating the vertex buffer to see whether the debugger would show me the real numbers. That fails too: everything is populated with 0.00 values (positions, normals, tangents...). What is the problem?
Here you can find the code.
D3D11_BUFFER_DESC bd;
ZeroMemory(&bd, sizeof(bd));
bd.Usage = D3D11_USAGE_DYNAMIC;
bd.ByteWidth = CBSize(sizeof(VERTEX_PNTTB) * NumVertices);
bd.BindFlags = D3D11_BIND_VERTEX_BUFFER;
bd.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
bd.StructureByteStride = 0;
bd.MiscFlags = 0;

D3D11_SUBRESOURCE_DATA InitData;
ZeroMemory(&InitData, sizeof(InitData));
InitData.pSysMem = (void*)vertices;
InitData.SysMemPitch = 0;

ID3D11Buffer* vbuff = NULL;
hr = device->CreateBuffer(&bd, &InitData, &vbuff);
if (FAILED(hr))
    return hr;
m->vertexBuffer = vbuff;

D3D11_MAPPED_SUBRESOURCE mappedResource;
ID3D11Buffer*& buff = vbuff;
hr = g_pImmediateContext->Map(buff, 0, D3D11_MAP_WRITE_DISCARD, 0, &mappedResource);
if (SUCCEEDED(hr))
{
    VERTEX_PNTTB* vertices = (VERTEX_PNTTB*)mappedResource.pData;
    // Fill vertex buffer
    /*for (UINT i = 0; i < faces_indices.size(); i++)
    {
        vertices[faces_indices[i]].position.x += DirectX::XMFLOAT3(20*dx, 20*dy, 0).x;
        vertices[faces_indices[i]].position.y += DirectX::XMFLOAT3(20*dx, 20*dy, 0).y;
        vertices[faces_indices[i]].position.z += DirectX::XMFLOAT3(20*dx, 20*dy, 0).z;
    }*/
    g_pImmediateContext->Unmap(buff, 0);
}

Generally you don't want to create vertex or index buffers in CPU-readable memory, as it has a major negative performance impact. You'll find it's much better to have a static VB/IB plus another copy of the data in ordinary RAM for the CPU to modify.
That said, if you need to read buffer contents back, create a second buffer with D3D11_USAGE_STAGING and D3D11_CPU_ACCESS_READ | D3D11_CPU_ACCESS_WRITE, copy into it with CopyResource, and Map that copy with D3D11_MAP_READ_WRITE. A dynamic buffer itself only supports D3D11_CPU_ACCESS_WRITE.
You definitely should not expect to read anything through D3D11_MAP_WRITE_DISCARD: it hands you a fresh piece of memory with undefined contents, and whatever you leave unwritten replaces the old data when you call Unmap. That is why the mapped data does not contain your vertices.
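The pattern the first paragraph describes can be sketched as follows. Since a real ID3D11Buffer needs a device, the mapped pointer is stood in for here by a plain array, and the VERTEX_PNTTB layout is reduced to a hypothetical position-only struct:

```cpp
#include <cassert>
#include <cstring>
#include <vector>

// Hypothetical stand-in for VERTEX_PNTTB's position field.
struct Vertex { float x, y, z; };

// The pattern: mutate the authoritative CPU-side copy, then write ALL of it
// through the WRITE_DISCARD pointer. The discarded memory holds garbage, so
// a partial write would leave the rest of the buffer undefined.
void uploadAll(const std::vector<Vertex>& cpuCopy, void* mappedData) {
    std::memcpy(mappedData, cpuCopy.data(), cpuCopy.size() * sizeof(Vertex));
}
```

In the real code, `mappedResource.pData` takes the place of the stand-in destination, and the whole CPU copy is re-uploaded after every WRITE_DISCARD map.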


KNEM cookie and declared region

First question: I wasn't able to find PROT_WRITE and PROT_READ anywhere, and that's giving me a hard time compiling. I replaced them with 0 and 1, but it doesn't seem to work.
Second: the copy is "rejected (unexisting region cookie)".
int rank;
MPI_Init(&argc, &argv);
MPI_Comm_rank(MPI_COMM_WORLD, &rank);
MPI_Win win;
int knem_fd = open("/dev/knem", O_RDWR);
int err;
uint64_t size = 64;

if (rank == 0) {
    char *inbuf = malloc(size);
    for (int i = 0; i < size; i++)
        inbuf[i] = rand() % 26 + 97;
    print_array(inbuf, size, '0');

    struct knem_cmd_create_region create;
    struct knem_cmd_param_iovec knem_iov[1];
    knem_iov[0].base = (uint64_t)&inbuf;
    knem_iov[0].len = size;
    create.iovec_array = (uintptr_t)&knem_iov[0];
    create.iovec_nr = 1;
    create.flags = KNEM_FLAG_SINGLEUSE;
    //create.protection = 1;
    err = ioctl(knem_fd, KNEM_CMD_CREATE_REGION, &create);

    MPI_Send(&(create.cookie), 1, MPI_UINT64_T, 1, 0, MPI_COMM_WORLD);
    MPI_Barrier(MPI_COMM_WORLD);
} else if (rank == 1) {
    char *obuf = malloc(size);
    int err;
    struct knem_cmd_copy copy;
    struct knem_cmd_create_region create;
    struct knem_cmd_param_iovec knem_iov[1];
    knem_iov[0].base = (uint64_t)&obuf;
    knem_iov[0].len = size;
    create.iovec_array = (uintptr_t)&knem_iov[0];
    create.iovec_nr = 1;
    //create.protection = 0;
    create.flags = KNEM_FLAG_SINGLEUSE;
    err = ioctl(knem_fd, KNEM_CMD_CREATE_REGION, &create);

    MPI_Recv(&(copy.src_cookie), 1, MPI_UINT64_T, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    copy.src_offset = 0;
    copy.dst_cookie = create.cookie;
    copy.dst_offset = 0;
    copy.flags = 0;
    err = ioctl(knem_fd, KNEM_CMD_COPY, &copy);
    print_array(obuf, size, '1');
    MPI_Barrier(MPI_COMM_WORLD);
}
Ranks 0 and 1 both create a region; 0 sends its cookie to 1, and 1 grabs the data from 0. I checked that the received cookie matches the sent cookie, but the copy still fails to find the declared region.
PROT_READ and PROT_WRITE are mmap flags; you need to include sys/mman.h to get them. In the second part of the code, you need to set copy.src_cookie to create.cookie (or just use an inline copy to avoid creating that region at all, since it'll be destroyed immediately because of the SINGLEUSE flag). Also, make sure you check the return values of all ioctl calls before continuing. The copy cannot work if create.cookie wasn't initialized because the create ioctl failed.
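As a sketch of the fix, the create request should be filled in completely before the ioctl. The struct layouts below are self-contained stand-ins assumed to mirror the knem headers, not the real declarations; note also that the iovec should point at the buffer itself, not at the pointer variable as `&inbuf` does in the question:

```cpp
#include <cassert>
#include <cstdint>
#include <sys/mman.h>   // PROT_READ / PROT_WRITE live here

// Stand-ins assumed to mirror struct knem_cmd_param_iovec and
// struct knem_cmd_create_region from the KNEM headers.
struct knem_iovec { uint64_t base, len; };
struct create_req {
    uint64_t iovec_array;
    uint32_t iovec_nr;
    uint32_t flags;
    uint32_t protection;
    uint64_t cookie;
};

// Fill every field the driver will validate, including protection.
create_req make_create(const char* buf, uint64_t size, knem_iovec& iov) {
    iov.base = (uint64_t)(uintptr_t)buf;       // the buffer itself, not &buf
    iov.len  = size;
    create_req req{};
    req.iovec_array = (uintptr_t)&iov;
    req.iovec_nr    = 1;
    req.protection  = PROT_READ | PROT_WRITE;  // must be set before the ioctl
    return req;
}
```

After the real ioctl, check its return value before trusting `create.cookie`.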

MPSImageIntegral returning all zeros

I am trying to use MPSImageIntegral to calculate the sum of some elements in an MTLTexture. This is what I'm doing:
std::vector<float> integralSumData;
for (int i = 0; i < 10; i++)
    integralSumData.push_back((float)i);

MTLTextureDescriptor *textureDescriptor =
    [MTLTextureDescriptor texture2DDescriptorWithPixelFormat:MTLPixelFormatR32Float
                                                       width:integralSumData.size()
                                                      height:1
                                                   mipmapped:NO];
textureDescriptor.usage = MTLTextureUsageShaderRead | MTLTextureUsageShaderWrite;
id<MTLTexture> texture = [_device newTextureWithDescriptor:textureDescriptor];

// Calculate the number of bytes per row in the image.
NSUInteger bytesPerRow = integralSumData.size() * sizeof(float);
MTLRegion region =
{
    { 0, 0, 0 },                        // MTLOrigin
    { integralSumData.size(), 1, 1 }    // MTLSize
};
// Copy the bytes from the data object into the texture.
[texture replaceRegion:region
           mipmapLevel:0
             withBytes:integralSumData.data()
           bytesPerRow:bytesPerRow];

MTLTextureDescriptor *textureDescriptor2 =
    [MTLTextureDescriptor texture2DDescriptorWithPixelFormat:MTLPixelFormatR32Float
                                                       width:integralSumData.size()
                                                      height:1
                                                   mipmapped:NO];
textureDescriptor2.usage = MTLTextureUsageShaderRead | MTLTextureUsageShaderWrite;
id<MTLTexture> outtexture = [_device newTextureWithDescriptor:textureDescriptor2];

// Create an MPS filter.
MPSImageIntegral *integral = [[MPSImageIntegral alloc] initWithDevice:_device];
MPSOffset offset = { 0, 0, 0 };
[integral setOffset:offset];
[integral setEdgeMode:MPSImageEdgeModeZero];
[integral encodeToCommandBuffer:commandBuffer sourceTexture:texture destinationTexture:outtexture];
[commandBuffer commit];
[commandBuffer waitUntilCompleted];
But when I check my outtexture values, it's all zeros. Am I doing something wrong? Is this the correct way to use MPSImageIntegral?
I'm using the following code to read values written into the outTexture:
float outData[10];
[outtexture getBytes:outData bytesPerRow:bytesPerRow fromRegion:region mipmapLevel:0];
for (int i = 0; i < 10; i++)
    std::cout << outData[i] << "\n";
Thanks
As pointed out by @Matthijs: all I had to do was use an MTLBlitCommandEncoder to synchronize my MTLTexture before reading it on the CPU, and it worked like a charm!

Crackling audio due to wrong audio data

I'm using the Core Audio low-level API for audio capture. The app targets macOS, not iOS.
While testing it, from time to time we get a very annoying noise modulated onto the real audio. The phenomenon develops over time, starting barely noticeable and becoming more and more dominant.
Analyzing the captured audio in Audacity indicates that the end part of each audio packet is wrong.
Here is a sample picture:
The intrusion repeats every 40 ms, which is the configured packetization time (in terms of buffer samples).
Update:
Over time the gap became larger; here is another snapshot from the same captured file 10 minutes later. The gap now contains 1460 samples, which is 33 ms out of the packet's total 40 ms!
Code snippets:
Capture callback:
OSStatus MacOS_AudioDevice::captureCallback(void *inRefCon,
                                            AudioUnitRenderActionFlags *ioActionFlags,
                                            const AudioTimeStamp *inTimeStamp,
                                            UInt32 inBusNumber,
                                            UInt32 inNumberFrames,
                                            AudioBufferList *ioData)
{
    MacOS_AudioDevice* _this = static_cast<MacOS_AudioDevice*>(inRefCon);

    // Get the new audio data
    OSStatus err = AudioUnitRender(_this->m_AUHAL, ioActionFlags, inTimeStamp,
                                   inBusNumber, inNumberFrames, _this->m_InputBuffer);
    if (err != noErr)
    {
        ...
        return err;
    }

    // Ignore callback on unexpected buffer size
    if (_this->m_params.bufferSizeSamples != inNumberFrames)
    {
        ...
        return noErr;
    }

    // Deliver audio data
    DeviceIOMessage message;
    message.bufferSizeBytes = _this->m_deviceBufferSizeBytes;
    message.buffer = _this->m_InputBuffer->mBuffers[0].mData;
    if (_this->m_callbackFunc)
    {
        _this->m_callbackFunc(_this, message);
    }
    return noErr;
}
Open and start capture device:
void MacOS_AudioDevice::openAUHALCapture()
{
    UInt32 enableIO;
    AudioStreamBasicDescription streamFormat;
    UInt32 size;
    SInt32 *channelArr;
    std::stringstream ss;
    AudioObjectPropertyAddress deviceBufSizeProperty =
    {
        kAudioDevicePropertyBufferFrameSize,
        kAudioDevicePropertyScopeInput,
        kAudioObjectPropertyElementMaster
    };

    // AUHAL
    AudioComponentDescription cd = {kAudioUnitType_Output, kAudioUnitSubType_HALOutput, kAudioUnitManufacturer_Apple, 0, 0};
    AudioComponent HALOutput = AudioComponentFindNext(NULL, &cd);
    verify_macosapi(AudioComponentInstanceNew(HALOutput, &m_AUHAL));
    verify_macosapi(AudioUnitInitialize(m_AUHAL));

    // Enable input IO
    enableIO = 1;
    verify_macosapi(AudioUnitSetProperty(m_AUHAL, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Input, 1, &enableIO, sizeof(enableIO)));

    // Disable output IO
    enableIO = 0;
    verify_macosapi(AudioUnitSetProperty(m_AUHAL, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Output, 0, &enableIO, sizeof(enableIO)));

    // Setup current device
    size = sizeof(AudioDeviceID);
    verify_macosapi(AudioUnitSetProperty(m_AUHAL, kAudioOutputUnitProperty_CurrentDevice, kAudioUnitScope_Global, 0, &m_MacDeviceID, sizeof(AudioDeviceID)));

    // Set device native buffer length before setting the AUHAL stream
    size = sizeof(m_originalDeviceBufferTimeFrames);
    verify_macosapi(AudioObjectSetPropertyData(m_MacDeviceID, &deviceBufSizeProperty, 0, NULL, size, &m_originalDeviceBufferTimeFrames));

    // Get device format
    size = sizeof(AudioStreamBasicDescription);
    verify_macosapi(AudioUnitGetProperty(m_AUHAL, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 1, &streamFormat, &size));

    // Setup channel map
    assert(m_params.numOfChannels <= streamFormat.mChannelsPerFrame);
    channelArr = new SInt32[streamFormat.mChannelsPerFrame];
    for (int i = 0; i < streamFormat.mChannelsPerFrame; i++)
        channelArr[i] = -1;
    for (int i = 0; i < m_params.numOfChannels; i++)
        channelArr[i] = i;
    verify_macosapi(AudioUnitSetProperty(m_AUHAL, kAudioOutputUnitProperty_ChannelMap, kAudioUnitScope_Input, 1, channelArr, sizeof(SInt32) * streamFormat.mChannelsPerFrame));
    delete [] channelArr;

    // Setup stream converters
    streamFormat.mFormatID = kAudioFormatLinearPCM;
    streamFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger;
    streamFormat.mFramesPerPacket = m_SamplesPerPacket;
    streamFormat.mBitsPerChannel = m_params.sampleDepthBits;
    streamFormat.mSampleRate = m_deviceSampleRate;
    streamFormat.mChannelsPerFrame = 1;
    streamFormat.mBytesPerFrame = 2;
    streamFormat.mBytesPerPacket = streamFormat.mFramesPerPacket * streamFormat.mBytesPerFrame;
    verify_macosapi(AudioUnitSetProperty(m_AUHAL, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 1, &streamFormat, size));

    // Setup callbacks
    AURenderCallbackStruct input;
    input.inputProc = captureCallback;
    input.inputProcRefCon = this;
    verify_macosapi(AudioUnitSetProperty(m_AUHAL, kAudioOutputUnitProperty_SetInputCallback, kAudioUnitScope_Global, 0, &input, sizeof(input)));

    // Calculate the size of the IO buffer (in samples)
    if (m_params.bufferSizeMS != -1)
    {
        unsigned int desiredSignalsInBuffer = (m_params.bufferSizeMS / (double)1000) * m_deviceSampleRate;
        // Make sure the value stays in the device's supported range
        desiredSignalsInBuffer = std::min<unsigned int>(desiredSignalsInBuffer, m_deviceBufferFramesRange.mMaximum);
        desiredSignalsInBuffer = std::max<unsigned int>(m_deviceBufferFramesRange.mMinimum, desiredSignalsInBuffer);
        m_deviceBufferFrames = desiredSignalsInBuffer;
    }

    // Set device buffer length
    size = sizeof(m_deviceBufferFrames);
    verify_macosapi(AudioObjectSetPropertyData(m_MacDeviceID, &deviceBufSizeProperty, 0, NULL, size, &m_deviceBufferFrames));
    m_deviceBufferSizeBytes = m_deviceBufferFrames * streamFormat.mBytesPerFrame;
    m_deviceBufferTimeMS = 1000 * m_deviceBufferFrames / m_deviceSampleRate;

    // Calculate number of buffers from channels
    size = offsetof(AudioBufferList, mBuffers[0]) + (sizeof(AudioBuffer) * m_params.numOfChannels);

    // Allocate input buffer
    m_InputBuffer = (AudioBufferList *)malloc(size);
    m_InputBuffer->mNumberBuffers = m_params.numOfChannels;

    // Pre-allocate buffers for the AudioBufferList
    for (UInt32 i = 0; i < m_InputBuffer->mNumberBuffers; i++)
    {
        m_InputBuffer->mBuffers[i].mNumberChannels = 1;
        m_InputBuffer->mBuffers[i].mDataByteSize = m_deviceBufferSizeBytes;
        m_InputBuffer->mBuffers[i].mData = malloc(m_deviceBufferSizeBytes);
    }

    // Update class properties
    m_params.sampleRateHz = streamFormat.mSampleRate;
    m_params.bufferSizeSamples = m_deviceBufferFrames;
    m_params.bufferSizeBytes = m_params.bufferSizeSamples * streamFormat.mBytesPerFrame;
}

eADMReturnCode MacOS_AudioDevice::start()
{
    eADMReturnCode ret = OK;
    LOGAPI(ret);

    if (!m_isStarted && m_isOpen)
    {
        OSStatus err = AudioOutputUnitStart(m_AUHAL);
        if (err == noErr)
            m_isStarted = true;
        else
            ret = ERROR;
    }
    return ret;
}
Any idea what causes it and how to solve it?
Thanks in advance!
Periodic glitches or dropouts can be caused by not paying attention to or by not fully processing the number of frames sent to each audio callback. Valid buffers don't always contain the expected or same number of samples (inNumberFrames might not equal bufferSizeSamples or the previous inNumberFrames in a perfectly valid audio buffer).
It is possible that these types of glitches might be caused by attempting to record at 44.1k on some models of iOS devices that only support 48k audio in hardware.
Some types of glitch might also be caused by any non-hard-real-time code within your m_callbackFunc function (such as any synchronous file reads/writes, OS calls, Objective C message dispatch, GC, or memory allocation/deallocation).
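A common way to keep the callback hard-real-time-safe is to copy exactly inNumberFrames samples into a pre-allocated single-producer/single-consumer ring buffer and do all packetization on another thread. The sketch below is illustrative, not the questioner's code; the drop-on-overflow policy and the class name are assumptions:

```cpp
#include <atomic>
#include <cstddef>
#include <vector>

// SPSC ring buffer: the audio callback pushes exactly inNumberFrames samples
// (never blocking, never allocating); a worker thread pops fixed packets.
class SampleRing {
public:
    explicit SampleRing(size_t capacity) : buf_(capacity) {}

    // Called from the real-time thread.
    size_t push(const float* src, size_t n) {
        size_t w = write_.load(std::memory_order_relaxed);
        size_t r = read_.load(std::memory_order_acquire);
        size_t free = buf_.size() - (w - r);
        if (n > free) n = free;              // drop on overflow, don't block
        for (size_t i = 0; i < n; ++i)
            buf_[(w + i) % buf_.size()] = src[i];
        write_.store(w + n, std::memory_order_release);
        return n;
    }

    // Called from the worker thread.
    size_t pop(float* dst, size_t n) {
        size_t r = read_.load(std::memory_order_relaxed);
        size_t w = write_.load(std::memory_order_acquire);
        size_t avail = w - r;
        if (n > avail) n = avail;
        for (size_t i = 0; i < n; ++i)
            dst[i] = buf_[(r + i) % buf_.size()];
        read_.store(r + n, std::memory_order_release);
        return n;
    }

private:
    std::vector<float> buf_;
    std::atomic<size_t> write_{0}, read_{0};
};
```

With this in place, the callback never depends on inNumberFrames matching a configured packet size; the consumer side assembles 40 ms packets from whatever arrives.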

The ".text" section accessed in memory and loaded from disk differs for a GCC-compiled application

The content of the '.text' section is accessed using code like this:
1) For the application which is loaded into memory (i.e. executing):
//accessing code in memory
PIMAGE_DOS_HEADER pDOSHeader = NULL;
pDOSHeader = static_cast<PIMAGE_DOS_HEADER>((void*)hModule);
...
PIMAGE_NT_HEADERS pNTHeader = reinterpret_cast<PIMAGE_NT_HEADERS>((byte*)hModule + pDOSHeader->e_lfanew);
...
PIMAGE_FILE_HEADER pFileHeader = reinterpret_cast<PIMAGE_FILE_HEADER>((byte*)&pNTHeader->FileHeader);
...
PIMAGE_OPTIONAL_HEADER pOptionalHeader = reinterpret_cast<PIMAGE_OPTIONAL_HEADER>((byte*)&pNTHeader->OptionalHeader);
...
PIMAGE_SECTION_HEADER pSectionHeader = reinterpret_cast<PIMAGE_SECTION_HEADER>(
    (byte*)&pNTHeader->OptionalHeader + pNTHeader->FileHeader.SizeOfOptionalHeader);

// Iterate the section headers and select the one with the right name
const char TEXT[] = ".text";
const char BSSTEXT[] = ".textbss";
unsigned int nSectionCount = pNTHeader->FileHeader.NumberOfSections;
char szSectionName[IMAGE_SIZEOF_SHORT_NAME + 1];
szSectionName[IMAGE_SIZEOF_SHORT_NAME] = '\0';
for (unsigned int i = 0; i < nSectionCount; i++)
{
    memcpy(szSectionName, pSectionHeader->Name, IMAGE_SIZEOF_SHORT_NAME);
    if (0 == strncmp(TEXT, szSectionName, IMAGE_SIZEOF_SHORT_NAME))
    {
        break;
    }
    pSectionHeader++;
}

pVirtualAddress = (void*)(pSectionHeader->VirtualAddress);
dwCodeSize = pSectionHeader->Misc.VirtualSize;
// To calculate the real starting address of a given section in memory,
// add the base address of the image to the section's VirtualAddress.
pCodeStart = (void*)(((byte*)hModule) + (size_t)((byte*)pVirtualAddress));
pCodeEnd = (void*)((byte*)pCodeStart + dwCodeSize);
2) For the application file read from the HDD and mapped into memory:
//loading code from file and mapping
hFile = CreateFile(filename, FILE_READ_DATA, FILE_SHARE_READ, NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, 0);
...
hFileMapping = CreateFileMapping(hFile, NULL, PAGE_READONLY, 0, 0, NULL);
...
pBaseAddress = MapViewOfFile(hFileMapping, FILE_MAP_READ, 0, 0, 0);
...
PIMAGE_DOS_HEADER pDOSHeader = static_cast<PIMAGE_DOS_HEADER>(pBaseAddress);
...
PIMAGE_NT_HEADERS pNTHeader = reinterpret_cast<PIMAGE_NT_HEADERS>((PBYTE)pBaseAddress + pDOSHeader->e_lfanew);
...
PIMAGE_FILE_HEADER pFileHeader = reinterpret_cast<PIMAGE_FILE_HEADER>((PBYTE)&pNTHeader->FileHeader);
...
PIMAGE_OPTIONAL_HEADER pOptionalHeader = reinterpret_cast<PIMAGE_OPTIONAL_HEADER>((PBYTE)&pNTHeader->OptionalHeader);
PIMAGE_SECTION_HEADER pSectionHeader = reinterpret_cast<PIMAGE_SECTION_HEADER>(
    (PBYTE)&pNTHeader->OptionalHeader + pNTHeader->FileHeader.SizeOfOptionalHeader);

DWORD dwEntryPoint = pNTHeader->OptionalHeader.AddressOfEntryPoint;
UINT nSectionCount = pNTHeader->FileHeader.NumberOfSections;
const char TEXT[] = ".text";
const char BSSTEXT[] = ".textbss";
char szSectionName[IMAGE_SIZEOF_SHORT_NAME + 1];
szSectionName[IMAGE_SIZEOF_SHORT_NAME] = '\0';
for (unsigned int i = 0; i < nSectionCount; i++)
{
    memcpy(szSectionName, pSectionHeader->Name, IMAGE_SIZEOF_SHORT_NAME);
    if (0 == strncmp(TEXT, szSectionName, IMAGE_SIZEOF_SHORT_NAME))
    {
        break;
    }
    pSectionHeader++;
}

// Use PointerToRawData when probing on disk. It is where things
// are in the file - not where they will be in memory.
dwRawData = pSectionHeader->PointerToRawData;
pCodeStart = (void*)((byte*)pBaseAddress + pSectionHeader->PointerToRawData);
pEntryPoint = (void*)(((byte*)pBaseAddress) + dwEntryPoint);
dwCodeSize = pSectionHeader->Misc.VirtualSize;
pCodeEnd = (void*)((byte*)pCodeStart + pSectionHeader->Misc.VirtualSize);
If the application is built with Visual Studio, all the bytes between pCodeStart and pCodeEnd match in both cases.
But if the application is built with GCC (MinGW), the bytes immediately following pCodeStart and just before pCodeEnd are the same, yet some differing bytes appear somewhere in the middle.
Why does this happen?
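One thing any such comparison rests on is translating correctly between the two address spaces: on disk a section's bytes live at PointerToRawData, in memory at VirtualAddress, and the loader may additionally patch bytes after mapping (base relocations when the image is not loaded at its preferred base, import thunks, and similar), which is a plausible source of mid-section differences. A hedged helper for the RVA-to-file-offset translation, with made-up section values standing in for a real header:

```cpp
#include <cstdint>
#include <vector>

// Minimal stand-in for the IMAGE_SECTION_HEADER fields used here.
struct Section {
    uint32_t VirtualAddress;    // RVA of the section when loaded
    uint32_t VirtualSize;       // size in memory
    uint32_t PointerToRawData;  // offset of the section in the file
    uint32_t SizeOfRawData;     // size in the file
};

// Translate an in-memory RVA to the offset of the same byte in the file,
// or return -1 if the RVA maps to no file byte (e.g. zero-padded tail).
int64_t rvaToFileOffset(const std::vector<Section>& sections, uint32_t rva) {
    for (const Section& s : sections) {
        if (rva >= s.VirtualAddress && rva < s.VirtualAddress + s.VirtualSize) {
            uint32_t delta = rva - s.VirtualAddress;
            if (delta >= s.SizeOfRawData)
                return -1;  // exists in memory only, not on disk
            return (int64_t)s.PointerToRawData + delta;
        }
    }
    return -1;  // RVA outside every section
}
```

Only after mapping both views through a translation like this is a byte-for-byte diff meaningful; any remaining differences point at loader-applied patches.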

Windows Application (Game) using a lot of resources

I'm currently working on a game that creates a window using the Windows API. However, at the moment the process is taking up 50% of my CPU. All I am doing is creating the window and looping, using the code found below:
int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR lpCmdLine, int nShowCmd)
{
    MSG message = {0};
    WNDCLASSEX wcl = {0};

    wcl.cbSize = sizeof(wcl);
    wcl.style = CS_OWNDC | CS_HREDRAW | CS_VREDRAW;
    wcl.lpfnWndProc = WindowProc;
    wcl.cbClsExtra = 0;
    wcl.cbWndExtra = 0;
    wcl.hInstance = hInstance;
    wcl.hIcon = LoadIcon(0, IDI_APPLICATION);
    wcl.hCursor = LoadCursor(0, IDC_ARROW);
    wcl.hbrBackground = 0;
    wcl.lpszMenuName = 0;
    wcl.lpszClassName = "GL2WindowClass";
    wcl.hIconSm = 0;

    if (!RegisterClassEx(&wcl))
        return 0;

    hWnd = CreateAppWindow(wcl, "Application");
    if (hWnd)
    {
        if (Init())
        {
            ShowWindow(hWnd, nShowCmd);
            UpdateWindow(hWnd);

            while (true)
            {
                while (PeekMessage(&message, 0, 0, 0, PM_REMOVE))
                {
                    if (message.message == WM_QUIT)
                        break;
                    TranslateMessage(&message);
                    DispatchMessage(&message);
                }
                if (message.message == WM_QUIT)
                    break;

                if (hasFocus)
                {
                    elapsedTime = GetElapsedTimeInSeconds();
                    lastEarth += elapsedTime;
                    lastUpdate += elapsedTime;
                    lastFrame += elapsedTime;
                    lastParticle += elapsedTime;

                    if (lastUpdate >= (1.0f / 100.0f))
                    {
                        Update(lastUpdate);
                        lastUpdate = 0;
                    }
                    if (lastFrame >= (1.0f / 60.0f))
                    {
                        UpdateFrameRate(lastFrame);
                        lastFrame = 0;
                        Render();
                        SwapBuffers(hDC);
                    }
                    if (lastEarth >= (1.0f / 10.0f))
                    {
                        UpdateEarthAnimation();
                        lastEarth = 0;
                    }
                    if (lastParticle >= (1.0f / 30.0f))
                    {
                        particleManager->rightBooster->Update();
                        particleManager->rightBoosterSmoke->Update();
                        particleManager->leftBooster->Update();
                        particleManager->leftBoosterSmoke->Update();
                        particleManager->breakUp->Update();
                        lastParticle = 0;
                    }
                }
                else
                {
                    WaitMessage();
                }
            }
        }
        Cleanup();
        UnregisterClass(wcl.lpszClassName, hInstance);
    }
    return static_cast<int>(message.wParam);
}
where GetElapsedTimeInSeconds is:
float GetElapsedTimeInSeconds()
{
    static const int MAX_SAMPLE_COUNT = 50;
    static float frameTimes[MAX_SAMPLE_COUNT];
    static float timeScale = 0.0f;
    static float actualElapsedTimeSec = 0.0f;
    static INT64 freq = 0;
    static INT64 lastTime = 0;
    static int sampleCount = 0;
    static bool initialized = false;

    INT64 time = 0;
    float elapsedTimeSec = 0.0f;

    if (!initialized)
    {
        initialized = true;
        QueryPerformanceFrequency(reinterpret_cast<LARGE_INTEGER*>(&freq));
        QueryPerformanceCounter(reinterpret_cast<LARGE_INTEGER*>(&lastTime));
        timeScale = 1.0f / freq;
    }

    QueryPerformanceCounter(reinterpret_cast<LARGE_INTEGER*>(&time));
    elapsedTimeSec = (time - lastTime) * timeScale;
    lastTime = time;

    if (fabsf(elapsedTimeSec - actualElapsedTimeSec) < 1.0f)
    {
        memmove(&frameTimes[1], frameTimes, sizeof(frameTimes) - sizeof(frameTimes[0]));
        frameTimes[0] = elapsedTimeSec;
        if (sampleCount < MAX_SAMPLE_COUNT)
            ++sampleCount;
    }

    actualElapsedTimeSec = 0.0f;
    for (int i = 0; i < sampleCount; ++i)
        actualElapsedTimeSec += frameTimes[i];
    if (sampleCount > 0)
        actualElapsedTimeSec /= sampleCount;

    return actualElapsedTimeSec;
}
So even when I am not drawing anything, the process still takes up 50% while the window has focus. I don't understand how this is consuming so many system resources.
Am I doing something wrong?
Any help would be much appreciated, thank you!
hasFocus falls out of the sky. You'll burn 100% of a core when it is true; there's nothing inside the if() statement that will make your program wait for anything. Sleep(1) would fix that, but it isn't otherwise obvious what you intend to do.
To add to the other answers...
You need to throttle your game loop in some way. PeekMessage returns immediately if there are no messages, so you are just looping as fast as possible, consuming 100% of one of your CPU cores. Presumably you see 50% because you have a dual-core PC.
Instead of doing a Sleep() to avoid consuming 100% of the CPU cycles, call MsgWaitForMultipleObjects at the start of each loop, passing in zero handles and a small timeout equal to your minimal interval between frames. Each time it returns, it's because the timeout elapsed OR because there are messages to process. If there are messages, process them all; either way, then run a game OnNextFrame / repaint cycle. Use UpdateWindow instead of waiting for a WM_PAINT to be posted.
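The pacing described above can be sketched portably. MsgWaitForMultipleObjects plays this role on Windows; std::chrono and sleep_until stand in for it here, and the names are illustrative:

```cpp
#include <chrono>
#include <thread>

// Pace a loop to at most one iteration per `interval`, sleeping away the
// remainder instead of spinning -- the portable analogue of waiting with a
// timeout in MsgWaitForMultipleObjects.
template <typename Fn>
void runPaced(int frames, std::chrono::milliseconds interval, Fn frame) {
    auto next = std::chrono::steady_clock::now();
    for (int i = 0; i < frames; ++i) {
        frame(i);                             // process messages + one frame
        next += interval;
        std::this_thread::sleep_until(next);  // yields the CPU, no busy-wait
    }
}
```

The key property is that the idle time between frames is spent blocked in the OS scheduler rather than burning a core in PeekMessage.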
Your application is running in a tight loop and burning CPU without doing anything useful. You have to add something like Sleep(1) to the loop.
The example you provided is a degenerate case and doesn't really have to be fixed. If you're going to make a game out of this, you will put the frame-update and render functionality into the loop and it will be doing useful work; in that case you wouldn't want to sleep away any cycles.
If I were you, I wouldn't start making a game from scratch and would look for a game engine instead. There are plenty of high-quality free game engines available, both open and closed source. Or at least look for a simple game skeleton that provides the basics, like a message loop and a double-buffered window drawing setup.
