Where is the memory leak in this C++ OpenCV code?

This is the code:
CvMemStorage* mem123 = cvCreateMemStorage(0);
CvSeq* ptr123;
CvRect face_rect123;
CvHaarClassifierCascade* cascade123 = (CvHaarClassifierCascade*)cvLoad("haarcascade_frontalface_alt2.xml"); // detects the face if it's frontal

void HeadDetection(IplImage* frame, CvRect* face) {
    ptr123 = cvHaarDetectObjects(frame, cascade123, mem123, 1.2, 2, CV_HAAR_DO_CANNY_PRUNING);
    if (!ptr123) { return; }
    if (!(ptr123->total)) { return; }
    face_rect123 = *(CvRect*)cvGetSeqElem(ptr123, 0); // CvRect face_rect123 holds the position of the rectangle
    face->height = face_rect123.height;
    face->width  = face_rect123.width;
    face->x      = face_rect123.x;
    face->y      = face_rect123.y;
    return;
} // detects the position of the head; it is fed into CvRect* face as a rectangle

int main() {
    IplImage* oldframe = cvCreateImage(cvSize(640, 480), 8, 3);
    CvCapture* capture = cvCaptureFromCAM(CV_CAP_ANY);
    CvRect a; a.height = 0; a.width = 0; a.x = 0; a.y = 0;
    while (1) {
        oldframe = cvQueryFrame(capture); // real frame captured, size 640x480
        cvFlip(oldframe, oldframe, 1);
        cvResize(oldframe, frame);        // frame scaled down 4 times
        HeadDetection(frame, &a);
        cvShowImage("frame", frame);
        cvWaitKey(1);
    }
}
Here if "HeadDetection(frame,&a);" is commented, then using task manager i see that angledetection.exe (name of my project) consumes 20188 Kb memory (No memory leak happening then).
However if I don't comment that the taskmanager shows that some memory leak is happening (around 300Kb/s )
I'm using VS 2010 on 64 bit windows 7 bit OS (core 2 duo).
This code is trying to detect face and get the four corners of square by haar detection in OpenCV 2.1
In case anything is unclear please ask. :-)
Thanks in advance.

cvHaarDetectObjects returns a pointer to a CvSeq whose data lives inside mem123, and that storage is never cleared, so every call to HeadDetection keeps adding to it. Call cvClearMemStorage(mem123) each time before running the detector, and release the storage and the cascade when you are done with them. (face_rect123 itself is a plain struct copied by value; it does not need freeing.)
By the way, you should consider refactoring the code and giving the variables better names.
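For illustration, a minimal sketch of that fix using the question's own names (the surrounding globals are assumed to stay as they are):

void HeadDetection(IplImage* frame, CvRect* face) {
    // Reuse the storage, but clear it on every call so the CvSeq data
    // from previous detections does not keep piling up in mem123.
    cvClearMemStorage(mem123);

    CvSeq* faces = cvHaarDetectObjects(frame, cascade123, mem123,
                                       1.2, 2, CV_HAAR_DO_CANNY_PRUNING);
    if (!faces || faces->total == 0) return;

    *face = *(CvRect*)cvGetSeqElem(faces, 0);
}

// ...and once detection is no longer needed:
// cvReleaseMemStorage(&mem123);
// cvReleaseHaarClassifierCascade(&cascade123);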

Related

How to DEBUG OpenGL a gray/black texture box?

I'm altering someone else's code. They used PNGs, which are loaded via BufferedImage. I need to load a TGA instead, which is just an 18-byte header followed by BGR pixel data. I have the textures loaded and running, but I get a gray box instead of the texture, and I don't even know how to debug this.
Textures are loaded in a ByteBuffer:
final static int datasize = (WIDTH*HEIGHT*3) *2; // Double buffer size for OpenGL // not +18 no header
static ByteBuffer buffer = ByteBuffer.allocateDirect(datasize);
FileInputStream fin = new FileInputStream("/Volumes/RAMDisk/shot00021.tga");
FileChannel inc = fin.getChannel();
inc.position(18); // skip header
buffer.clear(); // prepare for read
int ret = inc.read(buffer);
fin.close();
I've followed this: how-to-manage-memory-with-texture-in-opengl, because I am updating the texture once per frame, like video.
Called once:
GL11.glBindTexture(GL11.GL_TEXTURE_2D, textureID);
GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_WRAP_S, GL11.GL_CLAMP);
GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_WRAP_T, GL11.GL_CLAMP);
GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MAG_FILTER, GL11.GL_NEAREST);
GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MIN_FILTER, GL11.GL_NEAREST);
GL11.glTexImage2D(GL11.GL_TEXTURE_2D, 0, GL11.GL_RGB, width, height, 0, GL11.GL_RGB, GL11.GL_UNSIGNED_BYTE, (ByteBuffer) null);
assert(GL11.GL_NO_ERROR == GL11.glGetError());
Called repeatedly:
GL11.glBindTexture(GL11.GL_TEXTURE_2D, textureID);
GL11.glTexSubImage2D(GL11.GL_TEXTURE_2D, 0, 0, 0, width, height, GL11.GL_RGB, GL11.GL_UNSIGNED_BYTE, byteBuffer);
assert(GL11.GL_NO_ERROR == GL11.glGetError());
return textureID;
The render code hasn't changed and is based on:
GL11.glDrawArrays(GL11.GL_TRIANGLES, 0, this.vertexCount);
Make sure you set the texture sampling mode, especially the min filter: glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR). The default setting is mipmapped (GL_NEAREST_MIPMAP_LINEAR), so unless you upload mipmaps the texture is incomplete and sampling it will not return your image.
So either set the texture to a non-mipmapped filter or generate the mipmaps, for example by calling glGenerateMipmap after the tex image call.
(see https://www.khronos.org/opengles/sdk/docs/man/xhtml/glTexParameter.xml).
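For example, a minimal sketch of either option in plain C-style GL (the LWJGL calls map one-to-one; width, height and pixels are placeholders for your own data):

/* Option 1: no mipmaps, plain linear (or nearest) sampling */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

/* Option 2: keep a mipmapped min filter, but actually generate the levels
   after uploading the base image (requires GL 3.0+ or a suitable extension) */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0,
             GL_RGB, GL_UNSIGNED_BYTE, pixels);
glGenerateMipmap(GL_TEXTURE_2D);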
It's a very common GL pitfall, the kind of thing people only tend to know after getting bitten by it a few times.
There is no easy way to debug issues like this. There are good GL debugging tools in Xcode, for example, but they will not tell you about this case.
Debugging GPU code is always a hassle. I would bet my money on big industry progress in this area as more companies discover the power of the GPU. Until then, I'll share my two best GPU debugging friends:
1) Define a function to print OGL errors:
#include <stdio.h>
#include <GL/gl.h>   /* adjust the GL headers for your platform, e.g. <OpenGL/gl.h> on macOS */
#include <GL/glu.h>  /* for gluErrorString */

/* Returns 1 if an OpenGL error occurred, 0 otherwise. */
int printOglError(const char *file, int line)
{
    GLenum glErr;
    int retCode = 0;

    glErr = glGetError();
    while (glErr != GL_NO_ERROR) {
        printf("glError in file %s # line %d: %s\n", file, line, gluErrorString(glErr));
        retCode = 1;
        glErr = glGetError();
    }
    return retCode;
}

#define printOpenGLError() printOglError(__FILE__, __LINE__)
And call it after your render draw calls (possible earlier errors will also show up):
GL11.glDrawArrays(GL11.GL_TRIANGLES, 0, this.vertexCount);
printOpenGLError();
This alerts you when you perform an invalid operation (which might just be your case), but you usually still have to find where the error occurs by trial and error.
2) Check out gDEBugger, free software with tons of GPU memory information.
[Edit]:
I would also recommend the open-source library DevIL; it's quite competent at loading various image formats.
Thanks to Felix: by not calling glTexSubImage2D (leaving the memory valid but uninitialized) I noticed a remnant pattern left by the default memory. This indicated that the texture was being displayed, but that the load was most likely the problem.
UPDATE:
The problem with the code above is essentially the buffer. The buffer is 1024*1024, but it is only partially filled by the read, leaving the limit marker of the ByteBuffer at 2359296 (1024*768*3) instead of 3145728 (1024*1024*3). This gives the error:
Number of remaining buffer elements is ..., must be at least ...
I thought OpenGL needed extra space to return data, so I doubled the size of the buffer to compensate for the error:
final static int datasize = (WIDTH*HEIGHT*3) *2; // Double buffer size for OpenGL // not +18 no header
That is wrong. What is needed is the flip() function (big thanks to Reto Koradi for the hint about rewinding the buffer) to put the ByteBuffer into read mode. Because the buffer is only partially full, the OpenGL buffer check reports an error. The correct fix is not to double the buffer size; instead, advance buffer.position(buffer.capacity()) so the whole buffer counts as filled, and then call flip():
final static int datasize = (WIDTH*HEIGHT*3); // not +18 no header
buffer.clear(); // prepare for read
int ret = inc.read(buffer);
fin.close();
buffer.position(buffer.capacity()); // make sure buffer is completely FILLED!
buffer.flip(); // flip buffer to read mode
To figure this out, it helped to hardcode the contents of the buffer to make sure the OpenGL calls were working, isolating the load problem. Once the OpenGL calls are correct, concentrate on the loading of the buffer. As Felix K suggested, it is good to make sure one texture has been drawn correctly before calling glTexSubImage2D repeatedly.
Some ideas which might cause the issue:
- Your texture is disposed somewhere. I don't know the whole code, but I guess there is a glDeleteTextures somewhere, and that could cause issues if it is called at the wrong time.
- Are the texture width and height powers of two? If not, this might be an issue depending on your hardware; old hardware sometimes won't support non-power-of-two images.
- The texture parameters changed between the draw calls at some other point (make a debug check of the parameters with glGetTexParameter; see the sketch after this list).
- There could be a loading issue when loading the next image (edit: or even the first image). Check whether the first image is displayed without loading the next images; if so, it must be one of the cases above.
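A hypothetical debug check for the third point, dumping the sampling state of the currently bound texture right before the draw call (C-style GL; printf assumes <stdio.h>):

GLint minFilter = 0, magFilter = 0, wrapS = 0, wrapT = 0;
glGetTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, &minFilter);
glGetTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, &magFilter);
glGetTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, &wrapS);
glGetTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, &wrapT);
printf("min=0x%x mag=0x%x wrapS=0x%x wrapT=0x%x\n",
       minFilter, magFilter, wrapS, wrapT);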

iOS 8, Xcode 6: how to get memory usage programmatically, as shown by Xcode

I am using the following to get the memory usage:
#include <mach/mach.h>

struct task_basic_info info;
mach_msg_type_number_t sizeNew = sizeof(info);
kern_return_t kerr = task_info(mach_task_self(),
                               TASK_BASIC_INFO,
                               (task_info_t)&info,
                               &sizeNew);
if (kerr == KERN_SUCCESS) {
    printf("Memory in use (in bytes): %u", info.resident_size);
} else {
    printf("Error with task_info(): %s", mach_error_string(kerr));
}
But the memory returned by this is much higher than what Xcode 6 shows. Is anyone else facing the same issue?
Resident set size (RSIZE) is not the same as the 'amount of memory used'; it includes the code as well.
You're probably looking for the equivalent of RPRVT from the top program.
Obtaining that information requires walking the VM information for the process. Using the code in libtop.c, function libtop_update_vm_regions, as a template, you would need to walk the entire memory map and add up all the private pages. There is a simpler example of walking the address space which can be used as a basis for calculating this size. You're looking for the VPRVT value, not the RPRVT value.
I don't currently have a mac to hand to write out an example with any degree of confidence that would work.
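For reference, a rough, untested sketch of that walk based on the libtop approach (the VM_REGION_TOP_INFO flavor), summing resident private pages for the current task. The exact set of Mach VM calls available on iOS may differ from macOS, so treat this as an assumption-laden starting point rather than working code:

#include <mach/mach.h>
#include <mach/mach_vm.h>
#include <stdint.h>

static uint64_t ApproxPrivateResidentBytes(void)
{
    mach_vm_address_t addr = 0;
    uint64_t total = 0;

    for (;;) {
        mach_vm_size_t size = 0;
        vm_region_top_info_data_t info;
        mach_msg_type_number_t count = VM_REGION_TOP_INFO_COUNT;
        mach_port_t objectName = MACH_PORT_NULL;

        kern_return_t kr = mach_vm_region(mach_task_self(), &addr, &size,
                                          VM_REGION_TOP_INFO,
                                          (vm_region_info_t)&info,
                                          &count, &objectName);
        if (kr != KERN_SUCCESS)
            break; // ran past the last region

        // Count only plainly private resident pages; libtop also handles
        // SM_COW and other share modes, which this sketch ignores.
        if (info.share_mode == SM_PRIVATE)
            total += (uint64_t)info.private_pages_resident * vm_page_size;

        addr += size; // continue with the next region
    }
    return total;
}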

Direct2D API calls stall at specific intervals

I am working on migrating the drawing code of an application from GDI/GDI+ to Direct2D. So far things have been going well - however, while testing the new code, I have noticed some bizarre performance. The flow of execution I have been investigating is as follows (I have done my best to remove irrelevant code):
Create D2D Factory (on creation of app)
HRESULT hr = S_OK;
hr = D2D1CreateFactory(D2D1_FACTORY_TYPE_MULTI_THREADED, &m_pD2DFactory);
if (hr == S_FALSE) {
    ASSERT(FALSE);
    throw Exception(CExtString(_T("Failed to create Direct2D factory")));
}
OnDraw Callback
HWND hwnd = GetSafeHwnd();
RECT rc;
GetClientRect(&rc);
D2D1_SIZE_U size = D2D1::SizeU(rc.right - rc.left, rc.bottom - rc.top);

// Create a render target if it has been destroyed
if (!m_pRT) {
    D2D1_RENDER_TARGET_PROPERTIES props = D2D1::RenderTargetProperties(
        D2D1_RENDER_TARGET_TYPE_DEFAULT,
        D2D1::PixelFormat(
            DXGI_FORMAT_B8G8R8A8_UNORM,
            D2D1_ALPHA_MODE_IGNORE),
        0,
        0,
        D2D1_RENDER_TARGET_USAGE_NONE,
        D2D1_FEATURE_LEVEL_DEFAULT);

    GetD2DFactory()->CreateHwndRenderTarget(props,
        D2D1::HwndRenderTargetProperties(hwnd, size),
        &m_pRT);
}

m_pRT->Resize(size);
m_pRT->BeginDraw();

// Begin drawing the layers, given the
// transformation matrix and some geometric information
Draw(m_pRT, matrixD2D, rectClipWorld, rectClipDP);

HRESULT hr = m_pRT->EndDraw();
if (hr == D2DERR_RECREATE_TARGET) {
    SafeRelease(m_pRT);
}
The contents of the Draw method
The draw method does a lot of fluff that is largely irrelevant to this test (as I have turned all extraneous layers off), but it eventually draws a layer that executes this method several thousand times:
void DrawStringWithEffects(ID2D1RenderTarget* m_pRT, const CString& text, const D2D1_POINT_2F& point, const COLORREF rgbFore, const COLORREF rgbBack, IDWriteTextFormat* pfont) {
    // The text will be vertically centered around point.y, with point.x on the left hand side

    // Create a TextLayout for the string
    IDWriteTextLayout* textLayout = NULL;
    GetDWriteFactory()->CreateTextLayout(text,
        text.GetLength(),
        pfont,
        std::numeric_limits<float>::infinity(),
        std::numeric_limits<float>::infinity(),
        &textLayout);

    DWRITE_TEXT_METRICS metrics = {0};
    textLayout->GetMetrics(&metrics);
    D2D1_RECT_F rect = D2D1::RectF(point.x, point.y - metrics.height/2, point.x + metrics.width, point.y + metrics.height/2);
    D2D1_POINT_2F pointDraw = point;
    pointDraw.y -= metrics.height/2;

    ID2D1SolidColorBrush* brush = NULL;
    m_pRT->CreateSolidColorBrush(ColorD2DFromCOLORREF(rgbBack), &brush);

    m_pRT->FillRectangle(rect, brush);
    // ^^ this is sometimes very slow!

    brush->SetColor(ColorD2DFromCOLORREF(rgbFore));
    m_pRT->DrawTextLayout(pointDraw, textLayout, brush, D2D1_DRAW_TEXT_OPTIONS_NONE);
    // ^^ this is also sometimes very slow!

    SafeRelease(&brush);
    SafeRelease(&textLayout);
}
The vast majority of the time, the Direct2D calls are executing ~3-4 times faster than the GDI+ equivalents, which is great (generally 0.1ms compared to ~0.35ms). For some reason, though, the function calls will occasionally stall for a long period of time - upwards of 200ms combined. The offending calls are straight from the Direct2D API - FillRectangle and DrawTextLayout. Strangely, these stalls appear in the same location every time I run the application - the 73rd occurrence of the loop, then the 218th, then the 290th and so on (there is somewhat of a pattern in the differences, alternating between every ~73rd and every ~145th cycle). This is independent of the data that it draws (when I told it to skip drawing the 73rd cycle, the next cycle simply becomes the 73rd and thus stalls).
I thought this may be a GPU/CPU communication issue, so I set the render target (I am using an HWnd target) to software mode (D2D1_RENDER_TARGET_TYPE_SOFTWARE), and the results were even more strange. The stall times dropped from ~200ms to ~20ms (still not great, but hey), but there were two instances that stalled for over 2500ms! (These two, like the rest of the stalls, are completely reproducible in terms of being the n'th API call).
This is rather frustrating, as 99% of the loop is several times faster than the old implementation, but the (less than) 1% remaining hang for an abnormally long time.
To any Direct2D experts out there - what type of problem might this stalling be a symptom of? What, in general, could be causing this disconnect between my code and what D2D is doing in the background?
Direct2D buffers drawing commands (presumably to optimize them). You can't look at the performance of an individual drawing command; you must look at the total time between BeginDraw() and EndDraw(). If you want to force each drawing command to execute immediately, you must follow each one with a call to Flush(), but that is probably a bad idea for performance.
https://msdn.microsoft.com/en-us/library/windows/desktop/dd371768(v=vs.85).aspx
After BeginDraw is called, a render target will normally build up a batch of rendering commands, but defer processing of these commands until either an internal buffer is full, the Flush method is called, or until EndDraw is called.
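To see that in the numbers, one way (a sketch, not taken from the question's code) is to time the whole BeginDraw/EndDraw pair instead of the individual calls; the hypothetical helper below wraps the question's Draw routine in a callable:

#include <windows.h>
#include <d2d1.h>

// Times one complete batch: commands are only recorded between BeginDraw
// and EndDraw, and the deferred work is executed inside EndDraw (or Flush).
template <typename DrawFn>
double TimeBatchMs(ID2D1RenderTarget* pRT, DrawFn draw)
{
    LARGE_INTEGER freq, t0, t1;
    QueryPerformanceFrequency(&freq);

    pRT->BeginDraw();
    QueryPerformanceCounter(&t0);
    draw(pRT);              // e.g. Draw(pRT, matrixD2D, rectClipWorld, rectClipDP);
    pRT->EndDraw();         // the recorded commands actually run here
    QueryPerformanceCounter(&t1);

    return 1000.0 * double(t1.QuadPart - t0.QuadPart) / double(freq.QuadPart);
}

Per-call timings taken inside Draw mostly measure command recording; stalls that land on the same call indices every run would be consistent with the internal buffer filling up and the batch being flushed at that point.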

Fix for DirectX 7 latency on Windows 7?

We have a piece of software that is programmed against the DirectX 7 SDK (i.e. the code uses LPDIRECTDRAWSURFACE7 and the like) and runs fullscreen. The main task is putting something on the screen in response to external triggers in a reliable manner. This behaves very well on Windows XP: basically the software waits for a trigger and, when triggered, creates a new frame, puts it in the backbuffer, then tells DX to flip the buffers. The result is that the approximate delay between the trigger and the frame effectively appearing on the screen is, depending on video card and drivers, 3 frames or 50 ms on a 60 Hz screen. This has been tested on a variety of systems, all running NVIDIA cards. On some systems with higher-end cards we even get 2 frames.
When running the same software on Windows 7 (with no other software installed at all), however, we cannot get lower than 5 frames. Somewhere in the pipeline the OS, the driver, or both eat 2 extra frames, which is close to unacceptable for this application. We tried disabling Aero/desktop composition, different driver versions, and different video cards, but to no avail.
Where does this come from? Is this documented somewhere?
Is there an easy way to fix it? I know DirectX 7 is old, but upgrading to compile against a more recent version could be tons of work, so another kind of fix would be nice. Maybe some flag that can be set in code?
Edit: here is some code which seems relevant.
Creation of front/back surfaces:
ddraw7->SetCooperativeLevel( GetSafeHwnd(),
    DDSCL_EXCLUSIVE | DDSCL_FULLSCREEN | DDSCL_ALLOWMODEX | DDSCL_MULTITHREADED );

DDSURFACEDESC2 desc;
ZeroMemory( &desc, sizeof(desc) );
desc.dwSize = sizeof( desc );
desc.dwFlags = DDSD_CAPS | DDSD_BACKBUFFERCOUNT;
desc.ddsCaps.dwCaps = DDSCAPS_PRIMARYSURFACE | DDSCAPS_FLIP |
                      DDSCAPS_COMPLEX | DDSCAPS_3DDEVICE |
                      DDSCAPS_VIDEOMEMORY | DDSCAPS_LOCALVIDMEM;
desc.dwBackBufferCount = 1;
ddraw7->CreateSurface( &desc, &primsurf, 0 );

DDSCAPS2 surfcaps;
ZeroMemory( &surfcaps, sizeof( surfcaps ) );
surfcaps.dwCaps = DDSCAPS_BACKBUFFER;
primsurf->GetAttachedSurface( &surfcaps, &backsurf );
Creation of surfaces used to render frames before they get drawn:
DDSURFACEDESC2 desc;
ZeroMemory( &desc, sizeof(desc) );
desc.dwSize = sizeof(desc);
desc.dwFlags = DDSD_WIDTH | DDSD_HEIGHT | DDSD_CAPS ;
desc.dwWidth = w;
desc.dwHeight = h;
desc.ddsCaps.dwCaps = DDSCAPS_OFFSCREENPLAIN | DDSCAPS_VIDEOMEMORY;
desc.ddpfPixelFormat.dwSize = sizeof( DDPIXELFORMAT );
desc.ddpfPixelFormat.dwFlags = DDPF_PALETTEINDEXED8;
LPDIRECTDRAWSURFACE7 surf;
HRESULT r = ddraw7->CreateSurface( &desc, &surf, 0 );
Rendering loop, in OnIdle:
// clear surface
DDBLTFX bltfx;
ZeroMemory( &bltfx, sizeof(bltfx) );
bltfx.dwSize = sizeof( bltfx );
bltfx.dwFillColor = RGBtoPixel( r, g, b );
backsurf->Blt( rect, 0, 0, DDBLT_COLORFILL | DDBLT_WAIT, &bltfx );

// blit some prerendered surface onto it (x/y/rect etc. are calculated properly)
backsurf->BltFast( x, y, sourceSurf, &sourceRect, DDBLTFAST_WAIT );

primsurf->Flip( 0, DDFLIP_WAIT );
primsurf->Blt( &drect, backsurf, &srect, DDBLT_WAIT, 0 );
I think the Windows XP comparison is a red herring. The last version of Windows that ran DirectX 7 directly was Windows 2000; Windows XP is just emulating DX7 on top of DX9, the same as Windows 7 does.
I'll venture a guess that your application uses palettized textures, and that when DX emulates that functionality (it was dropped after DX7) it generates a texture from the indexed colors. You might try profiling the app with GPUView to see if there is a delay in pushing the texture to the GPU; for example, perhaps the Win7 driver is compressing it first?

XNA 4.0: InvalidOperationException was unhandled

I am using this tutorial to learn a little XNA, and I keep running into problems. I've had to convert a lot of the code, since it seems the tutorial does not use XNA 4.0.
But let's cut to the chase!
float aXPosition = (float)(-mCarWidth / 2 + mCarPosition.X + aMove * Math.Cos(mCarRotation));
float aYPosition = (float)(-mCarHeight / 2 + mCarPosition.Y + aMove * Math.Sin(mCarRotation));

Texture2D aCollisionCheck = CreateCollisionTexture(aXPosition, aYPosition);

// Use GetData to fill an array with the colors of the pixels in the collision texture
int aPixels = mCarWidth * mCarHeight;
Color[] myColors = new Color[aPixels];
aCollisionCheck.GetData<Color>(0, new Rectangle((int)(aCollisionCheck.Width / 2 - mCarWidth / 2),
    (int)(aCollisionCheck.Height / 2 - mCarHeight / 2), mCarWidth, mCarHeight), myColors, 0, aPixels);
The error I get when I try to debug the code says: InvalidOperationException was unhandled, "The render target must not be set on the device when it is used as a texture."
I have no clue what to do.
This basically means exactly what it says: you have to unset the render target from the device by calling GraphicsDevice.SetRenderTarget(null) (or by setting a different render target), because you can't use it as both a source texture and a destination buffer at the same time.
Keep in mind that, in this version of XNA, there is no ResolveRenderTarget; render targets simply are textures.
Note that the tutorial you are using is pretty terrible: reading back from a render target like this is extremely slow, especially since the operations it uses the render target for (selecting pixels in a transformed region) could easily be done efficiently on the CPU. Consider using this better, official example instead.
