Segmentation fault operating a camera with MATLAB - image

I am using Matlab to operate a camera. It is an IDT SharpVision camera, and I am using the Matlab interface provided by the company. When I try to acquire an image, I get a segmentation fault. I have tried preallocating memory by creating an empty array for the image, but this does not work.
This is the line of code that causes the seg fault:
[nResult, x] = sharpML('IdtSvAcquire',cameraId);
sharpML.dll includes a MEX file for controlling the camera.
Here is a selection from the error message stack trace:
[0] QCamChildDriver.dll:0x160fdde4(0x0f99ef08, 15, 0x00ced938, 0x00ced938)
[1] QCamDriver.dll:0x0f9c1dd8(4146, 0x00ced938, 0x00ced924, 0x11283430)
[2] sharpML.dll:0x0f991d8c(2, 0x00cedf88, 2, 0x00cedfe8)
[3] sharpML.dll:0x0f991448(2, 0x00cedf88, 2, 0x00cedfe8)
...
[35] MATLAB.exe:0x00403bd2(1109972, 0, 0x7ffd9000, 0x805512fa)
[36] kernel32.dll:0x7c817077(0x00403daf, 0, 0x78746341, 32)
Any suggestions? The company that makes the camera has since gone out of business.
~ Adam

This sounds like a driver issue since the fault is occurring here:
QCamChildDriver.dll:0x160fdde4(0x0f99ef08, 15, 0x00ced938, 0x00ced938)
One possible issue: the driver might be in conflict with your OS, especially if you are running Vista or any 64-bit OS.
If it is a driver issue, you might be able to find updated drivers somewhere online, even if the company is gone.
Other than that you might be up a creek, unless you can find the C source for sharpML and/or the driver.

Problem solved:
I reinstalled the camera software and relevant QCam drivers, along with cleaning up a few other bugs.

If your camera uses FireWire, you could try this tool.

LoadLibrary() fails with error 8 (ERROR_NOT_ENOUGH_MEMORY)

Later edit: After more investigation, the Windows Updates and the OpenGL DLL were red herrings. The cause of these symptoms was a LoadLibrary() call failing with GetLastError() == ERROR_NOT_ENOUGH_MEMORY. See my answer for how to solve such issues. Below is the original question for historical interest. /edit
A map viewer I wrote in Python/wxPython for Windows with a C++ backend suddenly stopped working, without any code changes or even recompiling. The very same executables had been working for weeks before (same Python, same DLLs, ...).
Now, when querying Windows for a pixel format to use with OpenGL (with ChoosePixelFormat()), I get a MessageBox saying:
LoadLibrary failed with error 8:
Not enough storage is available to process this command
The error message is displayed when executing the following code fragment:
void DevContext::SetPixelFormat() {
    PIXELFORMATDESCRIPTOR pfd;
    memset(&pfd, 0, sizeof(pfd));
    pfd.nSize = sizeof(pfd);
    pfd.nVersion = 1;
    pfd.dwFlags = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL;
    pfd.iPixelType = PFD_TYPE_RGBA;
    pfd.cColorBits = 32;

    int pf = ChoosePixelFormat(m_hdc, &pfd); // <-- ERROR OCCURS IN HERE
    if (pf == 0) {
        throw std::runtime_error("No suitable pixel format.");
    }
    if (::SetPixelFormat(m_hdc, pf, &pfd) == FALSE) {
        throw std::runtime_error("Cannot set pixel format.");
    }
}
It's actually an ATI GL driver DLL showing the message box. The relevant part of the call stack is this:
... More MessageBox stuff
0027e860 770cfcf1 USER32!MessageBoxTimeoutA+0x76
0027e880 770cfd36 USER32!MessageBoxExA+0x1b
*** ERROR: Symbol file not found. Defaulted to export symbols for C:\Windows\SysWOW64\atiglpxx.dll -
0027e89c 58471df1 USER32!MessageBoxA+0x18
0027e9d4 58472065 atiglpxx+0x1df1
0027e9dc 57acaf0b atiglpxx!DrvValidateVersion+0x13
0027ea00 57acb0f3 OPENGL32!wglSwapMultipleBuffers+0xc5e
0027edf0 57acb1a9 OPENGL32!wglSwapMultipleBuffers+0xe46
0027edf8 57acc6a4 OPENGL32!wglSwapMultipleBuffers+0xefc
0027ee0c 57ad5658 OPENGL32!wglGetProcAddress+0x45f
0027ee28 57ad5dd4 OPENGL32!wglGetPixelFormat+0x70
0027eec8 57ad6559 OPENGL32!wglDescribePixelFormat+0xa2
0027ef48 751c5ac7 OPENGL32!wglChoosePixelFormat+0x3e
0027ef60 57c78491 GDI32!ChoosePixelFormat+0x28
0027f0b0 57c7867a OutdoorMapper!DevContext::SetPixelFormat+0x71 [winwrap.cpp # 42]
0027f1a0 57ce3120 OutdoorMapper!OGLContext::OGLContext+0x6a [winwrap.cpp # 61]
0027f224 1e0acdf2 maplib_sip!func_CreateOGLDisplay+0xc0 [maps.sip # 96]
0027f240 1e0fac79 python33!PyCFunction_Call+0x52
... More Python stuff
I did a Windows Update two weeks ago and noticed some glitches (e.g. when resizing the window), but my program still worked mostly OK. Just now I rebooted, Windows installed one more update, and I don't get past ChoosePixelFormat() any more. However, the last installed update was KB2998527, a Russia timezone update?!
Things that I already checked:
Recompiling doesn't help.
Rebooting and running without other programs doesn't help.
Memory consumption of my program is only 67 MB, so I'm not out of memory.
Plenty of disk space free (~50 GB).
The HDC m_hdc is obtained from the display panel's HWND and seems to be valid.
Changing my linker command line doesn't help.
Should I update my graphics drivers or roll back the updates? Any other ideas?
System data dump: Windows 7 Ultimate SP1 x64, 4GB RAM; HP EliteBook 8470p; Python 3.3, wxPython 3.0.1.dev76673 msw (phoenix); access to C++ data structures via SIP 4.15.4; C++ code compiled with Visual Studio 2010 Express, Debug build with /MDd.
I was running out of virtual address space.
By default, LibTIFF reads TIF images by memory-mapping them (mmap() or CreateFileMapping()). This is fine for pictures of your wife, but it turns out it's a bad idea for gigabytes worth of topographic raster-maps of the Alps.
This was difficult to diagnose, because LibTIFF silently fell back to read() if the memory mapping failed, so there was never an explicit error before. Further, mapped memory is not counted as working memory by Windows, so Task Manager was showing 67 MB when in fact nearly all of the virtual address space was used up.
This blew up now because I added more TIF images to my database recently. LoadLibrary() started failing because it couldn't find any address space to put the new library. GetLastError() returned 8, which is ERROR_NOT_ENOUGH_MEMORY. That this happened within ATI's OpenGL library was just coincidence.
The solution was to pass "m" as a flag to TIFFOpen() to disable memory-mapped I/O.
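A minimal sketch of that change (the wrapper function and path handling are placeholders I added, not part of the original code):

#include <tiffio.h>

// Open a TIFF for reading with memory-mapped I/O disabled: appending 'm' to
// the mode string tells LibTIFF not to mmap()/CreateFileMapping() the file.
TIFF* OpenTiffWithoutMmap(const char* path) {
    TIFF* tif = TIFFOpen(path, "rm");   // "r" = read, "m" = no memory mapping
    if (tif == nullptr) {
        // LibTIFF has already reported the failure via its error handler.
        return nullptr;
    }
    return tif;
}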
Diagnosing this is easy with the Windows SysInternals tool VMMap (documentation link), which shows you how much of the virtual address space of a process is taken up by code/heap/stack/mapped files/shareable data/etc.
This should be the first thing to check if LoadLibrary() or CreateFileMapping() fails with ERROR_NOT_ENOUGH_MEMORY.
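As a rough programmatic complement to VMMap, a sketch like the following (using GlobalMemoryStatusEx; the function name is mine) prints how much of the process's virtual address space is left:

#include <windows.h>
#include <cstdio>

// Report total and remaining virtual address space of the current process.
void PrintVirtualAddressSpace() {
    MEMORYSTATUSEX status = {};
    status.dwLength = sizeof(status);
    if (GlobalMemoryStatusEx(&status)) {
        std::printf("virtual address space: %llu MB total, %llu MB free\n",
                    status.ullTotalVirtual / (1024 * 1024),
                    status.ullAvailVirtual / (1024 * 1024));
    }
}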

Zero Opengl 3.2 pixelformat matches found?

Today I finally found out what has been stalling my development process: even though no error code is set, the function wglChoosePixelFormatARB returns 0 pixel formats.
I am trying to set up an OpenGL context in my C++ application and I have managed to retrieve the function pointers for the extensions.
glGetIntegerv(GL_MAJOR_VERSION, &maj)
returns 4 so, naturally, I assumed it would be possible to create an OpenGL 3.2 context. However, after finding out there were no matches, I started commenting out some of the requirements in the attribList parameter. There were still no matches whatsoever.
Only when I, just to be certain, commented out
WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
WGL_CONTEXT_MINOR_VERSION_ARB, 2,
did I finally get matches. Out of the 8 pixel formats that match the other requirements, not ONE of them seems to support version 3 of OpenGL.
Has anyone ever run into this? I have tried updating/reinstalling my video drivers, but nothing has changed. I am running this on Windows 7, MS Visual Studio 2008, and my graphics card is one from the AMD Radeon HD 7700 Series.
The WGL_CONTEXT_MAJOR_VERSION_ARB, WGL_CONTEXT_MINOR_VERSION_ARB and related attributes are not attributes of the Windows pixel format.
You must not use them with wglChoosePixelFormatARB().
Those options belong in the attribute list of wglCreateContextAttribsARB, as defined by the WGL_ARB_create_context extension.
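For illustration, a hedged sketch of that split, assuming the two extension function pointers have already been retrieved with wglGetProcAddress and that hdc is a valid device context (tokens come from WGL_ARB_pixel_format and WGL_ARB_create_context in wglext.h):

#include <windows.h>
#include <GL/gl.h>
#include <GL/wglext.h>

// Assumed to have been loaded earlier via wglGetProcAddress().
extern PFNWGLCHOOSEPIXELFORMATARBPROC    wglChoosePixelFormatARB;
extern PFNWGLCREATECONTEXTATTRIBSARBPROC wglCreateContextAttribsARB;

HGLRC CreateGL32Context(HDC hdc) {
    // Pixel-format attributes only: no version or profile entries here.
    const int pixelAttribs[] = {
        WGL_DRAW_TO_WINDOW_ARB, GL_TRUE,
        WGL_SUPPORT_OPENGL_ARB, GL_TRUE,
        WGL_DOUBLE_BUFFER_ARB,  GL_TRUE,
        WGL_PIXEL_TYPE_ARB,     WGL_TYPE_RGBA_ARB,
        WGL_COLOR_BITS_ARB,     32,
        WGL_DEPTH_BITS_ARB,     24,
        0
    };
    int pixelFormat = 0;
    UINT numFormats = 0;
    if (!wglChoosePixelFormatARB(hdc, pixelAttribs, NULL, 1, &pixelFormat, &numFormats) ||
        numFormats == 0) {
        return NULL;   // no matching pixel format
    }

    PIXELFORMATDESCRIPTOR pfd = {};
    DescribePixelFormat(hdc, pixelFormat, sizeof(pfd), &pfd);
    SetPixelFormat(hdc, pixelFormat, &pfd);

    // The version and profile go to wglCreateContextAttribsARB instead.
    const int contextAttribs[] = {
        WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
        WGL_CONTEXT_MINOR_VERSION_ARB, 2,
        WGL_CONTEXT_PROFILE_MASK_ARB,  WGL_CONTEXT_CORE_PROFILE_BIT_ARB,
        0
    };
    return wglCreateContextAttribsARB(hdc, NULL, contextAttribs);
}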

OpenGL core profile incredible slowdown on OS X

I added a new GL renderer to my engine, which uses the core profile. While it runs fine on Windows and/or NVIDIA cards, it is about 10 times slower on OS X (3 fps instead of 30). The weird thing is that my compatibility-profile renderer runs fine.
I collected some traces with Instruments and the GL profiler:
https://www.dropbox.com/sh/311fg9wu0zrarzm/31CGvUcf2q
It shows that the application spends its time in glDrawRangeElements.
I tried the following things:
use glDrawElements instead (no effect)
flip culling (no effect on speed)
disable some GL_DYNAMIC_DRAW buffers (no effect)
bind the index buffer after the VAO when drawing (no effect)
convert indices to 4 bytes (no effect)
use GL_BGRA textures (no effect)
What I didn't try is aligning my vertices to a 16-byte boundary and/or converting indices to 4 bytes, but seriously, if that were the issue then why the hell does the standard allow it?
I'm creating the context like this:
NSOpenGLPixelFormatAttribute attributes[] =
{
    NSOpenGLPFAColorSize, 24,
    NSOpenGLPFAAlphaSize, 8,
    NSOpenGLPFADepthSize, 24,
    NSOpenGLPFAStencilSize, 8,
    NSOpenGLPFADoubleBuffer,
    NSOpenGLPFAAccelerated,
    NSOpenGLPFANoRecovery,
    NSOpenGLPFAOpenGLProfile, NSOpenGLProfileVersion3_2Core,
    0
};

NSOpenGLPixelFormat* format = [[NSOpenGLPixelFormat alloc] initWithAttributes:attributes];
NSOpenGLContext* context = [[NSOpenGLContext alloc] initWithFormat:format shareContext:nil];

[self.view setOpenGLContext:context];
[context makeCurrentContext];
Tried on the following specs:
radeon 6630M, OS X 10.7.5
radeon 6750M, OS X 10.7.5
geforce GT 330M, OS X 10.8.3
Do you have any ideas what I might do wrong? Again, it works fine with the compatibility profile (not using VAOs though).
UPDATE: reported to Apple.
UPDATE: Apple doesn't give a damn about the problem... Anyway, I created a small test program, which actually performs fine. Comparing the call stacks in Instruments, I found that when using the engine, glDrawRangeElements makes two calls:
gleDrawArraysOrElements_ExecCore
gleDrawArraysOrElements_Entries_Body
while in the test program it only makes the second. The first call does something like an immediate-mode render (gleFlushPrimitivesTCLFunc, gleRunVertexSubmitterImmediate), which obviously causes the slowdown.
Finally, I was able to reproduce the slowdown. This is just crazy... It is clearly caused by glBindAttribLocation being called on the "my_Position" attribute. Now I did some testing:
1 is the default (as returned by glGetAttribLocation)
if I set it to zero, there's no problem
if I set it to 1, the rendering becomes slow
if I set it to any larger number, it is slow again
Obviously I relink the program (check the code). It is not a problem in my implementation; I tested it with "normal" values too.
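For reference, a minimal sketch of the workaround described above (binding the position attribute to location 0 and relinking). The program object and the "my_Position" name follow the post; everything else is assumed:

// glBindAttribLocation only takes effect at the next glLinkProgram call.
void RebindPositionAttribute(GLuint program) {
    glBindAttribLocation(program, 0, "my_Position");   // location 0 avoids the slow path
    glLinkProgram(program);

    GLint linked = GL_FALSE;
    glGetProgramiv(program, GL_LINK_STATUS, &linked);
    if (linked != GL_TRUE) {
        // inspect glGetProgramInfoLog(program, ...) on failure
    }
}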
Test program:
https://www.dropbox.com/s/dgg48g1fwgyc5h0/SLOWDOWN_REPRO.zip
How to repro:
open with XCode
open common/glext.h (don't be disturbed by the name)
modify the GLDECLUSAGE_POSITION constant from 0 to 1
compile and run => slow
changing back to zero => good
I have managed to get myself the same problem in the following circumstances under OS X Mavericks:
instanced rendering using array buffers to give each instance its own modelToWorld and inverseNormal matrices, with attribute locations specified through layout qualifiers rather than glGetAttribLocation
leaving one of these array buffers unused in the shader, where the location is declared but the attribute isn't actually used for anything in the GLSL code
In this case, a call to glDrawElementsInstanced takes up a LOT of CPU time (under normal circumstances, this call uses nearly zero CPU even when drawing several thousand instances).
You can tell that you're getting this specific problem if almost all of the CPU time used within glDrawElementsInstanced is spent in gleDrawArraysOrElements_ExecCore. Making sure that all of the array buffers are actually referenced in your shader code fixes the CPU time back to (nearly) zero.
I suspect that this is one of the situations where leaving a variable out of your main() in GLSL confuses the compiler into deleting all references to that variable, leaving you with a dangling reference to an attribute or uniform.
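To illustrate the fix, here is a hedged sketch (shader source embedded as a C++ raw string; the attribute names and locations are made up) in which every declared per-instance attribute is actually referenced in main(), so no array buffer is left dangling:

const char* vertexShaderSrc = R"GLSL(
#version 330 core
layout(location = 0) in vec3 position;
layout(location = 1) in mat4 modelToWorld;    // per-instance, occupies locations 1-4
layout(location = 5) in mat3 inverseNormal;   // per-instance, occupies locations 5-7

uniform mat4 viewProjection;
out vec3 worldNormal;

void main() {
    gl_Position = viewProjection * modelToWorld * vec4(position, 1.0);
    // Reference inverseNormal so the attribute is not eliminated.
    worldNormal = inverseNormal * vec3(0.0, 0.0, 1.0);
}
)GLSL";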

Kernel does NOT recognize NAND bad blocks marked by u-boot

In U-Boot on my ARM-based board (DM368), I manually mark a block in the kernel partition as bad. U-Boot says it was marked, and, for example, while writing/reading the kernel image I see it skipping this bad block.
But when I write the same partition from within Linux (loaded via NFS), I see that the Linux nandwrite command USES this bad block! I checked this in several ways: Linux ignores the bad block mark, 100%. But everywhere on the internet it is said that the BBT is shared between U-Boot and Linux.
So, where is the catch?
OK, the answer has been found.
For some unclear reason Texas Instruments, manufacturer of the DM365EVM board which I use for development, provides a kernel with a different BBT structure. They define the BBT offset as 2, while the rest of the world, including the provided U-Boot, defines this offset as 8.
I wish them good health for many years.

OpenCL-GL Interop memory not in sync

I'm having trouble with OpenCL-GL shared memory.
I have an application that works on both Linux and Windows. The CL-GL sharing works on Linux, but not on Windows.
The Windows driver says that it supports sharing, and the examples from AMD work, so it should work. My code for creating the context on Windows is:
cl_context_properties properties[] = {
    CL_CONTEXT_PLATFORM, (cl_context_properties)platform_(),
    CL_WGL_HDC_KHR,      (intptr_t) wglGetCurrentDC(),
    CL_GL_CONTEXT_KHR,   (intptr_t) wglGetCurrentContext(),
    0
};

platform_.getDevices(CL_DEVICE_TYPE_GPU, &devices_);
context_ = cl::Context(devices_, properties, &CL::cl_error_callback, nullptr, &err);

err = clGetGLContextInfoKHR(properties, CL_CURRENT_DEVICE_FOR_GL_CONTEXT_KHR,
                            sizeof(device_id), &device_id, NULL);
context_device_ = cl::Device(device_id);
queue_ = cl::CommandQueue(context_, context_device_, 0, &err);
My problem is that the CL and GL memory in a shared buffer are not the same. I print them out (by memory mapping) and notice that they differ. Changing the data works in both CL and GL, but only changes that side's memory, not both (that is, both buffers seem intact, but not shared).
Also, clGetGLObjectInfo on the CL buffer returns the correct GL buffer.
Update: I have found that if I create the OpenCL context on the CPU it works. This seems weird, as I'm not using integrated graphics, and I don't believe the CPU is handling OpenGL. I'm using SDL to create the window; could that have something to do with this?
I have now confirmed that the OpenGL context is running on the GPU, so the problem lies elsewhere.
Update 2: Ok, so this is weird. I tried again today, and suddenly it works. As far as I know I didn't install any new drivers before I shut down the computer yesterday, so I don't know what could have brought this about.
Update 3: Right, I noticed that changing the number of particles caused this to work. When I allocate so many particles that the shared buffer is slightly above one MB, it suddenly starts to work.
I solved the problem.
The OpenGL buffer object must be created AFTER the OpenCL context has been created.
If it is created before, the OpenGL data cannot be shared.
I use a Radeon HD 5670 with ATI Catalyst 12.10.
Maybe it is an ATI driver problem, because the NVIDIA Computing SDK samples don't depend on this order.
