I'm new to C++ and opencv and having difficulty running the background subtractor model. I'm using Visual Studio 2017 (VC++) and opencv 3.2.
I need to use BackgroundSubtractorMOG2. It creates the model fine, but it throws an exception when I pass it a new frame that has been converted to cv::Mat from an encoded image (I'm using a USB3 camera whose frames OpenCV can't read directly). The code snippet is below:
if (EncodeRawImage(pRawImage, &frameDesc, imageFormat, &pEncodedImage, &encodedImageSize) == SUCCESS) {
    // Convert current raw image to openCV Mat format for analysis
    auto store = gcnew cli::array<Byte>(encodedImageSize);
    System::Runtime::InteropServices::Marshal::Copy(IntPtr(pEncodedImage), store, 0, encodedImageSize);
    auto stream = gcnew System::IO::MemoryStream(store);
    System::Drawing::Bitmap^ bitmapFrame = safe_cast<System::Drawing::Bitmap^>(Image::FromStream(stream));
    Mat imgBuf = Mat(bitmapFrame->Width, bitmapFrame->Height, CV_8U, pEncodedImage);
    Mat imgMat = imdecode(imgBuf, CV_LOAD_IMAGE_COLOR);
    bgm->apply(imgMat, fgMaskMOG);
    returnCode = 1;
}
The exception is:
owner   0x0000023afa2a2ec0 <Information not available, no symbols loaded for opencv_world320d.dll>   cv::detail::PtrOwner *
stored  0x0000023afa29baa0 <Information not available, no symbols loaded for opencv_world320d.dll>   cv::BackgroundSubtractorMOG2 *
This happens even though opencv320d.lib is linked properly and a few other basic opencv samples run fine for me in the same program (e.g., cv::subtract, cv::calcHist, etc.). I wonder whether it's because the image size is too large (4608x3288) and the bitmapFrame I'm creating may have an issue?
Or if I'm trying to access image data in the stream/memory in a way that is not allowed?
Even though the "Information not available" text still shows up at the breakpoint, I just found that the model was throwing the exception because of the large image size mentioned in my original question (4608x3288). I resized the image with resize(newImageMat, newImageMat, cv::Size(1000, 1000 * newImageMat.rows / newImageMat.cols), INTER_LINEAR); and the error is gone; the model learns and shows the result. The text may just be a Visual Studio quirk, since I was reading about a similar issue here: https://stackoverflow.com/a/7736034/3377101
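For reference, a minimal sketch of the decode / resize / apply sequence that now works for me (variable names other than bgm and fgMaskMOG are hypothetical; the 1 x N buffer wrap is just the conventional way to hand an encoded byte buffer to imdecode, and cv::IMREAD_COLOR is the 3.x name for CV_LOAD_IMAGE_COLOR):

// Wrap the encoded bytes in a 1 x encodedImageSize single-channel Mat (no copy is made)
cv::Mat encodedBuf(1, (int)encodedImageSize, CV_8U, pEncodedImage);

// Decode the buffer to a BGR image
cv::Mat newImageMat = cv::imdecode(encodedBuf, cv::IMREAD_COLOR);

// Downscale before feeding the background model; the full 4608x3288 frame
// was what triggered the exception for me
cv::resize(newImageMat, newImageMat,
           cv::Size(1000, 1000 * newImageMat.rows / newImageMat.cols),
           0, 0, cv::INTER_LINEAR);

bgm->apply(newImageMat, fgMaskMOG);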
I'm trying to use JavaCV for image processing to rotate an image according to its Exif data.
Reading and writing a file to either Mat or IplImage works, but since the file is being uploaded or downloaded, I also want to be able to do the same processing on a byte[] instead of having to write to a file first.
However, I cannot find how to create a Mat instance from byte[].
The method 'aMat.put(0, 0, byteArray)', which is mentioned in some answers, is not available on a Mat instance in javacv version 1.0 using javacpp-presets:opencv:3.0.0.
Trying to put the bytes into the Mat data via 'aMat.data().put(imageBytes, 0, 0)' throws an NPE because data() returns null. I cannot find how to set the data since it is a JNI call.
Any ideas on how to create an opencv_core.Mat from a byte[]?
Did you try using:
yourMat.data().put(yourByteArray);
Just make sure yourMat is of the right size.
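For example, a minimal sketch assuming raw 8-bit BGR pixel data whose dimensions you already know (the helper name and the 3-channel layout are assumptions; adjust the type to match your data):

import org.bytedeco.javacpp.opencv_core.Mat;
import static org.bytedeco.javacpp.opencv_core.CV_8UC3;

public static Mat matFromBytes(byte[] imageBytes, int rows, int cols) {
    // imageBytes must hold exactly rows * cols * 3 bytes of raw BGR pixel data
    Mat mat = new Mat(rows, cols, CV_8UC3);
    mat.data().put(imageBytes);
    return mat;
}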
I was trying to use gl_FragDepthEXT in a shader but ran into issues. Is there something I have to do to enable this extension?
Yes, you are missing one requirement: when you are using a raw shader, you must enable the extension with the following string in your shader code:
"#extension GL_EXT_frag_depth : enable"
When using a THREE.ShaderMaterial, the program string is partly auto-generated, so the above string cannot be added early enough in your shader source to avoid a shader compiler error. Instead, you enable it with:
material.extensions.fragDepth = true
This will make gl_FragDepthEXT available as a fragment shader output if the extension is supported.
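As a minimal sketch (the shader bodies below are placeholders, assuming a three.js build whose ShaderMaterial exposes the extensions field mentioned above):

var material = new THREE.ShaderMaterial( {
    vertexShader: [
        "void main() {",
        "    gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );",
        "}"
    ].join( "\n" ),
    fragmentShader: [
        "void main() {",
        "    gl_FragColor = vec4( 1.0, 0.0, 0.0, 1.0 );",
        "    gl_FragDepthEXT = 0.5; // write a custom depth value",
        "}"
    ].join( "\n" )
} );

// let three.js prepend the #extension directive for us
material.extensions.fragDepth = true;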
It is your hardware that determines whether an extension is supported or not, so what you can do is query the hardware to see if the extension is available. If you look in the source of three.js (src/renderers/webgl/WebGLExtensions.js), there are helper functions to determine whether an extension is supported:
// assuming here that _gl is the webgl context
var extensions = new THREE.WebGLExtensions( _gl );
// the return value is null if the extension is not supported,
// or otherwise an extension object
extensions.get( "EXT_frag_depth" ); // the extension name is EXT_frag_depth; gl_FragDepthEXT is the GLSL builtin it provides
or in pure webGL:
// returns an array of strings, one for each supported extension
// for informational purposes only
var available_extensions = _gl.getSupportedExtensions();
// the return value is null if the extension is not supported,
// or otherwise an extension object
var object_ext = _gl.getExtension( "EXT_frag_depth" );
Answering your question from the comments above about how widely the extension is supported: you can check http://webglstats.com/ to get an idea of which WebGL extensions are currently supported across devices, OSes and browsers. The data comes from visitors to the participating websites only, but it should give you a general idea.
I'm building a Windows 8.1 DirectX app and trying to load in an external file to store level data.
The relevant code currently is (this method is called in the AssetHandler constructor):
void AssetHandler::LoadLevelsData()
{
    unsigned int i = 0;
    std::string lineData;
    this->currentFile.open("Assets/reg.txt");
    // Below statement here purely to check if the blasted thing is opening
    if (this->currentFile.is_open())
    {
        i++;
    }
    while (std::getline(this->currentFile, lineData))
    {
        levels[i] = lineData;
        i++;
    }
    currentFile.close();
}
The problem I'm having is that the file does not appear to be opening. I have tried:
Using a full path
Opening the file in the initialisation list
A breakpoint shows that it is jumping over the if and while
I found some information saying that DirectX has constraints on working with external files but it did not specify exactly what these were.
The Item Type of the file (in its properties in Visual Studio) was set to 'Does not participate in build'. Setting this value to 'Text' solved the problem.
Not sure why this is happening. The method:
IplImage.createFrom(image);
is hanging without returning any value. I've tried multiple images and confirmed that they exist. I'm writing an application that uses template matching, but this initial step is giving me a headache. Does anyone know why this method would suspend the thread and never return? I've done some research and confirmed that my OpenCV path is set up and that all my libraries are properly configured.
Before converting a BufferedImage to an IplImage, we need to create an IplImage with the same height and width as the BufferedImage. Try this code:
IplImage ipl_image = IplImage.create(buffered_image.getWidth(), buffered_image.getHeight(), IPL_DEPTH_8U, 1);
ipl_image = IplImage.createFrom(buffered_image);
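As a hedged usage sketch (the file name and helper are hypothetical, and the import paths are for an older JavaCV release, so adjust them to your version), it is also worth confirming that the BufferedImage actually decoded before converting, since ImageIO.read returns null when it cannot read a file:

import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;
import com.googlecode.javacv.cpp.opencv_core.IplImage;

public static IplImage loadTemplate(File file) throws IOException {
    // ImageIO.read returns null when it cannot decode the file
    BufferedImage buffered_image = ImageIO.read(file);
    if (buffered_image == null) {
        throw new IOException(file + " could not be decoded");
    }
    return IplImage.createFrom(buffered_image);
}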
I am running an MVC3 app on Mono/Linux and everything is working fine with the exception of an image upload utility. Whenever an image upload is attempted, I get the Invalid Parameter error from within the method below:
System.Drawing.GDIPlus.CheckStatus(status As Status) (unknown file): N 00339
System.Drawing.Bitmap.SetResolution(xDpi As Single, yDpi As Single)
I've googled this error extensively and have found that the Invalid Parameter error can often be misleading; it could fire if, for example, there was an error with the upload itself, or if the image was not fully read. This runs fine on IIS/Windows, but I've not been able to get it to work on Mono.
Apache2
Mono 2.10.8.1
Am I missing something simple, or do I need to find a different way to handle image manipulation on Mono?
After doing quite a bit of testing, I was able to determine the root of my error. I was attempting to pass the Image.HorizontalResolution and Image.VerticalResolution properties to Bitmap.SetResolution. While these properties were set on the initial upload (where the file is read into a stream from the tmp directory), when I posted back with the base64-encoded string of the image itself, it appears these values were somehow lost. Because of this, the SetResolution call failed.
For whatever reason I do not have this issue on IIS/Windows; the properties exist in both circumstances there.
I encountered a similar issue. A Bitmap loaded from disk reported bmp.HorizontalResolution == 0 and bmp.VerticalResolution == 0 when they were actually both 300. This behaviour does not occur on Windows.
After a bit more digging, I found that the following test fails on Mono:
[Test]
public void GDI_SetResolution()
{
var b1 = new Bitmap (100, 100);
Assert.That (b1.HorizontalResolution, Is.Not.EqualTo (0));
Assert.That (b1.VerticalResolution, Is.Not.EqualTo (0));
}
I believe Windows will default resolution to 96 dpi.
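Given that, a possible workaround is to fall back to a sane default before calling SetResolution. A minimal sketch (the helper name is hypothetical, and 96 dpi is only a reasonable default, matching what Windows appears to use):

using System.Drawing;

static void EnsureResolution(Bitmap bmp)
{
    // On Mono the resolution properties can come back as 0, which makes
    // SetResolution throw "Invalid Parameter"; substitute 96 dpi in that case.
    float xDpi = bmp.HorizontalResolution > 0 ? bmp.HorizontalResolution : 96f;
    float yDpi = bmp.VerticalResolution > 0 ? bmp.VerticalResolution : 96f;
    bmp.SetResolution(xDpi, yDpi);
}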