Halide with CUDA targets not working - halide

I am new to Halide and have written a simple piece of code that computes max(127, pix(x,y)) for every pixel in an image.
The code runs fine on the CPU, but it gives me wrong outputs when I set Target::CUDA, and I'm not able to find the issue.
The following is part of my code. Let me know if there is a mistake in it, or whether I have to re-build Halide with support that enables CUDA.
Halide::Var x, y;
Halide::Buffer<uint8_t> inputImageBuf(inpImg, imgSizes);
Halide::Func reluOp("ReLU Operation");
reluOp(x,y) = Halide::max(127, inputImageBuf(x, y));
int numTiles = 4;
Halide::Var threads_x, threads_y, blocks_x, blocks_y;
Halide::Target targetCUDA = Halide::get_host_target();
targetCUDA.set_feature(Halide::Target::CUDA);
targetCUDA.set_feature(Halide::Target::Debug);
reluOp.gpu_tile(x, y, blocks_x, blocks_y, threads_x, threads_y, numTiles, numTiles, Halide::TailStrategy::Auto, Halide::DeviceAPI::CUDA);
// reluOp.compile_jit(targetCUDA);
reluOp.print_loop_nest();
Halide::Buffer<uint8_t> result = reluOp.realize(cols, rows, targetCUDA);
result.copy_to_host();

One thing to try is adding an inputImageBuf.set_host_dirty() call. If that helps, I would consider it a bug in Halide.
You can also scroll through the debug output and see whether the expected number of copies to and from the host are happening.
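For reference, a minimal sketch of that experiment, reusing the names from the code in the question (cols and rows are assumed to be the image dimensions):
inputImageBuf.set_host_dirty(); // mark the host data as newer so Halide copies it to the GPU
Halide::Buffer<uint8_t> result = reluOp.realize(cols, rows, targetCUDA);
result.copy_to_host(); // bring the GPU result back before reading it on the CPU
With the Debug feature set on the target, the debug output should then show one copy to the device for the input and one copy back to the host for the result.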

Related

rust imgui, how do you set it up?

I am trying to set up rust imgui for a custom renderer I am porting to Rust.
I am stuck on two fronts: getting the peripheral callbacks, and the rendering.
In C++ the setup was fairly simple:
ImGuiContext* InitImgui(ModuleStorage::ModuleStorage& module, NECore::Gallery& gallery)
{
    ImGuiContext* imgui_context = ImGui::CreateContext();
    ImGuiIO& io = ImGui::GetIO();
    unsigned char* pixels;
    int width, height;
    io.Fonts->GetTexDataAsRGBA32(&pixels, &width, &height);
    CpuImage font_image(pixels, width, height, 4);
    uint font_id = gallery.StoreImage<CpuImage::GetImageData>(
        font_image, "__ImguiFont", NECore::ImageFormat::R8G8B8A8_UNORM);
    io.Fonts->SetTexID((ImTextureID)(intptr_t)font_id);
    ImGui_ImplGlfw_InitForVulkan(module.GetWindow().GetGLFWWindow(), true);
    imgui_shader = module.AddShader(
        {"./CommonShaders/imgui.vert",
         "./CommonShaders/imgui.frag"});
    return imgui_context;
}
30 lines of code and we have the initialization done.
Well, there are some issues in Rust: io.Fonts->GetTexDataAsRGBA32(&pixels, &width, &height); does not exist. I assume the equivalent is let font = fonts.build_rgba32_texture();
Assuming that's the case, the next issue is setting the texture id, which I cannot find anywhere in the docs or the source code.
io.Fonts->SetTexID((ImTextureID)(intptr_t)font_id);
That function does not exist in the Rust bindings. And ImGui_ImplGlfw_InitForVulkan is nowhere to be found either.
The examples at https://github.com/imgui-rs/imgui-rs/blob/main/imgui-examples/examples/support/mod.rs
seem to be using pre-existing renderers and do not do a good job of showing how to integrate the tool into an existing renderer other than the ones the author chose. Which is baffling: one of the biggest selling points of imgui is how simple it is to integrate into pre-existing codebases.
I am at a loss. How do you bootstrap the library in Rust?

How to DEBUG OpenGL a gray/black texture box?

I'm altering someone else's code. They used PNGs which are loaded via BufferedImage. I need to load a TGA instead, which is simply an 18-byte header followed by BGR codes. I have the textures loaded and running, but I get a gray box instead of the texture. I don't even know how to DEBUG this.
Textures are loaded in a ByteBuffer:
final static int datasize = (WIDTH*HEIGHT*3) *2; // Double buffer size for OpenGL // not +18 no header
static ByteBuffer buffer = ByteBuffer.allocateDirect(datasize);
FileInputStream fin = new FileInputStream("/Volumes/RAMDisk/shot00021.tga");
FileChannel inc = fin.getChannel();
inc.position(18); // skip header
buffer.clear(); // prepare for read
int ret = inc.read(buffer);
fin.close();
I've followed this: how-to-manage-memory-with-texture-in-opengl ... because I am updating the texture once per frame, like video.
Called once:
GL11.glBindTexture(GL11.GL_TEXTURE_2D, textureID);
GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_WRAP_S, GL11.GL_CLAMP);
GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_WRAP_T, GL11.GL_CLAMP);
GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MAG_FILTER, GL11.GL_NEAREST);
GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MIN_FILTER, GL11.GL_NEAREST);
GL11.glTexImage2D(GL11.GL_TEXTURE_2D, 0, GL11.GL_RGB, width, height, 0, GL11.GL_RGB, GL11.GL_UNSIGNED_BYTE, (ByteBuffer) null);
assert(GL11.GL_NO_ERROR == GL11.glGetError());
Called repeatedly:
GL11.glBindTexture(GL11.GL_TEXTURE_2D, textureID);
GL11.glTexSubImage2D(GL11.GL_TEXTURE_2D, 0, 0, 0, width, height, GL11.GL_RGB, GL11.GL_UNSIGNED_BYTE, byteBuffer);
assert(GL11.GL_NO_ERROR == GL11.glGetError());
return textureID;
The render code hasn't changed and is based on:
GL11.glDrawArrays(GL11.GL_TRIANGLES, 0, this.vertexCount);
Make sure you set the texture sampling mode, especially the min filter: glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR). The default setting is mipmapped (GL_NEAREST_MIPMAP_LINEAR), so unless you upload mipmaps the texture is incomplete and you will get a blank white result when it is sampled.
So either set the texture to a non-mipmapped filter or generate the mipmaps. One way to do the latter is to call glGenerateMipmap after the glTexImage2D call.
(See https://www.khronos.org/opengles/sdk/docs/man/xhtml/glTexParameter.xml.)
It's a very common GL pitfall and something people just tend to know after getting bitten by it a few times.
There is no easy way to debug stuff like this. There are good GL debugging tools, for example in Xcode, but they will not tell you about this case.
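A minimal sketch of the two options, written as C-style GL calls (the LWJGL GL11/GL30 equivalents are one-to-one; pixels stands for whatever image data you upload, and may be null as in the question):
// Option 1: use a non-mipmapped minification filter so the base level alone is complete
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
// Option 2: keep the mipmapped default and generate the levels after uploading
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, pixels);
glGenerateMipmap(GL_TEXTURE_2D); // requires GL 3.0+ (GL30.glGenerateMipmap in LWJGL)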
Debugging GPU code is always a hassle. I would bet my money on big industry progress in this area as more companies discover the power of the GPU. Until then, I'll share my two best GPU debugging friends:
1) Define a function to print OGL errors:
int printOglError(const char *file, int line)
{
    /* Returns 1 if an OpenGL error occurred, 0 otherwise. */
    GLenum glErr;
    int retCode = 0;
    glErr = glGetError();
    while (glErr != GL_NO_ERROR) {
        printf("glError in file %s # line %d: %s\n", file, line, gluErrorString(glErr));
        retCode = 1;
        glErr = glGetError();
    }
    return retCode;
}
#define printOpenGLError() printOglError(__FILE__, __LINE__)
And call it after your render draw calls (possible earlier errors will also show up):
GL11.glDrawArrays(GL11.GL_TRIANGLES, 0, this.vertexCount);
printOpenGLError();
This alerts you if you perform an invalid operation (which might just be your case), but you usually have to find where the error occurs by trial and error.
2) Check out gDEBugger, free software with tons of GPU memory information.
[Edit]:
I would also recommend using the open-source lib DevIL - it's quite competent at loading various image formats.
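If you go that route, a rough sketch of loading the TGA with DevIL and handing it to the already-created texture could look like this (written from memory of the DevIL API, so double-check the calls against its documentation):
#include <IL/il.h>

ILuint image;
ilInit(); // initialize DevIL once at startup
ilGenImages(1, &image);
ilBindImage(image);
if (ilLoadImage("shot00021.tga") && ilConvertImage(IL_RGB, IL_UNSIGNED_BYTE)) {
    int width = ilGetInteger(IL_IMAGE_WIDTH);
    int height = ilGetInteger(IL_IMAGE_HEIGHT);
    // texture is assumed to be created and bound already, as in the question's setup code
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                    GL_RGB, GL_UNSIGNED_BYTE, ilGetData());
}
ilDeleteImages(1, &image);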
Thanks to Felix: by not calling glTexSubImage2D (leaving the memory valid, but uninitialized) I noticed a remnant pattern left by the default memory. This indicated that the texture was being displayed, so the load was most likely the problem.
UPDATE:
The problem with the code above is essentially the buffer. The buffer is sized for a 1024*1024 texture, but it is only partially filled by the read, leaving the ByteBuffer's position at 2359296 (1024*768*3) instead of 3145728 (1024*1024*3). This gives the error:
Number of remaining buffer elements is ..., must be at least ...
I thought that OpenGL needed space to return data, so I doubled the size of the buffer to compensate for the error:
final static int datasize = (WIDTH*HEIGHT*3) *2; // Double buffer size for OpenGL // not +18 no header
This is wrong. What is needed is the flip() function (big THANKS to Reto Koradi for the small hint about the buffer rewind) to put the ByteBuffer into read mode. Since the buffer is only partially filled, the OpenGL buffer check gives an error. The correct fix is not to double the buffer size; instead, use buffer.position(buffer.capacity()) to mark the buffer as completely filled before doing the flip():
final static int datasize = (WIDTH*HEIGHT*3); // not +18 no header
buffer.clear(); // prepare for read
int ret = inc.read(buffer);
fin.close();
buffer.position(buffer.capacity()); // make sure buffer is completely FILLED!
buffer.flip(); // flip buffer to read mode
To figure this out, it is helpful to hardcode the memory of the buffer to make sure the OpenGL calls are working, isolating the load problem. Then when the OpenGL calls are correct, concentrate on the loading of the buffer. As suggested by Felix K, it is good to make sure one texture has been drawn correctly before calling glTexSubImage2D repeatedly.
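For the hardcoding step mentioned above, one possible sketch in C++-style GL (the LWJGL version is analogous):
#include <vector>

// Upload a recognizable test pattern instead of the TGA data, to verify the
// texture path independently of the loader; the texture is assumed to be
// created and bound exactly as in the question.
std::vector<unsigned char> test(width * height * 3);
for (int i = 0; i < width * height; ++i) {
    test[3 * i + 0] = static_cast<unsigned char>(i % 256); // ramp in the red channel
    test[3 * i + 1] = 0;
    test[3 * i + 2] = 0;
}
glBindTexture(GL_TEXTURE_2D, textureID);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, test.data());
If the pattern shows up, the GL side is fine and the problem is in how the buffer is filled.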
Some ideas which might cause the issue:
Your texture is disposed of somewhere. I don't know the whole code, but I guess there is a glDeleteTextures somewhere, and this could cause issues if called at the wrong time.
Are the texture width and height powers of two? If not, this might be an issue depending on your hardware; old hardware sometimes won't support non-power-of-two images.
The texture parameters changed between the draw calls at some other point (make a debug check of the parameters with glGetTexParameter, as in the sketch after this list).
There could be a loading issue when loading the next image (edit: or even the first image). Check if the first image is displayed without loading the next images. If so, it must be one of the cases above.
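For the parameter check mentioned in the list above, a small sketch (query while the texture in question is bound):
#include <cstdio>

GLint minFilter = 0, magFilter = 0, wrapS = 0;
glBindTexture(GL_TEXTURE_2D, textureID);
glGetTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, &minFilter);
glGetTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, &magFilter);
glGetTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, &wrapS);
printf("min 0x%x mag 0x%x wrap_s 0x%x\n", minFilter, magFilter, wrapS); // compare against the values you set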

OpenCV - Earth Mover's Distance issue, icvInitEMD()

I am having trouble calling EMD() in OpenCV 2.4.2 under Mac OS ML.
I have a class with an attribute Mat _signature defined like this:
Mat _signature(size, dim+1, CV_32F);
for (int i = 0; i < size; ++i) {
    _signature.at<float>(i,0) = weight;
    for (int j = 1; j < dim+1; ++j) {
        _signature.at<float>(i,j) = vec[i].at<float>(0,j-1); // vec[i] is a line vector containing the position in R^dim
    }
}
I then have u and v, two instances of that class, and I call EMD(u._signature, v._signature, CV_DIST_L2);
It fails with OpenCV Error: One of arguments' values is out of range () in icvInitEMD, file /*SOME PATH*/OpenCV-2.4.2/modules/imgproc/src/emd.cpp, line 408
I looked at the source code but could not figure out why this fails. My arguments appear to correspond to what the documentation wants. Any help will be appreciated.
Ok, it took me quite some time to figure it out, but among my data there was a component of one of my vectors which was miscalculated and ended up being NaN.
Of course this was buried deep in my data, so it was completely lost in any amount of data reasonably observable via a debugger (or even cout).
The cryptic error from OpenCV did the rest in confusing me.
For people stumbling upon the same issue as me:
Make sure your weight vectors are not zero
Make sure none of your data is NaN
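A small sketch of such a sanity check (my own helper, not part of OpenCV; sig is assumed to be a CV_32F signature built as in the question, with the weight in column 0):
#include <cmath>
#include <opencv2/core/core.hpp>

// Returns true if the signature contains no NaNs and has a positive total weight.
bool signatureLooksValid(const cv::Mat& sig)
{
    double weightSum = 0.0;
    for (int i = 0; i < sig.rows; ++i) {
        for (int j = 0; j < sig.cols; ++j) {
            if (std::isnan(sig.at<float>(i, j)))
                return false; // any NaN makes icvInitEMD reject the input
        }
        weightSum += sig.at<float>(i, 0); // column 0 holds the weight
    }
    return weightSum > 0.0; // EMD needs a non-zero total weight
}
Run it on both u._signature and v._signature before calling EMD().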

Nsight 2.2 sometimes works sometimes doesn't

I have a problem with the Parallel Nsight 2.2 debugger. It is very strange and I don't know how to describe it. Anyway, it works sometimes and sometimes it doesn't.
What I observed is that it works with a dynamic array (this array has no effect on the CUDA kernels or any other function like cudaMemcpy etc.) of 3 elements. And this is important... If I set the size to 4 or more, it just falls down: no errors, nothing, it just falls down.
The interesting fact is that if I run it normally via the normal debugger, the whole program works correctly with the right results. Another interesting fact is that when I declare this array as static
unsigned topology[4];
and set the same values, the Nsight debugger works, but very slowly.
So first of all I commented out all the CUDA source code (kernels and all CUDA functions), but it was still the same - it falls down. So I started to comment out more host code and found the loop (in the host code) which does this creepy thing. When the program in the Nsight debugger reaches the loop (shown below), it falls down, BUT when I add a command to the loop to print the number of each iteration on screen, it runs, the loop finishes, the whole program finishes, and then the debugger tells me:
Debug Assertion Failed!
Program:
File:f:\dd\vctools\crt_bld\self_x86\crt\src\dbgheap.c
Line: 1322
Expression: _CrtIsValidHeapPointer(pUserData)
.... I don't even have a drive F: ... so wtf???
Anyway, in the normal debugger it runs fine and with the right results.
This is the mentioned loop and the dynamic array *topology:
unsigned *topology;
unsigned numberOfLayersInput = 5;
topology = new unsigned [numberOfLayersInput];
topology[0] = 784;
topology[1] = 1000;
topology[2] = 800;
topology[3] = 300;
topology[4] = 10;
kernelTopology_ *topologyOfKernels;
topologyOfKernels = new kernelTopology_ [numberOfLayersInput - 1];
for (int i = 0, numberOfThreads; i < numberOfLayersInput; i++)
{
    cout << i << endl; // this is the added line!
    numberOfThreads = fixedTopology[i];
    topologyOfKernels[i].size = numberOfThreads;

    if (numberOfThreads > THREADS_PER_BLOCK)
        topologyOfKernels[i].BLOCK_SIZE = THREADS_PER_BLOCK;
    else
        topologyOfKernels[i].BLOCK_SIZE = numberOfThreads;

    if (numberOfThreads <= THREADS_PER_BLOCK)
        topologyOfKernels[i].GRID_SIZE = 1;
    else if (fixedTopology[i] % topologyOfKernels[i].BLOCK_SIZE == 0)
        topologyOfKernels[i].GRID_SIZE = fixedTopology[i] / topologyOfKernels[i].BLOCK_SIZE;
    else
        topologyOfKernels[i].GRID_SIZE = (fixedTopology[i] / topologyOfKernels[i].BLOCK_SIZE) + 1;
}
I can't see any mistakes in this code... also the normal debugger has no problem with it.
I have reinstalled the graphics drivers, the CUDA toolkit, the CUDA SDK and Parallel Nsight, but it does the same creepy things. By the way, I use Win 7 64-bit and VS2010.
Does anyone have any idea what I should do about this?
Please let me know if someone has any idea :)
The error
Debug Assertion Failed! Program: File:f:\dd\vctools\crt_bld\self_x86\crt\src\dbgheap.c Line: 1322
is from the Microsoft C runtime function _CrtIsValidHeapPointer. The default debug build adds additional heap and stack checks into the code. This function is used to verify that a specified pointer is in the local heap. The path f:\... is the location of the C runtime source file at the time Microsoft built the library.
The assertion indicates an out-of-bounds memory access that corrupted the heap. The cause of the error appears to be the incorrect allocation of topologyOfKernels:
topologyOfKernels = new kernelTopology_ [numberOfLayersInput - 1];
This should be allocating numberOfLayersInput elements:
topologyOfKernels = new kernelTopology_ [numberOfLayersInput];
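If you want to convince yourself that the crash has nothing to do with CUDA or Nsight, a minimal standalone repro of the same off-by-one (with a stand-in definition of kernelTopology_) is enough to trigger the debug-CRT assertion:
#include <iostream>

struct kernelTopology_ { int size, BLOCK_SIZE, GRID_SIZE; };

int main()
{
    const unsigned numberOfLayersInput = 5;

    // Bug: allocates numberOfLayersInput - 1 elements...
    kernelTopology_ *topologyOfKernels = new kernelTopology_[numberOfLayersInput - 1];

    // ...but the loop writes numberOfLayersInput of them, stomping past the end of the block.
    for (unsigned i = 0; i < numberOfLayersInput; i++)
        topologyOfKernels[i].size = static_cast<int>(i);

    delete[] topologyOfKernels; // the MSVC debug heap typically asserts here
    std::cout << "done" << std::endl;
    return 0;
}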

WP7 zxing scan not reliable

I've printed a few short QR codes (like "HAEB16653") on a page using this algorithm:
private void CreateQRCodeFile(int size, string filename, string codecontent)
{
    QRCodeWriter writer = new QRCodeWriter();
    com.google.zxing.common.ByteMatrix matrix;
    matrix = writer.encode(codecontent, BarcodeFormat.QR_CODE, size, size, null);
    Bitmap img = new Bitmap(size, size);
    Color Color = Color.FromArgb(0, 0, 0);

    for (int y = 0; y < matrix.Height; ++y)
    {
        for (int x = 0; x < matrix.Width; ++x)
        {
            Color pixelColor = img.GetPixel(x, y);
            //Find the colour of the dot
            if (matrix.get_Renamed(x, y) == -1)
            {
                img.SetPixel(x, y, Color.White);
            }
            else
            {
                img.SetPixel(x, y, Color.Black);
            }
        }
    }
    img.Save(filename, ImageFormat.Png);
}
The printed barcodes work very well and fast with the integrated WP7 bing scan&search.
When I try to scan the very same printed QR codes with Stéphanie Hertrich's sample app, scanning is very slow, most codes do not scan at all, or they are only recognized when I slowly rotate the camera around.
How do I get my scanning to be as reliable as the integrated barcode recognition? I only need to scan QR codes, so I disabled all the other formats, but it still does not work most of the time.
Is there maybe some other barcode scanning library that works better?
The Silverlight port in Stéphanie Hertrich's sample app is very old. It seems to me that the project at CodePlex hasn't been maintained for more than a year. You should try one of the newer, maintained ports like ZXing.Net.
zxing works very well -- just try it on Android. I would not be surprised if it is what powers the Bing search.
The problems are likely in the port. Any non-Java port is at best old and incomplete. I also can't speak to the efficiency of the approach used in the sample you are looking at. For example, is it really binarizing the image from the APIs correctly? Also make sure it is not using TRY_HARDER mode.
There is no objective answer to this question...
My personal opinion is that the ZXing lib that you tried (Stéphanie Hertrich's sample app) is the best you can get. As far as I know it is used on other platforms too (e.g. Android).
When I tested the lib a few months ago, I had the impression that it worked very reliably and quickly, but it may be that your circumstances were different (lighting, camera, angle, etc.).
