IUP matrix refresh - iup

I am trying to use IupMatrix from C the way I used the DataGrid control in VB.
So far I have come up with this:
int refreshl(Ihandle *mat, int from)
{
  struct lotstruct lot;
  FILE *fol = fopen("C:/myfolder/myfile", "rb+");
  int b;
  int temp = 1;
  for (b = from; b < (from + 31); b++)
  {
    int rec = sizeof(lot) * (b - 1);
    fseek(fol, rec, SEEK_SET);
    int fr = fread(&lot, sizeof(lot), 1, fol);
    //------------------------------------
    char k1[36] = {0};
    strncpy(k1, lot.str1, 35);
    char* tp = ibm852_to_cp1250(k1);
    char row[6] = {0};
    sprintf(row, "%d", temp);
    char* ro = ibm852_to_cp1250(row);
    char cel1[10] = {0};
    sprintf(cel1, "%d%s", temp, ":0");
    IupSetAttribute(mat, cel1, ro);
    char cel2[10] = {0};
    sprintf(cel2, "%d%s", temp, ":1");
    IupSetAttribute(mat, cel2, tp);
    temp += 1;
  }
  fclose(fol);
  IupSetAttribute(mat, "REDRAW", "ALL");
  return 0;
}
With this I read data from a binary file and I can see the data on the console.
But the matrix does not refresh when the data changes. The data is changed in the k_any callback (case K_DOWN) by increasing the "from" integer.
So I set "REDRAW" to "ALL", but also without result; the starting data stays in the matrix.
Since I am a total beginner, please answer a few questions.
1) Is it a good idea to use IupMatrix like a common Windows grid?
2) How do I refresh the matrix so its data changes, without losing speed?
3) Can IUP work with UTF-8 strings on Windows like GTK can? (I tried but without results.)

1) Is it a good idea to use IupMatrix like a common Windows grid?
Yes. IupMatrix is exactly for that.
2) How do I refresh the matrix so its data changes, without losing speed?
Your code is correct. Maybe you are updating the wrong cell in the IupMatrix. L=0 or C=0 are title cells, and they exist only if certain conditions are true. Maybe what you want is to set L=1 or C=1.
A suggestion, instead of this:
char row[6] = {0};
sprintf(row, "%d", temp);
char* ro = ibm852_to_cp1250(row);
char cel1[10] = {0};
sprintf(cel1, "%d%s", temp, ":0");
IupSetAttribute(mat, cel1, ro);
Try this:
IupMatSetfAttribute(mat, "", temp, 0, "%d", temp);
and
IupMatStoreAttribute(mat, "", temp, 1, tp);
You only need the string conversion for the second part.
Also, have you checked that the temp variable holds a valid line index?
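Putting it together, the loop body could look something like this (a minimal sketch that reuses lotstruct, ibm852_to_cp1250 and the file handling from your question):
for (b = from; b < (from + 31); b++)
{
  fseek(fol, (long)(sizeof(lot) * (b - 1)), SEEK_SET);
  if (fread(&lot, sizeof(lot), 1, fol) != 1)
    break;                                            /* stop on a short read */

  char k1[36] = {0};
  strncpy(k1, lot.str1, 35);
  char* tp = ibm852_to_cp1250(k1);

  IupMatSetfAttribute(mat, "", temp, 0, "%d", temp);  /* line title */
  IupMatStoreAttribute(mat, "", temp, 1, tp);         /* data cell  */
  temp += 1;
}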
3) Can IUP work with UTF-8 strings on Windows like GTK can? (I tried but without results.)
Not yet. It will be in a (near) future version.

Related

Number of Computing units in OpenCL

__kernel void kmp(__global char pattern[2*4], __global char* string,
                  __global int failure[2*4], __global int ret[2],
                  int g_length, int l_length, int thread_num)
{
    int pattern_num = 2;
    int pattern_size = 4;
    int gid = get_group_id(0);
    int glid = get_global_id(0);
    int lid = get_local_id(0);
    int i, j, x = 0;
    int old = 0;
    __local char tmp_string[32768];
    event_t event;
    event = async_work_group_copy(tmp_string + lid*l_length, string + glid*l_length, l_length, 0);
    wait_group_events(1, &event);
    for(i = 0; i < pattern_num; i++){
        x = i*pattern_size;
        for(j = lid*l_length; j < (lid+1)*l_length; j++){
            while(tmp_string[j] != pattern[x] && x > 0 && x != i*pattern_size){
                x = failure[x-1] + i*pattern_size;
            }
            if(tmp_string[j] == pattern[x]){
                if(x == (i+1)*pattern_size - 1){
                    //ret[i]++;
                    old = atomic_add(&ret[i], 1);
                    x = failure[x] + i*pattern_size;
                }
                else{
                    x++;
                }
            }
        }
    }
    barrier(CLK_LOCAL_MEM_FENCE);
}
I need help with this code.
To find pattern matches in the string, I wrote the code above.
I'm using an AMD Hawaii GPU, which has 44 compute units with 64 cores each (2816 processing elements in total).
The problem is that when I try to use more than 44 work-items (more than one core per group, e.g. 88 work-items using 2 cores per group, or 2816 work-items using 64 cores per group), it doesn't work well.
It doesn't find the correct number of matches.
I checked the string indexing, the IDs (glid, gid, lid) and the sizes of all variables.
But I found nothing wrong.
Anyone who has some advice, please help!
What exactly goes wrong when you say it doesn't work well? Also, why use an async copy if you are not doing any work while it is in flight? A simple global-to-local assignment could work instead. And why is there a local barrier at the end of the kernel?
Anyway, the error seems to be the async copy. It is given different arguments by each thread in a group. For it to work correctly, it must be given exactly the same arguments by all threads of a group. That's why it works with a local size of 1 and not for bigger local groups.
For example, glid is different for all 64 threads in a group, so it won't work. async_work_group_copy makes all threads of a group work on the same copy, not on different copies. If you need different copies, you need multiple async commands issued serially, but they still execute asynchronously if you wait on all of them at once.
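For example, a uniform copy where every work-item passes exactly the same arguments could look like this (a minimal sketch using the buffer names from your kernel; the commented-out alternative is the plain assignment mentioned above):
event_t event = async_work_group_copy(tmp_string,
                                      string + gid * get_local_size(0) * l_length,
                                      get_local_size(0) * l_length, 0);
wait_group_events(1, &event);

/* Alternatively, a plain global-to-local assignment per work-item:
   for(j = 0; j < l_length; j++)
       tmp_string[lid*l_length + j] = string[glid*l_length + j];
   barrier(CLK_LOCAL_MEM_FENCE);
*/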

Improving the Efficiency of Compact/Scatter in CUDA

Summary:
Any ideas about how to further improve upon the basic scatter operation in CUDA, especially if one knows it will only be used to compact a larger array into a smaller one? Or why the methods below of vectorizing memory ops and using shared memory didn't work? I feel like there may be something fundamental I am missing, and any help would be appreciated.
EDIT 03/09/15: So I found this Parallel Forall blog post, "Optimized Filtering with Warp-Aggregated Atomics". I had assumed atomics would be intrinsically slower for this purpose; however, I was wrong, especially since I don't think I care about maintaining element order in the array during my simulation. I'll have to think about it some more and then implement it to see what happens!
EDIT 01/04/16: I realized I never wrote about my results. Unfortunately, in that Parallel Forall blog post they compared the global-atomic compaction method against Thrust's prefix-sum compaction, which is actually quite slow. CUB's DeviceSelect::If is much faster than Thrust's, as is the prefix-sum version I wrote using CUB's DeviceScan plus custom code. The warp-aggregated global-atomic method is still faster by about 5-10%, but nowhere near the 3-4x speedup I had been hoping for based on the results in the blog. I'm still using the prefix-sum method: while maintaining element order is not necessary, I prefer the consistency of the prefix-sum results, and the advantage from the atomics is not very big. I still try various methods to improve compaction, but so far only marginal improvements (2% at best) for dramatically increased code complexity.
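For reference, the warp-aggregated idea boils down to reserving output slots with one atomicAdd per warp instead of one per thread. A rough sketch of the general technique (not the blog's exact code; it reuses the predicate() function from the phase 1 kernel further down and the pre-CUDA-9 __ballot/__shfl intrinsics; d_count must be zeroed before launch):
__device__ int warp_reserve_slots(int *d_count, bool keep){
    unsigned int active = __ballot(keep);               //lanes keeping their element
    if(active == 0) return 0;                           //nothing to reserve in this warp
    int lane = threadIdx.x & 31;
    int leader = __ffs(active) - 1;                     //lowest lane with keep == true
    int base = 0;
    if(lane == leader) base = atomicAdd(d_count, __popc(active)); //one atomic per warp
    base = __shfl(base, leader);                        //broadcast the base offset
    return base + __popc(active & ((1u << lane) - 1));  //this lane's rank among keepers
}

__global__ void compact_atomic(float *out, const float *in, int *d_count, int n){
    int myID = blockIdx.x*blockDim.x + threadIdx.x;
    for(int id = myID; id < n; id += blockDim.x*gridDim.x){
        bool keep = predicate(in[id]);                  //same predicate as the filter
        int slot = warp_reserve_slots(d_count, keep);
        if(keep){ out[slot] = in[id]; }                 //element order is NOT preserved
    }
}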
Details:
I am writing a simulation in CUDA where, every 40-60 time steps, I compact out elements I am no longer interested in simulating. From profiling, it seems that the scatter op takes up the most time when compacting, more so than the filter kernel or the prefix sum. Right now I use a pretty basic scatter function:
__global__ void scatter_arrays(float * new_freq, const float * const freq,
                               const int * const flag, const int * const scan_Index,
                               const int freq_Index){
    int myID = blockIdx.x*blockDim.x + threadIdx.x;
    for(int id = myID; id < freq_Index; id += blockDim.x*gridDim.x){
        if(flag[id]){
            new_freq[scan_Index[id]] = freq[id];
        }
    }
}
freq_Index is the number of elements in the old array. The flag array is the result of the filter kernel. scan_Index is the result of the prefix sum over the flag array.
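For context, the host-side glue between the three stages looks roughly like this (a sketch only; the filter kernel name, launch sizes and the CUB call are illustrative, with an exclusive sum so the first flagged element lands at index 0):
filter_kernel<<<num_blocks, block_size>>>(flag, freq, freq_Index);      //writes 0/1 flags

size_t temp_bytes = 0;                                                  //CUB two-phase call
void *d_temp = NULL;
cub::DeviceScan::ExclusiveSum(d_temp, temp_bytes, flag, scan_Index, freq_Index);
cudaMalloc(&d_temp, temp_bytes);
cub::DeviceScan::ExclusiveSum(d_temp, temp_bytes, flag, scan_Index, freq_Index);

scatter_arrays<<<num_blocks, block_size>>>(new_freq, freq, flag, scan_Index, freq_Index);
cudaFree(d_temp);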
Attempts I've made to improve it are to read the flagged frequencies into shared memory first and then write from shared memory to global memory, the idea being that the writes to global memory would be more coalesced amongst the warps (e.g. instead of thread 0 writing to position 0 and thread 128 writing to position 1, thread 0 would write to 0 and thread 1 would write to 1). I also tried vectorizing the reads and writes: instead of reading and writing floats/ints, I read/wrote float4/int4 from the global arrays when possible, so four numbers at a time. I thought this might speed up the scatter by having fewer memory ops transferring larger amounts of memory. The "kitchen sink" code with both vectorized memory loads/stores and shared memory is below:
const int compact_threads = 256;

__global__ void scatter_arrays2(float * new_freq, const float * const freq,
                                const int * const flag, const int * const scan_Index,
                                const int freq_Index){
    int gID = blockIdx.x*blockDim.x + threadIdx.x; //global ID
    int tID = threadIdx.x; //thread ID within block
    __shared__ float row[4*compact_threads];
    __shared__ int start_index[1];
    __shared__ int end_index[1];
    float4 myResult;
    int st_index;
    int4 myFlag;
    int4 index;
    for(int id = gID; id < freq_Index/4; id += blockDim.x*gridDim.x){
        if(tID == 0){
            index = reinterpret_cast<const int4*>(scan_Index)[id];
            myFlag = reinterpret_cast<const int4*>(flag)[id];
            start_index[0] = index.x;
            st_index = index.x;
            myResult = reinterpret_cast<const float4*>(freq)[id];
            if(myFlag.x){ row[0] = myResult.x; }
            if(myFlag.y){ row[index.y-st_index] = myResult.y; }
            if(myFlag.z){ row[index.z-st_index] = myResult.z; }
            if(myFlag.w){ row[index.w-st_index] = myResult.w; }
        }
        __syncthreads();
        if(tID > 0){
            myFlag = reinterpret_cast<const int4*>(flag)[id];
            st_index = start_index[0];
            index = reinterpret_cast<const int4*>(scan_Index)[id];
            myResult = reinterpret_cast<const float4*>(freq)[id];
            if(myFlag.x){ row[index.x-st_index] = myResult.x; }
            if(myFlag.y){ row[index.y-st_index] = myResult.y; }
            if(myFlag.z){ row[index.z-st_index] = myResult.z; }
            if(myFlag.w){ row[index.w-st_index] = myResult.w; }
            if(tID == blockDim.x - 1 || gID == freq_Index/4 - 1){ end_index[0] = index.w + myFlag.w; }
        }
        __syncthreads();
        int count = end_index[0] - st_index;
        int rem = st_index & 0x3; //equivalent to modulo 4
        int offset = 0;
        if(rem){ offset = 4 - rem; }
        if(tID < offset && tID < count){
            new_freq[st_index+tID] = row[tID];
        }
        int tempID = 4*tID+offset;
        if((tempID+3) < count){
            reinterpret_cast<float4*>(new_freq)[tID] = make_float4(row[tempID],row[tempID+1],row[tempID+2],row[tempID+3]);
        }
        tempID = tID + offset + (count-offset)/4*4;
        if(tempID < count){ new_freq[st_index+tempID] = row[tempID]; }
    }
    int id = gID + freq_Index/4 * 4;
    if(id < freq_Index){
        if(flag[id]){
            new_freq[scan_Index[id]] = freq[id];
        }
    }
}
Obviously it gets a bit more complicated. :) While the above kernel seems stable when there are hundreds of thousands of elements in the array, I've noticed a race condition when the array size runs into the tens of millions. I'm still trying to track the bug down.
But regardless, neither method (shared memory or vectorization), together or alone, improved performance. I was especially surprised by the lack of benefit from vectorizing the memory ops. It had helped in other functions I had written, though now I am wondering whether it helped there because it increased instruction-level parallelism in the calculation steps of those functions rather than because of the fewer memory ops.
I found that the algorithm mentioned in this poster (a similar algorithm is also discussed in this paper) works pretty well, especially for compacting large arrays. It uses less memory and is slightly faster (5-10%) than my previous method. I put in a few tweaks to the poster's algorithm: 1) eliminating the final warp-shuffle reduction in phase 1, since the elements can simply be summed as they are calculated; 2) giving the function the ability to work on arrays that are not sized as a multiple of 1024, and adding grid-strided loops; and 3) allowing each thread to load its registers simultaneously in phase 3 instead of one at a time. I also use CUB instead of Thrust for the inclusive sum, for faster scans. There may be more tweaks I can make, but for now this is good.
//kernel phase 1
int myID = blockIdx.x*blockDim.x + threadIdx.x;
//padded_length is nearest multiple of 1024 > true_length
for(int id = myID; id < (padded_length >> 5); id += blockDim.x*gridDim.x){
    int lnID = threadIdx.x % warp_size;
    int warpID = id >> 5;
    unsigned int mask;
    unsigned int cnt = 0;
    for(int j = 0; j < 32; j++){
        int index = (warpID<<10)+(j<<5)+lnID;
        bool pred;
        if(index > true_length) pred = false;
        else pred = predicate(input[index]);
        mask = __ballot(pred);
        if(lnID == 0) {
            flag[(warpID<<5)+j] = mask;
            cnt += __popc(mask);
        }
    }
    if(lnID == 0) counter[warpID] = cnt; //store sum
}

//kernel phase 2 -> CUB inclusive sum transforms the counter array into the scan_Index array

//kernel phase 3
int myID = blockIdx.x*blockDim.x + threadIdx.x;
for(int id = myID; id < (padded_length >> 5); id += blockDim.x*gridDim.x){
    int lnID = threadIdx.x % warp_size;
    int warpID = id >> 5;
    unsigned int predmask;
    unsigned int cnt;
    predmask = flag[(warpID<<5)+lnID];
    cnt = __popc(predmask);
    //parallel prefix sum
    #pragma unroll
    for(int offset = 1; offset < 32; offset <<= 1){
        unsigned int n = __shfl_up(cnt, offset);
        if(lnID >= offset) cnt += n;
    }
    unsigned int global_index = 0;
    if(warpID > 0) global_index = scan_Index[warpID - 1];
    for(int i = 0; i < 32; i++){
        unsigned int mask = __shfl(predmask, i); //broadcast from thread i
        unsigned int sub_group_index = 0;
        if(i > 0) sub_group_index = __shfl(cnt, i-1);
        if(mask & (1 << lnID)){
            compacted_array[global_index + sub_group_index + __popc(mask & ((1 << lnID) - 1))] = input[(warpID<<10)+(i<<5)+lnID];
        }
    }
}
EDIT: There is a newer article by a subset of the poster's authors in which they examine a faster variation of compaction than what is written above. However, their new version is not order-preserving, so it is not useful for me, and I haven't implemented it to test it. That said, if your project doesn't rely on object order, their newer compaction version can probably speed up your algorithm.

Optimising Matrix Multiplication OpenCL - Purpose: learn how to manage memory

I'm new to OpenCL and trying to understand how to optimise matrix multiplication in order to become familiar with the various paradigms. Here's the current code.
I'm multiplying matrices A and B. I allocate a row of A in private memory to start with (because each work-item uses it), and a column of B in local memory (because each work-group uses it).
1) The code is currently incorrect. Unfortunately I'm struggling with how to use the local work IDs to get it right, and I can't find my mistake. I'm basing myself on http://www.cs.bris.ac.uk/home/simonm/workshops/OpenCL_lecture3.pdf, but it seems that slide 27 is wrong, as they don't make use of loc_size in their inner loop.
2) Are there any other optimisations you would suggest for this code?
__kernel void mmul(
    __global int* C,
    __global int* A,
    __global int* B,
    const int rA,
    const int rB,
    const int cC,
    __local char* local_mem)
{
    int k, ty;
    int tx = get_global_id(0);
    int loctx = get_local_id(0);
    int loc_size = get_local_size(0);
    int value = 0;
    int tmp_array[1000];
    for(k=0; k<rB; k++) {
        tmp_array[k] = A[tx * cA + k];
    }
    for (ty=0; ty < cC; ty++) {
        for (k = loctx; k < rB; k+=loc_size) {
            local_mem[k] = B[ty + k * cC];
        }
        barrier(CLK_LOCAL_MEM_FENCE);
        value = 0;
        for(k=0; k<rB; k+=1) {
            int i = loctx + k*loc_size;
            value += tmp_array[k] * local_mem[i];
        }
        C[ty + (tx * cC)] = value;
    }
}
where I set the global and local work sizes as follows:
const size_t globalWorkItems[1] = {result_row};
const size_t localWorkItems[1] = {(size_t)local_wi_size};
local_wi_size is result_row divided by the number of compute units (chosen such that result_row % compute units == 0).
Your code is pretty close, but the indexing into the local memory array is actually simpler than you think. You have a row in private memory and a column in local memory, and you need to compute the dot product of these two vectors. You just need to sum row[k]*col[k], for k = 0 up to N-1:
for(k=0; k<rB; k+=1) {
    value += tmp_array[k] * local_mem[k];
}
There's actually a second, more subtle bug that is also present in the example solution given on the slides you are using. Since you are reading and writing local memory inside a loop, you actually need two barriers, in order to make sure that work-items writing to local memory on iteration i don't overwrite values that are being read by other work-items executing iteration i-1.
Therefore, the full code for your kernel (tested and working) should look something like this:
__kernel void mmul(
    __global int* C,
    __global int* A,
    __global int* B,
    const int rA,
    const int rB,
    const int cC,
    __local char* local_mem)
{
    int k, ty;
    int tx = get_global_id(0);
    int loctx = get_local_id(0);
    int loc_size = get_local_size(0);
    int value = 0;
    int tmp_array[1000];
    for(k=0; k<rB; k++) {
        tmp_array[k] = A[tx * cA + k];
    }
    for (ty=0; ty < cC; ty++) {
        for (k = loctx; k < rB; k+=loc_size) {
            local_mem[k] = B[ty + k * cC];
        }
        barrier(CLK_LOCAL_MEM_FENCE); // First barrier to ensure writes have finished
        value = 0;
        for(k=0; k<rB; k+=1) {
            value += tmp_array[k] * local_mem[k];
        }
        C[ty + (tx * cC)] = value;
        barrier(CLK_LOCAL_MEM_FENCE); // Second barrier to ensure reads have finished
    }
}
You can find the full set of exercises and solutions that go with the slides you are looking at on the HandsOnOpenCL GitHub page. There's also a more complete set of slides from the same tutorial available here, which goes on to show a much more optimised matrix multiply example that uses a blocking approach to better exploit temporal and spatial locality. The aforementioned missing-barrier bug has been fixed in the example solution code, but not in the slides (yet).
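To give a flavour of the blocking approach, a generic tiled kernel looks something like this (a sketch, not the tutorial's code; it assumes square NxN float matrices, N divisible by TILE, and a TILE x TILE 2D work-group):
#define TILE 16

__kernel void mmul_blocked(const int N,
                           __global const float* A,
                           __global const float* B,
                           __global float* C)
{
    __local float Asub[TILE][TILE];
    __local float Bsub[TILE][TILE];

    int row  = get_global_id(1);
    int col  = get_global_id(0);
    int lrow = get_local_id(1);
    int lcol = get_local_id(0);

    float acc = 0.0f;
    for (int t = 0; t < N / TILE; t++) {
        // Cooperatively load one tile of A and one tile of B into local memory
        Asub[lrow][lcol] = A[row * N + (t * TILE + lcol)];
        Bsub[lrow][lcol] = B[(t * TILE + lrow) * N + col];
        barrier(CLK_LOCAL_MEM_FENCE);

        for (int k = 0; k < TILE; k++)
            acc += Asub[lrow][k] * Bsub[k][lcol];
        barrier(CLK_LOCAL_MEM_FENCE);  // don't overwrite tiles still being read
    }
    C[row * N + col] = acc;
}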

Compose new 32bit bitmap with alpha channel from other 32bit bitmaps

I have several 32-bit bitmaps with an alpha channel.
I need to compose a new bitmap that again has an alpha channel, so that the final bitmap can later be used with AlphaBlend.
There is no need for stretching. If there were no alpha channel, I would just use BitBlt to create the new bitmap.
I am not using managed code; I just want to do this with standard GDI / WinAPI functions. I am also interested in a solution that does not require any special libraries.
TIA
Note: I know that I can use several AlphaBlend calls to do the same composition in the final output. But for ease of use in my program I would prefer to compose such a bitmap once.
You can go through every pixel and compose them manually:
void ComposeBitmaps(BITMAP* bitmaps, int bitmapCount, BITMAP& outputBitmap)
{
    for(int y=0; y<outputBitmap.bmHeight; ++y)
    {
        for(int x=0; x<outputBitmap.bmWidth; ++x)
        {
            int b = 0;
            int g = 0;
            int r = 0;
            int a = 0;
            for(int i=0; i<bitmapCount; ++i)
            {
                unsigned char* samplePtr = (unsigned char*)bitmaps[i].bmBits+(y*outputBitmap.bmWidth+x)*4;
                b += samplePtr[0]*samplePtr[3];
                g += samplePtr[1]*samplePtr[3];
                r += samplePtr[2]*samplePtr[3];
                a += samplePtr[3];
            }
            unsigned char* outputSamplePtr = (unsigned char*)outputBitmap.bmBits+(y*outputBitmap.bmWidth+x)*4;
            if(a>0)
            {
                outputSamplePtr[0] = b/a;
                outputSamplePtr[1] = g/a;
                outputSamplePtr[2] = r/a;
                outputSamplePtr[3] = a/bitmapCount;
            }
            else
                outputSamplePtr[3] = 0;
        }
    }
}
(Assuming all bitmaps are 32-bit and have the same width and height)
Or, if you want to draw bitmaps one on top of another, rather than mix them in equal proportions:
unsigned char* outputSamplePtr = (unsigned char*)outputBitmap.bmBits+(y*outputBitmap.bmWidth+x)*4;
outputSamplePtr[3] = 0;
for(int i=0; i<bitmapCount; ++i)
{
    unsigned char* samplePtr = (unsigned char*)bitmaps[i].bmBits+(y*outputBitmap.bmWidth+x)*4;
    outputSamplePtr[0] = (outputSamplePtr[0]*outputSamplePtr[3]*(255-samplePtr[3])+samplePtr[0]*samplePtr[3]*255)/(255*255);
    outputSamplePtr[1] = (outputSamplePtr[1]*outputSamplePtr[3]*(255-samplePtr[3])+samplePtr[1]*samplePtr[3]*255)/(255*255);
    outputSamplePtr[2] = (outputSamplePtr[2]*outputSamplePtr[3]*(255-samplePtr[3])+samplePtr[2]*samplePtr[3]*255)/(255*255);
    outputSamplePtr[3] = samplePtr[3]+outputSamplePtr[3]*(255-samplePtr[3])/255;
}
I found the following solution, which fits best for me:
1) I create a new target bitmap with CreateDIBSection.
2) I prefill the new bitmap with fully transparent pixels (FillMemory/ZeroMemory).
3) I retrieve the pixels that need to be copied with GetDIBits. If the width allows it, I copy the rows directly into the buffer created in step 1; otherwise I copy the data row by row into that buffer.
The resulting bitmap can be used with AlphaBlend and in CImageList objects.
Because the bitmaps don't overlap, I don't need to take care of the target data.
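In code, the approach looks roughly like this (a minimal sketch with a hypothetical helper; it assumes the source and target have the same dimensions, and the transparent prefill matters when the source covers only part of the target):
HBITMAP CreateComposedBitmap(HDC hdc, HBITMAP hbmSrc, int width, int height)
{
    BITMAPINFO bmi = {};
    bmi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
    bmi.bmiHeader.biWidth       = width;
    bmi.bmiHeader.biHeight      = height;        // bottom-up DIB
    bmi.bmiHeader.biPlanes      = 1;
    bmi.bmiHeader.biBitCount    = 32;
    bmi.bmiHeader.biCompression = BI_RGB;

    void* bits = NULL;
    HBITMAP hbmTarget = CreateDIBSection(hdc, &bmi, DIB_RGB_COLORS, &bits, NULL, 0);
    if(!hbmTarget)
        return NULL;

    ZeroMemory(bits, (size_t)width * height * 4);                    // fully transparent prefill

    GetDIBits(hdc, hbmSrc, 0, height, bits, &bmi, DIB_RGB_COLORS);   // copy the source rows

    return hbmTarget;  // usable with AlphaBlend / CImageList
}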

Initialize device array in CUDA

How do I initialize a device array which is allocated using cudaMalloc()?
I tried cudaMemset, but it fails to initialize the values to anything except 0. The code for cudaMemset looks like below, where value is initialized to 5:
cudaMemset(devPtr,value,number_bytes)
As you are discovering, cudaMemset works like the C standard library memset. Quoting from the documentation:
cudaError_t cudaMemset(void* devPtr, int value, size_t count)
Fills the first count bytes of the memory area pointed to by devPtr with the constant byte value value.
So value is a byte value. If you do something like:
int *devPtr;
cudaMalloc((void **)&devPtr, number_bytes);
const int value = 5;
cudaMemset(devPtr, value, number_bytes);
what you are asking for is that each byte of devPtr be set to 5. If devPtr were an array of integers, the result would be that each integer word has the value 84215045 (i.e. 0x05050505). This is probably not what you had in mind.
Using the runtime API, what you could do is write your own generic kernel to do this. It could be as simple as
template<typename T>
__global__ void initKernel(T * devPtr, const T val, const size_t nwords)
{
    int tidx = threadIdx.x + blockDim.x * blockIdx.x;
    int stride = blockDim.x * gridDim.x;
    for(; tidx < nwords; tidx += stride)
        devPtr[tidx] = val;
}
(standard disclaimer: written in browser, never compiled, never tested, use at own risk).
Just instantiate the template for the types you need and call it with a suitable grid and block size, paying attention to the last argument now being a word count, not a byte count as in cudaMemset. This isn't really any different from what cudaMemset does anyway; using that API call results in a kernel launch which is not too different from what I posted above.
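For example, filling an int array with the value 5 might look like this (the block and grid sizes are just illustrative choices):
int *devPtr;
const size_t nwords = 1 << 20;
cudaMalloc((void **)&devPtr, nwords * sizeof(int));
initKernel<int><<<256, 256>>>(devPtr, 5, nwords);   // 256 blocks of 256 threads
cudaDeviceSynchronize();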
Alternatively, if you can use the driver API, there are cuMemsetD16 and cuMemsetD32, which do the same thing but for half-word and full 32-bit word types. If you need to set 64-bit or larger types (so doubles or vector types), your best option is to use your own kernel.
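A minimal sketch of that route (it assumes a context already exists from a prior runtime call, and requires cuda.h plus linking against the driver library):
int *devPtr;
const size_t nwords = 1 << 20;
cudaMalloc((void **)&devPtr, nwords * sizeof(int));
cuMemsetD32((CUdeviceptr)devPtr, 5, nwords);   // count is in 32-bit words, not bytes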
I also needed a solution to this question and I didn't really understand the other proposed solution. Particularly, I didn't understand why it iterates over the grid blocks (for(; tidx < nwords; tidx += stride)) and, for that matter, the kernel invocation and the counter-intuitive word sizes.
Therefore I created a much simpler, monolithic generic kernel and customized it with strides, i.e. you may use it to initialize a matrix in multiple ways, e.g. to set rows or columns to any value:
template <typename T>
__global__ void kernelInitializeArray(T* __restrict__ a, const T value,
                                      const size_t n, const size_t incx) {
    int tid = threadIdx.x + blockDim.x * blockIdx.x;
    if (tid*incx < n) {
        a[tid*incx] = value;
    }
}
Then you may invoke the kernel like this:
template <typename T>
void deviceInitializeArray(T* a, const T value, const size_t n, const size_t incx) {
    int number_of_blocks = ((n / incx) + BLOCK_SIZE - 1) / BLOCK_SIZE;  // BLOCK_SIZE is a compile-time constant, e.g. 256
    dim3 gridDim(number_of_blocks, 1);
    dim3 blockDim(BLOCK_SIZE, 1);
    kernelInitializeArray<T> <<<gridDim, blockDim>>>(a, value, n, incx);
}
