OpenCL parallel buffer compaction barrier issue

As a school project, four of us are working on a parallel raytracer with OpenCL. It's our first project using OpenCL, so we may have some misunderstandings about it.
We're trying to implement parallel buffer compaction to remove finished rays, or rays that didn't collide with anything, so the next iteration has less data to process.
Basically, we have a buffer of as many s_ray_states as needed for rendering: we trace them, get the collision data, compact the buffer so it contains only the rays which collided with an object, then shade them.
So we have a buffer uint *prefix_sum which contains the index to which each s_ray_state must be moved in the buffer s_ray_state *ray_states, in order to reduce the number of rays sent to the shading kernel and to the next iterations of the trace/shade kernels.
Sadly, the ray_sort kernel below doesn't seem to be working right. We verified the input prefix_sum data, which is 100% correct, same for the ray_states buffer, but we're getting unwanted data in the output.
We're launching a single workgroup (global work size = local work size), and the rays are always moved in the buffer to a smaller index than their original one. We've put barriers, and are using the s_ray_state *tmp buffer to prevent parallel executions from writing over each other's data, but it doesn't seem to work; even when removing the barriers we get the same result.
Both of us have been working on it for 4 days and have already asked other students for help, but no one seems to be able to figure out what's wrong.
We may not understand barriers / mem fences well enough to be sure this can in fact work.
We already tried making a single work item in a single work group sort the whole array, which works, and even gives better performance.
Is the code below supposed to work? With our understanding of OpenCL it should be, and we did a lot of research but never really got a clear answer.
kernel void ray_sort(
    global read_only uint *prefix_sum,
    global read_write struct s_ray_state *ray_states,
    global read_only uint *ray_states_size,
    local read_write struct s_ray_state *tmp
)
{
    int l_size = get_local_size(0);
    int l_id = get_local_id(0);
    int group_id = -1;
    int group_nb = *ray_states_size / l_size;
    int state_id;

    while (++group_id < group_nb)
    {
        state_id = group_id * l_size + l_id;
        tmp[l_id] = ray_states[state_id];
        barrier(CLK_LOCAL_MEM_FENCE);
        if (did_hit(ray_states[state_id]))
            ray_states[prefix_sum[state_id]] = tmp[l_id];
        barrier(CLK_GLOBAL_MEM_FENCE);
    }
}
ray_states length is ray_states_size.
prefix_sum contains the index to which each ray_states element must be moved.
tmp is a local buffer of size local_work_size.
local_work_size = global_work_size.
did_hit() returns 1 if the ray hits an object, 0 otherwise.
We're expecting the ray_states elements to be moved to the indices contained in prefix_sum.
Example: each ray_states[id] gets moved to index prefix_sum[id] in ray_states.

prefix_sum:              0 | 0 | 1 | 1 | 2 | 3 | 3 | 3 | 4
did_hit(ray_states[id]): 0 | 1 | 0 | 1 | 1 | 0 | 0 | 1 | 0
did_hit(output[id]):     1 | 1 | 1 | 1 | X | X | X | X | X

Xs can be anything.

I could be completely off here, but it seems to me that with did_hit(ray_states[state_id]) you are reading the same piece of global memory that you put into the local memory buffer tmp just two lines above. That wouldn't be a problem, except you are using that buffer for both input and output.
The way I see it, what actually happens on the hardware is this:
tmp[l_id] = ray_states[state_id];
tmp[l_id] = ray_states[state_id];
tmp[l_id] = ray_states[state_id];
tmp[l_id] = ray_states[state_id];
tmp[l_id] = ray_states[state_id];
... local-work-size times
barrier(CLK_LOCAL_MEM_FENCE);
if (did_hit(ray_states[state_id]))
    ray_states[prefix_sum[state_id]] = tmp[l_id];
if (did_hit(ray_states[state_id]))
    ray_states[prefix_sum[state_id]] = tmp[l_id];
if (did_hit(ray_states[state_id]))
    ray_states[prefix_sum[state_id]] = tmp[l_id];
if (did_hit(ray_states[state_id]))
    ray_states[prefix_sum[state_id]] = tmp[l_id];
... again local-work-size times
Considering that work-item parallel execution order is undefined (the hardware may choose any order it wants), this will lead to random results. Can you try this instead:

if (did_hit(tmp[l_id]))
    ray_states[prefix_sum[state_id]] = tmp[l_id];
BTW, if ray_states_size is just a simple integer, you can pass it directly by making the argument "uint ray_states_size". No need to fool around with buffers there.
EDIT1: my suggestion will only work if prefix_sum[state_id] doesn't have any duplicates within each local-work-size worth of ids; otherwise there will still be a data race. So e.g. if prefix_sum[state_id] is 0 for both state_ids 1 and 3, and your local WG size is >= 4, there will be a data race.
Also, is there some really good reason you must use the same buffer for input and output? It seems to me it'd be a lot less complicated if you had separate in/out buffers.
EDIT2: I just noticed you said "the rays are always moved in the buffer to a smaller index than their original" (sorry, I missed it). That's good, but not enough - are they always moved to a smaller index than any other ray's index in the same local WG? If yes, good, but there's still the other data race I mentioned.
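For illustration, here is a minimal sketch of the separate-buffers variant suggested above (the names ray_sort_compact, in_states and out_states are hypothetical; it assumes struct s_ray_state and did_hit() are defined elsewhere, and that the host swaps the two buffers between iterations). With distinct input and output buffers, no work-item can overwrite data that another work-item still has to read, so neither the barriers nor the local tmp staging buffer are needed:

kernel void ray_sort_compact(
    global const uint *prefix_sum,
    global const struct s_ray_state *in_states,  /* read only  */
    global struct s_ray_state *out_states,       /* write only */
    uint ray_states_size
)
{
    uint state_id = get_global_id(0);

    /* Copy each surviving ray to its compacted slot. This is race-free
       as long as prefix_sum assigns distinct targets to hitting rays. */
    if (state_id < ray_states_size && did_hit(in_states[state_id]))
        out_states[prefix_sum[state_id]] = in_states[state_id];
}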

Related

Convert structure fields to arrays efficiently in MATLAB

I have a structure called s in MATLAB, with two fields a and b. The structure size is 1 x 1,620,000.
It is a very large structure (it probably takes up half the RAM of my machine).
I am looking for an efficient way to concatenate each of the fields a and b into two separate arrays that I can then export to CSV. I built the code below to do so, but even after 12 hours of running it has not even reached a quarter of the loop. Is there any more efficient way of doing this?
a = [];
b = [];
total_n = size(s,2);
count = 1;
while size(s,2) > 0
    if size(s(1).a,1)
        a = [a; s(1).a];
    end
    if size(s(1).b,1)
        b = [b; s(1).b];
    end
    s(1) = []; %to save memory
    if mod(count,1000) == 0
        fprintf('Done %2f \n', count/total_n)
    end
    count = count + 1;
end
s(1) = []; %to save memory

Ah, but what a huge misunderstanding that comment is!
If size(s) is 1 x 1,620,000, that line suddenly forces the loop to do this under the hood (you don't see it):

snew = zeros(1, size(s,2)-1); % now you use double the memory
snew = s(2:end);              % now you force an unnecessary copy

So not only does that line make your code require double the memory, but in each iteration you also make an unnecessary copy of a large array.
Just replace your while with a normal for loop (for ii = 1:size(s,2)) and then index s!
Now you can hopefully see why the following is an equally big mistake (not only that, but any modern MATLAB version will tell you right in the editor that this is a bad idea):

a = [];
a = [a; s(1).a];

Here, in each iteration, you force MATLAB to make a new a that is one element bigger than before and to copy the contents of the old a into it.
Instead, preallocate a.
As you don't know in advance what you are going to put there, and each s(ii).a has a different length, I suggest using a cell array.
You can then, after the loop, remove all empty (isempty) cells if you want.
Managed to do it efficiently:

s = struct2cell(s);
s = squeeze(s);
a = s(1,:);
a = a';
a = vertcat(a{:});
b = s(2,:);
b = b';
b = vertcat(b{:});

Static Analysis erroneously reports out of bounds access

While reviewing a codebase, I came upon a particular piece of code that triggered a warning regarding an "out of bounds access". After looking at the code, I could not see a way for the reported access to happen - and tried to minimize the code to create a reproducible example. I then checked this example with two commercial static analysers that I have access to - and also with the open-source Frama-C.
All 3 of them see the same "out of bounds" access.
I don't. Let's have a look:
 3  extern int checker(int id);
 4  extern int checker2(int id);
 5
 6  int compute(int *q)
 7  {
 8      int res = 0, status;
 9
10      status = checker2(12);
11      if (!status) {
12          status = 1;
13          *q = 2;
14          for (int i = 0; i < 2 && 0 != status; i++) {
15              if (checker(i)) {
16                  res = i;
17                  status = checker2(i);
18              }
19          }
20      }
21      if (!status)
22          *q = res;
23      return status;
24  }
25
26  int someFunc(int id)
27  {
28      int p;
29      extern int data[2];
30
31      int status = checker2(132);
32      status |= compute(&p);
33      if (status == 0) {
34          return data[p];
35      } else
36          return -1;
37  }
Please don't try to judge the quality of the code, or why it does things the way it does. This is a hacked, cropped and mutated version of the original, with the sole intent being to reach a small example that demonstrates the issue.
All analysers I have access to report the same thing - that the indexing in the caller at line 34, doing the return data[p] may read via the invalid index "2". Here's the output from Frama-C - but note that two commercial static analysers provide exactly the same assessment:
$ frama-c -val -main someFunc -rte why.c |& grep warning
...
why.c:34:[value] warning: accessing out of bounds index. assert p < 2;
Let's step the code in reverse, to see how this out of bounds access at line 34 can happen:
To end up in line 34, the returned status from both calls to checker2 and compute should be 0.
For compute to return 0 (at line 32 in the caller, line 23 in the callee), it means that we have performed the assignment at line 22 - since it is guarded at line 21 with a check for status being 0. So we wrote in the passed-in pointer q, whatever was stored in variable res. This pointer points to the variable used to perform the indexing - the supposed out-of-bounds index.
So, to experience an out of bounds access into the data, which is dimensioned to contain exactly two elements, we must have written a value that is neither 0 nor 1 into res.
We write into res via the for loop at line 14, which conditionally assigns into res; if it does assign, the value it writes will be one of the two valid indexes, 0 or 1 - because those are the values that the for loop allows through (it is bounded by i<2).
Due to the initialization of status at line 12, if we do reach line 12, we will for sure enter the loop at least once. And if we do write into res, we will write a nice valid index.
What if we don't write into it, though? The "default" setup at line 13 has written a "2" into our target - which is probably what scares the analysers. Can that "2" indeed escape out into the caller?
Well, it doesn't seem so... if the status check at either line 11 or line 21 fails, we return with a non-zero status; so whatever value we wrote into the passed-in q (or didn't, leaving it uninitialised) is irrelevant; the caller will not read that value, due to the check at line 33.
So either I am missing something and there is indeed a scenario that leads to an out of bounds access with index 2 at line 34 (how?) or this is an example of the limits of mainstream formal verification.
Help?
When dealing with a case such as having to distinguish between == 0 and != 0 inside a range such as [INT_MIN; INT_MAX], you need to tell Frama-C/Eva to split the cases.
By adding //@ split annotations in the appropriate spots, you can tell Frama-C/Eva to maintain separate states, preventing them from being merged before status is evaluated.
Here's how your code would look in this case (courtesy of @Virgile):
extern int checker(int id);
extern int checker2(int id);

int compute(int *q)
{
    int res = 0, status;
    status = checker2(12);
    //@ split status <= 0;
    //@ split status == 0;
    if (!status) {
        status = 1;
        *q = 2;
        for (int i = 0; i < 2 && 0 != status; i++) {
            if (checker(i)) {
                res = i;
                status = checker2(i);
            }
        }
    }
    //@ split status <= 0;
    //@ split status == 0;
    if (!status)
        *q = res;
    return status;
}

int someFunc(int id)
{
    int p;
    extern int data[2];

    int status = checker2(132);
    //@ split status <= 0;
    //@ split status == 0;
    status |= compute(&p);
    if (status == 0) {
        return data[p];
    } else
        return -1;
}
In each case, the first split annotation tells Eva to consider the cases status <= 0 and status > 0 separately; this allows "breaking" the interval [INT_MIN, INT_MAX] into [INT_MIN, 0] and [1, INT_MAX]; the second annotation allows separating [INT_MIN, 0] into [INT_MIN, -1] and [0, 0]. When these 3 states are propagated separately, Eva is able to precisely distinguish between the different situations in the code and avoid the spurious alarm.
You also need to allow Frama-C/Eva some margin for keeping the states separated (by default, Eva will optimize for efficiency, merging states somewhat aggressively); this is done by adding -eva-precision 1 (higher values may be required for your original scenario).
Frama-C/Eva also has other options related to splitting states. One of them is the signs domain, which computes information about the sign of variables and is useful to distinguish between zero and non-zero values. In some cases (such as a slightly simplified version of your code, where status |= compute(&p); is replaced with status = compute(&p);), the sign domain may help splitting without the need for annotations. Enable it using -eva-domains sign (-eva-sign-domain for Frama-C <= 20).
Another related option is -eva-partition-history N, which tells Frama-C to keep the states partitioned for longer.
Note that keeping states separated is a bit costly in terms of analysis time, so it may not scale when applied to the "real" code if it contains several more branches. Increasing the values given to -eva-precision and -eva-partition-history may help, as well as adding //@ split annotations.
I'd like to add some remarks which will hopefully be useful in the future:
Using Frama-C/Eva effectively
Frama-C contains several plug-ins and analyses. Here in particular, you are using the Eva plug-in. It performs an analysis based on abstract interpretation that reports all possible runtime errors (undefined behaviors, as the C standard puts it) in a program. Using -rte is thus unnecessary, and adds noise to the result. If Eva cannot be certain about the absence of some alarm, it will report it.
Replace the -val option with -eva. It's the same thing, but the former is deprecated.
If you want to improve precision (to remove false alarms), add -eva-precision N, where 0 <= N <= 11. In your example program, it doesn't change much, but in complex programs with multiple callstacks, extra precision will take longer but minimize the number of false alarms.
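Putting the previous two points together, an invocation for the example above might look something like this (the flags are the ones named in this answer; the exact output will depend on your Frama-C version):

$ frama-c -eva -eva-precision 1 -main someFunc why.c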
Also, consider providing a minimal specification for the external functions, to avoid warnings; here they contain no pointers, but if they did, you'd need to provide an assigns clause to explicitly tell Frama-C whether the functions modify such pointers (or any global variables, for instance).
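As a sketch (an assumption on my part - the \from clauses should match what the real functions actually read), a minimal ACSL specification for the two externals could state that they assign nothing beyond their return value:

//@ assigns \result \from id;
extern int checker(int id);

//@ assigns \result \from id;
extern int checker2(int id);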
Using the GUI and Studia
With the Frama-C graphical interface and the Studia plug-in (accessible by right-clicking an expression of interest and choosing the popup menu Studia -> Writes), and using the Values panel in the GUI, you can easily track what the analysis inferred and better understand where the alarms and values come from. The only downside is that it does not report exactly where merges happen. For the most precise results possible, you may need to add calls to an Eva built-in, Frama_C_show_each(exp), and put it inside a loop to get Eva to display, at each iteration of its analysis, the values contained in exp.
See section 9.3 (Displaying intermediate results) of the Eva user manual for more details, including similar built-ins (such as Frama_C_domain_show_each and Frama_C_dump_each, which show information about abstract domains). You may need to #include "__fc_builtin.h" in your program. You can use #ifdef __FRAMAC__ to allow the original code to compile when including this Frama-C-specific file.
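As a hedged sketch (Eva treats any function whose name starts with Frama_C_show_each specially; the _loop suffix is arbitrary and the prototype is declared by the user), the loop in compute could be instrumented like this:

#ifdef __FRAMAC__
/* Eva prints the inferred values of the arguments each time its
   analysis passes over the call below. */
void Frama_C_show_each_loop(int i, int res, int status);
#endif

/* ... inside compute() ... */
for (int i = 0; i < 2 && 0 != status; i++) {
#ifdef __FRAMAC__
    Frama_C_show_each_loop(i, res, status);
#endif
    if (checker(i)) {
        res = i;
        status = checker2(i);
    }
}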
Being nitpicky about the term erroneous reports
Frama-C is a semantics-based tool whose main analyses are exhaustive but may produce false positives: Frama-C may report alarms that cannot actually happen, but it should never miss a possible one. It's a trade-off; you can't have an exact tool in all cases (though in this example, with sufficient -eva-precision, Frama-C is exact, reporting only issues which may actually happen).
In this sense, erroneous would mean that Frama-C "forgot" to indicate some issue, and we'd be really concerned if that happened. Indicating an alarm where it cannot happen is still problematic for the user (and we work to improve it, so such situations should happen less often), but it is not a bug in Frama-C, so we prefer to say imprecisely, e.g. "Frama-C/Eva imprecisely reports an out of bounds access".

MPI: Best way to coordinate many sends and receives

I've been away from parallel programming for a long period of time and I am trying to figure out the best method for coordinating the sending of large amounts of data between many processors with a complicated dependency structure. For example, I might need to send data to/from the following processes:
int process_1_dependencies[] = {2,3,5,6};
int process_2_dependencies[] = {1};
int process_3_dependencies[] = {1,4,5};
int process_4_dependencies[] = {3,5,6};
int process_5_dependencies[] = {1,3,4,6};
int process_6_dependencies[] = {1,4,5,7};
int process_7_dependencies[] = {6,8};
int process_8_dependencies[] = {7};
The obvious, and stupid, way of doing this would be to do something like:
for (int i = 0; i < world_size; i++)
{
    for (int j = 0; j < dependency_length; j++)
    {
        if (i == my_rank)
        {
            MPI_Irecv(..., source = dependency[j], ...);
        }
        else if (i == dependency[j])
        {
            MPI_Isend(..., dest = dependency[j], ...);
        }
    }
    // blocking stuff?
}
I'm not actually sure this would work once you have hundreds of communications going, and in any case it seems super inefficient. It's at least O(N) and only allows a single process to be receiving at once. A better way would be to use blocking and ensure that independent pairs of processes are simultaneously exchanging information. But that becomes quite complicated and requires optimizing which processes are simultaneously sending and receiving.
Am I just completely overthinking this? Is it safe to do something like this (provided that every sending process has a receiving pair):
MPI_Request reqs[2 * dependency_length];

for (int i = 0; i < dependency_length; i++)
{
    MPI_Isend(..., dest = dependency[i], ..., &reqs[2*i]);
    MPI_Irecv(..., source = dependency[i], ..., &reqs[2*i + 1]);
}
MPI_Waitall(2 * dependency_length, reqs, MPI_STATUSES_IGNORE);
Sorry for the lack of focus in the question. I'm away from my computer so I can't really test it out, and even if I could, I'm not confident that it is scalable and that the buffers would keep working for arbitrary numbers of processes.
To avoid queueing a large number of messages and to avoid opaque deadlock problems, you can also employ a single call to MPI_Alltoallv, where all sends and receives are done for you automatically, and - with crossed fingers - you can even hope that your MPI implementation is able to optimize all communication on its own. The prototype is
MPI_Alltoallv
(
    sendbuf,    // buffer containing all data needed by other ranks in comm
    sendcounts, // number of elements to send to each rank in comm
    sdispls,    // offsets in sendbuf per rank in comm
    sendtype,   // MPI datatype of the sent data
    recvbuf,    // buffer to contain all data needed by this rank
    recvcounts, // number of elements to receive per rank in comm
    rdispls,    // offsets in recvbuf per rank in comm
    recvtype,   // MPI datatype of the received data
    comm        // the communicator
);
where sendcounts would be directly related to your process_X_dependencies; it would contain non-zero values at the positions listed in process_X_dependencies.
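As a rough sketch (all names are hypothetical, and it assumes each rank sends exactly one int per dependency; in a general, asymmetric case you would first exchange the counts themselves, e.g. with MPI_Alltoall), the setup could look like this:

#include <mpi.h>
#include <stdlib.h>

/* Exchange one int with every rank listed in deps, via MPI_Alltoallv. */
void exchange_with_dependencies(const int *deps, int ndeps,
                                const int *senddata, int *recvdata,
                                int world_size, MPI_Comm comm)
{
    int *sendcounts = calloc(world_size, sizeof *sendcounts);
    int *recvcounts = calloc(world_size, sizeof *recvcounts);
    int *sdispls    = calloc(world_size, sizeof *sdispls);
    int *rdispls    = calloc(world_size, sizeof *rdispls);

    /* Non-zero counts only at the positions listed in deps. */
    for (int i = 0; i < ndeps; i++)
        sendcounts[deps[i]] = 1;

    /* The example's dependencies are symmetric, so we receive from
       exactly the ranks we send to. */
    for (int r = 0; r < world_size; r++)
        recvcounts[r] = sendcounts[r];

    /* Displacements are prefix sums of the counts. */
    for (int r = 1; r < world_size; r++) {
        sdispls[r] = sdispls[r-1] + sendcounts[r-1];
        rdispls[r] = rdispls[r-1] + recvcounts[r-1];
    }

    MPI_Alltoallv(senddata, sendcounts, sdispls, MPI_INT,
                  recvdata, recvcounts, rdispls, MPI_INT, comm);

    free(sendcounts); free(recvcounts);
    free(sdispls); free(rdispls);
}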

C-API: Allocating "PyTypeObject-extension"

I have found some code in PyCXX that may be buggy.
Is it indeed a bug, and if so, what is the right way to fix it?
Here is the problem:
struct PythonClassInstance
{
    PyObject_HEAD
    ExtObjBase* m_pycxx_object;
};

:
{
    :
    table->tp_new = extension_object_new; // PyTypeObject
    :
}
:
static PyObject* extension_object_new(
    PyTypeObject* subtype, PyObject* args, PyObject* kwds )
{
    PythonClassInstance* o = reinterpret_cast<PythonClassInstance *>
        ( subtype->tp_alloc(subtype,0) );
    if( ! o )
        return nullptr;

    o->m_pycxx_object = nullptr;

    PyObject* self = reinterpret_cast<PyObject* >( o );
    return self;
}
Now PyObject_HEAD expands to "PyObject ob_base;", so clearly PythonClassInstance trivially extends PyObject to contain an extra pointer (which will point to PyCXX's representation of this PyObject).
tp_alloc allocates memory for storing a PyObject.
The code then typecasts this pointer to a PythonClassInstance, laying claim to an extra 4 (or 8?) bytes that it does not own!
And then it sets this extra memory to 0.
This looks very dangerous, and I'm surprised the bug has gone unnoticed. The risk is that some future object will get placed in this location (which is meant to be storing the ExtObjBase*).
How to fix it?
PythonClassInstance foo{};
PyObject* tmp = subtype->tp_alloc(subtype,0);
// !!! memcpy sizeof(PyObject) bytes starting from location tmp into location (void*)&foo

But I think then I'd need to release tmp, and I don't think I should be playing with memory directly like this. I feel like it could jeopardise Python's inbuilt memory management/garbage collection machinery.
The other option is that maybe I can persuade tp_alloc to allocate 4 extra bytes (or is it 8 now? enough for a pointer) by passing in 1 instead of 0.
Documentation says this second parameter is "Py_ssize_t nitems" and:

If the type's tp_itemsize is non-zero, the object's ob_size field
should be initialized to nitems and the length of the allocated memory
block should be tp_basicsize + nitems*tp_itemsize, rounded up to a
multiple of sizeof(void*); otherwise, nitems is not used and the
length of the block should be tp_basicsize.
So it looks like I should be setting:
table->tp_itemsize = sizeof(void*);
:
PyObject* tmp = subtype->tp_alloc(subtype,1);
EDIT: I just tried this and it causes a crash.
But then the documentation goes on to say:
Do not use this function to do any other instance initialization, not
even to allocate additional memory; that should be done by tp_new.
Now I'm not sure whether this code belongs in tp_new or tp_init.
Related:
Passing arguments to tp_new and tp_init from subtypes in Python C API
Python C-API Object Allocation
The code is correct.
As long as the PyTypeObject for the extension object is properly initialized, it should work.
The base class tp_alloc receives subtype, so it knows how much memory to allocate by checking the tp_basicsize member.
This is a common Python C API pattern, as demonstrated in the tutorial.
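To make that point concrete, here is a hedged sketch in the C style of the CPython tutorial (the type and module names are hypothetical, and PyCXX's actual setup differs): what makes the cast safe is that the type object is registered with tp_basicsize = sizeof(PythonClassInstance), so tp_alloc already allocates room for the trailing m_pycxx_object pointer.

static PyTypeObject PythonClassType = {
    PyVarObject_HEAD_INIT(NULL, 0)
    .tp_name = "example.PythonClass",             /* hypothetical name */
    .tp_basicsize = sizeof(PythonClassInstance),  /* includes m_pycxx_object */
    .tp_itemsize = 0,                             /* fixed-size object */
    .tp_flags = Py_TPFLAGS_DEFAULT,
    .tp_new = extension_object_new,
};

Because tp_alloc(subtype, 0) allocates subtype->tp_basicsize bytes, the extra pointer is part of the allocation after all, and the reinterpret_cast does not read past its end.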
Actually, this is a (minor/harmless) bug in PyCXX.

How to get running maximum in Stata?

I would like to get the running maximum by writing Stata code.
I think I am quite close:
gen ctrhigh`iv' = max(ctr, L1.ctr, L2.ctr, L3.ctr, ..., L`iv'.ctr)
As you can see, my data are time series and `iv' represents the window (e.g. 5, 10 or 200 days)
The only problem is that you cannot pass a varlist or string containing numbers to max. E.g. the following is not possible:
local ivs 5 10 50 100 200
foreach iv in `ivs' {
    local vals
    local i = 1
    while (`i' <= `iv') {
        local vals "`vals' `i'"
        local ++i
    }
    gen ctrhigh`iv' = max(varlist vals) //not possible
}
How would I achieve this instead?
Example of quickly computing a running standard deviation:

* standard deviation of ctr, see http://en.wikipedia.org/wiki/Standard_deviation#Rapid_calculation_methods *
gen ctr_sq = ctr^2
by tid: gen ctr_cum = sum(ctr) if !missing(ctr)
by tid: gen ctr_sq_cum = sum(ctr_sq) if !missing(ctr_sq)
foreach iv in $ivs {
    if `iv' == 1 continue
    by tid: gen ctr_sum = ctr_cum - L`iv'.ctr_cum if !missing(ctr_cum) & !missing(L`iv'.ctr_cum)
    by tid: gen ctr_sq_sum = ctr_sq_cum - L`iv'.ctr_sq_cum if !missing(ctr_sq_cum) & !missing(L`iv'.ctr_sq_cum)
    by tid: gen ctrsd`iv' = sqrt((`iv' * ctr_sq_sum - ctr_sum^2) / (`iv'*(`iv'-1))) if !missing(ctr_sq_sum) & !missing(ctr_sum)
    label variable ctrsd`iv' "Rolling std dev of close ticker rank by `iv' days."
    drop ctr_sum ctr_sq_sum
}
drop ctr_sq ctr_cum ctr_sq_cum

Note: this is not an exact sd, it's an approximation. I realize that this is very different from a maximum, but it may serve as an illustration of how to deal with large data computations.
Your example is time series data and implies that you have tsset the data. You don't say whether you also have panel or longitudinal structure. I will assume the worst and assume the latter as it doesn't make the code much worse. So, suppose tsset id date. In fact, that's irrelevant to the code here except to make explicit my assumption that id is an identifier and date a time variable.
An unattractive way to do this is to loop over observations. Suppose window is set to 42.
local window = 42
gen max = .
tsset id date
quietly forval i = 1/`=_N' {
    su ctr if inrange(date, date[`i'] - `window', date[`i']) & id == id[`i'], meanonly
    replace max = r(max) in `i'
}
So, in words as well: summarize values of ctr if date within window and it's in the same panel (same id), and put the maximum in the current observation.
The meanonly option is not well named. It calculates some quantities besides the mean, and the maximum is one of them. But you do want the meanonly option to make summarize run as fast as possible.
See my 2007 paper on events in intervals, freely available at http://www.stata-journal.com/sjpdf.html?articlenum=pr0033
I say unattractive, but this approach does have the advantage that it is easy to work with once you understand it.
I am not setting up an expression with lots of arguments to max(). You gave 200 as an example, and nothing says you might not ask for more; so far as I can see there may be no upper limit on window length, but there will be a limit on how complicated that expression can be.
If I think of a better way to do it, I'll post it. Or someone else will....
It seems like I can pass a string of arguments to max, like so:
* OPTION 1: compute running max by days *
foreach iv in $ivs {
    * does not make sense for less than two days *
    if `iv' < 2 continue
    di "computing running max for ctr interval `iv'"
    * set high for this amount of days *
    local vars "ctr"
    forval i = 1 / `iv' {
        local vars "`vars', L`i'.ctr"
    }
    by tid: gen ctrh`iv' = max(`vars')
}

* OPTION 2: compute running max by days, ensuring that entire range is nonmissing *
foreach iv in $ivs {
    * does not make sense for less than two days *
    if `iv' < 2 continue
    di "computing running max for ctr interval `iv'"
    * set high for this amount of days *
    local vars "ctr"
    local condition "!missing(ctr)"
    forval i = 1 / `iv' {
        local vars "`vars', L`i'.ctr"
        local condition "`condition' & !missing(L`i'.ctr)"
    }
    by tid: gen ctrh`iv' = max(`vars') if `condition'
}
This computes very quickly and does exactly what I need.
However, if you need an arbitrarily large window I think you should resort to Nick's answer.
