I expect the following code snippet to produce the sum of all the elements plus the initial value, but it doesn't:
vector<long> a;
a.resize(10);
generate(a.begin(), a.end(), [n = 1]() mutable { return n++; });
#ifdef _MSC_VER
long sum = parallel_reduce(a.begin(), a.end(), 10);
assert(sum == 65); // 11 * 5 + 10. It produces 215 instead.
#endif
Instead of the expected 65, it produces 215. Even if 10 were added to every element, the sum would be 31 * 5 = 155, not 215. How does 215 come about?
What am I missing? Any reference that explains how the initial value is used would be appreciated.
You did not really provide an MCVE, as the code snippet in question does not compile, but one problem is still apparent: you assume the last argument to parallel_reduce is an initial value, while according to the documentation it is an identity value.
The identity for addition is 0, and if you pass that, you will get the expected result (55, the sum of the elements).
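For illustration, here is a minimal sketch of that fix, assuming MSVC's PPL (concurrency::parallel_reduce from <ppl.h>): pass the additive identity 0, then add your initial value afterwards.
#include <ppl.h>
#include <algorithm>
#include <cassert>
#include <vector>

int main() {
    std::vector<long> a(10);
    std::generate(a.begin(), a.end(), [n = 1L]() mutable { return n++; });
    // 0 is the identity for addition; add the initial value of 10 separately.
    long sum = concurrency::parallel_reduce(a.begin(), a.end(), 0L) + 10;
    assert(sum == 65); // 55 (the sum of 1..10) + 10
    return 0;
}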
I found a very good answer to this question on this thread, but I want to understand a little bit more about why printf() can't print a floating-point number as a decimal (with %d).
The program is a simple one converting Fahrenheit to Celsius.
I understand that %.f or %.0f does what I want, but when I try to do the same thing with %d, the output is unpredictable.
I searched cplusplus.com for more detailed information, but I don't see where it overflows or why I get this result. For example, if you use an uninitialized variable, you get some (not so) random value from whatever place in memory your variable's name is "pointing" towards. Here, what is the reason?
float fahr, celsius;
int lower, upper, step;

lower = 0; upper = 300; step = 20;
fahr = lower;
while (fahr <= upper) {
    celsius = (5 / 9.) * (fahr - 32);
    printf("%d\t%d\n", fahr, celsius);
    fahr += step;
}
I was expecting:
0 20 40 60 ... 300 (first column)
-17, -6, 4, ... (second column)
Instead I got:
0 0 0 0 0 0 0 ... 0 (first column)
0, 1077149696, 1078198272, ... (second column)
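For reference, here is a minimal corrected sketch of the loop, using the floating-point conversions the question already notes work (the field widths %3.0f and %6.1f are just illustrative):
#include <stdio.h>

int main(void) {
    float fahr, celsius;
    int lower = 0, upper = 300, step = 20;

    fahr = lower;
    while (fahr <= upper) {
        celsius = (5 / 9.) * (fahr - 32);
        /* %d expects an int, but a float passed to a variadic function is
           promoted to double, so %d reads raw bytes of the double instead;
           that is undefined behaviour. Use a float conversion instead. */
        printf("%3.0f\t%6.1f\n", fahr, celsius);
        fahr += step;
    }
    return 0;
}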
When trying to run the sample code below (similar to a look-up table), it always generates the following error message: "The pure definition of Function 'out' calls function 'color' in an unbounded way in dimension 0".
RDom r(0, 10, 0, 10);
Func label, color, out;
Var x, y, c;
label(x, y) = 0;
label(r.x, r.y) = 1;
color(c) = 0;
color(label(r.x, r.y)) = 255;
out(x, y) = color(label(x, y));
out.realize(10, 10);
Before calling realize, I tried to set bounds statically, as below, without success.
color.bound(c,0,10);
label.bound(x,0,10).bound(y,0,10);
out.bound(x,0,10).bound(y,0,10);
I also looked at the histogram examples, but they are a bit different.
Is this some kind of restriction in Halide?
Halide prevents any out-of-bounds access (and decides what to compute) by analyzing the range of the values you pass as arguments to a Func. If those values are unbounded, it can't do that. The way to make them bounded is with clamp:
out(x, y) = color(clamp(label(x, y), 0, 9));
In this case, the reason it's unbounded is that label has an update definition, which makes the analysis give up. If you wrote label like this instead:
label(x, y) = select(x >= 0 && x < 10 && y >= 0 && y < 10, 1, 0);
Then you wouldn't need the clamp.
Hi, I am taking in data in real time where the value goes from 1009, 1008, or 1007 down to 0. I am trying to count the number of distinct times this occurs; for example, the snippet below should count 2 distinct periods of change.
1008
1009
1008
0
0
0
1008
1007
1008
1008
1009
9
0
0
1009
1008
I have written the check below (it runs once per sample in my loop), but I can't figure out if the logic is correct, as I get multiple increments instead of just one:
if (current != previous && current < 100)
    x++;
else
    x = x;
You tagged this with the LabVIEW tag. Is this actually supposed to be LabVIEW code?
Your logic has a bug related to the noise you say you have: if the value is less than 100 and it changes (for instance from 9 to 0), you count that as a change. You also have a line which doesn't do anything (x = x), although if this is supposed to be LabVIEW code, that could make sense.
The code you posted here does not seem to make sense to me if I understand your goal. My understanding is that you want to identify this specific pattern:
1009
1008
1007
0
And that any deviation from this sequence of numbers would constitute data that should be ignored. To this end, you should be monitoring the history of the past 3 numbers. In C you might write this logic in the following way:
#include <stdio.h>

// Function to get the next value from our data stream.
int getNext(int *value) {
    // Variable to hold our return code.
    int code;
    // Replace the following line to get the next number from the stream. Possibly read from a file?
    *value = 0;
    // Replace the following logic to set 'code' appropriately.
    if (*value == -1)
        code = -1;
    else
        code = 0;
    // Return 'code' to the caller.
    return code;
}
// Example application for counting the occurrences of the sequence '1009','1008','1007','0'.
int main(int argc, char **argv) {
    // Declare 4 items to store the past 4 items in the sequence (x0-x3),
    // plus a count and a flag to monitor the occurrence of our pattern.
    int x0 = 0, x1 = 0, x2 = 0, x3 = 0, count = 0, occurred = 0;
    // Run forever (just as an example; you would provide your own looping
    // structure or embed the algorithm in your app).
    while (1) {
        // Get the next element (implement getNext to provide numbers from your data source).
        // If the function returns non-zero, exit the loop and print the count.
        if (getNext(&x0) != 0)
            break;
        // If the newest element is 0, check whether the prior 3 match our pattern.
        if (x0 == 0) {
            // 'occurred' is 1 only if the prior elements match our pattern.
            occurred = (x1 == 1007) && (x2 == 1008) && (x3 == 1009);
            if (occurred) {
                // The pattern was matched. Increment our count.
                count++;
                // Reset occurred.
                occurred = 0;
            }
        }
        // Shift the elements down the list unconditionally. Pushing the 0 into
        // the history as well ensures a run of consecutive zeros is counted only once.
        x3 = x2; // Shift 3rd element to 4th position
        x2 = x1; // Shift 2nd element to 3rd position
        x1 = x0; // Shift 1st element to 2nd position
    }
    printf("The pattern count is %d\n", count);
    // Exit application.
    return 0;
}
Note that the getNext function is shown here only as a stub; obviously, what I have implemented will not work as-is. This function should be implemented based on how you are extracting data from the stream.
Writing the application in this way might not make sense within your larger application, but the algorithm is what you should take away. Essentially you buffer 4 elements in a rolling window: you read the newest element into x0, check the four elements against your desired pattern whenever the newest one is 0 (incrementing the count on a match), and then shift the elements down.
If the requirement is to count falling edges, you don't care about the specific level, and you want to reject the noise band or ripple in the steady state, then just make the conditional something like:
if ((previous - current) > threshold)
No complex shifting, history, or filtering is required. Depending on the application you can follow up with a debounce (persistence check) to ignore spurious samples (just keep track of falling/rising, or fell/rose, as a simple toggling state spanning a desired number of samples).
Code to the pattern, not the specific values; use constant or adjustable parameters to control the value sensitivity.
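For instance, here is a minimal C++ sketch of that approach, using the fell/rose toggling state described above. The threshold of 100 and the function name countFallingEdges are illustrative assumptions; the sample data is taken from the question:
#include <iostream>
#include <vector>

// Count distinct falling edges: a drop larger than 'threshold' counts once,
// and the signal must rise back by more than 'threshold' before another
// edge can be counted, so noise inside the low period is ignored.
int countFallingEdges(const std::vector<int>& samples, int threshold) {
    int count = 0;
    bool fell = false; // true while we are inside a low period
    for (std::size_t i = 1; i < samples.size(); ++i) {
        int drop = samples[i - 1] - samples[i];
        if (!fell && drop > threshold) {
            ++count;      // large drop: one distinct period of change
            fell = true;  // stay quiet until the signal rises again
        } else if (fell && -drop > threshold) {
            fell = false; // large rise: re-arm for the next falling edge
        }
    }
    return count;
}

int main() {
    std::vector<int> data = {1008, 1009, 1008, 0, 0, 0, 1008, 1007, 1008,
                             1008, 1009, 9, 0, 0, 1009, 1008};
    std::cout << countFallingEdges(data, 100) << "\n"; // prints 2
}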
I don't know the right way to do this, but this is what I want.
I have an enum:
enum EValue { A = 1, B = 2, C = 4, D = 8, E = 16 }
I need to save the combined value in the database as a single number; say, if I select A, C, and E, I need to save 1 + 4 + 16 = 21 in the database.
That part is OK, but then, from 21, how do I retrieve A, C, and E?
Thanks for your help in advance.
You could do something like this (pseudo-code, since you didn't say what language you're using). Note that the values must be visited from largest to smallest, or the subtraction will pick up the wrong flags:
function getEnums(value:int):enum[]
{
    var result = new enum[];
    foreach (var e in Enum.getValues(enum).orderByDescending(val.toInt()))
    {
        if (value == 0)
            break;
        var eVal = e.toInt();
        if (value >= eVal)
        {
            result.add(e);
            value -= eVal;
        }
    }
    return result;
}
It looks like your enum values are powers of 2,
e.g. 2^0 = 1, 2^1 = 2, etc.
So your problem comes down to: convert decimal to binary.
If you put simple logic around that, you can get the respective enums.
E.g. for input 21, you get 1, 4, and 16 (= 2^0, 2^2, 2^4).
Now you can map them to enums.
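As a concrete sketch in C++ (reusing the EValue flags from the question; since the values are single bits, a bitwise AND performs exactly this decimal-to-binary decomposition):
#include <iostream>
#include <vector>

enum EValue { A = 1, B = 2, C = 4, D = 8, E = 16 };

// Decompose a stored integer back into its flag components: each power of
// two is one bit, so a bitwise AND tests whether that flag is present.
std::vector<EValue> getEnums(int value) {
    std::vector<EValue> result;
    for (EValue e : {A, B, C, D, E}) {
        if (value & e)
            result.push_back(e);
    }
    return result;
}

int main() {
    for (EValue e : getEnums(21)) // 21 = 1 + 4 + 16
        std::cout << e << ' ';    // prints: 1 4 16
}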
To clarify the question, please observe this C/C++ code fragment:
int a = 10, b = 20, c = 30, d = 40; // 4 consecutive int variables
int* p = &d;                        // address of variable d
Now, in Visual Studio (tested on 2013), if the value of p is some hex_value (which can be viewed in the debugger's memory window), you can observe that the addresses of the variables a, b, c, and d are each 12 bytes apart!
So, if p == hex_value, then it follows:
&c == hex_value + 0xC (note hex C is 12 in decimal)
&b == &c + 0xC
&a == &b + 0xC
So why is there a 12-byte offset instead of 4 bytes, when an int is just 4 bytes?
Now, if we declare an array:
int array[] = {10, 20, 30, 40};
The values 10, 20, 30, and 40 are each located 4 bytes apart, as expected!
Can anyone please explain this behavior?
The C++ standard states in section 8.3.4 [Arrays] that "An object of array type contains a contiguously allocated non-empty set of N subobjects of type T."
This is why array[] will be a set of contiguous ints, and the difference between one element and the next will be exactly sizeof(int).
For local/block variables (automatic storage), no such guarantee is given. The only relevant statements are in section 1.7 (The C++ memory model), "Every byte has a unique address", and 1.8 (The C++ object model), "the address of that object is the address of the first byte it occupies. Two objects (...) shall have distinct addresses".
So anything you do that assumes such objects are contiguous is undefined behaviour and non-portable. You cannot even be sure of the order of the addresses at which these objects are created.
Now I have played with a modified version of your code:
#include <cstring>
#include <iostream>
using namespace std;

int main() {
    int a = 10, b = 20, c = 30, d = 40; // 4 consecutive int variables
    int* p = &d;                        // address of variable d (unused, kept from the question)
    int array[] = { 10, 20, 30, 40 };
    char *pa = reinterpret_cast<char*>(&a),
         *pb = reinterpret_cast<char*>(&b),
         *pc = reinterpret_cast<char*>(&c),
         *pd = reinterpret_cast<char*>(&d);
    cout << "sizeof(int)=" << sizeof(int) << "\n &a=" << &a
         << " +" << pa - pb << "char\n &b=" << &b
         << " +" << pb - pc << "char\n &c=" << &c
         << " +" << pc - pd << "char\n &d=" << &d;
    memset(&d, 0, (&a - &d) * sizeof(int));
    // ATTENTION: undefined behaviour. On exit this triggers
    // "Run-Time Check Failure #2 - Stack around the variable 'b' was corrupted".
}
When running this code I get:
debug                  release                comment on release
sizeof(int)=4          sizeof(int)=4
&a=0052F884 +12char    &a=009EF9AC +4char
&b=0052F878 +12char    &b=009EF9A8 +-8char    // is before a
&c=0052F86C +12char    &c=009EF9B0 +12char    // is just after a !!
&d=0052F860            &d=009EF9A4
So you see that the order of the addresses may even change with the same compiler, depending on the build options! In fact, in release mode the variables are contiguous, but not in the same order.
The extra space in the debug version comes from the /RTCs option. I deliberately overwrote the variables with a harsh memset() that assumes they are contiguous. On exit I immediately get the message "Run-Time Check Failure #2 - Stack around the variable 'b' was corrupted", which clearly demonstrates the purpose of these extra bytes.
If you remove the option, MSVC 2013 gives you contiguous variables, each 4 bytes apart, as you expected. But then there is no error message about stack corruption either.