OpenMP many for loops issue

I want to parallelize the outer for loop, but the directive I tried (below) produces wrong results. I hope you can help with this.
#pragma omp parallel for private(i,ii,j) reduction(+:Number1)
for(ii=1; ii<numbers_of_sieve; ii++)
{
    for(j=0; j<area; j++)
        flags[j]=1;
    int a=sqrt((double)(1+ii))*1024;
    for(int k=0; k<Number; k++)
    {
        if(sieve[k]<=a)
        {
            __int64 x=(sieve[k]+(-(ii*area))%sieve[k])%sieve[k];
            for(__int64 m=ii*area+x; m<(1+ii)*area; m+=sieve[k])
                flags[m-ii*area]=0;
        }
    }
    for(i=0; i<(1<<20); i++)
    {
        if(flags[i]==1)
        {
            //fprintf(fp,"%I64d\t",i+ii*area);
            Number1++;
        }
    }
}
Thanks!

I do not know what you mean by "it turned out to be wrong." But flags (an array?) must be declared private, or it must be declared inside the loop.
P.S.
Please check that your program works correctly in serial mode before trying to run it in parallel mode. Compare, for example, these two lines:
for(j=0;j<area;j++)
and
for(i=0;i<(1<<20);i++)
Think about what happens if area != (1<<20), or if the size of flags != area.
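For illustration, here is a minimal sketch (my own, not the asker's full program) of one way to apply that advice: the flag buffer is declared inside the parallel loop so each thread gets its own copy, the reduction handles Number1, and the counting loop uses the same bound as the clearing loop. The function name count_primes_in_blocks and the vector-based signature are placeholders I introduced.

#include <omp.h>
#include <cmath>
#include <cstdint>
#include <vector>

int64_t count_primes_in_blocks(int64_t numbers_of_sieve, int64_t area,
                               const std::vector<int64_t>& sieve)
{
    int64_t Number1 = 0;
#pragma omp parallel for reduction(+:Number1)
    for (int64_t ii = 1; ii < numbers_of_sieve; ii++)
    {
        // Declared inside the loop, so each iteration (and each thread) has its own copy.
        std::vector<char> flags(area, 1);
        int64_t a = (int64_t)(std::sqrt((double)(1 + ii)) * 1024);
        for (std::size_t k = 0; k < sieve.size(); k++)
        {
            if (sieve[k] <= a)
            {
                int64_t x = (sieve[k] + (-(ii * area)) % sieve[k]) % sieve[k];
                for (int64_t m = ii * area + x; m < (1 + ii) * area; m += sieve[k])
                    flags[m - ii * area] = 0;
            }
        }
        // Count with the same bound used when clearing the flags.
        for (int64_t i = 0; i < area; i++)
            if (flags[i] == 1)
                Number1++;
    }
    return Number1;
}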

Related

(Solved User error) c++11 make_shared<t>(new class) memory leak

I've been looking into smart pointers, unit testing how they manage memory, and I'm finding an unexpected issue: doing what all the examples recommend creates a huge memory leak for me.
This seems to occur when I use a class that has a constructor that builds from another copy of the same class. I'll give an example.
If I have a class like:
class foo{
public:
    //Ignore unsafe practices here
    HeavyInMemory* variable;

    foo(){
        variable = new HeavyInMemory();
    }

    foo(foo* copyThis){
        variable = nullptr;
        if(copyThis){
            variable = new HeavyInMemory(copyThis->variable);
        }
    }

    ~foo(){
        delete variable;
    }
};
I find that I will get a huge memory leak because std::make_shared has no way to tell the difference between make_shared(args) and make_shared(new T)
int main(){
    for(int i = 0; i < 100; i++){
        //Should not leak, if I follow examples of how to use make_shared
        auto test = make_shared<foo>(new foo());
    }
    //Checking memory addresses, these do not match; checking total program
    //memory use, it leaks like a sieve.
}
Am I misunderstanding something?
Do the examples just not consider this, since most use primitive types as examples rather than classes?
Does C++11 just not support the make_shared(new T) format, even though I see it in old books like Scott Meyers' books from 1992? It just doesn't make sense.
Also, why would you use make_shared(new T) over make_shared(args)? I've seen a couple of threads where people have asked this on here, but neither seemed to actually answer the question with a code example.
//They mainly say compiler evaluation order causes the leak, but in my example this would still leak:
auto originalObject = new foo();
auto expectedDestructorWhenOutofScope = make_shared<foo>(originalObject);
//I have found that if I give it the object instead, it doesn't leak, but this is getting into
//the realm of hacks that may sometimes work:
auto originalObject = new foo();
auto expectedDestructorWhenOutofScope = make_shared<foo>(*originalObject);
EDIT:
Thanks to Igor Tandetnik I now see I am using make_shared entirely wrong. It should be used as a constructor. Thanks again Igor I appreciate it.
//Create a new object
auto expectedDestructorWhenOutofScope = make_shared<foo>();
//Use an object already created
std::shared_ptr<foo> p2(new foo());
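For reference, a short sketch (using the question's foo class) of why the original call leaks: make_shared<foo>(new foo()) matches the foo(foo*) constructor, so the object created by new foo() is copied from and then never deleted.

auto leaky = std::make_shared<foo>(new foo()); // the 'new foo()' is orphaned -> leak
auto ok1   = std::make_shared<foo>();          // make_shared constructs the foo itself
std::shared_ptr<foo> ok2(new foo());           // adopt an already-created foo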

Dynamic JavaFX buttons action [duplicate]

I want to be able to do something like this:
for(i = 0; i < 10; i++) {
    //if any button in the array is pressed, disable it.
    button[i].setOnAction( ae -> { button[i].setDisable(true); } );
}
However, I get an error saying "local variables referenced from a lambda expression must be final or effectively final". How might I still do something like the code above (if it is even possible)? If it can't be done, what should be done instead to get a similar result?
As the error message says, local variables referenced from a lambda expression must be final or effectively final ("effectively final" meaning the compiler can make it final for you).
Simple workaround:
for(i = 0; i < 10; i++) {
    final int ii = i;
    button[i].setOnAction( ae -> { button[ii].setDisable(true); } );
}
Since you are using lambdas, you can benefit also from other features of Java 8, like streams.
For instance, IntStream:
A sequence of primitive int-valued elements supporting sequential and parallel aggregate operations. This is the int primitive specialization of Stream.
can be used to replace the for loop:
IntStream.range(0,10).forEach(i->{...});
so now you have an index that can be used to your purpose:
IntStream.range(0,10)
.forEach(i->button[i].setOnAction(ea->button[i].setDisable(true)));
Also you can generate a stream from an array:
Stream.of(button).forEach(btn->{...});
In this case you won't have an index, so as @shmosel suggests, you can use the source of the event:
Stream.of(button)
.forEach(btn->btn.setOnAction(ea->((Button)ea.getSource()).setDisable(true)));
EDIT
As @James_D suggests, there's no need for downcasting here:
Stream.of(button)
.forEach(btn->btn.setOnAction(ea->btn.setDisable(true)));
In both cases, you can also benefit from parallel operations:
IntStream.range(0,10).parallel()
.forEach(i->button[i].setOnAction(ea->button[i].setDisable(true)));
Stream.of(button).parallel()
.forEach(btn->btn.setOnAction(ea->btn.setDisable(true)));
Use the Event to get the source Node.
for(int i = 0; i < button.length; i++)
{
    button[i].setOnAction(event -> {
        ((Button)event.getSource()).setDisable(true);
    });
}
Lambda expressions are effectively like an anonymous method. In order to avoid unsafe operations, Java requires that any external local variable accessed from a lambda expression must not be modified.
To work around it:
final int index = i;
And use index instead of i inside your lambda expression.
You say "if the button is pressed", but in your example all the buttons in the list will be disabled. Try to associate a listener with each button rather than just disabling it.
For the logic, do you mean something like this:
Arrays.asList(buttons).forEach(
    button -> button.addActionListener(new ActionListener() {
        @Override
        public void actionPerformed(ActionEvent e) {
            button.setEnabled(false);
        }
    }));
I also like Sedrick's answer, but you have to add an action listener inside the loop.

When to use ostream_iterator

As far as I know, we can use std::ostream_iterator in C++11 to print a container.
For example,
std::vector<int> myvector;
for (int i=1; i<10; ++i) myvector.push_back(i*10);
std::copy ( myvector.begin(), myvector.end(), std::ostream_iterator<int>{std::cout, " "} );
I don't know when and why we would use the code above instead of the traditional way, such as:
for(const auto & i : myvector) std::cout<<i<<" ";
In my opinion, the traditional way is faster because there is no copy, am I right?
std::ostream_iterator is a single-pass OutputIterator, so it can be used in any algorithm which accepts such an iterator. Using it to output a vector of ints is just a demonstration of its capabilities.
In my opinion, the traditional way is faster because there is no copy, am I right?
You may find here: http://en.cppreference.com/w/cpp/algorithm/copy that copy is implemented quite similarly to your for-auto loop. It is also specialized for various types to work as efficiently as possible. On the other hand, writing to std::ostream_iterator is done by assignment to it, and you can read here: http://en.cppreference.com/w/cpp/iterator/ostream_iterator/operator%3D that it resolves to the *out_stream << value; operation (ignoring the delimiter).
You may also find that this iterator suffers from the problem of an extra trailing delimiter inserted after the last element. To fix this, there will be (possibly in C++17) a new single-pass OutputIterator, std::experimental::ostream_joiner.
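A hedged sketch of that joiner, assuming your standard library ships the Library Fundamentals TS header <experimental/iterator> (not all toolchains do):

#include <experimental/iterator>  // may not be available everywhere
#include <algorithm>
#include <iostream>
#include <vector>

int main() {
    std::vector<int> v{1, 2, 3};
    // Prints "1, 2, 3" with no trailing delimiter.
    std::copy(v.begin(), v.end(),
              std::experimental::make_ostream_joiner(std::cout, ", "));
}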
A short (and maybe silly) example where using an iterator is useful. The point is that you can direct your data to any sink: a file, console output, a memory buffer. Whatever output you choose, MyData::serialize does not need changes; you only need to provide an OutputIterator.
#include <algorithm>
#include <iostream>
#include <iterator>
#include <vector>

struct MyData {
    std::vector<int> data = {1,2,3,4};

    template<typename T>
    void serialize(T iterator) {
        std::copy(data.begin(), data.end(), iterator);
    }
};

int main()
{
    MyData data;
    // Output to stream
    data.serialize(std::ostream_iterator<int>(std::cout, ","));
    // Output to memory
    std::vector<int> copy;
    data.serialize(std::back_inserter(copy));
    // Other uses with different iterator adaptors:
    // std::front_insert_iterator
    // other, maybe custom ones
}
The difference is polymorphism vs. a hardcoded stream.
std::ostream_iterator builds itself from any class which inherits from std::ostream, so at runtime you can wire the iterator to a different output stream type depending on the context in which the function runs.
The second snippet uses a hardcoded std::cout, which cannot change at runtime.
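To illustrate that point, here is a minimal sketch (the function name print_to is mine, not from the thread): the same copy call writes to whichever std::ostream the caller picks at runtime.

#include <algorithm>
#include <iostream>
#include <iterator>
#include <sstream>
#include <vector>

// Writes the vector to whatever stream the caller chooses at runtime.
void print_to(const std::vector<int>& v, std::ostream& os) {
    std::copy(v.begin(), v.end(), std::ostream_iterator<int>(os, " "));
}

int main() {
    std::vector<int> v{10, 20, 30};
    print_to(v, std::cout);     // console
    std::ostringstream buffer;  // in-memory sink, same code path
    print_to(v, buffer);
}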

Thread safety question about one container

Let's talk about theory a bit. We have one container; let's call it TMyObj. It looks like this:
struct TMyObj{
    bool bUpdated;
    bool bUnderUpdate;
};
Let a class named TMyClass have an array of the container above plus three helper functions: one for getting an object to be updated, one for adding update info to a certain object, and one for getting an updated object. They are also called in this order. Here's the class:
class TMyClass{
    TMyObj entries[];

    TMyObj GetObjToUpdate()
    {
        //Enter critical section
        for(int i=0; i<Length(entries); i++)
            if(!entries[i].bUnderUpdate)
            {
                entries[i].bUnderUpdate=true;
                return entries[i];
            }
        //Leave critical section
    }

    //the parameter here is always contained in the entries array above
    void AddUpdateInfo(TMyObj obj)
    {
        //Do something...
        //Enter critical section
        if(updateInfoOver) obj.bUpdated=true; //leave bUnderUpdate as true so it doesn't bother us
        //Leave critical section
    }

    TMyObj GetUpdatedObj()
    {
        //<-------- here
        for(int i=0; i<Length(entries); i++)
            if(entries[i].bUpdated) return entries[i];
        //<-------- and here?
    }
};
Now imagine 5+ threads using the first two functions and another one using the last function (GetUpdatedObj) on one instance of the class above.
Question: Will it be thread-safe if there's no critical section in the last function?
Given your sample code, it appears that it would be thread-safe for a read. This assumes entries[] is a fixed size. If you are simply iterating over a fixed collection, there is no reason the size of the collection should be modified, so a thread-safe read is OK.
The only thing I could see is that the result might be out of date. The problem comes from a call to GetUpdatedObj -- Thread A might not see an update to entries[0] during the life-cycle of
for(int i=0; i<Length(entries); i++)
    if(entries[i].bUpdated) return entries[i];
if Thread B comes along and updates entries[0] while i > 0. It all depends on whether that is considered acceptable or not.
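For completeness, a minimal C++ sketch (my own suggestion, not from the thread) of what the reader could look like with an explicit lock, so updates made by the writer threads are guaranteed to be visible. std::mutex stands in for the "critical section" comments in the question, and std::vector replaces the raw array for brevity.

#include <cstddef>
#include <mutex>
#include <vector>

struct TMyObj {
    bool bUpdated = false;
    bool bUnderUpdate = false;
};

class TMyClass {
    std::vector<TMyObj> entries;
    std::mutex mtx;  // guards bUpdated / bUnderUpdate on every access

public:
    // Reader: holding the same lock as the writers makes the flags' latest
    // values visible and prevents torn reads.
    TMyObj* GetUpdatedObj() {
        std::lock_guard<std::mutex> lock(mtx);
        for (std::size_t i = 0; i < entries.size(); i++)
            if (entries[i].bUpdated)
                return &entries[i];
        return nullptr;  // nothing updated yet
    }
};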

Flatten conditional as a refactoring

Consider:
if (something) {
// Code...
}
With CodeRush installed it recommended doing:
if (!something) {
return;
}
// Code...
Could someone explain how this is better? Surely there is no benefit whatsoever.
Isolated, as you've presented it: no benefit. But mark4o is right on: it's less nesting, which becomes very clear if you look at, say, a 4-level nesting:
public void foo() {
    if (a)
        if (b)
            if (c)
                if (d)
                    doSomething();
}
versus
public void foo() {
    if (!a)
        return;
    if (!b)
        return;
    if (!c)
        return;
    if (!d)
        return;
    doSomething();
}
Early returns like this improve readability.
In some cases, it's cleaner to validate all of your inputs at the beginning of a method and just bail out if anything is not correct. You can have a series of single-level if checks that check successively more and more specific things until you're confident that your inputs are good. The rest of the method will then be much easier to write, and will tend to have fewer nested conditionals.
One less level of nesting.
This is a conventional refactoring meant for maintainability. See:
http://www.refactoring.com/catalog/replaceNestedConditionalWithGuardClauses.html
With one condition, it's not a big improvement. But it follows the "fail fast" principle, and you really start to notice the benefit when you have lots of conditions. If you grew up on "structured programming", which typically recommends functions have single exit points, it may seem unnatural, but if you've ever tried to debug code that has three levels or more of nested conditionals, you'll start to appreciate it.
It can be used to make the code more readable (by way of less nesting). See here for a good example, and here for a good discussion of the merits.
That sort of pattern is commonly used to replace:
void SomeMethod()
{
    if (condition_1)
    {
        if (condition_2)
        {
            if (condition_3)
            {
                // code
            }
        }
    }
}
With:
void SomeMethod()
{
    if (!condition_1) { return; }
    if (!condition_2) { return; }
    if (!condition_3) { return; }
    // code
}
Which is much easier on the eyes.
I don't think CodeRush is recommending it; rather, it's just offering it as an option.
IMO, it depends on whether something or !something is the exceptional case. If a significant amount of code runs when something happens, then using the !something guard makes more sense for legibility and potential nesting reduction.
Well, look at it this way (I'll use php as an example):
You fill a form and go to this page: validate.php
example 1:
<?php
if (valid_data($_POST['username'])) {
    if (valid_data($_POST['password'])) {
        login();
    } else {
        die();
    }
} else {
    die();
}
?>
vs
<?php
if (!valid_data($_POST['username'])) {
    die();
}
if (!valid_data($_POST['password'])) {
    die();
}
login();
?>
Which one is better and easier to maintain? Remember this is just validating two things. Imagine this for a register page or something else.
I remember very clearly losing marks on a piece of college work because I had gone with the
if (!something) {
return;
}
// Code...
format. My lecturer pontificated that it was bad practice to have more than one exit point in a function. I thought that was nuts, and 20+ years of computer programming later, I still do.
To be fair, he lived in an era where the lingua franca was C and functions were often pages long and full of nested conditionals making it difficult to track what was going on.
Then and now, however, simplicity is king: Keeping functions small and commenting them well is the best way to make things readable and maintainable.
