Which method is better and faster, and why?
for (unsigned int i = 0; i < meshes.size(); i++)
{
    if (meshes[i]) delete meshes[i];
    meshes.erase(meshes.begin() + i);
}
or this one...
for (auto it = meshes.begin(); it != meshes.end(); ++it)
    delete *it;
meshes.clear();
Ideally, you have std::vector<std::unique_ptr<Mesh>> meshes;, and then it's just meshes.clear(); or, if possible, just std::vector<Mesh>!
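A minimal sketch of that unique_ptr approach (C++14 for make_unique; Mesh stands in for your own type):
#include <memory>
#include <vector>

struct Mesh { /* your mesh data */ };

int main() {
    std::vector<std::unique_ptr<Mesh>> meshes;
    meshes.push_back(std::make_unique<Mesh>());
    meshes.clear(); // destroys every Mesh automatically, no manual delete
}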
I don't quite know what you're trying to achieve with the first snippet, but the second snippet looks right and efficient, assuming your data is in a std::vector<Mesh*> where each pointer was allocated with new.
Might as well use Modern C++ syntax.
for (auto p: meshes) delete p;
meshes.clear();
I thought std::XXX::iterator always supports operator+, for example:
const string s = "abcdef";
cout << *(s.begin() + 3); // outputs 'd'
But ptree doesn't support operator+.
ptree p = root.get_child("insert");
// I want to go directly to the 3rd child.
auto iter = p.begin() + 3; // error
I know I can use a for loop to do this, but I wonder if there is a more graceful way?
I achieved this by:
template <class InputIterator, class Distance>
void advance (InputIterator& it, Distance n);
(see the C++ std::advance documentation)
auto iter = p.begin();
std::advance(iter, 3);
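Since C++11 there is also std::next, which returns the advanced iterator directly, so the same can be written as a single expression:
#include <iterator>

auto iter = std::next(p.begin(), 3);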
I am looking at the lambda generators from https://stackoverflow.com/a/12735970 and https://stackoverflow.com/a/12639820.
I would like to adapt these to a template version and I'm wondering what I should return to signal that the generator has reached its end.
Consider
template<typename T>
std::function<T()> my_template_vector_generator(std::vector<T> &v) {
    int idx = 0;
    return [=, &v]() mutable {
        return v[idx++];
    };
}
The [=,&v] addition is mine and I hope it's correct. For now, it lets me change the vector outside as expected. Please comment on this if you have a bad feeling about it...
For reference, this works (REQUIRE is from Catch):
std::vector<double> v({1.0, 2.0, 3.0});
auto vec_gen = my_template_vector_generator(v);
REQUIRE( vec_gen() == 1 );
REQUIRE( vec_gen() == 2 );
v[2] = 5.0;
REQUIRE( vec_gen() == 5.0 );
That vector version obviously requires knowledge of v.size() at the call site. I'd like to go without that by returning something that indicates the generator is empty.
Brainstorming, I can think of the following:
return pair<T, bool> and indicate false once there are no more values (see the sketch below).
return iterators to the container values and iterate: for (auto it = gen(); it != gen.end(); it = gen()) { cout << *it << endl; }
wrapping the generator in a class and implementing an empty() method. Not entirely sure how that would work, however.
Does either of these versions feel good to you? Do you have another idea?
I'd particularly be interested in implications when mapping such a generator or combining them recursively (see this C++14 blog post for inspiration).
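For concreteness, here is a rough sketch of the pair<T, bool> option from the first bullet (make_pair_generator is just a placeholder name):
#include <cstddef>
#include <functional>
#include <utility>
#include <vector>

template <typename T>
std::function<std::pair<T, bool>()> make_pair_generator(std::vector<T> &v) {
    std::size_t idx = 0;
    return [&v, idx]() mutable {
        if (idx < v.size())
            return std::make_pair(v[idx++], true);
        return std::make_pair(T{}, false); // second == false signals exhaustion
    };
}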
Update:
After hearing the suggestions to use optional, I implemented something based on unique_ptr.
template<typename T>
std::function<std::unique_ptr<T>()> my_pointer_template_vector_generator(std::vector<T> &v) {
    std::size_t idx = 0;
    return [=, &v]() mutable {
        if (idx < v.size()) {
            return std::unique_ptr<T>(new T(v[idx++]));
        } else {
            return std::unique_ptr<T>();
        }
    };
}
This seems to work, consider the following passing test.
TEST_CASE("Pointer generator terminates as expected") {
    std::vector<double> v({1.0, 2.0, 3.0});
    int k = 0;
    auto gen = my_pointer_template_vector_generator(v);
    while (auto val = gen()) {
        REQUIRE( *val == v[k++] );
    }
    REQUIRE( v.size() == k );
}
I have some concerns about creating lots of new T. I suppose I could also return pointers into the vector and nullptr otherwise. That should work in the same way. What do you think about this?
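For reference, a rough sketch of that pointer-returning variant (my_raw_pointer_vector_generator is just a placeholder name); it avoids the copies, but the pointers are only valid while the vector is neither resized nor destroyed:
template<typename T>
std::function<T*()> my_raw_pointer_vector_generator(std::vector<T> &v) {
    std::size_t idx = 0;
    return [&v, idx]() mutable -> T* {
        if (idx < v.size())
            return &v[idx++]; // pointer into the vector's storage
        return nullptr;       // signals exhaustion
    };
}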
Since this looks like just a study example, I could suggest throwing an exception to signal the end of the collection. But do not use that in real code.
Is there a halfway elegant way to improve the following code snippet by using Boost's scoped_ptr or scoped_array?
MyClass** dataPtr = NULL;
dataPtr = new MyClass*[num];
memset(dataPtr, 0, num * sizeof(MyClass*));
allocateData(dataPtr); // allocates objects under all the pointers
// have fun with the data objects
// now I'm bored and want to get rid of them
for (uint i = 0; i < num; ++i)
    delete dataPtr[i];
delete[] dataPtr;
I did it the following way now:
boost::scoped_array<MyClass*> dataPtr(new MyClass*[num]);
memset(dataPtr.get(), 0, num * sizeof(MyClass*));
allocateData(dataPtr.get());
Seems to work fine.
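One caveat worth noting (under the same assumptions as the snippet above): scoped_array only releases the pointer array itself, so the objects created by allocateData still need their own cleanup before dataPtr goes out of scope, along these lines:
for (uint i = 0; i < num; ++i)
    delete dataPtr[i]; // the array itself is then freed by scoped_array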
I populate a very large array using a stxxl::VECTOR_GENERATOR<MyData>::result::bufwriter_type (something like 100M entries) which I need to sort in parallel.
I use the stxxl::sort(vector->begin(), vector->end(), cmp(), memoryAmount) method, which in theory should do what I need: sort the elements very efficiently.
However, during the execution of this method I noticed that only one processor is fully utilised, and all the other cores are quite idle (I suspect there is little activity to fetch the input, but in practice they don't do anything).
This is my question: is it possible to exploit more cores during the sorting phase, or is the parallelism used only to fetch the input asynchronously? If so, are there documents that explain how to enable it? (I looked extensively at the documentation on the website, but I couldn't find anything.)
Thanks very much!
EDIT
Thanks for the suggestion. I provide below some more information.
First of all, I use macOS for my experiments. I launch the following program and study its behaviour.
struct Triple {
    long t1, t2, t3;
    Triple(long s, long p, long o) : t1(s), t2(p), t3(o) {}
    Triple() : t1(0), t2(0), t3(0) {}
};
const Triple minv(std::numeric_limits<long>::min(),
        std::numeric_limits<long>::min(), std::numeric_limits<long>::min());
const Triple maxv(std::numeric_limits<long>::max(),
        std::numeric_limits<long>::max(), std::numeric_limits<long>::max());
struct cmp : std::less<Triple> {
    bool operator()(const Triple& a, const Triple& b) const {
        if (a.t1 < b.t1) {
            return true;
        } else if (a.t1 == b.t1) {
            if (a.t2 < b.t2) {
                return true;
            } else if (a.t2 == b.t2) {
                return a.t3 < b.t3;
            }
        }
        return false;
    }
    Triple min_value() const {
        return minv;
    }
    Triple max_value() const {
        return maxv;
    }
};
typedef stxxl::VECTOR_GENERATOR<Triple>::result vector_type;
int main(int argc, const char** argv) {
    vector_type vector;
    vector_type::bufwriter_type writer(vector);
    for (int i = 0; i < 1000000000; ++i) {
        if (i % 10000000 == 0)
            std::cout << "Inserting element " << i << std::endl;
        Triple t;
        t.t1 = rand();
        t.t2 = rand();
        t.t3 = rand();
        writer << t;
    }
    writer.finish();
    // Sort the vector
    stxxl::sort(vector.begin(), vector.end(), cmp(), 1024 * 1024 * 1024);
    std::cout << vector.size() << std::endl;
}
Indeed, there seem to be only one or at most two threads working during the execution of this program. Note that the machine has only a single disk.
Can you confirm whether the parallelism works on macOS? If not, I will try Linux to see what happens. Or is it perhaps because there is only one disk?
In principle what you are doing should work out-of-the-box. With everything working you should see all cores doing processing.
Since it doesn't work, we'll have to find the error, and debugging why there is no parallel speedup is still a tricky business these days.
The main idea is to go from small to large examples:
What platform is this? There is no parallelism on MSVC, only on Linux/gcc.
By default, STXXL builds on Linux/gcc with USE_GNU_PARALLEL. You can turn it off to see if it has an effect.
Try reproducing the example values shown in http://stxxl.sourceforge.net/tags/master/stxxl_tool.html - with and without USE_GNU_PARALLEL
See if plain in-memory parallel sorting scales on your processor/system (see the sketch below).
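A rough sketch of such an in-memory check, assuming a GCC build (libstdc++ parallel mode; compile with -fopenmp):
#include <parallel/algorithm>
#include <cstdlib>
#include <vector>

int main() {
    std::vector<long> v(100000000);
    for (auto& x : v)
        x = rand();
    // Should keep all cores busy if parallel mode works on this system.
    __gnu_parallel::sort(v.begin(), v.end());
}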
I serialize multiple objects into a binary archive with Boost.
When reading back those objects from a binary_iarchive, is there a way to know how many objects are in the archive or simply a way to detect the end of the archive ?
The only way I found is to use a try-catch to detect the stream exception.
Thanks in advance.
I can think of a number of approaches:
Serialize STL containers to/from your archive (see documentation). The archive will automatically keep track of how many objects there are in the containers.
Serialize a count variable before serializing your objects. When reading back your objects, you'll know beforehand how many objects you expect to read back (a sketch follows below).
You could have the last object have a special value that acts as a kind of sentinel that indicates the end of the list of objects. Perhaps you could add an isLast member function to the object.
This is not very pretty, but you could have a separate "index file" alongside your archive that stores the number of objects in the archive.
Use the tellg position of the underlying stream object to detect whether you're at the end of the file:
Example (just a sketch, not tested):
std::streampos archiveOffset = stream.tellg();
std::streampos streamEnd = stream.seekg(0, std::ios_base::end).tellg();
stream.seekg(archiveOffset);
while (stream.tellg() < streamEnd)
{
    // Deserialize objects
}
This might not work with XML archives.
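A rough sketch of the count-variable approach from option 2, assuming binary archives and a default-constructible, serializable type T (save_all/load_all are placeholder names):
#include <cstddef>
#include <vector>
#include <boost/archive/binary_oarchive.hpp>
#include <boost/archive/binary_iarchive.hpp>

template <typename T>
void save_all(const std::vector<T>& objects, boost::archive::binary_oarchive& oa) {
    std::size_t count = objects.size();
    oa << count; // store the count first, then the objects
    for (const T& obj : objects)
        oa << obj;
}

template <typename T>
std::vector<T> load_all(boost::archive::binary_iarchive& ia) {
    std::size_t count = 0;
    ia >> count; // now we know how many objects to expect
    std::vector<T> objects(count);
    for (std::size_t i = 0; i < count; ++i)
        ia >> objects[i];
    return objects;
}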
Do you have all your objects when you begin serializing? If not, you are "abusing" boost serialization - it is not meant to be used that way. However, I am using it that way, using try catch to find the end of the file, and it works for me. Just hide it away somewhere in the implementation. Beware though, if using it this way, you need to either not serialize pointers, or disable pointer tracking.
If you do have all the objects already, see Emile's answer. They are all valid approaches.
std::istream* stream_;
boost::iostreams::filtering_streambuf<boost::iostreams::input>* filtering_streambuf_;
...
stream_ = new std::istream(memoryBuffer_);
if (stream_) {
    filtering_streambuf_ = new boost::iostreams::filtering_streambuf<boost::iostreams::input>();
    if (filtering_streambuf_) {
        filtering_streambuf_->push(boost::iostreams::gzip_decompressor());
        filtering_streambuf_->push(*stream_);
        archive_ = new eos::portable_iarchive(*filtering_streambuf_);
    }
}
I use gzip when reading data from the archives, and filtering_streambuf has the method
std::streamsize std::streambuf::in_avail()
which returns the number of characters available to read,
so I check for the end of the archive like this:
bool IArchiveContainer::eof() const {
    if (filtering_streambuf_) {
        return filtering_streambuf_->in_avail() == 0;
    }
    return false;
}
This does not tell you how many objects are left in the archive, but it helps to detect their end.
(I use the eof test only in the unit tests for serialization/deserialization of my classes/structures, to make sure that I read back everything I write.)
Sample code which I used to debug a similar issue (based on Emile's answer):
#include <fstream>
#include <iostream>
#include <boost/archive/binary_oarchive.hpp>
#include <boost/archive/binary_iarchive.hpp>

struct A {
    int a, b;
    template <typename T>
    void serialize(T& ar, const unsigned int) {
        ar & a;
        ar & b;
    }
};

int main() {
    {
        std::ofstream ofs("ff.ar");
        boost::archive::binary_oarchive ar(ofs);
        for (int i = 0; i < 3; ++i) {
            A a{2, 3};
            ar << a;
        }
        ofs.close();
    }
    {
        std::ifstream ifs("ff.ar");
        ifs.seekg(0, ifs.end);
        int length = ifs.tellg();
        ifs.seekg(0, ifs.beg);
        boost::archive::binary_iarchive ar(ifs);
        while (ifs.tellg() < length) {
            A a;
            ar >> a;
            std::cout << "a.a-> " << a.a << " and a.b->" << a.b << "\n";
        }
    }
    return 0;
}
You just read a byte from the file. If you did not reach the end, seek back one byte and read the next object.
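A minimal sketch of that check, reusing the ifs/ar/A names from the sample above; std::istream::peek does the same test without consuming the byte:
while (ifs.peek() != std::char_traits<char>::eof()) {
    A a;
    ar >> a; // at least one more byte was available, so keep reading
}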