Send/receive object arrays using MPI

Is it possible to send/receive C++ objects and object arrays using MPI_Bcast, MPI_Scatter and MPI_Gather? If yes, which MPI datatype is used for objects?
For example I have a class named cell.
class cell
{
private:
int abc;
double xyz;
public:
cell(){ }
...
};
In the main function, I would like to make an object array of class cell and send/receive it as an object array, e.g.:
int main ()
{
...
cell** cells = new cell*[someVar];
for(int i = 0; i < someVar; ++i)
{
cells[i] = new cell[someVar];
}
MPI_Bcast(cells, someVar, ???, 0, MPI_COMM_WORLD);
...
}
How can we define an MPI data type to send / receive an object array?

Check out the MPI_Pack / MPI_Unpack mechanism. On the sending side you stuff the elements into a pack buffer and you send that; the receiving side unpacks it component-by-component. This offers such cute possibilities as first unpacking an integer that tells you how many subsequent doubles there are to unpack. A big advantage of this approach is that it applies to objects that are only indirectly accessible through an iterator or such.
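For the cell class above, a minimal sketch of the pack/unpack approach might look like this (assuming the int and double members are accessible; the 64-byte buffer is an arbitrary simplification, and MPI_Pack_size would give the exact bound):
#include <mpi.h>

// Sketch: pack one cell's members into a byte buffer, send it as MPI_PACKED,
// and unpack on the receiving side in the same order.
void send_cell(int abc, double xyz, int dest, MPI_Comm comm)
{
    char buffer[64];
    int position = 0;
    MPI_Pack(&abc, 1, MPI_INT, buffer, sizeof(buffer), &position, comm);
    MPI_Pack(&xyz, 1, MPI_DOUBLE, buffer, sizeof(buffer), &position, comm);
    MPI_Send(buffer, position, MPI_PACKED, dest, 0, comm); // position == packed byte count
}

void recv_cell(int& abc, double& xyz, int source, MPI_Comm comm)
{
    char buffer[64];
    int position = 0;
    MPI_Recv(buffer, sizeof(buffer), MPI_PACKED, source, 0, comm, MPI_STATUS_IGNORE);
    MPI_Unpack(buffer, sizeof(buffer), &position, &abc, 1, MPI_INT, comm);
    MPI_Unpack(buffer, sizeof(buffer), &position, &xyz, 1, MPI_DOUBLE, comm);
}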


Removing a std::function<void()> from a vector in C++

I'm building a publish-subscribe class (called SystemInterface), which is responsible for receiving updates from its instances and publishing them to subscribers.
Adding a subscriber callback function is trivial and causes no issues, but removing one yields an error, because std::function is not equality-comparable in C++.
std::vector<std::function<void()>> subs;
void subscribe(std::function<void()> f)
{
subs.push_back(f);
}
void unsubscribe(std::function<void()> f)
{
std::remove(subs.begin(), subs.end(), f); // Error
}
I've come down to five possible solutions to this error:
1. Registering the function using a weak_ptr, where the subscriber must keep the returned shared_ptr alive. Solution example at this link.
2. Instead of registering in a vector, map the callback function by a custom key, unique per callback function. Solution example at this link.
3. Using a vector of function pointers. Example
4. Make the callback function comparable by utilizing its address.
5. Use an interface class (parent class) to call a virtual function. In my design, all intended classes inherit a parent class called ServiceCore, so instead of registering a callback function, just register a ServiceCore reference in the vector.
Note that the SystemInterface class has a per-instance ID field, which is managed by ServiceCore and supplied to SystemInterface when a ServiceCore child instance is constructed.
From my perspective, the first solution is neat and would work, but it requires extra handling on the subscriber side, which I'd rather avoid.
The second solution would make my implementation more complex; it currently looks like this:
using namespace std;
enum INFO_SUB_IMPORTANCE : uint8_t
{
INFO_SUB_PRIMARY, // Only gets the important updates.
INFO_SUB_COMPLEMENTARY, // Gets more.
INFO_SUB_ALL // Gets all updates
};
using CBF = function<void(string,string)>;
using INFO_SUBTREE = map<INFO_SUB_IMPORTANCE, vector<CBF>>;
using REQINF_SUBS = map<string, INFO_SUBTREE>; // Keyed by an iterator; explaining that is out of scope for this question.
using INFSRC_SUBS = map<string, INFO_SUBTREE>;
using WILD_SUBS = INFO_SUBTREE;
REQINF_SUBS infoSubrs;
INFSRC_SUBS sourceSubrs;
WILD_SUBS wildSubrs;
void subscribeInfo(string info, INFO_SUB_IMPORTANCE imp, CBF f) {
infoSubrs[info][imp].push_back(f);
}
void subscribeSource(string source, INFO_SUB_IMPORTANCE imp, CBF f) {
sourceSubrs[source][imp].push_back(f);
}
void subscribeWild(INFO_SUB_IMPORTANCE imp, CBF f) {
wildSubrs[imp].push_back(f);
}
The second solution would require INFO_SUBTREE to be extended into a map keyed by an ID:
using KEY_T = uint32_t; // or string...
using INFO_SUBTREE = map<INFO_SUB_IMPORTANCE, map<KEY_T,CBF>>;
For the third solution, I'm not aware of the limitations of using function pointers, nor of the consequences of the fourth solution.
The fifth solution would eliminate the need to deal with CBFs altogether, but it would be more complex on the subscriber side: a subscriber has to override the virtual function and therefore receives all updates in one place, which then requires filtering on the message ID and dispatching the payload to the intended routines through multiple if/else blocks that grow as subscriptions increase.
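For reference, a minimal sketch of what the fifth solution could look like (ServiceCore is the base class named above; the onUpdate() hook and its signature are hypothetical). It also shows why removal becomes trivial: raw pointers are comparable, unlike std::function objects.
#include <algorithm>
#include <string>
#include <vector>

struct ServiceCore {
    virtual ~ServiceCore() = default;
    // Hypothetical hook; the question only names the ServiceCore base class.
    virtual void onUpdate(const std::string& id, const std::string& payload) = 0;
};

std::vector<ServiceCore*> coreSubs;

void subscribe(ServiceCore* s) { coreSubs.push_back(s); }

void unsubscribe(ServiceCore* s) {
    coreSubs.erase(std::remove(coreSubs.begin(), coreSubs.end(), s), coreSubs.end());
}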
What I'm looking for is advice on the best available option.
Regarding your proposed solutions:
1. That would work. It can be made easy for the caller: have subscribe() create the shared_ptr and corresponding weak_ptr objects, and let it return the shared_ptr.
2. Then the caller must not lose the key. In a way this is similar to the above.
3. This of course is less generic, and then you can no longer have (the equivalent of) captures.
4. You can't: there is no way to get the address of the function stored inside a std::function. You can do &f inside subscribe(), but that will only give you the address of the local variable f, which will go out of scope as soon as you return.
5. That works, and is in a way similar to 1 and 2, although now the "key" is provided by the caller.
Options 1, 2 and 5 are similar in that there is some other data stored in subs that refers to the actual std::function: either a std::shared_ptr, a key or a pointer to a base class. I'll present option 6 here, which is kind of similar in spirit but avoids storing any extra data:
Store a std::function<void()> directly, and return the index in the vector where it was stored. When removing an item, don't std::remove() it, but just set it to nullptr. Next time subscribe() is called, it checks whether there is an empty element in the vector and reuses it:
std::vector<std::function<void()>> subs;
std::size_t subscribe(std::function<void()> f) {
if (auto it = std::find(subs.begin(), subs.end(), nullptr); it != subs.end()) {
*it = f;
return std::distance(subs.begin(), it);
} else {
subs.push_back(f);
return subs.size() - 1;
}
}
void unsubscribe(std::size_t index) {
subs[index] = nullptr;
}
The code that actually calls the functions stored in subs must now of course first check against nullptr. The above works because nullptr is treated as the "empty" function, and there is an operator==() overload that can compare a std::function against nullptr, thus making std::find() work.
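As a minimal sketch of that calling side (not part of the answer's code; the notify() name is illustrative):
void notify() {
    for (const auto& f : subs) {
        if (f) f(); // an empty std::function converts to false, so unsubscribed slots are skipped
    }
}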
One drawback of option 6 as shown above is that a std::size_t is a rather generic type. To make it safer, you might wrap it in a class SubscriptionHandle or something like that.
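A minimal sketch of such a wrapper (only the SubscriptionHandle name comes from the suggestion above; the rest is illustrative):
#include <cstddef>

class SubscriptionHandle {
public:
    explicit SubscriptionHandle(std::size_t index) : index_{index} {}
    std::size_t index() const { return index_; }
private:
    std::size_t index_; // slot in subs; only unsubscribe() should interpret it
};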
As for the best solution: option 1 is quite heavy-weight. Options 2 and 5 are very reasonable, but 6 is, I think, the most efficient.

Dynamic JavaFX buttons action [duplicate]

I want to be able to do something like this:
for (int i = 0; i < 10; i++) {
// if any button in the array is pressed, disable it.
button[i].setOnAction(ae -> { button[i].setDisable(true); });
}
However, I get an error saying "local variables referenced from a lambda expression must be final or effectively final". How might I still do something like the code above (if it is even possible)? If it can't be done, what should be done instead to get a similar result?
As the error message says, local variables referenced from a lambda expression must be final or effectively final ("effectively final" meaning the variable is never reassigned, so the compiler can treat it as final).
Simple workaround:
for (int i = 0; i < 10; i++) {
final int ii = i;
button[i].setOnAction(ae -> { button[ii].setDisable(true); });
}
Since you are using lambdas, you can benefit also from other features of Java 8, like streams.
For instance, IntStream:
A sequence of primitive int-valued elements supporting sequential and parallel aggregate operations. This is the int primitive specialization of Stream.
can be used to replace the for loop:
IntStream.range(0,10).forEach(i->{...});
so now you have an index that can be used to your purpose:
IntStream.range(0,10)
.forEach(i->button[i].setOnAction(ea->button[i].setDisable(true)));
Also you can generate a stream from an array:
Stream.of(button).forEach(btn->{...});
In this case you won't have an index, so as @shmosel suggests, you can use the source of the event:
Stream.of(button)
.forEach(btn->btn.setOnAction(ea->((Button)ea.getSource()).setDisable(true)));
EDIT
As @James_D suggests, there's no need for downcasting here:
Stream.of(button)
.forEach(btn->btn.setOnAction(ea->btn.setDisable(true)));
In both cases, you can also benefit from parallel operations:
IntStream.range(0,10).parallel()
.forEach(i->button[i].setOnAction(ea->button[i].setDisable(true)));
Stream.of(button).parallel()
.forEach(btn->btn.setOnAction(ea->btn.setDisable(true)));
Use the Event to get the source Node.
for(int i = 0; i < button.length; i++)
{
button[i].setOnAction(event ->{
((Button)event.getSource()).setDisable(true);
});
}
Lambda expressions are effectively like an anonymous method. To avoid unsafe operations, Java requires that any external local variable accessed in a lambda expression cannot be modified, i.e. it must be final or effectively final.
To work around it, declare
final int index = i;
and use index instead of i inside your lambda expression.
You say "if the button is pressed", but in your example all the buttons in the array will be disabled. Try associating a listener with each button rather than just disabling them all.
For the logic, do you mean something like this:
Arrays.asList(buttons).forEach(
button -> button.addActionListener(new ActionListener() {
@Override
public void actionPerformed(ActionEvent e) {
button.setEnabled(false);
}
}));
I also like Sedrick's answer, but you have to add an action listener inside the loop.

Why does protobuf serialize a "oneof" message using if-else?

I have a message defined like this:
message Command{
oneof type{
Point point = 1;
Rotate rotate = 2;
Move move = 3;
... //about 100 messages
}
}
Then protoc generates the SerializeWithCachedSizes function:
void Command::SerializeWithCachedSizes(
::google::protobuf::io::CodedOutputStream* output) const {
// ##protoc_insertion_point(serialize_start:coopshare.proto.Command)
::google::protobuf::uint32 cached_has_bits = 0;
(void) cached_has_bits;
// .coopshare.proto.Point point = 1;
if (has_point()) {
::google::protobuf::internal::WireFormatLite::WriteMessageMaybeToArray(
1, *type_.point_, output);
}
// .coopshare.proto.Rotate rotate = 2;
if (has_rotate()) {
::google::protobuf::internal::WireFormatLite::WriteMessageMaybeToArray(
2, *type_.rotate_, output);
}
// .coopshare.proto.Move move = 3;
if (has_move()) {
::google::protobuf::internal::WireFormatLite::WriteMessageMaybeToArray(
3, *type_.move_, output);
}
The "oneof" message saves the specific type in _oneof_case_. I think using switch-case is more efficient.
But why protobuf still generate code like this?
Oneofs are internally handled similar to optional fields. In fact, descriptor.proto represents them as a set of optional fields that just have an extra oneof_index to indicate that they belong together. This is a reasonable choice, because it allowed oneofs to be used immediately with many libraries before any special support was added.
I assume that the C++ code generator uses the same structure for both optional fields and oneofs.
It is possible that switch-case would generate more efficient code, and in that case it would be useful to propose it as an improvement to the protobuf project. However, as Jorge Bellón pointed out in the comments, it is entirely possible that the compiler can optimize this structure automatically. One would have to test and benchmark to be sure.
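For comparison, a hand-rolled dispatch on the oneof discriminator might look roughly like the following sketch (not generated code; it assumes the Command::TypeCase enum and type_case() accessor that protoc generates in C++ for a oneof named type):
void serialize_command(const Command& cmd /*, output stream omitted */)
{
    // Sketch only: branch once on the stored case instead of testing each field.
    switch (cmd.type_case()) {
        case Command::kPoint:
            // serialize field 1 (point)
            break;
        case Command::kRotate:
            // serialize field 2 (rotate)
            break;
        case Command::kMove:
            // serialize field 3 (move)
            break;
        case Command::TYPE_NOT_SET:
        default:
            break;
    }
}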

When to use ostream_iterator

As far as I know, we can use std::ostream_iterator in C++11 to print a container.
For example,
std::vector<int> myvector;
for (int i=1; i<10; ++i) myvector.push_back(i*10);
std::copy ( myvector.begin(), myvector.end(), std::ostream_iterator<int>{std::cout, " "} );
I don't know when and why we would use the code above instead of the traditional way, such as:
for(const auto & i : myvector) std::cout<<i<<" ";
In my opinion, the traditional way is faster because there is no copy, am I right?
std::ostream_iterator is a single-pass OutputIterator, so it can be used in any algorithm that accepts such an iterator. Using it to output a vector of ints is just a demonstration of its capabilities.
In my opinion, the traditional way is faster because there is no copy, am I right?
You may find here: http://en.cppreference.com/w/cpp/algorithm/copy that copy is implemented quite similarly to your for-auto loop. It is also specialized for various types to work as efficiently as possible. On the other hand, writing to a std::ostream_iterator is done by assigning to it, and you can read here: http://en.cppreference.com/w/cpp/iterator/ostream_iterator/operator%3D that this resolves to the *out_stream << value; operation (ignoring the delimiter).
You may also find that this iterator suffers from the problem of an extra trailing delimiter inserted after the last element. To fix this there will be (possibly in C++17) a new single-pass OutputIterator, std::experimental::ostream_joiner.
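A tiny illustration of that assignment semantics and of the trailing-delimiter issue (a self-contained sketch):
#include <iostream>
#include <iterator>

int main() {
    std::ostream_iterator<int> out{std::cout, ", "};
    *out = 1;  // writes "1, ": every assignment prints the value followed by the delimiter
    ++out;     // a no-op for ostream_iterator
    *out = 2;  // writes "2, "
    *out = 3;  // writes "3, " (the delimiter also follows the last element)
}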
A short (and maybe silly) example where using an iterator is useful. The point is that you can direct your data to any sink: a file, console output, or a memory buffer. Whatever output you choose, MyData::serialize does not need changes; you only need to provide an OutputIterator.
#include <algorithm>
#include <iostream>
#include <iterator>
#include <vector>
struct MyData {
std::vector<int> data = {1,2,3,4};
template<typename T>
void serialize(T iterator) {
std::copy(data.begin(), data.end(), iterator);
}
};
int main()
{
MyData data;
// Output to stream
data.serialize(std::ostream_iterator<int>(std::cout, ","));
// Output to memory
std::vector<int> copy;
data.serialize(std::back_inserter(copy));
// Other uses with different iterator adaptors:
// std::front_insert_iterator
// other, maybe custom ones
}
The difference is polymorphism vs. hardcoded stream.
std::ostream_iterator can be constructed from any class that inherits from std::ostream, so at runtime you can wire the iterator to a different output stream type depending on the context in which the function runs.
The second snippet uses a hardcoded std::cout, which cannot change at runtime.
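A small sketch of that point (the dump() helper name is illustrative):
#include <fstream>
#include <iostream>
#include <iterator>

// The same writing code can target any std::ostream passed in at runtime.
void dump(std::ostream& os) {
    std::ostream_iterator<int> out{os, " "};
    *out = 42;
}

int main() {
    dump(std::cout);               // console
    std::ofstream file{"out.txt"};
    dump(file);                    // file
}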

Feeding a Python list into a function taking in a vector with Boost Python

I've got a function with the signature:
function(std::vector<double> vector);
And I've exposed it, but it doesn't take in Python lists. I've looked through the other SO answers, and most involve changing the function to take in boost::python::lists, but I don't want to change the function. I imagine I can use the vector_indexing_suite to write a simple wrapper around this function, but I have many functions of this form and would rather not write a wrapper for every single one. Is there a way to automatically make a Python list->std::vector mapping occur?
There are a few solutions to accomplish this without having to modify the original functions.
To accomplish this with a small amount of boilerplate code and transparency to python, consider registering a custom converter. Boost.Python uses registered converters when going between C++ and Python types. Some converters are implicitly created when creating bindings, such as when class_ exports a type.
The following complete example uses an iterable_converter type that allows for the registration of conversion functions from a python type supporting the python iterable protocol. The example enables conversions for:
Collection of built-in type: std::vector<double>
2-dimensional collection of strings: std::vector<std::vector<std::string> >
Collection of user type: std::list<foo>
#include <iostream>
#include <list>
#include <vector>
#include <boost/python.hpp>
#include <boost/python/stl_iterator.hpp>
/// @brief Mockup model.
class foo {};
// Test functions demonstrating capabilities.
void test1(std::vector<double> values)
{
for (auto&& value: values)
std::cout << value << std::endl;
}
void test2(std::vector<std::vector<std::string> > values)
{
for (auto&& inner: values)
for (auto&& value: inner)
std::cout << value << std::endl;
}
void test3(std::list<foo> values)
{
std::cout << values.size() << std::endl;
}
/// @brief Type that allows for registration of conversions from
/// python iterable types.
struct iterable_converter
{
/// @note Registers converter from a python iterable type to the
/// provided type.
template <typename Container>
iterable_converter&
from_python()
{
boost::python::converter::registry::push_back(
&iterable_converter::convertible,
&iterable_converter::construct<Container>,
boost::python::type_id<Container>());
// Support chaining.
return *this;
}
/// @brief Check if PyObject is iterable.
static void* convertible(PyObject* object)
{
return PyObject_GetIter(object) ? object : NULL;
}
/// @brief Convert iterable PyObject to C++ container type.
///
/// Container Concept requirements:
///
/// * Container::value_type is CopyConstructible.
/// * Container can be constructed and populated with two iterators.
/// I.e. Container(begin, end)
template <typename Container>
static void construct(
PyObject* object,
boost::python::converter::rvalue_from_python_stage1_data* data)
{
namespace python = boost::python;
// Object is a borrowed reference, so create a handle indicating it is
// borrowed for proper reference counting.
python::handle<> handle(python::borrowed(object));
// Obtain a handle to the memory block that the converter has allocated
// for the C++ type.
typedef python::converter::rvalue_from_python_storage<Container>
storage_type;
void* storage = reinterpret_cast<storage_type*>(data)->storage.bytes;
typedef python::stl_input_iterator<typename Container::value_type>
iterator;
// Allocate the C++ type into the converter's memory block, and assign
// its handle to the converter's convertible variable. The C++
// container is populated by passing the begin and end iterators of
// the python object to the container's constructor.
new (storage) Container(
iterator(python::object(handle)), // begin
iterator()); // end
data->convertible = storage;
}
};
BOOST_PYTHON_MODULE(example)
{
namespace python = boost::python;
// Register iterable conversions.
iterable_converter()
// Built-in type.
.from_python<std::vector<double> >()
// Each dimension needs to be convertible.
.from_python<std::vector<std::string> >()
.from_python<std::vector<std::vector<std::string> > >()
// User type.
.from_python<std::list<foo> >()
;
python::class_<foo>("Foo");
python::def("test1", &test1);
python::def("test2", &test2);
python::def("test3", &test3);
}
Interactive usage:
>>> import example
>>> example.test1([1, 2, 3])
1
2
3
>>> example.test1((4, 5, 6))
4
5
6
>>> example.test2([
... ['a', 'b', 'c'],
... ['d', 'e', 'f']
... ])
a
b
c
d
e
f
>>> example.test3([example.Foo(), example.Foo()])
2
A few comments on this approach:
The iterable_converter::convertible function could be changed to only allow a python list, rather than allowing any type that supports the iterable protocol. However, the extension may become slightly unpythonic as a result.
The conversions are registered based on C++ types. Thus, the registration only needs to be done once, as the same registered conversion will be selected on any number of exported functions that accept the C++ type as an argument.
It does not introduce unnecessary types into the example extension namespace.
Meta-programming could allow for multi-dimensional types to recursively register each dimension type. However, the example code is already complex enough, so I did not want to add an additional level of complexity.
Alternative approaches include:
Create a custom function or template function that accepts a boost::python::list for each function accepting a std::vector (a sketch of such a wrapper follows after this list). This approach causes the bindings to scale with the number of functions being exported, rather than the number of types needing conversion.
Using the Boost.Python vector_indexing_suite. The *_indexing_suite classes export a type that is adapted to match some semantics of a Python list or dictionary. Thus, the python code now has to know the exact container type to provide, resulting in a less-pythonic extension. For example, if std::vector<double> is exported as VecDouble, then the resulting Python usage would be:
v = example.VecDouble()
v[:] = [1, 2, 3]
example.test1(v)
However, the following would not work because the exact types must match, as exporting the class only registers a conversion between VecDouble and std::vector<double>:
example.test1([4, 5, 6])
While this approach scales to types rather than functions, it results in a less pythonic extension and bloats the example namespace with unnecessary types.
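For the first alternative, a per-function wrapper might look roughly like this sketch (test1_wrap is a hypothetical name; it reuses boost::python::stl_input_iterator from the example above and delegates to the unmodified test1()):
// Thin shim: build the std::vector<double> from any python iterable,
// then call the original function.
void test1_wrap(boost::python::object iterable)
{
    std::vector<double> values(
        boost::python::stl_input_iterator<double>(iterable),
        boost::python::stl_input_iterator<double>());
    test1(values);
}
// Exported in place of the original: python::def("test1", &test1_wrap);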
