Java 8: how does a stream's toArray know the size of the array?

String[] stringArray = streamString.toArray(size -> new String[size]);
How does it automatically take the stream's size as the array size?

The Stream API is built around Spliterator, which is an advanced form of iterator. Spliterators can report certain characteristics, allowing optimization of the operations a Stream will apply. They may also report the expected number of elements, either estimated or exact: a Spliterator reports the SIZED characteristic if it knows the number of elements in advance.
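For a quick first check (a minimal sketch; the expected output is noted in comments), you can ask a spliterator for this characteristic directly:
import java.util.Spliterator;
import java.util.stream.Stream;

public class SizedDemo {
    public static void main(String[] args) {
        Spliterator<String> sized = Stream.of("a", "b", "c").spliterator();
        System.out.println(sized.hasCharacteristics(Spliterator.SIZED)); // true
        System.out.println(sized.getExactSizeIfKnown());                 // 3

        // filter() may drop elements, so the pipeline no longer knows its size
        Spliterator<String> unsized = Stream.of("a", "b", "c").filter(s -> true).spliterator();
        System.out.println(unsized.hasCharacteristics(Spliterator.SIZED)); // false
        System.out.println(unsized.getExactSizeIfKnown());                 // -1 (unknown)
    }
}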
You can test which knowledge about its elements a Stream has, given the encapsulated operations, using the following method:
// Requires: import java.util.Spliterator;
//           import java.util.stream.Stream;
//           import java.util.stream.StreamSupport;
//           import static java.util.Spliterator.*; // for the characteristic constants
public static <T> Stream<T> printProperties(String op, Stream<T> s) {
    System.out.print("characteristics after " + op + ": ");
    Spliterator<T> sp = s.spliterator();
    int characteristics = sp.characteristics();
    if (characteristics == 0) System.out.println("0");
    else {
        String str;
        for (;;) {
            int flag = Integer.highestOneBit(characteristics);
            switch (flag) {
                case ORDERED:    str = "ORDERED";    break;
                case DISTINCT:   str = "DISTINCT";   break;
                case SORTED:     str = "SORTED";     break;
                case SIZED:      str = "SIZED";      break;
                case NONNULL:    str = "NONNULL";    break;
                case IMMUTABLE:  str = "IMMUTABLE";  break;
                case CONCURRENT: str = "CONCURRENT"; break;
                case SUBSIZED:   str = "SUBSIZED";   break;
                default: str = String.format("0x%X", flag);
            }
            characteristics -= flag;
            if (characteristics == 0) break;
            System.out.append(str).append('|');
        }
        System.out.println(str);
    }
    // hand back an equivalent stream, since spliterator() consumed the original
    return StreamSupport.stream(sp, s.isParallel());
}
You can use it to learn how certain operations influence the knowledge about the elements. E.g., when you use this method with the following test program:
Stream<Object> stream;
stream = printProperties("received from TreeSet", new TreeSet<>().stream());
stream = printProperties("applying map", stream.map(x -> x));
stream = printProperties("applying distinct", stream.distinct());
stream = printProperties("filtering", stream.filter(x -> true));
stream = printProperties("applying sort", stream.sorted());
stream = printProperties("requesting unordered", stream.unordered());
System.out.println();

stream = printProperties("received from varargs array", Stream.of("foo", "bar"));
stream = printProperties("applying sort", stream.sorted());
stream = printProperties("applying map", stream.map(x -> x));
stream = printProperties("applying distinct", stream.distinct());
stream = printProperties("requesting unordered", stream.unordered());
System.out.println();

printProperties("ConcurrentHashMap.keySet().stream()",
        new ConcurrentHashMap<>().keySet().stream());
it will print:
characteristics after received from TreeSet: SIZED|ORDERED|SORTED|DISTINCT
characteristics after applying map: SIZED|ORDERED
characteristics after applying distinct: ORDERED|DISTINCT
characteristics after filtering: ORDERED|DISTINCT
characteristics after applying sort: ORDERED|SORTED|DISTINCT
characteristics after requesting unordered: SORTED|DISTINCT
characteristics after received from varargs array: SUBSIZED|IMMUTABLE|SIZED|ORDERED
characteristics after applying sort: SUBSIZED|SIZED|ORDERED|SORTED
characteristics after applying map: SUBSIZED|SIZED|ORDERED
characteristics after applying distinct: ORDERED|DISTINCT
characteristics after requesting unordered: DISTINCT
characteristics after ConcurrentHashMap.keySet().stream(): CONCURRENT|NONNULL|DISTINCT
As JB Nizet explained, if a stream does not know the size in advance, it has to use a strategy for collecting the elements, which might include reallocating arrays. As the documentation says:
… using the provided generator function to allocate the returned array, as well as any additional arrays that might be required for a partitioned execution or for resizing.
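To see the generator driven by a stream that does not know its size (filter may drop elements, clearing SIZED), you can log the calls. This is a minimal sketch; in the current JDK the generator is typically still called just once with the exact final count, but the contract explicitly allows additional calls:
String[] out = Stream.of("a", "b", "c")
        .filter(s -> !s.equals("b")) // clears SIZED: the final count is unknown upfront
        .toArray(size -> {
            System.out.println("generator called with size " + size); // prints: ... size 2
            return new String[size];
        });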

size -> new String[size]
is a lambda expression implementing IntFunction<A[]>, since the signature of the method is
<A> A[] toArray(IntFunction<A[]> generator)
So this line creates an instance of IntFunction and passes it as an argument to toArray. It is the stream that calls the function (i.e. invokes the method apply(int)), so it is the stream that passes the size as the argument. And the stream knows its own size.
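A minimal sketch to make this concrete (the class name is just for illustration): the lambda is an ordinary IntFunction<String[]> that you could invoke yourself; handed to toArray, the stream invokes it:
import java.util.function.IntFunction;
import java.util.stream.Stream;

public class GeneratorDemo {
    public static void main(String[] args) {
        // the lambda is an ordinary IntFunction<String[]>; you could call apply yourself
        IntFunction<String[]> generator = size -> new String[size]; // same as String[]::new
        System.out.println(generator.apply(3).length); // 3

        // handed to toArray, it is the stream that calls apply, here with 3,
        // because Stream.of knows its size in advance
        String[] stringArray = Stream.of("foo", "bar", "baz").toArray(generator);
        System.out.println(stringArray.length); // 3
    }
}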

Related

Responsive asynchronous search-as-you-type in Java 8

I'm trying to implement a "search as you type" pattern in Java.
The goal of the design is that no change gets lost, but at the same time, the (time-consuming) search operation should be able to abort early and retry with the updated pattern.
Here is what I've come up with so far (Java 8 pseudocode):
AtomicReference<String> patternRef;
AtomicLong modificationCount;
ReentrantLock busy;
Consumer<List<ResultType>> resultConsumer;

// This is called in a background thread every time the user presses a key
void search(String pattern) {
    // Update the pattern
    synchronized {
        patternRef.set(pattern);
        modificationCount.inc();
    }
    try {
        if (!busy.tryLock()) {
            // Another search is already running, let it handle the change
            return;
        }
        // Get a local copy of the pattern and modCount
        String patternCopy;
        long modCount;
        synchronized {
            patternCopy = patternRef.get();
            modCount = modificationCount.get();
        }
        while (true) {
            // Try the search. It will return false when modificationCount
            // changes before the search is finished
            boolean success = doSearch(patternCopy, modCount);
            if (success) {
                // Search completed before modCount was changed again
                break;
            }
            // Try again with the new pattern + modCount
            synchronized {
                patternCopy = patternRef.get();
                modCount = modificationCount.get();
            }
        }
    } finally {
        busy.unlock();
    }
}

boolean doSearch(String pattern, long modCount) {
    // ... search database ...
    if (modCount != modificationCount.get()) {
        return false;
    }
    // ... prepare results ...
    if (modCount != modificationCount.get()) {
        return false;
    }
    resultConsumer.accept(result); // Consumer for the UI code to do something
    return modCount == modificationCount.get();
}
Did I miss some important point? A race condition or something similar?
Is there something in Java 8 which would make the code above more simple?
The fundamental problem of this code can be summarized as “trying to achieve atomicity with multiple distinct atomic constructs”. The combination of multiple atomic constructs is not atomic, and trying to reestablish atomicity leads to very complicated, usually broken, and inefficient code.
In your case, doSearch's last check modCount == modificationCount.get() happens while still holding the lock. After that, another thread (or multiple other threads) could update the search string and mod count, then find the lock occupied, and hence conclude that another search is running and will take care of it.
But the thread holding the lock doesn't look again after that last modCount == modificationCount.get() check: the caller just does if (success) { break; }, followed by the finally { busy.unlock(); }, and returns, so the newest pattern is never searched.
So the answer is: yes, you have potential race conditions.
So, instead of settling on two atomic variables, synchronized blocks, and a ReentrantLock, you should use one atomic construct, e.g. a single atomic variable:
final AtomicReference<String> patternRef = new AtomicReference<>();
Consumer<List<ResultType>> resultConsumer;

// This is called in a background thread every time the user presses a key
void search(String pattern) {
    if (patternRef.getAndSet(pattern) != null) return;
    // Try the search. doSearch will return false when not completed
    while (!doSearch(pattern) || !patternRef.compareAndSet(pattern, null))
        pattern = patternRef.get();
}

boolean doSearch(String pattern) {
    // ... search database ...
    if (pattern != (Object)patternRef.get()) {
        return false;
    }
    // ... prepare results ...
    if (pattern != (Object)patternRef.get()) {
        return false;
    }
    resultConsumer.accept(result); // Consumer for the UI code to do something
    return true;
}
Here, a value of null indicates that no search is running, so if a background thread sets this to a non-null value and finds the old value to be null (in an atomic operation), it knows it has to perform the actual search. After the search, it tries to set the reference to null again, using compareAndSet with the pattern used for the search. Thus, it can only succeed if it has not changed again. Otherwise, it will fetch the new value and repeat.
These two atomic updates are already sufficient to ensure that there is only a single search operation at a time while not missing an updated search pattern. The ability of doSearch to return early when it detects a change is just a nice-to-have and not required by the caller's loop.
Note that in this example, the check within doSearch has been reduced to a reference comparison (using a cast to Object to prevent compiler warnings), to demonstrate that it can be as cheap as the int comparison of your original approach. As long as no new string has been set, the reference will be the same.
But, in fact, you could also use a string comparison, i.e. if(!pattern.equals(patternRef.get())) { return false; }, without significant performance degradation. String comparison is not (necessarily) expensive in Java. The first thing String's equals implementation does is a reference comparison. So if the string has not changed, it will return true immediately. Otherwise, it will compare the lengths next (unlike C strings, the length is known beforehand) and return false immediately on a mismatch. So in the typical scenario of the user typing another character or pressing backspace, the lengths will differ and the comparison bails out immediately.
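For completeness, here is a minimal, self-contained sketch of how this variant might be driven from keystroke events. The executor wiring, the class name, and the stubbed doSearch body are assumptions for illustration, not part of the original design:
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicReference;

public class SearchAsYouTypeDemo {
    static final AtomicReference<String> patternRef = new AtomicReference<>();
    static final ExecutorService background = Executors.newCachedThreadPool();

    // called on every keystroke, e.g. from a UI event listener
    static void onKeyTyped(String newPattern) {
        background.execute(() -> search(newPattern));
    }

    static void search(String pattern) {
        if (patternRef.getAndSet(pattern) != null) return; // a search is already running
        // repeat until a search completed and no newer pattern arrived meanwhile
        while (!doSearch(pattern) || !patternRef.compareAndSet(pattern, null))
            pattern = patternRef.get();
    }

    static boolean doSearch(String pattern) {
        // stand-in for the expensive database query
        if (pattern != (Object) patternRef.get()) return false; // pattern changed: abort early
        System.out.println("results for \"" + pattern + "\"");
        return true;
    }

    public static void main(String[] args) {
        for (String p : new String[] {"j", "ja", "jav", "java"}) onKeyTyped(p);
        background.shutdown();
    }
}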

Why does a filter with side effects perform better than a Spliterator-based implementation?

Regarding the question How to skip even lines of a Stream obtained from Files.lines, I followed the accepted answer's approach, implementing my own filterEven() method based on the Spliterator<T> interface, e.g.:
public static <T> Stream<T> filterEven(Stream<T> src) {
    Spliterator<T> iter = src.spliterator();
    AbstractSpliterator<T> res = new AbstractSpliterator<T>(Long.MAX_VALUE, Spliterator.ORDERED) {
        @Override
        public boolean tryAdvance(Consumer<? super T> action) {
            iter.tryAdvance(item -> {});    // discard
            return iter.tryAdvance(action); // use
        }
    };
    return StreamSupport.stream(res, false);
}
which I can use in the following way:
Stream<String> lines = Files.lines(src);
Stream<DomainObject> res = filterEven(lines)
        .map(line -> toDomainObject(line));
However measuring the performance of this approach against the next one which uses a filter() with side effects I noticed that the next one performs better:
final int[] counter = {0};
final Predicate<String> isEvenLine = item -> ++counter[0] % 2 == 0;
Stream<DomainObject> res = Files.lines(src)
        .filter(isEvenLine)
        .map(line -> toDomainObject(line));
I tested the performance with JMH, and I am not including the file load in the benchmark; I load it into an array beforehand. Each benchmark then starts by creating a Stream<String> from that array, then filters even lines, then applies mapToInt() to extract the value of an int field, and finally a max() operation. Here is one of the benchmarks (you can check the whole Program here, and here you have the data file with about 186 lines):
@Benchmark
public int maxTempFilterEven(DataSource src) {
    Stream<String> content = Arrays.stream(src.data)
            .filter(s -> s.charAt(0) != '#') // Filter comments
            .skip(1);                        // Skip line: Not available
    return filterEven(content)               // Filter daily info and skip hourly
            .mapToInt(line -> parseInt(line.substring(14, 16)))
            .max()
            .getAsInt();
}
I don't get why the filter() approach performs better (~80 ops/ms) than the filterEven() one (~50 ops/ms)?
Intro
I think I know the reason, but unfortunately I have no idea how to improve the performance of the Spliterator-based solution (at least without rewriting the whole Streams API feature).
Sidenote 1: performance was not the most important design goal when the Stream API was designed. If performance is critical, most probably rewriting the code without the Stream API will make it faster. (For example, the Stream API unavoidably increases memory allocation and thus GC pressure.) On the other hand, in most scenarios the Stream API provides a nicer, higher-level API at the cost of a relatively small performance degradation.
Part 1 or Short theoretical answer
Stream is designed to implement a kind of internal iteration as the main means of consumption, and external iteration (i.e. Spliterator-based) is an additional means that is kind of "emulated". Thus external iteration involves some overhead. Laziness adds some limits to the efficiency of external iteration, and the need to support flatMap makes it necessary to use some kind of dynamic buffer in this process.
Sidenote 2: In some cases Spliterator-based iteration might be as fast as the internal iteration (i.e. filter in this case). Particularly it is so when you create a Spliterator directly from the data-containing Stream. To see it, you can modify your tests to materialize your first filter into a String[] array:
String[] filteredData = Arrays.stream(src.data)
        .filter(s -> s.charAt(0) != '#') // Filter comments
        .skip(1)
        .toArray(String[]::new);
and then compare the performance of maxTempFilter and maxTempFilterEven modified to accept that pre-filtered String[] filteredData (a sketch of the modified benchmark follows below). If you want to know why this is so, you probably should read the rest of this long answer, or at least Part 2.
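A sketch of that modified benchmark (the field name filteredData and the benchmark name are assumptions for illustration):
@Benchmark
public int maxTempFilterEvenPrefiltered(DataSource src) {
    // src.filteredData is assumed to hold the pre-filtered lines from the snippet above
    return filterEven(Arrays.stream(src.filteredData))
            .mapToInt(line -> parseInt(line.substring(14, 16)))
            .max()
            .getAsInt();
}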
Part 2 or Longer theoretical answer:
Streams were designed to be mainly consumed as a whole by some terminal operation. Iterating over elements one by one, although supported, is not designed as the main way to consume streams.
Note that using the "functional" Stream API such as map, flatMap, filter, reduce, and collect, you can't say at some step "I have had enough data, stop iterating over the source and pushing values". You can discard some incoming data (as filter does) but can't stop iteration. (take and skip transformations are actually implemented using Spliterator inside; and anyMatch, allMatch, noneMatch, findFirst, findAny, etc. use the non-public API j.u.s.Sink.cancellationRequested; they are also easier, as there can't be several terminal operations.) If all transformations in the pipeline are synchronous, you can combine them into a single aggregated function (Consumer) and call it in a simple loop (optionally splitting the loop execution over several threads). This is what my simplified version of the state-based filter represents (see the code in the Show me some code section, and the minimal sketch right below). It gets a bit more complicated if there is a flatMap in the pipeline, but the idea is still the same.
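To make the "single aggregated function" idea concrete, here is a minimal sketch (with made-up data lines) that fuses the filter, the map, and the terminal action into one Consumer driven by a plain loop:
import java.util.function.Consumer;

public class FusedPipelineDemo {
    public static void main(String[] args) {
        String[] data = {"#comment", "....FOO......42..", "....BAR......17.."};
        // filter + map + terminal action fused into one Consumer, driven by one loop
        Consumer<String> fused = line -> {
            if (line.charAt(0) != '#') {                              // the filter step
                int value = Integer.parseInt(line.substring(13, 15)); // the map step
                System.out.println(value);                            // prints 42, then 17
            }
        };
        for (String line : data) fused.accept(line);
    }
}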
Spliterator-based transformation is fundamentally different because it adds an asynchronous, consumer-driven step to the pipeline. Now the Spliterator rather than the source Stream drives the iteration process. If you ask for a Spliterator directly on the source Stream, it might be able to return an implementation that just iterates over its internal data structure, and this is why materializing pre-filtered data should remove the performance difference. However, if you create a Spliterator for some non-empty pipeline, there is no (simple) choice other than asking the source to push elements one by one through the pipeline until some element passes all the filters (see also the second example in the Show me some code section). The fact that source elements are pushed one by one rather than in some batches is a consequence of the fundamental decision to make Streams lazy. The need for a buffer instead of just one element is a consequence of the support for flatMap: pushing one element from the source can produce many elements for the Spliterator.
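You can observe this buffering directly. The following sketch relies on the buffering being an implementation detail of the current JDK's WrappingSpliterator, so treat the comments as illustrative rather than guaranteed:
import java.util.Arrays;
import java.util.Spliterator;
import java.util.stream.Stream;

public class FlatMapBufferDemo {
    public static void main(String[] args) {
        // one source element expands into three pipeline elements via flatMap
        Spliterator<String> sp = Stream.of("a,b,c")
                .flatMap(s -> Arrays.stream(s.split(",")))
                .spliterator();
        // the first tryAdvance pushes the single source element through the pipeline:
        // "a" is handed to the action, while "b" and "c" land in an internal buffer
        sp.tryAdvance(System.out::println); // a
        sp.tryAdvance(System.out::println); // b (served from the buffer)
        sp.tryAdvance(System.out::println); // c (served from the buffer)
    }
}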
Part 3 or Show me some code
This part tries to back what was described in the "theoretical" parts with code (both links to the real code and simulated code).
First of all, you should know that the current Streams API implementation accumulates non-terminal (intermediate) operations into a single lazy pipeline (see j.u.s.AbstractPipeline and its children, such as j.u.s.ReferencePipeline). Then, when the terminal operation is applied, all the elements from the original Stream are "pushed" through the pipeline.
What you see is the result of two things: the fact that stream pipelines are different when there is a Spliterator-based step inside, and the fact that your OddLines is not the first step in the pipeline.
The code with a stateful filter is more or less similar to the following straightforward code:
static int similarToFilter(String[] data)
{
    final int[] counter = {0};
    final Predicate<String> isEvenLine = item -> ++counter[0] % 2 == 0;
    int skip = 1;
    boolean reduceEmpty = true;
    int reduceState = 0;
    for (String outerEl : data)
    {
        if (outerEl.charAt(0) != '#')
        {
            if (skip > 0)
                skip--;
            else
            {
                if (isEvenLine.test(outerEl))
                {
                    int intEl = parseInt(outerEl.substring(14, 16));
                    if (reduceEmpty)
                    {
                        reduceState = intEl;
                        reduceEmpty = false;
                    }
                    else
                    {
                        reduceState = Math.max(reduceState, intEl);
                    }
                }
            }
        }
    }
    return reduceState;
}
Note that this is effectively a single loop with some calculations (filtering/transformations) inside.
When you add a Spliterator into the pipeline, on the other hand, things change significantly, and even with simplifications, code that is reasonably similar to what actually happens becomes much larger, such as:
interface Sp<T>
{
    public boolean tryAdvance(Consumer<? super T> action);
}

static class ArraySp<T> implements Sp<T>
{
    private final T[] array;
    private int pos;

    public ArraySp(T[] array)
    {
        this.array = array;
    }

    @Override
    public boolean tryAdvance(Consumer<? super T> action)
    {
        if (pos < array.length)
        {
            action.accept(array[pos]);
            pos++;
            return true;
        }
        else
        {
            return false;
        }
    }
}

static class WrappingSp<T> implements Sp<T>, Consumer<T>
{
    private final Sp<T> sourceSp;
    private final Predicate<T> filter;
    private final ArrayList<T> buffer = new ArrayList<T>();
    private int pos;

    public WrappingSp(Sp<T> sourceSp, Predicate<T> filter)
    {
        this.sourceSp = sourceSp;
        this.filter = filter;
    }

    @Override
    public void accept(T t)
    {
        buffer.add(t);
    }

    @Override
    public boolean tryAdvance(Consumer<? super T> action)
    {
        while (true)
        {
            if (pos >= buffer.size())
            {
                pos = 0;
                buffer.clear();
                sourceSp.tryAdvance(this);
            }
            // failed to fill buffer
            if (buffer.size() == 0)
                return false;
            T nextElem = buffer.get(pos);
            pos++;
            if (filter.test(nextElem))
            {
                action.accept(nextElem);
                return true;
            }
        }
    }
}

static class OddLineSp<T> implements Sp<T>, Consumer<T>
{
    private Sp<T> sourceSp;

    public OddLineSp(Sp<T> sourceSp)
    {
        this.sourceSp = sourceSp;
    }

    @Override
    public boolean tryAdvance(Consumer<? super T> action)
    {
        if (sourceSp == null)
            return false;
        sourceSp.tryAdvance(this); // discard one element
        if (!sourceSp.tryAdvance(action))
        {
            sourceSp = null;
        }
        return true;
    }

    @Override
    public void accept(T t)
    {
        // intentionally empty: used to discard elements
    }
}

static class ReduceIntMax
{
    boolean reduceEmpty = true;
    int reduceState = 0;

    public int getReduceState()
    {
        return reduceState;
    }

    public void accept(int t)
    {
        if (reduceEmpty)
        {
            reduceEmpty = false;
            reduceState = t;
        }
        else
        {
            reduceState = Math.max(reduceState, t);
        }
    }
}

static int similarToSpliterator(String[] data)
{
    ArraySp<String> src = new ArraySp<>(data);
    int[] skip = new int[1];
    skip[0] = 1;
    WrappingSp<String> firstFilter = new WrappingSp<String>(src, (s) ->
    {
        if (s.charAt(0) == '#')
            return false;
        if (skip[0] != 0)
        {
            skip[0]--;
            return false;
        }
        return true;
    });
    OddLineSp<String> oddLines = new OddLineSp<>(firstFilter);
    final ReduceIntMax reduceIntMax = new ReduceIntMax();
    while (oddLines.tryAdvance(s ->
    {
        int intValue = parseInt(s.substring(14, 16));
        reduceIntMax.accept(intValue);
    })) ; // do nothing in the loop body
    return reduceIntMax.getReduceState();
}
This code is larger because the logic is impossible (or at least very hard) to represent without some non-trivial stateful callbacks inside the loop. Here the interface Sp is a mix of the j.u.s.Stream and j.u.Spliterator interfaces.
The class ArraySp represents the result of Arrays.stream.
The class WrappingSp is similar to j.u.s.StreamSpliterators.WrappingSpliterator, which in the real code represents an implementation of the Spliterator interface for any non-empty pipeline, i.e. a Stream with at least one intermediate operation applied to it (see the j.u.s.AbstractPipeline.spliterator method). In my code I merged it with a StatelessOp subclass and put there the logic responsible for the filter method implementation. Also, for simplicity, I implemented skip using filter.
OddLineSp corresponds to your OddLines and its resulting Stream.
ReduceIntMax represents the ReduceOps terminal operation for Math.max over ints.
So what's important in this example? The important thing here is that since you first filter your original stream, your OddLineSp is created from a non-empty pipeline, i.e. from a WrappingSp. And if you take a closer look at WrappingSp, you'll notice that every time tryAdvance is called, it delegates the call to the sourceSp and accumulates the result(s) into a buffer. Moreover, since you have no flatMap in the pipeline, elements are copied to the buffer one by one. I.e. every time WrappingSp.tryAdvance is called, it will call ArraySp.tryAdvance, get back exactly one element (via callback), and pass it further to the consumer provided by the caller (unless the element doesn't match the filter, in which case ArraySp.tryAdvance will be called again and again; but still, the buffer never holds more than one element at a time).
Sidenote 3: If you want to look at the real code, the most interesting places are j.u.s.StreamSpliterators.WrappingSpliterator.tryAdvance, which calls j.u.s.StreamSpliterators.AbstractWrappingSpliterator.doAdvance, which in turn calls j.u.s.StreamSpliterators.AbstractWrappingSpliterator.fillBuffer, which in turn calls the pusher that is initialized in j.u.s.StreamSpliterators.WrappingSpliterator.initPartialTraversalState.
So the main thing that's hurting performance is this copying into the buffer.
Unfortunately for us ordinary Java developers, the current implementation of the Stream API is pretty much closed, and you can't modify individual aspects of its internal behavior using inheritance or composition.
You may use some reflection-based hacking to make the copying-to-buffer more efficient for your specific case and gain some performance (but sacrifice the laziness of the Stream), but you can't avoid this copying altogether, and thus the Spliterator-based code will be slower anyway.
Going back to the example from the Sidenote #2, Spliterator-based test with materialized filteredData works faster because there is no WrappingSp in the pipeline before OddLineSp and thus there will be no copying into an intermediate buffer.

Creating a stream of booleans from a boolean array? [duplicate]

There is no nice way to convert a given boolean[] foo array into a stream in Java 8 in one statement, or am I missing something?
(I will not ask why, but it is really incomprehensible: why not add stream support for all primitive types?)
Hint: Arrays.stream(foo) will not work; there is no such method for the boolean[] type.
Given boolean[] foo, use
Stream<Boolean> stream = IntStream.range(0, foo.length)
        .mapToObj(idx -> foo[idx]);
Note that every boolean value will be boxed, but that's usually not a big problem, as boxing a boolean does not allocate additional memory (it just uses one of the predefined values, Boolean.TRUE or Boolean.FALSE).
You can use Guava's Booleans class:
Stream<Boolean> stream = Booleans.asList(foo).stream();
This is a pretty efficient way because Booleans.asList returns a wrapper for the array and does not make any copies.
Of course, you could create a stream directly:
Stream.Builder<Boolean> builder = Stream.builder();
for (int i = 0; i < foo.length; i++)
builder.add(foo[i]);
Stream<Boolean> stream = builder.build();
…or by wrapping an AbstractList around foo
Stream<Boolean> stream = new AbstractList<Boolean>() {
    public Boolean get(int index) { return foo[index]; }
    public int size() { return foo.length; }
}.stream();
Skimming through the early-access JavaDoc (i.e. the java.base module) of the newest java-15, there is still no neat way to make a primitive boolean array work well with the Stream API. There has been no new feature in the API for handling a primitive boolean array since java-8.
Note that there exist IntStream, DoubleStream and LongStream, but nothing like a BooleanStream that would represent a sequence of primitive booleans. Also, the overloaded methods of Stream are Stream::mapToInt, Stream::mapToDouble and Stream::mapToLong, but there is no Stream::mapToBoolean returning such a hypothetical BooleanStream.
Oracle seems to keep following this pattern, which can also be found in Collectors. There is also no such support for float primitives (there is for double instead). In my opinion, unlike float, boolean support would make sense to implement.
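Until then, if boxing is a concern, one common workaround (a sketch, not an official API) is to represent the flags as an IntStream of 0/1 values:
import java.util.stream.IntStream;

public class BooleanBitsDemo {
    public static void main(String[] args) {
        boolean[] foo = {true, false, true};
        // no BooleanStream exists, but an IntStream of 0/1 values avoids boxing entirely
        IntStream bits = IntStream.range(0, foo.length).map(i -> foo[i] ? 1 : 0);
        System.out.println(bits.sum()); // 2: the number of true values
    }
}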
Back to the code... if you have a boxed Boolean array (i.e. Boolean[] array), things get easier:
Boolean[] array = ...
Stream<Boolean> streamOfBoxedBoolean1 = Arrays.stream(array);
Stream<Boolean> streamOfBoxedBoolean2 = Stream.of(array);
Otherwise you have to use more than one statement, as said in this or this answer.
However, you asked (emphasis mine):
way to convert given boolean[] foo array into stream in Java-8 in one statement.
... there is actually a way to achieve this in one statement, using a Spliterator made from an Iterator. It is definitely not nice, but:
boolean[] array = ...
Stream<Boolean> stream = StreamSupport.stream(
        Spliterators.spliteratorUnknownSize(
                new Iterator<>() {
                    int index = 0;
                    @Override public boolean hasNext() { return index < array.length; }
                    @Override public Boolean next() { return array[index++]; }
                }, 0), false);

Indirect Enum or Class, Which One Should I Use for Building Basic Data Structures

When I tried to practice some basic data structures, such as Linked/Doubly Linked/Recycling Linked/Recycling Doubly Linked List, AVL Tree, Red-Black Tree, B-Tree and Treap, by implementing them in Swift 2, I decided to do so by taking advantage of Swift 2's new feature, indirect enum, because an enum makes an empty node and a filled node more semantic than a class does.
But I soon found that for non-recycling linked lists, returning the inserted node after inserting an element makes no sense, because the returned value is a value type, not a reference type. That is to say, you cannot accelerate the next insertion by writing information directly to the returned value, because it is a copy of the inserted node, not a reference to it.
And what's worse, mutating an indirect-enum-based node means rewriting the whole associated value, which definitely introduces unnecessary resource consumption: the associated value of each enum case is in essence a tuple, a contiguous chunk of memory just like a struct, but without per-property accessors that would enable writing only a small chunk of data.
So which one should I use for building such basic data structures? Indirect enum or class?
Well, it doesn't matter whether it's Swift 1 or Swift 2, because in both, enums and structs are value types while classes are reference types. Since you want to use these data structures in your code and, as you noted yourself, value semantics are no good here, you will have to use a class for your code to do what you want. Here is an example of a linked list using a class:
class LLNode<T> {
    var key: T?
    var next: LLNode?
    var previous: LLNode?
}

public class LinkedList<T: Equatable> {
    // create a new LLNode instance
    private var head: LLNode<T> = LLNode<T>()

    // append a new item to the linked list
    func addLink(key: T) {
        // establish the head node
        if head.key == nil {
            head.key = key
            return
        }
        // establish the iteration variables
        var current: LLNode? = head
        while current != nil {
            if current?.next == nil {
                let childToUse = LLNode<T>()
                childToUse.key = key
                childToUse.previous = current
                current!.next = childToUse
                break
            }
            current = current?.next
        } // end while
    } // end function

    // print all keys
    func printAllKeys() {
        var current: LLNode! = head // assign the next instance
        while current != nil {
            print("link item is: \(current.key)")
            current = current.next
        }
    }
}
For more examples of data structures implemented in Swift, please do visit:
data structures swift
Conclusion: use a class if you want reference semantics; otherwise use an enum or struct.

send/receive object arrays using MPI

Is it possible to send/receive C++ objects and object arrays using MPI_Bcast, MPI_Scatter and MPI_Gather? If yes, which MPI datatype is used for objects?
For example I have a class named cell.
class cell
{
private:
    int abc;
    double xyz;
public:
    cell() { }
    ...
};
In the main function, I would like to make an object array of class cell and send/receive it as an object array, e.g.:
void main()
{
    ...
    cell** cells = new cell*[someVar];
    for (int i = 0; i < someVar; ++i)
    {
        cells[i] = new cell[someVar];
    }
    MPI_Bcast(cells, someVar, ???, 0, MPI_COMM_WORLD);
    ...
}
How can we define an MPI data type to send / receive an object array?
Check out the MPI_Pack/MPI_Unpack mechanism. On the sending side you stuff the elements into a pack buffer and send that; the receiving side unpacks it component by component. This offers such cute possibilities as first unpacking an integer that tells you how many subsequent doubles there are to unpack. A big advantage of this approach is that it applies to objects that are only indirectly accessible, for instance through an iterator.
