Java 8 stream interference versus non-interference

I understand why the following code is OK: the collection is modified before the terminal operation is called.
List<String> wordList = ...;
Stream<String> words = wordList.stream();
wordList.add("END"); // Ok
long n = words.distinct().count();
But why is this code not OK?
Stream<String> words = wordList.stream();
words.forEach(s -> { if (s.length() < 12) wordList.remove(s); }); // Error—interference

Stream.forEach() is a terminal operation, and the underlying wordList collection is modified after the terminal operation has started.
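To make the failure concrete, here is a minimal self-contained sketch (the word values are made up); on an ArrayList, this kind of interference typically surfaces as a ConcurrentModificationException, though the detection is only best-effort:

import java.util.ArrayList;
import java.util.Arrays;
import java.util.ConcurrentModificationException;
import java.util.List;
import java.util.stream.Stream;

public class InterferenceDemo {
    public static void main(String[] args) {
        List<String> wordList = new ArrayList<>(Arrays.asList("alpha", "beta"));

        // Modifying the source before the terminal operation starts is fine:
        Stream<String> words = wordList.stream();
        wordList.add("END");
        System.out.println(words.distinct().count()); // prints 3

        // Modifying the source while the terminal operation is running interferes:
        try {
            wordList.stream().forEach(s -> wordList.remove(s));
        } catch (ConcurrentModificationException e) {
            System.out.println("interference detected");
        }
    }
}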

Joachim's answer is correct, +1.
You didn't ask specifically, but for the benefit of other readers, here are a couple of techniques for rewriting the program a different way, avoiding stream interference problems.
If you want to mutate the list in-place, you can do so with a new default method on List instead of using streams:
wordList.removeIf(s -> s.length() < 12);
If you want to leave the original list intact but create a modified copy, you can use a stream and a collector to do that:
List<String> newList = wordList.stream()
.filter(s -> s.length() >= 12)
.collect(Collectors.toList());
Note that I had to invert the sense of the condition, since filter takes a predicate that keeps values in the stream if the condition is true.
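For completeness, a quick self-contained check that the two approaches agree (the sample words are made up):

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class RemoveShortWords {
    public static void main(String[] args) {
        List<String> wordList = new ArrayList<>(
                Arrays.asList("short", "sufficiently long", "tiny"));

        // Copying approach: keep only the long words in a new list.
        List<String> newList = wordList.stream()
                .filter(s -> s.length() >= 12)
                .collect(Collectors.toList());

        // In-place approach: remove the short words from the original.
        wordList.removeIf(s -> s.length() < 12);

        System.out.println(wordList.equals(newList)); // prints true
    }
}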

Comparator.compareBoolean() the same as Comparator.compare()?

How can I write this
Comparator <Item> sort = (i1, i2) -> Boolean.compare(i2.isOpen(), i1.isOpen());
to something like this (code does not work):
Comparator<Item> sort = Comparator.comparing(Item::isOpen).reversed();
Comparing method does not have something like Comparator.comparingBool(). Comparator.comparing returns int and not "Item".
Why can't you write it like this?
Comparator<Item> sort = Comparator.comparing(Item::isOpen);
Under the hood, Boolean.compareTo is called, which in turn is the same as Boolean.compare:
public static int compare(boolean x, boolean y) {
return (x == y) ? 0 : (x ? 1 : -1);
}
And this: "Comparator.comparing returns int and not Item" makes little sense; Comparator.comparing must return a Comparator<T>, and in your case it correctly returns a Comparator<Item>.
The overloads comparingInt, comparingLong, and comparingDouble exist for performance reasons only. They are semantically identical to the unspecialized comparing method, so using comparing instead of comparingXXX has the same outcome, but it might have boxing overhead; the actual implications depend on the particular execution environment.
In the case of boolean values, we can predict that the overhead will be negligible, as the method Boolean.valueOf will always return either Boolean.TRUE or Boolean.FALSE and never create new instances; so even if a particular JVM fails to inline the entire code, the performance does not depend on the presence of Escape Analysis in the optimizer.
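A trivial sketch (not from the question) to convince yourself of that caching behavior:

public class BooleanCachingDemo {
    public static void main(String[] args) {
        // Autoboxing goes through Boolean.valueOf, which returns one of the
        // two cached instances and never allocates:
        Boolean boxed = true;
        System.out.println(boxed == Boolean.TRUE); // prints true
    }
}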
As you already figured out, reversing a comparator is implemented by swapping the arguments internally, just like you did manually in your lambda expression.
Note that it is still possible to create a comparator fusing the reversal and an unboxed comparison without having to repeat the isOpen() expression:
Comparator<Item> sort = Comparator.comparingInt(i -> i.isOpen()? 0: 1);
but, as said, it’s unlikely to have a significantly higher performance than the Comparator.comparing(Item::isOpen).reversed() approach.
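For readers who want to experiment, here is a self-contained sketch of the three equivalent comparators discussed above; the Item class is a minimal hypothetical stand-in for the one in the question:

import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

public class ComparatorDemo {
    // Minimal hypothetical Item: just enough to compile the examples.
    static class Item {
        private final boolean open;
        Item(boolean open) { this.open = open; }
        boolean isOpen() { return open; }
        @Override public String toString() { return "Item(open=" + open + ")"; }
    }

    public static void main(String[] args) {
        List<Item> items = new ArrayList<>(
                Arrays.asList(new Item(false), new Item(true), new Item(false)));

        // All three comparators order open items before closed ones:
        Comparator<Item> manual = (i1, i2) -> Boolean.compare(i2.isOpen(), i1.isOpen());
        Comparator<Item> reversed = Comparator.comparing(Item::isOpen).reversed();
        Comparator<Item> unboxed = Comparator.comparingInt(i -> i.isOpen() ? 0 : 1);

        items.sort(reversed); // manual and unboxed produce the same order
        System.out.println(items);
    }
}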
But note that if you have a boolean sort criterion and care about maximum performance, you may consider replacing the general-purpose sort algorithm with a bucket sort variant, which runs in linear time rather than O(n log n). E.g.
If you have a Stream, replace
List<Item> result = /* stream of Item */
.sorted(Comparator.comparing(Item::isOpen).reversed())
.collect(Collectors.toList());
with
Map<Boolean,List<Item>> map = /* stream of Item */
.collect(Collectors.partitioningBy(Item::isOpen,
Collectors.toCollection(ArrayList::new)));
List<Item> result = map.get(true);
result.addAll(map.get(false));
or, if you have a List, replace
list.sort(Comparator.comparing(Item::isOpen).reversed());
with
ArrayList<Item> temp = new ArrayList<>(list.size());
// removeIf removes each closed item and, as a side effect, appends it to temp;
// ArrayList.add always returns true, so the && keeps the predicate true for them.
list.removeIf(item -> !item.isOpen() && temp.add(item));
list.addAll(temp);
etc.
Use the comparing overload that takes both a key extractor and a key comparator:
Comparator<Item> comparator =
Comparator.comparing(Item::isOpen, Boolean::compare).reversed();

Is there any performance benefit of using Arrays.stream() over iterating on an array?

I need to iterate over all the enum values, check if they were used to construct an int (called input) and, if so, add them to a Set (called usefulEnums). I can either use the Streams API or iterate over all the enums to do this task. Is there any benefit to using Arrays.stream() over the traditional approach of iterating over the values() array?
enum TestEnum { VALUE1, VALUE2, VALUE3 };
Set<TestEnum> usefulEnums = new HashSet<>();
Arrays.stream(TestEnum.values())
.filter(t -> (input & t.getValue()) != 0)
.forEach(usefulEnums::add);
for (TestEnum t : TestEnum.values()) {
if ((input & t.getValue()) != 0) {
usefulEnums.add(t);
}
}
If you care for efficiency, you should consider:
Set<TestEnum> usefulEnums = EnumSet.allOf(TestEnum.class);
usefulEnums.removeIf(t -> (input & t.getValue()) == 0);
Note that when you have to iterate over all enum constants of a type, using EnumSet.allOf(EnumType.class).stream() avoids the array creation of EnumType.values() entirely. However, most enum types don’t have enough constants for this to make a difference, and the JVM’s optimizer may remove the temporary array creation anyway.
But for this specific task, where the result is supposed to be a Set<TestEnum>, using an EnumSet instead of a HashSet may even improve subsequent operations working with the Set. Creating an EnumSet holding all constants and removing unintended constants, as in the solution above, means just initializing a long with 0b111, followed by clearing the bits of the nonmatching elements.
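A runnable version of that approach; since the question doesn’t show how each constant maps to a bit, the getValue() below is a hypothetical one-bit-per-constant mapping:

import java.util.EnumSet;
import java.util.Set;

public class EnumSetDemo {
    enum TestEnum {
        VALUE1, VALUE2, VALUE3;
        // Hypothetical mapping: each constant owns one bit of the input mask.
        int getValue() { return 1 << ordinal(); }
    }

    public static void main(String[] args) {
        int input = 0b101; // selects VALUE1 and VALUE3
        Set<TestEnum> usefulEnums = EnumSet.allOf(TestEnum.class);
        usefulEnums.removeIf(t -> (input & t.getValue()) == 0);
        System.out.println(usefulEnums); // prints [VALUE1, VALUE3]
    }
}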
For this short operation the for loop is going to be faster (nanoseconds faster), but to me the stream version is more expressive: it says exactly what is being done, so you can read the code diagonally.
Also you could collect directly to a HashSet:
Set<TestEnum> usefulEnums = Arrays.stream(TestEnum.values())
.filter(t -> (input & t.getValue()) != 0)
.collect(Collectors.toCollection(HashSet::new));
Valuable input from Holger as usual makes this even nicer:
EnumSet<TestEnum> filtered = EnumSet.allOf(TestEnum.class).stream()
.filter(t -> (input & t.getValue()) != 0)
.collect(Collectors.toCollection(() -> EnumSet.noneOf(TestEnum.class)));

What is the most elegant way to run a lambda for each element of a Java 8 stream and simultaneously count how many elements were processed?

What is the most elegant way to run a lambda for each element of a Java 8 stream and simultaneously count how many items were processed, assuming I want to process the stream only once and not mutate a variable outside the lambda?
It might be tempting to use
long count = stream.peek(action).count();
and it may appear to work. However, peek’s action will only be performed when an element is being processed, and for some streams the count may be available without processing the elements. Java 9 takes this opportunity, which makes the code above fail to perform the action for some streams.
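To illustrate the pitfall: on Java 9 and later, the following may legally print nothing before the count, because the stream implementation is allowed to compute the count directly from the source without invoking peek’s action (whether it does is implementation-dependent):

import java.util.stream.Stream;

public class PeekCountPitfall {
    public static void main(String[] args) {
        long count = Stream.of("a", "b", "c")
                           .peek(System.out::println) // may never run on Java 9+
                           .count();
        System.out.println("count = " + count); // always prints count = 3
    }
}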
You can use a collect operation that doesn’t allow such short-cuts, e.g.
long count = stream.collect(
Collectors.mapping(s -> { action.accept(s); return s; }, Collectors.counting()));
or
long count = stream.collect(Collectors.summingLong(s -> { action.accept(s); return 1; }));
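As a usage illustration of the first variant, with a made-up println consumer standing in for action:

import java.util.function.Consumer;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class CountWhileProcessing {
    public static void main(String[] args) {
        Consumer<String> action = System.out::println; // hypothetical action
        long count = Stream.of("a", "b", "c").collect(
                Collectors.mapping(s -> { action.accept(s); return s; },
                                   Collectors.counting()));
        System.out.println("processed: " + count); // prints processed: 3
    }
}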
I would go with a reduce operation of some sort, something like this:
int howMany = Stream.of("a", "vc", "ads", "ts", "ta").reduce(0, (i, string) -> {
if (string.contains("a")) {
// process a in any other way
return i+1;
}
return i;
}, (left, right) -> left + right); // combiner; only invoked when the stream is parallel
System.out.println(howMany);
This can be done with the peek function, as it returns a stream consisting of the elements of this stream, additionally performing the provided action on each element as elements are consumed from the resulting stream.
AtomicInteger counter = new AtomicInteger(0);
elements
.stream()
.peek(elem -> counter.incrementAndGet())
.forEach(doSomething()); // assumes doSomething() returns a Consumer
int elementsProcessed = counter.get();
Streams are lazily evaluated and therefore processed in a single pass: all intermediate operations are fused and executed only when a terminal operation is called, no matter how many operations you chain.
This way, you don't have to worry, because your stream will be processed at once. But the best way to perform some operation on each stream element and count the number of elements processed depends on your goal.
Anyway, the two examples below don't mutate a variable to perform that count.
Both examples create a Stream of Strings, trim() each String to remove surrounding whitespace, and then keep only the Strings that still have some content.
Example 1
Uses the peek method to perform some operation on each filtered String (in this case, just printing it), and finally uses count() to get how many Strings were processed. Note that because the pipeline contains a filter, the count cannot be computed directly from the source, so the Java 9 short-cut described in the first answer does not apply here.
Stream<String> stream =
Stream.of(" java", "", " streams", " are", " lazily ", "evaluated");
long count = stream
.map(String::trim)
.filter(s -> !s.isEmpty())
.peek(System.out::println)
.count();
System.out.printf(
"\nNumber of non-empty strings after a trim() operation: %d\n\n", count);
Example 2
Uses the collect method after filtering and mapping to gather all the processed Strings into a List. This way, the List can be printed separately, and the number of elements obtained from list.size().
Stream<String> stream =
Stream.of(" java", "", " streams", " are", " lazily ", "evaluated");
List<String> list = stream
.map(String::trim)
.filter(s -> !s.isEmpty())
.collect(Collectors.toList());
list.forEach(System.out::println);
System.out.printf(
"\nNumber of non-empty strings after a trim() operation: %d\n\n", list.size());

Java8 style for comparing arrays? (Streams and Math3)

I'm just beginning to learn Java 8 streams and Apache Commons Math3 at the same time, and I'm looking for missed opportunities to simplify my solution for comparing instances for equality. Consider this Math3 RealVector:
RealVector testArrayRealVector =
new ArrayRealVector(new double [] {1d, 2d, 3d});
and consider this member variable containing boxed doubles, plus this copy of it as an array list collection:
private final Double [] m_ADoubleArray = {13d, 14d, 15d};
private final Collection<Double> m_CollectionArrayList =
new ArrayList<>(Arrays.asList(m_ADoubleArray));
Here is my best shot at comparing these in a functional style in a JUnit class (full gist here), using protonpack from codepoetix because I couldn't find zip in the Streams library. This looks really baroque to my eyes, and I wonder whether I've missed ways to make it shorter, faster, simpler, or better; I'm just beginning to learn this stuff and don't know much.
// Make a stream out of the RealVector:
DoubleStream testArrayRealVectorStream =
Arrays.stream(testArrayRealVector.toArray());
// Check the type of that Stream
assertTrue("java.util.stream.DoublePipeline$Head" ==
testArrayRealVectorStream.getClass().getTypeName());
// Use up the stream:
assertEquals(3, testArrayRealVectorStream.count());
// Old one is used up; make another:
testArrayRealVectorStream = Arrays.stream(testArrayRealVector.toArray());
// Make a new stream from the member-var arrayList;
// do arithmetic on the copy, leaving the original unmodified:
Stream<Double> collectionStream = getFreshMemberVarStream();
// Use up the stream:
assertEquals(3, collectionStream.count());
// Stream is now used up; make new one:
collectionStream = getFreshMemberVarStream();
// Doesn't seem to be any way to use zip on the real array vector
// without boxing it.
Stream<Double> arrayRealVectorStreamBoxed =
testArrayRealVectorStream.boxed();
assertTrue(zip(
collectionStream,
arrayRealVectorStreamBoxed,
(l, r) -> Math.abs(l - r) < DELTA)
.reduce(true, (a, b) -> a && b));
where
private Stream<Double> getFreshMemberVarStream() {
return m_CollectionArrayList
.stream()
.map(x -> x - 12.0);
}
Again, here is a gist of my entire JUnit test class.
It seems you are trying to use Streams at all costs.
If I understand you correctly, you have
double[] array1=testArrayRealVector.toArray();
Double[] m_ADoubleArray = {13d, 14d, 15d};
as starting point. Then, the first thing you can do is to verify the lengths of these arrays:
assertTrue(array1.length==m_ADoubleArray.length);
assertEquals(3, array1.length);
There is no point in wrapping the arrays into a stream and calling count() and, of course, even less in wrapping an array into a collection to call stream().count() on it. Note that if your starting point is a Collection, calling size() will do as well.
Given that you already verified the length, you can simply do
IntStream.range(0, 3).forEach(ix->assertEquals(m_ADoubleArray[ix]-12, array1[ix], DELTA));
to compare the elements of the arrays.
Or, when you want to apply the arithmetic as a function:
// keep the size check as above as the length won’t change
IntToDoubleFunction f=ix -> m_ADoubleArray[ix]-12;
IntStream.range(0, 3).forEach(ix -> assertEquals(f.applyAsDouble(ix), array1[ix], DELTA));
Note that you can also just create a new array using
double[] array2=Arrays.stream(m_ADoubleArray).mapToDouble(d -> d-12).toArray();
and compare the arrays similar to above:
IntStream.range(0, 3).forEach(ix -> assertEquals(array1[ix], array2[ix], DELTA));
or just using
assertArrayEquals(array1, array2, DELTA);
as now both arrays have the same type.
Don’t worry about the temporary three-element array holding the intermediate result. All other approaches consume far more memory…
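Putting the pieces together, a minimal sketch of the recommended approach (assuming JUnit 4's assertArrayEquals on the classpath; the array contents and DELTA are made up):

import static org.junit.Assert.assertArrayEquals;

import java.util.Arrays;

public class ArrayComparisonSketch {
    private static final double DELTA = 1e-9; // hypothetical tolerance

    public static void main(String[] args) {
        double[] array1 = {1d, 2d, 3d};        // stands in for testArrayRealVector.toArray()
        Double[] boxedArray = {13d, 14d, 15d}; // stands in for m_ADoubleArray

        // Apply the arithmetic while unboxing, then compare primitive arrays:
        double[] array2 = Arrays.stream(boxedArray)
                                .mapToDouble(d -> d - 12)
                                .toArray();
        assertArrayEquals(array1, array2, DELTA);
        System.out.println("arrays match within " + DELTA);
    }
}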

Scala: Mutable vs. Immutable Object Performance - OutOfMemoryError

I wanted to compare the performance characteristics of immutable.Map and mutable.Map in Scala for a similar operation (namely, merging many maps into a single one. See this question). I have what appear to be similar implementations for both mutable and immutable maps (see below).
As a test, I generated a List containing 1,000,000 single-item Map[Int, Int] and passed this list into the functions I was testing. With sufficient memory, the results were unsurprising: ~1200ms for mutable.Map, ~1800ms for immutable.Map, and ~750ms for an imperative implementation using mutable.Map -- not sure what accounts for the huge difference there, but feel free to comment on that, too.
What did surprise me a bit, perhaps because I'm being a bit thick, is that with the default run configuration in IntelliJ 8.1, both mutable implementations hit an OutOfMemoryError, but the immutable collection did not. The immutable test did run to completion, but it did so very slowly; it took about 28 seconds. When I increased the max JVM memory (to about 200MB, not sure where the threshold is), I got the results above.
Anyway, here's what I really want to know:
Why do the mutable implementations run out of memory, but the immutable implementation does not? I suspect that the immutable version allows the garbage collector to run and free up memory before the mutable implementations do -- and all of those garbage collections explain the slowness of the immutable low-memory run -- but I'd like a more detailed explanation than that.
Implementations below. (Note: I don't claim that these are the best implementations possible. Feel free to suggest improvements.)
def mergeMaps[A,B](func: (B,B) => B)(listOfMaps: List[Map[A,B]]): Map[A,B] =
(Map[A,B]() /: (for (m <- listOfMaps; kv <-m) yield kv)) { (acc, kv) =>
acc + (if (acc.contains(kv._1)) kv._1 -> func(acc(kv._1), kv._2) else kv)
}
def mergeMutableMaps[A,B](func: (B,B) => B)(listOfMaps: List[mutable.Map[A,B]]): mutable.Map[A,B] =
(mutable.Map[A,B]() /: (for (m <- listOfMaps; kv <- m) yield kv)) { (acc, kv) =>
acc + (if (acc.contains(kv._1)) kv._1 -> func(acc(kv._1), kv._2) else kv)
}
def mergeMutableImperative[A,B](func: (B,B) => B)(listOfMaps: List[mutable.Map[A,B]]): mutable.Map[A,B] = {
val toReturn = mutable.Map[A,B]()
for (m <- listOfMaps; kv <- m) {
if (toReturn contains kv._1) {
toReturn(kv._1) = func(toReturn(kv._1), kv._2)
} else {
toReturn(kv._1) = kv._2
}
}
toReturn
}
Well, it really depends on the actual type of Map you are using, probably HashMap. Now, mutable structures like that gain performance by pre-allocating the memory they expect to use. You are joining one million maps, so the final map is bound to be somewhat big. Let's see how these key/values get added:
protected def addEntry(e: Entry) {
val h = index(elemHashCode(e.key))
e.next = table(h).asInstanceOf[Entry]
table(h) = e
tableSize = tableSize + 1
if (tableSize > threshold)
resize(2 * table.length)
}
See the 2 * in the resize line? The mutable HashMap grows by doubling each time it runs out of space, while the immutable one is pretty conservative in memory usage (though existing keys will usually occupy twice the space when updated).
Now, as for other performance problems, you are creating a list of keys and values in the first two versions. That means that, before you join any maps, you already have each Tuple2 (the key/value pairs) in memory twice! Plus the overhead of List, which is small, but we are talking about more than one million elements times the overhead.
You may want to use a projection, which avoids that. Unfortunately, projection is based on Stream, which isn't very reliable for our purposes on Scala 2.7.x. Still, try this instead:
for (m <- listOfMaps.projection; kv <- m) yield kv
A Stream doesn't compute a value until it is needed. The garbage collector ought to collect the unused elements as well, as long as you don't keep a reference to the Stream's head, which seems to be the case in your algorithm.
EDIT
To complement that, a for/yield comprehension takes one or more collections and returns a new collection. Whenever it makes sense, the returned collection is of the same type as the original collection. So, for example, in the following code, the for-comprehension creates a new list, which is then stored in l2. It is not val l2 = that creates the new list, but the for-comprehension.
val l = List(1,2,3)
val l2 = for (e <- l) yield e*2
Now, let's look at the code being used in the first two algorithms (minus the mutable keyword):
(Map[A,B]() /: (for (m <- listOfMaps; kv <-m) yield kv))
The foldLeft operator, here written with its /: synonym, will be invoked on the object returned by the for-comprehension. Remember that a : at the end of an operator inverts the order of the object and the parameters.
Now, let's consider what object is this, on which foldLeft is being called. The first generator in this for-comprehension is m <- listOfMaps. We know that listOfMaps is a collection of type List[X], where X isn't really relevant here. The result of a for-comprehension on a List is always another List. The other generators aren't relevant.
So, you take this List, get all the key/values inside each Map which is a component of this List, and make a new List with all of that. That's why you are duplicating everything you have.
(in fact, it's even worse than that, because each generator creates a new collection; the collections created by the second generator are just the size of each element of listOfMaps though, and are immediately discarded after use)
The next question -- actually, the first one, but it was easier to invert the answer -- is how the use of projection helps.
When you call projection on a List, it returns a new object, of type Stream (on Scala 2.7.x). At first you may think this will only make things worse, because you'll now have three copies of the List, instead of a single one. But a Stream is not pre-computed; it is lazily computed.
What that means is that the resulting object, the Stream, isn't a copy of the List, but, rather, a function that can be used to compute the Stream when required. Once computed, the result will be kept so that it doesn't need to be computed again.
Also, map, flatMap and filter of a Stream all return a new Stream, which means you can chain them all together without making a single copy of the List which created them. Since for-comprehensions with yield use these very functions, the use of Stream inside them prevents unnecessary copies of the data.
Now, suppose you wrote something like this:
val kvs = for (m <- listOfMaps.projection; kv <-m) yield kv
(Map[A,B]() /: kvs) { ... }
In this case you aren't gaining anything. After assigning the Stream to kvs, the data hasn't been copied yet. Once the second line is executed, though, kvs will have computed each of its elements, and, therefore, will hold a complete copy of the data.
Now consider the original form:
(Map[A,B]() /: (for (m <- listOfMaps.projection; kv <-m) yield kv))
In this case, the Stream is used at the same time it is computed. Let's briefly look at how foldLeft for a Stream is defined:
override final def foldLeft[B](z: B)(f: (B, A) => B): B = {
if (isEmpty) z
else tail.foldLeft(f(z, head))(f)
}
If the Stream is empty, just return the accumulator. Otherwise, compute a new accumulator (f(z, head)) and then pass it and the function to the tail of the Stream.
Once f(z, head) has executed, though, there will be no remaining reference to the head. Or, in other words, nothing anywhere in the program will be pointing to the head of the Stream, and that means the garbage collector can collect it, thus freeing memory.
The end result is that each element produced by the for-comprehension will exist just briefly, while you use it to compute the accumulator. And this is how you save keeping a copy of your whole data.
Finally, there is the question of why the third algorithm does not benefit from it. Well, the third algorithm does not use yield, so no copy of any data whatsoever is being made. In this case, using projection only adds an indirection layer.
