How to tell that the limit was exceeded when using limit() on a range of items from a stream in Java 8?

How can I know, without using another condition comparing map.size() with limitValue, that the limit was exceeded when my stream iterated?
Here,
for limitValue = 3, it should return false.
for limitValue = 4, it should return true.
I cannot use an outside int field, as it must be final to be used inside a lambda.
import java.util.*;
import java.util.stream.*;

public class Test {
    public static void main(String[] args) throws Exception {
        Map<Integer, String> map = new HashMap<>();
        map.put(1, "foo");
        map.put(2, "bar");
        map.put(3, "baz");
        int limitValue = 3;
        String result = map.entrySet()
                .stream()
                .limit(limitValue)
                .map(entry -> entry.getKey() + " - " + entry.getValue())
                .collect(Collectors.joining(", "));
        System.out.println(result);
    }
}

I cannot use an outside int field, as it must be final to be used inside a lambda.
Yes, this is because, within a lambda expression, you can only reference local variables whose value doesn't change (in Java, they must be final or effectively final).
This is a good thing in a way, as mutating variables inside a lambda is not thread safe when executing in parallel.
So the compiler helps you prevent such scenarios at compile time by allowing only final or effectively final variables to be used in lambdas.
Note that this restriction only holds for local variables.
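For illustration, here is a minimal sketch of the restriction (not code from the question); the comment quotes the error javac reports:

int count = 0;
// Does not compile: "local variables referenced from a lambda expression
// must be final or effectively final"
map.forEach((k, v) -> count++);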
Anyhow, my advice is not to mutate variables that are not solely contained within a given function, as that introduces a side effect, and side effects in behavioral parameters to stream operations are, in general, discouraged.
Keep things simple and proceed with the approach below.
boolean exceeded = limitValue > map.size();
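For example, a minimal sketch of how this check combines with the pipeline from the question (nothing needs to be counted, and nothing is mutated inside the lambda):

int limitValue = 4;
boolean exceeded = limitValue > map.size(); // true for 4, false for 3 (the map has 3 entries)
String result = map.entrySet()
        .stream()
        .limit(limitValue)
        .map(entry -> entry.getKey() + " - " + entry.getValue())
        .collect(Collectors.joining(", "));
System.out.println(exceeded + ": " + result);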

Related

How to return the count while using nested forEach loops in a stream

I am using Java 8 streams to iterate over two lists; one list contains some custom objects and the other contains strings.
With this, I have to call a method, passing a custom object and a string as input, and then I have to get the count.
This is what I tried:
public int returnCode() {
    /* int count = 0;
    list.forEach(x -> {
        list2.forEach(p -> {
            count += myDao.begin(conn, x.getCode(), p);
        });
        return count;
    }); */
}
The compiler is giving an error that count should be final.
Can anyone show me a better way to do this?
What you're attempting to do is not possible, as local variables accessed from a lambda must be final or effectively final, i.e. variables whose value does not change.
You're attempting to change the value of count in the lambda passed to forEach, hence the compilation error.
To replicate your exact code using the stream API, it would be:
int count = list.stream()
        .limit(1)
        .flatMapToInt(x -> list2.stream().mapToInt(p -> myDao.begin(conn, x.getCode(), p)))
        .sum();
However, if you want to iterate over the entire sequence in list, and not just the first element, then you can proceed with the following:
int count = list.stream()
        .flatMapToInt(x -> list2.stream().mapToInt(p -> myDao.begin(conn, x.getCode(), p)))
        .sum();
Lambdas mainly substitute anonymous inner classes. Inside an anonymous inner class you can access only final local variables, and the same holds true for lambda expressions. The local variable is copied when the JVM creates the lambda instance, hence it would be counterintuitive to allow updates to it. So declaring the variable as final would solve the compile error, but then you would no longer be able to do this, leading to another pitfall:
count += myDao.begin(conn, x.getCode(), p);
So that approach does not work with lambdas. This would be one way of doing it:
final int count = customObjects.stream()
        .mapToInt(co -> strings.stream().mapToInt(s -> myDao.begin(conn, co.getCode(), s)).sum())
        .sum();

Why is it possible initialize java.util.function.Consumer with lambda that returns value? [duplicate]

I am confused by the following code
import java.util.function.Consumer;
import java.util.function.Function;

class LambdaTest {
    public static void main(String[] args) {
        Consumer<String> lambda1 = s -> {};
        Function<String, String> lambda2 = s -> s;
        Consumer<String> lambda3 = LambdaTest::consume; // but s -> s doesn't work!
        Function<String, String> lambda4 = LambdaTest::consume;
    }

    static String consume(String s) { return s; }
}
I would have expected the assignment of lambda3 to fail, as my consume method does not match the accept method in the Consumer interface - the return types are different, String vs. void.
Moreover, I always thought that there is a one-to-one relationship between Lambda expressions and method references but this is clearly not the case as my example shows.
Could somebody explain to me what is happening here?
As Brian Goetz pointed out in a comment, the basis for the design decision was to allow adapting a method to a functional interface the same way you can call the method, i.e. you can call every value returning method and ignore the returned value.
When it comes to lambda expressions, things get a bit more complicated. There are two forms of lambda expressions, (args) -> expression and (args) -> { statements* }.
Whether the second form is void compatible, depends on the question whether no code path attempts to return a value, e.g. () -> { return ""; } is not void compatible, but expression compatible, whereas () -> {} or () -> { return; } are void compatible. Note that () -> { for(;;); } and () -> { throw new RuntimeException(); } are both, void compatible and value compatible, as they don’t complete normally and there’s no return statement.
The form (arg) -> expression is value compatible if the expression evaluates to a value. But there are also expressions, which are statements at the same time. These expressions may have a side effect and therefore can be written as stand-alone statement for producing the side effect only, ignoring the produced result. Similarly, the form (arg) -> expression can be void compatible, if the expression is also a statement.
An expression of the form s -> s can’t be void compatible as s is not a statement, i.e. you can’t write s -> { s; } either. On the other hand s -> s.toString() can be void compatible, because method invocations are statements. Similarly, s -> i++ can be void compatible as increments can be used as a statement, so s -> { i++; } is valid too. Of course, i has to be a field for this to work, not a local variable.
The Java Language Specification §14.8. Expression Statements lists all expressions which may be used as statements. Besides the already mentioned method invocations and increment/ decrement operators, it names assignments and class instance creation expressions, so s -> foo=s and s -> new WhatEver(s) are void compatible too.
As a side note, the form (arg) -> methodReturningVoid(arg) is the only expression form that is not value compatible.
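To make the distinction concrete, here is a small sketch collecting the cases described above (the names are made up for illustration); the commented-out line is the one the compiler rejects:

import java.util.function.Consumer;
import java.util.function.Function;

class VoidCompatibility {
    static int i;
    static String foo;

    public static void main(String[] args) {
        Consumer<String> c1 = s -> s.toString();         // method invocation: void compatible
        Consumer<String> c2 = s -> i++;                  // increment: void compatible
        Consumer<String> c3 = s -> foo = s;              // assignment: void compatible
        Consumer<String> c4 = s -> new StringBuilder(s); // instance creation: void compatible
        // Consumer<String> c5 = s -> s;                 // does not compile: s is not a statement
        Function<String, String> f = s -> s;             // value compatible
    }
}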
The consume(String) method matches the Consumer<String> interface because it consumes a String - the fact that it returns a value is irrelevant, as in this case it is simply ignored (the Consumer interface does not expect any return value at all).
It must have been a design choice, and basically a matter of utility: imagine how many methods would have to be refactored or duplicated to match the needs of functional interfaces like Consumer or even the very common Runnable. (Note that you can pass any method that consumes no parameters as a Runnable to an Executor, for example.)
Even methods like java.util.List#add(Object) return a value: boolean. Being unable to pass such method references just because they return something (that is mostly irrelevant in many cases) would be rather annoying.
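A short sketch of that last point (an illustration of the rule, not code from the question):

import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

class IgnoredReturnValues {
    public static void main(String[] args) {
        List<Object> list = new ArrayList<>();
        Consumer<Object> sink = list::add; // add returns boolean; the result is simply discarded
        sink.accept("hello");
        Runnable r = list::isEmpty;        // takes no parameters; the boolean result is ignored
        r.run();
    }
}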

Java 8: Extract predicates as fields or methods?

What is the cleaner way of extracting predicates which will have multiple uses: methods or class fields?
The two examples:
1. Class field

void someMethod() {
    IntStream.range(1, 100)
            .filter(isOverFifty)
            .forEach(System.out::println);
}

private IntPredicate isOverFifty = number -> number > 50;
2. Method

void someMethod() {
    IntStream.range(1, 100)
            .filter(isOverFifty())
            .forEach(System.out::println);
}

private IntPredicate isOverFifty() {
    return number -> number > 50;
}
For me, the field way looks a little bit nicer, but is this the right way? I have my doubts.
Generally you cache things that are expensive to create and these stateless lambdas are not. A stateless lambda will have a single instance created for the entire pipeline (under the current implementation). The first invocation is the most expensive one - the underlying Predicate implementation class will be created and linked; but this happens only once for both stateless and stateful lambdas.
A stateful lambda will use a different instance for each element and it might make sense to cache those, but your example is stateless, so I would not.
If you still want that (for readability purposes, I assume), I would put it in a class called, let's say, Predicates. It would be reusable across different classes as well, something like this:
public final class Predicates {

    private Predicates() {
    }

    public static IntPredicate isOverFifty() {
        return number -> number > 50;
    }
}
You should also notice that while Predicates.isOverFifty used inside a Stream and x -> x > 50 are semantically the same, they will have different memory usage.
In the first case, only a single instance (and class) will be created and served to all clients; the second (x -> x > 50) will create not only a different instance, but also a different class for each of its clients (think of the same expression used in different places inside your application). This happens because the linkage happens per CallSite - and in the second case the CallSite is always different.
But that is something you should not rely on (and probably not even consider) - these objects and classes are fast to build and fast to remove by the GC - whatever fits your needs, use that.
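For completeness, usage of the extracted predicate would then look like this (a small sketch based on the Predicates class above):

// The stateless lambda behind isOverFifty() is created once per call site
// and reused, as described above.
IntStream.range(1, 100)
        .filter(Predicates.isOverFifty())
        .forEach(System.out::println);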
To answer: it helps if you expand those lambda expressions into old-fashioned Java. You can then see that these are the two ways we have always written such code. So the answer is, it all depends on how you write a particular code segment.
private IntPredicate isOverFifty = new IntPredicate() {
    @Override
    public boolean test(int number) {
        return number > 50;
    }
};

private IntPredicate isOverFifty() {
    return new IntPredicate() {
        @Override
        public boolean test(int number) {
            return number > 50;
        }
    };
}
1) In the field case, a predicate is allocated for every new instance of your object. Not a big deal if you have a few instances (like a service), but if this is a value object of which there can be N instances, this is not a good solution. Also keep in mind that someMethod() may never be called at all. One possible solution is to make the predicate a static field.
2) In the method case, the predicate is created anew on every someMethod() call and discarded by the GC afterwards.
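The static-field variant mentioned in 1) would look like this (a one-line sketch):

// Shared by all instances; created once when the class is initialized.
private static final IntPredicate IS_OVER_FIFTY = number -> number > 50;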

Why filter with side effects performs better than a Spliterator based implementation?

Regarding the question How to skip even lines of a Stream obtained from Files.lines, I followed the accepted answer's approach, implementing my own filterEven() method based on the Spliterator<T> interface, e.g.:
public static <T> Stream<T> filterEven(Stream<T> src) {
    Spliterator<T> iter = src.spliterator();
    AbstractSpliterator<T> res = new AbstractSpliterator<T>(Long.MAX_VALUE, Spliterator.ORDERED) {
        @Override
        public boolean tryAdvance(Consumer<? super T> action) {
            iter.tryAdvance(item -> {}); // discard
            return iter.tryAdvance(action); // use
        }
    };
    return StreamSupport.stream(res, false);
}
which I can use in the following way:

Stream<String> lines = Files.lines(src);
Stream<DomainObject> res = filterEven(lines)
        .map(line -> toDomainObject(line));
However, measuring the performance of this approach against the next one, which uses a filter() with side effects, I noticed that the next one performs better:

final int[] counter = {0};
final Predicate<String> isEvenLine = item -> ++counter[0] % 2 == 0;
Stream<DomainObject> res = Files.lines(src)
        .filter(line -> isEvenLine.test(line))
        .map(line -> toDomainObject(line));
I tested the performance with JMH, and I am not including the file load in the benchmark. I previously load it into an array. Then each benchmark starts by creating a Stream<String> from the previous array, then filtering even lines, then applying a mapToInt() to extract the value of an int field, and finally a max() operation. Here is one of the benchmarks (you can check the whole program here, and here you have the data file with about 186 lines):
@Benchmark
public int maxTempFilterEven(DataSource src) {
    Stream<String> content = Arrays.stream(src.data)
            .filter(s -> s.charAt(0) != '#') // Filter comments
            .skip(1);                        // Skip line: Not available
    return filterEven(content)               // Filter daily info and skip hourly
            .mapToInt(line -> parseInt(line.substring(14, 16)))
            .max()
            .getAsInt();
}
Why does the filter() approach have better performance (~80 ops/ms) than the filterEven() one (~50 ops/ms)?
Intro
I think I know the reason, but unfortunately I have no idea how to improve the performance of the Spliterator-based solution (at least without rewriting the whole Streams API feature).
Sidenote 1: performance was not the most important design goal when the Stream API was designed. If performance is critical, most probably rewriting the code without the Stream API will make it faster. (For example, the Stream API unavoidably increases memory allocation and thus GC pressure.) On the other hand, in most scenarios the Stream API provides a nicer higher-level API at the cost of a relatively small performance degradation.
Part 1 or Short theoretical answer
Stream is designed to implement a kind of internal iteration as the main means of consumption, and external iteration (i.e. Spliterator-based) is an additional means that is kind of "emulated". Thus external iteration involves some overhead. Laziness adds some limits to the efficiency of external iteration, and the need to support flatMap makes it necessary to use some kind of dynamic buffer in this process.
Sidenote 2: In some cases Spliterator-based iteration might be as fast as the internal iteration (i.e. filter in this case). Particularly, this is so when you create a Spliterator directly from the data-containing Stream. To see it, you can modify your tests to materialize your first filter into a String array:
String[] filteredData = Arrays.stream(src.data)
        .filter(s -> s.charAt(0) != '#') // Filter comments
        .skip(1)
        .toArray(String[]::new);
and then compare the performance of maxTempFilter and maxTempFilterEven modified to accept that pre-filtered String[] filteredData. If you want to know why this is so, you should probably read the rest of this long answer, or at least Part 2.
Part 2 or Longer theoretical answer:
Streams were designed to be mainly consumed as a whole by some terminal operation. Iterating over elements one by one, although supported, is not designed as the main way to consume streams.
Note that using the "functional" Stream API such as map, flatMap, filter, reduce, and collect, you can't say at some step "I have had enough data, stop iterating over the source and pushing values". You can discard some incoming data (as filter does) but you can't stop the iteration. (take and skip transformations are actually implemented using Spliterator inside; and anyMatch, allMatch, noneMatch, findFirst, findAny, etc. use the non-public API j.u.s.Sink.cancellationRequested; they are also simpler because there can't be several terminal operations.)
If all transformations in the pipeline are synchronous, you can combine them into a single aggregated function (Consumer) and call it in a simple loop (optionally splitting the loop execution over several threads). This is what my simplified version of the state-based filter represents (see the code in the Show me some code section). It gets a bit more complicated if there is a flatMap in the pipeline, but the idea is still the same.
Spliterator-based transformation is fundamentally different because it adds an asynchronous consumer-driven step to the pipeline. Now the Spliterator rather than the source Stream drives the iteration process. If you ask for a Spliterator directly on the source Stream, it might be able to return you some implementation that just iterates over its internal data structure, and this is why materializing the pre-filtered data should remove the performance difference. However, if you create a Spliterator for some non-empty pipeline, there is no (simple) choice other than asking the source to push elements one by one through the pipeline until some element passes all the filters (see also the second example in the Show me some code section). The fact that source elements are pushed one by one rather than in batches is a consequence of the fundamental decision to make Streams lazy. The need for a buffer instead of just one element is a consequence of the support for flatMap: pushing one element from the source can produce many elements for the Spliterator.
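A tiny sketch of that difference (note: the concrete spliterator classes printed here are implementation details of the current JDK, not something the specification guarantees):

import java.util.Arrays;
import java.util.Spliterator;

class SpliteratorKinds {
    public static void main(String[] args) {
        String[] data = {"a", "b", "c"};
        // Directly from a source stream: typically the array's own spliterator.
        Spliterator<String> direct = Arrays.stream(data).spliterator();
        // From a non-empty pipeline: a buffering wrapper that pushes source
        // elements through the filter one by one.
        Spliterator<String> wrapped = Arrays.stream(data)
                .filter(s -> !s.isEmpty())
                .spliterator();
        System.out.println(direct.getClass() + " vs " + wrapped.getClass());
    }
}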
Part 3 or Show me some code
This part tries to back up what was described in the "theoretical" parts with code (both links to the real code and simulated code).
First of all, you should know that the current Stream API implementation accumulates non-terminal (intermediate) operations into a single lazy pipeline (see j.u.s.AbstractPipeline and its children such as j.u.s.ReferencePipeline). Then, when the terminal operation is applied, all the elements from the original Stream are "pushed" through the pipeline.
What you see is the result of two things:
- the fact that stream pipelines are different for cases when you have a Spliterator-based step inside,
- the fact that your OddLines is not the first step in the pipeline.
The code with a stateful filter is more or less similar to the following straightforward code:
static int similarToFilter(String[] data) {
    final int[] counter = {0};
    final Predicate<String> isEvenLine = item -> ++counter[0] % 2 == 0;
    int skip = 1;
    boolean reduceEmpty = true;
    int reduceState = 0;
    for (String outerEl : data) {
        if (outerEl.charAt(0) != '#') {
            if (skip > 0)
                skip--;
            else {
                if (isEvenLine.test(outerEl)) {
                    int intEl = parseInt(outerEl.substring(14, 16));
                    if (reduceEmpty) {
                        reduceState = intEl;
                        reduceEmpty = false;
                    } else {
                        reduceState = Math.max(reduceState, intEl);
                    }
                }
            }
        }
    }
    return reduceState;
}
Note that this is effectively a single loop with some calculations (filtering/transformations) inside.
When you add a Spliterator into the pipeline, on the other hand, things change significantly, and even with simplifications, code that is reasonably similar to what actually happens becomes much larger, such as:
interface Sp<T> {
    public boolean tryAdvance(Consumer<? super T> action);
}

static class ArraySp<T> implements Sp<T> {
    private final T[] array;
    private int pos;

    public ArraySp(T[] array) {
        this.array = array;
    }

    @Override
    public boolean tryAdvance(Consumer<? super T> action) {
        if (pos < array.length) {
            action.accept(array[pos]);
            pos++;
            return true;
        } else {
            return false;
        }
    }
}

static class WrappingSp<T> implements Sp<T>, Consumer<T> {
    private final Sp<T> sourceSp;
    private final Predicate<T> filter;
    private final ArrayList<T> buffer = new ArrayList<T>();
    private int pos;

    public WrappingSp(Sp<T> sourceSp, Predicate<T> filter) {
        this.sourceSp = sourceSp;
        this.filter = filter;
    }

    @Override
    public void accept(T t) {
        buffer.add(t);
    }

    @Override
    public boolean tryAdvance(Consumer<? super T> action) {
        while (true) {
            if (pos >= buffer.size()) {
                pos = 0;
                buffer.clear();
                sourceSp.tryAdvance(this);
            }
            // failed to fill buffer
            if (buffer.size() == 0)
                return false;
            T nextElem = buffer.get(pos);
            pos++;
            if (filter.test(nextElem)) {
                action.accept(nextElem);
                return true;
            }
        }
    }
}

static class OddLineSp<T> implements Sp<T>, Consumer<T> {
    private Sp<T> sourceSp;

    public OddLineSp(Sp<T> sourceSp) {
        this.sourceSp = sourceSp;
    }

    @Override
    public boolean tryAdvance(Consumer<? super T> action) {
        if (sourceSp == null)
            return false;
        sourceSp.tryAdvance(this);
        if (!sourceSp.tryAdvance(action)) {
            sourceSp = null;
        }
        return true;
    }

    @Override
    public void accept(T t) {
    }
}

static class ReduceIntMax {
    boolean reduceEmpty = true;
    int reduceState = 0;

    public int getReduceState() {
        return reduceState;
    }

    public void accept(int t) {
        if (reduceEmpty) {
            reduceEmpty = false;
            reduceState = t;
        } else {
            reduceState = Math.max(reduceState, t);
        }
    }
}

static int similarToSpliterator(String[] data) {
    ArraySp<String> src = new ArraySp<>(data);
    int[] skip = new int[1];
    skip[0] = 1;
    WrappingSp<String> firstFilter = new WrappingSp<String>(src, (s) -> {
        if (s.charAt(0) == '#')
            return false;
        if (skip[0] != 0) {
            skip[0]--;
            return false;
        }
        return true;
    });
    OddLineSp<String> oddLines = new OddLineSp<>(firstFilter);
    final ReduceIntMax reduceIntMax = new ReduceIntMax();
    while (oddLines.tryAdvance(s -> {
        int intValue = parseInt(s.substring(14, 16));
        reduceIntMax.accept(intValue);
    })) ; // do nothing in the loop body
    return reduceIntMax.getReduceState();
}
This code is larger because the logic is impossible (or at least very hard) to represent without some non-trivial stateful callbacks inside the loop. Here the interface Sp is a mix of the j.u.s.Stream and j.u.Spliterator interfaces.
The class ArraySp represents the result of Arrays.stream.
The class WrappingSp is similar to j.u.s.StreamSpliterators.WrappingSpliterator, which in the real code represents an implementation of the Spliterator interface for any non-empty pipeline, i.e. a Stream with at least one intermediate operation applied to it (see the j.u.s.AbstractPipeline.spliterator method). In my code I merged it with a StatelessOp subclass and put there the logic responsible for the filter method implementation. Also, for simplicity, I implemented skip using filter.
OddLineSp corresponds to your OddLines and its resulting Stream.
ReduceIntMax represents the ReduceOps terminal operation for Math.max over ints.
So what's important in this example? The important thing here is that since you first filter your original stream, your OddLineSp is created from a non-empty pipeline, i.e. from a WrappingSp. And if you take a closer look at WrappingSp, you'll notice that every time tryAdvance is called, it delegates the call to the sourceSp and accumulates the result(s) into a buffer. Moreover, since you have no flatMap in the pipeline, elements are copied into the buffer one by one. I.e. every time WrappingSp.tryAdvance is called, it will call ArraySp.tryAdvance, get back exactly one element (via the callback), and pass it further to the consumer provided by the caller (unless the element doesn't match the filter, in which case ArraySp.tryAdvance will be called again and again; but still the buffer is never filled with more than one element at a time).
Sidenote 3: If you want to look at the real code, the most interesting places are j.u.s.StreamSpliterators.WrappingSpliterator.tryAdvance, which calls j.u.s.StreamSpliterators.AbstractWrappingSpliterator.doAdvance, which in turn calls j.u.s.StreamSpliterators.AbstractWrappingSpliterator.fillBuffer, which in turn calls the pusher that is initialized in j.u.s.StreamSpliterators.WrappingSpliterator.initPartialTraversalState.
So the main thing that's hurting performance is this copying into the buffer.
Unfortunately for us, the usual Java developers, the current implementation of the Stream API is pretty much closed, and you can't modify only some aspects of the internal behavior using inheritance or composition.
You may use some reflection-based hacking to make the copying-to-buffer more efficient for your specific case and gain some performance (but sacrifice the laziness of the Stream), but you can't avoid this copying altogether, and thus Spliterator-based code will be slower anyway.
Going back to the example from Sidenote 2: the Spliterator-based test with materialized filteredData works faster because there is no WrappingSp in the pipeline before OddLineSp, and thus there will be no copying into an intermediate buffer.

Accessing public static final field using JoSQL

I've been using JoSQL for quite a few months now and today I came across a problem I am not sure how to solve. I probably could solve it by binding variables/placeholders, but I'd like to include the fields in the query.
SELECT * FROM ...MyObject WHERE getType != com.mypackage.myclass.TYPE_A
This is the query that I have. TYPE_A is a public static final int attribute in the "myclass" class. Accessing methods (such as getType) is easy, because getType is expected to be a method of MyObject - except that I do not write round brackets after it (this is how JoSQL works, as far as I know).
Does anyone happen to have an idea how to access a public static final field?
JoSQL uses gentlyweb-utils; it seems to be some sort of Accessor/Getter/Setter framework. I'd love to access that attribute without having to bind variables, but I haven't been able to do so.
Thanks for your help in advance! I really appreciate it.
I think I have figured something out. First: it seems it is not possible to access the static variables, for whatever reason. I've used the following approach to solve my issue:
- create a method which picks up a given JoSQL statement,
- mark the constants you want to replace, say as "{?FULL_PACKAGE_AND$CONSTANT}",
- use reflection to determine the variable column as well as the value from the field,
- iteratively replace the statement until no "{?" markers are left.
Example:
The JoSQL statement looks like this:
(isWeapon = TRUE AND getItem.getType2 = {?com.l2jserver.gameserver.model.items.L2Item$TYPE2_WEAPON})
Method using the query-object:
final Query query = DataLayer.createJoSqlQuery(joSql);
Method (pre)processing the JoSQL-statement:
final Query query = new Query();
int variableColumn = 0;
while (joSql.indexOf("{?") > -1) {
    variableColumn++;
    final int startIndex = joSql.indexOf("{?");
    final int endIndex = joSql.indexOf("}", startIndex);
    final String value = joSql.substring(startIndex + 2, endIndex);
    try {
        final Object variableValue = Class.forName(value.split("\\$")[0])
                .getField(value.split("\\$")[1])
                .get(null);
        query.setVariable(variableColumn, variableValue);
        joSql = joSql.replace("{?" + value + "}", "?");
    }
    catch (final Exception e) {
        e.printStackTrace();
    }
}
query.parse(joSql);
return query;
The JoSQL statement preprocessing method basically iterates through a given JoSQL statement and checks whether it contains the string "{?". If it does, it does some copy and paste (note the dollar symbol right in front of the constant name).
Finally, it creates the objects and sets them using something similar to a prepared statement's setObject method. In the end it just replaces the values within the JoSQL statement with question marks ("?") and sets a corresponding object in the newly created Query object, which is later used to retrieve information.
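Applied to the original question, the query from the top of this post would then be written with the marker syntax and run through the preprocessing method (a sketch; DataLayer.createJoSqlQuery is the wrapper shown above, and the class and constant names are the ones from the question):

// Hypothetical usage: the {?...} marker is replaced by a bound variable.
final Query query = DataLayer.createJoSqlQuery(
        "SELECT * FROM com.mypackage.MyObject"
        + " WHERE getType != {?com.mypackage.myclass$TYPE_A}");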
