Java 8 method chaining for a single object without Stream/Optional?

I felt it easiest to capture my question with the example below. I would like to apply multiple transformations to an object (in this case they all return the same class, Number, but not necessarily). With an Optional (method 3) or Stream (method 4), I can use .map elegantly and legibly. However, with a single object I have to either wrap it in an Optional just to use the .map chaining (with a .get() at the end), or use Stream.of() with a findFirst at the end, which seems like unnecessary work.
[My Preference]: I prefer methods 3 & 4, as they seem more readable than the pre-Java 8 options, methods 1 & 2.
[Question]: Is there a better/neater/more elegant way of achieving the same result than the methods used here? If not, which method would you use?
import java.util.ArrayList;
import java.util.List;
import java.util.Optional;
import java.util.stream.Collectors;
import java.util.stream.Stream;
public class Tester {
static class Number {
private final int value;
private Number(final int value) {
this.value = value;
}
public int getValue() {
return value;
}
@Override
public String toString() {
return String.valueOf(value);
}
}
private static Number add(final Number number, final int val) {
return new Number(number.getValue() + val);
}
private static Number multiply(final Number number, final int val) {
return new Number(number.getValue() * val);
}
private static Number subtract(final Number number, final int val) {
return new Number(number.getValue() - val);
}
public static void main(final String[] args) {
final Number input = new Number(1);
System.out.println("output1 = " + method1(input)); // 100
System.out.println("output2 = " + method2(input)); // 100
System.out.println("output3 = " + method3(input)); // 100
System.out.println("output4 = " + method4(input)); // 100
processAList();
}
// Processing an object - Method 1
private static Number method1(final Number input) {
return subtract(multiply(add(input, 10), 10), 10);
}
// Processing an object - Method 2
private static Number method2(final Number input) {
final Number added = add(input, 10);
final Number multiplied = multiply(added, 10);
return subtract(multiplied, 10);
}
// Processing an object - Method 3 (Contrived use of Optional)
private static Number method3(final Number input) {
return Optional.of(input)
.map(number -> add(number, 10))
.map(number -> multiply(number, 10))
.map(number -> subtract(number, 10)).get();
}
// Processing an object - Method 4 (Contrived use of Stream)
private static Number method4(final Number input) {
return Stream.of(input)
.map(number -> add(number, 10))
.map(number -> multiply(number, 10))
.map(number -> subtract(number, 10))
.findAny().get();
}
// Processing a list (naturally uses the Stream advantage)
private static void processAList() {
final List<Number> inputs = new ArrayList<>();
inputs.add(new Number(1));
inputs.add(new Number(2));
final List<Number> outputs = inputs.stream()
.map(number -> add(number, 10))
.map(number -> multiply(number, 10))
.map(number -> subtract(number, 10))
.collect(Collectors.toList());
System.out.println("outputs = " + outputs); // [100, 110]
}
}

The solution is to build your methods into your Number class. For example:
static class Number {
// instance variable, constructor and getter unchanged
public Number add(final int val) {
return new Number(getValue() + val);
}
// multiply() and subtract() in the same way
// toString() unchanged
}
Now your code becomes very simple and readable:
private static Number method5(final Number input) {
return input
.add(10)
.multiply(10)
.subtract(10);
}
You may even write the return statement on one line if you prefer:
return input.add(10).multiply(10).subtract(10);
Edit: If you can't change the Number class, my personal taste would be for method2. Using Optional or Stream here would be a misuse, or at least misplaced, and could easily confuse your reader. If you insist, write your own Mandatory class: like Optional, except that it always holds a value, which makes it simpler. For my part I wouldn't bother.
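For the record, a minimal sketch of such a Mandatory wrapper (the name and API are my own invention, mirroring Optional) might look like this:
import java.util.function.Function;
// Like Optional, but it always holds a value: no emptiness, no get() surprises.
final class Mandatory<T> {
    private final T value;
    private Mandatory(T value) {
        this.value = value;
    }
    public static <T> Mandatory<T> of(T value) {
        return new Mandatory<>(value);
    }
    public <R> Mandatory<R> map(Function<? super T, ? extends R> mapper) {
        return Mandatory.of(mapper.apply(value));
    }
    public T get() {
        return value;
    }
}
method3 would then read Mandatory.of(input).map(n -> add(n, 10)).map(n -> multiply(n, 10)).map(n -> subtract(n, 10)).get(), with honest semantics; but again, I wouldn't bother.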

Related


In Java 8 how can I filter a collection using the Stream API by checking the distinctness of a property of each object?
For example, I have a list of Person objects and I want to remove people with the same name:
persons.stream().distinct();
will use the default equality check for a Person object, so I need something like:
persons.stream().distinct(p -> p.getName());
Unfortunately the distinct() method has no such overload. Without modifying the equality check inside the Person class, is it possible to do this succinctly?
Consider distinct to be a stateful filter. Here is a function that returns a predicate that maintains state about what it's seen previously, and that returns whether the given element was seen for the first time:
public static <T> Predicate<T> distinctByKey(Function<? super T, ?> keyExtractor) {
Set<Object> seen = ConcurrentHashMap.newKeySet();
return t -> seen.add(keyExtractor.apply(t));
}
Then you can write:
persons.stream().filter(distinctByKey(Person::getName))
Note that if the stream is ordered and is run in parallel, this will preserve an arbitrary element from among the duplicates, instead of the first one, as distinct() does.
(This is essentially the same as my answer to this question: Java Lambda Stream Distinct() on arbitrary key?)
An alternative would be to place the persons in a map using the name as a key:
persons.stream().collect(Collectors.toMap(Person::getName, p -> p, (p, q) -> p)).values();
Note that, in case of a duplicate name, the Person that is kept will be the first one encountered.
You can wrap the person objects into another class, that only compares the names of the persons. Afterward, you unwrap the wrapped objects to get a person stream again. The stream operations might look as follows:
persons.stream()
.map(Wrapper::new)
.distinct()
.map(Wrapper::unwrap)
...;
The class Wrapper might look as follows:
class Wrapper {
private final Person person;
public Wrapper(Person person) {
this.person = person;
}
public Person unwrap() {
return person;
}
public boolean equals(Object other) {
if (other instanceof Wrapper) {
return ((Wrapper) other).person.getName().equals(person.getName());
} else {
return false;
}
}
public int hashCode() {
return person.getName().hashCode();
}
}
Another solution, using a Set. It may not be the ideal solution, but it works:
Set<String> set = new HashSet<>(persons.size());
persons.stream().filter(p -> set.add(p.getName())).collect(Collectors.toList());
Or if you can modify the original list, you can use removeIf method
persons.removeIf(p -> !set.add(p.getName()));
There's a simpler approach using a TreeSet with a custom comparator.
persons.stream()
.collect(Collectors.toCollection(
() -> new TreeSet<Person>((p1, p2) -> p1.getName().compareTo(p2.getName()))
));
We can also use RxJava (a very powerful reactive extensions library):
Observable.from(persons).distinct(Person::getName)
or
Observable.from(persons).distinct(p -> p.getName())
You can use groupingBy collector:
persons.stream().collect(Collectors.groupingBy(p -> p.getName())).values().forEach(t -> System.out.println(t.get(0).getId()));
If you want to have another stream you can use this:
persons.stream().collect(Collectors.groupingBy(p -> p.getName())).values().stream().map(l -> l.get(0));
You can use the distinct(HashingStrategy) method in Eclipse Collections.
List<Person> persons = ...;
MutableList<Person> distinct =
ListIterate.distinct(persons, HashingStrategies.fromFunction(Person::getName));
If you can refactor persons to implement an Eclipse Collections interface, you can call the method directly on the list.
MutableList<Person> persons = ...;
MutableList<Person> distinct =
persons.distinct(HashingStrategies.fromFunction(Person::getName));
HashingStrategy is simply a strategy interface that allows you to define custom implementations of equals and hashcode.
public interface HashingStrategy<E>
{
int computeHashCode(E object);
boolean equals(E object1, E object2);
}
Note: I am a committer for Eclipse Collections.
A similar approach to the one Saeed Zarinfam used, but more Java 8 style :)
persons.stream().collect(Collectors.groupingBy(p -> p.getName())).values().stream()
.map(plans -> plans.stream().findFirst().get())
.collect(toList());
You can use StreamEx library:
StreamEx.of(persons)
.distinct(Person::getName)
.toList()
I recommend using Vavr, if you can. With this library you can do the following:
io.vavr.collection.List.ofAll(persons)
.distinctBy(Person::getName)
.toJavaSet() // or any another Java 8 Collection
Extending Stuart Marks's answer, this can be done in a shorter way and without a concurrent map (if you don't need parallel streams):
public static <T> Predicate<T> distinctByKey(Function<? super T, ?> keyExtractor) {
final Set<Object> seen = new HashSet<>();
return t -> seen.add(keyExtractor.apply(t));
}
Then call:
persons.stream().filter(distinctByKey(p -> p.getName()));
My approach is to group all the objects with the same property together, then cut the groups short to a size of 1, and finally collect them as a List.
List<YourPersonClass> listWithDistinctPersons = persons.stream()
//operators to remove duplicates based on person name
.collect(Collectors.groupingBy(p -> p.getName()))
.values()
.stream()
//cut short the groups to size of 1
.flatMap(group -> group.stream().limit(1))
//collect distinct users as list
.collect(Collectors.toList());
A list of distinct objects can be found using:
List<Person> distinctPersons = persons.stream()
        .collect(Collectors.collectingAndThen(
                Collectors.toCollection(() -> new TreeSet<>(Comparator.comparing(Person::getName))),
                ArrayList::new));
I made a generic version:
private <T, R> Collector<T, ?, Stream<T>> distinctByKey(Function<T, R> keyExtractor) {
return Collectors.collectingAndThen(
toMap(
keyExtractor,
t -> t,
(t1, t2) -> t1
),
(Map<R, T> map) -> map.values().stream()
);
}
An example:
Stream.of(new Person("Jean"),
new Person("Jean"),
new Person("Paul")
)
.filter(...)
.collect(distinctByKey(Person::getName)) // returns a stream of Person with 2 elements: Jean and Paul
.map(...)
.collect(toList())
Another library that supports this is jOOλ, and its Seq.distinct(Function<T,U>) method:
Seq.seq(persons).distinct(Person::getName).toList();
Under the hood, it does practically the same thing as the accepted answer, though.
Set<YourPropertyType> set = new HashSet<>();
list
.stream()
.filter(it -> set.add(it.getYourProperty()))
.forEach(it -> ...);
While the highest-upvoted answer is absolutely the best answer with respect to Java 8, it is at the same time absolutely the worst in terms of performance. If you really want a slow, low-performing application, go ahead and use it. The simple requirement of extracting a unique set of person names can be met with a plain for-each and a Set.
Things get even worse if the list has more than about 10 elements.
Say you have a collection of 20 objects, like this:
public static final List<SimpleEvent> testList = Arrays.asList(
new SimpleEvent("Tom"), new SimpleEvent("Dick"),new SimpleEvent("Harry"),new SimpleEvent("Tom"),
new SimpleEvent("Dick"),new SimpleEvent("Huckle"),new SimpleEvent("Berry"),new SimpleEvent("Tom"),
new SimpleEvent("Dick"),new SimpleEvent("Moses"),new SimpleEvent("Chiku"),new SimpleEvent("Cherry"),
new SimpleEvent("Roses"),new SimpleEvent("Moses"),new SimpleEvent("Chiku"),new SimpleEvent("gotya"),
new SimpleEvent("Gotye"),new SimpleEvent("Nibble"),new SimpleEvent("Berry"),new SimpleEvent("Jibble"));
Where your SimpleEvent object looks like this:
public class SimpleEvent {
private String name;
private String type;
public SimpleEvent(String name) {
this.name = name;
this.type = "type_"+name;
}
public String getName() {
return name;
}
public void setName(String name) {
this.name = name;
}
public String getType() {
return type;
}
public void setType(String type) {
this.type = type;
}
}
And to test, you have JMH code like this (note that I'm using the same distinctByKey predicate mentioned in the accepted answer):
@Benchmark
@OutputTimeUnit(TimeUnit.SECONDS)
public void aStreamBasedUniqueSet(Blackhole blackhole) throws Exception{
Set<String> uniqueNames = testList
.stream()
.filter(distinctByKey(SimpleEvent::getName))
.map(SimpleEvent::getName)
.collect(Collectors.toSet());
blackhole.consume(uniqueNames);
}
@Benchmark
@OutputTimeUnit(TimeUnit.SECONDS)
public void aForEachBasedUniqueSet(Blackhole blackhole) throws Exception{
Set<String> uniqueNames = new HashSet<>();
for (SimpleEvent event : testList) {
uniqueNames.add(event.getName());
}
blackhole.consume(uniqueNames);
}
public static void main(String[] args) throws RunnerException {
Options opt = new OptionsBuilder()
.include(MyBenchmark.class.getSimpleName())
.forks(1)
.mode(Mode.Throughput)
.warmupBatchSize(3)
.warmupIterations(3)
.measurementIterations(3)
.build();
new Runner(opt).run();
}
Then you'll have Benchmark results like this:
Benchmark                                 Mode   Samples        Score   Score error   Units
c.s.MyBenchmark.aForEachBasedUniqueSet    thrpt        3  2635199.952   1663320.718   ops/s
c.s.MyBenchmark.aStreamBasedUniqueSet     thrpt        3   729134.695    895825.697   ops/s
As you can see, the simple for-each version has roughly 3 times the throughput of the Java 8 stream version, and a lower error score as well. The higher the throughput, the better the performance.
I would like to improve on Stuart Marks's answer: if the key is null, it will throw a NullPointerException. Here I ignore null keys by adding one more check on the extracted key:
public static <T> Predicate<T> distinctByKey(Function<? super T, ?> keyExtractor) {
Set<Object> seen = ConcurrentHashMap.newKeySet();
return t -> {
    Object key = keyExtractor.apply(t); // extract the key once instead of twice
    return key != null && seen.add(key);
};
}
This works like a charm:
Group the data by the unique key to form a map.
Return the first object from every value of the map (there could be multiple people having the same name).
persons.stream()
.collect(groupingBy(Person::getName))
.values()
.stream()
.flatMap(values -> values.stream().limit(1))
.collect(toList());
The easiest way to implement this is to build on the sort feature, as it already accepts an optional Comparator which can be created from an element's property. Then you have to filter out duplicates, which can be done using a stateful Predicate that uses the fact that, for a sorted stream, all equal elements are adjacent:
Comparator<Person> c = Comparator.comparing(Person::getName);
stream.sorted(c).filter(new Predicate<Person>() {
Person previous;
public boolean test(Person p) {
if(previous!=null && c.compare(previous, p)==0)
return false;
previous=p;
return true;
}
})./* more stream operations here */;
Of course, a stateful Predicate is not thread-safe; however, if that's what you need, you can move this logic into a Collector and let the stream take care of the thread-safety when using your Collector. This depends on what you want to do with the stream of distinct elements, which you didn't tell us in your question.
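A minimal sketch of that Collector idea, assuming the stream has already been sorted with the same comparator (the method name is my own):
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collector;

static <T> Collector<T, ?, List<T>> dropAdjacentDuplicates(Comparator<? super T> c) {
    return Collector.of(
            ArrayList::new,
            (list, t) -> {
                // keep t only if it differs from the last element kept
                if (list.isEmpty() || c.compare(list.get(list.size() - 1), t) != 0)
                    list.add(t);
            },
            (left, right) -> {
                // within each segment duplicates are already collapsed, so a
                // duplicate can only sit on the boundary between two segments
                if (!left.isEmpty() && !right.isEmpty()
                        && c.compare(left.get(left.size() - 1), right.get(0)) == 0)
                    right.remove(0);
                left.addAll(right);
                return left;
            });
}
Each thread accumulates into its own list, so stream.sorted(c).collect(dropAdjacentDuplicates(c)) works in parallel without any synchronization in your own code.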
There are a lot of approaches; this one will also help. Simple, clean and clear:
List<Employee> employees = new ArrayList<>();
employees.add(new Employee(11, "Ravi"));
employees.add(new Employee(12, "Stalin"));
employees.add(new Employee(23, "Anbu"));
employees.add(new Employee(24, "Yuvaraj"));
employees.add(new Employee(35, "Sena"));
employees.add(new Employee(36, "Antony"));
employees.add(new Employee(47, "Sena"));
employees.add(new Employee(48, "Ravi"));
List<Employee> empList = new ArrayList<>(employees.stream().collect(
Collectors.toMap(Employee::getName, obj -> obj,
(existingValue, newValue) -> existingValue))
.values());
empList.forEach(System.out::println);
// Collectors.toMap(
//   Employee::getName,                          - key (the property by which you want to eliminate duplicates)
//   obj -> obj,                                 - value (the entire employee object)
//   (existingValue, newValue) -> existingValue) - merge function, to avoid an IllegalStateException: Duplicate key
Output (toString() overridden):
Employee{id=35, name='Sena'}
Employee{id=12, name='Stalin'}
Employee{id=11, name='Ravi'}
Employee{id=24, name='Yuvaraj'}
Employee{id=36, name='Antony'}
Employee{id=23, name='Anbu'}
Here is an example:
public class PayRoll {
    private int payRollId;
    private int id;
    private String name;
    private String dept;
    private int salary;

    public PayRoll(int payRollId, int id, String name, String dept, int salary) {
        super();
        this.payRollId = payRollId;
        this.id = id;
        this.name = name;
        this.dept = dept;
        this.salary = salary;
    }

    // accessors and toString() added so the pipeline and output below compile
    public int getPayRollId() { return payRollId; }
    public int getId() { return id; }
    public String getDept() { return dept; }

    @Override
    public String toString() {
        return "PayRoll [payRollId=" + payRollId + ", id=" + id + ", name=" + name
                + ", dept=" + dept + ", salary=" + salary + "]";
    }
}
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.Map;
import java.util.Optional;
import java.util.stream.Collector;
import java.util.stream.Collectors;
public class Prac {
public static void main(String[] args) {
int salary=70000;
PayRoll payRoll=new PayRoll(1311, 1, "A", "HR", salary);
PayRoll payRoll2=new PayRoll(1411, 2 , "B", "Technical", salary);
PayRoll payRoll3=new PayRoll(1511, 1, "C", "HR", salary);
PayRoll payRoll4=new PayRoll(1611, 1, "D", "Technical", salary);
PayRoll payRoll5=new PayRoll(711, 3,"E", "Technical", salary);
PayRoll payRoll6=new PayRoll(1811, 3, "F", "Technical", salary);
List<PayRoll> list = new ArrayList<>();
list.add(payRoll);
list.add(payRoll2);
list.add(payRoll3);
list.add(payRoll4);
list.add(payRoll5);
list.add(payRoll6);
Map<Object, Optional<PayRoll>> k = list.stream()
        .collect(Collectors.groupingBy(p -> p.getId() + "|" + p.getDept(),
                Collectors.maxBy(Comparator.comparingInt(PayRoll::getPayRollId))));
k.entrySet().forEach(p->
{
if(p.getValue().isPresent())
{
System.out.println(p.getValue().get());
}
});
}
}
Output:
PayRoll [payRollId=1611, id=1, name=D, dept=Technical, salary=70000]
PayRoll [payRollId=1811, id=3, name=F, dept=Technical, salary=70000]
PayRoll [payRollId=1411, id=2, name=B, dept=Technical, salary=70000]
PayRoll [payRollId=1511, id=1, name=C, dept=HR, salary=70000]
Late to the party but I sometimes use this one-liner as an equivalent:
((Function<Value, Key>) Value::getKey).andThen(new HashSet<>()::add)::apply
The expression is a Predicate<Value>, but since the HashSet is created inline, it works as a stateful filter. This is of course less readable, but sometimes it can be helpful to avoid a separate method.
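Applied to this question's Person example, a usage sketch (with the obvious imports) would be:
// the HashSet created inline remembers the names seen so far;
// add() returns false for repeats, so duplicates are filtered out
List<Person> distinct = persons.stream()
        .filter(((Function<Person, String>) Person::getName)
                .andThen(new HashSet<>()::add)::apply)
        .collect(Collectors.toList());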
Building on @josketres's answer, I created a generic utility method:
You could make this more Java 8-friendly by creating a Collector (a sketch follows the tests below).
public static <T> Set<T> removeDuplicates(Collection<T> input, Comparator<T> comparer) {
return input.stream()
.collect(toCollection(() -> new TreeSet<>(comparer)));
}
@Test
public void removeDuplicatesWithDuplicates() {
ArrayList<C> input = new ArrayList<>();
Collections.addAll(input, new C(7), new C(42), new C(42));
Collection<C> result = removeDuplicates(input, (c1, c2) -> Integer.compare(c1.value, c2.value));
assertEquals(2, result.size());
assertTrue(result.stream().anyMatch(c -> c.value == 7));
assertTrue(result.stream().anyMatch(c -> c.value == 42));
}
@Test
public void removeDuplicatesWithoutDuplicates() {
ArrayList<C> input = new ArrayList<>();
Collections.addAll(input, new C(1), new C(2), new C(3));
Collection<C> result = removeDuplicates(input, (t1, t2) -> Integer.compare(t1.value, t2.value));
assertEquals(3, result.size());
assertTrue(result.stream().anyMatch(c -> c.value == 1));
assertTrue(result.stream().anyMatch(c -> c.value == 2));
assertTrue(result.stream().anyMatch(c -> c.value == 3));
}
private class C {
public final int value;
private C(int value) {
this.value = value;
}
}
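As hinted above, a sketch of the Collector-flavored variant (the factory name is mine) could be:
public static <T> Collector<T, ?, Set<T>> toDeduplicatedSet(Comparator<T> comparer) {
    // a TreeSet treats two elements as equal when the comparator returns 0,
    // so duplicates under the given comparison are dropped on insertion
    return Collectors.toCollection(() -> new TreeSet<>(comparer));
}
which turns the utility into an ordinary collect step: input.stream().collect(toDeduplicatedSet((c1, c2) -> Integer.compare(c1.value, c2.value))).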
Maybe this will be useful for somebody. I had a slightly different requirement: given a list of objects A from a 3rd party, remove all that have the same A.b field for the same A.id (multiple A objects with the same A.id in the list). The stream partition answer by Tagir Valeev inspired me to use a custom Collector which returns Map<A.id, List<A>>. A simple flatMap will do the rest (see the usage sketch after the code).
public static <T, K, K2> Collector<T, ?, Map<K, List<T>>> groupingDistinctBy(
        Function<T, K> keyFunction, Function<T, K2> distinctFunction) {
    return groupingBy(keyFunction, Collector.of(
            (Supplier<Map<K2, T>>) HashMap::new,
            (map, element) -> map.putIfAbsent(distinctFunction.apply(element), element),
            (left, right) -> {
                left.putAll(right);
                return left;
            },
            map -> new ArrayList<>(map.values()),
            Collector.Characteristics.UNORDERED));
}
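A usage sketch of that final flatMap step, with hypothetical A accessors matching the description above:
// keep one A per A.getB() within each A.getId() group, then flatten back to a list
List<A> deduped = listOfA.stream()
        .collect(groupingDistinctBy(A::getId, A::getB))
        .values().stream()
        .flatMap(List::stream)
        .collect(Collectors.toList());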
I had a situation where I was supposed to get distinct elements from a list based on 2 keys.
If you want distinct based on two keys, or maybe a composite key, try this:
class Person {
    int rollno;
    String name;
    int getRollno() { return rollno; }
    String getName() { return name; }
}
List<Person> personList;
Function<Person, List<Object>> compositeKey = person ->
        Arrays.<Object>asList(person.getName(), person.getRollno());
Map<Object, List<Person>> map = personList.stream()
        .collect(Collectors.groupingBy(compositeKey, Collectors.toList()));
List<Map.Entry<Object, List<Person>>> duplicateEntries = map.entrySet().stream()
        .filter(entry -> entry.getValue().size() > 1)
        .collect(Collectors.toList());
A variation of the top answer that handles null:
public static <T, K> Predicate<T> distinctBy(final Function<? super T, K> getKey) {
val seen = ConcurrentHashMap.<Optional<K>>newKeySet();
return obj -> seen.add(Optional.ofNullable(getKey.apply(obj)));
}
In my tests:
assertEquals(
asList("a", "bb"),
Stream.of("a", "b", "bb", "aa").filter(distinctBy(String::length)).collect(toList()));
assertEquals(
asList(5, null, 2, 3),
Stream.of(5, null, 2, null, 3, 3, 2).filter(distinctBy(x -> x)).collect(toList()));
val maps = asList(
hashMapWith(0, 2),
hashMapWith(1, 2),
hashMapWith(2, null),
hashMapWith(3, 1),
hashMapWith(4, null),
hashMapWith(5, 2));
assertEquals(
asList(0, 2, 3),
maps.stream()
.filter(distinctBy(m -> m.get("val")))
.map(m -> m.get("i"))
.collect(toList()));
In my case I needed to control what the previous element was. I then created a stateful Predicate where I checked whether the previous element was different from the current one; in that case I kept it.
public List<Log> fetchLogById(Long id) {
return this.findLogById(id).stream()
.filter(new LogPredicate())
.collect(Collectors.toList());
}
public class LogPredicate implements Predicate<Log> {
    private Log previous;

    @Override
    public boolean test(Log current) {
        boolean isDifferent = previous == null || verifyIfDifferentLog(current, previous);
        if (isDifferent) {
            previous = current;
        }
        return isDifferent;
    }

    private boolean verifyIfDifferentLog(Log current, Log previous) {
        return !current.getId().equals(previous.getId());
    }
}
My solution in this listing:
List<HolderEntry> result ....
List<HolderEntry> dto3s = new ArrayList<>(result.stream().collect(toMap(
HolderEntry::getId,
holder -> holder, //or Function.identity() if you want
(holder1, holder2) -> holder1
)).values());
In my situation I wanted to find the distinct values and put them in a List.

Limit and get a flat list in Java 8

I have an object like this:
public class Keyword
{
    private int id;
    private DateTime creationDate;
    private int subjectId;
    ...
}
So now I have a data list like below:
KeywordList = [{1,'2018-10-20',10},{1,'2018-10-21',10},{1,'2018-10-22',10},{1,'2018-10-23',20},{1,'2018-10-24',20},{1,'2018-10-25',20},{1,'2018-10-26',30},{1,'2018-10-27',30},{1,'2018-10-28',40}]
I want to limit this list per subject id.
E.g. if I provide a limit of 2, it should include only the latest 2 records for each subject id (sorted by creationDate) and return the result as a list too:
resultList = [{1,'2018-10-21',10},{1,'2018-10-22',10},{1,'2018-10-24',20},{1,'2018-10-25',20},{1,'2018-10-26',30},{1,'2018-10-27',30},{1,'2018-10-28',40}]
How can we achieve this kind of thing in Java 8?
I have achieved it in the following way, but I have doubts about this code performance-wise:
dataList.stream()
.collect(Collectors.groupingBy(Keyword::getSubjectId,
Collectors.collectingAndThen(Collectors.toList(),
myList-> myList.stream().sorted(Comparator.comparing(Keyword::getCreationDate).reversed()).limit(limit)
.collect(Collectors.toList()))))
.values().stream().flatMap(List::stream).collect(Collectors.toList())
Well, you could do it in two steps, I guess (assuming DateTime is comparable):
Map<Integer, List<Keyword>> map = yourInitialList
        .stream()
        .collect(Collectors.groupingBy(Keyword::getSubjectId));
List<Keyword> result = map.values()
        .stream()
        .flatMap(x -> x.stream()
                .sorted(Comparator.comparing(Keyword::getCreationDate).reversed()) // newest first, so limit(2) keeps the latest two
                .limit(2))
        .collect(Collectors.toList());
This is doable in a single step too with Collectors.collectingAndThen, I guess, but I'm not sure how readable it would be.
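For what it's worth, a sketch of that single-step collectingAndThen variant, which is essentially the question's own code without the final flattening (limit as in the question, DateTime assumed comparable as above):
Map<Integer, List<Keyword>> result = yourInitialList.stream()
        .collect(Collectors.groupingBy(Keyword::getSubjectId,
                Collectors.collectingAndThen(Collectors.toList(),
                        group -> group.stream()
                                .sorted(Comparator.comparing(Keyword::getCreationDate).reversed())
                                .limit(limit)
                                .collect(Collectors.toList()))));
Flattening result.values() as in the question then yields the final list.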
private static final Comparator<Keyword> COMPARE_BY_CREATION_DATE_DESC = (k1, k2) -> k2.getCreationDate().compareTo(k1.getCreationDate());
private static <T> Collector<T, ?, List<T>> limitingList(int limit) {
return Collector.of(ArrayList::new,
(l, e) -> {
if (l.size() < limit)
l.add(e);
},
(l1, l2) -> {
l1.addAll(l2.subList(0, Math.min(l2.size(), Math.max(0, limit - l1.size()))));
return l1;
}
);
}
public static <R> Map<R, List<Keyword>> retrieveLast(List<Keyword> keywords, Function<Keyword, ? extends R> classifier, int limit) {
return keywords.stream()
.sorted(COMPARE_BY_CREATION_DATE_DESC)
.collect(Collectors.groupingBy(classifier, limitingList(limit)));
}
And the client's code:
List<Keyword> keywords = Collections.emptyList();
Map<Integer, List<Keyword>> map = retrieveLast(keywords, Keyword::getSubjectId, 2);

Converting a breaking for loop to a lambda expression with forEach

I have the following code with breaking behavior in a for loop:
package test;
import java.util.Arrays;
import java.util.List;
public class Test {
private static List<Integer> integerList = Arrays.asList(1, 2, 3, 4);
public static void main(String[] args) {
countTo2(integerList);
}
public static void countTo2(List<Integer> integerList) {
for (Integer integer : integerList) {
System.out.println("counting " + integer);
if (integer >= 2) {
System.out.println("returning!");
return;
}
}
}
}
Trying to express it with a lambda using forEach() changes the behavior, as the loop no longer breaks:
public static void countTo2(List<Integer> integerList) {
integerList.forEach(integer -> {
System.out.println("counting " + integer);
if (integer >= 2) {
System.out.println("returning!");
return;
}
});
}
This actually makes sense, as the return; statement only applies within the lambda expression itself (within the internal iteration) and not to the whole execution sequence. So, is there a way to get the desired behavior (breaking out of the loop) using a lambda expression?
The following code is logically equivalent to yours:
public static void countTo2(List<Integer> integerList) {
integerList.stream()
.peek(i -> System.out.println("counting " + i))
.filter(i -> i >= 2)
.findFirst()
.ifPresent(i -> System.out.println("returning!"));
}
If you're confused about anything, please let me know!
What you are looking for is a short-circuiting terminal operation, and while this is the way to do it:
integerList.stream()
.peek(x -> System.out.println("counting = " + x))
.filter(x -> x >= 2)
.findFirst()
.ifPresent(x -> System.out.println("returning!"));
that's an equivalent only when dealing with a sequential stream. As soon as you go parallel, that peek might show elements that you would not expect, because there is no defined processing order; but there is encounter order, meaning that elements will still be correctly fed to the terminal operation.
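A small sketch of that difference; the peek output varies from run to run, but findFirst still honors encounter order:
Arrays.asList(1, 2, 3, 4).parallelStream()
        .peek(x -> System.out.println("counting = " + x)) // may also print 3 or 4
        .filter(x -> x >= 2)
        .findFirst()                                       // still Optional[2]
        .ifPresent(x -> System.out.println("found = " + x));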
One way I could think of doing this would be using anyMatch and its inverse, noneMatch:
if (integerList.stream().noneMatch(val -> val >= 2)) {
    integerList.forEach(val -> System.out.println("counting " + val));
}
if (integerList.stream().anyMatch(val -> val >= 2)) {
    System.out.println("returning!");
}
but internally that would iterate over the list twice (and it still doesn't print the "counting" lines for the elements visited before a match), so it wouldn't be very optimal, I believe.

Java Collector.combiner always getting called with supplier values

Problem: create a Collector implementation that multiplies a stream of Integers in parallel and returns a Long.
Implementation:
public class ParallelMultiplier implements Collector<Integer, Long, Long> {
@Override
public BiConsumer<Long, Integer> accumulator() {
// TODO Auto-generated method stub
return (operand1, operand2) -> {
System.out.println("Accumulating Values (Accumulator, Element): (" + operand1 + ", " + operand2 + ")");
long Lval = operand1.longValue();
int Ival = operand2.intValue();
Lval *= Ival;
operand1 = Long.valueOf(Lval);
System.out.println("Acc Updated : " + operand1);
};
}
@Override
public Set<java.util.stream.Collector.Characteristics> characteristics() {
// TODO Auto-generated method stub
return Collections.unmodifiableSet(EnumSet.of(UNORDERED));
}
@Override
public BinaryOperator<Long> combiner() {
return (operand1, operand2) -> {
System.out.println("Combining Values : (" + operand1 + ", " + operand2 + ")");
long Lval1 = operand1.longValue();
long Lval2 = operand2.longValue();
Lval1 *= Lval2;
return Long.valueOf(Lval1);
};
}
@Override
public Function<Long, Long> finisher() {
// TODO Auto-generated method stub
return Function.identity();
}
@Override
public Supplier<Long> supplier() {
return () -> new Long(1L);
}
}
Observed Output:
Accumulating Values (Accumulator, Element): (1, 4)
Acc Updated : 4
Accumulating Values (Accumulator, Element): (1, 3)
Acc Updated : 3
Combining Values : (1, 1)
Accumulating Values (Accumulator, Element): (1, 8)
Accumulating Values (Accumulator, Element): (1, 6)
Accumulating Values (Accumulator, Element): (1, 2)
Acc Updated : 2
Acc Updated : 8
Accumulating Values (Accumulator, Element): (1, 5)
Accumulating Values (Accumulator, Element): (1, 1)
Acc Updated : 5
Acc Updated : 6
Combining Values : (1, 1)
Accumulating Values (Accumulator, Element): (1, 7)
Acc Updated : 7
Combining Values : (1, 1)
Combining Values : (1, 1)
Acc Updated : 1
Combining Values : (1, 1)
Combining Values : (1, 1)
Combining Values : (1, 1)
Invocation:
List<Integer> intList = Arrays.asList(1, 2, 3, 4, 5, 6, 7, 8);
Collector<Integer, Long, Long> parallelMultiplier = new ParallelMultiplier();
Long result = intList.parallelStream().collect(parallelMultiplier);
i.e. the multiplication result is 1, where it should have been 8 factorial. I am not using the CONCURRENT characteristic either.
Ideally, I should have gotten the multiplication results of the substreams fed into the combiner() operation, but that does not seem to be happening here.
Setting aside the inefficient boxing/unboxing, any clue where I might have made a mistake?
Your collector is slightly off. Here is a simplified version (for why yours does not work, see the end):
static class ParallelMultiplier implements Collector<Integer, long[], Long> {
@Override
public BiConsumer<long[], Integer> accumulator() {
return (left, right) -> left[0] *= right;
}
@Override
public BinaryOperator<long[]> combiner() {
return (left, right) -> {
left[0] = left[0] * right[0];
return left;
};
}
@Override
public Function<long[], Long> finisher() {
return arr -> arr[0];
}
@Override
public Supplier<long[]> supplier() {
return () -> new long[] { 1L };
}
@Override
public Set<java.util.stream.Collector.Characteristics> characteristics() {
return Collections.unmodifiableSet(EnumSet.noneOf(Characteristics.class));
}
}
Your problem can be exemplified like this:
static Long test(Long left, Long right) {
left = left * right;
return left;
}
long l = 12L;
long r = 13L;
test(l, r);
System.out.println(l); // still 12
Java passes references by value: reassigning the operand1 parameter inside the accumulator never changes the Long the stream pipeline is holding, and since Long is immutable there is nothing to mutate in place. That is why the simplified version above uses a long[] as a mutable container.
As Flown stated, Java's primitive wrapper types are immutable and cannot be used as an accumulator. Because you're computing the multiplication in parallel, we'll want to use a thread-safe implementation of a mutable Long, which is an AtomicLong.
import java.util.*;
import java.util.concurrent.atomic.*;
import java.util.function.*;
import java.util.stream.*;
public class ParallelMultiplier implements Collector<Integer, AtomicLong, Long> {
@Override
public BiConsumer<AtomicLong, Integer> accumulator() {
return (operand1, operand2) -> operand1.set(operand1.longValue() * operand2.longValue());
}
@Override
public Set<java.util.stream.Collector.Characteristics> characteristics() {
return Collections.unmodifiableSet(EnumSet.of(Characteristics.UNORDERED));
}
@Override
public BinaryOperator<AtomicLong> combiner() {
return (operand1, operand2) -> new AtomicLong(operand1.longValue() * operand2.longValue());
}
@Override
public Function<AtomicLong, Long> finisher() {
return l -> l.longValue();
}
@Override
public Supplier<AtomicLong> supplier() {
return () -> new AtomicLong(1);
}
}
Testing this with what you've provided results in the correct answer, 8! = 40320.

How to sort comma-separated keys in Reducer output?

I am running an RFM Analysis program using MapReduce. The OutputKeyClass is Text.class, and I am emitting comma-separated R (Recency), F (Frequency), M (Monetary) as the key from the Reducer, where R=BigInteger, F=BigInteger, M=BigDecimal, and the value is also a Text representing the Customer_ID. I know that Hadoop sorts output based on keys, but my final result is a bit weird. I want the output keys to be sorted by R first, then F, and then M. But I am getting the following output sort order for unknown reasons:
545,1,7652 100000
545,23,390159.402343750 100001
452,13,132586 100002
452,4,32202 100004
452,1,9310 100007
452,1,4057 100018
452,3,18970 100021
But I want the following output:
545,23,390159.402343750 100001
545,1,7652 100000
452,13,132586 100002
452,4,32202 100004
452,3,18970 100021
452,1,9310 100007
452,1,4057 100018
NOTE: The Customer_ID was the key in the Map phase, and all the RFM values belonging to a particular Customer_ID are brought together at the Reducer for aggregation.
So after a lot of searching, I found some useful material, a compilation of which I am posting now.
You have to start with your custom data type. Since I had three comma-separated values which needed to be sorted in descending order, I had to create a TextQuadlet.java data type in Hadoop. The reason I am creating a quadlet is that the first part of the key will be the natural key and the remaining three parts will be the R, F, M:
import java.io.*;
import org.apache.hadoop.io.*;
public class TextQuadlet implements WritableComparable<TextQuadlet> {
private String customer_id;
private long R;
private long F;
private double M;
public TextQuadlet() {
}
public TextQuadlet(String customer_id, long R, long F, double M) {
set(customer_id, R, F, M);
}
public void set(String customer_id2, long R2, long F2, double M2) {
this.customer_id = customer_id2;
this.R = R2;
this.F = F2;
this.M=M2;
}
public String getCustomer_id() {
return customer_id;
}
public long getR() {
return R;
}
public long getF() {
return F;
}
public double getM() {
return M;
}
@Override
public void write(DataOutput out) throws IOException {
out.writeUTF(this.customer_id);
out.writeLong(this.R);
out.writeLong(this.F);
out.writeDouble(this.M);
}
@Override
public void readFields(DataInput in) throws IOException {
this.customer_id = in.readUTF();
this.R = in.readLong();
this.F = in.readLong();
this.M = in.readDouble();
}
// This hashCode combines the natural key with the R, F, M fields and is
// consistent with equals() below. (The partitioner further down partitions
// on the natural key, customer_id, rather than on this composite hashCode.)
@Override
public int hashCode() {
return (int) (customer_id.hashCode() * 163 + R + F + M);
}
@Override
public boolean equals(Object o) {
if (o instanceof TextQuadlet) {
TextQuadlet tp = (TextQuadlet) o;
return customer_id.equals(tp.customer_id) && R == (tp.R) && F==(tp.F) && M==(tp.M);
}
return false;
}
@Override
public String toString() {
return customer_id + "," + R + "," + F + "," + M;
}
// compareTo() defines the sort order of the composite keys: returning a
// negative value means this key sorts before tp, a positive value means
// it sorts after, and 0 treats the two as equal for ordering purposes.
@Override
public int compareTo(TextQuadlet tp) {
// Here my natural key is customer_id and I don't even take it into
// consideration. So, as you might have concluded, I am sorting R, F, M
// in descending order.
if (this.R != tp.R) {
if(this.R < tp.R) {
return 1;
}
else{
return -1;
}
}
if (this.F != tp.F) {
if(this.F < tp.F) {
return 1;
}
else{
return -1;
}
}
if (this.M != tp.M){
if(this.M < tp.M) {
return 1;
}
else{
return -1;
}
}
return 0;
}
public static int compare(TextQuadlet tp1, TextQuadlet tp2) {
int cmp = tp1.compareTo(tp2);
return cmp;
}
public static int compare(Text customer_id1, Text customer_id2) {
int cmp = customer_id1.compareTo(customer_id2);
return cmp;
}
}
Next, you'll need a custom partitioner so that all the values which have the same natural key (customer_id) end up at one reducer:
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Partitioner;
public class FirstPartitioner_RFM extends Partitioner<TextQuadlet, Text> {
@Override
public int getPartition(TextQuadlet key, Text value, int numPartitions) {
// Partition on the natural key only, so every record for a given customer_id
// reaches the same reducer; masking the sign bit keeps the index non-negative.
return (key.getCustomer_id().hashCode() & Integer.MAX_VALUE) % numPartitions;
}
}
Thirdly, you'll need a custom group comparator so that the values are grouped together by their natural key, customer_id, and not by the whole composite key customer_id,R,F,M:
import org.apache.hadoop.io.WritableComparable;
import org.apache.hadoop.io.WritableComparator;
public class GroupComparator_RFM_N extends WritableComparator {
protected GroupComparator_RFM_N() {
super(TextQuadlet.class, true);
}
@SuppressWarnings("rawtypes")
@Override
public int compare(WritableComparable w1, WritableComparable w2) {
TextQuadlet ip1 = (TextQuadlet) w1;
TextQuadlet ip2 = (TextQuadlet) w2;
// Here we tell Hadoop to group the keys by their natural key.
return ip1.getCustomer_id().compareTo(ip2.getCustomer_id());
}
}
Fourthly, you'll need a key comparator which will again sort the keys by R, F, M in descending order, implementing the same sort technique used in TextQuadlet.java. Since I got lost while coding, I slightly changed the way I compare the fields in this function, but the underlying logic is the same as in TextQuadlet.java:
import org.apache.hadoop.io.WritableComparable;
import org.apache.hadoop.io.WritableComparator;
public class KeyComparator_RFM extends WritableComparator {
protected KeyComparator_RFM() {
super(TextQuadlet.class, true);
}
@SuppressWarnings("rawtypes")
@Override
public int compare(WritableComparable w1, WritableComparable w2) {
TextQuadlet ip1 = (TextQuadlet) w1;
TextQuadlet ip2 = (TextQuadlet) w2;
// compare() defines the shuffle sort order: a negative return value means
// w1 sorts before w2, a positive value means it sorts after, and 0 means
// the two keys are equal for sorting purposes.
if(ip1.getR() == ip2.getR()){
if(ip1.getF() == ip2.getF()){
if(ip1.getM() == ip2.getM()){
return 0;
}
else{
if(ip1.getM() < ip2.getM())
return 1;
else
return -1;
}
}
else{
if(ip1.getF() < ip2.getF())
return 1;
else
return -1;
}
}
else{
if(ip1.getR() < ip2.getR())
return 1;
else
return -1;
}
}
}
And finally, in your driver class, you'll have to include our custom classes. Here I have used TextQuadlet, Text as the key-value pair, but you can choose other classes depending on your needs:
job.setPartitionerClass(FirstPartitioner_RFM.class);
job.setSortComparatorClass(KeyComparator_RFM.class);
job.setGroupingComparatorClass(GroupComparator_RFM_N.class);
job.setMapOutputKeyClass(TextQuadlet.class);
job.setMapOutputValueClass(Text.class);
job.setOutputKeyClass(TextQuadlet.class);
job.setOutputValueClass(Text.class);
Do correct me if I am technically going wrong somewhere in the code or in the explanation; I have based this answer purely on my personal understanding of what I read on the internet, and it works for me perfectly.
