Java 8 is not maintaining the order while grouping - java-8

I'm using Java 8 to group data, but the results are not in the order they were formed.
Map<GroupingKey, List<Object>> groupedResult = null;
if (!CollectionUtils.isEmpty(groupByColumns)) {
    Map<String, Object>[] mapArr = new LinkedHashMap[mapList.size()];
    if (!CollectionUtils.isEmpty(mapList)) {
        int count = 0;
        for (LinkedHashMap<String, Object> map : mapList) {
            mapArr[count++] = map;
        }
    }
    Stream<Map<String, Object>> people = Stream.of(mapArr);
    groupedResult = people
        .collect(Collectors.groupingBy(p -> new GroupingKey(p, groupByColumns),
            Collectors.mapping((Map<String, Object> p) -> p, toList())));
}
public static class GroupingKey {
    private ArrayList<Object> keys;

    public GroupingKey(Map<String, Object> map, List<String> cols) {
        keys = new ArrayList<>();
        for (String col : cols) {
            keys.add(map.get(col));
        }
    }

    // Appropriate equals() and hashCode(); your IDE should generate these
    @Override
    public boolean equals(Object obj) {
        if (obj == null) {
            return false;
        }
        if (getClass() != obj.getClass()) {
            return false;
        }
        final GroupingKey other = (GroupingKey) obj;
        if (!Objects.equals(this.keys, other.keys)) {
            return false;
        }
        return true;
    }

    @Override
    public int hashCode() {
        int hash = 7;
        hash = 37 * hash + Objects.hashCode(this.keys);
        return hash;
    }

    @Override
    public String toString() {
        return keys + "";
    }

    public ArrayList<Object> getKeys() {
        return keys;
    }

    public void setKeys(ArrayList<Object> keys) {
        this.keys = keys;
    }
}
Here I am using my GroupingKey class, whose columns are passed dynamically from the UI. How can I get the result grouped by groupByColumns in sorted form?

Not maintaining the order is a property of the Map that stores the result. If you need a specific Map behavior, you need to request a particular Map implementation. E.g. LinkedHashMap maintains the insertion order:
groupedResult = people.collect(Collectors.groupingBy(
    p -> new GroupingKey(p, groupByColumns),
    LinkedHashMap::new,
    Collectors.mapping((Map<String, Object> p) -> p, toList())));
By the way, there is no reason to copy the contents of mapList into an array before creating the Stream. You may simply call mapList.stream() to get an appropriate Stream.
Further, Collectors.mapping((Map<String, Object> p) -> p, toList()) is obsolete. p->p is an identity mapping, so there’s no reason to request mapping at all:
groupedResult = mapList.stream().collect(Collectors.groupingBy(
p -> new GroupingKey(p, groupByColumns), LinkedHashMap::new, toList()));
But even the GroupingKey is obsolete. It basically wraps a List of values, so you could just use a List as key in the first place. Lists implement hashCode and equals appropriately (but you must not modify these key Lists afterwards).
Map<List<Object>, List<Object>> groupedResult =
    mapList.stream().collect(Collectors.groupingBy(
        p -> groupByColumns.stream().map(p::get).collect(toList()),
        LinkedHashMap::new, toList()));

Based on @Holger's great answer. I post this to help those who want to keep the order after grouping, as well as change the mapping.
Let's simplify and suppose we have a list of persons (int age, String name, String address, etc.) and we want the names grouped by age while keeping the ages in order:
final LinkedHashMap<Integer, List<String>> map = myList
    .stream()
    .sorted(Comparator.comparing(p -> p.getAge())) // sort list by age
    .collect(Collectors.groupingBy(p -> p.getAge(),
        LinkedHashMap::new, // keeps the order
        Collectors.mapping(p -> p.getName(), // map to name
            Collectors.toList())));

java 8 streams grouping and creating Map<String, Set<String>> issue [duplicate]

In Java 8 how can I filter a collection using the Stream API by checking the distinctness of a property of each object?
For example, I have a list of Person objects and I want to remove people with the same name:
persons.stream().distinct();
will use the default equality check for a Person object, so I need something like:
persons.stream().distinct(p -> p.getName());
Unfortunately, the distinct() method has no such overload. Without modifying the equality check inside the Person class, is it possible to do this succinctly?
Consider distinct to be a stateful filter. Here is a function that returns a predicate that maintains state about what it's seen previously, and that returns whether the given element was seen for the first time:
public static <T> Predicate<T> distinctByKey(Function<? super T, ?> keyExtractor) {
    Set<Object> seen = ConcurrentHashMap.newKeySet();
    return t -> seen.add(keyExtractor.apply(t));
}
Then you can write:
persons.stream().filter(distinctByKey(Person::getName))
Note that if the stream is ordered and is run in parallel, this will preserve an arbitrary element from among the duplicates, instead of the first one, as distinct() does.
(This is essentially the same as my answer to this question: Java Lambda Stream Distinct() on arbitrary key?)
An alternative would be to place the persons in a map using the name as a key:
persons.collect(Collectors.toMap(Person::getName, p -> p, (p, q) -> p)).values();
Note that the Person that is kept, in case of a duplicate name, will be the first one encountered.
You can wrap the person objects into another class, that only compares the names of the persons. Afterward, you unwrap the wrapped objects to get a person stream again. The stream operations might look as follows:
persons.stream()
.map(Wrapper::new)
.distinct()
.map(Wrapper::unwrap)
...;
The class Wrapper might look as follows:
class Wrapper {
    private final Person person;

    public Wrapper(Person person) {
        this.person = person;
    }

    public Person unwrap() {
        return person;
    }

    @Override
    public boolean equals(Object other) {
        if (other instanceof Wrapper) {
            return ((Wrapper) other).person.getName().equals(person.getName());
        } else {
            return false;
        }
    }

    @Override
    public int hashCode() {
        return person.getName().hashCode();
    }
}
Another solution uses a Set. It may not be the ideal solution, but it works:
Set<String> set = new HashSet<>(persons.size());
persons.stream().filter(p -> set.add(p.getName())).collect(Collectors.toList());
Or, if you can modify the original list, you can use the removeIf method:
persons.removeIf(p -> !set.add(p.getName()));
There's a simpler approach using a TreeSet with a custom comparator.
persons.stream()
.collect(Collectors.toCollection(
() -> new TreeSet<Person>((p1, p2) -> p1.getName().compareTo(p2.getName()))
));
We can also use RxJava (very powerful reactive extension library)
Observable.from(persons).distinct(Person::getName)
or
Observable.from(persons).distinct(p -> p.getName())
You can use the groupingBy collector:
persons.stream().collect(Collectors.groupingBy(p -> p.getName())).values().forEach(t -> System.out.println(t.get(0).getId()));
If you want to have another stream, you can use this:
persons.stream().collect(Collectors.groupingBy(p -> p.getName())).values().stream().map(l -> l.get(0));
You can use the distinct(HashingStrategy) method in Eclipse Collections.
List<Person> persons = ...;
MutableList<Person> distinct =
ListIterate.distinct(persons, HashingStrategies.fromFunction(Person::getName));
If you can refactor persons to implement an Eclipse Collections interface, you can call the method directly on the list.
MutableList<Person> persons = ...;
MutableList<Person> distinct =
persons.distinct(HashingStrategies.fromFunction(Person::getName));
HashingStrategy is simply a strategy interface that allows you to define custom implementations of equals and hashCode.
public interface HashingStrategy<E> {
    int computeHashCode(E object);
    boolean equals(E object1, E object2);
}
Note: I am a committer for Eclipse Collections.
A similar approach to the one Saeed Zarinfam used, but in a more Java 8 style :)
persons.stream().collect(Collectors.groupingBy(p -> p.getName())).values().stream()
    .map(plans -> plans.stream().findFirst().get())
    .collect(toList());
You can use StreamEx library:
StreamEx.of(persons)
.distinct(Person::getName)
.toList()
I recommend using Vavr, if you can. With this library you can do the following:
io.vavr.collection.List.ofAll(persons)
.distinctBy(Person::getName)
.toJavaSet() // or any another Java 8 Collection
Extending Stuart Marks's answer, this can be done in a shorter way and without a concurrent map (if you don't need parallel streams):
public static <T> Predicate<T> distinctByKey(Function<? super T, ?> keyExtractor) {
    final Set<Object> seen = new HashSet<>();
    return t -> seen.add(keyExtractor.apply(t));
}
Then call:
persons.stream().filter(distinctByKey(p -> p.getName()));
My approach is to group all the objects with the same property together, then trim the groups to a size of 1, and finally collect them as a List.
List<YourPersonClass> listWithDistinctPersons = persons.stream()
    // operators to remove duplicates based on person name
    .collect(Collectors.groupingBy(p -> p.getName()))
    .values()
    .stream()
    // trim the groups to a size of 1
    .flatMap(group -> group.stream().limit(1))
    // collect distinct users as a list
    .collect(Collectors.toList());
A list of distinct objects can be found using:
List<Person> distinctPersons = persons.stream()
    .collect(Collectors.collectingAndThen(
        Collectors.toCollection(() -> new TreeSet<>(Comparator.comparing(Person::getName))),
        ArrayList::new));
I made a generic version:
private <T, R> Collector<T, ?, Stream<T>> distinctByKey(Function<T, R> keyExtractor) {
return Collectors.collectingAndThen(
toMap(
keyExtractor,
t -> t,
(t1, t2) -> t1
),
(Map<R, T> map) -> map.values().stream()
);
}
An example:
Stream.of(new Person("Jean"),
        new Person("Jean"),
        new Person("Paul"))
    .filter(...)
    .collect(distinctByKey(Person::getName)) // returns a stream of Person with 2 elements, Jean and Paul
    .map(...)
    .collect(toList())
Another library that supports this is jOOλ, and its Seq.distinct(Function<T,U>) method:
Seq.seq(persons).distinct(Person::getName).toList();
Under the hood, it does practically the same thing as the accepted answer, though.
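Conceptually, it boils down to the same seen-set filtering as the accepted answer; a rough sketch of the idea (not jOOλ's actual source):
Set<Object> seen = new HashSet<>();
List<Person> distinct = persons.stream()
    .filter(p -> seen.add(p.getName()))  // add() returns false for keys already seen
    .collect(Collectors.toList());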
Set<YourPropertyType> set = new HashSet<>();
list
.stream()
.filter(it -> set.add(it.getYourProperty()))
.forEach(it -> ...);
While the highest-voted answer is the best answer with respect to Java 8, it is at the same time among the worst in terms of performance. If you really want a poorly performing application, go ahead and use it. The simple requirement of extracting a unique set of person names can be met by a plain for-each loop and a Set.
Things get even worse if the list is larger than about 10 elements.
Suppose you have a collection of 20 objects, like this:
public static final List<SimpleEvent> testList = Arrays.asList(
new SimpleEvent("Tom"), new SimpleEvent("Dick"),new SimpleEvent("Harry"),new SimpleEvent("Tom"),
new SimpleEvent("Dick"),new SimpleEvent("Huckle"),new SimpleEvent("Berry"),new SimpleEvent("Tom"),
new SimpleEvent("Dick"),new SimpleEvent("Moses"),new SimpleEvent("Chiku"),new SimpleEvent("Cherry"),
new SimpleEvent("Roses"),new SimpleEvent("Moses"),new SimpleEvent("Chiku"),new SimpleEvent("gotya"),
new SimpleEvent("Gotye"),new SimpleEvent("Nibble"),new SimpleEvent("Berry"),new SimpleEvent("Jibble"));
Where your SimpleEvent object looks like this:
public class SimpleEvent {
    private String name;
    private String type;

    public SimpleEvent(String name) {
        this.name = name;
        this.type = "type_" + name;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public String getType() {
        return type;
    }

    public void setType(String type) {
        this.type = type;
    }
}
And to test it, you have JMH code like this (note that I'm using the same distinctByKey predicate mentioned in the accepted answer):
@Benchmark
@OutputTimeUnit(TimeUnit.SECONDS)
public void aStreamBasedUniqueSet(Blackhole blackhole) throws Exception {
    Set<String> uniqueNames = testList
        .stream()
        .filter(distinctByKey(SimpleEvent::getName))
        .map(SimpleEvent::getName)
        .collect(Collectors.toSet());
    blackhole.consume(uniqueNames);
}

@Benchmark
@OutputTimeUnit(TimeUnit.SECONDS)
public void aForEachBasedUniqueSet(Blackhole blackhole) throws Exception {
    Set<String> uniqueNames = new HashSet<>();
    for (SimpleEvent event : testList) {
        uniqueNames.add(event.getName());
    }
    blackhole.consume(uniqueNames);
}

public static void main(String[] args) throws RunnerException {
    Options opt = new OptionsBuilder()
        .include(MyBenchmark.class.getSimpleName())
        .forks(1)
        .mode(Mode.Throughput)
        .warmupBatchSize(3)
        .warmupIterations(3)
        .measurementIterations(3)
        .build();
    new Runner(opt).run();
}
Then you'll have Benchmark results like this:
Benchmark                               Mode   Samples        Score  Score error  Units
c.s.MyBenchmark.aForEachBasedUniqueSet  thrpt        3  2635199.952  1663320.718  ops/s
c.s.MyBenchmark.aStreamBasedUniqueSet   thrpt        3   729134.695   895825.697  ops/s
As you can see, the simple for-each achieves about 3 times the throughput of the Java 8 stream version, with a smaller error relative to its score.
The higher the throughput, the better the performance.
I would like to improve Stuart Marks's answer. What if the key is null? It will throw a NullPointerException. Here I ignore null keys by adding one more check, keyExtractor.apply(t) != null:
public static <T> Predicate<T> distinctByKey(Function<? super T, ?> keyExtractor) {
    Set<Object> seen = ConcurrentHashMap.newKeySet();
    return t -> keyExtractor.apply(t) != null && seen.add(keyExtractor.apply(t));
}
This works like a charm:
Grouping the data by a unique key to form a map.
Returning the first object from every value of the map (there could be multiple persons with the same name).
persons.stream()
.collect(groupingBy(Person::getName))
.values()
.stream()
.flatMap(values -> values.stream().limit(1))
.collect(toList());
The easiest way to implement this is to leverage the sort feature, as it already accepts an optional Comparator which can be created from an element's property. Then you have to filter out duplicates, which can be done using a stateful Predicate that uses the fact that, for a sorted stream, all equal elements are adjacent:
Comparator<Person> c = Comparator.comparing(Person::getName);
stream.sorted(c).filter(new Predicate<Person>() {
    Person previous;
    public boolean test(Person p) {
        if (previous != null && c.compare(previous, p) == 0)
            return false;
        previous = p;
        return true;
    }
})./* more stream operations here */;
Of course, a stateful Predicate is not thread-safe. If that's a concern, you can move this logic into a Collector and let the stream take care of thread-safety when using your Collector. This depends on what you want to do with the stream of distinct elements, which you didn't tell us in your question.
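A minimal sketch of such a Collector (my own addition, assuming the same Person/getName setup as above; the name distinctByKeyCollector is hypothetical). Each thread accumulates into its own LinkedHashMap keyed by the extracted key, keeping the first element per key, and the combiner merges partial results, so the stream framework handles the thread-safety:
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.function.Function;
import java.util.stream.Collector;

public static <T, K> Collector<T, ?, List<T>> distinctByKeyCollector(
        Function<? super T, ? extends K> keyExtractor) {
    return Collector.of(
        () -> new LinkedHashMap<K, T>(),                        // per-thread accumulator
        (map, t) -> map.putIfAbsent(keyExtractor.apply(t), t),  // keep the first element per key
        (left, right) -> {                                      // merge partial results; left wins on conflicts
            right.forEach(left::putIfAbsent);
            return left;
        },
        map -> new ArrayList<>(map.values()));                  // distinct elements in encounter order
}
Usage would then be persons.stream().collect(distinctByKeyCollector(Person::getName)), safe even for parallel streams because no map is shared between threads.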
There are a lot of approaches; this one will also help. Simple, clean, and clear:
List<Employee> employees = new ArrayList<>();
employees.add(new Employee(11, "Ravi"));
employees.add(new Employee(12, "Stalin"));
employees.add(new Employee(23, "Anbu"));
employees.add(new Employee(24, "Yuvaraj"));
employees.add(new Employee(35, "Sena"));
employees.add(new Employee(36, "Antony"));
employees.add(new Employee(47, "Sena"));
employees.add(new Employee(48, "Ravi"));
List<Employee> empList = new ArrayList<>(employees.stream().collect(
Collectors.toMap(Employee::getName, obj -> obj,
(existingValue, newValue) -> existingValue))
.values());
empList.forEach(System.out::println);
// Collectors.toMap(
//   Employee::getName, - key (the value by which you want to eliminate duplicates)
//   obj -> obj, - value (the entire employee object)
//   (existingValue, newValue) -> existingValue) - to avoid IllegalStateException: duplicate key
Output (with toString() overridden):
Employee{id=35, name='Sena'}
Employee{id=12, name='Stalin'}
Employee{id=11, name='Ravi'}
Employee{id=24, name='Yuvaraj'}
Employee{id=36, name='Antony'}
Employee{id=23, name='Anbu'}
Here is an example:
public class PayRoll {
    private int payRollId;
    private int id;
    private String name;
    private String dept;
    private int salary;

    public PayRoll(int payRollId, int id, String name, String dept, int salary) {
        super();
        this.payRollId = payRollId;
        this.id = id;
        this.name = name;
        this.dept = dept;
        this.salary = salary;
    }

    // getters used by the stream below; toString() omitted for brevity
    public int getPayRollId() { return payRollId; }
    public int getId() { return id; }
    public String getDept() { return dept; }
}
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.Map;
import java.util.Optional;
import java.util.stream.Collectors;

public class Prac {
    public static void main(String[] args) {
        int salary = 70000;
        PayRoll payRoll = new PayRoll(1311, 1, "A", "HR", salary);
        PayRoll payRoll2 = new PayRoll(1411, 2, "B", "Technical", salary);
        PayRoll payRoll3 = new PayRoll(1511, 1, "C", "HR", salary);
        PayRoll payRoll4 = new PayRoll(1611, 1, "D", "Technical", salary);
        PayRoll payRoll5 = new PayRoll(711, 3, "E", "Technical", salary);
        PayRoll payRoll6 = new PayRoll(1811, 3, "F", "Technical", salary);

        List<PayRoll> list = new ArrayList<PayRoll>();
        list.add(payRoll);
        list.add(payRoll2);
        list.add(payRoll3);
        list.add(payRoll4);
        list.add(payRoll5);
        list.add(payRoll6);

        Map<Object, Optional<PayRoll>> k = list.stream()
            .collect(Collectors.groupingBy(p -> p.getId() + "|" + p.getDept(),
                Collectors.maxBy(Comparator.comparingInt(PayRoll::getPayRollId))));

        k.entrySet().forEach(p -> {
            if (p.getValue().isPresent()) {
                System.out.println(p.getValue().get());
            }
        });
    }
}
Output:
PayRoll [payRollId=1611, id=1, name=D, dept=Technical, salary=70000]
PayRoll [payRollId=1811, id=3, name=F, dept=Technical, salary=70000]
PayRoll [payRollId=1411, id=2, name=B, dept=Technical, salary=70000]
PayRoll [payRollId=1511, id=1, name=C, dept=HR, salary=70000]
Late to the party but I sometimes use this one-liner as an equivalent:
((Function<Value, Key>) Value::getKey).andThen(new HashSet<>()::add)::apply
The expression is a Predicate<Value>, but since the map is inlined, it works as a filter. This is of course less readable, but sometimes it can be helpful to avoid a separate method.
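For instance, applied to the Person example from this question (a sketch; the explicit cast supplies the target type the compiler needs to resolve the method references):
List<Person> distinct = persons.stream()
    .filter(((Function<Person, String>) Person::getName)   // extract the key
        .andThen(new HashSet<>()::add)::apply)             // add() returns false for keys already seen
    .collect(Collectors.toList());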
Building on @josketres's answer, I created a generic utility method.
You could make this more Java 8-friendly by creating a Collector.
public static <T> Set<T> removeDuplicates(Collection<T> input, Comparator<T> comparer) {
    return input.stream()
        .collect(toCollection(() -> new TreeSet<>(comparer)));
}

@Test
public void removeDuplicatesWithDuplicates() {
    ArrayList<C> input = new ArrayList<>();
    Collections.addAll(input, new C(7), new C(42), new C(42));
    Collection<C> result = removeDuplicates(input, (c1, c2) -> Integer.compare(c1.value, c2.value));
    assertEquals(2, result.size());
    assertTrue(result.stream().anyMatch(c -> c.value == 7));
    assertTrue(result.stream().anyMatch(c -> c.value == 42));
}

@Test
public void removeDuplicatesWithoutDuplicates() {
    ArrayList<C> input = new ArrayList<>();
    Collections.addAll(input, new C(1), new C(2), new C(3));
    Collection<C> result = removeDuplicates(input, (t1, t2) -> Integer.compare(t1.value, t2.value));
    assertEquals(3, result.size());
    assertTrue(result.stream().anyMatch(c -> c.value == 1));
    assertTrue(result.stream().anyMatch(c -> c.value == 2));
    assertTrue(result.stream().anyMatch(c -> c.value == 3));
}

private class C {
    public final int value;

    private C(int value) {
        this.value = value;
    }
}
Maybe this will be useful for somebody. I had a slightly different requirement: given a list of objects A from a third party, remove all that have the same A.b field for the same A.id (multiple A objects with the same A.id in the list). The stream partition answer by Tagir Valeev inspired me to use a custom Collector which returns Map<A.id, List<A>>. A simple flatMap will do the rest (see the usage sketch after the collector).
public static <T, K, K2> Collector<T, ?, Map<K, List<T>>> groupingDistinctBy(
        Function<T, K> keyFunction, Function<T, K2> distinctFunction) {
    return groupingBy(keyFunction, Collector.of((Supplier<Map<K2, T>>) HashMap::new,
        (map, error) -> map.putIfAbsent(distinctFunction.apply(error), error),
        (left, right) -> {
            left.putAll(right);
            return left;
        },
        map -> new ArrayList<>(map.values()),
        Collector.Characteristics.UNORDERED));
}
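A hypothetical usage sketch (the class A and its getId()/getB() accessors are assumptions based on the description above): group by id, de-duplicate by b within each group, and let flatMap flatten the result back into a list:
// A, getId() and getB() are assumed names, not from the original code
Map<Long, List<A>> grouped = list.stream()
    .collect(groupingDistinctBy(A::getId, A::getB));
List<A> deduplicated = grouped.values().stream()
    .flatMap(List::stream)
    .collect(Collectors.toList());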
I had a situation where I was supposed to get distinct elements from a list based on 2 keys.
If you want distinct elements based on two keys, or a composite key, try this:
class Person {
    int rollno;
    String name;

    public String getName() { return name; }
    public int getRollno() { return rollno; }
}

List<Person> personList;

Function<Person, List<Object>> compositeKey = person ->
    Arrays.<Object>asList(person.getName(), person.getRollno());

Map<Object, List<Person>> map = personList.stream()
    .collect(Collectors.groupingBy(compositeKey, Collectors.toList()));

List<Object> duplicateEntrys = map.entrySet().stream()
    .filter(settingMap -> settingMap.getValue().size() > 1)
    .collect(Collectors.toList());
A variation of the top answer that handles null:
public static <T, K> Predicate<T> distinctBy(final Function<? super T, K> getKey) {
    val seen = ConcurrentHashMap.<Optional<K>>newKeySet();
    return obj -> seen.add(Optional.ofNullable(getKey.apply(obj)));
}
In my tests:
assertEquals(
asList("a", "bb"),
Stream.of("a", "b", "bb", "aa").filter(distinctBy(String::length)).collect(toList()));
assertEquals(
asList(5, null, 2, 3),
Stream.of(5, null, 2, null, 3, 3, 2).filter(distinctBy(x -> x)).collect(toList()));
val maps = asList(
hashMapWith(0, 2),
hashMapWith(1, 2),
hashMapWith(2, null),
hashMapWith(3, 1),
hashMapWith(4, null),
hashMapWith(5, 2));
assertEquals(
asList(0, 2, 3),
maps.stream()
.filter(distinctBy(m -> m.get("val")))
.map(m -> m.get("i"))
.collect(toList()));
In my case I needed to check what the previous element was. I then created a stateful Predicate in which I checked whether the previous element was different from the current element; if so, I kept it.
public List<Log> fetchLogById(Long id) {
    return this.findLogById(id).stream()
        .filter(new LogPredicate())
        .collect(Collectors.toList());
}

public class LogPredicate implements Predicate<Log> {
    private Log previous;

    public boolean test(Log current) {
        boolean isDifferent = previous == null || verifyIfDifferentLog(current, previous);
        if (isDifferent) {
            previous = current;
        }
        return isDifferent;
    }

    private boolean verifyIfDifferentLog(Log current, Log previous) {
        return !current.getId().equals(previous.getId());
    }
}
My solution is in this listing:
List<HolderEntry> result ....
List<HolderEntry> dto3s = new ArrayList<>(result.stream().collect(toMap(
    HolderEntry::getId,
    holder -> holder, // or Function.identity() if you want
    (holder1, holder2) -> holder1
)).values());
In my situation I wanted to find the distinct values and put them in a List.

Java Map: group by key's attribute and max over value

I have an instance of Map<Reference, Double>. The challenge is that the key objects may contain a reference to the same object; I need to return a map of the same type as the input, but grouped by the key's attribute and retaining the max value.
I tried using groupingBy and maxBy, but I'm stuck.
private void run() {
    Map<Reference, Double> vote = new HashMap<>();
    Student s1 = new Student(12L);
    vote.put(new Reference(s1), 66.5);
    vote.put(new Reference(s1), 71.71);
    Student s2 = new Student(44L);
    vote.put(new Reference(s2), 59.75);
    vote.put(new Reference(s2), 64.00);

    // I need to have a Collection of Reference objs related to the max value of the "vote" map
    Collection<Reference> maxVote = vote.entrySet().stream().collect(groupingBy(
        Map.Entry.<Reference, Double>comparingByKey(new Comparator<Reference>() {
            @Override
            public int compare(Reference r1, Reference r2) {
                return r1.getObjId().compareTo(r2.getObjId());
            }
        }), maxBy(Comparator.comparingDouble(Map.Entry::getValue))));
}
class Reference {
    private final Student student;

    public Reference(Student s) {
        this.student = s;
    }

    public Long getObjId() {
        return this.student.getId();
    }
}

class Student {
    private final Long id;

    public Student(Long id) {
        this.id = id;
    }

    public Long getId() {
        return id;
    }
}
I have an error in the maxBy argument: Comparator.comparingDouble(Map.Entry::getValue) and I don't know how to fix it. Is there a way to achieve the expected result?
You can use Collectors.toMap to get a collection of Map.Entry<Reference, Double>:
Collection<Map.Entry<Reference, Double>> result = vote.entrySet().stream()
    .collect(Collectors.toMap(a -> a.getKey().getObjId(), Function.identity(),
        BinaryOperator.maxBy(Comparator.comparingDouble(Map.Entry::getValue))))
    .values();
Then stream over it again to get a List<Reference>:
List<Reference> result = vote.entrySet().stream()
    .collect(Collectors.toMap(a -> a.getKey().getObjId(), Function.identity(),
        BinaryOperator.maxBy(Comparator.comparingDouble(Map.Entry::getValue))))
    .values().stream().map(e -> e.getKey()).collect(Collectors.toList());
Using your approach of groupingBy and maxBy:
Comparator<Entry<Reference, Double>> c = Comparator.comparing(e -> e.getValue());
Map<Object, Optional<Entry<Reference, Double>>> map =
    vote.entrySet().stream()
        .collect(Collectors.groupingBy(
            e -> ((Reference) e.getKey()).getObjId(),
            Collectors.maxBy(c)));

// iterate to get the result (or invoke another stream)
for (Entry<Object, Optional<Entry<Reference, Double>>> obj : map.entrySet()) {
    System.out.println("Student Id:" + obj.getKey() + ", " + "Max Vote:" + obj.getValue().get().getValue());
}
Output (For input in your question):
Student Id:12, Max Vote:71.71
Student Id:44, Max Vote:64.0
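And the "another stream" option mentioned in the code comment, if you need the Collection<Reference> the question asks for rather than printed output (a sketch over the map built above):
Collection<Reference> maxVote = map.values().stream()
    .filter(Optional::isPresent)     // maxBy yields Optional; groups are never empty, so this is just a safeguard
    .map(opt -> opt.get().getKey())  // take the Reference from the winning entry
    .collect(Collectors.toList());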

How to convert List<Person> into Map<String,List<Employee>>

Here is a piece of code which was written in Java 7 and that I want to convert to Java 8 using streams and lambdas.
public static Map<String, List<Employee>> getEmployees(List<Person> personList) {
    Map<String, List<Employee>> result = new HashMap<>();
    for (Person person : personList) {
        String[] perArr = person.getName().split("-");
        List<Employee> employeeList = result.get(perArr[0]);
        if (employeeList == null) {
            employeeList = new ArrayList<>();
        }
        employeeList.add(new Employee(person.getPersonId(), perArr[1]));
        result.put(perArr[0], employeeList);
    }
    return result;
}
Well, you were somewhat close, I would say. The problem is that you need to pass 3 things along to the next stage of the stream pipeline: the first token and the second token (from split("-")), and also the person's ID (person.getPersonId()). I've used a List here, with some casting, for this purpose (you could use a Triple, for example; I've heard Apache has one):
personList.stream()
    .map(person -> {
        String[] tokens = person.getName().split("-");
        return Arrays.asList(tokens[0], tokens[1], person.getPersonId());
    })
    .collect(Collectors.groupingBy(
        list -> (String) list.get(0),
        Collectors.mapping(
            list -> new Employee((Integer) list.get(2), (String) list.get(1)),
            Collectors.toList())));
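A hypothetical variant that avoids the casts by carrying the intermediate result in a small helper class (NameParts is my own name, not part of the question):
// NameParts is a hypothetical helper, introduced only for this sketch
class NameParts {
    final String prefix;      // token before the '-'
    final Employee employee;  // built from the person's id and the token after the '-'

    NameParts(Person p) {
        String[] tokens = p.getName().split("-");
        this.prefix = tokens[0];
        this.employee = new Employee(p.getPersonId(), tokens[1]);
    }
}

Map<String, List<Employee>> result = personList.stream()
    .map(NameParts::new)
    .collect(Collectors.groupingBy(
        np -> np.prefix,
        Collectors.mapping(np -> np.employee, Collectors.toList())));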
A straightforward way to improve your loop is to use Map.computeIfAbsent to manage creation of new map entries:
for (Person person : personList) {
    String[] perArr = person.getName().split("-");
    List<Employee> employeeList = result.computeIfAbsent(perArr[0], x -> new ArrayList<>());
    employeeList.add(new Employee(person.getPersonId(), perArr[1]));
}
Doing this with streams is somewhat awkward because you cannot conveniently carry the result of an intermediate computation and so you would have to either complicate matters with intermediate objects or just split the string again:
import static java.util.stream.Collectors.*;

personList.stream()
    .collect(groupingBy(
        p -> p.getName().split("-")[0],
        mapping(
            p -> new Employee(p.getPersonId(), p.getName().split("-")[1]),
            toList()
        )
    ));

Ordering a list of objects in the same order as another list of objects with objects having some common attribute

I have two lists like below:
List<HashMap<String, String>> mapList;
List<myclass> sortedDtos;
As the name implies, sortedDtos is already sorted.
`myclass` has a field called `jobNo`.
The HashMap has a key called `jobNo`.
Basically, I want to order mapList based on sortedDtos by comparing the jobNo attribute.
How can I do this in Java 8?
This seems to work correctly; it can probably also be made more efficient (see the sketch below).
List<String> jobNos = new ArrayList<>();
sortedDtos.stream().forEach(dto -> jobNos.add(dto.getJobNo()));
// sort mapList based on the sorting of the dtos
mapList.sort(Comparator.comparing(el -> jobNos.indexOf(el.get("jobNo"))));
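One way to make it more efficient (a sketch, assuming getJobNo() returns the same String stored under the map's "jobNo" key): precompute each jobNo's position once, so the comparator does an O(1) map lookup instead of a linear indexOf scan on every comparison:
// build a jobNo -> position index once
Map<String, Integer> position = new HashMap<>();
for (int i = 0; i < sortedDtos.size(); i++) {
    position.put(sortedDtos.get(i).getJobNo(), i);
}
// entries with unknown jobNos sort to the end
mapList.sort(Comparator.comparing(el -> position.getOrDefault(el.get("jobNo"), Integer.MAX_VALUE)));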
I suggest the following way. This approach involves providing your own custom comparator to sort the list of maps:
Collections.sort(mapList, new Comparator<HashMap<String, String>>() {
    List<String> list = sortedDtos.stream().map(l -> l.jobNo).collect(Collectors.toList());

    public int compare(HashMap<String, String> map1, HashMap<String, String> map2) {
        if (list.indexOf(map1.get("jobNo")) < list.indexOf(map2.get("jobNo"))) {
            return -1;
        } else if (list.indexOf(map1.get("jobNo")) > list.indexOf(map2.get("jobNo"))) {
            return 1;
        } else {
            return 0;
        }
    }
});

Hadoop seems to modify my key object during an iteration over values of a given reduce call

Hadoop Version: 0.20.2 (On Amazon EMR)
Problem: I have a custom key that I write during the map phase, which I've added below. During the reduce call, I do some simple aggregation on the values for a given key. The issue I am facing is that during the iteration over values in the reduce call, my key changes and I get the values of that new key.
My key type:
class MyKey implements WritableComparable<MyKey>, Serializable {
    private MyEnum type; // MyEnum is a simple enumeration.
    private TreeMap<String, String> subKeys;

    MyKey() { subKeys = new TreeMap<>(); } // for hadoop

    public MyKey(MyEnum t, Map<String, String> sK) {
        type = t;
        subKeys = new TreeMap<>(sK);
    }

    public void readFields(DataInput in) throws IOException {
        Text typeT = new Text();
        typeT.readFields(in);
        this.type = MyEnum.valueOf(typeT.toString());
        subKeys.clear();
        int i = WritableUtils.readVInt(in);
        while (0 != i--) {
            Text keyText = new Text();
            keyText.readFields(in);
            Text valueText = new Text();
            valueText.readFields(in);
            subKeys.put(keyText.toString(), valueText.toString());
        }
    }

    public void write(DataOutput out) throws IOException {
        new Text(type.name()).write(out);
        WritableUtils.writeVInt(out, subKeys.size());
        for (Entry<String, String> each : subKeys.entrySet()) {
            new Text(each.getKey()).write(out);
            new Text(each.getValue()).write(out);
        }
    }

    public int compareTo(MyKey o) {
        if (o == null) {
            return 1;
        }
        int typeComparison = this.type.compareTo(o.type);
        if (typeComparison == 0) {
            if (this.subKeys.equals(o.subKeys)) {
                return 0;
            }
            int x = this.subKeys.hashCode() - o.subKeys.hashCode();
            return (x != 0 ? x : -1);
        }
        return typeComparison;
    }
}
Is there anything wrong with this implementation of the key? Following is the code where I am facing the mix-up of keys in the reduce call:
protected void reduce(MyKey k, Iterable<MyValue> values, Context context) {
    Iterator<MyValue> iterator = values.iterator();
    int sum = 0;
    while (iterator.hasNext()) {
        MyValue value = iterator.next();
        // when I come here in the 2nd iteration, if I print k,
        // it is different from what it was in iteration 1
        sum += value.getResult();
    }
    // write sum to context
}
Any help in this would be greatly appreciated.
This is expected behavior (with the new API at least).
When the next method of the underlying iterator of the values Iterable is called, the next key/value pair is read from the sorted mapper/combiner output and checked to see whether the key is still part of the same group as the previous key.
Because Hadoop re-uses the objects passed to the reduce method (just calling the readFields method of the same object), the underlying contents of the key parameter k will change with each iteration of the values Iterable.
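If you need to retain the key (or a value) beyond the current iteration, copy it instead of holding the reused instance, for example with the standard WritableUtils.clone helper:
// Deep-copies the key via its write()/readFields() round trip,
// so the snapshot is unaffected by Hadoop's object reuse.
MyKey snapshot = WritableUtils.clone(k, context.getConfiguration());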
