Extract list values from a list of lists in Java 8

public class FetchVarableList {
public static void main(String[] args) {
List<List<Employee>> empsList = new ArrayList<>();
Employee e1 = new Employee(1, "Abi", "Fin", 2000);
Employee e2 = new Employee(2, "Chandu", "OPs", 5000);
Employee e3 = new Employee(3, "mahesh", "HR", 8000);
Employee e4 = new Employee(4, "Suresh", "Main", 1000);
List<Employee> empList = new ArrayList<>();
empList.add(e1); empList.add(e2);
List<Employee> empList2 = new ArrayList<>();
empList2.add(e3); empList2.add(e4);
empsList.add(empList);
empsList.add(empList2);
}
}
The code above has employees e1 and e2 in empList and e3 and e4 in empList2. These two lists are added to empsList. I would like to fetch all employee numbers and store them in a single list of Integers.
How do I get the list of employee numbers from empsList in Java 8?

List<Integer> numbers = empsList.stream()
.flatMap(List::stream)
.map(Employee::getNumber)
.collect(Collectors.toList());
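For reference, a minimal Employee class that these snippets assume; the field names and accessors are guesses based on the constructor calls and the Employee::getNumber reference:

public class Employee {
    private final int number;
    private final String name;
    private final String dept;
    private final int salary;

    public Employee(int number, String name, String dept, int salary) {
        this.number = number;
        this.name = name;
        this.dept = dept;
        this.salary = salary;
    }

    // accessor used by the stream pipelines above
    public int getNumber() { return number; }
}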

Combining the answers from JB Nizet and ahmet kamaran, you can avoid creating the list of lists (empsList) altogether by concatenating the two lists:
List<Integer> numbers = Stream.concat(empList.stream(), empList2.stream())
.map(Employee::getNumber)
.collect(Collectors.toList());


Related

Group the data into a Map<Long, List<Long>> where Lists need to be sorted

Assume I have the following domain object:
public class MyObj {
private Long id;
private Long relationId;
private Long seq;
// getters
}
There is a list List<MyObj> list. I want to create a Map by grouping the data by relationId (the key) and sorting the values (each value is a list of id).
My code, without sorting the values:
List<MyObj> list = getMyObjs();
// key: relationId, value: List<Long> ids (needs to be sorted)
Map<Long, List<Long>> map = list.stream()
.collect(Collectors.groupingBy(
MyObj::getRelationId,
Collectors.mapping(MyObj::getId, toList())
));
public class MyObjComparator{
public static Comparator<MyObj> compare() {
...
}
}
I have created the compare method MyObjComparator::compare; my question is how to sort this map's values in the above stream.
To obtain the Map with the sorted lists of id as values, you can sort the stream elements by id before collecting them (as @shmosel has pointed out in the comments).
The groupingBy() collector will preserve the order of stream elements while storing them into Lists. In short, the only case when the order can break is while merging the partial results during parallel execution using an unordered collector (which has leeway to combine results in arbitrary order). groupingBy() is not unordered, therefore the order of values in the list reflects the initial order of elements in the stream. You can find a detailed explanation in this answer by @Holger.
You don't need a TreeMap (or a LinkedHashMap), unless you want the entries of the Map to be sorted as well.
List<MyObj> list = getMyObjs();
// Key: relationId, Value: List<Long> ids (needs to be sorted)
Map<Long, List<Long>> map = list.stream()
.sorted(Comparator.comparing(MyObj::getId))
.collect(Collectors.groupingBy(
MyObj::getRelationId,
Collectors.mapping(MyObj::getId, Collectors.toList())
));
As @dan1st said, you can use a TreeMap if you want to sort the keys.
If you want to sort the values, you can only sort the elements before they are grouped; they then end up grouped in that order:
@Data
@AllArgsConstructor
public class MyObj {
private Long relationId;
private Long id;
static int comparing(MyObj obj,MyObj obj2){
return obj.getId().compareTo(obj2.getId());
}
public static void main(String[] args) {
List<MyObj> list = new ArrayList<>();
list.add(new MyObj(2L, 3L));
list.add(new MyObj(2L, 1L));
list.add(new MyObj(2L, 5L));
list.add(new MyObj(1L, 1L));
list.add(new MyObj(1L, 2L));
list.add(new MyObj(1L, 3L));
Map<Long, List<Long>> collect = list.stream()
// value sort
// .sorted(MyObj::comparing)
.collect(groupingBy(MyObj::getRelationId,
// key sort
// (Supplier<Map<Long, List<Long>>>) () -> new TreeMap<>(Long::compareTo),
mapping(MyObj::getId, toList())));
try {
// collect = {"1":[1,2,3],"2":[1,3,5]}
System.out.println("collect = " + new ObjectMapper().writeValueAsString(collect));
} catch (JsonProcessingException e) {
throw new RuntimeException(e);
}
}
}
It appears from the presented code that the resulting map should look like Map<Long, List<Long>>, with relationId as the key and a list of id as the value. Therefore a custom comparator for List<Long> can be implemented like this:
Comparator<List<Long>> cmp = (a, b) -> {
for (int i = 0, n = Math.min(a.size(), b.size()); i < n; i++) {
int res = Long.compare(a.get(i), b.get(i));
if (res != 0) {
return res;
}
}
return Integer.compare(a.size(), b.size());
};
This comparator should be applied to the entry set of the map; for that, either the map entries should be collected into a SortedSet, or a LinkedHashMap needs to be recreated on the basis of the comparator:
Map<Long, List<Long>> map = list.stream()
.collect(groupingBy(
MyObj::getRelationId, Collectors.mapping(MyObj::getId, toList())
))
.entrySet()
.stream()
.sorted(Map.Entry.comparingByValue(cmp))
.collect(toMap(Map.Entry::getKey, Map.Entry::getValue, (a, b) -> a, LinkedHashMap::new));
It is not clear whether you want to sort by ids, or sort by elements and then extract the ids.
In the first case you can use a finishing operation after the collect to get a sorted list. Since Collections.sort() can sort a list in place, you just have to call it at the appropriate place.
Map<Long, List<Long>> map = list.stream()
.collect(Collectors.groupingBy(
MyObj::getRelationId,
Collectors.collectingAndThen(Collectors.mapping(MyObj::getId,toList()),
l -> { Collections.sort(l, your_comparator); return l; })
));
In the second case you just need to sort the stream (if it is finite and not too big).
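For example, a minimal sketch of that second case, sorting the elements with the question's comparator before grouping (assuming MyObjComparator.compare() returns a Comparator<MyObj>):

// sort the MyObj elements first, then group; groupingBy preserves the encounter order of the values
Map<Long, List<Long>> map = list.stream()
    .sorted(MyObjComparator.compare())
    .collect(Collectors.groupingBy(
        MyObj::getRelationId,
        Collectors.mapping(MyObj::getId, Collectors.toList())
    ));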

java 8 streams grouping and creating Map<String, Set<String>> issue [duplicate]

In Java 8 how can I filter a collection using the Stream API by checking the distinctness of a property of each object?
For example, I have a list of Person objects and I want to remove people with the same name:
persons.stream().distinct();
This will use the default equality check for a Person object, so I need something like:
persons.stream().distinct(p -> p.getName());
Unfortunately the distinct() method has no such overload. Without modifying the equality check inside the Person class is it possible to do this succinctly?
Consider distinct to be a stateful filter. Here is a function that returns a predicate that maintains state about what it's seen previously, and that returns whether the given element was seen for the first time:
public static <T> Predicate<T> distinctByKey(Function<? super T, ?> keyExtractor) {
Set<Object> seen = ConcurrentHashMap.newKeySet();
return t -> seen.add(keyExtractor.apply(t));
}
Then you can write:
persons.stream().filter(distinctByKey(Person::getName))
Note that if the stream is ordered and is run in parallel, this will preserve an arbitrary element from among the duplicates, instead of the first one, as distinct() does.
(This is essentially the same as my answer to this question: Java Lambda Stream Distinct() on arbitrary key?)
An alternative would be to place the persons in a map using the name as a key:
persons.stream().collect(Collectors.toMap(Person::getName, p -> p, (p, q) -> p)).values();
Note that the Person that is kept, in case of a duplicate name, will be the first encountered.
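If you need a List<Person> rather than the Collection view returned by values(), a small follow-up sketch is to copy the values:

// copy the deduplicated values into a new list
List<Person> distinct = new ArrayList<>(
    persons.stream()
        .collect(Collectors.toMap(Person::getName, p -> p, (p, q) -> p))
        .values());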
You can wrap the person objects into another class, that only compares the names of the persons. Afterward, you unwrap the wrapped objects to get a person stream again. The stream operations might look as follows:
persons.stream()
.map(Wrapper::new)
.distinct()
.map(Wrapper::unwrap)
...;
The class Wrapper might look as follows:
class Wrapper {
private final Person person;
public Wrapper(Person person) {
this.person = person;
}
public Person unwrap() {
return person;
}
public boolean equals(Object other) {
if (other instanceof Wrapper) {
return ((Wrapper) other).person.getName().equals(person.getName());
} else {
return false;
}
}
public int hashCode() {
return person.getName().hashCode();
}
}
Another solution, using a Set. It may not be the ideal solution, but it works:
Set<String> set = new HashSet<>(persons.size());
persons.stream().filter(p -> set.add(p.getName())).collect(Collectors.toList());
Or, if you can modify the original list, you can use the removeIf method:
persons.removeIf(p -> !set.add(p.getName()));
There's a simpler approach using a TreeSet with a custom comparator.
persons.stream()
.collect(Collectors.toCollection(
() -> new TreeSet<Person>((p1, p2) -> p1.getName().compareTo(p2.getName()))
));
We can also use RxJava (a very powerful reactive extensions library):
Observable.from(persons).distinct(Person::getName)
or
Observable.from(persons).distinct(p -> p.getName())
You can use groupingBy collector:
persons.stream().collect(Collectors.groupingBy(p -> p.getName())).values().forEach(t -> System.out.println(t.get(0).getId()));
If you want to have another stream you can use this:
persons.stream().collect(Collectors.groupingBy(p -> p.getName())).values().stream().map(l -> (l.get(0)));
You can use the distinct(HashingStrategy) method in Eclipse Collections.
List<Person> persons = ...;
MutableList<Person> distinct =
ListIterate.distinct(persons, HashingStrategies.fromFunction(Person::getName));
If you can refactor persons to implement an Eclipse Collections interface, you can call the method directly on the list.
MutableList<Person> persons = ...;
MutableList<Person> distinct =
persons.distinct(HashingStrategies.fromFunction(Person::getName));
HashingStrategy is simply a strategy interface that allows you to define custom implementations of equals and hashcode.
public interface HashingStrategy<E>
{
int computeHashCode(E object);
boolean equals(E object1, E object2);
}
Note: I am a committer for Eclipse Collections.
A similar approach to the one Saeed Zarinfam used, but in a more Java 8 style :)
persons.stream().collect(Collectors.groupingBy(p -> p.getName())).values().stream()
.map(plans -> plans.stream().findFirst().get())
.collect(toList());
You can use StreamEx library:
StreamEx.of(persons)
.distinct(Person::getName)
.toList()
I recommend using Vavr, if you can. With this library you can do the following:
io.vavr.collection.List.ofAll(persons)
.distinctBy(Person::getName)
.toJavaSet() // or any another Java 8 Collection
Extending Stuart Marks's answer, this can be done in a shorter way and without a concurrent map (if you don't need parallel streams):
public static <T> Predicate<T> distinctByKey(Function<? super T, ?> keyExtractor) {
final Set<Object> seen = new HashSet<>();
return t -> seen.add(keyExtractor.apply(t));
}
Then call:
persons.stream().filter(distinctByKey(p -> p.getName()));
My approach to this is to group all the objects with same property together, then cut short the groups to size of 1 and then finally collect them as a List.
List<YourPersonClass> listWithDistinctPersons = persons.stream()
//operators to remove duplicates based on person name
.collect(Collectors.groupingBy(p -> p.getName()))
.values()
.stream()
//cut short the groups to size of 1
.flatMap(group -> group.stream().limit(1))
//collect distinct users as list
.collect(Collectors.toList());
A list of distinct objects can be obtained using:
List distinctPersons = persons.stream()
.collect(Collectors.collectingAndThen(
Collectors.toCollection(() -> new TreeSet<>(Comparator.comparing(Person::getName))),
ArrayList::new));
I made a generic version:
private <T, R> Collector<T, ?, Stream<T>> distinctByKey(Function<T, R> keyExtractor) {
return Collectors.collectingAndThen(
toMap(
keyExtractor,
t -> t,
(t1, t2) -> t1
),
(Map<R, T> map) -> map.values().stream()
);
}
An example:
Stream.of(new Person("Jean"),
new Person("Jean"),
new Person("Paul")
)
.filter(...)
.collect(distinctByKey(Person::getName)) // returns a stream of Person with 2 elements, Jean and Paul
.map(...)
.collect(toList())
Another library that supports this is jOOλ, and its Seq.distinct(Function<T,U>) method:
Seq.seq(persons).distinct(Person::getName).toList();
Under the hood, it does practically the same thing as the accepted answer, though.
Set<YourPropertyType> set = new HashSet<>();
list
.stream()
.filter(it -> set.add(it.getYourProperty()))
.forEach(it -> ...);
While the highest-upvoted answer is absolutely the best answer w.r.t. Java 8, it is at the same time absolutely the worst in terms of performance. If you really want a badly performing application, then go ahead and use it. The simple requirement of extracting a unique set of person names can be achieved by a mere for-each and a Set.
Things get even worse if the list is larger than about 10 elements.
Consider you have a collection of 20 Objects, like this:
public static final List<SimpleEvent> testList = Arrays.asList(
new SimpleEvent("Tom"), new SimpleEvent("Dick"),new SimpleEvent("Harry"),new SimpleEvent("Tom"),
new SimpleEvent("Dick"),new SimpleEvent("Huckle"),new SimpleEvent("Berry"),new SimpleEvent("Tom"),
new SimpleEvent("Dick"),new SimpleEvent("Moses"),new SimpleEvent("Chiku"),new SimpleEvent("Cherry"),
new SimpleEvent("Roses"),new SimpleEvent("Moses"),new SimpleEvent("Chiku"),new SimpleEvent("gotya"),
new SimpleEvent("Gotye"),new SimpleEvent("Nibble"),new SimpleEvent("Berry"),new SimpleEvent("Jibble"));
Where your object SimpleEvent looks like this:
public class SimpleEvent {
private String name;
private String type;
public SimpleEvent(String name) {
this.name = name;
this.type = "type_"+name;
}
public String getName() {
return name;
}
public void setName(String name) {
this.name = name;
}
public String getType() {
return type;
}
public void setType(String type) {
this.type = type;
}
}
And to test it, you have JMH code like this (please note, I'm using the same distinctByKey predicate mentioned in the accepted answer):
@Benchmark
@OutputTimeUnit(TimeUnit.SECONDS)
public void aStreamBasedUniqueSet(Blackhole blackhole) throws Exception{
Set<String> uniqueNames = testList
.stream()
.filter(distinctByKey(SimpleEvent::getName))
.map(SimpleEvent::getName)
.collect(Collectors.toSet());
blackhole.consume(uniqueNames);
}
@Benchmark
@OutputTimeUnit(TimeUnit.SECONDS)
public void aForEachBasedUniqueSet(Blackhole blackhole) throws Exception{
Set<String> uniqueNames = new HashSet<>();
for (SimpleEvent event : testList) {
uniqueNames.add(event.getName());
}
blackhole.consume(uniqueNames);
}
public static void main(String[] args) throws RunnerException {
Options opt = new OptionsBuilder()
.include(MyBenchmark.class.getSimpleName())
.forks(1)
.mode(Mode.Throughput)
.warmupBatchSize(3)
.warmupIterations(3)
.measurementIterations(3)
.build();
new Runner(opt).run();
}
Then you'll have Benchmark results like this:
Benchmark Mode Samples Score Score error Units
c.s.MyBenchmark.aForEachBasedUniqueSet thrpt 3 2635199.952 1663320.718 ops/s
c.s.MyBenchmark.aStreamBasedUniqueSet thrpt 3 729134.695 895825.697 ops/s
And as you can see, a simple for-each is 3 times better in throughput and has a lower error score compared to the Java 8 Stream.
The higher the throughput, the better the performance.
I would like to improve on Stuart Marks's answer. What if the key is null? It will throw a NullPointerException. Here I ignore null keys by adding one more check, keyExtractor.apply(t) != null.
public static <T> Predicate<T> distinctByKey(Function<? super T, ?> keyExtractor) {
Set<Object> seen = ConcurrentHashMap.newKeySet();
return t -> keyExtractor.apply(t)!=null && seen.add(keyExtractor.apply(t));
}
This works like a charm:
Group the data by the unique key to form a map.
Return the first object from every value of the map (there could be multiple persons with the same name).
persons.stream()
.collect(groupingBy(Person::getName))
.values()
.stream()
.flatMap(values -> values.stream().limit(1))
.collect(toList());
The easiest way to implement this is to jump on the sort feature, as it already provides an optional Comparator which can be created using an element's property. Then you have to filter duplicates out, which can be done using a stateful Predicate which uses the fact that for a sorted stream all equal elements are adjacent:
Comparator<Person> c=Comparator.comparing(Person::getName);
stream.sorted(c).filter(new Predicate<Person>() {
Person previous;
public boolean test(Person p) {
if(previous!=null && c.compare(previous, p)==0)
return false;
previous=p;
return true;
}
})./* more stream operations here */;
Of course, a stateful Predicate is not thread-safe; however, if that's your need you can move this logic into a Collector and let the stream take care of the thread-safety when using your Collector (see the sketch below). This depends on what you want to do with the stream of distinct elements, which you didn't tell us in your question.
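A minimal sketch of that Collector idea (not part of the original answer): the accumulator keeps only the first of each run of equal elements, and the combiner drops a possible duplicate at the boundary when merging partial results, assuming the stream was sorted with the same comparator beforehand:

Comparator<Person> c = Comparator.comparing(Person::getName);

Collector<Person, List<Person>, List<Person>> distinctAdjacent = Collector.of(
    ArrayList::new,
    (list, p) -> {
        // add p only if it differs from the last element kept
        if (list.isEmpty() || c.compare(list.get(list.size() - 1), p) != 0) {
            list.add(p);
        }
    },
    (left, right) -> {
        // merging partial results of a parallel run: drop a duplicate at the boundary
        if (!left.isEmpty() && !right.isEmpty()
                && c.compare(left.get(left.size() - 1), right.get(0)) == 0) {
            right.remove(0);
        }
        left.addAll(right);
        return left;
    });

List<Person> distinct = persons.stream().sorted(c).collect(distinctAdjacent);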
There are a lot of approaches; this one will also help - simple, clean and clear:
List<Employee> employees = new ArrayList<>();
employees.add(new Employee(11, "Ravi"));
employees.add(new Employee(12, "Stalin"));
employees.add(new Employee(23, "Anbu"));
employees.add(new Employee(24, "Yuvaraj"));
employees.add(new Employee(35, "Sena"));
employees.add(new Employee(36, "Antony"));
employees.add(new Employee(47, "Sena"));
employees.add(new Employee(48, "Ravi"));
List<Employee> empList = new ArrayList<>(employees.stream().collect(
Collectors.toMap(Employee::getName, obj -> obj,
(existingValue, newValue) -> existingValue))
.values());
empList.forEach(System.out::println);
// Collectors.toMap(
// Employee::getName, - key (the value by which you want to eliminate duplicate)
// obj -> obj, - value (entire employee object)
// (existingValue, newValue) -> existingValue) - to avoid illegalstateexception: duplicate key
Output (toString() overridden):
Employee{id=35, name='Sena'}
Employee{id=12, name='Stalin'}
Employee{id=11, name='Ravi'}
Employee{id=24, name='Yuvaraj'}
Employee{id=36, name='Antony'}
Employee{id=23, name='Anbu'}
Here is an example:
public class PayRoll {
private int payRollId;
private int id;
private String name;
private String dept;
private int salary;
public PayRoll(int payRollId, int id, String name, String dept, int salary) {
super();
this.payRollId = payRollId;
this.id = id;
this.name = name;
this.dept = dept;
this.salary = salary;
}
// getters, setters and toString() omitted for brevity
}
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.Map;
import java.util.Optional;
import java.util.stream.Collector;
import java.util.stream.Collectors;
public class Prac {
public static void main(String[] args) {
int salary=70000;
PayRoll payRoll=new PayRoll(1311, 1, "A", "HR", salary);
PayRoll payRoll2=new PayRoll(1411, 2 , "B", "Technical", salary);
PayRoll payRoll3=new PayRoll(1511, 1, "C", "HR", salary);
PayRoll payRoll4=new PayRoll(1611, 1, "D", "Technical", salary);
PayRoll payRoll5=new PayRoll(711, 3,"E", "Technical", salary);
PayRoll payRoll6=new PayRoll(1811, 3, "F", "Technical", salary);
List<PayRoll>list=new ArrayList<PayRoll>();
list.add(payRoll);
list.add(payRoll2);
list.add(payRoll3);
list.add(payRoll4);
list.add(payRoll5);
list.add(payRoll6);
Map<Object, Optional<PayRoll>> k = list.stream().collect(Collectors.groupingBy(p->p.getId()+"|"+p.getDept(),Collectors.maxBy(Comparator.comparingInt(PayRoll::getPayRollId))));
k.entrySet().forEach(p->
{
if(p.getValue().isPresent())
{
System.out.println(p.getValue().get());
}
});
}
}
Output:
PayRoll [payRollId=1611, id=1, name=D, dept=Technical, salary=70000]
PayRoll [payRollId=1811, id=3, name=F, dept=Technical, salary=70000]
PayRoll [payRollId=1411, id=2, name=B, dept=Technical, salary=70000]
PayRoll [payRollId=1511, id=1, name=C, dept=HR, salary=70000]
Late to the party but I sometimes use this one-liner as an equivalent:
((Function<Value, Key>) Value::getKey).andThen(new HashSet<>()::add)::apply
The expression is a Predicate<Value> but since the map is inline, it works as a filter. This is of course less readable but sometimes it can be helpful to avoid the method.
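For example, applied to the Person example from this question it might look like this (a sketch, not from the original answer):

// the inline Function-to-Predicate trick used as a filter
List<Person> distinct = persons.stream()
    .filter(((Function<Person, String>) Person::getName)
            .andThen(new HashSet<>()::add)::apply)
    .collect(Collectors.toList());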
Building on @josketres's answer, I created a generic utility method:
You could make this more Java 8-friendly by creating a Collector.
public static <T> Set<T> removeDuplicates(Collection<T> input, Comparator<T> comparer) {
return input.stream()
.collect(toCollection(() -> new TreeSet<>(comparer)));
}
@Test
public void removeDuplicatesWithDuplicates() {
ArrayList<C> input = new ArrayList<>();
Collections.addAll(input, new C(7), new C(42), new C(42));
Collection<C> result = removeDuplicates(input, (c1, c2) -> Integer.compare(c1.value, c2.value));
assertEquals(2, result.size());
assertTrue(result.stream().anyMatch(c -> c.value == 7));
assertTrue(result.stream().anyMatch(c -> c.value == 42));
}
@Test
public void removeDuplicatesWithoutDuplicates() {
ArrayList<C> input = new ArrayList<>();
Collections.addAll(input, new C(1), new C(2), new C(3));
Collection<C> result = removeDuplicates(input, (t1, t2) -> Integer.compare(t1.value, t2.value));
assertEquals(3, result.size());
assertTrue(result.stream().anyMatch(c -> c.value == 1));
assertTrue(result.stream().anyMatch(c -> c.value == 2));
assertTrue(result.stream().anyMatch(c -> c.value == 3));
}
private class C {
public final int value;
private C(int value) {
this.value = value;
}
}
Maybe this will be useful for somebody. I had a slightly different requirement: given a list of objects A from a 3rd party, remove all that have the same A.b field for the same A.id (multiple A objects with the same A.id in the list). The stream-partition answer by Tagir Valeev inspired me to use a custom Collector which returns Map<A.id, List<A>>. A simple flatMap will do the rest (see the usage sketch after the collector).
public static <T, K, K2> Collector<T, ?, Map<K, List<T>>> groupingDistinctBy(Function<T, K> keyFunction, Function<T, K2> distinctFunction) {
    return groupingBy(keyFunction, Collector.of((Supplier<Map<K2, T>>) HashMap::new,
            (map, error) -> map.putIfAbsent(distinctFunction.apply(error), error),
            (left, right) -> {
                left.putAll(right);
                return left;
            },
            map -> new ArrayList<>(map.values()),
            Collector.Characteristics.UNORDERED));
}
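A hypothetical usage sketch; the class A with its getId() and getB() accessors is a stand-in for the 3rd-party type described above:

// group by id, keep one element per distinct b within each group, then flatten back to a list
List<A> deduplicated = thirdPartyList.stream()
    .collect(groupingDistinctBy(A::getId, A::getB))
    .values()
    .stream()
    .flatMap(List::stream)
    .collect(Collectors.toList());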
I had a situation where I was supposed to get distinct elements from a list based on 2 keys.
If you want distinct based on two keys or on a composite key, try this:
class Person {
    int rollno;
    String name;
    // getters getRollno() and getName() assumed
}
List<Person> personList;
Function<Person, List<Object>> compositeKey = person ->
    Arrays.<Object>asList(person.getName(), person.getRollno());
Map<Object, List<Person>> map = personList.stream().collect(Collectors.groupingBy(compositeKey, Collectors.toList()));
List<Object> duplicateEntrys = map.entrySet().stream()
.filter(settingMap ->
settingMap.getValue().size() > 1)
.collect(Collectors.toList());
A variation of the top answer that handles null:
public static <T, K> Predicate<T> distinctBy(final Function<? super T, K> getKey) {
val seen = ConcurrentHashMap.<Optional<K>>newKeySet();
return obj -> seen.add(Optional.ofNullable(getKey.apply(obj)));
}
In my tests:
assertEquals(
asList("a", "bb"),
Stream.of("a", "b", "bb", "aa").filter(distinctBy(String::length)).collect(toList()));
assertEquals(
asList(5, null, 2, 3),
Stream.of(5, null, 2, null, 3, 3, 2).filter(distinctBy(x -> x)).collect(toList()));
val maps = asList(
hashMapWith(0, 2),
hashMapWith(1, 2),
hashMapWith(2, null),
hashMapWith(3, 1),
hashMapWith(4, null),
hashMapWith(5, 2));
assertEquals(
asList(0, 2, 3),
maps.stream()
.filter(distinctBy(m -> m.get("val")))
.map(m -> m.get("i"))
.collect(toList()));
In my case I needed to keep track of the previous element. I then created a stateful Predicate where I checked whether the previous element was different from the current element; in that case I kept it.
public List<Log> fetchLogById(Long id) {
return this.findLogById(id).stream()
.filter(new LogPredicate())
.collect(Collectors.toList());
}
public class LogPredicate implements Predicate<Log> {
    private Log previous;

    public boolean test(Log current) {
        boolean isDifferent = previous == null || verifyIfDifferentLog(current, previous);
        if (isDifferent) {
            previous = current;
        }
        return isDifferent;
    }

    private boolean verifyIfDifferentLog(Log current, Log previous) {
        return !current.getId().equals(previous.getId());
    }
}
My solution in this listing:
List<HolderEntry> result ....
List<HolderEntry> dto3s = new ArrayList<>(result.stream().collect(toMap(
HolderEntry::getId,
holder -> holder, //or Function.identity() if you want
(holder1, holder2) -> holder1
)).values());
In my situation I wanted to find the distinct values and put them in a List.

java 8 use reduce and Collectors grouping by to get list

EDIT
Request to provide an answer for the first approach as well, using the reduce method.
public class Messages {
int id;
String message;
String field1;
String field2;
String field3;
int audId;
String audmessage;
//constructor
//getter or setters
}
public class CustomMessage {
int id;
String msg;
String field1;
String field2;
String field3;
List<Aud> list;
//getters and setters
}
public class Aud {
int id;
String message;
//getters and setters
}
public class Demo {
public static void main(String args[]){
List<Messages> list = new ArrayList<Messages>();
list.add(new Messages(1,"abc","c","d","f",10,"a1"));
list.add(new Messages(2,"ac","d","d","f",21,"a2"));
list.add(new Messages(3,"adc","s","d","f",31,"a3"));
list.add(new Messages(4,"aec","g","d","f",40,"a4"));
list.add(new Messages(1,"abc","c","d","f",11,"a5"));
list.add(new Messages(2,"ac","d","d","f",22,"a5"));
}
}
I want the messages to be mapped with their audits.
CustomMessage must have -> 1, "abc", "c", "d", "f" -> a list of 2 audits: (10, "a1") and (11, "a5").
There are two ways to do it:
1. Reduce - I would like to use reduce to create my own accumulator and combiner:
List<CustomMessage> list1= list.stream().reduce(new ArrayList<CustomMessage>(),
accumulator1,
combiner1);
I am unable to write an accumulator and combiner.
2. Collectors.groupingBy -
I do not want to use constructors for creating the Messages, nor for the CustomMessage. Here I have few fields, but my actual object has many more. Is there any way to have a static method for object creation?
Is there a way to do it via reduce, by writing an accumulator or combiner?
List<CustomMessage> l = list.stream()
.collect(Collectors.groupingBy(m -> new SimpleEntry<>(m.getId(), m.getMessage()),
Collectors.mapping(m -> new Aud(m.getAudId(), m.getAudMessage()), Collectors.toList())))
.entrySet()
.stream()
.map(e -> new CustomMessage(e.getKey().getKey(), e.getKey().getValue(), e.getValue()))
.collect(Collectors.toList());
Can anyone help me with both approaches?
This code will create a Collection of CustomMessage. I would recommend putting a constructor in CustomMessage that takes a Messages argument. And maybe also move the mergeFunction out of the collect.
Collection<CustomMessage> customMessages = list.stream()
.collect(toMap(
Messages::getId,
m -> new CustomMessage(m.getId(), m.getMessage(), m.getField1(), m.getField2(), m.getField3(),
new ArrayList<>(singletonList(new Aud(m.getAudId(), m.getAudmessage())))),
(m1, m2) -> {
m1.getList().addAll(m2.getList());
return m1;
}))
.values();
What toMap does here is: the first time a Messages id is encountered, it is put into the Map as the key, with the value being the CustomMessage newly created by the second argument to toMap (the "valueMapper"). On subsequent encounters, it merges the two CustomMessages with the 3rd argument, the "mergeFunction", which effectively concatenates the 2 lists of Aud.
And if you absolutely need a List and not a Collection:
List<CustomMessage> lm = new ArrayList<>(customMessages);
You cannot do this by either grouping or reducing. You need both: group first and then reduce. I coded the reduction differently:
List<CustomMessage> list1 = list.stream()
.collect(Collectors.groupingBy(Messages::getId))
.values()
.stream() // stream of List<Messages>
.map(lm -> {
List<Aud> la = lm.stream()
.map(m -> new Aud(m.getAudId(), m.getAudmessage()))
.collect(Collectors.toList());
Messages m0 = lm.get(0);
return new CustomMessage(m0.getId(), m0.getMessage(),
m0.getField1(), m0.getField2(), m0.getField3(), la);
})
.collect(Collectors.toList());
I have introduced a constructor in Aud and then read your comment that you are trying to avoid constructors. I will revert to this point in the end. Anyway, you can rewrite the creation of Aud objects to be the same way as in your question. And the construction of CustomMessage objects too if you like.
Result:
[1 abc c d f [10 a1, 11 a5], 3 adc s d f [31 a3], 4 aec g d f [40 a4],
2 ac d d f [21 a2, 22 a5]]
I grouped messages only by ID since you said their equals method uses ID only. You may also group by more fields, like in your question. A quick and dirty way would be:
.collect(Collectors.groupingBy(m -> "" + m.getId() + '-' + m.getMessage()
+ '-' + m.getField1() + '-' + m.getField2() + '-' + m.getField3()))
Avoiding public constructors and using static methods for object creation doesn’t change a lot. For example if you have
public static Aud createAud(int id, String message) {
return new Aud(id, message);
}
(well, this didn’t eliminate the constructor completely, but now you can declare it private; if still not satisfied, you can also rewrite the method into not using a declared constructor). Now in the stream you just need to do:
.map(m -> Aud.createAud(m.getAudId(), m.getAudmessage()))
You can do similarly for CustomMessage. In this case your static method may take a Messages argument if you like, a bit like Manos Nikolaidis suggested, this could simplify the stream code a bit.
Edit: You couldn’t just forget about the three-argument reduce method, could you? ;-) It can be used. If you want to do that, I suggest you first fit CustomMessage with a range of methods for the purpose:
private CustomMessage(int id, String msg,
String field1, String field2, String field3, List<Aud> list) {
this.id = id;
this.msg = msg;
this.field1 = field1;
this.field2 = field2;
this.field3 = field3;
this.list = list;
}
public static CustomMessage create(Messages m, List<Aud> la) {
return new CustomMessage(m.getId(), m.getMessage(),
m.getField1(), m.getField2(), m.getField3(), la);
}
/**
* @return original with the Aud from m added
*/
public static CustomMessage adopt(CustomMessage original, Messages m) {
if (original.getId() != m.getId()) {
throw new IllegalArgumentException("adopt(): incompatible messages, wrong ID");
}
Aud newAud = Aud.createAud(m.getAudId(), m.getAudmessage());
original.addAud(newAud);
return original;
}
public static CustomMessage merge(CustomMessage cm1, CustomMessage cm2) {
if (cm1.getId() != cm2.getId()) {
throw new IllegalArgumentException("Cannot merge non-matching custom messages, id "
+ cm1.getId() + " and " + cm2.getId());
}
cm1.addAuds(cm2.getList());
return cm1;
}
private void addAud(Aud aud) {
list.add(aud);
}
private void addAuds(List<Aud> list) {
this.list.addAll(list);
}
With these in place it’s not so bad:
List<CustomMessage> list2 = list.stream()
.collect(Collectors.groupingBy(Messages::getId))
.values()
.stream()
.map(lm -> lm.stream()
.reduce(CustomMessage.create(lm.get(0), new ArrayList<>()),
CustomMessage::adopt,
CustomMessage::merge))
.collect(Collectors.toList());

java 8 groupingBy

I have a List of Student and each Student may register for a couple of subjects.
Therefore each Student will have a List of Subject. I would like to do a groupingBy on Subject using java 8 features.
I am not able to figure out a way. Any help will be appreciated.
This is just an example where groupingBy is used. You can find many other examples on the web. (This example is taken from an external tutorial.)
Group items by price: Collectors.groupingBy and Collectors.mapping example
public static void main(String[] args) {
//3 apple, 2 banana, others 1
List<Item> items = Arrays.asList(
new Item("apple", 10, new BigDecimal("9.99")),
new Item("banana", 20, new BigDecimal("19.99")),
new Item("orang", 10, new BigDecimal("29.99")),
new Item("watermelon", 10, new BigDecimal("29.99")),
new Item("papaya", 20, new BigDecimal("9.99")),
new Item("apple", 10, new BigDecimal("9.99")),
new Item("banana", 10, new BigDecimal("19.99")),
new Item("apple", 20, new BigDecimal("9.99"))
);
//group by price
Map<BigDecimal, List<Item>> groupByPriceMap =
items.stream().collect(Collectors.groupingBy(Item::getPrice));
System.out.println(groupByPriceMap);
// group by price, uses 'mapping' to convert List<Item> to Set<String>
Map<BigDecimal, Set<String>> result =
items.stream().collect(
Collectors.groupingBy(Item::getPrice,
Collectors.mapping(Item::getName, Collectors.toSet())
)
);
System.out.println(result);
}
Output
{
19.99=[
Item{name='banana', qty=20, price=19.99},
Item{name='banana', qty=10, price=19.99}
],
29.99=[
Item{name='orang', qty=10, price=29.99},
Item{name='watermelon', qty=10, price=29.99}
],
9.99=[
Item{name='apple', qty=10, price=9.99},
Item{name='papaya', qty=20, price=9.99},
Item{name='apple', qty=10, price=9.99},
Item{name='apple', qty=20, price=9.99}
]
}
//group by + mapping to Set
{
19.99=[banana],
29.99=[orang, watermelon],
9.99=[papaya, apple]
}
One of my friends suggested this solution and it worked fine:
package grpBy;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;
class Pair {
Subject sub1;
Student student;
public Pair(Student student, Subject sub1 ) {
this.sub1 = sub1;
this.student = student;
}
public String getSub1() {
return sub1.name;
}
public String getStudent() {
return student.name;
}
static Pair of(Student stu, Subject sub) {
return new Pair( stu, sub);
}
}
public class Test2 {
public static void main(String[] args) {
Subject maths = new Subject("maths", 1);
Subject chemi = new Subject("chemi", 1);
Subject phy = new Subject("phy", 1);
Subject bio = new Subject("bio", 1);
List<Subject> s1 = new ArrayList<>();
s1.add(maths);
s1.add(chemi);
List<Subject> s2 = new ArrayList<>();
s2.add(maths);
s2.add(phy);
List<Subject> s3 = new ArrayList<>();
s3.add(bio);
s3.add(phy);
Student jack = new Student(1, "jack", s1);
Student jil = new Student(2, "jil", s2);
Student john = new Student(3, "john", s3);
List<Student> students = new ArrayList<>();
students.add(jack);
students.add(jil);
students.add(john);
Map<String, List<String>> m = students.stream().
flatMap(student -> student.subjects.stream().map(subject -> Pair.of(student, subject))).
collect(Collectors.groupingBy(e -> e.getSub1(),
Collectors.mapping(e -> e.getStudent(),
Collectors.toList())));
System.out.println(m);
}
}

C# List with Unique Entries

I have a list like this,
List A
ItemNum FileName
001 A.txt,B.txt,A.txt,B.txt
002 A.txt,C.txt,A.txt,C.txt
I need to make a list like this.
ItemNum FileName
001 A.txt,B.txt
002 A.txt,C.txt
Is there any way to do it?
Case sensitive, in a method format:
public List<string> ToDistinct(IEnumerable<string> input)
{
List<string> unique = new List<string>();
foreach (string s in input)
{
List<string> files = s.Split(',').ToList();
unique.Add(String.Join(",", files.Distinct()));
}
return unique;
}
Here's a console method that displays the output like you have above and you can tweak it if you like:
public static void Main(string[] args)
{
List<string> input = new List<string>{"A.txt,B.txt,A.txt,B.txt", "A.txt,C.txt,A.txt,C.txt"};
//Display Input
Console.WriteLine("Input");
Console.WriteLine("ItemNum FileNames");
for(int i = 0; i < input.Count(); i++)
{
Console.WriteLine(String.Format(" {0,-23:000}{1}", i + 1, input[i]));
}
//Build the Unique List
List<string> unique = new List<string>();
foreach (string s in input)
{
List<string> files = s.Split(',').ToList();
unique.Add(String.Join(",", files.Distinct()));
}
//Display Output
Console.WriteLine();
Console.WriteLine("Output");
Console.WriteLine("ItemNum FileNames");
for(int i = 0; i < unique.Count(); i++)
{
Console.WriteLine(String.Format(" {0,-23:000}{1}", i + 1, unique[i]));
}
Console.ReadKey();
}
I'm going to suggest that you convert the FileNames to a List<string>, then all you need is this:
listA.ForEach(x => x.FileNames = x.FileNames.Distinct().ToList());
If there is no specific reason why you're storing what is, for all intents and purposes, a list in a string, why not store it as one? A List<string> will allow you to add and remove entries as you see fit.
List<Item> list = new List<Item>();
list.Add(new Item("001", "A.txt,B.txt,A.txt,B.txt"));
list.Add(new Item("002", "A.txt,C.txt,A.txt,C.txt"));
var newList = from l in list
select new Item() { ItemNum = l.ItemNum,
FileName = string.Join(",", l.FileName.Split(',').Distinct()) };
