I have a Topic entity:
@Entity
public class Topic {
    @Id
    private int id;
    private LocalDate date;
    private String name;
    private int points;
    @JoinColumn(name = "user_id", nullable = false)
    private User user;
}
I'm getting a list of topics in a given date range with a Spring Data JPA derived query method:
List<Topic> topics = topicRepository.findByDateBetween(begin, end);
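The repository method is declared along these lines (just a sketch; the base interface and id type may differ in my real code):
public interface TopicRepository extends JpaRepository<Topic, Integer> {
    // derived query: WHERE date BETWEEN :begin AND :end
    List<Topic> findByDateBetween(LocalDate begin, LocalDate end);
}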
The result contains, for example:
Topic(id=1, date="2018-01-01", name="Java examples", User(...), 12)
Topic(id=2, date="2018-02-02", name="Java examples", User(...), 34)
Topic(id=3, date="2018-02-02", name="Java examples", User(...), 56)
Topic(id=4, date="2018-03-03", name="Java examples", User(...), 78)
Topic(id=5, date="2018-03-03", name="Java examples", User(...), 90)
What I'm trying to achieve is to aggregate the results so that topics with the same date and User have their points added together:
Topic(id=1, date="2018-01-01", name="Java examples", User(...), 12)
Topic(id=2, date="2018-02-02", name="Java examples", User(...), 90)
Topic(id=4, date="2018-03-03", name="Java examples", User(...), 168)
My current solution returns a map with the date as key and the summed points as value, but I need more data in the output, like the User or the name.
return topics.stream()
.collect(Collectors.groupingBy(Topic::getDate,
Collectors.summingInt(Topic::getPoints)));
Maybe there is another way that, instead of a map, returns a DTO created for this case? e.g.
@Data
public class ResultDto {
private LocalDate date;
private String name;
private int points;
private User user;
}
A simple way to group by a subset of fields is to use a TreeMap with a custom Comparator. Suppose you were to define
Comparator<Topic> byDateAndUser = Comparator.comparing(Topic::getDate)
.thenComparing(t -> t.getUser().getUserId());
Map<Topic,...> map = new TreeMap<>(byDateAndUser);
The resulting map would use the supplied comparator instead of equals to determine key equality and would thus treat all topics with the same date and user as the same key.
This feature of TreeMap allows you to compute a Map of topic to the total points and it will only contain one entry for each combination of date/user:
import static java.util.stream.Collectors.groupingBy;
import static java.util.stream.Collectors.summingInt;
Comparator<Topic> byDateAndUser = Comparator.comparing(Topic::getDate)
.thenComparing(t -> t.getUser().getUserId());
Map<Topic, Integer> pointTotals = topics.stream()
.collect(groupingBy(
topic -> topic,
() -> new TreeMap<>(byDateAndUser),
summingInt(Topic::getPoints)
));
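For completeness, here is one way the totals could be read back out; a sketch that assumes User exposes the getUserId() accessor used in the comparator above:
for (Map.Entry<Topic, Integer> entry : pointTotals.entrySet()) {
    Topic representative = entry.getKey(); // one topic per date/user group, ordered by the comparator
    System.out.println(representative.getDate()
            + " / user " + representative.getUser().getUserId()
            + " -> " + entry.getValue() + " points");
}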
Assuming User implements hashCode and equals consistently, you could group into a map with a date/user composite key and ResultDto values. To do this you need two operations: one to map a Topic to a ResultDto and another to aggregate the points of each group. (You can place these operations in ResultDto, in a utility class, etc.; here I'm putting the first one in a Mapper utility class and the second one in ResultDto):
public final class Mapper {
private Mapper() { }
public static ResultDto fromTopic(Topic topic) {
ResultDto result = new ResultDto();
result.setDate(topic.getDate());
result.setName(topic.getName());
result.setPoints(topic.getPoints());
result.setUser(topic.getUser());
return result;
}
}
In ResultDto:
public ResultDto merge(ResultDto another) {
this.points += another.points;
return this;
}
Note that I'm keeping the first Topic.name found in the Mapper.fromTopic mapping operation; your example uses the same name everywhere, but if names can differ within a date/user group, take that into account before using this approach in a real-world scenario.
Now we can stream the topics and group by date/user:
Map<List<Object>, ResultDto> groups = topics.stream()
        .collect(Collectors.toMap(
            topic -> Arrays.asList(topic.getDate(), topic.getUser()), // Java 9+: List.of
            Mapper::fromTopic,
            ResultDto::merge));
This collects into a map whose key is a List with its first element being the date and its second the user. These 2-element lists aren't very useful for later usage; here we're only using them to create a composite key and group topics by it. Each value of the map is a ResultDto instance that is initially created from one topic and then merged with the other topics that belong to the same group (same date and user). The points are summed in the ResultDto.merge operation.
The results you need are the values of the map:
Collection<ResultDto> results = groups.values(); // or new ArrayList<>(groups.values())
EDIT: Here's a slightly more succinct variant without streams:
Map<List<Object>, ResultDto> groups = new HashMap<>();
topics.forEach(topic -> groups.merge(
List.of(topic.getDate(), topic.getUser()), // Java 8: Arrays.asList
Mapper.fromTopic(topic),
ResultDto::merge));
Collection<ResultDto> results = groups.values();
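If you are on Java 16+, a small record makes a more self-documenting composite key than a List<Object>; this is only a sketch of the same grouping idea (DateUserKey is a name I'm introducing here), and it still relies on User implementing equals and hashCode:
record DateUserKey(LocalDate date, User user) { }

Map<DateUserKey, ResultDto> groups = new HashMap<>();
topics.forEach(topic -> groups.merge(
        new DateUserKey(topic.getDate(), topic.getUser()),
        Mapper.fromTopic(topic),
        ResultDto::merge));
Collection<ResultDto> results = groups.values();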
I receive a List of MediaDTO, and this object has two attributes:
String sizeType and String liveloUrl.
The 'sizeType' field holds the image's size: small, medium, large, or thumbnail.
So I have to filter these objects by sizeType and create 4 new lists based on it.
This is how I get the List<MediaDTO> mediaDTO:
medias=[
  MediaDTO(sizeType=THUMBNAIL, liveloUrl=https://s3.sao01.cloud-object-storage.appdomain.cloud/catalog-media-storage/id-source/productId/skuseller2/thumbnail/celular-iphone-11-azul.png),
  MediaDTO(sizeType=SMALL, liveloUrl=https://s3.sao01.cloud-object-storage.appdomain.cloud/catalog-media-storage/id-source/productId/skuseller2/small/celular-iphone-11-azul.png),
  MediaDTO(sizeType=SMALL, liveloUrl=https://s3.sao01.cloud-object-storage.appdomain.cloud/catalog-media-storage/id-source/productId/skuseller2/medium/celular-iphone-11-azul.png),
  MediaDTO(sizeType=LARGE, liveloUrl=https://s3.sao01.cloud-object-storage.appdomain.cloud/catalog-media-storage/id-source/productId/skuseller2/large/celular-iphone-11-azul.png),
  MediaDTO(sizeType=THUMBNAIL, liveloUrl=https://s3.sao01.cloud-object-storage.appdomain.cloud/catalog-media-storage/id-source/productId/skuseller2/thumbnail/celular-iphone-11-vermelho.png),
  MediaDTO(sizeType=SMALL, liveloUrl=https://s3.sao01.cloud-object-storage.appdomain.cloud/catalog-media-storage/id-source/productId/skuseller2/small/celular-iphone-11-vermelho.png),
  MediaDTO(sizeType=MEDIUM, liveloUrl=https://s3.sao01.cloud-object-storage.appdomain.cloud/catalog-media-storage/id-source/productId/skuseller2/medium/celular-iphone-11-vermelho.png),
  MediaDTO(sizeType=LARGE, liveloUrl=https://s3.sao01.cloud-object-storage.appdomain.cloud/catalog-media-storage/id-source/productId/skuseller2/large/celular-iphone-11-vermelho.png)
]
I achieved filtering for one of the sizes. This works!
However, I could not figure out how to filter over the 4 sizes and create 4 new lists from them.
If I fix one error another appears... so I'm really stuck.
By the way, I've been searching for a solution on the internet and in this forum for a couple of days but didn't find anything that fits.
If someone can help, I'd really be grateful.
I was thinking about using a 'forEach' to filter, but even like that I could only filter one size.
Thanks in advance.
**This is what I've got so far:**
public class ProcessProductDTO {
String processId;
OperationProcess operation;
String categoryId;
ProductDTO productDTO;
}
public class ProductDTO {
String id;
Boolean active;
String displayName;
String longDescription;
List<MediaDTO> medias;
List<AttributeDTO> attributes;
}
public class MediaDTO {
String sizeType;
String liveloUrl;
}
public Properties toOccProductPropertiesDTO(ProcessProductDTO processProductDTO) throws JsonProcessingException {
    String pSpecs = convertAttributes(processProductDTO.getProductDTO().getAttributes());
    //List<String> medias = convertMedias(processProductDTO.getProductDTO().getMedias());
    return Properties.builder()
            .id(processProductDTO.getProductDTO().getId())
            .active(processProductDTO.getProductDTO().getActive())
            .listPrices(new HashMap<>())
            .p_specs(pSpecs)
            //.medias(medias)
            .displayName(processProductDTO.getProductDTO().getDisplayName())
            .longDescription(processProductDTO.getProductDTO().getLongDescription())
            .build();
}
private String convertAttributes(List<AttributeDTO> attributes) throws JsonProcessingException {
Map<String, String> attribs = attributes.stream()
.collect(Collectors.toMap(AttributeDTO::getName, AttributeDTO::getValue));
return objectMapper.writeValueAsString(attribs);
}
private List<MediaDTO> convertMedias(ProcessProductDTO processProduct, List<MediaDTO> mediaDTO){
List<MediaDTO> filteredList = processProduct.getProductDTO().getMedias();
Set<String> filterSet = mediaDTO.stream().map(MediaDTO::getSizeType).collect(Collectors.toSet());
return filteredList.stream().filter(url -> filterSet.contains("SMALL")).collect(Collectors.toList());
}
UPDATE
I got the following result:
private Properties toOccProductPropertiesDTO(ProcessProductDTO processProductDTO) throws JsonProcessingException {
String pSpecs = convertAttributes(processProductDTO.getProductDTO().getAttributes());
MediaOccDTO medias = convertMedias(processProductDTO.getProductDTO().getMedias());
return Properties.builder()
.id(processProductDTO.getProductDTO().getId())
.active(processProductDTO.getProductDTO().getActive())
.listPrices(new HashMap())
.p_specs(pSpecs)
.medias(medias)
.displayName(processProductDTO.getProductDTO().getDisplayName())
.longDescription(processProductDTO.getProductDTO().getLongDescription())
.build();
}
private MediaOccDTO convertMedias(List<MediaDTO> mediaDTOs){
String smallImageUrls = generateOccUrl(mediaDTOs, ImageSizeType.SMALL);
String mediumImageUrls = generateOccUrl(mediaDTOs, ImageSizeType.MEDIUM);
String largeImageUrls = generateOccUrl(mediaDTOs, ImageSizeType.LARGE);
String thumbImageUrls = generateOccUrl(mediaDTOs, ImageSizeType.THUMB);
return MediaOccDTO.builder()
.p_smallImageUrls(smallImageUrls)
.p_mediumImageUrls(mediumImageUrls)
.p_largeImageUrls(largeImageUrls)
.p_thumbImageUrls(thumbImageUrls)
.build();
}
private String generateOccUrl(List<MediaDTO> mediaDTOs, ImageSizeType imageSizeType){
return mediaDTOs.stream()
.filter(m -> m.getSizeType().equals(imageSizeType))
.map(MediaDTO::getLiveloUrl)
.reduce(",", String::concat);
}
The problem is:
the comparison: m.getSizeType().equals(imageSizeType)
is always false, so the list gets created empty...
Though the question is laborious, I take the requirement to be: you need to create 4 new lists based on sizeType.
A stream collector can collect the results into a single data structure: a List, a Set, a Map, etc.
Since you need 4 lists based on sizeType, you would have to pass over the stream 4 times to create the 4 lists that way.
An alternative is to create a Map<SizeType, List<MediaDTO>>.
This can be achieved with:
mediaDTO.stream().collect(Collectors.toMap(i -> i.getSizeType(), i -> i))
I think toMap doesn't collect the values into a list, though. We need groupingBy instead:
mediaDTO.stream()
.collect(Collectors.groupingBy(MediaDTO::getSizeType));
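From that map the four lists can then be pulled out; a small sketch (the variable names are just for illustration, and getOrDefault guards against sizes that don't occur):
Map<String, List<MediaDTO>> bySize = mediaDTO.stream()
        .collect(Collectors.groupingBy(MediaDTO::getSizeType));

List<MediaDTO> thumbnails = bySize.getOrDefault("THUMBNAIL", Collections.emptyList());
List<MediaDTO> small = bySize.getOrDefault("SMALL", Collections.emptyList());
List<MediaDTO> medium = bySize.getOrDefault("MEDIUM", Collections.emptyList());
List<MediaDTO> large = bySize.getOrDefault("LARGE", Collections.emptyList());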
I have the following object:
public class Book {
private Long id;
private Long bookId;
private String bookName;
private String owner;
}
It is represented by the following table:
Basically, a book can be owned by multiple owners, i.e. owner "a" owns books 1 and 2.
I have a basic function that, when passed a Book object, will give its owner(s) in a List.
private List<String> getBookToOwner(Book book) {
List<String> a = new ArrayList<>();
if (book.getOwner() != null && !book.getOwner().isEmpty()) {
a.addAll(Arrays.asList(book.getOwner().split("/")));
}
return a;
}
I want to apply that to each book, retrieve its owners, and create the following Map:
Map<String, List<Long>> ownerToBookMap;
Like this:
How do I use streams here?
//books is List<Book>
Map<String, List<Long>> ownerToBookMap = books.stream().map(
// apply the above function to get its owners, flatten it and finally collect it to get the above Map object
// Need some help here..
);
You can get the owner list from each book, then flatten the owners and map each one to a pair of owner and bookId using flatMap. Then group by owner using groupingBy and collect the list of bookIds per owner.
Map<String, List<Long>> ownerToBookMap =
books.stream()
.flatMap(b -> getBookToOwner(b)
.stream()
.map(o -> new AbstractMap.SimpleEntry<>(o, b.getBookId())))
.collect(Collectors.groupingBy(Map.Entry::getKey,
Collectors.mapping(Map.Entry::getValue, Collectors.toList())));
Flatmap the owners into a single stream, creating entries whose key is a single owner and whose value is a bookId. Then group the structure by the key (owner). Finally use Collectors.mapping to get the List of bookIds instead of the actual entries:
List<Book> books = ...
Map<String, List<Long>> booksByOwner = books.stream()
.flatMap(book -> Arrays.stream(book.getOwner().split("/"))
.map(owner -> new AbstractMap.SimpleEntry<>(owner, book.getBookId())))
.collect(Collectors.groupingBy(
AbstractMap.SimpleEntry::getKey,
Collectors.mapping(AbstractMap.SimpleEntry::getValue, Collectors.toList())));
I use reduce instead of map.
Map<String, List<Long>> ownerToBookMap = books.stream().reduce(
    new HashMap<String, List<Long>>(),
    (acc, b) -> {
        getBookToOwner(b).forEach(o ->
            acc.computeIfAbsent(o, k -> new ArrayList<>()).add(b.getBookId()));
        return acc;
    },
    (left, right) -> {
        // combiner is only used for parallel streams; a sequential stream mutates the identity map directly
        right.forEach((owner, ids) ->
            left.merge(owner, ids, (a, c) -> { a.addAll(c); return a; }));
        return left;
    });
I have the below class:
class A{
String property1;
String property2;
Double property3;
Double property4;
}
So property1 and property2 together form the key:
class Key{
String property1;
String property2;
}
I already have a list of A like below:
List<A> list=new ArrayList<>();
I want to group by the key and collect into another list of A, in order to avoid having multiple items with the same key in the list:
Function<A, Key> keyFunction = r -> Key.valueOf(r.getProperty1(), r.getProperty2());
But while grouping I have to take the sum of property3 and the average of property4.
I need an efficient way to do it.
Note: I have skipped the methods of the given classes.
Collecting to a Map is unavoidable since you want to group things. A brute-force way to do that would be:
yourListOfA
.stream()
.collect(Collectors.groupingBy(
x -> new Key(x.getProperty1(), x.getProperty2()),
Collectors.collectingAndThen(Collectors.toList(),
list -> {
double first = list.stream().mapToDouble(A::getProperty3).sum();
// or any other default
double second = list.stream().mapToDouble(A::getProperty4).average().orElse(0D);
A a = list.get(0);
return new A(a.getProperty1(), a.getProperty2(), first, second);
})))
.values();
This could be slightly improved, for example inside the Collectors.collectingAndThen, so that the List is only iterated once; for that a custom collector would be required. It's not that complicated to write one...
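For instance, if Java 12+ is available, Collectors.teeing can compute the sum and the average in a single pass per group; this is only a sketch, assuming Key exposes getters and A has the four-argument constructor used above:
Collection<A> result = yourListOfA.stream()
        .collect(Collectors.groupingBy(
                x -> new Key(x.getProperty1(), x.getProperty2()),
                Collectors.teeing(
                        Collectors.summingDouble(A::getProperty3),
                        Collectors.averagingDouble(A::getProperty4),
                        AbstractMap.SimpleEntry::new))) // (sum, average) per key
        .entrySet().stream()
        .map(e -> new A(e.getKey().getProperty1(), e.getKey().getProperty2(),
                e.getValue().getKey(), e.getValue().getValue()))
        .collect(Collectors.toList());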
Try like this:
Map<A, List<A>> map = aList
    .stream()
    .collect(Collectors.groupingBy(item -> new A(item.property1, item.property2)));

List<A> result = map.entrySet().stream()
    .map(entry -> new A(entry.getValue().get(0).property1, entry.getValue().get(0).property2)
        .avgProperty4(entry.getValue())
        .sumProperty3(entry.getValue()))
    .collect(Collectors.toList());
and create avgProperty4 and sumProperty3 methods similar to this:
public A sumProperty3(List<A> a){
this.property3 = a.stream().mapToDouble(A::getProperty3).sum();
return this;
}
public A avgProperty4(List<A> a){
this.property4 = a.stream().mapToDouble(A::getProperty4).average().getAsDouble();
return this;
}
result = aList.stream().collect(Collectors
    .groupingBy(item -> new A(item.property1, item.property2),
        Collectors.collectingAndThen(Collectors.toList(), list ->
            new A(list.get(0).property1, list.get(0).property2)
                .avgProperty4(list)
                .sumProperty3(list))
    )
);
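Note that grouping by new A(item.property1, item.property2) only forms the intended groups if A's equals and hashCode are based on property1 and property2. If they are not, the Key class from the question can serve as the classifier instead (assuming Key implements equals and hashCode), e.g.:
Map<Key, List<A>> map = aList.stream()
        .collect(Collectors.groupingBy(item ->
                Key.valueOf(item.getProperty1(), item.getProperty2())));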
I have a POJO that looks something like this:
public class Account {
private Integer accountId;
private List<String> contacts;
}
The equals and hashCode methods use the accountId field to determine uniqueness, so any Accounts with the same accountId are equal regardless of what contacts contains.
I have a List of accounts and there are some duplicates with the same accountId. How do I use Java 8 Stream API to merge these duplicates together?
For example, the list of account contains:
+-----------+----------+
| accountId | contacts |
+-----------+----------+
| 1 | {"John"} |
| 1 | {"Fred"} |
| 2 | {"Mary"} |
+-----------+----------+
And I want it to produce a list of accounts like this:
+-----------+------------------+
| accountId | contacts |
+-----------+------------------+
| 1 | {"John", "Fred"} |
| 2 | {"Mary"} |
+-----------+------------------+
Use Collectors.toMap (Ref: https://docs.oracle.com/javase/8/docs/api/java/util/stream/Collectors.html#toMap-java.util.function.Function-java.util.function.Function-java.util.function.BinaryOperator-):
@lombok.Value
class Account {
Integer accountId;
List<String> contacts;
}
List<Account> accounts = new ArrayList<>();
//Fill
List<Account> result = new ArrayList<>(accounts.stream()
.collect(
Collectors.toMap(Account::getAccountId, Function.identity(), (Account account1, Account account2) -> {
account1.getContacts().addAll(account2.getContacts());
account2.getContacts().clear();
return account1;
})
)
.values());
A clean Stream API solution can be quite complicated, so perhaps you're better off with a Collection API solution that has fewer constraints to obey.
HashMap<Integer, Account> tmp = new HashMap<>();
listOfAccounts.removeIf(a -> a != tmp.merge(a.getAccountId(), a, (o,n) -> {
o.getContacts().addAll(n.getContacts());
return o;
}));
This directly removes all elements with a duplicate id from the list after having added their contacts to the first account of that id.
Of course, this assumes that the list supports removal and the list returned by getContacts() is a reference to the stored list and supports adding elements.
The solution is built around Map.merge which will add the specified object if the key didn’t exist or evaluates the merge function if the key already existed. The merge function returns the old object after having added the contacts, so we can do a reference comparison (a != …) to determine that we have a duplicate that should be removed.
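As a minimal, standalone illustration of the Map.merge semantics this relies on (not part of the solution itself):
Map<Integer, String> m = new HashMap<>();
// key absent: the value is simply put, and merge returns it
m.merge(1, "a", String::concat);   // map is now {1=a}
// key present: the remapping function combines the old and the new value
m.merge(1, "b", String::concat);   // map is now {1=ab}, merge returns "ab"
// the reference comparison (a != tmp.merge(...)) above exploits exactly this return value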
You could add two constructors and a merge method to the Account class that would combine contacts:
public class Account {
private final Integer accountId;
private List<String> contacts = new ArrayList<>();
public Account(Integer accountId) {
this.accountId = accountId;
}
// Copy constructor
public Account(Account another) {
this.accountId = another.accountId;
this.contacts = new ArrayList<>(another.contacts);
}
public Account merge(Account another) {
this.contacts.addAll(another.contacts);
return this;
}
// TODO getters and setters
}
Then, you have a few alternatives. One is to use Collectors.toMap to collect accounts to a map, grouping by accountId and merging the contacts of the accounts with equal accountId by means of the Account.merge method. Finally, get the values of the map:
Collection<Account> result = accounts.stream()
.collect(Collectors.toMap(
Account::getAccountId, // group by accountId (keys)
Account::new, // use copy constructor (values)
Account::merge)) // merge values with equal key
.values();
You need to use the copy constructor for the values, otherwise you would mutate the accounts of the original list when Account.merge is invoked.
An equivalent way (without streams) would be to use the Map.merge method:
Map<Integer, Account> map = new HashMap<>();
accounts.forEach(a ->
map.merge(a.getAccountId(), new Account(a), Account::merge));
Collection<Account> result = map.values();
Again, you need to use the copy constructor to avoid undesired mutations on the accounts of the original list.
A third alternative which is more optimized (because it doesn't create a new account for every element of the list) consists of using the Map.computeIfAbsent method:
Map<Integer, Account> map = new HashMap<>();
accounts.forEach(a -> map.computeIfAbsent(
a.getAccountId(), // group by accountId (keys)
Account::new) // invoke new Account(accountId) if absent
.merge(a)); // merge account's contacts
Collection<Account> result = map.values();
All the alternatives above return a Collection<Account>. If you need a List<Account> instead, you can do:
List<Account> list = new ArrayList<>(result);
I've got some working, inelegant code here:
The custom object is:
public class Person {
    private int id;
    public int getId() { return this.id; }
}
And I have a class containing a Set<Person> allPersons with all available subjects. I want to extract a new Set<Person> based upon one or more IDs of my choosing. I've written something which works using a nested enhanced for loop, but it strikes me as inefficient and makes a lot of unnecessary comparisons. I'm getting used to working with Java 8, but can't quite figure out how to compare the Set against an array. Here is my working, but verbose code:
public class MyProgram {
private Set<Person> allPersons; // contains 100 people with Ids 1-100
public Set<Person> getPersonById(int[] ids) {
Set<Person> personSet = new HashSet<>(); // or any type of set
for (int i : ids) {
for (Person p : allPersons) {
if (p.getId() == i) {
personSet.add(p);
}
}
}
return personSet;
}
}
And to get my result, I'd call something along the lines of:
Set<Person> resultSet = getPersonById(new int[]{2, 56, 66});
//resultSet would then contain 3 people with the corresponding ID
My question is: how would I convert the getPersonById method into something that streams allPersons and matches the ID against any of the ints in its parameter array? I thought of some filter operation, but since the parameter is an array, I couldn't work out how to match against more than one value.
The working answer to this is:
return allPersons.stream()
.filter(p -> (Arrays.stream(ids).anyMatch(i -> i == p.getId())) )
.collect(Collectors.toSet());
However, the bottom half of @Flown's suggestion would also work (and work much more efficiently) if the program were designed to hold a Map.
As you said, you can introduce a Stream::filter step using a Stream::anyMatch operation.
public Set<Person> getPersonById(int[] ids) {
Objects.requireNonNull(ids);
if (ids.length == 0) {
return Collections.emptySet();
}
return allPersons.stream()
.filter(p -> IntStream.of(ids).anyMatch(i -> i == p.getId()))
.collect(Collectors.toSet());
}
If the method is called more often, it would be a good idea to map each Person to its id in a Map<Integer, Person>. The advantage is that the lookup is much faster than iterating over the whole set of Person. Then your algorithm may look like this:
private Map<Integer, Person> idMapping;
public Set<Person> getPersonById(int[] ids) {
Objects.requireNonNull(ids);
return IntStream.of(ids)
.filter(idMapping::containsKey)
.mapToObj(idMapping::get)
.collect(Collectors.toSet());
}