I have the map of maps below and want to filter it based on a value. The result should be assigned back to the same map. Please let me know the best approach for this.
Map<String, Map<String, Employee>> employeeMap;
<
dep1, <"empid11", employee11> <"empid12",employee12>
dep2, <"empid21", employee21> <"empid22",employee22>
>
Filter: employee.getState="MI"
I tried the following, but I was not able to access the employee object:
currentMap = currentMap.entrySet().stream()
    .filter(p -> p.getValue().getState().equals("MI"))
    .collect(Collectors.toMap(p -> p.getKey(), p -> p.getValue()));
If you want to modify the map in place (and the map supports removal), you can use forEach to iterate over the entries of the map, and then use removeIf on the values of each inner map to remove the employees that satisfy the predicate:
employeeMap.forEach((k, v) -> v.values().removeIf(e -> e.getState().equals("MI")));
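Note that the snippet above removes the employees whose state is "MI"; if your goal is instead to keep only those employees (which is what the original filter suggests), a minimal variant, assuming the inner maps are mutable, simply negates the predicate:
employeeMap.forEach((k, v) -> v.values().removeIf(e -> !e.getState().equals("MI")));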
Otherwise, you can use the toMap collector, where the value-mapping function takes care of removing the concerned employees by streaming the entry set of the inner maps:
employeeMap =
    employeeMap.entrySet()
               .stream()
               .collect(toMap(Map.Entry::getKey,
                   e -> e.getValue().entrySet().stream()
                         .filter(emp -> !emp.getValue().getState().equals("MI"))
                         .collect(toMap(Map.Entry::getKey, Map.Entry::getValue))));
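Note that toMap here refers to a static import of Collectors.toMap; add the import below (or qualify the calls as Collectors.toMap):
import static java.util.stream.Collectors.toMap;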
I have a map Map<String, EnrollmentData> which maps a student ID to that student's data.
The student IDs need to be filtered on certain EnrollmentData attributes and returned as a Set.
Map<String, EnrollmentData> studentData = .........;
if (MapUtils.isNotEmpty(studentData)) {
    Set<String> idSet = studentData.entrySet().stream()
        .filter(x -> x.getValue().equals(...))
        .collect(Collectors.toSet(x -> x.getKey()));
}
However, this gives me a compilation error in the toSet call: [ Collectors is not applicable for the arguments ((x) -> {}) ].
What needs to be done here?
After the filtering, you have a Stream<Map.Entry<String, EnrollmentData>>. Collecting with toSet() (which accepts no arguments) would give you a Set of Entry<String, EnrollmentData>, but you want to map each element to its key before collecting.
You must first map the elements of the resulting stream to the entry's key:
.filter(yourFilterFunction)
.map(Map.Entry::getKey)
.collect(Collectors.toSet());
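Put together, a minimal sketch (hasCompleted() is only a hypothetical placeholder for whatever EnrollmentData condition you actually filter on):
Set<String> idSet = studentData.entrySet().stream()
        .filter(x -> x.getValue().hasCompleted())   // hypothetical predicate on EnrollmentData
        .map(Map.Entry::getKey)
        .collect(Collectors.toSet());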
I need to convert Map<K, List<V>> to Map<V, List<K>>.
I've been struggling with this issue for some time.
It's obvious how to do the conversion Map<K, V> to Map<V, List<K>>:
.collect(Collectors.groupingBy(
        Map.Entry::getValue,
        Collectors.mapping(Map.Entry::getKey, Collectors.toList())
));
But I can't find a solution to the initial issue. Is there some easy-to-read Java 8 way to do it?
I think you were close; you would need to flatMap those entries to a Stream of key/value pairs and collect from there. I've used the already present AbstractMap.SimpleEntry, but you could use a Pair of some kind too.
initialMap.entrySet()
.stream()
.flatMap(entry -> entry.getValue().stream().map(v -> new SimpleEntry<>(entry.getKey(), v)))
.collect(Collectors.groupingBy(
Entry::getValue,
Collectors.mapping(Entry::getKey, Collectors.toList())
));
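For example, with an initialMap of type Map<Integer, List<String>> (Map.of/List.of are Java 9+ factory methods, used here only for brevity), this is roughly what happens:
Map<Integer, List<String>> initialMap = Map.of(1, List.of("a", "b"), 2, List.of("a"));
Map<String, List<Integer>> inverted = initialMap.entrySet().stream()
        .flatMap(entry -> entry.getValue().stream()
                .map(v -> new AbstractMap.SimpleEntry<>(entry.getKey(), v)))
        .collect(Collectors.groupingBy(Map.Entry::getValue,
                Collectors.mapping(Map.Entry::getKey, Collectors.toList())));
System.out.println(inverted); // e.g. {a=[1, 2], b=[1]} (map and list order are not guaranteed)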
Well, if you don't want the extra overhead of creating those SimpleEntry instances, you could do it a bit differently:
Map<Integer, List<String>> result = new HashMap<>();
initialMap.forEach((key, values) -> {
values.forEach(value -> result.computeIfAbsent(value, x -> new ArrayList<>()).add(key));
});
Model:
public class AgencyMapping {
private Integer agencyId;
private String scoreKey;
}
public class AgencyInfo {
private Integer agencyId;
private Set<String> scoreKeys;
}
My code:
List<AgencyMapping> agencyMappings;
Map<Integer, AgencyInfo> agencyInfoByAgencyId = agencyMappings.stream()
.collect(groupingBy(AgencyMapping::getAgencyId,
collectingAndThen(toSet(), e -> e.stream().map(AgencyMapping::getScoreKey).collect(toSet()))))
.entrySet().stream().map(e -> new AgencyInfo(e.getKey(), e.getValue()))
.collect(Collectors.toMap(AgencyInfo::getAgencyId, identity()));
Is there a way to get the same result with simpler and faster code?
You can simplify the call to collectingAndThen(toSet(), e -> e.stream().map(AgencyMapping::getScoreKey).collect(toSet())) with a call to mapping(AgencyMapping::getScoreKey, toSet()).
Map<Integer, AgencyInfo> resultSet = agencyMappings.stream()
.collect(groupingBy(AgencyMapping::getAgencyId,
mapping(AgencyMapping::getScoreKey, toSet())))
.entrySet()
.stream()
.map(e -> new AgencyInfo(e.getKey(), e.getValue()))
.collect(toMap(AgencyInfo::getAgencyId, identity()));
A different way to see it using a toMap collector:
Map<Integer, AgencyInfo> resultSet = agencyMappings.stream()
.collect(toMap(AgencyMapping::getAgencyId, // key extractor
e -> new HashSet<>(singleton(e.getScoreKey())), // value extractor
(left, right) -> { // a merge function, used to resolve collisions between values associated with the same key
left.addAll(right);
return left;
}))
.entrySet()
.stream()
.map(e -> new AgencyInfo(e.getKey(), e.getValue()))
.collect(toMap(AgencyInfo::getAgencyId, identity()));
The latter example is arguably more complicated than the former. Nevertheless, your approach is pretty much the way to go apart from using mapping as opposed to collectingAndThen as mentioned above.
Apart from that, I don't see anything else you can simplify with the code shown.
As for faster code, if you're suggesting that your current approach performs poorly, you may want to read the answers here about when you should consider going parallel.
You are collecting to an intermediate map, then streaming the entries of this map to create AgencyInfo instances, which are finally collected to another map.
Instead of all this, you could use Collectors.toMap to collect directly to a map, mapping each AgencyMapping object to the desired AgencyInfo and merging the scoreKeys as needed:
Map<Integer, AgencyInfo> agencyInfoByAgencyId = agencyMappings.stream()
.collect(Collectors.toMap(
AgencyMapping::getAgencyId,
mapping -> new AgencyInfo(
mapping.getAgencyId(),
new HashSet<>(Set.of(mapping.getScoreKey()))),
(left, right) -> {
left.getScoreKeys().addAll(right.getScoreKeys());
return left;
}));
This works by grouping the AgencyMapping elements of the stream by AgencyMapping::getAgencyId, but storing AgencyInfo objects in the map instead. We get these AgencyInfo instances by manually mapping each original AgencyMapping object. Finally, we merge AgencyInfo instances that are already in the map by means of a merge function that folds the scoreKeys from one AgencyInfo into another.
I'm using Java 9's Set.of to create a singleton set. If you don't have Java 9, you can replace it with Collections.singleton.
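As a quick illustration, assuming AgencyMapping(agencyId, scoreKey) and AgencyInfo(agencyId, scoreKeys) constructors exist (they are not shown in the question):
List<AgencyMapping> agencyMappings = List.of(
        new AgencyMapping(1, "A"),
        new AgencyMapping(1, "B"),
        new AgencyMapping(2, "A"));
// Collecting as above yields something like:
// {1=AgencyInfo(agencyId=1, scoreKeys=[A, B]), 2=AgencyInfo(agencyId=2, scoreKeys=[A])}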
Iterator<Rate> rateIt = rates.iterator();
int lastRateOBP = 0;
while (rateIt.hasNext())
{
Rate rate = rateIt.next();
int currentOBP = rate.getPersonCount();
if (currentOBP == lastRateOBP)
{
rateIt.remove();
continue;
}
lastRateOBP = currentOBP;
}
How can I convert the above code into a Java 8 stream/lambda version, e.g. list.stream().filter()...? I still need the operation to modify the list.
The simplest solution is
Set<Integer> seen = new HashSet<>();
rates.removeIf(rate -> !seen.add(rate.getPersonCount()));
It utilizes the fact that Set.add returns false if the value is already in the Set, i.e. it has already been encountered. Since these are the elements you want to remove, all you have to do is negate it.
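A tiny illustration of that Set.add contract:
Set<Integer> seen = new HashSet<>();
seen.add(42);  // true  -> first occurrence, kept
seen.add(42);  // false -> duplicate, so !seen.add(...) is true and removeIf drops it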
If keeping an arbitrary Rate instance for each group with the same person count is sufficient, there is no sorting needed for this solution.
Like with your original Iterator-based solution, it relies on the mutability of your original Collection.
If you really want distinct and sorted as you say in your comments, then it is as simple as:
TreeSet<Rate> sorted = rates.stream()
.collect(Collectors.toCollection(() ->
new TreeSet<>(Comparator.comparing(Rate::getPersonCount))));
But notice that in your example with an iterator you are not removing all duplicates, only duplicates that are consecutive (I've exemplified that in the comment to your question).
EDIT
It seems that you want distinct by a Function; in simpler words, you want elements distinct by personCount, but in case of a clash you want to keep the one with the maximum los.
Such a thing is not yet available in the JDK, but it might be added; see this.
Since you want them sorted and distinct by key, we can emulate that with:
Collection<Rate> sorted = rates.stream()
.collect(Collectors.toMap(Rate::getPersonCount,
Function.identity(),
(left, right) -> {
return left.getLos() > right.getLos() ? left : right;
},
TreeMap::new))
.values();
System.out.println(sorted);
On the other hand, if you absolutely need to return a TreeSet to actually denote that these are unique and sorted elements:
TreeSet<Rate> sorted = rates.stream()
.collect(Collectors.collectingAndThen(
Collectors.toMap(Rate::getPersonCount,
Function.identity(),
(left, right) -> {
return left.getLos() > right.getLos() ? left : right;
},
TreeMap::new),
map -> {
TreeSet<Rate> set = new TreeSet<>(Comparator.comparing(Rate::getPersonCount));
set.addAll(map.values());
return set;
}));
This should work if your Rate type has natural ordering (i.e. implements Comparable):
List<Rate> l = rates.stream()
.distinct()
.sorted()
.collect(Collectors.toList());
If not, use a lambda as a custom comparator:
List<Rate> l = rates.stream()
.distinct()
.sorted( (r1,r2) -> ...some code to compare two rates... )
.collect(Collectors.toList());
It may be possible to remove the call to sorted if you just need to remove duplicates.
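Note that distinct() relies on Rate's equals/hashCode, so it only deduplicates by person count if Rate defines equality that way. As one possible comparator (an assumption, since the question does not say how rates should be ordered), you could sort by person count:
List<Rate> l = rates.stream()
        .distinct()
        .sorted(Comparator.comparing(Rate::getPersonCount))
        .collect(Collectors.toList());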
I have a map built as below (see the related question "Finding average using Lambda (Double stored as String)"):
Map<String, Double> averages=mapOfIndicators.values().stream()
.flatMap(Collection::stream)
.filter(objectDTO -> !objectDTO.getNewIndex().isEmpty())
.collect(Collectors.groupingBy(ObjectDTO::getCountryName,
Collectors.mapping(ObjectDTO::getNewIndex,
Collectors.averagingDouble(Double::parseDouble))));
I would like to ignore the entire country mapping if even one of the newIndex values for that country is empty.
Since Collectors.groupingBy does not allow skipping groups, you either have to analyze the filtering condition in advance so you can filter before performing the groupingBy, or filter the map afterwards (I'm ignoring the third option: implementing your own groupingBy collector).
Analyze in advance:
Map<String, Boolean> hasEmpty=mapOfIndicators.values().stream()
.flatMap(Collection::stream)
.collect(Collectors.groupingBy(ObjectDTO::getCountryName,
Collectors.mapping(o->o.getNewIndex().isEmpty(),
Collectors.reducing(false, Boolean::logicalOr))));
Map<String, Double> averages=mapOfIndicators.values().stream()
.flatMap(Collection::stream)
.filter(objectDTO -> !hasEmpty.get(objectDTO.getCountryName()))
.collect(Collectors.groupingBy(ObjectDTO::getCountryName,
Collectors.mapping(ObjectDTO::getNewIndex,
Collectors.averagingDouble(Double::parseDouble))));
Filter the result:
Map<String, Double> averages = mapOfIndicators.values().stream()
    .flatMap(Collection::stream)
    .collect(Collectors.collectingAndThen(
        Collectors.groupingBy(ObjectDTO::getCountryName,
            Collectors.mapping(ObjectDTO::getNewIndex, Collectors.averagingDouble(
                // an empty newIndex poisons that country's average with NaN
                s -> s.isEmpty() ? Double.NaN : Double.parseDouble(s)))),
        // then drop every country whose average became NaN
        m -> { m.values().removeIf(d -> Double.isNaN(d)); return m; }));