java 8 - print a map sorted by key

I am printing a map sorted by key, going through an intermediate LinkedHashMap, as follows:
LinkedHashMap<String, AtomicInteger> sortedMap = wcMap.entrySet().stream()
.sorted(Map.Entry.comparingByKey())
.collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue,
(oldValue, newValue) -> oldValue, LinkedHashMap::new));
sortedMap.forEach((k, v) -> System.out.println(String.format("%s ==>> %d",k, v.get())));
How can I print it directly from the stream before collecting?

In case you are not interested in the collected LinkedHashMap:
wcMap.entrySet().stream()
.sorted(Map.Entry.comparingByKey())
.forEachOrdered(e -> System.out.println(String.format("%s ==>> %d", e.getKey(), e.getValue().get())));
Or even better:
wcMap.entrySet().stream()
.sorted(Map.Entry.comparingByKey())
.map(e -> String.format("%s ==>> %d", e.getKey(), e.getValue().get()))
.forEachOrdered(System.out::println);
In case you still want the resulting LinkedHashMap, use peek():
wcMap.entrySet().stream()
.sorted(Map.Entry.comparingByKey())
.peek(e -> System.out.println(String.format("%s ==>> %d", e.getKey(), e.getValue().get())))
.collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue,
(oldValue, newValue) -> oldValue, LinkedHashMap::new));

You cannot use forEach before collecting, because forEach is a terminal operation: it consumes the stream, so you can no longer collect afterwards.
You can either use the peek intermediate operation to perform an action on each element as it flows past a certain point in the pipeline (mainly intended to support debugging) and then collect, or collect first and then apply forEach as you've done.
Example with peek:
LinkedHashMap<String, AtomicInteger> sortedMap = wcMap.entrySet().stream()
.sorted(Map.Entry.comparingByKey())
.peek(e -> System.out.println(String.format("%s ==>> %d", e.getKey(), e.getValue().get())))
.collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue,
(oldValue, newValue) -> oldValue, LinkedHashMap::new));
Also, if you're only interested in printing the data, there is no need to dump the result into a Map instance at all. Just chain a forEach terminal operation after the sorted operation and print the data.
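Putting the pieces together, here is a self-contained sketch of the print-directly approach; the wcMap contents are made up for illustration:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.stream.Collectors;

public class SortedPrintDemo {
    // Formats the entries in key order without building an intermediate map.
    static List<String> sortedLines(Map<String, AtomicInteger> wcMap) {
        return wcMap.entrySet().stream()
                .sorted(Map.Entry.comparingByKey())
                .map(e -> String.format("%s ==>> %d", e.getKey(), e.getValue().get()))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        Map<String, AtomicInteger> wcMap = new HashMap<>();
        wcMap.put("banana", new AtomicInteger(2));
        wcMap.put("apple", new AtomicInteger(5));
        sortedLines(wcMap).forEach(System.out::println); // apple first, then banana
    }
}
```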

Resolve, Incompatible types error, java 8

I am trying to implement one logic using java 8 streams().
List<Persons> persons = logs.stream().map(l -> {
return rules.stream().map(rule -> generator.apply(rule)).collect(Collectors.toList());
}).collect(Collectors.toList());
But I am getting:
Incompatible type: Required List but collect was inferred to R
i.e. a List of List of Persons.
If
l -> {return rules.stream().map(rule -> generator.apply(rule)).collect(Collectors.toList());}
produces a List<Person>, the outer Stream pipeline would produce a List<List<Person>>.
You need flatMap if you want a List<Person>:
List<Persons> persons =
logs.stream()
.flatMap(l -> rules.stream().flatMap(rule -> generator.apply(rule).stream()))
.collect(Collectors.toList());
Explanation:
rules.stream().flatMap(rule -> generator.apply(rule).stream()) takes the Stream<String> of rules and flat-maps each rule's generated list into a single Stream<Persons>.
.flatMap(l -> rules.stream().flatMap(rule -> generator.apply(rule).stream())) flat maps the elements of the outer Stream to a Stream<Persons>, which can be collected to a List<Persons>.
BTW, it's not clear how the input logs are related to the output, since you are ignoring the elements of the logs.stream() Stream in your mapping.
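To see why flatMap is needed here, a minimal self-contained example (using Integer lists in place of the Persons produced by the generator) contrasts map, which keeps the nesting, with flatMap, which flattens it:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class FlatMapDemo {
    // map keeps the nesting: each inner list becomes one element of the result.
    static List<List<Integer>> nested(List<List<Integer>> lists) {
        return lists.stream().map(l -> l).collect(Collectors.toList());
    }

    // flatMap streams each inner list, flattening everything into one stream.
    static List<Integer> flat(List<List<Integer>> lists) {
        return lists.stream().flatMap(List::stream).collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<List<Integer>> input = Arrays.asList(Arrays.asList(1, 2), Arrays.asList(3));
        System.out.println(nested(input)); // [[1, 2], [3]]
        System.out.println(flat(input));   // [1, 2, 3]
    }
}
```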

Group by Map Value Item [duplicate]

I need to convert Map<K, List<V>> to Map<V, List<K>>.
I've been struggling with this issue for some time.
It's obvious how to do conversion Map<K, V> to Map<V, List<K>>:
.collect(Collectors.groupingBy(
Map.Entry::getValue,
Collectors.mapping(Map.Entry::getKey, toList())
))
But I can't solve the initial issue. Is there some easy-to-read Java 8 way to do it?
I think you were close; you need to flatMap those entries to a Stream and collect from there. I've used the existing SimpleEntry, but you could use some kind of Pair too.
initialMap.entrySet()
.stream()
.flatMap(entry -> entry.getValue().stream().map(v -> new SimpleEntry<>(entry.getKey(), v)))
.collect(Collectors.groupingBy(
Entry::getValue,
Collectors.mapping(Entry::getKey, Collectors.toList())
));
Well, if you don't want the extra overhead of those SimpleEntry instances, you could do it a bit differently:
Map<Integer, List<String>> result = new HashMap<>();
initialMap.forEach((key, values) -> {
values.forEach(value -> result.computeIfAbsent(value, x -> new ArrayList<>()).add(key));
});
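Both approaches can be wrapped up in a runnable sketch; invert below uses the SimpleEntry variant (the sample map contents are made up):

```java
import java.util.AbstractMap.SimpleEntry;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class InvertMapDemo {
    // Inverts a Map<K, List<V>> into a Map<V, List<K>> by flattening the
    // entries into a stream of (key, value) pairs and grouping by value.
    static <K, V> Map<V, List<K>> invert(Map<K, List<V>> initialMap) {
        return initialMap.entrySet().stream()
                .flatMap(entry -> entry.getValue().stream()
                        .map(v -> new SimpleEntry<>(entry.getKey(), v)))
                .collect(Collectors.groupingBy(
                        Map.Entry::getValue,
                        Collectors.mapping(Map.Entry::getKey, Collectors.toList())));
    }

    public static void main(String[] args) {
        Map<Integer, List<String>> input = new HashMap<>();
        input.put(1, Arrays.asList("a", "b"));
        input.put(2, Arrays.asList("a"));
        System.out.println(invert(input)); // a -> [1, 2], b -> [1]
    }
}
```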

Caching Java 8 stream

Suppose I have a list which I perform multiple stream operations on.
bobs = myList.stream()
.filter(person -> person.getName().equals("Bob"))
.collect(Collectors.toList())
...
and
tonies = myList.stream()
.filter(person -> person.getName().equals("tony"))
.collect(Collectors.toList())
Can I not just do:
Stream<Person> stream = myList.stream();
which then means I can do:
bobs = stream.filter(person -> person.getName().equals("Bob"))
.collect(Collectors.toList())
tonies = stream.filter(person -> person.getName().equals("tony"))
.collect(Collectors.toList())
No, you can't. A Stream can only be used once; it will throw the error below when you try to reuse it:
java.lang.IllegalStateException: stream has already been operated upon or closed
at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:229)
As per Java Docs:
A stream should be operated on (invoking an intermediate or terminal stream operation) only once.
But a neat solution to your problem is to use a Stream Supplier. It looks like this:
Supplier<Stream<Person>> streamSupplier = myList::stream;
bobs = streamSupplier.get().filter(person -> person.getName().equals("Bob"))
.collect(Collectors.toList())
tonies = streamSupplier.get().filter(person -> person.getName().equals("tony"))
.collect(Collectors.toList())
But again, every get() call will return a new stream.
No you can't; the doc says:
A stream should be operated on (invoking an intermediate or terminal
stream operation) only once.
But you can use a single stream by filtering all elements you want once and then group them the way you need:
Set<String> names = ...; // construct a set containing bob, tony, etc.
Map<String, List<Person>> r = myList.stream()
.filter(p -> names.contains(p.getName()))
.collect(Collectors.groupingBy(Person::getName));
List<Person> tonies = r.get("tony");
List<Person> bobs = r.get("bob");
Well, what you can do in your case is generate dynamic stream pipelines. Assuming that the only variable in your pipeline is the name of the person that you filter by.
We can represent this as a Function<String, Stream<Person>> as in the following :
final Function<String, Stream<Person>> pipelineGenerator = name -> persons.stream().filter(person -> Objects.equals(person.getName(), name));
final List<Person> bobs = pipelineGenerator.apply("bob").collect(Collectors.toList());
final List<Person> tonies = pipelineGenerator.apply("tony").collect(Collectors.toList());
As already mentioned a given stream should be operated upon only once.
I can understand the "idea" of caching a reference to an object if you're going to refer to it more than once, or to simply avoid creating more objects than necessary.
However, you should not be concerned when invoking myList.stream() every time you need to query again as creating a stream, in general, is a cheap operation.
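The points above can be demonstrated in a runnable sketch; the list of names is made up, and byName wraps the supplier pattern from the answers:

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.Supplier;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class StreamReuseDemo {
    // Collects every element equal to the given name, taking a fresh
    // stream from the supplier on each call.
    static List<String> byName(Supplier<Stream<String>> streams, String name) {
        return streams.get().filter(name::equals).collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> people = Arrays.asList("Bob", "tony", "Bob");

        Stream<String> shared = people.stream();
        shared.count(); // first terminal operation consumes the stream
        try {
            shared.count(); // second use fails
        } catch (IllegalStateException e) {
            System.out.println("reuse failed: " + e.getMessage());
        }

        Supplier<Stream<String>> supplier = people::stream;
        System.out.println(byName(supplier, "Bob"));  // two Bobs
        System.out.println(byName(supplier, "tony")); // one tony
    }
}
```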

java 8 not seeing desired output with using map with filter

I see a different result when using map with filter than when using forEach with filter:
public class test1
{
public static void main (String[] args) throws java.lang.Exception
{
// your code goes here
Map<String, String> map = new HashMap<>();
map.put("a", "a");
map.put("c", "a");
Set<String> vs = new HashSet<>();
vs.add("b");
vs.add("c");
List<String> list = new ArrayList<>();
vs.stream()
.filter(a -> map.containsKey(a))
.map(c -> list.add(c));
System.out.println("here "+ list.size());
vs.stream()
.filter(a -> map.containsKey(a))
.forEach(c -> list.add(c));
System.out.println("here "+ list.size());
}
}
here is the output:
here 0
here 1
can somebody explain?
Terminal operations produce a non-stream result, such as a primitive value, a collection, or no value at all. Terminal operations are typically preceded by intermediate operations which return another Stream, allowing operations to be chained in the form of a query. e.g. forEach()
Intermediate operations return another Stream, which allows you to chain multiple operations in the form of a query. Intermediate operations do not get executed until a terminal operation is invoked, since there is a possibility they could be processed together when a terminal operation is executed. e.g. map()
In the following code, you didn't invoke a terminal operation at the end, such as forEach() or collect(). That's why c -> list.add(c) isn't executed, and neither is .filter(a -> map.containsKey(a)).
vs.stream()
.filter(a -> map.containsKey(a))
.map(c -> list.add(c));
Examine the result after using the following code snippet instead of the one above:
vs.stream()
.filter(a -> map.containsKey(a)) // intermediate
.map(t -> list.add(t)) // intermediate
.collect(Collectors.toList()); // terminal
Your first stream needs a terminal operation; without one, map is not executed.
The second stream has a terminal operation (forEach), so the whole pipeline is executed.
Add a terminal operation to the first stream (like count()) and you will see:
here 1
here 2
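The laziness can be demonstrated directly. In the sketch below, sideEffects runs the same filter/map pipeline with and without a terminal operation and counts how many elements actually flowed through map:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.stream.Stream;

public class LazyStreamDemo {
    // Returns how many elements passed through map(), which records each
    // element it sees as a side effect.
    static int sideEffects(boolean addTerminal) {
        List<String> seen = new ArrayList<>();
        Stream<String> pipeline = Arrays.asList("b", "c").stream()
                .filter(s -> s.compareTo("a") > 0)
                .map(s -> { seen.add(s); return s; }); // side effect inside map
        if (addTerminal) {
            pipeline.forEach(s -> { });                // terminal op triggers execution
        }
        return seen.size();
    }

    public static void main(String[] args) {
        System.out.println(sideEffects(false)); // 0 - nothing was executed
        System.out.println(sideEffects(true));  // 2 - map ran for both elements
    }
}
```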

Java - Lambda filter criteria, to ignore adding to map

I have a map built in the following way (see Finding average using Lambda (Double stored as String)):
Map<String, Double> averages=mapOfIndicators.values().stream()
.flatMap(Collection::stream)
.filter(objectDTO -> !objectDTO.getNewIndex().isEmpty())
.collect(Collectors.groupingBy(ObjectDTO::getCountryName,
Collectors.mapping(ObjectDTO::getNewIndex,
Collectors.averagingDouble(Double::parseDouble))));
I would like to skip the entire country mapping if even one newIndex value for that country is empty.
Since Collectors.groupingBy does not allow skipping groups, you either have to analyze the filtering condition in advance, so you can filter before performing the groupingBy, or filter the map afterwards (I'll ignore the third option, implementing your own groupingBy collector).
Analyze in advance:
Map<String, Boolean> hasEmpty=mapOfIndicators.values().stream()
.flatMap(Collection::stream)
.collect(Collectors.groupingBy(ObjectDTO::getCountryName,
Collectors.mapping(o->o.getNewIndex().isEmpty(),
Collectors.reducing(false, Boolean::logicalOr))));
Map<String, Double> averages=mapOfIndicators.values().stream()
.flatMap(Collection::stream)
.filter(objectDTO -> !hasEmpty.get(objectDTO.getCountryName()))
.collect(Collectors.groupingBy(ObjectDTO::getCountryName,
Collectors.mapping(ObjectDTO::getNewIndex,
Collectors.averagingDouble(Double::parseDouble))));
Filter the result:
Map<String, Double> averages=mapOfIndicators.values().stream()
.flatMap(Collection::stream)
.collect(Collectors.collectingAndThen(
Collectors.groupingBy(ObjectDTO::getCountryName,
Collectors.mapping(ObjectDTO::getNewIndex, Collectors.averagingDouble(
s->s.isEmpty()? Double.NaN: Double.parseDouble(s)))),
m->{ m.values().removeIf(d->Double.isNaN(d)); return m; }));
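A runnable sketch of the filter-the-result variant, with a minimal stand-in Dto class in place of ObjectDTO (field names and sample data are assumptions, not from the original code); NaN acts as a poison value that survives averaging, so any country with an empty index is removed at the end:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class SkipGroupDemo {
    // Hypothetical stand-in for ObjectDTO: a country name plus a numeric
    // index stored as text, possibly empty.
    static class Dto {
        final String country, newIndex;
        Dto(String country, String newIndex) { this.country = country; this.newIndex = newIndex; }
    }

    // Averages per country, then drops any country that produced NaN,
    // i.e. any country that had at least one empty index.
    static Map<String, Double> averages(List<Dto> dtos) {
        return dtos.stream().collect(Collectors.collectingAndThen(
                Collectors.groupingBy(d -> d.country,
                        Collectors.averagingDouble(
                                d -> d.newIndex.isEmpty() ? Double.NaN
                                                          : Double.parseDouble(d.newIndex))),
                m -> { m.values().removeIf(v -> Double.isNaN(v)); return m; }));
    }

    public static void main(String[] args) {
        List<Dto> dtos = Arrays.asList(
                new Dto("DE", "2.0"), new Dto("DE", "4.0"),
                new Dto("FR", "1.0"), new Dto("FR", "")); // FR has an empty index
        System.out.println(averages(dtos)); // only DE survives, with average 3.0
    }
}
```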
