Convert for loop into Java 8 Stream

I need to convert this code to a Java 8 stream. I tried it with the code below, but I still haven't got what I wanted.
// contractList is a list of Contract objects
// contract.getProgramId() returns a String
// contract.getEnrollmentID() returns a String
// 'usage = CommonUtils.getUsageType()' is another service call which returns a String
// enroll and usage are of type String
// enrollNoWithUsageTypeJson is a JSON object: '{"enroll": value, "usage": value}'
// usages is a List<JSONObject> to which enrollNoWithUsageTypeJson needs to be added
for (Contract contract : contractList) {
    if (!StringUtils.isEmpty(contract.getProgramId())) {
        enroll = contract.getEnrollmentID();
        usage = CommonUtils.getUsageType(envProperty, contract.getProgramId());
        if (!(StringUtils.isEmpty(enroll) || StringUtils.isEmpty(usage))) {
            enrollNoWithUsageTypeJson.put("enroll", enroll);
            enrollNoWithUsageTypeJson.put("usage", usage);
            usages.add(enrollNoWithUsageTypeJson);
        }
    }
}
This is what I have got so far:
contractList.stream()
        .filter(contract -> !StringUtils.isEmpty(contract) &&
                !StringUtils.isEmpty(contract.getProgramId()))
        .collect(Collectors.to);
Thank you in advance :)

Here is how a stream-based version of your code might look (add static imports as needed):
List<JSONObject> usages = contractList.stream()
        .filter(c -> isNotEmpty(c.getProgramId()))
        .map(c -> new SimpleEntry<>(c.getEnrollmentID(), getUsageType(envProperty, c.getProgramId())))
        .filter(e -> isNotEmpty(e.getKey()) && isNotEmpty(e.getValue()))
        .map(e -> {
            enrollNoWithUsageTypeJson.put("enroll", e.getKey());
            enrollNoWithUsageTypeJson.put("usage", e.getValue());
            return enrollNoWithUsageTypeJson;
        })
        .collect(toList());
I took the liberty of using isNotEmpty from Apache Commons Lang, since !isEmpty reads poorly by comparison. I am (ab)using AbstractMap.SimpleEntry to hold a pair of values. If you feel getKey and getValue make the code less readable, you can introduce a class to hold these two variables, e.g.:
class EnrollUsage {
    String enroll, usage;
}
You may also prefer to define a method:
JSONObject withEnrollAndUsage(JSONObject json, String enroll, String usage) {
    json.put("enroll", enroll);
    json.put("usage", usage);
    return json;
}
and in the above use instead:
.map(e -> withEnrollAndUsage(enrollNoWithUsageTypeJson, e.getKey(), e.getValue()))
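For illustration, here is a sketch that combines the EnrollUsage holder with the withEnrollAndUsage helper. Note that it passes a fresh JSONObject per element instead of reusing the single enrollNoWithUsageTypeJson instance; whether that matches your intent is an assumption on my part:
List<JSONObject> usages = contractList.stream()
        .filter(c -> isNotEmpty(c.getProgramId()))
        .map(c -> {
            // hold the pair in the small EnrollUsage class instead of SimpleEntry
            EnrollUsage eu = new EnrollUsage();
            eu.enroll = c.getEnrollmentID();
            eu.usage = getUsageType(envProperty, c.getProgramId());
            return eu;
        })
        .filter(eu -> isNotEmpty(eu.enroll) && isNotEmpty(eu.usage))
        // assumes a fresh JSONObject per entry; swap in enrollNoWithUsageTypeJson to mirror the original
        .map(eu -> withEnrollAndUsage(new JSONObject(), eu.enroll, eu.usage))
        .collect(toList());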
Keep in mind that you never really "need" to convert code to use streams. There are cases where using streams, albeit intellectually satisfying, actually complicates your code. Exercise your best judgement in this case.

Related

Stream in java 8 for nested loop

I am trying to convert this piece of code to use stream and filter, but I am finding it really hard. Any help will be highly appreciated. Here is the code:
protected void processFacetData(final List<FacetData<SearchStateData>> facets) {
    final List<FacetValueData<SearchStateData>> facetValueEmptyDatas = new ArrayList<FacetValueData<SearchStateData>>();
    for (final FacetData<SearchStateData> facetData : facets)
    {
        final List<FacetValueData<SearchStateData>> facetValueDatas = facetData.getValues();
        for (final FacetValueData<SearchStateData> facetValueData : facetValueDatas)
        {
            if (facetValueData.getCount() == 0)
            {
                facetValueEmptyDatas.add(facetValueData);
            }
        }
        facetValueDatas.removeAll(facetValueEmptyDatas);
    }
}
There is no way I can test the following code based solely on the information in your question, but it looks like you need to use the flatMap method.
final List<FacetValueData<SearchStateData>> facetValueEmptyDatas = facets.stream()
        .flatMap(f -> f.getValues().stream())
        .filter(f -> f.getCount() == 0)
        .collect(Collectors.toList());
The flatMap method returns a single stream, which you can think of as the concatenation of all the elements in all the value lists of the facets in the method parameter.
Then you filter for all the values in that stream where method getCount returns zero.
Finally you collect all the filtered elements into a single list.
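If you also want to reproduce the removal step from the original loop (the removeAll at the end of each iteration), a short sketch using removeIf could look like this, assuming getValues() returns a mutable list, as the original removeAll call suggests:
// remove the zero-count values from each facet's list in place
facets.forEach(facetData ->
        facetData.getValues().removeIf(facetValueData -> facetValueData.getCount() == 0));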

How to Extract the String value from Mono/Flux

I am new to Reactor programming and need some help with Mono/Flux.
I have a POJO class:
Employee.java
class Employee {
    String name;

    String getName() { return name; } // getter used below
}
I have a Mono<Employee> m being returned when hitting a service, and I need to extract the name from the Mono as a String.
Mono<String> name = m.map(value -> value.getName());
but this again returns a Mono, not a String. I need to extract the String value from this Mono.
You should do something like this:
m.block().getName();
This solution doesn't take care of the null check.
A standard approach would be:
Employee e = m.block();
if (null != e) {
    e.getName();
}
But in the reactive style you could proceed with something like this:
Mono.just(new Employee().setName("Kill"))
        .switchIfEmpty(Mono.defer(() -> Mono.just(new Employee("Bill"))))
        .block()
        .getName();
Keep in mind that blocking operations should be avoided if possible: they block the reactive flow.
You should avoid block() because it blocks indefinitely until the next signal is received.
You should not think of the reactive container as something that is going to provide your program with an answer. Instead, you need to give it whatever you want to do with that answer. For example:
employeeMono.subscribe(value -> whatYouWantToDoWithName(value.getName()));
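Alternatively, if the name is only needed further down a reactive pipeline, keep it wrapped and pass a Mono<String> along instead of extracting the raw value. A sketch, assuming employeeMono is the Mono<Employee> from the question:
// map Mono<Employee> to Mono<String> and stay inside the pipeline
Mono<String> nameMono = employeeMono.map(Employee::getName);
// e.g. return nameMono from a WebFlux controller, or chain further operators on it
nameMono.subscribe(name -> System.out.println("name = " + name));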

How to keep non-nullable properties in late initialization

The issue is the following: in a client/server environment with Spring Boot and Kotlin, the client wants to create objects of type A and therefore posts the data through a RESTful endpoint to the server.
Entity A is realized as a data class in Kotlin like this:
data class A(val mandatoryProperty: String)
Business-wise that property (which is a primary key, too) must never be null. However, it is not known by the client, as it gets generated quite expensively by a Spring @Service bean on the server.
Now, at the endpoint, Spring tries to deserialize the client's payload into an object of type A; however, mandatoryProperty is unknown at that point in time, which results in a mapping exception.
There are several ways to circumvent that problem, none of which really amazes me:
1. Don't expect an object of type A at the endpoint, but a bunch of parameters describing A that are passed on until the entity has actually been created and mandatoryProperty is present. Quite cumbersome, actually, since there are a lot more properties than just that single one.
2. Quite similar to 1, but create a DTO. One of my favorites; however, since data classes can't be extended, it would mean duplicating the properties of A in the DTO (except for the mandatory property) and copying them over. Furthermore, when A grows, the DTO has to grow, too.
3. Make mandatoryProperty nullable and work with the !! operator throughout the code. Probably the worst solution, as it defeats the purpose of nullable and non-nullable variables.
4. The client sets a dummy value for mandatoryProperty which is replaced as soon as the property has been generated. However, A is validated by the endpoint, and therefore the dummy value must obey its @Pattern constraint. So each dummy value would be a valid primary key, which gives me a bad feeling.
Are there any other ways I might have overlooked that are more feasible?
I don't think there is a general-purpose answer to this... So I will just give you my 2 cents regarding your variants...
Your first variant has a benefit that no other really has, i.e. you will not use the given objects for anything other than what they were designed for (i.e. endpoint or backend purposes only), which, however, probably leads to cumbersome development.
The second variant is nice, but could lead to some other development errors, e.g. when you thought you were using the actual A but were actually operating on the DTO instead.
Variants 3 and 4 are in that regard similar to 2... you may use the object as A even though it only has the properties of a DTO.
So... if you want to go the safe route, i.e. no one should ever use this object for anything other than its specific purpose, you should probably use the first variant. 4 sounds rather like a hack. 2 and 3 are probably OK; 3 because you actually have no mandatoryProperty when you use it as a DTO...
Still, as you have your favorite (2) and I have one too, I will concentrate on 2 and 3, starting with 2, using a subclass approach with a sealed class as supertype:
sealed class AbstractA {
    // just some properties for demo purposes
    lateinit var sharedResettable: String
    abstract val sharedReadonly: String
}

data class A(
    val mandatoryProperty: Long = 0,
    override val sharedReadonly: String
    // we deliberately do not override the sharedResettable here... also for demo purposes only
) : AbstractA()

data class ADTO(
    // this has no mandatoryProperty
    override val sharedReadonly: String
) : AbstractA()
Some demo code, demonstrating the usage:
// just some random setup:
val a = A(123, "from backend").apply { sharedResettable = "i am from backend" }
val dto = ADTO("from dto").apply { sharedResettable = "i am dto" }

listOf(a, dto).forEach { anA ->
    // somewhere receiving an A... we do not know what it is exactly... it's just an AbstractA
    val param: AbstractA = anA
    println("Starting with: $param sharedResettable=${param.sharedResettable}")

    // set something on it... we do not mind yet what it is exactly...
    param.sharedResettable = UUID.randomUUID().toString()

    // now we want to store it... but wait... did we have an A here? or a newly created DTO?
    // let's check: (demo purposes again)
    when (param) {
        is ADTO -> store(param) // which now returns an A
        is A -> update(param)   // maybe it also updated our A, so a current A is returned
    }.also { certainlyA ->
        println("After saving/updating: $certainlyA sharedResettable=${certainlyA.sharedResettable /* this was deliberately not part of the data class toString() */}")
    }
}
// assume the following signatures for store & update:
fun <T> update(param: T): T
fun store(a: AbstractA): A
Sample output:
Starting with: A(mandatoryProperty=123, sharedReadonly=from backend) sharedResettable=i am from backend
After saving/updating: A(mandatoryProperty=123, sharedReadonly=from backend) sharedResettable=ef7a3dc0-a4ac-47f0-8a73-0ca0ef5069fa
Starting with: ADTO(sharedReadonly=from dto) sharedResettable=i am dto
After saving/updating: A(mandatoryProperty=127, sharedReadonly=from dto) sharedResettable=57b8b3a7-fe03-4b16-9ec7-742f292b5786
I did not yet show you the ugly part, but you already mentioned it yourself: how do you transform your ADTO to A and vice versa? I will leave that up to you. There are several approaches here (manually, using reflection or mapping utilities, etc.).
This variant cleanly separates the DTO-specific from the non-DTO-specific properties. However, it also leads to redundant code (all the override, etc.). But at least you know which object type you operate on and can set up signatures accordingly.
Something like 3 is probably easier to set up and to maintain (regarding the data class itself ;-)), and if you set the boundaries correctly it may even be clear when there is a null in there and when not... So here is that example too. Starting with a rather annoying variant first (annoying in the sense that it throws an exception when you try to access the variable if it wasn't set yet), but at least you spare the !! and null checks here:
data class B(
    val sharedOnly: String,
    var sharedResettable: String
) {
    // why nullable? Let it hurt ;-)
    lateinit var mandatoryProperty: ID // ok... Long is not usable with lateinit... that's why there is this ID instead
}

data class ID(val id: Long)
Demo:
val b = B("backend", "resettable")
// println(b.mandatoryProperty) // uh oh... this hurts now... UninitializedPropertyAccessException on the way
val newB = store(b)
println(newB.mandatoryProperty) // that's now fine...
But: even though accessing mandatoryProperty throws an exception, it is not visible in the toString, nor does it look nice if you need to check whether it has already been initialized (i.e. via ::mandatoryProperty.isInitialized).
So I will show you another variant (by now my favorite, but... it uses null):
data class C(
    val mandatoryProperty: Long?,
    val sharedOnly: String,
    var sharedResettable: String
) {
    // this is our DTO constructor:
    constructor(sharedOnly: String, sharedResettable: String) : this(null, sharedOnly, sharedResettable)

    fun hasID() = mandatoryProperty != null // or isDTO, etc. what you like/need
}

// note: you could extract the val and the method also into their own interface... then you would use an override on the mandatoryProperty above instead
// here is what such an interface may look like:
interface HasID {
    val mandatoryProperty: Long?
    fun hasID() = mandatoryProperty != null // or isDTO, etc. what you like/need
}
Usage:
val c = C("dto", "resettable") // C(mandatoryProperty=null, sharedOnly=dto, sharedResettable=resettable)
when {
    c.hasID() -> update(c)
    else -> store(c)
}.also { newC ->
    // from now on you should know that you are actually dealing with an object that has everything in place...
    println("$newC") // prints: C(mandatoryProperty=123, sharedOnly=dto, sharedResettable=resettable)
}
The last one has the benefit that you can use the copy method again, e.g.:
val myNewObj = c.copy(mandatoryProperty = 123) // well, you probably don't do that yourself...
// but the following might rather be a valid case:
val myNewDTO = c.copy(mandatoryProperty = null)
The last one is my favorite, as it needs the least code and uses a val instead (so no accidental override is possible, and you operate on a copy instead). You could also just add an accessor for mandatoryProperty if you do not like using ? or !!, e.g.:
fun getMandatoryProperty() = mandatoryProperty ?: throw Exception("You didn't set it!")
Finally, if you have some helper methods like hasID (or isDTO, or whatever) in place, it might also be clear from the context what exactly you are doing. The most important thing is probably to set up a convention that everyone understands, so they know when to apply what, or when to expect something specific.

Why is it possible to initialize java.util.function.Consumer with a lambda that returns a value? [duplicate]

I am confused by the following code
class LambdaTest {
    public static void main(String[] args) {
        Consumer<String> lambda1 = s -> {};
        Function<String, String> lambda2 = s -> s;
        Consumer<String> lambda3 = LambdaTest::consume; // but s -> s doesn't work!
        Function<String, String> lambda4 = LambdaTest::consume;
    }

    static String consume(String s) { return s; }
}
I would have expected the assignment of lambda3 to fail, as my consume method does not match the accept method in the Consumer interface: the return types are different, String vs. void.
Moreover, I always thought that there was a one-to-one relationship between lambda expressions and method references, but this is clearly not the case, as my example shows.
Could somebody explain to me what is happening here?
As Brian Goetz pointed out in a comment, the basis for the design decision was to allow adapting a method to a functional interface in the same way you can call the method, i.e. you can call every value-returning method and ignore the returned value.
When it comes to lambda expressions, things get a bit more complicated. There are two forms of lambda expressions, (args) -> expression and (args) -> { statements* }.
Whether the second form is void compatible depends on whether any code path attempts to return a value; e.g. () -> { return ""; } is not void compatible but value compatible, whereas () -> {} and () -> { return; } are void compatible. Note that () -> { for(;;); } and () -> { throw new RuntimeException(); } are both void compatible and value compatible, as they don't complete normally and there's no return statement.
The form (arg) -> expression is value compatible if the expression evaluates to a value. But there are also expressions which are statements at the same time. These expressions may have a side effect and can therefore be written as stand-alone statements, producing the side effect only and ignoring the result. Similarly, the form (arg) -> expression can be void compatible if the expression is also a statement.
An expression of the form s -> s can’t be void compatible as s is not a statement, i.e. you can’t write s -> { s; } either. On the other hand s -> s.toString() can be void compatible, because method invocations are statements. Similarly, s -> i++ can be void compatible as increments can be used as a statement, so s -> { i++; } is valid too. Of course, i has to be a field for this to work, not a local variable.
The Java Language Specification, §14.8 Expression Statements, lists all expressions which may be used as statements. Besides the already mentioned method invocations and increment/decrement operators, it names assignments and class instance creation expressions, so s -> foo = s and s -> new WhatEver(s) are void compatible too.
As a side note, the form (arg) -> methodReturningVoid(arg) is the only expression form that is not value compatible.
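A small self-contained sketch (class and variable names are mine, purely illustrative) showing those rules in action:
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

class VoidCompatibilityDemo {
    static List<String> sink = new ArrayList<>();

    public static void main(String[] args) {
        // a method invocation is a statement, so this lambda is void compatible
        // (the boolean returned by add is simply discarded)
        Consumer<String> c1 = s -> sink.add(s);

        // class instance creation is a statement expression too, hence void compatible
        Consumer<String> c2 = s -> new StringBuilder(s);

        // Consumer<String> c3 = s -> s; // does not compile: 's' alone is not a statement

        // void compatible AND value compatible: it never completes normally
        Runnable r = () -> { throw new RuntimeException(); };

        c1.accept("hello");
        c2.accept("world");
        System.out.println(sink); // prints [hello]
    }
}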
The consume(String) method matches the Consumer<String> interface because it consumes a String; the fact that it returns a value is irrelevant, as, in this case, it is simply ignored (the Consumer interface does not expect any return value at all).
It must have been a design choice, and basically a matter of utility: imagine how many methods would have to be refactored or duplicated to match the needs of functional interfaces like Consumer or even the very common Runnable. (Note that you can pass any method that consumes no parameters as a Runnable to an Executor, for example.)
Even methods like java.util.List#add(Object) return a value: boolean. Being unable to pass such method references just because they return something (that is mostly irrelevant in many cases) would be rather annoying.
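For example, a brief sketch (names are illustrative) of passing value-returning methods where void-returning functional interfaces are expected:
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.function.Consumer;

class IgnoredReturnDemo {
    public static void main(String[] args) {
        List<String> names = new ArrayList<>();

        // List.add returns boolean, yet the reference fits Consumer<String>: the result is ignored
        Consumer<String> adder = names::add;
        adder.accept("alpha");

        // a value-returning body also fits Runnable when handed to an Executor
        ExecutorService pool = Executors.newSingleThreadExecutor();
        pool.execute(() -> names.add("beta")); // boolean result discarded
        pool.shutdown();
    }
}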

Java 8 JPA Repository Stream produce two (or more) results?

I have a Java 8 stream being returned by a Spring Data JPA repository. I don't think my use case is all that unusual: there are two (actually three in my case) collections that I would like to collect from the resulting stream.
Set<Long> ids = // initialized
try (Stream<SomeDatabaseEntity> someDatabaseEntityStream =
         someDatabaseEntityRepository.findSomeDatabaseEntitiesStream(ids)) {
    Set<Long> theAlphaComponentIds = someDatabaseEntityStream
            .map(v -> v.getAlphaComponentId())
            .collect(Collectors.toSet());
    // operations on 'theAlphaComponentIds' here
}
I need to pull out the 'Beta' objects and do some work on those too. So I think I have to repeat the code, which seems completely wrong:
try (Stream<SomeDatabaseEntity> someDatabaseEntityStream =
         someDatabaseEntityRepository.findSomeDatabaseEntitiesStream(ids)) {
    Set<BetaComponent> theBetaComponents = someDatabaseEntityStream
            .map(v -> v.getBetaComponent())
            .collect(Collectors.toSet());
    // operations on 'theBetaComponents' here
}
These two code blocks occur serially in the processing. Is there a clean way to get both Sets from processing the Stream only once? Note: I do not want some kludgy solution that makes up a wrapper class for the Alphas and Betas, as they don't really belong together.
You can always refactor code by putting the common parts into a method and turning the uncommon parts into parameters. E.g.
public <T> Set<T> getAll(Set<Long> ids, Function<SomeDatabaseEntity, T> f)
{
    try (Stream<SomeDatabaseEntity> someDatabaseEntityStream =
             someDatabaseEntityRepository.findSomeDatabaseEntitiesStream(ids)) {
        return someDatabaseEntityStream.map(f).collect(Collectors.toSet());
    }
}
usable via
Set<Long> theAlphaComponentIds = getAll(ids, v -> v.getAlphaComponentId());
// operations on 'theAlphaComponentIds' here
and
Set<BetaComponent> theBetaComponents = getAll(ids, v -> v.getBetaComponent());
// operations on 'theBetaComponents' here
Note that this pulls the “operations on … here” parts out of the try block, which is a good thing, as it implies that the associated resources are released earlier. This requires that BetaComponent can be processed independently of the Stream’s underlying resources (otherwise, you shouldn’t collect it into a Set anyway). For the Longs, we know for sure that they can be processed independently.
Of course, you could process the result outside the try block even without moving the common code into a method. Whether the original code bears a duplication that requires this refactoring is debatable. Actually, the operation consists of a single statement within a try block that looks big only due to the verbose identifiers. Ask yourself whether you would still deem the refactoring necessary if the code looked like this:
Set<Long> alphaIDs, ids = // initialized
try (Stream<SomeDatabaseEntity> s = repo.findSomeDatabaseEntitiesStream(ids)) {
    alphaIDs = s.map(v -> v.getAlphaComponentId()).collect(Collectors.toSet());
}
// operations on 'alphaIDs' here
Well, different developers may come to different conclusions…
If you want to reduce the number of repository queries, you can simply store the result of the query:
List<SomeDatabaseEntity> entities;
try (Stream<SomeDatabaseEntity> someDatabaseEntityStream =
         someDatabaseEntityRepository.findSomeDatabaseEntitiesStream(ids)) {
    entities = someDatabaseEntityStream.collect(Collectors.toList());
}
Set<Long> theAlphaComponentIds = entities.stream()
        .map(v -> v.getAlphaComponentId()).collect(Collectors.toSet());
// operations on 'theAlphaComponentIds' here
Set<BetaComponent> theBetaComponents = entities.stream()
        .map(v -> v.getBetaComponent()).collect(Collectors.toSet());
// operations on 'theBetaComponents' here
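If materializing the intermediate List is not desirable either, another option is a genuine single pass that fills both sets from a forEach. A sketch only, reusing the names from above and assuming the side-effecting collection is acceptable to you:
Set<Long> theAlphaComponentIds = new HashSet<>();
Set<BetaComponent> theBetaComponents = new HashSet<>();
try (Stream<SomeDatabaseEntity> someDatabaseEntityStream =
         someDatabaseEntityRepository.findSomeDatabaseEntitiesStream(ids)) {
    someDatabaseEntityStream.forEach(v -> {
        theAlphaComponentIds.add(v.getAlphaComponentId());
        theBetaComponents.add(v.getBetaComponent());
    });
}
// operations on 'theAlphaComponentIds' and 'theBetaComponents' here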
