What are the differences between Flux<T>, Flux<ResponseEntity<T>>, and ResponseEntity<Flux<T>> as return types in Spring WebFlux?

I often see three different response return types in MVC-style controllers using Spring WebFlux: Flux<T>, ResponseEntity<Flux<T>>, and Flux<ResponseEntity<T>>. The documentation explains the difference between ResponseEntity<Flux<T>> and Flux<ResponseEntity<T>>. Does Spring automatically wrap Flux<T> as either ResponseEntity<Flux<T>> or Flux<ResponseEntity<T>>? If yes, which one?
Moreover, how do I decide which one to return, ResponseEntity<Flux<T>> or Flux<ResponseEntity<T>>? What situation or use case would call for using one over the other?
And, from a WebClient's point of view, are there any significant differences when consuming the two types of response?

Does Spring automatically wrap Flux<T> as either ResponseEntity<Flux<T>> or Flux<ResponseEntity<T>>? If yes, which one?
Spring will automatically wrap that Flux<T> as a ResponseEntity<Flux<T>>. For example, if you have a web endpoint like the following:
@GetMapping("/something")
public Flux<String> handle() {
    return doSomething(); // doSomething() is assumed to return a Flux<String>
}
And if you are consuming it from a WebClient, you can retrieve the response as either ResponseEntity<Flux<T>> or Flux<T>. There is no default, but I would think it's good practice to retrieve only Flux<T> unless you explicitly need something from the ResponseEntity. The Spring documentation has good examples of this.
You actually can consume it as a Flux<ResponseEntity<T>>, but this would only be applicable for more complex use cases.
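For illustration, a hedged WebClient sketch (the base URL and element type are assumptions; toEntityFlux requires a reasonably recent Spring Framework version):

import org.springframework.http.ResponseEntity;
import org.springframework.web.reactive.function.client.WebClient;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

public class ConsumerExample {

    private final WebClient client = WebClient.create("http://localhost:8080"); // assumed base URL

    // Retrieve only the streamed body
    public Flux<String> bodyOnly() {
        return client.get()
                .uri("/something")
                .retrieve()
                .bodyToFlux(String.class);
    }

    // Retrieve the status and headers too; the body is still a Flux that streams in later
    public Mono<ResponseEntity<Flux<String>>> withEntity() {
        return client.get()
                .uri("/something")
                .retrieve()
                .toEntityFlux(String.class);
    }
}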
Moreover, how to decide which one to return, ResponseEntity<Flux<T>> or Flux<ResponseEntity<T>>? What situation or use case would call for using one over the other?
It really depends on your use case.
Returning ResponseEntity<Flux<T>> is saying something like: I am returning one response whose body is a stream of T objects.
Whereas Flux<ResponseEntity<T>> is saying something more like: I am returning a stream of responses, where each response wraps an entity of type T.
Again, I think in most use cases returning just Flux<T> makes sense (this is effectively equivalent to returning ResponseEntity<Flux<T>>, since Spring wraps it for you).
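As a rough server-side sketch (Item and ItemService are hypothetical; findAll() is assumed to return a Flux<Item>):

import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import reactor.core.publisher.Flux;

@RestController
public class ItemController {

    private final ItemService itemService;

    public ItemController(ItemService itemService) {
        this.itemService = itemService;
    }

    // Plain Flux<T>: Spring wraps it for you, 200 OK with default headers
    @GetMapping("/items")
    public Flux<Item> items() {
        return itemService.findAll();
    }

    // Explicit ResponseEntity<Flux<T>>: status and headers are fixed up front,
    // while the body is still written asynchronously as the Flux emits
    @GetMapping("/items-with-headers")
    public ResponseEntity<Flux<Item>> itemsWithHeaders() {
        return ResponseEntity.ok()
                .header("X-Source", "catalog") // hypothetical header
                .body(itemService.findAll());
    }
}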
And finally
And, from a WebClient's point of view, are there any significant differences when consuming the two types of response?
I think what you're trying to ask is whether you should be using ResponseEntity<Flux<T>> or Mono<ResponseEntity<T>> when consuming from the WebClient. And the Spring documentation answers this question quite elegantly:
ResponseEntity<Flux<T>> makes the response status and headers known immediately, while the body is provided asynchronously at a later point.
Mono<ResponseEntity<T>> provides all three (response status, headers, and body) asynchronously at a later point. This allows the response status and headers to vary depending on the outcome of asynchronous request handling.
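Reading that last sentence from the controller side, here is a small hedged sketch of the second case, placed in the same hypothetical controller as above (findById(id) is assumed to return a Mono<Item>); the status can only be decided once the asynchronous lookup completes:

// 200 with the item when the lookup succeeds, 404 when the Mono completes empty
@GetMapping("/items/{id}")
public Mono<ResponseEntity<Item>> item(@PathVariable String id) {
    return itemService.findById(id)
            .map(ResponseEntity::ok)
            .defaultIfEmpty(ResponseEntity.notFound().build());
}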

Related

What is the point of having PATCH, POST, PUT types when we use repository save methods for all?

As a newcomer to Spring I would like to know the actual difference between:
@PostMapping
@PutMapping
@PatchMapping
My understanding is that PUT is for updates, but then we have to get the element by its id and then save() it. Similarly, the save() method is used again for POST, which automatically replaces the record by its identifier (primary key). In my application I am able to use all three of these methods interchangeably.
What is the point of having PATCH, POST, PUT types when we use repository save methods for all?
HTTP method tokens are used to define request semantics in such a way that general purpose components (browsers, reverse proxies, etc) can exploit the information to do intelligent things.
The easiest of these is that PUT has idempotent semantics; if an HTTP response is lost, a general purpose component knows that it may autonomously retry sending the request. This in turn gives you a bit of extra reliability over an unreliable network, "for free".
The fact that your origin server uses the same persistence mechanism for each is an implementation detail, something deliberately hidden behind the "uniform interface".
The difference between PATCH and POST is subtle; PATCH gives you an unambiguous way to designate that the enclosed entity is a patch document, and offers a mechanism for discovering which patch document formats are understood by the origin server, neither of which you get from POST alone.
What's less clear, at least to me, is whether PATCH semantics allow an intermediate component to do something intelligent with a request - in other words, do the additional constraints (relative to POST) allow intermediaries to do anything interesting?
As best I can tell, the semantics of a PATCH request are more specific, but not actionably more specific -- certainly not as obviously as we have in the case of safe or idempotent request semantics.
POST is for creating a brand new object.
PUT will replace all of an object's properties in one go.
Leaving a property empty will empty the value in the datastore.
PATCH does a partial update of an object.
You can send it just the properties which should be updated.
A PATCH request with all object properties included will have the same effect as a PUT request. But they are not the same.
The HTTP methods are a convention that is not specific to Spring; they are defined by HTTP itself and are a main pillar of REST API design.
They make sure the intent of a request is clear, and that both the provider and the consumer are in agreement about the end result.
Kind of like the pedals or gear shift in our cars. It's a lot easier when they all work the same.
Switching them up could lead to a lot of accidents.
For us as developers, it means we can expect most REST APIs to behave in a similar way, assuming an API is implemented according to or reasonably close to the specification.
POST/PUT/PATCH may look alike but there are subtle differences.
As you mention, the PUT and PATCH methods require some kind of ID for the object to be updated.
Take a combined POST/PUT/PATCH endpoint that receives a request in which the object omits some of its properties. How does the API react? It could:
Update only the received properties.
Update the entire object, emptying the omitted properties.
Attempt to create a new object.
How is the consumer of the endpoint to know which of the three actions the server took?
This is where the HTTP method and specification/convention help determine the appropriate course of action.
Spring may provide a save() method that can handle creation, full updates, and partial updates alike. But this is not necessarily the case for other frameworks in Java or in other languages.
Also, your application may be simple enough to handle POST/PUT/PATCH in the same controller method right now.
But over time as your application grows more complex, the separation of concerns makes your code a lot cleaner, more readable and maintainable.
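To make the separation concrete, here is a rough, hedged Spring sketch (the Account entity, AccountRepository, paths, and field names are all hypothetical) in which every handler ends in repository.save() but each keeps its distinct semantics:

import java.util.Map;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.*;

@RestController
@RequestMapping("/accounts")
class AccountController {

    private final AccountRepository repository; // assumed Spring Data repository

    AccountController(AccountRepository repository) {
        this.repository = repository;
    }

    // POST: create a brand new account; the server assigns the identifier
    @PostMapping
    ResponseEntity<Account> create(@RequestBody Account account) {
        return ResponseEntity.status(HttpStatus.CREATED).body(repository.save(account));
    }

    // PUT: full replacement of the account identified by id (idempotent);
    // omitted properties end up empty in the datastore
    @PutMapping("/{id}")
    Account replace(@PathVariable Long id, @RequestBody Account account) {
        account.setId(id);
        return repository.save(account);
    }

    // PATCH: partial update; only the supplied fields change
    @PatchMapping("/{id}")
    Account patch(@PathVariable Long id, @RequestBody Map<String, Object> changes) {
        Account existing = repository.findById(id).orElseThrow();
        if (changes.containsKey("name")) {               // "name" is an assumed field
            existing.setName((String) changes.get("name"));
        }
        return repository.save(existing);
    }
}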

Why doesn't Spring WebFlux MockServerRequest allow an empty body?

I'm writing some tests for a Spring WebFlux application and I'm trying to mock a scenario where a request doesn't have a body. I reached for the built-in MockServerRequest, figuring I'd use the built-in mock over making my own. It does allow constructing an instance without a body, but my tests are failing as all of its methods for extracting the body contain an assertion that the body is not null. This doesn't seem to line up with how an actual request would behave. It's totally possible to make a request with no body. I'd also say it's reasonable to have code that checks to see if there's a body, as backed up by the existence of methods like awaitBodyOrNull (I'm using Kotlin).
Am I missing/misunderstanding something here? I'm constructing my mock by just doing MockServerRequest.builder().build() (the methods under test don't care about anything other than the body). Is this class perhaps not actually meant to be used on its own? I'm not finding anyone else asking about this so I feel like I must be overlooking something.
For now I'll work around this by just making my own mock.
MockServerRequest.Builder expects you to give it a body already wrapped in a Mono. It doesn't do any wrapping for you. So mocking a request with an empty body is done with MockServerRequest.builder().body(Mono.empty<TestDto>()).
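For illustration, a hedged sketch in Java (the question uses Kotlin, but MockServerRequest itself is the same class; TestDto stands in for whatever body type the handler expects):

import org.springframework.mock.web.reactive.function.server.MockServerRequest;
import org.springframework.web.reactive.function.server.ServerRequest;
import reactor.core.publisher.Mono;

class HandlerTestSupport {

    // "No body": pass an empty Mono explicitly; bodyToMono(...) then completes empty,
    // and Kotlin's awaitBodyOrNull() returns null instead of tripping the assertion
    ServerRequest emptyBodyRequest() {
        return MockServerRequest.builder()
                .body(Mono.empty());
    }

    // With a body: you still do the wrapping yourself
    ServerRequest requestWithBody(TestDto dto) {
        return MockServerRequest.builder()
                .body(Mono.just(dto));
    }
}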

Is it a good idea to use decorators in a high-load application?

We are building a high-load backend API in NestJS.
I am searching for a good solution for REST request validation.
We have some specific requirements for internationalization, so we decided not to use the standard schema-based validation pipes, which do not handle internationalization well.
I am considering a custom mapper class for each request DTO. It takes the request data and transforms it into a specific DTO:
class CreateAccountRequestMapper { map(data: any): CreateAccountRequestDto {} }
If the input is not valid, it will throw some API-specific exception.
Is it a good idea, in terms of performance, to implement this with decorators + pipes?
I do not know the concept well, but it seems to me that I would need unnecessary object instantiation on each request, whereas if I used the mapper directly in the handler I would avoid it.
Do decorators mean significant overhead in general?

How to improve gRPC development?

I find it's tedious to define the protobuf message again in the .proto file after the entity model is ready.
For example, to expose CRUD operations through gRPC you need to define the table schema as messages in .proto files, because gRPC requires it.
In traditional RESTful API development, we don't need to define the messages because we just return some JSON, and the JSON object can be arbitrary.
Any suggestions?
P.S. I know gRPC is more efficient than RESTful APIs at run time. However, I find it far less efficient at development time.
Until I find a more elegant way to improve this, I currently use an ugly workaround: defining a generic JSON message type:
syntax = "proto3";

package user;

service User {
  rpc FindOneByJSON(JSON) returns (JSON) {}
  rpc CreateByJSON(JSON) returns (JSON) {}
}

message JSON {
  string value = 1;
}
It's ugly because it requires the invoker to JSON.stringify() the arguments and JSON.parse() the response.
Because gRPC and REST follow different concepts.
In REST, the server maintains the state and you just control it from the client (that's what the GET, POST, PUT, PATCH, and DELETE request types are for). In contrast, a procedure call has a well-defined return type that is reliable and self-describing. gRPC does not follow the concept of the server being the single source of truth concerning an object's state; instead -- conceptually -- you can interact with the server using regular calls, as you would in a local setup.
By the way, in good RESTful design, you do use schemas for your JSON returns, so in fact it is not arbitrary, even though you can abuse it to be. For example, check the OpenAPI 3 specification for the response object definition: They usually contain references to schemas.

Spring MVC @RequestMapping best practice

Checked the official ref, found a million ways to do things.
I guess I have two sets of use cases.
1. Return a customized HTTP response: basically I am responsible for filling in the status code and the response body (either XML, JSON, or text).
2. Return a model and view: the view is typically a JSP page, filled in with data from the model.
My question is: which is the better way to go? Is it possible to mix them together? In my first set of use cases, is it possible to return a View? Also, is it possible to have both in one method, something like: if A, return a customized HTTP response; if B, return a ModelAndView?
Thanks!
The return value from any request handling method (i.e. one marked with the @RequestMapping annotation) must either identify a view (which will generate the HTTP response) or generate the HTTP response itself.
Each handler method stands alone; by that I mean, you can return a view name from some handler methods and generate the HTTP Response in other handler methods.
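A minimal hedged sketch of the two styles living side by side in one controller (class name, paths, view name, and model attribute are made up):

import org.springframework.stereotype.Controller;
import org.springframework.ui.Model;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.ResponseBody;

@Controller
public class ExampleController {

    // Identifies a view: the returned name is resolved to a JSP (e.g. /WEB-INF/views/home.jsp)
    @RequestMapping("/home")
    public String home(Model model) {
        model.addAttribute("message", "Hello from the model");
        return "home";
    }

    // Generates the HTTP response itself: the return value is written straight to the body
    @RequestMapping("/api/status")
    @ResponseBody
    public String status() {
        return "{\"status\":\"OK\"}";
    }
}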
Check out 15.3.2.3 Supported handler method arguments and return types in the Spring 3.x reference document at http://static.springsource.org/spring/docs/3.0.x/spring-framework-reference/html/
As an option to generating the HTTP response in the handler method, you could set up multiple view resolvers; one or more for normal view resolution (JSP pages, tiles, etc.) and one or more for "special" view resolution (XML, JSON, etc.). For the "special" views, you may want to create your own view class that extends org.springframework.web.servlet.view.AbstractView.
You can accomplish something similar to what you are describing using the ContentNegotiatingViewResolver, which can deal with serving different content based on the request, with no changes required to your @RequestMapping annotations, or in fact anything in your controllers.
There are plenty of resources online showing how to use this approach.
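A hedged Java-config sketch of that setup on a more recent Spring version (the view location, media types, and "format" parameter are assumptions):

import java.util.List;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.http.MediaType;
import org.springframework.web.accept.ContentNegotiationManager;
import org.springframework.web.servlet.ViewResolver;
import org.springframework.web.servlet.config.annotation.ContentNegotiationConfigurer;
import org.springframework.web.servlet.config.annotation.EnableWebMvc;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurer;
import org.springframework.web.servlet.view.ContentNegotiatingViewResolver;
import org.springframework.web.servlet.view.InternalResourceViewResolver;
import org.springframework.web.servlet.view.json.MappingJackson2JsonView;

@Configuration
@EnableWebMvc
public class WebConfig implements WebMvcConfigurer {

    @Override
    public void configureContentNegotiation(ContentNegotiationConfigurer configurer) {
        // e.g. /report?format=json returns JSON, otherwise the JSP view is rendered
        configurer.favorParameter(true)
                .parameterName("format")
                .defaultContentType(MediaType.TEXT_HTML)
                .mediaType("json", MediaType.APPLICATION_JSON);
    }

    @Bean
    public ViewResolver contentNegotiatingViewResolver(ContentNegotiationManager manager) {
        ContentNegotiatingViewResolver resolver = new ContentNegotiatingViewResolver();
        resolver.setContentNegotiationManager(manager);
        resolver.setViewResolvers(List.of(new InternalResourceViewResolver("/WEB-INF/views/", ".jsp")));
        resolver.setDefaultViews(List.of(new MappingJackson2JsonView()));
        return resolver;
    }
}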
