Putting Spring WebFlux Publisher inside Model, good or bad practice? - spring-boot

I'm working on a code audit of a Spring Boot application that uses Spring WebFlux, and the team is putting Publishers directly inside the Model and then resolving the view.
I'm wondering whether this is good or bad practice. It seems to work, but in that case, which component is in charge of executing the Publisher?
I think it's the ViewResolver, and that should not be its job. What do you think?
Moreover, if the Publisher is not executed by the Controller, classes annotated with @ControllerAdvice (such as ones defining @ExceptionHandler methods) won't work if these Publishers return an error, right?

Extract of the Spring WebFlux documentation :
Spring WebFlux, unlike Spring MVC, explicitly supports reactive types in the model (for example, Mono or io.reactivex.Single). Such asynchronous model attributes can be transparently resolved (and the model updated) to their actual values at the time of @RequestMapping invocation, provided a @ModelAttribute argument is declared without a wrapper, as the following example shows:
@ModelAttribute
public void addAccount(@RequestParam String number) {
    Mono<Account> accountMono = accountRepository.findAccount(number);
    model.addAttribute("account", accountMono);
}

@PostMapping("/accounts")
public String handle(@ModelAttribute Account account, BindingResult errors) {
    // ...
}
In addition, any model attributes that have a reactive type wrapper are resolved to their actual values (and the model updated) just prior to view rendering.
https://docs.spring.io/spring-framework/docs/current/reference/html/web-reactive.html#webflux-ann-modelattrib-methods

This doesn't come as a shock to me.
It actually seems to be a good trade-off between complexity and efficiency when the Publisher is doing complex work.
It has the advantage of executing the Publisher only if and when needed.
It might, however, be a problem if the component handling the ModelMap cannot deal with it properly.
As for the exceptional cases, maybe you do not want the Publisher to be executed at all and just want the error printed, thus failing faster.
As for the question of what executes the Publisher: a specific ViewResolver can be used, since that is the component responsible for the "rendering". IMHO that is its job. I do not know whether a standard ViewResolver can distinguish plain values from Publishers and handle them automagically, yet this seems completely doable and efficient.
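To make the pattern under discussion concrete, here is a minimal sketch, with hypothetical names, of a controller that puts a Mono straight into the model and lets the framework resolve it just before view rendering:

import org.springframework.stereotype.Controller;
import org.springframework.ui.Model;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;

@Controller
public class AccountController {

    private final AccountRepository accountRepository; // assumed to return Mono<Account>

    public AccountController(AccountRepository accountRepository) {
        this.accountRepository = accountRepository;
    }

    @GetMapping("/accounts/{number}")
    public String account(@PathVariable String number, Model model) {
        // No block() or subscribe() here: WebFlux resolves the Mono to its value
        // (and updates the model) just prior to view rendering, per the quote above.
        model.addAttribute("account", accountRepository.findAccount(number));
        return "account";
    }
}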

Related

Spring: new() operator and autowired together

If I use Spring, which of these two methods is more correct?
Can I use the new() operator even if I use dependency injection? Can I mix both?
I would like some clarification on these concepts.
Thanks
First method:
@RequestMapping(method = RequestMethod.GET)
public String create(Model model) {
    model.addAttribute(new User());
    return "index";
}
Second Method:
@Autowired
User user;

@RequestMapping(method = RequestMethod.GET)
public String create(Model model) {
    model.addAttribute(user);
    return "index";
}
Using dependency injection does not mean that the new operator is automatically prohibited throughout your code. They are simply different approaches applied to different requirements.
A web application in Spring is composed of a number of collaborating beans that are instantiated by the framework and (unless you override the default scope) are singletons. This means they must not hold any per-request state, since they are shared across all requests (threads). In other words, if you autowire the User object (or any other model attribute), it is created on application context initialization and the same instance is handed to every request. It also means that if one request modifies the object, other requests will see the modification as well. Needless to say, this is erroneous behavior in a multithreaded application, because your User object (or other model attribute) belongs to the request, so it must have the very narrow scope of a method invocation, or a session at most.
You can also have Spring create beans with different scopes for you, but for the simple scenario of initializing a model attribute, the new operator is sufficient. See the following documentation if you are interested in bean scopes: Bean scopes
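For illustration only, a request-scoped bean declaration might look like the sketch below (names are illustrative; this is only worth doing when you actually want Spring to manage the per-request instance):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Scope;
import org.springframework.context.annotation.ScopedProxyMode;
import org.springframework.web.context.WebApplicationContext;

@Configuration
public class UserConfig {

    @Bean
    @Scope(value = WebApplicationContext.SCOPE_REQUEST,
           proxyMode = ScopedProxyMode.TARGET_CLASS)
    public User requestScopedUser() {
        // One instance per HTTP request, so no state is shared between requests.
        return new User();
    }
}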
So in your use case, the second method is totally wrong.
But you can also delegate the creation of your model attributes to Spring if they are used as command objects (i.e., if you want to bind request parameters to them). Just add the attribute to the method signature (with or without the @ModelAttribute annotation).
So you may also write the above code as
@RequestMapping(method = RequestMethod.GET)
public String create(@ModelAttribute User user) {
    return "index";
}
See also: Supported method argument types
If you want your beans to be "managed" by Spring (e.g., to use them with dependency injection, PropertySources, or any other Spring-related functionality), then you do NOT create new objects on your own. You declare them (via XML or JavaConfig) and let Spring create and manage them.
If the beans don't need to be "managed" by Spring, you can create a new instance using new operator.
In your case, is this particular object - User - used anywhere else in code? Is it being injected into any other Spring bean? Or is any other Spring bean being injected in User? How about any other Spring-based functionality?
If the answer to all these questions is "No", then you can use the first method (create a new object and return it). As soon as the create() method finishes, the User object created there goes out of scope and will eventually be garbage-collected.
Things can be injected in two ways in a Spring MVC application. And yes, you can mix injection and creation if you do it right.
Components like the controller in your example are singletons managed by the application context. Anything you inject into them is global, not per request or session! So a user is not the right thing to inject, but a user directory can be. Be aware of this, since you are writing a multithreaded application!
Request-related things can be injected into the handler method: the current locale, the request, and the user principal may all be injected as parameters; see the full list in the Spring MVC documentation.
But if you create a model attribute, you may use new() to build it from scratch. It will not be filled by Spring but is meant to be used by your view to display data produced by the controller. Creating it inside the request-mapped method is fine.
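A hedged sketch combining the two points above (the mapping and names are illustrative): request-related values are injected per call, while the model attribute is simply created with new.

import java.security.Principal;
import java.util.Locale;

import org.springframework.stereotype.Controller;
import org.springframework.ui.Model;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;

@Controller
public class GreetingController {

    @RequestMapping(value = "/greeting", method = RequestMethod.GET)
    public String greeting(Locale locale, Principal principal, Model model) {
        // locale and principal are resolved by Spring for this particular request;
        // the freshly created User is local to the request and safe to populate.
        model.addAttribute("user", new User());
        return "index";
    }
}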

Autowire two Neo4j GraphRepository in Spring

I'm new to using Spring with Neo4j and I have a question about @Autowired for a GraphRepository.
Most examples I've seen use one @Autowired repository per Controller, but I have two nodes I need to modify at the same time when a particular method is called in the controller. Should I simply @Autowired the repositories for both nodes (e.g., per the code below)? Is there any impact if I do this in a second controller with the same repositories as well (so if I had a ChatSessionController which also @Autowired ChatMessageService and ChatSessionService)?
ChatMessageController.java
@Controller
public class ChatMessageController {

    @Autowired
    private ChatMessageService chatMessageService;

    @Autowired
    private ChatSessionService chatSessionService;

    @RequestMapping(value = "/message/add/{chatSessionId}", method = RequestMethod.POST)
    @ResponseBody
    @Transactional
    public void addMessage(@RequestBody ChatMessagePack chatMessagePack,
                           @PathVariable("chatSessionId") Long chatSessionId) {
        ChatMessage chatMessage = new ChatMessage(chatMessagePack);
        chatMessageService.save(chatMessage);
        // TODO: Make some modifications to the ChatSession as well
    }
}
Any help would be much appreciated! I've been googling and looking through Stack Overflow to understand this better but I haven't found anything yet. Any pointers in the right direction would be great.
Another underlying question is: should I be (and can I be) modifying other nodes in a GraphRepository that handles a particular node type? E.g., should the repository for one node type be able to modify nodes of another type?
Thanks!
I'm not convinced that this is a SO question; it's not really a Neo4j or Spring question either, it is more about the architecture of your application. However, assuming that you understand the downsides of class fan-out and how to use the @Transactional annotation to achieve what you want, the answer to your question is that it is just fine to have many repositories (Neo4j or otherwise, autowired or otherwise) in your class, and in as many classes as you want.
Neo4j transactions default to isolation level READ_COMMITTED, and if you need anything else, you need to add the guards/locks yourself. Nested transactions are considered to be part of the same transaction. The Spring @Transactional annotation relies on proxies, which you should be aware of since they have implications when calling methods from within the same class.
I would go through this tutorial over at Spring Data and get your head around how the real-world vs. domain vs. node models differ. There will be cases where one repository impacts another node type, but I would think it is often transparent to you (i.e., adding relationships). You can do what you like in each repository (their generic nature is largely confined to the built-in CRUD operations, queries derived from finder-method names (see the documentation), and queries declared with the @Query annotation), and some queries have side effects, but largely you should avoid that.
As you start adding multiple repositories to multiple controllers, I think your code will begin to smell bad and you should consider encapsulating this business logic in its own, neatly unit-tested place. I also wouldn't tie myself to one controller per data object; it would be fine to have a single ChatController with POST /chat/ to create a new session and POST /chat/{sessionId} to add a message. Interesting questions on Programmers:
How accurate is "Business logic should be in a service, not in a model?"
Best Practices for MVC Architecture
MVC Architecture — How many Controllers do I need?
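As a hedged sketch of the encapsulation suggested above (names are illustrative and follow the question's classes), the business logic could live in one transactional service that the controller delegates to:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class ChatService {

    @Autowired
    private ChatMessageService chatMessageService;

    @Autowired
    private ChatSessionService chatSessionService;

    @Transactional
    public void addMessageToSession(ChatMessagePack chatMessagePack, Long chatSessionId) {
        ChatMessage chatMessage = new ChatMessage(chatMessagePack);
        chatMessageService.save(chatMessage);
        // Update the ChatSession in the same transaction, e.g. look it up via
        // chatSessionService and attach the message. The exact change depends on
        // the domain model, so it is left as a placeholder here.
    }
}

The controller method then shrinks to a single call to chatService.addMessageToSession(...), and the @Transactional annotation moves off the controller.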

ArgumentResolvers within single transaction?

I am wondering if there is a way to wrap all argument resolvers, such as the ones for @PathVariable or @ModelAttribute, into one single transaction? We are already using the OEMIV (OpenEntityManagerInView) filter, but Spring/Hibernate is spawning too many transactions (one per select if they are not wrapped within a service class, which is the case in path-variable resolvers, for example).
While the system is still pretty fast, I think this is not necessary, nor is it consistent with the rest of the architecture.
Let me explain:
Let's assume that I have a request mapping involving two entities, and the conversion is based on a StringToEntityConverter.
The actual URL would look like this if we supported GET: http://localhost/app/link/User_231/Item_324
#RequestMapping("/link/{user}/{item}", method="POST")
public String linkUserAndItem(#PathVariable("user") User user, #PathVariable("item") Item item) {
userService.addItem(user, item);
return "linked";
}
#Converter
// simplified
public Object convert(String classAndId) {
return entityManager.find(getClass(classAndId), getId(classAndId));
}
The UserService.addItem() method is transactional so there is no issue here.
BUT:
The entity converter resolves the User and the Item against the database before the call to the Controller, thus issuing two selects, each running in its own transaction. Then we have @ModelAttribute methods which might also issue some selects, and each will again spawn a transaction.
And this is what I would like to change. I would like to create ONE read-only transaction that wraps all of this argument resolution.
I was not able to find any way to intercept/listen/etc. by the means Spring provides.
First I wanted to override the RequestMappingHandlerAdapter, but the resolver calls are well "hidden" inside the invokeHandleMethod method.
The ModelFactory is not a Spring bean, so I cannot write an interceptor either.
So currently I only see a way by completely replacing the RequestMappingHandlerAdapter, but I would really like to avoid that.
Any ideas?
This seems like a design failure to me. OEMIV is usually a sign that you're doing it wrong™.
Instead, do:
#RequestMapping("/link/User_{userId}/Item_{itemId}", method="POST")
public String linkUserAndItem(#PathVariable("userId") Long userId,
#PathVariable("itemId") Long itemId) {
userService.addItem(userId, itemId);
return "linked";
}
Where your service layer takes care of fetching and manipulating the entities. This logic doesn't belong in the controller.
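A hedged sketch of the service side implied by that answer, assuming JPA and an illustrative domain method:

import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class UserService {

    @PersistenceContext
    private EntityManager entityManager;

    @Transactional
    public void addItem(Long userId, Long itemId) {
        // Both loads and the association change happen inside this one transaction.
        User user = entityManager.find(User.class, userId);
        Item item = entityManager.find(Item.class, itemId);
        user.addItem(item); // hypothetical domain method; adjust to the real model
    }
}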

How to add a custom ContentHander for JAXB2 support in Spring 3 (MVC)?

Scenario: I have a web application that uses Spring 3 MVC. Using the powerful new annotations in Spring 3 (@Controller, @ResponseBody, etc.), I have written some domain objects with @Xml annotations for marshalling ajax calls to web clients. Everything works great. I declared my Controller method with @ResponseBody and the root XML object as its return type; the payload gets marshalled correctly and sent to the client.
The problem is that some data in the content breaks XML compliance. I need to wrap it with CDATA when necessary. I saw a post here, How to generate CDATA block using JAXB?, that recommends using a custom content handler. Ok, fantastic!
public class CDataContentHandler extends (SAXHandler|XMLSerializer|Other...) {

    // see http://www.w3.org/TR/xml/#syntax
    private static final Pattern XML_CHARS = Pattern.compile("[<>&]");

    public void characters(char[] ch, int start, int length) throws SAXException {
        boolean useCData = XML_CHARS.matcher(new String(ch, start, length)).find();
        if (useCData) super.startCDATA();
        super.characters(ch, start, length);
        if (useCData) super.endCDATA();
    }
}
Using Spring MVC 3, how do I achieve this? Everything was "auto-magically" done for me with regard to the JAXB setup: Spring read the return type of the method, saw its annotations, and picked up JAXB2 off the classpath to do the marshalling (object-to-XML conversion). So where on earth is the "hook" that permits a user to register a custom content handler with the configuration?
Using the EclipseLink JAXB implementation, it is as easy as adding @XmlCDATA to the object attribute concerned. Is there some smart way Spring can help out here and abstract this problem away into a minor configuration detail?
I know Spring isn't tied to any particular implementation, but for the sake of this question, please can we assume I am using whatever the default implementation is. I tried the docs here, http://static.springsource.org/spring-ws/site/reference/html/oxm.html, but from what I could understand they barely helped with this question.
Thanks all for any replies, they are really appreciated.
Update:
Thanks for the suggested answer below, Akshay. It was sufficient to put me on the right track. Investigating further, I see there is a bit of history with this one between Spring 3.0.5 and 3.2. In Spring 3.0.5 it used to be quite difficult to register a custom MessageConverter (which is really the goal here).
This conversation pretty much explains the thinking behind the development changes requested:
https://jira.springsource.org/browse/SPR-7504
Here is a link to the class that typically has to be overridden to build a custom solution:
http://static.springsource.org/spring/docs/3.1.0.M1/javadoc-api/org/springframework/http/converter/AbstractHttpMessageConverter.html
And the following question on Stack Overflow is very similar to what I was asking (except the @ResponseBody discussion relates to JSON and Jackson) - the goal is basically the same.
Spring 3.2 and Jackson 2: add custom object mapper
So it looks like overriding MarshallingHttpMessageConverter and registering it with the AnnotationMethodHandlerAdapter is needed. There is a recommended solution in the link above to get clever with this stuff and wrap the whole thing behind a custom-defined annotation.
I haven't yet developed a working solution, but since I asked the question, I wanted to at least post something that may help others with the same sort of question get started. With all due respect, although this has all improved in Spring 3.2, it's still a bit of a dog's dinner to get a little customization working... I really was expecting a one-liner config change, etc.
Rather than twist and bend Spring, perhaps the easiest answer for my particular issue is just to change the JAXB2 implementation and use something like EclipseLink JAXB, which can do this out of the box.
Basically you need to create a custom HttpMessageConverter, instead of relying on the Jaxb2RootElementHttpMessageConverter that Spring uses by default.
Unfortunately, customizing one converter means you are telling Spring that you will take care of loading all the converters you need, which is fairly involved and can get complicated, depending on whether you use annotations, component scanning, Spring 3.1 or earlier, etc. The issue of how to add a custom converter is addressed here: Custom HttpMessageConverter with @ResponseBody to do Json things
In your custom message converter you are free to use any custom JAXB2 content handlers.
Another, simpler approach to solving your original problem would be to use a custom XmlJavaTypeAdapter. Create a custom implementation of javax.xml.bind.annotation.adapters.XmlAdapter to handle CDATA; in the marshal method, wrap the return value with the CDATA braces. Then, in your mapped POJO, use the @XmlJavaTypeAdapter annotation, pass it the class of your custom adapter, and you should be done.
I have not implemented the adapter approach myself, so I can't provide sample code. But it should work, and it won't be a lot of work.
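For illustration only, an untested sketch of such an adapter might look like this (the class names CDataAdapter and Payload are made up):

import javax.xml.bind.annotation.adapters.XmlAdapter;
import javax.xml.bind.annotation.adapters.XmlJavaTypeAdapter;

// Wraps a string value in CDATA delimiters during marshalling.
public class CDataAdapter extends XmlAdapter<String, String> {

    @Override
    public String marshal(String value) {
        // Note: a standard JAXB marshaller may still escape '<' and '&' produced
        // here unless it is configured to emit them verbatim, which is part of
        // why the EclipseLink @XmlCDATA route is attractive.
        return value == null ? null : "<![CDATA[" + value + "]]>";
    }

    @Override
    public String unmarshal(String value) {
        return value;
    }
}

// In the mapped POJO (separate file):
public class Payload {

    @XmlJavaTypeAdapter(CDataAdapter.class)
    private String description;
}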
Hope this helps.

Check preconditions in Controller or Service layer

I'm using Google's Preconditions class to validate a user's input data.
But I'm worried about where the best place is to check the user's input data with the Preconditions class.
First, I wrote validation check code in Controller like below:
@Controller
...
public void register(ProductInfo data) {
    Preconditions.checkArgument(StringUtils.hasText(data.getName()),
            "Empty name parameter.");
    productService.register(data);
}

@Service
...
public void register(ProductInfo data) {
    productDao.register(data);
}
But then I thought that the register method in the Service layer might be used by more than one Controller method, like below:
@Controller
...
public void register(ProductInfo data) {
    productService.register(data);
}

public void anotherRegister(ProductInfo data) {
    productService.register(data);
}

@Service
...
public void register(ProductInfo data) {
    Preconditions.checkArgument(StringUtils.hasText(data.getName()),
            "Empty name parameter.");
    productDao.register(data);
}
On the other hand, the service layer method might end up being used by just one controller.
I was confused. Which is the better place to check preconditions, the controller or the service?
Thanks in advance.
Ideally you would do it in both places. But you are confusing two different things:
Validation (with error handling)
Defensive programming (aka assertions, aka design by contract)
You absolutely should do validation in the controller and defensive programming in your service. And here is why.
You need to validate forms and REST requests so that you can send a sensible error back to the client. This includes which fields are bad, then doing localization of the error messages, etc. (your current example would send me a horrible 500 error message with a stack trace if the ProductInfo.name property was null).
Spring has a solution for validating objects in the controller.
Defensive programming is done in the service layer, BUT NOT validation, because you don't have access to the locale to generate proper error messages. Some people do it there, but Spring doesn't really help you with that.
The other reason why validation is not done in the service layer is that the ORM typically already does this through the JSR Bean Validation spec (Hibernate), but it doesn't generate sensible error messages.
One strategy people use is to create their own preconditions utils library that throws custom derived RuntimeExceptions instead of Guava's (and Commons Lang's) IllegalArgumentException and IllegalStateException, and then try...catch those exceptions in the controller, converting them to validation error messages.
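A minimal sketch of that last strategy, with made-up names, might look like this:

import org.springframework.http.HttpStatus;
import org.springframework.web.bind.annotation.ControllerAdvice;
import org.springframework.web.bind.annotation.ExceptionHandler;
import org.springframework.web.bind.annotation.ResponseBody;
import org.springframework.web.bind.annotation.ResponseStatus;

// A dedicated exception type thrown by the precondition helper.
class ValidationFailedException extends RuntimeException {
    ValidationFailedException(String message) {
        super(message);
    }
}

// A tiny stand-in for a home-grown preconditions utility.
final class Checks {
    private Checks() {}

    static void check(boolean condition, String message) {
        if (!condition) {
            throw new ValidationFailedException(message);
        }
    }
}

// The web layer converts the exception into a client-facing error
// instead of a 500 with a stack trace.
@ControllerAdvice
class ValidationFailureAdvice {

    @ExceptionHandler(ValidationFailedException.class)
    @ResponseStatus(HttpStatus.BAD_REQUEST)
    @ResponseBody
    public String handleValidationFailure(ValidationFailedException ex) {
        return ex.getMessage();
    }
}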
There is no "better" way. If you think that the service is going to be used by multiple controllers (or other pieces of code), then it may well make sense to do the checks there. If it's important to your application to check invalid requests while they're still in the controller, it may well make sense to do the checks there. These two, as you have noticed, are not mutually exclusive. You might have to check twice to cover both scenarios.
Another possible solution: use Bean Validation (JSR-303) to put the checks (preconditions) onto the ProductInfo bean itself. That way you only specify the checks once, and anything that needs to can quickly validate the bean.
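A hedged sketch of that Bean Validation route (the constraint choice, mapping, and view names are illustrative):

import javax.validation.Valid;
import javax.validation.constraints.NotNull;
import javax.validation.constraints.Size;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Controller;
import org.springframework.validation.BindingResult;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;

// The bean carries its own constraints...
public class ProductInfo {

    @NotNull
    @Size(min = 1)
    private String name;

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}

// ...and the controller only asks Spring to validate it (separate file in practice).
@Controller
public class ProductController {

    @Autowired
    private ProductService productService;

    @RequestMapping(value = "/products", method = RequestMethod.POST)
    public String register(@Valid ProductInfo data, BindingResult errors) {
        if (errors.hasErrors()) {
            return "productForm"; // hypothetical view name
        }
        productService.register(data);
        return "redirect:/products";
    }
}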
Preconditions and validations, whether simple or business-level, should be handled at the filter layer or by interceptors, even before reaching the controller or service layer.
The danger if you check them in your controller layer is that you are violating the single responsibility principle of a controller, whose sole purpose is to delegate requests and responses.
Putting preconditions in the service layer introduces cross-cutting concerns into the core business logic.
A filter or interceptor is built for this purpose. Putting preconditions at the filter layer or in interceptors also allows you to “pick and match” the rules you place in the stack for each servlet request, thus neither confining a particular rule to only one servlet request nor introducing duplication.
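A minimal sketch of the interceptor approach described above (the parameter name and error handling are illustrative only):

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.springframework.util.StringUtils;
import org.springframework.web.servlet.handler.HandlerInterceptorAdapter;

// Rejects requests with an empty "name" parameter before they reach any controller.
public class ProductNamePreconditionInterceptor extends HandlerInterceptorAdapter {

    @Override
    public boolean preHandle(HttpServletRequest request, HttpServletResponse response,
                             Object handler) throws Exception {
        if (!StringUtils.hasText(request.getParameter("name"))) {
            response.sendError(HttpServletResponse.SC_BAD_REQUEST, "Empty name parameter.");
            return false; // stop processing, the controller is never invoked
        }
        return true;
    }
}

It would then be registered through the usual interceptor configuration (for example <mvc:interceptors> in XML or addInterceptors in Java config), optionally mapped only to the URL patterns that need the rule.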
I think in your specific case you need to check it in the Service layer and return an exception to the Controller in case of a data integrity error.
@Controller
public class MyController {

    @ExceptionHandler(MyDataIntegrityException.class)
    public String handleException(MyDataIntegrityException ex, HttpServletRequest request) {
        // do something with the exception or return some view
        return "error"; // hypothetical view name
    }
}
It also depends on what you are doing in the controller, whether you return a View or just use the @ResponseBody annotation. Spring MVC has a nice out-of-the-box solution for input/data validation; I recommend you check these libraries out.
http://static.springsource.org/spring/docs/3.1.x/spring-framework-reference/html/validation.html
