Struts2 Spring JPA layers best practice

I have an app written with Struts2, Spring 3, JPA 2 and Hibernate. In this app I have the following layers:
- Struts2 actions
- Spring services
- Spring DAOs
So one Struts action calls a service, which can in turn call one or many DAO objects.
In order to display the info on the screen I have created some "mirror" objects for entities; i.e. the EmailMessage entity has an EmailMessageForm bean that is used to display/gather data from web forms (I don't know if this is the best practice), and hence my problem.
In EmailMessageServiceImpl I have a method:

public List<EmailMessage> getEmailMessages() {
    // code here
}

If I call this from a Struts action I cannot get dependencies, because the session has already expired (I have a transaction-scoped entity manager). So one solution would be to create another method,

public List<EmailMessageForm> getEmailMessagesForDisplay() {
    // ...
}

which calls getEmailMessages() and converts the entities to form objects.
What do you recommend?
What is the best practice for this kind of problem?

If by "dependencies" you mean "lazy-loaded objects", IMO it's best to get all the required data before hitting the view layer. In your current architecture it looks like that would mean a service method that retrieves the DTOs ("form beans"; I hesitate to use the term since it's easy to confuse with Struts 1).
Some say using an Open Session in View filter/interceptor is better. It's easier, but it can lead to unintended consequences if the view developer isn't paying attention, including multiple N+1 query problems.
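A minimal plain-Java sketch of that service-level conversion (EmailMessage and EmailMessageForm are the names from the question; the fields and sample data are invented for illustration): the service converts entities to form beans before returning, so the view layer never touches a detached entity.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical JPA entity (annotations omitted for brevity).
class EmailMessage {
    private final Long id;
    private final String subject;
    private final String sender;

    EmailMessage(Long id, String subject, String sender) {
        this.id = id;
        this.subject = subject;
        this.sender = sender;
    }

    Long getId() { return id; }
    String getSubject() { return subject; }
    String getSender() { return sender; }
}

// "Mirror" form bean exposed to the web layer.
class EmailMessageForm {
    private final Long id;
    private final String subject;
    private final String sender;

    EmailMessageForm(Long id, String subject, String sender) {
        this.id = id;
        this.subject = subject;
        this.sender = sender;
    }

    Long getId() { return id; }
    String getSubject() { return subject; }
    String getSender() { return sender; }
}

public class EmailMessageService {

    // In the real app this would be transactional and delegate to a DAO;
    // here it just returns sample data.
    public List<EmailMessage> getEmailMessages() {
        List<EmailMessage> messages = new ArrayList<>();
        messages.add(new EmailMessage(1L, "Hello", "alice@example.com"));
        return messages;
    }

    // Conversion happens inside the service, while the persistence
    // context would still be open, so lazy fields can be resolved here.
    public List<EmailMessageForm> getEmailMessagesForDisplay() {
        List<EmailMessageForm> forms = new ArrayList<>();
        for (EmailMessage m : getEmailMessages()) {
            forms.add(new EmailMessageForm(m.getId(), m.getSubject(), m.getSender()));
        }
        return forms;
    }

    public static void main(String[] args) {
        List<EmailMessageForm> forms =
                new EmailMessageService().getEmailMessagesForDisplay();
        System.out.println(forms.get(0).getSubject());
    }
}
```

The Struts action would then call only getEmailMessagesForDisplay() and never see an entity with unresolved lazy associations.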

Related

Putting Spring WebFlux Publisher inside Model, good or bad practice?

I'm working on a code audit of a Spring Boot application with Spring WebFlux, and the team is putting Publishers directly inside the Model and then resolving the view.
I'm wondering whether this is good or bad practice. It seems to work, but in that case, which component is in charge of executing the Publisher?
I think it's the ViewResolver, and that should not be its job. What do you think?
Moreover, if the Publisher is not executed by the Controller, classes annotated with @ControllerAdvice, such as @ExceptionHandler, won't work if these Publishers return an error, right?
Extract from the Spring WebFlux documentation:
Spring WebFlux, unlike Spring MVC, explicitly supports reactive types in the model (for example, Mono or io.reactivex.Single). Such asynchronous model attributes can be transparently resolved (and the model updated) to their actual values at the time of @RequestMapping invocation, provided a @ModelAttribute argument is declared without a wrapper, as the following example shows:
@ModelAttribute
public void addAccount(@RequestParam String number, Model model) {
    Mono<Account> accountMono = accountRepository.findAccount(number);
    model.addAttribute("account", accountMono);
}

@PostMapping("/accounts")
public String handle(@ModelAttribute Account account, BindingResult errors) {
    // ...
}
In addition, any model attributes that have a reactive type wrapper are resolved to their actual values (and the model updated) just prior to view rendering.
https://docs.spring.io/spring-framework/docs/current/reference/html/web-reactive.html#webflux-ann-modelattrib-methods
Doesn't come as a shock to me.
It actually seems a good trade-off between complexity and efficiency when the Publisher is handling complex work, with the advantage of executing the Publisher only if and when needed.
It might be a problem, though, if the ModelMap handler does not have the capacity to use it properly.
As for the exceptional cases: maybe you do not want the Publisher to be executed at all, just the error printed, thus failing faster.
As for the question of what executes the Publisher: a specific ViewResolver can be used, since it is the component responsible for the rendering; IMHO that is its job. I do not know whether a standard ViewResolver can distinguish plain values from Publishers and handle the latter automagically, yet this seems completely doable and efficient.

How to avoid the vulnerability created by using entities in a @RequestMapping method?

I have a controller with a method like

@PostMapping(value = "/{reader}")
public String addToReadingList(@PathVariable("reader") String reader, Book book) {
    book.setReader(reader);
    readingListRepository.save(book);
    return "redirect:/readingList/{reader}";
}
When I run a static code analysis with SonarQube I get a vulnerability report stating:
Replace this persistent entity with a simple POJO or DTO object
But if I use a DTO (which has exactly the same fields as the entity class), then I get another error:
1 duplicated blocks of code must be removed
What would be the right solution?
Thanks in advance.
Enric
You should build a separate class which represents your entity ("Book") as a Plain Old Java Object (POJO) or Data Transfer Object (DTO). If you use JSF or another stateful technology, this rule is important: if your entity is managed, there might be open JPA sessions which can modify your database (e.g. if you call a setter on a managed bean in JSF).
For my projects I ignore this Sonar rule, for two reasons:
I always use REST, and REST maps my Java class into JSON, which can be seen as a DTO.
REST is stateless (no server session), so no database transaction will be open after the transformation to JSON.
Information obtained from the SonarSource official documentation:
On one side, Spring MVC automatically binds request parameters to beans declared as arguments of methods annotated with @RequestMapping. Because of this automatic binding feature, it's possible to feed some unexpected fields on the arguments of the @RequestMapping annotated methods.
On the other end, persistent objects (@Entity or @Document) are linked to the underlying database and updated automatically by a persistence framework, such as Hibernate, JPA or Spring Data MongoDB.
These two facts combined together can lead to malicious attack: if a persistent object is used as an argument of a method annotated with @RequestMapping, it's possible from a specially crafted user input, to change the content of unexpected fields into the database.
For this reason, using @Entity or @Document objects as arguments of methods annotated with @RequestMapping should be avoided.
In addition to @RequestMapping, this rule also considers the annotations introduced in Spring Framework 4.3: @GetMapping, @PostMapping, @PutMapping, @DeleteMapping, @PatchMapping.
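A plain-Java sketch of the DTO approach (BookForm is an invented name; the fields mirror the question's Book entity): the form object exposes only the fields the client is allowed to bind, and the copy onto the entity is explicit, so a crafted request cannot set fields such as id or reader.

```java
// Persistent entity (JPA annotations omitted); never bound to the request directly.
class Book {
    private Long id;        // assigned by the database, not by the client
    private String reader;
    private String title;
    private String author;

    public Long getId() { return id; }
    public String getReader() { return reader; }
    public void setReader(String reader) { this.reader = reader; }
    public String getTitle() { return title; }
    public void setTitle(String title) { this.title = title; }
    public String getAuthor() { return author; }
    public void setAuthor(String author) { this.author = author; }
}

// DTO bound by Spring MVC: contains only client-editable fields.
public class BookForm {
    private String title;
    private String author;

    public String getTitle() { return title; }
    public void setTitle(String title) { this.title = title; }
    public String getAuthor() { return author; }
    public void setAuthor(String author) { this.author = author; }

    // Explicit, whitelist-style copy: id stays null and reader is set server-side.
    public Book toEntity(String reader) {
        Book book = new Book();
        book.setReader(reader);
        book.setTitle(title);
        book.setAuthor(author);
        return book;
    }
}
```

The @PostMapping method would then take a BookForm instead of Book and call readingListRepository.save(form.toEntity(reader)). If Sonar then complains about duplicated code between DTO and entity, that duplication is deliberate here: the two classes serve different trust boundaries.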

Is it a good idea to use Bean Validation (JSR303) in JSF2 Webapplications?

I am about to create a web application with JavaServer Faces 2. In the backend, things are managed with other usual JEE technologies such as EJB 3.1 and JPA 2. The point is, I am following a domain-driven architecture, which means the domain models are used at the JSF layer as backing-bean models and are typically persisted as entities at the persistence layer. Using the same model at different layers has the advantage of defining the model's restrictions once and for all; applying Bean Validation at this level makes those restrictions available to JSF, JPA, etc.
Now my question is whether using Bean Validation with JSF2 is a good idea. My concern is that linking the validation restrictions directly to the model might be the wrong approach, since JSF validation happens earlier in the lifecycle than access to the model (and its validation rules). As far as I know, JSF validation does not take place during model processing (phase 4: Apply Model Values) but earlier, in its own dedicated phase (phase 3: Process Validations), and is applied to the component's submitted value. So how can JSF validation (in phase 3) know the actual restrictions of the bean if it is not validating during model processing?
Thanks for any clarification on this.
[Edit]
To be more specific, right now I am using Bean Validation for this purpose and it is working: invalid values are rejected and a faces message is properly shown. I just wonder if it is the right approach, as to my understanding the validation might take place in the wrong JSF phase and thus might lead to an inconsistent component model.
You can put all those bean validations in a Data Transfer Object (DTO); the DTO is responsible for carrying the UI data in JSF. After all validations succeed, you can copy the DTO onto the entity (model) object.
For copying DTO to entity, or entity to DTO, you can use a third-party library such as Dozer.
This avoids tying the validation restrictions directly to the model layer, i.e. the entity objects.
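A plain-Java sketch of that separation (UserForm and User are invented names; in a real app the checks would be declared as Bean Validation annotations such as @NotNull, @Size and @Pattern on the DTO, and a mapper like Dozer would automate the copy): the DTO validates the raw form input, and only a valid DTO is ever copied onto the entity.

```java
import java.util.ArrayList;
import java.util.List;

// Entity (JPA annotations omitted) -- never sees unvalidated input.
class User {
    private String name;
    private String email;

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
    public String getEmail() { return email; }
    public void setEmail(String email) { this.email = email; }
}

// DTO backing the JSF form. The checks below stand in for what a
// Bean Validation provider would enforce from annotations on these fields.
public class UserForm {
    private String name;
    private String email;

    public void setName(String name) { this.name = name; }
    public void setEmail(String email) { this.email = email; }

    // Stand-in for the constraint violations Bean Validation would report.
    public List<String> validate() {
        List<String> errors = new ArrayList<>();
        if (name == null || name.isEmpty()) {
            errors.add("name must not be empty");
        }
        if (email == null || !email.contains("@")) {
            errors.add("email must be valid");
        }
        return errors;
    }

    // Copy only after validation succeeded (Dozer would automate this).
    public User toEntity() {
        User user = new User();
        user.setName(name);
        user.setEmail(email);
        return user;
    }
}
```

With this layout, the JSF lifecycle validates the DTO in phase 3, and the entity is only touched after the whole form passed, so the component model and the domain model can never disagree mid-request.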

jsf 2 spring application logging specific events

I have a JSF2 app which uses Spring for transactions, security and as the DI container.
The application has 3 layers:
1. JSF views + JSF managed beans
2. Service classes
3. DAO classes
So a request flows like:
JSF page -> JSF managed bean -> service class -> DAO class -> DB, and then the other way around.
My problem is that some service methods, after performing their business logic, have to log that event to the DB.
For instance, when someone activates/deactivates a user, I want to log this action along with the user id.
I only see two approaches here (I'm sure there are more):
1. Inside the method, determine the logged-in user and perform the actual logging.
- As a disadvantage, this method becomes harder to test, because the user id is picked up from Spring Security.
2. Using Spring AOP. This would be non-invasive, which is cool, but then I would have an aspect for a single method, which does not feel efficient.
I would like to know if you have run into this kind of issue and, if so, how you solved it.
Consider introducing a marker annotation; let's call it @LogEvent. Then annotate every method you wish to intercept. This way you can implement a single aspect whose advice matches not on a naming convention but on the presence of @LogEvent.
Something like:

@After("execution(@LogEvent * *.*(..))")
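To illustrate the idea without a full Spring setup, here is a plain-JDK sketch (all names invented) of what the aspect does under the hood: a dynamic proxy intercepts every call on the service interface and logs only the methods carrying the marker annotation. Spring AOP applies the same pattern, driven by a pointcut like the one above, and proxy-based Spring AOP in fact uses JDK dynamic proxies for interfaces.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.util.ArrayList;
import java.util.List;

// Marker annotation, as suggested above.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface LogEvent {}

interface UserService {
    @LogEvent
    void deactivateUser(long userId);

    void findUser(long userId); // not annotated: must not be logged
}

class UserServiceImpl implements UserService {
    public void deactivateUser(long userId) { /* business logic */ }
    public void findUser(long userId) { /* business logic */ }
}

public class LogEventDemo {
    static final List<String> auditLog = new ArrayList<>();

    // Equivalent of the aspect: "after" advice runs only for @LogEvent methods.
    static UserService proxyFor(UserService target) {
        InvocationHandler handler = (proxy, method, args) -> {
            Object result = method.invoke(target, args);
            if (method.isAnnotationPresent(LogEvent.class)) {
                auditLog.add(method.getName() + " args=" + args[0]);
            }
            return result;
        };
        return (UserService) Proxy.newProxyInstance(
                UserService.class.getClassLoader(),
                new Class<?>[] { UserService.class },
                handler);
    }

    public static void main(String[] args) {
        UserService service = proxyFor(new UserServiceImpl());
        service.deactivateUser(42L);
        service.findUser(42L);
        System.out.println(auditLog); // only deactivateUser is recorded
    }
}
```

This also answers the efficiency concern from the question: one aspect (one handler) serves every annotated method, however many there are.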

DTO in Spring MVC

I'm using Spring MVC.
I have created the controller, view, POJOs and DAO.
Now I need to create an object composed from multiple POJOs. Is this a case for creating a DTO?
If you're looking to build a composite object for view purposes only, then there is a good argument for a DTO. If the composite is just an aggregation of the POJOs, you can use org.springframework.ui.Model and simply add attributes inside your Controller. If there is logic or business rules to apply, it is probably best to do so in a Service layer that sits between your Controller and your DAO.
If you mean that you need to access properties of a few POJOs on the client side and you want to reduce the number of calls from client to server, then yes: it is better to create a DTO containing only the necessary properties from the POJOs you will use on the client side, and return this DTO as the result of a single call from client to server.
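A minimal plain-Java sketch of such a composite DTO (all class names invented): it aggregates just the fields the view, or a single client call, actually needs from two underlying POJOs.

```java
// Two existing POJOs.
class Customer {
    private final String name;
    Customer(String name) { this.name = name; }
    String getName() { return name; }
}

class Order {
    private final String number;
    private final double total;
    Order(String number, double total) { this.number = number; this.total = total; }
    String getNumber() { return number; }
    double getTotal() { return total; }
}

// Composite DTO: only the fields the view actually needs, flattened
// from both POJOs so one response carries everything.
public class OrderSummaryDto {
    private final String customerName;
    private final String orderNumber;
    private final double total;

    public OrderSummaryDto(Customer customer, Order order) {
        this.customerName = customer.getName();
        this.orderNumber = order.getNumber();
        this.total = order.getTotal();
    }

    public String getCustomerName() { return customerName; }
    public String getOrderNumber() { return orderNumber; }
    public double getTotal() { return total; }

    public static void main(String[] args) {
        OrderSummaryDto dto = new OrderSummaryDto(
                new Customer("Alice"), new Order("A-100", 19.99));
        System.out.println(dto.getCustomerName() + " " + dto.getOrderNumber());
    }
}
```

A controller method would build this DTO in the service layer and return it (or add it to the Model) as the single result of the request.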