Unit test class annotated with @Transactional and implications for detach/evict in Hibernate - Spring

I'm struggling with a problem and cannot find a proper solution, or even a cause, in the Hibernate docs, the sources, or on S/O.
I have a Spring/Hibernate application with DAO-Service-RPC layers, where the DAO provides Hibernate entities and the Service provides DTOs for RPC. Therefore I'm converting entities to DTOs (mapping by Dozer) in service methods and mapping DTOs back to entities there as well.
Mapping is as follows (not the full method, checking omitted):
@Transactional
public void updateAuthor(Author author) {
    AuthorEntity existingEntity = this.authorDao.findById(author.getId());
    this.authorDao.detach(existingEntity);
    this.authorAssembler.toEntity(author, existingEntity, null);
    this.authorDao.merge(existingEntity);
}
I have my unit test classes annotated with @Transactional to avoid test data bleeding. Now I've realized that something I don't understand is going on in the Service.
When my test class is annotated with @Transactional, calling detach() seems to work (i.e. Hibernate does not report org.hibernate.NonUniqueObjectException: A different object with the same identifier value was already associated with the session), but the @Version number on the entity is not incremented properly (as if the parent (unit test) TX were still holding on).
When I remove the annotation from my test class, the mapping throws org.hibernate.LazyInitializationException: failed to lazily initialize a collection, which makes sense on its own, because the entity is detached from the session. What I don't understand is why this exception is NOT thrown when the test class is annotated. As I understand it, the entity is detached from the current session in both cases.
My other question is: assuming the entity is behaving correctly, how do I avoid missing such errors in unit tests while still avoiding test bleeding, since this type of error seems to manifest only with an unannotated test class?
Thank you!
(JUnit4, Spring 4.0.2.RELEASE, Hibernate 4.3.1.Final)
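For reference, a minimal sketch of the test-class setup being described, assuming Spring's JUnit 4 runner; the class names (TestConfig, AuthorService, AuthorServiceTest) and the Author constructor are hypothetical:

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
import org.springframework.transaction.annotation.Transactional;

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = TestConfig.class) // hypothetical test configuration
@Transactional // the class-level annotation whose presence/absence changes the behaviour
public class AuthorServiceTest {

    @Autowired
    private AuthorService authorService;

    @Test
    public void updateAuthor() {
        // With @Transactional on the test class, the service call joins the
        // test-managed transaction (no LazyInitializationException, but the
        // @Version increment is not observed); without it, mapping the
        // detached entity throws LazyInitializationException.
        authorService.updateAuthor(new Author(1L, "updated name")); // hypothetical ctor
    }
}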

Related

How to avoid the vulnerability created by using entities in a @RequestMapping method?

I have a controller with a method like
@PostMapping(value = "/{reader}")
public String addToReadingList(@PathVariable("reader") String reader, Book book) {
    book.setReader(reader);
    readingListRepository.save(book);
    return "redirect:/readingList/{reader}";
}
When I run a static code analysis with Sonarqube I get a vulnerability report stating that
Replace this persistent entity with a simple POJO or DTO object
But if I use a DTO (which has exactly the same fields as the entity class), then I get another error:
1 duplicated blocks of code must be removed
What should be the right solution?
Thanks in advance.
Enric
You should build a new, separate class which represents your entity ("Book") as a Plain Old Java Object (POJO) or Data Transfer Object (DTO). This rule is important if you use JSF or another stateful technology. If your entity is stateful, there might be open JPA sessions etc. which may modify your database (e.g. if you call a setter in JSF on a stateful bean).
For my projects I ignore this Sonar rule because of two reasons:
I always use REST, and REST will map my Java class into JSON, which can be seen as a DTO.
REST is stateless (no server session), so no database transaction will be open after the transformation to JSON.
Information obtained from the official SonarSource documentation:
On one side, Spring MVC automatically binds request parameters to beans declared as arguments of methods annotated with @RequestMapping. Because of this automatic binding feature, it's possible to feed some unexpected fields on the arguments of the @RequestMapping annotated methods.
On the other end, persistent objects (@Entity or @Document) are linked to the underlying database and updated automatically by a persistence framework, such as Hibernate, JPA or Spring Data MongoDB.
These two facts combined together can lead to a malicious attack: if a persistent object is used as an argument of a method annotated with @RequestMapping, it's possible from a specially crafted user input to change the content of unexpected fields in the database.
For this reason, using @Entity or @Document objects as arguments of methods annotated with @RequestMapping should be avoided.
In addition to @RequestMapping, this rule also considers the annotations introduced in Spring Framework 4.3: @GetMapping, @PostMapping, @PutMapping, @DeleteMapping, @PatchMapping.
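To illustrate the accepted approach, here is a minimal sketch of the DTO/POJO variant, assuming the Book entity has title and author fields; BookForm and its fields are hypothetical names:

public class BookForm { // a plain form object, not an @Entity
    private String title;
    private String author;

    public String getTitle() { return title; }
    public void setTitle(String title) { this.title = title; }
    public String getAuthor() { return author; }
    public void setAuthor(String author) { this.author = author; }
}

The controller then copies only the fields it intends to accept:

@PostMapping(value = "/{reader}")
public String addToReadingList(@PathVariable("reader") String reader, BookForm form) {
    Book book = new Book();
    book.setTitle(form.getTitle());   // explicit, field-by-field copy
    book.setAuthor(form.getAuthor()); // unexpected request parameters are simply ignored
    book.setReader(reader);
    readingListRepository.save(book);
    return "redirect:/readingList/{reader}";
}

Since BookForm has no connection to the persistence layer, a crafted request cannot set fields like id or reader directly.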

Kotlin instance variable is null when accessed by Spring proxied class

I have a service class that is being proxied by Spring, like so:
@Service
@Transactional
open class MyService { ... }
If I remove the open modifier, Spring complains that it needs to proxy the class to apply the @Transactional annotation tweaks.
However, this is causing issues when calling a function on the proxied service, which attempts to access a variable:
@Service
@Transactional
open class MyService {
    protected val internalVariable = ...
    fun doWork() {
        internalVariable.execute() // NullPointerException
    }
}
The internalVariable is assigned as part of its declaration, does not have any annotations (like @Autowired, etc.), and works fine when I remove the @Transactional annotation and the requirement for Spring to proxy the class.
Why is this variable null when Spring is proxying/subclassing my service class?
I hit a similar issue, and the above comments by Rafal G & Craig Otis helped me, so I'd like to propose that the following write-up be accepted as an answer (or that the comments above be changed to an answer and accepted).
The solution: open the method/field.
(I hit a similar case where it was a closed method that caused the problem. But whether it is a field or a method, the solution is the same, and I think the general cause is the same...)
Explanation:
Why this is the solution is more complicated and definitely has to do with Spring AOP, final fields/methods, CGLIB proxies, and how Spring+CGLIB attempts to deal with final methods (or fields).
Spring uses proxies to represent certain objects to handle certain concerns dealt with by Aspect Oriented Programming. This happens with services & controllers (especially when @Transactional or other advice is given that requires AOP solutions).
So a proxy/wrapper is needed for these beans, and Spring has two choices, JDK dynamic proxies or CGLIB; but only CGLIB is available when the bean class does not implement an interface.
When using CGLIB to proxy classes, Spring will create a subclass called something like myService$EnhancerByCGLIB. This enhanced class will override some, if not all, of your business methods to apply cross-cutting concerns around your actual code.
Here comes the real surprise. This extra subclass does not call super methods of the base class. Instead it creates a second instance of myService and delegates to it. This means you have two objects now: your real object and the CGLIB enhanced object pointing to (wrapping) it.
From: spring singleton bean fields are not populated
Referenced By: Spring AOP CGLIB proxy's field is null
In Kotlin, classes & methods are final unless explicitly opened.
Exactly when and how Spring/CGLIB chooses to wrap a bean in an EnhancerByCGLIB with a target delegate (so that it can cope with finalized methods/fields), I don't know. For my case, however, the debugger showed me the two different structures. When the parent methods are open, it does not create a delegate (using subclassing instead) and works without an NPE. However, when a particular method is closed, then for that closed method Spring/CGLIB uses a wrapped object delegating to a properly initialized target. For some reason, the actual invocation of the method is done with the context being the wrapper, with its uninitialized field values (nulls), causing the NPE. (Had the method been called on the actual target/delegate, there would not have been a problem.)
Craig was able to solve the problem by opening the property (not the method), which I suspect had a similar effect of allowing Spring/CGLIB either not to use a delegate, or to somehow use the delegate correctly.
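The delegation behaviour described above can be reproduced outside Spring. Below is a minimal sketch using the CGLIB and Objenesis classes repackaged inside spring-core (the same combination Spring's AOP proxies use); the demo class and its member names are hypothetical:

import org.springframework.cglib.proxy.Callback;
import org.springframework.cglib.proxy.Enhancer;
import org.springframework.cglib.proxy.Factory;
import org.springframework.cglib.proxy.MethodInterceptor;
import org.springframework.objenesis.ObjenesisStd;

public class CglibDelegationDemo {

    static class Service {
        protected String internal = "initialized";               // set by the field initializer (constructor)
        public String openMethod() { return internal; }          // overridable, so it can be intercepted
        public final String closedMethod() { return internal; }  // final: cannot be intercepted
    }

    public static void main(String[] args) {
        Service target = new Service(); // the real, fully initialized bean

        Enhancer enhancer = new Enhancer();
        enhancer.setSuperclass(Service.class);
        enhancer.setCallbackType(MethodInterceptor.class);
        Class<?> proxyClass = enhancer.createClass();

        // Like Spring, instantiate the proxy WITHOUT running any constructor,
        // so the proxy's own fields stay null.
        Service proxy = (Service) new ObjenesisStd().newInstance(proxyClass);
        ((Factory) proxy).setCallbacks(new Callback[] {
                (MethodInterceptor) (obj, method, methodArgs, methodProxy) ->
                        methodProxy.invoke(target, methodArgs) // delegate every intercepted call to the target
        });

        System.out.println(proxy.openMethod());   // "initialized": intercepted and delegated to the target
        System.out.println(proxy.closedMethod()); // null: final method runs on the proxy itself, fields unset
    }
}

With an open method, CGLIB overrides it and routes the call to the fully initialized target; a final (closed, in Kotlin terms) method escapes interception and executes against the proxy's own never-initialized fields, which is exactly the NPE described.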

Questions regarding best use of testng and mockito

I am very new to TestNG (unit testing) and Mockito. I have read some articles and gone through some code snippets on the Internet, but I still have some doubts regarding unit testing with TestNG & Mockito in the Spring framework.
For unit testing a service layer, we mock the DAO. What if I want to test a function which fetches some data from the database and does some operations? How does the mock DAO work here? From where will the mocked DAO get data for testing such a function?
Say I am doing a validation like "data not present in the database" and I want to test whether the correct exception is thrown for that. This needs some values in the database, and the mocked DAO would check whether the data is present in a predefined (in-memory) database. How do I provide such data?
Does a DataProvider help to provide data to be used by the DAO? If yes, how can it be done?
Please correct me if my understanding of unit testing is wrong, and let me know where I am getting it wrong if I have misunderstood a concept.
Thank you.
1) Besides unit tests, you also need integration and/or acceptance tests.
The unit tests will verify that your SUT (System Under Test), in this case a specific service class, works as intended, without integrating it with other classes or systems (the DB). Additionally, however, I would write an integration test for this service that retrieves/manipulates test data from the database. This test should ideally not make any assumptions about the data in the database, so inserting the data you will be looking for before executing the test is recommended, e.g. in an @Before method, and actually committing this test data to the test database. I further recommend that you do a proper cleanup of the database in the @After test method. Auto-rolling-back the data could be done, but is not optimal, especially if you have a persistence framework like Hibernate or JPA in between. Only when you work on committed data that is really in the physical (not virtual!) database can you be 100% sure your test succeeded.
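As a rough illustration of that shape (the @Before/@After mentioned above correspond to TestNG's @BeforeMethod/@AfterMethod), here is a sketch in which UserDao, JdbcUserDao, User and TestDatabase are all hypothetical names:

import static org.testng.Assert.assertNotNull;

import org.testng.annotations.AfterMethod;
import org.testng.annotations.BeforeMethod;
import org.testng.annotations.Test;

public class UserDaoIntegrationTest {

    // hypothetical DAO wired against a dedicated test database
    private final UserDao dao = new JdbcUserDao(TestDatabase.dataSource());

    @BeforeMethod
    public void insertTestData() {
        dao.save(new User("alice")); // really committed to the physical test DB
    }

    @AfterMethod
    public void cleanUp() {
        dao.deleteByName("alice"); // explicit cleanup instead of auto-rollback
    }

    @Test
    public void findsCommittedUser() {
        assertNotNull(dao.findByName("alice"));
    }
}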
2) If I correctly understood your intent, this actually sounds like a perfect reason for mocking your DB / persistence object: make it throw the expected exception / return an empty result, then test that your code behaves as expected under this condition.
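A minimal Mockito sketch of that idea, assuming a UserService that throws UserNotFoundException when its UserDao returns null (all names hypothetical):

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.testng.annotations.Test;

public class UserServiceTest {

    @Test(expectedExceptions = UserNotFoundException.class)
    public void throwsWhenUserMissing() {
        UserDao dao = mock(UserDao.class);              // no database involved
        when(dao.findByName("ghost")).thenReturn(null); // stub the "not in DB" case
        new UserService(dao).loadUser("ghost");         // should translate null into the exception
    }
}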
3) A TestNG DataProvider actually does the opposite of what you are looking for: it is a way to provide an array of data to your test method:
org.testng.annotations.DataProvider
Annotation Type DataProvider
Mark a method as supplying data for a test method. The data provider name defaults to method name. The annotated method must return an Object[][] where each Object[] can be assigned the parameter list of the test method. The #Test method that wants to receive data from this DataProvider needs to use a dataProvider name equals to the name of this annotation.
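So a DataProvider feeds parameters into the test rather than into the DAO; combined with mocking, it can still drive several "missing data" scenarios through the same test (again with hypothetical names):

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

public class MissingUserTest {

    @DataProvider(name = "missingUsers")
    public Object[][] missingUsers() {
        return new Object[][] { { "ghost" }, { "nobody" } };
    }

    @Test(dataProvider = "missingUsers", expectedExceptions = UserNotFoundException.class)
    public void throwsForEachMissingUser(String name) {
        UserDao dao = mock(UserDao.class);
        when(dao.findByName(name)).thenReturn(null); // each provided name is absent from the "DB"
        new UserService(dao).loadUser(name);
    }
}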

Commit/Flush transactions during unit test?

I am using Spring and JUnit to write some integration tests for my DAO. I set up my test data at the beginning of the test method and then exercise my DAO methods later in the same test method. The problem is that if I don't flush/commit the transaction, the EntityManager returns the same instances of the entities that I have just created in my data setup, rendering my tests useless as they will always pass.
E.g.
@Test
@Transactional
public void loadTreeBasicCase() {
    // creates and saves a node to the DB
    Node n = createNode();
    // test DAO
    Node result = dao.lookup(n.getId());
    // verify
    assertThat(n, equalTo(result));
}
One way is to expose commit() and/or flush() methods in my DAO. But I would prefer not to do that, because in production code this almost never needs to happen (let the EntityManager do its thing). Is there a way to configure this via annotations or in the Spring config? I am using Spring and JPA 2 with Hibernate.
You can set the defaultRollback attribute (via @TransactionConfiguration, or @Rollback on a method) to reset things between tests. This doesn't sound like what you are asking for, just throwing it out there first.
Within the test, the entity manager is behaving correctly. You want to inject different behavior for testing to "disconnect" the setup from the rest of the test. One thing I did in some tests was to call flush on the entity manager directly from the test. I only had to do it a few times, but it was valuable in those cases. I did it in the test (not the DAO) so as to not provide a method on the DAO that I don't want people calling.
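A sketch of that flush-from-the-test approach, with the EntityManager injected into the test itself; the em.clear() call is an additional assumption on top of the answer's flush, used here to detach the setup entities so the DAO must genuinely reload them:

import static org.hamcrest.CoreMatchers.equalTo;
import static org.junit.Assert.assertThat;

import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

import org.junit.Test;
import org.springframework.transaction.annotation.Transactional;

public class NodeDaoTest { // assumes the usual Spring test runner/context setup

    @PersistenceContext
    private EntityManager em; // injected by Spring

    // dao and createNode() as in the original test

    @Test
    @Transactional
    public void loadTreeBasicCase() {
        Node n = createNode();
        em.flush(); // push the pending INSERT to the database
        em.clear(); // detach everything, so lookup() cannot return the cached instance
        Node result = dao.lookup(n.getId());
        assertThat(result, equalTo(n));
    }
}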

How to manage transactions with JAX-RS, Spring and JPA

I'm using JAX-RS to provide an HTTP-based interface to manage a data model. The data model is stored in a database and interacted with via JPA.
This allows me to modify the interface to the data model to suit REST clients, and it mostly seems to work quite well. However, I'm not sure how to handle the scenario where a method provided by a JAX-RS resource requires a transaction. This affects the JPA get, update, commit-on-tx-end pattern, because there is only a transaction wrapping the get operation, so the update is never committed. I can see the same problem occurring if a single REST operation requires multiple JPA operations.
As I'm using Spring's transaction support, the obvious thing to do is to apply @Transactional to these methods in the JAX-RS resources. However, in order for this to work, Spring needs to manage the lifecycle of the JAX-RS resources, and the usage examples I'm aware of have resources being created via `new` when needed, which makes me a little nervous anyway.
I can think of the following solutions:
Update my JPA methods to provide a transaction-managed version of everything I want to do from my REST interface atomically. This should work and keeps transactions out of the JAX-RS layer, but it prevents the get, update, commit-on-tx-end pattern and means I need to create a very granular JPA interface.
Inject the Resource objects; but they are typically stateful, holding at least the ID of the object being interacted with.
Ditch the hierarchy of resources and inject big, stateless super-resources at the root that manage the entire hierarchy from that root; not cohesive, big services.
Have a hierarchy of injected, stateless, transaction-supporting helper objects that 'shadow' the actual resources; the resources are instantiated and hold the state but delegate method invocations to the helper objects.
Anyone got any suggestions? It's quite possible I've missed some key point somewhere.
Update - to work around the lack of a transaction around the get, update, commit-on-tx-close flow, I can expose the EntityManager's merge(object) method and call it manually. Not neat, and it doesn't solve the larger problem, though.
Update 2 @skaffman
Code example:
In the JPA service layer, injected, annotations work:
public class MyEntityJPAService {
    ...
    @Transactional(readOnly = true) // do in transaction
    public MyEntity getMyEntity(final String id) {
        return em.find(MyEntity.class, id);
    }
}
In the JAX-RS resource, created by new, no transactions:
public class MyEntityResource {
    ...
    private MyEntityJPAService jpa;
    ...
    @Transactional // not injected, so not effective
    public void updateMyEntity(final String id, final MyEntityRepresentation rep) {
        MyEntity entity = jpa.getMyEntity(id);
        entity.setSomeField(rep.getSomeField());
        // no transaction commit, change not saved...
    }
}
I have a few suggestions:
Introduce a layer between your JPA and JAX-RS layers. This layer would consist of Spring-managed @Transactional beans, and would compose the various business-level operations from their component JPA calls. This is somewhat similar to your (1), but keeps the JPA layer simple. (See the sketch after this list.)
Replace JAX-RS with Spring MVC, which provides the same (or similar) functionality, including @PathVariable, @ResponseBody, etc.
Programmatically wrap your JAX-RS objects in transactional proxies using TransactionProxyFactoryBean. This would detect your @Transactional annotations and generate a proxy that honours them.
Use @Configurable and AspectJ LTW to allow Spring to honour @Transactional even if you create the object using `new`. See 8.8.1 Using AspectJ to dependency inject domain objects with Spring.
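As a sketch of suggestion 1, the intermediate layer could look like this (MyEntityService and updateSomeField are hypothetical names); the resource, even if created via `new`, then delegates to this Spring-managed bean:

import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class MyEntityService {

    @PersistenceContext
    private EntityManager em;

    @Transactional // effective here, because Spring manages this bean's lifecycle
    public void updateSomeField(final String id, final String someField) {
        MyEntity entity = em.find(MyEntity.class, id); // stays managed for the whole TX
        entity.setSomeField(someField); // dirty-checked and committed at TX end
    }
}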
