What do I get from setting TransactionAttributeType.NOT_SUPPORTED? - ejb-3.0

I happen to find examples that use this construct, though I am not sure what I can get from it.
Does it mean that all select statements in a stateless EJB should follow this?
@Stateless
public class EmployeeFacade {

    @PersistenceContext(unitName = "EmployeeService")
    EntityManager em;

    @TransactionAttribute(TransactionAttributeType.NOT_SUPPORTED)
    public List<Employee> findAllEmployees() {
        return em.createQuery("SELECT e FROM Employee e", Employee.class)
                 .getResultList();
    }
}
What do I get from this?
Thanks.

What you get is:
A relatively formal way to state that your method does not need a transaction (as a consequence you know, for example, that it will not call persist, merge or remove on the EntityManager).
A possible performance optimization in some cases.
No need to create or pass a transaction. According to the Java EE 5 Tutorial: "Because transactions involve overhead, this attribute may improve performance."
According to other sources (for example Pro JPA 2), it gives implementations the possibility to not create managed entities at all (which is likely a heavier operation than creating detached entities right away).
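To make the last two points concrete, here is a minimal sketch (the bean name ReportingFacade is hypothetical, not from the question). Because the method runs without a transaction and the transaction-scoped persistence context ends with the call, the entities it returns are detached; modifying them will not be flushed to the database.

@Stateless
public class ReportingFacade {

    @PersistenceContext(unitName = "EmployeeService")
    private EntityManager em;

    // A caller's transaction is suspended for the duration of this call,
    // so the provider can skip transaction bookkeeping for a pure read.
    @TransactionAttribute(TransactionAttributeType.NOT_SUPPORTED)
    public List<Employee> listEmployees() {
        List<Employee> employees =
                em.createQuery("SELECT e FROM Employee e", Employee.class)
                  .getResultList();
        // These entities are detached when the method returns: changing
        // them has no effect on the database unless they are merged later.
        return employees;
    }
}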

Related

Spring Data problem - derived delete doesn't work

I have a Spring Boot application (based on spring-boot-starter-data-jpa). I have an absolute minimum of configuration going on, and only a single table and entity.
I'm using CrudRepository<MyEntity, Long> with a couple of findBy methods, which all work, and a derived deleteBy method, which doesn't. The signature is simply:
public interface MyEntityRepository extends CrudRepository<MyEntity, Long> {
    Long deleteBySystemId(String systemId);
    // findBy methods left out
}
The entity is simple, too:
@Entity
@Table(name = "MyEntityTable")
public class MyEntity {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    @Column(name = "MyEntityPID")
    private Long myEntityPID;

    @Column(name = "SystemId")
    private String systemId;

    @Column(name = "PersonIdentifier")
    private String personIdentifier;

    // Getters and setters here, also hashCode & equals.
}
The reason I say the deleteBy method isn't working is that it seems to only issue a "select" statement to the database, which selects all the MyEntity rows that have a SystemId with the value I specify. Using my MySQL global log I have captured the actual SQL, issued it manually against the database, and verified that it returns a large number of rows.
So Spring, or rather Hibernate, is trying to select the rows it has to delete, but it never actually issues a DELETE FROM statement.
According to a note on Baeldung, this select statement is normal: Hibernate will first select all rows that it intends to delete, then issue a delete statement for each of them.
Does anyone know why this derived deleteBy method is not working? I have @EnableTransactionManagement on my @Configuration, and the calling method is @Transactional. The MySQL log shows that Spring sets autocommit=0, so it seems transactions are properly enabled.
I have worked around this issue by manually annotating the derived delete method this way:
public interface MyEntityRepository extends CrudRepository<MyEntity, Long> {

    @Modifying
    @Query("DELETE FROM MyEntity m WHERE m.systemId = :systemId")
    Long deleteBySystemId(@Param("systemId") String systemId);

    // findBy methods left out
}
This works, including transactions. But it just shouldn't have to be this way; I shouldn't need to add that @Query annotation.
Here is a person who has the exact same problem as I do. However, the Spring developers were quick to wash their hands of it and write it off as a Hibernate problem, so there is no solution or explanation to be found there.
Oh, for reference I'm using Spring Boot 2.2.9.
tl;dr
It's all in the reference documentation. That's the way JPA works. (Me rubbing hands washing.)
Details
The two methods do two different things. Long deleteBySystemId(String systemId); loads the entities matching the given constraints and ends up issuing EntityManager.remove(…) for each of them, which the persistence provider may delay until the transaction commits. I.e. code following that call is not guaranteed that the changes have already been synced to the database. That in turn is because JPA allows its implementations to do just that. Unfortunately that's nothing Spring Data can fix on top of it. (More rubbing, more washing, plus a bit of soap.)
The reference documentation justifies that behavior with the need for the EntityManager (again, a JPA abstraction, not something Spring Data has anything to do with) to trigger lifecycle events like @PreRemove etc., which users expect to fire.
The second method, declaring a modifying query manually, executes a query directly in the database, which means that entity lifecycle callbacks do not fire, as the entities never get materialized in the first place.
However the Spring developers were quick to wash their hands and write it off as a Hibernate problem so no solution or explanation to be found there.
There's a detailed explanation of why it works the way it works in the comments on the ticket. There are even solutions provided: workarounds, and suggestions to bring this up with the part of the stack that has control over this behavior. (Shuts faucet, reaches for a towel.)
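To see the lifecycle difference in one place, here is a minimal sketch (the @PreRemove callback and its body are illustrative only, not from the question): the derived deleteBySystemId(...) materializes each matching entity and fires its callbacks, while a bulk @Modifying @Query delete never does.

import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.PreRemove;

@Entity
public class MyEntity {

    @Id
    private Long myEntityPID;

    private String systemId;

    // Fires once per row for the derived deleteBySystemId(...), because
    // Spring Data loads every matching entity and calls
    // EntityManager.remove(...) on it. The bulk
    // @Modifying @Query("DELETE FROM MyEntity ...") variant runs as a
    // single JPQL statement, so this callback is silently skipped.
    @PreRemove
    void beforeRemove() {
        System.out.println("removing entity with systemId=" + systemId);
    }
}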

findAll() in Spring services

Yesterday I got access to a new project in my company and I found this:
public List<User> findActiveUsers() {
    return this.userRepository.findAll().stream()
        .filter(u -> u.isActive())
        .collect(Collectors.toList());
}
Is this a good way to find all the active users? Or should it be done in the repository, like this?
public interface UserRepository extends JpaRepository<User, Long> {

    @Query("SELECT user FROM User user WHERE user.active = TRUE")
    List<User> findActiveUsers();
}
And if the first solution is correct, what about performance?
Firstly, both options fulfill the requirement.
However, option 2 makes more sense, as it filters the data at the query level rather than at the Java level. I believe performance would be better with the second option, though I don't have any data to back up this statement; the comment on performance is based on my experience.
You can also consider whether a cache (@Cacheable) can be used. It purely depends on the use case, i.e. how frequently the User entity changes and how frequently you would like to refresh the cache.
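As a hedged illustration of the caching idea (the UserService class and the "activeUsers" cache name are hypothetical, and @EnableCaching is assumed on a configuration class):

import java.util.List;
import org.springframework.cache.annotation.CacheEvict;
import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

@Service
public class UserService {

    private final UserRepository userRepository;

    public UserService(UserRepository userRepository) {
        this.userRepository = userRepository;
    }

    // Cached until evicted; suitable only if User rows change rarely.
    @Cacheable("activeUsers")
    public List<User> findActiveUsers() {
        return userRepository.findActiveUsers();
    }

    // Evict on writes so the cache does not serve stale data.
    @CacheEvict(value = "activeUsers", allEntries = true)
    public User save(User user) {
        return userRepository.save(user);
    }
}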
One disadvantage of using a native query is that Spring Data JPA currently doesn't support dynamic sorting for native queries.
Please refer to the similar question discussed in the link below, though it is very much related to Hibernate. Clearly, option 3 there (the @Query approach) is preferred.
Spring Data Repository with ORM, EntityManager, #Query, what is the most elegant way to deal with custom SQL queries?

Good usage of managed entities / proxies with ORM

I'm currently thinking about how I handle my domain objects with Hibernate, considering the following:
My model objects are directly annotated with JPA annotations; there is no separate entity layer.
For some database-heavy operations, I don't mind tuning my code to take full advantage of the proxies, even if we can consider that a leak of abstraction/implementation masking. Of course I prefer it when I can do otherwise.
Because I don't have an entity layer, I don't have a DAO layer; the entity manager is itself considered to be the DAO layer (related: I found JPA, or alike, don't encourage DAO pattern).
However, I was thinking about improving what I'm doing now in order to reduce the complexity a bit, or at least relocate that complexity to a place where it fits better, like the entity's related service, and maybe to abstract away more of the fact that I'm using an ORM.
Here is a generic CRUD service from which all my business services inherit. This code shows how things are done currently (annotations and logs removed for clarity):
public void create(T entity) {
    this.entityManager.persist(entity);
}

@Transactional(value = TxType.REQUIRED, rollbackOn = NumeroVersionException.class)
public void update(T entity) throws NumeroVersionException {
    try {
        this.entityManager.merge(entity);
    } catch (OptimisticLockException ole) {
        throw new NumeroVersionException("for entity " + entity, ole);
    }
}

public T read(int id) {
    return this.entityManager.find(entityClass, id);
}

public void delete(int id) {
    T entity = this.entityManager.getReference(entityClass, id);
    this.entityManager.remove(entity);
    // edit: removed null test thanks to @JBNizet
}
The problem with this kind of implementation is that if I want to create an object and then use the advantages of the proxies, I basically have to create it and then refetch it. The refetch may not hit the database but only Hibernate's cache (I'm not sure about that, though), but it means I still must not forget to refetch the proxy.
This means I leak the fact that I'm using an ORM and proxies behind the scenes.
So I was thinking of changing my interface to something like:
public T read(int id);
public T update(T t) throws NumeroVersionException;
public T create(T object);
public void delete(int id);
List<T> list();
Meaning that once I pass an object to this layer, I will have to use the returned value.
And implement update specifically like:
public T update(T t) throws NumeroVersionException {
    if (!(t instanceof [proxy class goes there])) {
        // + check whether it is a detached proxy
        return entityManager.merge(t);
    }
    return t;
}
Since merge() hits the database every time it is called, this can be annoying for operations involving only ten or so entities, so I wouldn't call it in an update method that already holds a proxy.
Of course I expect some edge cases where I'll need the entityManager to flush things and so on. But I think this would significantly reduce the current complexity of my code and better isolate the concerns.
In short, what I'm trying to do is relocate the ORM code within the service so I can hide the fact that I'm using an ORM and proxies, and use the interface as I would use any other implementation, without losing the benefits of using an ORM.
So the questions are:
Is this new design a good idea towards this goal?
Did I miss anything about how to handle this properly?
Note: even though I'm talking about performance, my concern is also about separation of concerns, maintainability, and easier usability for the developers I work with who aren't familiar with ORMs and Java.
Thanks to @JBNizet I'm seeing some things more clearly:
I should use the value returned by the merge() method.
A managed entity is not always a proxy.
I don't have to abstract away the fact that I use managed entities; that would lead to complex and inefficient code.
I chose JPA and I won't switch away from it, which holds true unless I rewrite the full model for a non-relational database.
So I'll just change my update method from the original code and keep the rest.
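A minimal sketch of that revised update(), as a fragment of the same generic service: it keeps the original exception translation but returns the managed copy from merge(), as suggested.

@Transactional(value = TxType.REQUIRED, rollbackOn = NumeroVersionException.class)
public T update(T entity) throws NumeroVersionException {
    try {
        // merge() leaves the passed-in instance detached and returns the
        // managed copy, so callers must keep working with the return value.
        return this.entityManager.merge(entity);
    } catch (OptimisticLockException ole) {
        throw new NumeroVersionException("for entity " + entity, ole);
    }
}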

Autowire two Neo4j GraphRepository in Spring

I'm new to using Spring with Neo4j and I have a question about @Autowired for a GraphRepository.
Most examples I've seen use one @Autowired repository per controller, but I have two nodes I need to modify at the same time when a particular method is called in the controller. Should I simply @Autowire the repositories for both nodes (e.g. per the code below)? Is there any impact if I do this in a second controller with the same repositories as well (so if I had a ChatSessionController which also autowired ChatMessageService and ChatSessionService)?
ChatMessageController.java
@Controller
public class ChatMessageController {

    @Autowired
    private ChatMessageService chatMessageService;

    @Autowired
    private ChatSessionService chatSessionService;

    @RequestMapping(value = "/message/add/{chatSessionId}", method = RequestMethod.POST)
    @ResponseBody
    @Transactional
    public void addMessage(@RequestBody ChatMessagePack chatMessagePack,
                           @PathVariable("chatSessionId") Long chatSessionId) {
        ChatMessage chatMessage = new ChatMessage(chatMessagePack);
        chatMessageService.save(chatMessage);
        // TODO: Make some modifications to the ChatSession as well
    }
}
Any help would be much appreciated! I've been googling and looking through Stack Overflow to understand this better, but I haven't found anything yet. Any pointers in the right direction would be great.
Another underlying question is: should I be (and can I be) modifying other nodes in a GraphRepository that handles a particular node? E.g. should the GraphRepository for one node type be able to modify nodes managed by another GraphRepository?
Thanks!
I'm not convinced that this is an SO question; it's not really a Neo4j or Spring question either, it is more about the architecture of your application. However, assuming that you understand the negatives of class fan-out, and how to use the @Transactional annotation to achieve what you want, then the answer to your question is that it is just fine to have many repositories (Neo4j or otherwise, autowired or otherwise) in your class, and in as many classes as you want.
Neo4j transactions default to isolation level READ_COMMITTED, and if you need anything else, you need to add the guards/locks yourself. Nested transactions are considered to be the same transaction. The Spring @Transactional annotation relies on proxies that you should be aware of, as they have implications when calling methods from within the same class; see the sketch below.
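A minimal illustration of that proxy pitfall (the class and method names are hypothetical, not from the question):

import java.util.List;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class ChatSessionService {

    // Transactional when invoked through the Spring proxy,
    // e.g. from a controller or another bean.
    @Transactional
    public void closeSession(Long sessionId) {
        // ... load and update the session node ...
    }

    public void closeAllSessions(List<Long> sessionIds) {
        for (Long id : sessionIds) {
            // Self-invocation: this call bypasses the proxy, so
            // @Transactional on closeSession() has no effect here.
            closeSession(id);
        }
    }
}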
I would go through this tutorial over at Spring Data and get your head around how the real-world vs. domain vs. node models differ. There will be cases where one repository impacts another node type, but I would think it is often transparent to you (i.e. adding relationships). You can do what you like in each repository: their generic nature is largely confined to the built-in CRUD operations and the queries derived from finder-method names (see the documentation). Using the @Query annotation, some queries can have side effects, but largely you should avoid that.
As you start adding multiple repositories to multiple controllers, I think your code will begin to smell bad, and you should consider encapsulating this business logic off on its own somewhere, neatly unit tested (see the sketch after the links below). I also wouldn't tie myself to one controller per data object; it would be fine to have a single ChatController with a POST /chat/ to create a new session and a POST /chat/{sessionId} to add a message. Interesting questions on Programmers:
How accurate is "Business logic should be in a service, not in a model?"
Best Practices for MVC Architecture
MVC Architecture — How many Controllers do I need?
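As promised, a minimal sketch of that encapsulation (ChatService and its touch(...) helper are hypothetical names, not from the question; ChatMessage, ChatMessagePack and the two services come from the question's code):

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class ChatService {

    private final ChatMessageService chatMessageService;
    private final ChatSessionService chatSessionService;

    public ChatService(ChatMessageService chatMessageService,
                       ChatSessionService chatSessionService) {
        this.chatMessageService = chatMessageService;
        this.chatSessionService = chatSessionService;
    }

    // Both nodes are modified inside one transaction; the controller
    // shrinks to a thin HTTP adapter delegating to this method.
    @Transactional
    public void addMessage(Long chatSessionId, ChatMessagePack pack) {
        chatMessageService.save(new ChatMessage(pack));
        chatSessionService.touch(chatSessionId); // hypothetical session update
    }
}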

JPA and DAO - what's the standard approach?

I'm developing my first app with JPA/Hibernate and Spring. My first attempt at a DAO class looks like this:
@Repository(value = "userDao")
public class UserDaoJpa implements UserDao {

    @PersistenceContext
    private EntityManager em;

    public User getUser(Long id) {
        return em.find(User.class, id);
    }

    public List<User> getUsers() {
        TypedQuery<User> query = em.createQuery("select e from User e", User.class);
        return query.getResultList();
    }
}
I also found some examples using JpaDaoSupport and JpaTemplate. Which design do you prefer? Is there anything wrong with my example?
I'd say your approach looks totally sound. Personally I don't use JpaDaoSupport or JpaTemplate because you can do everything you need with the EntityManager and Criteria Queries.
Quote from the JavaDoc of JpaTemplate:
JpaTemplate mainly exists as a sibling of JdoTemplate and HibernateTemplate, offering the same style for people used to it. For newly started projects, consider adopting the standard JPA style of coding data access objects instead, based on a "shared EntityManager" reference injected via a Spring bean definition or the JPA PersistenceContext annotation.
I prefer the template-less approach (i.e. your current approach) because
it's less invasive: you don't tie your DAOs to Spring
templates don't offer much value with APIs that use unchecked exceptions
And this is the Spring recommendation, as summarized in the blog post "So should you still use Spring's HibernateTemplate and/or JpaTemplate??" and the official javadoc:
The real question is: which approach to choose??
(...)
So in short (as the JavaDoc for HibernateTemplate and JpaTemplate already mention) I'd recommend you to start using the Session and/or EntityManager API directly if you're starting to use Hibernate or JPA respectively on a new project. Remember: Spring tries to be non-invasive, this is another great example!
I personally prefer your approach: inject the EntityManager and use it directly. JpaTemplate is also an option, but I don't like it because it adds yet another, unnecessary layer of abstraction.
I don't know if there's a "standard" approach.
If you're using JPA, you have your choice of implementations: Hibernate, TopLink, etc.
If you deploy to Google App Engine, you'll use JPA talking to BigTable.
So if your objective is to maximize portability, stick with the JPA standard and don't tie yourself to a particular implementation like Hibernate: make sure that your DAOs use only JPA constructs, as sketched below.
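A minimal sketch of what "only JPA constructs" can look like in practice (the class name GenericJpaDao is hypothetical): it imports nothing provider-specific, so it should run unchanged on Hibernate, TopLink/EclipseLink, or any other JPA provider.

import java.util.List;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

public abstract class GenericJpaDao<T> {

    @PersistenceContext
    protected EntityManager em;

    private final Class<T> entityClass;

    protected GenericJpaDao(Class<T> entityClass) {
        this.entityClass = entityClass;
    }

    public T find(Long id) {
        return em.find(entityClass, id);
    }

    public List<T> findAll() {
        // Assumes the entity name defaults to the simple class name,
        // which holds unless @Entity(name = ...) overrides it.
        return em.createQuery(
                "select e from " + entityClass.getSimpleName() + " e",
                entityClass).getResultList();
    }
}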
