A question regarding DAO vs Repository patterns - spring

I am new to repositories and I am a bit confused at the moment. From what I have read, the DAO pattern is where you provide methods to access the data store. With the repository pattern, though, you access the data store through a repository object.
I saw two examples here:
https://medium.com/@gustavo.ponce.ch/spring-boot-spring-mvc-spring-security-mysql-a5d8545d837d
http://javainsimpleway.com/spring-mvc-with-hibernate-crud-example/
The first example extends JpaRepository as intended, and no implementations are provided (for add, remove, etc.).
The second example provides DAO access with explicit methods, yet it goes with a service/repository implementation. I mean, it uses @Repository and @Service even though it is a DAO.
Which one is the right way to implement repositories?
Thanks for your time.

I recommend reading this article.
A DAO is much closer to the underlying storage; it's really data centric. That's why in many cases you'll have DAOs matching DB tables or views one to one.
A repository sits at a higher level. It deals with data too, and hides queries and all that, but a repository deals with business/domain objects.
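To make the distinction concrete, here is a rough sketch; the types (UserRow, Customer, CustomerId) and method names are invented for illustration, not taken from either linked example:

import java.util.List;

// A DAO mirrors the storage: typically one interface per table, with
// CRUD-style methods operating on row-shaped objects.
public interface UserDao {
    UserRow findById(long id);   // UserRow maps the USERS table 1:1
    void insert(UserRow row);
    void update(UserRow row);
    void delete(long id);
}

// A repository speaks the domain language: it behaves like an in-memory
// collection of business objects and hides how they are stored.
public interface CustomerRepository {
    Customer ofId(CustomerId id);
    List<Customer> withOverdueInvoices();   // a domain query, not a table scan
    void add(Customer customer);
}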

Implement "cache tags pattern" in Spring

I have been searching the Internet for a long time to find out whether Spring supports the "cache tag pattern", but it seems it doesn't. How do you implement this cache pattern in Spring?
An example of this cache pattern can be seen in Drupal's cache tag implementation. Here is an example of how that could look:
@Service
@AllArgsConstructor
class TicketService {

    private final UserRepository userRepository;
    private final ProductRepository productRepository;
    private final TicketRepository ticketRepository;

    // Note: the "tags" attribute is the proposed API; Spring's @Cacheable
    // does not actually offer it.
    @Cacheable(cacheNames = "global-tags-cache", tags = { "ticket:list", "user:#{user.id}", "product:#{product.id}" })
    public List<TicketDto> findTicketsForUserThatIncludesProduct(User user, Product product) {
        var tickets = ticketRepository.findByUserAndProduct(user, product);
        return tickets;
    }

    @CacheEvict(cacheNames = "global-tags-cache", tags = "ticket:list")
    public Ticket saveNewTicket(TicketRequestDto ticketRequestDto) { ... }
}

@Service
@AllArgsConstructor
class ProductService {

    private final ProductRepository productRepository;

    @CacheEvict(cacheNames = "global-tags-cache", tags = "product:#{productRequestDto.id}")
    public Product updateProductInformation(ProductRequestDto productRequestDto) {
        ...
    }
}

@Service
class NonTagCacheService {

    @Cacheable(cacheNames = "some-non-global-cache-store")
    public Object doStrongComputation() { ... }
}
The idea is to put the responsibility for "tag eviction" where it belongs: for example, TicketService wants its cache to be broken when the user is altered, when any ticket is altered, or when the specified product is altered... TicketService doesn't need to know when or where Product is going to clear its tag.
This pattern is extremely useful in Drupal and makes its cache very powerful. This is just an example; one can implement one's own tags for whatever purpose, for example a "kafka-process:custom-id" tag.
As M. Deinum explained, Spring's Cache Abstraction is just that, an "abstraction", or rather an SPI enabling different caching providers to be plugged into the framework in order to offer caching capabilities to managed application services (beans) where needed, not unlike Security, or other cross-cutting concerns.
Spring's Cache Abstraction only declares fundamental, but essential caching functions that are common across most caching providers. In effect, Spring's Cache Abstraction implements the lowest common denominator of caching capabilities (e.g. put and get). You can think of java.util.Map as the most basic, fundamental cache implementation possible, and is indeed one of the many supported caching providers offered out of the box (see here and here).
This means advanced caching functions, such as expiration, eviction, compression, serialization, general memory management and configuration, and so on, are left to individual providers since these type of capabilities vary greatly from one cache implementation (provider) to another, as does the configuration of these features. This would be no different in Drupal's case. The framework documentation is definitive on this matter.
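For reference, wiring in that simplest Map-backed provider takes one bean; this minimal configuration uses only standard Spring types (@EnableCaching, ConcurrentMapCacheManager):

import org.springframework.cache.CacheManager;
import org.springframework.cache.annotation.EnableCaching;
import org.springframework.cache.concurrent.ConcurrentMapCacheManager;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

// The simplest out-of-the-box provider: caches backed by ConcurrentHashMap.
// No expiration, eviction or size limits; just put and get.
@Configuration
@EnableCaching
class SimpleCacheConfig {

    @Bean
    CacheManager cacheManager() {
        return new ConcurrentMapCacheManager("global-tags-cache");
    }
}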
Still, not all is lost, but it usually requires a bit of work on the users part.
Being that the Cache and CacheManager interfaces are the primary interfaces (SPI) of the Spring Cache Abstraction, it is easy to extend or customize the framework.
First, and again, as M. Deinum points out, you have the option of custom key generation. But, this offers no relief with respect to (custom) eviction policies based on the key(s) (or tags applied to cache entries).
Next, you do have the option of getting access to the low-level, "native" cache implementation of the provider using the API, Cache.getNativeCache(). Of course, then you must forgo the use of Spring's Cache Annotations, or alternatively the JCache API Annotations supported by Spring, which isn't as convenient, particularly if you want to enable/disable caching conditionally.
Finally, I tested a hypothesis for a question posted on SO not long ago regarding a similar problem: using regular expressions (regex) to evict entries in a cache where the keys matched the regex. I never provided an answer to that question, but I did come up with a "generic" solution, implemented with 3 different caching providers: Redis, Apache Geode, and the simple ConcurrentMap implementation.
This involved a significant number of supporting infrastructure classes, beginning here and declared here. Of course, my goal was to implement support for multiple caching providers via "decoration". Your implementation need not be so complicated, or rather sophisticated. ;-)
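For a flavor of what such decoration could look like, here is a deliberately small, hypothetical sketch (class and method names are invented): a CacheManager that keeps a reverse index from tag to keys, so a whole tag can be evicted at once. How tags get registered (e.g. a custom annotation read by an aspect, or a custom KeyGenerator) is left out.

import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import org.springframework.cache.Cache;
import org.springframework.cache.concurrent.ConcurrentMapCacheManager;

// Hypothetical sketch: tag bookkeeping layered on the simple ConcurrentMap
// provider. Only registerTags and evictTag are new; everything else is
// inherited Spring behavior.
class TagAwareCacheManager extends ConcurrentMapCacheManager {

    // Reverse index: tag -> keys that were stored under that tag.
    private final Map<String, Set<Object>> keysByTag = new ConcurrentHashMap<>();

    // Associate a cache key with one or more tags.
    public void registerTags(Object key, String... tags) {
        for (String tag : tags) {
            keysByTag.computeIfAbsent(tag, t -> ConcurrentHashMap.newKeySet()).add(key);
        }
    }

    // Evict every entry associated with the given tag, across all caches.
    public void evictTag(String tag) {
        Set<Object> keys = keysByTag.remove(tag);
        if (keys == null) {
            return;
        }
        for (String name : getCacheNames()) {
            Cache cache = getCache(name);
            if (cache != null) {
                keys.forEach(cache::evict);
            }
        }
    }
}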
Part of my future work on the Spring team will involve gathering use cases like yours and providing (pluggable) extensions to the core Spring Framework Cache Abstraction that may eventually find its way back into the core framework, or perhaps exist as separate pluggable modules based on application use case and requirements that our many users have expressed over the years.
At any rate, I hope this offers you some inspiration on how to possibly and more elegantly handle your use case.
Always keep in mind the Spring Framework is an exceptional example of the Open/Closed principle; it offers many extension points, and when combined with the right design pattern (e.g. Decorator, not unlike AOP itself), it can be quite powerful.
Good luck!

What strategies exist for using Spring Cache on methods that take an array or collection parameter?

I want to use Spring's Cache abstraction to annotate methods as @Cacheable. However, some methods are designed to take an array or collection of parameters and return a collection. For example, consider this method to find entities:
public Collection<Entity> getEntities(Collection<Long> ids)
Semantically, I need to cache Entity objects individually (keyed by id), not based on the collection of IDs as a whole. Similar to what this question is asking about.
Simple Spring Memcached supports what I want, via its ReadThroughMultiCache, but I want to use Spring's abstraction in order to support easy changing of the cache store implementation (Guava, Coherence, Hazelcast, etc), not just memcached.
What strategies exist for caching this kind of method using Spring Cache?
Spring's Cache Abstraction does not support this behavior out-of-the-box. However, it does not mean it is not possible; it's just a bit more work to support the desired behavior.
I wrote a small example demonstrating how a developer might accomplish this. The example uses Spring's ConcurrentMapCacheManager to demonstrate the customizations. This example will need to be adapted to your desired caching provider (e.g. Hazelcast, Coherence, etc).
In short, you need to override the CacheManager implementation's method for "decorating" the Cache. This varies from implementation to implementation. In the ConcurrentMapCacheManager, the method is createConcurrentMapCache(name:String). In Spring Data GemFire, you would override the getCache(name:String) method to decorate the Cache returned. For Guava, it would be the createGuavaCache(name:String) in the GuavaCacheManager, and so on.
Then your custom, decorated Cache implementation (perhaps/ideally, delegating to the actual Cache impl, from this) would handle caching Collections of keys and corresponding values.
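As a rough, hypothetical illustration of that decoration (the class name is invented, and a real version would need to match your provider and key semantics): a decorated Cache can unwrap a Collection key, look each element up individually, and report a hit only when every element is present.

import java.util.ArrayList;
import java.util.Collection;
import java.util.Iterator;
import java.util.List;
import java.util.concurrent.Callable;
import org.springframework.cache.Cache;
import org.springframework.cache.support.SimpleValueWrapper;

// Sketch: caches the elements of a Collection key individually, treating
// the whole lookup as a miss unless every element key is cached.
class CollectionHandlingCache implements Cache {

    private final Cache delegate;

    CollectionHandlingCache(Cache delegate) {
        this.delegate = delegate;
    }

    @Override
    public ValueWrapper get(Object key) {
        if (!(key instanceof Collection)) {
            return delegate.get(key);
        }
        List<Object> values = new ArrayList<>();
        for (Object element : (Collection<?>) key) {
            ValueWrapper wrapper = delegate.get(element);
            if (wrapper == null) {
                return null; // all or nothing: one missing element is a miss
            }
            values.add(wrapper.get());
        }
        return new SimpleValueWrapper(values);
    }

    @Override
    public void put(Object key, Object value) {
        if (key instanceof Collection && value instanceof Collection) {
            // Assumes the iteration order of keys matches that of values.
            Iterator<?> keys = ((Collection<?>) key).iterator();
            Iterator<?> vals = ((Collection<?>) value).iterator();
            while (keys.hasNext() && vals.hasNext()) {
                delegate.put(keys.next(), vals.next());
            }
        } else {
            delegate.put(key, value);
        }
    }

    // The remaining operations simply delegate.
    @Override public String getName() { return delegate.getName(); }
    @Override public Object getNativeCache() { return delegate.getNativeCache(); }
    @Override public <T> T get(Object key, Class<T> type) { return delegate.get(key, type); }
    @Override public <T> T get(Object key, Callable<T> valueLoader) { return delegate.get(key, valueLoader); }
    @Override public ValueWrapper putIfAbsent(Object key, Object value) { return delegate.putIfAbsent(key, value); }
    @Override public void evict(Object key) { delegate.evict(key); }
    @Override public void clear() { delegate.clear(); }
}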
There are a few limitations to this approach:
A cache miss is all or nothing; i.e., a partially cached set of keys is still treated as a miss if any single key is missing. Spring (OOTB) does not let you simultaneously return the cached values and call the method for just the missing ones. That would require some very extensive modifications to the Cache Abstraction that I would not recommend.
My implementation is just an example so I chose not to implement the Cache.putIfAbsent(key, value) operation (here).
While my implementation works, it could be made more robust.
Anyway, I hope it provides some insight in how to handle this situation properly.
The test class is self-contained (uses Spring JavaConfig) and can run without any extra dependencies (beyond Spring, JUnit and the JRE).
Cheers!
Worked for me. Here's a link to my answer:
https://stackoverflow.com/a/60992530/2891027
TL;DR:
@Cacheable(cacheNames = "test", key = "#p0")
public List<String> getTestFunction(List<String> someIds) { ... }
My example is with String, but it also works with Integer and Long, and probably others.

Identifying Spring MVC architecture pattern

I'm working through a spring mvc video series and loving it!
https://www.youtube.com/channel/UCcawgWKCyddtpu9PP_Fz-tA/videos
I'd like to learn more about the specifics of the exact architecture being used and am having trouble identifying the proper name - so that I can read further.
For example, I understand that the presentation layer is MVC, but not really sure how you would more specifically describe the pattern to account for the use of service and resource objects - as opposed to choosing to use service, DAO and Domain objects.
Any clues to help me better focus my search on understanding the layout below?
application
core
models/entities
services
rest
controllers
resources
resource_assemblers
Edit:
Nathan Hughes comment clarified my confusion with the nomenclature and SirKometa connected the architectural dots that I was not grasping. Thanks guys.
As far as I can tell, the layout you have mentioned represents an application which communicates with the world through REST services.
core package represents all the classes (domain, services, repositories) which are not related to the view.
model package - Assuming you are aiming for a typical application, you have a model/domain/entity package which represents your data. For example: https://github.com/chrishenkel/spring-angularjs-tutorial-10/blob/master/src/main/java/tutorial/core/models/entities/Account.java.
repository package - Since you are using Spring, you will most likely also use spring-data or even spring-data-jpa with Hibernate as your ORM library. It will most likely lead you to use Repository interfaces (the author of the videos you watch decided not to use them, though). Anyway, it will be your layer for accessing the database, for example: https://github.com/chrishenkel/spring-angularjs-tutorial-10/blob/master/src/main/java/tutorial/core/repositories/jpa/JpaAccountRepo.java
service package will be your package for manipulating data. This layer doesn't access your database directly; it uses repositories to do that, but it might also do other things - it is your API for manipulating data in your application. Let's say you want to run a fancy calculation on your wallet before you save it to the DB, or, as here https://github.com/chrishenkel/spring-angularjs-tutorial-10/blob/master/src/main/java/tutorial/core/services/impl/AccountServiceImpl.java, you want to make sure that the blog you are trying to create doesn't exist yet.
controllers package contains all the classes which will be used by the DispatcherServlet to take care of requests. You read "input" from the request, process it (using your services here), and send the response.
resource_assemblers package in this case is framework specific (HATEOAS). As far as I can tell it's just a DTO for your JSON responses (for example, you might want to store a password in your Account, but exposing it through JSON would not be a good idea, and that would happen if you didn't use a DTO).
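As a tiny, hypothetical illustration of that last point (the names are invented): the resource deliberately exposes only the fields that are safe to serialize, so the entity's password never leaves the server.

// The entity as stored: includes a field we never want to expose.
class Account {
    Long id;
    String username;
    String password;
}

// The resource (DTO) serialized to JSON: no password field at all.
class AccountResource {
    Long id;
    String username;

    static AccountResource from(Account account) {
        AccountResource resource = new AccountResource();
        resource.id = account.id;
        resource.username = account.username;
        return resource;
    }
}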
Please let me know if that is the answer you were looking for.
This question may be of interest to you as well as this explanation.
You are mostly talking about the same things in each case; Spring just uses annotations so that when it scans them it knows what type of object you are creating or instantiating.
Basically, every request flows through the controller, annotated with @Controller. Each method processes the request and (if needed) calls a specific service class to handle the business logic. These classes are annotated with @Service. The controller can obtain instances of these classes by autowiring them (@Autowired) or resourcing them (@Resource).
@Controller
@RequestMapping("/")
public class MyController {

    @Resource private MyServiceLayer myServiceLayer;

    @RequestMapping("/retrieveMain")
    @ResponseBody
    public List<String> retrieveMain() {
        List<String> listOfSomething = myServiceLayer.getListOfSomethings();
        return listOfSomething;
    }
}
The service classes then perform their business logic and, if needed, retrieve data from a repository class annotated with @Repository. The service layer obtains these classes the same way, either by autowiring them (@Autowired) or resourcing them (@Resource).
@Service
public class MyServiceLayer implements MyServiceLayerService {

    @Resource private MyDaoLayer myDaoLayer;

    public List<String> getListOfSomethings() {
        List<String> listOfSomething = myDaoLayer.getListOfSomethings();
        // Business logic goes here
        return listOfSomething;
    }
}
The repository classes make up the DAO layer; Spring uses the @Repository annotation on them. The entities are the individual class objects that are received by the @Repository layer.
@Repository
public class MyDaoLayer implements MyDaoLayerInterface {

    @Resource private JdbcTemplate jdbcTemplate;

    public List<String> getListOfSomethings() {
        // Retrieve the list from the database, process it with a row mapper,
        // object mapper, etc. (the query below is just a placeholder).
        return jdbcTemplate.queryForList("SELECT something FROM somethings", String.class);
    }
}
@Repository, @Service, and @Controller are specific instances of @Component. All of these layers could be annotated with @Component; it's just better to call each what it actually is.
So to answer your question, they mean the same thing; they are just annotated to let Spring know what type of object it is instantiating and/or how to include another class.
I guess the architectural pattern you are looking for is Representational State Transfer (REST). You can read up on it here:
http://en.wikipedia.org/wiki/Representational_state_transfer
Within REST the data passed around is referred to as resources:
Identification of resources:
Individual resources are identified in requests, for example using URIs in web-based REST systems. The resources themselves are conceptually separate from the representations that are returned to the client. For example, the server may send data from its database as HTML, XML or JSON, none of which are the server's internal representation, and it is the same one resource regardless.

Autowire two Neo4j GraphRepository in Spring

I'm new to using Spring with Neo4j and I have a question about @Autowired for a GraphRepository.
Most examples I've seen use one @Autowired repository per controller, but I have two nodes I need to modify at the same time when a particular method is called in the controller. Should I simply autowire the repositories for both nodes (e.g. per the code below)? Is there any impact if I do this in a second controller with the same repositories as well (e.g. if I had a ChatSessionController which also autowired ChatMessageService and ChatSessionService)?
ChatMessageController.java
@Controller
public class ChatMessageController {

    @Autowired
    private ChatMessageService chatMessageService;

    @Autowired
    private ChatSessionService chatSessionService;

    @RequestMapping(value = "/message/add/{chatSessionId}", method = RequestMethod.POST)
    @ResponseBody
    @Transactional
    public void addMessage(@RequestBody ChatMessagePack chatMessagePack,
                           @PathVariable("chatSessionId") Long chatSessionId) {
        ChatMessage chatMessage = new ChatMessage(chatMessagePack);
        chatMessageService.save(chatMessage);
        // TODO: Make some modifications to the ChatSession as well
    }
}
Any help would be much appreciated! I've been googling and looking through Stack Overflow to understand this better but I haven't found anything yet. Any pointers in the right direction would be great.
Another underlying question is: should I be (and can I be) modifying other nodes in a GraphRepository that handles a particular node? E.g. should the GraphRepository for one node type be able to modify nodes of another type?
Thanks!
I'm not convinced that this is an SO question; it's not really a Neo4J or Spring question either, it is more about the architecture of your application. However, assuming that you understand the downsides of class fan-out, and how to use the @Transactional annotation to achieve what you want, the answer to your question is that it is just fine to have many repositories (Neo4J or otherwise, autowired or otherwise) in your class, and in as many classes as you want.
Neo4J transactions default to isolation level READ_COMMITTED, and if you need anything else, you need to add the guards/locks yourself. Nested transactions are considered to be the same transaction. The Spring @Transactional annotation relies on proxies, which you should be aware of as they have implications when calling methods from within the same class.
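To illustrate that proxy caveat with a minimal, hypothetical sketch (the class and method names are invented): a call made through this bypasses the Spring proxy, so the transactional advice is not applied.

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class ChatArchiveService {

    @Transactional
    public void saveMessageAndSession() {
        // Runs in a transaction when invoked through the Spring proxy,
        // e.g. when another bean calls it.
    }

    public void archive() {
        // Self-invocation: this call goes through 'this', not the proxy,
        // so the @Transactional advice above is NOT applied here.
        saveMessageAndSession();
    }
}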
I would go through this tutorial over at Spring Data and get your head around how the real-world vs domain vs node models differ. There will be cases where one repository impacts another node type, but I would think it is often transparent to you (i.e. adding relationships). You can do what you like in each repository using the @Query annotation (their generic nature is largely confined to the built-in CRUD methods and queries derived from finder-method names - see the documentation), and some queries have side effects, but largely you should avoid that.
As you start adding multiple repositories to multiple controllers I think your code will begin to smell, and you should consider encapsulating that business logic somewhere on its own, neatly unit tested. I also wouldn't tie myself to one controller per data object; it would be fine to have a single ChatController with POST /chat/ to create a new session and POST /chat/{sessionId} to add a message. Interesting questions on Programmers:
How accurate is "Business logic should be in a service, not in a model?"
Best Practices for MVC Architecture
MVC Architecture — How many Controllers do I need?

Confused about Spring-Data DDD repository pattern

I don't know much about the DDD repository pattern, but the implementation in Spring is confusing me.
public interface PersonRepository extends JpaRepository<Person, Long> { … }
As the interface extends JpaRepository (or MongoDBRepository...), if you change from one db to another, you have to change the interface as well.
For me an interface is there to provide some abstraction, but here it's not all that abstract...
Do you know why Spring-Data works like that?
You are right: an interface is an abstraction of something that works the same for all implementing classes, from an outside point of view.
And that is exactly what happens here:
JpaRepository is a common view of all your JPA Repositories (for all the different Entities), while MongoDBRepository is the same for all MongoDB Entities.
But JpaRepository and MongoDBRepository have nothing in common, except the stuff that is defined in their common super-interfaces:
org.springframework.data.repository.PagingAndSortingRepository
org.springframework.data.repository.Repository
So for me it looks normal.
If you use the classes that implement your repository, then use PagingAndSortingRepository or Repository if you want to be able to switch from a JPA implementation to a document-based implementation (sorry, but I cannot imagine such a use case, anyway). And of course your repository implementation should implement the correct interface (JpaRepository, MongoDBRepository) depending on what it is.
The reasoning behind this is pretty clearly stated in this blog post http://blog.springsource.com/2011/02/10/getting-started-with-spring-data-jpa/.
Defining this interface serves two purposes: First, by extending JpaRepository we get a bunch of generic CRUD methods into our type that allows saving Accounts, deleting them and so on. Second, this will allow the Spring Data JPA repository infrastructure to scan the classpath for this interface and create a Spring bean for it.
If you do not trust sources so close to the source (pun intended) it might be a good idea to read this post as well http://www.brucephillips.name/blog/index.cfm/2011/3/25/Using-Spring-Data-JPA-To-Reduced-Data-Access-Coding.
What I did not need to code is an implementation of the PersonRepository interface. Spring will create an implementation of this interface and make a PersonRepository bean available to be autowired into my Service class. The PersonRepository bean will have all the standard CRUD methods (which will be transactional) and return Person objects or collection of Person objects. So by using Spring Data JPA, I've saved writing my own implementation class.
Until M2 of Spring Data we required users to extend JpaRepository due to the following reasons:
The classpath-scanning infrastructure only picked up interfaces extending that interface, because one might use Spring Data JPA and Spring Data Mongo in parallel and have both of them point to the very same package; it would not be clear which store to create the proxy for. However, since RC1 we simply leave that burden to the developer, as we think it's a rather exotic case, and the benefit of just using Repository, CrudRepository or the like outweighs the effort you have to take in the just-described corner case. You can use exclude and include elements in the namespace to gain finer-grained control over this.
Until M2 we had transactionality applied to the CRUD methods by redeclaring the CRUD methods and annotating them with @Transactional. This decision in turn was driven by the algorithm AnnotationTransactionAttributeSource uses to find transaction configuration, as we wanted to provide the user with the possibility of reconfiguring transactions by just redeclaring a CRUD method in the concrete repository interface and applying @Transactional to it. For RC1 we decided to implement a custom TransactionAttributeSource to be able to move the annotations back to the repository CRUD implementation.
Long story short, here's what it boils down to:
As of RC1 there's no need to extend the store-specific repository interface anymore, unless you want to…
use List-based access to findAll(…) instead of the Iterable-based one in the more core repository interfaces (although you could simply redeclare the relevant methods in a common base interface to return Lists as well), or
make use of JPA-specific methods like saveAndFlush(…) and so on.
Generally, you are much more flexible regarding the exposure of CRUD methods since RC1, as you can even extend only the Repository marker interface and selectively add the CRUD methods you want to expose. As the backing implementation will still implement all of the methods of PagingAndSortingRepository, we can still route the calls to that instance:
public interface MyBaseRepository<T, ID extends Serializable> extends Repository<T, ID> {

    List<T> findAll();

    T findOne(ID id);
}

public interface UserRepository extends MyBaseRepository<User, Long> {

    List<User> findByUsername(String username);
}
In that example we define MyBaseRepository to expose only findAll() and findOne(…) (which will be routed into the instance implementing the CRUD methods), while the concrete repository adds a finder method to the two CRUD ones.
For more details on that topic please consult the reference documentation.
