I found out that there is an interface called GraphRepository. I have a repository for users implementing a homemade interface that does its job, but I was wondering: shouldn't I implement GraphRepository instead? Even though it would take quite a while to implement and some methods would be useless, it is a standard, and I have already re-coded a lot of the methods that are defined in this interface.
So should I write "YAGNI" code, or not respect the standard?
What is your advice?
You don't need to actually implement GraphRepository, just extend it. The principle of Spring Data is that all the boilerplate CRUD code is taken care of for you (by proxying at startup time), so all you have to do is create an interface for your specific entity that extends GraphRepository, and then add only the specific methods you require.
For example, if I have an entity CustomerNode, I can create a new interface CustomerNodeRepository extends GraphRepository<CustomerNode,Long> to get the standard CRUD methods. All the methods from GraphRepository (e.g. save, findAll, findOne, delete, deleteAll, etc.) are now accessible from CustomerNodeRepository and implemented by Spring Data Neo4j, without writing a single line of implementation code.
The pattern then lets you work on your repository-specific code (e.g. findByNameAndDateOfBirth) rather than the simple CRUD stuff, as in the sketch below.
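A minimal sketch of that pattern; it assumes GraphRepository lives in org.springframework.data.neo4j.repository, and the finder's parameter types are guesses based on the example names above:

import java.util.Date;
import java.util.List;
import org.springframework.data.neo4j.repository.GraphRepository;

public interface CustomerNodeRepository extends GraphRepository<CustomerNode, Long> {

    // Derived query: Spring Data generates the implementation from the method name.
    List<CustomerNode> findByNameAndDateOfBirth(String name, Date dateOfBirth);
}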
The Spring Data package is very useful for repository interaction. It can remove huge amounts of code (I have seen 80%+ reductions in lines of code), and I would highly recommend using it.
Edit: implementing custom behavior
If you want to add your own custom behavior to a repository method, you combine your interfaces with a custom implementation. For example, let's say I want to create a method called findCustomerNodeBySomeStrangeCriteria, and to do this I actually want to link off to a relational database to perform the lookup.
First, we define a separate, standalone interface that only includes our 'extra' method.
public interface CustomCustomerNodeRepository {

    List<CustomerNode> findCustomerNodeBySomeStrangeCriteria(Object strangeCriteria);
}
Next, we update our normal interface to extend not only GraphRepository, but our new custom one too:
public interface CustomerNodeRepository extends GraphRepository<CustomerNode,Long>, CustomCustomerNodeRepository {
}
The last piece is to actually implement our findCustomerNodeBySomeStrangeCriteria method:
public class CustomerNodeRepositoryImpl implements CustomCustomerNodeRepository {

    public List<CustomerNode> findCustomerNodeBySomeStrangeCriteria(Object criteria) {
        // implementation code
    }
}
So, there are a couple of points to note:
we create a separate interface to define any custom methods that have custom implementations (as distinct from the Spring Data derived "findBy..." methods)
our CustomerNodeRepository interface (our 'main' interface) extends both GraphRepository and our 'custom' one
we implement only the 'custom' method, in a class that implements only the custom interface
the 'custom' implementation class must (by default) be named after our 'main' interface, with an Impl suffix, to be picked up by Spring Data (so in this case, CustomerNodeRepositoryImpl)
Under the covers, Spring Data delivers a proxy implementation of CustomerNodeRepository as a merge of the auto-built GraphRepository and our class implementing CustomCustomerNodeRepository. The naming convention is what lets Spring Data pick the implementation class up reliably (it can be overridden so that Spring does not look for *Impl), as sketched below.
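For instance, a hedged sketch of overriding that suffix with Java config; the repositoryImplementationPostfix attribute is assumed to be available on @EnableNeo4jRepositories, and the package name is made up:

import org.springframework.context.annotation.Configuration;
import org.springframework.data.neo4j.repository.config.EnableNeo4jRepositories;

@Configuration
@EnableNeo4jRepositories(
        basePackages = "com.example.repositories",      // hypothetical package
        repositoryImplementationPostfix = "CustomImpl") // look for *CustomImpl instead of *Impl
public class Neo4jConfig {
}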
Related
In a Spring Boot Web Application layout, I have defined a Service Interface named ApplicationUserService. An implementation called ApplicationUserServiceImpl implements all the methods present in ApplicationUserService.
Now, I have a Controller called ApplicationUserController that calls all the methods of ApplicationUserServiceImpl under different @GetMapping annotations.
As suggested by my instructor, I have set up dependency injection as follows:
public class ApplicationUserController {

    // the injected service is used through this interface-typed reference
    private final ApplicationUserService applicationUserService;

    public ApplicationUserController(ApplicationUserService applicationUserService) {
        this.applicationUserService = applicationUserService;
    }

    @GetMapping
    // REST OF THE CODE
}
I am new to Spring Boot. I have read about Dependency Injection in plain English, and I understood the basic idea: separate the dependency from its usage. But I am totally confused about how this works in my case.
My Questions:
Here, ApplicationUserService is an interface, and its implementation defines various methods. In the code above, applicationUserService now has access to every method from ApplicationUserServiceImpl. How did that happen?
I want to know how the object creation works here.
Could you tell me the difference between not using DI and using DI in this case?
Answers
The interface layer is used for abstraction of the code; it is really helpful when you want to provide different implementations. When you create an instance, you are simply creating an ApplicationUserServiceImpl and assigning it to a reference variable of type ApplicationUserService; this concept is called upcasting. Even though you have created an object of the child/implementation class, you can only access the properties and methods defined in the parent class/interface (see the sketch after the link). Please refer to this for more info:
https://stackoverflow.com/questions/62599259/purpose-of-service-interface-class-in-spring-boot#:~:text=There%20are%20various%20reasons%20why,you%20can%20create%20test%20stubs.
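A minimal sketch of that upcasting; the method name is hypothetical, since the question doesn't show the interface's methods:

public class UpcastingDemo {
    public static void main(String[] args) {
        // The implementation object is assigned to an interface-typed reference.
        ApplicationUserService applicationUserService = new ApplicationUserServiceImpl();

        // Only methods declared on ApplicationUserService are visible through this
        // reference; methods defined solely on ApplicationUserServiceImpl are not.
        applicationUserService.someInterfaceMethod(); // hypothetical method, for illustration
    }
}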
In this example, when an ApplicationUserController object is created, the Spring engine (or IoC container) creates an object of ApplicationUserServiceImpl (since there is a single implementation of the interface) and passes it to the controller object as a constructor argument, which you assign to the reference variable. Also look up the concept called IoC (Inversion of Control).
As explained in the previous answer, Spring takes care of object creation (the object lifecycle) rather than you doing it explicitly. This makes the objects loosely coupled; the control of creating the instances lies with Spring.
The non-DI way of doing this is:
private ApplicationUserService applicationUserService = new ApplicationUserServiceImpl();
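To make the difference concrete, here is a minimal sketch of a test exploiting constructor injection; the interface method and the stub class are hypothetical, since the question doesn't show the real methods:

// Hypothetical service method, for illustration only.
interface ApplicationUserService {
    String findUsername(Long id);
}

// A hand-written stub; no Spring container is needed to use it.
class StubApplicationUserService implements ApplicationUserService {
    @Override
    public String findUsername(Long id) {
        return "test-user"; // canned data for the test
    }
}

public class ControllerTestSketch {
    public static void main(String[] args) {
        // Because the controller receives its dependency from outside, a test can
        // hand it the stub. With "new ApplicationUserServiceImpl()" hard-coded
        // inside the controller, this substitution would be impossible.
        ApplicationUserController controller =
                new ApplicationUserController(new StubApplicationUserService());
    }
}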
Hope I have answered your questions.
This analogy may help you understand better. Consider an example wherein you have the ability to cook. According to the IoC principle (the principle on which DI is based), you can invert the control: instead of cooking food yourself, you can order from outside and receive the food at your doorstep. The process of food being delivered to your doorstep is called Inversion of Control.
You do not have to cook yourself; instead, you can order the food and let a delivery executive deliver it for you. That way, you do not have to take on the additional responsibility and can just focus on the main work.
Now that you know the principle behind Dependency Injection, let me take you through the types of Dependency Injection.
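A minimal sketch of the two most common types, constructor injection and setter injection (class names are illustrative; there is also field injection, where @Autowired sits directly on the field):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

@Component
class ConstructorInjectedClient {

    private final ApplicationUserService service;

    // Constructor injection: the dependency arrives when Spring creates the bean.
    ConstructorInjectedClient(ApplicationUserService service) {
        this.service = service;
    }
}

@Component
class SetterInjectedClient {

    private ApplicationUserService service;

    // Setter injection: Spring calls the setter after constructing the bean.
    @Autowired
    void setService(ApplicationUserService service) {
        this.service = service;
    }
}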
I'm learning how to use MapStruct in a Spring Boot and Kotlin project.
I've got a generated DTO (ThessaurusDTO) that has a nested List (a List<List<String>>) and I need this mapped to a flat List<String> on my model (Vocab).
It makes sense that MapStruct can't map this automatically, but I know for a fact that the outer list will always have size = 1. I have no control over the API the DTO model belongs to.
I found in the documentation that I can define a default method implementation within the interface, which loosely translates to a normal function in Kotlin.
My mapper interface:
@Mapper
interface VocabMapper {

    @Mappings(
        // ...
    )
    fun thessaurusToVocab(thessaurusDTO: ThessaurusDTO): Vocab

    fun metaSyns(nestedList: List<List<String>>): List<String> =
        nestedList.flatten()
}
When I try to do a build I get the following error:
VocabMapper.java:16: error: Can't map collection element "java.util.List<java.lang.String>" to "java.lang.String ". Consider to declare/implement a mapping method: "java.lang.String map(java.util.List<java.lang.String> value)".
It looks like MapStruct is still trying to generate the mapping automatically while ignoring my custom implementation. Am I missing something trivial here?
I found in the documentation that I can define a default method implementation within the interface, which loosely translates to a normal function in Kotlin
From my understanding of what I found online, Kotlin does not translate an interface function into a proper Java default method; it actually generates a class that implements the interface.
If that's the problem, you can annotate metaSyns with @JvmDefault:
Specifies that a JVM default method should be generated for non-abstract Kotlin interface member.
Usages of this annotation require an explicit compilation argument to be specified: either -Xjvm-default=enable or -Xjvm-default=compatibility.
See the link for the difference, but you probably need -Xjvm-default=enable.
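For comparison, this is the shape MapStruct expects on the JVM: a true Java default method with a body on the mapper interface. A sketch reusing the names from the question (Vocab, ThessaurusDTO, and the mapping annotations are elided as in the original):

import java.util.List;
import java.util.stream.Collectors;
import org.mapstruct.Mapper;

@Mapper
public interface VocabMapper {

    Vocab thessaurusToVocab(ThessaurusDTO thessaurusDTO);

    // A real Java default method: MapStruct sees the body and uses it, instead of
    // trying to generate its own List<List<String>> -> List<String> mapping.
    default List<String> metaSyns(List<List<String>> nestedList) {
        return nestedList.stream()
                .flatMap(List::stream)
                .collect(Collectors.toList());
    }
}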
I seem to have fixed this by relying on an abstract-class-based implementation instead of an interface.
From my understanding of what I found online, Kotlin does not translate an interface function into a proper Java default method; it actually generates a class that implements the interface.
https://github.com/mapstruct/mapstruct/issues/1577
I'm working through a spring mvc video series and loving it!
https://www.youtube.com/channel/UCcawgWKCyddtpu9PP_Fz-tA/videos
I'd like to learn more about the specifics of the exact architecture being used, and I'm having trouble identifying its proper name so that I can read further.
For example, I understand that the presentation layer is MVC, but I'm not really sure how you would more specifically describe the pattern to account for the use of service and resource objects, as opposed to service, DAO and domain objects.
Any clues to help me better focus my search on understanding the layout below?
application
    core
        models/entities
        services
    rest
        controllers
        resources
        resource_assemblers
Edit:
Nathan Hughes' comment clarified my confusion with the nomenclature, and SirKometa connected the architectural dots that I was not grasping. Thanks, guys.
As far as I can tell, the layout you have mentioned represents an application which communicates with the world through REST services.
core package - represents all the classes (domain, services, repositories) which are not related to the view.
model package - Assuming you are aiming for a typical application, you have a model/domain/entity package which represents your data. For example: https://github.com/chrishenkel/spring-angularjs-tutorial-10/blob/master/src/main/java/tutorial/core/models/entities/Account.java.
repository package - Since you are using Spring, you will most likely also use spring-data or even spring-data-jpa with Hibernate as your ORM library. That will most likely lead you to use Repository interfaces (the author of the videos you watch decided, for some reason, not to use them). Either way, this will be your layer for accessing the database, for example: https://github.com/chrishenkel/spring-angularjs-tutorial-10/blob/master/src/main/java/tutorial/core/repositories/jpa/JpaAccountRepo.java
service package - will be your package for manipulating data. It's not the best example, but this layer doesn't access your database directly; it uses repositories to do that. It might also do other things - it is your API for manipulating data in your application. Let's say you want to run a fancy calculation on your wallet before you save it to the DB, or, as here https://github.com/chrishenkel/spring-angularjs-tutorial-10/blob/master/src/main/java/tutorial/core/services/impl/AccountServiceImpl.java, you want to make sure that the Blog you are trying to create doesn't exist yet.
controllers package - contains all the classes which DispatcherServlet will use to take care of requests. You read the "input" from the request, process it (using your services here) and send your response.
resource_assemblers package - this one is framework-specific (Spring HATEOAS). As far as I can tell, it's just a DTO for your JSON responses (for example, you might want to store a password in your Account, but exposing it through JSON would be a bad idea, and that would happen if you didn't use a DTO). A minimal sketch of that idea follows.
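(Sketch only; the Account accessor and the chosen fields are assumptions, not taken from the tutorial.)

// Exposes only what is safe to serialize; the password stays on the Account entity.
public class AccountResource {

    private final String name;

    public AccountResource(Account account) {
        this.name = account.getName(); // assumes a getName() accessor on Account
    }

    public String getName() {
        return name;
    }
}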
Please let me know if that is the answer you were looking for.
In each case you are mostly talking about the same things; Spring just uses annotations so that, when it scans your classes, it knows what type of object it is creating or instantiating.
Basically, every request flows through a controller annotated with @Controller. Each method processes the request and (if needed) calls a specific service class to handle the business logic. Those classes are annotated with @Service. The controller gets instances of these classes by autowiring (@Autowired) or resourcing (@Resource) them.
@Controller
@RequestMapping("/")
public class MyController {

    @Resource
    private MyServiceLayer myServiceLayer;

    @RequestMapping("/retrieveMain")
    @ResponseBody // return the data itself rather than a view name
    public List<String> retrieveMain() {
        List<String> listOfSomething = myServiceLayer.getListOfSomethings();
        return listOfSomething;
    }
}
The service classes then perform their business logic and, if needed, retrieve data from a repository class annotated with @Repository. The service layer gets those instances the same way, by autowiring (@Autowired) or resourcing (@Resource) them.
@Service
public class MyServiceLayer implements MyServiceLayerService {

    @Resource
    private MyDaoLayer myDaoLayer;

    public List<String> getListOfSomethings() {
        List<String> listOfSomething = myDaoLayer.getListOfSomethings();
        // business logic goes here
        return listOfSomething;
    }
}
The repository classes make up the DAO; Spring uses the @Repository annotation on them. The entities are the individual class objects that are handled by the @Repository layer.
@Repository
public class MyDaoLayer implements MyDaoLayerInterface {

    @Resource
    private JdbcTemplate jdbcTemplate;

    public List<String> getListOfSomethings() {
        // retrieve the list from the database; the query is a placeholder, and rows
        // could equally be processed with a row mapper, object mapper, etc.
        List<String> listOfSomething = jdbcTemplate.queryForList("SELECT something FROM somewhere", String.class);
        return listOfSomething;
    }
}
@Repository, @Service, and @Controller are specializations of @Component. All of these layers could be annotated with @Component; it's just better to call each what it actually is.
So to answer your question: they refer to the same kind of thing, and the annotations just let Spring know what type of object it is instantiating and how to wire in another class.
I guess the architectural pattern you are looking for is Representational State Transfer (REST). You can read up on it here:
http://en.wikipedia.org/wiki/Representational_state_transfer
Within REST the data passed around is referred to as resources:
Identification of resources:
Individual resources are identified in requests, for example using URIs in web-based REST systems. The resources themselves are conceptually separate from the representations that are returned to the client. For example, the server may send data from its database as HTML, XML or JSON, none of which are the server's internal representation, and it is the same one resource regardless.
I just switched from ActiveRecord/NHibernate to Dapper. Previously, I had all of my queries in my controllers. However, some properties which were convenient to implement on my models (such as summaries/sums/totals/averages) I could calculate by iterating over instance variables (collections) in my model.
To be specific, my Project has a notion of AppSessions, and I can calculate the total number of sessions, plus the average session length, by iterating over someProject.AppSessions.
Now that I'm on Dapper, this seems confusing: my controller methods make queries to the database via Dapper (which seems okay), but my model class also makes queries to the database via Dapper (which seems strange).
TLDR: Should the DB access go in my model, my controller, or both? Using both seems wrong, and I would like to limit it to one "layer" so that changing the DB access style later doesn't have too much impact.
You should consider using a repository pattern:
With repositories, all of the database queries are encapsulated within a repository which is exposed through a public interface, for example:
public interface IGenericRepository<T> where T : class
{
    T Get(object id);
    IQueryable<T> GetAll();
    void Insert(T entity);
    void Delete(T entity);
    void Save(T entity);
}
Then you can inject a repository into a controller:
public class MyController
{
    private readonly IGenericRepository<Foo> _fooRepository;

    public MyController(IGenericRepository<Foo> fooRepository)
    {
        _fooRepository = fooRepository;
    }
}
This keeps the UI free of any DB dependencies and makes testing easier; in unit tests you can inject any mock that implements IGenericRepository<T>. It also allows the repository to implement and switch between technologies like Dapper or Entity Framework without any client changes, at any time.
The above example used a generic repository, but you don't have to; you can create a separate interface for each repository, e.g. IFooRepository.
There are many examples and many variations of how the repository pattern can be implemented, so google some more to understand it.
Another note: For small projects, it should be OK to put queries directly into controllers...
I can't speak for Dapper personally, but I've always restricted my DB access to models only, except in very rare circumstances. That seems to make the most sense, in my opinion.
A little more info: http://en.wikipedia.org/wiki/Model%E2%80%93view%E2%80%93controller
A model notifies its associated views and controllers when there has been a change in its state. This notification allows the views to produce updated output, and the controllers to change the available set of commands. A passive implementation of MVC omits these notifications, because the application does not require them or the software platform does not support them.
Basically, data access in models seems to be the standard.
I agree with @void-ray regarding the repository pattern. However, if you don't want to get into interfaces and dependency injection, you could still separate out your data access layer and use static methods to return data from Dapper.
When I am using Dapper, I typically have a repository library that returns very small objects or lists that can then be mapped into a ViewModel and passed to the View (the mapping is done by StructureMap, but it could be handled in the controller or another helper).
I don't know much about the DDD repository pattern, but its implementation in Spring is confusing me.
public interface PersonRepository extends JpaRepository<Person, Long> { … }
As the interface extends JpaRepository (or MongoDBRepository...), if you change from one db to another, you have to change the interface as well.
For me, an interface is there to provide some abstraction, but here it isn't all that abstract...
Do you know why Spring-Data works like that?
You are right: an interface is an abstraction over something that works the same way for all implementing classes, from an outside point of view.
And that is exactly what happens here:
JpaRepository is a common view of all your JPA repositories (for all the different entities), while MongoDBRepository is the same for all MongoDB entities.
But JpaRepository and MongoDBRepository have nothing in common, except what is defined in their common super-interfaces:
org.springframework.data.repository.PagingAndSortingRepository
org.springframework.data.repository.Repository
So for me it looks normal.
If you want to be able to switch from a JPA implementation to a document-based one (sorry, but I cannot imagine such a use case, anyway), have the code that uses your repository depend on PagingAndSortingRepository or Repository. And of course your repository interface should extend the correct interface (JpaRepository, MongoDBRepository) depending on what it is.
The reasoning behind this is pretty clearly stated in this blog post http://blog.springsource.com/2011/02/10/getting-started-with-spring-data-jpa/.
Defining this interface serves two purposes: First, by extending JpaRepository we get a bunch of generic CRUD methods into our type that allows saving Accounts, deleting them and so on. Second, this will allow the Spring Data JPA repository infrastructure to scan the classpath for this interface and create a Spring bean for it.
If you do not trust sources so close to the source (pun intended) it might be a good idea to read this post as well http://www.brucephillips.name/blog/index.cfm/2011/3/25/Using-Spring-Data-JPA-To-Reduced-Data-Access-Coding.
What I did not need to code is an implementation of the PersonRepository interface. Spring will create an implementation of this interface and make a PersonRepository bean available to be autowired into my Service class. The PersonRepository bean will have all the standard CRUD methods (which will be transactional) and return Person objects or collection of Person objects. So by using Spring Data JPA, I've saved writing my own implementation class.
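A sketch of what that looks like from the service side; PersonService and its method are illustrative, and findOne(…) is the CRUD method of that Spring Data generation:

import org.springframework.stereotype.Service;

@Service
public class PersonService {

    private final PersonRepository personRepository;

    // Spring injects its generated proxy; no implementation class is written by hand.
    public PersonService(PersonRepository personRepository) {
        this.personRepository = personRepository;
    }

    public Person findPerson(Long id) {
        return personRepository.findOne(id); // standard CRUD method provided by Spring Data
    }
}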
Until M2 of Spring Data we required users to extend JpaRepository, for the following reasons:
The classpath scanning infrastructure only picked up interfaces extending that interface. Because one might use Spring Data JPA and Spring Data Mongo in parallel and have both of them pointed at the very same package, it would not be clear which store to create the proxy for. Since RC1 we simply leave that burden to the developer, as we think it's a rather exotic case, and the benefit of just using Repository, CrudRepository or the like outweighs the effort you have to make in the corner case just described. You can use the exclude and include elements in the namespace to gain finer-grained control over this.
Until M2 we applied transactionality to the CRUD methods by redeclaring them and annotating them with @Transactional. That decision in turn was driven by the algorithm AnnotationTransactionAttributeSource uses to find transaction configuration, since we wanted to give the user the possibility to reconfigure transactions by just redeclaring a CRUD method in the concrete repository interface and applying @Transactional to it. For RC1 we decided to implement a custom TransactionAttributeSource so we could move the annotations back to the repository CRUD implementation.
Long story short, here's what it boils down to:
As of RC1 there's no need to extend the store-specific repository interface anymore, unless you want to…
Use List-based access to findAll(…) instead of the Iterable-based one in the more core repository interfaces (although you could simply redeclare the relevant methods in a common base interface to return Lists as well), or
Make use of the JPA-specific methods like saveAndFlush(…) and so on.
Generally, you are much more flexible regarding the exposure of CRUD methods since RC1, as you can extend just the Repository marker interface and selectively add the CRUD methods you want to expose. As the backing implementation still implements all of the methods of PagingAndSortingRepository, we can still route the calls to the instance:
public interface MyBaseRepository<T, ID extends Serializable> extends Repository<T, ID> {

    List<T> findAll();

    T findOne(ID id);
}

public interface UserRepository extends MyBaseRepository<User, Long> {

    List<User> findByUsername(String username);
}
In that example we define MyBaseRepository to expose only findAll() and findOne(…) (which will be routed to the instance implementing the CRUD methods), and the concrete repository adds a finder method on top of the two CRUD ones.
For more details on that topic please consult the reference documentation.