I have a class
@Component
public class MessageServiceHelper {

    @Autowired
    private MessageService messageService;

    public boolean sendMessage(String text) {
        return messageService.sendMessage(text);
    }

    public MessageService getMessageService() {
        return messageService;
    }

    public void setMessageService(MessageService messageService) {
        this.messageService = messageService;
    }
}
So here I am autowiring messageService, so that whenever an object of MessageServiceHelper is instantiated, the MessageService dependency is automatically injected into it. I could achieve the same thing by writing some other class that creates the MessageService instance and calls the setter method.
Now the point that can be raised is that we have only shifted the dependency-resolution logic somewhere else, and that code is coupled to the instantiation of MessageService; if the implementation changes, I will have to change that class. But with Spring, too, if I have to change the implementation I have to change the metadata I provided earlier.
So my question is: what is different with DI? What is the strongest point about DI here?
In short, the metadata change is desirable: it lets you configure a higher layer of abstraction rather than change the application code. The configurable higher layer is then reusable, and your lower layer of application code is easily testable. Dependency injection is a long-established design technique, closely tied to the Dependency Inversion Principle of the well-known SOLID principles. It features in lots of frameworks, Spring being one of the most popular and one of the biggest evangelists of DI in the Java world. There are already lots of good articles available for understanding the necessity of DI; you can read them on Wikipedia and in the Microsoft library. And then there are a few gems on Martin Fowler's site to understand it in depth, here and here.
The point of dependency injection is reuse. By structuring your code correctly, you can inject different implementations at runtime.
This is particularly useful for unit testing where you want to pass a mock object instead of a real one without modifying the object. By moving the dependency outside of the code and injecting it you can test your code without modifying it.
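To make that concrete with the MessageServiceHelper from the first question, here is a minimal, hedged test sketch. It assumes MessageService is an interface with a boolean sendMessage(String) method and that JUnit is on the classpath; the stub class is hypothetical and hand-written for the test.

import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;

// Hypothetical stub standing in for the real MessageService
class StubMessageService implements MessageService {
    @Override
    public boolean sendMessage(String text) {
        return true; // pretend the message was sent successfully
    }
}

class MessageServiceHelperTest {

    @Test
    void sendMessageDelegatesToTheInjectedService() {
        MessageServiceHelper helper = new MessageServiceHelper();
        // The dependency is injected from the outside via the existing setter,
        // so the test never touches a real messaging backend.
        helper.setMessageService(new StubMessageService());
        assertTrue(helper.sendMessage("hello"));
    }
}

The same class runs unchanged in production with the real implementation wired in by Spring.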
It also allows a separation of concerns. If the creation of the object is a complex affair you don't have to put that code in the class which uses it. Other code or classes can do the creation and just pass it into your class, which no longer has to worry about the specifics of how it was created. So moving it outside is an advantage. Think of complex objects created using frameworks. Good examples include instantiating a database driver or database session.
There are other specific instances where the reuse is practical but I would say these are the main ones I've seen for web applications and business code.
Related
In a Spring Boot Web Application layout, I have defined a Service Interface named ApplicationUserService. An implementation called ApplicationUserServiceImpl implements all the methods present in ApplicationUserService.
Now, I have a Controller called ApplicationUserController that calls all the methods of ApplicationUserServiceImpl under different @GetMapping annotations.
As suggested by my instructor, I have defined a Dependency Injection as follows:
public class ApplicationUserController {

    private final ApplicationUserService applicationUserService; // this reference will hold the injected implementation

    public ApplicationUserController(ApplicationUserService applicationUserService) {
        this.applicationUserService = applicationUserService;
    }

    @GetMapping
    // REST OF THE CODE
}
I am new to Spring Boot and I tried understanding Dependency Injection in plain English and I understood how it works. I understood that the basic idea is to separate the dependency from the usage. But I am totally confused about how this works in my case.
My Questions:
Here ApplicationUserService is an interface and its implementation has various methods defined. In the code above, applicationUserService now has access to every method from ApplicationUserServiceImpl. How did that happen?
I want to know how the Object creation works here.
Could you tell me the difference between not using DI and using DI in this case?
Answers
The interface layer is used for abstraction of the code; it is really helpful when you want to provide different implementations for it. When you create an instance, you are simply creating an ApplicationUserServiceImpl and assigning it to a reference variable of type ApplicationUserService; this concept is called upcasting. Even though you have created an object of the child/implementation class, you can only access the properties and methods declared on the parent class/interface. Please refer to this for more info:
https://stackoverflow.com/questions/62599259/purpose-of-service-interface-class-in-spring-boot#:~:text=There%20are%20various%20reasons%20why,you%20can%20create%20test%20stubs.
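As a minimal illustration of that upcasting (the interface, implementation and method names below are hypothetical, not taken from the question):

// Hypothetical interface and implementation, just to show upcasting
interface GreetingService {
    String greet(String name);
}

class GreetingServiceImpl implements GreetingService {
    @Override
    public String greet(String name) {
        return "Hello, " + name;
    }

    void implOnlyMethod() { /* not declared on the interface */ }
}

class UpcastingDemo {
    public static void main(String[] args) {
        // The reference type is the interface, the object is the implementation
        GreetingService service = new GreetingServiceImpl();
        System.out.println(service.greet("world")); // OK: declared on the interface
        // service.implOnlyMethod();                // compile error: not visible via the interface
    }
}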
In this example, when an ApplicationUserController object is created, the Spring engine (IoC container) will create an object of ApplicationUserServiceImpl (since there is a single implementation of the interface) and pass it to the controller as a constructor argument, which you assign to the reference variable. Also refer to the concept called IoC (Inversion of Control).
As explained in the previous answer, Spring will take care of object creation (the object lifecycle) rather than you doing it explicitly. This makes the objects loosely coupled; the control over creating the instances lies with Spring.
The non-DI way of doing this is:
private ApplicationUserService applicationUserService = new ApplicationUserServiceImpl();
Hope I have answered your questions
This analogy may help you understand it better. Consider an example where you have the ability to cook. According to the IoC principle (the principle on which DI is based), you can invert the control: instead of cooking the food yourself, you can order from outside and receive the food at your doorstep. The process of food being delivered to your doorstep is the Inversion of Control.
You do not have to cook yourself; instead, you can order the food and let a delivery executive deliver it for you. This way, you do not have to take care of the additional responsibilities and can just focus on the main work.
Now that you know the principle behind Dependency Injection, let me take you through the types of Dependency Injection, sketched below.
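Since the answer stops before listing them, here is a minimal, hedged sketch of the three styles usually distinguished in Spring (constructor, setter and field injection); the DeliveryService interface is a hypothetical collaborator chosen to match the food analogy.

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

// Hypothetical dependency matching the analogy above
interface DeliveryService {
    void deliver(String meal);
}

@Component
class ConstructorInjected {
    private final DeliveryService deliveryService;

    // Constructor injection: the dependency is mandatory and can be final
    @Autowired
    ConstructorInjected(DeliveryService deliveryService) {
        this.deliveryService = deliveryService;
    }
}

@Component
class SetterInjected {
    private DeliveryService deliveryService;

    // Setter injection: the dependency can be optional or replaced later
    @Autowired
    public void setDeliveryService(DeliveryService deliveryService) {
        this.deliveryService = deliveryService;
    }
}

@Component
class FieldInjected {
    // Field injection: concise, but harder to test without a container
    @Autowired
    private DeliveryService deliveryService;
}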
I'm working through a spring mvc video series and loving it!
https://www.youtube.com/channel/UCcawgWKCyddtpu9PP_Fz-tA/videos
I'd like to learn more about the specifics of the exact architecture being used and am having trouble identifying the proper name - so that I can read further.
For example, I understand that the presentation layer is MVC, but I'm not really sure how you would more specifically describe the pattern to account for the use of service and resource objects, as opposed to choosing to use service, DAO and domain objects.
Any clues to help me better focus my search on understanding the layout below?
application
  core
    models/entities
    services
  rest
    controllers
    resources
    resource_assemblers
Edit:
Nathan Hughes' comment clarified my confusion with the nomenclature, and SirKometa connected the architectural dots that I was not grasping. Thanks guys.
As far as I can tell the layout you have mentioned represents the application which communicates with the world through REST services.
core package represents all the classes (domain, services, repositories) which are not related to view.
model package - Assuming you are aiming for a typical application, you have a model/domain/entity package which represents your data. For example: https://github.com/chrishenkel/spring-angularjs-tutorial-10/blob/master/src/main/java/tutorial/core/models/entities/Account.java.
repository package - Since you are using Spring, you will most likely also use spring-data or even spring-data-jpa with Hibernate as your ORM library. That will most likely lead you to use Repository interfaces (the author of the videos you are watching decided, for some reason, not to use them). Anyway, this will be your layer for accessing the database, for example: https://github.com/chrishenkel/spring-angularjs-tutorial-10/blob/master/src/main/java/tutorial/core/repositories/jpa/JpaAccountRepo.java
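As a hedged sketch of what such a repository interface tends to look like with spring-data-jpa (assuming the Account entity from the linked example and that it has a name property; the findByName method is an assumption for illustration):

import org.springframework.data.jpa.repository.JpaRepository;

// Spring Data derives the implementation and the query from the method name
public interface AccountRepository extends JpaRepository<Account, Long> {
    Account findByName(String name);
}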
service package will be your package for manipulating data. It's not the best example, but this layer doesn't access your database directly; it uses Repositories to do that, and it might also do other things - it is your API for manipulating data in your application. Let's say you want to run a fancy calculation on your wallet before you save it to the DB, or, like here https://github.com/chrishenkel/spring-angularjs-tutorial-10/blob/master/src/main/java/tutorial/core/services/impl/AccountServiceImpl.java, you want to make sure that the Blog you try to create doesn't exist yet.
controllers package contains all the classes which will be used by the DispatcherServlet to take care of requests. You read the "input" from the request, process it (using your Services here) and send your responses.
resource_assemblers package in this case is framework specific (HATEOAS). As far as I can tell it's just a DTO for your JSON responses (for example, you might want to store a password in your Account, but exposing it through JSON wouldn't be a good idea, and that is what would happen if you didn't use a DTO).
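A hedged sketch of such a resource/DTO, with hypothetical fields, deliberately leaving out anything you would not want serialized to JSON (such as a password):

// Hypothetical resource: only the fields that are safe to expose
public class AccountResource {

    private final Long id;
    private final String name;
    // note: no password field, even if the Account entity stores one

    public AccountResource(Long id, String name) {
        this.id = id;
        this.name = name;
    }

    public Long getId() { return id; }
    public String getName() { return name; }
}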
Please let me know if that is the answer you were looking for.
This question may be of interest to you as well as this explanation.
You are mostly talking about the same things in each case, Spring just uses annotations so that when it scans them it knows what type of object you are creating or instantiating.
Basically, every request flows through the controller annotated with @Controller. Each method processes the request and (if needed) calls a specific service class to handle the business logic. Those classes are annotated with @Service. The controller can obtain these classes by autowiring them with @Autowired or resourcing them with @Resource.
@Controller
@RequestMapping("/")
public class MyController {

    @Resource
    private MyServiceLayer myServiceLayer;

    @RequestMapping("/retrieveMain")
    @ResponseBody
    public List<String> retrieveMain() {
        List<String> listOfSomething = myServiceLayer.getListOfSomethings();
        return listOfSomething;
    }
}
The service classes then perform their business logic and, if needed, retrieve data from a repository class annotated with @Repository. The service layer obtains these classes the same way, either by autowiring them with @Autowired or resourcing them with @Resource.
@Service
public class MyServiceLayer implements MyServiceLayerService {

    @Resource
    private MyDaoLayer myDaoLayer;

    public List<String> getListOfSomethings() {
        List<String> listOfSomething = myDaoLayer.getListOfSomethings();
        // Business Logic
        return listOfSomething;
    }
}
The repository classes make up the DAO layer; Spring uses the @Repository annotation on them. The entities are the individual class objects that are handled by the @Repository layer.
@Repository
public class MyDaoLayer implements MyDaoLayerInterface {

    @Resource
    private JdbcTemplate jdbcTemplate;

    public List<String> getListOfSomethings() {
        // retrieve the list from the database, process with a row mapper, object mapper, etc.
        // (the query below is purely illustrative)
        List<String> listOfSomething = jdbcTemplate.queryForList("SELECT something FROM somethings", String.class);
        return listOfSomething;
    }
}
@Repository, @Service, and @Controller are specialized forms of @Component. All of these layers could be annotated with @Component; it's just better to call each what it actually is.
So to answer your question, they mean the same thing, they are just annotated to let Spring know what type of object it is instantiating and/or how to include another class.
I guess the architectural pattern you are looking for is Representational State Transfer (REST). You can read up on it here:
http://en.wikipedia.org/wiki/Representational_state_transfer
Within REST the data passed around is referred to as resources:
Identification of resources:
Individual resources are identified in requests, for example using URIs in web-based REST systems. The resources themselves are conceptually separate from the representations that are returned to the client. For example, the server may send data from its database as HTML, XML or JSON, none of which are the server's internal representation, and it is the same one resource regardless.
I'm new to using Spring with Neo4j and I have a question about @Autowired for a GraphRepository.
Most examples I've seen use one @Autowired per Controller, but I have two Nodes I need to modify at the same time when a particular method is called in the controller. Should I simply @Autowire the repositories for both nodes (e.g. per the code below)? Is there any impact if I do this in a second controller with the same repositories as well (so if I had a ChatSessionController which also @Autowired ChatMessageService and ChatSessionService)?
ChatMessageController.java
@Controller
public class ChatMessageController {

    @Autowired
    private ChatMessageService chatMessageService;

    @Autowired
    private ChatSessionService chatSessionService;

    @RequestMapping(value = "/message/add/{chatSessionId}", method = RequestMethod.POST)
    @ResponseBody
    @Transactional
    public void addMessage(@RequestBody ChatMessagePack chatMessagePack,
                           @PathVariable("chatSessionId") Long chatSessionId) {
        ChatMessage chatMessage = new ChatMessage(chatMessagePack);
        chatMessageService.save(chatMessage);
        // TODO: Make some modifications to the ChatSession as well
    }
}
Any help would be much appreciated! I've been googling and looking through Stackoverflow to understand this better but I haven't found anything yet. Any pointers in the right directions would be great.
Another underlying question is: should I be (and can I be) modifying other nodes in a GraphRepository that handles a particular node? E.g., should my ChatMessage repository be able to modify ChatSession nodes?
Thanks!
I'm not convinced that this is a SO question, it's not really a Neo4J or Spring question either, it is more about the architecture of your application. However assuming that you understand the negatives of class fan out, and how to use the #Transactional annotation to achieve what you want then the answer to your question is that it is just fine to have many Repositories (Neo4J or otherwise, autowired or otherwise) in your class and in as many classes as you want.
Neo4j transactions default to isolation level READ_COMMITTED, and if you need anything else you have to add the guards/locks yourself. Nested transactions are considered to be part of the same transaction. The Spring @Transactional annotation relies on proxies, which you should be aware of, as they have implications when calling methods from within the same class.
I would go through this tutorial over at Spring Data and get your head around how real-world vs domain vs node models differ. There will be cases where one repository impacts another node type, but I would think that is often transparent to you (i.e. adding relationships). You can do what you like in each repository using the @Query annotation (their generic nature is largely confined to the built-in CRUD operations and queries derived from finder-method names; see the documentation), and some queries have side effects, but largely you should avoid that.
As you start adding multiple repositories to multiple controllers, I think your code will begin to smell bad, and you should consider encapsulating this business logic on its own somewhere, neatly unit tested (see the sketch after the links below). I also wouldn't tie myself to one controller per data object; it would be fine to have a single ChatController with POST /chat/ to create a new session and POST /chat/{sessionId} to add a message. Interesting questions on Programmers:
How accurate is "Business logic should be in a service, not in a model?"
Best Practices for MVC Architecture
MVC Architecture — How many Controllers do I need?
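As a hedged sketch of that encapsulation, using the services from the question: a single ChatService owns the "save message and update session" use case so the controller stays thin. The findById, addMessage and session save methods are assumptions for illustration; only chatMessageService.save(...) appears in the question.

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class ChatService {

    private final ChatMessageService chatMessageService;
    private final ChatSessionService chatSessionService;

    public ChatService(ChatMessageService chatMessageService,
                       ChatSessionService chatSessionService) {
        this.chatMessageService = chatMessageService;
        this.chatSessionService = chatSessionService;
    }

    @Transactional // both writes succeed or fail together
    public void addMessage(Long chatSessionId, ChatMessagePack chatMessagePack) {
        ChatMessage chatMessage = new ChatMessage(chatMessagePack);
        chatMessageService.save(chatMessage);

        ChatSession chatSession = chatSessionService.findById(chatSessionId); // assumed lookup
        chatSession.addMessage(chatMessage);                                  // assumed domain method
        chatSessionService.save(chatSession);                                 // assumed save
    }
}

The controller method then only parses the request and delegates to chatService.addMessage(...), which is easy to unit test in isolation.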
What is the difference between using the @Autowired annotation and the new keyword?
Let's say, within a class, what would be the difference between:
@Autowired private UserDao userdao;
and
private UserDao userDao = new UserDaoImpl();
Is there an impact on the performance?
Besides low coupling, that others have already touched on, a major difference is that with the new approach, you get a new object every time, whether you want to or not. Even if UserDaoImpl is reusable, stateless and threadsafe (which DAO classes should strive to be) you will still create new instances of them at every place they are needed.
This may not be a huge issue at first, but consider as the object graph grows - the UserDaoImpl perhaps needs a Hibernate session, which needs a DataSource, which needs a JDBC connection - it quickly becomes a lot of objects that has to be created and initialized over and over again. When you rely on new in your code, you are also spreading out initialization logic over a whole bunch of places. Like in this example, you need to have code in your UserDaoImpl to open a JDBC connection with the proper parameters, but all other DAO classes have to do the same thing.
And here is where Inversion of Control (IoC) comes in. It aims to address just these things by decoupling object creation and life-cycle from object binding and usage. The most basic application of IoC is a simple Factory class; a more sophisticated approach is Dependency Injection, such as Spring's.
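To make the factory remark concrete, a minimal hand-rolled sketch using the UserDao/UserDaoImpl names from the question (the single-shared-instance policy and the construction details are assumptions for illustration):

// Callers depend on UserDao and this factory, never on UserDaoImpl directly,
// and all construction/initialization logic lives in one place.
public final class DaoFactory {

    private static final UserDao USER_DAO = createUserDao();

    private DaoFactory() {
    }

    public static UserDao userDao() {
        return USER_DAO; // one shared, stateless instance, created once
    }

    private static UserDao createUserDao() {
        // connection/DataSource setup would be centralized here
        return new UserDaoImpl();
    }
}

Usage is then UserDao dao = DaoFactory.userDao(); a DI container such as Spring generalizes exactly this idea.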
Is there an impact on the performance?
Yes, but it's most likely not going to be very significant. Using Spring's dependency injection costs a little more in startup time, as the container has to be initialized and all managed objects set up. However, since you won't be creating new instances of your managed objects (assuming that is how you design them), you'll gain some runtime performance from less GC load and less object creation.
Your big gain however is going to be in maintainability and robustness of your application.
The above comments are correct, but here I am going to add an example that may help you.
Look, we have 100 classes that use the UserDao class, and in each of them you get the DAO instance like this:
private UserDao userDao = new UserDaoImpl();
in 100 places.
After a few weeks the requirement changes and we need to use LdapUserDao or something similar:
class LdapUserDao implements UserDao {
}
which implements UserDao. What do you do? How do you handle all those new keywords? You have tied the implementation class to every place it is used.
If you use @Autowired in those 100 places, then you manage it from one place (if you use XML-based config, just go to the XML and switch to another UserDao; if annotation-based, go to that component and move the Spring annotation to the proper one), and the change shows up in all 100 places. This is what is called the DI pattern, and it is the whole purpose of DI.
Another important thing is that with Spring annotations you can even manage object scope, whereas with the new keyword there is no way (unless you hand-roll a clumsy singleton or something like that). I am sure that with the new keyword you cannot have:
Prototype
Request
Singleton
scoped objects (see the sketch below).
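A hedged sketch of declaring scope with annotations (ReportBuilder is a hypothetical stateful bean; with new you would have to hand-roll this lifecycle logic yourself):

import org.springframework.context.annotation.Scope;
import org.springframework.stereotype.Component;

@Component
@Scope("prototype") // a fresh instance every time the bean is requested
public class ReportBuilder {
    // mutable, per-use state would live here
}

// In a web application context the same mechanism gives you
// @Scope("request") or @Scope("session") scoped beans.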
In terms of performance, it is quite difficult to say without seeing your code, but at worst they will have roughly equal performance; otherwise Spring's way is faster, because as far as I know a DAO class should be a singleton, not a prototype, so with Spring you will have one userDao object for the whole project, while with new it depends on where you lose the reference to the DAO object. But leave performance aside: do not favour performance over good design. Always build it properly (not just quickly) with a good design first, then look at its performance.
You will get NullPointerException if you use
private UserDao userDao = new UserDaoImpl();
because it doesn't set the session reference that was declared in the *context.xml.
At runtime, you'll get a UserDaoImpl instance assigned to the userDao attribute in both cases.
But the first approach implies the usage of the Dependency Injection pattern, which has some advantages over the second approach:
Your class now depends on an interface, so it can work with any implementation of UserDao (maybe one that uses an RDBMS, another that uses XML files as a repository).
As you depend on an interface now, unit testing and mocking are pretty easy and straightforward.
Low coupling is a nice attribute to have in your code.
So you should prefer the first approach over the second, especially if you're already using Spring (that's the idea behind an Inversion-of-Control container).
I find myself struggling with the fuzz around the concept of string-based 'Service Locators'.
For starters, IoC is great, and programming to interfaces is the way to go. But I fail to see where the big benefit lies in the yellow-pages pattern used here, apart from compilation-less reconfigurability.
Application code will use a (Spring) container to retrieve objects from. Now that's nice: since the code needs to know only the needed interface (to cast to), the Spring container interface, and the name of the needed object, a lot of coupling is removed.
public void foo() {
    ((MyInterface) locator.get("the/object/I/need")).callMe();
}
Where, of course, the locator can be populated with a gazillion objects of all kinds of Object derivatives.
But I'm a bit puzzled by the fact that the 'flexibility' of retrieving an object by name actually hides the dependency in a type-unsafe, lookup-unsafe manner: where my compiler used to check the presence of a requested object member, and its type, all of that is now postponed to the runtime phase.
The simplest, functionally alright pattern I could think of is a giant, application-wide struct-like object:
public class YellowPages {
    public MyInterface the_object_i_need;
    public YourInterface the_object_you_need;
    ....
}

// context population (no xml... is that bad?)
YellowPages locator = new YellowPages();
locator.the_object_i_need = new MyImpl("xyx", true);
locator.the_object_you_need = new YourImpl(1, 2, 3);

public void foo() {
    locator.the_object_i_need.callMe(); // type-safe, lookup-safe
}
Would there be a way/pattern/framework to ask the compiler to resolve the requested object and check whether its type is OK? Are there DI frameworks that also do that?
Many thanks
What you are describing is an anti-pattern, plain and simple. Dependency injection (done via the constructor if possible) is the preferred way to do Inversion of Control. Older articles on IOC talk about service location as a viable way to implement the pattern, but you'd be hard pressed to find anyone advocating this in more recent writings.
You hit the nail on the head with this statement:
But I'm a bit puzzled by the fact that the 'flexibility' of retrieving
an object by name actually hides the dependency in a type-unsafe,
lookup-unsafe manner:
Most modern DI frameworks can get around the type-unsafe part, but they do hide the dependency by making it implicit instead of explicit.
Good article on this topic:
Service Locator is an Anti-Pattern
Spring is the only DI framework that I've used, so I can't speak for others, but even though Spring gives you the ability to request an object by its name in your code, you don't have to get your Spring beans by name; in fact, if you do, you're not really capitalizing on the "I" (inversion) in IoC.
The whole principle behind Spring/DI is that your objects shouldn't be asking for a particular object; they should simply have whatever objects they need handed to them by the IoC container. The distinction is subtle, but it's there. The former approach would resemble the code that you pasted:
public void foo() {
    ((MyInterface) locator.get("the/object/I/need")).callMe();
}
The latter approach, by contrast, doesn't depend on any service locator, nor does it depend on the "name" of the object that it wants:
private ObjectINeed objectINeed;

public void setObjectINeed(ObjectINeed objectINeed) {
    this.objectINeed = objectINeed;
}

public void foo() {
    objectINeed.callMe();
}
It's the IoC container that calls the setObjectINeed() method. DI definitely tackles the same problem as the service locator pattern, but it goes the extra step of getting rid of the dependency on your service locator class.
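A hedged sketch of how that wiring might look with annotation configuration (the @Component stereotypes and the ObjectINeed interface/implementation below are assumptions; the same result could be achieved with XML bean definitions):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

interface ObjectINeed {
    void callMe();
}

@Component
class ObjectINeedImpl implements ObjectINeed {
    @Override
    public void callMe() {
        // real work would go here
    }
}

@Component
class Caller {

    private ObjectINeed objectINeed;

    @Autowired // the container invokes this setter; Caller never looks anything up
    public void setObjectINeed(ObjectINeed objectINeed) {
        this.objectINeed = objectINeed;
    }

    public void foo() {
        objectINeed.callMe(); // no locator, no lookup by name
    }
}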