Inversion of Control, Spring Framework - system of global instances

Is inversion of control essentially just retrieving a set of already instantiated objects? In theory. I guess the more granular details as implemented by IoC frameworks like Spring add a lot more functionality, but in theory it seems like IoC containers operate as a collection of instantiated beans (in the Java world) that you then get access to. Almost like you would with a collection of Singleton objects?

It's partly getting hold of singletons in practice, yes. Some beans will be instantiated multiple times, whenever they're needed (depending on the configuration), but often you can make do with single instances - particularly if they're stateless once configured. I like the idea of data flowing "through" an application's plumbing after it's been properly hooked up.
The benefit is that the "singleton-ness" is only present in the configuration, not in the code, which makes the system more testable and flexible. The difference in how you view (and expose) the dependencies within your app is huge.

Although the answer has already been accepted, I will elaborate a little more:
Initially Spring was mostly about singleton management. With the introduction of custom scopes came the web-specific scopes and the ability to create your own custom scopes. Leaning on AOP features, this also allows you to "stay singleton" for as long as possible, because it uses a technique known as scope proxying. This lets you introduce a scoped object right in the middle of a chain of singletons - a feature you would otherwise often reach for ThreadLocals to get.
So I'd say it's about tight control of instance creation, to make sure everything is done only the required number of times, and preferably that only the construction necessary for each request is done. Singleton management was the old days.
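As a concrete illustration of that scope proxying, here is a minimal sketch with annotation configuration; the RequestContext and AuditService classes are made up for the example. The request-scoped bean is injected into a singleton through a proxy that resolves to the current request's instance on every call.

// RequestContext.java
import org.springframework.context.annotation.Scope;
import org.springframework.context.annotation.ScopedProxyMode;
import org.springframework.stereotype.Component;
import org.springframework.web.context.WebApplicationContext;

// One instance per HTTP request; the proxyMode makes Spring inject a proxy,
// so the bean can sit in the middle of a chain of singletons.
@Component
@Scope(value = WebApplicationContext.SCOPE_REQUEST,
       proxyMode = ScopedProxyMode.TARGET_CLASS)
public class RequestContext {
    private String userId;
    public String getUserId() { return userId; }
    public void setUserId(String userId) { this.userId = userId; }
}

// AuditService.java
import org.springframework.stereotype.Component;

// A plain singleton; the injected field is really the scoped proxy.
@Component
public class AuditService {
    private final RequestContext requestContext;

    public AuditService(RequestContext requestContext) { // single-constructor injection
        this.requestContext = requestContext;
    }

    public void audit(String action) {
        System.out.println(requestContext.getUserId() + " did " + action);
    }
}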

Related

Why does Spring use IoC and DI?

I'm new to Spring 5 and my question is: why does Spring use DI and IoC? I mean, why do we have to declare the beans in XML (legacy) and then fetch them where we need them? Why don't we just use a method that gives us the object, rather than this complex mechanism that happens in the Spring container?
And another question is: doesn't reading XML slow down the program? Because we are reading from the hard disk anyway.
Note: It is true that we can use annotations, but for now I want to ask about reading from XML.
The Spring IoC Container is the core of the Spring Framework. It creates the objects, configures and assembles their dependencies, and manages their entire life cycle. The container uses Dependency Injection (DI) to manage the components that make up the application. It gets the information about the objects from a configuration file (XML), Java code, or Java annotations on plain Java (POJO) classes. These objects are called beans. Since the control of the Java objects and their lifecycle is not done by the developers, it is called Inversion of Control.
As for your first part of the question.
why does spring use DI
To allow the developer to keep his code loose and not entangle classes; it keeps your code clean.
In object oriented design, the amount of coupling refers to how much the design of one class depends on the design of another class. In other words, how often do changes in class A force related changes in class B? Tight coupling means the two classes often change together, loose coupling means they are mostly independent. In general, loose coupling is recommended because it's easier to test and maintain.
You may find this paper by Martin Fowler (PDF) helpful.
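As a minimal sketch of that coupling difference in plain Java (the ReportService and formatter classes are made up for illustration): the loosely coupled version depends only on an interface, so a different implementation or a mock can be passed in without changing the class.

// Tightly coupled: the service constructs a concrete class itself, so any
// change in how PdfFormatter is built ripples into this class.
class TightReportService {
    private final PdfFormatter formatter = new PdfFormatter();
    String render(String data) { return formatter.format(data); }
}

// Loosely coupled: the dependency is an interface, injected from outside.
interface Formatter {
    String format(String data);
}

class PdfFormatter implements Formatter {
    public String format(String data) { return "PDF:" + data; }
}

class ReportService {
    private final Formatter formatter;

    ReportService(Formatter formatter) { // constructor injection
        this.formatter = formatter;
    }

    String render(String data) { return formatter.format(data); }
}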
I mean why do we have to write the beans in an XML (legacy) and then create it where we need it
Note: we write the bean in XML and it is created when the application starts, when Spring looks at the bean definition. Technically you never create a bean yourself; you only fetch an already created bean from the Spring container (IoC) that Spring built for you when you started your application.
We are writing the bean blueprint, or just "bean", so that it can be constructed and placed in the Spring container when the application starts; after that it is at our disposal and we can fetch it using the getBean method.
The whole point of the "why" is that by default all beans are scoped as singletons. That means that when you fetch a bean and do whatever you want with it, you do not have to worry about memory or anything else; Spring takes care of the beans for you if they are scoped as singletons.
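As a minimal sketch of "writing the blueprint in XML and fetching the bean", assuming a hypothetical GreetingService class and a beans.xml file on the classpath:

<!-- beans.xml: the blueprint; Spring builds this singleton at startup -->
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans.xsd">
    <bean id="greetingService" class="com.example.GreetingService"/>
</beans>

// GreetingService.java - the hypothetical bean class
public class GreetingService {
    public String greet() { return "hello"; }
}

// Main.java
import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;

public class Main {
    public static void main(String[] args) {
        // The container reads the XML once at startup and creates the beans.
        ApplicationContext ctx = new ClassPathXmlApplicationContext("beans.xml");
        // We never call new GreetingService(); we fetch the managed singleton.
        GreetingService service = ctx.getBean("greetingService", GreetingService.class);
        System.out.println(service.greet());
    }
}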
Second question:
And another question is, does not reading XML slow down the program? Because we are reading from the hard disk anyway.
There is no difference in performance between annotations and XML; it is just a different approach. I am not sure what you mean by "reading from the hard disk", but one way or another you will have to configure your application. Yes, many people prefer to move away from XML, but in my honest opinion the main reason for that is that a bad configuration in XML is a lot harder to find than a bad Java configuration that throws an exception.
XML and application.properties files require a redeployment of the application, while annotation and Java configuration require recompilation of your project, so both of them have "flaws", but that is normal and quite understandable to me.
But in the end I believe it is a matter of preference; I personally know quite a few people who combine annotations with XML configuration and know a lot more about Spring than I do.
So in summary, it is a pain to write beans and their configuration, just as you can write a class with methods without creating an interface for it and the result will be the same, but it will help you in the long run since you do not have to worry about memory or about whether you destroyed that bean or not.
It would be nice if you also read about the following (a short sketch of these follows the list):
1. Lazy initialization of beans
2. Eager initialization of beans
3. Singleton scope of beans
4. Prototype scope of beans
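A minimal sketch of those four ideas with annotation-style configuration; the class names are made up for illustration.

import org.springframework.context.annotation.Lazy;
import org.springframework.context.annotation.Scope;
import org.springframework.stereotype.Component;

// Eager initialization (the default for singletons): created at container startup.
@Component
class EagerSingletonService { }

// Lazy initialization: not created until the first time something asks for it.
@Component
@Lazy
class LazySingletonService { }

// Singleton scope (the default): one shared instance per container.
@Component
@Scope("singleton")
class SharedCounter { }

// Prototype scope: a fresh instance every time the bean is requested.
@Component
@Scope("prototype")
class PerCallReport { }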

What do Inversion of Control and Dependency Injection mean in the Spring Framework? What is the difference? Why in the Spring Framework?

What do Inversion of Control and Dependency Injection mean in the Spring Framework? What is the difference? Why in the Spring Framework?
Can anyone explain?
Also, can you suggest some books for learning the Spring Framework as a beginner?
I shall write down my simple understanding of these two terms (for a quick understanding, just read the examples):
Dependency Injection (DI):
Dependency injection generally means passing a dependent object as a parameter to a method, rather than having the method create the dependent object. What it means in practice is that the method does not have a direct dependency on a particular implementation; any implementation that meets the requirements can be passed as a parameter.
With this, objects declare their dependencies, and Spring makes them available. This leads to loosely coupled application development.
Quick example: when an Employee object is created, its Address object will automatically be created and injected (if Address is defined as a dependency of the Employee object).
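A minimal sketch of that Employee/Address relationship with Spring annotations; the classes are made up for illustration, not taken from the original post.

// Address.java
import org.springframework.stereotype.Component;

@Component
public class Address {
    private String city = "Unknown";
    public String getCity() { return city; }
}

// Employee.java
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

@Component
public class Employee {
    private final Address address;

    // When Spring creates the Employee bean, it creates (or reuses) the
    // Address bean and injects it here.
    @Autowired
    public Employee(Address address) {
        this.address = address;
    }

    public Address getAddress() { return address; }
}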
Inversion of Control (IoC) Container:
This is a common characteristic of frameworks.
The IoC container manages Java objects - from instantiation to destruction - through its BeanFactory. Java components that are instantiated by the IoC container are called beans, and the IoC container manages a bean's scope, lifecycle events, and any AOP features for which it has been configured and coded.
QUICK EXAMPLE: Inversion of Control is about gaining freedom, more flexibility, and less dependency. When you are using a desktop computer, you are enslaved (or, say, controlled): you have to sit in front of a screen and look at it, use the keyboard to type and the mouse to navigate. Badly written software can enslave you even more. If you replace your desktop with a laptop, you have somewhat inverted control: you can easily take it and move around, so now you control where you are with your computer instead of the computer controlling it.
By implementing Inversion of Control, a software/object consumer gets more control/options over the software/objects, instead of being controlled or having fewer options.
Inversion of control as a design guideline serves the following purposes:
There is a decoupling of the execution of a certain task from implementation.
Every module can focus on what it is designed for.
Modules make no assumptions about what other systems do but rely on their contracts.
Replacing modules has no side effect on other modules.
I will keep things abstract here; you can visit the following links for a detailed understanding of the topic.
A good read with example
Detailed explanation

Is the Spring core container the basis for the complete Spring framework?

All websites state that the Spring core container is the basis for the complete Spring framework, i.e., it is used across all the modules like AOP, the JDBC module, the Web module, etc. As per my understanding, the Spring core container's main purpose is to inject dependencies, thus avoiding the need for factory classes and methods. Is that correct?
Second question: when it is said that the Spring core container is the basis for the complete Spring framework (e.g., for Spring AOP), my understanding is that in Spring AOP, too, obtaining objects of classes like ProxyFactoryBean is handled by the core container. Right?
Thirdly, it is stated that the Spring core container avoids the need for programming singletons. How are singleton classes avoided by the core container?
yep
yep
All beans declared in Spring config files are singleton by default. They are instantiated when your application starts.
First off, your understanding of what you get from Spring is about right. So let's get on to your third question, the interesting one.
The key is it's not that you don't have singletons, it's that they're singletons by configuration. This is a vital difference, as it means you can avoid all the complicated singleton enforcement code (the source of frequent problems) and instead just write exceptionally simple programs that focus on the business end of things. This is particularly important when you are writing a program with non-trivial object lifetimes: for example, in a webapp it makes it very easy to manage the lifespan of objects that hold state associated with a user's session, since if the objects have session scope, they'll be “singleton per user session”. That's enormously easier to work with than many of the alternatives.
The fact that Spring can also help out with transactions is just perfect, as transaction handling is distinctly non-trivial, and AOP is the best solution to it in Java that I've seen (other languages have other options open), with Spring supporting a pretty straightforward way of doing it. Try to do it properly without it if you don't believe me. Spring's pretty much wonderful.
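A minimal sketch of that declarative, AOP-driven transaction handling; the TransferService and AccountRepository names are made up, and the configuration is assumed to have transaction management enabled (e.g. @EnableTransactionManagement).

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

// Hypothetical repository interface, only here to make the sketch self-contained.
interface AccountRepository {
    void withdraw(long accountId, long amount);
    void deposit(long accountId, long amount);
}

@Service
class TransferService {
    private final AccountRepository accounts;

    TransferService(AccountRepository accounts) {
        this.accounts = accounts;
    }

    // Spring wraps this method in an AOP proxy: a transaction starts before
    // the method runs, commits on success, and rolls back on a RuntimeException,
    // with no transaction plumbing in the business code.
    @Transactional
    public void transfer(long fromId, long toId, long amount) {
        accounts.withdraw(fromId, amount);
        accounts.deposit(toId, amount);
    }
}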

Example use-cases for using Dependency Injection with the Play Framework

I am a big fan of Dependency Injection and the Play Framework, but am having trouble seeing how the two could be exploited together.
There are modules for Spring and Guice, but the way that Play works makes it hard for me to see how DI could be beneficial beyond some quite simple cases.
A good example of this is that Play expects JPA work to be done by static methods associated with the entity in question:
import javax.persistence.Entity;
import play.db.jpa.JPA;
import play.db.jpa.Model;

@Entity
public class Person extends Model {

    public static void delete(long id) {
        // Look the entity up via the current EntityManager and remove it
        Person person = JPA.em().find(Person.class, id);
        JPA.em().remove(person);
    }
    // etc.
}
So there is no need for a PersonManager to be injected into controllers in the way it might be for a Spring J2EE application. Instead a controller just calls Person.delete(x).
Obviously, DI is beneficial when there are interfaces with external systems, as the concrete implementation can be mocked for testing etc., but I don't see much benefit for a self-contained Play application.
Does anyone have any good examples? Does anyone use it to inject a Manager-style class into Controllers so that a number of operations can be done within the same transaction, for example?
I believe from this sentence you wrote:
"Does anyone have any good examples? Does anyone use it to inject a Manager-style class into Controllers so that a number of operations can be done within the same transaction, for example?"
that before answering the DI question I should note something: transactions are managed automatically by Play. If you check the model documentation you will see that a transaction is automatically created at the beginning of a request, and committed at the end. You can roll it back via JPA or it will be rolled back if an exception is raised.
I mention this because from the wording of your sentence I'm not sure if you are aware of this.
Now, on DI itself, in my (not-so-extensive) experience with DI, I've seen it used mainly to:
Load the ORM (Hibernate) factory/manager
Load Service classes/DAOs into another class to work with them
Sure, there are more scenarios, but these probably cover most of the real usage. Now:
The first one is irrelevant to Play as you get access to your JPA object and transaction automatically
The second one is quite irrelevant too, as you mainly work with static methods in controllers. You may have some helper classes that need to be instantiated, and some may even belong to a hierarchy (common interface), so DI would be beneficial. But you could just as well create your own factory class (see the sketch below) and get rid of the DI jars.
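A minimal sketch of that hand-rolled factory alternative, with made-up names: one interface, a couple of implementations, and a small factory instead of a DI container.

// Hypothetical helper hierarchy behind a common interface.
interface Exporter {
    void export(String data);
}

class CsvExporter implements Exporter {
    public void export(String data) { /* write CSV */ }
}

class XmlExporter implements Exporter {
    public void export(String data) { /* write XML */ }
}

// A tiny factory replaces the DI container for this narrow use case.
final class Exporters {
    private Exporters() { }

    static Exporter forFormat(String format) {
        return "csv".equalsIgnoreCase(format) ? new CsvExporter() : new XmlExporter();
    }
}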
There is another matter to consider here: I'm not so sure about Guice, but Spring is not only DI; it also provides a lot of extra functionality which depends on the DI module. So maybe you don't want to use DI in Play, but you want to take advantage of the Spring tools, and they will use DI, albeit indirectly (via XML configuration).
The problem, in my humble opinion, with Play!'s static approach is that it makes testing harder. Once you approach the HTTP-versus-object-orientation problem with static members and objects that carry the HTTP message data (request and response), you trade away the ability to keep your objects loosely coupled from the rest of your project's classes in exchange for not having to create new instances for each request.
A good example of a different design is servlets: they also extend a base class, but they approach the problem with a single instance per servlet (by default; there are configurations that enable more instances).
I believe a mix of the two approaches might be better: having a singleton of each controller would give the same characteristics as a fully static class while allowing dependency injection of some kinds of objects - though not objects with request or session scope, since then the controller would have to be created for every new request. Moreover, it would improve testability by inverting control of the dependencies, thus allowing arbitrary injection points.
Dependencies would be injected by the container or by a test, probably using mocks for the heavy stuff that most likely would already have been tested before.
In my view, this static model pushes the developer away from testing controllers, because extending FunctionalTest starts the application server, and you pay the price of bootstrapping heavy objects like repositories, services, crawlers, HTTP clients, etc. I don't want to wait for a lot of objects to be bootstrapped just to check that some code was executed in the controller; tests should be quick and clear, so that developers love them as their programming assistant/guide.
DI is not the ultimate solution to use everywhere... Don't use DI just because you have it at hand... In Play, you don't need DI to develop controllers/models etc., but sometimes it can be a nice design: IMO, you could use it if you have a service with a well-known interface that you would like to develop and test outside Play, and even test your Play project against a dummy service so that it does NOT depend on the full service implementation. In that case DI can be interesting: you plug the service loosely into Play. In fact, this is the original use case for DI, AFAIK...
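A minimal sketch of that idea with Guice; the MailService names are made up, and the module would be registered however the Play Guice integration you use expects.

import com.google.inject.AbstractModule;

// Well-known interface, developed and tested outside Play.
interface MailService {
    void send(String to, String body);
}

// Dummy implementation used while testing the Play project.
class DummyMailService implements MailService {
    public void send(String to, String body) {
        System.out.println("pretending to mail " + to);
    }
}

// Guice module: swap in the real implementation later without touching callers.
class TestModule extends AbstractModule {
    @Override
    protected void configure() {
        bind(MailService.class).to(DummyMailService.class);
    }
}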
I just wrote a blog post about setting up a Play Framework application with Google Guice. http://geeks.aretotally.in/dependency-injection-with-play-framework-and-google-guice
I see some benefits, especially when a component of your application requires a different behavior based on a certain context or something like that. But I definitely believe people should be selective about what goes into a DI context.
It shows again that you should only use dependency injection if you really get a benefit from it. If you have complex services it's useful, but in many cases it's not. Read the chapter about models in the Play documentation.
So, to give you an example of where you can use DI with Play: perhaps you have to run a complex calculation, or you create a PDF with a report engine. There I think DI can be useful, especially for testing, and there the guice module and spring module can help you.
Niels
As of a year and some change later, Play 2.1 now has support for dependency injection in controllers. Here's their demo project using Spring 3, which lays it out pretty clearly.
Edit: here's another example using Guice and Scala, if that's your poison.

AOP vs. IoC for caching

Do you prefer the clean approach of an AOP cache layer on top of your methods (any DAO or service method) OR do you prefer the total control approach of injecting a cache instance wherever you need?
I understand AOP gives you loose coupling and separation of concerns, but not so much flexibility, unless you are coding the method interceptors yourself.
I tend to like the IoC approach, because a cache instance can be easily mocked if you need to and with an instance of the cache you have total control and flexibility.
It is like logging. Who actually uses AOP for application-wide logging?
This is really a question about whether we should use "real" AOP at all (by "real" I mean quantification and obliviousness, so that you can enable aspects globally and 100% transparently to the system). My opinion is to avoid such 100% transparent solutions whenever you can. If you're still at the development stage, it is better to design your system in a way that doesn't force you into tricks like AOP weaving at the intermediate-code level.
On the other hand, it is quite tedious to write the caching concern for every component/interface that needs it, no matter how you do it. The most obvious way that comes to mind is to have a caching Decorator class for each cached class - the same thing done many times.
So the path I would try to follow is a limited form of AOP done with an IoC extension based on dynamic proxying. When fetching an object from the IoC container, the container can check its configuration to see whether caching should be applied for the given type and, if so, create an interface-based dynamic proxy that adds caching. This solution does not force you to write a lot of similar code and is less confusing than "real" AOP because it stays visible in the source code. There are implementations like this, but you haven't specified a language, so I can't offer any.
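A minimal sketch of such an interface-based caching proxy in plain Java; the PriceService interface and the cache-key scheme are made up, and a real container extension would apply this wrapper when resolving the bean rather than via a manual call.

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.Arrays;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

interface PriceService {
    double priceFor(String sku);
}

final class CachingProxy {

    // Wraps any interface-based target in a dynamic proxy that memoizes results.
    @SuppressWarnings("unchecked")
    static <T> T cache(Class<T> iface, T target) {
        Map<String, Object> cache = new ConcurrentHashMap<>();
        InvocationHandler handler = (proxy, method, args) -> {
            String key = method.getName() + Arrays.deepToString(args);
            return cache.computeIfAbsent(key, k -> {
                try {
                    return method.invoke(target, args); // first call hits the real object
                } catch (Exception e) {
                    throw new RuntimeException(e);
                }
            });
        };
        return (T) Proxy.newProxyInstance(
                iface.getClassLoader(), new Class<?>[] { iface }, handler);
    }
}

// Usage: PriceService cached = CachingProxy.cache(PriceService.class, realService);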

Resources