Regarding Dependency Injection [closed] - spring

interface Car {
    public void tyres();
}

class BMW implements Car {
    public void tyres() {
        System.out.println("BMW");
    }
}

class HMW implements Car {
    public void tyres() {
        System.out.println("HMW");
    }
}

public class DependencyInjection {
    private Car car;

    public void draw() {
        this.car.tyres();
    }

    public void setCar(Car car) {
        this.car = car;
    }

    public static void tyres1(Car c) {
        c.tyres();
    }

    public static void main(String[] args) {
        Car car = new BMW();
        tyres1(car);          // point 1
        DependencyInjection dep = new DependencyInjection();
        dep.setCar(car);      // point 2
        dep.draw();
    }
}
I would like some clarification on what advantage we gain by using dependency injection at point 1 and point 2. Please explain in detail, as I am new to Spring.

This is not a Spring-specific design principle, and it is hard to see the benefit in the snippet you've provided, which has no notably separated pieces, but it becomes more valuable as systems grow.
By removing the reference to a concrete class you eliminate "type coupling", which means the depending class can remain "closed": it does not have to change when the implementation of its collaborator changes, and client code can adapt as needed without the depending class being aware of the implementation. This also keeps each method body focused on its own role and specific job rather than getting tangled up with another class's implementation details (which helps in clearly defining what should be considered part of an API).
You should also read about the SOLID design principles:
http://en.wikipedia.org/wiki/SOLID_(object-oriented_design)
The most immediate benefit, in my experience, is that it also allows for far more complete, isolated unit tests, which matters a great deal if testing is part of your development process (recommended, and increasingly so as systems grow).
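To make the testability point concrete, here is a minimal sketch (the FakeCar class and the test below are illustrative, not part of your code): because DependencyInjection only depends on the Car interface, a test can hand it any implementation it likes, without Spring and without touching the class itself.

// A hand-rolled fake used only by the test; no framework or mocking library required.
class FakeCar implements Car {
    boolean tyresCalled = false;
    public void tyres() {
        tyresCalled = true;
    }
}

public class DependencyInjectionTest {
    public static void main(String[] args) {
        FakeCar fake = new FakeCar();

        DependencyInjection dep = new DependencyInjection();
        dep.setCar(fake);   // point 2 again, but with a fake instead of BMW or HMW
        dep.draw();

        // The class under test never had to change to be exercised in isolation.
        System.out.println("tyres() was called: " + fake.tyresCalled);
    }
}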

Actually, the part that Dependency Injection would do for you is currently being done by your own code:
Car car = new BMW();
The Dependency Injection feature lets you express that dependency as configuration, and the framework does the rest: it injects the object into the reference for you.
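As a rough sketch of what that looks like in Spring (the AppConfig and Main classes below are illustrative, assuming the spring-context dependency and the classes above): instead of calling new BMW() yourself at point 1, you declare the wiring once and let the container perform the setter injection of point 2.

import org.springframework.context.annotation.AnnotationConfigApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
class AppConfig {
    @Bean
    public Car car() {
        return new BMW();   // change this to new HMW() and nothing else needs to change
    }

    @Bean
    public DependencyInjection dependencyInjection(Car car) {
        DependencyInjection dep = new DependencyInjection();
        dep.setCar(car);    // the container performs the setter injection for you
        return dep;
    }
}

public class Main {
    public static void main(String[] args) {
        AnnotationConfigApplicationContext ctx = new AnnotationConfigApplicationContext(AppConfig.class);
        ctx.getBean(DependencyInjection.class).draw();   // prints "BMW"
        ctx.close();
    }
}

The benefit is that the choice of implementation lives in one configuration spot rather than being scattered through the code.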

Related

Inheriting default methods with the same name in the class without any compilation error

How can a class implement two interfaces with the same default method in Java 8?
I was not able to grasp the concept behind the same default method from different interfaces being inherited in the subclass. Please explain the issue.
interface House {
    default String getAddress() {
        return "101 Main Str";
    }
}

interface Bungalow extends House {
    default String getAddress() {
        return "101 Smart Str";
    }
}

class MyHouse implements Bungalow, House {
}

public class TestClass {
    public static void main(String[] args) {
        House ci = new MyHouse();          //1
        System.out.println(ci.getAddress()); //2
    }
}
In the above code the default method getAddress() is present in interface House. Another method with the same name is declared as default in the extending interface Bungalow.
How can class MyHouse implement both interfaces without any compilation error? (It doesn't know which method has preference, so in that case implementing both should fail.)
If I call new MyHouse().getAddress(); it gives a compile error, but it should give a compilation error even without calling the method from the MyHouse class.
It seems that the answer is here, where there is a different example, but it sort of makes sense and is really close to yours.
Ask me the exact same thing in half a year and I'll say it will fail at compile time, then point me to this answer so I can read the JLS again. I guess this is how they decided to implement it. Without thinking about it too much, I personally (and I may be wrong) find this at least counter-intuitive...
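For what it's worth, my understanding of the rule (the extra interfaces below are illustrative, not from the question): a default method in a more specific sub-interface overrides the one it extends, so MyHouse unambiguously inherits Bungalow's getAddress() and compiles fine. A class is only forced to resolve the clash itself when the two defaults come from unrelated interfaces:

interface Cabin {
    default String getAddress() {
        return "1 Forest Rd";
    }
}

interface Houseboat {
    default String getAddress() {
        return "Pier 7";
    }
}

// Cabin and Houseboat are unrelated, so neither default is "more specific".
// Without the explicit override below this class would NOT compile
// ("class FloatingCabin inherits unrelated defaults for getAddress()").
class FloatingCabin implements Cabin, Houseboat {
    @Override
    public String getAddress() {
        return Cabin.super.getAddress();   // pick one (or combine) to resolve the clash
    }
}

That is why the question's code compiles and prints "101 Smart Str": Bungalow extends House, so its default is the most specific one and there is no ambiguity to report.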

When calling abstract method from concrete class throwing NullPointerException [duplicate]

I am testing a method in a class which calls a method of an abstract class. E.g.:
abstract class Abstract {
    public ReturnObject abstractMethod(SomeObject value) {
        // do something
        return returnObject;
    }
}

class Concreate extends Abstract {
    public ReturnObject concreteMethod(SomeObject value) {
        // do something
        ReturnObject returnObject = abstractMethod(value);
        return returnObject;
    }
}
My unit test is:
class ConcreateTest {
    @InjectMocks
    private Concreate conctrete;

    @Mock
    private Concreate conctrete2;

    @Test
    public void test_method() {
        when(conctrete2.abstractMethod(value)).thenReturn(returnObject);
        conctrete.concreteMethod(value);
    }
}
Run this way, it throws a NullPointerException.
From what you are showing I can see very little of your code, so it is hard to tell for sure what's going on, but one thing I do see is that you are mocking one Concreate and then injecting that mock into another Concreate. Nothing in the code you posted tells me that a Concreate uses another injected Concreate; this is essentially pseudo-code. So I am assuming that your main Concreate is being injected in an application context and that your other Concreate is being injected into the first one.
You need @Named to resolve this ambiguity, or, more generally speaking, you must give your beans individual names, even if they are mocked.
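If the goal is simply to stub abstractMethod() while exercising concreteMethod() on the same object, a partial mock (spy) is usually a simpler route than injecting a second Concreate at all. A rough sketch, assuming Mockito and JUnit 4 are on the test classpath (the fixture objects and test name are illustrative):

import static org.mockito.Mockito.*;

import org.junit.Test;

public class ConcreateSpyTest {
    @Test
    public void concreteMethod_usesStubbedAbstractMethod() {
        SomeObject value = new SomeObject();
        ReturnObject expected = new ReturnObject();

        // Spy the real object so concreteMethod() runs, then stub only the inherited method.
        Concreate concrete = spy(new Concreate());
        doReturn(expected).when(concrete).abstractMethod(value);

        ReturnObject actual = concrete.concreteMethod(value);

        verify(concrete).abstractMethod(value);
        // assert on 'actual' as appropriate for your ReturnObject
    }
}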

TDD: tests too close to method implementations

We have been doing TDD for quite a while and we are facing some concerns when we refactor. As we try to respect the SRP (Single Responsibility Principle) as much as we can, we have created a lot of small composed classes that our classes use to deal with common responsibilities (such as validation, logging, etc.).
Let's take a very simple example:
public class Executioner
{
    public ILogger Logger { get; set; }

    public void DoSomething()
    {
        Logger.DoLog("Starting doing something");
        Thread.Sleep(1000);
        Logger.DoLog("Something was done!");
    }
}

public interface ILogger
{
    void DoLog(string message);
}
As we use a mocking framework, the kind of test that we would write for this situation would be something like:
[TestClass]
public class ExecutionerTests
{
    [TestMethod]
    public void Test_DoSomething()
    {
        var objectUnderTests = new Executioner();

        #region Mock setup
        var loggerMock = new Mock<ILogger>(MockBehavior.Strict);
        loggerMock.Setup(l => l.DoLog("Starting doing something"));
        loggerMock.Setup(l => l.DoLog("Something was done!"));
        objectUnderTests.Logger = loggerMock.Object;
        #endregion

        objectUnderTests.DoSomething();
        loggerMock.VerifyAll();
    }
}
As you can see, the test is clearly aware of the implementation of the method we are testing. I have to admit that this example is too simple, but we sometimes have compositions covering responsibilities that don't add any value to a test.
Let's add some complexity to this example:
public interface ILogger
{
    void DoLog(LoggingMessage message);
}

public interface IMapper
{
    TTarget DoMap<TSource, TTarget>(TSource source);
}

public class LoggingMessage
{
    public string Message { get; set; }
}

public class Executioner
{
    public ILogger Logger { get; set; }
    public IMapper Mapper { get; set; }

    public void DoSomething()
    {
        DoLog("Starting doing something");
        Thread.Sleep(1000);
        DoLog("Something was done!");
    }

    private void DoLog(string message)
    {
        var startMessage = Mapper.DoMap<string, LoggingMessage>(message);
        Logger.DoLog(startMessage);
    }
}
OK, this is just an example. In a real case I would include the Mapper stuff within the implementation of my Logger and keep a DoLog(string message) method in my interface, but it's an example to demonstrate my concerns.
The corresponding test leads us to:
[TestClass]
public class ExecutionerTests
{
    [TestMethod]
    public void Test_DoSomething()
    {
        var objectUnderTests = new Executioner();

        #region Mock setup
        var loggerMock = new Mock<ILogger>(MockBehavior.Strict);
        var mapperMock = new Mock<IMapper>(MockBehavior.Strict);
        var mockedMessage = new LoggingMessage();
        mapperMock.Setup(m => m.DoMap<string, LoggingMessage>("Starting doing something")).Returns(mockedMessage);
        mapperMock.Setup(m => m.DoMap<string, LoggingMessage>("Something was done!")).Returns(mockedMessage);
        loggerMock.Setup(l => l.DoLog(mockedMessage));
        objectUnderTests.Logger = loggerMock.Object;
        objectUnderTests.Mapper = mapperMock.Object;
        #endregion

        objectUnderTests.DoSomething();
        mapperMock.VerifyAll();
        loggerMock.Verify(l => l.DoLog(mockedMessage), Times.Exactly(2));
        loggerMock.VerifyAll();
    }
}
Wow... imagine that we switched to another way of translating our entities; I would have to change every test that touches a method which uses the mapper service.
Anyway, we really feel some pain when we do major refactoring, as we need to change a bunch of tests.
I'd love to discuss this kind of problem. Am I missing something? Are we testing too much?
Tips:
Specify exactly what should happen, and no more.
In your fabricated example:
Test that E.DoSomething asks Mapper to map string1 and string2 (stub out Logger - it's irrelevant here).
Test that E.DoSomething tells Logger to log the mapped strings (stub/fake out Mapper to return message1 and message2).
Tell, don't ask.
As you've hinted yourself, if this were a real example I'd expect Logger to handle the translation internally, via a hashtable or by using a Mapper. Then I'd have a single simple test for E.DoSomething:
Test that E.DoSomething tells Logger to log string1 and string2.
The tests for Logger would ensure that L.Log asks the mapper to translate s1 and logs the result.
"Ask" methods complicate tests (ask Mapper to translate s1 and s2, then pass the return values m1 and m2 to Logger) by coupling the collaborators.
Ignore irrelevant objects.
The trade-off for isolation via testing interactions is that the tests are aware of the implementation.
The trick is to minimize this (by not creating interfaces or specifying expectations willy-nilly). DRY applies to expectations as well: minimize the number of places where an expectation is specified... ideally one.
Minimize coupling.
If there are lots of collaborators, coupling is high, which is a bad thing. You may need to rework your design to see which collaborators don't belong at the same level of abstraction.
Your difficulties come from testing behavior rather than state. If you rewrote the tests so that they look at what ends up in the log, rather than verifying that the call to the log is made, your tests wouldn't break due to changes in the implementation.
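To illustrate the state-based style (sketched in Java rather than C#, since the idea is language-agnostic; every name below is illustrative): the test hands the subject an in-memory fake and asserts on what ended up in the log afterwards, so internal refactorings such as introducing a mapper don't break it.

import java.util.Arrays;
import java.util.ArrayList;
import java.util.List;

interface Logger {
    void doLog(String message);
}

// In-memory fake: records state that the test inspects afterwards.
class RecordingLogger implements Logger {
    final List<String> messages = new ArrayList<>();
    public void doLog(String message) {
        messages.add(message);
    }
}

class Executioner {
    private final Logger logger;
    Executioner(Logger logger) {
        this.logger = logger;
    }
    void doSomething() {
        logger.doLog("Starting doing something");
        logger.doLog("Something was done!");
    }
}

public class ExecutionerStateTest {
    public static void main(String[] args) {
        RecordingLogger log = new RecordingLogger();
        new Executioner(log).doSomething();

        // Assert on the resulting state, not on which calls were scripted.
        boolean ok = log.messages.equals(
                Arrays.asList("Starting doing something", "Something was done!"));
        System.out.println("log contents as expected: " + ok);
    }
}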

Dependency Injection with Interface implemented by multiple classes

Update: Is there a way to achieve what I'm trying to do in an IoC framework other than Windsor? Windsor handles the controllers fine but won't resolve anything else. I'm sure it's my fault, but I'm following the tutorial verbatim and objects are not resolving via constructor injection; they are still null despite doing the registers and resolves. I've since scrapped my DI code and am using manual injection for now because the project is time sensitive. I'm hoping to get DI worked out before the deadline.
I have a solution with multiple classes that all implement the same interface.
As a simple example, the interface:
public interface IMyInterface {
    string GetString();
    int GetInt();
    ...
}
The concrete classes
public class MyClassOne : IMyInterface {
    public string GetString() {
        ....
    }
    public int GetInt() {
        ....
    }
}

public class MyClassTwo : IMyInterface {
    public string GetString() {
        ....
    }
    public int GetInt() {
        ....
    }
}
Now these classes will be injected where needed into layers above them like:
public class HomeController {
    private readonly IMyInterface myInterface;

    public HomeController() {}

    public HomeController(IMyInterface _myInterface) {
        myInterface = _myInterface;
    }
    ...
}

public class OtherController {
    private readonly IMyInterface myInterface;

    public OtherController() {}

    public OtherController(IMyInterface _myInterface) {
        myInterface = _myInterface;
    }
    ...
}
Both controllers are getting injected with the same interface.
When it comes to resolving these interfaces with the proper concrete class in my IoC, how do I differentiate that HomeController needs an instance of MyClassOne and OtherController needs an instance of MyClassTwo?
How do I bind two different concrete classes to the same interface in the IoC? I don't want to create 2 different interfaces as that breaks the DRY rule and doesn't make sense anyway.
In Castle Windsor I would have 2 lines like this:
container.Register(Component.For<IMyInterface>().ImplementedBy<MyClassOne>());
container.Register(Component.For<IMyInterface>().ImplementedBy<MyClassTwo>());
This won't work because I will only ever get a copy of MyClassTwo because it's the last one registered for the interface.
Like I said, I don't see how I can do this without creating a specific interface for each concrete class, and doing that breaks not only DRY but basic OOP as well. How do I achieve this?
Update based on Mark Polsen's answer
Here is my current IoC setup. Where would the .Resolve statements go? I don't see anything in the Windsor docs.
public class Dependency : IDependency {
    private readonly WindsorContainer container = new WindsorContainer();

    private Dependency() {
    }

    public IDependency AddWeb() {
        ...
        container.Register(Component.For<IListItemRepository>().ImplementedBy<ProgramTypeRepository>().Named("ProgramTypeList"));
        container.Register(Component.For<IListItemRepository>().ImplementedBy<IndexTypeRepository>().Named("IndexTypeList"));
        return this;
    }

    public static IDependency Start() {
        return new Dependency();
    }
}
I hope you can use service overrides.
Ex.
container.Register(
    Component.For<IMyService>()
        .ImplementedBy<MyServiceImpl>()
        .Named("myservice.default"),
    Component.For<IMyService>()
        .ImplementedBy<OtherServiceImpl>()
        .Named("myservice.alternative"),
    Component.For<ProductController>()
        .ServiceOverrides(ServiceOverride.ForKey("myService").Eq("myservice.alternative"))
);

public class ProductController
{
    // Will get an OtherServiceImpl for myService.
    // A MyServiceImpl would be given without the service override.
    public ProductController(IMyService myService)
    {
    }
}
You should be able to accomplish it with named component registration.
container.Register(Component.For<IMyInterface>().ImplementedBy<MyClassOne>().Named("One"));
container.Register(Component.For<IMyInterface>().ImplementedBy<MyClassTwo>().Named("Two"));
and then resolve them with
kernel.Resolve<IMyInterface>("One");
or
kernel.Resolve<IMyInterface>("Two");
See: To specify a name for the component
Typically DI containers follow the Register, Resolve and Release pattern. During the register phase there are two steps: the first is to specify the mapping, as you are doing; the second is to specify the rules that govern which implementation gets injected where.
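The same two-step split shows up in other containers as well. As a rough Spring equivalent of the registrations above (Java stand-ins for the C# types, purely illustrative): step one registers both implementations, step two states which one each controller gets.

import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
class QualifierConfig {

    // Step 1: specify the mappings - two implementations of one interface, each named.
    @Bean("myClassOne")
    public IMyInterface myClassOne() {
        return new MyClassOne();
    }

    @Bean("myClassTwo")
    public IMyInterface myClassTwo() {
        return new MyClassTwo();
    }

    // Step 2: specify which implementation is injected where.
    @Bean
    public HomeController homeController(@Qualifier("myClassOne") IMyInterface impl) {
        return new HomeController(impl);
    }

    @Bean
    public OtherController otherController(@Qualifier("myClassTwo") IMyInterface impl) {
        return new OtherController(impl);
    }
}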
This problem is very common when we try to address cross-cutting concerns using decorators. In these situations you have multiple classes (decorators) implementing a single interface.
Briefly, you need to implement IModelInterceptorsSelector, which lets you write imperative code that decides which interceptor to apply to which types or members.
This is described in detail in Dependency Injection in .NET by Mark Seemann; look for chapter 9 on interception, or search for the above interface.
I am not an expert at this, but I was searching for the exact same problem and found the answer in that book.
Hope this helps.
Regards
Dev1

Using DI to cache a query for application lifetime

Using a DI container (in this case Ninject), is it possible, or rather wise, to cache a frequently used object for the entire application lifetime (or at least until it is refreshed)?
To cite an example, say I have a Template. There are many Template objects, but each user will inherit at least the lowest-level one. It is immutable and will never change without updating everything that connects to it (so it will only change on administrative demand, never based on user input). It seems foolish to keep querying the database over and over for information I know has not changed.
Would caching this best be done in my IoC container, or should I outsource it to something else?
I already store ISessionFactory (NHibernate) as a singleton, but that's a little bit different because it doesn't involve a query to the database, just the back end used to open and close ISession objects.
So basically I would do something like this:
static class Immutable
{
    [Inject]
    public IRepository<Template> TemplateRepository { get; set; }

    public static ITemplate Template { get; set; }

    public void Initialize()
    {
        if (Immutable.Template == null)
        {
            Immutable.Template = TemplateRepository.Retrieve(1); // obviously better logic here.
        }
    }
}

class TemplateModule : Module
{
    public void Load()
    {
        Bind<ITemplate>().ToMethod(() => Immutable.Initialize()).InSingletonScope();
    }
}
Is this a poor approach? And if so, can anyone recommend a more intelligent one?
I'd generally avoid staticness and null-checking in your code - create normal classes without singleton wiring by default and layer that aspect on top via the container. Ditto, remove the reliance on property injection - constructor injection is always better unless you have no choice.
i.e.:
class TemplateManager
{
    readonly IRepository<Template> _templateRepository;

    public TemplateManager(IRepository<Template> templateRepository)
    {
        _templateRepository = templateRepository;
    }

    public ITemplate LoadRoot()
    {
        return _templateRepository.Retrieve(1); // obviously better logic here.
    }
}

class TemplateModule : Module
{
    public void Load()
    {
        Bind<ITemplate>().ToMethod(ctx => ctx.Kernel.Get<TemplateManager>().LoadRoot()).InSingletonScope();
    }
}
And then I'd question whether TemplateManager should become a ninject provider or be inlined.
As for the actual question... the big question is how and when you want to control clearing the cache to force a reload, for instance if you decided that the caching should be at session level rather than app level due to authorization influences on the template tree. In general, I'd say that should be the concern of an actual class rather than bound into your DI wiring or hard-wired into whether a class is a static class or a Singleton (as in the design pattern, not the Ninject scope).
My tendency would be to have a TemplateManager class with no static methods, and make that a singleton class in the container. However, to get the root template, consumers should get the TemplateManager injected (via ctor injection) but then say _templateManager.GetRootTemplate() to get the template.
That way, you can:
not have a reliance on fancy ninject providers and/or tie yourself to your container
have no singleton cruft or static methods
have simple caching logic in the TemplateManager
vary the Scoping of the manager without changing all the client code
have it clear that getting the template may or may not be a simple get operation
i.e, I'd manage it like so:
class TemplateManager
{
    readonly IRepository<Template> _templateRepository;

    public TemplateManager(IRepository<Template> templateRepository)
    {
        _templateRepository = templateRepository;
    }

    ITemplate _cachedRootTemplate;

    public ITemplate FetchRootTemplate()
    {
        if (_cachedRootTemplate == null)
            _cachedRootTemplate = LoadRoot();
        return _cachedRootTemplate;
    }

    ITemplate LoadRoot()
    {
        return _templateRepository.Retrieve(1); // obviously better logic here.
    }
}
register it like so:
class TemplateModule : Module
{
    public void Load()
    {
        Bind<TemplateManager>().ToSelf().InSingletonScope();
    }
}
and then consume it like so:
class TemplateConsumer
{
    readonly TemplateManager _templateManager;

    public TemplateConsumer(TemplateManager templateManager)
    {
        _templateManager = templateManager;
    }

    void DoStuff()
    {
        var rootTemplate = _templateManager.FetchRootTemplate();
        // ...
    }
}

Wild speculation: I'd also consider not having a separate IRepository resolvable in the container (and presumably having all sorts of ties into units of work). Instead, I'd have the TemplateRepository be a longer-lived thing not coupled to an ORM layer and Unit of Work. In other words, having a repository and a Manager, neither of which does anything well defined on its own, isn't a good sign - the repository should not just be a Table Data Gateway; it should be able to be the place where an Aggregate Root such as Templates gets cached and collated. But I'd have to know a lot more about your code base before slinging out stuff like that without context!
