How to use lazy initialization together with a DI container in Spring4D

I'm new to Spring4D and trying to understand how to combine a DI container with lazily loaded objects. I think I get the concept: push all object creation back up the call stack to the root of the application, register the types there, and pass the resolved objects along using constructor/property/method injection. That way the separate units only depend on interfaces, and only the root unit depends on Spring.
type
  IMyClass = interface
    ['{593ABC29-B882-4B70-903F-52F381DD53BF}']
    procedure DoSomething;
  end;

  TMyClass = class(TInterfacedObject, IMyClass)
  public
    procedure DoSomething;
  end;

// In the root of the application
Container.RegisterType<TMyClass>;
Container.Build;
Container.Resolve<IMyClass>.DoSomething;
I see Spring4D has something called Lazy<T> for lazy loading, so in the following example an instance of TMyClass is not created until Value is referenced, e.g.
var LazyMyClassObjectFactory := TLazy<TMyClass>.Create;
LazyMyClassObjectFactory.Value.DoSomething;
My problem is: 1. how do I combine a DI container with lazy loading (I'm not quite sure how to resolve the interfaced lazy objects), and 2. can it be done without requiring Lazy<T>, since I don't like the idea of becoming dependent on Spring in all my units?

I got it working somewhat, though I don't use Lazy<T>. Instead I register a factory and pass it down the line. Not sure if this is the right way to go about it, but this way the other units only depend on the unit containing the interfaces.
type
  IMyClass = interface
    ['{735BA720-F1D9-4138-85E1-44EA9DCAA773}']
    procedure DoSomething;
  end;

  IMyClassFactory = interface
    ['{7B1125D7-2639-4D4E-9909-28A08543F5FF}']
    function Create: IMyClass;
  end;

  TMyClass = class(TInterfacedObject, IMyClass)
  public
    procedure DoSomething;
  end;

// From the top-level root
GlobalContainer.RegisterType<TMyClass>;
GlobalContainer.RegisterFactory<IMyClassFactory>;
GlobalContainer.Build;

// Instantiate MyClassFactoryObject and pass it down to other units that have params of type IMyClassFactory
var MyClassFactoryObject := GlobalContainer.Resolve<IMyClassFactory>;

// Inside some other class
var MyClassObject := MyClassFactoryObject.Create;
MyClassObject.DoSomething;

The concept of DI is not to push instance creation to the root of the application but to compose the object graph there - hence the name "composition root".
In fact it is quite the opposite: since composition happens very close to the start of the application, you want to avoid creating all instances at that point, because a) it can be really expensive, b) not every instance might ever be needed, and c) some information, such as user input, might not be available yet.
This is what "lazy initialization" solves, and the container knows about it: it can automatically resolve Lazy<T> and Func<T>. The two differ: Lazy<T> defers instantiation until .Value is actually accessed and then keeps the created instance for further use, whereas a Func<T> created by the container internally resolves anew each time it is called.
The code in your answer makes me wonder whether you have fully understood the concept of DI. You don't need to resolve something from the container and pass it on - that is what the container does for you. You only resolve the root of an object graph, and everything beneath it "unfolds" automatically (by being injected and lazily constructed). There is nothing magic about this that only the container can do, though - it can also be achieved with pure DI and a proper architecture.

Related


Calling a function of an Exe project from a referenced ActiveX project in VB6
I have one ActiveX project in VB6 (let's call it A).
I have another Exe project in VB6 (let's call it B).
Project A is referenced in Project B,
but now I have a requirement to call a function of B inside A.
Is there a way we can do this in VB6?
If yes, what do I need to search for? I already tried searching on Google, but I'm not sure I could explain what to search for.
I haven't worked much with VB6, so I'm kind of stuck on what to do.
First off let me say that when you run into this problem it is often a sign that the code organization could be improved. Perhaps some modules in A actually should be in B, or they could be broken down in a more modular way that would resolve the problem. I would first encourage you to think about the problem this way.
That said, there are technical ways of doing what you want, I've suggested some examples below.
One approach to this is to utilize an interface. The interface would be provided by A.dll and the class used to implement it is provided by B.exe.
This is rough pseudocode just to sketch out the concept.
Within project A:
Create a class to act as an interface (called ISample). Within ISample have at least one sub or function defined, so you would have ISample.SomeProcedure.
Elsewhere in project A, you need some property or procedure, say SomeClassInA.RunSampleFunction, which accepts an object of type ISample and then calls SomeProcedure on it.
Within project B:
Create another class, SampleImplementation, which Implements the ISample interface. This means it will have an actual concrete implementation of SomeProcedure.
From B, create a new object of type SampleImplementation and then pass it to the code in A:
Dim impl As SampleImplementation
Dim objectFromA As SomeClassInA

Set impl = New SampleImplementation
Set objectFromA = New SomeClassInA

' Pass the object from B to A, where A can call its methods:
objectFromA.RunSampleFunction impl
This avoids circular references and is a general object-oriented pattern which I've used before. I will note, however, that the interface concept in VB6 has some annoyances that you'll have to live with, but for something like this it can be a good tool.
A different approach would be to have a class in A which exposes an event. You can then declare an object from that class in B using WithEvents and attach an event handler to the class. The event handler is just a procedure which exists in B, but now can be called by A.
Events in VB6 also have some limitations (specifically that the WithEvents object has to be within global scope IIRC) but you can usually live with / work around those problems as well.
The interface approach may be more general and a more powerful way to share information between A and B, but the event approach could be quicker to get working. Depends a lot on the specifics of what you need to accomplish.

Registering all types in Assembly for Unity

I'm working on a large Asp.Net MVC3 application (>50 views) and we are currently planning on using Unity for our dependency injection framework. For ease of maintenance, I would like to be able to query the assembly to find all of the base types, then register them with Unity.
Based on sample code from the Unity MVC3 Project for registering all controllers, I tried the following code -
var orchestratorTypes = (from t in Assembly.GetCallingAssembly().GetTypes()
                         where typeof(IOrchesratorBase).IsAssignableFrom(t) &&
                               !t.IsAbstract
                         select t).ToList();

orchestratorTypes.ForEach(t => container.RegisterType(t));
When I run the application I get the following error message
The current type, WwpMvcHelpers.BaseClasses.IOrchesratorBase, is an interface and cannot be constructed. Are you missing a type mapping?
If I register the class individually, as below -
container.RegisterType<IOrchesratorBase, HomeOrchestrator>();
Everything works correctly. Is there a way to do this so that I don't have to register each type individually?
EDIT
Per request, my inheritance hierarchy is
HomeOrchestrator <- IOrchesratorBaseList<LocalModel>
                 <- OrchesratorBase<LocalModel> <- IOrchesratorBase
The usage in the controller is
public class HomeController : ControllerListBase<HomeOrchestrator, LocalModel>
{
    public HomeController() { }

    public HomeController(IOrchesratorBase homeOrchestrator)
    {
        this.Orchestrator = (HomeOrchestrator) homeOrchestrator;
    }
}
The LINQ to get the types appears to work. I don't think that's your problem.
You'll get a similar error if you just write
container.RegisterType(typeof(HomeOrchestrator));
and call container.Resolve<IOrchesratorBase>().
In other words, RegisterType(t) is not the same as RegisterType<I, T>().
The real question is, what are you resolving and how do you want it resolved? Your query is finding implementors of IOrchesratorBase. Are your injection constructor parameters of that type? If so, what's Unity supposed to do when 20 types implement that interface?
Can you provide more information on your class/interface hierarchy, constructor parameters, and what you expect/want to happen?
(I'd refactor to change IOrchesratorBase to IOrchestratorBase, BTW.) :)
Edit
Based on the edited question, the problem is that, in order to resolve a HomeController, Unity is looking for a type registration for IOrchesratorBase. It determines the interface type by the parameter types of the constructor with the most parameters.
If you write container.RegisterType<IOrchesratorBase, HomeOrchestrator>() the answer is obvious - Unity will construct an instance of HomeOrchestrator and inject it.
Now, is there more than one type that implements IOrchesratorBase? If so, and you register both of them (explicitly), Unity will use whichever one you register last. That may not be what you want.
If you have multiple controllers, each taking a different interface type in their constructors (with only one implementation per interface), you'll need to figure out what each interface type is and re-run your LINQ registration for each one. That could be done via reflection - find the orchestrators or the controllers.
If you have multiple controllers, each taking the same interface type in their constructors and you want different implementations for each, you've got a problem. You'd have to register named types and determine the names somehow, or something similar.
Unity isn't magic. It can't figure out your intentions.
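For the one-implementation-per-interface case mentioned above, a reflection-based registration could look roughly like the sketch below. This is a hedged sketch, not tested against your hierarchy; it reuses the IOrchesratorBase name from the question and assumes one concrete implementor per interface.
// Requires System.Linq and System.Reflection.
var implementors = Assembly.GetCallingAssembly().GetTypes()
    .Where(t => typeof(IOrchesratorBase).IsAssignableFrom(t) && !t.IsAbstract && !t.IsInterface);

foreach (var impl in implementors)
{
    // Map every orchestrator interface the type implements to the type itself,
    // e.g. IOrchesratorBase -> HomeOrchestrator. If several types implement the
    // same interface, the last registration wins.
    foreach (var service in impl.GetInterfaces()
                                .Where(i => typeof(IOrchesratorBase).IsAssignableFrom(i)))
    {
        container.RegisterType(service, impl);
    }
}
That still leaves the "several implementors of one interface" problem described above, which only named registrations can solve.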
Addendum
Unity can operate in a convention-over-configuration mode; see Using Unity With Minimal Configuration.
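To make the addendum concrete, here is a hedged sketch assuming Unity 3 or later (the question predates it), where registration by convention does the interface-to-implementation mapping in one call:
// Requires Microsoft.Practices.Unity 3.x and System.Linq.
container.RegisterTypes(
    AllClasses.FromLoadedAssemblies()
              .Where(t => typeof(IOrchesratorBase).IsAssignableFrom(t) && !t.IsAbstract),
    WithMappings.FromAllInterfaces,      // or WithMappings.FromMatchingInterface
    WithName.Default,
    WithLifetime.ContainerControlled);
The same caveat applies: if two types implement the same interface, you need named registrations (e.g. WithName.TypeName) to keep both.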

Is the DI pattern limiting wrt expensive object creation coupled with infrequent dependency usage?

I'm having a hard time getting my head around what seems like an obvious pattern problem/limitation when it comes to typical constructor dependency injection. For example purposes, let's say I have an ASP.NET MVC3 controller that looks like:
Public Class MyController
    Inherits Controller

    Private ReadOnly mServiceA As IServiceA
    Private ReadOnly mServiceB As IServiceB
    Private ReadOnly mServiceC As IServiceC

    Public Sub New(serviceA As IServiceA, serviceB As IServiceB, serviceC As IServiceC)
        Me.mServiceA = serviceA
        Me.mServiceB = serviceB
        Me.mServiceC = serviceC
    End Sub

    Public Function ActionA() As ActionResult
        ' Do something with Me.mServiceA and Me.mServiceB
    End Function

    Public Function ActionB() As ActionResult
        ' Do something with Me.mServiceB and Me.mServiceC
    End Function
End Class
The thing I'm having a hard time getting over is the fact that the DI container was asked to instantiate all three dependencies when at any given time only a subset of the dependencies may be required by the action methods on this controller.
It seems to be assumed that object construction is dirt-cheap, that there are no side effects from object construction, OR that all dependencies are consistently utilized. What if object construction isn't cheap, or there are side effects? For example, if constructing IServiceA involved opening a connection or allocating other significant resources, that time and those resources would be completely wasted when ActionB is called.
If these action methods used a service location pattern (or another similar pattern), there would never be a chance of unnecessarily constructing an object instance that goes unused; of course, that pattern has other issues attached that make it unattractive.
Does using the canonical constructor injection + interfaces pattern of DI basically lock the developer into a "limitation" of sorts, namely that implementations of the dependency must be cheap to instantiate or the instance must be significantly utilized? I know all patterns have their pros and cons; is this just one of DI's cons? I've never seen it mentioned before, which I find curious.
If you have a lot of fields that aren't being used by every member this means that the class' cohesion is low. This is a general programming concept - Constructor Injection just makes it more visible. It's usually a pretty good indicator that the Single Responsibility Principle is being violated.
If that's the case then refactor (e.g. to Facade Services).
You don't have to worry about performance when creating object graphs.
When it comes to side effects, (DI) constructors should be simple and not have side effects.
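To make the Facade Services suggestion concrete: in the controller above, ActionA always uses IServiceA and IServiceB together, so that collaboration can get its own coarser-grained service. A hedged C# sketch (the IActionAWorkflow name and Execute method are illustrative, not from the question):
public interface IActionAWorkflow
{
    void Execute();
}

public class ActionAWorkflow : IActionAWorkflow
{
    private readonly IServiceA _serviceA;
    private readonly IServiceB _serviceB;

    public ActionAWorkflow(IServiceA serviceA, IServiceB serviceB)
    {
        _serviceA = serviceA;
        _serviceB = serviceB;
    }

    public void Execute()
    {
        // Coordinate the two fine-grained services here.
    }
}
MyController then depends on IActionAWorkflow (and a similar facade for ActionB) instead of three individual services, which raises its cohesion again.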
Generally speaking, there should be no major costs or side effects of object construction. This is a general statement that I believe applies to most (not all) objects, but is especially true for services that you would inject via DI. In other words, constructing a service class automatically makes a database/service call, or changes the state of your system in a way that would have side effects is (at least) a code smell.
Regarding instances that go unused: it's hard to create a system that has perfect utilization of instances within dependent classes, regardless of whether you use DI or not. I'm not sure achieving this is very important, as long as you are adhering to the Single Responsibility Principle. If you find that your class has too many services injected, or that utilization is really uneven, it might be a sign that your class is doing too much and needs to be split into two or more smaller classes with more focused responsibilities.
No, you are not tied to the limitations you have listed. As of .NET 4 you have Lazy(Of T) at your disposal, which allows you to defer instantiation of your dependencies until they are required.
It is not assumed that object construction is dirt-cheap, and consequently some DI containers support Lazy(Of T) out of the box. While Unity 2.0 supports lazy initialization out of the box through automatic factories, there is also a good article on an extension supporting Lazy(Of T) that the author has published on MSDN.
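As a hedged sketch (written in C# although the question's code is VB.NET; DoWork is an assumed method, and it presumes a container that can resolve Lazy<T>, e.g. Unity 3+ or Unity 2 with the extension above), constructor injection of Lazy<T> defers construction until an action actually touches the service:
public class MyController : Controller
{
    private readonly Lazy<IServiceA> _serviceA;
    private readonly Lazy<IServiceB> _serviceB;

    public MyController(Lazy<IServiceA> serviceA, Lazy<IServiceB> serviceB)
    {
        _serviceA = serviceA;
        _serviceB = serviceB;
    }

    public ActionResult ActionA()
    {
        // The services are only constructed on first access here; an action
        // that never touches them never pays for their construction.
        _serviceA.Value.DoWork();
        _serviceB.Value.DoWork();
        return View();
    }
}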
Isn't your controller a singleton though? That is the normal way to do it in Java. There is only one instance created. Also, you could split the controller into multiple controllers if the roles of the actions are so distinct.

BDD/TDD: can dependencies be a behavior?

I've been told not to test implementation details. A dependency seems like an implementation detail. However, I could also phrase it as a behavior.
Example: a LinkList depends on a storage engine to store its links (e.g. LinkStorageInterface). The constructor needs to be passed an instance of a LinkStorageInterface implementation to do its job.
I can't say 'shouldUseLinkStorage'. But maybe I can say 'shouldStoreLinksInStorage'.
What is correct to 'test' in this case? Should I test that it stores links in a store (behavior) or don't test this at all?
The dependency itself is not an expected behavior, but the actions called on the dependency most certainly are. You should test the stuff you (the caller) know about, and avoid testing the stuff that requires you to have intimate knowledge of the inner workings of the SUT.
Expanding your example a little, let's imagine that our LinkStorageInterface has the following definition (pseudo-code):
Interface LinkStorageInterface
    void WriteListToPersistentMedium(LinkList list)
End Interface
Now, since you (the caller) are providing the concrete implementation for that interface it is perfectly reasonable for you to test that WriteListToPersistentMedium() gets called when you invoke the Save() method on your LinkList.
A test might look like this, again using pseudo-code:
void ShouldSaveLinkListToPersistentMedium()
    define da = new MockLinkListStorage()
    define list = new LinkList(da)
    list.Save()
    Assert.Method(da.WriteListToPersistentMedium).WasCalledWith(list)
end method
You have tested expected behavior without testing implementation specific details of either your SUT, or your mock. What you want to avoid testing (mostly) are things like:
Order in which methods were called
Making a method, or property public just so you can check it
Anything that does not directly involve the expected behavior you are testing
Again, a dependency is something that you as the consumer of the class are providing, so you expect it to be used. Otherwise there is no point in having that dependency in the first place.
LinkStorageInterface is not an implementation detail - its name suggests an interface to an engine. In which case the name shouldUseLinkStorage has more value than shouldStoreLinksInStorage.
That's my 2 pennies worth!

TDD and DI: dependency injections becoming cumbersome

C#, nUnit, and Rhino Mocks, if that turns out to be applicable.
My quest with TDD continues as I attempt to wrap tests around a complicated function. Let's say I'm coding a form that, when saved, has to also save dependent objects within the form...answers to form questions, attachments if available, and "log" entries (such as "blahblah updated the form." or "blahblah attached a file."). This save function also fires off emails to various people depending on how the state of the form changed during the save function.
This means in order to fully test out the form's save function with all of its dependencies, I have to inject five or six data providers to test out this one function and make sure everything fired off in the right way and order. This is cumbersome when writing the multiple chained constructors for the form object to insert the mocked providers. I think I'm missing something, either in the way of refactoring or simply a better way to set the mocked data providers.
Should I further study refactoring methods to see how this function can be simplified? How's the observer pattern sound, so that the dependent objects detect when the parent form is saved and handle themselves? I know that people say to split out the function so it can be tested...meaning I test out the individual save functions of each dependent object, but not the save function of the form itself, which dictates how each should save themselves in the first place?
First, if you are following TDD, then you don't wrap tests around a complicated function. You wrap the function around your tests. Actually, even that's not right. You interweave your tests and functions, writing both at almost exactly the same time, with the tests just a little ahead of the functions. See The Three Laws of TDD.
When you follow these three laws, and are diligent about refactoring, then you never wind up with "a complicated function". Rather you wind up with many, tested, simple functions.
Now, on to your point. If you already have "a complicated function" and you want to wrap tests around it then you should:
Add your mocks explicitly, instead of through DI. (e.g. something horrible like a 'test' flag and an 'if' statement that selects the mocks instead of the real objects).
Write a few tests in order to cover the basic operation of the component.
Refactor mercilessly, breaking up the complicated function into many little simple functions, while running your cobbled together tests as often as possible.
Push the 'test' flag as high as possible. As you refactor, pass your data sources down to the small simple functions. Don't let the 'test' flag infect any but the topmost function.
Rewrite tests. As you refactor, rewrite as many tests as possible to call the simple little functions instead of the big top-level function. You can pass your mocks into the simple functions from your tests.
Get rid of the 'test' flag and determine how much DI you really need. Since you have tests written at the lower levels that can insert mocks through arguments, you probably don't need to mock out many data sources at the top level anymore.
If, after all this, the DI is still cumbersome, then think about injecting a single object that holds references to all your data sources. It's always easier to inject one thing rather than many.
Use an AutoMocking container. There is one written for RhinoMocks.
Imagine you have a class with a lot of dependencies injected via constructor injection. Here's what it looks like to set it up with RhinoMocks, no AutoMocking container:
private MockRepository _mocks;
private BroadcastListViewPresenter _presenter;
private IBroadcastListView _view;
private IAddNewBroadcastEventBroker _addNewBroadcastEventBroker;
private IBroadcastService _broadcastService;
private IChannelService _channelService;
private IDeviceService _deviceService;
private IDialogFactory _dialogFactory;
private IMessageBoxService _messageBoxService;
private ITouchScreenService _touchScreenService;
private IDeviceBroadcastFactory _deviceBroadcastFactory;
private IFileBroadcastFactory _fileBroadcastFactory;
private IBroadcastServiceCallback _broadcastServiceCallback;
private IChannelServiceCallback _channelServiceCallback;
[SetUp]
public void SetUp()
{
    _mocks = new MockRepository();
    _view = _mocks.DynamicMock<IBroadcastListView>();
    _addNewBroadcastEventBroker = _mocks.DynamicMock<IAddNewBroadcastEventBroker>();
    _broadcastService = _mocks.DynamicMock<IBroadcastService>();
    _channelService = _mocks.DynamicMock<IChannelService>();
    _deviceService = _mocks.DynamicMock<IDeviceService>();
    _dialogFactory = _mocks.DynamicMock<IDialogFactory>();
    _messageBoxService = _mocks.DynamicMock<IMessageBoxService>();
    _touchScreenService = _mocks.DynamicMock<ITouchScreenService>();
    _deviceBroadcastFactory = _mocks.DynamicMock<IDeviceBroadcastFactory>();
    _fileBroadcastFactory = _mocks.DynamicMock<IFileBroadcastFactory>();
    _broadcastServiceCallback = _mocks.DynamicMock<IBroadcastServiceCallback>();
    _channelServiceCallback = _mocks.DynamicMock<IChannelServiceCallback>();
    _presenter = new BroadcastListViewPresenter(
        _addNewBroadcastEventBroker,
        _broadcastService,
        _channelService,
        _deviceService,
        _dialogFactory,
        _messageBoxService,
        _touchScreenService,
        _deviceBroadcastFactory,
        _fileBroadcastFactory,
        _broadcastServiceCallback,
        _channelServiceCallback);
    _presenter.View = _view;
}
Now, here's the same thing with an AutoMocking container:
private MockRepository _mocks;
private AutoMockingContainer _container;
private BroadcastListViewPresenter _presenter;
private IBroadcastListView _view;
[SetUp]
public void SetUp()
{
    _mocks = new MockRepository();
    _container = new AutoMockingContainer(_mocks);
    _container.Initialize();
    _view = _mocks.DynamicMock<IBroadcastListView>();
    _presenter = _container.Create<BroadcastListViewPresenter>();
    _presenter.View = _view;
}
Easier, yes?
The AutoMocking container automatically creates mocks for every dependency in the constructor, and you can access them for testing like so:
using (_mocks.Record())
{
    _container.Get<IChannelService>().Expect(cs => cs.ChannelIsBroadcasting(channel)).Return(false);
    _container.Get<IBroadcastService>().Expect(bs => bs.Start(8));
}
Hope that helps. I know my testing life has been made a whole lot easier with the advent of the AutoMocking container.
You're right that it can be cumbersome.
Proponents of the mocking methodology would point out that the code is written improperly to begin with. That is, you shouldn't be constructing dependent objects inside this method. Rather, the injection APIs should have functions that create the appropriate objects.
As for mocking up 6 different objects, that's true. However, if you were also unit-testing those systems, those objects should already have mocking infrastructure you can use.
Finally, use a mocking framework that does some of the work for you.
I don't have your code, but my first reaction is that your test is trying to tell you that your object has too many collaborators. In cases like this, I always find that there's a missing construct in there that should be packaged up into a higher level structure. Using an automocking container is just muzzling the feedback you're getting from your tests. See http://www.mockobjects.com/2007/04/test-smell-bloated-constructor.html for a longer discussion.
In this context, I usually find statements along the lines of "this indicates that your object has too many dependencies" or "your object has too many collaborators" to be fairly specious claims. Of course an MVC controller or a form is going to be calling lots of different services and objects to fulfill its duties; it is, after all, sitting at the top layer of the application. You can smoosh some of these dependencies together into higher-level objects (say, a ShippingMethodRepository and a TransitTimeCalculator get combined into a ShippingRateFinder), but this only goes so far, especially for these top-level, presentation-oriented objects. That's one less object to mock, but you've just obfuscated the actual dependencies via one layer of indirection, not actually removed them.
One blasphemous piece of advice is to say that if you are dependency injecting an object and creating an interface for it that is quite unlikely to ever change (Are you really going to drop in a new MessageBoxService while changing your code? Really?), then don't bother. That dependency is part of the expected behavior of the object and you should just test them together since the integration test is where the real business value lies.
The other blasphemous piece of advice is that I usually see little utility in unit testing MVC controllers or Windows Forms. Every time I see someone mocking the HttpContext and testing to see if a cookie was set, I want to scream. Who cares if the AccountController set a cookie? I don't. The cookie has nothing to do with treating the controller as a black box; an integration test is what is needed to test its functionality (hmm, a call to PrivilegedArea() failed after Login() in the integration test). This way, you avoid invalidating a million useless unit tests if the format of the login cookie ever changes.
Save the unit tests for the object model, save the integration tests for the presentation layer, and avoid mock objects when possible. If mocking a particular dependency is hard, it's time to be pragmatic: just don't do the unit test and write an integration test instead and stop wasting your time.
The simple answer is that the code you are trying to test is doing too much. I think sticking to the Single Responsibility Principle might help.
The Save button method should only contain top-level calls that delegate things to other objects. These objects can then be abstracted through interfaces. Then when you test the Save button method, you only test the interaction with mocked objects.
The next step is to write tests for these lower-level classes, but things should get easier since you only test them in isolation. If you need complex test setup code, that is a good indicator of a bad design (or a bad testing approach).
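As a hedged sketch of that shape (all names here are illustrative, not taken from the question), the save method only delegates, so its test just needs to verify the interactions with mocked collaborators:
public class FormData { }   // stand-in for the form's data

public interface IAnswerStore { void SaveAnswers(FormData form); }
public interface IChangeNotifier { void SendStateChangeEmails(FormData form); }

public class FormSaver
{
    private readonly IAnswerStore _answers;
    private readonly IChangeNotifier _notifier;

    public FormSaver(IAnswerStore answers, IChangeNotifier notifier)
    {
        _answers = answers;
        _notifier = notifier;
    }

    public void Save(FormData form)
    {
        // One delegation per concern; each collaborator gets its own focused tests.
        _answers.SaveAnswers(form);
        _notifier.SendStateChangeEmails(form);
    }
}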
Recommended reading:
Clean Code: A Handbook of Agile Software Craftsmanship
Google's guide to writing testable code
Constructor DI isn't the only way to do DI. Since you're using C#, if your constructor does no significant work you could use Property DI. That simplifies things greatly in terms of your object's constructors at the expense of complexity in your function. Your function must check for the nullity of any dependent properties and throw an InvalidOperationException if they're null, before it begins work.
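A hedged sketch of that guard (FormPresenter and IDataProvider are illustrative names, not from the question):
public interface IDataProvider { void Save(); }   // illustrative dependency

public class FormPresenter
{
    // Set by the container (or directly by a test) instead of through the constructor.
    public IDataProvider DataProvider { get; set; }

    public void Save()
    {
        if (DataProvider == null)
            throw new InvalidOperationException("DataProvider has not been supplied.");

        DataProvider.Save();
    }
}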
When it is hard to test something, it is usually a symptom of the code quality - that the code is not testable (mentioned in this podcast, IIRC). The recommendation is to refactor the code so that it will be easy to test. Some heuristics for deciding how to split the code into classes are the SRP and OCP. For more specific instructions, it would be necessary to see the code in question.
