I have seen this line of code in several tutorials for using Unity in ASP.NET MVC 3. I was under the impression that Service Locator is an anti-pattern and not best practice. Is this Service Locator something other than the anti-pattern as usually defined, or is this line of code / this implementation considered bad practice?
ServiceLocator.SetLocatorProvider(() => new UnityServiceLocator(Container));
Old question, but for the benefit of others:
While I absolutely agree with the mantra "Service Location is an anti-pattern", there are definitely exceptions to that rule.
When you use a Dependency Injection container (like Unity), then yes, certainly don't use ServiceLocator; use constructor injection for all your service classes. (Also, don't use "new" for anything other than value objects like DTOs.)
However, there are cases where you simply can't use constructor injection, and then the only way to get access to a service is a workaround that reaches into your Unity container directly; in such cases, ServiceLocator is a good, standard way to accomplish that. This is the case when the class isn't instantiated by you (or, more specifically, isn't instantiated by Unity) but by the .NET Framework, for example.
A few simple examples of where ServiceLocator might be useful are getting access to services registered in your Unity container from (a short sketch follows this list):
an implementation of a WCF IEndpointBehavior or IClientMessageInspector
an implementation of a WPF IValueConverter
or you may not even want to get access to "services" from the class; you simply want to write code that is unit-testable, but for some reason the class can't be instantiated at all (or not easily) because it would normally be constructed by the .NET Framework, so you extract your custom code into a class that is testable and resolve it in the non-testable class using the ServiceLocator.
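For illustration, here is a minimal sketch of the WPF case, assuming a hypothetical ICurrencyFormatter interface has been registered in the Unity container elsewhere; the converter is created by XAML, not by Unity, so constructor injection is not available:

using System;
using System.Globalization;
using System.Windows.Data;
using Microsoft.Practices.ServiceLocation;

// Hypothetical service interface, registered with Unity at application startup.
public interface ICurrencyFormatter
{
    string Format(decimal amount, CultureInfo culture);
}

public class CurrencyConverter : IValueConverter
{
    public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
    {
        // WPF instantiated this converter for us, so we resolve the dependency via the locator.
        var formatter = ServiceLocator.Current.GetInstance<ICurrencyFormatter>();
        return formatter.Format((decimal)value, culture);
    }

    public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture)
    {
        throw new NotSupportedException();
    }
}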
Note that this line is not ideal:
ServiceLocator.SetLocatorProvider(() => new UnityServiceLocator(Container));
The ServiceLocator.Current property is going to execute the delegate provided every time you access Current, i.e. a new UnityServiceLocator is going to get created every time. Instead, you probably want to do this:
IServiceLocator locator = new UnityServiceLocator(container);
ServiceLocator.SetLocatorProvider(() => locator);
If you create a framework that is designed to be container-agnostic, the service locator (although it should be a no-go in an application) is an additional layer of indirection that allows you to swap Unity out for something different. In addition, using the service locator does not force applications that use the framework to use DI.
It is the same anti-pattern that people talk about. All that line does is set the service locator provider to an instance of UnityServiceLocator, i.e. to use the Unity implementation of IServiceLocator. Optionally, you can write your own implementation of IServiceLocator and use that instead of UnityServiceLocator.
Using Service Locator is considered a bad practice for various reasons, as listed here.
Related
I'm trying to create a service such that, once created, it only allows itself to be held by a single consumer/bundle at any one time. (If this is against the philosophy/specification of OSGi, then that obviously provides a quick answer, but a reference to the OSGi specs stating this would be appreciated.)
To implement this requirement I implemented the ServiceFactory interface, thinking that whenever the service was requested, the getService(Bundle bundle, ServiceRegistration<S> registration) method would be called, and that this is where I could determine whether the bundle was a new consumer and act accordingly.
It appears that this is not the case in the scenario I have tested this in.
Using Apache Karaf and instantiating a consumer of the service via Blueprint, it would seem that the getService method is never called. Instead, the consumer's binding method for the service is called directly, with a proxy service object injected.
While I understand that Blueprint uses proxies, surely the ServiceFactory contract still has to be fulfilled, even if it's a proxy object that consumes the service?
Why do I want to do this?
I am attempting to wrap JavaFX and the Stage class, and because JavaFX isn't OSGi-friendly I am attempting to coordinate access to the Stage object. I'm aware that there are frameworks such as Drombler, but a brief look at them made me think that they don't suit my use case. They appear too restrictive for my needs; e.g. I don't necessarily wish to lay out an application in the manner Drombler uses.
It depends on what you mean by a consumer. ServiceFactory does give you the chance to create a separate instance of a service per bundle that calls getService on your service. It's not clear from your question, but I suspect you weren't seeing getService invoked multiple times because you were fetching the service from the same consumer bundle; in that case, ServiceFactory simply returns the same object repeatedly.
As for your general question about restricting access to a single consumer, no that really goes against the OSGi philosophy. I'm sorry I don't have a spec reference for you but the clue is in the name: it's a service that is available to all.
I'm aware that there are frameworks such as Drombler, but a brief look at them made me think that they don't suit my use case. They appear too restrictive for my needs; e.g. I don't necessarily wish to lay out an application in the manner Drombler uses.
Please note that the layout of Drombler FX applications is pluggable, so you can provide your own implementation tailored to your needs. This allows you to get the most out of Drombler FX and JavaFX.
While this feature has been available for some time, there is now a new tutorial trail explaining it in more detail.
I'm a bit confused about DI and IoC. I have set up MVC and I use Ninject for property injection, and it works perfectly. My application is set up to use Portable Areas from MvcContrib, and each area consists of providers, services, models and controllers.
Providers from one area can access other providers in the same or sub-assemblies. To resolve dependencies in a provider I use DependencyResolver.Cur..., which is registered to use Ninject as well. I would like to know whether this is a good approach, since I don't want to pass all the other providers from the controllers down to the last layer; I want to access them directly from the provider. Should I create an instance of the kernel in the lowest assembly, like Core, so I can access it directly from anywhere?
Thanks in advance.
UPDATE:
I would also like to know whether it is possible to use property injection in a normal class.
When you design all your services (repositories, application services, controllers, etc.) around the constructor injection pattern, there is no need to call DependencyResolver.Current.GetService from within a class, and there is no need to make an instance of the kernel available in the lowest assembly.
When all your services use constructor injection, your container will be able to construct an object graph of dependent services when a root type is requested: in your case a controller class. When you do this, no code needs to access the DependencyResolver or the Kernel directly, which ensures your code will be more testable, flexible, and maintainable. Any code that accesses the DependencyResolver or static Kernel instance is hard to test, hides its dependencies, and makes it difficult to verify the container's configuration in an automated fashion.
Instead of using constructor injection, you can achieve the same with property injection. However, since the convention is that properties are for optional dependencies, Ninject (and any other container) will skip a property it can't inject (implicit property injection), instead of throwing an exception, as would happen with a missing constructor argument dependency. This again makes it much harder to find configuration errors in your application. So, whenever possible, stick with constructor injection.
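As a rough sketch of what this looks like in practice (IUserProvider is an assumed interface, bound in the Ninject kernel elsewhere, e.g. kernel.Bind<IUserProvider>().To<UserProvider>()):

using System.Web.Mvc;

public interface IUserProvider
{
    string GetUserName(int id);
}

public class UserController : Controller
{
    private readonly IUserProvider users;

    // Ninject.MVC3 builds the controller and supplies the dependency,
    // so nothing here touches DependencyResolver or the kernel.
    public UserController(IUserProvider users)
    {
        this.users = users;
    }

    public ActionResult Details(int id)
    {
        return Content(users.GetUserName(id));
    }
}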
I updated Ninject.MVC3 package from 2.2.1.0 to 2.2.2.0. Before I had access to the Kernel object through BootStrapper.Kernel property but in the new version Kernel property is marked as obsolete. I get a warning saying
'Public ReadOnly Property Kernel As Ninject.IKernel' is obsolete: 'Do not use Ninject as Service Locator'.
Is there a different way to access the kernel in the new version?
If you have a class that (for some reason) needs to retrieve objects from the Ninject kernel, you can include the kernel as one of the injected properties/constructor parameters on the class. This pattern is better in the sense that you're explicitly declaring that a particular class is using the kernel, rather than always having it available as the service locator pattern does.
This assumes that Ninject automatically adds an instance binding of the kernel to itself. I know that it used to do this, but if it doesn't you can add the binding manually.
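A minimal sketch of that idea, assuming the kernel can resolve IKernel (if it can't, add the binding yourself, e.g. kernel.Bind<IKernel>().ToConstant(kernel)); PluginLoader and IPlugin are made-up names for the example:

using Ninject;

public interface IPlugin
{
    void Execute();
}

public class PluginLoader
{
    private readonly IKernel kernel;

    // The need for the kernel is declared explicitly in the constructor,
    // instead of being hidden behind a static service locator.
    public PluginLoader(IKernel kernel)
    {
        this.kernel = kernel;
    }

    public IPlugin Load(string name)
    {
        // Resolve a named binding at runtime; the named bindings are assumed to exist.
        return kernel.Get<IPlugin>(name);
    }
}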
The reason this has been marked obsolete, and will be changed to internal in a future version, is that people tend to use Ninject as a service locator if it is possible to do so. But Service Locator is an anti-pattern that should not be used. And since we don't want to provide functionality that helps build badly designed software, it will be removed in the future.
If this requires a lot of changes to your code, that is a sign that your code is suffering from this malaise, dependency-injection-wise, and you really should change it to a better design.
Limit your access to the kernel to a minimum. There is almost no situation in MVC where you need anything other than plain constructor injection. Therefore my first advice is to refactor to constructor injection where possible.
For the very rare cases where you need access to the kernel to create other objects, you should inject a factory into the class that needs the new instances and inject the kernel into that factory (if the constructor has a kernel parameter, it will receive the instance doing the injecting).
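A sketch of that factory approach, with made-up names (IWorker, IWorkerFactory, BatchProcessor); only the factory ever sees the kernel:

using Ninject;

public interface IWorker
{
    void Run();
}

public interface IWorkerFactory
{
    IWorker CreateWorker();
}

public class NinjectWorkerFactory : IWorkerFactory
{
    private readonly IKernel kernel;

    public NinjectWorkerFactory(IKernel kernel)
    {
        this.kernel = kernel;
    }

    public IWorker CreateWorker()
    {
        return kernel.Get<IWorker>();
    }
}

public class BatchProcessor
{
    private readonly IWorkerFactory factory;

    public BatchProcessor(IWorkerFactory factory)
    {
        this.factory = factory;
    }

    public void Process()
    {
        // A fresh worker per batch, without the processor ever touching the kernel.
        var worker = factory.CreateWorker();
        worker.Run();
    }
}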
If you really want to stay with service locator even if almost everyone will tell you not to, you will have to keep a static reference yourself.
In ASP.NET MVC 3, I think DependencyResolver is a clean way to get a service locator.
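For example, something along these lines, where ISomeService is just a placeholder for whatever you have registered with the resolver:

using System.Web.Mvc;

public interface ISomeService { }

public static class ResolverExample
{
    public static ISomeService Resolve()
    {
        // DependencyResolver.Current delegates to whatever resolver was set at startup
        // (the Ninject-backed one, in this case).
        return (ISomeService)DependencyResolver.Current.GetService(typeof(ISomeService));
    }
}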
You could use the Common Service Locator as a service location hook into Ninject. The Common Service Locator only allows you to retrieve objects though, not inject objects that you already have. Of course you could hack around this restriction, but you could just as easily create a static class that exposes the Ninject kernel and reference that rather than the BootStrapper.
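If you go the static-class route mentioned above, the wrapper is little more than this (it is still service location, so the same caveats apply; the names are made up):

using Ninject;

public static class KernelContainer
{
    public static IKernel Kernel { get; private set; }

    // Call this once from Application_Start, right after the kernel is built.
    public static void Initialize(IKernel kernel)
    {
        Kernel = kernel;
    }
}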
I am a big fan of Dependency Injection and the Play Framework, but am having trouble seeing how the two could be exploited together.
There are modules for Spring and Guice, but the way that Play works makes it hard for me to see how DI could be beneficial beyond some quite simple cases.
A good example of this is that Play expects JPA work to be done by static methods associated with the entity in question:
@Entity
public class Person extends Model {

    public static void delete(long id) {
        Person person = Person.findById(id);
        person.delete();
    }

    // etc.
}
So there is no need for a PersonManager to be injected into controllers the way it might be in a Spring J2EE application. Instead, a controller just calls Person.delete(x).
Obviously, DI is beneficial when there are interfaces with external systems, as the concrete implementation can be mocked for testing etc., but I don't see much benefit for a self-contained Play application.
Does anyone have any good examples? Does anyone use it to inject a Manager-style class into Controllers so that a number of operations can be done within the same transaction, for example?
I believe from this sentence you wrote:
"Does anyone have any good examples? Does anyone use it to inject a Manager-style class into Controllers so that a number of operations can be done within the same transaction, for example?"
that before answering the DI question I should note something: transactions are managed automatically by Play. If you check the model documentation you will see that a transaction is automatically created at the beginning of a request, and committed at the end. You can roll it back via JPA or it will be rolled back if an exception is raised.
I mention this because from the wording of your sentence I'm not sure if you are aware of this.
Now, on DI itself, in my (not-so-extensive) experience with DI, I've seen it used mainly to:
Load the ORM (Hibernate) factory/manager
Load Service classes/DAOs into another class to work with them
Sure, there are more scenarios, but these probably cover most of the real usage. Now:
The first one is irrelevant to Play as you get access to your JPA object and transaction automatically
The second one is also largely irrelevant, as you mainly work with static methods in controllers. You may have some helper classes that need to be instantiated, and some may even belong to a hierarchy (common interface), so DI would be beneficial. But you could just as well create your own factory class and get rid of the jars of DI.
There is another matter to consider here: I'm not so sure about Guice, but Spring is not only DI; it also provides a lot of extra functionality that depends on the DI module. So maybe you don't want to use DI in Play, but you do want to take advantage of the Spring tools, and they will use DI, albeit indirectly (via XML configuration).
The problem, in my humble opinion, with the static initialization approach of Play! is that it makes testing harder. Once you approach the HTTP-versus-object-orientation problem with static members and objects that carry the HTTP message data (request and response), you trade the cost of creating new instances for each request against the ability to make your objects loosely coupled with the rest of your project's classes.
A good example of a different design is servlets: they also extend a base class, but they approach the problem with a single instance (by default; there are configurations that enable more instances).
I believe a mix of the two approaches might be better. Having a singleton of each controller would give the same characteristics as a fully static class while allowing dependency injection of some kinds of objects, though not objects with request or session scope, since the controller would then need to be created for every request. Moreover, it would improve testability by inverting control, thus allowing arbitrary injection points.
Dependencies would be injected by the container or by a test, probably using mocks for the heavy stuff that most likely would already have been tested elsewhere.
From my point of view, this static model pushes developers away from testing controllers, because extending FunctionalTest starts the application server and thus pays the price of heavy objects like repositories, services, crawlers, HTTP clients, etc. I don't want to wait for a lot of objects to be bootstrapped just to check whether some code was executed in the controller; tests should be quick and clear, so that developers love them as their programming assistant/guide.
DI is not the ultimate solution to use everywhere... Don't use DI just because you have it at hand... In Play, you don't need DI to develop controllers/models, etc., but sometimes it can be a nice design: IMO, you could use it if you have a service with a well-known interface but you would like to develop this service outside Play, test it outside Play, and even test your Play project with just a dummy service so as NOT to depend on the full service implementation. Therefore DI can be interesting: you plug the service loosely into Play. In fact, this is the original use case for DI, afaik...
I just wrote a blog post about setting up a Play Framework application with Google Guice. http://geeks.aretotally.in/dependency-injection-with-play-framework-and-google-guice
I see some benefits, especially when a component of your application requires different behavior based on a certain context or something like that. But I definitely believe people should be selective about what goes into a DI context.
It shows again that you should only use dependency injection if you really have a benefit. If you have complex services it's useful, but in many cases it's not. Read the chapter about models in the Play documentation.
To give you an example of where you can use DI with Play: perhaps you have to perform a complex calculation, or you create a PDF with a report engine. There I think DI can be useful, especially for testing, and the Guice and Spring modules can help you.
Niels
As of a year and some change later, Play 2.1 now has support for dependency injection in controllers. Here's their demo project using Spring 3, which lays it out pretty clearly.
Edit: here's another example using Guice and Scala, if that's your poison.
How would one typically implement a service layer in an MVC architecture? Is it one object that serves all requests to underlying business objects? Or is more like an object that serves different service objects that in their turn interact with business objects?
So:
Controller -> Service -> getUserById(), or:
Controller -> ServiceManager -> getUserService() -> getUserById()
Also, if the latter is more appropriate, would you configure this ServiceManager object in a bootstrap? In other words, register the different services that you will be needing for your app to the service manager in a bootstrap?
If none of the above is appropriate, what would help me get a better understanding of how a service layer should be implemented?
Thank you in advance.
The way I read this question, there are really two things that should be answered:
A) I would prefer splitting "Service" into "CustomerService" and "OrderService", in other words grouped by domain concepts.
B) Secondly, I'd use dependency injection to get the proper service directly where I need it, so I'm basically using alternative 1. The added abstraction in alternative 2 provides no additional value for me, since the IoC container does the important part.
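As a rough sketch of alternative 1 with domain-grouped services (ICustomerService and Customer are assumed names, and the concrete implementation is bound in the container at bootstrap time):

using System.Web.Mvc;

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public interface ICustomerService
{
    Customer GetCustomerById(int id);
}

public class CustomerController : Controller
{
    private readonly ICustomerService customerService;

    // The container injects the domain service directly; no ServiceManager indirection is needed.
    public CustomerController(ICustomerService customerService)
    {
        this.customerService = customerService;
    }

    public ActionResult Details(int id)
    {
        return View(customerService.GetCustomerById(id));
    }
}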
Using a "facade" is one way to go:
"A facade is an object that provides a simplified interface to a larger body of code"
I personally prefer #2, and yes, it would generally be configured in a bootstrap, or the dependencies would be resolved using some sort of IoC container to give you the actual concrete instances.
I would also like to comment, and yes, I understand this is probably more of a personal preference: try to avoid using the name "Service" layer for these objects. Refer to them as repositories or something else. If you call them services, the term becomes overloaded, because then devs ask, "do you mean, like, a REST or WCF service?". Trust me, we did that on a recent project, and we confuse ourselves all the time when talking about where to make code changes :-P