I'm using Quartz Scheduler in one of my projects. There are two main ways to create a Quartz job:
implement the org.quartz.Job interface
extend org.springframework.scheduling.quartz.QuartzJobBean (which implements the org.quartz.Job interface)
The last part of the QuartzJobBean javadoc is confusing:
*Note that the preferred way to apply dependency injection to Job instances is via a JobFactory:
that is, to specify SpringBeanJobFactory as Quartz JobFactory (typically via
SchedulerFactoryBean's "jobFactory" property).
This allows to implement dependency-injected Quartz Jobs without a dependency on Spring base classes.*
For pure Spring (or Spring Boot) use, I suppose it is better to extend QuartzJobBean. Am I right?
First of all, since a QuartzJobBean is a Job, any API call that accepts a Job will accept a QuartzJobBean, but not vice versa. So if you need a QuartzJobBean because some API call expects you to pass it one, then there's your answer.
Otherwise, the answer depends on whether you want to make use of (and be tied to) the functionality provided by QuartzJobBean. If you look at the source code for that class, you'll see that the sole gain in subclassing QuartzJobBean over implementing Job is that QuartzJobBean performs this logic before passing control to your code:
try {
    // Expose entries from the scheduler context and the merged JobDataMap
    // as bean properties on this job instance
    BeanWrapper bw = PropertyAccessorFactory.forBeanPropertyAccess(this);
    MutablePropertyValues pvs = new MutablePropertyValues();
    pvs.addPropertyValues(context.getScheduler().getContext());
    pvs.addPropertyValues(context.getMergedJobDataMap());
    bw.setPropertyValues(pvs, true); // true = ignore unknown properties
}
catch (SchedulerException ex) {
    throw new JobExecutionException(ex);
}
So if you extend the QuartzJobBean class and implement the executeInternal method, this code runs before your code. If you implement the Job interface and its execute method, it does not. That's the only difference between the two approaches in terms of what actually happens when your job runs.
So to answer your question, ask yourself "do I want to take advantage of the above code?". If the answer is Yes, then extend QuartzJobBean to take advantage of that functionality. If you don't need this added functionality, don't want it, and/or don't want to be locked into the dependencies implied by the above code, then you should implement Job to avoid this code and its dependencies. My personal approach would be to implement Job unless I had some reason to extend QuartzJobBean instead.
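To make the difference concrete, here is a minimal sketch of both styles as two separate job classes; the "recipient" JobDataMap entry and the job bodies are hypothetical:

import org.quartz.Job;
import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;
import org.springframework.scheduling.quartz.QuartzJobBean;

// Plain Quartz: pull values out of the JobDataMap yourself.
public class PlainJob implements Job {
    @Override
    public void execute(JobExecutionContext context) throws JobExecutionException {
        String recipient = context.getMergedJobDataMap().getString("recipient");
        // ... do the work
    }
}

// Spring flavor: matching JobDataMap / scheduler-context entries are
// injected as bean properties before executeInternal runs.
public class SpringJob extends QuartzJobBean {
    private String recipient;

    public void setRecipient(String recipient) { // called by QuartzJobBean's property binding
        this.recipient = recipient;
    }

    @Override
    protected void executeInternal(JobExecutionContext context) throws JobExecutionException {
        // ... do the work; 'recipient' is already populated
    }
}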
Related
In a Spring Boot Web Application layout, I have defined a Service Interface named ApplicationUserService. An implementation called ApplicationUserServiceImpl implements all the methods present in ApplicationUserService.
Now, I have a Controller called ApplicationUserController that calls all the methods of ApplicationUserServiceImpl under different @GetMapping annotations.
As suggested by my instructor, I have defined a Dependency Injection as follows:
public class ApplicationUserController {

    private final ApplicationUserService applicationUserService; // this variable will work as an object now

    public ApplicationUserController(ApplicationUserService applicationUserService) {
        this.applicationUserService = applicationUserService;
    }

    @GetMapping
    // REST OF THE CODE
}
I am new to Spring Boot. I have read plain-English explanations of Dependency Injection, and I understand that the basic idea is to separate the dependency from its usage. But I am totally confused about how this works in my case.
My Questions:
Here ApplicationUserService is an interface, and its implementation has various methods defined. In the code above, applicationUserService now has access to every method from ApplicationUserServiceImpl. How did that happen?
I want to know how the object creation works here.
Could you tell me the difference between not using DI and using DI in this case?
Answers
The interface layer is used for abstraction of the code; it is really helpful when you want to provide different implementations for it. When the instance is created, you are simply creating an ApplicationUserServiceImpl and assigning it to a reference variable of type ApplicationUserService; this concept is called upcasting. Even though you have created an object of the child/implementation class, you can only access the properties and methods defined on the parent class/interface. Please refer to this for more info:
https://stackoverflow.com/questions/62599259/purpose-of-service-interface-class-in-spring-boot#:~:text=There%20are%20various%20reasons%20why,you%20can%20create%20test%20stubs.
In this example, when an ApplicationUserController object is created, the Spring engine (or IoC container) will create an object of ApplicationUserServiceImpl (since there is a single implementation of the interface) and pass it to the controller object as a constructor argument, which you assign to the reference variable. Also look up the concept called IoC (Inversion of Control).
As explained above, Spring takes care of object creation (the object lifecycle) rather than you doing it explicitly, which makes the objects loosely coupled. Here, the control over creating the instances lies with Spring.
The non-DI way of doing this would be:
private ApplicationUserService applicationUserService = new ApplicationUserServiceImpl();
Hope I have answered your questions.
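To illustrate what the container is doing, here is a rough sketch. The @Service annotation is one common way to register the implementation as a bean (your project may use another), and the "manual" lines further below are only the conceptual equivalent of what Spring performs at startup:

import org.springframework.stereotype.Service;

// Registering the implementation as a bean is what makes it a candidate
// for injection wherever an ApplicationUserService is required.
@Service
public class ApplicationUserServiceImpl implements ApplicationUserService {
    // ... method implementations
}

Conceptually (ignoring proxies and bean scopes), the IoC container then does the equivalent of:

ApplicationUserService service = new ApplicationUserServiceImpl(); // upcasting to the interface
ApplicationUserController controller = new ApplicationUserController(service); // constructor injection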
This analogy may help you understand better. Consider an example wherein you have the ability to cook. According to the IoC principle (the principle on which DI is based), you can invert the control: instead of cooking the food yourself, you can order from outside and receive food at your doorstep. The process of food being delivered to you at your doorstep is called Inversion of Control.
You do not have to cook yourself; instead, you can order the food and let a delivery executive deliver it for you. That way you do not have to take care of the additional responsibilities and can just focus on the main work.
Now that you know the principle behind Dependency Injection, let me take you through the types of Dependency Injection.
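In brief, the usual types are constructor, setter, and field injection; a minimal sketch, reusing the ApplicationUserService from the question:

import org.springframework.beans.factory.annotation.Autowired;

// 1. Constructor injection (generally preferred): the dependency is mandatory and can be final.
class ConstructorInjectedController {
    private final ApplicationUserService service;

    ConstructorInjectedController(ApplicationUserService service) {
        this.service = service;
    }
}

// 2. Setter injection: suits optional or swappable dependencies.
class SetterInjectedController {
    private ApplicationUserService service;

    @Autowired
    void setService(ApplicationUserService service) {
        this.service = service;
    }
}

// 3. Field injection: terse, but harder to unit-test without a container.
class FieldInjectedController {
    @Autowired
    private ApplicationUserService service;
}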
I need to conditionally create one of three possible implementations of a service, depending upon the environment detected by a Spring application at runtime. If Service A is available, then I want to create a concrete implementation class that uses Service A as a dependency. If Service A is not available, then I want to create an implementation using Service B as a dependency. And so on.
Classes which depend on the implementation will Autowire the Interface and not care what the underlying Service was that got selected for the particular environment.
My first stab at this was to implement multiple @Bean methods which either return a bean or null, depending on whether the service is available, and to then have a separate @Configuration class that uses @Autowired(required=false) for the two possible services, conditionally creating the implementation depending on which of the autowired fields is non-null.
The problem here is that with required=false, Spring doesn't appear to wait around for the candidates to be constructed; that is to say, the class that tries to pick the implementation might be constructed before one or both of the required=false beans has been constructed, meaning one or both might always be null, regardless of whether they would have initialized correctly.
It kind of feels like I'm going against the grain at this point, so I'm looking for advice on the "right" way to do this sort of thing, where a whole set of beans might get switched out based on the availability of some outside service or environment.
Profiles don't look like the right answer, because I won't know until after my Service beans try to initialize which implementation I want to choose; I certainly won't know it at the time I create the context.
@Order doesn't achieve the goal either. Nor does @Conditional with a test on the existence of the bean (because it still might not be constructed yet). Same problem with FactoryBean: it does no good to check for the existence of beans that might not have been constructed at the time the FactoryBean is asked to create an instance.
What I really need to do is create a Bean based on the availability of other beans, but only AFTER those beans have at least had a chance to try to initialize.
Spring Profiles is your friend. You can set the current profile by way of an environment variable, a command-line argument, or other methods. You can annotate a Spring-managed component so that it's only created for a certain profile.
Spring Profiles from the Spring Documentation
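For example (the service and profile names here are hypothetical):

import org.springframework.context.annotation.Profile;
import org.springframework.stereotype.Service;

// Only instantiated when the "serviceA" profile is active,
// e.g. when started with -Dspring.profiles.active=serviceA
@Service
@Profile("serviceA")
class ServiceABackedImpl implements SomeService {
    // ...
}

// Chosen instead when the "serviceB" profile is active.
@Service
@Profile("serviceB")
class ServiceBBackedImpl implements SomeService {
    // ...
}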
Well, in this case it turned out to be a tangential mistake that caused the whole wrong behavior.
To give some background, my first, naive (but workable) approach looked like this:
@Autowired(required = false)
@Qualifier(RedisConfig.HISTORY)
private RLocalCachedMap<String, History> redisHistoryMap;

@Autowired(required = false)
@Qualifier(HazelcastConfig.HISTORY)
private IMap<String, History> hazelcastHistoryMap;

// RequestHistory is an interface
@Bean
public RequestHistory requestHistory() {
    if (redisHistoryMap != null) {
        return new RedisClusteredHistory(redisHistoryMap);
    } else if (hazelcastHistoryMap != null) {
        return new HazelcastClusteredHistory(hazelcastHistoryMap);
    } else {
        return new LocalRequestHistory(); // dumb hashmap
    }
}
In other @Configuration classes, if the beans that get @Autowired here aren't available (due to missing configuration, exceptions during initialization, etc.), the @Bean methods that create them return null.
The observed behavior was that this @Bean method was getting called after the RLocalCachedMap<> @Bean method, but before Spring attempted to create the IMap<> by calling its @Bean method. I had incorrectly thought that this had something to do with required=false, but in fact that had nothing to do with it.
What actually happened was that I had accidentally used the same constant for both @Bean names (and consequently both @Qualifiers), so presumably Spring couldn't tell the difference when it was calculating its dependency graph for this @Configuration class, because the two @Autowired beans appeared to be the same thing (they had the same name).
(There's a secondary reason for using @Qualifier in this case, which I won't go into here, but suffice it to say it's possible to have many maps of the same type.)
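Reconstructed, the mistake amounted to something like this (the constant values are hypothetical):

// Both configuration classes exposed their bean under the same name...
class RedisConfig {
    public static final String HISTORY = "historyMap";
}
class HazelcastConfig {
    public static final String HISTORY = "historyMap"; // ...colliding with RedisConfig.HISTORY
}

// The fix was simply to make the bean names unique, e.g.
//   RedisConfig.HISTORY     -> "redisHistoryMap"
//   HazelcastConfig.HISTORY -> "hazelcastHistoryMap"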
Once I qualified the names better, the code did exactly what I wanted it to, albeit in a way that's somewhat inelegant/ugly.
At some point I'll go back and see if it looks more elegant / less ugly and works just as well to use @Conditional and @Primary instead of the if/else foulness.
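For reference, a @Conditional variant might look roughly like this. RedisAvailableCondition and the redis.address property are hypothetical stand-ins for a real availability check, and RequestHistory, History and RLocalCachedMap are the types from the snippet above:

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Condition;
import org.springframework.context.annotation.ConditionContext;
import org.springframework.context.annotation.Conditional;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.type.AnnotatedTypeMetadata;

// Hypothetical condition: only create the Redis-backed bean when Redis looks configured.
class RedisAvailableCondition implements Condition {
    @Override
    public boolean matches(ConditionContext context, AnnotatedTypeMetadata metadata) {
        return context.getEnvironment().containsProperty("redis.address");
    }
}

@Configuration
class RequestHistoryConfig {

    @Bean
    @Conditional(RedisAvailableCondition.class)
    RequestHistory redisRequestHistory(RLocalCachedMap<String, History> redisHistoryMap) {
        return new RedisClusteredHistory(redisHistoryMap);
    }

    // The Hazelcast variant would get its own Condition, and the plain in-memory
    // fallback could be guarded by a condition that neither other bean exists
    // (Spring Boot's @ConditionalOnMissingBean makes that part much easier).
}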
The lesson here is that if you explicitly name beans, make absolutely sure your names are unique across your application, even if you plan to swap things around like this.
I have a parent pom with a Dropwizard service. In that service I have an exception mapper declaration, for example:
@Provider
class ExceptionMapperForClassA {...}
Now in my child pom I'm extending this parent service, and I want to create a new exception mapper for ClassA that extends ExceptionMapperForClassA.
I couldn't find any information in the Jersey docs describing Jersey's behaviour when two exception mappers are declared for the same class.
https://jersey.java.net/documentation/latest/representations.html
Actually, the question is: how do I override an exception mapper and be sure that only my exception mapper will be called?
I ran a quick test, and the behaviour is as follows:
If you have 2 exception mappers mapping the same exception, it appears that the provider whose name sorts first will be the one called. E.g. I have a ResourceNotFound mapper that I extend with TestMapper; the latter is never called.
If I rename the latter to ATestMapper, the first will never be called.
I would NOT rely on that behaviour, however, because I don't think it is defined anywhere (I might be wrong).
If you want to override ExceptionMappers, I would recommend creating a generic base, for example:
public abstract class BaseExceptionMapper<T extends Exception> extends LoggingExceptionMapper<T> {
...
}
Define all your shared implementation in the base mapper, and extend it with specific mappers. That way you have exactly one mapper per Exception.
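A concrete mapper then stays tiny. For example, a minimal sketch (assuming the base class leaves toResponse overridable; the NotFoundException handling is just an illustration):

import javax.ws.rs.NotFoundException;
import javax.ws.rs.core.Response;
import javax.ws.rs.ext.Provider;

// One mapper per exception type; shared logging etc. lives in BaseExceptionMapper.
@Provider
public class NotFoundMapper extends BaseExceptionMapper<NotFoundException> {

    @Override
    public Response toResponse(NotFoundException exception) {
        // type-specific handling only
        return Response.status(Response.Status.NOT_FOUND).build();
    }
}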
If you cannot do the above and you absolutely must have 2 mappers of the same type, you can explicitly override the first by registering the second programmatically. For this, in your Dropwizard application starter class, in the run method, you can do:
JerseyEnvironment jersey = environment.jersey();
jersey.getResourceConfig().register(TestMapper.class);
This will explicitly register this mapper as the exception mapper to be used.
Unless that has changed, it's impossible to override an existing exception mapper. I don't remember where I read it, but it said that you'd have a randomly chosen mapper responding if you register more than one.
As my answer here suggests, you need to unregister the previous exception mapper and register yours, but that also became impossible after Dropwizard 0.8. Unless 0.9 offers a solution for it, you should focus on making your parent pom take a parameter from outside and decide whether to register ExceptionMapperForClassA or not.
I've built some code that can rebuild expression trees so that I can avoid triggering the "no supported translation to SQL" exception, and it works fine as long as I call my function to replace the IQueryable. The problem is that I'd like it to be applied automatically to all queries in my project, without having to worry about calling this function on each one separately. Is there any way I can intercept everything?
I've tried using Reflection.Emit to create a wrapping provider and using reflection to replace it on the data context and it turns out that even with Reflection.Emit I can't implement the internal IProvider interface.
I've also tried replacing the provider with a RealProxy based class and that works for non-compiled queries, but the CompiledQuery.Execute method is throwing an exception because it won't cast to the SqlProvider class. I tried replacing the response to the Compile method on the provider with another proxy so I could intercept the Execute call, but that failed a check on the return type being correct.
I'm open to any other ideas, or ways of using what I've already tried.
It's hard to tell whether this is an applicable solution without seeing your code, but if you have a DI-friendly app architecture you can implement an interceptor and have your favorite IoC container emit the appropriate type for you, at run-time.
Esoteric? A little. Consider an interface like this:
public interface ISomeService
{
    IEnumerable<SomeEntity> GetSomeEntities();
    // ...
}
This interface might be implemented like this:
public class SomeService : ISomeService
{
    private readonly DbContext _context;     // this is a dependency!
    private readonly IQueryTweaker _tweaker; // this is a dependency!

    public SomeService(DbContext context, IQueryTweaker tweaker) // this is constructor injection!
    {
        _context = context;
        _tweaker = tweaker;
    }

    public IEnumerable<SomeEntity> GetSomeEntities()
    {
        return _tweaker.TweakTheQuery(_context.SomeEntities).ToList();
    }
}
Every time you implement a method of the ISomeService interface, there's always a call to _tweaker.TweakTheQuery() wrapping the IQueryable. That not only gets boring, it also feels like a missing feature: the same feeling you'd get from wrapping every one of these calls inside a try/catch block or, if you're familiar with MVVM in WPF, from raising that annoying PropertyChanged event in every single property setter of your ViewModel.
With DI interception, you factor this requirement out of your "normal" code and into an "interceptor": you basically tell the IoC container that instead of binding ISomeService directly to the SomeService implementation, you're going to be decorating it with an interceptor and emitting another type, perhaps SomeInterceptedService (the name is irrelevant; the actual type only exists at run-time), which "injects" the desired behavior into the desired methods. Simple? Not exactly.
If you haven't designed your code with DI in mind (are your dependencies "injected" into your classes' constructor?), it could mean a major refactoring.
The first step breaks your code: remove the IQueryTweaker dependency and all the TweakTheQuery calls from all ISomeService implementations, to make them look like this (notice that the method to be intercepted is now virtual):
public class SomeService : ISomeService
{
    private readonly DbContext _context;

    public SomeService(DbContext context)
    {
        _context = context;
    }

    public virtual IEnumerable<SomeEntity> GetSomeEntities()
    {
        return _context.SomeEntities.ToList();
    }
}
The next step is to configure the IoC container so that it knows to inject the SomeService implementation whenever a type's constructor requires an ISomeService:
_kernel.Bind<ISomeService>().To<SomeService>();
At that point you're ready to configure the interception - if using Ninject this could help.
But before jumping into that rabbit hole you should read this article, which shows how decorator and interceptor are related.
The key point is, you're not intercepting anything that's internal to LINQ to SQL or the .NET framework itself; you're intercepting your own method calls, wrapping them with your own code, and with a little bit of help from any decent IoC container you'll be intercepting the calls to the methods that call upon LINQ to SQL, rather than the direct calls to LINQ to SQL itself. Essentially, the IQueryTweaker dependency becomes a dependency of your interceptor class, and you only code its usage once.
An interesting thing about DI interception is that interceptors can be combined, so you can have an ExecutionTimerServiceInterceptor on top of an AuditServiceInterceptor, on top of a CircuitBreakerServiceInterceptor... and the best part is that you can configure your IoC container so that you can completely forget it exists. As you add more service classes to the application, all you need to do is follow a naming convention you've defined, and voilà: you've just written a service that not only accomplishes all the strictly data-related tasks you've just coded, but that also disables itself for 3 minutes if the database server is down and remains disabled until it's back up; that service also logs all inserts, updates and deletes, and stores its execution time in a database for performance analysis. The term automagical seems appropriate.
This technique, interception, can be used to address cross-cutting concerns; another way to address those is AOP, although some articles (and Mark Seemann's excellent Dependency Injection in .NET) clearly demonstrate how AOP frameworks are a less ideal solution than DI interception.
I found out that there is an interface called GraphRepository. I have a repository for users implementing a homemade interface that does its job, but I was wondering: shouldn't I implement GraphRepository instead? Even if it will be quite long to implement and some methods will be useless, I think it is a standard, and I have already re-coded a lot of the methods defined in this interface.
So should I write "YAGNI" code or not respect the standard ?
What is your advice ?
You don't need to actually implement GraphRepository, just extend it. The principle of Spring Data is that all the boilerplate CRUD code is taken care of for you (by proxying at startup time), so all you have to do is create an interface for your specific entity extending GraphRepository, and then add only the specific methods you require.
For example, if I have an entity CustomerNode, then to get the standard CRUD methods I can create a new interface CustomerNodeRepository extends GraphRepository<CustomerNode,Long>. All the methods from GraphRepository (e.g. save, findAll, findOne, delete, deleteAll, etc.) are now accessible from CustomerNodeRepository and implemented by Spring-Data-Neo4j, without my having to write a single line of implementation code.
The pattern now lets you spend your time on the repository code specific to your domain (e.g. findByNameAndDateOfBirth) rather than the simple CRUD stuff; see the sketch below.
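In code, the whole repository can be as small as this (the finder's parameter types are assumptions; Spring Data derives the query from the method name):

import java.util.Date;
import org.springframework.data.neo4j.repository.GraphRepository;

// All CRUD methods are inherited from GraphRepository;
// the finder below is generated by Spring Data from its method name.
public interface CustomerNodeRepository extends GraphRepository<CustomerNode, Long> {
    CustomerNode findByNameAndDateOfBirth(String name, Date dateOfBirth);
}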
The Spring Data package is very useful for repository interaction. It can reduce huge amounts of code (I have seen 80%+ reductions in lines of code), and I would highly recommend using it.
Edit: implementing custom behaviour
If you want to add your own custom behaviour to a repository method, you merge interfaces with a custom implementation. For example, let's say I want to create a method called findCustomerNodeBySomeStrangeCriteria, and to do this I actually want to link off to a relational database to perform the function.
First, we define a separate, standalone interface that only includes our 'extra' method.
public interface CustomCustomerNodeRepository {
    List<CustomerNode> findCustomerNodeBySomeStrangeCriteria(Object strangeCriteria);
}
Next, we update our normal interface to extend not only GraphRepository, but our new custom one too:
public interface CustomerNodeRepository extends GraphRepository<CustomerNode,Long>, CustomCustomerNodeRepository {
}
The last piece is to actually implement our findCustomerNodeBySomeStrangeCriteria method:
public class CustomerNodeRepositoryImpl implements CustomCustomerNodeRepository {
    public List<CustomerNode> findCustomerNodeBySomeStrangeCriteria(Object criteria) {
        // implementation code
    }
}
So, there are a couple of points to note:
we create a separate interface to define any custom methods that have custom implementations (as distinct from Spring Data-compatible "findBy..." methods)
our CustomerNodeRepository interface (our 'main' interface) extends both GraphRepository and our 'custom' one
we implement only the 'custom' method, in a class that implements only the custom interface
the 'custom' implementation class must (by default) be named after our 'main' interface plus Impl in order to be picked up by Spring Data (so in this case CustomerNodeRepositoryImpl)
Under the covers, Spring Data delivers a proxy implementation of CustomerNodeRepository as a merge of the auto-built GraphRepository and our class implementing CustomCustomerNodeRepository. The reason for the naming of the implementation class is to allow Spring Data to pick it up easily/successfully (this default can be overridden so it doesn't look for *Impl).
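From the caller's point of view the merge is invisible; a sketch of client code (CustomerService and demo are hypothetical names for a consumer):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

@Service
public class CustomerService {

    @Autowired
    private CustomerNodeRepository repository; // Spring Data's merged proxy

    public void demo() {
        repository.save(new CustomerNode());                          // auto-built CRUD method
        repository.findCustomerNodeBySomeStrangeCriteria("criteria"); // custom implementation
    }
}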