I have a parent POM with a Dropwizard service. In that service I have some exception mapper declarations, for example:
@Provider
class ExceptionMapperForClassA {...}
Now in my child POM I'm extending this parent service, and I want to create a new exception mapper for ClassA that extends ExceptionMapperForClassA.
I couldn't find anything in the Jersey documentation describing Jersey's behaviour when two exception mappers are declared for the same class:
https://jersey.java.net/documentation/latest/representations.html
Actually, the question is: how do I override an exception mapper and be sure that only my exception mapper will be called?
I ran a quick test and the behaviour is as follows:
If you have two exception mappers mapping the same exception, it appears that the provider that sorts first by class name is the one called. E.g. I have a ResourceNotFound mapper that I extend with TestMapper; the latter is never called.
If I rename the latter to ATestMapper, the first is never called.
I would NOT rely on that behaviour, however, because I don't think it is defined anywhere (I might be wrong).
If you want to override ExceptionMappers, I would recommend creating a base that is generic, for example:
public abstract class BaseExceptionMapper<T extends Exception> extends LoggingExceptionMapper<T> {
    ...
}
Define all your shared implementation in the base mapper, and extend it with specific mappers. That way you have exactly one mapper per Exception.
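For illustration, a concrete mapper built on such a base might look like the sketch below; ClassAException and the chosen status are placeholders for this example, not anything from the original setup.

import javax.ws.rs.core.Response;
import javax.ws.rs.ext.Provider;

// Hedged sketch: ClassAException and the response details are illustrative.
@Provider
public class ClassAExceptionMapper extends BaseExceptionMapper<ClassAException> {

    @Override
    public Response toResponse(ClassAException exception) {
        // Shared concerns (logging, common headers) live in the base class;
        // only the status and entity are specific to this mapper.
        return Response.status(Response.Status.BAD_REQUEST)
                .entity(exception.getMessage())
                .build();
    }
}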
If you cannot do the above and you absolutely must have two mappers of the same type, you can explicitly override the first by programmatically registering the second. For this, in your Dropwizard application class's run method, you can do:
JerseyEnvironment jersey = environment.jersey();
jersey.getResourceConfig().register(TestMapper.class);
This will explicitly set this mapper as the exception mapper to be used.
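For context, here is roughly where that registration would sit in a Dropwizard application; MyApplication and MyConfiguration are placeholder names, not from the question.

import io.dropwizard.Application;
import io.dropwizard.jersey.setup.JerseyEnvironment;
import io.dropwizard.setup.Environment;

// Hedged sketch of the registration in context; class names are illustrative.
public class MyApplication extends Application<MyConfiguration> {

    @Override
    public void run(MyConfiguration configuration, Environment environment) {
        JerseyEnvironment jersey = environment.jersey();
        // Registering the overriding mapper explicitly, as described above.
        jersey.getResourceConfig().register(TestMapper.class);
    }
}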
Unless that has changed, it's impossible to override an existing exception mapper. I don't remember where I read it, but it said that a randomly chosen mapper would respond if you register more than one.
As my answer here suggests, you need to unregister the previous exception mapper and register yours, but that also became impossible after Dropwizard 0.8. Unless 0.9 offers a solution for that, you should focus on making your parent POM take a parameter from outside and decide whether to register ExceptionMapperForClassA or not.
I'm using Quartz Scheduler in one of my projects. There are two main ways to create a Quartz job:
implement the org.quartz.Job interface
extend org.springframework.scheduling.quartz.QuartzJobBean (which implements the org.quartz.Job interface)
The last part of the QuartzJobBean javadoc is confusing:
"Note that the preferred way to apply dependency injection to Job instances is via a JobFactory: that is, to specify SpringBeanJobFactory as the Quartz JobFactory (typically via SchedulerFactoryBean's "jobFactory" property). This allows to implement dependency-injected Quartz Jobs without a dependency on Spring base classes."
For pure Spring (or Spring Boot) use, I suppose that it is better to extend QuartzJobBean. Am I right?
First of all, since a QuartzJobBean is a Job, any API call that will accept a Job will accept a QuartzJobBean, but not vice versa. So if you need a QuartzJobBean because some API call expects you to pass it one, then there's your answer.
Otherwise, the answer depends on whether you want to make use of (and be tied to) the functionality provided by QuartzJobBean. If you look at the source code for that class, you'll see that the sole gain in subclassing QuartzJobBean over implementing Job is that QuartzJobBean performs this logic before passing control to your code:
try {
    // Populate this job's bean properties from the scheduler context
    // and the merged JobDataMap before the job runs.
    BeanWrapper bw = PropertyAccessorFactory.forBeanPropertyAccess(this);
    MutablePropertyValues pvs = new MutablePropertyValues();
    pvs.addPropertyValues(context.getScheduler().getContext());
    pvs.addPropertyValues(context.getMergedJobDataMap());
    bw.setPropertyValues(pvs, true);
}
catch (SchedulerException ex) {
    throw new JobExecutionException(ex);
}
So if you extend the QuartzJobBean class and implement the executeInternal method, this code runs before your code. If you implement the Job interface and the execute method, it does not. That's the only difference between the two approaches in terms of what actually happens when your job runs.
So to answer your question, ask yourself "do I want to take advantage of the above code?". If the answer is Yes, then extend QuartzJobBean to take advantage of that functionality. If you don't need this added functionality, don't want it, and/or don't want to be locked into the dependencies implied by the above code, then you should implement Job to avoid this code and its dependencies. My personal approach would be to implement Job unless I had some reason to extend QuartzJobBean instead.
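To make the trade-off concrete, here is a minimal sketch of both approaches; the job names and the "target" data-map key are made up for illustration, and each class would live in its own file.

import org.quartz.Job;
import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;
import org.springframework.scheduling.quartz.QuartzJobBean;

// Plain Quartz: no Spring dependency; values are read from the JobDataMap by hand.
public class PlainJob implements Job {
    @Override
    public void execute(JobExecutionContext context) throws JobExecutionException {
        String target = context.getMergedJobDataMap().getString("target");
        // ... do the work with target ...
    }
}

// Spring flavour: the snippet quoted above populates matching bean properties
// (here, setTarget) from the JobDataMap before executeInternal runs.
public class SpringJob extends QuartzJobBean {

    private String target;

    public void setTarget(String target) {
        this.target = target;
    }

    @Override
    protected void executeInternal(JobExecutionContext context) throws JobExecutionException {
        // ... do the work with this.target ...
    }
}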
I have a need to conditionally create one of three possible implementations of a service, depending upon the environment detected by a Spring application at runtime. If Service A is available, then I want to create a concrete implementation class that uses Service A as a dependency. If Service A is not available, then I want to create an implementation using Service B as a dependency. And so on.
Classes which depend on the implementation will autowire the interface and not care which underlying service got selected for the particular environment.
My first stab at this was to implement multiple @Bean methods which either return a bean or null, depending on whether the service is available, and to then have a separate @Configuration class which @Autowire(required=false)s the two possible services, conditionally creating the implementation depending on which of the @Autowired fields is not null.
The problem here is that when required=false, Spring doesn't appear to guarantee that it waits around for the candidates to be constructed; that is to say, the class which tries to pick the implementation might be constructed before one or both of the required=false beans gets constructed, ensuring that one or both might always be null, regardless of whether they would have initialized correctly.
It kind of feels like I'm going against the grain at this point, so I'm looking for advice on the "right" way to do this sort of thing, where a whole set of beans might get switched out based on the availability of some outside service or environment.
Profiles don't look like the right answer, because I won't know which implementation I want to choose until after my service beans try to initialize; I certainly won't know it at the time I create the context.
@Order doesn't achieve the goal either. Nor does @Conditional with a test for the existence of the bean (because it still might not be constructed yet). Same problem with FactoryBean: it does no good to check for the existence of beans that might not have been constructed at the time the FactoryBean is asked to create an instance.
What I really need to do is create a Bean based on the availability of other beans, but only AFTER those beans have at least had a chance to try to initialize.
Spring Profiles is your friend. You can set the current profile by way of an environment variable, a command-line argument, or other methods. You can annotate a Spring-managed component so that it's only created for a certain profile.
Spring Profiles from the Spring Documentation
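As a minimal, hypothetical sketch of that idea (ServiceClient and the profile names are made up, not from the question), one implementation per profile would look like this, activated with e.g. -Dspring.profiles.active=serviceA:

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;

// Hedged sketch: ServiceClient and its implementations are placeholders.
@Configuration
public class ServiceClientConfig {

    @Bean
    @Profile("serviceA")
    public ServiceClient serviceAClient() {
        return new ServiceAClient(); // created only when the "serviceA" profile is active
    }

    @Bean
    @Profile("serviceB")
    public ServiceClient serviceBClient() {
        return new ServiceBClient(); // created only when the "serviceB" profile is active
    }
}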
Well, in this case it turned out to be a tangential mistake that caused the whole wrong behavior.
To give some background, my first, naive (but workable) approach looked like this:
@Autowired(required = false)
@Qualifier(RedisConfig.HISTORY)
private RLocalCachedMap<String, History> redisHistoryMap;

@Autowired(required = false)
@Qualifier(HazelcastConfig.HISTORY)
private IMap<String, History> hazelcastHistoryMap;

// RequestHistory is an interface
@Bean
public RequestHistory requestHistory() {
    if (redisHistoryMap != null) {
        return new RedisClusteredHistory(redisHistoryMap);
    } else if (hazelcastHistoryMap != null) {
        return new HazelcastClusteredHistory(hazelcastHistoryMap);
    } else {
        return new LocalRequestHistory(); // dumb hashmap
    }
}
In other @Configuration classes, if the beans that get @Autowired here aren't available (due to missing configuration, exceptions during initialization, etc.), the @Bean methods that create them return null.
The observed behavior was that this @Bean method was getting called after the RLocalCachedMap<> @Bean method, but before Spring attempted to create the IMap<> by calling its @Bean method. I had incorrectly thought that this had something to do with required=false, but in fact that had nothing to do with it.
What actually happened was that I accidentally used the same constant for both @Bean names (and consequently @Qualifiers), so presumably Spring couldn't tell the difference when it was calculating its dependency graph for this @Configuration class, because the two @Autowired beans appeared to be the same thing (they had the same name).
(There's a secondary reason for using @Qualifier in this case, which I won't go into here, but suffice it to say it's possible to have many maps of the same type.)
Once I qualified the names better, the code did exactly what I wanted it to, albeit in a way that's somewhat inelegant/ugly.
At some point I'll go back and see whether it looks more elegant, and works just as well, to use @Conditional and @Primary instead of the if/else foulness.
The lesson here is that if you explicitly name beans, make absolutely sure your names are unique across your application, even if you plan to swap things around like this.
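In sketch form, the fix amounted to nothing more than making the two constants resolve to different bean names (the values below are illustrative, and each class lives in its own file):

class RedisConfig {
    static final String HISTORY = "redisHistoryMap";
}

class HazelcastConfig {
    static final String HISTORY = "hazelcastHistoryMap"; // previously identical to RedisConfig.HISTORY
}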
I have a few classes that interact with databases (more than one). Some classes are reused, so for example "obs.table1" is used to interact with table1 in database "obs", while "ref.table1" is used to interact with table1 in database "ref". These databases are at different URLs, and each gets its own connection pool, etc. obs.table1 and ref.table1 are both instances of MyTable1Class, defined in the beans file.
I have a pointcut that intercepts calls to methods annotated with @Transactional or with a custom annotation @MyTablesAnnotation, and I have it set up so those calls all get routed into an @Around advice.
This all works and the flow through the advice is correct.
What I am trying to add is reporting on what is going on in there. Currently I can tell where I am, but I can't tell whether it was the obs.table1 or the ref.table1 object that got me there.
Is there a way to extract the bean id of the object whose method the advice was invoked on?
The only thing I do with the ProceedingJoinPoint that is passed to the advice is call .proceed() on it; the rest is just various checks and catches. I see that I can get either the target class or the proxy class out of it, but I'm not sure how to go from there to knowing what the bean id was.
Is it possible?
Firstly, it is not recommended to depend on the bean id, as it creates tight coupling with the framework.
To quote from the docs: "Note that it is not usually recommended that an object depend on its bean name, as this represents a potentially brittle dependence on external configuration, as well as a possibly unnecessary dependence on a Spring API."
Now, to answer your question: yes, it is possible to fetch the name of a bean via org.springframework.beans.factory.BeanNameAware.
The class for which you require the bean name should implement it, and Spring will auto-magically inject the name of the bean. However, there is a gotcha you should be aware of, which is mentioned in the docs here.
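A minimal sketch, assuming the MyTable1Class from the question (the getter is an addition so the advice has something to call):

import org.springframework.beans.factory.BeanNameAware;

// Spring calls setBeanName() while creating the bean, so each instance
// ends up knowing its own id, e.g. "obs.table1" or "ref.table1".
public class MyTable1Class implements BeanNameAware {

    private String beanName;

    @Override
    public void setBeanName(String name) {
        this.beanName = name;
    }

    public String getBeanName() {
        return beanName;
    }
}

Inside the @Around advice, the name could then be recovered from the join point with something like ((MyTable1Class) pjp.getTarget()).getBeanName().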
I have a servlet-based application which currently uses an injected HashMap of command processors to process user-entered commands. This works very well, but I need to modify it so that each instance of a command processor is unique.
The new requirement comes from the need to "pipe" the output of one command into another, so if the command processors remain single instances, "piping" a list into a list would be problematic.
I still need to be able to map the class that handles the command to the command text.
My first thought was to change the HashMap from mapping the command to an instance of the command processor to mapping it to the class name, and using that to instantiate an instance of the class. But that does not work, due to the need to configure some of the commands with, for example, a list of options.
I have looked at making the beans prototypes, which would seem to do what I want regarding getting a new instance of the configured bean, but I am confused as to how I can map this; I was thinking I could use the bean ID.
I am now at the stage of complete confusion and can't think how to do this.
I am aware that the explanation is a little light, but this is a reflection of my confusion, and I suspect that the greatest help will come from requests for clarification, which will help to get my head in order.
You could use request-scoped beans:
@Component
@Scope(value = WebApplicationContext.SCOPE_REQUEST, proxyMode = ScopedProxyMode.TARGET_CLASS)
public class CommandProcessor {
}
You can just inject CommandProcessor in your code, and Spring will make sure you get a different instance for every user request. You will also need CGLIB on your classpath.
If I got your requirements right, you either need a factory method in your command class or a FactoryBean that creates the instances.
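One possible shape of that, as a hedged sketch (CommandProcessor, ListCommandProcessor and the bean names are illustrative, not from the question): declare each configured processor as a prototype and ask the context for a fresh instance per command.

import java.util.Arrays;
import java.util.Map;
import org.springframework.beans.factory.config.ConfigurableBeanFactory;
import org.springframework.context.ApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Scope;

// Each configured processor is a prototype, so every lookup yields a new,
// fully configured instance. (Classes shown together for brevity.)
@Configuration
class CommandConfig {

    @Bean
    @Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE)
    public ListCommandProcessor listProcessor() {
        ListCommandProcessor p = new ListCommandProcessor();
        p.setOptions(Arrays.asList("-a", "-l")); // per-command configuration still lives here
        return p;
    }
}

class CommandDispatcher {

    private final ApplicationContext context;
    private final Map<String, String> commandToBeanName; // e.g. "list" -> "listProcessor"

    CommandDispatcher(ApplicationContext context, Map<String, String> commandToBeanName) {
        this.context = context;
        this.commandToBeanName = commandToBeanName;
    }

    CommandProcessor processorFor(String command) {
        // getBean() returns a brand-new instance for prototype-scoped beans.
        return context.getBean(commandToBeanName.get(command), CommandProcessor.class);
    }
}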
I want to be able to set default values for some fields in my domain classes.
Till now I had a class which stored a Map of settings for my whole project, with a task in mind to move this map into a Redis database.
The day has come, and I moved all the data to Redis and created a nice Spring bean to get/set the values.
However...
it seems that default values are set on the domain class instance before the bean is injected.
This kind of breaks the whole process.
Also... there's an issue with unit tests.
I've created a class which implements the same interface as the Spring bean and holds test values. I wanted to inject it into the domain classes, but this fails as well.
So right now I'm trying to find a good way to handle externally stored default values for my domain classes, with the ability to run unit tests.
Any thoughts?
There are a few different approaches you could take:
Introduce a separate bean with the default values, so that those are supplied the same way as they were before. In a separate, higher-level context, or later on in application startup, you could then override the bean definition with the one that pulls from the database
Use a BeanPostProcessor or BeanFactoryPostProcessor to specify the default values, then use your new bean for retrieving new values (see the sketch after this list)
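A hedged sketch of the second option; DomainDefaults and HasDefaults are placeholder names invented for this example, not a real API:

import org.springframework.beans.factory.config.BeanPostProcessor;

// Applies defaults to every matching bean Spring creates, before init callbacks.
public class DefaultValuePostProcessor implements BeanPostProcessor {

    private final DomainDefaults defaults; // placeholder: wraps the settings map

    public DefaultValuePostProcessor(DomainDefaults defaults) {
        this.defaults = defaults;
    }

    @Override
    public Object postProcessBeforeInitialization(Object bean, String beanName) {
        if (bean instanceof HasDefaults) {
            ((HasDefaults) bean).applyDefaults(defaults); // placeholder hook
        }
        return bean;
    }

    @Override
    public Object postProcessAfterInitialization(Object bean, String beanName) {
        return bean; // nothing to do after initialization
    }
}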
If neither of these answers is helpful, please post your setup and example code so I can get a clearer picture of what you're trying to do.
What I did in the end:
I've created a class which connects to Redis and gets me all the data I require.
For unit testing I've created a copy of this class; it implements the same interface, but instead of getting the data from Redis it has a simple Map inside and gets the data from there. In the end it acts the same, but the data is stored internally. So in my unit tests I just inject this unit-test version of the class where appropriate (roughly as sketched below).
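Roughly like this, assuming a hypothetical SettingsProvider interface shared by the Redis-backed bean and the stub:

import java.util.HashMap;
import java.util.Map;

// Test double: same interface as the Redis-backed bean, data held in a plain map.
public class InMemorySettingsProvider implements SettingsProvider {

    private final Map<String, String> values = new HashMap<>();

    @Override
    public String get(String key) {
        return values.get(key);
    }

    @Override
    public void set(String key, String value) {
        values.put(key, value);
    }
}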
Probably not the best solution there is but it worked for me for the last few months.