I'm developing a Spring Boot application which persists data into an MS SQL database. I've been tasked with adding support for PostgreSQL, which uses the same tables. So my goal is to add another repository layer implementation. But things get a little bit complicated.
It would be great if my repository layer could look like this:
public interface RecordRepository {
    Record get(long id);
}

@Repository
@Conditional(MsSqlCondition.class)
public class MsSqlRecordRepository implements RecordRepository {
    public Record get(long id) {
        // MS SQL implementation...
    }
}

@Repository
@Conditional(PostgreSqlCondition.class)
public class PostgreSqlRecordRepository implements RecordRepository {
    public Record get(long id) {
        // PostgreSQL implementation...
    }
}
However, this doesn't seem to be possible in my case.
First of all, my application doesn't have its database configuration in the application.yaml file. It has to fetch these variables from a remote HTTP server. My @Configuration class looks something like this:
@Configuration
public class MyAppConfiguration {

    @Bean
    public DatabaseConfiguration databaseConfiguration() {
        // Make HTTP request to remote server
        if (something) {
            return new MsSqlServerConfiguration(...);
        } else {
            return new PostgreSqlServerConfiguration(...);
        }
    }
}
With this approach, I'm unable to use the @Conditional annotation for my DataSource and repository beans, since @Conditional is evaluated while the @Configuration classes are being parsed. I need to choose the right implementation AFTER the DatabaseConfiguration bean is created.
I already considered this approach:
@Configuration
public class RepositoryConfiguration {

    @Bean
    public RecordRepository recordRepository(DatabaseConfiguration configuration, JdbcTemplate jdbcTemplate) {
        if (configuration.getType() == MS_SQL) {
            return new MsSqlRecordRepository(jdbcTemplate);
        } else {
            return new PostgreSqlRecordRepository(jdbcTemplate);
        }
    }
}
But this doesn't seem to work for me either, since I'm using Spring AOP for my repository classes.
Does Spring Boot have some other mechanism which allows me to choose the right repository implementation after my DatabaseConfiguration bean is created?
Thanks
Related
I'm new to Spring Boot and I'm trying to wrap my head around how to make dependency injection work for deployment and testing.
I have a @RestController and a supporting @Service. The service injects another class that is an interface for talking to Kafka. For the Kafka interface I have two implementations: one real and one fake. The real one I want to use in production and the fake one in test.
My approach is to use a different configuration for each environment (prod and test).
@Configuration
public class AppTestConfiguration {

    @Bean
    public KafkaMessagePublisher kafkaMessagePublisher() {
        return new KafkaMessagePublisherFakeImpl();
    }
}

@Configuration
public class AppConfiguration {

    @Bean
    public KafkaMessagePublisher kafkaMessagePublisher() {
        return new KafkaMessagePublisherImpl();
    }
}
Then in my main application I would like to somehow load AppConfiguration.
@SpringBootApplication
public class DeployerServiceApiApplication {

    public static void main(String[] args) {
        SpringApplication.run(DeployerServiceApiApplication.class, args);
    }

    // TODO: somehow load here...
}
And in my test I would load the fake configuration somehow:
@SpringBootTest
@AutoConfigureMockMvc(addFilters = false)
public class DeployerServiceApiApplicationTest {

    @Autowired private MockMvc mockMvc;

    // TODO: somehow load AppTestConfiguration here

    @Test
    public void testDeployAction() throws Exception {
        ...
        ResultActions resultActions = mockMvc.perform(...);
        ...
    }
}
I've spent the better part of a day trying to figure this out. What I'm trying to accomplish here is fundamental and should be straightforward, yet I keep running into issues, which makes me wonder if the way I'm thinking about this is all wrong.
I'm not sure if I understand your question completely, but from the description I guess you wish to initialize a bean based on the environment. Please see below.
#Profile("test")
#Configuration
public class AppTestConfiguration {
#Bean
public KafkaMessagePublisher kafkaMessagePublisher() {
return new KafkaMessagePublisherFakeImpl();
}
}
#Profile("prod")
#Configuration
public class AppConfiguration {
#Bean
public KafkaMessagePublisher kafkaMessagePublisher() {
return new KafkaMessagePublisherImpl();
}
and then you can pass the "-Dspring.profiles.active=prod" argument while starting your application with the java command, or you can also specify the profile in your test case like below.
@SpringBootTest
@ActiveProfiles("test")
@AutoConfigureMockMvc(addFilters = false)
public class DeployerServiceApiApplicationTest
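If you would rather select the profile in code (the question's "TODO: somehow load here..." in main), SpringApplication also lets you set profiles programmatically. A minimal sketch, assuming "prod" should be active for normal startup:

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class DeployerServiceApiApplication {

    public static void main(String[] args) {
        SpringApplication application = new SpringApplication(DeployerServiceApiApplication.class);
        // activates "prod" in addition to any profiles set via properties or environment
        application.setAdditionalProfiles("prod");
        application.run(args);
    }
}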
Use Spring profiles: you can annotate your test class with @ActiveProfiles("test-kafka") and your test configuration with @Profile("test-kafka").
This is a pretty easy task in the Spring Boot world.
Rewrite your classes as follows:
#Profile("test")
#Configuration
public class AppTestConfiguration {
#Bean
public KafkaMessagePublisher kafkaMessagePublisher() {
return new KafkaMessagePublisherFakeImpl();
}
}
#Profile("prod")
#Configuration
public class AppConfiguration {
#Bean
public KafkaMessagePublisher kafkaMessagePublisher() {
return new KafkaMessagePublisherImpl();
}
}
This will instruct Spring Boot to load the relevant configuration when the "prod"/"test" profile is specified.
Then you can start your application in production with --spring.profiles.active=prod, and in the test you can write something like this:
@SpringBootTest
@ActiveProfiles("test")
public class DeployerServiceApiApplicationTest {
    ...
}
If you want to run all the tests with this profile and do not want to write the @ActiveProfiles annotation, you can create src/test/resources/application.properties and put into it: spring.profiles.active=test
I have tried to find documentation on how to manually configure a RestController (i.e. in a @Configuration class), that is, without using the @RestController annotation. Considering all the other annotations, like request mappings, path variables etc., is that at all possible?
A controller is essentially a component with a request mapping. See RequestMappingHandlerMapping.
@Override
protected boolean isHandler(Class<?> beanType) {
    return (AnnotatedElementUtils.hasAnnotation(beanType, Controller.class) ||
            AnnotatedElementUtils.hasAnnotation(beanType, RequestMapping.class));
}
If you want to instantiate a "rest controller" through configuration, you can probably do so via the following:
@Configuration
public class MyConfiguration {

    @Bean
    public MyController myController() {
        return new MyController();
    }
}

@ResponseBody
public class MyController {

    @RequestMapping("/test")
    public String someEndpoint() {
        return "some payload";
    }
}
But I don't think you'll be able to configure the request mappings (path variables, etc.) in the configuration; at least I haven't seen an example nor figured out how.
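That said, Spring MVC does expose a programmatic registration API on RequestMappingHandlerMapping, so something along the following lines might work. This is only a sketch under that assumption; the ManualMappingConfiguration class is made up, and it presumes the controller carries no @RequestMapping of its own:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.servlet.mvc.method.RequestMappingInfo;
import org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerMapping;

@Configuration
public class ManualMappingConfiguration {

    // Registers MyController#someEndpoint under GET /test without any
    // @RequestMapping scanning of the controller class itself.
    @Autowired
    public void registerMappings(RequestMappingHandlerMapping handlerMapping,
                                 MyController controller) throws NoSuchMethodException {
        RequestMappingInfo info = RequestMappingInfo
                .paths("/test")
                .methods(RequestMethod.GET)
                .build();
        handlerMapping.registerMapping(info, controller,
                MyController.class.getMethod("someEndpoint"));
    }
}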
My goal is to have integration tests that ensure there aren't too many database queries happening during lookups. (This helps us catch N+1 queries due to incorrect JPA configuration.)
I know that the database connection is correct, because there are no configuration problems during the test run whenever MyDataSourceWrapperConfiguration is not included in the test. However, once it is added, the circular dependency appears (see the error below). I believe @Primary is necessary in order for the JPA/JDBC code to use the correct DataSource instance.
MyDataSourceWrapper is a custom class that tracks the number of queries that have happened for a given transaction, but it delegates the real database work to the DataSource passed in via the constructor.
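(For context, since MyDataSourceWrapper itself is not shown in the question, a minimal sketch of such a delegating wrapper, assuming it builds on Spring's DelegatingDataSource, could look like this; the real query counting would hook in at the Connection/Statement level:)

import java.sql.Connection;
import java.sql.SQLException;
import java.util.concurrent.atomic.AtomicLong;
import javax.sql.DataSource;
import org.springframework.jdbc.datasource.DelegatingDataSource;

public class MyDataSourceWrapper extends DelegatingDataSource {

    private final AtomicLong connectionsHandedOut = new AtomicLong();

    public MyDataSourceWrapper(DataSource delegate) {
        super(delegate);
    }

    @Override
    public Connection getConnection() throws SQLException {
        // a real implementation would wrap the returned Connection to count Statements
        connectionsHandedOut.incrementAndGet();
        return super.getConnection();
    }
}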
Error:
The dependencies of some of the beans in the application context form a cycle:
org.springframework.boot.autoconfigure.orm.jpa.HibernateJpaAutoConfiguration
┌─────┐
| databaseQueryCounterProxyDataSource defined in me.testsupport.database.MyDataSourceWrapperConfiguration
↑ ↓
| dataSource defined in org.springframework.boot.autoconfigure.jdbc.DataSourceConfiguration$Tomcat
↑ ↓
| dataSourceInitializer
└─────┘
My Configuration:
@Configuration
public class MyDataSourceWrapperConfiguration {

    @Primary
    @Bean
    DataSource databaseQueryCounterProxyDataSource(final DataSource delegate) {
        return new MyDataSourceWrapper(delegate);
    }
}
My Test:
#ActiveProfiles({ "it" })
#RunWith(SpringJUnit4ClassRunner.class)
#SpringApplicationConfiguration({ DatabaseConnectionConfiguration.class, DatabaseQueryCounterConfiguration.class })
#EnableAutoConfiguration
public class EngApplicationRepositoryIT {
#Rule
public MyDatabaseQueryCounter databaseQueryCounter = new MyDatabaseQueryCounter ();
#Rule
public ErrorCollector errorCollector = new ErrorCollector();
#Autowired
MyRepository repository;
#Test
public void test() {
this.repository.loadData();
this.errorCollector.checkThat(this.databaseQueryCounter.getSelectCounts(), is(lessThan(10)));
}
}
UPDATE: This original question was for Spring Boot 1.5. The accepted answer reflects that; however, the answer from @rajadilipkolli works for Spring Boot 2.x.
In your case you will get two DataSource instances, which is probably not what you want. Instead, use a BeanPostProcessor, which is the component actually designed for this. See also the Spring Reference Guide.
Create and register a BeanPostProcessor which does the wrapping.
public class DataSourceWrapper implements BeanPostProcessor {

    @Override
    public Object postProcessBeforeInitialization(Object bean, String beanName) throws BeansException {
        if (bean instanceof DataSource) {
            return new MyDataSourceWrapper((DataSource) bean);
        }
        return bean;
    }

    @Override
    public Object postProcessAfterInitialization(Object bean, String beanName) throws BeansException {
        return bean;
    }
}
Then just register that as a @Bean instead of your MyDataSourceWrapper.
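A minimal sketch of that registration (the configuration class name here is made up; declaring the method static is a common precaution, since BeanPostProcessors should be instantiated before the beans they process):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class DataSourceWrapperConfiguration {

    // static so the post-processor is created early, before regular beans
    @Bean
    public static DataSourceWrapper dataSourceWrapper() {
        return new DataSourceWrapper();
    }
}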
Tip: Instead of rolling your own wrapping DataSource, you might be interested in datasource-proxy combined with datasource-assert, which already has counter support etc. (saving you from maintaining your own components).
Starting from Spring Boot 2.0.0.M3, using a BeanPostProcessor won't work.
As a workaround, create your own bean like below:
@Bean
public DataSource customDataSource(DataSourceProperties properties) {
    log.info("Inside Proxy Creation");
    final HikariDataSource dataSource = (HikariDataSource) properties
            .initializeDataSourceBuilder().type(HikariDataSource.class).build();
    if (properties.getName() != null) {
        dataSource.setPoolName(properties.getName());
    }
    return ProxyDataSourceBuilder.create(dataSource).countQuery().name("MyDS")
            .logSlowQueryToSysOut(1, TimeUnit.MINUTES).build();
}
Another way is to use the datasource-proxy version of the datasource-decorator starter.
The following solution works for me using Spring Boot 2.0.6.
It uses explicit binding instead of the annotation @ConfigurationProperties(prefix = "spring.datasource.hikari").
@Configuration
public class DataSourceConfig {

    private final Environment env;

    @Autowired
    public DataSourceConfig(Environment env) {
        this.env = env;
    }

    @Primary
    @Bean
    public MyDataSourceWrapper primaryDataSource(DataSourceProperties properties) {
        DataSource dataSource = properties.initializeDataSourceBuilder().build();
        Binder binder = Binder.get(env);
        binder.bind("spring.datasource.hikari", Bindable.ofInstance(dataSource).withExistingValue(dataSource));
        return new MyDataSourceWrapper(dataSource);
    }
}
You can actually still use a BeanPostProcessor in Spring Boot 2, but it needs to return the correct type (the actual type of the declared bean). To do this, you need to create a proxy of the correct type which redirects DataSource methods to your interceptor and all the other methods to the original bean.
For example code see the Spring Boot issue and discussion at https://github.com/spring-projects/spring-boot/issues/12592.
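A hedged sketch of that idea, reusing the MyDataSourceWrapper from the question (this illustrates the linked discussion; it is not the exact code from the issue):

import javax.sql.DataSource;
import org.aopalliance.intercept.MethodInterceptor;
import org.springframework.aop.framework.ProxyFactory;
import org.springframework.beans.factory.config.BeanPostProcessor;

public class DataSourceProxyingPostProcessor implements BeanPostProcessor {

    @Override
    public Object postProcessAfterInitialization(Object bean, String beanName) {
        if (!(bean instanceof DataSource)) {
            return bean;
        }
        DataSource wrapper = new MyDataSourceWrapper((DataSource) bean);
        ProxyFactory proxyFactory = new ProxyFactory(bean);
        proxyFactory.setProxyTargetClass(true); // keep the concrete type, e.g. HikariDataSource
        proxyFactory.addAdvice((MethodInterceptor) invocation -> {
            // route DataSource interface methods through the wrapper,
            // let everything else (e.g. Hikari-specific methods) hit the original bean
            if (invocation.getMethod().getDeclaringClass() == DataSource.class) {
                return invocation.getMethod().invoke(wrapper, invocation.getArguments());
            }
            return invocation.proceed();
        });
        return proxyFactory.getProxy();
    }
}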
I have a Spring Boot application with its own datasource (let's call it DB1) configured in the properties and working fine.
But this application needs to configure a new datasource (DB2), using some parameters the user has provided earlier, which are stored in DB1.
My idea is to create a named bean that a specific part of my application can use to access the DB2 tables. I think it is possible to do that by restarting the application, but I would like to avoid that.
Besides, I need some parts of my code to use the new datasource (Spring Data JPA, mappings, and so on). I don't know if this matters, but it is a web application, so I cannot create the datasource only for the request thread.
Can you help me?
Thanks in advance.
Spring has dynamic datasource routing, if that's where you are headed. In my case it is the same schema (WR/RO):
public class RoutingDataSource extends AbstractRoutingDataSource {

    @Autowired
    private DataSourceConfig dataSourceConfig;

    @Override
    protected Object determineCurrentLookupKey() {
        return DbContextHolder.getDbType();
    }

    public enum DbType {
        MASTER, WRITE, READONLY,
    }
}
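DbContextHolder is not shown in the snippet above; a typical implementation (an assumption on my part) holds the lookup key in a ThreadLocal:

public class DbContextHolder {

    private static final ThreadLocal<RoutingDataSource.DbType> CONTEXT = new ThreadLocal<>();

    public static void setDbType(RoutingDataSource.DbType dbType) {
        CONTEXT.set(dbType);
    }

    public static RoutingDataSource.DbType getDbType() {
        return CONTEXT.get();
    }

    public static void clearDbType() {
        CONTEXT.remove();
    }
}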
Then you need a custom annotation and an aspect
@Target({ElementType.METHOD, ElementType.TYPE})
@Retention(RetentionPolicy.RUNTIME)
public @interface ReadOnlyConnection {
}

@Aspect
@Component
@Order(1)
public class ReadOnlyConnectionInterceptor {

    @Pointcut(value = "execution(public * *(..))")
    public void anyPublicMethod() {}

    @Around("@annotation(readOnlyConnection)")
    public Object proceed(ProceedingJoinPoint proceedingJoinPoint, ReadOnlyConnection readOnlyConnection) throws Throwable {
        try {
            DbContextHolder.setDbType(DbType.READONLY);
            return proceedingJoinPoint.proceed();
        } finally {
            DbContextHolder.clearDbType();
        }
    }
}
And then you can act on your DB with the @ReadOnlyConnection annotation:
@Override
@Transactional(readOnly = true)
@ReadOnlyConnection
public UnitDTO getUnitById(Long id) {
    return unitRepository.findOne(id);
}
An example can be found here: https://github.com/afedulov/routing-data-source.
I used that as a basis for my work, although it is still in progress because I still need to resolve runtime dependencies (i.e. Hibernate sharding).
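One thing the snippets above leave implicit is how the RoutingDataSource learns about its target DataSources. A hedged sketch of that wiring (the writeDataSource and readOnlyDataSource bean names are made up):

import java.util.HashMap;
import java.util.Map;
import javax.sql.DataSource;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;

@Configuration
public class RoutingDataSourceConfig {

    @Primary
    @Bean
    public DataSource routingDataSource(@Qualifier("writeDataSource") DataSource writeDataSource,
                                        @Qualifier("readOnlyDataSource") DataSource readOnlyDataSource) {
        RoutingDataSource routing = new RoutingDataSource();
        Map<Object, Object> targets = new HashMap<>();
        targets.put(RoutingDataSource.DbType.WRITE, writeDataSource);
        targets.put(RoutingDataSource.DbType.READONLY, readOnlyDataSource);
        routing.setTargetDataSources(targets);
        routing.setDefaultTargetDataSource(writeDataSource); // used when no key is set
        return routing;
    }
}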
I set up Spring Data Neo4j following this tutorial: http://spring.io/guides/gs/accessing-data-neo4j/ and changed it slightly to use it with the Neo4j server, and this runs well.
Then I tried to find an example of how I can use the repositories within a CDI environment, as outlined in these examples: http://docs.spring.io/spring-data/jpa/docs/current/reference/html/#jpd.misc.cdi-integration
However, I cannot find any example of what I have to provide to make this run with Neo4j.
So my question is: did anybody try to set up Neo4j with Spring Data and CDI, and can you provide an example of how I have to set up the Spring configuration for CDI and how I can make repositories accessible for @Inject? Please consider that I'm pretty new to the Spring Data and Neo4j topic ;)
Thanks in advance!
Joern
At least I found something that seems to work. I extended Neo4jConfiguration to set up the Neo4j server connection. Within this class I also produce the needed repositories. The repositories themselves have to be annotated with @NoRepositoryBean:
public class MyNeo4JConfiguration extends Neo4jConfiguration {

    GraphDatabaseService graphDatabaseService() {
        return new SpringRestGraphDatabase("http://localhost:7474/db/data");
    }

    public GraphDatabase graphDatabase() {
        if (graphDatabaseService() instanceof GraphDatabase)
            return (GraphDatabase) graphDatabaseService();
        return new DelegatingGraphDatabase(graphDatabaseService());
    }

    @Produces
    PersonRepository getPersonRepository() {
        GraphRepositoryFactory factory;
        try {
            factory = new GraphRepositoryFactory(new Neo4jTemplate(
                    this.graphDatabase()), this.neo4jMappingContext());
            PersonRepository personRepository = factory
                    .getRepository(PersonRepository.class);
            return personRepository;
        } catch (Exception e) {
            return null;
        }
    }
}
The repository:
@NoRepositoryBean
public interface PersonRepository extends GraphRepository<Person> {

    Person findByName(String name);

    Iterable<Person> findByTeammatesName(String name);
}
The PersonRepository can now be injected with @Inject.
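For completeness, a short usage sketch (the PersonService class is made up):

import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;

@ApplicationScoped
public class PersonService {

    @Inject
    private PersonRepository personRepository;

    public Person lookup(String name) {
        return personRepository.findByName(name);
    }
}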
Thanks to this post!