How to drop an in-memory H2 database between Spring integration tests?

I am using Liquibase in my Spring web application. I have a bunch of entities with hundreds of REST API integration tests for each entity, like User, Account, Invoice, License, etc. All of my integration tests pass when run by class, but a lot of them fail when run together using gradle test. It is very likely there is data collision between the tests, and I am not interested in spending time cleaning up the data right now. I would prefer to drop the DB and the context after every class. I figured I could use @DirtiesContext at the class level, so I annotated my test with it.
@RunWith(SpringRunner.class)
@SpringBootTest(classes = {Application.class, SecurityConfiguration.class},
        webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
@DirtiesContext
public class InvoiceResourceIntTest {
I see that after adding the annotation, the web application context starts for every class, but when the Liquibase initialization happens, the change sets are not run because the checksums match. Since this is an in-memory DB, I expected the DB to be destroyed along with the Spring context, but that is not happening.
I also set spring.jpa.hibernate.ddl-auto to create-drop, but that did not help. The next option I am considering is, instead of mem, writing the H2 DB to a file and deleting that file in @BeforeClass of my integration test classes. I would prefer the in-memory DB to be dropped automatically rather than managing it in the tests, but I want to try this as a last resort. Thanks for the help.
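For reference, that last-resort approach could look roughly like this (a sketch; the file URL, path, and .mv.db suffix are assumptions based on H2 1.4 defaults):
import java.nio.file.Files;
import java.nio.file.Paths;

import org.junit.BeforeClass;

public class InvoiceResourceIntTest {

    // Assumes a test datasource URL of jdbc:h2:file:./build/h2/testdb;
    // H2's default MVStore engine persists that database as testdb.mv.db
    @BeforeClass
    public static void dropDatabaseFile() throws Exception {
        Files.deleteIfExists(Paths.get("build/h2/testdb.mv.db"));
    }

    // ... tests
}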
Update:
I updated the test as below.
@RunWith(SpringRunner.class)
@SpringBootTest(classes = {Application.class, SecurityConfiguration.class},
        webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT,
        properties = "spring.datasource.name=AccountResource")
@DirtiesContext
public class AccountResourceIntTest {
I have set unique names for each integration test, but I still don't see a fresh database being created, because I only see the Liquibase checksum validation in the logs.
Here is my app config from application.yml
spring:
  datasource:
    driver-class-name: org.h2.Driver
    url: jdbc:h2:mem:myApp;DB_CLOSE_DELAY=-1
    name:
    username:
    password:
  jpa:
    database-platform: com.neustar.registry.le.domain.util.FixedH2Dialect
    database: H2
    open-in-view: false
    show_sql: true
    hibernate:
      ddl-auto: create-drop
      naming-strategy: org.springframework.boot.orm.jpa.hibernate.SpringNamingStrategy
    properties:
      hibernate.cache.use_second_level_cache: false
      hibernate.cache.use_query_cache: false
      hibernate.generate_statistics: true
      hibernate.hbm2ddl.auto: validate
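One thing to note in this configuration: the hibernate.hbm2ddl.auto: validate entry under spring.jpa.properties is passed straight through to Hibernate and takes precedence over the ddl-auto: create-drop setting above, which may be why create-drop appeared to have no effect. If create-drop is really intended, the pass-through entry would have to go, roughly:
jpa:
  hibernate:
    ddl-auto: create-drop
  properties:
    # hibernate.hbm2ddl.auto: validate  <- remove; it overrides ddl-auto above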
My project was generated from JHipster 2.x, if that matters. Please see my database configuration class below. AppProperties holds application-specific properties (different from Spring's).
@Configuration
public class DatabaseConfiguration {

    private static final int LIQUIBASE_POOL_INIT_SIZE = 1;
    private static final int LIQUIBASE_POOL_MAX_ACTIVE = 1;
    private static final int LIQUIBASE_POOL_MAX_IDLE = 0;
    private static final int LIQUIBASE_POOL_MIN_IDLE = 0;

    private static final Logger LOG = LoggerFactory.getLogger(DatabaseConfiguration.class);

    /**
     * Creates data source.
     *
     * @param dataSourceProperties Data source properties configured.
     * @param appProperties the app properties
     * @return Data source.
     */
    @Bean(destroyMethod = "close")
    @ConditionalOnClass(org.apache.tomcat.jdbc.pool.DataSource.class)
    @Primary
    public DataSource dataSource(final DataSourceProperties dataSourceProperties,
            final AppProperties appProperties) {
        LOG.info("Configuring Datasource with url: {}, user: {}",
            dataSourceProperties.getUrl(), dataSourceProperties.getUsername());
        if (dataSourceProperties.getUrl() == null) {
            LOG.error("Your Liquibase configuration is incorrect, please specify database URL!");
            throw new ApplicationContextException("Data source is not configured correctly, please specify URL");
        }
        if (dataSourceProperties.getUsername() == null) {
            LOG.error("Your Liquibase configuration is incorrect, please specify database user!");
            throw new ApplicationContextException(
                "Data source is not configured correctly, please specify database user");
        }
        if (dataSourceProperties.getPassword() == null) {
            LOG.error("Your Liquibase configuration is incorrect, please specify database password!");
            throw new ApplicationContextException(
                "Data source is not configured correctly, "
                    + "please specify database password");
        }
        PoolProperties config = new PoolProperties();
        config.setDriverClassName(dataSourceProperties.getDriverClassName());
        config.setUrl(dataSourceProperties.getUrl());
        config.setUsername(dataSourceProperties.getUsername());
        config.setPassword(dataSourceProperties.getPassword());
        config.setInitialSize(appProperties.getDatasource().getInitialSize());
        config.setMaxActive(appProperties.getDatasource().getMaxActive());
        config.setTestOnBorrow(appProperties.getDatasource().isTestOnBorrow());
        config.setValidationQuery(appProperties.getDatasource().getValidationQuery());
        org.apache.tomcat.jdbc.pool.DataSource dataSource = new org.apache.tomcat.jdbc.pool.DataSource(config);
        LOG.info("Data source is created: {}", dataSource);
        return dataSource;
    }

    /**
     * Create data source for Liquibase using dba user and password provided for "liquibase"
     * in application.yml.
     *
     * @param dataSourceProperties Data source properties
     * @param liquibaseProperties Liquibase properties.
     * @param appProperties the app properties
     * @return Data source for liquibase.
     */
    @Bean(destroyMethod = "close")
    @ConditionalOnClass(org.apache.tomcat.jdbc.pool.DataSource.class)
    public DataSource liquibaseDataSource(final DataSourceProperties dataSourceProperties,
            final LiquibaseProperties liquibaseProperties, final AppProperties appProperties) {
        LOG.info("Configuring Liquibase Datasource with url: {}, user: {}",
            dataSourceProperties.getUrl(), liquibaseProperties.getUser());
        /*
         * This is needed for integration testing. When we run integration tests using SpringJUnit4ClassRunner,
         * Spring uses H2DB if it is in the class path. In that case, we have to create the pool for H2DB.
         * Need to find a better solution for this.
         */
        if (dataSourceProperties.getDriverClassName() != null
                && dataSourceProperties.getDriverClassName().startsWith("org.h2.")) {
            return dataSource(dataSourceProperties, appProperties);
        }
        if (dataSourceProperties.getUrl() == null) {
            LOG.error("Your Liquibase configuration is incorrect, please specify database URL!");
            throw new ApplicationContextException("Liquibase is not configured correctly, please specify URL");
        }
        if (liquibaseProperties.getUser() == null) {
            LOG.error("Your Liquibase configuration is incorrect, please specify database user!");
            throw new ApplicationContextException(
                "Liquibase is not configured correctly, please specify database user");
        }
        if (liquibaseProperties.getPassword() == null) {
            LOG.error("Your Liquibase configuration is incorrect, please specify database password!");
            throw new ApplicationContextException(
                "Liquibase is not configured correctly, please specify database password");
        }
        PoolProperties config = new PoolProperties();
        config.setDriverClassName(dataSourceProperties.getDriverClassName());
        config.setUrl(dataSourceProperties.getUrl());
        config.setUsername(liquibaseProperties.getUser());
        config.setPassword(liquibaseProperties.getPassword());
        // for the liquibase pool, we don't need more than 1 connection
        config.setInitialSize(LIQUIBASE_POOL_INIT_SIZE);
        config.setMaxActive(LIQUIBASE_POOL_MAX_ACTIVE);
        // for the liquibase pool, we don't want any connections to linger around
        config.setMaxIdle(LIQUIBASE_POOL_MAX_IDLE);
        config.setMinIdle(LIQUIBASE_POOL_MIN_IDLE);
        org.apache.tomcat.jdbc.pool.DataSource dataSource = new org.apache.tomcat.jdbc.pool.DataSource(config);
        LOG.info("Liquibase data source is created: {}", dataSource);
        return dataSource;
    }

    /**
     * Creates a liquibase instance.
     *
     * @param dataSource Data source to use for liquibase.
     * @param dataSourceProperties Datasource properties.
     * @param liquibaseProperties Liquibase properties.
     * @return Liquibase instance to be used in spring.
     */
    @Bean
    public SpringLiquibase liquibase(@Qualifier("liquibaseDataSource") final DataSource dataSource,
            final DataSourceProperties dataSourceProperties, final LiquibaseProperties liquibaseProperties) {
        // Use liquibase.integration.spring.SpringLiquibase if you don't want Liquibase to start asynchronously
        SpringLiquibase liquibase = new AsyncSpringLiquibase();
        liquibase.setDataSource(dataSource);
        liquibase.setChangeLog("classpath:config/liquibase/master.xml");
        liquibase.setContexts(liquibaseProperties.getContexts());
        liquibase.setDefaultSchema(liquibaseProperties.getDefaultSchema());
        liquibase.setDropFirst(liquibaseProperties.isDropFirst());
        liquibase.setShouldRun(liquibaseProperties.isEnabled());
        return liquibase;
    }
}

This is because each test shares the same database, and the lifecycle of H2 is not in our control. If you start a process (the VM) and require a database named foo, close the application context, start a new one, and require foo again, you'll get the same instance.
In the upcoming 1.4.2 release we've added a property to generate a unique name for the database on startup (see spring.datasource.generate-unique-name), and that value will be set to true by default in 1.5.
In the meantime, you can annotate each test with @SpringBootTest(properties="spring.datasource.name=xyz"), where xyz is different for each test that requires a separate DB.
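On 1.4.2+, opting in per test class could look like this (a sketch; note that, as in the "Solved" configuration further down, this only takes effect when no fixed spring.datasource.url is configured):
@RunWith(SpringRunner.class)
@SpringBootTest(properties = "spring.datasource.generate-unique-name=true")
@DirtiesContext
public class InvoiceResourceIntTest {
    // each new application context now gets its own randomly named in-memory DB
}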

If I understand everything correctly, Liquibase takes care of the database status. For every file, including the test data, Liquibase stores a checksum in a table to check whether something has changed. The H2 instance stays alive after a @DirtiesContext, so the checksums still exist in the database. Liquibase therefore thinks everything is correct, even though the test data may have changed.
To force Liquibase to drop the database and recreate a completely new one, you must set these properties in the application.yml used for tests:
liquibase:
  contexts: test
  drop-first: true
Or, as an alternative, you can hardcode it:
liquibase.setDropFirst(true);
You can either annotate your test with @DirtiesContext, which slows down the tests because the whole application context gets rebuilt, or you can create a custom TestExecutionListener, which is much faster. I've created a custom TestExecutionListener which recreates the database and keeps the context.
public class CleanUpDatabaseTestExecutionListener extends AbstractTestExecutionListener {

    @Inject
    SpringLiquibase liquibase;

    @Override
    public int getOrder() {
        return Ordered.HIGHEST_PRECEDENCE;
    }

    @Override
    public void afterTestClass(TestContext testContext) throws Exception {
        // This is a bit dirty but it works well: inject the SpringLiquibase bean
        // from the test's application context into this listener, then re-run
        // Liquibase (afterPropertiesSet triggers the update, including drop-first).
        testContext.getApplicationContext()
            .getAutowireCapableBeanFactory()
            .autowireBean(this);
        liquibase.afterPropertiesSet();
    }
}
If you are using the TestExecutionListener, you must register it on your test with:
@RunWith(SpringJUnit4ClassRunner.class)
@SpringApplicationConfiguration(classes = Application.class)
@WebAppConfiguration
@IntegrationTest
@TestExecutionListeners(listeners = {
        DependencyInjectionTestExecutionListener.class,
        TransactionalTestExecutionListener.class,
        CleanUpDatabaseTestExecutionListener.class,
})
public class Test {
    //your tests
}
NOTE: Do not use @DirtiesContext and this TestExecutionListener together; that will lead to an error.

Solved by removing the username, url, and password parameters:
spring:
  autoconfigure:
    exclude: org.springframework.boot.autoconfigure.security.SecurityAutoConfiguration
  jackson:
    serialization:
      indent_output: true
  datasource:
    driver-class-name: org.hsqldb.jdbcDriver
    generate-unique-name: true
  jpa:
    hibernate:
      dialect: org.hibernate.dialect.HSQLDialect
      ddl-auto: validate
    show-sql: true
  h2:
    console:
      enabled: false
liquibase:
  change-log: classpath:/liquibase/db.changelog-master.xml
  drop-first: true
  contexts: QA

Related

How to add Log4j2 JDBC Appender programmatically to an existing configuration in Spring Boot?

A short rant at the beginning, just because it has to be said:
I'm getting tired of reading the terrible documentation of Log4j2 for the umpteenth time and still not finding any solutions to my problems. The documentation is completely outdated, sample code is torn uselessly out of a context that is needed but not explained further, and the explanations are consistently insufficient. It shouldn't be the case that only Log4j2 developers can use Log4j2 in depth. Frameworks should make other developers' work easier, which is definitely not the case here. Period, and thanks.
Now to my actual problem:
I have a Spring Boot application that is primarily configured with yaml files. The DataSource, however, is set programmatically so that we have a handle on its bean. Log4j2 is initially set up using a yaml configuration as well.
log4j2-spring.yaml:
Configuration:
  name: Default
  status: warn
  Appenders:
    Console:
      name: Console
      target: SYSTEM_OUT
      PatternLayout:
        pattern: "%d{yyyy-MM-dd HH:mm:ss} %-5level [%t] %c: %msg%n"
  Loggers:
    Root:
      level: warn
      AppenderRef:
        - ref: Console
    Logger:
      - name: com.example
        level: debug
        additivity: false
        AppenderRef:
          - ref: Console
What I want to do now is to extend this initial configuration programmatically with a JDBC Appender using the already existing connection-pool. According to the documentation, the following should be done:
The recommended approach for customizing a configuration is to extend one of the standard Configuration classes, override the setup method to first do super.setup() and then add the custom Appenders, Filters and LoggerConfigs to the configuration before it is registered for use.
So here is my custom Log4j2Configuration which extends YamlConfiguration:
public class Log4j2Configuration extends YamlConfiguration {

    /* private final Log4j2ConnectionSource connectionSource; */ // <-- needs to get injected somehow

    public Log4j2Configuration(LoggerContext loggerContext, ConfigurationSource configSource) {
        super(loggerContext, configSource);
    }

    @Override
    public void setup() {
        super.setup();
    }

    @Override
    protected void doConfigure() {
        super.doConfigure();
        LoggerContext context = (LoggerContext) LogManager.getContext(false);
        Configuration config = context.getConfiguration();
        ColumnConfig[] columns = new ColumnConfig[]{
            //...
        };
        Appender jdbcAppender = JdbcAppender.newBuilder()
            .setName("DataBase")
            .setTableName("application_log")
            // .setConnectionSource(connectionSource)
            .setColumnConfigs(columns)
            .build();
        jdbcAppender.start();
        config.addAppender(jdbcAppender);
        AppenderRef ref = AppenderRef.createAppenderRef("DataBase", null, null);
        AppenderRef[] refs = new AppenderRef[]{ref};
        /* Deprecated, but still in the Log4j2 documentation */
        LoggerConfig loggerConfig = LoggerConfig.createLogger(
            false,
            Level.TRACE,
            "com.example",
            "true",
            refs,
            null,
            config,
            null);
        loggerConfig.addAppender(jdbcAppender, null, null);
        config.addLogger("com.example", loggerConfig);
        context.updateLoggers();
    }
}
The ConnectionSource exists as an implementation of AbstractConnectionSource in the Spring context and still needs to be injected into the Log4j2Configuration class. Once I know how the configuration process works, I can try to find a solution for this.
Log4j2ConnectionSource:
@Configuration
public class Log4j2ConnectionSource extends AbstractConnectionSource {

    private final DataSource dataSource;

    public Log4j2ConnectionSource(@Autowired @NotNull DataSource dataSource) {
        this.dataSource = dataSource;
    }

    @Override
    public Connection getConnection() throws SQLException {
        return dataSource.getConnection();
    }
}
And finally the ConfigurationFactory, as described here in the documentation. (Interestingly, the documentation's getConfiguration method calls new MyXMLConfiguration(source, configFile), a constructor that doesn't exist. Is witchcraft at play here?)
Log4j2ConfigurationFactory:
@Order(50)
@Plugin(name = "Log4j2ConfigurationFactory", category = ConfigurationFactory.CATEGORY)
public class Log4j2ConfigurationFactory extends YamlConfigurationFactory {

    @Override
    public Configuration getConfiguration(LoggerContext context, ConfigurationSource configSource) {
        return new Log4j2Configuration(context, configSource);
    }

    @Override
    public String[] getSupportedTypes() {
        return new String[]{".yml", "*"};
    }
}
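For reference, a custom ConfigurationFactory is not picked up merely by being on the classpath; the Log4j2 manual describes naming it via a system property or registering it programmatically before logging initializes. A sketch (the main class name and the ordering relative to Spring Boot's own Log4j2 re-initialization are assumptions, and the latter may be exactly where this setup goes wrong):
import org.apache.logging.log4j.core.config.ConfigurationFactory;
import org.springframework.boot.SpringApplication;

public class Main {
    public static void main(String[] args) {
        // Option 1: pass -Dlog4j.configurationFactory=com.example.Log4j2ConfigurationFactory
        // on the command line so the factory is known before Log4j2 initializes.
        // Option 2: register it programmatically, before the first LogManager call:
        ConfigurationFactory.setConfigurationFactory(new Log4j2ConfigurationFactory());
        SpringApplication.run(Application.class, args);
    }
}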
Now that the setup is more or less done, the running Log4j2 configuration somehow needs to be updated, so something should call doConfigure() within Log4j2Configuration. Log4j2 doesn't seem to do anything here on its own, and Spring Boot doesn't do anything either. And unfortunately I have no idea what to do at all.
Therefore my request:
Can anyone please explain to me how to get Log4j2 to update its configuration?
Many thanks for any help.

Register MetadataBuilderContributor based on database type in Spring Boot

I added a few implementations of MetadataBuilderContributor based on the database (H2, MySQL, Oracle), since they all have slightly different syntax.
As of now, registering the contributors works through a property in application.yml:
spring:
  jpa:
    properties:
      hibernate:
        metadata_builder_contributor: org.foo.bar.H2Implementation
I am aware that I can create multiple profiles (-h2, -mysql, -oracle) to apply the correct contributor. However, I'd like to set it automatically based on the driverClassName (if I can find a match; otherwise, default to the application.yml).
Is there a way to do this without requiring the entry in my application.yml?
Here is my solution, which Frame91 mentioned, using a Spring ApplicationEnvironmentPreparedEvent:
public class MetadataBuilderContributerResolver
        implements ApplicationListener<ApplicationEnvironmentPreparedEvent> {

    @Override
    public void onApplicationEvent(ApplicationEnvironmentPreparedEvent event) {
        ConfigurableEnvironment environment = event.getEnvironment();
        String driverClassName = environment.getProperty("spring.datasource.driverClassName");
        Class<?> metadataBuilderContributorClazz = switch (driverClassName) {
            case "org.h2.Driver" -> H2MetadataBuilderContributor.class;
            case "org.mariadb.jdbc.Driver" -> MariaDbMetadataBuilderContributor.class;
            case "oracle.jdbc.OracleDriver" -> OracleMetadataBuilderContributor.class;
            default -> throw new IllegalArgumentException("Unsupported driver: " + driverClassName);
        };
        String className = metadataBuilderContributorClazz.getName();
        Properties props = new Properties();
        props.put("spring.jpa.properties.hibernate.metadata_builder_contributor", className);
        environment.getPropertySources().addFirst(new PropertiesPropertySource(this.getClass().getName(), props));
    }
}
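One registration detail worth noting: ApplicationEnvironmentPreparedEvent fires before the application context exists, so this listener cannot be registered as a regular bean; it has to be declared in META-INF/spring.factories or added to the SpringApplication by hand. A sketch (the application class name is illustrative):
// Option 1: in src/main/resources/META-INF/spring.factories
//   org.springframework.context.ApplicationListener=org.foo.bar.MetadataBuilderContributerResolver

// Option 2: register it while bootstrapping
public static void main(String[] args) {
    SpringApplication app = new SpringApplication(MyApplication.class);
    app.addListeners(new MetadataBuilderContributerResolver());
    app.run(args);
}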
Thanks to Frame91

Problem with connection to Neo4j test container using Spring boot 2 and JUnit5

The container starts successfully in the test context, but the spring.data.neo4j.uri property has the wrong default port (7687); I guess this URI must be the same as what neo4jContainer.getBoltUrl() returns.
Everything works fine in this case:
@Testcontainers
public class ExampleTest {

    @Container
    private static Neo4jContainer neo4jContainer = new Neo4jContainer()
        .withAdminPassword(null); // Disable password

    @Test
    void testSomethingUsingBolt() {
        // Retrieve the Bolt URL from the container
        String boltUrl = neo4jContainer.getBoltUrl();
        try (
            Driver driver = GraphDatabase.driver(boltUrl, AuthTokens.none());
            Session session = driver.session()
        ) {
            long one = session.run("RETURN 1", Collections.emptyMap()).next().get(0).asLong();
            assertThat(one, is(1L));
        } catch (Exception e) {
            fail(e.getMessage());
        }
    }
}
But the SessionFactory is not created for the application using auto-configuration, following these recommendations: https://www.testcontainers.org/modules/databases/neo4j/
When I try to create my own primary SessionFactory bean in the test context, I get a message like "URI cannot be returned before the container is not loaded".
The application itself runs and works perfectly using auto-configuration with Neo4j started in a container; the same cannot be said about the test context.
You cannot rely 100% on Spring Boot's auto-configuration (meant for production) in this case, because it will read application.properties or use the default values for the connection.
To achieve what you want, the key part is to create a custom (Neo4j-OGM) Configuration bean. The @DataNeo4jTest annotation is provided by the spring-boot-test-autoconfigure module.
@Testcontainers
@DataNeo4jTest
public class TestClass {

    // The databaseServer container field is implied by the original snippet; shown here for completeness
    @Container
    private static Neo4jContainer databaseServer = new Neo4jContainer();

    @TestConfiguration
    static class Config {

        @Bean
        public org.neo4j.ogm.config.Configuration configuration() {
            return new Configuration.Builder()
                .uri(databaseServer.getBoltUrl())
                .credentials("neo4j", databaseServer.getAdminPassword())
                .build();
        }
    }

    // your tests
}
For a broader explanation, have a look at this blog post, especially the section Using with Neo4j-OGM and SDN.

Why does Querydsl always require connections to be transactional?

Recently I tried a new tool for DB access called Querydsl in my Spring Boot app. Here is how I configure the context in a @Configuration class:
@Bean
public com.querydsl.sql.Configuration querydslConfiguration() {
    SQLTemplates templates = OracleTemplates.builder().build();
    com.querydsl.sql.Configuration configuration = new com.querydsl.sql.Configuration(templates);
    configuration.setExceptionTranslator(new SpringExceptionTranslator());
    return configuration;
}

@Bean
public SQLQueryFactory queryFactory(DataSource dataSource) {
    Provider<Connection> provider = new SpringConnectionProvider(dataSource);
    return new SQLQueryFactory(querydslConfiguration(), provider);
}
My query is quite a simple select:
fun detailedEntityByIds(ids: Set<String>): List<DetailedEntity> {
    val qDetails = QTContainerDetails.tContainerDetails
    return sqlQueryFactory.select(qDetails).from(qDetails)
        .where(qDetails.id.`in`(ids))
        .fetch().map { mapper.qDslEntToModel(it) }
}
Then what I faced was the following exception:
java.lang.IllegalStateException: Connection is not transactional
I quickly found this question, [QueryDSL/Spring] java.lang.IllegalStateException: Connection is not transactional, with the advice to use @Transactional to solve the problem.
Why does Querydsl require connections to be transactional? I used to put @Transactional on the service-layer methods where I really need it. Now Querydsl 'forces' me to put it on the whole DAO class, because it looks like it is required for every Querydsl query.
From the Javadoc
/**
 * {@code SpringConnectionProvider} is a Provider implementation which provides a transactionally bound connection
 *
 * <p>Usage example</p>
 * <pre>
 * {@code
 * Provider<Connection> provider = new SpringConnectionProvider(dataSource());
 * SQLQueryFactory queryFactory = SQLQueryFactory(configuration, provider);
 * }
 * </pre>
 */
The reason is resource management. You don't have access to the underlying JDBC implementation. The transaction will close ResultSets, Statements, Connections, etc. Without a transaction, every connection would be left open, the connection pool would saturate, the database would run out of resources, etc.
If you want to manage your own resources, you could write your own Provider<Connection> and pass in the DataSource, e.g.:
private static Provider<Connection> getConnection(DataSource dataSource) {
    return () -> org.springframework.jdbc.datasource.DataSourceUtils.getConnection(dataSource);
}
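Otherwise, with the default SpringConnectionProvider, the simplest fix is the one from the linked question: make sure a Spring-managed transaction is active around the query, e.g. on the DAO method itself. A minimal sketch in Java (class and method names are illustrative; the Q-type follows the question's Kotlin snippet):
import java.util.List;
import java.util.Set;

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

import com.querydsl.sql.SQLQueryFactory;

@Service
public class ContainerDetailsDao {

    private final SQLQueryFactory sqlQueryFactory;

    public ContainerDetailsDao(SQLQueryFactory sqlQueryFactory) {
        this.sqlQueryFactory = sqlQueryFactory;
    }

    // The transaction binds a Connection to the current thread;
    // SpringConnectionProvider then hands Querydsl that same connection
    // and closes it when the transaction completes.
    @Transactional(readOnly = true)
    public List<String> detailIdsIn(Set<String> ids) {
        QTContainerDetails qDetails = QTContainerDetails.tContainerDetails;
        return sqlQueryFactory.select(qDetails.id).from(qDetails)
                .where(qDetails.id.in(ids))
                .fetch();
    }
}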

Spring Boot Application cannot run test cases involving Two Databases correctly - either get detached entities or no inserts

I am trying to write a Spring Boot app that talks to two databases: a primary one, which is read-write, and a secondary one, which is read-only.
This is also using spring-data-jpa for the repositories.
Roughly speaking, this guide describes what I am doing: https://www.baeldung.com/spring-data-jpa-multiple-databases
And this documentation from spring:
https://docs.spring.io/spring-boot/docs/current/reference/html/howto-data-access.html#howto-two-datasources
I am having trouble understanding some things, I think about the transaction managers, which is making my program error out either during normal operation or during unit tests.
I am running into two issues, with two different TransactionManagers that I do not understand well:
1) When I use JpaTransactionManager, my secondary entities become detached between function calls. This happens in my application running in full Spring Boot Tomcat, and when running the JUnit test with SpringRunner.
2) When I use DataSourceTransactionManager, which was given in some tutorial, my application works correctly, but when I try to run a test case with SpringRunner, without running the full server, Spring/Hibernate will not perform any inserts or updates on the primaryDataSource.
--
Here is a snippet of code for (1) from a service class.
@Transactional
public List<PrimaryGroup> synchronizeAllGroups() {
    Iterable<SecondarySite> secondarySiteList = secondarySiteRepo.findAll();
    List<PrimaryGroup> allUserGroups = new ArrayList<PrimaryGroup>(0);
    for (SecondarySite secondarySite : secondarySiteList) {
        allUserGroups.addAll(synchronizeSiteGroups(secondarySite.getName(), secondarySite));
    }
    return allUserGroups;
}

@Transactional
public List<PrimaryGroup> synchronizeSiteGroups(String sitename, SecondarySite secondarySite) {
    // GET all secondary groups
    if (secondarySite == null) {
        secondarySite = secondarySiteRepo.getSiteByName(sitename);
    }
    logger.debug("synchronizeGroups started - siteId:{}", secondarySite.getLuid().toString());
    List<SecondaryGroup> secondaryGroups = secondarySite.getGroups(); // This shows the error because the secondarySite passed in is detached
    List<PrimaryGroup> primaryUserGroups = primaryGroupRepository.findAllByAppName(sitename);
    ...
    // modify existingUserGroups to have new data from secondary
    ...
    primaryGroupRepository.save(primaryUserGroups);
    logger.debug("synchronizeGroups complete");
    return existingUserGroups;
}
I am pretty sure I understand what is going on with the detached entities under JpaTransactionManager: when synchronizeAllGroups calls synchronizeSiteGroups, only the primary transaction manager is carried over and the secondary one gets left out, so the entities become detached. I can probably work around that, but I'd like to see if there is any way to make this work without putting all the calls to secondary* into a separate service layer that returns non-managed entities.
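One avenue worth exploring, offered as a hedged sketch rather than a verified fix: since the secondary repositories are JPA repositories, the secondary side may need a JpaTransactionManager (instead of the DataSourceTransactionManager shown in the configuration classes below), plus methods pinned to that manager by bean name, so secondary entities stay managed for the duration of the call. Bean names follow the configuration classes later in this question:
// Sketch: a JPA transaction manager for the secondary persistence unit,
// replacing the DataSourceTransactionManager from SecondaryConfig below.
@Bean
public PlatformTransactionManager secondaryTransactionManager(
        @Qualifier("secondaryEntityManagerFactory") EntityManagerFactory emf) {
    return new JpaTransactionManager(emf);
}

// Then pin secondary reads to that manager (the transactionManager
// attribute exists since Spring 4.2); entities loaded here stay managed
// while the secondary transaction is open.
@Transactional(transactionManager = "secondaryTransactionManager", readOnly = true)
public List<SecondarySite> loadAllSecondarySites() {
    List<SecondarySite> sites = new ArrayList<>();
    secondarySiteRepo.findAll().forEach(sites::add);
    return sites;
}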
--
Here is a snippet of code for (2) from a controller class.
@GetMapping("synchronize/secondary")
public String synchronizesecondary() throws UnsupportedEncodingException {
    synchronizeAllGroups();        // pull all the groups
    synchronizeAllUsers();         // pull all the users
    synchronizeAllUserGroupMaps(); // add the mapping table
    return "true";
}
This references that same synchronizeAllGroups from above, but when I am using DataSourceTransactionManager I do not get that detached-entity error.
What I get instead is that the primaryGroupRepository.save(primaryUserGroups) call does not generate any insert or update statement when running a JUnit test that calls the controller directly. So when synchronizeAllUserGroupMaps gets called, primaryUserRepository.findAll() returns 0 rows, and the same goes for primaryGroupRepository.
That is to say - it works when I run this test case:
@RunWith(SpringRunner.class)
@SpringBootTest(classes = app.DApplication.class, properties = {"spring.profiles.active=local,embedded"})
@AutoConfigureMockMvc
public class MockTest {

    @Autowired
    private MockMvc mockMvc;

    @Test
    public void shouldSync() throws Exception {
        this.mockMvc.perform(get("/admin/synchronize/secondary")).andDo(print()).andExpect(status().isOk());
    }
}
But it does not do any inserts or updates when I run this test case:
@RunWith(SpringRunner.class)
@SpringBootTest(classes = app.DApplication.class, properties = {"spring.profiles.active=local,embedded"}, webEnvironment = WebEnvironment.MOCK)
@AutoConfigureMockMvc
public class ControllerTest {

    @Autowired
    AdminController adminController;

    @Test
    public void shouldSync() throws Exception {
        String o = adminController.synchronizesecondary();
    }
}
Here are the two configuration classes
Primary:
@Configuration
@EnableTransactionManagement
@EntityScan(basePackageClasses = app.primary.dao.BasePackageMarker.class)
@EnableJpaRepositories(
    transactionManagerRef = "dataSourceTransactionManager",
    entityManagerFactoryRef = "primaryEntityManagerFactory",
    basePackageClasses = { app.primary.dao.BasePackageMarker.class }
)
public class PrimaryConfig {

    @Bean(name = "primaryDataSourceProperties")
    @Primary
    @ConfigurationProperties("app.primary.datasource")
    public DataSourceProperties primaryDataSourceProperties() {
        return new DataSourceProperties();
    }

    @Bean(name = "primaryDataSource")
    @Primary
    public DataSource primaryDataSourceEmbedded() {
        return primaryDataSourceProperties().initializeDataSourceBuilder().build();
    }

    @Bean
    @Primary
    public LocalContainerEntityManagerFactoryBean primaryEntityManagerFactory(
            EntityManagerFactoryBuilder builder,
            @Qualifier("primaryDataSource") DataSource primaryDataSource) {
        return builder
            .dataSource(primaryDataSource)
            .packages(app.primary.dao.BasePackageMarker.class)
            .persistenceUnit("primary")
            .build();
    }

    @Bean
    @Primary
    public DataSourceTransactionManager dataSourceTransactionManager(@Qualifier("primaryDataSource") DataSource primaryDataSource) {
        DataSourceTransactionManager txm = new DataSourceTransactionManager(primaryDataSource);
        return txm;
    }
}
And Secondary:
@Configuration
@EnableTransactionManagement
@EntityScan(basePackageClasses = app.secondary.dao.BasePackageMarker.class) /* scan secondary as secondary database */
@EnableJpaRepositories(
    transactionManagerRef = "secondaryTransactionManager",
    entityManagerFactoryRef = "secondaryEntityManagerFactory",
    basePackageClasses = { app.secondary.dao.BasePackageMarker.class }
)
public class SecondaryConfig {

    private static final Logger log = LoggerFactory.getLogger(SecondaryConfig.class);

    @Bean(name = "secondaryDataSourceProperties")
    @ConfigurationProperties("app.secondary.datasource")
    public DataSourceProperties secondaryDataSourceProperties() {
        return new DataSourceProperties();
    }

    @Bean(name = "secondaryDataSource")
    public DataSource secondaryDataSourceEmbedded() {
        return secondaryDataSourceProperties().initializeDataSourceBuilder().build();
    }

    @Bean
    public LocalContainerEntityManagerFactoryBean secondaryEntityManagerFactory(
            EntityManagerFactoryBuilder builder,
            @Qualifier("secondaryDataSource") DataSource secondaryDataSource) {
        return builder
            .dataSource(secondaryDataSource)
            .packages(app.secondary.dao.BasePackageMarker.class)
            .persistenceUnit("secondary")
            .build();
    }

    @Bean
    public DataSourceTransactionManager secondaryTransactionManager(@Qualifier("secondaryDataSource") DataSource secondaryDataSource) {
        DataSourceTransactionManager txm = new DataSourceTransactionManager(secondaryDataSource);
        return txm;
    }
}
In my real application, the secondary data source, since it is read-only, is used both at real run time and during the unit test I am writing.
I have been having trouble getting Spring to initialize both data sources, so I have not attached a complete example.
Thanks for any insight people can give me.
Edit: I have read some things that say to use a JTA transaction manager when using multiple databases, and I have tried that. I get an error when it tries to run the transaction on my second, read-only database as I go to commit to the first database:
Caused by: org.postgresql.util.PSQLException: ERROR: prepared transactions are disabled
Hint: Set max_prepared_transactions to a nonzero value.
In my case, I cannot set that, because the database is a read-only database provided by a vendor; we cannot change anything, and I really shouldn't be trying to include this database as part of transactions. I just want to be able to call both databases in one service call.
