Cannot connect from Spring Boot to Dockerized MongoDB instance - spring

I have a dockerized MongoDB instance. I am trying to get Spring Boot to connect to it, but I get an error message when Spring Boot starts up.
Please find the details below.
Error message:
com.mongodb.MongoSocketException: mongodb://localhost:27017/airbnb-data
at com.mongodb.ServerAddress.getSocketAddresses(ServerAddress.java:211) ~[mongodb-driver-core-3.11.1.jar:na]
at com.mongodb.internal.connection.SocketStream.initializeSocket(SocketStream.java:75) ~[mongodb-driver-core-3.11.1.jar:na]
at com.mongodb.internal.connection.SocketStream.open(SocketStream.java:65) ~[mongodb-driver-core-3.11.1.jar:na]
at com.mongodb.internal.connection.InternalStreamConnection.open(InternalStreamConnection.java:128) ~[mongodb-driver-core-3.11.1.jar:na]
at com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.run(DefaultServerMonitor.java:117) ~[mongodb-driver-core-3.11.1.jar:na]
at java.lang.Thread.run(Thread.java:748) [na:1.8.0_281]
Caused by: java.net.UnknownHostException: mongodb://localhost:27017/airbnb-data
at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method) ~[na:1.8.0_281]
at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:929) ~[na:1.8.0_281]
at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1324) ~[na:1.8.0_281]
at java.net.InetAddress.getAllByName0(InetAddress.java:1277) ~[na:1.8.0_281]
at java.net.InetAddress.getAllByName(InetAddress.java:1193) ~[na:1.8.0_281]
at java.net.InetAddress.getAllByName(InetAddress.java:1127) ~[na:1.8.0_281]
at com.mongodb.ServerAddress.getSocketAddresses(ServerAddress.java:203) ~[mongodb-driver-core-3.11.1.jar:na]
application.properties in Spring Boot app:
spring.data.mongodb.uri=mongodb://localhost:27017/airbnb-data
spring.data.mongodb.port=27017
spring.data.mongodb.database=airbnb-data
spring.data.mongodb.host=localhost
spring.data.mongodb.username=mongodbuser
spring.data.mongodb.password=mongodbpwd
docker-compose.yml that launches the MongoDB instance:
version: '3'
services:
  mongoex:
    image: mongo-express
    environment:
      ME_CONFIG_OPTIONS_EDITORTHEME: ambiance
      ME_CONFIG_MONGODB_SERVER: mongodb
      ME_CONFIG_MONGODB_PORT: 27017
      ME_CONFIG_MONGODB_ENABLE_ADMIN: "true"
      ME_CONFIG_MONGODB_ADMINUSERNAME: mongodbuser
      ME_CONFIG_MONGODB_ADMINPASSWORD: mongodbpwd
      ME_CONFIG_MONGODB_AUTH_DATABASE: admin
      ME_CONFIG_MONGODB_AUTH_USERNAME: mongodbuser
      ME_CONFIG_MONGODB_AUTH_PASSWORD: mongodbpwd
      ME_CONFIG_BASICAUTH_USERNAME: mongoexuser
      ME_CONFIG_BASICAUTH_PASSWORD: mongoexpwd
    ports:
      - "8081:8081"
    links:
      - mongodb
    networks:
      - backend
      - frontend
  mongodb:
    image: mongo:latest
    container_name: mongodb
    restart: unless-stopped
    command: mongod --auth
    environment:
      MONGO_INITDB_ROOT_USERNAME: mongodbuser
      MONGO_INITDB_ROOT_PASSWORD: mongodbpwd
      MONGODB_DATA_DIR: /data/db
      MONDODB_LOG_DIR: /dev/null
    ports:
      - "27017:27017"
    volumes:
      - mongodbdata:/data/db
    networks:
      - backend
      - frontend
networks:
  frontend:
    driver: bridge
  backend:
    driver: bridge
volumes:
  mongodbdata:
    driver: local
DatabaseConfig class:
@Configuration
public class DatabaseConfig extends AbstractMongoConfiguration {

    @Value("${spring.data.mongodb.myuri}")
    private String mongoDbUrl;

    @Value("${spring.data.mongodb.port}")
    private String port;

    @Value("${spring.data.mongodb.database}")
    private String mongoDbName;

    /**
     * Url to mongo db
     *
     * @return a string representing the url of bdd
     * @throws MalformedURLException
     */
    @Override
    public MongoClient mongoClient() {
        return new MongoClient(this.mongoDbUrl, Integer.parseInt(this.port));
    }

    @Override
    public @Bean MongoTemplate mongoTemplate() throws Exception {
        MappingMongoConverter converter = new MappingMongoConverter(new DefaultDbRefResolver(mongoDbFactory()),
                new MongoMappingContext());
        converter.setMapKeyDotReplacement("\\+");
        MongoTemplate mongoTemplate = new MongoTemplate(mongoDbFactory(), converter);
        return mongoTemplate;
    }

    @Override
    protected String getDatabaseName() {
        return this.mongoDbName;
    }

    /**
     * Access to Mongo db
     *
     * @return link to db
     * @throws Exception
     *
     * @Bean public MongoTemplate accessBddMongo() throws Exception { return new
     * MongoTemplate(bddGetEndpointUrl(), mongoDbName); }
     */
}
Thanks for your help

I realized I didn't need to set the mongoClient() method in the DatabaseConfig class.
After amending it, I have:
@Override
public MongoClient mongoClient() {
    return new MongoClient();
}
Solved
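For reference: the stack trace shows the whole connection string being treated as a hostname (UnknownHostException: mongodb://localhost:27017/airbnb-data), which is what happens when a full URI is passed as the host argument of the MongoClient constructor. Note also that the config class reads spring.data.mongodb.myuri while the properties file defines spring.data.mongodb.uri. An alternative to any custom mongoClient() override is to let Spring Boot auto-configure the client from a single connection string. A minimal sketch; the inline credentials and authSource=admin are assumptions based on the MONGO_INITDB_ROOT_* variables in the compose file, which create the root user in the admin database:

```properties
# Use either one connection string...
spring.data.mongodb.uri=mongodb://mongodbuser:mongodbpwd@localhost:27017/airbnb-data?authSource=admin
# ...or the discrete host/port/username/password properties, but not both at once.
```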

Related

Configuring access to an SQLite database from a Spring Boot Docker container

I have a Spring Boot application that uses an SQLite database stored on the local machine. I am trying to set up:
Hikari for managing connection pools
JNDI to manage the datasource
An SQLite database available from a local folder
I have configured the following application.properties:
server.servlet.context-path=/ws-application
spring.datasource.driverClassName=org.sqlite.JDBC
spring.datasource.url=jdbc:sqlite:C:/db/applicationdata.db
spring.datasource.username=root
spring.datasource.password=root
spring.datasource.jndiName=jdbc/myDataSource
spring.jpa.hibernate.ddl-auto=create
spring.jpa.show-sql=false
spring.datasource.hikari.connection-timeout = 1800000
spring.datasource.hikari.connectionTimeout=1800000
spring.datasource.hikari.minimum-idle= 1
spring.datasource.hikari.maximum-pool-size= 1
spring.datasource.hikari.data-source-j-n-d-i=jdbc/myDataSource
spring.datasource.hikari.idle-timeout=600000
spring.datasource.hikari.idleTimeout=600000
spring.datasource.hikari.max-lifetime= 1800000
spring.datasource.hikari.maxLifetime=1800000
spring.datasource.hikari.auto-commit =true
The application works fine when the jar is executed directly. So far so good. Now I'm trying to create a Docker image and launch the application with Docker Compose.
For the Spring Boot application, I have a basic Dockerfile:
# Build stage: requires maven
FROM maven:3.8.4-openjdk-17 AS build
WORKDIR /app
COPY ../../ws-proxy/. /app
RUN mvn clean package -DskipTests
# Package and Run stage
FROM openjdk:17-alpine
COPY --from=build app/target/ws-application.jar /usr/local/lib/ws-application.jar
ENTRYPOINT ["java", "-jar", "-Dspring.profiles.active=docker", "/usr/local/lib/ws-application.jar"]
Now in my docker compose file, I added sqlite and my webservice:
version: "3.8"
services:
  sqlite3:
    image: nouchka/sqlite3:latest
    container_name: sqlite3
    restart: always
    stdin_open: true
    tty: true
    volumes:
      # Modify following line
      - sqliteDb:/app/
    ports:
      - '9000:9000' # expose ports - HOST:CONTAINER
  ws-proxy:
    image: ws-application
    container_name: ws-application
    restart: always
    build:
      context: ../
      dockerfile: docker/dockerfiles/Dockerfile.ws-application
    ports:
      - "8080:8080"
    environment:
      - SPRING_DATASOURCE_URL=jdbc:C:/db/applicationdata.db
volumes:
  sqliteDb:
    driver: local
    driver_opts:
      o: bind
      type: none
      device: /C/db
I have two issues:
Caused by: java.sql.SQLException: Driver:org.sqlite.JDBC#4a31c2ee returned null for URL:jdbc:C:/db/applicationdata.db
org.springframework.jndi.JndiLookupFailureException: JndiObjectTargetSource failed to obtain new target object
In my application, JNDI is configured programmatically and reads application.properties:
@Configuration
@EnableTransactionManagement
// @PropertySource("classpath:application.properties")
@EnableJpaRepositories(basePackages = "fr.app.io.repository")
public class AppConfig {

    @Bean
    public DatabaseProperties databaseProperties() {
        return new DatabaseProperties();
    }

    @Bean
    public TomcatServletWebServerFactory tomcatFactory() {
        return new TomcatServletWebServerFactory() {
            @Override
            protected TomcatWebServer getTomcatWebServer(Tomcat tomcat) {
                tomcat.enableNaming();
                return super.getTomcatWebServer(tomcat);
            }

            @Override
            protected void postProcessContext(Context context) {
                ContextResource resource = new ContextResource();
                resource.setType("org.apache.tomcat.jdbc.pool.DataSource");
                resource.setName(databaseProperties().getJndiName());
                resource.setProperty("factory", "org.apache.tomcat.jdbc.pool.DataSourceFactory");
                resource.setProperty("driverClassName", databaseProperties().getDriverClassName());
                resource.setProperty("url", databaseProperties().getUrl());
                resource.setProperty("username", databaseProperties().getUsername());
                resource.setProperty("password", databaseProperties().getPassword());
                context.getNamingResources().addResource(resource);
            }
        };
    }

    @Bean(destroyMethod = "")
    public DataSource jndiDataSource() throws IllegalArgumentException, NamingException {
        JndiObjectFactoryBean bean = new JndiObjectFactoryBean();
        bean.setJndiName("java:comp/env/" + databaseProperties().getJndiName());
        bean.setProxyInterface(DataSource.class);
        bean.setLookupOnStartup(false);
        bean.afterPropertiesSet();
        return (DataSource) bean.getObject();
    }

    @Bean
    public EntityManagerFactory entityManagerFactory() throws SQLException, IllegalArgumentException, NamingException {
        HibernateJpaVendorAdapter vendorAdapter = new HibernateJpaVendorAdapter();
        vendorAdapter.setDatabase(Database.MYSQL);
        vendorAdapter.setShowSql(true);
        LocalContainerEntityManagerFactoryBean factory = new LocalContainerEntityManagerFactoryBean();
        factory.setJpaVendorAdapter(vendorAdapter);
        factory.setPackagesToScan("fr.dsidiff.app.proxy.io.entity");
        factory.setDataSource(jndiDataSource());
        factory.afterPropertiesSet();
        return factory.getObject();
    }

    @Bean
    public PlatformTransactionManager transactionManager()
            throws SQLException, IllegalArgumentException, NamingException {
        JpaTransactionManager txManager = new JpaTransactionManager();
        txManager.setEntityManagerFactory(entityManagerFactory());
        return txManager;
    }
}
Side note: I tested code from this tutorial: https://roytuts.com/spring-boot-jndi-datasource/
As a bonus question, can anyone explain why, if I set the vendorAdapter to MYSQL, I can still do insertions using SQLite?
HibernateJpaVendorAdapter vendorAdapter = new HibernateJpaVendorAdapter();
vendorAdapter.setDatabase(Database.MYSQL);
Thanks
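One likely cause of the first error is visible in the compose file: the SPRING_DATASOURCE_URL override drops the sqlite: scheme (jdbc:C:/... instead of jdbc:sqlite:...), so org.sqlite.JDBC rejects the URL and returns null. A hedged sketch of the corrected override, assuming the database file is bind-mounted at /db inside the ws-application container (the mount itself is an assumption, since the Windows path C:/db does not exist inside a Linux container):

```yaml
  ws-proxy:
    environment:
      # The sqlite: scheme is required for org.sqlite.JDBC to accept the URL,
      # and the path must exist inside THIS container, not on the Windows host.
      - SPRING_DATASOURCE_URL=jdbc:sqlite:/db/applicationdata.db
    volumes:
      - sqliteDb:/db
```

Note also that SQLite is an embedded database: the Spring Boot container reads the file directly, so the separate sqlite3 service does not participate in the JDBC connection at all.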

Amazon RDS read replica configuration for a Postgres database from a Spring Boot application deployed on PCF?

Hi all, we have deployed our Spring Boot code to PCF, which is running on AWS.
We are using an AWS database, with a CUPS service and VCAP_SERVICES holding the DB parameters.
Below is our configuration to get the datasource:
@Bean
public DataSource dataSource() {
    if (dataSource == null) {
        dataSource = connectionFactory().dataSource();
        configureDataSource(dataSource);
    }
    return dataSource;
}

@Bean
public JdbcTemplate jdbcTemplate() {
    return new JdbcTemplate(dataSource());
}

private void configureDataSource(DataSource dataSource) {
    org.apache.tomcat.jdbc.pool.DataSource tomcatDataSource = asTomcatDatasource(dataSource);
    tomcatDataSource.setTestOnBorrow(true);
    tomcatDataSource.setValidationQuery("SELECT 1");
    tomcatDataSource.setValidationInterval(30000);
    tomcatDataSource.setTestWhileIdle(true);
    tomcatDataSource.setTimeBetweenEvictionRunsMillis(60000);
    tomcatDataSource.setRemoveAbandoned(true);
    tomcatDataSource.setRemoveAbandonedTimeout(60);
    tomcatDataSource.setMaxActive(Environment.getAsInt("MAX_ACTIVE_DB_CONNECTIONS", tomcatDataSource.getMaxActive()));
}

private org.apache.tomcat.jdbc.pool.DataSource asTomcatDatasource(DataSource dataSource) {
    Objects.requireNonNull(dataSource, "There is no DataSource configured");
    DataSource targetDataSource = ((DelegatingDataSource) dataSource).getTargetDataSource();
    return (org.apache.tomcat.jdbc.pool.DataSource) targetDataSource;
}
Now that read replicas have been created, what configuration do I need to modify so our Spring Boot application uses them?
Is @Transactional(readOnly = true) on the get call enough, so that it is automatically taken care of, or do I need to add some more configuration?
@Repository
public class PostgresSomeRepository implements SomeRepository {

    @Autowired
    public PostgresSomeRepository(JdbcTemplate jdbcTemplate, RowMapper<Consent> rowMapper) {
        this.jdbcTemplate = jdbcTemplate;
        this.rowMapper = rowMapper;
    }

    @Override
    @Transactional(readOnly = true)
    public List<SomeValue> getSomeGetCall(List<String> userIds, String applicationName, String propositionName, String since, String... types) {
        // Some logic
        try {
            return jdbcTemplate.query(sql, rowMapper, paramList.toArray());
        } catch (DataAccessException ex) {
            throw new ErrorGettingConsent(ex.getMessage(), ex);
        }
    }
}
Note: we have not added any Spring AWS JDBC dependency.
Let's assume the cloud service name is my_db.
Map the cloud service to the application config application-cloud.yml, used by default in CF (BTW, this is better than using the connector because you can customize the datasource):
spring:
  datasource:
    type: com.zaxxer.hikari.HikariDataSource
    # my_db
    url: ${vcap.services.my_db.credentials.url}
    username: ${vcap.services.my_db.credentials.username}
    password: ${vcap.services.my_db.credentials.password}
    hikari:
      poolName: Hikari
      auto-commit: false
      data-source-properties:
        cachePrepStmts: true
        prepStmtCacheSize: 250
        prepStmtCacheSqlLimit: 2048
        useServerPrepStmts: true
  jpa:
    generate-ddl: false
    show-sql: true
Put the service into the application manifest.yml:
---
applications:
  - name: my-app
    env:
      SPRING_PROFILES_ACTIVE: "cloud" # by default
    services:
      - my_db
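On the read-replica part of the question: @Transactional(readOnly = true) by itself only marks the transaction as read-only; with a single DataSource it does not route queries anywhere else. A common approach (no AWS-specific dependency needed) is a routing DataSource, e.g. Spring's AbstractRoutingDataSource keyed on the read-only flag. The JDK-only sketch below illustrates just the routing decision; the class name, the endpoint URLs and the explicit flag are hypothetical stand-ins for what Spring would wire up for you:

```java
import java.util.Map;

/**
 * Minimal sketch of the idea behind Spring's AbstractRoutingDataSource:
 * pick a target (primary vs. read replica) per operation based on a
 * thread-local read-only flag. All names and URLs here are illustrative.
 */
public class ReadReplicaRouter {

    private static final ThreadLocal<Boolean> READ_ONLY =
            ThreadLocal.withInitial(() -> false);

    // Hypothetical endpoints; in Spring these would be two DataSource beans.
    private static final Map<String, String> TARGETS = Map.of(
            "primary", "jdbc:postgresql://primary.example.com:5432/app",
            "replica", "jdbc:postgresql://replica.example.com:5432/app");

    /** Mark the current thread read-only, as @Transactional(readOnly = true) would. */
    public static void setReadOnly(boolean readOnly) {
        READ_ONLY.set(readOnly);
    }

    /** The decision AbstractRoutingDataSource.determineCurrentLookupKey() would make. */
    public static String currentTarget() {
        return TARGETS.get(READ_ONLY.get() ? "replica" : "primary");
    }
}
```

In a real Spring setup, a common pattern is an AbstractRoutingDataSource (wrapped in a LazyConnectionDataSourceProxy so the connection is fetched after the transaction starts) whose lookup key is TransactionSynchronizationManager.isCurrentTransactionReadOnly(); @Transactional(readOnly = true) then effectively selects the replica, and no routing happens by itself without such a configuration.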

Spring Cloud Config - Vault and JDBC backend with JDBC creds in Vault

I am attempting to modify our current Spring Cloud Config server which has only a JDBC backend to include a Vault backend in order make the JDBC connection credentials secret.
VAULT:
Listener 1: tcp (addr: "127.0.0.1:8400", cluster address: "127.0.0.1:8401", max_request_duration: "1m30s", max_request_size: "33554432", tls: "disabled")
C:\apps\HashiCorp>vault kv get secret/my-secrets
=============== Data ===============
Key                          Value
---                          -----
spring.datasource.password   yadayadayada
spring.datasource.username   cobar
bootstrap.yml
server:
  port: 8888
spring:
  application:
    name: config-server
  cloud:
    config:
      allowOverride: true
      server:
        jdbc:
          sql: SELECT prop_key, prop_value from CloudProperties where application=? and profile=? and label=?
          order: 2
        # https://cloud.spring.io/spring-cloud-config/reference/html/#vault-backend
        vault:
          scheme: http
          host: localhost
          port: 8400
          defaultKey: my-secrets
          order: 1
application.yml
spring:
  main:
    banner-mode: off
    allow-bean-definition-overriding: true
  datasource:
    url: jdbc:mysql://localhost/bootdb?createDatabaseIfNotExist=true&autoReconnect=true&useSSL=false
    #username: cobar
    #password: yadayadayada
    driverClassName: com.mysql.jdbc.Driver
    hikari:
      connection-timeout: 60000
      maximum-pool-size: 5
  cloud:
    vault:
      scheme: http
      host: localhost
      port: 8400
      defaultKey: my-secrets
      token: root.RIJQjZ4jRZUS8mskzfCON88K
The spring.datasource username and password are not being retrieved from Vault.
2021-12-01 12:43:39.927 INFO 5992 --- [ restartedMain]: The following profiles are active: jdbc,vault
2021-12-01 12:43:46.123 ERROR 5992 --- [ restartedMain] com.zaxxer.hikari.pool.HikariPool : HikariPool-1 - Exception during pool initialization.
Login failed for user ''. ClientConnectionId:a32
Move the properties from the bootstrap to the application context.
Call the Vault endpoint to obtain the secrets, and use these to configure the DataSource for the JDBC backend.
@Slf4j
@SpringBootApplication
@EnableConfigServer
public class ConfigServerApplication {

    public static final String VAULT_URL_FRMT = "%s://%s:%s/v1/secret/%s";

    @Autowired
    private Environment env;

    public static void main(String[] args) {
        SpringApplication app = new SpringApplication(ConfigServerApplication.class);
        app.addListeners(new ApplicationPidFileWriter());
        app.addListeners(new WebServerPortFileWriter());
        app.run(args);
    }

    @Order(1)
    @Bean("restTemplate")
    public RestTemplate restTemplate() {
        return new RestTemplate();
    }

    @Configuration
    public class JdbcConfig {

        @Autowired
        private RestTemplate restTemplate;

        @Bean
        public DataSource getDataSource() {
            Secrets secrets = findSecrets();
            DataSourceBuilder dataSourceBuilder = DataSourceBuilder.create();
            dataSourceBuilder.url(secrets.getData().get("spring.datasource.url"));
            dataSourceBuilder.username(secrets.getData().get("spring.datasource.username"));
            dataSourceBuilder.password(secrets.getData().get("spring.datasource.password"));
            return dataSourceBuilder.build();
        }

        private Secrets findSecrets() {
            HttpHeaders httpHeaders = new HttpHeaders();
            httpHeaders.set("X-Vault-Token", env.getProperty("spring.cloud.vault.token"));
            HttpEntity request = new HttpEntity(httpHeaders);
            String url = String.format(VAULT_URL_FRMT,
                    env.getProperty("spring.cloud.vault.scheme"),
                    env.getProperty("spring.cloud.vault.host"),
                    env.getProperty("spring.cloud.vault.port"),
                    env.getProperty("spring.cloud.vault.defaultKey")
            );
            return restTemplate.exchange(url, HttpMethod.GET, request, Secrets.class, 1).getBody();
        }
    }
}
@Getter
@Setter
public class Secrets implements Serializable {
    private String request_id;
    private String lease_id;
    private boolean renewable;
    private Duration lease_duration;
    private Map<String, String> data;
}
Now you have a Cloud Config server with a JDBC backend, and you can keep the database properties secret.
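One caveat: the URL format above (/v1/secret/%s) and the flat Map<String, String> data field match a KV version 1 secrets engine. If the secret engine is KV version 2 (the default in recent Vault versions), the read path becomes /v1/secret/data/<key> and the payload nests the key/value pairs one level deeper under data.data. A small unwrapping sketch that handles both response shapes (the class name VaultKv is mine, not part of the original code):

```java
import java.util.Map;

/**
 * Unwrap a parsed Vault KV read response.
 * KV v1 returns the secrets directly under "data";
 * KV v2 nests them one level deeper: "data" -> "data" (plus "metadata").
 */
public class VaultKv {

    @SuppressWarnings("unchecked")
    public static Map<String, String> unwrap(Map<String, Object> body) {
        Object data = body.get("data");
        if (data instanceof Map<?, ?> outer && outer.get("data") instanceof Map<?, ?> inner) {
            return (Map<String, String>) inner; // KV v2 shape
        }
        return (Map<String, String>) data;      // KV v1 shape
    }
}
```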

Spring Cloud Gateway - Unable to find GatewayFilterFactory with name [Filter_Name]

I have a Spring Cloud Gateway application and I am trying to set up a gateway filter. The Spring Boot version is 2.3.4.RELEASE. Here are the dependencies:
dependencies {
    implementation 'org.springframework.boot:spring-boot-starter'
    implementation platform(SpringBootPlugin.BOM_COORDINATES)
    implementation platform('org.springframework.cloud:spring-cloud-dependencies:Hoxton.SR8')
    implementation 'org.springframework.boot:spring-boot-starter-actuator'
    implementation 'org.springframework.cloud:spring-cloud-starter-gateway'
    implementation 'org.springframework.cloud:spring-cloud-starter-sleuth'
    implementation 'org.springframework.cloud:spring-cloud-starter-netflix-eureka-client'
}
Here is the configuration for the gateway client:
server:
  port: 8081
spring:
  cloud:
    gateway:
      routes:
        - id: onboard_redirect
          uri: http://localhost:8080/api/v1/onboard
          predicates:
            - Path=/api/v1/onboard
          filters:
            - name: MyLogging
              args:
                baseMessage: My Custom Message
                preLogger: true
                postLogger: true
Here is my filter class:
@Component
public class MyLoggingGatewayFilterFactory extends AbstractGatewayFilterFactory<MyLoggingGatewayFilterFactory.Config> {

    final Logger logger = LoggerFactory.getLogger(MyLoggingGatewayFilterFactory.class);

    public MyLoggingGatewayFilterFactory() {
        super(Config.class);
    }

    @Override
    public GatewayFilter apply(Config config) {
        return (exchange, chain) -> {
            // Pre-processing
            if (config.preLogger) {
                logger.info("Pre GatewayFilter logging: " + config.baseMessage);
            }
            return chain.filter(exchange)
                    .then(Mono.fromRunnable(() -> {
                        // Post-processing
                        if (config.postLogger) {
                            logger.info("Post GatewayFilter logging: " + config.baseMessage);
                        }
                    }));
        };
    }

    public static class Config {
        public String baseMessage;
        public boolean preLogger;
        public boolean postLogger;
    }
}
Everything works without configuring the filter, but when I configure the filter I get the following error:
reactor.core.Exceptions$ErrorCallbackNotImplemented: java.lang.IllegalArgumentException: Unable to find GatewayFilterFactory with name MyLogging
Caused by: java.lang.IllegalArgumentException: Unable to find GatewayFilterFactory with name MyLogging
What am I doing wrong here?
The filter class is MyLoggingGatewayFilterFactory, not MyLogging as you set in your properties.
Try the following modification in your application.yml file:
filters:
  - name: MyLoggingGatewayFilterFactory
Add this dependency to your pom.xml:
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-circuitbreaker-reactor-resilience4j</artifactId>
</dependency>

Spring-Boot application fails to find property factory_class for ehcache

I'm working on a Spring application (using Spring Boot) that worked well until this morning. I now need to configure a second datasource to use two different databases (MySQL + embedded H2).
I now have two classes, MainDatabaseConfiguration and EmbeddedDatabaseConfiguration, which both provide a bean of type DataSource (mainDataSource and embeddedDataSource), as well as associated beans of type EntityManager, EntityManagerFactory and TransactionManager.
Unfortunately, application initialization fails with the following error:
...
Caused by: org.hibernate.cache.NoCacheRegionFactoryAvailableException: Second-level cache is used in the application, but property hibernate.cache.region.factory_class is not given; please either disable second level cache or set correct region factory using the hibernate.cache.region.factory_class setting and make sure the second level cache provider (hibernate-infinispan, e.g.) is available on the classpath.
at org.hibernate.cache.internal.NoCachingRegionFactory.buildEntityRegion(NoCachingRegionFactory.java:83)
at org.hibernate.internal.SessionFactoryImpl.<init>(SessionFactoryImpl.java:363)
at org.hibernate.cfg.Configuration.buildSessionFactory(Configuration.java:1857)
...
However, when I look in my config file:
application-dev.yml:
server:
  port: 8080
  address: localhost

spring:
  profiles: dev
  datasource:
    main:
      dataSourceClassName: org.h2.jdbcx.JdbcDataSource
      url: jdbc:h2:mem:jhipster
      databaseName:
      serverName:
      username:
      password:
    embedded:
      dataSourceClassName: org.h2.jdbcx.JdbcDataSource
      url: jdbc:h2:mem:jhipster
      databaseName:
      serverName:
      username:
      password:
  jpa:
    database-platform: org.hibernate.dialect.H2Dialect
    database: H2
    openInView: false
    show_sql: true
    generate-ddl: false
    hibernate:
      ddl-auto: none
      naming-strategy: org.hibernate.cfg.EJB3NamingStrategy
    properties:
      hibernate.cache.use_second_level_cache: true
      hibernate.cache.use_query_cache: false
      hibernate.generate_statistics: true
      hibernate.cache.region.factory_class: org.hibernate.cache.ehcache.SingletonEhCacheRegionFactory
  thymeleaf:
    mode: XHTML
    cache: false

metrics:
  jmx.enabled: true
  graphite:
    enabled: false
    host:
    port:

cache:
  timeToLiveSeconds: 3600
  ehcache:
    maxBytesLocalHeap: 16M

# You can add as many folders to watch as you want
# You just need to add a dash + the directory to watch
hotReload:
  enabled: true
  package:
    project: com.sfr.sio
    domain: com.sfr.sio.domain
    restdto: com.sfr.sio.web.rest.dto
  liquibase:
    defaultCatalogName:
    defaultSchema: public
  watchdir:
    - target/classes
The property "hibernate.cache.region.factory_class" is present. The only change I made to that file for the second datasource was in the "spring.datasource" section, splitting it in two (spring.datasource.main and spring.datasource.embedded).
(Note: both datasources are H2 in my dev environment, but MySQL is used in production.)
As you can see, the property mentioned in the error is present, but Spring seems to fail to retrieve it.
The code for the other classes that could be involved in the error:
MainDatabaseConfiguration.java:
@Configuration
@EnableTransactionManagement
@EnableJpaRepositories(
        entityManagerFactoryRef = "mainEntityManagerFactory",
        transactionManagerRef = "mainTransactionManager",
        basePackages = { "com.sfr.sio.repository" })
public class MainDatabaseConfiguration extends AbstractDatabaseConfiguration implements EnvironmentAware {

    /** Prefix for the main datasource properties. **/
    private static final String MAIN_DATASOURCE_PREFIX = "spring.datasource.main.";

    /** Logger. */
    private final Logger log = LoggerFactory.getLogger(MainDatabaseConfiguration.class);

    @Override
    public void setEnvironment(Environment environment) {
        this.propertyResolver = new RelaxedPropertyResolver(environment, MAIN_DATASOURCE_PREFIX);
    }

    /**
     * Main Datasource bean creator.
     * <ul>
     * <li>Mysql for qualif and sfr environments</li>
     * <li>h2 for dev environment</li>
     * </ul>
     *
     * @return the datasource.
     */
    @Bean(name = "mainDataSource")
    @Primary
    public DataSource dataSource() {
        log.debug("Configuring Datasource");
        if (propertyResolver.getProperty(URL_PARAMETER) == null && propertyResolver.getProperty(DATABASE_NAME_PARAMETER) == null) {
            log.error("Your database connection pool configuration is incorrect! The application" +
                    "cannot start. Please check your Spring profile, current profiles are: {}",
                    Arrays.toString(env.getActiveProfiles()));
            throw new ApplicationContextException("Database connection pool is not configured correctly");
        }
        HikariConfig config = new HikariConfig();
        config.setDataSourceClassName(propertyResolver.getProperty(DS_CLASS_NAME_PARAMETER));
        if (propertyResolver.getProperty(URL_PARAMETER) == null || "".equals(propertyResolver.getProperty(URL_PARAMETER))) {
            config.addDataSourceProperty(DATABASE_NAME_PARAMETER, propertyResolver.getProperty(DATABASE_NAME_PARAMETER));
            config.addDataSourceProperty(SERVER_NAME_PARAMETER, propertyResolver.getProperty(SERVER_NAME_PARAMETER));
        } else {
            config.addDataSourceProperty(URL_PARAMETER, propertyResolver.getProperty(URL_PARAMETER));
        }
        config.addDataSourceProperty(USER_PARAM, propertyResolver.getProperty(USERNAME_PARAMETER));
        config.addDataSourceProperty(PASSWORD_PARAMETER, propertyResolver.getProperty(PASSWORD_PARAMETER));
        return new HikariDataSource(config);
    }

    /**
     * @return the entity manager for the main datasource
     * @see MainDatabaseConfiguration.dataSource()
     */
    @Bean(name = "mainEntityManager")
    public EntityManager entityManager() {
        return entityManagerFactory().createEntityManager();
    }

    /**
     * @return the entity manager factory for the main datasource
     * @see MainDatabaseConfiguration.dataSource()
     */
    @Bean(name = "mainEntityManagerFactory")
    public EntityManagerFactory entityManagerFactory() {
        LocalContainerEntityManagerFactoryBean lef = new LocalContainerEntityManagerFactoryBean();
        lef.setDataSource(this.dataSource());
        lef.setJpaVendorAdapter(jpaVendorAdapter);
        lef.setPackagesToScan("com.sfr.sio.domain");
        lef.setPersistenceUnitName("mainPersistenceUnit");
        lef.afterPropertiesSet();
        return lef.getObject();
    }

    /**
     * @return the transaction manager for the main datasource
     * @see MainDatabaseConfiguration.dataSource()
     */
    @Bean(name = "mainTransactionManager")
    @Primary
    public PlatformTransactionManager transactionManager() {
        return new JpaTransactionManager(entityManagerFactory());
    }

    /**
     * Liquibase bean creator.
     * @return the liquibase bean
     */
    @Bean
    @Profile(value = Constants.SPRING_PROFILE_DEVELOPMENT)
    public SpringLiquibase liquibase() {
        log.debug("Configuring Liquibase");
        SpringLiquibase liquibase = new SpringLiquibase();
        liquibase.setDataSource(dataSource());
        liquibase.setChangeLog("classpath:config/liquibase/master.xml");
        liquibase.setContexts("development, production");
        return liquibase;
    }
}
The only difference between MainDatabaseConfiguration and EmbeddedDatabaseConfiguration is the replacement of mainDataSource by embeddedDataSource; the code is otherwise the same.
CacheConfiguration.java:
@Configuration
@EnableCaching
@AutoConfigureAfter(value = {MetricsConfiguration.class, MainDatabaseConfiguration.class, EmbeddedDatabaseConfiguration.class})
public class CacheConfiguration {

    /** Logger. */
    private final Logger log = LoggerFactory.getLogger(CacheConfiguration.class);

    /** Entity manager. */
    @PersistenceContext(unitName = "mainPersistenceUnit")
    private EntityManager mainEntityManager;

    /** Current environment. */
    @Inject
    private Environment env;

    /** Metrics registry. */
    @Inject
    private MetricRegistry metricRegistry;

    /** Ehcache manager. */
    private net.sf.ehcache.CacheManager cacheManager;

    /** TTL parameter. */
    private static final Integer CACHE_TIME_TO_LIVE = 3600;

    /**
     * Prepare destroy of the object.
     */
    @PreDestroy
    public void destroy() {
        log.info("Remove Cache Manager metrics");
        SortedSet<String> names = metricRegistry.getNames();
        for (String name : names) {
            metricRegistry.remove(name);
        }
        log.info("Closing Cache Manager");
        cacheManager.shutdown();
    }

    /**
     * Cache manager bean creator.
     * @return the cache manager.
     */
    @Bean
    public CacheManager cacheManager() {
        log.debug("Starting Ehcache");
        cacheManager = net.sf.ehcache.CacheManager.create();
        cacheManager.getConfiguration().setMaxBytesLocalHeap(env.getProperty("cache.ehcache.maxBytesLocalHeap", String.class, "16M"));
        log.debug("Registering Ehcache Metrics gauges");
        Set<EntityType<?>> entities = mainEntityManager.getMetamodel().getEntities();
        for (EntityType<?> entity : entities) {
            String name = entity.getName();
            if (name == null) {
                name = entity.getJavaType().getName();
            }
            Assert.notNull(name, "entity cannot exist without an identifier");
            net.sf.ehcache.Cache cache = cacheManager.getCache(name);
            if (cache != null) {
                cache.getCacheConfiguration().setTimeToLiveSeconds(env.getProperty("cache.timeToLiveSeconds", Integer.class, CACHE_TIME_TO_LIVE));
                net.sf.ehcache.Ehcache decoratedCache = InstrumentedEhcache.instrument(metricRegistry, cache);
                cacheManager.replaceCacheWithDecoratedCache(cache, decoratedCache);
            }
        }
        EhCacheCacheManager ehCacheManager = new EhCacheCacheManager();
        ehCacheManager.setCacheManager(cacheManager);
        return ehCacheManager;
    }
}
Sorry for the long post. I hope it's clear enough. Don't hesitate to ask for clarifications.
I had the same problem. After debugging, it seems the properties from the application.properties file are picked up later in the initialization process and are not yet available when the SessionFactory is being initialized.
The following may turn out to be an ugly solution, but it worked for me:
@Bean
public SessionFactory sessionFactory() {
    final LocalSessionFactoryBuilder localSessionFactoryBuilder = new LocalSessionFactoryBuilder(dataSource);
    localSessionFactoryBuilder.scanPackages(myModelPackage);
    localSessionFactoryBuilder.setProperty(Environment.CACHE_REGION_FACTORY, "org.hibernate.cache.ehcache.EhCacheRegionFactory");
    return localSessionFactoryBuilder.buildSessionFactory();
}
This goes into your DB configuration class.
As you can see, I force the cache region factory property to what I need at the early stage of session factory creation.
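The asker's setup builds the EntityManagerFactory by hand with LocalContainerEntityManagerFactoryBean, and a hand-built factory does not pick up Boot's spring.jpa.properties.* automatically, so the same idea can be applied there with setJpaPropertyMap. A sketch only, assuming the existing MainDatabaseConfiguration fields; not tested against the original project:

```java
@Bean(name = "mainEntityManagerFactory")
public EntityManagerFactory entityManagerFactory() {
    LocalContainerEntityManagerFactoryBean lef = new LocalContainerEntityManagerFactoryBean();
    lef.setDataSource(this.dataSource());
    lef.setJpaVendorAdapter(jpaVendorAdapter);
    lef.setPackagesToScan("com.sfr.sio.domain");
    lef.setPersistenceUnitName("mainPersistenceUnit");
    // Hibernate settings from application-dev.yml are not applied to a
    // hand-built factory, so pass them explicitly:
    Map<String, Object> jpaProperties = new HashMap<>();
    jpaProperties.put("hibernate.cache.use_second_level_cache", "true");
    jpaProperties.put("hibernate.cache.region.factory_class",
            "org.hibernate.cache.ehcache.SingletonEhCacheRegionFactory");
    lef.setJpaPropertyMap(jpaProperties);
    lef.afterPropertiesSet();
    return lef.getObject();
}
```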
