Spring Boot and Spring Data with Cassandra: Continue on failed database connection

I use Spring Boot and Spring Data with Cassandra. On application startup, Spring establishes a connection to the database to set up the schema and initialize the Spring Data repositories. If the database is not available, the application won't start.
I want the application to just log an error and start anyway. Of course, the repositories won't be usable, but other services (REST controllers etc.) that are independent of the database should still work. It would also be nice if the actuator health check showed that Cassandra is down.
For JDBC, there is a spring.datasource.continue-on-error property. I couldn't find anything similar for Cassandra.
I also tried to create a custom Cassandra configuration and catch the exception on CqlSession creation, but I couldn't achieve the desired behavior.
EDIT: As suggested by @adutra, I tried to set advanced.reconnect-on-init. The application tries to establish the connection, but it is not fully initialized (e.g. REST controllers are not reachable):
@Configuration
public class CustomCassandraConfiguration extends CassandraAutoConfiguration {

    @Bean
    public DriverConfigLoaderBuilderCustomizer driverConfigLoaderBuilderCustomizer() {
        return builder -> builder.withBoolean(DefaultDriverOption.RECONNECT_ON_INIT, true);
    }
}
EDIT2: I now have a working example (the application starts, with a custom health check for Cassandra), but it feels pretty ugly:
CustomCassandraAutoConfiguration
@Configuration
public class CustomCassandraAutoConfiguration extends CassandraAutoConfiguration {

    private final Logger logger = LoggerFactory.getLogger(getClass());

    @Override
    @Bean
    public CqlSession cassandraSession(CqlSessionBuilder cqlSessionBuilder) {
        try {
            return super.cassandraSession(cqlSessionBuilder);
        } catch (AllNodesFailedException e) {
            logger.error("Failed to establish the database connection", e);
        }
        // Fall back to a no-op session so the application context can still start
        return new DatabaseNotConnectedFakeCqlSession();
    }

    @Bean
    public CassandraReactiveHealthIndicator cassandraHealthIndicator(ReactiveCassandraOperations r, CqlSession session) {
        if (session instanceof DatabaseNotConnectedFakeCqlSession) {
            // Report DOWN permanently, since the real session was never established
            return new CassandraReactiveHealthIndicator(r) {
                @Override
                protected Mono<Health> doHealthCheck(Health.Builder builder) {
                    return Mono.just(builder.down().withDetail("connection", "was not available on startup").build());
                }
            };
        }
        return new CassandraReactiveHealthIndicator(r);
    }
}
CustomCassandraDataAutoConfiguration
@Configuration
public class CustomCassandraDataAutoConfiguration extends CassandraDataAutoConfiguration {

    public CustomCassandraDataAutoConfiguration(CqlSession session) {
        super(session);
    }

    @Bean
    public SessionFactoryFactoryBean cassandraSessionFactory(CqlSession session, Environment environment, CassandraConverter converter) {
        SessionFactoryFactoryBean sessionFactoryFactoryBean = super.cassandraSessionFactory(environment, converter);
        // Disable schema action if database is not available
        if (session instanceof DatabaseNotConnectedFakeCqlSession) {
            sessionFactoryFactoryBean.setSchemaAction(SchemaAction.NONE);
        }
        return sessionFactoryFactoryBean;
    }
}
DatabaseNotConnectedFakeCqlSession (Fake session implementation)
/**
 * No-op CqlSession used as a placeholder when the database
 * was not reachable on startup.
 */
public class DatabaseNotConnectedFakeCqlSession implements CqlSession {

    @Override
    public String getName() {
        return null;
    }

    @Override
    public Metadata getMetadata() {
        return null;
    }

    @Override
    public boolean isSchemaMetadataEnabled() {
        return false;
    }

    @Override
    public CompletionStage<Metadata> setSchemaMetadataEnabled(Boolean newValue) {
        return null;
    }

    @Override
    public CompletionStage<Metadata> refreshSchemaAsync() {
        return null;
    }

    @Override
    public CompletionStage<Boolean> checkSchemaAgreementAsync() {
        return null;
    }

    @Override
    public DriverContext getContext() {
        return new DefaultDriverContext(new DefaultDriverConfigLoader(), ProgrammaticArguments.builder().build());
    }

    @Override
    public Optional<CqlIdentifier> getKeyspace() {
        return Optional.empty();
    }

    @Override
    public Optional<Metrics> getMetrics() {
        return Optional.empty();
    }

    @Override
    public <RequestT extends Request, ResultT> ResultT execute(RequestT request, GenericType<ResultT> resultType) {
        return null;
    }

    @Override
    public CompletionStage<Void> closeFuture() {
        return null;
    }

    @Override
    public CompletionStage<Void> closeAsync() {
        return null;
    }

    @Override
    public CompletionStage<Void> forceCloseAsync() {
        return null;
    }

    @Override
    public Metadata refreshSchema() {
        return null;
    }
}
Any suggestions?

You can set the option datastax-java-driver.advanced.reconnect-on-init to true to achieve the effect you want. Its usage is explained in the configuration reference page in the driver docs:
Whether to schedule reconnection attempts if all contact points are unreachable on the first initialization attempt.
If this is true, the driver will retry according to the reconnection policy. The SessionBuilder.build() call - or the future returned by SessionBuilder.buildAsync() - won't complete until a contact point has been reached. If this is false and no contact points are available, the driver will fail with an AllNodesFailedException.
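For example, one common way to set this option is in the driver's application.conf on the classpath (equivalent to the DriverConfigLoaderBuilderCustomizer shown in the question):

datastax-java-driver {
  advanced.reconnect-on-init = true
}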
However, be careful: with this option set to true, as stated above, any component trying to access a CqlSession bean, even if the session bean is lazy, will block until the driver is able to connect, and might block forever if the contact points are wrong.
If that's not acceptable for you, I would suggest that you wrap the CqlSession bean in another bean that will check if the future returned by SessionBuilder.buildAsync() is done or not, and either block, throw or return null, depending on the caller's expectations.
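For instance, here is a minimal sketch of that idea (not part of the original answer; the bean names and the Optional-based contract are my own assumptions, and it presumes reconnect-on-init is enabled so the driver keeps retrying in the background):

import java.util.Optional;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;
import java.util.function.Supplier;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

import com.datastax.oss.driver.api.core.CqlSession;
import com.datastax.oss.driver.api.core.CqlSessionBuilder;

@Configuration
public class NonBlockingCassandraConfiguration {

    // Kick off the connection attempt without blocking startup;
    // with reconnect-on-init=true the driver keeps retrying in the background.
    @Bean
    public CompletionStage<CqlSession> cqlSessionStage(CqlSessionBuilder builder) {
        return builder.buildAsync();
    }

    // Callers get the session only if the driver has connected; otherwise they
    // can degrade gracefully instead of blocking.
    @Bean
    public Supplier<Optional<CqlSession>> sessionIfConnected(CompletionStage<CqlSession> stage) {
        CompletableFuture<CqlSession> future = stage.toCompletableFuture();
        return () -> future.isDone() && !future.isCompletedExceptionally()
                ? Optional.of(future.join())
                : Optional.empty();
    }
}

A custom health indicator could then check for Optional.empty() and report DOWN instead of blocking on an unreachable cluster.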

[EDIT] I've reached out internally to the DataStax Drivers team last night and adutra has responded so I'm withdrawing my response.

Related

Spring Boot - Kafka Consumer Bean Scope

I'm using a CacheManager in a Spring Boot application with SCOPE_REQUEST scope.
@Bean
@Scope(value = WebApplicationContext.SCOPE_REQUEST, proxyMode = ScopedProxyMode.TARGET_CLASS)
public CacheManager cacheManager() {
    return new ConcurrentMapCacheManager();
}
I'm also using Kafka for communication between microservices. Actually I'm receiving an event through a Kafka consumer and I get the following error:
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'scopedTarget.cacheManager': Scope 'request' is not active for the current thread;
...
Caused by: java.lang.IllegalStateException: No thread-bound request found: Are you referring to request attributes outside of an actual web request, or processing a request outside of the originally receiving thread?
It's clear that the CacheManager bean is missing on the listener thread.
My goal is to let the Spring Boot/Kafka framework create the bean for each consumed Kafka event, just as it does for web requests.
I have no idea how I could achieve that; could someone help me?
Thank you so much,
Have a nice day!
@Gary Russel
That's true and false at the same time; meanwhile I managed to find a solution. Create the class below:
public class KafkaRequestScopeAttributes implements RequestAttributes {

    private final Map<String, Object> requestAttributeMap = new HashMap<>();

    @Override
    public Object getAttribute(String name, int scope) {
        if (scope == RequestAttributes.SCOPE_REQUEST) {
            return this.requestAttributeMap.get(name);
        }
        return null;
    }

    @Override
    public void setAttribute(String name, Object value, int scope) {
        if (scope == RequestAttributes.SCOPE_REQUEST) {
            this.requestAttributeMap.put(name, value);
        }
    }

    @Override
    public void removeAttribute(String name, int scope) {
        if (scope == RequestAttributes.SCOPE_REQUEST) {
            this.requestAttributeMap.remove(name);
        }
    }

    @Override
    public String[] getAttributeNames(int scope) {
        if (scope == RequestAttributes.SCOPE_REQUEST) {
            return this.requestAttributeMap.keySet().toArray(new String[0]);
        }
        return new String[0];
    }

    @Override
    public void registerDestructionCallback(String name, Runnable callback, int scope) {
        // Not supported
    }

    @Override
    public Object resolveReference(String key) {
        // Not supported
        return null;
    }

    @Override
    public String getSessionId() {
        return null;
    }

    @Override
    public Object getSessionMutex() {
        return null;
    }
}
then add the following two lines at the start and at the end of your KafkaListener method:
RequestContextHolder.setRequestAttributes(new KafkaRequestScopeAttributes());
RequestContextHolder.resetRequestAttributes();
By doing that you can force the request scope to be created inside a Kafka listener.
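For illustration, a sketch of what the listener might look like (the topic name and payload type are made up; a try/finally ensures the attributes are reset even if processing fails):

import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;
import org.springframework.web.context.request.RequestContextHolder;

@Component
public class EventListener {

    @KafkaListener(topics = "my-topic")
    public void onEvent(String payload) {
        // Bind a fresh request scope to this consumer thread
        RequestContextHolder.setRequestAttributes(new KafkaRequestScopeAttributes());
        try {
            // ... process the event; request-scoped beans can now be resolved
        } finally {
            RequestContextHolder.resetRequestAttributes();
        }
    }
}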
Request Scope is for web applications only; it can't be used with Kafka consumers.

How to implement spring data mongo multidatabase transactions

I created an application that copies data from one database to another, based on this tutorial.
In fact, I need to make a method that inserts into two different databases transactional.
Is this possible with MongoDB, and how can I implement it?
To use multi-document transactions in MongoDB across multiple databases in Spring Data MongoDB, you need to configure a MongoTemplate per database, but all of them must use the same MongoDbFactory because it is treated as the transactional resource.
This means that you will need to override a couple of MongoTemplate methods so that it uses the right database (and not the one configured inside SimpleMongoClientDbFactory).
Let's assume that your databases are called 'one' and 'two'. Then it goes like this:
public class MongoTemplateWithFixedDatabase extends MongoTemplate {

    private final MongoDbFactory mongoDbFactory;
    private final String databaseName;

    public MongoTemplateWithFixedDatabase(MongoDbFactory mongoDbFactory,
            MappingMongoConverter mappingMongoConverter, String databaseName) {
        super(mongoDbFactory, mappingMongoConverter);
        this.mongoDbFactory = mongoDbFactory;
        this.databaseName = databaseName;
    }

    @Override
    protected MongoDatabase doGetDatabase() {
        return MongoDatabaseUtils.getDatabase(databaseName, mongoDbFactory, ON_ACTUAL_TRANSACTION);
    }
}
and
@Bean
public MongoDbFactory mongoDbFactory() {
    // here, some 'default' database name is configured; the following MongoTemplate instances will ignore it
    return new SimpleMongoDbFactory(mongoClient(), getDatabaseName());
}

@Bean
public MongoTransactionManager mongoTransactionManager() {
    return new MongoTransactionManager(mongoDbFactory());
}

@Bean
public MongoTemplate mongoTemplateOne(MongoDbFactory mongoDbFactory,
        MappingMongoConverter mappingMongoConverter) {
    return new MongoTemplateWithFixedDatabase(mongoDbFactory, mappingMongoConverter, "one");
}

@Bean
public MongoTemplate mongoTemplateTwo(MongoDbFactory mongoDbFactory,
        MappingMongoConverter mappingMongoConverter) {
    return new MongoTemplateWithFixedDatabase(mongoDbFactory, mappingMongoConverter, "two");
}
Then just inject mongoTemplateOne and mongoTemplateTwo into your service, mark its method with @Transactional, and it should work.
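For example, a hypothetical service could look like this (the collection name and document type are made up):

import org.bson.Document;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.data.mongodb.core.MongoTemplate;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class CopyService {

    private final MongoTemplate mongoTemplateOne;
    private final MongoTemplate mongoTemplateTwo;

    public CopyService(@Qualifier("mongoTemplateOne") MongoTemplate mongoTemplateOne,
                       @Qualifier("mongoTemplateTwo") MongoTemplate mongoTemplateTwo) {
        this.mongoTemplateOne = mongoTemplateOne;
        this.mongoTemplateTwo = mongoTemplateTwo;
    }

    // Both writes join the same MongoDB transaction; if either fails, both roll back.
    @Transactional
    public void copy(Document document) {
        mongoTemplateOne.insert(document, "documents");
        mongoTemplateTwo.insert(document, "documents");
    }
}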
Reactive case
In the reactive case, it's very similar. Of course, you need to use the reactive versions of the classes, like ReactiveMongoTemplate, ReactiveMongoDatabaseFactory, ReactiveMongoTransactionManager.
There are also a couple of caveats. First, you have to override 3 methods, not 2 (as getCollection(String) also needs to be overridden). Also, I had to do it with an anonymous class to make it work:
@Bean
public ReactiveMongoOperations reactiveMongoTemplateOne(
        ReactiveMongoDatabaseFactory reactiveMongoDatabaseFactory,
        MappingMongoConverter mappingMongoConverter) {
    return new ReactiveMongoTemplate(reactiveMongoDatabaseFactory, mappingMongoConverter) {

        @Override
        protected Mono<MongoDatabase> doGetDatabase() {
            return ReactiveMongoDatabaseUtils.getDatabase("one", reactiveMongoDatabaseFactory,
                    ON_ACTUAL_TRANSACTION);
        }

        @Override
        public MongoDatabase getMongoDatabase() {
            return reactiveMongoDatabaseFactory.getMongoDatabase(getDatabaseName());
        }

        @Override
        public MongoCollection<Document> getCollection(String collectionName) {
            Assert.notNull(collectionName, "Collection name must not be null!");
            try {
                return reactiveMongoDatabaseFactory.getMongoDatabase(getDatabaseName())
                        .getCollection(collectionName);
            } catch (RuntimeException e) {
                throw potentiallyConvertRuntimeException(e,
                        reactiveMongoDatabaseFactory.getExceptionTranslator());
            }
        }

        private RuntimeException potentiallyConvertRuntimeException(RuntimeException ex,
                PersistenceExceptionTranslator exceptionTranslator) {
            RuntimeException resolved = exceptionTranslator.translateExceptionIfPossible(ex);
            return resolved == null ? ex : resolved;
        }
    };
}
P.S. the provided code was tested with spring-data-mongodb 2.2.4.
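As a usage sketch for the reactive case (my own illustration; it assumes a ReactiveMongoTransactionManager bean and a second template reactiveMongoTemplateTwo configured the same way, and the collection name is made up):

import org.bson.Document;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.data.mongodb.ReactiveMongoTransactionManager;
import org.springframework.data.mongodb.core.ReactiveMongoOperations;
import org.springframework.stereotype.Service;
import org.springframework.transaction.reactive.TransactionalOperator;
import reactor.core.publisher.Mono;

@Service
public class ReactiveCopyService {

    private final ReactiveMongoOperations templateOne;
    private final ReactiveMongoOperations templateTwo;
    private final TransactionalOperator transactionalOperator;

    public ReactiveCopyService(@Qualifier("reactiveMongoTemplateOne") ReactiveMongoOperations templateOne,
                               @Qualifier("reactiveMongoTemplateTwo") ReactiveMongoOperations templateTwo,
                               ReactiveMongoTransactionManager transactionManager) {
        this.templateOne = templateOne;
        this.templateTwo = templateTwo;
        this.transactionalOperator = TransactionalOperator.create(transactionManager);
    }

    // Both inserts run inside one transaction; an error in either triggers a rollback.
    public Mono<Void> copy(Document document) {
        return templateOne.insert(document, "documents")
                .then(templateTwo.insert(document, "documents"))
                .then()
                .as(transactionalOperator::transactional);
    }
}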

Spring Boot + Hibernate + Oracle schema multitenancy

I'm trying to get a schema-based multitenancy solution working, similar to this example but with Oracle instead of Postgres.
For example, I have three schemas: FOO, BAR, and BAZ. BAR and BAZ each have a table called MESSAGES. FOO has been granted SELECT on both BAR.MESSAGES and BAZ.MESSAGES. So if I connect as FOO and then execute
SELECT * FROM BAR.MESSAGES;
then I get a result as expected. But if I leave out the schema name (e.g. SELECT * FROM MESSAGES), then I get ORA-00942: table or view does not exist (the connection is using the wrong schema).
Here's my DAO/repository:
@Repository
public interface MessageDao extends CrudRepository<Message, Long> {
}
The controller:
@GetMapping("/findAll")
public List<Message> findAll() {
    TenantContext.setCurrentTenant("BAR");
    var result = messageDao.findAll();
    return result;
}
The Config:
@Configuration
public class MessageConfig {

    @Bean
    public JpaVendorAdapter jpaVendorAdapter() {
        return new HibernateJpaVendorAdapter();
    }

    @Bean
    public LocalContainerEntityManagerFactoryBean entityManagerFactory(DataSource dataSource,
            MultiTenantConnectionProvider multiTenantConnectionProvider,
            CurrentTenantIdentifierResolver tenantIdentifierResolver) {
        LocalContainerEntityManagerFactoryBean em = new LocalContainerEntityManagerFactoryBean();
        em.setDataSource(dataSource);
        em.setPackagesToScan(Message.class.getPackageName());
        em.setJpaVendorAdapter(this.jpaVendorAdapter());
        Map<String, Object> jpaProperties = new HashMap<>();
        jpaProperties.put(Environment.MULTI_TENANT, MultiTenancyStrategy.SCHEMA);
        jpaProperties.put(Environment.MULTI_TENANT_CONNECTION_PROVIDER, multiTenantConnectionProvider);
        jpaProperties.put(Environment.MULTI_TENANT_IDENTIFIER_RESOLVER, tenantIdentifierResolver);
        jpaProperties.put(Environment.FORMAT_SQL, true);
        em.setJpaPropertyMap(jpaProperties);
        return em;
    }
}
The MultiTenantConnectionProvider implementation:
@Component
public class MultiTenantConnectionProviderImpl implements MultiTenantConnectionProvider {

    @Autowired
    private DataSource dataSource;

    @Override
    public Connection getAnyConnection() throws SQLException {
        return dataSource.getConnection();
    }

    @Override
    public void releaseAnyConnection(Connection connection) throws SQLException {
        connection.close();
    }

    @Override
    public Connection getConnection(String currentTenantIdentifier) throws SQLException {
        String tenantIdentifier = TenantContext.getCurrentTenant();
        final Connection connection = getAnyConnection();
        try (Statement statement = connection.createStatement()) {
            // Tenant is hard-coded here for now (see the note below)
            statement.execute("ALTER SESSION SET CURRENT_SCHEMA = BAR");
        } catch (SQLException e) {
            throw new HibernateException("Problem setting schema to " + tenantIdentifier, e);
        }
        return connection;
    }

    @Override
    public void releaseConnection(String tenantIdentifier, Connection connection) throws SQLException {
        try (Statement statement = connection.createStatement()) {
            statement.execute("ALTER SESSION SET CURRENT_SCHEMA = FOO");
        } catch (SQLException e) {
            throw new HibernateException("Problem setting schema to " + tenantIdentifier, e);
        }
        connection.close();
    }

    @SuppressWarnings("rawtypes")
    @Override
    public boolean isUnwrappableAs(Class unwrapType) {
        return false;
    }

    @Override
    public <T> T unwrap(Class<T> unwrapType) {
        return null;
    }

    @Override
    public boolean supportsAggressiveRelease() {
        return true;
    }
}
And the TenantIdentifierResolver (though not really relevant because I'm hard-coding the tenants right now in the ConnectionProviderImpl above):
@Component
public class TenantIdentifierResolver implements CurrentTenantIdentifierResolver {

    @Override
    public String resolveCurrentTenantIdentifier() {
        String tenantId = TenantContext.getCurrentTenant();
        if (tenantId != null) {
            return tenantId;
        }
        return "BAR";
    }

    @Override
    public boolean validateExistingCurrentSessions() {
        return true;
    }
}
Any ideas as to why the underlying Connection isn't switching schemas as expected?
UPDATE 1
Maybe it's something to do with the underlying Oracle connection. There is a property on OracleConnection named CONNECTION_PROPERTY_CREATE_DESCRIPTOR_USE_CURRENT_SCHEMA_FOR_SCHEMA_NAME. The documentation says:
The user also has an option to append the CURRENT_USER value to the
ADT name to obtain fully qualified name by setting this property to
true. Note that it takes a network round trip to fetch the
CURRENT_SCHEMA value.
But the problem remains even if I set this to true (-Doracle.jdbc.createDescriptorUseCurrentSchemaForSchemaName=true). This may be because the "username" on the Connection is still "FOO", even after altering the session to set the schema to "BAR" (currentSchema on the Connection is "BAR"). But that would mean that the OracleConnection documentation is incorrect, wouldn't it?
UPDATE 2
I did not include the fact that we are using Spring Data JPA here as well. Maybe that has something to do with the problem?
I have found that it works if I hard-code the schema name in the entity (e.g. @Table(schema = "BAR")), but a hard-coded value there is not an acceptable solution.
It might also work if we rewrite the queries as native @Query and include {h-schema} in the SQL, but in Hibernate that placeholder resolves to the default schema, not the 'current' (dynamic) schema, so it's not quite right either.
It turns out that setting the current tenant on the first line of the controller like that (TenantContext.setCurrentTenant("BAR")) is "too late" (Spring has already created a transaction?). I changed the implementation to use a servlet filter that copies the tenant id from a header into a request attribute, and then to fetch that attribute in the TenantIdentifierResolver instead of using the TenantContext. Now it works as it should, without any of the stuff I mentioned in the updates.
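For reference, a minimal sketch of that approach (not the original poster's exact code; the header name and attribute key are made up):

import java.io.IOException;
import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.hibernate.context.spi.CurrentTenantIdentifierResolver;
import org.springframework.stereotype.Component;
import org.springframework.web.context.request.RequestContextHolder;
import org.springframework.web.context.request.ServletRequestAttributes;
import org.springframework.web.filter.OncePerRequestFilter;

@Component
public class TenantFilter extends OncePerRequestFilter {

    static final String TENANT_ATTRIBUTE = "TENANT_ID";

    @Override
    protected void doFilterInternal(HttpServletRequest request, HttpServletResponse response,
            FilterChain filterChain) throws ServletException, IOException {
        // Capture the tenant before Spring opens a transaction for the request
        request.setAttribute(TENANT_ATTRIBUTE, request.getHeader("X-Tenant-Id"));
        filterChain.doFilter(request, response);
    }
}

@Component
class HeaderTenantIdentifierResolver implements CurrentTenantIdentifierResolver {

    @Override
    public String resolveCurrentTenantIdentifier() {
        ServletRequestAttributes attributes =
                (ServletRequestAttributes) RequestContextHolder.getRequestAttributes();
        if (attributes != null) {
            Object tenant = attributes.getRequest().getAttribute(TenantFilter.TENANT_ATTRIBUTE);
            if (tenant != null) {
                return tenant.toString();
            }
        }
        return "BAR"; // same fallback as the original resolver
    }

    @Override
    public boolean validateExistingCurrentSessions() {
        return true;
    }
}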

Eureka Client and Spring ORM

I have a situation where I need to communicate, via a Eureka client, with an external API just after the Spring Boot web application is up, and before JPA starts to create the database schema and insert data into it from the SQL file.
The problem is that the Eureka client starts registering with the Eureka server at SmartLifecycle phase 0, which means after the application context has been started and JPA has already finished its job.
So how can I prevent JPA from starting its work, or delay it to phase 1, for example?
I was facing the same problem; then I found this SO question.
Basically, you just need to put your communication logic into start().
// Must be registered as a Spring bean (e.g. via @Component) for the lifecycle callbacks to fire
public class EurekaClientService implements SmartLifecycle {

    @Autowired
    private EurekaClient eurekaClient;

    private volatile boolean isRunning = false;

    @Override
    public boolean isAutoStartup() {
        return true;
    }

    @Override
    public void stop(Runnable callback) {
        isRunning = false;
        // Notify the lifecycle processor that shutdown is complete
        callback.run();
    }

    @Override
    public void start() {
        eurekaClient.customService();
        isRunning = true;
    }

    @Override
    public void stop() {
        isRunning = false;
    }

    @Override
    public boolean isRunning() {
        return isRunning;
    }

    @Override
    public int getPhase() {
        // Phase 1 starts after the Eureka registration, which happens in phase 0
        return 1;
    }
}
Hope this helps!

How to set a new username and password after starting the server in spring security?

I have created a sign-up form and added users to the database. But I provide the login facility using this piece of code, which runs only when the server starts:
public void configureGlobal(AuthenticationManagerBuilder auth) throws Exception {
    List<UserEntity> userEntities = userService.getAll();
    for (UserEntity userEntity : userEntities) {
        RoleEntity roleEntity = roleService.getBy(userEntity.getRoleId());
        auth.inMemoryAuthentication()
            .withUser(userEntity.getUserName())
            .password(userEntity.getPassword())
            .roles(roleEntity.getDescription());
    }
}
How can I add new users to the configuration so that they can log in without restarting the server?
Hook into the application start event and use it to add the users to your (InMemory???) database.
import org.springframework.context.ApplicationListener;
import org.springframework.context.event.ContextRefreshedEvent;
import org.springframework.stereotype.Component;

@Component
public class DbInit implements ApplicationListener<ContextRefreshedEvent> {

    @Override
    public void onApplicationEvent(final ContextRefreshedEvent event) {
        // This check is needed because a typical web application has two contexts,
        // and therefore fires two ContextRefreshedEvents on startup
        if (isFirstContextRefreshedEvent()) {
            // initYourDatabase
        }
    }

    private boolean done = false;

    /** Returns true if this method on this bean is invoked for the first time;
        otherwise (second, third... invocation) false. */
    private synchronized boolean isFirstContextRefreshedEvent() {
        if (!done) {
            done = true;
            return true;
        } else {
            return false;
        }
    }
}
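If the users should stay in-memory, one possible follow-up (a sketch of my own; it assumes you register an InMemoryUserDetailsManager as the user details service instead of calling inMemoryAuthentication() per user) is to add users to that manager at runtime, e.g. from the sign-up handler:

import org.springframework.security.core.userdetails.User;
import org.springframework.security.crypto.password.PasswordEncoder;
import org.springframework.security.provisioning.InMemoryUserDetailsManager;
import org.springframework.stereotype.Service;

@Service
public class SignUpService {

    private final InMemoryUserDetailsManager userDetailsManager;
    private final PasswordEncoder passwordEncoder;

    public SignUpService(InMemoryUserDetailsManager userDetailsManager, PasswordEncoder passwordEncoder) {
        this.userDetailsManager = userDetailsManager;
        this.passwordEncoder = passwordEncoder;
    }

    // A user created here can log in immediately, without a server restart
    public void register(String username, String rawPassword, String role) {
        userDetailsManager.createUser(User.withUsername(username)
                .password(passwordEncoder.encode(rawPassword))
                .roles(role)
                .build());
    }
}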
