I know there are some similar topics out there, but none of them gives a solution. So, when using Spring Data Neo4j, is there any way to connect to multiple graphs? NOT graphs in the same instance with different labels.
Or equivalently, I can ask this question:
How can I configure Spring Data Neo4j to have multiple sessions to different Neo4j instances on different ports?
Thanks.
EDIT
Thanks to @Hunger, I think I am one step forward. Now the question is: how to configure spring-data-neo4j to have multiple 'PersistenceContext's, each of them referring to an individual Neo4j instance.
You can configure different application contexts with different REST APIs declared, pointing to different databases.
You should not mix objects or sessions from those different databases though.
So you might need qualifiers for injection.
How about having multiple configurations:
//First configuration
@Configuration
@EnableNeo4jRepositories(basePackages = "org.neo4j.example.repository.dev")
@EnableTransactionManagement
public class MyConfigurationDev extends Neo4jConfiguration {

    @Bean
    public Neo4jServer neo4jServer() {
        return new RemoteServer("http://localhost:7474");
    }

    @Bean
    public SessionFactory getSessionFactory() {
        // with domain entity base package(s)
        return new SessionFactory("org.neo4j.example.domain.dev");
    }

    // needed for session in view in web-applications
    @Bean
    @Scope(value = "session", proxyMode = ScopedProxyMode.TARGET_CLASS)
    public Session getSession() throws Exception {
        return super.getSession();
    }
}
and another one
//Second configuration (note the distinct class name)
@Configuration
@EnableNeo4jRepositories(basePackages = "org.neo4j.example.repository.test")
@EnableTransactionManagement
public class MyConfigurationTest extends Neo4jConfiguration {

    @Bean
    public Neo4jServer neo4jServer() {
        return new RemoteServer("http://localhost:7475");
    }

    @Bean
    public SessionFactory getSessionFactory() {
        // with domain entity base package(s)
        return new SessionFactory("org.neo4j.example.domain.test");
    }

    // needed for session in view in web-applications
    @Bean
    @Scope(value = "session", proxyMode = ScopedProxyMode.TARGET_CLASS)
    public Session getSession() throws Exception {
        return super.getSession();
    }
}
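If both configurations end up in the same application context, the injection points will need qualifiers, as the earlier answer notes. A minimal sketch of such an injection point, assuming the two Session beans are given distinct names (the names devSession and testSession are assumptions, e.g. set via @Bean(name = ...) in the two configurations):

import org.neo4j.ogm.session.Session;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.stereotype.Service;

@Service
public class DualGraphService {

    private final Session devSession;
    private final Session testSession;

    @Autowired
    public DualGraphService(@Qualifier("devSession") Session devSession,
                            @Qualifier("testSession") Session testSession) {
        // each Session talks to its own Neo4j instance
        this.devSession = devSession;
        this.testSession = testSession;
    }
}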
Related
I have a code base that is used by two different applications. Some of my Spring service classes have the @Transactional annotation. On server start I would like to disable @Transactional based on some configuration.
Below is my configuration class.
@Configuration
@EnableTransactionManagement
@PropertySource("classpath:application.properties")
public class WebAppConfig {

    private static final String PROPERTY_NAME_DATABASE_DRIVER = "db.driver";
    // the remaining property keys are assumed to follow the same "db.*" pattern
    private static final String PROPERTY_NAME_DATABASE_URL = "db.url";
    private static final String PROPERTY_NAME_DATABASE_USERNAME = "db.username";
    private static final String PROPERTY_NAME_DATABASE_PASSWORD = "db.password";
    // name of the application this code base is currently running as (assumed property key)
    private static final String PROPERTY_NAME_APP_NAME = "app.name";

    @Resource
    private Environment env;

    @Bean
    public DataSource dataSource() {
        DriverManagerDataSource dataSource = new DriverManagerDataSource();
        dataSource.setDriverClassName(env.getRequiredProperty(PROPERTY_NAME_DATABASE_DRIVER));
        dataSource.setUrl(env.getRequiredProperty(PROPERTY_NAME_DATABASE_URL));
        dataSource.setUsername(env.getRequiredProperty(PROPERTY_NAME_DATABASE_USERNAME));
        dataSource.setPassword(env.getRequiredProperty(PROPERTY_NAME_DATABASE_PASSWORD));
        return dataSource;
    }

    @Bean
    public PlatformTransactionManager txManager() {
        DefaultTransactionDefinition def = new DefaultTransactionDefinition();
        def.setIsolationLevel(TransactionDefinition.ISOLATION_DEFAULT);
        if ("ABC".equals(env.getRequiredProperty(PROPERTY_NAME_APP_NAME))) {
            def.setPropagationBehavior(TransactionDefinition.PROPAGATION_NEVER);
        } else {
            def.setPropagationBehavior(TransactionDefinition.PROPAGATION_REQUIRED);
        }
        CustomDataSourceTransactionManager txM = new CustomDataSourceTransactionManager(def);
        txM.setDataSource(dataSource());
        return txM;
    }

    @Bean
    public JdbcTemplate jdbcTemplate() {
        JdbcTemplate jdbcTemplate = new JdbcTemplate();
        jdbcTemplate.setDataSource(dataSource());
        return jdbcTemplate;
    }
}
I am trying to override methods in DataSourceTransactionManager to achieve this, but it still tries to commit/roll back the transaction at the end of the transaction. Since there is no database connection available, it throws an exception.
If I use @Transactional(propagation=Propagation.NEVER), everything works perfectly, but I cannot modify the annotation because another app uses the same code base and needs it.
I would like to know if there is a way to disable transactions entirely from configuration without modifying the @Transactional annotations.
I'm not sure if it would work, but you can try to implement a custom TransactionInterceptor and override the method that wraps the invocation in a transaction, removing the transactional handling. Something like this:
public class NoOpTransactionInterceptor extends TransactionInterceptor {

    @Override
    protected Object invokeWithinTransaction(
            Method method,
            Class<?> targetClass,
            InvocationCallback invocation
    ) throws Throwable {
        // Simply invoke the original unwrapped code
        return invocation.proceedWithInvocation();
    }
}
Then you declare a conditional bean in one of the @Configuration classes:
// assuming this property is stored in the Spring application properties file
@ConditionalOnProperty(name = "turnOffTransactions", havingValue = "true")
@Bean
@Role(BeanDefinition.ROLE_INFRASTRUCTURE)
public TransactionInterceptor transactionInterceptor(
        /* default bean would be injected here */
        TransactionAttributeSource transactionAttributeSource
) {
    TransactionInterceptor interceptor = new NoOpTransactionInterceptor();
    interceptor.setTransactionAttributeSource(transactionAttributeSource);
    return interceptor;
}
You will probably need additional configuration; I can't verify that right now.
My goal is to have integration tests that ensure there aren't too many database queries happening during lookups. (This helps us catch N+1 queries caused by incorrect JPA configuration.)
I know that the database connection is correct because there are no configuration problems during the test run whenever MyDataSourceWrapperConfiguration is not included in the test. However, once it is added, the circular dependency appears (see the error below). I believe @Primary is necessary for the JPA/JDBC code to use the correct DataSource instance.
MyDataSourceWrapper is a custom class that tracks the number of queries that have happened for a given transaction, but it delegates the real database work to the DataSource passed in via constructor.
Error:
The dependencies of some of the beans in the application context form a cycle:
org.springframework.boot.autoconfigure.orm.jpa.HibernateJpaAutoConfiguration
┌─────┐
| databaseQueryCounterProxyDataSource defined in me.testsupport.database.MyDataSourceWrapperConfiguration
↑ ↓
| dataSource defined in org.springframework.boot.autoconfigure.jdbc.DataSourceConfiguration$Tomcat
↑ ↓
| dataSourceInitializer
└─────┘
My Configuration:
@Configuration
public class MyDataSourceWrapperConfiguration {

    @Primary
    @Bean
    DataSource databaseQueryCounterProxyDataSource(final DataSource delegate) {
        return new MyDataSourceWrapper(delegate);
    }
}
My Test:
@ActiveProfiles({ "it" })
@RunWith(SpringJUnit4ClassRunner.class)
@SpringApplicationConfiguration({ DatabaseConnectionConfiguration.class, DatabaseQueryCounterConfiguration.class })
@EnableAutoConfiguration
public class EngApplicationRepositoryIT {

    @Rule
    public MyDatabaseQueryCounter databaseQueryCounter = new MyDatabaseQueryCounter();

    @Rule
    public ErrorCollector errorCollector = new ErrorCollector();

    @Autowired
    MyRepository repository;

    @Test
    public void test() {
        this.repository.loadData();
        this.errorCollector.checkThat(this.databaseQueryCounter.getSelectCounts(), is(lessThan(10)));
    }
}
UPDATE: This original question was for Spring Boot 1.5. The accepted answer reflects that; however, the answer from @rajadilipkolli works for Spring Boot 2.x.
In your case you will end up with two DataSource instances, which is probably not what you want. Instead, use a BeanPostProcessor, which is the component actually designed for this. See also the Spring Reference Guide.
Create and register a BeanPostProcessor which does the wrapping.
public class DataSourceWrapper implements BeanPostProcessor {

    @Override
    public Object postProcessBeforeInitialization(Object bean, String beanName) throws BeansException {
        if (bean instanceof DataSource) {
            return new MyDataSourceWrapper((DataSource) bean);
        }
        return bean;
    }

    @Override
    public Object postProcessAfterInitialization(Object bean, String beanName) throws BeansException {
        return bean;
    }
}
Then just register that as a @Bean instead of your MyDataSourceWrapper.
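A sketch of that registration; declaring the @Bean method static ensures the post-processor is created early, before the regular beans it should wrap:

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class DataSourceWrapperConfiguration {

    // static: BeanPostProcessors must be instantiated before ordinary beans
    @Bean
    public static DataSourceWrapper dataSourceWrapper() {
        return new DataSourceWrapper();
    }
}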
Tip: instead of rolling your own wrapping DataSource, you might be interested in datasource-proxy combined with datasource-assert, which already has counter support etc. (saving you from maintaining your own components).
Starting from Spring Boot 2.0.0.M3, using a BeanPostProcessor won't work.
As a workaround, create your own bean like below:
@Bean
public DataSource customDataSource(DataSourceProperties properties) {
    log.info("Inside Proxy Creation");
    final HikariDataSource dataSource = (HikariDataSource) properties
            .initializeDataSourceBuilder().type(HikariDataSource.class).build();
    if (properties.getName() != null) {
        dataSource.setPoolName(properties.getName());
    }
    return ProxyDataSourceBuilder.create(dataSource).countQuery().name("MyDS")
            .logSlowQueryToSysOut(1, TimeUnit.MINUTES).build();
}
Another way is to use the datasource-proxy version of the datasource-decorator starter.
The following solution works for me using Spring Boot 2.0.6.
It uses explicit binding instead of the annotation @ConfigurationProperties(prefix = "spring.datasource.hikari").
@Configuration
public class DataSourceConfig {

    private final Environment env;

    @Autowired
    public DataSourceConfig(Environment env) {
        this.env = env;
    }

    @Primary
    @Bean
    public MyDataSourceWrapper primaryDataSource(DataSourceProperties properties) {
        DataSource dataSource = properties.initializeDataSourceBuilder().build();
        Binder binder = Binder.get(env);
        binder.bind("spring.datasource.hikari", Bindable.ofInstance(dataSource).withExistingValue(dataSource));
        return new MyDataSourceWrapper(dataSource);
    }
}
You can actually still use BeanPostProcessor in Spring Boot 2, but it needs to return the correct type (the actual type of the declared Bean). To do this you need to create a proxy of the correct type which redirects DataSource methods to your interceptor and all the other methods to the original bean.
For example code see the Spring Boot issue and discussion at https://github.com/spring-projects/spring-boot/issues/12592.
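The linked issue has the details; as a rough sketch of the idea (not the exact code from the issue), one can keep the bean's concrete type by building a class-based AOP proxy that routes only the DataSource methods through the wrapper:

import javax.sql.DataSource;
import org.aopalliance.intercept.MethodInterceptor;
import org.springframework.aop.framework.ProxyFactory;
import org.springframework.beans.factory.config.BeanPostProcessor;

public class TypePreservingDataSourceWrapper implements BeanPostProcessor {

    @Override
    public Object postProcessAfterInitialization(Object bean, String beanName) {
        if (!(bean instanceof DataSource)) {
            return bean;
        }
        DataSource wrapper = new MyDataSourceWrapper((DataSource) bean); // wrapper from the question
        ProxyFactory proxyFactory = new ProxyFactory(bean);
        proxyFactory.setProxyTargetClass(true); // proxy keeps e.g. HikariDataSource as its type
        proxyFactory.addAdvice((MethodInterceptor) invocation -> {
            // methods declared on DataSource (or its super-interfaces) go through the wrapper,
            // pool-specific methods go to the original bean
            if (invocation.getMethod().getDeclaringClass().isInstance(wrapper)) {
                return invocation.getMethod().invoke(wrapper, invocation.getArguments());
            }
            return invocation.proceed();
        });
        return proxyFactory.getProxy();
    }
}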
I have a Spring Boot application with its own datasource (let's call it DB1) configured in the properties and working fine.
But this application needs to configure a new datasource (DB2), using some parameters the user has provided earlier, which are stored in DB1.
My idea is to create a named bean, so that a specific part of my application can use it to access the DB2 tables. I think it is possible to do that by restarting the application, but I would like to avoid that.
Besides, some parts of my code need to use the new datasource (Spring Data JPA, mappings, and so on). I don't know if this matters, but it is a web application, so I cannot create the datasource only for the request thread.
Can you help me?
Thanks in advance.
Spring has dynamic datasource routing, if that's where you are headed. In my case it routes within the same schema (write/read-only).
public class RoutingDataSource extends AbstractRoutingDataSource {

    @Autowired
    private DataSourceConfig dataSourceConfig;

    @Override
    protected Object determineCurrentLookupKey() {
        return DbContextHolder.getDbType();
    }

    public enum DbType {
        MASTER, WRITE, READONLY,
    }
}
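DbContextHolder is referenced above but not shown; a minimal ThreadLocal-based sketch (the implementation itself is an assumption, only the method names come from the snippets here):

public class DbContextHolder {

    private static final ThreadLocal<RoutingDataSource.DbType> CONTEXT = new ThreadLocal<>();

    public static void setDbType(RoutingDataSource.DbType dbType) {
        CONTEXT.set(dbType);
    }

    public static RoutingDataSource.DbType getDbType() {
        // fall back to MASTER when nothing was set for the current thread
        RoutingDataSource.DbType type = CONTEXT.get();
        return type != null ? type : RoutingDataSource.DbType.MASTER;
    }

    public static void clearDbType() {
        CONTEXT.remove();
    }
}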
Then you need a custom annotation and an aspect
@Target({ElementType.METHOD, ElementType.TYPE})
@Retention(RetentionPolicy.RUNTIME)
public @interface ReadOnlyConnection {
}
@Aspect
@Component
@Order(1)
public class ReadOnlyConnectionInterceptor {

    @Pointcut(value = "execution(public * *(..))")
    public void anyPublicMethod() {}

    @Around("@annotation(readOnlyConnection)")
    public Object proceed(ProceedingJoinPoint proceedingJoinPoint, ReadOnlyConnection readOnlyConnection) throws Throwable {
        try {
            DbContextHolder.setDbType(DbType.READONLY);
            return proceedingJoinPoint.proceed();
        } finally {
            DbContextHolder.clearDbType();
        }
    }
}
And then you can act on your DB with the @ReadOnlyConnection annotation:
@Override
@Transactional(readOnly = true)
@ReadOnlyConnection
public UnitDTO getUnitById(Long id) {
    return unitRepository.findOne(id);
}
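For the lookup key to have any effect, the RoutingDataSource also has to be registered with its target data sources. A minimal sketch, assuming two DataSource beans already exist (the bean names masterDataSource and readOnlyDataSource are assumptions):

import java.util.HashMap;
import java.util.Map;
import javax.sql.DataSource;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;

@Configuration
public class RoutingDataSourceConfig {

    @Primary
    @Bean
    public DataSource routingDataSource(@Qualifier("masterDataSource") DataSource master,
                                        @Qualifier("readOnlyDataSource") DataSource readOnly) {
        RoutingDataSource routing = new RoutingDataSource();
        Map<Object, Object> targets = new HashMap<>();
        targets.put(RoutingDataSource.DbType.MASTER, master);
        targets.put(RoutingDataSource.DbType.READONLY, readOnly);
        routing.setTargetDataSources(targets);
        routing.setDefaultTargetDataSource(master); // used when no key is set
        return routing;
    }
}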
An example can be found here: https://github.com/afedulov/routing-data-source.
I used that as a basis for my own work, although it is still in progress because I still need to resolve runtime dependencies (i.e. Hibernate sharding).
I'm a Spring rookie and trying to benefit from the advantages of the easy 'profile' handling in Spring. I already worked through this tutorial: https://spring.io/blog/2011/02/14/spring-3-1-m1-introducing-profile and now I'd like to adapt that concept to a simple example.
I've got two profiles: dev and prod. I imagine a @Configuration class for each profile where I can instantiate different beans (each implementing a common interface) depending on the active profile.
My currently used classes look like this:
StatusController.java
@RestController
@RequestMapping("/status")
public class StatusController {

    private final EnvironmentAwareBean environmentBean;

    @Autowired
    public StatusController(EnvironmentAwareBean environmentBean) {
        this.environmentBean = environmentBean;
    }

    @RequestMapping(method = RequestMethod.GET)
    Status getStatus() {
        Status status = new Status();
        status.setExtra("environmentBean=" + environmentBean.getString());
        return status;
    }
}
EnvironmentAwareBean.java
public interface EnvironmentAwareBean {
String getString();
}
DevBean.java
@Service
public class DevBean implements EnvironmentAwareBean {

    @Override
    public String getString() {
        return "development";
    }
}
ProdBean.java
@Service
public class ProdBean implements EnvironmentAwareBean {

    @Override
    public String getString() {
        return "production";
    }
}
DevConfig.java
@Configuration
@Profile("dev")
public class DevConfig {

    @Bean
    public EnvironmentAwareBean getDevBean() {
        return new DevBean();
    }
}
ProdConfig.java
@Configuration
@Profile("prod")
public class ProdConfig {

    @Bean
    public EnvironmentAwareBean getProdBean() {
        return new ProdBean();
    }
}
Running the example throws this exception during startup (SPRING_PROFILES_DEFAULT is set to dev):
(...) UnsatisfiedDependencyException: (...) nested exception is org.springframework.beans.factory.NoUniqueBeanDefinitionException: No qualifying bean of type [EnvironmentAwareBean] is defined: expected single matching bean but found 3: prodBean,devBean,getDevBean
Is my approach far from a recommended configuration? In my opinion it would make more sense to annotate each configuration with the @Profile annotation instead of doing it for each and every bean, possibly forgetting some variants when new classes are added later on.
Your implementations of EnvironmentAwareBean are all annotated with @Service.
This means they will all be picked up by component scanning, and hence you get more than one matching bean. Do they need to be annotated with @Service?
Annotating each @Configuration with the @Profile annotation is fine. Another way, as an educational exercise, would be to not use @Profile and instead annotate the @Bean or config classes with your own implementation of @Conditional.
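As a sketch of that @Conditional route (the class name here is made up; in practice @Profile already does exactly this kind of check):

import org.springframework.context.annotation.Condition;
import org.springframework.context.annotation.ConditionContext;
import org.springframework.core.type.AnnotatedTypeMetadata;

public class DevProfileCondition implements Condition {

    @Override
    public boolean matches(ConditionContext context, AnnotatedTypeMetadata metadata) {
        // the annotated bean/config is only registered when the "dev" profile is active
        return context.getEnvironment().acceptsProfiles("dev");
    }
}

It would then be applied as @Conditional(DevProfileCondition.class) on DevConfig or on individual @Bean methods.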
I'm trying to move away from manually managed transactions to annotation-based transactions in my Neo4j application.
I've prepared an annotation-based Spring configuration class:
@Configuration
@EnableNeo4jRepositories("xxx.yyy.neo4jplanetspersistence.repositories")
@ComponentScan(basePackages = "xxx.yyy")
@EnableTransactionManagement
public class SpringDataConfiguration extends Neo4jConfiguration
        implements TransactionManagementConfigurer {

    public SpringDataConfiguration() {
        super();
        setBasePackage(new String[] {"xxx.yyy.neo4jplanetspojos"});
    }

    @Bean
    public GraphDBFactory graphDBFactory() {
        GraphDBFactory graphDBFactory = new GraphDBFactory();
        return graphDBFactory;
    }

    @Bean
    public GraphDatabaseService graphDatabaseService() {
        return graphDBFactory().getTestGraphDB(); // new GraphDatabaseFactory().newEmbeddedDatabase inside
    }

    @Override
    public PlatformTransactionManager annotationDrivenTransactionManager() {
        return neo4jTransactionManager(graphDatabaseService());
    }
}
I've marked my repositories with @Transactional:
@Transactional
public interface AstronomicalObjectRepo extends
        GraphRepository<AstronomicalObject> {
}
I've marked my unit test classes and test methods with @Transactional and commented out the old code that used to manage transactions manually:
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = {SpringDataConfiguration.class},
        loader = AnnotationConfigContextLoader.class)
@Transactional
public class AstronomicalObjectRepoTest {

    @Autowired
    private AstronomicalObjectRepo repo;

    @Autowired
    private Neo4jTemplate neo4jTemplate;

    (...)

    @Test @Transactional
    public void testSaveAndGet() {
        //try (Transaction tx =
        //        neo4jTemplate.getGraphDatabaseService().beginTx()) {
        AstronomicalObject ceres = new AstronomicalObject("Ceres",
                1.8986e27, 142984000, 9.925);
        repo.save(ceres); //<- BANG! Exception here
        (...)
        //tx.success();
        //}
    }
After that change the tests do not pass.
I receive:
org.springframework.dao.InvalidDataAccessApiUsageException: nested exception is org.neo4j.graphdb.NotInTransactionException
I have tried many different things (explicitly naming the transaction manager in the @Transactional annotation, changing the mode in @EnableTransactionManagement...), but nothing helped.
I would be very grateful for a clue about what I'm doing wrong.
Thanks in advance!
I found the reason...
SDN does not support the newest Neo4j in terms of transactions.
I believe it is because SpringTransactionManager in neo4j-kernel is gone in the 2.2+ releases, but I'm not 100% sure.
On GitHub we can see that a change was made 7 hours ago to fix it:
https://github.com/spring-projects/spring-data-neo4j/blob/master/spring-data-neo4j/src/main/java/org/springframework/data/neo4j/config/JtaTransactionManagerFactoryBean.java
A quick fix that worked for me was to override the neo4jTransactionManager method from Neo4jConfiguration in my configuration, using the Neo4jEmbeddedTransactionManager class:
@Override
public PlatformTransactionManager neo4jTransactionManager(GraphDatabaseService graphDatabaseService) {
    Neo4jEmbeddedTransactionManager newTxMgr = new Neo4jEmbeddedTransactionManager(graphDatabaseService());
    UserTransaction userTransaction = new UserTransactionAdapter(newTxMgr);
    return new JtaTransactionManager(userTransaction, newTxMgr);
}