Can't set JPA naming strategy after configuring multiple data sources (Spring Boot 1.4.1 / Hibernate 5.x)

I am using Spring Boot 1.4.1 which uses Hibernate 5.0.11. Initially I configured a data source using application.properties like this:
spring.datasource.uncle.url=jdbc:jtds:sqlserver://hostname:port/db
spring.datasource.uncle.username=user
spring.datasource.uncle.password=password
spring.datasource.uncle.dialect=org.hibernate.dialect.SQLServer2012Dialect
spring.datasource.uncle.driverClassName=net.sourceforge.jtds.jdbc.Driver
I configured it with "uncle" because that will be the name of one of multiple data sources that I'll configure. I configured this data source like this, according to the Spring docs:
@Bean
@Primary
@ConfigurationProperties(prefix = "spring.datasource.uncle")
public DataSource uncleDataSource() {
    return DataSourceBuilder.create().build();
}
At this point everything worked fine.
I created an @Entity class without any @Column annotations and let Hibernate figure out the column names. For example, if I have a Java property named idBank, Hibernate automatically assumes the column name is id_bank. This is used when generating DDL, running SQL statements, and so on. I want to use this feature because I'm going to have a lot of entity classes and don't want to create and maintain all of the @Column annotations. At this point, this worked fine.
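For illustration, a minimal sketch of such an entity (the class and property names here are hypothetical):

import javax.persistence.Entity;
import javax.persistence.Id;

@Entity
public class Bank {

    @Id
    private Long idBank;     // with the default naming strategy, maps to column id_bank

    private String bankName; // maps to column bank_name
}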
I then added another data source like this:
spring.datasource.aunt.url=jdbc:sybase:Tds:host2:port/db2
spring.datasource.aunt.username=user2
spring.datasource.aunt.password=password2
spring.datasource.aunt.dialect=org.hibernate.dialect.SybaseDialect
spring.datasource.aunt.driverClassName=com.sybase.jdbc4.jdbc.SybDriver
... and also this, following the Spring docs for setting up multiple data sources. Apparently, once you define a second data source, Spring Boot can no longer configure the default beans, and you have to define your own EntityManagerFactory and TransactionManager beans. So in addition to the data source configured above, I added these configurations:
@Bean
@Primary
PlatformTransactionManager uncleTransactionManager(@Qualifier("uncleEntityManagerFactory") final EntityManagerFactory factory) {
    return new JpaTransactionManager(factory);
}

@Bean
@Primary
LocalContainerEntityManagerFactoryBean uncleEntityManagerFactory(
        EntityManagerFactoryBuilder builder) {
    return builder
            .dataSource(uncleDataSource())
            .packages(Uncle.class)
            .persistenceUnit("uncle")
            .build();
}

@Bean
@ConfigurationProperties(prefix = "spring.datasource.aunt")
public DataSource auntDataSource() {
    return DataSourceBuilder.create().build();
}

@Bean
PlatformTransactionManager auntTransactionManager(@Qualifier("auntEntityManagerFactory") final EntityManagerFactory factory) {
    return new JpaTransactionManager(factory);
}

@Bean
LocalContainerEntityManagerFactoryBean auntEntityManagerFactory(
        EntityManagerFactoryBuilder builder) {
    return builder
            .dataSource(auntDataSource())
            .packages(Aunt.class)
            .persistenceUnit("aunt")
            .build();
}
This works in terms of connecting to the database and trying to fetch data.
HOWEVER (and here's the problem, thanks for reading this far): after these configurations I have lost the implicit naming strategy that translates Java property names to snake-case column names, so now if I have a Java property idBank it incorrectly uses the column name idBank instead of id_bank. I would really like to get that functionality back.
There is a JPA property for this, spring.jpa.hibernate.naming-strategy, and there are various naming strategy classes in Spring and Hibernate, such as org.springframework.boot.orm.jpa.hibernate.SpringNamingStrategy. So I tried setting it like this:
spring.jpa.hibernate.naming_strategy=org.springframework.boot.orm.jpa.hibernate.SpringNamingStrategy
But it did not work. I tried some variations such as:
spring.datasource.uncle.naming_strategy=org.springframework.boot.orm.jpa.hibernate.SpringNamingStrategy
and
spring.datasource.uncle.hibernate.naming_strategy=org.springframework.boot.orm.jpa.hibernate.SpringNamingStrategy
but this did not have any effect.
Then I read that in Hibernate 5, the naming strategy was broken up into two parts, "physical" and "implicit" and there are different settings for each. So I tried this, with a few variations:
spring.jpa.hibernate.naming.physical-strategy=org.springframework.boot.orm.jpa.hibernate.SpringPhysicalNamingStrategy
and
spring.jpa.hibernate.naming.implicit-strategy=org.springframework.boot.orm.jpa.hibernate.SpringImplicitNamingStrategy
and
spring.datasource.uncle.naming.physical-strategy=org.springframework.boot.orm.jpa.hibernate.SpringPhysicalNamingStrategy
and
spring.datasource.uncle.hibernate.naming.implicit-strategy=org.springframework.boot.orm.jpa.hibernate.SpringImplicitNamingStrategy
But none of these worked.
It seems like there should be a way to set this configuration directly on the beans, such as on the SessionFactory, but I could not find that API. The documentation around this seems to have some gaps.
I'd really like to avoid setting up a persistence.xml also, which I have not needed up to this point.
So here is where I'm stuck, and I'm hoping someone can help out. What I would really like is a way to debug these property settings. I turned on trace logging in both org.springframework and org.hibernate, but there was nothing useful there. I also tried stepping through the code where these beans are configured, but couldn't find the place where this happens. If anyone has that info and could share it, I'd be really grateful.

I had the same problem and fixed it with the following code (adapted to the code in the question - for a single entity manager):
protected Map<String, Object> jpaProperties() {
    Map<String, Object> props = new HashMap<>();
    props.put("hibernate.physical_naming_strategy", SpringPhysicalNamingStrategy.class.getName());
    props.put("hibernate.implicit_naming_strategy", SpringImplicitNamingStrategy.class.getName());
    return props;
}

@Primary
@Bean(name = "defaultEntityManager")
public LocalContainerEntityManagerFactoryBean defaultEntityManagerFactory(
        EntityManagerFactoryBuilder builder) {
    return builder
            .dataSource(auntDataSource())
            .packages(Aunt.class)
            .persistenceUnit("aunt")
            .properties(jpaProperties())
            .build();
}

The same result as @ewert's answer can be achieved using properties:
# this works
spring.jpa.properties.hibernate.implicit_naming_strategy=org.springframework.boot.orm.jpa.hibernate.SpringImplicitNamingStrategy
spring.jpa.properties.hibernate.physical_naming_strategy=org.springframework.boot.orm.jpa.hibernate.SpringPhysicalNamingStrategy
# but that doesn't work
spring.jpa.hibernate.naming.physical-strategy=org.springframework.boot.orm.jpa.hibernate.SpringPhysicalNamingStrategy
spring.jpa.hibernate.naming.implicit-strategy=org.springframework.boot.orm.jpa.hibernate.SpringImplicitNamingStrategy

I think I can explain why the default behaviour disappears, as per your latest question.
As of Spring Boot 2.4.2, the default configuration kicks in in this method of JpaBaseConfiguration:
@Bean
@Primary
@ConditionalOnMissingBean({ LocalContainerEntityManagerFactoryBean.class, EntityManagerFactory.class })
public LocalContainerEntityManagerFactoryBean entityManagerFactory(EntityManagerFactoryBuilder factoryBuilder) {
    Map<String, Object> vendorProperties = getVendorProperties();
    customizeVendorProperties(vendorProperties);
    return factoryBuilder.dataSource(this.dataSource).packages(getPackagesToScan()).properties(vendorProperties)
            .mappingResources(getMappingResources()).jta(isJta()).build();
}
The naming-strategy defaults are applied within the customizeVendorProperties method call.
By creating your own LocalContainerEntityManagerFactoryBean beans (two of them, actually), this customization is no longer performed.
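One way to get Boot's defaults back without hard-coding them (a sketch, assuming Spring Boot 2.x, where JpaProperties and HibernateProperties are injectable configuration beans; uncleDataSource() and Uncle come from the question) is to rebuild the vendor properties yourself and hand them to your custom factory bean:

import java.util.Map;

import org.springframework.boot.autoconfigure.orm.jpa.HibernateProperties;
import org.springframework.boot.autoconfigure.orm.jpa.HibernateSettings;
import org.springframework.boot.autoconfigure.orm.jpa.JpaProperties;
import org.springframework.boot.orm.jpa.EntityManagerFactoryBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Primary;
import org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean;

@Bean
@Primary
public LocalContainerEntityManagerFactoryBean uncleEntityManagerFactory(
        EntityManagerFactoryBuilder builder,
        JpaProperties jpaProperties,
        HibernateProperties hibernateProperties) {
    // Rebuild the vendor properties Boot would normally apply via
    // customizeVendorProperties, including the two naming strategies.
    Map<String, Object> vendorProperties = hibernateProperties
            .determineHibernateProperties(jpaProperties.getProperties(), new HibernateSettings());
    return builder
            .dataSource(uncleDataSource())
            .packages(Uncle.class)
            .persistenceUnit("uncle")
            .properties(vendorProperties)
            .build();
}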

If you are configuring the SessionFactory directly, you should use the following lines to set the naming strategies.
sessionFactory.setImplicitNamingStrategy(new SpringImplicitNamingStrategy());
sessionFactory.setPhysicalNamingStrategy(new SpringPhysicalNamingStrategy());
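For context, a minimal sketch of where those setters live (assuming Spring's LocalSessionFactoryBean from spring-orm is used to build the SessionFactory; the package name is a placeholder):

import javax.sql.DataSource;

import org.springframework.boot.orm.jpa.hibernate.SpringImplicitNamingStrategy;
import org.springframework.boot.orm.jpa.hibernate.SpringPhysicalNamingStrategy;
import org.springframework.context.annotation.Bean;
import org.springframework.orm.hibernate5.LocalSessionFactoryBean;

@Bean
public LocalSessionFactoryBean sessionFactory(DataSource dataSource) {
    LocalSessionFactoryBean sessionFactory = new LocalSessionFactoryBean();
    sessionFactory.setDataSource(dataSource);
    sessionFactory.setPackagesToScan("com.example.model"); // placeholder package
    sessionFactory.setImplicitNamingStrategy(new SpringImplicitNamingStrategy());
    sessionFactory.setPhysicalNamingStrategy(new SpringPhysicalNamingStrategy());
    return sessionFactory;
}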

The only way I got this running properly with Spring Boot 2+ was by setting the following manually:
@Bean(name = "myEmf")
public LocalContainerEntityManagerFactoryBean sapEntityManagerFactory(
        EntityManagerFactoryBuilder builder, @Qualifier("myDataSource") DataSource dataSource) {
    return builder
            .dataSource(dataSource)
            .packages("my.custom.package")
            .persistenceUnit("myPu")
            .properties(getProperties())
            .build();
}

public Map<String, String> getProperties() {
    Map<String, String> props = new HashMap<>(); // plain Java in place of Lombok's val
    if (isTest()) {
        props.put("hibernate.hbm2ddl.auto", "create");
    } else {
        props.put("hibernate.dialect", "org.hibernate.dialect.PostgreSQL95Dialect");
    }
    return props;
}

Related

Spring boot application.properties naming

I am learning about Spring Boot and trying to connect to a DB2 database. I got that working just fine.
Below are my working DB2 properties:
spring.datasource.url=jdbc:db2://server:port/database:currentSchema=schema-name;
spring.datasource.username=user1
spring.datasource.password=password1
But I renamed them to start with "db2" instead of "spring", like:
db2.datasource.url=jdbc:db2://server:port/database:currentSchema=schema-name;
db2.datasource.username=user1
db2.datasource.password=password1
My app still runs; however, when I do that, my controllers no longer return results as they did before the rename.
The reason I ask is that if I add a second data source in the future, I could easily distinguish properties by their data sources if I name them like this.
UPDATE:
Thanks to @Kosta Tenasis's answer below and this article (https://www.javadevjournal.com/spring-boot/multiple-data-sources-with-spring-boot/), I was able to figure this out and resolve it.
Going back to my specific question: once you have the data source configuration in place, you can then modify application.properties to have:
db2.datasource.url=...
instead of having:
spring.datasource.url=...
NOTE1: If you are using Spring Boot 2.0, it changed to use Hikari, and Hikari does not have a url property; it uses jdbc-url instead, so just change the above to:
db2.datasource.jdbc-url=...
NOTE2: In the data source configuration you had to create when adding multiple data sources to your project, you will have the @ConfigurationProperties annotation. This annotation needs to point to your updated application.properties prefix for the data source (the db2.datasource part of db2.datasource.url).
By default, Spring looks for spring.datasource.* for the properties of the DataSource to connect to.
So you might be getting wrong results because you are not connecting to the database. If you want to configure a DataSource with non-default properties, you can do it like so:
@Configuration
public class DataSourceConfig {

    @Bean
    @ConfigurationProperties(prefix = "db2.datasource")
    public DataSource dataSource() {
        return DataSourceBuilder.create()
                .build();
    }
}
And if a day comes along when you want a second DataSource, you can modify the previous class to something like:
@Configuration
public class DataSourceConfig {

    @Bean
    @ConfigurationProperties(prefix = "db2.datasource")
    public DataSource db2Datasource() {
        return DataSourceBuilder.create()
                .build();
    }

    @Bean
    @ConfigurationProperties(prefix = "db3.datasource")
    public DataSource db3Datasource() { // pun intended
        return DataSourceBuilder.create()
                .build();
    }
}
and after that, in each class where you want a DataSource, you can specify which of the beans you like:
public class Db3DependentClass {

    private final DataSource dataSource;

    public Db3DependentClass(@Qualifier("db3Datasource") DataSource dataSource) {
        this.dataSource = dataSource;
    }
}
So by default Spring will look for:
spring.datasource.url (or spring.datasource.jdbc-url)
spring.datasource.username
spring.datasource.password
If you specify another DataSource of your own, those values are not needed.
So in the above example, where we specified db3.datasource, Spring will look for:
db3.datasource.url
db3.datasource.username
db3.datasource.password
The important thing here is that the spring prefix IS NOT implied, meaning the complete path is indeed db3.datasource.url
and NOT
spring.db3.datasource.url
Finally, to wrap this up, you do have the flexibility to make the prefix start with spring if you want, by declaring a prefix like spring.any.path.ilike.datasource and, of course, nesting the related values under it. Spring will pick up either path as long as you specify it.
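For example (a sketch; the prefix, property values, and bean name are arbitrary):

import javax.sql.DataSource;

import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.boot.jdbc.DataSourceBuilder;
import org.springframework.context.annotation.Bean;

// application.properties (hypothetical values):
// spring.any.path.ilike.datasource.url=jdbc:postgresql://host:5432/db
// spring.any.path.ilike.datasource.username=user
// spring.any.path.ilike.datasource.password=secret

@Bean
@ConfigurationProperties(prefix = "spring.any.path.ilike.datasource")
public DataSource customDataSource() {
    return DataSourceBuilder.create().build();
}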
NOTE: This answer was written solely in the text box provided here and was not tested in an IDE for compilation errors, but the logic still holds.

Spring Batch/Data JPA application not persisting/saving data to Postgres database when calling JPA repository (save, saveAll) methods

I am near wits' end. I have read/googled endlessly so far and tried the solutions from all the Google/Stack Overflow posts that describe this same issue (there are quite a few). Some seemed promising, but nothing has worked for me yet, though I have made some progress, and I believe I'm on the right track (at this point I suspect something with the transaction manager and a possible conflict between Spring Batch and Spring Data JPA).
References:
Spring boot repository does not save to the DB if called from scheduled job
JpaItemWriter: no transaction is in progress
Similar to the aforementioned posts, I have a Spring Boot application that uses Spring Batch and Spring Data JPA. It reads comma-delimited data from a .csv file, does some processing/transformation, and attempts to persist/save to the database using the JPA repository methods, specifically .saveAll() (I also tried the .save() method, which did the same thing), since I'm saving a List<MyUserDefinedDataType> of a user-defined data type (batch insert).
Now, my code was working fine on Spring Boot starter 1.5.9.RELEASE, but I recently attempted to upgrade to 2.x and found, after countless hours of debugging, that only version 2.2.0.RELEASE would persist/save data to the database. An upgrade to >= 2.2.1.RELEASE breaks persistence. Everything is read fine from the .csv; it's just that the first time the code flow hits a JPA repository method like .save() or .saveAll(), the application keeps running but nothing gets persisted. I also noticed the Hikari pool logs "active=1 idle=4", but when I looked at the same log on version 1.5.9.RELEASE, it says active=0 idle=5 immediately after persisting the data, so the application is definitely hanging. I went into the debugger and saw that after jumping into the repository calls, it goes into an almost infinite cycle through the Spring AOP libraries and such (all third party), and I don't believe it ever comes back to the real application/business logic that I wrote.
3c22fb53ed64 2021-05-20 23:53:43.909 DEBUG
[HikariPool-1 housekeeper] com.zaxxer.hikari.pool.HikariPool - HikariPool-1 - Pool stats (total=5, active=1, idle=4, waiting=0)
Anyway, I tried the most common solutions that worked for other people, which were:
Defining a JpaTransactionManager @Bean and injecting it into the Step function, while keeping the JobRepository using the PlatformTransactionManager. This did not work. Then I also tried using the JpaTransactionManager in the JobRepository @Bean; this also did not work.
Defining a @RestController endpoint in my application to manually trigger this job, instead of doing it from my main Application.java class (I talk about this more below). Per one of the posts linked above, the data persisted correctly to the database, even on Spring >= 2.2.1, which makes me further suspect that something with the Spring Batch persistence/entity/transaction managers is messed up.
The code is basically this:
BatchConfiguration.java
@Configuration
@EnableBatchProcessing
@Import({DatabaseConfiguration.class})
public class BatchConfiguration {

    // Datasource is a Postgres DB defined in a separate IntelliJ project that I add to my pom.xml
    DataSource dataSource;

    // provided by @EnableBatchProcessing
    @Autowired
    private JobBuilderFactory jobBuilderFactory;

    @Autowired
    private StepBuilderFactory stepBuilderFactory;

    @Autowired
    public BatchConfiguration(@Qualifier("dataSource") DataSource dataSource) {
        this.dataSource = dataSource;
    }

    @Bean
    @Primary
    public JpaTransactionManager jpaTransactionManager() {
        final JpaTransactionManager tm = new JpaTransactionManager();
        tm.setDataSource(dataSource);
        return tm;
    }

    @Bean
    public JobRepository jobRepository(PlatformTransactionManager transactionManager) throws Exception {
        JobRepositoryFactoryBean jobRepositoryFactoryBean = new JobRepositoryFactoryBean();
        jobRepositoryFactoryBean.setDataSource(dataSource);
        jobRepositoryFactoryBean.setTransactionManager(transactionManager);
        jobRepositoryFactoryBean.setDatabaseType("POSTGRES");
        return jobRepositoryFactoryBean.getObject();
    }

    @Bean
    public JobLauncher jobLauncher(JobRepository jobRepository) {
        SimpleJobLauncher simpleJobLauncher = new SimpleJobLauncher();
        simpleJobLauncher.setJobRepository(jobRepository);
        return simpleJobLauncher;
    }

    @Bean(name = "jobToLoadTheData")
    public Job jobToLoadTheData() {
        return jobBuilderFactory.get("jobToLoadTheData")
                .start(stepToLoadData())
                .listener(new CustomJobListener())
                .build();
    }

    @Bean
    @StepScope
    public TaskExecutor taskExecutor() {
        ThreadPoolTaskExecutor threadPoolTaskExecutor = new ThreadPoolTaskExecutor();
        threadPoolTaskExecutor.setCorePoolSize(maxThreads);
        threadPoolTaskExecutor.setThreadGroupName("taskExecutor-batch");
        return threadPoolTaskExecutor;
    }

    @Bean(name = "stepToLoadData")
    public Step stepToLoadData() {
        TaskletStep step = stepBuilderFactory.get("stepToLoadData")
                .transactionManager(jpaTransactionManager())
                .<List<FieldSet>, List<myCustomPayloadRecord>>chunk(chunkSize)
                .reader(myCustomFileItemReader(OVERRIDDEN_BY_EXPRESSION))
                .processor(myCustomPayloadRecordItemProcessor())
                .writer(myCustomerWriter())
                .faultTolerant()
                .skipPolicy(new AlwaysSkipItemSkipPolicy())
                .skip(DataValidationException.class)
                .listener(new CustomReaderListener())
                .listener(new CustomProcessListener())
                .listener(new CustomWriteListener())
                .listener(new CustomSkipListener())
                .taskExecutor(taskExecutor())
                .throttleLimit(maxThreads)
                .build();
        step.registerStepExecutionListener(stepExecutionListener());
        step.registerChunkListener(new CustomChunkListener());
        return step;
    }
}
My main method:
Application.java
@Autowired
@Qualifier("jobToLoadTheData")
private Job loadTheData;

@Autowired
private JobLauncher jobLauncher;

@PostConstruct
public void launchJob() throws JobParametersInvalidException, JobExecutionAlreadyRunningException, JobRestartException, JobInstanceAlreadyCompleteException {
    JobParameters parameters = (new JobParametersBuilder()).addDate("random", new Date()).toJobParameters();
    jobLauncher.run(loadTheData, parameters);
}

public static void main(String[] args) {
    SpringApplication.run(Application.class, args);
}
Now, normally I'm reading this .csv from an Amazon S3 bucket, but since I'm testing locally, I am just placing the .csv in the project directory and reading it directly by triggering the job in the Application.java main class (as you can see above). Also, I do have some other beans defined in this BatchConfiguration class, but I don't want to over-complicate this post more than it already is, and from the googling I've done, the problem is possibly with the methods I posted (hopefully).
Also, I would like to point out, similar to one of the other posts on Google/Stack Overflow with a user having a similar problem, that I created a @RestController endpoint that simply calls the .run() method of the JobLauncher, passing in the jobToLoadTheData bean, and it triggers the batch insert. Guess what? Data persists to the database just fine, even on Spring >= 2.2.1.
What is going on here? Is this a clue? Is something funky going wrong with some type of entity or transaction manager? I'll take any advice or tips! I can provide any more information that you may need, so please just ask.
You are defining a bean of type JobRepository and expecting it to be picked up by Spring Batch. This is not correct. You need to provide a BatchConfigurer and override getJobRepository. This is explained in the reference documentation:
You can customize any of these beans by creating a custom implementation of the
BatchConfigurer interface. Typically, extending the DefaultBatchConfigurer
(which is provided if a BatchConfigurer is not found) and overriding the required
getter is sufficient.
This is also documented in the Javadoc of @EnableBatchProcessing. So in your case, you need to define a bean of type BatchConfigurer and override getJobRepository and getTransactionManager, something like:
@Bean
public BatchConfigurer batchConfigurer(EntityManagerFactory entityManagerFactory, DataSource dataSource) {
    return new DefaultBatchConfigurer(dataSource) {
        @Override
        public PlatformTransactionManager getTransactionManager() {
            return new JpaTransactionManager(entityManagerFactory);
        }

        @Override
        public JobRepository getJobRepository() {
            JobRepositoryFactoryBean jobRepositoryFactoryBean = new JobRepositoryFactoryBean();
            jobRepositoryFactoryBean.setDataSource(dataSource);
            jobRepositoryFactoryBean.setTransactionManager(getTransactionManager());
            // set other properties
            return jobRepositoryFactoryBean.getObject();
        }
    };
}
In a Spring Boot context, you could also override the createTransactionManager and createJobRepository methods of org.springframework.boot.autoconfigure.batch.JpaBatchConfigurer if needed.

spring boot with multiple databases

I'm trying to write an application that accesses data from two sources. I'm using Spring Boot 2.3.2. I've looked at several sources for info about how to configure the app: the Spring documentation talks about setting up multiple datasources, but does not explain how to link up JPA repositories. This Baeldung article goes a lot further, but I'm looking to take advantage of autoconfiguration in Spring.
So far, I've created a separate package, added a config class (along with model and repositories), and included this package in scanBasePackages so that it's picked up. Since I'll have more than one data source, I've added this to my @SpringBootApplication class:
@Bean
@Primary
public DataSourceProperties dataSourceProperties() {
    return new DataSourceProperties();
}
This successfully loads up my Spring app using the standard spring config values. The two databases are on different servers, but should share characteristics (other than url and credentials).
So, my auxiliary configuration file looks like this
@Configuration
@EnableAutoConfiguration
@EnableTransactionManagement
@EnableJpaRepositories(
        entityManagerFactoryRef = "orgEntityManagerFactory",
        transactionManagerRef = "orgTransactionManager",
        basePackages = {
                "package2.repositories"
        }
)
public class DataSourceConfiguration {

    // added because of this answer: https://stackoverflow.com/a/51305724/167889
    @Bean
    public EntityManagerFactoryBuilder entityManagerFactoryBuilder() {
        return new EntityManagerFactoryBuilder(new HibernateJpaVendorAdapter(), new HashMap<>(), null);
    }

    @Bean
    @ConfigurationProperties(prefix = "external.datasource")
    public DataSourceProperties orgDataSourceProperties() {
        return new DataSourceProperties();
    }

    @Bean
    public HikariDataSource orgDataSource(@Qualifier("orgDataSourceProperties") DataSourceProperties properties) {
        return properties.initializeDataSourceBuilder().type(HikariDataSource.class)
                .build();
    }

    @Bean
    public LocalContainerEntityManagerFactoryBean orgEntityManagerFactory(EntityManagerFactoryBuilder builder,
            @Qualifier("orgDataSource") DataSource dataSource) {
        return builder
                .dataSource(dataSource)
                .packages("package2.model")
                .build();
    }

    @Bean
    public PlatformTransactionManager orgTransactionManager(
            @Qualifier("orgEntityManagerFactory") EntityManagerFactory entityManagerFactory
    ) {
        return new JpaTransactionManager(entityManagerFactory);
    }
}
Now, the error I'm getting is Access to DialectResolutionInfo cannot be null when 'hibernate.dialect' not set. However, I have that value in my config, and it's applied by the Spring autoconfiguration. I believe it needs to be set in the EntityManagerFactoryBuilder, and by creating my own, the autoconfiguration isn't getting applied.
How can I have my cake and eat it too? I'd like to leverage as much as possible of the robust autoconfiguration that Spring provides to set up data sources and wire them to the appropriate repositories. Effectively, all I want to change is the URL and credentials, and I can separate the entities and repositories into a completely separate package for easy scanning.

How to create custom datasource for each session

I would like to create a stateless bean in Spring Boot which enables a user to connect to a specific database, so I started with this code, but I'm still new to Spring Boot.
@Bean
@Primary
public DataSource helloDataSource() {
    return DataSourceBuilder.create()
            .username("myUsername")
            .password("myPassword")
            .driverClassName("myDBDriver")
            .build();
}
So what is the best way to make this code work and connect to any database (including a remote database)?
There are several ways this could be done. One of my preferred ways is to provide the configuration of data sources via a properties file. Here is a sample property file for PostgreSQL:
pg.datasource.url=jdbc:postgresql://db-server-bar:5432/app-db
pg.datasource.username=root
pg.datasource.password=toor
pg.datasource.driver-class-name=org.postgresql.Driver
Now you can create a configuration class for each data source:
@Configuration
public class BarDbConfig {

    @Bean(name = "pgDataSource")
    @ConfigurationProperties(prefix = "pg.datasource")
    public DataSource dataSource() {
        return DataSourceBuilder.create().build();
    }

    @Bean(name = "barEntityManagerFactory")
    public LocalContainerEntityManagerFactoryBean barEntityManagerFactory(
            EntityManagerFactoryBuilder builder,
            @Qualifier("pgDataSource") DataSource dataSource
    ) {
        return builder
                .dataSource(dataSource)
                .packages("com.app.domain")
                .persistenceUnit("pg")
                .build();
    }
}
For a detailed walkthrough, you can refer to this tutorial.
There can also be situations where you would like to switch databases at runtime. In that case, you can use something called AbstractRoutingDataSource, sketched below. A detailed tutorial on how to use this feature can be found on Spring's official blog.
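As a rough sketch of the idea (AbstractRoutingDataSource is Spring's org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource; the tenant-key holder and the "barDataSource" bean are assumptions for illustration):

import java.util.HashMap;
import java.util.Map;

import javax.sql.DataSource;

import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource;

// Routes each connection request to a target DataSource chosen at runtime.
public class TenantRoutingDataSource extends AbstractRoutingDataSource {

    // Hypothetical holder for the current tenant key (e.g. set by a web filter
    // at the start of each session/request).
    public static final ThreadLocal<String> CURRENT_TENANT = new ThreadLocal<>();

    @Override
    protected Object determineCurrentLookupKey() {
        // A null key falls back to the default target data source.
        return CURRENT_TENANT.get();
    }
}

@Configuration
class RoutingConfig {

    @Bean
    public DataSource routingDataSource(@Qualifier("pgDataSource") DataSource pgDataSource,
                                        @Qualifier("barDataSource") DataSource barDataSource) {
        // "pgDataSource" is from the answer above; "barDataSource" is a hypothetical second pool.
        TenantRoutingDataSource routing = new TenantRoutingDataSource();
        Map<Object, Object> targets = new HashMap<>();
        targets.put("pg", pgDataSource);
        targets.put("bar", barDataSource);
        routing.setTargetDataSources(targets);
        routing.setDefaultTargetDataSource(pgDataSource);
        return routing;
    }
}

With this in place, calling TenantRoutingDataSource.CURRENT_TENANT.set("bar") before obtaining a connection routes that session's work to the second database.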

How to Replace RemoteServer() when upgrading to Spring Data 4.2+?

In upgrading to Neo4j 3.2.3 (from Neo4j 2.5), I've had to upgrade my Spring Data dependency. The main reason for upgrading is to take advantage of Neo4j's new Bolt protocol. I bumped the versions in my Maven pom.xml, and I'm having issues with one change in particular: how to set up the scaffolding for Sessions and the RemoteServer configuration.
org.springframework.data.neo4j.server.RemoteServer has been removed from the Spring Data Neo4j 4 API, breaking my code, and I'm not sure how to get things to compile again. I've tried a number of sources online, with little success. Here's what I've read:
Neo4j 3.0 and spring data
https://docs.spring.io/spring-data/neo4j/docs/current/reference/html/#_spring_configuration
https://graphaware.com/neo4j/2016/09/30/upgrading-to-sdn-42.html
None of these resources quite explain how to refactor the Spring configuration (and its clients) to use whatever replaces the RemoteServer object.
How do I connect to my Neo4j database with Spring Data Neo4j, given a URL, username, and password? Bonus points for explaining how these interrelate to Sessions and SessionFactorys.
The configuration should look like this:
@Configuration
@EnableNeo4jRepositories(basePackageClasses = UserRepository.class)
@ComponentScan(basePackageClasses = UserService.class)
static class Config {

    @Bean
    public SessionFactory getSessionFactory() {
        return new SessionFactory(configuration(), User.class.getPackage().getName());
    }

    @Bean
    public Neo4jTransactionManager transactionManager() throws Exception {
        return new Neo4jTransactionManager(getSessionFactory());
    }

    @Bean
    public org.neo4j.ogm.config.Configuration configuration() {
        return new org.neo4j.ogm.config.Configuration.Builder()
                .uri("bolt://localhost")
                .credentials("username", "password")
                .build();
    }
}
SessionFactory and Session are described here
Please comment about what's unclear in the docs.
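To sketch how they interrelate (a minimal example built on the Config class above; the User entity and its getId() accessor are assumptions for illustration):

import org.neo4j.ogm.session.Session;
import org.neo4j.ogm.session.SessionFactory;

// Suppose the SessionFactory bean from the Config above is injected here.
// It is the long-lived, thread-safe entry point holding the driver configuration
// and entity metadata; a Session is a short-lived unit of work obtained from it.
public void demo(SessionFactory sessionFactory) {
    Session session = sessionFactory.openSession();

    User user = new User();
    session.save(user);                                   // persist over Bolt
    User loaded = session.load(User.class, user.getId()); // read back by id (getId() assumed)
}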
