I am doing some experiments with the following technologies:
Neo4j 4.4 (community edition)
Spring-Boot 2.7.5
Spring-Data-Neo4j 6.3.5
and I am trying to understand how to automate the creation of unique constraints for a @Node entity.
import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;
import org.springframework.data.neo4j.core.schema.GeneratedValue;
import org.springframework.data.neo4j.core.schema.Id;
import org.springframework.data.neo4j.core.schema.Node;

import javax.validation.constraints.NotEmpty;
import java.util.UUID;

@Data
@Node
@NoArgsConstructor
@AllArgsConstructor
public class Organization {

    @Id
    @GeneratedValue
    private UUID id;

    // TODO: Annotation for automatic unique constraint generation
    @NotEmpty
    private String name;
}
Based on the SDN6 docs
https://docs.spring.io/spring-data/neo4j/docs/6.3.5/reference/html/#migrating.autoindex
the automatic index manager and the relevant Neo4j-OGM annotations (@Index, @CompositeIndex, @Required) have been removed in favor of database versioning tools like:
https://github.com/michael-simons/neo4j-migrations
https://github.com/liquibase/liquibase-neo4j
Considering the SDN6 docs https://docs.spring.io/spring-data/neo4j/docs/6.3.5/reference/html/#faq.sdn-related-to-ogm
How does SDN relate to Neo4j-OGM?
Neo4j-OGM is an Object Graph Mapping library, which is mainly used by previous versions of Spring Data Neo4j as its backend for the heavy lifting of mapping nodes and relationships into domain objects. The current SDN does not need and does not support Neo4j-OGM. SDN uses Spring Data's mapping context exclusively for scanning classes and building the meta model.
The nearest option seems to be the Neo4j Migrations annotation processor
https://michael-simons.github.io/neo4j-migrations/current/#appendix_annotation
but it seems to rely on Neo4j-OGM for generating the unique constraints:
https://github.com/michael-simons/neo4j-migrations/tree/main/extensions/neo4j-migrations-annotation-processing
https://github.com/michael-simons/neo4j-migrations/tree/main/extensions/neo4j-migrations-annotation-processing/processor/src/test/java/ac/simons/neo4j/migrations/annotations/proc/ogm
https://github.com/michael-simons/neo4j-migrations/blob/main/extensions/neo4j-migrations-annotation-processing/processor/src/test/java/ac/simons/neo4j/migrations/annotations/proc/ogm/SingleIndexEntity.java
https://github.com/michael-simons/neo4j-migrations/blob/main/extensions/neo4j-migrations-annotation-processing/processor/src/test/java/ac/simons/neo4j/migrations/annotations/proc/ogm/UniqueConstraintEntity.java
https://github.com/michael-simons/neo4j-migrations/tree/main/extensions/neo4j-migrations-annotation-processing/processor/src/test/java/ac/simons/neo4j/migrations/annotations/proc/sdn6
How can I configure automatic Neo4j index and constraint creation using one of the suggested migration tools in a Spring Boot application with Spring Data Neo4j 6?
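For reference, a minimal sketch of how this could look with neo4j-migrations (assuming its Spring Boot starter eu.michael-simons.neo4j:neo4j-migrations-spring-boot-starter and its default migration location; the file and constraint names are illustrative). A versioned Cypher migration placed under src/main/resources/neo4j/migrations would be applied on startup:

// V0001__Create_organization_constraints.cypher
// Unique constraint on Organization.name, using Neo4j 4.4 syntax.
CREATE CONSTRAINT organization_name_unique IF NOT EXISTS
FOR (o:Organization)
REQUIRE o.name IS UNIQUE;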
Related
BACKGROUND
I am new to developing APIs in Spring Boot. I have a project that is connected to both an Oracle DB and PostgreSQL. The Oracle DB already has existing tables, and I need to fetch some data from multiple tables and send it back as a response. The Postgres DB is where I store the users' data and some other data that doesn't need to be stored in the Oracle DB. I am currently using native queries.
The Account is an entity where I just marked one of the columns as the @Id (it is actually not an ID, but it is unique across all accounts):
@Entity
@Data
@AllArgsConstructor
@NoArgsConstructor
@Builder
public class Account {

    @Id
    private String sampleProperty1;
    private String sampleProperty2;
    private String sampleProperty3;
    private String sampleProperty4;
    private String sampleProperty5;
}
Now I have a repository interface:
public interface IAccountRepository extends JpaRepository<Account, String> {

    @Query(value = "SELECT * FROM TABLE(SAMPLE_PACKAGE.SAMPLE_FUNC_GETACCOUNTS(?1))", nativeQuery = true)
    List<Account> getAllAccountsByClientNumber(String clientNumber);
}
I was able to fetch the data, and JPA mapped the columns automatically to my entity. Basically, I am creating an entity (in Spring Boot) for the data in my Oracle DB whose only purpose is to map the data and send it back to the user.
QUESTIONS
Will this approach create a table in my Oracle DB? I checked the Oracle DB and there is no table, but I'm worried it might somehow create an ACCOUNT table in the Oracle DB once it is in production. If this can happen, how can I prevent it?
This scenario also applies to other functionality like fetching transaction history, creating transactions, and updating Account data, all of which live in the Oracle DB. Am I doing this right, or is there a better option?
Does creating an entity without a corresponding table have any drawbacks in Spring Boot?
Note
I know you might say that I should just use the Oracle DB and create entities based on the existing tables, but in the future the API will not have a connection to the Oracle DB. I already tried using projections, which also worked well, but I still needed to create a response model, map it, and send it back to the user, and writing unit tests with projections is pretty long-winded.
You can set the following property:
spring.jpa.hibernate.ddl-auto=update
update will update your database schema if the tables are already created, and will create them if they are not.
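For completeness, these are the common values of that property; if the goal is to make sure Hibernate never creates tables in the Oracle DB, none or validate is the usual choice:

# Hibernate schema generation (spring.jpa.hibernate.ddl-auto):
#   none        - never touch the schema (safest when tables are managed externally)
#   validate    - only check that the entities match the existing schema
#   update      - create missing tables/columns, never drop anything
#   create      - recreate the schema on startup
#   create-drop - recreate on startup and drop on shutdown (tests only)
spring.jpa.hibernate.ddl-auto=none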
I am trying to build an application with Spring Boot and Domain-Driven Design. I have a problem with the domain model (which matches the fields of a DB table) and the view model (the API response).
Domain Model:
EX:
class Name
@Getter
@NoArgsConstructor
@AllArgsConstructor
class Name {
    String value;
}
class Product
@Getter
@NoArgsConstructor
@AllArgsConstructor
class Product {
    Name name;
}
ViewModel:
@Data
@NoArgsConstructor
@AllArgsConstructor
class ProductView {
    //int prodId;
    String prodName;
}
I select data from the DB via the Product class and build the API response with the ProductView class. To convert from the domain model to the view model (and vice versa), I have written a static method in ProductView.
It will become:
@Data
@NoArgsConstructor
@AllArgsConstructor
class ProductView {
    //int prodId;
    String prodName;

    public static ProductView of(Product prod) {
        String productName = prod.getName().getValue();
        return new ProductView(productName);
    }
}
It works well, but as the models multiply I think I need a common converter from the domain model to the view model and vice versa.
One solution would be the MapStruct library, but MapStruct only supports converting fields of the same type (String to String, for example). What is the best solution for writing such a common converter?
My advice: do not query domain models and translate them to view models for reading.
Domain model classes (e.g. aggregates) are used to represent business data and behaviour, with the purpose of adhering to business invariants when creating or changing such business entities.
For building your view models from your persistent data you can - and in my opinion you should - bypass the domain model. You can safely read the data from your database as you need it without going through domain repositories.
This is okay because you can't violate business rules by just reading data. For writing data go through domain repositories and aggregates.
In your case you can of course use view model entities using JPA annotations by designing those classes to exactly fit your viewing requirements. Keep in mind that view models often don't correlate to domain models as they might only need a subset of the data or aggregate data from different aggregates.
Another catch: querying many objects for viewing can quickly cause performance issues if you fetch full domain aggregates via repositories. Because such aggregates always load all data of their child entities and value objects as well (to allow performing business logic with all invariants), you would end up running lots of expensive queries that are suited for loading a single aggregate, but not many of them at once.
So by querying only what you need for viewing you also address such performance issues.
When following DDD you should usually create or change only one aggregate within a business transaction. So domain models are not suited for query optimization but for keeping the business invariants intact when writing business data.
View models and corresponding queries are optimized for reading and collecting all data required.
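As an illustration, a minimal sketch of such a dedicated read model (the class, table, and column names are hypothetical); it maps only the columns the view needs and is queried directly, bypassing the domain aggregate:

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Table;

// Hypothetical read-only view model mapped straight onto the product table.
@Entity
@Table(name = "product")
public class ProductReadModel {

    @Id
    private Long id;

    // Only the columns the view actually needs.
    @Column(name = "name")
    private String prodName;

    protected ProductReadModel() { } // required by JPA

    public Long getId() { return id; }
    public String getProdName() { return prodName; }
}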
Simply map like this (with MapStruct):
@Mapping(source = "name.value", target = "prodName")
public abstract ProductView toProductView(Product model);
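For context, a minimal sketch of the surrounding mapper (assuming MapStruct with componentModel = "spring"; the class name is illustrative). MapStruct can navigate nested source properties, so Name.value maps onto the flat prodName field:

import org.mapstruct.Mapper;
import org.mapstruct.Mapping;

@Mapper(componentModel = "spring")
public abstract class ProductViewMapper {

    // "name.value" walks into the Name value object of Product.
    @Mapping(source = "name.value", target = "prodName")
    public abstract ProductView toProductView(Product model);
}

The generated implementation can then be injected and used like any other Spring bean.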
I am new to spring-data-jdbc and just trying to port a small project, which currently uses JPA, for evaluation purposes.
My existing entities use a database schema which can easily be specified via JPA's @Table annotation at the entity level.
I saw that a @Table annotation also exists for spring-data-jdbc, but no schema can be specified.
The only approach I found so far is to override the naming strategy in the JdbcConfiguration:
@Bean
fun namingStrategy(): NamingStrategy {
    return object : NamingStrategy {
        override fun getSchema(): String {
            return "my_schema"
        }
    }
}
I would prefer an approach where the schema is specified directly on the entity, so that the same configuration can be used for different schemas.
Are there any other ways to specify the database schema for each aggregate separately?
The answer to my own question is rather trivial:
By using the annotation #Table(value = "my_schema.some_table") on the entity level the proper schema is used.
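For illustration, the same idea in Java (the entity and names are placeholders); Spring Data JDBC simply treats the schema as part of the qualified table name passed to @Table:

import org.springframework.data.annotation.Id;
import org.springframework.data.relational.core.mapping.Table;

// The schema prefix is part of the value given to @Table.
@Table("my_schema.some_table")
public class SomeEntity {

    @Id
    private Long id;
    private String name;
}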
I have the following Kotlin code:
val book = bookRepository
    .findById(1)
    .orElseThrow { DoesNotExistException("Book with ID 1 does not exist") }

bookRepository.save(book)
I would expect that this code just saves the same entity again. However, it actually generates a new entity by copying all the fields and changing the ID.
The Entity itself is here:
@Entity
data class Book(
    @Id
    @GeneratedValue
    var id: Long?,
    val status: Book.Status
)
I am using Micronaut Data (previously named Micronaut Predator) with JDBC; I am not using JPA.
How could I update the existing entity without creating a new one?
That is how it works by design. From the documentation https://micronaut-projects.github.io/micronaut-data/snapshot/guide/#jdbcQuickStart, scroll down to the "Updating an Instance (Update)" section:
With Micronaut Data JDBC, you must manually implement an update method, since the JDBC implementation doesn't include any dirty checking or notion of a persistence session. So you have to define explicit update methods for updates in your repository.
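A minimal sketch of what that could look like for the Book entity above (the dialect is an assumption, and the exact signature follows the update-method convention from the Micronaut Data docs):

import io.micronaut.data.annotation.Id;
import io.micronaut.data.jdbc.annotation.JdbcRepository;
import io.micronaut.data.model.query.builder.sql.Dialect;
import io.micronaut.data.repository.CrudRepository;

@JdbcRepository(dialect = Dialect.H2) // pick the dialect matching your database
public interface BookRepository extends CrudRepository<Book, Long> {

    // Explicit partial update: the "status" parameter is matched
    // to the entity property of the same name.
    void update(@Id Long id, Book.Status status);
}

Alternatively, CrudRepository already exposes update(entity), so calling bookRepository.update(book) instead of save(book) updates the existing row rather than inserting a new one.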
I was developing a simple Spring Boot application in which JPA + Hibernate is used for accessing my data source, which is an Oracle DB. The entity class is given below.
@Entity
@Table(name = "MY_SCHEMA.MY_DB")
public class Member implements Serializable {
    .............
}
Currently my project doesn't have any persistence.xml. The problem is that I need to make the schema name (MY_SCHEMA) inside the @Table annotation configurable, i.e. read the schema value from the application.properties file at run time.
I have tried adding the spring.jpa.properties.hibernate.default_schema=schema option to the application.properties file, but all in vain.
Update
I have added more details in another question: Hibernate how to make schema name configurable for entity class
Below are the available options for creating a table in a particular schema. Mention the schema name in the schema attribute:
@Entity
@Table(name = "MY_TABLE_NAME", schema = "MY_SCHEMA_NAME")
public class Myclass {
    ...
}
You can also define the schema name in the DB URL via the application.properties file, as below; update the values as per your needs.
spring.datasource.url=jdbc:mysql://localhost:3306/MY_SCHEMA_NAME?autoReconnect=true&useSSL=false
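A further option is Hibernate's default schema, set via Spring's JPA properties (the value is a placeholder); note that it only takes effect for entities that do not hard-code a schema in their @Table annotation:

# passed through to Hibernate as hibernate.default_schema
spring.jpa.properties.hibernate.default_schema=MY_SCHEMA_NAME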