I want to make the userName property on my User node unique.
I used the code below, but it doesn't create a unique constraint in the Neo4j database.
@Property(name = "name")
@Index(unique = true)
private String userName;
FYI, I'm using Neo4j Server version 3.3.6 (community) with Spring Boot 2.
But if I create the constraint myself in the Neo4j Browser, it works:
CREATE CONSTRAINT ON (user:User) ASSERT user.userName IS UNIQUE
Is there a way to force Spring Data Neo4j to create unique constraints without me creating them manually in the database?
You need to configure the auto index manager if you want the application code to create the constraints.
You can find the best fitting option in the documentation:
https://docs.spring.io/spring-data/neo4j/docs/current/reference/html/#reference:indexing:creation
Just a note on this topic: think of the auto index creation like Hibernate's DDL support. It is a helper at development time. You should not use assert and update in production environments, but only validate.
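For example, if you build the SessionFactory yourself, the mode can be set through the OGM configuration (a minimal sketch; URI, credentials and domain package are placeholders):

import org.neo4j.ogm.config.Configuration;
import org.neo4j.ogm.session.SessionFactory;

// Sketch: turn on OGM's auto index manager programmatically.
// Possible modes are "none" (default), "validate", "update" and "assert".
Configuration configuration = new Configuration.Builder()
        .uri("bolt://localhost:7687")      // placeholder
        .credentials("neo4j", "secret")    // placeholder
        .autoIndex("assert")               // creates constraints/indexes on startup
        .build();
SessionFactory sessionFactory = new SessionFactory(configuration, "com.example.domain");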
Reason
In Spring Data Neo4j 4, index management concerns were removed from the mapping framework entirely.
(from Index Management in Spring Data Neo4j)
Solution
@Autowired
private SessionFactory sessionFactory;

@PostConstruct
public void createIndexesAndConstraints() {
    Session session = sessionFactory.openSession();
    // For the uniqueness constraint from the question, run instead:
    // CREATE CONSTRAINT ON (user:User) ASSERT user.userName IS UNIQUE
    session.query("CREATE INDEX ON :User(userName)", Collections.emptyMap());
}
You can configure the mode our auto index manager works in through application.properties:
spring.data.neo4j.auto-index=validate # or
# spring.data.neo4j.auto-index=update
# spring.data.neo4j.auto-index=assert
Default mode is none. Apart from that, what @meistermeier says applies.
Also, Neo4jOperations was deprecated in SDN 4.x and has been removed in SDN 5. Use Session instead for operations "near" the database.
Thank you @ThirstForKnowledg for your answer. But I have three other questions:
1- I'm using Spring Boot 2, and I cannot see Neo4jOperations on my classpath to import it.
2- Should I put this in my entity node or in another bean?
3- What about running my application two or more times? I think it would throw an exception the second time and onwards.
Related
How would you approach the problem of a simple app that allows users to summarise/calculate the average of the price values of stored inventory? MVC model, Spring, H2. Do I need Hibernate to achieve that? How do I access the fields of particular stored items?
Is it a requirement to use H2? If not, just 'read' the inventory and calculate on the fly. Build the minimal solution.
If yes, I personally prefer to go with Spring Boot / Spring Data / Hibernate, as this is widely used and easier to maintain than a self-built solution. You could get the information with a custom query on the repository - something like:
@Query(value = "SELECT AVG(price) FROM product", nativeQuery = true)
public Double getAveragePrice();
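For context, such a query could sit in a Spring Data repository along these lines (a sketch; the Product entity and repository name are assumptions, and this variant uses JPQL instead of native SQL):

import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Query;

// Hypothetical repository for a Product entity with a "price" field.
public interface ProductRepository extends JpaRepository<Product, Long> {

    // JPQL variant: the database computes the average, no rows are loaded.
    @Query("SELECT AVG(p.price) FROM Product p")
    Double getAveragePrice();
}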
With https://bootify.io you can set up your Spring Boot app with the database model and add the custom logic on top.
I'm trying to configure Cassandra in my application by extending CassandraAutoConfiguration
I'm using spring CassandraRepository for DB access and classes with o.s.d.cassandra.core.mapping.Table annotation for defining my tables.
I've also configured the following property, along with the other required properties for the cluster:
spring:
  data:
    cassandra:
      schema-action: CREATE_IF_NOT_EXISTS
But no tables get created in Cassandra upon application startup.
The schemaAction in CassandraProperties is not working.
If I programmatically create the tables on startup in my ApplicationRunner using cassandraTemplate.getCqlOperations().execute(...), then everything works fine: I am able to use my repository's find() and save() methods, which proves that my @Table classes are correctly written.
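For reference, that workaround looks roughly like this (a sketch; the class name and table definition are illustrative):

import org.springframework.boot.ApplicationArguments;
import org.springframework.boot.ApplicationRunner;
import org.springframework.data.cassandra.core.CassandraTemplate;
import org.springframework.stereotype.Component;

// Illustrative ApplicationRunner that creates the schema by hand on startup.
@Component
public class SchemaInitializer implements ApplicationRunner {

    private final CassandraTemplate cassandraTemplate;

    public SchemaInitializer(CassandraTemplate cassandraTemplate) {
        this.cassandraTemplate = cassandraTemplate;
    }

    @Override
    public void run(ApplicationArguments args) {
        cassandraTemplate.getCqlOperations().execute(
                "CREATE TABLE IF NOT EXISTS user (id uuid PRIMARY KEY, name text)");
    }
}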
Here is the behaviour I noticed. This is not only true for this particular key in application.yaml.
When you don't create any bean extending AbstractCassandraConfiguration, Spring Data will read every key matching spring.data.* in application.yaml, including the schema-action you provided (by CONVENTION). I don't see any issue with the file you provided; as a matter of fact I have a working sample here.
When you create a bean extending AbstractCassandraConfiguration, it is now your job to explicitly implement the values you want, so please add the following in your class. You will also need to explicitly provide the @EnableCassandraRepositories annotation.
@Value("${spring.data.cassandra.schema-action}")
private String schemaAction;

@Override
public SchemaAction getSchemaAction() {
    return SchemaAction.valueOf(schemaAction);
}
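Put together, a complete configuration class might look like this (a sketch; the package and class names are placeholders):

import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.cassandra.config.AbstractCassandraConfiguration;
import org.springframework.data.cassandra.config.SchemaAction;
import org.springframework.data.cassandra.repository.config.EnableCassandraRepositories;

// Sketch: once you extend AbstractCassandraConfiguration, the keyspace and
// schema action must be supplied explicitly.
@Configuration
@EnableCassandraRepositories(basePackages = "com.example.repository") // placeholder package
public class CassandraConfig extends AbstractCassandraConfiguration {

    @Value("${spring.data.cassandra.keyspace-name}")
    private String keyspace;

    @Value("${spring.data.cassandra.schema-action}")
    private String schemaAction;

    @Override
    protected String getKeyspaceName() {
        return keyspace;
    }

    @Override
    public SchemaAction getSchemaAction() {
        // The yaml value CREATE_IF_NOT_EXISTS matches the enum constant name.
        return SchemaAction.valueOf(schemaAction);
    }
}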
On top of this, I would advise NOT USING IT AT ALL. Spring Data works like a charm, but here are my concerns:
Creating a table is not only a matter of matching the data model. What about the compaction strategy for your use case, TTLs, or any other metadata?
We assume you know how to build a primary key properly, with partition keys and clustering columns, but what if you need to store the exact same object in two tables because you have two different queries on it? (Remember: if you need ALLOW FILTERING anywhere in your application => your data model is probably wrong.)
Here is a snippet of my entity class
@Entity
public class User {

    @Id
    @GeneratedValue
    private long id;

    private String firstName;
    private String lastName;
}
When using Spring Boot + Hibernate, Spring Boot sets up the schema automatically, including sequences like the one below:
Hibernate: create sequence hibernate_sequence start with 1 increment by 1
But I am using Flyway 5.0.7 to set up my schema, and in this case I get the error below, which means the sequence is not being created:
Sequence "HIBERNATE_SEQUENCE" not found; SQL statement
I was able to fix this by creating the sequence with a Flyway script like the one below:
create sequence HIBERNATE_SEQUENCE start with 1001;
But now this one sequence is used to generate the IDs for all entities, which I do not want. I want each entity to have its own sequence.
Is it possible to create the sequences with Hibernate when using Flyway? Otherwise it is not practical to manually create sequences for all entities, which can number in the hundreds.
Is there any alternative approach to handle this?
Flyway is a DB migration tool, and it does not know of any DDL/DML changes unless you tell it so (via new scripts in the locations property).
If Hibernate handles some of these changes (the sequences in your case), Flyway won't know about them, and your application will use whatever sequence Flyway already knows about.
The normal thing to do is to let Flyway know of your changes, which includes a new sequence for a new entity, just as you would for the entity's schema itself. My personal advice is to manage all your schema changes in one place: if you are using Flyway, let it be in charge of all of them.
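For example (a sketch; the sequence and generator names are hypothetical), each entity can point at its own sequence, created by a matching Flyway migration:

import javax.persistence.*;

// Flyway migration, e.g. V2__create_user_seq.sql (illustrative):
//   create sequence user_seq start with 1 increment by 50;

@Entity
public class User {

    @Id
    @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "user_seq_gen")
    @SequenceGenerator(name = "user_seq_gen", sequenceName = "user_seq", allocationSize = 50)
    private long id;

    // ... other fields
}

Note that allocationSize should match the sequence's increment, otherwise Hibernate and the database will disagree about which IDs are free.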
I am new to Spring. My application, developed with Spring Roo, has a cron job that every day downloads some files and updates a database.
The update is done, after downloading and parsing the files, using merge().
An entity class Dataset has a list called resources; after the download I do:
dataset.setResources(resources);
dataset.merge();
and dataset.merge() does the following:
@Transactional
public Dataset Dataset.merge() {
    if (this.entityManager == null) this.entityManager = entityManager();
    Dataset merged = this.entityManager.merge(this);
    this.entityManager.flush();
    return merged;
}
I expected that by doing dataset.setResources(resources); I would overwrite the field resources, and so the database entry would be overwritten as well.
But I get double entries in the database: every resource appears twice, with different (incremental) IDs.
How can I get my application to do updates instead of inserts? A naive solution would be to manually delete the old resources and then call merge(); is this the way, or is there a smarter solution?
This situation occurs when you use Hibernate as the persistence engine and your entities have a version field.
Normally the ID field is all we need for merging a detached object with its persistent state in the database, but Hibernate takes the version field into account, and if you don't set it (i.e. it is null), Hibernate discards the value of the ID field and creates a new object with a new ID.
To find out whether you are affected by this strange feature of Hibernate, set a value in the version field; if an exception is thrown, you've got it. In that case the best way to solve it is to make sure the parsed data contains the right value for the version field. Other ways are to disable version checking (see the Hibernate reference guide for details) or to load the persistent state before merging.
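The last option could look roughly like this (a sketch; a getId() accessor on Dataset is assumed):

// Load the persistent state first so Hibernate sees the current ID and
// version, then modify the managed instance instead of merging a detached one.
Dataset persistent = entityManager.find(Dataset.class, dataset.getId()); // getId() assumed
if (persistent != null) {
    persistent.setResources(resources); // results in an UPDATE, not a new INSERT
} else {
    entityManager.persist(dataset);     // first run: nothing to update yet
}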
I am in the process of upgrading Pentaho Reporting from 3.6.1 to 3.8.0 in my web application. When I updated all the necessary jar files, I got a compilation error in one of my classes, which implements ConnectionProvider. The following is my class:
public class DataSourceConnectionProvider implements ConnectionProvider
{
....
}
The error says that my class should implement the getConnectionHash() method, as it is defined in the ConnectionProvider interface. But it was not there in version 3.6.1, so I am a bit confused as to why they have added it and how to implement it in my class.
This method returns an object that is comparable and hashable and is used during the caching of datasources. It allows us to build a key to detect changes in the connection definition while many reports run within the same JVM.
The cache implementation itself does not know any of the details of the various datasources, and the "ConnectionHash" allows us to keep result-sets separate.
My basic implementation of it simply returns an ArrayList with all relevant connection properties added to it.
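Such an implementation could look roughly like this (a sketch; url, username and schema are hypothetical fields holding this provider's connection settings):

public Object getConnectionHash() {
    // Collect every property that changes which data the connection sees.
    // ArrayList is Serializable and has value-based equals()/hashCode().
    final ArrayList<Object> hash = new ArrayList<Object>();
    hash.add(getClass().getName()); // keep different provider types apart
    hash.add(url);                  // hypothetical connection fields
    hash.add(username);
    hash.add(schema);
    return hash;
}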
A simple example of how and where it is needed:
Imagine you have a JDBC datasource that connects to a database where several schemas with the same table structure exist, for example in a multi-tenant environment where each tenant has his own schema.
With a query like "SELECT * FROM CUSTOMERS WHERE COUNTRY = ${country-parameter}", the datasource will return different datasets based on which tenant performs the query. The combination of "connection-hash", "query-name" and "parameters used in the query" now forms a unique identifier that we can use to store and later look up the result-set from the cache.