I'm trying to configure Cassandra in my application by extending AbstractCassandraConfiguration.
I'm using Spring's CassandraRepository for DB access and classes with the o.s.d.cassandra.core.mapping.Table annotation for defining my tables.
I've also configured the following property, along with the other required properties for the cluster:
spring:
  data:
    cassandra:
      schema-action: CREATE_IF_NOT_EXISTS
But no tables get created in Cassandra upon application startup; the schemaAction from CassandraProperties is not being applied.
If I programmatically create the tables upon startup in my ApplicationRunner using cassandraTemplate.getCqlOperations().execute(...), then everything works fine: I can use my repository's find() and save() methods, which proves that my @Table classes are correctly written.
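For reference, that workaround looks roughly like this (a minimal sketch; the orders table is a hypothetical example):

import org.springframework.boot.ApplicationArguments;
import org.springframework.boot.ApplicationRunner;
import org.springframework.data.cassandra.core.CassandraTemplate;
import org.springframework.stereotype.Component;

// Minimal sketch of the programmatic workaround described above.
@Component
public class SchemaInitializer implements ApplicationRunner {

    private final CassandraTemplate cassandraTemplate;

    public SchemaInitializer(CassandraTemplate cassandraTemplate) {
        this.cassandraTemplate = cassandraTemplate;
    }

    @Override
    public void run(ApplicationArguments args) {
        // raw CQL executed before the repositories are first used
        cassandraTemplate.getCqlOperations().execute(
                "CREATE TABLE IF NOT EXISTS orders (id uuid PRIMARY KEY, status text)");
    }
}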
Here is the behaviour I noticed; it is not specific to this particular key in application.yaml.
When you don't create any bean extending AbstractCassandraConfiguration, Spring Data will read every key matching spring.data.* in application.yaml (by convention), including the schema-action you provided. I don't see any issue with the file you provided; as a matter of fact, I have a working sample here.
When you create a bean extending AbstractCassandraConfiguration, it becomes your job to implement explicitly the values you want, so please add the following to your class. You will also need to explicitly provide the @EnableCassandraRepositories annotation.
#Value("${spring.data.cassandra.schema-action}")
private String schemaAction;
#Override
public SchemaAction getSchemaAction() {
return SchemaAction.valueOf(schemaAction);
}
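Put together, the configuration class might look like this (a sketch; the package and keyspace names are placeholders):

import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.cassandra.config.AbstractCassandraConfiguration;
import org.springframework.data.cassandra.config.SchemaAction;
import org.springframework.data.cassandra.repository.config.EnableCassandraRepositories;

@Configuration
@EnableCassandraRepositories(basePackages = "com.example.repository") // placeholder package
public class CassandraConfig extends AbstractCassandraConfiguration {

    @Value("${spring.data.cassandra.schema-action}")
    private String schemaAction;

    @Override
    protected String getKeyspaceName() {
        return "my_keyspace"; // placeholder keyspace
    }

    @Override
    public SchemaAction getSchemaAction() {
        return SchemaAction.valueOf(schemaAction);
    }
}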
On top of this, I would advise NOT USING IT AT ALL. Spring Data works like a charm, but here are my concerns:
Creating a table is not only a matter of matching the data model: what about the compaction strategy suited to your use case, the TTL, or any other metadata?
We assume you know how to build a primary key properly with partition keys and clustering columns, but what if you need to store the exact same object in 2 tables because you have 2 different queries on it? (Remember: if you need ALLOW FILTERING anywhere in your application, your data model is probably wrong.)
Related
To improve our query performance and hence the API response times, we created views in MongoDB by aggregating the data. However, when we try to use a view through the Spring Mongo template, we run into several issues, such as the view not being supported:
Caused by: com.mongodb.MongoCommandException: Command failed with error 166 (CommandNotSupportedOnView): 'Namespace aiops.hostView is a view, not a collection' on server 192.168.20.166:30011. The full response is {"ok": 0.0, "errmsg": "Namespace aiops.hostView is a view, not a collection", "code": 166, "codeName": "CommandNotSupportedOnView"}
at com.mongodb.internal.connection.ProtocolHelper.getCommandFailureException(ProtocolHelper.java:175)
at com.mongodb.internal.connection.InternalStreamConnection.receiveCommandMessageResponse(InternalStreamConnection.java:303)
at com.mongodb.internal.connection.InternalStreamConnection.sendAndReceive(InternalStreamConnection.java:259)
Does Spring support MongoDB views out of the box? Any example would greatly help!
Thank you in advance
This is somewhat old, but I will leave my solution in case someone stumbles on this, like I did.
As far as I know, you can't use the usual Spring Data approach: you can't annotate an Entity with the @Document(value="YOUR_VIEW_NAME") annotation and create a related repository extending the MongoRepository class.
But you can query the view directly from the MongoTemplate, passing along the name of your view.
Let's say you have a User entity defined like this:
public class User {

    @Id
    String id;

    String name;
}
Then you can map it to the documents of a view named "userview", and query it like this:
Query query = new Query();
query.addCriteria(Criteria.where("name").is("Bob"));
// template is an object of class MongoTemplate that you can inject or autowire
List<User> users = template.find(query, User.class, "userview");
I recently faced the same problem. What @gere said is not entirely correct: you can use a view with Spring Data repositories, but there are some limitations, related to the limitations of the view itself.
In my case, the problem was that I was using the CompositeIndex and Indexed annotations on the document that represents a view. A MongoDB view uses the underlying collection's indexes, so it has no command to create an index for itself, and when Spring Data tried to create an index on the view, MongoDB threw an exception. After removing those annotations, I was able to use the view via my repository. I would also suggest using a read-only repository, to avoid exposing save or delete methods, as I think that is a view limitation too; a sketch of such a repository follows below.
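A read-only base repository might look like this (a sketch; HostView stands in for your own view document class):

import java.util.List;
import java.util.Optional;
import org.springframework.data.repository.NoRepositoryBean;
import org.springframework.data.repository.Repository;

// Exposes only read methods, so save/delete can never be called on the view.
@NoRepositoryBean
public interface ReadOnlyRepository<T, ID> extends Repository<T, ID> {
    Optional<T> findById(ID id);
    List<T> findAll();
}

// In its own file: a hypothetical repository for a document mapped to the view.
public interface HostViewRepository extends ReadOnlyRepository<HostView, String> {
}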
In your case, you need to find which operation you were performing that is valid on a normal collection but not supported on a view.
I want to make the userName property of the User node unique.
I used the code below, but it doesn't create a unique constraint in the Neo4j database.
#Property(name = "name")
#Index(unique = true)
private String usreName;
FYI, I'm using Neo4j Server version 3.3.6 (community) with Spring Boot 2.
But if I create the constraint in the Neo4j Browser myself, it works:
CREATE CONSTRAINT ON (user:User) ASSERT user.userName IS UNIQUE
Is there a way to force Spring Data Neo4j to create unique constraints, without creating them myself in the database?
You need to configure the auto index manager if you want the application code to create the constraints.
You can find the best fitting option in the documentation:
https://docs.spring.io/spring-data/neo4j/docs/current/reference/html/#reference:indexing:creation
Just a note on this topic: think of auto index creation like Hibernate's DDL support. It is a helper at development time. You should not use assert or update in production environments, only validate.
Reason

"In Spring Data Neo4j 4, index management concerns were removed from the mapping framework entirely."

(from Index Management in Spring Data Neo4j)
Solution
@Autowired
private SessionFactory sessionFactory;

@PostConstruct
public void createIndexesAndConstraints() {
    Session session = sessionFactory.openSession();
    session.query("CREATE INDEX ON :User(userName)", Collections.emptyMap());
}
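Since the question asks for a unique constraint rather than a plain index, the same Session approach should work with the constraint statement from the Neo4j Browser (a sketch; depending on the server version, re-creating an existing constraint on a later startup may throw, hence the guard):

try {
    session.query("CREATE CONSTRAINT ON (user:User) ASSERT user.userName IS UNIQUE",
            Collections.emptyMap());
} catch (Exception e) {
    // constraint already exists from a previous run; safe to ignore here
}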
You can configure the mode the auto index manager works in through application.properties:
spring.data.neo4j.auto-index=validate
# or: spring.data.neo4j.auto-index=update
# or: spring.data.neo4j.auto-index=assert
The default mode is none. Apart from that, what @meistermeier says applies.
Also, Neo4jOperations was deprecated in SDN 4 and has been removed in SDN 5. Use Session instead for operations "near" the database.
Thank you @ThirstForKnowledg for your answer. But I have three other questions:
1- I'm using Spring Boot 2, and I cannot see Neo4jOperations in my classpath to import it.
2- Should I put this in my Entity node or in another bean?
3- What about running my application two or more times? I think it would throw an exception the second time onward.
Here is the situation: I want to fetch an entity from the database and map it to a new view domain model that has more or fewer properties; if the view model has more properties, the extra properties should be filled with default values. I want a mapping technique in JPA to accomplish this, similar to the MyBatis mapping mechanism.
How can I do this?
Just load the entity, copy it over into the new entity, fill the unset properties with the desired default values, and store it using JPA (possibly via Spring Data JPA).
For copying the data from one entity to another, you might want to look into Dozer or similar libraries; a plain-Java sketch follows.
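A hand-rolled version of that copy-over might look like this (Source and Target are hypothetical stand-ins for the entity and the view model):

// Hypothetical entity loaded via JPA.
class Source {
    Long id;
    String name;
}

// Hypothetical view model with one extra property.
class Target {
    Long id;
    String name;
    String category;
}

class SourceToTargetMapper {
    Target toTarget(Source source) {
        Target target = new Target();
        target.id = source.id;
        target.name = source.name;
        target.category = "UNKNOWN"; // default for the property missing on Source
        return target;
    }
}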
You could also misuse Spring Data's projection support to query the original entity, but return it as the target entity with methods similar to the following:
interface SourceRepository extends CrudRepository<Source, Long> {

    List<Target> findTargetBy();
}
The resulting Target entities could then be stored using another repository (you might have to set the version and id properties to null to make it clear to the framework that these are new entities).
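Usage might then look like this (a sketch; the setters and the second repository are hypothetical):

List<Target> targets = sourceRepository.findTargetBy();
for (Target target : targets) {
    target.setId(null);      // hypothetical setter: mark the entity as new
    target.setVersion(null); // hypothetical setter: reset optimistic locking
    targetRepository.save(target);
}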
I have a requirement to set a date_updated value in my database for each row when that row is updated. Let's call the entity that I'm working with Order, which has a corresponding orders table in the database.
I've added the date_updated column to the orders table. So far, so good.
The @Entity Order object that I'm working with is provided by a third party. I do not have the ability to modify the source code to add a field called dateUpdated. I have no requirement to map this value to the object anyway - the value is going to be used for business intelligence purposes only and does not need to be represented in the Java entity object.
My problem is this: I want to update the date_updated column in the database to the current time each time an Order object (and its corresponding database table row) is modified.
Constraints:
We are using Oracle, Spring, JPA and Hibernate
I cannot use Oracle triggers to update the value. We are using a database replication technology that prevents us from using triggers.
My approach thus far has been to use a JPA EntityListener, defined in XML, similar to this:
<entity-mappings xmlns="....">
    <entity class="com.theirs.OrderImpl">
        <entity-listeners>
            <entity-listener class="com.mine.listener.OrderJPAListener" />
        </entity-listeners>
    </entity>
</entity-mappings>
My listener class looks like this:
public class OrderJPAListener {

    @PostPersist
    @PostUpdate
    public void recordDateUpdated(Order order) {
        // do the update here
    }
}
The problem I'm having is injecting any sort of persistence support (or anything at all, really) into my listener. Because JPA instantiates the listener itself, I do not have access to any Spring beans in my listener class.
How do I go about injecting an EntityManager (or any Spring bean) into my listener class so that I can execute a named query to update the date_updated field?
"How do I go about injecting an EntityManager (or any Spring bean) into my listener class so that I can execute a named query to update the date_updated field?"
As noted above, JPA 2.1 supports injecting managed beans into an Entity Listener via CDI. Whether or not Spring supports this, I am not sure. The following post proposes a Spring-specific solution:
https://guylabs.ch/2014/02/22/autowiring-pring-beans-in-hibernate-jpa-entity-listeners/
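The post linked above works along these lines; one minimal variant is a static holder that gives the JPA-instantiated listener access to Spring beans (a sketch; BeanLocator and OrderAuditService are illustrative names, not from the post):

import org.springframework.context.ApplicationContext;
import org.springframework.context.ApplicationContextAware;
import org.springframework.stereotype.Component;

// Captures the ApplicationContext so objects created outside Spring can reach beans.
@Component
public class BeanLocator implements ApplicationContextAware {

    private static ApplicationContext context;

    @Override
    public void setApplicationContext(ApplicationContext applicationContext) {
        context = applicationContext;
    }

    public static <T> T getBean(Class<T> type) {
        return context.getBean(type);
    }
}

The listener then pulls what it needs at call time:

import javax.persistence.PostPersist;
import javax.persistence.PostUpdate;

public class OrderJPAListener {

    @PostPersist
    @PostUpdate
    public void recordDateUpdated(Order order) {
        // OrderAuditService is a hypothetical Spring bean wrapping the named query
        BeanLocator.getBean(OrderAuditService.class).recordDateUpdated(order);
    }
}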
A possible alternative approach, however, would be to override the SQL generated by Hibernate on an update, which is possible as detailed below:
https://docs.jboss.org/hibernate/orm/3.6/reference/en-US/html/querysql.html#querysql-cud
This would be straightforward if you had the source, as you would just need to add the @SQLUpdate annotation and tag on the additional date_updated column. As you don't, however, you would need to look at redefining the metadata for that entity via an XML configuration file and defining the sql-update statement as outlined above:
https://docs.jboss.org/hibernate/stable/annotations/reference/en/html/xml-overriding.html#xml-overriding-principles-entity
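For illustration, the annotation route (usable only when the entity source is editable) would look roughly like this; the entity class and column list here are abbreviated and hypothetical, and Hibernate expects every mapped column to appear in the custom statement in mapping order:

import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Table;
import org.hibernate.annotations.SQLUpdate;

@Entity
@Table(name = "orders")
@SQLUpdate(sql = "UPDATE orders SET status = ?, date_updated = SYSTIMESTAMP WHERE id = ?")
public class AuditedOrder {

    @Id
    private Long id;

    private String status;
}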
Since JPA 2.1, Entity Listeners are CDI-managed. Have you tried using the @PersistenceUnit annotation? Are you using the JTA transaction type?
Otherwise, you could use Persistence.createEntityManagerFactory within the listener class to retrieve the persistence context.
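That fallback might look like this (a sketch; the persistence-unit name, the named query, and Order.getId() are hypothetical, and the factory is cached because creating it is expensive):

import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;
import javax.persistence.PostPersist;
import javax.persistence.PostUpdate;

public class OrderJPAListener {

    // created once; Persistence.createEntityManagerFactory is expensive
    private static final EntityManagerFactory EMF =
            Persistence.createEntityManagerFactory("orders-unit"); // hypothetical unit

    @PostPersist
    @PostUpdate
    public void recordDateUpdated(Order order) {
        EntityManager em = EMF.createEntityManager();
        try {
            em.getTransaction().begin();
            em.createNamedQuery("Order.touchDateUpdated") // hypothetical named query
              .setParameter("id", order.getId())
              .executeUpdate();
            em.getTransaction().commit();
        } finally {
            em.close();
        }
    }
}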
I am in the process of upgrading Pentaho Reporting from 3.6.1 to 3.8.0 in my web application. When I updated all the necessary jar files, I got a compilation error in one of my classes, which implements ConnectionProvider. The following is my class:
public class DataSourceConnectionProvider implements ConnectionProvider
{
....
}
The error says that my class should implement the getConnectionHash() method, as it is defined in the ConnectionProvider interface. But it was not there in version 3.6.1, so I am a bit confused about why they have added it and how to implement it in my class.
This method returns an object that is comparable and hashable and is used during the caching of datasources. It allows us to build a key to detect changes in the connection definition while many reports run within the same JVM.
The cache implementation itself does not know any of the details of the various datasources, and the "ConnectionHash" allows us to keep result-sets separate.
My basic implementation of it simply returns an ArrayList with all the relevant connection properties added to it; a sketch follows below.
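In code, that idea might look like this (a sketch; the two fields are hypothetical, and the rest of the ConnectionProvider interface is omitted):

import java.util.ArrayList;

public class DataSourceConnectionProvider implements ConnectionProvider {

    private String jndiName;   // hypothetical: identifies the datasource
    private String schemaName; // hypothetical: selects the tenant schema

    // The returned object only needs sensible equals()/hashCode() semantics and
    // must change whenever the connection definition changes.
    public Object getConnectionHash() {
        ArrayList<Object> hash = new ArrayList<Object>();
        hash.add(getClass().getName());
        hash.add(jndiName);
        hash.add(schemaName);
        return hash;
    }

    // ... getConnection(...) and the remaining interface methods omitted
}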
A simple example of how and where it is needed:
Imagine you have a JDBC datasource that connects to a database where several schemas with the same table structures exist, for example in a multi-tenant environment where each tenant has its own schema.
With a query like "SELECT * FROM CUSTOMERS WHERE COUNTRY = ${country-parameter}", the datasource will return different datasets based on which tenant performs the query. The combination of connection hash, query name, and the parameters used in the query now forms a unique identifier that we can use to store, and later look up, the result-set in the cache.