Single entity for multiple repositories using Spring Data JPA

I have some code that should read data from an RDBMS and insert it into Elasticsearch. I'd like to use the same entity class for both repositories. Is it possible, and more generally, is it a best practice? I'm using Spring Data JPA, Hibernate and Spring Boot.
My entity class, called Contact, has at least two @OneToMany associations:
@Entity
public class Contact {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    @Column(name = "CONTACT_ID")
    private Long contactId;

    @Column(name = "CONTACT_IDENTIFIER")
    private String contactIdentifier;

    // Some other properties

    @OneToMany(mappedBy = "contact", cascade = CascadeType.ALL, fetch = FetchType.LAZY)
    @JsonManagedReference
    private List<ContactServiceEvent> listContactServiceEvent;

    @OneToMany(mappedBy = "contact", cascade = CascadeType.ALL, fetch = FetchType.LAZY)
    @JsonManagedReference
    private List<ContactVoiceServiceEvent> listContactVoiceServiceEvent;

    // getters and setters
}
My repository interfaces are as follows:
@Repository
public interface ContactRepository extends PagingAndSortingRepository<Contact, Long>, ContactRepositoryCustom {
}

@Repository
public interface ContactDocRepository extends ElasticsearchRepository<Contact, Long> {
}
Please let me know how I should do this, as I couldn't find any clear answer by googling.

With plain Hibernate you can use the same entity for different data stores: as far as I know, you create a separate session factory for each store, each mapped to its own datasource. Then, by switching session factories, you can read the same entity from different stores with the same code. I am not aware of how Elasticsearch works in this respect, but I think this will help you proceed.
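To illustrate the session-factory idea, here is a minimal sketch assuming plain Hibernate bootstrapping; the cfg.xml file names are hypothetical, and each config file would point at its own datasource while both factories share the same entity class:
import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;

public class SessionFactories {

    public static SessionFactory primaryFactory() {
        return new Configuration()
                .configure("hibernate-primary.cfg.xml") // hypothetical config for datasource 1
                .addAnnotatedClass(Contact.class)
                .buildSessionFactory();
    }

    public static SessionFactory secondaryFactory() {
        return new Configuration()
                .configure("hibernate-secondary.cfg.xml") // hypothetical config for datasource 2
                .addAnnotatedClass(Contact.class)
                .buildSessionFactory();
    }
}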

Use Hibernate Search.
It was created for this purpose by the Hibernate team and can now synchronize with Elasticsearch as well.
Not only do you get to use the same entities, you also won't need to bother with:
serialization concerns and how to map your objects to JSON (you simply map properties to Elasticsearch fields);
how to connect to the Elasticsearch server (it is taken care of, you only have to configure it);
making sure all changes are written to both stores (writes are fully automatic).
On top of that, it is highly optimised for this purpose.
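For a rough idea of what this looks like, here is a hedged sketch assuming Hibernate Search 6 with the Elasticsearch backend on a javax.persistence-era stack (the choice of indexed field is illustrative):
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;

import org.hibernate.search.mapper.pojo.mapping.definition.annotation.FullTextField;
import org.hibernate.search.mapper.pojo.mapping.definition.annotation.Indexed;

@Entity
@Indexed // Hibernate Search indexes this entity into Elasticsearch and keeps it in sync
public class Contact {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long contactId;

    @FullTextField // mapped to an analyzed full-text field in the Elasticsearch index
    private String contactIdentifier;

    // getters and setters omitted
}
With this in place, writes go through the normal JPA repository and the index is updated automatically; the dedicated Elasticsearch repository becomes unnecessary.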

Related

How to handle mongodb relation modeling with webflux?

I have an old-ish (3 years) service I need to maintain, having just joined the team. It is a spring-webflux app with MongoDB (4). All models are defined with Lombok, and where there are relations between them the @DBRef annotation was used. It works fine, but now I have to bump up Spring (and the rest of the dependencies) and I just realized @DBRef is no longer supported.
I somewhat understand the decision, but I couldn't find any straightforward alternative other than doing all the cascade operations myself.
So, is there any easier way to approach this?
Thanks.
@DBRef was replaced by @DocumentReference some time ago. From the documentation:
@DocumentReference: Applied at the field to indicate it is to be stored as a pointer to another document. This can be a single value (the id by default), or a Document provided via a converter.
This very simple example shows how it works:
public class Account {
    private String id;
    private Float total;
}

public class Person {
    private String id;

    @DocumentReference
    private List<Account> accounts;
}
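One caveat worth noting, as a hedged sketch (the repository interfaces and the setter are hypothetical): @DocumentReference stores a pointer only and, like @DBRef, does not cascade saves, so referenced documents still need to be persisted explicitly.
import java.util.List;
import org.springframework.data.mongodb.repository.MongoRepository;

// Hypothetical repositories for the example classes above.
interface AccountRepository extends MongoRepository<Account, String> {}
interface PersonRepository extends MongoRepository<Person, String> {}

class PersonLinker {
    // Persist the referenced Account first, then the Person;
    // only the account's id is stored inside the person document.
    Person linkAccount(AccountRepository accounts, PersonRepository people) {
        Account account = accounts.save(new Account());
        Person person = new Person();
        person.setAccounts(List.of(account)); // assumes a setter exists
        return people.save(person);
    }
}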
For more details, have a look at the official documentation:
https://docs.spring.io/spring-data/mongodb/docs/current/reference/html/#mapping-usage-annotations

QueryException when using Spring Data Rest with EclipseLink on Multi-Tenant System

I am using Spring Data REST and EclipseLink to create a multi-tenant, single-table application.
But I am not able to create a repository that I can call with custom query parameters.
My Kid class:
@Entity
@Table(name = "kid")
@Multitenant
public class Kid {

    @Id
    private Long id;

    @Column(name = "tenant_id")
    private String tenantId;

    @Column(name = "mother_id")
    private Long motherId;

    // more attributes, constructor, getters and setters
}
My KidRepository:
@RepositoryRestResource
public interface KidRepository extends PagingAndSortingRepository<Kid, Long>, QuerydslPredicateExecutor<Kid> {}
When I call localhost/kids I get the following exception:
Exception [EclipseLink-6174] (Eclipse Persistence Services - 2.7.4.v20190115-ad5b7c6b2a):
org.eclipse.persistence.exceptions.QueryException
Exception Description: No value was provided for the session property [eclipselink.tenant-id].
This exception is possible when using additional criteria or tenant discriminator columns without specifying the associated contextual property.
These properties must be set through EntityManager, EntityManagerFactory or persistence unit properties.
If using native EclipseLink, these properties should be set directly on the session.
When I remove the @Multitenant annotation from my entity, everything works fine, so it definitely has something to do with EclipseLink.
When I don't extend QuerydslPredicateExecutor it works too, but then I have to implement every findBy* method myself. And even then it breaks again when I change my KidRepository to:
@RepositoryRestResource
public interface KidRepository extends PagingAndSortingRepository<Kid, Long> {
    Collection<Kid> findByMotherId(@Param("motherId") Long motherId);
}
When I now call localhost/kids/search/findByMotherId?motherId=1 I get the same exception as above.
I used this tutorial to set up EclipseLink with JPA: https://blog.marcnuri.com/spring-data-jpa-eclipselink-configuring-spring-boot-to-use-eclipselink-as-the-jpa-provider/, meaning the PlatformTransactionManager, createJpaVendorAdapter and getVendorProperties are overridden.
The tenant-id comes with a JWT, and everything works fine as long as I don't use QuerydslPredicateExecutor, which is mandatory for the use case.
It turns out that the wrong JpaTransactionManager is used when I rely on the QuerydslPredicateExecutor. I couldn't find out which one is created, but with multiple breakpoints inside the EclipseLink framework code, none of them were hit. This is true both when using the QuerydslPredicateExecutor and when using the custom findBy method.
I have googled a lot and tried to override some of the basic EclipseLink methods, but none of that worked. I am running out of options.
Does anyone have any idea how to fix or work around this?
I was looking for a solution to the same issue; what finally helped was adding Spring's @Transactional annotation to either the repository or any place from which the custom query is called. (It even works with javax.transaction.Transactional.) We had the @Transactional annotation on most of our services, so the issue was not obvious and its occurrence seemed rather accidental.
A more detailed explanation of using @Transactional on repositories is here: How to use @Transactional with Spring Data?.
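A sketch of the first variant, putting the annotation directly on the repository from the question (Spring's annotation is shown; per the answer above, the javax one works as well):
import java.util.Collection;

import org.springframework.data.repository.PagingAndSortingRepository;
import org.springframework.data.repository.query.Param;
import org.springframework.data.rest.core.annotation.RepositoryRestResource;
import org.springframework.transaction.annotation.Transactional;

// @Transactional is intended to make the query run inside a transaction whose
// EntityManager carries the eclipselink.tenant-id session property.
@Transactional
@RepositoryRestResource
public interface KidRepository extends PagingAndSortingRepository<Kid, Long> {
    Collection<Kid> findByMotherId(@Param("motherId") Long motherId);
}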

Can Spring Data MongoDB be configured to support a different database for each repository?

I've been struggling for the past week to successfully integrate Spring Data MongoDB into our application. We use the fairly common practice of having a separate database for each collection we rely on; for instance, the TenantConfiguration database contains only the TenantConfigurations collection.
I've read through the documentation several times and trawled through the code for a solution but have turned up nothing. Surely such a widely adopted project has some solution for this issue? My current attempt looks like this:
@Configuration
@EnableMongoRepositories(basePackages = "com.whatever.service.repository",
        basePackageClasses = TenantConfigurationRepository.class,
        mongoTemplateRef = "tenantConfigurationTemplate")
public class TenantConfigurationRepositoryConfig {

    @Value("${mongo.hosts}")
    private List<String> mongoHosts;

    @Bean
    public MongoTemplate tenantConfigurationTemplate() throws Exception {
        final List<ServerAddress> serverAddresses = new ArrayList<>();
        for (String host : mongoHosts) {
            serverAddresses.add(new ServerAddress(host, 27017));
        }
        final MongoClientOptions clientOptions = new MongoClientOptions.Builder()
                .connectTimeout(25000)
                .readPreference(ReadPreference.primaryPreferred())
                .build();
        final MongoClient client = new MongoClient(serverAddresses, clientOptions);
        return new MongoTemplate(client, "TenantConfiguration");
    }
}
Here is one of the other individual repository configurations:
@Configuration
@EnableMongoRepositories(basePackages = "com.whatever.service.repository",
        basePackageClasses = RegisteredCardRepository.class,
        mongoTemplateRef = "registeredCardTemplate")
public class RegisteredCardRepositoryConfig {

    @Value("${mongo.hosts}")
    private List<String> mongoHosts;

    @Bean
    public MongoTemplate registeredCardTemplate() throws Exception {
        final List<ServerAddress> serverAddresses = new ArrayList<>();
        for (String host : mongoHosts) {
            serverAddresses.add(new ServerAddress(host, 27017));
        }
        final MongoClientOptions clientOptions = new MongoClientOptions.Builder()
                .connectTimeout(25000)
                .readPreference(ReadPreference.primaryPreferred())
                .build();
        final MongoClient client = new MongoClient(serverAddresses, clientOptions);
        return new MongoTemplate(client, "RegisteredCard");
    }
}
Now here is the actual repository definition for the RegisteredCard repository:
@Repository
public interface RegisteredCardRepository extends MongoRepository<RegisteredCard, Guid>,
        QueryDslPredicateExecutor<RegisteredCard> { }
This all makes perfect sense to me: the individual configurations uniquely identify the specific repository interfaces they configure and the specific template bean to use with that repository, via the mongoTemplateRef parameter of the annotation. At least, this is how the documentation seems to imply it should work.
In reality, when I start up the application, the RegisteredCard repository resolves to a MongoDB repository instance with an associated MongoDbFactory that is bound to the TenantConfiguration database. In fact, every single repository receives the same, incorrect MongoOperations object. Despite each repository having its own unique configuration, it appears that whatever database is accessed first remains the target database for every repository.
Are there any solutions available to this problem?
It's taken me almost a week, but I've actually found a passable solution to this issue. Here's a quick rundown of facts I've picked up while researching it:
@EnableMongoRepositories(basePackageClasses = Whatever.class) simply uses a qualified class name to indicate which package it should scan for your repository definitions. This is entirely equivalent to @EnableMongoRepositories(basePackages = "com.mypackage.whatevers") if Whatever.class resides in that package.
@EnableMongoRepositories is not repeatable but can be used to annotate several classes. This has been covered in other SO conversations but bears repeating here. You will need to define several repository configuration classes, one for each database you intend to interact with.
Each of your individual repository configurations must specify its own MongoTemplate instance in the @EnableMongoRepositories annotation. You can get away with providing only a single Mongo bean, but the MongoTemplate relies on a specific MongoMappingContext.
The @EnableMongoRepositories annotation helps define your mapping context, which understands the structure of your data models and how to serialize them. It also understands the @Document and @Field annotations and does the heavy lifting of persisting your objects. The MongoTemplate instances are where you specify which database you want to interact with. So by providing the @EnableMongoRepositories annotation with both a basePackages attribute and a mongoTemplateRef attribute, you can tell Spring Data Mongo to "take these models and persist them in this specific database".
The unfortunate requirement of this solution is that you must organize your data models into separate packages depending on which database they belong in. If, like me, you are using a Mongo database structure that allocates a single collection to each database (fairly common for heavily accessed collections), this means that each of your data models must reside in its own package. Each of these packages must be pointed to by an @EnableMongoRepositories annotation that also carries a mongoTemplateRef attribute naming a unique MongoTemplate bean.
I hope this helps someone avoid the trouble I've gone through trying to accomplish what should be a fairly run-of-the-mill Mongo integration.
PS: Abandon all hope, those who seek to combine auditing with this configuration.
I know this is old, but for those who are looking for a short solution like me:
@Autowired
@Qualifier("registeredCardTemplate")
private MongoTemplate template;
The qualifier name is whatever you used as mongoTemplateRef = "XXX".
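Equivalently, as a hedged sketch with constructor injection (the service class is hypothetical):
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.data.mongodb.core.MongoTemplate;
import org.springframework.stereotype.Service;

@Service
public class RegisteredCardService {

    private final MongoTemplate template;

    // "registeredCardTemplate" matches the mongoTemplateRef / bean name from the config above
    public RegisteredCardService(@Qualifier("registeredCardTemplate") MongoTemplate template) {
        this.template = template;
    }
}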

Spring Data MongoDB: Specifying a hint on a Spring Data Repository find method

I am implementing a Spring Data repository and having it extend MongoRepository. I am looking for a way to specify a hint on my findBy methods so I can be in control; I have seen several cases where a non-optimal index was picked as the winning plan.
This is what my repository looks like right now:
public interface AccountRepository extends MongoRepository<Account, ObjectId> {

    @Meta(maxExcecutionTime = 60000L, comment = "Comment")
    List<Account> findByUserIdAndBrandId(Long userId, Long brandId);
}
I researched a bunch and found that JpaRepository from Spring Data supports the @QueryHint annotation, but I do not believe that annotation is supported for MongoDB. Is there a similar annotation I can put on top of my findBy method to specify the hint?
MongoTemplate allows specifying a hint; however, I have a ton of findBy methods and I would hate to add an implementation underneath just to specify a hint.
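For what it's worth, here is a hedged sketch of the MongoTemplate fallback the question mentions, written as a custom repository fragment so only the methods that need a hint are hand-written (the index name is an assumption):
import java.util.List;

import org.springframework.data.mongodb.core.MongoTemplate;
import org.springframework.data.mongodb.core.query.Criteria;
import org.springframework.data.mongodb.core.query.Query;

// Hypothetical custom fragment: one hand-written method carrying the hint,
// while the remaining findBy* methods stay derived on the interface.
public class AccountRepositoryImpl {

    private final MongoTemplate mongoTemplate;

    public AccountRepositoryImpl(MongoTemplate mongoTemplate) {
        this.mongoTemplate = mongoTemplate;
    }

    public List<Account> findByUserIdAndBrandIdWithHint(Long userId, Long brandId) {
        Query query = new Query(Criteria.where("userId").is(userId).and("brandId").is(brandId))
                .withHint("userId_1_brandId_1"); // assumed index name
        return mongoTemplate.find(query, Account.class);
    }
}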

Neo4j Spring data POC for social RESTful layer

Starting to work on a new project... a RESTful layer providing services for a social network platform.
Neo4j was my obvious choice for the main data store; I had the chance to work with Neo4j before, but without exploiting Spring Data's ability to map POJOs to nodes, which seems very convenient.
Goals:
The layer should provide support resembling the Facebook Graph API, which defines for each entity/object the related properties & connections that can be referenced from the URL. FB Graph API
If possible, I want to avoid transfer objects that would be serialized to/from domain entities, and instead use my domain POJOs as the JSON transferred to/from the client.
Examples:
HTTP GET /profile/{id}/?fields=...&connections=..., and the response will be a Profile object containing what was requested in the URL.
HTTP GET /profile/{id}/stories/?fields=..&connections=...&page=..&sort=..., and the response will be a list of Story objects according to the request.
Relevant Versions:
Spring Framework 3.1.2
Spring Data Neo4j 2.1.0.RC3
Spring Data Mongodb 1.1.0.RC1
AspectJ 1.6.12
Jackson 1.8.5
To make it simple, we have Profile and Story nodes and a Role relationship between them.
public abstract class GraphEntity {

    @GraphId
    protected Long id;
}
Profile Node
@NodeEntity
@Configurable
public class Profile extends GraphEntity {

    // Profile fields
    private String firstName;
    private String lastName;

    // Profile connections
    @RelatedTo(type = "FOLLOW", direction = Direction.OUTGOING)
    private Set<Profile> followThem;

    @RelatedTo(type = "BOOKMARK", direction = Direction.OUTGOING)
    private Set<Story> bookmarks;

    @Query("START profile=node({self}) match profile-[r:ROLE]->story where r.role = FOUNDER and story.status = PUBLIC")
    private Iterable<Story> published;
}
Story Node
@NodeEntity
@Configurable
public class Story extends GraphEntity {

    // Story fields
    private String title;
    private StoryStatusEnum status = StoryStatusEnum.PRIVATE;

    // Story connections
    @RelatedToVia(type = "ROLE", elementClass = Role.class, direction = Direction.INCOMING)
    private Set<Role> roles;
}
Role Relationship
@RelationshipEntity(type = "ROLE")
public class Role extends GraphEntity {

    @StartNode
    private Profile profile;

    @EndNode
    private Story story;

    private StoryRoleEnum role;
}
At first I didn't use the AspectJ support, but I found it very useful for my use case because it generates a divider between the POJO and the actual node; therefore I can easily request properties/connections according to the request, and the Domain-Driven Design approach seems very nice.
Question 1 - AspectJ:
Let's say I want to define default fields for an object; these fields will be returned to the client whether requested in the URL or not... so I have tried the @Fetch annotation on these fields, but it seems it does not work when using AspectJ.
At the moment I do it this way:
public Profile(Node n) {
    setPersistentState(n);
    this.id = getId();
    this.firstName = getFirstName();
    this.lastName = getLastName();
}
Is this the right approach to achieve that? Should the @Fetch annotation be supported even when using AspectJ? I would be happy to get examples/blogs covering AspectJ + Neo4j; I found almost nothing...
Question 2 - Pagination:
I would like to support pagination when requesting a specific connection, for example:
/profile/{id}/stories/, if stories are related as below:
// inside the Profile node
@RelatedTo(type = "BOOKMARK", direction = Direction.OUTGOING)
private Set<Story> bookmarks;
/profile/{id}/stories/, if stories are related as below:
// inside the Profile node
@Query("START profile=node({self}) match profile-[r:ROLE]->story where r.role = FOUNDER and story.status = PUBLIC")
private Iterable<Story> published;
Is pagination supported out of the box with @Query, @RelatedTo or @RelatedToVia, using the Pageable interface to retrieve a Page instead of a Set/List/Iterable? The limit and the sorting should be dynamic, depending on the request from the client... I can achieve that using the Cypher Query DSL but would prefer to use the basics; other approaches will be accepted happily.
Question 3 - #Query with {self}:
Kind of silly question but I can't help it :), it seems that when using #Query inside the node entity ( using {self} parameter } the return type must be Iterable which make sense..
lets take the example of...
// inside profile node
#Query("START profile=node({self}) match profile-[r:ROLE]->story where r.role = FOUNDER and story.status = PUBLIC")
private Iterable<Story> published;
When the published connection is requested:
// retrieve the context profile
Profile profile = profileRepo.findOne(id);
// get the published stories using AspectJ - delegates to the backing node
Iterable<Story> published = profile.getPublished();
// set the result into the domain object - throws a read-only exception because the type is Iterable
profile.setPublished(published);
Is there a workaround for that which does not require creating another property marked @Transient inside Profile?
Question 4 - Recursive relations:
I am having some problems with transitive/recursive relations: when assigning a new Profile Role in a Story, the relationship entity Role contains the @EndNode story, which contains the roles connection... one of them is the context role above, and it never ends :).
Is there a way to configure the Spring Data engine not to create these never-ending relations?
Question 5 - Transactions:
Maybe I should have mentioned it before, but I am using the REST server for the Neo4j DB; from previous reading I understand that there is no out-of-the-box support for transactions, unlike when using the embedded server?
I have the following code...
Profile newProfile = new Profile();
newProfile.getFollowThem().add(otherProfile);
newProfile.getBookmarks().add(otherStory);
newProfile.persist(); // or profileRepo.save(newProfile)
Will this run in a transaction when using the REST server? There are a few operations here; if one fails, do all fail?
Question 6 - Mongo + Neo4j:
I need to store data which doesn't have a relational nature, like feeds, comments and messages. I thought about an integration with MongoDB to store these... can I split domain POJO fields/connections across both Mongo and Neo4j with cross-store support? Will it support AspectJ?
That is it for now... any comments regarding any approach I presented above will be welcome. Thank you.
Starting to answer, by no means complete:
Perhaps upgrade to the .RELEASE versions?
Question 1
If you want to serialize AspectJ entities to JSON, you have to exclude the internal fields generated by the advanced mapping (see this forum discussion).
When you use the advanced mapping, @Fetch is not necessary, as the data is read through from the database anyway.
Question 2
For pagination of fields, you can try a Cypher query via @Query with SKIP 10 LIMIT 100 as fixed parameters, as sketched below. Otherwise you could employ a repository/template to fill a collection field of your entity with the paged information.
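A sketch of the first suggestion against the SDN 2.x API used in the question (the repository name and positional parameters are illustrative):
import org.springframework.data.neo4j.annotation.Query;
import org.springframework.data.neo4j.repository.GraphRepository;

// Hypothetical repository: paging pushed into the Cypher query itself.
public interface StoryRepository extends GraphRepository<Story> {

    @Query("START profile=node({0}) MATCH profile-[r:ROLE]->story " +
           "WHERE r.role = 'FOUNDER' AND story.status = 'PUBLIC' " +
           "RETURN story SKIP {1} LIMIT {2}")
    Iterable<Story> findPublished(Long profileId, int skip, int limit);
}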
Question 3
I don't think the return type of an @Query has to be Iterable; it should also work with other types (collections or concrete types). What is the issue you ran into?
For creating recursive relationships, try to store the relationship objects themselves first and only then the node entities. Or use template.createRelationshipBetween(start, end, type, allowDuplicates) for creating the relationships, as sketched below.
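A hedged sketch of the second suggestion; the Neo4jTemplate overload shown (the variant that also takes the relationship entity class) is from the SDN 2.x API of that era, so treat the exact signature as an assumption:
import org.springframework.data.neo4j.support.Neo4jTemplate;

public class RoleCreator {

    private final Neo4jTemplate template;

    public RoleCreator(Neo4jTemplate template) {
        this.template = template;
    }

    public Role assignFounder(Profile profile, Story story) {
        // create (or reuse, since allowDuplicates = false) the ROLE relationship entity
        Role role = template.createRelationshipBetween(profile, story, Role.class, "ROLE", false);
        role.setRole(StoryRoleEnum.FOUNDER); // assumes a setter on the Role entity
        return template.save(role);
    }
}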
Question 5
As you are using SDN over REST, it might not perform very well: right now the underlying implementation uses the RestGraphDatabase for fine-grained operations, and the advanced mapping issues very fine-grained calls. Is there any reason why you don't want to use the embedded mode? Against a REST server I would most certainly use the simple mapping and try to handle read operations mostly with Cypher.
With the REST API there is only one transaction per HTTP call; the only option for larger transactions is the REST batch API.
There is pseudo-transaction support in the underlying rest-graph-database which batches calls issued within a "transaction" into one batch REST request. But those calls must not rely on read results during the tx; those will only be populated after the tx has finished. There were also some issues using this approach with SDN, so I disabled it for that (it is a config option/system property for the rest-graphdb).
Question 6
Right now, cross-store support for both MongoDB and Neo4j is only implemented against a JPA/relational store. We discussed having cross-store references between the Spring Data projects once but didn't follow up on it.
