spring-data-solr advanced nested model use case

I was given the task of introducing Solr to our product, so I thought about spring-data-solr. I have seen this blog:
http://www.petrikainulainen.net/spring-data-jpa-tutorial/
and I was able to run embedded Solr in an integration test. Since I had a simple POC, I wanted to make it more advanced to see whether it fits our needs, so I started searching for how to map nested objects. I found this:
https://stackoverflow.com/questions/30561245/is-is-possible-to-use-embeddables-in-spring-data-solr
Someone answered that version 1.4.0 did not support nested objects. Does anyone know whether that has changed? These links look promising:
https://dzone.com/articles/using-solr-49-new
Solr: Indexing nested Documents via DIH
https://issues.apache.org/jira/browse/SOLR-1945
So, wrapping up, here is a list of my questions:
Is it possible to map a parent-child relation (on one level at least)?
If you answered 'no' to the first question - then how can I flatten a child's fields so that they become part of the Solr document? Should I register some kind of converter somehow? Is there anything else I should do?
I also found this: http://docs.spring.io/spring-data/solr/docs/current/api/org/springframework/data/solr/core/mapping/Indexed.html What is the purpose of this annotation? So far I have only seen examples with @Id and @Field annotations. Is it perhaps used to generate the schema based on the model? If so, how can I do that?
Last, but not least - when I create a SolrRepository, should I use my JPA entity (annotated with @Field annotations) as the generic type? Or should I rather create a totally different POJO that acts as a view/DTO of my JPA entity? This question is again about conversion, I guess. If I create a dedicated POJO, I can convert/map the fields manually in a constructor, but that feels like a rather bad idea (a rough sketch of this flattened-document option follows below).
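For reference, a flattened mapping along the lines asked about above could look roughly like this. It is only a hedged sketch, not a confirmed feature set of any particular spring-data-solr version; ProductDocument, manufacturerName, the field names and the repository are made-up illustrations. The idea is a dedicated Solr document POJO, separate from the JPA entity, with the child's fields denormalized onto the parent document, and a repository typed against that POJO rather than the JPA entity:

// Hypothetical flattened Solr document, kept separate from the JPA entity
// (annotations from org.springframework.data.solr.core.mapping and
// org.springframework.data.annotation)
@SolrDocument
public class ProductDocument {

    @Id
    @Indexed(name = "id")
    private String id;

    @Indexed(name = "name")
    private String name;

    // child ("manufacturer") fields flattened onto the parent document
    @Indexed(name = "manufacturer_name")
    private String manufacturerName;

    // getters/setters omitted
}

// Repository typed against the Solr document rather than the JPA entity
public interface ProductDocumentRepository
        extends SolrCrudRepository<ProductDocument, String> {
}

The conversion from the JPA entity to this document can then live in one dedicated mapper instead of being scattered across constructors.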

Related

Does Spring Data JDBC support inheritance

I am working on a new project using Spring Data JDBC because it is very easy to handle and indeed splendid.
In my scenario I have three (maybe more in the future) types of projects, so my domain model could easily be modelled with plain old Java objects using type inheritance.
First question:
As I am using Spring Data JDBC, is this approach (inheritance) even supported like it is in JPA?
Second question - as addition to the first one:
I could not find anything regarding this in the official docs, so I am assuming there are good reasons why it is not supported. Speaking of that, might I be on the wrong track modelling entities with inheritance in general?
Currently Spring Data JDBC does not support inheritance.
The reason for this is that inheritance makes things rather complicated, and it was not at all clear what the correct approach would be.
I have a couple of vague ideas of how one might create something usable. Different repositories per type is one option; using a single type for persisting, but applying some post-processing to obtain the correct type upon reading, is another.
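A rough sketch of the second idea (a single persisted type plus post-processing) might look like the following. None of this is a built-in Spring Data JDBC feature, and ProjectEntity, the type codes and the subclasses are made-up names for illustration:

// Single flat table/entity with a discriminator column (illustrative names)
@Table("project")
public class ProjectEntity {
    @Id Long id;
    String type;   // discriminator, e.g. "INTERNAL" or "CUSTOMER"
    String name;
}

// Post-processing after reading: map the flat row onto the domain hierarchy
public Project toDomain(ProjectEntity e) {
    switch (e.type) {
        case "INTERNAL": return new InternalProject(e.id, e.name);
        case "CUSTOMER": return new CustomerProject(e.id, e.name);
        default: throw new IllegalArgumentException("Unknown project type: " + e.type);
    }
}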

What are differences in many to many mapping approach

I want many-to-many relation mapping with an explicit join table.
My Java Spring project provides a REST API in HAL format and currently has only two types of classes:
defined entities, and
"empty" interfaces for repositories (annotated with @RepositoryRestResource).
Side note: the dependencies are roughly these:
spring-boot-starter-parent
spring-boot-starter-data-jpa
spring-boot-starter-data-rest
spring-data-rest-hal-browser
postgresql / h2
spring-boot-starter-hateoas
Relations between tables (I mean the _links of REST resources; have a look at a sample hal+json document and look for ea:basket, for example) are working as expected, and they work "for free" thanks to the magic powder of Spring auto-configuration and the other magic included.
I am struggling now with adding a new many-to-many dependency.
I have entities A, B and Tag. I want any number of Tag entities to be associated with A and B entities. I have no need for any list/set in the A and B entities (I will use jOOQ if I need anything more than CRUD).
First problem/question:
I see at least three approaches to model a many-to-many relation with an explicit join table (to be able to model my association needs) and I do not know the differences. What are the differences between these approaches? (a rough sketch of the first one follows this list)
use an embeddable composite key in the join table, as Vlad suggests: https://vladmihalcea.com/the-best-way-to-map-a-many-to-many-association-with-extra-columns-when-using-jpa-and-hibernate/
use the @IdClass approach as stated here: https://stackoverflow.com/a/3588400/11152683
use multiple @Id in the join table as shown here: https://hellokoding.com/jpa-many-to-many-extra-columns-relationship-mapping-example-with-spring-boot-hsql/
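For reference, the composite-key variant from the first link boils down to roughly the following sketch; Article, Tag, ArticleTag and ArticleTagId are illustrative names only:

// Embeddable composite key for the join table
@Embeddable
public class ArticleTagId implements Serializable {

    private Long articleId;
    private Long tagId;

    // equals() and hashCode() over both fields are required
}

// Explicit join entity carrying the composite key plus any extra columns
@Entity
public class ArticleTag {

    @EmbeddedId
    private ArticleTagId id;

    @ManyToOne(fetch = FetchType.LAZY)
    @MapsId("articleId")
    private Article article;

    @ManyToOne(fetch = FetchType.LAZY)
    @MapsId("tagId")
    private Tag tag;

    // extra columns of the association (e.g. createdOn) would go here
}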
Second question:
Which approach is needed in my case to easily model the associations and make the magic powder do its work when providing the HAL format in CRUD repositories? I mean that the links of the relations should be generated automagically.
To make the many-to-many association work nicely with the Spring Data REST machinery and provide a HAL representation with the correct links, you first have to know a bit of JPA/Hibernate. There are two approaches (1 and 2 from the question; the third is only a shortcut for the second one and works in Hibernate only).
Both approaches are shown in the proof-of-concept repository in the tags branch. I use that repository to test various set-ups of the project.
Approach 1, EmbeddedId: it uses a hackish thing in the FixConfig class in the BackendIdConverter bean, where it uses bookRepository to load the Book entity when parsing the request id from the URL into the embeddable id class.
Approach 2, IdClass: it uses plain Integers in its IdClass and seems to be the correct solution.
I think the first approach can be modified to work similarly to the second, but I have not been able to do it so far.
I would mark as the solution any answer providing some insight into why it is like this.
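For completeness, a minimal sketch of the IdClass variant (approach 2) described above; BookTag and BookTagId are illustrative names, and the real working code is in the answer's proof-of-concept repository:

// Plain key class matching the two @Id properties by name and type
public class BookTagId implements Serializable {

    private Integer bookId;
    private Integer tagId;

    // no-arg constructor, equals() and hashCode() over both fields are required
}

// Join entity using plain Integer ids instead of an embedded key
@Entity
@IdClass(BookTagId.class)
public class BookTag {

    @Id
    private Integer bookId;

    @Id
    private Integer tagId;
}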

Creating a capped collection using Spring data MongoDB #Document

I am trying out reactive support in spring-data with MongoDB. I am using spring-boot 2.0.0.
Generally I would write a domain object like this in my project:
@Document
public class PriceData {
......
}
With this, spring-data would create a collection named priceData in MongoDB. If I wanted to customize that, I would use the collection attribute:
@Document(collection = "MyPriceData")
Since I want to try the reactive support of MongoDB, I want to create a capped collection so that I can use @Tailable cursor queries.
I can create a capped collection in my MongoDB database as specified here:
CollectionOptions options = new CollectionOptions(null, 50, true);
mongoOperations.createCollection("myCollection", options);
or
db.runCommand({ convertToCapped: 'MyPriceData', size: 9128 })
This is not a big problem if I use some external MongoDB database where I can just run this command once. But if I use an embedded MongoDB, then I would have to put this in a class which would be executed every time during start-up.
Either way I would be creating the collection even before the first request. So I was wondering if there is a way I could tell spring-data-mongodb that I need a capped collection instead of a regular collection.
Unfortunately @Document doesn't help in this case.
Below is a quote from Oliver:
Might be a good idea to have those options exposed to the @Document annotation to automatically take care of them when building the mapping context but we generally got the feedback of people wanting to manually handle those collection setup and indexing operations without too much automagic behavior. Feel free to open a JIRA in case you'd like to see that supported nevertheless.
That was back in 2011, and it seems it's still true to date. If you really need this to be handled via the annotation, you should open a JIRA ticket.
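Until something like that exists, the start-up workaround described in the question could be wrapped up roughly as below. This sketch assumes Spring Data MongoDB 2.x and its CollectionOptions builder; the class name, collection name and sizes are illustrative:

// Create the capped collection once at start-up, before any @Tailable query runs
@Configuration
public class CappedCollectionConfig {

    @Bean
    ApplicationRunner createCappedCollection(MongoOperations mongoOperations) {
        return args -> {
            if (!mongoOperations.collectionExists("MyPriceData")) {
                mongoOperations.createCollection("MyPriceData",
                        CollectionOptions.empty().capped().size(9128).maxDocuments(50));
            }
        };
    }
}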

Best way to represent object views (summary, detail, full etc) in Spring based REST service

I am working on a REST service which uses Spring 4.x. As per a requirement, I have to produce several different views of the same object. Sample URIs:
To get full details of a location service: /services/locations/{id}/?q=view:full
To get summary of a location service: /services/locations/{id}/?q=view:summary
I have thought of two solutions for such problem:
1. Create different objects for different views.
2. Create the same object, but filter out fields based on some configuration (shown below):
location_summary_fields = field1, field2
location_detail_fields = field1, field2, field3
Could someone help me understand what an ideal solution would be? I am not aware of any standard practice for this kind of problem.
Thanks,
NN
In my opinion the best option is to use separate POJOs for different views. It's a lot easier to document (for example when you use automated tools like Swagger). Also, you have to remember that your application will change over time, and having one common POJO could then cause trouble - you may need to add a field to one service and not expose it through another.
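A trivial sketch of that idea, with illustrative names: one DTO per view, each exposing only the fields that view needs, with the mapping from the domain object kept in one place.

// Illustrative view-specific DTOs for the same location resource
public class LocationSummaryDto {
    private String id;
    private String name;          // the "field1, field2" of the summary view
}

public class LocationDetailDto {
    private String id;
    private String name;
    private String description;   // "field3", exposed only in the detail view
}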
See this article on how Google Gson uses annotations to convert a Java object representation to JSON format: http://www.javacreed.com/gson-annotations-example/
Since you want two different representations of the same object, you could roll your own
toJson method as follows:
a) Annotate each field of your model with either @Summary, @Detail or @All
b) Implement a toJson() method that returns a JSON representation by examining the annotations on the fields and using them appropriately
If you need an XML representation, it's the same thing, except you would have a toXML().
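One hedged way to wire that up is a Gson ExclusionStrategy keyed off the custom annotations. @Summary, SummaryOnlyStrategy and the usage below are illustrative; ExclusionStrategy, FieldAttributes and GsonBuilder are existing Gson APIs:

// Marker annotation for fields that belong to the summary view (illustrative)
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.FIELD)
@interface Summary {}

// Skips every field that is not marked @Summary when serializing the summary view
class SummaryOnlyStrategy implements ExclusionStrategy {

    @Override
    public boolean shouldSkipField(FieldAttributes f) {
        return f.getAnnotation(Summary.class) == null;
    }

    @Override
    public boolean shouldSkipClass(Class<?> clazz) {
        return false;
    }
}

// Usage:
// Gson summaryGson = new GsonBuilder()
//         .addSerializationExclusionStrategy(new SummaryOnlyStrategy())
//         .create();
// String json = summaryGson.toJson(location);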

MongoDB type inference using _class

I've been reading the MongoDB documentation, and Spring adds a _class field by default to the stored data. Is there any way to use this information to get type inference?
For example: there is an abstract class Animal with three subclasses Dog, Cat and Bird. Say you have a class Zoo which contains a list of animals, and in the database you store those Zoo objects. Is there any function to get a List<Animal> back whose elements are instances of the concrete subclasses?
I'm using Spring, so I would prefer a solution that works with spring-data-mongodb, but an external mapping library would be fine too. I prefer not to write it myself, as it seems like basic mapping functionality.
Make sure you map all the types you mentioned to be stored in the same collection (e.g. using the @Document annotation). Then you can simply execute queries against that collection, handing Animal to the corresponding method on MongoTemplate. The underlying converter will then automatically instantiate the correct types based on the information stored in _class. The same applies to the usage of Spring Data MongoDB repositories.
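A minimal sketch of that setup; the class names and the collection name are illustrative:

// All subtypes share the parent's collection, so _class is the discriminator
@Document(collection = "animals")
public abstract class Animal {
    @Id
    private String id;
}

@Document(collection = "animals")
public class Dog extends Animal { }

@Document(collection = "animals")
public class Cat extends Animal { }

// Reading back through the base type: the converter inspects _class and
// returns Dog/Cat/Bird instances typed as Animal
List<Animal> animals = mongoTemplate.findAll(Animal.class);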
