I am using Spring Data Mongo in a multi-tenant application. I have a document class annotated with @Document, where I have used the @Indexed annotation to create an index on a field.
When I start my application with the default tenant, the collection is created automatically in MongoDB with the index on the field, which is fine.
Then I create a new tenant at runtime with a different schema and hit my API; the collection is created dynamically in the new schema, but there is no index.
In short, if the collection is created while starting the application, the @Indexed annotation works, but if the collection is created dynamically, the index is not created.
What can be the solution for this scenario where collections are created dynamically?
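Automatic index creation in Spring Data MongoDB only runs for the database that is active when the mapping context is initialized, so for collections created lazily in a new tenant's database the @Indexed definitions have to be applied explicitly. A minimal sketch, assuming you can obtain a MongoTemplate bound to the new tenant's database (the class and method names here are made up for illustration):

```java
import org.springframework.data.mongodb.core.MongoTemplate;
import org.springframework.data.mongodb.core.index.IndexOperations;
import org.springframework.data.mongodb.core.index.IndexResolver;

public class TenantIndexCreator {

    /**
     * Resolves the @Indexed/@CompoundIndex definitions declared on the
     * entity class and creates them in the collection of the given
     * (tenant-bound) MongoTemplate. Call this once per mapped entity
     * class right after provisioning a new tenant.
     */
    public static void ensureIndexes(MongoTemplate tenantTemplate, Class<?> entityClass) {
        IndexOperations indexOps = tenantTemplate.indexOps(entityClass);
        IndexResolver resolver = IndexResolver.create(
                tenantTemplate.getConverter().getMappingContext());
        resolver.resolveIndexFor(entityClass).forEach(indexOps::ensureIndex);
    }
}
```

Calling ensureIndexes(...) for each @Document class whenever a tenant is created mirrors what Spring Data does for the default tenant at startup.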
I am trying to create a system-versioned (history) table in SQL Server via a Spring Boot Hibernate entity, and I would like it to be created automatically via ddl-auto.
Is there a way to do this? All I found is the Envers library with the @Audited annotation, but it creates a separate audit table, not a system-versioned one.
I'm developing a Spring application for search purposes. I use the Spring Data Elasticsearch library for creating indices and managing documents. For querying (searching) I use the regular Elasticsearch client, not Spring Data.
I noticed that Spring Data only creates the index if it is missing in Elasticsearch. Whenever a new field is added to the class annotated with @Document, the mapping is not updated. Thus, searching on a just-added field causes a bad request.
The application is already running in production, with multiple instances. I would like to change the mapping of the index while keeping the existing data.
The solution I found on the internet and in the documentation is to create a new index, copy the data (possibly transforming it on the fly) with the reindex functionality, and switch the alias to the new index.
I implemented a solution with this approach. The migration procedure runs on application startup (whether it is required is decided by an environment parameter).
However, this approach seems cheap and shoddy to me. Changing documents with a Painless script is error-prone. It is difficult to test the migration. I need to manually keep track of which environment I am running the migration on and have the proper index name set. During deployment I need to keep an eye on the process to check that everything worked correctly; possibly some manual changes would be required as well. And what if the reindex procedure fails partway through?
There are a lot of questions bothering me. I was searching for why there isn't a library for this, similar to Flyway. Also, I understand that it is not possible to change the mapping of an existing field, but it is possible to add a new field, and this is not supported in Spring Data Elasticsearch.
Could you please give me some advice on how you tackle such situations?
This is not an answer on how to do these migrations in general, but some clarification of what Spring Data Elasticsearch can do and what it does.
Spring Data Elasticsearch creates an index with the corresponding mapping if you are using a Spring Data Elasticsearch repository for your entity and the index does not exist on application startup. It does not update the mapping of an index by itself.
You can nevertheless update an index mapping from the program code, there's IndexOperations.putMapping(java.lang.Class<?>) for that. So if you add a new property to your entity and then on application start call this method with the changed entity class, the index mapping will be updated. This can only add new fields to the mapping, not change existing ones - this is a restriction of Elasticsearch.
If your application is running in multiple instances, it is up to you to synchronize them when updating and to handle errors correctly.
If you add fields, make sure to update the mapping before adding data; otherwise the new field's type will be autodetected by Elasticsearch and you will have to do a manual reindex.
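The putMapping call mentioned above can be wired to run once at startup. A minimal sketch, assuming a hypothetical Product entity with a newly added field:

```java
import org.springframework.boot.context.event.ApplicationReadyEvent;
import org.springframework.context.event.EventListener;
import org.springframework.data.annotation.Id;
import org.springframework.data.elasticsearch.annotations.Document;
import org.springframework.data.elasticsearch.annotations.Field;
import org.springframework.data.elasticsearch.annotations.FieldType;
import org.springframework.data.elasticsearch.core.ElasticsearchOperations;
import org.springframework.data.elasticsearch.core.IndexOperations;
import org.springframework.stereotype.Component;

@Document(indexName = "products")
class Product {
    @Id String id;
    @Field(type = FieldType.Text) String name;
    @Field(type = FieldType.Keyword) String category; // the newly added field
}

@Component
public class MappingUpdater {

    private final ElasticsearchOperations operations;

    public MappingUpdater(ElasticsearchOperations operations) {
        this.operations = operations;
    }

    /**
     * After the context is up, push the mapping derived from the
     * (possibly extended) entity class to the existing index.
     * Elasticsearch accepts new fields; changing the type of an
     * existing field is rejected.
     */
    @EventListener(ApplicationReadyEvent.class)
    public void updateMapping() {
        IndexOperations indexOps = operations.indexOps(Product.class);
        indexOps.putMapping(Product.class);
    }
}
```

With multiple instances this call is idempotent for identical mappings, but you still need to ensure it has run before any instance writes documents with the new field.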
We have a running Spring Boot service A that created some relational entities using Spring Data JPA with the Hibernate ORM.
We need to create a new Spring Boot service B that needs to access A's tables, but with different queries.
There are a few options I thought of:
Making service B use Spring JPA and Hibernate and copy the same entity models from service A
But I'm not sure whether this method causes any synchronization issues due to Hibernate's first-level caching.
Both services will not be using 2nd level cache.
Same as option 1, making service B use Spring JPA and Hibernate, but importing service A as a dependency in service B instead of copying the entity models.
Making service B use Spring JdbcTemplate if we are not creating any new entities in service B.
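For reference, option 3 could look roughly like the following sketch; the table, columns, and class names are invented for illustration:

```java
import java.math.BigDecimal;
import java.util.List;
import org.springframework.jdbc.core.JdbcTemplate;

public class OrderReadDao {

    private final JdbcTemplate jdbcTemplate;

    public OrderReadDao(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    /**
     * Read-only access to service A's table. No entity classes, no
     * Hibernate session, so no first-level-cache concerns in service B.
     */
    public List<OrderSummary> findByCustomer(long customerId) {
        return jdbcTemplate.query(
                "SELECT id, total FROM orders WHERE customer_id = ?",
                (rs, rowNum) -> new OrderSummary(
                        rs.getLong("id"),
                        rs.getBigDecimal("total")),
                customerId);
    }

    /** Simple read model instead of a shared JPA entity. */
    public record OrderSummary(long id, BigDecimal total) {}
}
```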
I would also like to know how service B's table can have a unidirectional foreign key relationship (@ManyToOne or @OneToOne) with service A's table.
Please suggest which option is better, or whether there's a better way.
If it is bad practice to use another service's tables, please suggest the correct design.
Thanks
Please guide me on how to update entity classes when the schema is updated, and how to make sure the entity classes and the table schema are in sync.
Scenario: User can add a new column by clicking add column button from screen.
How do I update the schema using Hibernate?
I am using Hibernate 3 with Spring 3.5 for a SaaS application. I am expecting up to 10-15 customers, not more. I do not want to implement a separate database or schema per customer, as that is too complicated and costly for a small enterprise like mine. I am currently using a multi-tenant strategy which works fine for a host of small features. Here is the use case where my design fails:
For the reporting feature, each customer will have a different table for its data (for various reasons: legacy, source of data, etc.). Table structures differ, and so do service/controller behaviors.
I am currently planning to create separate controllers, services (DAOs), etc. for each customer, mapping each such customer table to a separate Hibernate class. But this approach is not clean: for every new customer I add (which is not that often), I would need to add its table and also code a Hibernate entity class mapped to the new table. Is there a way to manage/map such dynamic tables using Hibernate, which get added when a new customer is added?
Use Hibernate 4 multi-tenancy support, described in the Hibernate documentation. There is support for a separate database per tenant, a separate schema per tenant, and partitioning of the same table per tenant.
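For the schema- or database-per-tenant strategies, Hibernate needs to be told which tenant the current session belongs to. A minimal sketch of the resolver half of that contract (TenantContext is a hypothetical thread-local holder you would provide; the connection-provider half is configured via hibernate.multi_tenant_connection_provider):

```java
import org.hibernate.context.spi.CurrentTenantIdentifierResolver;

public class TenantIdResolver implements CurrentTenantIdentifierResolver {

    /** Returns the tenant of the current request, falling back to a default. */
    @Override
    public String resolveCurrentTenantIdentifier() {
        String tenant = TenantContext.getCurrentTenant(); // hypothetical holder
        return tenant != null ? tenant : "default";
    }

    /** Keep existing sessions bound to the tenant they were opened for. */
    @Override
    public boolean validateExistingCurrentSessions() {
        return true;
    }
}
```

The resolver is registered with the hibernate.tenant_identifier_resolver property, alongside hibernate.multiTenancy set to SCHEMA or DATABASE.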
Is there a way to manage/map such dynamic tables using Hibernate, which get added when a new customer is added?
I don't know if this is directly supported by Hibernate. From the manual, the supported multi-tenant options are:
schema
database
discriminator
Discriminator is mentioned but is not supported in the current release of Hibernate (version 4.2). That leaves schema and database, and you mentioned in your question that neither of these is currently applicable to your setup. So unless you're willing to do some major restructuring, you'll probably need to proceed with a different approach.
Option 1:
If I were you, I'd write a view that presents the data from each tenant's table. You can add the tenant ID as a column in the view. Map the reporting class to the view with Hibernate. When you run a query against the view, set the current tenant's ID as a query parameter.
If you go this route, you won't need to add new controllers and POJOs when you add a customer. Just modify the view to also include the new customer's data and it should work.
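A sketch of this idea, assuming a SQL view that UNIONs the per-customer tables and adds a tenant_id column (the view, columns, and class names are all invented for illustration):

```java
import java.math.BigDecimal;
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Table;

// Maps to a view along the lines of:
//   CREATE VIEW all_reports AS
//     SELECT id, amount, 'acme'   AS tenant_id FROM acme_reports
//     UNION ALL
//     SELECT id, amount, 'globex' AS tenant_id FROM globex_reports;
@Entity
@Table(name = "all_reports")
public class ReportRow {

    @Id
    private Long id;

    @Column(name = "amount")
    private BigDecimal amount;

    @Column(name = "tenant_id")
    private String tenantId;

    // getters and setters omitted
}
```

The reporting DAO would then query with the current tenant as a parameter, e.g. session.createQuery("from ReportRow r where r.tenantId = :tenant").setParameter("tenant", currentTenantId).list(), and adding a customer only means extending the view.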
Option 2:
Hibernate can bind native SQL query results to entities. You can have one entity that represents the data in any reporting table (this assumes that the separate per-customer tables have a similar structure).
In your reporting DAO, you'd fetch a SQL query from a properties file or specify a named SQL query based on the current tenant identifier. Note that the named query approach will only meet your needs (no recompilation of Java classes) if you have things mapped with HBM files. If your mapping is done with annotations, you'd need to rebuild the project to add a named query.
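A sketch of the properties-file variant of this approach; the Report entity, the properties file, and its keys are assumptions for illustration:

```java
import java.math.BigDecimal;
import java.util.List;
import java.util.Properties;
import javax.persistence.Entity;
import javax.persistence.Id;
import org.hibernate.Session;

// Minimal entity that every per-customer result set is bound to;
// the SQL must alias its columns to match this mapping.
@Entity
class Report {
    @Id Long id;
    BigDecimal amount;
}

public class ReportingDao {

    private final Properties tenantQueries; // e.g. loaded from reporting-queries.properties

    public ReportingDao(Properties tenantQueries) {
        this.tenantQueries = tenantQueries;
    }

    /**
     * Looks up the tenant-specific SQL, e.g.
     *   acme=SELECT id, amount FROM acme_reports
     * and binds the rows to Report instances.
     */
    @SuppressWarnings("unchecked")
    public List<Report> findReports(Session session, String tenantId) {
        String sql = tenantQueries.getProperty(tenantId);
        return session.createSQLQuery(sql)
                .addEntity(Report.class)
                .list();
    }
}
```

Adding a customer then only requires a new table and a new line in the properties file, with no Java recompilation.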