Managing database content on JHipster server - spring-boot

How would one go about managing the data in their PostgreSQL database on their JHipster generated server? My goal is to be able to periodically check the items in the database and perform certain tasks based on the database contents.
I'm new to using JHipster and I'm not sure how I'd go about adding or removing entities as well as adding items to entities on the server. I understand that services facilitate doing these operations for the client-side, but I can't see how I would use the same approach to do what I need on the server (if this is even the correct approach).

To schedule a task on the backend you can annotate a public method of a service with @Scheduled. You can find an example in the code generated by JHipster: in the UserService class, look at the removeNotActivatedUsers() method.
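As a minimal sketch of what that could look like for the use case above (the ItemRepository and the task body are hypothetical assumptions, and the cron expression is just an example; scheduling is already enabled in JHipster-generated applications, as the existing removeNotActivatedUsers() scheduler shows):

import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
@Transactional
public class ItemMaintenanceService {

    private final ItemRepository itemRepository; // hypothetical Spring Data JPA repository

    public ItemMaintenanceService(ItemRepository itemRepository) {
        this.itemRepository = itemRepository;
    }

    // Runs at the top of every hour; adjust the cron expression to your needs.
    @Scheduled(cron = "0 0 * * * *")
    public void checkItems() {
        itemRepository.findAll().forEach(item -> {
            // inspect the item and perform whatever task its state requires
        });
    }
}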

Related

How to share JPA entity across multiple spring boot applications

We would like to share the Employee across both applications, exposed as microservices. But Employee has the JPA definition, so how do we package this as a separate jar which can be shared across multiple applications?
Spring Boot "AppA" has the following entity:
@Entity
@Table(name = "employees")
public class Employee {
}
Spring Boot "AppB" fetches Employee from "AppA"
ResponseEntity<Employee[]> response =
    restTemplate.getForEntity(
        "http://localhost:8080/employees/",
        Employee[].class);
Employee[] employees = response.getBody();
You have to wrap the Entity first in a Record and then use Corba-SCNR version 3 to access it from the other service.
Alternatively, you might want to rethink your microservice architecture, as it's not good to have two services access the same entity/database.
Ok, trolling time is over.
To answer your question: you cannot share an Entity over REST between two services in a way that is still giving you the guarantees defined by JPA/Hibernate.
Why is that? Because the EntityManager in JPA/Hibernate creates a wrapper around the Java object you have, intercepts calls to it, and remembers when you change fields so it knows which SQL statements to generate when you "flush" the changes to the database. These wrappers cannot be serialised over your REST endpoints, at least not in a way that another service could pick them up and continue where the first service stopped.
In general, it is a bad idea to directly expose your JPA entities in your REST controllers. I personally prefer to create small DTOs (Data Transfer Objects) that I fill with the data that I need to expose, and only expose those in the REST endpoints.
So the best approach would be to think about which information AppB needs from the Employee, put those fields in the DTO, then expose them in the controller of AppA.
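A minimal sketch of that DTO approach in AppA (the field names, the EmployeeRepository and the endpoint path are illustrative assumptions, not taken from the question):

import java.util.List;
import java.util.stream.Collectors;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

// DTO exposing only what AppB actually needs
public class EmployeeDTO {

    private final Long id;
    private final String name;

    public EmployeeDTO(Long id, String name) {
        this.id = id;
        this.name = name;
    }

    public Long getId() { return id; }
    public String getName() { return name; }
}

@RestController
@RequestMapping("/employees")
class EmployeeResource {

    private final EmployeeRepository employeeRepository; // hypothetical Spring Data JPA repository

    EmployeeResource(EmployeeRepository employeeRepository) {
        this.employeeRepository = employeeRepository;
    }

    // Maps the managed Employee entities to plain DTOs before they leave the service
    @GetMapping
    public List<EmployeeDTO> getAll() {
        return employeeRepository.findAll().stream()
            .map(e -> new EmployeeDTO(e.getId(), e.getName()))
            .collect(Collectors.toList());
    }
}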
If you need to make changes to the Employee in AppB, create a controller in AppA that accepts requests from AppB and then send a request from AppB to AppA.
Depending on the size of the EmployeeDTO you create, you could put it into a shared jar or simply copy it over. Depending on the size of your project, you could also describe it in Swagger/OpenAPI in AppA and generate it in AppB, but this might be overkill.
I hope this helps a bit. Sorry for the trolling before.
If you really need to share them and do not want to copy and paste, you can achieve that by packaging your shared entities and repos in a separate Spring project (without an Application.java) and declaring that project as a maven/gradle dependency in your downstream services.
Let's say you've put the entities and repos in that separate library under packages like my.common.lib.entities and my.common.lib.repos.
You can make Spring discover them in your downstream services AppA and AppB by using @ComponentScan, typically on your corresponding Spring application classes:
AppA:
@ComponentScan(basePackages = {"my.common.lib.entities", "my.common.lib.repos"})
@SpringBootApplication
public class ApplicationAppA {
}
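One caveat: @ComponentScan by itself registers Spring components, but JPA entities and Spring Data repositories living outside the application's own base package usually also need @EntityScan and @EnableJpaRepositories. A sketch of AppA's application class under that assumption:

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.autoconfigure.domain.EntityScan;
import org.springframework.data.jpa.repository.config.EnableJpaRepositories;

@SpringBootApplication
@EntityScan(basePackages = "my.common.lib.entities")         // shared JPA entities
@EnableJpaRepositories(basePackages = "my.common.lib.repos") // shared Spring Data repositories
public class ApplicationAppA {

    public static void main(String[] args) {
        SpringApplication.run(ApplicationAppA.class, args);
    }
}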

Spring Boot: Handle configuration in multitenant application

I am implementing a Spring Boot application which will be providing a multitenant environment. That is achieved in my case by using a separate database schema for each customer; see this project for an example.
Now I am wondering how to implement tenant-specific configurations. I am using @ConfigurationProperties to bundle my property values, but these are getting instantiated once and not for each tenant.
What if I would like to use Spring Cloud Config with multiple tenant-specific git repositories as a configuration backend? Would it be possible when using a JDBC backend for Spring Cloud Config?
Is there any way with default Spring mechanisms or do I have to implement a database-based configuration framework myself?
Edit: For example I have two tenants called Tenant1 and Tenant2. Both are running on the same application in the same context and are writing to the database schemas tenant_1 and tenant_2.
Identification of tenants happens via Keycloak (see Spring Keycloak multi tenant example), so I identify the tenantId from the JWT token and select the database connection as described here.
But now I would need the same mechanism for @Configuration beans. As far as I know, @Configuration beans are singletons, so there is always ONE configuration per application scope, and not ONE configuration per tenant.
So using Spring Cloud Config, Tenant1 would use https://git-url/tenant1, Tenant2 would use HashiCorp Vault as a backend, and perhaps Tenant3 would use a JDBC-based configuration backend. And all of that in ONE (of course scalable) application.
In case your application uses tenant-specific files (HTML templates etc.), the following can be applied. I have used the approach below for handling many tenants; it works fine and is easy to maintain.
I would suggest that you maintain a consistent configuration source (JDBC) for all of your tenant configurations. This gives you a single source that is cacheable and scalable for your application. Also, you could have your tenants navigate to a configuration page to manage their settings and alter them to suit their needs at any point in time, on the fly. (Example settings: records per page, theme, logo, filters etc...)
Having the tenant configuration in files in git will be a difficult task when you want to auto-provision tenants as they sign up, as it will involve a couple of distributed services. Having them in a TenantSettings table with the tenantId as a column could help you get the data in no time and will be easy.
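A minimal sketch of such a TenantSettings entity, assuming JPA; the setting columns are only the illustrative examples from above, not a fixed schema:

import javax.persistence.Column;   // jakarta.persistence on newer Spring Boot versions
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Table;

@Entity
@Table(name = "tenant_settings")
public class TenantSettings {

    @Id
    @Column(name = "tenant_id")
    private String tenantId;          // resolved from the JWT token, as described in the question

    @Column(name = "records_per_page")
    private Integer recordsPerPage;

    @Column(name = "theme")
    private String theme;

    @Column(name = "logo_url")
    private String logoUrl;

    // getters and setters omitted for brevity
}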
You can use Spring Cloud Config for your scenario; it is easily configurable and provides out-of-the-box features. For your specific scenario, you can have any number of microservices running, all controlled by one Spring Cloud Config Server which is connected to one Git repository. All your microservices ask the Spring Cloud Config Server for configuration properties, and it fetches them directly from the Git repository. That repository can have multiple property files; it can hold common properties for all the microservices or configuration properties specific to one service. If you want to keep confidential properties more secure, that is also possible via HashiCorp Vault. I will leave an image below for you to get a better idea about this concept.
In the image below, you can see the Git repository with common configuration property files and specific configuration property files for different services, all in the same repository.
I will add another image for you to get a better idea of how this can be arranged with application profiles as well.
Finally, I will add something to show the power of Spring Cloud Config and the out-of-the-box features it lets us play with: you can also automatically refresh configuration properties in a running application, and Spring Cloud Config can be configured to do that. I will add an architectural diagram for achieving that.
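As a rough sketch of the server side of that setup (assuming the spring-cloud-config-server dependency is on the classpath; the Git URI below is a placeholder):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.config.server.EnableConfigServer;

// Minimal Spring Cloud Config Server. In application.yml you would point it at the
// repository holding the property files, e.g.:
//   spring.cloud.config.server.git.uri: https://git-url/config-repo   (placeholder)
// Client services then fetch their properties from this server on startup.
@SpringBootApplication
@EnableConfigServer
public class ConfigServerApplication {

    public static void main(String[] args) {
        SpringApplication.run(ConfigServerApplication.class, args);
    }
}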
References for this answer are taken from Spring in Action, Fifth Edition, by Craig Walls.

How to do database schema testing across distinct applications that share the same schema

I have a situation where I have two SpringBoot microservices which share the same database schema. The schema is maintained by a liquibase changelog file. One service reads from the database, and the other service is responsible for writing to the database.
Right now the Writing Service owns the liquibase changelog file, which means the Writing Service owns the schema. And the way I validate the Reading Service is to deploy the Writing Service first into a test environment, followed by the Reading Service, and then execute end-to-end tests against the Reading Service.
Is there a way for both services (two separate apps, two separate repos) to share the liquibase changelog file? I feel this is similar to a contract test as the changelog file will be the contract for both services, but wasn't sure if there was something provided by Liquibase, Spring, Pact, etc that supported this idea.
Thanks for your time!
I think it won't count as a legit answer, but two solutions come to mind:
1. Since your second service reads from the database, I suppose you have a full set of entities there, and entities are supposed to match your database schema. And since you're using Spring, you can add spring.jpa.hibernate.ddl-auto=validate to the application.properties file. This will validate your database schema against the entities.
2. You can create a separate library which contains all the changelogs. After that you can include this library in both services, so Liquibase will validate that all changesets are executed during the application's deployment. But you should make sure that all your changesets have preConditions, so your deployment won't fail and there won't be any duplicates in the DB schema.

Finer control over Spring Security on Spring Data REST

I have multiple closely related problems in Spring Security. I am developing using Spring Boot and am using Spring Data REST for creating REST endpoints directly from my repositories.
I have multiple entities and the requirement is to have all these entities as REST endpoints. I am letting spring-data-rest handle the creation of these endpoints, and I am securing them by adding @PreAuthorize and @PostAuthorize to the entity repository methods as and where required. This works great when I am calling an endpoint like /entity/id.
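For reference, the repository-level security described above looks roughly like this (the entity and role names are illustrative assumptions, not taken from the question):

import java.util.Optional;

import org.springframework.data.repository.CrudRepository;
import org.springframework.data.rest.core.annotation.RepositoryRestResource;
import org.springframework.security.access.prepost.PreAuthorize;

@RepositoryRestResource
public interface Entity1Repository extends CrudRepository<Entity1, Long> {

    // Evaluated whenever Spring Data REST serves GET /entity1/{id}
    @Override
    @PreAuthorize("hasRole('ENTITY1_READ')")
    Optional<Entity1> findById(Long id);

    // Evaluated for POST/PUT requests on the exported resource
    @Override
    @PreAuthorize("hasRole('ENTITY1_WRITE')")
    <S extends Entity1> S save(S entity);
}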
But I am facing issues from here. Let's say I have two entities, Entity1 and Entity2, and they have a one-to-one relationship. Spring Data REST allows me to fetch the related Entity2 data from Entity1 via /entity1/id/entity2. But I have different access rights over Entity1 and Entity2, and calling the above endpoint only checks the access rights set up in the repository for Entity1. So, if a user has access to the Entity1 table and no access to the Entity2 table, he can still see some Entity2 data via the foreign key relationship of Entity1. Is this a correct design?
Moreover, we have some custom API endpoints wherein we have to aggregate data from multiple entity repositories. These endpoints themselves also have to be secured, so I am using a @PreAuthorize over an endpoint method. This works as expected and the endpoint method is called only when the expression is valid. But when a repository method is called (via a service class of course), the @PreAuthorize over that repository method is also evaluated. I would like to have the check done only once, at the beginning. Is it possible to do so?
Any suggestions to improving the design is also welcome.
There is no simple solution without massively modifying/overriding lots of default Spring Data REST features. I've been working on such a package for years now and it's working quite well for me.
Although switching to this package might be a bit of an overkill for you, it could be worth the trouble in the long run because it also fixes a lot of problems you will only meet months later.
you can set up permission rules via annotations directly in the domain objects.
it checks the permissions on the DB side, so the traffic between the API and DB is heavily decreased (only those objects are fetched from the DB which the current user has permission to)
you can set READ/UPDATE/DELETE/CREATE permissions separately for roles and/or certain users
you can use pagination on permission filtered collection
you can use pagination on property-collections too
(+ some extra features like flexible search on multiple properties)
here is the package (it's an extension of Spring Data JPA / Spring Data REST)

Can I duplicate a web service for testing?

I have a REST web service exposed at http://server:8080/my-production-ws by JBoss (7). My Spring (3.2) configuration has a datasource which points to a my-production-db database. So far so good.
I want to test that web service from the client side, including PUT/POST operations but I obviously don't want my tests to affect the production database.
Is there an easy way to have spring auto-magically create another web service entry point at http://server:8080/my-test-ws or maybe http://server:8080/my-production-ws/test that will have the exact same semantics as the production web service but will use a my-test-db database as a data source instead of my-production-db?
If this is not possible, what is the standard approach to integration testing in that situation?
I'd rather not duplicate every single method in my controllers.
Check the Spring profiles functionality; this should solve the problem. With it, it's possible to create two datasources with the same bean name in different profiles and then activate only one depending on a parameter passed to the JVM.
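A minimal sketch of what that could look like with Java config (the JDBC URLs and profile names are placeholder assumptions; only the production database name is taken from the question):

import javax.sql.DataSource;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;
import org.springframework.jdbc.datasource.DriverManagerDataSource;

@Configuration
public class DataSourceConfig {

    // Active when the JVM is started with -Dspring.profiles.active=production
    @Bean
    @Profile("production")
    public DataSource productionDataSource() {
        return new DriverManagerDataSource("jdbc:postgresql://localhost:5432/my-production-db");
    }

    // Active when the JVM is started with -Dspring.profiles.active=test
    @Bean
    @Profile("test")
    public DataSource testDataSource() {
        return new DriverManagerDataSource("jdbc:postgresql://localhost:5432/my-test-db");
    }
}

The integration tests can then run against the same controllers with the test profile active, so there is no need to duplicate the web service entry point.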
