Spring OAuth 2 + Spring Data Neo4j multi-tenancy

I'm going to implement multi-tenancy support in my Spring OAuth 2 + Spring Data Neo4j project.
I have configured my OAuth2 Authorization Server with a few different clients, each with its own clientId.
I have also added a base TenantEntity to my Spring Data Neo4j models:
@NodeEntity
public abstract class TenantEntity extends BaseEntity {

    private String tenantId;

    public String getTenantId() {
        return tenantId;
    }

    public void setTenantId(String tenantId) {
        this.tenantId = tenantId;
    }
}
All of my existing Spring Data Neo4j entities now extend this TenantEntity.
Next I'm going to rewrite all of my Neo4j queries to support this tenantId parameter.
For example, the current query:
MATCH (d:Decision)<-[:DEFINED_BY]-(c:Criterion) WHERE id(d) = {decisionId} AND NOT (c)<-[:CONTAINS]-(:CriterionGroup) RETURN c
I'm going to rewrite as follows:
MATCH (d:Decision)<-[:DEFINED_BY]-(c:Criterion) WHERE id(d) = {decisionId} AND d.tenantId = {tenantId} AND c.tenantId = {tenantId} AND NOT (c)<-[:CONTAINS]-(:CriterionGroup) RETURN c
For the tenantId I'm going to use the OAuth2 clientId and store it with every Neo4j entity.
Is this a correct approach to implementing multi-tenancy, or do Spring OAuth2/Spring Data Neo4j offer something more standard for this purpose out of the box?

Since Neo4j currently has no built-in feature to support multi-tenancy, if you particularly need it, it must be worked around as you have done. Your solution looks reasonable.
Alternatively, since licensing is per machine, it is possible to use Docker, for example, and spin up multiple Neo4j instances, each on a different port.
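Since every query now needs the extra parameter, it helps to centralize how it is attached. Below is a minimal, framework-free sketch of that pattern; the class and method names are illustrative, and it assumes the tenantId has already been resolved from the OAuth2 clientId of the current authentication (that lookup is Spring Security-specific and not shown here):

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative helper: every Cypher query's parameter map gets the caller's
// tenantId merged in, so individual repositories cannot forget to pass it.
public class TenantQueryParams {

    // Returns a copy of the query parameters with the tenantId added;
    // the input map is left untouched.
    public static Map<String, Object> withTenant(Map<String, Object> params, String tenantId) {
        Map<String, Object> merged = new HashMap<>(params);
        merged.put("tenantId", tenantId);
        return merged;
    }
}
```

A call such as withTenant(params, clientId) would then supply both {decisionId} and {tenantId} to the rewritten query above.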


What is the best approach to separate entities in a microservices architecture (Spring Boot)?

I'm designing a new project with microservices; the first principle, which was already implemented, is that each microservice has its own DB schema. I have a simple question about architecture.
I'll explain with a simple example.
I've created a Location service. Here is some code from the controller:
@CrossOrigin(origins = "http://localhost:4200", maxAge = 3600)
@RestController
@RequestMapping("/locations")
@Slf4j
public class LocationController {

    @Autowired
    LocationService locationService;

    @GetMapping("/Cntry")
    public List<Country> getCountries() {
        return locationService.getCountries();
    }

    @GetMapping("/States/{id}")
    public List<State> getStatesForCountry(@PathVariable("id") String countryId) {
        return locationService.getStatesForCountry(Integer.valueOf(countryId));
    }

    @GetMapping("/Cntry/{code}")
    public Country getCountry(@PathVariable("code") String code) {
        return locationService.getCountry(code);
    }
}
As you can see above, the Location service, which has a local DB, holds all the countries and states.
The Location service holds more entities related to the project, such as Location, so it is a multi-purpose microservice.
The problem I'm facing is that each microservice can have entities with a country_id, for example,
but at runtime it is almost guaranteed to need the country name, which means a service call (via WebClient).
Here is a sample of such a service call:
@JsonIgnore
public String getCountryString() {
    String url = MySpringBootServiceApplication.LOCATION_BASE_URL + "locations/CntryStr/" + countryId;
    WebClient client = WebClient.create(url);
    ResponseEntity<String> response = client.get()
            .retrieve()
            .toEntity(String.class)
            .block();
    return response.getBody();
}
I have two problems here that I need to solve:
Is there a solution (architecture) to avoid calling to get the country string every time and from each microservice?
A DTO in another service has a country_id, but the user is looking for the name, so is there a better way than making a WebClient call inside the DTO (which doesn't make sense)?
Thanks for your help.
You can solve both problems by denormalizing: have each service maintain its own local mapping of country codes to names.
You can still have the location service govern updates to the code-to-name mapping; it then publishes events (in the rare case when a mapping changes) for consumption by the other services, signalling that they should update their respective copies. Because the responsibility for updating this mapping lives in one service while (many) other services have the responsibility for presenting it, this is an example of the Command/Query Responsibility Segregation (CQRS) pattern.
In general, taking a normalized relational schema and turning it into a microservice architecture leads to the issue you're facing. Denormalization can avoid the problem, and focusing on the operations (the verbs) rather than the data (the nouns) leads to a much more effective and easier-to-work-with architecture.
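A minimal sketch of such a denormalized read model (all names here are illustrative, not from the original post): the location service publishes a change event, each consuming service applies it to a local map, and lookups then no longer need a WebClient call. The event transport (Kafka, RabbitMQ, etc.) is assumed and not shown.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative local read model kept by each consuming service.
public class CountryNameCache {

    // Event the owning (location) service would publish when a mapping changes.
    public static class CountryRenamed {
        public final String code;
        public final String name;
        public CountryRenamed(String code, String name) {
            this.code = code;
            this.name = name;
        }
    }

    private final Map<String, String> namesByCode = new ConcurrentHashMap<>();

    // Called from the service's event consumer (e.g. a message listener).
    public void apply(CountryRenamed event) {
        namesByCode.put(event.code, event.name);
    }

    // Local lookup replacing the per-request WebClient call in the DTO.
    public String nameFor(String code) {
        return namesByCode.getOrDefault(code, "unknown");
    }
}
```

The write side stays in one service; this class is only the query side, which is what makes the split an instance of CQRS.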

Custom Query REST Data with Panache

I am using REST Data with Panache for JAX-RS RESTful web service generation by extending PanacheEntityResource. In Spring land there is a query builder mechanism that allows a Spring Data repository to generate an SQL query based on the name and return type of a custom method signature.
I'm trying to achieve the same thing using Panache, so far unsuccessfully.
@ResourceProperties(path = "tasks", paged = false)
public interface TaskResource extends PanacheEntityResource<Task, UUID> {
    List<Task> findByOrganizationId(@QueryParam("organizationId") UUID organizationId);
}
I want to pass the organization ID as a query parameter, such that my request will be http://localhost:8080/tasks?organizationId=1e7e669d-2935-4d6f-8b23-7a2497b0f5b0, and my response will return a list of Tasks whose organization ID matches the one provided. Is there support for this functionality?
That functionality is currently not supported.
See https://quarkus.io/guides/rest-data-panache and https://quarkus.io/guides/spring-data-rest for more details

Filter DynamoDB from a list in Spring Boot

I have a Spring boot application using an AWS DynamoDb table which contains a list of items as such:
@DynamoDBTable(tableName = MemberDbo.TABLENAME)
public class MemberDbo {

    public static final String TABLENAME = "Member";

    @NonNull
    @DynamoDBHashKey
    @DynamoDBAutoGeneratedKey
    protected String id;

    // some more parameters

    @DynamoDBAttribute
    private List<String> membergroupIds;
}
I would like to find all members belonging to one specific groupId. Ideally I would like to use CrudRepository like this:
@EnableScan
public interface MemberRepository extends CrudRepository<MemberDbo, String> {
    List<MemberDbo> findByMembergroupIdsContaining(String membergroupIds); // actually I want to filter by ONE groupId
}
Unfortunately, the query above is not working (java.lang.String cannot be cast to java.util.List).
Any suggestions on how to build a correct query with CrudRepository?
Any suggestions on how to create a query with the Amazon SDK or some other Spring Boot-compliant method?
Alternatively can I create a dynamoDb index somehow and filter by that index?
Or do I need to create and maintain a new table programmatically containing the mapping between membergroupIds and members (which results in a lot of overhead in code and costs)?
A solution for CrudRepository is preferred since I may use Paging in future versions and CrudRepository easily supports paging.
If I have understood correctly, this looks very easy. You are using DynamoDBMapper for model persistence.
You have a member object which contains a list of membergroupIds, and all you want to do is retrieve this from the database. If so, using DynamoDBMapper you would do something like this:
AmazonDynamoDB dynamoDBClient = new AmazonDynamoDBClient();
DynamoDBMapper mapper = new DynamoDBMapper(dynamoDBClient);
MemberDbo member = mapper.load(MemberDbo.class, hashKey, rangeKey);
member.getMembergroupIds();
Where you need to replace hashKey and rangeKey. You can omit rangeKey if you don't have one.
DynamoDBMapper also supports paging out of the box.
DynamoDBMapper is an excellent model persistence tool: it has strong features, it's simple to use, and because it's written by AWS it has seamless integration with DynamoDB. Its creators have also clearly been influenced by Spring. In short, I would use DynamoDBMapper for model persistence and Spring Boot for model-controller concerns.
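For the direction the question actually asks about (all members whose membergroupIds contains one given group id), one option is to filter loaded members in application code. Here is a stdlib-only sketch with illustrative names; in DynamoDB itself the rough equivalent would be a scan with a contains() filter expression, which filters server-side but still reads the whole table, so the cost trade-off the question mentions remains.

```java
import java.util.List;
import java.util.stream.Collectors;

// Illustrative in-memory filter mirroring what a DynamoDB scan with a
// contains(membergroupIds, :gid) filter expression would return.
public class MemberGroupFilter {

    // Minimal stand-in for MemberDbo, so this sketch compiles on its own.
    public static class Member {
        public final String id;
        public final List<String> membergroupIds;
        public Member(String id, List<String> membergroupIds) {
            this.id = id;
            this.membergroupIds = membergroupIds;
        }
    }

    // Keeps only the members whose group list contains the given group id.
    public static List<Member> inGroup(List<Member> members, String groupId) {
        return members.stream()
                .filter(m -> m.membergroupIds.contains(groupId))
                .collect(Collectors.toList());
    }
}
```

If this lookup is frequent, a global secondary index (or the separate group-to-members table mentioned in the question) avoids scanning, at the cost of maintaining the duplicate data.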

GraphQL for aggregation using Spring

My application has a REST API built, and we are planning to use GraphQL. I would like to know if there is any documentation or online reference covering the integration of the GraphQL Apollo client with Spring on the server side. Any help please.
Your question is way too broad to be answered precisely. Any GraphQL client will work with any GraphQL server, and the server can be implemented with any framework stack, as GraphQL is only the API layer.
For a minimal (but pretty complete) Spring Boot example with graphql-java, using graphql-spqr, see https://github.com/leangen/graphql-spqr-samples
In short, you create a normal controller where you create the GraphQL schema, initialize the runtime, and expose an endpoint to receive queries.
@RestController
public class GraphQLSampleController {

    private final GraphQL graphQL;

    @Autowired
    public GraphQLSampleController(/* inject the services needed */) {
        GraphQLSchema schema = ...; // create the schema
        graphQL = GraphQL.newGraphQL(schema).build();
    }

    // Expose an endpoint for queries
    @PostMapping(value = "/graphql", consumes = MediaType.APPLICATION_JSON_UTF8_VALUE, produces = MediaType.APPLICATION_JSON_UTF8_VALUE)
    @ResponseBody
    public Object endpoint(@RequestBody Map<String, Object> request) {
        return graphQL.execute((String) request.get("query"));
    }
}
This is the bare minimum. For a complete tutorial, using graphql-java-tools but without Spring, check out the Java track on HowToGraphQL.

Spring AbstractRoutingDataSource is caching Datasource

I have a Spring MVC application deployed as Multi-Tenant (Single Instance) on Tomcat. All users login to the same application.
User belongs to a Region, and each Region has a separate Database instance.
We are using dynamic DataSource routing with Spring's AbstractRoutingDataSource.
This works correctly only the first time: when User_1 from Region_1 logs into the application, DataSource_1 is correctly assigned.
But when User_2 from Region_2 subsequently logs in, AbstractRoutingDataSource never gets called and DataSource_1 gets assigned again.
It looks like Spring AbstractRoutingDataSource is caching the DataSource.
Is there a way to change this behaviour of AbstractRoutingDataSource and get it working correctly?
You should provide more details for a better understanding.
I think the problem might be related to changing the tenant identifier. You can use a ThreadLocal to store the tenant identifier:
public class ThreadLocalStorage {

    private static final ThreadLocal<String> tenant = new ThreadLocal<>();

    public static void setTenantName(String tenantName) {
        tenant.set(tenantName);
    }

    public static String getTenantName() {
        return tenant.get();
    }
}
AbstractRoutingDataSource should use this to retrieve the tenantId:
public class TenantAwareRoutingDataSource extends AbstractRoutingDataSource {

    @Override
    protected Object determineCurrentLookupKey() {
        return ThreadLocalStorage.getTenantName();
    }
}
And you should set the tenantId on each request, for the current thread that is handling the request, so that subsequent operations hit the correct database. For example, you may add a Spring Security filter to extract the tenant identifier from the JWT token or from the host subdomain.
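Here is a framework-free sketch of that per-request lifecycle (the subdomain parsing and all names are illustrative assumptions, not code from the question; in Spring this logic would live in a Filter or HandlerInterceptor). The important detail is clearing the ThreadLocal in a finally block: servlet threads are pooled, so a leftover value would leak one tenant's routing decision into the next request handled by the same thread.

```java
import java.util.function.Supplier;

// Illustrative per-request tenant handling around the request-processing chain.
public class TenantRequestHandling {

    private static final ThreadLocal<String> TENANT = new ThreadLocal<>();

    // Derives the tenant from the request host, e.g. "region1.example.com" -> "region1".
    public static String tenantFromHost(String host) {
        int dot = host.indexOf('.');
        return dot > 0 ? host.substring(0, dot) : null;
    }

    // What a servlet filter's doFilter would do around chain.doFilter(...).
    public static <T> T withTenant(String host, Supplier<T> request) {
        TENANT.set(tenantFromHost(host));
        try {
            return request.get();
        } finally {
            TENANT.remove(); // pooled threads must not keep the previous tenant
        }
    }

    // What determineCurrentLookupKey() would read while the request runs.
    public static String currentTenant() {
        return TENANT.get();
    }
}
```

With this in place, AbstractRoutingDataSource re-evaluates the lookup key on every connection request, and each user is routed to their region's DataSource regardless of who logged in first.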
