NestJS Request Scoped Multitenancy for Multiple Databases

Looking to implement a multi-tenant NestJS solution using the new request injection scope feature of NestJS 6.
For any given service I assume I could do something like this:
@Injectable({ scope: Scope.REQUEST })
export class ReportService implements OnModuleInit { ... }
then, in the constructor, determine the tenant from the request, connect to the appropriate database, and instantiate repositories for the new connection.
I'm wondering if this is the most straightforward way to go about it?
Instead of updating each service, is it possible to override the connection provider and scope that to the request?

Here's what we ended up doing...
Create a simple, global TenancyModule bound to the request scope:
tenancy.module.ts
import { Global, Module, Scope } from '@nestjs/common';
import { REQUEST } from '@nestjs/core';
import { getConnection } from 'typeorm';

const connectionFactory = {
  provide: 'CONNECTION',
  scope: Scope.REQUEST,
  useFactory: (req) => {
    const tenant = someMethodToDetermineTenantFromHost(req.headers.host);
    return getConnection(tenant);
  },
  inject: [REQUEST],
};

@Global()
@Module({
  providers: [connectionFactory],
  exports: ['CONNECTION'],
})
export class TenancyModule {}
Inject request-specific 'CONNECTION' into module services from which to retrieve repositories:
user.service.ts
...
@Injectable({ scope: Scope.REQUEST })
export class UserService {
  private readonly userRepository: Repository<User>;

  constructor(@Inject('CONNECTION') connection) {
    this.userRepository = connection.getRepository(User);
  }
}

I would recommend using the approach by @nurikabe: a request-scoped factory provider together with request-scoped services. NestJS itself has a similar factory example in the docs.
But for the sake of completeness, there is another approach: you could also use a middleware and attach the connection to the request object, as described in this answer to a similar question. However, attaching things like a connection to the request via a middleware circumvents the DI mechanism and alienates the request object by making it behave like a service container that delivers the connection, so the factory approach should be preferred.

It is preferable to inject the connection as a provider (coming from a factory) rather than attaching it to the request.
Note that both approaches will inevitably lead to an increase in the number of connections being created. This can cause performance issues, even with connection pooling. For this reason, such an approach (one connection per tenant) is only really efficient when the number of tenants is relatively low.
One way to do it with a multi-schema approach is fully documented in this article.

Related

Spring GraphQL with WebMvc getting request headers

I have a Spring GraphQL project. Each data fetcher (@SchemaMapping) will get data from a remote API protected by authentication.
I need to propagate the authorization header from the original request (that I can see inside the @QueryMapping method) to the data fetcher.
In the data fetcher I can use RequestContextHolder to get the request and the headers like this:
val request = (RequestContextHolder.getRequestAttributes() as ServletRequestAttributes?)?.getRequest()
val token = request?.getHeader("authorization")
This works but I am worried it could break.
Spring GraphQL documentation states that:
A DataFetcher and other components invoked by GraphQL Java may not always execute on the same thread as the Spring MVC handler, for example if an asynchronous WebInterceptor or DataFetcher switches to a different thread.
I tried adding a ThreadLocalAccessor component but it seems to me from debugging and reading source code that the restoreValue method gets called only in a WebFlux project.
How can I be sure to get the right RequestContextHolder in a WebMvc project?
UPDATE
I will add some code to better explain my use case.
CurrentActivity is the parent entity while Booking is the child entity.
I need to fetch the entities from a backend with APIs protected by authentication. I receive the auth token in the original request (the one with the graphql query).
CurrentActivityController.kt
@Controller
class CurrentActivityController @Autowired constructor(
    val retrofitApiService: RetrofitApiService,
    val request: HttpServletRequest
) {
    @QueryMapping
    fun currentActivity(graphQLContext: GraphQLContext): CurrentActivity {
        // Get auth token from request.
        // Can I use the injected request here?
        // Or do I need to use Filter + ThreadLocalAccessor to get the token?
        val token = request.getHeader("authorization")
        // Can I save the token to the GraphQL context?
        graphQLContext.put("AUTH_TOKEN", token)
        return runBlocking {
            // Authenticated API call to backend to get the CurrentActivity
            return@runBlocking retrofitApiService.apiHandler.activitiesCurrent(mapOf("authorization" to token))
        }
    }
}
BookingController.kt
@Controller
class BookingController @Autowired constructor(val retrofitApiService: RetrofitApiService) {
    @SchemaMapping
    fun booking(
        currentActivity: CurrentActivity,
        graphQLContext: GraphQLContext,
    ): Booking? {
        // Can I retrieve the token from the GraphQL context?
        val token: String = graphQLContext.get("AUTH_TOKEN")
        return runBlocking {
            // Authenticated API call to backend to get the Booking entity
            return@runBlocking currentActivity.currentCarBookingId?.let { currentCarBookingId ->
                retrofitApiService.apiHandler.booking(
                    headerMap = mapOf("authorization" to token),
                    bookingId = currentCarBookingId
                )
            }
        }
    }
}
The ThreadLocalAccessor concept is really meant as a way to store/restore context values in an environment where execution can happen asynchronously, on a different thread if no other infrastructure already supports that.
In the case of Spring WebFlux, the Reactor context is already present and fills this role. A WebFlux application should use reactive DataFetchers and the Reactor Context natively.
ThreadLocalAccessor implementations are mostly useful for Spring MVC apps. Any ThreadLocalAccessor bean will be auto-configured by the starter.
In your case, you could follow one of the samples and have a similar arrangement (a minimal sketch of the first step is shown below):
Declare a Servlet filter that extracts the header value and sets it as a request attribute with a well-known name
Create a ThreadLocalAccessor component and use it to store request attributes into the context
Fetch the relevant attribute from your DataFetcher
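Such a filter might look roughly like this (the filter class and the attribute name are assumptions for illustration, not taken from the samples; the ThreadLocalAccessor from step 2 would then store/restore that attribute around asynchronous execution):
import java.io.IOException;
import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.springframework.stereotype.Component;
import org.springframework.web.filter.OncePerRequestFilter;

@Component
public class AuthorizationAttributeFilter extends OncePerRequestFilter {

    @Override
    protected void doFilterInternal(HttpServletRequest request, HttpServletResponse response,
            FilterChain filterChain) throws ServletException, IOException {
        // Copy the header into a request attribute under a well-known name,
        // so it can be propagated to data fetchers running on other threads.
        request.setAttribute("AUTH_TOKEN", request.getHeader("authorization"));
        filterChain.doFilter(request, response);
    }
}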
I tried adding a ThreadLocalAccessor component but it seems to me from debugging and reading source code that the restoreValue method gets called only in a WebFlux project.
Note that restoreValue is only called if the current thread is not the one the values were originally extracted from (otherwise nothing needs to be done, since the values are already in the ThreadLocal).
I've successfully tested this approach, getting the "authorization" HTTP header value from the RequestContextHolder. It seems you tried this approach unsuccessfully - could you try with 1.0.0-M3 and let us know if it doesn't work? You can create an issue on the project with a link to a sample project that reproduces the issue.
Alternate solution
If you don't want to deal with ThreadLocal-bound values, you can always use a WebInterceptor to augment the GraphQLContext with custom values.
Here's an example:
@Component
public class AuthorizationWebInterceptor implements WebInterceptor {

    @Override
    public Mono<WebOutput> intercept(WebInput webInput, WebInterceptorChain chain) {
        String authorization = webInput.getHeaders().getFirst(HttpHeaders.AUTHORIZATION);
        webInput.configureExecutionInput((input, inputBuilder) ->
                inputBuilder
                        .graphQLContext(contextBuilder -> contextBuilder.put("Authorization", authorization))
                        .build()
        );
        return chain.next(webInput);
    }
}
With that, you can fetch that value from the GraphQL context:
@QueryMapping
public String greeting(GraphQLContext context) {
    String authorization = context.getOrDefault("Authorization", "default");
    return "Hello, " + authorization;
}

Spring Cloud Stream 3 RabbitMQ consumer not working

I'm able to make Spring + RabbitMQ work the non-functional way (prior to 2.0?), but I'm trying to use the functional pattern, as the previous one is deprecated.
I've been following this doc: https://docs.spring.io/spring-cloud-stream/docs/3.1.0/reference/html/spring-cloud-stream.html#_binding_and_binding_names
The queue (consumer) is not being created in Rabbit with the new method. I can see the connection being created but without any consumer.
I have the following in my application.properties:
spring.cloud.stream.function.bindings.approved-in-0=approved
spring.cloud.stream.bindings.approved.destination=myTopic.exchange
spring.cloud.stream.bindings.approved.group=myGroup.approved
spring.cloud.stream.bindings.approved.consumer.back-off-initial-interval=2000
spring.cloud.stream.rabbit.bindings.approved.consumer.queueNameGroupOnly=true
spring.cloud.stream.rabbit.bindings.approved.consumer.bindingRoutingKey=myRoutingKey
which is replacing:
spring.cloud.stream.bindings.approved.destination=myTopic.exchange
spring.cloud.stream.bindings.approved.group=myGroup.approved
spring.cloud.stream.bindings.approved.consumer.back-off-initial-interval=2000
spring.cloud.stream.rabbit.bindings.approved.consumer.queueNameGroupOnly=true
spring.cloud.stream.rabbit.bindings.approved.consumer.bindingRoutingKey=myRoutingKey
And the new class
@Slf4j
@Service
public class ApprovedReceiver {

    @Bean
    public Consumer<String> approved() {
        // I also saw that it's recommended to not use Consumer, but use Function instead
        // https://docs.spring.io/spring-cloud-stream/docs/3.1.0/reference/html/spring-cloud-stream.html#_consumer_reactive
        return value -> log.info("value: {}", value);
    }
}
which is replacing
// BindableApprovedChannel.class
@Configuration
public interface BindableApprovedChannel {
    @Input("approved")
    SubscribableChannel getApproved();
}

// ApprovedReceiver.class
@Service
@EnableBinding(BindableApprovedChannel.class)
public class ApprovedReceiver {

    @StreamListener("approved")
    public void handleMessage(String payload) {
        log.info("value: {}", payload);
    }
}
Thanks!
If you have multiple beans of type Function, Supplier or Consumer (which could be declared by third party libraries), the framework does not know which one to bind to.
Try setting the spring.cloud.function.definition property to approved.
https://docs.spring.io/spring-cloud-stream/docs/3.1.3/reference/html/spring-cloud-stream.html#spring_cloud_function
In the event you only have a single bean of type java.util.function.[Supplier/Function/Consumer], you can skip the spring.cloud.function.definition property, since such a functional bean will be auto-discovered. However, it is considered best practice to use the property to avoid any confusion. Sometimes this auto-discovery can get in the way, since a single bean of type java.util.function.[Supplier/Function/Consumer] could be there for purposes other than handling messages, yet being the only one it is auto-discovered and auto-bound. For these rare scenarios you can disable auto-discovery by setting the spring.cloud.stream.function.autodetect property to false.
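Applied to the configuration shown in the question, that amounts to one extra line in application.properties:
spring.cloud.function.definition=approved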
Gary's answer is correct. If adding the definition property alone doesn't resolve the issue I would recommend sharing what you're doing for your supplier.
This is also a very helpful general discussion for transitioning from imperative to functional with links to repos with more in depth examples: EnableBinding is deprecated in Spring Cloud Stream 3.x

Spring MVC - How can I dynamically switch the implementing class?

My question is about finding the best technique for implementing a bean switcher for managing different sites with different persistence layers.
I designed a server for customer management and e-commerce services.
For each service I am using an API layer, a controller layer and a persistence layer.
This server manages multiple sites for different clients.
Up until now, all my sites have used the same persistence layer.
Recently, I received a request to integrate the customer services from an outside server; in other words, integration with an external service.
I am trying to solve this by adding another persistence layer that uses the external service's API, and when I get a request from this site, switching the persistence layer to the external service (like a factory).
Let's assume I have details about the site the request came from.
My goal is to use a kind of 'factory' for switching between the persistence layers according to the parameters that I pull from the request.
How do I dynamically switch the implementing class of the interface using the Spring MVC tools?
I found this solution: https://www.baeldung.com/spring-dynamic-autowire, but I don't think it is the best solution.
Can anyone share a different technique to achieve my goal?
Thank you so much for any help!!!
Asaf
You can use the Factory pattern to solve this issue.
Define a class that autowires all variants of the data service. You should first define an abstraction via an interface, something like this:
interface SomeDao {
    ...
}

@Service("someDaoMysql")
class SomeDaoMysqlImpl implements SomeDao {
    ...
}

@Service("someDaoApi")
class SomeDaoApiImpl implements SomeDao {
    ...
}
Once you have these different variants of the SomeDao interface, return one of them based on some parameters. A factory interface might look like this.
enum DaoType {
    API,
    MYSQL;
}

interface SomeDaoFactory {
    SomeDao getDao(DaoType type);
}

@Component
class SomeDaoFactoryImpl implements SomeDaoFactory {

    @Autowired @Qualifier("someDaoMysql") SomeDao someDaoMysql;
    @Autowired @Qualifier("someDaoApi") SomeDao someDaoApi;

    public SomeDao getDao(DaoType type) {
        switch (type) {
            case API:
                return someDaoApi;
            case MYSQL:
                return someDaoMysql;
            default:
                throw new IllegalStateException("Unknown type: " + type);
        }
    }
}
Usage
@Service
public class SomeFancyServiceImpl implements SomeFancyService {

    @Autowired SomeDaoFactory someDaoFactory;

    @Override
    public void doSomething() {
        SomeDao dao = someDaoFactory.getDao(DaoType.API);
        // do something with dao
    }
}

How to have dynamic base URL with Quarkus MicroProfile Rest Client?

The Quarkus guide Using the REST Client explains how to use the MicroProfile REST Client. For the base URL, application.properties can be used:
org.acme.restclient.CountriesService/mp-rest/url=https://restcountries.eu/rest
With the above approach, I can't have a dynamic base URL.
I am able to achieve it by using RestClientBuilder, as explained in the MicroProfile Rest Client documentation. The downside of this approach is not having the auto-negotiation capability.
SimpleGetApi simpleGetApi = RestClientBuilder.newBuilder().baseUri(getApplicationUri()).build(SimpleGetApi.class);
Is there another or better way to achieve this? Thanks.
While it is true that the MP Rest Client does not allow you to set the base URI dynamically when you use declarative/injected clients, there are some (albeit hacky) ways to achieve that.
One is to use standard ClientRequestFilter which can modify the URL:
@Provider
@Slf4j
public class Filter implements ClientRequestFilter {

    @Inject RequestScopeHelper helper;

    @Override
    public void filter(ClientRequestContext requestContext) throws IOException {
        if (helper.getUrl() != null) {
            URI newUri = URI.create(requestContext.getUri().toString().replace("https://originalhost.com", helper.getUrl()));
            requestContext.setUri(newUri);
        }
    }
}
Where RequestScopeHelper is some helper class (e.g. a request-scoped bean) through which you can pass the dynamic URL, for example:
@Inject
RequestScopeHelper helper;

@Inject
@RestClient
TestIface myApiClient;

public void callSomeAPIWithDynamicBaseUri(String dynamic) {
    helper.setUrl(dynamic);
    myApiClient.someMethod();
}
The second option is to use the MP Rest Client SPI, namely the RestClientListener, which allows you to modify the rest clients when they are built.
For this to work, you have to set the scope of your rest client to RequestScoped so that a new instance is created for each request (if you use a singleton, for example, the client is only created once and your listener will only be called once). You can do this via Quarkus properties:
quarkus.rest-client."com.example.MyRestIface".scope=javax.enterprise.context.RequestScoped
public class MyListener implements RestClientListener {

    @Override
    public void onNewClient(Class<?> serviceInterface, RestClientBuilder builder) {
        // Obtain the dynamic URI from somewhere, e.g. a request-scoped bean lookup,
        // or a dynamic config source (create a new in-memory ConfigSource, set the
        // corresponding rest client url property before invoking your rest client,
        // then inside this listener read it via ConfigProvider.getConfig().getProperty...).
        String newUri = ...;
        builder.baseUri(URI.create(newUri));
    }
}
Don't forget to register this listener as a service provider (META-INF/services/org.eclipse.microprofile.rest.client.spi.RestClientListener).
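For reference, that registration file is a standard ServiceLoader descriptor: assuming the listener above lives in a package such as com.example (a hypothetical name), the file META-INF/services/org.eclipse.microprofile.rest.client.spi.RestClientListener would contain the single line:
com.example.MyListener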
Another option is to use a custom CDI producer that produces the REST client instances for you; then you control all the client configuration yourself. You can use the RestClientBase from the Quarkus REST client, which is exactly what Quarkus uses under the hood during the deployment phase to construct client instances. You will, however, have to duplicate all the logic related to registration of handlers, interceptors, etc.
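A minimal sketch of that producer approach, using the portable RestClientBuilder instead of the internal RestClientBase (the producer class name is illustrative, and RequestScopeHelper is the same helper bean mentioned above):
import java.net.URI;
import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.context.RequestScoped;
import javax.enterprise.inject.Produces;
import javax.inject.Inject;
import org.eclipse.microprofile.rest.client.RestClientBuilder;

@ApplicationScoped
public class DynamicSimpleGetApiProducer {

    @Inject
    RequestScopeHelper helper;

    @Produces
    @RequestScoped
    public SimpleGetApi simpleGetApi() {
        // Build the client manually so the base URI can be chosen per request;
        // any providers Quarkus would normally register have to be added here as well.
        return RestClientBuilder.newBuilder()
                .baseUri(URI.create(helper.getUrl()))
                .build(SimpleGetApi.class);
    }
}
Injection points would then use a plain @Inject SimpleGetApi (without the @RestClient qualifier), since in this sketch the bean comes from the custom producer rather than from Quarkus.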
Do keep in mind that any of these solutions will make debugging and problem analysis more challenging, because you will now have multiple places where the URI is controlled (MP config/Quarkus properties, env vars, your custom impl...), so you need to be careful with your approach and maybe add some explicit log messages when you override the URI manually.
MicroProfile REST Client in Quarkus does allow you to use a dynamic base URL with this simple "hack":
Just put an empty String in the @Path annotations for your API interface, like this:
import javax.ws.rs.GET;
import javax.ws.rs.Path;

@Path("")
public interface SimpleGetApi {

    @Path("")
    @GET
    String callWithDynamicUrl(); // it can be String or any return type you want
}
After that you are ready to call your dynamic base URL:
import org.eclipse.microprofile.rest.client.RestClientBuilder;
import java.net.URI;

public class Example {
    public static void main(String[] args) {
        URI anyDynamicUrl = URI.create("http://restcountries.eu/rest/some/dynamic/path");
        SimpleGetApi simpleGetApi = RestClientBuilder.newBuilder().baseUri(anyDynamicUrl)
                .build(SimpleGetApi.class);
        simpleGetApi.callWithDynamicUrl();
    }
}

Configuring Spring MockMvc to use custom argument resolver before built-in ones

I have a straightforward test case. I have a controller which has a parameter of a type Spring doesn't support by default, so I wrote a custom resolver.
I create the mock mvc instance I'm using like so:
mvc = MockMvcBuilders.standaloneSetup(controller).setCustomArgumentResolvers(new GoogleOAuthUserResolver()).build();
However, Spring is also registering almost 30 other argument resolvers, one of which is general enough that it is getting used to resolve the argument before mine. How can I set or sort the resolvers so that mine is invoked first?
This worked for me without reflection:
@RequiredArgsConstructor
@Configuration
public class CustomerNumberArgumentResolverRegistration {

    private final RequestMappingHandlerAdapter requestMappingHandlerAdapter;

    @PostConstruct
    public void prioritizeCustomArgumentResolver() {
        final List<HandlerMethodArgumentResolver> argumentResolvers =
                new ArrayList<>(Objects.requireNonNull(requestMappingHandlerAdapter.getArgumentResolvers()));
        argumentResolvers.add(0, new CustomerNumberArgumentResolver());
        requestMappingHandlerAdapter.setArgumentResolvers(argumentResolvers);
    }
}
The issue was that the People class from the Google OAuth library I am using extends Map, and the mock servlet API provides no way to manipulate the order in which the handlers are registered.
I ended up using reflection to reach into the mock's guts and remove the offending handler.
