In app.module.ts I have the following:
@Module({
  imports: [
    ...,
    GraphQLModule.forRoot<ApolloGatewayDriverConfig>({
      server: {
        context: getContext,
      },
      driver: ApolloGatewayDriver,
      gateway: {
        buildService: ({ name, url }) => {
          return new RemoteGraphQLDataSource({
            url,
            willSendRequest({ request, context }: any) {
              ...
            },
          });
        },
        supergraphSdl: new IntrospectAndCompose({
          subgraphs: [
            { name: 'iam', url: API_URL_IAM },
          ],
        }),
      },
    }),
  ],
  ...
})
Here getContext is just a regular function that is not part of the NestJS context (it has no access to dependency injection or modules), like below:
export const getContext = async ({ req }) => {
  return {};
};
Is there any way to use NestJS services, instead of this plain functional approach, to build the context for the GraphQL gateway in NestJS?
Thanks in advance for any kind of help.
I believe you're looking to create a service that is @Injectable and expose it via a provider. A provider is what satisfies any dependency injection that's needed.
In your scenario, I would import other modules as necessary. For building the context, I would create a config file built from environment variables, then create a custom provider that reads those variables and supplies that implementation of the class/service to the other classes as their dependency.
For example, if I have a GraphQL module, I would import the independent module and then, in the providers section, register the handler/service classes and their dependencies as @Injectable providers. Once your service class is created based on your config (which your provider class would handle), you can attach that service class to your GraphQL configuration, for example to point the URL at your dev/prod environments.
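Concretely, one way to do that is to switch to forRootAsync and inject the service into the factory that builds the gateway options; the context function can then delegate to it. The ContextService/ContextModule names below are made up for illustration, so treat this as a sketch rather than the exact code:
// context.service.ts -- hypothetical injectable used to build the GraphQL context
import { Injectable } from '@nestjs/common';

@Injectable()
export class ContextService {
  async buildContext({ req }: { req: any }) {
    // use any injected dependencies here (repositories, config, etc.)
    return { userId: req?.headers?.['x-user-id'] };
  }
}

// app.module.ts
import { Module } from '@nestjs/common';
import { GraphQLModule } from '@nestjs/graphql';
import { ApolloGatewayDriver, ApolloGatewayDriverConfig } from '@nestjs/apollo';
import { IntrospectAndCompose } from '@apollo/gateway';
import { ContextModule } from './context.module'; // hypothetical module that provides/exports ContextService
import { ContextService } from './context.service';

@Module({
  imports: [
    GraphQLModule.forRootAsync<ApolloGatewayDriverConfig>({
      driver: ApolloGatewayDriver,
      imports: [ContextModule],
      inject: [ContextService],
      useFactory: (contextService: ContextService) => ({
        server: {
          // the injected service is now available inside the context function
          context: (ctx: { req: any }) => contextService.buildContext(ctx),
        },
        gateway: {
          supergraphSdl: new IntrospectAndCompose({
            subgraphs: [{ name: 'iam', url: process.env.API_URL_IAM }],
          }),
        },
      }),
    }),
  ],
})
export class AppModule {}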
I want to build a NestJS project where I will separate queries and mutations into two groups: Internal and Public. In the Internal GraphQL module, I will define the path and resolvers without any restrictions, but for the Public one, I want to define a GraphQL module with a path and a JWT guard that looks at the same resolvers but exposes only specific mutations and queries.
I tried to do the following:
GraphQLModule.forRootAsync<ApolloDriverConfig>({
  driver: ApolloDriver,
  useClass: InternalGraphQLConfig,
}),
GraphQLModule.forRootAsync<ApolloDriverConfig>({
  driver: ApolloDriver,
  useClass: PublicGraphQLConfig,
}),
To protect the public endpoint (the path is defined in PublicGraphQLConfig via GqlOptionsFactory), I added a NestMiddleware in which I check req.originalUrl against the path from PublicGraphQLConfig. If the URL is the public one, I check for the JWT; otherwise it is a free, internal URL.
But I do not know how and where I can define the list of queries and mutations for the Public GraphQL module, because I do not want to expose all of them.
From what I can see in the documentation, this approach is not supported and cannot be done correctly like this. Maybe I have to use directives or something else, but I firmly believe someone has had a similar challenge and will share an idea/solution with me.
Edit 1:
Here I will add more details about resolvers and GraphQL Module configuration.
One of my resolvers looks like the following:
...
@Resolver(DogEntity)
class DogResolver {
  @Mutation(...)
  async firstMutation(...) {
    //
  }

  @Mutation(...)
  async secondMutation(...) {
    //
  }

  @Query(...)
  async firstQuery(...) {
    //
  }

  @Query(...)
  async secondQuery(...) {
    //
  }
  ...
}
GraphQL Module configuration looks like this:
...
@Injectable()
export class PublicGraphQLConfig implements GqlOptionsFactory {
  createGqlOptions(): Promise<ApolloDriverConfig> | ApolloDriverConfig {
    return {
      ...
      resolvers: { DogResolver, ... },
      path: '/my/public/route/graphql',
    };
  }
}
The first: it would be amazing if I could add a "global" guard at the GraphQL Module level, for example via a guards parameter in PublicGraphQLConfig. Because that is impossible, and adding any JWT validation in the context parameter makes no sense, I have to add middleware that checks the path parameter from the GraphQL Module configuration.
The Middleware looks like this:
...
@Injectable()
export class RequestResponseLoggerMiddleware implements NestMiddleware {
  use(req: Request, res: Response, next: NextFunction) {
    // For the public endpoint, all resolvers require a JWT token with the Admin flag
    if (req.originalUrl === '/my/public/route/graphql') {
      this.validateJWT(req); // throws an exception inside
    }
    ...
The second: it would be amazing to be able to add a specific Mutation and/or Query in the GraphQL Module configuration. With the resolvers parameter I can only add complete resolvers, not specific queries or mutations. With that, I would be able to expose specific queries and mutations on different endpoints with/without authorization requirements.
A field in the GraphQL Module configuration like the following would be amazing (but, as far as I can see, it does not exist):
...
return {
  ...
  resolvers: {
    DogResolver: firstMutation(),
    DogResolver: firstQuery(),
    ...
  },
  path: '/my/public/route/graphql',
};
...
Problem
In a federated Nest app, a gateway collects all the schemas from the other services and forms a complete graph. The question is: how do I re-run the schema collection after a sub-schema has changed?
Current Workaround
Restarting the gateway solves the problem, but it does not seem like an elegant solution.
Other Resources
Apollo Server supports managed federation, which essentially inverts the dependency between the gateway and the services. Sadly, I couldn't find anything relating it to NestJS.
When configuring the gateway application with NestJS, and when you have already integrated with Apollo Studio, you do not need to define any serviceList in GraphQLGatewayModule. This is what your module initialization should look like:
GraphQLGatewayModule.forRootAsync({
  useFactory: async () => ({
    gateway: {},
    server: {
      path: '/graphql',
    },
  }),
})
The following environment variables should be declared on the machine hosting your gateway application:
APOLLO_KEY: "service:<graphid>:<hash>"
APOLLO_SCHEMA_CONFIG_DELIVERY_ENDPOINT: "https://uplink.api.apollographql.com/"
After deploying a federated GraphQL service, you may need to run the apollo/rover CLI service:push command like below to update the schema; it writes to the schema registry, which is then pushed to the uplink URL that the gateway polls periodically:
npx apollo service:push --graph=<graph id> --key=service:<graph id>:<hash> --variant=<environment name> --serviceName=<service name> --serviceURL=<URL of your service with /graphql path> --endpoint=<URL of your service with /graphql path>
You can add a pollIntervalInMs option to the supergraphSdl configuration.
That will automatically re-poll the services at each interval.
@Module({
  imports: [
    GraphQLModule.forRootAsync<ApolloGatewayDriverConfig>({
      driver: ApolloGatewayDriver,
      useFactory: async () => ({
        server: {
          path: '/graphql',
          cors: true,
        },
        gateway: {
          supergraphSdl: new IntrospectAndCompose({
            subgraphs: [
              { name: 'example-service', url: 'http://localhost:8081/graphql' },
            ],
            pollIntervalInMs: 15000,
          }),
        },
      }),
    }),
  ],
})
export class AppModule {}
In the official documentation this is the correct way to use the cache manager with Redis:
import * as redisStore from 'cache-manager-redis-store';
import { CacheModule, Module } from '@nestjs/common';
import { AppController } from './app.controller';

@Module({
  imports: [
    CacheModule.register({
      store: redisStore,
      host: 'localhost',
      port: 6379,
    }),
  ],
  controllers: [AppController],
})
export class AppModule {}
Source: https://docs.nestjs.com/techniques/caching#different-stores
However, I did not find any documentation on how to pass Redis instance data using REDIS_URI. I need to use it with Heroku and I believe this is a common use case.
EDIT:
now they are type-safe: https://github.com/nestjs/nest/pull/8592
I've been exploring a bit how the Redis client is instantiated. Due to this line, I think the options you pass to CacheModule.register are forwarded to Redis#createClient (from the redis package). Therefore, you can pass the URI like this:
CacheModule.register({
  store: redisStore,
  url: 'redis://localhost:6379',
})
try this and let me know if it works.
edit:
Explaining how I got that:
Taking { store: redisStore, url: '...' } as options.
Here, in CacheModule.register, I found that your options will live under the CACHE_MODULE_OPTIONS token (as a Nest provider).
Then I searched for the places where this token is used and found here that those options are passed to cacheManager.caching, where cacheManager is the cache-manager module.
Looking into cacheManager.caching's code here, you'll see that your options object becomes their args parameter.
Since options.store (redisStore) is the module exported by the cache-manager-redis-store package, the args.store.create method is the same function as redisStore.create.
Thus args.store.create(args) is the same as doing redisStore.create(options), which, in the end, calls Redis.createClient with these options.
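Since the whole options object ends up in Redis.createClient, reading the URI from an environment variable should also work for the Heroku case (Heroku Redis typically exposes it as REDIS_URL). A minimal sketch, assuming @nestjs/config is installed and the variable name matches what your dyno actually provides:
import * as redisStore from 'cache-manager-redis-store';
import { CacheModule, Module } from '@nestjs/common';
import { ConfigModule, ConfigService } from '@nestjs/config';

@Module({
  imports: [
    ConfigModule.forRoot(),
    CacheModule.registerAsync({
      imports: [ConfigModule],
      inject: [ConfigService],
      useFactory: (config: ConfigService) => ({
        store: redisStore,
        // forwarded to redis.createClient, just like the url option above
        url: config.get<string>('REDIS_URL'),
      }),
    }),
  ],
})
export class AppModule {}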
I am developing APIs & microservices in NestJS.
This is my controller function:
@Post()
@MessagePattern({ service: TRANSACTION_SERVICE, msg: 'create' })
create(@Body() createTransactionDto: TransactionDto_create): Promise<Transaction> {
  return this.transactionsService.create(createTransactionDto);
}
When I call the POST API, DTO validation works fine, but when I call this through the microservice, validation does not run and the payload passes to the service without being rejected with an error.
Here is my DTO:
import { IsEmail, IsNotEmpty, IsString } from 'class-validator';

export class TransactionDto_create {
  @IsNotEmpty()
  action: string;

  // @IsString()
  readonly rec_id: string;

  @IsNotEmpty()
  readonly data: Object;

  extras: Object;
  // readonly extras2: Object;
}
When I call the API without the action parameter, it shows the error that action is required, but when I call it from the microservice using
const pattern = { service: TRANSACTION_SERVICE, msg: 'create' };
const data = {id: '5d1de5d787db5151903c80b9', extras:{'asdf':'dsf'}};
return this.client.send<number>(pattern, data)
it does not throw an error and goes straight to the service.
I have also added a global ValidationPipe:
app.useGlobalPipes(new ValidationPipe({
  disableErrorMessages: false, // set true to hide detailed error messages
  whitelist: false, // set true to strip params which are not in the DTO
  transform: false, // set true if you want params converted to the DTO class; by default it's false
}));
How can I make it work for both the API & the microservice? I need everything in one place, with the same functionality, so it can be called either way depending on the client.
ValidationPipe throws an HTTP BadRequestException, whereas the proxy client expects an RpcException.
@Catch(HttpException)
export class RpcValidationFilter implements ExceptionFilter {
  catch(exception: HttpException, host: ArgumentsHost) {
    return new RpcException(exception.getResponse());
  }
}

@UseFilters(new RpcValidationFilter())
@MessagePattern('validate')
async validate(
  @Payload(new ValidationPipe({ whitelist: true })) payload: SomeDTO,
) {
  // payload validates to SomeDTO
  ...
}
I'm going out on a limb and assuming that in your main.ts you have the line app.useGlobalPipes(new ValidationPipe());. From the documentation:
In the case of hybrid apps the useGlobalPipes() method doesn't set up pipes for gateways and microservices. For "standard" (non-hybrid) microservice apps, useGlobalPipes() does mount pipes globally.
You could instead bind the pipe globally from the AppModule, or you could use the @UsePipes() decorator on each route that needs validation via the ValidationPipe.
More info on binding pipes here
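For reference, binding it from the AppModule would look roughly like the following; this is a sketch of the documented APP_PIPE approach, not code from the question:
import { Module, ValidationPipe } from '@nestjs/common';
import { APP_PIPE } from '@nestjs/core';

@Module({
  providers: [
    {
      // registers ValidationPipe through the DI container instead of useGlobalPipes()
      provide: APP_PIPE,
      useClass: ValidationPipe,
    },
  ],
})
export class AppModule {}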
As I understand it, useGlobalPipes works fine for the API but not for the microservice.
The reason is that a Nest microservice set up this way is a hybrid application, which has some restrictions. Please see the paragraph below:
By default a hybrid application will not inherit global pipes, interceptors, guards and filters configured for the main (HTTP-based) application. To inherit these configuration properties from the main application, set the inheritAppConfig property in the second argument (an optional options object) of the connectMicroservice() call.
Please refer to the official Nest documentation.
So, you need to add the inheritAppConfig option to the connectMicroservice() call:
const microservice = app.connectMicroservice(
  {
    transport: Transport.TCP,
  },
  { inheritAppConfig: true },
);
It worked for me!
I'm upgrading to Angular 6 using AngularFire2. My app referenced 2 Firebase projects using code like this to create the database reference:
public initFirebaseApp(config: FirebaseAppConfig, firebaseAppName: string) {
  this._db = new AngularFireDatabase(_firebaseAppFactory(config, firebaseAppName));
}
This code is now broken. I get this:
ERROR in src/app/services/firebase.service.ts(24,25): error TS2554: Expected 5 arguments, but got 1.
Thanks!
AngularFire now supports many more configuration objects via injection, which is why it's expecting more arguments. The constructor currently takes:
constructor(
  @Inject(FirebaseOptionsToken) options: FirebaseOptions,
  @Optional() @Inject(FirebaseNameOrConfigToken) nameOrConfig: string | FirebaseAppConfig | undefined,
  @Optional() @Inject(RealtimeDatabaseURL) databaseURL: string,
  @Inject(PLATFORM_ID) platformId: Object,
  zone: NgZone
)
Now that we support dependency injection, though, I wouldn't suggest directly initializing the way you are in order to support multiple apps. We have an open issue for documenting this, but you can now inject different FirebaseOptions via the FirebaseOptionsToken into different components. If you need multiple databases in the same component, use something like this:
@Injectable()
export class AngularFireDatabaseAlpha extends AngularFireDatabase { }

@Injectable()
export class AngularFireDatabaseBeta extends AngularFireDatabase { }

export function AngularFireDatabaseAlphaFactory(platformId: Object, zone: NgZone): AngularFireDatabaseAlpha {
  return new AngularFireDatabaseAlpha(environment.firebase[0], 'alpha', undefined, platformId, zone);
}

export function AngularFireDatabaseBetaFactory(platformId: Object, zone: NgZone): AngularFireDatabaseBeta {
  return new AngularFireDatabaseBeta(environment.firebase[1], 'beta', undefined, platformId, zone);
}

@NgModule({
  ...,
  providers: [
    ...,
    { provide: AngularFireDatabaseAlpha, deps: [PLATFORM_ID, NgZone], useFactory: AngularFireDatabaseAlphaFactory },
    { provide: AngularFireDatabaseBeta, deps: [PLATFORM_ID, NgZone], useFactory: AngularFireDatabaseBetaFactory },
    ...
  ],
  ...
})
Then you can just rely on Dependency Injection to get AngularFireDatabaseAlpha and AngularFireDatabaseBeta into your component.
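As a rough illustration of that last step (the component name and import path below are made up), injecting both databases into a component could look like:
import { Component } from '@angular/core';
import { AngularFireDatabaseAlpha, AngularFireDatabaseBeta } from './firebase-databases'; // hypothetical path to the classes declared above

@Component({
  selector: 'app-example',
  template: '',
})
export class ExampleComponent {
  // each property now points at its own Firebase project
  constructor(
    private alphaDb: AngularFireDatabaseAlpha,
    private betaDb: AngularFireDatabaseBeta,
  ) {}
}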