Can we allow an API to work using MessagePattern and a REST method as well in NestJS (microservices)?

I have a BFF that needs to send some requests to ServiceA.
ServiceA provides some APIs (GET, POST, ...) that we can call.
For example:
@Get('greeting/:name')
getGreetingMessage(@Param('name') name: string): string {
  return `Hello ${name}`;
}
In a microservice architecture in NestJS, I see that the best practice for a BFF to send requests to other services is to use message patterns (e.g. cmd) with payloads.
For example
constructor(
  @Inject('SERVICE_A') private readonly clientServiceA: ClientProxy,
) {}

getGreetingFromServiceA() {
  const startTs = Date.now();
  const pattern = { cmd: 'greeting' };
  const payload = {};
  return this.clientServiceA
    .send<string>(pattern, payload)
    .pipe(
      map((message: string) => ({ message, duration: Date.now() - startTs })),
    );
}
So to do that, I have to support a MessagePattern handler in ServiceA like:
@MessagePattern({ cmd: 'greeting' })
getGreetingMessage(name: string): string {
  return `Hello ${name}`;
}
So my question is: is there a way to add a MessagePattern to the existing APIs in ServiceA, so that I can call them in two different ways, either by a REST GET request or by a MessagePattern from the BFF?
I'm thinking about using two decorators (Get and MessagePattern), like this:
@Get('greeting/:name')
@MessagePattern({ cmd: 'greeting' })
getGreetingMessage(@Param('name') name: string): string {
  return `Hello ${name}`;
}
If not, how can I use a ClientProxy to make HTTP requests to another microservice from the BFF?

Actually, it is not possible in NestJS to define more than one of these decorators on the same controller method, but we can make the app a hybrid application, which supports different communication protocols, so we can call it via TCP or HTTP and so on, as in this example: https://docs.nestjs.com/faq/hybrid-application
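For reference, a minimal hybrid-application sketch along the lines of that docs page (the TCP transport and the ports here are assumptions, not part of the original answer):

// main.ts: bootstrap one Nest app that listens over both HTTP and TCP
import { NestFactory } from '@nestjs/core';
import { MicroserviceOptions, Transport } from '@nestjs/microservices';
import { AppModule } from './app.module';

async function bootstrap() {
  // Create the HTTP application as usual...
  const app = await NestFactory.create(AppModule);

  // ...then attach a microservice listener to the same app instance.
  app.connectMicroservice<MicroserviceOptions>({
    transport: Transport.TCP,
    options: { port: 3001 },
  });

  await app.startAllMicroservices();
  await app.listen(3000); // HTTP keeps its own port
}
bootstrap();

With that in place, ServiceA can expose the same logic through two separate handlers, one per protocol, rather than stacking both decorators on a single method:

import { Controller, Get, Param } from '@nestjs/common';
import { MessagePattern } from '@nestjs/microservices';

@Controller()
export class GreetingController {
  // Reached over HTTP: GET /greeting/:name
  @Get('greeting/:name')
  getGreetingHttp(@Param('name') name: string): string {
    return `Hello ${name}`;
  }

  // Reached over the TCP transport with { cmd: 'greeting' }
  @MessagePattern({ cmd: 'greeting' })
  getGreetingMessage(name: string): string {
    return `Hello ${name}`;
  }
}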

Related

Apollo server accessing all datasources within resolvers before API request is sent

In a GraphQL query with multiple resolvers, I'm looking for a way to count how many datasource calls there are before the first datasource API request is sent. The project I am working on requires me to either allow or stop all the requests if the number of datasources called within the resolvers of a GraphQL query exceeds a certain number.
I am using an instance of RESTDataSource to make API calls, and each of the resolvers calls one or more datasources from the RESTDataSource class. I've been looking into this and, as far as I know, the RESTDataSource class doesn't have a method that shows me all the datasources requested, because it is only called by the resolver, per request.
My problem is that I can't find a place with access to all the datasources that will be called before the request is sent. In the Apollo Server instantiation, the only thing I have access to is the resolvers, not the datasources within each resolver, and as far as I know, not before the request is made, so I can't stop it if the number of datasource calls exceeds a certain threshold. I was hoping I could access that in the willSendRequest method inside the RESTDataSource class since, from what I know, this is the only method that intercepts the request before it is sent, but I don't think it's possible.
I'm pretty new to Apollo and I've been reading about this but didn't find a solution. I'd really appreciate any help.
Here's a simplified snippet of my code (not the original code):
resolvers.ts
export const resolvers: Resolvers = {
  Query: {
    getCompanies: (_, __, { dataSources }) => {
      return dataSources.companyDatasource.getCompanies();
    },
    getCompany: (_, { name }, { dataSources }) => {
      return dataSources.companyDatasource.getCompanyByName(name);
    },
    getCompanyCEOs: async (_, { name }, { dataSources }) => {
      const company = await dataSources.companyDatasource.getCompanyByName(name);
      return dataSources.companyDatasource.getCEOs(company.id);
    },
    // ...
  },
};
company.datasource.ts
export default class CompanyDatasource extends RESTDataSource {
  async willSendRequest(request) {
    // some logic
  }
  async getCompanies() {
    return this.get(`some_api_url`);
  }
  async getCompanyByName(name) {
    return this.get(`some_api_url?companyName=${name}`);
  }
  // ...other external API endpoints
}
main.ts
const server = new ApolloServer({
  typeDefs: schema,
  resolvers,
  dataSources,
  cache: 'bounded',
});
await server.start();
Edit: I'm limiting the number of unique datasource API calls because the API I'm hitting has a limit. I tried instantiating a counter in the RESTDataSource class and using it in willSendRequest to count how many datasource calls there are, but the problem is that this counts request by request and has no access to all the API requests that are coming from the resolvers. For instance, if the getCompanies API can be called only once and I have 2 upcoming requests, I'll have to let one of them pass and only stop the second, because at that point I don't know there's a second request coming. My team has agreed to stop both requests in case the number of upcoming requests exceeds the available limit for the endpoint (this is specified in our database), so this is why I need to know beforehand how many API requests there are before even allowing the first one.
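For illustration, here is a sketch of the counter attempt described in the edit above (the limit constant and types are mine). It shows why willSendRequest is too late, since each invocation sees only the current request, never the ones still pending in other resolvers:

import { RESTDataSource, RequestOptions } from 'apollo-datasource-rest';

const MAX_CALLS_PER_ENDPOINT = 1; // hypothetical limit, e.g. loaded from the database

export default class CompanyDatasource extends RESTDataSource {
  private callCount = 0;

  async willSendRequest(request: RequestOptions) {
    // Counts one request at a time; the total number of upcoming calls
    // is unknown here, so the first request can never be blocked preemptively.
    this.callCount += 1;
    if (this.callCount > MAX_CALLS_PER_ENDPOINT) {
      throw new Error('datasource call limit exceeded');
    }
  }
}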

Why does @RequestHeader not work with @MessageMapping?

My goal is to get a request header, e.g. Authorization, from a message published via Stomp.
The flow
1. Subscribe
Front End <-> /topic/javainuse
2. Publish Message
Front End <-> /app/chat.newUser
The Spring Boot WebSocket source code:
@MessageMapping("/chat.newUser")
@SendTo("/topic/javainuse")
public WebSocketChatMessage newUser(@RequestHeader(value = "Authorization") String authorization, WebSocketChatMessage message, SimpMessageHeaderAccessor headerAccessor) {
    System.out.println("Authorization: " + authorization);
    headerAccessor.getSessionAttributes().put("username", message.getSender()); // dependency of WebSocketChatEventListener to handle SessionDisconnectEvent.
    return message;
}
The Front End's source code, written in JavaScript:
const publish = () => {
  stompClient.publish({
    destination: '/app/chat.newUser',
    body: JSON.stringify({
      sender: 'User A',
      type: 'newUser',
    }),
    headers: {
      Authorization: 'Bearer a',
    },
    skipContentLengthHeader: true,
  });
};
My expected result is that it will print out the Authorization header's value.
My actual result is
Authorization: {"sender":"User A","type":"newUser"}
==================================================
Related question but not the main topic
==================================================
I am worried that my practice for the RestController is wrong, since it can't be applied to WebSocket:
What are the advantages of using Spring Security over a manual @RequestHeader annotation?
Currently, I use @RequestHeader inside a RestController, e.g.
@PostMapping
public ResponseEntity<Object> something(@RequestHeader(value = "Authorization") String authorization) {
    boolean isValid = customValidatorService.isValid(authorization.split(" ")[1]);
    // do something
    if (!isValid) { /* ... */ }
    else { /* ... */ }
    // ...
}
Why should you use Spring Security to handle a RestController and WebSocket instead of using the @RequestHeader(value = "Authorization") annotation and a custom validator service?
If you protect the WebSocket endpoint, is validating every published message sent to a @MessageMapping, e.g. by doing a SQL query (SELECT session FROM some table), a correct practice?
Disclaimer: I have not used Spring Security at all.

How to use passport-local with graphql

I'm trying to implement GraphQL in my project, and I would like to use passport.authenticate('local') in my login mutation.
Code adaptation of what I want:
const typeDefs = gql`
  type Mutation {
    login(userInfo: UserInfo!): User
  }
`;

const resolvers = {
  Mutation: {
    login: (parent, args) => {
      passport.authenticate('local')
      return req.user
    },
  },
};
Questions:
Was passport designed mostly for REST/Express?
Can I manipulate passport.authenticate method (pass username and password to it)?
Is this even a common practice, or should I stick to some JWT library?
Passport.js is a "Express-compatible authentication middleware". authenticate returns an Express middleware function -- it's meant to prevent unauthorized access to particular Express routes. It's not really suitable for use inside a resolver. If you pass your req object to your resolver through the context, you can call req.login to manually login a user, but you have to verify the credentials and create the user object yourself before passing it to the function. Similarly, you can call req.logout to manually log out a user. See here for the docs.
If you want to use Passport.js, the best thing to do is to create an Express app with an authorization route and a callback route for each identity provider you're using (see this for an example). Then integrate the Express app with your GraphQL service using apollo-server-express. Your client app will use the authorization route to initialize the authentication flow, and the callback endpoint will redirect back to your client app. You can then add req.user to your context and check for it inside resolvers, directives, GraphQL middleware, etc.
However, if you are only using local strategy, you might consider dropping Passport altogether and just handling things yourself.
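A minimal sketch of that do-it-yourself route, assuming apollo-server-express with the req object passed through the context as described above (the credential-check helper is hypothetical):

const resolvers = {
  Mutation: {
    login: async (parent, { username, password }, { req }) => {
      // You verify the credentials and build the user object yourself...
      const user = await findUserAndVerifyPassword(username, password); // hypothetical helper
      if (!user) throw new Error('Invalid credentials');
      // ...then hand it to Passport's callback-based req.login.
      await new Promise((resolve, reject) =>
        req.login(user, (err) => (err ? reject(err) : resolve())),
      );
      return user;
    },
  },
};

const server = new ApolloServer({
  typeDefs,
  resolvers,
  context: ({ req }) => ({ req }), // expose req to the resolvers
});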
It took me a while to wrap my head around the combination of GraphQL and Passport. Especially wanting to use the local strategy together with a login mutation makes life complicated. That's why I created a small npm package called graphql-passport.
This is what the setup of the server looks like:
import express from 'express';
import session from 'express-session';
import { ApolloServer } from 'apollo-server-express';
import passport from 'passport';
import { GraphQLLocalStrategy, buildContext } from 'graphql-passport';

passport.use(
  new GraphQLLocalStrategy((email, password, done) => {
    // Adjust this callback to your needs
    const users = User.getUsers();
    const matchingUser = users.find(user => email === user.email && password === user.password);
    const error = matchingUser ? null : new Error('no matching user');
    done(error, matchingUser);
  }),
);

const app = express();
app.use(session(options)); // optional
app.use(passport.initialize());
app.use(passport.session()); // if session is used

const server = new ApolloServer({
  typeDefs,
  resolvers,
  context: ({ req, res }) => buildContext({ req, res, User }),
});
server.applyMiddleware({ app, cors: false });

app.listen({ port: PORT }, () => {
  console.log(`🚀 Server ready at http://localhost:${PORT}${server.graphqlPath}`);
});
Now you will have access to Passport-specific functions and the user via the GraphQL context. This is how you can write your resolvers:
const resolvers = {
  Query: {
    currentUser: (parent, args, context) => context.getUser(),
  },
  Mutation: {
    login: async (parent, { email, password }, context) => {
      // instead of email you can pass username as well
      const { user } = await context.authenticate('graphql-local', { email, password });
      // only required if express-session is used
      context.login(user);
      return { user };
    },
  },
};
The combination of GraphQL and Passport.js makes sense. Especially if you want to add more authentication providers like Facebook, Google and so on. You can find more detailed information in this blog post if needed.
You should definitely use passport unless your goal is to learn about authentication in depth.
I found the most straightforward way to integrate passport with GraphQL is to:
use a JWT strategy
keep REST endpoints to authenticate and retrieve tokens
send the token to the GraphQL endpoint and validate it on the backend
Why?
If you're using a client-side app, token-based auth is the best practice anyway.
Implementing REST JWT with passport is straightforward. You could try to build this in GraphQL as described by @jkettmann, but it's way more complicated and less well supported. I don't see an overwhelming benefit to doing so.
Implementing JWT in GraphQL is straightforward. See, e.g., the guides for Express or NestJS.
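A minimal sketch of that setup, assuming Express with apollo-server-express and the jsonwebtoken package (the endpoint name and the credential check are placeholders):

import express from 'express';
import jwt from 'jsonwebtoken';
import { ApolloServer } from 'apollo-server-express';

const SECRET = process.env.JWT_SECRET; // assumption: symmetric signing key

const app = express();
app.use(express.json());

// REST endpoint to authenticate and retrieve a token
app.post('/login', async (req, res) => {
  const user = await verifyCredentials(req.body); // hypothetical credential check
  if (!user) return res.status(401).end();
  res.json({ token: jwt.sign({ sub: user.id }, SECRET, { expiresIn: '1h' }) });
});

// The GraphQL endpoint validates the token on every request via the context
const server = new ApolloServer({
  typeDefs,
  resolvers,
  context: ({ req }) => {
    const token = (req.headers.authorization || '').replace('Bearer ', '');
    try {
      return { user: jwt.verify(token, SECRET) };
    } catch {
      return { user: null }; // resolvers decide whether to reject
    }
  },
});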
To your questions:
Was passport designed mostly for REST/Express?
Not in principle, but you will find that most resources are about REST and Express.
Is this even a common practice, or should I stick to some JWT library?
Common practice is to stick to JWT.
More details here: OAuth2 in NestJS for Social Login (Google, Facebook, Twitter, etc)
Example project here: https://github.com/thisismydesign/nestjs-starter

GraphQL subscription using server-sent events & EventSource

I'm looking into implementing a "subscription" type using server-sent events as the backing API.
What I'm struggling with is the interface, to be more precise, the HTTP layer of such an operation.
The problem: the native EventSource does not support:
1. Specifying an HTTP method; "GET" is used by default.
2. Including a payload (the GraphQL query).
While #1 is irrefutable, #2 can be circumvented using query parameters. However, query parameters have a limit of ~2000 chars (this can be debated), which makes relying solely on them feel too fragile.
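For illustration, this is what native EventSource usage looks like; the operation can only travel in the URL (the endpoint path and inline id are placeholders):

// GET is implied; there is no option for a method, body, or custom headers.
const url = '/graphql/stream?query=' + encodeURIComponent('subscription { status(id: "123") { ready } }');
const source = new EventSource(url);
source.onmessage = (event) => console.log(JSON.parse(event.data));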
The solution I'm thinking of is to create a dedicated end-point for each possible event.
For example: A URI for an event representing a completed transaction between parties:
/graphql/transaction-status/$ID
will translate to this query on the server:
subscription TransactionStatusSubscription {
  status(id: $ID) {
    ready
  }
}
The issues with this approach:
- A handler has to be added for each URI-to-GraphQL translation.
- A new version of the server must be deployed.
- The flexibility offered by GraphQL is lost: the client should control the query.
- All the end-points must be tracked across the code base (back-end, front-end, mobile).
There are probably more issues I'm missing.
Is there perhaps a better approach that you can think of? One that would allow a better way of providing the request payload to EventSource?
Subscriptions in GraphQL are normally implemented using WebSockets, not SSE. Both Apollo and Relay support using subscriptions-transport-ws client-side to listen for events. Apollo Server includes built-in support for subscriptions using WebSockets. If you're just trying to implement subscriptions, it would be better to utilize one of these existing solutions.
That said, there's a library for utilizing SSE for subscriptions here. It doesn't look like it's maintained anymore, but you can poke around the source code to get some ideas if you're bent on trying to get SSE to work. Looking at the source, it looks like the author got around the limitations you mention above by initializing each subscription with a POST request that returns a subscription id.
As of now you have multiple packages for GraphQL subscriptions over SSE.
graphql-sse
Provides both a client and a server for using GraphQL subscriptions over SSE. This package has a dedicated handler for subscriptions.
Here is an example usage with express.
import express from 'express'; // yarn add express
import { createHandler } from 'graphql-sse';

// Create the GraphQL over SSE handler
const handler = createHandler({ schema });

// Create an express app serving all methods on `/graphql/stream`
const app = express();
app.use('/graphql/stream', handler);
app.listen(4000);
console.log('Listening to port 4000');
@graphql-sse/server
Provides a server handler for GraphQL subscriptions. However, the HTTP handling is up to you, depending on the framework you use.
Disclaimer: I am the author of the @graphql-sse packages
Here is an example with express.
import express, { RequestHandler } from "express";
import {
  getGraphQLParameters,
  processSubscription,
  RESULT_TYPE,
} from "@graphql-sse/server";
import { schema } from "./schema";

const app = express();
app.use(express.json());

app.post(path, async (req, res, next) => {
  const request = {
    body: req.body,
    headers: req.headers,
    method: req.method,
    query: req.query,
  };
  const { operationName, query, variables } = getGraphQLParameters(request);
  if (!query) {
    return next();
  }
  const result = await processSubscription({
    operationName,
    query,
    variables,
    request: req,
    schema,
  });
  if (result.type === RESULT_TYPE.NOT_SUBSCRIPTION) {
    return next();
  } else if (result.type === RESULT_TYPE.ERROR) {
    result.headers.forEach(({ name, value }) => res.setHeader(name, value));
    res.status(result.status);
    res.json(result.payload);
  } else if (result.type === RESULT_TYPE.EVENT_STREAM) {
    res.writeHead(200, {
      'Content-Type': 'text/event-stream',
      Connection: 'keep-alive',
      'Cache-Control': 'no-cache',
    });
    result.subscribe((data) => {
      res.write(`data: ${JSON.stringify(data)}\n\n`);
    });
    req.on('close', () => {
      result.unsubscribe();
    });
  }
});
Clients
The two packages mentioned above have companion clients. Because of the limitations of the EventSource API, both packages implement a custom client that provides options for sending HTTP headers and a POST payload, which the EventSource API does not support. graphql-sse comes together with its client, while @graphql-sse/server has companion clients in separate packages.
graphql-sse client example
import { createClient } from 'graphql-sse';

const client = createClient({
  // singleConnection: true, use "single connection mode" instead of the default "distinct connection mode"
  url: 'http://localhost:4000/graphql/stream',
});

// query
const result = await new Promise((resolve, reject) => {
  let result;
  client.subscribe(
    {
      query: '{ hello }',
    },
    {
      next: (data) => (result = data),
      error: reject,
      complete: () => resolve(result),
    },
  );
});

// subscription
const onNext = () => {
  /* handle incoming values */
};

let unsubscribe = () => {
  /* complete the subscription */
};

await new Promise((resolve, reject) => {
  unsubscribe = client.subscribe(
    {
      query: 'subscription { greetings }',
    },
    {
      next: onNext,
      error: reject,
      complete: resolve,
    },
  );
});
@graphql-sse/client
A companion of @graphql-sse/server.
Example:
import {
  SubscriptionClient,
  SubscriptionClientOptions,
} from '@graphql-sse/client';

const subscriptionClient = SubscriptionClient.create({
  graphQlSubscriptionUrl: 'http://some.host/graphql/subscriptions',
});

const subscription = subscriptionClient.subscribe({
  query: 'subscription { greetings }',
});

const onNext = () => {
  /* handle incoming values */
};
const onError = () => {
  /* handle incoming errors */
};

subscription.subscribe(onNext, onError);
@graphql-sse/apollo-client
A companion package of the @graphql-sse/server package for Apollo Client.
import { split, HttpLink, ApolloClient, InMemoryCache } from '@apollo/client';
import { getMainDefinition } from '@apollo/client/utilities';
import { ServerSentEventsLink } from '@graphql-sse/apollo-client';

const httpLink = new HttpLink({
  uri: 'http://localhost:4000/graphql',
});

const sseLink = new ServerSentEventsLink({
  graphQlSubscriptionUrl: 'http://localhost:4000/graphql',
});

// Route subscriptions to the SSE link and everything else to HTTP
const splitLink = split(
  ({ query }) => {
    const definition = getMainDefinition(query);
    return (
      definition.kind === 'OperationDefinition' &&
      definition.operation === 'subscription'
    );
  },
  sseLink,
  httpLink,
);

export const client = new ApolloClient({
  link: splitLink,
  cache: new InMemoryCache(),
});
If you're using Apollo, they support automatic persisted queries (abbreviated APQ in the docs). If you're not using Apollo, the implementation shouldn't be too bad in any language. I'd recommend following their conventions just so your clients can use Apollo if they want.
The first time any client makes an EventSource request with a hash of the query, it'll fail; the client then retries the request with the full payload against a regular GraphQL endpoint. If APQ is enabled on the server, subsequent GET requests from all clients with query parameters will execute as planned.
Once you've solved that problem, you just have to make a server-sent-events transport for GraphQL (which should be easy, considering the subscribe function just returns an AsyncIterator).
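A client-side sketch of that handshake, following Apollo's persisted-query convention of passing the hash in an extensions query parameter (the endpoint and the sha256 helper are placeholders):

// Hash the query once; only the hash travels in the EventSource URL.
const sha256Hash = await sha256(query); // hypothetical helper returning a hex digest
const extensions = encodeURIComponent(
  JSON.stringify({ persistedQuery: { version: 1, sha256Hash } }),
);
const source = new EventSource(`/graphql?extensions=${extensions}`);
// If the server responds with PersistedQueryNotFound, re-register the full
// query with a normal POST to the GraphQL endpoint, then reconnect.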
I'm looking into doing this at my company because some frontend developers like how easy EventSource is to deal with.
There are two things at play here: the SSE connection and the GraphQL endpoint. The endpoint has a spec to follow, so simply returning SSE from a subscription request is not an option, and a GET request is needed anyway. So the two have to be separate.
How about letting the client open an SSE channel via /graphql-sse, which creates a channel token? Using this token, the client can then request subscriptions, and the events will arrive via the chosen channel.
The token could be sent as the first event on the SSE channel, and to pass the token to the query, the client can provide it in a cookie, a request header, or even an unused query variable.
Alternatively, the server can store the last opened channel in session storage (limiting the client to a single channel).
If no channel is found, the query fails. If the channel closes, the client can open it again and either pass the token in the query string/cookie/header or let the session storage handle it.
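A client-side sketch of that channel-token flow (the endpoint, header, and event names are all hypothetical):

// 1. Open the SSE channel; the first event carries the channel token.
const channel = new EventSource('/graphql-sse');
let channelToken;
channel.addEventListener('token', (event) => {
  channelToken = event.data;
});
// Subscription events then arrive on this same channel.
channel.onmessage = (event) => handleEvent(JSON.parse(event.data));

// 2. Request a subscription with a regular POST, tagging it with the token
// (a cookie or an unused query variable would work as well).
await fetch('/graphql', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'X-Channel-Token': channelToken,
  },
  body: JSON.stringify({ query: 'subscription { status(id: "123") { ready } }' }),
});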

Passing a token through Query?

I have a GraphQL server running (Apollo Server 2), and the API behind it requires every request to include a token.
Currently, the token comes from an HTTP request cookie. This was simple enough to get working: when a request comes in, grab the cookie from the header and pass it along to the HTTP request sent to the API server through the resolvers.
I'd like to make it so a GraphQL client can pass this token along through the POST query itself.
Basically, I'm wondering if I can define a global GQL variable of some sort: "for all queries, this variable is required."
I had a similar implementation in TypeScript, and in order to achieve something like this, I've defined an object:
const globalInput = {
  token: {
    type: GraphQLString,
  },
};
And then use it in your GraphQLObjectType:
const Query = new GraphQLObjectType({
  name: 'Query',
  fields: () => ({
    myObject: {
      type: MyTypeObject,
      args: { ...globalInput },
      resolve: (source: any, args: any) => {
        // global input values can be accessed in args
        // ex: args.token
        return {};
      },
    },
  }),
});
The problem is that I need to spread it (...globalInput) into every object type.
But it does the job.
