Auto reload of the gateway for schema changes in a federated service (Apollo GraphQL)

In Apollo Federation, I am facing this problem:
The gateway needs to be restarted every time we make a change to the schema of any federated service in the service list.
I understand that every time the gateway starts, it collects all the schemas and composes the data graph. But is there a way this can be handled automatically, without restarting the gateway? Restarting it also takes down all the other, unaffected federated GraphQL services.
apollo-graphql, @apollo/gateway

There is an experimental poll interval you can use:

import { ApolloGateway } from "@apollo/gateway";

const gateway = new ApolloGateway({
  serviceList: [
    { name: "products", url: "http://localhost:4002" },
    { name: "inventory", url: "http://localhost:4001" },
    { name: "accounts", url: "http://localhost:4000" }
  ],
  debug: true,
  experimental_pollInterval: 3000
});

The code above will poll every 3 seconds.

I don't know of other ways to automatically reload the gateway besides polling.
I made a reusable Docker image and I will keep updating it if new ways to reload the service emerge. For now you can use the POLL_INTERVAL environment variable to periodically check for changes.
Here is an example using docker-compose:

version: '3'
services:
  a:
    build: ./a # one service implementing federation
  b:
    build: ./b
  gateway:
    image: xmorse/apollo-federation-gateway
    ports:
      - 8000:80
    environment:
      CACHE_MAX_AGE: '5' # seconds
      POLL_INTERVAL: '30' # seconds
      URL_0: "http://a"
      URL_1: "http://b"

You can use Express to refresh your gateway's schema. ApolloGateway has a load() function that goes out and fetches the schemas from the implementing services. This HTTP call could be made part of a deployment process if something automatic is needed. I wouldn't go with polling or anything too automatic: once the implementing services are deployed, the schema is not going to change until they are updated and deployed again.

import { ApolloGateway } from '@apollo/gateway';
import { ApolloServer } from 'apollo-server-express';
import express from 'express';

const gateway = new ApolloGateway({ ...config });
const server = new ApolloServer({ gateway, subscriptions: false });
const app = express();

app.post('/refreshGateway', (request, response) => {
  gateway.load();
  response.sendStatus(200);
});

server.applyMiddleware({ app, path: '/' });
app.listen();
Update: The load() function now checks for phase === 'initialized' before reloading the schema. A workaround might be to use gateway.loadDynamic(false) or possibly to set gateway.state.phase = 'initialized'. I'd recommend loadDynamic(), because changing the state might cause issues down the road. I have not tested either of those solutions, since I'm not working with Apollo Federation at the time of this update.
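For illustration only, here is a minimal, untested sketch of how the /refreshGateway handler above could apply the workaround described in that update; the internal state shape and the loadDynamic() behaviour may differ between @apollo/gateway versions, so treat this as an assumption rather than a confirmed API:

app.post('/refreshGateway', async (request, response) => {
  try {
    // Untested workaround: reset the internal phase so load() runs again.
    // Alternatively, gateway.loadDynamic(false) could be tried instead.
    gateway.state.phase = 'initialized';
    await gateway.load();
    response.sendStatus(200);
  } catch (err) {
    console.error('Failed to reload gateway schema', err);
    response.sendStatus(500);
  }
});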

Related

nestjs + apollo graphql federated gateway can't introspect services because of "bad request", reproducible git repository available

In an NX monorepo I'm building 3 NestJS applications: an auth-service, a user-service, and a gateway to start off with. They're all powered by Apollo GraphQL and follow the official NestJS documentation.
The issue I'm having is that with both the user-service and auth-service up and processing requests successfully as individual servers, at the same time, the gateway throws
Couldn't load service definitions for "auth" at http://localhost:3100/apis/auth-service/graphql: 400: Bad Request
The services themselves are standard GraphQL applications, nothing unusual.
The gateway is configured as follows:
{
  server: {
    debug: true,
    playground: true,
    autoSchemaFile: './apps/gateway/schema.gql',
    sortSchema: true,
    introspection: true,
    cors: ['*'],
    path: '/apis/gateway/graphql',
  },
  gateway: {
    supergraphSdl: new IntrospectAndCompose({
      subgraphHealthCheck: true,
      subgraphs: [
        {
          name: 'user',
          url: resolveSubgraphUrl('user'),
        },
        {
          name: 'auth',
          url: resolveSubgraphUrl('auth'),
        },
      ],
    }),
  }
}
I have created an NX-based monorepository that lets you reproduce this issue easily by spinning up all 3 servers in watch mode at the same time, on localhost, on different ports.
Everything is mapped as it should be. Link: https://github.com/sebastiangug/nestjs-federation.git
The README contains the two commands necessary to run it, as well as the same set of instructions plus the health-check queries available from each service.
Versions used:
"#apollo/subgraph": "2.1.3",
"#apollo/federation": "0.37.1",
"#apollo/gateway": "2.1.3",
"apollo-server-express": "3.6.7",
"graphql": "16.5.0",
"#nestjs/graphql": "10.1.3",
"#nestjs/platform-express": "9.0.8",
Any ideas what I'm doing wrong here or what further configuration is necessary to achieve this?
Thank you

Load/stress test in a SPA with Hasura Cloud Graphql as a backend and subscriptions

I'm trying to do a performance test on a SPA with a frontend in React, deployed with Netlify.
As a backend we're using Hasura Cloud GraphQL (Std version, https://hasura.io/), where everything from the client goes directly through Hasura to the DB.
The DB is Postgres, hosted on Heroku (Std 0 tier).
We're hoping to be able to handle around 800 simultaneous users.
The problem is that I'm lost about how to do it, or whether I'm doing it correctly, seeing how most of our operations are subscriptions/mutations that I had to transform into queries. I tried doing those tests with k6 and JMeter, but I'm not sure if I'm doing them properly.
k6 test
At first, I did a quick search and collected around 10 subscriptions that are commonly used. Then I tried to create a performance test with k6 (https://k6.io/docs/using-k6/http-requests/), but I wasn't able to create a working subscription test, so I just transformed each subscription into a query and performed an http.post with this setup:

import http from 'k6/http';
import { sleep } from 'k6';

export const options = {
  stages: [
    { duration: '30s', target: 75 },
    { duration: '120s', target: 75 },
    { duration: '60s', target: 50 },
    { duration: '30s', target: 30 },
    { duration: '10s', target: 0 }
  ]
};

export default function () {
  var res = http.post(prod,
    JSON.stringify({
      query: listaQueries.GetDesafiosCursosByKey(
        keys.desafioCursoKey
      )}), params);
  sleep(1);
}
I did this for every query and ran each test individually. Unfortunately, the numbers I got were bad, and somehow our test environment was getting better times than production. (The only difference, as far as I know, is that we're using Hasura Cloud for production.)
I tried to implement WebSockets, but I couldn't get them to work and configure them for a stress/load test.
[screenshot: k6 results]
JMeter test
After that, I tried something similar with JMeter, but again I couldn't figure out how to set up a subscription test (after a while, I read in a blog that JMeter doesn't support it: https://qainsights.com/deep-dive-into-graphql-in-jmeter/), so I simply transformed all subscriptions into queries and tried to do the same, but the numbers I was getting were different and much higher than with k6.
[screenshots: JMeter query configs 1 and 2, JMeter thread group config]
Questions
I'm not sure if I'm doing it correctly, or whether transforming every subscription into a query and performing an HTTP request is a valid approach. (At least I know that those queries return the data correctly.)
Should I just increase the number of VUs/threads until I get a constant timeout to simulate a stress test? Some tests were causing a GraphQL error on the website (see the GraphQL error screenshot), and others were showing
"WARN[0059] Request Failed error="Post \"https://xxxxxxx-xxxxx.herokuapp.com/v1/graphql\": EOF""
in the k6 console.
Or should I just give up on k6/JMeter and look for another tool to perform these tests?
Thank you in advance, and sorry for my English and explanation, but I'm a complete newbie at this.
I'm not sure if I'm doing it correctly, or whether transforming every subscription into a query and performing an HTTP request is a valid approach. (At least I know that those queries return the data correctly.)
Ideally you would be using WebSockets, as that is what actual clients will most likely be using.
For code samples, check out the answer here.
Here's a more complete example utilizing a main.js entry script with modularized subscription code in subscriptions/bikes-brands.js. It also uses the httpx library to set a global request header:
// main.js
import { Httpx } from 'https://jslib.k6.io/httpx/0.0.5/index.js';
import { getBikeBrandsByIdSub } from './subscriptions/bikes-brands.js';

const session = new Httpx({
  baseURL: `http://54.227.75.222:8080`
});

const pauseMin = 2;
const pauseMax = 6;

export const options = {};

export default function () {
  session.addHeader('Content-Type', 'application/json');
  getBikeBrandsByIdSub(1);
}
// subscriptions/bikes-brands.js
import ws from 'k6/ws';

// Hasura WebSocket endpoint used by the subscription
const wsUri = 'wss://54.227.75.222:8080/v1/graphql';

/* using string concatenation */
export function getBikeBrandsByIdSub(id) {
  const query = `
    subscription getBikeBrandsByIdSub {
      bikes_brands(where: {id: {_eq: ${id}}}) {
        id
        brand
        notes
        updated_at
        created_at
      }
    }
  `;

  const subscribePayload = {
    id: "1",
    payload: {
      extensions: {},
      operationName: "query",
      query: query,
      variables: {},
    },
    type: "start",
  };

  const initPayload = {
    payload: {
      headers: {
        "content-type": "application/json",
      },
      lazy: true,
    },
    type: "connection_init",
  };

  console.debug(JSON.stringify(subscribePayload));

  // start a WS connection
  const res = ws.connect(wsUri, initPayload, function (socket) {
    socket.on('open', function () {
      console.debug('WS connection established!');
      // send the connection_init:
      socket.send(JSON.stringify(initPayload));
      // send the subscription:
      socket.send(JSON.stringify(subscribePayload));
    });
    socket.on('message', function (message) {
      let messageObj;
      try {
        messageObj = JSON.parse(message);
      }
      catch (err) {
        console.warn('Unable to parse WS message as JSON: ' + message);
      }
      if (messageObj && messageObj.type === 'data') {
        console.log(`${messageObj.type} message received by VU ${__VU}: ${Object.keys(messageObj.payload.data)[0]}`);
      }
      console.log(`WS message received by VU ${__VU}:\n` + message);
    });
  });
}
Should I just increase the number of VUs/threads until I get a constant timeout to simulate a stress test?
Timeouts and errors that only happen under load are signals that you may be hitting a bottleneck somewhere. Do you only see the EOFs under load? These are basically the server sending back incomplete responses/closing connections early which shouldn't happen under normal circumstances.
My expectation is that your test should replicate real user activity as closely as possible. I doubt that real users will be sending requests to GraphQL directly, and a well-behaved load test must replicate real-life application usage as closely as possible.
So I believe you should move to the HTTP protocol level and mimic the network footprint of a real browser instead of trying to come up with individual GraphQL queries.
With regards to the JMeter and k6 differences: it might be the case that k6 produces higher throughput on the same hardware when running requests at maximum speed, as evidenced by the benchmark in the Open Source Load Testing Tools 2021 article. However, given that you're trying to simulate real users with real browsers accessing your application, and real users don't hammer the application non-stop but need some time to "think" between operations, you should be getting the same number of requests from both load testing tools. If JMeter doesn't give you the load you want, make sure to follow JMeter Best Practices and/or consider running it in distributed mode.
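To illustrate that think time in k6, the default function in the main.js example above could sleep for a random pause between iterations. This is just a sketch, not part of the original answer; it reuses the pauseMin/pauseMax constants and the session/getBikeBrandsByIdSub names already defined in that example:

// main.js (illustrative change to the default function)
import { sleep } from 'k6';

export default function () {
  session.addHeader('Content-Type', 'application/json');
  getBikeBrandsByIdSub(1);
  // Pause 2-6 seconds between iterations to mimic a real user's think time.
  sleep(Math.random() * (pauseMax - pauseMin) + pauseMin);
}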

how to hot reload federation gateway in NestJS

Problem
In a federated Nest app, a gateway collects all the schemas from the other services and forms a complete graph. The question is: how do we re-run the schema collection after a sub-schema has changed?
Current Workaround
Restarting the gateway solves the problem, but it does not seem like an elegant solution.
Other Resources
Apollo Server supports managed federation, which essentially inverts the dependency between the gateway and the services. Sadly I couldn't find anything relating it to NestJS.
When configuring the gateway application with NestJS, and when you have already integrated with Apollo Studio, you need not define any serviceList in GraphQLGatewayModule. This is how your module initialization should look:

GraphQLGatewayModule.forRootAsync({
  useFactory: async () => ({
    gateway: {},
    server: {
      path: '/graphql',
    },
  }),
})
The following environment variables should be declared on the machine hosting your gateway application:

APOLLO_KEY: "service:<graphid>:<hash>"
APOLLO_SCHEMA_CONFIG_DELIVERY_ENDPOINT: "https://uplink.api.apollographql.com/"
After deploying a federated GraphQL service, you may need to run the apollo/rover CLI service:push command, like the one below, to update the schema. It writes to the schema registry, which is then pushed to the uplink URL that the gateway polls periodically:

npx apollo service:push --graph=<graph id> --key=service:<graph id>:<hash> --variant=<environment name> --serviceName=<service name> --serviceURL=<URL of your service with /graphql path> --endpoint=<URL of your service with /graphql path>
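Since the apollo CLI has since been deprecated in favor of Rover, a roughly equivalent publish with the Rover CLI would look something like the sketch below. The graph ref, subgraph name, URL, and schema file path are placeholders, and Rover publishes from a local schema file rather than introspecting the running service; it reads APOLLO_KEY from the environment:

rover subgraph publish <graph id>@<environment name> \
  --name <service name> \
  --schema ./subgraph-schema.graphql \
  --routing-url <URL of your service with /graphql path>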
You can add a pollIntervalInMs option to the supergraphSdl configuration.
That will automatically poll the services again at the given interval.
import { Module } from '@nestjs/common';
import { GraphQLModule } from '@nestjs/graphql';
import { ApolloGatewayDriver, ApolloGatewayDriverConfig } from '@nestjs/apollo';
import { IntrospectAndCompose } from '@apollo/gateway';

@Module({
  imports: [
    GraphQLModule.forRootAsync<ApolloGatewayDriverConfig>({
      driver: ApolloGatewayDriver,
      useFactory: async () => ({
        server: {
          path: '/graphql',
          cors: true
        },
        gateway: {
          supergraphSdl: new IntrospectAndCompose({
            subgraphs: [
              { name: 'example-service', url: 'http://localhost:8081/graphql' },
            ],
            pollIntervalInMs: 15000,
          })
        },
      })
    })
  ],
})
export class AppModule {}

Apollo-server-express introspection disabled but still possible over websocket connections

We use apollo-server-express to expose a GraphQL server.
For this server, we have set the introspection option to false to hide our schema from the outside world, which works fine for GraphQL calls that go over regular HTTP.
However, when we set up a WebSocket connection with this same server, we manage to execute introspection queries, even though introspection is explicitly set to false when the Apollo server is instantiated.
The config for booting the Apollo server looks something like this:
{
  schema: <schema>,
  context: <context_function>,
  formatError: <format_error_function>,
  debug: false,
  tracing: false,
  subscriptions: {
    path: <graphQl_path>,
    keepAlive: <keep_alive_param>,
    onConnect: <connect_function>,
    onDisconnect: <disconnect_function>
  },
  introspection: false,
  playground: false
};
Did someone have a similar issue? And if so, were you able to solve it, and how?
apollo-server-express version = 2.1.0
npm version = 6.4.1
node version = 10.13.0
What ApolloServer does internally is prevent you from using the __schema and __type resolvers. I assume you could do the same thing:
export const resolvers = {
  Query: {
    __type() {
      throw new Error('You cannot make introspection');
    },
    __schema() {
      throw new Error('You cannot make introspection');
    }
  }
}

Examples of integrating moleculer-io with moleculer-web using moleculer-runner instead of ServiceBroker?

I am having fun using moleculer-runner instead of creating a ServiceBroker instance in a moleculer-web project I am working on. The Runner simplifies setting up services for moleculer-web, and all the services - including the api.service.js file - look and behave the same, using a module.exports = { blah } format.
I can cleanly define the REST endpoints in the api.service.js file and create the connected functions in the appropriate service files. For example, aliases: { 'GET sensors': 'sensors.list' } points to the list() action/function in sensors.service.js. It all works great using some dummy data in an array.
The next step is to get the service(s) to open a socket and talk to a local program listening on an internal address/port. The idea is to accept a REST call from the web, talk to a local program over a socket to get some data, then format and return the data via REST to the client.
But when I want to use sockets with moleculer, I'm having trouble finding useful info and examples on integrating moleculer-io with a moleculer-runner-based setup. All the examples I find use the ServiceBroker model. I thought my Google-Fu was pretty good, but I'm at a loss as to where to look next. Or can I modify the ServiceBroker examples to work with moleculer-runner? Any insight or input is welcome.
If you want the following chain:
localhost:3000/sensor/list -> sensor.list() -> send message to local program:8071 -> get response -> send response as a return message to the REST caller.
Then you need to add a socket.io client to your sensor service (which has the list() action). Adding a client will allow it to communicate with the "outside world" via sockets.
The setup described below should have everything that you need.
As a skeleton I've used the moleculer-demo project.
What I have:
An API service, api.service.js, that handles the HTTP requests and passes them to sensor.service.js.
The sensor.service.js service is responsible for communicating with the remote socket.io server, so it needs to have a socket.io client. When sensor.service.js has started(), I establish a connection with a remote server located at port 8071. After this I can use this connection in my service actions to communicate with the socket.io server. This is exactly what I'm doing in the sensor.list action (a sketch of what this could look like follows below).
I've also created remote-server.service.js to mock your socket.io server. Despite being a moleculer service, sensor.service.js communicates with it via the socket.io protocol.
It doesn't matter whether your services use socket.io or not. All the services are declared in the same way, i.e., module.exports = {}.
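The original answer showed sensor.service.js in an image; as a rough sketch of what it could look like based on that description (the event names, response handling, and port usage here are illustrative assumptions, not the answer's actual code):

// sensor.service.js (illustrative sketch)
const io = require("socket.io-client");

module.exports = {
  name: "sensor",
  actions: {
    // Reached via the gateway alias, e.g. 'GET sensors': 'sensor.list'
    list() {
      return new Promise((resolve, reject) => {
        // 'data.request' and the (err, data) ack signature are hypothetical;
        // adapt them to whatever protocol your local program expects.
        this.socket.emit("data.request", {}, (err, data) => {
          if (err) return reject(err);
          resolve(data);
        });
      });
    }
  },
  started() {
    // Connect to the local program listening on port 8071.
    this.socket = io("http://localhost:8071");
  },
  stopped() {
    if (this.socket) this.socket.disconnect();
  }
};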
Below is a complete working example with socket.io (an API gateway plus a greeter service):
const { ServiceBroker } = require("moleculer");
const ApiGateway = require("moleculer-web");
const SocketIOService = require("moleculer-io");
const io = require("socket.io-client");

const IOService = {
  name: "api",
  // SocketIOService should be after moleculer-web
  // Load the HTTP API Gateway to be able to reach "greeter" action via:
  // http://localhost:3000/hello/greeter
  mixins: [ApiGateway, SocketIOService]
};

const HelloService = {
  name: "hello",
  actions: {
    greeter() {
      return "Hello Via Socket";
    }
  }
};

const broker = new ServiceBroker();
broker.createService(IOService);
broker.createService(HelloService);

broker.start().then(async () => {
  const socket = io("http://localhost:3000", {
    reconnectionDelay: 300,
    reconnectionDelayMax: 300
  });

  socket.on("connect", () => {
    console.log("Connection with the Gateway established");
  });

  socket.emit("call", "hello.greeter", (error, res) => {
    console.log(res);
  });
});
To make it work with moleculer-runner, just copy the service declarations into my-service.service.js files. So, for example, your api.service.js could look like:

// api.service.js
const ApiGateway = require("moleculer-web");
const SocketIOService = require("moleculer-io");

module.exports = {
  name: "api",
  // SocketIOService should be after moleculer-web
  // Load the HTTP API Gateway to be able to reach "greeter" action via:
  // http://localhost:3000/hello/greeter
  mixins: [ApiGateway, SocketIOService]
};
and your greeter service:

// greeter.service.js
module.exports = {
  name: "hello",
  actions: {
    greeter() {
      return "Hello Via Socket";
    }
  }
};
Then run npm run dev or moleculer-runner --repl --hot services.
