Can I force an Apollo Supergraph to point to specific Subgraph? - graphql

I'm building a GraphQL federated project with Apollo Router and multiple endpoints via a supergraph architecture for a centralized API, based on the Apollo docs, and using Rover for the composition: https://www.apollographql.com/docs/federation/quickstart/local-composition
The schema is built using GraphQLObjectType from graphql and buildSubgraphSchema from @apollo/subgraph.
My current problem is that inside the project I have two instances of the same API, with the exact same code but with different environment variables due to internal necessities:
Domain_1_API <--
Domain_1_API2 <--
Domain_2_API
Domain_3_API
example:
Domain_1_API schema:
type Company {
  id: ID
  name: String
  address: String
}
Domain_1_API2 schema:
type Company {
  id: ID
  name: String
  address: String
}
Supergraph result:
type Company
  @join__type(graph: DOMAIN_1_API)
  @join__type(graph: DOMAIN_1_API2)
{
  id: ID
  name: String
  address: String
}
This generates a merged SuperGraph where all requests go directly to Domain_1_API.
Is it possible to build a Supergraph that recognizes two equal subgraph schemas as separate schemas, AND/OR intercept the requests sent to the Supergraph and, based on the request's headers, point specifically to Domain_1_API or Domain_1_API2?
I tried writing a Rhai plugin based on the docs: https://www.apollographql.com/docs/router/customizations/rhai/, but I don't know Rhai.

It's not possible. What I ended up doing was setting up a second federation that composes its supergraph using Domain_1_API2 as a subgraph, while the first federation uses Domain_1_API.
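The two-composition workaround can be sketched as a pair of Rover supergraph config files. All file paths, routing URLs, and subgraph names below are illustrative, not taken from the question:

```yaml
# supergraph-main.yaml — first federation: Company traffic goes to Domain_1_API
federation_version: 2
subgraphs:
  domain_1_api:
    routing_url: http://domain-1-api.internal/graphql
    schema:
      file: ./domain_1_api.graphql
  domain_2_api:
    routing_url: http://domain-2-api.internal/graphql
    schema:
      file: ./domain_2_api.graphql
  domain_3_api:
    routing_url: http://domain-3-api.internal/graphql
    schema:
      file: ./domain_3_api.graphql

# supergraph-secondary.yaml — second federation: identical, except the shared
# subgraph entry points at Domain_1_API2 (same SDL file, different routing_url)
federation_version: 2
subgraphs:
  domain_1_api2:
    routing_url: http://domain-1-api2.internal/graphql
    schema:
      file: ./domain_1_api.graphql
  # domain_2_api / domain_3_api entries as in the main config
```

Each config is composed separately (e.g. `rover supergraph compose --config supergraph-main.yaml`), and each resulting supergraph is served by its own router instance.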

Related

Apollo Gateway: duplicate / identical types across subgraphs returns null results?

My team and I are trying to move away from our monolithic Apollo Server API pattern toward GQL micro services with Apollo Federation.
Before we get into any code, some preface: my team has used GraphQL Codegen for our monolith to make a single master schema by finding and combining all the various type defs scattered across any number of .graphql files. The resulting output is a src/generated/types.ts file, which several of our resolvers and utility functions import the generated types from. This of course won't work if we're looking to deploy our micro services in isolation.
So, moving towards using a gateway, and with the current goal of being able to continue using GraphQL Codegen to generate and import types, we've simply got some type defs duplicated for now in order to do local type generation. Any optimizations and deduping will occur when we get the time, or by necessity if that isn't something we can have as tech debt. 😬
Minus some redacted information for security purposes, this same file is duplicated across all subgraphs which the gateway consumes.
users.graphql
extend type Query {
  self: User
}

type User implements IDocument & ICreated & IUpdated & IDisplayName {
  "Unique identifier for the resource across all collections"
  _id: ID
  "Unique identifier for the resource within its collection"
  _key: ID
  "Unique identifier for revision"
  _rev: String
  "ISO date time string for the time this resource was created"
  createdAt: String
  "Unique identifier for users that created this resource"
  createdBy: ID
  "ISO date time string for the time this resource was updated"
  updatedAt: String
  "Unique identifier for users that updated this resource"
  updatedBy: ID
  "A preformatted name safe to display in any HTML context"
  displayName: String
  "Email addresses"
  email: String
  "Determines if a user is a service account supporting applications"
  isServiceAccount: Boolean
}

extend type Mutation {
  user: UserMutation
}

type UserMutation {
  save(user: UserInput): User
}

input UserInput {
  "Unique identifier for the resource across all collections"
  _id: ID
  "Unique identifier for the resource within its collection"
  _key: ID
  "Unique identifier for revision"
  _rev: String
  "A preformatted name safe to display in any HTML context"
  displayName: String
  "Email addresses"
  email: String
}
GraphQL Codegen generates types as expected, and the service compiles just fine. What's more is that the Gateway also seems to have no problems (i.e. compiles and runs) in stitching together several subgraphs containing the duplicate types.
However, when I attempt to execute the following query on GraphQL Playground,
query Self {
  self {
    _id
    _key
    displayName
    email
  }
}
It just returns
{
  "data": {
    "self": null
  }
}
If I change the Gateway's supergraphSdl to only grab just one micro service, thus avoiding type duplication, I get results as expected:
{
  "data": {
    "self": {
      "_id": "users/c3a6062f-b070-4e39-8b2a-9d1354e9dccb",
      "_key": "c3a6062f-b070-4e39-8b2a-9d1354e9dccb",
      "displayName": "redacted",
      "email": "redacted"
    }
  }
}
(Here's the resolver if it matters. I've debugged the dickens out of it and have come to the conclusion that it works fine.)
const query: QueryResolvers = {
  self: (_, __, context) => context.user,
};
I'm still pretty new to Federation, so I apologize if there's an obvious answer. But given what I've described, is there any way to
allow type duplication across the several subgraphs being stitched together, and is there a way to
still keep isolated, generated types for each service?
I've looked at various possible ways to resolve this issue, exploring the extend keyword and wondering if extending the User type def across all but one of the services would leave just one "master" User type def: either that didn't work or I did something wrong. I've only got the vaguest idea of what's going on, and I'm guessing the Gateway is confused about which type, and from which service, it's supposed to use in order to return a response.
Here are various relevant packages and versions which might help solve the issue:
Gateway:
"@apollo/gateway": "^2.0.1",
"graphql": "^16.4.0"
GQL micro services:
"graphql": "^16.4.0",
"@graphql-codegen/cli": "^2.6.2",
"@apollo/federation": "^0.36.1"
Any help is immensely appreciated, even if it means what I want isn't possible! If more information and / or code is required I will be happy to give it.
I never did find a way to allow duplication, but ultimately I didn't want to have to do it anyway, and since I don't think I even can now, that's fine.
Instead, I was able to find a much more elegant solution!
We just have to point GraphQL Codegen's schema field in the various codegen.yml files to whichever .graphql sources are required. That way, we get the types we need to use, and we prevent usurpation of single sources of truth by not redeclaring type defs. So, happy ending. 🥳
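For one service, that codegen.yml change might look like the following. The file paths and plugin list here are illustrative, not from the original setup:

```yaml
# users service codegen.yml — `schema` points only at the .graphql sources this
# service owns, so its generated types keep a single source of truth
schema:
  - ./src/graphql/users.graphql
generates:
  ./src/generated/types.ts:
    plugins:
      - typescript
      - typescript-resolvers
```

Each micro service gets its own codegen.yml scoped to its own schema files, so nothing needs to be redeclared across services.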
Great discussion y'all

How to query data from 2 APIs

I have set up a Gatsby client which connects to Contentful using the gatsby-source-contentful plugin. I have also connected a simple custom API using the gatsby-source-graphql plugin.
When I run the dev-server I am able to query my pages from Contentful in the playground.
I am also able to query my custom API through the playground as well.
So both APIs work and are connected with Gatsby properly.
I want to programmatically generate a bunch of pages that have dynamic sections (references) which an author can add and order as she wishes.
I do achieve this using the ...on Node connection together with fragments I define within each dynamic section. It all works out well so far.
My actual problem:
Now I have a dynamic section which is a Joblist. This Component requires to get data out of the Contentful API as it stores values like latitude and longitude. So the author is free to set a point on a map and set a radius. I successfully get this information out of Contentful using a fragment inside the component:
export const query = graphql`
  fragment JoblistModule on ContentfulJoblisteMitAdresse {
    ... on ContentfulJoblisteMitAdresse {
      contentful_id
      radius
      geo {
        lon
        lat
      }
    }
  }
`
But how can I pass this information into another query that fetches the job data from my custom API? If I understand Gatsby correctly, I somehow have to connect these two APIs? Or can I run another query that receives these values as variables? How and where would I do this?
I could not find any approach inside gatsby-node.js (since passed-in context can only be used as variables inside a query), nor in the template file (since I can run only one query at a time), nor in the component itself (since that only accepts a staticQuery).
I don't know where my misunderstanding is, so I would very much appreciate any hints, help, or examples.
Since your custom API is a GraphQL API, you can use delegateToSchema from the graphql-tools package to accomplish this.
You will need to create a resolver using Gatsby's setFieldsOnGraphQLNodeType API. Within this resolver, your resolve function will call delegateToSchema.
We have a similar problem: our blog posts have an "author" field which contains an ID. We then do a GraphQL query to another system to look up author info by that ID.
return {
  remoteAuthor: {
    type: person,
    args: {},
    resolve: async (source: ContentfulBlogPost, fieldArgs, context, info) => {
      if (!source.author) {
        return null
      }
      // runs the selection on the remote schema
      // https://github.com/gatsbyjs/gatsby/issues/14517
      return delegateToSchema({
        schema: authorsSchema,
        operation: 'query',
        fieldName: 'Person',
        args: { id: source.author },
        context,
        info,
      })
    },
  },
}
This adds a 'remoteAuthor' field to our blog post type, and whenever it gets queried, those selections are proxied to the remote schema where the person type exists.

Apollo Client: can apollo-link-rest resolve relations between endpoints?

The REST API that I have to use provides data over multiple endpoints. The objects in the results might have relations that are not resolved directly by the API; it rather provides ids that point to the actual resource.
Example:
For simplicity's sake let's say a Person can own multiple Books.
Now the api/person/{i} endpoint returns this:
{ id: 1, name: "Phil", books: [1, 5, 17, 31] }
The api/book/{i} endpoint returns this (note that author might be a relation again):
{ id: 5, title: "SPRINT", author: 123 }
Is there any way I can teach the apollo client to resolve those endpoints in a way that I can write the following (or a similar) query:
query fetchBooksOfUser($id: ID) {
person (id: $id) {
name,
books {
title
}
}
}
I didn't try it (yet) in one query, but it should be possible.
Read the docs starting from this.
At the beginning I would try with something like:
query fetchBooksOfUser($id: ID) {
  person(id: $id) @rest(type: "Person", path: "api/person/{args.id}") {
    name
    books @rest(type: "Book", path: "api/book/{data.person.books.id}") {
      id
      title
    }
  }
}
... but it probably won't work - it's probably not smart enough to work with arrays.
UPDATE: See this note for a similar example, but one using a single, common parent-resolved param. In your case we have partially resolved books as arrays of objects with an id. I don't know how to use these ids to resolve the missing fields on the same 'tree' level.
Another possibility - make related subrequests/subqueries (somehow) in the Person type patcher. That should be possible.
Does this really need to be one query? You can provide ids to child containers, each of them running its own query when needed.
UPDATE: Apollo will take care of batching (not for REST, and not for all GraphQL servers - read the docs).
It's handy to construct one query, but Apollo will cache it, normalizing the response by types - the data will be stored separately. Using one query keeps you in the overfetching camp, or template thinking (collect all possible data before one-step rendering).
React thinking keeps your data and view decomposed, used when needed, more specialised, etc.
A <Person/> container will query for the data needed to render itself plus a list of child-needed ids. Each <Book/> will query for its own data using the passed id.
As an alternative, you could set up your own GraphQL back-end as an intermediary between your front-end and the REST API you're planning to use.
It's fairly easy to implement REST APIs as data sources in GraphQL using Apollo Server and a package such as apollo-datasource-rest which is maintained by the authors behind Apollo Server.
It would also allow you to scale if you ever have to use other data sources (DBs, 3rd party APIs, etc.) and would give you full control about exactly what data your queries return.
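That intermediary approach can be sketched with a resolver map. The `libraryAPI` data source below stands in for a RESTDataSource subclass (from apollo-datasource-rest) exposing `getPerson(id)` and `getBook(id)` against the endpoints from the question; the method names and the `libraryAPI` key are illustrative, not part of any real API.

```javascript
// Resolvers for a small GraphQL intermediary in front of the REST API.
// `dataSources.libraryAPI` is assumed to be injected by Apollo Server.
const resolvers = {
  Query: {
    // GET api/person/{id} -> { id, name: "Phil", books: [1, 5, 17, 31] }
    person: (_root, { id }, { dataSources }) =>
      dataSources.libraryAPI.getPerson(id),
  },
  Person: {
    // Resolve the array of book ids into full Book objects, one request per id.
    // A real RESTDataSource would also dedupe and cache these GETs for you.
    books: (person, _args, { dataSources }) =>
      Promise.all(person.books.map((bookId) => dataSources.libraryAPI.getBook(bookId))),
  },
};
```

With this in place, the `fetchBooksOfUser` query from the question works unchanged against the intermediary, and the client never sees the id arrays.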

AWS Amplify how to include validation across two model attributes? (e.g. startDate < endDate)

Using AWS Amplify how does one update the schema.graphql model file so as to cause backend validation across multiple fields for the graphql API that is created.
For example, with the following schema.graphql file for Amplify, how could I update this (or add other files to the project) so as to include a server-side validation check on the GraphQL API it creates, such that:
"startDate should be before endDate"
schema.graphql file:
type Event @model {
  id: ID!
  name: String!
  startDate: AWSDate!
  endDate: AWSDate!
  plan: Plan! @connection(name: "PlanEvents")
}
If this is not possible with Amplify (note I'm using the JavaScript Amplify library with a React front end), advice on what approach to take would be appreciated (e.g. which backend AWS components I would have to look into and learn, and how this would integrate with the GraphQL API that Amplify is already building for me automatically).
You could add a custom resolver
Your Event model will cause the creation of a file, build/Mutation.createEvent.req.vtl, which can be overwritten by adding a resolvers/Mutation.createEvent.req.vtl. In that file you could put the logic to compare the two dates and throw an error. Something like:
#if( $ctx.args.input.startDate > $ctx.args.input.endDate )
  $util.error("startDate must be before endDate")
#end
Bear in mind I have no idea what the actual syntax for comparing dates in VTL is. This might help.

Seamlessly migrate to Postgraphile (multiple ApolloClient instances)

Postgraphile seems like a very handy tool, but I already have tens of queries and mutations on the client and server side.
Is there any way to integrate Postgraphile piece by piece, keeping my old, hand-written GraphQL schema working?
So, now I have following initialization code:
function createApolloLink() {
  return createHttpLink({
    uri: '/graphql',
    credentials: 'same-origin'
  });
}

function create() {
  return new ApolloClient({
    link: createApolloLink(),
    ssrMode: !process.browser, // eslint-disable-line
    cache: new InMemoryCache(),
    connectToDevTools: process.browser
  });
}
How can I utilize one normalized store (client side) and connect to a second API endpoint, driven by Postgraphile, e.g. /graphql2?
Typically your GraphQL client shouldn't have to think about this - it should be handled on the server side.
There's a number of techniques you can use to address this on the server side:
Schema Stitching
Schema stitching is a straight-forward approach for your issue - take your old schema and merge it with your PostGraphile schema; that way when clients communicate with /graphql they have access to both schemas. You can then mark everything in your old schema as deprecated and slowly phase out usage. However, if you can, I'd instead recommend that you use a PostGraphile plugin...
PostGraphile Plugin
PostGraphile is built around a plugin system, and you can use something like the makeExtendSchemaPlugin to mix your old GraphQL schema into the PostGraphile one. This is documented here: https://www.graphile.org/postgraphile/make-extend-schema-plugin/ but if your old types/resolvers are implemented via something like graphql-tools this is probably the easiest way to get started:
const { makeExtendSchemaPlugin, gql } = require('graphile-utils');

const typeDefs = gql`
  type OldType1 {
    field1: Int!
    field2: String
  }
  # OldType2 and any other legacy types would be defined here too
  extend type Query {
    oldField1: OldType1
    oldField2: OldType2
  }
`;

const resolvers = {
  Query: {
    oldField1(/* ... */) {
      /* old logic here */
    },
    // ...
  },
};

const AddOldSchemaPlugin = makeExtendSchemaPlugin(build => ({
  typeDefs,
  resolvers,
}));

module.exports = AddOldSchemaPlugin;
This will also lead to the best performance as there should be no added latency, and you can again mark the legacy fields/mutations as deprecated.
Schema Delegation
Using this approach you write your own new GraphQL schema which then "delegates" to the other GraphQL schemas (the legacy one, and the one generated by PostGraphile). This adds a little latency but gives you much more control over the final shape of your GraphQL schema, though with this power comes great responsibility - if you make a typo then you're going to have to maintain that typo for a long time! Personally, I prefer the generated schema approach used by PostGraphile.
However, to answer your question as asked: Apollo Link has "context" functionality that allows you to change how a query is executed. Typically this is used to add headers, but you can also use it to override the URI that determines where the query is sent. I've never done this myself, but I wouldn't be surprised if there was an Apollo Link you could use that switches automatically based on a client directive or even on the field name.
https://github.com/apollographql/apollo-link/tree/master/packages/apollo-link-http#context
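The routing decision at the heart of that idea is just a predicate on the operation's context. The sketch below shows only that decision; the `clientName` context key is an illustrative choice, not a real apollo-link convention. With the real packages you would pass the same predicate to `split` from 'apollo-link' together with two createHttpLink instances:

```javascript
// Pick an endpoint per operation based on a flag in its Apollo Link context.
// (Illustrative sketch; `clientName` is an assumed, not standard, context key.)
const pickUri = (operation) =>
  operation.getContext().clientName === 'postgraphile' ? '/graphql2' : '/graphql';

// With apollo-link it would look roughly like:
//   const link = split(
//     (op) => op.getContext().clientName === 'postgraphile',
//     createHttpLink({ uri: '/graphql2', credentials: 'same-origin' }),
//     createHttpLink({ uri: '/graphql', credentials: 'same-origin' })
//   );
// and a query opts in via:
//   client.query({ query, context: { clientName: 'postgraphile' } })
```

Because both links feed the same ApolloClient, responses from both endpoints land in the one normalized InMemoryCache.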
