The service I want to work with exposes two GraphQL APIs that have overlapping types.
How can I share as many fragments as possible between the two APIs?
After fetching the schema.graphqls files via introspection and saving them to their respective source folders, I tried generating the files (queries, mutations, fragments), but only value types were generated.
apollo {
    fun Service.configure() {
        includes.add("com/example/shared")
    }
    service("api1") {
        configure()
        packageName.set("com.example")
        sourceFolder.set("com/example/api1")
    }
    service("api2") {
        configure()
        packageName.set("com.example.api2")
        sourceFolder.set("com/example/api2")
    }
}
I have a specific use case where a user's data sources are conditional, e.g. based on the data sources saved in the database for each specific user.
This also means every data source has unique credentials for every user, which is fine for RESTDataSource because I can use willSendRequest to set the authentication headers before each request.
However, I have custom data sources that use proprietary clients (for example JSForce for Salesforce), and those have their own fetch mechanisms.
As of now, I have a custom transformer directive that fetches the tokens from the database and adds them to the context. However, the directive runs before the dataSource.initialize() method, so I can't use the credentials there because the context doesn't have them yet.
I also don't want to initialize every data source for every user when this request doesn't even use it, but the dataSources() function doesn't accept any parameters and is not contextual.
Bottom line: is it possible to pass data sources conditionally, even based on the Express request? When is the right time to pass the tokens and credentials to the dataSource? Maybe I should add my own custom init function and call it from the directive?
You have options. Here are two choices:
1. Just add your dataSources
If you just initialize all dataSources, each one can internally check whether the user has access. You could have a getClient function that resolves with the client or throws an UnauthorizedError, as appropriate; see the sketch below.
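A minimal sketch of that idea, assuming apollo-datasource and apollo-server-express; buildAdminClient and listUsers are hypothetical names:

import { DataSource, DataSourceConfig } from 'apollo-datasource'
import { ForbiddenError } from 'apollo-server-express'

// hypothetical factory for the proprietary client
declare function buildAdminClient(user: any): { fetchUsers(): Promise<any[]> }

// the data source is always initialized, but access is checked
// lazily, the first time a resolver actually uses it
class AdminAPI extends DataSource {
  private context: any

  initialize(config: DataSourceConfig<any>) {
    this.context = config.context
  }

  private getClient() {
    if (!this.context?.user?.isAdmin) {
      throw new ForbiddenError('Unauthorized')
    }
    return buildAdminClient(this.context.user)
  }

  async listUsers() {
    return this.getClient().fetchUsers()
  }
}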
2. Don't just add your dataSources
So if you really don't want to initialize the dataSources at ALL, you can absolutely do this by adding the "dataSources" yourself, just like Apollo does it.
const server = new ApolloServer({
  // this example uses apollo-server-express
  context: async ({ req, res }) => {
    const accessToken = req.headers?.authorization?.split(' ')[1] || ''
    const user = accessToken && buildUser(accessToken)
    const context = { user }
    // You can't use the name "dataSources" in your config because ApolloServer will puke, so I called them "services"
    await addServices(context)
    return context
  }
})
const addServices = async (context) => {
  const { user } = context;
  const services = {
    userAPI: new UserAPI(),
    postAPI: new PostAPI(),
  }
  if (user.isAdmin) {
    services.adminAPI = new AdminAPI()
  }

  const initializers = [];
  for (const service of Object.values(services)) {
    if (service.initialize) {
      initializers.push(
        service.initialize({
          context,
          cache: null, // or add your own cache
        })
      );
    }
  }
  await Promise.all(initializers);

  /**
   * this is where you have to deviate from Apollo.
   * You can't use the name "dataSources" in your config because ApolloServer will puke
   * with the error 'Please use the dataSources config option instead of putting dataSources on the context yourself.'
   */
  context.services = services;
}
Some notes:
1. You can't call them "dataSources"
If you return a property called "dataSources" on your context object, Apollo will not like it very much [meaning it throws an Error]. In my example, I used the name "services", but you can do whatever you want... except "dataSources".
With the above code, in your resolvers, just reference context.services.whatever instead.
2. This is what Apollo does
This pattern is copied directly from what Apollo already does for dataSources [source]
3. I recommend you still treat them as DataSources
I recommend you stick to the DataSources pattern and that your "services" all extend DataSource. It's going to be easier for everyone involved.
4. Type safety
If you're using TypeScript or something similar, you're going to lose a bit of type safety, since context.services will be one shape or another depending on the user. Even if you're not, if you're not careful you may end up throwing "Cannot read property users of undefined" errors instead of "Unauthorized" errors. You might be better off creating "dummy services" that have the same object shape but just throw Unauthorized, as sketched below.
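For note 4, a hedged sketch of such a dummy service (the method names are assumptions):

// mirrors AdminAPI's shape but always rejects, so resolvers fail
// with "Unauthorized" instead of "Cannot read property ... of undefined"
class DeniedAdminAPI {
  async listUsers(): Promise<never> {
    throw new Error('Unauthorized')
  }
}

// in addServices:
//   services.adminAPI = user?.isAdmin ? new AdminAPI() : new DeniedAdminAPI()
// and in a resolver, as note 1 says, reference the renamed property:
//   const users = await context.services.adminAPI.listUsers()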
Context
This problem is likely predicated on certain choices, some of which are changeable and some of which are not. We are using the following technologies and frameworks:
Relay / React / TypeScript
ContentStack (CMS)
Problem
I'm attempting to create a highly customizable page that can be built from multiple kinds of UI components based on the data presented to it, so that pages can be assembled in a CMS from prefab UI in an unpredictable order.
My first attempt at this was to create a set of fragments for the potential UI components that may be referenced in an array:
query CustomPageQuery {
  title
  description
  customContentConnection {
    edges {
      node {
        ...HeroFragment
        ...TweetBlockFragment
        ...EmbeddedVideoFragment
        # Further fragments are added here as we add more kinds of UI
      }
    }
  }
}
In the CMS we're using (ContentStack), the complexity of this query has grown to the point that it is rejected, because it requires too many database calls in a single query. For that reason, I'm hoping there's a way to split the fragment fetches out of the initial query, or some similar solution that breaks this query into multiple pieces.
I was hoping the @defer directive would solve this for me, but it's not supported by relay-compiler.
Any ideas?
Sadly, @defer is still not part of the standard, so it is not supported by most implementations (you would also need the server to support it).
I am not sure I understand the problem correctly, but you might want to look at using @skip or @include to fetch only the fragments you need, depending on the type of the node. It would, however, require the frontend to know what it wants to query beforehand.
query CustomPageQuery($hero: Boolean!, $tweet: Boolean!, $video: Boolean!) {
  title
  description
  customContentConnection {
    edges {
      node {
        ...HeroFragment @include(if: $hero)
        ...TweetBlockFragment @include(if: $tweet)
        ...EmbeddedVideoFragment @include(if: $video)
      }
    }
  }
}
Generally you want to be able to discriminate the type without having to do a database query. So say:
type Hero {
  id: ID
  name: String
}

type Tweet {
  id: ID
  content: String
}

union Content = Hero | Tweet
{
  Content: {
    __resolveType: (parent, ctx) => {
      // this should be able to resolve the type without a DB query,
      // e.g. by discriminating on a field that only one type has
      return 'content' in parent ? 'Tweet' : 'Hero'
    },
  },
}
Once that is passed, each fragment is then resolved, making more database queries. If those are not properly batched with dataloaders, then you have an N+1 problem (see the sketch below). I am not sure how much control (if any) you have over the backend, but there is no silver bullet for your problem.
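For what it's worth, a minimal batching sketch using the dataloader package; db.heroesByIds is a hypothetical batch accessor on the backing store:

import DataLoader from 'dataloader'

// hypothetical batch accessor for the backing store
declare const db: { heroesByIds(ids: string[]): Promise<Array<{ id: string }>> }

// all heroLoader.load(id) calls made in the same tick are coalesced
// into a single heroesByIds query instead of N separate queries
const heroLoader = new DataLoader(async (ids: readonly string[]) => {
  const rows = await db.heroesByIds([...ids])
  // DataLoader expects results in the same order as the requested keys
  return ids.map((id) => rows.find((row) => row.id === id))
})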
If you can't make optimizations on the backend, then I would suggest trying to limit the connection. They seem to be using cursor-based pagination, so you start with, say, first: 10, and once the first batch is returned, you can query the next elements by setting after to the last cursor of the previous batch:
query CustomPageQuery($after: String) {
  customContentConnection(first: 10, after: $after) {
    edges {
      cursor
      node {
        ...HeroFragment
        ...TweetBlockFragment
        ...EmbeddedVideoFragment
      }
    }
    pageInfo {
      hasNextPage
    }
  }
}
As a last resort, you could try to first fetch all the IDs and then make subsequent queries to the CMS for each id (using aliases, I guess) or each type (if you can filter on the connection field). But I feel dirty just writing it, so avoid it if you can.
{
  one: node(id: "UUID1") {
    ...HeroFragment
    ...TweetBlockFragment
    ...EmbeddedVideoFragment
  }
  two: node(id: "UUID2") {
    ...HeroFragment
    ...TweetBlockFragment
    ...EmbeddedVideoFragment
  }
}
I'm wondering what would be the best way to ignore/discard unknown enum values in GraphQL/Apollo Server.
Let's say my GraphQL schema defines an array of enums, "enum Service { Supermarket, TicketSales }", and it works fine now, but later on another service I depend on adds some new value (e.g. Playground) that my client just doesn't support. I would like to ignore it and return the supported values without an error.
What would be the best way to do this in GraphQL? My first idea was to make a directive that would read the supported values from the schema and ignore everything else, but after googling around I didn't find any good examples of how to do it. Can you point me in a direction to go about this?
If your resolver function will accept arbitrary strings, then you can use a custom scalar type, or just String.
"""
The type of a service. `Supermarket` means..., and
`TicketSales` means...; any other value is ignored.
"""
scalar Service
GraphQL generally places responsibility on the client to conform to the server's expectations, rather than making the server try to support any request. There are a couple of places you can reasonably expect an enum value like this to appear:
enum Service { Supermarket, TicketSales }

type Query {
  inAReturnValue: Service!
  asAQueryParam(service: Service!): Node
}

type Mutation {
  asAMutationInput(service: Service!): Node
}
In particular it may not make sense to tell the server "make the type of this object be a playground" if the server just doesn't understand that. Conversely, if the server knows about "playground", it could return it in cases the client may not expect. Having an enum here makes it explicit what the server knows about. The server has said what it supports and it's the client's responsibility to cooperate.
Note that, if it's an enum value, the client can find out whether the server supports playgrounds, and this might help inform its behavior:
query GetServiceTypes {
  __type(name: "Service") {
    enumValues { name }
  }
}
After playing around, I found something I can use to get around my original problem, so I will post it here in case somebody else is wondering the same thing.
In short, my original problem was that I receive several different "available services" kinds of string arrays from other services, and I was thinking of mapping them to an enum for better TypeScript support etc. The problem was that if I get some unknown value from another service, my GraphQL server will fail.
My original idea was to fix this with a directive, which I eventually got working:
# In schema
directive @mapUnknownTo(value: String) on ENUM

enum SomeAttribute @mapUnknownTo(value: "__UNKNOWN__") {
  SomeAttribute1
  AnotherAttribute
  SomethingElse
  __UNKNOWN__
}
And the directive implementation is:
import { SchemaDirectiveVisitor } from 'graphql-tools';
import { GraphQLEnumType } from 'graphql';

export class MapUnknownToDirective extends SchemaDirectiveVisitor {
  visitEnum(type: GraphQLEnumType) {
    const { value = '__UNKNOWN__' } = this.args;
    const valueMap = type.getValues().reduce(
      (map, v) => map.set(v.value, v.name),
      new Map<string, string>()
    );
    type.serialize = (v: string): string => valueMap.get(v) || value;
  }
}
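In case it helps, here is a sketch of how such a directive is typically wired up with graphql-tools (the import path for the directive class is illustrative):

import { makeExecutableSchema } from 'graphql-tools';
import { MapUnknownToDirective } from './MapUnknownToDirective';

// typeDefs holds the SDL above, including the directive declaration
declare const typeDefs: string;
declare const resolvers: any;

const schema = makeExecutableSchema({
  typeDefs,
  resolvers,
  // the key must match the directive's name in the schema
  schemaDirectives: {
    mapUnknownTo: MapUnknownToDirective,
  },
});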
So this will map all values not defined in the schema to some custom value, which is not exactly what I originally wanted, but at least it doesn't produce an error, so it's okay-ish.
I'm still not 100% sure if directives are the way to go for cases like this, but at least it's one possible solution.
I have an endpoint that accepts as well as returns a reactive type. What I'm trying to achieve is to verify that the complete reactive request (which is actually an array of resources) is valid before persisting the changes to the database (read: full update of a resource). The question is not so much about how to actually verify the request, but about how to chain the steps together using Spring's reactive operators (map, flatMap and the like) in the desired order, which is basically:
1. verify the correctness of the request (the resource DTO is properly annotated with JSR-303 annotations)
2. clear the current resources, if the request is valid
3. persist the new resources in the database after clearing it
Let's assume the following scenario:
val service: ResourceService

@PostMapping("/resource/")
fun replaceResources(@Valid @RequestBody resources: Flux<RessourceDto>): Flux<RessourceDto> {
    var deleteWrapper = Mono.fromCallable {
        service.deleteAllRessources()
    }
    deleteWrapper = deleteWrapper.subscribeOn(Schedulers.elastic())
    return deleteWrapper.thenMany<RessourceDto> {
        resources
            .map(mapper::map) // map to model object
            .flatMap(service::createResource)
            .map(mapper::map) // map to dto object
            .subscribeOn(Schedulers.parallel())
    }
}
// alternative try
@PostMapping("/resourceAlternative/")
override fun replaceResourcesAlternative2(@RequestBody resources: Flux<ResourceDto>): Flux<ResourceDto> {
    return service.deleteAllResources()
        .thenMany<ResourceDto> {
            resources
                .map(mapper::map)
                .flatMap(service::createResource)
                .map(mapper::map)
        }
}
What's the idiomatic way of doing this in a reactive fashion?
I am wrapping an older REST API service with an Apollo server. Calls to the REST service result in a JSON object that nests the payload two to three levels deep. For example:
{
  "MRData": {
    "CatTable": {
      "Cats": []
    }
  }
}
And to further complicate matters, the nesting pattern and node names are different for each resource endpoint. So my question is: since each resource result will need custom manipulation, where is the best place to do it: in the Connector, the Resolver, or the Model?
Connector
If done in the Connector, then a custom method is needed for each resource. Seems like a lot of boilerplate.
public fetchCats(resource: string) {
  return new Promise<any>((resolve, reject) => {
    // `this.baseUrl` is assumed here; the original snippet referenced an undefined `url`
    const url = `${this.baseUrl}${resource}`
    request.get(url, (err, resp, body) => {
      err ? reject(err) : resolve(JSON.parse(body).MRData.CatTable.Cats)
    })
  })
}
Resolver
The resolver method receives a promise but the result cannot be manipulated:
const allCats = (_, params, context) => context.cat.getCats()
  .then((data) => { /* too late to manipulate data here */ })
Model
The Model looks promising, but I'm not quite sure how to structure it:
public getCats() {
  const cats = this.connector.fetchCats('/cats.json');
  return cats;
}
Apollo will (more often than not) be integrated with REST APIs. I'm looking forward to discovering the best way to handle this case.
I would generally recommend doing the parsing in the connector, because connectors should abstract over the details of the backends. If they do, you should technically be able to switch out one backend for another when appropriate. For example, you could switch from querying a REST API to sending queries directly to the database where that makes sense.
The consequence of this is that you'll need to build a new connector for every REST API, because no two REST APIs are the same. A sketch of keeping that per-endpoint boilerplate small follows.
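One option is a shared low-level fetch plus a one-line unwrap per endpoint; this is a sketch, with names and the request library mirroring the question's snippet:

import * as request from 'request'

// shared low-level fetch; each connector method only contributes
// the endpoint path and the line that unwraps its payload
const getJSON = (url: string): Promise<any> =>
  new Promise((resolve, reject) => {
    request.get(url, (err, _resp, body) =>
      err ? reject(err) : resolve(JSON.parse(body))
    )
  })

class CatConnector {
  constructor(private baseUrl: string) {}

  public fetchCats(): Promise<any[]> {
    return getJSON(`${this.baseUrl}/cats.json`).then(
      (data) => data.MRData.CatTable.Cats
    )
  }
}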