I have a JSON object (an array of different structures) like this:
"cells": [
{
"x":3,
"y":6,
},
{
"type": "shape",
"direction": right,
},
....
{
"a": 4,
"b": 5,
"c": 6,
}
]
I need to save this object using a GraphQL mutation. What's the way to do it? This is the schema I tried:
type A {
x: Int!
y: Int!
}
type B {
type: String!
direction: String!
}
type C {
a: Int!
b: Int!
c: Int!
}
union Request = A | B | C
input InputR {
cells: [Request!]!
}
type Mutation {
save(cells: InputR!): InputR
}
But it doesn't work. Any ideas?
Thanks!
So there are a few answers, depending on what you're looking for.
1. You can't. GraphQL doesn't have union input types (today). They were originally considered an antipattern, though there are proposals making their way through the process as of today.
2. You could just create a single input type that has all of those fields, all nullable, though that leads you to the problem of "Anemic Mutations".
3. Are you sure you want to? Those structures are so different that you may be thinking about the problem from an implementation standpoint rather than a data-architecture one.
4. If #1 and #2 aren't good enough and #3 isn't true, the "best way for today" is probably just to use a JSON scalar type. It requires you to pass all of that in as a single JSON-compatible value. Here is an example of one.
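For illustration, here's a minimal sketch of that approach using the graphql-type-json package with graphql-tools; the package choice and field names are my own assumptions, not part of the original answer:

const { makeExecutableSchema } = require('graphql-tools')
const { GraphQLJSON } = require('graphql-type-json')

const typeDefs = `
  scalar JSON

  type Query {
    cells: JSON
  }

  type Mutation {
    # the whole heterogeneous array travels as a single JSON value
    save(cells: JSON!): JSON
  }
`

const resolvers = {
  JSON: GraphQLJSON,
  Mutation: {
    // cells arrives as a plain JavaScript array of mixed objects;
    // persist it however you like, then echo it back
    save: (_, { cells }) => cells,
  },
}

const schema = makeExecutableSchema({ typeDefs, resolvers })

The mutation can then be called with the whole cells array passed as a single JSON variable.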
This is an example of a document I am trying to retrieve:
{
"ref": Ref(Collection("Word"), "270608756095582738"),
"ts": 1594331477980000,
"data": {
"word": "ablatitious",
"letters": [
{
"letter": "a",
"occurrence": 2
},
{
"letter": "b",
"occurrence": 1
},
{
"letter": "l",
"occurrence": 1
},
{
"letter": "t",
"occurrence": 2
},
{
"letter": "i",
"occurrence": 2
},
{
"letter": "o",
"occurrence": 1
},
{
"letter": "u",
"occurrence": 1
},
{
"letter": "s",
"occurrence": 1
}
],
"length": 11
}
}
This is my schema
type Letter @embedded {
letter: String! @unique
occurrence: Int!
}
type Word {
word: String! @unique
letters: [Letter]!
length: Int!
}
input LetterInput {
letter: String!
occurrence: Int!
}
type Query {
Word(length: Int!): Word
WordByLetters(letters: [LetterInput!]): Word
}
This is the error I get when I attempt to update my schema with this schema:
Schema does not pass validation. Violations:
Type mismatch: field 'letters' defined at object 'Word' has type 'Letter'. (line 19, column 17):
WordByLetters(letters: LetterInput): Word
^
If I switch the type LetterInput to Letter in the WordByLetters query type, I get this error
Type 'Letter' is not an input type type. (line 19, column 26):
WordByLetters(letters: Letter): Word
^
So, clearly, I need to use an input type, which makes sense. What does not make sense is the first of the two errors. Can somebody please explain why?
FaunaDB dev advocate here.
I'm not an expert on our GraphQL (I'll poke one of the experts if you are still stuck). When you define queries like that in FaunaDB's GraphQL, it will try to match the argument (letters: Letter) to one of the attributes of the return type (Word).
Since that return type is Word
type Word {
word: String! #unique
letters: [Letter]!
length: Int!
}
it's logical that you would try to place Letter there. However, if I'm not mistaken, it has to match exactly, which means it would have to be [Letter]. And since Letter is a type, you would have to pass an ID instead of the actual Letter object. You might try to define:
WordByLetters(letters: [ ID ] ): Word
instead, but I'm honestly not sure whether you can pass arrays in arguments.
And what you actually need is this:
WordByLetters(letters: ID ): Word
which we don't really support at the moment, if I'm not mistaken.
What you could do, however, is not define the letters as embedded and instead connect to the correct letters (rather than embedding them each time). You would then also change your schema slightly to have a many-to-many relation:
type Letter {
letter: String! @unique
occurrence: Int!
words: [Word!]!
}
You could predefine all letters to get easy access to their IDs. If you do that (I don't think this would work with embedded types, but I'm not 100% certain), you could find the words via the letters:
type Query {
findLetterByID(id: ID!): Letter
}
which is a query that is already defined by FaunaDB (at least if Letter is not embedded; I've barely worked with embedded types myself, hence I'm not sure about some of the embedded behavior). Or you could define:
type Query {
findLetter(letter: String!): Letter
}
And then you could get the words via:
query GetWordsByLetter {
findLetter(letter: "a") {
words(_size: 6) {
data {
...
}
}
}
}
If that doesn't work for you, you probably want to specify a resolver on the query, which will generate a User Defined Function (UDF) for you. This requires you to dive into FQL, since UDFs are written in FQL, but it will unlock many more possibilities.
(resolver docs: https://docs.fauna.com/fauna/current/api/graphql/directives/d_resolver)
I want to bulk update a list of entries with a GraphQL mutation in FaunaDB.
The input data is a list of coronavirus cases from an external source. It will be updated frequently. The mutation should update existing entries if the entry name is present in the collection and create new ones if it is not.
Current GRAPHQL MUTATION
mutation UpdateList($data: ListInput!) {
updateList(id: "260351229231628818", data: $data) {
title
cities {
data {
name
infected
}
}
}
}
GRAPHQL VARIABLES
{
"data": {
"title": "COVID-19",
"cities": {
"create": [
{
"id": 22,
"name": "Warsaw",
"location": {
"create": {
"lat": 52.229832,
"lng": 21.011689
}
},
"deaths": 0,
"cured": 0,
"infected": 37,
"type": "ACTIVE",
"created_timestamp": 1583671445,
"last_modified_timestamp": 1584389018
}
]
}
}
}
SCHEMA
type cityEntry {
id: Int!
name: String!
deaths: Int!
cured: Int!
infected: Int!
type: String!
created_timestamp: Int!
last_modified_timestamp: Int!
location: LatLng!
list: List
}
type LatLng {
lat: Float!
lng: Float!
}
type List {
title: String!
cities: [cityEntry] @relation
}
type Query {
items: [cityEntry!]
allCities: [cityEntry!]
cityEntriesByDeathFlag(deaths: Int!): [cityEntry!]
cityEntriesByCuredFlag(cured: Int!): [cityEntry!]
allLists: [List!]
}
Every time the mutation runs, it creates new duplicate entries.
What is the best way to update the list within a single mutation?
My apologies for the delay; I wasn't sure exactly what information was missing, hence why I commented first :).
The Schema
An example of a part of a schema that has arguments:
type Mutation {
register(email: String!, password: String!): Account! @resolver
login(email: String!, password: String!): String! @resolver
}
When such a schema is imported into FaunaDB, placeholder functions will be provided for these resolvers.
The UDF parameters
If you open such a generated placeholder, you'll see that all the function does is Abort with the message that the function still has to be implemented. The implementation starts with a Lambda that takes arguments, and those arguments have to match what you defined in the resolver.
Query(Lambda(['email', 'password'],
... function body ...
))
Using the arguments is done with Var, which means Var('email') or Var('password') in this case. For example, in my specific case we would use the email that was passed in to get an account by email, and pass the password on to the Login function, which will return a secret. (The reason I do the Select here is that the return value of a GraphQL resolver has to be a valid GraphQL result, e.g. plain JSON.)
Query(Lambda(['email', 'password'],
Select(
['secret'],
Login(Match(Index('accountsByEmail'), Var('email')), {
password: Var('password')
})
)
))
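If you don't already have that index, a sketch of creating it directly in FQL could look like this, assuming an 'Account' collection with the email stored under data.email (adjust the names to your setup):

CreateIndex({
  name: 'accountsByEmail',
  source: Collection('Account'),
  // allow looking accounts up by the email stored in the document data
  terms: [{ field: ['data', 'email'] }],
  unique: true
})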
Calling the UDF resolver via GraphQL
Finally, how do you pass parameters when calling it? That should be clear from the GraphQL playground, since it provides you with docs and autocompletion. For example, after my schema import the auto-generated GraphQL docs describe the login mutation and its arguments, which means we can call it as follows:
mutation CallLogin {
login (
email: "<some email>"
password: "<some pword>"
)
}
Bulk updates
For bulk updates, you can also pass a list of values to the User Defined Function (UDF). Let's say we would want to group a number of accounts together in a specific team via the UI and therefore want to update multiple accounts at the same time.
The mutation in our schema could look as follows (IDs in GraphQL are serialized as Strings):
type Mutation { updateAccounts(accountRefs: [ID]): [ID]! @resolver }
We could then call the mutation by passing in the IDs that we receive from FaunaDB (the string, not the Ref, in case you are mixing FQL and GraphQL; if you only use GraphQL, don't worry about it).
mutation {
updateAccounts(accountRefs: ["265317328423485952", "265317336075993600"] )
}
Just like before, we will have to fill in the User Defined Function that was generated by FaunaDB. A skeleton function that just takes in the array and returns it would look like:
Query(Lambda(['arr'],
Var('arr')
))
Some people might have seen an easier syntax and would be tempted to use this:
Query(Lambda(arr => arr))
However, this currently does not work with GraphQL when passing in arrays; it's a known issue that will be fixed.
The next step is to actually loop over the array. FQL is not declarative and draws inspiration from functional languages, which means you do that just by using a Map or a Foreach.
Query(Lambda(["accountArray"],
Map(Var("accountArray"),
Lambda("account", Var("account")))
))
We now loop over the list but don't do anything with it yet, since we just return the account in the Map's body. We will now update the account and set a value 'teamName' on it. For that we need the Update function, which takes a FaunaDB reference. GraphQL sends us strings and not references, so we need to transform these ID strings to a reference with Ref as follows:
Ref(Collection('Account'), Var("account"))
If we put it all together we can add an extra attribute to a list of accounts ids as follows:
Query(Lambda(["accountArray"],
Map(Var("accountArray"),
Lambda("account",
Do(
Update(
Ref(Collection('Account'), Var("account")),
{ data: { teamName: "Awesome live-coders" } }
),
Var("account")
)
)
)
))
At the end of the Map, we just return the ID of the account again with Var("account") in order to return something that is plain JSON; otherwise we would be returning FaunaDB Refs, which are more than just JSON and will not be accepted by the GraphQL call.
Passing in more complex types
Sometimes you want to pass in more complex types. Let's say we have a simple todo schema.
type Todo {
title: String!
completed: Boolean!
}
And we want to set the completed value of a list of todos with specific titles to true. We can see in the extended schema generated by FaunaDB that there is a TodoInput.
If you look at that extended schema you might think, "Hey, that's exactly what I need!", but you can't use it when you write your mutations, since you do not have that part of the schema at creation time and therefore can't just write:
type Mutation { updateTodos(todos: [TodoInput]): Boolean! @resolver }
That will return an error, since TodoInput only exists in the generated schema, not in the schema you are importing.
However, we can just add it to the schema ourselves. Fauna will just accept that you already wrote it and not override it (make sure that you keep the required fields, else your generated 'createTodo' mutation won't work anymore).
type Todo {
title: String!
completed: Boolean!
}
input TodoInput {
title: String!
completed: Boolean!
}
type Mutation { updateTodos(todos: [TodoInput]): Boolean! @resolver }
Which means that I can now write:
mutation {
updateTodos(todos: [{title: "test", completed: true}])
}
and dive into the FQL function to do things with this input.
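As a rough sketch of what that FQL body could look like, assuming an index named 'todoByTitle' exists whose term is the todo title (the index name is a made-up example, not something FaunaDB generates for you):

Query(Lambda(['todos'],
  Do(
    Foreach(Var('todos'),
      Lambda('todo',
        Update(
          // find the todo document by its title and take its reference
          Select(['ref'], Get(Match(Index('todoByTitle'), Select(['title'], Var('todo'))))),
          { data: { completed: Select(['completed'], Var('todo')) } }
        )
      )
    ),
    // the mutation returns Boolean!, so finish with a plain JSON-compatible value
    true
  )
))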
Or if you want to include the ID along with data you can define a new type.
input TodoUpdateInput {
id: ID!
title: String!
completed: Boolean!
}
type Mutation { updateTodos(todos: [TodoUpdateInput]): Boolean! @resolver }
Once you get the hang of it and want to learn more about FQL (that's a whole different topic), we are currently writing a series of articles along with code. The first one, which is probably a good gentle introduction, appeared here: https://css-tricks.com/rethinking-twitter-as-a-serverless-app/
I'm trying to get the possible values for multiple dropdown menus from my GraphQL API.
For example, say I have a schema like so:
type Employee {
id: ID!
name: String!
jobRole: Lookup!
address: Address!
}
type Address {
street: String!
line2: String
city: String!
state: Lookup!
country: Lookup!
zip: String!
}
type Lookup {
id: ID!
value: String!
}
jobRole, city and state are all fields that have a predetermined list of values that are needed in various dropdowns in forms around the app.
What would be the best practice in the schema design for this case? I'm considering the following option:
query {
lookups {
jobRoles {
id
value
}
}
}
This has the advantage of being data-driven, so I can update my job roles without having to update my schema, but I can see this becoming cumbersome. I've only added a few of our business objects and already have about 25 different types of lookups in my schema. As I add more data into the API, I'll need to somehow maintain the right lookups being used for the right fields, deal with general lookups that are used in multiple places vs. ultra-specific lookups that will only ever apply to one field, etc.
Has anyone else come across a similar issue and is there a good design pattern to handle this?
And for the record, I don't want to use enums with introspection, for two reasons:
1. With the number of lookups we have in our existing data, there will be a need for very frequent schema updates.
2. With an enum you only get one value; I need a code that will be used as the primary key in the DB and a descriptive value that will be displayed in the UI.
//bad
enum jobRole {
MANAGER
ENGINEER
SALES
}
//needed
[
{
id: 1,
value: "Manager"
},
{
id: 2,
value: "Engineer"
},
{
id: 3,
value: "Sales"
}
]
EDIT
I wanted to give another example of why enums probably aren't going to work. We have a lot of descriptions that should show up in a dropdown and that contain special characters.
// Client Type
[
{
id: 'ENDOW',
value: 'Foundation/Endowment'
},
{
id: 'PUBLIC',
value: 'Public (Government)'
},
{
id: 'MULTI',
value: 'Union/Multi-Employer'
}
]
There are others that are worse; they contain <, >, %, etc., and some of them are complete sentences, so the restrictive naming of enums really isn't going to work for this case. I'm leaning towards just making a bunch of lookup queries and treating each lookup as a distinct business object.
I found a way to make enums work the way I needed: I can get the value by putting it in the description.
Here's my GraphQL schema definition:
enum ClientType {
"""
Public (Government)
"""
PUBLIC
"""
Union/Multi-Employer
"""
MULTI
"""
Foundation/Endowment
"""
ENDOW
}
When I retrieve it with an introspection query like so
{
__type(name: "ClientType") {
enumValues {
name
description
}
}
}
I get my data in the exact structure I was looking for!
{
"data": {
"__type": {
"enumValues": [{
"name": "PUBLIC",
"description": "Public (Government)"
}, {
"name": "MULTI",
"description": "Union/Multi-Employer"
}, {
"name": "ENDOW",
"description": "Foundation/Endowment"
}]
}
}
}
That's exactly what I need. I can use all the special characters, numbers, etc. found in our descriptions. If anyone is wondering how I keep my schema in sync with our database: I have a simple code-generating script that queries the tables that store this info and generates an enums.ts file that exports all these enums. Whenever the data is updated (which doesn't happen that often), I just re-run the code generator and publish the schema changes to production.
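For anyone curious, a rough sketch of such a generator (the database client, table, and column names here are placeholders for whatever your setup uses):

const fs = require('fs')
const db = require('./db') // stand-in for your actual database client

async function generateClientTypeEnum() {
  // 'client_types' with 'id' and 'value' columns is a made-up example
  const rows = await db.query('SELECT id, value FROM client_types')

  const entries = rows
    .map(row => `  """\n  ${row.value}\n  """\n  ${row.id}`)
    .join('\n')

  const sdl = `enum ClientType {\n${entries}\n}\n`

  // emit a TypeScript module that exports the SDL snippet
  fs.writeFileSync('enums.ts', `export const clientTypeEnum = \`\n${sdl}\`;\n`)
}

generateClientTypeEnum().catch(console.error)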
You can still use enums for this if you want.
Introspection queries can be used client-side just like any other query. Depending on what implementation/framework you're using server-side, you may have to explicitly enable introspection in production. Your client can query the possible enum values when your app loads -- regardless of how many times the schema changes, the client will always have the correct enum values to display.
Enum values are not limited to all caps, although they cannot contain spaces. So you can have Engineer but not Human Resources. That said, if you substitute underscores for spaces, you can just transform the value client-side.
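For example, a small client-side transform along those lines might look like this (plain JavaScript):

// turn e.g. HUMAN_RESOURCES back into "Human Resources" for display
const toLabel = enumValue =>
  enumValue
    .toLowerCase()
    .split('_')
    .map(word => word.charAt(0).toUpperCase() + word.slice(1))
    .join(' ')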
I can't speak to non-JavaScript implementations, but GraphQL.js supports assigning a value property for each enum value. This property is only used internally. For example, if you receive the enum as an argument, you'll get 2 instead of Engineer. Likewise, you would return 2 instead of Engineer inside a resolver. You can see how this is done with Apollo Server here.
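A minimal sketch of that value mapping with Apollo Server (the numeric codes are placeholders; the pattern is Apollo's documented internal enum values feature):

const { ApolloServer, gql } = require('apollo-server')

const typeDefs = gql`
  enum JobRole {
    MANAGER
    ENGINEER
    SALES
  }

  type Query {
    jobRole: JobRole
  }
`

const resolvers = {
  // map each enum name to the internal value used in resolvers and the DB
  JobRole: {
    MANAGER: 1,
    ENGINEER: 2,
    SALES: 3,
  },
  Query: {
    // returning the internal value 2 is serialized as "ENGINEER" for the client
    jobRole: () => 2,
  },
}

const server = new ApolloServer({ typeDefs, resolvers })
server.listen()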
I have an Apollo GraphQL server talking to an API that returns responses with roughly the following structure:
{
"pagination": {
"page": 1,
// more stuff
},
"sorting": {
// even more stuff
},
"data": [ /* actual data */ ]
}
This structure is going to be shared across pretty much all responses from this API, which I'm using extensively. data is going to be an array most of the time, but it can also be an object.
How can I write this in an efficient way, so that I don't have to repeat all these pagination and sorting fields on every data type in my schemas?
Thanks a lot!
You can sort this out with a lib I created called graphql-s2s. It enhances your schema by adding support for type inheritance, generic types, and metadata. In your case, creating a generic type for your paginated object could be a viable solution. Here is an example:
const { transpileSchema } = require('graphql-s2s')
const { makeExecutableSchema } = require('graphql-tools')
const schema = `
type Paged<T> {
data: [T]
cursor: ID
}
type Node {
id: ID!
creationDate: String
}
type Person inherits Node {
firstname: String!
middlename: String
lastname: String!
age: Int!
gender: String
}
type Teacher inherits Person {
title: String!
}
type Student inherits Person {
nickname: String!
questions: Paged<Question>
}
type Question inherits Node {
name: String!
text: String!
}
type Query {
students: Paged<Student>
teachers: Paged<Teacher>
}
`
const executableSchema = makeExecutableSchema({
typeDefs: [transpileSchema(schema)],
resolvers: resolver // your resolver object, defined elsewhere
})
I've written more details about this here (in Part II).
When you define your schema, you will end up abstracting out pagination, sorting, etc. as separate types. So the schema will look something like:
type Bar {
pagination: Pagination
sorting: SortingOptions
data: BarData # I'm an object
}
type Foo {
pagination: Pagination
sorting: SortingOptions
data: [FooData] # I'm an array
}
# more types similar to above
type Pagination {
page: Int
# more fields
}
type SortingOptions {
# more fields
}
type BarData {
# more fields
}
So you won't have to list each field within Pagination multiple times regardless. Each type that uses Pagination, however, will still need to specify it as a field -- there's no escaping that requirement.
Alternatively, you could set up a single Type to use for all your objects. In this case, the data field would be an Interface (Data), with FooData, BarData, etc. each implementing it. In your resolver for Data, you would define a __resolveType function to determine which kind of Data to return. You can pass in a typename variable with your query and then use that variable in the __resolveType function to return the correct type.
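A rough sketch of that interface approach (the concrete type and field names are invented for illustration):

const { makeExecutableSchema } = require('graphql-tools')

const typeDefs = `
  interface Data {
    id: ID!
  }

  type FooData implements Data {
    id: ID!
    fooField: String
  }

  type BarData implements Data {
    id: ID!
    barField: Int
  }

  type Pagination {
    page: Int
  }

  type SortingOptions {
    field: String
  }

  type Response {
    pagination: Pagination
    sorting: SortingOptions
    data: [Data]
  }

  type Query {
    response(typename: String): Response
  }
`

const resolvers = {
  Data: {
    // decide which concrete type each item is; here we key off a field
    // that only one of the shapes has
    __resolveType(obj) {
      return obj.fooField !== undefined ? 'FooData' : 'BarData'
    },
  },
}

const schema = makeExecutableSchema({ typeDefs, resolvers })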
You can see a good example of Interface in action in the Apollo docs.
The downside to this latter approach is that you have to return either a single Data object or an Array of them -- you can't mix and match -- so you would probably have to change the structure of the returned object to make it work.
I'm currently in the process of transforming a REST API into GraphQL, but I've hit a bit of a snag in one of the endpoints.
Currently, this endpoint returns an object whose keys can be an unlimited set of strings, and whose values all match a certain shape.
So, as a rudimentary example, I have this situation...
// response
{
foo: { id: 'foo', count: 3 },
bar: { id: 'bar', count: 6 },
baz: { id: 'baz', count: 1 },
}
Again, the keys are not known ahead of time and can be an unlimited set of strings.
In TypeScript, for example, this sort of situation is handled by creating an interface with an index signature, like so...
interface Data {
id: string;
count: number;
}
interface Response {
[key: string]: Data;
}
So, my question is: is this sort of thing possible with GraphQL? How would I go about creating a type/schema for this?
Thanks in advance!
I think that one solution could be to use the JSON.stringify() method:
// inside the fields object of your Query type
exampleQuery: {
type: GraphQLString,
resolve: (root, args, context) => {
let obj = {
foo: { id: 'foo', count: 3 },
bar: { id: 'bar', count: 6 },
baz: { id: 'baz', count: 1 }
};
return JSON.stringify(obj);
}
}
Then, after retrieving the result of the GraphQL query, you could use JSON.parse(result) (in case the part performing the query is also written in JavaScript; otherwise you would have to use an equivalent method in the other language to parse the incoming JSON response).
A disadvantage of such a solution is that you do not get to choose which fields of obj you want to retrieve from the query, but, as you said, the returned object can have an unlimited set of keys that probably are not known on the front end of the application, so there is no need to choose its keys, am I right?
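For completeness, a minimal client-side sketch of that parsing step (assuming a plain fetch call against a /graphql endpoint; adjust to your client setup):

async function loadExample() {
  const res = await fetch('/graphql', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query: '{ exampleQuery }' }),
  })

  const { data } = await res.json()
  // the field comes back as a string, so turn it back into an object
  return JSON.parse(data.exampleQuery)
}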