API Platform GraphQL creates generically typed IterableConnection instead of entity-specific Connection for related entities

Summary
When creating two entities with a relation, and at the same time using a custom DataProvider (implementing CollectionDataProviderInterface and RestrictedDataProviderInterface), the GraphQL schema generates a generic IterableConnection instead of a Connection specific to the relation.
Works: Book (example entity provided by api-platform which uses Doctrine ORM) has the following GraphQL field:
reviews(
  first: Int
  last: Int
  before: String
  after: String
): ReviewConnection # Connection which is specific to Reviews
Does not work: Agent (custom entity using a custom DataProvider) has the following GraphQL field:
dept(
  first: Int
  last: Int
  before: String
  after: String
): IterableConnection # <-- Connection which is generic, expected DepartmentConnection
Detailed description
The example entities provided in the documentation under Getting Started with API Platform: Hypermedia and GraphQL API, Admin and Progressive Web App: Book and Review, work correctly out of the box.
The Book entity will have a GraphQL field called reviews which is of type ReviewConnection.
So for instance a Book could be queried like this:
{
  book(id: "/books/1") {
    reviews { # type: ReviewConnection
      edges { # type: [ReviewEdge]
        node { # type: Review
          id
          body
          publicationDate
          book {
            description
          }
        }
      }
    }
  }
}
However, for the custom entities using a custom DataProvider, the connections are typed as a generic IterableConnection. This makes it impossible to query the related entity using GraphQL, since the entity behind the connection is not known. The REST API still works though, so from the looks of it the entities and DataProviders are indeed correct.
The corresponding query for an Agent would look like this:
{
  agent(id: "/agents/1") {
    dept { # type: IterableConnection
      edges { # type: [IterableEdge] (expected type: [DepartmentEdge])
        node # type: Iterable (expected type: Department)
      }
    }
  }
}
Also when executing this GraphQL query, the following exception is thrown:
Argument 1 passed to ApiPlatform\Core\GraphQl\Serializer\SerializerContextBuilder::replaceIdKeys()
must be of the type array, bool given, called in
/srv/api/vendor/api-platform/core/src/GraphQl/Serializer/SerializerContextBuilder.php on line 72
Basically it boils down to this: in SerializerContextBuilder.php on line 72, the variable $fields will look like this for a Book:
$fields = [
    'reviews' => [
        'edges' => [
            'node' => [
                'id' => true
            ]
        ]
    ]
];
While for an Agent it will lack the last array level with the 'id':
$fields = [
    'dept' => [
        'edges' => [
            'node' => true
        ]
    ]
];
However, I figured the exception is not the root cause in that sense, because the problem is already present at the point when the schema is generated.
I cannot figure out where the problem lies, if it's on the entity, DataProvider or somewhere else.
The Book entity, which works correctly, looks exactly as in the documentation.
The relevant parts are basically:
/**
* @var Review[] Available reviews for this book.
*
* @ORM\OneToMany(targetEntity="Review", mappedBy="book", cascade={"persist", "remove"})
*/
public $reviews;
Here it seems to be enough to type the property as @var Review[], and the GraphQL field called reviews will automatically be of type ReviewConnection, thus containing the schema of the Review entity.
The custom entity Agent however, which doesn't work as expected, looks basically the same, but without the ORM annotations:
/**
* @var Department[]
*/
public $dept;
It works to query both agent and agents, and also department and departments, in GraphQL. This means that the DataProviders work, and that the schema generated for a single entity is correct, but not the connection to related entities.
Any suggestions greatly appreciated!

Related

Complex Laravel, GraphQL and Lighthouse implementation

My question relates to how to build complex custom resolvers, and why they don't play along well with built-in resolvers. I cannot find any good examples of complex resolvers, and my real-life case is even more complex than this example.
I have the following schema
type Query {
  users: [User!]! @field(resolver: "App\\Library\\UserController@fetchAll")
  posts: [Post!]! @field(resolver: "App\\Library\\PostController@fetchAll")
  post(id: Int! @eq): Post @find
}
type User {
  id: ID!
  name: String
  posts: [Post!]! @field(resolver: "App\\Library\\PostController@fetchAll")
}
type Post {
  id: ID!
  content: String!
  comments: [Comment] @field(resolver: "App\\Library\\CommentController@fetchAll")
}
type Comment {
  id: ID!
  reply: String!
  commentRating: [CommentRating] @field(resolver: "App\\Library\\CommentRatingController@fetchSum")
}
type CommentRating {
  id: ID!
  rating: String
}
And for instance I have this query
{
  users {
    id,
    name
    posts {
      content
      comments {
        id,
        reply
      }
    }
  }
}
I need custom resolvers because of business logic, but not for all of them. The above works (I use custom resolvers for all of them on purpose; I'll explain in a bit), but only if I correctly build my Eloquent query in the first resolver that gets called, like so:
// Function in custom resolver. All other custom resolvers which are accessed can just pass the $rootValue on, or operate on it.
public function fetchAll($rootValue, array $args, GraphQLContext $context, ResolveInfo $resolveInfo)
{
    // We have some more sophisticated logic to dynamically build the array parameter on the line below, because the query may not always request comments, which means 'posts.comments' won't be needed. As this is the entrypoint, $rootValue is empty.
    $t = User::with(['posts', 'posts.comments', 'posts.comments.ratings'])->get();
    // Business logic modules called here
    return $t;
}
If I start with a custom resolver, but something in the query uses a built-in resolver, for instance if I change
type User {
  id: ID!
  name: String
  posts: [Post!]! @field(resolver: "App\\Library\\PostController@fetchAll")
}
to
type User {
  id: ID!
  name: String
  posts: [Post!]! @all
}
Then it still runs correctly, but the N+1 issue gets introduced: I can see in my MySQL log that multiple queries are suddenly being run, which does not happen if I have only custom, or only built-in, resolvers. Is it bad practice to let a custom resolver call a built-in resolver?
Is it best to just stick to custom resolvers for all of my types? And is my approach to building the custom resolver correct? (Refer to the fetchAll code snippet above.)
You can map your resolver class in your schema like this:
type Query {
  users: [User] @field(resolver: "App\\GraphQL\\Queries\\User@FooFunction")
}
and generate the query resolver class with this command:
php artisan lighthouse:query User
and put whatever query you like in the function called FooFunction:
<?php

namespace App\GraphQL\Queries;

use Carbon\Carbon;
use GraphQL\Type\Definition\ResolveInfo;
use Illuminate\Support\Facades\DB;
use Nuwave\Lighthouse\Support\Contracts\GraphQLContext;

class User
{
    /**
     * Return a value for the field.
     *
     * @param null $rootValue Usually contains the result returned from the parent field. In this case, it is always `null`.
     * @param mixed[] $args The arguments that were passed into the field.
     * @param \Nuwave\Lighthouse\Support\Contracts\GraphQLContext $context Arbitrary data that is shared between all fields of a single query.
     * @param \GraphQL\Type\Definition\ResolveInfo $resolveInfo Information about the query itself, such as the execution state, the field name, path to the field from the root, and more.
     * @return mixed
     */
    public function FooFunction($rootValue, array $args, GraphQLContext $context, ResolveInfo $resolveInfo)
    {
        return DB::table('...')
            ->where(...)
            ->get();
    }
}
Lighthouse provides directives like @hasOne, @hasMany and @belongsTo which prevent the N+1 problem. All you need to do is use them.
https://lighthouse-php.com/master/eloquent/relationships.html#avoiding-the-n-1-performance-problem
A bit of a late answer, but the problem was that I did not completely understand the GraphQL model. The custom resolvers simply need to send back a model of the correct type so that the schema can understand it; my resolvers weren't returning the correct types.

AWS Appsync: How can I create a resolver that retrieves the details for an array of identifiers?

This feels basic, so I would expect to find this scenario mentioned, but I have searched and can't find an example that matches my scenario. I have 2 endpoints (I am using HTTP data sources) that I'm trying to combine.
Class:
{
  id: string,
  students: [
    <studentID1>,
    <studentID2>,
    ...
  ]
}
and Student:
{
  id: String,
  lastName: String
}
What I would like is a schema that looks like this:
Student: {
  id: ID!
  lastName: String
}
Class: {
  id: ID!
  studentDetails: [Student]
}
From reading, I know that I need some sort of resolver on Class.studentDetails that will return an array/List of student objects. Most of the examples I have seen show retrieving the list of Students based on class ID (ctx.source.id), but that won't work in this case. I need to call the students endpoint 1 time per student, passing in the student ID (I cannot fetch the list of students by class ID).
Is there a way to write a resolver for Class/studentDetails that loops through the student IDs in Class and calls my students endpoint for each one?
I was thinking something like this in the Request Mapping Template:
#set($studentDetails = [])
#foreach($student in $ctx.source.students)
  $util.qr($studentDetails.add(...invoke web service to get student details...))
#end
$studentDetails
Edit: After reading Lisa Shon's comment below, I realized that there is a batch resolver for DynamoDB data sources that does this, but I don't see a way to do that for HTTP data sources.
It's not ideal, but you can create an intermediate type.
type Student {
  id: ID!
  lastName: String
}
type Class {
  id: ID!
  studentDetails: [StudentDetails]
}
type StudentDetails {
  student: Student
}
In your resolver template for Class, create a list of those student IDs:
#foreach ($student in $class.students)
  $util.qr($studentDetails.add({"id": "$student.id"}))
#end
and add it to your response object. Then, hook a resolver to the student field of StudentDetails and you will then be able to use $context.source.id for the individual student API call. Each id will be broken out of the array and be its own web request.
I opened a case with AWS Support and was told that the only way they know to do this is to create a Lambda Resolver that:
Takes an array of student IDs
Calls the students endpoint for each one
Returns an array of student details information
Instead of calling your student endpoint in the response, use a pipeline resolver and stitch together the responses from the different steps using the stash and context (prev.result/result), etc.

How to load the graphql queries from the server without defining it in the front end?

Now let's say we are using a REST API. I have one endpoint like this: /homeNewsFeed. This API will give us a response like this:
[
  {
    blockTitle: 'News',
    type: 'list',
    api: 'http://localhost/news'
  },
  {
    blockTitle: 'Photos',
    type: 'gallery',
    api: 'http://localhost/gallery'
  }
]
Now after getting this we go through the array and call the respective endpoints to load the data. My question is, how to do this in GraphQL? Normally we define the query in the front end code. Without doing that, how to let the server decide what to send?
The main reason to do this is: imagine we have a mobile app. We need to push new blocks to this news feed without sending an app update, but each item can have its own query.
Normally we define the query in the front end code. Without doing that, how to let the server decide what to send?
Per the spec, a GraphQL execution request must include two things: 1) a schema; and 2) a document containing an operation definition. The operation definition determines what operation (which query or mutation) to execute, as well as the format of the response. There are workarounds and exceptions (I'll discuss some below), but in general, if specifying the shape of the response on the client side is undesirable or somehow not possible, you should carefully consider whether GraphQL is the right solution for your needs.
That aside, GraphQL lends itself more to a single request, not a series of structured requests like your existing REST API requires. So the response would look more like this:
[
  {
    title: 'News',
    content: [
      ...
    ],
  },
  {
    title: 'Photos',
    content: [
      ...
    ],
  }
]
and the corresponding query might look like this:
query HomePageContent {
  blocks {
    title
    content {
      # additional fields
    }
  }
}
Now the question becomes how to differentiate between different kinds of content. This is normally solved by utilizing an interface or union to aggregate multiple types into a single abstract type. The exact structure of your schema will depend on the data you're sending, but here's an example:
interface BlockContentItem {
  id: ID!
  url: String!
}
type Story implements BlockContentItem {
  id: ID!
  url: String!
  author: String!
  title: String!
}
type Image implements BlockContentItem {
  id: ID!
  url: String!
  alt: String!
}
type Block {
  title: String!
  content: [BlockContentItem!]!
}
type Query {
  blocks: [Block!]!
}
You can now query blocks like this:
query HomePageContent {
  blocks {
    title
    content {
      # these fields apply to all BlockContentItems
      __typename
      id
      url
      # then we use inline fragments to specify type-specific fields
      ... on Image {
        alt
      }
      ... on Story {
        author
        title
      }
    }
  }
}
Using inline fragments like this ensures type-specific fields are only returned for instances of those types. I included __typename to identify what type a given object is, which may be helpful to the client app (clients like Apollo automatically include this field anyway).
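To show how __typename helps the client app, here is a minimal sketch; the renderContentItem helper and its markup are hypothetical, invented for illustration, and not part of any library:

```javascript
// Hypothetical client-side helper: branch on __typename to render each
// BlockContentItem from the query above (markup is illustrative only).
function renderContentItem(item) {
  switch (item.__typename) {
    case 'Image':
      return `<img src="${item.url}" alt="${item.alt}">`;
    case 'Story':
      return `<a href="${item.url}">${item.title} by ${item.author}</a>`;
    default:
      // Unknown type: render nothing rather than failing.
      return '';
  }
}
```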
Of course, there is still the issue of what happens when you want to add a new block. If the block's content fits an existing type, no sweat. But what happens when you anticipate you will need a different type in the future, but can't design around that right now?
Typically, that sort of change would require both a schema change on the server and a query change on the client. And in most cases, this will probably be fine because if you're getting data in a different structure, you will have to update your client app anyway. Otherwise, your app won't know how to render the new data structure correctly.
But let's say we want to future-proof our schema anyway. Here's two ways you could go about doing it.
Instead of specifying an interface for content, just utilize a custom JSON scalar. This will effectively throw response validation out the window, but it will allow you to return whatever you want for the content of a given block.
Alternatively, abstract out whatever fields might be needed in the future into some kind of key-value type. For example:
type MetaItem {
  key: String!
  value: String!
}
type Block {
  title: String!
  meta: [MetaItem!]!
  # other common fields
}
There's any number of other workarounds, some better than others depending on the kind of data you're working with. But hopefully that gives you some idea how to address the scenario you describe in a GraphQL context.

Which is the correct resolver function approach?

I would like to clarify which approach I should use for my resolver functions in Apollo + GraphQL
Let's assume the following schema:
type Post {
  id: Int
  text: String
  upVotes: Int
}
type Author {
  name: String
  posts: [Post]
}
schema {
  query: Author
}
The Apollo GraphQL tutorial suggests a resolver map like this:
{Query:{
author(_, args) {
return author.findAll()
}
}
},
Author {
posts: (author) => author.getPosts(),
}
As far as I know, all logic regarding posts, e.g. get author with posts where the count of post upVotes > args.upVotes, must be handled in the author method. That gives us the following resolver map:
{Query:{
author(_, args) {
return author.findAll({
include:[model: Post]
where: {//post upVotes > args.upVotes}
})
}
},
Author {
posts: (author) => author.getPosts(),
}
Calling author will first select the author with posts in one joined query, where post upVotes are greater than args.upVotes. Then it will select the posts for that author again, in an additional query, because of Author ... getPosts().
Technically, I can reach the same result by removing Author, since posts are already included in the author method.
I have the following questions:
Do I need this statement? In which cases?
Author {
posts: (author) => author.getPosts(),
}
If no, then how can I find out whether the posts field was requested, so that I can include the posts conditionally, depending not only on the arguments but also on the requested fields?
If yes, which posts will the final result contain? Those from the include statement, or those from getPosts()?
The resolver map you included in your question isn't valid. I'm going to assume you meant something like this for the Author type:
Author {
posts: (author) => author.getPosts(),
}
As long as your author query always resolves to an array of objects that include a posts property, then you're right in thinking it doesn't make sense to include a custom resolver for the posts field on the Author type. In this case, your query's resolver is already populating all the necessary fields, and we don't have to do anything else.
GraphQL utilizes a default resolver that looks for properties on the parent (or root) object passed down to the resolver and uses those if they match the name of the field being resolved. So if GraphQL is resolving the posts field, and there is no resolver for posts, by default it looks at the Author object it's dealing with, and if there is a property on it by the name of posts, it resolves the field to its value.
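Conceptually, the default resolver can be sketched like this; this is a simplification of the defaultFieldResolver in graphql-js, not its exact source:

```javascript
// Simplified sketch of GraphQL's default field resolver: look up a
// property named after the field on the parent object; if it's a
// function, call it, otherwise return it as-is.
function defaultFieldResolver(source, args, context, info) {
  if (source == null) return undefined;
  const property = source[info.fieldName];
  return typeof property === 'function'
    ? property.call(source, args, context, info)
    : property;
}
```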
When we provide a custom resolver, that resolver overrides the default behavior. So if your resolver was, for example:
posts: () => []
then GraphQL would always return an empty set of posts, even if the objects returned by author.findAll() included posts.
So when would you need to include the resolver for posts?
If your author resolver didn't "include" the posts, but the client requested that field. Like you said, the problem is that we're potentially making an unnecessary additional call in some cases, depending on whether your author resolver "includes" the posts or not. You can get around that by doing something like this:
posts: (author) => {
  if (author.posts) return author.posts
  return author.getPosts()
}
// or more succinctly
posts: author => author.posts ? author.posts : author.getPosts()
This way, we only call getPosts if we actually need to get the posts. Alternatively, you can omit the posts resolver and handle this inside your author resolver. We can look at the fourth argument passed to the resolver for information about the request, including which fields were requested. For example, your resolver could look something like this:
author: (root, args, context, info) => {
  const include = []
  const requestedPosts = info.fieldNodes[0].selectionSet.selections.some(s => s.name.value === 'posts')
  if (requestedPosts) include.push(Post)
  return Author.findAll({ include })
}
Now your resolver will only include the posts for each author if the client specifically requested them. The AST object provided to the resolver is messy to parse, but there are libraries out there (like this one) to help with that.

Why does Eloquent change relationship names on toArray call?

I'm encountering an annoying problem with Laravel and I'm hoping someone knows a way to override it...
This is for a system that allows sales reps to see inventory in their territories. I'm building an editor to allow our sales manager to go in and update the store ACL so he can manage his reps.
I have two related models:
class Store extends Eloquent {
    public function StoreACLEntries()
    {
        return $this->hasMany("StoreACLEntry", "store_id");
    }
}
class StoreACLEntry extends Eloquent {
    public function Store()
    {
        return $this->belongsTo("Store");
    }
}
The idea here is that a Store can have many entries in the ACL table.
The problem is this: I built a page which interacts with the server via AJAX. The manager can search in a variety of different ways and see the stores and the current restrictions for each from the ACL. My controller performs the search and returns the data (via AJAX) like this:
$stores = Store::where("searchCondition", "=", "whatever")
    ->with("StoreACLEntries")
    ->get();
return Response::json(array('stores' => $stores->toArray()));
The response that the client receives looks like this:
{
  id: "some ID value",
  store_ac_lentries: [
    {
      created_at: "2014-10-14 08:13:20",
      field: "salesrep",
      id: "1",
      store_id: "5152-USA",
      updated_at: "2014-10-14 08:13:20",
      value: "salesrep ID value"
    }
  ]
}
The problem is with the way the StoreACLEntries name is mutilated: it becomes store_ac_lentries. I've done a little digging and discovered it's the toArray method that's inserting those funky underscores.
So I have two questions: "why?" and "how do I stop it from doing that?"
This is caused by Eloquent automatically converting camelCase relation names into snake_case when serializing. You can change your function name from StoreACLEntries to storeaclentries (all lowercase) to avoid the effect.
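For illustration, the mangling can be reproduced with the classic camelCase-to-snake_case regex; this is a sketch of the behavior, not Laravel's exact implementation:

```javascript
// Sketch of the camelCase -> snake_case conversion Eloquent applies to
// relation names in toArray(). The regex inserts an underscore before
// each uppercase letter and consumes non-overlapping pairs, which is
// exactly how runs of capitals like "ACL" get split oddly.
function snakeCase(value) {
  return value.replace(/(.)([A-Z])/g, '$1_$2').toLowerCase();
}
// snakeCase('StoreACLEntries') yields the observed 'store_ac_lentries'
```

Depending on the Laravel version, setting public static $snakeAttributes = false; on the model should also keep relation keys in their original form in the array output.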
