How should I reference grandparent properties in a GraphQL query, where I don't define the intermediate resolver?

I am building a GraphQL schema that has a type Pod, which contains three nested objects.
type Pod {
  metadata: Metadata
  spec: Spec
  status: Status
}
My data source is an external API that returns an array of this data; in fact, I defined my schema around this API response. I have included a trimmed-down version below.
[
  {
    "metadata": {
      "name": "my-app-65",
      "namespace": "default"
    },
    "spec": {
      "containers": [
        {
          "name": "hello-world",
          "image": "container-image.io/unique-id"
        }
      ]
    }
  }
  // more like-objects in the array
]
However, for each of these container objects inside the array, I would like to add some extra information that this initial API call does not provide. I can query this information separately if I provide the name & namespace properties from the parent's metadata.
/container/endpoint/${namespace}/${name}
Returns...
[
  {
    name: "hello-world",
    nestObj: {
      // data
    }
  }
]
And I would like to attach this nested object to the original response shape when that data is queried. However, I don't have a clean way to access pod.metadata.name inside the resolver for Container.
Currently my resolvers look like this:
Query: {
  pods: async () => {
    // query that returns an array of pod objects
    return pods;
  }
},
Container: {
  nestedObj: async (parent, args, context, info) => {
    // query that hits the second endpoint, which requires name & namespace;
    // however, I don't have access to those values here
  }
}
Perfect-world solution: I could access parent.parent.metadata.name inside the Container resolver.
Current approach (brute force: repetitively add the properties to the children): loop through every nested container object in every Pod and add podName & namespace as properties there.
pods.forEach(pod => pod.spec.containers.forEach(container => {
  container.podName = pod.metadata.name;
  container.namespace = pod.metadata.namespace;
}));
This feels very much like a hack, and it really bogs down my query times, especially considering this data won't always be requested.
I have two intuitive ideas but don't know how to implement them (I'm a bit new to GraphQL):
Implement a Pod resolver that would pass this data down through the rootValue, as described here: https://github.com/graphql/graphql-js/issues/1098
Access it somewhere inside the info object
The problem with the first one is that my data source only sends me the data as an array, not individual pods, and I'm unsure how to pass that array of data into resolvers for individual components (a sketch of what I mean follows below).
The problem with the second is that the info object is very dense. I tried accessing it via path, but path seems to only store the type, not the actual data.
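For what it's worth, this is roughly the shape I'm imagining for the first idea, only using the intermediate resolvers instead of rootValue (untested; fetchPods and fetchContainerInfo are hypothetical stand-ins for my real API calls):
const resolvers = {
  Query: {
    // returns the raw array of pods from the external API
    pods: async () => fetchPods(),
  },
  Pod: {
    // attach the pod's metadata to the spec object as it flows down the tree
    spec: (pod) => ({ ...pod.spec, podMetadata: pod.metadata }),
  },
  Spec: {
    // copy it one level further, onto each container
    containers: (spec) =>
      spec.containers.map((c) => ({ ...c, podMetadata: spec.podMetadata })),
  },
  Container: {
    nestedObj: async (container) => {
      const { name, namespace } = container.podMetadata;
      // hits /container/endpoint/${namespace}/${name}
      return fetchContainerInfo(namespace, name);
    },
  },
};
Because resolvers only run for fields that are actually queried, the copies stay lazy, unlike the forEach above.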
It's also possible I'm just implementing this completely wrong; I welcome such feedback.
Thanks for any guidance, suggestions, or resources.

Related

Unable to return any data in AppSync console with search - using @searchable directive in Amplify

I've added a @searchable directive to my Amplify/GraphQL schema as follows:
type Card @model @searchable {
  name: String
  id: ID!
}
I've added some items, which I can retrieve with listCards in my AppSync Console:
query MyQuery {
  listCards {
    items {
      name
    }
  }
}
# Returns:
{
  "data": {
    "listCards": {
      "items": [
        {
          "name": "hunter"
        },
        {
          "name": "url1"
        },
        {
          "name": "testThur"
        },
        {
          "name": "testThur2"
        },
        ...
      ]
    }
  }
}
Now, when I try to use searchCards I can't get it to return anything:
query MyQuery {
  searchCards(filter: {name: {ne: "nonsense"}}) {
    nextToken
    total
    items {
      name
    }
  }
}
# Returns:
{
  "data": {
    "searchCards": {
      "nextToken": null,
      "total": null,
      "items": []
    }
  }
}
How do I get this working?
I noticed that new cards that I add are returned, but ones that were added before adding the @searchable directive don't get returned.
There's a grey info paragraph in the docs https://docs.amplify.aws/cli/graphql/search-and-result-aggregations/:
Once the @searchable directive is added, all new records added to the model are streamed to OpenSearch. To backfill existing data, see Backfill OpenSearch index from DynamoDB table.
It looks like any previous items that I've created on the database won't be streamed to OpenSearch, and therefore won't be returned by 'search' AppSync calls.
We're directed here: https://docs.amplify.aws/cli/graphql/troubleshooting/#backfill-opensearch-index-from-dynamodb-table
We are instructed to use the provided python file with this command:
python3 ddb_to_es.py \
--rn 'us-west-2' \ # Use the region in which your table and OpenSearch domain reside
--tn 'Post-XXXX-dev' \ # Table name
--lf 'arn:aws:lambda:us-west-2:<...>:function:amplify-<...>-OpenSearchStreamingLambd-<...>' \ # Lambda function ARN, find the DynamoDB to OpenSearch streaming functions, copy entire ARN
--esarn 'arn:aws:dynamodb:us-west-2:<...>:table/Post-<...>/stream/2019-20-03T00:00:00.350' # Event source ARN, copy the full DynamoDB table ARN
(I've tried this with my region, ARNs, and DynamoDB references, but when I hit enter in my CLI it just goes to the next prompt and nothing happens. I've not used Python before. Hopefully someone here has more luck?)
You should run the script like this:
python3 fileNameToYourScript.py --rn <region> --tn <fullTableName> --lf <arnToYourOpenSearchLambdaFunction> --esarn <arnToYourTableName>
Remove the angle brackets and replace them with the actual values, with no quotation marks. Also drop the inline comments from the docs example; a comment after a trailing backslash breaks the shell's line continuation, so the command won't run as copied.
Another thing: I kept getting an error that credentials couldn't be found. In case you also get it, I fixed it by going to ~/.aws/credentials and duplicating my profile details, naming the copy [default]. I did the same in ~/.aws/config, duplicating my region details and naming the copy [default].
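In case it helps, the duplicated files would end up looking something like this (profile name and values are placeholders, not real credentials):
# ~/.aws/credentials
[myprofile]
aws_access_key_id = AKIA...
aws_secret_access_key = ...

# exact copy of the details above, under the name "default"
[default]
aws_access_key_id = AKIA...
aws_secret_access_key = ...

# ~/.aws/config
[profile myprofile]
region = us-west-2

# copy of the region details, under the name "default"
[default]
region = us-west-2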

Springdocs: Specifying an explicit type for Paged responses

I'm working on a "global search" for my application.
Currently, I'm using hibernate-search to search for instances of multiple different objects and return them to the user.
The relevant code looks as follows:
Search.session(entityManager)
    .search(ModelA.class, ModelB.class)
    .where(...)
    .sort(...)
    .fetch(skip, count);
Skip and count are calculated based on a Pageable, and the result is used to create an instance of Page, which is returned to the controller.
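Concretely, the derivation is just the standard Spring Data arithmetic (my assumption; this snippet is not from the original code):
int skip = (int) pageable.getOffset();   // pageNumber * pageSize
int count = pageable.getPageSize();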
This works as I'd expect; however, the types generated by swagger-docs obviously don't know what the type within the Page is and therefore use Object.
I'd like to expose the correct types, as I use them to generate the types for the frontend application.
I was able to set the type to an array when overwriting the schema like this:
@ArraySchema(schema = @Schema(anyOf = {ModelA.class, ModelB.class}))
public Page<?> search(Pageable pageable) {
However, this just disregards the Page and also isn't correct.
The next thing I tried was extending PageImpl, overriding the getContent method, and specifying the same schema on that method, but this wasn't included in the output at all.
Next was implementing Page<T> myself (and later removing the implements reference to Page<T>) and specifying the same schema on getContent, iterator, and the field itself, but also to no effect.
How do I tell springdoc what the content of the resulting Page might be?
I stumbled upon this when trying to solve a similar problem.
Inspired by this thread Springdoc with a generic return type, I came up with the following solution, and it seems to apply to your case as well. Code examples are in Kotlin.
I introduced a stub class that just acts as the schema for the response:
private class PageModel(
    @Schema(oneOf = [ModelA::class, ModelB::class])
    content: List<Object>
) : PageImpl<Object>(content)
Then I annotated my controller like this:
@Operation(
    responses = [
        ApiResponse(
            responseCode = "200",
            content = [Content(schema = Schema(implementation = PageModel::class))]
        )
    ]
)
fun getPage(pageable: Pageable): Page<Object>
This generated this API response:
"PageModel": {
  "properties": {
    "content": {
      "items": {
        "oneOf": [
          {
            "$ref": "#/components/schemas/ModelA"
          },
          {
            "$ref": "#/components/schemas/ModelB"
          }
        ],
        "type": "object"
      },
      "type": "array"
    },
    ... -> more page stuff from Spring's PageImpl<>
And in the "responses" section for the API call:
"responses": {
  "200": {
    "content": {
      "application/json": {
        "schema": {
          "$ref": "#/components/schemas/PageModel"
        }
      }
    },
    "description": "OK"
  }
}
The generated OpenAPI doc is otherwise similar to the autogenerated JSON when returning a Page; it just rewrites the "content" array property to have a specific type.

Indexing strategy for hierarchical structures on ElasticSearch

Let's say I have hierarchical types such as in the example below:
base_type
    child_type1
        child_type3
    child_type2
child_type1 and child_type2 inherit metadata properties from base_type. child_type3 has all properties inherited from both child_type1 and base_type.
To add to the example, here are several objects with their properties:
base_type_object: {
  base_type_property: "bto_prop_value_1"
},
child_type1_object: {
  base_type_property: "ct1o_prop_value_1",
  child_type1_property: "ct1o_prop_value_2"
},
child_type2_object: {
  base_type_property: "ct2o_prop_value_1",
  child_type2_property: "ct2o_prop_value_2"
},
child_type3_object: {
  base_type_property: "ct3o_prop_value_1",
  child_type1_property: "ct3o_prop_value_2",
  child_type3_property: "ct3o_prop_value_3"
}
When I query for base_type_object, I expect to search base_type_property values in each and every one of the child types as well. Likewise, if I query for child_type1_property, I expect to search through all types that have such property, meaning objects of type child_type1 and child_type3.
I see that mapping types have been removed. What I'm wondering is whether this use case warrants indexing under separate indices.
My current line of thinking using example above would be to create 4 indices: base_type_index, child_type1_index, child_type2_index and child_type3_index. Each index would only have mappings of their own properties, so base_type_index would only have base_type_property, child_type1_index would have child_type1_property etc. Indexing child_type1_object would create an entry on both base_type_index and child_type1_index indices.
This seems convenient because, as far as I can see, it's possible to search multiple indices using GET /my-index-000001,my-index-000002/_search. So I would theoretically just need to list the hierarchy of my types in the GET request, e.g. GET /base_type_index,child_type1_index/_search, as sketched below.
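For example, searching the child_type1 branch of the hierarchy might then look like this (a sketch; the field and value are taken from the objects above):
GET /base_type_index,child_type1_index/_search
{
  "query": {
    "match": {
      "child_type1_property": "ct1o_prop_value_2"
    }
  }
}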
To make it easier to understand, here is how it would be indexed:
base_type_index
base_type_object: {
  base_type_property: "bto_prop_value_1"
},
child_type1_object: {
  base_type_property: "ct1o_prop_value_1"
},
child_type2_object: {
  base_type_property: "ct2o_prop_value_1"
},
child_type3_object: {
  base_type_property: "ct3o_prop_value_1"
}
child_type1_index
child_type1_object: {
  child_type1_property: "ct1o_prop_value_2"
},
child_type3_object: {
  child_type1_property: "ct3o_prop_value_2"
}
I think the values for child_type2_index and child_type3_index are apparent, so I won't list them, in order to keep the post length at a more reasonable level.
Does this make sense and is there a better way of indexing for my use case?

Best practices for writing a PUT endpoint for a REST API

I am building a basic CRUD service with some business logic under the hood, and I'm about to start working on the PUT (update) endpoint. I have already fully written+tested GET (read) and POST (create) for my data object. The data store for my documents is an ElasticSearch instance on AWS.
I have some decisions to make about how I want to architect the PUT, namely, how I want to determine a valid request. My goal is to make it so that POST is only for the creation of new assets, and PUT will only update existing documents. (At the moment I am POSTing to Elastic with /_doc/; the intent is to move to /_create/ as part of this work.)
What I'm a little hung up on is the "right" way to check that a document exists before making the API call to Elastic to update it.
When a user submits a document to PUT, should I first GET from Elastic with the document ID to make sure the document already exists? Or should I simply try to "update" the resource and, if it doesn't exist, have one created?
Obviously there are trade-offs to each strategy. With the latter, PUTting a document that doesn't exist almost completely negates the need for a POST at all, so I'd be more inclined to go with the former, despite the additional REST call, to maintain the integrity of the basic REST definition.
Thoughts?
Whether to update a doc (with versioning) or create a new one with some shared ID relating all previous versions depends on your use case; either of them is 'correct', but there's too little information to advise on that right now.
With regards to the document-exists strategies, there are essentially two types of IDs in ES, what I call:
internal IDs (_id)
external IDs (doc_values-provided IDs)
Create an index & a doc:
PUT myindex
PUT myindex/_doc/internal_id_1
{
  "external_id": "1"
}
Internal ID check
GET myindex/_doc/internal_id_1
or
GET myindex/_count
{
  "query": {
    "ids": {
      "values": [
        "internal_id_1"
      ]
    }
  }
}
or
GET myindex/_count
{
  "query": {
    "term": {
      "_id": {
        "value": "internal_id_1"
      }
    }
  }
}
External ID check
GET myindex/_count
{
  "query": {
    "term": {
      "external_id": {
        "value": "1"
      }
    }
  }
}
and many others (terms, match (for partial matches etc), ...)
Note that I've used the _count endpoint instead of _search -- it's slightly faster.
If you intend to check the _version of a given doc before you proceed to update it, replace _count with _search?version=true and the _version attribute will become available. The request can reuse the ids query from above:
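GET myindex/_search?version=true
{
  "query": {
    "ids": {
      "values": [
        "internal_id_1"
      ]
    }
  }
}
Each hit in the response then carries the version: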
{
  "_index": "myindex",
  "_type": "_doc",
  "_id": "internal_id_1",
  "_version": 2,    <---
  "_score": 1.0,
  "_source": {
    "external_id": "1"
  }
}
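As an aside on the /_create/ endpoint mentioned in the question: it is create-only, i.e. the request fails with a version conflict if a doc with that ID already exists, so create-only semantics need no prior existence check. A sketch, reusing the doc from above:
PUT myindex/_create/internal_id_1
{
  "external_id": "1"
}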

GraphQL: Explore API without a wildcard (*)?

I am new to GraphQL and I wonder how I can explore an API without a possible wildcard (*) (https://github.com/graphql/graphql-spec/issues/127).
I am currently setting up a headless Craft CMS with GraphQL and I don't really know how my data is nested.
Even with the REST API I have no chance of just getting all the data, because I have to set up all the endpoints and therefore have to know all the field names as well.
So how could I easily explore my CraftCMS data structure?
Thanks for any hints on this.
Cheers
merc
------ Edit -------
If I use @simonpedro's suggestion:
{
  __schema {
    types {
      name
      kind
      fields {
        name
      }
    }
  }
}
I can see a lot of types and their fields...
For example I see:
{
  "name": "FlexibleContentTeaser",
  "kind": "OBJECT",
  "fields": [
    {
      "name": "id"
    },
    {
      "name": "enabled"
    },
    {
      "name": "teaserTitle"
    },
    {
      "name": "text"
    },
    {
      "name": "teaserLink"
    },
    {
      "name": "teaserLinkConnection"
    }
  ]
}
But now I would like to know how a teaserLink is structured.
I somehow found out that the teaserLink (it is a field with the type Entries, where I can link to another page) has the properties url & title.
But how would I set up a query to explore the properties available within teaserLink?
I tried all sorts of queries, but I am always confronted with error messages.
I would be really glad if somebody could give me another pointer on how I can find out which properties I can actually query...
Thank you
As far as I know, there is currently no GraphQL implementation with that capability. However, if what you want to do is explore the "data structure", i.e. the schema, you should use schema introspection, which was designed for exactly that (exploring the GraphQL schema). For example, a simple GraphQL introspection query would be like this:
{
  __schema {
    types {
      name
      kind
      fields {
        name
      }
    }
  }
}
References:
- https://graphql.org/learn/introspection/
UPDATE for the edit:
What you want to do, I think, is the following. Make a query like this:
{
  __schema {
    types {
      name
      kind
      fields {
        name
        type {
          fields {
            name
          }
        }
      }
    }
  }
}
Then find the desired type to grab more information (its fields) from it. Something like this (I don't know if this works; it's just an idea):
const typeFlexibleContentTeaser = data.__schema.types.find(t => t.name === "FlexibleContentTeaser");
const teaserLinkField = typeFlexibleContentTeaser.fields.find(f => f.name === "teaserLink");
const teaserLinkFields = teaserLinkField.type.fields;
i.e., you have to traverse the type fields recursively.
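A more direct alternative (assuming the server supports standard introspection; the type name below is taken from the question's example) is to ask for a single type by name via the __type meta-field, instead of filtering the full types list client-side:
{
  __type(name: "FlexibleContentTeaser") {
    fields {
      name
      type {
        name
        kind
        fields {
          name
        }
      }
    }
  }
}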
