Resolver Mapping Template development guidelines - graphql

Subject
I'm using Amplify with GraphQL and DynamoDB as a backend, which works through AppSync. AppSync generates JSON from a VTL template and executes it (I honestly don't know which part of the service executes it) - this is what is called a Resolver Mapping Template.
I need to cover all my GraphQL endpoints with custom resolvers written by me, but development is painful because I cannot find any way to simplify development and testing other than through the AWS console, which is slow and inconvenient.
What I tried
As an approach, I tried to create DynamoDB JSON files and upload them via the AWS CLI, but AppSync expects a different format - the Resolver Mapping Template.
What I need
I would like to know of any workarounds and guidelines for developing, debugging, and testing my resolvers.
So I need one of these two options, or both:
Compare the generated template, with all the $util stuff, against mine. Nice to have.
Execute the generated template via the CLI against DynamoDB to check the results (or maybe there is some mock system). Great to have.

The recommendation would be to use the Amplify CLI to manage auto-creating the resolvers, as well as updating them yourself, to alleviate some of the 'development hurts' part.
I noticed that you mentioned one of the things you are looking for is the ability to rapidly test the resolvers (which in this case the Amplify CLI will create for you), but as stated, every amplify push takes some time for the CloudFormation stack to update. What might interest you (and potentially alleviate this issue) is this new RFC for the Amplify CLI: https://github.com/aws-amplify/amplify-cli/issues/1433
See if it covers your needs; if not, add a comment to that GitHub issue.
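Separately from that RFC, here is a hedged sketch of executing a request mapping template programmatically against a fake $ctx, assuming the AppSync EvaluateMappingTemplate API is available in your account and SDK version; the template file name, region, and context contents are placeholders.

```typescript
// Hedged sketch: feed a local request mapping template plus a fake $ctx
// to the AppSync EvaluateMappingTemplate API and print the DynamoDB
// request JSON it renders. File name, region and context are placeholders.
import { readFileSync } from "fs";
import {
  AppSyncClient,
  EvaluateMappingTemplateCommand,
} from "@aws-sdk/client-appsync";

const client = new AppSyncClient({ region: "us-east-1" });

async function evaluateTemplate(): Promise<void> {
  const response = await client.send(
    new EvaluateMappingTemplateCommand({
      template: readFileSync("Query.getPost.req.vtl", "utf8"),
      // This mirrors what AppSync injects as $ctx / $context at runtime.
      context: JSON.stringify({
        arguments: { id: "123" },
        identity: { username: "test-user" },
        source: {},
      }),
    })
  );
  // evaluationResult is the rendered template; error is set when the
  // VTL fails to evaluate.
  console.log(response.evaluationResult ?? response.error?.message);
}

evaluateTemplate().catch(console.error);
```

This only renders the template; it does not run the resulting request against DynamoDB, so you would still inspect the output JSON yourself or feed it to a separate DynamoDB call.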

Related

Need Solution to integrate Node+Express+GraphQL+ApolloServer+ElasticSearch

I need to develop my backend application with Node.js, Express, and GraphQL, and I am using the Apollo GraphQL server for this. Now I have to connect this GraphQL setup to Elasticsearch so that I can query Elasticsearch directly from the Apollo Playground, the same way I was previously running GraphQL queries.
Can someone help me with this scenario?
There are multiple ways to structure this. Some use a dedicated file for the Elasticsearch logic, while others put the logic directly into the GraphQL resolvers and then just add the main setup method to the GraphQL/Node.js server declaration so the initialization (index creation, etc.) runs at startup (some keep this in index.ts; it depends).
Use objects and single responsibility.
Create a frontend observable that watches an API; that API can then take data from the Elasticsearch cluster.
The problem, as you pointed out, is that you use GraphQL directly. GraphQL is mainly there to create a layer between the frontend and the backend, but what you are doing makes the API layer connect directly to the backend. That needs to change through a new object that exists only for the API: no matter what happens to your backend, this layer has to stay the same. That is why GraphQL is important; it needs to be used in that specific way.
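As a rough illustration of the "logic directly in the resolvers" option, here is a minimal Apollo Server sketch where one query field is backed by an Elasticsearch search. The index name ("products"), the document fields, and the cluster URL are made up, and the search call assumes a v8-style @elastic/elasticsearch client.

```typescript
// Rough sketch: one GraphQL field backed by an Elasticsearch query.
// Index name, document fields and cluster URL are placeholders.
import { ApolloServer, gql } from "apollo-server";
import { Client } from "@elastic/elasticsearch";

const es = new Client({ node: "http://localhost:9200" });

const typeDefs = gql`
  type Product {
    id: ID!
    name: String
    price: Float
  }
  type Query {
    searchProducts(term: String!): [Product!]!
  }
`;

const resolvers = {
  Query: {
    // The resolver is the only place that knows about Elasticsearch;
    // the Playground only sees the GraphQL schema above.
    searchProducts: async (_: unknown, { term }: { term: string }) => {
      const result = await es.search({
        index: "products",
        query: { match: { name: term } },
      });
      return result.hits.hits.map((hit: any) => ({ id: hit._id, ...hit._source }));
    },
  },
};

new ApolloServer({ typeDefs, resolvers }).listen(4000).then(({ url }) => {
  console.log(`GraphQL playground ready at ${url}`);
});
```

With this shape, the Playground never sees raw Elasticsearch queries; it only sees the schema, and the resolver owns the translation.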

Proper integration of AWS AppSync with Laravel?

Anyone successfully integrated AWS AppSync with Laravel?
I am new to AWS AppSync but have good experience with Laravel.
I am trying to implement an offline-app feature in my mobile app and the mobile API part is what Laravel handles.
I looked into AWS AppSync, but all they talk about is DynamoDB and GraphQL. Some places say I need to use AWS Lambda.
I really can't get a grip on how to properly implement this.
Any suggestions or pieces of advice are greatly appreciated.
I have basic experience with GraphQL.
Thanks
I checked a few video sessions and found that an HTTP endpoint can be used as a resolver data source. Is this the proper way?
If I use HTTP as the resolver's data source, can I still use the real-time features?
links
https://aws.amazon.com/appsync/
Laravel is a PHP framework, so I think the two options you would want to consider would be HTTP and Lambda data sources.
Lambda can be something of a catch-all for data sources: you have absolute control over what you call, how you do it, and in what language you do it. You just have to set up a Lambda function and create a data source in the AppSync console pointing to it, then have your Lambda function interact with your framework however it needs to.
I'm not terribly familiar with Laravel myself, but I believe HTTP is also a totally viable option. I would think this would be the way you want to go, as it cuts out the added complexity and latency of a Lambda function between AppSync and your end destination. A tutorial for setting one up is available here: https://docs.aws.amazon.com/appsync/latest/devguide/tutorial-http-resolvers.html
In either case, real-time updates will absolutely be available to you.
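For concreteness, here is a hedged sketch of registering a Laravel API as an HTTP data source using the AWS SDK for JavaScript v3; the API id, data source name, and endpoint are placeholders, and each resolver attached to this data source then chooses the path, method, and body for its field.

```typescript
// Hedged sketch: point an AppSync HTTP data source at a Laravel API.
// apiId, name and endpoint are placeholders for your own values.
import { AppSyncClient, CreateDataSourceCommand } from "@aws-sdk/client-appsync";

const client = new AppSyncClient({ region: "us-east-1" });

async function createLaravelDataSource(): Promise<void> {
  await client.send(
    new CreateDataSourceCommand({
      apiId: "your-appsync-api-id",
      name: "LaravelBackend",
      type: "HTTP",
      httpConfig: {
        // Base URL only; individual resolvers append paths like /api/posts.
        endpoint: "https://api.your-laravel-app.example.com",
      },
    })
  );
}

createLaravelDataSource().catch(console.error);
```

The same thing can be done through the AppSync console or CloudFormation; the SDK call is shown here only because it is easy to script and repeat.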

Why use Prisma in a backend environment?

After learning about GraphQL and using it in a few projects, I finally wanted to give Prisma a go. It promises to eliminate the need for a database and it generates a GraphQL client and a working database from the GraphQL Schema. So far so good.
But my question is: A GraphQL client to me really only seems useful for a client (preventing overfetching, speeding up pages, React integrations, ...). Prisma, however, does not eliminate the need for business logic, and so one would end up using the generated client library in Node.js, just to re-expose a lot of the functionality in yet another GraphQL server to the actual client.
Why should I prefer Prisma over a custom database solution? Is there a thought behind having to re-expose a lot of endpoints to the actual client?
I work at Prisma and would love to clarify this!
Here's a quick note upfront: Prisma is not a "GraphQL-as-a-Service" tool (in the way that Graphcool, AppSync or Hasura are). The Prisma client is not a "GraphQL client", it's a database client (similar to an ORM). So, the reason for not using the Prisma client on the frontend is the same as for why you wouldn't use an ORM or connect to the DB directly from the frontend.
It promises to eliminate the need for a database and it generates a GraphQL client and a working database from the GraphQL Schema. So far so good.
I'm really curious to hear where exactly you got this perception from! We're well aware that we need to improve our communication about the value that Prisma provides and how it works. What you've formulated there is an extremely common misconception about Prisma that we want to prevent in the future. We're actually planning to publish a blog post about this exact topic next week, hopefully that will clarify a lot.
To pick up the concrete points:
Prisma doesn't eliminate the need for a database. Similar to an ORM, the Prisma client is used to simplify database access. It also makes database migrations easier with a declarative data modelling and migrations approach (we're actually currently working on large improvements to our migration system, you can find the RFC for it here).
Another major benefit of Prisma is the upcoming Prisma Admin, a data management tool. The first preview for that will be available next week.
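To make the "database client, similar to an ORM" point concrete, here is a minimal sketch of calling a generated Prisma client from backend Node.js code. It assumes a Prisma schema with a User model that has email and active fields (both invented for illustration), and it uses the current Prisma client API rather than the Prisma 1 client discussed above.

```typescript
// Minimal sketch: the Prisma client is generated from your Prisma schema
// and is used server-side only, like an ORM. Model and field names
// (User, email, active) are assumptions for illustration.
import { PrismaClient } from "@prisma/client";

const prisma = new PrismaClient();

// Business logic stays in your own backend code; Prisma only replaces
// the hand-written SQL / query-builder layer underneath it.
export async function activeUserEmails(): Promise<string[]> {
  const users = await prisma.user.findMany({
    where: { active: true },
    select: { email: true },
  });
  return users.map((u) => u.email);
}
```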
Even I had similar questions when I started learning GraphQL. This is what I learned and realised after using it.
Prisma acts as a proxy for your database, providing you with a ready-to-use GraphQL API that lets you filter and sort data, along with some custom types like DateTime which are not part of GraphQL and which you would otherwise have to implement yourself. It's not a GraphQL server, just a layer between your database and your backend server, like an ORM.
It covers almost all the use cases you might have for a data model, with all the CRUD operations pre-defined in a schema along with subscriptions, so you don't have to build all of that yourself and can focus more on the business-logic side of things.
It also removes the need to write different queries for different databases like SQL or MongoDB, acting as a layer that transforms its query language into actual database queries.
You can use your own API (GraphQL) server to expose only the desired schema to the client rather than everything. Since GraphQL queries can get highly nested, that can be difficult and tricky to implement and may also lead to performance issues; this is not the case with Prisma, as it handles that itself.
You can check out this article for more info.
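To illustrate the "expose only the desired schema" point, here is a hedged sketch of a GraphQL server that publishes a single query field and delegates to the Prisma client underneath; the Post model, its fields, and the published flag are assumptions for illustration.

```typescript
// Sketch: the public API exposes only this narrow schema; the Prisma
// client behind it could do far more, but none of that is reachable.
import { ApolloServer, gql } from "apollo-server";
import { PrismaClient } from "@prisma/client";

const prisma = new PrismaClient();

const typeDefs = gql`
  type Post {
    id: ID!
    title: String!
  }
  type Query {
    publishedPosts: [Post!]!
  }
`;

const resolvers = {
  Query: {
    // Prisma handles the database query; the resolver decides what
    // subset of the data the client can actually reach.
    publishedPosts: () => prisma.post.findMany({ where: { published: true } }),
  },
};

new ApolloServer({ typeDefs, resolvers }).listen(4000);
```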

Where is the data stored in GraphQL

I started to use GraphQL with React Relay. I followed some tutorials and am able to get and post data with the help of mutations and queries. Everything works fine, but my question here is:
Where is GraphQL saving the data, and where does it fetch it from for us?
For example, if I get data from a database, I can go and look at the particular DB/table. Likewise, I want to know where GraphQL is storing the data.
I searched many sites; they explain how to use GraphQL, but I was not able to find an answer to my question. I need clarification in this area. Can someone help me out with this?
GraphQL is a query language for your API, and a server-side runtime for executing queries by using a type system you define for your data. GraphQL isn't tied to any specific database or storage engine and is instead backed by your existing code and data.
You can connect any database using GraphQL.
As I understand it, you are trying out mutations and queries with some hosted engine.
Please go through this reference and set up the GraphQL engine on your side.
GraphQL, unlike database-level query languages such as SQL, is an application-level query language. It's up to the programmer to create the necessary logic - in most server implementations realized with resolver functions - to make the domain described by the GraphQL schema a reality. This includes any form of persistence.
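Here is a minimal sketch of that idea: the schema says nothing about storage, and the resolvers decide where the data actually lives (an in-memory array below, but it could just as well be SQL, DynamoDB, or an HTTP call). It assumes graphql-js v16+ for the object-style graphql() signature.

```typescript
// Minimal sketch: GraphQL itself stores nothing; the resolvers own
// persistence. Here the "database" is a plain in-memory array.
import { graphql, buildSchema } from "graphql";

const schema = buildSchema(`
  type Book {
    id: ID!
    title: String!
  }
  type Query {
    books: [Book!]!
  }
  type Mutation {
    addBook(title: String!): Book!
  }
`);

// "The database" - swap this for any persistence layer you like.
const booksTable: { id: string; title: string }[] = [];

const rootValue = {
  books: () => booksTable,
  addBook: ({ title }: { title: string }) => {
    const book = { id: String(booksTable.length + 1), title };
    booksTable.push(book); // persistence is whatever you write here
    return book;
  },
};

async function main() {
  await graphql({ schema, source: 'mutation { addBook(title: "Dune") { id } }', rootValue });
  const result = await graphql({ schema, source: "{ books { id title } }", rootValue });
  console.log(JSON.stringify(result.data)); // {"books":[{"id":"1","title":"Dune"}]}
}

main().catch(console.error);
```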
GraphQL itself does not store anything by default. On the client side, libraries such as Relay or Apollo Client keep a cache of query results in the browser (in memory, or in localStorage if you add a persistence layer), but the actual data lives wherever your server's resolvers put it.

Replacing REST calls with GraphQL

I've recently read about the advantages (and disadvantages) of GraphQL over REST APIs.
I am developing a webpage that consumes several different REST APIs and SOAP services. Some of those services are dependent, meaning that a result from Rest1 will be passed as a parameter to Rest2, whose result will be passed to a SOAP service for the final return value.
From what I understood, GraphQL deals with multiple data sources and query nesting, but I have not yet understood whether it will handle those nested, dependent queries.
Can anyone who has worked with several dependent data sources in GraphQL tell me if it can be done? My project should be up in two weeks, and investing time in learning and setting up GraphQL only to end up not using it because it doesn't support my case would be a big failure for me.
Note: the APIs and services are not mine, I am consuming them from an outside source.
I'm assuming you haven't yet set up a GraphQL server. Once you do, you'll see how this isn't too difficult. So I'd recommend you set up your own server first. The Egghead course "Build a GraphQL Server" got me started, but it's not free.
In essence, you'll set up your schema and then define how to resolve it with data. When you resolve, you can have your Express server query a database, hit a REST interface, or hit your SOAP interface. How you retrieve the data is up to you, as long as you return it in compliance with your defined schema.
Hope that makes sense. Mocking up a mini app to demonstrate is possible, but since I don't have one handy, this is the best advice I can offer.
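As a hedged mini demo of that dependency chain, here is a sketch where a single resolver calls one REST endpoint, feeds its result into a second REST endpoint, and then uses that result in a SOAP request. The three endpoints and their response shapes are invented, and it assumes Node 18+ for the built-in fetch.

```typescript
// Sketch: one resolver chains Rest1 -> Rest2 -> SOAP. Endpoints, response
// fields (key, code) and the SOAP envelope are placeholders.
import { ApolloServer, gql } from "apollo-server";

const typeDefs = gql`
  type FinalResult {
    value: String!
  }
  type Query {
    combined(id: ID!): FinalResult!
  }
`;

const resolvers = {
  Query: {
    combined: async (_: unknown, { id }: { id: string }) => {
      // Step 1: first REST call
      const first: any = await (await fetch(`https://rest1.example.com/items/${id}`)).json();
      // Step 2: second REST call, parameterised by the first result
      const second: any = await (
        await fetch(`https://rest2.example.com/lookup?key=${first.key}`)
      ).json();
      // Step 3: SOAP call, using the second result inside the XML body
      const soapResponse = await fetch("https://soap.example.com/service", {
        method: "POST",
        headers: { "Content-Type": "text/xml" },
        body: `<Envelope><Body><GetValue><Code>${second.code}</Code></GetValue></Body></Envelope>`,
      });
      return { value: await soapResponse.text() };
    },
  },
};

new ApolloServer({ typeDefs, resolvers }).listen(4000);
```

The point is that the dependency ordering lives entirely inside the resolver, so GraphQL happily fronts dependent REST and SOAP services; it just doesn't do the chaining for you.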

Resources