I believe the default data source when creating a GraphQL API is DynamoDB; I would like to set it to a Lambda function instead.
Is there a way to do this with Amplify?
If not, what is the workaround?
I found this AWS tutorial online that states:
"we'll show you how to write a Lambda function that performs business logic based on the invocation of a GraphQL field operation."
but I couldn't use this with Amplify.
For now, you can use the pattern described here to manually set up the correct templates and target a Lambda function that you set up with Amplify.
https://aws-amplify.github.io/docs/cli/graphql#add-a-custom-resolver-that-targets-an-aws-lambda-function
Soon (or maybe by the time you read this, depending on the status of this PR) you'll be able to annotate your GraphQL schema with @function and have it wire all of that up for you.
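With that directive, wiring a field to a Lambda looks roughly like the sketch below; the function name "echofunction" is a placeholder for a function created with `amplify add function`, and `${env}` is Amplify's environment substitution:

```graphql
type Query {
  # "echofunction" is a placeholder function name
  echo(msg: String): String @function(name: "echofunction-${env}")
}
```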
Hope this helps.
Related
I am trying to convert my REST API into GraphQL using AWS AppSync; the problem is that I am unable to find the right method or documentation on how to do it.
I have successfully created a schema and am trying to attach a resolver to it, but I am not sure of the right way to do it.
The problem was the creation of a pipeline resolver. I changed Actions > Update runtime > Unit Resolver (VTL only), and then selecting the HTTP endpoint as a data source became available to me. The documentation does not seem to cover this; if you play with it for some time you can figure it out (quite frustrating, not gonna lie).
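For reference, a unit resolver request mapping template for an AppSync HTTP data source looks roughly like this; the resource path and headers are placeholders, not from the question:

```vtl
{
  "version": "2018-05-29",
  "method": "GET",
  "resourcePath": "/users",
  "params": {
    "headers": { "Content-Type": "application/json" }
  }
}
```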
When creating a Lambda function through the Amplify CLI, there are 4 function templates provided:
CRUD function for DynamoDB (Integration with API Gateway)
Hello World
Lambda trigger
Serverless ExpressJS function (Integration with API Gateway)
I am confused about the usage of "Hello World" and "Serverless ExpressJS function". Let's say I want to implement a Lambda function that runs a custom query against DynamoDB; which template is suitable, or best practice, to use?
They all help you start the project with boilerplate code.
CRUD function for DynamoDB: for your use case, this is the one to pick. It will create a DynamoDB table, integrate it with API Gateway, and generate boilerplate code for your chosen programming language. You can then change the generated CRUD operations into more custom ones by changing the parameters. API Gateway directs all requests to your Express app, so route handling is up to you, and you can change the routes later. As your Node.js app grows, you might need to organize your routes and use express.Router to keep them modular.
Hello World: a simple boilerplate function that returns a string. You can use it for simple tasks or private API calls.
Lambda trigger: runs a Lambda function when a certain event happens; for example, you can execute a function in response to a DynamoDB event.
Serverless ExpressJS function: not integrated with DynamoDB; this option generates simple Express endpoints such as GET /, POST /, PUT /, and DELETE /.
I'd recommend trying out all of these options to see how they work.
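For the custom-query use case, the core of the function is just the query parameters you hand to the DynamoDB DocumentClient. A minimal sketch, assuming a table named "Orders" with a "byCustomer" index (both placeholders, not from the question):

```javascript
// Sketch: request parameters for a custom DynamoDB query inside a Lambda
// function. Table name "Orders" and index "byCustomer" are assumptions.
function buildOrderQuery(customerId) {
  return {
    TableName: 'Orders',
    IndexName: 'byCustomer',                      // a GSI on customerId
    KeyConditionExpression: 'customerId = :c',
    ExpressionAttributeValues: { ':c': customerId },
  };
}

// Inside the handler you would run (requires the aws-sdk package):
//   const docClient = new AWS.DynamoDB.DocumentClient();
//   const result = await docClient.query(buildOrderQuery(id)).promise();
//   return { statusCode: 200, body: JSON.stringify(result.Items) };
```

Either the Hello World or the Serverless ExpressJS template can host this; the choice is mostly about whether you want a single handler or Express routing around it.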
I'm actually curious to know whether there is any way to get the user who created a specific object in Kubernetes. I am using the Kubernetes client-go library.
From my understanding, Kubernetes objects don't hold any user metadata. So how should I approach this?
You can write a custom mutating admission webhook which intercepts the CRUD requests for any object coming to the Kubernetes API server and adds the requesting user as a label on the object. This way you will always know who created the object by looking at that label. Also make sure to use a validating admission webhook to reject any edits by users to that label, so that the information cannot be changed or tampered with.
Auditing describes the who, when, and what: https://kubernetes.io/docs/tasks/debug-application-cluster/audit/. I think the way to use it is to configure a logging backend which stores the audit logs and then lets you query what happened.
My idea is to create a microservice approach with GraphQL and serverless.
I'm thinking about creating a service for every table in DynamoDB and then creating an API gateway service, and in the API gateway service using graphql-tools to stitch the schemas together.
This works pretty well and I'm satisfied.
But now I want to add authorization to my GraphQL queries and mutations.
I have added a custom authorizer in the API gateway that resolves the JWT token from the client and sends the userId to the GraphQL context.
But now I want to add authorization to my resolvers.
What is the best approach for this?
I want it to be as modular as possible, and the best approach (I think) is to add the authorization in the API gateway service so my other services stay clean. But I don't know how.
Any ideas?
You may want to look into AppSync from AWS. It will handle a lot of this for you: authorizers, querying DynamoDB, etc.
I've built Lambda APIs using Apollo GraphQL and exposed them through API Gateway. I then used Apollo's schema stitching to connect them together. There's one really important caveat here: It's slooow. There's already a speed penalty with API Gateway and while it's acceptable, imagine jumping through multiple gateways before returning a response to a user. You can cache the schema which helps a bit. Your tolerance will depend on your app and UX of course. Maybe it's just fine - only you (or your users) can answer that.
That note aside, the way I handled auth was to accept an Authorization header and make a check manually. I did not use any custom authorizers from API Gateway. I was not using Cognito for this so it talked to another service. This all happened before the resolvers. Why are you looking to do the authorization in resolvers? Are there only some that you wish to protect? Access control?
It may not be best to add custom authorizers to API Gateway in this case, because you're talking about performing this action at the resolver level, in code.
GraphQL has one POST endpoint for everything, so this is not going to help with configuring API Gateway auth per resource. That means you're now past API Gateway and into the invocation of your Lambda anyway; you didn't prevent the invocation, so you're being billed and running code now.
So you might as well write your custom logic to authenticate. If you're using Cognito then there is an SDK to help you out. Or take a look at AppSync.
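The "check before the resolvers" approach can be as small as a helper that runs in the GraphQL context function. A sketch, where `verifyToken` is a placeholder for whatever validates the JWT (Cognito, your own auth service, etc.):

```javascript
// Sketch: reject or identify the caller from the Authorization header before
// any resolver runs. verifyToken is a placeholder for your token validator.
function getUserFromHeaders(headers, verifyToken) {
  const auth =
    (headers && (headers.Authorization || headers.authorization)) || '';
  const match = /^Bearer (.+)$/.exec(auth);
  if (!match) {
    throw new Error('Unauthorized'); // stops the request before resolvers
  }
  return verifyToken(match[1]);      // e.g. returns { userId } for the context
}
```

In an Apollo-style setup this would be called from the server's context function, so every resolver sees the resulting user object, and unauthorized requests never reach resolver code at all.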
I am currently learning how to expose my Lambda function with API Gateway. I followed the instructions documented here and created an API that triggers my test Lambda function. Here's a summary of what I did.
First, I created a test Lambda function from the hello world template, BUT WITH NO TRIGGER ADDED to it.
Then I went to the API Gateway console, added a resource with a GET method, set the integration type to Lambda function, and entered my test Lambda function's name there.
I tested the solution above in the API Gateway console. It's working fine, and I just need to deploy it by creating a stage and I'm done.
But then I noticed another way of exposing Lambda with API Gateway: going to the Lambda function and adding an API Gateway trigger to it, like the following:
It asks you to enter the name of the API to use. This API somehow relates to the one that I created in the API Gateway console.
After creating the trigger, the test Lambda function now has a trigger that looks like this, with an HTTPS URL exposed under it.
Then, when I go to the API Gateway console, I notice that a new resource has been added.
The resource name is the name of my test Lambda function, the method is ANY, and I don't quite understand the use of this.
Comparing the above with the resource I created earlier: the one above does not show the Lambda function's ARN, while this one does.
So, my questions are:
What is the difference between creating an API and adding a Lambda integration to it, versus adding a trigger from Lambda using an existing API from API Gateway?
Can the HTTPS address exposed under the Lambda function (after adding the trigger from API Gateway) be used directly?
If adding a trigger to Lambda works the same way, do I still need to create a stage to deploy my API?
What is the difference between creating an API and adding a Lambda integration to it, versus adding a trigger from Lambda using an existing API from API Gateway?
Those are two different ways to create an API Gateway-Lambda integration. There is no difference if you configure them both the same way.
Can the HTTPS address exposed under the Lambda function (after adding the trigger from API Gateway) be used directly?
Yes, you can use it directly.
If adding a trigger to Lambda works the same way, do I still need to create a stage to deploy my API?
Yes. The URL comes from the stage, so you need at least one.