How can I create a lambda event template via terraform? - aws-lambda

I can select an event from the event templates list when I trigger a lambda function. How can I create a customized event template in terraform? I want to make it easier for developers to trigger the lambda by selecting this customized event template from the list.
I'd like to add an event to this list:

Unfortunately, at the time of this answer (2020-02-21), there is no way to accomplish this via the APIs that AWS provides. Ergo, the terraform provider does not have the ability to accomplish this (it's limited to what's available in the APIs).
I have also wanted to be able to configure test events via terraform.
A couple of options:
Propose to AWS that they expose some APIs for managing test events. This would give contributors to the AWS terraform provider the opportunity to add this resource.
Provide the developers with a Postman collection, a set of shell scripts (or other scripts) using the awscli, or some other mechanism to invoke the lambdas. This is essentially the same as pulling the templating functionality out of the console and into your own tooling.
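For instance, a minimal awscli invocation that plays the role of a canned test event might look like this (a sketch; the function name and event file path are placeholders):

# send a test event kept in the repo to the lambda (awscli v2 syntax)
aws lambda invoke \
  --function-name my-function \
  --payload file://events/my-test-event.json \
  --cli-binary-format raw-in-base64-out \
  response.json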

I did try something and it worked. I must warn that this is reverse engineering and may break at any time in the future, but it works well for me so far.
As per the Amazon docs for testing lambda functions, whenever a shareable test event is created for any lambda, it is stored as a new schema in the lambda-testevent-schemas schema registry.
I made use of this information and figured out the conventions AWS follows to keep track of the events, so that I can use those to manage the resources with terraform.
The name of the schema is _<name_of_lambda_function>-schema, so from terraform I manage a schema with that name:
resource "aws_schemas_schema" "my_lambda_shared_events" {
name = "_${aws_lambda_function.my_lambda.function_name}-schema"
registry_name = "lambda-testevent-schemas"
type = "OpenApi3"
description = "The schema definition for shared test events"
content = local.my_lambda_shared_events_schema
}
I create a JSON doc (my_lambda_shared_events_schema) which follows the OpenAPI3 convention. For example:
{
  "components": {
    "examples": {
      "name_of_the_event_1": {
        "value": {
          ... the value you need ...
        }
      },
      "name_of_the_event_2": {
        "value": {
          ... the value you need ...
        }
      }
    },
    "schemas": {
      "Event": {
        "properties": {
          ... structure of the event you need ...
        },
        "required": [... any required params ...],
        "type": "object"
      }
    }
  },
  "info": {
    "title": "Event",
    "version": "1.0.0"
  },
  "openapi": "3.0.0",
  "paths": {}
}
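For completeness, the my_lambda_shared_events_schema local above could be defined with jsonencode (a minimal sketch; the event name and payload are made-up placeholders, not part of any AWS convention):

locals {
  my_lambda_shared_events_schema = jsonencode({
    components = {
      examples = {
        # hypothetical event name/payload -- replace with your own
        my_test_event = {
          value = {
            key = "value"
          }
        }
      }
      schemas = {
        Event = {
          properties = {
            key = { type = "string" }
          }
          required = ["key"]
          type     = "object"
        }
      }
    }
    info = {
      title   = "Event"
      version = "1.0.0"
    }
    openapi = "3.0.0"
    paths   = {}
  })
}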
After terraform apply, you should be able to see the terraform-managed shareable events in the AWS console.
Some important gotchas when using this method:
If you add events using this method, any events added from the console for the same function will be lost.
The schema registry lambda-testevent-schemas is a special registry and must NOT be managed using terraform, as that may disrupt other lambda functions' events created outside the scope of this terraform module.
The lambda-testevent-schemas registry must already exist. You can either make sure it is created before the module is applied (e.g. with the one-off CLI call sketched after this list) or create any shareable event for any lambda function from the console first. This needs to be done once per region per account.
If you have difficulty creating the JSON schema for your lambda, you can create the events once from the console and then copy the JSON from the EventBridge schema registry.
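For the registry bootstrap in the third gotcha, a one-off CLI call should do it (a sketch; run once per region per account):

# create the special registry the console would otherwise create lazily
aws schemas create-registry \
  --registry-name lambda-testevent-schemas \
  --description "Schema registry for shareable lambda test events"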

Related

How to generate types.json in substrate

polkadot-js provides a way for developers to define the custom types used in their pallets, so that polkadot-ui can understand those types (meaning it can use the underlying polkadot-js API). These types are defined in JSON format. This is an example:
{
  "TransactionInput": {
    "parent_output": "Hash",
    "signature": "Signature"
  },
  "TransactionOutput": {
    "value": "u128",
    "pubkey": "Hash",
    "sale": "u32"
  },
  "Transaction": {
    "inputs": "Vec<TransactionInput>",
    "outputs": "Vec<TransactionOutput>"
  }
}
I see that substrate-node-template/scripts has an aggregrate_types.js file that generates types.json. I don't know how to generate it automatically, or whether I should write it by hand.
For example, in my pallet I have defined an enum RoleID and a struct Role, but the UI doesn't understand what RoleID is. Can you explain more clearly? I believe it is related to defining types.json.
https://github.com/polkadot-js/apps/blob/master/packages/page-settings/src/md/basics.md#developer
aggregrate_types.json:
Thanks!!!
Presently, generating this by hand is the best way, following the docs here. There are no clean ways to automatically generate this to my knowledge, but soon you will not need to worry about it at all once this PR lands in Substrate!
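For the RoleID/Role example in the question, a hand-written types.json entry might look like this (a sketch; the variant and field names are assumptions, not taken from your pallet):

{
  "RoleID": {
    "_enum": ["Admin", "User"]
  },
  "Role": {
    "id": "RoleID",
    "account": "AccountId"
  }
}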
Thanks to https://github.com/paritytech/substrate/pull/8615, you don't have to write types.json manually anymore.
Make sure the metadata version of your node is v14 or higher. Otherwise you need to upgrade your Substrate version to make it automagically work for you.

Amplify and AppSync not updating data on mutation from multiple sources

I have been attempting to interact with AppSync/GraphQL from:
Lambda - Create (works), Update (does not change data)
Angular - Create/Update (subscription received, but object is null)
Angular - Spoof update (does not change data)
AppSync Console - Spoof update (does not change data)
Post:
mutation MyMutation {
  updateAsset(input: {
    id: "b34d3aa3-fbc4-48b5-acba-xxxxxxxxxxx",
    owner: "51b691a5-d088-4ac0-9f46-xxxxxxxxxxxx",
    description: "AppSync"
  }) {
    id
    owner
    description
  }
}
Response:
{
  "data": {
    "updateAsset": {
      "id": "b34d3aa3-fbc4-48b5-acba-xxxxxxxxxx",
      "owner": "51b691a5-d088-4ac0-9f46-xxxxxxxxxxx",
      "description": "Edit Edit from AppSync"
    }
  }
}
The version in DynamoDB gets auto-incremented each time I send the query. But the description remains the same as originally set.
Auth Rules on Schema -
#auth(
rules: [
{ allow: public, provider: apiKey, operations: [create, update, read] },
{ allow: private, provider: userPools, operations: [read, create, update, delete] }
{ allow: groups, groups: ["admin"], operations: [read, create, update, delete] }
])
For now, on the frontend, I'm cheating and just requesting the data after I receive a null subscription event. But as I've stated, I only seem to be able to set any of the data once, and then I can't update it.
Any insight appreciated.
Update: I even decided to try a DeleteAsset statement, and it won't delete but revs the version.
I guess maybe the next sane thing to do is to either stand up a new environment or attempt to stand this up in a fresh account.
Update: I have a working theory that this has something to do with conflict detection/resolution. When I try to delete via AppSync directly, I get a rejection. From Angular I just get the record back with no delete.
After adding the additional auth on the API, I remembered it asked about conflict resolution and I chose "AutoMerge". Docs on this at https://docs.aws.amazon.com/appsync/latest/devguide/conflict-detection-and-sync.html
After further review I'll note what happened in the hopes it helps someone else.
Created amplify add api.
This walked me through a wizard. I used the existing Cognito User Pool, since I had not foreseen that I would need to call this API from an S3 trigger (Lambda function) later.
Now needing to grant apiKey (or preferably IAM) access from the Lambda to the AppSync/GraphQL API, I performed amplify update api and added the additional auth setting.
This asked me how I wanted to resolve conflicts, since more than one source can edit the data. Because I just hit "agree" on terms and conditions and rarely read the manual, I selected 'AutoMerge'... sounds nice, right?
So now, if you read the fine print, edits made to a table will be rejected, as we now have this _version (Int) that needs to be passed along so AutoMerge can decide whether it wants to take your change.
It also creates an extra DataStore table in DynamoDB tracking versions. So in order to properly deal with this strategy you'd need to extend your schema to include _version, not just id or whatever primary key you opted to use.
Also note: if you delete, it sets a _deleted Bool to true. This is actually still returned to the UI, so your initial query now needs to filter off (or not) deleted records, e.g. as in the sketch below.
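A sketch of that client-side filter, assuming a standard Amplify-generated listAssets query (the query name is an assumption):

import { API, graphqlOperation } from 'aws-amplify';
import { listAssets } from './graphql/queries'; // hypothetical generated query

const result = await API.graphql(graphqlOperation(listAssets));
// keep only the records AutoMerge hasn't tombstoned
const assets = result.data.listAssets.items.filter(a => !a._deleted);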
I determined I also didn't need this. I don't want to use a DataStore (at least not now), so I found the offender in transform.conf.json within the API. After executing amplify update api (GraphQL), I chose 'Disable DataStore for entire API' and it got rid of the ConflictHandler and ConflictDetection.
This was also agitating my Angular 11 subscription to Create/Update, as the added values broke the expected model. Not to mention the event coming back was null, since nothing had changed.
Great information here, Mark. Thanks for the write up and updates.
I was playing around with this, and with the AutoMerge conflict resolution strategy I was able to post an update using a GraphQL mutation by sending the current _version member along.
This function:
await API.graphql(
  graphqlOperation(updateAsset, {
    input: {
      id: assetToUpdate.id,
      name: "Updated name",
      _version: assetToUpdate._version
    }
  })
);
This properly updates, contacts AppSync, and propagates the changes to DynamoDB/DataStore. Sending the current version tells AppSync that we are up to date and allowed to edit the content. AppSync then manages/increments _version, _createdAt, etc.
Adding _version to my mutation worked very well.
API.graphql({
  query: yourQuery,
  variables: {
    input: {
      id: 'your-id',
      ...
      _version: version,
    },
  },
});

How to query data from 2 APIs

I have set up a Gatsby client which connects to Contentful using the gatsby-source-contentful plugin. I have also connected a simple custom API using the gatsby-source-graphql plugin.
When I run the dev-server I am able to query my pages from Contentful in the playground.
I am also able to query my custom API through the playground as well.
So both APIs work and are connected with Gatsby properly.
I want to programmatically generate a bunch of pages that have dynamic sections (references) which an author can add and order as she wishes.
I achieve this using the ...on Node connection together with fragments I define within each dynamic section. It all works out well so far.
My actual problem:
Now I have a dynamic section which is a Joblist. This component requires data out of the Contentful API, as it stores values like latitude and longitude: the author is free to set a point on a map and a radius. I successfully get this information out of Contentful using a fragment inside the component:
export const query = graphql`
  fragment JoblistModule on ContentfulJoblisteMitAdresse {
    ... on ContentfulJoblisteMitAdresse {
      contentful_id
      radius
      geo {
        lon
        lat
      }
    }
  }
`
But how can I pass this information into another query that fetches the job data from my custom API? If I understand Gatsby correctly, I somehow have to connect these two APIs together? Or can I run another query somehow that takes these values as variables? How and where would I achieve this?
I could not find any approach inside gatsby-node.js (since passed-in context can only be used as variables inside a query), nor in the template file (since I can run only one query at a time), nor in the component itself (since that only accepts a staticQuery).
I don't know where my misunderstanding lies, so I would very much appreciate any hints, help, or examples.
Since your custom API is a GraphQL API, you can use delegateToSchema from the graphql-tools package to accomplish this.
You will need to create a resolver using Gatsby's setFieldsOnGraphQLNodeType API. Within this resolver, your resolve function will call delegateToSchema.
We had a similar problem: our blog posts have an "author" field which contains an ID. We then do a GraphQL query to another system to look up author info by that ID.
return {
  remoteAuthor: {
    type: person,
    args: {},
    resolve: async (source: ContentfulBlogPost, fieldArgs, context, info) => {
      if (!source.author) {
        return null
      }
      // runs the selection on the remote schema
      // https://github.com/gatsbyjs/gatsby/issues/14517
      return delegateToSchema({
        schema: authorsSchema,
        operation: 'query',
        fieldName: 'Person',
        args: { id: source.author },
        context,
        info,
      })
    },
  },
}
This adds a 'remoteAuthor' field to our blog post type, and whenever it gets queried, those selections are proxied to the remote schema where the person type exists.
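For context, here is a sketch of how that return value could be wired up in gatsby-node.js (assuming authorsSchema and the person type are constructed elsewhere, e.g. by introspecting the remote API; the node type name is taken from the snippet above):

// gatsby-node.js
const { delegateToSchema } = require('graphql-tools')

exports.setFieldsOnGraphQLNodeType = ({ type }) => {
  // only decorate blog post nodes
  if (type.name !== 'ContentfulBlogPost') {
    return {}
  }
  return {
    remoteAuthor: {
      // ...the resolver shown above...
    },
  }
}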

Run Custom resource whenever I update my CFN Stack

I have a custom resource used to get the API key from API Gateway and send it as a header to CloudFront. When I create the stack, my custom resource triggers, since its logical ID is created for the first time. But when I update the stack (i.e. change the API key name), the resource of type AWS::ApiGateway::ApiKey gets a new logical ID, which in turn creates a new API key. At this point my custom resource is not invoked, since its own logical ID is unchanged, and because of this my CloudFront distribution keeps the old API key rather than the new one.
Is there a way to invoke my custom resource every time an update happens to my stack?
As a workaround I am changing the logical ID of the custom resource to trigger it whenever I update a resource in my stack. But this is a little difficult, since the logical ID is shared as a reference by many resources.
BTW, my custom resource is backed by a lambda function. I even tried changing the Version field and also tried adding values to the Properties field (i.e. stack name, parameters, etc.), but it still is not invoked.
{
  "AWSTemplateFormatVersion" : "2010-09-09",
  "Resources" : {
    "MyFrontEndTest" : {
      "Type": "Custom::PingTester",
      "Version" : "1.0", --> even changed the version to 2.0
      "Properties" : {
        "ServiceToken": "arn:aws:lambda:us-east-1:*****",
        "InputparameterName" : "MYvalue" --> added this field
      }
    }
  }
}
Thanks, any help is appreciated.
One trick to get the custom resource to execute its lambda when the stack is updated is to configure the custom resource to pass all stack parameters to the lambda function. If any parameter changes on a stack update, the custom resource changes and triggers the lambda; just ignore the unneeded keys in the lambda event data. This won't help in the scenario where just the template is updated.
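A sketch of that idea as a template excerpt (the parameter name ApiKeyName is a made-up example; any change to a forwarded parameter reruns the lambda on update):

"Parameters" : {
  "ApiKeyName" : { "Type" : "String" }
},
"Resources" : {
  "MyFrontEndTest" : {
    "Type" : "Custom::PingTester",
    "Properties" : {
      "ServiceToken" : "arn:aws:lambda:us-east-1:*****",
      "ApiKeyName" : { "Ref" : "ApiKeyName" }
    }
  }
}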

How to refer to resources created within a cloudformation stack

I would like to create a monitoring instance with rights to terminate, create, and destroy instances, autoscaling groups, tags, etc. within the scope of the cloudformation stack it was created in.
What resource should I give the policy to make it work?
{
  "PolicyName": "ManageCloudformationInstances",
  "PolicyDocument": {
    "Statement": [
      {
        "Effect": "Allow",
        "Action": [
          "ec2:*"
        ],
        "Resource": "?????"
      }
    ]
  }
},
So I guess there are two parts to your question.
If you are creating instances in your cloudformation template, then you can easily just use the GetAtt function to pull the ARNs for those resources (see the sketch below).
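For the first part, a sketch of scoping the statement to an instance created in the same template (the logical name MyInstance is an assumption; note that Ref on an AWS::EC2::Instance returns the instance ID, so the ARN is assembled with Fn::Sub here):

{
  "Effect": "Allow",
  "Action": [ "ec2:*" ],
  "Resource": {
    "Fn::Sub": "arn:aws:ec2:${AWS::Region}:${AWS::AccountId}:instance/${MyInstance}"
  }
}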
However, if you are trying to dynamically allow it to delete instances inside of an autoscaling group, then you need to dynamically edit your policy to allow that. The easiest way that comes to mind is to trigger a lambda function every time your ASG scales and edit the policy to include the ARNs from the recent scaling activity.
You probably want to start with something like this for the ASG - http://docs.aws.amazon.com/autoscaling/latest/userguide/cloud-watch-events.html
