Create API Gateway in LocalStack

I was able to set up localstack (https://github.com/atlassian/localstack) and also create a lambda function in it (using the create-function ... command). However, I couldn't find a way to create an API Gateway in localstack so that the lambda function can be called through it.
Basically, I need an API Gateway (and its ARN) so that the lambda function can be invoked through it.

Walkthrough for creating a NodeJS Lambda together with an API Gateway via the CLI:
First we create a simple NodeJS Lambda:
const apiTestHandler = (payload, context, callback) => {
  console.log(`Function apiTestHandler called with payload ${JSON.stringify(payload)}`);
  callback(null, {
    statusCode: 201,
    body: JSON.stringify({
      somethingId: payload.pathParameters.somethingId
    }),
    headers: {
      "X-Click-Header": "abc"
    }
  });
}

module.exports = {
  apiTestHandler,
}
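Save the code as index.js - the create-function call below references the handler as index.apiTestHandler, so the file name has to match - and zip it:
zip apiTestHandler.zip index.js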
Now upload apiTestHandler.zip to localstack:
aws lambda create-function \
  --region us-east-1 \
  --function-name api-test-handler \
  --runtime nodejs6.10 \
  --handler index.apiTestHandler \
  --memory-size 128 \
  --zip-file fileb://apiTestHandler.zip \
  --role arn:aws:iam::123456:role/role-name \
  --endpoint-url=http://localhost:4574
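As an aside (not part of the original walkthrough): you can sanity-check the upload by invoking the function directly before wiring up the gateway. The payload below mimics the pathParameters shape the handler reads; with AWS CLI v2 you would additionally need --cli-binary-format raw-in-base64-out:
aws lambda invoke \
  --region us-east-1 \
  --endpoint-url=http://localhost:4574 \
  --function-name api-test-handler \
  --payload '{"pathParameters":{"somethingId":"42"}}' \
  out.json && cat out.json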
Now we can create our Rest-Api:
aws apigateway create-rest-api --region us-east-1 --name 'API Test' --endpoint-url=http://localhost:4567
This gives the following response:
{
    "name": "API Test",
    "id": "487109A-Z548",
    "createdDate": 1518081479
}
With the ID we got here, we can ask for its parent-ID:
aws apigateway get-resources --region us-east-1 --rest-api-id 487109A-Z548 --endpoint-url=http://localhost:4567
Response:
{
    "items": [
        {
            "path": "/",
            "id": "0270A-Z23550",
            "resourceMethods": {
                "GET": {}
            }
        }
    ]
}
Now we have everything to create our resource together with its path:
aws apigateway create-resource \
  --region us-east-1 \
  --rest-api-id 487109A-Z548 \
  --parent-id 0270A-Z23550 \
  --path-part "{somethingId}" \
  --endpoint-url=http://localhost:4567
Response:
{
    "resourceMethods": {
        "GET": {}
    },
    "pathPart": "{somethingId}",
    "parentId": "0270A-Z23550",
    "path": "/{somethingId}",
    "id": "0662807180"
}
The ID we got here is needed to create our linked GET Method:
aws apigateway put-method \
  --region us-east-1 \
  --rest-api-id 487109A-Z548 \
  --resource-id 0662807180 \
  --http-method GET \
  --request-parameters "method.request.path.somethingId=true" \
  --authorization-type "NONE" \
  --endpoint-url=http://localhost:4567
We are almost there - one of the last things to do is to create our integration with the already uploaded lambda:
aws apigateway put-integration \
  --region us-east-1 \
  --rest-api-id 487109A-Z548 \
  --resource-id 0662807180 \
  --http-method GET \
  --type AWS_PROXY \
  --integration-http-method POST \
  --uri arn:aws:apigateway:us-east-1:lambda:path/2015-03-31/functions/arn:aws:lambda:us-east-1:000000000000:function:api-test-handler/invocations \
  --passthrough-behavior WHEN_NO_MATCH \
  --endpoint-url=http://localhost:4567
Last but not least: Deploy our API to our desired stage:
aws apigateway create-deployment \
  --region us-east-1 \
  --rest-api-id 487109A-Z548 \
  --stage-name test \
  --endpoint-url=http://localhost:4567
Now we can test it:
curl http://localhost:4567/restapis/487109A-Z548/test/_user_request_/HowMuchIsTheFish
Response:
{"somethingId":"HowMuchIsTheFish"}
I hope this helps.
Hint: For easier use I recommend installing awscli-local (https://github.com/localstack/awscli-local) - with this tool you can use the command awslocal and don't have to type --endpoint-url=... for each command.
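For example, the create-rest-api call from above then shrinks to:
awslocal apigateway create-rest-api --region us-east-1 --name 'API Test'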
Walkthrough for using Serverless Framework and Localstack:
You can also use the Serverless Framework (https://serverless.com/).
First install it via npm:
npm install serverless -g
Now you can create a sample application based on a nodejs-aws template:
serverless create --template aws-nodejs
In order to have an HTTP endpoint, you have to edit the serverless.yml and add the corresponding event:
functions:
  hello:
    handler: handler.hello
    events:
      - http:
          path: ping
          method: get
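Before involving localstack at all, you can smoke-test the generated function locally (a standard Serverless Framework command, not part of the original text):
serverless invoke local --function hello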
In order to run this against your localstack installation you have to use the serverless-localstack plugin (https://github.com/temyers/serverless-localstack):
npm install serverless-localstack
Now you have to edit your serverless.yml again, add the plugin, and adjust your endpoints. In my case localstack is running inside the Docker Toolbox, so its IP is 192.168.99.100 - you may have to change this to localhost, depending on your setup:
plugins:
  - serverless-localstack

custom:
  localstack:
    debug: true
    stages:
      - local
      - dev
    host: http://192.168.99.100
    endpoints:
      S3: http://192.168.99.100:4572
      DynamoDB: http://192.168.99.100:4570
      CloudFormation: http://192.168.99.100:4581
      Elasticsearch: http://192.168.99.100:4571
      ES: http://192.168.99.100:4578
      SNS: http://192.168.99.100:4575
      SQS: http://192.168.99.100:4576
      Lambda: http://192.168.99.100:4574
      Kinesis: http://192.168.99.100:4568
Now you can try to deploy it:
serverless deploy --verbose --stage local
This will create an S3 bucket, upload your lambda, and create a CloudFormation stack. However, the process will fail due to some inconsistencies in localstack compared with AWS. Don't be dismayed, though: the generated CloudFormation template works fine, and you are done after one additional request:
awslocal cloudformation update-stack --template-body file://.serverless/cloudformation-template-update-stack.json --stack-name aws-nodejs-local
Now your lambda is deployed and can be tested:
curl http://192.168.99.100:4567/restapis/75A-Z278430A-Z/local/_user_request_/ping
Response:
{"message":"Go Serverless v1.0! Your function executed successfully!","input":{"body":null,"headers":{"host":"192.168.99.100:4567","accept":"*/*","user-agent":"curl/7.49.1"},"resource":"/restapis/75A-Z278430A-Z/local/_user_request_/ping","queryStringParameters":{},"httpMethod":"GET","stageVariables":{},"path":"/ping","pathParameters":{},"isBase64Encoded":false}}
Hope this helps.

Looks like there is an open issue related to setting up API Gateway with localstack on GitHub:
https://github.com/localstack/localstack/issues/129
You could try following the steps provided in the answer there.
Copied from the GitHub issue:
"""
One option would be to use the serverless framework (https://github.com/serverless/serverless). Otherwise, you can call the LocalStack services directly (via the CLI or an SDK) to create an API Gateway resource+method+integration, and connect them to your Lambda function.
Here are a few pointers that might be helpful:
https://ig.nore.me/2016/03/setting-up-lambda-and-a-gateway-through-the-cli/ (the "Creating a role" part can be skipped)
https://github.com/atlassian/localstack/issues/101
https://github.com/temyers/serverless-localstack
"""

Related

pass arguments of make commands

I have a sequence of make commands to upload a zip file to an S3 bucket and then update a lambda function that reads the S3 file as its source code. Once I update the lambda function, I wish to publish it, and after publishing I want to attach an event to it using EventBridge.
I can do most of these commands automatically using make. For example:
clean:
    #rm unwanted_build_files.zip

build-lambda-pkg:
    mkdir pkg
    cd pkg && docker run #something something
    cd pkg && zip -9qr build.zip
    cp pkg/build.zip .
    rm -rf pkg

upload-s3:
    aws s3api put-object --bucket my_bucket \
        --key build.zip --body build.zip

update-lambda:
    aws lambda update-function-code --function-name my_lambda \
        --s3-bucket my_bucket \
        --s3-key build.zip

publish-lambda:
    aws lambda publish-version --function-name my_lambda
## publish-lambda returns JSON (or rather, prints a JSON-style structure
## to the terminal) which has a key "FunctionArn" - I can get the "Arn"
## value from it

attach-event:
    aws events put-targets --rule rstats-post-explaination-at-10pm-ist \
        --targets "Id"="1","Arn"="arn:aws:lambda:::function/my_lambda/version_number"

## the following combines the above targets into a single command
build-n-update: clean build-lambda-pkg upload-s3 update-lambda
I am stuck at the last step, i.e. combining publish-lambda and attach-event into the build-n-update target. The problem is that I am unable to pass an argument from one command to the next. I will try to explain it better:
publish-lambda prints JSON-style output on the terminal:
{
    "FunctionName": "my_lambda",
    "FunctionArn": "arn:aws:lambda:us-east-2:12345:function:my_lambda:5",
    "Runtime": "python3.6",
    "Role": "arn:aws:iam::12345:role/my_role",
    "Handler": "lambda_function.lambda_handler",
    "CodeSize": 62403592,
    "Description": "",
    "Timeout": 180,
    "MemorySize": 512,
    "LastModified": "2021-02-28T17:34:04.374+0000",
    "CodeSha256": "ErfsYHVMFCQBg4iXx5ev9Z0U=",
    "Version": "5",
    "Environment": {
        "Variables": {
            "PATH": "/var/task/bin",
            "PYTHONPATH": "/var/task/src:/var/task/lib"
        }
    },
    "TracingConfig": {
        "Mode": "PassThrough"
    },
    "RevisionId": "49b5-acdd-c1032aa16bfb",
    "State": "Active",
    "LastUpdateStatus": "Successful"
}
I wish to extract the function ARN stored in the key "FunctionArn" of the above output and use it in the next command, attach-event, whose --targets argument takes the "Arn" of the last published function.
Is it possible to do this in a single command?
I have tried to experiment a bit as follows:
build-n-update: clean build-lambda-pkg upload-s3 update-lambda
make publish-lambda | xargs jq .FunctionArn -r {}
But this throws an error:
jq: Unknown option --function-name
Please help
Well, running:
make publish-lambda | xargs jq .FunctionArn -r {}
will print the command to be run, then the output of the command (run it yourself from your shell prompt and see). Of course, jq cannot parse the command line that make prints.
Anyway, what would be the goal of this? You'd just print the function ARN to stdout, and it wouldn't do you any good.
You basically have two choices: one is to combine the two commands into a single make recipe, so you can capture the information you need in a shell variable:
build-n-update: clean build-lambda-pkg upload-s3 update-lambda
    func=$$(aws lambda publish-version --function-name my_lambda \
        | jq .FunctionArn -r); \
    aws events put-targets --rule rstats-post-explaination-at-10pm-ist \
        --targets "Id"="1","Arn"="$$func"
The other alternative is to redirect the output of publish-version to a file, then parse that file in the attach-event target recipe:
publish-lambda:
    aws lambda publish-version --function-name my_lambda > publish.json

attach-event:
    aws events put-targets --rule rstats-post-explaination-at-10pm-ist \
        --targets "Id"="1","Arn"="$$(jq .FunctionArn -r publish.json)"
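Since the two targets are now separate, they can be chained in one invocation (target names as above), so publish.json already exists by the time attach-event runs:
make publish-lambda attach-event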

Lambda can't find the file when code version inserted into Handler property in CloudFormation template

The artefact that I create and deploy has a version number in it. It is of the form:
universe-0.0.1-SNAPSHOT.zip
where the 0.0.1-SNAPSHOT is the version.
Now the CloudFormation template has the Handler mapped through a Fn::Join function:
UFunctionCelestial:
  Type: AWS::Lambda::Function
  Properties:
    Code:
      S3Bucket: !Ref ArtefactRepositoryBucket
      S3Key: !Join [ '', [ !Sub '${AWS::StackName}-', !Ref CodeVersion, '.zip' ] ]
    Handler: !Join [ '', [ !Sub '${AWS::StackName}-', !Ref CodeVersion, '/src/lambdas/celestial_persist_function.handler' ] ]
    Role: !GetAtt [ UIAMRoleFunctionServiceRoleCelestial, Arn ]
    Runtime: python3.7
    Environment:
      Variables:
        CELESTIAL_TABLE_NAME: !Ref UTableCelestial
        PRIMARY_KEY: id
  DependsOn:
    - UIAMRoleFunctionServiceRolePolicyCelestial
    - UIAMRoleFunctionServiceRoleCelestial
...SNIP...
Parameters:
  ArtefactRepositoryBucket:
    Type: String
    Description: S3 bucket for asset "foundry-cdk/CelestialHandler/Code"
  CodeVersion:
    Type: String
    Description: S3 key for asset version "foundry-cdk/CelestialHandler/Code"
I guess most of this is irrelevant, except the Handler property.
Now the !Ref CodeVersion on that line seems to evaluate to 0/0/1-SNAPSHOT for some reason, even though I call this template with the command:
aws2 cloudformation deploy \
  --template-file ${CF_TEMPLATE_FILE} \
  --region ${ACCOUNT_REGION} \
  --stack-name ${PROJECT_NAME} \
  --force-upload \
  --capabilities CAPABILITY_IAM \
  --parameter-overrides \
    ArtefactRepositoryBucket=${S3_AWS_RELEASES_BUCKET} \
    CodeVersion=${APPLICATION_VERSION}
I echoed out the ${APPLICATION_VERSION} and, yep, it evaluates to 0.0.1-SNAPSHOT
and yet when I go to the console and look at my Lambda I get the message:
Lambda can't find the file universe-0/0/1-SNAPSHOT/src/lambdas/celestial_persist_function.py. Make sure that your handler upholds the format: file-name.method.
So my question is, why is CloudFormation turning my dots into slashes and giving the Lambda a bum reference?
It's not CloudFormation that's doing it; the behavior you are seeing is normal.
Generally, when you define the handler as "folder1.folder2.file.handler", Lambda will look for a folder1, then look for a folder2 inside that, then look for the file file.py inside that. Finally, file.py is expected to have a function named handler.
I am assuming that your zip file has a top-level folder called src. In that case, your handler should be defined as src.lambdas.celestial_persist_function.handler, since you have already specified where your zip file lives via the S3Key.
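To double-check what the top-level folder inside the artefact actually is, you can list the archive contents (a quick sanity check, using the zip name from the question):
unzip -l universe-0.0.1-SNAPSHOT.zip | head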
Hope this helps.

How to sort JSON output from AWS CLI using JMESPath

I am trying to sort this output from the AWS CLI by ImageId, and I executed the command below.
aws ec2 describe-images --profile xxxxxxxxxx \
  --filter Name=tag:Name,Values=Backup*some-string* \
  --query "Images[*].[Tags[?Key=='Name'].Value[]|[0],ImageId]"
The output is:
[
    [
        "Backup-20191215T174530Z-utc-some-string",
        "ami-004"
    ],
    [
        "Backup-20191219T174631Z-utc-some-string",
        "ami-002"
    ],
    [
        "Backup-20191208T174534Z-utc-some-string",
        "ami-001"
    ],
    [
        "Backup-20191222T174530Z-utc-some-string",
        "ami-003"
    ],
    [
        "Backup-20191221T174530Z-utc-some-string",
        "ami-005"
    ]
]
I found that the sort_by function of JMESPath could be a solution, but it is hard to understand.
aws ec2 describe-images --profile xxxxxxxxxx \
  --filter "Name=tag:Name,Values=Backup*some-string*" \
  --query "sort_by(Images[*].[Tags[?Key=='Name'].Value[]|[0],ImageId], &[0])"
or
aws ec2 describe-images --profile xxxxxxxxxx \
  --filter "Name=tag:Name,Values=Backup*some-string*" \
  --query "Images[*].[Tags[?Key=='Name'].Value[]|[0],ImageId] | sort_by(@, &[0])"
works fine for me. The & (expression-type operator) is needed, and @ refers to the current result being piped into sort_by.
The idea in my solution below is to sort the output first by ImageId and then apply the projection:
aws ec2 describe-images --filter Name=tag:Environment,Values=Staging --output json --query "(sort_by(Images[], &ImageId))[*].[ImageId, Tags[?Key=='Environment'].Value | [0]]"
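Along the same lines, if you only need the image IDs in sorted order, a shorter query works (a sketch reusing the filter from the command above):
aws ec2 describe-images --filter Name=tag:Environment,Values=Staging \
  --query "sort_by(Images[], &ImageId)[*].ImageId" --output text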

aws cli: ssm start-session not working with a variable as a parameter value

I am trying to automate part of my work by creating a bash function that lets me easily SSM into one of our instances. To do that, I only need to know the instance ID; then I run aws ssm start-session with the proper profile. Here's the function:
function ssm_to_cluster() {
  local instance_id=$(aws ec2 describe-instances --filters \
    "Name=tag:Environment,Values=staging" \
    "Name=tag:Name,Values=my-cluster-name" \
    --query 'Reservations[*].Instances[*].[InstanceId]' \
    | grep i- | awk '{print $1}' | tail -1)

  aws ssm start-session --profile AccountProfile --target $instance_id
}
When I run this function, I always get an error like the following:
An error occurred (TargetNotConnected) when calling the StartSession operation: "i-0599385eb144ff93c" is not connected.
However, when I take that instance ID and run the command from my terminal directly, it works:
aws ssm start-session --profile MyProfile --target i-0599385eb144ff93c
Why is this?
You're sending the instance ID as "i-0599385eb144ff93c" (with the literal quotes from the JSON output) instead of i-0599385eb144ff93c.
Here is a modified function that should work:
function ssm_to_cluster() {
  local instance_id=$(aws ec2 describe-instances --filters \
    "Name=tag:Environment,Values=staging" \
    "Name=tag:Name,Values=my-cluster-name" \
    --query 'Reservations[*].Instances[*].[InstanceId]' \
    | grep i- | awk '{print $1}' | tail -1 | tr -d '"')

  aws ssm start-session --profile AccountProfile --target $instance_id
}
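As an aside (not part of the original answer): requesting text output from the CLI avoids the JSON quoting in the first place, so the quotes never need stripping. A variation on the same lookup:
local instance_id=$(aws ec2 describe-instances --filters \
  "Name=tag:Environment,Values=staging" \
  "Name=tag:Name,Values=my-cluster-name" \
  --query 'Reservations[*].Instances[*].InstanceId' \
  --output text | tail -1)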

aws-cli gateway how to encode newlines in integration response templates invoked from bash

I am using aws apigateway update-integration-response from a bash script to update an integration response template. My problem is that the newlines do not appear in the web console. I tried all combinations of /n, \n, \\n, even unicode for the newline, without success. Below are the bash command and the output as it appears in the AWS web console:
bash:
echo "update integration response script mapping for ${CODE} ${2}"
aws apigateway update-integration-response \
  --rest-api-id ${APIID} \
  --resource-id ${RESOURCEID} \
  --http-method ${METHOD} \
  --status-code ${CODE} \
  --patch-operations \
  "op='add',path='/responseTemplates/application~1json',value='#set(\$errorMessageObj = \$util.parseJson(\$input.path(\'\$.errorMessage\')))NEWLINE/nA//nB///nX////nC\nD\\nE\\\nF\\\\n \u000A unicodeA Cg== unicodeB #if(\"\$errorMessageObj.get(\'error-code\')\" != \"\")\n{\n \"error-code\": \"\$errorMessageObj[\'error-code\']\",\n \"error-message\": \"\$errorMessageObj[\'error-message\']\"\n}\\n#else\n{\n \"error-code\": \"AWS\",\n \"error-message\": \"\$input.path(\'\$.errorMessage\')\"\n}\n#end'"
Output in the AWS web console:
#set($errorMessageObj = $util.parseJson($input.path('$.errorMessage')))NEWLINE/nA//nB///nX////nC\nD\nE\nF\n \u000A unicodeA Cg== unicodeB #if("$errorMessageObj.get('error-code')" != "")\n{\n "error-code": "$errorMessageObj['error-code']",\n "error-message": "$errorMessageObj['error-message']"\n}\n#else\n{\n "error-code": "AWS",\n "error-message": "$input.path('$.errorMessage')"\n}\n#end
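A hedged suggestion, not from the original post: the shell-escaping layer can be sidestepped entirely by keeping the patch document in a JSON file, where \n is unambiguous, and passing it via --cli-input-json (supported by the generated AWS CLI commands). A minimal sketch with placeholder IDs and a trimmed-down template value:
cat > patch.json <<'EOF'
{
  "restApiId": "YOUR_API_ID",
  "resourceId": "YOUR_RESOURCE_ID",
  "httpMethod": "GET",
  "statusCode": "200",
  "patchOperations": [
    {
      "op": "add",
      "path": "/responseTemplates/application~1json",
      "value": "{\n  \"error-code\": \"AWS\"\n}"
    }
  ]
}
EOF
aws apigateway update-integration-response --cli-input-json file://patch.json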
