Get ARN of vendored layers - aws-lambda

It looks like AWS-vendored layers such as AWSLambda-Python37-SciPy1x have a different account ID and head version in their ARN in different regions, e.g.
us-east-1: arn:aws:lambda:us-east-1:668099181075:layer:AWSLambda-Python37-SciPy1x:22
us-east-2: arn:aws:lambda:us-east-2:259788987135:layer:AWSLambda-Python37-SciPy1x:20
From a script I need to add the layer that matches the lambda's region, but I'm not finding an AWS CLI or boto3 command that will give me the ARN of a "published" layer (i.e. one that an AWS admin has shared with all accounts); I can only list my own layers (e.g. aws lambda list-layers).
The AWS Lambda console in the browser does show the vendored layers, so I loaded the page, looked through the JS console, and saw the following request being made:
https://console.aws.amazon.com/lambda/services/ajax?operation=listAwsVendedLayers&locale=en
So it looks like the console's backing API has an operation for this, but I cannot find the equivalent anywhere in the AWS CLI or boto3.
Any ideas, short of using curl with the proper request headers and auth info (a pain)? Perhaps there is a way to run a "raw" request through boto3 so I could issue this listAwsVendedLayers operation? I looked in the docs but could not find anything.
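In the meantime, one workaround that stays inside the documented API is to maintain the region-to-ARN table yourself and verify each entry with get-layer-version-by-arn, which (unlike listAwsVendedLayers) is a published CLI/boto3 operation. A rough sketch; the two ARNs are the ones from above, and the table is something you would have to keep current yourself:

# per-region ARNs for the vendored SciPy layer (maintained by hand)
declare -A SCIPY_LAYER_ARN=(
  [us-east-1]="arn:aws:lambda:us-east-1:668099181075:layer:AWSLambda-Python37-SciPy1x:22"
  [us-east-2]="arn:aws:lambda:us-east-2:259788987135:layer:AWSLambda-Python37-SciPy1x:20"
)
REGION="us-east-1"
# fails loudly if that layer version is no longer published in the region
aws lambda get-layer-version-by-arn --region "$REGION" --arn "${SCIPY_LAYER_ARN[$REGION]}"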

Related

Running/Testing an AWS Serverless API written in Terraform

There is no clear path to doing development in a serverless environment.
I have an API Gateway backed by some Lambda functions, all declared in Terraform. I deploy to the cloud and everything is fine, but how do I set up a proper workflow for development? It seems like a struggle to push every small code change to the cloud just in order to run it. Terraform has started getting some support from the SAM framework for running Lambda functions locally (https://aws.amazon.com/blogs/compute/better-together-aws-sam-cli-and-hashicorp-terraform/), but there is still no way to simulate a local server and test your endpoints in Postman, for example.
First of all, I use the Serverless Framework plugin rather than Terraform, so my answer is based on what you provided plus what I found around it.
From what I understand of the provided documentation, you are able to run the SAM CLI with Terraform (cf. the "Local testing" chapter).
You can follow that documentation to invoke your functions locally.
I recommend using JSON files for your test cases instead of injecting the payload via stdin.
The first step is to put your payload in a JSON file and invoke your lambda with that file, like:
sam local invoke "YOUR_LAMBDA_NAME" -e ./path/to/yourjsonfile.json
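For example (the event file below is hypothetical; its shape depends on how your function is triggered, and this one mimics a minimal API Gateway proxy event):

cat > ./events/get-user.json <<'EOF'
{
  "httpMethod": "GET",
  "path": "/users/42",
  "queryStringParameters": null,
  "body": null
}
EOF
sam local invoke "YOUR_LAMBDA_NAME" -e ./events/get-user.json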

How do I use Terraform to add an existing RDS proxy to my AWS Lambda Function?

In the AWS Lambda service's console, there is a Configuration tab called Database proxies, shown here:
However, in the Terraform registry's entry for an AWS Lambda function, there does not seem to be a place to define this relationship for my Lambda. It's easy enough to add manually after I deploy the Lambda, but for obvious reasons that isn't optimal. Using a DB proxy seems like a common enough use case in serverless architectures that there ought to be a way to do this with the resources I've referenced.
What am I missing?
EDIT: As of 9 months ago, this feature was not included in the AWS provider, but I'm unsure how to search upcoming nightly or dev releases of Terraform for it...
EDIT EDIT (from my comment below): The RDS instance, its proxy, the roles they use, the Lambdas, and the VPC in which they all sit work as expected. If I go to the screenshot above for the Lambdas I am deploying, I can Add database proxy just fine using the proxy I deployed with Terraform. There are no issues with the code, nor any errors. The problem is that having to manually add the database proxy to each Lambda I deploy defeats the purpose of using Terraform.
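For anyone hitting this: per the AWS docs, the console's Add database proxy button does not set any field on the function itself; it mainly grants the function's execution role rds-db:connect on the proxy. If that is right, the attachment can be reproduced as plain IAM, which Terraform can manage. A sketch with the AWS CLI, where the role name, account ID, and proxy resource ID are all placeholders (the Terraform equivalent would be an aws_iam_role_policy on the execution role):

# grant the execution role permission to connect through the proxy
aws iam put-role-policy \
  --role-name my-lambda-exec-role \
  --policy-name rds-proxy-connect \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Action": "rds-db:connect",
      "Resource": "arn:aws:rds-db:us-east-1:123456789012:dbuser:prx-EXAMPLE/*"
    }]
  }'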

Any way to pull remote default values in a Helm chart?

I'm writing a Helm chart for a custom application that we'll need to bring up in different environments within my organization. This application has some pieces in Kubernetes (which is why I'm writing the Helm chart) and other pieces outside of K8S, more specifically various resources in AWS which I have codified with Terraform.
One of those resources is a Lambda function, which I have fronted with API Gateway. This means that when I run the Terraform in a new environment, it creates the Lambda function and attaches an API Gateway endpoint to it, with a brand new URL which AWS generates for that endpoint. I'm having Terraform record that URL as an output variable, and moreover I have a non-local backend configured so that Terraform is saving its state remotely.
What I want to do is tie them both together, directly from Helm. I want a way to run the Terraform so that it brings up my Lambda, and by doing so saves the generated API Gateway URL to its remote state file. Then when I install my Helm chart, I'd like it if Helm were smart enough to automatically pull from the Terraform remote state file to get the URL it needs of the API Gateway endpoint, to use as a variable within my chart.
Currently, I either have to copy and paste, or use Bash. I can get away with doing it with a bash script much like this one:
#!/bin/bash
terraform init
terraform plan -out=tfplan.out
terraform apply tfplan.out
export WEBHOOK_URL=$(terraform output webhook_url)
helm install ./mychart --set webhook.url="${WEBHOOK_URL}"
But using a Bash script to accomplish this is not ideal. It requires that I run it in the same directory as the Terraform files (because the output command must be called from that directory), and it doesn't account for different methods of authentication we might use. Moreover, other developers on the team might want to run Terraform and Helm directly and not have to rely on a custom bash script to do it for them. Since this bash script is effectively acting as an "operator," and since Helm already is kind of an operator itself, I'm wondering if there's some way I can do it entirely within Helm?
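One partial fix for the directory problem: Terraform 0.14+ has a global -chdir flag and terraform output -raw, so a wrapper at least no longer needs to run from the Terraform directory or strip quotes from the output. A sketch, with ./infra standing in for wherever the Terraform files live:

WEBHOOK_URL=$(terraform -chdir=./infra output -raw webhook_url)
helm install ./mychart --set webhook.url="${WEBHOOK_URL}"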
The Terraform remote state files are ultimately just JSON files. I happen to be using the Consul backend, but I could just as easily use the S3 backend or any other; at the end of the day Terraform will manifest its state as a JSON file somewhere, where (presumably) Helm could read it and pick out the specific output value. Except I'm not sure if Helm is powerful enough to do this. Looking over their documentation, I didn't really see anything outside of writing your normal values.yaml templates to specify defaults. Does Helm have any functions built into it around making REST requests for external JSON? Is this something that could be done?
Helm does not have any functionality to fetch or parse external files like that; you need to tell it exactly what to inject.
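If you do go the route of reading the state yourself, the outputs live at a predictable spot in the JSON (.outputs.<name>.value in the 0.12+ state format), but something outside Helm still has to fetch the file and pass the value in. A sketch against an S3 backend, with the bucket and key as placeholders:

WEBHOOK_URL=$(aws s3 cp s3://my-tf-state/prod/terraform.tfstate - \
  | jq -r '.outputs.webhook_url.value')
helm install ./mychart --set webhook.url="${WEBHOOK_URL}"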

Custom resource for existing Terraform provider?

I've been playing around with writing a custom resource for AWS which combines other resources in a useful way. (It's too complex to achieve effectively with a Terraform module.)
The documentation (starting with the Plugins page) outlines how to create a completely new resource from scratch. However, is it possible to "attach" my custom resource to the AWS provider? This would allow me to:
name my resources e.g. aws_foo instead of awscontrib_foo
presumably, access AWS credentials already defined for that provider
You can use the following provider to do exactly what Custom Resources do in AWS CloudFormation:
https://github.com/mobfox/terraform-provider-multiverse
You can even back it with an AWS Lambda function and use any language you like to manage your resources. It also keeps state for your resources, so you can read, update, and delete them too. It creates a real resource, so it is not like an external data source.
Yes, the process is outlined here
https://github.com/hashicorp/terraform#developing-terraform
Your customised resources can live in your own build of the AWS provider plugin.
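Roughly, the pre-0.13 flow looked like this (a sketch; the repository location and the plugin discovery directory both changed between Terraform versions):

git clone https://github.com/terraform-providers/terraform-provider-aws
cd terraform-provider-aws
# add your aws_foo resource to the provider's resource map, then:
go build -o terraform-provider-aws
# pre-0.13 Terraform discovers third-party plugin binaries here
mkdir -p ~/.terraform.d/plugins
cp terraform-provider-aws ~/.terraform.d/plugins/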

Sync a bucket with AWS ruby tools

I am using Amazon's official aws-sdk gem, but I can't seem to find any functionality that works like the command-line tool's aws s3 sync <path> <bucket>. Does it exist, or am I forced to upload each file separately (slow)?
There isn't an API call that achieves that.
A sync is basically a call to list the objects in the bucket, a pass over your local path, and then the uploads/downloads needed to bring the two locations in sync; that's what the AWS CLI tool does under the hood.
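To make that concrete, here is the same logic sketched with the CLI's low-level s3api calls (upload-only, with no timestamp/ETag comparison; the bucket and source directory are placeholders). In Ruby you would do the equivalent with list_objects_v2 plus Object#upload_file in a loop:

BUCKET="my-bucket"
SRC="./public"
# list what the bucket already has
aws s3api list-objects-v2 --bucket "$BUCKET" \
  --query 'Contents[].Key' --output text | tr '\t' '\n' > remote_keys.txt
# upload any local file the bucket is missing
find "$SRC" -type f | while read -r f; do
  key="${f#"$SRC"/}"
  grep -qxF "$key" remote_keys.txt || \
    aws s3api put-object --bucket "$BUCKET" --key "$key" --body "$f"
done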
