How to use terraform import with resource configuration - amazon-ec2

Prior to running terraform import, I had defined:
# instance.tf
resource "aws_instance" "appserver" {
}
Then I ran: terraform import aws_instance.appserver <instance-id>. It went smoothly, and I can see the imported EC2 resource with terraform show. However, the mystery to me is how to "transfer" this existing Terraform state into the Terraform config (instance.tf above) so that I can manage it as Infrastructure as Code (or at least that's how I understood it). I added the ami and instance_type keys with their corresponding values, but every time I run terraform plan, Terraform seems to want to "replace" my existing instance.
1) Why does Terraform want to replace that instance?
2) How can I "transfer" the instance's Terraform state into the config? (Is this possible?)
3) For the seasoned veterans out there: how do you manage existing AWS infrastructure with Terraform?

First of all, Terraform wants to replace your instance because it did not make the "link" you expected between the resource configuration and the existing instance.
From the official Terraform documentation (https://www.terraform.io/docs/import/index.html):
The current implementation of Terraform import can only import resources into the state. It does not generate configuration. A future version of Terraform will also generate configuration.
Because of this, prior to running terraform import it is necessary to write manually a resource configuration block for the resource, to which the imported object will be mapped.
While this may seem tedious, it still gives Terraform users an avenue for importing existing resources. A future version of Terraform will fully generate configuration, significantly simplifying this process.
With the above in mind, I would use the following steps:
First, write your Terraform resource configuration. It should look like this:
resource "aws_instance" "example" {
# ...instance configuration...
}
Then run terraform import aws_instance.example i-abcd1234 to import the existing infrastructure into your state and attach it to the resource configuration you created above.
Detailed usage documentation: https://www.terraform.io/docs/import/usage.html
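To address your second question directly: once the instance is in the state, inspect the imported attributes and mirror them in instance.tf. A minimal sketch (the AMI ID and instance type below are placeholders; use the exact values that terraform state show reports for your instance):
terraform state show aws_instance.appserver
# instance.tf -- copy the reported values into the config
resource "aws_instance" "appserver" {
  ami           = "ami-0abcd1234"   # placeholder: use the ami shown in the state
  instance_type = "t2.micro"        # placeholder: use the instance_type shown in the state
}
When the configuration matches the state, terraform plan should report no changes. A forced replacement usually means an attribute that cannot be changed in place, such as ami, differs between your config and the imported instance.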

Related

Is there a way to deploy a terraform file via an AWS lambda function?

As the title suggests, I am looking for a way to deploy a Terraform file via an AWS Lambda function. I would like to trigger this deployment on a time-based event. This is my first time working with Terraform and I cannot seem to find anything pertaining to this specific use case.
I am much more versed in CloudFormation, so normally I would use the boto3 library to set up a Lambda function that deploys a CloudFormation stack. Does anyone know how to do this with a Terraform file?

How do I use Terraform to add an existing RDS proxy to my AWS Lambda Function?

In the AWS Lambda service's console, there is a Configuration tab called Database proxies, shown here:
However, in the Terraform registry's entry for an AWS Lambda Function, there does not seem to be a place to define this relationship for my lambda. It's easy enough to add manually after I deploy the Lambda, but for obvious reasons this isn't optimal. It seems like using a DB proxy is a common enough use case for serverless architectures that there would be a way to do this with the resources I've referenced.
What am I missing?
EDIT: As of 9 months ago, this feature was not included in the AWS Provider, but I'm unsure of how to search upcoming nightly or perhaps dev releases of Terraform for this feature...
EDIT EDIT (from my comment below): the RDS instance, its proxy, the roles they use, the Lambdas, and the VPC in which they sit all work as expected. If I go to the screenshot above in the Lambdas I am deploying, I can "Add database proxy" just fine using the proxy I deployed with Terraform. There are no issues with the code, nor any errors. The problem is that having to manually add the database proxy to each Lambda I deploy defeats the purpose of using Terraform.

AWS cloudformation custom resource to generate config file for another lambda

I want to generate a Lambda's config file dynamically (basically application config) during AWS stack creation.
Only once all the configs are ready should that particular Lambda be created, along with the newly generated file. Can I achieve this using custom resources in AWS CloudFormation?
I searched, but only found examples with Lambda, CommandRunner, or SNS topics; nothing provides a custom resource to write or modify local files. Could someone provide a sample or guidance on how to do this?
Here are some options I see for your use case:
Use a Lambda-based CloudFormation custom resource for your config file logic. Load base files from S3 or check them out from version control (git) within the custom resource Lambda function.
Execute a custom script within your build/deploy process. For example, you have a build.sh script that contains the commands to deploy the CloudFormation templates, but first you execute another script that creates the config file and places it in the source folder for the Lambda function (see the sketch after this list).
Use a Docker-image-based Lambda function and include your config file logic in the Dockerfile. You can also use AWS SAM to build the Docker image within the CloudFormation deployment.
Use AWS CDK and its concept of bundling for Lambda functions.
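As a rough illustration of the second option, here is a minimal deploy-script sketch; the generate_config.py helper, the lambda_src folder, the bucket, and the stack/template names are all placeholders for whatever your project actually uses:
#!/bin/bash
set -euo pipefail
# Generate the application config and place it next to the Lambda source,
# so it gets packaged into the function's deployment artifact
python generate_config.py > lambda_src/app_config.json
# Package and deploy the CloudFormation stack; the Lambda is only created
# after the config file already exists in its source folder
aws cloudformation package --template-file template.yaml \
  --s3-bucket my-artifact-bucket --output-template-file packaged.yaml
aws cloudformation deploy --template-file packaged.yaml \
  --stack-name my-app-stack --capabilities CAPABILITY_IAM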

Any way to pull remote default values in a Helm chart?

I'm writing a Helm chart for a custom application that we'll need to bring up in different environments within my organization. This application has some pieces in Kubernetes (which is why I'm writing the Helm chart) and other pieces outside of K8S, more specifically various resources in AWS which I have codified with Terraform.
One of those resources is a Lambda function, which I have fronted with API Gateway. This means that when I run the Terraform in a new environment, it creates the Lambda function and attaches an API Gateway endpoint to it, with a brand new URL which AWS generates for that endpoint. I'm having Terraform record that URL as an output variable, and moreover I have a non-local backend configured so that Terraform is saving its state remotely.
What I want to do is tie them both together, directly from Helm. I want a way to run the Terraform so that it brings up my Lambda, and by doing so saves the generated API Gateway URL to its remote state file. Then when I install my Helm chart, I'd like it if Helm were smart enough to automatically pull from the Terraform remote state file to get the URL it needs of the API Gateway endpoint, to use as a variable within my chart.
Currently, I either have to copy and paste, or use Bash. I can get away with doing it with a bash script much like this one:
#!/bin/bash
terraform init
terraform plan -out=tfplan.out
terraform apply tfplan.out
export WEBHOOK_URL=$(terraform output webhook_url)
helm install ./mychart --set webhook.url="${WEBHOOK_URL}"
But using a Bash script to accomplish this is not ideal. It requires that I run it in the same directory as the Terraform files (because the output command must be called from that directory), and it doesn't account for different methods of authentication we might use. Moreover, other developers on the team might want to run Terraform and Helm directly and not have to rely on a custom bash script to do it for them. Since this bash script is effectively acting as an "operator," and since Helm already is kind of an operator itself, I'm wondering if there's some way I can do it entirely within Helm?
The Terraform remote state files are ultimately just JSON files. I happen to be using the Consul backend, but I could just as easily use the S3 backend or any other; at the end of the day Terraform will manifest its state as a JSON file somewhere, where (presumably) Helm could read it and pick out the specific output value. Except I'm not sure if Helm is powerful enough to do this. Looking over their documentation, I didn't really see anything outside of writing your normal values.yaml templates to specify defaults. Does Helm have any functions built into it around making REST requests for external JSON? Is this something that could be done?
Helm does not have any functionality to look up values in external files or remote state like that.
You need to tell it exactly what to inject.
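You can at least drop the "must run from the Terraform directory" constraint from the wrapper script itself. A minimal sketch, assuming Terraform 0.14+ (which supports -chdir and output -raw); the infra/terraform path, the webhook_url output name, and the chart path are placeholders:
#!/bin/bash
set -euo pipefail
TF_DIR=infra/terraform   # wherever the Terraform configuration lives
terraform -chdir="$TF_DIR" init
terraform -chdir="$TF_DIR" apply -auto-approve
# -raw prints the bare string, without the quotes that a plain `output` adds
WEBHOOK_URL=$(terraform -chdir="$TF_DIR" output -raw webhook_url)
helm install ./mychart --set webhook.url="${WEBHOOK_URL}"
This is still glue outside of Helm, which matches the answer above: Helm itself will not reach into the remote state file, so something has to hand it the value.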

Custom resource for existing Terraform provider?

I've been playing around with writing a custom resource for AWS which combines other resources in a useful way. (It's too complex to achieve effectively with a Terraform module.)
The documentation (starting with the Plugins page) outlines how to create a completely new resource from scratch. However, is it possible to "attach" my custom resource to the AWS provider? This would allow me to:
name my resources e.g. aws_foo instead of awscontrib_foo
presumably, access AWS credentials already defined for that provider
You can use the following provider to do much the same thing as Custom Resources do in AWS CloudFormation:
https://github.com/mobfox/terraform-provider-multiverse
You can even use AWS Lambda, with any language you like, to manage your resources. It also keeps the state of your resources, so you can read, update, and delete them too. It creates a resource, so it is not like an external data source.
Yes, the process is outlined here:
https://github.com/hashicorp/terraform#developing-terraform
Your customised resource can live in your own version of the AWS provider plugin.
