I'm writing a Helm chart for a custom application that we'll need to bring up in different environments within my organization. This application has some pieces in Kubernetes (which is why I'm writing the Helm chart) and other pieces outside of K8S, more specifically various resources in AWS which I have codified with Terraform.
One of those resources is a Lambda function, which I have fronted with API Gateway. This means that when I run the Terraform in a new environment, it creates the Lambda function and attaches an API Gateway endpoint to it, with a brand new URL which AWS generates for that endpoint. I'm having Terraform record that URL as an output variable, and moreover I have a non-local backend configured so that Terraform is saving its state remotely.
What I want to do is tie them both together, directly from Helm. I want a way to run the Terraform so that it brings up my Lambda and, in doing so, saves the generated API Gateway URL to its remote state file. Then, when I install my Helm chart, I'd like Helm to be smart enough to automatically pull the API Gateway endpoint URL it needs from the Terraform remote state file and use it as a variable within my chart.
Currently, I either have to copy and paste the URL by hand or use Bash. I can get away with a bash script much like this one:
#!/bin/bash
terraform init
terraform plan -out=tfplan.out
terraform apply tfplan.out
export WEBHOOK_URL=$(terraform output webhook_url)
helm install ./mychart --set webhook.url="${WEBHOOK_URL}"
But using a Bash script to accomplish this is not ideal. It requires that I run it in the same directory as the Terraform files (because the output command must be called from that directory), and it doesn't account for different methods of authentication we might use. Moreover, other developers on the team might want to run Terraform and Helm directly and not have to rely on a custom bash script to do it for them. Since this bash script is effectively acting as an "operator," and since Helm already is kind of an operator itself, I'm wondering if there's some way I can do it entirely within Helm?
The Terraform remote state files are ultimately just JSON files. I happen to be using the Consul backend, but I could just as easily use the S3 backend or any other; at the end of the day Terraform will manifest its state as a JSON file somewhere, where (presumably) Helm could read it and pick out the specific output value. Except I'm not sure if Helm is powerful enough to do this. Looking over their documentation, I didn't really see anything outside of writing your normal values.yaml templates to specify defaults. Does Helm have any functions built into it around making REST requests for external JSON? Is this something that could be done?
Helm does not have any built-in functionality to look up values from external files or remote state.
It needs you to tell it exactly what to inject, for example via --set or an extra values file.
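If you want to avoid the Bash wrapper, the usual workaround is a thin piece of glue outside of Helm that reads the Terraform output and hands it to Helm explicitly. Below is a rough sketch of that glue in Go; the release name, the webhook_url output name, and the terraform output -json / helm upgrade --install invocations are my assumptions, not anything Helm provides.

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os"
	"os/exec"
)

// tfOutput matches one entry of `terraform output -json`
// (each output is an object with "value", "type" and "sensitive").
type tfOutput struct {
	Value string `json:"value"` // assumes the output is a plain string
}

func main() {
	if len(os.Args) < 2 {
		log.Fatal("usage: glue <terraform-dir>")
	}
	tfDir := os.Args[1]

	// Read the outputs without caring which backend stores the state.
	out := exec.Command("terraform", "output", "-json")
	out.Dir = tfDir
	raw, err := out.Output()
	if err != nil {
		log.Fatalf("terraform output failed: %v", err)
	}

	outputs := map[string]tfOutput{}
	if err := json.Unmarshal(raw, &outputs); err != nil {
		log.Fatalf("cannot parse terraform outputs: %v", err)
	}
	url := outputs["webhook_url"].Value

	// Hand the value to Helm explicitly, which is all Helm needs.
	helm := exec.Command("helm", "upgrade", "--install", "mychart", "./mychart",
		"--set", fmt.Sprintf("webhook.url=%s", url))
	helm.Stdout, helm.Stderr = os.Stdout, os.Stderr
	if err := helm.Run(); err != nil {
		log.Fatal(err)
	}
}

The same idea works as a Makefile target or a CI step; the point is that the lookup happens outside of Helm and the result is passed in as an ordinary value.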
Related
No clear path to do development in a serverless environment.
I have an API Gateway backed by some Lambda functions declared in Terraform. I deploy to the cloud and everything is fine, but how do I go about setting up a proper workflow for development? It seems like a struggle to push every small code change to the cloud just to run your code while developing. Terraform has started getting some support from the SAM framework to run your Lambda functions locally (https://aws.amazon.com/blogs/compute/better-together-aws-sam-cli-and-hashicorp-terraform/), but there is still no way to simulate a local server and test out your endpoints in Postman, for example.
First of all, I use the Serverless plugin instead of Terraform, so my answer is based on what you provided and what I found around it.
From what I understand of the provided documentation, you are able to run the SAM CLI with Terraform (cf. the chapter on local testing).
You might follow this documentation to invoke local functions.
I recommend using JSON files to define your test cases rather than injecting payloads via stdin.
The first step is to create your payload in a JSON file and invoke your Lambda with it, like so:
sam local invoke "YOUR_LAMBDA_NAME" -e ./path/to/yourjsonfile.json
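The payload file is just the event your function expects. For an API Gateway proxy integration, a minimal (purely hypothetical) ./path/to/yourjsonfile.json could look like the following; sam local generate-event apigateway aws-proxy can produce a fuller one for you:

{
  "httpMethod": "GET",
  "path": "/hello",
  "headers": { "Accept": "application/json" },
  "queryStringParameters": { "name": "world" },
  "body": null,
  "isBase64Encoded": false
}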
As the title suggests I am looking for a way to deploy a terraform file via an AWS lambda function. I would like to deploy this file via a time-based event. This is my first time working with terraform and I cannot seem to find anything pertaining to this specific use case.
I am much more versed in CloudFormation so normally what I would do is use the boto3 library to set up a lambda function that would deploy a CloudFormation stack. Does anyone know how to do this with a terraform file?
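To make the question concrete, the shape I have in mind is roughly the sketch below (Go, with hypothetical paths and names). It assumes the terraform binary and the .tf files are packaged with the function (e.g. as a container image), that the .tf files configure a remote backend so state survives between invocations, that the function's role has whatever permissions Terraform needs, and that a scheduled EventBridge rule is the time-based trigger; the 15-minute Lambda timeout is the obvious constraint.

package main

import (
	"context"
	"fmt"
	"os"
	"os/exec"

	"github.com/aws/aws-lambda-go/lambda"
)

const bundledConfig = "/var/task/terraform" // .tf files shipped alongside the code (hypothetical path)
const workDir = "/tmp/terraform"            // /var/task is read-only, so work in /tmp

func run(args ...string) error {
	cmd := exec.Command("terraform", args...)
	cmd.Dir = workDir
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr // ends up in CloudWatch Logs
	return cmd.Run()
}

func handler(ctx context.Context) error {
	// Copy the configuration somewhere writable before init; clear any
	// leftovers from a previous warm invocation first.
	_ = os.RemoveAll(workDir)
	if err := exec.Command("cp", "-R", bundledConfig, workDir).Run(); err != nil {
		return fmt.Errorf("copy config: %w", err)
	}
	if err := run("init", "-input=false"); err != nil {
		return fmt.Errorf("terraform init: %w", err)
	}
	// -auto-approve because there is no human in the loop on a timed event.
	return run("apply", "-input=false", "-auto-approve")
}

func main() {
	lambda.Start(handler)
}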
We have built a few microservices (MS) which have been deployed to our company's K8s clusters.
Currently, each of our MSs is built as a Docker image and deployed manually using the following steps, and it works fine:
Creating a ConfigMap
Installing a Service.yaml
Installing a Deployment.yaml
Installing an Ingress.yaml
I'm now looking at Helm v3 to simplify and encapsulate these deployments. I've read a lot of the Helm v3 documentation, but I still haven't found the answer to some simple questions and I hope to get an answer here before absorbing the entire doc along with Go and SPRIG and then finding out it won't fit our needs.
Our Spring MS has 5 separate application.properties files that are specific to each of our 5 environments. These properties files are simple multi-line key=value format with some comments preceded by #.
# environment based values
key1=value1
key2=value2
Using helm create, I created a chart called ./deploy in the root directory, which auto-created ./templates and a values.yaml.
The problem is that I need to access the application.properties files outside of the Chart's ./deploy directory.
From helm, I'd like to reference these 2 files from within my configmap.yaml's Data: section.
./src/main/resources/dev/application.properties
./src/main/resources/logback.xml
And I want to keep these files in their current format, not rewrite them to JSON/YAML format.
Does Helm v3 allow this?
Putting this as an answer as there's not enough space in the comments!
Check the 12-factor app link I shared above, in particular the section on configuration... The explanation there is not great, but the idea behind it is to build one container and deploy that same container in any environment without having to modify it, plus to have the ability to change the configuration without creating a new release (the latter can't be done if the config is baked into the container). This allows, for example, changing a DB connection pool size (or any other config parameter) without a release. It's also good from a security point of view, as you might not want the containers running in your lower environments (dev/test/whatnot) to have production configuration (passwords, API keys, etc.). This approach is similar to the Continuous Delivery principle of build once, deploy anywhere.
I assume that when you run the app locally, you only need access to one set of configuration, so you can keep that in a separate file (e.g. application.dev.properties), and have the parameters that change between environments in helm environment variables. I know you mentioned you don't want to do this, but this is considered a good practice nowadays (might be considered otherwise in the future...).
I also think it's important to be pragmatic: if in your case you don't feel the need to have the configuration outside of the container, then don't do it, and the suggestion I gave to change a command line parameter to pick the config file will probably work well. At the same time, keep the 12-factor app approach in mind in case you find out you do need it in the future.
I am dealing with an infrastructure setup and trying to figure out how to deploy just a single Lambda from a CI/CD pipeline.
Let's say you have 20 Lambdas in a repo and you make a change to one single Lambda; instead of deploying all of them, I just want to deploy the changed one and cut down the deployment time.
I had the idea of checking the diff in git to figure out which ones changed and deploying only that part of the functionality, but that doesn't seem like the right way to do it. I believe there is a more proper way.
I am using Terraform for now (moving to the Serverless Framework). I know that Terraform and the Serverless Framework keep state in an S3 bucket. However, in my case, when I run it through the pipeline, even though there is a Terraform state and there is no change to that state, it still deploys the whole thing as far as I can tell (I might be wrong). I just want to clear my mind and see how people do this with their pipelines.
Since you seem to be asking about both Terraform and Serverless Framework here, I'm assuming you're looking for a general answer rather than specifically how this would be solved with a particular tool.
One way to solve this problem is to decouple your build process from your deploy process by adding a version selection mechanism in between. This just means that somewhere in your system you have a value that can be written by your build process and read by your deploy process which indicates what is the "current" artifact for each of your Lambda functions.
When your build process completes successfully, it can write the information about the artifact it built into the appropriate location, and then trigger your deployment process. Your deployment process will then read the artifact information and use it to decide what to deploy.
If you have made no changes to the current artifact metadata for a particular function then the deploy process can see that and not do anything. If a particular artifact is flawed in some way and you only notice once it's deployed, you can potentially set the artifact metadata back to the previous one and re-run the deployment process to roll back. If you choose a data store that retains historical versions, you'll also have a log of changes to the current artifact which might be useful to understand circumstances that lead to an incident.
Without getting into specifics it's hard to say more about this. For Terraform in particular, the artifact metadata store ought to be something that Terraform can read using a data source. To show a real example I'm going to just arbitrarily choose AWS SSM Parameter Store as a location for that artifact metadata store:
data "aws_ssm_parameter" "foo" {
name = "FooFunctionArtifact"
}
locals {
# For this example, we'll assume that the stored parameter is a JSON
# string shaped like this:
# {
# "s3_bucket": "awesomecorp-app-artifacts"
# "s3_key": "/awesomeapp/v1.2.0/function.zip"
# }
foo_artifact = jsondecode(data.aws_ssm_parameter.foo)
}
resource "aws_lambda_function" "foo" {
function_name = "foo"
s3_bucket = local.foo_artifact.s3_bucket
s3_key = local.foo_artifact.s3_key
# etc, etc
}
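The write side of this particular example, where the build step updates the parameter after uploading a new artifact, could be as small as the following sketch. It assumes the AWS SDK for Go v2 and reuses the hypothetical FooFunctionArtifact parameter name and bucket layout from above:

package main

import (
	"context"
	"encoding/json"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/ssm"
	"github.com/aws/aws-sdk-go-v2/service/ssm/types"
)

func main() {
	ctx := context.Background()
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatal(err)
	}

	// The build step knows where it just uploaded the zip.
	artifact, err := json.Marshal(map[string]string{
		"s3_bucket": "awesomecorp-app-artifacts",
		"s3_key":    "/awesomeapp/v1.2.0/function.zip",
	})
	if err != nil {
		log.Fatal(err)
	}

	// Overwrite the "current artifact" pointer that the Terraform
	// configuration reads via the aws_ssm_parameter data source.
	client := ssm.NewFromConfig(cfg)
	_, err = client.PutParameter(ctx, &ssm.PutParameterInput{
		Name:      aws.String("FooFunctionArtifact"),
		Type:      types.ParameterTypeString,
		Value:     aws.String(string(artifact)),
		Overwrite: aws.Bool(true),
	})
	if err != nil {
		log.Fatal(err)
	}
}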
The technical details of this will vary a lot depending on your technology choices. If you don't use Terraform then you'll either use a feature similar to data sources in your other tool or you'd write some wrapper glue code that can itself retrieve the necessary information and pass it into the tool as an argument.
The main thing, regardless of technology choices, is that there is an explicit record somewhere of what is the latest artifact for each function, which is updated by your build step and read by your deploy step. This pattern can apply to other artifact types too, such as AMIs for EC2, docker images, etc.
It seems you have added the tags terraform, serverless-framework (I'll call it sls), and aws-lambda, so all of them are options for you.
terraform - Terraform itself will take care of the differences and figure out which Lambdas need to be updated. But it is not Lambda-friendly if you need to install related packages.
serverless framework (sls) - it is a good tool for managing Lambda functions, but as a side effect it has to be managed together with API Gateway. I am not sure whether the sls team has fixed this issue or not; that needs confirmation.
sls will take care of installing the related packages.
The bad part is that sls can't diff the resources to be deployed against what is planned.
cloudformation - that's AWS's own Infrastructure as Code (IaC) tool for managing AWS resources, and you should be fine using it to manage the Lambda resource. You will get the same issue as with Terraform: you have to install the related packages before deploying the stack.
The bad part is that cfn (CloudFormation) doesn't have a diff feature either; furthermore, it doesn't have proper tooling around its AWS CLI commands, so you have to use something else, such as shell scripting, Ansible, or even Terraform, to manage CloudFormation template updates.
aws cdk - The newest way is to use aws-cdk. It does have a diff feature (cdk diff), which mostly suits your current job, but it is a very new project and a lot of features are still waiting to be developed.
You can weigh these options against your team's skill set. Always choose the tools that you and your team are most confident with.
I started a pod in a Kubernetes cluster which can call the Kubernetes API via the Go SDK (like in this example: https://github.com/kubernetes/client-go/tree/master/examples/in-cluster-client-configuration). I want to listen for some external events in this pod (e.g. GitHub webhooks), fetch YAML configuration files from a repository, and apply them to this cluster.
Is it possible to call kubectl apply -f <config-file> via kubernetes API (or better via golang SDK)?
As yaml directly: no, not that I'm aware of. But if you increase the kubectl verbosity (--v=100 or such), you'll see that the first thing kubectl does to your yaml file is convert it to json, and then POST that content to the API. So the spirit of the answer to your question is "yes."
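For completeness, here is a rough sketch of that same idea with client-go's dynamic client rather than shelling out to kubectl: decode one YAML manifest into an unstructured object and create it, which is essentially kubectl apply's create path. The GroupVersionResource is hard-coded for an apps/v1 Deployment to keep it short (for arbitrary kinds you would build a RESTMapper from the discovery client), and the file path and namespace are placeholders:

package main

import (
	"bytes"
	"context"
	"log"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	yamlutil "k8s.io/apimachinery/pkg/util/yaml"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/rest"
)

func main() {
	// In-cluster credentials, same as the linked client-go example.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	client, err := dynamic.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Read the manifest you fetched from the repository (placeholder path).
	data, err := os.ReadFile("deployment.yaml")
	if err != nil {
		log.Fatal(err)
	}

	// YAML -> JSON -> unstructured, mirroring what kubectl does internally.
	obj := &unstructured.Unstructured{}
	if err := yamlutil.NewYAMLOrJSONDecoder(bytes.NewReader(data), 4096).Decode(obj); err != nil {
		log.Fatal(err)
	}

	// Hard-coded mapping for apps/v1 Deployments; use a RESTMapper for other kinds.
	gvr := schema.GroupVersionResource{Group: "apps", Version: "v1", Resource: "deployments"}

	if _, err := client.Resource(gvr).Namespace("default").Create(context.TODO(), obj, metav1.CreateOptions{}); err != nil {
		log.Fatal(err)
	}
	log.Printf("created %s/%s", obj.GetKind(), obj.GetName())
}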
This box/kube-applier project may interest you. While it does not appear to be webhook aware, I am sure they would welcome a PR teaching it to do that. Using their existing project also means you benefit from all the bugs they have already squashed, as well as their nifty prometheus metrics integration.