Uploading to AWS Lambda requires generating a zip archive of the required source code and libraries. When using Node.js as the Lambda language, you typically want a source file plus the node_modules directory included in that archive. The Terraform archive provider offers an archive_file data source, which works well when it can be used, but it can't be used when you want more than just one file or one directory. See the feature request. To work around this, I came up with the code below. It executes the steps, but not in the required sequence: run it once and it updates the zip file but doesn't upload it to AWS; run it again and it uploads to AWS.
# This resource checks the state of the node_modules directory, hoping to
# determine, most of the time, when there was a change in that directory.
# Output is a 'mark' file with that data in it. That file can be hashed to
# trigger updates to zip file creation.
resource "null_resource" "get_directory_mark" {
  provisioner "local-exec" {
    command     = "ls -l node_modules > node_modules.mark; find node_modules -type d -ls >> node_modules.mark"
    interpreter = ["bash", "-lc"]
  }

  triggers = {
    always = "${timestamp()}" # will trigger each run - small cost.
  }
}
resource "null_resource" "make_zip" {
depends_on = ["null_resource.get_directory_mark"]
provisioner "local-exec" {
command = "zip -r ${var.lambda_zip} ${var.lambda_function_name}.js node_modules"
interpreter = ["bash", "-lc"]
}
triggers = {
source_hash = "${sha1("${file("lambda_process_firewall_updates.js")}")}"
node_modules = "${sha1("${file("node_modules.mark")}")}" # see above
}
}
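As an aside: on Terraform 0.12 or later, the node_modules trigger above could be computed without the mark file at all, by hashing the directory contents directly with fileset and filesha1. A sketch, untested here, and potentially slow on a large node_modules tree:

  triggers = {
    # hash of every file under node_modules, recomputed each plan
    node_modules = sha1(join("", [for f in fileset("node_modules", "**") : filesha1("node_modules/${f}")]))
  }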
resource "aws_lambda_function" "lambda_process" {
depends_on = ["null_resource.make_zip"]
filename = "${var.lambda_zip}"
function_name = "${var.lambda_function_name}"
description = "process items"
role = "${aws_iam_role.lambda_process.arn}"
handler = "${var.lambda_function_name}.handler"
runtime = "nodejs8.10"
memory_size = "128"
timeout = "60"
source_code_hash = "${base64sha256(file("lambda_process.zip"))}"
}
Other related discussion includes this question on code hashing (see my answer) and this GitHub issue.
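One direction I'm considering instead is staging everything into a single directory, so that the archive_file data source can be used after all. A sketch (the build directory and copy command here are hypothetical, not from my config):

resource "null_resource" "stage_sources" {
  provisioner "local-exec" {
    # copy the handler and node_modules into one staging directory
    command     = "rm -rf build && mkdir build && cp ${var.lambda_function_name}.js build/ && cp -r node_modules build"
    interpreter = ["bash", "-lc"]
  }

  triggers = {
    always = "${timestamp()}"
  }
}

data "archive_file" "lambda" {
  type        = "zip"
  source_dir  = "build"
  output_path = "${var.lambda_zip}"
  depends_on  = ["null_resource.stage_sources"]
}

The lambda function could then use "${data.archive_file.lambda.output_base64sha256}" as its source_code_hash, so the hash always comes from the zip that was just built rather than the previous one.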
I'm using Octopus Deploy to deploy a simple API.
The first step of our deployment process is to generate an HTML report with the delta of the scripts run vs the scripts required to run. I used this tutorial to create the step.
The relevant code in my console application is:
var reportLocationSection = appConfiguration.GetSection(previewReportCmdLineFlag);
if (reportLocationSection.Value is not null)
{
    // Generate a preview file so Octopus Deploy can generate an artifact for approvals
    try
    {
        var report = reportLocationSection.Value;
        var fullReportPath = Path.Combine(report, deltaReportName);
        Console.WriteLine($"Generating upgrade report at {fullReportPath}");
        upgrader.GenerateUpgradeHtmlReport(fullReportPath);
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex.Message);
        return operationError;
    }
}
The PowerShell I am using in the script step is:
# Get the extracted path for the package
$packagePath = $OctopusParameters["Octopus.Action.Package[DatabaseUpdater].ExtractedPath"]
$connectionString = $OctopusParameters["Project.Database.ConnectionString"]
$reportPath = $OctopusParameters["Project.HtmlReport.Location"]
Write-Host "Report Path: $($reportPath)"
$exeToRun = "$($packagePath)\DatabaseUpdater.exe"
$generatedReport = "$($reportPath)\UpgradeReport.html"
Write-Host "Generated Report: $($generatedReport)"
if ((Test-Path $reportPath) -eq $false){
    Write-Host "Creating new directory..."
    # create the report directory so the exe has somewhere to write to
    New-Item -ItemType Directory -Path $reportPath | Out-Null
} else {
    Write-Host "Directory already exists."
}
# Run this .NET app, passing in the Connection String and a flag
# which tells the app to create a report, but not update the database
& $exeToRun --connectionString="$($connectionString)" --previewReportPath="$($reportPath)"
New-OctopusArtifact -Path "$($generatedReport)"
The error reported by Octopus is:
'Could not find file 'C:\DeltaReports\Some API\2.9.15-DbUp-Test-9\UpgradeReport.html'.'
I'm guessing that is being thrown when this PowerShell line is hit: New-OctopusArtifact ...
And that seems to indicate that the report was never created.
I've used a bit of logging to log out certain variables and the values look sound:
Report Path: C:\DeltaReports\Some API\2.9.15-DbUp-Test-9
Generated Report: C:\DeltaReports\Some API\2.9.15-DbUp-Test-9\UpgradeReport.html
Generating upgrade report at C:\DeltaReports\Some API\2.9.15-DbUp-Test-9\UpgradeReport.html
As you can see in the C#, the relevant code is wrapped in a try/catch block, but I'm not sure whether the error is being written out there or at a later point by Octopus (I'd need to do a pull request to add a marker in the code).
Can anyone see a way forward in resolving this? Has anyone else encountered this?
Cheers
I recently redid some of the work from that article for this video up on YouTube. I did run into some issues with the .SQL files not being included in the assembly. I think it was after I upgraded to .NET 6. But that might be a coincidence.
Anyway, because the files weren't being included in the assembly, when I ran the command line app via Octopus, it wouldn't properly generate the file for me. I ended up configuring the project to copy the .SQL files to a folder in the output directory instead of embedding them in the assembly. You can view a sample package here.
One thing that helped me is running the app in a debugger with the same parameters just to make sure it was actually generating the file. I'm sure you already thought of that, but I'd be remiss if I forgot to include it in my answer. :)
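For instance, invoking the console app from a shell with the same flags the step passes, just to watch it write the file (the connection string and report path here are placeholders):

.\Octopus.Trident.Database.DbUp.exe --ConnectionString="Server=localhost;Database=Trident;Trusted_Connection=True;" --PreviewReportPath="C:\Temp\Reports"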
FWIW, these are my updated scripts.
First, the Octopus Script:
$packagePath = $OctopusParameters["Octopus.Action.Package[Trident.Database].ExtractedPath"]
$connectionString = $OctopusParameters["Project.Connection.String"]
$environmentName = $OctopusParameters["Octopus.Environment.Name"]
$reportPath = $OctopusParameters["Project.Database.Report.Path"]
cd $packagePath
$appToRun = ".\Octopus.Trident.Database.DbUp"
$generatedReport = "$reportPath\UpgradeReport.html"
& $appToRun --ConnectionString="$connectionString" --PreviewReportPath="$reportPath"
New-OctopusArtifact -Path "$generatedReport" -Name "$environmentName.UpgradeReport.html"
My C# code can be found here but for ease of use, you can see it all here (I'm not proud of how I parse the parameters).
static void Main(string[] args)
{
    var connectionString = args.FirstOrDefault(x => x.StartsWith("--ConnectionString", StringComparison.OrdinalIgnoreCase));
    connectionString = connectionString.Substring(connectionString.IndexOf("=") + 1).Replace(@"""", string.Empty);

    var executingPath = Assembly.GetExecutingAssembly().Location.Replace("Octopus.Trident.Database.DbUp", "").Replace(".dll", "").Replace(".exe", "");
    Console.WriteLine($"The execution location is {executingPath}");

    var deploymentScriptPath = Path.Combine(executingPath, "DeploymentScripts");
    Console.WriteLine($"The deployment script path is located at {deploymentScriptPath}");

    var postDeploymentScriptsPath = Path.Combine(executingPath, "PostDeploymentScripts");
    Console.WriteLine($"The post-deployment script path is located at {postDeploymentScriptsPath}");

    var upgradeEngineBuilder = DeployChanges.To
        .SqlDatabase(connectionString, null)
        .WithScriptsFromFileSystem(deploymentScriptPath, new SqlScriptOptions { ScriptType = ScriptType.RunOnce, RunGroupOrder = 1 })
        .WithScriptsFromFileSystem(postDeploymentScriptsPath, new SqlScriptOptions { ScriptType = ScriptType.RunAlways, RunGroupOrder = 2 })
        .WithTransactionPerScript()
        .LogToConsole();

    var upgrader = upgradeEngineBuilder.Build();
    Console.WriteLine("Is upgrade required: " + upgrader.IsUpgradeRequired());

    if (args.Any(a => a.StartsWith("--PreviewReportPath", StringComparison.InvariantCultureIgnoreCase)))
    {
        // Generate a preview file so Octopus Deploy can generate an artifact for approvals
        var report = args.FirstOrDefault(x => x.StartsWith("--PreviewReportPath", StringComparison.OrdinalIgnoreCase));
        report = report.Substring(report.IndexOf("=") + 1).Replace(@"""", string.Empty);

        if (Directory.Exists(report) == false)
        {
            Directory.CreateDirectory(report);
        }

        var fullReportPath = Path.Combine(report, "UpgradeReport.html");
        if (File.Exists(fullReportPath) == true)
        {
            File.Delete(fullReportPath);
        }

        Console.WriteLine($"Generating the report at {fullReportPath}");
        upgrader.GenerateUpgradeHtmlReport(fullReportPath);
    }
    else
    {
        var result = upgrader.PerformUpgrade();

        // Display the result
        if (result.Successful)
        {
            Console.ForegroundColor = ConsoleColor.Green;
            Console.WriteLine("Success!");
        }
        else
        {
            Console.ForegroundColor = ConsoleColor.Red;
            Console.WriteLine(result.Error);
            Console.WriteLine("Failed!");
        }
    }
}
I hope that helps!
After a long and detailed investigation, we discovered the answer was quite obvious.
We had assumed the existing deploy process configuration was sound, because we had never had a problem with it (until now). As it transpired, there was a problem that led to the Development deployments being deployed twice.
Hence the errors like the one above, and others that talked about file handles being held by another process.
It was actually obvious in hindsight, but we were blind to it because we thought the existing process was sound 😣
I am currently using Bazel for my Go app.
container_image(
    name = "my-golang-app",
    base = "@ubuntu_base//image",
    cmd = ["/bin/my-golang-app"],
    directory = "/bin/",
    files = [":my-golang-app"],
    tags = [
        "manual",
        VERSION,
    ],
    visibility = ["//visibility:public"],
)
Besides the my-golang-app binary, I need to copy into the image, while building it, another folder named my-new-folder and everything in it.
How does one do that with Bazel? I can't seem to find the solution in the Bazel docs.
dockerfile {
    run 'mkdir -p /usr/local/bin'
    // add the lines below
    add {
        from 'docker/my-new-folder/'
        into '/'
    }
}
Use pkg_tar like the container_image docs reference. Something like this:
pkg_tar(
    name = "my-files",
    srcs = glob(["my-new-folder/**"]),
    strip_prefix = ".",
)

container_image(
    name = "my-golang-app",
    files = [":my-golang-app"],
    <everything else you already have>,
    tars = [":my-files"],
)
pkg_tar has various options to control where your files end up, if you want something beyond just taking an entire directory. For more complicated arrangements, I find multiple pkg_tar rules for various directories linked together via deps a helpful pattern.
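For example, a sketch of that pattern (the target and directory names here are hypothetical):

pkg_tar(
    name = "app-config",
    srcs = glob(["config/**"]),
    package_dir = "/etc/my-app",
)

pkg_tar(
    name = "app-assets",
    srcs = glob(["assets/**"]),
    package_dir = "/usr/share/my-app",
)

# A single tar that merges the others via deps; pass [":app-files"]
# to container_image's tars attribute.
pkg_tar(
    name = "app-files",
    deps = [
        ":app-config",
        ":app-assets",
    ],
)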
I'm trying to create a Terraform module that will build my JS lambdas, zip them, and deploy them. This, however, is proving problematic:
resource "null_resource" "build_lambda" {
count = length(var.lambdas)
provisioner "local-exec" {
command = "mkdir tmp"
working_dir = path.root
}
provisioner "local-exec" {
command = var.lambdas[count.index].code.build_command
working_dir = var.lambdas[count.index].code.working_dir
}
}
data "archive_file" "lambda_zip" {
count = length(var.lambdas)
type = "zip"
source_dir = var.lambdas[count.index].code.working_dir
output_path = "${path.root}/tmp/${count.index}.zip"
depends_on = [
null_resource.build_lambda
]
}
/*******************************************************
 * Lambda definition
 *******************************************************/
resource "aws_lambda_function" "lambda" {
  count            = length(var.lambdas)
  filename         = data.archive_file.lambda_zip[count.index].output_path
  source_code_hash = filebase64sha256(data.archive_file.lambda_zip[count.index].output_path)
  function_name    = "${var.application_name}-${var.lambdas[count.index].name}"
  description      = var.lambdas[count.index].description
  handler          = var.lambdas[count.index].handler
  runtime          = var.lambdas[count.index].runtime
  role             = aws_iam_role.iam_for_lambda.arn
  memory_size      = var.lambdas[count.index].memory_size

  depends_on = [aws_iam_role_policy_attachment.lambda_logs, aws_cloudwatch_log_group.log_group, data.archive_file.lambda_zip]
}
The property source_code_hash = filebase64sha256(data.archive_file.lambda_zip[count.index].output_path), although technically not obligatory, is necessary: without it the existing lambda will never get overridden, because Terraform will think it is still the same version and will skip the deployment altogether. Unfortunately, it looks like the filebase64sha256 function is evaluated before the creation of any resource. This means there is no zip for the hash calculation, and so I get the error:
Error: Error in function call
on modules\api-gateway-lambda\main.tf line 35, in resource "aws_lambda_function" "lambda":
35: source_code_hash = filebase64sha256(data.archive_file.lambda_zip[count.index].output_path)
|----------------
| count.index is 0
| data.archive_file.lambda_zip is tuple with 1 element
Call to function "filebase64sha256" failed: no file exists at tmp\0.zip.
If I manually place a zip in the right location, I can see that the whole thing starts working and the zip eventually gets overridden by a new one, but the hash in that case must come from the previous zip.
What is the right way to execute the whole thing in the right order?
The archive_file data source has its own output_base64sha256 attribute which can give you that same result without asking Terraform to read a file that doesn't exist yet:
source_code_hash = data.archive_file.lambda_zip[count.index].output_base64sha256
The data source will populate this at the same time it creates the file, and because your lambda function depends on the data source it will therefore always be available before the lambda function configuration is evaluated.
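In context, the question's resource would become something like this (a sketch; everything not shown stays as it was):

resource "aws_lambda_function" "lambda" {
  count            = length(var.lambdas)
  filename         = data.archive_file.lambda_zip[count.index].output_path
  source_code_hash = data.archive_file.lambda_zip[count.index].output_base64sha256
  function_name    = "${var.application_name}-${var.lambdas[count.index].name}"
  description      = var.lambdas[count.index].description
  handler          = var.lambdas[count.index].handler
  runtime          = var.lambdas[count.index].runtime
  role             = aws_iam_role.iam_for_lambda.arn
  memory_size      = var.lambdas[count.index].memory_size

  depends_on = [aws_iam_role_policy_attachment.lambda_logs, aws_cloudwatch_log_group.log_group]
}

Because source_code_hash now references the data source directly, the explicit depends_on entry for data.archive_file.lambda_zip is no longer needed.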
I'm relatively new to Terraform and I'm trying to iterate over all aws_instance resources to apply a null_resource. Can you use multiple splats to access all instances, regardless of their names?
The EC2 instances are broken down into three types:
aws_instance.web.* (3 instances)
aws_instance.app.* (3 instances)
aws_instance.db.* (2 instances)
Here's my attempt to apply a null_resource to all eight aws_instances:
resource "null_resource" "install_security_package" {
#count = "${length(aws_instance)}" #terraform error: resource count can't reference variable: aws_instance
#count = "${length(aws_instance.*)}" #terraform error: resource variables must be three parts: TYPE.NAME.ATTR
count = "${length(aws_instance.*.*)}" #terraform error: unknown resource 'aws_instance.*'
connection {
type = "ssh"
host = "${element(aws_instance.*.private_ip, count.index)}"
user = "${lookup(var.user, var.platform)}"
private_key = "${file("${var.private_key_path}")}"
timeout = "2m"
}
provisioner "remote-exec" {
inline = [
"sudo rpm -Uvh http://www.example.com/security/repo/security_baseline.rpm",
]
}
}
It is not currently possible to match all resources of a given type. The "splat" syntax, as you've seen, only allows selecting all of the instances created from a particular resource block.
The closest you can get to this with Terraform today is to concatenate together the different resources:
concat(aws_instance.web.*.private_ip, aws_instance.app.*.private_ip, aws_instance.db.*.private_ip)
In the current version of Terraform as of this answer, it is necessary to use some of the workarounds shared in GitHub issue #4084 to avoid duplicating that complex expression in multiple places. A forthcoming feature called Local Values will make this simpler in the near future, allowing the list to be given a name that can be re-used in multiple places:
# Won't work until Terraform PR#15449 is merged and released
locals {
  aws_instance_addrs = "${concat(aws_instance.web.*.private_ip, aws_instance.app.*.private_ip, aws_instance.db.*.private_ip)}"
}

resource "null_resource" "install_security_package" {
  count = "${length(local.aws_instance_addrs)}"

  connection {
    type        = "ssh"
    host        = "${local.aws_instance_addrs[count.index]}"
    user        = "${lookup(var.user, var.platform)}"
    private_key = "${file("${var.private_key_path}")}"
    timeout     = "2m"
  }

  provisioner "remote-exec" {
    inline = [
      "sudo rpm -Uvh http://www.example.com/security/repo/security_baseline.rpm",
    ]
  }
}
I am using Terraform for continuous deployment of Lambda functions. The Lambda module creates the function and the initial aliases [DEV, QA, PROD]. When a change is made, the source_code_hash is updated and Terraform updates the code. The challenge is that when I want to move the alias from DEV to QA, it updates the entire stack. The code is below; your help is appreciated.
$ cat main.tf
module "sample" {
  source           = "./lambda"
  name             = "sample"
  runtime          = "nodejs6.10"
  role             = "${aws_iam_role.iam_role_for_lambda.arn}"
  filename         = "../Archive.zip"
  source_code_hash = "${base64sha256(file("../Archive.zip"))}"
  source_dir       = "../sample"
  alias            = "${var.env_name}"
}
$ cat module/main.tf
resource "aws_lambda_function" "lambda" {
  filename         = "${var.filename}"
  function_name    = "${var.name}"
  role             = "${var.role}"
  handler          = "${var.name}.${var.handler}"
  runtime          = "${var.runtime}"
  source_code_hash = "${data.archive_file.lambda_zip.output_base64sha256}"
  publish          = "true"
}

resource "aws_lambda_alias" "lambda_alias" {
  count            = "2"
  name             = "${element(var.alias, count.index)}"
  #name            = "${var.alias}"
  description      = "${var.name}"
  function_name    = "${aws_lambda_function.lambda.arn}"
  function_version = "${aws_lambda_function.lambda.version}"
}
You need different tfvars and tfstate files for different environments.
Assuming you use a recent Terraform version, create an environment folder (for example, env) and add <env>-backend.tf and <env>.tfvars files for each environment:
$ cat main.tf
terraform {
  required_version = ">= 0.9.1"

  backend "s3" {
    encrypt = "true"
  }
}
$ cat env/dev-backend.tf
bucket = "terraform-<change-to-s3-global-unique-id>"
key = "terraform/dev/terraform.tfstate"
kms_key_id = "xxxx-xxxx-xxxx-xxxx"
$ cat env/dev.tfvars
env_name = "dev"
Do the same for qa and prod environments.
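For example, the qa pair would mirror the dev files:
$ cat env/qa-backend.tf
bucket = "terraform-<change-to-s3-global-unique-id>"
key = "terraform/qa/terraform.tfstate"
kms_key_id = "xxxx-xxxx-xxxx-xxxx"
$ cat env/qa.tfvars
env_name = "qa"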
You should then be able to run the commands below to get the same Lambda Terraform code working in different environments:
rm -rf .terraform
export env="dev"
terraform init -backend=true -backend-config=env/${env}-backend.tf
terraform plan -var-file=./env/${env}.tfvars
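Once the plan looks right, apply with the same var-file:
terraform apply -var-file=./env/${env}.tfvars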