I have an AWS Lambda deployed successfully with Terraform:
resource "aws_lambda_function" "lambda" {
filename = "dist/subscriber-lambda.zip"
function_name = "test_get-code"
role = <my_role>
handler = "main.handler"
timeout = 14
reserved_concurrent_executions = 50
memory_size = 128
runtime = "python3.6"
tags = <my map of tags>
source_code_hash = "${base64sha256(file("../modules/lambda/lambda-code/main.py"))}"
kms_key_arn = <my_kms_arn>
vpc_config {
subnet_ids = <my_list_of_private_subnets>
security_group_ids = <my_list_of_security_groups>
}
environment {
variables = {
environment = "dev"
}
}
}
Now, when I run the terraform plan command, it says my Lambda resource needs to be updated because the source_code_hash has changed, but I didn't update the Lambda Python codebase (which is versioned in a folder of the same repo):
~ module.app.module.lambda.aws_lambda_function.lambda
last_modified: "2018-10-05T07:10:35.323+0000" => <computed>
source_code_hash: "jd6U44lfe4124vR0VtyGiz45HFzDHCH7+yTBjvr400s=" => "JJIv/AQoPvpGIg01Ze/YRsteErqR0S6JsqKDNShz1w78"
I suppose it is because Terraform compresses my Python sources each time and the archive changes. How can I avoid that if there are no changes in the Python code? Is my hypothesis plausible, given that I didn't change the Python codebase (I mean, why does the hash change then)?
This is because you are hashing just main.py but uploading dist/subscriber-lambda.zip. Terraform compares the hash to the hash it calculates when the file is uploaded to lambda. Since the hashing is done on two different files, you end up with different hashes. Try running the hash on the exact same file that is being uploaded.
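For example, a minimal sketch against the original configuration, hashing the same zip that is referenced in filename (the Terraform 0.12+ form is shown as an alternative):

filename         = "dist/subscriber-lambda.zip"
# Terraform 0.11 syntax, hashing the artifact that is actually uploaded:
source_code_hash = "${base64sha256(file("dist/subscriber-lambda.zip"))}"
# Terraform 0.12+ equivalent:
# source_code_hash = filebase64sha256("dist/subscriber-lambda.zip")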
This works for me, and it also doesn't trigger an update on the Lambda function when the code hasn't changed:
data "archive_file" "lambda_zip" {
type = "zip"
source_dir = "../dist/go"
output_path = "../dist/lambda_package.zip"
}
resource "aws_lambda_function" "aggregator_func" {
description = "MyFunction"
function_name = "my-func-${local.env}"
filename = data.archive_file.lambda_zip.output_path
runtime = "go1.x"
handler = "main"
source_code_hash = data.archive_file.lambda_zip.output_base64sha256
role = aws_iam_role.function_role.arn
timeout = 120
publish = true
tags = {
environment = local.env
}
}
I'm going to add my answer to contrast with the one @ODYN-Kon provided.
The source_code_hash field in resource "aws_lambda_function" is not compared to some hash of the zip you upload. Instead, the hash is merely checked against the saved Terraform state from the last time it ran. So, the next time you run Terraform, it computes the hash of the actual Python file to see if it has changed. If it has, it assumes that the zip has been changed and the Lambda function resource needs to be run again. The source_code_hash can have any value you want to give it, or it can be omitted entirely. You could set it to a constant of some arbitrary string, and then it would never change unless you edit your Terraform configuration.
Now, the problem there is that Terraform assumes you updated the zip file. Assuming you only have one directory or one file in the zip archive, you can use the Terraform data source archive_file to create the zip file. I have a case where I cannot use that because I need a directory and a file (JS world: source + node_modules/). But here is how you can use that:
data "archive_file" "lambdaCode" {
type = "zip"
source_file = "lambda_process_firewall_updates.js"
output_path = "${var.lambda_zip}"
}
Alternatively, you can archive an entire directory if you replace the source_file statement with source_dir = "node_modules".
Once you do this, you can reference the hash of the zip archive inside the resource "aws_lambda_function" "lambda" { block as "${data.archive_file.lambdaCode.output_base64sha256}" for the source_code_hash field. Then, any time the zip changes, the Lambda function gets updated. And the archive_file data source knows that any time source_file changes it must regenerate the zip.
Now, I haven't drilled down to a root cause in your case, but hopefully this gives some help toward a better place. You can check Terraform's saved state via tf state list, which lists the items in the saved state. Find the one that matches your Lambda function block and then run tf state show <state-name>. For example, for one I am working on:
tf state show aws_lambda_function.test-lambda-networking gives about 30 lines of output, including:
source_code_hash = 2fKX9v/duluQF0H6O9+iRnID2gokhfpXIXpxyeVBUM0=
You can compare the hash via command-line commands. Example on macOS: sha256sum my-lambda.zip, where sha256sum was installed by brew install coreutils.
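Note that sha256sum prints a hex digest, while the value stored in Terraform state is the base64-encoded digest, so for a directly comparable value you can (for example) base64-encode the raw digest instead:

openssl dgst -sha256 -binary my-lambda.zip | base64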
As mentioned, the use of archive_file doesn't work when you have multiple elements of the zip which are not isolated to a single directory. I think that probably happens a lot, so I wish the HashiCorp folks would extend archive_file to support multiple sources. I even went looking at the Go code, but that is a rainy-day project. One variation I use is to set the source_code_hash to "${base64sha256(file("my-lambda.zip"))}". But that still requires me to run tf twice.
As others have said, your zip should be used for both your filename and your hash.
I want to mention that you can also get similar recreation issues if you use the wrong hash function in your Lambda definitions. For example, filesha256("file.zip") will also recreate your Lambdas every time. You have to use filebase64sha256("file.zip") (Terraform 0.11.12+) or base64sha256(file("file.zip")), as mentioned under source_code_hash here.
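To make the difference concrete, a minimal sketch (the zip name is just a placeholder):

# Correct: base64-encoded SHA-256 of the zip, which is what Lambda reports back
source_code_hash = filebase64sha256("file.zip")
# Incorrect: filesha256() returns a hex digest that never matches,
# so the function is redeployed on every apply
# source_code_hash = filesha256("file.zip")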
I have two data files (dataUAT.json and dataQA.json) in fixtures from which I'm getting data for my spec files. Suppose I have different environments, like UAT and QA.
I want to use the dataUAT.json file for the UAT environment and dataQA.json for the QA environment.
Is there any way to switch between them without altering the code, like changing it directly from the command line?
Instead of modifying your plugins, you could pass in an environment variable from the command line:
cypress run --env fixtureEnvironment=UAT
cypress run --env fixtureEnvironment=QA
You could then reference this environment variable when finding the fixture.
cy.fixture(`data${Cypress.env('fixtureEnvironment') ?? 'QA'}`)
  .then(...);
In the above, I've included a nullish coalescing operator to check for the variable being set, and if it is not, defaulting to QA.
The Cypress docs have an example for this, and I also have a repo that showcases this together with cypress-grep.
Basically, you will need to make a few adjustments in your plugins/index.js by adding the following:
// promisified fs module
const fs = require('fs-extra')
const path = require('path')

function getConfigurationByFile(file) {
  const pathToConfigFile = path.resolve('..', 'config', `${file}.json`)
  return fs.readJson(pathToConfigFile)
}

// plugins file
module.exports = (on, config) => {
  // accept a configFile value or use development by default
  const file = config.env.configFile || 'development'
  return getConfigurationByFile(file)
}
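With that in place, you can pick the configuration at run time via the same --env mechanism as in the first answer. The file names here are just placeholders; each value must correspond to a JSON file in the config directory referenced above:

cypress run --env configFile=uat
cypress run --env configFile=qa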
In the EC2 instance module I have a couple of security groups: one from a variable and the other created in the module. I have a situation where new security groups need to be added in the Terraform files that source this module.
To be able to concatenate a new list, I want to convert the existing format1 into an explicit list, which I am not able to do. How can I achieve this?
resource "aws_instance" "instance" {
//...
# (current)format1: Works
vpc_security_group_ids = ["${aws_security_group.secGrp_1.id}", "${var.secGrp_2}"]
# format2: Doesn't work
vpc_security_group_ids = "${list(aws_security_group.secGrp_1.id, var.secGrp_2)}"
# format3: Works
vpc_security_group_ids = "${list(var.secGrp_2)}"
# format3: Doesn't works
vpc_security_group_ids = "${list(tostring(aws_security_group.secGrp_1.id), var.secGrp_2)}"
Format2 fails with: "vpc_security_group_ids: should be a list".
I suspect the secGrp_1 ID is not being recognized as a string in this representation.
Format4 fails with: "unknown function called: tostring in:
${(list(tostring(aws_security_group.secGrp_1.id), var.secGrp_2))}"
P.S.: The Terraform version we are using is 0.11.x.
Terraform v0.11 is unable to represent partially-unknown lists during expression evaluation. On initial create, aws_security_group.secGrp_1.id will be unknown and therefore passing it to the list function will produce a wholly-unknown value, which isn't valid to use as the value for a list argument.
This is one of the various expression-language-related problems that Terraform v0.12 addressed. In Terraform v0.12, the following expression, equivalent to your second one, should work:
vpc_security_group_ids = [
  aws_security_group.secGrp_1.id,
  var.secGrp_2,
]
To get your intended result with Terraform v0.11 will require applying this configuration in two stages, so that Terraform can know the value of aws_security_group.secGrp_1.id before evaluating aws_instance.instance:
Run terraform apply -target=aws_security_group.secGrp_1 to apply only that resource and the other resources it depends on.
Then run just terraform apply to apply everything else, including aws_instance.instance.
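In command form, that two-stage sequence is simply:

terraform apply -target=aws_security_group.secGrp_1
terraform apply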
I am writing an automation script for an old project and I need some help with pvpython from ParaView 3.98.1. The function SaveData() does not exist in this version. I found its implementation here and moved it into my code. How can I save a file as ASCII? Calling it like SaveData(filename, proxy=px, FileType='Ascii') saves my files as binaries (awkward behavior).
I need this version because some of my codes in the scripting pipeline handle very specific vtk files. Using the SaveData() function of the latest versions ended up creating different metadata in my final files, and when I process them it ends up destroying my results. It is easier at the moment to use an older version of Paraview than to modify all my codes.
Edit:
The website is not working now, but it was yesterday. Maybe it is an internal problem? Anyway, the code is attached below.
# -----------------------------------------------------------------------------
def SetProperties(proxy=None, **params):
    """Sets one or more properties of the given pipeline object. If an argument
    is not provided, the active source is used. Pass a list of property_name=value
    pairs to this function to set property values. For example::
        SetProperties(Center=[1, 2, 3], Radius=3.5)
    """
    if not proxy:
        proxy = active_objects.source
    properties = proxy.ListProperties()
    for param in params.keys():
        pyproxy = servermanager._getPyProxy(proxy)
        pyproxy.__setattr__(param, params[param])

# -----------------------------------------------------------------------------
def CreateWriter(filename, proxy=None, **extraArgs):
    """Creates a writer that can write the data produced by the source proxy in
    the given file format (identified by the extension). If no source is
    provided, then the active source is used. This doesn't actually write the
    data, it simply creates the writer and returns it."""
    if not filename:
        raise RuntimeError ("filename must be specified")
    session = servermanager.ActiveConnection.Session
    writer_factory = servermanager.vtkSMProxyManager.GetProxyManager().GetWriterFactory()
    if writer_factory.GetNumberOfRegisteredPrototypes() == 0:
        writer_factory.UpdateAvailableWriters()
    if not proxy:
        proxy = GetActiveSource()
    if not proxy:
        raise RuntimeError ("Could not locate source to write")
    writer_proxy = writer_factory.CreateWriter(filename, proxy.SMProxy, proxy.Port)
    writer_proxy.UnRegister(None)
    pyproxy = servermanager._getPyProxy(writer_proxy)
    if pyproxy and extraArgs:
        SetProperties(pyproxy, **extraArgs)
    return pyproxy

# -----------------------------------------------------------------------------
def SaveData(filename, proxy=None, **extraArgs):
    """Save data produced by 'proxy' in a file. If no proxy is specified the
    active source is used. Properties to configure the writer can be passed in
    as keyword arguments. Example usage::
        SaveData("sample.pvtp", source0)
        SaveData("sample.csv", FieldAssociation="Points")
    """
    writer = CreateWriter(filename, proxy, **extraArgs)
    if not writer:
        raise RuntimeError ("Could not create writer for specified file or data type")
    writer.UpdateVTKObjects()
    writer.UpdatePipeline()
    del writer
# -----------------------------------------------------------------------------
The question is answered here (also my post). I used SaveData() to save a binary file with the proxy I need and then used DataSetWriter() to change the FileType to ASCII. It is not a beautiful solution, since SaveData() is supposed to do that itself, but it does the job.
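For reference, a minimal sketch of that second step, assuming the legacy VTK classes are reachable from pvpython; the import path and file names below are assumptions, not taken from the original post:

import vtk  # in some ParaView builds this is exposed as: from paraview import vtk

# Read the binary legacy .vtk file previously written by SaveData()
reader = vtk.vtkDataSetReader()
reader.SetFileName("output_binary.vtk")
reader.Update()

# Rewrite the same dataset in ASCII
writer = vtk.vtkDataSetWriter()
writer.SetFileName("output_ascii.vtk")
writer.SetFileTypeToASCII()
writer.SetInputConnection(reader.GetOutputPort())
writer.Write()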
I'm doing a project with Puppet; however, I'm currently stuck on one piece of logic.
Thus, I want to know: can we fetch a variable from a .yaml, .json, or plain text file in a Puppet manifest file?
For example,
My Puppet manifest wants to create a user, but the variable exists in the .yaml (or some other configuration) file, hence I need to fetch the variable from the outside file. The Puppet manifest should also be able to loop if multiple users exist in the .yaml file.
I read about Hiera, but let's say we are not using Hiera; is there any other possible way?
There are a number of ways you can do this using a combination of built-in and stdlib functions, at least for YAML and JSON.
Using the built-in file function and the parseyaml or parsejson stdlib functions:
Create a file at mymodule/files/myfile.yaml:
▶ cat files/myfile.yaml
---
foo: bar
Then in your manifests read it into a string and parse it:
$myhash = parseyaml(file('mymodule/myfile.yaml'))
notice($myhash)
That will output:
Notice: Scope(Class[mymodule]): {foo => bar}
Or, using the loadyaml or loadjson stdlib functions:
$myhash = loadyaml('/etc/puppet/data/myfile.yaml')
notice($myhash)
The problem with that approach is that you need to know the path to file on the Puppet master. Or, you could use a Puppet 6 deferred function and read the data from a file on the agent node.
(Whether or not you should do this is another matter entirely - hint: answer is you almost certainly should be using Hiera - but that isn't the question you asked.)
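To cover the looping part of the question as well: once the data is loaded as a hash you can iterate over it. A minimal sketch, assuming a hypothetical users.yaml keyed by username with per-user attributes:

$users = loadyaml('/etc/puppet/data/users.yaml')

$users.each |String $username, Hash $attrs| {
  user { $username:
    ensure => present,
    *      => $attrs, # splat the remaining attributes (uid, shell, home, ...)
  }
}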
I currently use the following valadoc build task to generate API documentation for my Vala application:
doc = bld.new_task_gen (
    features = 'valadoc',
    output_dir = '../doc/html',
    package_name = bld.env['PACKAGE_NAME'],
    package_version = bld.env['VERSION'],
    packages = 'gtk+-3.0 gee-1.0 libxml-2.0 x11 gdk-x11-3.0 libpeas-gtk-1.0 libpeas-1.0 config xtst gdk-3.0',
    vapi_dirs = '../vapi',
    force = True)

path = bld.path.find_dir ('../src')
doc.files = path.ant_glob (incl='**/*.vala')
This task creates a directory html in the output directory, including several subdirectories with HTML and image files.
What I am now trying to do is install those files to /usr/share/doc/projectname/html/. To do so, I added the following to the wscript_build file (following the documentation I found here):
output_dir = doc.bld.path.find_or_declare('../doc/html')
doc.outputs = output_dir.ant_glob (incl='**/*')
doc.bld.install_files('${PREFIX}/share/doc/projectname/html', doc.outputs)
However, this leads to the error "Missing node signature". Does anyone know how to get around this error? Or is there a simple way to install a directory recursively with waf?
You can find a full-fledged sample here.
I had a similar issue with generated files and had to update the signature for the corresponding Node objects. Try creating a task:
from waflib import Utils

def signature_task(task):
    for x in task.generator.bld.path.find_dir('../doc/html').ant_glob('**/*', remove=False):
        x.sig = Utils.h_file(x.abspath())
At the top of your build rule, try adding:
from waflib import Build

# Support running task groups serially
bld.post_mode = Build.POST_LAZY
Then at the end of your build, add:
#Previous tasks belong to a group
bld.add_group()
#This task runs last
bld(rule=signature_task, always=True, name="signature_task")
There is an easier way using relative_trick.
bld.install_files(destination,
                  bld.path.ant_glob('../doc/html/**'),
                  cwd=bld.path.find_dir('../doc/html'),
                  relative_trick=True)
This gets the list of files from the glob, chops off the prefix, and puts them into the destination.