replace code block in terraform file - bash

I am running a build on my Jenkins server and I want to dynamically populate the git_commit field with the commit hash from the current build. The file has multiple modules in it, and I want to use sed to match the module named core-lambda-function1 and update its git_commit field with the commit from the current build. Any help is appreciated. Thanks.
module "core-lambda-function1" {
source = "./lambda"
name = "core-lambda-function"
runtime = "nodejs6.10"
role = "${aws_iam_role.iam_role_for_lambda.arn}"
filename = "../Archive.zip"
source_code_hash = "${base64sha256(file("../Archive.zip"))}"
source_dir = "../"
git_commit = ""
}
module "core-lambda-function2" {
source = "./lambda"
name = "core-lambda-function"
runtime = "nodejs6.10"
role = "${aws_iam_role.iam_role_for_lambda.arn}"
filename = "../Archive.zip"
source_code_hash = "${base64sha256(file("../Archive.zip"))}"
source_dir = "../"
git_commit = ""
}
This is what I currently have:
#!/bin/bash
set -e

while read p; do
  NAME=$p
  GIT_COMMIT=`git rev-parse HEAD`
  echo $NAME | grep `xargs` main.tf -A 7 | sed -ri '7s/git_commit = ""/git_commit\ = \"'$GIT_COMMIT'"/g'
done < build_name
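For what it's worth, sed can also scope the substitution to a single module block with a range address instead of counting lines; a sketch, assuming GNU sed (for -i) and the main.tf layout shown above:

GIT_COMMIT=$(git rev-parse HEAD)

# Only touch the block that starts at the matching module line and
# ends at the next closing brace in column one.
sed -i '/module "core-lambda-function1"/,/^}/ s/git_commit = ""/git_commit = "'"$GIT_COMMIT"'"/' main.tf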

Why not Input Variables in Terraform?
variable "git_commit" {}
module "core-lambda-function1" {
source = "./lambda"
name = "core-lambda-function"
runtime = "nodejs6.10"
role = "${aws_iam_role.iam_role_for_lambda.arn}"
filename = "../Archive.zip"
source_code_hash = "${base64sha256(file("../Archive.zip"))}"
source_dir = "../"
git_commit = "${var.git_commit}" #### Use variable here.
}
So in your wrapper script, you can update to:
#!/bin/bash
set -e
GIT_COMMIT=$(git rev-parse HEAD)
terraform plan -var "git_commit=${GIT_COMMIT}" ...
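Alternatively, Terraform also reads variables from TF_VAR_-prefixed environment variables, which keeps the Jenkins step free of -var flags; a minimal sketch, assuming the variable keeps the name git_commit:

#!/bin/bash
set -e

# Terraform maps TF_VAR_git_commit onto var.git_commit automatically,
# so no -var flag is needed on the command line.
export TF_VAR_git_commit="$(git rev-parse HEAD)"
terraform plan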

Related

GNU make read variable from a file and present to choose on prompt

I have a config file that contains multiple blocks like this:
locals {
  env = {
    ns-marcus = {
      name = "marcus"
      vpc_name = "vpc-marcus"
      subnet1_name = "Subnet1-Marcus"
      subnet2_name = "Subnet2-Marcus"
      route_table1_name = "route-table-marcus"
      igw_name = "igw-marcus"
    }
    ns-phil = {
      name = "phil"
      vpc_name = "vpc-phil"
      subnet1_name = "Subnet1-phil"
      subnet2_name = "Subnet2-phil"
      route_table1_name = "route-table-phil"
      igw_name = "igw-phil"
    }
This list continues to grow.
I configured a Makefile that statically lists the users and asks for one at a prompt; after the text input, the commands are executed:
all:
	@echo "WORKSPACES AVAILABLE:"
	@echo "marcus"
	@echo "phil"

apply: all
	@read -p "Enter Workspace Name: " workspace; \
	terraform workspace select $$workspace
The issue I am now having is that if I add another user (user3) to my config file user.tf, I have to manually edit the Makefile to include another @echo line for that user.
Ideally I would like to read all the configured users from user.tf and present them at the prompt.
You can just use something like awk to extract the workspace names from your config file:
all:
	@echo "WORKSPACES AVAILABLE"
	@awk '/^user/ {print $$1}' config.file

apply: all
	@read -p "Enter Workspace Name: " workspace; \
	terraform workspace select $$workspace
MYVAR := $(shell awk '{for (i=1;i<=NF;i++) if ($$i == "name") {printf "%s\\n\n",$$3}}' workspaces.tf)

all:
	@echo "WORKSPACES AVAILABLE"
	@echo $(MYVAR)

This did it!
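For reference, the same extraction works outside of make as well; a small sketch that also strips the surrounding quotes, assuming the file keeps the name = "value" layout shown above:

# Print the value of every `name = "..."` attribute in workspaces.tf,
# with the surrounding double quotes removed.
awk -F'=' '/^[[:space:]]*name[[:space:]]*=/ { gsub(/[" ]/, "", $2); print $2 }' workspaces.tf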

Templatefile and Bash script

I need to be able to run a bash script as user data for a launch template, and this is how I try to do it:
resource "aws_launch_template" "ec2_launch_template" {
name = "ec2_launch_template"
image_id = data.aws_ami.latest_airbus_ami.id
instance_type = var.instance_type[terraform.workspace]
iam_instance_profile {
name = aws_iam_instance_profile.ec2_profile.name
}
vpc_security_group_ids = [data.aws_security_group.default-sg.id, aws_security_group.allow-local.id] # the second parameter should be according to the user
monitoring {
enabled = true
}
block_device_mappings {
device_name = "/dev/sda1"
ebs {
volume_size = 30
encrypted = true
volume_type = "standard"
}
}
tags = {
Name = "${var.app_name}-${terraform.workspace}-ec2-launch-template"
}
#user_data = base64encode(file("${path.module}/${terraform.workspace}-script.sh")) # change the base encoder as well
user_data = base64encode(templatefile("${path.module}/script.sh", {app_name = var.app_name, env = terraform.workspace, high_threshold = var.high_threshold, low_threshold = var.low_threshold})) # change the base encoder as well
}
As you can see, I pass the parameters as a map to the templatefile function, and I managed to retrieve them by doing this:
#!/bin/bash -xe
# Activate logs for everything
exec > >(tee /var/log/user-data.log|logger -t user-data -s 2>/dev/console) 2>&1
# Retrieve variables from Terraform
app_name = ${app_name}
environment = ${env}
max_memory_perc= ${high_threshold}
min_memory_perc= ${low_threshold}
instance_id=$(wget -q -O - http://169.254.169.254/latest/meta-data/instance-id)
ami_id=$(wget -q -O - http://169.254.169.254/latest/meta-data/ami-id)
instance_type=$(wget -q -O - http://169.254.169.254/latest/meta-data/instance-type)
scale_up_name=$${app_name}"-"$${environment}"-scale-up"
scale_down_name=$${app_name}"-"$${environment}"-scale-down"
Then, when I look at the launch template in the AWS console, I can see that the parameter values are filled in:
app_name = test-app
environment = prod
max_memory_perc= 80
min_memory_perc= 40
The problem I have is that when I run it, I get this error:
+ app_name = test-app
/var/lib/cloud/instances/scripts/part-001: line 7: app_name: command not found
I assume there is a problem with interpretation or something like that, but I cannot put my finger on it.
Any ideas?
Thanks
As they said, it was a problem with the spaces; it's fixed now.
Thanks
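For anyone landing on the same error: bash parses app_name = test-app as a command called app_name followed by arguments, so the fix is simply to drop the spaces around "=". A corrected sketch of the variable section, using the same template placeholders as above:

#!/bin/bash -xe
# Variables rendered by Terraform's templatefile(); note: no spaces around "="
app_name="${app_name}"
environment="${env}"
max_memory_perc="${high_threshold}"
min_memory_perc="${low_threshold}"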

Terraform failing to render with invalid character

I have a Terraform script that deploys a Linux VM into Azure; snippet below:
data "template_file" "setup_script" {
template = file("myscript.sh")
}
resource "azurerm_linux_virtual_machine" "myterraformvm" {
name = var.vmname
location = var.zone
resource_group_name = azurerm_resource_group.myresourcegroup.name
network_interface_ids = [azurerm_network_interface.myterraformnic.id]
size = "Standard_DS1_v2"
os_disk {
name = "myOsDisk"
caching = "ReadWrite"
storage_account_type = "Premium_LRS"
}
source_image_reference {
publisher = "Canonical"
offer = "UbuntuServer"
sku = "16.04-LTS"
version = "latest"
}
computer_name = var.vmname
admin_username = "myusername"
disable_password_authentication = true
custom_data = base64encode(data.template_file.setup_script.rendered)
tags = {
environment = var.envname
}
}
On boot I want to run the script myscript.sh, which comes from the template_file data source. It looks like this:
#!/bin/bash
REMOTEHOST=8.8.8.8
REMOTEPORT=(22 80 443)
TIMEOUT=1

for i in "${REMOTEPORT[#]}"; do
  if nc -w 1 -z 8.8.8.8 $i; then
    echo "I was able to connect via $i" >> /tmp/output.txt
  else
    echo "Connection failed on $i. Exit code from Netcat was ($?)." >> /tmp/output.txt
  fi
done
When I run terraform apply, I get the following error:
fatal: [localhost]: FAILED! => changed=false
msg: |-
Terraform plan could not be created
STDOUT:
STDERR:
Error: failed to render : <template_file>:6,24-25: Invalid character; This character is not used within the language., and 1 other diagnostic(s)
on main.tf line 134, in data "template_file" "setup_script":
134: data "template_file" "setup_script" {
The bash script works fine if I run it locally, and it works in the Terraform deployment if I remove the for loop/'#' character and just do a static run. Is there a way to loop over an array in a bash file and deploy it on azurerm_linux_virtual_machine?
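No answer is recorded for this one, but the diagnostic is consistent with Terraform's template engine trying to parse ${REMOTEPORT[...]} as one of its own interpolation sequences. Literal shell expansions of that form have to be escaped as $${...} inside a rendered template; a sketch of the loop with that escaping (and the usual bash [@] subscript), assuming the rest of the script is unchanged:

# $${...} renders as a literal ${...} in the final script, so Terraform
# no longer tries to interpolate the bash array expansion.
for i in "$${REMOTEPORT[@]}"; do
  if nc -w 1 -z 8.8.8.8 $i; then
    echo "I was able to connect via $i" >> /tmp/output.txt
  else
    echo "Connection failed on $i. Exit code from Netcat was ($?)." >> /tmp/output.txt
  fi
done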

Jenkins pipeline: I need to execute a shell command and use the result as the value of a def variable. What shall I do? Thank you

In a Jenkins pipeline I need to execute a shell command and capture its result as the value of a def variable.
What shall I do? Thank you
def projectFlag = sh("`kubectl get deployment -n ${namespace}| grep ${project} | wc -l`")
//
if ( "${projectFlag}" == 1 ) {
    def projectCI = sh("`kubectl get deployment ${project} -n ${namespace} -o jsonpath={..image}`")
    echo "$projectCI"
} else if ( "$projectCI" == "${imageTag}" ) {
    sh("kubectl delete deploy ${project} -n ${namespaces}")
    def redeployFlag = '1'
    echo "$redeployFlag"
    if ( "$projectCI" != "${imageTag}" ){
        sh("kubectl set image deployment/${project} ${appName}=${imageTag} -n ${namespaces}")
    }
    else {
        def redeployFlag = '2'
    }
I believe you're asking how to save the result of a shell command to a variable for later use?
The way to do this is to use some optional parameters available on the shell step interface. See https://jenkins.io/doc/pipeline/steps/workflow-durable-task-step/#sh-shell-script for the documentation
def projectFlag = sh(
    returnStdout: true,
    script: "kubectl get deployment -n ${namespace} | grep ${project} | wc -l"
).trim()
Essentially, set returnStdout to true. The .trim() is critical for ensuring you don't pick up a \n newline character, which would ruin your evaluation logic.

Using Pashua with bash

I'm using a bash script in OSX with Pashua to create a GUI popup
PASH="/Applications/Pashua.app/Contents/MacOS/Pashua"
CONF="/Users/user1/desktop/pashconf.pash"
$PASH $CONF
The config file is:
tb.type = textbox
tb.default = Line 1[return]Line 2[return]Line 3
tb.width = 300
tb.height = 60
tx.type = textfield
tx.label = Example textfield
tx.default = Textfield content
tx.width = 310
This outputs:
tx=Textfield content
tb=Line 1[return]Line 2[return]Line 3
But I'd like to use all the variables and arrays in the output as bash variables. What's the best method for doing this?
Also, is it possible to put the config code inside the bash script?
Many thanks
This should do the trick. It stores a single returned value as a variable and contains the conf information in the script.
# App path
PASH="/Applications/Pashua.app/Contents/MacOS/Pashua"
# Conf file path
CONF="/tmp/conf.pash"

# Recreate the temp conf file
rm -f "$CONF"
cat <<EOT >> "$CONF"
tf.type = textfield
tf.label = Example textfield
tf.default = Textfield content
tf.width = 310
tf.tooltip = This is an element of type “textfield”
EOT

# Run Pashua and store the variable
VAR=$("$PASH" "$CONF" | cut -d '=' -f2)
echo "$VAR"
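And if you want every key=value pair Pashua prints as its own bash variable (the first part of the question), one approach is to read its output line by line and declare each pair; a sketch, assuming the output keeps the key=value form shown above:

# Turn each "key=value" line of Pashua's output into a bash variable
# named after the element id (tf in the conf above).
while IFS='=' read -r key value; do
    declare "$key=$value"
done < <("$PASH" "$CONF")

echo "$tf"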
