Unable to pass environment variables to the create_function AWS SDK method in Ruby

I'm trying to execute the following Ruby code and it constantly fails with an "unexpected value at params[:environment]" error. I tried many different ways of passing a Hash to the 'environment' parameter, but each triggers the same error.
require 'aws-sdk'

client = Aws::Lambda::Client.new(region: 'us-east-1')

args = {
  role: "some_role",
  function_name: "function",
  handler: "function_handler",
  runtime: "java8",
  code: { zip_file: File.binread("file.jar") },
  environment: { variables: { "AAA": "BBB" } }
}

client.create_function(args)

Fixed by upgrading the Ruby AWS SDK from 2.6 to 2.9. The 2.6 gem predates Lambda environment variable support, so its client-side parameter validation does not recognize the :environment key and rejects it with exactly this error.
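If the project uses Bundler, pinning the SDK keeps it from regressing below a working version. A minimal sketch, assuming a Gemfile (the exact bound is illustrative; any v2 release from 2.9 up accepts the :environment parameter):

# Gemfile
gem 'aws-sdk', '~> 2.9'

followed by bundle update aws-sdk.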

Related

How to update config based on environment for middleman s3_sync?

I'm trying to push Slate docs to two different S3 buckets based on the environment, but it's complaining that s3_sync is not a parameter for Middleman.
I have set the S3 bucket for the environment in config.rb, but I still get the error below when I run bundle exec middleman s3_sync --verbose --environment=internal
config.rb:
configure :internal do
  s3_sync.bucket = ENV['INTERNAL_DOCS_AWS_BUCKET'] # The name of the internal docs S3 bucket you are targeting. This is globally unique.
end

activate :s3_sync do |s3_sync|
  s3_sync.bucket = ENV['DOCS_AWS_BUCKET'] # The name of the S3 bucket you are targeting. This is globally unique.
  s3_sync.region = ENV['DOCS_AWS_REGION'] # The AWS region for your bucket.
  s3_sync.aws_access_key_id = ENV['DOCS_AWS_ACCESS_KEY_ID']
  s3_sync.aws_secret_access_key = ENV['DOCS_AWS_SECRET_ACCESS_KEY']
  s3_sync.prefer_gzip = true
  s3_sync.path_style = true
  s3_sync.reduced_redundancy_storage = false
  s3_sync.acl = 'public-read'
  s3_sync.encryption = false
  s3_sync.prefix = ''
  s3_sync.version_bucket = false
  s3_sync.index_document = 'index.html'
  s3_sync.error_document = '404.html'
end
Error:
bundler: failed to load command: middleman (/usr/local/bundle/bin/middleman)
NameError: undefined local variable or method `s3_sync' for #<Middleman::ConfigContext:0x0000561eca099a40>
s3_sync is only defined within the block passed to activate :s3_sync; it is undefined inside the configure :internal block.
A solution might look like the following, using the environment? helper to pick the bucket inside the activate block:
activate :s3_sync do |s3_sync|
  s3_sync.bucket = if environment?(:internal)
                     ENV['INTERNAL_DOCS_AWS_BUCKET']
                   else
                     ENV['DOCS_AWS_BUCKET']
                   end
  s3_sync.region = ENV['DOCS_AWS_REGION']
  # ...
end
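With that change in place, the original invocation should select the internal bucket, and any other environment falls back to the default bucket (assuming both environment variables are set):

bundle exec middleman s3_sync --verbose --environment=internal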

Ruby POST returns 404 URL Not Found while curl works fine

I'm trying to write some Ruby code to update GitLab CI/CD variables using the REST update variable endpoint. When I perform a curl with the same path, the same private token, and the same --form data, it updates the variable as expected. When I use the Ruby code I put together from reading Stack Overflow and the Net::HTTP docs, it fails with a 404 URL not found.
I can use a similar piece of code to create a new CI/CD variable successfully. I can also delete an existing variable and re-create it, but I would like to know the mistake I am making in the update call.
Can someone point out what I did wrong?
#!/usr/bin/env ruby
require 'net/http'
require 'uri'

token = File.read(__dir__ + '/.gitlab-token').chomp
host = 'https://gitlab.com/'
variables_path = 'api/v4/projects/123456/variables'
env_var = 'MY_VAR'
update_uri = URI(host + variables_path + '/' + env_var)
# I've written the above this way because my actual code
# has a delete and create in order to "update" the variable

response = Net::HTTP.start(update_uri.host, update_uri.port, use_ssl: true) do |http|
  update_request = Net::HTTP::Post.new(update_uri)
  update_request['PRIVATE-TOKEN'] = token
  form_data = [
    ['value', 'a new value']
  ]
  update_request.set_form(form_data, 'multipart/form-data')
  response = http.request(update_request)
  response.body
end
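A likely cause, assuming the working curl call used --request PUT: GitLab documents updating a variable as PUT /projects/:id/variables/:key, so the POST above hits a route GitLab does not define and it answers 404. A sketch of the same request sent as a PUT, reusing the variables defined in the script:

response = Net::HTTP.start(update_uri.host, update_uri.port, use_ssl: true) do |http|
  update_request = Net::HTTP::Put.new(update_uri) # PUT, not POST
  update_request['PRIVATE-TOKEN'] = token
  update_request.set_form([['value', 'a new value']], 'multipart/form-data')
  http.request(update_request)
end
puts response.code # expect 200 on success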

Terraform creating lambda before zip is ready

I'm trying to create a Terraform module that will build my JS lambdas, zip them, and deploy them. This, however, proves to be problematic:
resource "null_resource" "build_lambda" {
count = length(var.lambdas)
provisioner "local-exec" {
command = "mkdir tmp"
working_dir = path.root
}
provisioner "local-exec" {
command = var.lambdas[count.index].code.build_command
working_dir = var.lambdas[count.index].code.working_dir
}
}
data "archive_file" "lambda_zip" {
count = length(var.lambdas)
type = "zip"
source_dir = var.lambdas[count.index].code.working_dir
output_path = "${path.root}/tmp/${count.index}.zip"
depends_on = [
null_resource.build_lambda
]
}
/*******************************************************
* Lambda definition
*******************************************************/
resource "aws_lambda_function" "lambda" {
count = length(var.lambdas)
filename = data.archive_file.lambda_zip[count.index].output_path
source_code_hash = filebase64sha256(data.archive_file.lambda_zip[count.index].output_path)
function_name = "${var.application_name}-${var.lambdas[count.index].name}"
description = var.lambdas[count.index].description
handler = var.lambdas[count.index].handler
runtime = var.lambdas[count.index].runtime
role = aws_iam_role.iam_for_lambda.arn
memory_size = var.lambdas[count.index].memory_size
depends_on = [aws_iam_role_policy_attachment.lambda_logs, aws_cloudwatch_log_group.log_group, data.archive_file.lambda_zip]
}
The property source_code_hash = filebase64sha256(data.archive_file.lambda_zip[count.index].output_path) is technically not obligatory, but it is necessary: without it, an existing lambda will never be overridden, because Terraform will think it is still the same version and skip the deployment altogether. Unfortunately, filebase64sha256 is evaluated before any resource is created, so there is no zip to hash yet and I get the error:
Error: Error in function call
on modules\api-gateway-lambda\main.tf line 35, in resource "aws_lambda_function" "lambda":
35: source_code_hash = filebase64sha256(data.archive_file.lambda_zip[count.index].output_path)
|----------------
| count.index is 0
| data.archive_file.lambda_zip is tuple with 1 element
Call to function "filebase64sha256" failed: no file exists at tmp\0.zip.
If I manually place a zip in the right location, the whole thing starts working and the zip eventually gets overwritten by a new one, but the hash in that case must come from the previous zip.
What is the right way to execute the whole thing in the right order?
The archive_file data source has its own output_base64sha256 attribute which can give you that same result without asking Terraform to read a file that doesn't exist yet:
source_code_hash = data.archive_file.lambda_zip[count.index].output_base64sha256
The data source will populate this at the same time it creates the file, and because your lambda function depends on the data source, the hash will always be available before the lambda function configuration is evaluated.
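Applied to the module above, only that one argument changes; a sketch of the relevant lines (everything else stays exactly as in the original resource):

resource "aws_lambda_function" "lambda" {
  count            = length(var.lambdas)
  filename         = data.archive_file.lambda_zip[count.index].output_path
  # Hash computed by the archive_file data source itself, available as soon as the zip is built:
  source_code_hash = data.archive_file.lambda_zip[count.index].output_base64sha256
  # ... remaining arguments unchanged ...
}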

Terraform environments and updating a single resource

I am using Terraform for continuous deployment of Lambda functions. The Lambda module creates the function and the initial aliases [DEV, QA, PROD]. When a change is made, the source_code_hash is updated and Terraform updates the code. The challenge is that when I want to update the alias from DEV to QA, it updates the entire stack. The code is below; your help is appreciated.
$ cat main.tf
module "sample" {
  source           = "./lambda"
  name             = "sample"
  runtime          = "nodejs6.10"
  role             = "${aws_iam_role.iam_role_for_lambda.arn}"
  filename         = "../Archive.zip"
  source_code_hash = "${base64sha256(file("../Archive.zip"))}"
  source_dir       = "../sample"
  alias            = "${var.env_name}"
}
$ cat module/main.tf
resource "aws_lambda_function" "lambda" {
  filename         = "${var.filename}"
  function_name    = "${var.name}"
  role             = "${var.role}"
  handler          = "${var.name}.${var.handler}"
  runtime          = "${var.runtime}"
  source_code_hash = "${data.archive_file.lambda_zip.output_base64sha256}"
  publish          = "true"
}

resource "aws_lambda_alias" "lambda_alias" {
  count            = "2"
  name             = "${element(var.alias, count.index)}"
  #name            = "${var.alias}"
  description      = "${var.name}"
  function_name    = "${aws_lambda_function.lambda.arn}"
  function_version = "${aws_lambda_function.lambda.version}"
}
You need different tfvars and tfstate files for different environments.
Suppose you use the latest Terraform version.
Create an environment folder (for example, env) and add <env>-backend.tf and <env>.tfvars files for each environment:
$ cat main.tf
terraform {
  required_version = ">= 0.9.1"

  backend "s3" {
    encrypt = "true"
  }
}

$ cat env/dev-backend.tf
bucket     = "terraform-<change-to-s3-global-unique-id>"
key        = "terraform/dev/terraform.tfstate"
kms_key_id = "xxxx-xxxx-xxxx-xxxx"

$ cat env/dev.tfvars
env_name = "dev"
Do the same for the qa and prod environments.
Then you should be able to run the commands below to get the same Lambda Terraform code working in different environments:
rm -rf .terraform
export env="dev"
terraform init -backend=true -backend-config=env/${env}-backend.tf
terraform plan -var-file=./env/${env}.tfvars
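To actually deploy with the selected environment's settings, apply with the same var file:

terraform apply -var-file=./env/${env}.tfvars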

How to create an EC2 instance through boto Python code

I used the following code, but it is not working.
requests = [conn.request_spot_instances(price=0.0034, image_id='ami-6989a659', count=1, type='one-time', instance_type='m1.micro')]
Use the following code to create an instance from the Python command line.
import boto.ec2

# Connect once, with credentials; a second connect_to_region call without
# credentials would overwrite this connection, so there must be only one.
conn = boto.ec2.connect_to_region(
    "us-west-2",
    aws_access_key_id="<aws access key>",
    aws_secret_access_key="<aws secret key>",
)
conn.run_instances(
    "<ami-image-id>",
    key_name="myKey",
    instance_type="t2.micro",
    security_groups=["your-security-group-here"],
)
To create an EC2 instance using Python on AWS, you need an "aws_access_key_id_value" and an "aws_secret_access_key_value".
You can store these variables in config.properties and write your code in a create-ec2-instance.py file.
Create config.properties and save the following code in it.
aws_access_key_id_value = 'YOUR-ACCESS-KEY-OF-THE-AWS-ACCOUNT'
aws_secret_access_key_value = 'YOUR-SECRET-KEY-OF-THE-AWS-ACCOUNT'
region_name_value = 'region'
ImageId_value = 'ami-id'
MinCount_value = 1
MaxCount_value = 1
InstanceType_value = 't2.micro'
KeyName_value = 'name-of-ssh-key'
Create create-ec2-instance.py and save the following code in it.
import boto3

def getVarFromFile(filename):
    # Load config.properties as a Python module so its assignments
    # become attributes on the global `data` object.
    import imp
    f = open(filename)
    global data
    data = imp.load_source('data', '', f)
    f.close()

getVarFromFile('config.properties')

ec2 = boto3.resource(
    'ec2',
    aws_access_key_id=data.aws_access_key_id_value,
    aws_secret_access_key=data.aws_secret_access_key_value,
    region_name=data.region_name_value
)

instance = ec2.create_instances(
    ImageId=data.ImageId_value,
    MinCount=data.MinCount_value,
    MaxCount=data.MaxCount_value,
    InstanceType=data.InstanceType_value,
    KeyName=data.KeyName_value)
print(instance[0].id)
Use the following command to execute the Python code.
python create-ec2-instance.py
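Hard-coding keys is avoidable: if credentials are already configured (for example via aws configure, which writes ~/.aws/credentials), boto3 picks them up automatically and the resource needs only a region. A minimal sketch (the region value is illustrative):

import boto3

# Credentials are resolved from the environment or ~/.aws/credentials.
ec2 = boto3.resource('ec2', region_name='us-west-2')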
