Terraform: set max_buckets in Elasticsearch

I need to set max_buckets on an AWS Elasticsearch domain. So far I've tried a max_buckets key directly in the module block, but that didn't work. My next attempt was advanced_options:
module "elasticsearch" {
es_version = "6.3"
advanced_options = {
"search.max_buckets" = "123456"
}
But this causes:
Error: Unsupported argument
on elasticsearch.tf line 14, in module "elasticsearch":
14: advanced_options = {
How can I set max_buckets?

Which module are you using? The aws_elasticsearch_domain resource has the advanced_options argument.
advanced_options - Key-value string pairs to specify advanced configuration options. Note that the values for these configuration options must be strings (wrapped in quotes).
resource "aws_elasticsearch_domain" "es" {
domain_name = "${var.domain}"
elasticsearch_version = "6.3"
advanced_options = {
"rest.action.multi.allow_explicit_index" = "true"
}
}
Could you provide more details about your implementation? In your example a double-quote seems to be missing around search.max_buckets, and if you're using a module, you should also pass the source argument.
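For reference, a minimal sketch of setting this option directly on the resource (the domain name is a placeholder; AWS validates at apply time which advanced options it accepts):

resource "aws_elasticsearch_domain" "es" {
  domain_name           = "my-domain" # hypothetical name
  elasticsearch_version = "6.3"

  advanced_options = {
    # Values must be strings, so quote the number.
    "search.max_buckets" = "123456"
  }
}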

Related

How to add an additional loop over a string list within a for_each

I've set up multiple GitHub repos within my .tf configuration with a simple for_each on the github_repository resource. In addition, I tried to create several branches (one branch per env) for each newly created repo.
My first intention was to use a module (./modules/github_repo/repo.tf) which includes
locals {
  environments = var.environments
}
resource "github_branch" "branches" {
  for_each      = toset(setsubtract(local.environments, ["prod"]))
  repository    = "orgName/${var.REPO_NAME}"
  branch        = lookup(var.environment_to_branch_map, each.key, each.key)
  source_branch = "master"
}
with the following variables:
variable "REPO_NAME" {
type = string
}
variable "environments" {
type = list(string)
}
variable "environment_to_branch_map" {
type = map(any)
default = {
"prod" = "master"
"dev" = "develop"
}
called like this from main.tf:
provider "github" {
token = var.GITHUB_TOKEN
owner = "orgName"
}
locals {
environments = ["dev", "prod", "staging", "test"]
microServices = tomap({ "service1" : "fnApp", "service2" : "fnApp" })
default_branch = "master"
}
module "branches_per_microservice" {
for_each = local.microServices
source = "./modules/github_repo"
REPO_NAME = each.key
environments = local.environments
depends_on = [github_repository.microservices]
}
Unfortunately I get a 404 for each branch and repo combination, like this:
Error: Error querying GitHub branch reference /orgName/service1 (refs/heads/master): GET
https://api.github.com/repos//orgName/service1/git/ref/heads/master: 404 Not Found []
with
module.branches_per_microservice["service1"].github_branch.branches["test"]
on modules/github_repo/repo.tf line 23, in resource "github_branch" "branches":
I guess it's a "provider" thing, because if I try to create a branch directly in main.tf, it works. But the problem is that I can only use one loop within a resource. (I already know that providers are not possible in modules with count or for_each loops, as written in the Terraform docs.)
resource "github_branch" "branches" {
for_each = toset(setsubtract(local.environments, ["prod"]))
repository = github_repository.microservices["service1"].name
branch = lookup(var.environment_to_branch_map, each.key, each.key)
source_branch = "master"
}
In this case I would have to create a resource for each microservice manually, which I really want to avoid... Are there any ideas how I could "nest" the second loop over the environments to create my branches for each microservice repo?
Many thanks in advance for any hint, idea, or approach here...
A nested loop can be replaced with a single loop over the setproduct of two sets. The documentation for setproduct can be found here:
https://www.terraform.io/language/functions/setproduct
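A minimal sketch of that approach, assuming repo and environment names like those above: setproduct yields every [repo, environment] pair, which can be keyed into a map for a single for_each.

locals {
  repos        = ["service1", "service2"]
  environments = setsubtract(["dev", "prod", "staging", "test"], ["prod"])

  # Build a map with one entry per repo/environment pair so for_each
  # gets stable, unique keys.
  repo_env_pairs = {
    for pair in setproduct(local.repos, local.environments) :
    "${pair[0]}-${pair[1]}" => { repo = pair[0], env = pair[1] }
  }
}

resource "github_branch" "branches" {
  for_each      = local.repo_env_pairs
  repository    = each.value.repo
  branch        = each.value.env
  source_branch = "master"
}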

Is there a way, inside a terraform script, to retrieve the latest version of a layer?

I have Lambdas that reference a layer. This layer is maintained by someone else, and when a new version is released I have to update my Terraform code to put the latest version in the ARN (here 19).
Is there a way, in the Terraform script, to get the latest version and use it?
module "lambda_function" {
source = "terraform-aws-modules/lambda/aws"
function_name = "my-lambda1"
description = "My awesome lambda function"
handler = "index.lambda_handler"
runtime = "python3.8"
source_path = "../src/lambda-function1"
tags = {
Name = "my-lambda1"
}
layers = [
"arn:aws:lambda:eu-central-1:587522145896:layer:my-layer-name:19"
]
}
Thanks.
PS: this means the layer's Terraform script is not part of mine; it's another script that I don't have access to.
You can use the aws_lambda_layer_version data source to discover the latest version.
For example:
module "lambda_function" {
source = "terraform-aws-modules/lambda/aws"
function_name = "my-lambda1"
description = "My awesome lambda function"
handler = "index.lambda_handler"
runtime = "python3.8"
source_path = "../src/lambda-function1"
tags = {
Name = "my-lambda1"
}
layers = [
data.aws_lambda_layer_version.layer_version.arn
]
}
data "aws_lambda_layer_version" "layer_version" {
layer_name = "my-layer-name"
}

terraform apply InvalidParameterException: The following supplied instance types do not exist: [m4.large]

I have the below cluster.tf file in my EC2 instance (type: t3.micro):
locals {
  cluster_name = "my-eks-cluster"
}
module "vpc" {
  source     = "git::https://git@github.com/reactiveops/terraform-vpc.git?ref=v5.0.1"
  aws_region = "eu-north-1"
  az_count   = 3
  aws_azs    = "eu-north-1a, eu-north-1b, eu-north-1c"
  global_tags = {
    "kubernetes.io/cluster/${local.cluster_name}" = "shared"
  }
}
module "eks" {
  source          = "git::https://github.com/terraform-aws-modules/terraform-aws-eks.git?ref=v16.1.0"
  cluster_name    = local.cluster_name
  cluster_version = "1.17"
  vpc_id          = module.vpc.aws_vpc_id
  subnets         = module.vpc.aws_subnet_private_prod_ids
  node_groups = {
    eks_nodes = {
      desired_capacity = 3
      max_capacity     = 3
      min_capacity     = 3
      instance_type    = "t3.micro"
    }
  }
  manage_aws_auth = false
}
But when I run terraform apply I get this exception:
Error: error creating EKS Node Group (my-eks-cluster/my-eks-cluster-eks_nodes-divine-pig): InvalidParameterException: The following supplied instance types do not exist: [m4.large]
I tried googling it but couldn't find a solution...
I haven't worked with AWS modules before, but in modules/node_groups on that GitHub repo it looks like you may need to set node_groups_defaults.
The reason is that the "If unset" column for the instance type row says that the value in var.workers_group_defaults[instance_type] will be used.
That default value is located in the root local.tf and is m4.large, so maybe that instance type is not supported in your AWS region?
I'm not sure how to fix this completely, but it may help you start troubleshooting.
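For illustration, a sketch of overriding that default (node_groups_defaults and instance_types are the names used by recent versions of that module; verify them against your pinned version):

module "eks" {
  source          = "git::https://github.com/terraform-aws-modules/terraform-aws-eks.git?ref=v16.1.0"
  cluster_name    = local.cluster_name
  cluster_version = "1.17"
  vpc_id          = module.vpc.aws_vpc_id
  subnets         = module.vpc.aws_subnet_private_prod_ids

  # Override the m4.large fallback with a type available in eu-north-1.
  node_groups_defaults = {
    instance_types = ["t3.micro"]
  }

  node_groups = {
    eks_nodes = {
      desired_capacity = 3
      max_capacity     = 3
      min_capacity     = 3
    }
  }
  manage_aws_auth = false
}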

Create random variable in Terraform and pass it to GCE startup script

I want to run a metadata_startup_script when using Terraform to create a GCE instance.
This script is supposed to create a user and assign this user a random password.
I know that I can create a random string in Terraform with something like:
resource "random_string" "pass" {
length = 20
}
And my startup.sh will at some point contain a line like:
echo myuser:${PSSWD} | chpasswd
How can I chain the random_string resource generation with the appropriate script invocation through the metadata_startup_script parameter?
Here is the google_compute_instance resource definition:
resource "google_compute_instance" "coreos-host" {
name = "my-vm"
machine_type = "n1-stantard-2"
zone = "us-central1-a"
boot_disk {
initialize_params {
image = "debian-cloud/debian-9"
size = 20
type = "pd-standard"
}
}
network_interface {
network = "default"
access_config {
network_tier = "STANDARD"
}
}
metadata_startup_script = "${file("./startup.sh")}"
}
where startup.sh includes the above line setting the password non-interactively.
If you want to pass a Terraform variable into a templated file then you need to use a template.
In Terraform <0.12 you'll want to use the template_file data source like this:
resource "random_string" "pass" {
length = 20
}
data "template_file" "init" {
template = "${file("./startup.sh")}"
vars = {
password = "${random_string.pass.result}"
}
}
resource "google_compute_instance" "coreos-host" {
name = "my-vm"
machine_type = "n1-stantard-2"
zone = "us-central1-a"
boot_disk {
initialize_params {
image = "debian-cloud/debian-9"
size = 20
type = "pd-standard"
}
}
network_interface {
network = "default"
access_config {
network_tier = "STANDARD"
}
}
metadata_startup_script = "${data.template_file.startup_script.rendered}"
}
and change your startup.sh script to be:
echo myuser:${password} | chpasswd
Note that the template uses ${} for interpolation of variables that Terraform is passing into the script. If you need to use $ anywhere else in your script then you'll need to escape it by using $$ to get a literal $ in your rendered script.
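For example, a line that should reach the rendered script as a literal shell variable would be written with the doubled form (a hypothetical line):

echo "home is $${HOME}" # rendered into the final script as: echo "home is ${HOME}"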
In Terraform 0.12+ there is the new templatefile function which can be used instead of the template_file data source if you'd prefer:
resource "random_string" "pass" {
length = 20
}
resource "google_compute_instance" "coreos-host" {
name = "my-vm"
machine_type = "n1-stantard-2"
zone = "us-central1-a"
boot_disk {
initialize_params {
image = "debian-cloud/debian-9"
size = 20
type = "pd-standard"
}
}
network_interface {
network = "default"
access_config {
network_tier = "STANDARD"
}
}
metadata_startup_script = templatefile("./startup.sh", {password = random_string.pass.result})
}
As an aside, you should also notice the warning on random_string:
This resource does use a cryptographic random number generator.
Historically this resource's intended usage has been ambiguous as the original example used it in a password. For backwards compatibility it will continue to exist. For unique ids please use random_id, for sensitive random values please use random_password.
As such you should instead use the random_password resource:
resource "random_password" "password" {
length = 16
special = true
override_special = "_%#"
}
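With that change, the templatefile call from the example above would simply reference the new resource:

metadata_startup_script = templatefile("./startup.sh", { password = random_password.password.result })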

Custom protoc plugin parsing not working for custom options

I am trying to write a protoc plugin that requires me to use custom options. I defined my custom option as shown in the example (https://developers.google.com/protocol-buffers/docs/proto#customoptions):
import "google/protobuf/descriptor.proto";
extend google.protobuf.MessageOptions {
string my_option = 51234;
}
I use it as follows:
message Hello {
  required bool greeting = 1;
  required string name = 2;
  optional int32 number = 3;

  option (my_option) = "telephone";
}
However, when I read the parsed request, the options field is empty for the "Hello" message.
I am doing the following to read it:
import sys
from google.protobuf.compiler import plugin_pb2 as plugin

data = sys.stdin.buffer.read()  # ParseFromString expects bytes
request = plugin.CodeGeneratorRequest()
request.ParseFromString(data)
When I print request, it just gives me this:
message_type {
  name: "Hello"
  field {
    name: "greeting"
    number: 1
    label: LABEL_REQUIRED
    type: TYPE_BOOL
    json_name: "greeting"
  }
  field {
    name: "name"
    number: 2
    label: LABEL_REQUIRED
    type: TYPE_STRING
    json_name: "name"
  }
  field {
    name: "number"
    number: 3
    label: LABEL_OPTIONAL
    type: TYPE_INT32
    json_name: "number"
  }
  options {
  }
}
As seen, the options field is empty even though I defined options in my .proto file. Is my syntax incorrect for defining custom options? Or could it be a problem with my version of protoc?
I'm writing my own protobuf Python plugin. I also ran into this problem and found a solution:
1. Put your custom options in a file, my_custom.proto.
2. Use protoc to generate a Python file from my_custom.proto => my_custom_pb2.py.
3. In your Python plugin code, import my_custom_pb2.
Turns out you need to have the _pb2.py file imported for the .proto file in which the custom option is defined. For example, if you are parsing a file (using ParseFromString) called example.proto which uses a custom option defined in option.proto, you must import option_pb2.py in the Python file that calls ParseFromString.
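Putting both answers together, a minimal sketch of the plugin side (the names my_custom.proto / my_custom_pb2 follow the steps above; the import must happen before ParseFromString so the extension is registered when the request is parsed):

import sys

from google.protobuf.compiler import plugin_pb2 as plugin

# Generated from my_custom.proto; importing it registers the my_option
# extension with the descriptor pool used during parsing.
import my_custom_pb2

data = sys.stdin.buffer.read()
request = plugin.CodeGeneratorRequest()
request.ParseFromString(data)

for proto_file in request.proto_file:
    for message in proto_file.message_type:
        options = message.options
        # With the extension registered, the option is populated.
        if options.HasExtension(my_custom_pb2.my_option):
            value = options.Extensions[my_custom_pb2.my_option]
            print(f"{message.name}: {value}", file=sys.stderr)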