Terraform code in VS Studio not functioning

I'm trying to find the error in my code. When I run terraform apply it gives me the following errors:
Error: Unsupported argument
on jenkins yourself.tf line 9, in resource "aws_security_group" "web-node":
9: vpc_security_group_ids = ["##################"]
An argument named "vpc_security_group_ids" is not expected here.

Error: Incorrect attribute value type
on jenkins yourself.tf line 10, in resource "aws_security_group" "web-node":
Inappropriate value for attribute "tags": element "security_groups": string required.

Error: Unsupported block type
on jenkins yourself.tf line 39, in resource "aws_security_group" "web-node":
39: resource "ec2_instance" "EC2Terraform" {
Blocks of type "resource" are not expected here.
provider "aws" {
access_key = "access key"
secret_key = "secret key"
region = "us-east-1"
}
#resource Configuration for AWS
resource "aws_security_group" "web-node" {
vpc_security_group_ids = ["sg-############"]
tags = {
name = "Week4 Node"
description = "My Security Group"
security_groups = ["${aws_security_group.web-node.name}"]
}
ingress {
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
from_port = 443
to_port = 443
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
resource "ec2 instance" "EC2Terraform" {
ami = "ami-01d025118d8e760db"
instance_type = "t2.micro"
key_name ="XXXXXXXXXX"
vpc_security_group_ids = ["##################"]
tags = {
Name = "My Jenkins "
}
}
}

There are some errors in your snippet. The resource aws_security_group doesn't accept an argument called vpc_security_group_ids, as you can see in the Terraform documentation. Since you're defining an AWS security group, you don't have to provide any security group id at all; what you can do is reference the id of that security group elsewhere: aws_security_group.web-node.id. Try something like this:
provider "aws" {
access_key = "access key"
secret_key = "secret key"
region = "us-east-1"
}
#resource Configuration for AWS
resource "aws_security_group" "web-node" {
name = "Week4 Node"
description = "My Security Group"
ingress {
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
from_port = 443
to_port = 443
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
}
resource "aws_instance" "ec2terraform" {
ami = "ami-01d025118d8e760db"
instance_type = "t2.micro"
key_name = "XXXXXXXXXX"
vpc_security_group_ids = [aws_security_group.web-node.id]
tags = {
Name = "My Jenkins "
}
}

Related

How to refer to a virtual machine's IP address in a security group - ingress rule (cidr_block) using terraform

I want to allow access to the virtual machine from my IP address and from another EC2 instance's IP address, so I need to put the public IP of the EC2 instance into the SG cidr_block. I tried different options and didn't find any solution:
# cidr_blocks = "${aws_instance.name.public_ip}"
# cidr_blocks = ["${formatlist("%v/32", aws_instance.name.public_ip)}"]
# cidr_blocks = [aws_instance.name.public_ip,"32"]
# cidr_blocks = ["${aws_instance.name.public_ip}/32"]
# cidr_blocks = join("/",[aws_instance.kibana.public_ip,"32"])
# cidr_blocks = ["${aws_instance.kibana.public_ip}/32"]
SG
resource "aws_security_group" "elasticsearch_sg" {
vpc_id = aws_vpc.elastic_vpc.id
ingress {
description = "ingress rules"
cidr_blocks = [var.access_ip] # my IP
from_port = 22
protocol = "tcp"
to_port = 22
}
ingress {
description = "ingress rules2"
# cidr_blocks = "${aws_instance.kibana.public_ip}"
# cidr_blocks = [aws_instance.kibana.public_ip,"32"]
# cidr_blocks = ["${aws_instance.kibana.public_ip}/32"]
from_port = 9200
protocol = "tcp"
to_port = 9300
self = true
}
egress {
description = "egress rules"
cidr_blocks = [ "0.0.0.0/0" ]
from_port = 0
protocol = "-1"
to_port = 0
}
tags={
Name="elasticsearch_sg"
}
}
null_resource.start_es
resource "null_resource" "start_es" {
depends_on = [
null_resource.move_elasticsearch_file
]
count = 3
connection {
type = "ssh"
user = "ec2-user"
private_key = "${tls_private_key.pk.private_key_pem}"
host= aws_instance.elastic_nodes[count.index].public_ip
}
provisioner "remote-exec" {
inline = [
"sudo yum update -y",
"sudo rpm -i https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.5.1-x86_64.rpm",
"sudo systemctl daemon-reload",
"sudo systemctl enable elasticsearch.service",
# "sudo sed -i 's#-Xms1g#-Xms${aws_instance.elastic_nodes[count.index].root_block_device[0].volume_size/2}g#g' /etc/elasticsearch/jvm.options",
# "sudo sed -i 's#-Xmx1g#-Xmx${aws_instance.elastic_nodes[count.index].root_block_device[0].volume_size/2}g#g' /etc/elasticsearch/jvm.options",
"sudo rm /etc/elasticsearch/elasticsearch.yml",
"sudo cp elasticsearch.yml /etc/elasticsearch/",
"sudo systemctl start elasticsearch.service"
]
}
}
aws_instance.kibana
resource "aws_instance" "kibana" {
depends_on = [
null_resource.start_es
]
ami = "ami-04d29b6f966df1537"
instance_type = "t2.medium"
iam_instance_profile = "${aws_iam_instance_profile.test_profile.name}"
subnet_id = aws_subnet.elastic_subnet[var.az_name[0]].id
vpc_security_group_ids = [aws_security_group.kibana_sg.id]
key_name = aws_key_pair.kp.key_name
associate_public_ip_address = true
tags = {
Name = "kibana"
}
}
aws_instance.elastic_nodes
resource "aws_instance" "elastic_nodes" {
count = 3
ami = "ami-04d29b6f966df1537"
instance_type = "t2.medium"
iam_instance_profile = "${aws_iam_instance_profile.test_profile.name}"
subnet_id = aws_subnet.elastic_subnet[var.az_name[count.index]].id
vpc_security_group_ids = [aws_security_group.elasticsearch_sg.id]
key_name = aws_key_pair.kp.key_name
associate_public_ip_address = true
tags = {
Name = "elasticsearch_${count.index}"
}
}
null_resource.move_elasticsearch_file
resource "null_resource" "move_elasticsearch_file" {
depends_on = [aws_instance.elastic_nodes]
count = 3
connection {
type = "ssh"
user = "ec2-user"
private_key = "${tls_private_key.pk.private_key_pem}"
host= aws_instance.elastic_nodes[count.index].public_ip
}
provisioner "file" {
# content = data.template_file.init_elasticsearch[count.index].rendered
content = templatefile("./templates/elasticsearch_config.tpl", {
cluster_name = "cluster1"
node_name = "node_${count.index}"
node = aws_instance.elastic_nodes[count.index].private_ip
node1 = aws_instance.elastic_nodes[0].private_ip
node2 = aws_instance.elastic_nodes[1].private_ip
node3 = aws_instance.elastic_nodes[2].private_ip
})
destination = "elasticsearch.yml"
}
}

Terraform: create pem file

I'm new to Terraform.
I'm trying to write a simple Terraform configuration for AWS.
It works well: I can see the EC2 instance, the security group, and the EIP.
I want to access the instance, but I don't have a .pem file,
so it's hard for me to connect to the EC2 instance.
How do I get the .pem file?
Can anyone let me know, please?
resource "aws_key_pair" "alone_ec2" {
key_name = "alone_ec2"
public_key = file("~/.ssh/id_rsa.pub")
}
resource "aws_security_group" "alone_web" {
name = "Alone EC2 Security Group"
description = "Alone EC2 Security Group"
ingress {
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["${chomp(data.http.myip.body)}/32"]
}
ingress {
from_port = 8080
to_port = 8080
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
from_port = 443
to_port = 443
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
}
# EC2
resource "aws_instance" "web" {
ami = "ami-02de72c5dc79358c9"
instance_type = "t2.micro"
key_name = aws_key_pair.alone_ec2.key_name
vpc_security_group_ids = [
aws_security_group.alone_web.id
]
tags = {
Name = "example-webservice"
}
root_block_device {
volume_size = 30
}
}
# EIP
resource "aws_eip" "elasticip" {
instance = aws_instance.web.id
}
output "EIP" {
value = aws_eip.elasticip.public_ip
}
You can use "tls_private_key" to create the key pair, save it to your machine using a provisioner when uploading it to aws.
resource "tls_private_key" "this" {
algorithm = "RSA"
rsa_bits = 4096
}
resource "aws_key_pair" "this" {
key_name = "my-key"
public_key = tls_private_key.this.public_key_openssh
provisioner "local-exec" {
command = <<-EOT
echo "${tls_private_key.this.private_key_pem}" > my-key.pem
EOT
}
}
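If you prefer not to shell out, a local_file resource (from the hashicorp/local provider) can write the key to disk with restricted permissions so that ssh will accept it. A minimal sketch; the file name is just an example:
# Write the generated private key next to the configuration with 0400
# permissions, so ssh does not reject it as too permissive.
resource "local_file" "private_key_pem" {
  content         = tls_private_key.this.private_key_pem
  filename        = "${path.module}/my-key.pem"
  file_permission = "0400"
}
With the echo approach above you would typically still run chmod 400 my-key.pem before using the key with ssh.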

Setting up Application Load Balancer for Private Subnet EC2 instances running tomcat

I have set up a VPC with two public subnets and two private subnets. The two private subnets contain two EC2 instances, and each has a Tomcat server running on port 8080.
I have set up a load balancer (Terraform) as follows, but the health check is always failing. Can someone help me with what's wrong here?
Security Groups:
# Create Security Group for the Application Load Balancer
# terraform aws create security group
resource "aws_security_group" "alb-security-group" {
name = "ALB Security Group"
description = "Enable HTTP/HTTPS access on Port 80/443"
vpc_id = aws_vpc.OrchVPC.id
ingress {
description = "HTTP Access"
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
description = "HTTPS Access"
from_port = 443
to_port = 443
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
tags = {
Name = "ALB Security Group"
}
}
# Create Security Group for the Bastion Host aka Jump Box
# terraform aws create security group
resource "aws_security_group" "ssh-security-group" {
name = "SSH Security Group"
description = "Enable SSH access on Port 22"
vpc_id = aws_vpc.OrchVPC.id
ingress {
description = "SSH Access"
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
tags = {
Name = "SSH Security Group"
}
}
# Create Security Group for the Web Server
# terraform aws create security group
resource "aws_security_group" "webserver-security-group" {
name = "Web Server Security Group"
description = "Enable HTTP/HTTPS access on Port 80/443 via ALB and SSH access on Port 22 via SSH SG"
vpc_id = aws_vpc.OrchVPC.id
ingress {
description = "HTTP Access"
from_port = 80
to_port = 80
protocol = "tcp"
security_groups = ["${aws_security_group.alb-security-group.id}"]
}
ingress {
description = "HTTPS Access"
from_port = 443
to_port = 443
protocol = "tcp"
security_groups = ["${aws_security_group.alb-security-group.id}"]
}
ingress {
description = "HTTP/HTTPS Access"
from_port = 8080
to_port = 8080
protocol = "tcp"
security_groups = ["${aws_security_group.alb-security-group.id}"]
}
ingress {
description = "SSH Access"
from_port = 22
to_port = 22
protocol = "tcp"
security_groups = ["${aws_security_group.ssh-security-group.id}"]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
tags = {
Name = "Web Server Security Group"
}
}
Load Balancer:
# Target group for application load balancer
resource "aws_lb_target_group" "targetgroup" {
health_check {
interval = 5
path = "/"
protocol = "HTTP"
timeout = 2
healthy_threshold = 2
unhealthy_threshold = 2
}
stickiness {
type = "lb_cookie"
enabled = true
}
name = "targetgroup"
port = 8080
protocol = "HTTP"
target_type = "instance"
vpc_id = aws_vpc.OrchVPC.id
}
# Load Balancer Target Group attachment for first instance
resource "aws_lb_target_group_attachment" "myec2vm1tg1" {
target_group_arn = aws_lb_target_group.targetgroup.arn
target_id = aws_instance.myec2vm1.id
port = 8080
}
# Load Balancer Target Group attachment for second instance
resource "aws_lb_target_group_attachment" "myec2vm2tg1" {
target_group_arn = aws_lb_target_group.targetgroup.arn
target_id = aws_instance.myec2vm2.id
port = 8080
}
# Application Load Balancer
resource "aws_lb" "alb" {
name = "alb"
internal = false
load_balancer_type = "application"
security_groups = [aws_security_group.alb-security-group.id]
subnets = [aws_subnet.PublicSubnet1.id, aws_subnet.PublicSubnet2.id]
tags = {
Name = "alb"
}
timeouts {
create = "30m"
delete = "30m"
}
}
# Load Balancer Listener
resource "aws_lb_listener" "alblistener" {
load_balancer_arn = aws_lb.alb.arn
port = "80"
protocol = "HTTP"
default_action {
type = "forward"
target_group_arn = aws_lb_target_group.targetgroup.arn
}
}
Can you check in the console -> security group of those instances to ensure it has an inbound rule that allows the load balancer to make HTTP requests to your web server?
[screenshot: security group inbound rule]
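As a standalone rule, the inbound access being asked about would look roughly like the sketch below (your webserver-security-group already has an equivalent inline ingress block on port 8080, so this is only for illustration; the rule name is an example):
# Allow the ALB security group to reach Tomcat on port 8080
resource "aws_security_group_rule" "alb_to_tomcat" {
  type                     = "ingress"
  from_port                = 8080
  to_port                  = 8080
  protocol                 = "tcp"
  security_group_id        = aws_security_group.webserver-security-group.id
  source_security_group_id = aws_security_group.alb-security-group.id
}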

Terraform AWS-EC2 security groups

I'm quite busy trying to learn more about Terraform, but I'm having one problem that I have no clue how to work around or fix.
The problem is as follows: in my script I am generating an EC2 instance (AWS) with a couple of side things like an EIP and a security group from a module, which is all working fine. But I cannot figure out how to attach the security group to the machine; right now it's just being created and that's it.
The code is as follows:
data "aws_ami" "latest" {
most_recent = true
owners = [ "self"]
filter {
name = "name"
values = [ lookup(var.default_ami, var.ami) ]
}
}
module "aws_security_group" {
source = "./modules/services/Security groups/"
server_port = 443
}
resource "aws_instance" "test-ec2deployment" {
ami = data.aws_ami.latest.id
instance_type = var.instance_type
subnet_id = var.subnet_id
availability_zone = var.availability_zone
associate_public_ip_address = var.public_ip
root_block_device {
volume_type = "gp2"
volume_size = 60
delete_on_termination = true
}
tags = {
Name = "Testserver2viaTerraform"
}
}
resource "aws_eip" "ip" {
instance = aws_instance.test-ec2deployment.id
}
resource "aws_eip" "example" {
vpc = true
}
Above is the main file, and I'm loading the following module:
resource "aws_security_group" "my-webserver" {
name = "webserver"
description = "Allow HTTP from Anywhere"
vpc_id = "vpc-"
ingress {
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
tags = {
Name = "my-webserver"
Site = "my-web-site"
}
}
The last step is attaching the security group to the machine, but again, I have no clue how to do that. I've been reading several docs and tried to Google it, but I cannot seem to find the answer, or the answer does not work for me. So hopefully you guys can help me further.
Thanks for your time, much appreciated!
In the aws_security_group module you need to output the security group id by adding the following to ./modules/services/Security groups/main.tf:
output "securitygroup_id" {
value = aws_security_group.my-webserver.id
}
Then, in your main tf file, attach the security group to your instance like this:
resource "aws_network_interface_sg_attachment" "sg_attachment" {
security_group_id = module.aws_security_group.securitygroup_id
network_interface_id = aws_instance.test-ec2deployment.primary_network_interface_id
}
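Alternatively, since the instance lives in a VPC (it has a subnet_id), you can attach the group at creation time by passing the module output straight to vpc_security_group_ids on the instance. A sketch, reusing the output defined above:
resource "aws_instance" "test-ec2deployment" {
  ami                    = data.aws_ami.latest.id
  instance_type          = var.instance_type
  subnet_id              = var.subnet_id
  vpc_security_group_ids = [module.aws_security_group.securitygroup_id]
  # ... remaining arguments unchanged
}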

Terraform - Azure Windows VM winrm connection issue

I want to create a Windows Azure VM, copy some files, and run some simple commands on that VM using a Terraform script.
The problem is: I am able to create the VM, but I am not able to connect via WinRM.
provider "azurerm" {
subscription_id = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
tenant_id = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
}
resource "azurerm_virtual_network" "vnet" {
name = "cmTFVnet"
address_space = ["10.0.0.0/16"]
location = "South India"
resource_group_name = "cservice"
}
resource "azurerm_subnet" "subnet" {
name = "cmTFSubnet"
resource_group_name = "cservice"
virtual_network_name = "${azurerm_virtual_network.vnet.name}"
address_prefix = "10.0.2.0/24"
}
resource "azurerm_public_ip" "publicip" {
name = "cmTFPublicIP"
location = "South India"
resource_group_name = "cservice"
public_ip_address_allocation = "dynamic"
}
resource "azurerm_network_security_group" "nsg" {
name = "cmTFNSG"
location = "South India"
resource_group_name = "cservice"
security_rule {
name = "SSH"
priority = 340
direction = "Inbound"
access = "Allow"
protocol = "Tcp"
source_port_range = "*"
destination_port_range = "22"
source_address_prefix = "*"
destination_address_prefix = "*"
}
security_rule {
name = "winrm"
priority = 1010
direction = "Inbound"
access = "Allow"
protocol = "Tcp"
source_port_range = "*"
destination_port_range = "5985"
source_address_prefix = "*"
destination_address_prefix = "*"
}
security_rule {
name = "winrm-out"
priority = 100
direction = "Outbound"
access = "Allow"
protocol = "*"
source_port_range = "*"
destination_port_range = "5985"
source_address_prefix = "*"
destination_address_prefix = "*"
}
}
resource "azurerm_network_interface" "nic" {
name = "cmNIC"
location = "South India"
resource_group_name = "cservice"
network_security_group_id = "${azurerm_network_security_group.nsg.id}"
ip_configuration {
name = "compilerNICConfg"
subnet_id = "${azurerm_subnet.subnet.id}"
private_ip_address_allocation = "dynamic"
public_ip_address_id = "${azurerm_public_ip.publicip.id}"
}
}
resource "azurerm_virtual_machine" "vm" {
name = "cmTFVM"
location = "South India"
resource_group_name = "cservice"
network_interface_ids = ["${azurerm_network_interface.nic.id}"]
vm_size = "Standard_D2s_v3"
storage_image_reference {
id = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
}
storage_os_disk {
name = "cmOsDisk"
managed_disk_type = "Premium_LRS"
create_option = "FromImage"
}
os_profile {
computer_name = "hostname"
admin_username = "test"
admin_password = "test#123"
}
os_profile_windows_config {
enable_automatic_upgrades = "true"
provision_vm_agent ="true"
winrm = {
protocol = "http"
}
}
provisioner "remote-exec" {
connection = {
type = "winrm"
user = "test"
password = "test#123"
agent = "false"
https = false
insecure = true
}
inline = [
"cd..",
"cd..",
"cd docker",
"mkdir test"
]
}
}
The VM is created successfully, but I am not able to connect via WinRM. I am getting the following error from "remote-exec":
azurerm_virtual_machine.vm: timeout - last error: unknown error Post http://:5985/wsman: dial tcp :5985: connectex: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.
or: http response error: 401 - invalid content type
When you create a Windows Azure VM, WinRM is not configured by default. So if you want to connect to the VM through WinRM, you should configure WinRM after the VM is created, or at creation time.
You can follow the steps in Configure WinRM after virtual machine creation. You can also configure it at creation time; there is an example that shows this through an Azure template, which may also help: see Deploys a Windows VM and Configures a WinRM Https listener.
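If you want to keep everything in Terraform, one creation-time option (a sketch against the azurerm 1.x syntax used above, not a drop-in fix) is to enable WinRM through a CustomScriptExtension and move the remote-exec provisioner out of the VM resource into a null_resource, since a provisioner inside the VM block runs before any extension can be applied. The resource names, commands, and the public IP reference are assumptions; newer provider versions take virtual_machine_id instead of the name/resource group pair:
# Enable the HTTP WinRM listener and PowerShell remoting on the VM.
resource "azurerm_virtual_machine_extension" "winrm" {
  name                 = "configure-winrm"
  location             = "South India"
  resource_group_name  = "cservice"
  virtual_machine_name = "${azurerm_virtual_machine.vm.name}"
  publisher            = "Microsoft.Compute"
  type                 = "CustomScriptExtension"
  type_handler_version = "1.9"
  settings = <<SETTINGS
{
  "commandToExecute": "powershell -Command \"winrm quickconfig -q; Enable-PSRemoting -Force\""
}
SETTINGS
}
# Connect only after WinRM has been configured. Note that a dynamically
# allocated public IP may not be known until the VM is running.
resource "null_resource" "prepare_vm" {
  depends_on = ["azurerm_virtual_machine_extension.winrm"]
  connection {
    type     = "winrm"
    host     = "${azurerm_public_ip.publicip.ip_address}"
    user     = "test"
    password = "test#123"
    https    = false
    insecure = true
  }
  provisioner "remote-exec" {
    inline = [
      "mkdir C:\\docker\\test"
    ]
  }
}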
