I have created an EC2 instance using Terraform (I do not have the .pem keys). Can I establish an SSH connection between my local system and the EC2 instance?
Assuming you provisioned an instance using Terraform v0.12+ with this structure:
resource "aws_instance" "instance" {
ami = "${var.ami}"
instance_type = "t2.micro"
count = 1
associate_public_ip_address = true
}
You'll need a few additional settings. First, configure an output for the public IP:
output "instance_ip" {
description = "The public ip for ssh access"
value = aws_instance.instance.public_ip
}
Create an aws_key_pair with an existing SSH public key, or create a new one. For example:
resource "aws_key_pair" "ssh-key" {
key_name = "ssh-key"
public_key = "ssh-rsa AAAAB3Nza............"
}
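If you don't have an SSH key yet, one option is to generate the key pair in Terraform itself with the tls provider and feed it into public_key (a sketch; be aware the generated private key ends up unencrypted in your Terraform state):
resource "tls_private_key" "generated" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

# In the aws_key_pair above:
# public_key = tls_private_key.generated.public_key_openssh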
Then add key_name to the instance resource:
resource "aws_instance" "instance" {
ami = var.ami
instance_type = "t2.micro"
count = 1
associate_public_ip_address = true
key_name = "ssh-key"
}
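Note that SSH access also requires port 22 to be reachable. If your default security group doesn't allow it, attach one that does; a minimal sketch (the resource name and wide-open CIDR are illustrative, restrict the CIDR in practice):
resource "aws_security_group" "allow_ssh" {
  name        = "allow-ssh"
  description = "Allow inbound SSH"

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
and reference it from the instance with vpc_security_group_ids = [aws_security_group.allow_ssh.id].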
Now run terraform apply, then terraform output to get the public IP.
Get the public IP and run (the login user depends on the AMI; ec2-user for Amazon Linux):
ssh ec2-user@<PUBLIC_IP>
or, pointing at the private key explicitly (note that -i takes the private key, not the .pub file):
ssh -i ~/.ssh/id_rsa ec2-user@<PUBLIC_IP>
Sources:
https://www.terraform.io/docs/providers/aws/r/instance.html
https://www.terraform.io/docs/providers/aws/r/key_pair.html
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AccessingInstancesLinux.html
How to upload a local file to an EC2 instance with the module terraform-aws-modules/ec2-instance/aws?
I placed the provisioner inside module "ec2"; it does not work. I placed the provisioner outside of module "ec2"; it does not work either. In both cases I get the error: "Blocks of type "provisioner" are not expected here".
The provisioner inside module "ec2" (does not work):
module "ec2" {
source = "terraform-aws-modules/ec2-instance/aws"
version = "4.1.4"
name = var.ec2_name
ami = var.ami
instance_type = var.instance_type
availability_zone = var.availability_zone
subnet_id = data.terraform_remote_state.vpc.outputs.public_subnets[0]
vpc_security_group_ids = [aws_security_group.sg_WebServerSG.id]
associate_public_ip_address = true
key_name = var.key_name
provisioner "file" {
source = "./foo.txt"
destination = "/home/ec2-user/foo.txt"
connection {
type = "ssh"
user = "ec2-user"
private_key = "${file("./keys.pem")}"
host = module.ec2.public_dns
}
}
}
"provisioner" is outsite of the module "ec2". It does not work.
module "ec2" {
source = "terraform-aws-modules/ec2-instance/aws"
version = "4.1.4"
name = var.ec2_name
ami = var.ami
instance_type = var.instance_type
availability_zone = var.availability_zone
subnet_id = data.terraform_remote_state.vpc.outputs.public_subnets[0]
vpc_security_group_ids = [aws_security_group.sg_WebServerSG.id]
associate_public_ip_address = true
key_name = var.key_name
}
provisioner "file" {
source = "./foo.txt"
destination = "/home/ec2-user/foo.txt"
connection {
type = "ssh"
user = "ec2-user"
private_key = "${file("./keys.pem")}"
host = module.ec2.public_dns
}
}
You can use a null_resource to make it work. Provisioners are only valid inside resource blocks, and a module block only accepts the module's declared input variables, which is why both attempts above fail:
resource "null_resource" "this" {
provisioner "file" {
source = "./foo.txt"
destination = "/home/ec2-user/foo.txt"
connection {
type = "ssh"
user = "ec2-user"
private_key = "${file("./keys.pem")}"
host = module.ec2.public_dns
}
}
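By default the null_resource runs its provisioner only once, at creation. To re-run it when the instance is replaced, you can add a triggers map (a sketch; the id output exists on terraform-aws-modules/ec2-instance v4):
resource "null_resource" "this" {
  # Re-created (re-running the provisioner) whenever the instance id changes
  triggers = {
    instance_id = module.ec2.id
  }

  # ... provisioner "file" and connection blocks as above ...
}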
You can provision files on an EC2 instance with cloud-init's YAML syntax, passed to the instance as user data. Here is an example of passing a cloud-init config to EC2.
cloud-init.yaml file:
#cloud-config
# vim: syntax=yaml
#
# This is the configuration syntax that the write_files module
# will know how to understand. Encoding can be given b64 or gzip or (gz+b64).
# The content will be decoded accordingly and then written to the path that is
# provided.
#
# Note: Content strings here are truncated for example purposes.
write_files:
  - content: |
      # Your TXT file content...
      # goes here
    path: /home/ec2-user/foo.txt
    owner: ec2-user:ec2-user
    permissions: '0644'
Terraform file:
module "ec2" {
source = "terraform-aws-modules/ec2-instance/aws"
version = "4.1.4"
name = var.ec2_name
ami = var.ami
instance_type = var.instance_type
availability_zone = var.availability_zone
subnet_id = data.terraform_remote_state.vpc.outputs.public_subnets[0]
vpc_security_group_ids = [aws_security_group.sg_WebServerSG.id]
associate_public_ip_address = true
key_name = var.key_name
user_data = file("./cloud-init.yaml")
}
The benefits of this approach over the approach in the accepted answer are:
This method creates the file immediately at instance creation, instead of having to wait for the instance to come up first. The null-provisioner/SSH connection method has to wait for the EC2 instance to become available, and the timing of that could cause your Terraform workflow to become flaky.
This method doesn't require the EC2 instance to be reachable from your local computer that is running Terraform. You could be deploying the EC2 instance to a private subnet behind a load balancer, which would prevent the null-provisioner/SSH connect method from working.
This doesn't require you to have the SSH key for the EC2 instance available on your local computer. You might want to only allow AWS SSM connect to your EC2 instance, to keep it more secure than allowing SSH directly from the Internet, and that would prevent the null-provisioner/SSH connect method from working. Further, storing or referencing an SSH private key in your Terraform state adds a risk factor to your overall security profile.
This doesn't require the use of a null_resource provisioner, about which the Terraform documentation states:
Important: Use provisioners as a last resort. There are better alternatives for most situations. Refer to Declaring Provisioners for more details.
I'm aware that there already exist several posts similar to this one - I've gone through them and adapted my Terraform configuration file, but it makes no difference.
Therefore, I'd like to post my configuration file and my use case: I'd like to provision a (Windows) virtual machine on AWS using Terraform. It works without the file provisioning part; with it included, provisioning results in a timeout.
This includes adaptations from previous posts:
SSH connection restriction
SSH isnt working in Windows with Terraform provisioner connection type
Usage of a Security group
Terraform File provisioner can't connect ec2 over ssh. timeout - last error: dial tcp 92.242.xxx.xx:22: i/o timeout
I also get a timeout when using "winrm" instead of "ssh".
I'd be happy if you could provide any hints for the following config file:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0"
    }
  }
}

# Configure the AWS Provider
provider "aws" {
  access_key = "<my access key>"
  secret_key = "<my secret key>"
  region     = "eu-central-1"
}

resource "aws_instance" "webserver" {
  ami             = "ami-07dfec7a6d529b77a"
  instance_type   = "t2.micro"
  security_groups = [aws_security_group.sgwebserver.name]
  key_name        = aws_key_pair.pubkey.key_name

  tags = {
    "Name" = "WebServer-Win"
  }
}

resource "null_resource" "deployBundle" {
  connection {
    type        = "ssh"
    user        = "Administrator"
    private_key = "${file("C:/Users/<my user name>/aws_keypair/aws_instance.pem")}"
    host        = aws_instance.webserver.public_ip
  }

  provisioner "file" {
    source      = "files/test.txt"
    destination = "C:/test.txt"
  }

  depends_on = [aws_instance.webserver]
}

resource "aws_security_group" "sgwebserver" {
  name        = "sgwebserver"
  description = "Allow ssh inbound traffic"

  ingress {
    from_port   = 0
    to_port     = 6556
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "sgwebserver"
  }
}

resource "aws_key_pair" "pubkey" {
  key_name   = "aws-cloud"
  public_key = file("key/aws_instance.pub")
}

resource "aws_eip" "elasticip" {
  instance = aws_instance.webserver.id
}

output "eip" {
  value = aws_eip.elasticip.public_ip
}

module "vpc" {
  source = "terraform-aws-modules/vpc/aws"

  name = "my-vpc"
  cidr = "10.0.0.0/16"

  azs             = ["eu-central-1a", "eu-central-1b", "eu-central-1c"]
  private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
  public_subnets  = ["10.0.101.0/24", "10.0.102.0/24", "10.0.103.0/24"]

  enable_nat_gateway = true
  enable_vpn_gateway = true

  tags = {
    Terraform   = "true"
    Environment = "dev"
  }
}
Thanks a lot in advance!
Windows EC2 instances don't support SSH out of the box; they support RDP. You would have to install SSH server software on the instance before you could SSH into it.
I suggest doing something like placing the file in S3, and using a user data script to trigger the Windows EC2 instance to download the file on startup.
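A minimal sketch of that approach (the bucket and key names are assumptions, and the instance needs an IAM instance profile that allows s3:GetObject; Read-S3Object comes with the AWS Tools for PowerShell, which are preinstalled on Amazon-provided Windows AMIs):
resource "aws_instance" "webserver" {
  ami                  = "ami-07dfec7a6d529b77a"
  instance_type        = "t2.micro"
  key_name             = aws_key_pair.pubkey.key_name
  iam_instance_profile = aws_iam_instance_profile.s3_read.name # hypothetical profile with S3 read access

  # Runs once at first boot
  user_data = <<-EOF
    <powershell>
    Read-S3Object -BucketName "my-deploy-bucket" -Key "test.txt" -File "C:\test.txt"
    </powershell>
  EOF

  tags = {
    "Name" = "WebServer-Win"
  }
}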
I already have a static IP address, and I am trying to assign it to a newly created instance using Terraform, but unfortunately I am getting a new IP address instead. Can somebody help?
data "aws_eip" "my_instance_eip" {
public_ip ="3.17.21.34"
}
resource "aws_instance" "Webserver_Instance" {
instance_type = var.type_webserver
ami = var.ami_webserver
key_name = var.key_name
tags = {
name = "Webserver_Instance"
}
security_groups = ["${aws_security_group.Webserver_SG.name}"]
}
resource "aws_eip_association" "my_eip_association" {
instance_id = aws_instance.Webserver_Instance.id
allocation_id = data.aws_eip.my_instance_eip.id
}
When trying to execute a shell script through provisioner "remote-exec" in Terraform, the connection is not established.
I'm using an AMI for ubuntu-xenial-16.04, so the user is ubuntu.
This is the latest code that I used to execute the shell script:
resource "aws_instance" "secondary_zone" {
count = 1
instance_type = "${var.ec2_instance_type}"
ami = "${data.aws_ami.latest-ubuntu.id}"
key_name = "${aws_key_pair.deployer.key_name}"
subnet_id = "${aws_subnet.secondary.id}"
vpc_security_group_ids = ["${aws_security_group.server.id}"]
associate_public_ip_address = true
provisioner "remote-exec" {
inline = ["${template_file.script.rendered}"]
}
connection {
type = "ssh"
user = "ubuntu"
private_key = "${file("~/.ssh/id_rsa")}"
}
}
This is what I get in the console:
aws_instance.secondary_zone (remote-exec): Connecting to remote host via SSH...
aws_instance.secondary_zone (remote-exec): Host: x.x.x.x
aws_instance.secondary_zone (remote-exec): User: ubuntu
aws_instance.secondary_zone (remote-exec): Password: false
aws_instance.secondary_zone (remote-exec): Private key: true
aws_instance.secondary_zone (remote-exec): SSH Agent: false
aws_instance.secondary_zone (remote-exec): Checking Host Key: false
Thank you for your help...
I had the same issue. In your connection block try specifying the host.
connection {
  type        = "ssh"
  user        = "ubuntu"
  private_key = "${file("~/.ssh/id_rsa")}"
  host        = self.public_ip
}
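Note that self is only valid inside a resource's own connection and provisioner blocks; when connecting from elsewhere (e.g. a null_resource), reference the instance directly, such as host = aws_instance.secondary_zone[0].public_ip.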
I also had to create a route and internet gateway and associate them with my VPC. I'm still learning Terraform, but this worked for me.
resource "aws_internet_gateway" "test-env-gw" {
vpc_id = aws_vpc.test-env.id
}
resource "aws_route_table" "route-table-test-env" {
vpc_id = aws_vpc.test-env.id
route {
cidr_block = "0.0.0.0/0"
gateway_id = aws_internet_gateway.test-env-gw.id
}
}
resource "aws_route_table_association" "subnet-association" {
subnet_id = aws_subnet.us-east-2a-public.id
route_table_id = aws_route_table.route-table-test-env.id
}
As I mentioned, it was a connection problem in my case.
In addition, template_file was deprecated, so I changed the code to:
resource "aws_instance" "secondary_zone" {
instance_type = "${var.ec2_instance_type}"
ami = "${data.aws_ami.latest-ubuntu.id}"
key_name = "${aws_key_pair.deployer.key_name}"
subnet_id = "${aws_subnet.secondary.id}"
vpc_security_group_ids = ["${aws_security_group.server.id}"]
associate_public_ip_address = true
connection {
type = "ssh"
user = "ubuntu"
private_key = "${file("~/.ssh/id_rsa")}"
timeout = "2m"
}
provisioner "file" {
source = "/server/script.sh"
destination = "/tmp/script.sh"
}
provisioner "remote-exec" {
inline = [
"chmod +x /tmp/script.sh",
"/tmp/script.sh args",
]
}
}
Also, I learned that script.sh has to use LF (Unix) line endings; with Windows CRLF endings the remote execution fails (you can convert the file with dos2unix script.sh).
If you're just trying to run some scripts to provision whichever EC2 nodes you create with Terraform, I would try setting the user_data parameter to reference your script. The user data script runs automatically when the node initializes.
This will ensure that there are no lifecycle related issues with your deployment (such as EC2 node being created and the host not being available for the remote exec to succeed) and an overall cleaner experience.
An example can look like this:
resource "aws_instance" "secondary_zone" {
count = 1
instance_type = "${var.ec2_instance_type}"
ami = "${data.aws_ami.latest-ubuntu.id}"
key_name = "${aws_key_pair.deployer.key_name}"
subnet_id = "${aws_subnet.secondary.id}"
vpc_security_group_ids = ["${aws_security_group.server.id}"]
associate_public_ip_address = true
user_data = "${template_file.script.rendered}"
}
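For completeness, the rendered script referenced here would come from a template_file block; a sketch using the (since-deprecated) data source form, assuming the script sits next to the configuration:
data "template_file" "script" {
  template = file("${path.module}/script.sh")
}

# Referenced as:
# user_data = data.template_file.script.rendered

# On Terraform 0.12+ the built-in function is the preferred replacement:
# user_data = templatefile("${path.module}/script.sh", {})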
Hope this helps!
Further reading:
TF docs
Userdata examples
I am wondering how to stop the endless retry loop in the error message below so that the AWS EC2 instances get created.
Terraform code below:
provider "aws" {
region = "${var.location}"
}
resource "aws_instance" "ins1_ec2" {
ami = "${var.ami}"
instance_type = "${var.inst_type}"
tags = {
Name = "cluster"
}
provisioner "remote-exec" {
inline = [
"hostnamectl set-hostname centos-76-1",
]
}
}
resource "aws_eip" "ins1_eip" {
instance = "${aws_instance.ins1_ec2.id}"
vpc = false
}
resource "aws_instance" "ins2_ec2" {
ami = "${var.ami}"
instance_type = "${var.inst_type}"
provisioner "remote-exec" {
inline = [
"hostnamectl set-hostname centos-76-2",
]
}
tags = {
Name = "cluster"
}
}
resource "aws_eip" "ins2_eip" {
instance = "${aws_instance.ins2_ec2.id}"
vpc = false
}
It errors out with the below message:
* aws_instance.ins2_ec2: timeout - last error: ssh: handshake failed: agent: failed to list keys
* aws_instance.ins1_ec2: timeout - last error: ssh: handshake failed: agent: failed to list keys
I have a .pem file on my laptop which I can copy to my AWS build server, so can I use key_name in the EC2 instance creation? The "test.pem" file that I have is the private key?
What I don't know is how to log in to the VM, either with the key (test.pem) which I already have, or with a username and password. There does not seem to be a provision to create a username and password in the aws_instance block.
Terraform EC2 instance documentation is at the link below:
https://www.terraform.io/docs/providers/aws/r/instance.html
If you want to attach a key to an EC2 instance while you create it using Terraform, you first need to create a key pair in the AWS console, download the .pem file, and copy the key pair name to the clipboard.
The Terraform script requires the correct key name to associate the key to the EC2 instance.
If you want to perform any remote action on the instance from Terraform, the following things are required:
The instance should have an IP which Terraform can connect to.
Terraform needs to connect to the instance via SSH or RDP.
Both ways require the key file (.pem) downloaded earlier to be used when making the connection.
So the connection is the missing part in your Terraform configuration.
Consider the following Terraform configuration, which creates one t1.micro instance with a key associated with it and then creates a file on the instance over SSH.
Network requirements, such as the VPC, subnet, route tables, internet gateway, and security groups, are already created in the AWS console and their respective IDs are used in the Terraform configuration below.
provider "aws" {
region = "<<region>>",
access_key="<<access_key>>",
secret_key="<<secret_key>>"
}
resource "aws_instance" "ins1_ec2" {
ami = "<<ami_id>>"
instance_type = "<<instance_type>>"
//id of the public subnet so that the instance is accessible via internet to do SSH
subnet_id = "<<subnet_id>>"
//id of the security group which has ports open to all the IPs
vpc_security_group_ids=["<<security_group_id>>"]
//assigning public IP to the instance is required.
associate_public_ip_address=true
key_name = "<<key_name>>"
tags = {
Name = "cluster"
}
provisioner "remote-exec" {
inline = [
//Executing command to creating a file on the instance
"echo 'Some data' > SomeData.txt",
]
//Connection to be used by provisioner to perform remote executions
connection {
//Use public IP of the instance to connect to it.
host = "${aws_instance.ins1_ec2.public_ip}"
type = "ssh"
user = "ec2-user"
private_key = "${file("<<pem_file>>")}"
timeout = "1m"
agent = false
}
}
}
resource "aws_eip" "ins1_eip" {
instance = "${aws_instance.ins1_ec2.id}"
vpc = true
}
When you run the terraform apply command, if Terraform is able to SSH into the instance, the remote-exec provisioner's output will show the connection being established and the commands running.
You might still see errors if the commands being executed fail due to some other error or permission issue, but once that connection output appears, Terraform has connected to the instance successfully.
That's the Terraform configuration which will create an EC2 instance, connect to it via SSH, and perform remote execution tasks on it.
The .pem file can also be used to SSH into the instance from your local machine.
This should help you resolve your issue.
More information about connection in Terraform is available here.
The following worked for me:
Create a security group and make sure you add SSH (port 22) with source 0.0.0.0/0 in the inbound rules.
Copy the ID of the security group and add it to the terraform config's vpc_security_group_ids list.
Head to the AWS console and either create a new key pair or locate the existing key to use.
Get the name of the key pair from the console and refer to it in the terraform config's key_name.
If you created a new key, make sure you download the .pem file and change its permissions with chmod 400 myPrivateKey.pem.
Once you have applied the terraform config, connect with ssh -i myPrivateKey.pem ec2-user@<public-ip>.
Your terraform config for the ec2 resource will look like:
resource "aws_instance" "my-sample" {
ami = "ami-xxxxx"
instance_type = "t2.micro"
associate_public_ip_address = true
key_name = "MyPrivateKey"
vpc_security_group_ids = ["sg-0f073685ght54lkm"]
}