Terraform. SSH keys. Connect from bastion host to ec2 instance in private subnet

I can't connect to my EC2 instance in a private subnet from the bastion host in the public subnet: Permission denied (publickey).
How can I generate keys using Terraform and deploy them to the servers?
[network overview diagram]

Well, for keys, you can generate a key pair manually. After that, use that key pair in your Terraform script. Or if you still need to generate the key using Terraform, try this link: https://www.phillipsj.net/posts/generating-ssh-keys-with-terraform/
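For the manual route, a minimal sketch of registering a locally generated key with AWS (the key name and path are hypothetical, assuming you created the pair with ssh-keygen):

resource "aws_key_pair" "deployer" {
  # Registers the local public key with AWS under this name;
  # the matching private key stays on your machine.
  key_name   = "deployer-key"
  public_key = file(pathexpand("~/.ssh/id_rsa.pub"))
}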

Create it using the tls provider as below, save the content in a file, and then use the key pair name in the resource creation:
variable "private_key_file_path" {
default = "ec2-key.pem"
}
resource "tls_private_key" "ec2key" {
algorithm = "RSA"
rsa_bits = 4096
}
resource "local_file" "private_key" {
content = tls_private_key.ec2key.private_key_pem
filename = "${var.private_key_file_path}"
file_permission = "0400"
}
resource "aws_key_pair" "key-pair" {
key_name = split(".","${var.private_key_file_path}")[0]
public_key = tls_private_key.ec2key.public_key_openssh
}
NOTE: using a local_file resource to save the key in a file is not a security best practice.
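To tie it together, a hedged sketch of consuming that key pair in an instance (the AMI and subnet IDs are placeholders, not from the original answer):

resource "aws_instance" "private" {
  ami           = "ami-xxxxx"    # placeholder AMI ID
  instance_type = "t3.micro"
  subnet_id     = "subnet-xxxxx" # placeholder private subnet ID
  key_name      = aws_key_pair.key-pair.key_name
}

After copying ec2-key.pem to the bastion (or forwarding your SSH agent), ssh -i ec2-key.pem ec2-user@<private-ip> should then work from there.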

Related

provisioning an EC2 instance with terraform InvalidKeyPair.NotFound

I've created a key pair for EC2 called terraform and downloaded the pem file to the same directory where my Terraform files live. When I issue terraform apply I get:
aws_instance.windows: Creating...
Error: Error launching source instance: InvalidKeyPair.NotFound: The key pair 'terraform' does not exist
status code: 400, request id: 1ac563d4-244a-4371-bde7-ee9bcf048830
I'm specifying the name of the key pair via an environment variable. This is the start of the block I'm using to create the Windows virtual machine:
resource "aws_instance" "windows" {
ami = data.aws_ami.Windows_2019.image_id
instance_type = var.windows_instance_types
key_name = var.key_name
vpc_security_group_ids = [aws_security_group.allow_rdp_winrm.id]
associate_public_ip_address = true
subnet_id = aws_subnet.subnet1.id
get_password_data = "true"
user_data = file("scripts/user_data.txt")
There is obviously something I'm doing wrong. Do I need to tell Terraform which AWS region the key pair resides in?
Key pairs are regional, so if you created one in one region, it isn't available in the others.
Terraform will always try to find and use the key in the region that you tell it to run in, and if the key is not present, AWS will complain with this error.
Terraform also doesn't like it when things are created out of band, and you might run into complications. It's also much cleaner to create the key pair using Terraform, and you can reference it as Atul has posted in his answer.
You could also import the key into Terraform, or use Terraform's data sources to search for and find the key, as alternatives, but these are a bit advanced, especially if you're getting started with Terraform. A sketch of both follows below.
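As a hedged sketch of those alternatives (the region is an assumption; use whichever region you created the key in):

provider "aws" {
  region = "us-east-1" # assumption: the region where the 'terraform' key pair was created
}

Alternatively, assuming you declare an aws_key_pair resource named my_key_pair (as in the answer below), you can adopt the console-created key pair into Terraform state by key name:

terraform import aws_key_pair.my_key_pair terraform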
You need to create a key pair first before consuming it.
resource "aws_key_pair" "my_key_pair" {
key_name = var.key_name
public_key = file("${abspath(path.cwd)}/my-key.pub")
}
Now use the key as:
resource "aws_instance" "windows" {
  ami           = data.aws_ami.Windows_2019.image_id
  instance_type = var.windows_instance_types
  key_name      = aws_key_pair.my_key_pair.key_name
  # ... rest of the arguments as in your original block
}
I'll only provide an answer that pertains to your error message directly, as this question came up first in a Bing search.
The data source documentation doesn't mention a region for the key pair; it might be presumed from the instance's availability zone.
data "aws_key_pair" "example" {
key_name = "terraform"
filter {
name = "tag:Component"
values = ["web"]
}
}
resource "aws_instance" "web" {
ami = data.aws_ami.ubuntu.id
key_name = data.aws_key_pair.example.key_name
instance_type = "t3.micro"
tags = {
Name = "HelloWorld"
}
}
Instance resource declaration: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/instance
AWS Keypair Data Source: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/key_pair

EC2 and key_pair - How to use different ssh_key

I built a small platform on AWS using a Terraform script. For now there is one EC2 instance that I can SSH into.
I filled in the key_name variable of an aws_instance to do that.
How can I add another SSH key so that a colleague can SSH to the instance too?
key_name accepts a string, not a list.
resource "aws_instance" "airflow" {
ami = "ami-0d3f551818b21ed81"
instance_type = "t3a.xlarge"
key_name = "admin"
vpc_security_group_ids = [aws_security_group.ssh-group.id, aws_security_group.airflow_webserver.id]
tags = {
"Name" = "airflow"
}
subnet_id = aws_subnet.subnet1.id
}
resource "aws_key_pair" "admin" {
key_name = "admin"
public_key = file(var.public_key_path)
}
output "public_airflow_ip" {
value = aws_instance.airflow.public_ip
}
The configuration through the API allows for only one key pair. However, you can manually (or via a script) add a second user account with its own keys. Please have a look at How do I add new user accounts with SSH access to my Amazon EC2 Linux instance?
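As a hedged sketch of the script route: on a cloud-init based AMI such as Amazon Linux, user_data can create the second account; the user name and key below are hypothetical placeholders.

locals {
  # cloud-config that keeps the default user and adds a colleague account
  colleague_user_data = <<-EOT
    #cloud-config
    users:
      - default                  # keep the default ec2-user
      - name: colleague          # hypothetical second account
        shell: /bin/bash
        ssh_authorized_keys:
          - ssh-ed25519 AAAA... colleague@example.com  # colleague's public key
  EOT
}

# Then on the aws_instance:
#   user_data = local.colleague_user_data

Note that user_data runs on first boot, so this suits new instances; on an existing instance you would add the account manually as the linked article describes.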
Having said that, it is recommended to log into instances using AWS Systems Manager Session Manager instead of direct SSH. This way you can control access to the instances through IAM policies and don't need to expose the SSH port to the internet.
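For completeness, a hedged sketch of the Session Manager route: attach an instance profile with the AWS-managed AmazonSSMManagedInstanceCore policy, then connect without opening port 22 (the role and profile names are hypothetical):

resource "aws_iam_role" "ssm_role" {
  name = "airflow-ssm-role"

  # Allow EC2 to assume this role.
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action    = "sts:AssumeRole"
      Effect    = "Allow"
      Principal = { Service = "ec2.amazonaws.com" }
    }]
  })
}

resource "aws_iam_role_policy_attachment" "ssm_core" {
  role       = aws_iam_role.ssm_role.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore"
}

resource "aws_iam_instance_profile" "ssm_profile" {
  name = "airflow-ssm-profile"
  role = aws_iam_role.ssm_role.name
}

# Then on the aws_instance:
#   iam_instance_profile = aws_iam_instance_profile.ssm_profile.name

With that in place, each user connects with aws ssm start-session --target <instance-id> under their own IAM identity.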

How do I launch an EC2 instance into an existing VPC using Terraform?

I am writing a TF script to launch an EC2 instance into an existing VPC. I have seen some examples where the script assigns the subnet ID using a variable from another part of the script where the VPC and subnet were created.
Instead of using subnet_id = "${aws_subnet.main-public-1.id}" as was shown in that example, I tried putting the actual subnet ID from an existing subnet in an existing VPC, both of which were made using the console, like this:
subnet_id = "subnet-xxxxx"
and applied the security group the same way. But when the EC2 instance got stood up, it was in the default VPC with the default security group. Why did this happen? How do I launch the EC2 instance into an existing VPC and subnet with existing security groups?
Here is the full script
EC2.tf
provider "aws" {
profile = "default"
region = var.region
}
resource "aws_instance" "WindowsBox" {
ami = "ami-xxxxx"
instance_type = "t2.medium"
key_name = aws_key_pair.keypair.key_name
subnet_id = "subnet-xxxxx"
vpc_security_group_ids = ["sg-xxxxx"]
tags = {
Name ="WindowsBox"
}
}
resource "aws_eip" "ip" {
vpc = true
instance = aws_instance.WindowsBox.id
}
resource "aws_key_pair" "keypair" {
key_name = "WindowsBox-keypair"
public_key = file("./kp/WindowsBox-keypair.pub")
}
Variables.tf
variable "region" {
default = "us-east-2"
}
The security group that you are using to create the instance should also exist inside the VPC; I think that's not the case here.
I would check that the VPC ID of the security group matches the VPC ID of the subnet. A sketch of that check follows below.
Hope this helps.
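As a quick, hedged way to verify (reusing the placeholder IDs from the question), both data sources below should report the same vpc_id:

data "aws_subnet" "existing" {
  id = "subnet-xxxxx"
}

data "aws_security_group" "existing" {
  id = "sg-xxxxx"
}

# Compare the two after a terraform plan or apply:
output "subnet_vpc_id" {
  value = data.aws_subnet.existing.vpc_id
}

output "sg_vpc_id" {
  value = data.aws_security_group.existing.vpc_id
}

If the two outputs differ, the security group and the subnet belong to different VPCs, which would explain the behaviour described.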

Create private network with Terraform with starting script - Google Cloud Platform

Having started with Terraform on GCP recently, I would like to finish an exercise:
Create a new VPC network with a single subnet.
Create a firewall rule that allows external RDP traffic to the bastion host system.
Deploy two Windows servers that are connected to both the VPC network and the default network.
Create a virtual machine that points to the startup script.
Configure a firewall rule to allow HTTP access to the virtual machine.
Here is my solution:
Create a new VPC network called securenetwork, then create a new VPC subnet inside securenetwork. Once the network and subnet have been configured, configure a firewall rule that allows inbound RDP traffic (TCP port 3389) from the internet to the bastion host.
# Create the securenetwork network
resource "google_compute_network" "securenetwork" {
  name                    = "securenetwork"
  auto_create_subnetworks = false
}

# Create the securesubnet-eu subnetwork
resource "google_compute_subnetwork" "securesubnet-eu" {
  name          = "securesubnet-eu"
  region        = "europe-west1"
  network       = google_compute_network.securenetwork.self_link
  ip_cidr_range = "10.130.0.0/20"
}

# Create a firewall rule to allow RDP (TCP 3389) and ICMP traffic on securenetwork
resource "google_compute_firewall" "securenetwork-allow-http-ssh-rdp-icmp" {
  name    = "securenetwork-allow-http-ssh-rdp-icmp"
  network = google_compute_network.securenetwork.self_link

  allow {
    protocol = "tcp"
    ports    = ["3389"]
  }

  allow {
    protocol = "icmp"
  }
}
# Create the vm-securehost instance
module "vm-securehost" {
  source              = "./instance/securehost"
  instance_name       = "vm-securehost"
  instance_zone       = "europe-west1-d"
  instance_subnetwork = google_compute_subnetwork.securesubnet-eu.self_link
  instance_network    = google_compute_network.securenetwork.self_link
}

# Create the vm-bastionhost instance
module "vm-bastionhost" {
  source              = "./instance/bastionhost"
  instance_name       = "vm-bastionhost"
  instance_zone       = "europe-west1-d"
  instance_subnetwork = google_compute_subnetwork.securesubnet-eu.self_link
  instance_network    = google_compute_network.securenetwork.self_link
}
Deploy Windows instances
a Windows 2016 server instance called vm-securehost with two network interfaces. Configure the first network interface with an internal only connection to the new VPC subnet, and the second network interface with an internal only connection to the default VPC network. This is the secure server.
variable "instance_name" {}
variable "instance_zone" {}
variable "instance_type" {
default = "n1-standard-1"
}
variable "instance_subnetwork" {}
variable "instance_network" {}
resource "google_compute_instance" "vm_instance" {
name = "${var.instance_name}"
zone = "${var.instance_zone}"
machine_type = "${var.instance_type}"
boot_disk {
initialize_params {
image = "windows-cloud/windows-2016"
}
}
network_interface {
subnetwork = "${var.instance_subnetwork}"
access_config {
# Allocate a one-to-one NAT IP to the instance
}
}
}
a second Windows 2016 server instance called vm-bastionhost with two network interfaces. Configure the first network interface to connect to the new VPC subnet with an ephemeral public (external NAT) address, and the second network interface with an internal only connection to the default VPC network. This is the jump box or bastion host.
variable "instance_name" {}
variable "instance_zone" {}
variable "instance_type" {
default = "n1-standard-1"
}
variable "instance_subnetwork" {}
variable "instance_network" {}
resource "google_compute_address" "default" {
name = "default"
region = "europe-west1"
}
resource "google_compute_instance" "vm_instance" {
name = "${var.instance_name}"
zone = "${var.instance_zone}"
machine_type = "${var.instance_type}"
boot_disk {
initialize_params {
image = "windows-cloud/windows-2016"
}
}
network_interface {
subnetwork = "${var.instance_subnetwork}"
network = "${var.instance_network}"
access_config {
# Allocate a one-to-one NAT IP to the instance
nat_ip = "${google_compute_address.default.address}"
}
}
}
My questions:
How do I configure the Windows compute instance called vm-securehost so that it does not have a public IP address?
How do I configure the Windows compute instance called vm-securehost so that it runs Microsoft IIS web server software on startup?
Thanks for any comments on the solution.
To create a VM without any external IP address, omit the access_config block in your Terraform script, as it's the one responsible for the creation of the external IP address.
To run Microsoft IIS web server software on your VM at startup, add the following argument to your VM creation block:
metadata_startup_script = "import-module servermanager && add-windowsfeature web-server -includeallsubfeature"
Please refer to the following links for detailed information on the issue:
https://cloud.google.com/compute/docs/tutorials/basic-webserver-iis
https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/compute_instance#metadata_startup_script
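Putting both suggestions together, a hedged sketch of vm-securehost (the zone, machine type, and second NIC on the default network are taken from the exercise text; treat this as illustrative rather than a verified solution):

resource "google_compute_instance" "vm_securehost" {
  name         = "vm-securehost"
  zone         = "europe-west1-d"
  machine_type = "n1-standard-1"

  boot_disk {
    initialize_params {
      image = "windows-cloud/windows-2016"
    }
  }

  # First NIC: internal-only on the new VPC subnet.
  # No access_config block, so no external IP is allocated.
  network_interface {
    subnetwork = google_compute_subnetwork.securesubnet-eu.self_link
  }

  # Second NIC: internal-only on the default network.
  network_interface {
    network = "default"
  }

  # Install IIS on startup, per the suggestion above.
  metadata_startup_script = "import-module servermanager && add-windowsfeature web-server -includeallsubfeature"
}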

Unable to utilize the existing pem file to create EC2 instance by terraform

I am wondering how to stop the infinite retry loop behind the error message below so that it creates the AWS EC2 instances?
Terraform code below:
provider "aws" {
region = "${var.location}"
}
resource "aws_instance" "ins1_ec2" {
ami = "${var.ami}"
instance_type = "${var.inst_type}"
tags = {
Name = "cluster"
}
provisioner "remote-exec" {
inline = [
"hostnamectl set-hostname centos-76-1",
]
}
}
resource "aws_eip" "ins1_eip" {
instance = "${aws_instance.ins1_ec2.id}"
vpc = false
}
resource "aws_instance" "ins2_ec2" {
ami = "${var.ami}"
instance_type = "${var.inst_type}"
provisioner "remote-exec" {
inline = [
"hostnamectl set-hostname centos-76-2",
]
}
tags = {
Name = "cluster"
}
}
resource "aws_eip" "ins2_eip" {
instance = "${aws_instance.ins2_ec2.id}"
vpc = false
}
It errors out with the below message:
* aws_instance.ins2_ec2: timeout - last error: ssh: handshake failed: agent: failed to list keys
* aws_instance.ins1_ec2: timeout - last error: ssh: handshake failed: agent: failed to list keys
I have a pem file on my laptop which I can copy to my AWS build server, so can I use key_name in the EC2 instance creation? The pem file I have, named "test.pem", is the private key?
What I don't know is how to log in to the VM: with the key_name (test.pem) which I already have, or with a username/password? There does not seem to be a provision to create a username and password in the aws_instance block.
Terraform EC2 instance documentation is at the link below:
https://www.terraform.io/docs/providers/aws/r/instance.html
If you want to attach a key to an EC2 instance while you create it using Terraform, you need to first create a key in the AWS console, download the .pem file, and copy the key pair name to the clipboard.
The Terraform script requires the correct key name to associate it with the EC2 instance.
If you want to perform any remote action on the instance from Terraform, the following things are required:
The instance should have an IP which Terraform can connect to.
Terraform needs to connect to the instance via SSH or RDP.
Both ways require the key file (the .pem file downloaded earlier) to be used while making the connection.
So the connection is the missing part here in the Terraform configuration.
Consider the following Terraform configuration, which creates one t1.micro instance with a key associated with it and then creates a file on the instance by doing SSH into it.
Network requirements, such as the VPC, subnet, route tables, internet gateway, and security groups, have already been created in the AWS console, and their respective IDs are being used in the Terraform configuration below.
provider "aws" {
region = "<<region>>",
access_key="<<access_key>>",
secret_key="<<secret_key>>"
}
resource "aws_instance" "ins1_ec2" {
ami = "<<ami_id>>"
instance_type = "<<instance_type>>"
//id of the public subnet so that the instance is accessible via internet to do SSH
subnet_id = "<<subnet_id>>"
//id of the security group which has ports open to all the IPs
vpc_security_group_ids=["<<security_group_id>>"]
//assigning public IP to the instance is required.
associate_public_ip_address=true
key_name = "<<key_name>>"
tags = {
Name = "cluster"
}
provisioner "remote-exec" {
inline = [
//Executing command to creating a file on the instance
"echo 'Some data' > SomeData.txt",
]
//Connection to be used by provisioner to perform remote executions
connection {
//Use public IP of the instance to connect to it.
host = "${aws_instance.ins1_ec2.public_ip}"
type = "ssh"
user = "ec2-user"
private_key = "${file("<<pem_file>>")}"
timeout = "1m"
agent = false
}
}
}
resource "aws_eip" "ins1_eip" {
instance = "${aws_instance.ins1_ec2.id}"
vpc = true
}
When you run the terraform apply command, if Terraform is able to SSH to the instance, the output shows the remote-exec provisioner connecting and the inline commands executing.
You might still see errors if the commands being executed fail due to some other error or permission issue. But if the connection step succeeds, it means that Terraform has connected to the instance successfully.
That's the Terraform configuration which will create an EC2 instance, connect to it via SSH, and perform remote execution tasks on it.
The .pem file can also be used to SSH to the instance from your local machine.
This should help you resolve your issue.
More information about connection blocks in Terraform is available in the provisioner connection documentation.
The following did work for me:
Create a security group and make sure you add SSH (port 22) with source 0.0.0.0/0 in the inbound rules.
Copy the ID of the security group and add it to the Terraform config's vpc_security_group_ids list.
Head to the AWS console and either create a new key pair or locate the existing key to use.
Get the name of the key pair from the console and refer to it in the Terraform config as key_name.
If you created a new key, make sure you download the pem file and change its permissions with chmod 400 myPrivateKey.pem.
Once you have applied the Terraform config, just connect as ssh -i myPrivateKey.pem ec2-user@<public-ip>.
Your Terraform config for the EC2 resource will look like:
resource "aws_instance" "my-sample" {
ami = "ami-xxxxx"
instance_type = "t2.micro"
associate_public_ip_address = true
key_name = "MyPrivateKey"
vpc_security_group_ids = ["sg-0f073685ght54lkm"]
}
