I am working with Terraform and trying to execute a bash script using user data. Below is my code:
resource "aws_instance" "web_server" {
ami = var.centos
instance_type = var.instance-type
subnet_id = aws_subnet.private.id
private_ip = var.web-private-ip
associate_public_ip_address = true
user_data = <<-EOF
#!/bin/bash
yum install httpd -y
echo "hello world" > /var/www/html/index.html
yum update -y
systemctl start httpd
firewall-cmd --zone=public --permanent --add-service=http
firewall-cmd --zone=public --permanent --add-service=https
firewall-cmd --reload
EOF
}
However, when I navigate to the public IP I do not see the "hello world" message, and I also do not get a response from the server. Is there something I am missing here? I've tried going straight through the AWS console and the user data is unsuccessful there too.
I verified your user data on my CentOS instance and your script is correct. However, the issue is probably down to two things:
subnet_id = aws_subnet.private.id suggests that you've placed your instance in a private subnet. To connect to your instance from the internet, it must be in a public subnet.
There is no vpc_security_group_ids specified, which means the default SG from the VPC is used, and that blocks inbound internet traffic by default.
Also, I'm not sure what you want to do with private_ip = var.web-private-ip. It's confusing.
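If it helps, here is a rough way to sanity-check both points from the AWS CLI (a sketch only; the subnet and security group IDs below are placeholders you would replace with your own). A public subnet should show an igw-... entry among its routes, and the security group attached to the instance should show an inbound rule for port 80:
# Check whether the subnet's route table points at an internet gateway
# (assumes the subnet has an explicit route table association)
aws ec2 describe-route-tables \
  --filters "Name=association.subnet-id,Values=subnet-xxxxxxxx" \
  --query 'RouteTables[].Routes[].GatewayId'

# Check whether the instance's security group allows inbound HTTP
aws ec2 describe-security-groups \
  --group-ids sg-xxxxxxxx \
  --query 'SecurityGroups[].IpPermissions'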
I am trying to copy a directory to a new EC2 instance using Terraform
provisioner "local-exec" {
command = "scp -i ~/.ssh/id_rsa -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -r ../ansible ubuntu#${self.public_ip}:~/playbook_dir"
}
But after the instance is created I get an error
Error running command 'sleep 5; scp -i ~/.ssh/id_rsa -o StrictHostKeyChecking=no -o
│ UserKnownHostsFile=/dev/null -r ../ansible ubuntu@54.93.82.73:~/playbook_dir': exit status 1. Output:
│ ssh: connect to host 54.93.82.73 port 22: Connection refused
│ lost connection
The main thing is that if I copy the command into a terminal and replace the IP, it works. Why does that happen? Please help me figure it out.
I read in the documentation that the sshd service may not work correctly right after the instance is created, so I added a sleep 5 command before the scp, but it hasn't worked.
I have tried the same in my local environment but, unfortunately, when using the local-exec provisioner in aws_instance directly I also got the same error message, and I'm honestly not sure of the details of it.
However, to work around the issue you can use a null_resource with the local-exec provisioner and the same command (including the sleep), and it works.
Terraform code
data "aws_ami" "ubuntu" {
most_recent = true
filter {
name = "name"
values = ["ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*"]
}
filter {
name = "virtualization-type"
values = ["hvm"]
}
owners = ["099720109477"] # Canonical
}
resource "aws_key_pair" "stackoverflow" {
key_name = "stackoverflow-key"
public_key = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKml4tkIVsa1JSZ0OSqSBnF+0rTMWC5y7it4y4F/cMz6"
}
resource "aws_instance" "stackoverflow" {
ami = data.aws_ami.ubuntu.id
instance_type = "t2.micro"
subnet_id = var.subnet_id
vpc_security_group_ids = var.vpc_security_group_ids ## Must allow SSH inbound
key_name = aws_key_pair.stackoverflow.key_name
tags = {
Name = "stackoverflow"
}
}
resource "aws_eip" "stackoverflow" {
instance = aws_instance.stackoverflow.id
vpc = true
}
output "public_ip" {
value = aws_eip.stackoverflow.public_ip
}
resource "null_resource" "scp" {
provisioner "local-exec" {
command = "sleep 10 ;scp -i ~/.ssh/aws-stackoverflow -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -r ~/test/sub-test-dir ubuntu#${aws_eip.stackoverflow.public_ip}:~/playbook_dir"
}
}
Code In Action
aws_key_pair.stackoverflow: Creating...
aws_key_pair.stackoverflow: Creation complete after 0s [id=stackoverflow-key]
aws_instance.stackoverflow: Creating...
aws_instance.stackoverflow: Still creating... [10s elapsed]
aws_instance.stackoverflow: Still creating... [20s elapsed]
aws_instance.stackoverflow: Still creating... [30s elapsed]
aws_instance.stackoverflow: Still creating... [40s elapsed]
aws_instance.stackoverflow: Creation complete after 42s [id=i-006c17b995b9b7bd6]
aws_eip.stackoverflow: Creating...
aws_eip.stackoverflow: Creation complete after 1s [id=eipalloc-0019932a06ccbb425]
null_resource.scp: Creating...
null_resource.scp: Provisioning with 'local-exec'...
null_resource.scp (local-exec): Executing: ["/bin/sh" "-c" "sleep 10 ;scp -i ~/.ssh/aws-stackoverflow -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -r ~/test/sub-test-dir ubuntu@3.76.153.108:~/playbook_dir"]
null_resource.scp: Still creating... [10s elapsed]
null_resource.scp (local-exec): Warning: Permanently added '3.76.153.108' (ED25519) to the list of known hosts.
null_resource.scp: Creation complete after 13s [id=3541365434265352801]
Verification Process
Local directory and files
$ ls ~/test/sub-test-dir
some_test_file
$ cat ~/test/sub-test-dir/some_test_file
local exec is not nice !!
Files and directory on the created instance
$ ssh -i ~/.ssh/aws-stackoverflow ubuntu@$(terraform output -raw public_ip)
The authenticity of host '3.76.153.108 (3.76.153.108)' can't be established.
ED25519 key fingerprint is SHA256:8dgDXB/wjePQ+HkRC61hTNnwaSBQetcQ/10E5HLZSwc.
This key is not known by any other names
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '3.76.153.108' (ED25519) to the list of known hosts.
Welcome to Ubuntu 20.04.5 LTS (GNU/Linux 5.15.0-1028-aws x86_64)
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
System information as of Sat Feb 11 00:25:13 UTC 2023
System load: 0.0 Processes: 98
Usage of /: 20.8% of 7.57GB Users logged in: 0
Memory usage: 24% IPv4 address for eth0: 172.31.6.219
Swap usage: 0%
* Ubuntu Pro delivers the most comprehensive open source security and
compliance features.
https://ubuntu.com/aws/pro
* Introducing Expanded Security Maintenance for Applications.
Receive updates to over 25,000 software packages with your
Ubuntu Pro subscription. Free for personal use.
https://ubuntu.com/aws/pro
Expanded Security Maintenance for Applications is not enabled.
0 updates can be applied immediately.
Enable ESM Apps to receive additional future security updates.
See https://ubuntu.com/esm or run: sudo pro status
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.
ubuntu@ip-172-31-6-219:~$ cat ~/playbook_dir/some_test_file
local exec is not nice !!
I'm trying to put together a simple AWS CLI command that can run a shell command on multiple instances.
I know I first need to get the list of instance IDs:
aws ec2 describe-instances --filter "Name=tag:Group,Values=Development" --query 'Reservations[].Instances[].[InstanceId]' --output text
I then will have to assign them to an array, then loop through each instance ID and send the command.
Does the AWS CLI have an option to send a shell command to an instance with a specific ID?
Something like this:
aws ssm send-command --instance-ids "i-xxxxxxxxxxxxxxxx" --document-name "shellscript"
I keep getting this error:
An error occurred (InvalidInstanceId) when calling the SendCommand operation:
I've made sure that the SSM agent is running on that specific instance and made sure everything is correct according to the docs.
You can use ssm send-command.
A sample command to see the IP address of an instance:
aws ssm send-command --instance-ids "your id's" --document-name "AWS-RunShellScript" --comment "IP config" --parameters "commands=ifconfig" --output text
Modify the command as per your needs.
In case you get the error: this can happen when you don't have SSM set up on the instance you're trying to access. For a list of instances where you can run SSM commands, run:
aws ssm describe-instance-information --output text
See: InvalidInstanceId: An error occurred (InvalidInstanceId) when calling the SendCommand operation.
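To cover the looping part of the question, a minimal sketch (untested, with the tag filter and the uptime command as placeholders) could look like this; note that send-command also accepts several --instance-ids in one call, so the loop is optional:
#!/bin/bash
# Collect the IDs of every instance tagged Group=Development
ids=$(aws ec2 describe-instances \
  --filters "Name=tag:Group,Values=Development" \
  --query 'Reservations[].Instances[].InstanceId' \
  --output text)

# Send the same shell command to each instance through SSM
for id in $ids; do
  aws ssm send-command \
    --instance-ids "$id" \
    --document-name "AWS-RunShellScript" \
    --comment "run uptime on $id" \
    --parameters "commands=uptime"
done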
I was able to create a script with Python using Boto3.
import boto3
import botocore
import paramiko
tagkey = 'Environment'
tagvalue = 'DEV'
# list_instances returns the public DNS names of the instances that carry the given tag
def list_instances(tagkey, tagvalue):
    ec2client = boto3.client('ec2')
    response = ec2client.describe_instances(
        Filters=[
            {
                'Name': 'tag:' + tagkey,
                'Values': [tagvalue]
            }
        ]
    )
    instancelist = []
    for reservation in response["Reservations"]:
        for instance in reservation["Instances"]:
            instancelist.append(instance["PublicDnsName"])
    return instancelist

# Results of the function get stored in a list.
instances = list_instances(tagkey, tagvalue)

key = paramiko.RSAKey.from_private_key_file("/home/ec2-user/key.pem")
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())

# Loop through all the instances in the list
for instance_ip in instances:
    # Connect/ssh to an instance
    try:
        # Here 'ec2-user' is the user name and 'instance_ip' is the public address of the EC2 instance
        client.connect(hostname=instance_ip, username="ec2-user", pkey=key)
        # Execute a command after connecting/ssh to an instance
        stdin, stdout, stderr = client.exec_command("touch test")
        # Close the client connection once the job is done
        client.close()
        print("Command sent:", instance_ip)
    except Exception as e:
        print(e)
I have installed Magento 1.9.0.1 on the docker MGT development environment with two docker containers. The idea is for all e-mails produced by the Magento container to be caught by the MailHog container's SMTP.
docker run -d -p 8025:8025 -p 1025:1025 --name smtp mailhog/mailhog
docker run -d --net=bridge --restart=always --privileged -h mgt-dev-56 --link smtp --name mgt-dev-56 -it -p 80:80 -p 443:443 -p 22:22 -p 3306:3306 -p 3333:3333 mgtcommerce/mgt-development-environment-5.6
I have named the MailHog container smtp and have linked it via the --link smtp parameter on the mgt-dev-56 container. Both container applications work via their respective URLs, magento1.dev and 127.0.0.1:8025. However, I cannot get the smtp container to catch any of the e-mails generated by the mgt-dev-56 container.
I'm not sure if I need to configure Postfix to point to a certain port or IP. I have confirmed that there is network connectivity between the mgt-dev-56 and smtp containers.
Has anyone come across this issue before?
Do I need to modify the Postfix configuration?
Here is the main.cf of mgt-dev-56 container
root@mgt-dev-56:/etc/postfix# vi main.cf
smtpd_banner = $myhostname ESMTP $mail_name (Debian/GNU)
biff = no
append_dot_mydomain = no
readme_directory = no
smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
smtpd_use_tls=yes
smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
smtpd_relay_restrictions = permit_mynetworks permit_sasl_authenticated defer_unauth_destination
myhostname = mgt-dev-56
myorigin = $myhostname
alias_maps = hash:/etc/aliases
alias_database = hash:/etc/aliases
mydestination = mgt-dev-56, localhost.localdomain, , localhost
relayhost = 172.17.0.3:1025
mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
mailbox_size_limit = 0
recipient_delimiter = +
inet_interfaces = all
Here are the environment variables of the mgt-dev-56 container; BTW, 172.17.0.3 is the IP address of the smtp container.
root@mgt-dev-56:/etc/postfix# env
SMTP_PORT_1025_TCP_ADDR=172.17.0.3
HOSTNAME=mgt-dev-56
SMTP_PORT_8025_TCP=tcp://172.17.0.3:8025
TERM=xterm
SMTP_ENV_no_proxy=*.local, 169.254/16
SMTP_PORT_1025_TCP_PORT=1025
SMTP_PORT_8025_TCP_PORT=8025
SMTP_PORT_1025_TCP_PROTO=tcp
SMTP_PORT=tcp://172.17.0.3:1025
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PWD=/etc/postfix
SMTP_PORT_8025_TCP_PROTO=tcp
SHLVL=1
HOME=/root
no_proxy=*.local, 169.254/16
SMTP_PORT_8025_TCP_ADDR=172.17.0.3
SMTP_NAME=/mgt-dev-56/smtp
SMTP_PORT_1025_TCP=tcp://172.17.0.3:1025
_=/usr/bin/env
OLDPWD=/root/cloudpanel
I have replaced the relayhost configuration parameter with the actual IP and port number instead of using the SMTP_PORT_* environment variables, as main.cf and Postfix do not expand environment variables. MailHog now picks up all e-mails created via the command line and Magento.
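For reference, the change amounts to roughly the following inside the mgt-dev-56 container (a sketch; since the containers are linked, the hostname smtp should resolve via /etc/hosts, so it could be used instead of the hard-coded 172.17.0.3):
# Point Postfix at the MailHog SMTP listener and reload it
postconf -e 'relayhost = [smtp]:1025'
postfix reload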
Is there any way to run knife vsphere for unattended execution? I have a deploy shell script which I am using to help me:
cat deploy-production-20-vm.sh
#!/bin/bash
##############################################
# These are machine dependent variables (need to change)
##############################################
HOST_NAME=$1
IP_ADDRESS="$2/24"
CHEF_BOOTSTRAP_IP_ADDRESS="$2"
RUNLIST=\"$3\"
CHEF_HOST="$HOSTNAME.my.lan"
##############################################
# These are pseudo-environment independent variables (could change)
##############################################
DATASTORE="dcesxds04"
##############################################
# These are environment dependent variables (should not change per env)
##############################################
TEMPLATE="\"CentOS\""
NETWORK="\"VM Network\""
CLUSTER="ProdCluster01" #knife-vsphere calls this a resource pool
GATEWAY="10.7.20.1"
DNS="\"10.7.20.11,10.8.20.11,10.6.20.11\""
##############################################
# the magic
##############################################
VM_CLONE_CMD="knife vsphere vm clone $HOST_NAME \
--template $TEMPLATE \
--cips $IP_ADDRESS \
--vsdc MarkleyDC \
--datastore $DATASTORE \
--cvlan $NETWORK \
--resource-pool $CLUSTER \
--cgw $GATEWAY \
--cdnsips $DNS \
--start true \
--bootstrap true \
--fqdn $CHEF_BOOTSTRAP_IP_ADDRESS \
--chost $HOST_NAME \
--cdomain my.lan \
--run-list=$RUNLIST"
echo $VM_CLONE_CMD
eval $VM_CLONE_CMD
Which echoes (as a single line):
knife vsphere vm clone dcbsmtest --template "CentOS" --cips 10.7.20.84/24
--vsdc MarkleyDC --datastore dcesxds04 --cvlan "VM Network"
--resource-pool ProdCluster01 --cgw 10.7.20.1
--cdnsips "10.7.20.11,10.8.20.11,10.6.20.11" --start true
--bootstrap true --fqdn 10.7.20.84 --chost dcbsmtest --cdomain my.lan
--run-list="role[my-env-prod-server]"
When it runs it outputs:
Cloning template CentOS Template to new VM dcbsmtest
Finished creating virtual machine dcbsmtest
Powered on virtual machine dcbsmtest
Waiting for sshd...done
Doing old-style registration with the validation key at /home/me/chef-repo/.chef/our-validator.pem...
Delete your validation key in order to use your user credentials instead
Connecting to 10.7.20.84
root@10.7.20.84's password:
If I step away from my desk and it prompts for a password, then sometimes it times out, the connection is lost, and Chef doesn't bootstrap. Also, I would like to be able to automate all of this so it can scale elastically based on system needs, which won't work with attended execution.
The idea I am going to run with, unless a better solution is provided, is to have a default password in the template, pass it on the command line to knife, and have Chef change the password once the build is complete, minimizing the exposure of a hard-coded password in the bash script controlling knife.
Update: I wanted to add that this is working like a charm. Ideally we could have changed the CentOS template we were deploying, but that wasn't possible here, so this is a fine alternative (as we changed the root password after deploy anyhow).
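For anyone taking the same approach, the change to the clone command is roughly the following (a hedged sketch only: --ssh-user / --ssh-password are the bootstrap options I had in mind for knife-vsphere, so check knife vsphere vm clone --help for the exact flag names in your version):
# Hypothetical: pass the template's default root password so the bootstrap can run unattended
DEFAULT_PASSWORD="$4"   # better still, read this from a credentials store rather than the script
VM_CLONE_CMD="knife vsphere vm clone $HOST_NAME \
  --template $TEMPLATE \
  --ssh-user root \
  --ssh-password $DEFAULT_PASSWORD \
  --bootstrap true \
  --run-list=$RUNLIST"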
I have a very long command with many arguments, and somehow it's not working the way it should. The following knife command connects to a remote vCenter and creates a VM called node1. How do I wrap the command and run it inside Ruby? Am I doing something wrong?
var_name = 'node1'
var_folder = 'folder1'
var_datastore = 'datastore1'
var_template_file = 'template_foo'
var_template = 'foo'
var_location = 'US'
cmd = 'knife vsphere vm clone var_name --dest-folder var_folder --datastore var_datastore --template-file var_template_file --template var_template -f var_location'
system(cmd)
require 'shellwords'
cmd = "knife vsphere vm clone #{var_name.shellescape} --dest-folder #{var_folder.shellescape} --datastore #{var_datastore.shellescape} --template-file #{var_template_file.shellescape} --template #{var_template.shellescape} -f #{var_location.shellescape}"
In your specific case it would work even without shellescape, but better safe than sorry.
Variables are not resolved in your command. Try using #{var_name} etc. for all the variables in the cmd variable (string interpolation only works inside double-quoted strings).