Consider this use case, please: as part of our test framework, we have to deploy some resources and then execute a script before we can start using a resource for testing. A typical example is the AirView RDS module. The RDS is often provisioned with a flyway module, which has an SSM document for creating the DB. What we had been doing is call the RDS module and the flyway module and apply them in a Terraform workspace. Once they are successfully deployed (i.e. applied), a human would need to go through the AWS console and execute the script that creates the NGCS database (this is just an example). After that, it's ready to be used for testing. I would like to find a way to avoid this human-interaction step. So the order of creation and actions should be:
1. Provision the DB cluster
2. Provision a utility EC2 instance (where the flyway script can run)
3. Execute flyway
How can that be done in an automated way? Further, if I have a few resources that also need a similar setup (maybe not flyway, but some kind of script), how can I control the sequence of activities, from creating the resources to running scripts on them?
Try using Terraform provisioners. The aws_instance resource, which I suppose you are using, fully supports this feature. With a provisioner, you can run any command you want right after instance creation.
Don't forget to apply the connection settings; you can read more in the Terraform docs on provisioners and connections.
Finally, you should end up with something close to this:
resource "aws_instance" "my_instance" {
  ami                    = "${var.instance_ami}"
  instance_type          = "${var.instance_type}"
  subnet_id              = "${aws_subnet.my_subnet.id}"
  vpc_security_group_ids = ["${aws_security_group.my_sg.id}"]
  key_name               = "${aws_key_pair.ec2key.key_name}"

  provisioner "remote-exec" {
    inline = [
      "my commands",
    ]
  }

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = "${file("~/.ssh/id_rsa")}"
  }
}
You need to remember that provisioners are a last resort.
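To also answer the sequencing part of the question: Terraform orders creation through dependencies, so an explicit depends_on plus a remote-exec provisioner gives exactly the DB → utility instance → flyway chain. A sketch, where the resource names and the flyway command line are assumptions:

```hcl
# Hypothetical resource names; the flyway invocation is an assumption.
resource "aws_db_instance" "airview" {
  # ... DB cluster arguments elided ...
}

resource "aws_instance" "utility" {
  # ... AMI, subnet, security groups elided ...

  # Explicit ordering: create the DB before this instance.
  depends_on = [aws_db_instance.airview]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = "${file("~/.ssh/id_rsa")}"
  }

  # Runs only once the instance is reachable, i.e. last in the chain.
  provisioner "remote-exec" {
    inline = [
      "flyway -url=jdbc:postgresql://${aws_db_instance.airview.address}:5432/postgres migrate",
    ]
  }
}
```

The same pattern generalizes: each "resource plus setup script" pair becomes a resource with a provisioner, and depends_on (or implicit references) controls the sequence.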
From the docs, have you tried user_data?
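For instance, a minimal user_data sketch (the script contents are an assumption): cloud-init executes it once at first boot, so no SSH connection or provisioner is needed.

```hcl
resource "aws_instance" "my_instance" {
  ami           = "${var.instance_ami}"
  instance_type = "${var.instance_type}"

  # Runs once at first boot, with no SSH connection required.
  user_data = <<-EOF
    #!/bin/bash
    /opt/flyway/flyway migrate
  EOF
}
```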
I'm creating an EC2 instance using Terraform and I have installed all the prerequisites on my system, but unfortunately when I run "terraform init" it shows this message:
"Initializing provider plugins...
Terraform has been successfully initialized!"
but in the directory there are no modules initialized, and after running terraform plan no plan is shown. From the terraform apply command I also receive this:
"No changes. Your infrastructure matches the configuration.
Terraform has compared your real infrastructure against your configuration and found no differences, so no changes are needed.
Apply complete! Resources: 0 added, 0 changed, 0 destroyed."
I'm new in this field; I kindly need some guidance to launch the EC2 instance.
Using Terraform, I'm trying to create a vsphere_virtual_machine resource. As part of it, I'm trying to find out how to mount virtual disks to a specific folder on the created virtual machine. For example:

disk {
  label = "disk0"
  size  = "100"
}

disk {
  label = "disk1"
  size  = "50"
}
How do I mount disk0 to volume path D:\mysql\conf and disk1 to volume path D:\mysql\databases on a Windows VM created using the Terraform vsphere_virtual_machine resource? Could someone please share your insights here? Thanks in advance!
There's nothing in the vsphere_virtual_machine resource that will handle guest-internal operations like that against the virtual machine, and I'm not aware of any other provider which can do that either.
A couple of workarounds:
Check out the remote-exec provisioner; it will let you run some PowerShell or other CLI commands to perform the task you need.
If you're doing this on a regular basis, check out Packer. It can be used to build out a virtual machine, OS and all. You could establish the disk configuration there, then use Terraform to deploy it.
Lastly, look into configuration management utilities: Ansible, PowerShell DSC, Puppet, Chef, etc. These tools will let you make modifications to the VM after it has been deployed.
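For the first of those options, a sketch of what a remote-exec approach could look like. The WinRM connection details, disk numbers, and partition number are assumptions; the key idea is that mounting at a folder path (rather than a drive letter) is done with PowerShell's Add-PartitionAccessPath.

```hcl
resource "vsphere_virtual_machine" "vm" {
  # ... existing VM and disk arguments ...

  connection {
    type     = "winrm"
    user     = "Administrator"
    password = "${var.admin_password}"
  }

  provisioner "remote-exec" {
    inline = [
      # Bring the extra disk online, partition it, and format it.
      "powershell -Command \"Get-Disk 1 | Initialize-Disk -PartitionStyle GPT -PassThru | New-Partition -UseMaximumSize | Format-Volume -FileSystem NTFS\"",
      # Mount it at a folder path instead of a drive letter.
      "powershell -Command \"New-Item -ItemType Directory -Force D:\\mysql\\conf; Add-PartitionAccessPath -DiskNumber 1 -PartitionNumber 2 -AccessPath D:\\mysql\\conf\"",
    ]
  }
}
```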
I have a shell script which needs to be installed on over 100 Ubuntu instances/servers. What is the best way to install the same script on all instances without logging into each one?
You can use AWS Systems Manager; according to the AWS documentation:
You can send commands to tens, hundreds, or thousands of instances by using the targets parameter (the Select Targets by Specifying a Tag option in the Amazon EC2 console). The targets parameter accepts a Key,Value combination based on Amazon EC2 tags that you specified for your instances. When you execute the command, the system locates and attempts to run the command on all instances that match the specified tags.
You can target instances by tag:
aws ssm send-command --document-name name --targets Key=tag:tag_name,Values=tag_value [...]
or target instance IDs:
aws ssm send-command --document-name name --targets Key=instanceids,Values=ID1,ID2,ID3 [...]
Read the AWS documentation for details.
Thanks
You have several different options when trying to accomplish this task.
As Kush mentioned, AWS Systems Manager is great, but it is tightly coupled to AWS.
Packer - You could use Packer to create an AMI of the servers that has the script installed, or that has already applied whatever the script does.
Configuration management - Ansible/Puppet/Chef. These tools allow you to manage thousands of servers with only a couple of commands. My preference would be Ansible: it is lightweight, the syntax is just YAML, it connects over SSH, and it still allows you to push shell scripts if need be.
I am running Terraform with a private OpenStack cloud to bootstrap new servers. When I try to create new servers (using any method) during the busiest times of operation (weekday afternoons), usually half of the servers fail, and this has nothing to do with Terraform. The issue is that when one of the servers I try to provision fails to complete a provisioner "remote-exec" block without errors (because of my private cloud), my whole terraform apply stops.
I want terraform to totally ignore these failed servers when I run terraform apply so that if I try to provision 20 server and only 1 of them launches successfully, then that one server will run through all the commands I specify in my resource block.
Is there something like an ignore_failed_resources = true line I can add to my resources so that terraform will ignore the servers that fail and run the successful ones to completion?
There's no simple config switch that you can enable to achieve this. Could you be more specific about the problem that's causing the "remote-exec" to fail?
If it's because it's simply refusing to connect, you could switch out your "remote-exec" for a "local-exec" and wrap your command in a script, passing in the hostname of your server as a parameter. The script would then handle initiating the SSH connection and running the required commands. Make sure the script exits gracefully with a code of 0 if an error occurs, so that Terraform will think the script worked correctly.
resource "aws_instance" "web" {
  provisioner "local-exec" {
    command = "./myremotecommand.sh ${self.private_ip}"
  }
}
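A minimal sketch of such a wrapper, written as a shell function for illustration (the remote user, remote command, and TEST-NET default address are assumptions): it attempts the SSH run but always returns 0, so Terraform treats the provisioner as successful and carries on with the remaining servers.

```shell
#!/bin/sh
# Hypothetical body of ./myremotecommand.sh; called with the server address.
provision_host() {
  host="$1"
  # BatchMode avoids hanging on a password prompt; a short timeout
  # keeps dead hosts from stalling the whole apply.
  if ssh -o BatchMode=yes -o ConnectTimeout=5 "ec2-user@$host" 'sudo /tmp/bootstrap.sh'; then
    echo "provisioned $host"
  else
    echo "skipping unreachable host $host" >&2
  fi
  return 0  # never fail, so terraform apply keeps going
}

# usage (from the local-exec provisioner): provision_host "$1"
```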
I'm guessing you already figured out a solution, but adding my solution here for anyone who will encounter this in the future.
Why not simply do this in the command part?

provisioner "local-exec" {
  command = "my command || true"
}

This way it will always return exit code 0, so the shell ignores the failure and Terraform ignores it as well.
If you look at the Terraform local-exec provisioner docs, you can put on_failure = continue, so it will continue on failure.
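In context, that could look like the following sketch (the resource type matches the OpenStack setup in the question; the instance arguments and the command are assumptions):

```hcl
resource "openstack_compute_instance_v2" "worker" {
  # ... instance arguments elided ...

  provisioner "remote-exec" {
    inline     = ["sudo /tmp/bootstrap.sh"]
    on_failure = continue  # a failed script no longer aborts the apply
  }
}
```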
Can anyone please tell me how to run a shell script on all the ec2 instances that are part of an auto scaling group?
The scenario is that I have a script that I want to run on many EC2 instances that are brought up automatically as part of an auto scaling group. The straightforward approach is to SSH into each instance and run the script. I am looking for a way for it to run automatically on all the instances when I run it on one of the EC2 instances, or any better way of doing this.
Thanks in advance.
You'll want to add that shell script to the user data in a NEW launch configuration and then update the Auto Scaling group.
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html#user-data-shell-scripts
Updating the Launch Config
If you want to change the launch configuration for your Auto Scaling group, you must create a launch configuration and then update your Auto Scaling group with the new launch configuration. When you change the launch configuration for your Auto Scaling group, any new instances are launched using the new configuration parameters, but existing instances are not affected.
https://docs.aws.amazon.com/autoscaling/latest/userguide/LaunchConfiguration.html
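If the Auto Scaling group happens to be managed with Terraform (as elsewhere in this thread), the same rollout can be sketched like this. The names, AMI, and script path are placeholders; create_before_destroy lets the new launch configuration replace the old one cleanly.

```hcl
resource "aws_launch_configuration" "web" {
  name_prefix   = "web-"
  image_id      = "ami-0123456789abcdef0" # placeholder
  instance_type = "t3.micro"

  # The shell script runs once, at first boot of each new instance.
  user_data = "${file("scripts/myscript.sh")}"

  lifecycle {
    create_before_destroy = true # build the new config before removing the old
  }
}

resource "aws_autoscaling_group" "web" {
  launch_configuration = "${aws_launch_configuration.web.name}"
  min_size             = 2
  max_size             = 10
  # existing instances keep running; only new launches pick up the script
}
```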
You can implement this in many different ways...
Use the awscli to get all instances in the auto scaling group:
aws autoscaling describe-auto-scaling-groups --auto-scaling-group-names myTestGroup
SSHKit is an interesting tool to run commands on remote servers
or
Invest your time in automating your infrastructure with proper tools like Puppet or Chef. With Puppet's MCollective you can do magic, including what you have asked about.
Update:
When you add an instance to an autoscaling group, a new tag (key aws:autoscaling:groupName, value the name of the assigned autoscaling group) is added, so it's easy to find instances by searching for this tag.
$ asgName=testASG
$ aws ec2 describe-instances --filters Name=tag-key,Values='aws:autoscaling:groupName' Name=tag-value,Values=$asgName --output text --query 'Reservations[*].Instances[*].[InstanceId,PublicIpAddress]'
The output you will get from the command above is the instance ID and public IP:
i-4c42aabc 52.1.x.y
You can use this in your script...
I would do it using Chef. OpsWorks (an AWS product) is Chef plus a lot of extras, and it will do exactly what you want and give you even more flexibility.
Run Command perhaps?
You can use it to invoke scripts as well:
*Nice cheat: select Run Command in the left-side menu of the EC2 section and click the "Run Command" button. Then set up your AWS-RunShellScript and type in your code. At the bottom there is a field labelled "AWS Command Line Interface command"; select the correct platform and copy/paste the command into a script.
$Command_Id = aws ssm send-command --document-name "AWS-RunPowerShellScript" --targets '{\"Key\":\"tag:Name\",\"Values\":[\"RunningTests\"]}' --parameters '{\"commands\":[\"Start-Process \\\"C:\\\\path\\\\to\\\\scripts\\\\LOCAL_RUN.ps1\\\"\"]}' --comment "Run Tests Locally" --timeout-seconds 3800 --region us-east-1 --query 'Command.CommandId' --output text | Out-String
Per the question: use your Auto Scaling group name instead of "RunningTests". Or, in the console, in your "Run Command" setup, select the "Specifying a Tag" radio button, then "Name" and your Auto Scaling group.
*Note: The command above is Windows PowerShell, but you can convert your script to Linux/macOS or whatever by selecting the correct platform in the Run Command setup.
**Note: Ensure your user on that instance has the AmazonSSMFullAccess permission set up to run the commands.
***Note: The SSM Agent comes installed on Windows instances by default. If you are running Linux or something else, you might need to install the SSM Agent.
For a simple implementation you can use Python Fabric (http://www.fabfile.org/). It is a tool to run commands from a local or bastion instance against a list of servers.
Here is a repo which has basic scaffolding and examples. A lot of CI/CD tools have features targeting this requirement, but I found Fabric the easiest to implement for a simple setup.
https://github.com/techsemicolon/python-fabric-ec2-aws