How can I limit the hosts to provision? - amazon-ec2

I'm using Vagrant to create EC2 virtual machines and ansible to provision them. I'm using this guide, along with the ec2.py script for inventory.
I am currently provisioning one host with ansible, to which I've given a tag named Purpose (let's say the value is "Machine Purpose") so that I can do this in my ansible file (the ec2.py script provides this):
- hosts: tag_Purpose_Machine_Purpose
My problem is that if I add another server and want to provision it, I can't do that with vagrant provision server2, because that runs the Ansible playbook, which matches the first host too and provisions it as well.
I want to avoid that because, even though the Ansible tasks are mostly idempotent, not all of them are, so I would unnecessarily move some files on node1 and, more importantly, restart the service already running there.
Is there a way to make ansible only provision the servers I specify on the command line?

You can limit the Ansible play with the --limit parameter. It's not very well documented, but you can feed it group names as well as host names.
ansible-playbook ... --limit hostA
Multiple host names separated by commas are also possible:
ansible-playbook ... --limit hostA,hostB,hostC
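Since the ec2.py inventory exposes tags as groups, the same flag also accepts the group name from the question:
ansible-playbook ... --limit tag_Purpose_Machine_Purpose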

You can set it in the Vagrantfile:
v.vm.provision "ansible" do |ansible|
  ansible.limit = 'all' # Change this
end
And you can load it from the command line:
v.vm.provision "ansible" do |ansible|
  ansible.limit = ENV['ANSIBLE_LIMIT'] || 'all'
end
With
ANSIBLE_LIMIT='x' vagrant provision
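Putting the two answers together, a run that touches only the new machine could look like this (a sketch, assuming server2 is both the Vagrant machine name and a name --limit can match in your inventory):
ANSIBLE_LIMIT='server2' vagrant provision server2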

Related

Ansible ask become for a specific host

I need to use the --ask-become-pass parameter for a specific host, and I would prefer it if this could be done in an inventory file. Is this possible?
Some hosts require a password while others don't, so I don't want to add --ask-become-pass when running a playbook; I only need it for specific hosts.
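One common approach is to set the become password as a per-host inventory variable instead of prompting for it; a minimal sketch with placeholder host names (in practice the value would be kept in an Ansible Vault file rather than plain text):
[needs_pass]
host-a.example.com ansible_become_password=SuperSecret
[no_pass]
host-b.example.com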

How to format a simple Ansible inventory file for amazon ec2 hosts?

I am unable to run the example ad hoc command:
ansible -m ping hosts --private-key=~/home/ec2-user/ -u ec2-user
the error is:
[WARNING]: Could not match supplied host pattern, ignoring: hosts
[WARNING]: No hosts matched, nothing to do
The hostname is: ip-10-200-2-21.us-west-2.compute.internal
I can ping the host from my ansible control node by this hostname.
I created the hosts file with the touch command and it looks like this:
ip-10-200-2-21.us-west-2.compute.internal
Do I need to include something more? Do I need to save it with a particular extension? Thank you very much for any help.
To run an ad hoc command, use the following syntax:
ansible <HOST_GROUP> -m <MODULE_NAME>
This assumes your inventory file is in /etc/ansible/hosts. If your inventory file is located somewhere else, you can use the command
ansible <HOST_GROUP> -m <MODULE_NAME> -i <LOCATION_TO_INVENTORY_FILE>
to change the location of the inventory file
What's missing is that your inventory file should have a host group in it, something like:
[ec2]
ip-10-200-2-21.us-west-2.compute.internal
other-ec2-host-that-needs-to-be-pinged.us-west-2.compute.internal
The host group is the text between the square brackets [], which in this case is ec2. Now all the EC2 hosts can be referenced using the ec2 host group.
To ping all the hosts in the ec2 group (assuming the inventory file is /etc/ansible/hosts) run
ansible ec2 -m ping -i /etc/ansible/hosts
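Applied to the original command, that means referencing the group, pointing -i at the inventory file, and passing a real private key file plus the remote user; a sketch with a placeholder key path (the ~/home/ec2-user/ path from the question is not a key file):
ansible ec2 -m ping -i /etc/ansible/hosts --private-key=~/.ssh/my-ec2-key.pem -u ec2-user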

Ansible start-at-task from within Vagrant

Is there a way to use Ansible's --start-at-task from within a Vagrantfile? I want to specify the exact task to start at for debugging purposes. I realize host vars will be missing; this is fine. Other similar questions don't seem to be asking exactly this.
One idea is to set an environment variable that Vagrant reads and passes to the playbook, i.e.:
# export START_TASK='task-name'
# Run: "vagrant provision --provision-with resume"
config.vm.provision "resume", type: "ansible_local" do |resume|
resume.playbook = "playbooks/playbook.yml --start-at-task=ENV['START_TASK']"
end
The playbook option doesn't interpolate the environment variable like that, but that is essentially the command I'm trying to run; I'm basically just trying to read that environment variable and pass it to Vagrant's Ansible provisioner.
Note: the playbook's .retry file only re-runs the entire failed playbook for that single host, not just a single task, so that's not a solution.
Just needed to add the following, which I couldn't find anywhere in Vagrant's documentation.
resume.start_at_task = ENV['START_AT_TASK']
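Put together, the provisioner block from the question becomes roughly this (a sketch; the provisioner name and playbook path are carried over from the question):
config.vm.provision "resume", type: "ansible_local" do |resume|
  resume.playbook = "playbooks/playbook.yml"
  # start_at_task is passed through to ansible-playbook as --start-at-task
  resume.start_at_task = ENV['START_AT_TASK']
end
Invoked as START_AT_TASK='task-name' vagrant provision --provision-with resume.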

ansible command to list all known hosts

Ansible is already installed on a separate EC2 instance.
I need to install Apache on an EC2 instance.
Trying to find a list of known hosts,
I run this command
ansible -i hosts all --list-hosts
and get this message
[WARNING]: Host file not found: hosts
[WARNING]: provided hosts list is empty, only localhost is available
[WARNING]: No hosts matched, nothing to do
--list-hosts lists the hosts that match the given pattern (and any --limit). Its input is the inventory supplied with -i. Your inventory here is a file named hosts, which doesn't exist.
You need to create or generate an inventory file from somewhere; Ansible can't guess what your inventory is.
If you installed Ansible with pip, the /etc/ansible directory holding ansible.cfg and the hosts file is not created for you, so create it yourself:
sudo mkdir /etc/ansible/
sudo touch /etc/ansible/hosts
You can then inspect the (currently empty) default inventory with:
cat /etc/ansible/hosts
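A minimal sketch of what could go into that file, with a placeholder host name standing in for the target instance:
[web]
ip-10-0-0-5.us-west-2.compute.internal
After that, ansible all --list-hosts (with no -i, since /etc/ansible/hosts is the default inventory) lists the host instead of warning that the inventory is empty.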
I got permission to SSH to the target server, so now I can install on it.
If I can log in as ec2-user by being part of a management domain, then I can access any server.

Ansible ignores ansible_ssh_extra_args in inventory or group_vars

I am trying to make Ansible connect to a machine on the local network which needs some extra options passed to SSH invocations. I tried ansible_ssh_extra_args in the inventory, group vars, and host vars, but it is ignored. Here is an example of my inventory file:
[dev]
192.168.10.15
[dev:vars]
ansible_ssh_private_key_file="keys/deploy-myserver"
ansible_ssh_extra_args="-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null"
ansible_ssh_user="deploy"
How can I make ansible SSH connections to a specific host use my own custom SSH options?
Those variables were added in Ansible 2.0, so unless you're running the beta, they won't work yet...
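Until then, a global (not per-host) workaround is setting ssh_args in the [ssh_connection] section of ansible.cfg; a sketch using the options from the question:
[ssh_connection]
ssh_args = -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null
Unlike ansible_ssh_extra_args, this applies to every SSH connection, not just one host.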
