I've got a basic Ansible playbook like so:
---
- name: Provision ec2 servers
  hosts: 127.0.0.1
  connection: local
  roles:
    - aws

- name: Configure {{ application_name }} servers
  hosts: webservers
  sudo: yes
  sudo_user: root
  remote_user: ubuntu
  vars:
    - setup_git_repo: no
    - update_apt_cache: yes
  vars_files:
    - env_vars/common.yml
    - env_vars/remote.yml
  roles:
    - common
    - db
    - memcached
    - web
with the following inventory:
[localhost]
127.0.0.1 ansible_python_interpreter=/usr/local/bin/python
The Provision ec2 servers play does what you'd expect. It creates an ec2 instance; it also creates a host group [webservers] and adds the created instance's IP to it.
The Configure {{ application_name }} servers step then configures that server, installing everything I need.
So far so good, this all does exactly what I want and everything seems to work.
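A minimal sketch of what the provisioning tasks inside a role like my aws role might look like (the key name, AMI and security group below are placeholders, not my actual values): the ec2 module launches the instance, and add_host pushes its IP into an in-memory webservers group.
# roles/aws/tasks/main.yml (sketch)
- name: Launch ec2 instance
  ec2:
    key_name: mykey           # placeholder
    instance_type: t2.micro   # placeholder
    image: ami-123456         # placeholder
    group: webserver-sg       # placeholder security group
    wait: yes
  register: ec2

- name: Add new instance to the webservers group
  add_host:
    hostname: "{{ item.public_ip }}"
    groupname: webservers
  with_items: "{{ ec2.instances }}"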
Here's where I'm stuck. I want to be able to fire up an ec2 instance for different roles. Ideally I'd create a dbserver, a webserver and maybe a memcached server. I'd like to be able to deploy any part(s) of this infrastructure in isolation, e.g. create and provision just the db servers.
The only ways I can think of to make this work... well, they don't work.
I tried simply declaring the host groups without hosts in the inventory:
[webservers]
[dbservers]
[memcachedservers]
but that's a syntax error.
I would be okay with explicitly provisioning each server and declaring the host group it is for, like so:
- name: Provision webservers
  hosts: webservers
  connection: local
  roles:
    - aws

- name: Provision dbservers
  hosts: dbservers
  connection: local
  roles:
    - aws

- name: Provision memcachedservers
  hosts: memcachedservers
  connection: local
  roles:
    - aws
but those groups don't exist until after the respective step is complete, so I don't think that will work either.
I've seen lots about dynamic inventories, but I haven't been able to understand how that would help me. I've also looked through countless examples of ansible ec2 provisioning projects; they all invariably either provision pre-existing ec2 instances or create a single instance and install everything on it.
In the end I realised it made much more sense to just separate the different parts of the stack into separate playbooks, with a full-stack playbook that called each of them.
My remote hosts file stayed largely the same as above. An example of one of the playbooks for a specific part of the stack is:
---
- name: Provision ec2 apiservers
  hosts: apiservers # important bit
  connection: local # important bit
  vars:
    - host_group: apiservers
    - security_group: blah
  roles:
    - aws

- name: Configure {{ application_name }} apiservers
  hosts: apiservers:!127.0.0.1 # important bit
  sudo: yes
  sudo_user: root
  remote_user: ubuntu
  vars_files:
    - env_vars/common.yml
    - env_vars/remote.yml
  vars:
    - setup_git_repo: no
    - update_apt_cache: yes
  roles:
    - common
    - db
    - memcached
    - web
This means that the first step of each layer's play adds a new host to the apiservers group, with the second step (Configure ... apiservers) then being able to exclude the localhost without getting a no hosts matching error.
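Inside the aws role, the add_host step can then be driven by the host_group var from the play; roughly like this (a sketch, assuming the role registers its ec2 result as ec2):
- name: Add new instance to the requested host group
  add_host:
    hostname: "{{ item.public_ip }}"
    groupname: "{{ host_group }}"
  with_items: "{{ ec2.instances }}"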
The wrapping playbook is dead simple, just:
---
- name: Deploy all the {{ application_name }} things!
  hosts: all

- include: webservers.yml
- include: apiservers.yml
I'm very much a beginner with regards to ansible, so please do take this for what it is: some guy's attempt to find something that works. There may be better options, and this could violate best practice all over the place.
The ec2 module supports an "exact_count" parameter, not just a "count" parameter.
It will create (or terminate!) instances that match the specified tags ("instance_tags").
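For example (a sketch; the key name, AMI, security group and tag values are placeholders), the following keeps exactly two instances tagged role=webserver, creating or terminating instances as needed to reach that count:
- name: Ensure exactly two webserver instances exist
  ec2:
    key_name: mykey           # placeholder
    instance_type: t2.micro   # placeholder
    image: ami-123456         # placeholder
    group: webserver-sg       # placeholder security group
    exact_count: 2
    count_tag:
      role: webserver
    instance_tags:
      role: webserver
    wait: yes
  register: ec2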
Related
A server has been created. Initially it only has the root user with a password, and SSH on port 22 (the default).
There is an existing playbook, for example, for a React application.
When you run the playbook, everything gets deployed, but before deploying you need to do some minimal configuration of the server, i.e. create a new sudo user, change the SSH port and copy the SSH key to the server. I think this is probably needed for any server.
After this setup, a YAML file appears in the host_vars directory with the variables for this server (ansible_user, ansible_sudo_pass, etc.).
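For illustration, such a host_vars file might look something like this (a sketch; the user name, port and vault variable are assumptions, not fixed names):
# host_vars/prod.yml
ansible_user: deployuser                      # the newly created sudo user (assumption)
ansible_port: 2222                            # the changed SSH port (assumption)
ansible_ssh_private_key_file: ~/.ssh/id_rsa
ansible_sudo_pass: "{{ vault_sudo_pass }}"    # ideally kept in an Ansible Vault (assumption)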
For example, there are 2 roles: initial-server, deploy-react-app.
And the playbook itself (main.yml) for a specific application:
- name: Deploy
  hosts: prod
  roles:
    - role: initial-server
    - role: deploy-react-app
How can I make it so that, when running ansible-playbook main.yml, the initial-server role is executed as the root user with its password, while the deploy-react-app role runs as the newly created user, connecting with the SSH key rather than the root password? Or is this, in principle, not the correct approach?
Note: using dashes (-) in role names is deprecated, so I fixed that in my example below.
Basically:
- name: initialize server
  hosts: prod
  remote_user: root
  roles:
    - role: initial_server

- name: deploy application
  hosts: prod
  # That one will prevent gathering facts twice but is not mandatory
  gather_facts: false
  remote_user: reactappuser
  roles:
    - role: deploy_react_app
You could also set the ansible_user for each role vars in a single play:
- name: init and deploy
  hosts: prod
  roles:
    - role: initial_server
      vars:
        ansible_user: root
    - role: deploy_react_app
      vars:
        ansible_user: reactappuser
There are other possibilities (using an include_role task). This really depends on your precise requirement.
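For instance, with include_role the connection user can be switched per task inside a single play; a minimal sketch:
- name: init and deploy
  hosts: prod
  gather_facts: false
  tasks:
    - name: run the initial setup as root
      include_role:
        name: initial_server
      vars:
        ansible_user: root

    - name: deploy the app as the new user
      include_role:
        name: deploy_react_app
      vars:
        ansible_user: reactappuser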
I am deploying a VM in Azure using Ansible and using the public IP created in the subsequent tasks. But the time taken to create the public IP is too long, so when the subsequent task is executed, it fails. The time to create the IP also varies; it's not fixed. I want to introduce some logic so that the next task will only run once the IP has been created.
- name: Deploy Master Node
  azure_rm_virtualmachine:
    resource_group: myResourceGroup
    name: testvm10
    admin_username: chouseknecht
    admin_password: <your password here>
    image:
      offer: CentOS-CI
      publisher: OpenLogic
      sku: '7-CI'
      version: latest
Can someone assist me here? It's greatly appreciated.
I think the wait_for module is a bad choice because while it can test for port availability it will often give you false positives because the port is open before the service is actually ready to accept connections.
Fortunately, the wait_for_connection module was designed for exactly the situation you are describing: it will wait until Ansible is able to successfully connect to your target.
This generally requires that you register your Azure VM with your Ansible inventory (e.g. using the add_host module). I don't use Azure, but if I were doing this with OpenStack I might write something like this:
- hosts: localhost
  gather_facts: false
  tasks:
    # This is the task that creates the vm, much like your existing task
    - os_server:
        name: larstest
        cloud: kaizen-lars
        image: 669142a3-fbda-4a83-bce8-e09371660e2c
        key_name: default
        flavor: m1.small
        security_groups: allow_ssh
        nics:
          - net-name: test_net0
        auto_ip: true
      register: myserver

    # Now we take the public ip from the previous task and use it
    # to create a new inventory entry for a host named "myserver".
    - add_host:
        name: myserver
        ansible_host: "{{ myserver.openstack.accessIPv4 }}"
        ansible_user: centos

# Now we wait for the host to finish booting. We need gather_facts: false here
# because otherwise Ansible will attempt to run the `setup` module on the target,
# which will fail if the host isn't ready yet.
- hosts: myserver
  gather_facts: false
  tasks:
    - wait_for_connection:
        delay: 10

# We could add additional tasks to the previous play, but we can also start a
# new play with implicit fact gathering.
- hosts: myserver
  tasks:
    - ...other tasks here...
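An Azure version would follow the same shape: register the result of your VM task, feed the public IP to add_host, then wait_for_connection in a second play. This is a sketch only; the expression that extracts the public IP is left as a placeholder because it depends on the module version you use (you may need a separate public-IP lookup):
- hosts: localhost
  gather_facts: false
  tasks:
    - name: Deploy Master Node
      azure_rm_virtualmachine:
        resource_group: myResourceGroup
        name: testvm10
        admin_username: chouseknecht
        admin_password: <your password here>
        image:
          offer: CentOS-CI
          publisher: OpenLogic
          sku: '7-CI'
          version: latest
      register: masternode

    - name: Register the new VM in the in-memory inventory
      add_host:
        name: testvm10
        ansible_host: "{{ master_public_ip }}"  # placeholder: pull this from the registered result or a public-IP facts module
        ansible_user: chouseknecht

- hosts: testvm10
  gather_facts: false
  tasks:
    - wait_for_connection:
        delay: 10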
I'm starting out with ansible and I'm looking for a way to create a boilerplate project on the server and on the local environment with ansible playbooks.
I want to use ansible templates locally to create some generic files.
But how would I get ansible to execute something locally?
I read something about local_action, but I guess I did not get it right.
This is for the webserver... but how do I take this and create some files locally?
- hosts: webservers
  remote_user: someuser
  tasks:
    - name: create some file
      template: src=~/workspace/ansible_templates/somefile_template.j2 dest=/etc/somefile/apps-available/someproject.ini
You can delegate tasks with the param delegate_to to any host you like, for example:
- name: create some file
  template: src=~/workspace/ansible_templates/somefile_template.j2 dest=/etc/somefile/apps-available/someproject.ini
  delegate_to: localhost
See Playbook Delegation in the docs.
If your playbook should in general run locally and no external hosts are involved though, you can simply create a group which contains localhost and then run the playbook against this group. In your inventory:
[local]
localhost ansible_connection=local
and then in your playbook:
hosts: local
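Putting it together with the template task from the question, a playbook run entirely on the local machine could look like this (a sketch reusing the question's paths):
- hosts: local
  tasks:
    - name: create some file
      template: src=~/workspace/ansible_templates/somefile_template.j2 dest=/etc/somefile/apps-available/someproject.ini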
Ansible has a local_action directive to support these scenarios which avoids the localhost and/or ansible_connection workarounds and is covered in the Delegation docs.
To modify your original example to use local_action:
- name: create some file
  local_action: template src=~/workspace/ansible_templates/somefile_template.j2 dest=/etc/somefile/apps-available/someproject.ini
which looks cleaner.
If you cannot do/allow SSH to localhost, you can split the playbook into local actions and remote actions.
The connection: local says to not use SSH for a playbook, as shown here: https://docs.ansible.com/ansible/latest/user_guide/playbooks_delegation.html#local-playbooks
Example:
# myplaybook.yml
- hosts: remote_machines
  tasks:
    - debug: msg="do stuff in the remote machines"

- hosts: 127.0.0.1
  connection: local
  tasks:
    - debug: msg="ran in local ansible machine"

- hosts: remote_machines
  tasks:
    - debug: msg="do more stuff in remote machines"
Does anyone know how to do something (like wait for port / boot of the managed node) BEFORE gathering facts? I know I can turn gathering facts off
gather_facts: no
and THEN wait for the port, but what if I need the facts while also still needing to wait until the node boots up?
Gathering facts is equivalent to running the setup module. You can manually gather facts by running it. It's not documented, but simply add a task like this:
- name: Gathering facts
  setup:
In combination with gather_facts: no on playbook level, the facts will only be fetched when the above task is executed.
Both in an example playbook:
- hosts: all
  gather_facts: no
  tasks:
    - name: Some task executed before gathering facts
      # whatever task you want to run

    - name: Gathering facts
      setup:
Something like this should work:
- hosts: my_hosts
  gather_facts: no
  tasks:
    - name: wait for SSH to respond on all hosts
      local_action: wait_for port=22

    - name: gather facts
      setup:

    - continue with my tasks...
The wait_for will execute locally on your ansible host, waiting for the servers to respond on port 22, then the setup module will perform fact gathering, after which you can do whatever else you need to do.
I was trying to figure out how to provision a host from ec2, wait for ssh to come up, and then run my playbook against it. Which is basically the same use case as you have. I ended up with the following:
- name: Provision App Server from Amazon
  hosts: localhost
  gather_facts: False
  tasks:
    # #### call ec2 provisioning tasks here ####

    - name: Add new instance to host group
      add_host: hostname="{{ item.private_ip }}" groupname="appServer"
      with_items: ec2.instances

- name: Configure App Server
  hosts: appServer
  remote_user: ubuntu
  gather_facts: True
  tasks: ----configuration tasks here----
I think the ansible terminology is that I have two plays in a playbook, each operating on a different group of hosts (localhost, and the appServer group)
Here is my playbook:
- name: Install MySQL with replication
  hosts: mysql-master:mysql-slave
  user: root
  sudo: false
  roles:
    - common
    - admin-users
    - generic-directories
    - { role: iptables, tags: [ 'mysql-iptables' ] }
    - mysql
I have iptables tasks for different ports, and I want to run the tasks depending on the group of servers.
I have tagged the iptables tasks based on the group.
When I run the playbook, instead of playing only the tagged tasks, it runs through all the tasks defined in the iptables role.
Please let me know if I am doing anything wrong here.
In practice, roles should not contain code or configuration used only by you. Try to develop roles as if you were going to publish them; doing this you will create more generic and useful roles.
With an iptables role, what you want in the end is to open ports / change the firewall configuration.
The role should contain tasks that allow configuration from the playbook:
---
- name: iptables | Open ports
  command: 'open port {{ item.protocol }} {{ item.port }}'
  with_items: "{{ iptables_conf }}"
  tags:
    - iptables
then your playbook:
- name: Install MySQL with replication
  hosts: mysql-master:mysql-slave
  user: root
  sudo: false
  vars:
    - iptables_conf:
        - { protocol: tcp, port: 3307 }
        - { protocol: tcp, port: 3306 }
  roles:
    - common
    - admin-users
    - generic-directories
    - iptables
    - mysql
Hope you like opinionated software all the way: https://github.com/ansible/ansible/issues/3283 .
If I'm reading that correctly, you're experiencing a feature, with everything in that role being tagged for you and your CLI tag specification subsequently matching all those tasks. I hate this feature. It's stupid. "Application" vs. "selection" with respect to tags is an obvious first question when you're initially exposed to Ansible tags. It should have a more flexible answer, or at least a nod in the docs.
I would suggest a different way of organizing the versatile iptables role. First, consider not having the role if it's wrapping a very sparse amount of tasks. I would recommend for roles to have meaning to you, and not be module-adapters. So maybe the sql role can handle sql-specific rules in a separate tasks file.
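For the first option, the sql role's task list could simply pull in a dedicated file for its firewall rules; a stub (the file and tag names are assumptions):
# roles/sql/tasks/main.yml
- include: firewall.yml
  tags:
    - sql-iptables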
Otherwise, use a role parameter which can then be used to load variables dynamically (e.g. the list of firewall rules). Here's what that would look like, stubbed out:
Playbook:
---
- hosts: loc
  roles:
    - { role: does-too-much, focus_on: 'specific-kind' }
Role tasks/main.yml:
---
- include_vars: "{{ focus_on }}.yml"

- debug:
    msg: "ok - {{ item }}"
  with_items: stuff
Variables vars/specific-kind.yml:
---
stuff:
  - b
  - c