I have set up a basic directory architecture for my Ansible playbooks.
I have defined two roles:
1) www: to manage all the site deployment
2) root: to do the root-related tasks
My root role contains the following tasks:
1) Set up a new site on the target server
2) Start the web server (apache, nginx)
I want to restart my Apache server after the site deployment, and for that I have created a playbook called apache-restart under tasks for the root role. This is what my directory structure looks like.
This is what I am trying in my site.yml:
---
- name: Deploy Application
  hosts: "{{ host }}"
  roles:
    - www
  become: true
  become_user: www-data
  tags:
    - site-deployment

- name: Restart Apache Server
  hosts: "{{ host }}"
  roles:
    - root
  tasks:
    include: roles/root/apache-restart.yml
  become: true
  become_user: root
  tags:
    - site-deployment
When I run this playbook, it throws this error:
ERROR! A malformed block was encountered.
The error appears to have been in '/Users/Atul/Workplace/infra-automation/deployments/LAMP/site.yml': line 18, column 3, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
- name: Restart Apache Server
^ here
Is there any better way so that I can directly include the apache-restart.yml file in my site.yml by specifying the root role? Because if I include only the role, Ansible will start looking for main.yml.
tasks should be a list, so:
tasks:
  - include: roles/root/apache-restart.yml
Rename apache-restart.yml to main.yml in the root/tasks directory and the tasks specified in that file will run automatically when you call the root role.
After renaming the file you can simplify the Restart Apache Server play to:
- name: Restart Apache Server
  hosts: "{{ host }}"
  roles:
    - root
  become: true
  become_user: root
  tags:
    - site-deployment
The Ansible best practice for conditionally restarting a server or daemon is to use handlers.
In your case you'd create a handler called Restart Apache Server in the www role and notify it from the relevant tasks in that role. Using the restart handler in www, you can get rid of the root role altogether.
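For illustration, a minimal sketch of that handler approach; the deployment task name, template path, and service name below are assumptions, not taken from the question:

# roles/www/handlers/main.yml
- name: Restart Apache Server
  service:
    name: apache2        # assumed service name; use httpd on RHEL-family hosts
    state: restarted

# roles/www/tasks/main.yml (excerpt)
- name: Deploy site configuration                      # hypothetical deployment task
  template:
    src: site.conf.j2                                  # hypothetical template
    dest: /etc/apache2/sites-available/site.conf
  notify: Restart Apache Server

The handler only fires when a notifying task reports a change, so Apache is restarted once at the end of the play instead of unconditionally.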
The server is being created. Initially there is the user root, his password, and SSH on port 22 (the default).
There is a written playbook, for example, for a React application.
When you run the playbook, everything is deployed, but before deploying you need to configure the server to a minimum, i.e. create a new sudo user, change the SSH port, and copy the SSH key to the server. I think this is probably needed for any server.
After this setup, a YAML file appears in the host_vars directory with the variables for this server (ansible_user, ansible_sudo_pass, etc.).
For example, there are 2 roles: initial-server, deploy-react-app.
And the playbook itself (main.yml) for a specific application:
- name: Deploy
  hosts: prod
  roles:
    - role: initial-server
    - role: deploy-react-app
How can I make it so that when you run ansible-playbook main.yml, the initial-server role is executed as the root user with his password, and the deploy-react-app role as the newly created user, connecting with the SSH key rather than the root password? Or is this, in principle, not the correct approach?
Note: using dashes (-) in role names is deprecated. I fixed that in my example below.
Basically:
- name: initialize server
  hosts: prod
  remote_user: root
  roles:
    - role: initial_server

- name: deploy application
  hosts: prod
  # This prevents gathering facts twice, but is not mandatory
  gather_facts: false
  remote_user: reactappuser
  roles:
    - role: deploy_react_app
You could also set ansible_user in each role's vars within a single play:
- name: init and deploy
  hosts: prod
  roles:
    - role: initial_server
      vars:
        ansible_user: root
    - role: deploy_react_app
      vars:
        ansible_user: reactappuser
There are other possibilities (using an include_role task). This really depends on your precise requirement.
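For example, a sketch of the include_role variant, reusing the role and user names from the question; treat it as one possible layout rather than a recommendation:

- name: init and deploy via include_role
  hosts: prod
  gather_facts: false
  tasks:
    - name: run initial_server as root
      include_role:
        name: initial_server
      vars:
        ansible_user: root

    - name: run deploy_react_app as the application user
      include_role:
        name: deploy_react_app
      vars:
        ansible_user: reactappuser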
I need to deploy my apps to two VMs using an Ansible playbook. Each VM serves a different purpose and my app consists of several different components, so each component has its own set of tasks in the playbook.
My inventory file looks like this:
[vig]
192.168.10.11
[websvr]
192.168.10.22
Ansible playbooks only have one place for declaring hosts, which is right at the top, and all the tasks execute against the specified hosts. But what I hope to achieve is:
Tasks 1 to 10 execute against the vig group
Tasks 11 to 20 execute against the websvr group
All in the same playbook, as in: ansible-playbook -i <inventory file> deploy.yml.
Is that possible? Do I have to use Ansible roles to achieve this?
Playbooks can have multiple plays (see https://docs.ansible.com/ansible/latest/user_guide/playbooks_intro.html).
Playbooks can contain multiple plays. You may have a playbook that
targets first the web servers, and then the database servers. For
example:
---
- hosts: webservers
  remote_user: root

  tasks:
    - name: ensure apache is at the latest version
      yum:
        name: httpd
        state: latest

    - name: write the apache config file
      template:
        src: /srv/httpd.j2
        dest: /etc/httpd.conf

- hosts: databases
  remote_user: root

  tasks:
    - name: ensure postgresql is at the latest version
      yum:
        name: postgresql
        state: latest

    - name: ensure that postgresql is started
      service:
        name: postgresql
        state: started
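Applied to the inventory above, a minimal deploy.yml could therefore contain two plays; the task bodies below are placeholders standing in for the real tasks 1 to 10 and 11 to 20:

---
# Play 1: runs only against the vig group (tasks 1-10 go here)
- hosts: vig
  tasks:
    - name: placeholder task for vig hosts
      debug:
        msg: "runs on 192.168.10.11"

# Play 2: runs only against the websvr group (tasks 11-20 go here)
- hosts: websvr
  tasks:
    - name: placeholder task for websvr hosts
      debug:
        msg: "runs on 192.168.10.22"

Both plays run in order from the single ansible-playbook -i <inventory file> deploy.yml invocation, and no roles are required for this.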
I have an Ansible playbook which looks similar to the code below:
---
- hosts: localhost
  connection: local
  tasks:
    - name: "Create custom fact directory
      file:
        path: "/etc/ansible/facts.d"
        state: "directory"
    - name: "Insert custom fact file"
      copy:
        src: custom_fact.fact
        dest: /etc/ansible/facts.d/custom_fact.fact
        mode: 0755
  roles:
    - role1
    - role2
When I run the playbook with the ansible-playbook command, only the roles run, but the tasks do not.
If I remove the roles from the playbook, the tasks do run.
How can I make the tasks run before the roles?
Put the tasks in a pre_tasks section, which runs before the roles.
You may also find post_tasks useful, which runs tasks after the roles.
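As a sketch, the playbook from the question restructured with pre_tasks (content unchanged apart from the section name):

- hosts: localhost
  connection: local
  pre_tasks:
    - name: "Create custom fact directory"
      file:
        path: "/etc/ansible/facts.d"
        state: "directory"
    - name: "Insert custom fact file"
      copy:
        src: custom_fact.fact
        dest: /etc/ansible/facts.d/custom_fact.fact
        mode: 0755
  roles:
    - role1
    - role2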
Correct the indentation
- hosts: localhost
  connection: local
  tasks:
    - name: "Create custom fact directory
      file:
        path: ...
I need to use a provisioning tool called vcommander.
It looks like this tool supports running Ansible locally on the provisioned host only, and you need to provide it only the playbook to run.
I already have working Ansible roles which include tasks and templates; for example, the ntp role looks roughly like this:
ls -1 roles/ntp/
defaults
LICENSE
meta
README.md
tasks
templates
the main task looks something like this:
cat roles/ntp/tasks/main.yml
---
- name: "Configure ntp.conf"
  template:
    src: "ntp.conf.j2"
    dest: "/etc/ntp.conf"
    mode: "0644"
  become: True

- name: Updating time zone
  shell: tzdata-update

- name: Ensure NTP-related packages are installed.
  package:
    name: ntp
    state: present

- name: Ensure NTP is running and enabled as configured.
  service:
    name: ntpd
    state: started
    enabled: yes
and the template looks like this:
cat roles/ntp/templates/ntp.conf.j2
{{ ansible_managed }}
restrict 127.0.0.1
restrict -6 ::1
server 127.127.1.0
fudge 127.127.1.0 stratum 10
driftfile /var/lib/ntp/drift
keys /etc/ntp/keys
Is there a way to include the main.yml, the ntp.conf.j2 template, and the other files belonging to the role in a single yml file?
Please provide an example.
Are you asking for a playbook that applies your ntp role to localhost?
It would look like this:
cat playbook.yml
- hosts: localhost
  roles:
    - ntp
See docs: https://docs.ansible.com/ansible/latest/user_guide/playbooks_reuse_roles.html#using-roles
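If vcommander simply invokes ansible-playbook on the provisioned host, a purely local run of that playbook would look roughly like this (assuming the roles directory sits next to playbook.yml):

ansible-playbook -i localhost, -c local playbook.yml

The trailing comma makes localhost an inline inventory entry, and -c local skips SSH entirely.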
I've got a basic Ansible playbook like so:
---
- name: Provision ec2 servers
  hosts: 127.0.0.1
  connection: local
  roles:
    - aws

- name: Configure {{ application_name }} servers
  hosts: webservers
  sudo: yes
  sudo_user: root
  remote_user: ubuntu
  vars:
    - setup_git_repo: no
    - update_apt_cache: yes
  vars_files:
    - env_vars/common.yml
    - env_vars/remote.yml
  roles:
    - common
    - db
    - memcached
    - web
with the following inventory:
[localhost]
127.0.0.1 ansible_python_interpreter=/usr/local/bin/python
The Provision ec2 servers task does what you'd expect. It creates an ec2 instance; it also creates a host group [webservers] and adds the created instance IP to it.
The Configure {{ application_name }} servers step then configures that server, installing everything I need.
So far so good, this all does exactly what I want and everything seems to work.
Here's where I'm stuck. I want to be able to fire up ec2 instances for different roles. Ideally I'd create a dbserver, a webserver and maybe a memcached server. I'd like to be able to deploy any part(s) of this infrastructure in isolation, e.g. create and provision just the db servers.
The only ways I can think of to make this work... well, they don't work.
I tried simply declaring the host groups without hosts in the inventory:
[webservers]
[dbservers]
[memcachedservers]
but that's a syntax error.
I would be okay with explicitly provisioning each server and declaring the host group it is for, like so:
- name: Provision webservers
  hosts: webservers
  connection: local
  roles:
    - aws

- name: Provision dbservers
  hosts: dbservers
  connection: local
  roles:
    - aws

- name: Provision memcachedservers
  hosts: memcachedservers
  connection: local
  roles:
    - aws
but those groups don't exist until after the respective step is complete, so I don't think that will work either.
I've seen lots about dynamic inventories, but I haven't been able to understand how that would help me. I've also looked through countless examples of ansible ec2 provisioning projects, they are all invariably either provisioning pre-existing ec2 instances, or just create a single instance and install everything on it.
In the end I realised it made much more sense to just separate the different parts of the stack into separate playbooks, with a full-stack playbook that called each of them.
My remote hosts file stayed largely the same as above. An example of one of the playbooks for a specific part of the stack is:
---
- name: Provision ec2 apiservers
  hosts: apiservers       # important bit
  connection: local       # important bit
  vars:
    - host_group: apiservers
    - security_group: blah
  roles:
    - aws

- name: Configure {{ application_name }} apiservers
  hosts: apiservers:!127.0.0.1   # important bit
  sudo: yes
  sudo_user: root
  remote_user: ubuntu
  vars_files:
    - env_vars/common.yml
    - env_vars/remote.yml
  vars:
    - setup_git_repo: no
    - update_apt_cache: yes
  roles:
    - common
    - db
    - memcached
    - web
This means that the first step of each layer's play adds a new host to the apiservers group, with the second step (Configure ... apiservers) then being able to exclude the localhost without getting a no hosts matching error.
The wrapping playbook is dead simple, just:
---
- name: Deploy all the {{ application_name }} things!
  hosts: all

- include: webservers.yml
- include: apiservers.yml
I'm very much a beginner with regard to Ansible, so please take this for what it is: some guy's attempt to find something that works. There may be better options, and this could violate best practice all over the place.
The ec2 module supports an "exact_count" property, not just a "count" property.
It will create (or terminate!) instances so that the number of instances matching the specified tags ("instance_tags") is exact.
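A hedged sketch of how exact_count is typically used with the ec2 module; the AMI ID, key pair, security group, and tag values are placeholders:

- name: Ensure exactly two tagged webservers exist
  ec2:
    key_name: mykey                 # placeholder key pair
    instance_type: t2.micro
    image: ami-0123456789abcdef0    # placeholder AMI
    region: us-east-1
    group: webserver-sg             # placeholder security group
    instance_tags:
      role: webserver
    exact_count: 2
    count_tag:
      role: webserver               # instances are counted by this tag
    wait: yes
  register: ec2_webservers

Running the same task later with exact_count: 0 would terminate the matching instances, so one task can both provision and tear down a layer of the stack.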