I have a playbook with some roles/tasks to be executed on the remote host. There is a scenario where I want some tasks to execute locally, like downloading artifacts from SVN/Nexus to the local server.
Here is my main playbook, where I pass target_env from the command line and dynamically load the variables using the group_vars directory:
---
- name: Starting Deployment of Application to tomcat nodes
  hosts: '{{ target_env }}'
  become: yes
  become_user: tomcat
  become_method: sudo
  gather_facts: yes
  roles:
    - role: repodownload
      tags:
        - repodownload
    - role: stoptomcat
      tags:
        - stoptomcat
The first role, repodownload, actually downloads the artifacts from SVN/Nexus to the local server/controller. Here is the main.yml of this role:
- name: Downloading MyVM Artifacts on the local server
  delegate_to: localhost
  get_url: url="http://nexus.com/myrelease.war" dest=/tmp/releasename/

- name: Checkout latest application configuration templates from SVN repo to local server
  delegate_to: localhost
  subversion:
    repo: svn://12.57.98.90/release-management/config
    dest: ../templates
    in_place: yes
But it's not working. Could it be because in my main YAML file I become the user with which I want to execute the commands on the remote host?
Let me know if someone can help. It will be appreciated.
ERROR:
{
    "changed": false,
    "module_stderr": "sudo: a password is required\n",
    "module_stdout": "",
    "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
    "rc": 1
}
Given the scenario, you can do this in multiple ways. One is to add another play for the repodownload role to your main playbook that runs only on localhost. Then remove delegate_to: localhost from the role tasks and move the variables accordingly.
---
- name: Download repo
  hosts: localhost
  gather_facts: yes
  roles:
    - role: repodownload
      tags:
        - repodownload

- name: Starting Deployment of Application to tomcat nodes
  hosts: '{{ target_env }}'
  become: yes
  become_user: tomcat
  become_method: sudo
  gather_facts: yes
  roles:
    - role: stoptomcat
      tags:
        - stoptomcat
Another way could be to remove become from the play level and add it to the stoptomcat role. Something like the below should work.
---
- name: Starting Deployment of Application to tomcat nodes
  hosts: '{{ target_env }}'
  gather_facts: yes
  roles:
    - role: repodownload
      tags:
        - repodownload
    - role: stoptomcat
      become: yes
      become_user: tomcat
      become_method: sudo
      tags:
        - stoptomcat
I haven't tested the code, so apologies for any formatting issues.
Related
I have this Ansible playbook with three different plays. What I want to do is launch the two last plays based on a condition. How can I do this directly at the playbook level (not using a when clause in each role)?
- name: Base setup
  hosts: all
  roles:
    - apt
    - pip

# !!!!! SHUT DOWN IF NOT DEBIAN DIST
- name: dbserver setup
  hosts: dbserver
  remote_user: "{{ user }}"
  become: true
  roles:
    - mariadb

- name: webserver and application setup
  hosts: webserver
  remote_user: "{{ user }}"
  become: true
  roles:
    - php
    - users
    - openssh
    - sshkey
    - gitclone
    - symfony
You could just end the play for the hosts you do not wish to continue with, with the help of the meta task in a pre_tasks section:
- name: dbserver setup
  hosts: dbserver
  remote_user: "{{ user }}"
  become: true
  pre_tasks:
    - meta: end_host
      when: ansible_facts.os_family != 'Debian'
  roles:
    - mariadb
And do the same for the web servers.
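For example, a minimal sketch of the same guard applied to the webserver play from the question (same end_host condition, same role list):

- name: webserver and application setup
  hosts: webserver
  remote_user: "{{ user }}"
  become: true
  pre_tasks:
    # end this play early for any host that is not a Debian-family system
    - meta: end_host
      when: ansible_facts.os_family != 'Debian'
  roles:
    - php
    - users
    - openssh
    - sshkey
    - gitclone
    - symfony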
I'm trying to install a website with Ansible 2.4.0.0. When I run:
./architecture/scripts/provision.sh lxc
I get this result:
/usr/local/lib/python2.7/dist-packages/cryptography/__init__.py:39: CryptographyDeprecationWarning: Python 2 is no longer supported by the Python core team. Support for it is now deprecated in cryptography, and will be removed in a future release.
CryptographyDeprecationWarning,
/usr/local/lib/python2.7/dist-packages/cryptography/__init__.py:39: CryptographyDeprecationWarning: Python 2 is no longer supported by the Python core team. Support for it is now deprecated in cryptography, and will be removed in a future release.
CryptographyDeprecationWarning,
[WARNING]: - ansible-genericservice was NOT installed successfully: - command git clone git@git.smile.fr:ansible/ansible-genericservice.git ansible-genericservice failed in directory /tmp/tmpmZfdUa (rc=128)
ERROR! - you can use --ignore-errors to skip failed roles and finish processing the list.
/usr/local/lib/python2.7/dist-packages/cryptography/__init__.py:39: CryptographyDeprecationWarning: Python 2 is no longer supported by the Python core team. Support for it is now deprecated in cryptography, and will be removed in a future release.
CryptographyDeprecationWarning,
[DEPRECATION WARNING]: The use of 'include' for tasks has been deprecated. Use 'import_tasks' for static inclusions or 'include_tasks' for dynamic inclusions. This feature will be removed in a future release.
Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
[DEPRECATION WARNING]: include is kept for backwards compatibility but usage is discouraged. The module documentation details page may explain more about this rationale.. This feature will be removed in a
future release. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
PLAY [dbservers,cacheservers,searchservers,webservers] ************************************************************************************************************************************************************
TASK [Gathering Facts] ********************************************************************************************************************************************************************************************
Warning: the ECDSA host key for 'webstore.lxc' differs from the key for the IP address '10.124.193.20'
Offending key for IP in /home/jredor/.ssh/known_hosts:19
Matching host key in /home/jredor/.ssh/known_hosts:21
Are you sure you want to continue connecting (yes/no)? yes
ok: [lxc-server]
TASK [include_vars] ***********************************************************************************************************************************************************************************************
fatal: [lxc-server]: FAILED! => {"failed": true, "msg": "No file was found when using with_first_found. Use the 'skip: true' option to allow this task to be skipped if no files are found"}
to retry, use: --limit #/home/jredor/projets/webstore/architecture/provisioning/provision.retry
PLAY RECAP ********************************************************************************************************************************************************************************************************
lxc-server : ok=1 changed=0 unreachable=0 failed=1
I can't find how to set the skip option to true. My provisioning script looks like this:
cd "$( dirname "${BASH_SOURCE[0]}" )"
cd ..
source scripts/_environment-with-lxc.sh
source scripts/_ansible.sh
ansible-galaxy install -r provisioning/requirements.yml -p provisioning/roles -n -f
ansible-playbook --ssh-extra-args="${ANSIBLE_SSH_ARGS}" provisioning/provision.yml -i provisioning/inventory/$inventory
exit $?
It's the first time I'm using Ansible, so I'm lost.
Thanks
Edit:
So here is the code in provision.yml
---
# load variables for each servers
- hosts:
    - dbservers
    - cacheservers
    - searchservers
    - webservers
  vars:
    ansible_user: "root"
  tasks:
    - include: includes/include-vars.yml

# Prepare the delivery authorized keys
- hosts:
    - dbservers
    - cacheservers
    - searchservers
    - webservers
  connection: local
  vars:
    ansible_user: "root"
    tmp_delivery_users:
      - name: "{{ magento_project_user }}"
        group: "{{ magento_webserver_group }}"
        authorized_keys: "{{ delivery_authorized_keys }}"
  tasks:
    - name: "Prepare the list of the authorized keys for delivery - Extra Keys"
      set_fact: delivery_authorized_keys="{{ delivery_authorized_extra_keys }}"
    - name: "Prepare the delivery_users object"
      set_fact: delivery_users="{{ tmp_delivery_users }}"

# add hosts alias on Servers
- hosts:
    - dbservers
    - cacheservers
    - searchservers
    - webservers
  vars:
    ansible_user: "root"
  tasks:
    - include: includes/init-hosts.yml
      with_items: "{{ specific_hosts|default([]) }}"

# add magento hosts alias on WebServers
- hosts:
    - webservers
  vars:
    ansible_user: "root"
  tasks:
    - include: includes/init-hosts.yml
      with_items: "{{ magento_server_alias|default([]) }}"

# Generic behaviors on all servers
- hosts:
    - dbservers
    - cacheservers
    - searchservers
    - webservers
  vars:
    ansible_user: "root"
  roles:
    - role: ansible-basicserver

# Generic usage of the ansible roles - DB Server
- hosts: dbservers
  vars:
    ansible_user: "root"
  roles:
    - role: ansible-mysql-server

# Generic usage of the ansible roles - Cache Server
- hosts: cacheservers
  vars:
    ansible_user: "root"
  roles:
    - {
        role: ansible-redis,
        redis_instance_name: "magento_cache",
        redis_setting_port: "{{ magento_cache_port }}",
        redis_setting_save: "{{ redis_setting_save_cache }}"
      }
    - {
        role: ansible-redis,
        redis_instance_name: "magento_session",
        redis_setting_port: "{{ magento_cache_session_port }}",
        redis_setting_save: "{{ redis_setting_save_session }}"
      }

# Generic usage of the ansible roles - Search Server
- hosts: searchservers
  vars:
    ansible_user: "root"
  roles:
    - role: ansible-elasticsearch

# Prepare php parameters
- hosts: webservers
  vars:
    ansible_user: "root"
  tasks:
    - include: includes/prepare-php-parameters.yml

# Generic usage of the ansible roles - Webserver Server
- hosts: webservers
  vars:
    ansible_user: "root"
  roles:
    - role: ansible-php
    - role: ansible-apache
    - role: ansible-varnish
    - role: ansible-nginx

# Specific usage of the ansible roles - Webserver Server - Dev Tools
- hosts: webservers
  vars:
    ansible_user: "root"
  roles:
    - { role: ansible-npm, when: magento_install_maildev or magento_install_grunt }
    - { role: ansible-maildev, when: magento_install_maildev }
  tasks:
    - name: "Install NPM package: grunt-cli"
      npm: name="grunt-cli" global=yes
      when: magento_install_grunt
    - name: "Add delivery user in groups"
      user:
        name: "{{ magento_project_user }}"
        groups: "{{ magento_delivery_groups }}"
    - name: "Create {{ magento_source_path }} folder"
      file:
        path: "{{ magento_source_path }}"
        state: directory
        owner: "{{ magento_project_user }}"
        group: "{{ magento_project_group }}"
        mode: "u=rwX,g=rX,o=rX"
    # Specific task for Magento 2
    - name: "Check if Magento app/etc/env.php exists"
      stat:
        path: "{{ magento_source_path }}/app/etc/env.php"
      register: magento_app_etc_env
    - name: "Update app/etc/env.php configuration file"
      template:
        src: "templates/magento/env.php.j2"
        dest: "{{ magento_source_path }}/app/etc/env.php"
        owner: "{{ magento_project_user }}"
        group: "{{ magento_webserver_group }}"
        mode: "u=rw,g=rw,o=r"
      vars:
        magento_cache_database: "{{ magento_cache_database_for_run }}"
      when: magento_app_etc_env.stat.exists
    # Update permissions
    - include: includes/permissions-tasks-full.yml
It was written by a coworker and I'm just trying to use it to deploy the website on my computer. So I guess I have to put the "skip: true" somewhere in here, but I really don't know where, sorry. Thanks for your help!
You can specify the skip option as an argument to the failing first_found lookup: https://docs.ansible.com/ansible/latest/plugins/lookup/first_found.html#parameter-skip
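For example, a minimal sketch of what that might look like, assuming includes/include-vars.yml contains an include_vars task driven by with_first_found (the file names below are hypothetical, since that include file isn't shown here):

- name: Include environment-specific variables if present
  include_vars: "{{ item }}"
  with_first_found:
    - files:
        # hypothetical file names - adjust to match your actual vars layout
        - "vars/{{ inventory_hostname }}.yml"
        - "vars/default.yml"
      # skip the task instead of failing when none of the files exist
      skip: true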
I have an Ansible playbook which does multiple things, as below:
Download artifacts from Nexus onto the local server (Ansible master).
Copy those artifacts onto multiple remote machines, let's say server1/2/3, etc.
I have used roles in my playbook, and I want the role that downloads the artifacts (repodownload) to run only once, because why would I want to download the same thing again? I have tried run_once: true, but I guess that won't work because that only works for one playbook run, and my playbook is running multiple times for multiple hosts.
---
- name: Deploy my Application to tomcat nodes
  hosts: '{{ target_env }}'
  serial: 1
  roles:
    - role: repodownload
      tags:
        - repodownload
    - role: copyrepo
      tags:
        - copyrepo
    - role: stoptomcat
      tags:
        - stoptomcat
    - role: deploy
      tags:
        - deploy
Here target_env is being passed from the command line and it's the remote host group.
Any help is appreciated.
Below is the code from main.yml of the repodownload role:
- connection: local
  name: Downloading files from Nexus to local server
  get_url: url="{{ nexus_url }}/{{item}}/{{ myvm_release_version }}/{{item}}-{{ release_ver }}.war" dest={{ local_server_location }}
  with_items:
    - "{{ temps }}"
This is a really simple one that I battled with too.
Try this:
- connection: local
  name: Downloading files from Nexus to local server
  get_url:
    url: "{{ nexus_url }}/{{item}}/{{ myvm_release_version }}/{{item}}-{{ release_ver }}.war"
    dest: "{{ local_server_location }}"
  with_items:
    - "{{ temps }}"
  run_once: true
Just something else, unrelated to your main question: when you run a module that has really long args, like in your example above, rather break the params into their own lines nested under the module. It makes for easier reading, and it makes it easier to spot any potential typos or syntax errors early.
Okay, extending from your conversation with Zeitounator: the following workaround will work without changing your vars files. Just remember that this is a workaround and might not be the most efficient way to do the job.
---
- name: Download my repo to localhost
  # Executes only for first host in target_env and has visibility to group vars of target_env
  hosts: '{{ target_env }}[0]'
  serial: 1
  roles:
    - role: repodownload
      tags:
        - repodownload

- name: Deploy my Application to tomcat nodes
  # Executes for all hosts in target_env
  hosts: '{{ target_env }}'
  serial: 1
  roles:
    - role: copyrepo
      tags:
        - copyrepo
    - role: stoptomcat
      tags:
        - stoptomcat
    - role: deploy
      tags:
        - deploy
I have an Ansible playbook YAML file which contains 3 plays.
The first and third plays run on localhost, but the second play runs on a remote machine, as you can see in the example below:
- name: Play1
  hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - ... task here

- name: Play2
  hosts: remote_host
  tasks:
    - ... task here

- name: Play3
  hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - ... task here
I found that, on the first run, the playbook executes Play1 and Play3 and skips Play2. Then, when I run it again, it executes all of them correctly.
What is wrong here?
The problem is that, in Play2, I use an EC2 dynamic inventory group like tag_Name_my_machine, but this instance had not been created yet, because it is created by Play1's task.
Once Play1 finishes, Ansible runs Play2, but no host is found, so it silently skips this play.
The solution is to create a dynamic inventory group and manually register the host in Play1's tasks.
The playbook may look like this:
- name: Play1
  hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: Launch new ec2 instance
      register: ec2
      ec2: ...

    - name: create dynamic group
      add_host:
        name: "{{ ec2.instances[0].private_ip }}"
        group: host_dynamic_lastec2_created

- name: Play2
  user: ...
  hosts: host_dynamic_lastec2_created
  become: yes
  become_method: sudo
  become_user: root
  tasks:
    - name: do something
      shell: ...

- name: Play3
  hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - ... task here
I have an Ansible (2.1.1) inventory:
build_machine ansible_host=localhost ansible_connection=local
staging_machine ansible_host=my.staging.host ansible_user=stager
I'm using SSH without ControlMaster.
I have a playbook that has a synchronize command:
- name: Copy build to staging
  hosts: staging_machine
  tasks:
    - synchronize: src=... dest=...
      delegate_to: staging_machine
      remote_user: stager
The command prompts for the password of the wrong user:
local-mac-user@my-staging-host's password:
So instead of using the ansible_user defined in the inventory or the remote_user defined in the task to connect to the target (the hosts specified in the play), it uses the user we connected to the delegate-to box as to connect to the target hosts.
What am I doing wrong? How do I fix this?
EDIT: It works in 2.0.2, doesn't work in 2.1.x
The remote_user setting is used at the play level to run a particular play as a given user.
example:
---
- hosts: webservers
  remote_user: root
  tasks:
    - name: ensure apache is at the latest version
      yum:
        name: httpd
        state: latest
    - name: write the apache config file
      template:
        src: /srv/httpd.j2
        dest: /etc/httpd.conf
If you only have a certain task that needs to be run as a different user, you can use the become and become_user settings.
- name: Run command
  command: whoami
  become: yes
  become_user: some_user
Finally, if you have a group of tasks to run as a particular user in a play, you can group them with a block.
example:
- block:
    - name: checkout repo
      git:
        repo: https://github.com/some/repo.git
        version: master
        dest: "{{ dst }}"
    - name: change perms
      file:
        dest: "{{ dst }}"
        state: directory
        mode: 0755
        owner: some_user
  become: yes
  become_user: some_user
Reference:
- How to switch a user per task or set of tasks?
- https://docs.ansible.com/ansible/latest/user_guide/playbooks_intro.html
Here is the one which works for me, but please note that it is for Windows; Linux does not require become_method: runas and basically does not have it.
- name: restart IIS services
  win_service:
    name: '{{ item }}'
    state: restarted
    start_mode: auto
    force_dependent_services: true
  loop:
    - 'SMTPSVC'
    - 'IISADMIN'
  become: yes
  become_method: runas
  become_user: '{{ webserver_user }}'
  vars:
    ansible_become_password: '{{ webserver_password }}'
  delegate_facts: true
  delegate_to: '{{ groups["webserver"][0] }}'
  when: dev_env
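For comparison, a hypothetical Linux equivalent of the same pattern simply drops become_method: runas (the service name below is made up for illustration):

- name: restart a service on a delegated Linux host
  service:
    name: httpd            # hypothetical service name
    state: restarted
  become: yes
  become_user: root
  delegate_to: '{{ groups["webserver"][0] }}'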
Try setting become: yes and become_user: stager in your YAML file... That should fix it...
https://docs.ansible.com/ansible/2.5/user_guide/become.html
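For example, a minimal sketch of the play from the question with those settings added (untested; it assumes privilege escalation to the stager user is permitted on the staging host):

- name: Copy build to staging
  hosts: staging_machine
  # escalate to the stager user instead of relying on remote_user on the task
  become: yes
  become_user: stager
  tasks:
    - synchronize: src=... dest=...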