I have OS-based filtering defined in the following playbook, but when it is executed the user to run as is not selected per play. I have to run this playbook three times with three different users:
1. centos
2. ec2-user
3. admin
This is how I am executing it:
1. ansible-playbook -i inventory -u admin group_by.yaml
2. ansible-playbook -i inventory -u ec2-user group_by.yaml
3. ansible-playbook -i inventory -u centos group_by.yaml
The filtering and grouping work; the problem is that remote_user does not.
---
- name: Run tasks based on OS
  hosts: all
  tasks:
    - name: Group OS
      group_by:
        key: "{{ ansible_distribution }}"

- hosts: CentOS
  become: yes
  become_user: root
  remote_user: centos
  tasks:
    - name: Install on centos
      package:
        name: telnet
        state: absent

- hosts: Amazon
  become: yes
  become_user: root
  remote_user: ec2-user
  tasks:
    - name: Install on ec2
      package:
        name: telnet

- hosts: Debian
  become: yes
  become_user: root
  remote_user: admin
  tasks:
    - name: Install on debian
      package:
        name: telnet
I have already run the command multiple times; it keeps picking my default user. remote_user is not working in the playbook.
Related
I have this Ansible playbook with three different plays. What I want to do is launch the last two plays based on a condition. How can I do this directly at the playbook level (not using a when clause in each role)?
- name: Base setup
  hosts: all
  roles:
    - apt
    - pip

# !!!!! SHUT DOWN IF NOT DEBIAN DIST

- name: dbserver setup
  hosts: dbserver
  remote_user: "{{ user }}"
  become: true
  roles:
    - mariadb

- name: webserver and application setup
  hosts: webserver
  remote_user: "{{ user }}"
  become: true
  roles:
    - php
    - users
    - openssh
    - sshkey
    - gitclone
    - symfony
You could just end the play for the hosts you do not wish to continue with, with the help of the meta task in a pre_tasks:
- name: dbserver setup
  hosts: dbserver
  remote_user: "{{ user }}"
  become: true
  pre_tasks:
    - meta: end_host
      when: ansible_facts.os_family != 'Debian'
  roles:
    - mariadb
And do the same for the web servers.
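For illustration, the same guard applied to the webserver play from the question might look like this sketch (same ansible_facts.os_family condition assumed):

- name: webserver and application setup
  hosts: webserver
  remote_user: "{{ user }}"
  become: true
  pre_tasks:
    # end this play for any host that is not a Debian-family system
    - meta: end_host
      when: ansible_facts.os_family != 'Debian'
  roles:
    - php
    - users
    - openssh
    - sshkey
    - gitclone
    - symfony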
- hosts: local_host
    remote_user: ansible
    become: yes
    become_method: sudo
    connection: ssh
    gather_fact: yes
    tasks:
      name: installing MariaDB
      yum:
        name: mariadb-server
        state: latest
      notify: startservice
    handlers:
      name: startservice
      service:
        name: mariadb
        state: restarted
The error is in the first two lines:
- hosts: local_host
    remote_user: ansible
hosts cannot have both a scalar value (local_host) and a mapping value (starting at remote_user:). Chances are that you want remote_user to be on the level of hosts, making it a sibling key:
- hosts: local_host
  remote_user: ansible
  # and so on
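For completeness, a fully corrected sketch of the play might look like the following; besides aligning the keys, it assumes the tasks and handlers entries were meant to be list items and that gather_fact was meant to be gather_facts:

- hosts: local_host
  remote_user: ansible
  become: yes
  become_method: sudo
  connection: ssh
  gather_facts: yes
  tasks:
    - name: installing MariaDB
      yum:
        name: mariadb-server
        state: latest
      notify: startservice
  handlers:
    - name: startservice
      service:
        name: mariadb
        state: restarted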
I have a playbook that I run to deploy a guest VM onto my target node.
After the guest VM is fired up, it is not available to the whole network, but to the host machine only.
Also, after booting up the guest VM, I need to run some commands on that guest to configure it and make it available to all the network members.
---
- block:
    - name: Verify the deploy VM script
      stat: path="{{ deploy_script }}"
      register: deploy_exists
      failed_when: deploy_exists.stat.exists == False
      no_log: True
  rescue:
    - name: Copy the deploy script from Ansible
      copy:
        src: "scripts/new-install.pl"
        dest: "/home/orch"
        owner: "{{ my_user }}"
        group: "{{ my_user }}"
        mode: 0750
        backup: yes
      register: copy_script

- name: Deploy VM
  shell: run my VM deploy script

<other tasks>

- name: Run something on the guest VM
  shell: my_other_script
  args:
    chdir: /var/scripts/

- name: Other task on guest VM
  shell: uname -r

<and so on>
How can I run those subsequent steps on the guest VM via the host?
My only workaround is to populate a new inventory file with the VM's details and use the host machine as a bastion host.
[myvm]
myvm-01 ansible_connection=ssh ansible_ssh_user=my_user ansible_ssh_common_args='-oStrictHostKeyChecking=no -o ProxyCommand="ssh -A -W %h:%p someuser@host_machine"'
However, I want everything to happen in a single playbook, rather than splitting it up.
I have resolved it myself.
I managed to dynamically add the host to the inventory, and used a group vars section for the newly created hosts so that they use the VM manager as a bastion host.
Playbook:
---
- hosts: "{{ vm_manager }}"
  become_method: sudo
  gather_facts: False
  vars_files:
    - vars/vars.yml
    - vars/vault.yml
  pre_tasks:
    - name: do stuff here on the VM manager
      debug: msg="test"
  roles:
    - { role: vm_deploy, become: yes, become_user: root }
  tasks:
    - name: Dynamically add newly created VM to the inventory
      add_host:
        hostname: "{{ vm_name }}"
        groups: vms
        ansible_ssh_user: "{{ vm_user }}"
        ansible_ssh_pass: "{{ vm_pass }}"

- name: Run the rest of tasks on the VM through the host machine
  hosts: "{{ vm_name }}"
  become: true
  become_user: root
  become_method: sudo
  post_tasks:
    - name: My first task on the VM
      static: no
      include_role:
        name: my_role_for_the_VM
Inventory:
[vm_manager]
vm-manager.local
[vms]
my-test-01
my-test-02
[vms:vars]
ansible_connection=ssh
ansible_ssh_common_args='-oStrictHostKeyChecking=no -o ProxyCommand="ssh -A -W %h:%p username@vm-manager.local"'
Run playbook:
ansible-playbook -i hosts -vv playbook.yml -e vm_name=some-test-vm-name
I have an Ansible (2.1.1) inventory:
build_machine ansible_host=localhost ansible_connection=local
staging_machine ansible_host=my.staging.host ansible_user=stager
I'm using SSH without ControlMaster.
I have a playbook that has a synchronize command:
- name: Copy build to staging
  hosts: staging_machine
  tasks:
    - synchronize: src=... dest=...
      delegate_to: staging_machine
      remote_user: stager
The command prompts for password of the wrong user:
local-mac-user@my-staging-host's password:
So instead of using the ansible_user defined in the inventory or the remote_user defined in the task to connect to the target (the hosts specified in the play), it uses the user we connected to the delegate_to box as.
What am I doing wrong? How do I fix this?
EDIT: It works in 2.0.2, doesn't work in 2.1.x
The remote_user setting is used at the playbook level to run a particular play as a given user.
example:
---
- hosts: webservers
  remote_user: root
  tasks:
    - name: ensure apache is at the latest version
      yum:
        name: httpd
        state: latest
    - name: write the apache config file
      template:
        src: /srv/httpd.j2
        dest: /etc/httpd.conf
If you only have a certain task that needs to be run as a different user you can use the become and become_user settings.
- name: Run command
  command: whoami
  become: yes
  become_user: some_user
Finally, if you have a group of tasks to run as a particular user in a play, you can group them with block.
example:
- block:
    - name: checkout repo
      git:
        repo: https://github.com/some/repo.git
        version: master
        dest: "{{ dst }}"
    - name: change perms
      file:
        dest: "{{ dst }}"
        state: directory
        mode: 0755
        owner: some_user
  become: yes
  become_user: some_user
Reference:
- How to switch a user per task or set of tasks?
- https://docs.ansible.com/ansible/latest/user_guide/playbooks_intro.html
This one works for me, but please note that it is for Windows; Linux does not require become_method: runas and basically does not have it.
- name: restart IIS services
  win_service:
    name: '{{ item }}'
    state: restarted
    start_mode: auto
    force_dependent_services: true
  loop:
    - 'SMTPSVC'
    - 'IISADMIN'
  become: yes
  become_method: runas
  become_user: '{{ webserver_user }}'
  vars:
    ansible_become_password: '{{ webserver_password }}'
  delegate_facts: true
  delegate_to: '{{ groups["webserver"][0] }}'
  when: dev_env
Try setting become: yes and become_user: stager in your YAML file... That should fix it.
https://docs.ansible.com/ansible/2.5/user_guide/become.html
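Applied to the synchronize play from the question, that suggestion might look like the sketch below (src and dest are left elided as in the original):

- name: Copy build to staging
  hosts: staging_machine
  tasks:
    # escalate to the stager user on the target instead of relying on remote_user
    - synchronize: src=... dest=...
      delegate_to: staging_machine
      become: yes
      become_user: stager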
My login user is user1 and I want to execute the playbook as root. How can I do this? If I use it on the command line like this, it does not work:
ansible-playbook main.yaml -i hosts --user=git -k --become-user=root --ask-become-pass --become-method=su
Please tell me how to implement this.
- name: Install and Configure IEM
  hosts: rhel
  ansible_become: yes
  ansible_become_method: su
  ansible_become_user: root
  ansible_become_pass: passw0rd
  tasks:
    - name: Creating masthead file path
      file: path=/etc/opt/BESClient state=directory
    - name: Creating install directory
I use:
deploy.yml
- name: Todo something
  hosts: all
  become: yes
  become_user: root
  become_method: su
When you execute the playbook, pass the password as an extra var.
--extra-vars='ansible_become_pass=password'
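For example, the invocation could look like this (the inventory name hosts is just a placeholder):

ansible-playbook -i hosts deploy.yml --extra-vars='ansible_become_pass=password'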
From Ansible docs:
you can set those in the playbook as @Raul-Hugo did, with become and become_user;
alternatively, it can also be done in the inventory, which allows setting it per host or group. But then the variables get the "ansible_" prefix: ansible_become, ansible_become_user, etc. That's why the playbook you gave in your question did not work: it used the variable names that belong in the inventory.
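For instance, an inventory-level equivalent of the settings from the question could be a group vars section like this sketch (using the rhel group from the question):

[rhel:vars]
ansible_become=yes
ansible_become_method=su
ansible_become_user=root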
You can become root as shown below and install the packages:
tasks:
  - name: install apache package
    become: yes
    become_user: root
    yum:
      name: httpd
      state: present
  - name: ensure apache is running
    become: yes
    become_user: root
    service:
      name: httpd
      state: started
All the above answers cause Ansible to try to log in as root from the beginning. But in this case the user you request is git, so the example below worked for me:
- name: Install and Configure IEM
  hosts: rhel
  tasks:
    - name: Creating masthead file path
      file: path=/etc/opt/BESClient state=directory
  remote_user: git
  become: yes # when not specifying `become_user` it's "root"
This will cause it to log in as git and, after the login, switch to root.
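With remote_user and become set in the play, the --user, --become-user and --become-method flags from the question's command line are no longer needed; a hypothetical invocation might be:

ansible-playbook main.yaml -i hosts -k

(keep --ask-become-pass if privilege escalation still requires a password).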