Ansible target specific host (not delegate_to) - ansible

There are 3 hosts in my play.
[machines]
MachineA
MachineB
MachineC
MongoDB runs on these servers. And one of these servers can be a MasterDB of Mongo.
So, each of these machines can be a 'Master'. This is determined by setting the fact if the machine is master, in this example only MachineA is targeted:
- name: check if master
  shell: 'shell command to check if master'
  set_fact: MasterHost="machineA"
  when: 'shell command to check if master'.stdout == "true"
This is also done for MachineB and MachineC.
Mission to achieve: To run commands only on the Master machine, which has the fact "MasterHost".
I tried the delegate_to module, but delegate_to also uses the two other machines:
- name: some task
  copy: src=/tmp/test.txt dest=/tmp/test.txt
  delegate_to: "{{ MasterHost }}"
I want to target the master in my playbook and run commands only on the master, not in the shell via the --limit option.

Assuming the command run to check whether the host is the master or not is not costly, you can go without setting a specific fact:
- name: check if master
  shell: 'shell command to check if master'
  register: master_check

- name: some task
  copy: src=/tmp/test.txt dest=/tmp/test.txt
  when: master_check.stdout == "true"
Run the play on all hosts and only the one that is the master will run some task.

Eventually, this was my answer. Sorry for the first post, still learning how to make a good post. Hihi
- name: Check which host is master
  shell: mongo --quiet --eval 'db.isMaster().ismaster'
  register: mongoMaster

- name: Set fact for mongoMaster
  set_fact: MongoMasterHost="{{ item }}"
  with_items: "{{ groups['HOSTS'] }}"
  when: mongoMaster.stdout == "true"

- name: Copy local backup.tgz to master /var/lib/mongodb/backup
  copy: src=/tmp/backup.tgz dest=/var/lib/backup/backup.tgz
  when: mongoMaster.stdout == "true"
Thanks for helping and pointing me toward the right direction.
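
Another way to restrict later tasks to the master, shown as an added example that is not part of the posts above: `group_by` puts the host that reports itself as master into a runtime group, and a second play targets only that group. The group name `mongo_master` is an assumption; the check command and file paths are the ones from the answer.

```yaml
- name: Find the MongoDB master
  hosts: machines
  tasks:
    - name: Check which host is master
      shell: mongo --quiet --eval 'db.isMaster().ismaster'
      register: mongoMaster
      changed_when: false

    # Only the host whose check printed "true" joins the group.
    - name: Put the master into its own runtime group
      group_by:
        key: mongo_master   # assumed group name
      when: mongoMaster.stdout == "true"

# This play never touches the other machines; if no host
# reported itself as master, the group is empty and the play is skipped.
- name: Run tasks on the master only
  hosts: mongo_master
  tasks:
    - name: Copy local backup.tgz to the master
      copy:
        src: /tmp/backup.tgz
        dest: /var/lib/backup/backup.tgz
```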

Related

Ansible run delegate_to task on remote machine as different user

I want to set a cron entry on a remote host, but connecting to the host as a different user.
# task
- name: Cron to ls at a specific time
  cron:
    name: "perform a listing"
    weekday: "6"
    minute: "5"
    hour: "3"
    job: "/bin/ls -lR /mnt/*/"
  delegate_to: "{{ my_remote_machine }}"
Problem
This is a startup script on an instance in the cloud.
The script runs as root, therefore will try to connect to {{ my_remote_machine }} as root.
root is obviously disabled by default on most cloud instances.
Because of this, I can't use the become_user keyword.
Do I have any other options?

Simply change the remote_user for the given task to the one you can connect with on the delegated host. Here is a pseudo playbook to give you the basics.
Note: if targeting a host using ansible_connection: local (e.g. default implicit localhost), remote_user is ignored and defaults to the user launching the playbook on the controller.
---
- name: Play mixing several hosts and users
  hosts: some_host_or_group
  # Play level remote_user. In short, this is used if not overridden in task.
  # See documentation for finer grained info (define in inventory, etc...)
  remote_user: root

  tasks:
    - name: Check who we are on current host
      command: id -a
      register: who_we_are_current

    - debug:
        var: who_we_are_current.stdout

    - name: Show we can be someone else on delegate
      command: id -a
      # Task level remote_user: overrides play
      remote_user: johnd
      delegate_to: "{{ my_remote_machine }}"
      register: who_whe_are_delegate

    - debug:
        var: who_whe_are_delegate.stdout

    - name: And of course, this works with your real task as well
      cron:
        name: "perform a listing"
        weekday: "6"
        minute: "5"
        hour: "3"
        job: "/bin/ls -lR /mnt/*/"
      remote_user: johnd
      delegate_to: "{{ my_remote_machine }}"

Ansible: how to loop over ip-addresses until first success shell output?

I'm creating a playbook which will be applied to new Docker swarm manager(s). Server(s) is/are not configured before playbook run.
We already have some Swarm managers. I can find all of them (including the new one) with:
- name: 'Search for SwarmManager server IPs'
  ec2_instance_facts:
    region: "{{ ec2_region }}"
    filters:
      vpc-id: "{{ ec2_vpc_id }}"
      "tag:aws:cloudformation:logical-id": "AutoScalingGroupSwarmManager"
  register: swarmmanager_instance_facts_result
Now I can use something like this to get join-token:
- set_fact:
    swarmmanager_ip: "{{ swarmmanager_instance_facts_result.instances[0].private_ip_address }}"

- name: 'Get the docker swarm join-token'
  shell: docker swarm join-token -q manager
  changed_when: False
  register: docker_swarm_token_result
  delegate_to: "{{ swarmmanager_ip }}"
  run_once: true
Success shell output looks like this — just 1 line starting with "SWMTKN-1":
SWMTKN-1-11xxxyyyzzz-xxxyyyzzz
But I see some possible problems here with swarmmanager_ip:
it can be a new instance which is still unconfigured,
it can be an instance with a non-working Swarm manager.
So I decided to loop over results until I've got join-token. But many code variants I've tried don't work. For example, this one runs over all list without break:
- name: 'Get the docker swarm join-token'
  shell: docker swarm join-token -q manager
  changed_when: False
  register: docker_swarm_token_result
  delegate_to: "{{ item.private_ip_address }}"
  loop: "{{ swarmmanager_instance_facts_result.instances }}"
  # ignore_errors: true
  # until: docker_swarm_token_result.stdout_lines|length == 1
  when: docker_swarm_token_result is not defined or docker_swarm_token_result.stdout_lines is not defined or docker_swarm_token_result.stdout_lines|length == 1
  run_once: true
  check_mode: false
Do you know how to iterate over list until first success shell output?
I use Ansible 2.6.11, it is OK to receive answer about 2.7.
P.S.: I've already read How to break `with_lines` cycle in Ansible?, it doesn't work for modern Ansible versions.
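
One way to get the first usable token, as an added example that is not part of the original post: rather than breaking out of the loop, it asks every manager, tolerates the ones that fail, and keeps the first output that starts with "SWMTKN-1-". The names `token_attempts` and `docker_swarm_token` are assumptions, `ignore_unreachable` needs Ansible 2.7, and `no_log` keeps the join-token out of the run output.

```yaml
- name: 'Ask every manager for the join-token'
  shell: docker swarm join-token -q manager
  delegate_to: "{{ item.private_ip_address }}"
  loop: "{{ swarmmanager_instance_facts_result.instances }}"
  register: token_attempts        # assumed variable name
  changed_when: false
  failed_when: false              # a non-working manager must not fail the task
  ignore_unreachable: true        # Ansible 2.7+: keep going if an instance is not up yet
  no_log: true                    # the join-token is a secret
  run_once: true
  check_mode: false

# Items without stdout (unreachable) and items whose stdout is not a
# token (unconfigured or broken manager) are filtered out here.
- name: 'Keep the first answer that looks like a token'
  set_fact:
    docker_swarm_token: >-
      {{ token_attempts.results
         | selectattr('stdout', 'defined')
         | map(attribute='stdout')
         | select('match', '^SWMTKN-1-')
         | first | default('') }}
  no_log: true
  run_once: true

- name: 'Stop if no manager returned a token'
  assert:
    that: docker_swarm_token | length > 0
    msg: "No working Swarm manager returned a join-token"
  run_once: true
```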

Ansible how to compare output of multiple hosts within the same task

I have an ansible playbook that has a task to output the list of installed Jenkins plugins for each server.
Here is the host file:
[masters]
server1
server2
server3
server4
server5
server6
Here is the task that prints out the list of plugins installed on each of the jenkins servers:
- name: Obtaining a list of Jenkins Plugins
  jenkins_script:
    script: 'println(Jenkins.instance.pluginManager.plugins)'
    url: "http://{{ inventory_hostname }}.usa.com:8080/"
    user: 'admin'
    password: 'password'
What I want to do next is do a comparison with all of the installed plugins across all of the servers -- to ensure that all of the servers are running the same plugins.
I don't necessarily want to force an update -- could break things -- just inform the user that they are running a different version of the plug-in than the rest of the servers.
I am fairly new to ansible, will gladly accept any suggestions on how to accomplish this.

This is a bit ugly, but should work:
- hosts: masters
  tasks:
    - jenkins_script:
        script: 'println(Jenkins.instance.pluginManager.plugins)'
        url: "http://{{ inventory_hostname }}.usa.com:8080/"
        user: 'admin'
        password: 'password'
      register: call_result

    - copy:
        content: '{{ call_result.output }}'
        dest: '/tmp/{{ inventory_hostname }}'
      delegate_to: 127.0.0.1

    - shell: 'diff /tmp/{{ groups.masters[0] }} /tmp/{{ inventory_hostname }}'
      delegate_to: 127.0.0.1
      register: diff_result
      failed_when: false

    - debug:
        var: diff_result.stdout_lines
      when: diff_result.stdout_lines | length != 0
This will save the result of the jenkins_script module onto the calling host (where you are running ansible-playbook), into /tmp/{{hostname}}. Afterwards it will run a normal diff against the first server's result and each of the others', and then print out if there are any differences.
It's a bit ugly, as it:
Uses /tmp on the calling host to store some temporary data
Does not clean up after itself
Uses the diff shell commands for something that might be doable with some clever use of jinja
Ansible 2.3 will have the tempfile module, that you might use to clean up /tmp
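
The same comparison done in Jinja, without files in /tmp on the calling host and without the password written in the playbook, as an added example that is not part of the answer above. It assumes `jenkins_user` and `jenkins_password` are supplied from outside the playbook (for example an ansible-vault file) and that the script prints one bracketed, comma-separated list; `plugins_here`, `reference_host` and `reference_plugins` are names chosen here.

```yaml
- hosts: masters
  vars:
    # Assumption: output looks like "[entry1, entry2, ...]"; entries are
    # compared as opaque strings, whatever the script prints per plugin.
    plugins_here: "{{ call_result.output.strip().strip('[]').split(', ') }}"
    reference_host: "{{ groups['masters'][0] }}"
    reference_plugins: >-
      {{ hostvars[reference_host].call_result.output.strip().strip('[]').split(', ') }}
  tasks:
    - name: Obtain the list of Jenkins plugins
      jenkins_script:
        script: 'println(Jenkins.instance.pluginManager.plugins)'
        url: "http://{{ inventory_hostname }}.usa.com:8080/"
        user: "{{ jenkins_user }}"          # assumed variable
        password: "{{ jenkins_password }}"  # assumed variable, e.g. from a vault file
      register: call_result

    # Registered results of other hosts are read through hostvars,
    # so nothing is written to disk on the controller.
    - name: Report plugins that differ from the first server
      debug:
        msg:
          missing_here: "{{ reference_plugins | difference(plugins_here) }}"
          extra_here: "{{ plugins_here | difference(reference_plugins) }}"
      when: plugins_here | symmetric_difference(reference_plugins) | length > 0
```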

Execute local script on remote without copying it across in Ansible

Is it possible to execute a local script on a remote host in Ansible without copying it across and then executing it?
The script, shell and command modules all seem like they might be the answer but I'm not sure which is best.
The script module describes itself as "Runs a local script on a remote node after transferring it" but the examples given don't suggest a copy operation - e.g. no src, dest - so maybe this is the answer?

script module FTW
tasks:
  - name: Ensure docker repo is added
    script: "{{ role_path }}/files/add-docker-repo.sh"
    register: dockeraddrepo
    notify: Done dockeraddrepo
    when: ansible_local.dockeraddrepo | d(0) == 0

handlers:
  - name: Done dockeraddrepo
    copy:
      content: '{{ dockeraddrepo }}'
      dest: /etc/ansible/facts.d/dockeraddrepo.fact

Ansible DRY (don't repeat yourself)

I have 2 roles both with a list of tasks.
However, SOME (not all) of the tasks in role A are almost identical to the tasks in role B
Example role A task:
- name: Ensure bible server is running
  command: npm run forever
  args:
    chdir: ~/bible-server
  when: "foreverlist.stdout.find('bibleServer.js') == -1"
Example role B task:
- name: Ensure certs server is running
  command: npm run forever
  args:
    chdir: ~/certs-server
  when: "foreverlist.stdout.find('certsServer.js') == -1"
Is it possible to parameterise a task such that I can declare a task like I would declare a function and pass in arguments to it?

Yes, in Ansible this is what the inventory is for. Specify the configuration as variables in the inventory. If both roles are on the same host, you could use a dictionary. Then iterate through the dictionary to repeat the task on each configuration.
In the inventory:
servers:
  - path: bible-server
    script: bibleServer.js
  - path: cert-server
    script: certServer.js
Then in the task:
- name: Ensure Servers are running
  command: npm run forever
  args:
    chdir: "~/{{ item.path }}"
  # "when" is already a Jinja expression, so item.script is used without {{ }}
  when: foreverlist.stdout.find(item.script) == -1
  with_items: "{{ servers }}"
That's the high level overview. I would highly recommend reading up on the inventory because its use is a core principle of Ansible. Also read up on loops.
