Suppose I have a role called "apache"
Now I want to execute that role on host 192.168.0.10 from the command line on the Ansible host:
ansible-playbook -i "192.168.0.10" --role "path to role"
Is there a way to do that?
With Ansible 2.7 you can do this:
$ ansible localhost --module-name include_role --args name=<role_name>
localhost | SUCCESS => {
"changed": false,
"include_variables": {
"name": "<role_name>"
}
}
localhost | SUCCESS => {
"msg": "<role_name>"
}
This will run the role from /path/to/ansible/roles or from the configured roles path.
Read more here:
https://github.com/ansible/ansible/pull/43131
I am not aware of this feature, but you can use tags to run just one role from your playbook.
roles:
- {role: 'mysql', tags: 'mysql'}
- {role: 'apache', tags: 'apache'}
ansible-playbook webserver.yml --tags "apache"
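Spelled out as a full playbook, the webserver.yml above might look like this (a sketch; the target host group webservers is an assumption):

```yaml
---
- hosts: webservers
  roles:
    - { role: 'mysql', tags: 'mysql' }
    - { role: 'apache', tags: 'apache' }
```

With --tags "apache", only tasks carrying the apache tag run; the mysql role is skipped entirely.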
There is no such thing in Ansible, but if this is a frequent use case for you, try this script.
Put it somewhere within your searchable PATH under name ansible-role:
#!/bin/bash
if [[ $# -lt 2 ]]; then
cat <<HELP
Wrapper script for ansible-playbook to apply single role.
Usage: $0 <host-pattern> <role-name> [ansible-playbook options]
Examples:
$0 dest_host my_role
$0 custom_host my_role -i 'custom_host,' -vv --check
HELP
exit
fi
HOST_PATTERN=$1
shift
ROLE=$1
shift
echo "Trying to apply role \"$ROLE\" to host/group \"$HOST_PATTERN\"..."
export ANSIBLE_ROLES_PATH="$(pwd)/roles"
export ANSIBLE_RETRY_FILES_ENABLED="False"
ansible-playbook "$@" /dev/stdin <<END
---
- hosts: $HOST_PATTERN
roles:
- $ROLE
END
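The heredoc at the end is the core of the wrapper: it renders a throwaway playbook and feeds it to ansible-playbook via /dev/stdin. A minimal sketch of just that rendering step, with hypothetical host pattern and role name:

```shell
#!/bin/bash
HOST_PATTERN=webservers   # hypothetical host pattern
ROLE=apache               # hypothetical role name

# Render the same throwaway playbook the wrapper pipes into ansible-playbook.
# The shell substitutes the two variables before Ansible ever sees the YAML.
PLAYBOOK=$(cat <<END
---
- hosts: $HOST_PATTERN
  roles:
    - $ROLE
END
)
echo "$PLAYBOOK"
```

Passing /dev/stdin as the playbook path works because ansible-playbook just needs a readable file argument.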
You could also check the ansible-toolbox repository. It will allow you to use something like:
ansible-role --host 192.168.0.10 --gather --user centos --become my-role
I have written a small Ansible plugin, called auto_tags, that dynamically generates for each role in your playbook a tag of the same name. You can find it here.
After installing it (instructions are in the gist above) you could then execute a specific role with:
ansible-playbook -i "192.168.0.10" --tags "name_of_role"
Have you tried that? It's super cool. I'm using an 'update-os' role instead of 'apache' to give a more meaningful example. I have a role called ./roles/update-os/, and in my project root I add a file called ./role-update-os.yml which looks like:
#!/usr/bin/ansible-playbook
---
- hosts: all
gather_facts: yes
become: yes
roles:
- update-os
Make this file executable (chmod +x role-update-os.yml). Now you can run it and limit it to whatever you have in your inventory: ./role-update-os.yml -i inventory-dev --limit 192.168.0.10. As the limit you can pass group names as well:
--limit web,db (web and db are groups defined in your inventory)
--limit 192.168.0.10,192.168.0.201
$ cat inventory-dev
[web]
192.168.0.10
[db]
192.168.0.201
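The executable-playbook trick above can be sketched end to end like this (the shebang must resolve to an installed ansible-playbook for direct invocation to work):

```shell
#!/bin/bash
# Recreate the executable playbook from the answer above.
cat > role-update-os.yml <<'END'
#!/usr/bin/ansible-playbook
---
- hosts: all
  gather_facts: yes
  become: yes
  roles:
    - update-os
END

chmod +x role-update-os.yml
# Direct invocation then becomes:
#   ./role-update-os.yml -i inventory-dev --limit web
test -x role-update-os.yml && echo "executable"
```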
Note that you can configure SSH keys and a sudoers policy so tasks execute without prompting for a password, which is ideal for automation, but this has security implications; analyze your environment to decide whether it's suitable.
Since Ansible 2.4, two options are available: import_role and include_role.
wohlgemuth@leela:~/workspace/rtmtb-ansible/kvm-cluster$ ansible localhost -m import_role -a name=rtmtb
[WARNING]: No inventory was parsed, only implicit localhost is available
localhost | CHANGED => {
"changed": true,
"checksum": "d31b41e68997e1c7f182bb56286edf993146dba1",
"dest": "/root/.ssh/id_rsa.github",
"gid": 0,
"group": "root",
"md5sum": "b7831c4c72f3f62207b2b96d3d7ed9b3",
"mode": "0600",
"owner": "root",
"size": 3389,
"src": "/home/wohlgemuth/.ansible/tmp/ansible-tmp-1561491049.46-139127672211209/source",
"state": "file",
"uid": 0
}
localhost | CHANGED => {
"changed": true,
"checksum": "1972ebcd25363f8e45adc91d38405dfc0386b5f0",
"dest": "/root/.ssh/config",
"gid": 0,
"group": "root",
"md5sum": "f82552a9494e40403da4a80e4c528781",
"mode": "0644",
"owner": "root",
"size": 147,
"src": "/home/wohlgemuth/.ansible/tmp/ansible-tmp-1561491049.99-214274671218454/source",
"state": "file",
"uid": 0
}
ansible.builtin.import_role – Import a role into a play
ansible.builtin.include_role – Load and execute a role
Yes, import_role is an Ansible module and as such it may be invoked through the ansible command. The following executes the role pki on my_server:
ansible my_server -m import_role \
-a "name=pki tasks_from=gencert" \
-e cn=etcdctl \
-e extended_key_usage=clientAuth
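For tasks_from=gencert to resolve, the role needs a correspondingly named task file under its tasks directory. A sketch of the minimal layout (the task body here is a placeholder, not the author's actual tasks):

```shell
#!/bin/bash
# Minimal role skeleton that makes "name=pki tasks_from=gencert" resolvable:
# tasks_from=gencert looks for roles/pki/tasks/gencert.yml.
mkdir -p roles/pki/tasks
cat > roles/pki/tasks/gencert.yml <<'END'
---
- name: placeholder for the real certificate tasks
  debug:
    msg: "would generate a cert for {{ cn }}"
END

ls roles/pki/tasks
```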
You can create the playbook files from the command line:
Install the role (if not already installed)
ansible-galaxy install git+https://github.com/user/apache-role.git
Create playbook and hosts files
cat >> playbook.yml <<EOL
---
- name: Run apache
hosts: all
roles:
- apache-role
EOL
cat >> hosts <<EOL
192.168.0.10
EOL
Run ansible
ansible-playbook playbook.yml -i hosts
Delete the files
rm playbook.yml hosts
Related
Could you explain why the following behaviour happens? When I try to print the remote host's IP with the following playbook, everything works as expected:
---
- hosts: centos1
tasks:
- name: Print ip address
debug:
msg: "ip: {{ansible_all_ipv4_addresses[0]}}"
When I try the equivalent ad-hoc command, it doesn't work:
ansible -i hosts centos1 -m debug -a 'msg={{ansible_all_ipv4_addresses[0]}}'
Here is the ad-hoc error:
centos1 | FAILED! => {
"msg": "The task includes an option with an undefined variable. The error was: 'ansible_all_ipv4_addresses' is undefined.
'ansible_all_ipv4_addresses' is undefined" }
I don't find any difference in both approaches that is why I was expecting both to work and print the remote IP address.
This is because no facts were gathered. With ansible-playbook, and depending on the configuration, facts are gathered automatically; with an ansible ad-hoc command they are not.
To gather them you would need to execute the setup module explicitly. See Introduction to ad hoc commands - Gathering facts.
Further Q&A
How Ansible sets variables?
Why does Ansible ad-hoc debug module not print variable?
Please take note of the variable names according to Conflict of variable name packages with ansible_facts.packages.
Could you please give an example of how to output "Your IP address is {{ ansible_all_ipv4_addresses[0] }}" using the ad-hoc approach with the setup module?
Example
ansible test.example.com -m setup -a 'filter=ansible_all_ipv4_addresses'
test.example.com | SUCCESS => {
"ansible_facts": {
"ansible_all_ipv4_addresses": [
"192.0.2.1"
]
},
"changed": false
}
or
ansible test.example.com -m setup -a 'filter=ansible_default_ipv4'
test.example.com | SUCCESS => {
"ansible_facts": {
"ansible_default_ipv4": {
"address": "192.0.2.1",
"alias": "eth0",
"broadcast": "192.0.2.255",
"gateway": "192.0.2.0",
"interface": "eth0",
"macaddress": "00:00:5e:12:34:56",
"mtu": 1500,
"netmask": "255.255.255.0",
"network": "192.0.2.0",
"type": "ether"
}
},
"changed": false
}
It is also recommended to have a look at the full output without the filter argument to get familiar with the result set and data structure.
Documentation
setup module - Examples
I took the following script from Ansible: Can I execute role from command line?:
HOST_PATTERN=$1
shift
ROLE=$1
shift
echo "To apply role \"$ROLE\" to host/group \"$HOST_PATTERN\"..."
export ANSIBLE_ROLES_PATH="$(pwd)/roles"
export ANSIBLE_RETRY_FILES_ENABLED="False"
ANSIBLE_ROLES_PATH="$(pwd)/roles" ansible-playbook "$@" /dev/stdin <<END
---
- hosts: $HOST_PATTERN
roles:
- $ROLE
END
The problem is that when I run it with ./apply.sh all dev-role -i dev-inventory, it cannot apply the role correctly. When I run ansible-playbook -i dev-inventory site.yml --tags dev-role, it works.
Below is the error message:
fatal: [my-api]: FAILED! => {"changed": false, "checksum_dest": null, "checksum_src": "d3a0ae8f3b45a0a7906d1be7027302a8b5ee07a0", "dest": "/tmp/install-amazon2-td-agent4.sh", "elapsed": 0, "gid": 0, "group": "root", "mode": "0644", "msg": "Destination /tmp/install-amazon2-td-agent4.sh is not writable", "owner": "root", "size": 838, "src": "/home/ec2-user/.ansible/tmp/ansible-tmp-1600788856.749975-487-237398580935180/tmpobyegc", "state": "file", "uid": 0, "url": "https://toolbelt.treasuredata.com/sh/install-amazon2-td-agent4.sh"}
Based on "msg": "Destination /tmp/install-amazon2-td-agent4.sh is not writable", I'd guess it is because site.yml contains a become: yes statement, which makes all tasks run as root. The "anonymous" playbook does not contain a become: declaration, so you need to either run ansible-playbook --become or add become: yes to it, like so:
ANSIBLE_ROLES_PATH="$(pwd)/roles" ansible-playbook "$@" /dev/stdin <<END
---
- hosts: $HOST_PATTERN
become: yes
roles:
- $ROLE
END
I'm attempting to use Ansible to remove some keys. The command runs successfully, but does not edit the file.
ansible all -i inventories/staging/inventory -m authorized_key -a "user=deploy state=absent key={{ lookup('file', '/path/to/a/key.pub') }}"
Running this command returns the following result:
staging1 | SUCCESS => {
"changed": false,
"comment": null,
"exclusive": false,
"key": "ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAxKjbpkqro9JhiEHrJSHglaZE1j5vbxNhBXNDLsooUB6w2ssLKGM9ZdJ5chCgWSpj9+OwYqNwFkJdrzHqeqoOGt1IlXsiRu+Gi3kxOCzsxf7zWss1G8PN7N93hC7ozhG7Lv1mp1EayrAwZbLM/KjnqcsUbj86pKhvs6BPoRUIovXYK28XiQGZbflak9WBVWDaiJlMMb/2wd+gwc79YuJhMSEthxiNDNQkL2OUS59XNzNBizlgPewNaCt06SsunxQ/h29/K/P/V46fTsmpxpGPp0Q42sCHczNMQNS82sJdMyKBy2Rg2wXNyaUehbKNTIfqBNKqP7J39vQ8D3ogdLLx6w== arthur@Inception.local",
"key_options": null,
"keyfile": "/home/deploy/.ssh/authorized_keys",
"manage_dir": true,
"path": null,
"state": "absent",
"unique": false,
"user": "deploy",
"validate_certs": true
}
The command was a success, but it doesn't show that anything changed. The key remains on the server.
Any thoughts on what I'm missing?
The behaviour you described occurs when you try to remove the authorized key using a non-root account different from deploy, i.e. one without the necessary permissions.
Add --become (-b) argument to the command:
ansible all -b -i inventories/staging/inventory -m authorized_key -a "user=deploy state=absent key={{ lookup('file', '/path/to/a/key.pub') }}"
That said, I see no justification for the ok status; the task should fail. This looks like a bug in Ansible to me; I filed an issue on GitHub.
I am deploying several Linux hosts to an openstack environment and attempting to configure them with ansible. I'm having some difficulty with the stock dynamic inventory script from https://github.com/ansible/ansible/blob/devel/contrib/inventory/openstack.py
If I run ansible with a static hosts file, everything works fine
# inventory/static-hosts
localhost ansible_connection=local
linweb01 ansible_host=10.1.1.101
% ansible linweb01 -m ping -i ./inventory/static-hosts \
--extra-vars="ansible_user=setup ansible_ssh_private_key_file=/home/ian/keys/setup.key"
linweb01 | SUCCESS => {
"changed": false,
"ping": "pong"
}
But if I use the dynamic inventory, the host isn't found
% ansible linweb01 -m ping -i ./inventory/openstack.py \
--extra-vars="ansible_user=setup ansible_ssh_private_key_file=/home/ian/keys/setup.key"
linweb01 | UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname linweb01: Name or service not known\r\n",
"unreachable": true
}
When I run the inventory script manually, the host is found and the address returned is correct
% ./inventory/openstack.py --host linweb01
[...]
"name": "linweb01",
"networks": {},
"os-extended-volumes:volumes_attached": [],
"power_state": 1,
"private_v4": "10.1.1.101",
[...]
My guess is that the inventory script doesn't know to use the "private_v4" value for the IP address, although I can't seem to find a reference for this.
How do I get ansible to use the "private_v4" value returned by the inventory script as the "ansible_host" value for the host?
A quick look at the code suggests that the IP address is expected to be in the interface_ip key:
hostvars[key] = dict(
ansible_ssh_host=server['interface_ip'],
ansible_host=server['interface_ip'],
openstack=server)
If you need a workaround, you can try adding this to your group_vars/all.yml:
ansible_host: "{{ private_v4 }}"
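As a file, the workaround would look like this (a sketch; private_v4 is the fact returned by the OpenStack inventory script, and setting ansible_ssh_host as well mirrors the two variables the inventory code populates from interface_ip):

```yaml
# group_vars/all.yml
ansible_host: "{{ private_v4 }}"
ansible_ssh_host: "{{ private_v4 }}"
```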
I'm having trouble getting my Ansible play's hosts to match the AWS dynamic groups that are coming back for my dynamic inventory. Let's break this problem down.
Given this output of ec2.py --list:
$ ./devops/inventories/dynamic/ec2.py --list
{
"_meta": {
"hostvars": {
"54.37.213.132": {
"ec2__in_monitoring_element": false,
"ec2_ami_launch_index": "0",
"ec2_architecture": "x86_64",
"ec2_client_token": "",
"ec2_dns_name": "ec2-52-37-203-132.us-west-2.compute.amazonaws.com",
"ec2_ebs_optimized": false,
"ec2_eventsSet": "",
"ec2_group_name": "",
"ec2_hypervisor": "xen",
"ec2_id": "i-d352c50b",
"ec2_image_id": "ami-63b25203",
"ec2_instance_profile": "",
"ec2_instance_type": "t2.micro",
"ec2_ip_address": "54.37.213.132",
"ec2_item": "",
"ec2_kernel": "",
"ec2_key_name": "peaker-v1-keypair",
"ec2_launch_time": "2016-03-11T20:45:44.000Z",
"ec2_monitored": false,
"ec2_monitoring": "",
"ec2_monitoring_state": "disabled",
"ec2_persistent": false,
"ec2_placement": "us-west-2a",
"ec2_platform": "",
"ec2_previous_state": "",
"ec2_previous_state_code": 0,
"ec2_private_dns_name": "ip-172-31-43-132.us-west-2.compute.internal",
"ec2_private_ip_address": "172.31.43.132",
"ec2_public_dns_name": "ec2-52-37-203-132.us-west-2.compute.amazonaws.com",
"ec2_ramdisk": "",
"ec2_reason": "",
"ec2_region": "us-west-2",
"ec2_requester_id": "",
"ec2_root_device_name": "/dev/xvda",
"ec2_root_device_type": "ebs",
"ec2_security_group_ids": "sg-824ac0e5",
"ec2_security_group_names": "peaker-v1-security-group",
"ec2_sourceDestCheck": "true",
"ec2_spot_instance_request_id": "",
"ec2_state": "running",
"ec2_state_code": 16,
"ec2_state_reason": "",
"ec2_subnet_id": "subnet-b96e1bce",
"ec2_tag_Environment": "v1",
"ec2_tag_Name": "peaker-v1-ec2",
"ec2_virtualization_type": "hvm",
"ec2_vpc_id": "vpc-5fe8ae3a"
}
}
},
"ec2": [
"54.37.213.132"
],
"tag_Environment_v1": [
"54.37.213.132"
],
"tag_Name_peaker-v1-ec2": [
"54.37.213.132"
],
"us-west-2": [
"54.37.213.132"
]
}
I should be able to write a playbook that matches some of the groups coming back:
---
# playbook
- name: create s3 bucket with policy
hosts: localhost
gather_facts: yes
tasks:
- name: s3
s3:
bucket: "fake"
region: "us-west-2"
mode: create
permission: "public-read-write"
register: s3_output
- debug: msg="{{ s3_output }}"
- name: test on remote machine
hosts: ec2
gather_facts: yes
tasks:
- name: test on remote machine
file:
dest: "/home/ec2-user/test/"
owner: ec2-user
group: ec2-user
mode: 0700
state: directory
become: yes
become_user: ec2-user
However, when I --list-hosts that match these plays it's obvious that the play hosts are not matching anything coming back:
$ ansible-playbook -i devops/inventories/dynamic/ec2/ec2.py devops/build_and_bundle_example.yml --ask-vault-pass --list-hosts
Vault password:
[WARNING]: provided hosts list is empty, only localhost is available
playbook: devops/build_and_bundle_example.yml
play #1 (localhost): create s3 bucket with policy TAGS: []
pattern: [u'localhost']
hosts (1):
localhost
play #2 (ec2): test on remote machine TAGS: []
pattern: [u'ec2']
hosts (0):
The quick fix for what you're doing:
change hosts: localhost in your playbook to hosts: all
It would never work with just a dynamic inventory if you're going to keep hosts: localhost in your playbook.
If so, you must combine dynamic & static inventories. Create a file at ./devops/inventories/dynamic/static.ini (on the same level as ec2.py and ec2.ini) and put in this content:
[localhost]
localhost
[ec2_tag_Name_peaker_v1_ec2]
[aws-hosts:children]
localhost
ec2_tag_Name_peaker_v1_ec2
After that, you will be able to run a quick check:
ansible -i devops/inventories/dynamic/ec2 aws-hosts -m ping
and your playbook itself:
ansible-playbook -i devops/inventories/dynamic/ec2 \
devops/build_and_bundle_example.yml --ask-vault-pass
NOTE: devops/inventories/dynamic/ec2 is a path to a folder, but it will be automatically resolved into a hybrid dynamic & static inventory with access to the aws-hosts group name.
In fact, this isn't the best use of inventory, but it's important to understand that by combining dynamic & static inventories you're just appending new group names for a particular dynamic host:
ansible -i devops/inventories/dynamic/ec2 all -m debug \
-a "var=hostvars[inventory_hostname].group_names"