I'm working in this environment:
ansible version = 2.5.1
python version = 2.7.17
awx version = 8.0.0
I'm trying to migrate my Ansible project to AWX (I installed AWX via Docker).
I have two hosts, called ansible-server and k8s-master.
I originally fetched k8s-master's config.yaml to ansible-server with the playbook below.
The fetch module worked when I ran the playbook from the CLI (the file arrived on ansible-server):
- name: Fetch config.yaml from K8S-Master
  hosts: K8S-Master
  become: yes
  tasks:
    - name: Fetch the file to the controller
      fetch:
        src: /home/vagrant/config.yaml
        dest: /home/vagrant/config.yaml
        flat: true
But when I run the same playbook from AWX, the job reports success, yet no file appears on ansible-server.
This is my inventory on AWX:
Inventory: Demo-Inventory
Groups: Ansible-Server, K8S-Master
Hosts (one per group): 192.168.32.145, 192.167.32.150
I added each host's IP to its respective group.
In Settings -> Jobs, I've set the job execution path to
/home/vagrant/
and "extra environment variables" to
{
  "HOME": "/home/vagrant/"
}
I've also checked these directories:
/var/lib/awx/projects
/home/awx
I'm not sure how to inspect the AWX container's file system.
I'll add more information if needed.
I found out that the file ends up in the container's directory
/var/lib/docker/overlay2/RANDOMCODE/merged/home/config.yaml
How can I get that file out (with a shell script or another playbook)?
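In AWX the "controller" is the task container, so fetch delivers the file inside the container rather than onto ansible-server. One workaround, sketched below under that assumption: fetch the file to a temporary path on the controller, then push it to ansible-server in a second play within the same job. The group names come from the inventory above; the temp path is illustrative.

---
- name: Fetch config.yaml from K8S-Master to the controller
  hosts: K8S-Master
  become: yes
  tasks:
    - name: Pull the file into the AWX task container
      fetch:
        src: /home/vagrant/config.yaml
        dest: /tmp/config.yaml
        flat: true

- name: Push the fetched file to ansible-server
  hosts: Ansible-Server
  tasks:
    - name: Copy from the controller to ansible-server
      copy:
        src: /tmp/config.yaml
        dest: /home/vagrant/config.yaml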
I am using Ansible 2.9.
We set it up in a virtual environment, so we do not have an ansible.cfg file.
So I am creating an ansible.cfg file in the directory where my playbook is kept, to follow the precedence:
ANSIBLE_CONFIG (an environment variable)
ansible.cfg (in the current directory)
.ansible.cfg (in the home directory)
/etc/ansible/ansible.cfg
Issue: When I run my playbook from my ADO pipeline, the ansible.cfg is not picked up and Ansible reports "No config file found; using defaults".
Note: I am importing the playbook from another repository, and the playbook is executed for app deployment using the Azure DevOps Ansible task.
Flow: the application code has a playbook which imports a playbook from a repository called 'R1', from a directory called D1 (D1/common_play_book.yml),
and alongside that playbook I created an ansible.cfg file inside the directory D1.
(I also tried keeping ansible.cfg inside the application code's directory, next to the playbook that imports the playbook from the other repository, but this does not work either.)
Am I missing something here?
PS: Adding more structure details
Application Repository called "app_repo"
1. azure-pipelines.yml --> the Azure DevOps Ansible task for deployment uses the playbook defined on the application side, app_playbook.yml (it imports the centralized playbook playbook_from_lib.yml from the other repo; all other apps use it too)
2. Structure
my_app (root)
- app_playbook.yml
- azure-pipelines.yml
Centralized playbook repository called: "playbook_lib"
1. This is imported by the application-side playbook (i.e. in app_playbook.yml)
2. Structure
playbooks(root)
- spring-app-playbooks
- playbook_from_lib.yml
Next I tried the ANSIBLE_CONFIG environment variable, set at play level as below, but that did not work either (an export is required here, but which path should I export, since my ansible.cfg is not in a static location?):
---
- name: Deploying an application
  hosts: "{{ HOSTS }}"
  force_handlers: true
  gather_facts: false
  environment:
    ANSIBLE_CONFIG: "ansible.cfg"
For context: I am running my playbook from an Azure DevOps YAML pipeline with the help of the Ansible task.
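One thing to note: the play-level environment: keyword only sets variables for task execution on the targets; the ansible-playbook process reads ANSIBLE_CONFIG from its own environment at startup, so setting it inside the play has no effect on config loading. A minimal sketch of a workaround, assuming a plain script step in place of the Ansible task and assuming the checkout lands under $(System.DefaultWorkingDirectory):

# azure-pipelines.yml step; the ansible.cfg path is an assumption
- script: |
    export ANSIBLE_CONFIG=$(System.DefaultWorkingDirectory)/D1/ansible.cfg
    ansible-playbook app_playbook.yml -i inventory
  displayName: Run playbook with an explicit ansible.cfg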
I'm in the process of migrating Ansible playbooks into Ansible AWX projects.
Previously I'd checkout the Ansible playbook from git, then run it from the command line.
In this specific case I have an Ansible playbook that creates VMware virtual machines. I use the following tasks to gather information about the git repo and the current git commit hash, and put this info in the VM annotations so that it can later be used to identify the exact instructions used to create the VM.
- name: return git commit hash
  command: git rev-parse HEAD
  register: gitCommitHash
  delegate_to: localhost
  become: false
  become_user: "{{ lookup('env','USER') }}"

- name: get remote git repo
  command: git config --get remote.origin.url
  register: gitRemote
  delegate_to: localhost
  become: false
  become_user: "{{ lookup('env','USER') }}"
I realize that playbooks in AWX run as the awx user.
Is there any way, in a playbook, to get the AWX user that is running the AWX template, and can I get the URL of the Ansible AWX project?
Update
I found I can get the AWX user that is running the template via {{ awx_user_name }}, but I have not yet found out how to get the git remote URL of the project/playbook.
I ran a job template with a simple playbook:
---
- name: Debug AWX
  hosts: all
  tasks:
    - debug:
        var: vars
And in the output, I could see these variables:
"awx_inventory_id": 1,
"awx_inventory_name": "Demo Inventory",
"awx_job_id": 23,
"awx_job_launch_type": "manual",
"awx_job_template_id": 10,
"awx_job_template_name": "Debug AWX",
"awx_project_revision": "3214f37f271ad589f7a63d12c3d1ef9fa0972d91",
"awx_user_email": "admin#example.com",
"awx_user_first_name": "",
"awx_user_id": 1,
"awx_user_last_name": "",
"awx_user_name": "admin",
So, no, you don't get the AWX project URL from the job, but you do get the job template ID.
You could query the API for the job template details, then query the referenced project's details to get the URL. I would suggest using the awx CLI for these steps.
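For illustration, a minimal sketch of that two-step lookup using the uri module against the AWX REST API; the AWX URL and credentials below are assumptions, and awx_job_template_id comes from the job variables shown above:

- name: Resolve the project's git URL via the AWX API
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Get the job template to find its project ID
      uri:
        url: "https://awx.example.com/api/v2/job_templates/{{ awx_job_template_id }}/"
        user: admin                      # assumption
        password: "{{ awx_password }}"   # assumption
        force_basic_auth: true
        return_content: true
      register: jt

    - name: Get the project's SCM URL
      uri:
        url: "https://awx.example.com/api/v2/projects/{{ jt.json.project }}/"
        user: admin
        password: "{{ awx_password }}"
        force_basic_auth: true
        return_content: true
      register: proj

    - debug:
        var: proj.json.scm_url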
I'm trying to move all files under a specific remote directory to another remote directory, on the same remote host, using Ansible's copy module.
The directory and files do exist on the remote host, and I thought I'd use the copy module with remote_src: yes to achieve this.
However, so far I have been running into unforeseen issues with this approach. Any help is appreciated, thank you!
Task of concern
- name: copy remote to remote (same host)
  copy:
    src: "{{ item }}"
    dest: "{{ dir_base_path }}/go/to/my/nested/path"
    remote_src: yes
    owner: "{{ owner }}"
    group: "{{ group }}"
    mode: preserve
  with_fileglob:
    - "{{ dir_base_path }}/stay/at/parent_dir/*"
  when: status.changed and dir.stat.exists
Remote directory structure
parent_dir/
  all-the-files-I-need
  nested_directory/
    need-to-copy-files-here
Error Observed
TASK [playbook: copy remote to remote (same host)] ****************************************************************************************
[WARNING]: Unable to find 'base_path/stay/at/parent_dir/' in expected paths (use -vvvvv to see paths)
Version Information
ansible --version
ansible 2.7.10
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.6/dist-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.6.7 (default, Oct 22 2018, 11:32:17) [GCC 8.2.0]
As per the documentation for the fileglob lookup: "Matching is against local system files on the Ansible controller. To iterate a list of files on a remote node, use the find module."
Refer: https://docs.ansible.com/ansible/latest/plugins/lookup/fileglob.html
You can first use the find module to locate the files on the remote host, register the result, and then copy those files.
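A minimal sketch of that approach, reusing the paths and variables from the question (the original when: condition can be kept on both tasks):

- name: Find all files in the remote parent directory
  find:
    paths: "{{ dir_base_path }}/stay/at/parent_dir"
  register: found_files

- name: Copy each found file to the nested path (remote to remote)
  copy:
    src: "{{ item.path }}"
    dest: "{{ dir_base_path }}/go/to/my/nested/path/"
    remote_src: yes
    owner: "{{ owner }}"
    group: "{{ group }}"
    mode: preserve
  loop: "{{ found_files.files }}"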
I am running Ansible Tower v3.4.1 with Ansible v2.7.6 on an Ubuntu 16.04 VM running on VirtualBox. I run a playbook that works when I run it from the command line using ansible-playbook, but it fails when I try to run it from Ansible Tower. I know I must have something misconfigured in Tower, but I can't find it.
I get this warning no matter what changes I make to the inventory (hosts) file:
ansible-playbook 2.7.6
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/var/lib/awx/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/dist-packages/ansible
executable location = /usr/bin/ansible-playbook
python version = 2.7.12 (default, Nov 12 2018, 14:36:49) [GCC 5.4.0 20160609]
Using /etc/ansible/ansible.cfg as config file
SSH password:
/tmp/awx_74_z6yJB4/tmpVlXGCX did not meet host_list requirements, check plugin documentation if this is unexpected
Parsed /tmp/awx_74_z6yJB4/tmpVlXGCX inventory source with script plugin
PLAYBOOK: addpool.yaml *********************************************************
1 plays in addpool.yaml
[WARNING]: Could not match supplied host pattern, ignoring: bigip
PLAY [Sample pool playbook] ****************************************************
skipping: no hosts matched
I have enabled inventory plugins for YAML, and made my hosts file into a hosts.yml file.
Here's my hosts file:
192.168.68.253
192.168.68.254
192.168.1.165
[centos]
dad2 ansible_ssh_host=192.168.1.165
[bigip]
bigip1 ansible_host=192.168.68.254
bigip2 ansible_host=192.168.68.253
Here's my playbook:
---
- name: Sample pool playbook
  hosts: bigip
  connection: local
  tasks:
    - name: create web servers pool
      bigip_pool:
        name: web-servers2
        lb_method: ratio-member
        password: admin
        user: admin
        server: '{{ inventory_hostname }}'
        validate_certs: no
I replaced hosts: bigip with hosts: all and specified the inventory in Tower as bigip, which contains only the two hosts I want to change. This seems to produce the output I am looking for.
For the ansible-playbook command line, I added --limit bigip, and this also produces the output I am looking for.
So things appear to be working; I just don't know whether this is best-practice use.
If you get the error below while running a playbook with the command
ansible-playbook -i test-project/inventory.txt playbook.yml
{"changed": false, "msg": "Failed to connect to the host via ssh: ssh: connect to host 172.31.24.10 port 22: Connection timed out", "unreachable": true}
The solution is to add the following in the file ansible.cfg:
[defaults]
inventory=/etc/ansible/hosts
I think you need to remove connection: local.
You have specified in hosts: bigip that you want these tasks to run only on hosts in the bigip group. You then specify connection: local, which causes the tasks to run on the controller node (i.e. localhost) rather than on the nodes in the bigip group. Localhost is not a member of the bigip group, and so none of the tasks in the play will trigger.
Check for special characters in the absolute path of the hosts file or playbook. In case you copied the path directly from PuTTY, try copying and pasting it from Notepad or another editor.
For me the issue was the format of the /etc/ansible/hosts file. You should use the :children suffix in order to use groups of groups, like this:
[dev1]
dev_1 ansible_ssh_host=192.168.1.55 ansible_connection=ssh ansible_ssh_user={{username}} ansible_ssh_pass={{password}}
[dev2]
dev_2 ansible_ssh_host=192.168.1.68 ansible_connection=ssh ansible_ssh_user={{username}} ansible_ssh_pass={{password}}
[devs:children]
dev1
dev2
I'm setting up an Ansible playbook to set up a couple of servers. There are a couple of tasks that I only want to run if the current host is my local dev host, named "local" in my hosts file. How can I do this? I can't find it anywhere in the documentation.
I've tried this when statement, but it fails because ansible_hostname resolves to the hostname generated when the machine is created, not the one you define in your hosts file:
- name: Install this only for local dev machine
  pip:
    name: pyramid
  when: ansible_hostname == "local"
The necessary variable is inventory_hostname.
- name: Install this only for local dev machine
  pip:
    name: pyramid
  when: inventory_hostname == "local"
It is somewhat hidden in the documentation at the bottom of this section.
You can limit the scope of a playbook by changing the hosts header in its plays, without relying on the special host label 'local' in your inventory. Localhost does not need a special line in inventories.
- name: run on all except local
  hosts: all:!local
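Conversely, to run a play only on the local dev host, you can target its inventory label directly; a minimal sketch using the pip task from the question:

- name: run only on the local dev machine
  hosts: local
  tasks:
    - name: Install this only for local dev machine
      pip:
        name: pyramid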
This is an alternative:
- name: Install this only for local dev machine
  pip: name=pyramid
  delegate_to: localhost