Pass variables from project to playbooks in Ansible

This is my setup: I have created an SCM-type Git project and keep my code there. My playbook is in that repository as well, and it contains the docker build and run commands. To build my Docker image I have to run the build command in the directory where my Dockerfile is located (in this case where the Ansible project is cloned, /var/lib/awx/project). I want to pass that path to my Ansible playbook.
My playbook looks like this:
---
- hosts: all
  sudo: yes
  remote_user: ubuntu
  gather_facts: no
  tasks:
    - name: build docker
      become: yes
      become_user: root
      command: docker build -t "test-api" .
    - name: run docker
      become: yes
      become_user: root
      command: docker run -it -p 80:9001 --name api test-api
How can I achieve this?

You can send the variable to the playbook when you execute the ansible-playbook command. This is what you could do:
ansible-playbook my-playbook.yml -e "path=/var/lib/awx/project"
Then just use it in the playbook as a normal variable: {{ path }}
This is useful if you later decide to change the path. If you have any questions about this, feel free to ask in the comments.
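To make this concrete, the extra variable can be used as the working directory for the build task via the `chdir` argument of the `command` module. A minimal sketch based on the playbook above (the variable name `path` matches the `-e` flag in the answer):

```yaml
---
- hosts: all
  remote_user: ubuntu
  gather_facts: no
  tasks:
    - name: build docker in the cloned project directory
      become: yes
      become_user: root
      # "path" is supplied on the command line: -e "path=/var/lib/awx/project"
      command: docker build -t "test-api" .
      args:
        chdir: "{{ path }}"
```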

You can make use of vars for defining the variables:
---
- hosts: all
  sudo: yes
  remote_user: ubuntu
  gather_facts: no
  vars:
    file_path: "<your file path>"
  tasks:
    - name: build docker
      become: yes
      become_user: root
      command: docker build -t "test-api" .
    - name: run docker
      become: yes
      become_user: root
      command: docker run -it -p 80:9001 --name api test-api
Then access that variable in your commands as "{{ file_path }}".

Related

How to execute ansible playbook with sudo privilege

I've got a problem running ansible-playbook. See my playbook below:
---
- hosts: some_group
  remote_user: someuser
  become: true
  become_method: sudo
  tasks:
    - name: Copy file to remote nodes
      copy: src=/root/ansible/someimage dest=/home/someuser/
    - name: Load exported file of nginx image
      command: docker load -i /home/someuser/someimage
The command in terminal is:
ansible-playbook test.yml --ask-pass -K
The Ansible version is 2.0.0.2.
The error is : "stderr": "Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock:
Make sure you understand the limitations of becoming an unprivileged user. I would try to avoid this.
Instead, you can work as a privileged user; you just have to fix the file permissions.
---
- hosts: some_group
  become: true
  tasks:
    - name: Copy file to remote nodes
      copy: src=/root/ansible/someimage dest=/home/someuser/someimage
    - name: Set permissions
      file:
        dest: /home/someuser/someimage
        owner: someuser
        group: someuser
        mode: 0644
    - name: Load exported file of nginx image
      command: docker load -i /home/someuser/someimage

how to define login user and become root in playbook

My login user is user1 and I want to execute the playbook as root. How can I do this? It does not work if I use the command line like this:
ansible-playbook main.yaml -i hosts --user=git -k --become-user=root --ask-become-pass --become-method=su
Please tell me how to implement this.
- name: Install and Configure IEM
  hosts: rhel
  ansible_become: yes
  ansible_become_method: su
  ansible_become_user: root
  ansible_become_pass: passw0rd
  tasks:
    - name: Creating masthead file path
      file: path=/etc/opt/BESClient state=directory
    - name: Creating install directory
I use:
deploy.yml
- name: Todo something
  hosts: all
  become: yes
  become_user: root
  become_method: su
When you execute the playbook, pass the password as an extra var:
--extra-vars='ansible_become_pass=password'
From the Ansible docs:
you can set these in the playbook, as #Raul-Hugo did, with become and become_user;
alternatively, it can also be done in the inventory, which allows setting it per host or group. But then the variables get an "ansible_" prefix: ansible_become_user, ansible_become_method, etc. That's why the playbook you gave in your question did not work: it used the variable names that belong in the inventory.
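To illustrate the inventory variant, the "ansible_"-prefixed variables go next to a host or group (the hostname below is a placeholder):

```ini
[rhel]
server1.example.com

[rhel:vars]
ansible_become=yes
ansible_become_method=su
ansible_become_user=root
```

With these set in the inventory, the playbook itself needs no become keywords at all.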
You can become root as shown below and install the packages:
tasks:
  - name: install apache package
    become: yes
    become_user: root
    yum:
      name: httpd
      state: present
  - name: ensure apache is running
    become: yes
    become_user: root
    service:
      name: httpd
      state: started
All the above answers caused Ansible to try to log in as root from the beginning, but in this case the requested user is git, so the example below worked for me:
- name: Install and Configure IEM
  hosts: rhel
  remote_user: git
  become: yes # when not specifying `become_user`, it is "root"
  tasks:
    - name: Creating masthead file path
      file: path=/etc/opt/BESClient state=directory
This will cause Ansible to log in as git and, after login, switch to root.

Ansible Shell Cannot Pull Docker Image

I've run into issues pulling Docker images from a private DockerHub repo using the Docker module of Ansible, so to sanity-check that code I decided to try pulling the image in question first using the shell. This also fails. What's going on here? If I SSH onto the box, I can run exactly the same command in the shell and it works, pulling the right image.
Isolated example play:
---
- hosts: <host-ip>
  gather_facts: True
  remote_user: ubuntu
  sudo: yes
  tasks:
    - include_vars: vars/client_vars.yml
    - name: Pull stardog docker image [private]
      shell: sudo docker pull {{ stardog_docker_repo }}
    - name: Tag stardog docker image [private]
      shell: sudo docker tag {{ stardog_docker_repo }} stardog_tag
The error that's being output is:
failed: [<host-ip>] => {"changed": true, "cmd": "sudo docker pull <org>/<image>:latest", "delta": "0:00:01.395931", "end": "2015-08-05 17:35:22.480811", "rc": 1, "start": "2015-08-05 17:35:21.084880", "warnings": []}
stderr: Error: image <org>/<image>:latest not found
stdout: Pulling repository <org>/<image>
FATAL: all hosts have already failed -- aborting
NB: I've sanitised my <org> and <image>, but rest assured the image identifier in the playbook and in the error log exactly match the image that I can successfully pull in the shell over SSH by doing:
$ sudo docker pull <org>/<image>:latest
I'm aware of various GitHub issues (like this one I had when using the Docker module), patches et cetera related to the docker-py library, but the thing here is I'm just using the Ansible shell module. What have I missed?
A colleague of mine pointed out something: if you log the env, you find that sudo: yes makes root run the docker commands by default, and thus the ubuntu user's Docker credentials are not picked up. This playbook works (assuming you have a valid dockercfg.json in the docker folder, relative to this playbook):
---
- hosts: <host-ip>
  gather_facts: True
  remote_user: ubuntu
  sudo: yes
  tasks:
    - include_vars: vars/client_vars.yml
    # run the docker tasks
    - name: Add docker credentials for root user
      copy: src=docker/dockercfg.json dest=/root/.dockercfg
    - name: Get env
      shell: sudo env
      register: sudo_env
    - name: Debug
      debug: msg="{{ sudo_env }}"
    - name: Pull stardog docker image [private]
      shell: docker pull {{ stardog_docker_repo }}
    - name: Tag stardog docker image [private]
      shell: docker tag {{ stardog_docker_repo }} stardog_tag
This gives root the right DockerHub creds. Alternatively, you can set sudo: false on the play and use sudo inline on each shell call, so the tasks run as the ubuntu user.
You should now use Ansible's docker_container module (or docker_image for pulling images) instead. That way you don't need to run sudo in a shell task.
http://docs.ansible.com/ansible/docker_container_module.html
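For pulling an image specifically, the docker_image module fits better than a shell task. A sketch under the assumption that docker-py is installed on the target and `stardog_docker_repo` is defined as in the question:

```yaml
- name: Pull stardog docker image [private]
  become: yes
  docker_image:
    name: "{{ stardog_docker_repo }}"
    source: pull   # on older Ansible versions, use state: present instead
```

Registry credentials can then be handled declaratively with the docker_login module rather than by copying a dockercfg file around.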

template task: write to root owned directory

I want to copy a template-generated file to the /etc/init.d folder, but the template task doesn't seem to support a sudo parameter.
What is the recommended way to handle this? Should I copy it to a temporary directory and then move the file with sudo?
The playbook task looks as shown below. Ansible version is 1.8.2.
- name: copy init script
  template: src=template/optimus_api_service.sh dest=/etc/init.d/optimus-api mode=0755 force=yes owner=root group=root
I have tested the following playbook and it works.
My setup:
The user vagrant on the machine vm is allowed to execute commands password-free with sudo.
I created a simple template and installed it with the following playbook:
---
- name: Test template
  hosts: vm
  gather_facts: no
  remote_user: vagrant
  vars:
    bla: blub # some variable used in the template
  tasks:
    - name: copy init script
      sudo: yes # << you have to activate sudo
      sudo_user: root # << and provide the user
      template: src=template/test.j2 dest=/opt/test mode=0755 force=yes owner=root group=root
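On Ansible 1.9 and later, the deprecated sudo/sudo_user keywords used above can be replaced by become/become_user. The same task would then read (a sketch, with the same template and destination assumed):

```yaml
- name: copy init script
  become: yes        # replaces the deprecated sudo: yes
  become_user: root  # replaces sudo_user: root
  template: src=template/test.j2 dest=/opt/test mode=0755 force=yes owner=root group=root
```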

Ansible 1.9.1 'become' and sudo issue

I am trying to run an extremely simple playbook to test a new Ansible setup.
When using the 'new' Ansible Privilege Escalation config options in my ansible.cfg file:
[defaults]
host_key_checking=false
log_path=./logs/ansible.log
executable=/bin/bash
#callback_plugins=./lib/callback_plugins
######
[privilege_escalation]
become=True
become_method='sudo'
become_user='tstuser01'
become_ask_pass=False
[ssh_connection]
scp_if_ssh=True
I get the following error:
fatal: [webserver1.local] => Internal Error: this module does not support running commands via 'sudo'
FATAL: all hosts have already failed -- aborting
The playbook is also very simple:
# Checks the hosts provisioned by midrange
---
- name: Test su connecting as current user
  hosts: all
  gather_facts: no
  tasks:
    - name: "sudo to configured user -- tstuser01"
      #action: ping
      command: /usr/bin/whoami
I am not sure if there is something broken in Ansible 1.9.1 or if I am doing something wrong. Surely the 'command' module in Ansible allows running commands as sudo.
The issue is with the configuration; I took it as an example too and got the same problem. After playing a while I noticed that the following works:
1) deprecated sudo:
---
- hosts: all
  sudo: yes
  gather_facts: no
  tasks:
    - name: "sudo to root"
      command: /usr/bin/whoami
2) new become:
---
- hosts: all
  become: yes
  become_method: sudo
  gather_facts: no
  tasks:
    - name: "sudo to root"
      command: /usr/bin/whoami
3) using ansible.cfg:
[privilege_escalation]
become = yes
become_method = sudo
and then in a playbook:
---
- hosts: all
  gather_facts: no
  tasks:
    - name: "sudo to root"
      command: /usr/bin/whoami
Since you are "becoming" tstuser01 (not root, as I am), please experiment a bit; the user name probably should not be quoted either:
become_user = tstuser01
At least this is the way I define remote_user in ansible.cfg and it works. My issue is resolved; I hope yours is too.
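Putting these observations together, the corrected [privilege_escalation] section from the question would look like this (unquoted values throughout; a sketch based on the answer above):

```ini
[privilege_escalation]
become = True
become_method = sudo
become_user = tstuser01
become_ask_pass = False
```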
I think you should use the sudo directive in the hosts section, so that subsequent tasks run with sudo privileges unless you explicitly specify sudo: no in a task.
Here's your playbook that I've modified to use sudo directive.
# Checks the hosts provisioned by midrange
---
- hosts: all
  sudo: yes
  gather_facts: no
  tasks:
    - name: "sudo to configured user -- tstuser01"
      command: /usr/bin/whoami
