I've run into issues pulling Docker images from a private DockerHub repo using Ansible's Docker module, so to sanity-check that code I decided to try pulling the image in question with the shell module first. This also fails. What's going on here? If I SSH onto the box, I can run exactly the same command in the shell and it works, pulling the right image.
Isolated example play:
---
- hosts: <host-ip>
  gather_facts: True
  remote_user: ubuntu
  sudo: yes
  tasks:
    - include_vars: vars/client_vars.yml
    - name: Pull stardog docker image [private]
      shell: sudo docker pull {{stardog_docker_repo}}
    - name: Tag stardog docker image [private]
      shell: sudo docker tag {{stardog_docker_repo}} stardog_tag
The error that's being output is:
failed: [<host-ip>] => {"changed": true, "cmd": "sudo docker pull <org>/<image>:latest", "delta": "0:00:01.395931", "end": "2015-08-05 17:35:22.480811", "rc": 1, "start": "2015-08-05 17:35:21.084880", "warnings": []}
stderr: Error: image <org>/<image>:latest not found
stdout: Pulling repository <org>/<image>
FATAL: all hosts have already failed -- aborting
NB: I've sanitised <org> and <image>, but rest assured the image identifier in the playbook and in the error logging exactly matches the image that I can successfully pull in a shell over SSH by doing:
$ sudo docker pull <org>/<image>:latest
I'm aware of various GitHub issues (like this one I had when using the Docker module), patches et cetera related to the docker-py library, but the thing here is I'm just using the Ansible shell module. What have I missed?
A colleague of mine pointed out something: if you log the env, you find that sudo: yes makes root run the docker commands by default, and thus the ubuntu user's Docker credentials are not picked up. This playbook works (assuming you have a valid dockercfg.json in the docker folder, relative to this playbook):
---
- hosts: <host-ip>
  gather_facts: True
  remote_user: ubuntu
  sudo: yes
  tasks:
    - include_vars: vars/client_vars.yml
    # run the docker tasks
    - name: Add docker credentials for ubuntu user
      copy: src=docker/dockercfg.json dest=/root/.dockercfg
    - name: Get env
      shell: sudo env
      register: sudo_env
    - name: Debug
      debug: msg="{{sudo_env}}"
    - name: Pull stardog docker image [private]
      shell: docker pull {{stardog_docker_repo}}
    - name: Tag stardog docker image [private]
      shell: docker tag {{stardog_docker_repo}} stardog_tag
This gives root the right DockerHub creds. Alternatively, you can add sudo: false to each of the plays and use sudo inline on each shell call to run as the ubuntu user.
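A minimal sketch of that alternative, reusing the task names and variable from the playbook above (whether sudo: false is honoured per task depends on your Ansible version, so treat this as an assumption):

    - name: Pull stardog docker image [private]
      shell: sudo docker pull {{stardog_docker_repo}}
      sudo: false
    - name: Tag stardog docker image [private]
      shell: sudo docker tag {{stardog_docker_repo}} stardog_tag
      sudo: false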
You should use Ansible's docker_container module to pull the image now; that way you don't need to run sudo in a shell task.
http://docs.ansible.com/ansible/docker_container_module.html
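A minimal sketch of what that might look like, assuming the play already escalates privileges (become/sudo) and reusing the image variable from the question; the container name here is invented:

- name: Run stardog container (pulls the image if it is not present locally)
  docker_container:
    name: stardog
    image: "{{ stardog_docker_repo }}"
    state: started

If you only need to pull and tag an image without starting a container, the docker_image module is the pull-only counterpart (in newer Ansible releases it takes a source: pull option).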
I've got a CentOS cluster where /home is going to get mounted over NFS, so I think the centos user's home should be moved somewhere that will remain local, maybe /var/lib/centos or something. But given that centos is the ansible_user, I can't use:
- hosts: cluster
  become: yes
  tasks:
    - ansible.builtin.user:
        name: centos
        move_home: yes
        home: "/var/lib/centos"
as unsurprisingly it fails with
usermod: user centos is currently used by process 45028
Any semi-tidy workarounds for this, or better ideas?
I don't think you're going to be able to do this with the user module if you're connecting as the centos user. However, if you handle the individual steps yourself it should work:
---
- hosts: centos
  gather_facts: false
  become: true
  tasks:
    - command: rsync -a /home/centos/ /var/lib/centos/
    - command: sed -i 's,/home/centos,/var/lib/centos,' /etc/passwd
      args:
        warn: false
    - meta: reset_connection
    - command: rm -rf /home/centos
      args:
        warn: false
This relocates the home directory, updates /etc/passwd, and then removes the old home directory. The reset_connection is in there to force a new ssh connection: without that, Ansible will be unhappy when you remove the home directory.
In practice, you'd want to add some logic to the above playbook to make it idempotent.
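As a rough, untested sketch of one way to do that (the grep guard and the removes argument are my assumptions about how to detect that the move has already happened):

---
- hosts: centos
  gather_facts: false
  become: true
  tasks:
    - name: Check whether /etc/passwd already points at the new home
      command: grep -q /var/lib/centos /etc/passwd
      register: home_moved
      changed_when: false
      failed_when: false
    - command: rsync -a /home/centos/ /var/lib/centos/
      when: home_moved.rc != 0
    - command: sed -i 's,/home/centos,/var/lib/centos,' /etc/passwd
      when: home_moved.rc != 0
    - meta: reset_connection
    - command: rm -rf /home/centos
      args:
        removes: /home/centos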
This is my setup: I have created an SCM-type git project and have my code there. My playbook is in that repository as well, and it contains docker build and run commands. In order to build my Docker image I need to execute the build command where my Dockerfile is located (in this case where the Ansible project is cloned, /var/lib/awx/project). I want to get that path into my Ansible playbook.
My playbook looks like this:
---
- hosts: all
  sudo: yes
  remote_user: ubuntu
  gather_facts: no
  tasks:
    - name: build docker
      become: yes
      become_user: root
      command: docker build -t "test-api" .
    - name: run docker
      become: yes
      become_user: root
      command: docker run -it -p 80:9001 --name api test-api
How can I achieve this?
You can send the variable to the playbook when you execute the ansible-playbook command. This is what you could do:
ansible-playbook my-playbook.yml -e "path=/var/lib/awx/project"
Then just use it in the playbook as a normal variable: {{ path }}
This is useful if you decide to change the path later. If you have any questions about this, feel free to ask in the comments.
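For example, a sketch of how the build task from the question could consume it, using the command module's chdir argument (everything else mirrors the playbook above):

    - name: build docker
      become: yes
      become_user: root
      command: docker build -t "test-api" .
      args:
        chdir: "{{ path }}"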
You can make use of vars to define the variable:
---
- hosts: all
  sudo: yes
  remote_user: ubuntu
  gather_facts: no
  vars:
    file_path: "<your file path>"
  tasks:
    - name: build docker
      become: yes
      become_user: root
      command: docker build -t "test-api" .
    - name: run docker
      become: yes
      become_user: root
      command: docker run -it -p 80:9001 --name api test-api
And in your command, access that variable as "{{ file_path }}".
I've been banging my head on this one for most of the day; I've tried everything I could without success, even with the help of my sysadmin. (Note that I am not at all an Ansible expert; I discovered it today.)
Context: I'm trying to implement continuous integration of a Java service via GitLab. On a push, a pipeline will run tests, package the JAR, then run an Ansible playbook to stop the existing service, replace the JAR and launch the service again. We have that for production in Google Cloud, and it works fine. I'm trying to add an extra step before that, to do the same on localhost.
And I just can't understand why Ansible fails to do a "sudo service XXXX stop|start". All I get is:
fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "module_stderr": "Sorry, try again.\n[sudo via ansible, key=nbjplyhtvodoeqooejtlnhxhqubibbjy] password: \nsudo: 1 incorrect password attempt\n", "module_stdout": "", "msg": "MODULE FAILURE", "rc": 1}
Here is the GitLab pipeline stage that I call:
indexer-integration:
  stage: deploy integration
  script:
    - ansible-playbook -i ~/git/ansible/inventory deploy_integration.yml --vault-password-file=/home/gitlab-runner/vault.txt
  when: on_success
vault.txt contains the vault encryption password. Here is deploy_integration.yml:
---
- name: deploy integration saleindexer
  hosts: localhost
  gather_facts: no
  user: test-ccc  # this is the user that I created as a test
  connection: local
  vars_files:
    - /home/gitlab-runner/secret.txt  # holds the sudo password
  tasks:
    - name: Stop indexer
      service: name=indexer state=stopped
      become: true
      become_user: root
    - name: Clean JAR
      become: true
      become_user: root
      file:
        state: absent
        path: '/PATH/indexer-latest.jar'
    - name: Copy JAR
      become: true
      become_user: root
      copy:
        src: 'target/indexer-latest.jar'
        dest: '/PATH/indexer-latest.jar'
    - name: Start indexer
      service: name=indexer state=started
      become: true
      become_user: root
The user 'test-ccc' is another user that I created (part of the root group and in the sudoers file) to make sure it was not an issue related to the gitlab-runner user (and because apparently no one here remembers the sudo password of that user xD).
I've tried a lot of things, including:
shell: echo 'password' | sudo -S service indexer stop
which works on the command line. But when executed by Ansible, all I get is a prompt asking me to enter the sudo password.
Thanks
Edit, per comment request: the secret.txt has:
ansible_become_pass: password
When using that user on the command line (su user, then sudo service start ....) and entering that password when prompted, it works fine. The problem, I believe, is that either Ansible always prompts for the password, or the password is not properly passed to the task.
The sshd_config has a line 'PermitRootLogin yes'
OK, thanks to a response (now deleted) from techraf, I noticed that the line
user: test-ccc
is actually useless; everything was still run by the 'gitlab-runner' user. So I:
put all my actions in a script, postbuild.sh
added gitlab-runner to the sudoers file and gave it NOPASSWD for that script:
gitlab-runner ALL=(ALL) NOPASSWD:/home/PATH/postbuild.sh
removed everything about passing the password and the secret from the Ansible task, and used instead:
shell: sudo -S /home/PATH/postbuild.sh
So that works: the script is executed and the service is stopped/started. I'll mark this as answered, even though using service: name=indexer state=started and giving NOPASSWD:ALL for the user still caused an error (the one in my comment on the question). If anyone can shed light on that in the comments ....
What I have is valid YAML, but for some reason it's not valid for Ansible. The documentation on the Ansible site has some great examples of running the modules, but there isn't any documentation on how the modules work together, or whether they can. I'm assuming I can run a shell module and a docker_container module in the same task, but Ansible appears to disagree with me. Here's what I have:
---
- name: Setup Rancher Container
  shell: sudo su_root
  docker_container:
    name: rancherserver
    image: rancher/server
    state: started
    restart_policy: always
    published_ports: 8080:8080
...
ERROR! conflicting action statements
The error appears to have been in '/opt/ansible_scripts/ansible/roles/dockermonitoring/tasks/main.yml': line 2, column 3, but may be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
---
- name: Setup Rancher Container
^ here
Because I'm running this on RHEL 7, I need to be able to run the sudo su_root script to become root before Ansible can communicate with the Docker API, as Docker runs as root.
So if I can't run this script and then run the docker_container module, I think that's a big problem with Ansible.
I'm assuming I can run a shell module and a docker_container module in the same task
Wrong. One task – one module.
There's a become parameter to get privileged access, like:
- name: Setup Rancher Container
  become: yes
  docker_container:
    name: rancherserver
    ...
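For completeness, a hedged sketch of how the original snippet could be split so that the task carries exactly one module, simply reusing the parameters from the question and letting become replace the manual sudo su_root step:

---
- name: Setup Rancher Container
  become: yes
  docker_container:
    name: rancherserver
    image: rancher/server
    state: started
    restart_policy: always
    published_ports:
      - "8080:8080"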
Our actual setup runs on AWS where we have RDS available, but in Vagrant we naturally need to install MySQL locally. What's the normal way of skipping that local installation outside of Vagrant? My Ansible file looks something like this:
---
- name: foo
  hosts: foo
  sudo: yes
  roles:
    - common-web
    - bennojoy.mysql
    - php
I would recommend having specific groups in your inventory file, and running an 'install locally' playbook on the Vagrant instances. This also means you would want to run an 'install RDS config' playbook on the AWS instances, of course.
Trying to do all the things in all the places in one playbook is possible, but IMO it's cleaner to have different playbooks for different environments.
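A rough sketch of that layout, assuming invented group names (vagrant and aws) that you would define in your inventory:

# Play for the Vagrant group: install MySQL locally
- name: foo (vagrant)
  hosts: vagrant
  sudo: yes
  roles:
    - common-web
    - bennojoy.mysql
    - php

# Play for the AWS group: no local MySQL, RDS config instead
- name: foo (aws)
  hosts: aws
  sudo: yes
  roles:
    - common-web
    - php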
You can do this, as Vagrant always creates a directory at the root level, /vagrant.
So just check it like this:
---
- name: foo
  hosts: foo
  sudo: yes
  pre_tasks:
    - name: Check that the /vagrant directory exists
      command: /usr/bin/test -e /vagrant
      register: dir_exists
      ignore_errors: yes
  roles:
    - common-web
    - { role: bennojoy.mysql, when: dir_exists.rc == 0 }
    - php
Here I am supposing that "bennojoy.mysql" is your main MySQL role. Please check it and let me know if it works for you. Thanks.