Ansible job prompting for password - ansible

I have an Ansible job that ensures certain directories are present on remote servers and then copies files into them.
---
- hosts: cac
  tasks:
    - name: Create Required directories.
      file: path=/opt/app/ca/{{ item }} state=directory mode=0755 owner=admin group=admin
      with_items:
        - cac/webapps
        - cac/iam_config
    - name: Copy and unarchive webapps node.
      synchronize: src=/home/ansible/templates/app/Sprint6/webapps dest=/opt/app/ca/iam_cac checksum=yes
My environment (inventory) file is:
[cac]
10.169.99.70
10.169.99.72
[cac:vars]
ansible_ssh_user=admin
ansible_ssh_pass=xyz
When I run the job in debug mode, I can see that the first task runs as the admin user and no password is prompted.
But the second task asks me for the admin password:
TASK [Copy and unarchive webapps node.] ****************************************
task path: /home/ansible/playbooks/release-deploy.yaml:10
<10.169.99.70> ESTABLISH LOCAL CONNECTION FOR USER: ansible
<10.169.99.70> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1477753023.09-93847262523946 `" && echo ansible-tmp-1477753023.09-93847262523946="` echo $HOME/.ansible/tmp/ansible-tmp-1477753023.09-93847262523946 `" ) && sleep 0'
<10.169.99.72> ESTABLISH LOCAL CONNECTION FOR USER: ansible
<10.169.99.72> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1477753023.09-27220657560306 `" && echo ansible-tmp-1477753023.09-27220657560306="` echo $HOME/.ansible/tmp/ansible-tmp-1477753023.09-27220657560306 `" ) && sleep 0'
<10.169.99.70> PUT /tmp/tmpBP7rLm TO /home/ansible/.ansible/tmp/ansible-tmp-1477753023.09-93847262523946/synchronize
<10.169.99.72> PUT /tmp/tmpVKR5A9 TO /home/ansible/.ansible/tmp/ansible-tmp-1477753023.09-27220657560306/synchronize
<10.169.99.70> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/ansible/.ansible/tmp/ansible-tmp-1477753023.09-93847262523946/synchronize; rm -rf "/home/ansible/.ansible/tmp/ansible-tmp-1477753023.09-93847262523946/" > /dev/null 2>&1 && sleep 0'
<10.169.99.72> EXEC /bin/sh -c 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/ansible/.ansible/tmp/ansible-tmp-1477753023.09-27220657560306/synchronize; rm -rf "/home/ansible/.ansible/tmp/ansible-tmp-1477753023.09-27220657560306/" > /dev/null 2>&1 && sleep 0'
admin@10.169.99.72's password: admin@10.169.99.70's password:
I am confused as to why this step requires me to enter the password when I have already configured it in my environment file.
Secondly, why does it say the following?
ESTABLISH LOCAL CONNECTION FOR USER: ansible

The first sentence on the synchronize module doc page answers your second question (why it says ESTABLISH LOCAL CONNECTION FOR USER: ansible):
synchronize is a wrapper around the rsync command, meant to make common tasks with rsync easier. It is run and originates on the local host where Ansible is being run.
As for the first question, the parameters section of the same page explains that you need to use the following argument:
use_ssh_args
(default: no)
Use the ssh_args specified in ansible.cfg.
While the description mentions only ansible.cfg, it applies to the connection variables defined in the inventory file as well.
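A minimal sketch of the failing task with that argument turned on (src and dest carried over from the question; whether the password prompt disappears still depends on how your SSH credentials are supplied):
- name: Copy and unarchive webapps node.
  synchronize: src=/home/ansible/templates/app/Sprint6/webapps dest=/opt/app/ca/iam_cac checksum=yes use_ssh_args=yes
With use_ssh_args=yes the rsync transport picks up the ssh_args configured under [ssh_connection] in ansible.cfg, for example:
[ssh_connection]
ssh_args = -o ControlMaster=auto -o ControlPersist=60s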

Related

Gitlab CI check if directory exists before pulling origin

I'm trying to deploy my Flask application to an AWS EC2 instance using a GitLab CI runner.
.gitlab-ci.yml
stages:
  - test
  - deploy

test_app:
  image: python:latest
  stage: test
  before_script:
    - python -V
    - pip install virtualenv
    - virtualenv env
    - source env/bin/activate
    - pip install flask
  script:
    - cd flask-ci-cd
    - python test.py

prod-deploy:
  stage: deploy
  only:
    - master # Run this job only on changes for stage branch
  before_script:
    - mkdir -p ~/.ssh
    - echo -e "$RSA_KEY" > ~/.ssh/id_rsa
    - chmod 600 ~/.ssh/id_rsa
    - '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'
  script:
    - bash .gitlab-deploy-prod.sh
  environment:
    name: deploy
.gitlab-deploy-prod.sh
#!/bin/bash
# Get servers list
set -f
# access server terminal
shell="ssh -o StrictHostKeyChecking=no ${SERVER_URL}"
git_token=$DEPLOY_TOKEN
echo "Deploy project on server ${SERVER_URL}"
if [ ${shell} -d "/flask-ci-cd" ] # check if directory exists
then
  eval "${shell} cd flask-ci-cd && git clone https://sbhusal123:${git_token}@gitlab.com/sbhusal123/flask-ci-cd.git master && cd flask-ci-cd"
else
  eval "${shell} git pull https://sbhusal123:${git_token}@gitlab.com/sbhusal123/flask-ci-cd.git master && cd flask-ci-cd && cd flask-ci-cd"
fi
Error: .gitlab-deploy-prod.sh: line 7: -o: command not found
How can I check if the directory exists?
What I've tried:
#!/bin/bash
# Get servers list
set -f
# access server terminal
shell="ssh -o StrictHostKeyChecking=no ${SERVER_URL}"
git_token=$DEPLOY_TOKEN
eval "${shell}" # i thought gitlab would provide me with shell access
echo "Deploy project on server ${SERVER_URL}"
if [-d "/flask-ci-cd" ] # check if directory exists
then
  eval "cd flask-ci-cd && git clone https://sbhusal123:${git_token}@gitlab.com/sbhusal123/flask-ci-cd.git master && cd flask-ci-cd"
else
  eval "git pull https://sbhusal123:${git_token}@gitlab.com/sbhusal123/flask-ci-cd.git master && cd flask-ci-cd && cd flask-ci-cd"
fi
I've tried to log into the SSH shell before executing the commands inside the if/else, but it doesn't work the way I intended.
Your script has some errors.
Do not use eval. No, eval does not work that way, and eval is evil.
When storing a command in a variable, do not use a plain string variable. Use a bash array instead, to preserve the separate words.
Commands passed via ssh are escaped twice. I would advise using here documents instead; they are simpler to get the quoting right. Note the difference in expansion depending on whether the here-document delimiter is quoted or not.
"I thought gitlab would provide me with shell access" - no, without an open standard input the remote shell will just terminate, because it reads EOF from its input. It doesn't work that way.
Instead of making many remote connections, transfer the execution to the remote side once and do all the work there.
Take your time and research how quoting and word splitting work in shell.
git_token=$DEPLOY_TOKEN - no, locally set variables are not exported to the remote shell. Either pass them manually or expand them before calling the remote side. (You could also use ssh -o SendEnv=git_token and configure the remote sshd with AcceptEnv=git_token, I think; never tried it.)
Read the documentation for the utilities you use.
No, git clone doesn't take a branch name after the URL - you specify the branch with the --branch or -b option; after the URL it takes a directory name. See git clone --help. The same applies to git pull. See the example just below.
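For instance, a corrected clone call might look like this (repository URL and token variable taken from the question; the trailing argument is a target directory, not a branch):
git clone -b master https://sbhusal123:${git_token}@gitlab.com/sbhusal123/flask-ci-cd.git flask-ci-cd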
How can I check if the directory exists?
Use bash arrays to store the command. Check if the directory exists just by executing the test command on the remote side.
shell=(ssh -o StrictHostKeyChecking=no "${SERVER_URL}")
if "${shell[@]}" [ -d "/flask-ci-cd" ]; then
  ...
In case of a directory name with spaces I would go with:
if "${shell[#]}" sh <<'EOF'
[ -d "/directory with spaces" ]
EOF
then
Pass set -x to sh to see what's happening on the remote side as well.
For your script, rather move the execution to the remote side - there is little point in making three separate connections. I would just do:
echo "Deploy project on server ${SERVER_URL}"
ssh -o StrictHostKeyChecking=no "${SERVER_URL}" bash <<EOF
if [ ! -d /flask-ci-cd ]; then
# Note: git_token is expanded on host side
git clone https://sbhusal123:${git_token}#gitlab.com/sbhusal123/flask-ci-cd.git /flask-ci-cd
fi
cd /flask-ci-cd
git pull
EOF
But instead of trying to get the quoting right in every case, use declare -p and declare -f to transfer properly quoted variables and functions to the remote side. That way you do not need to care about proper quoting - it will work naturally:
echo "Deploy project on server ${SERVER_URL}"
work() {
if [ ! -d /flask-ci-cd ]; then
# Note: git_token is expanded on host side
git clone https://sbhusal123:"${git_token}"#gitlab.com/sbhusal123/flask-ci-cd.git /flask-ci-cd
fi
cd /flask-ci-cd
git pull
{
ssh -o StrictHostKeyChecking=no "${SERVER_URL}" bash <<EOF
$(declare -p git_token) # transfer variables you need
$(declare -f work) # transfer function you need
work # call the function.
EOF
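To see exactly what gets shipped to the remote shell, you can render the here document locally first; a minimal sketch with made-up values:
git_token=xyz
work() { echo "would deploy"; }
cat <<EOF
$(declare -p git_token)
$(declare -f work)
work
EOF
The output is a self-contained, correctly quoted script (a declare -- git_token="xyz" line, the full body of work, then the call), which is exactly what ssh ... bash receives.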
Updated answer for future readers.
.gitlab-ci.yml
stages:
  - test
  - deploy

test_app:
  image: python:latest
  stage: test
  before_script:
    - python -V
    - pip install virtualenv
    - virtualenv env
    - source env/bin/activate
    - pip install flask
  script:
    - cd flask-ci-cd
    - python test.py

prod-deploy:
  stage: deploy
  only:
    - master
  before_script:
    - mkdir -p ~/.ssh
    - echo -e "$RSA_KEY" > ~/.ssh/id_rsa
    - chmod 600 ~/.ssh/id_rsa
    - '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'
  script:
    - bash .gitlab-deploy-prod.sh
  environment:
    name: deploy
.gitlab-deploy-prod.sh
#!/bin/bash
# Get servers list
set -f
shell=(ssh -o StrictHostKeyChecking=no "${SERVER_URL}")
git_token=$DEPLOY_TOKEN
echo "Deploy project on server ${SERVER_URL}"
ssh -o StrictHostKeyChecking=no "${SERVER_URL}" bash <<EOF
if [ ! -d flask-ci-cd ]; then
  echo "\n Cloning into remote repo..."
  git clone https://sbhusal123:${git_token}@gitlab.com/sbhusal123/flask-ci-cd.git
  # Create and activate virtualenv
  echo "\n Creating virtual env"
  python3 -m venv env
else
  echo "Pulling remote repo origin..."
  cd flask-ci-cd
  git pull
  cd ..
fi
# Activate virtual env
echo "\n Activating virtual env..."
source env/bin/activate
# Install packages
cd flask-ci-cd/
echo "\n Installing dependencies..."
pip install -r requirements.txt
EOF
There is a test command which is explicit about checking files and directories:
test -d "/flask-ci-cd" && eval $then_commands || eval $else_commands
Depending on the AWS instance I'd expect "test" to be available. I'd recommend putting the commands in variables. (e.g. eval $then_commands)
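A rough sketch of that idea run against the remote host (the command strings are only placeholders based on the question's clone/pull logic):
then_commands='cd /flask-ci-cd && git pull'
else_commands='git clone https://gitlab.com/sbhusal123/flask-ci-cd.git /flask-ci-cd'
if ssh -o StrictHostKeyChecking=no "${SERVER_URL}" test -d /flask-ci-cd; then
  ssh -o StrictHostKeyChecking=no "${SERVER_URL}" "$then_commands"
else
  ssh -o StrictHostKeyChecking=no "${SERVER_URL}" "$else_commands"
fi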

GitLab Pipeline: Works in YML, Fails in Extracted SH

I followed the GitLab Docs to enable my project's CI to clone other private dependencies. Once it was working, I extracted from .gitlab-ci.yml:
before_script:
  - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
  - eval $(ssh-agent -s)
  - ssh-add <(echo "$SSH_PRIVATE_KEY")
  - mkdir -p ~/.ssh
  - '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'
into a separate shell script setup.sh as follows:
which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )
eval $(ssh-agent -s)
ssh-add <(echo "$SSH_PRIVATE_KEY")
mkdir -p ~/.ssh
[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config
leaving only:
before_script:
  - chmod 700 ./setup.sh
  - ./setup.sh
I then began getting:
Cloning into '/root/Repositories/DependentProject'...
Warning: Permanently added 'gitlab.com,52.167.219.168' (ECDSA) to the list of known hosts.
Permission denied (publickey).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
How do I replicate the original behavior in the extracted script?
When running ssh-add, invoke the script with source or . so that it runs within the same shell; in your case that would be:
before_script:
  - chmod 700 ./setup.sh
  - . ./setup.sh
or
before_script:
  - chmod 700 ./setup.sh
  - source ./setup.sh
For a better explanation as to why this needs to run in the same shell as the rest, take a look at this answer to a related question here.
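The underlying reason: eval $(ssh-agent -s) works by exporting SSH_AUTH_SOCK and SSH_AGENT_PID into the calling shell, and ssh-add registers the key with that agent. When setup.sh runs as a child process, those variables die with it, so the later git clone cannot reach the agent. A quick illustration (the socket path is just an example):
./setup.sh                  # runs in a child shell
echo "$SSH_AUTH_SOCK"       # empty - the agent is unreachable from this shell
. ./setup.sh                # runs in the current shell
echo "$SSH_AUTH_SOCK"       # e.g. /tmp/ssh-XXXXXXXXXX/agent.1234 - later git/ssh calls can use it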

starting jboss server using ansible and returning back control [duplicate]

This question already has answers here:
Ansible Command module says that '|' is illegal character
(2 answers)
Closed 5 years ago.
The YAML playbook below restarts the JBoss server but does not return control to execute the next Ansible command. I have also used the wait_for module to stop waiting for the current command's result and move on to the next command, but Ansible still hangs on the current command indefinitely. Please let me know where I went wrong.
---
- hosts: test1
  tasks:
    - name: simple command
      become: true
      command: whoami
      register: output
    - debug:
        msg: "I gave the command whoami and the out is '{{output.stdout}}'"
    - name: change to jboss user
      become: true
      become_user: jboss
      command: whoami
      register: output
    - debug:
        msg: "I gave the command whoami and the out is '{{output.stdout}}'"
    - name: start jboss server as jboss user
      become: true
      become_user: jboss
      command: sh /usr/jboss/bin/run.sh -c XXXXXXXX -b x.x.x.x &
      when: inventory_hostname in groups['test1']
      register: restartscript
    - debug:
        msg: "output of server restart command is '{{restartscript.stdout}}'"
    - name: waiting for server to come back
      local_action:
        module: wait_for
        timeout=20
        host=x.x.x.x
        port=8080
        delay=6
        state=started
Terminal output message:
ESTABLISH SSH CONNECTION FOR USER: XXXXXXXXXXX
SSH: EXEC sshpass -d12 ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o User=XXXXXXXXX -o ConnectTimeout=10 -o ControlPath=/home/tcprod/XXXXXXXXXX/.ansible/cp/ansible-ssh-%h-%p-%r -tt X.X.X.X '/bin/sh -c '"'"'sudo -H -S -p "[sudo via ansible, key=hvgwnsbxpkxvbcmtcfvvsplfphdrevxg] password: " -u jboss /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-hvgwnsbxpkxvbcmtcfvvsplfphdrevxg; /usr/bin/python /tmp/ansible-tmp-XXXXXXXXX.XX-XXXXXXXXXXXXXX/command.py'"'"'"'"'"'"'"'"' && sleep 0'"'"''
The & is not allowed in the Ansible command module:
command: sh /usr/jboss/bin/run.sh -c XXXXXXXX -b x.x.x.x &
Try removing it, or use shell instead of command.
From Ansible documentation about command:
The given command [...] will not
be processed through the shell, so variables like $HOME and operations
like "<", ">", "|", ";" and "&" will not work (use the shell module if
you need these features).
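A hedged sketch of the restart task rewritten with the shell module (values carried over from the question; nohup plus output redirection is one common way to keep the server running after the task returns):
- name: start jboss server as jboss user
  become: true
  become_user: jboss
  # log path below is just an example
  shell: nohup sh /usr/jboss/bin/run.sh -c XXXXXXXX -b x.x.x.x > /tmp/jboss-run.log 2>&1 &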

Ansible lineinfile give an error with /etc/hosts

I have this simple task in my role:
- name: Updating the /etc/hosts
  lineinfile: dest=/etc/hosts line="192.168.99.100 {{ item }}"
  with_items:
    - domain1.com
    - domain2.com
  tags: etc
When I run my Ansible playbook:
robe:ansible-develop robe$ ansible-playbook -i inventory develop-env.yml -vvvv --extra-vars "user=`whoami`" --tags etc --become-user=robe --ask-become-pass
SUDO password:
PLAY [127.0.0.1] **************************************************************
GATHERING FACTS ***************************************************************
<127.0.0.1> REMOTE_MODULE setup
<127.0.0.1> EXEC ['/bin/sh', '-c', 'mkdir -p /tmp/ansible-tmp-1446050161.27-256837595805154 && chmod a+rx /tmp/ansible-tmp-1446050161.27-256837595805154 && echo /tmp/ansible-tmp-1446050161.27-256837595805154']
<127.0.0.1> PUT /var/folders/x1/dyrdksh50tj0z2szv3zx_9rc0000gq/T/tmpMYjnXz TO /tmp/ansible-tmp-1446050161.27-256837595805154/setup
<127.0.0.1> EXEC ['/bin/sh', '-c', 'chmod a+r /tmp/ansible-tmp-1446050161.27-256837595805154/setup']
<127.0.0.1> EXEC /bin/sh -c 'sudo -k && sudo -H -S -p "[sudo via ansible, key=rqphpqfpcbsifqtnwflmmlmpwrcnkpqe] password: " -u robe /bin/sh -c '"'"'echo BECOME-SUCCESS-rqphpqfpcbsifqtnwflmmlmpwrcnkpqe; LANG=en_US.UTF-8 LC_CTYPE=en_US.UTF-8 /usr/bin/python /tmp/ansible-tmp-1446050161.27-256837595805154/setup'"'"''
<127.0.0.1> EXEC ['/bin/sh', '-c', 'rm -rf /tmp/ansible-tmp-1446050161.27-256837595805154/ >/dev/null 2>&1']
ok: [127.0.0.1]
TASK: [docker-tool-box | Updating the /etc/hosts] *****************************
<127.0.0.1> REMOTE_MODULE lineinfile dest=/etc/hosts line="192.168.99.100 ptxrt.com"
<127.0.0.1> EXEC ['/bin/sh', '-c', 'mkdir -p /tmp/ansible-tmp-1446050161.49-9492873099893 && chmod a+rx /tmp/ansible-tmp-1446050161.49-9492873099893 && echo /tmp/ansible-tmp-1446050161.49-9492873099893']
<127.0.0.1> PUT /var/folders/x1/dyrdksh50tj0z2szv3zx_9rc0000gq/T/tmpyLOGd6 TO /tmp/ansible-tmp-1446050161.49-9492873099893/lineinfile
<127.0.0.1> EXEC ['/bin/sh', '-c', u'chmod a+r /tmp/ansible-tmp-1446050161.49-9492873099893/lineinfile']
<127.0.0.1> EXEC /bin/sh -c 'sudo -k && sudo -H -S -p "[sudo via ansible, key=nofwziqxytbhjwhluhtzdfcqclqjuypv] password: " -u robe /bin/sh -c '"'"'echo BECOME-SUCCESS-nofwziqxytbhjwhluhtzdfcqclqjuypv; LANG=en_US.UTF-8 LC_CTYPE=en_US.UTF-8 /usr/bin/python /tmp/ansible-tmp-1446050161.49-9492873099893/lineinfile'"'"''
<127.0.0.1> EXEC ['/bin/sh', '-c', 'rm -rf /tmp/ansible-tmp-1446050161.49-9492873099893/ >/dev/null 2>&1']
failed: [127.0.0.1] => (item=ptxrt.com) => {"failed": true, "item": "ptxrt.com"}
msg: The destination directory (/private/etc) is not writable by the current user.
FATAL: all hosts have already failed -- aborting
PLAY RECAP ********************************************************************
to retry, use: --limit @/Users/robe/develop-env.retry
127.0.0.1 : ok=1 changed=0 unreachable=0 failed=1
I don't understand why the error message says:
msg: The destination directory (/private/etc) is not writable by the current user.
The correct path should be /etc/hosts.
Any clue?
I am working on macOS.
My playbook is:
- hosts: 127.0.0.1
  connection: local
  become: yes
  become_method: sudo
  become_user: "{{user}}"
  roles:
    - role-1
    - role-2
I pass the become_user on the command line, so all my roles run with become. And it still doesn't work.
On OSX the /etc/ folder is actually a symlink to the /private/etc/ folder - hence the error. (Ansible is just transparently following the symlink).
As for the error, you're going to need to run the task with become: yes (sudo permissions) to be able to write to /etc/hosts.
Edit based on update and comments
To get the correct privileges to edit the hosts file you need to be root. Setting become: yes on the task is good enough for this for OSX as Ansible will default to sudo as the become method and root as the user.
To specify the sudo password you can do one of two things:
1. Use --ask-become-pass on the command line and Ansible will prompt you when it needs it.
2. Use the ansible_become_pass variable on the group or host in the inventory file, e.g. localhost ansible_become_pass=batman
Note that the Ansible docs recommend option 1 over option 2, so as not to store your password in plain text.
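A minimal sketch of the original task with privilege escalation enabled at the task level (domains and IP carried over from the question):
- name: Updating the /etc/hosts
  become: yes
  lineinfile: dest=/etc/hosts line="192.168.99.100 {{ item }}"
  with_items:
    - domain1.com
    - domain2.com
  tags: etc
Run with option 1, the invocation would look something like:
ansible-playbook -i inventory develop-env.yml --tags etc --ask-become-pass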

Unable to run top on a remote host via Ansible

I have the following playbook:
---
- hosts: ESNodes
  remote_user: ihazan
  tasks:
    - name: Run Monitoring
      action: command /tmp/monitoring/cpu_mon
The content of /tmp/monitoring/cpu_mon is as follows:
top -bn1800 -p $(ps -ef | grep elasticsearch | grep -v grep | grep -v sudo | awk '{print $2}') | grep root > /tmp/cpu_stats &
Note that top is run in the background with &.
When running that playbook, Ansible gets stuck forever on the top command:
-bash-4.1$ ansible-playbook es_playbook_run.yml -l PerfSetup -K -f 10
sudo password:
PLAY [ESNodes] ****************************************************************
GATHERING FACTS ***************************************************************
ok: [isk-vsrv643]
TASK: [Run Monitoring] ********************************************************
When running it via remote SSH (which is what Ansible does), it works fine:
-bash-4.1$ ssh ihazan@isk-vsrv643 'nohup /tmp/monitoring/cpu_mon'
-bash-4.1$
Following is the debug version of the output:
-bash-4.1$ ansible-playbook es_playbook_run.yml -l PerfSetup -K -f 10 -vvvv
sudo password:
PLAY [ESNodes] ****************************************************************
GATHERING FACTS ***************************************************************
<isk-vsrv643> ESTABLISH CONNECTION FOR USER: ihazan on PORT 22 TO isk-vsrv643
<isk-vsrv643> EXEC /bin/sh -c 'mkdir -p $HOME/.ansible/tmp/ansible-1393860499.75-256362698809430 && chmod a+rx $HOME/.ansible/tmp/ansible-1393860499.75-256362698809430 && echo $HOME/.ansible/tmp/ansible-1393860499.75-256362698809430'
<isk-vsrv643> REMOTE_MODULE setup
<isk-vsrv643> PUT /tmp/tmpZh9bYP TO /usr2/ihazan/.ansible/tmp/ansible-1393860499.75-256362698809430/setup
<isk-vsrv643> EXEC /bin/sh -c '/usr/bin/python /usr2/ihazan/.ansible/tmp/ansible-1393860499.75-256362698809430/setup; rm -rf /usr2/ihazan/.ansible/tmp/ansible-1393860499.75-256362698809430/ >/dev/null 2>&1'
ok: [isk-vsrv643]
TASK: [Run Monitoring] ********************************************************
<isk-vsrv643> ESTABLISH CONNECTION FOR USER: ihazan on PORT 22 TO isk-vsrv643
<isk-vsrv643> EXEC /bin/sh -c 'mkdir -p $HOME/.ansible/tmp/ansible-1393860500.32-92141081389545 && chmod a+rx $HOME/.ansible/tmp/ansible-1393860500.32-92141081389545 && echo $HOME/.ansible/tmp/ansible-1393860500.32-92141081389545'
<isk-vsrv643> REMOTE_MODULE command /tmp/monitoring/cpu_mon
<isk-vsrv643> PUT /tmp/tmp7dYRPY TO /usr2/ihazan/.ansible/tmp/ansible-1393860500.32-92141081389545/command
<isk-vsrv643> EXEC /bin/sh -c '/usr/bin/python /usr2/ihazan/.ansible/tmp/ansible-1393860500.32-92141081389545/command; rm -rf /usr2/ihazan/.ansible/tmp/ansible-1393860500.32-92141081389545/ >/dev/null 2>&1'
Thanks in advance.
Use fire-and-forget mode, i.e. async with poll: 0:
---
- hosts: ESNodes
  remote_user: ihazan
  tasks:
    - name: Run Monitoring
      action: command /tmp/monitoring/cpu_mon
      async: 45
      poll: 0
The whole scoop on async is here: http://docs.ansible.com/playbooks_async.html
Good luck.
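If you later want to check on the fired-off command instead of forgetting it completely, the async_status module can poll the registered job; a rough sketch (task names and timings are made up):
- name: Run Monitoring
  command: /tmp/monitoring/cpu_mon
  async: 1800
  poll: 0
  register: cpu_mon_job

- name: Check on the monitoring job later
  async_status:
    jid: "{{ cpu_mon_job.ansible_job_id }}"
  register: job_result
  until: job_result.finished
  retries: 30
  delay: 60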
