Ansible copy from the remote server to the Ansible host fails

I need to copy the latest log file from a remote Linux server to the Ansible host. This is what I have tried so far:
- hosts: [host]
  remote_user: root
  tasks:
    - name: Copy the file
      command: bash -c "ls -rt | grep install | tail -n1"
      register: result
      args:
        chdir: /root
    - name: Copying the file
      copy:
        src: "/root/{{ result.stdout }}"
        dest: /home
But I am getting the following error:
TASK [Gathering Facts] ********************************************************************************************************************************************************************************************
ok
TASK [Copy the file] **********************************************************************************************************************************************************************************************
changed: => {"changed": true, "cmd": ["bash", "-c", "ls -rt | grep install | tail -n1"], "delta": "0:00:00.011388", "end": "2017-06-14 07:53:26.475344", "rc": 0, "start": "2017-06-14 07:53:26.463956", "stderr": "", "stdout": "install.20170614-051027.log", "stdout_lines": ["install.20170614-051027.log"], "warnings": []}
TASK [Copying the file] *******************************************************************************************************************************************************************************************
fatal: FAILED! => {"changed": false, "failed": true, "msg": "Unable to find 'install.20170614-051027.log' in expected paths."}
PLAY RECAP ********************************************************************************************************************************************************************************************************
: ok=2 changed=1 unreachable=0 failed=1
But that file is right there. Please help me resolve this issue.

The Ansible copy module copies files from the Ansible host to the remote host. Use the fetch module instead.
http://docs.ansible.com/ansible/fetch_module.html

This one works; I have to use fetch instead of copy to get the file from the remote host.
- name: Copy the file
  command: bash -c "ls -rt | grep install | tail -n1"
  register: result
  args:
    chdir: /root
- name: Copying the file
  fetch:
    src: "/root/{{ result.stdout }}"
    dest: /home
    flat: yes
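As a side note, the ls/grep pipeline could probably be replaced with the find module, which avoids parsing ls output entirely. A minimal sketch, assuming the logs live in /root and match an install*.log pattern:
- name: Find the newest install log
  find:
    paths: /root
    patterns: "install*.log"
  register: install_logs
- name: Fetch the newest install log
  fetch:
    src: "{{ (install_logs.files | sort(attribute='mtime') | last).path }}"
    dest: /home/
    flat: yes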

Related

End play if database does not exist

I have created the following playbook to check if a database exists:
- name: Check database exists
  shell: |
    mysql -hmysqlhost -uroot -ppassword -e "show databases" | egrep db
  register: mysql_exist
- name: Show database
  debug:
    msg: "{{ mysql_exist.stdout }}"
My idea is to finish the playbook and show a message if the database does not exist; otherwise it should continue to the next task. I tried this, but it does not work:
- name: Check database exists
  shell: |
    mysql -hmysqlhost -uroot -ppassword -e "show databases" | egrep db
  register: mysql_exist
- name: End playbook if database does not exist
  meta: end_play
  when: mysql_exist == 0
- name: Show database
  debug:
    msg: "{{ mysql_exist.stdout }}"
## other tasks
How can I create a playbook that checks if a database exists and, if it does not, displays the message "The database does not exist" and finishes the playbook without running the other tasks?
If you want to show a message when the playbook has to finish, use a block.
(You don't show the output of your registered variable when a DB doesn't exist, so I assume your test in the when condition is OK.)
- block:
    - name: "end play"
      debug:
        msg: "db doesnt exist"
    - meta: end_play
  when: mysql_exist == 0
So the playbook finishes after the message is displayed.
Do you need to show the message? Stopping the playbook already happens automatically if egrep does not find anything, because it exits with a non-zero code.
Playbook:
---
- hosts: srv1
  become: True
  tasks:
    - name: x
      shell: "echo nope | egrep dbname"
    - name: good
      shell: "echo very much"
Output (notice how "good" is not executed):
PLAY [srv1] *******************************************************************************************************************************
TASK [x] **********************************************************************************************************************************
fatal: [srv1]: FAILED! => {"changed": true, "cmd": "echo nope | egrep dbname", "delta": "0:00:00.005942", "end": "2022-02-09 15:56:56.726828", "msg": "non-zero return code", "rc": 1, "start": "2022-02-09 15:56:56.720886", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
PLAY RECAP ********************************************************************************************************************************
srv1 : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
It could be approximated with something like this:
- name: x
  shell: "echo nope | egrep dbname || { echo Database not found && false; }"
Which gives:
TASK [x] ***********************************************************************************************************************************
fatal: [srv1]: FAILED! => {"changed": true, "cmd": "echo nope | egrep dbname || { echo Database not found && false; }", "delta": "0:00:00.006159", "end": "2022-02-09 15:59:26.176704", "msg": "non-zero return code", "rc": 1, "start": "2022-02-09 15:59:26.170545", "stderr": "", "stderr_lines": [], "stdout": "Database not found", "stdout_lines": ["Database not found"]}
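If both the message and a clean stop are wanted, the two answers can be combined: let the check register its result without failing the task, then end the play from a block. A rough sketch, reusing the placeholders from the question:
- name: Check whether the database exists
  shell: mysql -hmysqlhost -uroot -ppassword -e "show databases" | egrep db
  register: mysql_exist
  failed_when: false   # egrep exits 1 when nothing matches; don't treat that as a task failure
- block:
    - debug:
        msg: "The database does not exist"
    - meta: end_play
  when: mysql_exist.rc != 0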

Ansible Regex: Get an integer from a command & pass it to another command to run

I have to execute the below 2 commands, whose values depend on the system.
# sysctl -w kernel.shmmax= parse_from_shm.sh script #For example 17179869184
# sysctl -w kernel.shmall= parse_from_shm.sh script #For example 4194304
./shm.sh will echo both required system values in the below format:
kernel.shmmax=4185686016
kernel.shmall=1021896
So I have to parse the integer values out of the above result and ultimately execute the below 2 commands:
# sysctl -w kernel.shmmax=4185686016
# sysctl -w kernel.shmall=1021896
I have tried to register and parse the integer values using a regex, but I was not able to get it working. Any help would be greatly appreciated.
---
- hosts: fossology_test
  become: true
  become_user: root
  environment:
    HOME: /usr/ansible
  gather_facts: no
  tasks:
    - name: run shell script
      become: true
      become_user: root
      command: ./shm.sh
      args:
        chdir: /usr/local/src/
      register: results
    - set_fact:
        shmmax: "{{ results.stdout | regex_search(shmmaxregexp, '\\1') }}"
        shmall: "{{ results.stdout | regex_search(shmallregexp, '\\1') }}"
      vars:
        shmmaxregexp: 'shmmax=([^\"]+)'
        shmallregexp: 'shmall=([^\"]+)'
    - name: sysctl -w kernel.shmmax="{{ shmmax | int }}"
      become: true
      become_user: root
      command: sysctl -w kernel.shmmax="{{ shmmax | int }}"
    - name: sysctl -w kernel.shmall="{{ shmall }}"
      become: true
      become_user: root
      command: sysctl -w kernel.shmall="{{ shmall }}"
This is the output
dinesh@dinesh-VirtualBox:~/Documents/remote/Ansible-Playbook/fossology_playbook$ ansible-playbook regex.yml -K -v
Using /etc/ansible/ansible.cfg as config file
BECOME password:
PLAY [fossology_test] ************************************************************************************
TASK [run shell script] **********************************************************************************
changed: [fossology_test] => {"changed": true, "cmd": ["./shm.sh"], "delta": "0:00:00.005912", "end": "2020-03-28 05:25:42.022156", "rc": 0, "start": "2020-03-28 05:25:42.016244", "stderr": "", "stderr_lines": [], "stdout": "kernel.shmmax=4185686016\nkernel.shmall=1021896", "stdout_lines": ["kernel.shmmax=4185686016", "kernel.shmall=1021896"]}
TASK [set_fact] ******************************************************************************************
ok: [fossology_test] => {"ansible_facts": {"shmall": ["1021896"], "shmmax": ["4185686016\nkernel.shmall=1021896"]}, "changed": false}
TASK [sysctl -w kernel.shmmax="0"] ***********************************************************************
changed: [fossology_test] => {"changed": true, "cmd": ["sysctl", "-w", "kernel.shmmax=0"], "delta": "0:00:00.003133", "end": "2020-03-28 05:25:42.574223", "rc": 0, "start": "2020-03-28 05:25:42.571090", "stderr": "", "stderr_lines": [], "stdout": "kernel.shmmax = 0", "stdout_lines": ["kernel.shmmax = 0"]}
TASK [sysctl -w kernel.shmall="[u'1021896']"] ************************************************************
changed: [fossology_test] => {"changed": true, "cmd": ["sysctl", "-w", "kernel.shmall=[u'1021896']"], "delta": "0:00:00.003558", "end": "2020-03-28 05:25:43.071811", "rc": 0, "start": "2020-03-28 05:25:43.068253", "stderr": "sysctl: setting key \"kernel.shmall\": Invalid argument", "stderr_lines": ["sysctl: setting key \"kernel.shmall\": Invalid argument"], "stdout": "kernel.shmall = [u'1021896']", "stdout_lines": ["kernel.shmall = [u'1021896']"]}
PLAY RECAP ***********************************************************************************************
fossology_test : ok=4 changed=3 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
I am using ansible 2.9.6
dinesh@dinesh-VirtualBox:/$ ansible --version
ansible 2.9.6
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/dinesh/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.17 (default, Nov 7 2019, 10:07:09) [GCC 7.4.0]
As you can see in the set_fact results dict, the output of regex_search used this way is a list of matched strings, not just the capture group. And because your regex is imprecise, your shmmax ends up as the number plus a newline plus the rest of the text: [^\"]+ happily matches across lines.
The accurate regex is shmmax=([0-9]+), because those values aren't "any character except a double quote"; they are "the digits after the equals sign".
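A minimal corrected sketch of the set_fact along those lines (| first pulls the single captured group out of the list that regex_search returns here, so the later sysctl commands get a plain number):
- set_fact:
    shmmax: "{{ results.stdout | regex_search('shmmax=([0-9]+)', '\\1') | first }}"
    shmall: "{{ results.stdout | regex_search('shmall=([0-9]+)', '\\1') | first }}"
- name: Set kernel.shmmax
  command: sysctl -w kernel.shmmax={{ shmmax }}
- name: Set kernel.shmall
  command: sysctl -w kernel.shmall={{ shmall }}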

How to get IP address from controller's hosts file in Ansible?

I am having difficulty figuring out how to set a fact in my Ansible playbook that contains the IP address of a server listed in the /etc/hosts file on my controller. I am running a playbook against my web server, which needs the IP address of my file server. I run the command like this:
ansible-playbook deploy-webservers.yml -i inventory.ini -l webservers
My inventory file looks like this:
[fileservers]
prod-fs1.example.com
[webservers]
prod-web1.example.com
[localhost]
127.0.0.1 ansible_connection=local ansible_python_interpreter=/Users/jsmith/.virtualenvs/provision/bin/python
Here is the playbook:
---
- hosts: all
  gather_facts: yes
  become: yes
  pre_tasks:
    - name: get file server's IP address
      command: "grep prod-fs1 /etc/hosts | awk '{ print $1 }'"
      register: fs_ip_addr
      delegate_to: localhost
    - debug: var={{ fs_ip_addr }}
When I run it, I get this error:
TASK [get file server's IP address] ****************************************************************************************
fatal: [prod-web1.example.com -> localhost]: FAILED! => {"changed": true, "cmd": ["grep", "prod-fs1", "/etc/hosts", "|", "awk", "{ print $0 }"], "delta": "0:00:00.010303", "end": "2020-03-03 12:24:36.207656",
"msg": "non-zero return code", "rc": 2, "start": "2020-03-03 12:24:36.197353", "stderr": "grep: |: No such file or directory\ngrep: awk: No such file or directory\ngrep: { print $0 }:
No such file or directory", "stderr_lines": ["grep: |: No such file or directory", "grep: awk: No such file or directory", "grep: { print $0 }: No such file or directory"], "stdout": "/etc/hosts:45.79.99.99 prod-fs1.example.com prod-fs1", "stdout_lines": ["/etc/hosts:45.79.99.99 prod-fs1.example.com prod-fs1"]}
PLAY RECAP ****************************************************************************************************************
prod-web1.example.com : ok=7 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0;
It looks like Ansible has a problem parsing the command when it reaches the pipe symbol. Is there a way around this problem?
Try this. There is no need to delegate to localhost; a lookup always runs on the controller.
- set_fact:
    fs_ip_addr: "{{ (lookup('file', '/etc/hosts').splitlines()|
                     list|
                     select('search', search_host)|
                     list|
                     first).split().0 }}"
  vars:
    search_host: "prod-fs1"
The code can be simplified a bit. Note that search_host is a var.
- set_fact:
    fs_ip_addr: "{{ lookup('file', '/etc/hosts').splitlines() |
                    select('search', search_host) |
                    first | split() | first }}"
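As an aside, the original failure happened because the command module does not go through a shell, so the pipe and awk were handed to grep as literal arguments. If shelling out on the controller were still preferred, a rough (untested) equivalent would be:
- name: get file server's IP address
  shell: "grep prod-fs1 /etc/hosts | awk '{ print $1 }'"
  register: fs_ip_cmd
  delegate_to: localhost
  become: no   # reading /etc/hosts on the controller needs no privilege escalation
- set_fact:
    fs_ip_addr: "{{ fs_ip_cmd.stdout }}"
The lookup-based version above stays entirely within Jinja2, though, and avoids spawning a process at all.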

Setting and using environment variable in Ansible

I want to set one value as an environment variable in Ansible and then use it in another playbook.
Below is my playbook:
get_cmd.yaml
[root@a6296ab33a34 test_code]# vi get-cwd.yaml
- hosts: localhost
  connection: local
  gather_facts: False
  tasks:
    #- name: Get directory
    #  shell: export ACWD="{{ playbook_dir }}"
    #  when: platform == 'jenkins'
    - name: Get CWD
      shell: "export ACWD=/test_code_demo"
      when: platform != 'jenkins'
    - name: DEMO
      shell: echo $ACWD
Output
[root@a6296ab33a34 test_code]# vi get-cwd.yaml
[root@a6296ab33a34 test_code]# ansible-playbook get-cwd.yaml --extra-vars="@deploy-vars.yaml" -vv
[WARNING] Ansible is being run in a world writable directory (/test_code), ignoring it as an ansible.cfg source. For more information see https://docs.ansible.com/ansible/devel/reference_appendices/config.html#cfg-in-world-writable-dir
ansible-playbook 2.8.4
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible-playbook
python version = 2.7.5 (default, Jun 20 2019, 20:27:34) [GCC 4.8.5 20150623 (Red Hat 4.8.5-36)]
Using /etc/ansible/ansible.cfg as config file
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAYBOOK: get-cwd.yaml *********************************************************************************************************************************************************************************************************************
1 plays in get-cwd.yaml
PLAY [localhost] ***************************************************************************************************************************************************************************************************************************
META: ran handlers
TASK [Get CWD] *****************************************************************************************************************************************************************************************************************************
task path: /test_code/get-cwd.yaml:11
changed: [localhost] => {"changed": true, "cmd": "export ACWD=/test_code_demo", "delta": "0:00:00.713703", "end": "2019-12-13 14:43:37.054390", "rc": 0, "start": "2019-12-13 14:43:36.340687", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
TASK [DEMO] ********************************************************************************************************************************************************************************************************************************
task path: /test_code/get-cwd.yaml:15
changed: [localhost] => {"changed": true, "cmd": "echo $ACWD", "delta": "0:00:00.705605", "end": "2019-12-13 14:43:37.919962", "rc": 0, "start": "2019-12-13 14:43:37.214357", "stderr": "", "stderr_lines": [], "stdout": "/test_code_dinesh", "stdout_lines": ["/test_code_dinesh"]}
META: ran handlers
META: ran handlers
PLAY RECAP *********************************************************************************************************************************************************************************************************************************
localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
[root@a6296ab33a34 test_code]#
You can see that, although I have tried to set the value to /test_code_demo, the old value /test_code_dinesh is still showing.
Please let me know a way to resolve the above issue.
Remember that when you set an environment variable (anywhere, not just in Ansible), it only affects the current process and its children.
When you run something like this:
- name: Get CWD
  shell: "export ACWD=/test_code_demo"
  when: platform != 'jenkins'
You are:
Spawning a shell
Setting the environment variable ACWD in that shell
Exiting the shell
At this point, the environment is destroyed. There's no way to set an environment variable in one task and have it affect another task. You can set per-task environment variables using the environment key on your task, like this:
- name: DEMO
  shell: echo $ACWD
  environment:
    ACWD: '/test_code_demo'
If you need to apply the environment setting to multiple tasks, you can set it on a play instead:
- hosts: localhost
  environment:
    ACWD: '/test_code_demo'
  tasks:
    - command: 'echo $ACWD'
      register: output1
    - command: 'echo $ACWD'
      register: output2
    - debug:
        msg:
          - "{{ output1.stdout }}"
          - "{{ output2.stdout }}"

Evaluating return code in ansible conditional

I'm working on automating a task which needs to append the latest version of software to a file. I don't want it to do this multiple times for the same version.
It looks at the following example file:
var software releases = new Array(
"4.3.0",
"4.4.0",
"4.5.0",
"4.7.0",
"4.8.0",
"4.11.0",
"4.12.1",
"4.14.0",
"4.15.0",
"4.16.0",
);
The defaults main.yml would pass in something like:
VERSION: 4.16.2
Code:
- name: register version check
  shell: cat /root/versions.js | grep -q {{VERSION}}
  register: current_version
- debug: msg="The registered variable output is {{ current_version.rc }}"
- name: append to versions.js
  lineinfile:
    dest: /root/versions.js
    regexp: '^\);'
    insertbefore: '^#\);'
    line: " \"{{VERSION}}\",\n);"
    owner: root
    state: present
  when: current_version.rc == 1
Problem: the debug message is evaluating current_version.rc and showing me boolean values based on the grep command's output, but I can't reuse this in the when conditional to determine whether the task should run.
Edit: the output:
PLAY [localhost] **************************************************************
GATHERING FACTS ***************************************************************
ok: [localhost]
TASK: [test | register version check] *****************************************
failed: [localhost] => {"changed": true, "cmd": "cat /root/versions.js | grep -q 3.19.2", "delta": "0:00:00.003570", "end": "2015-12-17 00:24:49.729078", "rc": 1, "start": "2015-12-17 00:24:49.725508", "warnings": []}
FATAL: all hosts have already failed -- aborting
PLAY RECAP ********************************************************************
to retry, use: --limit @/root/site.retry
localhost : ok=1 changed=0 unreachable=0 failed=1
As nikobelia pointed out in the comments, grep returns an exit code of 1 when it doesn't match any lines. Ansible then interprets this (actually any status code other than 0 from a shell/command task) as an error and so promptly fails.
You can tell Ansible to ignore the return code from the shell/command task by using ignore_errors. However, with grep this will also ignore actual errors (indicated by a return code of 2), so instead you might want to use failed_when, like this:
- name: register version check
  shell: cat /root/versions.js | grep -q {{VERSION}}
  register: current_version
  failed_when: current_version.rc == 2
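With failed_when in place the task no longer aborts the play, and the registered return code can drive the original conditional. A brief follow-up sketch, keeping the question's lineinfile parameters roughly as-is (the exact insert point is an assumption about the intended file layout):
- name: append to versions.js
  lineinfile:
    dest: /root/versions.js
    insertbefore: '^\);'
    line: "  \"{{ VERSION }}\","
    owner: root
    state: present
  when: current_version.rc == 1   # grep -q exits 1 when the version is not already listed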
