How to use a unix command in src in Ansible

The file is found if I run the following command from the shell:
ls -l /tmp/`uname -n`---file1
I am trying to run it with fetch:
- name: COPY file from remote machine to local
  fetch:
    src: /tmp/`uname -n`---mem-swap.out
    dest: /tmp
But it gives me the error:
file not found: /tmp/`uname -n`---mem-swap.out
Is it possible to execute a command inside "src"?
Thank you.

This is not possible. A lookup would help if you were running the playbook on a local system, but unfortunately lookups don't run on remotely managed nodes.
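For illustration, here is a minimal pipe-lookup sketch; note that the lookup is evaluated on the control node, so it would only produce the right name when the controller and the target are the same machine, which is exactly why it does not help here:
- name: COPY file from remote machine to local
  fetch:
    # lookup('pipe', ...) runs on the controller, not the managed node
    src: "/tmp/{{ lookup('pipe', 'uname -n') }}---mem-swap.out"
    dest: /tmp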
As per the other answer, you can run a task first.
But if you are collecting facts first, why not use an Ansible variable?
- name: COPY file from remote machine to local
  fetch:
    src: /tmp/{{ ansible_nodename }}---mem-swap.out
    dest: /tmp

It's possible to register the command's result and build the src string from it. For example:
- command: "uname -n"
  register: result
- fetch:
    src: "/tmp/{{ result.stdout }}---mem-swap.out"
    dest: "/tmp"


Ansible can't copy files on remote server, but the command runs correctly if run from command line

I'm trying to move everything under /opt/* to a new location on the remote server. I've tried this using command to run rsync directly, as well as using both the copy and the synchronize Ansible modules. In all cases I get the same error message saying:
"msg": "rsync: link_stat \"/opt/*\" failed: No such file or directory
If I run the command listed in the "cmd" part of the ansible error message directly on my remote server it works without error. I'm not sure why ansible is failing.
Here is the current attempt using synchronize:
- name: move /opt to new partition
  become: true
  synchronize:
    src: /opt/*
    dest: /mnt/opt/*
  delegate_to: "{{ inventory_hostname }}"
You should skip the wildcards; that is a common mistake:
UPDATE
Thanks to the user @Zeitounator, I managed to do it with synchronize.
The advantage of using synchronize instead of the copy module is performance: it is much faster when there are many files to copy.
- name: move /opt to new partition
  become: true
  synchronize:
    src: /opt/
    dest: /mnt/opt
  delegate_to: "{{ inventory_hostname }}"
So the initial answer was basically right, but the wildcards "*" had to be removed, along with the trailing slash on the dest path.
Also, since this is a move, you still need to delete the source files under /opt/ afterwards, as sketched below.
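One hedged option for a true move is to pass rsync's --remove-source-files flag through the synchronize module's rsync_opts parameter; note that rsync then deletes each transferred file but leaves empty directories behind on the source:
- name: move /opt to new partition, removing source files
  become: true
  synchronize:
    src: /opt/
    dest: /mnt/opt
    rsync_opts:
      - "--remove-source-files"  # delete each file on the source after a successful transfer
  delegate_to: "{{ inventory_hostname }}"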

ansible-playbook gather information from a file

I want to read a file with Ansible, find a specific thing in it, and store all of the matches in a file on my localhost.
For example, there is a /tmp/test file on every host, and I want to grep for a specific string in this file and store all the results in my home directory.
What should I do?
There might be many ways to accomplish this. The choice of Ansible modules (or even tools) can vary.
One approach (using only Ansible) is:
1. Slurp the remote file.
2. Write a new file with the filtered content.
3. Fetch the file to the control machine.
Example:
- hosts: remote_host
  tasks:
    # Slurp the file
    - name: Get contents of file
      slurp:
        src: /tmp/test
      register: testfile

    # Filter the contents to new file
    - name: Save contents to a variable for looping
      set_fact:
        testfile_contents: "{{ testfile.content | b64decode }}"

    - name: Write a filtered file
      lineinfile:
        path: /tmp/filtered_test
        line: "{{ item }}"
        create: yes
      when: "'TEXT_YOU_WANT' in item"
      with_items: "{{ testfile_contents.split('\n') }}"

    # Fetch the file
    - name: Fetch the filtered file
      fetch:
        src: /tmp/filtered_test
        dest: /tmp/
This will fetch the file to /tmp/<ANSIBLE_HOSTNAME>/tmp/filtered_test.
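If that nested <hostname>/tmp/ layout is unwanted, fetch's flat option can place each file directly at dest; a small sketch that uses the inventory hostname to keep files from different hosts apart:
- name: Fetch the filtered file to a flat, per-host path
  fetch:
    src: /tmp/filtered_test
    # with flat: yes, dest is used as the full local file path
    dest: "/tmp/{{ inventory_hostname }}-filtered_test"
    flat: yes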
You can use the Ansible fetch module to download files from the remote system to your local system. You can then do the processing locally, as shown in this Ansible cli example:
REMOTE=[YOUR_REMOTE_SERVER]; \
ansible -m fetch -a "src=/tmp/test dest=/tmp/ansiblefetch/" $REMOTE && \
grep "[WHAT_YOU_ARE_INTERESTED_IN]" /tmp/ansiblefetch/$REMOTE/tmp/test > /home/user/ansible_files/$REMOTE
This snippet runs the ad-hoc version of Ansible, calling the fetch module with the source path (on the remote) and the destination folder (local) as arguments. Fetch copies the file into [DEST]/[REMOTE_NAME]/[SRC_PATH], from which we then grep what we are interested in and write the output to /home/user/ansible_files/$REMOTE.

Ansible Unarchive command causes error "Failed to find handler"

I operate Amazon Linux 2 on EC2 using Ansible. However, when the unarchive task is executed, the following error is displayed.
"Failed to find handler for \"/tmp/hoge.db.gz\".
Make sure the required command to extract the file is installed.
Command \"/usr/bin/unzip\" could not handle archive. Command \"/usr/bin/gtar\" could not handle archive."
The contents of the playbook are as follows.
- name: Unarchive hoge
  become: yes
  unarchive:
    src: /tmp/hoge.db.gz
    dest: /root/fuga/
    remote_src: yes
Below is the information I have examined to identify the cause of the error.
unarchive Requires Command
[root@ip- ~]# which gtar
/usr/bin/gtar
[root@ip- ~]# which unzip
/usr/bin/unzip
[root@ip- ~]# which zipinfo
/usr/bin/zipinfo
PATH
- debug:
    var: ansible_env.PATH
"ansible_env.PATH": "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin"
The unarchive module cannot handle gzip files unless they are a compressed tarball (see https://docs.ansible.com/ansible/latest/modules/unarchive_module.html).
You will need to use the copy module to first copy the gzip file, and then the shell module to decompress it using gunzip.
Example:
- copy:
    src: /tmp/hoge.db.gz
    dest: /root/fuga/hoge.db.gz
    remote_src: yes  # the gzip file already lives on the managed host
- shell: gunzip /root/fuga/hoge.db.gz
You may need to first install gunzip on the managed host; a sketch of that prerequisite task is below.
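On most distributions gunzip is provided by the gzip package, so a minimal sketch of that prerequisite could be:
- name: Ensure gzip (which provides gunzip) is installed
  package:
    name: gzip
    state: present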
It works if you use the extra_opts like so:
- name: Unarchive hoge
  become: true
  ansible.builtin.unarchive:
    src: /tmp/hoge.db.gz
    dest: /root/fuga/
    remote_src: yes
    extra_opts:
      - '-z'
Tested with ansible 2.9.25 and python 3.6.8 on CentOS 8.

How to copy local files named with the destination server name?

How would you translate this small script into an Ansible playbook?
Files to copy are named [ServerName].[extension].
The destination server is ServerName.
for file in $(ls /var/tmp)
do
  ServerName=$(echo $file | awk -F. 'NF{NF--};1')
  scp /var/tmp/$file $ServerName:/var/tmp/
  scp /var/tmp/pkg.rpm $ServerName:/var/tmp/
  ssh $ServerName "cd /var/tmp; yum -y localinstall pkg.rpm"
done
Thanks for your help
The idea would be to have something like this (but working, of course)
- name: main loop
  copy:
    src: "{{ item }}"
    dest: "/var/tmp/myfile.json"

- name: Install package
  yum:
    name: "packageToInstall"
    state: present
  delegate_to: "{{ item.split('/')[-1][:-5] }}"
  with_fileglob:
    - "/var/temp/*json"
If you were to write this in YAML, you should use ansible_hostname; Ansible already gathers this kind of information about the host via setup.
- name: copying files
  copy:
    src: /mine/file.ext
    dest: /etc/{{ ansible_hostname }}.ext
More about copy in the module description.
All facts gathered during setup are available here.
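Putting the pieces together for the original loop, a hedged sketch might look like the following, assuming the inventory hostnames match the file basenames, the play runs against localhost with each task delegated to the right server, and basename/splitext (built-in Ansible filters) are used instead of string slicing:
- name: Copy each file to the server it is named after
  copy:
    src: "{{ item }}"  # with_fileglob yields full paths on the control machine
    dest: /var/tmp/
  delegate_to: "{{ item | basename | splitext | first }}"
  with_fileglob:
    - /var/tmp/*.json

- name: Install the package on the same server
  yum:
    name: /var/tmp/pkg.rpm  # assumes pkg.rpm was copied there in a similar delegated task
    state: present
  delegate_to: "{{ item | basename | splitext | first }}"
  with_fileglob:
    - /var/tmp/*.json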

Execute local script on remote without copying it across in Ansible

Is it possible to execute a local script on a remote host in Ansible without copying it across and then executing it?
The script, shell and command modules all seems like they might be the answer but I'm not sure which is best.
The script module describes itself as "Runs a local script on a remote node after transferring it" but the examples given don't suggest a copy operation - e.g. no src, dest - so maybe this is the answer?
script module FTW
tasks:
  - name: Ensure docker repo is added
    script: "{{ role_path }}/files/add-docker-repo.sh"
    register: dockeraddrepo
    notify: Done dockeraddrepo
    when: ansible_local.dockeraddrepo | d(0) == 0

handlers:
  - name: Done dockeraddrepo
    copy:
      content: '{{ dockeraddrepo }}'
      dest: /etc/ansible/facts.d/dockeraddrepo.fact
The handler writes the registered result into a custom local fact, so on subsequent runs ansible_local.dockeraddrepo is defined and the when condition keeps the script from running again.
