Ansible - Fetching a file from a Windows remote share

I'm trying to get a file from a Windows DFS share to the localhost (Linux) to parse it later.
The path to the file is something like: \\windows_host\folder\file
And I'm trying to use the fetch module with something similar to this:
- name: Test
  hosts: all
  connection: local
  gather_facts: no
  tasks:
    - name: Fetching a file from a Windows DFS Share
      fetch:
        src: \\windows_host\folder\file
        dest: local_folder/file
        flat: yes
But when I run it, it does not fetch the file, and with the verbose option it says:
"msg": "the remote file does not exist, not transferring, ignored"
The file does exist at that location, so I suspect the problem is the path encoding (I might be wrong); I have tried a few different formats, but so far no luck.
Does anyone know how to do this, or what I'm doing wrong?
Alternative ways to get the file are also appreciated, bearing in mind that I'm not allowed to mount the share or to have any service (FTP/HTTP/etc.) serving the file.
Thanks in advance
ValerioG

I actually managed to make it work using the command module and the smbclient command on Linux.
In case someone needs something similar, the playbook below works for me.
---
- name: Test
  hosts: all
  connection: local
  gather_facts: no
  vars_files:
    - vault_with_AD_credentials.yaml
  tasks:
    - name: Getting the Exchange Data file from Windows Share
      run_once: yes
      command: smbclient '\\windows_host\share' -c 'lcd local_folder; cd remote_folder; get filename' -U {{ ad_username }}%{{ ad_password }}
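One caveat worth noting: with this approach the AD password is interpolated into the command line, so it can show up in Ansible's task output. A minimal hardening sketch using no_log, a standard Ansible task keyword (the task is otherwise unchanged):

    - name: Getting the Exchange Data file from Windows Share
      run_once: yes
      no_log: true   # keep the interpolated credentials out of logs and -v output
      command: smbclient '\\windows_host\share' -c 'lcd local_folder; cd remote_folder; get filename' -U {{ ad_username }}%{{ ad_password }}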

Send the output from Ansible to a file

I am trying to gain knowledge in Ansible and solve a few problems. I want to save the output locally on the server the playbook is run from; I'm not sure if that is even possible.
In the example below I am just printing to the terminal where I run the playbook. That is not much use when there is a large amount of data, so I would like the output to be saved to a file on the server I am running the playbook from instead.
---
- name: list os version
  hosts: test
  become: true
  tasks:
    - name: hostname
      command: hostname
      register: command_output
    - name: cat /etc/redhat-release
      command: cat redhat-release chdir=/etc
    - name: Print output to console
      debug:
        msg: "{{ command_output.stdout }}"
I really want the output to go to a file, but I can't find anything about whether this is possible.
As described in the Ansible documentation, you can create a local configuration file ansible.cfg in the directory where your playbook lives, and then set the appropriate log-file option so that all the playbook output is written there: Ansible output documentation
By default Ansible sends output about plays, tasks, and module arguments to your screen (STDOUT) on the control node. If you want to capture Ansible output in a log, you have three options:
- To save Ansible output in a single log on the control node, set the log_path configuration file setting (a minimal example follows this list). You may also want to set display_args_to_stdout, which helps to differentiate similar tasks by including variable values in the Ansible output.
- To save Ansible output in separate logs, one on each managed node, set the no_target_syslog and syslog_facility configuration file settings.
- To save Ansible output to a secure database, use AWX or Red Hat Ansible Automation Platform. You can then review history based on hosts, projects, and particular inventories over time, using graphs and/or a REST API.
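For the first option, a minimal ansible.cfg sketch placed next to the playbook (the log file name is just an illustration):

[defaults]
log_path = ./ansible.log
display_args_to_stdout = True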
If you just want to write the result of a task to a file, use the copy module delegated to localhost:
---
- name: list os version
  hosts: test
  become: true
  tasks:
    - name: hostname
      command: hostname
      register: command_output
    - name: cat /etc/redhat-release
      command: cat redhat-release chdir=/etc
    - name: Create your local file on master node
      ansible.builtin.file:
        path: /your/local/file
        state: touch   # create the file if it does not exist yet
        owner: foo
        group: foo
        mode: '0644'
      delegate_to: localhost
    - name: Print output to file
      ansible.builtin.copy:
        content: "{{ command_output.stdout }}"
        dest: /your/local/file
      delegate_to: localhost
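If you need a single local file that aggregates the registered output from every host in the play, a hedged sketch (the dest path is just an illustration) is to render hostvars once in a delegated task:

    - name: Write every host's output to one local file
      ansible.builtin.copy:
        content: |
          {% for host in ansible_play_hosts %}
          {{ host }}: {{ hostvars[host].command_output.stdout }}
          {% endfor %}
        dest: /tmp/os_versions.txt
      delegate_to: localhost
      run_once: true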

How should I install Splunk from a remote tgz package?

I'm trying to perform an additional task on the output of stdout_lines.
Here is the playbook:
- name: Change to Splunk user
  hosts:
  sudo: yes
  sudo_user: splunk
  gather_facts: true
  tasks:
    - name: Run WGET & install SPLUNK
      command: wget -O splunk-9.0.2-17e00c557dc1-Linux-x86_64.tgz https://download.splunk.com/products/splunk/releases/9.0.2/linux/splunk-9.0.2-17e00c557dc1-Linux-x86_64.tgz
    - name: run 'ls' to get SPLUNK_PACKAGE_NAME
      shell: 'ls -l'
      register: command_output
    - debug:
        var: command_output.stdout_lines
I am using wget to download Splunk onto the server, and I need the Splunk package name so that I can extract the file in the next task.
For that, I tried to register the ls -l output as command_output.
Now I need to untar it (tar xvzf splunk_package_name.tgz -C /opt), but I don't know how I can use the stdout_lines output in my tar command.
In Ansible, your use case should come down to one single task, using the unarchive module along with the remote_src parameter set to true and src set to your URL.
As described in the documentation:
If remote_src=yes and src contains ://, the remote machine will download the file from the URL first.
So, you end up with this single task:
- name: Install Splunk from remote archive
  unarchive:
    src: "https://download.splunk.com/products/splunk/releases/9.0.2\
      /linux/splunk-9.0.2-17e00c557dc1-Linux-x86_64.tgz"
    remote_src: true
    ## with this, you will end up with Splunk installed in /opt/splunk
    dest: /opt
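To make reruns cheap, one hedged refinement is the unarchive module's creates argument, which skips the task when a given path already exists; the /opt/splunk path below is an assumption about where this particular archive unpacks:

- name: Install Splunk from remote archive (skipped once unpacked)
  unarchive:
    src: https://download.splunk.com/products/splunk/releases/9.0.2/linux/splunk-9.0.2-17e00c557dc1-Linux-x86_64.tgz
    remote_src: true
    dest: /opt
    creates: /opt/splunk   # assumption: the tgz unpacks into /opt/splunk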

How to run a Linux-like cp command on the same server, when copy says it does not find the remote server

I am trying to emulate the scenario of copying a local file from one directory to another directory on the same machine, but the Ansible copy module is always looking for a remote server.
The code I am using:
- name: Configure Create directory
  hosts: 127.0.0.1
  connection: local
  vars:
    customer_folder: "{{ customer }}"
  tasks:
    - file:
        path: /opt/scripts/{ customer_folder }}
        state: directory
    - copy:
        src: /home/centos/absample.txt
        dest: /opt/scripts/{{ customer_folder }}
I am running this playbook like:
ansible-playbook ab_deploy.yml --extra-vars "customer=ab"
So I am facing two problems:
First, it should create a directory called ab under /opt/scripts/, but it creates a folder literally named { customer_folder }}; it is not taking ab as the name of the directory.
Second, as I read in the documentation, copy only works for copying files from local to a remote machine, but what I want is simply to copy from local to local.
How can I achieve this? It might be silly; I am just trying things out.
Please suggest.
I solved it: I used the cp command under the shell module, and then it worked.
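For completeness, a hedged sketch of what the original playbook likely needed: the missing brace fixed in the file task, plus remote_src on the copy task so the source is read on the target machine rather than searched for on the controller:

- name: Configure Create directory
  hosts: 127.0.0.1
  connection: local
  vars:
    customer_folder: "{{ customer }}"
  tasks:
    - file:
        path: /opt/scripts/{{ customer_folder }}   # note: both opening braces are required
        state: directory
    - copy:
        src: /home/centos/absample.txt
        dest: /opt/scripts/{{ customer_folder }}
        remote_src: true   # read src on the target host rather than the controller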

Extract nslookup result

I'm trying to figure out how I can extract the value of the "Address" field from an nslookup command result in an Ansible task. I will always get back one IP address in the result:
nslookup fs-d12345.efs.ap-southeast-2.amazonaws.com 10.75.0.2
Server:    10.75.0.2
Address:   10.75.0.2#53

Non-authoritative answer:
Name:    fs-d12345.efs.ap-southeast-2.amazonaws.com
Address: 10.75.21.67
I need the value 10.75.21.67 to be stored as a var that I can use later in the playbook.
My task would look something like:
- shell: "nslookup fs-d12345.efs.ap-southeast-2.amazonaws.com 10.75.0.2"
register: results
How do I extract the value of the Address?
Thanks!
Before reaching for the command or shell module, always have a look around for alternatives: for common tasks, the Ansible community has often done the heavy lifting for you.
If you can accommodate installing an additional Python package to support it, there is already an nslookup (well, dig) lookup built into Ansible:
- set_fact:
    target_ip: "{{ lookup('dig', 'fs-d12345.efs.ap-southeast-2.amazonaws.com', '@10.75.0.2') }}"
The prerequisite to getting this to work is that you need the dnspython library installed on the machine where this task will run, e.g.
apt-get install python-dnspython
or
yum install python-dns # (I think ...)
If you only want to have to do this on your control machine, but you want to be able to access the looked-up data on a remote machine, you could do something like this:
- hosts: localhost
  connection: local
  tasks:
    - set_fact:
        target_ip: "{{ lookup('dig', 'fs-d12345.efs.ap-southeast-2.amazonaws.com', '@10.75.0.2') }}"

- hosts: remote_machine
  tasks:
    - debug:
        var: hostvars['localhost'].target_ip
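If installing dnspython is not an option, a hedged fallback is to parse the registered nslookup output with the regex_findall filter; the pattern below assumes the answer's Address line comes last, as in the sample output above:

- shell: "nslookup fs-d12345.efs.ap-southeast-2.amazonaws.com 10.75.0.2"
  register: results
- set_fact:
    # every "Address:" line matches, so "last" picks the answer, not the server
    target_ip: "{{ results.stdout | regex_findall('Address:\\s*([\\d.]+)') | last }}"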

Get sorted list of folders with Ansible

I have OS X "El Capitan" 10.11.6 and I am using Ansible 2.1.1.0 to run some maintenance tasks on a remote Linux server (Ubuntu 16.04 Xenial). I am trying to get the following list of folders sorted on the remote machine (Linux), so I can remove the old ones when needed:
/releases/0.0.0
/releases/0.0.1
/releases/0.0.10
/releases/1.0.0
/releases/1.0.5
/releases/2.0.0
I have been trying with the find module in Ansible, but it returns an unsorted list. Is there an easy way to achieve this with Ansible?
You can sort items with the sort filter:
- hosts: localhost
  gather_facts: no
  tasks:
    - find: path="/tmp" patterns="test*"
      register: files
    - debug: msg="{{ files.files | sort(attribute='ctime') | map(attribute='path') | list }}"
Just change the sort attribute to your needs.
But beware that string sort is not numeric, so /releases/1.0.5 will go after /releases/1.0.10.
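If a numeric-aware ordering is what you are after, one hedged option (assuming the community.general collection is installed and recent enough to ship this filter) is version_sort, which compares strings as version numbers rather than alphabetically:

- debug:
    msg: "{{ files.files | map(attribute='path') | map('basename') | community.general.version_sort | list }}"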
Interesting solutions, thanks a lot guys. But I think I have found the easiest way in Ubuntu: just using ls -v /releases/ will apply natural sorting to all folders:
- name: List of releases in ascending order
  command: ls -v /releases/
  register: releases
- debug: msg={{ releases.stdout_lines }}
The response is:
ok: [my.remote.com] => {
    "msg": [
        "0.0.0",
        "0.0.1",
        "0.0.10",
        "1.0.0",
        "1.0.5",
        "2.0.0"
    ]
}
If you want to find files older than a certain period, maybe the age and age_stamp parameters of the find module can help you. For example:
# Recursively find /tmp files older than 4 weeks and equal or greater than 1 megabyte
- find: paths="/tmp" age="4w" size="1m" recurse=yes
It sounds like what you want to do is really simple, but the standard Ansible modules don't quite have what you need.
As an alternative, you can write your own script in your favorite programming language, use the copy module to push that script to the host, and use command to execute it. When done, use file to remove the script.
The downside is that the target host needs the required interpreter to run your script. For instance, if you write a Python script, the target host will need Python.
Example:
- name: Send your script to the target host
  copy: src=directory_for_scripts/my_script.sh dest=/tmp/my_script.sh
- name: Execute my script on target host
  command: >
    /bin/bash /tmp/my_script.sh
- name: Clean up the target host by removing script
  file: path=/tmp/my_script.sh state=absent
