Ansible fetch file from windows machine to local ansible machine - ansible

I am attempting to fetch a zip file from a Windows machine to the local Ansible machine, to later be unzipped and committed to our SCM.
I am wondering if it is possible to fetch files from a Windows src to the local Ansible machine...
I have previously done the inverse of this using win_copy and specifying remote_src: no, but I'm not sure if it works the other way around.
- name: "Fetch the file from the src to tower"
fetch:
src: "C:/backup.zip"
dest: "{{ ansible_tmp_path }}/backup.zip"
delegate_to: 127.0.0.1
This is what I have come up with so far, but it fails with this output:
[$remote_host_name]: FAILED! => {
    "changed": false,
    "invocation": {
        "module_args": {
            "src": "C:/backup.zip"
        }
    },
    "msg": "file not found: C:/backup.zip"
}

You can remove the delegate_to: 127.0.0.1.
Since the task is delegated to localhost, Ansible searches for "C:/backup.zip" on your Ansible Tower machine instead of on the Windows host.
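A minimal sketch of the corrected task, run against the Windows host (flat: yes is optional; it stops fetch from adding a hostname subdirectory under dest):

- name: "Fetch the file from the src to tower"
  fetch:
    src: "C:/backup.zip"
    dest: "{{ ansible_tmp_path }}/backup.zip"
    flat: yes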

Copy & fetch files in Ansible/Cygwin

Question: how do you navigate the cygwin path structure for file transfers, copies, and fetches?
I've installed ansible on a Windows 10 machine using cygwin. Everything works except for the ansible.builtin.copy task. Here is the setup.
Relevant Directory Structure
C:.
├───.github
│   └───workflows
├───files
└───payload
    ├───communication
    ├───monitoring
The playbook sits in the documents directory of the user, so . is C:/Users/user/Documents/
Ansible Task
- name: Download YAML payloads
  ansible.builtin.copy:
    src: payload
    dest: /some/directory/
The ansible cygwin command line actually runs from the /cygdrive/c/Users... path. I can navigate to the payload directory from either the Windows CLI or the cygwin CLI using their native paths. (Must be a symlink?) In any event, when I run the above task, the src directory is not found.
What I've tried: both absolute and relative path variables in the src line, for both the cygwin and the Windows paths. I've also tried using the inventory environment variables ({{ playbook_dir }}). fileglob: didn't work either.
What I haven't tried: {{ role_path }}. I'd like to keep the source YAMLs all together in the top directory, but I'm not sure whether this would work by putting the files directory under a role.
Added details
Path to playbook from windows:
C:\Users\billr\Documents\GitHub\home-k3s
Path to playbook from cygwin:
/cygdrive/c/Users/billr/Documents/GitHub/home-k3s
files & directories
home-k3s
    files           // these are the files/dirs I'm looking to copy
        payload
            communication
                first.yaml
                second.yaml
            monitoring
                first.yaml
                second.yaml
    hosts.ini       // contains playbook hosts
    test.yml        // this is the playbook I'm running
The playbook:
---
- hosts: master
  gather_facts: yes
  become: yes
  tasks:
    - name: Download YAML payloads
      ansible.builtin.copy:
        src: payload
        dest: /home/bill/
Run #1
src: payload <-- this is the method per docs (for linux).
result: FAILED! => {"changed": false, "msg": "Source payload not found"}
Run #2
src: "{{ playbook_dir }}/files/payload"
result: FAILED! => {"changed": false, "msg": "Source /cygdrive/c/Users/billr/Documents/GitHub/home-k3s/files/payload not found"}
Run #3
src: "/cygdrive/c/Users/billr/Documents/GitHub/home-k3s/files/payload"
result: FAILED! => {"changed": false, "msg": "Source /cygdrive/c/Users/billr/Documents/GitHub/home-k3s/files/payload not found"}
Run #4
src: "c:/Users/billr/Documents/GitHub/home-k3s/files/payload"
FAILED! => {"changed": false, "msg": "Source c:/Users/billr/Documents/GitHub/home-k3s/files/payload not found"}
Note that I can see the files from the cygwin terminal with ls and I can see the files from the windows cli with dir.
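One quick way to see which control-node path Ansible is actually resolving is a debug task; a minimal sketch (the task name is illustrative):

- name: show where Ansible thinks the playbook lives
  ansible.builtin.debug:
    msg: "playbook_dir={{ playbook_dir }} cwd={{ lookup('env', 'PWD') }}"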
Final Notes
Cygwin Github Issue Link

ansible: How to change $HOME directory

I am running ansible 2.9.6 on my control machine (Ubuntu 18.04 Desktop) to control a single Ubuntu 16.04 server which doesn't have a /home/username/ directory.
I don't intend to create one, either.
I am just trying to create a new folder "/usr/local/src/fromcontrolmachine" on the slave machine from the control machine, so I ran the command below:
dinesh@dinesh-VirtualBox:/etc/ansible$ ansible all -u dira --become -m file -a "dest=/usr/local/src/fromcontrolmachine mode=755 owner=dira group=dira state=directory" -K
BECOME password:
10.211.108.44 | FAILED! => {
    "changed": false,
    "module_stderr": "Shared connection to 10.211.108.44 closed.\r\n",
    "module_stdout": "Could not chdir to home directory /home/dira: No such file or directory\r\n\r\n",
    "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
    "rc": 1
}
I thought of changing the $HOME directory by adding the line below to /etc/ansible/ansible.cfg, but it just created an empty folder called ansible:
remote_tmp = usr/local/src/ansible
How do I tell Ansible to change the default $HOME directory to a location other than the default /home/dira?
I want to clear this annoying error:
"module_stdout": "Could not chdir to home directory /home/dira271641: No such file or directory
UPDATE:
Also tried creating a playbook pb.yml and adding home_dir: /usr/local/src/ansible, as shown below:
---
- hosts: all
  become: true
  tasks:
    - set_fact:
        home_dir: /usr/local/src/ansible
      become: true
    - name: ansible create directory example
      file:
        path: /tmp/devops_directory
        state: directory
When I run the above using the command ansible-playbook pb.yml -K, it gives the same error as mentioned above.
UPDATE:
I tried environment: HOME:
---
- hosts: all
  become: true
  environment:
    HOME: /usr/local/src/ansible
  tasks:
    - name: ansible create directory example
      file:
        path: /tmp/devops_directory
        state: directory
This throws the same error:
Could not chdir to home directory /home/dira: No such file or directory\r\n\r\n", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}}, "msg": "The following modules failed to execute: setup\n"}
Adding the line below:
become_user: dira
solved this problem. Note: dira is my username, so substitute your own.
The full playbook then looks like:
---
- hosts: all
  become: true
  become_user: dira
  environment:
    HOME: /usr/local/src/ansible
  tasks:
    - name: ansible create directory example
      file:
        path: /tmp/devops_directory
        state: directory
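The same fix should apply to the original ad-hoc command via the --become-user flag; a sketch, not tested against the setup above:

ansible all -u dira --become --become-user dira -K \
  -m file -a "dest=/usr/local/src/fromcontrolmachine mode=755 owner=dira group=dira state=directory"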

Using Ansible Playbook how to copy Java certs to hosts?

Using an Ansible playbook, how do I copy Java certs to hosts? Each host has a different JDK installed. I need to check which JDK each host is running and copy the certificates to all the hosts.
I have written the playbook below; the error I'm getting follows it. Please help me figure out what's wrong.
---
- hosts: test
  vars:
    pack1: /ngs/app/rdrt
    pack2: /usr/java/jdk*
  tasks:
    - name: copy the files
      copy:
        src: "/Users/sivarami.rc/Downloads/Problem46218229/apple_corporate_root_ca.pem"
        dest: "{{ pack1 }}"
    - name: copy the files
      copy:
        src: "/Users/sivarami.rc/Downloads/Problem46218229/apple_corporate_root_ca2.pem"
        dest: "{{ pack1 }}"
    - name: copy the files
      copy:
        src: "/Users/sivarami.rc/Downloads/Problem46218229/ca-trust-check-1.0.0.jar"
        dest: "{{ pack1 }}"
    - name: Import SSL certificate to a given cacerts keystore
      java_cert:
        cert_path: "{{ pack1 }}/apple_corporate_root_ca.pem"
        cert_alias: Apple_Corporate_Root_CA
        cert_port: 443
        keystore_path: "{{ pack2 }}/jre/lib/security/cacerts"
        keystore_pass: change-it
        executable: "{{ pack2 }}/bin/keytool"
        state: present
    - name: Import SSL certificate to a cacerts keystore
      java_cert:
        cert_path: "{{ pack1 }}/apple_corporate_root_ca2.pem"
        cert_alias: Apple_Corporate_Root_CA2
        cert_port: 443
        keystore_path: "{{ pack2 }}/jre/lib/security/cacerts"
        keystore_pass: changeit
        executable: "{{ pack2 }}/bin/keytool"
        state: present
    - name: checking those files trusted or untrusted
      shell: "{{ pack2 }}/bin/java -jar {{ pack1 }}/ca-trust-check-1.0.0.jar"
The error:
fatal: [c5147061@rn2-radart-lapp117.rno.apple.com]: FAILED! => {"changed": false, "cmd": "'/usr/java/jdk*/bin/keytool'", "msg": "[Errno 2] No such file or directory", "rc": 2}
fatal: [c5147061@rn2-radart-lapp121.rno.apple.com]: FAILED! => {"changed": false, "cmd": "'/usr/java/jdk*/bin/keytool'", "msg": "[Errno 2] No such file or directory", "rc": 2}
The following error is displayed:
"cmd": "'/usr/java/jdk*/bin/keytool'", "msg": "[Errno 2] No such file or directory"
As you can see, the keytool command cannot be found at that location: the wildcard in the path is passed through literally rather than expanded. You need to ensure that the path you provide actually exists on the server.
Where you define the pack2 variable, provide the full path instead of a wildcard, e.g. like this:
vars:
  pack2: /usr/java/jdk-1.8.0_67
Then ensure that this path exists on the remote machine, and your code should no longer show that error.
If the path is different on each node because each node has a different Java version, here are some options:
Use host-specific variables to define the path for each host, if you have that information.
Gather the information in a previous step, e.g. like here: Check Java version via Ansible playbook; see the sketch after this list.
Check the JAVA_HOME environment variable to see if it is set.
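A minimal sketch of the second option, assuming exactly one jdk* directory exists under /usr/java on each host (the task names and the jdk_dirs variable are illustrative):

- name: locate the installed JDK directory on each host
  find:
    paths: /usr/java
    patterns: "jdk*"
    file_type: directory
  register: jdk_dirs

- name: point pack2 at the discovered JDK path
  set_fact:
    pack2: "{{ jdk_dirs.files[0].path }}"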
I had the same error that the keytool utility was not found (on my PATH), but that was because I did not use the become_user which has the correct PATH value.
So my solution was to add the following lines to my playbook:
become: yes
become_user: wls
(wls is the WebLogic user, but it can be another system account depending on your needs.)
I had the same error because keytool was linked to a really old version of the JDK (version 6).
Using a more recent version (JDK 11) fixed this error.

SSH-less LXC containers using Ansible

I am new to ansible, and I am trying to use ansible on some lxc containers.
My problem is that I don't want to install ssh on my containers.
What I tried:
I tried to use this connection plugin, but it seems that it does not work with ansible 2.
After understanding that chifflier's connection plugin doesn't work, I tried to use the connection plugin from openstack.
After some failed attempts I dived into the code, and I understand that the plugin never gets the information that the host I am talking to is a container (the code never reaches that point).
My current setup:
{Ansible host}---|ssh|---{vm}---|ansible connection plugin|---{container1}
My ansible.cfg:
[defaults]
connection_plugins = /home/jkarr/ansible-test/connection_plugins/ssh
inventory = inventory
My inventory:
[hosts]
vm ansible_host=192.168.28.12
[containers]
mailserver physical_host=vm container_name=mailserver
my group vars:
ansible_host: "{{ physical_hostname }}"
ansible_ssh_extra_args: "{{ container_name }}"
ansible_user: containeruser
container_name: "{{ inventory_hostname }}"
physical_hostname: "{{ hostvars[physical_host]['ansible_host'] }}"
My testing playbook:
- name: Test Playbook
  hosts: containers
  gather_facts: true
  tasks:
    - name: testfile
      copy:
        content: "Test"
        dest: /tmp/test
The output is:
fatal: [mailserver]: UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname mailserver: No address associated with hostname\r\n",
    "unreachable": true
}
Ansible version is: 2.3.1.0
So what am I doing wrong? Any tips?
Thanks in advance!
Update 1:
Based on eric's answer I am now using this connection plug-in.
I updated my inventory and it now looks like:
[hosts]
vm ansible_host=192.168.28.12
[containers]
mailserver physical_host=vm ansible_connection=lxc
After running my playbook I got:
<192.168.28.12> THIS IS A LOCAL LXC DIR
fatal: [mailserver]: FAILED! => {
    "failed": true,
    "msg": "192.168.28.12 is not running"
}
Which is weird, because 192.168.28.12 is the vm and the container is called mailserver. I also verified that the container is running.
Also, why does it say that 192.168.28.12 is a local lxc dir?
Update 2:
I removed my group_vars, my ansible.cfg, and the connection plugin from the playbook, and I got this error:
<mailserver> THIS IS A LOCAL LXC DIR
fatal: [mailserver]: FAILED! => {
    "failed": true,
    "msg": "mailserver is not running"
}
You should take a look at this lxc connection plugin. It might fit your needs.
Edit: the lxc connection plugin is actually part of Ansible.
Just add ansible_connection=lxc to your inventory or group vars.
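A minimal inventory sketch of that, assuming the container runs on the control node itself (as the next answer points out, the plugin only attaches to local containers):

[containers]
mailserver ansible_connection=lxc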
I'm trying something similar.
I want to configure a host over ssh using ansible and run lxc containers on the host, which are also configured using ansible:

ansible control node --(ssh)--> host-a --(lxc-attach)--> container-a

The issue with the lxc connection module is that it only works for local lxc containers; there is no way to get it working through ssh.
At the moment the only way seems to be a direct ssh connection, or an ssh connection through the first host:

ansible control node --(ssh)--> container-a

or

ansible control node --(ssh)--> host-a --(ssh)--> container-a

Both require sshd installed in the container, but the second way doesn't need port forwarding or multiple ip addresses.
Did you get a working solution?
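For the second topology, the usual approach is an SSH jump host; a minimal group_vars sketch, assuming sshd runs inside container-a and host-a is reachable (the names are taken from the diagram above):

# group_vars for the containers group; ProxyJump tunnels through host-a
ansible_ssh_common_args: '-o ProxyJump=user@host-a'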

Ansible unarchive: doesn't reach a remote host

I use Ansible as a provisioner for Vagrant. I have a task:
- name: download and unarchive redis binaries
  unarchive:
    src: "http://download.redis.io/redis-stable.tar.gz"
    dest: "/tmp"
    remote_src: True
but for some reason I see an error in the console when I run vagrant provision:
"failed": true, "msg": "file or module does not exist: /Users/my-username/Projects/project-name/http:/download.redis.io/redis-stable.tar.gz"`
> ansible --version
ansible 2.1.2.0
Any ideas?
NB: look carefully at the error: http:/download. Why is there only one slash?
The syntax from your question works with Ansible 2.2.0.0 and later.
For Ansible 2.0 and 2.1 use:
- name: download and unarchive redis binaries
  unarchive:
    src: "http://download.redis.io/redis-stable.tar.gz"
    dest: "/tmp"
    copy: false
The double slash from your question was collapsed to a single one because the src argument was treated as a path to a local file (again, because these old versions of Ansible required copy: false in addition to the URL).
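A two-step alternative that sidesteps the version-specific unarchive behaviour is to download first and unpack separately; a minimal sketch (remote_src needs Ansible 2.2+, use copy: false on older versions):

- name: download the redis tarball to the remote host
  get_url:
    url: "http://download.redis.io/redis-stable.tar.gz"
    dest: /tmp/redis-stable.tar.gz

- name: unpack the tarball already present on the remote host
  unarchive:
    src: /tmp/redis-stable.tar.gz
    dest: /tmp
    remote_src: yes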
