Ansible: transferring files between hosts

With Ansible, I'm trying to copy an application artifact from a remote server "artifacts_host" to a target machine, i.e. a host in my inventory. The play I'm trying to run is something like:
- name: rsync WAR artifact from artifacts host
  synchronize: >
    src={{ artifacts_path }}/{{ artifact_filename }}.war
    dest={{ artifact_installation_dir }}
  delegate_to: "{{ artifacts_host }}"
I came very close to getting this to work by using ansible-vault to encrypt a "secrets.yml" variable file containing the artifacts_host's public key, and then installing that key in the target machine's authorized_keys file like:
- name: install artifacts_host's public key to auth file
  authorized_key: >
    user={{ ansible_ssh_user }}
    key='{{ artifacts_host_public_key }}'
  sudo: yes
but the problem is that my artifacts_host cannot resolve an IP address from the FQDN that Ansible passes to it. If I were able to "inform" the artifacts_host of the IP to use (i.e. what the FQDN should resolve to), then I would be fine. I would also be fine having the task fire off on the target machine to pull from the artifacts_host, but I can't find an idempotent way of accomplishing this, nor can I figure out how to feed the target machine a login/password or SSH key to use.
Am I just going to have to template out a script to push to my targets?

For anyone who comes across this and has the same question: I did not really figure it out. I just decided to install the private key in the target machines' /etc/ssh directory and chmod it to 0600. I figure it's basically as secure as it can get without a transient (in-memory only) key/password, and it's idempotent.
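For reference, a minimal sketch of that approach; the vaulted variable name and key path below are illustrative, not taken from my actual setup:
- name: install deploy key for pulling from artifacts_host
  copy:
    content: "{{ artifacts_host_private_key }}"  # illustrative vaulted variable
    dest: /etc/ssh/artifacts_deploy_key          # illustrative filename
    owner: root
    group: root
    mode: "0600"
  sudo: yes
A subsequent task on the target can then pull the artifact with rsync or scp, passing -i /etc/ssh/artifacts_deploy_key.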

Related

Download file to the Ansible controller instead of the remote machine

I have two machines: one is the Ansible controller, from which I execute my playbooks, and the other is the remote (target) machine which I would like to install/update. The important thing is that the controller runs within the corporate network, but the target is outside that network (only accessible via SSH).
So I need to download a file (from within the corporate network) and copy it to the target node.
I've tried to use ansible.builtin.get_url to download the file, but unfortunately it does that on the remote (target) machine, which of course has no access to the corporate network.
Does someone have a tip or an idea?
Update: Using ansible [core 2.11.6]
To download something to the local Ansible Controller you may use the following approach.
- name: Download something to Ansible Controller
  delegate_to: localhost
  get_url:
    url: "https://{{ ansible_user }}:{{ ansible_password }}@files.example.com/installer.rpm"
    dest: "/tmp/{{ ansible_user }}"
    owner: "{{ ansible_user }}"
  tags: download,local
Please take note that, according to Controlling where tasks run: delegation and local actions, delegate_to is not a parameter of the get_url module but of the task.
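To cover the second half of the question, getting the file onto the target node, a follow-up task without delegate_to should do; this is a sketch, assuming get_url stored the file under the dest directory shown above:
- name: Copy the downloaded installer to the target node
  copy:
    src: "/tmp/{{ ansible_user }}/installer.rpm"  # assumption: where get_url placed the file
    dest: /tmp/installer.rpm
  tags: download,remote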

Using key file in ansible for a particular task

What I need to achieve:
I need to copy a file from one remote server to another. I am using the YAML below.
- name: fetching file to localhost
  synchronize:
    src: /lcaope01/lca/lst1/logs/{{ inventory_hostname }}
    dest: /lcaope01/lca/FETCHEDLOG/
    mode: pull
  delegate_to: serverX
Now I have five servers (A, B, C, D, E) from which files are copied to serverX. If I create a passwordless connection between serverX and these servers, it copies easily.
But the problem is that I can't create a passwordless connection for this.
Is there any way I can use a private key file for this, specifying it in the playbook or task?
NOTE: Ansible is not running on serverX; it is on a different server.
NOTE: I have a private key for serverX.
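One option, as a sketch: the synchronize module accepts a private_key parameter for its SSH-based rsync connection. This assumes the key that serverX should present to servers A-E is available on serverX at the path given (the path is illustrative):
- name: fetching file to serverX using an explicit key
  synchronize:
    src: /lcaope01/lca/lst1/logs/{{ inventory_hostname }}
    dest: /lcaope01/lca/FETCHEDLOG/
    mode: pull
    private_key: /path/to/key_for_serverX  # illustrative path on the delegated host
  delegate_to: serverX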

How to copy a file from a remote windows host to an ansible control server?

I want to copy a file from a remote Windows host to the local Ansible server.
I have searched Stack Overflow but I only found answers for Linux hosts, like this one. Unfortunately, fetch seemed not to work with Windows hosts.
So how can I copy from a remote Windows host to a local Ansible server?
I figured it out, and I have to revert my initial statement: the error messages were misleading. The fetch module does work for Windows as well. In my case I had a bad WinRM connection, but instead of raising an error the module tried to connect via SSH and finally ended "ok" (green!). The only indication that it did not work was that the file was not copied -- which could never have succeeded, since there was no SSH connection. I reinstalled WinRM and all worked fine. Here is the working code:
- name: Fetch war file from buildserver
  fetch:
    validate_checksum: yes
    src: "{{ war_file_path }}{{ war_file_name }}"
    dest: "{{ warfile_tmp_folder }}"
    flat: yes
  delegate_to: "{{ buildserver }}"

Ansible Delegate_to WinRM

I am using the Ansible vsphere_guest module to spin up a base Windows machine in a VMware environment. In my playbook, to do this I set hosts: 127.0.0.1 and connection: local. The reason I am doing this is that I believe I'm not targeting this playbook at any particular host, as I don't have one yet; I instead want to run the playbook locally.
When this runs, I get a shiny new Windows Server VM. What I now want to do is rename that VM's computer name. To do this I am trying to upload and run a PowerShell script, like so: rename_host.ps1 $newHostname. As I understand it, I need to use the script module for this. However, this time I want to target my brand new VM, whose IP address I get through a fact, {{ newvm_ipaddress }}.
However, when I try to run this script with delegate_to: "{{ newvm_ipaddress }}", it tries to connect via SSH. SSH won't work; I'm targeting a Windows machine with remote PowerShell.
Is there any way to set the connection to use WinRM in the context of delegate_to? Perhaps there is a better way of doing this?
Thank you for your help.
I managed to work out how to solve it. The answer is the Ansible module add_host. I have a task after vsphere_guest as follows. This creates a new in-memory host, which can then be targeted by a different play.
- add_host: group=new_machine name={{ vm_ipaddress }} ansible_connection=winrm
After this, I then have a new play that can now target this host.
- hosts: new_machine
Also to note: variables do not span across different hosts. The solution was to use the set_fact module in play A, which can then be accessed from within play B:
- set_fact:
    vm_ipaddress: "{{ hw_eth0.ipaddresses[1] }}" # hw_eth0 is the fact returned from the vsphere_guest module
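Putting the pieces together, a minimal sketch of the two-play structure (the vsphere_guest options are elided, and the WinRM credentials for the new host would still need to be supplied, e.g. as ansible_user/ansible_password variables):
- hosts: 127.0.0.1
  connection: local
  tasks:
    # ... vsphere_guest task that creates the VM and returns the hw_eth0 fact ...
    - set_fact:
        vm_ipaddress: "{{ hw_eth0.ipaddresses[1] }}"
    - add_host: group=new_machine name={{ vm_ipaddress }} ansible_connection=winrm

- hosts: new_machine
  tasks:
    - name: rename the new VM
      script: rename_host.ps1 {{ newHostname }}  # assumption: newHostname is defined elsewhere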
What about updating the inventory with the new host's name and its WinRM connection params before using delegate_to, or perhaps setting some default catch-all naming scheme with these params?
For example:
[databases]
db-[a:f].example.com:5986 ansible_user=Administrator ansible_connection=winrm ansible_winrm_server_cert_validation=ignore

Having trouble provisioning EC2 instances using Ansible

I'm very confused about how you are supposed to launch EC2 instances using Ansible.
I'm trying to use the ec2.py inventory scripts. I'm not sure which one is supposed to be used, because there are three installed with Ansible:
ansible/lib/ansible/module_utils/ec2.py
ansible/lib/ansible/modules/core/cloud/amazon/ec2.py
ansible/plugins/inventory/ec2.py
I thought running the one in inventory/ would make the most sense, so I run it using:
ansible-playbook launch-ec2.yaml -i ec2.py
which gives me:
msg: Either region or ec2_url must be specified
So I add a region (even though I have a vpc_subnet_id specified), and I get:
msg: Region us-east-1e does not seem to be available for aws module boto.ec2. If the region definitely exists, you may need to upgrade boto or extend with endpoints_path
I'm thinking Amazon must have recently changed EC2 so you need to use a VPC? Even when I try to launch an instance from Amazon's console, the option for "EC2 Classic" is disabled.
When I try to use the ec2.py script in cloud/amazon/ I get:
ERROR: Inventory script (/software/ansible/lib/ansible/modules/core/cloud/amazon/ec2.py) had an execution error:
There are no more details than this.
After some searching, I see that the ec2.py module in module_utils/ has been changed so that a region doesn't need to be specified. I try to run this file but get:
ERROR: The file /software/ansible/lib/ansible/module_utils/ec2.py is marked as executable, but failed to execute correctly. If this is not supposed to be an executable script, correct this with chmod -x /software/ansible/lib/ansible/module_utils/ec2.py.
So as the error suggests, I remove the executable permissions for the ec2.py file, but then get the following error:
ERROR: /software/ansible/lib/ansible/module_utils/ec2.py:30: Invalid ini entry: distutils.version - need more than 1 value to unpack
Does anyone have any ideas on how to get this working? What is the correct file to be using? I'm completely lost at this point.
There are several questions in your post. I'll try to summarise them in three items:
Is it still possible to launch instances in EC2 Classic (no VPC)?
How do I create a new EC2 instance using Ansible?
How do I use the dynamic inventory file ec2.py?
1. EC2 Classic
Your options will differ depending on when you created your AWS account, the type of instance, and the AMI virtualisation type used. Refs: aws account, instance type.
If none of the above parameters restricts the usage of EC2 Classic, you should be able to create a new instance without defining any VPC.
2. Create a new EC2 instance with Ansible
Since your instance doesn't exist yet, a dynamic inventory file (ec2.py) is useless. Instruct Ansible to run on your local machine instead.
Create a new inventory file, e.g. new_hosts with the following contents:
[localhost]
127.0.0.1
Then your playbook, e.g. create_instance.yml, should use a local connection and hosts: localhost. See an example below:
--- # Create ec2 instance playbook
- hosts: localhost
  connection: local
  gather_facts: false
  vars_prompt:
    inst_name: "What's the name of the instance?"
  vars:
    keypair: "your_keypair"
    instance_type: "m1.small"
    image: "ami-xxxyyyy"
    group: "your_group"
    region: "us-west-2"
  tasks:
    - name: make one instance
      ec2: image={{ image }}
           instance_type={{ instance_type }}
           keypair={{ keypair }}
           instance_tags='{"Name":"{{ inst_name }}"}'
           region={{ region }}
           group={{ group }}
           wait=true
      register: ec2_info
    - name: Add instances to host group
      add_host: hostname={{ item.public_ip }} groupname=ec2hosts
      with_items: ec2_info.instances
    - name: Wait for SSH to come up
      wait_for: host={{ item.public_dns_name }} port=22 delay=60 timeout=320 state=started
      with_items: ec2_info.instances
This play will create an EC2 instance and register its public IP in the Ansible host group ec2hosts, i.e. as if you had defined it in the inventory file. This is useful if you want to provision the instance just created: just add a new play with hosts: ec2hosts.
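For example, a minimal follow-up play; the remote user is an assumption and depends on your AMI:
- hosts: ec2hosts
  remote_user: ec2-user  # assumption: adjust to your AMI's default user
  tasks:
    - name: check connectivity to the new instance
      ping: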
Ultimately, launch ansible as follows:
export ANSIBLE_HOST_KEY_CHECKING=false
export AWS_ACCESS_KEY=<your aws access key here>
export AWS_SECRET_KEY=<your aws secret key here>
ansible-playbook -i new_hosts create_instance.yml
The purpose of the environment variable ANSIBLE_HOST_KEY_CHECKING=false is to avoid being prompted to add the ssh host key when connecting to the instance.
Note: boto needs to be installed on the machine that runs the above ansible command.
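If it is missing, it can typically be installed with pip:
pip install boto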
3. Use ansible's ec2 dynamic inventory
The EC2 dynamic inventory is comprised of two files, ec2.py and ec2.ini. In your particular case, I believe that your issue is due to the fact that ec2.py is unable to locate the ec2.ini file.
To solve your issue, copy ec2.py and ec2.ini to the same folder on the machine where you intend to run ansible, e.g. /etc/ansible/.
Pre Ansible 2.0 release (change the branch accordingly).
cd /etc/ansible
wget https://raw.githubusercontent.com/ansible/ansible/stable-1.9/plugins/inventory/ec2.py
wget https://raw.githubusercontent.com/ansible/ansible/stable-1.9/plugins/inventory/ec2.ini
chmod u+x ec2.py
For Ansible 2:
cd /etc/ansible
wget https://raw.githubusercontent.com/ansible/ansible/devel/contrib/inventory/ec2.py
wget https://raw.githubusercontent.com/ansible/ansible/devel/contrib/inventory/ec2.ini
chmod u+x ec2.py
Configure ec2.ini and run ec2.py, which should print a JSON-formatted list of hosts to stdout.
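As a rough illustration, a stripped-down ec2.ini could look like this; the file shipped with Ansible documents many more options:
[ec2]
regions = us-west-2
destination_variable = public_dns_name
vpc_destination_variable = ip_address
Then verify the inventory output with:
./ec2.py --list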
