I need to save two IPs to a variable in a vars file when launching EC2 instances; the IPs are used later during deployment.
This is how I am saving a single server IP:
- name: Save server public IP to vars file
  lineinfile:
    line: "server_public_ip{{':'}} {{ item.public_ip }}"
    dest: "{{ ansible_env.HOME }}/dynamic_ips_{{ ec2_environment }}"
  with_items: "{{ server.instances }}" # server is registered in a previous task
The output I have in dynamic_ips file is server_public_ip: xxx.xxx.xx.x
Now I have 2 servers launched and registered as servers.
I need to save this as server_public_ips: xxx.xx.x.xx, xxx.x.xx.x
I tried to declare an empty string and append ips to it, something like this, but I am getting errors.
- set_fact:
    ips: ""
- set_fact:
    ips: "{{ ips }} + {{ item.public_ip }}"
  with_items: "{{ servers.instances }}" # servers is registered in a previous task
- lineinfile:
    line: "server_public_ips{{':'}} {{ ips }}"
    dest: "{{ ansible_env.HOME }}/dynamic_ips_{{ ec2_environment }}"
I think it can be done using lineinfile insertafter and regex.
Finally, I need to use these IPs on a different server:
- name: Restrict access to the outside world
  command: iptables -A INPUT -s {{ item }} -j ACCEPT
  with_items: "{{ server_public_ips.split(',') }}" # grant access for each IP
- command: iptables -A INPUT -j DROP
- set_fact:
    ips: "{{ servers.instances | map(attribute='public_ip') | join(',') }}"
should work when servers.instances is a list; map(attribute='public_ip') extracts each instance's IP before joining, since joining the instance dicts directly would not produce usable output.
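Putting the pieces together, a minimal sketch that writes the joined IPs in a single task (task and variable names carried over from the question, not verified against a live run):

```yaml
- name: Save all server public IPs to the vars file
  lineinfile:
    # joins e.g. ["1.2.3.4", "5.6.7.8"] into "1.2.3.4,5.6.7.8"
    line: "server_public_ips: {{ servers.instances | map(attribute='public_ip') | join(',') }}"
    dest: "{{ ansible_env.HOME }}/dynamic_ips_{{ ec2_environment }}"
```

This avoids the set_fact loop entirely, since the join can be done in one expression.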
I am trying to get the inventory_hostname from a hosts group I created.
What I am really trying to achieve is copying a file with a new name to a remote server, where the new name is another remote server's hostname.
{{ inventory_hostname }} gives the IP address and {{ ansible_hostname }} returns the hostname as default variables, but I am stuck here:
- name: Copy File
  hosts: nagios
  become: yes
  tasks:
    - shell: cp /home/xxx/test.txt /home/xxx/{{ item }}.txt
      with_items: "{{ groups['dbs'] }}"
{{ item }} returns the IP address, but I want the hostname, not the IP, for this one. Any help would be appreciated.
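One way this is commonly handled is to look each group member's hostname up in hostvars; a sketch assuming facts have been gathered for the dbs hosts, so ansible_hostname is populated for each of them:

```yaml
- name: Copy File
  hosts: nagios
  become: yes
  tasks:
    - shell: cp /home/xxx/test.txt /home/xxx/{{ hostvars[item]['ansible_hostname'] }}.txt
      with_items: "{{ groups['dbs'] }}"
```

If facts are not gathered for the dbs hosts in the same run, a preceding play with hosts: dbs and gather_facts: yes would be needed to fill hostvars.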
I am limited to using the raw module in this task, which is what is causing the complications.
I simply want to grab a file from the local host, and scp it to the list of hosts in my inventory file. I don't want to have to set up ssh keys from all destination hosts to the source host, so I want the scp to go outwards, from source to list of destination hosts. I could do something along the lines of:
raw: "scp {{ file }} {{ user_id }}@{{ item }}:."
with_items: "{{ list_of_hosts }}"
but then I'd have to define the list of hosts in two places which, with a dynamic list, you don't want to be doing.
Is there any way to have an inventory file such as:
[src-host]
hostA
[dest-hosts]
host1
host2
host3
[dest-hosts:vars]
user_id="foo"
file="bar"
and a playbook such as:
- hosts: src-host
  tasks:
    - raw: "scp {{ file }} {{ user_id }}@{{ item }}"
      with_items: "{{ dest-hosts }}"
EDIT: To clarify based on comment 1, I may ONLY use the 'raw' module due to the limitations of the target (destination) host.
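Since the play already runs against src-host from the same inventory, the destination list is available through the groups magic variable, so it does not have to be defined twice. A sketch under the inventory above (the group vars are read via hostvars because the play runs on src-host, where file and user_id are not directly in scope):

```yaml
- hosts: src-host
  tasks:
    - raw: "scp {{ hostvars[item]['file'] }} {{ hostvars[item]['user_id'] }}@{{ item }}:."
      with_items: "{{ groups['dest-hosts'] }}"
```

This assumes hostA can resolve and reach host1..host3 by the names used in the inventory.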
I have a playbook that contains roles for localhost and roles for remote hosts.
In one of the localhost roles I set a fact called git_tag.
I want to use this fact in a template for the remote hosts.
I tried:
- name: Read Version
set_fact:
git_tag: "{{ package_json.stdout | from_json | json_query('version')}}"
delegate_to: "test-server"
But when Ansible reaches the role that reads the template that has {{ git_tag }} it says that git_tag is undefined.
I'm sure I'm doing something wrong. How can I do it?
You should use the hostvars magic variable:
{{ hostvars['localhost']['git_tag'] }}
You can use this:
{{ hostvars['test-server']['git_tag'] }}
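For example, if the fact is set in a play that runs on localhost, a later play against the remote hosts can read it through hostvars; a minimal sketch with a placeholder value standing in for the json_query result:

```yaml
- hosts: localhost
  tasks:
    - set_fact:
        git_tag: "1.2.3"  # stands in for the value parsed from package_json

- hosts: remote_hosts  # assumed group name
  tasks:
    - debug:
        msg: "Deploying tag {{ hostvars['localhost']['git_tag'] }}"
```

The key point is that set_fact attaches the fact to the host the task ran for, so it must be read back through that host's entry in hostvars.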
I have a playbook that spins up a new droplet on DigitalOcean using the core module built into Ansible:
- name: Provision droplet on DigitalOcean
  local_action: >
    digital_ocean
    state=present
    ssh_key_ids=1234
    name=mydroplet
    client_id=ABC
    api_key=ABC
    size_id=1
    region_id=2
    image_id=3
    wait_timeout=500
  register: my_droplet

- debug: msg="ID is {{ my_droplet.droplet.id }}"
- debug: msg="Your droplet has the IP address of {{ my_droplet.droplet.ip_address }}"
I run this using (note the local argument):
ansible-playbook playbooks/create_droplet.yml -c local -i playbooks/hosts
My hosts file initially looks like this:
[production]
TBA
[localhost]
localhost
When the above playbook finishes I can see the debug information in STDOUT:
ok: [localhost] => {
"msg": "Your droplet has the IP address of 255.255.255.255"
}
Is there any way for this playbook to retain the my_droplet.ip_address variable and save it over the TBA in the hosts file, instead of having to manually copy-paste it there? I ask because I want to add this provisioning playbook to a Ruby script that subsequently bootstraps the VPS with another playbook.
I am doing the same, having written a play that creates servers from a dict (approximately 53 servers at a go, dynamically creating a full test environment). To use a static hosts file, I add the following:
- name: Create in-memory inventory
  add_host:
    name: "{{ item.value.ServerName }}"
    groups: "{{ item.value.RoleTag }},tag_Environment_{{ env_name }}"
  when: item.value.template is defined
  with_dict: "{{ server_details }}"

- name: create file
  become: yes
  template:
    src: ansible-hosts.j2
    dest: "{{ wm_inventory_file }}"
The ansible-hosts.j2 template is simply:
{% for item in groups %}
[{{item}}]
{% for entry in groups[item] %}
{{ entry }}
{% endfor %}
{% endfor %}
I'm launching instances with ec2, and I originally wanted to do the same thing. I found some examples of using lineinfile and massaged them to do the same, like so:
- name: Launching new instances for {{ application }}
  local_action: >
    ec2
    group={{ security_group }}
    instance_type={{ instance_type }}
    image={{ image }}
    wait=true
    region={{ region }}
    keypair={{ keypair }}
    vpc_subnet_id={{ subnet }}
    count={{ instance_count }}
  register: ec2

- name: Add instance to local host group
  local_action: >
    lineinfile
    dest=ansible_hosts
    regexp="{{ item.public_ip }}"
    insertafter="\[{{ application }}\]"
    line="{{ item.public_ip }} ansible_ssh_private_key_file=~/ec2-keys/{{ keypair }}.pem"
    state=present
  with_items: "{{ ec2.instances }}"
But, I have to agree that it's generally not something you want to do. I found it to be quite a pain. I've since switched to using add_host and life is much better. BTW, application would be the value I used for the group name...
Is there any way for this playbook to retain the my_droplet.ip_address
variable and save the TBA in the hosts file instead of having to
manually copy-pasta it there?
You can retain the ip address of your new host by using the add_host module which allows you to dynamically change the in-memory inventory during an ansible-playbook run. This is useful for when you want to provision a new host and then configure it in a single playbook.
For example
local_action: >
  add_host
  hostname={{ my_droplet.droplet.ip_address }}
  groupname=launched
And then later in your playbook:
- name: Configure instance(s)
hosts: launched
tasks:
...
For the second part of your question:
... and save the TBA in the hosts file instead of having to
manually copy-pasta it there?
There is no built-in ansible way to write additions to the inventory file that is on disk. This is generally not something you want to do. In this case you would need to add it or use a dynamic inventory script to discover the host for future configuration runs.
You should use a dynamic inventory script for this. Using name to distinguish instances, you can subsequently refer to your droplets.
Check https://github.com/ansible/ansible/blob/devel/contrib/inventory/digital_ocean.py for an example script.
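Usage is then a matter of pointing -i at the script instead of a static file; a sketch (the playbook name here is assumed, and the script must be able to find your DigitalOcean API credentials, as described in its header comments):

```
# digital_ocean.py must be executable and configured with your API token
ansible-playbook -i digital_ocean.py playbooks/bootstrap_droplet.yml
```

Ansible runs the script, receives the current droplet list as JSON, and builds the inventory from it on every run, so newly provisioned droplets are picked up automatically.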
I'm using the ec2 module with ansible-playbook. I want to set a variable to the contents of a file. Here's how I'm currently doing it:
1. a var with the filename
2. a shell task to cat the file
3. use the result of the cat to pass to the ec2 module
Example contents of my playbook.
vars:
  amazon_linux_ami: "ami-fb8e9292"
  user_data_file: "base-ami-userdata.sh"
tasks:
  - name: user_data_contents
    shell: cat {{ user_data_file }}
    register: user_data_action
  - name: launch ec2-instance
    local_action:
      ...
      user_data: "{{ user_data_action.stdout }}"
I assume there's a much easier way to do this, but I couldn't find it while searching Ansible docs.
You can use lookups in Ansible in order to get the contents of a file, e.g.
user_data: "{{ lookup('file', user_data_file) }}"
Caveat: This lookup will work with local files, not remote files.
Here's a complete example from the docs:
- hosts: all
  vars:
    contents: "{{ lookup('file', '/etc/foo.txt') }}"
  tasks:
    - debug: msg="the value of foo.txt is {{ contents }}"
You can use the slurp module to fetch a file from the remote host: (Thanks to @mlissner for suggesting it)
vars:
  amazon_linux_ami: "ami-fb8e9292"
  user_data_file: "base-ami-userdata.sh"
tasks:
  - name: Load data
    slurp:
      src: "{{ user_data_file }}"
    register: slurped_user_data
  - name: Decode data and store as fact # you can skip this if you want to use the right-hand side directly
    set_fact:
      user_data: "{{ slurped_user_data.content | b64decode }}"
You can use the fetch module to copy files from remote hosts to the local machine, and the lookup plugin to read the content of the fetched files.
lookup only works on localhost. If you want to retrieve variables from a variables file you created remotely, use include_vars: {{ varfile }}. The contents of {{ varfile }} should be a dictionary of the form {"key": "value"}; you will find Ansible gives you trouble if you include a space after the colon.
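A minimal sketch of that fetch-then-include route (paths and variable names are assumed for illustration):

```yaml
- name: Fetch the remotely generated vars file to the control machine
  fetch:
    src: "{{ varfile }}"   # e.g. /tmp/dynamic_vars.yml on the remote host
    dest: "/tmp/fetched/"  # fetch stores it under dest/<hostname>/<src path>

- name: Load the fetched file as variables
  include_vars: "/tmp/fetched/{{ inventory_hostname }}{{ varfile }}"
```

The second task's path works because fetch prefixes the remote host's name and the full remote path under dest, keeping files from different hosts separate.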