I have the following block in my cloud-init configuration:
write_files:
  - path: /opt/data/conf.conf
    owner: root:root
    permissions: '0755'
    content: |
      [default]
      host = $HOSTNAME
I need to properly interpolate the variable $HOSTNAME when this runs on an EC2 instance.
I didn't find anything about that in the documentation, but I did find the EC2 datasource.
Any idea how to properly interpolate that value?
The quick idea I had is to write the file from an executable and interpolate from there (see the sketch below). Is that the only way?
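A minimal sketch of that fallback idea, not a confirmed cloud-init feature: keep a placeholder in write_files and fill it in from runcmd, which cloud-init runs after write_files. The placeholder name __HOSTNAME__ and the sed call are illustrative assumptions.
#cloud-config
write_files:
  - path: /opt/data/conf.conf
    owner: root:root
    permissions: '0755'
    # the content carries a plain placeholder instead of $HOSTNAME
    content: |
      [default]
      host = __HOSTNAME__
runcmd:
  # runcmd runs after write_files, so the placeholder can be filled in here
  - sed -i "s/__HOSTNAME__/$(hostname)/" /opt/data/conf.conf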
I have an inventory file which looks like below:
[abc]
dsad1.jkas.com
dsad2.jkas.com
[def]
dsad3.jkas.com
dsad4.jkas.com
[main:children]
abc
def
[main:vars]
ansible_user="{{ lookup('env', 'uid') }}"
ansible_password="{{ lookup('env', 'pwd') }}"
ansible_connection=paramiko
main.yaml looks like below:
---
- hosts: "{{ my_hosts }}"
  roles:
    - role: dep-main
      tags:
        - main
    - role: dep-test
      tags:
        - test
cat roles/dep-main/tasks/main.yaml
- name: run playbook
  script: path/scripts/dep-main.sh
I have a scripts folder containing dep-main.sh, and I use the script module to run the shell script on the remote machine.
ansible-playbook -i inventory -e "my_hosts=main" --tags main main.yaml
I am following the above design for a new requirement. The challenge is that I need to set environment variables for each host, and the variables differ per host. How can I achieve this?
There are around 15 env key-value pairs that need to be exported to each host, out of which 10 are common and will simply go in the shell script above, while the other 5 differ per host, like below.
dsad1.jkas.com
  sys=abc1
  cap=rty2
  jam=yup4
  pak=hyd4
  jum=563

dsad2.jkas.com
  sys=abc45
  cap=hju
  jam=upy
  pak=upsc
  jum=y78
Please help.
Please see the Ansible documentation for this, as it's rather self-explanatory:
https://docs.ansible.com/ansible/latest/user_guide/intro_inventory.html
I am editing my initial answer to try to make it clearer. But please note that I am assuming a few things here:
you are using Linux
you use bash as your shell (Linux offers a variety of shells, and the syntax differs between them)
when you say "export environment variables", I assume you mean copying the variables to the remote host in some form and exporting them to the shell
Also, there are a few things left to do, but I have to leave them to you:
You need to "load" the env file so the variables are available to the shell (export). You can add an extra line to ".bashrc" to source the file, like "source /etc/env_variables", but that will depend on how you want to use the vars. You can easily do that with your shell script or with Ansible (lineinfile); see the sketch after this list.
I recommend you use a separate file like I did rather than editing .bashrc to add all the variables. It will make your life easier in the future if you want to update them.
Rename the env file properly so you know what it's for, and add some backup mechanism - check the documentation https://docs.ansible.com/ansible/latest/collections/ansible/builtin/blockinfile_module.html
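A minimal sketch of the lineinfile approach mentioned above, assuming the env file is /etc/env_variables and that sourcing it from a per-user .bashrc fits your setup (the user name here is hypothetical):
- name: Source the env file from the user's .bashrc
  lineinfile:
    path: /home/myusername/.bashrc   # hypothetical user; adjust per host
    line: "source /etc/env_variables"
    state: present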
Inventory file with the variables:
all:
  hosts:
    <your_host_name1>:
      sys: abc1
      cap: rty2
      jam: yup4
      pak: hyd4
      jum: 563
    <your_host_name2>:
      sys: abc45
      cap: hju
      jam: upy
      pak: upsc
      jum: y78
My very simplistic playbook:
- hosts: all
  become: yes
  become_method: sudo
  tasks:
    - name: Create env file and add variables
      blockinfile:
        path: /etc/env_variables
        create: yes
        block: |
          export sys={{ sys }}
          export cap={{ cap }}
          export jam={{ jam }}
          export pak={{ pak }}
          export jum={{ jum }}
The final result on the servers I used for testing is the following:
server1:
# BEGIN ANSIBLE MANAGED BLOCK
export sys=abc1
export cap=rty2
export jam=yup4
export pak=hyd4
export jum=563
# END ANSIBLE MANAGED BLOCK
server2:
# BEGIN ANSIBLE MANAGED BLOCK
export sys=abc45
export cap=hju
export jam=upy
export pak=upsc
export jum=y78
# END ANSIBLE MANAGED BLOCK
Hope this is what you are asking for.
For Windows, you may have to find where the user variables are set and adjust the playbook to edit that file, if they are stored in a text file...
Hey guys,
After a few days of struggling, I've decided to write up my issue here.
I have an Ansible (2.7) task that has a single variable, which points to a host var that uses the file lookup plugin.
The thing is that this works for one host, but I have 6 hosts, and a value inside the looked-up file should be different for each of them.
Can you pass a variable inside the file that is looked up?
I'm new to ansible and don't master it fully.
Has someone encountered this before?
Task:
- name: Copy the file to its directory
  template:
    src: file.conf
    dest: /path/to/file
  vars:
    file_contents: "{{ file_configuration }}"
-----
hostvar file:
file_configuration:
  - "{{ lookup('file', './path/to/file') | from_yaml }}"
----
file that is looked up:
name: {{ value that should be different per host }}
driver:
  long_name: unchanged value.
You should have 6 host_vars files, one for each host. In that host_vars file, set your desired value.
Ansible documentation is here
E.g.
https://imgur.com/a/JCbnNBT
Content of host1.yml
---
my_value: something
Content of host2.yml
---
my_value: else
Ansible automagically sees the host_vars folder. It looks in that folder and searches for files which exactly match a host in the play.
So ensure your host_vars/filename.yml matches the hostname in your play!
If there is a match, then it'll use that .yml file for that specific host.
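Since the task uses the template module, the per-host value can then be referenced directly in the template source. A sketch of file.conf, assuming the variable name my_value from the host_vars example above:
name: {{ my_value }}
driver:
  long_name: unchanged value.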
I ran into a configuration problem when coding an Ansible playbook for SSH private key files. In static Ansible inventories, I can define combinations of host servers, IP addresses, and related SSH private keys - but I have no idea how to define those with dynamic inventories.
For example:
---
- hosts: tag_Name_server1
  gather_facts: no
  roles:
    - role1

- hosts: tag_Name_server2
  gather_facts: no
  roles:
    - roles2
I use the below command to call that playbook:
ansible-playbook test.yml -i ec2.py --private-key ~/.ssh/SSHKEY.pem
My questions are:
How can I define ~/.ssh/SSHKEY.pem in Ansible files rather than on the command line?
Is there a parameter in playbooks (like gather_facts) to define which private keys should be used for which hosts?
If there is no way to define private keys in files, what should be called on the command line when different keys are used for different hosts in the same inventory?
TL;DR: Specify key file in group variable file, since 'tag_Name_server1' is a group.
Note: I'm assuming you're using the EC2 external inventory script. If you're using some other dynamic inventory approach, you might need to tweak this solution.
This is an issue I've been struggling with, on and off, for months, and I've finally found a solution, thanks to Brian Coca's suggestion here. The trick is to use Ansible's group variable mechanisms to automatically pass along the correct SSH key file for the machine you're working with.
The EC2 inventory script automatically sets up various groups that you can use to refer to hosts. You're using this in your playbook: in the first play, you're telling Ansible to apply 'role1' to the entire 'tag_Name_server1' group. We want to direct Ansible to use a specific SSH key for any host in the 'tag_Name_server1' group, which is where group variable files come in.
Assuming that your playbook is located in the 'my-playbooks' directory, create files for each group under the 'group_vars' directory:
my-playbooks
|-- test.yml
+-- group_vars
|-- tag_Name_server1.yml
+-- tag_Name_server2.yml
Now, any time you refer to these groups in a playbook, Ansible will check the appropriate files, and load any variables you've defined there.
Within each group var file, we can specify the key file to use for connecting to hosts in the group:
# tag_Name_server1.yml
# --------------------
#
# Variables for EC2 instances named "server1"
---
ansible_ssh_private_key_file: /path/to/ssh/key/server1.pem
Now, when you run your playbook, it should automatically pick up the right keys!
Using environment vars for portability
I often run playbooks on many different servers (local, remote build server, etc.), so I like to parameterize things. Rather than using a fixed path, I have an environment variable called SSH_KEYDIR that points to the directory where the SSH keys are stored.
In this case, my group vars files look like this, instead:
# tag_Name_server1.yml
# --------------------
#
# Variables for EC2 instances named "server1"
---
ansible_ssh_private_key_file: "{{ lookup('env','SSH_KEYDIR') }}/server1.pem"
Further Improvements
There's probably a bunch of neat ways this could be improved. For one thing, you still need to manually specify which key to use for each group. Since the EC2 inventory script includes details about the keypair used for each server, there's probably a way to get the key name directly from the script itself. In that case, you could supply the directory the keys are located in (as above), and have it choose the correct keys based on the inventory data.
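As a rough, untested sketch of that idea: assuming the EC2 inventory script exposes ec2_key_name for each host and that your key files are named after the keypair, a single group_vars/all.yml could derive the key path from the inventory data together with the SSH_KEYDIR variable from above:
# group_vars/all.yml
---
ansible_ssh_private_key_file: "{{ lookup('env','SSH_KEYDIR') }}/{{ ec2_key_name }}.pem"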
The best solution I could find for this problem is to specify the private key file in ansible.cfg (I usually keep it in the same folder as the playbook):
[defaults]
inventory=ec2.py
vault_password_file = ~/.vault_pass.txt
host_key_checking = False
private_key_file = /Users/eric/.ssh/secret_key_rsa
Though, it still sets the private key globally for all hosts in the playbook.
Note: you have to specify the full path to the key file - ~user/.ssh/some_key_rsa is silently ignored.
You can simply define the key to use directly when running the command:
ansible-playbook \
\ # Super verbose output incl. SSH-Details:
-vvvv \
\ # The Server to target: (Keep the trailing comma!)
-i "000.000.0.000," \
\ # Define the key to use:
--private-key=~/.ssh/id_rsa_ansible \
\ # The `env` var is needed if `python` is not available:
-e 'ansible_python_interpreter=/usr/bin/python3' \
\ # Dry run:
--check \
deploy.yml
Copy/paste:
ansible-playbook -vvvv --private-key=/Users/you/.ssh/your_key deploy.yml
I'm using the following configuration:
# site.yml:
- name: Example play
  hosts: all
  remote_user: ansible
  become: yes
  become_method: sudo
  vars:
    ansible_ssh_private_key_file: "/home/ansible/.ssh/id_rsa"
I had a similar issue and solved it with a patch to ec2.py plus some configuration parameters added to ec2.ini. The patch takes the value of ec2_key_name, prefixes it with ssh_key_path, appends ssh_key_suffix, and writes the result out as ansible_ssh_private_key_file.
The following variables have to be added to ec2.ini in a new 'ssh' section (this is optional if the defaults match your environment):
[ssh]
# Set the path and suffix for the ssh keys
ssh_key_path = ~/.ssh
ssh_key_suffix = .pem
Here is the patch for ec2.py:
204a205,206
> 'ssh_key_path': '~/.ssh',
> 'ssh_key_suffix': '.pem',
422a425,428
> # SSH key setup
> self.ssh_key_path = os.path.expanduser(config.get('ssh', 'ssh_key_path'))
> self.ssh_key_suffix = config.get('ssh', 'ssh_key_suffix')
>
1490a1497
> instance_vars["ansible_ssh_private_key_file"] = os.path.join(self.ssh_key_path, instance_vars["ec2_key_name"] + self.ssh_key_suffix)
I've figured out how to use the shell module to create a mount on the network using the following command:
- name: mount image folder share
  shell: "mount -t cifs -o domain=MY_DOMAIN,username=USER,password=PASSWORD //network_path/folder /local_path/folder"
  sudo_user: root
  args:
    executable: /bin/bash
But it seems like it's better practice to use Ansible's mount module to do the same thing.
Specifically, I'm confused about going about providing the options for domain=MY_DOMAIN,username=USER,password=PASSWORD. I see there is a flag for opts, but I'm not quite sure how this would look.
Expanding on @adam-kalnas's answer:
Creating an fstab entry and then calling mount allows you to mount the filesystem with a more appropriate permission level (i.e. not 0777). With state: present, Ansible will only create the entry in /etc/fstab, and the share can then be mounted by the user. From the Ansible mount module documentation:
absent and present only deal with fstab but will not affect current
mounting.
Additionally, you'd want to avoid leaving credentials in the /etc/fstab file (as it's readable by all, human or service). To avoid that, create a credentials file that is appropriately secured.
All together it looks something like this:
- name: Create credential file (used for fstab entry)
  copy:
    content: |
      username=USER
      password=PASSWORD
    dest: /home/myusername/.credential
    mode: 0600
  become: true
  become_user: myusername

- name: Create fstab entry for product image folder share
  mount:
    state: present
    fstype: cifs
    opts: "credentials=/home/myusername/.credential,file_mode=0755,dir_mode=0755,user"
    src: "//network_path/folder"
    path: "/local_path/folder"
  become: true

- name: Mount product image folder share
  shell: |
    mount "/local_path/folder"
  become: true
  become_user: myusername
Here's the command I ended up going with:
- name: mount product image folder share
  mount: state="present"
         fstype="cifs"
         opts="domain=MY_DOMAIN,username=USER,password=PASSWORD,file_mode=0777,dir_mode=0777"
         src="//network_path/folder"
         name="/local_path/folder"
  sudo_user: root
  sudo: yes
A few notes about it:
I don't think the file_mode=0777,dir_mode=0777 should have to be required, but in my situation it was needed in order for me to have write access to the folder. I could read the folder without specifying the permissions, but I couldn't write to it.
This snippet is required right now because of this Ansible bug. I tested this on 1.9.0 and 1.9.1, and it was an issue in both versions.
sudo_user: root
sudo: yes
I have not personally used the mount module, but from the documentation it looks like you'd want to do something like:
- name: mount image folder share
  mount: fstype="cifs"
         opts="domain=MY_DOMAIN,username=USER,password=PASSWORD"
         src="//network_path/folder"
         name="/local_path/folder"
I'd give that a try, and run it using -vvvv to see all the ansible output. If it doesn't work then the debug output should give you a pretty decent idea of how to proceed.
When using cloud init's #cloud-config to create configuration files, how would I go about using variables to populate values?
In my specific case I'd like to autostart EC2 instances as preconfigured salt minions. Example of salt minion cloud config
Say I'd like to get the specific EC2 instances id and set that as the salt minion's id.
How would I go about it setting the value dynamically for each instance?
In a boot command (bootcmd) you can use the environment variable $INSTANCE_ID and save it for later usage. See http://cloudinit.readthedocs.org/en/latest/topics/examples.html#run-commands-on-first-boot
For example, I do something like below:
#cloud-config
bootcmd:
  - echo $INSTANCE_ID > /hello.txt
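Applied to the salt minion id from the question, a sketch of the same approach (assuming the minion reads its id from the default /etc/salt/minion_id location) could be:
#cloud-config
bootcmd:
  - echo $INSTANCE_ID > /etc/salt/minion_id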
The closest I've seen to configurable variables is [Instance Metadata](https://cloudinit.readthedocs.io/en/latest/topics/instancedata.html).
It says you can use:
userdata provided at instance creation
You can use data created in /run/cloud-init/instance-data.json.
You can import this instance data using Jinja templates in your YAML cloud-config to pull in this data:
## template: jinja
#cloud-config
runcmd:
  - echo 'EC2 public hostname allocated to instance: {{ ds.meta_data.public_hostname }}' > /tmp/instance_metadata
  - echo 'EC2 availability zone: {{ v1.availability_zone }}' >> /tmp/instance_metadata
  - curl -X POST -d '{"hostname": "{{ ds.meta_data.public_hostname }}", "availability-zone": "{{ v1.availability_zone }}"}' https://example.com
But I'm not exactly sure how you create the /run/cloud-init/instance-data.json file.
This CoreOS issue suggests that if you put variables into /etc/environment then you can use those.
I know, for example, that there are a few variables used, such as $MIRROR, $RELEASE, and $INSTANCE_ID for the phone_home module.
Try the ec2-metadata tool (it just queries the EC2 metadata). Say, put the following in your instance's userdata:
wget http://s3.amazonaws.com/ec2metadata/ec2-metadata
chmod u+x ec2-metadata
# The following gives you the instance id and you can pass it to your salt minion config
ec2-metadata -i
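# Sketch of the "pass it to your salt minion config" step (my assumption, not part of
# the original snippet): write the id into /etc/salt/minion_id, the default file the
# salt minion reads its id from. The cut assumes "ec2-metadata -i" prints "instance-id: i-xxxx".
ec2-metadata -i | cut -d ' ' -f 2 > /etc/salt/minion_id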
More info on the ec2-metadata script here