Vagrant ansible: pick var from environment variable

Here is the ansible_local-related part of my Vagrantfile:
config.vm.provision "ansible_local" do |ansible|
  ansible.become = true
  ansible.inventory_path = '/vagrant/provisioning/inventory/hosts.ini'
  ansible.playbook = "/vagrant/provisioning/playbook.yml"
  ansible.limit = 'all'
  ansible.galaxy_role_file = "/vagrant/provisioning/requirements.yml"
  ansible.galaxy_roles_path = "/etc/ansible/roles"
  ansible.galaxy_command = "sudo ansible-galaxy install --role-file=%{role_file} --roles-path=%{roles_path} --force"
end
As you can see, ansible.limit is all. My project directory layout is:
├── ansible.cfg
├── provisioning
│   ├── group_vars
│   │   └── all.yml
│   ├── inventory
│   │   ├── hosts.ini
│   │   └── hosts.yml
│   ├── playbook.yml
│   └── requirements.yml
└── Vagrantfile
The content of all.yml is:
solr_cores:
mssql_restore_backups: false
I need to override the default value of mssql_restore_backups, picking it up from an environment variable.
Is there any way to pass an environment variable's value to the Ansible provisioner?
Any ideas?

In Ansible, the variables with the highest precedence are extra vars, and you can add them to your Vagrantfile as below:
ansible.extra_vars = {
  mssql_restore_backups: ENV['MSSQLRESTOREBACKUP']
}
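If you want a fallback when the variable is unset, Ruby's ENV.fetch with a default is one option. A minimal sketch (the 'false' default mirrors the value in all.yml; note the value reaches Ansible as a string unless you convert it in Ruby first):

ansible.extra_vars = {
  mssql_restore_backups: ENV.fetch('MSSQLRESTOREBACKUP', 'false')
}

Then set the variable when provisioning:

MSSQLRESTOREBACKUP=true vagrant provision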
Documentation:
https://docs.ansible.com/ansible/latest/user_guide/playbooks_variables.html#understanding-variable-precedence
https://www.vagrantup.com/docs/provisioning/ansible_common#extra_vars

Related

Ansible - variable for a specific inventory

I have a multi-environment, multi-inventory setup with Ansible (2.7.9).
For one of the inventories, I want to set a global variable that is inherited by all the hosts within that inventory. For this purpose I added the variable to that specific inventory file (inventory/production/prodinv):
[all:vars]
myvar="True"
This works fine if I run Ansible against that specific inventory (inventory/production/prodinv). However, if I run Ansible against the inventory directory (e.g. inventory/production), the variable is inherited by all the hosts across all the inventories, which isn't ideal because I only want the hosts within the prodinv inventory to have the var defined.
Currently, group_vars and host_vars are symlinks (for all the inventories) to a "shared" root group_vars and host_vars.
To add more clarity to my question, below is the structure of my ansible:
.
├── ansible.cfg
├── playbooks/
├── roles/
├── inventory/
│   ├── group_vars/
│   ├── host_vars/
│   ├── tnd/
│   │   ├── group_vars/ -> ../group_vars
│   │   ├── host_vars/ -> ../host_vars
│   │   └── devinv
│   └── production/
│       ├── group_vars/ -> ../group_vars
│       ├── host_vars/ -> ../host_vars
│       └── prodinv
└── ...
I'm not sure how or where to define this var so it applies to all hosts/groups within a particular inventory, without running into the same issue. Ideas?
Thanks,
J
I think your problem is two-fold.
Ansible applies the group_vars of a directory to all files and subdirectories within the specified inventory directory. So, inventory/production/group_vars will get applied to everything within inventory/production. This just gets masked when you explicitly limit your inventory further while running, like you did (-i inventory/production/prodinv).
This means you need to put the group_vars that should apply only to prodinv in their own directory, not in the inventory/production directory; for example, inventory/production/prodinv/group_vars (see the sketch below).
Your symlinks are set up in a way that if you run against inventory, you're going to have the same group_vars applied to all your inventories. You're not hitting this in your example, but you'll likely hit it in the future.
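One possible restructuring, as a sketch (prodinv becomes a directory; the inner file name hosts is illustrative):

inventory/
└── production/
    └── prodinv/
        ├── hosts              # the former prodinv file, with [all:vars] removed
        └── group_vars/
            └── all.yml        # myvar: "True" now applies only to this inventory

Running against -i inventory/production/prodinv then picks up myvar, while runs against the other inventories do not.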

Ansible group vars on specific host

I'm trying to push instance-specific configuration with group_vars, but the push only happens for one instance (the one from aa.yml); the push for the bb.yml inventory never happens. I have used group_vars successfully before, but not with this Ansible configuration.
- name: Push conf
  uri:
    url: "https://xxx{{ instance_id }}"
    method: POST
    status_code: [201]
    headers:
      Content-Type: application/json
    body_format: json
    body: "{\"server\":{{ server }},\"labels\":{{{ site }}},\"name\":\"{{ instance.value.name }}\"}"
    return_content: true
  vars:
    instance: "{{ item }}"
  loop: "{{ instances }}"
inventory/host/group_vars/aa/aa.yml
site: "\"aa\""
instance_id: "06a56590"
server: "[\"server1\"]"
inventory/host/group_vars/bb/bb.yml
site: "\"bb\""
instance_id: "bcc37660"
server: "[\"server2\"]"
inventory/host/000_hosts
[host]
server1
server2
The command:
ansible-playbook task.yml -i inventory/host/000_hosts --extra-vars "target=host"
Here is an answer:
group_vars/XXX directories typically refer to groups defined in your inventory, and they contain variables available only to that group. In your case you created directories for the groups aa and bb, but these groups do not exist in your inventory. When you call your playbook referring to your hosts (- hosts: host), Ansible will look for group variables related to that group, which in this case do not exist.
As you will see in my suggestion below, by using the keyword children in your inventory you are basically saying: the hosts defined in the groups aa/bb are children of the group host (the parent), and the variables follow (see inheriting-variable-values-group-variables-for-groups-of-groups).
Changing your inventory to the following should solve the problem:
inventory/host/hosts
[aa]
server1
[bb]
server2
[host:children]
aa
bb
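To check the resulting group hierarchy before running the playbook, ansible-inventory can print it (output abbreviated):

ansible-inventory -i inventory/host/hosts --graph

@all:
  |--@host:
  |  |--@aa:
  |  |  |--server1
  |  |--@bb:
  |  |  |--server2
  |--@ungrouped: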
You could also change your directory structure to something like:
inventory/
├── group_vars
│   ├── aa
│   │   └── aa.yml
│   └── bb
│       └── bb.yml
└── hosts
Edit:
However, if I'm not mistaken, your hosts directory (in inventory/hosts) is typically used to identify your environment, like:
Multistage environment Ansible
.
├── ansible.cfg
├── environments/              # Parent directory for our environment-specific directories
│   ├── dev/                   # Contains all files specific to the dev environment
│   │   ├── group_vars/        # dev specific group_vars files
│   │   │   ├── all
│   │   │   ├── db
│   │   │   └── web
│   │   └── hosts              # Contains only the hosts in the dev environment
│   ├── prod/                  # Contains all files specific to the prod environment
│   │   ├── group_vars/        # prod specific group_vars files
│   │   │   ├── all
│   │   │   ├── db
│   │   │   └── web
│   │   └── hosts              # Contains only the hosts in the prod environment
│   └── stage/                 # Contains all files specific to the stage environment
│       ├── group_vars/        # stage specific group_vars files
│       │   ├── all
│       │   ├── db
│       │   └── web
│       └── hosts              # Contains only the hosts in the stage environment
├── playbook.yml
└── ...
Take a look at organizing-host-and-group-variables

How do you manage per env config file using ansible?

I'm using Ansible to install Apache. Currently I have multiple httpd.conf files (test/dev/staging/production) in the Ansible repository; most of the content is the same except for some environment-specific settings.
Is it possible to use one httpd.conf template file and modify it when sending httpd.conf to the remote server?
Yes, you can, with Jinja2 and group_vars.
In your templates/ folder, create a file like this:
templates/http.conf.j2
Say you have something like this in there:
NameVirtualHost *:80
<VirtualHost *:80>
    ServerName {{ subdomain }}.{{ domain }}
    ServerAlias www.{{ subdomain }}.{{ domain }}
</VirtualHost>
Your layout should look like this:
├── group_vars
│   ├── all
│   │   └── config
│   ├── dev
│   │   └── config
│   └── test
│       └── config
├── inventory
│   ├── dev
│   │   └── hosts
│   └── test
│       └── hosts
├── site.yml
└── templates
    └── http.conf.j2
In group_vars/all you would have domain: "example.com"
In group_vars/dev you would have subdomain: dev
In group_vars/test you would have subdomain: test
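Spelled out, those three files could look like this (a sketch; the config files are plain YAML):

group_vars/all/config:
domain: "example.com"

group_vars/dev/config:
subdomain: dev

group_vars/test/config:
subdomain: test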
In your task, you'd have your Ansible template command, i.e.:
- hosts: all
  tasks:
    - name: Copy http conf
      template:
        dest: /etc/apache2/http.conf
        src: templates/http.conf.j2
        owner: root
        group: root
And run your playbook like this:
ansible-playbook -i inventory/test site.yml
The file should end up on the host looking like this:
NameVirtualHost *:80
<VirtualHost *:80>
    ServerName test.example.com
    ServerAlias www.test.example.com
</VirtualHost>

Passing variables to ansible roles

My directory structure looks like this:
└── digitalocean
    ├── README.md
    ├── play.yml
    └── roles
        ├── bootstrap_server
        │   └── tasks
        │       └── main.yml
        ├── create_new_user
        │   └── tasks
        │       └── main.yml
        ├── update
        │   └── tasks
        │       └── main.yml
        └── vimserver
            ├── files
            │   └── vimrc_server
            └── tasks
                └── main.yml
When creating a user in the create_new_user role, I was hard-coding the user name:
---
- name: Creating a user named username on the specified web server.
  user:
    name: username
    state: present
    shell: /bin/bash
    groups: admin
    generate_ssh_key: yes
    ssh_key_bits: 2048
    ssh_key_file: .ssh/id_rsa

- name: Copy .ssh/id_rsa from host box to the remote box for user username
  become: true
  copy:
    src: ~/.ssh/id_rsa.pub
    dest: /home/username/.ssh/authorized_keys
    mode: 0600
    owner: username
    group: username
One way of solving this might be to create a vars/main.yml and put the username there, but I wanted a way to specify the username at the play.yml level, since I also use the username in the vimserver role.
I call the roles from play.yml:
---
- hosts: testdroplets
  roles:
    - update
    - bootstrap_server
    - create_new_user
    - vimserver
Would a template work in this case? I couldn't find much in the related SO questions.
I got it working by putting the following in play.yml:
---
- hosts: testdroplets
  roles:
    - update
    - bootstrap_server
    - role: create_new_user
      username: username
    - role: vimserver
      username: username
Although I'd love to see a different approach than this.
Docs: http://docs.ansible.com/ansible/playbooks_roles.html#roles
EDIT
I finally settled on a directory structure like this:
$ tree
.
├── README.md
├── ansible.cfg
├── play.yml
└── roles
    ├── bootstrap_server
    │   └── tasks
    │       └── main.yml
    ├── create_new_user
    │   ├── defaults
    │   │   └── main.yml
    │   └── tasks
    │       └── main.yml
    ├── update
    │   └── tasks
    │       └── main.yml
    └── vimserver
        ├── defaults
        │   └── main.yml
        ├── files
        │   └── vimrc_server
        └── tasks
            └── main.yml
Here I create a defaults/main.yml file inside each role that needs {{ username }}.
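A minimal roles/create_new_user/defaults/main.yml could then look like this (the default value is illustrative):

---
username: username

Role defaults have the lowest variable precedence, so a username passed at the play.yml level still wins.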
If someone is interested in the code,
https://github.com/tasdikrahman/ansible-bootstrap-server
You should be able to put username in a vars entry in play.yml.
Variables can also be split out into separate files.
Here is an example which shows both options:
- hosts: all
  vars:
    favcolor: blue
  vars_files:
    - /vars/external_vars.yml
  tasks:
    - name: this is just a placeholder
      command: /bin/echo foo
https://docs.ansible.com/ansible/playbooks_variables.html#variable-file-separation
Ansible seems to delight in having different ways to do the same thing, without having either a nice comprehensive reference or a rationale discussing the full implications of each approach :). If you didn't remember the above was possible (I'd completely forgotten vars_files), the easiest option to find in the documentation might have been a third way, which is the most sophisticated one.
There's a prominent recommendation for ansible-examples. You can see a group_vars directory, with files which automatically provide values for hosts according to their groups, including the magic all group. The group_vars directory can be placed in the same directory as the playbook.
https://github.com/ansible/ansible-examples/tree/master/lamp_simple
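In this question's terms, that could look like the following (a sketch; the file name and value are illustrative):

.
├── play.yml
└── group_vars
    └── all.yml        # contains: username: my_username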
Maybe this is what you want?
---
- hosts: testdroplets
  roles:
    - update
    - bootstrap_server
    - { role: create_new_user, username: 'foobar' }
    - vimserver
https://docs.ansible.com/ansible/2.5/user_guide/playbooks_reuse_roles.html#using-roles
If you use include_role, variables can be passed as shown below:
- hosts: all_hosts
  tasks:
    - include_role:
        name: "path/to/role"
      vars:
        var1: "var1_value"
        var2: "var2_value"
Can't you just pass the variable from the command line with the -e parameter? That way you can specify the variable before execution. This also results in the strongest variable declaration, which always takes precedence (see variable precedence).
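For example, with the play.yml from the question (the value is illustrative):

ansible-playbook play.yml -e username=my_username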
If you want to place it inside your playbook I suggest defining the username with the set_fact directive in the playbook. This variable is then available in all roles and included playbooks as well. Something like:
---
- hosts: testdroplets
  pre_tasks:
    - set_fact:
        username: my_username
  roles:
    - update
    - bootstrap_server
    - create_new_user
    - vimserver
It is all here: http://docs.ansible.com/ansible/playbooks_variables.html
While there are already some good answers, I wanted to add mine because I've done this exact thing.
Here is the role I wrote: https://github.com/jmalacho/ansible-examples/tree/master/roles/users
I use Ansible's hash-merging behaviour (hash_behaviour = merge in ansible.cfg) together with group_vars to build a users: dictionary of keys and groups, so that adding a new user per host or per environment and re-running is easy.
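For reference, the relevant ansible.cfg setting would be:

[defaults]
hash_behaviour = merge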
I also wrote up how my team uses group variables for environments: https://www.coveros.com/ansible-environment-design/

How can I get a more clear name for Vagrant machine

I am working with Vagrant and it is a wonderful tool for virtualized environments, but I have a doubt about how VirtualBox names the created VM.
I have this Vagrantfile configuration
# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure("2") do |config|
  config.vm.hostname = "ubuntu"
  config.vm.box = "ubuntu-12.04"
  config.vm.network :private_network, ip: "33.33.33.13"
  config.berkshelf.enabled = true
end
When I run vagrant up, the VirtualBox provider creates my machine with a different name.
roberto#rcisla-pc:~/myface$ VBoxManage list vms
"myface_default_1390750040" {eed39077-32da-44e5-961f-2bb772a2bf31}
Why does VirtualBox create the machine with the name myface_default_1390750040 when I configured a different name in the Vagrantfile, in this case ubuntu-12.04?
This is my myface cookbook structure
.
├── attributes
│   └── default.rb
├── Berksfile
├── Berksfile.lock
├── chefignore
├── definitions
├── files
│   └── default
├── Gemfile
├── libraries
├── LICENSE
├── metadata.rb
├── providers
├── README.md
├── recipes
│   └── default.rb
├── resources
├── templates
│   └── default
├── test
│   └── integration
│       └── default
│           └── serverspec
│               ├── default
│               │   ├── test_spec.rb
│               │   └── demo_spec.rb
│               └── spec_helper.rb
├── Thorfile
└── Vagrantfile
I don't understand why VirtualBox takes the name from the cookbook directory rather than using ubuntu-12.04.
I am using
Vagrant 1.3.5
VirtualBox 4.3.5 r91406
You can add a config.vm.define block in your Vagrantfile.
E.g.:
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.synced_folder ".", "/vagrant", type: "nfs"

  config.vm.define "{YOUR NICE NAME HERE}" do |web|
    web.vm.box = "CentOs"
    web.vm.box_url = "http://developer.nrel.gov/downloads/vagrant-boxes/CentOS-6.4-x86_64-v20130731.box"
    web.vm.hostname = 'dev.local'
    web.vm.network :forwarded_port, guest: 90, host: 9090
    web.vm.network :private_network, ip: "22.22.22.11"

    web.vm.provision :puppet do |puppet|
      puppet.manifests_path = "puppet/manifests"
      puppet.manifest_file = "web.pp"
      puppet.module_path = "puppet/modules"
      puppet.options = ["--verbose", "--hiera_config /vagrant/hiera.yaml", "--parser future"]
    end
  end
end
This way you will see your defined name when you check the list in VirtualBox. But remember that Vagrant will still append a random string to the VM's name, since it needs to identify the correct machine when running multiple VMs from the same Vagrantfile.
The default VM name generated by Vagrant uses your folder name and a timestamp: FOLDER_default_TIMESTAMP. To name your VM in VirtualBox, add the following code to your Vagrantfile:
config.vm.provider :virtualbox do |vb|
  vb.name = 'myhost'
end
The hostname setting controls how the ssh prompt is displayed:
config.vm.hostname = 'myhost'
To use the same name in VirtualBox and in the ssh prompt, add the following code:
config.vm.hostname = 'myhost'
config.vm.provider :virtualbox do |vb|
  vb.name = config.vm.hostname
end
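After vagrant up with this configuration, the VirtualBox list should show the configured name (UUID omitted here):

VBoxManage list vms
"myhost" {...}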
I am using VirtualBox 5.1.18 and Vagrant 1.9.2.
