My current ansible structure looks something like this:
- inventory
  - prod
    - prod1
      - hosts.yml
      - group_vars
        - all.yml
    - prod2
      - hosts.yml
      - group_vars
        - all.yml
    - prod3
      - hosts.yml
      - group_vars
        - all.yml
  - nonprod
    - dev
      - hosts.yml
      - group_vars
        - all.yml
    - qa
      - hosts.yml
      - group_vars
        - all.yml
    - uat
      - hosts.yml
      - group_vars
        - all.yml
- roles
- main.yml (this isn't accurate, just a sample playbook for the question)
I'd like to be able to run something like this: ansible-playbook main.yml -i inventory/prod and have it automatically cycle through the environments (each with distinct group_vars values).
Currently the command finds the hosts in each environment but not the vars, which stops the playbook from running; if I run ansible-playbook main.yml -i inventory/prod/prod1 it runs fine.
I would suggest restructuring your inventory. Ansible looks for the group_vars directory adjacent to your inventory files. If you run with ansible -i inventory ..., it won't find the group_vars file (it will only find it when running e.g. ansible -i inventory/prod/prod1).
Consider a layout like this:
inventory/
├── group_vars
│   ├── prod1.yaml
│   ├── prod2.yaml
│   └── prod3.yaml
└── prod
    ├── prod1.yaml
    ├── prod2.yaml
    └── prod3.yaml
Where each inventory file places hosts into a similarly named hostgroup. E.g., inventory/prod/prod1.yaml contains:
all:
  children:
    prod1:
      hosts:
        prod1-node0:
        prod1-node1:
        prod1-node2:
If we have a variable defined with a different value for each group:
$ grep . inventory/group_vars/*
inventory/group_vars/prod1.yaml:location: datacenter1
inventory/group_vars/prod2.yaml:location: datacenter2
inventory/group_vars/prod3.yaml:location: datacenter3
And a playbook like this:
- hosts: all
  gather_facts: false
  tasks:
    - debug:
        var: location
We can run it against all the hosts (I'm only using two groups here, prod1 and prod2, in order to keep the output shorter):
$ ansible-playbook playbook.yaml -i inventory
TASK [debug] ********************************************************************************************
ok: [prod1-node0] => {
    "location": "datacenter1"
}
ok: [prod1-node1] => {
    "location": "datacenter1"
}
ok: [prod1-node2] => {
    "location": "datacenter1"
}
ok: [prod2-node0] => {
    "location": "datacenter2"
}
ok: [prod2-node1] => {
    "location": "datacenter2"
}
ok: [prod2-node2] => {
    "location": "datacenter2"
}
Or you can run the playbook against a specific group:
$ ansible-playbook playbook.yaml -i inventory -l prod2
TASK [debug] ********************************************************************************************
ok: [prod2-node0] => {
    "location": "datacenter2"
}
ok: [prod2-node1] => {
    "location": "datacenter2"
}
ok: [prod2-node2] => {
    "location": "datacenter2"
}
In each case, hosts will use the values from the appropriate group_vars file based on their hostgroup.
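If you prefer to keep the prod/nonprod split from your original tree and run ansible-playbook main.yml -i inventory/prod (or -i inventory/nonprod) against a whole family of environments, the same idea works one level down. A sketch for the nonprod side, with the group names assumed from your question:
inventory/nonprod/
├── group_vars
│   ├── dev.yaml
│   ├── qa.yaml
│   └── uat.yaml
├── dev.yaml
├── qa.yaml
└── uat.yaml
Each of dev.yaml, qa.yaml and uat.yaml places its hosts into a dev, qa or uat group (like the prod1.yaml example above). Because group_vars sits next to those inventory files, it is found when you pass -i inventory/nonprod, and you can still narrow the run down with -l dev.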
Related
I would like to know how to target a playbook at specific hosts within the inventory file, selected by a specific column...
My inventory file:
[server]
demo_1.example.com dc="pri"
demo_2.example.com dc="sec"
demo_3.example.com dc="pri"
I just want to run a playbook over the servers with dc="pri".
What should the syntax be?
ansible-playbook -i inventory/my_file role/my_playbook.yaml
In my opinion you are approaching the problem upside down. With what you ask, you basically want to define a group based on an individual variable defined for each host. This is possible but really not ideal.
Instead, create the needed groups and assign the variables to the groups. Here is an example for the above in an all-in-one INI inventory:
[server:children]
primary
secondary
[primary]
demo_1.example.com
demo_3.example.com
[primary:vars]
dc="pri"
[secondary]
demo_2.example.com
[secondary:vars]
dc="sec"
You can then target these groups, either in the hosts pattern of your play or with --limit on the ansible-playbook command line (see the sketch after this list):
- all servers: server
- only the primary dc: primary
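For example, a minimal play targeting only the primary datacenter (the play name and debug task are just illustrative):
- name: configure primary dc only
  hosts: primary
  gather_facts: false
  tasks:
    - debug:
        var: dc
Or, keeping hosts: server in the play, the same selection from the command line, reusing the command from the question:
ansible-playbook -i inventory/my_file role/my_playbook.yaml --limit primary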
Now, if for any reason you need to keep your original inventory untouched, you can still achieve something similar with a constructed dynamic inventory:
inventory/0-hosts
[server]
demo_1.example.com dc="pri"
demo_2.example.com dc="sec"
demo_3.example.com dc="pri"
inventory/1-constructed.yml
---
plugin: ansible.builtin.constructed
strict: False
groups:
  primary: dc == 'pri'
  secondary: dc == 'sec'
As you can see, the resulting global inventory will have the same groups as above:
$ ansible-inventory -i inventory/ --list
{
    "_meta": {
        "hostvars": {
            "demo_1.example.com": {
                "dc": "pri"
            },
            "demo_2.example.com": {
                "dc": "sec"
            },
            "demo_3.example.com": {
                "dc": "pri"
            }
        }
    },
    "all": {
        "children": [
            "primary",
            "secondary",
            "servers",
            "ungrouped"
        ]
    },
    "primary": {
        "hosts": [
            "demo_1.example.com",
            "demo_3.example.com"
        ]
    },
    "secondary": {
        "hosts": [
            "demo_2.example.com"
        ]
    },
    "servers": {
        "hosts": [
            "demo_1.example.com",
            "demo_2.example.com",
            "demo_3.example.com"
        ]
    }
}
So you can target them exactly as I described earlier.
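For example, a run limited to the primary group (a sketch; the playbook path is simply reused from the question):
$ ansible-playbook -i inventory/ role/my_playbook.yaml --limit primary
The hosts: pattern inside the play can target primary or secondary in the same way.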
Q: "Run a playbook over the servers with dc="pri""
A: Use the inventory plugin constructed and create inventory groups by the value of the variable. See
shell> ansible-doc -t inventory ansible.builtin.constructed
In your case, use keyed_groups instead of just groups. You don't have to specify the values of the variable. The names of the created groups will be automatically constructed by using the values of the variable.
For example, the tree below for testing
shell> tree .
.
├── ansible.cfg
├── inventory
│   ├── 01-hosts
│   └── 02-constructed.yml
└── pb.yml
1 directory, 4 files
shell> cat ansible.cfg
[defaults]
gathering = explicit
inventory = $PWD/inventory
stdout_callback = yaml
Create the inventory
shell> cat inventory/01-hosts
[server]
demo_1.example.com dc="pri"
demo_2.example.com dc="sec"
demo_3.example.com dc="pri"
shell> cat inventory/02-constructed.yml
plugin: ansible.builtin.constructed
keyed_groups:
  - key: dc
    prefix: dc
    default_value: pool
and test it. Two new groups dc_pri and dc_sec will be created
shell> ansible-inventory --list --yaml
all:
  children:
    dc_pri:
      hosts:
        demo_1.example.com:
          dc: pri
        demo_3.example.com:
          dc: pri
    dc_sec:
      hosts:
        demo_2.example.com:
          dc: sec
    server:
      hosts:
        demo_1.example.com: {}
        demo_2.example.com: {}
        demo_3.example.com: {}
    ungrouped: {}
Use the groups. For example, in the playbook below
shell> cat pb.yml
- hosts: all
  tasks:
    - debug:
        var: ansible_play_hosts_all
      run_once: true
shell> ansible-playbook pb.yml --limit dc_pri
PLAY [all] ***********************************************************************************
TASK [debug] *********************************************************************************
ok: [demo_1.example.com] =>
  ansible_play_hosts_all:
  - demo_1.example.com
  - demo_3.example.com
PLAY RECAP ***********************************************************************************
demo_1.example.com: ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
I receive a parsing error when the inventory is processed... it seems the plugin is not loaded?
0-inventory.yaml
[app_server]
demo_1.example.com dc="pri"
demo_2.example.com dc="sec"
demo_3.example.com dc="pri"
1-constructed.yaml
plugin: ansible.builtin.constructed
strict: False
groups:
  primary: dc == 'pri'
  secondary: dc == 'sec'
tree
.
├── inventory
│   ├── 0-hosts
│   └── 1-constructed.yaml
├── pb_filter.yaml
└── task1.yaml
execution outcome
$ ansible-inventory -i inventory/ --list
[WARNING]: * Failed to parse
/mnt/c/Users/rcarb/Documents/GitHub/Ansible/Ansible_test/Ansible_filter_inventory/inventory/1-constructed.yaml with
script plugin: problem running
/mnt/c/Users/rcarb/Documents/GitHub/Ansible/Ansible_test/Ansible_filter_inventory/inventory/1-constructed.yaml --list
([Errno 8] Exec format error:
'/mnt/c/Users/rcarb/Documents/GitHub/Ansible/Ansible_test/Ansible_filter_inventory/inventory/1-constructed.yaml')
[WARNING]: * Failed to parse
/mnt/c/Users/rcarb/Documents/GitHub/Ansible/Ansible_test/Ansible_filter_inventory/inventory/1-constructed.yaml with
auto plugin: Invalid value "ansible.builtin.constructed" for configuration option "plugin_type: inventory plugin:
ansible_collections.ansible.builtin.plugins.inventory.constructed setting: plugin ", valid values are: ['constructed']
[WARNING]: * Failed to parse
/mnt/c/Users/rcarb/Documents/GitHub/Ansible/Ansible_test/Ansible_filter_inventory/inventory/1-constructed.yaml with
yaml plugin: Plugin configuration YAML file, not YAML inventory
[WARNING]: * Failed to parse
/mnt/c/Users/rcarb/Documents/GitHub/Ansible/Ansible_test/Ansible_filter_inventory/inventory/1-constructed.yaml with ini
plugin: Invalid host pattern '---' supplied, '---' is normally a sign this is a YAML file.
[WARNING]: Unable to parse
/mnt/c/Users/rcarb/Documents/GitHub/Ansible/Ansible_test/Ansible_filter_inventory/inventory/1-constructed.yaml as an
inventory source
{
    "_meta": {
        "hostvars": {
            "demo_1.example.com": {
                "dc": "pri"
            },
            "demo_2.example.com": {
                "dc": "sec"
            },
            "demo_3.example.com": {
                "dc": "pri"
            }
        }
    },
    "all": {
        "children": [
            "app_server",
            "ungrouped"
        ]
    },
    "app_server": {
        "hosts": [
            "demo_1.example.com",
            "demo_2.example.com",
            "demo_3.example.com"
        ]
    }
}
Any idea why it is failing to parse the inventory file?
I have scoured through the web and the ansible documentation, but I have not been able to find an answer for this question.
Say the structure is as follows:
./playbooks/foo.yml
./hosts/HOST_NAME (Contains IP for a specific host)
./hosts/host_vars/HOST_NAME/vault1
./hosts/host_vars/HOST_NAME/vault2
When I run the command:
ansible-playbook -i hosts/HOST_NAME playbooks/foo.yml
Will ansible use vault1 or vault2 per default?
If it looks in both, what happens if both vaults have defined the same variable? That is:
vault1 -> username: user1
vault2 -> username: user2
If it looks in both, will the command fail if one of the vaults fail the decryption?
Q1: "Will Ansible use vault1 or vault2 per default?"
A: Both. In the sort order.
Q2: "What happens if both vaults have defined the same variable?"
A: The last one wins.
Q3: "Will the command fail if one of the vaults fails the decryption?"
A: Yes.
Example: Given the tree
shell> tree ../test-915
../test-915
├── ansible.cfg
├── hosts
├── host_vars
│   └── test_11
│       ├── vault1
│       └── vault2
└── pb.yml
the unencrypted vaults
shell> cat host_vars/test_11/vault1
username: user1
key1: val1
shell> cat host_vars/test_11/vault2
username: user2
key2: val2
and the playbook
shell> cat pb.yml
- hosts: test_11
  tasks:
    - debug:
        msg: |
          username: {{ username }}
          key1: {{ key1 }}
          key2: {{ key2 }}
gives
shell> ansible-playbook pb.yml
PLAY [test_11] **********************************************************************************
TASK [debug] ************************************************************************************
ok: [test_11] =>
  msg: |-
    username: user2
    key1: val1
    key2: val2
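The vault files are left unencrypted above only to keep the example simple. In a real setup you would encrypt them and provide the vault password at run time; a minimal sketch, assuming a single vault password entered interactively:
shell> ansible-vault encrypt host_vars/test_11/vault1 host_vars/test_11/vault2
shell> ansible-playbook pb.yml --ask-vault-pass
If one of the files was encrypted with a password you cannot provide, decryption fails while the host variables are loaded and the play fails, which is the behavior described in Q3.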
I'm having a very difficult time understanding how to organize large playbooks with many roles, using an inventory with multiple "environments" and using sub-plays to organize things, all while keeping common variables at the parent playbook level and sharing them with the sub-plays. I use Ansible, but in a very limited way, so I'm trying to expand my knowledge of it with this exercise.
Directory structure (simplified for testing)
├── inventory
│   ├── dev
│   │   ├── group_vars
│   │   │   └── all.yml
│   │   └── hosts
│   └── prod
│       ├── group_vars
│       │   └── all.yml
│       └── hosts
├── playbooks
│   └── infra
│       └── site.yml
├── site.yml
└── vars
    └── secrets.yml
Various secrets are in the secrets.yml file, including the ansible_ssh_user and ansible_become_pass.
Contents of all.yml
---
ansible_ssh_user: "{{ vault_ansible_ssh_user }}"
ansible_become_pass: "{{ vault_ansible_become_pass }}"
Contents of site.yml
---
- name: test plays
  hosts: all
  vars_files:
    - vars/secrets.yml
  become: true
  gather_facts: true
  pre_tasks:
    - include_vars: secrets.yml
  tasks:
    - debug:
        var: ansible_ssh_user
- import_playbook: playbooks/infra/site.yml
Content of playbooks/infra/site.yml
---
- name: test sub-play
  hosts: all
  become: true
  gather_facts: true
  tasks:
    - debug:
        var: ansible_ssh_user
The main parent playbook is being called with ansible-playbook -i inventory/dev site.yml. The problem is that I can't access vault_ansible_ssh_user or vault_ansible_become_pass (or any secrets in the vault) from within the sub-play unless I include both vars_files AND pre_tasks: - include_vars.
If I remove vars_files, I can't access the secrets in the parent playbook. If I remove pre_tasks: - include_vars, I can't access any secrets in the imported sub-play. Any idea why I need both of these variable-include statements for this to work? Also, is this just a terrible design and am I doing things completely wrong? I'm having a hard time wrapping my head around the best way to organize huge playbooks with a lot of required variables, so I ended up with a directory structure like this to compartmentalize the variables and avoid both very large variable files and duplicating variable files all over the place. This probably just boils down to me wanting to fit a round peg into a square hole, but I can't find a good best-practices example for something like this.
This issue might also have to do with me putting Ansible Vault variables in an inventory vars file. If so, is that something I should or shouldn't be doing? As I was writing this I may have had a "light bulb" moment and finally understand how I should handle this, but I need to test some things to understand it fully. Regardless, I'm still very interested in what the Stack Overflow community has to say about how I'm currently doing this.
EDIT: it turns out my "light bulb" idea is just what I have here, moved around in a different way, with the same issues.
Q: "If I remove ... include_vars, I can't access any secrets in the imported sub-play."
A: To share variables among the plays use include_vars or set_fact. Quoting from Variable scope: how long is a value available?
Variable values associated directly with a host or group, including variables defined in inventory, by vars plugins, or using modules like set_fact and include_vars, are available to all plays. These ‘host scope’ variables are also available via the hostvars[] dictionary.
Given the files below
shell> cat inventory/prod/hosts
test_01
test_02
shell> cat inventory/prod/group_vars/all.yml
ansible_ssh_user: "{{ vault_ansible_ssh_user }}"
ansible_become_pass: "{{ vault_ansible_become_pass }}"
shell> cat vars/secrets.yml
vault_ansible_ssh_user: ansible-ssh-user
vault_ansible_become_pass: ansible-become-pass
shell> cat site.yml
- name: test plays
  hosts: all
  gather_facts: false
  vars_files: vars/secrets.yml
  tasks:
    - debug:
        var: ansible_ssh_user
    - debug:
        var: ansible_become_pass
- import_playbook: playbooks/infra/site.yml
shell> cat playbooks/infra/site.yml
- name: test sub-plays
  hosts: all
  gather_facts: false
  tasks:
    - debug:
        var: ansible_ssh_user
The variables declared by vars_files are not shared among the plays and the second play will fail. The abridged result is
shell> ANSIBLE_INVENTORY=$PWD/inventory/prod/hosts ansible-playbook site.yml
PLAY [test plays] ****
TASK [debug] ****
ok: [test_01] => {
    "ansible_ssh_user": "ansible-ssh-user"
}
ok: [test_02] => {
    "ansible_ssh_user": "ansible-ssh-user"
}
TASK [debug] ****
ok: [test_01] => {
    "ansible_become_pass": "ansible-become-pass"
}
ok: [test_02] => {
    "ansible_become_pass": "ansible-become-pass"
}
PLAY [test sub-plays] ****
TASK [debug] ****
fatal: [test_01]: FAILED! => {"msg": "The field 'become_pass' has an invalid value, which includes an undefined variable. The error was: 'vault_ansible_become_pass' is undefined"}
fatal: [test_02]: FAILED! => {"msg": "The field 'become_pass' has an invalid value, which includes an undefined variable. The error was: 'vault_ansible_become_pass' is undefined"}
The problem will disappear if you use include_vars or set_fact, i.e. "instantiate" the variables. Commenting set_fact and uncommenting include_vars, or uncommenting both, will give the same result
- name: test plays
  hosts: all
  gather_facts: false
  vars_files: vars/secrets.yml
  tasks:
    - debug:
        var: ansible_ssh_user
    - debug:
        var: ansible_become_pass
    # - include_vars: secrets.yml
    - set_fact:
        ansible_ssh_user: "{{ ansible_ssh_user }}"
        ansible_become_pass: "{{ ansible_become_pass }}"
- import_playbook: playbooks/infra/site.yml
Then the abridged result is
shell> ANSIBLE_INVENTORY=$PWD/inventory/prod/hosts ansible-playbook site.yml
PLAY [test plays] ****
TASK [debug] ****
ok: [test_01] => {
    "ansible_ssh_user": "ansible-ssh-user"
}
ok: [test_02] => {
    "ansible_ssh_user": "ansible-ssh-user"
}
TASK [debug] ****
ok: [test_01] => {
    "ansible_become_pass": "ansible-become-pass"
}
ok: [test_02] => {
    "ansible_become_pass": "ansible-become-pass"
}
TASK [set_fact] ****
ok: [test_01]
ok: [test_02]
PLAY [test sub-plays] ****
TASK [debug] ****
ok: [test_02] => {
    "ansible_ssh_user": "ansible-ssh-user"
}
ok: [test_01] => {
    "ansible_ssh_user": "ansible-ssh-user"
}
Notes
In this example, it's not important whether the variables are encrypted or not.
become and gather_facts don't influence this problem.
There might be other issues. It's a good idea to review include and import issues.
Q: "Why is the vars_files needed?"
A: The variable ansible_become_pass is needed to escalate the user's privilege when a task is sent to the remote host. As a result, when the variable vault_ansible_become_pass is declared in the task include_vars only, this variable won't be available before the tasks are executed, and the play will fail with the error
fatal: [test_01]: FAILED! => {"msg": "The field 'become_pass' has an invalid value, which includes an undefined variable. The error was: 'vault_ansible_become_pass' is undefined"}
See
Understanding variable precedence
Understanding privilege escalation: become
No vars_files is needed if there are user-defined variables only. For example, the playbook below works as expected
shell> cat inventory/prod/group_vars/all.yml
var1: "{{ vault_var1 }}"
var2: "{{ vault_var2 }}"
shell> cat vars/secrets2.yml
vault_var1: test-var1
vault_var2: test-var2
shell> cat site2.yml
- name: test plays
  hosts: all
  gather_facts: false
  tasks:
    - include_vars: secrets2.yml
    - debug:
        var: var1
    - debug:
        var: var2
- import_playbook: playbooks/infra/site2.yml
shell> cat playbooks/infra/site2.yml
- name: test sub-plays
  hosts: all
  gather_facts: false
  tasks:
    - debug:
        var: var1
    - debug:
        var: var2
I would like to shape the directory structure of my Ansible roles and playbooks. Currently I have a directory structure like this:
group_vars
  * all
  * group-one
    - group-vars.yml
    - group-vault.yml
  ...
host_vars
  - server1.yml
plays
  - java_plays
    * deploy_fun_java_stuff.yml
    * deploy_playbook.yml
roles
  - role1
    - tasks
      * main.yml
    - handlers
    - (the rest of the needed directories)
  - role2
  - java
    - java_role1
      - tasks
        * main.yml
      - handlers
      - (the rest of the needed directories)
I would like to be able to call upon the role java_role1 in the play deploy_fun_java_stuff.yml
I can call
---
- name: deploy fun java stuff
  hosts: java
  roles:
    - { role: role1 }
but I cannot call the following (I've tried multiple ways). Is this possible?
- name: deploy fun java stuff
  hosts: java
  roles:
    - { role: java/java_role1 }
What I really want to accomplish is to be able to structure my plays in an orderly fashion along with my roles.
I will end up with a large number of both roles and plays, and I would like to organize them.
I can handle this with a separate ansible.cfg file for each play directory, but I cannot add those cfg files to Ansible Tower (so I'm looking for an alternative solution).
I think the problem is that you need to set the relative path properly. Ansible first applies the given path relative to the called playbook's directory, then looks in the current working path (from which you execute the ansible-playbook command), and finally checks /etc/ansible/roles. So instead of { role: java/java_role1 }, in your directory structure you could use { role: ../../roles/java/java_role1 } or { role: roles/java/java_role1 }. Yet another option is to configure the paths in which Ansible looks for roles: set roles_path in your project's ansible.cfg, as described in the Ansible docs.
Based on your example:
Dir tree:
ansible/
├── hosts
│   └── dev
├── plays
│   └── java_plays
│       └── java.yml
└── roles
    ├── java
    │   └── java_role1
    │       └── tasks
    │           └── main.yml
    └── role1
        └── tasks
            └── main.yml
To test it, the play would include java_role1 and role1.
plays/java_plays/java.yml:
---
- name: deploy java stuff
  hosts: java
  roles:
    - { role: roles/role1 }
    - { role: roles/java/java_role1 }
For testing purposes these roles simply print a debug msg.
role1/tasks/main.yml:
---
- debug: msg="Inside role1"
The dev hosts file simply puts localhost into the java group. Now I can run the playbook:
fishi@zeus:~/workspace/ansible$ ansible-playbook -i hosts/dev plays/java_plays/java.yml
PLAY [deploy java stuff] *******************************************************
TASK [setup] *******************************************************************
ok: [localhost]
TASK [role1 : debug] ***********************************************
ok: [localhost] => {
    "msg": "Inside role1"
}
TASK [java_role1 : debug] *************************************
ok: [localhost] => {
    "msg": "Inside java_role1"
}
PLAY RECAP *********************************************************************
localhost : ok=3 changed=0 unreachable=0 failed=0
Doing the same but using { role: ../../roles/java/java_role1 } and { role: ../../roles/role1 }, the log output inside the TASK brackets shows the whole relative path instead of just the role name:
fishi@zeus:~/workspace/ansible$ ansible-playbook -i hosts/dev plays/java_plays/java.yml
PLAY [deploy java stuff] *******************************************************
TASK [setup] *******************************************************************
ok: [localhost]
TASK [../../roles/role1 : debug] ***********************************************
ok: [localhost] => {
    "msg": "Inside role1"
}
TASK [../../roles/java/java_role1 : debug] *************************************
ok: [localhost] => {
    "msg": "Inside java_role1"
}
PLAY RECAP *********************************************************************
localhost : ok=3 changed=0 unreachable=0 failed=0
Another option, and one that I use, is to create an ansible.cfg file in your playbook directory and place the following in it:
[defaults]
roles_path = /etc/ansible/roles: :
or in your case:
[defaults]
roles_path = /etc/ansible/roles:/etc/ansible/roles/java
Then don't use any relative paths.
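With a roles_path like that in place, the play can refer to the roles by bare name; a sketch, assuming the roles really are installed under /etc/ansible/roles and /etc/ansible/roles/java:
---
- name: deploy fun java stuff
  hosts: java
  roles:
    - role1
    - java_role1
Here role1 resolves via /etc/ansible/roles and java_role1 via /etc/ansible/roles/java.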
A more elegant solution (IMO) is to symlink your roles directory into the playbooks directory.
My directory structure is as follows:
inventory/
playbooks/
  |-> roles -> ../roles
  |-> group_vars -> ../group_vars
  |-> host_vars -> ../host_vars
roles/
group_vars/
host_vars/
In my case, I created the symlink by running ln -s ../roles playbooks/roles
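The group_vars and host_vars symlinks shown in the tree above are created the same way; a sketch, run from the repository root (adjust the paths if your layout differs):
ln -s ../group_vars playbooks/group_vars
ln -s ../host_vars playbooks/host_vars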
I want to make the most of variable precedence with ansible.
So let’s have a look at this simplified project:
├── group_vars
│   ├── all
│   └── web
├── hosts
│   └── local
└── site.yml
The inventory file hosts/local:
[local_web]
192.168.1.20
[local_db]
192.168.1.20
[web:children]
local_web
[db:children]
local_db
The group_vars/all file:
test: ALL
The group_vars/web file:
test: WEB
The site.yml file:
---
- name: Test
  hosts: db
  tasks:
    - debug: var=test
Alright, so this is just to test variable precedence. As I run Ansible against the db group, the test variable should display “ALL”, since Ansible will only look into group_vars/all, right?
Wrong:
TASK: [debug var=test] ********************************************************
ok: [192.168.1.20] => {
    "var": {
        "test": "WEB"
    }
}
Actually, if the local_web and local_db hosts are different, then it works.
Why does Ansible still look into an unrelated config file when the hosts are the same? Is that a bug, or is it just me?
You're stating that 192.168.1.20 is a member of all 4 of your defined groups, and that's independent of how you reference the host in your playbook. No matter how you reference the host in your playbook Ansible is going to evaluate all the groups that host is in and import variables based on those groups.
Here's a handy test to demonstrate this:
- name: Test
  hosts: db
  tasks:
    - debug: msg="{{ inventory_hostname }} is in group {{ item }}"
      when: inventory_hostname in groups[item]
      with_items: group_names
The output of this is:
TASK: [debug msg="host is in group {{ item }}"] *******************************
ok: [192.168.1.20] => (item=db) => {
    "item": "db",
    "msg": "192.168.1.20 is in group db"
}
ok: [192.168.1.20] => (item=local_db) => {
    "item": "local_db",
    "msg": "192.168.1.20 is in group local_db"
}
ok: [192.168.1.20] => (item=local_web) => {
    "item": "local_web",
    "msg": "192.168.1.20 is in group local_web"
}
ok: [192.168.1.20] => (item=web) => {
    "item": "web",
    "msg": "192.168.1.20 is in group web"
}
Since the host is in the web group, the web group_vars file was included.
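A quick way to see the same thing without a playbook, assuming a reasonably recent Ansible that ships the ansible-inventory command, is:
$ ansible-inventory -i hosts/local --graph
This prints every group with its members, making it obvious that 192.168.1.20 sits in web and local_web as well as db and local_db.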
@Bruce P's answer is right.
However, this Ansible behavior is not satisfying for me, because it changes variable precedence depending on the hosts. Instead of group_vars, I use the vars_files list.
I moved group_vars into a directory named vars.
The updated site.yml:
---
- name: Test
  hosts: db
  tasks:
    - debug: var=test
  vars_files:
    - vars/all
    - vars/db
Now, test displays “ALL”, as I wanted. It first reads vars/all, then vars/db (which is empty).
(Note: variable precedence seems to be a bit buggy at the moment, in v1.9.2. This means that if you use variables in the vars file names, Ansible will not load the files in the expected order.)
Another solution, which keeps the variables in group_vars, is to replace the inventory file hosts/local with two files, hosts/inv_db and hosts/inv_web, and to use the -i option to specify which inventory file to use.
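A sketch of what that split could look like, assuming the same groups as in the original hosts/local (the file names are the ones suggested above):
The hosts/inv_db file:
[local_db]
192.168.1.20

[db:children]
local_db
The hosts/inv_web file:
[local_web]
192.168.1.20

[web:children]
local_web
With ansible-playbook -i hosts/inv_db site.yml the host is only a member of the db-related groups, so only group_vars/all (and a group_vars/db file, if you add one) applies, and test displays “ALL”.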