I'm struggling to get a solution for this:
I need to create a CSV file with all the filesystems and respective mountpoints on every server, using Ansible.
I managed to create a Python script that displays the information I want:
hostname,filesystem,mountpoint
Works beautifully on every server EXCEPT those where Python is not installed :)
- name: Get filesystem data
  shell: python /tmp/filesystem.py
  register: get_fs_data
  become: yes
  become_method: sudo
  become_user: root

- set_fact:
    filesystem_data: "{{ get_fs_data.stdout }}"
  when: get_fs_data.rc == 0
So, my question is ...
How can this be achieved without the usage of that Python script?
I basically want to build a similar list like the one above (hostname, filesystem, mountpoint).
I could execute something like:
df --type=xfs -h --output=source,target
But the output has several lines, one for each file system, and I'm not sure how to handle it on Ansible directly.
There are more options.
See Ansible collect df output and convert to dictionary and from set_fact how to parse the output of df for more ways to parse the output of df.
The simplest option is to use the setup module with the parameter gather_subset: mounts. Unfortunately, this doesn't work properly on all operating systems. There is no problem with Linux. The playbook
- hosts: localhost
  gather_facts: false
  tasks:
    - setup:
        gather_subset:
          - distribution
          - mounts
    - debug:
        var: ansible_distribution
    - debug:
        var: ansible_mounts
gives (abridged)
TASK [debug] *********************************************************************************
ok: [localhost] =>
ansible_distribution: Ubuntu
TASK [debug] *********************************************************************************
ok: [localhost] =>
ansible_mounts:
- block_available: 2117913
  block_size: 4096
  block_total: 10013510
  block_used: 7895597
  device: /dev/nvme0n1p6
  fstype: ext4
  inode_available: 1750968
  inode_total: 2564096
  inode_used: 813128
  mount: /
  options: rw,relatime,errors=remount-ro
  size_available: 8674971648
  size_total: 41015336960
  uuid: 505b60e7-509f-46e6-b833-f388df6bb9f0
...
But the same playbook on FreeBSD shows nothing
TASK [debug] *********************************************************************************
ok: [test_11] =>
ansible_distribution: FreeBSD
TASK [debug] *********************************************************************************
ok: [test_11] =>
ansible_mounts: []
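For hosts where gather_subset: mounts comes back empty, one fallback is to collect and parse the data yourself with raw. Below is an untested sketch; it assumes mount -p prints an fstab-style listing with the device in the first column and the mountpoint in the second, which should be verified on the target OS first:

```yaml
# Fallback sketch for hosts where ansible_mounts is empty.
# Assumption: 'mount -p' yields fstab-style lines (device mountpoint fstype ...).
- raw: mount -p
  register: mount_raw

- set_fact:
    my_mounts: "{{ my_mounts | default([]) +
                   [{'device': item.split()[0], 'mount': item.split()[1]}] }}"
  loop: "{{ mount_raw.stdout_lines }}"
```

The resulting my_mounts list mimics the device/mount keys of ansible_mounts, so the same report template can consume it.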
If gather_subset: mounts works on your systems, the report is simple. For example, the task below
- copy:
    dest: /tmp/ansible_df_all.csv
    content: |
      hostname,filesystem,mountpoint
      {% for hostname in ansible_play_hosts %}
      {% for mount in hostvars[hostname]['ansible_mounts'] %}
      {{ hostname }},{{ mount.device }},{{ mount.mount }}
      {% endfor %}
      {% endfor %}
  run_once: true
  delegate_to: localhost
will create the file on the controller
shell> cat /tmp/ansible_df_all.csv
hostname,filesystem,mountpoint
localhost,/dev/nvme0n1p6,/
localhost,/dev/nvme0n1p7,/export
localhost,/dev/nvme0n1p2,/boot/efi
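If, as in the question, only certain filesystem types should be reported (the df --type=xfs case), the same template can filter ansible_mounts with selectattr before writing. A sketch, with the destination path chosen for illustration:

```yaml
- copy:
    dest: /tmp/ansible_df_xfs.csv
    content: |
      hostname,filesystem,mountpoint
      {% for hostname in ansible_play_hosts %}
      {% for mount in hostvars[hostname]['ansible_mounts'] | selectattr('fstype', 'equalto', 'xfs') %}
      {{ hostname }},{{ mount.device }},{{ mount.mount }}
      {% endfor %}
      {% endfor %}
  run_once: true
  delegate_to: localhost
```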
Q: Get list of filesystems and mountpoints using Ansible and df without Python installed
As already mentioned in the other answer, there are many possible options.
the output has several lines, one for each file system, and I'm not sure how to handle it on Ansible directly
The data processing can be done on Remote Nodes (data pre-processing, data cleansing), Control Node (data post-processing) or partially on each of them.
That said, I prefer the answer of @Vladimir Botka and recommend using it, since the data processing is done on the Control Node and in Python.
How can this be achieved without the usage of that Python script? (annot.: or any Python installed on the Remote Node)
A lazy approach could be:
---
- hosts: test
  become: false
  gather_facts: false # necessary because the setup module depends on Python too

  tasks:

    - name: Gather raw 'df' output with pre-processing
      raw: "df --type=ext4 -h --output=source,target | tail -n +2 | sed 's/  */,/g'"
      register: result

    - name: Show result as CSV
      debug:
        msg: "{{ inventory_hostname }},{{ item }}"
      loop_control:
        extended: true
        label: "{{ ansible_loop.index0 }}"
      loop: "{{ result.stdout_lines }}"
resulting in the output
TASK [Show result as CSV] *************
ok: [test.example.com] => (item=0) =>
msg: test.example.com,/dev/sda3,/
ok: [test.example.com] => (item=1) =>
msg: test.example.com,/dev/sda4,/tmp
ok: [test.example.com] => (item=2) =>
msg: test.example.com,/dev/sdc1,/var
ok: [test.example.com] => (item=3) =>
msg: test.example.com,/dev/sda2,/boot
As noted before, the data pre-processing parts

Remove the first line of the output via | tail -n +2
Replace runs of whitespace with a comma via | sed 's/  */,/g'

could instead be done in Ansible and Python on the Control Node, for example:

Remove the first line of stdout_lines (annot.: a general approach for --no-headers, even for commands which do not provide such an option)
Simply split each item on whitespace and re-join all elements of the list with a comma (,) via {{ item | split | join(',') }}
- name: Gather raw 'df' output without pre-processing
  raw: "df --type=ext4 -h --output=source,target"
  register: result

- name: Show result as CSV
  debug:
    msg: "{{ inventory_hostname }},{{ item | split | join(',') }}"
  loop_control:
    extended: true
    label: "{{ ansible_loop.index0 }}"
  loop: "{{ result.stdout_lines[1:] }}" # emulates --no-headers
resulting in the same output as before.
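The per-host lines produced by the raw task can also be aggregated into a single CSV on the controller, mirroring the copy task from the other answer. A sketch, assuming the raw output was registered as result on every host:

```yaml
- name: Write aggregated CSV on the controller
  copy:
    dest: /tmp/ansible_df_all.csv
    content: |
      hostname,filesystem,mountpoint
      {% for host in ansible_play_hosts %}
      {% for line in hostvars[host].result.stdout_lines[1:] %}
      {{ host }},{{ line | split | join(',') }}
      {% endfor %}
      {% endfor %}
  run_once: true
  delegate_to: localhost
```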
Documentation Links
for Ansible
raw module - Executes a low-down and dirty command
This is useful and should only be done in a few cases ... speaking to any devices such as routers that do not have any Python installed. Arguments given to raw are run directly through the configured remote shell. Standard output, error output and return code are returned when available.
What's the difference between Ansible raw, shell and command?
The command and shell modules, as well as fact gathering (annot.: setup.py), depend on a properly installed Python interpreter on the Remote Node(s). If that requirement isn't fulfilled, one may experience errors.
Extended loop variables
split filter – split a string into a list
Using filters to manipulate data - Manipulating strings, see split and join
Information about Ansible: magic variables
You can use the magic variable inventory_hostname, the name of the host as configured in your inventory, as an alternative to ansible_hostname when fact-gathering is disabled. If you have a long FQDN, you can use inventory_hostname_short, which contains the part up to the first period, without the rest of the domain.
How Ansible gather_facts and sets variables?
and Linux commands
How to omit heading in df command?
How to strip multiple spaces to one using sed?
Further Reading
cli_parse module – Parse cli output or text using a variety of parsers
Re: Wishlist: --no-header option for df
Related
I'm trying to create a list of interface names along with their MAC addresses from a Debian 11 server. Initially, I was trying to get only the MAC addresses, but now I realize I need a list that looks like this:
eth0 <SOME_MAC>
eth1 <SOME_MAC>
...
I want to pass this list as a variable and then use it in the next task to create a 10-persistent-net.link file in the /etc/systemd/network directory.
The current task that I'm using is:
- name: Get mac addresses of all interfaces except local
  debug:
    msg: "{{ ansible_interfaces |
             map('regex_replace', '^', 'ansible_') |
             map('extract', hostvars[inventory_hostname]) |
             selectattr('macaddress', 'defined') |
             map(attribute='macaddress') |
             list }}"
As you can see, I'm using the debug module to test my code, and I have no idea how to create my desired list and pass it as a variable.
The above code gives the following result:
ok: [target1] =>
msg:
- 08:00:27:d6:08:1a
- 08:00:27:3a:3e:ff
- f6:ac:58:a9:35:33
- 08:00:27:3f:82:c2
- 08:00:27:64:6a:f8
ok: [target2] =>
msg:
- 08:00:27:34:70:60
- 42:04:1a:ff:6c:46
- 42:04:1a:ff:6c:46
- 08:00:27:d6:08:1a
- 08:00:27:9c:d7:af
- f6:ac:58:a9:35:33
Any help on which module to use to pass the list as a variable and how to create the list in the first place is appreciated.
Kindly note that I'm using Ansible v5.9.0 and that each server may have any number of interfaces; some of them may use the ethX interface-name format, while others may use enspX, brX, etc.
UPDATE: Per advice in a comment I must mention that I need one list for each target that will be used in a natural host loop task that will run against each target.
UPDATE 2: As I'm new to Ansible, and per my coworker's advice, I was under the impression that a list of interface names along with their MAC addresses separated by spaces was what I needed as a variable to pass to the next task. However, through the comments and answers I now realize that I was heading in the wrong direction; please accept my apology and blame it on my lack of experience with Ansible. In the end, it turned out that a dictionary of interface names and their MAC addresses is most suitable for this kind of action in Ansible.
Get the list of the variables
blacklist: ['lo']
interfaces: "{{ ['ansible_']|
                product(ansible_interfaces|
                        difference(blacklist))|
                map('join')|list }}"
Get the values of the variables and create a dictionary
devices: "{{ interfaces|
             map('extract', vars)|
             items2dict(key_name='device',
                        value_name='macaddress') }}"
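Note that items2dict will fail if one of the extracted facts lacks a macaddress key (e.g. some tunnel interfaces). A defensive variant keeps the selectattr guard from the question:

```yaml
devices: "{{ interfaces|
             map('extract', vars)|
             selectattr('macaddress', 'defined')|
             items2dict(key_name='device',
                        value_name='macaddress') }}"
```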
Notes
A dictionary is more efficient to work with than a list. The keys must be unique.
A dictionary in YAML, aka mapping, is 'an unordered set of key/value node pairs, with the restriction that each of the keys is unique'.
As of Python version 3.7, dictionaries are ordered. As a result, Ansible (YAML) dictionaries are also ordered when using Python 3.7 and later. For example,
devices:
  docker0: 02:42:35:39:f7:f5
  eth0: 80:3f:5d:14:b1:d3
  eth1: e4:6f:13:f5:09:80
  wlan0: 64:5d:86:5d:16:b9
  xenbr0: 80:3f:5d:14:b1:d3
See Jinja on how to create various formats of output. For example,
- debug:
    msg: |-
      {% for ifc, mac in devices.items() %}
      {{ ifc }} {{ mac }}
      {% endfor %}
gives
msg: |-
  wlan0 64:5d:86:5d:16:b9
  eth0 80:3f:5d:14:b1:d3
  eth1 e4:6f:13:f5:09:80
  xenbr0 80:3f:5d:14:b1:d3
  docker0 02:42:35:39:f7:f5
You can see that the output of Jinja is not ordered. Actually, the order is not even persistent when you repeat the task. Use the filter sort if you want to order the lines. For example,
- debug:
    msg: |-
      {% for ifc, mac in devices.items()|sort %}
      {{ ifc }} {{ mac }}
      {% endfor %}
gives
msg: |-
  docker0 02:42:35:39:f7:f5
  eth0 80:3f:5d:14:b1:d3
  eth1 e4:6f:13:f5:09:80
  wlan0 64:5d:86:5d:16:b9
  xenbr0 80:3f:5d:14:b1:d3
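To feed the dictionary into the 10-persistent-net.link step mentioned in the question, one file per interface could be generated from it. A sketch only; the file names and unit contents are illustrative, not a tested systemd configuration:

```yaml
- name: Write one .link file per interface (sketch)
  copy:
    dest: "/etc/systemd/network/10-persistent-{{ item.key }}.link"
    content: |
      [Match]
      MACAddress={{ item.value }}

      [Link]
      Name={{ item.key }}
  loop: "{{ devices | dict2items }}"
  become: true
```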
This is how I would do it.
Note that my example uses the json_query filter which requires pip install jmespath on your ansible controller.
---
- name: Create a formated list for all interfaces
  hosts: all

  vars:
    elligible_interfaces: "{{ ansible_interfaces | reject('==', 'lo') }}"
    interfaces_list_raw: >-
      {{
        hostvars[inventory_hostname]
        | dict2items
        | selectattr('value.device', 'defined')
        | selectattr('value.device', 'in', elligible_interfaces)
        | map(attribute='value')
      }}
    interface_query: >-
      [].[device, macaddress]
    interfaces_formated_list: >-
      {{ interfaces_list_raw | json_query(interface_query) | map('join', ' ') }}

  tasks:
    - name: Show our calculated var
      debug:
        var: interfaces_formated_list
Which gives, running against my localhost:
$ ansible-playbook -i localhost, /tmp/test.yml
PLAY [Create a formated list for all interfaces] **************************************************************************************************
TASK [Gathering Facts] ****************************************************************************************************************************
ok: [localhost]
TASK [Show our calculated var] ********************************************************************************************************************
ok: [localhost] => {
    "interfaces_formated_list": [
        "docker0 02:42:98:b8:4e:75",
        "enp4s0 50:3e:aa:14:17:8f",
        "vboxnet0 0a:00:27:00:00:00",
        "veth7201fce 92:ab:61:7e:df:65"
    ]
}
PLAY RECAP ****************************************************************************************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
As you can see, this is showing several interfaces that you might want to filter out in your use case. You can inspect interfaces_list_raw and create additional filters to achieve your goal. But at least you get the idea.
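If installing jmespath is not an option, the same device/MAC pairing can be expressed with built-in filters only, e.g. zip. An untested sketch reusing the vars above:

```yaml
interfaces_formated_list: >-
  {{ interfaces_list_raw | map(attribute='device')
     | zip(interfaces_list_raw | map(attribute='macaddress'))
     | map('join', ' ') }}
```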
Effectively, I have two servers, and I am trying to use output from a command on one of them to configure the other, and vice versa. I spent a few hours reading on this, and found out that the hostvars process with a dummy host is seemingly what I want. No matter how I try to implement this process, I still get undefined variables, and/or failures from the host(s) not being in the pattern for the task.
Here is the relevant block; only the hosts mux-ds1 and mux-ds2 are in the dispatchers group:
---
- name: Play that sets up the sql database during the build process on all mux dispatchers.
  hosts: mux_dispatchers
  remote_user: ansible
  vars:
    ansible_ssh_pipelining: yes

  tasks:

    - name: Check and save database master bin log file and position on mux-ds2.
      shell: sudo /usr/bin/mysql mysql -e "show master status \G" | grep -E 'File:|Position:' | cut -d{{':'}} -f2 | awk '{print $1}'
      become: yes
      become_method: sudo
      register: syncds2
      when: ( inventory_hostname == 'mux-ds2' )

    - name: Print current ds2 database master bin log file.
      debug:
        var: "syncds2.stdout_lines[0]"

    - name: Print current ds2 database master bin position.
      debug:
        var: "syncds2.stdout_lines[1]"

    - name: Add mux-ds2 some variables to a dummy host allowing us to use these variables on mux-ds1.
      add_host:
        name: "ds2_bin"
        bin_20: "{{ syncds2.stdout_lines }}"

    - debug:
        var: "{{ hostvars['ds2_bin']['bin_21'] }}"

    - name: Compare master bin variable output for ds1's database and if different, configure for it.
      shell: sudo /usr/bin/mysql mysql -e "stop slave; change master to master_log_file='"{{ hostvars['ds2_bin']['bin_21'][0] }}"', master_log_pos="{{ hostvars['ds2_bin']['bin_21'][1] }}"; start slave"
      become: yes
      become_method: sudo
      register: syncds1
      when: ( inventory_hostname == 'mux-ds1' )
Basically everything works properly up to where I try to see the value of the variable from the dummy host with the debug module, but it tells me the variable is still undefined even though it's defined in the original variable. This is supposed to be the system to get around such problems:
TASK [Print current ds2 database master bin log file.] **************************************************
ok: [mux-ds1] => {
"syncds2.stdout_lines[0]": "VARIABLE IS NOT DEFINED!"
}
ok: [mux-ds2] => {
"syncds2.stdout_lines[0]": "mysql-bin.000001"
}
TASK [Print current ds2 database master bin position.] **************************************************
ok: [mux-ds1] => {
"syncds2.stdout_lines[1]": "VARIABLE IS NOT DEFINED!"
}
ok: [mux-ds2] => {
"syncds2.stdout_lines[1]": "107"
}
The above works as I intend, and has the variables populated and referenced properly for mux-ds2.
TASK [Add mux-ds2 some variables to a dummy host allowing us to use these variables on mux-ds1.] ********
fatal: [mux-ds1]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'dict object' has no attribute 'stdout_lines'\n\nThe error appears to be in '/home/ansible/ssn-project/playbooks/i_mux-sql-config.yml': line 143, column 8, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n - name: Add mux-ds2 some variables to a dummy host allowing us to use these variables on mux-ds1.\n ^ here\n"}
This is where the issue is: the variable seems to be magically undefined again, which is odd, given this process is designed to end-run that issue. I can't even make it to the second set of debug tasks.
Note that this is ultimately for the purpose of syncing up two master/master replication mysql databases. I'm also doing this with the shell module because the mysql version which must be used can be no higher than 5.8, and the ansible module requires 5.9, which is a shame. The same process will be done for mux-ds2 in reverse as well, assuming this can be made to work.
Either I'm making a mistake in this implementation which keeps it from functioning, or I'm using the wrong implementation for what I want. I've spent too much time now trying to figure this out alone and would appreciate any solution for this which would work. Thanks in advance!
Seems like you are going a complicated route, when a simple delegation of tasks and the usage of the special variable hostvars, to fetch facts from a different node, should give you what you expect.
Here is an example, focusing just on the important part, so you might want to add the become and become_user back in there:
- shell: >-
    sudo /usr/bin/mysql mysql -e "show master status \G"
    | grep -E 'File:|Position:'
    | cut -d{{':'}} -f2
    | awk '{print $1}'
  delegate_to: mux-ds2
  run_once: true
  register: syncds2

- shell: >-
    sudo /usr/bin/mysql mysql -e "stop slave;
    change master to
    master_log_file='"{{ hostvars['mux-ds2'].syncds2.stdout_lines.0 }}"',
    master_log_pos="{{ hostvars['mux-ds2'].syncds2.stdout_lines.1 }}";
    start slave"
  delegate_to: mux-ds1
  run_once: true
Here is an example, running some dummy shell tasks, given the playbook:
- hosts: node1, node2
  gather_facts: no

  tasks:
    - shell: |
        echo 'line 0'
        echo 'line 1'
      delegate_to: node2
      run_once: true
      register: master_config

    - shell: |
        echo '{{ hostvars.node2.master_config.stdout_lines.0 }}'
        echo '{{ hostvars.node2.master_config.stdout_lines.1 }}'
      delegate_to: node1
      run_once: true
      register: master_replicate_config

    - debug:
        var: master_replicate_config.stdout_lines
      delegate_to: node1
      run_once: true
This would yield:
PLAY [node1, node2] **********************************************************
TASK [shell] *****************************************************************
changed: [node1 -> node2(None)]
TASK [shell] *****************************************************************
changed: [node1]
TASK [debug] *****************************************************************
ok: [node1] =>
master_replicate_config.stdout_lines:
- line 0
- line 1
I have the ansible role coded below.
---
- name: Get host facts
  set_fact:
    serverdomain: "{{ ansible_domain }}"
    server_ip: "{{ ansible_ip_addresses[1] }}"

- name: Host Ping Check
  failed_when: false
  win_ping:
  register: var_ping

- name: Get Host name
  debug: msg="{{ the_host_name }}"

- name: Set Execution File and parameters
  set_fact:
    scriptfile: "{{ ansible_user_dir }}\\scripts\\host_check.ps1"
    params: "-servername '{{ the_host_name }}' -response var_ping.failed"

- name: Execute script
  win_command: powershell.exe "{{ scriptfile }}" "{{ params }}"
It works the way it should, but of course unreachable hosts are not touched at all. I would like to generate a list; is there a variable which contains all the unreachable hosts? Is it possible to have this as a comma-delimited list stored in a variable?
Lastly, how/where do i need to set gather_facts: no. I tried several places to no avail.
EDIT 1
- name: Unreachable servers
  set_fact:
    down: "{{ ansible_play_hosts_all | difference(ansible_play_hosts) }}"

- name: Set Execution File and parameters
  set_fact:
    scriptfile: "{{ ansible_user_dir }}\\scripts\\host_check.ps1"
    params: "-servername '{{ the_host_name }}' -response var_ping.failed -unreachable_hosts {{ down }}"

- name: Execute script
  win_command: powershell.exe "{{ scriptfile }}" "{{ params }}"
  when: inventory_hostname == {{ db_server_host }}
Thanks for the answers. I have now been able to use the same logic in my Ansible role, and it appears to work.
I do however have some questions. My playbook runs against hosts defined in my inventory. In this case, what I want to achieve is a situation where the unreachable hosts are passed on to a PowerShell script right at the end, all at once. So, for example, out of 100 hosts, 10 were unreachable; the playbook should gather the 10 unreachable hosts and pass the list of hosts in an array format to a PowerShell script, or as a JSON data type.
All I want to do is be able to process the list of unreachable servers from my powershell script.
At the moment, I get
[
  "server1",
  "server2"
]
My script will work with a format like this: "server1","server2"
Lastly, the process of logging the unreachable servers at the end only needs to happen once, to a specific server which does the logging to a database. I created a task to do this, as seen above.
db_server_host is being passed as an extra variable from Ansible Tower. The reason I added the when is that I only want it to run on the DB server and not on every host. I get the error: The conditional check inventory_hostname == {{ db_server_host }} failed. The error was: error while evaluating conditional inventory_hostname == {{ db_server_host }}: 'mydatabaseServerName' is undefined
Q: "Get a list of unreachable hosts in an Ansible playbook."
Short answer: Create the difference between the lists below
down: "{{ ansible_play_hosts_all|difference(ansible_play_hosts) }}"
Details: Use Special Variables. Quoting:
ansible_play_hosts_all:
List of all the hosts that were targeted by the play.
ansible_play_hosts:
List of hosts in the current play run, not limited by the serial. Failed/Unreachable hosts are excluded from this list.
For example, given the inventory
shell> cat hosts
alpha
beta
charlie
delta
The playbook
- hosts: alpha,beta,charlie
  gather_facts: true
  tasks:
    - block:
        - debug:
            var: ansible_play_hosts_all
        - debug:
            var: ansible_play_hosts
        - set_fact:
            down: "{{ ansible_play_hosts_all|difference(ansible_play_hosts) }}"
        - debug:
            var: down
      run_once: true
gives (abridged), if alpha is unreachable (see below),
ansible_play_hosts_all:
- alpha
- beta
- charlie
ansible_play_hosts:
- beta
- charlie
down:
- alpha
Host alpha was unreachable
PLAY [alpha,beta,charlie] *************************************************
TASK [Gathering Facts] ****************************************************
fatal: [alpha]: UNREACHABLE! => changed=false
msg: 'Failed to connect to the host via ssh: ssh: connect to host test_14 port 22: No route to host'
unreachable: true
ok: [charlie]
ok: [beta]
...
Note
The dictionary hostvars keeps all hosts from the inventory, e.g.
- hosts: alpha,beta,charlie
  gather_facts: true
  tasks:
    - debug:
        var: hostvars.keys()
      run_once: true
gives (abridged)
hostvars.keys():
- alpha
- beta
- charlie
- delta
tasks:
  - ping:
    register: ping_out
    # one can include any kind of timeout or other stall-related config here

  - debug:
      msg: |
        {% for k in hostvars.keys() %}
        {{ k }}: {{ hostvars[k].ping_out.unreachable|default(False) }}
        {% endfor %}
yields (with an inventory where only the alpha host is alive):
msg: |-
  alpha: False
  beta: True
  charlie: True
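Regarding the "server1","server2" format the question asks for: each element of the down list can be quoted with to_json and the results joined with a comma. A sketch (the variable name down_quoted is made up for illustration):

```yaml
- set_fact:
    down_quoted: "{{ down | map('to_json') | join(',') }}"
# e.g. down: [server1, server2] renders down_quoted as "server1","server2"
```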
Lastly, how/where do i need to set gather_facts: no. I tried several places to no avail.
It appears only once in the list of playbook keywords, and thus it is a play keyword:
- hosts: all
  gather_facts: no
  tasks:
    - name: now run "setup" for those hosts on which you wish to gather facts
      setup:
      when: inventory_host is awesome
We are trying to get less output while executing a playbook on multiple OS flavours, but we have been unable to find a solution, hence posting it here for a better answer.
As multiple tasks are executed, is it possible to merge them into one? We are collecting the output in a file and will then verify it with different tags.
- name: verify hostname
  block:
    - name: read hostname [PRE]
      shell: hostname
      register: hostname

    - name: set fact [hostname]
      set_fact:
        results_pre: "{{ results_pre | combine({'hostname': hostname.stdout.replace(\"'\", '\"') | quote}) }}"

    - name: write hostname
      copy:
        dest: "{{ remote_logs_path }}/{{ ansible_ssh_host }}/pre/hostname"
        content: "{{ hostname.stdout }}"
  tags:
    - pre
Current output
TASK [role : read hostname [PRE]] ***************************************************************************
changed: [ip]
TASK [role : set fact [hostname]] ***************************************************************************
ok: [ip]
TASK [role : write hostname] ********************************************************************************
changed: [ip]
Required Output
TASK [role : Hostname Collected] ********************************************************************************
changed: [ip]
Generally, it's a bad idea to parse Ansible output. You may get some runtime warnings or unexpected additional lines.
If you really want to stick to Ansible output, there are so-called callback plugins; you may try to implement your own if you want.
If you need a report from an Ansible playbook, the common pattern is to have a separate task which reports into a file (usually on the controller host, using delegate_to: localhost).
Finally, if you want to check for idempotence, Molecule provides this feature.
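For the concrete block above, the three tasks can also be collapsed into one reporting task, since copy can render its content directly. A sketch; it assumes fact gathering is enabled so that ansible_hostname replaces the shell: hostname step, and it drops the results_pre bookkeeping:

```yaml
- name: Hostname Collected [PRE]
  copy:
    dest: "{{ remote_logs_path }}/{{ ansible_ssh_host }}/pre/hostname"
    content: "{{ ansible_hostname }}"
  tags:
    - pre
```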
I have a task in my playbook to take backups of a directory on each remote host itself, as :
- name: Copy files to backup
  synchronize:
    src: /opt/myDir/
    dest: /opt/myBackupDir/
    archive: yes
  ignore_errors: no
  delegate_to: "{{ inventory_hostname }}"
  register: sync_out
And my inventory is like :
myWeb1 ansible_host=prodvm1 ansible_user=testuser
myApp1 ansible_host=prodvm2 ansible_user=testuser
myApp2 ansible_host=prodvm3 ansible_user=testuser
The issue is that the output shown is like below :
TASK [Copy files to backup] **********************************************************************************************************************************************************
ok: [myWeb1 -> prodvm1]
ok: [myApp1 -> prodvm2]
ok: [myApp2 -> prodvm3]
Since I am delegating the task, there are two fields shown in the output, separated by the arrow. I would like it to show the name/alias specified in the inventory, i.e., myApp1/myWeb1, instead of the actual hostname (prodvm*) after the arrow, to avoid showing the hostnames/IPs.
I tried to use the debug module to see what Ansible evaluates inventory_hostname to, but it gives the expected result, i.e., myWeb1.
How can I get similar behaviour when using the delegate_to keyword?
This is really annoying behavior in Ansible: delegation always 'cuts to the point' of where things actually happen.
If it really annoys you, you can try to disable the delegation and use the ansible_host trick, but in my opinion delegation is better (even with the junk in the output).
This is the ansible_host trick:
- name: Copy files to backup
  synchronize:
    src: /opt/myDir/
    dest: /opt/myBackupDir/
    archive: true
  vars:
    ansible_host: '{{ hostvars[name_where_its_delegated].ansible_host }}'
  register: sync_out
I warn you again, delegation is better than this.