top_level_main.yml

roles:
  - { role: deploy_nds }

roles/deploy_nds/vars/main.yml

artifact_url: urlsomething

roles/deploy_nds/meta/main.yml

dependencies:
  - { role: download_artifact, url: artifact_url }

roles/download_artifactory/tasks/main.yml

- name: download artifact from jfrog
  get_url:
    url: "{{ url }}"
    dest: /var/tmp
I tried using the variable as "{{ artifact_url }}", but it still does not work as expected. Can someone please help?
I explicitly included the vars file of the particular role in the playbook, and then it worked; loading the role's vars file at play level makes artifact_url available when the dependency's parameters are templated:
vars_files:
  - roles/deploy_nds/vars/main.yml

roles:
  - { role: deploy_nds }
I am trying to come up with a way to pass multiple variables to the same field in a role, but I'm not having any luck getting it to work with the role duplication and execution method I've been using. As an example, I want an SLB server to have multiple ports assigned to it using the port_number variable. I'm new to Ansible, so I'm making some rookie mistakes, like the code below (port_number: "80", port_number: "8080" results in a duplicate key, so only one of them is used), but I have tried just about every syntax I have found examples for and nothing works. The end result should be test3 with both port_number entries assigned to it, but at this point I'm not even sure it's possible this way, or whether I have to run a separate module afterwards to add the entries. Any help is greatly appreciated. Thanks.
---
- name: Deploy A10 config
  connection: local
  hosts: all
  roles:
    - role: server
      vars:
        name: "test1"
        fqdn_name: "test1.test.domain.net"
        health_check: "TCP-8080-HALFOPEN"
        port_number: "80"
    - { role: server, vars: { name: "test2", fqdn_name: "test2.test.domain.net", port_number: "8080" }}
    - { role: server, vars: { name: "test3", fqdn_name: "test3.test.domain.net", port_number: "80", port_number: "8080" }}
---
- name: Test server create
  a10_slb_server:
    a10_host: "10.1.1.1"
    a10_username: "admin"
    a10_password: "admin"
    a10_port: "443"
    a10_protocol: "https"
    state: present
    name: "{{ name }}"
    fqdn_name: "{{ fqdn_name }}"
    port_list:
      - port_number: "{{ port_number }}"
In your code, vars is a dictionary, and the keys in a dictionary must be unique.
vars:
  name: "test1"
  fqdn_name: "test1.test.domain.net"
  health_check: "TCP-8080-HALFOPEN"
  port_number: "80"
YAML resolves duplicate keys by simply overriding the value: the last occurrence wins. This expression
vars: { name: "test3", fqdn_name: "test3.test.domain.net", port_number: "80", port_number: "8080" }
would give
"vars": {
"fqdn_name": "test3.test.domain.net",
"name": "test3",
"port_number": "8080"
}
In your code, port_list is a list of dictionaries. This is the proper way to declare multiple port numbers:
port_list:
  - port_number: "80"
  - port_number: "8080"
In serialized format
port_list: [{port_number: "80"}, {port_number: "8080"}]
But it's not clear from role: server in your code how these variables are used inside the role; one would need to review the role to know how to feed it the data.
For example:

- role: server
  vars:
    name: "test1"
    fqdn_name: "test1.test.domain.net"
    health_check: "TCP-8080-HALFOPEN"
    port_number1: "80"
    port_number2: "8080"
---
- name: Test server create
  a10_slb_server:
    a10_host: "10.1.1.1"
    a10_username: "admin"
    a10_password: "admin"
    a10_port: "443"
    a10_protocol: "https"
    state: present
    name: "{{ name }}"
    fqdn_name: "{{ fqdn_name }}"
    port_list:
      - port_number: "{{ port_number1 }}"
      - port_number: "{{ port_number2 }}"
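If you control the role, another option is to pass the whole list of port dictionaries in one variable, since port_list accepts a list. A sketch (the ports variable name here is made up, not part of the a10 module):

- role: server
  vars:
    name: "test3"
    fqdn_name: "test3.test.domain.net"
    ports:            # 'ports' is a made-up variable name for this sketch
      - port_number: "80"
      - port_number: "8080"

and in the role's task:

- name: Test server create
  a10_slb_server:
    a10_host: "10.1.1.1"
    a10_username: "admin"
    a10_password: "admin"
    a10_port: "443"
    a10_protocol: "https"
    state: present
    name: "{{ name }}"
    fqdn_name: "{{ fqdn_name }}"
    port_list: "{{ ports }}"   # the whole list is passed through

This way the role doesn't fix the number of ports in advance.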
I have a role which uses with_items:
- name: Create backup config files
  template:
    src: "config.yml.j2"
    dest: "/tmp/{{ project }}_{{ env }}_{{ item.type }}.yml"
  with_items:
    - "{{ backups }}"
I can access item.type, as usual, but not project or env, which are defined outside the collection:
deploy/main.yml
- hosts: ...
  vars:
    project: ...
    rails_env: qa
  roles:
    - role: ../../../roles/deploy/dolly
      project: "{{ project }}"
      env: "{{ rails_env }}"
      backups:
        - type: mysql
          username: ...
          password: ...
The error I get is:
Error was a <class 'ansible.errors.AnsibleError'>, original message: An unhandled exception occurred while templating '{{ project }}'
The template, config.yml.j2, is:
type: {{ item.type }}
project: {{ project }}
env: {{ env }}
database:
  username: {{ item.username }}
  password: {{ item.password }}
It turns out you can't redefine a var using its own name: the role parameter takes precedence over the play var, so project: "{{ project }}" templates into a reference to itself and will always fail with an error.
Instead, project can be omitted from the role parameters, and the existing definition in vars will be used:
- hosts: ...
  vars:
    project: ... # <- already defined here
  roles:
    - role: ../../../roles/deploy/dolly
      backups:
        - type: mysql
          username: ...
          password: ...
If the var is not defined in vars, it can be defined in the role:
- hosts: ...
  vars:
    name: ...
  roles:
    - role: ../../../roles/deploy/dolly
      project: "{{ name }}" # <- define here
      backups:
        - type: mysql
          username: ...
          password: ...
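Note that env: "{{ rails_env }}" from the original playbook is fine as-is: the parameter name (env) differs from the source variable (rails_env), so there is no self-reference. A minimal sketch of that working combination:

- hosts: ...
  vars:
    rails_env: qa
  roles:
    - role: ../../../roles/deploy/dolly
      env: "{{ rails_env }}" # <- different names, so no recursion
      backups:
        - type: mysql
          username: ...
          password: ...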
I'm writing a playbook to create symlinks for nodejs, npm and gulp, because I need to use specific versions; to install it all I'm just unzipping folders to /opt/, where all of this is going to stay.
The task with items that I'm using to create the links is:
- name: Create NPM symlink
  file:
    src: '{{ item.src_dir }}/{{ item.src_name }}'
    dest: '{{ item.dest_dir }}/{{ item.dest_name }}'
    owner: "{{ ansible_ssh_user }}"
    group: "{{ ansible_ssh_user }}"
    state: link
  with_items:
    - { src_dir: "{{ npm_real_dir }}", src_name: "{{ npm_real_name }}" }
    - { dest_dir: "{{ nodenpm_link_dir }}", dest_name: "{{ npm_link_name }}" }
All the variables used in the items are declared in the host file as follows:
npm_real_dir=/opt/nodejs/node-v6.11.2-linux-x64/lib/node_modules/npm/bin
npm_real_name=npm-cli.js
nodenpm_link_dir=/opt/nodejs/node-v6.11.2-linux-x64/bin
npm_link_name=npm
ansible_ssh_user=vagrant
And I'm getting the error:
FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'dict object' has no attribute 'dest_dir'
I don't understand this, since all the variables used in the task are declared and correct. I made a similar task without items:
- name: Create symbolic link for npm
  file:
    src: '{{ npm_real_dir }}/{{ npm_real_name }}'
    path: '{{ nodenpm_link_dir }}/{{ npm_link_name }}'
    owner: "{{ ansible_ssh_user }}"
    group: "{{ ansible_ssh_user }}"
    state: link
And it's working, even though the structure is the same as before, just without the items.
At this point I just want to know whether it's a known bug, whether there is any issue with using items to create links, or whether I made a rookie mistake I can learn from.
Thanks in advance.
The issue is that you're passing two different objects to with_items. The first object has two properties (src_dir and src_name), while the second object has two different properties (dest_dir and dest_name).
It looks like you want to combine them into a single object, like this:
- name: Create NPM symlink
  file:
    src: '{{ item.src_dir }}/{{ item.src_name }}'
    dest: '{{ item.dest_dir }}/{{ item.dest_name }}'
    owner: "{{ ansible_ssh_user }}"
    group: "{{ ansible_ssh_user }}"
    state: link
  with_items:
    - { src_dir: "{{ npm_real_dir }}", src_name: "{{ npm_real_name }}", dest_dir: "{{ nodenpm_link_dir }}", dest_name: "{{ npm_link_name }}" }
That should get rid of the error, but in this case you don't really need with_items, since you're only dealing with a single item. It becomes useful when you add objects for other tools, e.g. gulp, in the same manner:
- name: Create symlinks
  file:
    src: '{{ item.src_dir }}/{{ item.src_name }}'
    dest: '{{ item.dest_dir }}/{{ item.dest_name }}'
    owner: "{{ ansible_ssh_user }}"
    group: "{{ ansible_ssh_user }}"
    state: link
  with_items:
    - { src_dir: "{{ npm_real_dir }}", src_name: "{{ npm_real_name }}", dest_dir: "{{ nodenpm_link_dir }}", dest_name: "{{ npm_link_name }}" }
    - { src_dir: "{{ gulp_real_dir }}", src_name: "{{ gulp_real_name }}", dest_dir: "{{ gulp_link_dir }}", dest_name: "{{ gulp_link_name }}" }
I need to get JMX metrics from the Hazelcast product. I have created a Logstash process that connects to the JMX port. This process has to read a JSON file containing the hostname, port, cluster, environment, etc. of the Hazelcast JMX endpoint. I need to deploy, on the Logstash machines, one JSON file per Hazelcast machine/port. In this case there are three Hazelcast machines and a total of six processes with different ports.
Example data:
Hazelcast Hostnames: hazelcast01, hazelcast02, hazelcast03
Hazelcast Ports: 6661, 6662, 6663, 6664, 6665
Logstash Hostnames: logstash01, logstash02, logstash03
Dictionary of Hazelcast information in Ansible:
logstash_hazelcast_jmx:
  - hazelcast_pre:
    name: hazelcast_pre
    port: 15554
    cluster: PRE
  - hazelcast_dev:
    name: hazelcast_dev
    port: 15555
    cluster: DEV
Example of task in Ansible:
- name: Deploy HAZELCAST JMX config
  template:
    src: "hazelcast_jmx.json.j2"
    dest: "{{ logstash_directory_jmx_hazelcast }}/hazelcast_jmx_{{ item }}_{{ item.value.cluster }}.json"
    owner: "{{ logstash_system_user }}"
    group: "{{ logstash_system_group }}"
    mode: 0640
  with_dict:
    - "{{ groups['HAZELCAST'] }}"
    - logstash_hazelcast_jmx
The final result should be as follows:
/opt/logstash/jmx/hazelcast/hazelcast_jmx_hazelcast01_DEV.json
/opt/logstash/jmx/hazelcast/hazelcast_jmx_hazelcast01_PRE.json
/opt/logstash/jmx/hazelcast/hazelcast_jmx_hazelcast02_DEV.json
...
Here is an example of the json content:
{
  "host" : "{{ hostname of groups['HAZELCAST'] }}",
  "port" : {{ item.value.port }},
  "alias" : "{{ hostname of groups['HAZELCAST'] }}_{{ item.value.cluster }}",
  "queries" : [
    {
      "object_name" : "com.hazelcast:instance=_hz_{{ item.value.cluster }},type=XXX,name=YYY",
      "attributes" : [ "size", "localHits" ],
      "object_alias" : "Hazelcast_map"
    } , {
      "object_name" : "com.hazelcast:instance=_hz_{{ item.value.cluster }},type=IMap,name=user",
      "attributes" : [ "size", "localHits" ],
      "object_alias" : "Hazelcast_map"
    }
  ]
}
I think the problem is that with_dict does not allow combining a list of inventory hosts with a dictionary.
How can I generate these JSON files for each machine/port?
If you run your playbook against logstash hosts, you can use with_nested:
---
- hosts: logstash_hosts
  tasks:
    - name: Deploy HAZELCAST JMX config
      template:
        src: "hazelcast_jmx.json.j2"
        dest: "{{ logstash_directory_jmx_hazelcast }}/hazelcast_jmx_{{ helper_host }}_{{ helper_cluster }}.json"
        owner: "{{ logstash_system_user }}"
        group: "{{ logstash_system_group }}"
        mode: 0640
      with_nested:
        - "{{ groups['HAZELCAST'] }}"
        - "{{ logstash_hazelcast_jmx }}"
      vars:
        helper_host: "{{ item.0 }}"
        helper_cluster: "{{ item.1.cluster }}"
        helper_port: "{{ item.1.port }}"
I also used helper variables with more meaningful names. You should also modify your template to use either the helper vars or item.0 / item.1, where item.0 is a host from the HAZELCAST group and item.1 is an item from the logstash_hazelcast_jmx list.
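For example, the host-specific lines of hazelcast_jmx.json.j2 would become (a sketch of just the changed lines, keeping the rest of the asker's template as-is):

{
  "host" : "{{ helper_host }}",
  "port" : {{ helper_port }},
  "alias" : "{{ helper_host }}_{{ helper_cluster }}",
  ...
}

Since the dest above already uses helper_host and helper_cluster, this produces file names of the desired form, e.g. hazelcast_jmx_hazelcast01_PRE.json.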
Let's say I have a single playbook with some roles for installing an appserver, and I'd like to apply the same playbook to both production and testing servers.
Both production and testing servers have the same list of roles, with the exception of one that should be applied only on production servers.
Is it possible to specify somehow, using the same playbook, that this role will only be applied to production servers?
For example, if the playbook is:
---
- hosts: appserver
  roles:
    - { role: mail, tags: ["mail"] }
    - { role: firewall, tags: ["firewall"] }
    - { role: webserver, tags: ["webserver"] }
    - { role: cache, tags: ["cache"] }
and I have two inventories: one for production and one for testing.
When I run the playbook using the testing inventory, I don't want the role 'firewall' to be executed.
My idea is to do something like setting a variable in the production inventory and using something like "if <var> is set, then execute the firewall role". I don't know if this is possible, and if it is, how to do it?
You can define a variable, for example production_environment, in your inventory files, assign it a true or false value, and use a when conditional in the playbook:
---
- hosts: appserver
  roles:
    - { role: mail, tags: ["mail"] }
    - { role: firewall, tags: ["firewall"], when: production_environment }
    - { role: webserver, tags: ["webserver"] }
    - { role: cache, tags: ["cache"] }
Or you can access the inventory_file variable directly. For example, if you use -i production:
---
- hosts: appserver
  roles:
    - { role: mail, tags: ["mail"] }
    - { role: firewall, tags: ["firewall"], when: inventory_file == "production" }
    - { role: webserver, tags: ["webserver"] }
    - { role: cache, tags: ["cache"] }
Which way you go is an administrative decision. The second method uses fewer steps, but requires the inventory file name to be hardcoded in the playbook.
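One caveat with the first method: if production_environment is simply not defined in the testing inventory, the when check can fail with an undefined-variable error instead of skipping the role. Guarding with a default avoids that (a sketch):

- { role: firewall, tags: ["firewall"], when: production_environment | default(false) }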
Why don't you just use inventory groups?
Make two inventories:
testing:

[application]
my-dev-server

production:

[application]
company-prod-server

[firewall]
company-prod-server
And change your playbook as follows:
---
- hosts: firewall
roles:
- { role: firewall, tags: ["firewall"] }
- hosts: application
roles:
- { role: mail, tags: ["mail"] }
- { role: webserver, tags: ["webserver"] }
- { role: cache, tags: ["cache"] }
- { role: firewall, tags: ["firewall"], when: MyVar }
See Applying ‘when’ to roles and includes.
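With this layout the same playbook serves both environments: run against the testing inventory, the firewall play matches no hosts, so Ansible simply skips it.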