How do you specify environment-specific inventory files? - ansible

I have a folder structure like so:
.
├── ansible.cfg
├── etc
│   ├── dev
│   │   ├── common
│   │   │   ├── graphite.yml
│   │   │   ├── mongo.yml
│   │   │   ├── mysql.yml
│   │   │   └── rs4.yml
│   │   ├── inventory
│   │   └── products
│   │       ├── a.yml
│   │       ├── b.yml
│   │       └── c.yml
│   └── prod
│       ├── common
│       │   ├── graphite.yml
│       │   ├── mongo.yml
│       │   ├── redis.yml
│       │   └── rs4.yml
│       ├── inventory
│       └── products
│           ├── a.yml
│           ├── b.yml
│           └── c.yml
├── globals.yml
├── startup.yml
├── roles
│   └── [...]
└── requirements.txt
And in my ansible.cfg, I would like to do something like: hostfile=./etc/{{ env }}/inventory, but this doesn't work. Is there a way I can go about specifying environment-specific inventory files in Ansible?

I assume common and products are variable files.
As @Deepali Mittal already mentioned, your inventory should look like inventory/{{ env }}.
In inventory/prod you would define a group prod and in inventory/dev you would define a group dev:
[prod]
host1
host2
hostN
This enables you to define group vars for prod and dev. For this simply create a folder group_vars/prod and place your vars files inside.
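For illustration, a vars file placed there is picked up automatically for every host in the prod group; the path matches the layout below, but the variable names here are made up:

# group_vars/prod/common/mysql.yml (hypothetical contents)
mysql_max_connections: 500
mysql_bind_address: 0.0.0.0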
Re-ordered, your structure would look like this:
.
├── ansible.cfg
├── inventory
│   ├── dev
│   └── prod
├── group_vars
│   ├── dev
│   │   ├── common
│   │   │   ├── graphite.yml
│   │   │   ├── mongo.yml
│   │   │   ├── mysql.yml
│   │   │   └── rs4.yml
│   │   └── products
│   │       ├── a.yml
│   │       ├── b.yml
│   │       └── c.yml
│   └── prod
│       ├── common
│       │   ├── graphite.yml
│       │   ├── mongo.yml
│       │   ├── mysql.yml
│       │   └── rs4.yml
│       └── products
│           ├── a.yml
│           ├── b.yml
│           └── c.yml
├── globals.yml
├── startup.yml
├── roles
│   └── [...]
└── requirements.txt
I'm not sure what globals.yml is. If it is a playbook, it is in the correct location. If it is a variable file with global definitions, it should be saved as group_vars/all.yml and would automatically be loaded for all hosts.
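If it does turn out to be a vars file (an assumption), relocating it is enough:

mkdir -p group_vars
mv globals.yml group_vars/all.yml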
Now you call ansible-playbook with the correct inventory file:
ansible-playbook -i inventory/prod startup.yml
I don't think it's possible to evaluate the environment inside the ansible.cfg like you asked.
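One workaround, not part of the answer above: ansible.cfg itself cannot expand {{ env }}, but the inventory can be chosen per shell session through the ANSIBLE_INVENTORY environment variable instead of being hard-coded:

# pick the environment before running the playbook
export ANSIBLE_INVENTORY=./inventory/prod   # or ./inventory/dev
ansible-playbook startup.yml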

I think instead of {{ env }}/inventory, /inventory/{{ env }} should work. Also if you can please share how you use it right now and the error you get when you change the configuration to envs one

Related

Folder Structure for CI/CD conform Databricks Repo

Are there any best practices for how to organize your project folders so that the CI/CD pipeline remains simple?
Here, the following structure is used, which seems to be quite complex:
project
│   README.md
│   azure-pipelines.yml
│   config.json
│   .gitignore
├───package1
│   │   __init__.py
│   │   setup.py
│   │   README.md
│   │   file.py
│   ├───submodule
│   │       file.py
│   │       file_test.py
│   ├───requirements
│   │       common.txt
│   │       dev.txt
│   └───notebooks
│           notebook1.txt
│           notebook2.txt
├───package2
│       ...
└───ci_cd_scripts
        requirements.py
        script1.py
        script2.py
        ...
Here, the following structure is suggested:
.
├── .dbx
│   └── project.json
├── .github
│   └── workflows
│       ├── onpush.yml
│       └── onrelease.yml
├── .gitignore
├── README.md
├── conf
│   ├── deployment.json
│   └── test
│       └── sample.json
├── pytest.ini
├── sample_project
│   ├── __init__.py
│   ├── common.py
│   └── jobs
│       ├── __init__.py
│       └── sample
│           ├── __init__.py
│           └── entrypoint.py
├── setup.py
├── tests
│   ├── integration
│   │   └── sample_test.py
│   └── unit
│       └── sample_test.py
└── unit-requirements.txt
Concretely, I want to know:
Should I use one repo for all packages and notebooks (as suggested in the first approach), or should I create one repo per library (which makes the CI/CD more effortful, as there might be dependencies between the packages)?
With both suggested folder structures it is unclear to me where to place notebooks that are not related to any specific package (e.g. notebooks that contain my business logic and use the package).
Is there a well-established folder structure?
Databricks had a repository with project templates to be used with Databricks (link), but it has now been archived and template creation is part of the dbx tool - maybe these two links will be useful for you:
dbx init command - https://dbx.readthedocs.io/en/latest/reference/cli/?h=init#dbx-init
DevOps for Workflows Guide - https://dbx.readthedocs.io/en/latest/concepts/devops/#devops-for-workflows
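For reference, a minimal way to try the dbx route mentioned above (assumes a Python environment with pip available; dbx init scaffolds a new project from the template described in the linked docs):

pip install dbx
dbx init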

Why won't it kustomize the node already visited

I am using kubectl kustomize commands to deploy multiple applications (parsers and receivers) with similar configurations, and I'm having problems with the hierarchy of kustomization.yaml files (not understanding what's possible and what's not).
I run the kustomize command as follows from the custom directory:
$ kubectl kustomize overlay/pipeline/parsers/commercial/dev - this works fine; it produces the expected output defined in kustomization.yaml #1, as desired. What's not working is that it does NOT automatically execute the #2 kustomization, which is in the (already traversed) directory path two levels above. The #2 kustomization.yaml contains configMap creation that's common to all of the parser environments, and I don't want to repeat it in every env. When I tried to refer to #1 from #2, I got an error about a circular reference, yet it fails to run the config creation.
I have the following directory structure tree:
custom
├── base
│   ├── kustomization.yaml
│   ├── logstash-config.yaml
│   └── successful-vanilla-ls7.8.yaml
├── install_notes.txt
├── overlay
│   └── pipeline
│       ├── logstash-config.yaml
│       ├── parsers
│       │   ├── commercial
│       │   │   ├── dev
│       │   │   │   ├── dev-patches.yaml
│       │   │   │   ├── kustomization.yaml <====== #1 this works
│       │   │   │   ├── logstash-config.yaml
│       │   │   │   └── parser-config.yaml
│       │   │   ├── prod
│       │   │   └── stage
│       │   ├── kustomization.yaml <============= #2 why won't this run automatically?
│       │   ├── logstash-config.yaml
│       │   └── parser-config.yaml
Here is my #1 kustomization.yaml:
bases:
- ../../../../../base
namePrefix: dev-
commonLabels:
  app: "ls-7.8-logstash"
  chart: "logstash"
  heritage: "Helm"
  release: "ls-7.8"
patchesStrategicMerge:
- dev-patches.yaml
And here is my #2 kustomization.yaml file:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
configMapGenerator:
# generate a ConfigMap named my-generated-configmap-<some-hash> where each file
# in the list appears as a data entry (keyed by base filename).
- name: logstashpipeline-parser
  behavior: create
  files:
  - parser-config.yaml
- name: logstashconfig
  behavior: create
  files:
  - logstash-config.yaml
The issue lies within your structure. Each entry in bases should resolve to a directory containing one kustomization.yaml file. The same goes for overlays. Now, I think it would be easier to explain with an example (I will use $ to show what goes where):
├── base $1
│   ├── deployment.yaml
│   ├── kustomization.yaml $1
│   └── service.yaml
└── overlays
    ├── dev $2
    │   ├── kustomization.yaml $2
    │   └── patch.yaml
    ├── prod $3
    │   ├── kustomization.yaml $3
    │   └── patch.yaml
    └── staging $4
        ├── kustomization.yaml $4
        └── patch.yaml
Every entry resolves to its corresponding kustomization.yaml file. Base $1 resolves to kustomization.yaml $1, dev $2 to kustomization.yaml $2, and so on.
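For completeness, a sketch of what kustomization.yaml $1 in this example could contain; the resource file names are taken from the tree above, the rest is an assumption:

# base/kustomization.yaml ($1) - minimal sketch
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
- service.yaml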
However in your use case:
├── base $1
│   ├── kustomization.yaml $1
│   ├── logstash-config.yaml
│   └── successful-vanilla-ls7.8.yaml
├── install_notes.txt
├── overlay
│   └── pipeline
│       ├── logstash-config.yaml
│       ├── parsers
│       │   ├── commercial
│       │   │   ├── dev $2
│       │   │   │   ├── dev-patches.yaml
│       │   │   │   ├── kustomization.yaml $2
│       │   │   │   ├── logstash-config.yaml
│       │   │   │   └── parser-config.yaml
│       │   │   ├── prod $3
│       │   │   └── stage $4
│       │   ├── kustomization.yaml $???
│       │   ├── logstash-config.yaml
│       │   └── parser-config.yaml
Nothing resolves to your second kustomization.yaml.
So to make it work you should put those files separately under each environment.
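As a sketch of that suggestion (one possible way to merge them, reusing the paths and generator entries from the files shown above), the dev overlay's kustomization.yaml would absorb the configMap generation:

# overlay/pipeline/parsers/commercial/dev/kustomization.yaml - sketch
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
- ../../../../../base
namePrefix: dev-
commonLabels:
  app: "ls-7.8-logstash"
  chart: "logstash"
  heritage: "Helm"
  release: "ls-7.8"
configMapGenerator:
- name: logstashpipeline-parser
  behavior: create
  files:
  - parser-config.yaml
- name: logstashconfig
  behavior: create
  files:
  - logstash-config.yaml
patchesStrategicMerge:
- dev-patches.yaml

The same would then be repeated under prod and stage with their own prefixes and patches.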
Below you can find sources with some more examples showing what the typical directory structure should look like:
Components
Directory layout
GitHub

Ansible dynamic inventory: unable to use group_vars

Here is my directory structure,
├── README.md
├── internal-api.retry
├── internal-api.yaml
├── ec2.py
├── environments
│   ├── alpha
│   │   ├── group_vars
│   │   │   ├── alpha.yaml
│   │   │   └── internal-api.yaml
│   │   ├── host_vars
│   │   └── internal_ec2.ini
│   ├── prod
│   │   ├── group_vars
│   │   │   ├── prod.yaml
│   │   │   ├── internal-api.yaml
│   │   │   └── tag_Name_prod-internal-api-3.yml
│   │   ├── host_vars
│   │   └── internal_ec2.ini
│   └── stage
│       ├── group_vars
│       │   ├── internal-api.yaml
│       │   └── stage.yaml
│       ├── host_vars
│       └── internal_ec2.ini
├── roles
│   └── internal-api
└── roles.yaml
I am using a separate config for an EC2 instance with tag Name = prod-internal-api-3, so I have defined a separate file, tag_Name_prod-internal-api-3.yaml, in the environments/prod/group_vars/ folder.
Here is my tag_Name_prod-internal-api-3.yaml:
---
internal_api_gunicorn_worker_type: gevent
Here is my main playbook, internal-api.yaml:
- hosts: all
  any_errors_fatal: true
  vars_files:
    - "environments/{{env}}/group_vars/{{env}}.yaml" # this has the ssh key and user config for each environment
    - "environments/{{env}}/group_vars/internal-api.yaml"
  become: yes
  roles:
    - internal-api
For prod deployments, I do export EC2_INI_PATH=environments/prod/internal_ec2.ini, likewise for stage and alpha. In environments/prod/internal_ec2.ini I have added an instance filter, instance_filters = tag:Name=prod-internal-api-3.
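(For reference, the full prod invocation implied by this setup would look roughly as follows; the -e env=prod flag is an assumption based on the {{ env }} references in the playbook.)

export EC2_INI_PATH=environments/prod/internal_ec2.ini
ansible-playbook -i ec2.py internal-api.yaml -e env=prod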
When I run my playbook, I get this error:
fatal: [xx.xx.xx.xx]: FAILED! => {"changed": false, "msg": "AnsibleUndefinedVariable: 'internal_api_gunicorn_worker_type' is undefined"}
It means that it is not able to pick up the variable from the file tag_Name_prod-internal-api-3.yaml. Why is this happening? Do I need to manually add it with include_vars (I don't think that should be the case)?
Okay, so it is really weird, like really really weird. I don't know whether it has been documented or not (please provide a link if it has).
If your tag Name is like prod-my-api-1, then the file name tag_Name_prod-my-api-1 will not work.
Your filename has to be tag_Name_prod_my_api_1. Yeah, thanks Ansible for making me cry for 2 days.
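A quick way to confirm which group name the dynamic inventory actually generates (a sketch; ec2.py sanitises characters such as dashes into underscores when building group names):

# list everything ec2.py produces and look for the tag-based group
./ec2.py --list | grep tag_Name_prod
# expected group: tag_Name_prod_internal_api_3
# so the vars file must be environments/prod/group_vars/tag_Name_prod_internal_api_3.yaml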

Ansible vars in inventories directory not applying

I am using a role (zaxos.lvm-ansible-role) to manage LVMs on a few hosts. Initially I had my vars for the LVMs under host_vars/server.yaml, which works.
Here is the working layout
├── filter_plugins
├── group_vars
├── host_vars
│   ├── server1.yaml
│   └── server2.yaml
├── inventories
│   ├── preprod
│   ├── preprod.yml
│   ├── production
│   │   ├── group_vars
│   │   └── host_vars
│   ├── production.yaml
│   ├── staging
│   │   ├── group_vars
│   │   └── host_vars
│   └── staging.yml
├── library
├── main.yaml
├── module_utils
└── roles
    └── zaxos.lvm-ansible-role
        ├── defaults
        │   └── main.yml
        ├── handlers
        │   └── main.yml
        ├── LICENSE
        ├── meta
        │   └── main.yml
        ├── README.md
        ├── tasks
        │   ├── create-lvm.yml
        │   ├── main.yml
        │   ├── mount-lvm.yml
        │   ├── remove-lvm.yml
        │   └── unmount-lvm.yml
        ├── tests
        │   ├── inventory
        │   └── test.yml
        └── vars
            └── main.yml
For my environment it would make more sense to have the host_vars under the inventories directory, which is also supported (Alternative Directory Layout) as per the Ansible docs.
However, when I change to this layout the vars are not initialized and the LVMs on the host don't change.
├── filter_plugins
├── inventories
│   ├── preprod
│   │   ├── group_vars
│   │   └── host_vars
│   │       ├── server1.yaml
│   │       └── server2.yaml
│   ├── preprod.yml
│   ├── production
│   │   ├── group_vars
│   │   └── host_vars
│   ├── production.yaml
│   ├── staging
│   │   ├── group_vars
│   │   └── host_vars
│   └── staging.yml
├── library
├── main.yaml
├── module_utils
└── roles
    └── zaxos.lvm-ansible-role
        ├── defaults
        │   └── main.yml
        ├── handlers
        │   └── main.yml
        ├── LICENSE
        ├── meta
        │   └── main.yml
        ├── README.md
        ├── tasks
        │   ├── create-lvm.yml
        │   ├── main.yml
        │   ├── mount-lvm.yml
        │   ├── remove-lvm.yml
        │   └── unmount-lvm.yml
        ├── tests
        │   ├── inventory
        │   └── test.yml
        └── vars
            └── main.yml
Any idea why this approach is not working?
Your host_vars directory must reside in Ansible's discovered inventory_dir.
With the above file tree, I guess you are launching your playbook with ansible-playbook -i inventories/preprod.yml yourplaybook.yml. In this context, Ansible discovers inventory_dir as inventories.
The solution is to move your inventory files inside each directory for your environment, e.g. for preprod => mv inventories/preprod.yml inventories/preprod/
You can then launch your playbook with ansible-playbook -i inventories/preprod/preprod.yml yourplaybook.yml and it should work as you expect.
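Spelled out for all three environments (main.yaml is assumed to be the playbook, as in the tree above):

# move each static inventory file next to its group_vars/host_vars
mv inventories/preprod.yml inventories/preprod/
mv inventories/production.yaml inventories/production/
mv inventories/staging.yml inventories/staging/

# inventory_dir is now inventories/preprod, so the host_vars there are picked up
ansible-playbook -i inventories/preprod/preprod.yml main.yaml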

Callback plugin didn't work with Ansible v2.0

I am using Ansible v2.0 and this plugin, which shows the time that each task consumes. Here is my directory structure:
.
├── aws.yml
├── callback_plugins
│   └── profile_tasks.py
├── inventory
│   └── hosts
├── roles
│   ├── ec2instance
│   │   ├── defaults
│   │   │   └── main.yml
│   │   └── tasks
│   │       └── main.yml
│   ├── ec2key
│   │   ├── defaults
│   │   │   └── main.yml
│   │   └── tasks
│   │       └── main.yml
│   ├── ec2sg
│   │   ├── defaults
│   │   │   └── main.yml
│   │   └── tasks
│   │       └── main.yml
│   ├── elb
│   │   ├── defaults
│   │   │   └── main.yml
│   │   └── tasks
│   │       └── main.yml
│   ├── rds
│   │   ├── defaults
│   │   │   └── main.yml
│   │   └── tasks
│   │       └── main.yml
│   └── vpc
│       ├── defaults
│       │   └── main.yml
│       └── tasks
│           └── main.yml
└── secret_vars
    ├── backup.yml
    └── secret.yml
But when I run the playbook, it doesn't show the result. Can you please point out where I am making a mistake?
I was able to solve this problem by adding this to the ansible.cfg file:
[defaults]
callback_whitelist = profile_tasks
The plugin is included with Ansible 2.0 and, as with most of the included plugins, it requires whitelisting in ansible.cfg.
Hope this will help others.
Did you set the callback directory in your ansible.cfg file?
If not, just add an ansible.cfg file at the root level of your directory and specify the path to your callback folder.
Because there are other plugin types, I suggest placing callback_plugins inside a plugins folder.
[defaults]
callback_plugins = ./plugins/callback_plugins
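Combining both answers, a possible ansible.cfg could look like this (assuming the plugin file is moved to ./plugins/callback_plugins/profile_tasks.py):

[defaults]
callback_plugins = ./plugins/callback_plugins
callback_whitelist = profile_tasks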
