How can I store ansible playbooks in different directories but still be able to call them? - ansible

It seems that Ansible assumes the ansible.cfg file exists in the current working directory, so when you try to call a playbook that lives in a subdirectory, it fails to load the roles and other resources.
Is it possible to store playbook in different directories?
Please note that the ansible.cfg is part of the source code.

Per the documentation, Ansible will look for the configuration file in the following order:
ANSIBLE_CONFIG (an environment variable)
ansible.cfg (in the current directory)
.ansible.cfg (in the home directory)
/etc/ansible/ansible.cfg
So if you'd like to call playbooks from alternate directories, you can pass along ANSIBLE_CONFIG pointing at the appropriate ansible.cfg.
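That lookup order can be pictured as a small shell function (purely illustrative; Ansible performs this search internally, and the config path below is hypothetical):

```shell
# Illustrative sketch of the documented lookup order; no Ansible install needed.
find_ansible_cfg() {
  if [ -n "$ANSIBLE_CONFIG" ]; then
    echo "$ANSIBLE_CONFIG"              # 1. environment variable wins
  elif [ -f ./ansible.cfg ]; then
    echo "./ansible.cfg"                # 2. current directory
  elif [ -f "$HOME/.ansible.cfg" ]; then
    echo "$HOME/.ansible.cfg"           # 3. home directory
  else
    echo "/etc/ansible/ansible.cfg"     # 4. system-wide fallback
  fi
}

# With the variable set, the same config is found from any directory:
ANSIBLE_CONFIG=/path/to/repo/ansible.cfg
find_ansible_cfg    # prints /path/to/repo/ansible.cfg
```

In practice you would run something like `ANSIBLE_CONFIG=/path/to/repo/ansible.cfg ansible-playbook playbooks/site.yml` (path and playbook name are hypothetical).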

I was facing the same problem. I wanted to put all the playbooks in a separate directory instead of having them all flying around in my base directory.
My base directory is under git control and contains a local ansible.cfg file with the following content:
[defaults]
inventory=hosts
roles_path=roles
By setting roles_path, Ansible will look for the roles directory in the base directory instead of in the subdirectory of your playbooks.
Be aware that if you have another directory with files you're referencing in your playbooks, you have to qualify the path as follows:
- name: copy nagios config files
  copy:
    src: ../files/nagios3/config/
    dest: /etc/nagios3/conf.d/
    owner: root
    group: root
  notify: reload nagios
In this case the playbook is located in a playbooks subdirectory at the same level as the files directory.
Just like that:
.
├── README
├── ansible.cfg
├── files
│   └── ...
├── host_vars
│   └── ...
├── hosts
├── playbooks
│   └── ...
├── roles
│   └── ...
└── templates
    └── ...
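With that layout, you invoke playbooks from the base directory and roles resolve through roles_path. A hypothetical playbook under playbooks/ might look like:

```yaml
# playbooks/nagios.yml -- hypothetical file; invoked from the base directory as
#   ansible-playbook playbooks/nagios.yml
- hosts: nagios_servers        # hypothetical inventory group from the hosts file
  roles:
    - nagios                   # found via roles_path=roles in ansible.cfg
```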

Related

Bash: automatically add a file to a Xcode project?

I am creating a script.sh file that creates a Test.swift file and adds it to an Xcode project. However, I would like to know if there is a way to add this file to Xcode (in the project.pbxproj file) from this script, instead of doing it manually in Xcode (Add Files to Project...).
Thank you
3/05 Update
I tried @Johnykutty's answer; here is my current Xcode project before executing the Ruby script:
I have already generated an A folder with a Sample.swift file located in test, but these files are not linked to my Xcode project yet:
Now here is the script that I'm executing:
require 'xcodeproj'
project_path = '../TestCodeProjTest.xcodeproj'
project = Xcodeproj::Project.open(project_path)
file_group = project["TestCodeProjTest"]["test"]
file_group.new_file("#{project.project_dir}/TestCodeProjTest/test/A")
project.save()
This almost works fine, except that it creates a folder reference instead of a group, and it doesn't link it to my target:
Hence the content of Sample.swift is unreachable.
It's hard to achieve with bash, but really easy if you use Ruby and the xcodeproj gem from CocoaPods.
Consider you have file structure like
├── GeneratedFiles
│   └── Sample1.swift
├── MyProject
│   ├── AppDelegate.swift
│   ├── ... all other files
│   ├── SceneDelegate.swift
│   └── ViewController.swift
├── MyProject.xcodeproj
│   ├── project.pbxproj
│   ├── .....
└── add_file.rb
Then you can add files like
require 'xcodeproj'
project_path = 'MyProject.xcodeproj'
project = Xcodeproj::Project.open(project_path)
file_group = project["MyProject"]
file_group.new_file("../GeneratedFiles/Sample1.swift")
project.save()
UPDATE:
project["MyProject"] returns a file group which is a group named MyProject in the root of the project, you can select another group inside MyProject by file_group = project["MyProject"]["MyGroup"]
Then the generated file path should be either related to that group like file_group.new_file("../../GeneratedFiles/Sample1.swift") or full path like file_group.new_file("#{project.project_dir}/GeneratedFiles/Sample1.swift")
More details about Xcodeproj here
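To avoid the folder-reference problem from the 3/05 update, you can create a group explicitly and attach the file reference to the target's build phase. A hedged sketch using the same xcodeproj gem (the target name TestCodeProjTest and the Sample.swift path are assumptions based on the question):

```ruby
require 'xcodeproj'

project = Xcodeproj::Project.open('../TestCodeProjTest.xcodeproj')

# new_group creates a yellow group (name, on-disk path), whereas
# new_file on a directory produces a blue folder reference.
parent = project['TestCodeProjTest']['test']
group  = parent['A'] || parent.new_group('A', 'A')

# Add the file and include it in the app target so Sample.swift gets compiled.
file_ref = group.new_file("#{project.project_dir}/TestCodeProjTest/test/A/Sample.swift")
target   = project.targets.find { |t| t.name == 'TestCodeProjTest' }
target.add_file_references([file_ref])

project.save
```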

How do I run ansible-test on the root of my collection?

I'm currently developing my own Ansible collection and following the documentation. The directory structure looks like this:
~/.ansible/collections/gertvdijk/mycollection
├── galaxy.yml
├── plugins
│   └── lookup
│       └── mylookup.py
├── README.md
└── tests
    └── unit
        └── plugins
            └── lookup
                └── test_mylookup.py
The location ~/.ansible/collections/gertvdijk/mycollection is chosen for convenience so that it's found on the default search paths for collections (COLLECTIONS_PATHS).
The Ansible developer document section Testing collections mentions that I should use ansible-test command from the root of my collection with the given structure.
You must always execute ansible-test from the root directory of a collection.
However, that fails for me, with an error as if I should already be running it inside a project.
Even running --help fails with the current working directory error:
$ ansible-test --help
ERROR: The current working directory must be at or below:
- an Ansible collection: {...}/ansible_collections/{namespace}/{collection}/
Current working directory: /home/gert/.ansible/collections/gertvdijk/mycollection
The same thing happens after cloning an existing community collection (e.g. community.grafana). The GitHub CI steps include an installation into an ansible_collections/{namespace}/{collection} path (seen here).
As a workaround for now (which I'd like to avoid), I could move the repository of the collection to some path that includes /ansible_collections/gertvdijk/mycollection and then run it from there.
This can't be true, right, that the directory names two levels up make or break the ansible-test tool? What am I missing here?
TL;DR: The path for your home collection should be /home/gert/.ansible/collections/ansible_collections/gertvdijk/mycollection
The directories listed in COLLECTIONS_PATHS are actually expected to contain a top-level ansible_collections folder. This is linked to the ansible_collections convention used by e.g. module_utils, as explained in the documentation.
You can also observe how a blank folder gets structured by running e.g.
ansible-galaxy collection install -p /whatever community.grafana
In this case, you will end up with the folder /whatever/ansible_collections/community/grafana.
So your actual home folder collection path should be /home/gert/.ansible/collections/ansible_collections/gertvdijk/mycollection
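A minimal sketch of the fix (a scratch directory is used here for illustration; in practice you would move or symlink your existing checkout):

```shell
# Build the expected <root>/ansible_collections/<namespace>/<collection> shape.
COLLECTIONS_ROOT=$(mktemp -d)
mkdir -p "$COLLECTIONS_ROOT/ansible_collections/gertvdijk/mycollection"
cd "$COLLECTIONS_ROOT/ansible_collections/gertvdijk/mycollection"
# From this working directory, commands such as `ansible-test units` no longer
# fail with the "current working directory" error.
pwd
```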

Conditionally manage Helm chart dependencies without keeping the child charts inside 'charts' directory

I currently have 3 Helm repositories with the following structure:
repoA/
├── templates/
├── Chart.yaml
└── values.yaml
repoB/
├── templates/
├── Chart.yaml
└── values.yaml
masterRepo/
├── templates/
├── Chart.yaml
├── values.yaml
└── requirements.yaml
The requirements.yaml file from masterRepo is something like below:
dependencies:
  - name: repoA
    version: "1.0"
    repository: "file://../repoA"
    condition: repoA.enabled
  - name: repoB
    version: "1.0"
    repository: "file://../repoB"
    condition: repoB.enabled
I would like to only use masterRepo to deploy the dependent Helm charts.
I know I can manually put all the child repositories in masterRepo/charts and it will work, but I want to keep these repositories independent so that other master repositories can use any of them.
What to do to make parent Helm chart detect all the required Helm charts and install them conditionally (based on repoX.enabled variable) without keeping the dependent repositories inside the charts directory of the Master-helm-chart?
If you have multiple Helm charts at different locations in the system, you can create dependencies without changing their location.
With the structure specified in the question, we can add the dependencies in requirements.yaml (for Helm 2.x.x) or Chart.yaml (for Helm 3.x.x). I am currently using Helm v2.16.1.
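For Helm 3, the same block from requirements.yaml moves into Chart.yaml (apiVersion v2; the chart name and version shown here are placeholders):

```yaml
apiVersion: v2
name: masterrepo
version: 0.1.0
dependencies:
  - name: repoA
    version: "1.0"
    repository: "file://../repoA"
    condition: repoA.enabled
  - name: repoB
    version: "1.0"
    repository: "file://../repoB"
    condition: repoB.enabled
```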
Now simply run helm dependency update or helm dep up from inside the masterRepo directory and a charts directory gets created. Now the updated structure of masterRepo looks like:
masterRepo/
├── charts/
│   ├── chartA-1.tgz
│   └── chartB-1.tgz
├── templates/
├── Chart.yaml
├── requirements.lock
├── requirements.yaml
└── values.yaml
The new files/directories added are:
chartA-1.tgz and chartB-1.tgz: tar archives, which are simply the packaged chartA and chartB charts.
requirements.lock: Used to rebuild the charts/ directory. Read more about this file in this SO post.
To install the child charts conditionally, you can add the following to the values.yaml file of the masterRepo:
repoA:
  enabled: True
repoB:
  enabled: True
Now a simple helm install command from inside the masterRepo directory will deploy masterRepo as well as its dependencies (chartA and chartB).
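The conditions can also be flipped per install, without editing values.yaml, via --set (the release name here is hypothetical; the command shown uses Helm 3 syntax, while Helm 2 takes `helm install ./masterRepo --name master ...`):

```shell
helm dep up ./masterRepo
helm install master ./masterRepo \
  --set repoA.enabled=true \
  --set repoB.enabled=false
```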
Hope this helps. Happy Helming!

Automatic Ansible custom modules installation with Ansible Galaxy

Is there any nice way to use Ansible Galaxy to install and enable custom Ansible (2.7.9) modules?
My requirements file lets Ansible Galaxy download the right Ansible role, which embeds my custom module. After running ansible-galaxy install --roles-path ansible/roles/ -r roles/requirements.yml, I get the following structure (non-exhaustive):
├── ansible
│   ├── roles
│   │   ├── mymodule (being imported by Galaxy)
│   │   │   ├── library
│   │   │   │   └── mymodule.py
Looking at this part of the documentation, it seems like my module is in the right place and does not require any further configuration: https://docs.ansible.com/ansible/latest/user_guide/playbooks_best_practices.html?highlight=library#directory-layout
But when I found this part of the documentation, I got confused. Is ANSIBLE_LIBRARY related to custom modules?
DEFAULT_MODULE_PATH
Description: Colon separated paths in which Ansible will search for Modules.
Type: pathspec
Default: ~/.ansible/plugins/modules:/usr/share/ansible/plugins/modules
Ini Section: defaults
Ini Key: library
Environment: ANSIBLE_LIBRARY
https://docs.ansible.com/ansible/latest/reference_appendices/config.html#default-module-path
When calling my module,
- name: Test of my Module
  mymodule:
I get the following error:
ERROR! no action detected in task. This often indicates a misspelled module name, or incorrect module path.
I expected not to have to configure ANSIBLE_LIBRARY, with the module being automatically callable. Am I understanding this correctly, or do I also need to set this variable?
If your custom module is in a role, you need to include the role in your playbook, so at the very least:
---
- hosts: myhosts
  roles:
    - role: mymodule
  tasks:
    - name: Test of my Module
      mymodule:
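If you want the module resolvable outside of any role as well, pointing the library setting at it is an alternative; a sketch for ansible.cfg (the path assumes the layout shown in the question):

```ini
[defaults]
library = ./ansible/roles/mymodule/library
```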

Yocto doesn't pack busybox syslog files

I am using Yocto 2.3 to build my device image.
My image includes packagegroup-core-boot that, in turn, includes busybox.
IMAGE_INSTALL = "\
....
packagegroup-core-boot \
Busybox is configured to include syslogd:
CONFIG_SYSLOGD=y
CONFIG_FEATURE_ROTATE_LOGFILE=y
CONFIG_FEATURE_REMOTE_LOG=y
CONFIG_FEATURE_SYSLOGD_DUP=y
CONFIG_FEATURE_SYSLOGD_CFG=y
CONFIG_FEATURE_SYSLOGD_READ_BUFFER_SIZE=256
CONFIG_FEATURE_IPC_SYSLOG=y
CONFIG_FEATURE_IPC_SYSLOG_BUFFER_SIZE=64
CONFIG_LOGREAD=y
CONFIG_FEATURE_LOGREAD_REDUCED_LOCKING=y
CONFIG_FEATURE_KMSG_SYSLOG=y
CONFIG_KLOGD=y
It is built and installed correctly.
The relevant syslog files do appear in the busybox image directory:
tmp/work/armv5e-poky-linux-gnueabi/busybox/1.24.1-r0/image$ tree etc/
etc/
├── default
├── init.d
│   └── syslog.busybox
├── syslog.conf.busybox
└── syslog-startup.conf.busybox
These files don't appear in my main image rootfs, though. Only the syslogd command is included. See output on target device:
# ls -l $( which syslogd )
lrwxrwxrwx 1 root root 19 Jan 10 12:31 /sbin/syslogd -> /bin/busybox.nosuid
What could be causing these files not to be included in the final image?
Additional question:
As shown in the tree output, the init script for syslog is included in busybox, but no link to /etc/rc?.d/ is created.
I understand it should be created by a do_install() hook, shouldn't it?
Thanks in advance.
EDIT
The contents of packages-split, as @Anders says, seem OK:
poky/build-idprint/tmp/work/armv5e-poky-linux-gnueabi/busybox/1.24.1-r0$ tree packages-split/busybox-syslog/
packages-split/busybox-syslog/
└── etc
    ├── init.d
    │   ├── syslog
    │   └── syslog.busybox
    ├── syslog.conf
    ├── syslog.conf.busybox
    ├── syslog-startup.conf
    └── syslog-startup.conf.busybox
I just can't figure out what is stripping these files out of the final image.
Check tmp/work/armv5e-poky-linux-gnueabi/busybox/1.24.1-r0/packages-split. This is where all files are split into the packages that will be generated. If you search that directory, you'll find e.g. syslog.conf in the busybox-syslog package.
Thus, in order to get those files into your image, you'll need to add busybox-syslog to your image. I.e. IMAGE_INSTALL += "busybox-syslog".
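If you are adding it from local.conf rather than the image recipe, the append form with a leading space is the usual way (Yocto 2.3-era syntax):

```
# In local.conf; note the mandatory leading space inside the quotes
IMAGE_INSTALL_append = " busybox-syslog"
```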
