I am using Ansible v2.0 with this plugin, which shows the time each task consumes. Here is my directory structure:
.
├── aws.yml
├── callback_plugins
│ ├── profile_tasks.py
├── inventory
│ └── hosts
├── roles
│ ├── ec2instance
│ │ ├── defaults
│ │ │ └── main.yml
│ │ └── tasks
│ │ └── main.yml
│ ├── ec2key
│ │ ├── defaults
│ │ │ └── main.yml
│ │ └── tasks
│ │ └── main.yml
│ ├── ec2sg
│ │ ├── defaults
│ │ │ └── main.yml
│ │ └── tasks
│ │ └── main.yml
│ ├── elb
│ │ ├── defaults
│ │ │ └── main.yml
│ │ └── tasks
│ │ └── main.yml
│ ├── rds
│ │ ├── defaults
│ │ │ └── main.yml
│ │ └── tasks
│ │ └── main.yml
│ └── vpc
│ ├── defaults
│ │ └── main.yml
│ └── tasks
│ └── main.yml
└── secret_vars
├── backup.yml
└── secret.yml
But when I run the playbook, it doesn't show the timing results. Can you please point out where I am making a mistake?
I was able to solve this problem by adding this to the ansible.cfg file:
[defaults]
callback_whitelist = profile_tasks
The plugin is included with Ansible 2.0 and, like most of the bundled plugins, it requires whitelisting in ansible.cfg.
Hope this will help others.
Did you set the callback directory in your ansible.cfg file?
If not, just add an ansible.cfg file at the root level of your directory and specify the path to your callback folder.
Because there are other plugin types, I suggest placing callback_plugins inside a plugins folder.
[defaults]
callback_plugins = ./plugins/callback_plugins
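Putting both answers together: given the directory layout in the question, a minimal ansible.cfg at the project root might look like this (a sketch; adjust the plugin path to wherever your callback plugin actually lives):
[defaults]
# point Ansible at the local callback plugin directory from the tree above
callback_plugins = ./callback_plugins
# whitelist the bundled profile_tasks plugin (required since Ansible 2.0)
callback_whitelist = profile_tasks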
I am trying to override a backoffice class that is defined as a bean, but it lives in a web context.
In our project we already have a custom backoffice package, but there is no spring.xml there. Also, no other classes in that package are defined as beans.
To be more specific, I am trying to override this class: hybris/bin/ext-backoffice/backoffice/web/webroot/WEB-INF/classes/com/hybris/backoffice/widgets/searchadapters/conditions/products/FlexibleSearchUncategorizedConditionAdapter.class.
Our backoffice extension looks like this:
├── backoffice <-- webroot
│ ├── resources
│ │ └── widgets
│ │ ├── projectbackofficeWidget
│ │ │ ├── definition.xml
│ │ │ ├── images
│ │ │ │ └── ...
│ │ │ ├── labels
│ │ │ │ └── ...
│ │ │ ├── projectbackofficewidget.scss
│ │ │ └── projectbackofficewidget.zul
│ │ └── actions
│ │ └── ...
│ ├── src
│ │ └── de
│ │ └── company
│ │ └── project
│ │ └── backoffice
│ │ ├── b2bcommerce
│ │ │ └── actions
│ │ │ └── ...
│ │ ├── editors
│ │ │ └── ...
│ │ ├── services
│ │ │ └── ...
│ │ └── widgets
│ │ ├── ...
│ │ └── searchadapters
│ │ └── myFlexibleSearchUncategorizedConditionAdapter.java
│ └── testsrc
│ └── ...
├── build.xml
├── buildcallbacks.xml
├── extensioninfo.xml
├── extensioninfo.xsd
├── gensrc
│ └── ...
├── platformhome.properties
├── project.properties
├── resources
│ ├── backoffice
│ │ └── projectbackoffice_bof.jar
│ ├── beans.xsd
│ ├── cockpitng
│ │ └── ...
│ ├── items.xsd
│ ├── projectbackoffice
│ │ ├── projectbackoffice-testclasses.xml
│ │ └── projectbackoffice-webtestclasses.xml
│ ├── projectbackoffice-backoffice-config.xml
│ ├── projectbackoffice-backoffice-labels
│ │ └── ...
│ ├── projectbackoffice-backoffice-spring.xml
│ ├── projectbackoffice-backoffice-widgets.xml
│ ├── projectbackoffice-beans.xml
│ ├── projectbackoffice-items.xml
│ ├── projectbackoffice-spring.xml
│ ├── projectbackoffice.build.number
│ └── localization
│ └── ...
├── src
│ └── de
│ └── company
│ └── project
│ └── backoffice
│ ├── projectbackofficeStandalone.java
│ ├── constants
│ │ └── projectbackofficeConstants.java
│ └── jalo
│ └── projectbackofficeManager.java
└── testsrc
└── ...
I know there is a spring.xml, but it does not work with the classes in the webroot.
In all other web extensions, there are separate files for that.
How do I add a spring.xml so I can override that OOTB bean? Or how can I use the existing spring.xml for that?
You can use customize for that, but there seems to be a better solution that lets you append to the backoffice web Spring configs, as mentioned here.
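If that approach applies to your setup, the override would presumably go into the projectbackoffice-backoffice-spring.xml shown in the tree above. A rough, hypothetical sketch (the class name is taken from the tree; the OOTB bean id used in the alias is an assumption, so check the backoffice extension's own Spring config for the real id):
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
                           http://www.springframework.org/schema/beans/spring-beans.xsd">

    <!-- custom adapter class from the extension's src folder -->
    <bean id="myFlexibleSearchUncategorizedConditionAdapter"
          class="de.company.project.backoffice.widgets.searchadapters.myFlexibleSearchUncategorizedConditionAdapter"/>

    <!-- hypothetical: alias the OOTB bean id to the custom implementation -->
    <alias name="myFlexibleSearchUncategorizedConditionAdapter"
           alias="flexibleSearchUncategorizedConditionAdapter"/>

</beans>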
I use Ansible to deploy my user-specific configuration (shell, text editor, etc.) on a newly installed system. That's why I have all config files in my role's files directory, structured the same way as they should be placed in my home directory.
What's the correct way to accomplish this? I don't want to list every single file in the role; existing files should be overwritten, and existing directories should be merged.
I've tried the copy module, but the whole task is skipped; I assume that's because the parent directory (.config) already exists.
Edit: added the requested additional information.
Ansible version: 2.9.9
The role's copy task:
- name: Install user configurations
  copy:
    src: "home/"
    dest: "{{ ansible_env.HOME }}"
The files to copy in the role directory:
desktop-enviroment
├── defaults
│ └── main.yml
├── files
│ └── home
│ ├── .config
│ │ ├── autostart-scripts
│ │ │ └── ssh-keys.sh
│ │ ├── MusicBrainz
│ │ │ ├── Picard
│ │ │ ├── Picard.conf
│ │ │ └── Picard.ini
│ │ ├── sublime-text-3
│ │ │ ├── Installed Packages
│ │ │ ├── Lib
│ │ │ ├── Local
│ │ │ └── Packages
│ │ └── yakuakerc
│ └── .local
│ └── share
│ ├── plasma
│ └── yakuake
├── handlers
│ └── main.yml
├── meta
│ └── main.yml
├── tasks
│ ├── desktop-common.yaml
│ ├── desktop-gnome.yaml
│ ├── desktop-kde.yaml
│ └── main.yml
├── templates
└── vars
└── main.yml
The relevant ansible output:
TASK [desktop-enviroment : Install user configurations] **
ok: [localhost]
I have a TeamCity settings.kts file that consists of the root project and hence all subsequent sub-projects. Currently, it's one big massive file, and I am trying to split the KTS file up by project.
What's the best practice for splitting up the settings file? Should I create a file per project, and how do I reference them from the main settings file?
TeamCity generates a single settings.kts file only for small projects.
You can experiment with a larger project: download its settings in Kotlin format and look at the result.
For example, here is what the settings generated by TeamCity for a big project look like:
nadias-mbp:projectSettings 2 nburnasheva$ tree
.
├── README
├── ServiceMessages
│ ├── Project.kt
│ └── buildTypes
│ ├── ServiceMessagesChangeBuildStatus.kt
│ ├── ServiceMessages_BuildProgressServiceMessage.kt
│ ├── ServiceMessages_ErrorParsingServiceMessage.kt
│ ├── ServiceMessages_FailBuild.kt
│ └── ServiceMessages_ReportBuildParameterDoNotReport.kt
├── ServiceMessages_ReportBuildParametersChar
│ ├── Project.kt
│ └── buildTypes
│ ├── ServiceMessages_ReportBuildParametersChar_ReportBuildParameter.kt
│ ├── ServiceMessages_ReportBuildParametersChar_ReportBuildParameterWaitReasonWithTooLongValue.kt
│ └── ServiceMessages_ReportBuildParametersChar_ThreadSleep.kt
├── ServiceMessages_ReportBuildParametersChartCopy
│ ├── Project.kt
│ └── buildTypes
│ └── ServiceMessages_ReportBuildParametersChartCopy_ReportBuildPara.kt
├── _Self
│ ├── Project.kt
│ ├── buildTypes
│ │ ├── AnsiParseAnsiColorLoggerOutput.kt
│ │ ├── BuildStepsAutodetection.kt
│ │ ├── CheckPromptParameter.kt
│ │ ├── EchoBuildIdToFile.kt
│ │ ├── EchoParametersToConsole.kt
│ │ ├── EchoUmlaut.kt
│ │ ├── FailBuildOnTextInTheLogs.kt
│ │ ├── MpsQuottingTest.kt
│ │ ├── RunGitCommand.kt
│ │ ├── RunMavenFromCommandLine.kt
│ │ ├── SetPasswordParameterInServiceMessages.kt
│ │ ├── SimpleWindowsEcho.kt
│ │ ├── SparseFile.kt
│ │ └── StderrRunAsOnMacOS.kt
│ └── vcsRoots
│ ├── HttpsGithubComBanadigaPhotoBackupGitRefsHeadsMaster.kt
│ └── HttpsGithubComBurnashevaCommandLineRunnerGitRefsHeadsMaster.kt
├── pom.xml
└── settings.kts
9 directories, 32 files
And here is the content of settings.kts:
import jetbrains.buildServer.configs.kotlin.v2018_2.*
version = "2019.1"
project(_Self.Project)
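Each directory then contains its own Project.kt that registers the build types, VCS roots, and nested sub-projects belonging to that project. As a rough sketch (names taken from the tree above; the generated file lists every entity), _Self/Project.kt looks roughly like this:
package _Self

import _Self.buildTypes.*
import _Self.vcsRoots.*
import jetbrains.buildServer.configs.kotlin.v2018_2.*
import jetbrains.buildServer.configs.kotlin.v2018_2.Project

object Project : Project({
    // VCS roots and build configurations owned by the root project
    vcsRoot(HttpsGithubComBanadigaPhotoBackupGitRefsHeadsMaster)
    buildType(SimpleWindowsEcho)
    buildType(RunGitCommand)

    // each nested project lives in its own package with its own Project.kt
    subProject(ServiceMessages.Project)
    subProject(ServiceMessages_ReportBuildParametersChar.Project)
})
So the main settings.kts only needs to reference the root project object; everything else is wired up through the per-project Project.kt files.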
Here is my directory structure,
├── README.md
├── internal-api.retry
├── internal-api.yaml
├── ec2.py
├── environments
│ ├── alpha
│ │ ├── group_vars
│ │ │ ├── alpha.yaml
│ │ │ ├── internal-api.yaml
│ │ ├── host_vars
│ │ ├── internal_ec2.ini
│ ├── prod
│ │ ├── group_vars
│ │ │ ├── prod.yaml
│ │ │ ├── internal-api.yaml
│ │ │ ├── tag_Name_prod-internal-api-3.yml
│ │ ├── host_vars
│ │ ├── internal_ec2.ini
│ └── stage
│ ├── group_vars
│ │ ├── internal-api.yaml
│ │ ├── stage.yaml
│ ├── host_vars
│ │ ├── internal_ec2.ini
├── roles
│ ├── internal-api
├── roles.yaml
I am using a separate config for an EC2 instance with the tag Name = prod-internal-api-3, so I have defined a separate file, tag_Name_prod-internal-api-3.yaml, in the environments/prod/group_vars/ folder.
Here is my tag_Name_prod-internal-api-3.yaml:
---
internal_api_gunicorn_worker_type: gevent
Here is my main playbook, internal-api.yaml
- hosts: all
  any_errors_fatal: true
  vars_files:
    - "environments/{{env}}/group_vars/{{env}}.yaml" # this has the ssh key,users config according to environments
    - "environments/{{env}}/group_vars/internal-api.yaml"
  become: yes
  roles:
    - internal-api
For prod deployments, I do export EC2_INI_PATH=environments/prod/internal_ec2.ini, and likewise for stage and alpha. In environments/prod/internal_ec2.ini I have added an instance filter: instance_filters = tag:Name=prod-internal-api-3
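For context, a prod run then presumably looks something like this (the exact invocation is an assumption based on the description; env appears to be passed as an extra variable):
export EC2_INI_PATH=environments/prod/internal_ec2.ini
ansible-playbook -i ec2.py internal-api.yaml -e env=prod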
When I run my playbook, I get this error:
fatal: [xx.xx.xx.xx]: FAILED! => {"changed": false, "msg": "AnsibleUndefinedVariable: 'internal_api_gunicorn_worker_type' is undefined"}
It means that it is not able to pick up the variable from the file tag_Name_prod-internal-api-3.yaml. Why is this happening? Do I need to add it manually with include_vars (I don't think that should be the case)?
Okay, so it is really weird, like really, really weird. I don't know whether it has been documented or not (please provide a link if it has).
If your tag Name is like prod-my-api-1, then the file name tag_Name_prod-my-api-1 will not work.
Your filename has to be tag_Name_prod_my_api_1. Yeah, thanks, Ansible, for making me cry for 2 days.
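The underlying cause is that the ec2.py dynamic inventory sanitizes group names, replacing characters such as dashes with underscores, so the tag Name=prod-internal-api-3 ends up as the group tag_Name_prod_internal_api_3, and the group_vars file has to match that sanitized name (newer ec2.ini templates also expose a replace_dash_in_groups setting that controls this behaviour). For the setup in the question, the renamed file would simply be:
# environments/prod/group_vars/tag_Name_prod_internal_api_3.yml
---
internal_api_gunicorn_worker_type: gevent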
I am using a role (zaxos.lvm-ansible-role) to manage LVMs on a few hosts. Initially I had my vars for the LVM under host_vars/server.yaml, which works.
Here is the working layout:
├── filter_plugins
├── group_vars
├── host_vars
│ ├── server1.yaml
│ └── server2.yaml
├── inventories
│ ├── preprod
│ ├── preprod.yml
│ ├── production
│ │ ├── group_vars
│ │ └── host_vars
│ ├── production.yaml
│ ├── staging
│ │ ├── group_vars
│ │ └── host_vars
│ └── staging.yml
├── library
├── main.yaml
├── module_utils
└── roles
└── zaxos.lvm-ansible-role
├── defaults
│ └── main.yml
├── handlers
│ └── main.yml
├── LICENSE
├── meta
│ └── main.yml
├── README.md
├── tasks
│ ├── create-lvm.yml
│ ├── main.yml
│ ├── mount-lvm.yml
│ ├── remove-lvm.yml
│ └── unmount-lvm.yml
├── tests
│ ├── inventory
│ └── test.yml
└── vars
└── main.yml
For my environment it would make more sense to have the host_vars under the inventories directory, which is also supported (Alternative Directory Layout) as per the Ansible docs.
However, when I change to this layout, the vars are not picked up and the LVMs on the host don't change.
├── filter_plugins
├── inventories
│ ├── preprod
│ │ ├── group_vars
│ │ └── host_vars
│ │ ├── server1.yaml
│ │ └── server2.yaml
│ ├── preprod.yml
│ ├── production
│ │ ├── group_vars
│ │ └── host_vars
│ ├── production.yaml
│ ├── staging
│ │ ├── group_vars
│ │ └── host_vars
│ └── staging.yml
├── library
├── main.yaml
├── module_utils
└── roles
└── zaxos.lvm-ansible-role
├── defaults
│ └── main.yml
├── handlers
│ └── main.yml
├── LICENSE
├── meta
│ └── main.yml
├── README.md
├── tasks
│ ├── create-lvm.yml
│ ├── main.yml
│ ├── mount-lvm.yml
│ ├── remove-lvm.yml
│ └── unmount-lvm.yml
├── tests
│ ├── inventory
│ └── test.yml
└── vars
└── main.yml
Any idea why this approach is not working?
Your host_vars directory must reside in Ansible's discovered inventory_dir.
With the above file tree, I guess you are launching your playbook with ansible-playbook -i inventories/preprod.yml yourplaybook.yml. In this context, Ansible discovers inventory_dir as inventories.
The solution is to move your inventory files inside each directory for your environment, e.g. for preprod => mv inventories/preprod.yml inventories/preprod/
You can then launch your playbook with ansible-playbook -i inventories/preprod/preprod.yml yourplaybook.yml and it should work as you expect.
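With that change, the preprod part of the tree would look roughly like this, with host_vars sitting next to the inventory file where Ansible expects it:
inventories
└── preprod
    ├── preprod.yml
    ├── group_vars
    └── host_vars
        ├── server1.yaml
        └── server2.yaml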