How to handle multiple Bamboo instances with one bamboo.yaml

I have a bamboo.yaml (same project) which is used on 2 different Bamboo servers; this is needed because of our staging concept and other things.
The build jobs differ a bit between those Bamboo instances. I could solve this by using global variables and conditional tasks.
Like this:
tasks:
  - maven:
      executable: Maven 3.6.3
      jdk: JDK 11.0.2
      goal: |-
        clean install -s settings.xml
      environment: BUILD_USER=${bamboo.hpf_bamboo_user} BUILD_PWD=${bamboo.hpf_bamboo_password}
      conditions:
        - variable:
            equals:
              bamboo_instance: devstack
  - maven:
      executable: Maven 3.6.3
      jdk: JDK 11.0.2
      goal: |-
        clean deploy -s settings.xml
      environment: BUILD_USER=${bamboo.hpf_bamboo_user} BUILD_PWD=${bamboo.hpf_bamboo_password}
      conditions:
        - variable:
            equals:
              bamboo_instance: ci
The group which should have permissions on the job has different names on the two Bamboo instances as well,
but I can't use variables in permissions.
plan-permissions:
  - users: []
    groups: ${bamboo.devgroup}
This returns the error "no group '${bamboo.devgroup}'".
Does anyone have an idea how I could solve this?

FYI: Found a solution.
It's possible to define the Bamboo server name in the YAML; Bamboo will skip any configuration whose server name differs from its own :)
server-name: 'bambooservername'
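Building on that, here is a minimal sketch of how the instance-specific parts could live side by side in one file. This assumes Bamboo accepts multiple YAML documents separated by ---; the plan key, instance names, and group names below are all hypothetical:

---
version: 2
plan:
  key: PRJ-PLAN                   # hypothetical plan key
server-name: 'devstack'           # only the instance named 'devstack' applies this document
plan-permissions:
  - users: []
    groups: dev-group-devstack    # the group name as it exists on devstack
---
version: 2
plan:
  key: PRJ-PLAN
server-name: 'ci'                 # only the instance named 'ci' applies this document
plan-permissions:
  - users: []
    groups: dev-group-ci          # the group name as it exists on ci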

Related

How can I set gradle/test to work in the same Docker environment where other CircleCI jobs are running

I have a CircleCI workflow that has 2 jobs. The second job (gradle/test) depends on the first one creating some files for it.
The problem is that the first job runs inside a Docker container, and the second job (gradle/test) does not. Hence, gradle/test fails because it cannot find the files the first job created. How can I set gradle/test up to work in the same space?
Here is a code of the workflow:
version: 2.1
orbs:
  gradle: circleci/gradle@2.2.0
executors:
  daml-executor:
    docker:
      - image: cimg/openjdk:11.0-node
...
workflows:
  checkout-build-test:
    jobs:
      - daml_test:
          daml_sdk_version: "2.2.0"
          context: refapps
      - gradle/test:
          app_src_directory: prototype
          executor: daml-executor
          requires:
            - daml_test
Can anyone help me configure gradle/test correctly?
CircleCI has a mechanism to share artifacts between jobs called "workspace" (well, they have multiple ones, but workspace is what you want here).
Concretely, you would add this at the end of your daml_test job definition, as an additional step:
- persist_to_workspace:
    root: /path/to/folder
    paths:
      - "*"
and that would add all the files from /path/to/folder to the workspace. On the other side, you can "mount" the workspace in your gradle/test job by adding something like this before the step where you need the files:
- attach_workspace:
    at: /whatever/mountpoint
I like to use /tmp/workspace for the path on both sides, but that's just personal preference.
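Since gradle/test is a job that comes from the orb, you can't edit its steps directly, but CircleCI lets you add pre-steps to any job invoked in a 2.1 workflow. A sketch of the wiring, assuming daml_test persists to the workspace as shown above and that /tmp/workspace matches wherever the build expects the files:

workflows:
  checkout-build-test:
    jobs:
      - daml_test:
          daml_sdk_version: "2.2.0"
          context: refapps
      - gradle/test:
          app_src_directory: prototype
          executor: daml-executor
          requires:
            - daml_test
          # pre-steps run before the orb job's own steps
          pre-steps:
            - attach_workspace:
                at: /tmp/workspace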

Ansible organization for a versioned product

I'm working at a company with a big code base and multiple products.
We use Ansible for our product deployments.
We have multiple shared roles and product-specific roles.
Each shared role is versioned and tagged.
Let's imagine a product X at version 1.0.0, a product Y at version 1.1.0,
and 2 shared roles A & B:
X depends on roles A and B at version 1.0.0.
Y depends on role A at version 1.0.0 and role B at version 1.5.0.
I'm working on product X for a new major release, 1.1.0.
So I'm also working in Ansible to add multiple config files, remove some others, and update the shared role A.
But my version 1.0.0 is still in production, so it may require some adjustments for bugs, and that may mean updating the Ansible code too.
Now, here is my problem:
when I release X 1.1.0 with the updated Ansible code, my playbooks are no longer compatible with the 1.0.0 branch.
What's a good practice for handling Ansible across a product's lifetime?
Is it good to have a role and task files per version?
tasks
- main.yml
- 1.0.0.yml
- 1.1.0.yml
cat main.yml
---
# Include the task file matching the product version being deployed
- include_tasks: "{{ product_version }}.yml"

# ...or include version-specific tasks conditionally
- include_tasks: 1.1.0.yml
  when: product_version is version('1.1.0', '>=')
with each version file containing the diff from the previous one?
But by version 1.10.0 that will become a nightmare...
Do you have any experience with these use cases?
First, we must assume that all roles are in their own git repositories. (If they are not, fix that first.) The playbook that does the installation is likewise in its own repository, along with a file called roles/requirements.yml. (That is where Ansible Tower expects it to be.)
A particular version of the installation playbook repo will have a particular roles/requirements.yml file which specifies exactly which versions of which roles are needed to run that version of the playbook.
So, for product X, we have a roles/requirements.yml file that says,
- name: A
  src: git@gitlab.company.com:mygroup/A.git
  scm: git
  version: 1.0.0
- name: B
  src: git@gitlab.company.com:mygroup/B.git
  scm: git
  version: 1.0.0
And for product Y, we have a roles/requirements.yml file that says,
- name: A
  src: git@gitlab.company.com:mygroup/A.git
  scm: git
  version: 1.0.0
- name: B
  src: git@gitlab.company.com:mygroup/B.git
  scm: git
  version: 1.5.0
Installing Roles with Ansible Galaxy
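To actually fetch the pinned versions before running the playbook, install from the requirements file; a sketch (the -p roles/ destination is a common choice, and Ansible Tower performs this step automatically):

# Install the exact role versions pinned in roles/requirements.yml
ansible-galaxy install -r roles/requirements.yml -p roles/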

How to print/debug all jobs of includes in GitLab CI?

In GitLab CI it is possible to include one or many files in a .gitlab-ci.yml file.
It is even possible to nest these includes.
See https://docs.gitlab.com/ee/ci/yaml/includes.html#using-nested-includes.
How can I see the resulting CI file all at once?
Right now, when I debug a CI cycle, I open every single include file and
combine the resulting file structure myself. There has to be a better way.
Example
Content of https://company.com/autodevops-template.yml:
variables:
  POSTGRES_USER: user
  POSTGRES_PASSWORD: testing_password
  POSTGRES_DB: $CI_ENVIRONMENT_SLUG

production:
  stage: production
  script:
    - install_dependencies
    - deploy
  environment:
    name: production
    url: https://$CI_PROJECT_PATH_SLUG.$KUBE_INGRESS_BASE_DOMAIN
  only:
    - master
Content of .gitlab-ci.yml:
include: 'https://company.com/autodevops-template.yml'

image: alpine:latest

variables:
  POSTGRES_USER: root
  POSTGRES_PASSWORD: secure_password

stages:
  - build
  - test
  - production

production:
  environment:
    url: https://example.com
This should result in the following file structure:
image: alpine:latest

variables:
  POSTGRES_USER: root
  POSTGRES_PASSWORD: secure_password
  POSTGRES_DB: $CI_ENVIRONMENT_SLUG

stages:
  - build
  - test
  - production

production:
  stage: production
  script:
    - install_dependencies
    - deploy
  environment:
    name: production
    url: https://example.com
  only:
    - master
→ How can I see this output somewhere?
Environment
Self-Hosted GitLab 13.9.1
As I continued to search for a solution, I read about a "Pipeline Editor" which was released in GitLab 13.8 this year. It turns out that the feature I am looking for was added to this editor just a few days ago:
Version 13.9.0 (2021-02-22): "View an expanded version of the CI/CD configuration" → see the release notes for a feature description and introduction video.
To view the fully expanded CI/CD configuration as one combined file,
go to the pipeline editor’s »View merged YAML« tab. This tab displays an
expanded configuration where:
- Configuration imported with include is copied into the view.
- Jobs that use extends display with the extended configuration merged into the job.
- YAML anchors are replaced with the linked configuration.
Usage: Open project → Module »CI / CD« → Submodule »Editor« → Tab »View merged YAML«
GitLab 15.1 offers an alternative:
Link to included CI/CD configuration from the pipeline editor
A typical CI/CD configuration uses the include keyword to import configuration stored in other files or CI/CD templates. When editing or troubleshooting your configuration, though, it can be difficult to understand how all the configuration works together, because the included configuration is not visible in your .gitlab-ci.yml; you only see the include entry.
In this release, we added links to all included configuration files and templates to the pipeline editor.
Now you can easily access and view all the CI/CD configuration your pipeline uses, making it much easier to manage large and complex pipelines.
See Documentation and Issue.
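If you need the merged configuration outside the UI, the CI Lint API can return it as well; a sketch, assuming a project ID of 123 and a GitLab version recent enough to include the merged_yaml field in the lint response:

# Lint the project's current CI config and return the fully merged YAML
curl --header "PRIVATE-TOKEN: <your_access_token>" \
  "https://gitlab.example.com/api/v4/projects/123/ci/lint"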

Sharing Ansible roles

I have packages of Ansible roles that I would like to import into a project.
If I organise them by subdirectory I run into problems relating to dependency-relative paths,
i.e. a shared role needs to know the relative location where it will be installed if it uses meta dependencies.
I would like to be able to just reference everything relative to the directory the playbook is run from, though this doesn't work:
roles/roleA/meta:
---
dependencies:
  - { role: "{{ playbook_dir }}/roles/shared_roles/roleB" }
roles/shared_roles/roleB
...
I've tried multiple options and am running out of ideas.
I looked into roles_path (http://docs.ansible.com/intro_configuration.html#roles-path),
though I don't really want to have to uniquely name all roles, as they ought to be namespaced / grouped.
Thanks
You could basically use ansible-galaxy to install the roles into the ~/.ansible/roles/ path.
(That is where .ansible lives on my macOS machine: ~/.ansible/roles/.)
All you need to do is create a requirements.yml file which looks something like this:
- src: git+https://<githubURL-Master-role>.git
  scm: git
  name: Master-role
- src: git+https://<githubURL-dependent1-role>.git
  scm: git
  name: dependent1-role
- src: git+https://<githubURL-dependent2-role>.git
  scm: git
  name: dependent2-role
Run this command to install your roles into the ~/.ansible/roles/ path:
sudo ansible-galaxy install -vvv -r requirements.yml
This will install your roles into the ~/.ansible/roles/ path. To understand it better, please follow this documentation: https://docs.ansible.com/ansible/latest/reference_appendices/galaxy.html.
Then, as you have already mentioned, you should add all the dependent roles to your ##path/master-role/meta/main.yml file before executing the ansible-galaxy command, in order to execute the dependent roles before the master role when you run your master-role playbook.
The ##path/master-role/meta/main.yml file looks something like this:
galaxy_info:
  author: Jithendra
  description: blah...blah...blah
  min_ansible_version: 1.2
  galaxy_tags: []

dependencies:
  - { role: 'Master-role', when: blah == "true" }  # the when condition is optional
  - { role: 'dependent1-role', when: blah == "false" }
  - { role: 'dependent2-role', when: blah == "false" }
Solved it by enforcing that all shared roles reference the name of the shared directory in all dependencies.
I wanted to leave this up to the calling project, and had hoped that dependencies would be resolved relative to the role directory.
The solution works, though, and provides namespacing, so I can include multiple roles with the same name as long as they are in separate directories.
I now have a project that includes 3 separate packages of roles:
PLAY RECAP ********************************************************************
staging                    : ok=214  changed=5   unreachable=0   failed=0
It's all built using Maven, each shared group being a Maven dependency. If anybody reads this and it helps, I'll share the structure / POMs etc.
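For reference, a sketch of what a directory-qualified dependency entry ends up looking like under this convention, assuming the parent directory of shared_roles is on the roles search path:

roles/roleA/meta/main.yml:
---
dependencies:
  # Reference the shared role through its directory name for namespacing
  - { role: shared_roles/roleB }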

Keep Ansible DRY: How to avoid repeating variables?

I recently started using Ansible and have run into a problem. In one of my YAML structures I have defined something like this:
---
# file: main.yml
#
# Jenkins variables for installation and configuration
jenkins:
  debian:
    # Debian repository containing Jenkins and the associated key for accessing the repository
    repo: 'deb http://pkg.jenkins-ci.org/debian binary/'
    repo_key: 'http://pkg.jenkins-ci.org/debian/jenkins-ci.org.key'
    # Dependencies needed for Jenkins to run properly
    dependencies:
      - 'openjdk-7-jdk'
      - 'git-core'
      - 'curl'
  port: 8080
  # Installation directory
  home: '/opt/jenkins'
  # Path to the location of the main Jenkins-cli binary
  bin_path: '{{ jenkins.home }}/jenkins-cli.jar'
  # Path to the location of the Jenkins updates file
  updates_path: '{{ jenkins.home }}/updates_jenkins.json'
Ansible gives me an error like this from a specific task:
"recursive loop detected in template string:
{{jenkins.home}}/updates_jenkins.json"
I've narrowed the problem down to the fact that bin_path and updates_path both refer to home. Is there a way around this? It kind of sucks that I would need to define '/opt/jenkins' multiple times.
As far as I know, this is a limitation of Jinja2 variables. It was working with the old ${} syntax and broke as of 1.4; I am not sure whether they fixed it in 1.5.
My temporary solution would be to remove home from the jenkins structure:
---
# file: main.yml
#
# Jenkins variables for installation and configuration
# Installation directory
jenkins_home: '/opt/jenkins'

jenkins:
  debian:
    # Debian repository containing Jenkins and the associated key for accessing the repository
    repo: 'deb http://pkg.jenkins-ci.org/debian binary/'
    repo_key: 'http://pkg.jenkins-ci.org/debian/jenkins-ci.org.key'
    # Dependencies needed for Jenkins to run properly
    dependencies:
      - 'openjdk-7-jdk'
      - 'git-core'
      - 'curl'
  port: 8080
  # Path to the location of the main Jenkins-cli binary
  bin_path: '{{ jenkins_home }}/jenkins-cli.jar'
  # Path to the location of the Jenkins updates file
  updates_path: '{{ jenkins_home }}/updates_jenkins.json'
This should do it:
bin_path: "{{ jenkins['home'] }}/jenkins-cli.jar"
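For completeness, a quick usage sketch against the flattened layout from the first workaround (the task and URL are illustrative, not from the original question):

# Use the derived paths in a task; jenkins.port and jenkins.bin_path
# come from the variable file above.
- name: Fetch the Jenkins CLI jar
  get_url:
    url: 'http://localhost:{{ jenkins.port }}/jnlpJars/jenkins-cli.jar'
    dest: '{{ jenkins.bin_path }}'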
