To improve the structure of my GitLab CI file, I include some specific files, for example:
include:
  - '/ci/config/linux.yml'
  - '/ci/config/windows.yml'
  # ... more includes
To avoid the error-prone repetition of the path, I thought I would put it into a variable:
variables:
  CI_CONFIG_DIR: '/ci/config'

include:
  - '${CI_CONFIG_DIR}/linux.yml' # ERROR: "Local file `${CI_CONFIG_DIR}/linux.yml` does not exist!"
  - '${CI_CONFIG_DIR}/windows.yml'
  # ... more includes
But this does not work: GitLab CI claims that ${CI_CONFIG_DIR}/linux.yml does not exist, although the documentation says that variables are allowed in include paths; see https://docs.gitlab.com/ee/ci/variables/where_variables_can_be_used.html#gitlab-ciyml-file.
What also didn't work was including a file /ci/config/main.yml and, from that, including the specific configurations without paths:
# /ci/config/main.yml
include:
  - 'linux.yml' # ERROR: "Local file `linux.yml` does not exist!"
  - 'windows.yml'
  # ... more includes
How can I make this work, or is there an alternative way to define the path in only one place without making things too complicated?
This does not seem to be implemented at the moment; there is an open issue for it in the backlog.
Also, while the documentation says that you can use variables within include sections, that applies only to predefined variables.
See if GitLab 14.2 (August 2021) can help:
Use CI/CD variables in include statements in .gitlab-ci.yml
You can now use variables as part of include statements in .gitlab-ci.yml files.
These variables can be instance, group, or project CI/CD variables.
This improvement provides you with more flexibility to define pipelines.
You can copy the same .gitlab-ci.yml file to multiple projects and use variables to alter its behavior.
This allows for less duplication in the .gitlab-ci.yml file and reduces the need for complicated per-project configuration.
See Documentation and Issue.
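Applied to the example above, the path prefix can then live in a CI/CD variable. A minimal sketch, assuming CI_CONFIG_DIR is defined as a project CI/CD variable in the GitLab UI (at the time of 14.2, variables from the variables: block of the same file were not usable in include):

# .gitlab-ci.yml - assumes a project CI/CD variable CI_CONFIG_DIR
# (Settings > CI/CD > Variables) with the value '/ci/config'
include:
  - '${CI_CONFIG_DIR}/linux.yml'
  - '${CI_CONFIG_DIR}/windows.yml'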
Related
So I use the Serverless Framework for AWS Lambdas with a variety of plugins, as follows:
plugins:
  - serverless-esbuild
  - serverless-offline
  - serverless-stack-output
  - plugin4
  - plugin5
  - plugin6
  - plugin7
  - plugin8
  - plugin9
  - plugin10
I also have multiple 'environments' (for different AWS accounts, configs, etc.), so I make the serverless.yml content vary by environment using imported sub-YAML files, as follows:
vpc: ${file(serverless/environment/${env:ENVIRONMENT}.yml):vpc}
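For context, the imported per-environment file referenced above might look something like this (the file name and vpc values are hypothetical):

# serverless/environment/dev.yml - hypothetical example values
vpc:
  securityGroupIds:
    - sg-0123456789abcdef0
  subnetIds:
    - subnet-0123456789abcdef0
    - subnet-0123456789abcdef1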
But what I need is to make the presence of a single plugin conditional on the ENVIRONMENT variable. Let's say plugin8 should not be included if ENVIRONMENT=XXX.
With my previous strategy, I could externalize the whole plugin list to the individual environment sub-YAML files, but that would lead to a fair amount of duplication.
Is there a better approach to making just one line in a YAML list conditional on an environment variable?
Thanks
I am a newbie to Puppet and I wonder how I can pass arguments on the command line. Let me explain:
This is the command that I'm running (puppet apply):
C:\>puppet apply --environment test -l C:\Puppet_logs\log.log C:\ProgramData\PuppetLabs\code\environments\test\manifests\site.pp
site.pp:
File { backup => false }

node default {
  include 'tn'
}
This means that I am running 'tn', which is one of the modules in my Puppet project.
For example, I have these modules in my Puppet project:
tn
ps
av
So to run each module, I need to go to this site.pp file and change it to
include 'ps'
or
include 'av'
My question is: how do I pass these modules as arguments to the puppet apply command?
I know that I can create three .pp files, each containing one module (ps, av, tn).
And then my command will look like:
puppet apply --environment test -l C:\Puppet_logs\log.log C:\ProgramData\PuppetLabs\code\environments\test\manifests\ps.pp
puppet apply --environment test -l C:\Puppet_logs\log.log C:\ProgramData\PuppetLabs\code\environments\test\manifests\av.pp
puppet apply --environment test -l C:\Puppet_logs\log.log C:\ProgramData\PuppetLabs\code\environments\test\manifests\tn.pp
But I think it's not a good solution.
Is there another way to pass these modules as arguments to puppet apply?
In case I didn't mention it: each module is responsible for different actions.
Thanks!
I know that I can create three .pp files, each containing one module
(ps, av, tn).
[...]
But I think it's not a good solution.
Why isn't it a good solution? It seems perfectly sensible to me that if you have three different things you want to be able to do, then you have a separate file to use to accomplish each.
Nevertheless, if your modules do not use each other, then you could probably accomplish what you describe by relying on tags. Have your site manifest include all three modules:
File { backup => false }

node default {
  include 'tn'
  include 'ps'
  include 'av'
}
Then use the --tags option to select only one of those modules and all the other classes it brings in:
puppet apply --tags ps --environment test -l C:\Puppet_logs\log.log C:\ProgramData\PuppetLabs\code\environments\test\manifests\site.pp
A .pp file is a class file, not a module. A module contains the classes and anything else needed to support/test those classes; take a look at https://puppet.com/docs/puppet/5.5/modules_fundamentals.html.
Look at how modules are laid out on https://forge.puppet.com/.
It's well worth looking at the PDK (https://puppet.com/docs/pdk/1.x/pdk.html), as it will build a module for you; you just need to add the classes.
In your case you probably want to create a new module (let's call it mymodule) and, in that module, put your tn.pp, ps.pp, and av.pp class files under the C:\ProgramData\PuppetLabs\code\environments\test\modules\mymodule\manifests directory.
Then, for local testing, use the examples pattern: in your module you'll have an examples directory, and in there you might have a file called ps.pp containing include mymodule::ps to include that ps.pp class file.
The aim of the examples directory is to give you a way of passing in parameters for local testing.
Back in your site.pp file you'd apply it with:
node default {
  include mymodule::ps
}
So now you want to apply different classes to different nodes, and here you hit the world of node classification; there are many ways to do that. In your case I think you're probably doing this on a small scale, so you'd have:
node 'psserver.example.com' {
  include mymodule::ps
}

node 'tnserver.example.com' {
  include mymodule::tn
}
Have a look at some of the online training: https://puppet.com/learning-training/kits/puppet-language-basics
I'm working on a Capistrano deployment configuration and would like to place the shared folder somewhere else. The background: I want to use a wildcard deployment (review apps), so the target directory is generated on the fly (which means there isn't a shared folder in it), and I would like to use one shared folder with the assets and configurations across ALL review apps in this environment.
Therefore I have directories on the server:
/var/www/review/application_name
/var/www/review/application_name/shared/... (here are the assets and configurations I would like to share across ALL review apps)
/var/www/review/application_name/branch-name/ - this is the deployment path, which will be created by Capistrano when deploying a specific branch to the review stage.
I have set shared_path:
set :shared_path, "/var/www/review/#{fetch(:application)}"
which works fine for the linked_dirs, but NOT for the linked_files. I get the error message:
00:01 deploy:check:linked_files
ERROR linked file /var/www/review/www.app.tld/123/shared/myfile does not exist on review.app.tld
which is true, but I don't know how to tell Capistrano to put it in place. Of course, the named file is in the shared folder
/var/www/review/www.app.tld/shared/
but Capistrano seems to search in the wrong place when checking the linked_files (again: the linked_dirs are processed correctly).
Any hints? Thanks in advance!
The shared_path is not something you can configure directly. Using set will not have any effect.
The shared path in Capistrano is always a directory named shared inside your :deploy_to location. Therefore if you want to change the shared path, you must set :deploy_to, like so:
set :deploy_to, -> { "/var/www/review/#{fetch(:application)}" }
This will effectively cause shared_path to become:
"/var/www/review/#{fetch(:application)}/shared"
Keep in mind that :deploy_to is used as the base directory for many things: releases, repo, current, etc. So if you change :deploy_to you will affect all of them.
If your :application variable is defined at some later point, or changed, you'll need to use a deferred variable:
set :shared_path, -> { "/var/www/review/#{fetch(:application)}" }
This evaluates the string on demand instead of in advance.
We use a GitLab project as a team. Each developer has his own Kubernetes cluster in the cloud and his own branch within GitLab. We use GitLab CI to automatically build new containers and deploy them to our Kubernetes clusters.
At the moment we have a .gitlab-ci.yml that looks something like this:
variables:
  USERNAME: USERNAME
  CI_K8S_PROJECT: ${USERNAME_CI_K8S_PROJECT}
  REGISTRY_JSON_KEY_FILE: ${USERNAME_REGISTRY_JSON_KEY_FILE}
  [...]

stages:
  - build
  - deploy
  - remove

build-zeppelin:
  stage: build
  image: docker:latest
  variables:
    image_name: "zeppelin"
  only:
    - ${USERNAME}#Gitlab-Repo
  tags:
    - cloudrunner
  script:
    - docker login -u _json_key -p "${REGISTRY_JSON_KEY_FILE?}" https://eu.gcr.io
    - image_name_fqdn="eu.gcr.io/${CI_K8S_PROJECT?}/${image_name?}:latest"
    - docker build -t ${image_name_fqdn?} .
    - docker push ${image_name_fqdn?}
    - echo "Your new image is '${image_name_fqdn?}'. Have fun!"

[...]
So in the beginning we reference the important information using a USERNAME prefix. This works quite well, but it is problematic, since we need to correct the values after every pull request from another user.
So we are searching for a way to keep the .gitlab-ci.yml file the same for every developer while still referencing some GitLab variables that differ for every developer.
Things we thought about that don't seem to work:
Using multiple YAML files and importing them into each other => not supported.
Trying to combine GitLab environment variables as a prefix:
CI_K8S_PROJECT: ${${GITLAB_USER_ID}_CI_K8S_PROJECT}
or
INDIVIDUAL_CI_K8S_PROJECT: ${GITLAB_USER_ID}_CI_K8S_PROJECT
CI_K8S_PROJECT: ${INDIVIDUAL_CI_K8S_PROJECT}
We found a solution using indirect expansion (a Bash feature):
before_script:
  - variableName=${GITLAB_USER_ID}_CI_K8S_PROJECT
  - export wantedValue=${!variableName}
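In context, this could look like the following inside a job (the job name and the echo line are illustrative only):

build-zeppelin:
  stage: build
  before_script:
    # Bash indirect expansion: ${!variableName} resolves the variable
    # whose name is stored in variableName, e.g. 1234_CI_K8S_PROJECT
    - variableName=${GITLAB_USER_ID}_CI_K8S_PROJECT
    - export wantedValue=${!variableName}
  script:
    - echo "Resolved project is ${wantedValue}"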
But we also realized that our setup was somewhat flawed: it does not make sense to have multiple branches for each user and to use prefixed variables, since this leads to problems such as the above, as well as security concerns, because all variables are accessible to all users.
It is way easier if each user forks the root project and simply creates a merge request for new features. This way there is no renaming/prefixing of variables or branches necessary at all.
The solution from #nik works only in Bash. For sh, this works:
before_script:
  - variableName=...
  - export wantedValue=$( eval echo \$$variableName )
Something like this works (on 15.0.5-ee):
variables:
  IMAGE_NAME: "test-$CI_PROJECT_NAME"
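A short sketch of how such a concatenated variable could then be used in a job (the job name and the docker command are illustrative assumptions):

variables:
  IMAGE_NAME: "test-$CI_PROJECT_NAME"

build:
  script:
    # for a project named 'demo', $IMAGE_NAME expands to 'test-demo'
    - docker build -t "$IMAGE_NAME" .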
I have the following problem. I'm keeping two separate Ansible project directories for two different technologies. Imagine you have a nice Ansible setup and want to pull an Ansible project and use some of your established structure without integrating it completely.
The first statement does what I want: it yields a fully qualified path.
debug: msg="{{lynx_ansible}}/roles/centos_common/centos_{{jdk_provider}}.yml"
include: "{{lynx_ansible}}/roles/centos_common/centos_{{jdk_provider}}.yml"
The include prefixes the path with the Ansible project root directory and doesn't expand the variables. Is there a way to do this?
Try $lynx_ansible rather than {{ lynx_ansible }}. include doesn't seem to support Jinja2 syntax.
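A sketch of what that suggestion would look like in the playbook (untested; this relies on the legacy $variable syntax that older Ansible versions accepted in include paths):

# Hypothetical: legacy $variable syntax instead of {{ }} in the include path
- include: "$lynx_ansible/roles/centos_common/centos_$jdk_provider.yml"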