Helmfile - "needs" keyword has no effect - helmfile

I have been trying to make use of the needs keyword (following the documentation) to control the order in which the releases are installed.
Here is my helmfile:
helmDefaults:
  createNamespace: false
  timeout: 600
helmBinary: /usr/local/bin/helm
releases:
- name: dev-sjs-pg
  chart: ../helm_charts/sjs-pg
- name: dev-sjs
  chart: ../helm_charts/sjs
  needs: ['dev-sjs-pgg']
Regarding versions:
helmfile version v0.139.9
helm version.BuildInfo{Version:"v3.5.4", GitCommit:"1b5edb69df3d3a08df77c9902dc17af864ff05d1", GitTreeState:"clean", GoVersion:"go1.15.11"}
When I run helmfile sync, both releases are installed simultaneously. In particular, there is no error due to my spelling mistake (dev-sjs-pgg instead of dev-sjs-pg). It is as if needs is simply not read.
Could you help me understand what I am doing wrong, please?

I tried to reproduce this. When executing helmfile --log-level=debug sync I see in the debug log:
processing 2 groups of releases in this order:
GROUP   RELEASES
1       dev-sjs-pg
2       dev-sjs
I also see these are deployed one after another (just a few seconds apart, because I am deploying a fast nginx chart).
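For reference, with the typo corrected so that dev-sjs actually waits for dev-sjs-pg, the releases block from the question would be:

releases:
- name: dev-sjs-pg
  chart: ../helm_charts/sjs-pg
- name: dev-sjs
  chart: ../helm_charts/sjs
  needs: ['dev-sjs-pg']   # must match the other release's name exactly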

Related

Google cloud build with pack and secrets manager not accessing environment variables

I'm using a standard gcr.io/k8s-skaffold/pack build function to build my app for google cloud run using google cloud build.
In my cloudbuild.yaml I load 2 secrets from Google Secret Manager and pass them to the build function. The Cloud Build job has access to those secrets; otherwise I would get an error for this (I got that kind of error at the beginning when setting up the build, but now it seems to have access).
However, it seems like the environment variables don't get set.
I think that it might be a syntactical problem of how I try to pass the variables.
This is the stripped-down cloudbuild.yaml:
steps:
  - name: gcr.io/k8s-skaffold/pack
    args:
      - build
      - '$_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA'
      - '--builder=gcr.io/buildpacks/builder:v1'
      - '--network=cloudbuild'
      - '--path=.'
      - '--env=SEC_A=$$SEC_A'
      - '--env=SEC_B=$$SEC_B'
    secretEnv: ['SEC_A', 'SEC_B']
    id: Buildpack
    entrypoint: pack
availableSecrets:
  secretManager:
    - versionName: projects/<pid>/secrets/SEC_A/versions/latest
      env: SEC_A
    - versionName: projects/<pid>/secrets/SEC_B/versions/latest
      env: SEC_B
An error message that I hacked into the build for checking shows me that the env vars are empty during this build step.
I tried using $, $$ (as seen above), &&, and ${...} for substitution, but maybe the problem lies somewhere else.
Yes, this is a common issue and a trap on Cloud Build. Your secrets can't be read if you use the args[] array to pass arguments; you have to use the script mode, like this:
steps:
  - name: gcr.io/k8s-skaffold/pack
    entrypoint: bash
    args:
      - -c
      - |
        pack build $_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA --builder=gcr.io/buildpacks/builder:v1 --network=cloudbuild --path=. --env=SEC_A=$$SEC_A --env=SEC_B=$$SEC_B
    secretEnv: ['SEC_A', 'SEC_B']
    id: Buildpack
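A quick way to confirm the secrets actually reach a script-mode step (a debug sketch only, not part of the fix) is to print their lengths. The $$ escapes Cloud Build's own substitution, so bash evaluates the expression at run time without leaking the values into the log:

steps:
  - name: gcr.io/k8s-skaffold/pack
    entrypoint: bash
    secretEnv: ['SEC_A', 'SEC_B']
    args:
      - -c
      - |
        # prints only the lengths, never the secret values themselves
        echo "SEC_A length: $${#SEC_A}"
        echo "SEC_B length: $${#SEC_B}"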

Caddy not working in api-platform 2.6.4 distribution - panic: proto: file "pb.proto" is already registered

When I try to use api-platform version 2.6.4, I am not able to run it. When I build and start the containers and check the logs, Caddy is not working and I get the error below. Any idea? The Caddy version is 2.3.0.
caddy_1 | panic: proto: file "pb.proto" is already registered
caddy_1 | See https://developers.google.com/protocol-buffers/docs/reference/go/faq#namespace-conflict
tureality_caddy_1 exited with code 2
Other people have reported this bug and I had it too.
Fortunately, the bug has just been fixed by Dunglas himself. :)
https://github.com/api-platform/api-platform/issues/1881#issuecomment-822663193
The fix was made at the Mercure level and not in the API Platform source code itself, so you can keep your current version.
You just have to docker-compose up again and it will work.
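As a minimal sketch, assuming the stack uses the standard API Platform docker-compose setup, pulling the patched images before bringing the stack back up looks like this:

docker-compose pull            # fetch the images containing the Mercure fix
docker-compose up -d --force-recreate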

Configurable Replica Number in Template using ValueFrom

I have a template.yml file that is used when deploying to any OpenShift project. Each project has a specific project-props configMap to be used; this is part of our CI/CD pipeline, so each project has a unique project-props available to it.
I would like to be able to control the number of replicas and CPU/Memory limits based on what project I am deploying to. For example a branch testing OpenShift project vs Performance testing OpenShift project would have a different CPU request and limit than an ephemeral OpenShift project.
My template.yml file looks something like this:
// <snip>
spec:
  replicas: "${OS_REPLICAS}"
// <snip>
resources:
  limits:
    cpu: "${OS_CPU_LIMIT}"
    memory: "${OS_MEMORY_LIMIT}"
  requests:
    cpu: "${OS_CPU_REQUEST}"
    memory: "${OS_MEMORY_REQUEST}"
// <snip>
parameters:
  - name: OS_REPLICAS
    displayName: OS Number of Replicas
    valueFrom:
      configMapKeyRef:
        name: project-props
        key: os.replicas
// rest of params are similar
My relevant project-props section is:
os.replicas=2
os.cpu.limit=2
os.cpu.request=250m
os.memory.limit=1Gi
os.memory.request=1Gi
When I try to deploy this I get the following error:
quantities must match the regular expression '^([+-]?[0-9.]+)([eEinumkKMGTP]*[-+]?[0-9]*)$'
If I change template.yml to have a parameter value defined, it works fine:
parameters:
  - name: OS_REPLICAS
    displayName: OS Number of Replicas
    value: 2
It seems that valueFrom and value behave differently. Is this impossible to do using valueFrom? Is there another way I can dynamically change spec and resources using a configMap?
The alternative is to deploy and then use oc scale dc <deploy_config_name> --replicas=<number>, but that's not very elegant.
Where you have:
spec:
  replicas: "${OS_REPLICAS}"
you should have:
spec:
  replicas: "${{OS_REPLICAS}}"
With a template parameter of:
parameters:
  - name: OS_REPLICAS
    displayName: OS Number of Replicas
    value: 2
See https://docs.openshift.org/latest/dev_guide/templates.html#writing-parameters for the use of "${{}}".
What it does is interpret the contents of the parameter as JSON/YAML, rather than a string value. This allows you to supply an integer, which replicas requires.
So you don't need valueFrom, which wouldn't work anyway as that is only usable for environment variables and not arbitrary fields like replicas.
As to trying to set a default for memory and CPU for pods deployed in a project, you should look at having a LimitRange resource defined against the project and set a default.
https://docs.openshift.com/container-platform/3.5/dev_guide/compute_resources.html#dev-limit-ranges
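As a rough sketch (the object name and values below are illustrative, not taken from the question), a LimitRange that sets project-wide defaults could look like:

apiVersion: v1
kind: LimitRange
metadata:
  name: resource-defaults
spec:
  limits:
    - type: Container
      default:            # applied as the limit when a container specifies none
        cpu: "2"
        memory: 1Gi
      defaultRequest:     # applied as the request when a container specifies none
        cpu: 250m
        memory: 1Gi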
I figured out the answer. It does not read the values from the file, but at least they can be dynamic.
OpenShift has an oc process command that can be run when using a template.
So this works by doing:
oc process -f <template_name>.yaml -v <param_name>=<param_value>
This will overwrite the parameter value with the one passed via -v.
An actual example would be:
oc process -f ./src/main/openshift/service.template.yaml -v OS_REPLICAS=2
You can read more about it in the OpenShift template documentation.
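To actually create the objects from the processed template, the output can be piped back into oc, e.g. (same hypothetical parameter as above):

oc process -f ./src/main/openshift/service.template.yaml -v OS_REPLICAS=2 | oc create -f -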
It seems that the OpenShift Origin team does not want to support using files for parameter insertion. You can read more about it here:
https://github.com/openshift/origin/pull/10952
https://github.com/openshift/origin/issues/10687

drone.io 0.5 slack no longer working

We had slack notification working in drone.io 0.4 just fine, but since we updated to 0.5 I can't get it working despite trying out the documentation.
Before, it was like this
build:
  build and deploy stuff...
notify:
  slack:
    webhook_url: $$SLACK_WEBHOOK_URL
    channel: continuous_integratio
    username: drone
You can see here that I used the $$ to reference the special drone config file of old.
Now my latest attempt looks like this
pipeline:
  build and deploy stuff...
  slack:
    image: plugins/slack
    webhook: https://hooks.slack.com/services/...
    channel: continuous_integratio
    username: drone
According to the documentation slack is now indented within the pipeline (previously build) level.
I tried changing slack out for notify like it was before, used the SLACK_WEBHOOK secret only via the drone cli, and there were other things I attempted as well.
Does anyone know what I might be doing wrong?
This is an (almost exact) yaml I am using with slack notification enabled, with the exception that I've masked the credentials:
pipeline:
  build:
    image: golang
    commands:
      - go build
      - go test
  slack:
    image: plugins/slack
    webhook: https://hooks.slack.com/services/XXXXXXXXX/YYYYYYYYY/ZZZZZZZZZZZZZZZZZZZZZZZZ
    when:
      status: [ success, failure ]
There is unfortunately nothing in your example that jumps out, perhaps with the exception of the channel name having a typo (although I'm not sure whether that represents your real yaml configuration or not).
If you are attempting to use secrets (via the cli) you need to make sure you sign your yaml file and commit the signature file to your repository. You can then reference your secret in the yaml similar to 0.4 but with a slightly different syntax:
pipeline:
  build:
    image: golang
    commands:
      - go build
      - go test
  slack:
    image: plugins/slack
    webhook: ${SLACK_WEBHOOK}
    when:
      status: [ success, failure ]
You can read more about secrets at http://readme.drone.io/usage/secret-guide/
You can also invoke the plugin directly from the command line to help test different input values. This can help with debugging. See https://github.com/drone-plugins/drone-slack#usage
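As a sketch of such a local test (the PLUGIN_* variable names follow the usual Drone convention of mapping plugin settings to environment variables; check the drone-slack README for the exact set it supports):

docker run --rm \
  -e PLUGIN_WEBHOOK=https://hooks.slack.com/services/XXXXXXXXX/YYYYYYYYY/ZZZZZZZZZZZZZZZZZZZZZZZZ \
  -e PLUGIN_USERNAME=drone \
  -e DRONE_BUILD_STATUS=failure \
  plugins/slack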
The issue was that in 0.4 the notify plugin was located outside the scope of the pipeline (then called build), while since 0.5 it is located inside the pipeline. Combine this with the fact that when a pipeline fails it quits the scope immediately, and the slack (previously notify) step never gets reached at all anymore.
The solution to this is to just explicitly tell it to execute the step on failure with the when block:
when:
  status: [ success, failure ]
This is actually mentioned in the getting-started guide, but I didn't go through it to the end, as I was aiming to get things up and running quickly and didn't worry about what I considered to be edge cases.

Jenkins Gerrit plugin offers patch revision instead of master following a merge to master

I experienced a merge-order problem whereby a Gerrit change ("100" for argument's sake) was code reviewed after a later change ("101") had already been code reviewed. This caused Jenkins to build and release Gerrit ID "100", and the code previously released from Gerrit ID "101" was no longer the latest release.
I am wondering if I have a fundamental issue - my initial thoughts are that "choosing-strategy: gerrit" is correct for maven verify, but should be "choosing-strategy: default" when I build the code reviewed code from master.
I have the following Jenkins config in JJB format for the job which builds master and from which a release is generated:
parameters:
  - string:
      name: GERRIT_REFSPEC
      default: refs/heads/master
triggers:
  - gerrit:
      trigger-on:
        - change-merged-event
scm:
  - git:
      url: '{gerrit_url}/{repo}'
      credentials-id: '{gerrit_credentials_id}'
      name: origin
      refspec: $GERRIT_REFSPEC
      branches:
        - $GERRIT_BRANCH
      choosing-strategy: gerrit
UPDATE (April 2018): What seems to be happening is that following an event where a user has code reviewed and merged to master, the GERRIT_REFSPEC passed to Jenkins turns out to yield the patch, i.e. the code as it looked BEFORE it was merged into master.
Therefore, what I first thought was an obscure merge-order problem turned out to be that we were simply building the wrong thing in the first place. The suggested choosing-strategy provides a decent enough workaround, but I'm not sure I would call it a solution.
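A sketch of the scm section for the master-build job under that workaround (same hypothetical JJB variables as above), checking out the merged branch tip rather than the patchset ref:

scm:
  - git:
      url: '{gerrit_url}/{repo}'
      credentials-id: '{gerrit_credentials_id}'
      name: origin
      refspec: refs/heads/master
      branches:
        - refs/heads/master
      choosing-strategy: default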
