Helm, evaluate linux env variable in values.yaml - spring-boot

I have the following variable JVM_ARGS:
base-values.yaml
app:
  env:
    PORT: 8080
    ...
    JVM_ARGS: >
      -Dspring.profiles.active=$(SPRING_PROFILE),test
      -Dspring.config.additional-location=/shared
      -javaagent:/app/dd-java-agent.jar
service-x-values.yaml
app:
  env:
    SPRING_PROFILE: my-local-profile
The values files are evaluated in this order:
base-values.yaml
service-x-values.yaml
I need JVM_ARGS to be evaluated against SPRING_PROFILE and so far I cannot make it work.
What is the best way to do something like that?
I'm new to helm and Kubernetes and have a feeling that I'm missing something basic.
What I tried:
defining JVM_ARGS both with and without surrounding double quotes.
UPD:
The problem was that I had some custom Helm charts built by other devs and I had little knowledge of how those charts worked. I only worked with values files, which were applied against the chart templates.
I wanted the property to be resolved by helm to
-Dspring.profiles.active=my-local-profile,vault
At the end I decided to see how Spring Boot itself resolves properties and came up with the following:
-Dspring.profiles.active=${SPRING_PROFILE},vault
Since spring.profiles.active is a regular property, environment variables are allowed there, and Spring will resolve the property at runtime, which worked for me.
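For reference, a minimal sketch of how base-values.yaml ended up looking under that approach (assuming the chart passes the app.env entries through as container environment variables; Spring Boot expands ${SPRING_PROFILE} itself at startup):
app:
  env:
    PORT: 8080
    JVM_ARGS: >
      -Dspring.profiles.active=${SPRING_PROFILE},vault
      -Dspring.config.additional-location=/shared
      -javaagent:/app/dd-java-agent.jar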

I'm a bit confused: are you referring to an environment variable (as in the title of the question) or to a Helm value?
1. Helm does not evaluate environment variables in the values files. $(SPRING_PROFILE) is treated as a literal string; it is not evaluated.
2. Actually, Helm does not evaluate ANYTHING in the values files. They are a source of data, not templates.
3. Placeholders (actually Go templates) are evaluated only inside template files.
4. As a consequence of point 3, you cannot reference one Helm variable from another.
If you really need to get the Spring profile from a Linux environment variable, you could achieve it by setting a Helm variable when calling helm install and the like (although using --set is considered a bad practice):
helm install --set app.env.spring_profile=$SPRING_PROFILE ...
Although even then, app.env.spring_profile couldn't be evaluated inside base-values.yaml. You would need to move it directly into your template file, e.g.:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  ...
  template:
    ...
    spec:
      containers:
        - name: my-app
          ...
          env:
            - name: SPRING_PROFILES_ACTIVE
              value: "{{ .Values.app.env.spring_profile }},test"
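For completeness, a hypothetical invocation that layers both values files and supplies the override (the chart path and release name are placeholders):
helm install my-app ./my-chart \
  -f base-values.yaml \
  -f service-x-values.yaml \
  --set app.env.spring_profile=$SPRING_PROFILE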

Related

What is the equivalent of the --set functionality for helm install command, when using ansible community.kubernetes.helm plugin?

I want to deploy a Helm chart using ansible-playbook; my command looks like this:
helm install istio-operator manifests/charts/istio-operator --set operatorNamespace=istio-operator
however, I could not find the equivalent of the --set argument in the Ansible plugin.
The bad news is the documentation fails to document the values: parameter, but one can see its use in the Examples section
- community.kubernetes.helm:
    name: istio-operator
    chart_ref: manifests/charts/istio-operator
    values:
      operatorNamespace: istio-operator
If for some reason that doesn't work, note that using --set is (plus or minus) the same as putting that key-value pair in a YAML file and then calling --values $the_filename, so you'd want to do that same operation manually: create the file on the target machine (not the controller), then invoke community.kubernetes.helm with the documented values_files: parameter pointed at that newly created YAML file.
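A rough sketch of that manual approach, with illustrative paths and namespace (values_files is a documented parameter of the module):
- name: Write the values file on the target machine
  ansible.builtin.copy:
    dest: /tmp/istio-operator-values.yaml
    content: |
      operatorNamespace: istio-operator

- name: Install the chart using that values file
  community.kubernetes.helm:
    name: istio-operator
    chart_ref: manifests/charts/istio-operator
    release_namespace: istio-operator
    values_files:
      - /tmp/istio-operator-values.yaml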

Setting up Idempotency for Parse-Platform on k8s

I'm trying to enable the Idempotency options on a Parse-Platform server.
I've tried adding the following to my k8s config:
spec:
  containers:
    - args:
        - --idempotencyOptions
        - '{"paths": ["classes/.*"], "ttl": 30}'
I've also tried to add it to env vars (according to https://github.com/parse-community/parse-server/issues/7151 you have to write the JSON object as a string):
env:
  - name: PARSE_SERVER_EXPERIMENTAL_IDEMPOTENCY_OPTIONS
    value: '{"paths": ["classes/.*"], "ttl": 30}'
No luck. With the first option I get an error/crash loop stating that idempotencyOptions isn't valid. With the second one, it boots up fine, but duplicates still occur even when the proper header (X-Parse-Request-Id) is added.
Anyone have any other ideas?
So, it's always good to double-check your versions.
Both of these configurations work; my Parse Server was 4.2.0 and this feature was introduced in 4.3.0. Once I upgraded, everything worked as intended.

PROJECT_ID env and Secret Manager Access

I would like to use Secret Manager to store a credential for our Artifactory, within a Cloud Build step. I have it working using a build similar to:
steps:
  - name: 'busybox:glibc'
    entrypoint: 'sh'
    args: ['-c', 'env']
    secretEnv: ['SECRET_VALUE']
availableSecrets:
  secretManager:
    - versionName: "projects/PROJECT_ID/secrets/TEST-SECRET/versions/1"
      env: 'SECRET_VALUE'
All great, no problems - I then try and slightly improve it to:
steps:
  - name: 'busybox:glibc'
    entrypoint: 'sh'
    args: ['-c', 'env']
    secretEnv: ['SECRET_VALUE']
availableSecrets:
  secretManager:
    - versionName: "projects/$PROJECT_ID/secrets/TEST-SECRET/versions/1"
      env: 'SECRET_VALUE'
But then it throws the error:
ERROR: (gcloud.builds.submit) INVALID_ARGUMENT: failed to get secret name from secret version "projects/$PROJECT_ID/secrets/TEST-SECRET/versions/1"
I have been able to add a TRIGGER-level env var (SECRET_MANAGER_PROJECT_ID), and that works fine. The only issue is that, because it is a trigger env var, it is not available on rebuild, which breaks a lot of things.
Does anyone know how to get the PROJECT_ID of a Secret Manager secret from within Cloud Build without using a trigger parameter?
For now, it's not possible to set a dynamic value in the secret field. I already provided this feedback directly to the Google Cloud PM; it has been taken into account, but I don't have more info to share, especially about availability.
EDIT 1
(January 22). Thanks to Seza443's comment, I tested again and it now works with the automatically populated variables (PROJECT_ID and PROJECT_NUMBER), as well as with user-defined substitution variables!
It appears that Cloud Build now allows for the use of substitution variables within the availableSecrets field of a build configuration.
From Google Cloud's documentation on using secrets:
After all the build steps, add an availableSecrets field to specify the secret version and environment variables to use for your secret. You can include substitution variables in the value of the secretVersion field. You can specify more than one secret in a build.
I was able to use the $PROJECT_ID variable in my own build configuration like so:
...
availableSecrets:
  secretManager:
    - versionName: projects/$PROJECT_ID/secrets/api-key/versions/latest
      env: API_KEY
Granted, there appears to be (at least at present) some discrepancy between the documentation quoted above and the recommended configuration file schema. In the documentation they refer to secretVersion, but that appears to have changed to versionName. In either case, it seems to work properly.
Use the $PROJECT_NUMBER substitution instead.
https://cloud.google.com/build/docs/configuring-builds/substitute-variable-values#using_default_substitutions
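A sketch of what that looks like with the secret from the question, assuming the default substitutions are available to the build:
availableSecrets:
  secretManager:
    - versionName: "projects/$PROJECT_NUMBER/secrets/TEST-SECRET/versions/1"
      env: 'SECRET_VALUE'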

google deployment manager, can you import files in jinja template that you call directly with --template?

https://cloud.google.com/deployment-manager/docs/configuration/templates/create-basic-template
I can deploy a template directly like this: gcloud deployment-manager deployments create a-single-vm --template vm_template.jinja
But what if that template depends on other files that need to be imported? When using a --config file you can define imports in that file and call the template as a resource, but you can't pass parameters/properties to a config file. I want to call a template directly so I can pass --properties via the command line, but that template also needs to import other files.
EDIT: What I needed was a top-level Jinja template instead of a config. My confusion was that you can't use imports in a Jinja template without a schema file; it was failing, so I thought it wasn't supported. The solution was simply to swap out the config for a Jinja template (with a schema file), and then I can use --properties.
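For reference, a minimal sketch of that setup with illustrative file names. The imports move into a schema file that sits next to the template, e.g. vm_template.jinja.schema:
info:
  title: Single VM template
imports:
  # Files the template depends on; Deployment Manager uploads them along with the template.
  - path: helper-template.jinja
properties:
  zone:
    type: string
    default: us-central1-f
The deployment can then still be created directly from the template, passing properties on the command line:
gcloud deployment-manager deployments create a-single-vm --template vm_template.jinja --properties zone:us-central1-f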
Maybe you can try importing the dependent files into your config file as follows:
imports:
  - path: vm-template.jinja
  - path: vm-template-2.jinja

# In the resources section below, the properties of the resources are replaced
# with the names of the templates.
resources:
  - name: vm-1
    type: vm-template.jinja
  - name: vm-2
    type: vm-template-2.jinja
and set arbitrary metadata in it to create a special variable that you can pass and might use in other applications outside of Deployment Manager:
properties:
  size:
    type: integer
    default: 2
    description: Number of Mongo Slaves
    variable-x: ultra-secret-sauce
More info about gcloud deployment-manager deployments create optional flags and example can be found here.
More info about passing properties using a Schema can be found here
Hope it helps

How to reuse anchored entry under already unwrapped anchor?

I am trying to write a CircleCI config that will allow me to reuse both whole list/mapping(?) entries and their properties.
Having the following:
image_definitions:
  docker:
    - &default_localstack_image
      image: localstack/localstack:0.10.3
      environment:
        KINESIS_LATENCY: 0

defaults_env: &defaults_env
  environment:
    PG_PORT: 5432
    PG_USER: root
I would like to be able to replace:
test: &test
  docker:
    - image: localstack/localstack:0.10.3
      <<: *defaults_env
with something like:
test: &test
  docker:
    - *default_localstack_image
      <<: *defaults_env
but it doesn't work this way.
I've also tried:
test: &test
  docker:
    - *default_localstack_image
      *defaults_env
but that also didn't work.
How can I do that?
According to the documentation:
test: &test
  docker:
    - <<: [*default_localstack_image, *defaults_env]
However, be aware that the merge feature is not part of the YAML spec and has only been defined for the outdated YAML 1.1. I don't know whether it is actually implemented here. Even if it is, be aware that the merge key is the odd man out: in violation of the spec's rule that every tag is mapped to a type, it is instead interpreted as a transformation instruction, even though the loading process defined by the spec has no place for executing transformation steps.
Similar functionality (for example, concatenating scalars) is requested more or less frequently on SO but is not available (and probably never will be). If you need to do something like this, my advice is to do what, e.g., Ansible and SaltStack do and use a templating engine as a preprocessor for your YAML file, as sketched below.
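For illustration only, a rough sketch of that preprocessor idea, with made-up file names and values: keep a Jinja-templated source and render it to plain YAML before the CI system reads it, so no anchors or merge keys are needed.
# config.yml.j2 -- rendered to .circleci/config.yml by a preprocessing step
{% set localstack_image = "localstack/localstack:0.10.3" %}
test:
  docker:
    - image: {{ localstack_image }}
      environment:
        KINESIS_LATENCY: 0
        PG_PORT: 5432
        PG_USER: root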
