In Helm, how to get the index of an array element? - go

I read the Lists and List Functions documentation but didn't find what I needed.
I have the following in my values.yaml file.
# Default environment
environment: sandbox
environments:
- sandbox
- staging
- production
How can I get the index value of the environments array based on a passed-in environment during a helm install --set environment=xxx command? So if I don't pass a value, I get 0 since the default environment is sandbox. If I pass --set environment=production, I get 2.
I think I need the opposite of the index function.
Note: I know I can use a map
environments:
  sandbox: 0
  staging: 1
  production: 2
and have
index .Values.environments .Values.environment
But there's got to be a way to do what I want to do with an array?
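For what it's worth, a minimal sketch of one way to do this in a template: range over the list and capture the matching index (assumes a Helm version whose Go templates allow variable reassignment, i.e. Helm 2.14+; $idx stays -1 if nothing matches):
{{- $idx := -1 }}
{{- range $i, $e := .Values.environments }}
{{- if eq $e $.Values.environment }}
{{- $idx = $i }}
{{- end }}
{{- end }}
environment-index: {{ $idx }}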

Related

Modifying conda configuration file does not reflect changes in environment

I am trying to change the default installation location for Conda environments because the system I am using (a supercomputing cluster) has a ~20GB user home quota. Under normal circumstances, this could easily be done by editing ~/.condarc and adding an envs_dirs section, which is explained quite well in this question and answer.
However, it seems that the compute environment I am in (i.e., the supercomputer) does not let me modify the priority of the various locations for environments. In an ideal world, I would be able to place /work/helikarlab/joshl/.conda/envs, which is a high-storage partition, at the top of the list, so I can install additional environments if needed.
My ~/.condarc is configured as follows:
env_prompt: ({name})
channels:
- conda-forge
- bioconda
- defaults
auto_activate_base: false
envs_dirs:
- /work/helikarlab/joshl/.conda/envs/
Yet, I observe the following entries with conda config --show envs_dirs
envs_dirs:
- /home/helikarlab/joshl/.conda/envs
- /util/opt/anaconda/deployed-conda-envs/packages/python/envs
- /util/opt/anaconda/deployed-conda-envs/packages/perl/envs
- /util/opt/anaconda/deployed-conda-envs/packages/git/envs
- /util/opt/anaconda/deployed-conda-envs/packages/nano/envs
- /work/helikarlab/joshl/.conda/envs
- /home/helikarlab/joshl/.conda/envs/base_env/envs
Does anyone know why my attempt to set envs_dirs is not working? How can I give /work/helikarlab/joshl/.conda/envs the highest priority?
Additional Info
Here is the result from conda config --show-sources
==> /util/opt/anaconda/4.9.2/.condarc <==
allow_softlinks: False
auto_update_conda: False
auto_activate_base: False
notify_outdated_conda: False
repodata_threads: 4
verify_threads: 4
execute_threads: 2
aggressive_update_packages: []
pkgs_dirs:
- ${WORK}/.conda/pkgs
- ${HOME}/.conda/pkgs
channel_priority: disabled
channels:
- hcc
- https://conda.anaconda.org/t/<TOKEN>/hcc
- conda-forge
- bioconda
- defaults
- file:///util/opt/conda_repo
==> /home/helikarlab/joshl/.condarc <==
auto_activate_base: False
env_prompt: ({name})
envs_dirs:
- /work/helikarlab/joshl/.conda/envs/
channel_priority: disabled
channels:
- conda-forge
- bioconda
- defaults
==> envvars <==
envs_path:
- /home/helikarlab/joshl/.conda/envs
- /util/opt/anaconda/deployed-conda-envs/packages/python/envs
- /util/opt/anaconda/deployed-conda-envs/packages/perl/envs
- /util/opt/anaconda/deployed-conda-envs/packages/git/envs
- /util/opt/anaconda/deployed-conda-envs/packages/nano/envs
Background: Conda's configuration priorities
As documented in "The Conda Configuration Engine for Power Users" post, Conda sources configuration values from four sources, listed from lowest to highest priority:
1. Default values in the Python code
2. .condarc configuration files (system < user < environment < working directory)
3. Environment variables (CONDA_* variables)
4. Command-line specifications
Problem: Environment variable prioritized
We can observe how this plays out in OP's case, with the --show-sources result. Specifically, there are three places where envs_dirs is defined:
1. System-level configuration file at /util/opt/anaconda/4.9.2/.condarc
2. User-level configuration file at /home/helikarlab/joshl/.condarc
3. Environment variable CONDA_ENVS_PATH[1]
And since the environment variable takes priority and defines the preferred directory to be /home/helikarlab/joshl/.conda/envs, that will take precedence no matter what is set with conda config and .condarc files.
Workarounds
All the following workarounds involve manipulating the environment variable. It is unclear when the variable is set (probably via a system-level shell configuration file). It should be reliable to manipulate the variable by appending any of the following workarounds to a user-level shell configuration file (e.g., ~/.bashrc, ~/.bash_profile, ~/.zshrc).
Option 1: Unset variable
One could completely remove the variable with
unset CONDA_ENVS_PATH
This would then allow the user-level .condarc to take priority.
However, this variable also appears to provide locations for several system-level shared environments. It is unclear how integral these shared environments are for normal functionality. So, removing the variable altogether could have additional consequences.
Option 2: Replace value
Conveniently, the default and desired locations differ only in /home versus /work. This could be changed directly in the variable with:
export CONDA_ENVS_PATH=${CONDA_ENVS_PATH/\/home/\/work}
Option 3: Prepend desired default
The most general override would be to prepend the desired default path to the environment variable:
export CONDA_ENVS_PATH="/work/helikarlab/joshl/.conda/envs/:${CONDA_ENVS_PATH}"
This is probably the most robust, since it assumes nothing about the inherited value.
Additional Note
Users with small disk quotas in default locations should also consider moving the package cache (pkgs_dirs) to coordinate with the environments directory. Details in this answer.
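For instance, a sketch of what that could look like, mirroring the envs_dirs path used above (the path itself is illustrative):
# prepend a high-storage package cache location to pkgs_dirs in ~/.condarc
conda config --add pkgs_dirs /work/helikarlab/joshl/.conda/pkgs
which results in a ~/.condarc entry like:
pkgs_dirs:
- /work/helikarlab/joshl/.conda/pkgs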
[1]: CONDA_ENVS_DIRS and CONDA_ENVS_PATH are interchangeable; however, only one can be defined at a time. The former is the contemporary usage, so I believe the latter is supported for backward compatibility.

Serverless stage environment variables using dotenv (.env)

I'm new to Serverless.
So far I have been able to deploy and use .env for the app.
Then I changed the stage property under provider in the serverless.yml file to a different stage. I also made a new .env.{stage} file.
After re-deploying with sls deploy, it still reads the default .env file.
the documentation states:
The framework looks for .env and .env.{stage} files in service directory and then tries to load them using dotenv. If .env.{stage} is found, .env will not be loaded. If stage is not explicitly defined, it defaults to dev.
So, I still don't understand "If stage is not explicitly defined, it defaults to dev". How do I explicitly define it?
The dotenv file is chosen based on your stage property configuration. You need to explicitly define the stage property in your serverless.yml or set it within your deployment command.
This will use the .env.dev file
useDotenv: true
provider:
  name: aws
  stage: dev # dev [default], stage, prod
  memorySize: 3008
  timeout: 30
Or you can set the stage property via the deploy command.
This will use the .env.prod file
sls deploy --stage prod
In your serverless.yml you need to define the stage property inside the provider object.
Example:
provider:
  name: aws
  [...]
  stage: prod
As of Feb 2023, I'm going to attempt to give my solution. I'm using Nx tooling for a monorepo (this shouldn't matter, but just in case) and I'm using serverless.ts instead.
I see the purpose of this as enhancing the developer experience: it is nice to just run nx run users:serve --stage=test (in my case, using Nx) or sls offline --stage=test and have Serverless load the appropriate variables for that specific environment.
Some people went the route of using several .env.<stage> files, one per environment. I tried to go this route, but because I'm not that good of a developer I couldn't make it work. The approach that worked for me was to concatenate variable names inside serverless.ts. Let me explain...
I'm using just one .env file instead, but changing variable names based on the --stage. The magic happens in serverless.ts.
# .env
# DEVELOPMENT
STAGE_development=development
DB_NAME_development=mycraftypal
DB_USER_development=postgres
DB_PASSWORD_development=abcde1234
DB_PORT_development=5432
READER_development=localhost # this could be an aws rds uri per db instance
WRITER_development=localhost # this could be an aws rds uri per db instance
# TEST
STAGE_test=test
DB_NAME_test=mycraftypal
DB_USER_test=postgres
DB_PASSWORD_test=abcde1234
DB_PORT_test=5433
READER_test=localhost # this could be an aws rds uri per db instance
WRITER_test=localhost # this could be an aws rds uri per db instance
// serverless.base.ts or serverless.ts, based on your configuration
...
useDotenv: true, // this property is at the root level
...
provider: {
  ...
  stage: '${opt:stage, "development"}', // get the --stage flag value or default to development
  ...,
  environment: {
    STAGE: '${env:STAGE_${self:provider.stage}}',
    DB_NAME: '${env:DB_NAME_${self:provider.stage}}',
    DB_USER: '${env:DB_USER_${self:provider.stage}}',
    DB_PASSWORD: '${env:DB_PASSWORD_${self:provider.stage}}',
    READER: '${env:READER_${self:provider.stage}}',
    WRITER: '${env:WRITER_${self:provider.stage}}',
    DB_PORT: '${env:DB_PORT_${self:provider.stage}}',
    AWS_NODEJS_CONNECTION_REUSE_ENABLED: '1',
  },
  ...
}
When one is utilizing useDotenv: true, Serverless loads your variables from the .env file and makes them available through the env variable source, so you can access them as, e.g., env:STAGE.
Now I can access a variable with a dynamic stage like so: ${env:DB_PORT_${self:provider.stage}}. If you look at the .env file, each variable has a ..._<stage> suffix at the end; in this way I can retrieve each value dynamically.
I'm still figuring one thing out: I don't want to have the word production in my URL but still want to get the values dynamically, and since I'm concatenating ${env:DB_PORT_${self:provider.stage}}, the actual variable looked up becomes DB_PORT_ instead of DB_PORT.
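One possible way around that (a sketch, relying on the Serverless Framework's variable fallback syntax; untested in this setup) is to fall back to the unsuffixed variable when the stage-suffixed one isn't defined:
// serverless.ts (sketch): use plain DB_PORT when DB_PORT_<stage> is not set
DB_PORT: '${env:DB_PORT_${self:provider.stage}, env:DB_PORT}',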

What is the equivalent of the --set functionality for helm install command, when using ansible community.kubernetes.helm plugin?

I want to deploy a Helm chart using ansible-playbook; my command looks like this:
helm install istio-operator manifests/charts/istio-operator --set operatorNamespace=istio-operator
However, I could not find the equivalent of the --set argument in the Ansible plugin.
The bad news is that the documentation fails to document the values: parameter, but one can see it used in the Examples section:
- community.kubernetes.helm:
    name: istio-operator
    chart_ref: manifests/charts/istio-operator
    values:
      operatorNamespace: istio-operator
If for some reason that doesn't work, using --set is (plus or minus) the same as putting that key-value pair in a YAML file and then calling --values $the_filename, so you'd want to do that same operation, only manually: create the file on the target machine (not the controller), then invoke community.kubernetes.helm with the documented values_files: parameter pointed at that newly created YAML file, as sketched below.
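A sketch of that manual route (the /tmp path and task names are illustrative):
- name: Write the chart values to a file on the target machine
  ansible.builtin.copy:
    dest: /tmp/istio-operator-values.yaml
    content: |
      operatorNamespace: istio-operator
- name: Install the chart with that values file
  community.kubernetes.helm:
    name: istio-operator
    chart_ref: manifests/charts/istio-operator
    values_files:
      - /tmp/istio-operator-values.yaml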

PROJECT_ID env and Secret Manager Access

I would like to use Secret Manager to store a credential for our Artifactory, within a Cloud Build step. I have it working using a build similar to:
steps:
- name: 'busybox:glibc'
  entrypoint: 'sh'
  args: ['-c', 'env']
  secretEnv: ['SECRET_VALUE']
availableSecrets:
  secretManager:
  - versionName: "projects/PROJECT_ID/secrets/TEST-SECRET/versions/1"
    env: 'SECRET_VALUE'
All great, no problems - I then try and slightly improve it to:
steps:
- name: 'busybox:glibc'
  entrypoint: 'sh'
  args: ['-c', 'env']
  secretEnv: ['SECRET_VALUE']
availableSecrets:
  secretManager:
  - versionName: "projects/$PROJECT_ID/secrets/TEST-SECRET/versions/1"
    env: 'SECRET_VALUE'
But then it throws the error:
ERROR: (gcloud.builds.submit) INVALID_ARGUMENT: failed to get secret name from secret version "projects/$PROJECT_ID/secrets/TEST-SECRET/versions/1"
I have been able to add a trigger-level env var (SECRET_MANAGER_PROJECT_ID), and that works fine. The only issue is that, as a trigger env var, it is not available on rebuild, which breaks a lot of things.
Does anyone know how to get the PROJECT_ID of a Secret Manager secret from within Cloud Build without using a trigger param?
For now, it's not possible to set a dynamic value in the secret field. I already provided this feedback directly to the Google Cloud PM; it has been taken into account, but I don't have more info to share, especially about availability.
EDIT 1
(January 22). Thanks to Seza443's comment, I tested again, and now it works with automatically populated variables (PROJECT_ID and PROJECT_NUMBER), but also with user-defined substitution variables!
It appears that Cloud Build now allows for the use of substitution variables within the availableSecrets field of a build configuration.
From Google Cloud's documentation on using secrets:
After all the build steps, add an availableSecrets field to specify the secret version and environment variables to use for your secret. You can include substitution variables in the value of the secretVersion field. You can specify more than one secret in a build.
I was able to use the $PROJECT_ID variable in my own build configuration like so:
...
availableSecrets:
  secretManager:
  - versionName: projects/$PROJECT_ID/secrets/api-key/versions/latest
    env: API_KEY
Granted, there appears to be (at least at present) some discrepancy between the documentation quoted above and the recommended configuration file schema: the documentation refers to secretVersion, but that appears to have changed to versionName. In either case, it seems to work properly.
Use the $PROJECT_NUMBER instead.
https://cloud.google.com/build/docs/configuring-builds/substitute-variable-values#using_default_substitutions
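Applied to the build config above, that would look like this (a sketch reusing the same secret as the original question):
availableSecrets:
  secretManager:
  - versionName: "projects/$PROJECT_NUMBER/secrets/TEST-SECRET/versions/1"
    env: 'SECRET_VALUE'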

Helm, evaluate Linux env variable in values.yaml

I have the following variable JVM_ARGS
base-values.yaml
app:
  env:
    PORT: 8080
    ...
    JVM_ARGS: >
      -Dspring.profiles.active=$(SPRING_PROFILE),test
      -Dspring.config.additional-location=/shared
      -javaagent:/app/dd-java-agent.jar
service-x-values.yaml
app:
  env:
    SPRING_PROFILE: my-local-profile
Values files are evaluated in this order:
1. base-values.yaml
2. service-x-values.yaml
I need JVM_ARGS to be evaluated against SPRING_PROFILE and so far I cannot make it work.
What is the best way to do something like that?
I'm new to helm and Kubernetes and have a feeling that I'm missing something basic.
What I tried:
defining JVM_ARGS with and without surrounding double quotes.
UPD:
The problem was that I had some custom Helm charts built by other devs, and I had little knowledge of how those charts worked. I only worked with values files, which were applied against the chart templates.
I wanted the property to be resolved by helm to
-Dspring.profiles.active=my-local-profile,vault
In the end, I decided to see how Spring Boot itself resolves properties and came up with the following:
-Dspring.profiles.active=${SPRING_PROFILE},vault
Since spring.profiles.active is a regular property, env variables are allowed there, and Spring resolves the property at runtime, which worked for me.
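Applied to base-values.yaml from the question, that amounts to swapping the Kubernetes-style $(...) reference for a shell/Spring-style ${...} placeholder and letting Spring resolve it at runtime (a sketch):
app:
  env:
    JVM_ARGS: >
      -Dspring.profiles.active=${SPRING_PROFILE},test
      -Dspring.config.additional-location=/shared
      -javaagent:/app/dd-java-agent.jar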
1. I'm a bit confused: are you referring to an environment variable (as in the title of the question) or to a Helm value?
2. Helm does not evaluate environment variables in the values files. $(SPRING_PROFILE) is treated as a literal string; it's not evaluated.
3. Actually, Helm does not evaluate ANYTHING in the values files. They are a source of data, not templates. Placeholders (actually Go templates) are evaluated only inside template files.
4. As a consequence of point 3, you cannot reference one Helm value from another.
If you really need to get Spring profiles from a Linux environment variable, you could achieve it by setting a Helm value when calling helm install and the like (although using --set is considered a bad practice):
helm install --set app.env.spring_profile=$SPRING_PROFILE ...
Although even then, app.env.spring_profile couldn't be evaluated inside base-values.yaml. You would need to move it directly to your template file, e.g.:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  ...
  template:
    ...
    spec:
      containers:
      - name: my-app
        ...
        env:
        - name: SPRING_PROFILES_ACTIVE
          value: "{{ .Values.app.env.spring_profile }},test"
