I have a Helm chart with dependencies, and I want to programmatically look up a specific dependency by chart name and replace its version.
For example, here's my file:
apiVersion: v2
name: my-chart
version: 1.2.3
dependencies:
  - name: dependency-1
    version: 1.0.20
    repository: https://my.registry.com/helm/
  - name: dependency-2
    version: 1.0.1
    repository: https://my.registry.com/helm/
  - name: dependency-3
    version: 1.0.20
    repository: https://my.registry.com/helm/
  - name: dependency-4
    version: 0.3.24
    repository: https://my.registry.com/helm/
  - name: dependency-5
    version: 3.1.2
    repository: https://my.registry.com/helm/
I'm trying to build a workflow that takes two inputs:
chartName
newVersion
Then, when invoked, the workflow checks whether $chartName exists in .dependencies. I was able to achieve that using the select operator, as follows:
yq ".dependencies[] | select (.name == \"dependency-3\")" my-chart/Chart.yaml
Which only outputs the node that matches the select:
$ yq ".dependencies[] | select (.name == \"dependency-3\")" my-chart/Chart.yaml
name: dependency-3
version: 1.0.20
repository: https://my.registry.com/helm/
$
I then tried to use the strenv operator to update the version to the new one ($newVersion), as below:
ver=1.0.0 yq ".dependencies[] | select (.name == \"dependency-3\") | strenv(ver)" my-chart/Chart.yaml
But it only outputs the updated version, so when I run it with yq -i, it replaces the entire file with simply 1.0.0:
$ ver=1.0.0 yq -i ".dependencies[] | select (.name == \"dependency-3\") | strenv(ver)" my-chart/Chart.yaml
$ cat my-chart/Chart.yaml
1.0.0
How can I get yq to only update the version of the provided dependency name in the dependencies array?
Thanks!
You basically already have all the components; only the assignment = was missing. Putting it all together:
chart="dependency-3" newver="2.0.0" yq '
(.dependencies[] | select (.name == strenv(chart))).version = strenv(newver)
' my-chart/Chart.yaml
Use the -i option to update the file in place instead of just printing the result.
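To wrap this up as the two-input workflow described in the question, a minimal sketch could look like this (the script name, the hard-coded Chart.yaml path, and the existence check are my assumptions, not part of the original command):

#!/usr/bin/env bash
# update-dep.sh -- bump one dependency's version in a Helm Chart.yaml (sketch)
# Usage: ./update-dep.sh <chartName> <newVersion>
set -euo pipefail

chartName="$1"
newVersion="$2"

# Fail early if the dependency is not present in .dependencies
if [ -z "$(chart="$chartName" yq '.dependencies[] | select(.name == strenv(chart))' my-chart/Chart.yaml)" ]; then
  echo "dependency '$chartName' not found" >&2
  exit 1
fi

# Update only the matching entry, in place
chart="$chartName" newver="$newVersion" yq -i '
  (.dependencies[] | select(.name == strenv(chart))).version = strenv(newver)
' my-chart/Chart.yaml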
I want to build a pipeline function that replaces a value in a YAML file. For that I want to make both the pattern and the replacement value variable. I have seen the env-variables-operators article in the yq docs, but I cannot find the relevant section.
I have a yaml file with the following content:
---
spec:
  source:
    helm:
      parameters:
        - name: "image.tag"
          value: "1.0.0"
I now want to build a pipeline function that will replace the value of the value key in the yaml.
I can do so with:
$ yq '.spec.source.helm.parameters[0].value = "2.0.0"' myyaml.yml
---
spec:
  source:
    helm:
      parameters:
        - name: "image.tag"
          value: "2.0.0"
Now I want to make this command customizable.
What works:
$ VALUE=3.0.0
$ replacement=$VALUE yq '.spec.source.helm.parameters[0].value = env(replacement)' myyaml.yml
---
spec:
  source:
    helm:
      parameters:
        - name: "image.tag"
          value: "3.0.0"
What doesn't work
$ VALUE=3.0.0
$ PATTERN=.spec.source.helm.parameters[0].value
$ replacement=$VALUE pattern=$PATTERN yq 'env(pattern) = env(replacement)'
spec:
  source:
    helm:
      parameters:
        - name: "image.tag"
          value: "1.0.0"
I have also tried using strenv and wrapping the replacement pattern in quotes, but it does not work.
Can anyone help me with the correct syntax?
You can import data with env but not code. You could inject it (note the changes in the quoting), but this is bad practice as it makes your script vulnerable to code injection:
VALUE='3.0.0'
PATTERN='.spec.source.helm.parameters[0].value'
replacement="$VALUE" yq "${PATTERN} = env(replacement)" myyaml.yml
---
spec:
  source:
    helm:
      parameters:
        - name: "image.tag"
          value: "3.0.0"
Better practice would be to provide the path in a form that yq can interpret as data, e.g. as an array, and use setpath:
VALUE='3.0.0'
PATTERN='["spec","source","helm","parameters",0,"value"]'
replacement="$VALUE" pattern="$PATTERN" yq 'setpath(env(pattern); env(replacement))' myyaml.yml
I am trying to insert the content of a file called version.txt into this YAML template using Bash. version.txt contains only one number. This is my script:
version=`cat version.txt`
echo $version
(
echo "cat <<EOF >final.yml";`
cat endpoint-config.yml;
echo "EOF";
) >temp.yml
. temp.yml
cat final.yml
The YAML template looks like this:
$schema: https://azuremlschemas.azureedge.net/latest/managedOnlineDeployment.schema.json
name: k-means-sparkml-mlflow-deployment
endpoint_name: k-means-endpoint
model:
  name: mlflow-k-means-model
  version: ${version}
  local_path: k_means_model
  model_format: mlflow
instance_type: Standard_DS2_v2
instance_count: 1
and I want the final.yml to look like this
$schema: https://azuremlschemas.azureedge.net/latest/managedOnlineDeployment.schema.json
name: k-means-sparkml-mlflow-deployment
endpoint_name: k-means-endpoint
model:
  name: mlflow-k-means-model
  version: 7
  local_path: k_means_model
  model_format: mlflow
instance_type: Standard_DS2_v2
instance_count: 1
but so far, it looks like this:
: https://azuremlschemas.azureedge.net/latest/managedOnlineDeployment.schema.json
name: k-means-sparkml-mlflow-deployment
endpoint_name: k-means-endpoint
model:
  name: mlflow-k-means-model
  version: 7
  local_path: k_means_model
  model_format: mlflow
instance_type: Standard_DS2_v2
instance_count: 1EOF
Any help is appreciated!
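Two things stand out in that output: the $schema line disappears because sourcing temp.yml lets the shell expand the unset $schema variable to an empty string, and the 1EOF fusion suggests endpoint-config.yml has no trailing newline, so the echoed EOF lands on the template's last line. A minimal alternative sketch, assuming GNU gettext's envsubst is available (the '${version}' shell-format argument restricts substitution to that one variable, leaving $schema untouched):

version=$(cat version.txt)
export version
# substitute only ${version}; $schema stays literal
envsubst '${version}' < endpoint-config.yml > final.yml
cat final.yml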
What I tried
conda create --name ml --file ./requirements.txt
I created the requirements.txt file with conda list -e > requirements.txt on another computer in the past.
requirements.txt:
https://github.com/penguinsAreFunny/bugFinder-machineLearning/blob/master/requirements.txt
Error
PackagesNotFoundError: The following packages are not available from current channels:
protobuf==3.19.1=pypi_0
tensorboard-data-server==0.6.1=pypi_0
pygments==2.10.0=pypi_0
scikit-learn==1.0.1=pypi_0
tensorflow-estimator==2.4.0=pypi_0
flake8==4.0.1=pypi_0
nest-asyncio==1.5.1=pypi_0
[...]
Current channels:
https://conda.anaconda.org/conda-forge/win-64
https://conda.anaconda.org/conda-forge/noarch
https://repo.anaconda.com/pkgs/main/win-64
https://repo.anaconda.com/pkgs/main/noarch
https://repo.anaconda.com/pkgs/r/win-64
https://repo.anaconda.com/pkgs/r/noarch
https://repo.anaconda.com/pkgs/msys2/win-64
https://repo.anaconda.com/pkgs/msys2/noarch
https://conda.anaconda.org/pickle/win-64
https://conda.anaconda.org/pickle/noarch
https://conda.anaconda.org/nltk/win-64
https://conda.anaconda.org/nltk/noarch
Question
Why can't conda find the packages in the channels?
I think the missing packages should be in conda-forge, shouldn't they?
Used Version
conda 4.11.0
Issue: PyPI not compatible
The packages likely are in Conda Forge as suggested, but the build strings, "pypi_0", indicate that they had been installed from PyPI in the previous environment. The conda list -e command captures this info, but the conda create command cannot handle it.
Workarounds
Option 1: Source from Conda
The quickest fix is probably to edit the file to remove the build string specification on those packages. That is, something like:
## remove all PyPI references
sed -e 's/=pypi_0//' requirements.txt > reqs.nopip.txt
## try creating only from Conda packages
conda create -n m1 --file reqs.nopip.txt
Conda will then try to treat these PyPI package specifications as Conda packages. However, this is not always reliable, since some packages go by different names in the two repositories.
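If some of them do fail to resolve, a more faithful variant is to split the file and hand the PyPI-tagged lines back to pip (a sketch; the intermediate file names are arbitrary):

## separate the PyPI-tagged lines from the Conda lines
grep '=pypi_0' requirements.txt | sed -e 's/=pypi_0$//' -e 's/=/==/' > reqs.pip.txt
grep -v '=pypi_0' requirements.txt > reqs.conda.txt

## create from Conda packages, then layer the pip packages on top
conda create -n m1 --file reqs.conda.txt
conda run -n m1 pip install -r reqs.pip.txt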
Option 2: Export YAML
Alternatively, serializing to YAML can handle both capturing and reinstalling Pip-installed packages. So, if you still have the old environment around, consider using:
conda env export > environment.yaml
which can then be recreated (on the same platform) with
conda env create -n m1 -f environment.yaml
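If the machines differ slightly, build strings are often the first thing to break; conda env export also accepts a --no-builds flag that records versions only (a partial mitigation, not a cross-platform guarantee):

conda env export --no-builds > environment.yaml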
Option 3: Convert requirements.txt to YAML
If the environment is no longer around, or the requirements.txt was provided by another user, then another option is to convert the file to a YAML format. Here is an AWK script for doing this:
list_export_to_yaml.awk
#!/usr/bin/env awk -f
#' Author: Mervin Fansler
#' GitHub: @mfansler
#' License: MIT
#'
#' Basic usage
#' $ conda list --export | awk -f list_export_to_yaml.awk
#'
#' Omitting builds with 'no_builds'
#' $ conda list --export | awk -v no_builds=1 -f list_export_to_yaml.awk
#'
#' Specifying channels with 'channels'
#' $ conda list --export | awk -v channels="conda-forge,defaults" -f list_export_to_yaml.awk

BEGIN {
  FS="=";
  if (channels) split(channels, channels_arr, ",");
  else channels_arr[0]="defaults";
}
{
  # skip header
  if ($1 ~ /^#/) next;

  if ($3 ~ /pypi/) {  # pypi packages
    pip=1;
    pypi[i++]="    - "$1"=="$2" ";
  } else {  # conda packages
    if ($1 ~ /pip/) pip=1;
    else {  # should we keep builds?
      if (no_builds) conda[j++]="  - "$1"="$2" ";
      else conda[j++]="  - "$1"="$2"="$3" ";
    }
  }
}
END {
  # emit channel info
  print "channels: ";
  for (k in channels_arr) print "  - "channels_arr[k]" ";

  # emit conda pkg info
  print "dependencies: ";
  for (j in conda) print conda[j];

  # emit PyPI pkg info
  if (pip) print "  - pip ";
  if (length(pypi) > 0) {
    print "  - pip: ";
    for (i in pypi) print pypi[i];
  }
}
For OP's example, we get:
$ wget -O requirements.txt 'https://github.com/penguinsAreFunny/bugFinder-machineLearning/raw/master/requirements.txt'
$ awk -f list_export_to_yaml.awk requirements.txt > bugfinder-ml.yaml
which then has the contents:
channels:
  - defaults
dependencies:
  - brotlipy=0.7.0=py38h294d835_1003
  - ca-certificates=2021.10.8=h5b45459_0
  - cffi=1.15.0=py38hd8c33c5_0
  - chardet=4.0.0=py38haa244fe_2
  - cryptography=35.0.0=py38hb7941b4_2
  - future=0.18.2=py38haa244fe_4
  - h2o=3.34.0.3=py38_0
  - openjdk=11.0.9.1=h57928b3_1
  - openssl=1.1.1l=h8ffe710_0
  - pycparser=2.20=pyh9f0ad1d_2
  - pyopenssl=21.0.0=pyhd8ed1ab_0
  - pysocks=1.7.1=py38haa244fe_4
  - python=3.8.12=h7840368_2_cpython
  - python_abi=3.8=2_cp38
  - requests=2.26.0=pyhd8ed1ab_0
  - setuptools=58.5.3=py38haa244fe_0
  - sqlite=3.36.0=h8ffe710_2
  - tabulate=0.8.9=pyhd8ed1ab_0
  - ucrt=10.0.20348.0=h57928b3_0
  - urllib3=1.26.7=pyhd8ed1ab_0
  - vc=14.2=hb210afc_5
  - vs2013_runtime=12.0.21005=1
  - vs2015_runtime=14.29.30037=h902a5da_5
  - wheel=0.37.0=pyhd8ed1ab_1
  - win_inet_pton=1.1.0=py38haa244fe_3
  - pip
  - pip:
    - absl-py==0.15.0
    - appdirs==1.4.4
    - astroid==2.7.3
    - astunparse==1.6.3
    - autopep8==1.6.0
    - backcall==0.2.0
    - backports-entry-points-selectable==1.1.0
    - black==21.4b0
    - cachetools==4.2.4
    - certifi==2021.10.8
    - cfgv==3.3.1
    - charset-normalizer==2.0.7
    - click==8.0.3
    - cycler==0.11.0
    - deap==1.3.1
    - debugpy==1.5.1
    - decorator==5.1.0
    - dill==0.3.4
    - distlib==0.3.3
    - entrypoints==0.3
    - filelock==3.3.2
    - flake8==4.0.1
    - flatbuffers==1.12
    - gast==0.3.3
    - google-auth==2.3.3
    - google-auth-oauthlib==0.4.6
    - google-pasta==0.2.0
    - grpcio==1.32.0
    - h5py==2.10.0
    - identify==2.3.3
    - idna==3.3
    - importlib-resources==5.4.0
    - ipykernel==6.5.0
    - ipython==7.29.0
    - isort==5.10.0
    - jedi==0.18.0
    - jinja2==3.0.2
    - joblib==1.1.0
    - jupyter-client==7.0.6
    - jupyter-core==4.9.1
    - keras-preprocessing==1.1.2
    - kiwisolver==1.3.2
    - markdown==3.3.4
    - markupsafe==2.0.1
    - matplotlib==3.4.3
    - matplotlib-inline==0.1.3
    - mypy==0.910
    - mypy-extensions==0.4.3
    - nest-asyncio==1.5.1
    - nodeenv==1.6.0
    - numpy==1.19.5
    - oauthlib==3.1.1
    - opt-einsum==3.3.0
    - pandas==1.3.4
    - parso==0.8.2
    - pathspec==0.9.0
    - pickleshare==0.7.5
    - pillow==8.4.0
    - platformdirs==2.4.0
    - pre-commit==2.15.0
    - prompt-toolkit==3.0.22
    - protobuf==3.19.1
    - pyasn1==0.4.8
    - pyasn1-modules==0.2.8
    - pycodestyle==2.8.0
    - pyflakes==2.4.0
    - pygments==2.10.0
    - pylint==2.10.2
    - pyparsing==3.0.4
    - python-dateutil==2.8.2
    - pytz==2021.3
    - pywin32==302
    - pyyaml==6.0
    - pyzmq==22.3.0
    - regex==2021.11.2
    - requests-oauthlib==1.3.0
    - rsa==4.7.2
    - scikit-learn==1.0.1
    - scipy==1.7.1
    - six==1.15.0
    - stopit==1.1.2
    - sweetviz==2.1.3
    - tensorboard==2.7.0
    - tensorboard-data-server==0.6.1
    - tensorboard-plugin-wit==1.8.0
    - tensorflow==2.4.4
    - tensorflow-estimator==2.4.0
    - termcolor==1.1.0
    - threadpoolctl==3.0.0
    - tornado==6.1
    - tpot==0.11.7
    - tqdm==4.62.3
    - traitlets==5.1.1
    - typing-extensions==3.7.4.3
    - update-checker==0.18.0
    - virtualenv==20.10.0
    - wcwidth==0.2.5
    - werkzeug==2.0.2
    - xgboost==1.5.0
    - zipp==3.6.0
Note that since conda list --export does not capture channel information, the user must determine this on their own. By default the script inserts a defaults entry, but it also provides an argument (channels) to specify additional channels for the YAML in comma-separated format. E.g.,
awk -f list_export_to_yaml.awk -v channels='conda-forge,defaults' requirements.txt
would output
channels:
  - conda-forge
  - defaults
in the YAML.
There is also a no_builds argument to suppress builds (i.e., versions only). E.g.,
awk -f list_export_to_yaml.awk -v no_builds=1 requirements.txt
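Either way, the generated YAML feeds straight into the Option 2 command, e.g.:

conda env create -n m1 -f bugfinder-ml.yaml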
I would like to overlay a yaml document on top of another document, updating any arrays to be the overlaid values.
Given the base/source file:
#base.yaml
include:
  - directory: foo/foo1
  - directory: bar/bar1
and the overlay file:
#overlay.yaml
include:
  - directory: foo/foo1
    extra: true
    stuff:
      stuff1: true
      stuff2: true
  - directory: something/else
and the result should look like
#results.yaml
include:
  - directory: foo/foo1
    extra: true
    stuff:
      stuff1: true
      stuff2: true
  - directory: bar/bar1
  - directory: something/else
I think I am close to having it working with yq from this post, but the list elements are not overridden; instead I get duplicates of foo/foo1.
yq eval-all '. as $item ireduce ({}; . *+ $item)' base.yaml overlay.yaml produces
#base.yaml
#overlay.yaml
include:
  - directory: foo/foo1
  - directory: bar/bar1
  - directory: foo/foo1
    extra: true
    stuff:
      stuff1: true
      stuff2: true
  - directory: something/else
Removing the + after the * will drop bar/bar1 from the output.
Basically I think I'm operating on the include key itself, not on the values of its entries. I would greatly appreciate any help in getting the overlay working.
Have a look at this example here: https://mikefarah.gitbook.io/yq/operators/multiply-merge#merge-arrays-of-objects-together-matching-on-a-key
Basically you will need to build a map keyed on directory (so entries sharing a directory merge into one), then turn that map back into an array and assign it to .include in the first file:
yq eval-all '
  (
    ((.include[] | {.directory: .}) as $item ireduce ({}; . * $item)) as $uniqueMap
    | ($uniqueMap | to_entries | .[]) as $item ireduce ([]; . + $item.value)
  ) as $mergedArray
  | select(fi == 0) | .include = $mergedArray
' base.yaml overlay.yaml
Disclosure: I wrote yq
I'm using yq 4.3.1 to update the version field in this yaml:
jobs:
  my-job:
    steps:
      - name: Step 1
        id: step1
        uses: actions/step1
      - name: Step 2
        id: step2
        uses: actions/step2
        with:
          version: 1.2.3
But I can't figure out how to select the array item based on the id == 'step2' property so that I can update the version.
Why is it you always figure out the answer the second after you post a question on Stack Overflow?
yq eval '(.jobs.my-job.steps[] | select(has("id")) | select(.id == "step2")).with.version = "1.2.4"' -i my.yaml
EDIT
Wow, how wrong was I... :D Updated with a working version
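As a follow-up, the hard-coded id and version can be parameterized with the strenv technique from the first answer above (a sketch; the variable names are arbitrary):

id="step2" ver="1.2.4" yq -i '
  (.jobs.my-job.steps[] | select(.id == strenv(id))).with.version = strenv(ver)
' my.yaml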