Conda create from requirements.txt not finding packages - anaconda

What I tried
conda create --name ml --file ./requirements.txt
I created the requirements.txt file with conda list -e > requirements.txt on another computer in the past.
requirements.txt:
https://github.com/penguinsAreFunny/bugFinder-machineLearning/blob/master/requirements.txt
Error
PackagesNotFoundError: The following packages are not available from current channels:
protobuf==3.19.1=pypi_0
tensorboard-data-server==0.6.1=pypi_0
pygments==2.10.0=pypi_0
scikit-learn==1.0.1=pypi_0
tensorflow-estimator==2.4.0=pypi_0
flake8==4.0.1=pypi_0
nest-asyncio==1.5.1=pypi_0
[...]
Current channels:
https://conda.anaconda.org/conda-forge/win-64
https://conda.anaconda.org/conda-forge/noarch
https://repo.anaconda.com/pkgs/main/win-64
https://repo.anaconda.com/pkgs/main/noarch
https://repo.anaconda.com/pkgs/r/win-64
https://repo.anaconda.com/pkgs/r/noarch
https://repo.anaconda.com/pkgs/msys2/win-64
https://repo.anaconda.com/pkgs/msys2/noarch
https://conda.anaconda.org/pickle/win-64
https://conda.anaconda.org/pickle/noarch
https://conda.anaconda.org/nltk/win-64
https://conda.anaconda.org/nltk/noarch
Question
Why can't conda find the packages in the channels?
I think the missing packages should be in conda-forge, shouldn't they?
Version used
conda 4.11.0

Issue: PyPI packages not compatible
The packages likely are on Conda Forge, as suspected, but the "pypi_0" build strings indicate that they were installed from PyPI in the previous environment. The conda list -e command captures this information, but the conda create command cannot handle it.
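A quick way to see which entries in the export came from PyPI is to filter on that build string, e.g.:
## list the entries that pip installed in the old environment
grep 'pypi_0$' requirements.txt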
Workarounds
Option 1: Source from Conda
The quickest fix is probably to edit the file to remove the build string specification on those packages. That is, something like:
## remove all PyPI references
sed -e 's/=pypi_0//' requirements.txt > reqs.nopip.txt
## try creating only from Conda packages
conda create -n m1 --file reqs.nopip.txt
Conda will then try to treat these PyPI package specifications as Conda packages. However, this is not always reliable, since some packages go by different names in the two repositories.
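If it is unclear which of those specs will actually resolve, one rough way to check ahead of time is to query the configured channels for each package name (a sketch, not part of the original answer; it assumes the cleaned file from the sed step above):
## report any spec whose package name is not found on the configured channels
grep -v '^#' reqs.nopip.txt | while read -r spec; do
  conda search "${spec%%=*}" > /dev/null 2>&1 || echo "not found: $spec"
done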
Option 2: Export YAML
Alternatively, serializing to YAML can handle both capturing and reinstalling Pip-installed packages. So, if you still have the old environment around, consider using:
conda env export > environment.yaml
from which the environment can be recreated (on the same platform) with
conda env create -n m1 -f environment.yaml
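If the export also needs to be usable on a different platform, conda env export supports dropping build strings or limiting the export to explicitly requested packages, for example:
## more portable variants of the export
conda env export --no-builds > environment.yaml
conda env export --from-history > environment.yaml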
Option 3: Convert requirements.txt to YAML
If the environment is no longer around, or the requirements.txt was provided by another user, then another option is to convert the file to a YAML format. Here is an AWK script for doing this:
list_export_to_yaml.awk
#!/usr/bin/env awk -f
#' Author: Mervin Fansler
#' GitHub: @mfansler
#' License: MIT
#'
#' Basic usage
#' $ conda list --export | awk -f list_export_to_yaml.awk
#'
#' Omitting builds with 'no_builds'
#' $ conda list --export | awk -v no_builds=1 -f list_export_to_yaml.awk
#'
#' Specifying channels with 'channels'
#' $ conda list --export | awk -v channels="conda-forge,defaults" -f list_export_to_yaml.awk
BEGIN {
    FS="=";
    if (channels) split(channels, channels_arr, ",");
    else channels_arr[0]="defaults";
}
{
    # skip header
    if ($1 ~ /^#/) next;

    if ($3 ~ /pypi/) {  # pypi packages
        pip=1;
        pypi[i++]="    - "$1"=="$2" ";
    } else {  # conda packages
        if ($1 ~ /pip/) pip=1;
        else {  # should we keep builds?
            if (no_builds) conda[j++]="  - "$1"="$2" ";
            else conda[j++]="  - "$1"="$2"="$3" ";
        }
    }
}
END {
    # emit channel info
    print "channels: ";
    for (k in channels_arr) print "  - "channels_arr[k]" ";

    # emit conda pkg info
    print "dependencies: ";
    for (j in conda) print conda[j];

    # emit PyPI pkg info
    if (pip) print "  - pip ";
    if (length(pypi) > 0) {
        print "  - pip: ";
        for (i in pypi) print pypi[i];
    }
}
For OP's example, we get:
$ wget -O requirements.txt 'https://github.com/penguinsAreFunny/bugFinder-machineLearning/raw/master/requirements.txt'
$ awk -f list_export_to_yaml.awk requirements.txt > bugfinder-ml.yaml
which then has the contents:
channels:
  - defaults
dependencies:
  - brotlipy=0.7.0=py38h294d835_1003
  - ca-certificates=2021.10.8=h5b45459_0
  - cffi=1.15.0=py38hd8c33c5_0
  - chardet=4.0.0=py38haa244fe_2
  - cryptography=35.0.0=py38hb7941b4_2
  - future=0.18.2=py38haa244fe_4
  - h2o=3.34.0.3=py38_0
  - openjdk=11.0.9.1=h57928b3_1
  - openssl=1.1.1l=h8ffe710_0
  - pycparser=2.20=pyh9f0ad1d_2
  - pyopenssl=21.0.0=pyhd8ed1ab_0
  - pysocks=1.7.1=py38haa244fe_4
  - python=3.8.12=h7840368_2_cpython
  - python_abi=3.8=2_cp38
  - requests=2.26.0=pyhd8ed1ab_0
  - setuptools=58.5.3=py38haa244fe_0
  - sqlite=3.36.0=h8ffe710_2
  - tabulate=0.8.9=pyhd8ed1ab_0
  - ucrt=10.0.20348.0=h57928b3_0
  - urllib3=1.26.7=pyhd8ed1ab_0
  - vc=14.2=hb210afc_5
  - vs2013_runtime=12.0.21005=1
  - vs2015_runtime=14.29.30037=h902a5da_5
  - wheel=0.37.0=pyhd8ed1ab_1
  - win_inet_pton=1.1.0=py38haa244fe_3
  - pip
  - pip:
    - absl-py==0.15.0
    - appdirs==1.4.4
    - astroid==2.7.3
    - astunparse==1.6.3
    - autopep8==1.6.0
    - backcall==0.2.0
    - backports-entry-points-selectable==1.1.0
    - black==21.4b0
    - cachetools==4.2.4
    - certifi==2021.10.8
    - cfgv==3.3.1
    - charset-normalizer==2.0.7
    - click==8.0.3
    - cycler==0.11.0
    - deap==1.3.1
    - debugpy==1.5.1
    - decorator==5.1.0
    - dill==0.3.4
    - distlib==0.3.3
    - entrypoints==0.3
    - filelock==3.3.2
    - flake8==4.0.1
    - flatbuffers==1.12
    - gast==0.3.3
    - google-auth==2.3.3
    - google-auth-oauthlib==0.4.6
    - google-pasta==0.2.0
    - grpcio==1.32.0
    - h5py==2.10.0
    - identify==2.3.3
    - idna==3.3
    - importlib-resources==5.4.0
    - ipykernel==6.5.0
    - ipython==7.29.0
    - isort==5.10.0
    - jedi==0.18.0
    - jinja2==3.0.2
    - joblib==1.1.0
    - jupyter-client==7.0.6
    - jupyter-core==4.9.1
    - keras-preprocessing==1.1.2
    - kiwisolver==1.3.2
    - markdown==3.3.4
    - markupsafe==2.0.1
    - matplotlib==3.4.3
    - matplotlib-inline==0.1.3
    - mypy==0.910
    - mypy-extensions==0.4.3
    - nest-asyncio==1.5.1
    - nodeenv==1.6.0
    - numpy==1.19.5
    - oauthlib==3.1.1
    - opt-einsum==3.3.0
    - pandas==1.3.4
    - parso==0.8.2
    - pathspec==0.9.0
    - pickleshare==0.7.5
    - pillow==8.4.0
    - platformdirs==2.4.0
    - pre-commit==2.15.0
    - prompt-toolkit==3.0.22
    - protobuf==3.19.1
    - pyasn1==0.4.8
    - pyasn1-modules==0.2.8
    - pycodestyle==2.8.0
    - pyflakes==2.4.0
    - pygments==2.10.0
    - pylint==2.10.2
    - pyparsing==3.0.4
    - python-dateutil==2.8.2
    - pytz==2021.3
    - pywin32==302
    - pyyaml==6.0
    - pyzmq==22.3.0
    - regex==2021.11.2
    - requests-oauthlib==1.3.0
    - rsa==4.7.2
    - scikit-learn==1.0.1
    - scipy==1.7.1
    - six==1.15.0
    - stopit==1.1.2
    - sweetviz==2.1.3
    - tensorboard==2.7.0
    - tensorboard-data-server==0.6.1
    - tensorboard-plugin-wit==1.8.0
    - tensorflow==2.4.4
    - tensorflow-estimator==2.4.0
    - termcolor==1.1.0
    - threadpoolctl==3.0.0
    - tornado==6.1
    - tpot==0.11.7
    - tqdm==4.62.3
    - traitlets==5.1.1
    - typing-extensions==3.7.4.3
    - update-checker==0.18.0
    - virtualenv==20.10.0
    - wcwidth==0.2.5
    - werkzeug==2.0.2
    - xgboost==1.5.0
    - zipp==3.6.0
Note that since conda list --export does not capture channel information, the user must determine this on their own. By default the script inserts defaults, but it also provides an argument (channels) to specify additional channels for the YAML in comma-separated format. E.g.,
awk -f list_export_to_yaml.awk -v channels='conda-forge,defaults' requirements.txt
would output
channels:
  - conda-forge
  - defaults
in the YAML.
There is also a no_builds argument to suppress builds (i.e., versions only). E.g.,
awk -f list_export_to_yaml.awk -v no_builds=1 requirements.txt
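Whichever way the YAML was produced, the final step is the same as in Option 2, e.g.:
## create the environment from the generated YAML
conda env create -n ml -f bugfinder-ml.yaml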

Related

Pre-commit Pylint "exit code: 32" upon committing, no issues on `run --all-files`

When I run `pre-commit run --all-files`, everything passes; but when I try to commit, pylint fails with exit code 32, followed by the list of usage options. The only files changed are .py files:
git status
On branch include-gitlab-arg
Your branch is up to date with 'origin/include-gitlab-arg'.

Changes to be committed:
  (use "git restore --staged <file>..." to unstage)
        renamed:    code/project1/src/Main.py -> code/project1/src/GitLab/GitLab_runner_token_getter.py
        renamed:    code/project1/src/get_gitlab_runner_token.py -> code/project1/src/GitLab/get_gitlab_runner_token.py
        modified:   code/project1/src/__main__.py
        modified:   code/project1/src/control_website.py
        deleted:    code/project1/src/get_website_controller.py
        modified:   code/project1/src/helper.py
Error Output:
The git commit -m "some change." command yields the following pre-commit error:
pylint...................................................................Failed
- hook id: pylint
- exit code: 32
usage: pylint [options]

optional arguments:
  -h, --help            show this help message and exit

Commands:
  Options which are actually commands. Options in this group are mutually exclusive.

  --rcfile RCFILE
whereas pre-commit run --all-files passes.
And the .pre-commit-config.yaml contains:
# This file specifies which checks are performed by the pre-commit service.
# The pre-commit service prevents people from pushing code to git that is not
# up to standards. # The reason mirrors are used instead of the actual
# repositories for e.g. black and flake8, is because those repositories also
# need to contain a pre-commit hook file, which they often don't by default.
# So to resolve that, a mirror is created that includes such a file.
default_language_version:
  python: python3.8. # or python3
repos:
  # Test if the python code is formatted according to the Black standard.
  - repo: https://github.com/Quantco/pre-commit-mirrors-black
    rev: 22.6.0
    hooks:
      - id: black-conda
        args:
          - --safe
          - --target-version=py36
  # Test if the python code is formatted according to the flake8 standard.
  - repo: https://github.com/Quantco/pre-commit-mirrors-flake8
    rev: 5.0.4
    hooks:
      - id: flake8-conda
        args: ["--ignore=E501,W503,W504,E722,E203"]
  # Test if the import statements are sorted correctly.
  - repo: https://github.com/PyCQA/isort
    rev: 5.10.1
    hooks:
      - id: isort
        args: ["--profile", "black", --line-length=79]
  ## Test if the variable typing is correct. (Variable typing is when you say:
  ## def is_larger(nr: int) -> bool: instead of def is_larger(nr). It makes
  ## it explicit what type of input and output a function has.
  ## - repo: https://github.com/python/mypy
  # - repo: https://github.com/pre-commit/mirrors-mypy
  #### - repo: https://github.com/a-t-0/mypy
  #   rev: v0.982
  #   hooks:
  #     - id: mypy
  ## Tests if there are spelling errors in the code.
  # - repo: https://github.com/codespell-project/codespell
  #   rev: v2.2.1
  #   hooks:
  #     - id: codespell
  # Performs static code analysis to check for programming errors.
  - repo: local
    hooks:
      - id: pylint
        name: pylint
        entry: pylint
        language: system
        types: [python]
        args:
          [
            "-rn", # Only display messages
            "-sn", # Don't display the score
            "--ignore-long-lines", # Ignores long lines.
          ]
  # Runs additional tests that are created by the pre-commit software itself.
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.3.0
    hooks:
      # Check user did not add large files.
      - id: check-added-large-files
      # Check if `.py` files are written in valid Python syntax.
      - id: check-ast
      # Require literal syntax when initializing empty or zero Python builtin types.
      - id: check-builtin-literals
      # Checks if there are filenames that would conflict if case is changed.
      - id: check-case-conflict
      # Checks if the Python functions have docstrings.
      - id: check-docstring-first
      # Checks if any `.sh` files have a shebang like #!/bin/bash
      - id: check-executables-have-shebangs
      # Verifies json format of any `.json` files in repo.
      - id: check-json
      # Checks if there are any existing merge conflicts caused by the commit.
      - id: check-merge-conflict
      # Checks for symlinks which do not point to anything.
      - id: check-symlinks
      # Checks if xml files are formatted correctly.
      - id: check-xml
      # Checks if .yml files are valid.
      - id: check-yaml
      # Checks if debugger imports are performed.
      - id: debug-statements
      # Detects symlinks changed to regular files with content path symlink was pointing to.
      - id: destroyed-symlinks
      # Checks if you don't accidentally push a private key.
      - id: detect-private-key
      # Replaces double quoted strings with single quoted strings.
      # This is not compatible with Python Black.
      #- id: double-quote-string-fixer
      # Makes sure files end in a newline and only a newline.
      - id: end-of-file-fixer
      # Removes UTF-8 byte order marker.
      - id: fix-byte-order-marker
      # Add <# -*- coding: utf-8 -*-> to the top of python files.
      #- id: fix-encoding-pragma
      # Checks if there are different line endings, like \n and crlf.
      - id: mixed-line-ending
      # Asserts `.py` files in folder `/test/` (by default:) end in `_test.py`.
      - id: name-tests-test
        # Override default to check if `.py` files in `/test/` START with `test_`.
        args: ['--django']
      # Ensures JSON files are properly formatted.
      - id: pretty-format-json
        args: ['--autofix']
      # Sorts entries in requirements.txt and removes incorrect pkg-resources entries.
      - id: requirements-txt-fixer
      # Sorts simple YAML files which consist only of top-level keys.
      - id: sort-simple-yaml
      # Removes trailing whitespaces at end of lines of .. files.
      - id: trailing-whitespace
  - repo: https://github.com/PyCQA/autoflake
    rev: v1.7.0
    hooks:
      - id: autoflake
        args: ["--in-place", "--remove-unused-variables", "--remove-all-unused-imports", "--recursive"]
        name: AutoFlake
        description: "Format with AutoFlake"
        stages: [commit]
  - repo: https://github.com/PyCQA/bandit
    rev: 1.7.4
    hooks:
      - id: bandit
        name: Bandit
        stages: [commit]
  # Enforces formatting style in Markdown (.md) files.
  - repo: https://github.com/executablebooks/mdformat
    rev: 0.7.16
    hooks:
      - id: mdformat
        additional_dependencies:
          - mdformat-toc
          - mdformat-gfm
          - mdformat-black
  - repo: https://github.com/MarcoGorelli/absolufy-imports
    rev: v0.3.1
    hooks:
      - id: absolufy-imports
        files: '^src/.+\.py$'
        args: ['--never', '--application-directories', 'src']
  - repo: https://github.com/myint/docformatter
    rev: v1.5.0
    hooks:
      - id: docformatter
  - repo: https://github.com/pre-commit/pygrep-hooks
    rev: v1.9.0
    hooks:
      - id: python-use-type-annotations
      - id: python-check-blanket-noqa
      - id: python-check-blanket-type-ignore
  # Updates the syntax of `.py` files to the specified python version.
  # It is not compatible with: pre-commit hook: fix-encoding-pragma
  - repo: https://github.com/asottile/pyupgrade
    rev: v3.0.0
    hooks:
      - id: pyupgrade
        args: [--py38-plus]
  - repo: https://github.com/markdownlint/markdownlint
    rev: v0.11.0
    hooks:
      - id: markdownlint
With pyproject.toml:
# This is used to configure the black, isort and mypy such that the packages don't conflict.
# This file is read by the pre-commit program.
[tool.black]
line-length = 79
include = '\.pyi?$'
exclude = '''
/(
\.git
| \.mypy_cache
| build
| dist
)/
'''
[tool.coverage.run]
# Due to a strange bug with xml output of coverage.py not writing the full-path
# of the sources, the full root directory is presented as a source alongside
# the main package. As a result any importable Python file/package needs to be
# included in the omit
source = [
"foo",
".",
]
# Excludes the following directories from the coverage report
omit = [
"tests/*",
"setup.py",
]
[tool.isort]
profile = "black"
[tool.mypy]
ignore_missing_imports = true
[tool.pylint.basic]
bad-names=[]
[tool.pylint.messages_control]
# Example: Disable error on needing a module-level docstring
disable=[
"import-error",
"invalid-name",
"fixme",
]
[tool.pytest.ini_options]
# Runs coverage.py through use of the pytest-cov plugin
# An xml report is generated and results are output to the terminal
# TODO: Disable this line to disable CLI coverage reports when running tests.
#addopts = "--cov --cov-report xml:cov.xml --cov-report term"
# Sets the minimum allowed pytest version
minversion = 5.0
# Sets the path where test files are located (Speeds up Test Discovery)
testpaths = ["tests"]
And setup.py:
"""This file is to allow this repository to be published as a pip module, such
that people can install it with: `pip install networkx-to-lava-nc`.
You can ignore it.
"""
import setuptools

with open("README.md", encoding="utf-8") as fh:
    long_description = fh.read()

setuptools.setup(
    name="networkx-to-lava-nc-snn",
    version="0.0.1",
    author="a-t-0",
    author_email="author@example.com",
    description="Converts networkx graphs representing spiking neural networks"
    + " (SNN)s of LIF neurons, into runnable Lava SNNs.",
    long_description=long_description,
    long_description_content_type="text/markdown",
    url="https://github.com/a-t-0/networkx-to-lava-nc",
    packages=setuptools.find_packages(),
    classifiers=[
        "Programming Language :: Python :: 3",
        "License :: OSI Approved :: AGPL3",
        "Operating System :: OS Independent",
    ],
)
Question
How can I resolve the pylint usage error to ensure the commit passes pre-commit?
The issue was caused by the `"--ignore-long-lines", # Ignores long lines.` line in the pylint hook's args in the .pre-commit-config.yaml. I assume it conflicts with the line-length settings for black and in pyproject.toml, which are both set to 79.
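One way to debug this kind of failure is to run the hook's entry by hand against the staged files (a rough sketch using the same -rn -sn arguments as the hook); pylint's exit code 32 specifically signals a usage/option error rather than lint findings:
## run pylint the way the pre-commit hook would, on the staged .py files
git diff --cached --name-only --diff-filter=ACMR -- '*.py' \
  | xargs --no-run-if-empty pylint -rn -sn
echo "pylint exit code: $?"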

yq - Search and replace a node's specific value (If exists)

I have a Helm dependencies chart in which I want to programmatically find a specific chart by name and replace its version with a given one.
For example, here's my file:
apiVersion: v2
name: my-chart
version: 1.2.3
dependencies:
  - name: dependency-1
    version: 1.0.20
    repository: https://my.registry.com/helm/
  - name: dependency-2
    version: 1.0.1
    repository: https://my.registry.com/helm/
  - name: dependency-3
    version: 1.0.20
    repository: https://my.registry.com/helm/
  - name: dependency-4
    version: 0.3.24
    repository: https://my.registry.com/helm/
  - name: dependency-5
    version: 3.1.2
    repository: https://my.registry.com/helm/
I'm trying to build a workflow that takes two inputs:
chartName
newVersion
Then, when invoked, the workflow checks whether $chartName exists in .dependencies (I was able to achieve that using the select directive), as follows:
yq ".dependencies[] | select (.name == \"dependency-3\")" my-chart/Chart.yaml
Which only outputs the node that matches the select:
$ yq ".dependencies[] | select (.name == \"dependency-3\")" my-chart/Chart.yaml
name: dependency-3
version: 1.0.20
repository: https://my.registry.com/helm/
$
And then I tried to use the strenv directive to update the version to the new one ($newVersion), as below:
ver=1.0.0 yq ".dependencies[] | select (.name == \"dependency-3\") | strenv(ver)" my-chart/Chart.yaml
But it only outputs the updated version, so if I run it with yq -i it replaces the entire file with just 1.0.0:
$ ver=1.0.0 yq -i ".dependencies[] | select (.name == \"dependency-3\") | strenv(ver)" my-chart/Chart.yaml
$ cat my-chart/Chart.yaml
1.0.0
How can I get yq to only update the version of the provided dependency name in the dependencies array?
Thanks!
You basically already have all the components, just the assignment = was missing. All put together:
chart="dependency-3" newver="2.0.0" yq '
(.dependencies[] | select (.name == strenv(chart))).version = strenv(newver)
' my-chart/Chart.yaml
Use the -i option to update the file in place instead of just printing the result.
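For example, updating dependency-3 in place with the filenames from the question, then printing the node to verify:
chart="dependency-3" newver="2.0.0" yq -i '
  (.dependencies[] | select(.name == strenv(chart))).version = strenv(newver)
' my-chart/Chart.yaml
yq '.dependencies[] | select(.name == "dependency-3")' my-chart/Chart.yaml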

GitHub Actions: Passing JSON data to another job

I'm attempting to pass an array of dynamically fetched data from one GitHub Action job to the actual job doing the build. This array will be used as part of a matrix to build for multiple versions. However, I'm encountering an issue when the bash variable storing the array is evaluated.
jobs:
  setup:
    runs-on: ubuntu-latest
    outputs:
      versions: ${{ steps.matrix.outputs.value }}
    steps:
      - id: matrix
        run: |
          sudo apt-get install -y jq && \
          MAINNET=$(curl https://api.mainnet-beta.solana.com -X POST -H "Content-Type: application/json" -d '{"jsonrpc":"2.0","id":1, "method":"getVersion"}' | jq '.result["solana-core"]') && \
          TESTNET=$(curl https://api.testnet.solana.com -X POST -H "Content-Type: application/json" -d '{"jsonrpc":"2.0","id":1, "method":"getVersion"}' | jq '.result["solana-core"]') && \
          VERSIONS=($MAINNET $TESTNET) && \
          echo "${VERSIONS[@]}" && \
          VERSION_JSON=$(echo "${VERSIONS[@]}" | jq -s) && \
          echo $VERSION_JSON && \
          echo '::set-output name=value::$VERSION_JSON'
        shell: bash
      - id: debug
        run: |
          echo "Result: ${{ steps.matrix.outputs.value }}"
  changes:
    needs: setup
    runs-on: ubuntu-latest
    # Set job outputs to values from filter step
    outputs:
      core: ${{ steps.filter.outputs.core }}
      package: ${{ steps.filter.outputs.package }}
    strategy:
      matrix:
        TEST: [buy, cancel, create_auction_house, delegate, deposit, execute_sale, sell, update_auction_house, withdraw_from_fee, withdraw_from_treasury, withdraw]
        SOLANA_VERSION: ${{fromJson(needs.setup.outputs.versions)}}
    steps:
      - uses: actions/checkout@v2
      # For pull requests it's not necessary to checkout the code
      - uses: dorny/paths-filter@v2
        id: filter
        with:
          filters: |
            core:
              - 'core/**'
            package:
              - 'auction-house/**'
      - name: debug
        id: debug
        working-directory: ./auction-house/program
        run: echo ${{ needs.setup.outputs.versions }}
In the setup job above, the two versions are evaluated to a bash array (in VERSIONS) and converted into a JSON array to be passed to the next job (in VERSION_JSON). The last echo in the matrix step results in a print of [ "1.10.31", "1.11.1" ], but the debug step prints out this:
Run echo "Result: "$VERSION_JSON""
echo "Result: "$VERSION_JSON""
shell: /usr/bin/bash -e {0}
env:
CARGO_TERM_COLOR: always
RUST_TOOLCHAIN: stable
Result:
The changes job also results in an error:
Error when evaluating 'strategy' for job 'changes'.
.github/workflows/program-auction-house.yml (Line: 44, Col: 25): Unexpected type of value '$VERSION_JSON', expected type: Sequence.
It definitely seems like the $VERSION_JSON variable isn't actually being evaluated properly, but I can't figure out where the evaluation is going wrong.
For echo '::set-output name=value::$VERSION_JSON' you need to use double quotes, otherwise bash will not expand $VERSION_JSON.
Also, set-output is not happy with multi-line data. In your case, you can use jq -s -c so the output will be a single line.
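Putting both points together, only the last two lines of the run: block need to change; a sketch of just those lines:
# expand the variable (double quotes) and emit compact single-line JSON (-c)
VERSION_JSON=$(echo "${VERSIONS[@]}" | jq -s -c) && \
echo "::set-output name=value::$VERSION_JSON"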

GitLab CI rules not working with extends and individual rules

Below are two jobs in the build stage.
By default they share a common condition, set with the extends keyword pointing to .ifawsdeploy.
Only one of them should run: if the variable $ADMIN_SERVER_IP is provided, then connect-admin-server should run, and that part works.
If no value is provided for $ADMIN_SERVER_IP, then create-admin-server should run, but it is not running.
.ifawsdeploy:
  rules:
    - if: '$TEST_CREATE_ADMIN && $REGION && $ROLE_ARN && $PACKAGEURL && $TEST_CREATE_ADMIN == "aws" && $SUB_PLATFORM == "aws" && $ROLE_ARN != "" && $PACKAGEURL != "" && $REGION != ""'

variables:
  TEST_CREATE_ADMIN:
    #value: aws
    description: "Platform, currently aws only"
  SUB_PLATFORM:
    value: aws
    description: "Platform, currently aws only"
  REGION:
    value: "us-west-2"
    description: "region where to deploy company"
  PACKAGEURL:
    value: "http://somerpmurl.x86_64.rpm"
    description: "company rpm file url"
  ACCOUNT_NAME:
    value: "testsubaccount"
    description: "Account name of sub account to refer in the deployment, no need to match in AWS"
  ROLE_ARN:
    value: "arn:aws:iam::491483064167:role/uat"
    description: "ROLE ARN of the user account assuming: aws sts get-caller-identity"
  tfenv_version: "1.1.9"
  DEV_PUB_KEY:
    description: "Optional public key file to add access to admin server"
  ADMIN_SERVER_IP:
    description: "Existing Admin Server IP Address"
  ADMIN_SERVER_SSH_KEY:
    description: "Existing Admin Server SSH_KEY PEM content"

#export variables below will cause the terraform to use the root account instead of the one specified in tfvars file
.configure_aws_cli: &configure_aws_cli
  - aws configure set region $REGION
  - aws configure set aws_access_key_id $AWS_FULL_STS_ACCESS_KEY_ID
  - aws configure set aws_secret_access_key $AWS_FULL_STS_ACCESS_KEY_SECRET
  - aws sts get-caller-identity
  - aws configure set source_profile default --profile $ACCOUNT_NAME
  - aws configure set role_arn $ROLE_ARN --profile $ACCOUNT_NAME
  - aws sts get-caller-identity --profile $ACCOUNT_NAME
  - aws configure set region $REGION --profile $ACCOUNT_NAME

.copy_remote_log: &copy_remote_log
  - if [ -e outfile ]; then rm outfile; fi
  - copy_command="$(cat $CI_PROJECT_DIR/scp_command.txt)"
  - new_copy_command=${copy_command/"%s"/"outfile"}
  - new_copy_command=${new_copy_command/"~"/"/home/ec2-user/outfile"}
  - echo $new_copy_command
  - new_copy_command=$(echo "$new_copy_command" | sed s'/\([^.]*\.[^ ]*\) \([^ ]*\) \(.*\)/\1 \3 \2/')
  - echo $new_copy_command
  - sleep 10
  - eval $new_copy_command

.check_remote_log: &check_remote_log
  - sleep 10
  - grep Error outfile || true
  - sleep 10
  - returnCode=$(grep -c Error outfile) || true
  - echo "Return code received $returnCode"
  - if [ $returnCode -ge 1 ]; then exit 1; fi
  - echo "No errors"

.prepare_ssh_key: &prepare_ssh_key
  - echo $ADMIN_SERVER_SSH_KEY > $CI_PROJECT_DIR/ssh_key.pem
  - cat ssh_key.pem
  - sed -i -e 's/-----BEGIN RSA PRIVATE KEY-----/-bk-/g' ssh_key.pem
  - sed -i -e 's/-----END RSA PRIVATE KEY-----/-ek-/g' ssh_key.pem
  - perl -p -i -e 's/\s/\n/g' ssh_key.pem
  - sed -i -e 's/-bk-/-----BEGIN RSA PRIVATE KEY-----/g' ssh_key.pem
  - sed -i -e 's/-ek-/-----END RSA PRIVATE KEY-----/g' ssh_key.pem
  - cat ssh_key.pem
  - chmod 400 ssh_key.pem

connect-admin-server:
  stage: build
  allow_failure: true
  image:
    name: amazon/aws-cli:latest
    entrypoint: [ "" ]
  rules:
    - if: '$ADMIN_SERVER_IP && $ADMIN_SERVER_IP != "" && $ADMIN_SERVER_SSH_KEY && $ADMIN_SERVER_SSH_KEY != ""'
  extends:
    - .ifawsdeploy
  script:
    - TF_IN_AUTOMATION=true
    - yum update -y
    - yum install git unzip gettext jq -y
    - echo "Your admin server key and info are added as artifacts"
    # Copy the important terraform outputs to files for artifacts to pass into other jobs
    - *prepare_ssh_key
    - echo "ssh -i ssh_key.pem ec2-user@${ADMIN_SERVER_IP}" > $CI_PROJECT_DIR/ssh_command.txt
    - echo "scp -q -i ssh_key.pem %s ec2-user@${ADMIN_SERVER_IP}:~" > $CI_PROJECT_DIR/scp_command.txt
    - test_pre_command="$(cat "$CI_PROJECT_DIR/ssh_command.txt") -o StrictHostKeyChecking=no"
    - echo $test_pre_command
    - test_command="$(echo $test_pre_command | sed -r 's/(ssh )(.*)/\1-tt \2/')"
    - echo $test_command
    - echo "sudo yum install -yq $PACKAGEURL 2>&1 | tee outfile ; exit 0" | $test_command
    - *copy_remote_log
    - echo "Now checking log file for returnCode"
    - *check_remote_log
  artifacts:
    untracked: true
    when: always
    paths:
      - "$CI_PROJECT_DIR/ssh_key.pem"
      - "$CI_PROJECT_DIR/ssh_command.txt"
      - "$CI_PROJECT_DIR/scp_command.txt"
  after_script:
    - cat $CI_PROJECT_DIR/ssh_key.pem
    - cat $CI_PROJECT_DIR/ssh_command.txt
    - cat $CI_PROJECT_DIR/scp_command.txt

create-admin-server:
  stage: build
  allow_failure: false
  image:
    name: amazon/aws-cli:latest
    entrypoint: [ "" ]
  rules:
    - if: '$ADMIN_SERVER_IP != ""'
      when: never
  extends:
    - .ifawsdeploy
  script:
    - echo "admin server $ADMIN_SERVER_IP"
    - TF_IN_AUTOMATION=true
    - yum update -y
    - yum install git unzip gettext jq -y
    - *configure_aws_cli
    - aws sts get-caller-identity --profile $ACCOUNT_NAME #to check whether updated correctly or not
    - git clone "https://project-n-setup:$(echo $PERSONAL_GITLAB_TOKEN)@gitlab.com/company-oss/project-n-setup.git"
    # Install tfenv
    - git clone https://github.com/tfutils/tfenv.git ~/.tfenv
    - ln -s ~/.tfenv /root/.tfenv
    - ln -s ~/.tfenv/bin/* /usr/local/bin
    # Install terraform 1.1.9 through tfenv
    - tfenv install $tfenv_version
    - tfenv use $tfenv_version
    # Copy the tfvars temp file to the terraform setup directory
    - cp .gitlab/admin_server.temp_tfvars project-n-setup/$SUB_PLATFORM/
    - cd project-n-setup/$SUB_PLATFORM/
    - envsubst < admin_server.temp_tfvars > admin_server.tfvars
    - rm -rf .terraform || exit 0
    - cat ~/.aws/config
    - terraform init -input=false
    - terraform apply -var-file=admin_server.tfvars -input=false -auto-approve
    - echo "Your admin server key and info are added as artifacts"
    # Copy the important terraform outputs to files for artifacts to pass into other jobs
    - terraform output -raw ssh_key > $CI_PROJECT_DIR/ssh_key.pem
    - terraform output -raw ssh_command > $CI_PROJECT_DIR/ssh_command.txt
    - terraform output -raw scp_command > $CI_PROJECT_DIR/scp_command.txt
    - cp $CI_PROJECT_DIR/project-n-setup/$SUB_PLATFORM/terraform.tfstate $CI_PROJECT_DIR
    - cp $CI_PROJECT_DIR/project-n-setup/$SUB_PLATFORM/admin_server.tfvars $CI_PROJECT_DIR
  artifacts:
    untracked: true
    paths:
      - "$CI_PROJECT_DIR/ssh_key.pem"
      - "$CI_PROJECT_DIR/ssh_command.txt"
      - "$CI_PROJECT_DIR/scp_command.txt"
      - "$CI_PROJECT_DIR/terraform.tfstate"
      - "$CI_PROJECT_DIR/admin_server.tfvars"
How can I fix that?
I tried the step below, following suggestions from the comments section.
.generalgrabclustertrigger:
  rules:
    - if: '$TEST_CREATE_ADMIN && $REGION && $ROLE_ARN && $PACKAGEURL && $TEST_CREATE_ADMIN == "aws" && $SUB_PLATFORM == "aws" && $ROLE_ARN != "" && $PACKAGEURL != "" && $REGION != ""'
.ifteardownordestroy: # Automatic if triggered from gitlab api AND destroy variable is set
  rules:
    - !reference [.generalgrabclustertrigger, rules]
    - if: 'CI_PIPELINE_SOURCE == "triggered"'
      when: never
And included the above in extends of a job.
destroy-admin-server:
  stage: cleanup
  extends:
    - .ifteardownordestroy
  allow_failure: true
  interruptible: false
But I am getting a syntax error in the .ifteardownordestroy part:
jobs:destroy-admin-server:rules:rule if invalid expression syntax
You are overriding rules: in your job that extends .ifawsdeploy. rules: are not combined in this case -- the definition of rules: in the job takes complete precedence.
Take for example the following configuration:
.template:
  rules:
    - one
    - two
myjob:
  extends: .template
  rules:
    - a
    - b
In the above example, the myjob job only has rules a and b in effect. Rules one and two are completely ignored because they are overridden in the job configuration.
Instead of using extends:, you can use !reference to preserve and combine rules. You can also use YAML anchors if you want.
create-admin-server:
  rules:
    - !reference [.ifawsdeploy, rules]
    - ... # your additional rules
If no value provided to $ADMIN_SERVER_IP then create_admin_server should run
Lastly, pay special attention to your rules:
rules:
  - if: '$ADMIN_SERVER_IP != ""'
    when: never
In this case, there are no rules that allow the job to run ever. You either need a case that will evaluate true for the job to run, or a default case (an item with no if: condition) in order for the job to run.
To get the behavior you expect, you probably want your default case to be on_success:
rules:
  - if: '$ADMIN_SERVER_IP != ""'
    when: never
  - when: on_success
You can change your rules to:
rules:
  - if: '$ADMIN_SERVER_IP != ""'
    when: never
  - when: always
or
rules:
  - if: '$ADMIN_SERVER_IP == ""'
    when: always
I have a sample here: try-rules-stackoverflow-72545625 - GitLab, and the pipeline records: Pipeline no value - GitLab, Pipeline has value - GitLab.
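Separately, to debug the invalid expression syntax error itself before pushing, one option is GitLab's CI Lint API (a sketch; $PROJECT_ID and $GITLAB_TOKEN are placeholders for your project ID and an API token with api scope):
## ask GitLab to lint .gitlab-ci.yml and report rule/expression errors
curl --silent --request POST \
  --header "PRIVATE-TOKEN: $GITLAB_TOKEN" \
  --header "Content-Type: application/json" \
  --data "$(jq -n --arg content "$(cat .gitlab-ci.yml)" '{content: $content}')" \
  "https://gitlab.com/api/v4/projects/$PROJECT_ID/ci/lint" | jq '{valid, errors}'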

Overlay and override two yaml documents with yq

I would like to overlay a yaml document on top of another document, updating array entries with the overlaid values.
Given the base/source file:
#base.yaml
include:
  - directory: foo/foo1
  - directory: bar/bar1
and the overlay file:
#overlay.yaml
include:
  - directory: foo/foo1
    extra: true
    stuff:
      stuff1: true
      stuff2: true
  - directory: something/else
and the result should look like
#results.yaml
include:
  - directory: foo/foo1
    extra: true
    stuff:
      stuff1: true
      stuff2: true
  - directory: bar/bar1
  - directory: something/else
I think I am close to having it working with yq from this post, but the list elements are not overridden; instead I get duplicates of foo/foo1:
yq eval-all '. as $item ireduce ({}; . *+ $item)' base.yaml overlay.yaml produces
#base.yaml
#overlay.yaml
include:
  - directory: foo/foo1
  - directory: bar/bar1
  - directory: foo/foo1
    extra: true
    stuff:
      stuff1: true
      stuff2: true
  - directory: something/else
Removing the + after the * will drop bar/bar1 from the output.
Basically I think I'm operating on the include, not the values of the include keys. I would greatly appreciate any help in getting the overlay working.
Have a look at this example here: https://mikefarah.gitbook.io/yq/operators/multiply-merge#merge-arrays-of-objects-together-matching-on-a-key
Basically you will need to:
yq eval-all '
(
((.include[] | {.directory: .}) as $item ireduce ({}; . * $item )) as $uniqueMap
| ( $uniqueMap | to_entries | .[]) as $item ireduce([]; . + $item.value)
) as $mergedArray
| select(fi == 0) | .include= $mergedArray
' sample.yml another.yml
Disclosure: I wrote yq
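Applied to the filenames from the question, the same expression writes the merged result to a new file:
yq eval-all '
  (
    ((.include[] | {.directory: .}) as $item ireduce ({}; . * $item )) as $uniqueMap
    | ( $uniqueMap | to_entries | .[]) as $item ireduce([]; . + $item.value)
  ) as $mergedArray
  | select(fi == 0) | .include = $mergedArray
' base.yaml overlay.yaml > results.yaml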
