Share installed requirements between jobs - pip

I have the following YAML configuration file with 3 different jobs:
stages:
  - build
  - test
  - analyze

build:
  stage: build
  script:
    - apt-get update -qy
    - apt-get install -y python3-dev python3-pip
    - pip3 install -r requirements.txt

test:
  stage: test
  script:
    - coverage run --source='.' manage.py test
  cache:
    paths:
      - .coverage

analyze:
  stage: analyze
  script:
    - flake8
    - coverage report
In the first job I install the requirements, among which are coverage and flake8. But these tools are needed in the following jobs too. I have tried using 'dependencies' and 'cache', but neither worked: only files/dirs under the project root directory can be shared, not the binaries under /usr/local/bin.
I have also tried pointing pip install at another directory, but the binaries still end up in /usr/local/bin.
The workaround I have found is to install the dependencies in each job, but I think this is the least optimal solution.
There must be a better way to do this.
Thanks.

I just found a solution, at least for Python 3 (enough for me):
Python 3 has a built-in tool for managing virtual environments: venv.
Using venv, we can create the virtual environment in the project root directory, cache that directory, and activate the virtual environment in each job.
variables:
  VENV_NAME: "env"

cache:
  paths:
    - $VENV_NAME

first_job:
  script:
    - apt-get update -qy
    - apt-get install -y python3-dev python3-pip python3-venv
    - python3 -m venv $VENV_NAME
    - source $VENV_NAME/bin/activate
    - pip3 install -r requirements.txt

next_jobs:
  script:
    - source $VENV_NAME/bin/activate
    - echo "hello world!"
PS: don't forget to exclude the virtual env directory from coverage and other analysis tools.
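For example (a minimal sketch, assuming the $VENV_NAME variable from above), the exclusion can be passed straight on the command line in the test/analyze jobs:

  script:
    # --omit and --exclude are the standard coverage.py and flake8 options
    # for skipping paths such as the cached virtual env
    - coverage run --source='.' --omit="$VENV_NAME/*" manage.py test
    - flake8 --exclude=$VENV_NAME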

Related

How can I get latest and latest-1 pip version of package in .gitlab-ci.yml

I would like to run tests for my Ansible roles in GitLab CI, testing them with both the latest Ansible release and the previous major release.
For now, this is how my .gitlab-ci.yml looks:
---
stages:
  - test

before_script:
  - python3.9 -m venv ${CI_PROJECT_NAME}_molecule_venv
  - source ${CI_PROJECT_NAME}_molecule_venv/bin/activate
  - pip3 install -U pip
  - if [ ${ANSIBLE_VERSION} != "latest" ]; then pip3 install ansible==${ANSIBLE_VERSION}; fi
  - pip3 install -r ci_requirements.txt

standard_checks_ansible:
  stage: test
  variables:
    PY_COLORS: '1'
    ANSIBLE_FORCE_COLOR: '1'
    GIT_SUBMODULE_STRATEGY: recursive
  script:
    - molecule test
  parallel:
    matrix:
      - ANSIBLE_VERSION: ["2.9", "latest"]
  only:
    - merge_requests
I would like ANSIBLE_VERSION to contain something like latest and latest-1, in order to test with the previous major release of Ansible.
How can I achieve this?
Thanks!
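One way this could be approached (a sketch, not from the original thread): put a literal "latest-1" in the matrix and resolve it at job time from PyPI's JSON API, assuming "previous major release" means the second-highest first version component:

before_script:
  # ... venv setup as above ...
  - |
    if [ "${ANSIBLE_VERSION}" = "latest-1" ]; then
      # list all published ansible versions, take the second-highest major,
      # then let pip pick the newest release within that major line
      PREV_MAJOR=$(python3 -c "import json,urllib.request; rs=json.load(urllib.request.urlopen('https://pypi.org/pypi/ansible/json'))['releases']; print(sorted({int(v.split('.')[0]) for v in rs if v[0].isdigit()})[-2])")
      pip3 install "ansible==${PREV_MAJOR}.*"
    fi

with ANSIBLE_VERSION: ["latest-1", "latest"] in the parallel:matrix.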

Unconditionally get the pip install source in GitHub Actions

I want to test my Python package not only when built locally, but also when it is installed via pip. For this, I basically want to do a pip3 install git+https://github.com/…, like this:
name: CI test
on: [push, pull_request]
jobs:
  tests:
    name: Pip test
    runs-on: ubuntu-20.04
    steps:
      - name: Setup basic environment
        run: |
          sudo apt-get update
          sudo apt-get install --no-install-recommends python3-dev python3-pip
      - name: Install my package via pip
        run: |
          pip3 install git+$GITHUB_SERVER_URL/$GITHUB_REPOSITORY@$GITHUB_SHA
      - name: Run tests
        run: |
          python3 -m pytest -s --pyargs mypkg
This works nicely for pushed commits, but the pull request actions fail, claiming that GITHUB_SHA is "not a tree".
How can I change this so that it works for both commits and pull requests?
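A sketch of one possible fix (not from the original thread): on pull_request events, GITHUB_SHA points at an ephemeral merge commit that is not reachable from the branches pip clones, so one can fall back to the PR's head SHA from the event context:

      - name: Install my package via pip
        run: |
          # use the PR head commit on pull_request events, GITHUB_SHA otherwise
          pip3 install git+$GITHUB_SERVER_URL/$GITHUB_REPOSITORY@${{ github.event.pull_request.head.sha || github.sha }}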

How to apt-get install in a GitHub Actions workflow?

In the new GitHub Actions, I am trying to install a package in order to use it in one of the next steps.
name: CI
on: [push, pull_request]
jobs:
  translations:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v1
        with:
          fetch-depth: 1
      - name: Install xmllint
        run: apt-get install libxml2-utils
      # ...
However this fails with
Run apt-get install libxml2-utils
apt-get install libxml2-utils
shell: /bin/bash -e {0}
E: Could not open lock file /var/lib/dpkg/lock-frontend - open (13: Permission denied)
E: Unable to acquire the dpkg frontend lock (/var/lib/dpkg/lock-frontend), are you root?
##[error]Process completed with exit code 100.
What's the best way to do this? Do I need to reach for Docker?
The docs say:
The Linux and macOS virtual machines both run using passwordless sudo. When you need to execute commands or install tools that require more privileges than the current user, you can use sudo without needing to provide a password.
So simply doing the following should work:
- name: Install xmllint
  run: sudo apt-get install -y libxml2-utils
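On a fresh runner the package lists can be stale, so it may help to refresh them first (a small variation on the step above, not from the quoted docs):

- name: Install xmllint
  run: |
    sudo apt-get update
    sudo apt-get install -y libxml2-utils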
Please see this answer: https://stackoverflow.com/a/73500415/2038264
cache-apt-pkgs-action can both install and cache an apt package, so your subsequent builds are fast. It is also easier to configure: just add the packages you want:
- uses: awalsh128/cache-apt-pkgs-action@latest
  with:
    packages: dia doxygen doxygen-doc doxygen-gui doxygen-latex graphviz mscgen
    version: 1.0

SSL certificate error while running python -m nltk.downloader -d $NLTK_DATA punkt on AWS Lambda

I am getting an SSL certificate error while deploying the following code to AWS Lambda using an AWS CodeStar build pipeline.
I have looked at multiple community discussions, but nothing worked.
version: 0.2
phases:
  install:
    commands:
      # Upgrade AWS CLI & pip to the latest version
      - pip install --upgrade awscli
      - pip install --upgrade pip
      # Define directories
      - export HOME_DIR=`pwd`
      - export NLTK_DATA=$HOME_DIR/nltk_data
  pre_build:
    commands:
      - cd $HOME_DIR
      # Create a virtualenv to package for Lambda
      - virtualenv venv
      - . venv/bin/activate
      # Install supporting libraries
      - pip install -U scikit-learn
      - pip install -U requests
      # Install WordNet
      - pip install -U nltk
      - python -m nltk.downloader -d $NLTK_DATA punkt
      # Output requirements
      - pip freeze > requirements.txt
      # Discover and run unit tests in the 'tests' directory. For more information,
      # see https://docs.python.org/3/library/unittest.html#test-discovery
      - python -m unittest discover tests
  build:
    commands:
      - cd $HOME_DIR
      - mv $VIRTUAL_ENV/lib/python3.6/site-packages/* .
The only way that worked for me was to download the modules into an nltk_data folder inside my source directory, then create a Lambda environment variable NLTK_DATA with the value ./nltk_data.
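To sanity-check that workaround during the build (a sketch, not from the original answer), one could verify that NLTK resolves the bundled data before packaging:

      # assumes punkt was downloaded into ./nltk_data as described above;
      # nltk.data.find raises a LookupError if the data cannot be located
      - NLTK_DATA=./nltk_data python -c "import nltk; nltk.data.find('tokenizers/punkt')"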

CI/CD GitLab deployment failed - dpl command not found

The pipeline code in .gitlab-ci.yml worked until yesterday, but today I got an error that says "dpl: command not found".
Below is my .gitlab-ci.yml file:
image: node:8.9.3

stages:
  - job1
  - test
  - production

job1:
  stage: job1
  script: "ls -l"

test:
  stage: test
  script:
    - npm install

production:
  type: deploy
  stage: production
  image: ruby:latest
  script:
    - apt-get update -qy
    - apt-get install -y ruby-dev
    - gem install dpl
    - dpl --provider=heroku --app=quailapp --api-key=$HEROKU_PRODUCTION_API_KEY
  only:
    - master
This is the generated log:
Setting up rake (10.5.0-2) ...
Setting up libruby2.3:amd64 (2.3.3-1+deb9u2) ...
Setting up ruby2.3 (2.3.3-1+deb9u2) ...
Setting up ruby2.3-dev:amd64 (2.3.3-1+deb9u2) ...
Setting up ruby-dev:amd64 (1:2.3.3) ...
Setting up ruby (1:2.3.3) ...
Processing triggers for libc-bin (2.24-11+deb9u3) ...
$ gem install dpl
Successfully installed dpl-1.9.6
1 gem installed
$ dpl --provider=heroku --app=quailapp --api-key=$HEROKU_PRODUCTION_API_KEY
/bin/bash: line 68: dpl: command not found
ERROR: Job failed: exit code 1
Please help me find a solution.
Same here. Issuing the install command with verbosity (gem install dpl --verbose), I was able to see something odd:
/usr/local/bundle/bin/dpl
Successfully installed dpl-1.9.6
1 gem installed
I don't know why, but it is installed in a non-default path. As a workaround, I added /usr/local/bundle/bin to the $PATH environment variable with the following command:
export PATH=$PATH:/usr/local/bundle/bin
This works for me, and my GitLab CI pipelines are running again.
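Applied to the pipeline above, the workaround might look like this (a sketch of the production job's script):

  script:
    - apt-get update -qy
    - apt-get install -y ruby-dev
    - gem install dpl
    # gem put the dpl binary under /usr/local/bundle/bin, so add it to PATH
    - export PATH=$PATH:/usr/local/bundle/bin
    - dpl --provider=heroku --app=quailapp --api-key=$HEROKU_PRODUCTION_API_KEY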
By the way, it would be great to know why this changed so suddenly...
Same problem here. I think it's a problem in the Docker image; see https://github.com/docker-library/ruby/pull/209
They made some changes that broke the path for gem binaries. We have to wait until they merge the fix.
UPDATE: it's already merged, and their fix works for me.
