Unconditionally get the pip install source in GitHub Actions - pip

I want to test my Python package not only when built locally, but also when it is installed via pip. For this, I basically want to do a pip3 install git+https://github.com/…, like this:
name: CI test
on: [push, pull_request]
jobs:
  tests:
    name: Pip test
    runs-on: ubuntu-20.04
    steps:
      - name: Setup basic environment
        run: |
          sudo apt-get update
          sudo apt-get install --no-install-recommends python3-dev python3-pip
      - name: Install my package via pip
        run: |
          pip3 install git+$GITHUB_SERVER_URL/$GITHUB_REPOSITORY@$GITHUB_SHA
      - name: Run tests
        run: |
          python3 -m pytest -s --pyargs mypkg
This works nicely for pushed commits, but the pull request runs fail, claiming that the GITHUB_SHA is "not a tree".
How can I change this so that it works on both commits and pull requests?
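One possible workaround, sketched here under the assumption that the PR's head commit is what should be tested: on pull_request events, GITHUB_SHA points at an ephemeral merge commit that the clone performed by pip cannot check out, so fall back to the head SHA exposed in the event payload.
- name: Install my package via pip
  env:
    # On pull_request events GITHUB_SHA refers to a temporary merge commit
    # that is not fetched by a plain clone, so use the PR head SHA instead
    # (this expression is an assumption, not taken from the question).
    REF: ${{ github.event.pull_request.head.sha || github.sha }}
  run: |
    pip3 install "git+$GITHUB_SERVER_URL/$GITHUB_REPOSITORY@$REF"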

Related

Connection refused for local server in GitHub Actions workflow

I'm trying to run a local server for my project's CI/CD pipeline. When I start the server, I get a "Connection refused" error.
My project is a FastAPI application, and I'm trying to run integration tests on PRs to validate the app before merging the code. I tried starting my app directly (gunicorn), building a Docker image and starting that... I tried a lot of things. Then I tried to run a simple server instead of my app and... got the same error!
This is my simple server workflow:
on:
  push:
    branches:
      - "develop"
jobs:
  mylocalserver:
    name: Run server
    runs-on: ubuntu-latest
    steps:
      - name: Setup python
        uses: actions/setup-python@v3
        with:
          python-version: 3.9
      - name: Run server in background
        run: |
          python -V
          python -m http.server 3000 &> /dev/null &
          sudo lsof -i -P -n | grep LISTEN
          curl http://localhost:3000
      - name: Run server with console
        run: |
          python -m http.server 3000
The output shows the curl in the first step failing with "Connection refused".
If I run my app in the foreground (no daemon mode in gunicorn), the server starts and logs to the console in the workflow successfully. But this way I cannot run anything after it (and I have to cancel the workflow). Any idea? Thank you!
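One thing that may be worth ruling out (an assumption, not a confirmed diagnosis): curl might simply run before the backgrounded server has bound the port. A minimal sketch of the step with a short retry loop:
- name: Run server in background
  run: |
    python -m http.server 3000 &> /dev/null &
    # Retry for a few seconds instead of curling immediately, in case the
    # server has not bound the port yet (assumption).
    for i in $(seq 1 10); do
      if curl -fsS http://localhost:3000; then exit 0; fi
      sleep 1
    done
    echo "server never became reachable" >&2
    exit 1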
Maybe not the best answer, but for now running the job in a container works (just add a container key to the example in the question). Example for my FastAPI app:
on:
  pull_request:
    branches:
      - 'main'
      - 'develop'
jobs:
  run-on-pr:
    runs-on: ubuntu-latest
    container: ubuntu
    services:
      mongodb:
        image: mongo
        ports:
          - 27017:27017
    steps:
      - name: Setup git
        run: |
          apt-get update; apt-get install -y git
      - name: Git checkout
        uses: actions/checkout@v3
        with:
          path: api
      - name: Setup python
        uses: actions/setup-python@v4
        with:
          python-version: 3.9
      - name: Install pip
        run: |
          apt-get update; apt-get install -y python3-pip
      - name: Build and ENV
        run: |
          cd api
          cp .env_example_docker .env
          pip3 install -r requirements.txt
      - name: Run fastAPI
        run: |
          cd api
          gunicorn -D -k uvicorn.workers.UvicornWorker -c ./gunicorn_conf.py app.main:app
        env:
          MONGO_URL: "mongodb://mongodb:27017"
      - name: Install curl
        run: |
          apt-get update; apt-get install -y curl
      - name: Run curl
        run: |
          curl http://localhost:3000
This works, but I have to install everything in the container (git, pip). I will try a solution without the container key, and if I find anything I will post it here.
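For reference, a rough sketch of such a container-free variant, assuming the checkout lands in the workspace root, that gunicorn_conf.py binds port 3000, and that the MongoDB service is therefore reachable on localhost:
jobs:
  run-on-pr:
    runs-on: ubuntu-latest
    services:
      mongodb:
        image: mongo
        ports:
          - 27017:27017
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-python@v4
        with:
          python-version: "3.9"
      - name: Build and ENV
        run: |
          cp .env_example_docker .env
          pip3 install -r requirements.txt
      - name: Run fastAPI and probe it
        env:
          # Without a job container, the published service port is on localhost.
          MONGO_URL: "mongodb://localhost:27017"
        run: |
          gunicorn -D -k uvicorn.workers.UvicornWorker -c ./gunicorn_conf.py app.main:app
          # Give the daemonized server a moment to bind before curling
          # (assumes gunicorn_conf.py binds port 3000).
          for i in $(seq 1 10); do
            if curl -fsS http://localhost:3000; then exit 0; fi
            sleep 1
          done
          exit 1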

How to avoid installing all CentOS packages every time I run a GitLab CI pipeline?

I'm running a GitLab CI pipeline with a CentOS image.
The pipeline has a before_script that runs a set of commands.
gitlab-ci.yaml
variables:
  WORKSPACE_HOME: '$CI_PROJECT_DIR'
  DELIVERY_HOME: delivery
  PIP_CACHE_DIR: "$CI_PROJECT_DIR/.cache/pip"

default:
  image: centos:latest
  cache:
    paths:
      - .cache/pip
  before_script:
    - chmod u+x devops/scripts/*.sh
    - devops/scripts/install-ci.sh
    - python3 -m ensurepip --upgrade
    - cp .env.docker.dist .env
    - pip3 install --upgrade pip
    - pip3 install -r requirements.txt
install-ci.sh
sed -i 's/mirrorlist/#mirrorlist/g' /etc/yum.repos.d/CentOS-Linux-* &&\
sed -i 's|#baseurl=http://mirror.centos.org|baseurl=http://vault.centos.org|g' /etc/yum.repos.d/CentOS-Linux-*
yum -y update
yum -y install gcc gcc-c++ make
yum -y install python3.8
yum -y install python3-setuptools
yum -y groupinstall "Development Tools"
yum -y install python3-devel
yum -y install postgresql-server
yum -y install postgresql-devel
yum -y install postgresql-libs
yum -y install python3-pip
timedatectl set-timezone Europe/Paris
yum -y install sqlite-devel
The issue is that every time I run the CI pipeline it takes time to update CentOS and install all of these packages.
Is there a way to avoid this, or to cache this operation somewhere?
You could create your own image in which all your dependencies are installed, and use it in your job instead of installing the dependencies all over again. I would create a dedicated project on your GitLab instance, something like "centos-python-postgress", and within this project create a Dockerfile in which you install everything you need. (You can either copy your install-ci.sh or RUN the commands directly within your Dockerfile):
FROM centos:latest
RUN sed -i 's/mirrorlist/#mirrorlist/g' /etc/yum.repos.d/CentOS-Linux-* && sed -i 's|#baseurl=http://mirror.centos.org|baseurl=http://vault.centos.org|g' /etc/yum.repos.d/CentOS-Linux-*
RUN yum -y update
RUN yum -y install gcc gcc-c++ make
...
You can now either build the Dockerfile on your machine and push it manually to the container registry of this project, or create a CI pipeline that builds and pushes that image automatically:
stages:
  - build

build:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  script:
    - mkdir -p /kaniko/.docker
    - echo "{\"auths\":{\"${CI_REGISTRY}\":{\"auth\":\"$(printf "%s:%s" "${CI_REGISTRY_USER}" "${CI_REGISTRY_PASSWORD}" | base64 | tr -d '\n')\"}}}" > /kaniko/.docker/config.json
    - >-
      /kaniko/executor
      --context "${CI_PROJECT_DIR}"
      --dockerfile "${CI_PROJECT_DIR}/Dockerfile"
      --destination "${CI_REGISTRY_IMAGE}:latest"
Now, instead of using centos:latest in your original project/job, you can use your own image:
variables:
  WORKSPACE_HOME: '$CI_PROJECT_DIR'
  DELIVERY_HOME: delivery
  PIP_CACHE_DIR: "$CI_PROJECT_DIR/.cache/pip"

default:
  image: registry.gitlab.com/snowfire/centos-python-postgress:latest
  cache:
    paths:
      - .cache/pip
  before_script:
    - ...
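Assuming pip is baked into the image as well, the before_script of the origin project could then shrink to something like this sketch:
default:
  image: registry.gitlab.com/snowfire/centos-python-postgress:latest
  cache:
    paths:
      - .cache/pip
  before_script:
    # install-ci.sh and the pip bootstrap are no longer needed here, since
    # the image already contains the system packages (assumption).
    - cp .env.docker.dist .env
    - pip3 install -r requirements.txt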

How to apt-get install in a GitHub Actions workflow?

In the new GitHub Actions, I am trying to install a package in order to use it in one of the next steps.
name: CI
on: [push, pull_request]
jobs:
  translations:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v1
        with:
          fetch-depth: 1
      - name: Install xmllint
        run: apt-get install libxml2-utils
      # ...
However this fails with
Run apt-get install libxml2-utils
apt-get install libxml2-utils
shell: /bin/bash -e {0}
E: Could not open lock file /var/lib/dpkg/lock-frontend - open (13: Permission denied)
E: Unable to acquire the dpkg frontend lock (/var/lib/dpkg/lock-frontend), are you root?
##[error]Process completed with exit code 100.
What's the best way to do this? Do I need to reach for Docker?
The docs say:
The Linux and macOS virtual machines both run using passwordless sudo. When you need to execute commands or install tools that require more privileges than the current user, you can use sudo without needing to provide a password.
So simply doing the following should work:
- name: Install xmllint
  run: sudo apt-get install -y libxml2-utils
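If the package index on the runner happens to be stale, the install can fail to find the package; a hedged variant that refreshes the index first:
- name: Install xmllint
  run: |
    # Refresh the package lists before installing (defensive, optional).
    sudo apt-get update
    sudo apt-get install -y libxml2-utils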
Please see this answer here: https://stackoverflow.com/a/73500415/2038264
cache-apt-pkgs-action can both install and cache apt packages, so your subsequent builds are fast. It is also easier to configure; just add the packages you want:
- uses: awalsh128/cache-apt-pkgs-action@latest
  with:
    packages: dia doxygen doxygen-doc doxygen-gui doxygen-latex graphviz mscgen
    version: 1.0

SSL certificate error while running python -m nltk.downloader -d $NLTK_DATA punkt command on AWS Lambda

Getting an SSL certificate error while deploying the following code to AWS Lambda using an AWS CodeStar build pipeline.
I looked at multiple community discussions; nothing worked out.
version: 0.2

phases:
  install:
    commands:
      # Upgrade AWS CLI & PIP to the latest version
      - pip install --upgrade awscli
      - pip install --upgrade pip
      # Define Directories
      - export HOME_DIR=`pwd`
      - export NLTK_DATA=$HOME_DIR/nltk_data
  pre_build:
    commands:
      - cd $HOME_DIR
      # Create VirtualEnv to package for lambda
      - virtualenv venv
      - . venv/bin/activate
      # Install Supporting Libraries
      - pip install -U scikit-learn
      - pip install -U requests
      # Install WordNet
      - pip install -U nltk
      - python -m nltk.downloader -d $NLTK_DATA punkt
      # Output Requirements
      - pip freeze > requirements.txt
      # Discover and run unit tests in the 'tests' directory. For more information, see <https://docs.python.org/3/library/unittest.html#test-discovery>
      - python -m unittest discover tests
  build:
    commands:
      - cd $HOME_DIR
      - mv $VIRTUAL_ENV/lib/python3.6/site-packages/* .
The only way that worked for me was to download the modules and install them into an nltk_data folder inside my source folder, then create a Lambda environment variable NLTK with the value ./nltk_data.

Share installed requirements between jobs

I have the following YAML configuration file with three different jobs:
stages:
  - build
  - test
  - analyze

build:
  stage: build
  script:
    - apt-get update -qy
    - apt-get install -y python3-dev python3-pip
    - pip3 install -r requirements.txt

test:
  stage: test
  script:
    - coverage run --source='.' manage.py test
  cache:
    paths:
      - .coverage

analyze:
  stage: analyze
  script:
    - flake8
    - coverage report
In the first job I install the requirements, among which are coverage and flake8, but these tools are used in the following jobs. I have tried using 'dependencies' and 'cache', but it didn't work: only files and directories under the project root can be shared, not the binaries installed under /usr/local/bin.
I have also tried pointing pip install at another directory, but the binaries still end up in /usr/local/bin.
The workaround I have found is to install the dependencies in each job, but I think this is the least optimal solution.
I think there must be a better solution for this.
Thanks.
I just found a solution, at least for Python 3 (enough for me):
Python 3 has a built-in tool for managing virtual envs: venv.
Using venv, we can create the virtual env in the project root dir, cache this dir, and activate the virtual env in each job.
variables:
  VENV_NAME: "env"

cache:
  paths:
    - $VENV_NAME

first_job:
  script:
    - apt-get update -qy
    - apt-get install -y python3-dev python3-pip python3-venv
    - python3 -m venv $VENV_NAME
    - source $VENV_NAME/bin/activate
    - pip3 install -r requirements.txt

next_jobs:
  script:
    - source $VENV_NAME/bin/activate
    - echo "hello world!"
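Tying this back to the jobs from the question, a rough sketch of the remaining jobs (assuming the same stages and tools as above, and that the .coverage file is passed along, e.g. as an artifact):
test:
  stage: test
  script:
    - source $VENV_NAME/bin/activate
    - coverage run --source='.' manage.py test
  # The .coverage data file still has to reach the analyze job,
  # for example as an artifact (assumption).
  artifacts:
    paths:
      - .coverage

analyze:
  stage: analyze
  script:
    - source $VENV_NAME/bin/activate
    - flake8
    - coverage report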
P.S.: don't forget to exclude the virtual env dir from coverage and other analysis tools.
