How to apt-get install in a GitHub Actions workflow?

In the new GitHub Actions, I am trying to install a package in order to use it in one of the next steps.
name: CI
on: [push, pull_request]
jobs:
  translations:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v1
        with:
          fetch-depth: 1
      - name: Install xmllint
        run: apt-get install libxml2-utils
      # ...
However this fails with
Run apt-get install libxml2-utils
apt-get install libxml2-utils
shell: /bin/bash -e {0}
E: Could not open lock file /var/lib/dpkg/lock-frontend - open (13: Permission denied)
E: Unable to acquire the dpkg frontend lock (/var/lib/dpkg/lock-frontend), are you root?
##[error]Process completed with exit code 100.
What's the best way to do this? Do I need to reach for Docker?

The docs say:
The Linux and macOS virtual machines both run using passwordless sudo. When you need to execute commands or install tools that require more privileges than the current user, you can use sudo without needing to provide a password.
So simply doing the following should work:
- name: Install xmllint
  run: sudo apt-get install -y libxml2-utils
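On a fresh runner the package index can be stale, so apt-get may fail to find the package; running apt-get update first is a safe habit. A minimal sketch of the same step with the update added:
- name: Install xmllint
  run: |
    sudo apt-get update
    sudo apt-get install -y libxml2-utils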

Please see this answer: https://stackoverflow.com/a/73500415/2038264
cache-apt-pkgs-action can both install and cache apt packages, so your subsequent builds are fast. It is also easier to configure: just list the packages you want:
- uses: awalsh128/cache-apt-pkgs-action@latest
  with:
    packages: dia doxygen doxygen-doc doxygen-gui doxygen-latex graphviz mscgen
    version: 1.0
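Per the action's documentation, the version input is a cache key rather than a package version: each value gets its own cache, and bumping it invalidates the previously cached packages.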

Related

Unconditionally get the pip install source in GitHub Actions

I want to test my Python package not only when built locally, but also when it is installed via pip. For this, I basically want to do a pip3 install git+https://github.com/…, like this:
name: CI test
on: [push, pull_request]
jobs:
  tests:
    name: Pip test
    runs-on: ubuntu-20.04
    steps:
      - name: Setup basic environment
        run: |
          sudo apt-get update
          sudo apt-get install --no-install-recommends python3-dev python3-pip
      - name: Install my package via pip
        run: |
          pip3 install git+$GITHUB_SERVER_URL/$GITHUB_REPOSITORY@$GITHUB_SHA
      - name: Run tests
        run: |
          python3 -m pytest -s --pyargs mypkg
This works nicely for pushed commits, but the pull request actions fail, claiming that the GITHUB_SHA is "not a tree".
How can I change this so that it works on both commits and pull requests?
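One hedged sketch of a fix, assuming standard GitHub Actions behavior: on pull_request events, GITHUB_SHA refers to an ephemeral merge commit that is not reachable as a tree in the repository, which is why pip reports "not a tree". Falling back to the pull request's head SHA from the event payload should work for both event types:
- name: Install my package via pip
  run: |
    pip3 install git+$GITHUB_SERVER_URL/$GITHUB_REPOSITORY@$REF
  env:
    # on push events, the pull_request context is empty and github.sha is used
    REF: ${{ github.event.pull_request.head.sha || github.sha }}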

Share installed requirements between jobs

I have the following yml configuration file with 3 different jobs:
stages:
  - build
  - test
  - analyze
build:
  stage: build
  script:
    - apt-get update -qy
    - apt-get install -y python3-dev python3-pip
    - pip3 install -r requirements.txt
test:
  stage: test
  script:
    - coverage run --source='.' manage.py test
  cache:
    paths:
      - .coverage
analyze:
  stage: analyze
  script:
    - flake8
    - coverage report
In the first job I install the requirements, among which are coverage and flake8. But these tools are needed in the following jobs too. I have tried using 'dependencies' and 'cache', but it didn't work: only files/dirs under the project root directory can be shared, not the binaries under /usr/local/bin.
I have tried pointing pip install at another directory, but the binaries still end up in /usr/local/bin.
The workaround I have found is to install the dependencies in each job, but I think this is the least optimal solution.
There must be a better solution for this.
Thanks.
I just found a solution, at least for Python 3 (enough for me):
Python 3 has a built-in tool for managing virtual envs: venv.
Using venv, we can create the virtual env in the project root dir, cache this dir, and activate the virtual env in each job.
variables:
  VENV_NAME: "env"
cache:
  paths:
    - $VENV_NAME
first_job:
  script:
    - apt-get update -qy
    - apt-get install -y python3-dev python3-pip python3-venv
    - python3 -m venv $VENV_NAME
    - source $VENV_NAME/bin/activate
    - pip3 install -r requirements.txt
next_jobs:
  script:
    - source $VENV_NAME/bin/activate
    - echo "hello world!"
PS: don't forget to exclude the virtual env dir from coverage and other analysis tools; see the sketch below.
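For example, with coverage.py the virtual env can be excluded via a .coveragerc (a minimal sketch, assuming VENV_NAME is "env" as above):
# .coveragerc
[run]
omit =
    env/*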

Gitlab: gem not found

I am trying to deploy our app to Heroku, and our settings in .gitlab-ci.yml look like
staging_heroku:
  stage: deploy
  script:
    - git remote add heroku https://heroku:$STAGING_HEROKU_KEY@git.heroku.com/staging-myapp.git
    - git push -f heroku master
This is what we see in logs
Cloning repository...
Cloning into '/builds/org/project'...
Checking out 340111af as dev/feature1...
Skipping Git submodules setup
Downloading artifacts for maven-build (17234382)...
Downloading artifacts from coordinator... ok id=17234382 responseStatus=200 OK token=2YSHdANA
$ apt-get update -yqqq
/bin/sh: eval: line 46: apt-get: not found
ERROR: Job failed: exit code 127
These runners do not even have apt-get, so I cannot install the gem.
I even tried the git command, but even that was not found. Can someone help?
You need to install ruby and ruby-dev first! (and rubygems-integration for Debian 8)
staging:
  type: deploy
  script:
    - apt-get update -yq
    - apt-get install -y ruby ruby-dev rubygems-integration
    - gem install dpl
    - dpl --provider=heroku --app=teeth-taroko --api-key=$HEROKU_STAGING_API_KEY
  only:
    - develop
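If apt-get itself is missing, as in the log above, the runner is most likely using a non-Debian image (Alpine-based images ship apk instead of apt-get). Pinning a Debian-based image on the job should make apt-get available; the ruby:2.6 tag below is only an illustrative choice:
staging:
  image: ruby:2.6  # assumption: any Debian-based image that ships apt-get works here
  type: deploy
  # ... script as above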

Installing bundler within Jenkins Docker image

Is it possible to install Ruby within a Docker image, specifically Jenkins?
I can see from the docs that you can attach to a container or use docker exec -i -t 4e2bf4128e3e bash. This will log me in as jenkins@4e2bf4128e3e.
But if I try and install anything
apt-get install ruby 2.0.0 # Yes will install rvm, this is just an example
I get
E: Could not open lock file /var/lib/dpkg/lock - open (13: Permission denied)
E: Unable to lock the administration directory (/var/lib/dpkg/), are you root?
And when I try
sudo apt-get install ruby 2.0.0
Then I get sudo: command not found.
The problem you have is that the jenkins Docker image executes commands as the jenkins user, which is not allowed to use apt.
On https://hub.docker.com/_/jenkins/ you have some documentation, namely the "Installing more tools" section, which advises you to do this:
FROM jenkins
# if we want to install via apt
USER root
RUN apt-get update && apt-get install -y ruby make more-thing-here
# drop back to the regular jenkins user - good practice
USER jenkins
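To use the customized image, build it and run it as usual; the tag name below is just an example:
docker build -t my-jenkins-with-ruby .
docker run -p 8080:8080 my-jenkins-with-ruby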
Alternatively, you could create your own image that layers those two images:
Dockerfile
FROM jenkins
FROM ruby
...
Now you have a Docker image of your own that has both Ruby AND Jenkins.

How to install ansible-modules-extras?

I've installed Ansible via the Ubuntu apt package ansible. I am trying to use the npm module, which is an extras module provided only in the ansible-modules-extras GitHub repository.
How do I install ansible-modules-extras?
Looking at where files were installed as part of the ansible apt package, I would guess I have to merge some of the source folders into something like /usr/share/ansible or somewhere under /usr/lib/python2.7/dist-packages/ansible.
I ask this as I get this error from the Ansible output:
msg: Failed to find required executable npm
Ansible extras are included in the Ubuntu ansible apt package. The target machine must have npm installed; the apt package npm can be installed via Ansible like so:
tasks:
  - name: install npm
    apt: pkg=npm state=present
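With npm present on the target, the extras npm module can then manage Node packages; the path below is illustrative:
- name: install my app's node dependencies
  npm: path=/srv/myapp state=present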
Try installing Ansible with python-pip. For this, first remove the apt version of Ansible:
sudo apt-get remove ansible
Then install python-pip:
sudo apt-get install gcc python-pip python-dev
And install Ansible:
sudo pip install ansible
This installs the newest version, which should contain the npm module.
