GitLab CI - Cache not working

I'm currently using GitLab in combination with CI runners to run the unit tests of my project. To speed up the bootstrapping of the tests I'm using the built-in cache functionality, however this doesn't seem to work.
Each time someone commits to master, my runner does a git fetch and proceeds to remove all cached files, which means I have to stare at my screen for around 10 minutes waiting for a test to complete while the runner re-downloads all dependencies (NPM and PIP being the biggest time killers).
Output of the CI runner:
Fetching changes...
Removing bower_modules/jquery/ --+-- Shouldn't happen!
Removing bower_modules/tether/ |
Removing node_modules/ |
Removing vendor/ --'
HEAD is now at 7c513dd Update .gitlab-ci.yml
Currently my .gitlab-ci.yml:
image: python:latest

services:
  - redis:latest
  - node:latest

cache:
  key: "$CI_BUILD_REF_NAME"
  untracked: true
  paths:
    - ~/.cache/pip/
    - vendor/
    - node_modules/
    - bower_components/

before_script:
  - python -V
  # Still gets executed even though node is listed as a service??
  - '(which nodejs && which npm) || (apt-get update -q && apt-get -o dir::cache::archives="vendor/apt/" install nodejs npm -yqq)'
  - npm install -g bower gulp
  # Following statements ignore cache!
  - pip install -r requirements.txt
  - npm install --only=dev
  - bower install --allow-root
  - gulp build

test:
  variables:
    DEBUG: "1"
  script:
    - python -m unittest myproject
I've tried reading the following articles for help, however none of them seem to fix my problem:
http://docs.gitlab.com/ce/ci/yaml/README.html#cache
https://fleschenberg.net/gitlab-pip-cache/
https://gitlab.com/gitlab-org/gitlab-ci-multi-runner/issues/336

Turns out that I was doing some things wrong:
Your script can't cache files outside of your project scope. Creating a virtual environment inside the project and caching that instead allows you to cache your pip modules.
Most important of all: your test job must succeed in order for the files to be cached.
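As an aside, for the pip cache specifically there is an alternative to the virtual environment: pip honours the PIP_CACHE_DIR environment variable, so you can point its cache inside the project and cache that directory. A minimal sketch (not the config I ended up using):
variables:
  # CI_PROJECT_DIR is a built-in GitLab CI variable; this moves pip's
  # download cache inside the project scope so the runner can cache it.
  PIP_CACHE_DIR: "$CI_PROJECT_DIR/.cache/pip"

cache:
  key: "$CI_BUILD_REF_NAME"
  paths:
    - .cache/pip/
    - venv/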
After using the following config I shaved about 3 minutes off the run time. My configuration currently looks as follows and works for me:
# Official framework image. Look for the different tagged releases at:
# https://hub.docker.com/r/library/python
image: python:latest

# Pick zero or more services to be used on all builds.
# Only needed when using a docker container to run your tests in.
# Check out: http://docs.gitlab.com/ce/ci/docker/using_docker_images.html#what-is-service
services:
  - mysql:latest
  - redis:latest

cache:
  untracked: true
  key: "$CI_BUILD_REF_NAME"
  paths:
    - venv/
    - node_modules/
    - bower_components/

# This is a basic example for a gem or script which doesn't use
# services such as redis or postgres
before_script:
  # Check python installation
  - python -V
  # Install NodeJS (Gulp & Bower)
  # Default repository is outdated, this is the latest version
  - 'curl -sL https://deb.nodesource.com/setup_8.x | bash -'
  - apt-get install -y nodejs
  - npm install -g bower gulp
  # Install dependencies
  - pip install -U pip setuptools
  - pip install virtualenv

test:
  # Indicate to the framework that it's being unit tested
  variables:
    DEBUG: "1"
  # Test script
  script:
    # Set up virtual environment
    - virtualenv venv -ppython3
    - source venv/bin/activate
    - pip install coverage
    - pip install -r requirements.txt
    # Install NodeJS & Bower + Compile JS
    - npm install --only=dev
    - bower install --allow-root
    - gulp build
    # Run all unit tests
    - coverage run -m unittest project.tests
    - coverage report -m project/**/*.py
Which resulted in the following output:
Fetching changes...
Removing .coverage --+-- Don't worry about this
Removing bower_components/ |
Removing node_modules/ |
Removing venv/ --`
HEAD is now at 24e7618 Fix for issue #16
From https://git.example.com/repo
85f2f9b..42ba753 master -> origin/master
Checking out 42ba7537 as master...
Skipping Git submodules setup
Checking cache for master... --+-- The files are back now :)
Successfully extracted cache --`
...
project/module/script.py 157 9 94% 182, 231-244
---------------------------------------------------------------------------
TOTAL 1084 328 70%
Creating cache master...
Created cache
Uploading artifacts...
venv/: found 9859 matching files
node_modules/: found 7070 matching files
bower_components/: found 982 matching files
Trying to load /builds/repo.tmp/CI_SERVER_TLS_CA_FILE ...
Dialing: tcp git.example.com:443 ...
Uploading artifacts to coordinator... ok id=127 responseStatus=201 Created token=XXXXXX
Job succeeded
For the coverage report, I used the following regular expression:
^TOTAL\s+(?:\d+\s+){2}(\d{1,3}%)$
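If your GitLab version supports the job-level coverage keyword, the same regex can live in .gitlab-ci.yml instead of the project settings; a minimal sketch:
test:
  script:
    - coverage run -m unittest project.tests
    - coverage report -m
  # GitLab matches this regex against the job log to extract the total coverage.
  coverage: '/^TOTAL\s+(?:\d+\s+){2}(\d{1,3}%)$/'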

Related

How do I add a Python library dependency for a Lambda function in CodeBuild

I have a CodePipeline that grabs code out of CodeCommit, bundles it up in CodeBuild, and then publishes it via CloudFormation.
I want to use the Python package gspread, and because it's not part of the standard AWS Linux image I need to install it.
Currently when the code is run I get the error:
[ERROR] Runtime.ImportModuleError: Unable to import module 'index': No module named 'gspread'
Code structure
- buildspec.yml
- template.yml
- package/
  - gspread/
  - gspread-3.6.0.dist-info/
  - (37 other python packages)
- source/
  - index.py
buildspec.yml -- EDITED
version: 0.2
phases:
  install:
    runtime-versions:
      python: 3.8
    commands:
      # Use Install phase to install packages or any pre-reqs you may need throughout the build (e.g. dev deps, security checks, etc.)
      - echo "[Install phase]"
      - pip install --upgrade pip
      - pip install --upgrade aws-sam-cli
      - sam --version
      - cd source
      - ls
      - pip install --target . gspread oauth2client
      # consider using pipenv to install everything in the environment and then copy the files installed into the /source folder
      - ls
  pre_build:
    commands:
      # Use Pre-Build phase to run tests, install any code deps or any other customization before build
      # - echo "[Pre-Build phase]"
  build:
    commands:
      - cd ..
      - sam build
  post_build:
    commands:
      # Use Post Build for notifications, git tags and any further customization after build
      - echo "[Post-Build phase]"
      - export BUCKET=property-tax-invoice-publisher-deployment
      - sam package --template-file template.yml --s3-bucket $BUCKET --output-template-file outputtemplate.yml
      - echo "SAM packaging completed on `date`"
##################################
# Build Artifacts to be uploaded #
##################################
artifacts:
  files:
    - outputtemplate.yml
  discard-paths: yes
cache:
  paths:
    # List of paths that CodeBuild will upload to the S3 bucket and use in subsequent runs to speed up builds
    - '/root/.cache/pip'
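Note that the cache: section of a buildspec only has an effect if caching is also enabled on the CodeBuild project itself. A minimal CloudFormation sketch of that setting (the bucket name is a placeholder):
# Fragment of an AWS::CodeBuild::Project resource; enables the S3 cache
# that the buildspec's cache.paths section relies on.
Cache:
  Type: S3
  Location: my-codebuild-cache-bucket/prefix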
The index.py file has more in it than this, but to show the offending line:
-- index.py --
import os
import boto3
import io
import sys
import csv
import json
import smtplib
import gspread  # <--- Right here

def lambda_handler(event, context):
    print("In lambda_handler")
What I've tried:
- Creating the /package folder and committing the gspread and other packages
- Running "pip install gspread" in the CodeBuild build commands
At the moment, I'm installing it everywhere and seeing what sticks (nothing is currently sticking).
Version: Python 3.8
I think you may need to do the following steps:
Use virtualenv to install the packages locally.
Create a requirements.txt to let CodeBuild know of the package requirements.
In the CodeBuild buildspec.yml, include commands to install virtualenv and then supply requirements.txt.
pre_build:
  commands:
    - pip install virtualenv
    - virtualenv env
    - . env/bin/activate
    - pip install -r requirements.txt
Detailed steps here for reference :
https://adrian.tengamnuay.me/programming/2018/07/01/continuous-deployment-with-aws-lambda/
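One thing worth double-checking alongside those steps: the virtualenv only exists on the build container, so the packages still have to end up inside the artifact that CloudFormation deploys. sam build can do that vendoring for you when the function's code directory contains a requirements.txt. A minimal sketch, assuming template.yml has CodeUri: source/ and a Python runtime:
# source/requirements.txt (hypothetical, placed next to index.py):
#   gspread
#   oauth2client
build:
  commands:
    # sam build reads source/requirements.txt and vendors the packages
    # into .aws-sam/build, so they ship with the function.
    - sam build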

GitHub Actions - composer fails with 'sh: git: not found'?

I have a WordPress plugin where I'm using composer to define my dependent libraries and GitHub Actions to build the installable package. I plan to publish the vendors folder to a 'build' branch in GitHub so the whole application can be installed.
My composer.json file has this content and works locally
{
    "name": "emeraldjava/bhaa_wordpress_plugin",
    "description": "bhaa_wordpress_plugin",
    "type": "wordpress-plugin",
    "require": {
        "scribu/scb-framework": "dev-master",
        "scribu/lib-posts-to-posts": "dev-master",
        "mustache/mustache": "2.12.0",
        "league/csv": "^9.1",
        "michelf/php-markdown": "^1.8"
    },
and my GitHub Actions build.yml file uses 'MilesChou/composer-action' to install the composer env in the docker container:
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v1
      - name: Composer install
        uses: MilesChou/composer-action/7.3@master
        with:
          args: install --no-dev
      - uses: docker://php:7.3-alpine
      - uses: docker://alpine/git:latest
From the build log, I can see that the zip files for these composer artifacts have been downloaded to the cache
36/38: https://codeload.github.com/scribu/wp-scb-framework/legacy.zip/95b23ac342fce16bf5eb8d939ac5a361b94b104b
37/38: https://codeload.github.com/sebastianbergmann/phpunit/legacy.zip/a7834993ddbf4b0ed2c3b2dc1f3b1d093ef910a9
38/38: https://codeload.github.com/scribu/wp-lib-posts-to-posts/legacy.zip/a695438e455587fa228e993d05b4431cde99af1b
Finished: success: 38, skipped: 0, failure: 0, total: 38
The build then failed with this 'sh: git: not found' error
Package operations: 5 installs, 0 updates, 0 removals
- Installing scribu/scb-framework (dev-master 95b23ac): Cloning 95b23ac342
Failed to download scribu/scb-framework from source: Failed to clone https://github.com/scribu/wp-scb-framework.git, git was not found, check that it is installed and in your PATH env.
sh: git: not found
Now trying to download from dist
- Installing scribu/scb-framework (dev-master 95b23ac): Loading from cache
- Installing scribu/lib-posts-to-posts (dev-master a695438): Cloning a695438e45
Failed to download scribu/lib-posts-to-posts from source: Failed to clone https://github.com/scribu/wp-lib-posts-to-posts.git, git was not found, check that it is installed and in your PATH env.
sh: git: not found
Now trying to download from dist
- Installing scribu/lib-posts-to-posts (dev-master a695438): Loading from cache
- Installing mustache/mustache (v2.12.0): Loading from cache
- Installing michelf/php-markdown (1.8.0): Loading from cache
- Installing league/csv (9.4.1): Loading from cache
I'm assuming I need to ensure the docker container has git installed, but it seems odd that composer can access the legacy.zip files, so why is git needed at this stage?
EDIT 1
I guess the quick fix here is a duplicate of this issue, as the answer below states.
For the sake of completeness, let's assume I can't call 'composer --prefer-dist'; how could I ensure the docker container has git available to it?
By default Composer uses dist (zip files) for tagged releases and source (git clone) for branches. Since you're targeting the master branch for your dependencies, Composer tries to clone the repositories first. You can override this behavior by using the --prefer-dist switch:
with:
  args: install --prefer-dist --no-dev
--prefer-dist: Reverse of --prefer-source, Composer will install from dist if possible. This can speed up installs substantially on build servers and other use cases where you typically do not run updates of the vendors. It is also a way to circumvent problems with git if you do not have a proper setup.
https://getcomposer.org/doc/03-cli.md#install-i
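If you do need source checkouts (the EDIT 1 case), one option is to run the job in a container image that already ships with git. A minimal sketch, assuming the official composer image (which bundles git) suits your PHP version:
jobs:
  build:
    runs-on: ubuntu-latest
    # The composer image includes git, so `composer install` can fall
    # back to cloning sources when dist downloads are unavailable.
    container: composer:latest
    steps:
      - uses: actions/checkout@v1
      - run: composer install --no-dev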

GitLab runner stops after npm install

For using the pipeline in GitLab, I've created the following .gitlab-ci.yml file:
image: node:8.2.1

cache:
  paths:
    - node_modules/

TestIt:
  script:
    - npm install
    - '/node_modules/@angular/cli/bin/ng test --single-run=true --browsers PhantomJS --watch=false'
When the runner starts the job it does the npm install successfully, but then it ends. It doesn't continue to the second script line (like it ignores it for some reason).
What can be the cause for that?
If you are on windows, you probably ran into this problem (nothing else executing after a "npm" command):
https://gitlab.com/gitlab-org/gitlab-runner/issues/2730
TL;DR: Use call npm install instead of npm install, then the second command will execute too. Downside: Then your CI config is not platform-independent anymore.
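On a Windows shell runner, a minimal sketch of that workaround (call is needed because npm is itself a batch script, so control never returns to your job without it):
TestIt:
  script:
    - call npm install
    - call npm test   # or the ng test command from the question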
I still haven't found the reason why this happens, but as a workaround after a long search, instead of using ng test I'm using npm test, like this:
TestIt:
  script:
    - npm test
In Karma.config.js I changed autoWatch: true to false and singleRun: false to true to prevent continuous testing, and I took out the - npm install line.

How to use slimer.js in Travis CI?

I'm using casper.js & backstop.js in Travis CI to run tests with phantom.js. But I would prefer to use slimer.js instead of phantom.js.
Is it possible to do? I tried installing it with:
npm install -g slimerjs
and with:
env:
  - SLIMERJSLAUNCHER=$(which firefox) DISPLAY=:99.0 PATH=$TRAVIS_BUILD_DIR/slimerjs:$PATH
addons:
  firefox: "42.0"
before_script:
  - "sh -e /etc/init.d/xvfb start"
  - "echo 'Installing Slimer'"
  - "wget http://download.slimerjs.org/v0.9/0.9.6/slimerjs-0.9.6.zip"
  - "unzip slimerjs-0.9.6.zip"
  - "mv slimerjs-0.9.6 ./slimerjs"
Neither works, and I get this error:
Gecko error: it seems /usr/local/bin/firefox is not compatible with SlimerJS. See Gecko version compatibility.
I tried different versions of FF specified in application.ini but without any success.
I checked the project https://github.com/JulianBirch/cljs-ajax (referred to in https://github.com/travis-ci/travis-ci/issues/1290), went over the git history of its .travis.yml file, and it seems there is a way to get a green build with slimer 0.9.6.
Copy/pasting the .travis.yml of the last build that included slimerjs (the build status is green: https://travis-ci.org/JulianBirch/cljs-ajax/jobs/104345408):
language: clojure
lein: lein2
env:
  - SLIMERJSLAUNCHER=$(which firefox) DISPLAY=:99.0 PATH=$TRAVIS_BUILD_DIR/slimerjs:$PATH
addons:
  firefox: "24.0"
before_script:
  - "sh -e /etc/init.d/xvfb start"
  - "curl https://slimerjs.org/slimerjs-pubkey.gpg | gpg --import"
  - "wget http://download.slimerjs.org/releases/0.9.6/slimerjs-0.9.6-linux-x86_64.tar.bz2"
  - "wget http://download.slimerjs.org/releases/0.9.6/slimerjs-0.9.6-linux-x86_64.tar.bz2.asc"
  - "gpg --verify-files *.asc"
  - "tar jxfv slimerjs-0.9.6-linux-x86_64.tar.bz2"
  - "mv slimerjs-0.9.6 ./slimerjs"
  - "yes | sudo lein2 upgrade 2.5.2"
sudo: required
Well, it might also depend on the VM type you use, but it should be a good starting point.
Anyway, I feel like I'm heading in the same direction, so it would be cool if you could share the config that ends up working for you.

Cache installation of deps and afterwards build in Travis

Is it possible to separate the dependency install and caching from the build of the source code?
I have:
sudo: required
language: cpp

matrix:
  include:
    - env: GCC_VERSION="4.9"
      os: linux
      dist: trusty
      compiler: gcc

cache:
  directories:
    - /usr/local/include
    - /usr/local/lib
    - /usr/local/share

addons:
  apt:
    packages:
      - gcc-4.9
      - g++-4.9
    sources:
      - ubuntu-toolchain-r-test

# Install dependencies
install:
  - export BUILD_DEPS="OFF"
  - export BUILD_GRSF="ON"
  - export CHECKOUT_PATH=`pwd`;
  - chmod +x $CHECKOUT_PATH/travis/install_${TRAVIS_OS_NAME}.sh
  - . $CHECKOUT_PATH/travis/install_${TRAVIS_OS_NAME}.sh

script:
  - chmod +x $CHECKOUT_PATH/travis/build.sh
  - . $CHECKOUT_PATH/travis/build.sh

notifications:
  email: false
Because my build takes too long (more than 50 minutes to build the dependencies and the source code), I proceed in the following way:
I set
BUILD_DEPS="ON" # build only deps
BUILD_GRSF="OFF"
which only builds the dependencies and caches them. Afterwards I set
BUILD_DEPS="OFF"
BUILD_GRSF="ON" # build only source
in the .travis.yml file, which then builds only the source code.
This seems to work but is cumbersome. Is there a better solution to this? Maybe directly modifying the .travis.yml on Travis and making a new commit "travis cached, build source now", which would then trigger another Travis build (which now builds the source)?
Your dependency install script could look for a marker file that it leaves in a cached directory after a successful installation, and only re-run the dependency build when that file is not found (see the sketch below).
That way you don't need any modifications to the Travis spec at least.
It seems Travis can only cache inside $HOME:
https://github.com/travis-ci/travis-ci/issues/6115#issuecomment-222817367
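A minimal sketch of that marker-file idea, with the dependency install relocated under $HOME so it can actually be cached (the paths and marker name are placeholders):
cache:
  directories:
    - $HOME/deps              # install dependencies here instead of /usr/local

install:
  - export CHECKOUT_PATH=`pwd`
  # Only rebuild the dependencies when the cached marker file is missing.
  - |
    if [ ! -f "$HOME/deps/.deps_built" ]; then
      export BUILD_DEPS="ON" BUILD_GRSF="OFF"
      . $CHECKOUT_PATH/travis/install_${TRAVIS_OS_NAME}.sh
      touch "$HOME/deps/.deps_built"
    fi
  - export BUILD_DEPS="OFF" BUILD_GRSF="ON"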
