GitLab CI: How can I share data between jobs (yaml)

I want to share a file between two jobs and modify it if there are changed files. The Python script compares the cache.json file with the changes and sometimes modifies the cache file.
.gitlab-ci.yaml:
image: ubuntu
stages:
  - test
cache:
  key: one-cache
  paths:
    - cache.json
job1:
  stage: test
  script:
    # - touch cache.json
    - cat cache.json
    - python3 modify_json_file.py
    - cat cache.json
The problem is that the cache.json file does not exist when the next job runs. I get the error message: cat: cache.json: No such file or directory. I also inserted the touch command once, but that doesn't change anything for the next run without the touch command.
Am I doing something wrong, or do I misunderstand how the GitLab cache works?

I think you need artifacts and not cache.
From cache vs artifact:
cache - Use for temporary storage for project dependencies. Not useful for keeping intermediate build results, like jar or apk files. Cache was designed to be used to speed up invocations of subsequent runs of a given job, by keeping things like dependencies (e.g., npm packages, Go vendor packages, etc.) so they don't have to be re-fetched from the public internet. While the cache can be abused to pass intermediate build results between stages, there may be cases where artifacts are a better fit.
artifacts - Use for stage results that will be passed between stages. Artifacts were designed to upload some compiled/generated bits of the build, and they can be fetched by any number of concurrent Runners. They are guaranteed to be available and are there to pass data between jobs. They are also exposed to be downloaded from the UI.
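As a rough sketch of that suggestion (the consuming job, here called job2, and the extra stage are illustrative additions, not part of the original pipeline):
stages:
  - test
  - consume
job1:
  stage: test
  script:
    - python3 modify_json_file.py
  artifacts:
    paths:
      - cache.json
job2:
  stage: consume
  script:
    # artifacts from jobs in earlier stages are downloaded automatically
    - cat cache.json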

Related

Share a file between two workflows in CircleCI

In our repo, build and deploy are two different workflows.
In build we call lerna to check for changed packages and save the output to a file that is persisted to the current workspace.
check_changes:
  working_directory: ~/project
  executor: node
  steps:
    - checkout
    - attach_workspace:
        at: ~/project
    - run:
        command: npx lerna changed > changed.tmp
    - persist_to_workspace:
        root: ./
        paths:
          - changed.tmp
I'd like to pass the exact same file from build workflow to deploy workflow and access it in another job. How do I do that?
read_changes:
  working_directory: ~/project
  executor: node
  steps:
    - checkout
    - attach_workspace:
        at: ~/project
    - run:
        command: |
          echo 'Reading changed.tmp file'
          cat changed.tmp
According to this blog post
Unlike caching, workspaces are not shared between runs, as they no longer exist once a workflow is complete
it feels that caching would be the only option.
But according to the CircleCI documentation, my case doesn't fit their cache definitions:
Use the cache to store data that makes your job faster, but, in the
case of a cache miss or zero cache restore, the job still runs
successfully. For example, you might cache NPM package directories
(known as node_modules).
I think you can totally use caching here. Make sure you choose your key template(s) wisely.
The caveat to keep in mind is that (unlike the job level, where you can use the requires key) there's no native way to sequentially execute workflows, although you could consider using an orb for that; for example the roopakv/swissknife orb.
So you'll need to make sure that the job in the deploy workflow that needs the file doesn't reach its restore_cache step until the save_cache in the other job has happened.
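A minimal sketch of that approach, assuming the build-side job saves the file under a revision-based cache key and the deploy-side job restores it (the job names and the key template are illustrative):
check_changes:
  working_directory: ~/project
  executor: node
  steps:
    - checkout
    - run:
        command: npx lerna changed > changed.tmp
    - save_cache:
        key: changed-{{ .Revision }}
        paths:
          - changed.tmp
read_changes:
  working_directory: ~/project
  executor: node
  steps:
    - checkout
    - restore_cache:
        keys:
          - changed-{{ .Revision }}
    - run:
        # only succeeds once the save_cache above has run (see the ordering caveat)
        command: cat changed.tmp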

Cypress binary is missing and Gitlab CI pipeline

I'm trying to integrate cypress testing into gitlab pipeline.
I've tried about 10 different configurations, which all fail. I've included what I think are the relevant portions of the gitlab.yml file, as well as a screenshot of the error on GitLab.
Thanks for any help
variables:
  GIT_SUBMODULE_STRATEGY: recursive
cache:
  paths:
    - src/ui/node_modules/
    - /root/.cache/Cypress/  # added this, also have tried src/ui/cypress/
build_ui:
  image: node:16.14.2
  stage: build
  script:
    - cd src/ui
    - yarn install --pure-lockfile --prefer-offline --cache-folder .yarn
ui_test:
  image: node:16.14.2
  stage: test
  needs: [build_ui]
  script:
    - cd src/ui
    - yarn run runCypressHeadless
Each job gets its own separate environment. Therefore, you need to install your dependencies in each job. Add your yarn install command to the ui_test job.
The reason why your cache: did not restore to the job from the previous stage is because caches are per job by default (e.g. caches are restored from previous pipelines that ran the same job). If you want subsequent jobs in the same pipeline to use the cache, set the cache:key: to something like $CI_COMMIT_SHA or use cache:key:files: to use a file key, like your lockfile(s).
Also, you can only cache paths in the workspace. So you won't be able to cache/restore /root/.cache/... -- instead you should change the cache location to somewhere in the workspace.
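A rough sketch combining those suggestions (the $CI_COMMIT_SHA key and the .cache/Cypress location are illustrative choices; CYPRESS_CACHE_FOLDER is the environment variable Cypress uses to locate its binary cache):
variables:
  GIT_SUBMODULE_STRATEGY: recursive
  CYPRESS_CACHE_FOLDER: "$CI_PROJECT_DIR/.cache/Cypress"
cache:
  key: $CI_COMMIT_SHA
  paths:
    - src/ui/node_modules/
    - .cache/Cypress/
build_ui:
  image: node:16.14.2
  stage: build
  script:
    - cd src/ui
    - yarn install --pure-lockfile --prefer-offline --cache-folder .yarn
ui_test:
  image: node:16.14.2
  stage: test
  needs: [build_ui]
  script:
    - cd src/ui
    # install again so this job also works on a cache miss
    - yarn install --pure-lockfile --prefer-offline --cache-folder .yarn
    - yarn run runCypressHeadless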
For additional reference, see: caching in GitLab CI and caching NodeJS dependencies.

Gitlab Runner Cache gets corrupted

I have a Gitlab CI/CD pipeline with multiple jobs running in parallel, each job executes mvn test package.
Because there are a lot of dependencies, I'm making use of GitLab's caching feature to store the .m2 folder:
cache:
  key: "$CI_PROJECT_NAME"
  paths:
    - .m2/repository
I'm using the CI_PROJECT_NAME as I want the cache to be available to all jobs in all branches.
It mostly works: in many jobs I see the build succeed, followed by a message that the cache was either created or is already up to date:
Creating cache my-project-name...
.m2/repository: found 10142 matching files and directories
Archive is up to date!
Created cache
But in some jobs, Maven suddenly fails:
355804 [ERROR] Failed to execute goal net.alchim31.maven:scala-maven-plugin:4.5.2:compile (default) on project spark: wrap: scala.reflect.internal.FatalError: Error accessing /builds/Kxs9HrJp/4/analytics/my-project-name/.m2/repository/org/apache/spark/spark-catalyst_2.12/3.1.1/spark-catalyst_2.12-3.1.1.jar: zip END header not found -> [Help 1]
It seems that the cache was somehow corrupted. If I execute the same job again, it now consistently fails. If I clear the runner cache through the UI, the same job runs successfully again until it fails for another file at some point.
I have a feeling that the concurrent runs are the problem, but I don't know why.
Each job downloads the current state of the cache at the beginning.
Even if it's not up to date, maven will simply download the missing libraries.
If two or more jobs try to update / upload the cache "at the same time", it's OK for the last one to win and overwrite the others' cache.
Any idea what's happening here?
I think it may be related to concurrent workers (a read racing a write at the same time). If you had received the error only once, I would suspect a connection problem between the runner and the cache location, but since it happens more often, the problem is probably concurrency.
Try changing your key to be more specific, per branch or commit hash, and try again:
cache:
  key: $CI_COMMIT_REF_SLUG
  paths:
    - .m2/repository
Or use a distributed cache location, like S3 with versioning enabled...
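In the same spirit, it can also help to point Maven's local repository explicitly at the cached path inside the workspace, so every job reads and writes the same location (a sketch; -Dmaven.repo.local is a standard Maven property, the job shown is illustrative):
variables:
  MAVEN_OPTS: "-Dmaven.repo.local=$CI_PROJECT_DIR/.m2/repository"
cache:
  key: $CI_COMMIT_REF_SLUG
  paths:
    - .m2/repository
test:
  stage: test
  script:
    - mvn test package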

Reuse a conda environment within the same gitlab CI pipeline?

Motivation
My main goal is this: Within a pipeline, I would like to reuse as much as possible (i.e. not build the conda environment multiple times, if all jobs share the same environment).
In my project, I use conda as dependency manager and GitLab CI/CD for continuous integration. For the sake of simplicity, let's say I have a build job and a test job. The most straightforward approach would be to create the conda environment from the environment.yml in every job and then do the actual work. This adds an overhead of several minutes to every job. It also seems like overhead to me, since I would like to build the environment once in the build job and then use it in my test job (especially when creating multiple jobs for different tests).
Research Results
The first thing I need to do is to set the CONDA_ENVS_PATH to somewhere in my project directory.
I've looked at GitLab's caching mechanism, but found that it only helps for the same job in repeated runs of the pipeline, not for different jobs within the same pipeline run.
I've also looked at GitLab's artifacts mechanism, but found that, due to the upload and download involved, they don't reduce run time significantly (basically I only save time by not downloading many small packages and not having to compile them again, but lose time compressing and decompressing them).
I've also tried to make use of GIT_CLEAN_FLAGS by setting it to none in my test job. That way, the conda environment is not deleted when getting the latest data from git. This does cause a serious speedup in my pipelines, but it does not work all the time: some jobs fail because they cannot find the conda environment, and a simple rerun magically works. Of course, in a CI/CD setting, this nondeterminism is not practical.
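For reference, a rough sketch of what that GIT_CLEAN_FLAGS variant could look like (the CONDA_ENVS_PATH value and the environment name are illustrative, and as noted this approach proved flaky since it relies on both jobs reusing the same runner working directory):
variables:
  CONDA_ENVS_PATH: "$CI_PROJECT_DIR/.conda-envs"
build:
  stage: build
  script:
    - conda env create -f environment.yml -n ci-env
test:
  stage: test
  variables:
    GIT_CLEAN_FLAGS: none  # keep the environment created by the build job
  script:
    - source activate ci-env
    - pytest tests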
As a workaround to the original question, we've come up with an intermediate solution. We introduce a docker image, holding our custom environment, through a minimal Dockerfile and a couple of changes to our .gitlab-ci.yml (see below for an example). By only executing the job that builds our custom docker image when the dockerfile or environment changed, we save valuable time on each run. At the same time, we keep the full flexibility in our environment definition and can adjust it exactly how we usually would: by changing the environment.yml.
Question
All solutions tried so far are not really satisfactory. Thus my question is: How can my test job reuse the same conda environment as my build job in gitlab-ci?
In case someone else would like to use a similar setup: Here is my current approach:
# Dockerfile
FROM continuumio/miniconda3:latest
COPY environment.yml .
RUN conda env create -f environment.yml
ENTRYPOINT [""]
# .gitlab-ci.yml
# Use the latest version of this project's docker file
# This will be the default image for all jobs unless specified otherwise
image: $CI_REGISTRY_IMAGE:latest
# Change cache directories to be inside the project directory since we can
# only cache local items.
variables:
  PRE_COMMIT_HOME: "${CI_PROJECT_DIR}/.cache/pre-commit"
stages:
  - build
  - test
# Make conda environment available to all jobs
# This expects the conda environment to have the same name as the gitlab project path
# Avoid dashes and other non-alphabetical characters
default:
  before_script:
    - source activate "${CI_PROJECT_NAME}"
# Build the docker image including the correct conda environment for subsequent jobs
# This assumes a docker image registry being configured for your gitlab instance
dockerimage:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  rules:
    - changes:
        - Dockerfile
        - environment.yml
  before_script: []
  script:
    - mkdir -p /kaniko/.docker
    - echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_REGISTRY_PASSWORD\"}}}" > /kaniko/.docker/config.json
    - /kaniko/executor --context $CI_PROJECT_DIR --dockerfile $CI_PROJECT_DIR/Dockerfile --destination $CI_REGISTRY_IMAGE:latest
# Run pytest
pytest:
  stage: test
  script:
    - conda install pytest-cov
    - pytest tests --cov=src
Edit: I replaced the example code for using GIT_CLEAN_FLAGS with our most recent approach: using a custom docker image.
Disclaimer: I saw this and this question, but they are both dated, don't have a satisfying answer and I only found them after writing this question, so I hope my additional question increases discoverability of the topic.

Gitlab CI : how to cache node_modules from a prebuilt image?

The situation is this:
I'm running Cypress tests in GitLab CI (launched by vue-cli). To speed up the execution, I built a Docker image that contains the necessary dependencies.
How can I cache node_modules from the prebuilt image to use it in the test job ?
Currently I'm using an awful (but working) solution:
testsE2e:
  image: path/to/prebuiltImg
  stage: tests
  script:
    - ln -s /node_modules/ /builds/path/to/prebuiltImg/node_modules
    - yarn test:e2e
    - yarn test:e2e:report
But I think there must be a cleaner way using the Gitlab CI cache.
I've been testing:
cacheE2eDeps:
  image: path/to/prebuiltImg
  stage: dependencies
  cache:
    key: e2eDeps
    paths:
      - node_modules/
  script:
    - find / -name node_modules  # check that node_modules files are there
    - echo "Caching e2e test dependencies"
testsE2e:
  image: path/to/prebuiltImg
  stage: tests
  cache:
    key: e2eDeps
  script:
    - yarn test:e2e
    - yarn test:e2e:report
But the job cacheE2eDeps displays a "WARNING: node_modules/: no matching files" error.
How can I do this successfully? The Gitlab documentation doesn't really talk about caching from a prebuilt image...
The Dockerfile used to build the image:
FROM cypress/browsers:node13.8.0-chrome81-ff75
COPY . .
RUN yarn install
There is no documentation for caching data from prebuilt images, because it's simply not done. The dependencies are already available in the image, so why cache them in the first place? It would only lead to unnecessary data duplication.
Also, you seem to operate under the impression that cache should be used to share data between jobs, but its primary use case is sharing data between different runs of the same job. Sharing data between jobs should be done using artifacts.
In your case you can use cache instead of prebuilt image, like so:
variables:
  CYPRESS_CACHE_FOLDER: "$CI_PROJECT_DIR/cache/Cypress"
testsE2e:
  image: cypress/browsers:node13.8.0-chrome81-ff75
  stage: tests
  cache:
    key: "e2eDeps"
    paths:
      - node_modules/
      - cache/Cypress/
  script:
    - yarn install
    - yarn test:e2e
    - yarn test:e2e:report
The first time the above job is run, it’ll install dependencies from scratch, but the next time it’ll fetch them from the runner cache. The caveat is that unless all runners that run this job share cache, each time you run it on a new runner it’ll install the dependencies from scratch.
Here’s the documentation about using yarn with GitLab CI.
Edit:
To elaborate on using cache vs artifacts: artifacts are meant both for storing job output (e.g. to manually download it later) and for passing the results of one job to another job in a subsequent stage, while cache is meant to speed up job execution by preserving files that the job would otherwise need to download from the internet. See the GitLab documentation for details.
Contents of node_modules directory obviously fit into the second category.
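To illustrate the artifacts side of that distinction, a rough sketch (the build job and the dist/ output path are illustrative, not taken from the question):
build:
  image: cypress/browsers:node13.8.0-chrome81-ff75
  stage: build
  script:
    - yarn install
    - yarn build
  artifacts:
    paths:
      - dist/
    expire_in: 1 week
testsE2e:
  image: cypress/browsers:node13.8.0-chrome81-ff75
  stage: tests
  needs: [build]
  script:
    - yarn install
    # dist/ from the build job is downloaded automatically as an artifact
    - yarn test:e2e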
