I'm trying to use GitLab to create an automatic build for my Spring project.
I've installed Docker on my local machine and set up GitLab with a runner (the runner is not inside Docker; I installed it as a regular Windows service).
According to the configuration found in this Stack Overflow thread, I'm able to cache the .m2 folder to store all the downloaded dependencies.
Now, looking at the GitLab documentation, I see that I can key my cache so it is kept per branch: the same branch should always reuse the same cache to speed up the build process.
I implemented the pipeline and everything works. But if I clear the cache (using the button in the interface) and re-run the pipeline, the cache is used anyway.
This is the log of the pipeline:
Removing "..\\..\\..\\cache\\root\\core-library\\master-12\\cache.zip"
Removing .m2/
Removing target/
Skipping Git submodules setup
Restoring cache
Checking cache for master-13...
No URL provided, cache will not be downloaded from shared cache server. Instead a local version of cache will be extracted.
Successfully extracted cache
If I clear the cache, the next run increments the cache index (in this case it becomes master-14), but the cache is used anyway, even though it is the first run with the new key.
This is my simple pipeline. Can someone help me understand why the cache is used even after I forced its deletion?
Pipeline example:
image: maven:3.6.3-openjdk-11

variables:
  MAVEN_OPTS: "-Dhttps.protocols=TLSv1.2 -Dmaven.repo.local=$CI_PROJECT_DIR/.m2/repository -Dorg.slf4j.simpleLogger.log.org.apache.maven.cli.transfer.Slf4jMavenTransferListener=WARN -Dorg.slf4j.simpleLogger.showDateTime=true -Djava.awt.headless=true"
  MAVEN_CLI_OPTS: "--batch-mode --errors --fail-at-end --show-version -DinstallAtEnd=true -DdeployAtEnd=true"

cache:
  key: ${CI_COMMIT_REF_SLUG}
  paths:
    - .m2/repository

build-job:
  stage: build
  script:
    - echo "Cache key is $CI_COMMIT_REF_SLUG"
    - mvn $MAVEN_CLI_OPTS -DskipTests install
The configuration turned out to be fine, except for the mvn part pointed out by khmarbaise, so thanks for your support!
Because I'm running GitLab in Docker on Windows, I installed the runner as a Windows service, but for some reason that setup needs a different kind of configuration.
So I decided to move the runner into a Docker container (on the same machine where I'm running GitLab, my local desktop computer). After that, the cache works! Maybe it is something related to path handling.
Anyway, these are the two commands I executed to register and start the runner.
Runner registration:
docker run --rm -v ${PWD}:/etc/gitlab-runner gitlab/gitlab-runner register `
--non-interactive `
--executor "docker" `
--docker-image ruby:2.6 `
--url "http://192.168.0.151:10080/" `
--registration-token "7e_CQCVYEyfCTwE_hZxp" `
--description "docker-runner" `
--tag-list "docker" `
--run-untagged="true" `
--locked="false" `
--access-level="not_protected"
Runner startup:
docker run --name gitlab-runner `
-v ${PWD}:/etc/gitlab-runner `
-v /var/run/docker.sock:/var/run/docker.sock `
gitlab/gitlab-runner:latest
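For reference, this is roughly what the registration step writes to config.toml. This is a sketch with illustrative values rather than the exact generated file, but the volumes entry is worth noting: it is the default volume the Docker executor uses to store the local job cache.

concurrent = 1

[[runners]]
  name = "docker-runner"
  url = "http://192.168.0.151:10080/"
  token = "<runner token issued by GitLab>"
  executor = "docker"
  [runners.docker]
    image = "ruby:2.6"
    privileged = false
    volumes = ["/cache"]    # default volume backing the local job cache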
We have installed GitLab on our own server. We are looking to use the GitLab CI/CD pipeline to build and release our software, and for that I'm working on a POC. I have created a project with the following .gitlab-ci.yml:
variables:
  GOOS: linux
  GOARCH: amd64

stages:
  - test
  - build
  - deb-build

run_tests:
  stage: test
  image: golang:latest
  before_script:
    - go mod tidy
  script:
    - go test ./...

build_binary:
  stage: build
  image: golang:latest
  artifacts:
    untracked: true
  script:
    - GOOS=$GOOS GOARCH=$GOARCH go build -o newer .

build deb:
  stage: deb-build
  image: ubuntu:latest
  before_script:
    - mkdir -p deb-build/usr/local/bin/
    - chmod -R 0755 deb-build/*
    - mkdir build
  script:
    - cp newer deb-build/usr/local/bin/
    - dpkg-deb --build deb-build release-1.1.1.deb
    - mv release-1.1.1.deb build
  artifacts:
    paths:
      - build/*
TL;DR: I have updated the .gitlab-ci.yml and attached a screenshot of the error.
What I have noticed is that the error persists if I use the shared runner (GJ7z2Aym). If I register my own runner (i.e. a specific runner):
gitlab-runner register --non-interactive --url "https://gitlab.sboxdc.com/" --registration-token "<register_token>" --description "" --executor "docker" --docker-image "docker:latest"
I see the build passing without any problem.
Failed case.
https://gist.github.com/meetme2meat/0676c2ee8b78b3683c236d06247a8a4d
One that Passed
https://gist.github.com/meetme2meat/058e2656595a428a28fcd91ba68874e8
The failing job is using a runner with the shell executor, which was probably set up when you configured your GitLab instance. This can be seen in the logs on these lines:
Preparing the "shell" executor
Using Shell executor...
The shell executor ignores your job's image: configuration. It runs the job script directly on the machine hosting the runner and tries to find the go binary on that machine (failing in your case). It's a bit like running go commands on some Ubuntu box without having Go installed.
Your successful job uses a runner with the docker executor, which runs your job's script in a golang:latest image as you requested. It's like running docker run golang:latest sh -c '[your script]'. This can be seen in the job's logs:
Preparing the "docker" executor
Using Docker executor with image golang:latest ...
Pulling docker image golang:latest ...
Using docker image sha256:05e[...]
golang:latest with digest golang@sha256:04f[...]
What you can do:
Make sure you configure a runner with a docker executor. Your config.toml would then look like:
[[runners]]
  # ...
  executor = "docker"
  [runners.docker]
    # ...
It seems you already did, by registering your own runner.
Configure your jobs to use this runner via job tags. You can set the tag docker_executor on your Docker runner (when registering, or via the GitLab UI) and set up something like:
build_binary:
  stage: build
  # Tags a runner must have to run this job
  tags:
    - docker_executor
  image: golang:latest
  artifacts:
    untracked: true
  script:
    - GOOS=$GOOS GOARCH=$GOARCH go build -o newer .
See Runner registration and Docker executor for details.
Since you have used image: golang:latest, go should be in the $PATH.
You need to check at which stage it is failing: run_tests or build_binary.
Add echo $PATH to your script steps to check which $PATH is actually used.
Also check whether the error comes from the lack of git, which Go uses to access modules' remote repositories. See this answer as an example, and the sketch below.
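A minimal sketch of those checks, folded into the existing run_tests job (the extra lines are purely diagnostic and can be removed once the culprit is found):

run_tests:
  stage: test
  image: golang:latest
  before_script:
    - echo $PATH                                          # what PATH does the executor actually use?
    - which go || echo "go binary not found"
    - which git || echo "git not found (Go needs it to fetch modules)"
    - go mod tidy
  script:
    - go test ./...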
From your gists, the default GitLab runner uses a shell executor (which knows nothing about Go).
Instead, the second one uses a Docker executor, based on the Go image.
Registering the (Docker) runner is therefore the right way to ensure the expected executor.
I'm trying to integrate Cypress testing into a GitLab pipeline.
I've tried about 10 different configurations, which all fail. I've included what I think are the relevant portions of the .gitlab-ci.yml file, as well as a screenshot of the error on GitLab.
Thanks for any help
variables:
  GIT_SUBMODULE_STRATEGY: recursive

cache:
  paths:
    - src/ui/node_modules/
    - /root/.cache/Cypress/ # added this; also tried src/ui/cypress/

build_ui:
  image: node:16.14.2
  stage: build
  script:
    - cd src/ui
    - yarn install --pure-lockfile --prefer-offline --cache-folder .yarn

ui_test:
  image: node:16.14.2
  stage: test
  needs: [build_ui]
  script:
    - cd src/ui
    - yarn run runCypressHeadless
Each job gets its own separate environment, so you need to install your dependencies in each job. Add your yarn install command to the ui_test job as well.
The reason your cache: did not restore into the job from the previous stage is that caches are per-job by default (i.e., caches are restored from previous pipelines that ran the same job). If you want subsequent jobs in the same pipeline to use the cache, set cache:key: to something like $CI_COMMIT_SHA, or use cache:key:files: to key on a file, like your lockfile(s).
Also, you can only cache paths inside the workspace, so you won't be able to cache/restore /root/.cache/...; instead, change the cache location to somewhere in the workspace.
For additional reference, see: caching in GitLab CI and caching NodeJS dependencies.
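Putting those pieces together, a sketch of the adjusted configuration, assuming your existing paths and job names (CYPRESS_CACHE_FOLDER relocates the Cypress binary cache into the workspace so it can be cached):

variables:
  GIT_SUBMODULE_STRATEGY: recursive
  CYPRESS_CACHE_FOLDER: "$CI_PROJECT_DIR/cache/Cypress"

cache:
  key: $CI_COMMIT_SHA              # one cache per pipeline, shared by its jobs
  paths:
    - src/ui/node_modules/
    - cache/Cypress/

build_ui:
  image: node:16.14.2
  stage: build
  script:
    - cd src/ui
    - yarn install --pure-lockfile --prefer-offline --cache-folder .yarn

ui_test:
  image: node:16.14.2
  stage: test
  needs: [build_ui]
  script:
    - cd src/ui
    - yarn install --pure-lockfile --prefer-offline --cache-folder .yarn   # each job installs; the cache makes it fast
    - yarn run runCypressHeadless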
I've created a completely new Xamarin.Forms app project in Visual Studio for Mac and added it to a GitLab repository. After that, I created a .gitlab-ci.yml file to set up my CI build. But the problem is that I get error messages:
error MSB4019: The imported project "/usr/lib/mono/xbuild/Xamarin/iOS/Xamarin.iOS.CSharp.targets" was not found. Confirm that the expression in the Import declaration "/usr/lib/mono/xbuild/Xamarin/iOS/Xamarin.iOS.CSharp.targets" is correct, and that the file exists on disk.
This error also pops up for Xamarin.Android.CSharp.targets.
My YML file looks like this:
image: mono:latest

stages:
  - build

build:
  stage: build
  before_script:
    - msbuild -version
    - 'echo BUILDING'
    - 'echo NuGet Restore'
    - nuget restore 'XamarinFormsTestApp.sln'
  script:
    - 'echo Cleaning'
    - MONO_IOMAP=case msbuild 'XamarinFormsTestApp.sln' $BUILD_VERBOSITY /t:Clean /p:Configuration=Debug
Some help would be appreciated ;)
You will need a macOS host to build a Xamarin.iOS application, and AFAIK that's not available on GitLab yet. You can find the discussion here and the private beta here. For now, I would recommend providing your own macOS host and registering a GitLab runner on that host:
https://docs.gitlab.com/runner/
You can set up the host wherever you want (a VM or a physical device), install the GitLab runner and the Xamarin environment there, tag the runner, and use it in your GitLab pipelines like any other shared runner.
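For example, registering a runner on the macOS host could look like this (a sketch; the URL, token, and tag names are placeholders):

gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.example.com/" \
  --registration-token "<registration token>" \
  --executor "shell" \
  --description "mac-xamarin" \
  --tag-list "mac,xamarin"

Jobs then select that runner via its tags:

build:
  stage: build
  tags:
    - mac
    - xamarin
  script:
    - msbuild 'XamarinFormsTestApp.sln' /t:Build /p:Configuration=Debug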
From the comments on your question, it looks like Xamarin isn't available in the mono:latest image, but that's OK because you can create your own Docker images to use in GitLab CI. You will need access to a registry, but if you use gitlab.com (as opposed to a self-hosted instance) the registry is enabled for all users. You can find more information in the docs: https://docs.gitlab.com/ee/user/packages/container_registry/
If you are self-hosted, the registry is still available (even for free versions), but it has to be enabled by an admin (docs here: https://docs.gitlab.com/ee/administration/packages/container_registry.html).
Another option is to use Docker's own registry, Docker Hub. It doesn't matter which registry you use, but you'll need access to one of them so your runners can pull down your image. This is especially true if you're using shared runners that you (or your admins) don't directly control. If you do control your runners, another option is to build the Docker image on every runner that needs it.
I'm not familiar with Xamarin, but here's how you can create a new Docker image based on mono:latest:
# ./mono-xamarin/Dockerfile

# Build off an existing image rather than starting from scratch; everything
# in the mono:latest image will be available in this image
FROM mono:latest

# Run whatever you need in order to install Xamarin or anything else
RUN ./install_xamarin.sh

# Just an example
RUN apt-get update && apt-get install -y git
Once your Dockerfile is written, you can build the image like this (note the trailing dot, which supplies the build context):
docker build --file path/to/Dockerfile --tag mono-xamarin:latest .
If you build the image on your runners, you can use it immediately like:
# .gitlab-ci.yml
image: mono-xamarin:latest

stages:
  - build
...
Otherwise you can now push it to whichever registry you want to use.
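For example, pushing to the GitLab container registry could look like this (a sketch; the group/project path is a placeholder):

docker login registry.gitlab.com
docker tag mono-xamarin:latest registry.gitlab.com/<group>/<project>/mono-xamarin:latest
docker push registry.gitlab.com/<group>/<project>/mono-xamarin:latest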
The situation is this:
I'm running Cypress tests in a GitLab CI pipeline (launched by vue-cli). To speed up the execution, I built a Docker image that contains the necessary dependencies.
How can I cache node_modules from the prebuilt image to use it in the test job?
Currently I'm using an awful (but working) solution:
testsE2e:
  image: path/to/prebuiltImg
  stage: tests
  script:
    - ln -s /node_modules/ /builds/path/to/prebuiltImg/node_modules
    - yarn test:e2e
    - yarn test:e2e:report
But I think there must be a cleaner way using the GitLab CI cache.
I've been testing:
cacheE2eDeps:
  image: path/to/prebuiltImg
  stage: dependencies
  cache:
    key: e2eDeps
    paths:
      - node_modules/
  script:
    - find / -name node_modules # check that node_modules files are there
    - echo "Caching e2e test dependencies"

testsE2e:
  image: path/to/prebuiltImg
  stage: tests
  cache:
    key: e2eDeps
  script:
    - yarn test:e2e
    - yarn test:e2e:report
But the job cacheE2eDeps displays a "WARNING: node_modules/: no matching files" error.
How can I do this successfully? The GitLab documentation doesn't really talk about caching from a prebuilt image...
The Dockerfile used to build the image:
FROM cypress/browsers:node13.8.0-chrome81-ff75
COPY . .
RUN yarn install
There is no documentation for caching data from prebuilt images, because it's simply not done. The dependencies are already available in the image, so why cache them in the first place? It would only lead to unnecessary data duplication.
Also, you seem to operate under the impression that the cache should be used to share data between jobs, but its primary use case is sharing data between different runs of the same job. Sharing data between jobs should be done using artifacts.
In your case you can use a cache instead of the prebuilt image, like so:
variables:
  CYPRESS_CACHE_FOLDER: "$CI_PROJECT_DIR/cache/Cypress"

testsE2e:
  image: cypress/browsers:node13.8.0-chrome81-ff75
  stage: tests
  cache:
    key: "e2eDeps"
    paths:
      - node_modules/
      - cache/Cypress/
  script:
    - yarn install
    - yarn test:e2e
    - yarn test:e2e:report
The first time the above job runs, it'll install the dependencies from scratch, but the next time it'll fetch them from the runner's cache. The caveat is that unless all runners that run this job share a cache, the dependencies will be installed from scratch each time the job lands on a new runner.
Here’s the documentation about using yarn with GitLab CI.
Edit:
To elaborate on using cache vs. artifacts: artifacts are meant both for storing job output (e.g., to download it manually later) and for passing the results of one job to another job in a subsequent stage, while cache is meant to speed up job execution by preserving files that the job would otherwise download from the internet. See the GitLab documentation for details.
The contents of the node_modules directory obviously fall into the second category.
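For contrast, a minimal sketch of passing job output to a later stage with artifacts (the job names and paths are illustrative, not taken from your project):

build:
  stage: build
  script:
    - yarn install
    - yarn build
  artifacts:
    paths:
      - dist/          # stored by GitLab and restored in jobs of later stages

test:
  stage: test
  script:
    - ls dist/         # the artifacts from 'build' are downloaded automatically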
We have separate builds for the frontend and backend of the application, and we need to pull the dist build of the frontend into the backend project during the build. During the build, curl cannot write to the desired location.
In detail, we are using Spring Boot as the backend serving an Angular 2 frontend, so we need to pull the frontend files into the src/main/resources/static folder.
image: maven:3.3.9

pipelines:
  default:
    - step:
        script:
          - curl -s -L -v --user xxx:XXXX https://api.bitbucket.org/2.0/repositories/apprentit/rent-it/downloads/release_latest.tar.gz -o src/main/resources/static/release_latest.tar.gz
          - tar -xf -C src/main/resources/static --directory src/main/resources/static release_latest.tar.gz
          - mvn package -X
As a result, the build fails with this output from curl:
* Failed writing body (0 != 16360)
Note: I've tried the same with the maven-exec-plugin; the result was the same. The commands work on my local machine, naturally.
I would try running these commands from a local docker run of the image you're specifying (maven:3.3.9). I found that to be the most helpful way to debug things that behave differently in Pipelines than in my local environment.
To your specific question: yes, you can download external content during the Pipelines run. I have a Pipeline that clones other repos from Bitbucket via HTTP into the running container.
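A sketch of that local debugging session, using the same image as the pipeline (the mkdir reflects my assumption about the cause: curl exits with "Failed writing body" when it cannot create the output file, e.g. because src/main/resources/static does not exist in a fresh clone):

docker run -it --rm -v "$PWD":/work -w /work maven:3.3.9 bash

# Inside the container, replay the step's script one command at a time:
mkdir -p src/main/resources/static    # assumption: the directory is missing in CI
curl -s -L --user xxx:XXXX https://api.bitbucket.org/2.0/repositories/apprentit/rent-it/downloads/release_latest.tar.gz -o src/main/resources/static/release_latest.tar.gz
tar -xzf src/main/resources/static/release_latest.tar.gz -C src/main/resources/static   # note: the archive name must follow -f
mvn package -X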