When using GitLab CI on a Java + Maven project, the Maven artifacts are not cached; they are downloaded every time.
I have deployed the gitlab-runner on Kubernetes.
The build logs show:
Creating cache edu-erp...
.m2/repository/: found 761 matching files
No URL provided, cache will be not uploaded to shared cache server. Cache will be stored only locally.
.gitlab-ci.yml
image: maven:latest

variables:
  MAVEN_CLI_OPTS: "--batch-mode --errors --fail-at-end --show-version"
  MAVEN_OPTS: "-Djava.awt.headless=true -Dmaven.repo.local=./.m2/repository"

cache:
  paths:
    - ./.m2/repository
  # keep cache across branch
  key: "$CI_COMMIT_REF_NAME"

stages:
  - build
  - test

build:
  stage: build
  cache:
    key: edu-erp
    paths:
      - .m2/repository/
  script:
    - "mvn clean compile $MAVEN_CLI_OPTS"
  artifacts:
    paths:
      - target/

test:
  stage: test
  cache:
    key: edu-erp
  script:
    - "mvn test $MAVEN_CLI_OPTS"
The cache should be available across builds.
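Note that a job-level cache: block completely replaces the global one, so the test job above ends up with a key but no paths at all. A stripped-down variant with a single shared cache definition, just as a sketch of what I would expect to work (using $CI_PROJECT_DIR so the repository path is absolute):

image: maven:latest

variables:
  MAVEN_CLI_OPTS: "--batch-mode --errors --fail-at-end --show-version"
  MAVEN_OPTS: "-Djava.awt.headless=true -Dmaven.repo.local=$CI_PROJECT_DIR/.m2/repository"

# One cache definition inherited by every job.
cache:
  key: edu-erp
  paths:
    - .m2/repository/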
I want to create a CI/CD pipeline between GitLab and an AWS EC2 deployment.
My repository is a Node.js/Express web server project, and I created a .gitlab-ci.yml file:
image: node:latest

cache:
  paths:
    - node_modules/

stages:
  - build
  - test
  - staging
  - openMr
  - production

before_script:
  - apt-get update -qq && apt-get install

Build:
  stage: build
  tags:
    - node
  before_script:
    - yarn config set cache-folder .yarn
    - yarn install
  script:
    - npm run build

Test:
  stage: test
  tags:
    - node
  before_script:
    - yarn config set cache-folder .yarn
    - yarn install --frozen-lockfile
  script:
    - npm run test

Deploy to Production:
  stage: production
  tags:
    - node
  before_script:
    - mkdir -p ~/.ssh
    - echo -e "$SSH_PRIVATE_KEY" > ~/.ssh/id_rsa
    - chmod 600 ~/.ssh/id_rsa
    - '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'
  script:
    - bash ./gitlab-deploy/.gitlab-deploy.prod.sh
  environment:
    name: production
    url: http://ec2-url.compute.amazonaws.com:81
When I push a new commit, the pipeline fails at the build step and I get this warning:
This job is stuck because you don't have any active runners online or
available with any of these tags assigned to them: node
I checked my runner on the GitLab Settings > CI/CD page. After that I checked the server:
admin@ip-111.222.222.111:~$ gitlab-runner status
Runtime platform arch=amd64 os=linux pid=18787 revision=98daeee0 version=14.7.0
FATAL: The --user is not supported for non-root users
You need to remove the tag node from your jobs. Runner tags are used to define which runner should pick up your jobs (https://docs.gitlab.com/ee/ci/runners/configure_runners.html#use-tags-to-control-which-jobs-a-runner-can-run). As there is no runner available which supports the tag node, your job gets stuck.
It doesn't look like your pipeline has any special requirements, so you can simply remove the tag so that the jobs can be picked up by every runner.
The runner visible in your screenshot supports the tag shop_service_runner, so another option would be to change the tag node to shop_service_runner, which would allow this runner (and every runner with the same tags) to pick up the job.
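For example (a sketch only; the same change applies to the Test and Deploy to Production jobs), the Build job could become:

Build:
  stage: build
  tags:
    - shop_service_runner   # or drop the tags: section entirely
  before_script:
    - yarn config set cache-folder .yarn
    - yarn install
  script:
    - npm run build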
Before I start, let me tell you that I'm a newbie to GitLab CI files :)
I'm looking to automate deployment of a Spring Boot microservice app to a custom server (a Namecheap VPS).
(I'm using the GitLab shared runners.)
I used the jHipster ci-cd generator to create the .gitlab-ci.yml file. With that, the build, package, and release stages are working.
I can even see the image built in the GitLab Container Registry.
What's left is the deployment. I know that I have to use a Docker container to deploy it, but I don't know how.
I'm stuck on deploying the image from the registry to my VPS.
(As it's my first microservice, my VPS is still new, with only Java installed.)
Here is my .gitlab-ci.yml file:
#image: jhipster/jhipster:v6.9.0
image: openjdk:11-jdk

cache:
  key: "$CI_COMMIT_REF_NAME"
  paths:
    - .maven/

stages:
  - check
  - build
  # - test
  # - analyze
  - package
  - release
  - deploy

before_script:
  - chmod +x mvnw
  - git update-index --chmod=+x mvnw
  - export NG_CLI_ANALYTICS="false"
  - export MAVEN_USER_HOME=`pwd`/.maven

nohttp:
  stage: check
  script:
    - ./mvnw -ntp checkstyle:check -Dmaven.repo.local=$MAVEN_USER_HOME

maven-compile:
  stage: build
  script:
    - ./mvnw -ntp compile -P-webpack -Dmaven.repo.local=$MAVEN_USER_HOME
  artifacts:
    paths:
      - target/classes/
      - target/generated-sources/
    expire_in: 1 day

#maven-test:
#  stage: test
#  script:
#    - ./mvnw -ntp verify -P-webpack -Dmaven.repo.local=$MAVEN_USER_HOME
#  artifacts:
#    reports:
#      junit: target/test-results/**/TEST-*.xml
#    paths:
#      - target/test-results
#      - target/jacoco
#    expire_in: 1 day

maven-package:
  stage: package
  script:
    - ./mvnw -ntp verify -Pprod -DskipTests -Dmaven.repo.local=$MAVEN_USER_HOME
  artifacts:
    paths:
      - target/*.jar
      - target/classes
    expire_in: 1 day

# Uncomment the following line to use gitlabs container registry. You need to adapt the REGISTRY_URL in case you are not using gitlab.com
docker-push:
  stage: release
  variables:
    REGISTRY_URL: registry.gitlab.com
    IMAGE_TAG: $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG-$CI_COMMIT_SHA
  dependencies:
    - maven-package
  script:
    - ./mvnw -ntp compile jib:build -Pprod -Djib.to.image=$IMAGE_TAG -Djib.to.auth.username=gitlab-ci-token -Djib.to.auth.password=$CI_BUILD_TOKEN -Dmaven.repo.local=$MAVEN_USER_HOME

docker-deploy:
  image: docker:stable-git
  stage: deploy
  script:
  when: manual
  only:
    - master
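To make the gap concrete, this is the rough shape I imagine the missing docker-deploy script taking; it is only a sketch on my side, assuming SSH access to the VPS, Docker installed there, an SSH_PRIVATE_KEY CI variable, and placeholder host and container names:

docker-deploy:
  image: docker:stable-git
  stage: deploy
  variables:
    IMAGE_TAG: $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG-$CI_COMMIT_SHA
  before_script:
    - apk add --no-cache openssh-client
    - mkdir -p ~/.ssh
    - echo "$SSH_PRIVATE_KEY" > ~/.ssh/id_rsa
    - chmod 600 ~/.ssh/id_rsa
  script:
    # deploy@my-vps.example.com and my-service are placeholders; the VPS needs Docker installed.
    - ssh -o StrictHostKeyChecking=no deploy@my-vps.example.com "docker login -u gitlab-ci-token -p $CI_JOB_TOKEN registry.gitlab.com && docker pull $IMAGE_TAG && (docker rm -f my-service || true) && docker run -d --name my-service -p 8080:8080 $IMAGE_TAG"
  when: manual
  only:
    - master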
Thanks for your help :)
This is my .travis.yml file. I am trying to automate deployment to AWS CodeDeploy.
language: node_js
node_js:
  - 7.10.0
services:
  - mongodb
env:
  - PORT=6655 IP="localhost" NODE_ENV="test"
script:
  - npm start &
  - sleep 25
  - npm test
deploy:
  provider: codedeploy
  access_key_id:
    secure: $Access_Key_Id
  secret_access_key:
    secure: $Access_Key_Secret
  revision_type: github
  application: Blog
  deployment_group: Ayush-Bahuguna
  region: us-east-2
after_deploy:
  - "./build.sh"
Here build.sh is a shell script that generates the build files
cd /var/www/cms
sudo yarn install
npm run build-prod
And here is the .gitignore file:
node_modules/
client/dashboard/dist/
client/blog/dist/
The issue is that, even though the Travis CI build succeeds and after_deploy runs successfully, no build files are generated on the AWS EC2 instance where my project is hosted.
Are you able to see any deployment created in your AWS CodeDeploy console? And are you able to see the deployment status? If a deployment was created but failed, you can look into why it failed. Even if the deployment succeeded, that doesn't mean every instance was deployed; it depends on the deployment configuration: http://docs.aws.amazon.com/codedeploy/latest/userguide/deployment-configurations.html.
Thanks,
Binbin
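One more thing worth checking: after_deploy in .travis.yml runs on the Travis build machine, not on the EC2 instance, so commands that must run on the instance normally belong in the repository's appspec.yml hooks. A minimal sketch, assuming build.sh sits at the repository root and /var/www/cms is the intended deployment path:

version: 0.0
os: linux
files:
  - source: /
    destination: /var/www/cms
hooks:
  AfterInstall:
    - location: build.sh
      timeout: 300
      runas: root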
I'm using GitLab Runner as CI to build an Android project; however, the cache is not working.
Here's my .gitlab-ci.yml. It was modified from https://gist.github.com/daicham/5ac8461b8b49385244aa0977638c3420.
image: runmymind/docker-android-sdk:latest

variables:
  GRADLE_USER_HOME: $CI_PROJECT_DIR/.gradle

stages:
  - build

debug:
  stage: build
  script:
    - set +e
    - du -sh $CI_PROJECT_DIR/.gradle/wrapper
    - du -sh $CI_PROJECT_DIR/.gradle/caches
    - set -e
    - ./gradlew assembleDebug
    - mkdir artifacts
    - cp mobile/build/outputs/apk/*.apk artifacts/
    - cp wear/build/outputs/apk/*.apk artifacts/
  cache:
    paths:
      - .gradle/wrapper/
      - .gradle/caches/
      - build/
      - mobile/build/
      - wear/build/
  artifacts:
    name: "project_${CI_JOB_NAME}_${CI_COMMIT_REF_NAME}_${CI_COMMIT_SHA}"
    expire_in: 2 weeks
    paths:
      - artifacts/
And the log:
Running with gitlab-ci-multi-runner 9.0.0 (08a9e6f)
Using Docker executor with image runmymind/docker-android-sdk:latest ...
Using docker image sha256:d696fa13188c8d2d121c86cf526201b363c1e34ee7b163d6ce1ab1718f91a5e6 ID=sha256:d696fa13188c8d2d121c86cf526201b363c1e34ee7b163d6ce1ab1718f91a5e6 for predefined container...
Pulling docker image runmymind/docker-android-sdk:latest ...
Using docker image runmymind/docker-android-sdk:latest ID=sha256:474ac98077a496f2f71aa22ce4eebcea966c2960a061d4a59babe81ff007009b for build container...
Running on runner-8ce5d03c-project-72-concurrent-0 via outrage...
Cloning repository...
Cloning into '/builds/User/android-project'...
Checking out 015d01d0 as master...
Skipping Git submodules setup
Checking cache for default...
Successfully extracted cache
$ set +e
$ du -sh $CI_PROJECT_DIR/.gradle/wrapper
du: cannot access '/builds/User/android-project/.gradle/wrapper': No such file or directory
$ du -sh $CI_PROJECT_DIR/.gradle/caches
du: cannot access '/builds/User/android-project/.gradle/caches': No such file or directory
$ set -e
$ ./gradlew assembleDebug
Downloading https://services.gradle.org/distributions/gradle-3.4.1-all.zip
Unzipping /builds/User/android-project/.gradle/wrapper/dists/gradle-3.4.1-all/c3ib5obfnqr0no9szq6qc17do/gradle-3.4.1-all.zip to /builds/User/android-project/.gradle/wrapper/dists/gradle-3.4.1-all/c3ib5obfnqr0no9szq6qc17do
Set executable permissions for: /builds/User/android-project/.gradle/wrapper/dists/gradle-3.4.1-all/c3ib5obfnqr0no9szq6qc17do/gradle-3.4.1/bin/gradle
Starting a Gradle Daemon (subsequent builds will be faster)
Download https://jcenter.bintray.com/com/android/tools/build/gradle/2.3.0/gradle-2.3.0.pom
(more downloads)
I have also tried using a Gradle argument to set the Gradle user home, aggressively specifying .gradle/ for the cache, etc., but none of them worked.
Any ideas?
If you use GitLab < 9.0, you have to specify that the cache is shared among different pipelines. Try adding key: $CI_PROJECT_NAME under cache:
image: runmymind/docker-android-sdk:latest

variables:
  GRADLE_USER_HOME: $CI_PROJECT_DIR/.gradle

stages:
  - build

debug:
  stage: build
  script:
    - set +e
    - du -sh $CI_PROJECT_DIR/.gradle/wrapper
    - du -sh $CI_PROJECT_DIR/.gradle/caches
    - set -e
    - ./gradlew assembleDebug
    - mkdir artifacts
    - cp mobile/build/outputs/apk/*.apk artifacts/
    - cp wear/build/outputs/apk/*.apk artifacts/
  cache:
    key: $CI_PROJECT_NAME
    paths:
      - .gradle/wrapper/
      - .gradle/caches/
      - build/
      - mobile/build/
      - wear/build/
  artifacts:
    name: "project_${CI_JOB_NAME}_${CI_COMMIT_REF_NAME}_${CI_COMMIT_SHA}"
    expire_in: 2 weeks
    paths:
      - artifacts/
I have a simple .gitlab-ci.yml file:
image: docker:latest

services:
  - docker:dind
  - postgres:9.5

stages:
  - build
  - test

variables:
  STAGING_REGISTRY: "dhub.example.com"
  CONTAINER_TEST_IMAGE: ${STAGING_REGISTRY}/${CI_PROJECT_NAME}:latest

before_script:
  - docker login -u gitlab-ci -p $DHUB_PASSWORD $STAGING_REGISTRY

build:
  stage: build
  script:
    - docker build --pull -t $CONTAINER_TEST_IMAGE -f Dockerfile-dev .
    - docker push $CONTAINER_TEST_IMAGE

test:
  stage: test
  script:
    - docker run --env-file=.environment --link=postgres:db $CONTAINER_TEST_IMAGE nosetests
Everything works fine until the actual test stage. In the test job I'm unable to access my postgres service:
docker: Error response from daemon: Could not get container for postgres.
I tried to write the test like this:
test1:
  stage: test
  image: $CONTAINER_TEST_IMAGE
  services:
    - postgres:9.5
  script:
    - python manage.py test
But in this case I'm unable to pull the image because of authentication:
ERROR: Preparation failed: unauthorized: authentication required
Am I missing something?
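For what it's worth, a sketch of the first variant that avoids the --link error by starting postgres on the docker:dind daemon itself (the postgres:9.5 service declared at the top is attached to the job container, not to the dind daemon, so docker run cannot see it; the container name db and the sleep are assumptions):

test:
  stage: test
  script:
    # Run a throwaway postgres on the dind daemon so it can be linked.
    - docker run -d --name db postgres:9.5
    - sleep 10
    - docker run --env-file=.environment --link=db:db $CONTAINER_TEST_IMAGE nosetests
    - docker rm -f db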