I'm new to GitLab CI. Every time GitLab CI runs, it replaces the old folder on the server. I have a small problem: I want to reduce the Gradle build time for a project that includes DL4J (it is very large and takes a long time to build). So I want to keep the build folder from the last version. I followed this to reduce the Gradle build time.
Question: is it possible to configure GitLab CI to skip some folders so they are kept? This is my gitlab-ci:
stages:
  - build

something_run:
  stage: build
  script:
    - gradle build
    - systemctl restart myproject
  tags:
    - ml
  only:
    - master
When it runs, Gradle builds the project and the build takes quite a long time. So I want the next CI run to not delete the last build output.
Take a look at cache (https://docs.gitlab.com/ee/ci/yaml/#cache)
cache is used to specify a list of files and directories which should be cached between jobs.
GitLab CI/CD provides a caching mechanism that can be used to save time when your jobs are running.
See also https://docs.gitlab.com/ee/ci/caching/index.html
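For illustration, one possible sketch of how the job above could keep the Gradle caches and build output between pipelines. It assumes the runner is allowed to keep a cache and that redirecting GRADLE_USER_HOME into the project directory is acceptable; the cache key name is just an example:

variables:
  # assumption: keep Gradle's dependency cache inside the project dir so the runner can archive it
  GRADLE_USER_HOME: "$CI_PROJECT_DIR/.gradle"

something_run:
  stage: build
  script:
    - gradle build
    - systemctl restart myproject
  cache:
    key: gradle-dl4j        # one shared cache reused on every run
    paths:
      - .gradle/            # downloaded dependencies
      - build/              # previous build output, so Gradle can build incrementally
  tags:
    - ml
  only:
    - master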
My problem is basically this:
I have a build job and a deploy job in my gitlab-ci.yml.
build:
  extends: .node_base
  artifacts:
    paths:
      - artifact_folder
  stage: deploy
  script:
    - npm start
deploy:
  tags:
    - linux-docker
  stage: deploy
  when: manual
  image: registry.gitlab.com/gitlab-org/cloud-deploy/aws-base:latest
  script:
    - aws --endpoint-url $AWS_HOST s3 sync artifact_folder/ s3://$AWS_S3_BUCKET --delete --acl public-read
  dependencies:
    - build
The build job downloads files from an external location and saves them in an artifact for my deploy job to use.
The deploy takes the files from the build job artifact and uploads them to an s3 bucket.
So far so good. The problem is that every time I want to deploy new changes, I first have to re-run the build job to get the updated files from the external location before I re-run the deploy job.
It's not a big issue, but I would like to, if possible, have only one job that does both the build step and the deploy step.
My first idea was to simply run the - npm start from the build job as a before_script in the deploy job. However, I am limited by the infrastructure set up by devops at the moment, which means the build job runs on an environment where npm is installed, and the deploy job runs on an environment where npm is not installed.
Is there any way I can run these two jobs separately, but somehow only need one button in GitLab to start both of these scripts?
Or perhaps force the build job to always re-run before the deploy job runs, or vice versa, and disable the deploy job from being able to run independently of the build job?
I have a Spring Boot application. I deployed it in the first stage, and now I have a jar file.
My question is: how do I access this jar file in the next stage, and how can I run it?
Second question: how can I increase its version number? For example, the jar file name is spring.0.0.1.jar and I want to increase the version number after every push. Is this possible?
First Question:
You can save the jar file as an artifact and access it in the second stage's jobs' script area. For example:
FirstJob:
  stage: FirstStage
  script:
    - <your commands here>
  artifacts:
    paths:
      - ./artifacts/myOutput.jar
Now your "myOutput.jar" is accessible in the artifacts folder for all following jobs. See here: https://docs.gitlab.com/ee/ci/pipelines/job_artifacts.html
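For example, a job in the next stage could then pick up and run the jar straight from that path (the job and stage names below are just placeholders mirroring the example above):

SecondJob:
  stage: SecondStage
  script:
    # the artifact produced by FirstJob is restored into the working directory automatically
    - java -jar ./artifacts/myOutput.jar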
Second Question:
As far as I know, there is no way of handing variables down between pipelines in GitLab CI, so this would not be possible. Since artifacts don't get added to the repository, previous artifacts are not available to following pipelines either. Still, if I had to come up with a solution on the spot, I'd try:
- saving the version number somewhere accessible to every pipeline (cloud, repo)
- git push-ing every artifact to actually add it to the repository, then checking the file name and incrementing the version number
- using the GitLab CI release option. The CI can create a release object for you, maybe this could help as well (a sketch follows below). See here: https://docs.gitlab.com/ee/ci/yaml/README.html#release
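To illustrate that last option, a minimal sketch, assuming a reasonably recent GitLab with the release-cli image available; it derives an auto-incrementing version from the predefined $CI_PIPELINE_IID variable, and the v0.0. prefix, job name, and branch are just placeholders:

release_job:
  stage: release
  image: registry.gitlab.com/gitlab-org/release-cli:latest
  rules:
    - if: '$CI_COMMIT_BRANCH == "master"'
  script:
    - echo "Creating release v0.0.$CI_PIPELINE_IID"
  release:
    # CI_PIPELINE_IID increases with every pipeline in the project,
    # so each push to master produces a new version/tag
    tag_name: v0.0.$CI_PIPELINE_IID
    description: "Automatic release v0.0.$CI_PIPELINE_IID"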
I have the following setup configured and working:
- gitlab-ci, which uses a docker-machine runner and uploads the cache to S3
- a maven build with caching configured
- the cache correctly loads and uploads on each job
But the problem is that every time I run mvn install, something in the local maven repository changes (I assume it updates pom metadata) and the gitlab runner keeps uploading a new version of the cache on every single build.
It is still faster and more reliable to use this "busted" cache than to download the deps from the internet every time, but the upload can take a long time and I would like to shave off this extra time.
How can I modify my build to force maven to generate a cacheable local repository?
Simplified version of my .gitlab-ci.yml:
variables:
  # we have a custom java+maven image, that uses this ENV variable,
  # to auto-configure path where to put the local maven repository
  MAVEN_LOCAL_REPOSITORY: $CI_PROJECT_DIR/.cache/maven

job-build:
  stage: build
  image: internal-gitlab/java/maven:3.6-jdk8-alpine
  script:
    - mvn -B clean package
  cache:
    key: backend-dependencies
    paths:
      - .cache/
You have a constant as a cache key. Maybe a more fine-grained cache key would help.
See the link here
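For illustration, a minimal sketch of what a more fine-grained key could look like, assuming a GitLab version that supports cache:key:files; the cache key is then derived from pom.xml instead of being a constant, so a changed dependency set produces a new cache entry:

job-build:
  stage: build
  image: internal-gitlab/java/maven:3.6-jdk8-alpine
  script:
    - mvn -B clean package
  cache:
    key:
      files:
        - pom.xml     # key changes only when pom.xml changes
    paths:
      - .cache/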
In short - prepare your own maven image with required dependencies and use it instead of internal-gitlab/java/maven:3.6-jdk8-alpine.
Some details:
First of all, you need to create a maven docker image in which all (or most) of the dependencies required for your project are present. Publish it to your registry (gitlab has one) and use it instead of internal-gitlab/java/maven:3.6-jdk8-alpine.
To create such an image I usually create an additional job in the CI, triggered manually. You need to trigger it initially and whenever the project dependencies are heavily modified.
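A sketch of what such a manual job could look like, assuming a runner with Docker-in-Docker available, the project's own GitLab container registry, and a Dockerfile in the repo that pre-fetches the Maven dependencies (image name and stage are placeholders):

prepare-maven-image:
  stage: prepare
  when: manual                    # run it initially and after big dependency changes
  image: docker:latest
  services:
    - docker:dind
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker build -t $CI_REGISTRY_IMAGE/maven-deps:latest .
    - docker push $CI_REGISTRY_IMAGE/maven-deps:latest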
Working samples can be found here:
- https://gitlab.com/alexej.vlasov/syncer/blob/master/.gitlab-ci.yml - this project uses the prepared image and also has a job to prepare this image.
- https://gitlab.com/alexej.vlasov/maven/blob/master/Dockerfile - the Dockerfile that runs maven and downloads the dependencies once.
The pros:
- you don't need to download dependencies each time - they are inside a docker image (and docker layers are cached on the runners)
- you don't need to upload artifacts when the job is finished
The problem:
Bamboo executes old unit tests that no longer exist in my current develop branch, which causes a build error.
The situation that causes this problem:
After a big refactoring process of my maven java project, where I basically moved, modified and renamed every file, I committed my changes to my remote repository.
That triggered my bamboo build plan, to start the build process.
The git code checkout seems to work, but the next step, running the unit tests, fails!
Looking in the log file I see that an old, no longer existing java unit test class gets executed and of course fails because of NullPointerExceptions.
Things I tried to fix this problem
A. Remove caches in the Administration section
- I went to Bamboo -> Administration -> Repository Settings, selected the cache of my project and deleted it.
- I started the build plan again.
- BUILD ERROR! Same problem
B. Delete the cache directory in the file system
- Start an RDP session on the bamboo server
- stop bamboo
- go to D:\bamboo-home_64\xml-data\build-dir\_git-repositories-cache
- delete all files in this folder
- start bamboo
- start the build plan again
- BUILD ERROR! same problem
Meta info
bamboo version: 6.1.0 build 60103 - 18 Jul 17
I don't know what I can do to fix this.
There's a Clean working directory task. Add it as the first task to your Job and see if it solves the issue.
Currently, I have two main projects.
1-) A Vue project which contains (webviews for iOS and Android, websites, and the renderer for our Electron app); they share components & APIs.
2-) An Electron project which builds the desktop app for (windows, darwin, linux).
I would like to automate our build and release process. My current setup:
before_script:
  - apt-get update
  - apt-get install zip unzip
  - rm -rf vue-project
  - git clone vue-project
  - cd vue-project
  - git checkout dev
  - git pull
  - sed -i "/\b\(areaCode\|inline-svg-loader\)\b/d" ./packages/devtool/package.json
  - yarn install
  - ln -s vue-project/packages/desktop/ web
  - npm install

build_darwin:
  stage: build
  script:
    - npm run package -- darwin --deploy
  cache:
    paths:
      - vue-project/node_modules
      - node_modules
Basically, before bundling the Electron project, it clones vue-project, installs dependencies, and bundles the Electron renderer; when that is finished, I run the package step.
I would like to separate these two jobs from each other. Is there any way I could use artifacts from a different project's GitLab CI pipelines?
Any help would be appreciated.
GitLab has an API for doing a lot of tricks.
curl --header "PRIVATE-TOKEN:YOURPRIVATETOKEN" "https://gitlab.example.com/api/v4/projects/1/jobs/artifacts/master/download?job=test"
To download it as a file:
curl --header "PRIVATE-TOKEN:YOURPRIVATETOKEN" -o artifacts.zip "http://gitlab.example.net/api/v4/projects/<projectnumber>/jobs/artifacts/master/download?job=build_desktop"
Gitlab can certainly support this. To accomplish this, follow these steps:
ARTIFACT GENERATION
In your Vue Project, modify your job(s) of interest to store artifacts relevant to the Electron project. Each job's artifacts are defined using GitLab Job Artifacts notation, are uploaded to GitLab at job completion, and are stored associated with your Project, Branch, and Job.
Note: Branch is often overlooked, and matters when you want to retrieve your artifacts, more on this later.
Illustrating:
Vue Project .gitlab-ci.yml

stages:
  - stage1
  - ...

vue-job1:
  stage: stage1
  script:
    - echo "vue-job1 artifact1" > ./artifact1
    - echo "vue-job1 artifact2" > ./artifact2
  artifacts:
    when: always
    paths:
      - ./artifact1
      - ./artifact2
    expire_in: 90 days

vue-job2:
  stage: stage1
  script:
    # error, would overwrite job1's artifacts since working
    # directory is a global space shared by all pipeline jobs
    # - echo "vue-job2 artifact1" > ./artifact1
    - echo "vue-job2 artifact1" > ./artifact3
  artifacts:
    when: always
    paths:
      - ./artifact3
    expire_in: 90 days
The artifacts generated above are written to the working directory, which is a clone of your project's repo. So be careful with filename conflicts. To be safe, put your artifacts in a subdirectory (eg: cat "foo" > ./subdir/artifact) and reference them in paths the same way (paths: - ./subdir/artifact). You can use 'ls' in your script to view the working directory.
When your job completes, you can confirm the artifacts stored in GitLab by using the GitLab UI. View the job output, and use the Browse button under Job Artifacts on the right panel.
ARTIFACT RETRIEVAL
In your Electron Project, modify your job(s) of interest to retrieve artifacts stored in the Vue Project using the Gitlab Job Artifacts API and curl. In order to access the Vue artifacts, you will need the Vue Project, Branch, and Job that the artifacts were created under.
Project: For Project, use the Project ID displayed in the Gitlab UI Project Details screen.
Branch: Usually master, but depends on the branch your pipeline executes against. Although this is not relevant for your problem, if you are generating and consuming artifacts across executions of the same pipeline, use the Gitlab variable $CI_COMMIT_BRANCH for the branch.
Job: Generally the Job Name that generated the artifacts for your Project. But if you need artifacts produced by a specific Job, then use the Job Number and the corresponding retrieval API.
Illustrating:
Electron Project .gitlab-ci.yml

stages:
  - stage1
  - ...

electron-job1:
  stage: stage1
  script:
    - curl -o ./artifact1 -H "PRIVATE-TOKEN:$TOKEN" https://gitlab.example.com/api/v4/projects/$VUE_PROJECT_ID/jobs/artifacts/$BRANCH/raw/artifact1?job=vue-job1
    - curl -o ./artifact2 -H "PRIVATE-TOKEN:$TOKEN" https://gitlab.example.com/api/v4/projects/$VUE_PROJECT_ID/jobs/artifacts/$BRANCH/raw/artifact2?job=vue-job1
    - curl -o ./artifact3 -H "PRIVATE-TOKEN:$TOKEN" https://gitlab.example.com/api/v4/projects/$VUE_PROJECT_ID/jobs/artifacts/$BRANCH/raw/artifact3?job=vue-job2
This script retrieves artifacts individually to the working directory of your Electron Project. There are also options for retrieving all artifacts for your job at once as a zip archive.
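For reference, the zip-archive retrieval mentioned above could look like this (a sketch reusing the same assumed $TOKEN, $VUE_PROJECT_ID, and $BRANCH variables as the job above):

curl -o artifacts.zip -H "PRIVATE-TOKEN:$TOKEN" "https://gitlab.example.com/api/v4/projects/$VUE_PROJECT_ID/jobs/artifacts/$BRANCH/download?job=vue-job1"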
MISCELLANEOUS
Although this is not in the problem posed, it is worth noting that you can use artifacts within the lifespan of a single pipeline execution to pass information between jobs, and you can also use them to pass information across pipeline executions within the same project.
With recent versions of GitLab, this can be achieved simply by using either the multi-project pipeline feature (starting a pipeline in one project then triggers a build in the other project): see the documentation.
Or you can use the "needs:project" mechanism, which allows one job to download artifacts from other pipelines (see the documentation: "Use needs:project to download artifacts from up to five jobs in other pipelines").
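For illustration, a minimal sketch of the needs:project form in the Electron project's .gitlab-ci.yml, assuming the Vue project lives at my-group/vue-project and produces its artifacts in a job called vue-job1 on master (all of these names are placeholders):

electron-job1:
  stage: build
  needs:
    - project: my-group/vue-project   # full path of the other project (assumed)
      job: vue-job1                   # job in that project that produced the artifacts
      ref: master                     # branch whose latest successful pipeline is used
      artifacts: true                 # download that job's artifacts into this job
  script:
    - ls ./artifact1 ./artifact2      # the downloaded artifacts are available in the working directory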