gitlab-runner submodule .../.git/modules/...: "Not a directory" on Windows

I have a repository with one submodule. The CI is configured to clone the submodule:
variables:
  GIT_STRATEGY: clone
  GIT_SUBMODULE_STRATEGY: normal
I then created two jobs: one to build on Debian and one to build on Windows. The Debian build job works fine; the submodule is cloned and the correct commit is checked out.
build:debian:
  image: debian
  stage: build
  tags:
    - linux
    - docker
  script:
    - qmake project.pro
    - make
But the Windows job fails while cloning the submodule:
build:windows:
  image: windows
  stage: build
  tags:
    - windows
    - docker
  script:
    - qmake.exe project.pro -spec win32-msvc "CONFIG+=release" -r
    - nmake clean
    - jom.exe -j4
The error looks like this:
Cloning into 'C:/builds/group_name/project_name/submodule_name'...
fatal: Invalid path 'C:/builds/group_name/project_name/.git/modules/submodule_name': Not a directory
fatal: clone of 'https://gitlab-ci-token:[MASKED]@gitlab.domain.com/group_name/submodule_name.git' into submodule path 'C:/builds/group_name/project_name/submodule_name' failed
Failed to clone 'submodule_name' a second time, aborting
I did not post the first attempt since it produces the same error message.
Does anyone know this error, have a solution, or an idea of how to fix or debug it?
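One way to start debugging is to find out which component of the reported path is a regular file where git expects a directory (a leftover file in the runner's builds directory is one plausible cause of "Not a directory"). The helper below is hypothetical, and the path is simply the one from the error message:

```shell
# Hypothetical debug helper: walk up a path and report the first
# component that is a regular file where a directory was expected.
find_blocking_file() {
  p="$1"
  while [ -n "$p" ] && [ "$p" != "/" ] && [ "$p" != "." ]; do
    if [ -f "$p" ]; then
      echo "file (not a directory): $p"
      return 0
    fi
    p=$(dirname "$p")
  done
  echo "no blocking file found"
}

# Path taken from the error message above
find_blocking_file "C:/builds/group_name/project_name/.git/modules/submodule_name"
```

If this reports a stray file (for instance `.git/modules/submodule_name` existing as a file from an earlier build), clearing the runner's builds directory for this project before retrying is worth a try.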

Related

My Gitlab CI/CD pipeline failing in cache with FATAL: file does not exist error

Right now I'm trying to learn CI/CD with Gradle, using GitLab CI's pipelines. With the GitLab documentation and a bit of searching I put together a .gitlab-ci.yml like this:
image: gradle:jdk11

before_script:
  - export GRADLE_USER_HOME=`pwd`/.gradle

cache:
  paths:
    - .gradle/wrapper
    - .gradle/caches
package:
  stage: build
  script:
    - ./gradlew assemble

test:
  stage: test
  script:
    - ./gradlew check
However, it's not working for my Spring Boot application.
The GitLab pipeline gives me a "FATAL: file does not exist" error. I think it's due to caching, but everything seems okay in the YAML file.
Even though "FATAL: file does not exist" is red and bold, it does not fail the build; it is just a warning. The actual cause of the failure is a couple of lines below: /bin/bash: line 104: ./gradlew: Permission denied.
To resolve the issue, update the build stage with the following code segment:
package:
  stage: build
  script:
    - chmod +x ./gradlew
    - ./gradlew --build-cache assemble
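As an alternative to running chmod in every job, the executable bit can be committed once so fresh CI checkouts get an executable gradlew. The sketch below demonstrates this in a throwaway repo; in your real repository you would just run the `git update-index` and `git commit` lines at the repo root:

```shell
# Demo in a throwaway repo: record mode 100755 for gradlew in the index
# and commit it, so CI checkouts no longer need a chmod step.
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email ci@example.com && git config user.name ci
printf '#!/bin/sh\necho build\n' > gradlew
git add gradlew
git update-index --chmod=+x gradlew   # mark the tracked file executable
git commit -qm "Make gradlew executable"
git ls-files --stage gradlew          # mode column now shows 100755
```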

Gitlab Docker build: calling shell command in .gitlab-ci.yml

I'm trying to call shell command in .gitlab-ci.yml, whose relevant parts are:
image: docker:latest

services:
  - docker:dind

stages:
  - build
  - deploy
...
build:
  stage: build
  script:
    - apt-get update -y
    - GIT_TAG=$(git tag | tail -1)
    - GIT_TAG=$(/usr/bin/git tag | tail -1)
    - docker ...
However, all three of the shell commands above fail with a "command not found" error. The git command failing is really odd, because the runner has to fetch the git repo before the script section starts. I.e., I can see that git works, but I just can't use it myself.
Is there any way to make this work?
You see git working in separate steps because GitLab is probably doing it in another container. They keep your container clean, so you have to install dependencies yourself.
Since the image you're using is based on Alpine Linux, the command to install git is:
apk add --no-cache git
You can also skip the whole thing and use the predefined environment variables if all you need is git information. $CI_COMMIT_TAG will contain the tag and $CI_COMMIT_SHA will contain the commit hash.
From the GitLab documentation, here is the definition of CI_COMMIT_TAG: "The commit tag name. Present only when building tags."
This means: when you push a commit to GitLab, it starts a pipeline without the CI_COMMIT_TAG variable. When you make a tag on that commit and push the tag to GitLab, another pipeline (this time for the tag, not for the commit) is started, and in that case CI_COMMIT_TAG is present.
@xpt - thanks for the vote of confidence and for asking me to write this up as an answer; hope it helps the community!
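The fallback between the two variables can be expressed directly in the job script with standard shell parameter expansion; `VERSION` is just an illustrative name, and the values below are simulated since the real ones are set by the runner:

```shell
# Simulated values - in CI, the runner sets these predefined variables.
CI_COMMIT_SHA="9b6afd53"
unset CI_COMMIT_TAG          # commit pipelines have no CI_COMMIT_TAG

# Use the tag when present (tag pipelines), else fall back to the SHA.
VERSION="${CI_COMMIT_TAG:-$CI_COMMIT_SHA}"
echo "building version $VERSION"   # -> building version 9b6afd53
```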

gitlab-sonar-scanner: command not found for JS language

I am using GitLab CI/CD to run pipelines. To analyse the code with SonarQube, I am using a .gitlab-ci.yml file with the following config:
stages:
  - analysis

sonarqube:
  stage: analysis
  image: ciricihq/gitlab-sonar-scanner
  variables:
    SONAR_URL: https://playground.altimetrik.com/sonarqube
    SONAR_ANALYSIS_MODE: issues
  script:
    - gitlab-sonar-scanner
I have also added sonar-project.properties.
I always end up with an error saying:
Running with gitlab-runner 11.3.1 (0aa5179e)
on Gitlab-Pipeline 04bdd119
Using Shell executor...
Running on ip-10-101-102-187.ec2.internal...
Fetching changes...
HEAD is now at 39bbd50 Update .gitlab-ci.yml
From https://gitlab.altimetrik.com/playground/node_js_usecasepg-ta1811536129881158
39bbd50..99600f7 master -> origin/master
Checking out 99600f70 as master...
Skipping Git submodules setup
$ gitlab-sonar-scanner
bash: line 57: gitlab-sonar-scanner: command not found
ERROR: Job failed: exit status 1
The same config was working a few days back; I am facing this error only in recent days. Can anyone help with this?
Thank you...
It seems the sonar-scanner has not been installed on your runner. Note that the job log says "Using Shell executor", so the image: keyword is ignored and the scanner must be present on the runner machine itself.
Please verify your runner to check this.
Reference: installation steps
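A quick way to verify from a job (or directly on the runner machine) is to check whether the command is on the PATH at all; `check_cmd` is a hypothetical helper, not part of any tool:

```shell
# Report whether a command is available on this machine's PATH.
check_cmd() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "found: $1"
  else
    echo "missing: $1"
  fi
}

check_cmd gitlab-sonar-scanner   # "missing: ..." means it must be installed on the runner
```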

CircleCI 2.0 Workflow - Deploy not working

I'm trying to set up a workflow in CircleCI for my React project.
What I want to achieve is to get a job to build the stuff and another one to deploy the master branch to Firebase hosting.
This is what I have so far after several configurations:
witmy: &witmy
  docker:
    - image: circleci/node:7.10

version: 2
jobs:
  build:
    <<: *witmy
    steps:
      - checkout
      - restore_cache:
          keys:
            - v1-dependencies-{{ checksum "package.json" }}
            - v1-dependencies-
      - run: yarn install
      - save_cache:
          paths:
            - node_modules
          key: v1-dependencies-{{ checksum "package.json" }}
      - run:
          name: Build app in production mode
          command: |
            yarn build
      - persist_to_workspace:
          root: .
  deploy:
    <<: *witmy
    steps:
      - attach_workspace:
          at: .
      - run:
          name: Deploy Master to Firebase
          command: ./node_modules/.bin/firebase deploy --token=MY_TOKEN

workflows:
  version: 2
  build-and-deploy:
    jobs:
      - build
      - deploy:
          requires:
            - build
          filters:
            branches:
              only: master
The build job always succeeds, but with the deploy job I get this error:
#!/bin/bash -eo pipefail
./node_modules/.bin/firebase deploy --token=MYTOKEN
/bin/bash: ./node_modules/.bin/firebase: No such file or directory
Exited with code 1
So, what I understand is that the deploy job is not running in the same place the build ran, right?
I'm not sure how to fix that. I've read some of the examples they provide and tried several things, but it doesn't work. I've also read the documentation, but I think it's not very clear how to configure everything... maybe I'm too dumb.
I hope you guys can help me out on this one.
Cheers!!
EDITED TO ADD MY CURRENT CONFIG USING WORKSPACES
I've added Workspaces... but still I'm not able to get it working, after a loooot of tries I'm getting this error:
Persisting to Workspace
The specified paths did not match any files in /home/circleci/project
And also it's a real pain to commit and push to CircleCI every single change to the config file when I want to test it... :/
Thanks!
disclaimer: I'm a CircleCI Developer Advocate
Each job runs in its own Docker container (or VM), so the problem here is that nothing in node_modules from the build job exists in your deploy job. There are two ways to solve this:
1. Install Firebase, and anything else you might need, on the fly, just like you do in the build job.
2. Utilize CircleCI Workspaces to carry your node_modules directory over from the build job to the deploy job.
In my opinion, option 2 is likely your best bet because it's more efficient.
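For option 2, note that the persist_to_workspace step in the posted config has no paths key, which would explain the "The specified paths did not match any files" message. A sketch of the two steps (the listed paths are assumptions about the project layout):

```yaml
# in the build job
- persist_to_workspace:
    root: .
    paths:
      - node_modules
      - build

# in the deploy job
- attach_workspace:
    at: .
```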

Gitlab CI: Clone repo only before first build in pipeline

I have ~5-10 jobs in my .yml file for GitLab CI. To save time, I'm wondering if there is a way NOT to re-clone the repo for every job. Ideally, the repo would be cloned once and then all jobs run. I also don't want to combine the jobs into a single one, because I'd like to see the results of each individually (when they are combined, GitLab's "pass/fail" is just the result of the last job).
I don't want to simply do git fetch because I want a fresh clone at the start.
stages:
  - run

job1:
  stage: run
  script:
    - pwd
    - make all TEST=job1

job2:
  stage: run
  script:
    - pwd
    - make all TEST=job2

job3:
  stage: run
  script:
    - pwd
    - make all TEST=job3
...
I'm also fiddling around with this topic. What I do is run a checkout stage first (with GIT_STRATEGY: clone) and then the build stage with multiple jobs using GIT_STRATEGY: fetch.
This ensures that the repo is fully cloned first and only fetched for every build step. Maybe this helps you too.
stages:
  - checkout
  - build

checkout:
  variables:
    GIT_STRATEGY: clone
    GIT_SUBMODULE_STRATEGY: recursive
  stage: checkout
  script: '#echo Checking out...'

build:commander:
  stage: build
  variables:
    GIT_STRATEGY: fetch
  script:
    - _Publish.bat commander
  artifacts:
    paths:
      - BuildArtifacts\Commander\**

build:login:
  stage: build
  variables:
    GIT_STRATEGY: fetch
  script:
    - _Publish.bat login
  artifacts:
    paths:
      - BuildArtifacts\Login\**

build:cli:
  stage: build
  variables:
    GIT_STRATEGY: fetch
  script:
    - _Publish.bat cli
  artifacts:
    paths:
      - BuildArtifacts\Cli\**
This might be helpful, assuming you are using a new enough version of GitLab and the runner: https://docs.gitlab.com/ce/ci/yaml/README.html#git-strategy
You can set your git strategy to none and clone the repo manually in your before_script section.
This still has some difficulties: different runners can service different jobs, so unless you have a dedicated runner for this project, all runners need access to the repo location.
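A sketch of that approach, assuming the predefined CI_REPOSITORY_URL and CI_COMMIT_SHA variables; whether cloning manually actually saves time depends on how your runners are set up:

```yaml
variables:
  GIT_STRATEGY: none   # the runner does no git setup itself

before_script:
  # Clone once per runner build directory, then just update and pin
  # the commit on subsequent jobs.
  - if [ ! -d .git ]; then git clone "$CI_REPOSITORY_URL" .; fi
  - git fetch origin
  - git checkout "$CI_COMMIT_SHA"
```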
