For a few days now, the build step for my NestJS project has been failing in my GitHub Action. I use Turborepo with pnpm in a monorepo and run the build with turbo run build. This works flawlessly on my local machine, but in GitHub it fails with sh: 1: nest: Permission denied. ELIFECYCLE Command failed with exit code 126. I'm not sure how this is possible, since I couldn't find any meaningful change I made to the code in the meantime; it just stopped working unexpectedly. I actually suspect an issue with GH Actions itself, since the build also works in my local Docker build.
Has anyone else encountered this issue with NestJS in GH Actions?
This is my action yml:
name: Test, lint and build
on:
  push:
jobs:
  test-lint-build:
    runs-on: ubuntu-latest
    services:
      postgres:
        # Docker Hub image
        image: postgres
        # Provide the password for postgres
        env:
          POSTGRES_HOST: localhost
          POSTGRES_USER: test
          POSTGRES_PASSWORD: docker
          POSTGRES_DB: financing-database
        ports:
          # Maps tcp port 5432 on service container to the host
          - 2345:5432
        # Set health checks to wait until postgres has started
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
    steps:
      - name: Checkout
        uses: actions/checkout@v3
      - name: Install pnpm
        uses: pnpm/action-setup@v2.2.2
        with:
          version: latest
      - name: Install
        run: pnpm i
      - name: Lint
        run: pnpm run lint
      - name: Test
        run: pnpm run test
      - name: Build
        run: pnpm run build
        env:
          VITE_SERVER_ENDPOINT: http://localhost:8000/api
      - name: Test financing-server (e2e)
        run: pnpm --filter @project/financing-server run test:e2e
I found out what was causing the problem. I was using node-linker=hoisted to mitigate some issues that pnpm's way of linking modules was causing with my Jest tests. Removing this from my project suddenly made the action work again.
I still don't know why this only broke the build recently, since I've had this option activated for some time now.
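For anyone checking their own setup: the option lives in the project's .npmrc. A minimal sketch of the line whose removal fixed the build (pnpm then falls back to its default symlinked node_modules layout):
# .npmrc
node-linker=hoisted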
My container has a shell script that runs after the container starts. It was working before, but when I push it to GitLab and run it in a pipeline, it doesn't start. Here's my Dockerfile.
Dockerfile
# Setup SQL Server 2019
FROM mcr.microsoft.com/mssql/server:2019-latest
ENV SA_PASSWORD=Banana100
ENV ACCEPT_EULA=Y
ENV MSSQL_PID=Developer
USER mssql
# Copy test database and create DB script to image working directory
WORKDIR /usr/src/app
COPY . /usr/src/app
EXPOSE 1433
ENTRYPOINT /bin/bash ./entrypoint.sh
What's weird is that whenever I delete the /bin/bash ./entrypoint.sh line, retype it, then run docker compose up --force-recreate --build -d on my local machine, the shell script is executed just fine. But when the pipeline does it, it doesn't work. And afterwards I have nothing to push to my repo: Git thinks I didn't make any changes at all.
This post here suggests that it could be an issue with the line endings. That has apparently worked for most people, but I have tried the following with no luck:
Daniel Howard's answer
P.J.Meisch's answer
Ryan Allen's answer
My entrypoint.sh is a shell script that runs an SQL file to create and restore a database. Please see the code below.
entrypoint.sh
# /opt/mssql/bin/sqlservr & /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P FBuilder*09 -d master -i createDB.sql
/opt/mssql/bin/sqlservr & /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P $SA_PASSWORD -d master -i createDB.sql
while true; do sleep 1000; done
docker-compose.yml
version: "3"
services:
fb_db:
container_name: test_fbdb
build: ./test.environment
ports:
- "5002:1433"
gitlab-ci.yml
.ci_server:
  tags:
    - ci
stages:
  - build
  - test
variables:
  SOLUTION_NAME: FBSVC.sln
before_script:
  - Set-Variable -Name "time" -Value (date -Format "%H:%m")
  - echo ${time}
  - echo "started by ${GITLAB_USER_NAME}"
build:
  stage: build
  extends:
    - .ci_server
  script:
    - echo "Restoring project dependencies..."
    - $Env:Path += ";C:\nuget"
    - nuget restore
    - echo "Preparing development environment"
    - copy "D:\CI_TestDB\FormulaBuilder\FormulaBuilder.bak" "D:\Project Files\Gitlab-Runner\builds\_1ZgTgcF\0\water-utilities\formula-builder\test.environment"
    - docker compose up --force-recreate --build -d
    - echo "Publishing project..."
    - $Env:Path += ";C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\MSBuild\Current\Bin"
    - msbuild FBSVC.sln /p:DeployOnBuild=true /p:PublishProfile=CI_IntegrationTest
    - echo "Build stage completed successfully!"
  only:
    - merge_requests
test:
  stage: test
  variables:
    GIT_STRATEGY: clone
  extends:
    - .ci_server
  script:
    - echo "Installing dev dependencies..."
    - npm ci
    - echo "Running tests..."
    - npm run cy:test
  dependencies:
    - build
  only:
    - merge_requests
The GitLab Runner is running on my own physical machine; it is not a Shared Runner. Any help would be appreciated. Thanks!
UPDATE (June 24, 2022)
I have replaced the ENTRYPOINT in the Dockerfile with ENTRYPOINT [ "/bin/bash", "-c", "./entrypoint.sh" ], and it still shows the same issue. Looking at the "Select End of Line Sequence" menu in the bottom-right corner of VS Code, my local file is set to "LF", but I checked the Dockerfile pulled by the pipeline and it is set to "CRLF". Does this matter?
I have run git config --global core.autocrlf input multiple times, but whenever the file is pulled from the repo, the endings are converted anyway. Hence, I decided to detach the folder containing the files that build my database environment for the time being; doing so requires rewriting the whole ENTRYPOINT line in my Dockerfile for it to work properly again. If you have any advice, please let me know.
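A note on that: a .gitattributes entry committed to the repo forces LF on checkout regardless of anyone's core.autocrlf setting, which is the usual way to stop CRLF from sneaking into shell scripts on a Windows runner. A minimal sketch (the file list is illustrative):
# .gitattributes — force LF so bash can execute these files
*.sh text eol=lf
Dockerfile text eol=lf
After adding it, the affected files need to be renormalized once with git add --renormalize . and committed.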
If a GitLab project is configured on GitLab CI, is there a way to run the build locally?
I don't want to turn my laptop into a build "runner"; I just want to take advantage of Docker and .gitlab-ci.yml to run tests locally (i.e. with everything pre-configured). Another advantage is being sure that I'm using the same environment locally and on CI.
Here is an example of how to run Travis builds locally using Docker; I'm looking for something similar for GitLab.
As of a few months ago, this is possible using gitlab-runner:
gitlab-runner exec docker my-job-name
Note that you need both docker and gitlab-runner installed on your computer to get this working.
You also need the image key defined in your .gitlab-ci.yml file; otherwise it won't work.
Here's the line I currently use for testing locally using gitlab-runner:
gitlab-runner exec docker test --docker-volumes "/home/elboletaire/.ssh/id_rsa:/root/.ssh/id_rsa:ro"
Note: You can avoid adding --docker-volumes for your key by setting it as a default in /etc/gitlab-runner/config.toml. See the official documentation for more details. Also, use gitlab-runner exec docker --help to see all docker-based runner options (variables, volumes, networks, etc.).
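For reference, a sketch of what such a default might look like in /etc/gitlab-runner/config.toml (names and paths are illustrative; check the documentation for your runner version):
[[runners]]
  name = "local"
  executor = "docker"
  [runners.docker]
    image = "alpine"
    volumes = ["/home/elboletaire/.ssh/id_rsa:/root/.ssh/id_rsa:ro"]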
Due to the confusion in the comments, I paste here the gitlab-runner --help result, so you can see that gitlab-runner can run builds locally:
gitlab-runner --help
NAME:
   gitlab-runner - a GitLab Runner

USAGE:
   gitlab-runner [global options] command [command options] [arguments...]

VERSION:
   1.1.0~beta.135.g24365ee (24365ee)

AUTHOR(S):
   Kamil Trzciński <ayufan@ayufan.eu>

COMMANDS:
   exec         execute a build locally
   [...]

GLOBAL OPTIONS:
   --debug      debug mode [$DEBUG]
   [...]
As you can see, the exec command is to execute a build locally.
Even though there was an issue to deprecate the current gitlab-runner exec behavior, it ended up being reconsidered and a new version with greater features will replace the current exec functionality.
Note that this process is for using your own machine to run the tests in docker containers, not for defining custom runners. To do that, just go to your repo's CI/CD settings and read the documentation there. If you want to ensure your runner is executed instead of one from gitlab.com, add a custom and unique tag to your runner, ensure it only runs tagged jobs, and tag all the jobs you want your runner to be responsible for.
I use this docker-based approach:
Edit: 2022-10
docker run --entrypoint bash --rm -w $PWD -v $PWD:$PWD -v /var/run/docker.sock:/var/run/docker.sock gitlab/gitlab-runner:latest -c 'git config --global --add safe.directory "*";gitlab-runner exec docker test'
For all git versions > 2.35.2, you must add safe.directory within the container to avoid fatal: detected dubious ownership in repository at.... This is also true for patched git versions < 2.35.2. The old command will not work anymore.
Details
0. Create a git repo to test this answer
mkdir my-git-project
cd my-git-project
git init
git commit --allow-empty -m"Initialize repo to showcase gitlab-runner locally."
1. Go to your git directory
cd my-git-project
2. Create a .gitlab-ci.yml
Example .gitlab-ci.yml
image: alpine
test:
  script:
    - echo "Hello Gitlab-Runner"
3. Create a docker container with your project dir mounted
docker run -d \
  --name gitlab-runner \
  --restart always \
  -v $PWD:$PWD \
  -v /var/run/docker.sock:/var/run/docker.sock \
  gitlab/gitlab-runner:latest
(-d) run container in background and print container ID
(--restart always) or not?
(-v $PWD:$PWD) Mount the current directory into the current directory of the container - Note: On Windows you could bind your dir to a fixed location, e.g. -v ${PWD}:/opt/myapp. Also, $PWD will only work in PowerShell, not in cmd.
(-v /var/run/docker.sock:/var/run/docker.sock) This gives the container access to the docker socket of the host so it can start "sibling containers" (e.g. Alpine).
(gitlab/gitlab-runner:latest) Just the latest available image from dockerhub.
4. Execute with
Avoid fatal: detected dubious ownership in repository at... More info
docker exec -it -w $PWD gitlab-runner git config --global --add safe.directory "*"
Actual execution
docker exec -it -w $PWD gitlab-runner gitlab-runner exec docker test
#               ^       ^             ^             ^    ^      ^
#               |       |             |             |    |      |
#              (a)     (b)           (c)           (d)  (e)    (f)
(a) Working dir within the container. Note: On Windows you could use a fixed location, e.g. /opt/myapp.
(b) Name of the docker container
(c) Execute the command "gitlab-runner" within the docker container
(d)(e)(f) run gitlab-runner with the "docker executor" and run a job named "test"
5. Prints
...
Executing "step_script" stage of the job script
$ echo "Hello Gitlab-Runner"
Hello Gitlab-Runner
Job succeeded
...
Note: The runner will only work on the committed state of your code base. Uncommitted changes will be ignored. Exception: the .gitlab-ci.yml itself does not have to be committed to be taken into account.
Note: There are some limitations running locally. Have a look at limitations of gitlab runner locally.
I'm currently working on a gitlab runner that works locally.
It's still in the early phases, but eventually it will become very relevant.
It doesn't seem like GitLab wants to (or has time to) make this, so here you go.
https://github.com/firecow/gitlab-runner-local
If you are running GitLab using the docker image from https://hub.docker.com/r/gitlab/gitlab-ce, it's possible to run pipelines by exposing the local docker.sock with a volume option: -v /var/run/docker.sock:/var/run/docker.sock. Adding this option to the GitLab container will allow your workers to access the docker instance on the host.
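A hedged sketch of starting that container with the socket mounted (ports and container name are illustrative):
docker run -d \
  --name gitlab \
  -p 80:80 -p 443:443 -p 22:22 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  gitlab/gitlab-ce:latest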
The GitLab runner appears not to work on Windows yet, and there is an open issue to resolve this.
So, in the meantime, I am moving my script code out to a bash script, which I can easily map into a docker container running locally and execute.
In this case I want to build a docker container in my job, so I create a script 'build':
#!/bin/bash
docker build --pull -t myimage:myversion .
in my .gitlab-ci.yaml I execute the script:
image: docker:latest
services:
  - docker:dind
before_script:
  - apk add bash
build:
  stage: build
  script:
    - chmod 755 build
    - build
To run the script locally using PowerShell, I can start the required image and map the volume with the source files:
$containerId = docker run --privileged -d -v ${PWD}:/src docker:dind
Install bash if not present:
docker exec $containerId apk add bash
Set permissions on the bash script:
docker exec -it $containerId chmod 755 /src/build
Execute the script:
docker exec -it --workdir /src $containerId bash -c 'build'
Then stop the container:
docker stop $containerId
And finally clean up the container:
docker container rm $containerId
Another approach is to have a local build tool installed on your PC and on your server at the same time.
Your .gitlab-ci.yml then simply calls your preferred build tool.
Here is an example .gitlab-ci.yml that I use with nuke.build:
stages:
  - build
  - test
  - pack
variables:
  TERM: "xterm" # Use Unix ASCII color codes on Nuke
before_script:
  - CHCP 65001 # Set correct code page to avoid charset issues
.job_template: &job_definition
  except:
    - tags
build:
  <<: *job_definition
  stage: build
  script:
    - "./build.ps1"
test:
  <<: *job_definition
  stage: test
  script:
    - "./build.ps1 test"
  variables:
    GIT_CHECKOUT: "false"
pack:
  <<: *job_definition
  stage: pack
  script:
    - "./build.ps1 pack"
  variables:
    GIT_CHECKOUT: "false"
  only:
    - master
  artifacts:
    paths:
      - output/
In nuke.build I've defined 3 targets named like the 3 stages (build, test, pack).
This way you have a reproducible setup (everything else is configured in your build tool), and you can directly test the different targets of your build tool.
(I can call .\build.ps1, .\build.ps1 test and .\build.ps1 pack whenever I want.)
I am on Windows, using VSCode with WSL.
I didn't want to register my work PC as a runner, so instead I'm running my yaml stages locally to test them out before I upload them:
$ sudo apt-get install gitlab-runner
$ gitlab-runner exec shell build
.gitlab-ci.yml
image: node:10.19.0 # https://hub.docker.com/_/node/
# image: node:latest
cache:
  # untracked: true
  key: project-name
  # key: ${CI_COMMIT_REF_SLUG} # per branch
  # key:
  #   files:
  #     - package-lock.json # only update cache when this file changes (not working) #jkr
  paths:
    - .npm/
    - node_modules
    - build
stages:
  - prepare # prepares builds, makes build needed for testing
  - test # uses test:build specifically #jkr
  - build
  - deploy
# before_install:
before_script:
  - npm ci --cache .npm --prefer-offline
prepare:
  stage: prepare
  needs: []
  script:
    - npm install
test:
  stage: test
  needs: [prepare]
  except:
    - schedules
  tags:
    - linux
  script:
    - npm run build:dev
    - npm run test:cicd-deps
    - npm run test:cicd # runs puppeteer tests #jkr
  artifacts:
    reports:
      junit: junit.xml
    paths:
      - coverage/
build-staging:
  stage: build
  needs: [prepare]
  only:
    - schedules
  before_script:
    - apt-get update && apt-get install -y zip
  script:
    - npm run build:stage
    - zip -r build.zip build
  # cache:
  #   paths:
  #     - build
  #   <<: *global_cache
  #   policy: push
  artifacts:
    paths:
      - build.zip
deploy-dev:
  stage: deploy
  needs: [build-staging]
  tags: [linux]
  only:
    - schedules
    # # - branches@gitlab-org/gitlab
  before_script:
    - apt-get update && apt-get install -y lftp
  script:
    # temporarily using 'verify-certificate no'
    # for more on verify-certificate #jkr: https://www.versatilewebsolutions.com/blog/2014/04/lftp-ftps-and-certificate-verification.html
    # variables do not work with 'single quotes' unless they are "'surrounded by doubles'"
    - lftp -e "set ssl:verify-certificate no; open mediajackagency.com; user $LFTP_USERNAME $LFTP_PASSWORD; mirror --reverse --verbose build/ /var/www/domains/dev/clients/client/project/build/; bye"
  # environment:
  #   name: staging
  #   url: http://dev.mediajackagency.com/clients/client/build
  #   # url: https://stg2.client.co
  when: manual
  allow_failure: true
build-production:
  stage: build
  needs: [prepare]
  only:
    - schedules
  before_script:
    - apt-get update && apt-get install -y zip
  script:
    - npm run build
    - zip -r build.zip build
  # cache:
  #   paths:
  #     - build
  #   <<: *global_cache
  #   policy: push
  artifacts:
    paths:
      - build.zip
deploy-client:
  stage: deploy
  needs: [build-production]
  tags: [linux]
  only:
    - schedules
    # - master
  before_script:
    - apt-get update && apt-get install -y lftp
  script:
    - sh deploy-prod
  environment:
    name: production
    url: http://www.client.co
  when: manual
  allow_failure: true
The idea is to keep check commands outside of .gitlab-ci.yml. I use a Makefile to run something like make check, and my .gitlab-ci.yml runs the same make commands that I use locally to check various things before committing.
This way you'll have one place with all/most of your commands (the Makefile), and .gitlab-ci.yml will contain only CI-related stuff; a sketch follows below.
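A minimal sketch of that layout (the check/lint/test targets and the node image are made-up examples; Makefile recipe lines must be tab-indented):
# Makefile
check: lint test
lint:
	npm run lint
test:
	npm run test

# .gitlab-ci.yml
check:
  image: node:lts
  script:
    - make check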
I have written a tool to run all GitLab-CI jobs locally without having to commit or push, simply with the command ci-toolbox my_job_name.
The URL of the project : https://gitlab.com/mbedsys/citbx4gitlab
Years ago I built this simple solution with a Makefile and docker-compose to run the gitlab runner in docker. You can use it to execute jobs locally as well, and it should work on all systems where docker works:
https://gitlab.com/1oglop1/gitlab-runner-docker
There are a few things to change in the docker-compose.override.yaml:
version: "3"
services:
runner:
working_dir: <your project dir>
environment:
- REGISTRATION_TOKEN=<token if you want to register>
volumes:
- "<your project dir>:<your project dir>"
Then inside your project you can execute it the same way as mentioned in other answers:
docker exec -it -w $PWD runner gitlab-runner exec <commands>..
I recommend using gitlab-ci-local:
https://github.com/firecow/gitlab-ci-local
It's able to run specific jobs as well.
It's a very cool project, and I have used it to run simple pipelines on my laptop; see the sketch below.
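A usage sketch based on the project's README (assuming the npm install route; check the README for current options):
npm install -g gitlab-ci-local
# run the whole pipeline from .gitlab-ci.yml
gitlab-ci-local
# run a single job by name
gitlab-ci-local test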
When I use the docker pipeline, the build succeeds.
But when I use an exec pipeline, it's always stuck in pending, and I don't know what's going wrong.
kind: pipeline
type: exec
name: deployment
platform:
  os: linux
  arch: amd64
steps:
  - name: backend image build
    commands:
      - echo start build images...
      # - export MAJOR_VERSION=1.0.rtm.
      # - export BUILD_NUMBER=$DRONE_BUILD_NUMBER
      # - export WORKSPACE=`pwd`
      # - bash ./jenkins_build.sh
    when:
      branch:
        - master
The docker pipeline is fine:
kind: pipeline
type: docker
name: deployment
steps:
  - name: push image to repo
    image: plugins/docker
    settings:
      dockerfile: src/ZR.DataHunter.Api/Dockerfile
      tags: latest
      insecure: true
      registry: "xxx"
      repo: "xxx"
      username:
        from_secret: username
      password:
        from_secret: userpassword
First of all, it's important to note that exec pipelines can be used only when Drone is self-hosted. It is written in the official docs:
Please note exec pipelines are disabled on Drone Cloud. This feature is only available when self-hosting
When Drone is self-hosted, be sure that:
The exec runner is installed
It is configured properly in its config file, so that it can connect to the Drone server (a sketch of such a file follows below)
And the drone-runner-exec service is running
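For reference, a minimal sketch of such a config file (host and secret are placeholders; see the drone-runner-exec docs for the full variable list):
# /etc/drone-runner-exec/config
DRONE_RPC_PROTO=https
DRONE_RPC_HOST=drone.example.com
DRONE_RPC_SECRET=super-duper-secret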
After the service is started, look at its log file; you should see info messages saying it was able to connect to your Drone server:
level=info msg="successfully pinged the remote server"
level=info msg="polling the remote server"
It's also possible to view the web UI (Dashboard) of the running service if you enable it.
So if you see it can poll your server, then your exec pipeline should run as expected.
When I run my build, it shows this output when dealing with the bundler cache:
Restoring Cache:
No cache is found for key: gems-E2sY+gLiTCg0YwHjev8ZrCqAW_U9vUHKPA_FZnRiAPc=-circleci-project-setup
No cache is found for key: gems-E2sY+gLiTCg0YwHjev8ZrCqAW_U9vUHKPA_FZnRiAPc=
Saving Cache
Creating cache archive...
Uploading cache archive...
Stored Cache to gems-wV_lhNyGV+evee5LBWbzjy7uuCEgZRgX8PlgbJreAv4=-circleci-project-setup
* /home/circleci/project/vendor/bundle
but the key it used to save the cache doesn't match the key it's using to fetch, so the next run of the build will have another cache miss.
Circle File:
version: 2.1 # Use 2.1 to enable using orbs and other features.
# Declare the orbs that we'll use in our config.
# read more about orbs: https://circleci.com/docs/2.0/using-orbs/
orbs:
  ruby: circleci/ruby@1.0
  node: circleci/node@2
jobs:
  build:
    docker:
      - image: cimg/ruby:2.7-node # use a tailored CircleCI docker image.
    steps:
      - checkout
      - ruby/install-deps: # use the ruby orb to install dependencies
          key: "gems"
      # use the node orb to install our packages
      # specifying that we use `yarn` and to cache dependencies with `yarn.lock`
      # learn more: https://circleci.com/docs/2.0/caching/
      - node/install-packages:
          pkg-manager: yarn
          cache-key: "yarn.lock"
  test:
    # we run "parallel job containers" to enable speeding up our tests;
    # this splits our tests across multiple containers.
    # !! No we don't, we only have a single node in the free plan
    parallelism: 1
    # here we set TWO docker images.
    docker:
      - image: cimg/ruby:2.7-node # this is our primary docker image, where step commands run.
      - image: circleci/postgres:9.5-alpine
        environment: # add POSTGRES environment variables.
          POSTGRES_USER: xxx
          POSTGRES_DB: xxx_test
          POSTGRES_PASSWORD: xxx
    # environment variables specific to Ruby/Rails, applied to the primary container.
    environment:
      BUNDLE_JOBS: "3"
      BUNDLE_RETRY: "3"
      DB_HOST: 127.0.0.1
      db_password: xxx
      RAILS_ENV: test
    # A series of steps to run, some are similar to those in "build".
    steps:
      - checkout
      - ruby/install-deps:
          key: "gems"
      - node/install-packages:
          pkg-manager: yarn
          cache-key: "yarn.lock"
      # Here we make sure that the secondary container boots
      # up before we run operations on the database.
      - run:
          name: Wait for DB
          command: dockerize -wait tcp://localhost:5432 -timeout 1m
      - run:
          name: Database setup
          command: bundle exec rails db:schema:load --trace
      # Run tests
      - run:
          name: Tests
          command: bundle exec rails test
# We use workflows to orchestrate the jobs that we declared above.
workflows:
  version: 2
  build_and_test: # The name of our workflow is "build_and_test"
    jobs: # The list of jobs we run as part of this workflow.
      - build # Run build first.
      - test: # Then run test,
          requires: # Test requires that build passes for it to run.
            - build # Finally, run the build job.
How would I fix this so that the cache is stored properly?
I think I found the issue.
I never pushed my updated Gemfile.lock while building the project. The Gemfile.lock is hashed to generate the cache key, and since the repository contained an old lockfile, the restore hash didn't match. After running bundler on CI, the lockfile changes, so the cache is saved under a new hash (which was already present). Since I never ran bundler locally, my local Gemfile.lock was never updated, so the new lockfile was never pushed to the repo.
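The fix follows from that: run bundler locally and commit the refreshed lockfile, so CI restores and saves under the same key (a sketch, assuming the orb derives the cache key from the Gemfile.lock checksum):
bundle install
git add Gemfile.lock
git commit -m "Update Gemfile.lock"
git push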