GitLab equivalent to Jenkins Pipeline's agent dockerfile

Is there an equivalent to the dockerfile agent in GitLab?
pipeline {
    agent { dockerfile true }
    stages {
        stage('Test') {
            steps {
                sh 'node --version'
                sh 'svn --version'
            }
        }
    }
}
Edit:
Currently I have a repo that contains a Python script for CI; the script requires some dependencies, and I would like to use an image with those dependencies in the Docker runner.
For now I have hacked it like this, but that fails as soon as more than one runner serves that tag (because the built image is local to the runner that built it):
FROM python:3.6
COPY requirements.txt .
RUN pip install -r requirements.txt
stages:
  - image
  - ci

prepare:
  stage: image
  script:
    - docker build -f Containerfile --iidfile=iid .
    - echo "IID=$(cat iid)" >> iid.env
  tags:
    - tag
  artifacts:
    reports:
      dotenv: iid.env

gen:
  stage: ci
  image: $IID
  script:
    - step
    - step
    - step
  artifacts:
    name: generated
    paths:
      - "./*.zip"

Related

GitLab Cypress: generate Allure report

I have my .gitlab-ci.yml file as follows:
image: cypress/base:14.16.0

stages:
  - test

test:
  stage: test
  script:
    - npm install
    - npm run scripts
where scripts is --> cypress run --spec cypress/integration/UI/myScript.feature
When I add another command after scripts to generate the Allure report, the GitLab pipeline throws an error saying the Java home path is not set:
ERROR: JAVA_HOME is not set and no 'java' command could be found in your PATH
So I updated my script to something like this:
image: cypress/base:14.16.0

stages:
  - test
  - allure

test:
  stage: test
  script:
    - npm install
    - npm run clean:allure
    - npm run scripts

allure_report:
  stage: allure
  when: always
  image: timbru31/java-node
  dependencies:
    - test
  script:
    - npm install
    - npm run generate-allure-report
  artifacts:
    when: always
    paths:
      - cypress/reportsAllure/allure-report/
      - cypress/reportsAllure/allure-results/
where generate-allure-report is --> allure generate cypress/reportsAllure/allure-results --clean -o cypress/reportsAllure/allure-report
But here empty reports are generated. Does anyone know which artifacts I need to pass from the first stage to the next in order to generate the Allure report?
This works for me, but I'm using the default folder locations, so you'd need to change those to match your artifact paths and append -o folder/allure-report where appropriate.
stages:
  - test
  - allure
  - deploy

cache:
  key: ${CI_COMMIT_REF_SLUG}
  paths:
    - node_modules

.download_history: &download_history
  after_script:
    - apt-get install -y unzip
    - mkdir backup && cd backup || true
    - "curl --location --output report.zip --request GET \"https://gitlab.com/api/v4/projects/${CI_PROJECT_ID}/jobs/artifacts/master/download?job=pages\" --header \"Authorization: Bearer ${CI_DEPLOY_TOKEN}\" || true"
    - (unzip report.zip) || true
    - cd ../
    - (cp -r backup/public/history/ allure-results/history/) || true

.test_template: &test_template
  image:
    name: cypress/included:7.5.0
    entrypoint: [""]
  stage: test
  variables:
    CY_RUN_ID: ${CI_JOB_ID}
  script:
    - export CYPRESS_VIDEO=false
    - npm install
    - ./node_modules/.bin/cypress run --headless --env allure=true
  artifacts:
    when: always
    paths:
      - allure-results/

smoke:
  <<: [*test_template, *download_history]

allure_report:
  stage: allure
  when: always
  image:
    name: ubuntu:latest
    entrypoint: [""]
  dependencies:
    - smoke
  variables:
    DEBIAN_FRONTEND: noninteractive
    TZ: Europe/London
  before_script:
    - apt-get update
    - apt-get install -y default-jdk wget unzip
    - mkdir /work/
    - wget https://github.com/allure-framework/allure2/releases/download/2.13.8/allure-2.13.8.zip -P /work/
    - unzip /work/allure-2.13.8.zip -d /work/
  script:
    - /work/allure-2.13.8/bin/allure generate allure-results --clean -o allure-report
  artifacts:
    when: always
    paths:
      - allure-report/
      - allure-results/
  only:
    - master

pages:
  stage: deploy
  when: always
  dependencies:
    - allure_report
  script:
    - mv allure-report/ public/
  artifacts:
    paths:
      - public
    expire_in: 30 days
  only:
    - master
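
Mapped back to the question's folder layout, the missing piece is that the first job never exports allure-results as an artifact, so the allure stage starts from an empty directory. A minimal sketch, assuming the cypress/reportsAllure paths from the question:

test:
  stage: test
  script:
    - npm install
    - npm run clean:allure
    - npm run scripts
  artifacts:
    when: always
    paths:
      - cypress/reportsAllure/allure-results/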

How to start Selenoid on gitlab-ci?

I'm trying to run tests on gitlab-ci, but I don't know which command starts Selenoid.
Locally the command is ./cm selenoid start, but I don't know how to specify it when Selenoid is started as a service.
This is my .gitlab-ci.yml:
stages:
  - testing

ya_test:
  stage: testing
  tags:
    - docker
  services:
    - selenoid/chrome
  image: python:3.9-alpine
  before_script:
    - pip3 install -r requirements.txt
    - selenoid/chrome start # ???????
  script:
    - pytest -s
  allow_failure: true
And what address to specify in the test fixture? localhost:4444?
Thanks for the help!
For launching Selenoid with Chrome, try this YAML. Connect to Chrome with webdriver.Remote(command_executor="http://selenoid__chrome:4444", options=chrome_options, desired_capabilities=DesiredCapabilities.CHROME) and add no-sandbox to the Chrome options.
image: python:3.8

stages:
  - test

test:
  stage: test
  services:
    - name: aerokube/selenoid
    - name: selenoid/chrome:89.0
  before_script:
    - echo "Install environment"
    - apt-get update -q -y
    - pip3 install -r requirements.txt
  script:
    - echo "Run all tests"
    - py.test -s -v --junitxml=report.xml test.py
  # if you want a detailed report in GitLab, add artifacts
  artifacts:
    when: always
    reports:
      junit: report.xml
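
If you'd rather not depend on the auto-generated selenoid__chrome hostname (GitLab derives it from the image name by replacing / with __), you can give the service an explicit alias; a sketch, where the alias name chrome is an arbitrary choice:

test:
  stage: test
  services:
    - name: aerokube/selenoid
    - name: selenoid/chrome:89.0
      alias: chrome
  script:
    - py.test -s -v test.py

The fixture would then point at http://chrome:4444.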

Gitlab CD/CI: The user-provided path build/ does not exist

I have created a simple React app for practicing GitLab's CI/CD pipeline. The pipeline has three jobs: first test the app, then build it, then deploy it to an AWS S3 bucket. The test passes and the production build runs, but when the pipeline reaches the deploy stage I get this error: The user-provided path build does not exist. I don't know how to make the path available in GitLab's CI/CD pipeline.
This is my .gitlab-ci.yml setup:
image: 'node:12'

stages:
  - test
  - build
  - deploy

test:
  stage: test
  script:
    - yarn install
    - yarn run test

variables:
  AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
  AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
  AWS_REGION: $AWS_REGION
  S3_BUCKET_NAME: $S3_BUCKET_NAME

build:
  stage: build
  only:
    - master
  script:
    - npm install
    - npm run build

deploy:
  stage: deploy
  only:
    - master
  image: python:latest
  script:
    - pip install awscli
    - aws s3 cp build/ s3://$S3_BUCKET_NAME/ --recursive --include "*"
If the build/ folder is created as part of the build stage, then it should be passed as an artifact to the deploy stage, and deploy should reference the build stage using dependencies:
build:
  stage: build
  only:
    - master
  script:
    - npm install
    - npm run build
  artifacts:
    paths:
      - build/

deploy:
  stage: deploy
  only:
    - master
  image: python:latest
  dependencies:
    - build
  script:
    - pip install awscli
    - aws s3 cp build/ s3://$S3_BUCKET_NAME/ --recursive --include "*"
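
As a side note, aws s3 sync is a drop-in alternative to the recursive copy that only uploads files that changed; a sketch of the same deploy step under that substitution:

deploy:
  stage: deploy
  only:
    - master
  image: python:latest
  dependencies:
    - build
  script:
    - pip install awscli
    # sync uploads only changed files and, with --delete, prunes stale ones
    - aws s3 sync build/ s3://$S3_BUCKET_NAME/ --delete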

COPY failed: no source files were specified - how am I supposed to use the artifacts?

With mvn package in the maven-build job I create a folder (named "target") with the correct subfiles and folders. When I execute it in my development environment, I can go on with the docker-build stage. In GitLab I get the error: COPY failed: no source files were specified. This happens at step 3/7 in my Dockerfile.
Why is the file unknown in the docker-build stage even though I create an artifact?
My .gitlab-ci.yml:
image: maven:latest

stages:
  - build
  - run

cache:
  paths:
    - .m2/repository

maven-build:
  stage: build
  script: mvn package -s .m2/settings.xml
  artifacts:
    paths:
      - target/

docker-build:
  image: docker:latest
  stage: build
  services:
    - docker:dind
  script:
    - docker build . -t generictest

run:
  stage: run
  script:
    - docker run generictest
My Dockerfile:
FROM selenium/standalone-firefox
WORKDIR /app
COPY target/*.jar app.jar
COPY *.json .
ENV http_proxy=http://10.127.255.25:8080
ENV https_proxy=http://10.127.255.25:8080
ENTRYPOINT java -jar app.jar /usr/bin/geckodriver
When the target folder was already in the GitLab repository and I didn't have to create it first with mvn package, it worked. Here is the code that worked before (and yes, I have to create the folder; I can't leave it in the repository):
stages:
  - build

docker-build:
  image: docker:latest
  stage: build
  services:
    - docker:dind
  script:
    - echo docker build . -t dockertest
    - echo docker run dockertest
I got it. By default, all artifacts from all previous stages are passed (see the documentation), but a job in the same stage doesn't know about the artifact. I had to create two different stages: instead of using stage: build twice, I created a third one.
image: maven:latest

stages:
  - docker-build
  - maven-build
  - run

cache:
  paths:
    - .m2/repository

maven-build:
  stage: docker-build
  script:
    - mvn package -s .m2/settings.xml
    - dir
    - cd target
    - dir
  artifacts:
    paths:
      - target/

docker-build:
  image: docker:latest
  stage: maven-build
  services:
    - docker:dind
  script:
    - ls
    - docker build . -t generictest

run:
  image: docker:latest
  stage: run
  services:
    - docker:dind
  script:
    - docker run generictest
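
An alternative to reshuffling the stages is the needs keyword, which lets docker-build wait for maven-build and download its artifacts explicitly; note that needing a job in the same stage requires GitLab 14.2 or later. A sketch under that assumption:

maven-build:
  stage: build
  script: mvn package -s .m2/settings.xml
  artifacts:
    paths:
      - target/

docker-build:
  image: docker:latest
  stage: build
  services:
    - docker:dind
  # fetches maven-build's artifacts even though both jobs share a stage
  needs: ["maven-build"]
  script:
    - docker build . -t generictest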

How to fix dynamic environments in GitLab?

I have built a "deploy review" stage and deploy job in my yaml file with the following code:
deploy review:
  stage: deploy review
  only:
    - merge_requests
  environment:
    name: review/$CI_COMMIT_REF_NAME
    url: https://crazymonk84-$CI_ENVIRONMENT_SLUG.surge.sh
  script:
    - npm install -g surge
    - surge --project ./public --domain https://crazymonk84-$CI_ENVIRONMENT_SLUG.surge.sh
When I check the pipeline on my GitLab account, I see the commit on my review branch, but I do not see the "deploy review" job running. I see the "test artifact" and "test website" jobs running.
The link to the gitlab project is https://gitlab.com/syed.r.abdullah/my-static-website/tree/review
I took the following steps:
Added "deploy review" to the yaml file
Created a new branch "review" locally
Added the change to the yaml file in review branch
Committed the change
Pushed the change to gitlab using git push -u origin review
Visited my pipelines and saw the review pipeline in a failed state
The jobs inside the review pipeline are "test artifact" and "test website", not "deploy review"
image: node

variables:
  STAGING_DOMAIN: crazymonk84-staging.surge.sh
  PRODUCTION_DOMAIN: crazymonk84.surge.sh

stages:
  - build
  - test
  - deploy review
  - deploy staging
  - deploy production
  - production tests
  - cache

cache:
  key: ${CI_COMMIT_REF_SLUG}
  policy: pull
  paths:
    - node_modules/

update cache:
  stage: cache
  script:
    - npm install
  only:
    - schedules
  cache:
    key: ${CI_COMMIT_REF_SLUG}
    policy: push
    paths:
      - node_modules/

build website:
  stage: build
  only:
    - master
    - merge_requests
  except:
    - schedules
  script:
    - echo $CI_COMMIT_SHORT_SHA
    - npm install -g gatsby-cli
    - npm i xstate@4.6.4
    - gatsby build
    - sed -i "s/%%VERSION%%/$CI_COMMIT_SHORT_SHA/" ./public/index.html
  artifacts:
    paths:
      - ./public

test website:
  stage: test
  except:
    - schedules
  script:
    - npm install -g gatsby-cli
    - npm i xstate@4.6.4
    - gatsby serve &
    - sleep 3
    - curl "http://localhost:9000" | tac | tac | grep -q "Gatsby"

test artifact:
  image: alpine
  stage: test
  except:
    - schedules
  script:
    - grep -q "Gatsby" ./public/index.html
  cache: {}

deploy review:
  stage: deploy review
  only:
    - merge_requests
  environment:
    name: review/$CI_COMMIT_REF_NAME
    url: https://crazymonk84-$CI_ENVIRONMENT_SLUG.surge.sh
  script:
    - npm install -g surge
    - surge --project ./public --domain https://crazymonk84-$CI_ENVIRONMENT_SLUG.surge.sh

deploy staging:
  stage: deploy staging
  environment:
    name: staging
    url: http://$STAGING_DOMAIN
  only:
    - master
  except:
    - schedules
  script:
    - npm install --global surge
    - surge --project ./public --domain $STAGING_DOMAIN
  cache: {}

deploy production:
  stage: deploy production
  environment:
    name: production
    url: http://$PRODUCTION_DOMAIN
  only:
    - master
  when: manual
  allow_failure: false
  except:
    - schedules
  script:
    - npm install --global surge
    - surge --project ./public --domain $PRODUCTION_DOMAIN
  cache: {}

production tests:
  image: alpine
  stage: production tests
  only:
    - master
  except:
    - schedules
  script:
    - apk add --no-cache curl
    - curl -s "https://$PRODUCTION_DOMAIN" | grep -q "Hi people"
    - curl -s "https://$PRODUCTION_DOMAIN" | grep -q "$CI_COMMIT_SHORT_SHA"
  cache: {}
I am expecting to see "deploy review" as the only job in the pipeline. However, I see "test artifact" and "test website." What can I do to fix the issue? Thanks.
I found one solution: add the following to the build website and deploy review jobs:
only:
  - master
  - merge_requests
I expect you watched a course by Valentin Despa. I've just come across the same problem, and I wonder if he published any solution to the issue. I'm not positive that I'm correct, but an explanation can be found at this link:
https://docs.gitlab.com/ee/ci/pipelines/merge_request_pipelines.html
only:
  - merge_requests
runs the deploy review job in a detached pipeline, which doesn't have access to the environment variables, since they are protected. What I did was go to Settings -> CI/CD -> Variables and untick the "Protected" option for both variables.
Then, if you run the pipeline again, you'll notice it throws an error on ./public.
Play his "Dynamic environments" video and notice at 4:49 that there are three green pipelines. You have only one (your pipeline is detached), meaning the build website job hasn't run. That's why you see the error relating to ./public: your detached pipeline knows nothing about Gatsby. We need to install Gatsby first and then build the site.
deploy review:
  stage: deploy review
  environment:
    name: review/$CI_COMMIT_REF_NAME
    url: https://yourdomain-$CI_ENVIRONMENT_SLUG.surge.sh
  only:
    - merge_requests
  script:
    - npm install --silent
    - npm install -g gatsby-cli
    - gatsby build
    - npm install --global surge
    - surge --project ./public --domain yourdomain-$CI_ENVIRONMENT_SLUG.surge.sh
  artifacts:
    paths:
      - ./public
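
On current GitLab versions, rules: is the recommended replacement for only:/except: and makes the merge-request trigger explicit; a sketch of the same job under that assumption:

deploy review:
  stage: deploy review
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
  environment:
    name: review/$CI_COMMIT_REF_NAME
    url: https://yourdomain-$CI_ENVIRONMENT_SLUG.surge.sh
  script:
    - npm install --silent
    - npm install -g gatsby-cli
    - gatsby build
    - npm install --global surge
    - surge --project ./public --domain yourdomain-$CI_ENVIRONMENT_SLUG.surge.sh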
