Unable to link gitlab services to own container in .gitlab-ci.yml

I have a simple .gitlab-ci.yml file:
image: docker:latest
services:
  - docker:dind
  - postgres:9.5
stages:
  - build
  - test
variables:
  STAGING_REGISTRY: "dhub.example.com"
  CONTAINER_TEST_IMAGE: ${STAGING_REGISTRY}/${CI_PROJECT_NAME}:latest
before_script:
  - docker login -u gitlab-ci -p $DHUB_PASSWORD $STAGING_REGISTRY
build:
  stage: build
  script:
    - docker build --pull -t $CONTAINER_TEST_IMAGE -f Dockerfile-dev .
    - docker push $CONTAINER_TEST_IMAGE
test:
  stage: test
  script:
    - docker run --env-file=.environment --link=postgres:db $CONTAINER_TEST_IMAGE nosetests
Everything works fine until the actual test stage. In test I'm unable to access my postgres service.
docker: Error response from daemon: Could not get container for postgres.
I tried to write test like this:
test1:
  stage: test
  image: $CONTAINER_TEST_IMAGE
  services:
    - postgres:9.5
  script:
    - python manage.py test
But in this case, I'm unable to pull this image, because of authentication:
ERROR: Preparation failed: unauthorized: authentication required
Am I missing something?
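One thing worth checking for the second variant: when image: points at a private registry, the runner must authenticate before the job starts, so the docker login in before_script runs too late to help with pulling the job image. A hedged sketch, assuming your runner honors the DOCKER_AUTH_CONFIG CI/CD variable (the registry name mirrors the example above):

```yaml
# Set DOCKER_AUTH_CONFIG as a (masked) CI/CD project variable so the runner
# can pull the private image before the job starts. Its value would be JSON
# like the following, where the auth field is the base64 of "user:password":
#
#   {"auths": {"dhub.example.com": {"auth": "<base64-credentials>"}}}
#
# With that in place, the test job from the question should be able to pull:
test1:
  stage: test
  image: $CONTAINER_TEST_IMAGE
  services:
    - postgres:9.5
  script:
    - python manage.py test
```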

Related

Unable to run gradle tests using gitlab and docker-compose

I want to run tests using Gradle after docker-compose up (a Postgres DB plus a Spring Boot app). The whole flow must run inside the GitLab merge request pipeline. The problem appeared when I ran my tests from the script part of the gitlab-ci file. Importantly, in that situation we are in the correct directory, where GitLab checked out my project. Part of the gitlab-ci file:
before_script:
  - ./gradlew clean build
  - cp x.jar /path/x.jar
  - docker-compose -f /path/docker-compose.yaml up -d
script:
  - ./gradlew :functional-tests:clean test -Penv=gitlab --info
But from here I can't call http://localhost:8080 (connection refused). I tried putting 0.0.0.0, 172.17.0.3, docker.host, etc. into the test config, but none of it worked.
So instead I added another container to the docker-compose file and tried to run my tests from its entrypoint command. For that to work, the container needs the current GitLab checkout directory, but I can't mount it.
My current solution:
Gitlab-ci:
run-functional-tests:
  stage: run_functional_tests
  image:
    name: 'xxxx/docker-compose-java-11:0.0.7'
  script:
    - ./gradlew clean build -x test
    - 'export SHARED_PATH="$(dirname ${CI_PROJECT_DIR})"'  # current GitLab workspace dir
    - cp $CI_PROJECT_DIR/x.jar $CI_PROJECT_DIR/docker/gitlab/x.jar
    - docker-compose -f $CI_PROJECT_DIR/docker/gitlab/docker-compose.yaml up -d
    - docker-compose -f $CI_PROJECT_DIR/docker/gitlab/docker-compose.yaml logs -f
  timeout: 30m
docker-compose.yaml
version: '3'
services:
  postgres:
    build:
      context: ../postgres
    container_name: postgres
    restart: always
    networks:
      - app-postgres
    ports:
      - 5432
  app:
    build:
      context: .
      dockerfile: Dockerfile
    restart: always
    container_name: app
    depends_on:
      - postgres
    ports:
      - "8080:8080"
    networks:
      - app-postgres
  functional-tests:
    build:
      context: .
    container_name: app-functional-tests
    working_dir: /app
    volumes:
      - ${SHARED_PATH}:/app
    depends_on:
      - app
    entrypoint: ["bash", "-c", "sleep 20 && ./gradlew :functional-tests:clean test -Penv=gitlab --info"]
    networks:
      - app-postgres
networks:
  app-postgres:
But in this setup my working_dir, /app, is empty. Can someone assist with that?
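The empty /app is consistent with how bind mounts behave under docker-in-docker: the volumes: path is resolved on the Docker daemon's own filesystem, not inside the job container that holds the GitLab checkout, so the mounted directory does not contain the project. One hedged workaround, assuming the functional-tests image can be built from a context that includes the sources, is to COPY the checkout into the image at build time and drop the mount entirely:

```yaml
# Sketch: bake the sources into the test image instead of mounting them.
# The dedicated Dockerfile (hypothetical name Dockerfile.tests) would contain
# something along the lines of:
#   FROM gradle:6-jdk11
#   WORKDIR /app
#   COPY . /app
functional-tests:
  build:
    context: .
    dockerfile: Dockerfile.tests   # hypothetical dedicated Dockerfile
  container_name: app-functional-tests
  working_dir: /app
  depends_on:
    - app
  entrypoint: ["bash", "-c", "sleep 20 && ./gradlew :functional-tests:clean test -Penv=gitlab --info"]
  networks:
    - app-postgres
```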

Execute bash script when running docker-compose in gitlab CI

I've got the following .gitlab-ci.yml file:
image:
  name: docker/compose:1.24.1
  entrypoint: ["/bin/sh", "-l", "-c"]
services:
  - docker:dind
stages:
  - test
  - deploy
pytest:
  stage: test
  script:
    - cd tests/
    - docker-compose -f docker-compose.test.yml up --abort-on-container-exit --exit-code-from pytest
which runs the docker-compose.test.yml file, which looks like this:
version: '3'
services:
  pytest:
    build: ..
    command: bash -c "/wait-for-it.sh dynamodb:8000 -- && cd /tests/ && pytest -s"
  dynamodb:
    image: amazon/dynamodb-local
    ports:
      - '8000:8000'
    command: -jar DynamoDBLocal.jar -inMemory -port 8000
In this case, pytest waits for dynamodb to be up and running and then runs the python tests. This works well on my machine.
However, when Gitlab CI actually runs it, I get the following error:
bash: /wait-for-it.sh: Permission denied
How can I avoid this issue? When I try chmod, the file is not found.
You need to make wait-for-it.sh executable. In your Dockerfile, add:
RUN chmod +x /path/to/wait-for-it.sh
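A minimal local reproduction of this failure mode (using a throwaway script, not the real wait-for-it.sh): a script without the executable bit fails with exactly this "Permission denied" error, and chmod +x fixes it.

```shell
# Create a demo script; a plain file write leaves the executable bit unset.
cat > /tmp/wait-demo.sh <<'EOF'
#!/bin/sh
echo ready
EOF
chmod 644 /tmp/wait-demo.sh        # simulate the bit being absent
/tmp/wait-demo.sh 2>/dev/null || echo "Permission denied"
chmod +x /tmp/wait-demo.sh         # the fix, same idea as the RUN line
/tmp/wait-demo.sh                  # now runs and prints "ready"
```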

Yaml: Formatting error in yaml file. expected '<document start>', but found '<block mapping start>'

  version: 2.1
  executors:
    docker-publisher:
      environment:
        IMAGE_NAME: vinaya.nayak/mocking-service
      docker:
        - image: circleci/buildpack-deps:stretch
  jobs:
    build:
      executor: docker-publisher
      steps:
        - checkout
        - setup_remote_docker
        - run:
            name: Build Docker image
            command: |
              docker build -t $IMAGE_NAME:latest .
        - run:
            name: Archive Docker image
            command: docker save -o mocking.tar $IMAGE_NAME
        - persist_to_workspace:
            root: .
            paths:
              - ./mocking.tar
    publish-latest:
      executor: docker-publisher
      steps:
        - attach_workspace:
            at: /tmp/workspace
        - setup_remote_docker
        - run:
            name: Load archived Docker image
            command: docker load -i /tmp/workspace/mocking.tar
        - run:
            name: Publish Docker Image to Docker Hub
            command: |
              echo "$DOCKER_HUB_PASSWORD" | docker login -u "$DOCKER_HUB_USERNAME" --password-stdin
              docker push docker.kfz42.de/v2/java/mocking-service/$IMAGE_NAME:latest .
workflows:
  version: 2
  build-master:
    jobs:
      - build:
          filters:
            branches:
              only: master
      - publish-latest:
          requires:
            - build
          filters:
            branches:
              only: master
Can someone help me with what's wrong with my YAML file? I get the following error. I even ran it through a YAML formatter, and the formatter says this is valid YAML.
#!/bin/sh -eo pipefail
Unable to parse YAML
expected '<document start>', but found '<block mapping start>'
 in 'string', line 39, column 1:
    workflows:
Warning: This configuration was auto-generated to show you the message above. Don't rerun this job. Rerunning will have no effect.
false
Exited with code 1
Your file starts with a key-value pair indented with two spaces, so your root-level node is a mapping. That is fine as long as all other root-level keys are indented two spaces as well.
workflows is not indented, which is why the parser expected a new document.
  version: 2.1
  executors:
    docker-publisher:
      environment:
        IMAGE_NAME: vinaya.nayak/mocking-service
      docker:
        - image: circleci/buildpack-deps:stretch
  jobs:
    build:
      executor: docker-publisher
      steps:
        - checkout
        - setup_remote_docker
        - run:
            name: Build Docker image
            command: |
              docker build -t $IMAGE_NAME:latest .
        - run:
            name: Archive Docker image
            command: docker save -o mocking.tar $IMAGE_NAME
        - persist_to_workspace:
            root: .
            paths:
              - ./mocking.tar
    publish-latest:
      executor: docker-publisher
      steps:
        - attach_workspace:
            at: /tmp/workspace
        - setup_remote_docker
        - run:
            name: Load archived Docker image
            command: docker load -i /tmp/workspace/mocking.tar
        - run:
            name: Publish Docker Image to Docker Hub
            command: |
              echo "$DOCKER_HUB_PASSWORD" | docker login -u "$DOCKER_HUB_USERNAME" --password-stdin
              docker push docker.kfz42.de/v2/java/mocking-service/$IMAGE_NAME:latest .
  workflows:
    version: 2
    build-master:
      jobs:
        - build:
            filters:
              branches:
                only: master
        - publish-latest:
            requires:
              - build
            filters:
              branches:
                only: master
I fixed the above problem by indenting workflows with 2 spaces
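The behavior is easy to reproduce outside CircleCI. A sketch, assuming PyYAML is installed (CircleCI's error strings match that parser): keys indented two spaces followed by an unindented key trigger the same "expected '<document start>'" error, and indenting everything consistently fixes it.

```python
import yaml  # PyYAML (third-party)

# Root keys indented two spaces, then an unindented key: the indented block
# mapping ends at the dedent, and the stray key looks like a second document.
broken = "  version: 2.1\n  jobs: {}\nworkflows:\n  version: 2\n"
try:
    yaml.safe_load(broken)
    print("parsed")
except yaml.YAMLError as err:
    print(type(err).__name__)  # parser rejects the unindented key

# Indenting workflows like the other keys yields one root mapping.
fixed = "  version: 2.1\n  jobs: {}\n  workflows:\n    version: 2\n"
print(sorted(yaml.safe_load(fixed)))  # all three keys in one mapping
```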

Gitlab Elasticsearch Service not connecting during pipeline run

We have managed to get both Mongo and PostgreSQL working fine as GitLab services; however, we are facing real issues with Elasticsearch.
Whenever we try to run the pipeline the connection to elastic fails.
I have tried the following steps in this thread:
https://gitlab.com/gitlab-org/gitlab-ce/issues/42214
But still no luck.
That is, both:
image: maven:latest
test:
  stage: test
  services:
    - name: docker.elastic.co/elasticsearch/elasticsearch:6.5.4
      alias: elasticsearch
      command: [ "bin/elasticsearch", "-Ediscovery.type=single-node" ]
  script:
    - ps aux
    - ss -plantu
    - curl -v "http://elasticsearch:9200/_settings?pretty"
and:
image: maven:latest
test:
  stage: test
  services:
    - elasticsearch:6.5.4
  script:
    - curl -v "http://127.0.0.1:9200/"
result in connection errors.
Has anyone got this working for elasticsearch:6.5.4?
This was fixed by adding a 15-second sleep line.
The ci file now looks like:
test:
  stage: test
  services:
    - name: docker.elastic.co/elasticsearch/elasticsearch:6.5.4
      command: ["bin/elasticsearch", "-Expack.security.enabled=false", "-Ediscovery.type=single-node"]
  script:
    - echo "Sleeping for 15 seconds.."; sleep 15;
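A fixed sleep is timing-dependent: 15 seconds may not be enough on a slow runner and wastes time on a fast one. A hedged refinement is to poll until Elasticsearch actually answers. Sketch of a helper (the function name is hypothetical) that could replace the plain sleep in the script:

```shell
# wait_for_http URL TIMEOUT_SECONDS: retry the URL every 2 s until it answers
# successfully or the timeout expires (non-zero return means it never came up).
wait_for_http() {
  url=$1
  deadline=$(( $(date +%s) + $2 ))
  until curl -sf "$url" >/dev/null 2>&1; do
    if [ "$(date +%s)" -ge "$deadline" ]; then
      return 1                      # gave up waiting
    fi
    sleep 2
  done
}

# In the job script, in place of the fixed sleep:
# wait_for_http "http://elasticsearch:9200/_cluster/health" 60
```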

Using dind on drone.io

I'm trying to move from GitLab CI to drone.io, but I can't get dind to work as well as it does on GitLab. Below is what I did on GitLab.
variables:
  NODE_ENV: 'test'
  DOCKER_DRIVER: overlay
image: gitlab/dind
services:
  - docker:dind
cache:
  untracked: true
stages:
  - test
test:
  stage: test
  before_script:
    - docker info
    - docker-compose --version
    - docker-compose pull
    - docker-compose build
  after_script:
    - docker-compose down
  script:
    - docker-compose run --rm api yarn install
How can I create an equivalent drone file ?
You can use the services section to start the docker daemon.
pipeline:
  ping:
    image: docker
    environment:
      - DOCKER_HOST=unix:///drone/docker.sock
    commands:
      - sleep 10 # give docker enough time to initialize
      - docker ps -a
services:
  docker:
    image: docker:dind
    privileged: true
    command: [ '-H', 'unix:///drone/docker.sock' ]
Note that we change the default location of the Docker socket, writing it to the drone volume, which is shared among all containers in the pipeline:
command: [ '-H', 'unix:///drone/docker.sock' ]
Also note that we need to run the dind container in privileged mode. The privileged flag can only be used by trusted repositories. You will therefore need a user administrator to set the trusted flag to true for your repository in the drone user interface.
privileged: true

Resources