GitLab Elasticsearch service not connecting during pipeline run

We have managed to get both Mongo and PostgreSQL working fine as GitLab services, but we are facing real issues with Elasticsearch.
Whenever we run the pipeline, the connection to Elasticsearch fails.
I have tried the steps in this thread:
https://gitlab.com/gitlab-org/gitlab-ce/issues/42214
But still no luck.
That is, both
image: maven:latest

test:
  stage: test
  services:
    - name: docker.elastic.co/elasticsearch/elasticsearch:6.5.4
      alias: elasticsearch
      command: ["bin/elasticsearch", "-Ediscovery.type=single-node"]
  script:
    - ps aux
    - ss -plantu
    - curl -v "http://elasticsearch:9200/_settings?pretty"
and:
image: maven:latest

test:
  stage: test
  services:
    - elasticsearch:6.5.4
  script:
    - curl -v "http://127.0.0.1:9200/"
Both result in connection errors.
Has anyone got this working for elasticsearch:6.5.4?

This was fixed by adding a 15-second sleep.
The CI file now looks like:
test:
  stage: test
  services:
    - name: docker.elastic.co/elasticsearch/elasticsearch:6.5.4
      command: ["bin/elasticsearch", "-Expack.security.enabled=false", "-Ediscovery.type=single-node"]
  script:
    - echo "Sleeping for 15 seconds.."; sleep 15;
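A fixed sleep works, but it is fragile: Elasticsearch can take more or less than 15 seconds to boot. A more robust alternative (not from the original thread, just a common pattern) is to poll the HTTP endpoint until it answers. The URL below assumes the service alias `elasticsearch` from the config above:

```shell
# wait_for_url URL MAX_ATTEMPTS SLEEP_SECONDS
# Polls URL with curl until it responds with a success status,
# or gives up after MAX_ATTEMPTS tries.
wait_for_url() {
  url=$1; attempts=$2; pause=$3
  i=0
  while [ "$i" -lt "$attempts" ]; do
    # -s silent, -f fail on HTTP errors; discard the body
    if curl -sf "$url" >/dev/null 2>&1; then
      return 0
    fi
    i=$((i + 1))
    sleep "$pause"
  done
  return 1
}

# In the job's script section, before the tests:
# wait_for_url "http://elasticsearch:9200/_cluster/health" 30 2 || exit 1
```

With 30 attempts two seconds apart this waits up to a minute, but continues as soon as the cluster answers.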

Related

Unable to run Gradle tests using GitLab and docker-compose

I want to run tests using Gradle after docker-compose up (Postgres DB + Spring Boot app). The whole flow must run inside the GitLab merge request pipeline. The problem started when I ran my tests from the script section of the gitlab-ci file. Importantly, in that situation we are in the correct directory, where GitLab checked out my project. Part of the gitlab-ci file:
before_script:
  - ./gradlew clean build
  - cp x.jar /path/x.jar
  - docker-compose -f /path/docker-compose.yaml up -d
script:
  - ./gradlew :functional-tests:clean test -Penv=gitlab --info
But here I can't call http://localhost:8080 -> connection refused. I tried putting 0.0.0.0, 172.17.0.3, docker.host... etc. into the test config, but it didn't work.
So I added another container inside docker-compose where I try to run my tests via an entrypoint command. To do that, I need the current GitLab directory, but I can't mount it.
My current solution:
Gitlab-ci:
run-functional-tests:
  stage: run_functional_tests
  image:
    name: 'xxxx/docker-compose-java-11:0.0.7'
  script:
    - ./gradlew clean build -x test
    - 'export SHARED_PATH="$(dirname ${CI_PROJECT_DIR})"'  # current GitLab workspace dir
    - cp $CI_PROJECT_DIR/x.jar $CI_PROJECT_DIR/docker/gitlab/x.jar
    - docker-compose -f $CI_PROJECT_DIR/docker/gitlab/docker-compose.yaml up -d
    - docker-compose -f $CI_PROJECT_DIR/docker/gitlab/docker-compose.yaml logs -f
  timeout: 30m
docker-compose.yaml
version: '3'
services:
  postgres:
    build:
      context: ../postgres
    container_name: postgres
    restart: always
    networks:
      - app-postgres
    ports:
      - 5432
  app:
    build:
      context: .
      dockerfile: Dockerfile
    restart: always
    container_name: app
    depends_on:
      - postgres
    ports:
      - "8080:8080"
    networks:
      - app-postgres
  functional-tests:
    build:
      context: .
    container_name: app-functional-tests
    working_dir: /app
    volumes:
      - ${SHARED_PATH}:/app
    depends_on:
      - app
    entrypoint: ["bash", "-c", "sleep 20 && ./gradlew :functional-tests:clean test -Penv=gitlab --info"]
    networks:
      - app-postgres
networks:
  app-postgres:
But in this setup my working_dir (/app) is empty. Can someone assist with that?

Service elasticsearch is not visible when running tests

name: Rspec
on: [push]
jobs:
  build:
    runs-on: [self-hosted, linux]
    services:
      elasticsearch:
        image: docker.elastic.co/elasticsearch/elasticsearch:7.9.2
        env:
          discovery.type: single-node
        options: >-
          --health-cmd "curl http://localhost:9200/_cluster/health"
          --health-interval 10s
          --health-timeout 5s
          --health-retries 10
      redis:
        image: redis
        options: --entrypoint redis-server
    steps:
      - uses: actions/checkout@v2
      - name: running tests
        run: |
          sleep 60
          curl -X GET http://elasticsearch:9200/
I am running the tests on a self-hosted runner. With docker ps on the host I can see the containers (redis and elasticsearch) when they come up for the test.
If I enter the redis container, install curl, and run curl -X GET http://elasticsearch:9200/, I get an OK response within the 60-second wait for the service to come up.
But in the "running tests" step I get the error message "Could not resolve host: elasticsearch".
So the elasticsearch host is visible from inside the redis service container, but not from the "running tests" step. What can I do?
You have to map the ports of your service containers and use localhost:host-port as the address in steps running directly on the GitHub Actions runner.
If you configure the job to run directly on the runner machine and your step doesn't use a container action, you must map any required Docker service container ports to the Docker host (the runner machine). You can access the service container using localhost and the mapped port.
https://docs.github.com/en/free-pro-team@latest/actions/reference/workflow-syntax-for-github-actions#jobsjob_idservices
name: Rspec
on: [push]
jobs:
  build:
    runs-on: [self-hosted, linux]
    services:
      elasticsearch:
        image: docker.elastic.co/elasticsearch/elasticsearch:7.9.2
        env:
          discovery.type: single-node
        options: >-
          --health-cmd "curl http://localhost:9200/_cluster/health"
          --health-interval 10s
          --health-timeout 5s
          --health-retries 10
        ports:
          # <port on host>:<port on container>
          - 9200:9200
      redis:
        image: redis
        options: --entrypoint redis-server
    steps:
      - uses: actions/checkout@v2
      - name: running tests
        run: |
          sleep 60
          curl -X GET http://localhost:9200/
Alternative:
Run your job in a container as well. Then the job can access the service containers by hostname.
name: Rspec
on: [push]
jobs:
  build:
    services:
      elasticsearch:
        image: docker.elastic.co/elasticsearch/elasticsearch:7.9.2
        env:
          discovery.type: single-node
        options: >-
          --health-cmd "curl http://localhost:9200/_cluster/health"
          --health-interval 10s
          --health-timeout 5s
          --health-retries 10
      redis:
        image: redis
        options: --entrypoint redis-server
    # Containers must run in Linux based operating systems
    runs-on: [self-hosted, linux]
    # Docker Hub image that this job executes in, pick any image that works for you
    container: node:10.18-jessie
    steps:
      - uses: actions/checkout@v2
      - name: running tests
        run: |
          sleep 60
          curl -X GET http://elasticsearch:9200/

How do I check if Oracle is up in Docker?

As the title says: how do I check if Oracle is up in Docker? Right now my app tries to create a Hibernate session and I'm getting:
ERROR:
ORA-01033: ORACLE initialization or shutdown in progress
So I would like some kind of health check, given only a URL. Is that possible? Thank you!
I'm using wnameless/oracle-xe-11g-r2 and this works for me:
version: '3'
services:
  db:
    image: wnameless/oracle-xe-11g-r2
    environment:
      - ORACLE_ALLOW_REMOTE=true
    ports:
      - 49261:1521
    volumes:
      - ./0_init.sql:/docker-entrypoint-initdb.d/0_init.sql
    healthcheck:
      test: ["CMD", "bash", "-c", "echo 'select 1 from dual;' | ORACLE_HOME=/u01/app/oracle/product/11.2.0/xe /u01/app/oracle/product/11.2.0/xe/bin/sqlplus -s USERNAME/PASSWORD@localhost"]
      # docker inspect --format "{{json .State.Health }}" myproject_db_1
      interval: 10s
      timeout: 10s
      retries: 60
  myservice:
    image: xxx
    depends_on:
      db:
        condition: service_healthy
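When the caller is a host-side script rather than another compose service, the same healthcheck can be consumed by polling docker inspect, as the comment in the compose file above hints. This is a sketch, not from the original answer; the container name myproject_db_1 is a placeholder for whatever docker ps shows for your db service:

```shell
# wait_healthy CONTAINER MAX_ATTEMPTS SLEEP_SECONDS
# Polls the container's Docker health status until it becomes "healthy",
# returning non-zero if it never does.
wait_healthy() {
  name=$1; attempts=$2; pause=$3
  i=0
  while [ "$i" -lt "$attempts" ]; do
    status=$(docker inspect --format '{{.State.Health.Status}}' "$name" 2>/dev/null)
    if [ "$status" = "healthy" ]; then
      return 0
    fi
    i=$((i + 1))
    sleep "$pause"
  done
  return 1
}

# Example (container name is hypothetical):
# wait_healthy myproject_db_1 60 10 || exit 1
```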
Using docker-compose.yml and the official Oracle Docker images, you can use the checkDBStatus.sh script as a healthcheck. The script returns non-zero while the DB is in the ORA-01033 state. Below is an example. Notice the combination of the db service's healthcheck and tomcat's depends_on with the service_healthy condition:
services:
  tomcat:
    image: "tomcat:9.0"
    depends_on:
      oracle-db:
        condition: service_healthy
    links:
      - oracle-db
  oracle-db:
    build:
      context: src/main/docker/oracle_db
      dockerfile: Dockerfile.xe
    mem_reservation: 2g
    environment:
      - ORACLE_PWD=oracle
    volumes:
      - oracle-data:/opt/oracle/oradata
    healthcheck:
      test: ["CMD", "/opt/oracle/checkDBStatus.sh"]
      interval: 2s
volumes:
  oracle-data:
You can mimic tnsping in your Java app: How to do oracle TNSPING with java?
If you can't modify the app, tnsping can be called from a bash script, provided you have an Oracle client installed. If you don't, simply create a small application from the link above and execute it in a script.
I've finished with a simple check for APEX:
while [[ "$(curl -s -o /dev/null -w ''%{http_code}'' db:8080/apex)" != "302" ]]; do sleep 5; done
302 is used because it redirects /apex to /apex/some_stuff. In my case db is the name of the container with Oracle:
version: '3'
services:
  ...
  # other containers
  ...
  db:
    image: some/image
    ports:
      - "8383:8080"
      - "1521:1521"
Hope it helps someone!
If you are starting an oracle DB docker container within a jenkinsfile you may find this useful:
def waitForDbHealthy(containerName)
{
    timeout(time: 4, unit: 'MINUTES')
    {
        def HEALTH_RESULT = ""
        while (!HEALTH_RESULT.toString().contains("healthy"))
        {
            echo "DB not yet healthy. Going to sleep 10 sec."
            sleep 10
            HEALTH_RESULT = sh(returnStdout: true, script: "docker inspect --format='{{json .State.Health.Status}}' $containerName").trim()
            echo "HEALTH_RESULT: $HEALTH_RESULT"
            if (HEALTH_RESULT.toString().contains("unhealthy"))
            {
                sh("docker logs $containerName")
                echo "Going to throw IllegalStateException"
                throw new IllegalStateException("Oracle DB switched to state unhealthy")
            }
        }
    }
}
On my build server it took about 1 minute until the container was "healthy".
Be aware that Oracle's TNS listener might not be ready yet. I found that an additional "sleep 60" (seconds) does the trick. Alternatively, you can implement the Java TNSPING as Krzysztof Kaszkowiak pointed out in his answer.
Another note: throwing an IllegalStateException is not allowed by default in a Jenkinsfile's Groovy. Your Jenkins administrator must explicitly approve it (Jenkins / Manage Jenkins / In-process Script Approval).
Jenkins 2.249.2
Docker Version: 19.03.8
Oracle docker image: based on store/oracle/database-enterprise:12.2.0.1-slim

Using dind on drone.io

I'm trying to move from GitLab CI to drone.io, but I can't make DinD work as well as it does on GitLab. Below is how I did it on GitLab.
variables:
  NODE_ENV: 'test'
  DOCKER_DRIVER: overlay
image: gitlab/dind
services:
  - docker:dind
cache:
  untracked: true
stages:
  - test
test:
  stage: test
  before_script:
    - docker info
    - docker-compose --version
    - docker-compose pull
    - docker-compose build
  after_script:
    - docker-compose down
  script:
    - docker-compose run --rm api yarn install
How can I create an equivalent drone file?
You can use the services section to start the docker daemon.
pipeline:
  ping:
    image: docker
    environment:
      - DOCKER_HOST=unix:///drone/docker.sock
    commands:
      - sleep 10 # give docker enough time to initialize
      - docker ps -a

services:
  docker:
    image: docker:dind
    privileged: true
    command: [ '-H', 'unix:///drone/docker.sock' ]
Note that we change the default location of the docker socket and write to the drone volume, which is shared among all containers in the pipeline:
command: [ '-H', 'unix:///drone/docker.sock' ]
Also note that we need to run the dind container in privileged mode. The privileged flag can only be used by trusted repositories, so you will need an administrator to set the trusted flag to true for your repository in the drone user interface.
privileged: true

Unable to link GitLab services to own container in .gitlab-ci.yml

I have a simple .gitlab-ci.yml file:
image: docker:latest
services:
  - docker:dind
  - postgres:9.5
stages:
  - build
  - test
variables:
  STAGING_REGISTRY: "dhub.example.com"
  CONTAINER_TEST_IMAGE: ${STAGING_REGISTRY}/${CI_PROJECT_NAME}:latest
before_script:
  - docker login -u gitlab-ci -p $DHUB_PASSWORD $STAGING_REGISTRY
build:
  stage: build
  script:
    - docker build --pull -t $CONTAINER_TEST_IMAGE -f Dockerfile-dev .
    - docker push $CONTAINER_TEST_IMAGE
test:
  stage: test
  script:
    - docker run --env-file=.environment --link=postgres:db $CONTAINER_TEST_IMAGE nosetests
Everything works fine until the actual test stage, where I'm unable to access my postgres service:
docker: Error response from daemon: Could not get container for postgres.
I tried to write test like this:
test1:
  stage: test
  image: $CONTAINER_TEST_IMAGE
  services:
    - postgres:9.5
  script:
    - python manage.py test
But in this case I'm unable to pull this image because of authentication:
ERROR: Preparation failed: unauthorized: authentication required
Am I missing something?
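One thing worth checking, though it is not covered in this thread: when a job's image: comes from a private registry, the runner pulls it before before_script runs, so the docker login happens too late. GitLab's documented mechanism for this is a DOCKER_AUTH_CONFIG CI/CD variable holding a Docker config.json with the registry credentials. A hedged sketch, with the auth value as a placeholder:

```json
{
  "auths": {
    "dhub.example.com": {
      "auth": "BASE64_OF_USER_COLON_PASSWORD"
    }
  }
}
```

Set this as the value of a DOCKER_AUTH_CONFIG variable under Settings > CI/CD > Variables (not in .gitlab-ci.yml), so the credentials stay out of the repository and the runner can authenticate when pulling the job image.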