I am using a CI tool called Drone (drone.io) and I really want to do some integration tests with it. What I want is for Drone to start my application container on some port on the Drone host so that I can run integration tests against it. For example, in the .drone.yml file:
build:
  image: python3.5-uwsgi
  pull: true
  auth_config:
    username: some_user
    password: some_password
    email: email
  commands:
    - pip install --user --no-cache-dir -r requirements.txt
    - python manage.py integration_test -h 127.0.0.1:5000
    # this should send various requests to 127.0.0.1:5000
    # to test my application's behaviour
compose:
  my_application:
    # build and run a container based on the Dockerfile in the local repo on port 5000
publish:
deploy:
Drone 0.4 can't start a service from your Dockerfile. If you want to start a Docker container, you have to build it beforehand, outside of this build, push it to Docker Hub or your own registry, and reference it in the compose section; see http://readme.drone.io/usage/services/#images:bfc9941b6b6fd7b4ef09dd0ccd08af0c
You can also start your application in the build section with nohup python manage.py server -h 127.0.0.1:5000 & before you run your integration tests. Make sure that your application has started and is listening on port 5000 before you run integration_test.
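A minimal sketch of that approach, assuming curl is available in the image and that a short retry loop is enough to wait for the server (adjust to your app's startup time):

commands:
  - pip install --user --no-cache-dir -r requirements.txt
  # start the app in the background so the build step can continue
  - nohup python manage.py server -h 127.0.0.1:5000 &
  # wait up to ~30 seconds until the port answers before testing
  - for i in $(seq 1 30); do curl -sf http://127.0.0.1:5000 && break; sleep 1; done
  - python manage.py integration_test -h 127.0.0.1:5000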
I recommend you use Drone 0.5 with pipelines: you can build the Docker image and push it to a registry before the build, then use it as a service inside your build.
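A rough sketch of that 0.5-style setup, assuming the application image was already pushed as your-registry/my_application:latest; the service hostname (my_application) is an assumption and may differ depending on your Drone version and networking:

pipeline:
  integration:
    image: python3.5-uwsgi
    commands:
      - pip install --user --no-cache-dir -r requirements.txt
      # point the tests at the service container instead of 127.0.0.1
      - python manage.py integration_test -h my_application:5000

services:
  my_application:
    image: your-registry/my_application:latest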
I use Laravel Vapor for deploying our microservices based on Laravel. This works very well so far, as long as the app with its dependencies is not too large. But if it is, things get a little tricky.
Vapor provides a Docker runtime for this case, which lets you deploy apps of up to 10 GB in size.
For local development we usually use Laradock.io because it's easy and flexible.
That means when we deploy from our local environment it's easy to enter the workspace container and run the vapor deploy commands. After enabling the Docker client for the workspace container it works properly with the Vapor Docker runtime.
But now we have integrated the deployment process into a GitLab CI pipeline. That works very well for our small services with the Vapor PHP runtime.
But for the Docker runtime I'm stuck on the CI deployment.
The Docker runtime needs an installed Docker instance where Vapor will be invoked. That means in the .gitlab-ci.yml I have to use an image with Docker and PHP installed to invoke the Vapor scripts.
So I created a Docker image based on the Laradock workspace container, but the GitLab runner always exits with the error message that no Docker daemon is available.
This is the related part of my GitLab CI yml (the image is only available locally):
testing:
  image:
    name: lexitaldev/vapor-docker-deploy:latest
    pull_policy: never
  securityContext:
    privileged: true
  environment: testing
  stage: deploy
  only:
    - test
  script:
    - composer install
    - php vendor/bin/vapor deploy test
This is the specific output:
Error Output:
================
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the
docker daemon running?
I've tried using the standard 'laravelphp/vapor:php80' image and installing Docker in the before_script section as well:
before_script:
  - apk add docker
  - addgroup root docker
But nothing helped. It seems there is a problem with the docker.sock.
Has anybody managed to add Vapor Docker runtime deployment to their CI scripts?
Best,
Michael
I would like to tell you that you only need to add the dind service, but after you do that it will throw an error related to the image that GitLab creates for your pipelines. So you need to register a runner with volumes, the privileged flag, and tags.
I did it using gitlab-runner on my machine:
sudo gitlab-runner register -n \
--url {{ your_url }} \
--registration-token {{your_token}} \
--executor docker \
--description "{{ Describe your runner }}" \
--docker-image "docker:20.10.12-alpine3.15" \
--docker-privileged \
--docker-volumes="/certs/client" \
--docker-volumes="cache" \
--docker-volumes="/var/run/docker.sock:/var/run/docker.sock"
--tag-list {{ a_tag_for_your_pipeline }}
Once you have done that, you need to use a stable Docker version in your .gitlab-ci.yml file. For some reason it didn't work when I tried to use version 20 or latest:
image: docker:stable

services:
  - name: docker:stable-dind

before_script:
  - echo $CI_JOB_TOKEN | docker login $CI_REGISTRY -u $CI_REGISTRY_USER --password-stdin

build:
  tags:
    - {{the tag you defined in your runner}}
  variables:
    IMAGE_TAG: $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG
  script:
    - echo $IMAGE_TAG
    - docker build -t $CI_REGISTRY_IMAGE -f {{your Dockerfile}} .
    - docker push $CI_REGISTRY_IMAGE
All the variables are predefined by GitLab, so don't worry, you can copy and paste. I also added some of the advice GitLab gives in its documentation for pushing your Docker image to the GitLab Container Registry.
When I use a docker pipeline, the build succeeds.
But when I use an exec pipeline it is always stuck in pending,
and I don't know what is going wrong.
kind: pipeline
type: exec
name: deployment

platform:
  os: linux
  arch: amd64

steps:
  - name: backend image build
    commands:
      - echo start build images...
      # - export MAJOR_VERSION=1.0.rtm.
      # - export BUILD_NUMBER=$DRONE_BUILD_NUMBER
      # - export WORKSPACE=`pwd`
      # - bash ./jenkins_build.sh
    when:
      branch:
        - master
The docker pipeline is fine:
kind: pipeline
type: docker
name: deployment

steps:
  - name: push image to repo
    image: plugins/docker
    settings:
      dockerfile: src/ZR.DataHunter.Api/Dockerfile
      tags: latest
      insecure: true
      registry: "xxx"
      repo: "xxx"
      username:
        from_secret: username
      password:
        from_secret: userpassword
First of all, it's important to note that exec pipelines can be used only when Drone is self-hosted, as the official docs state:
Please note exec pipelines are disabled on Drone Cloud. This feature is only available when self-hosting
When Drone is self-hosted, make sure that:
The exec runner is installed
It is configured properly in its config file, so that it can connect to the Drone server (a sketch of such a file follows below)
The drone-runner-exec service is running
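As an illustration, a minimal sketch of such a config file, assuming the documented default location /etc/drone-runner-exec/config on Linux; the host name and secret are placeholders:

# protocol and address of your Drone server
DRONE_RPC_PROTO=https
DRONE_RPC_HOST=drone.example.com
# must match the RPC secret configured on the Drone server
DRONE_RPC_SECRET=your-shared-secret
# optional: where the runner writes its log file
DRONE_LOG_FILE=/var/log/drone-runner-exec/log.txt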
After the service has started, check its log file; you should see info messages saying it was able to connect to your Drone server:
level=info msg="successfully pinged the remote server"
level=info msg="polling the remote server"
It's also possible to view the web UI (Dashboard) of the running service if you enable it.
So if you see it can poll your server, then your exec pipeline should run as expected.
So I have localstack running locally (on my laptop), and I can deploy a serverless app to it and then invoke a Lambda.
However, I am really struggling with doing the same thing in gitlab-ci.
This is the relevant part of .gitlab-ci.yml:
integration-test:
  stage: integration-test
  image: node:14-alpine3.12
  tags:
    - docker
  services:
    - name: localstack/localstack
      alias: localstack
  variables:
    LAMBDA_EXECUTOR: docker
    HOSTNAME_EXTERNAL: localstack
    DEFAULT_REGION: eu-west-1
    USE_SSL: "false"
    DEBUG: "1"
    AWS_ACCESS_KEY_ID: test
    AWS_SECRET_ACCESS_KEY: test
    AWS_DEFAULT_REGION: eu-west-1
  script:
    - npm ci
    - npx sls deploy --stage local
    - npx jest --testMatch='**/*.integration.js'
  only:
    - merge_requests
Localstack starts and the deployment works fine. But as soon as a lambda is invoked (in an integration test), localstack tries to create a container for the lambda to run in, and that's when it fails with the following:
Lambda process returned error status code: 1. Result: . Output:\\nCannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?\\nmust specify at least one container source (.....)
I tried to set DOCKER_HOST to tcp://docker:2375 but then it fails with:
Lambda process returned error status code: 1. Result: . Output:\\nerror during connect: Post http://docker:2375/v1.29/containers/create: dial tcp: lookup docker on 169.254.169.254:53: no such host\
DOCKER_HOST set to tcp://localhost:2375 complains too:
Lambda process returned error status code: 1. Result: . Output:\\nCannot connect to the Docker daemon at tcp://localhost:2375. Is the docker daemon running?\\nmust specify at least one container source
Did anyone ever get lambdas to run within localstack within shared gitlab runners?
Thanks for your help :)
Running Docker in Docker is usually a bad idea, since it's a big security risk: granting access to the local Docker daemon is equivalent to granting root privileges on the runner.
If you still want to use the Docker daemon installed on the host to spawn containers, refer to the official documentation - https://docs.gitlab.com/ee/ci/docker/using_docker_build.html#use-docker-socket-binding
which boils down to adding
volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache"]
to the [runners.docker] section in your runner config.
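For orientation, a sketch of where that line lives in a self-hosted runner's config.toml; the name, URL and token are placeholders:

[[runners]]
  name = "my-docker-runner"
  url = "https://gitlab.example.com/"
  token = "RUNNER_TOKEN"
  executor = "docker"
  [runners.docker]
    image = "docker:stable"
    # socket binding: jobs talk to the host's Docker daemon
    volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache"]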
The question is, why do you need Docker at all? According to https://github.com/localstack/localstack, setting LAMBDA_EXECUTOR to local will
run Lambda functions in a temporary directory on the local machine
which should be the best approach to your problem and won't compromise the security of your runner host.
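A sketch of the job from the question with only the executor changed; everything else can stay as it is:

integration-test:
  stage: integration-test
  image: node:14-alpine3.12
  services:
    - name: localstack/localstack
      alias: localstack
  variables:
    # run lambdas in a temporary directory inside the localstack
    # service itself, so no Docker daemon is needed in the job
    LAMBDA_EXECUTOR: local
    HOSTNAME_EXTERNAL: localstack
    DEFAULT_REGION: eu-west-1
    AWS_ACCESS_KEY_ID: test
    AWS_SECRET_ACCESS_KEY: test
    AWS_DEFAULT_REGION: eu-west-1
  script:
    - npm ci
    - npx sls deploy --stage local
    - npx jest --testMatch='**/*.integration.js'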
I'm trying to deploy a Java Maven project on AWS with GitLab CI/CD.
This is my .gitlab-ci.yml
image: maven:3-jdk-8

services:
  - docker:dind

stages:
  - test
  - build
  - deploy

maven-test:
  stage: test
  script:
    - echo "Test stage"
    - mvn clean validate compile test -B

maven-build:
  stage: build
  script:
    - echo "Build stage"
    - mvn install -B -DskipTests
  artifacts:
    paths:
      - ./target/*.jar

maven-deploy:
  stage: deploy
  script:
    - echo "Deploy stage"
    - scp -v -o StrictHostKeyChecking=no -i "mykey.pem" ./target/*.jar ubuntu@xxxxxxx.com:*.jar
  when: manual
If I execute the scp command in a terminal on my PC, the jar is uploaded to the AWS EC2 instance, while in GitLab I get errors and the jar is not uploaded.
This is my first approach with GitLab CI and AWS, so can someone explain step by step what I need to do to deploy the project to an AWS EC2 instance with GitLab CI?
Thanks!
Since you have not posted much about your problem, nor the actual error, I will just suggest a few things to look at:
From a GitLab perspective:
Are you sure that "mykey.pem" is available within the repository when running that command (maven-deploy) on the gitlab-runner?
Also, are you sure that you are using a Docker gitlab-runner? If you are not, then you can't use the image: directive, and the runner might not have mvn/scp available locally.
You might want to look into the dependencies directive and ensure you make the artifact available in the next job; this should happen by default. (A sketch covering this is shown after the Edit below.)
From an AWS perspective:
Make sure that the target Ubuntu EC2 machine/server has port 22 (SSH) open to the machine running the gitlab-runner.
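For example, if the runner has a static IP, the security group rule could be added with the AWS CLI; the group ID and IP below are placeholders:

# allow SSH from the runner's public IP to the EC2 instance's security group
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 22 \
  --cidr 203.0.113.10/32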
Edit:
If the error you are receiving is about the pem file's permissions, then take a look at this resolution for the AWS EC2 pem file issue. Another similar resolution is here.
It seems that running chmod 400 mykey.pem before the scp might fix your problem.
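A hedged sketch of the deploy job from the question with that change, plus an explicit dependencies entry for the jar artifact:

maven-deploy:
  stage: deploy
  # pull in ./target/*.jar produced by maven-build (passed on by default,
  # but the dependencies directive makes it explicit)
  dependencies:
    - maven-build
  script:
    - echo "Deploy stage"
    # restrict the key's permissions so ssh/scp will accept it
    - chmod 400 mykey.pem
    - scp -v -o StrictHostKeyChecking=no -i "mykey.pem" ./target/*.jar ubuntu@xxxxxxx.com:*.jar
  when: manual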
When I push an existing Docker image to Heroku, Heroku provides a $PORT environment variable. How can I pass this property to the Heroku run instance?
On localhost this would work:
docker pull swaggerapi/swagger-ui
docker run -p 80:8080 swaggerapi/swagger-ui
On Heroku I should do:
docker run -p $PORT:8080 swaggerapi/swagger-ui
Is something like this possible?
The question is quite old now, but I will still write my answer here in case it is of some help to others.
I have a Spring Boot app along with swagger-ui, Dockerized and deployed on Heroku.
This is what my application.yml looks like:
server:
  port: ${PORT:8080}
  forward-headers-strategy: framework
  servlet:
    contextPath: /my-app

springdoc:
  swagger-ui:
    path: '/swagger-ui.html'
Below is my Dockerfile configuration.
FROM maven:3.5-jdk-8 as maven_build
WORKDIR /app
COPY pom.xml .
RUN mvn clean package -Dmaven.main.skip -Dmaven.test.skip && rm -r target
COPY src ./src
RUN mvn package spring-boot:repackage
########run stage########
FROM openjdk:8-jdk-alpine
WORKDIR /app
RUN apk add --no-cache bash
COPY --from=maven_build /app/target/springapp-1.1.1.jar ./
#run the app
# 256m was necessary for me: on the free plan Heroku was giving me a memory quota limit exception, so I restricted the limit to 256m
ENV JAVA_OPTS "-Xmx256m"
ENTRYPOINT ["java","${JAVA_OPTS}", "-jar","-Dserver.port=${PORT}", "springapp-1.1.1.jar"]
The commands I used to create the heroku app:
heroku create
heroku stack:set container
The commands I used to build the image and deploy:
docker build -t app-image .
heroku container:push web
heroku container:release web
Finally, make sure that on the Heroku dashboard the dyno information looks like this:
web java \$\{JAVA_OPTS\} -jar -Dserver.port\=\$\{PORT\} springapp-1.1.1.jar
After all these steps, I was able to access the swagger-ui via
https://testapp.herokuapp.com/my-app/swagger-ui.html
Your Docker container is required to listen for HTTP traffic on the port specified by Heroku.
Looking at the Dockerfile in the Github repo for swaggerapi/swagger-ui, it looks like it already supports the PORT environment variable: https://github.com/swagger-api/swagger-ui/blob/be72c292cae62bcaf743adc6236707962bc60bad/Dockerfile#L13
So maybe you don't really need to do anything?
It looks like this image would just work if shipped to Heroku as a web app.
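For reference, a hedged sketch of shipping the existing image as a Heroku web app via the Container Registry; the app name your-app is a placeholder:

# log in to Heroku's container registry
heroku container:login
# tag the public image for your app's web process and push it
docker tag swaggerapi/swagger-ui registry.heroku.com/your-app/web
docker push registry.heroku.com/your-app/web
# release the pushed image as the web dyno
heroku container:release web -a your-app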