Concourse CI - how to run functional tests? - continuous-integration

We are in the middle of migrating from Jenkins to Concourse CI, and everything has been pretty smooth so far. But now I have an issue that I don't know how to solve, and I would appreciate any advice from the community.
What I am trying to do is create a job that can run integration or functional (web) tests using Selenium. There are a few issues for us:
To run the web tests I need to set up the database (and optionally the search engine, proxy, etc.) to imitate the production environment as closely as possible.
Ideally, it should be set up with docker-compose.
This database service should run in parallel with my tests.
This database service should not return anything, neither error nor success, because it only starts the database and nothing else.
My web tests should not start until the database is ready.
This database service should be stopped when all the web tests have finished.
As you can see, it's a pretty non-trivial task. Of course, I could create a big uber-container that contains everything I need, but that is a bad solution. Another option is to create a shell script for this, but that is not flexible enough.
Is there any example of how I could implement this, or any good practices for this kind of issue?
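Concretely, the kind of docker-compose.yml I have in mind would look something like this rough sketch (the images, service names and healthcheck are placeholders, not our real setup):

version: '2.1'
services:
  db:
    image: postgres:9.6                 # placeholder: whatever database production uses
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      retries: 20
  selenium:
    image: selenium/standalone-chrome   # placeholder for the browser/WebDriver container
  web-tests:
    image: my-web-tests                 # placeholder image containing the Selenium suite
    depends_on:
      db:
        condition: service_healthy      # don't start the tests until the DB reports healthy
      selenium:
        condition: service_started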
Thanks!

Since version 1.3.0 it appears you can run Docker-compose in a task: https://github.com/concourse/concourse/issues/324
This appears to work:
jobs:
- name: docker-compose
  public: true
  serial: true
  plan:
  - do:
    - task: docker-compose
      timeout: 20m
      privileged: true
      config:
        platform: linux
        image_resource:
          type: docker-image
          source: {repository: "mumoshu/dcind", tag: "latest"}
        run:
          path: sh
          args:
          - -exc
          - |
            source /docker-lib.sh
            start_docker
            docker ps
            docker-compose version

This is a comment from the author of Concourse:
There is no Docker binary or socket on the host - they're just running a Garden backend (probably Guardian). Concourse runs at an abstraction layer above Docker, so providing any sort of magic there doesn't really make sense.
The one thing missing post-1.3 is that Docker requires you to set up cgroups yourself. I forgot how annoying that is. I wish they did what Guardian does and auto-configure it, but what can ya do.
So, the full set of instructions is:
Use or build an image with Docker in it, e.g. docker:dind.
Run the following at the start of your task: https://github.com/concourse/docker-image-resource/blob/master/assets/common.sh#L1-L40
Spin up Docker with docker daemon &.
Then you can run docker-compose and friends as normal.
The downside of this is that you'll be fetching the images every time. #230 will address that.
In the long run, #324 (comment) is the direction I want to go.
See here https://github.com/concourse/concourse/issues/324
As noted in the accepted answer, the Slack archive data has been deleted (due to Slack's retention limit).
A Docker image specialized for this use case: https://github.com/meAmidos/dcind
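Putting those pieces together, the body of such a task could look roughly like the sketch below (it assumes the mumoshu/dcind image from above, that your source with its docker-compose.yml arrives as a task input named repo, and that the compose file defines a tests service; all of those names are placeholders):

source /docker-lib.sh   # provided by the dcind image: sets up cgroups and starts the Docker daemon
start_docker

cd repo                 # assumption: the pipeline passes your source as an input named "repo"

# Bring the stack up, run the test service, always tear down, and
# propagate the test suite's exit code as the task result.
docker-compose up -d
rc=0
docker-compose run --rm tests || rc=$?
docker-compose down
exit $rc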

It does not sound that complicated to me. I wrote a post on how to get something similar up and running here. I use different containers for the stack and the test runner, and fire up everything from an official docker:dind image with docker-compose installed on it.
Beyond the usual Concourse CI stuff of fetching resources etc., performing a test run consists of:
Starting the web, rest, and other services with docker-compose up.
Starting the test-runner service and firing the test suites at the web page, which communicates with the rest layer, which in turn depends on the other services for responses.
Performing docker-compose down when the test runner completes, and deciding the return code of the task (0 = success, non-zero = failure) based upon the return code of the test suite.
To cleanly set up and tear down the stack and test runner, you could do something like the below (maybe you could use depends_on if your service is not started when the tests begin; for me it works without it):
# Set up the SUT stack:
docker-compose up -d

# Run the test-runner container outside of the SUT so the SUT can be torn down when testing is completed:
docker-compose run --rm --entrypoint '/entrypoint.sh /protractor/project/conf-dev.js --baseUrl=http://web:9000/dist/ --suite=my_suite' test-runner

# Store the return code from the tests and tear down:
rc=$?
docker-compose down
echo "exit code = $rc"
# Stop the Docker daemon that was started in the background earlier (job %1)
kill %1
exit $rc
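For reference, the compose file behind that script would be along these lines (a hypothetical sketch; the service names and images are placeholders for your own stack):

version: '2'
services:
  web:
    image: my-web-image          # placeholder: the web front-end served on port 9000
    depends_on:
      - rest
  rest:
    image: my-rest-image         # placeholder: the REST layer the web page talks to
    depends_on:
      - backing-services
  backing-services:
    image: my-backing-services   # placeholder: database, search engine, etc.
  test-runner:
    image: my-protractor-image   # placeholder: image containing Protractor and the suites
    depends_on:
      - web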

Related

How to execute script by host after starting docker container

I have a docker-compose.yml file and I start a container with a DB via the
docker-compose up -d db command.
I need to execute a script on the host machine that, briefly speaking, loads a dump into the DB in the container.
So, now it looks like:
docker-compose up -d db
./script.sh
But I want to combine these two commands into one.
My question is "Is it possible?"
I found out that Docker Compose doesn't support this feature.
I know that I can create another script with these commands in it, but I want to leave only
docker-compose up -d db
UPD: I would like to mention that I am using the mcr.microsoft.com/mssql/server:2017-latest image.
Also, I have to say one more time that I need to execute the script on the host machine itself.
You can't use the Docker tools to execute commands on the host system. A general design point around Docker is that containers shouldn't be able to affect the host.
Nothing stops you from writing your own shell script that runs on the host and does the steps you need:
#!/bin/sh
docker-compose up -d
./wait-for.sh localhost 1433
./script.sh
(The wait-for.sh script is the same as described in the answers to Docker Compose wait for container X before starting Y that don't depend on Docker health checks.)
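(If you don't want to copy one of those scripts verbatim, a minimal wait-for.sh can be as small as the following sketch, which assumes nc is available on the host:)

#!/bin/sh
# Minimal wait-for.sh sketch: block until host:port accepts TCP connections,
# then exec whatever follows "--" (if anything).
host="$1"; port="$2"; shift 2
[ "$1" = "--" ] && shift
until nc -z "$host" "$port"; do
  echo "waiting for $host:$port..."
  sleep 1
done
if [ $# -gt 0 ]; then
  exec "$@"
fi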
For your use case it may be possible to run the data importer in a separate container. A typical setup could look like this; note that the importer will run every time you run docker-compose up. You may want to actually build this into a separate image.
version: '3.8'
services:
  db: { same: as you have currently }
  importer:
    image: mcr.microsoft.com/mssql/server:2017-latest
    entrypoint: ./wait-for.sh db 1433 -- ./script.sh
    working_dir: /import
    volumes: [.:/import]
The open-source database containers also generally support putting scripts in /docker-entrypoint-initdb.d that get executed the first time the container is launched, but the SQL Server image doesn't seem to support this; questions like How can I restore an SQL Server database when starting the Docker container? have a complicated setup to replicate this behavior.
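For comparison, with an image that does support it (the official postgres image, for instance), the pattern is just a volume mount, roughly:

services:
  db:
    image: postgres:13
    environment:
      - POSTGRES_PASSWORD=example
    volumes:
      # *.sql and *.sh files in this directory are run once, when the
      # container starts with a fresh data directory
      - ./init-scripts:/docker-entrypoint-initdb.d:ro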

Visual Studio is deleting my environment variable additions to docker.compose.vs.debug.g.yml

First of all, I am running this locally in Visual Studio 2019, so that is the environment I am currently trying to find my issues in. It's .NET Core 3.1 targeting Linux.
I have inherited a project which I need to debug locally but which is going to be pushed up to ECS on AWS. Most of the CI/CD has been set up around this container now, and I feel it's limiting my ability to debug its issues, but that is what I was handed, so here I am chatting with you folks.
Currently the app runs fine when outside a container and is able to use my dev environment credentials.
The issues seem to stack up when I try to locally debug the container in docker-compose, because the app no longer has access to the AWS credentials once it is in its own little container.
My original nefarious plan was just to review my output, correct the docker-compose.vs.debug.g.yml it is using to run, and shove my secret keys in there while debugging why the AWS ECS containers are exiting with code 139 on AWS.
The issue is that there are a lot of cogs spinning around this now, it seems, and just doing a simple "docker run -e awssecretkey=YOUWISHBOI ." is all but impossible.
NOTE: Please don't get hung up on any errors below; this is just to demonstrate where I want to push in my environment variables. I have renamed the programs carelessly to keep the innocent anonymous.
version: '3.4'
services:
  pickle.application:
    image: pickle:dev
    container_name: pickle.Application
    build:
      target: base
      labels:
        com.microsoft.created-by: "visual-studio"
        com.microsoft.visual-studio.project-name: "pickle.Application"
    environment:
      - NUGET_FALLBACK_PACKAGES=/root/.nuget/fallbackpackages
      - HOW_DO_I=KEEP_SOMETHING_HERE
    volumes:
      - C:\mq-tech\pickle-system\pickle\pickle\src\pickle.Application:/app
      - C:\Users\Carter-PC\vsdbg\vs2017u5:/remote_debugger:rw
      - C:\Users\Carter-PC\.nuget\packages\:/root/.nuget/packages:ro
      - C:\Program Files\dotnet\sdk\NuGetFallbackFolder:/root/.nuget/fallbackpackages:ro
    entrypoint: tail -f /dev/null
    labels:
      com.microsoft.visualstudio.debuggee.program: "dotnet"
      com.microsoft.visualstudio.debuggee.arguments: " --additionalProbingPath /root/.nuget/packages --additionalProbingPath /root/.nuget/fallbackpackages \"/app/bin/Debug/netcoreapp3.1/pickleApplication.dll\""
      com.microsoft.visualstudio.debuggee.workingdirectory: "/app"
      com.microsoft.visualstudio.debuggee.killprogram: "/bin/sh -c \"if PID=$$(pidof dotnet); then kill $$PID; fi\""
    tty: true
So I suppose my question is: what's the proper way, with docker-compose debugging, to put my AWS environment variables where the debugged container will have them?
I imagine I could be coming at this the entirely wrong way, have mercy!
I realize this answer is a bit late, but someone still might find it useful.
You can create a docker-compose.vs.debug.yml file, which VS will provide as the last file when running the docker-compose command (so it won't get overwritten).
It can look something like this:
version: '3.4'
services:
  pickle.application:
    environment:
      - HOW_DO_I=KEEP_SOMETHING_HERE
The same rule applies to the docker-compose.vs.release.yml file, but in release mode.
Documentation -> https://learn.microsoft.com/en-us/visualstudio/containers/docker-compose-properties?view=vs-2019#docker-compose-file-labels
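If you'd rather not hard-code the AWS secrets in that file, Compose variable substitution lets you pull them from the host environment instead, along these lines (the standard AWS SDK variable names are used here as an example):

version: '3.4'
services:
  pickle.application:
    environment:
      # Values are read from the host environment when docker-compose runs
      - AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
      - AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}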

How to build a cassandra cluster with docker on a windows machine?

I want to build a Cassandra cluster with Docker. The documentation already tells you how to do this, so that is not the problem I have.
However I am currently using Docker on Windows 10 and obviously it cannot execute the nested command in docker run --name some-cassandra2 -d -e CASSANDRA_SEEDS="$(docker inspect --format='{{ .NetworkSettings.IPAddress }}' some-cassandra)" cassandra:tag which results in an empty seed list for the container.
How can I nest a command like this in Windows or - if this is not possible - get a workaround for this?
I managed to fix it thanks to a docker-compose.yml by Jason Giedymin. It should work with v1 as well as v2 of the compose file format. By doing it this way you just let Docker do the linking from the get-go and tell the Cassandra nodes about the other seeds via the environment variable the container already gives you.
The sleep 30 part is pretty smart as well, as it makes sure that the second container doesn't try to connect to a container that isn't fully up yet.
One thing I would recommend, though, is using external_links instead of links. This way other containers don't rely on all of the Cassandra containers being up in order to start/work. Anything else would defeat the purpose of a distributed database.
I still don't know how to nest Windows cmd commands into each other so I would still be thankful for some tips.
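For anyone looking for a starting point, that compose file boils down to something like this sketch (the image tag and the sleep value are just examples):

version: '2'
services:
  cassandra1:
    image: cassandra:3.11
    ports:
      - "9042:9042"
  cassandra2:
    image: cassandra:3.11
    environment:
      # Point the second node at the first one as its seed
      - CASSANDRA_SEEDS=cassandra1
    depends_on:
      - cassandra1
    # Give the first node time to come up before the second tries to join
    command: bash -c 'sleep 30 && /docker-entrypoint.sh cassandra -f'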

Kafka on EC2 instance for integration testing

I'm trying to set up some integration tests for the part of our project that makes use of Kafka. I've chosen to use the spotify/kafka docker image, which contains both Kafka and Zookeeper.
I can run my tests (and they pass!) on a local machine if I run the Kafka container as described at that project site. When I try to run it on my EC2 build server, however, the container dies. The final fatal error is "INFO gave up: kafka entered FATAL state, too many start retries too quickly".
My suspicion is that it doesn't like the address passed in. I've tried using both the public and the private IP address that EC2 provides, but the results are the same either way, just as with localhost.
Any help would be appreciated. Thanks!
It magically works now even though I'm still doing exactly what I was doing before. However, in order to help others who might come along, I will post what I did to get it to work.
I created the following shell script and have Jenkins run it as a build step.
#!/bin/bash
if ! docker inspect -f 1 test_kafka &>/dev/null
then docker run -d --name test_kafka -p 2181:2181 -p 9092:9092 --env ADVERTISED_HOST=localhost --env ADVERTISED_PORT=9092 spotify/kafka
fi
Even though localhost resolves to the private IP address, it seems to accept it now. The if block just tests whether the container already exists, and reuses it if it does.
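If the tests ever start before the broker is ready, a small wait loop in the same build step can help; a sketch, assuming nc is installed on the build server:

# Wait (up to ~60 seconds) for the broker to accept connections before running the tests
for i in $(seq 1 30); do
  nc -z localhost 9092 && break
  sleep 2
done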

Ansible stops service but doesn't restart it

We recently deployed Ansible in our different environments and I'm running into a problem I can't find a solution to.
On two servers you have to start and stop the services by becoming a specific user.
su - itvmgr
Then you have to run a custom command to stop and start the services:
itvmgrctl stop dispatcher
itvmgrctl start dispatcher
One of the tasks looks like this:
- name: "Start Dispatcher Service"
sudo_user: itvmgr
command: su itvmgr -c '/itvmgr/bin/itvmgrctl start dispatcher'
- name: Pause
pause: seconds=15
There's another task to stop it which looks just like this one just using stop instead of start.
The problem I'm running into is that Ansible stops the service fine, but it fails to start the service again. I'm not getting any errors while it runs, but I can't find any reason why it would stop the service fine while the same command fails to start it.
If anyone has any suggestions on how I can troubleshoot this problem it would be greatly appreciated.
Perhaps your start script needs an interactive shell or some environment variables?
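One way to test that theory would be to run the command through a login shell as that user, e.g. something like this sketch (become/become_user instead of sudo_user, and bash -lc so that itvmgr's profile and environment are loaded):

- name: "Start Dispatcher Service"
  become: yes
  become_user: itvmgr
  shell: bash -lc '/itvmgr/bin/itvmgrctl start dispatcher'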
