gradle continuous build trick doesn't work in docker container - spring-boot

Hi, I'm trying to use the trick described here to get continuous builds working inside Docker containers. The trick works fine when I open two separate terminals on my host machine, but it fails when run in containers.
docker-compose.yml
build_server:
  image: gradle:6.3.0-jdk8
  working_dir: /home/gradle/server
  volumes:
    - ./server:/home/gradle/server
  command: ["gradle", "build", "--continuous", "-x", "test"]
server:
  image: gradle:6.3.0-jdk8
  working_dir: /home/gradle/server
  volumes:
    - ./server:/home/gradle/server
  ports:
    - 8080:8080
  depends_on:
    - build_server
  restart: on-failure
  command: ["gradle", "bootRun"]
The error message I got from server container:
server_1 | FAILURE: Build failed with an exception.
server_1 |
server_1 | * What went wrong:
server_1 | Gradle could not start your build.
server_1 | > Could not create service of type ScriptPluginFactory using BuildScopeServices.createScriptPluginFactory().
server_1 | > Could not create service of type ChecksumService using BuildSessionScopeServices.createChecksumService().
server_1 | > Timeout waiting to lock checksums cache (/home/gradle/server/.gradle/checksums). It is currently in use by another Gradle instance.
server_1 | Owner PID: unknown
server_1 | Our PID: 31
server_1 | Owner Operation: unknown
server_1 | Our operation:
server_1 | Lock file: /home/gradle/server/.gradle/checksums/checksums.lock
It looks like Gradle has taken a lock on the local cache files, which prevents the bootRun task from running in the other container. However, the trick works fine when I run the tasks in two terminals on my host machine, or when I only run the build_server container and run bootRun from a host terminal. I wonder why it doesn't work inside Docker containers. Thanks in advance for your help!

I found a workaround: set a different project cache dir for the server container, i.e. replace the command with the following:
command: ["gradle", "bootRun", "--project-cache-dir=/tmp/cache"]
It might not be the best solution, but it does circumvent the problem caused by Gradle's lock.
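For reference, the server service then looks like this (same image and volumes as before; the only change is the extra flag on the command):
server:
  image: gradle:6.3.0-jdk8
  working_dir: /home/gradle/server
  volumes:
    - ./server:/home/gradle/server
  ports:
    - 8080:8080
  depends_on:
    - build_server
  restart: on-failure
  command: ["gradle", "bootRun", "--project-cache-dir=/tmp/cache"]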

Related

Running lambdas in localstack in gitlab-ci

So I have localstack running locally (on my laptop) and can deploy a serverless app to it and then invoke a Lambda.
However, I am really struggling with doing the same thing in gitlab-ci.
This is the relevant part of .gitlab-ci.yml:
integration-test:
  stage: integration-test
  image: node:14-alpine3.12
  tags:
    - docker
  services:
    - name: localstack/localstack
      alias: localstack
  variables:
    LAMBDA_EXECUTOR: docker
    HOSTNAME_EXTERNAL: localstack
    DEFAULT_REGION: eu-west-1
    USE_SSL: "false"
    DEBUG: "1"
    AWS_ACCESS_KEY_ID: test
    AWS_SECRET_ACCESS_KEY: test
    AWS_DEFAULT_REGION: eu-west-1
  script:
    - npm ci
    - npx sls deploy --stage local
    - npx jest --testMatch='**/*.integration.js'
  only:
    - merge_requests
Localstack gets started and the deployment works fine. But as soon as a lambda is invoked (in an integration test), localstack tries to create a container for the lambda to run in, and that's when it fails with the following:
Lambda process returned error status code: 1. Result: . Output:\\nCannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?\\nmust specify at least one container source (.....)
I tried to set DOCKER_HOST to tcp://docker:2375 but then it fails with:
Lambda process returned error status code: 1. Result: . Output:\\nerror during connect: Post http://docker:2375/v1.29/containers/create: dial tcp: lookup docker on 169.254.169.254:53: no such host\
DOCKER_HOST set to tcp://localhost:2375 complains too:
Lambda process returned error status code: 1. Result: . Output:\\nCannot connect to the Docker daemon at tcp://localhost:2375. Is the docker daemon running?\\nmust specify at least one container source
Did anyone ever get lambdas to run within localstack within shared gitlab runners?
Thanks for your help :)
Running Docker in Docker is usually a bad idea, since it's a big security risk: granting access to the local Docker daemon is equivalent to granting root privileges on the runner.
If you still want to use the Docker daemon installed on the host to spawn containers, refer to the official documentation - https://docs.gitlab.com/ee/ci/docker/using_docker_build.html#use-docker-socket-binding
which boils down to adding
volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache"]
to the [runners.docker] section of your runner config.
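For context, a minimal sketch of what that looks like in the runner's config.toml (the runner name and image here are placeholders, not from the question):
[[runners]]
  name = "docker-runner"    # placeholder
  executor = "docker"
  [runners.docker]
    image = "docker:stable"    # placeholder default job image
    volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache"]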
The question is, why do you need Docker at all? According to https://github.com/localstack/localstack, setting LAMBDA_EXECUTOR to local will
run Lambda functions in a temporary directory on the local machine
which should be the best approach to your problem, and it won't compromise the security of your runner host.
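Applied to the job above, that boils down to changing a single line in the variables block (everything else stays as posted):
variables:
  LAMBDA_EXECUTOR: local   # was: docker; lambdas then run in a temporary directory inside the job container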

docker-compose Error Cannot start service mongo: driver failed programming external connectivity on endpoint

I'm setting up GrandNode with MongoDB in Docker using docker-compose.
docker-compose.yml
version: "3.6"
services:
  mongo:
    image: mongo:3.6
    volumes:
      - mongo_data_db:/data/db
      - mongo_data_configdb:/data/configdb
    ports:
      - 27017:27017
  grandnode:
    image: grandnode/grandnode:4.10
    ports:
      - 8080:8080
    depends_on:
      - mongo
volumes:
  mongo_data_db:
    external: true
  mongo_data_configdb:
    external: true
I get the error below when running docker-compose:
E:\docker\grandnode>docker-compose up
Creating network "grandnode_default" with the default driver
Creating grandnode_mongo_1 ... error
ERROR: for grandnode_mongo_1 Cannot start service mongo: driver failed programming external connectivity on endpoint grandnode_mongo_1 (1e54342c07b093e32189aad487927f226b3ed0d1b6bdf7413588377b0e99bc2c): Error starting userland proxy: mkdir /port/tcp:0.0.0.0:27017:tcp:172.20.0.2:27017: input/output error
ERROR: for mongo Cannot start service mongo: driver failed programming external connectivity on endpoint grandnode_mongo_1 (1e54342c07b093e32189aad487927f226b3ed0d1b6bdf7413588377b0e99bc2c): Error starting userland proxy: mkdir /port/tcp:0.0.0.0:27017:tcp:172.20.0.2:27017: input/output error
ERROR: Encountered errors while bringing up the project.
This happened to me on Xubuntu 20.04.
The problem was that I had mongod running on my computer.
Stopping mongod was the solution for me.
I did this:
sudo systemctl stop mongod
Check that mongod was stopped with:
systemctl status mongod | grep Active
The output of this command should be:
Active: inactive (dead)
Then I executed this again:
docker-compose up -d
Everything worked as expected.
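If you want to confirm what is holding the port before stopping anything, a quick check (assuming a Linux host with ss or lsof available):
sudo ss -ltnp | grep 27017
# or
sudo lsof -i :27017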
Unless you want to connect to your MongoDB instance from your local host, you don't need that port mapping "27017:27017".
Both services are on the same network and will see each other anyway; Grandnode can connect to MongoDB at mongo:27017.
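In other words, the mongo service from the compose file above can be trimmed to something like this, with GrandNode using the connection string mongodb://mongo:27017:
mongo:
  image: mongo:3.6
  volumes:
    - mongo_data_db:/data/db
    - mongo_data_configdb:/data/configdb
  # no ports: mapping needed unless you also connect from the host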
The problem was that the Shared Drives were unchecked (in Docker Desktop settings).
Check the required drives
Click Apply
Restart Docker
This will fix the issue.
Stop your MongoDB server at the OS level.
For Linux:
sudo systemctl stop mongod
If this still doesn't work, uninstall MongoDB from the local machine and run docker-compose once again.
For Linux users:
sudo systemctl stop mongod
sudo docker-compose up -d

How to use docker run with a Meteor image?

I have 2 containers: mgmt-app, which is a Meteor container, and mgmt-mongo, which is the MongoDB container.
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
7b65be4ac454 gitlab-lab:5005/dfc/mongo:latest "/entrypoint.sh mongo" About an hour ago Up About an hour 27017/tcp mgmt-mongo
dff0b3c69c5f gitlab-lab:5005/dfc/mgmt-docker-gui:lab "/bin/sh -c 'sh $METE" About an hour ago Up 42 minutes 0.0.0.0:80->80/tcp mgmt-app
From my Docker host I want to run docker run gitlab-lab:5005/dfc/mgmt-docker-gui:lab ls -al
but I have this error:
=> Starting meteor app on port:80
/app/programs/server/node_modules/fibers/future.js:280
throw(ex);
^
Error: MONGO_URL must be set in environment
So I tried:
docker run -e "MONGO_URL=mongodb://mgmt-mongo:27017/meteor" gitlab-lab:5005/dfc/mgmt-docker-gui:lab ls -al
and then the error was:
/app/programs/server/node_modules/fibers/future.js:313
throw(ex);
^
MongoError: failed to connect to server [mgmt-mongo:27017] on first connect
I really don't understand because when I do a docker-compose up -d with this file:
mgmt-app:
  image: gitlab-lab:5005/dfc/mgmt-docker-gui:latest
  container_name: mgmt-app
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
    - $HOME/.docker:/root/.docker
    - /home/dockeradm/compose/area:/home/dockeradm/compose/area
  environment:
    - ROOT_URL=http://localhost:80
    - MONGO_URL=mongodb://mgmt-mongo:27017/meteor
  ports:
    - 80:80
  restart: always
mgmt-mongo:
  image: gitlab-lab:5005/dfc/mongo:latest
  container_name: mgmt-mongo
  volumes:
    - mgmt_mongo_data_config:/data/configdb
    - mgmt_mongo_data_db:/data/db
  restart: always
everything goes well.
So my question is: how should I write my docker run command so it executes my command? (The real command is not a simple ls -al, but that's fine for the demo.)
When you run the containers separately with docker run, they are not attached to the same Docker network, so the mongo container is not reachable from the app container. To remedy this, you should use either:
--link to mark the app container as linked to the mongo container. This works, but is deprecated.
a user-defined Docker network that both containers join; this is more complex, but it is the recommended architecture.
By contrast, docker-compose automatically adds both containers to the same docker network, so they are immediately connectable without any extra configuration required:
By default Compose sets up a single network for your app. Each container for a service joins the default network and is both reachable by other containers on that network, and discoverable by them at a hostname identical to the container name.
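A rough sketch of the user-defined network approach with the images from the question (the network name mgmt-net is illustrative):
# create a network and attach both containers to it
docker network create mgmt-net
docker run -d --name mgmt-mongo --network mgmt-net gitlab-lab:5005/dfc/mongo:latest
# the container name mgmt-mongo is resolvable on that network, so MONGO_URL works as in the compose file
docker run --network mgmt-net -e "MONGO_URL=mongodb://mgmt-mongo:27017/meteor" gitlab-lab:5005/dfc/mgmt-docker-gui:lab ls -al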

How to mount container-writable host directory?

I'm attempting to run an ELK stack using Docker. I found docker-elk which has already set up the config for me, using docker-compose.
I'd like to store the elasticsearch data on the host machine instead of inside a container. As per docker-elk's README, I added a volumes line to the elasticsearch section of docker-compose.yml:
elasticsearch:
  image: elasticsearch:latest
  command: elasticsearch -Des.network.host=0.0.0.0
  ports:
    - "9200"
    - "9300"
  volumes:
    - ../../env/elasticsearch:/usr/share/elasticsearch/data
However, when I run docker-compose up I get:
$ docker-compose up
Starting dev_elasticsearch_1
Starting dev_logstash_1
Starting dev_kibana_1
Attaching to dev_elasticsearch_1, dev_logstash_1, dev_kibana_1
kibana_1 | Stalling for Elasticsearch
elasticsearch_1 | [2016-03-09 00:23:35,193][WARN ][bootstrap ] unable to install syscall filter: seccomp unavailable: your kernel is buggy and you should upgrade
elasticsearch_1 | Exception in thread "main" java.lang.IllegalStateException: Unable to access 'path.data' (/usr/share/elasticsearch/data/elasticsearch)
elasticsearch_1 | Likely root cause: java.nio.file.AccessDeniedException: /usr/share/elasticsearch/data/elasticsearch
elasticsearch_1 | at sun.nio.fs.UnixException.translateToIOException(UnixException.java:84)
elasticsearch_1 | at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
elasticsearch_1 | at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
... etc ...
Looking in ../../env, the elasticsearch directory was indeed created, but it was empty. If I create ../../env/elasticsearch/elasticsearch then I get an access error for /usr/share/elasticsearch/data/elasticsearch/nodes. If I create /nodes then I get an error for /nodes/0, etc...
In short, it appears that the container doesn't have write permissions on the directory.
How do I get it to have write permissions? I tried chmod a+wx ../../env/elasticsearch, and then it manages to create the next directory, but that directory has permission drwxr-xr-x and it gets stuck again.
I don't like the idea of having to run this as root.
Docker doesn't tend to worry about these things in its base images because it expects you to use volumes or volume containers; mounting to the host gets second-class support. But as long as the UID that owns the directory is not zero (and it seems it's not, based on our comment exchange), you should be able to get away with running elasticsearch as the user who already owns the directory. You could try removing and re-adding the elasticsearch user in the container, specifying its UID.
You would need to do this at entrypoint time, so your best bet would be to build a custom container. Create a file called my-entrypoint with these contents:
#!/bin/bash
# Allow running arbitrary one-off commands
[[ $1 && $1 != elasticsearch ]] && exec "$@"
# Otherwise, fix perms and then delegate the rest to vanilla
target_uid=$(stat -c %u /usr/share/elasticsearch/data)
userdel elasticsearch
useradd -u "$target_uid" elasticsearch
. /docker-entrypoint "$@"
Make sure it's executable. Then create a Dockerfile with these contents:
FROM elasticsearch
COPY my-entrypoint /
ENTRYPOINT ["/my-entrypoint"]
And finally update your docker-compose.yml file:
elasticsearch:
  build: .
  command: elasticsearch -Des.network.host=0.0.0.0
  ports:
    - "9200"
    - "9300"
  volumes:
    - ../../env/elasticsearch:/usr/share/elasticsearch/data
Now when you run docker-compose up it should build an elasticsearch container with your changes.
(I had to do something like this once with apache for Magento.)
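If you would rather not build a custom image, an alternative sketch (untested against this particular image) is to hand the host directory to whatever UID the image's elasticsearch user has:
# print the uid of the elasticsearch user inside the image
docker run --rm --entrypoint id elasticsearch elasticsearch
# then give the host directory to that uid (replace 1000 with the value printed above)
sudo chown -R 1000 ../../env/elasticsearch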

invalid argument creating a ruby dev env with docker & fig

I'm attempting to get a dev environment set up with fig and docker, and I continually receive an 'invalid argument' error.
$ fig up
Recreating website_db_1...
Recreating website_web_1...
invalid argument
The Dockerfile builds via both fig and docker.
fig.yml
db:
  image: "postgres:9.3"
  ports:
    - 5432
  volumes:
    - ./data:/var/lib/postgresql/data/
web:
  build: .
  command: bundle exec rails server
  volumes:
    - .:/usr/src/app/
  ports:
    - "3000:3000"
  links:
    - db
Dockerfile
FROM ruby:1.9.3-p547
RUN bundle config --global frozen 1
RUN mkdir -p /usr/src/app
I think you have some stray character or problematic encoding. I have run your files and, from the docker/fig side, they seem to work nicely. The output I get when running fig up:
Recreating fig_db_1...
Recreating fig_web_1...
Attaching to fig_db_1, fig_web_1
db_1 | LOG: database system was shut down at 2014-12-30 09:06:55 UTC
db_1 | LOG: database system is ready to accept connections
db_1 | LOG: autovacuum launcher started
web_1 | Could not locate Gemfile
fig_web_1 exited with code 10
Gracefully stopping... (press Ctrl+C again to force)
Stopping fig_db_1...
Try copying and pasting the code you have posted here. I have seen similar issues with fig; you have to take care with indentation and formatting. I hope this helps.
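One way to spot stray characters or mixed tabs and spaces (assuming GNU cat is available) is to make the non-printing characters visible:
# tabs show up as ^I, line endings as $, and other control characters become visible too
cat -A fig.yml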
TL;DR
Turned the computer off and then on again.
Resolved.
