How to connect two Docker containers, one containing a Hazelcast in-memory data grid and one containing a war file - spring

I have two Docker containers, one with a Hazelcast Java application (the core of the web application, a jar package) and one with a REST service for the web application (a war package). I'm using docker-compose to build the whole project in Docker, and it looks like this:
version: "3"
services:
escomled_datagrid:
image: escomled/escomled_datagrid
build:
context: ./sh_scripts/escomled_data_grid
tomcat:
image: escomled/tomcat
build:
context: ./tomcat/app
ports:
- 8585:8080
depends_on:
- escomled_datagrid
links:
- escomled_datagrid:escomled_datagrid
I also have a Dockerfile for each container:
-escomled_datagrid:
FROM openjdk:8-jdk-alpine as build
WORKDIR /EscomledML
COPY ./. ./
COPY ./escomled.properties /home/escomled/escomled_server/config/escomled.properties
CMD ["sh","/EscomledML/escomled_data_grid.sh","start"]
EXPOSE 8085
-tomcat:
FROM tomcat:8.5-alpine
COPY ./sample.war /usr/local/tomcat/webapps/
COPY ./escomled-rest.war /usr/local/tomcat/webapps/
COPY ./escomled.properties /home/escomled/escomled_server/config/escomled.properties
RUN sh -c 'touch /usr/local/tomcat/webapps/sample.war'
RUN sh -c 'touch /usr/local/tomcat/webapps/escomled-rest.war'
EXPOSE 8080
The first container runs an sh script at runtime.
This way everything works fine: the containers start and stay active.
The only problem is that they don't see each other. The Hazelcast server starts and waits for a "member" to connect; the war file (the Hazelcast member) also starts, but they don't "see" each other and won't connect. I put the "links" and "depends_on" tags in the docker-compose file, but that doesn't help.
The code works fine when I start the project locally: first I start the data grid server as a Java application, then I start the Tomcat containing the REST service, and the connection is established in no time.
So my question is: how do I link these two containers so they can see each other and work together?

Try putting the containers in the same network by specifying a bridge network:
version: "3"
services:
escomled_datagrid:
image: escomled/escomled_datagrid
build:
context: ./sh_scripts/escomled_data_grid
networks:
- networknamename
tomcat:
image: escomled/tomcat
build:
context: ./tomcat/app
ports:
- 8585:8080
depends_on:
- escomled_datagrid
links:
- escomled_datagrid:escomled_datagrid
networks:
- networknamename
networks:
networknamename:
driver: bridge
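With both services on that bridge network, Compose's internal DNS resolves the service name escomled_datagrid from inside the tomcat container, so the Hazelcast member in the war has to join via that hostname instead of localhost. A minimal sketch of a programmatic TCP/IP join, assuming the war configures Hazelcast in code rather than XML, and assuming Hazelcast's default port 5701 (the Dockerfile exposes 8085, so substitute whatever port the grid actually binds):

import com.hazelcast.config.Config;
import com.hazelcast.config.JoinConfig;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

public class GridMember {
    public static void main(String[] args) {
        Config config = new Config();
        JoinConfig join = config.getNetworkConfig().getJoin();
        // Multicast discovery is unreliable across Docker bridge networks,
        // so disable it and join explicitly over TCP/IP.
        join.getMulticastConfig().setEnabled(false);
        // "escomled_datagrid" is the compose service name, resolved by Docker's DNS.
        join.getTcpIpConfig().setEnabled(true).addMember("escomled_datagrid:5701");
        HazelcastInstance hz = Hazelcast.newHazelcastInstance(config);
        System.out.println("Cluster members: " + hz.getCluster().getMembers());
    }
}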

Related

How do I have my jar re-deployed and put into the Docker image every time I run compose?

So I know that there are a lot of tutorials on these topics, both Docker and Maven, but I'm having some confusion combining them all together.
I created a multi-module Maven project with 2 modules, 2 Spring applications; let's call them application 1 and application 2.
Starting each one via the IntelliJ IDEA green "run" button works fine; now I'd like to automate things and run via Docker.
I have Dockerfiles that look the same in both cases
(in both modules it's the same, only the JAR name is different):
FROM adoptopenjdk:11-jre-hotspot
MAINTAINER *my name here lol*
ADD https://github.com/ufoscout/docker-compose-wait/releases/download/2.9.0/wait /wait
RUN chmod +x /wait
ARG JAR_FILE=target/*.jar
COPY ${JAR_FILE} application1-0.0.1-SNAPSHOT-jar-with-dependencies.jar
# Block until the hosts in WAIT_HOSTS are reachable, then start the jar.
CMD /wait && java -jar /application1-0.0.1-SNAPSHOT-jar-with-dependencies.jar
I also have a docker-compose.yml:
version: '2.1'
services:
  application1:
    container_name: app1
    build:
      context: ../app1
    image: docker.io/myname/app1:latest
    hostname: app1
    ports:
      - "8080:8080"
    networks:
      - spring-cloud-network-app1
  application2:
    container_name: app2
    build:
      context: ../app2
    depends_on:
      application1:
        condition: service_started
    links:
      - application1
    image: docker.io/myname/app2:latest
    environment:
      WAIT_HOSTS: application1:8080
    ports:
      - "8070:8070"
    networks:
      - spring-cloud-network-app2
networks:
  spring-cloud-network-app1:
    driver: bridge
  spring-cloud-network-app2:
    driver: bridge
What I do currently is:
1. run maven package for each module, which produces files like "application1(-2)-0.0.1-SNAPSHOT-jar-with-dependencies.jar" in both target folders,
2. "docker build -t springio/app1 .",
3. "docker-compose up --build".
And it works, but I feel I'm doing extra steps.
How can I set up the project so that I ONLY have to run docker-compose
(after each time I change things in the code)?
Again, I know it's a quite simple thing, but I kinda lost the logic.
Thanks!
P.S.
Ah, and about the "...docker-compose-wait/releases/download/2.9.0/wait /wait":
it's important that the apps start one after another. I tried different solutions and, unfortunately, none works as well as I would like, but I guess I'll leave it as is.
So, again, if anyone ever wonders how to do the things I asked, here's the answer: you need a multi-stage build Dockerfile.
It'll look like this:
#
# Build stage
#
FROM maven:3.6.0-jdk-11-slim AS build
COPY src /home/app/src
COPY pom.xml /home/app
RUN mvn -f /home/app/pom.xml clean package
#
# Package stage
#
FROM openjdk:11-jre-slim
COPY --from=build /home/app/target/demo-0.0.1-SNAPSHOT.jar /usr/local/lib/demo.jar
EXPOSE 8080
ENTRYPOINT ["java","-jar","/usr/local/lib/demo.jar"]
What it does: the build stage first creates the jar file, then the package stage copies it in and runs it from there.
That allows you to run your app in Docker by running only docker-compose.
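For completeness, a sketch of the matching compose service, under the assumption that each module keeps such a Dockerfile at its root; docker-compose up --build then re-runs the Maven build inside the image whenever the code changes, so no separate mvn package or docker build step is needed:

version: '2.1'
services:
  application1:
    build:
      context: ../app1   # module root containing the multi-stage Dockerfile
    ports:
      - "8080:8080"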

Testing a container against DynamoDB-Local

I wanted to test a container locally before pushing it to AWS ECS.
I ran unit tests against a docker-compose stack including a dynamodb-local container, using a Go (aws-sdk-go-v2) endpoint resolver with http://localhost:8000 as the URL.
Then I wanted to build and test the container locally, and realised I needed to attach it to the default network created by docker-compose. I struggled with this a bit, so I built a stripped-down trial: I created an endpoint resolver with a URL of http://dynamo-local:8000 (the container is named dynamo-local in docker-compose) and attached it to the default network within docker run.
Now that all works, and I can perform the various table operations successfully, but one of the things that confuses me is that if I run the aws cli:
aws --endpoint-url=http://localhost:8000 dynamodb list-tables
then the output shows no tables, when a table definitely exists. I had assumed, naively, that as I can reach port 8000 of the same container through different endpoints, I should be able to access the same resources. Wrong.
Obviously a gap in my education. What am I missing? I need to expand the trial into a proper test of the full app, so it's important to me that I understand what is going on here.
Is there a way I can use the aws cli to access the table?
docker-compose file:
version: '3.5'
services:
  localstack:
    image: localstack/localstack:latest
    container_name: localstack_test
    ports:
      - '4566:4566'
    environment:
      - SERVICES=s3,sns,sqs,lambda
      - DEBUG=1
      - DATA_DIR=
    volumes:
      - './.AWSServices:/tmp/AWSServices'
      - '/var/run/docker.sock:/var/run/docker.sock'
  nginx:
    build:
      context: .
      dockerfile: Dockerfile
    image: chanonry/urlfiles-nginx:latest
    container_name: nginx
    ports:
      - '8080:80'
  dynamodb:
    image: amazon/dynamodb-local:1.13.6
    container_name: dynamo-local
    ports:
      - '8000:8000'
networks:
  default:
    name: test-net
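The thread ends without an answer, but one documented DynamoDB-Local behaviour is worth checking here: unless it is started with the -sharedDb flag, DynamoDB-Local namespaces tables by access key and region, so the Go SDK and the aws cli only see the same tables when their credentials and region match. A hedged sketch of the service with the flag enabled (the command override follows the image's documented java entrypoint):

dynamodb:
  image: amazon/dynamodb-local:1.13.6
  container_name: dynamo-local
  # -sharedDb keeps one shared database regardless of credentials/region
  command: '-jar DynamoDBLocal.jar -sharedDb'
  ports:
    - '8000:8000'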

Can't connect to Spring Boot app running on Docker locally

I'm stuck with the problem that I can't open my REST Spring Boot app on localhost:8091 in the browser.
Here is my docker-compose.yml (everything is deployed locally on Docker Desktop):
version: '3.3'
services:
  postgres:
    build:
      context: services/postgres
      dockerfile: Dockerfile.development
    command: postgres
    ports:
      - "5432:5432"
    environment:
      - POSTGRESS_USER=postgres
      - POSTGRESS_DB=postgres
      - POSTGRESS_PASSWORD=qqq
      - POSTGRES_HOST_AUTH_METHOD=trust
    volumes:
      - "db-data:/var/lib/postgresql/data"
  app:
    build:
      context: services/app
      dockerfile: Dockerfile.development
    command: java -jar ./app.jar
    environment:
      - PORT=8091
    network_mode: host
    image: 'my-java-app'
    ports:
      - 8091:8091
    depends_on:
      - postgres
  angular:
    build:
      context: services/angularfrontend
      dockerfile: Dockerfile.development
    image: 'my-angular-app'
    ports:
      - 80:80
volumes:
  db-data:
The Spring Boot app starts normally on 8091 and connects to the database, but then I can't make calls to its API from my local machine ("connection refused").
The Angular app opens normally (on localhost:80), but it can't make calls to the localhost:8091 Spring Boot app.
The call from the Angular service container to localhost:8091 fails, right?
Try to override, in your Angular frontend container, the call to the backend:
use app:8091 (this is how the backend service is called) instead of localhost:8091.
In the 'angular' container, localhost doesn't translate to the 'app' container.
You can't reach a different container using localhost;
localhost inside a container translates to the IP of that container itself.
Try to make the backend call configurable in your Angular application, then override that configuration in docker-compose using environment.
Do the same for the Spring Boot app: I don't see in the environment that you override the call to postgres.
Expose that configuration in application.properties and override it in docker-compose; after that, remove network_mode: host.
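For example, a minimal sketch of such an override on the Spring Boot side; the DB_URL variable name and the datasource default are hypothetical placeholders, not taken from the original project:

# application.properties: default to localhost for local runs,
# overridable through an environment variable in Docker
spring.datasource.url=${DB_URL:jdbc:postgresql://localhost:5432/postgres}

and in docker-compose, point the override at the postgres service name:

app:
  environment:
    - DB_URL=jdbc:postgresql://postgres:5432/postgres

Spring resolves ${DB_URL:...} from the environment first and falls back to the default after the colon.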
If you really want to use network_mode: host, you don't need to specify <source>:<dest> port mappings, because the app listens on 8091 directly on the host network:
...
app:
  build:
    context: services/app
    dockerfile: Dockerfile.development
  command: java -jar ./app.jar
  environment:
    - PORT=8091
  network_mode: host
  image: 'my-java-app'
  depends_on:
    - postgres
...
If you want to run the java app like the other containers, simply remove this line from the compose file and the network mode will default to bridge:
network_mode: host

Make a request to a Spring API running in a Docker container from a Windows host

So, I searched around for an answer on this matter, but either people don't address the issue or they say there's no problem doing this on their computer (Mac or Linux). It seems like this might be a Windows problem.
I have a Spring API running in a Docker container (Linux container). I use Docker Desktop on Windows and I'm trying to make a request (in Insomnia/Postman/whatever) to that API.
If I run the API locally, making the following request works perfectly:
http://localhost:8080/api/task/
This lists multiple task elements.
I've containerized the application like so:
Dockerfile
FROM openjdk:11.0.7
COPY ./target/spring-api-0.0.1-SNAPSHOT.jar /usr/app/
WORKDIR /usr/app
RUN sh -c 'touch spring-api-0.0.1-SNAPSHOT.jar'
ENTRYPOINT ["java", "-jar", "spring-api-0.0.1-SNAPSHOT.jar"]
docker-compose.yml
version: '3.8'
services:
  api:
    build: .
    depends_on:
      - mysql
    environment:
      - SPRING_DATASOURCE_URL=jdbc:mysql://mysql:3306/test?createDatabaseIfNotExist=true
    ports:
      - "8080:80"
  mysql:
    image: mysql
    ports:
      - "3306:3306"
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_USER=root
      - MYSQL_PASSWORD=root
      - MYSQL_DATABASE=test
If I do docker-compose up, this works without issue.
The problem is, if I try to call the same endpoint as before from localhost, I don't get any response.
Insomnia returns an error saying: Error: Server returned nothing (no headers, no data)
I've also tried connecting to the container's IP (got it from docker inspect), but no luck.
Ports are exposed in docker-compose.yml. What am I missing?
Thanks in advance.
The port mapping is incorrect: the Spring Boot application starts on port 8080 inside the container (visible in the startup log), so the container side of the mapping should be 8080, not 80.
It should be like below:
ports:
  - "8080:8080"

Docker compose, link file outside container

I'm working with docker-compose for a Laravel project with nginx.
This is my docker-compose.yml:
version: '2'
services:
  backend:
    image: my_image
    depends_on:
      - elastic
      - mysql
  mysql:
    image: mysql:8.0.0
  nginx:
    depends_on:
      - backend
    image: my_image
    ports:
      - 81:80
So, my Laravel project is in the backend container, and if I run docker-compose up -d everything is fine: all containers are created and my project is running on port 81.
My problem is that the Laravel project in my backend container has a .env file with the database login, password and other stuff.
How can I edit this file after docker-compose up? Directly in the container is not a good idea; is there a way to link a file outside a container with docker-compose?
Thanks
One approach is to use the env_file directive in docker-compose.yml; there you can point to a file with key-value pairs that will be exported into the container. For example:
web:
  image: nginx:latest
  env_file:
    - .env
  ports:
    - "8181:80"
  volumes:
    - ./code:/code
Then you can configure your application to use these env values.
One catch with this approach is that you need to recreate the containers if you change any value or add a new one (docker-compose down && docker-compose up -d).
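For illustration, a hedged sketch of what such a .env file might contain; the variable names follow Laravel's usual database conventions and the values are placeholders, not taken from the original project:

# .env (hypothetical values)
DB_HOST=mysql
DB_DATABASE=laravel
DB_USERNAME=laravel
DB_PASSWORD=secret

Because docker-compose injects these as real environment variables, Laravel's env() helper picks them up without you having to edit the .env file inside the container.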
