Docker Container Connection Refused on macOS

I have this docker-compose file:
networks:
  default:
    ipam:
      config:
        - subnet: 10.48.0.0/16
          gateway: 10.48.0.1
services:
  haproxy:
    build: haproxy
    container_name: haproxy
    volumes:
      - ./haproxy/conf/:/usr/local/etc/haproxy/
      - ./haproxy/ssl/:/etc/ssl/xip.io/
    ports:
      - "80:80"
      - "443:443"
    networks:
      default:
        ipv4_address: 10.48.0.2
  server:
    build: server
    container_name: server
    restart: always
    environment:
      - ENV=env=production db=true
    ports:
      - "8081:8081"
    volumes:
      - ./server/config:/usr/src/app/config
    depends_on:
      - haproxy
    networks:
      default:
        ipv4_address: 10.48.0.4
  frontend:
    build: frontend
    container_name: frontend
    restart: always
    ports:
      - "8080:8080"
    volumes:
      - ./frontend/config:/usr/src/app/config
    depends_on:
      - server
    networks:
      default:
        ipv4_address: 10.48.0.5
version: '2'
This is in order to deploy a backend server and a frontend interface inside a subnet defined in the range 10.48.0.0/16.
So I tried to assign a fixed IP to each container. On Linux everything is fine and I can reach 10.48.0.4:8081/api, but on macOS, when I try the same thing, I get ERR_CONNECTION_REFUSED.
Connecting without the IP, with localhost:8081/api, works. But with multiple containers I have to reach each one directly by its IP.
Inside each container, if I ping the other container's IP address (for example, from the frontend container at 10.48.0.5 I ping 10.48.0.4), everything is OK.
So my question is: how can I make an HTTP call to an API that lives in another service? Thanks for your help.

I've read everywhere that this is a well-known situation on Windows and macOS, but not on Linux, where it is possible to make requests from the client side directly to the container's IP address. This is not possible on macOS, and there is still an open issue about it on GitHub.
In this case, I've used HAProxy to proxy requests to each container.
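A minimal sketch of that idea, assuming the HAProxy configuration routes by service name rather than by fixed IP (the backend entries in the comments are illustrative, not taken from my real haproxy.cfg): only HAProxy publishes ports on the host, and the containers reach each other through Docker's DNS using their service names.
version: '2'
services:
  haproxy:
    build: haproxy
    volumes:
      - ./haproxy/conf/:/usr/local/etc/haproxy/
    ports:
      - "80:80"
      - "443:443"
    # haproxy.cfg in ./haproxy/conf/ would declare backends by service name, e.g.
    #   server api server:8081
    #   server web frontend:8080
  server:
    build: server
    expose:
      - "8081"   # reachable as http://server:8081 from other containers, not from the host
  frontend:
    build: frontend
    expose:
      - "8080"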

Related

Docker application not communicating with Docker MySQL container

I just encountered a problem. I am dockerizing a Spring Boot application with MySQL as the database; it works perfectly in a local setup. But when I try to dockerize the application using docker-compose, the MySQL container works fine and is accessible in my workbench, but my application cannot reach it and throws a communications link failure.
This is the compose file I am using:
version: "3.8"
services:
mysqldb:
image: mysql:5.7
restart:unless-stopped
environment:
- MYSQL_ROOT_PASSWORD=root
- MYSQL_DATABASE=baskartest
ports:
- 3307:3306
volumes:
- db:/var/lib/mysql
app:
depends_on:
- mysqldb
build: ./bezkoder-app
restart:on-failure
env_file: ./.env
ports:
- 8084:8080
environment:
SPRING_APPLICATION_JSON: '{
"spring.datasource.url" : "jdbc:mysql://mysqldb:3306/baskartest?useSSL=false",
"spring.datasource.username" : "root",
"spring.datasource.password" : "root",
"spring.jpa.properties.hibernate.dialect" : "org.hibernate.dialect.MySQL5InnoDBDialect",
"spring.jpa.hibernate.ddl-auto" : "update"
}'
volumes:
- .m2:/root/.m2
stdin_open: true
tty: true
MySQL is working fine, but my app service is not able to communicate with it.
Any help would be appreciated!
I made some small changes to the docker-compose.yml file; please use this one, it works fine.
version: "3.8"
services:
mysqldb:
image: mysql:5.7
container_name: MYSQL_DB
restart: unless-stopped
environment:
- MYSQL_ROOT_PASSWORD=root
- MYSQL_DATABASE=baskartest
ports:
- 3307:3306
app:
build: ./bezkoder-app
restart: on-failure
ports:
- 8084:8080
environment:
- SPRING_DATASOURCE_URL=jdbc:mysql://MYSQL_DB:3306/baskartest
- SPRING_DATASOURCE_USERNAME=root
- SPRING_DATASOURCE_PASSWORD=root
- SPRING_JPA_HIBERNATE_DDL_AUTO=update
depends_on:
- mysqldb
In my short experience with Docker and containers communicating with each other, I've found that localhost configuration URLs don't work, because one container can't reach another container using just localhost:[PORT].
A lazy way for me to fix this is to work out a static IP address of the machine the containers are running on, and then use this (local) IP address instead of localhost to define the database endpoint.
In my situation with Redis I'm doing it like this:
return new LettuceConnectionFactory(new RedisStandaloneConfiguration("192.168.1.201", 6379));
In your situation, you might want to use a Datasource URL like this:
- SPRING_DATASOURCE_URL=jdbc:mysql://[STATIC_IP_OF_MACHINE]:3306/baskartest
e.g. ...192.168.1.100:3306/baskartest...
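In compose terms, a minimal sketch of this approach could look like the following (192.168.1.100 is a placeholder for whatever static IP your machine actually has). One caveat: with the ports mapping 3307:3306 used above, the database is published on the host on port 3307, so a connection that goes through the host's IP needs that port.
services:
  app:
    build: ./bezkoder-app
    ports:
      - 8084:8080
    environment:
      # placeholder IP: substitute the static LAN IP of the machine running Docker;
      # 3307 is the host-side port from the "3307:3306" mapping on mysqldb
      - SPRING_DATASOURCE_URL=jdbc:mysql://192.168.1.100:3307/baskartest?useSSL=false
      - SPRING_DATASOURCE_USERNAME=root
      - SPRING_DATASOURCE_PASSWORD=root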

Cannot make an HTTP request between two Laravel Docker containers

Let me start off by stating that I know this question has been asked on many forums. I have read them all.
I have two Docker containers that are built with docker-compose and contain a Laravel project each. They are both attached to a network and can ping one another successfully; however, when I make a request from Postman to one backend, which then makes a curl request to the other, I get a connection refused error.
This is my docker-compose file for each project respectively:
version: '3.8'
services:
  bumblebee:
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - .:/app
    networks:
      - picknpack
    ports:
      - "8010:8000"
networks:
  picknpack:
    external: true
version: '3.8'
services:
  optimus:
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - .:/app
    ports:
      - "8020:8000"
    networks:
      - picknpack
    depends_on:
      - optimus_db
  optimus_db:
    image: mysql:8.0.25
    environment:
      MYSQL_DATABASE: optimus
      MYSQL_USER: test
      MYSQL_PASSWORD: test1234
      MYSQL_ROOT_PASSWORD: root
    volumes:
      - ./storage/dbdata:/var/lib/mysql
    ports:
      - "33020:3306"
networks:
  picknpack:
    external: true
I would love to keep messing with configuration files but I have a deadline to meet and nothing is working, any help would be appreciated.
EDIT
Inspecting the network shows both containers attached to the picknpack network (docker network inspect output omitted).
Within the Docker network that I created, both containers are exposed on port 8000, as per their Dockerfiles. The answer was staring me square in the face: 'Connection refused on port 80'. The HTTP client was defaulting to port 80 rather than 8000. I updated the curl request to hit port 8000 and it works now. Thanks to user3532758 for your help. Note that the containers are mapped to ports 8010 and 8020 only on the external local network (the host), not within the Docker network; inside it they are both served on port 8000 with different IPs.
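As a minimal sketch of the takeaway (OPTIMUS_BASE_URL is a hypothetical variable name the calling code would read; it is not part of the original setup): container-to-container requests should target the service name and the container port, while 8010/8020 only matter when calling from the host.
version: '3.8'
services:
  bumblebee:
    build:
      context: .
      dockerfile: Dockerfile
    networks:
      - picknpack
    ports:
      - "8010:8000"   # host:container; only matters for requests coming from the host
    environment:
      # hypothetical variable; inside the picknpack network, optimus listens on 8000, not 8020
      - OPTIMUS_BASE_URL=http://optimus:8000
networks:
  picknpack:
    external: true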

Access to local database denied through docker container

I am having a problem connecting my compiled Spring-Boot app to the database that I have running on another container on my server.
I have tried different configurations, changing from localhost to the IP address of my server for the connection. I also double checked that the credentials matched by logging in via Adminer. Finally, I did a rebuild of the compose and image files several times to ensure that I have all the latest versions.
Compose file:
version: '3.1'
services:
  db:
    image: mariadb
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: mypassword
      MYSQL_DATABASE: marketingappdb
    ports:
      - "3306:3306"
    expose:
      - 3306
    volumes:
      - ./mariadbvolume:/var/lib/mariadb
    networks:
      - marketingapp
  adminer:
    image: adminer
    restart: always
    ports:
      - "8086:8080"
    expose:
      - 8086
    depends_on:
      - db
    networks:
      - marketingapp
  springserver:
    image: marketingapp
    restart: always
    ports:
      - "8091:8091"
    expose:
      - 8091
    depends_on:
      - db
    networks:
      - marketingapp
networks:
  marketingapp:
Spring Server Image:
FROM openjdk:latest
COPY /marketing-app-final.jar .
EXPOSE 8091
ENTRYPOINT ["java", "-jar", "marketing-app-final.jar"]
Application properties for Spring:
server.port = 8091
spring.datasource.url=jdbc:mariadb://0.0.0.0:3306/marketingappdb
spring.datasource.username=root
spring.datasource.password=mypassword
spring.datasource.driver-class-name=org.mariadb.jdbc.Driver
spring.jpa.hibernate.ddl-auto=update
I can connect to the database from my PC using the same app configuration (obviously replacing localhost with the server's IP), so I don't see why I shouldn't be able to do the same from the actual server. Thanks in advance for any help!
Use Docker's DNS to connect your Spring app to the MariaDB container:
jdbc:mariadb://db:3306/marketingappdb
Just a few other hints: you don't need the expose entry for port 3306, since you already bind it to 3306 on the host (and if you only use the database from within the Docker services, you don't need to bind or expose it at all). Also, MariaDB's persistent storage lives in /var/lib/mysql, not /var/lib/mariadb.
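A minimal sketch of the relevant changes, assuming the datasource settings are passed in as compose environment variables (Spring Boot maps SPRING_DATASOURCE_URL to spring.datasource.url, so editing application.properties to point at db instead of 0.0.0.0 works just as well):
version: '3.1'
services:
  db:
    image: mariadb
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: mypassword
      MYSQL_DATABASE: marketingappdb
    volumes:
      # the MariaDB image keeps its data in /var/lib/mysql
      - ./mariadbvolume:/var/lib/mysql
    networks:
      - marketingapp
  springserver:
    image: marketingapp
    restart: always
    ports:
      - "8091:8091"
    environment:
      # "db" is the compose service name, resolved by Docker's embedded DNS
      - SPRING_DATASOURCE_URL=jdbc:mariadb://db:3306/marketingappdb
    depends_on:
      - db
    networks:
      - marketingapp
networks:
  marketingapp: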

Send mail from a container using a postfix container

I'm using an application hosted on a docker container.
This application executes bash scripts / instructions to send mails.
I made another container which executes Postfix as a SMTP Relay.
I want to send mail from my application container via a bash script, using my Postfix container as a relay.
I tried to connect over SSH from my application container to my Postfix container, but that doesn't seem to work.
How can I make it so that a script executed in my application container can use my Postfix relay, while not allowing anything outside of the Docker network (or, even better, only allowing specific containers) to send mail through this relay?
EDIT 1: docker-compose files
Application docker-compose:
version: "3.4"
volumes:
[...]
services:
application:
restart: always
build: ./application
depends_on:
- mariadb
container_name: application
volumes:
[...]
ports:
- "80:80"
- "443:443"
- "5669:5669"
deploy:
restart_policy:
window: 300s
links:
- mariadb
external_links:
- smtp-server
mariadb:
restart: always
image: mariadb
command: mysqld --sql-mode=ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION
container_name: application-mariadb
volumes:
[...]
environment:
MYSQL_RANDOM_ROOT_PASSWORD: "yes"
deploy:
restart_policy:
window: 300s
Here's my docker-compose for my SMTP server:
version: "3.4"
services:
postfix:
restart: always
build: ./postfix
container_name: smtp-server
deploy:
restart_policy:
window: 300s
(A quick reply, because I keep going in circles at work and I'm taking ten minutes to clear my head; I hope it helps.)
Are you using docker-compose? Could you give an example of your YML file? (A little more context.)
[You cannot connect to a container by SSH unless you have "supervisor" installed, which I do not recommend at all.]
From what I see, you only need private networks; you could use this:
https://docs.docker.com/compose/networking/
To hide everything, I also recommend using a load balancer / reverse proxy like Traefik if anything needs to be reachable on port 80 or 443.
That way you only expose one or two ports (80 + 443, for example) and everything else is protected by your reverse proxy.
Notice how I separate the networks according to what each container needs:
bash has access to db and smtp
db has no access to smtp or nginx
nginx has access to bash
nginx has access to the proxy network to expose 80 and 443
no container other than nginx is exposed to the outside
--
version: "3"
services:
bash:
####### use hostname "smtp" as SMTP server
image: bash
depends_on:
- db
networks:
- smtp_internal_network
- internal_network
- data_network
volumes:
- ../html:/var/www/html
restart: always
db:
image: percona:5.7
# ports: # for debug connections and querys
# - 3306:3306
volumes:
- ../db_data:/var/lib/mysql
environment:
MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
MYSQL_DATABASE: ${MYSQL_DATABASE}
MYSQL_USER: ${MYSQL_USER}
MYSQL_PASSWORD: ${MYSQL_PASSWORD}
networks:
- data_network
restart: always
smtp:
image: mwader/postfix-relay
environment:
- POSTFIX_myhostname=smtp.domain.tld
networks:
- smtp_internal_network
restart: always
nginx:
image: nginx
volumes:
- ../html:/var/www/html
networks:
- external_network
- internal_network
labels:
- "traefik.backend=nginx_${COMPOSE_PROJECT_NAME}"
- "traefik.port=80"
- "traefik.frontend.rule=Host:${FRONTEND_RULE}"
- "traefik.frontend.passHostHeader=true"
- "traefik.enable=true"
- "traefik.docker.network=traefik_proxy"
restart: always
depends_on:
- db
- bash
networks:
external_network:
external:
name: traefik_proxy
internal_network:
driver: bridge
smtp_internal_network:
driver: bridge
data_network:
driver: bridge
Edit:
version: "3"
volumes:
[...]
services:
####### use hostname "smtp" as SMTP server in your application
application:
restart: always
build: ./application
depends_on:
- mariadb
volumes:
[...]
ports:
- "80:80"
- "443:443"
- "5669:5669"
deploy:
restart_policy:
window: 300s
networks:
- smtp_external_network
- data_network
mariadb:
restart: always
image: mariadb
command: mysqld --sql-mode=ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION
networks:
- data_network
volumes:
[...]
environment:
MYSQL_RANDOM_ROOT_PASSWORD: "yes"
deploy:
restart_policy:
window: 300s
networks:
smtp_external_network:
external:
name: [ReplaceForFolderParentNameOfSmtpYmlWithoutSquareBrackets]_smtp
data_network:
driver: bridge
--- (in your other file)
services:
  smtp:
    restart: always
    build: ./postfix
    networks:
      - smtp
    deploy:
      restart_policy:
        window: 300s
networks:
  smtp:
    driver: bridge

Docker: how to run my container on a specific local IP

I have such a simple docker-compose.yml file, that is serving some static files:
version: '3'
services:
  nginx:
    container_name: docs_nginx
    image: nginx:latest
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./:/var/www
      - ./docker/vhost.conf:/etc/nginx/conf.d/default.conf
Now it can be accessed at
127.0.0.1
0.0.0.0
Is it possible somehow to tweak the docker-compose.yml to run my container at 127.127.127.127, for example?
As an answer I'll accept a working docker-compose.yml example, because I've read a lot about networking in docs and blog posts but can't figure out what I am doing wrong.
Thanks!
You have to create a network along with the services you wish to run, then give the service a static IP address within the range defined by the base IP and CIDR mask. Since I don't know your network, you will need to adjust this to match your own configuration so the gateway can reach the internet the same way your host machine does. There should be a way to set it up so the gateway routes through the host, but I don't remember off the top of my head how to manage that; I'm sure there is a more specific how-to that describes that process, and I will look for it when I have a chance.
version: '3'
services:
  nginx:
    container_name: docs_nginx
    image: nginx:latest
    ports:
      - "80:80"
      - "443:443"
    networks:
      vpcbr:
        ipv4_address: 10.5.0.6
    volumes:
      - ./:/var/www
      - ./docker/vhost.conf:/etc/nginx/conf.d/default.conf
networks:
  vpcbr:
    driver: bridge
    ipam:
      config:
        - subnet: 10.5.0.0/16
          gateway: 10.5.0.1
https://docs.docker.com/engine/userguide/networking/#custom-network-plugins
This gives a pretty good idea of what's going on when creating a network for Docker containers. When you call docker-compose, it translates the networks key into docker network commands.
Note: I can not guarantee this will work since I don't know your setup. But this should get you in the ball park.
let me know if you have any questions.
You'll want to pick an IP address that your host can route to. If your host IP address is, say, 192.168.0.50, it could look something like this in the yml file:
version: "2"
services: host1:
networks:
mynet:
ipv4_address: 192.168.0.101 networks: mynet:
driver: bridge
ipam:
config:
- subnet: 192.168.0.0/24
You could also specify the IP address in a startup script using the manual steps listed at:
https://jpetazzo.github.io/2013/10/16/configure-docker-bridge-network/
