hostname in docker-compose.yml fails to be recognized on Mac (but works on Linux) - macos

I am using the docker-compose 'recipe' below to bring up a container that runs a component of the Storm stream processing framework. I am finding that on macOS, when I enter the container (once it is up and running, via docker exec -t -i <container-id> bash) and run ping storm-supervisor, I get the error 'unknown host'. However, when I run the same docker-compose script on Linux, the host is recognized and the ping succeeds.
The failure to resolve the host leads to problems with the Storm component, but what that component is doing can be ignored for this question. I'm pretty sure that if I could get the Mac's docker-compose behavior to match Linux's, I would have no problem.
I think I am experiencing the issue mentioned in this post:
https://forums.docker.com/t/docker-compose-not-setting-hostname-when-network-mode-host/16728
version: '2'
services:
  supervisor:
    image: sunside/storm-supervisor
    container_name: storm-supervisor
    hostname: storm-supervisor
    network_mode: host
    ports:
      - "8000:8000"
    environment:
      - "LOCAL_HOSTNAME=localhost"
      - "NIMBUS_ADDRESS=localhost"
      - "NIMBUS_THRIFT_PORT=49627"
      - "DRPC_PORT=49772"
      - "DRPCI_PORT=49773"
      - "ZOOKEEPER_ADDRESS=localhost"
      - "ZOOKEEPER_PORT=2181"
Thanks in advance for any leads or tips!

"network_mode: host" does not work well on Docker for Mac. I experienced the same issue when I had a few of my containers on a bridge network and the others on the host network.
However, you can move all your containers to a custom bridge network; that solved it for me.
You can edit your docker-compose.yml file to use a custom bridge network:
version: '2'
services:
  supervisor:
    image: sunside/storm-supervisor
    container_name: storm-supervisor
    hostname: storm-supervisor
    ports:
      - "8000:8000"
    environment:
      - "LOCAL_HOSTNAME=localhost"
      - "NIMBUS_ADDRESS=localhost"
      - "NIMBUS_THRIFT_PORT=49627"
      - "DRPC_PORT=49772"
      - "DRPCI_PORT=49773"
      - "ZOOKEEPER_ADDRESS=localhost"
      - "ZOOKEEPER_PORT=2181"
    networks:
      - storm
networks:
  storm:
    external: true
Also, execute the command below to create the custom network:
docker network create storm
You can verify it with:
docker network ls
Hope it helps.

Related

How to connect to my local machine domain from docker container?

I have a local server with the domain mydomain.com; it is just an alias for localhost:80.
I want to allow requests to mydomain.com from my running Docker container.
When I try to make a request to it, I see:
cURL error 7: Failed to connect to mydomain.com port 80: Connection refused
My docker-compose.yml
version: '3.8'
services:
  nginx:
    container_name: project-nginx
    image: nginx:1.23.1-alpine
    volumes:
      - ./docker/nginx/conf.d/default.conf:/etc/nginx/conf.d/default.conf
      - ./src:/app
    ports:
      - ${NGINX_PORT:-81}:80
    depends_on:
      - project
  server:
    container_name: project
    build:
      context: ./
    environment:
      NODE_MODE: service
      APP_ENV: local
      APP_DEBUG: 1
      ALLOWED_ORIGINS: ${ALLOWED_ORIGINS:-null}
    volumes:
      - ./src:/app
I'm using Docker Desktop for Windows.
What can I do?
I've tried adding
network_mode: "host"
but it ruins my docker-compose startup.
When I try to send a request to host.docker.internal, I see this:
The requested URL was not found on this server. If you entered the URL manually please check your spelling and try again.
The host network is not supported on Windows. If you are using Linux containers on Windows, make sure you have switched Docker Desktop to Linux containers; that backend runs on WSL2, so you should be able to use host networking inside the WSL2 environment itself.
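If the goal is simply to let the container reach a service running on the Windows host, one alternative to host networking (a sketch, assuming Docker Desktop or Docker Engine 20.10+, where the special host-gateway value resolves to the host machine) is to map the domain onto the host with extra_hosts:

```yaml
services:
  server:
    # ...existing build/environment/volumes config from the question...
    extra_hosts:
      # host-gateway is a special value that Docker resolves to the host machine
      - "mydomain.com:host-gateway"
```

Inside the container, requests to mydomain.com would then reach port 80 on the host, provided the local server listens on an interface reachable from Docker and not only on 127.0.0.1.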

Facing 'Error response from daemon' - Windows

I am trying to run Apache Kafka on Windows using Docker, and my docker-compose.yml is as follows:
version: "3"
services:
  spark:
    image: jupyter/pyspark-notebook
    ports:
      - "9092:9092"
      - "4010-4109:4010-4109"
    volumes:
      - ./notebooks:/home/jovyan/work/notebooks/
  zookeeper:
    image: 'bitnami/zookeeper:latest'
    container_name: zookeeper
    ports:
      - '2181:2181'
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes
  kafka:
    image: 'bitnami/kafka:latest'
    container_name: kafka
    ports:
      - '9092:9092'
    environment:
      - KAFKA_BROKER_ID=1
      - KAFKA_LISTENERS=PLAINTEXT://:9092
      - KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://127.0.0.1:9092
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
      - ALLOW_PLAINTEXT_LISTENER=yes
    depends_on:
      - zookeeper
When I execute the command
docker-compose -f docker-compose.yml up
I get an error: Error response from daemon: driver failed programming external connectivity on endpoint kafka-spark-1 (452eae1760b7860e3924c0e630943f825a809272760c8aa8bbb2f58ab2865377): Bind for 0.0.0.0:9092 failed: port is already allocated
I have tried net stop winnat and net start winnat, but unfortunately this didn't work.
Would appreciate any kind of help!
Spark isn't running Kafka, so remove the ports here:
image: jupyter/pyspark-notebook
ports:
  - "9092:9092"
Also, change the variable for Kafka to use the proper hostname, otherwise Spark will not be able to reach it:
KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092
Then you can also remove the ports for the Kafka container, since you wouldn't have access from the host anyway, unless you add external listeners.
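An "external listeners" setup could be sketched like this (not from the original question; it assumes the bitnami image passes these variables through to Kafka and publishes a separate port 9093 for host clients):

```yaml
  kafka:
    image: 'bitnami/kafka:latest'
    ports:
      - '9093:9093'  # only the external listener is published to the host
    environment:
      - KAFKA_BROKER_ID=1
      # INTERNAL is for other containers (e.g. Spark); EXTERNAL is for host clients
      - KAFKA_LISTENERS=INTERNAL://:9092,EXTERNAL://:9093
      - KAFKA_ADVERTISED_LISTENERS=INTERNAL://kafka:9092,EXTERNAL://localhost:9093
      - KAFKA_LISTENER_SECURITY_PROTOCOL_MAP=INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
      - KAFKA_INTER_BROKER_LISTENER_NAME=INTERNAL
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
      - ALLOW_PLAINTEXT_LISTENER=yes
    depends_on:
      - zookeeper
```

Containers would then connect with kafka:9092, while tools on the Windows host would use localhost:9093.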
You may also be interested in an example notebook I use to test PySpark with Kafka.

Cannot make an HTTP request between two Laravel Docker containers

Let me start off by stating that I know this question has been asked on many forums. I have read them all.
I have two Docker containers that are built with docker-compose and contain one Laravel project each. They are both attached to a network and can ping one another successfully; however, when I make a request from Postman to the one backend, which then makes a curl request to the other, I get the connection-refused error shown below.
These are my docker-compose files for each project respectively:
version: '3.8'
services:
  bumblebee:
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - .:/app
    networks:
      - picknpack
    ports:
      - "8010:8000"
networks:
  picknpack:
    external: true
version: '3.8'
services:
  optimus:
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - .:/app
    ports:
      - "8020:8000"
    networks:
      - picknpack
    depends_on:
      - optimus_db
  optimus_db:
    image: mysql:8.0.25
    environment:
      MYSQL_DATABASE: optimus
      MYSQL_USER: test
      MYSQL_PASSWORD: test1234
      MYSQL_ROOT_PASSWORD: root
    volumes:
      - ./storage/dbdata:/var/lib/mysql
    ports:
      - "33020:3306"
networks:
  picknpack:
    external: true
Here you can see the successful ping:
I would love to keep messing with configuration files but I have a deadline to meet and nothing is working, any help would be appreciated.
EDIT
Please see inspection of network:
Within the Docker network that I created, both containers expose port 8000, as per their Dockerfiles. The answer was staring me square in the face: 'Connection refused on port 80'. The HTTP client was defaulting to port 80 rather than 8000. I updated the curl request to hit port 8000 and it works now. Thanks to @user3532758 for your help. Note that the containers are mapped to ports 8010 and 8020 only on the external local network, not within the Docker network; inside it they are both served on port 8000, on different IPs.

Make a request to a spring api running in a docker container from windows host

So, I searched around for an answer on this matter, but either people don't address the issue or they say there's no problem doing this on their computer (Mac or Linux). It seems like this might be a Windows problem.
I have a Spring API running in a Docker container (a Linux container). I use Docker Desktop on Windows and I'm trying to make a request (in Insomnia/Postman/whatever) to that API.
If I run the API locally, making the following request works perfectly:
http://localhost:8080/api/task/
This will list multiple task elements.
I've containerized this application like so:
Dockerfile
FROM openjdk:11.0.7
COPY ./target/spring-api-0.0.1-SNAPSHOT.jar /usr/app/
WORKDIR /usr/app
RUN sh -c 'touch spring-api-0.0.1-SNAPSHOT.jar'
ENTRYPOINT ["java", "-jar", "spring-api-0.0.1-SNAPSHOT.jar"]
docker-compose.yml
version: '3.8'
services:
api:
build: .
depends_on:
- mysql
environment:
- SPRING_DATASOURCE_URL=jdbc:mysql://mysql:3306/test?createDatabaseIfNotExist=true
ports:
- "8080:80"
mysql:
image: mysql
ports:
- "3306:3306"
environment:
- MYSQL_ROOT_PASSWORD=root
- MYSQL_USER=root
- MYSQL_PASSWORD=root
- MYSQL_DATABASE=test
If I do docker-compose up, this works without issue:
The problem is that if I try to call the same endpoint as before from localhost, I don't get any response.
Insomnia returns an error saying: Error: Server returned nothing (no headers, no data)
I've also tried connecting to the container's IP (obtained from docker inspect), but no luck.
The ports are exposed in docker-compose.yml. What am I missing?
Thanks in advance.
The port mapping is incorrect. The Spring Boot application starts on port 8080 inside the container (as the image shows), so the mapping should target container port 8080.
It should look like this:
ports:
  - "8080:8080"
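Putting it together, the corrected api service would look like this (a sketch; only the ports line changes from the question's file, with the host port on the left and the container port on the right):

```yaml
services:
  api:
    build: .
    depends_on:
      - mysql
    environment:
      - SPRING_DATASOURCE_URL=jdbc:mysql://mysql:3306/test?createDatabaseIfNotExist=true
    ports:
      # host:container - Spring Boot listens on 8080 by default
      - "8080:8080"
```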

Traefik - Can't connect via https

I am trying to run Traefik on a Raspberry Pi Docker Swarm (specifically following this guide from the OpenFaaS project: https://github.com/openfaas/faas/blob/master/guide/traefik_integration.md) but have run into some trouble when actually trying to connect via https.
Specifically, there are two issues:
1) When I connect to http://192.168.1.20/ui I am given the username/password prompt. However, the details (unhashed password) generated by htpasswd and used in the docker-compose.yml below are not accepted.
2) Visiting the https version (https://192.168.1.20/ui) does not connect at all. This is the same if I try to connect using the domain I have set in --acme.domains.
When I explore /etc/ I can see that no /etc/traefik/ directory exists, but one should presumably be created, so perhaps this is the root of my problem?
The relevant part of my docker-compose.yml looks like this:
  traefik:
    image: traefik:v1.3
    command: >
      -c --docker=true
      --docker.swarmmode=true
      --docker.domain=traefik
      --docker.watch=true
      --web=true
      --debug=true
      --defaultEntryPoints=https,http
      --acme=true
      --acme.domains='<my domain>'
      --acme.email=myemail@gmail.com
      --acme.ondemand=true
      --acme.onhostrule=true
      --acme.storage=/etc/traefik/acme/acme.json
      --entryPoints='Name:https Address::443 TLS'
      --entryPoints='Name:http Address::80 Redirect.EntryPoint:https'
    ports:
      - 80:80
      - 8080:8080
      - 443:443
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
      - "acme:/etc/traefik/acme"
    networks:
      - functions
    deploy:
      labels:
        - traefik.port=8080
        - traefik.frontend.rule=PathPrefix:/ui,/system,/function
        - traefik.frontend.auth.basic=user:password # <-- relevant credentials from htpasswd here
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 20
        window: 380s
      placement:
        constraints: [node.role == manager]
volumes:
  acme:
Any help very much appreciated.
Due to https://community.letsencrypt.org/t/2018-01-09-issue-with-tls-sni-01-and-shared-hosting-infrastructure/49996, the TLS-SNI challenge (the default) for Let's Encrypt no longer works.
You must use the DNS challenge instead: https://docs.traefik.io/configuration/acme/#dnsprovider
Or wait for the merge of https://github.com/containous/traefik/pull/2701
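As a rough sketch (not from the original guide), switching to the DNS challenge means swapping the ACME flags. The exact flag depends on your Traefik 1.x version (older releases used --acme.dnsProvider, Traefik 1.7 uses --acme.dnschallenge.provider); the provider name and credential variables below are placeholders, so check the linked docs for your provider:

```yaml
  traefik:
    image: traefik:v1.7
    command:
      - --docker=true
      - --docker.swarmmode=true
      - --defaultEntryPoints=https,http
      - --acme=true
      - --acme.domains=<my domain>
      - --acme.email=<email>
      - --acme.storage=/etc/traefik/acme/acme.json
      # DNS challenge instead of the default TLS challenge (Traefik 1.7 syntax)
      - --acme.dnschallenge=true
      - --acme.dnschallenge.provider=cloudflare
      - "--entryPoints=Name:https Address::443 TLS"
      - "--entryPoints=Name:http Address::80 Redirect.EntryPoint:https"
    environment:
      # credentials for the (placeholder) cloudflare provider
      - CLOUDFLARE_EMAIL=<email>
      - CLOUDFLARE_API_KEY=<api key>
```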
