I have a Laravel project that I want to wrap in Docker containers. I added containers for PHP, MySQL, Redis, and Nginx, but laravel-echo-server doesn't work: all connections fail.
My docker-compose.yml config:
version: '3.7'
services:
redis:
image: redis:alpine
container_name: wex_redis
volumes:
- redis_data:/data
command: redis-server --appendonly yes
networks:
- wex-network
app:
build:
context: ./
dockerfile: docker/containers/app/Dockerfile
working_dir: /app
container_name: wex_app
volumes:
- ./:/app
environment:
- DB_PORT=3306
- DB_HOST=database
networks:
- wex-network
web:
build:
context: ./
dockerfile: docker/containers/web/Dockerfile
working_dir: /app
container_name: wex_web
ports:
- 8082:80
- 6001:6001
networks:
- wex-network
database:
image: mysql:8
container_name: wex_db
volumes:
- dbdata:/var/lib/mysql
environment:
- MYSQL_DATABASE=${DB_DATABASE:-wex}
- MYSQL_ROOT_PASSWORD=root
ports:
- 33062:3306
command: --default-authentication-plugin=mysql_native_password
networks:
- wex-network
networks:
wex-network:
driver: bridge
volumes:
dbdata:
redis_data:
"web" container need for install npm dependensies and set nginx. Dockerfile for web container:
FROM node:10-alpine as build
WORKDIR /app
# Load dependencies
COPY ./package*.json ./yarn.lock ./
COPY ./webpack.mix.js ./
RUN yarn install
COPY ./resources/assets /app/resources/assets
COPY ./public/assets /app/public/assets
# Build
RUN yarn run production
# Laravel-echo-server
RUN yarn global add --prod --no-lockfile laravel-echo-server \
&& yarn cache clean
EXPOSE 6001
CMD ["laravel-echo-server", "start"]
# NGINX
FROM nginx:alpine
EXPOSE 80
COPY ./docker/containers/web/nginx.conf /etc/nginx/conf.d/default.conf
COPY ./public /app/public
COPY --from=build /app/public/assets /app/public/assets
Settings for laravel-echo-server:
{
"authHost": "http://localhost",
"authEndpoint": "/broadcasting/auth",
"clients": [],
"database": "redis",
"databaseConfig": {
"redis": {},
"sqlite": {
"databasePath": "/database/laravel-echo-server.sqlite"
}
},
"devMode": true,
"host": null,
"port": "6001",
"protocol": "http",
"socketio": {},
"sslCertPath": "",
"sslKeyPath": "",
"sslCertChainPath": "",
"sslPassphrase": "",
"subscribers": {
"http": true,
"redis": true
},
"apiOriginAllow": {
"allowCors": false,
"allowOrigin": "",
"allowMethods": "",
"allowHeaders": ""
}
}
Connecting to the laravel-echo-server from JS:
import Echo from 'laravel-echo'
window.io = require('socket.io-client');
window.Echo = new Echo({
broadcaster: 'socket.io',
host: window.location.hostname + ':6001'
});
"docker ps" command:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
dddc2f5867b6 wex_app "/var/entrypoint.sh …" 43 minutes ago Up 43 minutes 9000/tcp wex_app
5fb70cf7da97 wex_web "nginx -g 'daemon of…" 19 hours ago Up 43 minutes 0.0.0.0:6001->6001/tcp, 0.0.0.0:8082->80/tcp wex_web
f2f331bf37e0 redis:alpine "docker-entrypoint.s…" 7 days ago Up 43 minutes 6379/tcp wex_redis
224e2af8c151 mysql:8 "docker-entrypoint.s…" 7 days ago Up 43 minutes 33060/tcp, 0.0.0.0:33062->3306/tcp wex_db
Could you please tell me where I made a mistake? Why do all connections to the laravel-echo-server fail?
Have a look at your Dockerfile: it uses multi-stage builds.
There are two stages here, and CMD ["laravel-echo-server", "start"] only appears in the first one. But the aim of a multi-stage build is to produce the final image from the last stage; all previous stages are just intermediate steps that provide artifacts to the last stage.
So when docker-compose runs your web service, it starts a container from that final image, and the container only runs the ENTRYPOINT or CMD of the last stage, which never starts laravel-echo-server.
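One possible fix is to keep the Nginx image as the final stage and run laravel-echo-server from a separate Compose service that builds only the first stage via a build target. This is a minimal sketch, not a drop-in solution: it reuses the Dockerfile, network, and naming scheme from the question, the echo service name is invented, and it assumes laravel-echo-server.json ends up in /app (you may need to COPY it in the node stage or mount it as a volume):
  echo:
    build:
      context: ./
      dockerfile: docker/containers/web/Dockerfile
      target: build              # build only the node stage, whose CMD starts laravel-echo-server
    container_name: wex_echo
    ports:
      - 6001:6001                # publish the socket port here instead of on wex_web
    networks:
      - wex-network
With that in place, the 6001:6001 mapping can be dropped from the web service, since the Nginx container was never running anything on 6001.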
Related
After several hours of trying things, I think I'm too confused to understand what's going wrong... The title explains exactly what I'm trying to get working.
My docker-compose.yml:
version: '3'
services:
mysite.test:
build:
context: ./docker/8.1
dockerfile: Dockerfile
args:
WWWGROUP: '${WWWGROUP}'
image: sail-8.1/app
extra_hosts:
- 'host.docker.internal:host-gateway'
ports:
- '${APP_PORT:-80}:80'
- '443:443' # added for testing but not working...
- '${HMR_PORT:-8080}:8080'
- '5173:5173' # Vite port
environment:
WWWUSER: '${WWWUSER}'
LARAVEL_SAIL: 1
XDEBUG_MODE: '${SAIL_XDEBUG_MODE:-off}'
XDEBUG_CONFIG: '${SAIL_XDEBUG_CONFIG:-client_host=host.docker.internal}'
volumes:
- '.:/var/www/html'
networks:
- sail
depends_on:
- mysql
- minio
mysql:
image: 'mysql/mysql-server:8.0'
ports:
- '${FORWARD_DB_PORT:-3306}:3306'
environment:
MYSQL_ROOT_PASSWORD: '${DB_PASSWORD}'
MYSQL_ROOT_HOST: "%"
MYSQL_DATABASE: '${DB_DATABASE}'
MYSQL_USER: '${DB_USERNAME}'
MYSQL_PASSWORD: '${DB_PASSWORD}'
MYSQL_ALLOW_EMPTY_PASSWORD: 1
volumes:
- 'sail-mysql:/var/lib/mysql'
- './vendor/laravel/sail/database/mysql/create-testing-database.sh:/docker-entrypoint-initdb.d/10-create-testing-database.sh'
networks:
- sail
healthcheck:
test: ["CMD", "mysqladmin", "ping", "-p${DB_PASSWORD}"]
retries: 3
timeout: 5s
networks:
sail:
driver: bridge
volumes:
sail-nginx:
driver: local
sail-mysql:
driver: local
My vite.config.js:
import { defineConfig } from 'vite';
import laravel from 'laravel-vite-plugin';
export default defineConfig({
server: {
https: true,
host: '0.0.0.0'
},
plugins: [
laravel({
input: [
'resources/css/app.css',
'resources/js/app.js',
],
refresh: true
}),
],
});
My .env:
(...)
APP_URL=https://mysite.test
APP_SERVICE=mysite.test
(...)
The result of that configuration is that it works with http://mysite.test but not over HTTPS, which returns:
This site can’t be reached
mysite.test unexpectedly closed the connection.
Does anyone have any tips for me? 🙏
Thank you!
I managed to get this working with Caddy using the following gist:
https://gist.github.com/gilbitron/36d48eb751875bebdf43da0a91c9faec
After all of the above, I added the Vite port to docker-compose.yml under the laravel.test service.
ports:
- '5173:5173'
Also, Vite itself needs an SSL certificate.
vite.config.js:
import { defineConfig } from 'vite';
import laravel from 'laravel-vite-plugin';
import vue from '@vitejs/plugin-vue';
import basicSsl from '@vitejs/plugin-basic-ssl';
export default defineConfig({
server: {
https: true,
host: '0.0.0.0',
hmr: {
host: 'localhost'
},
},
plugins: [
basicSsl(),
laravel({
input: 'resources/js/app.js',
refresh: true,
}),
vue({
template: {
transformAssetUrls: {
base: null,
includeAbsolute: false,
},
},
}),
],
});
For some reason my app generates http links instead of https, so I added the following to the boot method of my AppServiceProvider.php:
\URL::forceScheme('https');
Using the share command, you can run an HTTPS tunnel through Expose. Your domain will look like:
https://[something].expose.dev
Getting a real, globally trusted SSL certificate can only be done through a certificate authority such as Let's Encrypt. They need to verify that you own the domain, either via an HTTP challenge or a DNS challenge.
As you cannot really own a .test domain, your only remaining option is to sign a certificate yourself and add it, along with its root certificate, to your own computer. If you do this, though, only your computer will show the connection as secure.
I am deploying a Laravel app with Laradock and Traefik as a reverse proxy.
When I hit mydomain.com, it shows an error message:
Gateway timed out
When I inspect
docker network inspect traefik_proxy
I get the following:
"Containers": {
"100bbf77ba58214b286d892245fd2008458ed0b266f67e9f3bb7a68589229f7a": {
"Name": "laravelApp_nginx_1",
"EndpointID": "6ae4560837b219a6f0f8e7c9660ba58ffb04f8c48b21e4de12c241a4eb469327",
"MacAddress": "02:42:ac:16:00:04",
"IPv4Address": "172.22.0.4/16",
"IPv6Address": ""
},
"5f04c10a0730963e27cec0c06604ee3fce7b0002eb84695c441aa160bbb0ead0": {
"Name": "laravelApp_workspace_1",
"EndpointID": "9fc52cd4668b97a2bf923be5ae0506bcd0966282465149731a7cc4634d328c75",
"MacAddress": "02:42:ac:16:00:03",
"IPv4Address": "172.22.0.3/16",
"IPv6Address": ""
},
"c56bebf1e98b889714c185c6af964028649a850e7440256d4f8fc1b8a8291d2a": {
"Name": "traefik_traefik_1",
"EndpointID": "704f4d862513f8c2d87f5f737d23a455b012662babf531abec28070f7d732ed6",
"MacAddress": "02:42:ac:16:00:02",
"IPv4Address": "172.22.0.2/16",
"IPv6Address": ""
}
},
That means that Traefik is apparently aware of those containers and they are in the same network.
Here is the Nginx part of my docker-compose.yml:
version: '3.5'
networks:
frontend:
driver: ${NETWORKS_DRIVER}
backend:
driver: ${NETWORKS_DRIVER}
proxy:
external:
name: "traefik_proxy"
### NGINX Server #########################################
nginx:
build:
context: ./nginx
args:
- CHANGE_SOURCE=${CHANGE_SOURCE}
- PHP_UPSTREAM_CONTAINER=${NGINX_PHP_UPSTREAM_CONTAINER}
- PHP_UPSTREAM_PORT=${NGINX_PHP_UPSTREAM_PORT}
- http_proxy
- https_proxy
- no_proxy
volumes:
- ${APP_CODE_PATH_HOST}:${APP_CODE_PATH_CONTAINER}${APP_CODE_CONTAINER_FLAG}
- ${NGINX_HOST_LOG_PATH}:/var/log/nginx
- ${NGINX_SITES_PATH}:/etc/nginx/sites-available
- ${NGINX_SSL_PATH}:/etc/nginx/ssl
expose:
- "${NGINX_HOST_HTTP_PORT}"
- "${NGINX_HOST_HTTPS_PORT}"
labels:
- "traefik.http.routers.nginx.rule=Host(`constancias.sociales.unam.mx`)" #nginx > <service_name>
- "traefik.http.routers.nginx.entrypoints=websecurity" #Para forzar el uso de HTTPS y certificados
- "traefik.http.routers.nginx.tls.certresolver=myresolver" #Para forzar el uso de HTTPS y certificados
#ports:
#- "${NGINX_HOST_HTTP_PORT}:80"
#- "${NGINX_HOST_HTTPS_PORT}:443"
#- "${VARNISH_BACKEND_PORT}:81"
depends_on:
- php-fpm
networks:
- frontend
- backend
- proxy
And this is the Traefik docker-compose.yml file:
version: '3.7'
services:
traefik: # apache.app.test - nginx.app.test
image: traefik:v2.6.1
labels:
- "traefik.enable=true" # enable the dashboard
- "traefik.http.routers.traefik.rule=Host(`domain.com`) && (PathPrefix(`/api`) || PathPrefix(`/dashboard`))" #Estos labels son para acceder al dashboard con dominio
- "traefik.http.routers.traefik.entrypoints=traefik" #Para indicarle el puerto para acceder al dashboard, agregado al dominio especificado en la líea anterior y definido en traefik.yml.
- "traefik.http.routers.traefik.service=api#internal"
- "traefik.http.routers.traefik.middlewares=auth"
- "traefik.http.routers.traefik.tls=true"
- "traefik.http.middlewares.auth.basicauth.users=user1:$$apr1$$d9kcyTHX$$1HbJdddKRiU97fi."
- "traefik.http.routers.traefik.entrypoints=websecurity" #Hace referencia a "traefik.yml" Para el https
- "traefik.http.routers.traefik.tls.certresolver=myresolver" #Hace referencia a "traefik.yml" El que nos va a resolver el certificado
ports:
- 80:80
- 443:443
volumes:
- /var/run/docker.sock:/var/run/docker.sock # For rootless Docker, for example, put only "/run/user/1001/docker.sock" on the left-hand side, according to $ echo $XDG_RUNTIME_DIR
- ./traefik.yml:/traefik.yml
- ./acme.json:/acme.json
- ./traefik.log:/traefik.log
- ./config.yml:/config.yml
networks:
traefik_network:
restart: unless-stopped
networks:
traefik_network:
name: traefik_proxy
driver: bridge
ipam:
driver: default
Any ideas on what to check, or any workaround?
How do I fix it?
Troubleshooting for my future self (worked around / solved)
This error shows up whenever I rebuild a container, for example the php-worker after I added the wkhtmltopdf library or the PHP zip extension:
docker build php-worker
Follow these two steps:
Inside the laradock directory, restart the containers:
docker-compose restart
Inside the traefik directory, execute
docker-compose restart at least twice or even three times, and wait patiently for it to reload.
The application should finally be up and running.
I have managed to get Browsersync partially working, however, whenever I make a request from my app after the page has loaded, the request is sent to the wrong URL.
The following are my Browsersync settings:
mix.browserSync({
proxy: 'localhost',
port: 8082,
injectChanges: false,
open: false,
files: [
'./public/css/*.css',
'./public/js/*.js',
'./resources/views/**/*.blade.php',
'./resources/js/**/*.vue',
'./resources/css/**/*.css',
],
})
And here is my docker-compose.yml:
version: '3'
services:
laravel.test:
build:
context: ./docker/8.1
dockerfile: Dockerfile
args:
WWWGROUP: '${WWWGROUP}'
image: sail-8.0/app
working_dir: /var/www/html
extra_hosts:
- 'host.docker.internal:host-gateway'
ports:
- '${APP_PORT:-80}:80'
- "9865:22"
- "8082:8082"
- "8083:8083"
# ...
I navigate to http://localhost:8082/ and the page loads fine and Browsersync says it's connected. When I submit my form though, I get the following:
From my understanding, this is an issue with the proxy not having the port; however, when I tried changing the proxy to localhost:8082, I couldn't connect at all. I also tried other suggestions, such as changing it to 127.0.0.1, which made no difference.
I fixed my issue by setting the following configuration:
mix.browserSync({
proxy: 'localhost:8081',
port: 8082,
ui: {
port: 8083,
},
injectChanges: false,
open: false,
files: [
'./public/css/*.css',
'./public/js/*.js',
'./resources/views/**/*.blade.php',
'./resources/js/**/*.vue',
'./resources/css/**/*.css',
],
watchOptions: {
usePolling: true,
interval: 500,
},
ghostMode: false,
})
Where 8081 is the APP_PORT set in the .env file. In other words, Browsersync's proxy needs to point at the same port your app is running on.
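For reference, a hypothetical view of how the ports line up under that setup (service name taken from the question's compose file; APP_PORT=8081 in .env is assumed):
services:
  laravel.test:
    ports:
      - '${APP_PORT:-80}:80'   # becomes 8081:80 when .env sets APP_PORT=8081
      - '8082:8082'            # Browsersync proxy port, the one you browse to
      - '8083:8083'            # Browsersync UI port
That is, you browse to Browsersync on 8082, and it proxies the app that is published on 8081.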
I have a simple Spring Boot app connecting to two different SQL Server databases. When they are all hosted locally, I have no issues, but I need to run each of them in a separate Docker container, and when I do that I get an SQLServerException at the start of my Spring Boot app telling me:
com.microsoft.sqlserver.jdbc.SQLServerException: The TCP/IP connection to the host '172.21.0.3', port 1434 has failed. Error: "'172.21.0.3'. Verify the connection properties.
Where 172.21.0.3 is the IP of one of my databases and 1434 is its port.
I use a Docker network called network_gls (which doesn't seem to work) to connect my containers (gls_app, mssql_1 & mssql_2) together. When I execute:
docker inspect network_gls
(NOTE: this command is executed after the Spring Boot app container starts and before its error occurs)
I get the following result :
[
{
"Name": "network_gls",
"Id": "88895acb2247b3b63b0cc29656fcb6d1a0d4a8192a8c7c1bb7b79362509e0742",
"Created": "2020-09-28T15:21:39.995019917Z",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "172.21.0.0/16",
"Gateway": "172.21.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"0754d8766736806549e99500c143420c556e9370c14f897f6beb82c24a3c1124": {
"Name": "mssql_1",
"EndpointID": "6d886cf8f2aed256d8cbc7141d9ea5242f7ce61d95ae5412c16905d1b490f133",
"MacAddress": "02:42:ac:15:00:02",
"IPv4Address": "172.21.0.2/16",
"IPv6Address": ""
},
"54d20a9a409053eaf53eb5c7e73e340ab29c12ceaf8ac20b109d1403cba0c3d3": {
"Name": "mssql_2",
"EndpointID": "e675f72fc6c737201a31dd485496e749d386165eaa90a6647e0bf13507683028",
"MacAddress": "02:42:ac:15:00:03",
"IPv4Address": "172.21.0.3/16",
"IPv6Address": ""
},
"7e4ae1a46358fe9081c5277cb52ec49681b44631d6d9c1cdcaf6116326277d37": {
"Name": "gls_app",
"EndpointID": "d9051cd0134f5074b2b756b44b60cced85d2cac2fd04653e0f52ddb9ada339b9",
"MacAddress": "02:42:ac:15:00:04",
"IPv4Address": "172.21.0.4/16",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {}
}
]
And in my Spring Boot application, my connection string looks like this (example for the database in mssql_2):
jdbc:sqlserver://172.21.0.3:1433;DatabaseName=gls
Docker networking is new to me, so tell me if I'm missing any important information in this question.
Thanks in advance.
In my case it works when I use the container name instead of the IP address.
So instead of:
jdbc:sqlserver://172.21.0.3:1433;DatabaseName=gls
try this:
jdbc:sqlserver://mssql_2:1433;DatabaseName=gls
You can also try publishing your ports if you want to test whether the problem is with your networking.
https://docs.docker.com/config/containers/container-networking/
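If you go the port-publishing route for testing, a hedged Compose-style sketch (service names from the question; the exact host ports are just an example) could look like:
services:
  mssql_1:
    ports:
      - "1433:1433"   # host 1433 -> container 1433
  mssql_2:
    ports:
      - "1434:1433"   # host 1434 -> container 1433
Note that published ports only matter for connections from the host; between containers on network_gls, both SQL Server instances are still reached on port 1433 via their names.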
Edit: Thanks to David for pointing out that container_name is not required; you can connect using the service name.
You can create a Docker Compose file and use it to start your DB. Below is an example of the docker-compose file and the application.properties.
You can use the Docker service name when connecting to the DB from another container. To connect from localhost: in most cases the ports are exposed as port:port, so the DB can be reached as localhost.
docker-compose build web
docker-compose up db
docker-compose up web
or
docker-compose up
You can use localhost when you are not accessing it from a container.
Docker Compose File
version: "3.3"
services:
web:
build:
context: ./
dockerfile: Dockerfile
image: web:latest
ports:
- 8080:8080
environment:
POSTGRES_JDBC_USER: USERNAME
POSTGRES_JDBC_PASS: PASSWORD
SPRING_DATASOURCE_URL: "jdbc:postgresql://db:5432/DATABASE"
SPRING_PROFILES_ACTIVE: dev
command: mvn spring-boot:run -Dspring.profiles.active=dev
depends_on:
- db
- rabbitmq
db:
image: "postgres:9.6-alpine"
ports:
- 5432:5432
expose:
- 5432
volumes:
- postgres:/var/lib/postgresql/data
environment:
POSTGRES_USER: USERNAME
POSTGRES_PASSWORD: PASSWORD
POSTGRES_DB: DATABASE
volumes:
postgres:
app:
This is the application.properties (for local development):
spring.jpa.database-platform=org.hibernate.dialect.PostgreSQLDialect
spring.datasource.url=jdbc:postgresql://localhost:5432/database
spring.datasource.username=USERNAME
spring.datasource.password=PASSWORD
Hope this answers your question.
I am using the Consul Docker image from Docker Hub. I wanted to know if there is a way to store the key/value settings in a config file that the Docker image can load on boot. I understand that the image has the /consul/config and /consul/data volumes that can be used, but I have not found a way to achieve this.
The following is how I run Consul:
version: '3.4'
services:
consul:
container_name: consul
image: consul:latest
ports:
- "8500:8500"
- "8300:8300"
volumes:
- ./consul:/consul/config
In my host consul directory I have a file called config.json, which contains the following:
{
"node_name": "consul_server",
"data_dir": "/data",
"log_level": "INFO",
"client_addr": "0.0.0.0",
"bind_addr": "0.0.0.0",
"ui": true,
"server": true,
"bootstrap_expect": 1
}