I am deploying a Laravel app with Laradock and Traefik as a reverse proxy.
When I hit mydomain.com, it shows an error message:
Gateway timed out
When I inspect
docker network inspect traefik_proxy
I get the following:
"Containers": {
    "100bbf77ba58214b286d892245fd2008458ed0b266f67e9f3bb7a68589229f7a": {
        "Name": "laravelApp_nginx_1",
        "EndpointID": "6ae4560837b219a6f0f8e7c9660ba58ffb04f8c48b21e4de12c241a4eb469327",
        "MacAddress": "02:42:ac:16:00:04",
        "IPv4Address": "172.22.0.4/16",
        "IPv6Address": ""
    },
    "5f04c10a0730963e27cec0c06604ee3fce7b0002eb84695c441aa160bbb0ead0": {
        "Name": "laravelApp_workspace_1",
        "EndpointID": "9fc52cd4668b97a2bf923be5ae0506bcd0966282465149731a7cc4634d328c75",
        "MacAddress": "02:42:ac:16:00:03",
        "IPv4Address": "172.22.0.3/16",
        "IPv6Address": ""
    },
    "c56bebf1e98b889714c185c6af964028649a850e7440256d4f8fc1b8a8291d2a": {
        "Name": "traefik_traefik_1",
        "EndpointID": "704f4d862513f8c2d87f5f737d23a455b012662babf531abec28070f7d732ed6",
        "MacAddress": "02:42:ac:16:00:02",
        "IPv4Address": "172.22.0.2/16",
        "IPv6Address": ""
    }
},
That means that Traefik is apparently aware of those containers and they are in the same network.
Here is the Nginx part of my docker-compose.yml:
version: '3.5'
networks:
  frontend:
    driver: ${NETWORKS_DRIVER}
  backend:
    driver: ${NETWORKS_DRIVER}
  proxy:
    external:
      name: "traefik_proxy"
services:
### NGINX Server #########################################
  nginx:
    build:
      context: ./nginx
      args:
        - CHANGE_SOURCE=${CHANGE_SOURCE}
        - PHP_UPSTREAM_CONTAINER=${NGINX_PHP_UPSTREAM_CONTAINER}
        - PHP_UPSTREAM_PORT=${NGINX_PHP_UPSTREAM_PORT}
        - http_proxy
        - https_proxy
        - no_proxy
    volumes:
      - ${APP_CODE_PATH_HOST}:${APP_CODE_PATH_CONTAINER}${APP_CODE_CONTAINER_FLAG}
      - ${NGINX_HOST_LOG_PATH}:/var/log/nginx
      - ${NGINX_SITES_PATH}:/etc/nginx/sites-available
      - ${NGINX_SSL_PATH}:/etc/nginx/ssl
    expose:
      - "${NGINX_HOST_HTTP_PORT}"
      - "${NGINX_HOST_HTTPS_PORT}"
    labels:
      - "traefik.http.routers.nginx.rule=Host(`constancias.sociales.unam.mx`)" # nginx > <service_name>
      - "traefik.http.routers.nginx.entrypoints=websecurity" # To force the use of HTTPS and certificates
      - "traefik.http.routers.nginx.tls.certresolver=myresolver" # To force the use of HTTPS and certificates
    #ports:
    #  - "${NGINX_HOST_HTTP_PORT}:80"
    #  - "${NGINX_HOST_HTTPS_PORT}:443"
    #  - "${VARNISH_BACKEND_PORT}:81"
    depends_on:
      - php-fpm
    networks:
      - frontend
      - backend
      - proxy
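Since the container sits on several networks, I understand Traefik usually also needs to be told explicitly which network and container port to use. A minimal sketch of those extra labels (the values here are assumptions, not taken from my current file):

labels:
  - "traefik.enable=true"
  - "traefik.docker.network=traefik_proxy"                      # network Traefik should use to reach the container
  - "traefik.http.services.nginx.loadbalancer.server.port=80"   # port nginx listens on inside the container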
And this is the Traefik docker-compose.yml file:
version: '3.7'
services:
  traefik: # apache.app.test - nginx.app.test
    image: traefik:v2.6.1
    labels:
      - "traefik.enable=true" # enable the dashboard
      - "traefik.http.routers.traefik.rule=Host(`domain.com`) && (PathPrefix(`/api`) || PathPrefix(`/dashboard`))" # These labels expose the dashboard on the domain
      - "traefik.http.routers.traefik.entrypoints=traefik" # Entrypoint used to reach the dashboard, appended to the domain on the previous line and defined in traefik.yml
      - "traefik.http.routers.traefik.service=api@internal"
      - "traefik.http.routers.traefik.middlewares=auth"
      - "traefik.http.routers.traefik.tls=true"
      - "traefik.http.middlewares.auth.basicauth.users=user1:$$apr1$$d9kcyTHX$$1HbJdddKRiU97fi."
      - "traefik.http.routers.traefik.entrypoints=websecurity" # Refers to "traefik.yml", for HTTPS
      - "traefik.http.routers.traefik.tls.certresolver=myresolver" # Refers to "traefik.yml", the resolver that issues the certificate
    ports:
      - 80:80
      - 443:443
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock # For rootless Docker, for example, put only "/run/user/1001/docker.sock" on the left-hand side, per $ echo $XDG_RUNTIME_DIR
      - ./traefik.yml:/traefik.yml
      - ./acme.json:/acme.json
      - ./traefik.log:/traefik.log
      - ./config.yml:/config.yml
    networks:
      traefik_network:
    restart: unless-stopped

networks:
  traefik_network:
    name: traefik_proxy
    driver: bridge
    ipam:
      driver: default
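The websecurity entrypoint and myresolver certificate resolver referenced by the labels are defined in traefik.yml; roughly, the relevant parts of that file look like this (a sketch only, with the ACME email and challenge type assumed):

entryPoints:
  web:
    address: ":80"
  websecurity:
    address: ":443"

certificatesResolvers:
  myresolver:
    acme:
      email: admin@example.com   # assumption
      storage: acme.json
      httpChallenge:
        entryPoint: web          # assumption: HTTP-01 challenge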
Any ideas on what to check, or any workaround?
How do I fix it?
Troubleshooting for my future self (worked around / solved)
This error shows up whenever I rebuild a container, for example the php-worker after adding the wkhtmltopdf library or the PHP zip extension:
docker-compose build php-worker
Then follow these two steps (combined into a single command sequence below):
Inside the laradock directory, restart the containers:
docker-compose restart
Inside the traefik directory, execute
docker-compose restart
at least twice or even three times, and wait patiently for everything to reload.
The application should finally be up and running.
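A minimal sketch of that sequence as shell commands (the directory paths are assumptions about where the two stacks live):

# rebuild the service that changed, then restart the Laradock stack
cd ~/laradock && docker-compose build php-worker && docker-compose restart

# restart the Traefik stack; this may need to be repeated a couple of times
cd ~/traefik && docker-compose restart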
Related
After several hours of trying things, I think I'm too confused to understand what's going wrong... The title explains exactly what I'm trying to get working.
My docker-compose.yml:
version: '3'
services:
  mysite.test:
    build:
      context: ./docker/8.1
      dockerfile: Dockerfile
      args:
        WWWGROUP: '${WWWGROUP}'
    image: sail-8.1/app
    extra_hosts:
      - 'host.docker.internal:host-gateway'
    ports:
      - '${APP_PORT:-80}:80'
      - '443:443' # added for test but not working...
      - '${HMR_PORT:-8080}:8080'
      - '5173:5173' # Vite port
    environment:
      WWWUSER: '${WWWUSER}'
      LARAVEL_SAIL: 1
      XDEBUG_MODE: '${SAIL_XDEBUG_MODE:-off}'
      XDEBUG_CONFIG: '${SAIL_XDEBUG_CONFIG:-client_host=host.docker.internal}'
    volumes:
      - '.:/var/www/html'
    networks:
      - sail
    depends_on:
      - mysql
      - minio
  mysql:
    image: 'mysql/mysql-server:8.0'
    ports:
      - '${FORWARD_DB_PORT:-3306}:3306'
    environment:
      MYSQL_ROOT_PASSWORD: '${DB_PASSWORD}'
      MYSQL_ROOT_HOST: "%"
      MYSQL_DATABASE: '${DB_DATABASE}'
      MYSQL_USER: '${DB_USERNAME}'
      MYSQL_PASSWORD: '${DB_PASSWORD}'
      MYSQL_ALLOW_EMPTY_PASSWORD: 1
    volumes:
      - 'sail-mysql:/var/lib/mysql'
      - './vendor/laravel/sail/database/mysql/create-testing-database.sh:/docker-entrypoint-initdb.d/10-create-testing-database.sh'
    networks:
      - sail
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-p${DB_PASSWORD}"]
      retries: 3
      timeout: 5s
networks:
  sail:
    driver: bridge
volumes:
  sail-nginx:
    driver: local
  sail-mysql:
    driver: local
My vite.config.js:
import { defineConfig } from 'vite';
import laravel from 'laravel-vite-plugin';

export default defineConfig({
    server: {
        https: true,
        host: '0.0.0.0'
    },
    plugins: [
        laravel({
            input: [
                'resources/css/app.css',
                'resources/js/app.js',
            ],
            refresh: true
        }),
    ],
});
My .env:
(...)
APP_URL=https://mysite.test
APP_SERVICE=mysite.test
(...)
The result of that configuration is that it works with http://mysite.test but not over HTTPS. That returns:
This site can’t be reached
mysite.test unexpectedly closed the connection.
Does anyone have a tips for me? 🙏
Thank you!
I managed to get this working with Caddy using the following gist
https://gist.github.com/gilbitron/36d48eb751875bebdf43da0a91c9faec
After all of the above, I added the Vite port to docker-compose.yml under the laravel.test service:
ports:
  - '5173:5173'
Also, Vite itself needs an SSL certificate.
vite.config.js:
import { defineConfig } from 'vite';
import laravel from 'laravel-vite-plugin';
import vue from '@vitejs/plugin-vue';
import basicSsl from '@vitejs/plugin-basic-ssl'

export default defineConfig({
    server: {
        https: true,
        host: '0.0.0.0',
        hmr: {
            host: 'localhost'
        },
    },
    plugins: [
        basicSsl(),
        laravel({
            input: 'resources/js/app.js',
            refresh: true,
        }),
        vue({
            template: {
                transformAssetUrls: {
                    base: null,
                    includeAbsolute: false,
                },
            },
        }),
    ],
});
For some reason my app generates http links, not https, so I added the following to the boot method of my AppServiceProvider.php:
\URL::forceScheme('https');
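In context, the boot method ends up looking roughly like this (a minimal sketch of app/Providers/AppServiceProvider.php):

<?php

namespace App\Providers;

use Illuminate\Support\Facades\URL;
use Illuminate\Support\ServiceProvider;

class AppServiceProvider extends ServiceProvider
{
    public function boot()
    {
        // Force all generated URLs (assets, routes) to use https
        URL::forceScheme('https');
    }
}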
Using the share command, you can run an HTTPS tunnel through Expose. Your domain will look like:
https://[something].expose.dev
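If the project uses Laravel Sail, the share command referred to here is run through the sail script (a usage sketch):

# open an Expose tunnel to the local app
./vendor/bin/sail share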
A real, globally trusted SSL certificate can only be issued by a certificate authority such as Let's Encrypt. The CA needs to verify that you own the domain, either by an HTTP challenge or a DNS challenge.
Since you cannot really own a .test domain, the only option left is to sign a certificate yourself and add it, along with its root certificate, to your own machine. If you do this, only your computer will show the connection as secure, though.
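One common way to do that locally is mkcert, which creates a local CA and installs its root certificate into your trust stores (an example tool, not mentioned above):

# create and trust a local root CA, then issue a certificate for the .test domain
mkcert -install
mkcert mysite.test
# typically produces mysite.test.pem and mysite.test-key.pem for the web server (or Vite) to use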
I am working on a Laravel project. I started writing browser tests using Dusk. I am using Docker as my development environment. When I run the tests, I get a "Connection refused" error.
This is my docker-compose.yaml file.
version: '3'
services:
  apache:
    container_name: res_apache
    image: webdevops/apache:ubuntu-16.04
    environment:
      WEB_DOCUMENT_ROOT: /var/www/public
      WEB_ALIAS_DOMAIN: restaurant.localhost
      WEB_PHP_SOCKET: php-fpm:9000
    volumes: # Only shared dirs to apache (to be served)
      - ./public:/var/www/public:cached
      - ./storage:/var/www/storage:cached
    networks:
      - res-network
    ports:
      - "8081:80"
      - "443:443"
  php-fpm:
    container_name: res_php
    image: jguyomard/laravel-php:7.3
    volumes:
      - ./:/var/www/
      - ./ci:/var/www/ci:cached
      - ./vendor:/var/www/vendor:delegated
      - ./storage:/var/www/storage:delegated
      - ./node_modules:/var/www/node_modules:cached
      - ~/.ssh:/root/.ssh:cached
      - ./composer.json:/var/www/composer.json
      - ./composer.lock:/var/www/composer.lock
      - ~/.composer/cache:/root/.composer/cache:delegated
    networks:
      - res-network
  db:
    container_name: res_db
    image: mariadb:10.2
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_DATABASE: restaurant
      MYSQL_USER: restaurant
      MYSQL_PASSWORD: secret
    volumes:
      - res-data:/var/lib/mysql
    networks:
      - res-network
    ports:
      - "33060:3306"
  chrome:
    image: robcherry/docker-chromedriver
    networks:
      - res-network
    environment:
      CHROMEDRIVER_WHITELISTED_IPS: ""
      CHROMEDRIVER_PORT: "9515"
    ports:
      - 9515:9515
    cap_add:
      - "SYS_ADMIN"
networks:
  res-network:
    driver: "bridge"
volumes:
  res-data:
    driver: "local"
The following is the driver function in the DuskTestCase.php class:
/**
 * Create the RemoteWebDriver instance.
 *
 * @return \Facebook\WebDriver\Remote\RemoteWebDriver
 */
protected function driver()
{
    $options = (new ChromeOptions)->addArguments([
        '--disable-gpu',
        '--headless',
        '--window-size=1920,1080',
    ]);

    return RemoteWebDriver::create(
        'http://localhost:9515', DesiredCapabilities::chrome()->setCapability(
            ChromeOptions::CAPABILITY, $options
        )
    );
}
I run the tests with the following command:
docker-compose exec php-fpm php artisan dusk
Then I get the following error.
Facebook\WebDriver\Exception\WebDriverCurlException: Curl error thrown for http POST to /session with params: {"capabilities":{"firstMatch":[{"browserName":"chrome","goog:chromeOptions":{"args":["--disable-gpu","--headless","--window-size=1920,1080"]}}]},"desiredCapabilities":{"browserName":"chrome","platform":"ANY","chromeOptions":{"args":["--disable-gpu","--headless","--window-size=1920,1080"]}}}
Failed to connect to localhost port 9515: Connection refused
/var/www/vendor/php-webdriver/webdriver/lib/Remote/HttpCommandExecutor.php:331
What is wrong with my configuration and how can I fix it?
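One thing worth checking, as a hedged sketch only: artisan dusk runs inside the php-fpm container, where localhost refers to the PHP container itself, so the ChromeDriver service would normally be addressed by its compose service name instead:

return RemoteWebDriver::create(
    'http://chrome:9515', // compose service name instead of localhost (assumption based on the compose file above)
    DesiredCapabilities::chrome()->setCapability(ChromeOptions::CAPABILITY, $options)
);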
I am trying to set up a simple ELK stack using Docker. When I disable X-Pack security, it starts fine and I can access the Kibana interface. If X-Pack security is enabled, I get a "Kibana server is not ready yet" error from the Kibana interface. This error is most likely caused by this Elasticsearch error:
{"type": "server", "timestamp": "2020-08-03T15:35:10,134Z", "level": "INFO", "component": "o.e.c.r.a.AllocationService", "cluster.name": "elastic-cluster", "node.name": "elasticsearch", "message": "Cluster health status changed from [RED] to [GREEN] (reason: [shards started [[.monitoring-es-7-2020.08.03][0]]]).", "cluster.uuid": "Vdk1-_4sSvuqlEspQcF-6A", "node.id": "PZMUpi_JSJS6IZ7tv6H22g" }
{"type": "server", "timestamp": "2020-08-03T15:35:10,560Z", "level": "ERROR", "component": "o.e.x.s.a.e.NativeUsersStore", "cluster.name": "elastic-cluster", "node.name": "elasticsearch", "message": "security index is unavailable. short circuiting retrieval of user [elasticadmin]", "cluster.uuid": "Vdk1-_4sSvuqlEspQcF-6A", "node.id": "PZMUpi_JSJS6IZ7tv6H22g" }
This is my elasticsearch.yml:
cluster.name: elastic-cluster
node.name: elasticsearch
network.host: 0.0.0.0
transport.host: 0.0.0.0
## Cluster Settings
discovery.seed_hosts: elasticsearch
cluster.initial_master_nodes: elasticsearch
## License
xpack.license.self_generated.type: basic
# Security
xpack.security.enabled: true
## - ssl
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.key: certs/elasticsearch.key
xpack.security.transport.ssl.certificate: certs/elasticsearch.crt
xpack.security.transport.ssl.certificate_authorities: certs/ca.crt
## - http
#xpack.security.http.ssl.enabled: true
#xpack.security.http.ssl.key: certs/elasticsearch.key
#xpack.security.http.ssl.certificate: certs/elasticsearch.crt
#xpack.security.http.ssl.certificate_authorities: certs/ca.crt
#xpack.security.http.ssl.client_authentication: optional
# Monitoring
xpack.monitoring.enabled: true
xpack.monitoring.collection.enabled: true
This is the error log from Kibana:
{"type":"log","#timestamp":"2020-08-03T15:42:22Z","tags":["warning","plugins","licensing"],"pid":6,"
message":"License information could not be obtained from Elasticsearch due to [security_exception] unable to authenticate user [elasticadmin] for REST request [/_xpack], with { header={ WWW-Authenticate=\"Basic realm=\\\"security\\\" charset=\\\"UTF-8\\\"\" } } :: {\"path\":\"/_xpack\",\"statusCode\":401,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[{\\\"type\\\":\\\"security_exception\\\",\\\"reason\\\":\\\"unable to authenticate user [elasticadmin] for REST request [/_xpack]\\\",\\\"header\\\":{\\\"WWW-Authenticate\\\":\\\"Basic realm=\\\\\\\"security\\\\\\\" charset=\\\\\\\"UTF-8\\\\\\\"\\\"}}],\\\"type\\\":\\\"security_exception\\\",\\\"reason\\\":\\\"unable to authenticate user [elasticadmin] for REST request [/_xpack]\\\",\\\"header\\\":{\\\"WWW-Authenticate\\\":\\\"Basic realm=\\\\\\\"security\\\\\\\" charset=\\\\\\\"UTF-8\\\\\\\"\\\"}},\\\"status\\\":401}\",\"wwwAuthenticateDirective\":\"Basic realm=\\\"security\\\" charset=\\\"UTF-8\\\"\"} error"}
Basic curl request:
curl -H "Authorization: Basic ZWxhc3RpY2FkbWluOjEyMzQ1Njc4OQ==" -XGET "http://localhost:9200/_cat/nodes?v&pretty"
{
  "error" : {
    "root_cause" : [
      {
        "type" : "security_exception",
        "reason" : "unable to authenticate user [elasticadmin] for REST request [/_cat/nodes?v&pretty]",
        "header" : {
          "WWW-Authenticate" : "Basic realm=\"security\" charset=\"UTF-8\""
        }
      }
    ],
    "type" : "security_exception",
    "reason" : "unable to authenticate user [elasticadmin] for REST request [/_cat/nodes?v&pretty]",
    "header" : {
      "WWW-Authenticate" : "Basic realm=\"security\" charset=\"UTF-8\""
    }
  },
  "status" : 401
}
Another Auth request:
docker@docker:~$ curl -H "Authorization: Basic ZWxhc3RpY2FkbWluOjEyMzQ1Njc4OQ" -XGET "http://localhost:9200/_security/_authenticate"
{"error":{"root_cause":[{"type":"security_exception","reason":"unable to authenticate user [elasticadmin] for REST request [/_security/_authenticate]","header":{"WWW-Authenticate":"Basic realm=\"security\" charset=\"UTF-8\""}}],"type":"security_exception","reason":"unable to authenticate user [elasticadmin] for REST request [/_security/_authenticate]","header":{"WWW-Authenticate":"Basic realm=\"security\" charset=\"UTF-8\""}},"status":401}
Docker-Compose:
secrets:
  elasticsearch.keystore:
    file: ${ELK_DATA}/secrets/keystore/elasticsearch.keystore
  elastic.ca:
    file: ${ELK_DATA}/secrets/certs/ca/ca.crt
  elasticsearch.certificate:
    file: ${ELK_DATA}/secrets/certs/elasticsearch/elasticsearch.crt
  elasticsearch.key:
    file: ${ELK_DATA}/secrets/certs/elasticsearch/elasticsearch.key
  kibana.certificate:
    file: ${ELK_DATA}/secrets/certs/kibana/kibana.crt
  kibana.key:
    file: ${ELK_DATA}/secrets/certs/kibana/kibana.key

services:

  ####################################################################
  ############################# ELK ##################################
  ####################################################################
  elasticsearch:
    container_name: elasticsearch
    image: docker.elastic.co/elasticsearch/elasticsearch:${ELK_VERSION}
    restart: unless-stopped
    environment:
      ELASTIC_USERNAME: ${ELASTIC_USERNAME}
      ELASTIC_PASSWORD: ${ELASTIC_PASSWORD}
      ELASTIC_CLUSTER_NAME: ${ELASTIC_CLUSTER_NAME}
      ELASTIC_NODE_NAME: ${ELASTIC_NODE_NAME}
      ELASTIC_INIT_MASTER_NODE: ${ELASTIC_INIT_MASTER_NODE}
      ELASTIC_DISCOVERY_SEEDS: ${ELASTIC_DISCOVERY_SEEDS}
      ES_JAVA_OPTS: -Xmx${ELASTICSEARCH_HEAP} -Xms${ELASTICSEARCH_HEAP} -Des.enforce.bootstrap.checks=true
      bootstrap.memory_lock: "true"
    volumes:
      - ${ELK_DATA}/elasticsearch/data:/usr/share/elasticsearch/data
      - ${ELK_DATA}/elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
      - ${ELK_DATA}/elasticsearch/config/log4j2.properties:/usr/share/elasticsearch/config/log4j2.properties
    secrets:
      - source: elasticsearch.keystore
        target: /usr/share/elasticsearch/config/elasticsearch.keystore
      - source: elastic.ca
        target: /usr/share/elasticsearch/config/certs/ca.crt
      - source: elasticsearch.certificate
        target: /usr/share/elasticsearch/config/certs/elasticsearch.crt
      - source: elasticsearch.key
        target: /usr/share/elasticsearch/config/certs/elasticsearch.key
    ports:
      - 9200:9200
      - 9300:9300
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 200000
        hard: 200000
    networks:
      - traefik_proxy

  logstash:
    container_name: logstash
    image: docker.elastic.co/logstash/logstash:${ELK_VERSION}
    restart: unless-stopped
    volumes:
      - ${ELK_DATA}/logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml
      - ${ELK_DATA}/logstash/config/pipelines.yml:/usr/share/logstash/config/pipelines.yml
      - ${ELK_DATA}/logstash/pipeline:/usr/share/logstash/pipeline
    environment:
      ELASTIC_USERNAME: ${ELASTIC_USERNAME}
      ELASTIC_PASSWORD: ${ELASTIC_PASSWORD}
      ELASTICSEARCH_HOST_PORT: ${ELASTICSEARCH_HOST}:${ELASTICSEARCH_PORT}
      LS_JAVA_OPTS: "-Xmx${LOGSTASH_HEAP} -Xms${LOGSTASH_HEAP}"
    ports:
      - 5044:5044
      - 9600:9600
    networks:
      - traefik_proxy

  kibana:
    container_name: kibana
    image: docker.elastic.co/kibana/kibana:${ELK_VERSION}
    restart: unless-stopped
    volumes:
      - ${ELK_DATA}/kibana/config:/usr/share/kibana/config
    environment:
      ELASTIC_USERNAME: ${ELASTIC_USERNAME}
      ELASTIC_PASSWORD: ${ELASTIC_PASSWORD}
      ELASTICSEARCH_HOST_PORT: ${ELASTICSEARCH_HOST}:${ELASTICSEARCH_PORT}
    secrets:
      - source: elastic.ca
        target: /certs/ca.crt
      - source: kibana.certificate
        target: /certs/kibana.crt
      - source: kibana.key
        target: /certs/kibana.key
    ports:
      - 5601:5601
    networks:
      - traefik_proxy
Where should I start looking to find the source of this issue?
Thanks for any help!
When you enable X-Pack, Elasticsearch starts up, but it seems your Kibana is not getting authenticated. The part of your error message below explains this:
elasticadmin user is not authenticated
Check this user and make sure you are passing the correct credentials when accessing Elasticsearch. You need to pass the username and password using the basic authentication mechanism.
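For example, a quick check with curl and basic auth (substitute the real password; elasticadmin is the user from the question):

curl -u elasticadmin:'<password>' "http://localhost:9200/_security/_authenticate?pretty"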
I had the same issue, but I solved it:
Step 1
Configure your docker-compose like this:
kibana:
  build: kibana
  container_name: kibana
  ports:
    - 5601:5601
  volumes:
    - ./kibana/kibana.yml:/usr/share/kibana/config/kibana.yml
  networks:
    backend:
      aliases:
        - "kibana"
Step 2
My kibana.yml file looks like this:
...
elasticsearch.username: "kibana"
elasticsearch.password: "mypwd"
...
And my Dockerfile is:
FROM docker.elastic.co/kibana/kibana:7.10.2
COPY kibana.yml /usr/share/kibana/kibana.yml
USER root
RUN chown root:kibana /usr/share/kibana/config/kibana.yml
USER kibana
I got this issue when the data folder of Elasticsearch was deleted and then re-initialized from scratch. The point is that the built-in users were not initialized.
As soon as I initialized the built-in users, the error disappeared and the system worked again:
bin/elasticsearch-setup-passwords interactive|auto [-u "https://<host_name>:9200"]
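With the compose file above, that can be run inside the running container, roughly like this (container name taken from the compose file):

docker exec -it elasticsearch bin/elasticsearch-setup-passwords interactive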
I am using the Consul Docker image from Docker Hub. I want to know whether there is a way to store Key/Value settings in a config file that the Docker image can load on boot. I understand that the image has the /consul/config and /consul/data volumes that can be used, but I have not found a way to achieve this.
The following is how I run Consul:
version: '3.4'
services:
  consul:
    container_name: consul
    image: consul:latest
    ports:
      - "8500:8500"
      - "8300:8300"
    volumes:
      - ./consul:/consul/config
In my host consul directory I have a file called config.json, which contains the following:
{
  "node_name": "consul_server",
  "data_dir": "/data",
  "log_level": "INFO",
  "client_addr": "0.0.0.0",
  "bind_addr": "0.0.0.0",
  "ui": true,
  "server": true,
  "bootstrap_expect": 1
}
I have a Laravel project that I want to wrap in Docker containers. I added containers for PHP, MySQL, Redis, and Nginx, but laravel-echo-server doesn't work: all connections fail.
My docker-compose.yml config:
version: '3.7'
services:
  redis:
    image: redis:alpine
    container_name: wex_redis
    volumes:
      - redis_data:/data
    command: redis-server --appendonly yes
    networks:
      - wex-network
  app:
    build:
      context: ./
      dockerfile: docker/containers/app/Dockerfile
    working_dir: /app
    container_name: wex_app
    volumes:
      - ./:/app
    environment:
      - DB_PORT=3306
      - DB_HOST=database
    networks:
      - wex-network
  web:
    build:
      context: ./
      dockerfile: docker/containers/web/Dockerfile
    working_dir: /app
    container_name: wex_web
    ports:
      - 8082:80
      - 6001:6001
    networks:
      - wex-network
  database:
    image: mysql:8
    container_name: wex_db
    volumes:
      - dbdata:/var/lib/mysql
    environment:
      - MYSQL_DATABASE=${DB_DATABASE:-wex}
      - MYSQL_ROOT_PASSWORD=root
    ports:
      - 33062:3306
    command: --default-authentication-plugin=mysql_native_password
    networks:
      - wex-network
networks:
  wex-network:
    driver: bridge
volumes:
  dbdata:
  redis_data:
The "web" container is needed to install the npm dependencies and to serve the app with Nginx. Dockerfile for the web container:
FROM node:10-alpine as build
WORKDIR /app
# Load dependencies
COPY ./package*.json ./yarn.lock ./
COPY ./webpack.mix.js ./
RUN yarn install
COPY ./resources/assets /app/resources/assets
COPY ./public/assets /app/public/assets
# Build
RUN yarn run production
# Laravel-echo-server
RUN yarn global add --prod --no-lockfile laravel-echo-server \
&& yarn cache clean
EXPOSE 6001
CMD ["laravel-echo-server", "start"]
# NGINX
FROM nginx:alpine
EXPOSE 80
COPY ./docker/containers/web/nginx.conf /etc/nginx/conf.d/default.conf
COPY ./public /app/public
COPY --from=build /app/public/assets /app/public/assets
Settings for laravel-echo-server:
{
    "authHost": "http://localhost",
    "authEndpoint": "/broadcasting/auth",
    "clients": [],
    "database": "redis",
    "databaseConfig": {
        "redis": {},
        "sqlite": {
            "databasePath": "/database/laravel-echo-server.sqlite"
        }
    },
    "devMode": true,
    "host": null,
    "port": "6001",
    "protocol": "http",
    "socketio": {},
    "sslCertPath": "",
    "sslKeyPath": "",
    "sslCertChainPath": "",
    "sslPassphrase": "",
    "subscribers": {
        "http": true,
        "redis": true
    },
    "apiOriginAllow": {
        "allowCors": false,
        "allowOrigin": "",
        "allowMethods": "",
        "allowHeaders": ""
    }
}
Connection to the laravel-echo-server from js:
import Echo from 'laravel-echo'

window.io = require('socket.io-client');
window.Echo = new Echo({
    broadcaster: 'socket.io',
    host: window.location.hostname + ':6001'
});
"docker ps" command:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
dddc2f5867b6 wex_app "/var/entrypoint.sh …" 43 minutes ago Up 43 minutes 9000/tcp wex_app
5fb70cf7da97 wex_web "nginx -g 'daemon of…" 19 hours ago Up 43 minutes 0.0.0.0:6001->6001/tcp, 0.0.0.0:8082->80/tcp wex_web
f2f331bf37e0 redis:alpine "docker-entrypoint.s…" 7 days ago Up 43 minutes 6379/tcp wex_redis
224e2af8c151 mysql:8 "docker-entrypoint.s…" 7 days ago Up 43 minutes 33060/tcp, 0.0.0.0:33062->3306/tcp wex_db
Could you please tell me where I made a mistake? Why do none of the connections to laravel-echo-server work?
Have a look at your Dockerfile: it uses multi-stage builds.
There are two stages here, and CMD ["laravel-echo-server", "start"] is only in the first stage. But the point of a multi-stage build is to produce the final image from the last stage; every previous stage is just an intermediate that provides artifacts to the last stage.
So when docker-compose runs the web service, it uses that final image to start a container, and the container only runs the ENTRYPOINT or CMD of the last stage, which does not start laravel-echo-server.
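One possible direction, sketched under the assumption that you keep the multi-stage Dockerfile: add a separate compose service that builds only the first (node) stage, whose CMD does start laravel-echo-server. Compose supports this via build.target, which matches the stage name in FROM node:10-alpine as build:

echo-server:
  build:
    context: ./
    dockerfile: docker/containers/web/Dockerfile
    target: build             # stop at the first stage; its CMD runs laravel-echo-server
  container_name: wex_echo    # the name is an assumption
  ports:
    - 6001:6001
  networks:
    - wex-network

The 6001 port mapping would then move off the web (nginx) service onto this one, since nginx itself never runs the echo server.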