Next.js + Laravel + Nginx + Docker: Can't call API in getServerSideProps

I have a backend API written in Laravel and a frontend app written in Next.js; these are containerized with Docker using Nginx. Here is my docker-compose.yml file:
version: '3.9'
services:
  # frontend nextjs app
  nextjs:
    build: ./hike-frontend
    container_name: nextjs
    volumes:
      - ./hike-frontend:/usr/app
      - /app/node_modules
      - /app/.next
    restart: always
    networks:
      - app-network
  # API Laravel app
  laravel:
    build:
      context: ./hike-backend
      dockerfile: Dockerfile
    image: digitalocean.com/php
    container_name: laravel
    restart: unless-stopped
    tty: true
    environment:
      SERVICE_NAME: laravel
      SERVICE_TAGS: dev
    working_dir: /var/www
    volumes:
      - ./hike-backend:/var/www
      - ./php/local.ini:/usr/local/etc/php/conf.d/local.ini
    networks:
      - app-network
    expose:
      - "9000"
  # Nginx Service
  nginx:
    image: nginx:alpine
    container_name: nginx
    restart: unless-stopped
    tty: true
    ports:
      - "80:80"
      - "443:443"
    expose:
      - "80"
      - "443"
    volumes:
      - ./hike-backend:/var/www
      - ./nginx/conf.d/:/etc/nginx/conf.d/
    networks:
      - app-network
# Docker Networks
networks:
  app-network:
    driver: bridge
Here is my Dockerfile for the Next.js app:
# Based on the official Node.js Alpine image
FROM node:alpine
# Set working directory
WORKDIR /usr/app
# Install PM2 to automatically restart the server if it crashes
RUN npm install --global pm2
# Install react globally
RUN npm install --global react react-dom
# Copy package.json and package-lock.json before other files
# Utilise Docker cache to save re-installing dependencies if unchanged
COPY ./package*.json ./
# Install dependencies
RUN npm install --production
# Copy all files
COPY ./ ./
# Build app
RUN npm run build
# Expose the listening port
EXPOSE 3000
# Run container as non-root (unprivileged) user
# The node user is provided in the Node.js Alpine base image
USER node
# Run npm start script with PM2 when container starts
CMD [ "pm2-runtime", "npm", "--", "run", "dev" ]
And here is my Dockerfile for the Laravel app:
FROM php:7.4-fpm
# Copy composer.lock and composer.json
COPY composer.lock composer.json /var/www/
# Set working directory
WORKDIR /var/www
# Install dependencies
RUN apt-get update && apt-get install -y \
    build-essential \
    libpng-dev \
    libjpeg62-turbo-dev \
    libfreetype6-dev \
    locales \
    zip \
    libzip-dev \
    jpegoptim optipng pngquant gifsicle \
    vim \
    unzip \
    git \
    curl
# Clear cache
RUN apt-get clean && rm -rf /var/lib/apt/lists/*
# Install extensions
RUN docker-php-ext-install pdo_mysql zip exif pcntl
#RUN docker-php-ext-configure gd --with-gd --with-freetype-dir=/usr/include/ --with-jpeg-dir=/usr/include/ --with-png-dir=/usr/include/
RUN docker-php-ext-install gd
# Install composer
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
# Add user for laravel application
RUN groupadd -g 1000 www
RUN useradd -u 1000 -ms /bin/bash -g www www
# Copy existing application directory contents
COPY . /var/www
# Copy existing application directory permissions
COPY --chown=www:www . /var/www
# Change current user to www
USER www
# Expose port 9000 and start php-fpm server
EXPOSE 9000
CMD ["php-fpm"]
And here is my Nginx config:
upstream nextjs_upstream {
    server nextjs:3000;
}

server {
    listen 80;
    server_name local.hike.com;
    server_tokens off;

    gzip on;
    gzip_proxied any;
    gzip_comp_level 4;
    gzip_types text/css application/javascript image/svg+xml;

    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;

    location / {
        proxy_pass http://nextjs_upstream;
    }
}

server {
    listen 80;
    server_name local.api.hike.com;
    index index.php index.html;
    error_log /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;
    root /var/www/public;

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass laravel:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
    }

    location / {
        try_files $uri $uri/ /index.php?$query_string;
        gzip_static on;
    }
}
This all works fine: I can access both sites in my browser at local.hike.com and local.api.hike.com, and AJAX requests on the client side work fine as well. However, I can't connect to the Laravel app from the Next.js app inside getServerSideProps, like so:
export async function getServerSideProps(context) {
  const eventDetails = await Axios.get('http://laravel/api/events/1')
    .then(result => {
      console.log('result', result);
      return result.data.data;
    })
    .catch(error => {
      console.log('error', error);
      return {};
    });
}
It just returns:
Error: connect ECONNREFUSED 172.22.0.4:80
I can access local.api.hike.com/api/events/1 without a problem in my browser, or from an AJAX request in the Next.js app, but not in getServerSideProps.
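The error itself gives a hint: the laravel container only runs php-fpm, which listens on port 9000 and speaks FastCGI rather than HTTP, so nothing answers on port 80 at http://laravel, which is exactly what connect ECONNREFUSED 172.22.0.4:80 reports. One possible workaround, sketched below under the assumption that the nginx service is reachable by its service name on app-network, is to send the server-side request to nginx and set the Host header so the local.api.hike.com server block is matched:

import Axios from 'axios';

export async function getServerSideProps(context) {
  // Server-side code runs inside the Docker network, so target the nginx
  // container rather than the php-fpm container (which does not speak HTTP).
  const eventDetails = await Axios.get('http://nginx/api/events/1', {
    // Hypothetical: force the Host header so nginx selects the Laravel vhost.
    headers: { Host: 'local.api.hike.com' },
  })
    .then(result => result.data.data)
    .catch(() => ({}));

  return { props: { eventDetails } };
}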

Related

Problem: docker-compose with Laravel and nginx not working in production

I wrote a docker-compose.yml with nginx, MySQL, and a Laravel application with a Dockerfile.
My docker-compose:
version: "3.7"
services:
app_back:
build:
context: .
dockerfile: Dockerfile
image: dreamy_back
container_name: dreamy-back-app
restart: unless-stopped
working_dir: /var/www/
volumes:
- .:/var/www
networks:
- dreamy
db:
image: mysql
container_name: dreamy-db
restart: unless-stopped
depends_on:
- app_back
environment:
MYSQL_DATABASE: ${DB_DATABASE}
MYSQL_ROOT_PASSWORD: 4ztgeU%
MYSQL_PASSWORD: ${DB_PASSWORD}
MYSQL_USER: ${DB_USERNAME}
networks:
- dreamy
nginx:
image: nginx:1.17-alpine
container_name: dreamy-back-nginx
restart: unless-stopped
ports:
- 8000:80
volumes:
- ./:/var/www
- ./docker-compose/vhosts:/etc/nginx/conf.d
networks:
- dreamy
networks:
dreamy:
driver: bridge
My Dockerfile for the Laravel application (I execute composer install inside the container; see the sketch after the Dockerfile):
FROM php:8.1-fpm
# Install system dependencies
RUN apt-get update && apt-get install -y \
    git \
    curl \
    libpng-dev \
    libonig-dev \
    libxml2-dev \
    zip \
    unzip
# Clear cache
RUN apt-get clean && rm -rf /var/lib/apt/lists/*
# Install PHP extensions
RUN docker-php-ext-install pdo_mysql mbstring exif pcntl bcmath gd
# Get latest Composer
COPY --from=composer:latest /usr/bin/composer /usr/bin/composer
# Set working directory
WORKDIR /var/www
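For reference, the composer install step mentioned above can be run inside the running container with something like this sketch (assuming the compose service name app_back from the file above):

docker-compose exec app_back composer install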
My nginx conf:
server {
    listen 80;
    index index.php index.html;
    error_log /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;
    root /var/www/public;

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass app_back:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
    }

    location / {
        try_files $uri $uri/ /index.php?$query_string;
        gzip_static on;
    }
}
On my laptop it's working, but on my VPS server I get a 500 error.
My Docker setup should be accessible on port 8000.
UPDATE - I found the problem: in production the application does not display Laravel errors, so it is the default nginx error page that is displayed. The actual error was the access rights on the storage directory.
Thank you for your help.
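For reference, a common way to fix those storage access rights inside the container is a sketch like the following (assumptions: the app lives in /var/www and php-fpm runs as www-data; adjust paths and user to your image):

docker-compose exec app_back chown -R www-data:www-data /var/www/storage /var/www/bootstrap/cache
docker-compose exec app_back chmod -R 775 /var/www/storage /var/www/bootstrap/cache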

Laravel + Vite + Docker: build files 404 not found

This week I've been struggling with an issue.
I have a Laravel/Vue.js (TypeScript) project using Vite. I am using Laravel 8.74.0.
Here are my package versions:
* "vite": "3.0.4",
* "laravel-vite": "^0.0.19",
* "#vitejs/plugin-vue": "^2.2.2",
* "laravel-vite-plugin": "^0.6.0",
* "vue": "^3.2.6",
My issue is that when I try to build my project through Docker, the public/build files are created but the browser returns a 404 not found.
GET http://foo.localhost:8000/build/assets/main.80098a01.js net::ERR_ABORTED 404 (Not Found)
GET http://foo.localhost:8000/build/assets/main.b7cb1660.css net::ERR_ABORTED 404 (Not Found)
Notice: the "foo" before localhost is present because I am using Tenancy for Laravel.
I verified and the build directory exist in the app container.
I am using docker compose to build the image.
docker-compose.yml
version: '3.9'
networks:
  laravel:
services:
  nginx:
    build:
      context: .
      dockerfile: docker/nginx.dockerfile
    ports:
      - 8000:80
    volumes:
      - ./:/var/www/html
    links:
      - app
    networks:
      - laravel
  app:
    build:
      context: .
      dockerfile: app.dockerfile
    links:
      - db
    networks:
      - laravel
    ports:
      - 3000:3000
    expose:
      - 3000:3000
  db:
    image: mysql:8.0
    ports:
      - 3306:3306
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_USER=user
      - MYSQL_PASSWORD=password
      - MYSQL_DATABASE=fresh_final
    networks:
      - laravel
    volumes:
      - db:/var/lib/mysql
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    container_name: fresh-pma
    restart: always
    ports:
      - 49152:80
    environment:
      PMA_HOST: db
    networks:
      - laravel
volumes:
  db:
    driver: local
Because I only have one project with both Laravel and Vue.js, I decided to create one container for both the frontend and the backend.
app.dockerfile
FROM php:8.0.21-fpm
# Copy composer.lock and composer.json
COPY composer.lock composer.json /var/www/html/
# Set working directory
WORKDIR /var/www/html
# Install dependencies
RUN apt-get update && apt-get install -y \
    build-essential \
    libpng-dev \
    libjpeg62-turbo-dev \
    libfreetype6-dev \
    locales \
    zip \
    jpegoptim optipng pngquant gifsicle \
    vim \
    unzip \
    git \
    curl \
    sudo
# Clear cache
RUN apt-get clean && rm -rf /var/lib/apt/lists/*
# Install extensions
# RUN docker-php-ext-install pdo_mysql mbstring zip exif pcntl
RUN docker-php-ext-install pdo_mysql exif pcntl bcmath gd
# RUN docker-php-ext-install pdo_mysql
RUN docker-php-ext-configure gd --with-freetype --with-jpeg
# RUN docker-php-ext-install gd
# Install composer
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
# Add user for laravel application
RUN groupadd -g 1000 www
RUN useradd -u 1000 -ms /bin/bash -g www www
# RUN sudo passwd www
# # Install nodejs
RUN sudo curl -fsSL https://deb.nodesource.com/setup_16.x | sudo -E bash -
RUN apt-get install -y nodejs
RUN npm install --global yarn
# RUN rm -rf node_modules
# Change current user to www
USER www
# Copy existing application directory contents
# COPY . /var/www/html
# Copy existing application directory permissions
COPY --chown=www:www . /var/www/html
# RUN whoami
RUN yarn install
# RUN sudo chown -R www: /var/www/html/node_modules
# RUN node node_modules/esbuild/install.js
# Expose port 9000 and start php-fpm server
EXPOSE 9000
# EXPOSE 3000
# COPY --from=vuejs /app /var/www/html
CMD ["php-fpm"]
nginx.dockerfile
FROM nginx
ADD docker/conf/vhost.conf /etc/nginx/conf.d/default.conf
WORKDIR /var/www/html
vhost.conf
upstream app {
    server app:9000;
}

server {
    listen 80;
    index index.php index.html;
    error_log /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;
    root /var/www/html/public;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
        gzip_static on;
    }

    location ~ \.php$ {
        # fastcgi_pass app:9000;
        # fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
        # include fastcgi_params;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass app;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
    }
}
Here is my vite.config.ts
import { defineConfig } from "laravel-vite";
import laravel from 'laravel-vite-plugin';
import vue from "@vitejs/plugin-vue";

export default defineConfig({
  define: {
    'process.env': process.env
  },
  server: {
    host: true,
    port: 3000,
    watch: {
      usePolling: true
    }
  },
  plugins: [
    laravel({
      input: ['resources/scripts/main.ts'],
      refresh: true,
    }),
    vue({
      template: {
        transformAssetUrls: {
          // The Vue plugin will re-write asset URLs, when referenced
          // in Single File Components, to point to the Laravel web
          // server. Setting this to `null` allows the Laravel plugin
          // to instead re-write asset URLs to point to the Vite
          // server instead.
          base: null,
          // The Vue plugin will parse absolute URLs and treat them
          // as absolute paths to files on disk. Setting this to
          // `false` will leave absolute URLs un-touched so they can
          // reference assets in the public directory as expected.
          includeAbsolute: false,
        },
      },
    }),
  ],
  resolve: {
    alias: {
      '@': '/resources/views/src',
    },
  },
});
Also, I noticed that the Docker image runs perfectly fine when I run the build locally. So I am assuming that the Docker image is referencing the local build files. (When the local build directory is deleted, the 404 not found error reappears.)
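That observation matches the compose file: nginx bind-mounts ./:/var/www/html from the host and serves public/build from there, while yarn install (and any build) ran inside the app image, whose filesystem the host never sees. A sketch of two workarounds (service names and paths are taken from the files above; docker compose cp assumes Compose v2):

# Option A: copy the build produced inside the app container out to the
# host directory that nginx bind-mounts (Compose v2).
docker compose cp app:/var/www/html/public/build ./public/build

# Option B: build on the host so ./public/build exists under nginx's mount.
yarn build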

Quasar + Docker not working on localhost, but works with given external IP

I'm trying to dockerize a stack with nginx + Quasar + Laravel + MySQL.
Since I'm using third-party plugins that need approved domains, I'd like to always be using localhost instead of dynamic public IPs.
I've tried everything, but since I'm starting with Docker there are many concepts that I may not be understanding.
This is probably related to port forwarding, but the problem is that I can't access the Quasar app at http://localhost:9000/
This is my docker-compose.yml
version: '3.8'
services:
  # Laravel App
  api:
    build:
      args:
        user: luciano
        uid: 1000
      context: backend        # path to the docker file
      dockerfile: Dockerfile  # name of the dockerfile
    image: laravel            # name to assign to the image
    container_name: viso-api  # name to assign to the container
    restart: unless-stopped   # always restart container except when stopped (manually or otherwise)
    working_dir: /var/www     # working directory of the running container
    tty: true                 # once started keeps container running (attaching a terminal to it, like using -t docker directive)
    environment:              # set environment variables
      SERVICE_NAME: app
      SERVICE_TAGS: dev
    depends_on:
      - db
    volumes: # map application directory to /var/www in the container
      - './backend:/var/www'
      - './.docker/php/uploads.ini:/usr/local/etc/php/conf.d/uploads.ini'
    networks: # networks to which the container is connected
      - viso-network
  # Quasar App
  quasar:
    build:
      context: frontend
      dockerfile: Dockerfile
      target: develop-stage
    image: quasar
    container_name: viso-frontend
    working_dir: /app
    volumes:
      - './frontend:/app'
    networks:
      - viso-network
    depends_on:
      - api
    # command: /bin/sh -c "yarn && quasar dev"
  # Nginx Service
  nginx:
    image: 'nginx:1.23.1-alpine'
    container_name: viso-nginx
    restart: unless-stopped
    tty: true
    ports: # exposed ports
      - '8888:80'
    volumes:
      - './backend:/var/www'
      - './.docker/nginx:/etc/nginx/conf.d' # nginx configuration
    networks:
      - viso-network
    depends_on:
      - db
      - api
  # MySQL Service
  db:
    image: 'mysql:8.0.30'
    container_name: viso-db
    restart: unless-stopped
    tty: true
    ports:
      - '33066:3306'
    environment:
      MYSQL_DATABASE: viso_manager
      MYSQL_ROOT_PASSWORD: secretroot
      MYSQL_ROOT_HOST: '%'
      MYSQL_USER: luciano
      MYSQL_PASSWORD: secretluciano
    volumes:
      - './.dbdata:/var/lib/mysql'
      - './.docker/mysql/my.cnf:/etc/mysql/conf.d/my.cnf'
    networks:
      - viso-network
# Docker Networks
networks:
  viso-network:
    driver: bridge
# Volumes
volumes:
  .dbdata:
This is my Laravel Dockerfile:
FROM php:8.1-fpm
# Set working directory
WORKDIR /var/www
# Get argument defined in docker-compose file
ARG user
ARG uid
# Install system dependencies
RUN apt-get update && apt-get install -y \
        git \
        curl \
        libonig-dev \
        libxml2-dev \
        libpng-dev \
        libwebp-dev \
        libjpeg-dev \
        libfreetype6-dev \
        zip \
        unzip \
        nano \
    && docker-php-ext-install pdo_mysql \
    && docker-php-ext-install mbstring \
    && docker-php-ext-install exif \
    && docker-php-ext-install pcntl \
    && docker-php-ext-install bcmath \
    && docker-php-ext-install gd \
    && docker-php-ext-configure gd --with-jpeg --with-webp --with-freetype \
    && docker-php-ext-install -j$(nproc) gd \
    && docker-php-source delete
# Clear cache
RUN apt-get clean && rm -rf /var/lib/apt/lists/*
# Install Composer
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
# Create system user to run Composer and Artisan Commands
RUN useradd -G www-data,root -u $uid -d /home/$user $user
RUN mkdir -p /home/$user/.composer && \
chown -R $user:$user /home/$user
EXPOSE 80
USER $user
This is my Quasar Dockerfile
# develop stage
FROM node:alpine as develop-stage
WORKDIR /app
COPY package*.json ./
RUN yarn global add @quasar/cli
COPY . .
RUN yarn install
EXPOSE 9999
# build stage
FROM develop-stage as build-stage
RUN yarn
RUN quasar build
# production stage
FROM nginx:alpine as production-stage
COPY --from=build-stage /app/dist/spa /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
And my nginx.conf:
server {
    listen 80;
    index index.php index.html;
    error_log /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;
    root /var/www/public;
    client_max_body_size 32M;

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass api:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
    }

    location / {
        try_files $uri $uri/ /index.php?$query_string;
        gzip_static on;
    }

    location /storage {
        add_header "Access-Control-Allow-Origin" "*";
    }
}
Once everything is up and running, I go into the Quasar container and run quasar dev, which shows me 2 URLs:
App URL................ http://localhost:9000/
http://172.xx.xx.4:9000/
The URL by IP works fine, but http://localhost:9000/ throws:
This site can’t be reached
localhost refused to connect.
Try:
Checking the connection
Checking the proxy and the firewall
ERR_CONNECTION_REFUSED
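Nothing in the compose file publishes the Quasar dev server's port to the host, which is why the bridge IP works (container IPs are directly routable from a Linux host) while localhost:9000 is refused. A sketch of the likely fix, assuming quasar dev listens on port 9000 inside the container:

  quasar:
    build:
      context: frontend
      dockerfile: Dockerfile
      target: develop-stage
    ports:
      - '9000:9000' # publish the dev server so http://localhost:9000/ reaches it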

Nginx config not working on my docker container

I'm new to Docker.
I'm studying this docker-compose.yml in order to create three containers: a webserver, a workspace for my Laravel app, and the db. I understand most of it.
It builds just fine, but on localhost I get the nginx default page, not my Laravel public folder.
I have an app.conf file inside /nginx/conf.d/:
server {
    listen 80;
    index index.php index.html;
    error_log /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;
    root /var/www/public;

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass app:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
    }

    location / {
        try_files $uri $uri/ /index.php?$query_string;
        gzip_static on;
    }
}
version: '3.2'
services:
  # PHP Service
  app:
    build:
      context: .
      dockerfile: Dockerfile
    image: myapp-workspace
    container_name: myapp-workspace
    restart: unless-stopped
    tty: true
    environment:
      SERVICE_NAME: app
      SERVICE_TAGS: dev
    working_dir: "/var/www"
    volumes:
      - ./:/var/www
    networks:
      - app-network
  # Nginx Service
  webserver:
    image: nginx:alpine
    container_name: myapp-nginx
    restart: unless-stopped
    tty: true
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - .:/var/www
      - ./nginx/conf.d/:/etc/nginx/conf.d/
    networks:
      - app-network
  # MySQL Service
  db:
    image: mysql:5.7.22
    container_name: myapp-db
    restart: unless-stopped
    tty: true
    ports:
      - "3306:3306"
    environment:
      MYSQL_DATABASE: 'database'
      MYSQL_ROOT_PASSWORD: 'password'
      MYSQL_PASSWORD: 'password'
      MYSQL_USER: 'user'
      SERVICE_TAGS: dev
      SERVICE_NAME: mysql
    volumes:
      - ./dbdata:/var/lib/mysql/
      - ./mysql/my.cnf:/etc/mysql/my.cnf
    networks:
      - app-network
# Docker Networks
networks:
  app-network:
    driver: bridge
Dockerfile
FROM php:7.3-fpm
# Copy composer.lock and composer.json
COPY composer.lock composer.json /var/www/
# Set working directory
WORKDIR /var/www
# Install dependencies
RUN apt-get update && apt-get install -y \
    build-essential \
    libpng-dev \
    libjpeg62-turbo-dev \
    libfreetype6-dev \
    locales \
    zip \
    jpegoptim optipng pngquant gifsicle \
    vim \
    unzip \
    git \
    curl
# Clear cache
RUN apt-get clean && rm -rf /var/lib/apt/lists/*
# Install extensions
RUN docker-php-ext-install pdo_mysql
RUN docker-php-ext-configure gd --with-gd --with-freetype-dir=/usr/include/ --with-jpeg-dir=/usr/include/ --with-png-dir=/usr/include/
RUN docker-php-ext-install gd
# Install composer
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
# Add user for laravel application
RUN groupadd -g 1000 www
RUN useradd -u 1000 -ms /bin/bash -g www www
# Copy existing application directory contents
COPY . /var/www
# Copy existing application directory permissions
COPY --chown=www:www . /var/www
# Change current user to www
USER www
# Expose port 9000 and start php-fpm server
EXPOSE 9000
CMD ["php-fpm"]
Found the problem.
It seems that the "app" part in the line "fastcgi_pass app:9000;" in my app.conf should be the name of my workspace container.
Like this it works:
server {
    listen 80;
    index index.php index.html;
    error_log /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;
    root /var/www/public;

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass myapp-workspace:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
    }

    location / {
        try_files $uri $uri/ /index.php?$query_string;
        gzip_static on;
    }
}
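As a side note, on a user-defined bridge network a container is normally reachable both by its compose service name and by its container_name, so it can be worth checking what actually resolves from inside the nginx container before editing the config. A quick check (a sketch, using the container names from the compose file above; busybox ping ships in the alpine image):

docker exec myapp-nginx ping -c1 app
docker exec myapp-nginx ping -c1 myapp-workspace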

Laravel Docker cron nginx 502 Bad Gateway issue: (111: Connection refused) while connecting to upstream

I am running my Laravel app in a Docker container. Everything worked fine until I added cron to the Dockerfile. I need to schedule a job, so I need cron. My compose file looks like this:
My compose file looks like this
version: '3'
networks:
  laravel:
    driver: bridge
services:
  nginx:
    image: nginx:stable
    container_name: nginx
    ports:
      - "8080:80"
    volumes:
      - ./src:/var/www/html
      - ./nginx/default.conf:/etc/nginx/conf.d/default.conf
    depends_on:
      - php
      - mysql
    networks:
      - laravel
  php:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: php
    volumes:
      - ./src:/var/www/html
    ports:
      - "9000:9000"
    networks:
      - laravel
The nginx config file looks like this:
server {
    listen 80;
    index index.php index.html;
    server_name localhost;
    error_log /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;
    root /var/www/html/public;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass php:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
    }
}
My Dockerfile looks like this:
FROM php:7.3-fpm
WORKDIR /var/www/html
# RUN docker-php-ext-install pdo pdo_mysql mysqli
# RUN docker-php-ext-install mysqli && docker-php-ext-enable mysqli
RUN docker-php-ext-install mysqli pdo pdo_mysql && docker-php-ext-enable mysqli
# Install cron
RUN apt-get update && apt-get install -y cron
# Add crontab file in the cron directory
ADD src/app/schedule/crontab /etc/cron.d/cron
# Give execution rights on the cron job
RUN chmod 0644 /etc/cron.d/cron
# Create the log file to be able to run tail
RUN touch /var/log/cron.log
# Run the command on container startup
CMD printenv > /etc/environment && echo "cron starting..." && (cron) && : > /var/log/cron.log && tail -f /var/log/cron.log
And the error in the nginx log is:
2020/07/06 08:27:06 [error] 5#5: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 172.19.0.1, server: localhost, request: "GET / HTTP/1.1", upstream: "fastcgi://172.19.0.2:9000", host: "localhost:8080"
The nginx container (/nginx) is running at 172.19.0.7.
What is going wrong here?
At last I found the solution: we need to replace the last line of the Dockerfile. Use
CMD cron && docker-php-entrypoint php-fpm
instead of
CMD printenv > /etc/environment && echo "cron starting..." && (cron) && : > /var/log/cron.log && tail -f /var/log/cron.log
The original CMD started cron and then tailed the log, replacing the image's default php-fpm startup, so nothing was listening on port 9000 and nginx's connection to the upstream was refused.
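For the scheduling itself, a typical /etc/cron.d entry for the Laravel scheduler looks like the sketch below (paths are assumptions: PHP at /usr/local/bin/php and the app at /var/www/html; note that cron.d files require the user column):

# Hypothetical src/app/schedule/crontab: run Laravel's scheduler every minute.
* * * * * root /usr/local/bin/php /var/www/html/artisan schedule:run >> /var/log/cron.log 2>&1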
