GCP Cloud Run returns "Faithfully yours, nginx" - laravel

I'm trying to host my Laravel application on GCP Cloud Run and everything works just fine, but for some reason, whenever I run a POST request with lots of data (100+ rows of data - 64 MB) saving to the database, it always throws an error. I'm using nginx with Docker, by the way. Please see the details below.
ERROR
Cloud Run Logs
The request has been terminated because it has reached the maximum request timeout.
nginx.conf
worker_processes 1;
events {
    worker_connections 1024;
}
http {
    include mime.types;
    sendfile on;
    keepalive_timeout 65;
    server {
        listen LISTEN_PORT default_server;
        server_name _;
        root /app/public;
        index index.php;
        charset utf-8;
        location / {
            try_files $uri $uri/ /index.php?$query_string;
        }
        location = /favicon.ico { access_log off; log_not_found off; }
        location = /robots.txt { access_log off; log_not_found off; }
        access_log /dev/stdout;
        error_log /dev/stderr;
        sendfile off;
        client_max_body_size 100m;
        location ~ \.php$ {
            fastcgi_split_path_info ^(.+\.php)(/.+)$;
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_intercept_errors off;
            fastcgi_buffer_size 32k;
            fastcgi_buffers 8 32k;
        }
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }
    #include /etc/nginx/sites-enabled/*;
}
daemon off;
Dockerfile
FROM php:8.0-fpm-alpine
RUN apk add --no-cache nginx wget
RUN docker-php-ext-install mysqli pdo pdo_mysql
RUN mkdir -p /run/nginx
COPY docker/nginx.conf /etc/nginx/nginx.conf
RUN mkdir -p /app
COPY . /app
RUN sh -c "wget http://getcomposer.org/composer.phar && chmod a+x composer.phar && mv composer.phar /usr/local/bin/composer"
RUN cd /app && \
/usr/local/bin/composer install --no-dev
RUN chown -R www-data: /app
CMD sh /app/docker/startup.sh
Laravel version:
v9
Please let me know if you need some data that is not indicated yet on my post.

Increase max_execution_time in the PHP configuration. By default it is only 30 seconds. Make it 30 minutes, for example:
max_execution_time = 1800
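In the Docker setup from the question, one way to apply this is an extra ini file baked into the image; the official php images read additional settings from /usr/local/etc/php/conf.d/ (the file name below is just an example):
# Dockerfile: raise PHP's execution time limit via a custom ini file
RUN echo "max_execution_time = 1800" > /usr/local/etc/php/conf.d/execution-time.ini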
Increase timeouts of nginx:
http {
    ...
    proxy_read_timeout 1800;
    proxy_connect_timeout 1800;
    proxy_send_timeout 1800;
    send_timeout 1800;
    keepalive_timeout 1800;
    ...
}
Another idea for investigation is to give your Cloud Run instance more resources (more CPUs, more RAM) in order to process your request faster and avoid the timeout. But eventually the timeout itself will still need to be increased.

I think the issue has nothing to do with php, laravel, or nginx, but with Cloud Run.
As you can see in the Google Cloud documentation when they describe HTTP 504: Gateway timeout errors:
HTTP 504
The request has been terminated because it has reached the maximum request
timeout.
If your service is processing long requests, you can increase the request
timeout. If your service doesn't return a response within the time
specified, the request ends and the service returns an HTTP 504 error, as
documented in the container runtime contract.
As suggested in the docs, please try increasing the request timeout until your application can process the huge POST data you mentioned: it is set to 5 minutes by default, but can be extended up to 60 minutes.
As described in the docs, you can set it through the Google Cloud console or the gcloud CLI, either directly or by modifying the service YAML configuration.
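For reference, a minimal sketch of the gcloud call, with a placeholder service name and region:
# Raise the Cloud Run request timeout for an existing service to 30 minutes
gcloud run services update MY_SERVICE --region=europe-west1 --timeout=1800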

The default nginx timeout is 60s. Since you mentioned the data is 64 MB, it will take time for your backend to process that request and send back the response within those 60s.
So you could either try to increase the nginx timeouts by adding the block below to your nginx.conf file:
http {
    ...
    proxy_read_timeout 300;
    proxy_connect_timeout 300;
    proxy_send_timeout 300;
    keepalive_timeout 3000;
    ...
}
Or, a better way would be to not process the data immediately: push it onto a message queue, send the response instantly, and let background workers handle the processing. I don't know much about Laravel; in Django we can use RabbitMQ with Celery or Pika (a Laravel queue sketch follows below).
To get the result of a request with huge data, you can poll the server at a regular interval or set up a WebSocket connection.
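In Laravel, the rough equivalent of the Celery setup is queued jobs; a sketch of the artisan side, assuming the database queue driver (the job class name is hypothetical):
# Create the jobs table for the database queue driver and generate a job class
php artisan queue:table
php artisan migrate
php artisan make:job ProcessUploadedRows
# Run a worker; on Cloud Run this would have to live in a separate worker service
php artisan queue:work --timeout=1800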

Related

Nginx Server Configuration: Hostname not resolving on Subdomain

Thank you in advance for your support.
I set up an Ubuntu Server with Nginx as a Digitalocean Droplet and am using the server provisioning tool Laravel Forge, which works fine. I successfully installed the PHP Framework Laravel and deployed the code on the server. I ssh into the server and checked the files in the supposed directory. The code is successfully deployed.
Next, I own a domain and created an A record for the subdomain app.mywebsite.de that points to that server. I followed the DigitalOcean instructions and waited the required time. Using a DNS lookup tool, I confirmed that the subdomain actually points to the server.
Screenshot of DNS Lookup
Yet, when I use the subdomain in my browser, the browser doesn't connect to the server. I get the following error message in the browser:
Screenshot of Browser when connecting to server
It seems like the subdomain is correctly pointed to the server, but the server isn't configured correctly. I checked the nginx configuration, and under sites-available I have the following configuration for the subdomain:
# FORGE CONFIG (DO NOT REMOVE!)
include forge-conf/app.mywebsite.de/before/*;
server {
listen 80;
listen [::]:80;
server_name app.mywebsite.de;
server_tokens off;
root /home/forge/app.mywebsite.de/public;
# FORGE SSL (DO NOT REMOVE!)
# ssl_certificate;
# ssl_certificate_key;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers TLS13-AES-256-GCM-SHA384:TLS13-CHACHA20-POLY1305-SHA256:TLS_AES_256_GCM_SHA384:TLS-AES-256-GCM-SHA384:TLS_CHACHA20_POLY1305_SHA256:TLS-CHACHA20-POLY1305-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA;
ssl_prefer_server_ciphers on;
ssl_dhparam /etc/nginx/dhparams.pem;
add_header X-Frame-Options "SAMEORIGIN";
add_header X-XSS-Protection "1; mode=block";
add_header X-Content-Type-Options "nosniff";
index index.html index.htm index.php;
charset utf-8;
# FORGE CONFIG (DO NOT REMOVE!)
include forge-conf/app.mywebsite.de/server/*;
location / {
try_files $uri $uri/ /index.php?$query_string;
}
location = /favicon.ico { access_log off; log_not_found off; }
location = /robots.txt { access_log off; log_not_found off; }
access_log off;
error_log /var/log/nginx/app.mywebsite.de-error.log error;
error_page 404 /index.php;
location ~ \.php$ {
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass unix:/var/run/php/php8.0-fpm.sock;
fastcgi_index index.php;
include fastcgi_params;
}
location ~ /\.(?!well-known).* {
deny all;
}
}
# FORGE CONFIG (DO NOT REMOVE!)
include forge-conf/app.mywebsite.de/after/*;
In forge-conf/app.website.de/before/* there is only one file, redirect.conf, with the following code:
server {
    listen 80;
    listen [::]:80;
    server_name www.app.website.de;
    if ($http_x_forwarded_proto = 'https') {
        return 301 https://app.website.de$request_uri;
    }
    return 301 $scheme://app.website.de$request_uri;
}
There are no other sites on the server, so there is only the 000-catch-all file in the sites-available directory of the nginx configuration folder.
Unfortunately, I have reached the limit of my understanding here, and I would love it if somebody could point me in the right direction to find out which part of nginx is not configured correctly.
P.S.
Some additional info:
Yes, I restarted Nginx and the whole server multiple times.
Turns out, everything was configured correctly. I didn't change anything, except that I added some additional sites on the nginx server. Forge probably updated the server blocks, which resolved the problem.
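For anyone debugging a similar symptom, two quick checks are to dump the configuration nginx actually loaded and to request the site with an explicit Host header so DNS is taken out of the picture (the IP below is a placeholder):
# Show every server block nginx has loaded, including the forge-conf includes
sudo nginx -T | grep -A 5 'server_name'
# Hit the droplet directly with the expected Host header
curl -v -H 'Host: app.mywebsite.de' http://DROPLET_IP/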

Redirect from Http to https issue in NGINX Google Compute Engine

We already tried other solutions on Stack Overflow but they didn't work for us.
We are having issues while redirecting our Domain url from http to https.
When we hit http://example.com, it is not getting redirected to https://example.com. We have also set up a Google-managed SSL certificate on the Load Balancer in Google Cloud Network Services.
We are using Google Cloud Compute Engine for hosting the website and Google Domains for the URL. Apart from that, we are using NGINX as our web server and Laravel as our framework. We also contacted the Google support team, but that didn't work out.
Front and Backend Load Balancer Configuration:
PHP Framework - Laravel V8
Compute Engine - Debian 10 Buster
Below is the code for NGINX config file.
NGINX Default Config file
server
{
listen 80;
server_name example.in www.example.in;
root /var/www/html/test;
add_header X-Frame-Options "SAMEORIGIN";
add_header X-XSS-Protection "1; mode=block";
add_header X-Content-Type-Options "nosniff";
index index.html index.htm index.php;
charset utf-8;
location / {
try_files $uri $uri/ /index.php?$query_string;
}
location = /favicon.ico { access_log off; log_not_found off; }
location = /robots.txt { access_log off; log_not_found off; }
error_page 404 /index.php;
location ~ \.php$ {
fastcgi_pass unix:/var/run/php/php7.3-fpm.sock;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
include fastcgi_params;
}
location ~ /\.(?!well-known).* {
deny all;
}
}
So the below configuration really solved my issue.
I just added a new port 80 (HTTP) frontend configuration to my Load Balancer, alongside the existing port 443 (HTTPS) one.
Now the domain URL is getting redirected from HTTP to HTTPS with secure connections.
Please refer to the screenshot of my Load Balancer frontend configuration below.
Thank you #JohnHanley for your answer ;)
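If you prefer to confirm the frontend setup from the CLI rather than the console, listing the forwarding rules and proxies should show whether both a port 80 and a port 443 frontend exist:
# One forwarding rule should point at a target HTTP proxy (port 80), another at a target HTTPS proxy (port 443)
gcloud compute forwarding-rules list
gcloud compute target-http-proxies list
gcloud compute target-https-proxies list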
I think your NGINX configuration needs to be adjusted to listen on port 443, and you need to get the SSL certificate accordingly.
Please refer : https://cloud.google.com/community/tutorials/https-load-balancing-nginx.
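If you terminate TLS on the Compute Engine instance itself instead of on the load balancer, one common way to obtain a certificate and have nginx reconfigured for port 443 is certbot's nginx plugin (the domains below reuse the question's placeholders; this is just one option, not the accepted fix):
# Debian 10: install certbot with the nginx plugin, then request and install the certificate
sudo apt install certbot python3-certbot-nginx
sudo certbot --nginx -d example.in -d www.example.in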

HTTP/2 server pushed assets fail to load (HTTP2_CLIENT_REFUSED_STREAM)

I have the following error, ERR_HTTP2_CLIENT_REFUSED_STREAM, in the Chrome DevTools console for all or some of the assets pushed via HTTP/2.
Refreshing the page and clearing the cache randomly fixes this issue partially and sometimes completely.
I am using nginx with http2 enabled (ssl via certbot) and cloudflare.
server {
server_name $HOST;
root /var/www/$HOST/current/public;
add_header X-Frame-Options "SAMEORIGIN";
add_header X-XSS-Protection "1; mode=block";
add_header X-Content-Type-Options "nosniff";
index index.html index.htm index.php;
charset utf-8;
set $auth "dev-Server.";
if ($request_uri ~ ^/opcache-api/.*$){
set $auth off;
}
auth_basic $auth;
auth_basic_user_file /etc/nginx/.htpasswd;
location / {
try_files $uri $uri/ /index.php?$query_string;
}
location = /favicon.ico { access_log off; log_not_found off; }
location = /robots.txt { access_log off; log_not_found off; }
error_page 404 /index.php;
location ~ \.php$ {
fastcgi_pass unix:/var/run/php/php7.2-fpm.sock;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
include fastcgi_params;
}
location ~ /\.(?!well-known).* {
deny all;
}
location ~* \.(?:css|js)$ {
access_log off;
log_not_found off;
# Let laravel query strings burst the cache
expires 1M;
add_header Cache-Control "public";
# Or force cache revalidation.
# add_header Cache-Control "public, no-cache, must-revalidate";
}
location ~* \.(?:jpg|jpeg|gif|png|ico|xml|svg|webp)$ {
access_log off;
log_not_found off;
expires 6M;
add_header Cache-Control "public";
}
location ~* \.(?:woff|ttf|otf|woff2|eot)$ {
access_log off;
log_not_found off;
expires max;
add_header Cache-Control "public";
types {font/opentype otf;}
types {application/vnd.ms-fontobject eot;}
types {font/truetype ttf;}
types {application/font-woff woff;}
types {font/x-woff woff2;}
}
listen 443 ssl http2;
ssl_certificate /etc/letsencrypt/live/$HOST/fullchain.pem; # managed by Certbot
ssl_certificate_key /etc/letsencrypt/live/$HOST/privkey.pem; # managed by Certbot
# include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
ssl_ciphers EECDH+CHACHA20:EECDH+AES128:RSA+AES128:EECDH+AES256:RSA+AES256:EECDH+3DES:RSA+3DES:!MD5;
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
server {
if ($host = $HOST) {
return 301 https://$host$request_uri;
} # managed by Certbot
server_name $HOST;
listen 80;
return 404; # managed by Certbot
}
Googling this error doesn't return many results. If it helps, it's a Laravel 6 app that pushes those assets; if I disable asset pushing in Laravel, then all assets load correctly.
I don't even know where to start looking.
Update 1
I enabled Chrome logging and inspected the logs using Sawbuck, following the instructions provided here, and found that the actual error is related to a 414 HTTP response, which implies some caching problem.
Update 2
I found this great article, "The browser can abort pushed items if it already has them", which states the following:
Chrome will reject pushes if it already has the item in the push cache. It rejects with PROTOCOL_ERROR rather than CANCEL or REFUSED_STREAM.
it also links to some chrome and mozilla bugs.
This led me to disable Cloudflare completely and test directly against the server. I tried various Cache-Control directives and also tried disabling the header, but the same error occurs upon a refresh after a cache clear.
Apparently, Chrome cancels the HTTP/2 pushed asset even when it is not present in the push cache, leaving the page broken.
For now, I'm disabling HTTP/2 server push in the Laravel app as a temporary fix.
We just got the exact same problem you're describing. We got "net::ERR_HTTP2_CLIENT_REFUSED_STREAM" on one of our Javascript files. Reloading and clearing cache worked but then the problem came back, seemingly randomly. The same issue in Chrome and Edge (Chromium based). Then I tried in Firefox and got the same behavior but Firefox complained that the response for that url was "text/html". My guess is that for some reason we had gotten a "text/html" response cached for that url in Cloudflare. When I opened that url directly in Firefox I got "application/javascript" and then the problem went away. Still not quite sure how this all happened though.
EDIT:
In our case it turned out that the response for a .js file was blocked by the server with a 401 and we didn't send out any cache headers. Cloudflare tried to be helpful: because the browser was expecting a .js file, the response was cached even though the status was 401. This later failed for others because we tried to HTTP/2-push a text/html response with status 401 as a .js file. Luckily, Firefox gave us a better, actionable error message.
EDIT2:
Turns out it wasn't an HTTP header cache issue. It was that we had cookie authentication on the .js files, and it seems that HTTP/2 push requests don't always include cookies. The fix was to allow cookieless requests on the .js files.
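For anyone checking whether a stale text/html response is being served for a .js asset, as described above, a HEAD request against Cloudflare and against the origin shows the Content-Type quickly (host, path, and ORIGIN_IP are placeholders):
# Through Cloudflare
curl -sI https://example.com/js/app.js | grep -i '^content-type'
# Directly against the origin, bypassing Cloudflare's proxy
curl -sI --resolve example.com:443:ORIGIN_IP https://example.com/js/app.js | grep -i '^content-type'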
For anyone who comes here using Symfony's local development server (symfony server:start): I could not fix it, but this setting (which is the default setting) stops the server from trying to push preloaded assets, and the error goes away:
config/packages/dev/webpack_encore.yaml
webpack_encore:
    preload: false

Nginx + PHP-FPM is very slow on Mountain Lion

I have set up Nginx with PHP-FPM on my MacBook running ML. It works fine but it takes between 5 and 10 seconds to connect when I run a page in my browser. Even the following PHP script:
<?php
die();
takes about 5 seconds to connect. I am using Chrome and I get the "Sending request" message in the status bar for around 7 seconds. If I refresh again it seems to work instantly, but if I leave it for around 10 seconds it will "sleep" again. It is as if nginx or PHP is going to sleep and then taking ages to wake up again.
Edit: This is also affecting static files on the server so it would seem to be an issue with DNS or nginx.
Can anyone help me figure out what is causing this?
nginx.conf
worker_processes 2;
events {
worker_connections 1024;
}
http {
include mime.types;
default_type text/plain;
server_tokens off;
sendfile on;
tcp_nopush on;
keepalive_timeout 1;
gzip on;
gzip_comp_level 2;
gzip_proxied any;
gzip_types text/plain text/css text/javascript application/json application/x-javascript text/xml application/xml application/xml+rss;
index index.html index.php;
upstream www-upstream-pool{
server unix:/tmp/php-fpm.sock;
}
include sites-enabled/*;
}
php-fpm.conf
[global]
pid = /usr/local/etc/php/var/run/php-fpm.pid
; run in background or in foreground?
; set daemonize = no for debugging
daemonize = yes
include=/usr/local/etc/php/5.4/pool.d/*.conf
pool.conf
[www]
user=matt
group=staff
pm = dynamic
pm.max_children = 10
pm.start_servers = 5
pm.min_spare_servers = 5
pm.max_spare_servers = 10
pm.max_requests = 500
listen = /tmp/php-fpm.sock
;listen = 127.0.0.1:9000
php_flag[display_errors] = off
sites-available/cft
server {
listen 80;
server_name cft.local;
root /Users/matt/Sites/cft/www;
access_log /Users/matt/Sites/cft/log/access.log;
error_log /Users/matt/Sites/cft/log/error.log;
index index.php;
location / {
try_files $uri $uri/ /index.php?$args;
}
include fastcgi_php_default.conf;
}
fastcgi_php_default.conf
fastcgi_intercept_errors on;
location ~ \.php$
{
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param QUERY_STRING $query_string;
fastcgi_param REQUEST_METHOD $request_method;
fastcgi_param CONTENT_TYPE $content_type;
fastcgi_param CONTENT_LENGTH $content_length;
fastcgi_param SCRIPT_NAME $fastcgi_script_name;
fastcgi_param REQUEST_URI $request_uri;
fastcgi_param DOCUMENT_URI $document_uri;
fastcgi_param DOCUMENT_ROOT $document_root;
fastcgi_param SERVER_PROTOCOL $server_protocol;
fastcgi_param HTTPS $https if_not_empty;
fastcgi_param GATEWAY_INTERFACE CGI/1.1;
fastcgi_param SERVER_SOFTWARE nginx/$nginx_version;
fastcgi_param REMOTE_ADDR $remote_addr;
fastcgi_param REMOTE_PORT $remote_port;
fastcgi_param SERVER_ADDR $server_addr;
fastcgi_param SERVER_PORT $server_port;
fastcgi_param SERVER_NAME $server_name;
fastcgi_param PATH_INFO $fastcgi_path_info;
fastcgi_param PATH_TRANSLATED $document_root$fastcgi_path_info;
fastcgi_read_timeout 300;
fastcgi_pass www-upstream-pool;
fastcgi_index index.php;
}
fastcgi_param REDIRECT_STATUS 200;
One reason could be - as already suspected above - that your server works perfectly but there is something wrong with DNS lookups.
Such long times are usually caused by a lookup that tries one method, times out, then retries another method that works, and caches the result.
Caching of the working lookup would explain why your second HTTP request is fast.
I am almost sure that this is caused by a DNS lookup which first queries an unreachable service, gives up after a timeout period, then tries a working service and caches the result for a couple of minutes.
Apple has recently made a significant change in how the OS handles requests for ".local" name resolution that can adversely affect Active Directory authentication and DFS resolution.
When processing a ".local" request, the Mac OS now sends a Multicast DNS (mDNS) or broadcast, then waits for that request to timeout before correctly sending the information to the DNS server. The delay caused by this results in an authentication failure in most cases.
http://www.thursby.com/local-domain-login-10.7.html
They are offering to set the timeout to the smallest possible value, which apparently is still 1 second - not really satisfying.
I suggest using localhost or 127.0.0.1, or trying http://test.dev as a local domain:
/etc/hosts
127.0.0.1 localhost test.dev
EDIT
In OS X, .local really does seem to be a reserved TLD for LAN devices. Using another domain, as suggested above, will definitely solve this problem.
http://support.apple.com/kb/HT3473
EDIT 2
Found this great article which exactly describes your problem and how to solve it
http://www.dmitry-dulepov.com/2012/07/os-x-lion-and-local-dns-issues.html?m=1
I can't see anything in your configuration that would cause this behaviour alone. Since the Nginx configuration looks OK, and this affects both static and CGI requests, I would suspect it is a system issue.
An issue that might be worth investigating is whether the problem is being caused by IPv6 negotiation on your server.
If you are using loopback (127.0.0.1) as your listen address, have a look in /etc/hosts and ensure that the following lines are present:
::1 localhost6.localdomain6 localhost6
::1 site.local cft.local
If this doesn't resolve the issue, I'm afraid you'll need to look at more advanced diagnostics, perhaps using strace or similar on the Nginx instance.
(this started as a comment but it's getting a bit long)
There's something very broken here - but I don't see anything obvious in your config to explain it. I'd start by looking at top and netstat while the request is in progress, and checking your logs (web server and system) after the request has been processed. If that still reveals nothing, the next step would be to capture all the network traffic - the most likely causes for such a long delay are failed ident / DNS lookups.
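A couple of quick checks along those lines on OS X, assuming the cft.local host name from the question:
# Time the request and break out DNS vs connect vs total; a multi-second name lookup points at .local/mDNS resolution
time curl -o /dev/null -s -w 'dns: %{time_namelookup}s connect: %{time_connect}s total: %{time_total}s\n' http://cft.local/
# Query the system resolver directly
dscacheutil -q host -a name cft.local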
Barring any configuration-related issues, it may be a compile issue.
I would advise that you install nginx from homebrew, the OS X open source package manager.
If you don't yet have homebrew (and every developer should!), you can grab it from here or just run this in a terminal:
ruby -e "$(curl -fsSkL raw.github.com/mxcl/homebrew/go)"
Then run
brew install nginx
And nginx will be deployed from the homebrew repository. Obviously, you'll want to make sure that you removed your old copy of nginx first.
The reason I recommend this is that homebrew has a number of OS X-specific patches for each open source library that needs it and is heavily used and tested by the community. I've used nginx from homebrew in the past on OS X and had no problems to speak of.
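After installing, it may be worth confirming that the shell actually resolves the Homebrew-installed binary rather than an older copy (standard commands, output depends on your setup):
# Which nginx is on the PATH, and how it was built
which nginx
nginx -V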

How do I configure nginx and CodeIgniter?

I'm running nginx on my home computer for development. I also have it linked to DynDNS so I can show progress to my co-worker a bit more easily. I can't seem to get nginx to rewrite to CodeIgniter properly. I have CodeIgniter's uri_protocol set to REQUEST_URI.
All pages that should be showing up with content show up completely blank. If I put phpinfo(); die(); in the index.php file of CodeIgniter, it works as expected and I get the phpinfo output.
Also, pages that should give a 404 give a proper CodeIgniter 404 error.
Here's what I have so far.
user www-data;
worker_processes 1;
error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
# multi_accept on;
}
http {
include /etc/nginx/mime.types;
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
sendfile on;
#tcp_nopush on;
#keepalive_timeout 0;
keepalive_timeout 65;
tcp_nodelay on;
#gzip on;
#gzip_disable "MSIE [1-6]\.(?!.*SV1)";
root /home/zack/Development/public_html;
include /etc/nginx/conf.d/*.conf;
#include /etc/nginx/sites-enabled/*;
server {
listen 80;
server_name zackhovatter.dyndns.info;
index index.php;
root /home/zack/Development/public_html;
location /codeigniter/ {
if (!-e $request_filename) {
rewrite ^/codeigniter/(.*)$ /codeigniter/index.php/$1 last;
}
}
#this is for the index.php in the root folder
location /index.php {
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME /home/zack/Development/public_html/index.php;
include fastcgi_params;
}
location /codeigniter/index.php {
fastcgi_index index.php;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME /home/zack/Development/public_html/codeigniter/index.php;
fastcgi_param REQUEST_URI $request_uri;
fastcgi_param QUERY_STRING $query_string;
fastcgi_param PATH_INFO $document_uri;
fastcgi_param REQUEST_METHOD $request_method;
fastcgi_param CONTENT_TYPE $content_type;
fastcgi_param CONTENT_LENGTH $content_length;
fastcgi_pass 127.0.0.1:9000;
}
}
}
Actually I got this working. Not sure how, but I think it was file permissions being set incorrectly or something.
Edit:
For those who are wondering, I had an older version of PHP installed it seems. Reverting back to old class constructors fixed the issue. Darnit.
I had exactly the same problem (CodeIgniter gives blank pages while the error pages and PHP itself are working), and I found that the problem came from the mysql db driver, which should be changed to the mysqli driver.
So we just need to change the db driver in config/database.php:
$db['default']['dbdriver'] = 'mysqli';
This is because the mysql driver is deprecated in newer versions of PHP.
CodeIgniter4: How To Set Up NGINX Server Blocks (Virtual Hosts) on Raspberry Pi, Debian 10.4
I am using a Raspberry Pi 4 as a web development server with different projects running on it.
Visual Studio Code makes editing files via the Remote-SSH extension very easy.
Setting up CI4 on NGINX gives you the opportunity to run different projects on the same server.
Because it took me some days to get this configuration running, I will give you a quick reference guide so you can set it up quickly and easily.
If you have not installed NGINX and composer yet please have a look here:
http://nginx.org/en/docs/beginners_guide.html
https://codeigniter4.github.io/userguide/installation/installing_composer.html
https://getcomposer.org/download/
CodeIgniter4 installation via composer
server ip: 10.0.0.10
port: 8085
project name is 'test'
Please modify to your needs!
cd /var/www/html
composer create-project codeigniter4/appstarter test
The command above will create a 'test' folder at /var/www/html/test.
Modify the 'env' file
sudo nano /var/www/html/test/env
app.baseURL = 'http://10.0.0.10:8085'
Important: Do NOT use 'localhost' for the server URL!
Please uncomment:
# CI_ENVIRONMENT = production
and modify:
CI_ENVIRONMENT = development
Save the file as '.env' to hide the file
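If you prefer doing it from the shell instead of nano's save-as, copying the template works the same way (paths from the example above):
cp /var/www/html/test/env /var/www/html/test/.env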
NGINX Server Blocks
cd /etc/nginx/sites-available
touch test
sudo nano test
Paste this into the file and save after modifying to your requirements:
server {
    listen 8085;
    listen [::]:8085;
    server_name test;
    root /var/www/html/test/public;
    index index.php index.html index.htm;
    location / {
        try_files $uri $uri/ /index.php$is_args$args;
    }
    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        # With php-fpm:
        fastcgi_pass unix:/run/php/php7.4-fpm.sock;
        # With php-cgi:
        # fastcgi_pass 127.0.0.1:9000;
    }
    error_page 404 /index.php;
    # deny access to hidden files such as .htaccess
    location ~ /\. {
        deny all;
    }
}
After saving the file create a symbolic link to sites-enabled:
cd /etc/nginx/sites-enabled
sudo ln -s /etc/nginx/sites-available/test /etc/nginx/sites-enabled/test
Modify File and Group Permissions
chown -v -R pi:www-data /var/www/html/test
sudo chmod -v -R 775 /var/www/html/test
Start NGINX
sudo service nginx start
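If nginx was already running, testing the configuration and reloading it is enough (standard nginx commands):
# Validate the configuration, then reload without dropping connections
sudo nginx -t
sudo service nginx reload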
Open CI4 Welcome Page
http://10.0.0.10:8085
Here is a solution.
https://web.archive.org/web/20120504010725/http://hiteshjoshi.com/linux/secure-nginx-and-codeigniter-configuration.html
Also, I had to change $config['uri_protocol'] = "DOCUMENT_URI"; to make it work.
UPDATE: Fixed the 404 url from web archive.
