How to configure nginx, vite and laravel for use with HMR via a reverse proxy? - laravel

I've only just got into Laravel/Vue3 so I'm working off the basics. However, I have an existing Docker ecosystem that I use for local dev and an nginx reverse proxy to keep my many projects separate.
I'm having trouble getting HMR working and even more trouble finding appropriate documentation on how to configure Vite and Nginx so I can have a single HTTPS entry point in nginx and proxy back to Laravel and Vite.
The build is based on https://github.com/laravel-presets/inertia/tree/boilerplate.
For completeness, this is the package.json, just in case it changes:
{
    "private": true,
    "scripts": {
        "dev": "vite",
        "build": "vite build"
    },
    "devDependencies": {
        "@vitejs/plugin-vue": "^2.3.1",
        "@vue/compiler-sfc": "^3.2.33",
        "autoprefixer": "^10.4.5",
        "postcss": "^8.4.12",
        "tailwindcss": "^3.0.24",
        "vite": "^2.9.5",
        "vite-plugin-laravel": "^0.2.0-beta.10"
    },
    "dependencies": {
        "vue": "^3.2.31",
        "@inertiajs/inertia": "^0.11.0",
        "@inertiajs/inertia-vue3": "^0.6.0"
    }
}
To keep things simple, I'm going to try and get it working under HTTP only and deal with HTTPS later.
Because I'm running the dev server in a container, I've set server.host to 0.0.0.0 in vite.config.ts so that connections are accepted from outside the container, and server.hmr.clientPort to 80 so that the browser connects on the public port 80 instead of the default 3000.
I've tried setting the DEV_SERVER_URL to be the same as the APP_URL so that all traffic from the public site goes to the same place. But I'm not sure what the nginx side of things should look like.
I've also tried setting the DEV_SERVER_URL to http://0.0.0.0:3000/ so I can see what traffic is being generated. This almost works, but is grossly wrong: it fails on the ws://0.0.0.0/ WebSocket connection, and it would not be appropriate once HTTPS is involved.
I have noticed calls to /__vite_plugin, which I'm going to assume is the default ping_url that would normally be set in config/vite.php.
Looking for guidance on which nginx locations should forward to the Laravel port and which should forward to the Vite port, and what that should look like so that WebSocket communication is also catered for.
I've seen discussions that Vite 3 may make this setup easier, but I'd like to deal with what is available right now.

The answer appears to be in knowing which directories to proxy to Vite and being able to isolate the web socket used for HMR.
To that end, you will want to do the following:
Ensure that your .env APP_URL and DEV_SERVER_URL match.
In your vite.config.ts, ensure that the server.host is '0.0.0.0' so that connections can be accepted from outside of the container.
In your vite.config.ts specify a base such as '/app/' so that all HMR traffic can be isolated and redirected to the Vite server while you are running npm run dev. You may wish to use something else, such as '/_dev/' or '/_vite/', if that path could clash with real paths in your Laravel or Vite app.
In your config/vite.php set ping_url to http://localhost:3000. This allows Laravel to ping the Vite server locally, so the manifest is not used and the dev server is used instead. This also assumes that ping_before_using_manifest is set to true.
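Taken together, the vite.config.ts side of those steps might look like this sketch (the exact plugin import and option names depend on your vite-plugin-laravel version, so treat them as assumptions):

```typescript
import { defineConfig } from 'vite'
import laravel from 'vite-plugin-laravel'
import vue from '@vitejs/plugin-vue'

export default defineConfig({
    // '/app/' isolates HMR traffic so nginx can proxy it to Vite
    base: '/app/',
    server: {
        host: '0.0.0.0',    // accept connections from outside the container
        hmr: {
            clientPort: 80, // browser connects on the public port, not 3000
        },
    },
    plugins: [laravel(), vue()],
})
```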
Lastly, you want to configure your nginx proxy so that a number of locations are specifically proxied to the Vite server, and the rest goes to the Laravel server.
I am not an Nginx expert, so there may be a way to declare the following more succinctly.
Sample Nginx server entry
# Some standard proxy variables
map $http_x_forwarded_proto $proxy_x_forwarded_proto {
    default $http_x_forwarded_proto;
    '' $scheme;
}
map $http_x_forwarded_port $proxy_x_forwarded_port {
    default $http_x_forwarded_port;
    '' $server_port;
}
map $http_upgrade $proxy_connection {
    default upgrade;
    '' close;
}
map $scheme $proxy_x_forwarded_ssl {
    default off;
    https on;
}
server {
    listen *:80;
    server_name vite-inertia-vue-app.test;
    # abridged version that does not include gzip_types, resolver, *_log and other headers
    location ^~ /resources/ {
        proxy_pass http://198.18.0.1:3000;
        include /etc/nginx/vite-inertia-vue-app.test.include;
    }
    location ^~ /@vite {
        proxy_pass http://198.18.0.1:3000;
        include /etc/nginx/vite-inertia-vue-app.test.include;
    }
    location ^~ /app/ {
        proxy_pass http://198.18.0.1:3000;
        include /etc/nginx/vite-inertia-vue-app.test.include;
    }
    location / {
        proxy_pass http://198.18.0.1:8082;
        include /etc/nginx/vite-inertia-vue-app.test.include;
    }
}
vite-inertia-vue-app.test.include to include common proxy settings
proxy_read_timeout 190;
proxy_connect_timeout 3;
proxy_redirect off;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $proxy_connection;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $proxy_x_forwarded_proto;
proxy_set_header X-Forwarded-Ssl $proxy_x_forwarded_ssl;
proxy_set_header X-Forwarded-Port $proxy_x_forwarded_port;
proxy_set_header Proxy "";
My Nginx instance runs in a local Docker Swarm and I use a loopback interface (198.18.0.1) to hit open ports on my machine. Your mileage may vary. Port 3000 is the Vite server; port 8082 is the Laravel server.
At some point, I may investigate using the hostname as it is declared in the docker-compose stack, though I'm not too sure how well this holds up when communicating between Docker Swarm and a regular container stack. The point would be not to have to allocate unique ports for the Laravel and Vite servers if I ended up running multiple projects at the same time.
Entry points /@vite and /resources are used when the app initially launches, by the script and link tags in the header. After that, all HMR activity uses /app/.
The next challenge will be adding a self-signed cert, as I plan to integrate some Azure B2C sign-in, but I think that may just involve updating the Nginx config to cater for TLS and updating APP_URL and DEV_SERVER_URL in the .env to match.
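For reference, the TLS side might end up as a second server block along these lines (a sketch only; the certificate paths are placeholders for wherever your self-signed cert lives):

```nginx
server {
    listen *:443 ssl;
    server_name vite-inertia-vue-app.test;
    # placeholder paths for a self-signed certificate
    ssl_certificate     /etc/nginx/certs/vite-inertia-vue-app.test.crt;
    ssl_certificate_key /etc/nginx/certs/vite-inertia-vue-app.test.key;
    # ...same /resources/, /@vite and /app/ locations as the HTTP block...
    location / {
        proxy_pass http://198.18.0.1:8082;
        include /etc/nginx/vite-inertia-vue-app.test.include;
    }
}
```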

Related

Simple Nginx proxy pass not working with Laravel Valet installed

Quick backstory: I used Laravel Valet to set up a local development environment. I've since containerized the application and would like Nginx to proxy the main port 80 traffic to localhost:8000, where the Docker container is listening. I've tried to remove (unpark/stop) Valet and commented out the lines in nginx.conf that refer to the Valet config files, but nothing seems to work.
Here is my conf:
server {
    listen 80;
    server_name app.trucase.test paylaw.trucase.test;
    location / {
        proxy_pass http://127.0.0.0:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
I can go to app.trucase.test:8000 and it all works. But without the port, Nginx just isn't proxying the traffic. What am I missing?

Devilbox (docker) + Laravel Websockets

Trying to get the two to work together. Is there something I'm missing or way to debug why it's not working?
Edited .devilbox/nginx.yml as suggested here, although I'm trying to contain it to the path /wsapp:
---
###
### Basic vHost skeleton
###
vhost: |
  server {
      listen __PORT____DEFAULT_VHOST__;
      server_name __VHOST_NAME__ *.__VHOST_NAME__;
      access_log "__ACCESS_LOG__" combined;
      error_log "__ERROR_LOG__" warn;

      # Reverse Proxy definition (Ensure to adjust the port, currently '6001')
      location /wsapp/ {
          proxy_http_version 1.1;
          proxy_set_header Upgrade $http_upgrade;
          proxy_set_header Connection "Upgrade";
          proxy_set_header Host $host;
          proxy_set_header X-Real-IP $remote_addr;
          proxy_pass http://php:6001;
      }

      __REDIRECT__
      __SSL__
      __VHOST_DOCROOT__
      __VHOST_RPROXY__
      __PHP_FPM__
      __ALIASES__
      __DENIES__
      __SERVER_STATUS__

      # Custom directives
      __CUSTOM__
  }
Installed laravel-websockets and configured to use '/wsapp'
Visit the dashboard to test:
https://example.local/laravel-websockets
But console has error:
Firefox can’t establish a connection to the server at
wss://example.local:6001/wsapp/app/a558686cac00228eb003?protocol=7&client=js&version=4.3.1&flash=false.
2 pusher.min.js:8:6335 The connection to
wss://example.local:6001/wsapp/app/a558686cac00228eb003?protocol=7&client=js&version=4.3.1&flash=false
was interrupted while the page was loading. pusher.min.js:8:6335
I've created a setup that works.
First, you need two domains in Devilbox:
one for your Laravel app (example.local)
one for your Laravel WebSocket server (socket.example.local)
In your socket.example.local directory, create htdocs and .devilbox; in .devilbox you'll add your nginx.yml file.
When you connect to your socket, don't use the port anymore, and don't isolate the socket to /wsapp anymore.
Use socket.example.local as the PUSHER_HOST value in .env, and run your Laravel WebSocket server on example.local.
Visit the /laravel-websockets dashboard, remove the port value, then click connect.
I don't suggest serving your socket at /wsapp because it's hard to configure nginx to serve two apps (it's hard for me; maybe someone more expert in nginx can suggest something for that setup).
But that's my solution. If you didn't understand, please do comment.
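Under that subdomain approach, the nginx.yml for socket.example.local might look something like this sketch (the Devilbox placeholders are kept as-is, and php:6001 assumes the websocket server listens on the PHP container's port 6001 as in the question):

```yaml
---
vhost: |
  server {
      listen __PORT____DEFAULT_VHOST__;
      server_name __VHOST_NAME__ *.__VHOST_NAME__;
      # proxy the whole vhost to the websocket server, no /wsapp prefix
      location / {
          proxy_http_version 1.1;
          proxy_set_header Upgrade $http_upgrade;
          proxy_set_header Connection "Upgrade";
          proxy_set_header Host $host;
          proxy_set_header X-Real-IP $remote_addr;
          proxy_pass http://php:6001;
      }
      __REDIRECT__
      __SSL__
      __CUSTOM__
  }
```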

Django media file not serve in production but serving in development (localhost)?

Django media files are served in development but not in production. Whatever image I upload through the Django admin is served on the website on localhost, but when I put my site live on DigitalOcean it is not displayed. How do I solve this issue? My website url: http://139.59.56.161 (click on the book test menu).
Resurrecting a long-dead question which was all I could find to help me out here. Recording my answer for posterity. My "production" environment uses nginx as a reverse proxy in front of uwsgi hosting my django application. The solution is that Django just does not serve files in Production; instead you should configure your web-server to do that.
Django is slightly unhelpful in talking about static files and then saying 'media files: same.'
So, I believe it's best to catch file requests up front, in my case in the nginx server, to reduce double-handling; your front-end web server is also the most optimised for the job.
To do this:
within a server definition block in your /etc/nginx/sites-available/[site.conf], define the webroot, the directory on your server's file system that covers everything, with the declaration 'root [dir]'.
server {
listen 80;
server_name example.com www.example.com;
root /srv/;
This next block tells nginx to send all the traffic to the uwsgi service running django - I lifted it holus bolus from an example, probably on digitalocean.com.
location / {
    include uwsgi_params;
    uwsgi_pass unix:/run/uwsgi/mysite6.sock;
    uwsgi_read_timeout 1800;
    uwsgi_connect_timeout 1800;
    # (the example I lifted this from also set proxy_pass and proxy_* headers,
    # but those conflict with uwsgi_pass; a location should use one or the other)
}
Now, here are the bits we need to serve files when they are requested. try_files attempts to serve $uri and then $uri/, and it would be a good idea to put a file like 'resource_not_found.html' in /srv and set it as the last fallback for try_files, so the user knows that this part has been unintentionally left blank.
    location /static/ {
        try_files $uri $uri/;
    }
    location /media/ {
        try_files $uri $uri/;
    }
}
That concludes our server block for http, hence the extra close "}".
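Following the suggestion above, the last-fallback version of those location blocks might look like this (a sketch assuming a /srv/resource_not_found.html file exists):

```nginx
location /static/ {
    try_files $uri $uri/ /resource_not_found.html;
}
location /media/ {
    try_files $uri $uri/ /resource_not_found.html;
}
```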
Alternatively, you can get uwsgi doing it by setting 'static-map' or 'static-map2'. 'static-map' "eats" the mapped url part, whereas static-map2 adds it.
static-map /files=/srv/files
means a request for /files/funny.gif will serve /srv/files/funny.gif.
static-map2 /files=/srv
will do the same thing, because it will take a request for /files/funny.gif and look for /srv/files/funny.gif. As per the uwsgi docs, you can create as many of these mappings as you want, even to the same uri, and they will be checked in order of appearance. Damnit, I've just now finally found the docs for nginx open source.
uwsgi docs

trouble getting a file from node.js using nginx reverse proxy

I have set up an nginx reverse proxy to node essentially using this set up reproduced below:
upstream nodejs {
    server localhost:3000;
}
server {
    listen 8080;
    server_name localhost;
    root ~/workspace/test/app;
    location / {
        try_files $uri $uri/ @nodejs;
    }
    location @nodejs {
        proxy_redirect off;
        proxy_http_version 1.1;
        proxy_pass http://nodejs;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
Now all my AJAX POST requests travel just fine to the node with this set up, but I am polling for files afterward that I cannot find when I make a clientside AJAX GET request to the node server (via this nginx proxy).
For example, for a clientside javascript request like .get('Users/myfile.txt') the browser will look for the file on localhost:8080 but won't find it because it's actually written to localhost:3000
http://localhost:8080/Users/myfile.txt // what the browser searches for
http://localhost:3000/Users/myfile.txt // where the file really is
How do I set up the proxy to navigate through to this file?
Okay, I got it working. The set up in the nginx.conf file posted above is just fine. This problem was never an nginx problem. The problem was in my index.js file over on the node server.
When I got nginx to serve all the static files, I commented out the following line from index.js
app.use(express.static('Users')); // please don't comment this out thank you
It took me a while to troubleshoot my way back to this, as I was pretty wrapped up in understanding nginx. My thinking at the time was that if nginx is serving static files, why would I need express to serve them? Without this line, however, express won't serve any files at all.
Now with express serving static files properly, nginx handles all static files from the web app and node handles all the files from the backend and all is good.
Thanks to Keenan Lawrence for the guidance and AR7 for the config!

Laravel+Homestead+nginx proxy: Getting the right url in my routes

I have a dedicated server with a Laravel 5 project and Homestead properly installed and working.
I just added a todo.app line in the Homestead.yml like this:
sites:
- map: todo.app
to: /home/vagrant/Code/todo-app/public
To access it from the outside, I configured an nginx reverse proxy like this:
location /todo/ {
    resolver 127.0.0.1;
    proxy_pass http://todo.app:8000/;
    proxy_redirect off;
    proxy_set_header Host $host:$server_port;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
Then, I added a line in the hosts file resolving todo.app to 127.0.0.1 and installed dnsmasq as resolver for nginx.
If I browse http://dev.mydomain.com/todo, everything works except my routes: every URL the framework generates forgets the subdirectory. For instance, the login URL is http://dev.mydomain.com/login but should be http://dev.mydomain.com/todo/login. Changing the APP_URL in the .env file won't help.
Have you tried grouping all your routes and adding a 'todo' prefix to all?
Here is an example:
Route::group(['prefix' => 'todo'], function () {
    Route::get('index', 'HomeController@index')->name('Homepage');
    ...
    // The rest of the routes
});
This could be your solution if you're willing to touch your code for this installation.
If you need custom behaviours on the predefined routes and are using:
Route::auth();
in your routes file, you should remove the line and create your own routes. You can use the following screenshot as a guide to see what the previous command automatically generates for you, and use it to customize and create your own:
Please review: http://i.stack.imgur.com/ep9cD.png
You can use the:
php artisan route:list
command to replicate the image results in your own solution.
