trouble getting a file from node.js using nginx reverse proxy - ajax

I have set up an nginx reverse proxy to Node, essentially using the setup reproduced below:
upstream nodejs {
    server localhost:3000;
}
server {
    listen 8080;
    server_name localhost;
    root ~/workspace/test/app;
    location / {
        try_files $uri $uri/ @nodejs;
    }
    location @nodejs {
        proxy_redirect off;
        proxy_http_version 1.1;
        proxy_pass http://nodejs;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
Now all my AJAX POST requests travel to node just fine with this setup, but afterwards I poll for files that I cannot find when I make a client-side AJAX GET request to the node server (via this nginx proxy).
For example, for a client-side JavaScript request like .get('Users/myfile.txt'), the browser will look for the file on localhost:8080 but won't find it, because it's actually written to localhost:3000:
http://localhost:8080/Users/myfile.txt // what the browser searches for
http://localhost:3000/Users/myfile.txt // where the file really is
How do I set up the proxy to navigate through to this file?

Okay, I got it working. The setup in the nginx.conf file posted above is just fine. This problem was never an nginx problem; the problem was in my index.js file over on the node server.
When I got nginx to serve all the static files, I commented out the following line from index.js
app.use(express.static('Users')); // please don't comment this out thank you
It took me a while to troubleshoot my way back to this, as I was pretty wrapped up in understanding nginx. My thinking at the time was: if nginx is serving static files, why would I need express to serve them? Without this line, however, express won't serve any files at all.
Now, with express serving static files properly, nginx handles all the static files for the web app, node handles all the files from the backend, and all is good.
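(For completeness, the alternative I had been toying with, letting nginx serve those node-written files directly instead of express, would look roughly like the sketch below; the /path/to/Users directory is a hypothetical stand-in for wherever the node app writes them, and nginx would need read access to it. I didn't go this route.)
location /Users/ {
    # serve node-generated files straight from disk instead of via express.static
    alias /path/to/Users/;
}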
Thanks to Keenan Lawrence for the guidance and AR7 for the config!

Related

Devilbox (docker) + Laravel Websockets

Trying to get the two to work together. Is there something I'm missing or way to debug why it's not working?
Edited .devilbox/nginx.yml as suggested here, although trying to contain it to the /wsapp path:
---
###
### Basic vHost skeleton
###
vhost: |
  server {
      listen __PORT____DEFAULT_VHOST__;
      server_name __VHOST_NAME__ *.__VHOST_NAME__;
      access_log "__ACCESS_LOG__" combined;
      error_log "__ERROR_LOG__" warn;
      # Reverse Proxy definition (Ensure to adjust the port, currently '6001')
      location /wsapp/ {
          proxy_http_version 1.1;
          proxy_set_header Upgrade $http_upgrade;
          proxy_set_header Connection "Upgrade";
          proxy_set_header Host $host;
          proxy_set_header X-Real-IP $remote_addr;
          proxy_pass http://php:6001;
      }
      __REDIRECT__
      __SSL__
      __VHOST_DOCROOT__
      __VHOST_RPROXY__
      __PHP_FPM__
      __ALIASES__
      __DENIES__
      __SERVER_STATUS__
      # Custom directives
      __CUSTOM__
  }
Installed laravel-websockets and configured to use '/wsapp'
Visit the dashboard to test:
https://example.local/laravel-websockets
But the console shows this error:
Firefox can’t establish a connection to the server at
wss://example.local:6001/wsapp/app/a558686cac00228eb003?protocol=7&client=js&version=4.3.1&flash=false.
2 pusher.min.js:8:6335 The connection to
wss://example.local:6001/wsapp/app/a558686cac00228eb003?protocol=7&client=js&version=4.3.1&flash=false
was interrupted while the page was loading. pusher.min.js:8:6335
I've created a setup that works.
First, you need two domains in devilbox:
one for your Laravel app (example.local),
one for your Laravel WebSocket server (socket.example.local).
In your socket.example.local directory, create htdocs and .devilbox; the .devilbox folder is where you'll add your nginx.yml file.
When you try to connect to your socket, don't use the port anymore, and don't isolate the socket under /wsapp anymore.
Use socket.example.local as the PUSHER_HOST value in .env.
Run your laravel-websockets server from example.local, visit the /laravel-websockets dashboard, remove the port value, then click Connect.
I don't suggest serving your socket under /wsapp, because it's hard to configure nginx to serve two apps from one vhost (hard for me at least; maybe someone more expert on nginx can suggest something for that setup).
But that's my solution; if something isn't clear, please comment.
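For reference, a rough sketch of what the .devilbox/nginx.yml for socket.example.local could look like, reusing the vhost template from the question but sending everything (no /wsapp/ prefix, no port in the client URL) to the websocket server on php:6001. Treat this as an assumption-laden sketch, not a verified config:
---
vhost: |
  server {
      listen __PORT____DEFAULT_VHOST__;
      server_name __VHOST_NAME__ *.__VHOST_NAME__;
      access_log "__ACCESS_LOG__" combined;
      error_log "__ERROR_LOG__" warn;
      # everything on this vhost goes to the laravel-websockets server
      location / {
          proxy_http_version 1.1;
          proxy_set_header Upgrade $http_upgrade;
          proxy_set_header Connection "Upgrade";
          proxy_set_header Host $host;
          proxy_set_header X-Real-IP $remote_addr;
          proxy_pass http://php:6001;
      }
      __REDIRECT__
      __SSL__
      __CUSTOM__
  }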

How to configure nginx, Vite and Laravel for use with HMR via a reverse proxy?

I've only just got into Laravel/Vue3 so I'm working off the basics. However, I have an existing Docker ecosystem that I use for local dev and an nginx reverse proxy to keep my many projects separate.
I'm having trouble getting HMR working and even more trouble finding appropriate documentation on how to configure Vite and Nginx so I can have a single HTTPS entry point in nginx and proxy back to Laravel and Vite.
The build is based on https://github.com/laravel-presets/inertia/tree/boilerplate.
For completeness, this is the package.json, just in case it changes:
{
    "private": true,
    "scripts": {
        "dev": "vite",
        "build": "vite build"
    },
    "devDependencies": {
        "@vitejs/plugin-vue": "^2.3.1",
        "@vue/compiler-sfc": "^3.2.33",
        "autoprefixer": "^10.4.5",
        "postcss": "^8.4.12",
        "tailwindcss": "^3.0.24",
        "vite": "^2.9.5",
        "vite-plugin-laravel": "^0.2.0-beta.10"
    },
    "dependencies": {
        "vue": "^3.2.31",
        "@inertiajs/inertia": "^0.11.0",
        "@inertiajs/inertia-vue3": "^0.6.0"
    }
}
To keep things simple, I'm going to try and get it working under HTTP only and deal with HTTPS later.
Because I'm running the dev server in a container, I've set server.host to 0.0.0.0 in vite.config.ts and server.hmr.clientPort to 80. This allows connections to the dev server from outside the container and should make the client realize that the public port is 80 instead of the default 3000.
I've tried setting the DEV_SERVER_URL to be the same as the APP_URL so that all traffic from the public site goes to the same place. But I'm not sure what the nginx side of things should look like.
I've also tried setting the DEV_SERVER_URL to be http://0.0.0.0:3000/ so I can see what traffic is trying to be generated. This almost works, but is grossly wrong. It fails when it comes to ws://0.0.0.0/ communications, and would not be appropriate when it comes to HTTPS.
I have noticed calls to /__vite_plugin, which I'm going to assume is the default ping_url that would normally be set in config/vite.php.
I'm looking for guidance on which nginx locations should forward to the Laravel port and which should forward to the Vite port, and what that should look like so that web socket communication is also catered for.
I've seen discussions that Vite 3 may make this setup easier, but I'd like to deal with what is available right now.
The answer appears to be in knowing which directories to proxy to Vite and being able to isolate the web socket used for HMR.
To that end, you will want to do the following:
Ensure that your .env APP_URL and DEV_SERVER_URL match.
In your vite.config.ts, ensure that the server.host is '0.0.0.0' so that connections can be accepted from outside of the container.
In your vite.config.ts, specify a base such as '/app/' so that all HMR traffic can be isolated and redirected to the Vite server while you are running npm run dev. You may wish to use something else if that path might clash with real paths in your Laravel or Vite app, such as '/_dev/' or '/_vite/'.
In your config/vite.php set a value for ping_url as http://localhost:3000. This allows Laravel to ping the Vite server locally so that the manifest should not be used and the Vite server will be used. This also assumes that ping_before_using_manifest is set to true.
Lastly, you want to configure your nginx proxy so that a number of locations are specifically proxied to the Vite server, and the rest goes to the Laravel server.
I am not an Nginx expert, so there may be a way to declare the following succinctly.
Sample Nginx server entry
# Some standard proxy variables
map $http_x_forwarded_proto $proxy_x_forwarded_proto {
    default $http_x_forwarded_proto;
    ''      $scheme;
}
map $http_x_forwarded_port $proxy_x_forwarded_port {
    default $http_x_forwarded_port;
    ''      $server_port;
}
map $http_upgrade $proxy_connection {
    default upgrade;
    ''      close;
}
map $scheme $proxy_x_forwarded_ssl {
    default off;
    https   on;
}
server {
    listen *:80;
    server_name vite-inertia-vue-app.test;
    # abridged version that does not include gzip_types, resolver, *_log and other headers
    location ^~ /resources/ {
        proxy_pass http://198.18.0.1:3000;
        include /etc/nginx/vite-inertia-vue-app.test.include;
    }
    location ^~ /@vite {
        proxy_pass http://198.18.0.1:3000;
        include /etc/nginx/vite-inertia-vue-app.test.include;
    }
    location ^~ /app/ {
        proxy_pass http://198.18.0.1:3000;
        include /etc/nginx/vite-inertia-vue-app.test.include;
    }
    location / {
        proxy_pass http://198.18.0.1:8082;
        include /etc/nginx/vite-inertia-vue-app.test.include;
    }
}
vite-inertia-vue-app.test.include to include common proxy settings
proxy_read_timeout 190;
proxy_connect_timeout 3;
proxy_redirect off;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $proxy_connection;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $proxy_x_forwarded_proto;
proxy_set_header X-Forwarded-Ssl $proxy_x_forwarded_ssl;
proxy_set_header X-Forwarded-Port $proxy_x_forwarded_port;
proxy_set_header Proxy "";
My Nginx instance runs in a local Docker Swarm and I use a loopback interface (198.18.0.1) to hit open ports on my machine. Your mileage may vary. Port 3000 is for the Vite server. Port 8082 is for the Laravel server.
At some point, I may investigate using the hostname as it is declared in the docker-compose stack, though I'm not too sure how well this holds up when communicating between Docker Swarm and a regular container stack. The point would be not to have to allocate unique ports for the Laravel and Vite servers if I ended up running multiple projects at the same time.
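If that works out, the proxy_pass targets would simply use the compose service names instead of the loopback address, along these lines (the service names vite and laravel are hypothetical placeholders for whatever the stack actually declares):
location ^~ /app/ {
    proxy_pass http://vite:3000;
    include /etc/nginx/vite-inertia-vue-app.test.include;
}
location / {
    proxy_pass http://laravel:8082;
    include /etc/nginx/vite-inertia-vue-app.test.include;
}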
Entry points /@vite and /resources are for when the app initially launches, and these are used by the script and link tags in the header. After that, all HMR activity uses /app/.
The next challenge will be adding a self-signed cert, as I plan to integrate some Azure B2C sign in, but I think that may just involve updating the Nginx config to cater for TLS and update APP_URL and DEV_SERVER_URL in the .env to match.
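As a sketch of that last step (certificate paths are placeholders, not a tested config), the server entry would gain something like:
listen *:443 ssl;
ssl_certificate     /etc/nginx/certs/vite-inertia-vue-app.test.crt;  # self-signed certificate
ssl_certificate_key /etc/nginx/certs/vite-inertia-vue-app.test.key;
with APP_URL and DEV_SERVER_URL in .env switched to the matching https:// origin.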

Gitlab Client in Login Redirect Loop

I have been doing some work trying to update our gitlab servers. Somewhere along the line, something in the configuration changed and now I can't access the web client. The backend starts up correctly and when I run rake gitlab:check everything comes back as green. Same for nginx, as far as I can tell it is working correctly. When I try to go to the landing page in the browser though, I keep getting an error about 'too many redirects'.
Looking at the browser console, I can see that it is repeatedly trying to redirect to the login page until the browser gives up and throws an error. I did some looking around, and most of the answers seem to involve going to the login page directly and then changing the landing page from the admin settings. When I tried that I got the same problem. Apparently any page on my domain wants to redirect to the login, leaving me with an infinite loop.
I'm also seeing some potentially related errors in the nginx logs. When I try to hit the sign in page the error log is showing
open() "/usr/local/Cellar/nginx/1.15.9/html/users/sign_in" failed (2: No such file or directory)
Is that even the correct directory for the gitlab html views? If not how do I change it?
Any help on this would be greatly appreciated.
Environment:
OSX 10.11.6 El Capitan
Gitlab 8.11
nginx 1.15.9
My config files. I have removed some commented out lines to save on space.
nginx.conf
worker_processes 1;
events {
    worker_connections 1024;
}
http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;
    include servers/*;
}
nginx/servers/gitlab
upstream gitlab-workhorse {
    server unix:/Users/git/gitlab/tmp/sockets/gitlab-workhorse.socket fail_timeout=0;
}
server {
    listen 0.0.0.0:8081;
    listen [::]:8081;
    server_name git.my.server.com; ## Replace this with something like gitlab.example.com
    server_tokens off; ## Don't show the nginx version number, a security best practice
    ## See app/controllers/application_controller.rb for headers set
    ## Individual nginx logs for this GitLab vhost
    access_log /var/log/nginx/gitlab_access.log;
    error_log /var/log/nginx/gitlab_error.log;
    location / {
        client_max_body_size 0;
        gzip off;
        ## https://github.com/gitlabhq/gitlabhq/issues/694
        ## Some requests take more than 30 seconds.
        proxy_read_timeout 300;
        proxy_connect_timeout 300;
        proxy_redirect off;
        proxy_http_version 1.1;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://gitlab-workhorse;
    }
}
I finally found the answer after several days of digging. At some point my default config file (/etc/default/gitlab) got changed. For whatever reason, my text editor decided to split gitlab_workhorse_options into two lines. As a result, gitlab was missing the arguments for authSocket and document root and was just using the default values. If that wasn't bad enough, the line split started on a $ character, so it looked like nano was just doing a word wrap.

Django media files not serving in production but serving in development (localhost)?

Django media files are served in development but not in production. Whatever image I upload through the Django admin is served on the website on localhost, but when I put my site live on DigitalOcean it is not displayed. How do I solve this issue? My website URL is http://139.59.56.161 (click on the Book Test menu).
Resurrecting a long-dead question which was all I could find to help me out here. Recording my answer for posterity. My "production" environment uses nginx as a reverse proxy in front of uwsgi hosting my django application. The solution is that Django just does not serve files in Production; instead you should configure your web-server to do that.
Django is slightly unhelpful in talking about static files and then saying 'media files: same.'
So I believe it's best to catch file requests up front, in my case in the nginx server, to reduce double handling; your front-end web server is also the most optimised for the job.
To do this:
Within a server definition block in your /etc/nginx/sites-available/[site.conf], define the webroot, the directory on your server's file system that covers everything, with the declaration 'root [dir]'.
server {
    listen 80;
    server_name example.com www.example.com;
    root /srv/;
This next block tells nginx to send all the traffic to the uwsgi service running django - I lifted it holus bolus from an example, probably on digitalocean.com.
    location / {
        # Hand requests to the uWSGI service running Django.
        include uwsgi_params;
        uwsgi_pass unix:/run/uwsgi/mysite6.sock;
        # If uWSGI were listening on an HTTP port instead of a socket, you would use
        # proxy_pass http://localhost:8000; here instead (not both), with these headers:
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Client-Verify SUCCESS;
        proxy_set_header X-Client-DN $ssl_client_s_dn;
        proxy_set_header X-SSL-Subject $ssl_client_s_dn;
        proxy_set_header X-SSL-Issuer $ssl_client_i_dn;
        proxy_read_timeout 1800;
        proxy_connect_timeout 1800;
    }
Now, here are the bits we need to serve files when they are requested. try_files attempts to serve $uri and then $uri/, and it would be a good idea to put a file like 'resource_not_found.html' in /srv and set it as the last fallback for try_files, so the user knows that this part has been unintentionally left blank.
    location /static/ {
        try_files $uri $uri/ ;
    }
    location /media/ {
        try_files $uri $uri/ ;
    }
}
That concludes our server block for http, hence the extra close "}".
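To illustrate the resource_not_found.html suggestion from earlier, the media location could become (assuming that file sits directly under the /srv webroot):
location /media/ {
    try_files $uri $uri/ /resource_not_found.html;
}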
Alternatively, you can get uwsgi doing it by setting 'static-map' or 'static-map2'. 'static-map' "eats" the mapped url part, whereas static-map2 adds it.
static-map /files=/srv/files
means a request for /files/funny.gif will serve /srv/files/funny.gif.
static-map2 /files=/srv
will do the same thing, because it will take a request for /files/funny.gif and look for /srv/files/funny.gif. As per the uwsgi docs, you can create as many of these mappings as you want, even to the same uri, and they will be checked in order of appearance. Damnit, I've just now finally found the docs for nginx open source.
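For comparison, the rough nginx equivalent of that first static-map (assuming the same paths) uses alias, which likewise swallows the matched prefix:
location /files/ {
    alias /srv/files/;  # /files/funny.gif is read from /srv/files/funny.gif
}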
uwsgi docs

Wordpress Nginx proxy cannot load wp-admin/ajax.php

My website is hosted on 6-cylinder.com
and I decided to add a WordPress blog which lives on a completely different VPS, so I used a proxy to serve my blog as a subdirectory of my main domain.
So the final product should be 6-cylinder.com/blog
The proxy is working completely fine except for one file only:
wp-admin/ajax.php
This is the error message in chrome console
Here is what I added to my wp-config.php
$_SERVER['REQUEST_URI'] = str_replace("/wp-admin/", "/blog/wp-admin/", $_SERVER['REQUEST_URI']);
define( 'WP_SITEURL', 'http://6-cylinder.com/blog' );
define( 'WP_HOME', 'http://6-cylinder.com/blog' );
and here is the proxy code in the nginx file
location ^~ /blog/ {
    proxy_pass http://139.59.211.216/;
    proxy_set_header X-Original-Host $host;
    proxy_set_header X-Is-Reverse-Proxy "true";
    proxy_pass_header Set-Cookie;
    proxy_cookie_path / /blog/;
}
When I had problems with wp-admin, the solution was to add the following line to wp-config.php:
$_SERVER['HTTP_HOST']=$_SERVER['HTTP_X_FORWARDED_HOST'];
(WordPress behind nginx proxy (access from two sources))
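Note that for that PHP line to see anything, the proxy has to actually send an X-Forwarded-Host header; the location block from the question sets X-Original-Host instead, so it would need an extra header along these lines (a sketch, everything else unchanged):
location ^~ /blog/ {
    proxy_pass http://139.59.211.216/;
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Original-Host $host;
    proxy_set_header X-Is-Reverse-Proxy "true";
    proxy_pass_header Set-Cookie;
    proxy_cookie_path / /blog/;
}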
