How to solve a blank page when running a second Vue app under nginx on macOS?

Need to run multiple Vue.js v3 apps with nginx under macOS Big Sur.
The first app is located in /Users/kharraz/Developer/code/homeapp/dist and the second app in /Users/kharraz/Developer/code/businessapp/dist; both folders contain yarn build output.
The goal is that when I type localhost or localhost/homeapp it takes me to the first app, and when I type localhost/businessapp it takes me to the second app.
With my current conf, localhost/homeapp/index.html is working, but localhost/businessapp/index.html gives me a blank page.
Both apps work fine under localhost:8001 and localhost:8002.
Here is my nginx.conf file:
http {
    server {
        listen 8080;
        location / {
            proxy_set_header Host $host;
            proxy_set_header X-Forward-Host $host;
            proxy_pass http://127.0.0.1:8001;
            break;
        }
        location /markets {
            proxy_set_header X-Forward-Host $host;
            proxy_pass http://127.0.0.1:8002;
        }
    }
    server {
        listen 8001;
        location / {
            root /Users/kharraz/Developer/code/homeapp/dist;
            try_files $uri /index.html;
            include /opt/homebrew/etc/nginx/mime.types;
        }
    }
    server {
        server_name businessapp;
        listen 8002;
        location / {
            root /Users/kharraz/Developer/code/businessapp/dist;
            try_files $uri /index.html;
            include /opt/homebrew/etc/nginx/mime.types;
        }
    }
}

Manually make the app route to path '/' instead of depending on Vue to take you to the homepage.
import { onMounted } from 'vue';
import { useRouter } from 'vue-router';

export default {
  setup() {
    const router = useRouter();
    // Push the root path once the component is mounted
    const doOpeningCeremony = async () => {
      router.push({ path: '/' });
    };
    onMounted(doOpeningCeremony);
  },
};
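If the blank page comes from the second app requesting its JS and CSS from the wrong prefix (a Vue build assumes its assets live under / by default, so under /businessapp those requests fall back to the other app's index.html), another option is to build each app with a matching public path and serve its dist folder directly under that prefix. A rough sketch, assuming the apps are built with Vue CLI (with Vite the equivalent option is base):
// vue.config.js for the business app (sketch)
module.exports = {
  publicPath: '/businessapp/'
};
and on the nginx side, instead of proxying, serve the built files under the same prefix:
location /businessapp/ {
    alias /Users/kharraz/Developer/code/businessapp/dist/;
    try_files $uri $uri/ /businessapp/index.html;
}
The vue-router history base would then need the same prefix as well (e.g. createWebHistory('/businessapp/')).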

Related

Should I change localhost to my public IP/DNS address inside the Nginx server config file?

My front-end is running on port 3000 and my back-end on port 8000.
I've deployed my app on an EC2 instance, and I've installed Nginx and PM2.
When I try to open the app in the browser using my public DNS address, the app appears for a short period of time and then it breaks, showing this error:
net::ERR_CONNECTION_REFUSED
GET http://localhost:8000/api/info/countByType net::ERR_CONNECTION_REFUSED
Here is where my front-end makes the request:
export const countByCity = createAsyncThunk("info/countByCity", async () => {
  try {
    const res = await axios.get("http://localhost:8000/api/info/countByCity");
    return res.data;
  } catch (error) {
    console.log(error.response);
  }
});
and here is my /etc/nginx/sites-available/default file configuration
server {
    listen 80;
    listen [::]:80;
    root /usr/share/nginx/booking.com;
    index index.html index.htm index.nginx-debian.html;
    server_name _;
    access_log /var/log/nginx/example.com.access.log;
    error_log /var/log/nginx/example.com.error.log;
    location / {
        try_files $uri /index.html;
    }
    location /api {
        proxy_pass http://ec2-54-167-89-197.compute-1.amazonaws.com:8000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
I have changed the proxy_pass to my public IP address; the previous configuration was http://localhost:8000.
My question is: my front-end is trying to reach the localhost:8000 endpoint. Should I change it to my public DNS address instead? For example: http://ec2-54-167-89-197.compute-1.amazonaws.com/api/info/countByCity ?
This code in your front end is running in the user's web browser, not on the EC2 server:
const res = await axios.get("http://localhost:8000/api/info/countByCity");
When that code tries to access localhost it is trying to access a server running on your local laptop/PC, not a server in AWS. You need to change that localhost address to be the public IP of your EC2 server.
This proxy configuration on your Nginx server:
proxy_pass http://ec2-54-167-89-197.compute-1.amazonaws.com:8000;
is going all the way out to the internet and back, just to access a service that is running on a different port of the same server. You need to change that to localhost for both security and efficiency.
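Putting both changes together, a minimal sketch (the public hostname is the one already used in the question; everything else stays as above):
// front end: call the public address (or just a relative "/api/..." path) instead of localhost
const res = await axios.get("http://ec2-54-167-89-197.compute-1.amazonaws.com/api/info/countByCity");
# nginx: proxy to the service on the same machine (other proxy_set_header lines unchanged)
location /api {
    proxy_pass http://localhost:8000;
}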

Send cookies from Quasar SSR to Laravel backend

I am building a web app using Quasar SSR for the front-end and Laravel for the back-end.
For authentication, I use the new Laravel Sanctum package, which uses cookies for user authentication.
Building the app as an SPA, I have no problem authenticating users. Unfortunately, in SSR mode no cookies are sent to the server, which makes user authentication impossible.
I use axios to handle ajax requests. I set it up in a boot file.
Can someone help me send the cookies to the backend?
Edit, to add some precision: I am using nginx as a web server and my configuration is as follows:
server {
    listen 80;
    listen [::]:80;
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    ssl_certificate /var/www/certs/www.mysite/fullchain.pem;
    ssl_certificate_key /var/www/certs/www.mysite/privkey.pem;
    ssl_protocols TLSv1.2;
    root /var/www/next/mysite/public;
    index index.php index.html index.htm index.nginx-debian.html;
    server_name next.mysite.net;
    location / {
        #try_files $uri $uri/ /index.php?$args;
        proxy_pass http://localhost:8080;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
    location /storage/ {
    }
    location /vendor/ {
    }
    location /server {
        try_files $uri $uri/ /index.php?$args;
    }
    location /prequel-api {
        try_files $uri $uri/ /index.php?$args;
    }
    location /api {
        try_files $uri $uri/ /index.php?$args;
    }
    location /graphql {
        try_files $uri $uri/ /index.php?$args;
    }
    location /sanctum {
        try_files $uri $uri/ /index.php?$args;
    }
    location ~ [^/]\.php(/|$) {
        #try_files $uri =404;
        fastcgi_split_path_info ^(.+?\.php)(/.*)$;
        #if (!-f $document_root$fastcgi_script_name) {
        #    return 404;
        #}
        # Mitigate https://httpoxy.org/ vulnerabilities
        fastcgi_param HTTP_PROXY "";
        fastcgi_pass unix:/var/run/php/php7.4-fpm.sock;
        fastcgi_index index.php;
        # include the fastcgi_param setting
        include fastcgi_params;
        # SCRIPT_FILENAME parameter is used for PHP FPM determining
        # the script name. If it is not set in fastcgi_params file,
        # i.e. /etc/nginx/fastcgi_params or in the parent contexts,
        # please comment off following line:
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
    location ~ /\.ht {
        deny all;
    }
}
import Axios from "axios";
import { boot } from "quasar/wrappers";
import { Cookies } from "quasar";

const axios = Axios.create({
  baseUrl: "https://next.mysite.net"
});
axios.defaults.withCredentials = true;

axios.interceptors.response.use(
  response => {
    return response.data;
  },
  error => {
    return Promise.reject(error.response);
  }
);

export default boot(async ({ Vue, ssrContext }) => {
  if (process.env.SERVER) {
    // Check if cookies are available
    var cookies = JSON.stringify(Cookies.parseSSR(ssrContext).getAll()).replace(
      /[{}]/g,
      ""
    );
    console.log("cookies: ", cookies);
  }
  Vue.prototype.$axios = axios;
});

export { axios };
I solved a similar problem by extending the axios instance with cookies when creating the instance
// boot/axios.js (inside the boot callback function)
if (ssrContext) {
  const cookies = ssrContext.$q.cookies.getAll();
  instance.interceptors.request.use(function (config) {
    const cookiesSet = Object.keys(cookies)
      .reduce((prev, curr) => prev + curr + '=' + cookies[curr] + ';', '');
    if (cookiesSet.length > 0) {
      config.headers.Cookie = cookiesSet;
    }
    return config;
  });
}
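Applied to the boot file from the question, the same idea would look roughly like this (a sketch only; it reuses the ssrContext passed to the boot callback and the Cookies helper already imported above):
export default boot(async ({ Vue, ssrContext }) => {
  if (process.env.SERVER && ssrContext) {
    // The SSR server makes the axios calls, so the browser's cookies
    // have to be copied onto each outgoing request by hand.
    const cookies = Cookies.parseSSR(ssrContext).getAll();
    axios.interceptors.request.use(config => {
      const cookieHeader = Object.keys(cookies)
        .map(name => name + '=' + cookies[name])
        .join('; ');
      if (cookieHeader) {
        config.headers.Cookie = cookieHeader;
      }
      return config;
    });
  }
  Vue.prototype.$axios = axios;
});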

How to run bash script from Nginx

1) I have a static site and want to set up "autopull" from Bitbucket.
2) I have a webhook from Bitbucket.
3) I have a bash script which does "git pull".
How can I run this script when nginx catches the request?
server {
    listen 80;
    server_name example.ru;
    root /path/to/root;
    index index.html;
    access_log /path/to/logs/nginx-access.log;
    error_log /path/to/logs/nginx-error.log;
    location /autopull {
        something to run autopull.sh;
    }
    location / {
        auth_basic "Hello, login please";
        auth_basic_user_file /path/to/htpasswd;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
    }
}
I tried a lua_block and a fastcgi service, but both failed.
lua does not run os.execute("/path/to/script") and does not write to the log.
fastcgi is more successful, but it doesn't have the permissions, because my www-data user doesn't have an ssh key in my Bitbucket repo.
Problem solved.
I didn't want to use any script/process on another port, because I have several sites and would need a port for each.
My final configuration is:
server {
    listen 80;
    server_name example.ru;
    root /path/to/project;
    index index.html;
    access_log /path/to/logs/nginx-access.log;
    error_log /path/to/logs/nginx-error.log;
    location /autopull {
        content_by_lua_block {
            io.popen("bash /path/to/autopull.sh")
        }
    }
    location / {
        auth_basic "Hello, login please";
        auth_basic_user_file /path/to/htpasswd;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
    }
}
The problem was the permissions of the www-data user and its ssh key in the repo.
Based on this, create a Python script:
#!/usr/bin/python
from BaseHTTPServer import BaseHTTPRequestHandler, HTTPServer
from subprocess import call

PORT_NUMBER = 8080
autopull = '/path/to/autopull.sh'
command = [autopull]

# This class will handle any incoming request from
# the browser
class myHandler(BaseHTTPRequestHandler):
    # Handler for GET requests
    def do_GET(self):
        self.send_response(200)
        self.send_header('Content-type', 'text/html')
        self.end_headers()
        # Send the html message
        self.wfile.write("running {}".format(autopull))
        call(command)
        return

try:
    # Create a web server and define the handler to manage the
    # incoming requests
    server = HTTPServer(('', PORT_NUMBER), myHandler)
    print 'Started httpserver on port ', PORT_NUMBER
    # Wait forever for incoming http requests
    server.serve_forever()
except KeyboardInterrupt:
    print '^C received, shutting down the web server'
    server.socket.close()
Run it, and in the nginx config add:
location /autopull { proxy_pass http://localhost:8080; }

Directly opening a URL with a state param not working in AngularJS

I have a URL like http://localhost:8000/articles/2345. When clicked within the Angular app, it works fine. These URLs also have to be linked from other websites, but then I get an error like:
Uncaught SyntaxError: Unexpected token <
I am using ui-router like below:
.state('kbca', {
    parent: 'root',
    url: "/articles/:articleId",
    templateUrl: "views/abc/abc.html",
    controller: "abcCtrl",
    data: {
        authorizedRoles: [ROLES.PUBLIC]
    }
})
and config like
.config(function($locationProvider) {
    $locationProvider.html5Mode(true).hashPrefix('!');
});
and I have an index.html where I have set the base as:
<div id="wrap">
<div ui-view></div>
</div> <!-- end wrap -->
<base href="/">
I tried lots of options from the web, but nothing seems to work for me.
Here is my server conf
server {
    listen 8000;
    root D:/xampp/htdocs/web/myweb/app;
    index index.html index.htm;
    server_name localhost;
    location / {
        try_files $uri $uri/ /index.html;
    }
    location /mysvcs {
        proxy_pass http://localhost:8080/mymvc;
        proxy_set_header Host localhost;
    }
    location /mymvc {
        proxy_pass http://localhost:8080/mymvc;
        proxy_set_header Host localhost;
    }
}
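"Uncaught SyntaxError: Unexpected token <" with html5Mode usually means a script or template request was resolved relative to the deep link (e.g. /articles/scripts/app.js), fell through try_files, and came back as index.html, which the browser then tried to parse as JavaScript. A hedged sketch of the usual remedy (the script file name here is hypothetical): put the base tag in <head> before any script tags and reference assets root-relatively:
<head>
    <base href="/">
    <!-- root-relative, so it still resolves correctly from /articles/2345 -->
    <script src="/scripts/app.js"></script>
</head>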

Gorilla mux router not working for particular route

I'm having an issue using Gorilla for routing. For some routes it works fine, but for others it does not.
I have the following code:
import (
    "github.com/gorilla/mux"
    "github.com/justinas/alice"
)

mx.Handle("/register", commonHandlers.ThenFunc(registerUser)).Methods("POST").Name("register") // This works
mx.Handle("/verify", commonHandlers.ThenFunc(verifyUser)).Methods("GET").Name("verify")        // Does not work
The verifyUser handler calls a Verify function that is just supposed to output something to the console, for example:
log.Println("This works!")
But for some reason, when I visit example.com/verify, the function Verify never actually gets called. Oddly enough, my AngularJS code actually outputs something when /verify is visited, but my Go code does not.
I have the following configuration in my nginx file, not sure if it may conflict with Gorilla routing.
server {
    listen 80; ## listen for ipv4; this line is default and implied
    root /home/usr/go/src/project/dist/;
    server_name localhost;
    index index.html index.htm;
    location #proxy {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $host;
        #proxy_pass http://127.0.0.1:3000;
        proxy_pass http://127.0.0.1:9000;
    }
    location / {
        try_files $uri.html $uri/ #proxy;
        autoindex on;
    }
}
Is there something wrong with my routing?
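One detail worth double-checking, offered as a guess rather than a confirmed fix: in nginx configs, # starts a comment, so the #proxy in both the location line and the try_files line is read as a comment rather than a reference to a named location; named locations are normally written with @. A sketch of that part of the config using a named location:
location @proxy {
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_set_header Host $host;
    proxy_pass http://127.0.0.1:9000;
}
location / {
    try_files $uri.html $uri/ @proxy;
    autoindex on;
}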
