How to correct nginx conf with Strapi

What's the problem with this nginx.conf?
I had to change something somewhere, but it's still not working.
upstream strapi {
server localhost:1337 max_fails=1 fail_timeout=5s;
}
server {
# Listen HTTP
listen 80 default_server;
listen [::]:80 default_server;
server_name sh**rk.app;
# Proxy Config
location / {
try_files $uri $uri/ @strapi;
}
location @strapi {
proxy_pass http://strapi;
}
}

OK, you can configure it this way; that's a good approach: deploy Strapi on an Ubuntu server.
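For reference, here is a minimal proxy-only sketch of such a config (not taken from the thread: the server_name is a placeholder, and everything is proxied straight to Strapi instead of going through try_files and a named location, on the assumption that Strapi listens on 127.0.0.1:1337 and serves all of its own routes):
upstream strapi {
    server 127.0.0.1:1337;
}
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name example.app;    # placeholder domain

    location / {
        proxy_pass http://strapi;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}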

Related

Cannot connect to AWS EC2 nginx

I have an EC2 instance with inbound/outbound security group rules configured, and the '/etc/nginx/sites-enabled/default' setting is:
server {
listen 8080 default_server;
listen [::]:8080 default_server;
root /var/www/html;
index index.html index.htm index.nginx-debian.html;
server_name _;
location / {
proxy_pass http://127.0.0.1:8000;
}
}
I tried testing with curl, and it works. I don't know why I can't connect from my local machine.

How to make nginx cache like Varnish?

I'm hosting a website on Heroku and using nginx on a cloud host as a proxy.
In my cloud host I defined this:
## /etc/nginx/sites-available/default
server {
charset utf-8;
listen 80;
server_name mywebsite.com www.mywebsite.com;
location /api {
proxy_pass http://mywebsite-api.herokuapp.com;
}
location /auth {
proxy_pass http://mywebsite-api.herokuapp.com;
}
location / {
fastcgi_cache CACHE_KEY;
fastcgi_cache_valid 200 60m;
proxy_pass http://mywebsite-fe.herokuapp.com;
}
}
## in /etc/nginx/nginx.conf
.......
http {
fastcgi_cache_path /etc/nginx/cache levels=1:2 keys_zone=CACHE_KEY:100m inactive=60m;
fastcgi_cache_key "$scheme$request_method$host$request_uri";
.....
I want nginx to cache and serve static content, like Varnish does. How can I do this using nginx as a proxy to Heroku?
Thanks.
Change fastcgi_ to proxy_. The fastcgi_ versions are for php-fpm.
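In other words, a sketch of the same pieces with the fastcgi_ directives swapped for their proxy_ equivalents (zone name, path and sizes carried over from the question):
## in /etc/nginx/nginx.conf, inside the http block
proxy_cache_path /etc/nginx/cache levels=1:2 keys_zone=CACHE_KEY:100m inactive=60m;
proxy_cache_key "$scheme$request_method$host$request_uri";

## in /etc/nginx/sites-available/default
location / {
    proxy_cache CACHE_KEY;
    proxy_cache_valid 200 60m;
    proxy_pass http://mywebsite-fe.herokuapp.com;
}
Keep in mind that Cache-Control and Set-Cookie headers sent by the Heroku app can still prevent responses from being cached; proxy_ignore_headers can override that, but whether that is safe depends on the app.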

Nginx serving non-secure resources on https domain

I'm having some issues setting up a server with an SSL certificate. I was able to install the certificate just fine and restarted the nginx service. However, when I attempt to load my website, I see that all img, css and js files are being retrieved with http instead of https. This is a Magento website. Is there something wrong with my conf file?
server {
listen 80;
server_name www.my-domain.com;
return 301 $scheme://my-domain.com$request_uri;
}
server {
listen 80;
listen 443 ssl;
ssl_certificate /etc/ssl/my-domain/my-domain_com.pem;
ssl_certificate_key /etc/ssl/my-domain/my-domain_com.key;
access_log /var/log/nginx/magento.local-access.log;
error_log /var/log/nginx/magento.local-error.log;
server_name my-domain.com;
root /var/www/my-domain;
include conf/magento_rewrites.conf;
include conf/magento_security.conf;
# PHP handler
location ~ \.php {
## Catch 404s that try_files miss
if (!-e $request_filename) { rewrite / /index.php last; }
## Store code is defined in administration > Configuration > Manage Stores
fastcgi_param MAGE_RUN_CODE default;
fastcgi_param MAGE_RUN_TYPE store;
# By default, only handle fcgi without caching
include conf/magento_fcgi.conf;
}
# 404s are handled by front controller
location @magefc {
rewrite / /index.php;
}
# Last path match hands to magento or sets global cache-control
location / {
## Maintenance page overrides front controller
index index.html index.php;
try_files $uri $uri/ @magefc;
expires 24h;
}
rewrite ^/minify/([0-9]+)(/.*.(js|css))$ /lib/minify/m.php?f=$2&d=$1 last;
rewrite ^/skin/m/([0-9]+)(/.*.(js|css))$ /lib/minify/m.php?f=$2&d=$1 last;
location /lib/minify/ {
allow all;
}
}
Maybe it's because that's how they are referenced in the page. If you need to serve everything as HTTPS, I would create a separate server block that listens on 80 and redirects to HTTPS:
server {
# listen 80; delete this part
listen 443 ssl;
# the rest of the config
}
# add this server
server {
listen 80;
server_name example.com;
location / { # or a more specific location '~ \.(jpg|css|js|jpeg|png|gif)'
return https://example.com$request_uri;
}
}
Or just fix the CSS location; it might be an absolute URL that uses http.
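One more thing worth checking (my addition, not part of the answer above): if the http:// links are generated by Magento itself, PHP also has to know the request arrived over TLS, and the secure base URL in Magento's configuration has to start with https://. A common way to pass that information along, assuming php-fpm behind the magento_fcgi include from the question, is:
# inside the 'listen 443 ssl' server block (sketch)
location ~ \.php {
    fastcgi_param HTTPS on;          # exposes $_SERVER['HTTPS'] so Magento emits https URLs
    include conf/magento_fcgi.conf;  # the existing fastcgi_pass setup
}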

Serving two sites from one server with Nginx

I have a Rails app up and running on my server and now I'd like to add another one.
I want Nginx to check what the request is for and split traffic based on domain name
Both sites have their own nginx.conf symlinked into sites-enabled, but I get an error when starting nginx: Starting nginx: nginx: [emerg] duplicate listen options for 0.0.0.0:80 in /etc/nginx/sites-enabled/bubbles:6
They are both listening on 80 but for different things.
Site #1
upstream blog_unicorn {
server unix:/tmp/unicorn.blog.sock fail_timeout=0;
}
server {
listen 80 default deferred;
server_name walrus.com www.walrus.com;
root /home/deployer/apps/blog/current/public;
location ^~ /assets/ {
gzip_static on;
expires max;
add_header Cache-Control public;
}
try_files $uri/index.html $uri @blog_unicorn;
location @blog_unicorn {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://blog_unicorn;
}
error_page 500 502 503 504 /500.html;
client_max_body_size 4G;
keepalive_timeout 10;
}
Site two:
upstream bubbles_unicorn {
server unix:/tmp/unicorn.bubbles.sock fail_timeout=0;
}
server {
listen 80 default deferred;
server_name bubbles.com www.bubbles.com;
root /home/deployer/apps/bubbles/current/public;
location ^~ /assets/ {
gzip_static on;
expires max;
add_header Cache-Control public;
}
try_files $uri/index.html $uri @bubbles_unicorn;
location @bubbles_unicorn {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://bubbles_unicorn;
}
error_page 500 502 503 504 /500.html;
client_max_body_size 4G;
keepalive_timeout 10;
}
The documentation says:
The default_server parameter, if present, will cause the server to become the default server for the specified address:port pair.
It's also obvious that there can be only one default server.
And it also says:
A listen directive can have several additional parameters specific to socket-related system calls. They can be specified in any listen directive, but only once for the given address:port pair.
So, you should remove default and deferred from one of the listen 80 directives. The same applies to the ipv6only=on parameter as well.
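Applied to the two configs above, a minimal sketch of the change (which site keeps the extra parameters is an arbitrary choice):
# sites-enabled/blog (walrus.com) keeps the socket options
listen 80 default deferred;
# sites-enabled/bubbles uses a plain listen
listen 80;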
Just hit this same issue, but the duplicate default_server directive was not the only cause of this message.
You can only use the backlog parameter on one of the listen directives for a given port.
Example
site 1:
server {
listen 80 default_server backlog=2048;
server_name www.example.com;
location / {
proxy_pass http://www_server;
}
}
site 2:
server {
listen 80; ## DO NOT DUPLICATE THESE SETTINGS: 'default_server backlog=2048;'
server_name blogs.example.com;
location / {
proxy_pass http://blog_server;
}
}
I was having the same issue. I fixed it by modifying my /etc/nginx/sites-available/example2.com file. I changed the server block to
server {
listen 443 ssl; # modified: was listen 80;
listen [::]:443; #modified: was listen [::]:80;
. . .
}
And in /etc/nginx/sites-available/example1.com I commented out listen 80 and listen [::]:80 because the server block had already been configured for 443.
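A caveat with that last approach (my note, not part of the answer): once listen 80 is removed from example2.com, plain-HTTP requests for that name fall through to whichever server block is the default for port 80. If HTTP should keep working, the usual pattern is a small redirect block, for example:
server {
    listen 80;
    listen [::]:80;
    server_name example2.com;
    return 301 https://example2.com$request_uri;
}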

Block direct access on port 8080

I have an app running on a server, behind an nginx server, using unicorn.
If I access http://server.com I get the app, up and running... But I can still access the app on port 8080, like http://server.com:8080, only this time without assets (which are being served by nginx).
How do I block direct access to port 8080 on my production server?
The server runs Ubuntu 12.04.
nginx.conf
upstream unicorn {
server 127.0.0.1:8080;
}
server {
listen 80 default deferred;
# server_name example.com;
root /home/deploy/current/public;
location ^~ /assets/ {
gzip_static on;
expires max;
add_header Cache-Control public;
}
try_files $uri/index.html $uri @unicorn;
location @unicorn {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://unicorn;
}
error_page 500 502 503 504 /500.html;
client_max_body_size 4G;
keepalive_timeout 10;
}
Make unicorn and nginx communicate over a Unix domain socket. For nginx:
upstream unicorn {
server unix:/path/to/socket fail_timeout=0;
}
Then pass '-l /path/to/socket' to unicorn, or alter your unicorn config file:
listen '/path/to/socket'
