We have Angular as the front end, which runs on nginx, and Spring Boot as the backend, and we use OpenAPI for API documentation. It was working fine on localhost. URL: http://localhost:8080/swagger-ui/index.html
We moved our code to a different server that sits behind a proxy and runs on HTTPS. When I try to access Swagger at https://mycompany.com/swagger-ui/index.html, I get the message "No API definition found".
My application.properties:
server.forward-headers-strategy=framework
And my nginx.conf file:
location /swagger-ui {
allow xx.x.0.0/16;
allow xx.x.0.0/16;
proxy_redirect off;
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-Proto https;
proxy_set_header X-Forwarded-Host $http_x_forwarded_host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-NginX-Proxy true;
proxy_set_header X-Forwarded-Protocol https;
proxy_set_header X-Forwarded-Ssl on;
proxy_set_header X-Url-Scheme https;
proxy_pass http://127.0.0.1:8080/swagger-ui;
}
I have tried every approach I could find, but I am still unable to make the Swagger UI work.
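One thing worth checking (an assumption based on springdoc defaults, not something stated in the question): the Swagger UI page only renders once it can fetch the OpenAPI definition from the backend, which by default is served under /v3/api-docs. If nginx only proxies /swagger-ui, that request never reaches Spring Boot and the UI reports "No API definition found". A minimal sketch of an additional location block, assuming springdoc's default paths:
# hypothetical addition: also forward the OpenAPI definition endpoint
location /v3/api-docs {
    proxy_set_header Host $http_host;
    proxy_set_header X-Forwarded-Proto https;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_pass http://127.0.0.1:8080/v3/api-docs;
}
With server.forward-headers-strategy=framework already set, Spring Boot should then generate the definition URL with the https scheme.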
I have this setup with Keycloak, Spring Boot and NGINX:
api.example.com
example.com
NGINX has two config files. One is for api.example.com, which reverse-proxies to Spring; using the API with Keycloak works fine via, for example, Postman or cURL.
The other one is for example.com and reverse-proxies to Keycloak. Going to https://example.com/auth, logging in, etc. works fine. But when I try to access content on my Swagger at https://api.example.com/v1/docs/index.html, it redirects me to http://api.example.com/sso/login, which is wrong in two ways: the protocol, and the subdomain, which is not where Keycloak lives.
Under Valid Redirect URIs I have set https://example.com/*. So when a user tries to access that content on my Swagger, shouldn't they be redirected to Keycloak at https://example.com/auth and then redirected back once authenticated?
This is my NGINX config for api.example.com:
location / {
proxy_pass http://private-ip1:8080;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_protocol_addr;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Server $host;
proxy_set_header X-Forwarded-Port $server_port;
proxy_set_header X-Forwarded-Proto $scheme;
}
This is my NGINX config for example.com:
# Keycloak
location /auth {
proxy_pass http://private-ip2:8080;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_protocol_addr;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Server $host;
proxy_set_header X-Forwarded-Port $server_port;
proxy_set_header X-Forwarded-Proto $scheme;
}
I don't understand, and can't find information on, why this is happening. Does anyone have an idea how to fix this?
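The http:// scheme in the redirect usually means the Spring Boot app is building absolute URLs without honouring the X-Forwarded-* headers nginx sends. This isn't confirmed by the question, but here is a sketch of the application.properties commonly needed behind such a proxy (the same property used in the first setup above):
# hypothetical application.properties for the Spring Boot app behind nginx
# Spring Boot 2.2+; older versions use server.use-forward-headers=true instead
server.forward-headers-strategy=framework
Whether the /sso/login redirect should stay on api.example.com depends on the Keycloak adapter configuration, which isn't shown here.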
I have installed nginx 1.18.0. Here is the relevant part of my nginx.conf file:
server {
listen 8082;
location /lc/ {
proxy_pass http://127.0.0.1:8080;
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
When I visit http://127.0.0.1:8082/lc, it gives the right outcome, but the web app responds with a redirect to /login, so the URL changes to http://127.0.0.1:8082/login and the "lc" prefix is missing. The same happens for the js/css files.
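Assuming the backend is a Spring Boot app like the other setups here (the question doesn't say), a common pattern is to strip the external prefix at nginx and pass it along in X-Forwarded-Prefix, which Spring's ForwardedHeaderFilter honours when server.forward-headers-strategy=framework is set. A sketch under that assumption:
location /lc/ {
    # trailing slash on proxy_pass strips the /lc/ prefix before proxying
    proxy_pass http://127.0.0.1:8080/;
    # tell the backend which external prefix it is served under (hypothetical value)
    proxy_set_header X-Forwarded-Prefix /lc;
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}
and, in the Spring Boot application.properties:
server.forward-headers-strategy=framework
With that, redirects and generated links should include /lc again.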
After we upgraded Spring Boot to version 2.2.6, the nginx service could not connect to the microservice. I spent a lot of time reading the nginx logs, changing ports ... but the cause was Spring Boot's handling of the proxy headers.
So we added the config below:
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
It works for me.
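For context (an inference, not stated in the post): Spring Boot 2.2 deprecated the old server.use-forward-headers property in favour of server.forward-headers-strategy, which may be why the X-Forwarded-* headers stopped being applied after the upgrade. A sketch of the matching application side:
# honour the X-Forwarded-* headers set by nginx above (Spring Boot 2.2+)
server.forward-headers-strategy=native
# "native" lets Tomcat's RemoteIpValve process the headers; "framework" uses Spring's ForwardedHeaderFilter instead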
I have Nginx in front of a Spring Boot 1.3.3 application with the Tomcat access log enabled, but the log always shows the proxy IP address (127.0.0.1) instead of the real client IP.
Is the X-Real-IP header used to get the real client IP?
Is this header used by Tomcat to write the IP address in the access log?
I have this configuration:
application.properties
server.use-forward-headers=true
server.tomcat.internal-proxies=127\\.0\\.0\\.1
server.tomcat.accesslog.enabled=true
Nginx configuration:
location / {
proxy_pass http://127.0.0.1:8091;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto https;
proxy_set_header X-Forwarded-Port 443;
proxy_set_header Host $host;
}
The real client IP is available in the $proxy_add_x_forwarded_for variable, i.e. the X-Forwarded-For header. It contains comma-separated entries, and the very first value is the real client IP.
To log the real client IP in Tomcat's access logs, change the pattern value of the AccessLogValve to:
%{X-Forwarded-For}i %l %u %t "%r" %s %b
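In a Spring Boot application the same pattern can be set through application.properties instead of editing Tomcat's server.xml; a sketch combining it with the properties from the question:
server.tomcat.accesslog.enabled=true
# log the X-Forwarded-For header (first entry = real client) instead of the proxy address
server.tomcat.accesslog.pattern=%{X-Forwarded-For}i %l %u %t "%r" %s %b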
The current setup of our backend uses Route53 to route requests to Tomcat servers running on EC2 instances.
I am trying to set up nginx as a load balancer (proxy) to route requests to our Tomcat servers.
Here are the instance types:
Tomcat server instance type = m3.2xlarge
nginx server instance type = c3.large
When I run ab (Apache Benchmark) with 100 concurrent connections without keep-alive, I see that a single Tomcat instance performs better than 2 Tomcat servers behind an nginx server. I am now wondering if there is something wrong with my nginx config. I checked the error.log file on the nginx instance and there are no errors. Also, the CPU on the nginx instance does not go above 30% while running the benchmark tool. Here is my nginx config:
user nginx;
pid /run/nginx.pid;
worker_processes auto;
worker_rlimit_nofile 32768;

events {
    worker_connections 8192;
    multi_accept on;
    use epoll;
}

http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
    ssl_prefer_server_ciphers on;

    upstream backend {
        server x.x.x.x:443;
        server x.x.x.x:443;
        keepalive 1024;
    }

    server {
        listen 443;
        server_name localhost;
        ssl on;
        ssl_certificate /etc/nginx/certs/ssl-bundle_2015_2018.crt;
        ssl_certificate_key /etc/nginx/certs/chewie.key;
        ssl_dhparam /etc/nginx/certs/dhparam.pem;
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_session_cache shared:SSL:10m;
        ssl_prefer_server_ciphers on;
        ssl_session_timeout 10m;
        ssl_ciphers "EECDH+ECDSA+AESGCM EECDH+aRSA+AESGCM EECDH+ECDSA+SHA384 EECDH+ECDSA+SHA256 EECDH+aRSA+SHA384 EECDH+aRSA+SHA256 EECDH+aRSA+RC4 EECDH EDH+aRSA RC4 !aNULL !eNULL !LOW !3DES !MD5 !EXP !PSK !SRP !DSS !RC4";

        location / {
            proxy_pass https://backend;
            proxy_cache_bypass true;
            proxy_no_cache true;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    gzip on;
    gzip_disable "msie6";
    gzip_vary on;
    gzip_proxied any;
    gzip_comp_level 6;
    gzip_buffers 16 8k;
    gzip_http_version 1.1;
    gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
Here are the Apache Benchmark results without nginx:
Concurrency Level: 100
Time taken for tests: 8.393 seconds
Complete requests: 800
Failed requests: 0
Total transferred: 368000 bytes
HTML transferred: 16800 bytes
Requests per second: 95.32 [#/sec] (mean)
Time per request: 1049.083 [ms] (mean)
Time per request: 10.491 [ms] (mean, across all concurrent requests)
Transfer rate: 42.82 [Kbytes/sec] received
These are the results with nginx in front of 2 Tomcat servers:
Concurrency Level: 100
Time taken for tests: 23.494 seconds
Complete requests: 800
Failed requests: 0
Total transferred: 381600 bytes
HTML transferred: 16800 bytes
Requests per second: 34.05 [#/sec] (mean)
Time per request: 2936.768 [ms] (mean)
Time per request: 29.368 [ms] (mean, across all concurrent requests)
Transfer rate: 15.86 [Kbytes/sec] received
Any thoughts on where I should be looking to optimize are appreciated!
Here are some things I did to improve performance:
Convert the traffic between nginx and the upstream server from https to http (a config sketch follows below).
Use the right SSL ciphers for your nginx. Make sure to run the SSL test to verify that the ciphers used are secure (www.ssllabs.com).
Increase the file descriptor limits for the nginx server as well as the Tomcat instances to a high number.
Will keep updating as I find more things.
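A sketch of what the first point might look like in the config above, assuming the Tomcat connectors also accept plain HTTP on port 8080 (not shown in the question). Note that for the keepalive pool in the upstream block to actually be used, nginx also needs proxy_http_version 1.1 and a cleared Connection header in the proxied location:
upstream backend {
    server x.x.x.x:8080;   # hypothetical: Tomcat listening for plain HTTP
    server x.x.x.x:8080;
    keepalive 1024;
}
location / {
    proxy_pass http://backend;
    # required for the upstream keepalive connections to be reused
    proxy_http_version 1.1;
    proxy_set_header Connection "";
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}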