I have nginx set up as a reverse proxy with the following entry:
location /jrri/ {
proxy_pass http://xxx.xxx.xxx.xxx:8080/;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
If I call https://www.mysite.com.ar:443/jrri from a web browser or curl, I get a 200 OK result:
curl --head https://www.mysite.com.ar:443/jrri/
HTTP/1.1 200
Server: nginx/1.20.1
Date: Mon, 26 Jul 2021 19:00:54 GMT
Content-Type: text/html;charset=UTF-8
Connection: keep-alive
but if I call it from an Oracle procedure, I get error 400 - Bad Request:
DECLARE
  l_text VARCHAR2(32767);
  req    utl_http.req;
BEGIN
  UTL_HTTP.SET_WALLET('');
  l_text := UTL_HTTP.REQUEST('https://www.mysite.com.ar:443/jrri/');
  dbms_output.put_line(l_text);
END;
Response:
<html>
<head><title>400 Bad Request</title></head>
<body>
<center><h1>400 Bad Request</h1></center>
<hr><center>nginx/1.20.1</center>
</body>
</html>
My Oracle instance is an Autonomous Database. What is wrong?
Regards
I found the problem: for some reason, if you write the port into the request URL, the server returns Bad Request.
If I call UTL_HTTP.REQUEST('https://www.mysite.com.ar/jrri/') without the port, it works!
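For completeness, here is the working call fleshed out with explicit request/response handling (a sketch; the URL without the port is the actual fix, the rest is standard UTL_HTTP usage):
DECLARE
  req    utl_http.req;
  resp   utl_http.resp;
  l_line VARCHAR2(32767);
BEGIN
  UTL_HTTP.SET_WALLET('');
  -- No port in the URL; 443 is the default for https anyway
  req  := UTL_HTTP.BEGIN_REQUEST('https://www.mysite.com.ar/jrri/');
  resp := UTL_HTTP.GET_RESPONSE(req);
  dbms_output.put_line('Status: ' || resp.status_code);
  BEGIN
    LOOP
      UTL_HTTP.READ_LINE(resp, l_line, TRUE);
      dbms_output.put_line(l_line);
    END LOOP;
  EXCEPTION
    WHEN UTL_HTTP.END_OF_BODY THEN
      UTL_HTTP.END_RESPONSE(resp);
  END;
END;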
I have a Spring Boot server behind an Nginx reverse proxy, that I access using fetch from a React app. The frontend is served from another port, so I have to enable CORS on the server. In most cases this works great, but about 1% of my users get a 403 in response to the OPTIONS preflight request, and I need help figuring out why. The biggest problem I have is that I'm unable to replicate the issue on any of my machines.
Spring Boot CORS config:
@Bean
public FilterRegistrationBean corsFilter() {
    CorsConfiguration config = new CorsConfiguration();
    config.addAllowedOrigin("https://example.com");
    config.addAllowedHeader("*");
    config.addAllowedMethod("GET");
    config.addAllowedMethod("PUT");
    config.addAllowedMethod("POST");
    config.addAllowedMethod("DELETE");
    config.addAllowedMethod("PATCH");
    UrlBasedCorsConfigurationSource source = new UrlBasedCorsConfigurationSource();
    source.registerCorsConfiguration("/**", config);
    FilterRegistrationBean bean = new FilterRegistrationBean(new CorsFilter(source));
    bean.setOrder(Ordered.HIGHEST_PRECEDENCE);
    return bean;
}
Nginx config (3000 is NodeJS serving frontend and 3001 is Spring Boot):
server {
...
location / {
proxy_pass http://localhost:3000;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
location /api/v1/ {
proxy_pass http://localhost:3001;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
...
}
Nginx log format (I removed some parts for clarity):
"$request" $status "$http_referer" "$http_user_agent"
By looking at the Nginx access.log I've nailed down 2 types of log rows where 403's show:
"OPTIONS /api/v1/oauth/token?grant_type=password&username=user%40example.com&password=****" 403 "https://www.example.com/login" "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/61.0.3163.100 Safari/537.36"
meaning Windows 7 running Chrome 61, and
"OPTIONS /api/v1/oauth/token?grant_type=password&username=user%40example.com&password=****" 403 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64; Trident/7.0; rv:11.0) like Gecko"
meaning Windows 7 running IE11.
Other users using the same setup of OS and browser experience no problems.
Data manually sent with fetch:
URL: https://example.com:3001/api/v1/oauth/token?grant_type=password&username=user%40example.com&password=****
Method: POST
Headers:
Authorization: Basic XXXXXXXXXXX=
Content-Type: application/x-www-form-urlencoded
Body: undefined
Actual parameters in the preflight request for a working user (from Chrome console):
Request headers:
OPTIONS /api/v1/oauth/token?grant_type=password&username=user%40example.com&password=**** HTTP/1.1
Host: https://example.com:3001
Connection: keep-alive
Access-Control-Request-Method: POST
Origin: https://example.com:3000
User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/61.0.3163.79 Safari/537.36
Access-Control-Request-Headers: authorization
Accept: */*
Referer: https://example.com:3000/login
Accept-Encoding: gzip, deflate
Accept-Language: en-US,en;q=0.8,sv;q=0.6
Response headers:
HTTP/1.1 200
Access-Control-Allow-Origin: https://example.com:3000
Vary: Origin
Access-Control-Allow-Methods: GET,PUT,POST,DELETE,PATCH
Access-Control-Allow-Headers: authorization
Content-Length: 0
Date: Tue, 03 Oct 2017 16:01:37 GMT
My guess would be that there is something wrong with the way I send the fetch request, or that I've configured the headers incorrectly. But so far I've not been able to solve it.
Any help would be mighty appreciated.
I've finally managed to find the error, and it was my own fault completely.
As can be seen in the Nginx access.log, one of the failed preflights has an $http_referer of www.example.com/login, which means the browser was on the www subdomain and therefore sent an Origin of https://www.example.com. That will of course fail the preflight, since my CORS config only allows https://example.com, without the www subdomain.
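One option would have been to simply allow the www origin as well in the corsFilter() bean above (a minimal sketch):
// Hypothetical alternative fix: accept both the apex and www origins
config.addAllowedOrigin("https://example.com");
config.addAllowedOrigin("https://www.example.com");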
Instead, I fixed it by adding server blocks to the Nginx config so that all requests to the www subdomain are redirected to the non-www domain with a 301 Moved Permanently:
server {
...
listen 443 ssl;
server_name example.com;
location / {
proxy_pass http://localhost:3000;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
location /api/v1/ {
proxy_pass http://localhost:3001;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
...
}
server {
server_name www.example.com example.com;
return 301 https://example.com$request_uri;
}
server {
listen 443 ssl;
server_name www.example.com;
return 301 https://example.com$request_uri;
}
Note that I also redirect all http traffic to https in order to keep everything encrypted.
This way I can be sure all requests enter my two servers with https://example.com as the origin, and there is no need to modify the CORS configuration.
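A quick way to confirm the redirect behaves as intended:
curl -I https://www.example.com/login
# HTTP/1.1 301 Moved Permanently
# Location: https://example.com/login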
We have an NGINX reverse proxy in front of our web server, and we need it to cache responses based on the request body (for a detailed explanation of why, see this other question).
The problem I have is that even though the POST data is the same, the multipart boundary is actually different every time (the boundaries are created by the web browser).
Here's a simplified example of what the request looks like (notice the browser-generated WebKitFormBoundaryV2BlneaIH1rGgo0w):
POST http://my-server/some-path HTTP/1.1
Content-Length: 383
Accept: application/json
Content-Type: multipart/form-data; boundary=----WebKitFormBoundaryV2BlneaIH1rGgo0w
------WebKitFormBoundaryV2BlneaIH1rGgo0w
Content-Disposition: form-data; name="some-key"; filename="some-pseudo-file.json"
Content-Type: application/json
{ "values": ["value1", "value2", "value3"] }
------WebKitFormBoundaryV2BlneaIH1rGgo0w--
If it's useful to someone, here's a simplified example of the NGINX config we use:
proxy_cache_path /path/to/nginx/cache levels=1:2 keys_zone=my_cache:10m max_size=10g inactive=60m use_temp_path=off;
location /path/to/post/request {
proxy_pass http://remote-server;
proxy_cache my_cache;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_http_version 1.1;
proxy_set_header Connection "";
proxy_cache_lock on;
proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
proxy_cache_methods POST;
proxy_cache_key "$scheme$proxy_host$uri$is_args$args|$request_body";
proxy_cache_valid 5m;
client_max_body_size 1500k;
proxy_hide_header Cache-Control;
}
I have seen the Lua NGINX module, which looks like it could work, but I don't see how I could use it to parse the request data to build the cache key while still passing the original request upstream, or whether there's something I could use with "stock" NGINX.
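For reference, here's roughly what I had in mind with Lua (an untested sketch assuming OpenResty/lua-nginx-module: it strips the browser-generated boundary out of the body before hashing, so identical payloads with different boundaries produce the same cache key):
location /path/to/post/request {
    set $body_cache_key "";
    rewrite_by_lua_block {
        ngx.req.read_body()
        -- get_body_data() returns nil if the body was buffered to disk; fine for small bodies
        local body = ngx.req.get_body_data() or ""
        -- pull the boundary out of the Content-Type header
        local boundary = (ngx.var.content_type or ""):match("boundary=([^;]+)")
        if boundary then
            -- escape Lua pattern magic characters, then remove every occurrence
            body = body:gsub(boundary:gsub("(%p)", "%%%1"), "")
        end
        ngx.var.body_cache_key = ngx.md5(body)
    }
    proxy_cache_key "$scheme$proxy_host$uri$is_args$args|$body_cache_key";
    # ... remaining proxy_* directives as in the config above ...
}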
Thanks!
Our stack is nginx - gunicorn - Mezzanine (a Django CMS) running on an EC2 instance. Everything works, but I can't seem to enable nginx proxy_cache. Here is my minimal config:
upstream %(proj_name)s {
server 127.0.0.1:%(gunicorn_port)s;
}
proxy_cache_path /cache keys_zone=bravo:10m;
server {
listen 80;
listen 443;
server_name %(live_host)s;
client_max_body_size 100M;
keepalive_timeout 15;
location /static/ {
expires 1M;
access_log off;
add_header Cache-Control "public";
root %(proj_path)s;
}
location / {
expires 1M;
add_header X-Proxy-Cache $upstream_cache_status;
proxy_cache bravo;
proxy_ignore_headers Cache-Control Expires Set-Cookie;
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-Protocol $scheme;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_pass http://%(proj_name)s;
}
}
Sample response:
HTTP/1.1 200 OK
Server: nginx/1.4.6 (Ubuntu)
Date: Wed, 07 Jan 2015 03:43:47 GMT
Content-Type: text/html; charset=utf-8
Transfer-Encoding: chunked
Connection: keep-alive
Vary: Accept-Encoding
Vary: Cookie, Accept-Language
Content-Language: en
Expires: Fri, 06 Feb 2015 03:43:47 GMT
Cache-Control: max-age=2592000
X-Proxy-Cache: MISS
Content-Encoding: gzip
I have mezzanine cache middleware enabled and it is returning responses with Set-Cookie headers, but proxy_ignore_headers should ignore that.
I did chmod 777 on the proxy_cache_path dir (/cache), so it shouldn't be a permissions issue.
Error logging is enabled but has produced nothing.
The proxy_cache_path directory remains completely empty...
Why is nginx not caching anything with this config?
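One check that may help narrow it down (a hedged suggestion, using the X-Proxy-Cache header already added above): request the same URL twice and compare the cache status; a second MISS within the cache window means nginx never stored the first response.
curl -sI http://%(live_host)s/ | grep -i x-proxy-cache   # first request: MISS
curl -sI http://%(live_host)s/ | grep -i x-proxy-cache   # expect HIT if caching works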
I'm working on a project with Grails 2.2.2 on a local machine (Mac OS X Lion 10.7.5). I have installed NGINX with brew and modified nginx.conf as follows:
worker_processes 1;
error_log logs/error.log info;
events {
worker_connections 1024;
}
http {
include mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log logs/access.log main;
sendfile on;
keepalive_timeout 65;
server {
listen 8081;
server_name localhost;
root /;
access_log /Users/lorenzo/grails/projects/logs/myproject_access.log;
location / {
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Server $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Real-IP $remote_addr;
proxy_pass http://localhost:8081;
}
#images folders
location /posters {
root /Users/lorenzo/grails/projects/posters/;
}
#images folders
location /avatars {
root /Users/lorenzo/grails/projects/avatars/;
}
#images folders
location /waveforms {
root /Users/lorenzo/grails/projects/waveforms/;
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root html;
}
}
}
When I access http://localhost:8081 my site is running, but I want to be sure the images are served by nginx and not by Tomcat, so I look at myproject_access.log. Nothing is happening, though:
nginx writes to the log ONLY when Tomcat is NOT running.
Is there a way to "monitor" the static files served by nginx?
Thank you
EDIT
Executing curl -I http://localhost:8081
when tomcat is running the output is:
HTTP/1.1 200 OK
Server: Apache-Coyote/1.1 //TOMCAT
...
when tomcat is NOT running the output is:
HTTP/1.1 500 Internal Server Error
Server: nginx/1.4.1 //NGINX
Date: Tue, 08 Apr 2014 09:30:00 GMT
Content-Type: text/html
Content-Length: 192
Connection: keep-alive
Your problem is that you are making both servers listen on the same port. You need to move Tomcat to another port, such as 8082, and let nginx listen on the main port (8081 in your case), then tell nginx to proxy to 8082 whenever the request isn't an image (or any other static asset).
Here is also a refined version of your server block:
server {
server_name localhost;
listen 8081;
root /Users/lorenzo/grails/projects;
location @tomcat {
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Server $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Real-IP $remote_addr;
proxy_pass http://localhost:8082;
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root html;
}
location / {
try_files $uri $uri/ @tomcat;
}
}
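With Tomcat moved to 8082, you can confirm who served a given image by checking the Server header, as in your curl tests above (the file name here is hypothetical):
curl -I http://localhost:8081/posters/example.jpg
# Server: nginx/1.4.1 means nginx served it; Apache-Coyote/1.1 would mean Tomcat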
I configured nginx to cache all files for 3 minutes. This works only for files I upload to the webserver manually; all files generated by the CMS get cached forever (or at least for a long time; I didn't wait)...
The CMS delivers all pages as "index.html" within its own folder structure (www.x.de/category1/category2/articlename/index.html).
How can I debug this? Is there a way to check the lifetime of a specific file?
Can something in the .html files override the proxy_cache_valid value?
Many thanks!
Config:
server {
listen 1.2.3.4:80 default_server;
server_name x.de;
server_name www.x.de;
server_name ipv4.x.de;
client_max_body_size 128m;
location / { # IPv6 isn't supported in proxy_pass yet.
proxy_pass http://apache.ip:7080;
proxy_cache my-cache;
proxy_cache_valid 200 3m;
proxy_cache_valid 404 1m;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Accel-Internal /internal-nginx-static-location;
access_log off;
}
location /internal-nginx-static-location/ {
alias /var/www/vhosts/x.de/httpdocs/cms/;
access_log /var/www/vhosts/x.de/statistics/logs/proxy_access_log;
add_header X-Powered-By PleskLin;
internal;
}
}
Using curl -I, you can retrieve the headers which will tell you what the cache settings are.
E.g.
>>> curl -I http://www.google.com
HTTP/1.1 200 OK
Date: Sun, 09 Feb 2014 06:28:36 GMT
Expires: -1
Cache-Control: private, max-age=0
Content-Type: text/html; charset=ISO-8859-1
Server: gws
X-XSS-Protection: 1; mode=block
X-Frame-Options: SAMEORIGIN
Transfer-Encoding: chunked
Cache settings are conveyed in the response headers, so it's not possible for the HTML files themselves to modify them.
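To check what the cache is doing for a specific file in the config above, you could also have nginx report its own cache verdict (a sketch; the header name is arbitrary), by adding this inside the location / block:
add_header X-Cache-Status $upstream_cache_status;
A response showing EXPIRED roughly three minutes after a HIT would confirm that proxy_cache_valid 200 3m is actually being applied to the CMS pages.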