Nginx proxy_pass to a password protected upstream - proxy

I want to pass a request to an upstream server. The original url is not password protected but the upstream server is. I need to inject a Basic auth username/password into the request but get errors when doing:
upstream supportbackend {
    server username:password@support.yadayada.com;
}

and

upstream supportbackend {
    server support.yadayada.com;
}

location /deleteuser {
    proxy_pass http://username:password@supportbackend;
}

You need to add proxy_set_header Authorization "Basic ...."; where the .... is the Base64 encoding of user:pass.
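A minimal sketch of that approach (the credentials user:pass and their Base64 form are placeholders, not values from the question):

```nginx
upstream supportbackend {
    server support.yadayada.com;
}

location /deleteuser {
    # Generate the value with: printf 'user:pass' | base64  ->  dXNlcjpwYXNz
    proxy_set_header Authorization "Basic dXNlcjpwYXNz";
    proxy_pass http://supportbackend;
}
```

nginx does not accept credentials embedded in proxy_pass or server URLs, which is why the header has to be set explicitly.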

get request body in auth_request nginx

I want to have the request body available in auth_request so I can check something in it, but I seem to have issues passing the request body: as a subrequest, it can't get any information about the body of the original request.
The API http://localhost:8091/api/authen/checkLoggedIn works fine on its own and has no request-body errors, but when I use it as the auth_request target it fails.
When I get rid of @Nullable, even if I pass some info like
{
"name" : "thisistest",
"gender" : "Male"
}
it can't get the request body and throws exceptions:
org.apache.catalina.connector.ClientAbortException: java.io.IOException: An existing connection was forcibly closed by the remote host
at org.apache.catalina.connector.OutputBuffer.doFlush(OutputBuffer.java:310) ~[tomcat-embed-core-9.0.65.jar:9.0.65]
at org.apache.catalina.connector.OutputBuffer.flush(OutputBuffer.java:273) ~[tomcat-embed-core-9.0.65.jar:9.0.65]
at org.apache.catalina.connector.CoyoteOutputStream.flush(CoyoteOutputStream.java:118) ~[tomcat-embed-core-9.0.65.jar:9.0.65]
at java.base/java.io.FilterOutputStream.flush(FilterOutputStream.java:153) ~[na:na]
at com.fasterxml.jackson.core.json.UTF8JsonGenerator.flush(UTF8JsonGenerator.java:1187) ~[jackson-core-2.13.4.jar:2.13.4]
at com.fasterxml.jackson.databind.ObjectWriter.writeValue(ObjectWriter.java:1009) ~[jackson-databind-2.13.4.jar:2.13.4]
This is my auth_request configuration:
location = /auth {
    internal;
    proxy_method POST;
    proxy_pass_request_body on;
    proxy_set_header Content-Length "";
    proxy_set_header X-Original-URI $request_uri;
    proxy_pass http://localhost:8091/api/authen/checkLoggedIn;
}
auth_request /auth;
@PostMapping("/checkLoggedIn")
public ResponseEntity checkLoggedIn(@Nullable @RequestBody Object object) {
    // do something
    return ResponseEntity.status(HttpStatus.OK).body("The request was successfully completed.");
}

Caddy returns empty headers although they are set in the Go server

I am using Caddy to enable HTTPS.
I have to add header_down to add these headers; without it, the browser receives empty headers.
https://dev.bee.com:3002 {
    encode gzip
    tls cert/server.crt cert/server.key
    # log {
    #     output stdout
    # }
    reverse_proxy /* dev.bee.com:3001 {
        header_down Access-Control-Allow-Origin "http://dev.bee.com:3000,http://localhost:3000"
        header_down Access-Control-Allow-Credentials "true"
        header_down Access-Control-Allow-Headers "Origin,Content-Length,Content-Type"
    }
}
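For context, a sketch of how the Go server might set those headers itself (the function names and origin values are assumptions mirroring the Caddyfile, not code from the question); the header_down directives then control what actually reaches the browser through the proxy:

```go
package main

import (
	"fmt"
	"net/http"
)

// corsHeaders returns the headers the Go server intends to send back.
// The allowed origin is an assumption for illustration.
func corsHeaders() map[string]string {
	return map[string]string{
		"Access-Control-Allow-Origin":      "http://dev.bee.com:3000",
		"Access-Control-Allow-Credentials": "true",
		"Access-Control-Allow-Headers":     "Origin,Content-Length,Content-Type",
	}
}

// withCORS wraps a handler and applies the headers on every response.
func withCORS(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		for k, v := range corsHeaders() {
			w.Header().Set(k, v)
		}
		next.ServeHTTP(w, r)
	})
}

func main() {
	// In the real server this would be something like
	// http.ListenAndServe(":3001", withCORS(mux)); here we just
	// print the headers that should reach the browser.
	for k, v := range corsHeaders() {
		fmt.Printf("%s: %s\n", k, v)
	}
}
```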

index_not_found_exception - Elasticsearch

In image #1, as you can see, I am getting a valid ES response on firing a GET request. However, if I try doing the same things through the NGINX reverse proxy that I have created and hit myip/elasticsearch, it returns me the error (image #2). Can someone help me with this?
server {
    listen 80;
    server_name myip;

    location /elasticsearch/ {
        proxy_pass http://127.0.0.1:9200;
    }

    location /kibana/ {
        proxy_pass http://127.0.0.1:5601;
    }
}
The right way is to specify both of those slashes. The slash after 127.0.0.1:9200 is essential: without it your request /elasticsearch/some/route would be passed as-is, while with that slash it would be passed as /some/route.

In nginx terms this means you specified a URI after the backend name. The URI prefix given in the location directive (/elasticsearch/) is stripped from the original URI (leaving some/route at this stage), and the URI specified after the backend name (/) is prepended to it, resulting in / + some/route = /some/route. You can specify any path in a proxy_pass directive; for example, with proxy_pass http://127.0.0.1:9200/prefix/ that request would be passed to the backend as /prefix/some/route.

Once you understand all of the above, you can see that specifying location /elasticsearch { ... } instead of location /elasticsearch/ { ... } would give you //some/route instead of /some/route. I'm not sure this is exactly the cause of your problem, but configurations like
location /elasticsearch/ {
    proxy_pass http://127.0.0.1:9200/;
}
are more correct.
Now may I ask what you get with exactly this configuration in response to curl -i http://localhost:9200/ and curl -i http://localhost/? I want to see all the headers (of course, except those containing private information).
The problem is the path: nginx is passing it unmodified.
Add a trailing slash to the proxy_pass URLs.
server {
    listen 80;
    server_name myip;

    location /elasticsearch/ {
        proxy_pass http://127.0.0.1:9200/;
    }

    location /kibana/ {
        proxy_pass http://127.0.0.1:5601/;
    }
}
From the documentation:
Note that in the first example above, the address of the proxied server is followed by a URI, /link/. If the URI is specified along with the address, it replaces the part of the request URI that matches the location parameter. For example, here the request with the /some/path/page.html URI will be proxied to http://www.example.com/link/page.html. If the address is specified without a URI, or it is not possible to determine the part of URI to be replaced, the full request URI is passed (possibly, modified).
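To make the replacement rule concrete, here is a sketch of the variants discussed above and the URI each would send upstream for a request to /elasticsearch/some/route (these are three alternatives; only one would appear in a real config):

```nginx
# No URI after the backend: the original URI is passed as-is.
location /elasticsearch/ {
    proxy_pass http://127.0.0.1:9200;          # -> /elasticsearch/some/route
}

# Trailing slash: the location prefix is replaced by "/".
location /elasticsearch/ {
    proxy_pass http://127.0.0.1:9200/;         # -> /some/route
}

# Any other URI is substituted the same way.
location /elasticsearch/ {
    proxy_pass http://127.0.0.1:9200/prefix/;  # -> /prefix/some/route
}
```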

Spring Boot actuator page returning http links instead of https

I have a Spring Boot 2.0.2 application. When I browse to the following URL: https://my-domain-name/my-application-name/actuator, I'm getting the following output:
{
    "_links": {
        "self": {
            "href": "http://my-domain-name/my-application-name/actuator",
            "templated": false
        },
        "health": {
            "href": "http://my-domain-name/my-application-name/actuator/health",
            "templated": false
        },
        "info": {
            "href": "http://my-domain-name/my-application-name/actuator/info",
            "templated": false
        }
    }
}
As you can see, the content is OK but all links start with 'http' rather than 'https', even though I'm accessing the URL over HTTPS.
The domain name I'm trying to reach is an AWS Route 53 record, with an alias to an AWS ELB. This ELB redirects the call to a target which is a K8S cluster. The pod itself is running Nginx which redirects the URL to another pod which runs Spring Boot with an embedded Tomcat and it's serving its content using HTTP and port 8080.
For the Nginx, there's a proxy pass configuration:
location /my-application-name { proxy_pass http://my-application-name; }
The following headers are being added:
proxy_set_header Host $http_host;
proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto;
... so that Spring Boot will know the 'original' request.
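Put together, the relevant nginx fragment would look roughly like this:

```nginx
location /my-application-name {
    proxy_set_header Host $http_host;
    proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto;
    proxy_pass http://my-application-name;
}
```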
Does anybody have an idea what I'm doing wrong? It seems the actuator implementation is not taking the HTTPS protocol into account.
I solved the issue by adding this code block in a configuration class:
@Bean
public FilterRegistrationBean<ForwardedHeaderFilter> forwardedHeaderFilterRegistration() {
    ForwardedHeaderFilter filter = new ForwardedHeaderFilter();
    FilterRegistrationBean<ForwardedHeaderFilter> registration = new FilterRegistrationBean<>(filter);
    registration.setName("forwardedHeaderFilter");
    registration.setOrder(10000);
    return registration;
}
It seems the ForwardedHeaderFilter does take the X-Forwarded-Proto header into account. It is still unclear why the actuator implementation doesn't do this by default, because the other X-Forwarded-* headers are treated correctly.
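As an alternative to registering the filter manually, Spring Boot exposes a property that makes the embedded container honor the X-Forwarded-* headers itself; on the 2.0.x line it is server.use-forward-headers (from 2.2 onwards, server.forward-headers-strategy). A sketch:

```properties
# application.properties
# Spring Boot 2.0/2.1: let embedded Tomcat honor X-Forwarded-Proto etc.
server.use-forward-headers=true
# Spring Boot 2.2+ equivalent:
# server.forward-headers-strategy=framework
```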

Cross domain session with Sinatra and AngularJS

I am using Sinatra as a web service and AngularJS to make the calls:
post '/loginUser' do
  session[:cui] = user['cui']
end

get '/cui' do
  return session[:cui].to_s
end
But it doesn't seem to work (the '/cui' call returns an empty string). Any help would be greatly appreciated.
UPDATE:
Setting headers['Access-Control-Allow-Credentials'] = 'true' in Sinatra allows me to send the session, but it seems like the $http directive is not using the browser's cookies.
On the Sinatra app:
before do
  headers['Access-Control-Allow-Methods'] = 'GET, POST, PUT, DELETE, OPTIONS'
  headers['Access-Control-Allow-Origin'] = 'http://localhost:4567'
  headers['Access-Control-Allow-Headers'] = 'accept, authorization, origin'
  headers['Access-Control-Allow-Credentials'] = 'true'
end
AngularJS app:
host = 'http://127.0.0.1:5445/'
@viewController = ($scope, $http) ->
  $scope.getCui = () ->
    $http.get(host + 'cui', { withCredentials: true }).success (data) ->
      $scope.cui = data
      console.log data
Explanation:
AngularJS uses its own cookie handling, so we need to specify that cookies can be passed through the $http.get call using the {withCredentials: true} configuration object. Sinatra needs to accept the cross-domain cookies, so we need the headers mentioned above.
Note: the 'Access-Control-Allow-Origin' header cannot be a wildcard when credentials are sent.
One way around this would be to configure an HTTP server with a proxy pass, so you could hit the same domain without incurring a cross-origin error. That way you can continue to properly maintain your abstractions as two separate apps.
Here is a brief example with nginx:
upstream angular_app {
    server localhost:3003;
}

upstream sinatra_app {
    server localhost:3004;
}

server {
    listen 80;
    server_name local.angular_app.com;
    root /Users/username/source/angular_app/;

    location / {
        proxy_set_header Host $http_host;
        proxy_redirect off;
        # proxy to the Angular app upstream defined above
        proxy_pass http://angular_app/;
    }

    location ~ ^/api/(.*)$ {
        proxy_set_header Host $http_host;
        proxy_read_timeout 1200;
        proxy_pass http://sinatra_app/;
    }
}
By routing at the server level, you can successfully bypass domain restrictions AND you can keep the applications separate.
