get request body in auth_request nginx - spring-boot

I want to have the request body available in auth_request so I can check something in it, but I seem to have issues passing the request body: as a subrequest, the auth call can't get any information about the body of the original request.
The API http://localhost:8091/api/authen/checkLoggedIn works fine on its own and has no request-body errors, but when I use this API for auth_request, it fails.
When I get rid of @Nullable and pass some info like
{
"name" : "thisistest",
"gender" : "Male"
}
it can't get the request body and throws exceptions:
org.apache.catalina.connector.ClientAbortException: java.io.IOException: An existing connection was forcibly closed by the remote host
    at org.apache.catalina.connector.OutputBuffer.doFlush(OutputBuffer.java:310) ~[tomcat-embed-core-9.0.65.jar:9.0.65]
    at org.apache.catalina.connector.OutputBuffer.flush(OutputBuffer.java:273) ~[tomcat-embed-core-9.0.65.jar:9.0.65]
    at org.apache.catalina.connector.CoyoteOutputStream.flush(CoyoteOutputStream.java:118) ~[tomcat-embed-core-9.0.65.jar:9.0.65]
    at java.base/java.io.FilterOutputStream.flush(FilterOutputStream.java:153) ~[na:na]
    at com.fasterxml.jackson.core.json.UTF8JsonGenerator.flush(UTF8JsonGenerator.java:1187) ~[jackson-core-2.13.4.jar:2.13.4]
    at com.fasterxml.jackson.databind.ObjectWriter.writeValue(ObjectWriter.java:1009) ~[jackson-databind-2.13.4.jar:2.13.4]
This is my auth_request configuration:
location = /auth {
    internal;
    proxy_method POST;
    proxy_pass_request_body on;
    proxy_set_header Content-Length "";
    proxy_set_header X-Original-URI $request_uri;
    proxy_pass http://localhost:8091/api/authen/checkLoggedIn;
}
auth_request /auth;
@PostMapping("/checkLoggedIn")
public ResponseEntity checkLoggedIn(@Nullable @RequestBody Object object) {
    // do something
    return ResponseEntity.status(HttpStatus.OK).body("The request was successfully completed.");
}
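For context when reading the question: the nginx auth_request module issues its subrequest without the client request body, so the auth endpoint cannot reliably read it. The pattern shown in the nginx docs disables body forwarding entirely so the backend does not wait for a body that never arrives. A minimal sketch of that pattern, reusing the endpoint URL from the question (the auth check then has to rely on headers such as cookies, rather than on the body):

```nginx
location = /auth {
    internal;
    proxy_pass http://localhost:8091/api/authen/checkLoggedIn;
    # auth_request subrequests carry no body; say so explicitly instead of
    # advertising a Content-Length for a body that is never sent
    proxy_pass_request_body off;
    proxy_set_header Content-Length "";
    proxy_set_header X-Original-URI $request_uri;
}
```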

Related

index_not_found_exception - Elasticsearch

In image #1, as you can see, I am getting a valid ES response when firing a GET request. However, if I try doing the same thing through the NGINX reverse proxy that I have created and hit myip/elasticsearch, it returns the error shown in image #2. Can someone help me with this?
server {
    listen 80;
    server_name myip;
    location /elasticsearch/ {
        proxy_pass http://127.0.0.1:9200;
    }
    location /kibana/ {
        proxy_pass http://127.0.0.1:5601;
    }
}
The right way is to specify both of those slashes. The slash after 127.0.0.1:9200 is essential: without it your request /elasticsearch/some/route would be passed as-is, while with that slash it would be passed as /some/route.

In nginx terms this means you have specified a URI after the backend name. The URI prefix given in the location directive (/elasticsearch/) is stripped from the original URI (leaving some/route at this stage) and the URI specified after the backend name (/) is prepended to it, resulting in / + some/route = /some/route. You can specify any path in a proxy_pass directive; for example, with proxy_pass http://127.0.0.1:9200/prefix/ that request would be passed to the backend as /prefix/some/route.

With all that in mind, you can see that specifying location /elasticsearch { ... } instead of location /elasticsearch/ { ... } would give you //some/route instead of /some/route. I'm not sure that is exactly the cause of your problem, but configurations like
location /elasticsearch/ {
    proxy_pass http://127.0.0.1:9200/;
}
are more correct.
Now may I ask what you get with exactly this configuration in response to curl -i http://localhost:9200/ and curl -i http://localhost/? I want to see all the headers (except, of course, those containing private information).
The problem is the path: nginx is passing it unmodified.
Add a slash to the proxy_pass URLs:
server {
    listen 80;
    server_name myip;
    location /elasticsearch/ {
        proxy_pass http://127.0.0.1:9200/;
    }
    location /kibana/ {
        proxy_pass http://127.0.0.1:5601/;
    }
}
From the documentation:
Note that in the first example above, the address of the proxied server is followed by a URI, /link/. If the URI is specified along with the address, it replaces the part of the request URI that matches the location parameter. For example, here the request with the /some/path/page.html URI will be proxied to http://www.example.com/link/page.html. If the address is specified without a URI, or it is not possible to determine the part of URI to be replaced, the full request URI is passed (possibly, modified).
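The documentation's example corresponds to a configuration along these lines (the location path and host are taken from the quoted docs, not from the question):

```nginx
location /some/path/ {
    # the URI part "/link/" replaces the matched "/some/path/" prefix,
    # so /some/path/page.html is proxied as /link/page.html
    proxy_pass http://www.example.com/link/;
}
```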

js client could not connect to elasticsearch behind nginx

I am using nginx to protect Elasticsearch. When trying to access Elasticsearch from the JS client, it throws an Unable to revive connection: http://ubuntulocal:80:9200/ error.
My question is: how do I connect to Elasticsearch using the JS client behind the proxy?
node js code
var elasticsearch = require('elasticsearch');
var host = [{
    host: 'http://ubunutlocal:80',
    auth: 'root:root'
}];
var client = new elasticsearch.Client({
    host: host
});
client.search({
    index: 'bank'
    // undocumented params are appended to the query string
    //hello: "elasticsearch"
}, function (error, response) {
    if (error) {
        console.error('elasticsearch cluster is down!', error);
    } else {
        console.log('All is well', response);
    }
});
Error log
Elasticsearch ERROR: 2015-10-26T13:14:06Z
Error: Request error, retrying -- getaddrinfo ENOTFOUND ubuntulocal:80
at Log.error (/node_modules/elasticsearch/src/lib/log.js:213:60)
at checkRespForFailure (/node_modules/elasticsearch/src/lib/transport.js:192:18)
at HttpConnector.<anonymous> (
/node_modules/elasticsearch/src/lib/connectors/http.js:153:7)
at ClientRequest.wrapper (
node_modules/elasticsearch/node_modules/lodash/index.js:3111:19)
at ClientRequest.emit (events.js:107:17)
at Socket.socketErrorListener (_http_client.js:271:9)
at Socket.emit (events.js:107:17)
at net.js:950:16
at process._tickCallback (node.js:355:11)
Elasticsearch WARNING: 2015-10-26T13:14:06Z
Unable to revive connection: http://ubuntulocal:80:9200/
Elasticsearch WARNING: 2015-10-26T13:14:06Z
No living connections
elasticsearch cluster is down! { [Error: No Living connections] message: 'No Living connections' }
Nginx configuration
server {
    listen 80;
    server_name guidanzlocal;
    location / {
        rewrite ^/(.*) /$1 break;
        proxy_ignore_client_abort on;
        proxy_pass http://localhost:9200;
        proxy_redirect http://localhost:9200 http://guidanzlocal/;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        auth_basic "Elasticsearch Authentication";
        auth_basic_user_file /etc/elasticsearch/user.pwd;
    }
}
Could you try to override nodesToHostCallback in your config:
var host = [{
    host: 'http://ubunutlocal:9200',
    auth: 'root:root'
}];
var client = new elasticsearch.Client({
    host: host,
    nodesToHostCallback: function ( /*nodes*/ ) {
        return [host];
    }
});
When elasticsearch-js attempts to reconnect after a failure, it pings your node to get its URL, which can lead to inconsistent results when you're using a proxy.

Cross domain session with Sinatra and AngularJS

I am using Sinatra as a web service and AngularJS to make the calls:
post '/loginUser' do
  session[:cui] = user['cui']
end

get '/cui' do
  return session[:cui].to_s
end
But it doesn't seem to work (the '/cui' call returns an empty string). Any help would be greatly appreciated.
UPDATE:
Setting headers['Access-Control-Allow-Credentials'] = 'true' in Sinatra allows me to send the session, but it seems the $http directive is not using the browser's cookies.
On the Sinatra app:
before do
  headers['Access-Control-Allow-Methods'] = 'GET, POST, PUT, DELETE, OPTIONS'
  headers['Access-Control-Allow-Origin'] = 'http://localhost:4567'
  headers['Access-Control-Allow-Headers'] = 'accept, authorization, origin'
  headers['Access-Control-Allow-Credentials'] = 'true'
end
AngularJS app:
host = 'http://127.0.0.1:5445/'
@viewController = ($scope, $http) ->
  $scope.getCui = () ->
    $http.get(host + 'cui', { withCredentials: true }).success (data) ->
      $scope.cui = data
      console.log data
Explanation:
AngularJS uses its own cookie handling, so we need to specify that cookies may be passed through the $http.get call using the { withCredentials: true } configuration object. Sinatra needs to accept the cross-domain cookies, so we need the headers mentioned above.
Note: the 'Access-Control-Allow-Origin' header cannot be a wildcard when credentials are sent.
One option around this would be to configure a http server with a proxy pass, so you could hit the same domain without incurring a cross origin error. That way you can continue to properly maintain your abstractions as 2 separate apps.
Here is a brief example with nginx:
upstream angular_app {
    server localhost:3003;
}
upstream sinatra_app {
    server localhost:3004;
}
server {
    listen 80;
    server_name local.angular_app.com;
    root /Users/username/source/angular_app/;
    location / {
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://angular_app;
    }
    location ~ ^/api/(.*)$ {
        proxy_set_header Host $http_host;
        proxy_read_timeout 1200;
        proxy_pass http://sinatra_app/;
    }
}
By routing at the server level, you can successfully bypass domain restrictions, and you can keep the applications separate.

nginx: add conditional expires header to fastcgi_cache response

When using nginx fastcgi_cache, I cache HTTP 200 responses longer than I do any other HTTP code. I want to be able to conditionally set the expires header based on this code.
For example:
fastcgi_cache_valid 200 302 5m;
fastcgi_cache_valid any 1m;

if ($HTTP_CODE = 200) {
    expires 5m;
} else {
    expires 1m;
}
Is something like the above possible (inside a location container)?
Sure. From http://wiki.nginx.org/HttpCoreModule#Variables:
$sent_http_HEADER
The value of the HTTP response header HEADER when converted to lowercase and
with 'dashes' converted to 'underscores', e.g. $sent_http_cache_control,
$sent_http_content_type...;
so you could match on $sent_http_response in an if statement.
There's a gotcha, though, since http://nginx.org/en/docs/http/ngx_http_headers_module.html#expires doesn't list if as an allowed context for the expires directive.
You can work around that by setting a variable in the if block and then referring to it later, like so:
set $expires_time 1m;
if ($sent_http_response ~* "200") {
    set $expires_time 5m;
}
expires $expires_time;
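An editorial aside, going beyond the original answer: nginx evaluates if during the rewrite phase, before the response exists, so matching on response-time values there is fragile. A map on $status is evaluated lazily when the variable is read (i.e. when expires emits its header), which is a common way to express this. A sketch, assuming the map sits at http level and the cache directives are filled in from your existing config:

```nginx
# http-level: derive the expires value from the response status code
map $status $expires_time {
    default 1m;
    200     5m;
}

server {
    location / {
        # ... existing fastcgi_pass / fastcgi_cache directives ...
        expires $expires_time;
    }
}
```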

Nginx proxy_pass to a password protected upstream

I want to pass a request to an upstream server. The original URL is not password protected, but the upstream server is. I need to inject a Basic auth username/password into the request, but I get errors when doing:
upstream supportbackend {
    server username:password@support.yadayada.com;
}
and
upstream supportbackend {
    server support.yadayada.com;
}
location /deleteuser {
    proxy_pass http://username:password@supportbackend;
}
You need to add proxy_set_header Authorization "Basic ....";, where .... is the Base64 encoding of user:pass.
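A sketch of what that can look like end-to-end, reusing the upstream from the question; the Base64 string below is literally base64("username:password") and is only a placeholder for the real credentials:

```nginx
upstream supportbackend {
    server support.yadayada.com;
}

location /deleteuser {
    proxy_pass http://supportbackend;
    # "dXNlcm5hbWU6cGFzc3dvcmQ=" is base64("username:password")
    proxy_set_header Authorization "Basic dXNlcm5hbWU6cGFzc3dvcmQ=";
}
```

The encoded value can be generated with, e.g., echo -n 'username:password' | base64.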
