Nginx-Lua session does not start using module lua-resty-session

I have an nginx server which I am using as a forward proxy. I want to add a layer of authentication to the architecture, and I am using Lua for that.
I am using the https://github.com/bungle/lua-resty-session module to enable sessions in Lua.
local session = require "resty.session".open{ cookie = { domain = cookie_domain } }
-- Read some data
if session.present then
    ngx.log(ngx.ERR, "Session -- "..session.id)
end
if not session.started then
    session:start()
    ngx.log(ngx.ERR, "Started -- ")
end
After each request received by the server, I get the log message
Started --
Server configuration:
server {
    listen 80;
    server_name {SERVER_IP};

    # tons of pagespeed configuration

    location / {
        # basic authentication
        ##auth_basic "Restricted";
        ##auth_basic_user_file {PATH_FOR_HTPASS_FILE};

        access_by_lua_file {PATH_FOR_LUA_FILE};

        # cache name
        proxy_cache browser_cache;

        resolver 8.8.8.8;

        # app1 reverse proxy follow
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        proxy_pass http://$http_host$uri$is_args$args;
    }
}
The only issue I can see is the cookie_domain: the server does not have a domain pointed at it, and I am passing the IP address of the server as cookie_domain. I am not able to figure out the cause of the issue.

I am the author of that component, so let me give you a few answers. First, the reason you always get Started -- logged is that session.started is only set to true when you start the session. Here you only open the session. So this check:
if not session.started then
    ...
end
will always be true.
open and start differ in that open will not try to renew the cookie if it is about to expire, and open will not start a new session if one is not present (session.present). Basically, you use open only when you don't want to auto-renew cookies and you only need read-only access to the session.
I will also briefly address what may cause the problem with reattaching to the session: I suspect the client may not be sending the cookie back, possibly because of some cookie attribute. Have you tried not specifying domain?
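If you do want the session created and the cookie renewed automatically, a minimal sketch of the question's snippet (reusing the same hypothetical cookie_domain variable) would call start instead of open:
-- sketch only: start attaches to an existing session or creates a new one
local session = require "resty.session".start{ cookie = { domain = cookie_domain } }
ngx.log(ngx.ERR, "Session -- " .. ngx.encode_base64(session.id))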

Example Nginx Config:
server {
    listen 8090;
    server_name 127.0.0.1;

    location / {
        access_by_lua_block {
            local session = require "resty.session".open{
                cookie = { domain = "127.0.0.1" }
            }
            if session.present then
                ngx.log(ngx.ERR, "Session -- " .. ngx.encode_base64(session.id))
            else
                session:start()
                ngx.log(ngx.ERR, "Started -- " .. ngx.encode_base64(session.id))
            end
        }
        content_by_lua_block {
            ngx.say "Hello"
        }
    }
}
Now open http://127.0.0.1:8090/ in a browser.
The server will send you this header:
Set-Cookie:
session=acYmlSsZsK8pk5dPMu8Cow..|
1489250635|
lXibGK3hmR1JLPG61IOsdA..|
RdUK16cMz6c3tDGjonNahFUCpyY.;
Domain=127.0.0.1;
Path=/;
SameSite=Lax;
HttpOnly
And this will be logged in your Nginx error.log:
2017/03/11 17:43:55 [error] 1100#0: *2
[lua] access_by_lua(nginx.conf:21):7:
Started -- acYmlSsZsK8pk5dPMu8Cow==,
client: 127.0.0.1,
server: 127.0.0.1,
request: "GET / HTTP/1.1",
host: "127.0.0.1:8090"
Just what we wanted. Now refresh the browser at the same URL (F5 on Windows, Cmd-R on Mac). This time the client will send this header to the server:
Cookie: session=acYmlSsZsK8pk5dPMu8Cow..|
1489250635|
lXibGK3hmR1JLPG61IOsdA..|
RdUK16cMz6c3tDGjonNahFUCpyY.
Everything is still just fine, and this gets logged to the Nginx error.log:
2017/03/11 17:51:44 [error] 1100#0: *3
[lua] access_by_lua(nginx.conf:21):4:
Session -- acYmlSsZsK8pk5dPMu8Cow==,
client: 127.0.0.1,
server: 127.0.0.1,
request: "GET / HTTP/1.1",
host: "127.0.0.1:8090"
See, it didn't log the Started here.
Please also read this:
https://github.com/bungle/lua-resty-session#notes-about-turning-lua-code-cache-off
If you have lua_code_cache off; then you need to set the secret explicitly, otherwise a different secret will be regenerated on every request, which means we will never be able to attach to a previously opened session, and Started will be logged on every request.
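One way to pin the secret (a sketch following the module's README; the value below is just a placeholder, generate your own long random string) is to set it as an nginx variable in the server block:
set $session_secret 623q4hR325t36VsCD3g567922IC0073T;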
One additional note:
In general you shouldn't set the domain if you are accessing a (single) IP address, because browsers will by default send the cookie back only to that same IP address, so passing the domain attribute in the cookie doesn't really add anything.

Related

Basic browser authentication with Safari Capybara Selenium

I've got a problem with browser authentication on Safari using Capybara/Selenium.
I'm using this code to authenticate:
visit "https://#{ENV['AUTH_USERNAME']}:#{ENV['AUTH_PASSWORD']}#my-staging-app.heroku.com"
This works just fine on Chrome and FF but not on Safari.
Any ideas how to bypass this?
Okay, I've found the solution for this. I had to use a reverse proxy, e.g. Nginx, and send the proper headers :)
Here is how I've done it:
In this example I'll be using the credentials login: admin and password: secret123.
Go to https://www.base64encode.org and encode your credentials admin:secret123.
In this example it's YWRtaW46c2VjcmV0MTIz
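If you prefer the command line, the same value can be produced locally (assuming a standard base64 utility is available):
echo -n 'admin:secret123' | base64
# YWRtaW46c2VjcmV0MTIz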
brew install nginx
sudo vim /usr/local/etc/nginx/nginx.conf
Paste this code there:
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    server {
        listen 8080;
        server_name localhost;

        location / {
            proxy_pass https://your_app.herokuapp.com;
            proxy_set_header Authorization "Basic YWRtaW46c2VjcmV0MTIz";
        }
    }
}
Change proxy_pass to match your app URL, and change proxy_set_header to Authorization "Basic <your_encoded_creds>".
Then: brew services start nginx
From now on, when you hit http://localhost:8080 you'll be taken to your page and logged in.

invalid redirect_uri Keycloak when client is not on localhost

I've got my Keycloak server deployed on AWS EC2 behind a reverse proxy, and my frontend client (a Spring Boot app) sits on a different EC2 instance.
Now I get an invalid redirect_uri error, although it works when the frontend client is on localhost and Keycloak is on AWS, i.e.:
Keycloak is reachable under: http://api.my-kc.site/
Valid Redirect URIs: http://localhost:8012/* and /login/* WORKS
The Query: https://api.my-kc.site/auth/realms/WebApps/protocol/openid-connect/auth?response_type=code&client_id=product-app&redirect_uri=http%3A%2F%2Flocalhost%3A8012%2Fsso%2Flogin&state=53185486-ef52-44a7-8304-ac4cfeb575ee&login=true&scope=openid
Valid Redirect URIs: http://awspublicip:80/* and /login/* does not WORK
I also tried the suggestion not to specify the port, i.e. http://awspublicip/*, but this still doesn't work :/
The Query: https://api.my-kc.site/auth/realms/WebApps/protocol/openid-connect/auth?response_type=code&client_id=product-app&redirect_uri=https%3A%2F%2Fawspublicip%3A0%2Fsso%2Flogin&state=8bbb01e7-ad4d-4ee1-83fa-efb7f05397cc&login=true&scope=openid
Does anyone have an idea? I've been looking at all the invalid redirect_uri posts, but nothing seems to add up.
It seems Keycloak generates different redirect URIs for the query when the initiator of the request is not localhost. Does anyone know how to avoid this?
I was having the exact same problem. My Spring Boot app sits behind nginx. I updated nginx to pass through the X-Forwarded-* headers and updated the Spring Boot config as follows.
Spring Boot YAML config:
server:
  use-forward-headers: true

keycloak:
  realm: myrealm
  public-client: true
  resource: myclient
  auth-server-url: https://sso.example.com:443/auth
  ssl-required: external
  confidential-port: 443
nginx config:
upstream app {
    server 1.2.3.4:8042 max_fails=1 fail_timeout=60s;
    server 1.2.3.5:8042 max_fails=1 fail_timeout=60s;
}

server {
    listen 443;
    server_name www.example.com;
    ...

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Port 443;
        proxy_next_upstream error timeout invalid_header http_500;
        proxy_connect_timeout 2;
        proxy_pass http://app;
    }
}
The specific change that made it work for me was adding keycloak.confidential-port. Once I added that it was no longer adding port 0 in the redirect_uri.
The only setting I have in Keycloak > Configure > Realm > Clients > my-client is Valid Redirect URIs, set to: https://www.example.com/*
Hope that helps. It took me hours to track this down and get it working.
It seems that the query parameter redirect_uri didn't match the Valid Redirect URIs setting.
redirect_uri: https%3A%2F%2Fawspublicip%3A0%2Fsso%2Flogin <- It's https
Valid Redirect URIs: http://awspublicip:80/* <- But it's http
In my case, I have a Spring Boot application that uses Keycloak as its auth provider. It used to work fine when redirecting to http://localhost:8080/*, but it didn't work when deployed, since the redirection is to https://.../*.
Adding server.forward-headers-strategy=framework to application.properties did the trick.
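For reference, a rough application.properties sketch of the same setup (property names follow the Spring Boot and Keycloak adapter conventions; the values are examples, not taken from the question):
server.forward-headers-strategy=framework
keycloak.realm=myrealm
keycloak.resource=myclient
keycloak.public-client=true
keycloak.auth-server-url=https://sso.example.com/auth
keycloak.ssl-required=external
keycloak.confidential-port=443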

trouble getting a file from node.js using nginx reverse proxy

I have set up an nginx reverse proxy to Node, essentially using the setup reproduced below:
upstream nodejs {
    server localhost:3000;
}

server {
    listen 8080;
    server_name localhost;
    root ~/workspace/test/app;

    location / {
        try_files $uri $uri/ @nodejs;
    }

    location @nodejs {
        proxy_redirect off;
        proxy_http_version 1.1;
        proxy_pass http://nodejs;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
Now all my AJAX POST requests travel just fine to Node with this setup, but afterwards I am polling for files that I cannot find when I make a client-side AJAX GET request to the Node server (via this nginx proxy).
For example, for a client-side JavaScript request like .get('Users/myfile.txt'), the browser will look for the file on localhost:8080 but won't find it, because it's actually written to localhost:3000:
http://localhost:8080/Users/myfile.txt // what the browser searches for
http://localhost:3000/Users/myfile.txt // where the file really is
How do I set up the proxy to navigate through to this file?
Okay, I got it working. The setup in the nginx.conf file posted above is just fine. This problem was never an nginx problem; the problem was in my index.js file over on the Node server.
When I got nginx to serve all the static files, I commented out the following line from index.js
app.use(express.static('Users')); // please don't comment this out thank you
It took me a while to troubleshoot my way back to this, as I was pretty wrapped up in understanding nginx. My thinking at the time was: if nginx is serving static files, why would I need Express to serve them? Without this line, however, Express won't serve any files at all, obviously.
Now, with Express serving static files properly, nginx handles all the static files from the web app, Node handles all the files from the backend, and all is good.
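For context, a minimal sketch of the relevant part of index.js (the port and directory names are assumptions matching the paths in the question):
// index.js (sketch)
const express = require('express');
const app = express();

// Serve files written under Users/, e.g. Users/myfile.txt
app.use(express.static('Users'));

app.listen(3000, () => console.log('Node app listening on port 3000'));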
Thanks to Keenan Lawrence for the guidance and AR7 for the config!

How to log real client ip in rails log when behind proxy like nginx

Problem
I have a Rails 3.2.15 with Rack 1.4.5 setup on two servers. The first server is an nginx proxy serving static assets. The second server is a Unicorn serving the Rails app.
In Rails production.log I always see the nginx IP address (10.0.10.150) and not my client IP address (10.0.10.62):
Started GET "/" for 10.0.10.150 at 2013-11-21 13:51:05 +0000
I want to have the real client IP in logs.
Our Setup
The HTTP headers X-Forwarded-For and X-Real-IP are set up correctly in nginx, and I have defined 10.0.10.62 as not being a trusted proxy address by setting config.action_dispatch.trusted_proxies = /^127\.0\.0\.1$/ in config/environments/production.rb, thanks to another answer. I can check it is working because I log them in the application controller:
in app/controllers/application_controller.rb:
class ApplicationController < ActionController::Base
  before_filter :log_ips

  def log_ips
    logger.info("request.ip = #{request.ip} and request.remote_ip = #{request.remote_ip}")
  end
end
in production.log:
request.ip = 10.0.10.150 and request.remote_ip = 10.0.10.62
Investigation
When investigating, I saw that Rails::Rack::Logger is responsible for logging the IP address:
def started_request_message(request)
  'Started %s "%s" for %s at %s' % [
    request.request_method,
    request.filtered_path,
    request.ip,
    Time.now.to_default_s ]
end
request is an instance of ActionDispatch::Request. It inherits from Rack::Request, which defines how the IP address is computed:
def trusted_proxy?(ip)
  ip =~ /^127\.0\.0\.1$|^(10|172\.(1[6-9]|2[0-9]|30|31)|192\.168)\.|^::1$|^fd[0-9a-f]{2}:.+|^localhost$/i
end

def ip
  remote_addrs = @env['REMOTE_ADDR'] ? @env['REMOTE_ADDR'].split(/[,\s]+/) : []
  remote_addrs.reject! { |addr| trusted_proxy?(addr) }
  return remote_addrs.first if remote_addrs.any?

  forwarded_ips = @env['HTTP_X_FORWARDED_FOR'] ? @env['HTTP_X_FORWARDED_FOR'].strip.split(/[,\s]+/) : []

  if client_ip = @env['HTTP_CLIENT_IP']
    # If forwarded_ips doesn't include the client_ip, it might be an
    # ip spoofing attempt, so we ignore HTTP_CLIENT_IP
    return client_ip if forwarded_ips.include?(client_ip)
  end

  return forwarded_ips.reject { |ip| trusted_proxy?(ip) }.last || @env["REMOTE_ADDR"]
end
The forwarded IP addresses are filtered with trusted_proxy?. Because our nginx server uses a public IP address and not a private one, Rack::Request#ip thinks it is not a proxy but the real client IP trying to do some IP spoofing. That's why I see the nginx IP address in my logs.
In the log excerpts, the client and servers have 10.0.10.x IP addresses because I am using virtual machines to reproduce our production environment.
Our current solution
To circumvent this behavior, I wrote a little Rack middleware located in app/middleware/remote_ip_logger.rb:
class RemoteIpLogger
  def initialize(app)
    @app = app
  end

  def call(env)
    remote_ip = env["action_dispatch.remote_ip"]
    Rails.logger.info "Remote IP: #{remote_ip}" if remote_ip
    @app.call(env)
  end
end
And I insert it just after the ActionDispatch::RemoteIp middleware
config.middleware.insert_after ActionDispatch::RemoteIp, "RemoteIpLogger"
This way I can see the real client IP in logs:
Started GET "/" for 10.0.10.150 at 2013-11-21 13:59:06 +0000
Remote IP: 10.0.10.62
I feel a little uncomfortable with this solution. nginx+unicorn is a common setup for Rails applications. If I have to log the client IP myself, it means I have missed something. Is it because the nginx server is using a public IP address when communicating with the Rails server? Is there a way to customize the trusted_proxy? method of Rack::Request?
EDITED: added nginx configuration and an HTTP request capture
/etc/nginx/sites-enabled/site.example.com.conf:
server {
    server_name site.example.com;
    listen 80;

    location ^~ /assets/ {
        root /home/deployer/site/shared;
        expires 30d;
    }

    location / {
        root /home/deployer/site/current/public;
        try_files $uri @proxy;
    }

    location @proxy {
        access_log /var/log/nginx/site.access.log combined_proxy;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_read_timeout 300;
        proxy_pass http://rails.example.com:8080;
    }
}
The nginx server is 10.0.10.150, the Rails server is 10.0.10.190, and my machine is 10.0.10.62. When doing curl http://10.0.10.150/ from my machine, a tcpdump port 8080 -i eth0 -Aq -s 0 on the Rails server shows these request HTTP headers:
GET / HTTP/1.0
X-Forwarded-For: 10.0.10.62
X-Forwarded-Proto: http
Host: 10.0.10.150
Connection: close
User-Agent: curl/7.29.0
Accept: */*
And the Rails log /home/deployer/site/current/log/production.log (the Remote IP and request.ip lines are added by custom code):
Started GET "/" for 10.0.10.150 at 2013-11-22 08:01:17 +0000
Remote IP: 10.0.10.62
Processing by Devise::RegistrationsController#new as */*
request.ip = 10.0.10.150 and request.remote_ip = 10.0.10.62
Rendered devise/shared/_links.erb (0.1ms)
Rendered devise/registrations/new.html.erb within layouts/application (2.3ms)
Rendered layouts/_landing.html.erb (1.5ms)
Completed 200 OK in 8.9ms (Views: 7.5ms | ActiveRecord: 0.0ms)
In my opinion, your current approach is the only sane one. The only step that is missing is overwriting the IP address in env.
The typical REMOTE_ADDR seldom holds the correct IP if you have any number of layers of proxies and load balancers and whatnot; you're not unique in this respect. Each layer potentially adds or changes remote-IP-related headers, and you cannot assume that each of those fields necessarily corresponds to a single IP address. Some will push or unshift an IP onto a list instead.
There is only one way to know for sure which field holds the correct value and how, and that is to dive in there and look. You've evidently done that already. Now, just overwrite env['REMOTE_ADDR'] with its correct value using your Rack middleware. There's little point in letting any piece of code you didn't write log or process the wrong IP address, as is happening now.
(This being Ruby, you could also monkey patch Rack::Request, of course...)
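A rough sketch of that idea, extending the middleware from the question (a sketch, not a drop-in; it assumes you have verified that action_dispatch.remote_ip holds the value you want, and only code downstream of this middleware sees the overwritten address):
class RemoteIpLogger
  def initialize(app)
    @app = app
  end

  def call(env)
    remote_ip = env["action_dispatch.remote_ip"]
    if remote_ip
      # Overwrite REMOTE_ADDR so everything downstream of this middleware
      # logs and processes the resolved client address.
      env["REMOTE_ADDR"] = remote_ip.to_s
      Rails.logger.info "Remote IP: #{remote_ip}"
    end
    @app.call(env)
  end
end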
For colorful reading that illustrates the varying degrees to which exotic setups can mess up attempts at finding a client's real IP address, see for instance the unending discussions that occurred about this for WordPress:
https://core.trac.wordpress.org/ticket/9235
https://core.trac.wordpress.org/ticket/4198
https://core.trac.wordpress.org/ticket/4602
It's PHP, but the gist of the points raised applies equally well to Ruby. (Note that they're unresolved as I write this, too, and that they've been around for aeons.)
This seemed to do the trick for me. (set in nginx config)
proxy_set_header CLIENT_IP $remote_addr;
I ran into the same issue: a subset of our web clients access our Rails app (Rails 4.2.7) on our private network, and we get the wrong IP reported. So I thought I'd add what worked for us to resolve the problem.
I found Rails issue 5223, which provided a better workaround than double-logging the IP like the question does. So we monkey-patch Rack to remove the private network from the list of trusted proxies, like so:
module Rack
  class Request
    def trusted_proxy?(ip)
      ip =~ /^127\.0\.0\.1$/
    end
  end
end
That addresses the controller logging the wrong IP; the other half of the fix is to ensure that request.remote_ip is handled correctly. To do so, add the following to your config/environments/production.rb:
config.action_dispatch.trusted_proxies = [IPAddr.new('127.0.0.1')]
I was facing the same problem. To fix it I referred to your implementation; just the line below in config/application.rb fixed it.
config.middleware.insert_before Rails::Rack::Logger, 'RemoteIpLogger'
No need to write extra loggers; you will see the actual client IP in the first row itself.
Started GET "/" for 10.0.10.62 at 2013-11-22 08:01:17 +0000
And in app/middleware/remote_ip_logger.rb. My HTTP_X_FORWARDED_FOR contains a list of IPs, and the first one is the actual client's IP.
class RemoteIpLogger
  def initialize(app)
    @app = app
  end

  def call(env)
    if env["HTTP_X_FORWARDED_FOR"]
      remote_ip = env["HTTP_X_FORWARDED_FOR"].split(",")[0]
      env['REMOTE_ADDR'] = env["action_dispatch.remote_ip"] = env["HTTP_X_FORWARDED_FOR"] = remote_ip
      @app.call(env)
    else
      @app.call(env)
    end
  end
end
Short and simple:
request.remote_ip

easy way to make an elasticsearch server read-only

It's really easy to just upload a bunch of JSON data to an Elasticsearch server to get a basic query API, with lots of options.
I'd just like to know if there's an easy way to publish it all while preventing people from modifying it.
With the default settings, the server is open to receiving a DELETE or PUT HTTP message that would modify the data.
Is there some kind of setting to configure it to be read-only? Or shall I configure some kind of HTTP proxy to achieve that?
(I'm an elasticsearch newbie)
If you want to expose the Elasticsearch API as read-only, I think the best way is to put Nginx in front of it, and deny all requests except GET. An example configuration looks like this:
# Run me with:
#
#   $ nginx -c path/to/this/file
#
# All requests except GET are denied.

worker_processes 1;
pid nginx.pid;

events {
    worker_connections 1024;
}

http {
    server {
        listen 8080;
        server_name search.example.com;

        error_log elasticsearch-errors.log;
        access_log elasticsearch.log;

        location / {
            if ($request_method !~ "GET") {
                return 403;
                break;
            }

            proxy_pass http://localhost:9200;
            proxy_redirect off;

            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
        }
    }
}
Then:
curl -i -X GET http://localhost:8080/_search -d '{"query":{"match_all":{}}}'
HTTP/1.1 200 OK
curl -i -X POST http://localhost:8080/test/test/1 -d '{"foo":"bar"}'
HTTP/1.1 403 Forbidden
curl -i -X DELETE http://localhost:8080/test/
HTTP/1.1 403 Forbidden
Note that a malicious user could still mess up your server, for instance by sending incorrect script payloads that make Elasticsearch get stuck, but for most purposes this approach would be fine.
If you need more control over the proxying, you can either use a more complex Nginx configuration or write a dedicated proxy, e.g. in Ruby or Node.js.
See this example for a more complex Ruby-based proxy.
You can set a read-only flag on your index; this does limit some operations, though, so you will need to see if that's acceptable.
curl -XPUT http://<ip-address>:9200/<index name>/_settings -d'
{
    "index": {
        "blocks": {
            "read_only": true
        }
    }
}'
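If you later need to make the index writable again, the same settings endpoint can be used to set the flag back to false (a sketch, reusing the same placeholders as above):
curl -XPUT http://<ip-address>:9200/<index name>/_settings -d'
{
    "index": {
        "blocks": {
            "read_only": false
        }
    }
}'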
As mentioned in one of the other answers, really you should have ES running in a trusted environment, where you can control access to it.
More information on index settings here : http://www.elasticsearch.org/guide/reference/api/admin-indices-update-settings/
I know it's an old topic. I encountered the same problem: I put ES behind Nginx in order to make it read-only but still allow Kibana to access it.
The only ES request that Kibana needs in my case is "url_public/_all/_search".
So I allowed it in my Nginx conf.
Here is my conf file:
server {
    listen port_es;
    server_name ip_es;

    rewrite ^/(.*) /$1 break;
    proxy_ignore_client_abort on;
    proxy_redirect url_es url_public;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;

    location ~ ^/(_all/_search) {
        limit_except GET POST OPTIONS {
            deny all;
        }
        proxy_pass url_es;
    }

    location / {
        limit_except GET {
            deny all;
        }
        proxy_pass url_es;
    }
}
So only GET requests are allowed, unless the request is _all/_search. It is simple to add other requests if needed, for example with another location block like the one sketched below.
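For instance, to also expose the cluster health endpoint (a hypothetical addition, reusing the same url_es placeholder as above), another location block could look like:
location ~ ^/(_cluster/health) {
    limit_except GET {
        deny all;
    }
    proxy_pass url_es;
}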
I use this elasticsearch plugin:
https://github.com/sscarduzio/elasticsearch-readonlyrest-plugin
It is very simple and easy to install and configure. The GitHub project page has a config example that shows how to limit requests to the HTTP GET method only, which will not change any data in Elasticsearch. If you need only whitelisted IPs (or none) to be able to use the other methods (PUT/DELETE/etc.) that can change data, then it has you covered as well.
Something like this goes into your elasticsearch config file (/etc/elasticsearch/elasticsearch.yml or equivalent), adapted from the GitHub page:
readonlyrest:
  enable: true
  response_if_req_forbidden: Sorry, your request is forbidden

  # Default policy is to forbid everything, let's define a whitelist
  access_control_rules:

    # from these IP addresses, accept any method, any URI, any HTTP body
    #- name: full access to internal servers
    #  type: allow
    #  hosts: [127.0.0.1, 10.0.0.10]

    # From external hosts, accept only GET and OPTION methods only if the HTTP request body is empty
    - name: restricted access to all other hosts
      type: allow
      methods: [OPTIONS,GET]
      maxBodyLength: 0
Elasticsearch is meant to be used in a trusted environment and by itself doesn't have any access control mechanism. So the best way to deploy Elasticsearch is with a web server in front of it that is responsible for controlling access and the type of queries that can reach Elasticsearch. That said, it's possible to limit access to Elasticsearch by using the elasticsearch-jetty plugin.
With either Elastic or Solr, it's not a good idea to depend on the search engine for your security. You should be using security in your container, or even putting the container behind something really bulletproof like Apache HTTPD, and then setting up the security to forbid the things you want to forbid.
If you have a public-facing ES instance behind nginx which is updated internally, these blocks should make it read-only and only allow _search endpoints:
limit_except GET POST OPTIONS {
    allow 127.0.0.1;
    deny all;
}

if ($request_uri !~ .*search.*) {
    set $sc fail;
}

if ($remote_addr = 127.0.0.1) {
    set $sc pass;
}

if ($sc = fail) {
    return 404;
}
