I've got a problem with browser authentication on Safari using Capybara/Selenium.
I'm using this code to authenticate:
visit "https://#{ENV['AUTH_USERNAME']}:#{ENV['AUTH_PASSWORD']}#my-staging-app.heroku.com"
This works just fine on Chrome and FF but not on Safari.
Any ideas how to bypass this?
Okay, I've found a solution for this. I had to use a reverse proxy (e.g. Nginx) and send the proper headers :)
Here is how I did it:
In this example I'll be using the credentials login: admin and password: secret123.
Go to https://www.base64encode.org and encode your credentials admin:secret123.
In this example it's YWRtaW46c2VjcmV0MTIz
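If you'd rather not paste credentials into a third-party website, a quick local alternative (a sketch, assuming a Unix-like shell with base64 available) is:
# -n keeps a trailing newline out of the encoded value
echo -n 'admin:secret123' | base64
# => YWRtaW46c2VjcmV0MTIz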
brew install nginx
sudo vim /usr/local/etc/nginx/nginx.conf
Paste this code there:
worker_processes  1;

events {
    worker_connections  1024;
}

http {
    server {
        listen       8080;
        server_name  localhost;

        location / {
            proxy_pass https://your_app.herokuapp.com;
            proxy_set_header Authorization "Basic YWRtaW46c2VjcmV0MTIz";
        }
    }
}
Change proxy_pass to match your app URL, and change proxy_set_header to Authorization "Basic <your_encoded_creds>".
Then: brew services start nginx
From now on, when you hit http://localhost:8080 you'll be proxied through to your page and logged in.
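To quickly check that the proxy is injecting the header, you can hit it with curl (a sketch; adjust the path to something your app serves):
curl -i http://localhost:8080/
# A 200 from the app (rather than a 401 Unauthorized challenge) means the Authorization header is being added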
I installed nginx using brew install nginx
I started nginx using brew services start nginx
I changed the /usr/local/etc/nginx/nginx.conf file to add new locations, and I changed the user to mobi staff.
I added:
location /p1/ {
    proxy_pass http://127.0.0.1:5000/api/product;
}
location /p2/ {
    proxy_pass http://www.google.com;
}
Neither of the proxy_pass directives is working. I added the google.com one to test whether it would work at all, but when I go to localhost:8080/p1/ or localhost:8080/p2/ I get a 404 Not Found. How do I fix this so I can set up a reverse proxy that will go to those URLs?
I figured it out. I moved the
location /p1/ {
    proxy_pass http://127.0.0.1:5000/api/product;
}
location /p2/ {
    proxy_pass http://www.google.com;
}
code to be just inside the server block:
server {
    listen       8080;
    server_name  localhost;

    location /p1/ {
        proxy_pass http://127.0.0.1:5000/api/product;
    }
    location /p2/ {
        proxy_pass http://www.google.com;
    }
}
and then instead of running
sudo brew services restart nginx
I ran
brew services restart nginx
and it works, I get proxied to the URLs now!
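If you want to sanity-check a change like this without a browser, something along these lines should work (assuming the paths from the config above):
# validate the config, restart, then hit both locations
nginx -t && brew services restart nginx
curl -i http://localhost:8080/p1/
curl -i http://localhost:8080/p2/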
I've got my Keycloak server deployed on an AWS EC2 instance behind a reverse proxy, and my frontend client (a Spring Boot app) sits on a different EC2 instance.
Now I get an Invalid redirect_uri error, although it works when the front client is on localhost and Keycloak is on AWS, i.e.
Keycloak is reachable under: http://api.my-kc.site/
Valid Redirect URIs: http://localhost:8012/* and /login/* WORKS
The Query: https://api.my-kc.site/auth/realms/WebApps/protocol/openid-connect/auth?response_type=code&client_id=product-app&redirect_uri=http%3A%2F%2Flocalhost%3A8012%2Fsso%2Flogin&state=53185486-ef52-44a7-8304-ac4cfeb575ee&login=true&scope=openid
Valid Redirect URIs: http://awspublicip:80/* and /login/* does not WORK
I also tried the suggestion not to specify the port, i.e. http://awspublicip/*, but this still doesn't work :/
The Query: https://api.my-kc.site/auth/realms/WebApps/protocol/openid-connect/auth?response_type=code&client_id=product-app&redirect_uri=https%3A%2F%2Fawspublicip%3A0%2Fsso%2Flogin&state=8bbb01e7-ad4d-4ee1-83fa-efb7f05397cc&login=true&scope=openid
Does anyone have an idea? I've been looking at all the Invalid redirect_uri posts, but nothing seems to add up.
It seems Keycloak generates different redirect URIs for the query when the initiator of the request is not localhost. Does anyone know how to avoid this?
I was having the exact same problem. My Spring Boot app sits behind nginx. I updated nginx to pass through the X-Forwarded-* headers and updated the Spring Boot config with:
Spring Boot YAML config:
server:
  use-forward-headers: true
keycloak:
  realm: myrealm
  public-client: true
  resource: myclient
  auth-server-url: https://sso.example.com:443/auth
  ssl-required: external
  confidential-port: 443
nginx config:
upstream app {
    server 1.2.3.4:8042 max_fails=1 fail_timeout=60s;
    server 1.2.3.5:8042 max_fails=1 fail_timeout=60s;
}

server {
    listen       443;
    server_name  www.example.com;
    ...

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Port 443;
        proxy_next_upstream error timeout invalid_header http_500;
        proxy_connect_timeout 2;
        proxy_pass http://app;
    }
}
The specific change that made it work for me was adding keycloak.confidential-port. Once I added that it was no longer adding port 0 in the redirect_uri.
The only setting I have in Keycloak > Configure > Realm > Clients > my-client is Valid Redirect URIs, set to: https://www.example.com/*
Hope that helps. It took me hours to track this down and get it working.
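One way to confirm it (a sketch, reusing the example hostname above): request the adapter's login path and inspect the redirect it issues to Keycloak; the redirect_uri parameter should now carry the public https host rather than a URI with port 0.
curl -sI https://www.example.com/sso/login | grep -i '^location:'
# Expect ...&redirect_uri=https%3A%2F%2Fwww.example.com%2Fsso%2Flogin&...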
It seems that the query parameter "redirect_uri" didn't match the setting of Valid Redirect URIs.
redirect_uri: https%3A%2F%2Fawspublicip%3A0%2Fsso%2Flogin <- It's https
Valid Redirect URIs: http://awspublicip:80/* <- But it's http
In my case, I have a Spring Boot application that uses Keycloak as the auth provider. It used to work fine when redirecting to http://localhost:8080/*, but didn't work when deployed, since the redirection is to https://.../*.
Adding server.forward-headers-strategy=framework to application.properties did the trick.
I have an nginx server which I am using as a forward proxy. I want to add a layer of authentication to the architecture, and I am using Lua for this.
I am using the https://github.com/bungle/lua-resty-session module to enable sessions in Lua.
local session = require "resty.session".open{ cookie = { domain = cookie_domain } }

-- Read some data
if session.present then
    ngx.log(ngx.ERR, "Session -- " .. session.id)
end

if not session.started then
    session:start()
    ngx.log(ngx.ERR, "Started -- ")
end
After each request received on the server, I get the log message
Started --
Server configuration:
server {
    listen       80;
    server_name  {SERVER_IP};

    # tons of pagespeed configuration

    location / {
        # basic authentication
        ##auth_basic "Restricted";
        ##auth_basic_user_file {PATH_FOR_HTPASS_FILE};

        access_by_lua_file {PATH_FOR_LUA_FILE};

        # cache name
        proxy_cache browser_cache;

        resolver 8.8.8.8;

        # app1 reverse proxy follow
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://$http_host$uri$is_args$args;
    }
}
The only issue I can see is the cookie_domain: the server does not have a domain pointed at it, and I am passing the IP address of the server as cookie_domain. I am not able to figure out the cause of the issue.
I am the author of that component, so I'll give you a few answers. First, the reason you always get Started -- logged is that session.started will only be set to true if you start the session. Here you only open the session, so the line:
if not session.started then
...
end
will always be true.
open and start differ in that open will not try to renew the cookie if it is about to expire, and open will not start a new session if one is not present (session.present). Basically, you use open only when you don't want to auto-renew cookies and you only need read-only access to the session.
As for what may cause the problem with reconnecting the session: I suspect the client may not be sending the cookie back, possibly because of some cookie attributes. Have you tried not specifying the domain?
Example Nginx Config:
server {
    listen       8090;
    server_name  127.0.0.1;

    location / {
        access_by_lua_block {
            local session = require "resty.session".open{
                cookie = { domain = "127.0.0.1" }
            }
            if session.present then
                ngx.log(ngx.ERR, "Session -- " .. ngx.encode_base64(session.id))
            else
                session:start()
                ngx.log(ngx.ERR, "Started -- " .. ngx.encode_base64(session.id))
            end
        }
        content_by_lua_block {
            ngx.say "Hello"
        }
    }
}
Now open a browser with the URL http://127.0.0.1:8090/.
Server will send you this header:
Set-Cookie:
session=acYmlSsZsK8pk5dPMu8Cow..|
1489250635|
lXibGK3hmR1JLPG61IOsdA..|
RdUK16cMz6c3tDGjonNahFUCpyY.;
Domain=127.0.0.1;
Path=/;
SameSite=Lax;
HttpOnly
And this will be logged in your Nginx error.log:
2017/03/11 17:43:55 [error] 1100#0: *2
[lua] access_by_lua(nginx.conf:21):7:
Started -- acYmlSsZsK8pk5dPMu8Cow==,
client: 127.0.0.1,
server: 127.0.0.1,
request: "GET / HTTP/1.1",
host: "127.0.0.1:8090"
Just what we wanted. Now refresh the browser by going to same url (F5 on Windows, CMD-R on Mac). Now the client will send this header to the server:
Cookie: session=acYmlSsZsK8pk5dPMu8Cow..|
1489250635|
lXibGK3hmR1JLPG61IOsdA..|
RdUK16cMz6c3tDGjonNahFUCpyY.
Everything still just fine. And this gets logged to Nginx error.log:
2017/03/11 17:51:44 [error] 1100#0: *3
[lua] access_by_lua(nginx.conf:21):4:
Session -- acYmlSsZsK8pk5dPMu8Cow==,
client: 127.0.0.1,
server: 127.0.0.1,
request: "GET / HTTP/1.1",
host: "127.0.0.1:8090"
See, it didn't log the Started here.
Please also read this:
https://github.com/bungle/lua-resty-session#notes-about-turning-lua-code-cache-off
If you have lua_code_cache off; then you need to set the secret, otherwise a different secret will be regenerated on every request. That means we will never be able to attach to a previously opened session, which means Started will be logged on every request.
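For reference, a minimal sketch of what that looks like in nginx.conf, assuming the module's documented $session_secret variable (the secret value below is just a placeholder):
http {
    lua_code_cache off;

    server {
        listen 8090;
        # pin the secret so sessions survive between requests while the code cache is off
        set $session_secret 623q4hR325t36VsEZxkH24;
        ...
    }
}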
One additional note:
In general you shouldn't set the domain if you are accessing a (single) IP address, because browsers will by default send the cookies back only to that same IP address, so passing the domain attribute in the cookie doesn't really add anything.
I have set up an nginx reverse proxy to node, essentially using this setup, reproduced below:
upstream nodejs {
    server localhost:3000;
}

server {
    listen       8080;
    server_name  localhost;
    root         ~/workspace/test/app;

    location / {
        try_files $uri $uri/ @nodejs;
    }

    location @nodejs {
        proxy_redirect off;
        proxy_http_version 1.1;
        proxy_pass http://nodejs;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
Now all my AJAX POST requests travel just fine to node with this setup, but afterwards I am polling for files that I cannot find when I make a client-side AJAX GET request to the node server (via this nginx proxy).
For example, for a client-side JavaScript request like .get('Users/myfile.txt'), the browser will look for the file on localhost:8080 but won't find it, because it's actually written to localhost:3000:
http://localhost:8080/Users/myfile.txt // what the browser searches for
http://localhost:3000/Users/myfile.txt // where the file really is
How do I set up the proxy to navigate through to this file?
Okay, I got it working. The setup in the nginx.conf file posted above is just fine; this problem was never an nginx problem. The problem was in my index.js file over on the node server.
While I was getting nginx to serve all the static files, I had commented out the following line in index.js:
app.use(express.static('Users')); // please don't comment this out thank you
It took me a while to troubleshoot my way back to this, as I was pretty wrapped up in understanding nginx. My thinking at the time was: if nginx is serving static files, why would I need Express to serve them? Without this line, however, Express obviously won't serve any files at all.
Now, with Express serving static files properly, nginx handles all static files from the web app, node handles all the files from the backend, and all is good.
Thanks to Keenan Lawrence for the guidance and AR7 for the config!
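For completeness, if you did want nginx rather than Express to serve those files directly, a location block roughly like this could do it; the alias path is an assumption about where the node app writes the files:
location /Users/ {
    # serve the node app's output straight from disk instead of proxying
    alias /path/to/workspace/test/app/Users/;
}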
It's really easy to just upload a bunch of JSON data to an Elasticsearch server to get a basic query API, with lots of options.
I'd just like to know if there's an easy way to publish it all while preventing people from modifying it.
With the default settings, the server is open to receiving a DELETE or PUT HTTP request that would modify the data.
Is there some kind of setting to configure it to be read-only? Or shall I configure some kind of HTTP proxy to achieve it?
(I'm an Elasticsearch newbie.)
If you want to expose the Elasticsearch API as read-only, I think the best way is to put Nginx in front of it, and deny all requests except GET. An example configuration looks like this:
# Run me with:
#
# $ nginx -c path/to/this/file
#
# All requests except GET are denied.

worker_processes  1;
pid               nginx.pid;

events {
    worker_connections  1024;
}

http {
    server {
        listen       8080;
        server_name  search.example.com;

        error_log  elasticsearch-errors.log;
        access_log elasticsearch.log;

        location / {
            if ($request_method !~ "GET") {
                return 403;
                break;
            }
            proxy_pass http://localhost:9200;
            proxy_redirect off;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
        }
    }
}
Then:
curl -i -X GET http://localhost:8080/_search -d '{"query":{"match_all":{}}}'
HTTP/1.1 200 OK
curl -i -X POST http://localhost:8080/test/test/1 -d '{"foo":"bar"}'
HTTP/1.1 403 Forbidden
curl -i -X DELETE http://localhost:8080/test/
HTTP/1.1 403 Forbidden
Note that a malicious user could still mess up your server, for instance by sending incorrect script payloads which could make Elasticsearch get stuck, but for most purposes this approach would be fine.
If you need more control over the proxying, you can either use a more complex Nginx configuration or write a dedicated proxy, e.g. in Ruby or Node.js.
See this example for a more complex Ruby-based proxy.
You can set a read_only flag on your index. This does limit some operations, though, so you will need to see if that's acceptable.
curl -XPUT http://<ip-address>:9200/<index name>/_settings -d '
{
    "index": {
        "blocks": {
            "read_only": true
        }
    }
}'
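To make the index writable again later, the same settings endpoint works with the flag flipped back (same placeholders as above):
curl -XPUT http://<ip-address>:9200/<index name>/_settings -d '
{
    "index": {
        "blocks": {
            "read_only": false
        }
    }
}'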
As mentioned in one of the other answers, really you should have ES running in a trusted environment, where you can control access to it.
More information on index settings here : http://www.elasticsearch.org/guide/reference/api/admin-indices-update-settings/
I know it's an old topic. I encountered the same problem: I put ES behind Nginx in order to make it read-only but still allow Kibana to access it.
The only request to ES that Kibana needs in my case is "url_public/_all/_search", so I allowed it in my Nginx conf.
Here is my conf file:
server {
    listen       port_es;
    server_name  ip_es;

    rewrite ^/(.*) /$1 break;
    proxy_ignore_client_abort on;
    proxy_redirect url_es url_public;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;

    location ~ ^/(_all/_search) {
        limit_except GET POST OPTIONS {
            deny all;
        }
        proxy_pass url_es;
    }

    location / {
        limit_except GET {
            deny all;
        }
        proxy_pass url_es;
    }
}
So only GET requests are allowed, unless the request is _all/_search. It is simple to allow other requests if needed.
I use this elasticsearch plugin:
https://github.com/sscarduzio/elasticsearch-readonlyrest-plugin
It is very simple and easy to install and configure. The GitHub project page has a config example that shows how to limit requests to the HTTP GET method only, which will not change any data in Elasticsearch. If you need only whitelisted IPs (or none) to be able to use other methods (PUT/DELETE/etc.) that can change data, then it has got you covered as well.
Something like this goes into your elasticsearch config file (/etc/elasticsearch/elasticsearch.yml or equivalent), adapted from the GitHub page:
readonlyrest:
  enable: true
  response_if_req_forbidden: Sorry, your request is forbidden

  # Default policy is to forbid everything, let's define a whitelist
  access_control_rules:

  # from these IP addresses, accept any method, any URI, any HTTP body
  #- name: full access to internal servers
  #  type: allow
  #  hosts: [127.0.0.1, 10.0.0.10]

  # From external hosts, accept only GET and OPTION methods only if the HTTP request body is empty
  - name: restricted access to all other hosts
    type: allow
    methods: [OPTIONS,GET]
    maxBodyLength: 0
Elasticsearch is meant to be used in a trusted environment and by itself doesn't have any access control mechanism. So, the best way to deploy elasticsearch is with a web server in front of it that would be responsible for controlling access and type of the queries that can reach elasticsearch. Saying that, it's possible to limit access to elasticsearch by using elasticsearch-jetty plugin.
With either Elastic or Solr, it's not a good idea to depend on the search engine for your security. You should be using security in your container, or even putting the container behind something really bulletproof like Apache HTTPD, and then setting up the security to forbid the things you want to forbid.
If you have a public-facing ES instance behind nginx which is updated internally, these blocks should make it read-only and only allow _search endpoints:
limit_except GET POST OPTIONS {
    allow 127.0.0.1;
    deny  all;
}

if ($request_uri !~ .*search.*) {
    set $sc fail;
}
if ($remote_addr = 127.0.0.1) {
    set $sc pass;
}
if ($sc = fail) {
    return 404;
}
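A minimal sketch of how those fragments might sit together inside a location block (the listen port and upstream address are assumptions):
server {
    listen 8080;

    location / {
        # only local clients may use methods other than GET/POST/OPTIONS
        limit_except GET POST OPTIONS {
            allow 127.0.0.1;
            deny  all;
        }

        # non-local requests are only allowed to hit *search* URIs
        if ($request_uri !~ .*search.*) {
            set $sc fail;
        }
        if ($remote_addr = 127.0.0.1) {
            set $sc pass;
        }
        if ($sc = fail) {
            return 404;
        }

        proxy_pass http://127.0.0.1:9200;
    }
}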