Easy way to make an Elasticsearch server read-only

It's really easy to upload a bunch of JSON data to an Elasticsearch server and get a basic query API with lots of options.
I'd just like to know if there's an easy way to publish it all while preventing people from modifying it.
With the default settings, the server is open to receiving a DELETE or PUT HTTP request that would modify the data.
Is there some kind of setting to configure it as read-only? Or should I configure some kind of HTTP proxy to achieve this?
(I'm an Elasticsearch newbie.)

If you want to expose the Elasticsearch API as read-only, I think the best way is to put Nginx in front of it, and deny all requests except GET. An example configuration looks like this:
# Run me with:
#
# $ nginx -c path/to/this/file
#
# All requests except GET are denied.
worker_processes 1;
pid nginx.pid;

events {
    worker_connections 1024;
}

http {
    server {
        listen 8080;
        server_name search.example.com;

        error_log elasticsearch-errors.log;
        access_log elasticsearch.log;

        location / {
            if ($request_method !~ "GET") {
                return 403;
                break;
            }
            proxy_pass http://localhost:9200;
            proxy_redirect off;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
        }
    }
}
Then:
curl -i -X GET http://localhost:8080/_search -d '{"query":{"match_all":{}}}'
HTTP/1.1 200 OK
curl -i -X POST http://localhost:8080/test/test/1 -d '{"foo":"bar"}'
HTTP/1.1 403 Forbidden
curl -i -X DELETE http://localhost:8080/test/
HTTP/1.1 403 Forbidden
Note that a malicious user could still mess up your server, for instance by sending malformed script payloads that make Elasticsearch get stuck, but for most purposes this approach would be fine.
If you need more control over the proxying, you can either use a more complex Nginx configuration or write a dedicated proxy, e.g. in Ruby or Node.js.
See this example for a more complex Ruby-based proxy.

You can set a read-only flag on your index. This does limit some operations, though, so you will need to see if that's acceptable.
curl -XPUT http://<ip-address>:9200/<index name>/_settings -d '
{
    "index": {
        "blocks": {
            "read_only": true
        }
    }
}'
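A quick sketch of how you might verify the block and undo it later, using the same placeholder host and index name as above (the doc type "test" and id "1" are made up for illustration):
# Writes should now be rejected with an index read-only block error
curl -i -XPUT http://<ip-address>:9200/<index name>/test/1 -d '{"foo":"bar"}'

# Reads still work
curl -i -XGET http://<ip-address>:9200/<index name>/_search

# To make the index writable again, clear the flag
curl -XPUT http://<ip-address>:9200/<index name>/_settings -d '
{"index": {"blocks": {"read_only": false}}}'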
As mentioned in one of the other answers, you should really have ES running in a trusted environment where you can control access to it.
More information on index settings here: http://www.elasticsearch.org/guide/reference/api/admin-indices-update-settings/

I know it's an old topic. I ran into the same problem: I put ES behind Nginx in order to make it read-only, but still allow Kibana to access it.
The only ES request that Kibana needs in my case is "url_public/_all/_search", so I allowed it in my Nginx conf.
Here is my conf file:
server {
    listen port_es;
    server_name ip_es;

    rewrite ^/(.*) /$1 break;
    proxy_ignore_client_abort on;
    proxy_redirect url_es url_public;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;

    location ~ ^/(_all/_search) {
        limit_except GET POST OPTIONS {
            deny all;
        }
        proxy_pass url_es;
    }

    location / {
        limit_except GET {
            deny all;
        }
        proxy_pass url_es;
    }
}
So only GET requests are allowed, except for _all/_search, which also accepts POST and OPTIONS. It is simple to allow other requests if needed.
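A quick way to sanity-check the behaviour, where url_public stands for whatever host/port the proxy listens on (as in the conf above) and some_index is a made-up index name:
# Kibana-style search is allowed (GET and POST both pass limit_except)
curl -i -XPOST http://url_public/_all/_search -d '{"query":{"match_all":{}}}'

# Any other modifying request is denied by the proxy (expect 403)
curl -i -XDELETE http://url_public/some_index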

I use this elasticsearch plugin:
https://github.com/sscarduzio/elasticsearch-readonlyrest-plugin
It is very simple, easy to install and configure. The GitHub project page has a config example that shows how to limit requests to the HTTP GET method only, which will not change any data in Elasticsearch. If you want only whitelisted IP addresses (or none at all) to be able to use other methods (PUT/DELETE/etc.) that can change data, it has you covered as well.
Something like this goes into your elasticsearch config file (/etc/elasticsearch/elasticsearch.yml or equivalent), adapted from the GitHub page:
readonlyrest:
    enable: true
    response_if_req_forbidden: Sorry, your request is forbidden

    # Default policy is to forbid everything, let's define a whitelist
    access_control_rules:

    # From these IP addresses, accept any method, any URI, any HTTP body
    #- name: full access to internal servers
    #  type: allow
    #  hosts: [127.0.0.1, 10.0.0.10]

    # From external hosts, accept only GET and OPTIONS methods, and only if the HTTP request body is empty
    - name: restricted access to all other hosts
      type: allow
      methods: [OPTIONS,GET]
      maxBodyLength: 0
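With a rule like this in place, a quick check from a non-whitelisted host might look like the following (localhost:9200 and the index/type names are made up for illustration):
# Allowed: a plain GET with no request body
curl -i -XGET http://localhost:9200/myindex/_search

# Forbidden: methods other than GET/OPTIONS (and any request with a body) are rejected by the plugin
curl -i -XPUT http://localhost:9200/myindex/mytype/1 -d '{"foo":"bar"}'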

Elasticsearch is meant to be used in a trusted environment and by itself doesn't have any access control mechanism. So the best way to deploy Elasticsearch is with a web server in front of it that is responsible for controlling access and the type of queries that can reach Elasticsearch. That said, it's possible to limit access to Elasticsearch by using the elasticsearch-jetty plugin.

With either Elastic or Solr, it's not a good idea to depend on the search engine for your security. You should be using security in your container, or even putting the container behind something really bulletproof like Apache HTTPD, and then setting up the security to forbid the things you want to forbid.

If you have a public-facing ES instance behind nginx, which is updated internally, these blocks should make it read-only and only allow _search endpoints:
limit_except GET POST OPTIONS {
    allow 127.0.0.1;
    deny all;
}

if ($request_uri !~ .*search.*) {
    set $sc fail;
}
if ($remote_addr = 127.0.0.1) {
    set $sc pass;
}
if ($sc = fail) {
    return 404;
}
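From an external client this should behave roughly as below (the hostname es.example.com and index name are placeholders):
# URIs containing "search" are proxied through
curl -i http://es.example.com/myindex/_search

# Anything else from outside returns 404 (and methods outside GET/POST/OPTIONS are denied outright)
curl -i -XPOST http://es.example.com/myindex/_refresh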

Related

How to configure ngnix, vite and laravel for use with HMR via a reverse proxy?

I've only just got into Laravel/Vue3 so I'm working off the basics. However, I have an existing Docker ecosystem that I use for local dev and an nginx reverse proxy to keep my many projects separate.
I'm having trouble getting HMR working and even more trouble finding appropriate documentation on how to configure Vite and Nginx so I can have a single HTTPS entry point in nginx and proxy back to Laravel and Vite.
The build is based on https://github.com/laravel-presets/inertia/tree/boilerplate.
For completeness, this is the package.json, just in case it changes:
{
    "private": true,
    "scripts": {
        "dev": "vite",
        "build": "vite build"
    },
    "devDependencies": {
        "@vitejs/plugin-vue": "^2.3.1",
        "@vue/compiler-sfc": "^3.2.33",
        "autoprefixer": "^10.4.5",
        "postcss": "^8.4.12",
        "tailwindcss": "^3.0.24",
        "vite": "^2.9.5",
        "vite-plugin-laravel": "^0.2.0-beta.10"
    },
    "dependencies": {
        "vue": "^3.2.31",
        "@inertiajs/inertia": "^0.11.0",
        "@inertiajs/inertia-vue3": "^0.6.0"
    }
}
To keep things simple, I'm going to try and get it working under HTTP only and deal with HTTPS later.
Because I'm running the dev server in a container, I've set server.host to 0.0.0.0 in vite.config.ts and server.hmr.clientPort to 80. This allows connections to the dev server from outside the container and, hopefully, tells the HMR client that the public port is 80 instead of the default 3000.
I've tried setting the DEV_SERVER_URL to be the same as the APP_URL so that all traffic from the public site goes to the same place. But I'm not sure what the nginx side of things should look like.
I've also tried setting the DEV_SERVER_URL to be http://0.0.0.0:3000/ so I can see what traffic is trying to be generated. This almost works, but is grossly wrong. It fails when it comes to ws://0.0.0.0/ communications, and would not be appropriate when it comes to HTTPS.
I have noticed calls to /__vite_plugin, which I'm going to assume is the default ping_url that would normally be set in config/vite.php.
Looking for guidance on which nginx locations should forward to the Laravel port and which should forward to the Vite port, and what that should look like so that web socket communication is also catered for.
I've seen discussions that Vite 3 may make this setup easier, but I'd like to deal with what is available right now.
The answer appears to be in knowing which directories to proxy to Vite and being able to isolate the web socket used for HMR.
To that end, you will want to do the following:
Ensure that your .env APP_URL and DEV_SERVER_URL match.
In your vite.config.ts, ensure that the server.host is '0.0.0.0' so that connections can be accepted from outside of the container.
In your vite.config.ts, specify a base such as '/app/' so that all HMR traffic can be isolated and redirected to the Vite server while you are running npm run dev. You may wish to use something else if that path might clash with real paths in your Laravel or Vite app, like '/_dev/' or '/_vite/'.
In your config/vite.php, set ping_url to http://localhost:3000. This allows Laravel to ping the Vite server locally, so the manifest is not used and the Vite dev server is used instead. This also assumes that ping_before_using_manifest is set to true.
Lastly, you want to configure your nginx proxy so that a number of locations are specifically proxied to the Vite server, and the rest goes to the Laravel server.
I am not an Nginx expert, so there may be a way to declare the following succinctly.
Sample Nginx server entry
# Some standard proxy variables
map $http_x_forwarded_proto $proxy_x_forwarded_proto {
    default $http_x_forwarded_proto;
    ''      $scheme;
}
map $http_x_forwarded_port $proxy_x_forwarded_port {
    default $http_x_forwarded_port;
    ''      $server_port;
}
map $http_upgrade $proxy_connection {
    default upgrade;
    ''      close;
}
map $scheme $proxy_x_forwarded_ssl {
    default off;
    https   off;
}
server {
    listen *:80;
    server_name vite-inertia-vue-app.test;

    # abridged version that does not include gzip_types, resolver, *_log and other headers

    location ^~ /resources/ {
        proxy_pass http://198.18.0.1:3000;
        include /etc/nginx/vite-inertia-vue-app.test.include;
    }

    location ^~ /@vite {
        proxy_pass http://198.18.0.1:3000;
        include /etc/nginx/vite-inertia-vue-app.test.include;
    }

    location ^~ /app/ {
        proxy_pass http://198.18.0.1:3000;
        include /etc/nginx/vite-inertia-vue-app.test.include;
    }

    location / {
        proxy_pass http://198.18.0.1:8082;
        include /etc/nginx/vite-inertia-vue-app.test.include;
    }
}
vite-inertia-vue-app.test.include, containing the common proxy settings:
proxy_read_timeout 190;
proxy_connect_timeout 3;
proxy_redirect off;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $proxy_connection;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $proxy_x_forwarded_proto;
proxy_set_header X-Forwarded-Ssl $proxy_x_forwarded_ssl;
proxy_set_header X-Forwarded-Port $proxy_x_forwarded_port;
proxy_set_header Proxy "";
My Nginx instance runs in a local Docker Swarm and I use a loopback interface (198.18.0.1) to hit open ports on my machine. Your mileage may vary. Port 3000 is for the Vite server. Port 8082 is for the Laravel server.
At some point, I may investigate using the hostname as it is declared in the docker-compose stack, though I'm not too sure how well this holds up when communicating between Docker Swarm and a regular container stack. The point would be not to have to allocate unique ports for the Laravel and Vite servers if I ended up running multiple projects at the same time.
Entry points /@vite and /resources are for when the app initially launches, and these are used by the script and link tags in the header. After that, all HMR activities use /app/.
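A rough way to confirm the routing from outside the container (the asset path under /resources/ is hypothetical; adjust it to a file that actually exists in your app):
# Should be answered by the Vite dev server (port 3000 behind the proxy)
curl -i http://vite-inertia-vue-app.test/@vite/client
curl -i http://vite-inertia-vue-app.test/resources/js/app.js

# Should be answered by Laravel (port 8082 behind the proxy)
curl -i http://vite-inertia-vue-app.test/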
The next challenge will be adding a self-signed cert, as I plan to integrate some Azure B2C sign in, but I think that may just involve updating the Nginx config to cater for TLS and update APP_URL and DEV_SERVER_URL in the .env to match.

Gitlab Client in Login Redirect Loop

I have been doing some work trying to update our gitlab servers. Somewhere along the line, something in the configuration changed and now I can't access the web client. The backend starts up correctly and when I run rake gitlab:check everything comes back as green. Same for nginx, as far as I can tell it is working correctly. When I try to go to the landing page in the browser though, I keep getting an error about 'too many redirects'.
Looking at the browser console, I can see that it is repeatedly trying to redirect to the login page until the browser gives up and throws an error. I did some looking around, and most of the answers seem to involve going to the login page directly and then changing the landing page from the admin settings. When I tried that I got the same problem. Apparently any page on my domain wants to redirect to the login, leaving me with an infinite loop.
I'm also seeing some potentially related errors in the nginx logs. When I try to hit the sign in page the error log is showing
open() "/usr/local/Cellar/nginx/1.15.9/html/users/sign_in" failed (2: No such file or directory)
Is that even the correct directory for the gitlab html views? If not how do I change it?
Any help on this would be greatly appreciated.
Environment:
OSX 10.11.6 El Capitan
Gitlab 8.11
nginx 1.15.9
My config files. I have removed some commented out lines to save on space.
nginx.config
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;
    include servers/*;
}
nginx/servers/gitlab
upstream gitlab-workhorse {
    server unix:/Users/git/gitlab/tmp/sockets/gitlab-workhorse.socket fail_timeout=0;
}

server {
    listen 0.0.0.0:8081;
    listen [::]:8081;
    server_name git.my.server.com; ## Replace this with something like gitlab.example.com
    server_tokens off; ## Don't show the nginx version number, a security best practice
    ## See app/controllers/application_controller.rb for headers set

    ## Individual nginx logs for this GitLab vhost
    access_log /var/log/nginx/gitlab_access.log;
    error_log /var/log/nginx/gitlab_error.log;

    location / {
        client_max_body_size 0;
        gzip off;

        ## https://github.com/gitlabhq/gitlabhq/issues/694
        ## Some requests take more than 30 seconds.
        proxy_read_timeout 300;
        proxy_connect_timeout 300;
        proxy_redirect off;
        proxy_http_version 1.1;

        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        proxy_pass http://gitlab-workhorse;
    }
}
I finally found the answer after several days of digging. At some point my default config file (/etc/default/gitlab) got changed. For whatever reason, my text editor decided to split gitlab_workhorse_options across two lines. As a result, gitlab was missing the arguments for authSocket and the document root and was just using the default values. To make it worse, the line split started on a $ character, so it looked like nano was just doing a word wrap.
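If you suspect the same problem, a quick sanity check is to confirm the option is still a single line (the file path is the one from the answer above):
# Print the gitlab_workhorse_options line and the one after it; a wrapped
# continuation line starting with a stray "$..." argument indicates the broken split
grep -n -A1 "gitlab_workhorse_options" /etc/default/gitlab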

Django media files not served in production but serving in development (localhost)?

Django media files are served in development but not in production. Whatever image I upload through the Django admin is served on the website on localhost, but when I put my site live on DigitalOcean it is not displayed. How do I solve this issue, can anyone tell? My website URL: http://139.59.56.161 (click on the book test menu).
Resurrecting a long-dead question which was all I could find to help me out here. Recording my answer for posterity. My "production" environment uses nginx as a reverse proxy in front of uwsgi hosting my django application. The solution is that Django just does not serve files in Production; instead you should configure your web-server to do that.
Django is slightly unhelpful in talking about static files and then saying 'media files: same.'
So, I believe it's best to catch file requests up front, in my case in the nginx server, to reduce double-handling; your front-end web server is also the most optimised for the job.
To do this:
Within a server definition block in your /etc/nginx/sites-available/[site.conf], define the webroot (the directory on your server's file system that covers everything) with the declaration 'root [dir]'.
server {
    listen 80;
    server_name example.com www.example.com;
    root /srv/;
This next block tells nginx to send all the traffic to the uwsgi service running django - I lifted it holus bolus from an example, probably on digitalocean.com.
    location / {
        proxy_pass http://localhost:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Client-Verify SUCCESS;
        proxy_set_header X-Client-DN $ssl_client_s_dn;
        proxy_set_header X-SSL-Subject $ssl_client_s_dn;
        proxy_set_header X-SSL-Issuer $ssl_client_i_dn;
        proxy_read_timeout 1800;
        proxy_connect_timeout 1800;
        include uwsgi_params;
        uwsgi_pass unix:/run/uwsgi/mysite6.sock;
    }
Now, here are the bits we need to serve files when they are requested. try_files attempts to serve $uri and then $uri/, and it would be a good idea to put a file like 'resource_not_found.html' in /srv and set it as the last fallback for try_files, so the user knows that this part has been unintentionally left blank.
    location /static/ {
        try_files $uri $uri/ ;
    }

    location /media/ {
        try_files $uri $uri/ ;
    }
}
That concludes our server block for http, hence the extra close "}".
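Once nginx is reloaded, you can check that it, rather than Django, is serving the files. The file names below are hypothetical; use ones that actually exist under your static and media directories on disk:
# Should be served directly by nginx from the webroot, not proxied to Django
curl -I http://example.com/static/css/site.css
curl -I http://example.com/media/uploads/book.jpg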
Alternatively, you can get uwsgi doing it by setting 'static-map' or 'static-map2'. 'static-map' "eats" the mapped url part, whereas static-map2 adds it.
static-map /files=/srv/files
means a request for /files/funny.gif will serve /srv/files/funny.gif.
static-map2 /files=/srv
will do the same thing, because it will take a request for /files/funny.gif and look for /srv/files/funny.gif. As per the uwsgi docs, you can create as many of these mappings as you want, even to the same uri, and they will be checked in order of appearance. Damnit, I've just now finally found the docs for nginx open source.
uwsgi docs
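For reference, a minimal uwsgi invocation using a static map might look like the following sketch; the module name and paths are made up, and normally you'd put the same options in your uwsgi ini file:
# Serve the Django app over HTTP and map /media/* requests straight to the filesystem
uwsgi --http :8000 --module mysite.wsgi --static-map /media=/srv/media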

Nginx rewrite regex with port change

My web stack is composed of (nginx (port: 29090) -> tomcat).
nginx acts as a reverse proxy, and tomcat hosts 2 webapps:
1. Authentication (using Netflix Zuul), running on port 29091
2. SensorThings API server, running on port 29101
The request below is passed using zuul.route.sensor.url=http://localhost:29090/sensor-internal
Below is nginx.conf block
location /sensor-internal/ {
    include cors_support;
    rewrite ^(/sensor/)(.*)$ SensorThingsServer-1.0/v1.0/$2 break;
    proxy_redirect off;
    proxy_set_header Host $host;
    rewrite_log on;
}
I want to replace the URL
http://localhost:29090/sensor/xxxx(n)/yyyy(m)
to
http://localhost:29101/SensorThingsServer-1.0/v1.0/xxxx(n)/yyyy(m)
Note the change in port, and that sensor is replaced with SensorThingsServer-1.0/v1.0/.
I believe the above block will not work for the port change. Please guide.
You should define a separate location /sensor/ and perform the rewrite there, because the location /sensor-internal/ you have defined does not serve /sensor/* requests.
location /sensor/ {
    rewrite ^(/sensor/)(.*)$ http://localhost:29101/SensorThingsServer-1.0/v1.0/$2 break;
    rewrite_log on;
}
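A sketch of how to check the result: because the replacement is an absolute URL, nginx should answer with a temporary redirect to the backend port rather than proxying. "Things" here is just an example resource path:
# Expect a redirect whose Location header points at
# http://localhost:29101/SensorThingsServer-1.0/v1.0/Things
curl -i http://localhost:29090/sensor/Things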

trouble getting a file from node.js using nginx reverse proxy

I have set up an nginx reverse proxy to node essentially using this set up reproduced below:
upstream nodejs {
    server localhost:3000;
}

server {
    listen 8080;
    server_name localhost;
    root ~/workspace/test/app;

    location / {
        try_files $uri $uri/ @nodejs;
    }

    location @nodejs {
        proxy_redirect off;
        proxy_http_version 1.1;
        proxy_pass http://nodejs;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
Now all my AJAX POST requests travel just fine to the node with this set up, but I am polling for files afterward that I cannot find when I make a clientside AJAX GET request to the node server (via this nginx proxy).
For example, for a clientside javascript request like .get('Users/myfile.txt') the browser will look for the file on localhost:8080 but won't find it because it's actually written to localhost:3000
http://localhost:8080/Users/myfile.txt // what the browser searches for
http://localhost:3000/Users/myfile.txt // where the file really is
How do I set up the proxy to navigate through to this file?
Okay, I got it working. The set up in the nginx.conf file posted above is just fine. This problem was never an nginx problem. The problem was in my index.js file over on the node server.
When I got nginx to serve all the static files, I commented out the following line from index.js
app.use(express.static('Users')); // please don't comment this out thank you
It took me a while to troubleshoot my way back to this as I was pretty wrapped up in understanding nginx. My thinking at the time was that if nginx is serving static files why would I need express to serve them? Without this line however, express won't serve any files at all obviously.
Now with express serving static files properly, nginx handles all static files from the web app and node handles all the files from the backend and all is good.
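For completeness, a quick way to confirm the end-to-end behaviour once both sides are serving their parts (the file names are hypothetical):
# A file that exists under the nginx root is served by nginx directly via try_files
curl -I http://localhost:8080/index.html

# A request that no static file matches falls back to the node app via the @nodejs location
curl -I http://localhost:8080/Users/myfile.txt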
Thanks to Keenan Lawrence for the guidance and AR7 for the config!
