How to secure Elasticsearch - elastica

I have Elasticsearch running on my server. By default it runs on port 9200 and the endpoint is public, meaning anyone can insert, update, or delete anything from anywhere. How do I make it secure, like phpMyAdmin, so that it can only be accessed through my code and not directly from a browser or Postman?

Elasticsearch does not perform authentication or authorization, leaving that as an exercise for the developer. Two popular approaches I have seen are:
Set up your own proxy (Nginx/HAProxy) fronting Elasticsearch. This way you exercise full control; you can also use the elasticsearch-jetty plugin to get Jetty-level auth.
Shield, a paid offering from Elastic, if budget permits: https://www.elastic.co/products/shield
Even with these in place, depending on who you are exposing this to, you may want to disable certain things like dynamic scripting, add throttles against DoS, etc.
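On the 1.x-era Elasticsearch this answer dates from, that hardening amounts to a couple of lines in elasticsearch.yml; a minimal sketch (the bind address is an assumption, adjust it to your network layout):

```yaml
# elasticsearch.yml (Elasticsearch 1.x era settings)

# Listen only on localhost so the node is reachable solely through your proxy
network.host: 127.0.0.1

# Disable dynamic scripting, which otherwise allows arbitrary code in queries
script.disable_dynamic: true
```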

You can use the Elasticsearch HTTP basic authentication plugin: https://github.com/Asquera/elasticsearch-http-basic
The README there gives a good idea of how to set it up.
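From memory of that README, the plugin is configured through elasticsearch.yml with settings along these lines; treat the exact keys and values as assumptions and verify them against the README:

```yaml
# elasticsearch.yml - settings read by the elasticsearch-http-basic plugin
http.basic.enabled: true
http.basic.user: "admin"          # username clients must send
http.basic.password: "changeme"   # pick a strong password
```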
If you are using Kibana 3 as a frontend to Elasticsearch, you can secure it using https://github.com/fangli/kibana-authentication-proxy

I run a relatively simple Nginx proxy that sits between my Elasticsearch and Kibana to enforce authorized access to my dashboards and charts.
Look at my post here: https://udaysagars.wordpress.com/2016/04/04/how-i-configured-authorized-access-to-kibana-dashboards/
Also, you can view my application that uses this method here: http://udaysagar2177.github.io/ec2/twitter-analytics.html
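The core of such a proxy is just a location block with HTTP basic auth in front of the upstream; a minimal sketch (the upstream address, port, and htpasswd path are assumptions):

```nginx
# /etc/nginx/conf.d/kibana.conf - basic-auth proxy in front of Elasticsearch/Kibana
server {
    listen 80;

    location / {
        auth_basic           "Restricted";
        auth_basic_user_file /etc/nginx/.htpasswd;  # created with: htpasswd -c /etc/nginx/.htpasswd youruser
        proxy_pass           http://127.0.0.1:9200; # Elasticsearch bound to localhost only
        proxy_set_header     Host $host;
    }
}
```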

Related

How to remotely connect to a local elasticsearch server - in a secure way ofc

I have been playing around with creating a web app that uses Elasticsearch to perform queries. Currently everything is in development, thus on localhost; let's say Elasticsearch runs at 123.123.123.123:9200. All fun and games, but once the web application (React) is finished, it should be able to send its queries to that same Elasticsearch instance.
I have been reading around on how to get this done in a proper and, most of all, secure way. The summary of it all is currently:
"First off, exposing an Elasticsearch node directly to the internet without protections in front of it is usually bad, bad news." (see here: Accessing elasticsearch from a public domain name or IP).
Another interesting blog I found: https://code972.com/blog/2017/01/dont-be-ransacked-securing-your-elasticsearch-cluster-properly-107.
The problem with the above-mentioned sources is that they are a bit older, and thus I am not sure whether they are up to date.
Therefore the following questions:
Is nginx sufficient to act as a secure middleman, passing the queries from end users to Elasticsearch?
What is the difference at that point with writing a backend into the React application (e.g. using Node and Express)?
What is the added value, taking into account the built-in security from Elasticsearch (usernames, passwords, API keys, certificates, HTTPS, ...)?
I am also reading a lot about using a VPN or tunneling. My impression is that these solutions are geared more towards a corporate, collaborative approach. Say I am running my front-end on a live server: I could use tunneling to show my work to colleagues or my employer. A VPN seems more realistic for allowing employees (wish I had them, just a CS student here) to access e.g. the database within my private network; say an employee needs to access Kibana to adapt something, like an API key (just making something up here), he or she could use a VPN connection for that.
Thank you so much for helping me clarify the above-mentioned points!
TLS, authorisation and access control are free for the Elastic Stack, and have been for a while. I'd start by looking at the docs, as that's an easy way to natively secure your cluster.
As for nginx: it can be useful for rate limiting or for blocking specific queries, for example, but it is another thing to configure and maintain.
From a client point of view it would really only matter if you are using the official Elasticsearch clients and you use nginx to change the way the API responds to the client (e.g. path rewrites, rate limiting).
The added value of the built-in security is that it's free, it's native, and it's easy to manage via Kibana.
On VPNs and tunneling: I'd follow the docs to secure Elasticsearch first and then see if you need them at some point in the future. They would be handled outside Elasticsearch anyway, and you'd still want to secure Elasticsearch itself.
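As a concrete starting point, the built-in security is switched on in elasticsearch.yml; a minimal sketch (the keystore path is an assumption, and the certificates must exist first, so see the Elastic docs for the full TLS setup):

```yaml
# elasticsearch.yml - enable the free, built-in security features
xpack.security.enabled: true

# Encrypt client traffic to the REST API (generate certificates first,
# e.g. with the elasticsearch-certutil tool shipped with Elasticsearch)
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.keystore.path: certs/http.p12
```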
The point about exposing Elasticsearch nodes directly to the internet is that it increases your vulnerability in principle. You should follow the rule of least exposure: keep the smallest possible "surface" of your system on the internet.
A good practice is to hide from the internet whatever doesn't need to be there, even if it is well protected. It takes roughly 20 minutes for cyber attacks to start hitting any newly exposed service (see a showcase).
So I suggest you set up a private network, such as a traditional VPN or an SDP product such as Shieldoo Mesh.

unable to modify flow in Apache NiFi 1.14.0 in HTTP mode

I understand that the official documentation recommends running NiFi with HTTPS, but it nonetheless says a word about running NiFi over HTTP, e.g. via the nifi.web.http.port property.
Also, I'd like to incrementally incorporate the NiFi instance into our current data infrastructure, starting with non-critical data pipelines, so the TLS layer is not necessary right now and could add friction during the deployment phase. Therefore I decided to go down the HTTP path.
After changing some settings I am able to access NiFi's GUI at http://localhost:8080/nifi, but I find that I cannot make any change to the flow. Write operations, i.e. POST/PUT/DELETE requests, are rejected with HTTP 403.
NiFi doc says:
By monitoring the API traffic between the GUI and the NiFi instance, I can confirm that the PermissionsEntity has both canRead:true and canWrite:true.
I am using a containerized NiFi instance.
Has anyone encountered similar problems?
The root canvas may have been locked down for the default single user that NiFi 1.14 generates when it starts up without a security configuration.
The first thing to try is right-clicking on the canvas and granting yourself access, if you can.
The second option: try (re)moving flow.xml.gz, users.xml and authorizations.xml and then restarting NiFi. New files will be generated that may work better with anonymous access.
Either way, setting up security now will probably mean less friction down the road, not more. I strongly advise you to bite the bullet and get it set up securely.
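For reference, the HTTP-mode settings in conf/nifi.properties look roughly like this; the key point is that the HTTPS properties must be left blank for plain HTTP to take effect (a sketch, so double-check against your version's admin guide):

```properties
# conf/nifi.properties - run the UI/API over plain HTTP
nifi.web.http.host=0.0.0.0
nifi.web.http.port=8080

# HTTPS must be fully unset, otherwise it takes precedence
nifi.web.https.host=
nifi.web.https.port=
```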

How to monitor Elastic Stack without X-Pack?

Can we monitor the Elastic Stack 6.0 and above (Elasticsearch, etc.) without using X-Pack? As we know, many of the features like security, machine learning and the graph APIs are not supported under the BASIC (free) licence.
So I want to know whether there are any APIs, free of licence limitations, that can be used to implement the functionality mentioned above.
All the information should be in the cluster APIs, you'll just lack the visualizations.
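For example, the stock cluster and cat APIs can be polled without any licence at all (host and port assumed to be the defaults):

```sh
# Overall cluster status, shard counts, etc.
curl -s 'localhost:9200/_cluster/health?pretty'

# Per-node metrics: heap, GC, thread pools, disk...
curl -s 'localhost:9200/_nodes/stats?pretty'

# Human-readable index overview
curl -s 'localhost:9200/_cat/indices?v'
```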
Monitoring (of the local cluster) is actually included in X-Pack Basic, unlike the other features. Any reason you don't want to use it?
Alternatives include Kopf, Cerebro,... though you'll need to run them as a separate process and watch out for version compatibilities.
We've had success with ElasticHQ for monitoring (requires Python):
https://github.com/ElasticHQ/elasticsearch-HQ
And with Sentinl for setting up alerts/watchers (it is a plugin for Kibana):
https://github.com/sirensolutions/sentinl/wiki
We have set up a reverse proxy to enable SSL/TLS and use Ubuntu user management to create logins; however, we do not limit access within Kibana itself.
We have little need for graph/machine learning, so I am unaware of free alternatives there.
The company I work for is heavily into open source, so these projects suit us.

Can a person add CORS headers using the ELB Application Load Balancer (sitting in front of Solr)?

We have a number of EC2 instances running Solr, which we've used in the past through another application. We would now like to let users (via web browser) access Solr directly.
Without something "in front" of Solr this is a security risk, so we have opted to use ELB (specifically the Application Load Balancer) as a simple and maintenance-free way of preventing certain requests from reaching Solr (i.e. preventing the public from deleting or otherwise modifying the documents in Solr).
This worked great, but we realize we need to deal with the CORS issue; in other words, we need to add the appropriate headers to responses going back to the browser. I have not yet seen a way of doing this with the Application Load Balancer, but am wondering if it is possible somehow. If it is not, I would love a recommendation for the easiest and least complicated way of adding these headers. We would really, really hate to add something like nginx in front of Solr, because then we've got additional redundancy to deal with, more servers, etc.
Thank you!
There is not much I can find on CORS for ALB either; I remember that when I used Beanstalk with ELB, I had to add CORS support in my Java application directly.
Having said that, I can find a lot of articles on how to set up CORS in Solr itself.
Could that be an option for you?
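Most of those articles boil down to enabling Jetty's CrossOriginFilter in the web.xml of the webapp Solr ships with; a sketch along those lines (the allowed origin is a placeholder, and the exact file location varies by Solr version):

```xml
<!-- server/solr-webapp/webapp/WEB-INF/web.xml -->
<filter>
  <filter-name>cross-origin</filter-name>
  <filter-class>org.eclipse.jetty.servlets.CrossOriginFilter</filter-class>
  <init-param>
    <param-name>allowedOrigins</param-name>
    <param-value>https://your-app.example.com</param-value> <!-- placeholder origin -->
  </init-param>
  <init-param>
    <param-name>allowedMethods</param-name>
    <param-value>GET,HEAD</param-value> <!-- read-only, matching the ALB restrictions -->
  </init-param>
</filter>
<filter-mapping>
  <filter-name>cross-origin</filter-name>
  <url-pattern>/*</url-pattern>
</filter-mapping>
```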

solr security for web apps

I have a web app which gets its data from a Solr instance (running in Tomcat).
Additional queries are done client side with AJAX; the data is pulled directly from Solr. This gives users the option to perform any query they like, which is of course a huge security hole. It's not a particularly big issue for this particular app, but I'm curious how to fix it. How do you secure Solr when client-side AJAX calls are required? (Preferably I would solve this with PHP.)
Instead of querying Solr directly, you could create a simple PHP wrapper that limits the types of queries that are possible. The client then queries this PHP script, which in turn queries Solr. Once you've done that, you can limit access to the Solr server to localhost, either through the firewall or in your Java application server.
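A minimal sketch of such a wrapper, assuming a core named "mycore" and a schema with id and title fields (both are placeholders, and requires allow_url_fopen for file_get_contents over HTTP):

```php
<?php
// search.php - forwards a restricted query to a Solr instance on localhost.
$q = isset($_GET['q']) ? $_GET['q'] : '';

// Reject empty or overly long input before it ever reaches Solr.
if ($q === '' || strlen($q) > 200) {
    http_response_code(400);
    exit('Invalid query');
}

// Build the Solr URL server side, so the client controls only the search
// terms, never the handler, the returned fields, or the row count.
$url = 'http://127.0.0.1:8983/solr/mycore/select?' . http_build_query([
    'q'    => $q,
    'fl'   => 'id,title',  // whitelist the fields sent back to the browser
    'rows' => 10,
    'wt'   => 'json',
]);

header('Content-Type: application/json');
echo file_get_contents($url);
```

The same pattern works with any server-side language; the essential design choice is that the browser talks only to your script, while Solr itself listens on localhost.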
