How to provide basic authentication for Kibana 4 - elasticsearch

I am running a fairly simple setup for ES+Kibana. I have the following two AWS instances: an HAProxy instance and an ES+Kibana instance (both services on the same machine). The whole setup is straightforward: HAProxy redirects requests to the Kibana dashboard. The HAProxy instance holds my certificate and is not dedicated, i.e. it is also responsible for sending connections to other monitoring instances that I own.
So it looks like:
                                    |---->> Monitor 1
Request ------------> HAPROXY ------|---->> Monitor 2
                                    |---->> (Kibana+ES_server)
I need basic authentication for the Kibana+ES server only; it should prompt a user for a username & password when they hit the URL.
P.S. I am also using browser-based certificates. What should my approach be? I am expecting a number of possible ways here, and the best approach among them.

Try the basic authentication plugin for Elasticsearch (elasticsearch-http-basic);
for Kibana, try the kibana-authentication-proxy project.
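Alternatively, since HAProxy already sits in front of the stack and holds the certificate, basic auth can be terminated at the proxy itself, so nothing extra is installed on the Kibana+ES box. A minimal HAProxy sketch (the userlist name, credentials, backend name, and address are assumptions, not your actual values):

    userlist kibana_users
        # 'insecure-password' stores the password in clear text in the config;
        # prefer 'password' with a crypt(3) hash in production
        user admin insecure-password changeme

    backend kibana_backend
        # challenge for credentials unless the request already carries valid ones
        http-request auth realm Kibana unless { http_auth(kibana_users) }
        server kibana 10.0.0.5:5601 check

With this in place the browser gets a 401 challenge on first hit, and the credentials never need to be known to Kibana or Elasticsearch themselves.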

Related

Prometheus - How to protect 'node_exporter'

I just used Ansible to set up monitoring of my servers with Prometheus, Grafana and Node Exporter. I have one monitoring server (Prometheus) & one webserver (Node Exporter).
I followed a tutorial for the setup. The thing is that it does not provide any information about security. At the moment, anyone is able to connect to the node_exporter port on my webserver.
I thought about using iptables to protect my webserver from external calls on the node_exporter port, and then only granting access to my Prometheus server.
Is that the right way to do it?
There are mainly three options:
Local or external firewall, as already mentioned in your question
Setting up an encryption proxy (sshified) on your Prometheus server, which tunnels the outgoing scrape session over SSH to the node_exporter nodes
Setting up an encryption proxy (stunnel) on your Prometheus nodes, which only accepts encrypted sessions; see Authentication and encryption for Prometheus and its exporters
Option 3 can easily be added to the solution from Running node_exporter with Ansible.
Option 2 is also quite simple and can be done easily via Ansible.
Option 1 can be done via the Ansible modules available for (local) firewall configuration, such as firewalld; a sketch follows below.
There might be more solutions possible like ghostunnel, ...
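For option 1, a minimal Ansible task sketch using the firewalld module to admit only the Prometheus server to node_exporter's default port 9100 (the source address 203.0.113.10 is a placeholder, not a real value):

    # Ansible task sketch: allow only the Prometheus host to reach node_exporter
    - name: Allow Prometheus server to scrape node_exporter
      ansible.posix.firewalld:
        rich_rule: 'rule family="ipv4" source address="203.0.113.10" port port="9100" protocol="tcp" accept'
        permanent: true
        immediate: true
        state: enabled

The same effect can be achieved with plain iptables: accept TCP traffic from the Prometheus IP on port 9100, then drop everything else on that port.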

Elasticsearch - Collecting logs from devices not on server LAN

I am trying to build familiarity with SIEM systems in general and decided to set up an Elastic Stack via Digital Ocean. Everything was successful and my server is producing logs locally. It's been interesting to tinker with visualizations and all that good stuff.
Obviously my interest isn't in logs from this remote server, though. I would like to configure some devices on my home network to send logs.
Current setup on server: filebeat > logstash > elasticsearch > kibana.
When I install Filebeat onto, say, my laptop and configure the .yml file in a similar way to the server (comment out the elasticsearch output, uncomment the logstash output), it is not able to connect. Basically I just set the hosts to serverip:logstash port and enabled Filebeat on the system. Running the setup commands leads to a "couldn't connect to any configured elasticsearch hosts" error.
Instead of a direct answer, can someone explain generally what I need to be considering for this process? What is happening when connecting from outside the server's LAN, and how do I handle authentication to the server, if needed?
Thank you, really. I know that the information is out there but I am deep in a rabbit hole and having a hard time finding what I need.
By default, the HTTP API is bound to only the host's local loopback interface,
ensuring that it is not accessible to the rest of the network. Because the API
includes neither authentication nor authorization and has not been hardened or
tested for use as a publicly-reachable API, binding to publicly accessible IPs
should be avoided where possible.
Even if you set "http.host: 0.0.0.0", you still need to open the port for your laptop (better if you already have a static public IP and open the port only for it).
For authentication, you have to investigate the X-Pack security features.
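For reference, a minimal sketch of the laptop's filebeat.yml output section (the IP is a placeholder; 5044 is the conventional Beats input port on Logstash):

    # filebeat.yml sketch: ship events to a remote Logstash over the Beats protocol
    output.logstash:
      hosts: ["203.0.113.20:5044"]   # server's public IP : Logstash beats port

Note that the filebeat setup command talks to Elasticsearch (and Kibana) to load templates and dashboards, not to Logstash, which is likely why it fails with "couldn't connect to any configured elasticsearch hosts" when only the Logstash output is enabled.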
BR Alexey.

Securing Kibana for an internet-facing startup

New to Kibana & not an expert in web security. We're trying to build a small startup in which we're leveraging Kibana 5.x for our backoffice analysts to do data exploration. This is a webapp and will be accessible over the internet.
Also, X-Pack security (though promising) may not be an option for us, purely because of cost.
I'd like to summarize my thoughts and get them validated by the professionals out here.
Firstly, I'm thinking of putting Elasticsearch behind a firewall so that only my app server and Kibana server can access it; ES is now secure.
I'm thinking of fronting Kibana with a reverse proxy (Apache or nginx) and applying basic authentication, with everything over HTTPS.
I'll only allow GET requests to Kibana through this reverse proxy so that users can read only.
Does this plan have any gaps? Also, I'm wondering whether Kibana makes direct calls to Elasticsearch from its JavaScript running in the browser. If so, we would have another potential backdoor to ES. What should be done if that is the case?
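To make the plan concrete, here is a minimal nginx sketch of what is described above (the server name, certificate paths, and upstream address are assumptions; note that Kibana's own UI issues POST requests to its backend, e.g. for searches, so a strict GET-only rule would break it):

    # nginx sketch: HTTPS + basic auth in front of Kibana
    server {
        listen 443 ssl;
        server_name kibana.example.com;

        ssl_certificate     /etc/nginx/ssl/kibana.crt;
        ssl_certificate_key /etc/nginx/ssl/kibana.key;

        auth_basic           "Kibana";
        auth_basic_user_file /etc/nginx/.htpasswd;   # create with the htpasswd tool

        location / {
            # GET implies HEAD; POST is needed for Kibana's own search requests
            limit_except GET POST { deny all; }
            proxy_pass http://127.0.0.1:5601;
        }
    }

For what it's worth, in Kibana 4/5 the browser talks only to the Kibana server, which proxies requests to Elasticsearch on its behalf, so the proxy in front of Kibana covers that path as long as ES itself is not directly reachable from the internet.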

Elasticsearch Access Log

I'm trying to track down who is issuing queries to an Elasticsearch cluster. Elasticsearch doesn't appear to have an access log.
Is there a place where I can find out which IP is hitting the cluster?
Elasticsearch doesn't provide any security out of the box, and that is on purpose and by design.
So you have a couple of solutions out there:
Don't leave your ES cluster exposed to the open world; put it behind a firewall (i.e. whitelist the hosts that can access ports 9200/9300 on your nodes).
Look into the Shield plugin for Elasticsearch in order to secure your environment.
Put an nginx server in front of your cluster to act as a reverse proxy (see the sketch below).
Add simple basic authentication with either the elasticsearch-jetty plugin or simply the elasticsearch-http-basic plugin, which also allows you to whitelist the client IPs that are allowed to access your cluster.
If you want access logs, you need either option 2 or 3, but all of the solutions above will let you secure your ES environment.
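For option 3, a minimal nginx sketch that produces an access log recording the client IP of every request hitting the cluster (ports and paths are assumptions; here ES is bound to localhost on 9201 and nginx is the sole public entry point):

    # nginx sketch: reverse proxy in front of Elasticsearch with access logging
    server {
        listen 9200;
        access_log /var/log/nginx/es_access.log combined;  # client IP, method, path

        location / {
            proxy_pass http://127.0.0.1:9201;   # ES HTTP port moved off 9200 (assumed)
        }
    }

Every query then leaves a line in es_access.log with the caller's IP, which answers the original question without any plugin.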

How to disable elasticsearch http module?

The default value of the "http.enabled" option in Elasticsearch's configuration file is true, which means the cluster can be searched and administered via HTTP; for example, a DELETE /index_* request can delete all indices. This is not safe when deploying the service to a production environment. How can I fix this problem?
You can either implement Shield - this is free if you are paying for one of the Elasticsearch support packages.
Or implement a reverse proxy, for example nginx, which checks each request and the user running it.
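To illustrate the setting itself, a minimal elasticsearch.yml sketch (valid for the 2.x/5.x era this question targets; note that with HTTP disabled, REST clients and Kibana can no longer reach the node, only the transport protocol on port 9300 remains):

    # elasticsearch.yml sketch: turn off the HTTP module entirely
    http.enabled: false

In practice this is usually applied only to data nodes, keeping one HTTP-enabled node (or a reverse proxy) as the single controlled entry point.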
