Prometheus - How to protect 'node_exporter' - Ansible

I just used Ansible to set up monitoring of my servers with Prometheus, Grafana and Node Exporter. I have one monitoring server (Prometheus) and one webserver (Node Exporter).
I followed a tutorial for the setup. The thing is that it does not provide any information about security. For the moment, anyone can connect to the node_exporter port of my webserver.
I thought about using iptables to protect my webserver from external calls on the node_exporter port, and then only giving access to my Prometheus server.
Is that the right way to do it?

There are mainly three options:
1. A local or external firewall, as already mentioned in your question
2. Setting up an encryption proxy (sshified) on your Prometheus server, which encrypts the outgoing scrape session over SSH to the node_exporter nodes
3. Setting up an encryption proxy (stunnel) on the node_exporter nodes, so that only encrypted sessions are accepted; see Authentication and encryption for Prometheus and its exporters
Option 3 can easily be added to the solution described in Running node_exporter with Ansible.
Option 2 is also quite simple to set up via Ansible.
Option 1 can be done via the Ansible modules available for (local) firewall configuration, such as firewalld or iptables (see the sketch below).
There are more possible solutions, like ghostunnel, ...
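For option 1, here is a minimal iptables sketch; it assumes node_exporter listens on its default port 9100 and uses a made-up Prometheus server address (10.0.0.5). The same rules could also be expressed with Ansible's iptables or firewalld modules rather than raw commands.

# on the webserver: accept scrapes from the Prometheus server only, drop everything else on 9100
$ sudo iptables -A INPUT -p tcp -s 10.0.0.5 --dport 9100 -j ACCEPT
$ sudo iptables -A INPUT -p tcp --dport 9100 -j DROP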

Related

Elasticsearch - Collecting logs from devices not on server LAN

I am trying to build familiarity with SIEM systems in general and decided to set up an Elastic Stack via Digital Ocean. Everything was successful, and the server itself (as localhost) is producing logs. It's been interesting to tinker with visualizations and that good stuff.
Obviously my interest isn't in logs from this remote server, though. I would like to configure some devices on my home network to send logs.
Current setup on server: filebeat > logstash > elasticsearch > kibana.
When I install filebeat onto, say, my laptop and configure the .yml file in a similar way to the server (comment out the elasticsearch output, uncomment the logstash output), it is not able to connect. Basically, I just set the hosts to serverip:logstash port and enabled filebeat on the system. Running the setup commands leads to a "couldn't connect to any configured elasticsearch hosts" error.
Instead of a direct answer, can someone explain to me generally what I need to be considering for this process? What happens when connecting from outside the server LAN, and how do I handle authentication to the server, if needed?
Thank you, really. I know that the information is out there but I am deep in a rabbit hole and having a hard time finding what I need.
By default, the HTTP API is bound to only the host's local loopback interface, ensuring that it is not accessible to the rest of the network. Because the API includes neither authentication nor authorization and has not been hardened or tested for use as a publicly-reachable API, binding to publicly accessible IPs should be avoided where possible.
Even if you set "http.host: 0.0.0.0", you still need to open the port for your laptop (better: if your laptop already has a public IP, open the port only for that address).
For authentication, you will have to look into the X-Pack security features.
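As a rough sketch of the firewall part, assuming a Linux server with ufw and a made-up laptop address (203.0.113.10): the Logstash Beats input defaults to port 5044 and the Elasticsearch HTTP API to 9200, so you would only open those for that one address.

# allow only the laptop to reach the Logstash Beats input, and Elasticsearch if the setup commands need it
$ sudo ufw allow from 203.0.113.10 to any port 5044 proto tcp
$ sudo ufw allow from 203.0.113.10 to any port 9200 proto tcp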
BR Alexey.

The Jenkins tunnel address that I specify in Jenkins -> Configure Cloud does not seem to work. Can someone help me with this?

I have a kubernetes cluster running on GKE and a Jenkins server running on a GCP instance.
I am using the Kubernetes plugin to dynamically create pods on the Kubernetes cluster. I created a pipeline (declarative syntax) for this.
So I am aware that the Jenkins slave agents communicate with the Jenkins master on port 50000.
A snip of the configuration
But for some reason, when I viewed the logs of the JNLP container created by Jenkins, I received an exception: tcpSlaveAgentListener not found.
A snip of the container log
According to the above image, I assume the tunneling is unsuccessful as it is trying to connect to http://34.90.46.204:8080/tcpSlaveAgentListener/ whereas it should connect to http://34.90.46.204:50000/tcpSlaveAgentListener/.
It was a lazy question for me to ask, but I solved the issue.
In the Manage Jenkins -> Configure Global Security settings:
For the 'TCP port for inbound agents' option, deselect the 'Disable' setting (which is selected by default) and specify a fixed port for the inbound agents to communicate on (50000).
A snip of the configuration
Jenkins uses a TCP port to communicate with agents connected inbound. If you're going to use inbound agents, you can allow the system to randomly select a port at launch (this avoids interfering with other programs, including other Jenkins instances). As it's hard for firewalls to secure a random port, you can instead specify a fixed port number and configure your firewall accordingly.
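Since the Jenkins master here runs on a GCP instance, the firewall side could look roughly like this; the rule name, network, and source range (your GKE cluster's node/pod range) are placeholders:

# allow inbound-agent (JNLP) traffic on the fixed port 50000, only from the cluster's range
$ gcloud compute firewall-rules create allow-jenkins-inbound-agents \
    --network=default --allow=tcp:50000 --source-ranges=10.128.0.0/20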
Hope this helps someone.

Job tracking URL in Google Compute Engine not working

I am using Google Compute Engine to run MapReduce jobs on Hadoop (pretty much all default configs). While running the job I get a tracking URL of the form http://PROJECT_NAME:8088/proxy/application_X_Y/ but it fails to open. Did I forget to configure something?
To elaborate on the option Amal mentioned in the other answer of using the "external ip address" of your Google Compute Engine VM, you can obtain the external IP address by running gcloud compute instances describe --zone <your zone> <your master hostname> and looking for natIP.
To open port 8088, you'll have to set up a firewall rule opening that port, likely on your default Google Compute Engine network. You'll want to specify a your.ip.address.here/32 address in --source-ranges to restrict incoming traffic to just your local machine dialing into your VM; otherwise anyone in the IP source-ranges would be able to access your Hadoop pages.
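For example, something along these lines; the zone, master hostname, and rule name are placeholders, and your.ip.address.here/32 stands for your own public address:

# find the master's external IP (natIP)
$ gcloud compute instances describe --zone us-central1-a hadoop-m | grep natIP
# open port 8088 only for your own machine
$ gcloud compute firewall-rules create allow-yarn-ui \
    --network=default --allow=tcp:8088 --source-ranges=your.ip.address.here/32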
If you had used bdutil to turn up your cluster, there's an alternative way which is much easier and more secure; simply run
bdutil <your flags used in deployment, like -e hadoop2, --prefix, etc.> socksproxy
to open SSH with dynamic port forwarding to use as a SOCKS5 proxy that your browser can point to. If you're running on Linux or Mac and have Chrome or Firefox installed, bdutil should also print out a copy/paste command for starting a fresh isolated browser pre-configured to use the socks proxy so that you can click through all the useful links.
If bdutil didn't print out a browser command or you didn't use bdutil, you can also run and configure your SSH socks proxy using these instructions. An SSH-based socks proxy is more secure than opening up firewall ports, and also allows the Hadoop page links to work (otherwise you have to keep manually replacing the hostnames with the external IP addresses).
One correction: you are using YARN, so there is no jobtracker. The jobtracker exists in Hadoop 1.x; in YARN, the processing layer became a generic framework and the jobtracker was replaced by the ResourceManager and ApplicationMaster. The UI you mentioned in the question is that of the ResourceManager.
For your problem, try the following tips.
Use the public IP address of the ResourceManager instance instead of PROJECT_NAME.
Check whether port 8088 is open for access from outside.
Another (more secure) way to do this is to use gcloud compute to make an SSH tunnel to your deployment, and then launch Chrome through it.
$ gcloud compute ssh clustername --zone=us-central1-a --ssh-flag="-D 1080" --ssh-flag="-N" --ssh-flag="-n"
You will need to replace clustername with the name of your deployment, and change the --zone if necessary.
From there, you can launch Chrome through it and then reach the Hadoop job tracking URL.
$ chrome --proxy-server="socks5://localhost:1080" \
    --host-resolver-rules="MAP * 0.0.0.0 , EXCLUDE localhost" \
    --user-data-dir=/tmp/clustername

Elasticsearch Access Log

I'm trying to track down who is issuing queries to an Elasticsearch cluster. Elasticsearch doesn't appear to have an access log.
Is there a place where I can find out which IP is hitting the cluster?
Elasticsearch doesn't provide any security out of the box, and that is on purpose and by design.
So you have a few options:
1. Don't leave your ES cluster exposed to the open world; put it behind a firewall (i.e., whitelist the hosts that can access ports 9200/9300 on your nodes).
2. Look into the Shield plugin for Elasticsearch in order to secure your environment.
3. Put an nginx server in front of your cluster to act as a reverse proxy.
4. Add simple basic authentication with either the elasticsearch-jetty plugin or simply the elasticsearch-http-basic plugin, which also allows you to whitelist the client IPs that are allowed to access your cluster.
If you want to have access logs, you need either option 2 or option 3, but all of the solutions above will allow you to secure your ES environment.
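As a rough sketch of option 3, assuming nginx is installed on a host that can reach Elasticsearch on localhost:9200; the listen port, config path, and log path below are arbitrary choices:

# minimal reverse-proxy config; its access log records the client IP and the queried path
$ cat <<'EOF' | sudo tee /etc/nginx/conf.d/es-proxy.conf
server {
    listen 8080;
    access_log /var/log/nginx/es-access.log;
    location / {
        proxy_pass http://127.0.0.1:9200;
    }
}
EOF
$ sudo nginx -s reload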

What are some options for securing redis db?

I'm running Redis locally and have multiple machines communicating with redis on the same port -- any suggestions for good ways to lock down access to Redis? The database is run on Mac OS X. Thank you.
Edit: This is assuming I do not want to use the built-in (non backwards compatible) Redis requirepass directive in the config.
On EC2 we lock down the machines that can make requests to the redis port on our redis box to only be our app box (we also only use it to store non-sensitive data).
Another option could be to not open up the redis port externally, but to require port forwarding through an SSH tunnel. Then you could only allow requests coming through the tunnel, and only allow SSH with a known key.
You'd pay the SSH overhead, but maybe that's OK for your scenario.
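A minimal sketch of that tunnel, with a made-up user and host, assuming Redis listens on its default port 6379 and is bound only to localhost on the Redis box:

# from the client machine: forward local port 6379 over SSH to the redis box, then connect locally
$ ssh -N -f -L 6379:127.0.0.1:6379 deploy@redis-host.example.com
$ redis-cli -h 127.0.0.1 -p 6379 ping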
There is a simple requirepass directive in the configuration file which allows access only to clients who authenticate via the AUTH command. I recommend reading the docs on this command, namely the "note" section.
