Elasticsearch - Collecting logs from devices not on server LAN

I am trying to build familiarity with SIEM systems in general and decided to set up an Elastic Stack on DigitalOcean. Everything was successful, and my server is producing logs as localhost. It's been interesting to tinker with visualizations and all that good stuff.
Obviously my interest isn't in logs from this remote server, though. I would like to configure some devices on my home network to send logs.
Current setup on server: filebeat > logstash > elasticsearch > kibana.
When I install filebeat onto, say, my laptop and configure the .yml file in a similar way to the server's (comment out the Elasticsearch output, uncomment the Logstash output), it is not able to connect. Basically I just set hosts to serverip:logstash_port and enabled filebeat on the system. Running the setup commands leads to a "couldn't connect to any configured elasticsearch hosts" error.
Instead of a direct answer, can someone explain generally what I need to be considering for this process? What is happening when connecting from outside the server's LAN, and how do I handle authentication to the server, if needed?
Thank you, really. I know that the information is out there but I am deep in a rabbit hole and having a hard time finding what I need.

By default, the HTTP API is bound to only the host's local loopback interface,
ensuring that it is not accessible to the rest of the network. Because the API
includes neither authentication nor authorization and has not been hardened or
tested for use as a publicly-reachable API, binding to publicly accessible IPs
should be avoided where possible.
Even if you set "http.host: 0.0.0.0", you still need to open the port for your laptop (better: if your laptop already has a public IP, open the port only for that IP).
For authentication, you have to investigate the X-Pack security features.
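To make that concrete, here is a minimal sketch of the moving parts, assuming Logstash's Beats input on its default port 5044; all IP addresses are placeholders:

# filebeat.yml on the laptop: Elasticsearch output commented out, Logstash output enabled
output.logstash:
  hosts: ["203.0.113.10:5044"]    # public IP of the server + Beats port

# Logstash pipeline on the server: listen on all interfaces, not just loopback
input {
  beats {
    port => 5044
    host => "0.0.0.0"
  }
}

# on the server: open the Beats port, ideally only for your home IP
sudo ufw allow from 198.51.100.7 to any port 5044 proto tcp

Note that the filebeat setup commands talk directly to Elasticsearch (to load index templates and dashboards), not to Logstash, which is why they can report "couldn't connect to any configured elasticsearch hosts" even when the Logstash output itself is reachable.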
BR Alexey.

Related

Prometheus - How to protect 'node_exporter'

I just used Ansible to monitor my servers with Prometheus, Grafana and Node Exporter. I have one monitoring server (Prometheus) & one webserver (Node Exporter).
I followed a tutorial for the setup. The thing is that it does not provide any information about security; for the moment, anyone is able to query the node_exporter port of my webserver.
I thought about using iptables to protect my webserver from external calls on the node_exporter port, and then only giving access to my Prometheus server.
Is this the right way to do it?
There are mainly three options:
Local or external firewall, as already mentioned in your question
Setting up an encryption proxy (sshified) on your Prometheus server, which encrypts the outgoing session over SSH to the node_exporter nodes
Setting up an encryption proxy (stunnel) on your Prometheus nodes, which allows only encrypted sessions; see Authentication and encryption for Prometheus and its exporters
Option 3 can easily be added to the solution of Running node_exporter with Ansible.
Option 2 is also quite simple and can be done easily via Ansible.
Option 1 can be done via Ansible modules available for (local) firewall configuration, like firewalld; a minimal iptables sketch follows below.
There might be more solutions possible, like ghostunnel, ...
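As an illustration of option 1, a minimal iptables sketch on the webserver, assuming node_exporter on its default port 9100 and using 203.0.113.10 as a placeholder for the Prometheus server's IP:

# allow scrapes from the Prometheus server only
sudo iptables -A INPUT -p tcp -s 203.0.113.10 --dport 9100 -j ACCEPT
# drop everything else aimed at node_exporter
sudo iptables -A INPUT -p tcp --dport 9100 -j DROP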

Kibana web interface not loading

Despite ElasticSearch and Kibana both running on my production server, I'm unable to visit the GUI over the public IP: http://52.4.153.19:5601/
Localhost curls return 200 but console errors on the browser report timeouts after a few images are retrieved.
I've successfully installed, run, and accessed Kibana on my local (Windows 10) and on my staging AWS EC2 Ubuntu 14.04 environment. I'm able to access both over port 5601 on localhost and the staging environment is accessible over the public IP address and all domains addressed accordingly. The reverse proxy also works and all status indicators are green on the dashboard.
I'm running Kibana 4.5, ElasticSearch 2.3.1, Apache 2.4.12
I've used the same exact volume from the working environment to attach to the production instance, so everything is identical on the two volumes, except that the staging environment's apache vhost uses a subdomain while the production environment's servername is the base domain. Both are configured for SSL wildcards. Both are in separate availability zones at Amazon. I've tried altering the server block to use a subdomain on the production server, just to see if the domain was impactful but the error remains.
I also tried running one instance individually, in case EC2 had some kind of networking error with 0.0.0.0 but I'm unable to come to a resolution. All logs and configurations are identical between the two servers for ElasticSearch and Kibana.
I've tried deleting and re-creating the kibana index, and tried alternate settings including the host, the elasticsearch url, extending the max ping and timeout, max retries, the apache limits, and http.cors to allow different origins. I've tried other ports, but both servers indicate that 5601 is listening in the same way.
I also had the same problem on a completely different volume that was previously attached to this instance.
The only difference I can see is that the working version pings fine while the non-working version has a 100% packet loss when pinging the IP, although I can't imagine why that would be, as I'm able to reach the website on 80, just fine. I can also access various other tools running on other ports. I assume there might be some kind of networking conflict. Any ideas?
Maybe port 5601 is blocked by a firewall.
Allow incoming connections to port 5601 with:
sudo iptables -I INPUT -p tcp --dport 5601 -j ACCEPT
For security:
Modify the above command to accept connections only from a specific address (see man iptables and the example below)
or use the Shield plugin for Elasticsearch
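A sketch of that source-restricted variant, using 203.0.113.10 as a placeholder for the one client machine that should reach Kibana:

# accept Kibana traffic from a single trusted address, drop the rest
sudo iptables -I INPUT -p tcp -s 203.0.113.10 --dport 5601 -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 5601 -j DROP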
Sorry, forgot to update this question. The answer turned out to be that I simply needed to deploy a new instance. By creating a clone of the instance, I was able to resolve the issue. I've had networking problems at AWS before, with their internal DNS/IP conflicts, and have had to do this in the past, so it turned out to be the quickest and cleanest solution, albeit one that doesn't provide any definitive insight into the cause.

elasticsearch transport-couchbase plugin refusing connection on port 9091

On my server I have installed elasticsearch-2.2.1 and Couchbase Server version 4.1.0. The aim is to transfer data from bucket x on Couchbase to Elasticsearch.
I've installed the transport-couchbase plugin on Elasticsearch, which basically allows XDCR from the server to Elasticsearch.
As I understand it, transport-couchbase listens by default on port 9091, so in essence I'm supposed to create a cluster reference that points to that port (both Couchbase and Elasticsearch are installed on the same machine).
When I try to create the reference, I get an internal server error. The logs don't give me much information regarding the issue, and I can ping the host. However, when I try to telnet to the machine on that port, the connection is refused.
The server is sitting behind a proxy, and I am starting to think that the issue lies within either Couchbase Server or Elasticsearch (the transport-couchbase plugin).
I'm going out on a limb here, but I think maybe I'm supposed to configure the plugin so that it accepts requests going through the proxy. If this is the issue, is there a way to embed proxy settings into the plugin so that it can accept connections for XDCR?
PS: When I did this whole process on a separate machine that isn't sitting behind a proxy, everything worked fine. So I have a strong suspicion that it is a proxy issue.
If you can't telnet or browse to port 9091, this most likely indicates a network config issue. The plugin binds to the interface that Elasticsearch binds to. The first thing to check is that bind_host and publish_host in elasticsearch.yml are configured to bind to an interface that allows connections from wherever the proxy is located, and that the proxy is really connecting on that interface.
There is a GitHub thread for a bug in the transport plugin where it might not bind to all interfaces:
https://github.com/couchbaselabs/elasticsearch-transport-couchbase/issues/134
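For reference, a sketch of the relevant elasticsearch.yml settings in the 2.x syntax; both addresses are placeholders that need to match your own interfaces:

# listen on all interfaces so the proxy can reach the plugin on port 9091
network.bind_host: 0.0.0.0
# the address clients and other nodes should use to reach this node
network.publish_host: 10.0.0.5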
The above solutions didn't work for me; however, I added this line:
-Djava.net.preferIPv4Stack=true
to /etc/elasticsearch/jvm.options, and it seemed to fix the issue in my case.

WSO2 ESB proxy service on Windows

I'm using the WSO2 ESB to integrate several services on a Windows virtual machine.
I used a simple proxy to map the services deployed on it. The problem is that I can't access them from outside, even though port 8280, where the services are deployed, is open to the internet; I only see a blank page instead. What could be wrong?
Another question: I was trying to map the WSO2 ESB management console itself to be available from outside the machine using a simple proxy, and I failed; the attached screenshot is what I see when trying the service.
Could you please give me a hint on how to resolve this issue? Is it possible to expose the ESB management console using the ESB itself?
Thanks a lot in advance,
Do you have a proxy in the middle? It looks like the webpage in the screenshot is missing all pictures, while the CSS was loaded successfully.
Another question: which kind of virtual machine do you use? For example, in VirtualBox the virtual machine is behind NAT by default.
In that case I wasn't able to connect to a server in the virtual machine from the host; only the opposite way worked (a server on the host was available in the virtual machine).
To make a server in the virtual machine available on the host, you need to configure the network as bridged, as in the sketch below.
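If it is VirtualBox, the switch to bridged networking can also be made from the host's command line; "esb-vm" and eth0 are placeholders for your VM name and host adapter:

# run while the VM is powered off
VBoxManage modifyvm "esb-vm" --nic1 bridged --bridgeadapter1 eth0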
Not sure if it helps, but I think I had a similar problem in our corporate network after I applied all the security patches (POODLE, Diffie-Hellman, etc.). I had to configure the addresses that are allowed to access the admin console in catalina.xml (if I remember right). Cannot tell you more details because I'm on holiday :-)
Maybe it's worth to give it a try.
Another example from real life: the HTTP response from an external resource was application/json, with status 200 OK. The ESB was configured to use
<messageFormatter contentType="application/json"
                  class="org.apache.synapse.commons.json.JsonStreamFormatter"/>
but the content was simply text/plain.
While parsing the body of the HTTP response, an exception was thrown and silently written to the log, without any fault message processing; the client just got an empty response.
To verify that the services are reachable, there is an echo service on the server by default, which responds with content equal to the request. Try using it.
was trying to map the WSO2 ESB management console itself to be available
from outside the machine using a simple proxy
By default the management console tries to enforce port 9443 for dynamic (JSP) pages. That's why you see only part of the pages, and why you shouldn't be able to log on.
What you can do is edit repository/conf/tomcat/catalina-server.xml and add an attribute proxyPort="443" to the Connector running on port 9443; the Carbon console will then be happy to run on 443.
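A sketch of the edited Connector element; apart from the added proxyPort attribute this is illustrative, since your stock catalina-server.xml will carry more attributes that should be left unchanged:

<!-- repository/conf/tomcat/catalina-server.xml: only proxyPort is added -->
<Connector protocol="org.apache.coyote.http11.Http11NioProtocol"
           port="9443"
           proxyPort="443"
           scheme="https"
           secure="true"
           SSLEnabled="true"/>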
For the services, my educated guess would be the firewall / network rules; however, without more information I cannot answer (or they are working, and you just can't test them with a simple browser request).

Elasticsearch Access Log

I'm trying to track down who is issuing queries to an ElasticSearch Cluster. Elastic doesn't appear to have an access log.
Is there a place where I can find out which IP is hitting the cluster?
Elasticsearch doesn't provide any security out of the box, and that is on purpose and by design.
So you have a couple of solutions:
Don't leave your ES cluster exposed to the open world; put it behind a firewall (i.e. whitelist the hosts that can access ports 9200/9300 on your nodes)
Look into the Shield plugin for Elasticsearch in order to secure your environment.
Put an nginx server in front of your cluster to act as a reverse proxy (see the sketch below).
Add simple basic authentication with either the elasticsearch-jetty plugin or simply the elasticsearch-http-basic plugin, which also allows you to whitelist the client IPs that are allowed to access your cluster.
If you want to have access logs, you need either 2 or 3, but all solutions above will allow you to secure your ES environment.
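For the reverse-proxy option, a minimal nginx sketch that yields exactly such an access log; the upstream address and log path are assumptions to adapt:

server {
    listen 80;

    # every request is logged with the client IP, timestamp and request line
    access_log /var/log/nginx/es_access.log;

    location / {
        proxy_pass http://127.0.0.1:9200;
        proxy_set_header X-Real-IP $remote_addr;   # pass the original client IP upstream
    }
}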
