I'm trying to set up an ELK stack on a remote Oracle Cloud server, but I can't access Kibana from a browser. I'm installing from the deb packages. The version of Elasticsearch and Kibana I'm installing is 8.2, the latest at the moment (in this version security is enabled by default, including generation of the security certificates). I'm following the instructions from the official site, but they say nothing about configuring remote access.
I tried to change the settings in the kibana.yml file: I uncommented the "server.port: 5601" line and set server.host: "my ip" (I also tried server.host: "0.0.0.0"), but this does not help.
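For reference, the relevant kibana.yml lines as I have them now (the host value is one of the two variants I tried):

server.port: 5601
server.host: "0.0.0.0"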
I also tried to reach Elasticsearch directly over the network. I edited its configuration in a similar way, but that did not help either. Access to Elasticsearch from the network is not essential in my case, but I would like to have it too.
I know that Oracle servers by default restrict incoming traffic, so I unblocked the Elasticsearch and Kibana ports (9200 and 5601) in the Oracle control panel.
I also allowed ports 9200 and 5601 through iptables. The UFW firewall is in its default "inactive" status. When checked with nmap, both ports return a "filtered" status.
Please help fix the issue. I'm just doing a standard installation according to the instructions and I don't understand what the problem is.
I solved the problem by setting up an nginx reverse proxy that forwards requests coming to the server to localhost:5601 (a minimal sketch of the proxy block is below). These two articles helped me; I hope they help someone else:
https://www.digitalocean.com/community/tutorials/how-to-install-nginx-on-ubuntu-20-04
https://www.digitalocean.com/community/tutorials/how-to-install-elasticsearch-logstash-and-kibana-elastic-stack-on-ubuntu-20-04-ru (step 2)
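For reference, a minimal sketch of the kind of nginx server block this sets up (kibana.example.com is a placeholder for your own domain or public IP):

server {
    listen 80;
    server_name kibana.example.com;  # placeholder: your domain or public IP

    location / {
        # forward all requests to Kibana listening on the loopback interface
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
    }
}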
I am trying to build familiarity with SIEM systems in general and decided to set up an Elastic Stack via DigitalOcean. Everything was successful, and the server is producing logs about itself as localhost. It's been interesting to tinker with visualizations and that good stuff.
Obviously my interest isn't in logs from this remote server, though. I would like to configure some devices on my home network to send logs.
Current setup on server: filebeat > logstash > elasticsearch > kibana.
When I install Filebeat onto, say, my laptop and configure the .yml file similarly to the server's (comment out the Elasticsearch output, uncomment the Logstash output), it is not able to connect. Basically I just set the hosts to serverip:logstashport and enabled Filebeat on the system. Running the setup commands leads to a "couldn't connect to any configured elasticsearch hosts".
Instead of a direct answer, can someone explain for me generally what I need to be considering for this process? What is happening when connecting from outside the server's LAN, and how do I handle authentication to the server, if needed?
Thank you, really. I know that the information is out there but I am deep in a rabbit hole and having a hard time finding what I need.
By default, the HTTP API is bound to only the host's local loopback interface, ensuring that it is not accessible to the rest of the network. Because the API includes neither authentication nor authorization and has not been hardened or tested for use as a publicly-reachable API, binding to publicly accessible IPs should be avoided where possible.
Even if you set "http.host: 0.0.0.0", you still need to open the port for your laptop (better, if your laptop has a public IP, to open the port only for it).
For authentication, you have to investigate the X-Pack security features.
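As an illustration only (a sketch, not your exact config): with the conventional Beats port 5044, the server side needs the Logstash beats input listening on a non-loopback interface, and the laptop's filebeat.yml needs to point at the server's public address. SERVER_PUBLIC_IP and LAPTOP_IP are placeholders.

On the server (Logstash pipeline config):

input {
  beats {
    host => "0.0.0.0"   # listen on all interfaces (this is also the default)
    port => 5044
  }
}

On the laptop (filebeat.yml):

output.logstash:
  hosts: ["SERVER_PUBLIC_IP:5044"]

And the port must be reachable through the firewall, e.g.:

sudo iptables -I INPUT -p tcp -s LAPTOP_IP --dport 5044 -j ACCEPT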
BR Alexey.
Despite ElasticSearch and Kibana both running on my production server, I'm unable to visit the GUI over the public IP: http://52.4.153.19:5601/
Localhost curls return 200, but console errors in the browser report timeouts after a few images are retrieved.
I've successfully installed, run, and accessed Kibana on my local (Windows 10) and on my staging AWS EC2 Ubuntu 14.04 environment. I'm able to access both over port 5601 on localhost and the staging environment is accessible over the public IP address and all domains addressed accordingly. The reverse proxy also works and all status indicators are green on the dashboard.
I'm running Kibana 4.5, ElasticSearch 2.3.1, Apache 2.4.12
I've used the same exact volume from the working environment to attach to the production instance, so everything is identical on the two volumes, except that the staging environment's Apache vhost uses a subdomain while the production environment's ServerName is the base domain. Both are configured for SSL wildcards. Both are in separate availability zones at Amazon. I've tried altering the server block to use a subdomain on the production server, just to see if the domain was impactful, but the error remains.
I also tried running one instance individually, in case EC2 had some kind of networking error with 0.0.0.0, but I'm unable to come to a resolution. All logs and configurations are identical between the two servers for ElasticSearch and Kibana.
I've tried deleting and re-creating the kibana index, and alternate settings for the host, the elasticsearch URL, the max ping and timeout, max retries, the Apache limits, and http.cors origins. I've tried other ports, but both servers indicate that 5601 is listening in the same way.
I also had the same problem on a completely different volume that was previously attached to this instance.
The only difference I can see is that the working version pings fine while the non-working version has 100% packet loss when pinging the IP, although I can't imagine why that would be, as I'm able to reach the website on port 80 just fine. I can also access various other tools running on other ports. I assume there might be some kind of networking conflict. Any ideas?
Maybe port 5601 is blocked by a firewall.
Allow incoming connections to port 5601 with:
sudo iptables -I INPUT -p tcp --dport 5601 -j ACCEPT
For security:
Modify the above command to accept connections only from a specific address (see man iptables),
or use the Shield plugin for elasticsearch.
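For example, to accept connections on 5601 only from one address (203.0.113.10 is a placeholder for your own IP):

sudo iptables -I INPUT -p tcp -s 203.0.113.10 --dport 5601 -j ACCEPT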
Sorry, I forgot to update this question. The answer turned out to be that I simply needed to deploy a new instance. By creating a clone of the instance, I was able to resolve the issue. I've had networking problems at AWS before, with their internal DNS/IP conflicts, and have had to do this in the past; it turned out to be the quickest and cleanest solution, albeit without providing any definitive insight into the cause.
On my server I have installed elasticsearch-2.2.1 and Couchbase Server version 4.1.0. The aim is to transfer data from bucket x on Couchbase to Elasticsearch.
I've installed the transport-couchbase plugin on Elasticsearch, which basically allows XDCR from the Couchbase server to Elasticsearch.
As I understand it, transport-couchbase listens by default on port 9091, so in essence I'm supposed to create a cluster reference that points to that port (both Couchbase and Elasticsearch are installed on the same machine).
When I try to create the reference I get an internal server error. The logs don't give me much information about the issue, and I can ping the machine, but when I try to telnet to it on that port the connection is refused.
The server is sitting behind a proxy, and I am starting to think that the issue lies within either Couchbase Server or Elasticsearch (the transport-couchbase plugin).
I'm going out on a limb here, but I think maybe I'm supposed to configure the plugin so that it accepts requests going through the proxy. If this is the issue, is there a way to embed proxy settings into the plugin so that it can accept connections for XDCR?
PS: When I did this whole process on a separate machine that isn't sitting behind a proxy, everything worked fine. So I have a strong suspicion that it is a proxy issue.
If you can't telnet or browse to port 9091, this most likely indicates a network config issue. The plugin binds to the interface that Elasticsearch binds to. The first thing to check is that bind_host and publish_host in elasticsearch.yml are configured to bind to an interface that allows connections from wherever the proxy is located, and that the proxy is really connecting on that interface.
There is a thread on GitHub for a bug in the transport plugin where it might not bind to all interfaces:
https://github.com/couchbaselabs/elasticsearch-transport-couchbase/issues/134
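As a sketch for that version of Elasticsearch (2.x network settings; the publish address is a placeholder for whichever interface the proxy actually reaches):

network.bind_host: 0.0.0.0
network.publish_host: 192.168.1.10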
The above solutions didn't work for me; however, I added this line:
-Djava.net.preferIPv4Stack=true
to /etc/elasticsearch/jvm.options, and it seemed to fix the issue in my case.
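A quick way to apply that fix, assuming a systemd-based install (adjust the restart command to your init system):

echo "-Djava.net.preferIPv4Stack=true" | sudo tee -a /etc/elasticsearch/jvm.options
sudo systemctl restart elasticsearch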
I installed Shield in my Elastic Search cluster and configured Kibana to work with it as described: https://www.elastic.co/guide/en/shield/current/kibana.html
Now I restart Kibana and get this error:
{"type":"log","#timestamp":"2016-02-15T19:58:22+00:00","tags":["fatal"],"pid":28422,"level":"fatal","message":"HTTPS
is required. Please set server.ssl.key and server.ssl.cert in kiban$
FATAL { [Error: HTTPS is required. Please set server.ssl.key and
server.ssl.cert in kibana.yml.] cause: [Error: HTTPS is required.
Please set server.ssl.key and server.ssl.cert in kibana.yml.],
isOperational: true }
The tutorial above doesn't state that HTTPS is mandatory for Kibana to work with Shield, but the error does. Any idea whether I can still use Shield with Kibana without setting up SSL?
Unfortunately this is the case in the current release of Kibana (4.4). From installedPlugins/shield/index.js:38:13 one can conclude that there is no way to get around using HTTPS when this plugin is enabled. If you simply skip the step by removing the Shield plugin for Kibana with bin/kibana plugin --remove shield, Kibana will be usable again with browser authentication, but this is NOT for production purposes IMO.
Add this in kibana.yml, but only do it if you have SSL configured in some other way, e.g. a load balancer with SSL termination:
shield.skipSslCheck: true
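Alternatively, if you do want to satisfy the HTTPS requirement on Kibana itself, the error message names the two kibana.yml settings to set (the paths are placeholders for your own key and certificate):

server.ssl.key: /etc/kibana/kibana.key
server.ssl.cert: /etc/kibana/kibana.crt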
I have elasticsearch running on EC2.
I can hit it from the local IP address (e.g. curl -XGET localhost:9200).
I cannot hit it from the public IP address, whether on the same machine or from our network; it always times out.
iptables are allowing the traffic.
The port is open (to itself as well as the private network).
Elasticsearch http.cors is enabled and allows "*".
Aside from iptables, the Amazon security config, and the elasticsearch config, could there be anything I am overlooking? (We can access 443 and get Kibana up; it just times out on the elasticsearch AJAX call, or if I try to access 9200 directly.)
I've been working on this for over a day, so I humbly come to you all!
thank you
I had exactly the same issue.
I managed to solve it as follows:
1. Do what TJ said in his comment, plus restart the instance. I wasn't sure if this was necessary, but I did it for good measure.
2. I made sure that the following is set in the elasticsearch.yml file:
a. http.enabled: true
b. http.cors.enabled: true
c. http.cors.allow-origin: "*"
3. Restarted elasticsearch (service elasticsearch restart).
Then when I tried to access elasticsearch from the public IP it worked - http://[PUBLIC IP OF INSTANCE]:9200
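A quick check from another machine once it's restarted (a JSON blob with the cluster name means it is reachable):

curl -XGET http://[PUBLIC IP OF INSTANCE]:9200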
Hope this helps.
I just spent lots of time trying to get this working and just succeeded.
Setup: Elasticsearch 6.2.4, running on a Windows Server 2012, EC2 instance.
I also installed the discovery-ec2 plugin; I'm not sure now if it is required. My assumption is that it is, although some of the settings it enables were not necessary to get things working.
Config (.yml): I tried tons of different .yml config settings, which in the end did not help; I think the main setting is:
network.host: 0.0.0.0
I tried setting network.host to ec2:privateIpv4 and ec2:publicIpv4 (plugin settings), but they didn't help.
I had added the required Custom TCP rules (allowing 9200 and 9300; not sure if 9300 is needed).
Either it failed to start (usually with an error binding to 9300) or it started but was not publicly accessible.
The fix: what got it working in the end is that you must also open the port in the Windows firewall. As soon as I added the inbound rule, boom, it connected :)
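For reference, the inbound rule can also be added from an elevated command prompt (the rule name is arbitrary; add a second rule for 9300 if you need transport access too):

netsh advfirewall firewall add rule name="Elasticsearch HTTP" dir=in action=allow protocol=TCP localport=9200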
I then stripped out all the extra configs I had been trying, restarted Elasticsearch... and it still worked!