How to remotely access Kibana in Elastic - elasticsearch

I am currently trying to make my Kibana dashboard remotely accessible via the browser, so a user can monitor indices and run scripts remotely. As background, my Elasticsearch is currently run on a Windows server, and I could successfully make 'elastic uri search' (e.g. http://[IP_ADDRESS]:9200) remotely accessible by updating elasticsearch.yml and opening port 9200. For this reason, I took similar actions to remotely access Kibana, updating kibana.yml and opening port 5601, but I couldn't remotely access Kibana in the browser from my local machine. The browser throws ERR_CONNECTION_TIMED_OUT. See the attributes that I have updated in kibana.yml:
server.port: "5601"
server.host: "0.0.0.0"
elasticsearch.url: "http://localhost:9200"

You need to configure the file /etc/kibana/kibana.yml as root:
Uncomment the lines:
server.port: 5601
# Kibana is served by a back end server. This setting specifies the port to use.
server.port: 5601
server.host: "0.0.0.0"
# To allow connections from remote users, set this parameter to a non-loopback address.
server.host: "0.0.0.0"
elasticsearch.hosts
Change the <your-elastic-server-ip> to your elastic search server IP, something like 192.168.1.XX
# The URLs of the Elasticsearch instances to use for all your queries.
elasticsearch.hosts: ["http://<your-elastic-server-ip>:9200"]
And check the ports on your firewall:
$ sudo firewall-cmd --list-all
Output:
public (active)
target: default
icmp-block-inversion: no
interfaces: ens33
sources:
services: cockpit dhcpv6-client ftp ssh
ports: 10000/tcp 3306/tcp 9200/tcp 5601/tcp
protocols:
masquerade: no
forward-ports:
source-ports:
icmp-blocks:
rich rules:
If you don't see the ports 9200/tcp 5601/tcp opened then do the following command as sudo:
$ sudo firewall-cmd --zone=public --permanent --add-port 9200/tcp
$ sudo firewall-cmd --zone=public --permanent --add-port 5601/tcp
$ sudo firewall-cmd --reload   # --permanent rules only reach the running firewall after a reload
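
The answer above checks two separate things: the address the service binds to and the ports firewalld lets through. The script below is an added example, not part of the original answer. It classifies a port from the text of `ss -ltn` and `firewall-cmd --list-ports`; the function names and sample output are constructed, and it does not see firewalld services, rich rules, or any cloud security group in front of the host.

```sh
# Added example: classify why PORT may be unreachable from another machine.
# Inputs are the text of `ss -ltn` and `firewall-cmd --list-ports`; samples are constructed.

# listen_scope PORT SS_TEXT -> none | loopback | remote
listen_scope() {
  printf '%s\n' "$2" | awk -v port="$1" '
    $1 == "LISTEN" {
      p = $4; sub(/.*:/, "", p)                 # port = text after the last colon
      if (p != port) next
      a = $4; sub(/:[^:]*$/, "", a)             # address = text before it
      is_lo = (a ~ /^127\./ || a == "[::1]" || a ~ /^\[::ffff:127\./)
      if (is_lo) lo = 1; else remote = 1
    }
    END { print (remote ? "remote" : (lo ? "loopback" : "none")) }'
}

# fw_allows PORT PORTS_TEXT: true if PORT/tcp is listed (port ranges are not handled)
fw_allows() {
  case " $2 " in *" $1/tcp "*) return 0 ;; *) return 1 ;; esac
}

# triage PORT SS_TEXT PORTS_TEXT -> not-listening | loopback-only | firewall-blocked | open
triage() {
  case "$(listen_scope "$1" "$2")" in
    none)     echo "not-listening" ;;     # service down, or a different server.port
    loopback) echo "loopback-only" ;;     # server.host / network.host still local
    remote)   if fw_allows "$1" "$3"; then echo "open"; else echo "firewall-blocked"; fi ;;
  esac
}

SAMPLE_SS='State  Recv-Q Send-Q Local Address:Port       Peer Address:Port
LISTEN 0      511    0.0.0.0:5601             0.0.0.0:*
LISTEN 0      4096   [::ffff:127.0.0.1]:9200  *:*
LISTEN 0      4096   [::1]:9200               *:*
LISTEN 0      80     *:3306                   *:*'
SAMPLE_FW='10000/tcp 3306/tcp 9200/tcp'

for port in 5601 9200 3306 5044; do
  echo "$port: $(triage "$port" "$SAMPLE_SS" "$SAMPLE_FW")"
done
```

On a live host the call would be `triage 5601 "$(ss -ltn)" "$(sudo firewall-cmd --list-ports)"`.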

I followed these steps to connect remote Elasticsearch on AWS EC2 to my local kibana.
Backup your original .yml files
sudo cp /etc/elasticsearch/elasticsearch.yml elasticsearch.yml.bk
sudo cp /etc/kibana/kibana.yml kibana.yml.bk
Edit security groups and add a new rule - custom TCP with port 9200 accessible via your public IP v4.
ssh to your server and tweak ufw to allow your ip over 9200:
sudo ufw allow from <your public v4 IP> to any port 9200
edit elasticsearch.yml to add:
network.host: 0.0.0.0
discovery.type: single-node
on your local machine edit kibana.yml and add elasticsearch.hosts: ["http://34.103.134.135:9200"]
go to http://localhost:5601/ you should see your remote index under Discover> index management.

Related

How do I curl my elasticsearch on AWS EC2

I installed elasticsearch(docker) 8.2 on aws ec2(ubuntu 20.04.)
Everything is working. My only problem is that I can't reach (curl) it from other instances and my backend server (it is on the same VPC).
I added my node to its discovery node, and also set network.host: 0.0.0.0
but I still can't reach it
(I tried with both private and public ip)
Is it necessary to install SSL/TLS on it with Elastic 8?
Does anyone have any suggestion how to access it?
Looks like you forgot to bind the docker container port to the host port. You need to add the config below to your Elasticsearch container docker yml:
ports:
  - "9202:9200"  # bind 9200 port of host to docker port of 9200, 9200 is the Elasticsearch port by default
After that you should be able to do the curl from other instances in the VPC.

Kibana is not accessible from external browser

I am trying to install Elasticsearch and Kibana on Debian 10. They both work and are active on the current machine. I want to access Kibana through a different machine's browser, but it cannot be reached.
I have changed the kibana.yml configuration:
server.port: 5601
server.host: "IP"  # my IP address
elasticsearch.hosts: ["http://IP:9200"]
Also, I have enabled ports 5601 and 9200 through the firewall by using UFW commands.
Even so, it is still not working. Any idea how to fix that?
Thanks,
almo

Kibana startup fails with License information and later with Unable to retrieve version information

I tried to follow this guideline for installing ELK on CentOS 8 (on top of one AWS cluster).
After installing elastic and kibana, the kibana startup failed with:
*"message":"License information could not be obtained from Elasticsearch
I googled it, and realized I should use OSS version (latest is 7.10.2)
so make sure to install only OSS version. you can use this guideline
after that, I got new error from kibana.log
-08T07:19:32Z","tags":["error","savedobjects-service"],"pid":62767,"message":"Unable to retrieve version information from Elasticsearch nodes."}
I tried to google it, but no solution worked for me.
my kibana.yaml:
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.hosts: "[my public AWS instance ip:9200]"
my elasticsearch.yaml:
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 0.0.0.0
http.port: 9200
discovery.seed_hosts: "[my private AWS instance ip]"
cluster.initial_master_nodes: "[my private AWS instance ip]"
Update:
If I'm changing this line in kibana.yaml file to:
elasticsearch.hosts: "http://localhost:9200"
Then it works. what is the root cause? why it can't access elastic public IP but only local?
Per @leandrojmp's comment, the issue was indeed with the public IP in elasticsearch.hosts. Once I replaced it with my private IP, it worked.
also:
When installing the Elastic Stack, you must use the same version across the entire stack. For example, if you are using Elasticsearch 7.9.3, you install Beats 7.9.3, APM Server 7.9.3, Elasticsearch Hadoop 7.9.3, Kibana 7.9.3, and Logstash 7.9.3.
Using docker, I had to specify the elasticsearch.hosts as an environment variable: -e "ELASTICSEARCH_HOSTS=http://localhost:9200", so:
docker run -p 5601:5601 -e "ELASTICSEARCH_HOSTS=http://localhost:9200" arm64v8/kibana:7.16.3
Set elasticsearch.hosts ipaddress as local system's host ipaddress in kibana.yml file. Also you need to mount local kibana.yml file while running docker container.
docker run -d --name kibana -p 5601:5601 -v /home/users/mySystemUserName/config/kibana.yml:/opt/kibana/config/kibana.yml kibana:7.16.3
Add the below configs in
kibana.yml
server.name: kibana
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.hosts: [ "http://192.168.0.102:9200" ]

Can't connect to remote ElasticSearch server using local machine's IP Address

I can't connect to ElasticSearch on my Digital Ocean droplet using my local machine's IP Address.
I got the IP Address by: Terminal > ipconfig getifaddr en0
- With that result, let's say: 100.888.777.99
I logged into my droplet by running: ssh username@111.222.3.444
Updated my UFW Rules by running: sudo ufw allow 9200 from 100.888.777.99
From my local machine I ran: curl -X GET 'http://111.222.3.444:9200'
And received: curl: (7) Failed to connect to 111.222.3.444 port 9200: Operation timed out
What am I doing wrong?
Things I've tried:
Changing the network.host variable in elasticsearch/elasticsearch.yml
network.host: 0.0.0.0 (also this is a security risk since ip addresses are allowed)
Restarting the server
sudo /etc/init.d/elasticsearch restart
Adding more variables to elasticsearch/elasticsearch.yml
transport.host: localhost
transport.tcp.port: 9300
http.port: 9200
I found that when I changed the UFW Rules to allow all connection to port 9200, I was able to connect to ElasticSearch from my local machine, but without that, it would not connect.
sudo ufw allow 9200
After some deep diving I found the issue was that the IP address that was returned by Terminal wasn't the correct one to use. I had to use the Public IP Address which I got from https://www.whatismyip.com/, you can also get this by:
Terminal > curl ifconfig.me
So when I removed the old UFW rule: 9200 ALLOW IN 100.888.777.99
And used the Public IP Address: sudo ufw allow 9200 from Public_IP_Address it connected.
I'm still not sure why my machine's IP Address doesn't work though...

Elasticsearch 2.3 enable access from outer ip

I have an Elasticsearch installed on my host.
Request from localhost works fine
curl -X GET http://localhost:9200/
But how can I configure elasticsearch.yml in order to connect from one outer ip?
In the elasticsearch.yml file, locate the line #network.host:, uncomment it (remove the "#") and change it to network.host: 0.0.0.0
Then add the exception to the firewall and reload it (In my case I use UFW, so I ran sudo ufw allow 9200 and sudo ufw reload)
Obs: Tested on version: 2.3
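
Several answers on this page pair a bind to `0.0.0.0` with a ufw rule, and whether that rule names a source address decides who can reach port 9200 or 5601. The script below is an added example, not part of any answer here. It reads the bind setting from the config text and the matching `ufw status` rules and reports the combination. Function names and all sample text are constructed, and it assumes ufw is active with its default deny-incoming policy and no cloud security group in front.

```sh
# Added example: is a port bound to all interfaces AND allowed from any source?
# Sample config and `ufw status` text below are constructed for illustration.

# yml_get KEY TEXT: value of a flat "key: value" line; commented lines are ignored.
yml_get() {
  printf '%s\n' "$2" | awk -v key="$1" '
    /^[ \t]*#/ { next }
    { line = $0; sub(/^[ \t]+/, "", line) }
    index(line, key ":") == 1 {
      val = substr(line, length(key) + 2)
      sub(/[ \t]+#.*$/, "", val)            # drop a trailing comment
      gsub(/^[ \t"]+|[ \t"]+$/, "", val)    # trim spaces and double quotes
    }
    END { print val }'
}

# ufw_sources PORT TEXT: the "From" column of every ALLOW rule for PORT.
ufw_sources() {
  printf '%s\n' "$2" | awk -v port="$1" '
    { to = $1; sub(/\/tcp$/, "", to) }
    to == port && /ALLOW/ {
      sub(/^.*ALLOW( IN)?[ \t]+/, ""); sub(/[ \t]+$/, ""); print
    }'
}

# exposure PORT KEY YML_TEXT UFW_TEXT -> one of the four labels below
exposure() {
  case "$(yml_get "$2" "$3")" in
    ""|localhost|127.*|::1|_local_) echo "loopback-only"; return ;;
  esac
  src=$(ufw_sources "$1" "$4")
  case "$src" in
    "")         echo "bound-but-firewalled" ;;
    *Anywhere*) echo "exposed-to-any-source" ;;
    *)          echo "restricted-to-listed-sources" ;;
  esac
}

ES_YML='#network.host: 192.168.0.1
network.host: 0.0.0.0'
KIBANA_YML='server.port: 5601
server.host: "0.0.0.0"   # all interfaces'
UFW_STATUS='To                         Action      From
9200                       ALLOW       Anywhere
5601/tcp                   ALLOW       203.0.113.7
9200 (v6)                  ALLOW       Anywhere (v6)'

echo "elasticsearch 9200: $(exposure 9200 network.host "$ES_YML" "$UFW_STATUS")"
echo "kibana 5601: $(exposure 5601 server.host "$KIBANA_YML" "$UFW_STATUS")"
```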
