Kibana startup fails with "License information could not be obtained" and later with "Unable to retrieve version information" - elasticsearch

I tried to follow this guide for installing ELK on CentOS 8 (on a single AWS instance).
After installing Elasticsearch and Kibana, the Kibana startup failed with:
"message":"License information could not be obtained from Elasticsearch"
I googled it and realized I should use the OSS version (latest is 7.10.2), so make sure to install only the OSS version; you can use this guideline.
After that, I got a new error in kibana.log:
-08T07:19:32Z","tags":["error","savedobjects-service"],"pid":62767,"message":"Unable to retrieve version information from Elasticsearch nodes."}
I tried to google it, but no solution worked for me.
My kibana.yml:
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.hosts: "[my public AWS instance ip:9200]"
My elasticsearch.yml:
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 0.0.0.0
http.port: 9200
discovery.seed_hosts: "[my private AWS instance ip]"
cluster.initial_master_nodes: "[my private AWS instance ip]"
Update:
If I change this line in the kibana.yml file to:
elasticsearch.hosts: "http://localhost:9200"
then it works. What is the root cause? Why can't Kibana reach Elasticsearch via the public IP, but only via localhost?

Per @leandrojmp's comment, the issue was indeed the public IP in elasticsearch.hosts. Once I replaced it with my private IP, it worked.
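For reference, a minimal sketch of the working kibana.yml, assuming a hypothetical private IP of 172.31.10.5 (substitute your instance's own private address):
server.port: 5601
server.host: "0.0.0.0"
# Point Kibana at an address that is routable from the Kibana host;
# inside the VPC that is the instance's private IP, not the public one.
elasticsearch.hosts: ["http://172.31.10.5:9200"]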

Also, from the Elastic Stack installation docs:
When installing the Elastic Stack, you must use the same version across the entire stack. For example, if you are using Elasticsearch 7.9.3, you install Beats 7.9.3, APM Server 7.9.3, Elasticsearch Hadoop 7.9.3, Kibana 7.9.3, and Logstash 7.9.3.
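One quick way to confirm the versions actually line up (a sketch, assuming an RPM-based install such as CentOS):
# Version reported by the running Elasticsearch node
curl -s http://localhost:9200 | grep '"number"'
# Version of the installed Kibana package
rpm -q kibana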

Using Docker, I had to specify elasticsearch.hosts as an environment variable: -e "ELASTICSEARCH_HOSTS=http://localhost:9200", so:
docker run -p 5601:5601 -e "ELASTICSEARCH_HOSTS=http://localhost:9200" arm64v8/kibana:7.16.3

Set the elasticsearch.hosts IP address to your local system's host IP address in the kibana.yml file. You also need to mount the local kibana.yml file when running the Docker container (the official 7.x image reads its config from /usr/share/kibana/config):
docker run -d --name kibana -p 5601:5601 -v /home/users/mySystemUserName/config/kibana.yml:/usr/share/kibana/config/kibana.yml kibana:7.16.3
Add the below configs in kibana.yml:
server.name: kibana
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.hosts: [ "http://192.168.0.102:9200" ]
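Once the container is up, a quick sanity check is Kibana's status endpoint (adjust host/port if you mapped them differently):
# Returns Kibana's status JSON, including whether it can reach Elasticsearch
curl -s http://localhost:5601/api/status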

Related

How do I curl my elasticsearch on AWS EC2

I installed Elasticsearch (Docker) 8.2 on AWS EC2 (Ubuntu 20.04).
Everything is working. My only problem is that I can't reach (curl) it from other instances or from my backend server (it is in the same VPC).
I added my node to its discovery nodes and also set network.host: 0.0.0.0,
but I still can't reach it
(I tried with both the private and public IP).
Is it necessary to install SSL/TLS on it with Elastic 8?
Does anyone have any suggestions on how to access it?
Looks like you forgot to bind the Docker container port to a host port. You need to add the config below to your Elasticsearch container's Docker Compose file:
ports:
  - "9202:9200"   # binds host port 9202 to container port 9200 (9200 is Elasticsearch's default port)
After that you should be able to curl it from the other instances in the VPC, on the host port you mapped.
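A minimal sketch of where that goes in a docker-compose.yml (the service name and single-node setting are assumptions; adapt to your setup):
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.2.0
    environment:
      - discovery.type=single-node   # assumption: a single-node setup
    ports:
      - "9202:9200"   # host port 9202 -> container port 9200 (Elasticsearch HTTP)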

Kibana is not accessible from external browser

I am trying to install Elasticsearch and Kibana on Debian 10. They both work and are active on the current machine. I want to access Kibana through a different machine's browser, but it cannot be reached.
I have changed the kibana.yml configuration:
server.port: 5601
server.host: "IP" - my IP address
elasticsearch.hosts: ["http://IP:9200"]
Also, I have enabled ports 5601 and 9200 through the firewall by using UFW commands.
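For reference, opening those ports with UFW typically looks like this (a sketch; adjust to your network):
# Allow the Kibana and Elasticsearch ports through the firewall
sudo ufw allow 5601/tcp
sudo ufw allow 9200/tcp
# Confirm the rules are active
sudo ufw status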
Even so, it is still not working. Any idea how to fix that?
Thanks,
almo

ElasticSearch Connection Timed Out in EC2 Instance

I am setting up an ELK Stack (which consists of Elasticsearch, Logstash, and Kibana) on a single AWS EC2 instance. I am following the documentation from the elastic.co site.
TL;DR; I cannot access my ElasticSearch interface hosted in an EC2 from the Web URL. How to fix that?
Type : m4.large
vCPU : 2
Memory : 8 GB
Storage: 25 GB (EBS)
Note : I have provisioned the EC2 instance inside a VPC and with an Elastic IP.
I have installed all 3 components. ElasticSearch and LogStash are running as services while Kibana is running via the command ./bin/kibana inside kibana-7.10.1-linux-x86_64/ directory.
When I curl the ElasticSearch endpoint using
curl http://localhost:9200
I get this JSON output. (Which means the service is running and is accessible via Port 9200).
However, when I try to access the same URL via my browser, I get an error saying
Connection Timed Out
Isn't this supposed to return the same JSON output as the one I've mentioned above?
I have attached the elasticsearch.yml file here (Hosted in gofile.io).
Here are the Inbound Rules for the EC2 instance.
EDIT: I tried changing network.host: 'localhost'
to network.host: 0.0.0.0 and restarted the service, but this time I got an error while starting the service. I attached a screenshot of that.
EDIT 2: I have uploaded the updated elasticsearch.yml to Gofile.org.
The problem is with the following lines in your elasticsearch.yml configuration file:
node.name: node-1
network.host: 'localhost'
With that configuration, your ES cluster is only accessible from the same host and not from the outside. According to the official documentation, you need to either specify 0.0.0.0 or a specific publicly accessible IP address, otherwise that won't work.
Note that you also need to configure the following two lines in order for the cluster to properly form:
discovery.seed_hosts: ["node-1-ip-address"]
# Bootstrap the cluster using an initial set of master-eligible nodes:
cluster.initial_master_nodes: ["node-1"]
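After updating the configuration, a quick way to check whether the node is reachable (a sketch; <instance-elastic-ip> is a placeholder for your Elastic IP, and the security group must allow port 9200):
# Restart Elasticsearch so the new settings take effect
sudo systemctl restart elasticsearch
# From another machine, the node should now answer with its JSON banner
curl http://<instance-elastic-ip>:9200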

How to change Elasticsearch network host

I've installed ES on my VM, which runs CentOS 7. Its network.host is bound to localhost, and I can browse it via port 9200.
My problem is that when I changed the network host to 0.0.0.0 (so I can get public access from my host PC),
the service started but the port is not listening.
I want to access ES from my host PC.
How can I change network.host?
I faced the same issue with Elasticsearch 7.3.0. I resolved it by putting the following
values in /etc/elasticsearch/elasticsearch.yml, as shown below:
network.host: 127.0.0.1
http.host: 0.0.0.0
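A note on why that split helps: network.host sets the default for both the transport and HTTP layers, while http.host overrides only the HTTP listener, so node-to-node transport stays on loopback while the REST API on 9200 is exposed. To confirm what is actually bound (a sketch, assuming a systemd install):
sudo systemctl restart elasticsearch
# HTTP (9200) should show 0.0.0.0 while transport (9300) stays on 127.0.0.1
sudo ss -tlnp | grep -E ':9200|:9300'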
If you are planning to set network.host to something other than the default (127.0.0.1), then change the following details in /etc/elasticsearch/elasticsearch.yml:
network.host: 0.0.0.0
discovery.seed_hosts: []
Looking at the Elasticsearch Network Settings documentation, network.host also accepts special values.
Try the special value _global_. So the section of your elasticsearch.yml might look like this:
network:
  host: _global_
This should tell Elasticsearch to listen on all network interfaces.
Since Elasticsearch 7.3, it's necessary to add the following lines:
cluster.initial_master_nodes: node-1
network.host: 0.0.0.0
If your application is running on AWS and Elasticsearch is running on a different host:
network.host: YOUR_AWS_PRIVATE_IP
This works for me.
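If you are not sure what the instance's private IP is, one way to look it up from the instance itself is the EC2 instance metadata endpoint (IMDSv2 configurations additionally require a token header):
# Prints this EC2 instance's primary private IPv4 address
curl -s http://169.254.169.254/latest/meta-data/local-ipv4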

How to remotely access Kibana in Elastic

I am currently trying to make my Kibana dashboard remotely accessible via the browser, so a user can monitor an index and run scripts remotely. As background, my Elastic stack currently runs on a Windows server, and I could successfully make the Elasticsearch search URI (e.g. http://[IP_ADDRESS]:9200) remotely accessible by updating elasticsearch.yml and opening port 9200. So I took similar steps to make Kibana remotely accessible, updating kibana.yml and opening port 5601, but I couldn't reach Kibana in the browser from my local machine; it throws ERR_CONNECTION_TIMED_OUT. See the attributes I have updated in kibana.yml:
server.port: "5601"
server.host: "0.0.0.0"
elasticsearch.url: "http://localhost:9200"
You need to configure the file /etc/kibana/kibana.yml as root:
Uncomment the lines:
# Kibana is served by a back end server. This setting specifies the port to use.
server.port: 5601
# To allow connections from remote users, set this parameter to a non-loopback address.
server.host: "0.0.0.0"
For elasticsearch.hosts, change <your-elastic-server-ip> to your Elasticsearch server IP, something like 192.168.1.XX:
# The URLs of the Elasticsearch instances to use for all your queries.
elasticsearch.hosts: ["http://<your-elastic-server-ip>:9200"]
And check the ports on your firewall:
$ sudo firewall-cmd --list-all
Output:
public (active)
target: default
icmp-block-inversion: no
interfaces: ens33
sources:
services: cockpit dhcpv6-client ftp ssh
ports: 10000/tcp 3306/tcp 9200/tcp 5601/tcp
protocols:
masquerade: no
forward-ports:
source-ports:
icmp-blocks:
rich rules:
If you don't see the ports 9200/tcp and 5601/tcp opened, then run the following commands as sudo:
$ sudo firewall-cmd --zone=public --permanent --add-port 9200/tcp
$ sudo firewall-cmd --zone=public --permanent --add-port 5601/tcp
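Rules added with --permanent only take effect after the firewall configuration is reloaded, so follow up with:
$ sudo firewall-cmd --reload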
I followed these steps to connect remote Elasticsearch on AWS EC2 to my local kibana.
Back up your original .yml files:
sudo cp /etc/elasticsearch/elasticsearch.yml elasticsearch.yml.bk
sudo cp /etc/kibana/kibana.yml kibana.yml.bk
Edit security groups and add a new rule: custom TCP on port 9200, accessible from your public IPv4 address.
SSH to your server and tweak UFW to allow your IP over 9200: sudo ufw allow from <your public v4 IP> to any port 9200
Edit elasticsearch.yml to add network.host: 0.0.0.0 and
discovery.type: single-node
On your local machine, edit kibana.yml and add elasticsearch.hosts: ["http://34.103.134.135:9200"]
Go to http://localhost:5601/ and you should see your remote index under Discover > Index Management.
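Before opening Kibana, it can also help to confirm the remote node is reachable from your local machine at all, using the same address as in kibana.yml:
# Should return the node's JSON banner if the security group and UFW rules are correct
curl http://34.103.134.135:9200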
