Can't access Kibana remotely - windows

I have installed Elasticsearch and Kibana on the same CentOS server. When running netstat -nlp | grep :5601 I get the result below:
tcp 0 0 0.0.0.0:5601 0.0.0.0:* LISTEN 27244/node
But I still can't access Kibana remotely. When I try to open "http://my_server_ip:5601" in a browser on my Windows client, I get something like this:
This page cannot be accessed
...(omitted)
ERR_CONNECTION_RESET
However, I can access ES from my Windows client in a browser using "http://my_server_ip:9200":
{
"name" : "VM-251-156-centos",
"cluster_name" : "elasticsearch",
"cluster_uuid" : "FsL8YI1mQAGqx3R0kffxbw",
"version" : {
"number" : "7.10.2",
"build_flavor" : "oss",
"build_type" : "rpm",
"build_hash" : "747e1cc71def077253878a59143c1f785afa92b9",
"build_date" : "2021-01-13T00:42:12.435326Z",
"build_snapshot" : false,
"lucene_version" : "8.7.0",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "You Know, for Search"
}
I have searched all day long; almost all the answers suggested editing kibana.yml and changing server.host to 0.0.0.0, but that doesn't work in my case.
My kibana.yml is like this (only list uncommented lines):
server.port: 5601
server.host: "0.0.0.0"
server.name: "http://kibana.example.com"
elasticsearch.hosts: ["http://my_server_ip:9200"]
And I have checked the firewall on the CentOS server using the command "firewall-cmd --state":
not running
And I have also confirmed that Kibana really is running on the CentOS server using "sudo systemctl status kibana":
● kibana.service - Kibana
Loaded: loaded (/etc/systemd/system/kibana.service; enabled; vendor preset: disabled)
Active: active (running) since Wed 2022-01-26 13:57:44 CST; 19h ago
Main PID: 27244 (node)
Tasks: 11
Memory: 80.6M
CGroup: /system.slice/kibana.service
└─27244 /usr/share/kibana/bin/../node/bin/node /usr/share/kibana/bin/../src/cli/dist
Any suggestions are appreciated.

Solved by changing "server.port: 5601" to "server.port: 5602" in kibana.yml.
It turns out the problem was that port 5601 on the CentOS server was blocked by the server provider for security reasons.
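One way to distinguish a provider-level block like this from a service problem is to probe the port from the client side with a raw TCP connection attempt. A minimal sketch in Python (the host name is a placeholder for your server's IP):

```python
import socket

def probe(host, port, timeout=3):
    """Try a TCP connection and report how it fails.
    'refused' usually means nothing is listening (or a REJECT firewall rule);
    'timeout' often points to a silent DROP by a firewall or provider block."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect((host, port))
        return "open"
    except ConnectionRefusedError:
        return "refused"
    except socket.timeout:
        return "timeout"
    except OSError as exc:
        return "error: %s" % exc
    finally:
        s.close()

if __name__ == "__main__":
    # Compare the port that works (9200) with the one that doesn't (5601).
    for port in (9200, 5601):
        print(port, probe("my_server_ip_here", port))
```

In this question's scenario, 9200 would report "open" while 5601 would likely report "timeout" (provider silently dropping packets) rather than "refused".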

Related

Accessing elasticsearch from a public domain name or IP

I am running Elasticsearch version 2.3.1 on Ubuntu Server 16.04.
I can access the Elastic API locally on the default host, as shown below:
curl -X GET 'http://localhost:9200'
{
"name" : "oxo-cluster-node",
"cluster_name" : "oxo-elastic-cluster",
"version" : {
"number" : "2.3.1",
"build_hash" : "bd980929010aef404e7cb0843e61d0665269fc39",
"build_timestamp" : "2016-04-04T12:25:05Z",
"build_snapshot" : false,
"lucene_version" : "5.5.0"
},
"tagline" : "You Know, for Search"
}
I need to be able to access Elasticsearch via my domain name or IP address.
I've tried adding the setting http.publish_host: my.domain to the config file, but the server refuses client HTTP connections. I am running the service on the default port 9200.
When I run
curl -X GET 'http://my.domain:9200'
the result is
curl: (7) Failed to connect to my.domain port 9200: Connection refused
My domain (my.domain) is publicly accessible on the internet and port 9200 is configured to accept connections from anywhere.
What am I missing?
First off, exposing an Elasticsearch node directly to the internet without protections in front of it is usually bad, bad news. Don't do it - especially older versions. You're going to end up with security problems in a hurry. I recommend using something like nginx to do basic authentication + HTTPS, and then to proxy_pass it to your locally-bound Elasticsearch instance. This gives you an encrypted and authenticated public connection to your server.
That said, see the networking config documentation. You want either network.host or network.bind_host. network.publish_host is the name that the node advertises to other nodes so that they can connect for clustering. You will also want to make sure that your firewall (iptables or similar) is set up to allow traffic on 9200, and that you don't have any upstream networking security preventing access to the machine (such as AWS security groups or DigitalOcean's networking firewalls).
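For reference, the settings mentioned above look like this in elasticsearch.yml (addresses are examples, not recommendations; bind widely only with a firewall or proxy in front):

```yaml
# Shortcut: sets both the bind and publish addresses
network.host: 0.0.0.0

# ...or control them separately:
#network.bind_host: 0.0.0.0       # interface(s) the node listens on
#network.publish_host: my.domain  # address advertised to other cluster nodes
```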

Kibana and Elasticsearch status is active, I can access Elasticsearch via browser but Kibana is giving an error [duplicate]

This question already has answers here:
How to retrieve unique count of a field using Kibana + Elastic Search
(7 answers)
Closed 3 years ago.
I installed Kibana and Elasticsearch on a Google Cloud instance; Elasticsearch is working fine, but when I hit the curl command for Kibana it gives an error message.
curl -X GET "10.128.0.26:9200/"
{
"name" : "node-1",
"cluster_name" : "ElasticsearchStaging",
"cluster_uuid" : "r-A1o-coQlWUWeoXIFa5gw",
"version" : {
"number" : "7.3.0",
"build_flavor" : "default",
"build_type" : "deb",
"build_hash" : "de777fa",
"build_date" : "2019-07-24T18:30:11.767338Z",
"build_snapshot" : false,
"lucene_version" : "8.1.0",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "You Know, for Search"
}
This is the message I am getting for Kibana:
curl -X GET "10.128.0.26:5601/"
Kibana server is not ready yet
kibana.log file:
07T10:17:47Z","tags":["warning","elasticsearch","admin"],"pid":6590,"message":"Unable to revive connection: http://0.0.0.0:9200/"}
07T10:17:47Z","tags":["warning","elasticsearch","admin"],"pid":6590,"message":"No living connections"}
07T10:17:47Z","tags":["warning","task_manager"],"pid":6590,"message":"PollError No Living connections"}
07T10:17:48Z","tags":["warning","elasticsearch","admin"],"pid":6590,"message":"Unable to revive connection: http://0.0.0.0:9200/"}
07T10:17:48Z","tags":["warning","elasticsearch","admin"],"pid":6590,"message":"No living connections"}
07T10:17:50Z","tags":["warning","elasticsearch","admin"],"pid":6590,"message":"Unable to revive connection: http://0.0.0.0:9200/"}
07T10:17:50Z","tags":["warning","elasticsearch","admin"],"pid":6590,"message":"No living connections"}
07T10:17:50Z","tags":["warning","task_manager"],"pid":6590,"message":"PollError No Living connections"}
07T10:17:51Z","tags":["warning","elasticsearch","admin"],"pid":6590,"message":"Unable to revive connection: http://0.0.0.0:9200/"}
07T10:17:51Z","tags":["warning","elasticsearch","admin"],"pid":6590,"message":"No living connections"}
07T10:17:53Z","tags":["warning","elasticsearch","admin"],"pid":6590,"message":"Unable to revive connection: http://0.0.0.0:9200/"}
07T10:17:53Z","tags":["warning","elasticsearch","admin"],"pid":6590,"message":"No living connections"}
07T10:17:53Z","tags":["warning","elasticsearch","admin"],"pid":6590,"message":"Unable to revive connection: http://0.0.0.0:9200/"}
07T10:17:53Z","tags":["warning","elasticsearch","admin"],"pid":6590,"message":"No living connections"}
07T10:17:53Z","tags":["warning","task_manager"],"pid":6590,"message":"PollError No Living connections"}
07T10:17:56Z","tags":["warning","elasticsearch","admin"],"pid":6590,"message":"Unable to revive connection: http://0.0.0.0:9200/"}
07T10:17:56Z","tags":["warning","elasticsearch","admin"],"pid":6590,"message":"No living connections"}
07T10:17:56Z","tags":["warning","elasticsearch","data"],"pid":6590,"message":"Unable to revive connection: http://0.0.0.0:9200/"}
07T10:17:56Z","tags":["warning","elasticsearch","data"],"pid":6590,"message":"No living connections"}
07T10:17:56Z","tags":["license","warning","xpack"],"pid":6590,"message":"License information from the X-Pack plugin could not be obtained from Elasticsearch for the [data] cluster. Error: No Living connections"}
07T10:17:56Z","tags":["warning","elasticsearch","admin"],"pid":6590,"message":"Unable to revive connection: http://0.0.0.0:9200/"}
07T10:17:56Z","tags":["warning","elasticsearch","admin"],"pid":6590,"message":"No living connections"}
07T10:17:56Z","tags":["warning","task_manager"],"pid":6590,"message":"PollError No Living connections"}
07T10:17:58Z","tags":["warning","elasticsearch","admin"],"pid":6590,"message":"Unable to revive connection: http://0.0.0.0:9200/"}
07T10:17:58Z","tags":["warning","elasticsearch","admin"],"pid":6590,"message":"No living connections"}
This is my kibana.yml file:
# Kibana is served by a back end server. This setting specifies the port to use.
server.port: 5601
# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
server.host: "10.128.0.26"
# Enables you to specify a path to mount Kibana at if you are running behind a proxy.
# Use the `server.rewriteBasePath` setting to tell Kibana if it should remove the basePath
# from requests it receives, and to prevent a deprecation warning at startup.
# This setting cannot end in a slash.
#server.basePath: ""
# Specifies whether Kibana should rewrite requests that are prefixed with
# `server.basePath` or require that they are rewritten by your reverse proxy.
# This setting was effectively always `false` before Kibana 6.3 and will
# default to `true` starting in Kibana 7.0.
#server.rewriteBasePath: false
# The maximum payload size in bytes for incoming server requests.
#server.maxPayloadBytes: 1048576
# The Kibana server's name. This is used for display purposes.
server.name: "ironman"
# The URLs of the Elasticsearch instances to use for all your queries.
elasticsearch.hosts: ["http://0.0.0.0:9200"]
# When this setting's value is true Kibana uses the hostname specified in the server.host
# setting. When the value of this setting is false, Kibana uses the hostname of the host
# that connects to this Kibana instance.
elasticsearch.preserveHost: true
# Kibana uses an index in Elasticsearch to store saved searches, visualizations and
# dashboards. Kibana creates a new index if the index doesn't already exist.
kibana.index: ".kibana"
I also tried ./bin/kibana and this is the message I get:
Kibana should not be run as root. Use --allow-root to continue.
After following the instruction and running
./bin/kibana --allow-root
I got this message:
log [09:53:13.146] [fatal][root] Error: Port 5601 is already in use. Another instance of Kibana may be running!
at Root.shutdown (/usr/share/kibana/src/core/server/root/index.js:67:18)
at Root.setup (/usr/share/kibana/src/core/server/root/index.js:46:18)
at process._tickCallback (internal/process/next_tick.js:68:7)
at Function.Module.runMain (internal/modules/cjs/loader.js:745:11)
at startup (internal/bootstrap/node.js:283:19)
at bootstrapNodeJSCore (internal/bootstrap/node.js:743:3)
FATAL Error: Port 5601 is already in use. Another instance of Kibana may be running!
Kibana can't connect to Elasticsearch at http://0.0.0.0:9200, because 0.0.0.0 is a bind address, not an address you can connect to. Point elasticsearch.hosts at the node's real IP instead:
elasticsearch.hosts: ["http://10.128.0.26:9200"]
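To rule this class of problem out quickly, you can check from the Kibana host whether the configured Elasticsearch URL is reachable at all. A small sketch (the URL is this question's example address):

```python
import json
import urllib.request

def es_reachable(url, timeout=2):
    """GET the Elasticsearch root endpoint; return the parsed info
    document on success, or None if the connection fails."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return json.load(resp)
    except OSError:
        return None

if __name__ == "__main__":
    # With elasticsearch.hosts set to 0.0.0.0 this returns None;
    # with the node's real IP it should return the cluster info JSON.
    print(es_reachable("http://10.128.0.26:9200/"))
```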

How to access an Elasticsearch stored in a Docker container from outside?

I'm currently running Elasticsearch (ES) 5.5 inside a Docker container. (See below.)
curl -XGET 'localhost:9200'
{
"name" : "THbbezM",
"cluster_name" : "elasticsearch",
"cluster_uuid" : "CtYdgNUzQrS5YRTRT7xNJw",
"version" : {
"number" : "5.5.0",
"build_hash" : "260387d",
"build_date" : "2017-06-30T23:16:05.735Z",
"build_snapshot" : false,
"lucene_version" : "6.6.0"
},
"tagline" : "You Know, for Search"
}
I've changed the elasticsearch.yml file to look like this:
http.host: 0.0.0.0
# Uncomment the following lines for a production cluster deployment
#transport.host: 0.0.0.0
#discovery.zen.minimum_master_nodes: 1
network.host: 0.0.0.0
http.port: 9200
I can currently get my indexes through curl -XGET commands. The thing is that I want to be able to make HTTP requests to this ES instance from my machine (Mac OS X) using its IP address instead of 'localhost:9200'.
So, what I've tried already:
1) I tried doing it in Postman and got the following response:
Could not get any response
There was an error connecting to X.X.X.X:9200/.
Why this might have happened:
The server couldn't send a response:
Ensure that the backend is working properly
Self-signed SSL certificates are being blocked:
Fix this by turning off 'SSL certificate verification' in Settings > General
Client certificates are required for this server:
Fix this by adding client certificates in Settings > Certificates
Request timeout:
Change request timeout in Settings > General
2) I also tried in Sense (Plugin for Chrome):
Request failed to get to the server (status code: 0):
3) Running curl from my machine's terminal won't do it either.
What am I missing here?
Docker for Mac provides a DNS name you can use:
docker.for.mac.localhost
You should use the value specified under container_name in the YML file to connect to your cluster. Example:
services:
  elasticsearch:
    container_name: 'example_elasticsearch'
    image: 'docker.elastic.co/elasticsearch/elasticsearch:6.6.1'
In this case, Elasticsearch is located at http://example_elasticsearch:9200. Note that example_elasticsearch is the name of the container and may be used the same way as a machine name or host name.
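Note that the container name only resolves from other containers on the same Docker network. To reach the node from the host machine (or remotely) you also need to publish the port, e.g. in the compose file (a sketch, reusing the names from the example above):

```yaml
services:
  elasticsearch:
    container_name: 'example_elasticsearch'
    image: 'docker.elastic.co/elasticsearch/elasticsearch:6.6.1'
    ports:
      - "9200:9200"   # host_port:container_port, so localhost:9200 works from the host
```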

When I set network.bind_host: localhost, it doesn't work

I installed Elasticsearch on AWS Ubuntu 14.04 following the install document.
I changed some settings in elasticsearch.yml, from:
#network.bind_host: 192.168.0.1
to
network.bind_host: localhost
The document told me localhost is better for security.
When I restart Elasticsearch:
sudo service elasticsearch restart
* Stopping Elasticsearch Server [ OK ]
* Starting Elasticsearch Server
and when I send a curl request:
sudo curl -X GET 'http://localhost:9200'
curl: (7) Failed to connect to localhost port 9200: Connection refused
So I changed network.bind_host:
network.bind_host: 0.0.0.0
and
sudo curl -X GET 'http://localhost:9200'
{
"status" : 200,
"name" : "****",
"cluster_name" : "elasticsearch_***",
"version" : {
"number" : "1.7.2",
"build_hash" : "e12r1r33r3fs593f7202acbd816e8ec",
"build_timestamp" : "2015-09-14T09:49:53Z",
"build_snapshot" : false,
"lucene_version" : "4.10.4"
},
"tagline" : "You Know, for Search"
}
But I think 0.0.0.0 is dangerous when I put my web site into production.
Can somebody please help me?
My guess is that you should set network.host to localhost or 127.0.0.1 in your elasticsearch.yml:
network.host: localhost <-- try 127.0.0.1 as well
Normally network.bind_host defaults to your network.host. You could have a look at this for more details.
And just in case, try adding http.port as well, so that you can make sure you can access the ES port. It would look something like this:
http.port: 9200
Hope it helps!
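The distinction all these answers turn on, binding to 127.0.0.1 versus 0.0.0.0, can be demonstrated with plain sockets; a short sketch:

```python
import socket

def bound_address(bind_host):
    """Bind a listening TCP socket and report the address it listens on."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind((bind_host, 0))   # port 0 lets the OS pick a free port
    s.listen(1)
    host, _port = s.getsockname()
    s.close()
    return host

# A socket bound to 127.0.0.1 only accepts connections from the same machine;
# 0.0.0.0 means "all interfaces", so remote clients can connect as well.
print(bound_address("127.0.0.1"))  # -> 127.0.0.1
print(bound_address("0.0.0.0"))    # -> 0.0.0.0
```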

Installed elastic search on server but cannot connect to it if from another machine

Context
I have just started using Elasticsearch. I installed it on a server; I can curl and telnet to port 9200 on the local machine (the server), but cannot connect to it from another machine.
I disabled the firewall on both the server and the client, as the solutions I found on the internet suggested, and also tried the suggestions in the link below, but couldn't get it working.
https://discuss.elastic.co/t/accessing-port-9200-remotely/21840
Question
Can someone help me get this working? Thanks in advance.
Since you just installed Elasticsearch, I suppose you're using ES 2.0 or 2.1. You need to know that since the 2.0 release, Elasticsearch binds to localhost by default (as a security measure to prevent your node from connecting to other nodes on the network without you knowing it).
So what you need to do is simply to edit your elasticsearch.yml configuration file and change the network.bind_host setting like this:
network.bind_host: 0
Then, you need to restart your node and it will be accessible from a remote host.
Let's recreate your scenario. I started a freshly installed Elasticsearch on my machine. Now I am able to curl port 9200:
[root@kali ~]# hostname -i
192.168.109.128
[root@kali ~]# curl http://localhost:9200
{
"status" : 200,
"name" : "Kali Node",
"cluster_name" : "kali",
"version" : {
"number" : "1.7.1",
"build_hash" : "b88f43fc40b0bcd7f173a1f9ee2e97816de80b19",
"build_timestamp" : "2015-07-29T09:54:16Z",
"build_snapshot" : false,
"lucene_version" : "4.10.4"
},
"tagline" : "You Know, for Search"
}
If you check the listening TCP ports that the java process has opened on your server:
[root@kali ~]# netstat -ntlp | awk '/[j]ava/'
tcp6 0 0 127.0.0.1:9200 :::* LISTEN 3422/java
tcp6 0 0 127.0.0.1:9300 :::* LISTEN 3422/java
You can see Elasticsearch is listening on 127.0.0.1, so it is obvious that you can't access port 9200 from the network. Let's verify it using wget from a remote machine:
$ wget.exe 192.168.109.128:9200
--2015-12-25 13:30:18-- http://192.168.109.128:9200/
Connecting to 192.168.109.128:9200... failed: Connection refused.
Let's change the Elasticsearch configuration to fix the issue, using the command below:
[root@kali ~]# sed -i '/^network.bind_host:/s/network.bind_host: .*/network.bind_host: 0.0.0.0/' /etc/elasticsearch/elasticsearch.yml
or
just open the Elasticsearch configuration file, find "network.bind_host", and make the change below:
network.bind_host: 0.0.0.0
then restart your Elasticsearch service:
[root@kali ~]# service elasticsearch restart
Restarting elasticsearch (via systemctl): [ OK ]
Now let's check the listening TCP ports of java:
[root@kali ~]# netstat -ntlp | awk '/[j]ava/'
tcp6 0 0 :::9200 :::* LISTEN 3759/java
tcp6 0 0 :::9300 :::* LISTEN 3759/java
Now you can see it is listening on all interfaces.
Let's try the wget command from the remote machine:
$ wget.exe 192.168.109.128:9200
--2015-12-25 13:39:12-- http://192.168.109.128:9200/
Connecting to 192.168.109.128:9200... connected.
HTTP request sent, awaiting response... 200 OK
Length: 328 [application/json]
Saving to: ‘index.html.1’
index.html.1 100%[====================================================>] 328 --.-KB/s in 0.009s
2015-12-25 13:39:12 (37.1 KB/s) - ‘index.html.1’ saved [328/328]
Try the curl command:
$ curl.exe http://192.168.109.128:9200
{
"status" : 200,
"name" : "Kali Node",
"cluster_name" : "kali",
"version" : {
"number" : "1.7.1",
"build_hash" : "b88f43fc40b0bcd7f173a1f9ee2e97816de80b19",
"build_timestamp" : "2015-07-29T09:54:16Z",
"build_snapshot" : false,
"lucene_version" : "4.10.4"
},
"tagline" : "You Know, for Search"
}
For Elasticsearch version 5, you can edit the configuration file /etc/elasticsearch/elasticsearch.yml and add the following lines:
network.bind_host: 0
http.cors.allow-origin: "*"
http.cors.enabled: true
CORS is needed for plugins like Head or HQ on a remote machine, because they make Ajax XMLHttpRequest calls.
You can also define network.host: 0, since it is a shortcut that sets both bind_host and publish_host.
Sources:
https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-http.html
https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-network.html
I had the same issue and this worked for me:
In /etc/elasticsearch/elasticsearch.yml:
Remove network.host (I believe this should only be used if you are accessing locally)
http.host: 192.168.1.50 (the IP of the server)
http.port: 9200
In /etc/kibana/kibana.yml:
server.host: "192.168.1.50"
elasticsearch.hosts: ["http://192.168.1.50:9200"]
In your nginx file /etc/nginx/sites-available/yoursite.com
server {
    listen 80;
    server_name 192.168.1.50;

    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/htpasswd.users;

    location / {
        proxy_pass http://192.168.1.50:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
Then restart all services and WAIT for a few minutes - I wasn't patient enough the first few attempts and wondered why it kept failing:
systemctl restart elasticsearch
systemctl restart kibana
systemctl restart nginx
After waiting for a few minutes, check that the ports are now on the correct IPs.
netstat -nltp
It should now look something like:
192.168.1.50:5601
192.168.1.50:9200
Test by trying to telnet from the remote machine:
telnet 192.168.1.50 9200
Now you are all set to access remotely or set up auditbeat etc.