"curl: (52) Empty reply from server" / timeout when querying ElastiscSearch - elasticsearch

I've run into an annoying issue with my Elasticsearch (version 1.5.2): queries immediately returned a timeout (when I used Python's Requests) or
curl: (52) Empty reply from server
when I used curl.
This only happened when the expected output was large. When I sent a similar (but smaller) query, it came back just fine.
What's going on here, and how can I overcome this?

Just open
sudo nano /etc/elasticsearch/elasticsearch.yml
and set this setting to false:
# Enable security features
xpack.security.enabled: false
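The node must then be restarted for the change to take effect; assuming a systemd-based package install (which the /etc/elasticsearch path suggests):
sudo systemctl restart elasticsearch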

Another explanation can be making a plain HTTP request when SSL/security is activated on the cluster.
In this case, use
curl -X GET "https://localhost:9200/_cluster/health?wait_for_status=yellow&timeout=50s&pretty" --key certificates/elasticsearch-ca.pem -k -u elasticuser
As stated by @FanchenBao, one can read the docs on ELK with SSL.

I ran into the same issue on Elasticsearch 8.1.3, which was the latest version at the time.
I fixed this issue by changing the following setting from true to false in the /config/elasticsearch.yml file:
# Enable security features
xpack.security.enabled: false
I installed Elasticsearch by downloading the tar file and unzipping it, then going to the Elasticsearch folder and running the following command:
./bin/elasticsearch
The first time you run this command, it appends the following content to the elasticsearch.yml file, i.e. an auto-generated default security configuration:
#----------------------- BEGIN SECURITY AUTO CONFIGURATION -----------------------
#
# The following settings, TLS certificates, and keys have been automatically
# generated to configure Elasticsearch security features on 01-05-2022 06:59:12
#
# --------------------------------------------------------------------------------
# Enable security features
xpack.security.enabled: true
xpack.security.enrollment.enabled: true
# Enable encryption for HTTP API client connections, such as Kibana, Logstash, and Agents
xpack.security.http.ssl:
  enabled: true
  keystore.path: certs/http.p12
# Enable encryption and mutual authentication between cluster nodes
xpack.security.transport.ssl:
  enabled: true
  verification_mode: certificate
  keystore.path: certs/transport.p12
  truststore.path: certs/transport.p12
# Create a new cluster with the current node only
# Additional nodes can still join the cluster later
cluster.initial_master_nodes: ["DaMings-MacBook-Pro.local"]
# Allow HTTP API connections from localhost and local networks
# Connections are encrypted and require user authentication
http.host: [_local_, _site_]
# Allow other nodes to join the cluster from localhost and local networks
# Connections are encrypted and mutually authenticated
#transport.host: [_local_, _site_]
#----------------------- END SECURITY AUTO CONFIGURATION -------------------------

This issue was caused by Elasticsearch running out of memory: it simply can't hold all the requested documents in memory. Unfortunately, there's no explicit error code for this case.
There are a bunch of options to work around this (besides adding more memory):
You can tell Elasticsearch not to attach the source by specifying "_source": false. The results would then just list the matching documents (and you would need to retrieve them separately).
You could use "source filtering" to return just part of each document, if you don't need the whole thing; that worked for me (see the sketch below).
You can also split your query into several sub-queries. Not pretty, but it would do the trick.
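For example, a minimal source-filtering request with curl; my_index and the title field are hypothetical placeholders for your own index and fields:
curl 'http://localhost:9200/my_index/_search?pretty' -d '
{
  "_source": ["title"],
  "query": { "match_all": {} }
}'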

When running in Docker, you can disable security by setting the environment variable xpack.security.enabled to false, e.g. in docker-compose.yml:
environment:
  - xpack.security.enabled=false
  - discovery.type=single-node
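The equivalent with plain docker run might look like this (the image tag is an assumption; use whichever 8.x tag you actually run):
docker run -d --name elasticsearch -p 9200:9200 \
  -e "discovery.type=single-node" \
  -e "xpack.security.enabled=false" \
  docker.elastic.co/elasticsearch/elasticsearch:8.4.0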

Version 6.2 introduced stricter request checking.
For example:
curl -XPUT -H'Content-Type: application/json' 'http://localhost:9200/us/user/2?pretty=1' -d '{"email" : "mary@jones.com", "name" : "Mary Jones","username" : "@mary"}'
curl: (52) Empty reply from server
If you remove the =1:
curl -XPUT -H'Content-Type: application/json' 'http://localhost:9200/us/user/2?pretty' -d '{"email" : "mary@jones.com", "name" : "Mary Jones","username" : "@mary"}'
{
  "_index" : "us",
  "_type" : "user",
  "_id" : "2",
  "_version" : 1,
  "result" : "created",
  "_shards" : {
    "total" : 2,
    "successful" : 1,
    "failed" : 0
  },
  "_seq_no" : 0,
  "_primary_term" : 1
}
It works!

In my case, it was because the URL scheme https:// was missing from the endpoint URL.


Elasticsearch showing received plaintext http traffic on an https channel in console

I am trying to set up Elasticsearch on my Windows system, but when I run it, it starts up and shows the response below when I browse to http://localhost:9200.
{
  "name" : "DESKTOP-L8UKCFI",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "z8IfZcFaQfSti3P4jhZxbg",
  "version" : {
    "number" : "8.1.0",
    "build_flavor" : "default",
    "build_type" : "zip",
    "build_hash" : "3700f7679f7d95e36da0b43762189bab189bc53a",
    "build_date" : "2022-03-03T14:20:00.690422633Z",
    "build_snapshot" : false,
    "lucene_version" : "9.0.0",
    "minimum_wire_compatibility_version" : "7.17.0",
    "minimum_index_compatibility_version" : "7.0.0"
  },
  "tagline" : "You Know, for Search"
}
But the console shows something like this:
[2022-03-16T11:26:12,307][WARN ][o.e.x.s.t.n.SecurityNetty4HttpServerTransport] [DESKTOP-L8UKCFI] received plaintext http traffic on an https channel, closing connection Netty4HttpChannel{localAddress=/[0:0:0:0:0:0:0:1]:9200, remoteAddress=/[0:0:0:0:0:0:0:1]:5996}
[2022-03-16T11:31:56,806][WARN ][o.e.x.s.t.n.SecurityNetty4HttpServerTransport] [DESKTOP-L8UKCFI] http client did not trust this server's certificate, closing connection Netty4HttpChannel{localAddress=/[0:0:0:0:0:0:0:1]:9200, remoteAddress=/[0:0:0:0:0:0:0:1]:6215}
elasticsearch.yml
# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
# Before you set out to tweak and tune the configuration, make sure you
# understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
#cluster.name: my-application
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
#path.data: /path/to/data
#
# Path to log files:
#
#path.logs: /path/to/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# By default Elasticsearch is only accessible on localhost. Set a different
# address here to expose this node on the network:
#
#network.host: 192.168.0.1
#
# By default Elasticsearch listens for HTTP traffic on the first free port it
# finds starting at 9200. Set a specific HTTP port here:
#
#http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.seed_hosts: ["host1", "host2"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
#cluster.initial_master_nodes: ["node-1", "node-2"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Allow wildcard deletion of indices:
#
#action.destructive_requires_name: false
#----------------------- BEGIN SECURITY AUTO CONFIGURATION -----------------------
#
# The following settings, TLS certificates, and keys have been automatically
# generated to configure Elasticsearch security features on 16-03-2022 06:55:18
#
# --------------------------------------------------------------------------------
# Enable security features
xpack.security.enabled: false
xpack.security.enrollment.enabled: false
# Enable encryption for HTTP API client connections, such as Kibana, Logstash, and Agents
xpack.security.http.ssl:
  enabled: false
  keystore.path: certs/http.p12
# Enable encryption and mutual authentication between cluster nodes
xpack.security.transport.ssl:
  enabled: false
  verification_mode: certificate
  keystore.path: certs/transport.p12
  truststore.path: certs/transport.p12
# Create a new cluster with the current node only
# Additional nodes can still join the cluster later
cluster.initial_master_nodes: ["DESKTOP-L8UKCFI"]
# Allow HTTP API connections from localhost and local networks
# Connections are encrypted and require user authentication
http.host: [_local_, _site_]
# Allow other nodes to join the cluster from localhost and local networks
# Connections are encrypted and mutually authenticated
#transport.host: [_local_, _site_]
#----------------------- END SECURITY AUTO CONFIGURATION -------------------------
What does this mean? Can someone let me know?
As of ES 8, SSL/TLS is ON by default for HTTP clients.
The WARN message says
http client did not trust this server's certificate
... which means that you need to tell your browser to trust the server certificate. It is self-signed by default, so that's probably the reason.
Or you can simply disable SSL in your elasticsearch.yml configuration; that would also work.
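If you keep security enabled instead, you can trust the auto-generated CA from the command line; a sketch assuming an archive (tar/zip) install, where the CA certificate lives at config/certs/http_ca.crt under the Elasticsearch home directory:
curl --cacert config/certs/http_ca.crt -u elastic https://localhost:9200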
As @Val has already answered the question above, I'm just posting the config for new users who want to disable SSL.
# --------------------------------------------------------------------------------
# Enable security features
xpack.security.enabled: false
xpack.security.enrollment.enabled: false
# Enable encryption for HTTP API client connections, such as Kibana, Logstash, and Agents
xpack.security.http.ssl:
  enabled: false
  keystore.path: certs/http.p12
# Enable encryption and mutual authentication between cluster nodes
xpack.security.transport.ssl:
  enabled: false
  verification_mode: certificate
  keystore.path: certs/transport.p12
  truststore.path: certs/transport.p12
# Create a new cluster with the current node only
# Additional nodes can still join the cluster later
Add this to your environment variables:
- xpack.security.enabled=false
Full:
b-elastic:
  image: docker.elastic.co/elasticsearch/elasticsearch:8.4.0-arm64
  container_name: b-elastic
  environment:
    - discovery.type=single-node
    - ES_JAVA_OPTS=-Xms750m -Xmx750m
    - xpack.security.enabled=false
  volumes:
    - ./:/project
  ports:
    - 9200:9200
Another way is to simply run Elasticsearch as
./elasticsearch -E xpack.security.enabled=false
This runs it with security (and therefore SSL) disabled, allowing you to make plain HTTP connections to it.
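A quick sanity check that plain HTTP now works (assuming the default port):
curl http://localhost:9200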
http client did not trust this server's certificate, closing connection Netty4HttpChannel{localAddress=/[0:0:0:0:0:0:0:1]:9200, remoteAddress=/[0:0:0:0:0:0:0:1]:54479}
This simply means your browser does not trust the server's certificate, so use https instead of http, i.e. https://localhost:9200/, and it will work. I got this solution from the internet.

Accessing elasticsearch from a public domain name or IP

I'm running Elasticsearch version 2.3.1 on Ubuntu Server 16.04.
I can access the Elasticsearch API locally on the default host, as shown below:
curl -X GET 'http://localhost:9200'
{
  "name" : "oxo-cluster-node",
  "cluster_name" : "oxo-elastic-cluster",
  "version" : {
    "number" : "2.3.1",
    "build_hash" : "bd980929010aef404e7cb0843e61d0665269fc39",
    "build_timestamp" : "2016-04-04T12:25:05Z",
    "build_snapshot" : false,
    "lucene_version" : "5.5.0"
  },
  "tagline" : "You Know, for Search"
}
I need to be able to access Elasticsearch via my domain name or IP address.
I've tried adding the setting http.publish_host: my.domain to the config file, but the server refuses client HTTP connections. I'm running the service on the default port 9200.
When I run
curl -X GET 'http://my.domain:9200'
the result is
curl: (7) Failed to connect to my.domain port 9200: Connection refused
My domain (my.domain) is publicly accessible on the internet, and port 9200 is configured to accept connections from anywhere.
What am I missing?
First off, exposing an Elasticsearch node directly to the internet without protections in front of it is usually bad, bad news. Don't do it - especially older versions. You're going to end up with security problems in a hurry. I recommend using something like nginx to do basic authentication + HTTPS, and then to proxy_pass it to your locally-bound Elasticsearch instance. This gives you an encrypted and authenticated public connection to your server.
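For illustration, a minimal nginx server block along those lines; every path and name here is a placeholder, and this is a sketch rather than a hardened production config:
server {
    listen 443 ssl;
    server_name my.domain;

    # placeholder cert/key paths; point these at your own files
    ssl_certificate     /etc/ssl/certs/my.domain.crt;
    ssl_certificate_key /etc/ssl/private/my.domain.key;

    location / {
        # basic auth; create the file with: htpasswd -c /etc/nginx/.htpasswd someuser
        auth_basic           "Elasticsearch";
        auth_basic_user_file /etc/nginx/.htpasswd;

        # forward to the locally-bound Elasticsearch instance
        proxy_pass http://127.0.0.1:9200;
    }
}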
That said, see the networking config documentation. You want either network.host or network.bind_host. network.publish_host is the name that the node advertises to other nodes so that they can connect for clustering. You will also want to make sure that your firewall (iptables or similar) is set up to allow traffic on 9200, and that you don't have any upstream networking security preventing access to the machine (such as AWS security groups or DigitalOcean's networking firewalls).
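If you do want direct exposure despite the warning above, a minimal sketch for ES 2.x (ufw is just one example of a firewall frontend):
# in /etc/elasticsearch/elasticsearch.yml: bind to all interfaces
network.host: 0.0.0.0
Then open the port, e.g.:
sudo ufw allow 9200/tcp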

Not able to retrieve normalized CPU utilization percentage using elasticsearch and metricbeat

I am trying to retrieve the normalized percentage of CPU utilization using the query below.
curl -H "Content-Type: application/json" -X POST http://localhost:12001/metricbeat*/_search?pretty=true -d '{"query":{"bool":{"must": [{"range": {"system.cpu.total.norm.pct": {"gte": 0.1}}},{"range": {"#timestamp": {"gte": "now-10m","lte": "now/m"}}}]}}}'
I want the normalized percentage for the last 10 minutes, but I am not getting any data. Below is the response.
{
  "took" : 1,
  "timed_out" : false,
  "_shards" : {
    "total" : 8,
    "successful" : 8,
    "skipped" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : 0,
    "max_score" : null,
    "hits" : [ ]
  }
}
However, if I query Elasticsearch with "system.cpu.total.pct", I do get data. Also, I updated the CPU configuration with cpu.metrics: ["percentages", "normalized_percentages", "ticks"].
Can anyone let me know why the normalized query is not working?
Below is my metricbeat.reference.yml configuration.
- module: system
  metricsets:
    - cpu              # CPU usage
    - load             # CPU load averages
    - memory           # Memory usage
    - network          # Network IO
    - process          # Per process metrics
    - process_summary  # Process summary
    - uptime           # System Uptime
    - core             # Per CPU core usage
    #- diskio          # Disk IO
    - filesystem       # File system usage for each mountpoint
    #- fsstat          # File system summary metrics
    #- raid            # Raid
    #- socket          # Sockets and connection info (linux only)
  enabled: true
  period: 10s
  processes: ['.*']
  # Configure the metric types that are included by these metricsets.
  cpu.metrics: ["percentages", "normalized_percentages", "ticks"]
  core.metrics: ["percentages"]  # The other available option is ticks.
Elasticsearch module:
- module: elasticsearch
  metricsets:
    - node
    - node_stats
    #- index
    #- index_recovery
    #- index_summary
    #- shard
    #- ml_job
  period: 10s
  hosts: ["localhost:8881"]
I have enabled Kibana as the output host:
# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:
  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify and additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  host: "localhost:8882"
What version of Metricbeat are you using? system.cpu.total.norm.pct is a relatively new field and not present in older Metricbeat versions.
Either the field system.cpu.total.norm.pct is itself not part of your Metricbeat version, or the condition requiring system.cpu.total.norm.pct to be greater than 0.1 is not fulfilled.
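A quick way to check whether the field is present at all is an exists query; a diagnostic sketch using the same host and index pattern as above:
curl -H "Content-Type: application/json" -X POST 'http://localhost:12001/metricbeat*/_search?pretty=true' -d '{"size": 1, "query": {"exists": {"field": "system.cpu.total.norm.pct"}}}'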
If you only want to retrieve one field, it should go in the includes section under _source rather than in the condition itself, like below:
curl -H "Content-Type: application/json" -X POST http://localhost:12001/metricbeat*/_search?pretty=true -d '{ "_source": {"includes": [ "system.cpu.total.norm.pct"]},"query":{"bool":{"must": [{"range": {"#timestamp": {"gte": "now-10m","lte": "now/m"}}}]}}}'
Explanation of the Metricbeat modules configuration
Since there were changes in metricbeat.reference.yml, let me explain how Metricbeat reads the configs for its input modules.
Metricbeat by default reads metricbeat.yml.
In this file, the first section is metricbeat.config.modules, which defines all the input modules it should use:
metricbeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml
so it includes all files under the modules.d directory that have the .yml suffix.
By default only system.yml is active; the other files have the .disabled extension, so they are not part of the active modules:
modules.d/aerospike.yml.disabled
modules.d/apache.yml.disabled
modules.d/ceph.yml.disabled
modules.d/couchbase.yml.disabled
modules.d/docker.yml.disabled
modules.d/dropwizard.yml.disabled
modules.d/elasticsearch.yml.disabled
modules.d/envoyproxy.yml.disabled
modules.d/etcd.yml.disabled
modules.d/golang.yml.disabled
modules.d/graphite.yml.disabled
modules.d/haproxy.yml.disabled
modules.d/http.yml.disabled
modules.d/jolokia.yml.disabled
modules.d/kafka.yml.disabled
modules.d/kibana.yml.disabled
modules.d/kubernetes.yml.disabled
modules.d/kvm.yml.disabled
modules.d/logstash.yml.disabled
modules.d/memcached.yml.disabled
modules.d/mongodb.yml.disabled
modules.d/munin.yml.disabled
modules.d/mysql.yml.disabled
modules.d/nginx.yml.disabled
modules.d/php_fpm.yml.disabled
modules.d/postgresql.yml.disabled
modules.d/prometheus.yml.disabled
modules.d/rabbitmq.yml.disabled
modules.d/redis.yml.disabled
modules.d/system.yml
modules.d/traefik.yml.disabled
modules.d/uwsgi.yml.disabled
modules.d/vsphere.yml.disabled
modules.d/windows.yml.disabled
modules.d/zookeeper.yml.disabled
As you can see, only system.yml lacks the .disabled extension, so it fulfills the include pattern path: ${path.config}/modules.d/*.yml and is included.
You can list all enabled and disabled modules by running the command below:
$ ./metricbeat modules list
Enabled:
system
Disabled:
aerospike
apache
ceph
couchbase
docker
dropwizard
elasticsearch
envoyproxy
etcd
golang
graphite
haproxy
http
jolokia
kafka
kibana
kubernetes
kvm
logstash
memcached
mongodb
munin
mysql
nginx
php_fpm
postgresql
prometheus
rabbitmq
redis
traefik
uwsgi
vsphere
windows
zookeeper
and enable or disable modules by running commands such as the following:
$ ./metricbeat modules
Manage configured modules

Usage:
  metricbeat modules [command]

Available Commands:
  disable     Disable one or more given modules
  enable      Enable one or more given modules
  list        List modules

Flags:
  -h, --help   help for modules

Global Flags:
  -E, --E setting=value      Configuration overwrite
  -c, --c string             Configuration file, relative to path.config (default "metricbeat.yml")
  -d, --d string             Enable certain debug selectors
  -e, --e                    Log to stderr and disable syslog/file output
      --path.config string   Configuration path
      --path.data string     Data path
      --path.home string     Home path
      --path.logs string     Logs path
      --plugin pluginList    Load additional plugins
      --strict.perms         Strict permission checking on config files (default true)
  -v, --v                    Log at INFO level
So if you want to enable the kafka module, it can be done as below:
$ ./metricbeat modules enable kafka
Enabled kafka
and then check that it's active:
$ ./metricbeat modules list
Enabled:
kafka
system
Disabled:
aerospike
apache
ceph
couchbase
docker
dropwizard
elasticsearch
envoyproxy
etcd
golang
graphite
haproxy
http
jolokia
kibana
kubernetes
kvm
logstash
memcached
mongodb
munin
mysql
nginx
php_fpm
postgresql
prometheus
rabbitmq
redis
traefik
uwsgi
vsphere
windows
zookeeper
Once you run the above command, it will rename the file modules.d/kafka.yml.disabled to modules.d/kafka.yml, and you can then modify the configuration under modules.d/kafka.yml.
I hope this explanation helps with changing Metricbeat configurations.

How to access an Elasticsearch stored in a Docker container from outside?

I'm currently running Elasticsearch (ES) 5.5 inside a Docker container. (See below.)
curl -XGET 'localhost:9200'
{
  "name" : "THbbezM",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "CtYdgNUzQrS5YRTRT7xNJw",
  "version" : {
    "number" : "5.5.0",
    "build_hash" : "260387d",
    "build_date" : "2017-06-30T23:16:05.735Z",
    "build_snapshot" : false,
    "lucene_version" : "6.6.0"
  },
  "tagline" : "You Know, for Search"
}
I've changed the elasticsearch.yml file to look like this:
http.host: 0.0.0.0
# Uncomment the following lines for a production cluster deployment
#transport.host: 0.0.0.0
#discovery.zen.minimum_master_nodes: 1
network.host: 0.0.0.0
http.port: 9200
I can currently get my indexes through curl -XGET commands. The thing is that I want to be able to make HTTP requests to this ES instance using its IP address instead of 'localhost:9200', starting from my machine (Mac OS X).
So, what I've tried already:
1) I've tried doing it in Postman, getting the following response:
Could not get any response
There was an error connecting to X.X.X.X:9200/.
Why this might have happened:
The server couldn't send a response:
Ensure that the backend is working properly
Self-signed SSL certificates are being blocked:
Fix this by turning off 'SSL certificate verification' in Settings > General
Client certificates are required for this server:
Fix this by adding client certificates in Settings > Certificates
Request timeout:
Change request timeout in Settings > General
2) I also tried in Sense (Plugin for Chrome):
Request failed to get to the server (status code: 0):
3) Running curl from my machine's terminal won't do it either.
What am I missing here?
Docker for Mac provides a DNS name you can use:
docker.for.mac.localhost
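Note that this name resolves from inside containers to the host machine, so it is useful when a container needs to reach a port published on the host, e.g.:
curl http://docker.for.mac.localhost:9200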
You should use the value specified under container_name in the YML file to connect to your cluster. Example:
services:
  elasticsearch:
    container_name: 'example_elasticsearch'
    image: 'docker.elastic.co/elasticsearch/elasticsearch:6.6.1'
In this case, Elasticsearch is located at http://example_elasticsearch:9200. Note that example_elasticsearch is the name of the container and may be used the same way as a machine name or hostname.
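Keep in mind the container name only resolves from other containers on the same Docker network, e.g.:
curl http://example_elasticsearch:9200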

Can not connect to kibana via remote connection

I have installed Kibana 5.4 and Elasticsearch 5.4 on a server. I'm able to access both Kibana and Elasticsearch via curl on the local machine. Using
curl localhost:5601
I get the following response
var hashRoute = '/app/kibana';
var defaultRoute = '/app/kibana';
var hash = window.location.hash;
if (hash.length) {
  window.location = hashRoute + hash;
} else {
  window.location = defaultRoute;
}
For Elasticsearch,
curl localhost:9200
I get the following response
{ "name" : "mVgeyM4", "cluster_name" : "elasticsearch",
"cluster_uuid" : "ABV1adpCTY--e7Ib2PIBBQ", "version" : {
"number" : "5.4.0",
"build_hash" : "780f8c4",
"build_date" : "2017-04-28T17:43:27.229Z",
"build_snapshot" : false,
"lucene_version" : "6.5.0" }, "tagline" : "You Know, for Search" }
Following is my kibana.yml
# Kibana is served by a back end server. This setting specifies the port to use.
#server.port: 5601
# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
#server.host: "9.51.154.45:5601"
# Enables you to specify a path to mount Kibana at if you are running behind a proxy. This only affects
# the URLs generated by Kibana, your proxy is expected to remove the basePath value before forwarding requests
# to Kibana. This setting cannot end in a slash.
#server.basePath: ""
# The maximum payload size in bytes for incoming server requests.
#server.maxPayloadBytes: 1048576
# The Kibana server's name. This is used for display purposes.
#server.name: "gtsdms.pok.ibm.com"
# The URL of the Elasticsearch instance to use for all your queries.
#elasticsearch.url: "http://localhost:9200"
# When this setting's value is true Kibana uses the hostname specified in the server.host
# setting. When the value of this setting is false, Kibana uses the hostname of the host
# that connects to this Kibana instance.
#elasticsearch.preserveHost: true
# Kibana uses an index in Elasticsearch to store saved searches, visualizations and
# dashboards. Kibana creates a new index if the index doesn't already exist.
#kibana.index: ".kibana"
# The default application to load.
#kibana.defaultAppId: "discover"
# If your Elasticsearch is protected with basic authentication, these settings provide
# the username and password that the Kibana server uses to perform maintenance on the Kibana
# index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
# is proxied through the Kibana server.
#elasticsearch.username: "user"
#elasticsearch.password: "pass"
# Enables SSL and paths to the PEM-format SSL certificate and SSL key files, respectively.
# These settings enable SSL for outgoing requests from the Kibana server to the browser.
#server.ssl.enabled: false
#server.ssl.certificate: /path/to/your/server.crt
#server.ssl.key: /path/to/your/server.key
# Optional settings that provide the paths to the PEM-format SSL certificate and key files.
# These files validate that your Elasticsearch backend uses the same key files.
#elasticsearch.ssl.certificate: /path/to/your/client.crt
#elasticsearch.ssl.key: /path/to/your/client.key
# Optional setting that enables you to specify a path to the PEM file for the certificate
# authority for your Elasticsearch instance.
#elasticsearch.ssl.certificateAuthorities: [ "/path/to/your/CA.pem" ]
# To disregard the validity of SSL certificates, change this setting's value to 'none'.
#elasticsearch.ssl.verificationMode: full
# Time in milliseconds to wait for Elasticsearch to respond to pings. Defaults to the value of
# the elasticsearch.requestTimeout setting.
#elasticsearch.pingTimeout: 1500
# Time in milliseconds to wait for responses from the back end or Elasticsearch. This value
# must be a positive integer.
#elasticsearch.requestTimeout: 30000
# List of Kibana client-side headers to send to Elasticsearch. To send *no* client-side
# headers, set this value to [] (an empty list).
#elasticsearch.requestHeadersWhitelist: [ authorization ]
# Header names and values that are sent to Elasticsearch. Any custom headers cannot be overwritten
# by client-side headers, regardless of the elasticsearch.requestHeadersWhitelist configuration.
#elasticsearch.customHeaders: {}
# Time in milliseconds for Elasticsearch to wait for responses from shards. Set to 0 to disable.
#elasticsearch.shardTimeout: 0
# Time in milliseconds to wait for Elasticsearch at Kibana startup before retrying.
#elasticsearch.startupTimeout: 5000
# Specifies the path where Kibana creates the process ID file.
#pid.file: /var/run/kibana.pid
# Enables you specify a file where Kibana stores log output.
#logging.dest: stdout
# Set the value of this setting to true to suppress all logging output.
#logging.silent: false
# Set the value of this setting to true to suppress all logging output other than error messages.
#logging.quiet: false
# Set the value of this setting to true to log all events, including system usage information
# and all requests.
#logging.verbose: false
# Set the interval in milliseconds to sample system and process performance
# metrics. Minimum is 100ms. Defaults to 5000.
#ops.interval: 5000
# The default locale. This locale can be used in certain circumstances to substitute any missing
# translations.
#i18n.defaultLocale: "en"
But I am unable to access it from a remote host, either via curl or a web browser. One more thing: there are no errors in Kibana's kibana.stderr log file. What am I doing wrong?
You have to specify the server.host parameter in the kibana.yml file.
I have server.host: 0.0.0.0 and it works fine. By default Kibana only listens on localhost (the loopback address); binding to 0.0.0.0 makes it accessible from the outside.
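A minimal sketch of the change in kibana.yml (restart Kibana afterwards):
# Listen on all interfaces instead of only localhost
server.host: "0.0.0.0"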
The Kibana server reads properties from the kibana.yml file on startup. The default settings configure Kibana to run on localhost:5601. To change the host or port number, or connect to Elasticsearch running on a different machine, you'll need to update your kibana.yml file. You can also enable SSL and set a variety of other options.
elasticsearch.url (default: "http://localhost:9200"): the URL of the Elasticsearch instance to use for all your queries.
