I have a problem with the remote address of Elasticsearch and its REST API (when getting search results).
I'm using an ELK stack created by JHipster (Logstash + Elasticsearch + Kibana). When I use the REST search API (via cURL) with the external server address, I get fewer results than when I use localhost:
$ curl -X GET "http://localhost:9200/logstash-*/_search?q=Method:location"
{"took":993,"timed_out":false,"num_reduce_phases":13,"_shards":
{"total":6370,"successful":6370,"skipped":0,"failed":0},"hits":
{"total":8994099,"max_score":5.0447145,"hits":[..]}}
When executed from a different server, it returns a smaller number of shards and hits:
$ curl -X GET "http://SERVER_URL/logstash-*/_search?q=Method:location"
{"took":10,"timed_out":false,"_shards":
{"total":120,"successful":120,"skipped":0,"failed":0},"hits":
{"total":43,"max_score":7.5393815,"hits":[..]}}
If I create an SSH tunnel, it works:
ssh -L 9201:SERVER_URL:9200 elk-stack
and now:
$ curl -X GET "localhost:9201/logstash-*/_search?q=Method:location"
{"took":640,"timed_out":false,"num_reduce_phases":13,"_shards":
{"total":6370,"successful":6370,"skipped":0,"failed":0},"hits":
{"total":8995082,"max_score":5.0447145,"hits":[..]}}
So there must be some problem with accessing the data from outside localhost, but I can't find the configuration option to change it (maybe some kind of default behaviour to prevent data leakage when accessing from a remote host?).
You should configure the host Elasticsearch binds to. In the config/elasticsearch.yml file, put this line:
network.host: 0.0.0.0
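After restarting the node, you can sanity-check that the remote address really reaches the same cluster (and not some other instance) via the _cat API; a quick probe, using the SERVER_URL placeholder from the question:
$ curl "http://SERVER_URL:9200/_cat/indices/logstash-*?v"
If the index list or shard counts differ between localhost and SERVER_URL, the two addresses are being served by different nodes or clusters.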
I have downloaded Elasticsearch 8.1 on my Ubuntu machine. After a successful installation, when I execute
curl -u elastic https://127.0.0.1:9200 -k
it shows the expected Elasticsearch response. But when I hit http://127.0.0.1:9200/ or http://localhost:9200 in my browser, it does not return a response.
After installation, I added network.host: 127.0.0.1 to elasticsearch.yml.
Can anybody help me understand why it is not working in the browser?
I am using Ubuntu 20 and following this doc.
As of version 8.0, Elasticsearch security is turned on by default and SSL/TLS is required for HTTP communications.
You can disable HTTP security if you want, but that's discouraged.
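The curl call in the question works because it uses https:// together with -k. The browser needs the https:// scheme too (you will have to click through the self-signed certificate warning), or you can verify against the CA certificate Elasticsearch generates at install time; the certificate path below is the usual location for a .deb install and is an assumption:
$ curl --cacert /etc/elasticsearch/certs/http_ca.crt -u elastic https://localhost:9200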
I am using the Windows platform; the steps are the same. When you run elasticsearch.bat in cmd, use the secure HTTPS URL for Elasticsearch: https://localhost:9200/. To find the username and password, scroll through the cmd window running Elasticsearch. After logging in to Elasticsearch, hurray!
That said, the easiest solution is to use the Docker images of the ELK stack instead of downloading Elasticsearch, Logstash, and Kibana and running them on the local machine.
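A minimal single-node sketch of that approach (the image tag is an assumption, chosen to match the 8.1 from the question; adjust as needed):
$ docker run -d --name es01 -p 9200:9200 -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:8.1.0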
I am running Elasticsearch and Kibana. I am not sure of the status of my Elasticsearch cluster (whether it is red, yellow, or green), but it seems I need a token generated by Elasticsearch, as in the screenshot. When I ran bin/elasticsearch-create-enrollment-token --scope kibana from the right directory, it errored out with ERROR: Failed to determine the health of the cluster.
According to Ioannis Kakavas on discuss.elastic.co, "CLI tools extending BaseRunAsSuperuserCommand should only connect to the local node". When I ran it on a local node, it worked, but when I ran it in the Elasticsearch container in a cluster, it didn't. The solution was to execute the elasticsearch-reset-password and elasticsearch-create-enrollment-token scripts, respectively, like this (inside the Elasticsearch container):
/usr/share/elasticsearch/bin/elasticsearch-reset-password -i -u elastic --url https://localhost:9200
/usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s kibana --url https://localhost:9200
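If that succeeds, the token can then be handed to Kibana; a sketch using the kibana-setup helper that ships with Kibana 8 (the path assumes the official Kibana container):
/usr/share/kibana/bin/kibana-setup --enrollment-token <token>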
I encountered the same problem, and I just redid the process: unzipped the Elasticsearch and Kibana zip files again and ran bin/elasticsearch in the newly created directory. Look for a message enclosed in a formatted box that contains both the password for the elastic user and the enrollment token for Kibana (the token is only valid for 30 minutes). This message only appears the first time you run Elasticsearch.
I then ran bin/kibana for Kibana, configured it in the browser, and everything worked from there. Hope this helps!
I had the exact same issue:
$ sudo /usr/share/elasticsearch/bin/elasticsearch-reset-password -u elastic
ERROR: Failed to determine the health of the cluster.
But after I restart the elasticsearch service:
$ sudo systemctl restart elasticsearch.service
then it works:
This tool will reset the password of the [elastic] user to an autogenerated value.
The password will be printed in the console.
Please confirm that you would like to continue [y/N]y
Password for the [elastic] user successfully reset.
New value: xxxxxx
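You can then verify the new password right away (this mirrors the curl call from the question above; -k skips certificate verification):
$ curl -k -u elastic https://localhost:9200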
Two possible solutions:
Make sure that you have enough disk space (a quick check is sketched below).
Your VPN might be causing the issue.
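For the disk-space point, something like this (the data path and credentials are assumptions for a default .deb install):
$ df -h /var/lib/elasticsearch
$ curl -k -u elastic https://localhost:9200/_cluster/health?pretty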
The enrollment token will be present in the terminal itself; you just need to scroll up until you find it from when you were installing.
The reason for ERROR: Failed to determine the health of the cluster is that Elasticsearch is not up yet; running that command is like calling a function before defining it.
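If the token has scrolled out of the terminal buffer, you can simply regenerate it once the node is actually up, using the same command from the question:
$ bin/elasticsearch-create-enrollment-token --scope kibana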
I run an Elasticsearch server on Windows, and I want to access it from WSL2. I followed the WSL2 docs and did this:
cat /etc/resolv.conf
# This file was automatically generated by WSL. To stop automatic generation of this file, add the following entry to /etc/wsl.conf:
# [network]
# generateResolvConf = false
nameserver 172.18.32.1
and then:
curl -XGET "172.18.32.1:9200"
but it times out.
On Windows, Elasticsearch runs normally.
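For what it's worth, this looks like the same binding issue as the first question above: by default Elasticsearch binds only to localhost on the Windows side, and WSL2 lives on a separate virtual network, so 172.18.32.1:9200 is unreachable. A sketch of the usual fix, in config/elasticsearch.yml on the Windows side (same caveats as above):
network.host: 0.0.0.0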
I'm running Elasticsearch 2.3.3 and am looking to set up a cluster so that I can simulate a production-ready setup. This is set up on two Azure VMs with Docker.
I'm looking at the /_cluster/settings API to allow myself to update settings. According to the Elasticsearch documentation, it should be possible to update settings on clusters.
I've run on each machine the command:
docker run -d --name elastic -p 9200:9200 -p 9300:9300 elasticsearch --cluster.name=api-update-test
so now each machine sees itself as the one master and data node in a one-machine cluster. I then made a PUT request to one of them to tell it where to find discovery.zen.ping.unicast.hosts and to update discovery.zen.minimum_master_nodes, with the following command (in PowerShell):
curl `
  -Method PUT `
  -Body '{"persistent":
    {"discovery.zen.minimum_master_nodes":2,
     "discovery.zen.ping.unicast.hosts":["<machine-one-ip>:9300"]}
  }' `
  -ContentType application/json `
  -Uri http://<machine-two-ip>:9200/_cluster/settings
The response invariably comes back with a 200 status, but confirms the original settings: {"acknowledged":true,"persistent":{},"transient":{}}
Why won't Elasticsearch respect this request and update these settings? It should be noted that this also happens when I use the precise content of the sample request in the documentation.
I always used this approach:
curl -XPUT "http://localhost:9200/_cluster/settings" -d'
{
"persistent": {
"discovery.zen.minimum_master_nodes": 2
}
}'
Also, only discovery.zen.minimum_master_nodes is dynamically updatable; discovery.zen.ping.unicast.hosts is a static setting, so it cannot be changed through /_cluster/settings.
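Static settings like that one have to go into elasticsearch.yml on each node (or onto the docker run command line, as in the question); a sketch for this two-node case:
discovery.zen.ping.unicast.hosts: ["<machine-one-ip>:9300", "<machine-two-ip>:9300"]
discovery.zen.minimum_master_nodes: 2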
QUESTION: how do I debug kibana? Is there an error log?
PROBLEM 1: kibana 4 won't stay up
PROBLEM 2: I don't know where/if kibana 4 is logging errors
DETAILS:
Here's me starting kibana, making a request to the port, getting nothing, and checking the service again. The service doesn't stay up, but I'm not sure why.
vagrant@default-ubuntu-1204:/opt/kibana/current/config$ sudo service kibana start
kibana start/running, process 11774
vagrant@default-ubuntu-1204:/opt/kibana/current/config$ curl -XGET 'http://localhost:5601'
curl: (7) couldn't connect to host
vagrant@default-ubuntu-1204:/opt/kibana/current/config$ sudo service kibana status
kibana stop/waiting
Here's the nginx log, reporting when I curl -XGET from port 80, which is forwarding to port 5601:
2015/06/15 17:32:17 [error] 9082#0: *11 connect() failed (111: Connection refused) while connecting to upstream, client: 127.0.0.1, server: kibana, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:5601/", host: "localhost"
UPDATE: I may have overthought this a bit. I'm still interested in ways to view the kibana log, however! Any suggestions are appreciated!
I've noticed that when I run kibana from the command-line, I see errors that are more descriptive than a "Connection refused":
vagrant@default-ubuntu-1204:/opt/kibana/current$ bin/kibana
{"@timestamp":"2015-06-15T22:04:43.344Z","level":"error","message":"Service Unavailable","node_env":"production","error":{"message":"Service Unavailable","name":"Error","stack":"Error: Service Unavailable\n at respond (/usr/local/kibana-4.0.2/src/node_modules/elasticsearch/src/lib/transport.js:235:15)\n at checkRespForFailure (/usr/local/kibana-4.0.2/src/node_modules/elasticsearch/src/lib/transport.js:203:7)\n at HttpConnector.<anonymous> (/usr/local/kibana-4.0.2/src/node_modules/elasticsearch/src/lib/connectors/http.js:156:7)\n at IncomingMessage.bound (/usr/local/kibana-4.0.2/src/node_modules/elasticsearch/node_modules/lodash-node/modern/internals/baseBind.js:56:17)\n at IncomingMessage.emit (events.js:117:20)\n at _stream_readable.js:944:16\n at process._tickCallback (node.js:442:13)\n"}}
{"@timestamp":"2015-06-15T22:04:43.346Z","level":"fatal","message":"Service Unavailable","node_env":"production","error":{"message":"Service Unavailable","name":"Error","stack":"Error: Service Unavailable\n at respond (/usr/local/kibana-4.0.2/src/node_modules/elasticsearch/src/lib/transport.js:235:15)\n at checkRespForFailure (/usr/local/kibana-4.0.2/src/node_modules/elasticsearch/src/lib/transport.js:203:7)\n at HttpConnector.<anonymous> (/usr/local/kibana-4.0.2/src/node_modules/elasticsearch/src/lib/connectors/http.js:156:7)\n at IncomingMessage.bound (/usr/local/kibana-4.0.2/src/node_modules/elasticsearch/node_modules/lodash-node/modern/internals/baseBind.js:56:17)\n at IncomingMessage.emit (events.js:117:20)\n at _stream_readable.js:944:16\n at process._tickCallback (node.js:442:13)\n"}}
vagrant@default-ubuntu-1204:/opt/kibana/current$
Kibana 4 logs to stdout by default. Here is an excerpt of the config/kibana.yml defaults:
# Enables you specify a file where Kibana stores log output.
# logging.dest: stdout
So when invoking it as a service, use the log-capture method of that init system. For example, on a Linux distribution using systemd (e.g. RHEL 7+):
journalctl -u kibana.service
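A couple of handy variants (standard journalctl flags) for following the log live or narrowing the window:
$ journalctl -u kibana.service -f
$ journalctl -u kibana.service --since "10 minutes ago"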
One way may be to modify the init scripts to use the --log-file option (if it still exists), but I think the proper solution is to configure your instance's YAML file. For example, add this to your config/kibana.yml:
logging.dest: /var/log/kibana.log
Note that the Kibana process must be able to write to the file you specify, or the process will die without any information (which can be quite confusing).
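A sketch of pre-creating the file with the right owner (the kibana user/group name is an assumption; package installs usually create it):
$ sudo touch /var/log/kibana.log
$ sudo chown kibana:kibana /var/log/kibana.log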
As for the --log-file option, I think this is reserved for CLI operations, rather than automation.
In Kibana 4.0.2 there is no --log-file option. If I start Kibana as a service with systemctl start kibana, I find the log in /var/log/messages.
It seems that you need to pass a flag "-l, --log-file"
https://github.com/elastic/kibana/issues/3407
Usage: kibana [options]
Kibana is an open source (Apache Licensed), browser based analytics and search dashboard for Elasticsearch.
Options:
-h, --help output usage information
-V, --version output the version number
-e, --elasticsearch <uri> Elasticsearch instance
-c, --config <path> Path to the config file
-p, --port <port> The port to bind to
-q, --quiet Turns off logging
-H, --host <host> The host to bind to
-l, --log-file <path> The file to log to
--plugins <path> Path to scan for plugins
If you use the init script to run Kibana as a service, you may need to customize it.
Kibana doesn't have a log file by default, but you can set one up using the log_file Kibana server property - https://www.elastic.co/guide/en/kibana/current/kibana-server-properties.html
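A sketch of that property in kibana.yml (the property name is per the linked server-properties doc; the path is an example):
log_file: /var/log/kibana.log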
For Kibana 6.x on Windows, edit the shortcut to pass "kibana -l <path to log file>"; the folder must exist.