I am running Elasticsearch and Kibana. I am not sure of the status of my Elasticsearch cluster (whether it is red, yellow, or green), but it seems I need to get a token generated by Elasticsearch, as in the screenshot. When I ran bin/elasticsearch-create-enrollment-token --scope kibana from the right directory, it errored out with ERROR: Failed to determine the health of the cluster.
According to Ioannis Kakavas on discuss.elastic.co, "CLI tools extending BaseRunAsSuperuserCommand should only connect to the local node". When I run on a local node, it works, but when I run it in the Elasticsearch container in a cluster, it doesn't. The solution was to execute the elasticsearch-reset-password and elasticsearch-create-enrollment-token scripts, respectively, like this (inside the elasticsearch container):
/usr/share/elasticsearch/bin/elasticsearch-reset-password -i -u elastic --url https://localhost:9200
/usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s kibana --url https://localhost:9200
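If Elasticsearch is running in Docker, you first need a shell inside the container; a minimal sketch, assuming the container is simply named elasticsearch (adjust the name to your setup):
docker exec -it elasticsearch /bin/bash
/usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s kibana --url https://localhost:9200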
I encountered the same problem, and I just redid the process: unzipped the ES and Kibana zip files again and ran bin/elasticsearch in the newly created directory. Look for a message enclosed in a formatted box that contains both the password for the elastic user and the enrollment token for Kibana (the token is only valid for 30 minutes). This message only appears once, the first time you run Elasticsearch.
I proceeded to run bin/kibana for Kibana and configured it in the browser, and everything worked out from there. Hope this helps!
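For reference, the rough sequence looks like this (the archive name and version are illustrative, not taken from the question):
tar -xzf elasticsearch-8.x.y-linux-x86_64.tar.gz
cd elasticsearch-8.x.y
bin/elasticsearch
# on the first start, note the generated elastic password and the Kibana enrollment token printed in the boxed message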
I had the exact same issue:
$ sudo /usr/share/elasticsearch/bin/elasticsearch-reset-password -u elastic
ERROR: Failed to determine the health of the cluster.
But after I restart the elasticsearch service:
$ sudo systemctl restart elasticsearch.service
then it works:
This tool will reset the password of the [elastic] user to an autogenerated value.
The password will be printed in the console.
Please confirm that you would like to continue [y/N]y
Password for the [elastic] user successfully reset.
New value: xxxxxx
Two possible solutions:
Make sure that you have enough disk space (a quick check is shown after this list).
Your VPN might be causing the issue.
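On the disk-space point, a quick check looks like this (the path is the default data directory for a package install and is an assumption here; adjust it to your path.data):
df -h /var/lib/elasticsearch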
The enrollment token will be present in the terminal output itself; you just need to scroll up until you find it from the installation run.
The reason for the error ERROR: Failed to determine the health of the cluster is that Elasticsearch has not been fully set up and started yet; running that command at this point is like calling a function without defining it.
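Before running the enrollment tool, it can help to confirm the node is actually up and answering; a sketch, where the URL, the -k flag, and the credentials all depend on your security/TLS setup:
curl -k -u elastic https://localhost:9200/_cluster/health?pretty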
Background: I'm working in an Ubuntu 20.04 environment, setting up Logstash servers to ship metrics to my Elastic cluster. With my relatively basic configuration, I'm able to have a Filebeat process send logs to a load balancer, which then spreads them across my Logstash servers and up to Elastic. This process works. I'd like to use the Logstash keystore to avoid passing sensitive variables to my logstash.yml file in plain text. In my environment, I'm able to follow the Elastic documentation to set up a password-protected keystore in the default location, add keys to it, and successfully list out those keys.
Problems: While the Logstash servers run successfully without the keystore, the moment I add the keystore and try to watch the log file on startup, the process never starts. It seems to keep attempting to restart without ever logging to logstash-plain.log. When trying to run the process in the foreground with this configuration, the error I received was the rather unhelpful:
Found a file at /etc/logstash/logstash.keystore,
but it is not a valid Logstash keystore
Troubleshooting Done: After trying some steps found in other issues, such as replacing the /etc/sysconfig/logstash creation with simply adding the password to /etc/default/logstash, the errors were a little more helpful, stating that the file permissions or password were incorrect. The logstash-keystore process itself was capable of creating and listing keys, so the password was correct, and the keystore itself was set to 0644. I tried multiple permissions configurations and was still unable to get Logstash to run as a process or in the foreground.
I'm still under the impression it's a permissions issue, but I don't know how to resolve it. Logstash runs as the logstash user, which should be able to read the keystore file since it's 0644 and housed in the same directory as logstash.yml.
Has anyone experienced something similar with Logstash & Ubuntu, or in a similar environment? If so, how did you manage to get past it? I'm open to ideas and would love to get this working.
Try running logstash-keystore as the logstash user:
sudo -u logstash /usr/share/logstash/bin/logstash-keystore \
--path.settings /etc/logstash list
[Aside from the usual caveats about secret obfuscation of this kind, it's worth making explicit that the docs expect logstash-keystore to be run as root, not as logstash. So after you're done troubleshooting, especially if you create a keystore owned by logstash, make sure it ultimately has permissions that are sufficiently restrictive]
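One more thing worth checking: when the keystore is password-protected, the logstash-keystore tool reads the password from the LOGSTASH_KEYSTORE_PASS environment variable, so it has to be present in the environment of whichever user actually runs the command. A sketch (mypassword is a placeholder, and putting secrets on the command line carries the usual shell-history caveats):
sudo -u logstash env LOGSTASH_KEYSTORE_PASS=mypassword /usr/share/logstash/bin/logstash-keystore --path.settings /etc/logstash list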
Alternatively, you could run some other command as the logstash user. To validate the permission hypothesis, you just need to read the file as user logstash:
sudo -u logstash file /etc/logstash/logstash.keystore
sudo -u logstash md5sum /etc/logstash/logstash.keystore
su logstash -c 'cat /etc/logstash/logstash.keystore > /dev/null'
# and so on
If, as you suspect, there is a permissions problem, and the read test fails, assemble the necessary data with these commands:
ls -dla /etc/logstash/{,logstash.keystore}
groups logstash
By this point you should know:
what groups logstash is in
what groups are able to open /etc/logstash
what groups are able to read /etc/logstash/logstash.keystore
And you already said the keystore's mode is 644. In all likelihood, logstash will be a member of the logstash group only, and /etc/logstash will be world readable. So the TL;DR version of this advice might be:
# set group on the keystore to `logstash`
chgrp logstash /etc/logstash/logstash.keystore
# ensure the keystore is group readable
chmod g+r /etc/logstash/logstash.keystore
If it wasn't permissions, you could try recreating the store without a password. If it then works, you'll want to be really careful about how you handle the password environment variable, and go over the docs with a fine-tooth comb.
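Concretely, recreating it without a password might look like this (a sketch; SOME_KEY is a placeholder for whatever secret name you use, and LOGSTASH_KEYSTORE_PASS must not be set when you run create, otherwise the new keystore is password-protected again):
sudo mv /etc/logstash/logstash.keystore /etc/logstash/logstash.keystore.bak   # keep a backup
sudo /usr/share/logstash/bin/logstash-keystore --path.settings /etc/logstash create
sudo /usr/share/logstash/bin/logstash-keystore --path.settings /etc/logstash add SOME_KEY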
In order to use the Sense plugin, I have some problems integrating Kibana with Elasticsearch. Everything else goes well: Elasticsearch and Kibana are installed properly on my machine.
When I run this command :
cd elasticsearch/bin/elasticsearch.bat
and then I go to http://localhost:9200/,
I got a success message.
When I run this command :
cd kibana/bin/kibana.bat
and then I go to http://localhost:5601/app/sense
I got a notification that
plugin:elasticsearch is not available.
This proves that my Elasticsearch is already running.
this is my kibana.yml
this is my elastic.yml
What's going wrong?
I am trying out Shield as a security measure for my Kibana and Elasticsearch. Running on Mac OS X 10.9.5
Followed the documentation from Elastic. Managed to install Shield. Since my Elasticsearch is running automatically, I skipped step 2 (start Elasticsearch).
For step 3, I tried adding an admin. I ran the following command in my terminal: bin/shield/esusers useradd admin -p password -r admin.
Unfortunately I'm getting this error.
Error: Could not find or load main class org.elasticsearch.shield.authc.esusers.tool.ESUsersTool
Below are the additional steps I took.
Double-checked that the bin/shield/esusers path existed and all.
Manually started Elasticsearch before adding users.
Tried a variety of different commands based on the documentation.
bin/shield/esusers useradd admin -r admin and
bin/shield/esusers useradd es_admin -r admin
Ran those commands with sudo
The same error was generated. I can't seem to find the problem on Google either. Not really sure what I'm missing here, as the documentation seems pretty straightforward.
You must restart the node because new Java classes were added to it (from the Shield plugin) and the JVM behind Elasticsearch needs to reload those classes. It can only do that if you restart it.
Kill the process and start it up again, or use curl -XPOST "http://localhost:9200/_shutdown" to shut the cluster down.
Also, the Shield plugin needs to be installed on all the nodes in the cluster.
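For reference, the per-node steps look roughly like this (this is the Elasticsearch 2.x plugin syntax; 1.x uses bin/plugin -i instead, so treat this as a sketch and match it to your version):
bin/plugin install license
bin/plugin install shield
# then restart the node so the JVM picks up the new classes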
QUESTION: how do I debug kibana? Is there an error log?
PROBLEM 1: kibana 4 won't stay up
PROBLEM 2: I don't know where/if kibana 4 is logging errors
DETAILS:
Here's me starting kibana, making a request to the port, getting nothing, and checking the service again. The service doesn't stay up, but I'm not sure why.
vagrant@default-ubuntu-1204:/opt/kibana/current/config$ sudo service kibana start
kibana start/running, process 11774
vagrant@default-ubuntu-1204:/opt/kibana/current/config$ curl -XGET 'http://localhost:5601'
curl: (7) couldn't connect to host
vagrant@default-ubuntu-1204:/opt/kibana/current/config$ sudo service kibana status
kibana stop/waiting
Here's the nginx log, reporting when I curl -XGET from port 80, which is forwarding to port 5601:
2015/06/15 17:32:17 [error] 9082#0: *11 connect() failed (111: Connection refused) while connecting to upstream, client: 127.0.0.1, server: kibana, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:5601/", host: "localhost"
UPDATE: I may have overthought this a bit. I'm still interested in ways to view the kibana log, however! Any suggestions are appreciated!
I've noticed that when I run kibana from the command-line, I see errors that are more descriptive than a "Connection refused":
vagrant@default-ubuntu-1204:/opt/kibana/current$ bin/kibana
{"@timestamp":"2015-06-15T22:04:43.344Z","level":"error","message":"Service Unavailable","node_env":"production","error":{"message":"Service Unavailable","name":"Error","stack":"Error: Service Unavailable\n at respond (/usr/local/kibana-4.0.2/src/node_modules/elasticsearch/src/lib/transport.js:235:15)\n at checkRespForFailure (/usr/local/kibana-4.0.2/src/node_modules/elasticsearch/src/lib/transport.js:203:7)\n at HttpConnector.<anonymous> (/usr/local/kibana-4.0.2/src/node_modules/elasticsearch/src/lib/connectors/http.js:156:7)\n at IncomingMessage.bound (/usr/local/kibana-4.0.2/src/node_modules/elasticsearch/node_modules/lodash-node/modern/internals/baseBind.js:56:17)\n at IncomingMessage.emit (events.js:117:20)\n at _stream_readable.js:944:16\n at process._tickCallback (node.js:442:13)\n"}}
{"@timestamp":"2015-06-15T22:04:43.346Z","level":"fatal","message":"Service Unavailable","node_env":"production","error":{"message":"Service Unavailable","name":"Error","stack":"Error: Service Unavailable\n at respond (/usr/local/kibana-4.0.2/src/node_modules/elasticsearch/src/lib/transport.js:235:15)\n at checkRespForFailure (/usr/local/kibana-4.0.2/src/node_modules/elasticsearch/src/lib/transport.js:203:7)\n at HttpConnector.<anonymous> (/usr/local/kibana-4.0.2/src/node_modules/elasticsearch/src/lib/connectors/http.js:156:7)\n at IncomingMessage.bound (/usr/local/kibana-4.0.2/src/node_modules/elasticsearch/node_modules/lodash-node/modern/internals/baseBind.js:56:17)\n at IncomingMessage.emit (events.js:117:20)\n at _stream_readable.js:944:16\n at process._tickCallback (node.js:442:13)\n"}}
vagrant@default-ubuntu-1204:/opt/kibana/current$
Kibana 4 logs to stdout by default. Here is an excerpt of the config/kibana.yml defaults:
# Enables you specify a file where Kibana stores log output.
# logging.dest: stdout
So when invoking it with service, use the log capture method of that service. For example, on a Linux distribution using Systemd / systemctl (e.g. RHEL 7+):
journalctl -u kibana.service
One way may be to modify init scripts to use the --log-file option (if it still exists), but I think the proper solution is to properly configure your instance YAML file. For example, add this to your config/kibana.yml:
logging.dest: /var/log/kibana.log
Note that the Kibana process must be able to write to the file you specify, or the process will die without information (it can be quite confusing).
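If you do log to a file under /var/log, a minimal sketch of preparing it (assuming the service runs as the kibana user, which is typical for package installs but an assumption here):
sudo touch /var/log/kibana.log
sudo chown kibana:kibana /var/log/kibana.log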
As for the --log-file option, I think this is reserved for CLI operations, rather than automation.
In Kibana 4.0.2 there is no --log-file option. If I start Kibana as a service with systemctl start kibana, I find the log in /var/log/messages.
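For example, to pull out the most recent Kibana entries from that file:
sudo grep -i kibana /var/log/messages | tail -n 50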
It seems that you need to pass the "-l, --log-file" flag:
https://github.com/elastic/kibana/issues/3407
Usage: kibana [options]
Kibana is an open source (Apache Licensed), browser based analytics and search dashboard for Elasticsearch.
Options:
-h, --help output usage information
-V, --version output the version number
-e, --elasticsearch <uri> Elasticsearch instance
-c, --config <path> Path to the config file
-p, --port <port> The port to bind to
-q, --quiet Turns off logging
-H, --host <host> The host to bind to
-l, --log-file <path> The file to log to
--plugins <path> Path to scan for plugins
If you use the init script to run as a service, maybe you will need to customize it.
Kibana doesn't have a log file by default, but you can set one up using the log_file Kibana server property - https://www.elastic.co/guide/en/kibana/current/kibana-server-properties.html
For Kibana 6.x on Windows, edit the shortcut so it runs "kibana -l " followed by your log path; the folder must already exist.
Hello, I am starting to work with Kibana and Elasticsearch. I am able to run Elasticsearch on port 9200, but Kibana is not running on port 5601. The following two images are given for clarification.
Kibana is not running and the browser shows that the page is not available.
Kibana doesn't support spaces in the folder name. Your folder name is
GA Works
Remove the space between those two words; Kibana will then run without errors and you will be able to access it at
http://localhost:5601
You can rename the folder to
GA_Works
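For example, from the folder's parent directory (Windows cmd form first, then the Unix equivalent; the folder name is taken from the question):
ren "GA Works" GA_Works
mv "GA Works" GA_Works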
Have you
a) set elasticsearch_url to point at your Elasticsearch instance in kibana/config/kibana.yml?
b) run ./bin/kibana (or bin\kibana.bat on Windows) after setting the above config?
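A quick way to sanity-check both points from the Kibana directory (a sketch; the elasticsearch_url setting name applies to Kibana 4.0/4.1, while later releases renamed it to elasticsearch.url):
grep elasticsearch_url config/kibana.yml
./bin/kibana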
If you tried all of the above and it still doesn't work, make sure that the Kibana process is actually running first. I found that /etc/init.d/kibana4_init doesn't start the process. If that is the case, then try: /opt/kibana/bin/kibana.
I also made the kibana user and group the owner of the folder/files.
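For reference, that ownership change would look something like this (the /opt/kibana path comes from the earlier answer and is an assumption about your layout):
sudo chown -R kibana:kibana /opt/kibana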