Where is the kibana error log? Is there a kibana error log? - kibana-4

QUESTION: how do I debug kibana? Is there an error log?
PROBLEM 1: kibana 4 won't stay up
PROBLEM 2: I don't know where/if kibana 4 is logging errors
DETAILS:
Here's me starting kibana, making a request to the port, getting nothing, and checking the service again. The service doesn't stay up, but I'm not sure why.
vagrant@default-ubuntu-1204:/opt/kibana/current/config$ sudo service kibana start
kibana start/running, process 11774
vagrant@default-ubuntu-1204:/opt/kibana/current/config$ curl -XGET 'http://localhost:5601'
curl: (7) couldn't connect to host
vagrant@default-ubuntu-1204:/opt/kibana/current/config$ sudo service kibana status
kibana stop/waiting
Here's the nginx log entry from when I curl -XGET on port 80, which forwards to port 5601:
2015/06/15 17:32:17 [error] 9082#0: *11 connect() failed (111: Connection refused) while connecting to upstream, client: 127.0.0.1, server: kibana, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:5601/", host: "localhost"
UPDATE: I may have overthought this a bit. I'm still interested in ways to view the kibana log, however! Any suggestions are appreciated!
I've noticed that when I run kibana from the command-line, I see errors that are more descriptive than a "Connection refused":
vagrant@default-ubuntu-1204:/opt/kibana/current$ bin/kibana
{"@timestamp":"2015-06-15T22:04:43.344Z","level":"error","message":"Service Unavailable","node_env":"production","error":{"message":"Service Unavailable","name":"Error","stack":"Error: Service Unavailable\n at respond (/usr/local/kibana-4.0.2/src/node_modules/elasticsearch/src/lib/transport.js:235:15)\n at checkRespForFailure (/usr/local/kibana-4.0.2/src/node_modules/elasticsearch/src/lib/transport.js:203:7)\n at HttpConnector.<anonymous> (/usr/local/kibana-4.0.2/src/node_modules/elasticsearch/src/lib/connectors/http.js:156:7)\n at IncomingMessage.bound (/usr/local/kibana-4.0.2/src/node_modules/elasticsearch/node_modules/lodash-node/modern/internals/baseBind.js:56:17)\n at IncomingMessage.emit (events.js:117:20)\n at _stream_readable.js:944:16\n at process._tickCallback (node.js:442:13)\n"}}
{"@timestamp":"2015-06-15T22:04:43.346Z","level":"fatal","message":"Service Unavailable","node_env":"production","error":{"message":"Service Unavailable","name":"Error","stack":"Error: Service Unavailable\n at respond (/usr/local/kibana-4.0.2/src/node_modules/elasticsearch/src/lib/transport.js:235:15)\n at checkRespForFailure (/usr/local/kibana-4.0.2/src/node_modules/elasticsearch/src/lib/transport.js:203:7)\n at HttpConnector.<anonymous> (/usr/local/kibana-4.0.2/src/node_modules/elasticsearch/src/lib/connectors/http.js:156:7)\n at IncomingMessage.bound (/usr/local/kibana-4.0.2/src/node_modules/elasticsearch/node_modules/lodash-node/modern/internals/baseBind.js:56:17)\n at IncomingMessage.emit (events.js:117:20)\n at _stream_readable.js:944:16\n at process._tickCallback (node.js:442:13)\n"}}
vagrant@default-ubuntu-1204:/opt/kibana/current$

Kibana 4 logs to stdout by default. Here is an excerpt of the config/kibana.yml defaults:
# Enables you to specify a file where Kibana stores log output.
# logging.dest: stdout
So when you invoke it as a service, use that service's log-capture mechanism. For example, on a Linux distribution using systemd/systemctl (e.g. RHEL 7+):
journalctl -u kibana.service
One way may be to modify the init scripts to use the --log-file option (if it still exists), but I think the proper solution is to configure logging in your instance's YAML file. For example, add this to your config/kibana.yml:
logging.dest: /var/log/kibana.log
Note that the Kibana process must be able to write to the file you specify, or the process will die without information (it can be quite confusing).
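For example, assuming the service runs as a kibana user (an assumption; check your init script for the actual account), you can pre-create the file with the right ownership:
sudo touch /var/log/kibana.log
sudo chown kibana:kibana /var/log/kibana.log  # adjust user/group to whoever runs the service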
As for the --log-file option, I think this is reserved for CLI operations, rather than automation.

In Kibana 4.0.2 there is no --log-file option. If I start Kibana as a service with systemctl start kibana, I find the log in /var/log/messages.
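For example, to pull just the recent Kibana lines out of the syslog (path as above; this is plain grep/tail, nothing Kibana-specific):
sudo grep -i kibana /var/log/messages | tail -n 50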

It seems that you need to pass the "-l, --log-file" flag:
https://github.com/elastic/kibana/issues/3407
Usage: kibana [options]
Kibana is an open source (Apache Licensed), browser based analytics and search dashboard for Elasticsearch.
Options:
  -h, --help                 output usage information
  -V, --version              output the version number
  -e, --elasticsearch <uri>  Elasticsearch instance
  -c, --config <path>        Path to the config file
  -p, --port <port>          The port to bind to
  -q, --quiet                Turns off logging
  -H, --host <host>          The host to bind to
  -l, --log-file <path>      The file to log to
  --plugins <path>           Path to scan for plugins
If you use the init script to run Kibana as a service, you may need to customize it.
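For reference, a minimal foreground invocation using the flag would look like this (a sketch; the log path is an arbitrary example, and the Kibana path matches the question's layout):
/opt/kibana/current/bin/kibana -c /opt/kibana/current/config/kibana.yml -l /var/log/kibana.log
If your init script builds the command line itself, appending the same -l argument there should have the equivalent effect.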

Kibana doesn't have a log file by default, but you can set one up using the log_file Kibana server property - https://www.elastic.co/guide/en/kibana/current/kibana-server-properties.html

For Kibana 6.x on Windows, edit the shortcut to run "kibana -l " followed by a log-file path; the folder you point it at must already exist.

Related

Elasticsearch is not running in browser

I have downloaded Elasticsearch 8.1 on my Ubuntu machine. After a successful installation, when I execute
curl -u elastic https://127.0.0.1:9200 -k
It shows the expected Elasticsearch response. But when I hit http://127.0.0.1:9200/ or http://localhost:9200 in my browser, it returns an error.
After installation, I added network.host: 127.0.0.1 to elasticsearch.yml
Can anybody help me understand why it is not working in the browser?
I am using Ubuntu 20 and following this Doc
As of version 8.0, Elasticsearch security is turned on by default and SSL/TLS is required for HTTP communications.
You can disable HTTP security if you want, but that's discouraged.
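If you do accept that risk (e.g. on a throwaway dev box), a minimal sketch of the relevant config/elasticsearch.yml settings for 8.x would be:
xpack.security.enabled: false
xpack.security.http.ssl.enabled: false
Restart Elasticsearch afterwards, and plain http://localhost:9200 should then answer in the browser.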
I am using the Windows platform; the steps are the same. When you run elasticsearch.bat in cmd:
use the HTTPS URL for Elasticsearch: https://localhost:9200/
for the username and password, scroll through the output of the cmd window running Elasticsearch
then log in to Elasticsearch. Hurray!
Thanks. But the best solution is to use the Docker image of the ELK stack, which is easier than downloading Elasticsearch, Logstash, and Kibana separately and running them on the local machine.
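For example, a single-node dev instance of Elasticsearch can be started from the official image like this (the version tag is just an example; pin whatever you need):
docker run -d --name es -p 9200:9200 -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:8.1.0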

ERROR: Failed to determine the health of the cluster

I am running Elasticsearch and Kibana. I am not sure of the status of my Elasticsearch cluster (whether it's red, yellow, or green), but it seems I need to get a token generated by Elasticsearch, as in the screenshot. When I ran bin/elasticsearch-create-enrollment-token --scope kibana from the right directory, it errored out with ERROR: Failed to determine the health of the cluster.
According to Ioannis Kakavas on discuss.elastic, "CLI tools extending BaseRunAsSuperuserCommand should only connect to the local node". When I ran it on a local node, it worked; but when I ran it in the elasticsearch container in a cluster, it didn't. The solution was to execute the elasticsearch-reset-password and elasticsearch-create-enrollment-token scripts, respectively, like this (inside the elasticsearch container):
/usr/share/elasticsearch/bin/elasticsearch-reset-password -i -u elastic --url https://localhost:9200
/usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s kibana --url https://localhost:9200
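For example, from the Docker host the same two commands can be run in one step each, assuming the container is named elasticsearch (the name is an assumption; substitute your own):
docker exec -it elasticsearch /usr/share/elasticsearch/bin/elasticsearch-reset-password -i -u elastic --url https://localhost:9200
docker exec -it elasticsearch /usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s kibana --url https://localhost:9200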
I encountered the same problem, and I just redid the process: unzipped the ES and Kibana zip files again, and ran bin/elasticsearch in the newly created directory. Look for a message enclosed in a formatted box that contains both the password for the elastic user and the enrollment token for Kibana (the token is only valid for 30 minutes). This message appears only once, the first time you run elasticsearch.
I proceeded to run bin/kibana for Kibana and configured it in the browser, and everything worked out from there. Hope this helps!
I had the exact same issue:
$ sudo /usr/share/elasticsearch/bin/elasticsearch-reset-password -u elastic
ERROR: Failed to determine the health of the cluster.
But after I restart the elasticsearch service:
$ sudo systemctl restart elasticsearch.service
then it works:
This tool will reset the password of the [elastic] user to an autogenerated value.
The password will be printed in the console.
Please confirm that you would like to continue [y/N]y
Password for the [elastic] user successfully reset.
New value: xxxxxx
Two possible solutions:
Make sure that you have enough disk space (a quick check is sketched after this list).
Your VPN might be causing the issue.
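For the disk-space check, a minimal sketch (the data path shown is the package default and may differ on your install):
df -h /var/lib/elasticsearch
Elasticsearch refuses writes once disk usage crosses its flood-stage watermark (95% by default), which can leave the node unhealthy.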
The enrollment token will be printed in the terminal itself; you just need to scroll up until you find it from when you were installing.
The reason for the error ERROR: Failed to determine the health of the cluster is that Elasticsearch has not been installed (or is not running) yet; running that command is like calling a function without defining it.

NEAR Mainnet Archival Node Setup

I tried setting up the NEAR mainnet archival node using docker by following this documentation - https://github.com/near/nearup#building-the-docker-image. The docker run command does not specify any port in the document.
So I ran docker run without any port as well, but when I check with docker ps it does not show any ports, even though the neard node is running.
I did not find any docs on the node APIs. Can we use the archival APIs - https://docs.near.org/docs/api/rpc - to query the node?
Docker run command used to set up archival mainnet node:
sudo docker run -d -v $PWD:/root/.near --name nearup nearprotocol/nearup run mainnet
JSON RPC on nearcore is exposed on port 3030.
As for the running an archival node you might be interested in this doc page https://docs.near.org/docs/roles/integrator/exchange-integration#steps-to-start-archive-node
P.S. nearup is considered a bit dated, though it is still in use.
I have updated the documentation for nearup to specify the port binding for RPC now: https://github.com/near/nearup#building-the-docker-image
You can use the following command:
docker run -v $HOME/.near:/root/.near -p 3030:3030 --name nearup nearprotocol/nearup run mainnet
And you can validate nearup is running and the RPC /status endpoint is available by running:
docker exec nearup nearup logs
and
curl 0.0.0.0:3030/status
Also please make sure that you have changed the ~/.near/mainnet/config.json to contain the variable:
{
...
"archive": true,
...
}
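Once the node is running as archival, you can sanity-check it by asking the RPC for an old block that a non-archival node would already have garbage-collected (the block height below is just an arbitrary example):
curl -s -X POST 0.0.0.0:3030 -H 'Content-Type: application/json' -d '{"jsonrpc": "2.0", "id": "dontcare", "method": "block", "params": {"block_id": 9823756}}'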

Elasticsearch REST search API

I have a problem with the remote address of Elasticsearch and the REST API (when getting search results).
I'm using the ELK stack created by jHipster (Logstash + Elasticsearch + Kibana). When I use the REST search API (via cURL) with the external server address, I get fewer results than when I use localhost:
$ curl -X GET "http://localhost:9200/logstash-*/_search?q=Method:location"
{"took":993,"timed_out":false,"num_reduce_phases":13,"_shards":
{"total":6370,"successful":6370,"skipped":0,"failed":0},"hits":
{"total":8994099,"max_score":5.0447145,"hits":[..]}}
When executed from a different server, it returns a smaller number of shards and hits:
$ curl -X GET "http://SERVER_URL/logstash-*/_search?q=Method:location"
{"took":10,"timed_out":false,"_shards":
{"total":120,"successful":120,"skipped":0,"failed":0},"hits":
{"total":43,"max_score":7.5393815,"hits":[..]}}
If I create an SSH tunnel, it works:
ssh -L 9201:SERVER_URL:9200 elk-stack
and now:
$ curl -X GET "localhost:9201/logstash-*/_search?q=Method:location"
{"took":640,"timed_out":false,"num_reduce_phases":13,"_shards":
{"total":6370,"successful":6370,"skipped":0,"failed":0},"hits":
{"total":8995082,"max_score":5.0447145,"hits":[..]}}
So there must be some problem with accessing the data from outside localhost, but I can't find how to change this in the configuration (maybe some kind of default behaviour to prevent data leakage on remote access?).
You should configure your host. To do this, put this line in the config/elasticsearch.yml file:
network.host: 0.0.0.0
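After changing it, restart Elasticsearch and verify from the remote machine (SERVER_URL as in the question):
curl -X GET "http://SERVER_URL:9200/_cluster/health?pretty"
Note that network.host: 0.0.0.0 binds to all interfaces; on anything internet-facing, restrict access with a firewall or reverse proxy, since this setting alone exposes the cluster.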

Unable to get Mesos to run from tutorial: Setting up a Single Node Mesosphere Cluster

I have been following this tutorial to try and setup a single node mesosphere cluster from their
official tutorial:
http://mesosphere.com/docs/getting-started/developer/single-node-install/
I followed all the commands without any issues, and I also added ports 5050 and 8080 to my security group. When I try to access the console for Mesos/Marathon, I get an "Internet Explorer cannot display the webpage" message.
They also recommend checking it the following way:
MASTER=$(mesos-resolve `cat /etc/mesos/zk`)
mesos-execute --master=$MASTER --name="cluster-test" --command="sleep 5"
But that comes up with an error:
WARNING: Logging before InitGoogleLogging() is written to STDERR
F0106 17:03:08.126703 20993 process.cpp:1561] Failed to initialize, gethostbyname2: Unknown host
*** Check failure stack trace: ***
I am not really sure how to troubleshoot this either, and I could not find many tutorials on how to install Mesos on Ubuntu.
I checked the contents of the zk file, seems to be the default value.
$ cat /etc/mesos/zk
zk://localhost:2181/mesos
I would really appreciate any clues on how to go about this one.
Edit: The process is definitely running too - just an fyi:
root 31545 8.5 5.9 187464 35604 ? Ssl 17:28 0:00 /usr/local/sbin/mesos-slave --master=zk://localhost:2181/mesos --log_dir=/var/log/mesos
root 31563 28.5 2.1 116304 12856 ? Rs 17:28 0:00 /usr/local/sbin/mesos-master --zk=zk://localhost:2181/mesos --port=5050 --log_dir=/var/log/mesos --quorum=1 --wo
Mesos uses gethostbyname2 to resolve hostnames to IPs. The first thing I would recommend, is to try "ping localhost" and "ping hostname", and verify that there are no strange settings in /etc/hosts. If you're doing a multi-node cluster, I'd recommend that hostname map to the public IP address (not 127.0.x.1).
If that doesn't help, you can try setting the --ip and --hostname flags when starting mesos-master and mesos-slave, to bypass the gethostbyname2 resolution. These can also be set by writing to the file-based parameters, e.g. /etc/mesos/mesos-master/ip
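For example (the IP below is a placeholder for your node's actual address, and the paths follow the file-based parameter convention mentioned above):
echo 10.0.0.5 | sudo tee /etc/mesos/mesos-master/ip
echo $(hostname -f) | sudo tee /etc/mesos/mesos-master/hostname
On Mesosphere packages, the init wrapper turns each file in those directories into the corresponding command-line flag.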
For additional troubleshooting, try running wget http://localhost:5050 (or curl -L) from the mesos master, to verify that it is locally visible. Also try wget http://<public_ip>:5050 to verify that the web server is up and serving to the public IP. Depending on how your (EC2?) node is setup, you may need to expose/forward the port, or connect to a VPN.
Thanks Adam. I ran the wget and curl commands, and nothing was actually listening on port 8080 or 5050. I did open those ports in EC2. A simple reboot did the trick, however: once I SSH'ed into the EC2 instance after the reboot, both Mesos and Marathon were running, and both ports now show up when I run netstat -ntln.
