kibana unable to discover - (Shard Failures) / Error: indexPattern.fields is undefined - elasticsearch

Kibana is unable to initialize on startup; it shows the misleading exception "Shard Failures" without any details.
However, digging into the browser console reveals the following logs:
"INFO: 2016-11-25T13:41:59Z
Adding connection to https://monitoring.corp.com/elk-kibana/elasticsearch
" kibana.bundle.js:63741:6
config init commons.bundle.js:62929
complete in 459.08ms commons.bundle.js:62925:12
loading default index pattern commons.bundle.js:62929
Index Patterns: index pattern set to logstash-* commons.bundle.js:8926:17
complete in 125.70ms commons.bundle.js:62925:12
Error: indexPattern.fields is undefined
isSortable#https://monitoring.corp.com/elk-kibana/bundles/kibana.bundle.js?v=9732:85441:8
getSort#https://monitoring.corp.com/elk-kibana/bundles/kibana.bundle.js?v=9732:85448:47
__WEBPACK_AMD_DEFINE_RESULT__</getSort.array#https://monitoring.corp.com/elk-kibana/bundles/kibana.bundle.js?v=9732:85463:15
getStateDefaults#https://monitoring.corp.com/elk-kibana/bundles/kibana.bundle.js?v=9732:85015:16
__WEBPACK_AMD_DEFINE_RESULT__</<#https://monitoring.corp.com/elk-kibana/bundles/kibana.bundle.js?v=9732:85009:47
invoke#https://monitoring.corp.com/elk-kibana/bundles/commons.bundle.js?v=9732:31569:15
$ControllerProvider/this.$get</</instantiate<#https://monitoring.corp.com/elk-kibana/bundles/commons.bundle.js?v=9732:36227:25
nodeLinkFn#https://monitoring.corp.com/elk-kibana/bundles/commons.bundle.js?v=9732:35339:37
compositeLinkFn#https://monitoring.corp.com/elk-kibana/bundles/commons.bundle.js?v=9732:34771:14
publicLinkFn#https://monitoring.corp.com/elk-kibana/bundles/commons.bundle.js?v=9732:34646:31
ngViewFillContentFactory/<.link#https://monitoring.corp.com/elk-kibana/bundles/commons.bundle.js?v=9732:57515:8
invokeLinkFn#https://monitoring.corp.com/elk-kibana/bundles/commons.bundle.js?v=9732:35880:10
nodeLinkFn#https://monitoring.corp.com/elk-kibana/bundles/commons.bundle.js?v=9732:35380:12
compositeLinkFn#https://monitoring.corp.com/elk-kibana/bundles/commons.bundle.js?v=9732:34771:14
publicLinkFn#https://monitoring.corp.com/elk-kibana/bundles/commons.bundle.js?v=9732:34646:31
createBoundTranscludeFn/boundTranscludeFn#https://monitoring.corp.com/elk-kibana/bundles/commons.bundle.js?v=9732:34790:17
controllersBoundTransclude#https://monitoring.corp.com/elk-kibana/bundles/commons.bundle.js?v=9732:35407:19
update#https://monitoring.corp.com/elk-kibana/bundles/commons.bundle.js?v=9732:57465:26
$RootScopeProvider/this.$get</Scope.prototype.$broadcast#https://monitoring.corp.com/elk-kibana/bundles/commons.bundle.js?v=9732:43402:16
commitRoute/<#https://monitoring.corp.com/elk-kibana/bundles/commons.bundle.js?v=9732:57149:16
processQueue#https://monitoring.corp.com/elk-kibana/bundles/commons.bundle.js?v=9732:41836:29
scheduleProcessQueue/<#https://monitoring.corp.com/elk-kibana/bundles/commons.bundle.js?v=9732:41852:28
$RootScopeProvider/this.$get</Scope.prototype.$eval#https://monitoring.corp.com/elk-kibana/bundles/commons.bundle.js?v=9732:43080:17
$RootScopeProvider/this.$get</Scope.prototype.$digest#https://monitoring.corp.com/elk-kibana/bundles/commons.bundle.js?v=9732:42891:16
$RootScopeProvider/this.$get</Scope.prototype.$apply#https://monitoring.corp.com/elk-kibana/bundles/commons.bundle.js?v=9732:43188:14
done#https://monitoring.corp.com/elk-kibana/bundles/commons.bundle.js?v=9732:37637:37
completeRequest#https://monitoring.corp.com/elk-kibana/bundles/commons.bundle.js?v=9732:37835:8
requestLoaded#https://monitoring.corp.com/elk-kibana/bundles/commons.bundle.js?v=9732:37776:10
<div class="application ng-scope" ng-class="'tab-' + chrome.getActiveTabId('-none-') + ' ' + chrome.getApplicationClasses()" ng-view="" ng-controller="chrome.$$rootControllerConstruct as kibana"> commons.bundle.js:39568:19
Error: Request to Elasticsearch failed: "Bad Request"
KbnError#https://monitoring.corp.com/elk-kibana/bundles/commons.bundle.js?v=9732:62016:21
RequestFailure#https://monitoring.corp.com/elk-kibana/bundles/commons.bundle.js?v=9732:62049:6
__WEBPACK_AMD_DEFINE_RESULT__</</</<#https://monitoring.corp.com/elk-kibana/bundles/kibana.bundle.js?v=9732:88628:16
processQueue#https://monitoring.corp.com/elk-kibana/bundles/commons.bundle.js?v=9732:41836:29
scheduleProcessQueue/<#https://monitoring.corp.com/elk-kibana/bundles/commons.bundle.js?v=9732:41852:28
$RootScopeProvider/this.$get</Scope.prototype.$eval#https://monitoring.corp.com/elk-kibana/bundles/commons.bundle.js?v=9732:43080:17
$RootScopeProvider/this.$get</Scope.prototype.$digest#https://monitoring.corp.com/elk-kibana/bundles/commons.bundle.js?v=9732:42891:16
$RootScopeProvider/this.$get</Scope.prototype.$apply#https://monitoring.corp.com/elk-kibana/bundles/commons.bundle.js?v=9732:43188:14
done#https://monitoring.corp.com/elk-kibana/bundles/commons.bundle.js?v=9732:37637:37
completeRequest#https://monitoring.corp.com/elk-kibana/bundles/commons.bundle.js?v=9732:37835:8
requestLoaded#https://monitoring.corp.com/elk-kibana/bundles/commons.bundle.js?v=9732:37776:10
commons.bundle.js:39568:19
I'm aware of the issue https://github.com/elastic/kibana/issues/6460, but we don't see any signs of an entity that is too large.
I have also already recreated the index pattern (deleted and re-created it), without luck.
However, when I go to "Settings" > "Index pattern" (where the fields are listed) and then back to Discover, Kibana seems to work again (until the next browser refresh). Any ideas how to fix Kibana?
Kibana version: 4.4.2
Elasticsearch version: 2.2.0

Increasing the server.maxPayloadBytes property in the kibana.yml file to an appropriate size solved the issue.
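For example, a minimal kibana.yml sketch (the value below is only an illustration; the default is about 1 MB, so pick a size large enough for your index pattern's field-list payload), followed by a Kibana restart:
# kibana.yml
# default is 1048576 bytes; raise it so the large field-list request is accepted
server.maxPayloadBytes: 4194304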

Related

Elasticsearch indexing fails after successful Nutch crawl

I'm not sure why, but Nutch 1.13 is failing to index the data to ES (v2.3.3). The crawling works fine, but when it comes time to index to ES it gives me this error message:
Indexer: java.io.IOException: Job failed!
at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:865)
at org.apache.nutch.indexer.IndexingJob.index(IndexingJob.java:147)
at org.apache.nutch.indexer.IndexingJob.run(IndexingJob.java:230)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.nutch.indexer.IndexingJob.main(IndexingJob.java:239)
Right before that it has this:
elastic.bulk.close.timeout : elastic timeout for the last bulk in seconds. (default 600)
I'm not sure whether the timeout has anything to do with the job failing.
I've run Nutch v1.10 many times with no problems, but decided to upgrade now. I never had this error before upgrading.
EDIT:
After closer inspection of the error message:
Error running:
/home/david/tutorials/nutch/nutch-1.13/runtime/local/bin/nutch index -Delastic.server.url=http://localhost:9300/search-index/ searchcrawl//crawldb -linkdb searchcrawl//linkdb searchcrawl//segments/20170519125546
It seems to be failing there, on that particular segment. What does that mean? I only know the basics of how to use Nutch; I'm by no means an expert. Is it failing on a link?
Until Nutch 1.14 is out, you need to apply the patch from https://github.com/apache/nutch/pull/156 and rebuild:
cd apache-nutch-1.13
wget https://raw.githubusercontent.com/apache/nutch/e040ace189aa0379b998c8852a09c1a1a2308d82/src/java/org/apache/nutch/indexer/CleaningJob.java
mv CleaningJob.java src/java/org/apache/nutch/indexer/.
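The rebuild itself is typically done with the stock Ant build, run from the source root so that runtime/local is regenerated (assuming the default Nutch 1.13 source layout):
ant runtime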

Timeout on deleting a snapshot repository

I'm running Elasticsearch 1.7.5 with 19 nodes (12 data nodes).
I'm attempting to set up snapshots for backup and recovery, but am getting a 503 on creation and deletion of a snapshot repository.
curl -XDELETE 'localhost:9200/_snapshot/backups?pretty'
returns:
{
"error" : "RemoteTransportException[[masternodename][inet[/10.0.0.20:9300]][cluster:admin/repository/delete]]; nested: ProcessClusterEventTimeoutException[failed to process cluster event (delete_repository [backups]) within 30s]; ",
"status" : 503
}
I was able to adjust the query with master_timeout=10m, but am still getting a timeout. Is there a way to debug why this request fails?
Performance on this call seems to be related to pending tasks with a higher priority.
https://discuss.elastic.co/t/timeout-on-deleting-a-snapshot-repository/69936/4
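A quick way to check that from the command line (standard cluster APIs; adjust host and port to your cluster):
curl 'localhost:9200/_cat/pending_tasks?v'
curl 'localhost:9200/_cluster/pending_tasks?pretty'
If the queue is full of long-running, higher-priority tasks, the repository delete will keep timing out until they drain.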

LogStash::ConfigurationError: com.mysql.jdbc.Driver not loaded

When I use the logstash-input-jdbc plugin to sync MySQL with my local Elasticsearch, the errors below appear. I have searched for a long time but have not found a solution yet.
./logstash -f ./logstash_jdbc_test/jdbc.conf
Pipeline aborted due to error {:exception=>#,
:backtrace=>["/usr/local/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-jdbc-3.0.2/lib/logstash/plugin_mixins/jdbc.rb:156:in `prepare_jdbc_connection'",
"/usr/local/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-jdbc-3.0.2/lib/logstash/inputs/jdbc.rb:167:in `register'",
"/usr/local/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.2-java/lib/logstash/pipeline.rb:330:in `start_inputs'", "org/jruby/RubyArray.java:1613:in `each'",
"/usr/local/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.2-java/lib/logstash/pipeline.rb:329:in `start_inputs'",
"/usr/local/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.2-java/lib/logstash/pipeline.rb:180:in `start_workers'",
"/usr/local/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.2-java/lib/logstash/pipeline.rb:136:in `run'",
"/usr/local/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.2-java/lib/logstash/agent.rb:465:in `start_pipeline'"], :level=>:error}
Yesterday I found the reason:
in my install path /elasticsearch-jdbc-2.3.2.0/lib, the size of mysql-connector-java-5.1.38.jar was zero.
So I downloaded a fresh mysql-connector-java-5.1.38.jar and copied it to /elasticsearch-jdbc-2.3.2.0/lib.
That resolved my problem, and now I can sync data between MySQL and Elasticsearch quickly.
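For anyone whose problem is on the Logstash side rather than an empty jar, here is a minimal logstash-input-jdbc sketch that points the plugin at the connector jar explicitly (the paths, credentials, and query below are placeholders):
input {
  jdbc {
    # path to a non-empty MySQL connector jar
    jdbc_driver_library => "/path/to/mysql-connector-java-5.1.38.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://localhost:3306/mydb"
    jdbc_user => "user"
    jdbc_password => "password"
    statement => "SELECT * FROM my_table"
  }
}
If jdbc_driver_library points at a missing or zero-byte jar, the plugin fails with exactly the "com.mysql.jdbc.Driver not loaded" error above.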

Logstash error message when using ElasticSearch output=>"Failed to flush outgoing items"

I'm using ES 1.4.4, LS 1.5, and Kibana 4 on Debian.
I start Logstash, it works fine for a couple of minutes, and then I get a fatal error.
In order to shut down Logstash I have to delete the recent data stored in ES; that's the only way I found.
One more relevant fact is that Elasticsearch looks OK: I can see old data in Kibana, and the head plugin works fine.
My output config : output { elasticsearch {port => 9200 protocol => http host => "127.0.0.1"}}
Any help will be appreciated :)
Here is the full error message:
Got error to send bulk of actions to elasticsearch server at 127.0.0.1 : Read timed out {:level=>:error}
Failed to flush outgoing items {:outgoing_count=>1362, :exception=>#, :backtrace=>["/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.3.5-java/lib/manticore/response.rb:35:in initialize'", "org/jruby/RubyProc.java:271:incall'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.3.5-java/lib/manticore/response.rb:61:in call'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.3.5-java/lib/manticore/response.rb:224:incall_once'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.3.5-java/lib/manticore/response.rb:127:in code'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.7/lib/elasticsearch/transport/transport/http/manticore.rb:50:inperform_request'", "org/jruby/RubyProc.java:271:in call'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.7/lib/elasticsearch/transport/transport/base.rb:187:inperform_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.7/lib/elasticsearch/transport/transport/http/manticore.rb:33:in perform_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.7/lib/elasticsearch/transport/client.rb:115:inperform_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-api-1.0.7/lib/elasticsearch/api/actions/bulk.rb:80:in bulk'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-0.1.18-java/lib/logstash/outputs/elasticsearch/protocol.rb:82:inbulk'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-0.1.18-java/lib/logstash/outputs/elasticsearch.rb:413:in submit'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-0.1.18-java/lib/logstash/outputs/elasticsearch.rb:412:insubmit'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-0.1.18-java/lib/logstash/outputs/elasticsearch.rb:438:in flush'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-0.1.18-java/lib/logstash/outputs/elasticsearch.rb:436:inflush'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.19/lib/stud/buffer.rb:219:in buffer_flush'", "org/jruby/RubyHash.java:1341:ineach'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.19/lib/stud/buffer.rb:216:in buffer_flush'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.19/lib/stud/buffer.rb:193:inbuffer_flush'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.19/lib/stud/buffer.rb:159:in buffer_receive'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-0.1.18-java/lib/logstash/outputs/elasticsearch.rb:402:inreceive'", "/opt/logstash/lib/logstash/outputs/base.rb:88:in handle'", "(eval):1070:ininitialize'", "org/jruby/RubyArray.java:1613:in each'", "org/jruby/RubyEnumerable.java:805:inflat_map'", "(eval):1067:in initialize'", "org/jruby/RubyProc.java:271:incall'", "/opt/logstash/lib/logstash/pipeline.rb:279:in output'", "/opt/logstash/lib/logstash/pipeline.rb:235:inoutputworker'", "/opt/logstash/lib/logstash/pipeline.rb:163:in `start_outputs'"], :level=>:warn}
Your Elasticsearch has run out of storage and is unable to write the new documents coming from Logstash. Try deleting old indices and then:
PUT your_index/_settings
{
  "index": {
    "blocks.read_only": false
  }
}
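If you prefer curl, the same request looks like this (host and index name are placeholders, matching the PUT above):
curl -XPUT 'localhost:9200/your_index/_settings' -d '{ "index": { "blocks.read_only": false } }'
If disk space is the underlying cause, freeing space or deleting old indices as suggested above is still needed.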
I hope this works for you. Thanks!

Elasticsearch returning 504 when trying to index on Heroku

I've added elasticsearch to my Rails app using Tire as outlined in this Railscast.
I've tried to deploy to Heroku with the Bonsai add-on. After following this tutorial and also using information based on this question, I've tried running this command:
heroku run rake environment tire:import CLASS=Document FORCE=true
(Document is, of course, the name of my model.)
But I keep getting this error message:
Running `rake environment tire:import CLASS=Document FORCE=true` attached to terminal... up, run.4773
[IMPORT] Deleting index 'documents'
[IMPORT] Creating index 'documents' with mapping:
{"document":{"properties":{}}}
[ERROR] There has been an error when creating the index -- Elasticsearch returned:
504 :
What am I doing wrong?
You might not have done anything wrong. Bonsai has been experiencing issues for the past 18 hours. Your 504 error may just be a result of this.
See this tweet: https://twitter.com/bonsaisearch/status/394950014361165824
