In the "Discover" section, as we would like to export the result in a csv file,
The result is around 1 Million documents, when we try to export in csv, it never works...
produces error: Max attempts reached (3), But when tried to export small document around 50K it works fine and created the report successfully.
This is what my configuration looks in kibana.yml
## x-pack module setting
xpack.reporting.queue.timeout: 3600000
xpack.reporting.csv.maxSizeBytes: 1024000000
xpack.reporting.csv.scroll.size: 10000
server.maxPayloadBytes: 1073741824
and this is the configuration in elasticsearch.yml:
http.compression: true
http.max_content_length: 1024mb
I don't understand what is causing the issue.
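As a hedged suggestion beyond the configuration shown above, one kibana.yml setting that can matter for exports of this size is the reporting CSV scroll duration, which controls how long each scroll request against Elasticsearch is kept alive while the report is built (the 30s default mentioned in the comment is an assumption):
## x-pack module setting
# how long each scroll request may take before it times out (default is assumed to be 30s)
xpack.reporting.csv.scroll.duration: 5m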
Related
I have recently run into an issue where I am not able to start elasticsearch.
Version: 2.0
OS: Linux
The following error message was displayed:
[ERROR][gateway ] [Node] failed to read local state, exiting...
ElasticsearchException[must specify numberOfShards for index [version]]; nested: IllegalArgumentException[must specify numberOfShards for index [version]];
at org.elasticsearch.ExceptionsHelper.maybeThrowRuntimeAndSuppress(ExceptionsHelper.java:163)
at org.elasticsearch.gateway.MetaDataStateFormat.loadLatestState(MetaDataStateFormat.java:309)
at org.elasticsearch.gateway.MetaStateService.loadIndexState(MetaStateService.java:112)
at org.elasticsearch.gateway.MetaStateService.loadFullState(MetaStateService.java:97)
at org.elasticsearch.gateway.GatewayMetaState.loadMetaState(GatewayMetaState.java:97)
at org.elasticsearch.gateway.GatewayMetaState.pre20Upgrade(GatewayMetaState.java:223)
at org.elasticsearch.gateway.GatewayMetaState.<init>(GatewayMetaState.java:85)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.elasticsearch.common.inject.DefaultConstructionProxyFactory$1.newInstance(DefaultConstructionProxyFactory.java:56)
How can I start Elasticsearch?
Edit 01:
I have put the shard and replica settings in the elasticsearch.yml file, but I am still not able to start Elasticsearch.
I am getting the above error message.
Edit 02:
I have added the shard settings to the yml file:
#################################### Index ####################################
# You can set a number of options (such as shard/replica options, mapping
# or analyzer definitions, translog settings, ...) for indices globally,
# in this file.
#
# Note, that it makes more sense to configure index settings specifically for
# a certain index, either when creating it or by using the index templates API.
#
# See <http://elasticsearch.org/guide/en/elasticsearch/reference/current/index-modules.html> and
# <http://elasticsearch.org/guide/en/elasticsearch/reference/current/indices-create-index.html>
# for more information.
# Set the number of shards (splits) of an index (5 by default):
#
index.number_of_shards: 2
# Set the number of replicas (additional copies) of an index (1 by default):
#
index.number_of_replicas: 1
We went ahead and restored the server to a prior date when Elasticsearch was working, and the issue was resolved.
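For reference, the yml settings did not help here because the gateway is failing while reading index metadata that was already written to disk (loadIndexState / pre20Upgrade in the stack trace), not while creating a new index. A rough sketch of where that on-disk state lives on a default Linux install (the paths are assumptions and depend on your cluster name and data path):
# index metadata the gateway tries to load at startup, one _state directory per index
ls /var/lib/elasticsearch/<cluster_name>/nodes/0/indices/*/_state/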
Kibana is unable to initialize when starting; it shows the misleading exception "Shard Failures" without any details.
However, digging into the browser console, the following logs were written:
"INFO: 2016-11-25T13:41:59Z
Adding connection to https://monitoring.corp.com/elk-kibana/elasticsearch
" kibana.bundle.js:63741:6
config init commons.bundle.js:62929
complete in 459.08ms commons.bundle.js:62925:12
loading default index pattern commons.bundle.js:62929
Index Patterns: index pattern set to logstash-* commons.bundle.js:8926:17
complete in 125.70ms commons.bundle.js:62925:12
Error: indexPattern.fields is undefined
isSortable#https://monitoring.corp.com/elk-kibana/bundles/kibana.bundle.js?v=9732:85441:8
getSort#https://monitoring.corp.com/elk-kibana/bundles/kibana.bundle.js?v=9732:85448:47
__WEBPACK_AMD_DEFINE_RESULT__</getSort.array#https://monitoring.corp.com/elk-kibana/bundles/kibana.bundle.js?v=9732:85463:15
getStateDefaults#https://monitoring.corp.com/elk-kibana/bundles/kibana.bundle.js?v=9732:85015:16
__WEBPACK_AMD_DEFINE_RESULT__</<#https://monitoring.corp.com/elk-kibana/bundles/kibana.bundle.js?v=9732:85009:47
invoke#https://monitoring.corp.com/elk-kibana/bundles/commons.bundle.js?v=9732:31569:15
$ControllerProvider/this.$get</</instantiate<#https://monitoring.corp.com/elk-kibana/bundles/commons.bundle.js?v=9732:36227:25
nodeLinkFn#https://monitoring.corp.com/elk-kibana/bundles/commons.bundle.js?v=9732:35339:37
compositeLinkFn#https://monitoring.corp.com/elk-kibana/bundles/commons.bundle.js?v=9732:34771:14
publicLinkFn#https://monitoring.corp.com/elk-kibana/bundles/commons.bundle.js?v=9732:34646:31
ngViewFillContentFactory/<.link#https://monitoring.corp.com/elk-kibana/bundles/commons.bundle.js?v=9732:57515:8
invokeLinkFn#https://monitoring.corp.com/elk-kibana/bundles/commons.bundle.js?v=9732:35880:10
nodeLinkFn#https://monitoring.corp.com/elk-kibana/bundles/commons.bundle.js?v=9732:35380:12
compositeLinkFn#https://monitoring.corp.com/elk-kibana/bundles/commons.bundle.js?v=9732:34771:14
publicLinkFn#https://monitoring.corp.com/elk-kibana/bundles/commons.bundle.js?v=9732:34646:31
createBoundTranscludeFn/boundTranscludeFn#https://monitoring.corp.com/elk-kibana/bundles/commons.bundle.js?v=9732:34790:17
controllersBoundTransclude#https://monitoring.corp.com/elk-kibana/bundles/commons.bundle.js?v=9732:35407:19
update#https://monitoring.corp.com/elk-kibana/bundles/commons.bundle.js?v=9732:57465:26
$RootScopeProvider/this.$get</Scope.prototype.$broadcast#https://monitoring.corp.com/elk-kibana/bundles/commons.bundle.js?v=9732:43402:16
commitRoute/<#https://monitoring.corp.com/elk-kibana/bundles/commons.bundle.js?v=9732:57149:16
processQueue#https://monitoring.corp.com/elk-kibana/bundles/commons.bundle.js?v=9732:41836:29
scheduleProcessQueue/<#https://monitoring.corp.com/elk-kibana/bundles/commons.bundle.js?v=9732:41852:28
$RootScopeProvider/this.$get</Scope.prototype.$eval#https://monitoring.corp.com/elk-kibana/bundles/commons.bundle.js?v=9732:43080:17
$RootScopeProvider/this.$get</Scope.prototype.$digest#https://monitoring.corp.com/elk-kibana/bundles/commons.bundle.js?v=9732:42891:16
$RootScopeProvider/this.$get</Scope.prototype.$apply#https://monitoring.corp.com/elk-kibana/bundles/commons.bundle.js?v=9732:43188:14
done#https://monitoring.corp.com/elk-kibana/bundles/commons.bundle.js?v=9732:37637:37
completeRequest#https://monitoring.corp.com/elk-kibana/bundles/commons.bundle.js?v=9732:37835:8
requestLoaded#https://monitoring.corp.com/elk-kibana/bundles/commons.bundle.js?v=9732:37776:10
<div class="application ng-scope" ng-class="'tab-' + chrome.getActiveTabId('-none-') + ' ' + chrome.getApplicationClasses()" ng-view="" ng-controller="chrome.$$rootControllerConstruct as kibana"> commons.bundle.js:39568:19
Error: Request to Elasticsearch failed: "Bad Request"
KbnError#https://monitoring.corp.com/elk-kibana/bundles/commons.bundle.js?v=9732:62016:21
RequestFailure#https://monitoring.corp.com/elk-kibana/bundles/commons.bundle.js?v=9732:62049:6
__WEBPACK_AMD_DEFINE_RESULT__</</</<#https://monitoring.corp.com/elk-kibana/bundles/kibana.bundle.js?v=9732:88628:16
processQueue#https://monitoring.corp.com/elk-kibana/bundles/commons.bundle.js?v=9732:41836:29
scheduleProcessQueue/<#https://monitoring.corp.com/elk-kibana/bundles/commons.bundle.js?v=9732:41852:28
$RootScopeProvider/this.$get</Scope.prototype.$eval#https://monitoring.corp.com/elk-kibana/bundles/commons.bundle.js?v=9732:43080:17
$RootScopeProvider/this.$get</Scope.prototype.$digest#https://monitoring.corp.com/elk-kibana/bundles/commons.bundle.js?v=9732:42891:16
$RootScopeProvider/this.$get</Scope.prototype.$apply#https://monitoring.corp.com/elk-kibana/bundles/commons.bundle.js?v=9732:43188:14
done#https://monitoring.corp.com/elk-kibana/bundles/commons.bundle.js?v=9732:37637:37
completeRequest#https://monitoring.corp.com/elk-kibana/bundles/commons.bundle.js?v=9732:37835:8
requestLoaded#https://monitoring.corp.com/elk-kibana/bundles/commons.bundle.js?v=9732:37776:10
commons.bundle.js:39568:19
I'm aware of the https://github.com/elastic/kibana/issues/6460 issue, but we don't see any signs of a request entity that is too large.
I have also already recreated the index pattern (deleting and re-creating it), without luck.
However, when going into "Settings" > "Index pattern", where the fields are shown, and then going back to Discover, Kibana seems to work again (until the next browser refresh). Any ideas how to fix Kibana?
Kibana version: 4.4.2
Elasticsearch version: 2.2.0
Increasing the server.maxPayloadBytes property in the kibana.yml file to an appropriate size solved the issue.
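For example, a minimal kibana.yml change (the value below is only an illustration; pick a size large enough for your index pattern's field list):
# default is 1048576 (1 MB); raise it so large field-list payloads are not rejected
server.maxPayloadBytes: 4194304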
Elasticsearch ran out of disk space on a couple of nodes, and the error we're getting is:
[WARN][indices.store ] [odr-es-md15] Can't open file to read checksums
java.io.FileNotFoundException: No such file [_2rlb_es090_0.pos]
at org.elasticsearch.index.store.DistributorDirectory.getDirectory(DistributorDirectory.java:176)
at org.elasticsearch.index.store.DistributorDirectory.getDirectory(DistributorDirectory.java:144)
at org.elasticsearch.index.store.Store$MetadataSnapshot.buildMetadata(Store.java:482)
at org.elasticsearch.index.store.Store$MetadataSnapshot.<init>(Store.java:456)
at org.elasticsearch.index.store.Store.readMetadataSnapshot(Store.java:281)
...
...
...
The translog may be corrupt. I followed the directions from here:
http://unpunctualprogrammer.com/2014/05/13/corrupt-elasticsearch-translogs/
To fix this you need to look in the ES logs,
/var/log/elasticsearch/elasticsearch.log on CentOS, and find the
error lines above. On those lines you'll see something like
[<timestamp>][WARN ][cluster.action.shard] [<weird name>] [logstash-2014.05.13][X]
where X (the shard) is some number, likely (0,1,2,3,4), and the block
before that (logstash-date for me, and for you if you're doing centralized
logging like we are) is the index name. You then need to go to the
index location, /var/lib/elasticsearch/elasticsearch/nodes/0/indices/
on CentOS. In that directory you'll be able to find the following
structure: logstash-date/X/translog/translog-.
That's the file you'll need to delete, so:
1. sudo service elasticsearch stop
2. sudo rm /var/lib/elasticsearch/elasticsearch/nodes/0/indices/logstash-date/X/translog/translog-blalblabla
3. Repeat step 2 for all indices and shards in the error log.
4. sudo service elasticsearch start
Watch the logs and repeat that process as needed until the ES logs
stop spitting out stack traces.
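If many shards are affected, here is a hedged convenience sketch for listing every translog file under the node's data directory (same path assumptions as in the quoted steps: cluster name "elasticsearch" and node 0), so each file can be matched against the WARN lines before deleting it:
# list all translog files so each one can be checked against the shards named in the log
find /var/lib/elasticsearch/elasticsearch/nodes/0/indices -path '*/translog/translog-*' -type f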
I am really puzzled by Magento. I am trying to import magento_sample_data_for_1.6.1.0.sql into my magento-1.8.0.0_2 database.
When I try to import it, I get the error: Fatal error: Maximum execution time of 300 seconds exceeded in C:\wamp\apps\phpmyadmin4.0.4\libraries\dbi\mysqli.dbi.lib.php on line 267. I have changed max_execution_time = 300 to max_execution_time = 0 the first time, and to max_execution_time = 3600 the second time, in php.ini. Can anybody help me figure out what is going wrong? Thanks.
I had the same problem with the import, so here is the solution that worked for me.
Go to wamp/apps/phpmyadmin4.0.4/libraries and open the config.default.php file.
Find $cfg['ExecTimeLimit'] = 300 and change the value to 0 (0 means no limit), then restart WAMP and try the import again.
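For reference, the changed line would look like this (a sketch of the entry in phpMyAdmin's defaults file; adjust for your phpMyAdmin version):
// maximum execution time in seconds for phpMyAdmin scripts (0 means no limit)
$cfg['ExecTimeLimit'] = 0;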
After changing max_execution_time, restart Apache. An even better solution would be importing the database using the command line:
mysql -u[username] -p [database_name] < {path_to_your_sample_data}/your_sample_data.sql
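For instance, on a WAMP install the command might look like this (the MySQL bin path, database name, and file location are assumptions; adjust them to your setup):
REM run from the MySQL bin directory bundled with WAMP (the version folder is an assumption)
cd C:\wamp\bin\mysql\mysql5.6.12\bin
mysql -u root -p your_magento_db < C:\path\to\magento_sample_data_for_1.6.1.0.sql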
Restart the computer after changing the execution time for it to take effect.
When running a test which makes use of the JMeter-Plugins listeners Response Times vs Threads or Active Threads Over Time, running the test plan remotely produces a results file that is missing the values needed to plot the actual graph; however, when the test is run locally all values are recorded. E.g. when using Response Times vs Threads:
Example of a local result:
1383659591841,59,Example 1,200,OK,Example 1 1-579,text,true,183,22,22,59
Example of a remote result:
1383659859149,43,Example 1,200,OK,Example 1 1-575,text,true,183,43
Note that the last two fields are missing.
I would check the script definition on the two servers: maybe some configuration in the "Write results to file" section of the listener has been changed.
Take the local jmx file and copy it to the remote server.
Also, look for differences in the "# Results file configuration" section of the jmeter.properties file.
Make sure that on all of the slave/remote servers the jmeter.properties file within $JMETER_HOME/bin has the following setting:
jmeter.save.saveservice.thread_counts=true
By default this is set to false (and commented out)
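For example, the relevant part of jmeter.properties on each slave might end up looking like this (a sketch; the CSV output format is an assumption, consistent with the sample result lines above):
# Results file configuration
jmeter.save.saveservice.output_format=csv
jmeter.save.saveservice.thread_counts=true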
For more information:
JMeter Plugins Installation