How to configure number of results in Kibana 4 - kibana-4

We are using Elasticsearch + Kibana 4 for a central logging system. But in the dashboard (in a "search" table) I only get 10 pages of results with 50 entries each. Because our system is composed of several applications cooperating with each other, 500 log entries is often too little.
Is it possible to configure the number of results (per page and/or the number of pages) in Kibana 4? In Kibana 3 I could configure this in the dashboard settings, but I cannot find any such option in Kibana 4.

I'm sorry, I'm not sure what you mean by "search table". Are we talking about a "data table"-type visualization, something to do with the "Discover" tab, or something entirely different?
The former has the following param:
Settings > Object > yourDataTable > edit
"type": "table",
"params": {
"perPage": 10
}
The latter is controlled by the following advanced setting:
Settings > Advanced
discover:sampleSize (Default: 500)
(PS: sorry for posting an answer instead of commenting, I don't have enough rep to add a comment to your question.)

Related

ElasticSearch: Result window is too large

My friend stored 65000 documents on the Elasticsearch cloud and I would like to retrieve all of them (using Python). However, when I run my current script, I get an error saying:
RequestError(400, 'search_phase_execution_exception', 'Result window is too large, from + size must be less than or equal to: [10000] but was [30000]. See the scroll api for a more efficient way to request large data sets. This limit can be set by changing the [index.max_result_window] index level setting.')
My script:
from elasticsearch import Elasticsearch

es = Elasticsearch(cloud_id=cloud_id, http_auth=(username, password))
# Asks for all 65000 documents in a single response, which triggers the error above.
docs = es.search(body={"query": {"match_all": {}}, "_source": ["_id"], "size": 65000})
What would be the easiest way to retrieve all of those documents rather than being limited to 10000 docs? Thanks.
The limit has been set so that the result set does not overwhelm your nodes. Results occupy memory in the Elasticsearch node, so the bigger the result set, the bigger the memory footprint and the impact on the nodes.
Depending on what you want to do with the retrieved documents:
try to use the scroll API (as suggested in your error message) if it's a batch job. Be mindful of the lifetime of the scroll context in that case; see the sketch after these links.
https://www.elastic.co/guide/en/elasticsearch/reference/current/search-request-body.html#request-body-search-scroll
or, use search_after:
https://www.elastic.co/guide/en/elasticsearch/reference/current/search-request-body.html#request-body-search-search-after
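For the batch-job case, a minimal sketch using the Python client's scan helper (it wraps the scroll API; the index name and page size are placeholders) could look like:
from elasticsearch import Elasticsearch
from elasticsearch.helpers import scan

es = Elasticsearch(cloud_id=cloud_id, http_auth=(username, password))

# scan() wraps the scroll API and transparently fetches the hits
# in pages of `size`, so you never request 65000 documents at once.
all_ids = [
    hit["_id"]
    for hit in scan(
        es,
        index="your-index",                 # placeholder index name
        query={"query": {"match_all": {}}},
        size=1000,                          # documents per scroll page
    )
]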
You should use the scroll API and fetch the results over several calls. The scroll API returns results in batches of at most 10,000 (which remain available to consult for the amount of time you indicate in the call), and you can then paginate through them using the returned scroll_id.
The error message itself mentions how you can solve the issue; look carefully at this part of it:
This limit can be set by changing the [index.max_result_window] index level setting.
Please refer to the update index settings API for how to change that.
So for your index it would look like this:
PUT /<your-index-name>/_settings
{
  "index": {
    "max_result_window": 65000
  }
}
(Here 65000 equals the total number of docs in your index.)
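If you prefer to apply this from Python instead of the REST API, a rough equivalent with the elasticsearch client (reusing the es client from your script; the index name is a placeholder) would be:
# Raises the result window limit for this index only.
es.indices.put_settings(
    index="your-index-name",                      # placeholder index name
    body={"index": {"max_result_window": 65000}},
)
Keep in mind the memory caveat mentioned above; for large exports the scroll API is usually the safer choice.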

Elasticsearch posting documents : FORBIDDEN/12/index read-only / allow delete (api)]

Running Elasticsearch version 7.3.0, I posted 50 million documents in my index. When trying to post more documents to Elasticsearch, I keep getting this message:
response code: 403 cluster_block_exception [FORBIDDEN/12/index read-only / allow delete (api)];
Disk watermark exceeded
I have 40 GB of free space and have extended the disk, but I still keep getting this error.
Any thoughts on what could be causing this?
You must have hit the flood-stage watermark at 95%. Where to go from here:
Free disk space (you seem to have done that already).
Optionally change the default settings. 5% of 400GB might be a bit too aggressive a threshold for blocking write operations. You can use either percentages or absolute values for this; the following is just a sample and you might want to pick different values:
PUT _cluster/settings
{
  "transient": {
    "cluster.routing.allocation.disk.watermark.low": "5gb",
    "cluster.routing.allocation.disk.watermark.high": "2gb",
    "cluster.routing.allocation.disk.watermark.flood_stage": "1gb",
    "cluster.info.update.interval": "1m"
  }
}
You must reset the index block (either per affected index or globally with _all):
PUT /_all/_settings
{
  "index.blocks.read_only_allow_delete": null
}
BTW, in 7.4 this will change: once you go below the high watermark, the index will unlock automatically.
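If you are applying these settings from a script rather than the Dev Tools console, a rough equivalent with the Python Elasticsearch client (connection details are assumed, values are the sample ones above) would be:
from elasticsearch import Elasticsearch

es = Elasticsearch()   # assumed: local cluster on the default port

# Relax the watermarks (same sample values as the REST call above).
es.cluster.put_settings(body={
    "transient": {
        "cluster.routing.allocation.disk.watermark.low": "5gb",
        "cluster.routing.allocation.disk.watermark.high": "2gb",
        "cluster.routing.allocation.disk.watermark.flood_stage": "1gb",
        "cluster.info.update.interval": "1m",
    }
})

# Clear the read-only block on all indices.
es.indices.put_settings(index="_all", body={
    "index.blocks.read_only_allow_delete": None,
})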

Error 429 [type=reduce_search_phase_exception]

I have many languages for my docs and am following this pattern: one index per language. There, they suggest searching across all indices with the
/blogs-*/post/_count
pattern. In my case I am getting a count across the indices of how many docs I have. I am running my code concurrently, making many requests at the same time. If I search
/blogs-en/post/_count
or any other single language, then all is fine. However, if I search
/blogs-*/post/_count
I soon encounter:
"Error 429 (Too Many Requests): [reduce] [type=reduce_search_phase_exception]
"
Is there a workaround for this? The same number of requests is made regardless of whether I use
/blogs-en/post/_count or /blogs-*/post/_count.
I have always used the same number of workers in my code, but re-arranging the indices to have one index per language suddenly broke my code.
EDIT: It is a brand new index without any documents when I start the program, and when I get the error I have about 5,000 documents, so it is not under any heavy load.
EDIT: I am using the mapping found in the above-referenced link and running on a local machine with all the ES defaults; in my case, shards=5 and replicas=1. I am really just following the example from the link.
EDIT: The errors appear with as few as 13-20 requests, and I know ES can handle more than that. Searching /blogs-en/post/_count instead of /blogs-*/post/_count, etc. can easily handle thousands with no errors.
EDIT: I have removed all concurrency but can still only make 40-50 requests before I get the error.
I don't get an error for that request, and it returns the total number of documents.
Is your cluster under load?
Anyway, using a simple aggregation you can get the total document count in hits.total and the per-index document count in the count_per_index part of the result:
GET /blogs-*/post/_search
{
  "size": 0,
  "query": {
    "match_all": {}
  },
  "aggs": {
    "count_per_index": {
      "terms": {
        "field": "_index"
      }
    }
  }
}
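If you are issuing the requests from Python, the same aggregation could be run roughly like this (the client setup is an assumption; the index pattern matches the question):
from elasticsearch import Elasticsearch

es = Elasticsearch()   # assumed: local cluster on the default port

resp = es.search(index="blogs-*", body={
    "size": 0,
    "query": {"match_all": {}},
    "aggs": {
        "count_per_index": {"terms": {"field": "_index"}}
    },
})

# On 7.x hits.total is an object with a "value" key; on older versions it is a plain number.
print("total docs:", resp["hits"]["total"])
for bucket in resp["aggregations"]["count_per_index"]["buckets"]:
    print(bucket["key"], bucket["doc_count"])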

Unable to get more than 100 results from the Google Custom Search API

I need to use the Google Custom Search API https://developers.google.com/custom-search/v1/overview. From that page, it says:
For CSE users, the API provides 100 search queries per day for free.
If you need more, you may sign up for billing in the Developers
Console. Additional requests cost $5 per 1000 queries, up to 10k
queries per day.
I have already signed up for billing in the Developers Console. However, I still cannot retrieve more than 100 results. What else should I do? https://www.googleapis.com/customsearch/v1?cx=CSE_INSTANCE&key=API_KEY&q=QUERY&start=100
{ error: { errors: [ { domain: "global", reason: "invalid", message:
"Invalid Value" } ], code: 400, message: "Invalid Value" } }
Query: Definition
https://support.google.com/customsearch/answer/1361951
Any actual user query from a Google Site Search engine, including but
not limited to search engines installed on your website using XML,
iFrame, or the Custom Search Element.
That means you would probably need to send eleven queries to get more than 100 results:
GET https://www.googleapis.com/customsearch/v1?&q=QUERY&...&start=1
GET https://www.googleapis.com/customsearch/v1?&q=QUERY&...&start=11
GET https://www.googleapis.com/customsearch/v1?&q=QUERY&...&start=21
GET ...
GET https://www.googleapis.com/customsearch/v1?&q=QUERY&...&start=81
GET https://www.googleapis.com/customsearch/v1?&q=QUERY&...&start=91
GET https://www.googleapis.com/customsearch/v1?&q=QUERY&...&start=101
Check every response, and if the error code is 400 you can stop: there is probably no need to send the next (&start=previous+10) request.
Now you can merge the responses and start building the results page.
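A rough sketch of that loop in Python with the requests library (CX, API_KEY and QUERY are placeholders) might be:
import requests

# CX, API_KEY and QUERY are placeholders you have to define yourself.
results = []
for start in range(1, 102, 10):                   # 1, 11, 21, ..., 91, 101
    resp = requests.get(
        "https://www.googleapis.com/customsearch/v1",
        params={"cx": CX, "key": API_KEY, "q": QUERY, "start": start},
    )
    if resp.status_code == 400:                   # no further results available
        break
    results.extend(resp.json().get("items", []))

# `results` now holds the merged items from every page that succeeded.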
Google Custom Search and Google Site Search return up to 10 results
per query. If you want to display more than 10 results to the user,
you can issue multiple requests (using the start=0, start=11 ...
parameters) and display the results on a single page. In this case,
Google will consider each request as a separate query, and if you are
using Google Site Search, each query will count towards your limit.
There might be a better way to do this than what I described above. (But I'm not sure about batching API calls.)
And (finally) a possible answer to your question: I ran more than a few tests, but I haven't had any luck with start greater than 100 (I was getting the same as you: <Response [400]>). I'm using a "Browser key" from my billing-enabled project. That could mean we can't get the 101st, 102nd, 103rd, etc. results with the CSE API.
The API documentation says it never returns more than 100 items.
https://developers.google.com/custom-search/v1/reference/rest/v1/cse/list
start
integer (uint32 format)
The index of the first result to return. The default number of results
per page is 10, so &start=11 would start at the top of the second page
of results. Note: The JSON API will never return more than 100
results, even if more than 100 documents match the query, so setting
the sum of start + num to a number greater than 100 will produce an
error. Also note that the maximum value for num is 10.

How to find queries not using indexes or slow queries in MongoDB

Is there a way to find queries in MongoDB that are not using indexes or are slow? In MySQL that is possible with the following settings in the configuration file:
log-queries-not-using-indexes = 1
log_slow_queries = /tmp/slowmysql.log
The equivalent approach in MongoDB is to use the query profiler to track and diagnose slow queries.
With profiling enabled for a database, slow operations are written to the system.profile capped collection (which by default is 1 MB in size). You can adjust the threshold for slow operations (100 ms by default) using the slowms parameter.
First, you must set up profiling, specifying the level that you want. The 3 options are:
0 - profiler off
1 - log slow queries
2 - log all queries
You do this by running your mongod daemon with the --profile option:
mongod --profile 2 --slowms 20
With this, the profiling data will be written to the system.profile collection, on which you can perform queries as follows:
Find all entries for some collection, ordered by ascending timestamp:
db.system.profile.find( { ns:/<db>.<collection>/ } ).sort( { ts: 1 } );
Look for queries that took more than 5 milliseconds:
db.system.profile.find( {millis : { $gt : 5 } } ).sort( { ts: 1} );
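If you prefer to do the same from Python, a small sketch with pymongo (the database name and thresholds are just examples) could be:
from pymongo import MongoClient

client = MongoClient()                 # assumed: local mongod on the default port
db = client["mydb"]                    # example database name

# Enable profiling level 1 (slow operations only) with a 20 ms threshold.
db.command("profile", 1, slowms=20)

# Read back the slow operations recorded so far, oldest first.
for op in db["system.profile"].find({"millis": {"$gt": 5}}).sort("ts", 1):
    print(op.get("ns"), op.get("op"), op.get("millis"))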
You can use the following two mongod options. The first fails queries that do not use an index (v2.4 only); the second records queries slower than some ms threshold (the default is 100 ms).
--notablescan
Forbids operations that require a table scan.
--slowms <value>
Defines the value of "slow" for the --profile option. The database logs all slow queries to the log, even when the profiler is not turned on. When the database profiler is on, mongod writes to the system.profile collection. See the profile command for more information on the database profiler.
You can use the command line tool mongotail to read the profiler log from a console in a more readable format.
First activate the profiler and set the threshold in milliseconds for the profiler to consider an operation slow. In the following example the threshold is set to 10 milliseconds for a database named "sales":
$ mongotail sales -l 1
Profiling level set to level 1
$ mongotail sales -s 10
Threshold profiling set to 10 milliseconds
Then, to see the slow queries in "real time", with some extra information like the time each query took or how many records it needed to "walk" to find a particular result:
$ mongotail sales -f -m millis nscanned docsExamined
2016-08-11 15:09:10.930 QUERY [ops] : {"deleted": {"$exists": false}, "prod_id": "367133"}. 8 returned. nscanned: 344502. millis: 12
2016-08-11 15:09:10.981 QUERY [ops] : {"deleted": {"$exists": false}, "prod_id": "367440"}. 6 returned. nscanned: 345444. millis: 12
....
In case somebody ends up here from Google on this old question: I found that explain really helped me fix specific queries that I could see were causing COLLSCANs in the logs.
Example:
db.collection.find().explain()
This will let you know whether the query is using a COLLSCAN (BasicCursor) or an index (BtreeCursor), among other things.
https://docs.mongodb.com/manual/reference/method/cursor.explain/
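If you query from Python, the same check can be done with pymongo's cursor.explain(); exactly which fields come back depends on your server version (collection name and filter below are hypothetical):
from pymongo import MongoClient

client = MongoClient()                            # assumed: local mongod
coll = client["mydb"]["mycollection"]             # hypothetical names

plan = coll.find({"prod_id": "367133"}).explain() # hypothetical filter
# On MongoDB 3.0+ check the winning plan for COLLSCAN vs IXSCAN;
# older servers instead report a "cursor" field (BasicCursor / BtreeCursor).
print(plan.get("queryPlanner", {}).get("winningPlan", plan))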
While you can obviously use the profiler, a very neat feature of MongoDB that actually made me fall in love with it is MongoDB MMS.
It takes less than 60 seconds to set up and can be managed from anywhere. I am sure you will love it.
https://mms.mongodb.com/

Resources