How to find the default value of, or change, search.allow_expensive_queries in Elasticsearch

I wonder if there is a way to see the default value of the Elasticsearch parameter search.allow_expensive_queries (e.g. in Kibana), and to change it if necessary via the Dev Tools console or the environment section of docker-compose.yml?

By default, search.allow_expensive_queries is set to true. If you want to prevent users from running certain types of expensive queries, you can add this setting to the cluster:
PUT _cluster/settings
{
  "transient": {
    "search.allow_expensive_queries": "false"
  }
}
To check whether the setting has been set in the cluster, call this API:
GET /_cluster/settings
The result should look like the following if it has been set to false (or true):
{
  "persistent" : { },
  "transient" : {
    "search" : {
      "allow_expensive_queries" : "false"
    }
  }
}
If no information is returned by the API in the transient (or persistent) part, the setting is unset and the default of true applies.
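To make the "absent means true" rule concrete, here is a small Python sketch that interprets a parsed GET /_cluster/settings response. It is an illustration, not an official client API: it checks transient settings first (they take precedence over persistent ones) and falls back to the documented default of true when the key is missing.

```python
# Sketch: derive the effective value of search.allow_expensive_queries
# from a GET /_cluster/settings response body parsed into a dict.
# Assumption: absent from both sections means the default (true) applies.

def effective_allow_expensive(settings: dict) -> bool:
    """Return the effective value of search.allow_expensive_queries."""
    # Transient settings override persistent ones, so check them first.
    for section in ("transient", "persistent"):
        value = (
            settings.get(section, {})
            .get("search", {})
            .get("allow_expensive_queries")
        )
        if value is not None:
            # The API returns the value as the string "true" or "false".
            return value == "true"
    return True  # not set anywhere: Elasticsearch default

print(effective_allow_expensive({"persistent": {}, "transient": {}}))  # True
print(effective_allow_expensive(
    {"persistent": {},
     "transient": {"search": {"allow_expensive_queries": "false"}}}
))  # False
```

The same helper works on the response regardless of whether the setting was applied transiently or persistently.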

Related

Trigger an action for each hit of Elasticsearch query in Kibana Monitor

Is it possible to trigger an action for each hit of a given query in a Kibana Monitor? I would like to use a foreach loop to do this as demonstrated here. However, it's unclear how to implement this on the Kibana Monitor page. On the page there is an input field for Trigger Conditions but I'm unsure how to format the foreach within it or if this is supported.
Consider using Elasticsearch Watcher (requires at least a Gold license): https://www.elastic.co/guide/en/elasticsearch/reference/current/how-watcher-works.html
Watcher runs on a configured interval and performs a query against your indices. You define a condition (e.g. the number of hits is greater than 5); when it evaluates to true, an action is performed. Elasticsearch allows multiple actions. For example, you can use a webhook and receive the data from the last watcher run (you can also use the Watcher API to transform the data). If you don't have a Gold license, you can mimic Watcher's behavior with a script or program that uses the Elasticsearch Search API.
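The script-based workaround can be sketched in Python. This is a minimal illustration, not a complete program: the condition check and webhook payload are the only parts shown, and the actual HTTP calls to the Search API and the webhook endpoint (e.g. via urllib.request or the elasticsearch client) are left out as assumptions.

```python
import json

def should_fire(search_response: dict, min_hits: int = 1) -> bool:
    """Condition: fire when the query matched at least min_hits documents."""
    total = search_response["hits"]["total"]
    # ES 7+ wraps the total as {"value": N, "relation": ...}; ES 6 returns N.
    if isinstance(total, dict):
        total = total["value"]
    return total >= min_hits

def build_webhook_payload(search_response: dict) -> str:
    """Mirror Watcher's {{#toJson}}ctx.payload{{/toJson}}:
    serialize the whole search response for the webhook body."""
    return json.dumps(search_response)

# A polling loop would run these on an interval, roughly:
#   while True: resp = <search API call>; if should_fire(resp): <POST webhook>
```

The two helpers correspond directly to the condition and the webhook body of the watcher example below.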
Below is a simple example of a watcher that checks an index named test every minute and sends a webhook with the entire search context whenever there is at least one matching document.
{
  "trigger" : {
    "schedule" : { "interval" : "1m" }
  },
  "input" : {
    "search" : {
      "request" : {
        "indices" : [ "test" ],
        "body" : {
          "query" : {
            "bool" : {
              "must" : {
                "range" : {
                  "updatedAt" : {
                    "gte" : "now-1m"
                  }
                }
              }
            }
          }
        }
      }
    }
  },
  "condition" : {
    "compare" : { "ctx.payload.hits.total" : { "gt" : 0 } }
  },
  "actions" : {
    "sample_webhook" : {
      "webhook" : {
        "method" : "POST",
        "url" : "http://b4022015b928.ngrok.io/UtilsService/api/elasticHandler/watcher",
        "body" : "{{#toJson}}ctx.payload{{/toJson}}",
        "auth" : {
          "basic" : {
            "user" : "user",
            "password" : "pass"
          }
        }
      }
    }
  }
}
An alternative is to use Kibana Alerts and Actions.
https://www.elastic.co/guide/en/kibana/current/alerting-getting-started.html
This feature is slightly different from Watcher but likewise lets you perform actions based on a query against Elasticsearch. Note that it is part of Kibana, as opposed to Watcher, which is part of Elasticsearch (though accessible from Kibana's Stack Management).

How to read from only one index and set the other one as write when searching an alias in ElasticSearch 7.6?

I know it's possible to define two indices in an alias where one index has the is_write_index set to true while the other has it set to false -
POST /_aliases
{
  "actions" : [
    {
      "add" : {
        "index" : "test_index_1",
        "alias" : "my_alias",
        "is_write_index" : true
      }
    }
  ]
}
POST /_aliases
{
  "actions" : [
    {
      "add" : {
        "index" : "test_index_2",
        "alias" : "my_alias",
        "is_write_index" : false
      }
    }
  ]
}
As you can see, I've defined two indices test_index_1 and test_index_2 where the first one is a write index while the second one isn't.
Now, I want to query my_alias in such a way that searches happen only on test_index_2 (which has is_write_index set to false) while I write data to test_index_1, instead of reading from both indices, which is the default behaviour. In other words, I don't want search results to come from the index where is_write_index is set to true.
Is this possible? I've tried setting index.blocks.read to true on the write index, but then search queries on the alias fail with an exception. Instead, I want reads on the alias to query only the index that has is_write_index set to false.
How can I achieve this?
This can be achieved by using filtered aliases.
The way you do this is to apply a custom filter while adding the write index to the alias. The filter property defines a bool condition that filters the data in this index and presents it as a new view of the dataset; all searches through the alias run against that view. So, if you want to avoid reading from the index you're currently writing to, apply a filter that no document can satisfy, for example a must_not wrapping an exists query on a field that every document contains:
POST /_aliases
{
  "actions": [
    {
      "add": {
        "index": "test_index_2",
        "alias": "my_alias",
        "is_write_index": true,
        "filter": {
          "bool": {
            "must_not": {
              "exists": {
                "field": "<field_that_always_exists_in_your_documents>"
              }
            }
          }
        }
      }
    }
  ]
}
Once you're done writing the data, update the alias by removing the filter property to allow reads from both the indices.
You are using this feature incorrectly. If you use an alias for search, it will always read across all underlying indices. is_write_index is provided to support rollover and index patterns, where writes go to one index but reads happen across all indices with the same alias or index pattern.
If your intent is to load data into one index while the application continues to read from the old index, you should use two separate aliases, one for reads and one for writes, and devise a strategy to swap the aliases between the indices once data loading is complete.
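The two-alias swap described above can be sketched as follows. This is an illustration under assumed names (my_read_alias, my_write_alias are placeholders): it builds the body of a single POST /_aliases call, which Elasticsearch applies atomically, so readers never see a half-swapped state.

```python
# Sketch: build the _aliases request body that atomically swaps a read
# alias and a write alias between two indices after loading finishes.
# Alias names are illustrative defaults, not real conventions.

def build_swap_actions(old_index: str, new_index: str,
                       read_alias: str = "my_read_alias",
                       write_alias: str = "my_write_alias") -> dict:
    """Body for POST /_aliases: point reads at the freshly loaded index
    and writes at the previous one, in one atomic step."""
    return {
        "actions": [
            {"remove": {"index": old_index, "alias": read_alias}},
            {"add": {"index": new_index, "alias": read_alias}},
            {"remove": {"index": new_index, "alias": write_alias}},
            {"add": {"index": old_index, "alias": write_alias}},
        ]
    }
```

Sending the returned dict as the body of one POST /_aliases request performs the whole swap in a single cluster-state update.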

Elasticsearch settings not applied

I'm trying to change the logging level of elasticsearch like this:
PUT /_cluster/settings
{
  "transient" : {
    "logger.discovery" : "DEBUG"
  }
}
I performed the PUT, and got a response:
{
  "acknowledged": true,
  "persistent": {},
  "transient": {
    "logger": {
      "discovery": "DEBUG"
    }
  }
}
I'm expecting the log-level to change immediately to DEBUG, but it's still on INFO.
Any ideas, what the problem is, or how to debug this problem?
I assume you want to set the root log level and not just discovery to debug:
PUT /_cluster/settings
{
  "transient" : {
    "logger._root" : "DEBUG"
  }
}
For Elasticsearch 5 you need a different command (with the full package name in it):
PUT /_cluster/settings
{
  "persistent": {
    "logger.org.elasticsearch.discovery": "DEBUG"
  }
}
Relevant documentation: https://www.elastic.co/guide/en/elasticsearch/reference/5.1/misc-cluster.html#cluster-logger
You can change the log level in the following file
/etc/elasticsearch/log4j.properties
In there, you can change the value for the logger you want, or simply set the rootLogger.level to debug. Prepare for an avalanche of logs if you do so.
You need to restart the service for this to be effective.

How to add "_routing.path" without downtime & reindexing in Elastic 1.x?

Elasticsearch 1.x allows you to define in the mapping a default path for extracting the required routing field, e.g.:
{
  "comment" : {
    "_routing" : {
      "required" : true,
      "path" : "blog.post_id"
    }
  }
}
Is it possible to add that field on the fly, without downtime?
So the mapping was previously defined as:
{
  "comment" : {
    "_routing" : {
      "required" : true
    }
  }
}
The update will not work. Even if the command is acknowledged, it will not be applied.
You would need to reindex the documents as well. If the routing path changes and the values differ, documents could end up in a different shard than the one they currently occupy. So, assuming the change were possible, you would effectively be changing the hash by which documents are routed to, and retrieved (via GET) from, shards, and it would be a mess.
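The shard-placement argument above can be made concrete with a toy model: Elasticsearch places a document on shard hash(routing) % number_of_shards. The sketch below uses CRC32 as a stand-in for Elasticsearch's real murmur3-based routing function, so the exact shard numbers are illustrative only; the point is that a different routing value generally yields a different shard, stranding existing documents.

```python
# Toy illustration of why changing _routing.path requires reindexing:
# the target shard is a function of the routing value. CRC32 stands in
# for Elasticsearch's actual murmur3-based routing hash (an assumption).
import zlib

def shard_for(routing_value: str, number_of_shards: int = 5) -> int:
    """Model: shard = hash(routing) % number_of_shards."""
    return zlib.crc32(routing_value.encode()) % number_of_shards

# Before the mapping change a comment might be routed by its own id;
# afterwards by blog.post_id. Unless the two values happen to hash to
# the same shard, a GET using the new routing looks in the wrong shard
# for documents indexed under the old routing.
old_shard = shard_for("comment-17")      # routed by comment id
new_shard = shard_for("blog-42")         # routed by blog.post_id
```

The same reasoning applies to any routing change: only a full reindex re-places every document consistently under the new scheme.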

How to undo setting Elasticsearch Index to readonly?

So I just set one of my indices to readonly, and now want to delete it.
To set it to readonly:
PUT my_index/_settings
{ "index": { "index.blocks.read_only" : true } }
When I tried to delete it I got this response:
ClusterBlockException[blocked by: [FORBIDDEN/5/index read-only (api)];]
Then I tried to set the index to readonly false:
PUT my_index/_settings
{ "index": { "index.blocks.read_only" : false } }
But that gives the same error message as above. So how to set readonly back to false?
The existing answers are quite old, so I'll add an Elasticsearch 6+ answer too:
PUT /[_all|<index-name>]/_settings
{
  "index.blocks.read_only_allow_delete": null
}
https://www.elastic.co/guide/en/elasticsearch/reference/6.x/disk-allocator.html
FYI (for context): I ran into read-only indices due to running out of disk and got error messages from logstash:
...retrying failed action with response code: 403 ({"type"=>"cluster_block_exception", "reason"=>"blocked"
elasticsearch:
ClusterBlockException[blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];]
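For scripting the reset described above, here is a small Python sketch that assembles the request to clear the block on every index (setting the value to null removes it). It only builds the method, path, and body; actually sending the request (e.g. with urllib.request or curl against your cluster's host and port) is left out, and the _all default is just the broadest option.

```python
# Sketch: build the request that clears index.blocks.read_only_allow_delete.
# Setting the value to null (None in Python) removes the block.
import json

def build_reset_request(index: str = "_all"):
    """Return (method, path, body) for the settings update."""
    body = json.dumps({"index.blocks.read_only_allow_delete": None})
    return "PUT", f"/{index}/_settings", body
```

Passing a specific index name instead of the _all default limits the reset to that index.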
The correct way to make an ES index read-only is
PUT your_index/_settings
{
  "index": {
    "blocks.read_only": true
  }
}
Change true to false to undo it.
You set a non-dynamic setting with
{
  "index": {
    "blocks.read_only": false
  }
}
which I think was not your intention. Also, you should have seen an error during the first operation itself, as non-dynamic settings can only be updated on closed indices. Run
POST your_index/_close
and then try changing it.
curl -X PUT "localhost:9200/_all/_settings" -H 'Content-Type: application/json' -d '{ "index.blocks.read_only" : false }'
In version 2.x of Elasticsearch (ES) you have to do the following:
PUT your_index/_settings
{
  "index": {
    "blocks": {
      "write": "false",
      "read_only": "false"
    }
  }
}
When you set read_only to true, ES internally sets the write block to true as well; just reverting read_only to false still does not allow you to update the index, so you have to reset the write setting explicitly.
If you have Kibana installed, you can go to your kibana url:
Management (Left pane) -> Elasticseach Index Management -> Select your Index -> Edit Settings
then update:
"index.blocks.read_only_allow_delete": "false"
Also, to apply it globally, go to Dev Tools (left pane) in Kibana and make the following request:
PUT _settings
{
  "index": {
    "blocks": {
      "read_only_allow_delete": "false"
    }
  }
}
For 6.x to get the settings:
curl elasticsearch-sc:9200/_settings?pretty
To make the indices / cluster writable:
curl -XPUT -H "Content-Type: application/json" \
http://elasticsearch-sc:9200/_all/_settings \
-d '{"index.blocks.read_only_allow_delete": null}'
