Elasticsearch settings not applied

I'm trying to change the logging level of elasticsearch like this:
PUT /_cluster/settings
{
  "transient": {
    "logger.discovery": "DEBUG"
  }
}
I performed the PUT, and got a response:
{
  "acknowledged": true,
  "persistent": {},
  "transient": {
    "logger": {
      "discovery": "DEBUG"
    }
  }
}
I'm expecting the log level to change to DEBUG immediately, but it's still on INFO.
Any ideas what the problem is, or how to debug it?

I assume you want to set the root log level and not just discovery to debug:
PUT /_cluster/settings
{
  "transient": {
    "logger._root": "DEBUG"
  }
}

For Elasticsearch 5 you need a different command (with the full package name in it):
PUT /_cluster/settings
{
  "persistent": {
    "logger.org.elasticsearch.discovery": "DEBUG"
  }
}
Relevant documentation: https://www.elastic.co/guide/en/elasticsearch/reference/5.1/misc-cluster.html#cluster-logger
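If you want to apply the same change from a script, a minimal sketch using only the Python standard library is below. The helper names and the localhost host are my assumptions; the endpoint and body shape are the ones from the answer above, and the cluster is assumed to be unsecured.

```python
import json
import urllib.request

def logger_settings_body(logger_name, level):
    """Build the request body for a transient per-logger level change."""
    return {"transient": {"logger." + logger_name: level}}

def set_logger_level(logger_name, level, host="http://localhost:9200"):
    """PUT the settings change to an (assumed unsecured, local) cluster."""
    req = urllib.request.Request(
        host + "/_cluster/settings",
        data=json.dumps(logger_settings_body(logger_name, level)).encode(),
        method="PUT",
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # On success the cluster answers with {"acknowledged": true, ...}
        return json.load(resp)
```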

You can change the log level in the following file
/etc/elasticsearch/log4j.properties
In there, you can change the level for the logger you want, or simply set rootLogger.level to debug. Prepare for an avalanche of logs if you do so.
You need to restart the service for the change to take effect.
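For Elasticsearch 5+ the file uses log4j2 syntax; the relevant lines would look roughly like this. The logger id `discovery` is just a label I chose, so verify the keys against the log4j2.properties shipped with your version:

```
# raise everything (very noisy)
rootLogger.level = debug

# or target a single package instead
logger.discovery.name = org.elasticsearch.discovery
logger.discovery.level = debug
```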

Related

How to find default value/change search.allow_expensive_queries in Elasticsearch

I wonder if there is a way to see the default value of the Elasticsearch parameter search.allow_expensive_queries (e.g. in Kibana), and to change it if necessary via the Dev Tools console or the environment section of docker-compose.yml?
By default, search.allow_expensive_queries is set to true. If you want to prevent users from running certain types of expensive queries, you can add this setting to the cluster:
PUT _cluster/settings
{
  "transient": {
    "search.allow_expensive_queries": "false"
  }
}
To check if the setting is set or not in the cluster you can call this API:
GET /_cluster/settings
and the result should look like the following if the setting has been explicitly set (here to false):
{
  "persistent" : { },
  "transient" : {
    "search" : {
      "allow_expensive_queries" : "false"
    }
  }
}
If the setting does not appear in the response, the default value (true) applies.
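The lookup logic above can be captured in a small helper: transient settings override persistent ones, and an absent key means the default (true). This is a sketch; the function name is mine, not part of any Elasticsearch client.

```python
def allow_expensive_queries(cluster_settings):
    """Derive the effective search.allow_expensive_queries value from a
    GET /_cluster/settings response body (already parsed to a dict)."""
    for section in ("transient", "persistent"):  # transient wins
        value = (cluster_settings.get(section, {})
                                 .get("search", {})
                                 .get("allow_expensive_queries"))
        if value is not None:
            return value == "true"
    return True  # default when the setting was never set
```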

cluster.routing.allocation.enable: none seems not working

I disabled shard allocation with the following snippet:
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.enable": "none"
  }
}
I double-checked with GET _cluster/settings and confirmed it has been set to none.
But when I use the following snippet to move a shard between nodes, the move succeeds.
So it looks like "cluster.routing.allocation.enable": "none" doesn't take effect?
POST /_cluster/reroute
{
  "commands": [
    {
      "move": {
        "index": "lib38",
        "shard": 0,
        "from_node": "node-1",
        "to_node": "node-3"
      }
    }
  ]
}
Manually rerouting a shard will always take precedence over configuration.
cluster.routing.allocation.enable merely tells the cluster that automatic reallocation should not take place.
In your other question, you were concerned about automatic rebalancing, it seems.
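Since an explicit reroute bypasses the allocation setting, any move you issue like the one in the question will go through. If you script such moves, a minimal body builder might look like this (the index and node names are the question's placeholders, and the helper name is mine):

```python
def reroute_move_body(index, shard, from_node, to_node):
    """Build the body for POST /_cluster/reroute with a single move command."""
    return {
        "commands": [
            {
                "move": {
                    "index": index,
                    "shard": shard,
                    "from_node": from_node,
                    "to_node": to_node,
                }
            }
        ]
    }
```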

Automatically generate and send kibana dashboard reports via an Email

I have a 3-node ELK cluster (all version 6): on the 1st node I have Elasticsearch and Kibana, on the 2nd Elasticsearch and Logstash, and on the 3rd only Elasticsearch, which is an ingest node.
I have 4 servers which send me data via Filebeat and Metricbeat.
Everything is working fine; I even have X-Pack version 6. There is a manual process for generating a PDF of a dashboard, which I have tried.
I want to automatically generate reports at a certain time and email them to me.
I read about watchers and email configuration in the elasticsearch.yml file, and I did that, but I want it to happen automatically. And I am not trying Skedler or PhantomJS.
If I am missing anything, help me out. Thank you.
Here is an example from the documentation on how to generate a report with Watcher:
PUT _xpack/watcher/watch/error_report
{
  "trigger": {
    "schedule": {
      "interval": "1h"
    }
  },
  "actions": {
    "email_admin": {
      "email": {
        "to": "'Recipient Name <recipient@example.com>'",
        "subject": "Error Monitoring Report",
        "attachments": {
          "error_report.pdf": {
            "reporting": {
              "url": "http://0.0.0.0:5601/api/reporting/generate/dashboard/Error-Monitoring?_g=(time:(from:now-1d%2Fd,mode:quick,to:now))",
              "retries": 6,
              "interval": "1s",
              "auth": {
                "basic": {
                  "username": "elastic",
                  "password": "changeme"
                }
              }
            }
          }
        }
      }
    }
  }
}
Basically you just need an API call to get this done.
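The trickiest part of that watch is usually the reporting URL with its rison-encoded time range. A small sketch for building it; the function name is mine, and the dashboard name and Kibana host are just the example's placeholders:

```python
def reporting_url(kibana_host, dashboard, time_from="now-1d/d", time_to="now"):
    """Build a Kibana reporting generation URL like the one in the watch
    above; '/' in the rison time range must be escaped as %2F."""
    rison = "(time:(from:{},mode:quick,to:{}))".format(
        time_from.replace("/", "%2F"), time_to
    )
    return "{}/api/reporting/generate/dashboard/{}?_g={}".format(
        kibana_host, dashboard, rison
    )
```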

How to undo setting Elasticsearch Index to readonly?

So I just set one of my indices to readonly, and now want to delete it.
To set it to readonly:
PUT my_index/_settings
{
  "index": {
    "index.blocks.read_only": true
  }
}
When I tried to delete it I got this response:
ClusterBlockException[blocked by: [FORBIDDEN/5/index read-only (api)];]
Then I tried to set the index to readonly false:
PUT my_index/_settings
{
  "index": {
    "index.blocks.read_only": false
  }
}
But that gives the same error message as above. So how to set readonly back to false?
The answers here are really old, so I'll add an Elasticsearch 6+ answer too:
PUT /[_all|<index-name>]/_settings
{
  "index.blocks.read_only_allow_delete": null
}
https://www.elastic.co/guide/en/elasticsearch/reference/6.x/disk-allocator.html
FYI (for context): I ran into read-only indices after running out of disk, and got error messages from Logstash:
...retrying failed action with response code: 403 ({"type"=>"cluster_block_exception", "reason"=>"blocked"
elasticsearch:
ClusterBlockException[blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];]
The correct way to make an ES index read-only is
PUT your_index/_settings
{
  "index": {
    "blocks.read_only": true
  }
}
Change true to false to undo it.
You set a non-dynamic setting with
{
  "index": {
    "blocks.read_only": false
  }
}
which I think was not your intention. Also, I think you should have seen an error during the first operation itself, as non-dynamic settings can only be updated on closed indices.
Run
POST your_index/_close
and then try changing it.
curl -X PUT "localhost:9200/_all/_settings" -H 'Content-Type: application/json' -d '{ "index.blocks.read_only" : false }'
In version 2.x of Elasticsearch (ES) you have to do the following:
PUT your_index/_settings
{
  "index": {
    "blocks": {
      "write": "false",
      "read_only": "false"
    }
  }
}
When you set read_only to true, ES internally sets the write block as well, and just reverting read_only to false still does not let you update the index, so you have to reset the write setting explicitly.
If you have Kibana installed, you can go to your Kibana URL:
Management (left pane) -> Elasticsearch Index Management -> Select your index -> Edit settings
then update:
"index.blocks.read_only_allow_delete": "false"
Also, to set it globally on kibana you can go to dev tools (left pane) and make the following request:
PUT _settings
{
  "index": {
    "blocks": {
      "read_only_allow_delete": "false"
    }
  }
}
For 6.x to get the settings:
curl elasticsearch-sc:9200/_settings?pretty
To make the indices / cluster writable:
curl -XPUT -H "Content-Type: application/json" \
http://elasticsearch-sc:9200/_all/_settings \
-d '{"index.blocks.read_only_allow_delete": null}'
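The same reset can be done from Python with only the standard library; JSON null (None in Python) removes the read_only_allow_delete block. This is a sketch assuming a local, unsecured cluster; the function name is mine:

```python
import json
import urllib.request

def clear_read_only_allow_delete(host="http://localhost:9200"):
    """Send the setting as null to remove the block from all indices."""
    body = json.dumps({"index.blocks.read_only_allow_delete": None}).encode()
    req = urllib.request.Request(
        host + "/_all/_settings",
        data=body,
        method="PUT",
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```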

how to log or print python elasticsearch-dsl query that gets invoked

I am using elasticsearch-dsl in my Python application to query Elasticsearch.
To debug, I want to see the query actually generated by the elasticsearch-dsl library, but I am unable to log or print the final query that goes to Elasticsearch.
For example, I'd like to see the request body sent to Elasticsearch, like this:
{
  "query": {
    "query_string": {
      "query": "Dav*",
      "fields": ["name", "short_code"],
      "analyze_wildcard": true
    }
  }
}
I tried raising the Elasticsearch log level to TRACE; even then, I was unable to see the queries that were executed.
Take a look at my blog post, in the "Slowlog settings at index level" section. Basically, you can use the slowlog to have Elasticsearch print the queries into a separate log file it generates. I suggest using a very low threshold to be able to see all the queries.
For example, something like this, for a specific index:
PUT /test_index/_settings
{
  "index": {
    "search.slowlog.level": "trace",
    "search.slowlog.threshold.query.trace": "1ms"
  }
}
Or
PUT /_settings
{
  "index": {
    "search.slowlog.level": "trace",
    "search.slowlog.threshold.query.trace": "1ms"
  }
}
as a cluster-wide setting, for all the indices.
And the queries will be logged in your /logs location, a file called [CLUSTER_NAME]_index_search_slowlog.log.
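An alternative to the slowlog, entirely on the client side: the elasticsearch-py client, which elasticsearch-dsl drives under the hood, emits each request body on the "elasticsearch.trace" logger as a reproducible curl-style command. A sketch of enabling it (the function name is mine):

```python
import logging

def enable_query_tracing():
    """Turn on DEBUG output for the elasticsearch-py trace logger so every
    request body sent to the cluster is printed to stderr."""
    trace_logger = logging.getLogger("elasticsearch.trace")
    trace_logger.setLevel(logging.DEBUG)
    trace_logger.addHandler(logging.StreamHandler())
    return trace_logger
```

If you only need the body of a single elasticsearch-dsl Search object, calling its to_dict() method also returns the dict that would be serialized into the request.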
