Security privilege exception when creating an alias in Elasticsearch

When we try to execute the alias swap query below with a custom user, it throws: action [indices:admin/aliases] is unauthorized for user.
POST /_aliases
{
  "actions" : [
    { "add": { "index": "test_2", "alias": "test" } },
    { "remove_index": { "index": "test" } }
  ]
}
But with the same user, the following alias creation query works fine.
POST /_aliases
{
  "actions" : [
    { "add": { "index": "test_2", "alias": "test" } }
  ]
}
The custom user has all privileges for indices starting with test* (we want this user to be used only for managing indices and aliases starting with test).
If we grant this user access to all indices, the first query works. I don't understand why the user needs permission on all indices when I only need to delete an index starting with test.

The user should be assigned a role with the following index privileges:
Indices: test*
Privileges: manage
This will ensure the user has all the privileges needed to manage the indices and aliases that start with test.
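As a rough sketch using the Elasticsearch security API (the role name test_manager is illustrative, and on a secured cluster this call must be made by a user allowed to manage roles):
PUT /_security/role/test_manager
{
  "indices": [
    {
      "names": [ "test*" ],
      "privileges": [ "manage" ]
    }
  ]
}
Assigning this role to the custom user grants the manage privilege only on indices matching test*.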

Elasticsearch alias not pointing to new indexes created by the rollover strategy

Elasticsearch: 7.15.2
We have an alias created in this way:
POST _aliases
{
  "actions": [
    {
      "add": {
        "index": "my-index-*",
        "alias": "my-alias",
        "is_write_index": false
      }
    }
  ]
}
And if I get the alias info I can see:
GET _alias/my-alias
{
  "my-index-2021.11.30-000001" : {
    "aliases" : {
      "my-alias" : { }
    }
  }
}
However, there is another index, my-index-2021.11.30-000002, which was created automatically by the rollover policy, but the previously created alias my-alias does not point to it.
If I create a new alias from scratch with the same index pattern I can see both:
POST _aliases
{
  "actions": [
    {
      "add": {
        "index": "my-index-*",
        "alias": "my-alias-2",
        "is_write_index": false
      }
    }
  ]
}
GET _alias/my-alias-2
{
  "my-index-2021.11.30-000001" : {
    "aliases" : {
      "my-alias-2" : { }
    }
  },
  "my-index-2021.11.30-000002" : {
    "aliases" : {
      "my-alias-2" : { }
    }
  }
}
Is there something I am missing? I was expecting the *-000002 index to also be pointed to by the alias my-alias without any manual operation.
The rollover policy just creates a new index when the index size is greater than X GB.
Or maybe I have to modify the index template in order to add the "read" alias automatically? I already have an alias specified in the template, but that is the write alias for the rollover policy (which I cannot use for search because of our custom Elasticsearch configuration):
{
  "index": {
    "lifecycle": {
      "rollover_alias": "my-write-alias"
    }
  }
}
When you create an alias using POST _aliases, it is only added to the matching indexes that exist at that moment; if a new index matching the pattern is created later, the alias will not be added to it.
What you need to do is:
create an index template containing your alias definition
assign your rollover policy in your template's index settings (i.e. the index.lifecycle.name and index.lifecycle.rollover_alias settings)
Basically, like this:
PUT _index_template/my_index_template
{
  "index_patterns": ["my-index-*"],
  "template": {
    "aliases": {
      "my-alias": {}
    },
    "mappings": {
      ...
    },
    "settings": {
      "index.lifecycle.name": "my_policy",
      "index.lifecycle.rollover_alias": "my_write_alias"
    }
  }
}
After this is set up, every time the lifecycle policy creates a new index my-index-2021.MM.dd-000xyz, that new index will be pointed to by my-alias. Also worth noting that my_write_alias will always point to the latest index of the sequence, i.e. the write index.
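Once the template is in place, you can verify that indices created by rollover pick up the read alias automatically, for example:
GET _alias/my-alias
This should now list every my-index-* index created after the template was added, without any manual _aliases call.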

Kibana search pattern issue

I am trying to create an Elasticsearch query for one of my library projects. I am trying to use regex but I do not get any results. This is the regex query I am entering:
GET /manifestation_v1/_search
{
  "query": {
    "regexp": {
      "bibliographicInformation.title": {
        "value": "python access*"
      }
    }
  }
}
The trailing part is a wildcard, so I want a query that matches python access* rather than the literal python access.
Can anyone with Kibana experience help me out?
You can try a wildcard query:
{
  "query": {
    "wildcard": {
      "bibliographicInformation.title": {
        "value": "saba safavi*"
      }
    }
  }
}
You need to run the regex query on the keyword field and use .* instead of *, e.g.:
GET /manifestation_v1/_search
{
  "query": {
    "regexp": {
      "bibliographicInformation.title.keyword": {
        "value": "python access.*"
      }
    }
  }
}
Regex is slower; you can also try a prefix query:
{
  "query": {
    "prefix": {
      "bibliographicInformation.title.keyword": {
        "value": "python access"
      }
    }
  }
}
If the field is of nested type, then you need to use a nested query, as sketched below.
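A minimal sketch of such a nested query, assuming bibliographicInformation were mapped as a nested object (the path here is illustrative):
GET /manifestation_v1/_search
{
  "query": {
    "nested": {
      "path": "bibliographicInformation",
      "query": {
        "prefix": {
          "bibliographicInformation.title.keyword": {
            "value": "python access"
          }
        }
      }
    }
  }
}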
Update
For the "text" type, the field is stored as tokens, i.e.
"python access" is stored as ["python", "access"]. Your query is trying to match "python access*" against each of these tokens individually. You need to query against the keyword field, which is stored as the single value "python access".

Trigger an action for each hit of Elasticsearch query in Kibana Monitor

Is it possible to trigger an action for each hit of a given query in a Kibana Monitor? I would like to use a foreach loop to do this as demonstrated here. However, it's unclear how to implement this on the Kibana Monitor page. On the page there is an input field for Trigger Conditions but I'm unsure how to format the foreach within it or if this is supported.
Consider using Elasticsearch Watcher (requires at least a Gold license): https://www.elastic.co/guide/en/elasticsearch/reference/current/how-watcher-works.html
Watcher runs on a configured interval and performs a query against your indices. You need to define a condition (e.g. the number of hits is greater than 5) that, when it evaluates to true, triggers one or more actions. For example, you can use a webhook action and receive the data from the last watcher run (you can also use the Watcher API to transform the data). If you don't have a Gold license, you can mimic Watcher's behavior with a script/program that uses the Elasticsearch Search API.
Here is a simple example of a watcher that checks an index named test every minute and sends a webhook with the entire search context whenever there is at least one matching document:
{
  "trigger" : {
    "schedule" : { "interval" : "1m" }
  },
  "input" : {
    "search" : {
      "request" : {
        "indices" : [ "test" ],
        "body" : {
          "query" : {
            "bool": {
              "must": {
                "range": {
                  "updatedAt": {
                    "gte": "now-1m"
                  }
                }
              }
            }
          }
        }
      }
    }
  },
  "condition" : {
    "compare" : { "ctx.payload.hits.total" : { "gt" : 0 } }
  },
  "actions" : {
    "sample_webhook" : {
      "webhook" : {
        "method" : "POST",
        "url": "http://b4022015b928.ngrok.io/UtilsService/api/elasticHandler/watcher",
        "body" : "{{#toJson}}ctx.payload{{/toJson}}",
        "auth": {
          "basic": {
            "user": "user",
            "password": "pass"
          }
        }
      }
    }
  }
}
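For reference, a watch like this is stored under an ID via the Watcher API and can be test-fired on demand; the watch ID below is illustrative:
# Store the watch (the JSON document above goes in the request body)
PUT _watcher/watch/test_webhook_watch
{ ... }

# Execute it immediately to verify the webhook fires
POST _watcher/watch/test_webhook_watch/_execute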
An alternative would be to use Kibana Alerts and Actions:
https://www.elastic.co/guide/en/kibana/current/alerting-getting-started.html
This feature is slightly different from Watcher, but it basically allows you to perform actions based on a query against Elasticsearch. It is part of Kibana only, as opposed to Watcher, which is part of Elasticsearch (though accessible from Kibana Stack Management).

How to auto-apply an index policy to newly created indices in AWS Elasticsearch

We push Nginx logs to AWS Elasticsearch using Filebeat and Logstash. We have created index patterns named nginx-error-logs* and nginx-access-logs*. We can see in Kibana that new indices are created daily based on the Nginx log file date pattern. We created an index policy and applied it to the existing indices, but we would like to auto-apply the same ISM policy to all newly created indices in Elasticsearch. How can we achieve this?
Is this the correct format to apply in the Dev Tools console?
PUT _template/testindex_template
{
  "index_patterns": ["*"],
  "settings": {
    "opendistro.index_state_management.policy_id": "index_lifecycle_management_policy"
  }
}
Or should that be applied in the Filebeat or Logstash config?
The opendistro.index_state_management.policy_id setting is deprecated.
You have to add your index pattern to the ism_template array of the policy. Below is an example:
PUT _opendistro/_ism/policies/policy_name
{
  "policy": {
    "description": "Policy to manage indices",
    "default_state": "hot",
    "states" : [
      {
        "name" : "hot",
        "actions" : [
          {
            "rollover" : {
              "min_size" : "20gb",
              "min_index_age" : "2d"
            }
          }
        ]
      }
    ],
    "ism_template": {
      "index_patterns": [
        "nginx-error-logs*", // sample index patterns
        "nginx-access-logs*"
      ],
      "priority": 100
    }
  }
}
Whenever a new index is created, its name is matched against the ism_template patterns and the respective policy is applied.
If the same pattern is present in multiple policies, the policy with the highest priority is attached.
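To verify that a newly created index picked up the policy, the ISM explain API shows the policy attached to an index (the index name here is illustrative):
GET _opendistro/_ism/explain/nginx-access-logs-2021.12.01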

How to limit the max number of Elasticsearch documents in an index?

I've installed an Elasticsearch (version 7.x) cluster and created a new index. I want to limit the maximum number of documents in this index, say 10,000 documents at most.
The naive solution is to query the number of documents before inserting a new one, but this method can be inaccurate and performs poorly (two requests per insert).
How do I do it right?
The best practice is to use Index Lifecycle Management (ILM), which is included in the Basic license and enabled by default in Elasticsearch 7.3+.
You can set a rollover action based on the number of documents (here, a maximum of 5 docs):
PUT _ilm/policy/my_policy
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": {
            "max_docs": 5
          }
        }
      }
    }
  }
}
Now I create a template that references the policy my_policy:
PUT _template/my_template
{
  "index_patterns": [
    "my-index*"
  ],
  "settings": {
    "index.blocks.read_only" : true,
    "index.lifecycle.name": "my_policy",
    "index.lifecycle.rollover_alias": "my-index"
  }
}
Note that I put the setting "index.blocks.read_only": true in the template because, when the rollover is applied, it will create the new index with the read_only parameter.
Now I can create the first index:
PUT my-index-000001
{
  "settings": {
    "index.blocks.read_only": false
  },
  "aliases": {
    "my-index": {
      "is_write_index": true
    }
  }
}
That's it! After 5 documents, rollover will create a new read-only index and the alias will point its writes at that one.
You can test by indexing some new docs through the alias:
PUT my-index/_doc/1
{
  "field" : "value"
}
Also, by default the ILM policy conditions are only checked every 10 minutes; you can change that for testing:
PUT /_cluster/settings
{
  "persistent": {
    "indices.lifecycle.poll_interval": "5s"
  }
}
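To follow what the policy is doing while testing, the ILM explain API reports each index's current phase, action, and step:
GET my-index-*/_ilm/explain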
