I have a filtered alias in Elasticsearch that I've created using "_all" as the index it is bound to, like so:
curl -XPOST "localhost:9200/_aliases" -d'
{
"actions": [
{
"add": {
"index": "_all",
"alias": "logs",
"filter": { "type": { "value": "log" } }
}
}
]
}'
I created this alias because the logs are being placed in different indices (by month actually), and I need to see the aggregate. The problem I'm having is that whenever a new index is created, this alias is not updated. The alias seems to only reference the indices that existed when the alias was created.
Is there a way to have the alias update when new indices are added? Or is there a better approach altogether to achieve what I'm trying to do here?
You actually need an index template; more about it here.
And here's an example, for your specific case:
PUT /_template/logs_template
{
"template": "*",
"aliases": {
"logs": {
"filter": {
"type": {
"value": "log"
}
}
}
}
}
The above basically says that for each new index, whatever its name ("*"), associate the "logs" alias with it.
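To check that new indices do pick up the alias, you can create an index (any name matches the "*" pattern; the name below is just a hypothetical example) and then inspect its aliases:
PUT /logs-2015-11

GET /logs-2015-11/_alias/logs
The response should show the logs alias on the new index, including its filter.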
Related
Is there any way in Elasticsearch to store an index template per alias?
I mean, create an index with multiple aliases (alias1, alias2, ...) and attach a different template to each of them, then index/search docs on a specific alias.
The reason I'm doing this is that I have multiple different data structures (up to 50 types) of documents.
What I have done so far is:
1. PUT /dynamic_index
2. POST /_aliases
{ "actions" : [
{ "add" : { "index" : "dynamic_index", "alias" : "alias_type1" } },
{ "add" : { "index" : "dynamic_index", "alias" : "alias_type2" } },
{ "add" : { "index" : "dynamic_index", "alias" : "alias_type3" } }
]}
3. PUT _template/template1
{
"index_patterns": [
"dynamic_index"
],
"mappings": {
"dynamic_templates": [
{
"strings_as_keywords": {
"match_mapping_type": "string",
"mapping": {
"type": "text",
"analyzer": "standard",
"copy_to": "_all",
"fields": {
"keyword": {
"type": "keyword",
"normalizer": "lowercase_normalizer"
}
}
}
}
}
],
"properties": {
"source": {
"type": "keyword"
}
}
},
"aliases": {
"alias_type1": {
}
}
}
4. The same for alias_type2 and alias_type3, but with different fields ...
Indexing/Search: I'm trying to create and search docs per alias, like in this example:
POST alias_type1/_doc
{
"source": "foo"
, .....
}
POST alias_type2/_doc
{
"source": "foo123"
, .....
}
GET alias_type1/_search
{
"query": {
"match_all": {}
}
}
GET alias_type2/_search
{
"query": {
"match_all": {}
}
}
What I actually see is that even though I index documents per alias, when searching I don't get results per alias; all results are the same on alias_type1, alias_type2, and even on the index itself.
Is there any way I can achieve separation logic per alias, in terms of indexing/searching docs per type (alias)?
Any ideas?
You can't have separate mappings for aliases pointing to the same index! Aliases are like virtual links pointing to an index, so if your aliases point to the same index, you will get the same results back.
If you want different mappings based on your data structure, you will need to create multiple indices.
Update
You can also use custom routing based on a field; for more information, check the official Elastic documentation here.
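As a rough sketch of what that could look like on a single index, assuming a hypothetical discriminator field such as doc_type (mapped as a keyword) exists in every document, you could combine an alias filter with a routing value:
POST /_aliases
{
  "actions": [
    {
      "add": {
        "index": "dynamic_index",
        "alias": "alias_type1",
        "filter": { "term": { "doc_type": "type1" } },
        "routing": "type1"
      }
    },
    {
      "add": {
        "index": "dynamic_index",
        "alias": "alias_type2",
        "filter": { "term": { "doc_type": "type2" } },
        "routing": "type2"
      }
    }
  ]
}
Keep in mind this only separates search results and shard routing; the index still has a single shared mapping.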
Is it possible to include a "date and time" field in a document that Elasticsearch receives, without it being previously defined?
The date and time should correspond to the moment the JSON is received by Elasticsearch.
This is the mapping:
{
"mappings": {
"properties": {
"entries":{"type": "nested"
}
}
}
}
Is it possible that it can be defined in the mapping field so that elasticsearch includes the current date automatically?
What you can do is define an ingest pipeline to automatically add a date field when your documents are indexed.
First, create a pipeline, like this (_ingest.timestamp is a built-in field that you can access):
PUT _ingest/pipeline/add-current-time
{
"description" : "automatically add the current time to the documents",
"processors" : [
{
"set" : {
"field": "#timestamp",
"value": "_ingest.timestamp"
}
}
]
}
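If you want to check what the pipeline produces before indexing anything, you can test it with the simulate API, for example with a dummy document:
POST _ingest/pipeline/add-current-time/_simulate
{
  "docs": [
    {
      "_source": {
        "my_field": "test"
      }
    }
  ]
}
The response shows each document as it would look after the pipeline has run, including the #timestamp field.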
Then when you index a new document, you need to reference the pipeline, like this:
PUT test-index/_doc/1?pipeline=add-current-time
{
"my_field": "test"
}
After indexing, the document would look like this:
GET test-index/_doc/1
=>
{
"#timestamp": "2020-08-12T15:48:00.000Z",
"my_field": "test"
}
UPDATE:
Since you're using index templates, it's even easier because you can define a default pipeline to be run for each indexed document.
In your index templates, you need to add this to the index settings:
{
"order": 1,
"index_patterns": [
"attom"
],
"aliases": {},
"settings": {
"index": {
"number_of_shards": "5",
"number_of_replicas": "1",
"default_pipeline": "add-current-time" <--- add this
}
},
...
Then you can keep indexing documents without referencing the pipeline; it will be applied automatically.
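For instance, assuming an index matching your attom pattern, a plain index request without the pipeline parameter should now also get the timestamp:
PUT attom/_doc/1
{
  "my_field": "test"
}
The stored document will contain the #timestamp field added by the default pipeline.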
"value": "{{{_ingest.timestamp}}}"
Source
I would like to use elastic search to index the JSON schema provided below
{
"data": "etc",
"metadata": {
"foo":"bar",
"baz": "etc"
}
}
However, the metadata can vary and I do not know all the fields that could be present. Is there a way to tell Elasticsearch that if it sees a field in the metadata object, it should index it in a certain way? (I do know that all the values would be strings.)
Thanks
Yes, you can do that using dynamic templates, basically like this:
PUT my_index
{
"mappings": {
"_doc": {
"dynamic_templates": [
{
"full_name": {
"path_match": "metadata.*",
"mapping": {
"type": "text" <---- add your desired mapping here
}
}
}
]
}
}
}
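To verify, you could index a document with some hypothetical metadata fields and then check the mapping that was generated for them:
PUT my_index/_doc/1
{
  "data": "etc",
  "metadata": {
    "foo": "bar",
    "baz": "etc"
  }
}

GET my_index/_mapping
The metadata.foo and metadata.baz fields should show up with the mapping you defined in the dynamic template.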
When I create an index with a mapping like this one, what does the _template/ part mean? What does the _ prefix mean? I'd like your help to understand more about creating an index; are templates stored in a kind of folder, like a template/packets folder?
PUT _template/packets
{
"template": "packets-*",
"mappings": {
"pcap_file": {
"dynamic": "false",
"properties": {
"timestamp": {
"type": "date"
},
"layers": {
"properties": {
"frame": {
"properties": {
"frame_frame_len": {
"type": "long"
},
"frame_frame_protocols": {
"type": "keyword"
}
}
},
"ip": {
"properties": {
"ip_ip_src": {
"type": "ip"
},
"ip_ip_dst": {
"type": "ip"
}
}
},
"udp": {
"properties": {
"udp_udp_srcport": {
"type": "integer"
},
"udp_udp_dstport": {
"type": "integer"
}
}
}
}
}
}
}
}
}
I ask this because after running this, I receive the following deprecation warning:
! Deprecation: Deprecated field [template] used, replaced by [index_patterns]
{
"acknowledged": true
}
I copied the pattern from this link:
https://www.elastic.co/blog/analyzing-network-packets-with-wireshark-elasticsearch-and-kibana
And I'm trying to do exactly what is taught in that link. I can already capture packets with tshark and parse/copy them into a packets.json file, and I will use Filebeat to transfer the data to Elasticsearch. I already uploaded some data to Elasticsearch, but it wasn't indexed correctly; I just saw a lot of unstructured data.
My aim is to understand exactly how to create a new index pattern, and also how to relate what I upload to that index.
Thank you very much.
Just replace the word template with index_patterns:
PUT _template/packets
{
"index_patterns": ["packets-*"],
"mappings": {
...
Index templates allow you to define templates that will automatically be applied when new indices are created.
After version 5.6, the format of Elasticsearch index templates changed: the template field, which was used to specify one or more patterns for matching index names that would use the template at creation time, was deprecated and superseded by the more appropriately named index_patterns field, which works exactly the same way.
To solve the issue and get rid of the deprecation warnings, you will have to update all your pre-6.0 index templates, changing template to index_patterns.
You can list all your index templates by running this command:
curl -XGET 'http://localhost:9200/_template/*?pretty'
Or replace the asterisk with the name of one specific index template.
More about ES templates is here.
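As a quick sanity check that the template is applied to your data, you can create an index whose name matches the pattern (the name below is just a hypothetical example) and inspect the mapping it received:
PUT packets-2019-01-01

GET packets-2019-01-01/_mapping
If the mapping shows the pcap_file properties from the template, any data Filebeat sends to indices matching packets-* will be indexed with it.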
For maintenance reasons, I need to create partitioned indices by date (split by month).
For instance, if I needed to remove the 10-2015 docs, I would just drop the 10-2015 index.
Suppose I have that mapping:
PUT my_logs
{
"mappings": {
"logs": {
"properties": {
"data": { "type": "string" }
}
}
}
}
It's important that data is placed in the correct monthly index without me having to recreate the same mapping above for each specific month.
What is the best way to achieve this?
Using an index template, you can achieve what you want.
First you create an index template like the one below:
PUT /_template/my_logs
{
"template": "logs-*",
"mappings": {
"logs": {
"properties": {
"data": { "type": "string" }
}
}
}
}
Then, every time you index a new document into an index whose name starts with logs-, this template will be applied at index creation time.
PUT /logs-10-2015/logs/7234t27
{
"data": "some data"
}
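To verify that the template was applied, you can check the mapping of the newly created index, and when a month's data is no longer needed you simply drop that index:
GET /logs-10-2015/_mapping

DELETE /logs-10-2015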