Set up reindex naming format in Kibana - elasticsearch

I am trying to use the Kibana migration assistant, which seems to work fine; however, I would like to keep the original naming convention that we follow:
my_index_v1
my_other_index_v5
The Kibana migration assistant proposes:
Create reindexed-v7-my_index_v1 index.
Create reindexed-v7-my_other_index_v5 index.
Is there a way to tell Kibana how to name the indices?
Or is there a way to invoke the full migration process differently? (I understood that using the _reindex API is just one step in the process.)
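For reference, one workaround (outside the Upgrade Assistant itself) is to drive the reindex manually and choose your own destination name. A minimal sketch in Python against the plain REST API, assuming a local cluster at localhost:9200; the index names are illustrative:

import requests

ES = "http://localhost:9200"  # assumed local cluster
SOURCE = "my_index_v1"        # illustrative source index
DEST = "my_index_v2"          # the name you want, instead of reindexed-v7-...

# Copy the documents into the new index; Elasticsearch creates DEST with
# default settings unless you create it (with mappings/settings) beforehand.
resp = requests.post(
    f"{ES}/_reindex",
    json={"source": {"index": SOURCE}, "dest": {"index": DEST}},
    params={"wait_for_completion": "true"},
)
resp.raise_for_status()
print(resp.json())

As the question notes, reindexing is only one step; you would still have to handle aliases and remove the old index yourself.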

Related

How to move elasticsearch index using file system?

Use case:
I have created Elasticsearch indices mywebsiteindex-yyyymmdd and mysharepointindex-yyyymmdd on my laptop/dev machine. I want to export/zip each index as a file. The file may be carried over by someone who has credentials for the target machine, and the zip/file may then be imported into the target Elasticsearch installation.
You can abstract the words 'machine', 'folder', and 'zip' above. The focus is 'transfer the index as a file and reimport it at a target which I may not have access to through http/tcp/ftp/ssh'.
Is there any Python/other script out there that can export from source and import to target? A script that hides the internal complexities of node/cluster count differences between dev/prod etc., and just moves the index.
Note: I have already referred to the page below, so there is no need to reiterate it:
https://www.elastic.co/guide/en/cloud/current/ec-migrate-data.html
There are some options:
You can use the snapshot and restore API to create a snapshot of your index and restore it in your new instance (the recommended way; see the sketch after this answer).
You can use the reindex API in your new instance to reindex your index from remote.
You can use Logstash with your old instance as the input and your new instance as the output.
Or you can write a script/application using one of the supported clients to query your index, export it to a file, read that file back, and import it into your new instance (Logstash can also do that).
But you cannot move your data files directly; this is neither supported nor recommended by Elastic.
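A minimal sketch of the first (recommended) option in Python against the snapshot/restore REST API. The repository name, snapshot name, and filesystem path are illustrative, the path must be listed under path.repo in elasticsearch.yml on every node, and the same repository has to be registered on the target cluster after the files are copied over:

import requests

ES = "http://localhost:9200"   # assumed source cluster
REPO = "my_backup"             # illustrative repository name
SNAPSHOT = "snapshot_1"        # illustrative snapshot name

# 1. Register a shared filesystem repository (path must be in path.repo).
requests.put(
    f"{ES}/_snapshot/{REPO}",
    json={"type": "fs", "settings": {"location": "/mnt/es_backups"}},
).raise_for_status()

# 2. Take a snapshot of the indices you want to move.
requests.put(
    f"{ES}/_snapshot/{REPO}/{SNAPSHOT}",
    json={"indices": "mywebsiteindex-*,mysharepointindex-*"},
    params={"wait_for_completion": "true"},
).raise_for_status()

# 3. On the target cluster (after copying the repository files over and
#    registering the same repository there), restore the snapshot.
TARGET = "http://target-es:9200"  # assumed target cluster
requests.post(
    f"{TARGET}/_snapshot/{REPO}/{SNAPSHOT}/_restore",
    json={"indices": "mywebsiteindex-*,mysharepointindex-*"},
).raise_for_status()

The repository directory is just files on disk, so it can be zipped and carried to the target machine, which matches the 'transfer the index as a file' requirement.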

Roll over index with elastic search and serilog

We are using Elasticsearch 6.7 and Serilog 7.1 in our .NET Core application.
In our logger implementation we are using the index format "app-{0:yyyy.MM}-1" for our ElasticsearchSinkOptions.
This creates an index called app-2019.04-1, as expected.
However, we set up an alias and a lifecycle policy that perform a rollover action and create a new index called app-2019.04-000002 after some conditions have been met, also as expected.
The issue is that our .NET Core application still logs to the first index, app-2019.04-1. How do we update the index format used in the .NET Core application when Elasticsearch has performed a rollover action?
Well, I figured it out; maybe it will help someone else. I had to log to the alias and not the index.
To make it work you need to:
Create an index with the format xxxx-1
Create an alias, e.g. xxxx, and add it to the index
Create an index pattern xxxx-*
Create a lifecycle policy
Create a template with the index pattern, alias, and lifecycle policy (a sketch of these Elasticsearch-side steps follows this list)
Make sure your index format in Serilog is the alias.
Start logging :)
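A minimal sketch of the Elasticsearch-side setup above (ES 6.7, Python against the REST API). The policy name, template name, alias, index pattern, and rollover conditions are all illustrative and should be replaced with your own:

import requests

ES = "http://localhost:9200"  # assumed cluster URL

# Lifecycle policy: roll over after 30 days or 50 GB (illustrative conditions).
requests.put(f"{ES}/_ilm/policy/app-policy", json={
    "policy": {"phases": {"hot": {"actions": {
        "rollover": {"max_age": "30d", "max_size": "50gb"}
    }}}}
}).raise_for_status()

# Template: applies the policy and rollover alias to every matching index.
requests.put(f"{ES}/_template/app-template", json={
    "index_patterns": ["app-*"],
    "settings": {
        "index.lifecycle.name": "app-policy",
        "index.lifecycle.rollover_alias": "app",
    },
}).raise_for_status()

# Bootstrap index carrying the write alias; later indices come from rollover.
requests.put(f"{ES}/app-000001", json={
    "aliases": {"app": {"is_write_index": True}}
}).raise_for_status()

# Serilog should then write to the alias ("app" here), not to a dated index name.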

Elastic search next steps

I'm new to Elasticsearch and am still trying to set it up. I have installed Elasticsearch 5.5.1 using the default values, and I have also installed Kibana 5.5.1 using the default values. I've also installed the ingest-attachment plugin with the latest X-Pack plugin. I have Elasticsearch running as a service and I have Kibana open in my browser. On the Kibana dashboard I have an error stating that it is unable to fetch mappings. I guess this is because I haven't set up any indices or pipelines yet. This is where I need some steer; all the documentation I've found online so far isn't particularly clear. I have a directory with a mixture of document types such as PDF and DOC files. My ultimate goal is to be able to search these documents with values that a user will enter via an app. I'm guessing I need to use the Dev Tools/Console window in Kibana, using the PUT command, to create a pipeline next, but I'm unsure how I should do this so that it points to my directory with the documents. Can anybody provide me an example of this for this version, please?
If I understand you correctly, let's first establish some basics about Elasticsearch:
Elasticsearch, in its simplest definition, is a "search engine": you store some data, and Elasticsearch helps you search it against your search criteria and retrieves the relevant data back.
You need a "container" to save your data to, and Elasticsearch, like any database engine, has one, but the terms are somewhat different. For example, a "database" in SQL-like systems is called an "index", and what you know as a "table" is called a "type" in Elasticsearch.
From my understanding, you will need to create your index (with or without mappings) to have a starting point, and I recommend starting without mappings just to get things working. Later on, though, it is highly recommended to work with mappings where applicable, because Elasticsearch is smart, but it cannot know more about your data than you do.
Kibana complained because it failed to find a proper index to start with; it is asking you either to provide a pattern for index names or a specific index name so it can infer the mappings and give you the nice features of querying, displaying charts, etc. of your data. So once you create your index, provide that name to the starting page of Kibana and you will be ready to go.
Let me know if you need something more specific to your needs :)
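If it helps, here is a minimal sketch of that first step for the question above (Elasticsearch 5.x, Python against the REST API, assuming no authentication): create an index without explicit mappings, define an ingest-attachment pipeline, and push one base64-encoded file through it. The index, pipeline, field, and file names are illustrative, and you would still need your own code to walk the directory of PDF/DOC files:

import base64
import requests

ES = "http://localhost:9200"  # assumed local cluster

# Create an index without explicit mappings ("just to start").
requests.put(f"{ES}/docs").raise_for_status()

# Define an ingest pipeline using the ingest-attachment processor to
# extract text from a base64-encoded file stored in the "data" field.
requests.put(f"{ES}/_ingest/pipeline/attachments", json={
    "description": "Extract text from uploaded documents",
    "processors": [{"attachment": {"field": "data"}}],
}).raise_for_status()

# Index one document through the pipeline.
with open("example.pdf", "rb") as f:  # illustrative file name
    encoded = base64.b64encode(f.read()).decode("ascii")

requests.put(
    f"{ES}/docs/doc/1",
    params={"pipeline": "attachments"},
    json={"filename": "example.pdf", "data": encoded},
).raise_for_status()

The extracted text then lands in the indexed document and can be searched like any other field.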

Mapper Attachment to Kibana issues

I have created some indices in Elasticsearch with the mapper attachment plugin. However, when I try to create the index in Kibana, I cannot find any of the data created in Elasticsearch for making a dashboard in Kibana.
Is there any way to resolve this issue?
Try running http://<your_host>:9200/_cat/indices?v
The above will return all the indices you have. Once you have verified that your mapper attachment index is there, go to Kibana's Settings tab and select the checkbox saying that your index does not contain time-series data. Now enter your index name and I hope you find it. Also, make sure your Kibana is configured to point to the Elasticsearch server where your index resides. This is configured in config/kibana.yml.
Hope I have managed to help!
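A small sketch of that first check in Python (host and port are placeholders); the final comment shows the Kibana setting meant above, assuming a Kibana version where the option is still called elasticsearch.url:

import requests

ES = "http://localhost:9200"  # replace with your Elasticsearch host

# List all indices; verify the mapper-attachment index appears here.
print(requests.get(f"{ES}/_cat/indices", params={"v": "true"}).text)

# In Kibana's config/kibana.yml, point Kibana at the same server, e.g.:
#   elasticsearch.url: "http://localhost:9200"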

How do I ensure proper routing with logstash when I update a parent/child relationship document?

I have a parent/child relationship set up in Elasticsearch. When I try to update the parent, it sometimes works and sometimes does not. I believe I've narrowed it down to getting a missing document error because I can't figure out how to specify routing using Logstash. (Parent/child relationships must route to the same shard.) I thought Elasticsearch would do this automatically, given that I have set up the routing path in the mappings, but it only seems to work when I specify the routing parameter in the REST API URL. Logstash, however, doesn't seem to have a way to add that when updating. I'm using the Logstash elasticsearch output plugin with the http protocol.
To add to my confusion, it seems Elasticsearch 1.5 is deprecating the "path" property in mappings.
Is there any way to ensure proper routing with parent/child updates using logstash?
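For reference, this is the REST-level behaviour the question describes as working, sketched in Python (index, type, field, and id values are illustrative, ES 1.x-style parent/child): passing parent (or routing) as a query parameter sends the child update to the parent's shard. Whether the same thing can be expressed from Logstash depends on the output plugin version; newer releases of logstash-output-elasticsearch expose routing-related options.

import requests

ES = "http://localhost:9200"   # assumed cluster
PARENT_ID = "p1"               # illustrative parent document id
CHILD_ID = "c1"                # illustrative child document id

# Updating a child document: the parent (or routing) query parameter makes
# the request land on the same shard as the parent, avoiding the
# "document missing" error.
resp = requests.post(
    f"{ES}/myindex/child_type/{CHILD_ID}/_update",
    params={"parent": PARENT_ID},   # or: {"routing": PARENT_ID}
    json={"doc": {"status": "updated"}},
)
resp.raise_for_status()
print(resp.json())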