Kibana isn't showing newly created indices - elasticsearch

I created 3 indices on my Elasticsearch, opened up Kibana, and all of them showed up. After a few days I created 2 more indices and opened up Kibana again, but I only see the 3 indices I created the first time and not the new ones.
I tried searching for those indices in Discover but nothing shows up.
Everything is running locally on my laptop.
Has anyone faced this problem before?

In the Kibana Discover view, you don't see indexes, but index patterns, which are a way of grouping indexes together under a single logical name, similar to an alias.
For instance, if your index pattern is called my-index-* then you'll see all indexes called my-index-1, my-index-2, etc.
If you create new indexes, the key is to use a name that will match the index pattern that you've created. If you create my-index-999 then it will be visible in the Discover view immediately without further action. If you create an index called another-index-1 then it will not be visible until you create an appropriate index pattern that matches this name.
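If you'd rather create the missing index pattern from the command line instead of the Kibana UI, the saved objects API can do it. A minimal sketch, assuming Kibana runs locally on the default port 5601 and the new indexes are named another-index-* (adjust the title, and omit timeFieldName if the documents have no timestamp field):

curl -X POST "http://localhost:5601/api/saved_objects/index-pattern" \
  -H "kbn-xsrf: true" -H "Content-Type: application/json" \
  -d '{"attributes": {"title": "another-index-*", "timeFieldName": "@timestamp"}}'

Once a matching pattern exists, the new indexes appear in Discover just like the earlier ones did.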

Related

Elastic Search: Update of existing Record (which has custom routing param set) results in duplicate record, if custom routing is not set during update

Env Details:
Elasticsearch version 7.8.1
The routing param is optional in index settings.
As per the Elasticsearch docs - https://www.elastic.co/guide/en/elasticsearch/reference/current/mapping-routing-field.html
When indexing documents specifying a custom _routing, the uniqueness of the _id is not guaranteed across all of the shards in the index. In fact, documents with the same _id might end up on different shards if indexed with different _routing values.
We have landed in the same scenario: earlier we were using a custom routing param (let's say customerId), and for some reason we now need to remove custom routing.
This means the docId will now be used as the default routing value. This creates a duplicate record with the same id on a different shard during the index operation, whereas earlier (before removing custom routing) the same request resulted in an update of the record, as expected.
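To illustrate the duplication with a hypothetical index and routing value (not taken from the question): the same _id indexed with and without routing can be hashed to different shards, so both copies remain.

PUT my-index/_doc/1?routing=customer-42
{ "customerId": "customer-42", "status": "created" }

# later, the same _id indexed without routing may land on a different shard,
# leaving two documents with _id 1 instead of one updated document
PUT my-index/_doc/1
{ "customerId": "customer-42", "status": "updated" }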
I am thinking of the following approaches to get out of this; please advise if you have a better approach to suggest. The key here is to AVOID DOWNTIME.
Approach 1:
As we receive the update request, let the duplicate record get created. Once the record without custom routing is created, issue a delete request for the record with custom routing.
CONS: If there is no update on some records, then all those records will linger around with custom routing. We want to avoid this, as it might result in unforeseen scenarios in the future.
Approach 2:
We use the Reindex API to migrate data to a new index (turning off custom routing during migration). The application will use the new index after a successful migration.
CONS: Some of our indexes are huge; they take 12+ hours for the reindex operation, and since the Elasticsearch Reindex API works from a snapshot of the source, it will not migrate the newer records created during that 12-hour window. This needs a downtime approach.
Please suggest alternative if you have faced this before.
Thanks @Val. Also found a few other approaches, like writing to both indexes and reading from the old one, and then shifting reads to the new one after re-indexing is finished. Something along the following lines -
Create aliases pointing to the old indices (*_v1)
Point the application to these aliases instead of the actual indices
Create new indices (*_v2) with the same mapping
Move data from the old indices to the new ones using re-indexing, making sure we don't retain custom routing during this (a sketch of the reindex and alias swap follows below)
Post re-indexing, change the aliases to point to the new indices instead of the old ones (need to verify this though, but there are easy alternatives if this doesn't work)
Once verification is done, delete the old indices
What do we do in the transition period (the window between reindexing start and reindexing finish) -
Write to both indices (old and new) and read from the old indices via the aliases
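A rough sketch of the reindex and alias swap, using hypothetical names (my-index as the alias, my-index_v1 and my-index_v2 as the indices); the Reindex API's dest.routing option set to "discard" drops the custom routing while copying:

POST _reindex?wait_for_completion=false
{
  "source": { "index": "my-index_v1" },
  "dest": { "index": "my-index_v2", "routing": "discard" }
}

POST _aliases
{
  "actions": [
    { "remove": { "index": "my-index_v1", "alias": "my-index" } },
    { "add": { "index": "my-index_v2", "alias": "my-index" } }
  ]
}

Both alias actions run in a single atomic request, so searches through the alias never hit a gap between the old and new index.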

How can I remove only the data from an Elasticsearch index, not the complete index

I have one ELK index available and I am using it to show a visual dashboard.
My requirement is that I need to empty or remove the data only, not the index itself. How can I achieve this? I googled a lot; I keep finding solutions to remove the index, but I only need to remove the data so the index remains there.
I want to achieve this dynamically using the command prompt.
You can simply delete all the data in the index if there's not too much of it:
POST my-index/_delete_by_query?q=*&wait_for_completion=false
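If you prefer to spell the query out in the request body (equivalent to q=*, just more explicit), something along these lines should work:

POST my-index/_delete_by_query?wait_for_completion=false
{
  "query": { "match_all": {} }
}

Either way the index itself, with its mappings and settings, stays in place; only the documents are removed.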

Using saved Kibana visualizations for other index patterns (same data)?

I'm having a problem with Kibana visualizations. At first I made some visualizations and saved them; then I made another index pattern for the same data but with another index name. So how can I use my old visualizations with my new index pattern?
Thanks all.
In recent versions of Kibana you may be able to do it from Management -> Saved Objects, where you can manage all your saved objects:
in Management, open the new index pattern you want to use in the visualization
get the UUID of the index pattern from the address bar in the browser
open the saved visualization (Management -> Saved Objects) and edit the kibanaSavedObjectMeta.searchSourceJSON parameter with the UUID of the index pattern you want
now the visualization will point to the new index
WARNING: with this method you can corrupt your saved objects, and then you cannot recover them.
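As an illustration only (the exact shape of searchSourceJSON differs between Kibana versions, and the UUID below is a placeholder): in older versions the index pattern id sits directly in the JSON, so the edited value would look roughly like

{"index": "<uuid-of-new-index-pattern>", "query": {"language": "kuery", "query": ""}, "filter": []}

In newer versions the index pattern id is stored in the saved object's references instead, so check what your version actually stores before editing.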
As of today, it seems to be an issue with no practical solution.
There is a newer type of visualization called Lens, which allows you to change the index from the one originally saved. Just edit the visualization, and you will find the name of the current index in the leftmost column, where you will be able to change it.
A downside is that Lens does not yet have all visualization types (for example, heat maps), but it will work for bars, lines and pie charts.

Kibana - can't create index pattern after deleting all

I deleted all my index patterns (I still have a few indexes in Elasticsearch, also visible from Management in Kibana). Right now I wanted to create a new (first) index pattern, but there is this message:
No default index pattern. You must select or create one to continue.
but I can't create one, because the screen only shows the message above and:
Checking for Elasticsearch data
so basically, I need to have a default index pattern to create one, but I can't assign any index to be the default, because I don't have any index pattern :(
I am on Windows and I am using version 6.5.4 of Kibana and Elasticsearch.
Do you have any idea how to proceed?

How to make Logstash replace old data?

I have an Oracle DB. Logstash retrieves data from Oracle and puts it into Elasticsearch.
But when Logstash makes its planned export every 5 minutes, Elasticsearch gets filled with copies because the old data still exists. This is an obvious situation: Oracle's state has almost not changed during those 5 minutes. Let's say 2-3 rows were added, and 4-5 deleted.
How can we replace the old data with the new data without keeping copies?
For example:
Delete the whole old index;
Create a new index with the same name and apply the same configuration (nGram configuration and mapping);
Add all new data;
Wait for 5 minutes and repeat.
It's pretty easy: create a new index for each import and apply the mappings, then switch your alias to the most recent index afterwards. Remove old indices if needed. Your current data will always be searchable while the most recent data is being indexed.
Here are the sources you'll probably need to read (a short sketch of the setup follows this list):
Use aliases (https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-aliases.html) to point to the most current data when searching in Elasticsearch (BTW, it's always a good idea to have aliases in place).
Use the rollover API (https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-rollover-index.html) to create a new index for each import run - note the alias handling here too.
Use index templates (https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-templates.html) to automatically apply the mappings/settings to your newly created indices.
Shrink, close and/or delete old indices so your cluster only handles the data you really need. Have a look at Curator (https://github.com/elastic/curator) as a standalone tool.
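A minimal sketch of the template-plus-rollover setup, with hypothetical names (an oracle-data-* index family, an oracle-data-write alias and an updated_at field, none of them from the question); the legacy _template endpoint matches the linked docs, while newer clusters would use _index_template:

PUT _template/oracle-data
{
  "index_patterns": ["oracle-data-*"],
  "settings": { "number_of_shards": 1 },
  "mappings": { "properties": { "updated_at": { "type": "date" } } }
}

PUT oracle-data-000001
{
  "aliases": { "oracle-data-write": { "is_write_index": true } }
}

# before each import run, roll the write alias over to a fresh index
POST oracle-data-write/_rollover

Reads would go through a separate read alias (or the index pattern) that you re-point to the newest index once an import has finished, for example with the _aliases API.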
You just need to use the fingerprint/hash of each document, or a hash of the unique fields in each document, as the document id, so that every time you can overwrite the same documents with updated ones in place, while still adding new documents as well.
But this approach will not work for data that is deleted from Oracle.
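A sketch of how that could look in the Logstash pipeline, assuming a hypothetical primary-key column named id and a local Elasticsearch; the fingerprint filter turns the key into a stable hash that the elasticsearch output then uses as the document id:

filter {
  fingerprint {
    source => ["id"]                  # hypothetical unique key column from the Oracle row
    target => "[@metadata][doc_id]"
    method => "SHA256"
  }
}
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "oracle-data"
    document_id => "%{[@metadata][doc_id]}"  # re-imports overwrite instead of duplicating
  }
}

As noted above, this handles inserts and updates but not rows that disappear from Oracle.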
