Using saved Kibana visualizations with another index pattern (same data)? - elasticsearch

I'm having a problem with Kibana visualizations. First I made some visualizations and saved them; then I created another index pattern over the same data but with a different index name. How can I use my old visualizations with my new index pattern?
Thanks all.

In recent versions of Kibana you may be able to do it from Management -> Saved Objects, where you can manage all your saved objects:
open the new index pattern you want the visualization to use in Management
get the UUID of the index pattern from the address bar in the browser
open the saved visualization (Management -> Saved Objects) and edit the kibanaSavedObjectMeta.searchSourceJSON parameter to use the UUID of the index pattern you want (a sketch follows below)
now the visualization will point to the new index
WARNING: with this method you can corrupt your saved objects, and then you cannot recover them.
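For illustration, here is what that edit could look like when scripted through Kibana's saved objects HTTP API instead of the UI. This is only a hedged sketch: endpoints and the location of the index reference vary across Kibana versions (newer versions keep it in a "references" array rather than inside searchSourceJSON), and the visualization id and UUID below are placeholders.

```python
# Hedged sketch: repoint a saved visualization at a new index pattern
# via Kibana's saved objects HTTP API. Ids below are placeholders.
import json
import requests

KIBANA = "http://localhost:5601"
VIS_ID = "my-visualization-id"                        # hypothetical
NEW_PATTERN_UUID = "d3d7af60-4c81-11e8-b3d7-abc123"   # from the address bar
HEADERS = {"kbn-xsrf": "true"}

# Fetch the saved visualization.
r = requests.get(f"{KIBANA}/api/saved_objects/visualization/{VIS_ID}",
                 headers=HEADERS)
r.raise_for_status()
attrs = r.json()["attributes"]

# Rewrite the index reference inside searchSourceJSON.
source = json.loads(attrs["kibanaSavedObjectMeta"]["searchSourceJSON"])
source["index"] = NEW_PATTERN_UUID
attrs["kibanaSavedObjectMeta"]["searchSourceJSON"] = json.dumps(source)

# Write it back.
r = requests.put(f"{KIBANA}/api/saved_objects/visualization/{VIS_ID}",
                 headers=HEADERS, json={"attributes": attrs})
r.raise_for_status()
```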

As of today, it seems to be an issue with no practical solution.
There is a newer visualization type called Lens, which allows you to change the index from the one the visualization was originally saved with. Just open the visualization for editing; you will find the name of the current index in the leftmost column and will be able to change it.
A downside is that Lens does not yet support all visualization types (for example heat maps), but it works for bar, line, and pie charts.

Related

I want to use Elasticsearch and Kibana alerts to detect line crossing

We would like to implement a system that draws a line in advance on a map displayed by Kibana and detects when a moving object (such as a boat) crosses that line.
I believe a possible way to do this is to set up rules using an Elasticsearch query in Kibana's rule creation UI.
But I don't know how to implement it.
I drew a line by selecting create index in add layer from Maps in Kibana.
A JSON file containing location, speed, and time information was imported into Elasticsearch and displayed on the map.
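As a hedged sketch of one possible building block (not a full solution): if each movement segment of the track is indexed into a geo_shape field, a geo_shape query with relation "intersects" against the drawn line would flag crossings, and that query could back a Kibana rule. The index name, field name, and coordinates below are all assumptions.

```python
# Hedged sketch: find documents whose movement segment crosses the
# drawn line. Assumes a "path" field mapped as geo_shape that stores
# each segment of the object's track.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

resp = es.search(
    index="boat-positions",
    query={
        "geo_shape": {
            "path": {
                "shape": {
                    "type": "linestring",
                    # the line drawn on the map (placeholder coordinates)
                    "coordinates": [[139.70, 35.60], [139.80, 35.60]],
                },
                "relation": "intersects",
            }
        }
    },
)
print(resp["hits"]["total"])
```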

Is it possible to update indices in Elasticsearch with zero downtime?

I am trying to update the data in my Elasticsearch indices with zero downtime, but I am not sure how to achieve this. Can anyone advise how I can do so?
For example: if I have an index named my_es_index and I want to update the data in that particular index with zero downtime, the old data should still be served to anyone running a query while, in parallel in the background, we are updating the data in that index.
Is this possible to achieve? If yes, please help me understand how to proceed.
You build/create another index (call it the new index), then switch from the old index to the new index, then delete the old index.
Read more at https://medium.com/craftsmenltd/rebuild-elasticsearch-index-without-downtime-168363829ea4
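In outline, that flow could look like the sketch below, using the 8.x Python client. It assumes queries already go through an alias, and the index/alias/field names are placeholders (older clients pass a body= dict instead of keyword arguments).

```python
# Minimal sketch of rebuild-and-switch with zero read downtime.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# 1. Create the new index with the updated mappings (hypothetical field).
es.indices.create(
    index="my_es_index_v2",
    mappings={"properties": {"title": {"type": "text"}}},
)

# 2. Copy the data over; reads keep hitting the old index via the alias.
es.reindex(
    source={"index": "my_es_index"},
    dest={"index": "my_es_index_v2"},
    wait_for_completion=True,
)

# 3. Flip the alias atomically so readers never see a gap.
es.indices.update_aliases(actions=[
    {"remove": {"index": "my_es_index", "alias": "my_es_alias"}},
    {"add": {"index": "my_es_index_v2", "alias": "my_es_alias"}},
])

# 4. Drop the old index once you're confident in the new one.
es.indices.delete(index="my_es_index")
```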
Unless you need to change the mapping of an existing field while preserving the field's name, I don't think taking the cluster down is needed.
While the above article is a good read and might be treated as best practice, ES is quite flexible: unlike a typical MySQL/SQL migration, it allows you to update existing documents in place.
Adding a new field
Let's call the new field to be added x.
add a mapping for x to the index.
change the code so that, going forward, all new documents have this new field x.
while all new documents now carry the field x, write a script that updates the older documents and adds the field x (see the sketch after these steps).
once you are sure that all documents have the field x, you can enable the feature you added this field for.
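A hedged sketch of the mapping and backfill steps above, using the 8.x Python client (the index name, field type, and default value are assumptions; older clients pass these as a body= dict):

```python
# Add the new field's mapping, then backfill older documents in place.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# Add the mapping for the new field x (hypothetical type).
es.indices.put_mapping(index="my_es_index",
                       properties={"x": {"type": "keyword"}})

# Backfill older documents that don't have x yet.
es.update_by_query(
    index="my_es_index",
    script={"source": "ctx._source.x = 'default-value'", "lang": "painless"},
    query={"bool": {"must_not": {"exists": {"field": "x"}}}},
)
```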
Updating mapping of a field
Let's again call the field to be updated x (assuming the name of the field is not the prime concern).
create a new field, say new_x (add the correct mapping to the index).
follow the steps above to ensure new_x has the data (with the slight change that we need to ensure both x and new_x carry the data; a sketch follows these steps).
once all the documents in the index have the field new_x, simply refactor the code to use new_x instead of x.
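The corresponding sketch for the mapping change, copying x into new_x on older documents (same assumptions as the previous sketch):

```python
# Add new_x with the corrected mapping, then copy x into it in place.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# Add new_x with the corrected (hypothetical) mapping.
es.indices.put_mapping(index="my_es_index",
                       properties={"new_x": {"type": "text"}})

# Copy the value of x into new_x wherever new_x is still missing.
es.update_by_query(
    index="my_es_index",
    script={"source": "ctx._source.new_x = ctx._source.x",
            "lang": "painless"},
    query={"bool": {"must_not": {"exists": {"field": "new_x"}}}},
)
```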
While one might argue that the above two approaches are something of a hack, they save you the time, effort, and cost of booting up a new ES instance and managing the aliases.

Kibana isn't showing newly created indices

I created 3 indices in my Elasticsearch, opened up Kibana, and all of them showed up. A few days later I created 2 more indices and opened Kibana, but I only see the 3 indices I created the first time, not the new ones.
I tried searching for those indices in Discover but nothing shows up.
Everything is running locally on my laptop.
Has anyone faced this problem before?
In the Kibana Discover view, you don't see indexes, but index patterns, which are a way of grouping indexes together under a single logical name, similar to an alias.
For instance, if your index pattern is called my-index-* then you'll see all indexes called my-index-1, my-index-2, etc.
If you create new indexes, the key is to use a name that will match the index pattern that you've created. If you create my-index-999 then it will be visible in the Discover view immediately without further action. If you create an index called another-index-1 then it will not be visible until you create an appropriate index pattern that matches this name.
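If you want to script that last step, here is a hedged sketch of creating a matching index pattern through Kibana's saved objects HTTP API (the endpoint shape varies by Kibana version; the pattern title and time field below are assumptions):

```python
# Hedged sketch: create an index pattern matching the new indices via
# Kibana's saved objects HTTP API.
import requests

resp = requests.post(
    "http://localhost:5601/api/saved_objects/index-pattern",
    headers={"kbn-xsrf": "true"},
    json={"attributes": {"title": "another-index-*",
                         "timeFieldName": "@timestamp"}},
)
resp.raise_for_status()
print(resp.json()["id"])
```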

Bulk read of all documents in an elasticsearch alias

I have the following elasticsearch setup:
4 to 6 small-ish indices (<5 million docs, <5 GB each)
they are unioned through an alias
they all contain the same doc type
they change very infrequently (i.e. >99% of the indexing happens when the index is created)
One of the use cases for my app requires reading all documents for the alias, ordered by a field, doing some magic, and serving the result.
I understand using deep pagination will most likely bring down my cluster, or at the very least have dismal performance, so I'm wondering if the scroll API could be the solution. I know the documentation says it is not intended for use in real-time user queries, but what are the actual reasons for that?
Generally, how are people dealing with having to read through all the documents in an index? Should I look for another way to chunk the data?
When you use the scroll API, Elasticsearch creates a sort of cursor over the current state of the index, so the reason it is not recommended for real-time search is that you will not see any new documents that were inserted after you created the scroll token.
Since your use case indicates that you rarely update or insert new documents into your indices, that may not be an issue for you.
When generating the scroll token you can specify a query with a sort, so if your documents have some sort of timestamp, you could create one scroll context for all documents with timestamp: { lte: "now" } and another scroll (or even a simple query) for the rest of the documents that were not included in the first search context, by specifying an appropriate date range filter.
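A minimal sketch of such a sorted scroll using the scan helper from the official Python client (the alias name and timestamp field are assumptions):

```python
# Sorted scroll over everything in the alias up to "now".
from elasticsearch import Elasticsearch, helpers

es = Elasticsearch("http://localhost:9200")

# preserve_order=True keeps the global sort across scroll batches,
# at some cost in scroll efficiency.
query = {
    "query": {"range": {"timestamp": {"lte": "now"}}},
    "sort": [{"timestamp": "asc"}],
}

for doc in helpers.scan(es, index="my-alias", query=query,
                        preserve_order=True, scroll="5m"):
    print(doc["_source"])  # stand-in for the per-document "magic"
```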

Does ElasticSearch Snapshot/Restore functionality cause the data to be analyzed again during restore?

I have a decent amount of data in my ElasticSearch index. I changed the default analyzer for the index, and hence essentially need to reindex my data so that it is analyzed again with the new analyzer. Instead of writing a test script that deletes all the existing data in the ES index and re-adds it, I wondered whether there is a backup/restore module I could use. I found the snapshot/restore module that ES supports - ElasticSearch-SnapshotAndRestore.
My question is: if I use the above ES snapshot/restore module, will it actually cause the data to be re-analyzed? Since I changed the default analyzer, I need the data to be reanalyzed. If not, is there an alternative tool/module you would suggest that allows a pure export and import of data, so that the data is re-analyzed during import?
No, it does not re-analyze the data. You will need to reindex your data.
Fortunately that's fairly straightforward with Elasticsearch, as by default it stores the source of your documents:
Reindexing your data
While you can add new types to an index, or add new fields to a type, you can't add new analyzers or make changes to existing fields. If you were to do so, the data that has already been indexed would be incorrect and your searches would no longer work as expected.
The simplest way to apply these changes to your existing data is just to reindex: create a new index with the new settings and copy all of your documents from the old index to the new index.
One of the advantages of the _source field is that you already have the whole document available to you in Elasticsearch itself. You don't have to rebuild your index from the database, which is usually much slower.
To reindex all of the documents from the old index efficiently, use scan & scroll to retrieve batches of documents from the old index, and the bulk API to push them into the new index.
http://www.elasticsearch.org/guide/en/elasticsearch/guide/current/reindex.html
I'd read up on Scan and Scroll prior to taking this approach:
http://www.elasticsearch.org/guide/en/elasticsearch/guide/current/scan-scroll.html
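For illustration, a hedged sketch of that scan & scroll + bulk reindex with the official Python client (index names are assumptions; on modern clusters the server-side _reindex API does this for you):

```python
# Stream documents out of the old index and bulk-index them into the
# new one, where they are analyzed again with the new analyzer.
from elasticsearch import Elasticsearch, helpers

es = Elasticsearch("http://localhost:9200")

def actions():
    # Stream every document out of the old index...
    for hit in helpers.scan(es, index="old-index",
                            query={"query": {"match_all": {}}}):
        # ...and re-target it at the new index.
        yield {"_index": "new-index", "_id": hit["_id"],
               "_source": hit["_source"]}

# Push the documents into the new index in batches.
helpers.bulk(es, actions())
```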
TaskRabbit open-sourced an import/export tool; I've not used it, so I can't vouch for it, but it is worth a look:
https://github.com/taskrabbit/elasticsearch-dump
