How to index Azure Storage data to Elastic Cloud - elasticsearch

I am new to Elasticsearch. I have data stored in Azure Storage and I want to index it using Elasticsearch. I have created a cluster at https://cloud.elastic.co. Do I need to create a service that indexes the data into Elastic Cloud so that users can then search it with Elasticsearch? How do I index the data into Elastic Cloud using ASP.NET MVC?
Please suggest.

One way to approach this would be to write a console application that
- pulls data from Azure Storage, using the storage client in the WindowsAzure.Storage NuGet package or similar
- transforms the data into documents according to your domain needs
- bulk indexes those documents into Elasticsearch in Elastic Cloud, using NEST, the high-level .NET Elasticsearch client (a sketch follows below)
If the data in Azure Storage will be updated and needs to be indexed into Elasticsearch frequently, consider running the console application as an Azure WebJob.
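A minimal sketch of that pull-transform-bulk-index flow, shown in Python for brevity (the same three steps map directly onto WindowsAzure.Storage and NEST in .NET). The connection string, container, index name, and credentials are placeholders:

```python
# Pull blobs from Azure Storage, transform them into documents,
# and bulk index them into an Elastic Cloud deployment.
import json
from azure.storage.blob import ContainerClient
from elasticsearch import Elasticsearch, helpers

container = ContainerClient.from_connection_string(
    "DefaultEndpointsProtocol=...;AccountName=...;AccountKey=...",  # placeholder
    container_name="my-container",
)
es = Elasticsearch(
    cloud_id="my-deployment:...",            # from the Elastic Cloud console
    basic_auth=("elastic", "<password>"),    # elasticsearch-py 8.x style auth
)

def actions():
    # One bulk action per blob; transform the payload to fit your domain.
    for blob in container.list_blobs():
        raw = container.download_blob(blob.name).readall()
        doc = json.loads(raw)
        yield {"_index": "my-index", "_id": blob.name, "_source": doc}

helpers.bulk(es, actions())                  # bulk index into Elastic Cloud
```

Using the blob name as `_id` makes the job idempotent: re-running it overwrites documents instead of duplicating them, which matters if you later schedule it as a WebJob.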
Another approach would be to use Logstash in conjunction with the input plugin for Azure Storage blobs.

Related

Can Kibana be synced to Elastic App Search?

So I have Kibana set up with my data in it, about 3 indices.
Recently I deployed Elastic Enterprise Search and I'm testing out Elastic App Search, but I have no data in it.
My question therefore is: can I somehow migrate or sync my data inside Kibana into Elastic App Search?
Sorry, migration of Elasticsearch indices to Elastic App Search is not available as of now.
Although it looks like Kibana is holding the data, Elasticsearch is actually the datastore behind it. App Search is a layer on top of Elasticsearch which manages the indexes, schema, documents, etc.
If you're ingesting data directly into Elasticsearch, it is not currently possible to migrate it to Elastic App Search automatically. A manual one-off sync is possible, though, as sketched below.
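A rough sketch of such a manual sync, assuming placeholder host, engine name, and private key: scan documents out of an Elasticsearch index and push them through the App Search documents API, which accepts up to 100 documents per request.

```python
# Manually sync an Elasticsearch index into an App Search engine.
import requests
from elasticsearch import Elasticsearch, helpers

es = Elasticsearch("https://my-cluster.es.io:9243", basic_auth=("elastic", "<password>"))
APP_SEARCH = "https://my-deployment.ent.example.com"   # Enterprise Search host
ENGINE, KEY = "my-engine", "private-xxxxxxxx"          # placeholders

def flush(batch):
    # App Search accepts at most 100 documents per indexing request.
    r = requests.post(
        f"{APP_SEARCH}/api/as/v1/engines/{ENGINE}/documents",
        json=batch,
        headers={"Authorization": f"Bearer {KEY}"},
    )
    r.raise_for_status()

batch = []
for hit in helpers.scan(es, index="my-index"):
    doc = hit["_source"]
    doc["id"] = hit["_id"]        # App Search uses "id" as the document key
    batch.append(doc)
    if len(batch) == 100:
        flush(batch)
        batch = []
if batch:
    flush(batch)
```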

How to take snapshots of specific indices in Elastic Cloud Enterprise?

In the Elastic Cloud UI, you can take snapshots/backups of your entire on-disk data and store them in a shared file system or an object store such as S3.
How do I back up only certain indices, rather than all of them, using the Elastic Cloud UI alone? Is there a way?
Only if there isn't do I want to fall back to the APIs.
If you follow the links from the Elasticsearch Service docs for Snapshot and Restore, you will see that we also link to the Elasticsearch Snapshot and Restore docs, where you will find instructions for backing up specific indices. You can use the API console to do this more easily from within the Elastic Cloud UI; a sketch of the call follows.
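A minimal sketch of snapshotting only selected indices via the snapshot API (the same request body works from the API console in the Cloud UI). "found-snapshots" is the default repository on Elastic Cloud deployments; the index patterns are placeholders.

```python
# Snapshot only the named indices instead of the whole cluster.
from elasticsearch import Elasticsearch

es = Elasticsearch(cloud_id="my-deployment:...", basic_auth=("elastic", "<password>"))
es.snapshot.create(
    repository="found-snapshots",            # default repo on Elastic Cloud
    snapshot="my-partial-snapshot",
    body={
        "indices": "logs-2021-*,metrics-2021-*",   # only these indices
        "include_global_state": False,
    },
)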

Logstash plugin for Cosmos DB to Elasticsearch

Can you please suggest which Logstash plugin can be used for pulling data from Cosmos DB into Elasticsearch?
If there is no such plugin, is there any other way to do the same?
Based on the Logstash plugins for Microsoft Azure Services and this thread, it seems that a Cosmos DB input plugin is not supported so far.
For now, all I can find is that you could use an ADF copy activity to transfer your Cosmos DB data into one of the supported input sources above, then complete the subsequent work.
For example, use ADF to transfer the Cosmos DB data into a SQL database, then follow this link to integrate it with your Elasticsearch service. (An alternative without ADF is sketched below.)
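If a periodic batch job is acceptable, a lighter alternative to the ADF route is a small script that reads Cosmos DB directly with the azure-cosmos SDK and bulk-indexes into Elasticsearch. A sketch, with account, database, container, and index names as placeholders:

```python
# Read items from Cosmos DB and bulk index them into Elasticsearch.
from azure.cosmos import CosmosClient
from elasticsearch import Elasticsearch, helpers

cosmos = CosmosClient("https://my-account.documents.azure.com:443/", "<key>")
container = cosmos.get_database_client("my-db").get_container_client("my-container")
es = Elasticsearch(cloud_id="my-deployment:...", basic_auth=("elastic", "<password>"))

def actions():
    for item in container.query_items(
        "SELECT * FROM c", enable_cross_partition_query=True
    ):
        # Drop Cosmos DB system fields before indexing.
        for field in ("_rid", "_self", "_etag", "_attachments", "_ts"):
            item.pop(field, None)
        yield {"_index": "cosmos-data", "_id": item["id"], "_source": item}

helpers.bulk(es, actions())
```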

How to build a relational graph using Elasticsearch data

We are building a log analytics application using Graylog and Elasticsearch. I have Elasticsearch installed, but I want to take the data from Elasticsearch and create relational graphs with it on my own, instead of using X-Pack Graph.
I could have used the X-Pack Graph API and made HTTP calls to get the data, but it is not freeware and I'm not sure we will be able to buy a license.
Is there any free alternative to the X-Pack Graph API?
Or can I query Elasticsearch directly using aggregations? If so, how feasible is it? Can you share some resources on this?
Kindly share your thoughts on this.
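Querying Elasticsearch directly with aggregations is quite feasible for this: a terms aggregation on one field with a nested terms aggregation on another yields co-occurrence counts that can serve as weighted graph edges (X-Pack Graph adds significance scoring on top of essentially this idea). A minimal sketch, where the index pattern and the field names source_host and dest_host are hypothetical:

```python
# Derive weighted edges (source, target, count) from a nested terms aggregation.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")
resp = es.search(
    index="logs-*",
    body={
        "size": 0,
        "aggs": {
            "sources": {
                "terms": {"field": "source_host", "size": 100},
                "aggs": {
                    "targets": {"terms": {"field": "dest_host", "size": 100}}
                },
            }
        },
    },
)

edges = []
for src in resp["aggregations"]["sources"]["buckets"]:
    for dst in src["targets"]["buckets"]:
        edges.append((src["key"], dst["key"], dst["doc_count"]))
```

The resulting edge list can be fed into any graph library or visualisation tool of your choice.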

Elasticsearch with Google BigQuery

I have event logs loaded into Elasticsearch and I visualise them using Kibana. My event logs are actually stored in a Google BigQuery table. Currently I dump the JSON files to a Google Cloud Storage bucket and download them to a local drive. Then, using Logstash, I move the JSON files from the local drive into Elasticsearch.
Now I am trying to automate the process by establishing a connection between Google BigQuery and Elasticsearch. From what I have read, I understand that there is an output connector which sends data from Elasticsearch to Google BigQuery, but not vice versa. I am wondering whether I should upload the JSON files to a Kubernetes cluster and then establish the connection between the cluster and Elasticsearch.
Any help in this regard would be appreciated.
Although this solution may be a little complex, I suggest using the Google Cloud Storage connector with ES-Hadoop. These two are very mature and are used in production by many large companies.
Logstash over a lot of pods on Kubernetes would be very expensive and, I think, not a very resilient or scalable approach.
Apache Beam has connectors for both BigQuery and Elasticsearch; I would definitely do this using Dataflow so you don't need to implement a complex ETL and staging storage. You can read the data from BigQuery using BigQueryIO.Read.from (take a look at BigQueryIO Read vs fromQuery if performance is important) and load it into Elasticsearch using ElasticsearchIO.write(). (A Python sketch of such a pipeline follows below.)
Refer to this example of reading data from BigQuery in Dataflow:
https://github.com/GoogleCloudPlatform/professional-services/blob/master/examples/dataflow-bigquery-transpose/src/main/java/com/google/cloud/pso/pipeline/Pivot.java
Elasticsearch indexing:
https://github.com/GoogleCloudPlatform/professional-services/tree/master/examples/dataflow-elasticsearch-indexer
UPDATED 2019-06-24
Earlier this year, the BigQuery Storage API was released, which improves the parallelism of extracting data from BigQuery and is natively supported by Dataflow. Refer to https://beam.apache.org/documentation/io/built-in/google-bigquery/#storage-api for more details.
From the documentation:
The BigQuery Storage API allows you to directly access tables in BigQuery storage. As a result, your pipeline can read from BigQuery storage faster than previously possible.
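For illustration, here is a minimal version of that pipeline written against the Beam Python SDK. Note that Beam's built-in ElasticsearchIO is Java-only, so the write step below is a hand-rolled DoFn using the elasticsearch client; the project, table, host, and index names are placeholders.

```python
# Read rows from BigQuery and bulk index them into Elasticsearch on Dataflow.
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

class IndexBatch(beam.DoFn):
    def process(self, batch):
        # Creating the client inline for brevity; reuse it per bundle in production.
        from elasticsearch import Elasticsearch, helpers
        es = Elasticsearch("https://my-cluster.es.io:9243")
        helpers.bulk(
            es, ({"_index": "event-logs", "_source": row} for row in batch)
        )

opts = PipelineOptions(
    runner="DataflowRunner",
    project="my-project",
    region="us-central1",
    temp_location="gs://my-bucket/tmp",
)
with beam.Pipeline(options=opts) as p:
    (
        p
        | beam.io.ReadFromBigQuery(table="my-project:my_dataset.event_logs")
        | beam.BatchElements(min_batch_size=100, max_batch_size=500)
        | beam.ParDo(IndexBatch())
    )
```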
I have recently worked on a similar pipeline. The workflow I would suggest either uses the Google Cloud Storage connector mentioned above or other methods to read your JSON files into a Spark job. You should be able to transform your data quickly and easily, and then use the elasticsearch-spark plugin to load it into your Elasticsearch cluster (a sketch follows below).
You can use Google Cloud Dataproc or Cloud Dataflow to run and schedule your job.
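A sketch of that Spark approach in PySpark: read the exported JSON from GCS and write it out with the elasticsearch-spark (elasticsearch-hadoop) connector. This requires the connector jar on the classpath (e.g. --packages org.elasticsearch:elasticsearch-spark-30_2.12:<version>); the bucket, columns, host, and index names are placeholders.

```python
# Read exported JSON from GCS, transform, and write to Elasticsearch.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("bq-json-to-es").getOrCreate()

# Reading gs:// paths needs the GCS connector (preinstalled on Dataproc).
df = spark.read.json("gs://my-bucket/exports/*.json")
transformed = df.select("timestamp", "event_type", "payload")  # domain transform

(transformed.write
    .format("org.elasticsearch.spark.sql")
    .option("es.nodes", "my-cluster.es.io")
    .option("es.port", "9243")
    .option("es.net.ssl", "true")
    .option("es.nodes.wan.only", "true")   # recommended for cloud-hosted clusters
    .option("es.net.http.auth.user", "elastic")
    .option("es.net.http.auth.pass", "<password>")
    .mode("append")
    .save("event-logs"))                   # target index
```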
As of 2021, there is a Dataflow template that allows a "GCP native" connection between BigQuery and Elasticsearch.
More information here in a blog post by elastic.co.
Further documentation and a step-by-step process by Google.
