Roll over index with Elasticsearch and Serilog

We are using ES 6.7 and Serilog 7.1 in our .NET Core application.
In our logger implementation we are using the index format "app-{0:yyyy.MM}-1" for our ElasticsearchSinkOptions.
This creates an index called app-2019.04-1, as expected.
However, we have set up an alias and a lifecycle policy that performs a rollover action and creates a new index called app-2019.04-000002 after certain conditions have been met - also as expected.
The issue is that our .NET Core application still logs to the first index, app-2019.04-1. How do we update the index format used in the .NET Core application when Elasticsearch has performed a rollover action?

Well, I figured it out. Maybe it will help someone else: I had to log to the alias and not the index.
To make it work you need to (see the sketch after these steps):
Create an index with the format xxxx-1
Create an alias, e.g. xxxx, and add it to the index
Create the index pattern xxxx-*
Create a lifecycle policy
Create a template with the index pattern, alias, and lifecycle policy
Make sure your indexFormat in Serilog is the alias.
Start logging :)
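
For reference, here is a minimal sketch of those steps against the raw Elasticsearch REST API, using Python and the requests library. The names (app-policy, app-template, the "app" alias) and the rollover conditions are assumptions; substitute your own:

```python
# A minimal sketch of the rollover setup, assuming a local ES 6.7 cluster.
# Policy/template/alias names and rollover conditions are placeholders.
import requests

ES = "http://localhost:9200"  # assumed cluster address

# 1. Lifecycle policy with a rollover action (example conditions)
requests.put(f"{ES}/_ilm/policy/app-policy", json={
    "policy": {"phases": {"hot": {"actions": {
        "rollover": {"max_size": "5gb", "max_age": "7d"}}}}}})

# 2. Template tying the index pattern to the policy and the write alias
requests.put(f"{ES}/_template/app-template", json={
    "index_patterns": ["app-*"],
    "settings": {"index.lifecycle.name": "app-policy",
                 "index.lifecycle.rollover_alias": "app"}})

# 3. First index in the series, carrying the write alias
requests.put(f"{ES}/app-2019.04-1",
             json={"aliases": {"app": {"is_write_index": True}}})
```

With this in place, the indexFormat in your ElasticsearchSinkOptions should be the alias ("app" in this sketch), so writes always go to whichever index is currently behind it.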

Related

Set up reindex naming format in Kibana

I am trying to use the Kibana migration assistant, which seems to work fine; however, I would like to keep the original naming that we follow:
my_index_v1
my_other_index_v5
The Kibana migration assistant proposes:
Create reindexed-v7-my_index_v1 index.
Create reindexed-v7-my_other_index_v5 index.
Is there a way to tell Kibana how to name the indices?
Or is there a way to invoke the full migration process differently? (I understood that using the _reindex API is just one step in the process.)
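
For context, the _reindex step itself does let you pick the destination name freely. A minimal sketch with Python's requests library (index names and host are placeholders, and the assistant's other steps, such as mapping upgrades and alias swapping, are deliberately not covered here):

```python
# Sketch: manually reindexing with a self-chosen destination name.
import requests

ES = "http://localhost:9200"  # assumed cluster address

requests.post(f"{ES}/_reindex", json={
    "source": {"index": "my_index_v1"},
    "dest": {"index": "my_index_v1_new"}})  # any name you like
```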

Can Skywalking create ES indexes with lifecycle policies or index templates?

I am having trouble finding any information about this in the documentation. In the config/application.yml file, under storage.elasticsearch7, I see various configuration options. Is there a way to ensure that the indexes that get created use a given index template or ILM policy? I am running the Helm chart for the ELK stack and ES version 8.0.0-SNAPSHOT.
My goal is to just delete indexes from SW after 2 weeks so that my cluster doesn't run out of shards.
I created a lifecycle policy that performs the delete action after a set time, and then I added this configuration to the SkyWalking application.yml under storage.elasticsearch7:
advanced: ${SW_STORAGE_ES_ADVANCED:"{\"index.lifecycle.name\":\"sw-policy\"}"}
SW creates the index templates, and now I see that this setting is part of the template; indeed, the indexes have sw-policy attached.
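
For completeness, here is a rough sketch of the kind of policy described above (a delete phase after two weeks), created over the REST API with Python's requests library. The two-week timing and the host are assumptions; the policy name must match what SW_STORAGE_ES_ADVANCED references:

```python
# Sketch: an ILM policy that deletes indexes once they are 14 days old.
import requests

ES = "http://localhost:9200"  # assumed cluster address

requests.put(f"{ES}/_ilm/policy/sw-policy", json={
    "policy": {"phases": {
        "delete": {"min_age": "14d", "actions": {"delete": {}}}}}})
```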

How to move an Elasticsearch index using the file system?

Use case:
I have created the ES indexes mywebsiteindex-yyyymmdd and mysharepointindex-yyyymmdd on my laptop/dev machine. I want to export/zip an index as a file. The file may be moved by someone who has credentials to the target machine, and the zip/file may then be imported into the target Elasticsearch instance.
You can abstract the words 'machine', 'folder', and 'zip' in the above. The focus is 'transfer the index as a file and reimport it at a target which I may not be able to reach through http/tcp/ftp/ssh'.
Is there any Python/other script out there that can export-from-source and import-to-target? A script that hides the internal complexities of node/cluster count differences between dev/prod etc., and just moves the index.
Note: I have already referred to the page below, so no need to reiterate it:
https://www.elastic.co/guide/en/cloud/current/ec-migrate-data.html
There are some options:
You can use the snapshot and restore API to create a snapshot of your index and restore it in your new instance (the recommended way).
You can use the reindex API in your new instance to reindex your index from remote.
You can use Logstash with your old instance as the input and your new instance as the output.
And you can write a script/application using one of the supported clients to query your index, export it to a file, read that file, and import it into your new instance (Logstash can also do that); a minimal sketch of this approach follows below.
But you can't simply move the data files: this is neither supported nor recommended by Elastic.
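
To make that last option concrete, a minimal sketch using the official Python client: dump the documents to a newline-delimited JSON file, carry the file over by whatever channel you do have, and bulk-load it at the target. Host names, the index name, and the file path are assumptions, and mappings/settings are not carried over (you would recreate those separately):

```python
# Sketch: export an index's documents to a file, then bulk-import them.
# Assumes the official elasticsearch Python client (pip install elasticsearch).
import json
from elasticsearch import Elasticsearch, helpers

INDEX = "mywebsiteindex-20190401"  # hypothetical concrete index name

def export_index(host, path):
    es = Elasticsearch([host])
    with open(path, "w") as f:
        # helpers.scan streams every document via the scroll API
        for hit in helpers.scan(es, index=INDEX,
                                query={"query": {"match_all": {}}}):
            f.write(json.dumps(hit["_source"]) + "\n")

def import_index(host, path):
    es = Elasticsearch([host])
    with open(path) as f:
        actions = ({"_index": INDEX, "_source": json.loads(line)}
                   for line in f)
        helpers.bulk(es, actions)

# export_index("http://dev-machine:9200", "mywebsiteindex.ndjson")
# ...move the file however you can...
# import_index("http://target-machine:9200", "mywebsiteindex.ndjson")
```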

Elasticsearch next steps

I'm new to Elasticsearch and am still trying to set it up. I have installed Elasticsearch 5.5.1 using the default values, and I have also installed Kibana 5.5.1 using the default values. I've also installed the ingest-attachment plugin along with the latest X-Pack plugin. I have Elasticsearch running as a service and I have Kibana open in my browser. On the Kibana dashboard I have an error stating that it is unable to fetch mappings. I guess this is because I haven't set up any indices or pipelines yet.
This is where I need some steering; all the documentation I've found online so far isn't particularly clear. I have a directory with a mixture of document types, such as PDF and DOC files. My ultimate goal is to be able to search these documents with values that a user will enter via an app. I'm guessing I need to use the Dev Tools/console window in Kibana, using the 'PUT' command, to create a pipeline next, but I'm unsure how I should do this so that it points to my directory of documents. Can anybody provide an example of this for this version, please?
If I understand you correctly, let's first establish some basic understanding of Elasticsearch:
Elasticsearch, in its simplest definition, is a "search engine": you store some data, and Elasticsearch helps you search it using search criteria and retrieves the relevant data back.
You need a "container" to save your data to, and Elasticsearch, like any database engine, has one, but the terms are somewhat different: for example, a "database" in SQL-like systems is called an "index", and what you know as a "table" is called a "type" in Elasticsearch.
From my understanding, you will need to create your index (with or without mappings) as a starting point. I recommend starting without mappings, just to get things working, but later on it is highly recommended to work with mappings where applicable, because Elasticsearch is smart, but it cannot know more about your data than you do.
Kibana has complained because it failed to find a proper index to start with; it is asking you to provide either a pattern for index names or a specific index name, so it can infer the mappings and give you the nice features of querying, displaying charts, etc. of your data. Once you create your index, provide that to the starting page of Kibana and you will be ready to go.
Let me know if you need something more specific to your needs :)
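
Since the question mentions the ingest-attachment plugin, here is a rough sketch of the two pieces involved: an ingest pipeline with the attachment processor, and documents pushed through it. Note that Elasticsearch does not read your directory by itself; something (here, a Python script with the official client) has to push each file in as base64. The pipeline name "attachments", the index name "docs", and the directory path are all made-up examples:

```python
# Sketch for ES 5.5 + ingest-attachment: create a pipeline, then index
# files from a local directory through it. Names/paths are placeholders.
import base64
import os
from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])

# The attachment processor extracts text from base64 content in "data"
es.ingest.put_pipeline(id="attachments", body={
    "description": "Extract text from binary documents",
    "processors": [{"attachment": {"field": "data"}}]})

docs_dir = "/path/to/documents"  # hypothetical directory
for i, name in enumerate(os.listdir(docs_dir)):
    with open(os.path.join(docs_dir, name), "rb") as f:
        es.index(index="docs", doc_type="doc", id=i, pipeline="attachments",
                 body={"filename": name,
                       "data": base64.b64encode(f.read()).decode("ascii")})
```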

Spring Data Couchbase - Search without having admin rights on the cluster

I'm currently working on a POC with Couchbase, using Spring Data to put & get documents on/off a bucket on a cluster.
As I'm working in a big company, I'm lucky they gave me a bucket, but I still don't have admin rights on the cluster, so I only have access to the bucket.
As I dig into the Spring Data documentation, I'm not able to find a way to retrieve documents without creating views on the server (I'm getting errors like "Unknown query param"). With the Couchbase Java SDK I am able to, through N1QL queries, but the use of the Spring Data layer is mandatory.
The answers I found always point me in the server-side function direction,
e.g.: https://stackoverflow.com/a/30928169/3744307
What I would like to find is a way to add a repository method like
List findReceiptByAccount(String account)
without having to specifically declare the function server-side.
Is this possible, or do I have to send a request to the administrators to create functions for me every time I add a findByX method?
Thanks for your time,
What version of CB is it?
I think that prior to 4.5, N1QL access (which you seem to have) is enough to build your index yourself!
With Spring Data Couchbase 2.x, that would use a N1QL index in the background, and it would work with a single primary index (although having one index per repository entity class would be best for performance). Maybe you can ask your admin to create that index once?
