Can Skywalking create ES indexes with lifecycle policies or index templates?

I am having trouble finding any information about this in the documentation. In the config/application.yml file under storage.elasticsearch7 I see various configuration options. Is there a way to ensure that the indexes that get created use a given index template or ILM policy? I am running the helm chart for the ELK stack and ES version 8.0.0-SNAPSHOT.
My goal is simply to delete SkyWalking's indexes after 2 weeks so that my cluster doesn't run out of shards.

I created a lifecycle policy that performs the delete action after a set time, and then I added this configuration to the SkyWalking application.yml under storage.elasticsearch7:
advanced: ${SW_STORAGE_ES_ADVANCED:"{\"index.lifecycle.name\":\"sw-policy\"}"}
SkyWalking creates its own index templates; this setting now shows up as part of each template, and the indexes do indeed have sw-policy attached.
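For completeness, the sw-policy referenced above can be created in the Kibana Dev Tools console. This is a minimal sketch of a delete-after-two-weeks policy; the policy name comes from the config above, but the exact phase timing is an assumption:

PUT _ilm/policy/sw-policy
{
  "policy": {
    "phases": {
      "delete": {
        "min_age": "14d",  // assumed from "after 2 weeks" in the question
        "actions": {
          "delete": {}
        }
      }
    }
  }
}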

Related

Elastic APM different index name

As of a few weeks ago we added Filebeat, Metricbeat and APM to our dotnet core application running on our kubernetes cluster.
It all works nicely, and we recently discovered that Filebeat and Metricbeat are able to write to different indices based on a set of rules.
We wanted to do the same for APM; however, searching the documentation, we can't find any option to set the name of the index to write to.
Is this even possible, and if yes, how is it configured?
I also tried searching the codebase for the current apm-* index name, but couldn't find any matches for configuring it.
The problem we'd like to fix is that every space in Kibana gets to see the APM metrics of every application. Certain applications shouldn't be visible within a given space, so I thought a new apm-application-* index would do the trick...
Edit
Since this shouldn't be configured on the agent but in the cloud service console, I'm having trouble getting the 'user override' settings to do what I want.
The rules I want:
When an application does not live inside the kubernetes namespace default OR kube-system, write to an index called apm-7.8.0-application-type-2020-07
All other applications in other namespaces should remain in the default indices
I see you can add output.elasticsearch.indices to make this happen: an array of index selector rules supporting conditionals and formatted strings.
I tried this by copying the same config I had for Metricbeat, updated it to use the APM syntax, and came to the following 'user override':
output.elasticsearch.indices:
  - index: 'apm-%{[observer.version]}-%{[kubernetes.labels.app]}-%{[processor.event]}-%{+yyyy.MM}'
    when:
      not:
        or:
          - equals:
              kubernetes.namespace: default
          - equals:
              kubernetes.namespace: kube-system
but when I use this setup it tells me:
Your changes cannot be applied
'output.elasticsearch.indices.when': is not allowed
Set output.elasticsearch.indices.0.index to apm-%{[observer.version]}-%{[kubernetes.labels.app]}-%{[processor.event]}-%{+yyyy.MM}
Set output.elasticsearch.indices.0.when.not.or.0.equals.kubernetes.namespace to default
Set output.elasticsearch.indices.0.when.not.or.1.equals.kubernetes.namespace to kube-system
Then I updated the example accordingly, but came to the same conclusion: it was not valid either.
In your ES Cloud console, you need to edit the cluster configuration, scroll to the APM section, and then click "User override settings". There you can override the target index by adding the following property:
output.elasticsearch.index: "apm-application-%{[observer.version]}-{type}-%{+yyyy.MM.dd}"
Note that if you change this setting, you also need to modify the corresponding index template to match the new index name.
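As a rough illustration of that last note: if the override writes to apm-application-*, the matching index template needs to cover the new pattern. The template name and minimal body below are placeholders, not the actual APM template; in practice you would copy the settings and mappings from the existing apm-* template:

PUT _template/apm-application
{
  "index_patterns": ["apm-application-*"],
  "settings": {
    "number_of_shards": 1
  }
}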

How to move elasticsearch index using file system?

Use case:
I have created ES indexes mywebsiteindex-yyyymmdd and mysharepointindex-yyyymmdd on my laptop/dev machine. I want to export/zip each index as a file. The file may then be carried over by someone who has credentials to the target machine, and imported into the target Elasticsearch there.
You can abstract away the words 'machine', 'folder', and 'zip' above. The focus is: 'transfer an index as a file and reimport it at a target that I may not be able to reach over http/tcp/ftp/ssh'.
Is there any python/other script out there that can export-from-source and import-to-target? A script that hides the internal complexities of node/cluster count differences between dev/prod etc., and just moves the index.
Note: I already referred to the below page, so no need to reiterate the same
https://www.elastic.co/guide/en/cloud/current/ec-migrate-data.html
There are some options:
You can use the snapshot and restore api to create a snapshot of your index and restore it in your new instance. (recommended way)
You can use the reindex api in your new instance to reindex your index from remote.
You can use Logstash with your old instance as an input and your new instance as the output.
And you can write a script/application using one of the supported clients to query your index, export it to a file, read that file, and import it into your new instance (Logstash can also do that).
But you can't simply move the data files between machines; this is neither supported nor recommended by Elastic.
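A minimal sketch of the recommended snapshot-and-restore route, assuming a filesystem repository. The repository name and location are placeholders, and the location must be whitelisted in path.repo in elasticsearch.yml:

PUT _snapshot/my_fs_repo
{
  "type": "fs",
  "settings": { "location": "/mnt/es_backups" }  // placeholder path, must be in path.repo
}

PUT _snapshot/my_fs_repo/snapshot_1?wait_for_completion=true
{
  "indices": "mywebsiteindex-*,mysharepointindex-*"
}

The contents of /mnt/es_backups can then be zipped, carried to the target machine, registered there as a repository the same way, and restored with POST _snapshot/my_fs_repo/snapshot_1/_restore.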

Rollover index with Elasticsearch and Serilog

We are using ES 6.7 and Serilog 7.1 in our dotnet core application.
In our logger implementation we are using the index format "app-{0:yyyy.MM}-1" for our ElasticsearchSinkOptions.
This creates an index called app-2019.04-1 as expected.
However, we set up an alias and a lifecycle policy which perform a rollover action and create a new index called app-2019.04-000002 after certain conditions have been met - as expected.
The issue is that our dotnet core application still logs to the first index, app-2019.04-1. How do we update the index format used by the dotnet core application when Elasticsearch has performed a rollover action?
Well, I figured it out. Maybe it will help someone else. I had to log to the alias and not the index.
To make it work you need to (see the sketch below):
Create an index with the format xxxx-1
Create an alias and add it to the index, e.g. xxxx
Create the index pattern xxxx-*
Create a lifecycle policy
Create a template with the index pattern, alias and lifecycle policy
Make sure your index format in Serilog is the alias.
Start logging :)
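A rough Dev Tools sketch of the steps above, using the xxxx placeholder from the list; the rollover conditions in the policy are assumptions, and on the Serilog side this corresponds to pointing IndexFormat at the alias:

PUT _ilm/policy/xxxx-policy
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": { "max_age": "30d", "max_size": "50gb" }  // assumed conditions
        }
      }
    }
  }
}

PUT _template/xxxx-template
{
  "index_patterns": ["xxxx-*"],
  "settings": {
    "index.lifecycle.name": "xxxx-policy",
    "index.lifecycle.rollover_alias": "xxxx"
  }
}

PUT xxxx-1
{
  "aliases": {
    "xxxx": { "is_write_index": true }  // rollover writes through this alias
  }
}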

Elasticsearch next steps

I'm new to elasticsearch and am still trying to set it up. I have installed elasticsearch 5.5.1 using default values, and I have also installed Kibana 5.5.1 using the default values. I've also installed the ingest-attachment plugin with the latest x-pack plugin. I have elasticsearch running as a service and I have Kibana open in my browser. On the Kibana dashboard I have an error stating that it is unable to fetch mappings. I guess this is because I haven't set up any indices or pipelines yet.
This is where I need some steer; all the documentation I've found online so far isn't particularly clear. I have a directory with a mixture of document types such as pdf and doc files. My ultimate goal is to be able to search these documents with values that a user will enter via an app. I'm guessing I need to use the Dev Tools/console window in Kibana with the 'PUT' command to create a pipeline next, but I'm unsure how to do this so that it points to my directory of documents. Can anybody provide an example of this for this version, please?
If I understand you correctly, let's first set some basic understanding about elasticsearch:
Elasticsearch, in its simplest definition, is a "search engine": you store some data, and elastic will help you search it using search criteria and retrieve the relevant data back.
You need a "container" to save your data to, and elastic, like any database engine, has one, but the terms are somewhat different: for example, what a SQL-like system calls a "database" is called an "index" in elastic, and what you know as a "table" is called a "type".
From my understanding, you will need to create your index (with or without mappings) to have a starting point. I recommend starting without mappings just to get things working, but later on it's highly recommended to work with mappings where applicable, because elastic is smart, but it cannot know more about your data than you do.
Kibana has failed to find a proper index to start with, so it has complained and asked you to provide either a pattern for index names or a specific index name, so that it can infer the mappings and give you the nice features of querying, displaying charts, etc. of your data. Once you create your index, provide that name to the starting page of Kibana and you will be ready to go.
Let me know if you need something more specific to your needs :)
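Since the question mentions the ingest-attachment plugin, here is a minimal sketch of what the pipeline step could look like in the Dev Tools console. The pipeline, index, and field names are placeholders; note that a pipeline doesn't point at a directory - each file has to be base64-encoded and sent to elasticsearch by your app or a script:

PUT _ingest/pipeline/attachments
{
  "description": "Extract text and metadata from base64-encoded files",
  "processors": [
    { "attachment": { "field": "data" } }
  ]
}

PUT docs/doc/1?pipeline=attachments
{
  "data": "e1xydGYxXGFuc2kNCkxvcmVtIGlwc3VtIGRvbG9yIHNpdCBhbWV0DQpccGFyIH0="
}

The second request indexes one document whose data field holds the base64 contents of a file; the attachment processor extracts the text into an attachment.content field, which can then be searched.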

Elasticsearch Disable Delete indexes

I am using Elasticsearch 1.7.1
After I create my indexes, I do not want existing indexes to be deleted (either manually or by some unintentional request to my ES).
Is it possible to set any configurations in elasticsearch and restart the service to achieve the above?
I have tried some steps but they are not helping me out.
As for preventing index deletion via a wildcard /* or /_all, one thing you can do is add the following setting to your config file:
action.destructive_requires_name: true
I have solved this via Elasticsearch's HTTP CORS settings.
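For what it's worth, action.destructive_requires_name only blocks wildcard deletes; a delete by explicit name still succeeds. One way to go further is a per-index block - a sketch, with myindex as a placeholder (note that a read-only index rejects writes as well, not just deletion):

PUT myindex/_settings
{
  "index.blocks.read_only": true  // placeholder index name
}

With this block in place, a DELETE myindex request is rejected with a cluster block exception until the setting is removed.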
