Elastic APM different index name - elasticsearch

A few weeks ago we added Filebeat, Metricbeat and APM to our .NET Core application running on our Kubernetes cluster.
It all works nicely, and we recently discovered that Filebeat and Metricbeat can write to different indices based on a set of rules.
We wanted to do the same for APM, but searching the documentation we can't find any option to set the name of the index to write to.
Is this even possible, and if yes, how is it configured?
I also tried searching for the current name apm-* within the codebase, but couldn't find any matches related to configuring it.
The problem we'd like to fix is that every space in Kibana gets to see the APM metrics of every application. Certain applications shouldn't be visible in a given space, so I thought a new apm-application-* index would do the trick...
Edit
Since it shouldn't be configured on the agent but instead in the cloud service console, I'm having trouble getting the 'user override' settings to do what I want.
The rules I want to have:
When an application does not live inside the Kubernetes namespace default OR kube-system, write to an index called apm-7.8.0-application-type-2020-07
All other applications in other namespaces should remain in the default indices
I see you can add output.elasticsearch.indices to make this happen: "Array of index selector rules supporting conditionals and formatted string."
I tried this by copying the configuration I had for Metricbeat, updated it to use the APM syntax, and came up with the following 'user override':
output.elasticsearch.indices:
  - index: 'apm-%{[observer.version]}-%{[kubernetes.labels.app]}-%{[processor.event]}-%{+yyyy.MM}'
    when:
      not:
        or:
          - equals:
              kubernetes.namespace: default
          - equals:
              kubernetes.namespace: kube-system
but when I use this setup it tells me:
Your changes cannot be applied
'output.elasticsearch.indices.when': is not allowed
Set output.elasticsearch.indices.0.index to apm-%{[observer.version]}-%{[kubernetes.labels.app]}-%{[processor.event]}-%{+yyyy.MM}
Set output.elasticsearch.indices.0.when.not.or.0.equals.kubernetes.namespace to default
Set output.elasticsearch.indices.0.when.not.or.1.equals.kubernetes.namespace to kube-system
Then I updated the example accordingly, but came to the same conclusion, as it was not valid either.
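For reference, the flattened form those hints describe would look roughly like this (reconstructed from the error message above; the console rejected this variant as well):

output.elasticsearch.indices.0.index: 'apm-%{[observer.version]}-%{[kubernetes.labels.app]}-%{[processor.event]}-%{+yyyy.MM}'
output.elasticsearch.indices.0.when.not.or.0.equals.kubernetes.namespace: default
output.elasticsearch.indices.0.when.not.or.1.equals.kubernetes.namespace: kube-system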

In your ES Cloud console, you need to edit the cluster configuration, scroll to the APM section, and then click "User override settings". There you can override the target index by adding the following property:
output.elasticsearch.index: "apm-application-%{[observer.version]}-{type}-%{+yyyy.MM.dd}"
Note that if you change this setting, you also need to modify the corresponding index template to match the new index name.
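For example, if events are routed to apm-application-* as above, a matching (legacy) index template could be registered from the Kibana Dev Tools console along these lines. This is only a minimal sketch for 7.x that illustrates the pattern change; in practice you would copy the mappings and settings from the existing apm-* templates rather than the placeholder setting shown here:

PUT _template/apm-application
{
  "index_patterns": ["apm-application-*"],
  "order": 2,
  "settings": {
    "index.number_of_shards": 1
  }
}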

Related

Datadog skip ingestion of Spring actuator health endpoint

I was trying to configure my application to not report my health endpoint in Datadog APM. I checked the documentation here: https://docs.datadoghq.com/tracing/guide/ignoring_apm_resources/?tab=kuberneteshelm&code-lang=java
and tried adding the config in my Helm deployment.yaml file:
- name: DD_APM_IGNORE_RESOURCES
  value: GET /actuator/health
This had no effect. Traces were still showing up in Datadog. The method and path are correct. I changed the value a few times with different combinations (tried a few regex options). No go.
Then I tried the DD_APM_FILTER_TAGS_REJECT environment variable, trying to ignore http.route:/actuator/health. Also without success.
I even ran the agent and application locally to see if it had anything to do with the environment, but the configs were not applied.
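For reference, this is roughly how the two variables ended up being set in the container spec (a sketch; the surrounding Helm values and container definition are omitted):

env:
  # From the linked guide: drop traces whose resource name matches this regex
  - name: DD_APM_IGNORE_RESOURCES
    value: "GET /actuator/health"
  # Alternative attempt: reject spans carrying this tag:value pair
  - name: DD_APM_FILTER_TAGS_REJECT
    value: "http.route:/actuator/health"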
What are more options to try in this scenario?
This is the span detail: [screenshot of the span]

Can Skywalking create ES indexes with lifecycle policies or index templates?

I am having trouble finding any information about this in the documentation. In the config/application.yml file under storage.elasticsearch7 I see various configuration options. Is there a way to ensure that the indexes that get created use a given index template or ILM policy? I am running the Helm chart for the ELK stack and ES version 8.0.0-SNAPSHOT.
My goal is to just delete indexes from SW after 2 weeks so that my cluster doesn't run out of shards.
I created a lifecycle that performs the delete action after a set time, and then I added this configuration to the skywalking application.yml under storage.elasticsearch7:
advanced: ${SW_STORAGE_ES_ADVANCED:"{\"index.lifecycle.name\":\"sw-policy\"}"}
SkyWalking creates the index templates itself, and now I can see that this setting is part of the template, and indeed the indexes have sw-policy attached.
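For completeness, the sw-policy lifecycle itself can be created from the Kibana Dev Tools console along these lines; a minimal sketch assuming a plain delete phase 14 days after index creation, matching the two-week goal above:

PUT _ilm/policy/sw-policy
{
  "policy": {
    "phases": {
      "delete": {
        "min_age": "14d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}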

Setting up beat.hostname in Metricbeat

I am super new to these concepts, so I apologize if this is a silly question. I am trying to visualize Metricbeat data in Grafana with an Elasticsearch data source, all running locally, but I am unable to find where "beat.hostname" is set in the Metricbeat config.
I have the latest version of both Grafana and Metricbeat and am following this article. In the "Create Dashboard" section, the author mentions that he used "beat.hostname=grafana" as the host name when installing Metricbeat. He then used it in the query editor field to pull the data into the Grafana dashboard.
But where do we set this up? I looked at the two YAML files in the Metricbeat folder, but there is nothing describing this.
I think you can just use the name setting in your metricbeat.yml:
# ================================== General ===================================
# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
name: "jeremy-laptop"
# The tags of the shipper are included in their own field with each
# transaction published.
tags: ["laptop", "ubuntu"]
And you will find this value as "host.name" and be able to filter on it.
I'm not really a Grafana user, so that part will be on your side.
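If it helps, the Elasticsearch data source in Grafana takes Lucene query syntax in its query field, so filtering on that value would look something like this (assuming the name configured above):

host.name:"jeremy-laptop"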

Elastic search next steps

I'm new to Elasticsearch and am still trying to set it up. I have installed Elasticsearch 5.5.1 using default values, and I have also installed Kibana 5.5.1 using the default values. I've also installed the ingest-attachment plugin along with the latest X-Pack plugin. I have Elasticsearch running as a service and I have Kibana open in my browser.
On the Kibana dashboard I have an error stating that it is unable to fetch mappings. I guess this is because I haven't set up any indices or pipelines yet. This is where I need some steer; all the documentation I've found so far online isn't particularly clear.
I have a directory with a mixture of document types such as PDF and DOC files. My ultimate goal is to be able to search these documents with values that a user will enter via an app. I'm guessing I need to use the Dev Tools/console window in Kibana and the PUT command to create a pipeline next, but I'm unsure how I should do this so that it points to my directory of documents. Can anybody provide an example of this for this version, please?
If I understand you correctly, let's first establish some basic understanding of Elasticsearch:
Elasticsearch, in its simplest definition, is a "search engine": you store some data, and Elasticsearch helps you search it using a search criteria and retrieves the relevant data back.
You need a "container" to save your data to, and Elasticsearch, like any database engine, has one, but the terms are somewhat different: for example, a "database" in SQL-like systems is called an "index", and what you know as a "table" is called a "type" in Elasticsearch.
From my understanding, you will need to create your index (with or without mappings) to have a starting point, and I recommend you start without mappings just to get things working. Later on it's highly recommended to work with mappings where applicable, because Elasticsearch is smart, but it cannot know more about your data than you do.
Because Kibana has failed to find a proper index to start with, it has complained and asked you to provide either a pattern for index names or a specific index name, so that it can infer the mappings and give you the nice features of querying, displaying charts, etc. of your data. So once you create your index, provide that on the starting page of Kibana and you will be ready to go.
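As a concrete starting point, here is a minimal sketch of the kind of PUT commands you could run in the Dev Tools console on 5.x, assuming an index called docs and the ingest-attachment plugin you already installed. Note that the attachment processor works on base64-encoded content sent in the request body; Elasticsearch will not read files from a directory by itself, so your app (or a small script) has to send each file's content:

PUT _ingest/pipeline/attachment
{
  "description": "Extract text and metadata from base64-encoded documents",
  "processors": [
    {
      "attachment": {
        "field": "data"
      }
    }
  ]
}

PUT docs/doc/1?pipeline=attachment
{
  "data": "<base64-encoded contents of one PDF or DOC file>"
}

GET docs/_search
{
  "query": {
    "match": {
      "attachment.content": "text the user entered"
    }
  }
}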
Let me know if you need something more specific to your needs :)

ElasticSearch Templates

I'm running Elasticsearch on Ubuntu 14.04 and I can't seem to get ES to find a template for a specific index. The documentation is confusing in that it says you should put a templates directory under /etc/elasticsearch/config/, but later it says that configuration lives directly under /etc/elasticsearch, which is where the YAML file is.
The reason I know it's not finding the template is that I can run:
curl -XGET 'http://localhost:9200/_template/my_template?pretty'
and get an empty JSON object back.
According to the config notes for Templates:
Index templates can also be placed within the config location (path.conf) under the templates directory (note, make sure to place them on all master eligible nodes).
In your case, if your main configuration directory is /etc/elasticsearch, then you may place templates inside a folder called /etc/elasticsearch/templates. You'll need to place that file on all of the servers running master-eligible nodes (e.g., for a small cluster, on all nodes).
In my experience, it's a little more common to simply POST templates using the HTTP API. That way you can add and remove templates without having to worry about managing and deploying configurations on your servers.
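For example, a template for a hypothetical my_index-* pattern could be created with curl like this (a minimal sketch with a placeholder setting; on 6.x and later the key is "index_patterns" and a Content-Type header is required):

curl -XPUT 'http://localhost:9200/_template/my_template' -d '
{
  "template": "my_index-*",
  "settings": {
    "number_of_shards": 1
  }
}'

You can then verify it with the same GET request shown above.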
Index Templates
