Datadog skip ingestion of Spring actuator health endpoint - spring-boot

I was trying to configure my application to not report my health endpoint in Datadog APM. I checked the documentation here: https://docs.datadoghq.com/tracing/guide/ignoring_apm_resources/?tab=kuberneteshelm&code-lang=java
And tried adding the config in my helm deployment.yaml file:
- name: DD_APM_IGNORE_RESOURCES
  value: GET /actuator/health
This had no effect. Traces were still showing up in Datadog. The method and path are correct. I changed the value a few times with different combinations (tried a few regex options). No go.
Then I tried the DD_APM_FILTER_TAGS_REJECT environment variable, trying to ignore http.route:/actuator/health. Also without success.
I even ran the agent and application locally to rule out anything environment-specific, but the configs still had no effect.
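For reference, here is a sketch of the agent-side config as I understand the docs to intend it (assumptions on my part: the variable is read by the Datadog Agent container, not the application pod, and the value is a comma-separated list of regexes matched against the span resource name):
# Sketch: env entry on the Datadog Agent container, not the traced app.
# The value is treated as a comma-separated list of regular expressions,
# so quoting the whole value avoids YAML surprises.
- name: DD_APM_IGNORE_RESOURCES
  value: "GET /actuator/health"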
What are more options to try in this scenario?
This is the span detail:

Related

Elastic Cloud APM not showing logs in Transactions Page

What makes Kibana not show Docker container logs on the APM "Transactions" page under the "Logs" tab?
I verified the logs are successfully being generated with the "trace.id" associated for proper linking.
I have the exact same environment and configs (7.16.2) up via docker-compose and it works perfectly.
I could not figure out why this feature works locally but does not show up in the Elastic Cloud deployment.
UPDATE with Solution:
I just solved the problem.
It's related to the Filebeat version.
From 7.16.0 onward, the transaction/log linking stops working.
Reverted Filebeat back to version 7.15.2 and it started working again.
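For reference, a minimal sketch of that pin in a docker-compose.yml (assuming the official Filebeat image; adapt to your own setup):
filebeat:
  # Pinned below 7.16.0, where the transaction/log linking broke for us.
  image: docker.elastic.co/beats/filebeat:7.15.2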
If you are not using Filebeat: we, for example, rolled our own logging implementation to send logs from a queue in batches using the Bulk API.
We have our own "ElasticLog" class and then use Attributes to match the logs-* Schema for the Log Stream.
In particular, we had to make sure that trace.id was the same as the trace.id property of the actual traces. Then the logs started to show up here (it does take a few minutes sometimes).
Some more info on how to get the IDs:
We use the OpenTelemetry exporter for traces and ILoggerProvider for logs. They fire off batches independently of each other.
We populate the trace IDs at the time of instantiation of the class, as a default value. This way you are still in the context of the Activity. It also helps set the timestamp to exactly when the log was created.
This LogEntry then gets passed into the ElasticLogger processor and mapped, as described above, to the ElasticLog entry with the attributes needed for Elasticsearch.
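As a purely illustrative sketch of the shape described above (field names assumed to follow the ECS logs-* schema; the values are made up), a bulk-indexed log document that links to a trace could look like:
{
  "@timestamp": "2022-01-10T12:00:00.000Z",
  "message": "Order created",
  "log.level": "Information",
  "service.name": "orders-api",
  "trace.id": "0af7651916cd43dd8448eb211c80319c"
}
The key point, as above, is that trace.id must carry exactly the same value as the trace.id of the corresponding APM trace, and @timestamp should be set when the log entry is created.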

How to debug an EnvoyFilter in Istio?

I have the following filter:
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: proper-filter-name-here
  namespace: istio-system
spec:
  workloadSelector:
    labels:
      app: istio-ingressgateway
  configPatches:
  - applyTo: NETWORK_FILTER
    match:
      context: GATEWAY
      listener:
        filterChain:
          filter:
            name: "envoy.http_connection_manager"
    patch:
      operation: INSERT_BEFORE
      value:
        name: envoy.lua
        typed_config:
          "@type": "type.googleapis.com/envoy.config.filter.http.lua.v2.Lua"
          inlineCode: |
            function envoy_on_request(request_handle)
              request_handle:logDebug("Hello World!")
            end
I am checking the logs for the gateway and it does not look like the filter is applied. How do I debug an EnvoyFilter? Where can I see which filters are applied on each request?
This topic is very well described in the documentation:
The simplest kind of Istio logging is Envoy’s access logging. Envoy proxies print access information to their standard output. The standard output of Envoy’s containers can then be printed by the kubectl logs command.
You have asked:
Where can I see which filters are applied on each request?
Based on this issue on github:
There is no generic mechanism.
It follows that if you wanted to see which filters were applied to each request, you would have to create your own custom solution.
However, you can get logs for each request without any problem, based on this fragment of the documentation:
If you used an IstioOperator CR to install Istio, add the following field to your configuration:
spec:
  meshConfig:
    accessLogFile: /dev/stdout
Otherwise, add the equivalent setting to your original istioctl install command, for example:
istioctl install <flags-you-used-to-install-Istio> --set meshConfig.accessLogFile=/dev/stdout
You can also choose between JSON and text by setting accessLogEncoding to JSON or TEXT.
You may also want to customize the format of the access log by editing accessLogFormat.
Refer to global mesh options for more information on all three of these settings:
meshConfig.accessLogFile
meshConfig.accessLogEncoding
meshConfig.accessLogFormat
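Put together, a sketch of an IstioOperator fragment using all three settings (the format string is a made-up minimal example built from standard Envoy access log operators):
spec:
  meshConfig:
    accessLogFile: /dev/stdout
    accessLogEncoding: TEXT   # or JSON
    # Minimal example format: timestamp, method, path, response code.
    accessLogFormat: '[%START_TIME%] "%REQ(:METHOD)% %REQ(:PATH)%" %RESPONSE_CODE%'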
You can also change the access log format and test the access log by following the linked instructions.
See also (EDIT):
How to debug your Istio networking configuration:
EnvoyFilters will manifest where you tell Istio to put them. Typically a bad EnvoyFilter will manifest as Envoy rejecting the configuration (i.e. not being in the SYNCED state above) and you need to check Istiod (Pilot) logs for the errors from Envoy rejecting the configuration.
If configuration didn’t appear in Envoy at all (Envoy did not ACK it, or it’s an EnvoyFilter configuration), it’s likely that the configuration is invalid (Istio cannot syntactically validate the configuration inside of an EnvoyFilter) or is located in the wrong spot in Envoy’s configuration.
Debugging Envoy and Istiod
Using EnvoyFilters to Debug Requests
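In practice (a sketch; the pod placeholder must be replaced with your actual ingress gateway pod, and the deployment names assume a default istio-system install), the checks described above could look like:
# 1. Is every proxy SYNCED, i.e. has it ACKed the latest config?
istioctl proxy-status

# 2. Did the Lua filter actually land in the gateway's listener config?
istioctl proxy-config listeners <ingressgateway-pod> -n istio-system -o json

# 3. Any errors from Envoy rejecting the EnvoyFilter?
kubectl logs -n istio-system deploy/istiod | grep -i -e reject -e error

# 4. Raise the gateway's lua logger so request_handle:logDebug() shows up.
istioctl proxy-config log <ingressgateway-pod> -n istio-system --level lua:debug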

Elastic APM different index name

A few weeks ago we added Filebeat, Metricbeat and APM to our .NET Core application, running on our Kubernetes cluster.
It all works nicely, and recently we discovered Filebeat and Metricbeat are able to write to different indices based on several rules.
We wanted to do the same for APM; however, searching the documentation, we can't find any option to set the name of the index to write to.
Is this even possible, and if so, how is it configured?
I also tried finding the current name apm-* within the codebase, but couldn't find any matches for configuring it.
The problem we'd like to fix is that every space in Kibana gets to see the APM metrics of every application. Certain applications shouldn't be within a given space, so I thought a new apm-application-* index would do the trick...
Edit
Since it shouldn't be configured on the agent but instead in the cloud service console, I'm having trouble 'user-overriding' the settings to my liking.
The rules I want to have:
When an application does not live inside the Kubernetes namespace default OR kube-system, write to an index called apm-7.8.0-application-type-2020-07
All other applications in other namespaces should remain in the default indices
I see you can add output.elasticsearch.indices to make this happen: "Array of index selector rules supporting conditionals and formatted string."
I tried this by copying the same config I had for Metricbeat, updating it to use the APM syntax, and came to the following 'user override':
output.elasticsearch.indices:
  - index: 'apm-%{[observer.version]}-%{[kubernetes.labels.app]}-%{[processor.event]}-%{+yyyy.MM}'
    when:
      not:
        or:
          - equals:
              kubernetes.namespace: default
          - equals:
              kubernetes.namespace: kube-system
but when I use this setup it tells me:
Your changes cannot be applied
'output.elasticsearch.indices.when': is not allowed
Set output.elasticsearch.indices.0.index to apm-%{[observer.version]}-%{[kubernetes.labels.app]}-%{[processor.event]}-%{+yyyy.MM}
Set output.elasticsearch.indices.0.when.not.or.0.equals.kubernetes.namespace to default
Set output.elasticsearch.indices.0.when.not.or.1.equals.kubernetes.namespace to kube-system
Then I updated the example accordingly, but it was not valid either.
In your ES Cloud console, you need to Edit the cluster configuration, scroll to the APM section and then click "User override settings". In there you can override the target index by adding the following property:
output.elasticsearch.index: "apm-application-%{[observer.version]}-{type}-%{+yyyy.MM.dd}"
Note that if you change this setting, you also need to modify the corresponding index template to match the new index name.
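For illustration (assuming {type} here stands for the event type, e.g. transaction, which is my reading of the placeholder rather than documented syntax), a transaction event written by APM Server 7.8.0 on 15 July 2020 would then land in an index named:
apm-application-7.8.0-transaction-2020.07.15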

Spring cloud data flow on kubernetes doesn't show the streams section

Installed Spring Cloud Data Flow on Kubernetes following the procedure here: https://docs.spring.io/spring-cloud-dataflow-server-kubernetes/docs/1.7.1.BUILD-SNAPSHOT/reference/htmlsingle/#_installation
After the install, the console is up, but it only shows apps and audit records on the dashboard; the stream and task designers are missing. Are there additional steps?
Enabling Skipper in the server-deployment.yaml by uncommenting the following lines seemed to have done the trick:
- name: SPRING_CLOUD_SKIPPER_CLIENT_SERVER_URI
  value: 'http://${SKIPPER_SERVICE_HOST}/api'
- name: SPRING_CLOUD_DATAFLOW_FEATURES_SKIPPER_ENABLED
  value: 'true'
I also changed some services to NodePort instead of ClusterIP to connect from the local machine.
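For that last part, a sketch of the NodePort switch (the service name scdf-server is an assumption; substitute your own):
# Switch the service type, then read back the assigned node port.
kubectl patch svc scdf-server -p '{"spec": {"type": "NodePort"}}'
kubectl get svc scdf-server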

Configure Slack with Watcher on Found

I have a webhook in Slack and I have to configure it with Watcher. The Elastic docs say this:
To configure a Slack account in Watcher, you set the watcher.actions.slack.service property in elasticsearch.yml. You must set the url to your incoming webhook integration URL.
but Found doesn't give access to the yml.
For example, on the local server, the following snippet configures an account called notify-monitoring and sets the default sender name to Watcher.
watcher.actions.slack.service:
  account:
    monitoring:
      url: https://hooks.slack.com/services/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
      message_defaults:
        from: Watcher
How do I configure it on Found?
It's been a while, but for anyone else who comes across this: Elastic Cloud (previously branded as Found.io) now does support setting some (and only some!) elasticsearch.yml configuration through its web interface.
It's accessible under:
Clusters => Cluster Name => Configuration => User Settings
I was able to insert the Slack notification settings there successfully!
IMPORTANT: this change does not propagate immediately. It took some time for me, so you might have to go get a cup of coffee while it's applied.
Also, another tip: You might find it easier to add a default_account for Slack, as follows:
watcher.actions.slack.service:
  default_account: monitoring
  account:
    monitoring:
      url: <your_slack_hook_url>
      message_defaults:
        from: watcher
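Once that setting is applied, a watch can reference the account by name in its Slack action. A minimal sketch (the watch ID, schedule, and channel are placeholders, and the always condition is only there so the watch fires for testing):
PUT _watcher/watch/test_slack_watch
{
  "trigger": { "schedule": { "interval": "10m" } },
  "input": { "simple": {} },
  "condition": { "always": {} },
  "actions": {
    "notify_slack": {
      "slack": {
        "account": "monitoring",
        "message": {
          "to": [ "#monitoring" ],
          "text": "Watcher test message"
        }
      }
    }
  }
}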
