How to debug an EnvoyFilter in Istio? - debugging

I have the following filter (note: `typed_config` uses the `"@type"` key):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: proper-filter-name-here
  namespace: istio-system
spec:
  workloadSelector:
    labels:
      app: istio-ingressgateway
  configPatches:
  - applyTo: NETWORK_FILTER
    match:
      context: GATEWAY
      listener:
        filterChain:
          filter:
            name: "envoy.http_connection_manager"
    patch:
      operation: INSERT_BEFORE
      value:
        name: envoy.lua
        typed_config:
          "@type": "type.googleapis.com/envoy.config.filter.http.lua.v2.Lua"
          inlineCode: |
            function envoy_on_request(request_handle)
              request_handle:logDebug("Hello World!")
            end
```
I am checking the logs for the gateway and it does not look like the filter is applied. How do I debug an EnvoyFilter? Where can I see which filters are applied on each request?

This topic is very well described in the documentation:
The simplest kind of Istio logging is Envoy’s access logging. Envoy proxies print access information to their standard output. The standard output of Envoy’s containers can then be printed by the kubectl logs command.
You have asked:
Where can I see which filters are applied on each request?
Based on this issue on github:
There is no generic mechanism.
It follows that if you wanted to see which filter was applied to each request, you would have to create your own custom solution.
However, you can easily get logs for each request, based on this fragment in the documentation:
If you used an IstioOperator CR to install Istio, add the following field to your configuration:
```yaml
spec:
  meshConfig:
    accessLogFile: /dev/stdout
```
Otherwise, add the equivalent setting to your original istioctl install command, for example:
```shell
istioctl install <flags-you-used-to-install-Istio> --set meshConfig.accessLogFile=/dev/stdout
```
You can also choose between JSON and text by setting accessLogEncoding to JSON or TEXT.
You may also want to customize the format of the access log by editing accessLogFormat.
Refer to global mesh options for more information on all three of these settings:
meshConfig.accessLogFile
meshConfig.accessLogEncoding
meshConfig.accessLogFormat
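Putting the three settings together, a minimal IstioOperator sketch might look like the following (the values shown are illustrative choices, not the only options; an empty `accessLogFormat` means the default format):

```yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  meshConfig:
    accessLogFile: /dev/stdout   # send access logs to the container's stdout
    accessLogEncoding: JSON      # or TEXT
    accessLogFormat: ""          # empty = Envoy's default format
```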
You can also change the access log format and test the access log by following the linked instructions.
See also (EDIT):
How to debug your Istio networking configuration:
EnvoyFilters will manifest where you tell Istio to put them. Typically a bad EnvoyFilter will manifest as Envoy rejecting the configuration (i.e. not being in the SYNCED state above) and you need to check Istiod (Pilot) logs for the errors from Envoy rejecting the configuration.
If configuration didn't appear in Envoy at all (Envoy did not ACK it, or it's an EnvoyFilter configuration), it's likely that the configuration is invalid (Istio cannot syntactically validate the configuration inside of an EnvoyFilter) or is located in the wrong spot in Envoy's configuration.
Debugging Envoy and Istiod
Using EnvoyFilters to Debug Requests
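The checks described above can be run with the usual istioctl inspection commands (the pod name below is a placeholder you would replace with your actual gateway pod):

```
# Is the gateway's Envoy in sync with Istiod? Look for SYNCED vs. STALE.
istioctl proxy-status

# Dump the gateway's listener configuration and search for your filter.
istioctl proxy-config listener <ingressgateway-pod> -n istio-system -o json | grep -i lua

# Check Istiod logs for errors from Envoy rejecting the configuration.
kubectl logs -n istio-system deploy/istiod | grep -i error
```

If the filter does not appear in the `proxy-config` output at all, that matches the "configuration didn't appear in Envoy" case quoted above.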

Related

Datadog skip ingestion of Spring actuator health endpoint

I was trying to configure my application to not report my health endpoint in Datadog APM. I checked the documentation here: https://docs.datadoghq.com/tracing/guide/ignoring_apm_resources/?tab=kuberneteshelm&code-lang=java
And tried adding the config in my helm deployment.yaml file:
```yaml
- name: DD_APM_IGNORE_RESOURCES
  value: GET /actuator/health
```
This had no effect. Traces were still showing up in datadog. The method and path are correct. I changed the value a few times with different combinations (tried a few regex options). No go.
Then I tried the DD_APM_FILTER_TAGS_REJECT environment variable, trying to ignore http.route:/actuator/health, also without success.
I even ran the agent and application locally to see if there was anything to do with the environment, but the configs were not applied.
What are more options to try in this scenario?
This is the span detail:

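One thing worth double-checking (an assumption, since the full deployment.yaml isn't shown): DD_APM_IGNORE_RESOURCES is read by the Datadog Agent container, not by the traced application, and its value is a comma-separated list of regular expressions, so values containing spaces or slashes usually need quoting. A sketch of the agent-side setting:

```yaml
# On the Datadog Agent container, not the application container
- name: DD_APM_IGNORE_RESOURCES
  value: "GET /actuator/health"
```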
Elastic APM different index name

A few weeks ago we added Filebeat, Metricbeat and APM to our .NET Core application running on our Kubernetes cluster.
It all works nicely, and recently we discovered that Filebeat and Metricbeat are able to write to different indices based on several rules.
We wanted to do the same for APM; however, searching the documentation, we can't find any option to set the name of the index to write to.
Is this even possible, and if yes, how is it configured?
I also tried finding the current index name apm-* within the codebase, but couldn't find any matches for configuring it.
The problem we'd like to fix is that every space in Kibana gets to see the APM metrics of every application. Certain applications shouldn't be within this space, so I thought a new apm-application-* index would do the trick...
Edit
Since it shouldn't be configured on the agent but instead in the cloud service console, I'm having trouble getting the 'user override' settings to do what I want.
The rules I want to have:
- When an application does not live inside the Kubernetes namespace default OR kube-system, write to an index called apm-7.8.0-application-type-2020-07
- All other applications in other namespaces should remain in the default indices
I see you can add output.elasticsearch.indices to make this happen: Array of index selector rules supporting conditionals and formatted string.
I tried this by copying what I had for Metricbeat, updated it to use the APM syntax, and came up with the following 'user override':
```yaml
output.elasticsearch.indices:
  - index: 'apm-%{[observer.version]}-%{[kubernetes.labels.app]}-%{[processor.event]}-%{+yyyy.MM}'
    when:
      not:
        or:
          - equals:
              kubernetes.namespace: default
          - equals:
              kubernetes.namespace: kube-system
```
But when I use this setup, it tells me:
Your changes cannot be applied
'output.elasticsearch.indices.when': is not allowed
Set output.elasticsearch.indices.0.index to apm-%{[observer.version]}-%{[kubernetes.labels.app]}-%{[processor.event]}-%{+yyyy.MM}
Set output.elasticsearch.indices.0.when.not.or.0.equals.kubernetes.namespace to default
Set output.elasticsearch.indices.0.when.not.or.1.equals.kubernetes.namespace to kube-system
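Following the error hints above literally, the console appears to want flattened keys rather than nested YAML, which would presumably look like this (derived directly from the hints; not verified against the console):

```yaml
output.elasticsearch.indices.0.index: 'apm-%{[observer.version]}-%{[kubernetes.labels.app]}-%{[processor.event]}-%{+yyyy.MM}'
output.elasticsearch.indices.0.when.not.or.0.equals.kubernetes.namespace: default
output.elasticsearch.indices.0.when.not.or.1.equals.kubernetes.namespace: kube-system
```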
Then I updated the example accordingly, but came to the same conclusion, as it was not valid either.
In your ES Cloud console, you need to edit the cluster configuration, scroll to the APM section, and then click "User override settings". There you can override the target index by adding the following property:
```yaml
output.elasticsearch.index: "apm-application-%{[observer.version]}-{type}-%{+yyyy.MM.dd}"
```
Note that if you change this setting, you also need to modify the corresponding index template to match the new index name.

How to configure Loki to accept requests with subpath?

I want to talk to my Loki instance at an address in the following format:
http://my.domain.com/monitoring/loki/
But I cannot find the correct place to configure it.
I assumed that Loki is based on Prometheus and that I could use flags like --web.external-url. But it seems that is not the case, as I have checked all available flags with docker run grafana/loki --help.
Am I missing something or do I have to add a reverse proxy between Loki and the rest of the world?
Not possible. Use Nginx.
Even though Loki is based on Prometheus, it is currently not possible to make Loki listen at a subpath.
Just in case someone needs it and comes across this post: you can use -server.path-prefix=/loki for that case.
All API paths will then be served under this prefix.
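For example, with the Docker image from the question, the flag can be passed alongside the config file (the config path shown is the image's standard default; a sketch, not verified against every image version):

```
docker run grafana/loki \
  -config.file=/etc/loki/local-config.yaml \
  -server.path-prefix=/loki
```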

How to compare a Kubernetes custom resource spec with expected spec in a GO controller?

I am trying to implement my first Kubernetes operator. I want the operator controller to be able to compare the config in a running pod vs the expected config defined in a custom resource definition.
E.g. custom resource:

```yaml
apiVersion: test.com/v1alpha1
kind: TEST
metadata:
  name: example-test
spec:
  replicas: 3
  version: "20:03"
  config:
    valueA: true
    valueB: 123
```
The above custom resource is deployed and 3 pods are running. A change is made such that the config "valueA" is changed to false.
In the Go controller reconcile function I can get the TEST instance and see the "new" version of the config:

```go
instance := &testv1alpha1.TEST{}
log.Info("New value : " + instance.Spec.Config.valueA)
```
I am wondering how I can access what the value of "valueA" is in my running pods so that I can compare and recreate the pods if it has changed?
Also a secondary question, do I need to loop through all running pods in the reconcile function to check each or can I do this as a single operation?
What is this config exactly? If it's the Pod's spec config, I would suggest updating not the individual Pods but the spec in the Deployment; it will restart its Pods automatically. If it's environment variables for the apps in the Pod, I would recommend using a ConfigMap for storing them, and updating that. Answering your second question: in both cases it will be a single operation.
To get the Deployment or ConfigMap you need its name and namespace; usually, with a custom resource, these should be derived from its name. Here is an example of how you can get a Deployment instance and update it.
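To make "compare and update in a single operation" concrete, one common pattern is to hash the desired config into a pod-template annotation on the Deployment: if the hash changes, updating the Deployment rolls all its Pods automatically, so there is no need to inspect Pods one by one. A self-contained sketch (the type and field names are illustrative, not taken from controller-runtime):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"encoding/json"
	"fmt"
)

// Config mirrors the custom resource's spec.config block (names are illustrative).
type Config struct {
	ValueA bool `json:"valueA"`
	ValueB int  `json:"valueB"`
}

// configHash returns a stable hash of the desired config. Storing this hash as a
// pod-template annotation on the Deployment lets the controller compare desired
// vs. running state in one step: a changed annotation changes the pod template,
// and the Deployment controller restarts the Pods on its own.
func configHash(c Config) string {
	b, _ := json.Marshal(c) // deterministic for a fixed struct definition
	sum := sha256.Sum256(b)
	return hex.EncodeToString(sum[:])
}

// needsRollout compares the annotation currently on the Deployment's pod
// template with the hash of the config in the custom resource.
func needsRollout(currentAnnotation string, desired Config) bool {
	return currentAnnotation != configHash(desired)
}

func main() {
	running := configHash(Config{ValueA: true, ValueB: 123})
	fmt.Println(needsRollout(running, Config{ValueA: true, ValueB: 123}))  // unchanged config
	fmt.Println(needsRollout(running, Config{ValueA: false, ValueB: 123})) // valueA flipped
}
```

In a real reconcile loop, `currentAnnotation` would come from the fetched Deployment's `spec.template.metadata.annotations`, and a mismatch would trigger a single Update call on the Deployment.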

Spring Cloud Data Flow on Kubernetes doesn't show the Streams section

Installed Spring Cloud Data Flow on Kubernetes following the procedure here: https://docs.spring.io/spring-cloud-dataflow-server-kubernetes/docs/1.7.1.BUILD-SNAPSHOT/reference/htmlsingle/#_installation
After the install the console is up, but it only shows Apps and Audit Records on the dashboard; the Stream and Task designers are missing. Are there additional steps?
Enabling Skipper in the server-deployment.yaml by uncommenting the following lines seems to have done the trick:

```yaml
- name: SPRING_CLOUD_SKIPPER_CLIENT_SERVER_URI
  value: 'http://${SKIPPER_SERVICE_HOST}/api'
- name: SPRING_CLOUD_DATAFLOW_FEATURES_SKIPPER_ENABLED
  value: 'true'
```
Also changed some services to NodePort instead of ClusterIP to connect from the local machine.
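The NodePort change mentioned above can be applied with a patch like the following (the service name is a placeholder for whichever service you need to expose):

```
kubectl patch svc <scdf-server-service> -p '{"spec": {"type": "NodePort"}}'
```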
