How to debug gRPC calls between pods? - go

I have two pods that interact with each other using gRPC. How can I debug the gRPC calls between those two pods?
I have already tried setting:
export GRPC_TRACE=all
export GRPC_VERBOSITY=DEBUG
but then kubectl logs <pod> -n <namespace> does not show any gRPC logs. How can I debug gRPC between the pods?

It depends on what you want to debug. If you just want to know whether there is communication on the network, you will have to add something outside of your two programs.
If you are the developer of these programs, you can add interceptors (middleware) to the gRPC clients/servers to emit traces (e.g., using OpenTelemetry tracing and Jaeger) and get cross-service traces (trace IDs can be propagated over the network).
Otherwise, without knowing what your programs are, we can't help you with environment variables alone.

Related

Using ILogger to send logging to X-Ray via OpenTelemetry

All,
Thanks in advance for your time. We are moving from ILogger/log4net logging to files over to OpenTelemetry. We were on-prem; now that we are moving to the cloud, logging to files is not going to work. We use AWS, and I have the aws-otel-collector working with tracing. Logging seems to go to the console only - there is no way to get logs to X-Ray via OT.
On-prem we made extensive use of file-based logging; now the auto-instrumentation in OT and AWS does most of what we need. Still, there are times when we all wish we could peek inside the code at runtime and see a few values that the auto-instrumentation does not provide. That is what I would like to log to X-Ray via OT. There are samples (with warnings that say this is not best practice) that explain how to do this natively in AWS, but that means I have to run both the aws-otel-collector and the X-Ray daemon. The use of logs would be very limited and judicious, but I would really like to have them covered by one API. Is this possible?
Again - thanks in advance for your time.
Steve
It looks like you aren't differentiating between traces and logs. They are not the same. You can include "logs" (the correct term is "span events") in a trace, but that must be done when the traces are generated. If you own the code, check the documentation for how to do that.
OpenTelemetry (OTel) is designed for metrics, traces, and logs, but the implementation for logs is still not stable. See https://opentelemetry.io/status/#logging
So for now I would use OTel only for traces (X-Ray) and metrics (AWS Prometheus). Logs should be processed outside of OTel and stored in a proper log storage - that's not X-Ray (a trace storage), but OpenSearch, CloudWatch Logs, ...

OpenTelemetry for short-lived scripts?

Our system consists of many python scripts that are run on "clean" machines, that is, they need to have as little additional software on them as possible. Is there a way we could use OpenTelemetry without having to run additional servers on those machines? Is there a push model for sending data instead of pull?
Considering your additional explanation, I imagine you will eventually want to collect all telemetry from these systems. Using OTLP exporters you can send all three signals (traces, metrics, logs) to a collector service (as of now only tracing is stable; metrics and logs are still experimental). You would not have to run any additional servers on these machines for your use case. There are two recommended deployment strategies for the OpenTelemetry Collector:
As an agent - runs alongside the application on the same host machine.
As a gateway - runs on a standalone server outside the application host machine.
Running the collector agent on the same host offloads some work from the language client libraries and enriches the telemetry, but it can be resource intensive.
Read more about the collector here: https://opentelemetry.io/docs/collector/getting-started/

Is it possible to marshal/unmarshal Prometheus state in go-client?

I'd like to be able to save the state of various Prometheus metrics (CounterVec, HistogramVec, ...) to a file from my app, and restore it later when necessary. Would that be possible?
I see that there is a Write method in metric.go, but I can't find a Read one.
No Prometheus client library supports this, nor should you need it. Client libraries are designed to work entirely in memory, and functions like rate() gracefully handle the counter resets caused by a process restart.

HTTP API Call to create consul watch & consul exec

Currently I am using the consul watch and consul exec commands to create watches and to run some bash commands. I would like to use HTTP API calls instead of the commands so I can automate my system.
Are there HTTP equivalents for this?
Any help would be appreciated. Thanks.
Under "Consul SDK":
https://www.consul.io/downloads_tools.html
there is a bunch of libraries in various languages for talking to Consul. I personally like Consulate, a Python API; you would be interested in the event call: http://consulate.readthedocs.io/en/stable/events.html The consul exec and consul watch commands both use the event system. I don't know the exact event(s) you would need to send to simulate an exec call, but I'm sure you can start a watch, run an exec, and see what it does. Worst case, you would have to look into the Consul source (written in Go).

Using Grafana with Jmeter

I am trying to make Grafana display all my metrics (CPU, memory, etc.).
I have already configured Grafana on my server, configured InfluxDB, and of course configured the JMeter Backend Listener, but I still cannot display the graphs. Any idea what I should do to make it work?
It seems that system metrics (CPU, memory, etc.) are not in the scope of the JMeter Backend Listener implementation. Capturing those KPIs is actually part of the PerfMon plugin, which currently doesn't seem to support dumping the metrics to InfluxDB/Graphite (at least it doesn't work for me). It might be a good idea to raise such a request at https://groups.google.com/forum/#!forum/jmeter-plugins. Until that gets done, you also have the option of using an alternative metric-collection tool to feed data into InfluxDB/Graphite; which one depends on the server OS you want to monitor (e.g. Graphite-PowerShell-Functions for Windows, or collectd for everything else).
Are you sure that JMeter posts the data to InfluxDB? Do you see the default measurements created in InfluxDB?
I am able to send the data to InfluxDB using the Backend Listener. I have given the steps on this site:
http://www.testautomationguru.com/jmeter-real-time-results-influxdb-grafana/
