OpenTelemetry Exporting to Collector Contrib

I have been experimenting with OpenTelemetry this week and I would appreciate some advice.
I have instrumented a .NET Core API application written in C# with the OpenTelemetry libraries below, and initially installed Jaeger to collect and display the results. Jaeger is running in a Docker container on my local machine.
The code I added to ConfigureServices in the Startup.cs file is as follows:
services.AddOpenTelemetryTracing(builder =>
{
    builder.AddAspNetCoreInstrumentation()
        .AddHttpClientInstrumentation()
        .AddSqlClientInstrumentation()
        .AddSource(nameof(CORSBaseController))
        .SetResourceBuilder(ResourceBuilder.CreateDefault().AddService("MyAppAPI"))
        .AddJaegerExporter(opts =>
        {
            opts.AgentHost = Configuration["Jaeger:AgentHost"];
            opts.AgentPort = Convert.ToInt32(Configuration["Jaeger:AgentPort"]);
        });
});
In the Jaeger front end, when I search for traces received in the past hour after exercising the front end that connects to the API, two services are listed: 'jaeger-query' and 'MyAppAPI'. I can drill into 'MyAppAPI' and see the spans with the telemetry data that has been collected. So far so good.
I then installed the opentelemetry-collector-contrib on my machine. I used the contrib distribution because I eventually want to export the results to New Relic and need their exporter. I started the collector with Docker using the example docker-compose YAML file:
version: "3"
services:
  # Jaeger
  jaeger:
    image: jaegertracing/all-in-one:latest
    ports:
      - "16686:16686"
      - "14268"
      - "14250"
  # Zipkin
  zipkin:
    image: openzipkin/zipkin
    container_name: zipkin
    ports:
      - 9411:9411
  otel-collector:
    build:
      context: ../..
      dockerfile: examples/tracing/Dockerfile
    command: ["--config=/etc/otel-collector-config.yml"]
    volumes:
      - ./otel-collector-config.yml:/etc/otel-collector-config.yml
    ports:
      - "1888:1888"   # pprof extension
      - "8888:8888"   # Prometheus metrics exposed by the collector
      - "8889:8889"   # Prometheus exporter metrics
      - "13133:13133" # health_check extension
      - "9411"        # Zipkin receiver
      - "55679:55679" # zpages extension
    depends_on:
      - jaeger
      - zipkin
  # Expose the frontend on http://localhost:8081
  frontend:
    image: openzipkin/example-sleuth-webmvc
    command: Frontend
    environment:
      JAVA_OPTS: -Dspring.zipkin.baseUrl=http://otel-collector:9511
    ports:
      - 8081:8081
    depends_on:
      - otel-collector
  # Expose the backend on http://localhost:9000
  backend:
    image: openzipkin/example-sleuth-webmvc
    command: Backend
    environment:
      JAVA_OPTS: -Dspring.zipkin.baseUrl=http://otel-collector:9511
    ports:
      - 9000:9000
    depends_on:
      - otel-collector
The otel-collector-config.yml file referenced in the docker-compose file looks like the following, exposing both 'otlp' and 'zipkin' as receivers:
receivers:
  otlp:
    protocols:
      grpc:
  zipkin:
exporters:
  logging:
  zipkin:
    endpoint: "http://zipkin:9411/api/v2/spans"
processors:
  batch:
extensions:
  health_check:
  pprof:
  zpages:
service:
  extensions: [pprof, zpages, health_check]
  pipelines:
    traces:
      receivers: [otlp, zipkin]
      exporters: [zipkin, logging]
      processors: [batch]
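Note that the example compose file above does not publish an OTLP port for the otel-collector service, so an application running on the host cannot reach the collector's OTLP gRPC receiver directly. A minimal sketch of the extra mapping that would be needed, assuming a recent collector release (4317 is the current default OTLP gRPC port; older releases listened on 55680):
  otel-collector:
    # ...build, command and volumes as in the compose file above...
    ports:
      - "4317:4317" # OTLP gRPC receiver, reachable from the host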
I altered the code in my .NET project to the following, replacing the Jaeger exporter with the Zipkin exporter so that the data is sent to the Zipkin endpoint:
services.AddOpenTelemetryTracing(builder =>
{
    builder.AddAspNetCoreInstrumentation()
        .AddHttpClientInstrumentation()
        .AddSqlClientInstrumentation()
        .AddSource(nameof(CORSBaseController))
        .SetResourceBuilder(ResourceBuilder.CreateDefault().AddService("MyAppAPI"))
        .AddZipkinExporter(opts =>
        {
            opts.Endpoint = new Uri("http://localhost:9411/api/v2/spans");
        });
});
Now, when I use my front end to send requests to the API, I can no longer see MyAppAPI as one of the services in Jaeger (I should have been looking in Zipkin). All I can see are the jaeger-query spans that correspond to the times I was using the front end.
Edit: I have got this working. It was due to an incorrect call to the Zipkin collector in the Startup.cs file. Also, when I wrote out the question I realised I was exporting to Zipkin, not Jaeger, so I can now see my traces in the Zipkin front end at http://localhost:9411.
The code above has been corrected so that it works.
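For completeness, if the goal is still to send the data through the collector itself (and from there on to Zipkin via the collector's zipkin exporter), the OTLP exporter package (OpenTelemetry.Exporter.OpenTelemetryProtocol) can be pointed at the collector's OTLP gRPC receiver instead. This is only a minimal sketch, assuming the collector's OTLP gRPC port has been published to the host (see the note after the collector config above) and a 1.x version of the exporter package, where Endpoint is a Uri:
services.AddOpenTelemetryTracing(builder =>
{
    builder.AddAspNetCoreInstrumentation()
        .AddHttpClientInstrumentation()
        .AddSqlClientInstrumentation()
        .AddSource(nameof(CORSBaseController))
        .SetResourceBuilder(ResourceBuilder.CreateDefault().AddService("MyAppAPI"))
        // Send OTLP over gRPC to the collector; its traces pipeline then
        // forwards the spans to the zipkin and logging exporters.
        .AddOtlpExporter(opts =>
        {
            // Assumes the collector's OTLP gRPC port is published to the host.
            opts.Endpoint = new Uri("http://localhost:4317");
        });
});
With that in place the service should still show up in the Zipkin UI, and the collector could additionally fan the same spans out to other backends such as New Relic via its exporter configuration.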

Related

Services empty in Opensearch Trace Analytics

I'm using Amazon OpenSearch with the engine OpenSearch 1.2.
I was working on setting up APM with the following details:
Service 1 - a Tomcat application running on an EC2 server that accesses an RDS database. The server is behind a load balancer with a sub-domain mapped to it.
I added a setenv.sh file in the tomcat/bin folder with the following content:
#!/bin/sh
export CATALINA_OPTS="$CATALINA_OPTS -javaagent:<PATH_TO_JAVA_AGENT>"
export OTEL_METRICS_EXPORTER=none
export OTEL_EXPORTER_OTLP_ENDPOINT=http://<OTEL_COLLECTOR_SERVER_IP>:4317
export OTEL_RESOURCE_ATTRIBUTES=service.name=<SERVICE_NAME>
export OTEL_INSTRUMENTATION_COMMON_PEER_SERVICE_MAPPING=<RDS_HOST_ENDPOINT>=Database-Service
OTel Java agent - collects traces from the application.
OTel Collector and Data Prepper - running on another server with the following configuration:
docker-compose.yml
version: "3.7"
services:
  data-prepper:
    restart: unless-stopped
    image: opensearchproject/data-prepper:1
    volumes:
      - ./pipelines.yaml:/usr/share/data-prepper/pipelines.yaml
      - ./data-prepper-config.yaml:/usr/share/data-prepper/data-prepper-config.yaml
    networks:
      - apm_net
  otel-collector:
    restart: unless-stopped
    image: otel/opentelemetry-collector:0.55.0
    command: [ "--config=/etc/otel-collector-config.yml" ]
    volumes:
      - ./otel-collector-config.yml:/etc/otel-collector-config.yml
    ports:
      - "4317:4317"
    depends_on:
      - data-prepper
    networks:
      - apm_net
data-prepper-config.yaml
ssl: false
otel-collector-config.yml
receivers:
  otlp:
    protocols:
      grpc:
exporters:
  otlp/data-prepper:
    endpoint: http://data-prepper:21890
    tls:
      insecure: true
service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [otlp/data-prepper]
pipelines.yaml
entry-pipeline:
  delay: "100"
  source:
    otel_trace_source:
      ssl: false
  sink:
    - pipeline:
        name: "raw-pipeline"
    - pipeline:
        name: "service-map-pipeline"
raw-pipeline:
  source:
    pipeline:
      name: "entry-pipeline"
  prepper:
    - otel_trace_raw_prepper:
  sink:
    - opensearch:
        hosts: [ <AWS OPENSEARCH HOST> ]
        # IAM signing
        aws_sigv4: true
        aws_region: <AWS_REGION>
        index_type: "trace-analytics-raw"
service-map-pipeline:
  delay: "100"
  source:
    pipeline:
      name: "entry-pipeline"
  prepper:
    - service_map_stateful:
  sink:
    - opensearch:
        hosts: [ <AWS OPENSEARCH HOST> ]
        # IAM signing
        aws_sigv4: true
        aws_region: <AWS_REGION>
        index_type: "trace-analytics-service-map"
Data Prepper authenticates via fine-grained access control with the all_access role, and I'm able to see the OTel resources (indexes, index templates) that it generates when running.
With the above setup running, I'm able to see traces from the application in the Trace Analytics dashboard of OpenSearch, and upon clicking on an individual trace I can see a pie chart with one service. I don't see any errors in the otel-collector or in Data Prepper. In the Data Prepper logs I can also see records being sent to the OTel service map.
However, the services tab of Trace Analytics remains empty, and the OTel service-map index also remains empty.
I have been unable to figure out the reason behind this, even after going through the documentation, and any help is appreciated!

How to distinguish metrics from different services

I'm playing with OpenTelemetry and have the following setup:
Golang, docker-compose, 3 services, 1 standalone OpenTelemetry Collector, 1 Prometheus.
I collect some system metrics into the standalone OpenTelemetry Collector. The metrics are collected from 3 different services and have identical names. Prometheus then scrapes the data from the collector. The problem is that I can't distinguish metrics from different services in Prometheus, because all of the metrics have the same "instance" value, which is equal to the collector's host.
I know that I can add a label with the service's name to each metric record and then distinguish the metrics by that label, but I'm looking for another solution because it is not always possible to add the label to every metric. Maybe something like HTTP middleware, but for metrics, or maybe something at the infrastructure level.
The services are written in Golang, but I would be glad to see a solution in any other language.
otel-collector-config:
receivers:
  otlp:
    protocols:
      grpc:
      http:
exporters:
  prometheus:
    endpoint: otel-collector:8889
    const_labels:
      label1: value1
    send_timestamps: true
    metric_expiration: 5m
processors:
  batch:
service:
  pipelines:
    metrics:
      receivers: [ otlp ]
      processors: [ batch ]
      exporters: [ prometheus ]
Prometheus config:
scrape_configs:
  - job_name: 'otel-collector'
    scrape_interval: 5s
    static_configs:
      - targets: ['otel-collector:8889']
docker-compose:
version: "3.9"
services:
  service1:
    build:
      context: ./service1
      network: host
    environment:
      - TELEMETRY_COLLECTOR_ADDR=otel-collector:55681
    ports:
      - "8094:8080"
    expose:
      - "8080"
  service2:
    build:
      context: ./service2
      network: host
    environment:
      - TELEMETRY_COLLECTOR_ADDR=otel-collector:55681
    ports:
      - "8095:8080"
    expose:
      - "8080"
  service3:
    build:
      context: ./service3
      network: host
    environment:
      - TELEMETRY_COLLECTOR_ADDR=otel-collector:55681
    expose:
      - "8080"
    ports:
      - "8096:8080"
  prometheus:
    image: prom/prometheus:v2.26.0
    volumes:
      - ./prometheus.yaml:/etc/prometheus/prometheus.yml
    ports:
      - "9090:9090"
  otel-collector:
    image: otel/opentelemetry-collector:0.23.0
    command: [ "--config=/etc/otel-collector-config.yaml" ]
    expose:
      - "55681" # HTTP OTLP receiver
      - "8889"  # Prometheus exporter metrics
    volumes:
      - ./otel-collector-config.yaml:/etc/otel-collector-config.yaml
Update 1: I found that some new parameters were added to the exporter helper config (https://github.com/open-telemetry/opentelemetry-collector/tree/main/exporter/exporterhelper). One of them, resource_to_telemetry_conversion, is exactly what I need, but as far as I can see the prometheusexporter and prometheusremotewriteexporter don't support that field in their config.
The resource_to_telemetry_conversion option you mention has been part of the prometheusexporter since version 0.26.0 (issue #2498); it adds a service_name label derived from each instrumented service's resource settings, which lets you distinguish metrics from different services.
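For illustration, enabling it would look roughly like this in the otel-collector-config above; this is a sketch that assumes the collector image is bumped from 0.23.0 to 0.26.0 or later so the option actually exists in the exporter:
exporters:
  prometheus:
    endpoint: otel-collector:8889
    const_labels:
      label1: value1
    send_timestamps: true
    metric_expiration: 5m
    # Copy resource attributes (e.g. service.name set by each service's SDK)
    # onto every exported metric as labels such as service_name.
    resource_to_telemetry_conversion:
      enabled: true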

Spring-cloud-gateway Docker-compose

My goal is to implement Spring Cloud Gateway as a reverse proxy, which I eventually plan to use together with Keycloak for microservice security. The issue I am currently having is as follows:
Run a microservice in Docker without Spring Cloud Gateway.
Run Spring Cloud Gateway with default settings and a single route that redirects to a microservice that exists inside Docker.
This works as intended and redirects to the microservice when using localhost:8080. When I then include the gateway in my docker-compose and build the container, it says "this site can't be reached", but all the other services in the compose file are reachable via their ports. I need help determining why this is happening, and I suspect it is because of my docker-compose.yml file.
Here it is:
version: "3"
services:
  react-web:
    container_name: react-auth-demo
    build: "./ReactJS-Auth/"
    volumes:
      - "./app"
      - "/app/node_modules"
    ports:
      - 3000:3000
    environment:
      - NODE_ENV=development
      - CHOKIDAR_USEPOLLING=true
    networks:
      - demo-network
    depends_on:
      - spring-cloud-gateway
  keycloak:
    image: jboss/keycloak:8.0.1
    container_name: keycloak-service
    volumes:
      - //c/tmp:/tmp
    ports:
      - 8180:8080
    environment:
      - KEYCLOAK_PASSWORD=admin
      - KEYCLOAK_USER=admin
    networks:
      - demo-network
    depends_on:
      - spring-cloud-gateway
  spring-cloud-gateway:
    build: ./gateway-test
    container_name: gateway-test
    ports:
      - 6000:6000
    networks:
      - demo-network
networks:
  demo-network:
    driver: bridge
Here is the code for the gateway:
@Bean
public RouteLocator customRouteLocator(RouteLocatorBuilder builder) {
    return builder.routes()
        .route("1", r -> r
            .path("/**")
            .uri("http://192.168.99.100:3000/"))
        .build();
}
The request is as follows: http://192.168.99.100:6000/. This should redirect me to the react-web service.
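Not an authoritative fix, but one thing to consider: inside the compose network the gateway container resolves the React container by its service name (react-web) rather than by the Docker machine IP, and the gateway has to actually listen on the port that is published (6000, while Spring Boot defaults to 8080). A sketch of what the equivalent route could look like in the gateway's application.yml under those assumptions:
server:
  port: 6000   # matches the 6000:6000 mapping in docker-compose
spring:
  cloud:
    gateway:
      routes:
        - id: react-web
          # Resolve the React container by its compose service name
          uri: http://react-web:3000
          predicates:
            - Path=/**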
And lastly here is the link to the source code:
https://gitlab.com/LibreFoodPantry/modules/usermodule-bnm/Demo
This project is for an Independent Study in college, so all help and advice is very much appreciated, even if it doesn't relate to the issue at hand. Thanks.

How to fix `kafka: client has run out of available brokers to talk to (Is your cluster reachable?)` error

I am developing an application which reads a message off an SQS queue, does some work with that data, and publishes the result to a Kafka topic. In order to test locally, I'd like to set up a Kafka image in my Docker build. I am currently able to spin up the aws-cli, localstack, and my app's containers locally using docker-compose. Separately, I am able to spin up Kafka and ZooKeeper without a problem as well. I am unable to get my application to communicate with Kafka.
I've tried using two separate compose files, and also fiddled with the networks. Finally, I've referenced https://rmoff.net/2018/08/02/kafka-listeners-explained/.
Here is my docker-compose file:
version: '3.7'
services:
  localstack:
    image: localstack/localstack:latest
    container_name: localstack
    env_file: .env
    ports:
      # Localstack endpoints for various APIs. Format is localhost:container
      - '4563-4584:4563-4584'
      - '8080:8080'
    environment:
      - SERVICES=sns:4575,sqs:4576
      - DATA_DIR=/tmp/localstack/data
    volumes:
      # store data locally in 'localstack' folder
      - './localstack:/tmp/localstack'
    networks:
      - my_network
  aws:
    image: mesosphere/aws-cli
    container_name: aws-cli
    # copy local JSON_DATA folder contents into aws-cli container's app folder
    #volumes:
    #  - ./JSON_DATA:/app
    env_file: .env
    # bash entrypoint needed for multiple commands
    entrypoint: /bin/sh -c
    command: >
      " sleep 10;
      aws --endpoint-url=http://localstack:4576 sqs create-queue --queue-name input_queue;
      aws --endpoint-url=http://localstack:4575 sns create-topic --name input_topic;
      aws --endpoint-url=http://localstack:4575 sns subscribe --topic-arn arn:aws:sns:us-east-2:123456789012:example_topic --protocol sqs --notification-endpoint http://localhost:4576/queue/input_queue; "
    networks:
      - my_network
    depends_on:
      - localstack
  my_app:
    build: .
    image: my_app
    container_name: my_app
    env_file: .env
    ports:
      - '9000:9000'
    networks:
      - my_network
    depends_on:
      - localstack
      - aws
  zookeeper:
    image: confluentinc/cp-zookeeper:5.0.0
    container_name: zookeeper
    ports:
      - 2181:2181
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
    networks:
      - my_network
  kafka:
    image: confluentinc/cp-kafka:5.0.0
    ports:
      - 9092:9092
    depends_on:
      - zookeeper
    environment:
      # For more details see https://rmoff.net/2018/08/02/kafka-listeners-explained/
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: INSIDE://localhost:9092
      KAFKA_LISTENERS: INSIDE://localhost:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE
      KAFKA_CREATE_TOPICS: "output_topic:2:2"
    networks:
      - my_network
networks:
  my_network:
I would hope to see no errors when publishing to this topic. Instead, I'm getting:
kafka: client has run out of available brokers to talk to (Is your cluster reachable?)
Any ideas what I may be doing wrong? Thank you for your help.
You've made the broker resolvable only from within the Kafka container itself (or from your host to the container) by setting the listeners to localhost.
If you want another Docker service to be able to reach that container, you'll have to add <some protocol>://kafka:<some port> to the advertised listeners, and make the listeners bind to something other than localhost.
The protocol also has to be added to KAFKA_LISTENER_SECURITY_PROTOCOL_MAP.
FWIW, that blog post should cover all those bases.
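As an illustration only (not the answerer's exact configuration), the kafka service could be reworked along these lines, following the pattern from the linked post; the INSIDE listener is what my_app and other compose services would use via kafka:9092, while the OUTSIDE listener on 29092 is for clients on the host:
  kafka:
    image: confluentinc/cp-kafka:5.0.0
    ports:
      - 29092:29092   # only the host-facing listener needs to be published
    depends_on:
      - zookeeper
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      # Listen on all interfaces inside the container
      KAFKA_LISTENERS: INSIDE://0.0.0.0:9092,OUTSIDE://0.0.0.0:29092
      # Other compose services connect to kafka:9092; host clients use localhost:29092
      KAFKA_ADVERTISED_LISTENERS: INSIDE://kafka:9092,OUTSIDE://localhost:29092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE
      KAFKA_CREATE_TOPICS: "output_topic:2:2"
    networks:
      - my_network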

Start ElasticSearch in Wercker

We have a Ruby project where we are using Wercker for continuous integration.
We need to start an Elasticsearch service in order to run some integration tests.
Locally, we added the Elasticsearch configuration to the docker-compose file and everything runs smoothly:
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.5.1
    container_name: elasticsearch
    environment:
      - discovery.type=single-node
    ports:
      - "9200:9200"
      - "9300:9300"
In the wercker.yml file, we tried several things, but we cannot reach the Elasticsearch service.
Our wercker.yml contains:
services:
  - id: elasticsearch:6.5.1
    env:
    ports:
      - "9200:9200"
      - "9300:9300"
We get this kind of error when trying to use Elasticsearch in our tests:
Errno::EADDRNOTAVAIL: Failed to open TCP connection to localhost:9200 (Cannot assign requested address - connect(2) for "localhost" port 9200)
Do you have any idea what we are missing?
So, we found a solution:
In wercker.yml
services:
  - id: elasticsearch:6.5.1
    cmd: "/elasticsearch/bin/elasticsearch -Ediscovery.type=single-node"
And we added a step to check the connection:
build:
  steps:
    - script:
        name: Test elasticsearch connection
        code: curl http://elasticsearch:9200
