I am doing a proof of concept to ingest traces and metrics into AppDynamics without installing the AppDynamics agent. I have an application emitting Prometheus metrics and traces. I could not find any AppDynamics documentation about the OpenTelemetry Collector.
I could not find an exporter in https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/exporter either.
Can you please advise on how to use the OpenTelemetry Collector with AppDynamics?
There is a beta program running now for using AppDynamics with the OpenTelemetry Collector.
More details are available in the public docs: https://docs.appdynamics.com/display/PRO21/Ingest+OpenTelemetry+Trace+Data
AppDynamics has a fork of the contrib project that contains an exporter, but it isn't clear whether it has been finished.
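To make the setup concrete, here is a minimal Collector pipeline sketch. The exporter choice and endpoint are placeholders, not taken from AppDynamics docs; the beta documentation linked above defines the real ingest URL and the authentication header to add.

```yaml
receivers:
  otlp:
    protocols:
      grpc:

exporters:
  otlphttp:
    # Placeholder endpoint; the beta docs define the actual
    # ingest URL and the API-key header AppDynamics expects.
    endpoint: https://example.saas.appdynamics.com/otlp

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [otlphttp]
```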
Is there a way for me to track garbage collection of my Java application using Elastic APM and the associated Java APM agent?
I'm using Spring Boot, if that makes a difference.
Out-of-the-box I'm able to see the heap and non-heap memory utilization, but I'm not sure if there is also a way to view garbage collection.
The JVM GC metrics tracked right now are jvm.gc.alloc, jvm.gc.time, and jvm.gc.count.
If you are looking for additional ones, which ones would those be? And could you open an issue with the details?
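If you want to see raw GC numbers from the JVM itself, independent of what the APM agent reports, the standard library exposes them through `GarbageCollectorMXBean`. A minimal sketch:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcStats {
    public static void main(String[] args) {
        // One bean per collector, e.g. "G1 Young Generation" / "G1 Old Generation".
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.println(gc.getName()
                    + " collections=" + gc.getCollectionCount()
                    + " timeMs=" + gc.getCollectionTime());
        }
    }
}
```

These are the same counters the `jvm.gc.count` and `jvm.gc.time` metrics are built on, so this is a quick way to check what the agent should be reporting.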
Please use the "Import saved objects" option with this dashboard: https://github.com/elastic/apm-contrib/blob/master/apm-agent-java/dashboards/java_metrics_dashboard_7.x.json
I have an ECS cluster running Fargate instances with Spring Boot apps and want to enable tracing with the fewest code changes. These are the two approaches I started looking at:
Use AWS X-Ray: steps -> add dependencies, add AWSXRayServletFilter, run the X-Ray daemon in a separate container.
Use Spring Cloud Sleuth: steps -> add a dependency and a property, integrate with X-Ray.
The second approach saves a number of steps in modifying your code, but the issue is I couldn't find any good doc on integrating Spring Cloud Sleuth with X-Ray. Can anyone point me in the right direction?
I tried reading a number of docs, including: https://cloud.spring.io/spring-cloud-sleuth/spring-cloud-sleuth.html
I came across this when looking for a solution for option two. AFAIK, you still have to use the X-Ray daemon. I had to look across multiple GitHub repos and issues to solve the problem, so I am providing the solution here.
I used Gradle for my solution but this can be easily translated to Maven as well.
Add the BOM for Spring Cloud:
dependencyManagement {
    imports {
        mavenBom("org.springframework.cloud:spring-cloud-dependencies:2021.0.3")
    }
}
Add the following dependencies to the project.
implementation 'org.springframework.cloud:spring-cloud-starter-sleuth'
implementation 'org.springframework.cloud:spring-cloud-sleuth-zipkin'
implementation 'io.zipkin.aws:zipkin-reporter-xray-udp:0.23.4'
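Since this translates easily to Maven, here is the equivalent pom.xml fragment for the same coordinates (a sketch, untested):

```xml
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.springframework.cloud</groupId>
      <artifactId>spring-cloud-dependencies</artifactId>
      <version>2021.0.3</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>
<dependencies>
  <dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-sleuth</artifactId>
  </dependency>
  <dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-sleuth-zipkin</artifactId>
  </dependency>
  <dependency>
    <groupId>io.zipkin.aws</groupId>
    <artifactId>zipkin-reporter-xray-udp</artifactId>
    <version>0.23.4</version>
  </dependency>
</dependencies>
```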
Then add a configuration class to define the bean used for reporting to the X-Ray daemon.
import org.springframework.cloud.sleuth.zipkin2.ZipkinAutoConfiguration;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import zipkin2.Span;
import zipkin2.reporter.Reporter;
import zipkin2.reporter.xray_udp.XRayUDPReporter;

@Configuration
public class TracingConfiguration {
    @Bean(ZipkinAutoConfiguration.REPORTER_BEAN_NAME)
    Reporter<Span> reporter() {
        return XRayUDPReporter.create();
    }
}
Define the propagation type as aws for Sleuth as per the documentation.
spring.sleuth.propagation.type=aws
I haven't tried it yet, but from the documentation you can combine the following:
An amazon/aws-xray-daemon
zipkin-aws with the experimental X-Ray storage. There is a Docker image for zipkin-aws; you need to point it at the X-Ray daemon. It will run as a Zipkin server listening on port 9411.
Then you use Spring Cloud Sleuth's instrumentation and AsyncZipkinSender.
With this approach, you decouple yourself from AWS, as long as you have a different Zipkin server.
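An untested sketch of that wiring as a docker-compose fragment. The image names come from the answer above; the environment variable names are assumptions that should be verified against the zipkin-aws README:

```yaml
# Sketch only: env var names are assumptions, check the zipkin-aws docs.
services:
  xray-daemon:
    image: amazon/aws-xray-daemon
  zipkin:
    image: openzipkin/zipkin-aws
    environment:
      STORAGE_TYPE: xray                          # assumed: selects the experimental X-Ray storage
      AWS_XRAY_DAEMON_ADDRESS: xray-daemon:2000   # assumed: daemon's default UDP port
    ports:
      - "9411:9411"                               # standard Zipkin listen port, per the answer
```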
Currently the AWS X-Ray SDK doesn't have an integration with Spring Cloud Sleuth. If you want to use AWS X-Ray, the first approach would be the best way to do it.
Looking for a simple integration path between Elasticsearch and Apache Storm. Support for this is included in the elasticsearch-hadoop library, but it brings tons of dependencies on the Hadoop stack, from Hive to Cascading, which I simply don't need. Has anyone out there succeeded in this integration without bringing in elasticsearch-hadoop? Thanks.
In my project we're using the RabbitMQ river for indexing the Storm output. It's a very efficient and convenient way to write to Elasticsearch: you basically put the messages on the queue and the river does the rest. If something gets stuck, the data is simply buffered in the queue.
So I would say: use this river approach for writing, and the Elasticsearch Java API for reading, as Kit Menke suggests (or the Jest client, which we've found cool; it offers an async API based on ApacheHttpAsyncClient, though we're not reading from Elasticsearch in the Storm topology but in different services).
I have a web service written in Scala and built on top of Twitter's Finagle RPC system. Now we are hitting some performance issues. We have external API components and a database layer.
I am planning of installing Zipkin in order to have a service level tracing system. This will allow me to know where the bottleneck is at the service level.
I am wondering, though, if there are frameworks out there to monitor performance inside my application layer. The application is a suite of filters applied consecutively to my data, and I would like to know which filters take time to compute. I heard about JVM profiling, but it seems a little overkill for what I want to do. What would you recommend? Thanks for your help.
Well, before digging into JVM internals or setting up all the infrastructure Zipkin needs, you could simply start by measuring some application-level metrics.
You could try the Metrics library via this Scala API.
Basically you manually set up counters and gauges at specific points of your application that will help you diagnose your bottleneck problem.
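The idea of timing each filter can be sketched with nothing but the standard library; the Metrics library gives you richer primitives (timers, histograms, reporters), but the core is just accumulating elapsed time per named filter. `FilterTimer` here is a hypothetical name, not part of any library:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.UnaryOperator;

public class FilterTimer {
    private final Map<String, Long> totalNanos = new LinkedHashMap<>();

    // Runs a filter and adds its elapsed time to a per-filter counter.
    public <T> T apply(String name, UnaryOperator<T> filter, T input) {
        long start = System.nanoTime();
        T result = filter.apply(input);
        totalNanos.merge(name, System.nanoTime() - start, Long::sum);
        return result;
    }

    public Map<String, Long> totals() {
        return totalNanos;
    }

    public static void main(String[] args) {
        FilterTimer timer = new FilterTimer();
        String data = timer.apply("trim", String::trim, "  hello  ");
        data = timer.apply("upper", String::toUpperCase, data);
        System.out.println(data);                   // HELLO
        System.out.println(timer.totals().keySet()); // which filters ran, in order
    }
}
```

Dumping `totals()` periodically (or exposing it via a stats endpoint) is usually enough to spot which filter dominates, without a full profiler.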
I'm developing my first application for the Play! 2.0 framework. It feels really nice, and I love Akka actors, but one thing I'm looking for is something I could hook performance metrics onto. What I'm looking for is something like the Rails notifications API, or what Twitter's Finagle offers (look for the curl command used to retrieve stats.txt).
Is there anything baked into Play! 2.0, or should I start cooking something of my own? If there isn't anything ready, any pointers and tips are welcome!
The official way of getting Akka performance metrics is the following: http://typesafe.com/products/console
While I have not used it in a Play! Scala app, I've been a big fan of New Relic for all the performance metrics in production, and I don't think it'd be any different with a Play! war. It will give you basic metrics with a free account. For most of my Scala apps I use the New Relic annotations to collect metrics around a particular method I want to track in detail; I've seen New Relic give much more detailed results with Java than with Scala, hence the annotations.
There is a statsd play2 plugin: https://github.com/typesafehub/play-plugins/tree/master/statsd
Or Codahale's Metrics Play2 plugin: https://github.com/kenshoo/metrics-play
I'm looking at the Metrics one to provide JMX data at the moment, although if statsd is an option, you might want to look at that first.
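If you go the statsd route, the plugin is configured in application.conf; the key names below are assumptions based on memory of the plugin's README and should be verified against the repo linked above:

```
# Hypothetical key names; check the play-plugins/statsd README
statsd.enabled=true
statsd.host=localhost
statsd.port=8125
```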