Where can I apply sourcemaps in my Loki Grafana flow, so that stacktraces are useful?

Where/how can I apply TypeScript & minification sourcemaps in my FluentBit → Loki → Grafana flow, so that stacktraces are useful?
Apologies if the question is daft - I am a back-end developer, but I want to enable my colleagues' React UI to push logging (using loglevel & loglevel-remote) into the same overall observability platform.
I found [dev.frontend-receiver] add sourcemap support · Issue #1295 · grafana/agent · GitHub on the topic, but nothing else so far.
I am looking to do this kind of thing:
https://docs.datadoghq.com/real_user_monitoring/guide/upload-javascript-source-maps/?tab=webpackjs
JS Source Maps - https://docs.logrocket.com/docs/stacktraces
but with Grafana and Loki obviously.
I may be asking the question wrong, as I have had no response on the Grafana Loki forum (https://community.grafana.com/t/where-can-i-apply-sourcemaps-in-my-loki-grafana-flow-so-that-stacktraces-are-useful/79436) - but if anyone could point me to the right place or the right way to phrase the question, it would be really helpful.
I was expecting to find some documentation for Grafana indicating how to use sourcemaps arising from TypeScript transpilation and from minification as part of my observability data flow, so that the stacktraces, once viewed in Grafana, are useful.
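One option, if no off-the-shelf answer turns up, is to do the remapping in a small processing step you control somewhere between the browser and Loki (which appears to be roughly what the grafana/agent issue above is proposing to build in). Below is a minimal, hedged Go sketch of the remapping itself, assuming the community github.com/go-sourcemap/sourcemap package; the map file name and frame position are placeholders, and you would still need to parse the line/column out of each logged stacktrace frame yourself.

package main

import (
    "fmt"
    "os"

    "github.com/go-sourcemap/sourcemap"
)

// remapFrame translates a position in the minified bundle back to the original
// TypeScript source using the bundle's .map file.
func remapFrame(mapPath string, genLine, genCol int) (string, error) {
    data, err := os.ReadFile(mapPath)
    if err != nil {
        return "", err
    }

    // Parse takes the source map URL (may be empty) and the raw map contents.
    smap, err := sourcemap.Parse("", data)
    if err != nil {
        return "", err
    }

    // Source returns the original file, symbol name, line and column.
    file, fn, line, col, ok := smap.Source(genLine, genCol)
    if !ok {
        return "", fmt.Errorf("no mapping for %d:%d", genLine, genCol)
    }
    return fmt.Sprintf("%s (%s:%d:%d)", fn, file, line, col), nil
}

func main() {
    // Hypothetical frame parsed from a minified stacktrace pushed by loglevel-remote.
    frame, err := remapFrame("bundle.js.map", 1, 4231)
    if err != nil {
        panic(err)
    }
    fmt.Println(frame)
}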

Related

Kibana + APM: Only one trace sample is shown

I have Elastic + Kibana + APM in Kubernetes with the official Helm charts.
Everything seems to be working correctly, except that I can't see all trace data. In the trace samples section, only one is ever shown:
There are definitely at least 2 requests in the current filter (shown in the other graphs)
The sample rate is definitely 1.0
The Elastic + Kibana + APM version is 7.13.0 (also tried 7.10.1 - exactly the same issue)
Crazy, but the solution was so simple. In order to see the other traces, I needed to click on the "Latency distribution" buckets:
It's a funny story to tell my teammates, including how many hours I spent figuring this out, and how I eventually figured it out (I had to reverse-engineer the JavaScript logic...).
Cheers to the Kibana UX experts.

Observability in Laravel Applications

There are three main pillars of observability in applications: metrics, traces, and logs. I would want my Laravel applications to be "observable" with respect to these.
Tools like Elasticsearch, Logstash and Kibana seem to be the industry standard, but I can't seem to find good tutorials on how to integrate them with Laravel, and my understanding of them is generally hazy.
So, my question is:
What observability tools do Laravel developers generally use?
If the option falls on the ELK stack, are there any great tutorials or guides on how to do this?
Kibana guides are a bit too complex for a feeble mind like mine. But I am willing to get a few nosebleeds while at it - if that's the only way.
The first and easiest thing to do, since you're running Laravel, is to install and configure the Elastic APM agent for PHP, which supports Laravel out of the box. This will take care of the "tracing" pillar.
Regarding metrics, you can install Metricbeat with the system module and the php_fpm module. This will take care of the "metrics" pillar.
Finally, for the "logs" pillar, you can install Filebeat with the nginx module to index the Nginx logs of your Laravel app.
Those three will allow you to observe your Laravel applications very easily.

How do I instrument my code for Splunk metrics?

I'm brand new to Splunk, having worked exclusively with Prometheus before. The one obvious thing I can't see from looking at the Splunk website is how, in my code, I create/expose a metric: whether I must provide an HTTP endpoint for consumption, call into some API to push values, etc. Further, I cannot see which languages Splunk provides libraries for, in order to aid instrumentation - I cannot see where all this low-level stuff is documented!
Can anyone help me understand how Splunk works, particularly how it compares to Prometheus?
Usually, programs write their normal log files and Splunk ingests those files so they can be searched and data extracted.
There are other ways to get data into Splunk, though. See https://dev.splunk.com/enterprise/reference for the SDKs available in a few languages.
You could write your metrics to collectd and then send them to Splunk. See https://splunkonbigdata.com/2020/05/09/metrics-data-collection-via-collectd-part-2/
You could write your metrics directly to Splunk using their HTTP Event Collector (HEC). See https://dev.splunk.com/enterprise/docs/devtools/httpeventcollector/
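To make the HEC route concrete, here is a hedged Go sketch that pushes one metric data point to a metrics index over HEC. The URL, token, index name and the exact JSON shape ("event": "metric" with metric_name and _value under fields) follow my reading of the HEC docs, so verify them against your Splunk version.

package main

import (
    "bytes"
    "encoding/json"
    "fmt"
    "net/http"
    "time"
)

// hecMetric mirrors the single-metric JSON format HEC accepts for metrics indexes.
type hecMetric struct {
    Time   int64                  `json:"time"`
    Event  string                 `json:"event"` // literally "metric" for metric payloads
    Index  string                 `json:"index,omitempty"`
    Fields map[string]interface{} `json:"fields"`
}

func main() {
    // Placeholders: your HEC endpoint, token and metrics index.
    const hecURL = "https://splunk.example.com:8088/services/collector"
    const hecToken = "YOUR-HEC-TOKEN"

    payload := hecMetric{
        Time:  time.Now().Unix(),
        Event: "metric",
        Index: "my_metrics",
        Fields: map[string]interface{}{
            "metric_name": "orders.processed", // the metric to record
            "_value":      42,                 // its value at this point in time
            "service":     "checkout",         // arbitrary extra dimensions
        },
    }

    body, err := json.Marshal(payload)
    if err != nil {
        panic(err)
    }
    req, err := http.NewRequest(http.MethodPost, hecURL, bytes.NewReader(body))
    if err != nil {
        panic(err)
    }
    req.Header.Set("Authorization", "Splunk "+hecToken)
    req.Header.Set("Content-Type", "application/json")

    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()
    fmt.Println("HEC responded with", resp.Status)
}

Note the contrast with Prometheus: nothing is scraped; your code (or collectd) pushes each value to Splunk over HTTPS.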

How to consume Google PubSub opencensus metrics using GoLang?

I am new to Google Pub/Sub, and I am using Go for the client library.
How can I see the OpenCensus metrics recorded by the google-cloud-go library?
I have already successfully published a message to Google Pub/Sub. Now I want to see these metrics, but I cannot find them in Google Stackdriver.
PublishLatency = stats.Float64(statsPrefix+"publish_roundtrip_latency", "The latency in milliseconds per publish batch", stats.UnitMilliseconds)
https://github.com/googleapis/google-cloud-go/blob/25803d86c6f5d3a315388d369bf6ddecfadfbfb5/pubsub/trace.go#L59
This is curious; I'm surprised to see these (machine-generated) APIs sprinkled with OpenCensus (Stats) integration.
I've not tried this but I'm familiar with OpenCensus.
One of OpenCensus' benefits is that it loosely couples the generation of e.g. metrics from their consumption. So, while the code defines the metrics (and views), I expect (!?) the API leaves it to you to choose which Exporter(s) you'd like to use and to configure them.
In your code, you'll need to import the Stackdriver exporter (and any other exporters you wish to use) and then follow these instructions:
https://opencensus.io/exporters/supported-exporters/go/stackdriver/#creating-the-exporter
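A rough Go sketch of that wiring, assuming the contrib Stackdriver exporter and the DefaultPublishViews slice that I believe the pubsub package exports (check the package docs for your client version; the project and topic IDs are placeholders):

package main

import (
    "context"
    "log"
    "time"

    "cloud.google.com/go/pubsub"
    "contrib.go.opencensus.io/exporter/stackdriver"
    "go.opencensus.io/stats/view"
)

func main() {
    ctx := context.Background()

    // Register the publish-side views defined by the pubsub package so that
    // its measurements are actually aggregated.
    if err := view.Register(pubsub.DefaultPublishViews...); err != nil {
        log.Fatalf("registering views: %v", err)
    }

    // Create the Stackdriver exporter; the project must have Monitoring enabled.
    exporter, err := stackdriver.NewExporter(stackdriver.Options{
        ProjectID: "your-gcp-project", // placeholder
    })
    if err != nil {
        log.Fatalf("creating Stackdriver exporter: %v", err)
    }
    defer exporter.Flush()

    view.RegisterExporter(exporter)
    view.SetReportingPeriod(60 * time.Second)

    // Publish as usual; the client records measurements against the registered views.
    client, err := pubsub.NewClient(ctx, "your-gcp-project")
    if err != nil {
        log.Fatalf("creating pubsub client: %v", err)
    }
    defer client.Close()

    result := client.Topic("your-topic").Publish(ctx, &pubsub.Message{Data: []byte("hello")})
    if _, err := result.Get(ctx); err != nil {
        log.Fatalf("publish: %v", err)
    }
}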
NOTE I encourage you to look at the OpenCensus Agent too, as this further decouples your code: you reference the generic OpenCensus Agent in your code and configure the agent to route e.g. metrics to e.g. Stackdriver.
For Stackdriver, you will need to configure the exporter with a GCP project ID, and that project will need to have Stackdriver Monitoring enabled (and configured). I've not used Stackdriver in some months, but this used to require a manual step too. The easiest way to check is to visit:
https://console.cloud.google.com/monitoring/?project=[[YOUR-PROJECT]]
If I understand the intent (!) correctly, I expect API calls will then record stats against the metrics and views defined in the code that you referenced.
Once metrics should be shipping to Stackdriver, the easiest way to confirm this is to query one of them using Stackdriver's Metrics Explorer:
https://console.cloud.google.com/monitoring/metrics-explorer?project=[[YOUR-PROJECT]]
You may wish to test this approach using the Prometheus exporter because it's simpler. After configuring the Prometheus exporter, when you run your code, it will create an HTTP server and you can curl the metrics that are being generated on:
http://localhost:8888/metrics
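A hedged sketch of the Prometheus variant; the exporter doubles as an http.Handler, so you mount and serve it yourself (the :8888 address is just a choice, and DefaultPublishViews is the same assumption as above):

package main

import (
    "log"
    "net/http"
    "time"

    "cloud.google.com/go/pubsub"
    "contrib.go.opencensus.io/exporter/prometheus"
    "go.opencensus.io/stats/view"
)

func main() {
    // Same view registration as in the Stackdriver sketch.
    if err := view.Register(pubsub.DefaultPublishViews...); err != nil {
        log.Fatalf("registering views: %v", err)
    }

    pe, err := prometheus.NewExporter(prometheus.Options{Namespace: "pubsub_demo"})
    if err != nil {
        log.Fatalf("creating Prometheus exporter: %v", err)
    }
    view.RegisterExporter(pe)
    view.SetReportingPeriod(10 * time.Second)

    // The exporter serves the /metrics page itself.
    http.Handle("/metrics", pe)
    log.Println("serving metrics on http://localhost:8888/metrics")
    log.Fatal(http.ListenAndServe(":8888", nil))
}

Once measurements have been recorded, curl http://localhost:8888/metrics should list the registered views.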
NOTE OpenCensus is being (!?) deprecated in favor of a replacement solution called OpenTelemetry.

Is it possible to send only application logs (not router logs) to a Heroku logging add-on?

I have a Heroku app that has a very high volume of router logs, which aren't overly useful to me (the default response code metrics are sufficient).
However, I would like to be able to capture and search application logs using one of the logging add-ons (Papertrail, timber.io, Logentries, Coralogix, logz.io – I don't really mind which one). By default those add-ons seem to capture all logs, including router logs, which means they are prohibitively expensive for me (due to the volume).
With the Heroku CLI, you can filter just application logs with heroku logs -t --source app. Are there any add-ons that allow you to apply such a filter before ingestion, so you only get charged for what you need?
disclaimer: I am one of the co-founders of Coralogix.
We believe you shouldn't pay for logs which do not interest you. This is why we developed several features to help you remove unneeded logs which clutter your environment and cost you money:
Regex-based block rules - block logs which match a certain regex pattern (or only allow logs matching a pattern).
Quota optimizer - lets you block logs based on component and severity. For example, if you are not interested in low-level logs, you can choose to block Debug on one app and Debug and Info on another.
Loggregation - our algorithms automatically recognize which logs belong to the same log template, clustering all log prototypes in a single view. You can find logs occurring more often than expected and taking up too much of your quota relative to the value you get from them. With Loggregation you can easily spot them and block them with a block rule; e.g. the Debug log at the top of the list takes almost 80% of the package!
Of course, there are many more features in the product which make it stand out in log analytics in general, and specifically for Heroku (with pre-defined Heroku alerts and a Kibana dashboard), but the features described above are the ones that can help with your specific question.
Hope this helps :)
