Can statsd handle negative timings? - statsd

I am currently planning to use statsd to track the performance difference between two procedures.
This means that we will see negative timings.
Is statsd able to handle negative timings?

Reading the documentation and code, I see nothing that forbids negative timings. You might have problems with timer histograms, though, since a lower bound of 0 is assumed there.
However, I strongly suggest using separate timers for the two procedures, sending them to Graphite and computing the difference with the diffSeries function.
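A minimal sketch of that approach with the Python statsd client (host, port, metric names and the Graphite paths are illustrative; the stats.timers prefix depends on your statsd graphite backend settings):

import statsd

client = statsd.StatsClient('localhost', 8125)   # default statsd UDP host/port

# report each procedure's duration (in milliseconds) under its own timer
client.timing('procedures.old.duration', 482)
client.timing('procedures.new.duration', 377)

Then plot the difference in Graphite with something like diffSeries(stats.timers.procedures.old.duration.mean, stats.timers.procedures.new.duration.mean), which keeps each timer non-negative while still showing the (possibly negative) gap between them.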

Related

Recommended way to measure http requests

From the OTel Metrics specification, Counter is the recommended instrument to measure the number of requests completed. This can later be used to calculate the throughput rate.
Example uses for Counter:
- count the number of bytes received
- count the number of requests completed
However, OTel's semantic conventions do not include a metric for such a use case, with http.server.active_requests being the closest thing.
Name: http.server.active_requests
Instrument: Asynchronous UpDownCounter
Unit: requests
Unit (UCUM): {requests}
Description: measures the number of concurrent HTTP requests that are currently in-flight
Granted, the Metrics Semantic Conventions are still "Experimental", but this seems like such a basic use case which is even mentioned in the API spec.
My questions are:
Is the counting of requests via a counter not recommended?
What is the best way to monitor throughput?
Thanks in advance.
Not a spec creator, but the number of requests is indeed a typical example for counters and is recommended pretty much everywhere, e.g. see the Prometheus docs.
Why it is not stated in the semantic conventions spec is hard to say. But yes, it is very experimental; there are 400+ open issues in the whole thing, and you may create another one :)
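For what it's worth, a minimal sketch of counting completed requests with the OTel Python API (the metric name and attributes below are illustrative placeholders, not taken from the semantic conventions, and a configured MeterProvider is assumed):

from opentelemetry import metrics

meter = metrics.get_meter("my.service")

request_counter = meter.create_counter(
    "http.server.request.count",   # illustrative name, not a semconv metric
    unit="{request}",
    description="Number of completed HTTP requests",
)

# on every completed request:
request_counter.add(1, {"http.route": "/users", "http.status_code": 200})

Throughput is then just the rate over this counter, computed in the backend (e.g. with rate() in Prometheus).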

How to determine accurate request count in a time range with Spring Boot + Prometheus + Grafana

I just started trying to integrate Micrometer, Prometheus and Grafana into my microservices. At first glance it is very easy to use and there are many existing dashboards you can rely on. But the more I test, the more confusing it gets. Maybe I don't understand the main idea behind this technology stack.
I would like to start my custom Grafana dashboard by showing the number of requests per endpoint for the selected time range (as a single stat), but I am not able to find the right query for that (and I am not sure it exists).
I tried different queries:
http_server_requests_seconds_count{uri="/users"}
This query always shows the current total. For example, if I sent 10 requests 30 minutes ago, it will also return the value 10 when I change the time range to the last 5 minutes (even though no request entered the system during the last 5 minutes).
When I am using
increase(http_server_requests_seconds_count{uri="/users"}[$__range])
the query does not return an accurate value, but something close to the actual request count. At least it works for a time range that doesn't include new incoming requests; in that case the query returns 0.
So my question is: is there a way to use this technology stack to get the number of new requests for the selected period of time?
For the sake of performance when operating with millions of time series, many Prometheus functions show approximate and/or interpolated values. For example, the increase() function is basically a per-second rate() multiplied by the number of seconds in the interval. With such a formula and possibly missing data points, an accurate result is the exception rather than the norm.
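To make that relationship concrete, over a 5-minute window the following two queries should return roughly the same number (300 being the number of seconds in 5 minutes; the metric name is the one from the question):

increase(http_server_requests_seconds_count{uri="/users"}[5m])
rate(http_server_requests_seconds_count{uri="/users"}[5m]) * 300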
The reason why it is so is that Prometheus trades accuracy for performance and reliability. It doesn't really matter if your server's actual CPU usage is 86.3% instead of 86.4%, but it does matter whether you can get this information instantly. Prometheus even has this statement in its docs:
Prometheus values reliability. You can always view what statistics are available about your system, even under failure conditions. If you need 100% accuracy, such as for per-request billing, Prometheus is not a good choice as the collected data will likely not be detailed and complete enough. In such a case you would be best off using some other system to collect and analyze the data for billing, and Prometheus for the rest of your monitoring.
That being said, if you really need accurate values consider using something else. You can for example store logs and count lines (Grafana Loki, The Elastic Stack), or maybe write and retrieve this information from a traditional database with your own solution.

Is Tensorflow Dataset API slower than Queues?

I replaced the CIFAR-10 preprocessing pipeline in the project with the Dataset API approach and it resulted in a performance decrease of about 10-20%.
Preprocessing is rather standard:
- read image from disk
- make random crop and flip
- shuffle, batch
- feed to the model
Overall I see that batch processing is now 15% faster, but every once in a while (or, more precisely, whenever I reinitialize the dataset or expect reshuffling) the batch is blocked for a long time (up to 30 sec), which adds up to slower epoch-over-epoch processing.
This behaviour seems to have something to do with internal hashing. If I reduce N in ds.shuffle(buffer_size=N), the delays are shorter but proportionally more frequent. Removing shuffle altogether results in delays as if buffer_size were set to the dataset size.
Can somebody explain the internal logic of the Dataset API when it comes to reading/caching? Is there any reason at all to expect the Dataset API to work faster than manually created queues?
I am using TF 1.3.
If you implement the same pipeline using the tf.data.Dataset API and using queues, the performance of the Dataset version should be better than the queue-based version.
However, there are a few performance best practices to observe in order to get the best performance. We have collected these in a performance guide for tf.data. Here are the main issues:
Prefetching is important: the queue-based pipelines prefetch by default and the Dataset pipelines do not. Adding dataset.prefetch(1) to the end of your pipeline will give you most of the benefit of prefetching, but you might need to tune this further.
The shuffle operator has a delay at the beginning, while it fills its buffer. The queue-based pipelines shuffle a concatenation of all epochs, which means that the buffer is only filled once. In a Dataset pipeline, this would be equivalent to dataset.repeat(NUM_EPOCHS).shuffle(N). By contrast, you can also write dataset.shuffle(N).repeat(NUM_EPOCHS), but this needs to restart the shuffling in each epoch. The latter approach is slightly preferable (and truer to the definition of SGD, for example), but the difference might not be noticeable if your dataset is large.
We are adding a fused version of shuffle-and-repeat that doesn't incur the delay, and a nightly build of TensorFlow will include the custom tf.contrib.data.shuffle_and_repeat() transformation that is equivalent to dataset.shuffle(N).repeat(NUM_EPOCHS) but doesn't suffer the delay at the start of each epoch.
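As a rough sketch of how the prefetching and fused shuffle-and-repeat suggestions fit together on a TF 1.x nightly (filenames, N, NUM_EPOCHS, parse_fn and batch_size are placeholders for your own pipeline):

import tensorflow as tf

dataset = tf.data.TFRecordDataset(filenames)
# fused shuffle-and-repeat: the shuffle buffer is not drained and refilled at each epoch boundary
dataset = dataset.apply(tf.contrib.data.shuffle_and_repeat(buffer_size=N, count=NUM_EPOCHS))
dataset = dataset.map(parse_fn).batch(batch_size)
dataset = dataset.prefetch(1)   # overlap input preprocessing with model execution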
Having said this, if you have a pipeline that is significantly slower when using tf.data than the queues, please file a GitHub issue with the details, and we'll take a look!
The suggested things didn't solve my problem back in the day, but I would like to add a couple of recommendations for those who don't want to learn about queues and still want to get the most out of the TF data pipeline:
Convert your input data into TFRecord (as cumbersome as it might be)
Use recommended input pipeline format
files = tf.data.Dataset.list_files(data_dir)                 # glob the TFRecord shards
ds = tf.data.TFRecordDataset(files, num_parallel_reads=32)   # parallelize disk IO
ds = (ds.shuffle(10000)
        .repeat(EPOCHS)
        .map(parser_fn, num_parallel_calls=64)               # parallelize the parser function
        .batch(batch_size))
ds = ds.prefetch(2)                                          # keep batches ready ahead of the consumer
Where you have to pay attention to 3 main components:
num_parallel_reads=32 to parallelize disk IO operations
num_parallel_calls=64 to parallelize calls to parser function
prefetch(2)

StatsD and complex systems/applications

StatsD has been around for some years now, thanks to Etsy and Flickr. I have recently stumbled upon it and have been 'playing' with it. There are several reasons that make me love it.
I wonder if somebody is using it along large and heavily used systems and has some feedback on it? How is StatsD working out for your cases?
Statsd works well up to about 20k UDP packets per second, but starts to drop metrics beyond that because it cannot process them fast enough. For some metrics workloads accuracy is required, so sampling is not an option, and it can be pretty easy to eat up this 20k/sec budget.
There are various other statsd implementations that have better performance. One of them is https://github.com/github/brubeck which claims it can process up to 4 million metrics / sec. YMMV, but I've been using brubeck in production and it can handle way more load than statsd can.

Why use statsd when graphite's Carbon aggregator can do the same job?

I have been exploring the Graphite graphing tool for showing metrics from multiple servers, and it seems that the 'recommended' way is to send all metrics data to StatsD first. StatsD aggregates the data and sends it to graphite (or rather, Carbon).
In my case, I want to do simple aggregations like sum and average on metrics across servers and plot that in graphite. Graphite comes with a Carbon aggregator which can do this.
StatsD does not even provide aggregation of the kind I am talking about.
My question is - should I use statsd at all for my use case? Anything I am missing here?
StatsD operates over UDP, which removes the risk of carbon-aggregator.py being slow to respond and introducing latency in your application. In other words, loose coupling.
StatsD supports sampling of inbound metrics, which is useful when you don't want your aggregator to take 100% of all data points to compute descriptive statistics. For high-volume code sections, it is common to use 0.5%-1% sample rates so as to not overload StatsD.
StatsD has broad client-side support.
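As an illustration of the sampling point above, most statsd clients let you set a sample rate per call; a sketch with the Python client (metric name and rate are made up):

import statsd

client = statsd.StatsClient('localhost', 8125)

# only about 1% of calls actually emit a UDP packet; statsd scales the count back up by 1/rate
client.incr('checkout.hot_path.hits', rate=0.01)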
tldr: you will probably want statsd (or carbon-c-relay) if you ever want to look at the server-specific sums or averages.
carbon aggregator is designed to aggregate values from multiple metrics together into a single output metric, typically to increase graph rendering performance. statsd is designed to aggregate multiple data points in a single metric, because otherwise graphite only stores the last value reported in the minimum storage resolution.
statsd example:
assume that your graphite storage-schemas.conf file has a minimum retention of 10 seconds (the default) and your application is sending approximately 100 data points every 10 seconds to services.login.server1.count with a value of 1. without statsd, graphite would only store the last count received in each 10-second bucket; after the 100th message is received, the other 99 data points would have been thrown out. however, if you put statsd between your application and graphite, then it will sum all 100 datapoints together before sending the total to graphite. so, without statsd your graph only indicates whether a login occurred during the 10-second interval. with statsd, it indicates how many logins occurred during that interval.
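concretely, the traffic looks roughly like this (statsd line protocol on the way in, graphite plaintext protocol on the way out; the exact stats.* prefix depends on your statsd graphite backend settings, and the timestamp is a placeholder):

what the application sends, about 100 times per 10-second flush interval:
services.login.server1.count:1|c
what statsd flushes to graphite as a single summed data point:
stats.counters.services.login.server1.count.count 100 1234567890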
carbon aggregator example: assume you have 200 different servers reporting 200 separate metrics (services.login.server1.response.time, services.login.server2.response.time, etcetera). on your operations dashboard you show a graph of the average across all servers using this graphite query: weightedAverage(services.login.server*.response.time, services.login.server*.response.count, 2). unfortunately, rendering this graph takes 10 seconds. to solve this problem, you can add a carbon aggregator rule to pre-calculate the average across all your servers and store the value in a new metric. now you can update your dashboard to simply pull a single metric (e.g. services.login.response.time). the new metric renders almost instantly.
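such a rule lives in aggregation-rules.conf and would look something like this (the 60-second aggregation frequency is just an illustration):

services.login.response.time (60) = avg services.login.*.response.time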
side notes:
the aggregation rules in storage-aggregation.conf apply to all storage intervals in storage-schemas.conf except the first (smallest) retention period for each retention string. it is possible to use carbon-aggregator to aggregate data points within a metric for that first retention period. unfortunately, aggregation-rules.conf uses "glob" patterns rather than regex patterns. so you need to add a separate aggregation-rules.conf file entry for every path depth and aggregation type. the advantage of statsd is that the client sending the metric can specify the aggregation type rather than encoding it in the metric path. that gives you the flexibility to add a new metric on the fly regardless of metric path depth. if you wanted to configure carbon-aggregator to do statsd-like aggregation automatically when you add a new metric, your aggregation-rules.conf file would look something like this:
<n1>.avg (10)= avg <n1>.avg$
<n1>.count (10)= sum <n1>.count$
<n1>.<n2>.avg (10)= avg <n1>.<n2>.avg$
<n1>.<n2>.count (10)= sum <n1>.<n2>.count$
<n1>.<n2>.<n3>.avg (10)= avg <n1>.<n2>.<n3>.avg$
<n1>.<n2>.<n3>.count (10)= sum <n1>.<n2>.<n3>.count$
...
<n1>.<n2>.<n3> ... <n99>.count (10)= sum <n1>.<n2>.<n3> ... <n99>.count$
notes: the trailing "$" is not needed in graphite 0.10+ (currently pre-release); see the relevant patch on github and the standard documentation on aggregation rules.
the weightedAverage function is new in graphite 0.10, but generally the averageSeries function will give a very similar number as long as your load is evenly balanced. if you have some servers that are both slower and service fewer requests or you are just a stickler for precision, then you can still calculate a weighted average with graphite 0.9. you just need to build a more complex query like this:
divideSeries(sumSeries(multiplySeries(a.time,a.count), multiplySeries(b.time,b.count)),sumSeries(a.count, b.count))
if statsd is run on the client box this also reduces network load. although, in theory, you could run carbon-aggregator on the client side too. however, if you use one of the statsd client libraries, you can also use sampling to reduce the load on your application machine's cpu (e.g. creating loopback udp packets). furthermore, statsd can automatically perform multiple different aggregations on a single input metric (sum, mean, min, max, etcetera)
if you use statsd on each app server to aggregate response times, and then re-aggregate those values on the graphite server using carbon aggregator, you end up with an average response time weighted by app server rather than request. obviously, this only matters for aggregating using a mean or top_90 aggregation rule, and not min, max or sum. also, it only matters for mean if your load is unbalanced. as an example: assume you have a cluster of 100 servers, and suddenly 1 server is sent 99% of the traffic. consequently, the response times quadruple on that 1 server, but remain steady on the other 99 servers. if you use client side aggregation, your overall metric would only go up about 3%. but if you do all your aggregation in a single server-side carbon aggregator, then your overall metric would go up by about 300%.
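to spell out the arithmetic with illustrative numbers: say the steady response time is 100ms on every server. request-weighted (a single server-side aggregator sees every data point): 0.99 x 400ms + 0.01 x 100ms = 397ms, roughly a 300% increase. server-weighted (per-server statsd means re-averaged by carbon aggregator): (1 x 400ms + 99 x 100ms) / 100 = 103ms, roughly a 3% increase.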
carbon-c-relay is essentially a drop-in replacement for carbon-aggregator written in c. it has improved performance and regex-based matching rules. the upshot being that you can do both statsd-style datapoint aggregation and carbon-relay style metric aggregation and other neat stuff like multi-layered aggregation all in the same simple regex-based config file.
if you use the cyanite back-end instead of carbon-cache, then cyanite will do the intra-metric averaging for you in memory (as of version 0.5.1) or at read time (in the version <0.1.3 architecture).
If the Carbon aggregator offers everything you need, there is no reason not to use it. It has two basic aggregation functions (sum and average), and indeed these are not covered by StatsD. (I'm not sure about the history, but maybe the Carbon aggregator already existed and the StatsD authors did not want to duplicate features?) Receiving data via UDP is also supported by Carbon, so the only thing you would miss would be the sampling, which does not matter if you aggregate by averaging.
StatsD supports different metric types by adding extra aggregate values (e.g. for timers: mean, lower, upper and upper Xth percentile, ...). I like them, but if you don't need them, the Carbon aggregator is a good way to go too.
I have been looking at the source code of the Carbon aggregator and StatsD (and Bucky, a StatsD implementation in Python), and they are all so simple, that I would not worry about resource usage or performance for either choice.
Looks like carbon aggregator and statsd support disjoint sets of features:
statsd supports rate calculation and summation but does not support averaging values
carbon aggregator supports averaging but does not support rate calculation.
Because graphite has a minimum resolution, you cannot save two different values for the same metric during the defined interval. StatsD solves this problem by pre-aggregating them: instead of saying "1 user registered now" and "1 user registered now", it says "2 users registered".
The other reason is performance because:
You send data to StatsD via UDP, which is a fire-and-forget, stateless protocol and therefore much faster
Etsy's StatsD implementation is in NodeJS, which also increases the performance a lot.
