Spring Boot Actuator 'http.server.requests' metric MAX time

I have a Spring Boot application and I am using Spring Boot Actuator and Micrometer in order to track metrics about my application. I am specifically concerned about the 'http.server.requests' metric and the MAX statistic:
{
  "name": "http.server.requests",
  "measurements": [
    {
      "statistic": "COUNT",
      "value": 2
    },
    {
      "statistic": "TOTAL_TIME",
      "value": 0.079653001
    },
    {
      "statistic": "MAX",
      "value": 0.032696019
    }
  ],
  "availableTags": [
    {
      "tag": "exception",
      "values": [
        "None"
      ]
    },
    {
      "tag": "method",
      "values": [
        "GET"
      ]
    },
    {
      "tag": "status",
      "values": [
        "200",
        "400"
      ]
    }
  ]
}
I suppose the MAX statistic is the maximum execution time of a request (since I have made two requests, it is the time of the slower of the two).
Whenever I filter the metric by any tag, like localhost:9090/actuator/metrics/http.server.requests?tag=status:200, I get:
{
  "name": "http.server.requests",
  "measurements": [
    {
      "statistic": "COUNT",
      "value": 1
    },
    {
      "statistic": "TOTAL_TIME",
      "value": 0.029653001
    },
    {
      "statistic": "MAX",
      "value": 0.0
    }
  ],
  "availableTags": [
    {
      "tag": "exception",
      "values": [
        "None"
      ]
    },
    {
      "tag": "method",
      "values": [
        "GET"
      ]
    }
  ]
}
I am always getting 0.0 as the max time. What is the reason for this?

What does MAX represent (MAX discussion)
MAX represents the maximum time taken to execute a single request to the endpoint.
Analysis for /user/asset/getAllAssets:
COUNT  TOTAL_TIME  MAX
5      115         17
6      122         17   (execution time = 122 - 115 = 7; MAX stays 17 from an earlier request)
7      131         17   (execution time = 131 - 122 = 9)
8      187         56   (execution time = 187 - 131 = 56; new maximum)
9      204         56   (execution time = 204 - 187 = 17; MAX remains 56 from now on)
Will MAX be 0 if there are only a few requests (or one request) to a particular endpoint?
No, the number of requests to a particular endpoint does not affect MAX (see the image from Spring Boot Admin).
When will MAX be 0?
There is a timer which resets the value to 0. When the endpoint has not been called or executed for some time, the timer sets MAX to 0. Here the approximate timer value is 2 to 2.5 minutes (120 to 150 seconds).
DistributionStatisticConfig has .expiry(Duration.ofMinutes(2)), which resets the measurement to 0 if no request has been made in the last 2 minutes (120 seconds).
Methods such as public TimeWindowMax(Clock clock, ...) and private void rotate(), along with the Clock interface, implement this behaviour. You may see the implementation here.
How did I determine the timer value?
I took 6 samples (executed the same endpoint 6 times) and measured the time difference between calling the endpoint and the moment MAX was set back to zero.
The MAX property belongs to the enum Statistic, which is used by Measurement (a Measurement gives us COUNT, TOTAL_TIME, and MAX):
public static final Statistic MAX
The maximum amount recorded. When this represents a time, it is
reported in the monitoring system's base unit of time.
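For reference, the same three statistics can also be read programmatically from the registry. A minimal sketch, assuming a MeterRegistry is available for injection (the class name is mine):

import java.util.concurrent.TimeUnit;

import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.Timer;

public class RequestStatsReader {

    // Reads the same COUNT / TOTAL_TIME / MAX triple that the actuator
    // endpoint exposes for http.server.requests.
    public static void print(MeterRegistry registry) {
        Timer timer = registry.find("http.server.requests").timer();
        if (timer == null) {
            return; // no requests recorded yet
        }
        System.out.println("COUNT      = " + timer.count());
        System.out.println("TOTAL_TIME = " + timer.totalTime(TimeUnit.SECONDS) + " s");
        System.out.println("MAX        = " + timer.max(TimeUnit.SECONDS) + " s");
    }
}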
Notes:
This is the case for the metric of a particular endpoint (here /actuator/metrics/http.server.requests?tag=uri:/user/asset/getAllAssets).
For the generalized metric /actuator/metrics/http.server.requests, MAX for some endpoints will be set back to 0 due to the timer. In my view, MAX for http.server.requests behaves the same as for a particular endpoint.
UPDATE
The documentation has been updated for MAX:
NOTE: Max for basic DistributionSummary implementations such as CumulativeDistributionSummary and StepDistributionSummary is a time window max (TimeWindowMax). This means its value is the maximum value during a time window. When the time window ends, it is reset to 0 and a new time window starts. The time window size is the step size of the meter registry unless expiry in DistributionStatisticConfig is explicitly set to another value.
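If the 2-minute window is shorter than your scraping interval, the expiry can be widened with a MeterFilter. A minimal sketch, assuming Spring Boot picks the bean up through its Micrometer auto-configuration (the 10-minute value is only an example):

import java.time.Duration;

import io.micrometer.core.instrument.Meter;
import io.micrometer.core.instrument.config.MeterFilter;
import io.micrometer.core.instrument.distribution.DistributionStatisticConfig;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class MetricsConfig {

    @Bean
    public MeterFilter widenMaxWindow() {
        return new MeterFilter() {
            @Override
            public DistributionStatisticConfig configure(Meter.Id id, DistributionStatisticConfig config) {
                if (id.getName().equals("http.server.requests")) {
                    // keep MAX for 10 minutes instead of the 2-minute default
                    return DistributionStatisticConfig.builder()
                            .expiry(Duration.ofMinutes(10))
                            .build()
                            .merge(config);
                }
                return config;
            }
        };
    }
}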

You can see the individual metrics by using ?tag=uri:{endpoint} as listed under availableTags in the response of the root /actuator/metrics/http.server.requests call. The measurement values are:
COUNT: The number of calls recorded.
TOTAL_TIME: The sum of the times recorded, reported in the monitoring system's base unit of time.
MAX: The maximum amount recorded. When this represents a time, it is reported in the monitoring system's base unit of time.
As given here, also here.
The discrepancy you are seeing is due to the presence of a timer: after some time, the currently recorded MAX value for any tagged metric is reset back to 0. Try making a few new calls to your endpoint and then immediately calling /actuator/metrics/http.server.requests; you should see a non-zero MAX value for the given tag.
This is the idea behind reporting MAX per smaller period: over time you get an array of MAX values (one per window) rather than a single value covering one long period.
You can see this in action in the Micrometer source code. There is a rotate() method responsible for resetting the MAX value to produce the behaviour described above.
It is called on every poll(), which is triggered periodically for metric gathering.
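For intuition, here is a heavily simplified, single-window sketch of that mechanism (Micrometer's real TimeWindowMax keeps a ring buffer of several windows so the reported max decays gradually; the class and field names below are mine, not Micrometer's):

import java.time.Duration;

// A toy time-window max: records go into the current window; once the
// window expires, the old max is discarded and a new window starts.
class ToyTimeWindowMax {
    private final long windowMillis;
    private long windowStart;
    private double max; // max observed in the current window

    ToyTimeWindowMax(Duration window) {
        this.windowMillis = window.toMillis();
        this.windowStart = System.currentTimeMillis();
    }

    synchronized void record(double value) {
        rotateIfNeeded();
        if (value > max) max = value;
    }

    synchronized double poll() {
        rotateIfNeeded(); // this is why MAX reads 0 after a quiet period
        return max;
    }

    private void rotateIfNeeded() {
        long now = System.currentTimeMillis();
        if (now - windowStart >= windowMillis) {
            max = 0.0;        // reset, exactly what /actuator/metrics shows
            windowStart = now;
        }
    }
}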

Related

Binance WebSocket Order Book - depths change every time

Below is a Python script that subscribes to order book information via Binance's WebSocket API (documentation here).
In both streams (btcusdt@depth and btcusdt@depth@100ms), each JSON payload is streamed with a varying depth.
Please shed some light on what might be the cause of this. Am I doing something wrong? Or do they have certain criteria for how many levels of the order book to send?
import json
import websocket

socket = 'wss://stream.binance.com:9443/ws'

def on_open(self):
    print("opened")
    subscribe_message = {
        "method": "SUBSCRIBE",
        "params": [
            "btcusdt@depth@100ms"
        ],
        "id": 1
    }
    ws.send(json.dumps(subscribe_message))

def on_message(self, message):
    print("received a message")
    # depths of bid/ask
    d = json.loads(message)
    for k, v in d.items():
        if k == "b":
            print(f"bid depth : {len(v)}")
        if k == "a":
            print(f"ask depth : {len(v)}")

def on_close(self):
    print("closed connection")

ws = websocket.WebSocketApp(socket,
                            on_open=on_open,
                            on_message=on_message,
                            on_close=on_close)
ws.run_forever()
btcusdt@depth@100ms
opened
received a message
received a message
bid depth : 3
ask depth : 12
received a message
bid depth : 14
ask depth : 12
received a message
bid depth : 17
ask depth : 24
received a message
bid depth : 14
ask depth : 16
received a message
bid depth : 3
ask depth : 5
received a message
bid depth : 16
ask depth : 6
.
.
.
btcusdt@depth
opened
received a message
received a message
bid depth : 135
ask depth : 127
received a message
bid depth : 125
ask depth : 135
received a message
bid depth : 95
ask depth : 85
received a message
bid depth : 68
ask depth : 88
received a message
bid depth : 119
ask depth : 145
received a message
bid depth : 127
ask depth : 145
.
.
.
Your code prints the length of the diff for the last 100 ms or 1000 ms (the default when you don't specify a timeframe); i.e., the remote API sends just the diff, not the full list.
The varying length of the diff is expected.
Example:
An order book has 2 bids and 2 asks:
ask price 1.02, amount 10
ask price 1.01, amount 10
bid price 0.99, amount 10
bid price 0.98, amount 10
During the timeframe, one more bid is added and one ask is updated. So the message returns:
"b": [
[ // added new bid
0.97,
10
]
],
"a": [
[ // updated existing ask
1.01,
20
]
]
And your code reads this message as
bid depth: 1
ask depth: 1
During another timeframe, two bids are updated:
"b": [
  [ // updated existing bid
    0.98,
    20
  ],
  [ // updated existing bid
    0.99,
    20
  ]
],
"a": [] // no changes
So your code reads this as
bid depth: 2
ask depth: 0
"btcusdt#depth#100ms" only provides the change in the order book, not the order book itself (as mentioned by the other answer)
Use: "btcusdt#depth10#100ms" if you want to stream the book 10 best bids and 10 best asks.

Find sequences in time series data using Elasticsearch

I'm trying to find example Elasticsearch queries for returning sequences of events in a time series. My dataset is rainfall values at 10-minute intervals, and I want to find all storm events. A storm event would be considered continuous rainfall for more than 12 hours, which equates to 72 consecutive records with a rainfall value greater than zero. I could do this in code, but I'd have to page through thousands of records, so I'm hoping for a query-based solution. A sample document is below.
I'm working in a University research group, so any solutions that involve premium tier licences are probably out due to budget.
Thanks!
{
  "_index": "rabt-rainfall-2021.03.11",
  "_type": "_doc",
  "_id": "fS0EIngBfhLe-LSTQn4-",
  "_version": 1,
  "_score": null,
  "_source": {
    "@timestamp": "2021-03-11T16:00:07.637Z",
    "current-rain-total": 8.13,
    "rain-duration-in-mins": 10,
    "last-recorded-time": "2021-03-11 15:54:59",
    "rain-last-10-mins": 0,
    "type": "rainfall",
    "rain-rate-average": 0,
    "@version": "1"
  },
  "fields": {
    "@timestamp": [
      "2021-03-11T16:00:07.637Z"
    ]
  },
  "sort": [
    1615478407637
  ]
}
Update 1
Thanks to @Val, my current query is:
GET /rabt-rainfall-*/_eql/search
{
  "timestamp_field": "@timestamp",
  "event_category_field": "type",
  "size": 100,
  "query": """
    sequence
      [ rainfall where "rain-last-10-mins" > 0 ]
      [ rainfall where "rain-last-10-mins" > 0 ]
    until [ rainfall where "rain-last-10-mins" == 0 ]
  """
}
Having a sequence query with only one rule causes a syntax error, hence the duplicate. The query as it is runs but doesn't return any documents.
Update 2
Results weren't being returned because I wasn't escaping the property names correctly. However, because of the two sequence rules I'm getting matches of length 2, not of arbitrary length until the stop clause is met.
GET /rabt-rainfall-*/_eql/search
{
  "timestamp_field": "@timestamp",
  "event_category_field": "type",
  "size": 100,
  "query": """
    sequence
      [ rainfall where `rain-last-10-mins` > 0 ]
      [ rainfall where `rain-last-10-mins` > 0 ]
    until [ rainfall where `rain-last-10-mins` == 0 ]
  """
}
This would definitely be a job for EQL which allows you to return sequences of related data (ordered in time and matching some constraints):
GET /rabt-rainfall-2021.03.11/_eql/search?filter_path=-hits.events
{
  "timestamp_field": "@timestamp",
  "event_category_field": "type",
  "size": 100,
  "query": """
    sequence with maxspan=12h
      [ rainfall where `rain-last-10-mins` > 0 ]
    until `rain-last-10-mins` == 0
  """
}
What the above query seeks to do is basically this:
get me the sequence of events of type rainfall
with rain-last-10-mins > 0
happening within a 12h window
up until rain-last-10-mins drops to 0
The until statement makes sure that the sequence "expires" as soon as an event has rain-last-10-mins: 0 within the given time window.
In the response, you're going to get the number of matching events in hits.total.value and if that number is 72 (because the time window is limited to 12h), then you know you have a matching sequence.
So your "storm" signal here is to detect whether the above query returns hits.total.value: 72 or lower.
Disclaimer: I haven't tested this, but in theory it should work the way I described.
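For orientation, the count mentioned above sits under hits.total in the EQL search response; a trimmed, illustrative shape (values made up) looks like this:

{
  "is_partial": false,
  "timed_out": false,
  "hits": {
    "total": {
      "value": 72,
      "relation": "eq"
    }
  }
}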

Understanding Spring Boot actuator `http.server.requests` metrics MAX attribute

Can someone explain what the MAX statistic refers to in the response below? I don't see it documented anywhere.
localhost:8081/actuator/metrics/http.server.requests?tag=uri:/myControllerMethod
Response:
{
  "name": "http.server.requests",
  "description": null,
  "baseUnit": "milliseconds",
  "measurements": [
    {
      "statistic": "COUNT",
      "value": 13
    },
    {
      "statistic": "TOTAL_TIME",
      "value": 57.430899
    },
    {
      "statistic": "MAX",
      "value": 0
    }
  ],
  "availableTags": [
    {
      "tag": "exception",
      "values": [
        "None"
      ]
    },
    {
      "tag": "method",
      "values": [
        "GET"
      ]
    },
    {
      "tag": "outcome",
      "values": [
        "SUCCESS"
      ]
    },
    {
      "tag": "status",
      "values": [
        "200"
      ]
    },
    {
      "tag": "commonTag",
      "values": [
        "somePrefix"
      ]
    }
  ]
}
You can see the individual metrics by using ?tag=uri:{endpoint} as listed under availableTags in the response of the root /actuator/metrics/http.server.requests call. The measurement values are:
COUNT: The number of calls recorded.
TOTAL_TIME: The sum of the times recorded, reported in the monitoring system's base unit of time.
MAX: The maximum amount recorded. When this represents a time, it is reported in the monitoring system's base unit of time.
As given here, also here.
The discrepancy you are seeing is due to the presence of a timer: after some time, the currently recorded MAX value for any tagged metric is reset back to 0. Try making a few new calls to /myControllerMethod and then immediately calling /actuator/metrics/http.server.requests; you should see a non-zero MAX value for the given tag.
This is the idea behind reporting MAX per smaller period: over time you get an array of MAX values (one per window) rather than a single value covering one long period.
You can see this in action in the Micrometer source code. There is a rotate() method responsible for resetting the MAX value to produce the behaviour described above.
It is called on every poll(), which is triggered periodically for metric gathering.
What does MAX represent
MAX represents the maximum time taken to execute a single request to the endpoint.
Analysis for /user/asset/getAllAssets:
COUNT  TOTAL_TIME  MAX
5      115         17
6      122         17   (execution time = 122 - 115 = 7; MAX stays 17 from an earlier request)
7      131         17   (execution time = 131 - 122 = 9)
8      187         56   (execution time = 187 - 131 = 56; new maximum)
9      204         56   (execution time = 204 - 187 = 17; MAX remains 56 from now on)
Will MAX be 0 if there are only a few requests (or one request) to a particular endpoint?
No, the number of requests to a particular endpoint does not affect MAX (see the image from Spring Boot Admin).
When will MAX be 0?
There is a timer which resets the value to 0. When the endpoint has not been called or executed for some time, the timer sets MAX to 0. Here the approximate timer value is 2 minutes (120 seconds).
DistributionStatisticConfig has .expiry(Duration.ofMinutes(2)), which resets the measurement to 0 if no request has been made within the expiry (rotation) time.
How did I determine the timer value?
I took 6 samples (executed the same endpoint 6 times) and measured the time difference between calling the endpoint and the moment MAX was set back to zero.
More Details
UPDATE
The documentation has been updated:
NOTE: Max for basic DistributionSummary implementations such as CumulativeDistributionSummary and StepDistributionSummary is a time window max (TimeWindowMax). This means its value is the maximum value during a time window. When the time window ends, it is reset to 0 and a new time window starts. The time window size is the step size of the meter registry unless expiry in DistributionStatisticConfig is explicitly set to another value.
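If you want MAX to survive longer than the 2-minute default without writing code, newer Spring Boot versions (2.3 and later, to the best of my knowledge; please verify against your version's documentation) let you override the expiry and buffer length per meter through configuration properties, e.g. in application.properties:

management.metrics.distribution.expiry.http.server.requests=10m
management.metrics.distribution.buffer-length.http.server.requests=5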

Why does Elasticsearch index performance fall after frequent writes

When my Node.js script starts, it deletes the old index (if it exists) and creates a new one according to the config file, then creates a WebSocket server and starts listening for incoming connections.
initES() {
  this.elasticsearchClient = new elasticsearch.Client({
    host: `${Config.elasticSearchHost}:${Config.elasticSearchPort}`,
    log: 'trace'
  });
  let deletePromise = this.elasticsearchClient.indices.delete({index: `${Config.elasticSearchIndex}`});
  deletePromise.then(() => {
    console.log(`Index ${Config.elasticSearchIndex} deleted`);
  }, function(e) {
    console.log(e.toJSON())
  }).then(() => {
    let createPromise = this.elasticsearchClient.indices.create({
      index: `${Config.elasticSearchIndex}`,
      body: {
        settings: {
          index: {
            number_of_shards: 1,
            number_of_replicas: 0
          },
          analysis: {
            analyzer: {
              whitespace_analyzer: {
                tokenizer: 'whitespace',
                filter: ['lowercase']
              }
            }
          }
        }
      }
    });
    createPromise.then(() => {
      console.log(`Index ${Config.elasticSearchIndex} created`);
    }, (e) => {
      console.log(e.toJSON());
    })
  });
}
The script is intended to start just once, at boot time (through cron); it was written by me and uses the standard ES library (
https://www.elastic.co/guide/en/elasticsearch/client/javascript-api/current/api-reference.html
).
In the front end, the user chooses to calculate orders (~700 items; they are calculated by the system automatically, with gearman and phantomjs).
At first (the first 8 hours, or the first test run) everything works fine: ES responds quickly, websocket clients frequently update data, and the data is updated in the ES index.
If the user cancels the process, or the process finishes and the user decides to recalculate (all data is deleted before anything new is put in), ES I/O becomes slower.
And so on; after a while the index only fills up to ~340-350 items, not 700. In some cases ES stops responding.
Tailing the ES log files shows me tons of lines like:
Entering safepoint region: GenCollectForAllocation
[2019-05-21T13:46:45.611+0000][9630][gc,start ] GC(271) Pause Young (Allocation Failure)
[2019-05-21T13:46:45.611+0000][9630][gc,task ] GC(271) Using 8 workers of 8 for evacuation
[2019-05-21T13:46:45.616+0000][9630][gc,age ] GC(271) Desired survivor size 17891328 bytes, new threshold 6 (max threshold 6)
[2019-05-21T13:46:45.617+0000][9630][gc,age ] GC(271) Age table with threshold 6 (max threshold 6)
[2019-05-21T13:46:45.617+0000][9630][gc,age ] GC(271) - age 1: 987344 bytes, 987344 total
[2019-05-21T13:46:45.617+0000][9630][gc,age ] GC(271) - age 2: 5440 bytes, 992784 total
[2019-05-21T13:46:45.617+0000][9630][gc,age ] GC(271) - age 3: 172640 bytes, 1165424 total
[2019-05-21T13:46:45.617+0000][9630][gc,age ] GC(271) - age 4: 535104 bytes, 1700528 total
[2019-05-21T13:46:45.617+0000][9630][gc,age ] GC(271) - age 5: 333224 bytes, 2033752 total
[2019-05-21T13:46:45.617+0000][9630][gc,age ] GC(271) - age 6: 128 bytes, 2033880 total
[2019-05-21T13:46:45.617+0000][9630][gc,heap ] GC(271) ParNew: 282158K->2653K(314560K)
[2019-05-21T13:46:45.617+0000][9630][gc,heap ] GC(271) CMS: 88354K->88355K(699072K)
[2019-05-21T13:46:45.617+0000][9630][gc,metaspace ] GC(271) Metaspace: 85648K->85648K(1128448K)
[2019-05-21T13:46:45.617+0000][9630][gc ] GC(271) Pause Young (Allocation Failure) 361M->88M(989M) 5.387ms
[2019-05-21T13:46:45.617+0000][9630][gc,cpu ] GC(271) User=0.01s Sys=0.00s Real=0.00s
[2019-05-21T13:46:45.617+0000][9630][safepoint ] Leaving safepoint region
[2019-05-21T13:46:45.617+0000][9630][safepoint ] Total time for which application threads were stopped: 0.0057277 seconds, Stopping threads took: 0.0000429 seconds
[2019-05-21T13:46:46.617+0000][9630][safepoint ] Application time: 1.0004453 seconds
[2019-05-21T13:46:46.617+0000][9630][safepoint ] Entering safepoint region: Cleanup
[2019-05-21T13:46:46.617+0000][9630][safepoint ] Leaving safepoint region
But to be precise, I don't see anything critical there (except the allocation failure).
These lines also appear in the log even when everything goes well.
If I restart my script (which deletes the old index and creates a new one), ES updates the items fast again, as it does the first time.
So my question is:
Why does ES lose performance if I
insert/update/read/delete data ... insert/update/read/delete data ...
but it works fine if I
insert/update/read ... restart the script ... insert/update/read ...
?
It turned out to have nothing to do with Elasticsearch.
It was my fault in not closing websocket connections, which slowed the server down as it was losing resources.
Sorry guys for taking your time.

Whenever I run a JMeter test with fewer than 10 Thread Groups, "Throughput" always shows numbers in minutes

When I execute a test in JMeter with fewer than 10 Thread Groups, the Throughput column in the Summary Report shows the result in minutes.
Can anyone please help me?
As per the RateRenderer class source:
String unit = "sec";
if (rate < 1.0) {
    rate *= 60.0;
    unit = "min";
}
if (rate < 1.0) {
    rate *= 60.0;
    unit = "hour";
}
setText(formatter.format(rate) + "/" + unit);
So:
If the throughput is more than 1, the time unit is "seconds".
If the throughput is less than 1, it is multiplied by 60 and the time unit is set to "minutes".
If, after the conversion to "minutes", it is still less than 1, it is multiplied by 60 again and the time unit is set to "hours".
If you need to convert the throughput from hits per minute back to hits per second, just divide the value by 60.
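For example, a Summary Report value of 12.0/min corresponds to 12 / 60 = 0.2 requests per second; conversely, a measured rate of 0.2/sec is rendered as 12.0/min.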
Other options are:
Patch the RateRenderer class and comment out the two "if" clauses above.
Use an external third-party tool like BM.Sense for JMeter results analysis.
