Algorithm: Creating an outage table

I created a device which sends a point to my web server. My web server stores a Point instance which has a created_at attribute reflecting the point's creation time. My device consistently sends a request to my server at a 180-second interval.
Now I want to see the periods of time my device has experienced outages over, say, the last 7 days.
As an example, let's pretend it's August 3rd (08/03) and use a 3-day window. I can query my Points table for points from the last 3 days, sorted by created_at:
Points = [ point(name=p1, created_at="08/01 00:00:00"),
point(name=p2, created_at="08/01 00:03:00"),
point(name=p3, created_at="08/01 00:06:00"),
point(name=p4, created_at="08/01 00:20:00"),
point(name=p5, created_at="08/03 00:01:00"),
... ]
I would like to write an algorithm that can list out the following outages:
outages = {
"08/01": [ "00:06:00-00:20:00", "00:20:00-23:59:59" ],
"08/02": [ "00:00:00-23:59:59" ],
"08/03": [ "00:00:00-00:01:00" ],
}
Is there an elegant way to do this?
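One straightforward approach, sketched below in Python (a minimal sketch, assuming points arrive roughly every 180s and treating any gap longer than twice the interval as an outage): sort the points, walk consecutive pairs together with the query-window edges, and split each gap across the calendar days it spans.

from datetime import datetime, timedelta
from collections import defaultdict

INTERVAL = timedelta(seconds=180)
THRESHOLD = INTERVAL * 2  # grace: a gap over two intervals counts as an outage

def outages(created_ats, window_start, window_end):
    """Map 'MM/DD' -> list of 'HH:MM:SS-HH:MM:SS' outage spans."""
    times = sorted(created_ats)
    # Treat the window edges as virtual points so leading/trailing gaps count.
    edges = [window_start] + times + [window_end]
    result = defaultdict(list)
    for a, b in zip(edges, edges[1:]):
        if b - a <= THRESHOLD:
            continue
        # Split the gap [a, b] across the calendar days it spans.
        cur = a
        while cur.date() <= b.date():
            day_end = datetime.combine(cur.date(), datetime.max.time()).replace(microsecond=0)
            end = min(b, day_end)
            result[cur.strftime("%m/%d")].append(
                f"{cur.strftime('%H:%M:%S')}-{end.strftime('%H:%M:%S')}"
            )
            cur = datetime.combine(cur.date() + timedelta(days=1), datetime.min.time())
            if cur > b:
                break
    return dict(result)

pts = [datetime(2023, 8, 1, 0, 0), datetime(2023, 8, 1, 0, 3),
       datetime(2023, 8, 1, 0, 6), datetime(2023, 8, 1, 0, 20),
       datetime(2023, 8, 3, 0, 1)]
print(outages(pts, datetime(2023, 8, 1), datetime(2023, 8, 3, 0, 1)))
# {'08/01': ['00:06:00-00:20:00', '00:20:00-23:59:59'],
#  '08/02': ['00:00:00-23:59:59'],
#  '08/03': ['00:00:00-00:01:00']}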

Related

Comparing Two Splunk Events To See Which One Is Larger

I'm using a transaction to see how long a device is in RFM (reduced functionality mode), and the duration field increases with each table row. The way I think it should work is that while the field is 'yes' it would calculate the duration across all events that equal 'yes', but I have a lot of superfluous data that shouldn't be there, in my opinion.
I only want to keep the event with the largest duration, so I want to compare the current event's duration to the next event's duration and, if the next one is smaller, keep the current event.
index=crowdstrike sourcetype=crowdstrike:device:json
| transaction falcon_device.hostname startswith="falcon_device.reduced_functionality_mode=yes" endswith="falcon_device.reduced_functionality_mode=no"
| table _time duration
_time                 duration
2022-10-28 06:07:45   888198
2022-10-28 05:33:44   892400
2022-10-28 04:57:44   896360
2022-08-22 18:25:53   3862
2022-08-22 18:01:53   7703
2022-08-22 17:35:53   11543
In the data above the duration drops from 896360 to 3862 (an inflection that can happen on any date); the duration runs in cycles like that, increasing until it starts over. So in the comparison I would keep the event at the 10-28 inflection point, and likewise at every other inflection point throughout the dataset.
How would I construct that multi event comparison?
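To pin down the requirement, the selection rule can be stated outside SPL: scanning the rows in the time-descending order shown above, keep a row whenever its duration is greater than the duration of the row after it (an inflection point). A minimal Python sketch of that rule, using the sample rows above (an illustration only, not a Splunk answer):

# Keep only the "inflection point" rows: rows whose duration is greater
# than the next row's duration, with rows sorted by _time descending
# as in the sample output above.
rows = [
    ("2022-10-28 06:07:45", 888198),
    ("2022-10-28 05:33:44", 892400),
    ("2022-10-28 04:57:44", 896360),
    ("2022-08-22 18:25:53", 3862),
    ("2022-08-22 18:01:53", 7703),
    ("2022-08-22 17:35:53", 11543),
]

keep = []
for i, (ts, dur) in enumerate(rows):
    # The oldest row has no successor; treat it as an inflection point too.
    nxt_dur = rows[i + 1][1] if i + 1 < len(rows) else float("-inf")
    if dur > nxt_dur:
        keep.append((ts, dur))

print(keep)
# [('2022-10-28 04:57:44', 896360), ('2022-08-22 17:35:53', 11543)]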
By definition, the transaction command bundles together all events with the same hostname value, starting with the first "yes" and ending with the first "no". There is no option to include events by size, but there are options that govern the maximum time span of a transaction (maxspan), how many events can be in a transaction (maxevents), and how long the time between events can be (maxpause). That the duration value you want to keep (896360) is 10 days even though the previous transaction was only 36 minutes before it makes me wonder about the logic being used in this query. Consider using some of the options available to better define a "transaction".
What problem are you trying to solve with this query? It's possible there's another solution that doesn't use transaction (which is very non-performant).
Sans sample data, something like the following will probably work:
index=crowdstrike sourcetype=crowdstrike:device:json falcon_device.hostname=* falcon_device.reduced_functionality_mode=yes
| stats max(_time) as yestime by falcon_device.hostname
| append
[| search index=crowdstrike sourcetype=crowdstrike:device:json falcon_device.hostname=* falcon_device.reduced_functionality_mode=no
| stats max(_time) as notime by falcon_device.hostname ]
| stats values(*) as * by falcon_device.hostname
| eval elapsed_seconds=yestime-notime
Thanks for your answers but it wasn't working out. I ended up talking to some professional splunkers and got the below as a solution.
index=crowdstrike sourcetype=crowdstrike:device:json
| addinfo ```adds info_max_time```
| fields + _time, falcon_device.reduced_functionality_mode falcon_device.hostname info_max_time
| rename falcon_device.reduced_functionality_mode AS mode, falcon_device.hostname AS Hostname
| sort 0 + Hostname, -_time ``` events are not always returned in descending order per hostname, which would break streamstats```
| streamstats current=f last(mode) as new_mode last(_time) as time_change by Hostname ```compute potential time of state change```
| eval new_mode=coalesce(new_mode,mode."+++"), time_change=coalesce(time_change,info_max_time) ```take care of boundaries of search```
| table _time, Hostname, mode, new_mode, time_change
| where mode!=new_mode ```keep only state change events```
| streamstats current=f last(time_change) AS change_end by Hostname ```add start time of the next state as change_end time for the current state```
| fieldformat time_change=strftime(time_change, "%Y-%m-%d %T")
| fieldformat change_end=strftime(change_end, "%Y-%m-%d %T")
``` uncomment the following to sort by duration```
```| search change_end=* AND new_mode="yes"
| eval duration = round( (change_end-time_change)/(3600),1)
| table time_change, Hostname, new_mode, duration
| sort -duration```

Understanding Spring Boot actuator `http.server.requests` metrics MAX attribute

Can someone explain what the MAX statistic refers to in the response below? I don't see it documented anywhere.
localhost:8081/actuator/metrics/http.server.requests?tag=uri:/myControllerMethod
Response:
{
  "name": "http.server.requests",
  "description": null,
  "baseUnit": "milliseconds",
  "measurements": [
    { "statistic": "COUNT", "value": 13 },
    { "statistic": "TOTAL_TIME", "value": 57.430899 },
    { "statistic": "MAX", "value": 0 }
  ],
  "availableTags": [
    { "tag": "exception", "values": [ "None" ] },
    { "tag": "method", "values": [ "GET" ] },
    { "tag": "outcome", "values": [ "SUCCESS" ] },
    { "tag": "status", "values": [ "200" ] },
    { "tag": "commonTag", "values": [ "somePrefix" ] }
  ]
}
You can see the individual metrics by using ?tag=uri:{endpoint_tag} as defined in the response of the root /actuator/metrics/http.server.requests call. The details of the measurement values are:
COUNT: Rate per second for calls.
TOTAL_TIME: The sum of the times recorded. Reported in the monitoring system's base unit of time
MAX: The maximum amount recorded. When this represents a time, it is reported in the monitoring system's base unit of time.
The discrepancies you are seeing are due to the presence of a timer: after some time, the currently defined MAX value for any tagged metric can be reset back to 0. Can you add some new calls to /myControllerMethod and then immediately call /actuator/metrics/http.server.requests to see a non-zero MAX value for the given tag?
This is by design: MAX is tracked per smaller time window, so when you monitor these metrics over time you get a series of MAX values rather than a single value covering a long period.
You can see this in action in the Micrometer source code: there is a rotate() method responsible for resetting the MAX value to create the behaviour described above. It is called on every poll(), which is triggered periodically by metric gathering.
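To observe this yourself, a quick script along these lines (a sketch using Python's requests library; the URLs are the ones from the question) hits the controller and then polls the metric so you can watch MAX rise and later fall back to 0 when the window rotates:

import time
import requests  # third-party: pip install requests

BASE = "http://localhost:8081"
METRIC = f"{BASE}/actuator/metrics/http.server.requests?tag=uri:/myControllerMethod"

def read_max():
    measurements = requests.get(METRIC).json()["measurements"]
    return next(m["value"] for m in measurements if m["statistic"] == "MAX")

# Generate a request, then watch MAX: it should be non-zero immediately
# afterwards and drop back to 0 once the time window rotates past it.
requests.get(f"{BASE}/myControllerMethod")
for _ in range(10):
    print(time.strftime("%T"), "MAX =", read_max())
    time.sleep(30)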
What does MAX represent?
MAX represents the maximum time taken to execute a single request to the endpoint.
Analysis for /user/asset/getAllAssets:
COUNT  TOTAL_TIME  MAX
5      115         17
6      122         17   (execution time = 122 - 115 = 7; MAX stays 17 from an earlier request)
7      131         17   (execution time = 131 - 122 = 9)
8      187         56   (execution time = 187 - 131 = 56)
9      204         56   (execution time = 204 - 187 = 17; MAX stays 56 from now on)
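In other words, TOTAL_TIME is cumulative, so each request's execution time is just the difference between successive TOTAL_TIME samples; a trivial Python check of the numbers above:

# TOTAL_TIME is cumulative; per-request time is the diff between samples.
total_time = [115, 122, 131, 187, 204]
exec_times = [b - a for a, b in zip(total_time, total_time[1:])]
print(exec_times)  # [7, 9, 56, 17] -> MAX stays 17 until the 56 arrives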
Will MAX be 0 if we have a small number of requests (or one request) to the particular endpoint?
No, the number of requests to a particular endpoint does not affect MAX (shown with a Spring Boot Admin screenshot in the original answer).
When will MAX be 0?
There is a timer which sets the value to 0: when the endpoint has not been called or executed for some time, the timer sets MAX back to 0. Here the approximate timer value is 2 minutes (120 seconds).
DistributionStatisticConfig has .expiry(Duration.ofMinutes(2)), which resets the measurement to 0 if no request has been made within the expiry (rotation) time.
How did I determine the timer value?
I took 6 samples (executed the same endpoint 6 times) and measured the difference between the time the endpoint was called and the time MAX was set back to zero.
UPDATE
The documentation has been updated:
NOTE:
Max for basic DistributionSummary implementations such as CumulativeDistributionSummary and StepDistributionSummary is a time window max (TimeWindowMax). It means that its value is the maximum value during a time window. If the time window ends, it'll be reset to 0 and a new time window starts again. Time window size will be the step size of the meter registry unless expiry in DistributionStatisticConfig is set to another value explicitly.
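To make the time-window behaviour concrete, here is a small self-contained Python sketch of the same idea (an illustration, not Micrometer's actual implementation): a max tracked over a ring of rotating buckets, which falls back to 0 once no samples have been recorded for a full expiry period.

import time

class TimeWindowMax:
    """Illustrative sketch of Micrometer-style time-window max tracking.

    Keeps a ring of buckets; record() updates every bucket, poll() reads
    the oldest bucket, and _rotate() drops expired buckets so the reported
    MAX eventually falls back to 0 when nothing is recorded.
    """

    def __init__(self, expiry_seconds=120, buffer_length=3):
        self.rotate_interval = expiry_seconds / buffer_length
        self.buckets = [0.0] * buffer_length
        self.last_rotation = time.monotonic()

    def record(self, value):
        self._rotate()
        for i in range(len(self.buckets)):
            self.buckets[i] = max(self.buckets[i], value)

    def poll(self):
        self._rotate()
        return self.buckets[0]  # oldest bucket holds the current window max

    def _rotate(self):
        now = time.monotonic()
        while now - self.last_rotation >= self.rotate_interval:
            self.buckets.pop(0)        # drop the expired bucket
            self.buckets.append(0.0)   # start a fresh one
            self.last_rotation += self.rotate_interval

tw = TimeWindowMax(expiry_seconds=120, buffer_length=3)
tw.record(0.0327)
print(tw.poll())  # 0.0327 while the window containing the sample is alive
# After >120s with no further record() calls, poll() returns 0.0.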

Why Elasticsearch index performance drops after frequent writes

When my Node.js script starts, it deletes the old index (if it exists), creates a new one according to the config file, then creates a WebSocket server and starts listening for incoming connections.
initES() {
    this.elasticsearchClient = new elasticsearch.Client({
        host: `${Config.elasticSearchHost}:${Config.elasticSearchPort}`,
        log: 'trace'
    });
    let deletePromise = this.elasticsearchClient.indices.delete({index: `${Config.elasticSearchIndex}`});
    deletePromise.then(() => {
        console.log(`Index ${Config.elasticSearchIndex} deleted`);
    }, function(e) {
        console.log(e.toJSON())
    }).then(() => {
        let createPromise = this.elasticsearchClient.indices.create({
            index: `${Config.elasticSearchIndex}`,
            body: {
                settings: {
                    index: {
                        number_of_shards: 1,
                        number_of_replicas: 0
                    },
                    analysis: {
                        analyzer: {
                            whitespace_analyzer: {
                                tokenizer: 'whitespace',
                                filter: ['lowercase']
                            }
                        }
                    }
                }
            }
        });
        createPromise.then(() => {
            console.log(`Index ${Config.elasticSearchIndex} created`);
        }, (e) => {
            console.log(e.toJSON());
        })
    });
}
The script is intended to start just once, at boot time (through cron). It was written by me and uses the standard ES JavaScript client (https://www.elastic.co/guide/en/elasticsearch/client/javascript-api/current/api-reference.html).
On the front end, the user chooses to calculate orders (~700 items; they are calculated by the system automatically, with Gearman and PhantomJS).
At first (the first 8 hours, or the first test run) everything works fine: ES responds quickly, WebSocket clients frequently update data, and the data is updated in the ES index.
If the user cancels the process, or the process finishes and the user decides to recalculate (all data is deleted before anything new is written), ES I/O becomes slower.
And so on: after a while the index only fills up to ~340-350 items instead of 700, and in some cases ES stops responding.
Tailing the ES log files shows me tons of lines like:
Entering safepoint region: GenCollectForAllocation
[2019-05-21T13:46:45.611+0000][9630][gc,start ] GC(271) Pause Young (Allocation Failure)
[2019-05-21T13:46:45.611+0000][9630][gc,task ] GC(271) Using 8 workers of 8 for evacuation
[2019-05-21T13:46:45.616+0000][9630][gc,age ] GC(271) Desired survivor size 17891328 bytes, new threshold 6 (max threshold 6)
[2019-05-21T13:46:45.617+0000][9630][gc,age ] GC(271) Age table with threshold 6 (max threshold 6)
[2019-05-21T13:46:45.617+0000][9630][gc,age ] GC(271) - age 1: 987344 bytes, 987344 total
[2019-05-21T13:46:45.617+0000][9630][gc,age ] GC(271) - age 2: 5440 bytes, 992784 total
[2019-05-21T13:46:45.617+0000][9630][gc,age ] GC(271) - age 3: 172640 bytes, 1165424 total
[2019-05-21T13:46:45.617+0000][9630][gc,age ] GC(271) - age 4: 535104 bytes, 1700528 total
[2019-05-21T13:46:45.617+0000][9630][gc,age ] GC(271) - age 5: 333224 bytes, 2033752 total
[2019-05-21T13:46:45.617+0000][9630][gc,age ] GC(271) - age 6: 128 bytes, 2033880 total
[2019-05-21T13:46:45.617+0000][9630][gc,heap ] GC(271) ParNew: 282158K->2653K(314560K)
[2019-05-21T13:46:45.617+0000][9630][gc,heap ] GC(271) CMS: 88354K->88355K(699072K)
[2019-05-21T13:46:45.617+0000][9630][gc,metaspace ] GC(271) Metaspace: 85648K->85648K(1128448K)
[2019-05-21T13:46:45.617+0000][9630][gc ] GC(271) Pause Young (Allocation Failure) 361M->88M(989M) 5.387ms
[2019-05-21T13:46:45.617+0000][9630][gc,cpu ] GC(271) User=0.01s Sys=0.00s Real=0.00s
[2019-05-21T13:46:45.617+0000][9630][safepoint ] Leaving safepoint region
[2019-05-21T13:46:45.617+0000][9630][safepoint ] Total time for which application threads were stopped: 0.0057277 seconds, Stopping threads took: 0.0000429 seconds
[2019-05-21T13:46:46.617+0000][9630][safepoint ] Application time: 1.0004453 seconds
[2019-05-21T13:46:46.617+0000][9630][safepoint ] Entering safepoint region: Cleanup
[2019-05-21T13:46:46.617+0000][9630][safepoint ] Leaving safepoint region
But to be precise, I don't see anything critical there (except the allocation failures).
And these lines appear in the log even when everything goes well.
If I restart my script (which deletes the old index and creates a new one), ES updates these items fast again, as it did the first time.
So my question is: why does ES lose performance if I
insert/update/read/delete data ... insert/update/read/delete data ...
but works fine if I
insert/update/read ... restart script ... insert/update/read ...?
It turned out to have nothing to do with Elasticsearch.
It was my fault: I was not closing WebSocket connections, which slowed the server down as it leaked resources.
Sorry, guys, for taking your time.

Spring Boot Actuator 'http.server.requests' metric MAX time

I have a Spring Boot application and I am using Spring Boot Actuator and Micrometer in order to track metrics about my application. I am specifically concerned about the 'http.server.requests' metric and the MAX statistic:
{
  "name": "http.server.requests",
  "measurements": [
    { "statistic": "COUNT", "value": 2 },
    { "statistic": "TOTAL_TIME", "value": 0.079653001 },
    { "statistic": "MAX", "value": 0.032696019 }
  ],
  "availableTags": [
    { "tag": "exception", "values": [ "None" ] },
    { "tag": "method", "values": [ "GET" ] },
    { "tag": "status", "values": [ "200", "400" ] }
  ]
}
I suppose the MAX statistic is the maximum execution time of a request (since I have made two requests, it's the time of the longer of the two).
Whenever I filter the metric by any tag, like localhost:9090/actuator/metrics/http.server.requests?tag=status:200
{
  "name": "http.server.requests",
  "measurements": [
    { "statistic": "COUNT", "value": 1 },
    { "statistic": "TOTAL_TIME", "value": 0.029653001 },
    { "statistic": "MAX", "value": 0.0 }
  ],
  "availableTags": [
    { "tag": "exception", "values": [ "None" ] },
    { "tag": "method", "values": [ "GET" ] }
  ]
}
I always get 0.0 as the max time. What is the reason for this?
What does MAX represent (MAX discussion)?
MAX represents the maximum time taken to execute a single request to the endpoint.
Analysis for /user/asset/getAllAssets:
COUNT  TOTAL_TIME  MAX
5      115         17
6      122         17   (execution time = 122 - 115 = 7; MAX stays 17 from an earlier request)
7      131         17   (execution time = 131 - 122 = 9)
8      187         56   (execution time = 187 - 131 = 56)
9      204         56   (execution time = 204 - 187 = 17; MAX stays 56 from now on)
Will MAX be 0 if we have a small number of requests (or one request) to the particular endpoint?
No, the number of requests to a particular endpoint does not affect MAX (shown with a Spring Boot Admin screenshot in the original answer).
When will MAX be 0?
There is a timer which sets the value to 0: when the endpoint has not been called or executed for some time, the timer sets MAX back to 0. Here the approximate timer value is 2 to 2.5 minutes (120 to 150 seconds).
DistributionStatisticConfig has .expiry(Duration.ofMinutes(2)), which resets the measurement to 0 if no request has been made in the last 2 minutes (120 seconds).
Methods such as public TimeWindowMax(Clock clock, ...) and private void rotate(), together with the Clock interface, implement this; you can see the implementation in the Micrometer source.
How did I determine the timer value?
I took 6 samples (executed the same endpoint 6 times) and measured the difference between the time the endpoint was called and the time MAX was set back to zero.
The MAX property belongs to the enum Statistic, which is used by Measurement (in a Measurement we get COUNT, TOTAL_TIME, MAX):
public static final Statistic MAX
"The maximum amount recorded. When this represents a time, it is reported in the monitoring system's base unit of time."
Notes:
This is the case for the metric of a particular endpoint (here /actuator/metrics/http.server.requests?tag=uri:/user/asset/getAllAssets).
For the generalized metric at /actuator/metrics/http.server.requests, MAX for some endpoint will likewise be set back to 0 by the timer; in my view, MAX for http.server.requests behaves the same way as for a particular endpoint.
UPDATE
The documentation has been updated for MAX:
NOTE: Max for basic DistributionSummary implementations such as CumulativeDistributionSummary and StepDistributionSummary is a time window max (TimeWindowMax). It means that its value is the maximum value during a time window. If the time window ends, it'll be reset to 0 and a new time window starts again. Time window size will be the step size of the meter registry unless expiry in DistributionStatisticConfig is set to another value explicitly.
You can see the individual metrics by using ?tag=uri:{endpoint_tag} as defined in the response of the root /actuator/metrics/http.server.requests call. The details of the measurement values are:
COUNT: Rate per second for calls.
TOTAL_TIME: The sum of the times recorded. Reported in the monitoring system's base unit of time
MAX: The maximum amount recorded. When this represents a time, it is reported in the monitoring system's base unit of time.
The discrepancies you are seeing are due to the presence of a timer: after some time, the currently defined MAX value for any tagged metric can be reset back to 0. Can you add some new calls to your endpoint and then immediately call /actuator/metrics/http.server.requests to see a non-zero MAX value for the given tag?
This is by design: MAX is tracked per smaller time window, so when you monitor these metrics over time you get a series of MAX values rather than a single value covering a long period.
You can see this in action in the Micrometer source code: there is a rotate() method responsible for resetting the MAX value to create the behaviour described above. It is called on every poll(), which is triggered periodically by metric gathering.

HyperTable: Loading data using Mutators Vs. LOAD DATA INFILE

I am starting a discussion which, I hope, will become the one place to discuss loading data using mutators vs. loading a flat file via 'LOAD DATA INFILE'.
I have been baffled by the enormous performance gain of the flat-file method over mutators (tried batch size = 1000, 10000, 100K, et cetera).
My project involved loading close to 400 million rows of social media data into HyperTable to be used for real-time analytics. It took me close to 3 days to load just 1 million rows of data (code sample below). Each row is approximately 32 bytes. So, to avoid spending 2-3 weeks loading this much data, I prepared a flat file of rows and used the LOAD DATA INFILE method. The performance gain was amazing: the loading rate was 368,336 cells/sec.
See below for an actual snapshot of the action:
hypertable> LOAD DATA INFILE "/data/tmp/users.dat" INTO TABLE users;
Loading 7,113,154,337 bytes of input data...
0% 10 20 30 40 50 60 70 80 90 100%
|----|----|----|----|----|----|----|----|----|----|
***************************************************
Load complete.
Elapsed time: 508.07 s
Avg key size: 8.92 bytes
Total cells: 218976067
Throughput: 430998.80 cells/s
Resends: 2210404
hypertable> LOAD DATA INFILE "/data/tmp/graph.dat" INTO TABLE graph;
Loading 12,693,476,187 bytes of input data...
0% 10 20 30 40 50 60 70 80 90 100%
|----|----|----|----|----|----|----|----|----|----|
***************************************************
Load complete.
Elapsed time: 1189.71 s
Avg key size: 17.48 bytes
Total cells: 437952134
Throughput: 368118.13 cells/s
Resends: 1483209
Why is the performance difference between the two methods so vast? What's the best way to improve mutator performance? Sample mutator code is below:
my $batch_size = 1000000; # 1000 or 10000 makes no substantial difference
my $ignore_unknown_cfs = 2;
my $ht = new Hypertable::ThriftClient($master, $port);
my $ns = $ht->namespace_open($namespace);
my $users_mutator = $ht->mutator_open($ns, 'users', $ignore_unknown_cfs, 10);
my $graph_mutator = $ht->mutator_open($ns, 'graph', $ignore_unknown_cfs, 10);

# For each input row (the original snippet used an undefined $mutator;
# shown here with $users_mutator for concreteness):
my $key = new Hypertable::ThriftGen::Key({ row => $row, column_family => $cf, column_qualifier => $cq });
my $cell = new Hypertable::ThriftGen::Cell({ key => $key, value => $val });
$ht->mutator_set_cell($users_mutator, $cell);
$ht->mutator_flush($users_mutator);
I would appreciate any input on this; I don't have a tremendous amount of HyperTable experience.
Thanks.
If it's taking three days to load one million rows, then you're probably calling flush() after every row insert, which is not the right thing to do. Before I describe how to fix that: your mutator_open() arguments aren't quite right. You don't need to specify ignore_unknown_cfs, and you should supply 0 for the flush_interval, something like this:
my $users_mutator = $ht->mutator_open($ns, 'users', 0, 0);
my $graph_mutator = $ht->mutator_open($ns, 'graph', 0, 0);
You should only call mutator_flush() if you would like to checkpoint how much of the input data has been consumed. A successful call to mutator_flush() means that all data that has been inserted on that mutator has durably made it into the database. If you're not checkpointing how much of the input data has been consumed, then there is no need to call mutator_flush(), since it will get flushed automatically when you close the mutator.
The next performance problem with your code that I see is that you're using mutator_set_cell(). You should use either mutator_set_cells() or mutator_set_cells_as_arrays() since each method call is a round-trip to the ThriftBroker, which is expensive. By using the mutator_set_cells_* methods, you amortize that round-trip over many cells. The mutator_set_cells_as_arrays() method can be more efficient for languages where object construction overhead is large in comparison to native datatypes (e.g. string). I'm not sure about Perl, but you might want to give that a try to see if it boosts performance.
Also, be sure to call mutator_close() when you're finished with the mutator.
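Putting the answer's advice together, here is a hedged sketch of a batched loader (shown in Python; the mutator_* method names mirror the Thrift API used in the Perl snippet above, but the module paths, host, port, namespace, and the rows_from_input_file() helper are assumptions for illustration, not an authoritative HyperTable recipe):

# Sketch of batched loading per the advice above. Module/class names are
# assumed from Hypertable's Thrift bindings and may differ per installation.
from hypertable.thriftclient import ThriftClient  # assumed binding
from hyperthrift.gen.ttypes import Cell, Key      # assumed generated types

BATCH_SIZE = 10_000

def rows_from_input_file():
    # Hypothetical row source; replace with your real input parsing.
    yield ("user:1", "info", "name", "alice")

client = ThriftClient("master-host", 15867)  # hypothetical host/port
ns = client.namespace_open("social")         # hypothetical namespace

# flags=0, flush_interval=0: let the client manage flushing (per the answer)
mutator = client.mutator_open(ns, "users", 0, 0)

batch = []
for row, cf, cq, val in rows_from_input_file():
    batch.append(Cell(key=Key(row=row, column_family=cf, column_qualifier=cq),
                      value=val))
    if len(batch) >= BATCH_SIZE:
        client.mutator_set_cells(mutator, batch)  # one round-trip per batch
        batch = []

if batch:
    client.mutator_set_cells(mutator, batch)

# No explicit mutator_flush() needed unless checkpointing progress;
# closing the mutator flushes any remaining buffered cells.
client.mutator_close(mutator)

The key design point, as the answer explains, is amortizing the ThriftBroker round-trip over many cells and letting mutator_close() handle the final flush.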
