Comparing Two Splunk Events To See Which One Is Larger

I'm using a transaction to see how long a device is in RFM (reduced functionality mode), and the duration field increases with each table row. How I think it should work is that while the field is 'yes' it would calculate the duration across all events equal to 'yes', but I get a lot of superfluous data that shouldn't be there IMO.
I only want to keep the event with the largest duration, so I want to compare the current event's duration to the next event's duration and, if the next one is smaller than the current event, keep the current event.
index=crowdstrike sourcetype=crowdstrike:device:json
| transaction falcon_device.hostname startswith="falcon_device.reduced_functionality_mode=yes" endswith="falcon_device.reduced_functionality_mode=no"
| table _time duration
_time                  duration
2022-10-28 06:07:45    888198
2022-10-28 05:33:44    892400
2022-10-28 04:57:44    896360
2022-08-22 18:25:53    3862
2022-08-22 18:01:53    7703
2022-08-22 17:35:53    11543
In the data above the duration drops from 896360 to 3862; that reset can happen on any date, and the duration runs in cycles like that, increasing until it starts over. So in the comparison I would keep the event at the 10-28 inflection point, and likewise at every other inflection point throughout the dataset.
How would I construct that multi-event comparison?

By definition, the transaction command bundles together all events with the same hostname value, starting with the first "yes" and ending with the first "no". There is no option to include events by size, but there are options that govern the maximum time span of a transaction (maxspan), how many events can be in a transaction (maxevents), and how long the time between events can be (maxpause). That the duration value you want to keep (896360) is 10 days even though the previous transaction was only 36 minutes before it makes me wonder about the logic being used in this query. Consider using some of the options available to better define a "transaction".
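For illustration, a hedged sketch of the same search with those limits applied; the maxspan/maxpause/maxevents values below are placeholders to tune for your environment, not recommendations:
index=crowdstrike sourcetype=crowdstrike:device:json
| transaction falcon_device.hostname startswith="falcon_device.reduced_functionality_mode=yes" endswith="falcon_device.reduced_functionality_mode=no" maxspan=30d maxpause=7d maxevents=10000
| table _time duration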
What problem are you trying to solve with this query? It's possible there's another solution that doesn't use transaction (which is very non-performant).

Sans sample data, something like the following will probably work:
index=crowdstrike sourcetype=crowdstrike:device:json falcon_device.hostname=* falcon_device.reduced_functionality_mode=yes
| stats max(_time) as yestime by falcon_device.hostname
| append
[| search index=crowdstrike sourcetype=crowdstrike:device:json falcon_device.hostname=* falcon_device.reduced_functionality_mode=no
| stats max(_time) as notime by falcon_device.hostname ]
| stats values(*) as * by falcon_device.hostname
| eval elapsed_seconds=yestime-notime

Thanks for your answers but it wasn't working out. I ended up talking to some professional splunkers and got the below as a solution.
index=crowdstrike sourcetype=crowdstrike:device:json
| addinfo ```adds info_max_time```
| fields + _time, falcon_device.reduced_functionality_mode falcon_device.hostname info_max_time
| rename falcon_device.reduced_functionality_mode AS mode, falcon_device.hostname AS Hostname
| sort 0 + Hostname, -_time ``` events are not always returned in descending order per hostname, which would break streamstats```
| streamstats current=f last(mode) as new_mode last(_time) as time_change by Hostname ```compute potential time of state change```
| eval new_mode=coalesce(new_mode,mode."+++"), time_change=coalesce(time_change,info_max_time) ```take care of boundaries of search```
| table _time, Hostname, mode, new_mode, time_change
| where mode!=new_mode ```keep only state change events```
| streamstats current=f last(time_change) AS change_end by Hostname ```add start time of the next state as change_end time for the current state```
| fieldformat time_change=strftime(time_change, "%Y-%m-%d %T")
| fieldformat change_end=strftime(change_end, "%Y-%m-%d %T")
``` uncomment the following to sort by duration```
```| search change_end=* AND new_mode="yes"
| eval duration = round( (change_end-time_change)/(3600),1)
| table time_change, Hostname, new_mode, duration
| sort -duration```
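If what you ultimately want is just the single largest RFM stretch per host (the original goal), a hedged addition after uncommenting that block would be a final stats, for example:
| stats max(duration) as longest_rfm_hours by Hostname
(longest_rfm_hours is a field name chosen here for illustration; duration in that block is already in hours.)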

Related

Reading a folder of log files, and calculating the event durations for unique IDs

I have an air-gapped system (so limited in software access) that generates usage logs daily. The logs have unique IDs for devices that I've managed to scrape in the past and pump out to a CSV, which I would then clean up in LibreCalc (related to this question I asked here - https://superuser.com/questions/1732415/find-next-matching-event-in-log-and-compare-timings) and get event durations for each one.
This is getting arduous as more devices are added, so I wish to automate the calculation of the total duration for each device, and how many events occurred for that device. I've had some suggestions of using awk/sed and I'm a bit lost on how to implement it.
Log Example
message="device02 connected" event_ts=2023-01-10T09:20:21Z
message="device05 connected" event_ts=2023-01-10T09:21:31Z
message="device02 disconnected" event_ts=2023-01-10T09:21:56Z
message="device04 connected" event_ts=2023-01-10T11:12:28Z
message="device05 disconnected" event_ts=2023-01-10T15:26:36Z
message="device04 disconnected" event_ts=2023-01-10T18:23:32Z
I already have a bash script that scrapes these events from the log files in the folder and then outputs it all to a csv.
#!/bin/bash
# Datetime stamp for the flatfile
now=$(date +"%Y%m%d")
# Log file path; also defines which month to scrape
LOGFILE='local.log-202301*'
# Show which log files are getting read
echo $LOGFILE
# Output line by line to CSV
awk '(/connect/ && ORS="\n") || (/disconnect/ && ORS=RS) {field1_var=$1" "$2" "$3","; print field1_var}' $LOGFILE > /home/user/logs/LOG_$now.csv
Ideally I'd like to keep that process so I can manually inspect the file if necessary. But ultimately I'd prefer to automate the event calculations to produce something like below:
Desired Output Example
Device Total Connection Duration Total Connections
device01 0h 0m 0s 0
device02 0h 1m 35s 1
device03 0h 0m 0s 0
device04 7h 11m 4s 1
device05 6h 5m 5s 1
Hopefully that's enough info; any help or pointers would be greatly appreciated. Thanks.
This isn't based on your script at all, since I didn't get it to produce a CSV, but anyway...
Here's an AWK script that computes the desired result for the given example log file:
function time_lapsed(from, to) {
    gsub(/[^0-9 ]/, " ", from);
    gsub(/[^0-9 ]/, " ", to);
    return mktime(to) - mktime(from);
}
BEGIN { OFS = "\t"; }
(/ connected/) {
    split($1, a, "=\"", _);
    split($3, b, "=", _);
    device_connected_at[a[2]] = b[2];
    device_connection_count[a[2]]++;
}
(/disconnected/) {
    split($1, a, "=\"", _);
    split($3, b, "=", _);
    device_connection_duration[a[2]] += time_lapsed(device_connected_at[a[2]], b[2]);
}
END {
    print "Device", "Total Connection Duration", "Total Connections";
    for (device in device_connection_duration) {
        # note: strftime formats the total as a clock time, so this only
        # prints correctly for durations under 24h, and assumes TZ=UTC
        print device, strftime("%Hh %Mm %Ss", device_connection_duration[device]), device_connection_count[device];
    }
}
I used it on this example log file
message="device02 connected" event_ts=2023-01-10T09:20:21Z
message="device05 connected" event_ts=2023-01-10T09:21:31Z
message="device02 disconnected" event_ts=2023-01-10T09:21:56Z
message="device04 connected" event_ts=2023-01-10T11:12:28Z
message="device06 connected" event_ts=2023-01-10T11:12:28Z
message="device05 disconnected" event_ts=2023-01-10T15:26:36Z
message="device02 connected" event_ts=2023-01-10T19:20:21Z
message="device04 disconnected" event_ts=2023-01-10T18:23:32Z
message="device02 disconnected" event_ts=2023-01-10T21:41:33Z
And it produces this output
Device Total Connection Duration Total Connections
device02 03h 22m 47s 2
device04 08h 11m 04s 1
device05 07h 05m 05s 1
You can pass this program to awk without any flags (GNU awk, strictly, since it uses mktime and strftime). It should just work, given you didn't mess around with field and record separators somewhere in your shell session.
Let me explain what's going on:
First we define the time_lapsed function. In that function we first convert the ISO8601 timestamps into the format that mktime can handle (YYYY MM DD HH MM SS), we simply drop the offset since it's all UTC. We then compute the difference of the Epoch timestamps that mktime returns and return that result.
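To see that conversion in isolation, here is a throwaway gawk snippet (not part of the script above):
BEGIN {
    ts = "2023-01-10T09:20:21Z";
    gsub(/[^0-9 ]/, " ", ts);     # -> "2023 01 10 09 20 21 "
    print ts, "->", mktime(ts);   # mktime interprets this in the local timezone
}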
Next in the BEGIN block we define the output field separator OFS to be a tab.
Then we define two rules, one for log lines when the device connected and one for when the device disconnected.
Due to the default field separator the input to these rules looks like this:
$1: message="device02
$2: connected"
$3: event_ts=2023-01-10T09:20:21Z
We don't care about $2. We use split to get the device identifier and the timestamp from $1 and $3 respectively.
In the rule for a device connecting, using the device identifier as the key, we store when the device connected and increase the connection count for that device. We don't need to initialize to 0 because awk's associative arrays return "" for keys with no entry, which is coerced to 0 when incremented.
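That coercion is easy to verify on its own with another throwaway gawk snippet:
BEGIN {
    print count["device99"] + 0;   # unset entry reads as "", coerced to 0
    count["device99"]++;
    print count["device99"];       # 1
}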
In the rule for a device disconnecting we compute the time lapsed and add that to the total time elapsed for that device.
Note that this requires every connect to have a matching disconnect in the logs, i.e., this is very fragile: a missing connect log line will mess up the calculation of the total connection time, and a missing disconnect log line will increase the connection count but not the total connection time (a sketch for detecting that case follows after this explanation).
In the END rule we print the desired Output header and for every entry in the associative array device_connection_duration we print the device identifier, total connection duration and total connection count.
I hope this gives you some ideas on how to solve your task.
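As for the missing-disconnect case mentioned above, a hedged sketch of extra rules that would at least detect it (the open array is an addition, not part of the original script):
# track devices that connected but never disconnected
(/ connected/) {
    split($1, a, "=\"");
    open[a[2]] = 1;
}
(/disconnected/) {
    split($1, a, "=\"");
    delete open[a[2]];
}
END {
    for (device in open)
        printf "warning: %s has no matching disconnect\n", device;
}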

What part does priority play in round robin scheduling?

I am trying to solve the following homework problem for an operating systems class:
The following processes are being scheduled using a preemptive, round robin scheduling algorithm. Each process is assigned a numerical priority, with a higher number indicating a higher relative priority.
In addition to the processes listed below, the system also has an idle task (which consumes no CPU resources and is identified as Pidle). This task has priority 0 and is scheduled whenever the system has no other available processes to run.
The length of a time quantum is 10 units.
If a process is preempted by a higher-priority process, the preempted process is placed at the end of the queue.
+--------+----------+-------+---------+
| Thread | Priority | Burst | Arrival |
+--------+----------+-------+---------+
| P1     | 40       | 15    | 0       |
| P2     | 30       | 25    | 25      |
| P3     | 30       | 20    | 30      |
| P4     | 35       | 15    | 50      |
| P5     | 5        | 15    | 100     |
| P6     | 10       | 10    | 105     |
+--------+----------+-------+---------+
a. Show the scheduling order of the processes using a Gantt chart.
b. What is the turnaround time for each process?
c. What is the waiting time for each process?
d. What is the CPU utilization rate?
My question is: what role does priority play when we're considering that this uses the round robin algorithm? I have been thinking about it a lot, and what I have come up with is that priority only makes sense if it matters at the time of a process's arrival, to decide whether it should preempt the running process or not. The reason I have concluded this is that if priority were checked at every context switch, the process with the highest priority would always run indefinitely and other processes would starve. That goes against the idea of round robin: that no process executes for longer than one time quantum, and that after executing, a process goes to the end of the queue.
Using this logic I have worked out the problem as such:
Could you please advise me if I'm on the right track of the role priority has in this situation and if I'm approaching it the right way?
I think you are on the wrong track. Round robin controls the run order within a priority. It is as if each priority has its own queue, and corresponding round robin scheduler. When a given priority’s queue is empty, the subsequent lower priority queues are considered. Eventually, it will hit idle.
If you didn’t process it this way, how would you prevent idle from eventually being scheduled, despite having actual work ready to go?
Most high priority processes are reactive, that is, they execute for a short burst in response to an event, so for the most part they are not on a run/ready queue.
In code:
/* Pick the next process to run: scan the priority queues from highest
   to lowest and resume the first runnable process found. */
void Next() {
    for (int i = PRIO_HI; i >= PRIO_LO; i--) {
        Proc *p;
        if ((p = prioq[i].head) != NULL) {
            Resume(p);
            /*NOTREACHED*/
        }
    }
    panic("Idle not on runq!");
}

/* Block the current process: remove it from its priority queue. */
void Stop() {
    unlink(prioq + curp->prio, curp);
    Next();
}

/* Make a process runnable: reset its quantum and append it to the tail
   of its priority's queue (round robin within the priority). */
void Start(Proc *p) {
    p->countdown = p->reload;
    append(prioq + p->prio, p);
    Next();
}

/* Timer tick: when the quantum expires, rotate the current process to
   the back of its own priority queue. */
void Tick() {
    if (--(curp->countdown) == 0) {
        unlink(prioq + curp->prio, curp);
        Start(curp);
    }
}

How to display percentile stats per URL on the console

I am working on writing some performance tests using Taurus & JMeter.
After executing a set of tests on some URLs, I see the stats on the console as below.
19:03:40 INFO: Percentiles:
+---------------+---------------+
| Percentile, % | Resp. Time, s |
+---------------+---------------+
| 95.0 | 2.731 |
+---------------+---------------+
19:03:40 INFO: Request label stats:
+--------------+--------+---------+--------+-------+
| label | status | succ | avg_rt | error |
+--------------+--------+---------+--------+-------+
| /v1/brands | OK | 100.00% | 2.730 | |
| /v1/catalogs | OK | 100.00% | 1.522 | |
+--------------+--------+---------+--------+-------+
I'm wondering if there is a way to display other stats per URL, for example percentile response time per URL.
Below are all the stats that can be captured from Taurus (according to the Taurus documentation), but I couldn't figure out the configuration required to display them on the console. I'd appreciate any help.
label - is the sample group for which this CSV line presents the stats. Empty label means total of all labels
concurrency - average number of Virtual Users
throughput - total count of all samples
succ - total count of not-failed samples
fail - total count of failed samples
avg_rt - average response time
stdev_rt - standard deviation of response time
avg_ct - average connect time if present
avg_lt - average latency if present
rc_200 - counts for specific response codes
perc_0.0 .. perc_100.0 - percentile levels for response time, 0 is also minimum response time, 100 is maximum
bytes - total download size
Looking into the documentation on the Taurus Console Reporter, it seems only the following parameters can be amended:
modules:
  console:
    # disable console reporter
    disable: false  # default: auto

    # configure screen type
    screen: console
    # valid values are:
    # - console (ncurses-based dashboard, default for *nix systems)
    # - gui (window-based dashboard, default for Windows, requires Tkinter)
    # - dummy (text output into console for non-tty cases)
    dummy-cols: 140  # width for dummy screen
    dummy-rows: 35   # height for dummy screen
If you can understand and write Python code, you can try amending the reporting.py file, which is responsible for generating stats and the summary table. This is a good place to start:
def __report_summary_labels(self, cumulative):
    data = [("label", "status", "succ", "avg_rt", "error")]
    justify = {0: "left", 1: "center", 2: "right", 3: "right", 4: "left"}

    sorted_labels = sorted(cumulative.keys())
    for sample_label in sorted_labels:
        if sample_label != "":
            data.append(self.__get_sample_element(cumulative[sample_label], sample_label))
    table = SingleTable(data) if sys.stdout.isatty() else AsciiTable(data)
    table.justify_columns = justify
    self.log.info("Request label stats:\n%s", table.table)
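For instance, a hypothetical few lines inside that method could log a per-label 95th percentile from the cumulative data; the "perc" and "95.0" key names below are assumptions about the KPISet layout, so verify them against your Taurus version:
# assumed layout: cumulative[label]["perc"] maps percentile strings
# like "95.0" to response times -- not verified, check your install
for sample_label in sorted(cumulative.keys()):
    if sample_label != "":
        percentiles = cumulative[sample_label].get("perc", {})
        self.log.info("%s perc_95=%s", sample_label, percentiles.get("95.0"))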
Alternatively, you can use Online Interactive Reports or configure your JMeter test to use Grafana and InfluxDB as third-party metrics storage and visualisation systems.

Process timespan for each start/end pair using Hive script

I have a service that can be started or stopped. Each operation generates a record with a timestamp and the operation type, so I end up with a series of timestamped operation records. Now I want to calculate the up-time of the service during a day. The idea is simple: for each pair of start/stop records, compute the timespan and sum up. But I don't know how to implement it in Hive, if it is possible at all. It's fine if I need to create tables to store intermediate results. This is the main blocking issue, and there are some minor issues as well; for example, some start/stop pairs may span across a day boundary. Any idea how to deal with this minor issue would be appreciated too.
Sample Data:
Timestamp Operation
... ...
2017-09-03 23:59:00 Start
2017-09-04 00:01:00 Stop
2017-09-04 06:50:00 Start
2017-09-04 07:00:00 Stop
2017-09-05 08:00:00 Start
... ...
The service up-time for 2017-09-04 should then be 1 + 10 = 11 mins. Note that the first time interval spans across 09-03 and 09-04, and only the part that falls within 09-04 is counted.
select to_date(from_ts) as dt
      ,sum(to_unix_timestamp(to_ts) - to_unix_timestamp(from_ts)) / 60 as up_time_minutes
from
(   -- split each Start..Stop interval at day boundaries
    select case when pe.i = 0 then from_ts
                else cast(date_add(to_date(from_ts), i) as timestamp)
           end as from_ts
          ,case when pe.i = datediff(to_ts, from_ts) then to_ts
                else cast(date_add(to_date(from_ts), i + 1) as timestamp)
           end as to_ts
    from
    (   -- pair each operation with the one that follows it
        select `operation`
              ,`Timestamp` as from_ts
              ,lead(`Timestamp`) over (order by `Timestamp`) as to_ts
        from t
    ) t
    -- one row per calendar day covered by the interval
    lateral view posexplode(split(space(datediff(to_ts, from_ts)), ' ')) pe as i, x
    where `operation` = 'Start'
      and to_ts is not null
) t
group by to_date(from_ts)
;
+------------+-----------------+
| dt | up_time_minutes |
+------------+-----------------+
| 2017-09-03 | 1.0 |
| 2017-09-04 | 11.0 |
+------------+-----------------+
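The trick is in the lateral view: space(datediff(to_ts, from_ts)) produces a string of n spaces, split turns it into n+1 empty tokens, and posexplode emits one row per token together with its index i, i.e., one row per calendar day the interval touches. For the boundary-spanning sample row (Start 2017-09-03 23:59:00, Stop 2017-09-04 00:01:00, datediff = 1) that yields i = 0 for the 09-03 portion and i = 1 for the 09-04 portion. A minimal standalone illustration (syntax for a FROM-less select may vary across Hive versions):
select pe.i, pe.x
from (select 1 as one) dummy
lateral view posexplode(split(space(1), ' ')) pe as i, x;
-- space(1) = " " -> split gives ["", ""] -> rows with i = 0 and i = 1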

Apache Pig: FLATTEN and parallel execution of reducers

I have implemented an Apache Pig script. When I execute the script it results in many mappers for a specific step, but has only one reducer for that step. Because of this condition (many mappers, one reducer) the Hadoop cluster is almost idle while the single reducer executes. In order to better use the resources of the cluster I would like to also have many reducers running in parallel.
Even if I set the parallelism in the Pig script using the SET DEFAULT_PARALLEL command, I still end up with only one reducer.
The code part issuing the problem is the following:
SET DEFAULT_PARALLEL 5;
inputData = LOAD 'input_data.txt' AS (group_name:chararray, item:int);
inputDataGrouped = GROUP inputData BY (group_name);
-- The GeneratePairsUDF generates a bag containing pairs of integers, e.g. {(1, 5), (1, 8), ..., (8, 5)}
pairs = FOREACH inputDataGrouped GENERATE GeneratePairsUDF(inputData.item) AS pairs_bag;
pairsFlat = FOREACH pairs GENERATE FLATTEN(pairs_bag) AS (item1:int, item2:int);
The 'inputData' and 'inputDataGrouped' aliases are computed in the mapper; 'pairs' and 'pairsFlat' are computed in the reducer.
If I change the script by removing the line with the FLATTEN command (pairsFlat = FOREACH pairs GENERATE FLATTEN(pairs_bag) AS (item1:int, item2:int);) then the execution results in 5 reducers (and thus in a parallel execution).
It seems that the FLATTEN command is the problem and prevents multiple reducers from being created.
How could I achieve the same result as FLATTEN but have the script execute in parallel (with many reducers)?
Edit:
EXPLAIN plan when having two FOREACH (as above):
Map Plan
inputDataGrouped: Local Rearrange[tuple]{chararray}(false) - scope-32
| |
| Project[chararray][0] - scope-33
|
|---inputData: New For Each(false,false)[bag] - scope-29
| |
| Cast[chararray] - scope-24
| |
| |---Project[bytearray][0] - scope-23
| |
| Cast[int] - scope-27
| |
| |---Project[bytearray][1] - scope-26
|
|---inputData: Load(file:///input_data.txt:org.apache.pig.builtin.PigStorage) - scope-22--------
Reduce Plan
pairsFlat: Store(fakefile:org.apache.pig.builtin.PigStorage) - scope-42
|
|---pairsFlat: New For Each(true)[bag] - scope-41
| |
| Project[bag][0] - scope-39
|
|---pairs: New For Each(false)[bag] - scope-38
| |
| POUserFunc(GeneratePairsUDF)[bag] - scope-36
| |
| |---Project[bag][1] - scope-35
| |
| |---Project[bag][1] - scope-34
|
|---inputDataGrouped: Package[tuple]{chararray} - scope-31--------
Global sort: false
EXPLAIN plan when having only one FOREACH with FLATTEN wrapping the UDF:
Map Plan
inputDataGrouped: Local Rearrange[tuple]{chararray}(false) - scope-29
| |
| Project[chararray][0] - scope-30
|
|---inputData: New For Each(false,false)[bag] - scope-26
| |
| Cast[chararray] - scope-21
| |
| |---Project[bytearray][0] - scope-20
| |
| Cast[int] - scope-24
| |
| |---Project[bytearray][1] - scope-23
|
|---inputData: Load(file:///input_data.txt:org.apache.pig.builtin.PigStorage) - scope-19--------
Reduce Plan
pairs: Store(fakefile:org.apache.pig.builtin.PigStorage) - scope-36
|
|---pairs: New For Each(true)[bag] - scope-35
| |
| POUserFunc(GeneratePairsUDF)[bag] - scope-33
| |
| |---Project[bag][1] - scope-32
| |
| |---Project[bag][1] - scope-31
|
|---inputDataGrouped: Package[tuple]{chararray} - scope-28--------
Global sort: false
There is no guarantee that Pig uses the DEFAULT_PARALLEL value for every step in the script. Try PARALLEL on the specific join/group step that you feel is taking the time (in your case, the GROUP step):
inputDataGrouped = GROUP inputData BY (group_name) PARALLEL 67;
If it is still not working, you might have to check your data for a skew issue.
I think there is skew in the data: only a small number of mappers are producing disproportionately large output. Look at the distribution of keys in your data, e.g., whether it contains a few groups with a very large number of records.
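A quick way to check for that kind of skew, as a hedged sketch reusing the inputData alias from the script above:
-- count records per group_name and inspect the heaviest keys
keyCounts = FOREACH (GROUP inputData BY group_name)
            GENERATE group AS group_name, COUNT(inputData) AS n;
ordered = ORDER keyCounts BY n DESC;
heaviest = LIMIT ordered 10;
DUMP heaviest;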
I tried "set default parallel" and "PARALLEL 100" but no luck. Pig still uses 1 reducer.
It turned out I have to generate a random number from 1 to 100 for each record and group these records by that random number.
We are wasting time on grouping, but it is much faster for me because now I can use more reducers.
Here is the code (SUBMITTER is my own UDF):
tmpRecord = FOREACH record GENERATE (int)(RANDOM()*100.0) as rnd, data;
groupTmpRecord = GROUP tmpRecord BY rnd;
result = FOREACH groupTmpRecord GENERATE FLATTEN(SUBMITTER(tmpRecord));
To answer your question we must first know how many reducers Pig enforces for the Global Rearrange step, because as per my understanding the GENERATE/projection should not require a single reducer. I cannot say the same thing about FLATTEN. However, we know from common sense that during a flatten the aim is to un-nest the tuples from bags (and vice versa), and to do that all the tuples belonging to a bag must be available in the same reducer. I might be wrong, but can anyone add something here to get this user an answer, please?
