Summing a parsed field in Grafana Loki

I am trying to sum the count of rows (row_count) being inserted/updated according to my process logs, which look similar to the following line:
{"function": "Data Processing Insert", "module": "Data Processor", "environment": "DEV", "level": "INFO", "message": "Number of rows inserted", "time": "2022-04-29T09:07:02.735Z", "epoch_time": 1651223222.735133, "row_count": "8089"}
I'm able to build the filter to get those lines but haven't been able to perform calculations on row_count. How would I go about doing that?

The Grafana community channel came through. To do what I'm looking for, I had to use unwrap:
sum_over_time({__aws_cloudwatch_log_group="/aws/batch/job"} | json | message=~".+inserted" | unwrap row_count [1h])
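If you want to sanity-check a metric query like this outside a dashboard, something along these lines should work with logcli (a sketch; the Loki address is an assumption, and the label and filter are taken from the query above):

logcli --addr="http://localhost:3100" query \
  'sum_over_time({__aws_cloudwatch_log_group="/aws/batch/job"} | json | message=~".+inserted" | unwrap row_count [1h])' \
  --since=24h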

Related

Renaming a field with a special character using Single Message Transform in Kafka Connect

I'm using an SMT in the Splunk Sink Connector and having trouble renaming a field. I need to change the name of a field from, for example, "metric1" to "metric_name:cpu.usr".
This is what my connector configuration looks like:
SPLUNK_SINK_CONNECTOR_CONFIG='{
  "name": "splunk_sink_connector_1",
  "config": {
    "connector.class": "com.splunk.kafka.connect.SplunkSinkConnector",
    "tasks.max": "1",
    "splunk.indexes": "'"$SPLUNK_INDEX"'",
    "topics": "metrics_for_splunk",
    "splunk.hec.uri": "'"$SPLUNK_HEC_URI"'",
    "splunk.hec.token": "'"$SPLUNK_HEC_TOKEN"'",
    "splunk.hec.raw": "true",
    "offset.flush.interval.ms": 1000,
    "splunk.hec.json.event.formatted": "true",
    "transforms": "renameField,HoistField,insertTS,convertTS,insertEvent",
    "transforms.renameField.type": "org.apache.kafka.connect.transforms.ReplaceField$Value",
    "transforms.renameField.renames": "metric1:"metric_name:cpu.usr"",
    "transforms.HoistField.type": "org.apache.kafka.connect.transforms.HoistField$Value",
    "transforms.HoistField.field": "fields",
    "transforms.insertTS.type": "org.apache.kafka.connect.transforms.InsertField$Value",
    "transforms.insertTS.timestamp.field": "message_timestamp",
    "transforms.convertTS.type": "org.apache.kafka.connect.transforms.TimestampConverter$Value",
    "transforms.convertTS.format": "yyyy-MM-dd hh:mm",
    "transforms.convertTS.target.type": "string",
    "transforms.convertTS.field": "message_timestamp",
    "transforms.insertEvent.type": "org.apache.kafka.connect.transforms.InsertField$Value",
    "transforms.insertEvent.static.field": "event",
    "transforms.insertEvent.static.value": "metric"
  }
}'
When I try to run this connector, I get an error:
Connector configuration is invalid and contains the following 1 error(s):\nInvalid value [metric1:metric_name:cpu.usr] for configuration renames: Invalid rename mapping: metric1:metric_name:cpu.usr\n
If I run this connector without the renameField SMT, everything goes without a hitch.
I understand that the problem is the ":" character in the field name. I've tried wrapping metric_name:cpu.usr like this:
"transforms.renameField.renames": "metric1:'metric_name:cpu.usr'"
and like this
"transforms.renameField.renames": "metric1:'"metric_name:cpu.usr"'"
and like this
"transforms.renameField.renames": "metric1:"metric_name:cpu.usr""
and using the escape character \ before the colon:
"transforms.renameField.renames": "metric1:metric_name\:cpu.usr"
with no positive effect.
Is it possible at all to use this SMT for renaming when the field name contains a special character?
Or is there some handy workaround?
This use case seems like it should be fairly common, but I haven't found anything on the web.
I'd be very grateful for any advice.
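One thing worth checking before fighting the quoting any further: the error text above ("Invalid rename mapping: metric1:metric_name:cpu.usr") suggests the SMT splits each renames entry on ":" into exactly two parts, so no shell- or JSON-level quoting can smuggle a colon through. Also, nested unescaped double quotes usually break the JSON document itself. A minimal sketch for validating the payload first, assuming jq is available:

# jq will report a parse error if the nested quotes broke the JSON payload
echo "$SPLUNK_SINK_CONNECTOR_CONFIG" | jq . > /dev/null \
  && echo "payload is valid JSON" \
  || echo "payload is broken before it ever reaches Kafka Connect"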

How to use grok patterns to match these logs?

The log content looks like this:
[2018-07-09 11:30:59] [13968] [INFO] [1e74b6b7-fcb2-4dde-a259-7db1de0350ea] run entry() 11ms
[2018-07-09 11:30:59] [13968] [INFO] [1e74b6b7-fcb2-4dde-a259-7db1de0350ea] entry done
The first line logs function-call info with an execution time; the other line is a normal log line.
Now I want to match all of them, and if there is an exec time in the line, I want to capture it.
I wrote a grok pattern like this:
\[%{TIMESTAMP_ISO8601:timestamp}\] \[%{NUMBER:process_id}\] \[%{LOGLEVEL:loglevel}\] \[%{UUID:request_id}\] %{DATA:message}(\s%{NUMBER:use_time}ms)?
It does not work. The match result is:
{
  "process_id": "13968",
  "loglevel": "INFO",
  "message": "",
  "request_id": "1e74b6b7-fcb2-4dde-a259-7db1de0350ea",
  "timestamp": "2018-07-09 11:30:59"
}
If I change DATA:message to GREEDYDATA:message, it cannot match the exec time.
I solved it by using a raw regexp to replace the whole message field:
\[%{TIMESTAMP_ISO8601:timestamp}\] \[%{NUMBER:process_id}\] \[%{LOGLEVEL:loglevel}\] \[%{UUID:request_id}\] (?<message>.+ (?:(?<use_time>\d+(?:\.\d+)?)ms)?.*)
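A quick way to sanity-check just the exec-time capture outside of grok (a sketch using plain PCRE via GNU grep; the grok macros are omitted since they expand to ordinary regexes anyway):

line='[2018-07-09 11:30:59] [13968] [INFO] [1e74b6b7-fcb2-4dde-a259-7db1de0350ea] run entry() 11ms'
# The lookahead keeps only the number and drops the trailing "ms"; prints: 11
echo "$line" | grep -oP '\d+(?:\.\d+)?(?=ms$)'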

Append an array to a json using jq in BASH

I have a JSON document that looks like this:
{
  "failedSet": [],
  "successfulSet": [{
    "event": {
      "arn": "arn:aws:health:us-east-1::event/AWS_RDS_MAINTENANCE_SCHEDULED_xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxx",
      "endTime": 1502841540.0,
      "eventTypeCategory": "scheduledChange",
      "eventTypeCode": "AWS_RDS_MAINTENANCE_SCHEDULED",
      "lastUpdatedTime": 1501208541.93,
      "region": "us-east-1",
      "service": "RDS",
      "startTime": 1502236800.0,
      "statusCode": "open"
    },
    "eventDescription": {
      "latestDescription": "We are contacting you to inform you that one or more of your Amazon RDS DB instances is scheduled to receive system upgrades during your maintenance window between August 8 5:00 PM and August 15 4:59 PM PDT. Please see the affected resource tab for a list of these resources. \r\n\r\nWhile the system upgrades are in progress, Single-AZ deployments will be unavailable for a few minutes during your maintenance window. Multi-AZ deployments will be unavailable for the amount of time it takes a failover to complete, usually about 60 seconds, also in your maintenance window. \r\n\r\nPlease ensure the maintenance windows for your affected instances are set appropriately to minimize the impact of these system upgrades. \r\n\r\nIf you have any questions or concerns, contact the AWS Support Team. The team is available on the community forums and by contacting AWS Premium Support. \r\n\r\nhttp://aws.amazon.com/support\r\n"
    }
  }]
}
I'm trying to add a new key/value pair under successfulSet[].event (with the key name affectedEntities) using jq. I've seen some examples, but none of those answers really show how to add a single key with possibly multiple values (I say possibly because, as of now, AWS returns one value for the affected entity, but if there are more, I'd like to list them all).
EDIT: The value of the new key that I want to add is stored in a variable called $affected_entities, and a sample value looks like this:
[
  "arn:aws:acm:us-east-1:xxxxxxxxxxxxxx:certificate/xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx"
]
The value could look like this:
[
  "arn:aws:acm:us-east-1:xxxxxxxxxxxxxx:certificate/xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx",
  "arn:aws:acm:us-east-1:xxxxxxxxxxxxxx:certificate/xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx",
  ...
  ...
  ...
]
You can use this jq command:
jq '.successfulSet[].event += { "new_key" : "new_value" }' file.json
EDIT:
Try this:
jq --argjson argval "$new_value" '.successfulSet[].event += { "affected_entities" : $argval }' file.json
Test:
sat~$ new_value='[
"arn:aws:acm:us-east-1:xxxxxxxxxxxxxx:certificate/xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx"
]'
sat~$ jq --argjson argval "$new_value" '.successfulSet[].event += { "affected_entities" : $argval }' file.json
Note that --argjson works with jq 1.5 and above.
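One practical note: jq doesn't edit files in place, so to persist the change, write the output to a temporary file and move it over the original (a minimal sketch reusing the command above):

tmp=$(mktemp)
jq --argjson argval "$new_value" \
  '.successfulSet[].event += { "affected_entities" : $argval }' file.json > "$tmp" \
  && mv "$tmp" file.json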

Is there a way I can get historic performance data of various alerts in Nagios as json/xml?

I am looking to get performance data for the various alerts set up in my Nagios Core/XI. I think it is stored in RRDs. Is there a way I can get access to it?
If you're using Nagios XI you can get this data a few different ways.
If you're using XI 5 or later, the easiest way that springs to mind is the API. Log in to your XI server as an administrator, navigate to the 'Help' menu, then select 'Objects Reference' in the left-hand navigation and find 'GET objects/rrdexport' in the Objects Reference navigation box (or just scroll down to near the bottom).
An example curl might look like this:
curl -XGET "http://nagiosxi/nagiosxi/api/v1/objects/rrdexport?apikey=YOURAPIKEY&pretty=1&host_name=localhost"
Your response should look something like:
{
  "meta": {
    "start": "1453838100",
    "step": "300",
    "end": "1453838400",
    "rows": "2",
    "columns": "4",
    "legend": {
      "entry": [
        "rta",
        "pl",
        "rtmax",
        "rtmin"
      ]
    }
  },
  "data": {
    "row": [
      {
        "t": "1453838100",
        "v": [
          "6.0373333333e-03",
          "0.0000000000e+00",
          "1.7536000000e-02",
          "3.0000000000e-03"
        ]
      },
      {
        "t": "1453838400",
        "v": [
          "6.0000000000e-03",
          "0.0000000000e+00",
          "1.7037333333e-02",
          "3.0000000000e-03"
        ]
      }
    ]
  }
}
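Since the response is plain JSON, it's easy to post-process. For example, a sketch that pulls just the timestamp and the first metric (rta, per the legend above) out of each row:

curl -s "http://nagiosxi/nagiosxi/api/v1/objects/rrdexport?apikey=YOURAPIKEY&host_name=localhost" \
  | jq -r '.data.row[] | [.t, .v[0]] | @tsv'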
BUT WAIT, THERE IS ANOTHER WAY
This way will work no matter what version you're on, and would actually work if you were processing performance data with NPCD on a Core system as well.
Log in to your server via ssh or console and get your butt over to the /usr/local/nagios/share/perfdata directory. From here we're going to use the localhost object as an example.
$ cd /usr/local/nagios/share/perfdata/
$ ls
localhost
$ cd localhost/
$ ls
Current_Load.rrd Current_Users.xml HTTP.rrd PING.xml SSH.rrd Swap_Usage.xml
Current_Load.xml _HOST_.rrd HTTP.xml Root_Partition.rrd SSH.xml Total_Processes.rrd
Current_Users.rrd _HOST_.xml PING.rrd Root_Partition.xml Swap_Usage.rrd Total_Processes.xml
$ rrdtool dump _HOST_.rrd
Once you run the rrdtool dump command, there is going to be an awful lot of output, so I'll leave that as an exercise for you, the reader ;)
If you're trying to automate something, note that the .xml files contain metadata for the .rrd files and could be useful to parse first.
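If you'd rather not wade through a full dump, rrdtool xport can slice out a time range as XML directly, and newer versions can emit JSON with --json. A sketch (the data source name "rta" is an assumption based on typical host-check RRDs, so check the matching .xml meta file for the real DS names first):

# Export the last hour of the "rta" data source as JSON (drop --json for XML)
rrdtool xport --json --start -1h --end now \
  DEF:rta=_HOST_.rrd:rta:AVERAGE \
  XPORT:rta:"round trip average"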
Also, if you're anything like me, you love reading technical manuals. Here is a great one to read: RRDTool documentation
Hope this helped!

Using json_spec gem's 'at "path" should include:' step is not working as I would expect

(apologies in advance for the cucumber steps, they need to be cleaned up a fair bit to flow better)
I am using a combination of Cucumber, the rest-client gem, and the json_spec gem to create a test suite for a RESTful API. The approach is similar to that given in the Cucumber Book. Note that in this case, since the 'customer' is a developer, the 'language of the business' is far more technical than you would normally express in Cucumber scenarios.
I am having a problem with the json_spec cucumber step "Then the JSON at "path" should include:"
My scenario looks like this
Scenario Outline: GET to OR Packages for specific package uuid returns proper data
  Given I create a request body from the following JSON data
    """
    {
      "package":
      {
        "name": "anothertestpackage",
        "description": "This is a test, testing 4 5 6",
        "package_type": <package_type>,
        "duration": 30,
        "start_date": "2012-03-01T08:00:00Z"
      }
    }
    """
  And I have the URI for a new: package made in: OR from the request body
  When I make a: GET request to the URI for: my new package with no query string
  Then the JSON at "package" should include:
    """
    {
      "name": "anothertestpackage",
      "description": "This is a test, testing 4 5 6",
      "package_type": <package_type>,
      "duration": 30,
      "start_date": "2012-03-01T08:00:00Z"
    }
    """

  Examples:
    | package_type  |
    | "IMPRESSIONS" |
    | "CLICKS"      |
    | "LEADS"       |
And the contents of last_json look like this at the point the Then step is executed:
{
  "package": {
    "status": "NEW",
    "account": {
      "resource_uri": "/api/v0001/accounts/fecdbb85a3584ca59820a321c3c2767d"
    },
    "name": "anothertestpackage",
    "package_type": "IMPRESSIONS",
    "margin_goal": "0.5000",
    "duration": 30,
    "resource_uri": "/api/v0001/packages/fecdbb85a3584ca59820a321c3c2767d/feea333776c9454c92edab8e73628cbd",
    "start_date": "2012-03-01T08:00:00Z",
    "description": "This is a test, testing 4 5 6"
  }
}
I would think the step should pass, but I'm getting this error instead:
Expected included JSON at path "package" (RSpec::Expectations::ExpectationNotMetError)
features\OR\API\OR_API_Packages.feature:70:in `Then the JSON at "package" should include:'
It is unclear what that error is telling me about what is wrong. Is this user error? Should I be using a different means to determine whether the expected key/value pairs are present in the JSON returned by the API? I don't really see any examples of this kind of comparison in your feature files for the gem, so it is difficult to know whether this is what include was intended for.
Heh, I just got an answer from one of the gem authors via another venue. I'll post it here:
Include was intended more for simple value inclusion, mostly in
arrays. If an API index response returned an array of objects, you
could assert that the array includes one whole, identical object.
Check out the matcher specs for examples.
For what you're doing, I'd break it out into separate steps:
Then the JSON at "package/name" should be "anothertestpackage"
And the JSON at "package/description" should be "This is a test, testing 4 5 6"
And the JSON at "package/package_type" should be <package_type>
And the JSON at "package/duration" should be 30
And the JSON at "package/start_date" should be "2012-03-01T08:00:00Z"
Or you could use a table format to make that more succinct, such as:
Then the JSON at "package" should have the following:
| name | "anothertestpackage" |
| description | "This is a test, testing 4 5 6" |
| package_type | <package_type> |
| duration | 30 |
| start_date | "2012-03-01T08:00:00Z" |
I hope that helps! Thanks for the question.
Indeed it did help very much, thank you 'laserlemon'
