Flux InfluxDB query not working correctly - dashboard

I'm using InfluxDB and Telegraf to build a dashboard.
InfluxDB v2.4.0 (git: de247bab08) build_date: 2022-08-18T19:41:15Z
Telegraf 1.23.4 (git: HEAD 5b48f5da)
My goal is to create a table showing the "firmware_version" of each "MCU".
The Telegraf input is an mqtt_consumer with this config:
[[inputs.mqtt_consumer]]
  ....
  topics = [ "+/+/hal" ]
  name_override = "test"
  data_format = "json_v2"

  [[inputs.mqtt_consumer.json_v2]]
    [[inputs.mqtt_consumer.json_v2.object]]
      path = "mcu"
      included_keys = ["target", "version"]
      disable_prepend_keys = true
The input (MQTT payload) looks like this
{
  "mcu": [
    {
      "target": "MCU_1",
      "version": "2022.08.1"
    },
    {
      "target": "MCU_2",
      "version": "2022.08.2"
    }
  ]
}
In addition to this mqtt_consumer, I have a regex processor that parses the topic to extract the "cliend_id" and "device_id", where topic = <cliend_id>/<device_id>/hal.
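(The regex processor itself is not shown; as a rough sketch only, and purely as an assumption about how it might be configured, it could derive both tags from the "topic" tag that mqtt_consumer adds by default:)

[[processors.regex]]
  namepass = ["test"]

  # Assumption: extract the first topic segment into a "cliend_id" tag
  [[processors.regex.tags]]
    key = "topic"
    pattern = "^([^/]+)/([^/]+)/hal$"
    replacement = "${1}"
    result_key = "cliend_id"

  # Assumption: extract the second topic segment into a "device_id" tag
  [[processors.regex.tags]]
    key = "topic"
    pattern = "^([^/]+)/([^/]+)/hal$"
    replacement = "${2}"
    result_key = "device_id"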
The desired output is a table in the InfluxDB web admin interface where I can see the version per MCU, per device, per client:
Client   | Device   | Target | Version
-------- | -------- | ------ | ---------
Client_1 | Device_1 | MCU_1  | 2022.08.1
Client_1 | Device_1 | MCU_2  | 2022.08.2
Client_2 | Device_2 | MCU_1  | 2022.07.1
Client_2 | Device_2 | MCU_2  | 2022.07.2
With my humble understanding of the query syntax, I came up with this query:
from(bucket: "my_bucket")
  |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
  |> filter(fn: (r) => r["_measurement"] == "test")
  |> filter(fn: (r) => r["_field"] == "version")
  |> aggregateWindow(every: v.windowPeriod, fn: last, createEmpty: false)
  |> last()
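A possible shape for a query that produces the table above (a sketch only; it assumes "target" is emitted as a tag, e.g. via tags = ["target"] in the json_v2 object, and that the regex processor adds "cliend_id" and "device_id" as tags):

from(bucket: "my_bucket")
  |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
  |> filter(fn: (r) => r["_measurement"] == "test")
  |> filter(fn: (r) => r["_field"] == "version")
  |> last()
  // merge all series into one table and keep only the columns of interest
  |> group()
  |> keep(columns: ["cliend_id", "device_id", "target", "_value"])
  |> rename(columns: {_value: "version"})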

Related

Azure Application Insights - recreate the singleton instance with the modified configuration

I would like to track SQL command texts using Application Insights (setting/overriding the value of EnableSqlCommandTextInstrumentation in a running application) on demand through a reloadable configuration. The ConfigureTelemetryModule registration in the Microsoft.ApplicationInsights SDK uses a singleton, which prevents me from using IOptionsSnapshot. Can anyone suggest some ideas for overriding the config value EnableSqlCommandTextInstrumentation at runtime? Thank you.
Program.cs
var builder = WebApplication.CreateBuilder(args);
...
var myOptions = new MyOptions();
var configSection = builder.Configuration.GetSection(MyOptions.Name);
configSection.Bind(myOptions);
builder.Services.Configure<MyOptions>(configSection);
builder.Services
    .AddSingleton<ITelemetryInitializer>(_ => new MyTelemetryInitializer(applicationName))
    .ConfigureTelemetryModule<DependencyTrackingTelemetryModule>(
        (module, _) =>
        {
            module.EnableSqlCommandTextInstrumentation = myOptions.EnableSqlCommandTextInstrumentation;
        })
    .AddApplicationInsightsTelemetry(configuration);
Can anyone please suggest some ideas to override the config value EnableSqlCommandTextInstrumentation at runtime?
AFAIK, we cannot change the value of EnableSqlCommandTextInstrumentation at runtime.
module.EnableSqlCommandTextInstrumentation accepts a bool value, so you can only enable or disable the SQL command text instrumentation.
As per the MS doc, you have to keep the setting in your host.json file ("EnableDependencyTracking": true).
Program.cs
builder.Services
    .AddSingleton<ITelemetryInitializer>(_ => new MyTelemetryInitializer(applicationName))
    .ConfigureTelemetryModule<DependencyTrackingTelemetryModule>(
        (module, _) =>
        {
            module.EnableSqlCommandTextInstrumentation = true;
        })
host.json
{
  "version": "2.0",
  "logging": {
    "applicationInsights": {
      "samplingSettings": {
        "isEnabled": false,
        "excludedTypes": "Exception"
      },
      "dependencyTrackingOptions": {
        // Enable SQL command text instrumentation to collect the command text
        "enableSqlCommandTextInstrumentation": true
      }
    }
  }
}

Jenkins: read JSON file with multiple list values (JsonSlurper or readJSON)

I want to be able to read the JSON based on the parameter value selected in the choice parameter. E.g. if Dev is selected, it should select (Dev1, Dev2, Dev3) and loop through each selected entry from the JSON on the node. What is important now is to get the values from the JSON into a file and then call them from the file on the node.
error:
groovy.lang.MissingMethodException: No signature of method: net.sf.json.JSONObject.$() is applicable for argument types: (org.jenkinsci.plugins.workflow.cps.CpsClosure2) values: [org.jenkinsci.plugins.workflow.cps.CpsClosure2@7e1eb88f]
Possible solutions: is(java.lang.Object), any(), get(java.lang.Object), get(java.lang.String), has(java.lang.String), opt(java.lang.String)
script in pipeline
#!/usr/bin/env groovy
node {
    properties([
        parameters([
            choice(
                name: 'environment',
                choices: ['', 'Dev', 'Stage', 'devdb', 'PreProd', 'Prod'],
                description: 'environment to choose'
            ),
        ])
    ])
    node () {
        def myJson = '''{
            "Dev": [
                "Dev1",
                "Dev2",
                "Dev3"
            ],
            "Stage": [
                "Stage1",
                "Stage2"
            ],
            "PreProd": [
                "Preprod1"
            ],
            "Prod": [
                "Prod1",
                "Prod2"
            ]
        }''';
        def myObject = readJSON text: myJson;
        echo myObject.${params.environment};
        // put the list of the node in a file or in a list to loop
    }
}
Using Pipeline Utility Steps, it can be easier:
Reading from string:
def obj = readJSON text: myjson
Or reading from file:
def obj = readJSON file: 'myjsonfile.json'
Now you can get the element and iterate the list:
def list = obj[params.environment]
list.each { elem ->
    echo "Item: ${elem}"
}
Reference: https://www.jenkins.io/doc/pipeline/steps/pipeline-utility-steps/#readjson-read-json-from-files-in-the-workspace
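Tying this back to the question (a sketch; it assumes the JSON and the 'environment' choice parameter from the question, and that entries such as Dev1 are agent labels), the corrected lookup and loop could look like:

def myObject = readJSON text: myJson
def targets = myObject[params.environment]   // bracket lookup instead of myObject.${params.environment}
targets.each { target ->
    node(target) {
        echo "Running on ${target}"
    }
}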
Let me make it simple.
To read the JSON file you need to download it from Git or wherever you store it; let's assume Git in this case. Once the JSON file is downloaded, you can access its content in your code with the following function.
import groovy.json.JsonSlurperClassic

def downloadConfigFile(gitProjectURL, jsonnFileBranch) {
    // Variables
    def defaultBranch = jsonnFileBranch
    def gitlabAdminCredentials = 'admin'
    def poll = false
    def jenkinsFilePath = 'jenkins.json'

    // Git checkout
    git branch: defaultBranch, credentialsId: gitlabAdminCredentials, poll: poll, url: gitProjectURL

    // Check if file existed or not
    def jenkinsFile = fileExists(jenkinsFilePath)
    if (jenkinsFile) {
        def jsonStream = readFile(jenkinsFilePath)
        JsonSlurperClassic slurper = new JsonSlurperClassic()
        def parsedJson = slurper.parseText(jsonStream)
        return parsedJson
    } else {
        return [:]
    }
}
Now we have the entire JSON file parsed using the above function.
You can call the function and read the value into a global variable:
stage('Download Config') {
    jsonConfigData = downloadConfigFile(gitProjectURL, jsonnFileBranch)
    if (jsonConfigData.isEmpty()) {
        error err_jenkins_file_does_not_exists
    } else {
        println("jsonConfigData : ${jsonConfigData}")
    }
}
Now you can access a value from the JSON file (that is, from the variable) like this:
def projectName = jsonConfigData.containsKey('project_name') == true ? jsonConfigData['project_name'] : ''
You can access any child node in a similar way. I hope it helps you.

Kubernetes Pods Restart Notification alerts on my team's channel

My pods are running on an AKS cluster. Whenever a pod restarts, I need to get a notification on my team's channel. Are there any articles or commands for configuring such a notification?
For this, you can use a tool like Botkube: https://www.botkube.io/
Also check out Kubewatch: https://github.com/bitnami-labs/kubewatch
You can also set up Grafana with Prometheus and Alertmanager for monitoring and alerting: https://github.com/grafana-operator/grafana-operator
However, if you are not looking for any tools or applications, you can write a custom script in Python, Node, or any language you are comfortable with that monitors pod restart events and sends a Slack webhook event.
Here is one example Python script that checks whether a pod is running or crashing and sends a notification to Slack; you can update the logic as per your needs.
from kubernetes import client, config, watch
import json
import logging
import requests
import time

logger = logging.getLogger('k8s_events')
logger.setLevel(logging.DEBUG)

# If running inside a pod
#config.load_incluster_config()
# If running locally
config.load_kube_config()

v1 = client.CoreV1Api()
v1ext = client.ExtensionsV1beta1Api()
w = watch.Watch()

# Track start/end transition times per pod name
mydict = {}
webhook_url = ''

while True:
    pod_list = v1.list_namespaced_pod("default")
    for i in pod_list.items:
        for c in i.status.container_statuses:
            if c.ready == True:
                if i.metadata.name in mydict:
                    print("Inside mydict If")
                    print("Pod updated : ", i.metadata.name)
                    print("My dict value : ", mydict)
                    mydict[i.metadata.name]['end_time'] = i.status.conditions[1].last_transition_time
                    dt_started = mydict[i.metadata.name]['start_time'].replace(tzinfo=None)
                    dt_ended = mydict[i.metadata.name]['end_time'].replace(tzinfo=None)
                    duration = str((dt_ended - dt_started).total_seconds()) + ' Sec'
                    fields = [
                        {"title": "Status", "value": "READY", "short": False},
                        {"title": "Pod name", "value": i.metadata.name, "short": False},
                        {"title": "Duration", "value": duration, "short": False},
                        {"title": "Service name", "value": c.name, "short": False},
                    ]
                    if c.name not in ('conversation-auto-close-service-scheduler', 'admin-service-trail-fllow-up-scheduler', 'bot-trial-email-scheduler', 'conversation-service-scheduler', 'faq-service-scheduler', 'nlp-service-scheduler', 'refresh-add-on-scheduler', 'response-sheet-scheduler'):
                        text = c.name + " Pod is started"
                        data = {"text": text, "mrkdwn": True, "attachments": [{"color": "#FBBC05", "title": "Pod Details", "fields": fields, "footer": "Manvar", "footer_icon": "https://cdn.test.manvar.com/assets/manvar-icon.png"}]}
                        print("Final data to post: ", data)
                        # Post the alert to the Slack incoming webhook
                        response = requests.post(webhook_url, data=json.dumps(data), headers={'Content-Type': 'application/json'})
                        del mydict[i.metadata.name]
                        if response.status_code != 200:
                            raise ValueError('Request to slack returned an error %s, the response is:\n%s' % (response.status_code, response.text))
                        time.sleep(1)
                else:
                    # First time this pod is seen ready: record its transition times
                    mydict[i.metadata.name] = {"start_time": i.status.conditions[0].last_transition_time, "end_time": i.status.conditions[1].last_transition_time}
I tried out Botkube but I did not want to publicly expose my cluster endpoint, so I wrote the following script based on the code from @Harsh Manvar. You can connect this to Teams using the Incoming Webhook Teams app from Microsoft.
from kubernetes import client, config
import json
import requests
import time


def monitorNamespace(namespace: str, webhookUrl: str):
    # List all pods in the namespace and collect the ones that are not Running
    v1 = client.CoreV1Api()
    pod_list = v1.list_namespaced_pod(namespace)
    podsNotRunning = {"Namespace": namespace, "Pods": []}
    for pod in pod_list.items:
        status = getPodStatus(pod)
        if status != "Running":
            podsNotRunning["Pods"].append({"Podname": pod.metadata.name, "status": status})
    if len(podsNotRunning["Pods"]) > 0:
        sendAlert(podsNotRunning, webhookUrl)


def sendAlert(podsNotRunning, webhookUrl):
    print(podsNotRunning)
    response = requests.post(webhookUrl, data=json.dumps(podsNotRunning),
                             headers={'Content-Type': 'application/json'})
    if response.status_code != 200:
        print('Response error:', response)


def getPodStatus(pod: client.models.v1_pod.V1Pod) -> str:
    # Prefer the waiting reason (e.g. CrashLoopBackOff) over the pod phase when available
    status = pod.status.phase
    containerStatus = pod.status.container_statuses[0]
    if containerStatus.started is False or containerStatus.ready is False:
        waitingState = containerStatus.state.waiting
        if waitingState.message is not None:
            status = waitingState.reason
    return status


if __name__ == "__main__":
    # If running inside a pod:
    #config.load_incluster_config()
    # If running locally:
    config.load_kube_config()
    webhookUrl = 'http://webhookurl'
    namespace = 'default'
    interval = 10
    while True:
        monitorNamespace(namespace, webhookUrl)
        time.sleep(interval)

Kafka (Confluent Platform) input for Logstash - broken message encoding

I have a Confluent Platform (version 4.1.1). It is configured to read data from the database. The configuration for this is:
name = source-mysql-requests
connection.url = jdbc:mysql://localhost:3306/Requests
connector.class = io.confluent.connect.jdbc.JdbcSourceConnector
connection.user = ***
connection.password = ***
mode = incrementing
incrementing.column.name = ID
tasks.max = 5
topic.prefix = requests_
poll.interval.ms = 1000
batch.max.rows = 100
table.poll.interval.ms = 1000
I also have Logstash (version 6.2.4) reading the relevant Kafka topic. Here is its configuration:
input {
  kafka {
    bootstrap_servers => "localhost:9092"
    topics => ["requests_Operation"]
    add_field => { "[@metadata][flag]" => "operation" }
  }
}
output {
  if [@metadata][flag] == "operation" {
    stdout {
      codec => rubydebug
    }
  }
}
When I run "kafka-avro-console-consumer" for the test, I get messages of this type:
{"ID":388625154,"ISSUER_ID":"8e427b6b-1176-4d4a-8090-915fedcef870","SERVICE_ID":"mercury-g2b.service:1.4","OPERATION":"prepareOutcomingConsignmentRequest","STATUS":"COMPLETED","RECEIVE_REQUEST_DATE":1525381951000,"PRODUCE_RESULT_DATE":1525381951000}
But in Logstash I get something garbled and unreadable:
"\u0000\u0000\u0000\u0000\u0001����\u0002Hfdebfb95-218a-11e2-a69b-b499babae7ea.mercury-g2b.service:1.4DprepareOutcomingConsignmentRequest\u0012COMPLETED���X���X"
What could be going wrong?
You can change Kafka Connect to not use Avro by changing the configurations for value.converter and key.converter to use JSON instead, for example.
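For example (a sketch; these are the standard Kafka Connect converter settings, set either on the worker or as connector-level overrides):

key.converter=org.apache.kafka.connect.storage.StringConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
value.converter.schemas.enable=false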
Otherwise, you would need Logstash to know how to interpret the Schema Registry encoded Avro data and convert it into a human-readable format.
Alternatively, you could use Connect's Elasticsearch or Console sink and skip Logstash entirely, assuming that is the goal
You can also use a Connect SMT to replace the Logstash add_field => operation config.
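As a sketch of that last point (the transform name "addflag" and the field name "flag" are arbitrary choices here), an InsertField SMT on the source connector could add the static field that add_field currently provides:

transforms=addflag
transforms.addflag.type=org.apache.kafka.connect.transforms.InsertField$Value
transforms.addflag.static.field=flag
transforms.addflag.static.value=operation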

How do you use DynamoDB Local with the AWS Ruby SDK?

Amazon's documentation provides examples in Java, .NET, and PHP of how to use DynamoDB Local. How do you do the same thing with the AWS Ruby SDK?
My guess is that you pass in some parameters during initialization, but I can't figure out what they are.
dynamo_db = AWS::DynamoDB.new(
  :access_key_id => '...',
  :secret_access_key => '...')
Are you using v1 or v2 of the SDK? You'll need to find that out; from the short snippet above, it looks like v1 (the v1 SDK uses the AWS namespace, v2 uses Aws). I've included both answers, just in case.
v1 answer:
AWS.config(use_ssl: false, dynamo_db: { api_version: '2012-08-10', endpoint: 'localhost', port: '8080' })
dynamo_db = AWS::DynamoDB::Client.new
v2 answer:
require 'aws-sdk-core'
dynamo_db = Aws::DynamoDB::Client.new(endpoint: 'http://localhost:8080')
Change the port number as needed of course.
Now aws-sdk version 2.7 throws Aws::Errors::MissingCredentialsError ("unable to sign request without credentials set") when keys are absent, so the code below works for me:
dynamo_db = Aws::DynamoDB::Client.new(
  region: "your-region",
  access_key_id: "anykey-or-xxx",
  secret_access_key: "anykey-or-xxx",
  endpoint: "http://localhost:8080"
)
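Equivalently (a sketch; DynamoDB Local does not validate credentials, so any non-empty strings work), you can set dummy credentials once globally instead of on each client:

require 'aws-sdk-core'

# Dummy credentials are enough for DynamoDB Local; it does not validate them.
Aws.config.update(
  region: 'us-east-1',
  credentials: Aws::Credentials.new('dummy', 'dummy')
)

dynamo_db = Aws::DynamoDB::Client.new(endpoint: 'http://localhost:8080')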
I've written a simple gist that shows how to start, create, update and query a local dynamodb instance.
https://gist.github.com/SundeepK/4ffff773f92e3a430481
Here's a rundown of some simple code.
Below is a simple command to run DynamoDB Local in memory:
# Assuming you have downloaded DynamoDB Local and extracted it into a dir called dynamodbLocal
java -Djava.library.path=./dynamodbLocal/DynamoDBLocal_lib -jar ./dynamodbLocal/DynamoDBLocal.jar -inMemory -port 9010
Below is a simple ruby script
require 'aws-sdk-core'

dynamo_db = Aws::DynamoDB::Client.new(region: "eu-west-1", endpoint: 'http://localhost:9010')

dynamo_db.create_table({
  table_name: 'TestDB',
  attribute_definitions: [
    {
      attribute_name: 'SomeKey',
      attribute_type: 'S'
    },
    {
      attribute_name: 'epochMillis',
      attribute_type: 'N'
    }
  ],
  key_schema: [
    {
      attribute_name: 'SomeKey',
      key_type: 'HASH'
    },
    {
      attribute_name: 'epochMillis',
      key_type: 'RANGE'
    }
  ],
  provisioned_throughput: {
    read_capacity_units: 5,
    write_capacity_units: 5
  }
})

dynamo_db.put_item(table_name: "TestDB",
  item: {
    "SomeKey" => "somevalue1",
    "epochMillis" => 1
  })

puts dynamo_db.get_item({
  table_name: "TestDB",
  key: {
    "SomeKey" => "somevalue1",
    "epochMillis" => 1
  }}).item
The above will create a table with a range key and also add and then query the same data that was added. Note that you must already have version 2 of the aws gem installed.
