Telegraf unable to pull route table information from Arista MIB - snmp

So I'm trying to collect routing stats from some Aristas.
When I run snmpwalk it all seems to work...
snmpwalk -v2c -c pub router.host ARISTA-FIB-STATS-MIB::aristaFIBStatsTotalRoutesForRouteType
ARISTA-FIB-STATS-MIB::aristaFIBStatsTotalRoutesForRouteType.ipv4.other = Gauge32: 3
ARISTA-FIB-STATS-MIB::aristaFIBStatsTotalRoutesForRouteType.ipv4.connected = Gauge32: 8
ARISTA-FIB-STATS-MIB::aristaFIBStatsTotalRoutesForRouteType.ipv4.static = Gauge32: 26
ARISTA-FIB-STATS-MIB::aristaFIBStatsTotalRoutesForRouteType.ipv4.ospf = Gauge32: 542
ARISTA-FIB-STATS-MIB::aristaFIBStatsTotalRoutesForRouteType.ipv4.bgp = Gauge32: 1623
ARISTA-FIB-STATS-MIB::aristaFIBStatsTotalRoutesForRouteType.ipv4.attached = Gauge32: 12
ARISTA-FIB-STATS-MIB::aristaFIBStatsTotalRoutesForRouteType.ipv4.internal = Gauge32: 25
ARISTA-FIB-STATS-MIB::aristaFIBStatsTotalRoutesForRouteType.ipv6.other = Gauge32: 3
ARISTA-FIB-STATS-MIB::aristaFIBStatsTotalRoutesForRouteType.ipv6.internal = Gauge32: 1
But when I try to pull the stats with telegraf I get different information with missing context...
BGP,agent_host=10.45.100.20,host=nw01.ny5,hostname=CR.NY aristaFIBStatsTotalRoutesForRouteType=2i 1654976575000000000
BGP,agent_host=10.45.100.20,host=nw01.ny5,hostname=CR.NY aristaFIBStatsTotalRoutes=2260i 1654976575000000000
BGP,agent_host=10.45.100.20,host=nw01.ny5,hostname=CR.NY aristaFIBStatsTotalRoutesForRouteType=8i 1654976575000000000
BGP,agent_host=10.45.100.20,host=nw01.ny5,hostname=CR.NY aristaFIBStatsTotalRoutesForRouteType=63i 1654976575000000000
According to the MIB documentation
https://www.arista.com/assets/data/docs/MIBS/ARISTA-FIB-STATS-MIB.txt
it uses the IANA-RTPROTO-MIB.txt protocol definitions, but I have no idea where to derive that information from, as the data retrieved via Telegraf isn't showing me any of it. Does anyone know how to deal with this?

First, you might want to enable telegraf to return the index of the returned rows by setting index_as_tag = true inside the inputs.snmp.table.
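For example, the table definition could look roughly like this sketch (the measurement name "BGP" and the field OIDs are taken from your output above; keep the rest of your existing [[inputs.snmp]] settings as they are):

[[inputs.snmp.table]]
  ## "BGP" is the measurement name the processors below filter on (namepass)
  name = "BGP"
  index_as_tag = true

  [[inputs.snmp.table.field]]
    oid = "ARISTA-FIB-STATS-MIB::aristaFIBStatsTotalRoutesForRouteType"

  [[inputs.snmp.table.field]]
    oid = "ARISTA-FIB-STATS-MIB::aristaFIBStatsTotalRoutes"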
Then, add the following processors in your config:
# Parse aristaFIBStatsAF and aristaFIBStatsRouteType from index for BGP table
[[processors.regex]]
  namepass = ["BGP"]
  order = 1

  [[processors.regex.tags]]
    ## Tag to change
    key = "index"
    ## Regular expression to match on a tag value
    pattern = "^(\\d+)\\.(\\d+)$"
    replacement = "${1}"
    ## Tag to store the result
    result_key = "aristaFIBStatsAF"

  [[processors.regex.tags]]
    ## Tag to change
    key = "index"
    ## Regular expression to match on a tag value
    pattern = "^(\\d+)\\.(\\d+)$"
    replacement = "${2}"
    ## Tag to store the result
    result_key = "aristaFIBStatsRouteType"

# Rename index to aristaFIBStatsAF for BGP table with single index row
[[processors.rename]]
  namepass = ["BGP"]
  order = 2

  [[processors.rename.replace]]
    tag = "index"
    dest = "aristaFIBStatsAF"

  [processors.rename.tagdrop]
    aristaFIBStatsAF = ["*"]

# Translate tag values for BGP table
[[processors.enum]]
  namepass = ["BGP"]
  order = 3
  tagexclude = ["index"]

  [[processors.enum.mapping]]
    ## Name of the tag to map
    tag = "aristaFIBStatsAF"
    ## Table of mappings
    [processors.enum.mapping.value_mappings]
      0 = "unknown"
      1 = "ipv4"
      2 = "ipv6"

  [[processors.enum.mapping]]
    ## Name of the tag to map
    tag = "aristaFIBStatsRouteType"
    ## Table of mappings
    [processors.enum.mapping.value_mappings]
      1 = "other"
      2 = "connected"
      3 = "static"
      8 = "rip"
      9 = "isIs"
      13 = "ospf"
      14 = "bgp"
      200 = "ospfv3"
      201 = "staticNonPersistent"
      202 = "staticNexthopGroup"
      203 = "attached"
      204 = "vcs"
      205 = "internal"
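With index_as_tag enabled and the processors above, the per-route-type rows should then come out looking roughly like this (values taken from your snmpwalk output):

BGP,agent_host=10.45.100.20,host=nw01.ny5,hostname=CR.NY,aristaFIBStatsAF=ipv4,aristaFIBStatsRouteType=ospf aristaFIBStatsTotalRoutesForRouteType=542i 1654976575000000000
BGP,agent_host=10.45.100.20,host=nw01.ny5,hostname=CR.NY,aristaFIBStatsAF=ipv4,aristaFIBStatsRouteType=bgp aristaFIBStatsTotalRoutesForRouteType=1623i 1654976575000000000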
Disclaimer: I did not test this in Telegraf, so there might be some typos.

Related

Unable to open messaging channel in Substrate Tutorial

Replace chain_spec.rs: following the Open Message Passing Channels guide, use parachain_id values of 1000 and 1001 respectively for the two parachain templates.
Add the sudo pallet to runtime/lib.rs:
impl pallet_sudo::Config for Runtime {
type RuntimeEvent = RuntimeEvent;
type RuntimeCall = RuntimeCall;
}
construct_runtime!(
// Sudo: pallet_sudo, // with either this line or the next, opening the channel does not succeed
Sudo: pallet_sudo::{Pallet, Call, Storage, Config<T>, Event<T>},
)
Build with cargo build --release, with parachain_id = 1000 and 1001 set in node/src/chain_spec.rs respectively.
Start a test network with zombienet, where config.toml is configured according to the Simulate Parachains documentation.
[relaychain]
default_command = "polkadot-v0.9.32"
default_args = [ "-lparachain=debug" ]
chain = "rococo-local"
[[relaychain.nodes]]
name = "alice"
validator = true
ws_port = 9900
[[relaychain.nodes]]
name = "bob"
validator = true
ws_port = 9901
[[relaychain.nodes]]
name = "charlie"
validator = true
ws_port = 9902
[[relaychain.nodes]]
name = "dave"
validator = true
ws_port = 9903
[[parachains]]
id = 1000
cumulus_based = true
[parachains.collator]
name = "parachain-A-1000-collator01"
command = "parachain-A-template-node-v0.9.32"
ws_port = 9910
[[parachains]]
id = 1001
cumulus_based = true
[parachains.collator]
name = "parachain-B-1001-collator01"
command = "parachain-B-template-node-v0.9.32"
ws_port = 9920
[[hrmpChannels]]
sender = 1000
recipient = 1001
maxCapacity = 8
maxMessageSize = 8000
[[hrmpChannels]]
sender = 1001
recipient = 1000
maxCapacity = 8
maxMessageSize = 8000
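The network is then started by pointing zombienet at that file, for example (assuming the native provider and that the binaries named above are on the PATH):
zombienet spawn --provider native config.toml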
The question:
When I follow the documentation step by step, no hrmp.OpenChannelRequested event appears in the relay chain's recent events when the transaction is submitted, and ump.ExecutedUpward shows incomplete.
(Screenshots attached: parachain 1000 XCM parameters 1 and 2, relay chain error, parachain event.)
I followed the steps above and am not sure which step I missed that caused the error when opening the channel.

Is there a way to simply replace string elements containing escape characters

From a file I import lines. In each line an (escaped) string is part of the line:
DP,0,"021",257
DP,1,"022",257
DP,2,"023",513
DP,3,"024",513
DP,4,"025",1025
DP,5,"026",1025
DP,6,"081",257
DP,7,"082",257
DP,8,"083",513
DP,9,"084",513
DP,10,"085",1025
DP,11,"086",1025
DP,12,"087",1025
DP,13,"091",257
DP,14,"092",513
DP,15,"093",1025
IS,0,"FIX",0
IS,1,"KARIN02",0
IS,2,"KARUIT02",0
IS,3,"KARIN02HOV",0
IS,4,"KARUIT02HOV",0
IS,5,"KARIN08",0
IS,6,"KARUIT08",0
IS,7,"KARIN08HOV",0
IS,8,"KARUIT08HOV",0
IS,9,"KARIN09",0
IS,10,"KARUIT09",0
IS,11,"KARIN09HOV",0
IS,12,"KARUIT09HOV",0
IS,13,"KARIN10",0
IS,14,"KARUIT10",0
IS,15,"KARIN10HOV",0
I get the following objects (if DP):
index - parts[1] (int)
name - parts[2] (string)
ref - parts[3] (int)
I tried using a regex to replace the escape sequence in the lines, but to no effect:
#name_to_ID = {}
kruising = 2007
File.open(cfgFile).each{|line|
parts = line.split(",")
if parts[0]=="DP"
index = parts[1].to_i
hex = index.to_s(16).upcase.rjust(2, '0')
cname = parts[2].to_s
tname = cname.gsub('\\"','')
p "cname= #{cname} (#{cname.length})"
p "tname= #{tname} (#{tname.length})"
p cname == tname
#name_to_ID[tname] = kruising.to_s + "-" + hex.to_s
end
}
teststring = "021"
p #name_to_ID[teststring]
> "021" (5)
> "021" (5)
> true
> nil
The problem came to light when looking up the hash with a string reference from elsewhere (length 3):
hash[key] isn't equal, because the string "021" with length 5 is not the string 021 with length 3.
Is there any method that actually replaces the characters I need?
EDIT: I used
cname.each_char{|c|
p c
}
> "\""
> "0"
> "2"
> "1"
> "\""
EDIT: requested outcome update:
# Current output:
#name_to_ID["021"] = 2007-00 "021".length = 5
#name_to_ID["022"] = 2007-01 "022".length = 5
#name_to_ID["081"] = 2007-06 "081".length = 5
#name_to_ID["082"] = 2007-07 "082".length = 5
#name_to_ID["091"] = 2007-0D "091".length = 5
#name_to_ID["101"] = 2007-10 "101".length = 5
# -------------
# Expected output:
#name_to_ID["021"] = 2007-00 "021".length = 3
#name_to_ID["022"] = 2007-01 "022".length = 3
#name_to_ID["081"] = 2007-06 "081".length = 3
#name_to_ID["082"] = 2007-07 "082".length = 3
#name_to_ID["091"] = 2007-0D "091".length = 3
#name_to_ID["101"] = 2007-10 "101".length = 3
Your problem is that you don't know which character is really in your string; it might not be the character you see when it is printed.
Try parts[2].to_s.bytes to check exactly what the character codes are. For example:
> "͸asd".bytes
=> [205, 184, 97, 115, 100]
Alternatively, you can delete the first and the last characters, if you are sure that every part of the string has the same format:
cname = parts[2].to_s[1..-2]
Or you can remove all special characters from the string, if you know the value will not legitimately contain any:
cname = parts[2].to_s.gsub(/[^0-9A-Za-z]/, '')
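For instance, dropping the quotes inside the original loop could look like this minimal, untested sketch (it uses a plain local hash name_to_id in place of the commented-out #name_to_ID, and cfgFile as in the question):

name_to_id = {}
kruising = 2007
File.open(cfgFile).each do |line|
  parts = line.chomp.split(",")
  next unless parts[0] == "DP"
  index = parts[1].to_i
  hex   = index.to_s(16).upcase.rjust(2, "0")
  cname = parts[2].delete('"')        # removes both quote characters: "021" -> 021, length 3
  name_to_id[cname] = "#{kruising}-#{hex}"
end
p name_to_id["021"]                   # => "2007-00"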

netcool omnibus probe rules

I have the following possible values for $6 coming into Netcool OMNIbus probe rules.
I would like to extract the InstanceName from $6.
Eg: SQLSERVER1
SQL2012TESTRTM
SQL2012TESTSTD1
SQL2014STD
MSSQLSERVER
Below are the $6 values:
6 = "Microsoft.SQLServer.DBEngine:TM-B33F-FAD4.cap.dev.net;SQLSERVER1:1"
6 = "Microsoft.SQLServer.2012.Agent:TM-B33F-FAD4.cap.dev.net;SQL2012TESTRTM;SQLAgent$SQL2012TESTRTM:1"
6 = "Microsoft.SQLServer.2012.Agent:TM-B33F-FAD4.cap.dev.net;SQL2012TESTRTM;SQLAgent$SQL2012TESTRTM:1"
6 = "Microsoft.SQLServer.2012.Agent:TM-B33F-FAD4.cap.dev.net;SQL2012TESTSTD1;SQLAgent$SQL2012TESTSTD1:1"
6 = "Microsoft.SQLServer.Database:TM-B33F-FAD4.cap.dev.net;SQL2012TESTSTD1;DB2:1"
6 = "Microsoft.SQLServer.2012.Agent:TM-B33F-FAD4.cap.dev.net;SQL2012TESTRTM;SQLAgent$SQL2012TESTRTM:1"
6 = "Microsoft.SQLServer.2014.Agent:TM-B33F-FAD4.cap.dev.net;SQL2014STD;SQLAgent$SQL2014STD:1"
6 = "Microsoft.SQLServer.Database:TM-B33F-FAD4.cap.dev.net;SQL2012TESTSTD1;DB2:1"
6 = "Microsoft.SQLServer.2014.DBEngine:TM-B33F-FAD4.cap.dev.net;SQL2014STD:1"
6 = "Microsoft.SQLServer.Database:TM-B33F-FAD4.cap.dev.net;SQL2012TESTSTD1;DB2:1"
6 = "Microsoft.SQLServer.2014.Agent:TM-B33F-FAD4.cap.dev.net;SQL2014STD;SQLAgent$SQL2014STD:1"
6 = "Microsoft.SQLServer.Database:TM-B33F-FAD4.cap.dev.net;SQL2012TURKSTD1;DB1:1"
6 = "Microsoft.SQLServer.2014.DBFile:CTNTV01;MSSQLSERVER;SPOT;1;35:1"
6 = "Microsoft.SQLServer.Library.EventLogCollectionTarget:TM-B33F-FAD4.cap.dev.net:1"
I have tried the code below to extract it; it works for most of the values above.
#temp = extract($6, ";([^\:]+)\:")
if (regmatch(#temp, "[\;]"))
{
#temp = extract(#temp, "([^\:]+)\;")
}
But it does not work for
Microsoft.SQLServer.2014.DBFile:CTNTV01;MSSQLSERVER;SPOT;1;35:1
I believe the second extract inside the if statement needs to be corrected a little more.
It extracts MSSQLSERVER;SPOT;1, however I only want MSSQLSERVER.
Can you please help me correct this?
Try the version below; with ([^\;]+)\; the second extract stops at the first semicolon, so for Microsoft.SQLServer.2014.DBFile:CTNTV01;MSSQLSERVER;SPOT;1;35:1 it returns just MSSQLSERVER.
#temp = extract($6, ";([^\:]+)\:")
if (regmatch(#temp, "[\;]"))
{
#temp = extract(#temp, "([^\;]+)\;")
}

Flume creating an empty line at the end of output file in HDFS

Currently I am using Flume version 1.5.2.
Flume creates an empty line at the end of each output file in HDFS, which causes the row counts, file sizes and checksums of the source and destination files not to match.
I tried overriding the default values of the rollSize, batchSize and appendNewline parameters, but it still isn't working.
Flume also changes the EOL from CRLF (source file) to LF (output file), which also causes the file sizes to differ.
Below are the related Flume agent configuration parameters I'm using:
agent1.sources = c1
agent1.sinks = c1s1
agent1.channels = ch1
agent1.sources.c1.type = spooldir
agent1.sources.c1.spoolDir = /home/biadmin/flume-test/sourcedata1
agent1.sources.c1.bufferMaxLineLength = 80000
agent1.sources.c1.channels = ch1
agent1.sources.c1.fileHeader = true
agent1.sources.c1.fileHeaderKey = file
#agent1.sources.c1.basenameHeader = true
#agent1.sources.c1.fileHeaderKey = basenameHeaderKey
#agent1.sources.c1.filePrefix = %{basename}
agent1.sources.c1.inputCharset = UTF-8
agent1.sources.c1.decodeErrorPolicy = IGNORE
agent1.sources.c1.deserializer= LINE
agent1.sources.c1.deserializer.maxLineLength = 50000
agent1.sources.c1.deserializer = org.apache.flume.sink.solr.morphline.BlobDeserializer$Builder
agent1.sources.c1.interceptors = a b
agent1.sources.c1.interceptors.a.type = org.apache.flume.interceptor.TimestampInterceptor$Builder
agent1.sources.c1.interceptors.b.type = org.apache.flume.interceptor.HostInterceptor$Builder
agent1.sources.c1.interceptors.b.preserveExisting = false
agent1.sources.c1.interceptors.b.hostHeader = host
agent1.channels.ch1.type = memory
agent1.channels.ch1.capacity = 1000
agent1.channels.ch1.transactionCapacity = 1000
agent1.channels.ch1.batchSize = 1000
agent1.channels.ch1.maxFileSize = 2073741824
agent1.channels.ch1.keep-alive = 5
agent1.sinks.c1s1.type = hdfs
agent1.sinks.c1s1.hdfs.path = hdfs://bivm.ibm.com:9000/user/biadmin/flume/%y-%m-%d/%H%M
agent1.sinks.c1s1.hdfs.fileType = DataStream
agent1.sinks.c1s1.hdfs.filePrefix = %{file}
agent1.sinks.c1s1.hdfs.fileSuffix =.csv
agent1.sinks.c1s1.hdfs.writeFormat = Text
agent1.sinks.c1s1.hdfs.maxOpenFiles = 10
agent1.sinks.c1s1.hdfs.rollSize = 67000000
agent1.sinks.c1s1.hdfs.rollCount = 0
#agent1.sinks.c1s1.hdfs.rollInterval = 0
agent1.sinks.c1s1.hdfs.batchSize = 1000
agent1.sinks.c1s1.channel = ch1
#agent1.sinks.c1s1.hdfs.codeC = snappyCodec
agent1.sinks.c1s1.hdfs.serializer = text
agent1.sinks.c1s1.hdfs.serializer.appendNewline = false
Setting hdfs.serializer.appendNewline did not fix the issue.
Can anyone please check and suggest?
Replace the below line in your flume agent.
agent1.sinks.c1s1.serializer.appendNewline = false
with the following line and let me know how it goes.
agent1.sinks.c1s1.hdfs.serializer.appendNewline = false
Replace
agent1.sinks.c1s1.hdfs.serializer = text
agent1.sinks.c1s1.hdfs.serializer.appendNewline = false
with
agent1.sinks.c1s1.serializer = text
agent1.sinks.c1s1.serializer.appendNewline = false
The difference is that the serializer settings are not set under the hdfs prefix but directly on the sink name.
The Flume documentation should have an example of that; I also ran into issues because I didn't spot that serializer is set at a different level of the property name.
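In context, the relevant part of the sink definition then reads (everything else under hdfs. stays as it is):

agent1.sinks.c1s1.type = hdfs
# serializer settings sit directly on the sink, not under the hdfs. prefix
agent1.sinks.c1s1.serializer = text
agent1.sinks.c1s1.serializer.appendNewline = false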
More information about the HDFS sink can be found here:
https://flume.apache.org/FlumeUserGuide.html#hdfs-sink

How does Sphinx calculate the weight?

Note:
This is a cross-post; I first posted it on the Sphinx forum but got no answer, so I am posting it here.
First take a look at an example.
The following is my table (just used for testing):
+----+--------------------------+----------------------+
| Id | title | body |
+----+--------------------------+----------------------+
| 1 | National first hospital | NASA |
| 2 | National second hospital | Space Administration |
| 3 | National govenment | Support the hospital |
+----+--------------------------+----------------------+
I want to search the contents of the title and body fields, so I configured sphinx.conf
as shown below:
--------The sphinx config file----------
source mysql
{
type = mysql
sql_host = localhost
sql_user = root
sql_pass =0000
sql_db = testfull
sql_port = 3306 # optional, default is 3306
sql_query_pre = SET NAMES utf8
sql_query = SELECT * FROM test
}
index mysql
{
source = mysql
path = var/data/mysql_old_test
docinfo = extern
mlock = 0
morphology = stem_en, stem_ru, soundex
min_stemming_len = 1
min_word_len = 1
charset_type = utf-8
html_strip = 0
}
indexer
{
mem_limit = 128M
}
searchd
{
listen = 9312
read_timeout = 5
max_children = 30
max_matches = 1000
seamless_rotate = 0
preopen_indexes = 0
unlink_old = 1
pid_file = var/log/searchd_mysql.pid
log = var/log/searchd_mysql.log
query_log = var/log/query_mysql.log
}
------------------
Then I reindex the db and start the searchd daemon.
On the client side I set the attributes as:
----------Client side config-------------------
sc = new SphinxClient();
///other thing
HashMap<String, Integer> weiMap=new HashMap<String, Integer>();
weiMap.put("title", 100);
weiMap.put("body", 0);
sc.SetFieldWeights(weiMap);
sc.SetMatchMode(SphinxClient.SPH_MATCH_ALL);
sc.SetSortMode(SphinxClient.SPH_SORT_EXTENDED,"#weight DESC");
When I search for "National hospital", I get the following output:
Query 'National hospital' retrieved 3 of 3 matches in 0.0 sec.
Query stats:
'nation' found 3 times in 3 documents
'hospit' found 3 times in 3 documents
Matches:
1. id=3, weight=101
2. id=1, weight=100
3. id=2, weight=100
The number of matches (three) is right; however, the order of the results is not what I wanted.
Obviously the documents with id 1 and 2 should be the closest matches to the query string ("National hospital"), so in my opinion they should be given the largest weights, but they are ordered last.
Is there any way to meet my requirement?
PS:
1) Please do not suggest setting the sort mode to
sc.SetSortMode(SphinxClient.SPH_SORT_EXTENDED,"#weight ASC");
This may work for just this example, but it would cause some other potential problems.
2) Actually the contents of my table are Chinese; I just used "National Hosp..l" to make an example.
1° You ask for "National hospital" but Sphinx searches for "nation" and "hospit" because of
morphology = stem_en, stem_ru, soundex
2° You give weight
weiMap.put("title", 100);
weiMap.put("body", 0);
to non-existent text fields:
sql_query = SELECT * FROM test
3° Finally, my simple answer to the main question:
you sort by weight, and the third row has more weight because there are no words between "nation" and "hospit".
