telegraf snmp date field - snmp

I want to monitor VPN connections. For this I need an SNMP query with Telegraf, but I can't get the data.
My telegraf config:
[[inputs.snmp]]
agents = [ "172.30.124.2" ]
version = 2
community = "telegraf"
[[inputs.snmp.field]]
name = "vpnclients"
oid = ".1.3.6.1.4.1.9.9.392.1.3.35"
When I run a test, I get:
telegraf --config telegraf.d/cisco0.conf --test --debug
2019-10-22T08:12:13Z I! Starting Telegraf 1.12.3
2019-10-22T08:12:13Z D! [agent] Initializing plugins
2019-10-22T08:12:13Z D! [inputs.snmp] Executing "snmptranslate" "-Td" "-Ob" "-m" "all" ".1.3.6.1.4.1.9.9.392.1.3.35"
With snmpwalk I get:
snmpwalk -c telegraf -v 2c 172.30.124.2 "1.3.6.1.4.1.9.9.392.1.3.35"
iso.3.6.1.4.1.9.9.392.1.3.35.0 = Gauge32: 7
I don't know why Telegraf doesn't make the query. I tried other OIDs, and only this one gives me a problem.

I tried to find another OID to query the active connections, and I found one:
1.3.6.1.4.1.9.9.392.1.3.1.0
This OID works with Telegraf.
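For the record, the snmpwalk output above shows the value at instance .0 of the original OID, so a field definition pointing at the full instance might also work. This is an untested sketch using the same agent settings as my config:
[[inputs.snmp]]
  agents = [ "172.30.124.2" ]
  version = 2
  community = "telegraf"
  [[inputs.snmp.field]]
    name = "vpnclients"
    oid = ".1.3.6.1.4.1.9.9.392.1.3.35.0"   # trailing .0 = the scalar instance returned by snmpwalk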

Related

Spark app unable to write to elasticsearch cluster running in docker

I have an Elasticsearch Docker image listening on 127.0.0.1:9200. I tested it using Sense and Kibana, and it works fine: I am able to index and query documents. But when I try to write to it from a Spark app:
val sparkConf = new SparkConf().setAppName("ES").setMaster("local")
sparkConf.set("es.index.auto.create", "true")
sparkConf.set("es.nodes", "127.0.0.1")
sparkConf.set("es.port", "9200")
sparkConf.set("es.resource", "spark/docs")
val sc = new SparkContext(sparkConf)
val sqlContext = new SQLContext(sc)
val numbers = Map("one" -> 1, "two" -> 2, "three" -> 3)
val airports = Map("arrival" -> "Otopeni", "SFO" -> "San Fran")
val rdd = sc.parallelize(Seq(numbers, airports))
rdd.saveToEs("spark/docs")
It fails to connect and keeps retrying:
16/07/11 17:20:07 INFO HttpMethodDirector: I/O exception (java.net.ConnectException) caught when processing request: Operation timed out
16/07/11 17:20:07 INFO HttpMethodDirector: Retrying request
I tried using the IP address given by docker inspect for the Elasticsearch container, but that also does not work. However, when I use a native installation of Elasticsearch, the Spark app runs fine. Any ideas?
Also, set the config es.nodes.wan.only to true, as mentioned in this answer, if you are having issues writing to ES.
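For illustration, this is roughly how that setting might look added to the SparkConf from the question; it is a sketch, not something tested against your Docker setup:
import org.apache.spark.SparkConf

val sparkConf = new SparkConf().setAppName("ES").setMaster("local")
sparkConf.set("es.index.auto.create", "true")
sparkConf.set("es.nodes", "127.0.0.1")
sparkConf.set("es.port", "9200")
sparkConf.set("es.nodes.wan.only", "true") // treat es.nodes as the only reachable address and skip node discovery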
A couple of things I would check:
The Elasticsearch-Hadoop Spark connector version you are working with. Make sure that it is not a beta; there was a bug related to IP resolution that has since been fixed.
Since 9200 is the default port, you may remove this line: sparkConf.set("es.port", "9200") and check again.
Check that there is no proxy configured in your Spark environment or config files.
I assume that you run Elasticsearch and Spark on the same machine. Can you try configuring your machine's IP address instead of 127.0.0.1?
Hope this helps! :)
I had the same problem, and a further issue was that the confs set using sparkConf.set() didn't have any effect. But supplying the confs to the save function worked, like this:
rdd.saveToEs("spark/docs", Map("es.nodes" -> "127.0.0.1", "es.nodes.wan.only" -> "true"))

Logstash install error: can't get unique system GID (no more available GIDs)

I am trying to install Logstash with yum on a Red Hat VM. I already have the logstash.repo file set up according to the guide, and I ran
yum install logstash
but I get the following error after it downloads everything
...
logstash-2.3.2-1.noarch.rpm | 72 MB 00:52
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
groupadd: Can't get unique system GID (no more available GIDs)
useradd: group 'logstash' does not exist
error: %pre(logstash-1:2.3.2-1.noarch) scriptlet failed, exit status 6
Error in PREIN scriptlet in rpm package 1:logstash-2.3.2-1.noarch
error: install: %pre scriptlet failed (2), skipping logstash-1:2.3.2-1
Verifying : 1:logstash-2.3.2-1.noarch 1/1
Failed:
logstash.noarch 1:2.3.2-1
Complete!
I can't find much information about this. Any suggestions?
groupadd determines GIDs for the creation of regular groups from the /etc/login.defs file.
On my CentOS 6 box, /etc/login.defs contains the following two settings:
#
# Min/max values for automatic gid selection in groupadd
#
GID_MIN 500
GID_MAX 60000
For system accounts, add these two lines to your /etc/login.defs:
# System accounts
SYS_GID_MIN 100
SYS_GID_MAX 499
I updated the SYS_GID_MAX value and it worked for me.
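For reference, a possible sequence on a RHEL/CentOS box is sketched below (run as root; the backup step and the retry of the install are my additions, the values are the ones quoted above):
cp /etc/login.defs /etc/login.defs.bak   # keep a backup before editing
cat >> /etc/login.defs <<'EOF'
# System accounts
SYS_GID_MIN 100
SYS_GID_MAX 499
EOF
yum install logstash                     # retry the install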

FunkLoad monitor doesn't show any graphs in report

I set up everything according to the tutorial here: http://funkload.nuxeo.org/monitoring.html , started the monitor server, ran a bench test, and built the report. But the report contains no graphs from monitoring... Any idea? I am using the credential server as well, and that was and is working correctly... it's just that after I added the monitor settings, nothing seems to change...
monitor.conf
[server]
host = localhost
port = 8008
interval = .5
interface = eth0
[client]
host = localhost
port = 8008
my_test.conf:
[main]
title= some title
description= some descr
url=http://localhost:8000
... some other not important lines here
[monitor]
hosts=localhost
[localhost]
port=8008
description=The benching machine
Use
sudo easy_install -f http://funkload.nuxeo.org/snapshots/ -U funkload
instead of just
pip install funkload
It looks like pip has an old, broken version of FunkLoad.

kannel 1.5 addons sqlbox on mac connect to Postgresql 9.3.5 with "Segmentation fault: 11" error

I am trying to build my own SMS gateway by compiling Kannel 1.5.0 on my Mac (OS X 10.10). I installed all the dependencies that Kannel requires and configured Kannel to work with PostgreSQL 9.3.5. BearerBox and smsBox are in good working order; I can send/receive SMS through my HUAWEI E3131 3G WCDMA modem.
After I got the SMS gateway working, I went on to the next step: compiling the Kannel addon sqlbox to support SMS SQL storage, so SMS can be inserted into the database to trigger SMS services. I followed these steps:
use bootstrap to configure the environment
./bootstrap
configure sqlbox with Kannel support
./configure --with-kannel-dir=/usr/local/kannel --disable-docs --enable-drafts
make to compile
make
make install to install sqlbox to the proper location
make bindir=/usr/local/kannel install
configure sqlbox by editing the sqlbox.conf file like this:
group = pgsql-connection
id = pgsqlbox-db
host = "10.0.1.100"
username = any
password = any
database = dlr
max-connections = 1
port=5433
group = sqlbox
id = pgsqlbox-db
smsbox-id = sqlbox
global-sender = ""
bearerbox-host = localhost
bearerbox-port = 13001
smsbox-port = 13002
smsbox-port-ssl = false
sql-log-table = sent_sms
sql-insert-table = send_sms
log-file = "/usr/local/var/log/kannel/kannel-sqlbox.log"
log-level = 0
configure PostgreSQL to add the tables send_sms and sent_sms, and test with the psql client; the database is in working order
start the services from the terminal
./bearerbox -v 1 /usr/local/kannel/conf/smskannel.conf
./smsbox -v 1 /usr/local/kannel/conf/smskannel.conf
bearerbox and smsbox are in working order.
start sqlbox service
./sqlbox -v 1 /usr/local/kannel/conf/sqlbox.conf
This error message was given:
2015-05-01 10:06:01 [11407] [0] INFO: Debug_lvl = 1, log_file = <none>, log_lvl = 0
2015-05-01 10:06:01 [11407] [0] INFO: Starting to log to file /usr/local/var/log/kannel/kannel-sqlbox.log level 0
2015-05-01 10:06:01 [11407] [0] INFO: Added logfile `/usr/local/var/log/kannel/kannel-sqlbox.log' with level `0'.
2015-05-01 10:06:01 [11407] [0] INFO: PGSQL: Connected to server at '10.0.1.100'.
Segmentation fault: 11
In my understanding, Segmentation fault: 11 was thrown by the PostgreSQL server, so I configured the PostgreSQL server to get more detailed debug information. PostgreSQL seems to be working fine.
Does anyone have a better idea about this? I have totally lost my direction. Any advice is welcome.
Kannel is probably too old to carry out this work on a new system. I changed to Gammu 1.36.0:
1. make sure cmake is installed
2. install autoconf and the other required dependencies
3. download Gammu 1.36.0
4. compile and install:
./configure
make
sudo make install
5. configure Gammu using the [gammu] and [smsd] sections (see the sketch after this list)
6. enable the log file in the system
7. use the newest DB schema to create the tables in the database
8. start the service with
gammu-smsd
9. check the log to make sure it works
10. send a test message with
gammu-smsd-inject
11. receive an SMS
12. check the database tables inbox and sentitems
13. done
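For step 5, a minimal sketch of what such a gammu-smsdrc could look like; the device path, credentials and database name below are placeholders, not values from the original setup:
[gammu]
# placeholder modem device path, AT connection
device = /dev/ttyUSB0
connection = at

[smsd]
# store messages via the SQL service using the native PostgreSQL backend
service = sql
driver = native_pgsql
# placeholder connection details
host = localhost
user = smsd
password = secret
database = smsd
logfile = /var/log/gammu-smsd.log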

How to query UCD-SNMP-MIB using snmpwalk

I have installed MRTG, snmpd, snmpwalk and snmpget on a Windows 2003 Server, and I have configured an SNMP agent on 192.168.100.88.
When I run this snmpwalk command, I get an empty response for UCD-SNMP-MIB:
snmpwalk -v 1 -c community 192.168.100.88 .1.3.6.1.4.1.2021.4
End of MIB
I also see:
snmpget -v1 -c community 192.168.100.88 memAvailReal.0
Error in packet
Reason: (noSuchName) There is no such variable name in this MIB.
Failed object: UCD-SNMP-MIB::memAvailReal.0
What am I missing? Should I install UCD-SNMP-MIB on the host or the client, and how?
Please check the OID (memAvailReal.0) that you are passing. Try the numeric OID (dotted integers) instead of the name.
If the same "no such name" error comes back, please confirm that the OID is supported by the device.
P.S.: "no such name" means there is no such object present on the device to respond.
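For example, memAvailReal in UCD-SNMP-MIB corresponds to the numeric OID .1.3.6.1.4.1.2021.4.6, so, assuming the same community and host as above, the numeric form of the query would be roughly:
snmpget -v1 -c community 192.168.100.88 .1.3.6.1.4.1.2021.4.6.0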
Try this (modify it to your needs, such as the network info):
https://gist.github.com/2848189
Create a backup of your existing snmpd.conf first, then reload snmpd and try your snmpwalk again.
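If it helps, here is a minimal snmpd.conf sketch in the spirit of that gist; it assumes the agent on 192.168.100.88 is net-snmp, and the community string and network below are placeholders:
# read-only access for the monitoring network (placeholder community and network)
rocommunity community 192.168.100.0/24
# basic system information
syslocation "server room"
syscontact admin@example.com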
