I'm trying to write a Python SNMP agent that I can embed within my Python application so that the application can be monitored remotely by OpenNMS. OpenNMS expects the agent to implement the HOST-RESOURCES-MIB, querying two fields: hrSWRunName and hrSWRunStatus.
I took a pysnmp example as the basis of my code and edited it as I believed necessary. The resulting code looks like this:
import logging
from pysnmp import debug
from pysnmp.carrier.asyncore.dgram import udp
from pysnmp.entity import engine, config
from pysnmp.entity.rfc3413 import cmdrsp, context
from pysnmp.proto.api import v2c
from pysnmp.smi import builder, instrum, exval
# debug.setLogger(debug.Debug('all'))
formatting = '[%(asctime)s-%(levelname)s]-(%(module)s) %(message)s'
logging.basicConfig(level=logging.DEBUG, format=formatting, )
logging.info("Starting....")
# Create SNMP engine
snmpEngine = engine.SnmpEngine()
# Transport setup
# UDP over IPv4
config.addTransport(
    snmpEngine,
    udp.domainName,
    udp.UdpTransport().openServerMode(('localhost', 12345))
)
# SNMPv2c setup
# SecurityName <-> CommunityName mapping.
config.addV1System(snmpEngine, 'my-area', 'public')
# Allow read MIB access for this user / securityModels at VACM
config.addVacmUser(snmpEngine,
                   2,
                   'my-area',
                   'noAuthNoPriv',
                   (1, 3, 6, 1, 2, 1),
                   (1, 3, 6, 1, 2, 1))
# Create an SNMP context
snmpContext = context.SnmpContext(snmpEngine)
logging.debug('Loading HOST-RESOURCES-MIB module...')
mibBuilder = builder.MibBuilder().loadModules('HOST-RESOURCES-MIB')
logging.debug('done')
logging.debug('Building MIB tree...')
mibInstrum = instrum.MibInstrumController(mibBuilder)
logging.debug('done')
logging.debug('Building table entry index from human-friendly representation...')
# see http://www.oidview.com/mibs/0/HOST-RESOURCES-MIB.html
hostRunTable, = mibBuilder.importSymbols('HOST-RESOURCES-MIB', 'hrSWRunEntry')
instanceId = hostRunTable.getInstIdFromIndices(1)
logging.debug('done')
# The following shows the OID name mapping
#
# hrSWRunTable 1.3.6.1.2.1.25.4.2 <TABLE>
# hrSWRunEntry 1.3.6.1.2.1.25.4.2.1 <SEQUENCE>
# hrSWRunIndex 1.3.6.1.2.1.25.4.2.1.1 <Integer32>
# hrSWRunName 1.3.6.1.2.1.25.4.2.1.2 <InternationalDisplayString> 64 Char
# hrSWRunID 1.3.6.1.2.1.25.4.2.1.3 <ProductID>
# hrSWRunPath 1.3.6.1.2.1.25.4.2.1.4 <InternationalDisplayString> 128 octets
# hrSWRunParameters 1.3.6.1.2.1.25.4.2.1.5 <InternationalDisplayString> 128 octets
# hrSWRunType 1.3.6.1.2.1.25.4.2.1.6 <INTEGER>
# hrSWRunStatus       1.3.6.1.2.1.25.4.2.1.7 <INTEGER> <<===== This is the key variable used by OpenNMS
# http://docs.opennms.org/opennms/releases/18.0.1/guide-admin/guide-admin.html#_hostresourceswrunmonitor
logging.debug('Create/update HOST-RESOURCES-MIB::hrSWRunTable table row:')
varBinds = mibInstrum.writeVars((
    (hostRunTable.name + (1,) + instanceId, 1),
    (hostRunTable.name + (2,) + instanceId, 'AppName'),  # <=== Must match OpenNMS service-name variable
    (hostRunTable.name + (3,) + instanceId, {0, 0}),
    (hostRunTable.name + (4,) + instanceId, 'All is well'),
    (hostRunTable.name + (5,) + instanceId, 'If this was not the case it would say so here'),
    (hostRunTable.name + (6,) + instanceId, 4),  # Values are ==> unknown(1), operatingSystem(2), deviceDriver(3), application(4)
    (hostRunTable.name + (7,) + instanceId, 1)   # <<=== This is the status number OpenNMS looks at. Values are ==> running(1), runnable(2), notRunnable(3), invalid(4)
))
for oid, val in varBinds:
    print('%s = %s' % ('.'.join([str(x) for x in oid]), val.prettyPrint()))
logging.debug('done')
logging.debug('Read whole MIB (table walk)')
oid, val = (), None
while True:
    oid, val = mibInstrum.readNextVars(((oid, val),))[0]
    if exval.endOfMib.isSameTypeWith(val):
        break
    print('%s = %s' % ('.'.join([str(x) for x in oid]), val.prettyPrint()))
logging.debug('done')
# logging.debug('Unloading MIB modules...'),
# mibBuilder.unloadModules()
# logging.debug('done')
# --- end of table population ---
# Register SNMP Applications at the SNMP engine for particular SNMP context
cmdrsp.GetCommandResponder(snmpEngine, snmpContext)
cmdrsp.SetCommandResponder(snmpEngine, snmpContext)
cmdrsp.NextCommandResponder(snmpEngine, snmpContext)
cmdrsp.BulkCommandResponder(snmpEngine, snmpContext)
# Register an imaginary never-ending job to keep I/O dispatcher running forever
snmpEngine.transportDispatcher.jobStarted(1)
# Run I/O dispatcher which would receive queries and send responses
try:
    snmpEngine.transportDispatcher.runDispatcher()
except:
    snmpEngine.transportDispatcher.closeDispatcher()
    raise
The code runs without producing errors. The varBinds output and the MIB table walk show what I think I should expect:
[2016-12-29 16:42:49,323-INFO]-(SNMPAgent) Starting....
[2016-12-29 16:42:49,470-DEBUG]-(SNMPAgent) Loading HOST-RESOURCES-MIB module...
[2016-12-29 16:42:49,631-DEBUG]-(SNMPAgent) done
[2016-12-29 16:42:49,631-DEBUG]-(SNMPAgent) Building MIB tree...
[2016-12-29 16:42:49,631-DEBUG]-(SNMPAgent) done
[2016-12-29 16:42:49,631-DEBUG]-(SNMPAgent) Building table entry index from human-friendly representation...
[2016-12-29 16:42:49,631-DEBUG]-(SNMPAgent) done
[2016-12-29 16:42:49,632-DEBUG]-(SNMPAgent) Create/update HOST-RESOURCES-MIB::hrSWRunTable table row:
1.3.6.1.2.1.25.4.2.1.1.1 = 1
[2016-12-29 16:42:49,651-DEBUG]-(SNMPAgent) done
1.3.6.1.2.1.25.4.2.1.2.1 = TradeLoader
1.3.6.1.2.1.25.4.2.1.3.1 = 0
1.3.6.1.2.1.25.4.2.1.4.1 = All is well
1.3.6.1.2.1.25.4.2.1.5.1 = If this was not the case it would say so here
1.3.6.1.2.1.25.4.2.1.6.1 = 'application'
1.3.6.1.2.1.25.4.2.1.7.1 = 'running'
[2016-12-29 16:42:49,651-DEBUG]-(SNMPAgent) Read whole MIB (table walk)
1.3.6.1.2.1.25.4.2.1.1.1 = 1
1.3.6.1.2.1.25.4.2.1.2.1 = TradeLoader
1.3.6.1.2.1.25.4.2.1.3.1 = 0
1.3.6.1.2.1.25.4.2.1.4.1 = All is well
1.3.6.1.2.1.25.4.2.1.5.1 = If this was not the case it would say so here
1.3.6.1.2.1.25.4.2.1.6.1 = 'application'
1.3.6.1.2.1.25.4.2.1.7.1 = 'running'
1.3.6.1.2.1.25.5.1.1.1.1 = <no value>
1.3.6.1.2.1.25.5.1.1.2.1 = <no value>
[2016-12-29 16:42:53,490-DEBUG]-(SNMPAgent) done
Finally, the dispatcher is started.
The problem is that when I try to query the agent, nothing happens: I get no response. Looking at my code, one obvious thing about it is that I never explicitly link the snmpEngine to the MIB I created. Should I do this?
Any insight would be gratefully received, as I'm struggling to understand where to go at the moment.
I thought I would just post an answer to my own question, as it took me so long to figure out how to do what I need. Hopefully someone else will find this useful. The following code allows me to populate any MIB known to pysnmp and then make that MIB available to the network as a v2c SNMP agent.
import logging
from pysnmp import debug
from pysnmp.carrier.asyncore.dgram import udp
from pysnmp.entity import engine, config
from pysnmp.entity.rfc3413 import cmdrsp, context
from pysnmp.proto.api import v2c
from pysnmp.smi import builder, instrum, exval
# Uncomment this to turn pysnmp debugging on
#debug.setLogger(debug.Debug('all'))
formatting = '[%(asctime)s-%(levelname)s]-(%(module)s) %(message)s'
logging.basicConfig(level=logging.DEBUG, format=formatting, )
logging.info("Starting....")
# Create SNMP engine
snmpEngine = engine.SnmpEngine()
# Transport setup
# UDP over IPv4
config.addTransport(
    snmpEngine,
    udp.domainName,
    udp.UdpTransport().openServerMode(('0.0.0.0', 12345))
)
# SNMPv2c setup
# SecurityName <-> CommunityName mapping.
config.addV1System(snmpEngine, 'my-area', 'public')
# Allow read MIB access for this user / securityModels at VACM
# Limit access to just the custom MIB. Widen if need be
config.addVacmUser(snmpEngine,
                   2,
                   'my-area',
                   'noAuthNoPriv',
                   (1, 3, 6, 1, 2, 1, 25, 4),
                   (1, 3, 6, 1, 2, 1, 25, 4))
# Create an SNMP context and ensure the custom MIB is loaded
# Your system must have this MIB installed otherwise pysnmp
# can't load it!
snmpContext = context.SnmpContext(snmpEngine)
logging.debug('Loading HOST-RESOURCES-MIB module...')
mibBuilder = snmpContext.getMibInstrum().getMibBuilder()
mibBuilder.loadModules('HOST-RESOURCES-MIB')
mibInstrum = snmpContext.getMibInstrum()
logging.debug('done')
logging.debug('Building table entry index from human-friendly representation...')
# see http://www.oidview.com/mibs/0/HOST-RESOURCES-MIB.html
hostRunTable, = mibBuilder.importSymbols('HOST-RESOURCES-MIB', 'hrSWRunEntry')
instanceId = hostRunTable.getInstIdFromIndices(1)
logging.debug('done')
# The following shows the OID name mapping
#
# hrSWRunTable 1.3.6.1.2.1.25.4.2 <TABLE>
# hrSWRunEntry 1.3.6.1.2.1.25.4.2.1 <SEQUENCE>
# hrSWRunIndex 1.3.6.1.2.1.25.4.2.1.1 <Integer32>
# hrSWRunName 1.3.6.1.2.1.25.4.2.1.2 <InternationalDisplayString> 64 Char
# hrSWRunID 1.3.6.1.2.1.25.4.2.1.3 <ProductID>
# hrSWRunPath 1.3.6.1.2.1.25.4.2.1.4 <InternationalDisplayString> 128 octets
# hrSWRunParameters 1.3.6.1.2.1.25.4.2.1.5 <InternationalDisplayString> 128 octets
# hrSWRunType 1.3.6.1.2.1.25.4.2.1.6 <INTEGER>
# hrSWRunStatus       1.3.6.1.2.1.25.4.2.1.7 <INTEGER> <<===== This is the key variable used by OpenNMS
# We are going to use OpenNMS as the SNMP manager. OpenNMS will poll this agent to check on its status. The manual
# states:
#
# "This monitor tests the running state of one or more processes. It does this using SNMP and by inspecting the
# hrSwRunTable of the HOST-RESOURCES-MIB. The test is done by matching a given process as hrSWRunName against
# the numeric value of the hrSWRunStatus". hrSWRunName is matched against the process name defined in the OpenNMS
# config file under the heading "service-name". hrSWRunStatus is set to whatever your desired status is. OpenNMS
# will compare this value against the config file variable run-level. If hrSWRunStatus > run-level the process
# will be marked as having problems. for the complete page see:
# http://docs.opennms.org/opennms/releases/18.0.1/guide-admin/guide-admin.html#_hostresourceswrunmonitor
# I have made up an enterprise OID for us. The number is moot, as it's not going to go anywhere, but the code
# needs something valid.
# The enterprise OID I have chosen is:
enterpriseMib = (1, 3, 6, 1, 4, 1, 50000, 0)
logging.debug('Create/update HOST-RESOURCES-MIB::hrSWRunTable table row:')
varBinds = mibInstrum.writeVars((
    (hostRunTable.name + (1,) + instanceId, 1),
    (hostRunTable.name + (2,) + instanceId, 'TradeLoader'),  # <=== Must match OpenNMS service-name variable
    (hostRunTable.name + (3,) + instanceId, enterpriseMib),
    (hostRunTable.name + (4,) + instanceId, 'All is well'),
    (hostRunTable.name + (5,) + instanceId, 'If this was not the case it would say so here'),
    (hostRunTable.name + (6,) + instanceId, 4),  # Values are ==> unknown(1), operatingSystem(2), deviceDriver(3), application(4)
    (hostRunTable.name + (7,) + instanceId, 1)   # <<=== This is the status number OpenNMS looks at. Values are ==> running(1), runnable(2), notRunnable(3), invalid(4)
))
# --- end of table population ---
logging.debug('Confirm that the data has been set by reading whole MIB (table walk)')
oid, val = (), None
while True:
    oid, val = mibInstrum.readNextVars(((oid, val),))[0]
    if exval.endOfMib.isSameTypeWith(val):
        break
    print('%s = %s' % ('.'.join([str(x) for x in oid]), val.prettyPrint()))
logging.debug('done')
# Register SNMP Applications at the SNMP engine for particular SNMP context
cmdrsp.GetCommandResponder(snmpEngine, snmpContext)
cmdrsp.SetCommandResponder(snmpEngine, snmpContext)
cmdrsp.NextCommandResponder(snmpEngine, snmpContext)
cmdrsp.BulkCommandResponder(snmpEngine, snmpContext)
# Register an imaginary never-ending job to keep I/O dispatcher running forever
snmpEngine.transportDispatcher.jobStarted(1)
# Run I/O dispatcher which would receive queries and send responses
try:
    snmpEngine.transportDispatcher.runDispatcher()
except:
    snmpEngine.transportDispatcher.closeDispatcher()
    raise
The logger messages show
[2017-01-09 16:30:15,401-INFO]-(SNMPAgent) Starting....
[2017-01-09 16:30:15,490-DEBUG]-(SNMPAgent) Loading HOST-RESOURCES-MIB module...
[2017-01-09 16:30:15,513-DEBUG]-(SNMPAgent) done
[2017-01-09 16:30:15,513-DEBUG]-(SNMPAgent) Building table entry index from human-friendly representation...
[2017-01-09 16:30:15,515-DEBUG]-(SNMPAgent) done
[2017-01-09 16:30:15,515-DEBUG]-(SNMPAgent) Create/update HOST-RESOURCES-MIB::hrSWRunTable table row:
[2017-01-09 16:30:15,536-DEBUG]-(SNMPAgent) Confirm that the data has been set by reading whole MIB (table walk)
1.3.6.1.2.1.25.4.2.1.1.1 = 1
1.3.6.1.2.1.25.4.2.1.2.1 = TradeLoader
1.3.6.1.2.1.25.4.2.1.3.1 = 1.3.6.1.4.1.50000.0
1.3.6.1.2.1.25.4.2.1.4.1 = All is well
1.3.6.1.2.1.25.4.2.1.5.1 = If this was not the case it would say so here
1.3.6.1.2.1.25.4.2.1.6.1 = 'application'
1.3.6.1.2.1.25.4.2.1.7.1 = 'running'
1.3.6.1.2.1.25.5.1.1.1.1 = <no value>
1.3.6.1.2.1.25.5.1.1.2.1 = <no value>
1.3.6.1.6.3.10.2.1.1.0 = 0x80004fb805049c06c8
1.3.6.1.6.3.10.2.1.2.0 = 2
1.3.6.1.6.3.10.2.1.3.0 = 0
1.3.6.1.6.3.10.2.1.4.0 = 65507
1.3.6.1.6.3.16.1.1.1.1.0 =
1.3.6.1.6.3.16.1.2.1.1.2.7.109.121.45.97.114.101.97 = 2
1.3.6.1.6.3.16.1.2.1.2.2.7.109.121.45.97.114.101.97 = my-area
1.3.6.1.6.3.16.1.2.1.3.2.7.109.121.45.97.114.101.97 = v-1203634843-2
1.3.6.1.6.3.16.1.2.1.4.2.7.109.121.45.97.114.101.97 = 'nonVolatile'
1.3.6.1.6.3.16.1.2.1.5.2.7.109.121.45.97.114.101.97 = 'active'
1.3.6.1.6.3.16.1.4.1.1.14.118.45.49.50.48.51.54.51.52.56.52.51.45.50.0.2.1 =
1.3.6.1.6.3.16.1.4.1.2.14.118.45.49.50.48.51.54.51.52.56.52.51.45.50.0.2.1 = 2
1.3.6.1.6.3.16.1.4.1.3.14.118.45.49.50.48.51.54.51.52.56.52.51.45.50.0.2.1 = 'noAuthNoPriv'
1.3.6.1.6.3.16.1.4.1.4.14.118.45.49.50.48.51.54.51.52.56.52.51.45.50.0.2.1 = 'exact'
1.3.6.1.6.3.16.1.4.1.5.14.118.45.49.50.48.51.54.51.52.56.52.51.45.50.0.2.1 = rv-1203634843-2
1.3.6.1.6.3.16.1.4.1.6.14.118.45.49.50.48.51.54.51.52.56.52.51.45.50.0.2.1 = wv-1203634843-2
1.3.6.1.6.3.16.1.4.1.7.14.118.45.49.50.48.51.54.51.52.56.52.51.45.50.0.2.1 = nv-1203634843-2
1.3.6.1.6.3.16.1.4.1.8.14.118.45.49.50.48.51.54.51.52.56.52.51.45.50.0.2.1 = 'nonVolatile'
1.3.6.1.6.3.16.1.4.1.9.14.118.45.49.50.48.51.54.51.52.56.52.51.45.50.0.2.1 = 'active'
1.3.6.1.6.3.16.1.5.2.1.1.15.114.118.45.49.50.48.51.54.51.52.56.52.51.45.50.11.1.3.6.1.2.1.25.4 = rv-1203634843-2
1.3.6.1.6.3.16.1.5.2.1.1.15.119.118.45.49.50.48.51.54.51.52.56.52.51.45.50.11.1.3.6.1.2.1.25.4 = wv-1203634843-2
1.3.6.1.6.3.16.1.5.2.1.2.15.114.118.45.49.50.48.51.54.51.52.56.52.51.45.50.11.1.3.6.1.2.1.25.4 = 1.3.6.1.2.1.25.4
1.3.6.1.6.3.16.1.5.2.1.2.15.119.118.45.49.50.48.51.54.51.52.56.52.51.45.50.11.1.3.6.1.2.1.25.4 = 1.3.6.1.2.1.25.4
1.3.6.1.6.3.16.1.5.2.1.3.15.114.118.45.49.50.48.51.54.51.52.56.52.51.45.50.11.1.3.6.1.2.1.25.4 =
1.3.6.1.6.3.16.1.5.2.1.3.15.119.118.45.49.50.48.51.54.51.52.56.52.51.45.50.11.1.3.6.1.2.1.25.4 =
1.3.6.1.6.3.16.1.5.2.1.4.15.114.118.45.49.50.48.51.54.51.52.56.52.51.45.50.11.1.3.6.1.2.1.25.4 = 'included'
1.3.6.1.6.3.16.1.5.2.1.4.15.119.118.45.49.50.48.51.54.51.52.56.52.51.45.50.11.1.3.6.1.2.1.25.4 = 'included'
1.3.6.1.6.3.16.1.5.2.1.5.15.114.118.45.49.50.48.51.54.51.52.56.52.51.45.50.11.1.3.6.1.2.1.25.4 = 'nonVolatile'
1.3.6.1.6.3.16.1.5.2.1.5.15.119.118.45.49.50.48.51.54.51.52.56.52.51.45.50.11.1.3.6.1.2.1.25.4 = 'nonVolatile'
1.3.6.1.6.3.16.1.5.2.1.6.15.114.118.45.49.50.48.51.54.51.52.56.52.51.45.50.11.1.3.6.1.2.1.25.4 = 'active'
1.3.6.1.6.3.16.1.5.2.1.6.15.119.118.45.49.50.48.51.54.51.52.56.52.51.45.50.11.1.3.6.1.2.1.25.4 = 'active'
1.3.6.1.6.3.18.1.1.1.1.109.121.45.97.114.101.97 = my-area
1.3.6.1.6.3.18.1.1.1.2.109.121.45.97.114.101.97 = public
1.3.6.1.6.3.18.1.1.1.3.109.121.45.97.114.101.97 = my-area
1.3.6.1.6.3.18.1.1.1.4.109.121.45.97.114.101.97 = 0x80004fb805049c06c8
1.3.6.1.6.3.18.1.1.1.5.109.121.45.97.114.101.97 =
1.3.6.1.6.3.18.1.1.1.6.109.121.45.97.114.101.97 =
1.3.6.1.6.3.18.1.1.1.7.109.121.45.97.114.101.97 = 'nonVolatile'
1.3.6.1.6.3.18.1.1.1.8.109.121.45.97.114.101.97 = 'active'
[2017-01-09 16:30:15,683-DEBUG]-(SNMPAgent) done
And the agent can be queried to get the results as follows:
snmpwalk -v 2c -c public -n my-context 0.0.0.0:12345 1.3.6
HOST-RESOURCES-MIB::hrSWRunIndex.1 = INTEGER: 1
HOST-RESOURCES-MIB::hrSWRunName.1 = STRING: "TradeLoader"
HOST-RESOURCES-MIB::hrSWRunID.1 = OID: SNMPv2-SMI::enterprises.50000.0
HOST-RESOURCES-MIB::hrSWRunPath.1 = STRING: "All is well"
HOST-RESOURCES-MIB::hrSWRunParameters.1 = STRING: "If this was not the case it would say so here"
HOST-RESOURCES-MIB::hrSWRunType.1 = INTEGER: application(4)
HOST-RESOURCES-MIB::hrSWRunStatus.1 = INTEGER: running(1)
HOST-RESOURCES-MIB::hrSWRunStatus.1 = No more variables left in this MIB View (It is past the end of the MIB tree)
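For completeness, the same sanity check can be scripted with pysnmp's high-level API instead of net-snmp's snmpwalk. A minimal sketch, assuming the agent above is listening on 127.0.0.1:12345 with community 'public' (pysnmp 4.x):
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

# Query the one hrSWRunStatus row the agent populated (instance index 1).
errorIndication, errorStatus, errorIndex, varBinds = next(
    getCmd(SnmpEngine(),
           CommunityData('public', mpModel=1),  # mpModel=1 selects SNMPv2c
           UdpTransportTarget(('127.0.0.1', 12345)),
           ContextData(),
           ObjectType(ObjectIdentity('HOST-RESOURCES-MIB', 'hrSWRunStatus', 1))))

if errorIndication:
    print(errorIndication)
else:
    for varBind in varBinds:
        print(' = '.join([x.prettyPrint() for x in varBind]))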
You say:
The problem is that when I try to query the agent, nothing happens: I get no response.
It looks like two problems:
a) localhost resolves to the loopback address 127.0.0.1 on my system, so you can only access the agent from the same system it runs on.
b) you register as SNMP version 2 in addVacmUser, so you cannot access it with v1.
After that:
% snmpwalk -v 2c -c public 127.0.0.1:12345
SNMPv2-MIB::sysDescr.0 = STRING: PySNMP engine version 4.3.2, Python 2.7.5 (default, Nov 3 2014, 14:33:39) [GCC 4.8.3 20140911 (Red Hat 4.8.3-7)]
SNMPv2-MIB::sysObjectID.0 = OID: SNMPv2-SMI::enterprises.20408
DISMAN-EVENT-MIB::sysUpTimeInstance = Timeticks: (1) 0:00:00.01
SNMPv2-MIB::sysContact.0 = STRING:
...
although no HOST-RESOURCES-MIB objects appear. That is yet another problem.
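That remaining problem is the very link the asker suspected was missing: the question's first listing populates a standalone MibBuilder, while the command responders serve the MIB instrumentation owned by the SnmpContext. The self-answer above already contains the fix; the essential change is just these lines, which populate the context's own instrumentation:
# Take the builder/instrumentation from the SnmpContext so that
# writeVars() fills the very tree the command responders serve.
mibBuilder = snmpContext.getMibInstrum().getMibBuilder()
mibBuilder.loadModules('HOST-RESOURCES-MIB')
mibInstrum = snmpContext.getMibInstrum()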
Related
So I'm trying to collect routing stats from some Aristas.
When I run snmpwalk it all seems to work...
snmpwalk -v2c -c pub router.host ARISTA-FIB-STATS-MIB::aristaFIBStatsTotalRoutesForRouteType
ARISTA-FIB-STATS-MIB::aristaFIBStatsTotalRoutesForRouteType.ipv4.other = Gauge32: 3
ARISTA-FIB-STATS-MIB::aristaFIBStatsTotalRoutesForRouteType.ipv4.connected = Gauge32: 8
ARISTA-FIB-STATS-MIB::aristaFIBStatsTotalRoutesForRouteType.ipv4.static = Gauge32: 26
ARISTA-FIB-STATS-MIB::aristaFIBStatsTotalRoutesForRouteType.ipv4.ospf = Gauge32: 542
ARISTA-FIB-STATS-MIB::aristaFIBStatsTotalRoutesForRouteType.ipv4.bgp = Gauge32: 1623
ARISTA-FIB-STATS-MIB::aristaFIBStatsTotalRoutesForRouteType.ipv4.attached = Gauge32: 12
ARISTA-FIB-STATS-MIB::aristaFIBStatsTotalRoutesForRouteType.ipv4.internal = Gauge32: 25
ARISTA-FIB-STATS-MIB::aristaFIBStatsTotalRoutesForRouteType.ipv6.other = Gauge32: 3
ARISTA-FIB-STATS-MIB::aristaFIBStatsTotalRoutesForRouteType.ipv6.internal = Gauge32: 1
But when I try to pull the stats with telegraf I get different information with missing context...
BGP,agent_host=10.45.100.20,host=nw01.ny5,hostname=CR.NY aristaFIBStatsTotalRoutesForRouteType=2i 1654976575000000000
BGP,agent_host=10.45.100.20,host=nw01.ny5,hostname=CR.NY aristaFIBStatsTotalRoutes=2260i 1654976575000000000
BGP,agent_host=10.45.100.20,host=nw01.ny5,hostname=CR.NY aristaFIBStatsTotalRoutesForRouteType=8i 1654976575000000000
BGP,agent_host=10.45.100.20,host=nw01.ny5,hostname=CR.NY aristaFIBStatsTotalRoutesForRouteType=63i 1654976575000000000
According to the MIB documentation
https://www.arista.com/assets/data/docs/MIBS/ARISTA-FIB-STATS-MIB.txt
it uses the IANA-RTPROTO-MIB.txt protocol definitions, but I have no idea where to derive that information from, as the data retrieved via telegraf isn't showing me anything. Does anyone know how to deal with this?
First, you might want to enable telegraf to return the index of the returned rows by setting index_as_tag = true inside the inputs.snmp.table.
Then, add the following processors in your config:
# Parse aristaFIBStatsAF and aristaFIBStatsRouteType from index for BGP table
[[processors.regex]]
namepass = ["BGP"]
order = 1
[[processors.regex.tags]]
## Tag to change
key = "index"
## Regular expression to match on a tag value
pattern = "^(\\d+)\\.(\\d+)$"
replacement = "${1}"
## Tag to store the result
result_key = "aristaFIBStatsAF"
[[processors.regex.tags]]
## Tag to change
key = "index"
## Regular expression to match on a tag value
pattern = "^(\\d+)\\.(\\d+)$"
replacement = "${2}"
## Tag to store the result
result_key = "aristaFIBStatsRouteType"
# Rename index to aristaFIBStatsAF for BGP table with single index row
[[processors.rename]]
namepass = ["BGP"]
order = 2
[[processors.rename.replace]]
tag = "index"
dest = "aristaFIBStatsAF"
[processors.rename.tagdrop]
aristaFIBStatsAF = ["*"]
# Translate tag values for BGP table
[[processors.enum]]
namepass = ["BGP"]
order = 3
tagexclude = ["index"]
[[processors.enum.mapping]]
## Name of the tag to map
tag = "aristaFIBStatsAF"
## Table of mappings
[processors.enum.mapping.value_mappings]
0 = "unknown"
1 = "ipv4"
2 = "ipv6"
[[processors.enum.mapping]]
## Name of the tag to map
tag = "aristaFIBStatsRouteType"
## Table of mappings
[processors.enum.mapping.value_mappings]
1 = "other"
2 = "connected"
3 = "static"
8 = "rip"
9 = "isIs"
13 = "ospf"
14 = "bgp"
200 = "ospfv3"
201 = "staticNonPersistent"
202 = "staticNexthopGroup"
203 = "attached"
204 = "vcs"
205 = "internal"
Disclaimer: I did not test this in telegraf, so there might be some typos.
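To make the index handling concrete, here is a small Python sketch (an illustration only, not part of the telegraf config) of the same transformation the processors perform: the row index of aristaFIBStatsTotalRoutesForRouteType encodes "<address family>.<route type>", each regex processor keeps one captured group, and the enum processor then maps the numbers to names.
import re

index = "1.14"  # example row index: address family 1 (ipv4), route type 14 (bgp)
af_code, route_code = re.match(r"^(\d+)\.(\d+)$", index).groups()

AF = {0: "unknown", 1: "ipv4", 2: "ipv6"}
ROUTE_TYPE = {1: "other", 2: "connected", 3: "static", 8: "rip", 9: "isIs",
              13: "ospf", 14: "bgp", 200: "ospfv3", 201: "staticNonPersistent",
              202: "staticNexthopGroup", 203: "attached", 204: "vcs", 205: "internal"}

print(AF[int(af_code)], ROUTE_TYPE[int(route_code)])  # -> ipv4 bgp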
I am new to Veins and I would like to mount a DoS attack using a flooding technique. I have tried sending a message that is used in case of an accident, say, a million times from a specific car. Is this enough to make a DoS attack? Can I make this code more sophisticated?
void TraCIDemo11p::handlePositionUpdate(cObject* obj) {
    BaseWaveApplLayer::handlePositionUpdate(obj);
    if (externalID == "2") { // 2 is the attacker
        for (int i = 0; i < 1000000; i++)
            sendMessage(mobility->getRoadId());
    }
}
Note: I am using OMNeT++ 5.0, SUMO 0.25.0 and Veins 4.4; the code above is in TraCIDemo11p.cc.
For the NED file:
import inet.applications.dhcp.DhcpServer;
import inet.node.dsdv.DsdvRouter;
import inet.node.inet.AdhocHost;
import inet.node.inet.MulticastRouter;
import inet.node.inet.WirelessHost;
import inet.networklayer.configurator.ipv4.Ipv4NetworkConfigurator;
import inet.node.aodv.AodvRouter;
import inet.node.wireless.AccessPoint;
import inet.physicallayer.ieee80211.packetlevel.Ieee80211ScalarRadioMedium;
import inet.visualizer.integrated.IntegratedVisualizer;
import inet.visualizer.networklayer.NetworkRouteVisualizer;
import inet.visualizer.integrated.IntegratedMultiVisualizer;
import ned.DelayChannel;
network pingattack
{
    parameters:
        int numhost;
        int numattacker;
    submodules:
        visualizer: IntegratedMultiVisualizer {
            @display("p=14,295");
        }
        configurator: Ipv4NetworkConfigurator {
            //config = default(xml("<config><interface WirelessHost='**' address='10.0.0.x' netmask='255.255.255.0'/></config>"));
            @display("p=42,430");
        }
        radioMedium: Ieee80211ScalarRadioMedium {
            @display("p=14,339");
        }
        Attacker[numattacker]: WirelessHost {
            @display("p=180,331");
        }
        Master: WirelessHost {
            @display("p=274,316");
        }
        Slaves[numhost]: WirelessHost {
            @display("p=313,247");
        }
        ap: AccessPoint {
            @display("p=244,246");
        }
}
For the INI file:
[General]
description = Displaying Ping Attack
network = pingattack
# Setting up the max area which the modules are able to travel to
# "Z" limits the height. It can only be observed in 3D
**.constraintAreaMinX = 0m
**.constraintAreaMinY = 0m
**.constraintAreaMinZ = 0m
**.updateInterval = 0.1s # test with 0s too, and let getCurrentPosition update the display string from a test module
# Does not specify the initial positions. A random initial position will be chosen within the constraint area,
# unless it is specified in the display string in the NED file or via initialX, initialY, initialZ
**.mobility.initFromDisplayString = false
# Setting the default ARP type to "GlobalArp"
**.arp.typename = "GlobalArp"
# Attacker parameters
*.Attacker[*].numApps = 1 # Number of application layers on attackers
*.Attacker[*].app[0].typename = "PingApp" # Application type for attackers
*.Attacker[*].app[0].destAddr = "Master" # Set ping destination
*.Attacker[*].app[0].startTime = 10s # Initialize start time of ping
# Master Communication
*.Master.numApps = 1 # Number of application layers on master
*.Master.app[0].typename = "PingApp" # Application type for master
# MasterDrone Mobility
*.Master.mobility.typename = "LinearMobility" # Master move at constant speed
*.Master.mobility.speed = 20mps # of 20mps
# Slave Communication
*.Slaves[*].numApps = 1 # Number of application layers on slaves
*.Slaves[*].app[0].typename = "PingApp" # Application type for slaves
*.Slaves[*].app[0].destAddr = "Master" # Set ping destination, to ensure connection with master
*.Slaves[*].app[0].startTime = replaceUnit (0.1*(parentIndex()), "s") # to avoid synchronization
#*.Slaves[*].app[0].sendInterval= 1s # Slaves send ping every 1 second
# Slave mobility
*.Slaves[*].mobility.typename = "MassMobility" # Slaves move randomly
*.Slaves[*].mobility.changeInterval = truncnormal(2s, 0.5s)
*.Slaves[*].mobility.angleDelta = normal(0deg, 30deg)
#*.Slaves[*].mobility.speed = 15mps
# Wlan Config
*.Master.wlan[*].radio.transmitter.power = 10mW # Setting up Master, slaves, attacker wlan transmit power
*.ap.wlan[*].radio.transmitter.power = 100mW
# Pcap recording
**.crcMode = "computed" # To include CRC values in capture files
**.fcsMode = "computed" # To include FCS values in capture files
**.numPcapRecorders = 1 # To include the PcapRecorder module
**.pcapRecorder[*].pcapNetwork = 105 # Set PCAP files link-layer header type to 802.11
**.pcapRecorder[*].pcapFile = "results/all.pcap" # Specifying file to write traces in & enable packet capture
**.pcapRecorder[*].verbose = true # To print tcpdump-like textual information to the log (EV)
**.pcapRecorder[*].alwaysFlush = true # Record the packets even if simulation crashes
**.pcapRecorder[*].packetFilter = "ping*"
#Analysis
*.*.wlan[*].**.vector-recording = false
# Visualizer parameters
# Displaying network path activity
*.visualizer.*.numDataLinkVisualizers = 2
*.visualizer.*.numInterfaceTableVisualizers = 2
*.visualizer.*.dataLinkVisualizer[0].displayLinks = true
*.visualizer.*.dataLinkVisualizer[0].packetFilter = "*ping*"
#*.visualizer.*.physicalLinkVisualizer[*].displayLinks = true
#*.visualizer.*.interfaceTableVisualizer[*].displayInterfaceTables = true
#*.visualizer.*.interfaceTableVisualizer[0].format = "%4"
*.visualizer.*.infoVisualizer[0].modules = "*.*.app[0]"
*.visualizer.*.infoVisualizer[1].modules = "*.*.app[1]"
#*.visualizer.*.infoVisualizer[*].format = "%t"
#*.visualizer.*.statisticVisualizer[0].sourceFilter = "**.app[*]"
#*.visualizer.*.statisticVisualizer[0].signalName = "rtt"
#*.visualizer.*.statisticVisualizer[0].unit = "ms"
#*.visualizer.*.infoVisualizer[*].placementHint = "topCenter"
#*.visualizer.*.packetDropVisualizer[*].displayPacketDrops = true
#*.visualizer.*.packetDropVisualizer[*].packetFilter = "ping*"
#*.visualizer.*.packetDropVisualizer[*].labelFormat = "%n/reason: %r"
#*.visualizer.*.packetDropVisualizer[*].fadeOutTime = 3s
#edit
# Set number of Attacker
# Set number of Host
# Set Attacker Ping interval
# Set transmission power of Master, Slaves, Attacker and AP
# Set Max constrain area XYZ
sim-time-limit = 20s
pingattack.numhost = 5
pingattack.numattacker = 1
*.Attacker[*].app[0].sendInterval = 0.0001s
**.constraintAreaMaxX = 2000m
**.constraintAreaMaxY = 2000m
**.constraintAreaMaxZ = 2000m
*.Slaves[*].wlan[*].radio.transmitter.power = 100mW
*.Attacker[*].wlan[*].radio.transmitter.power = 200mW
*.Attacker[*].**.vector-recording = false
*.Slaves[*].mobility.speed = 16mps
Hi, I did a DoS attack using PingApp. I think you can refer to the source code of PingApp; the important part is to set sendInterval to 0.0001s. Hope this helps!
I am brand new to PostgreSQL and I am having trouble returning a simple SELECT * FROM TABLE; query.
The problem I am encountering is that the client times out and I receive out-of-memory (OOM) errors when trying to return a large number of rows, in this case roughly 16 million.
I have tested the query on the server itself by executing it via psql on the command line, and the query returns the full result set in about 10 minutes.
Also, I have tested this using the pgAdmin client on a Macbook and I was able to return the full result set using that setup in about the same amount of time as on the server.
However, when using a Windows client (JetBrains DataGrip; I have also tried MySQL Lite & pgAdmin with the same results), I am unable to return the full table, whether I query for it using SELECT * FROM TABLE_NAME; or try to load the table/datagrid in the client itself.
This leads me to believe that this is a client-related issue, but if it is, I am hoping someone can provide some insight, because at this point I am stumped as to why I cannot return this ~16 million row table/datagrid.
I am also having difficulty logging the problem, since the query times out or runs out of memory.
Any suggestions, insight, and/or guidance is greatly appreciated.
SPECS
Connecting Oracle 12c to PostgreSQL 9.3.5 using the Oracle Foreign Data Wrapper (oracle_fdw v1.0)
VMWare
Debian GNU/Linux 7
Psql (9.3.5)
16GB RAM
4 CPUs
Here is my PostgreSql config:
# -----------------------------
# PostgreSQL configuration file
# -----------------------------
#
# This file consists of lines of the form:
#
# name = value
#
# (The "=" is optional.) Whitespace may be used. Comments are introduced with
# "#" anywhere on a line. The complete list of parameter names and allowed
# values can be found in the PostgreSQL documentation.
#
# The commented-out settings shown in this file represent the default values.
# Re-commenting a setting is NOT sufficient to revert it to the default value;
# you need to reload the server.
#
# This file is read on server startup and when the server receives a SIGHUP
# signal. If you edit the file on a running system, you have to SIGHUP the
# server for the changes to take effect, or use "pg_ctl reload". Some
# parameters, which are marked below, require a server shutdown and restart to
# take effect.
#
# Any parameter can also be given as a command-line option to the server, e.g.,
# "postgres -c log_connections=on". Some parameters can be changed at run time
# with the "SET" SQL command.
#
# Memory units: kB = kilobytes Time units: ms = milliseconds
# MB = megabytes s = seconds
# GB = gigabytes min = minutes
# h = hours
# d = days
#------------------------------------------------------------------------------
# FILE LOCATIONS
#------------------------------------------------------------------------------
# The default values of these variables are driven from the -D command-line
# option or PGDATA environment variable, represented here as ConfigDir.
data_directory = '/var/lib/postgresql/9.3/main' # use data in another directory
# (change requires restart)
hba_file = '/etc/postgresql/9.3/main/pg_hba.conf' # host-based authentication file
# (change requires restart)
ident_file = '/etc/postgresql/9.3/main/pg_ident.conf' # ident configuration file
# (change requires restart)
# If external_pid_file is not explicitly set, no extra PID file is written.
external_pid_file = '/var/run/postgresql/9.3-main.pid' # write an extra PID file
# (change requires restart)
#------------------------------------------------------------------------------
# CONNECTIONS AND AUTHENTICATION
#------------------------------------------------------------------------------
# - Connection Settings -
listen_addresses = '*' # what IP address(es) to listen on;
# comma-separated list of addresses;
# defaults to 'localhost'; use '*' for all
# (change requires restart)
port = 5432 # (change requires restart)
max_connections = 100 # (change requires restart)
# Note: Increasing max_connections costs ~400 bytes of shared memory per
# connection slot, plus lock space (see max_locks_per_transaction).
#superuser_reserved_connections = 3 # (change requires restart)
unix_socket_directories = '/var/run/postgresql' # comma-separated list of directories
# (change requires restart)
#unix_socket_group = '' # (change requires restart)
#unix_socket_permissions = 0777 # begin with 0 to use octal notation
# (change requires restart)
#bonjour = off # advertise server via Bonjour
# (change requires restart)
#bonjour_name = '' # defaults to the computer name
# (change requires restart)
# - Security and Authentication -
#authentication_timeout = 1min # 1s-600s
ssl = true # (change requires restart)
#ssl_ciphers = 'DEFAULT:!LOW:!EXP:!MD5:#STRENGTH' # allowed SSL ciphers
# (change requires restart)
#ssl_renegotiation_limit = 512MB # amount of data between renegotiations
ssl_cert_file = '/etc/ssl/certs/ssl-cert-snakeoil.pem' # (change requires restart)
ssl_key_file = '/etc/ssl/private/ssl-cert-snakeoil.key' # (change requires restart)
#ssl_ca_file = '' # (change requires restart)
#ssl_crl_file = '' # (change requires restart)
#password_encryption = on
#db_user_namespace = off
# Kerberos and GSSAPI
#krb_server_keyfile = ''
#krb_srvname = 'postgres' # (Kerberos only)
#krb_caseins_users = off
# - TCP Keepalives -
# see "man 7 tcp" for details
#tcp_keepalives_idle = 0 # TCP_KEEPIDLE, in seconds;
# 0 selects the system default
#tcp_keepalives_interval = 0 # TCP_KEEPINTVL, in seconds;
# 0 selects the system default
#tcp_keepalives_count = 0 # TCP_KEEPCNT;
# 0 selects the system default
#------------------------------------------------------------------------------
# RESOURCE USAGE (except WAL)
#------------------------------------------------------------------------------
# - Memory -
shared_buffers = 128MB # min 128kB
# (change requires restart)
#temp_buffers = 8MB # min 800kB
#max_prepared_transactions = 0 # zero disables the feature
# (change requires restart)
# Note: Increasing max_prepared_transactions costs ~600 bytes of shared memory
# per transaction slot, plus lock space (see max_locks_per_transaction).
# It is not advisable to set max_prepared_transactions nonzero unless you
# actively intend to use prepared transactions.
#work_mem = 1MB # min 64kB
#maintenance_work_mem = 16MB # min 1MB
#max_stack_depth = 2MB # min 100kB
# - Disk -
#temp_file_limit = -1 # limits per-session temp file space
# in kB, or -1 for no limit
# - Kernel Resource Usage -
#max_files_per_process = 1000 # min 25
# (change requires restart)
#shared_preload_libraries = '' # (change requires restart)
# - Cost-Based Vacuum Delay -
#vacuum_cost_delay = 0 # 0-100 milliseconds
#vacuum_cost_page_hit = 1 # 0-10000 credits
#vacuum_cost_page_miss = 10 # 0-10000 credits
#vacuum_cost_page_dirty = 20 # 0-10000 credits
#vacuum_cost_limit = 200 # 1-10000 credits
# - Background Writer -
#bgwriter_delay = 200ms # 10-10000ms between rounds
#bgwriter_lru_maxpages = 100 # 0-1000 max buffers written/round
#bgwriter_lru_multiplier = 2.0 # 0-10.0 multipler on buffers scanned/round
# - Asynchronous Behavior -
#effective_io_concurrency = 1 # 1-1000; 0 disables prefetching
#------------------------------------------------------------------------------
# WRITE AHEAD LOG
#------------------------------------------------------------------------------
# - Settings -
#wal_level = minimal # minimal, archive, or hot_standby
# (change requires restart)
#fsync = on # turns forced synchronization on or off
#synchronous_commit = on # synchronization level;
# off, local, remote_write, or on
#wal_sync_method = fsync # the default is the first option
# supported by the operating system:
# open_datasync
# fdatasync (default on Linux)
# fsync
# fsync_writethrough
# open_sync
#full_page_writes = on # recover from partial page writes
#wal_buffers = -1 # min 32kB, -1 sets based on shared_buffers
# (change requires restart)
#wal_writer_delay = 200ms # 1-10000 milliseconds
#commit_delay = 0 # range 0-100000, in microseconds
#commit_siblings = 5 # range 1-1000
# - Checkpoints -
#checkpoint_segments = 3 # in logfile segments, min 1, 16MB each
#checkpoint_timeout = 5min # range 30s-1h
#checkpoint_completion_target = 0.5 # checkpoint target duration, 0.0 - 1.0
#checkpoint_warning = 30s # 0 disables
# - Archiving -
#archive_mode = off # allows archiving to be done
# (change requires restart)
#archive_command = '' # command to use to archive a logfile segment
# placeholders: %p = path of file to archive
# %f = file name only
# e.g. 'test ! -f /mnt/server/archivedir/%f && cp %p /mnt/server/archivedir/%f'
#archive_timeout = 0 # force a logfile segment switch after this
# number of seconds; 0 disables
#------------------------------------------------------------------------------
# REPLICATION
#------------------------------------------------------------------------------
# - Sending Server(s) -
# Set these on the master and on any standby that will send replication data.
#max_wal_senders = 0 # max number of walsender processes
# (change requires restart)
#wal_keep_segments = 0 # in logfile segments, 16MB each; 0 disables
#wal_sender_timeout = 60s # in milliseconds; 0 disables
# - Master Server -
# These settings are ignored on a standby server.
#synchronous_standby_names = '' # standby servers that provide sync rep
# comma-separated list of application_name
# from standby(s); '*' = all
#vacuum_defer_cleanup_age = 0 # number of xacts by which cleanup is delayed
# - Standby Servers -
# These settings are ignored on a master server.
#hot_standby = off # "on" allows queries during recovery
# (change requires restart)
#max_standby_archive_delay = 30s # max delay before canceling queries
# when reading WAL from archive;
# -1 allows indefinite delay
#max_standby_streaming_delay = 30s # max delay before canceling queries
# when reading streaming WAL;
# -1 allows indefinite delay
#wal_receiver_status_interval = 10s # send replies at least this often
# 0 disables
#hot_standby_feedback = off # send info from standby to prevent
# query conflicts
#wal_receiver_timeout = 60s # time that receiver waits for
# communication from master
# in milliseconds; 0 disables
#------------------------------------------------------------------------------
# QUERY TUNING
#------------------------------------------------------------------------------
# - Planner Method Configuration -
#enable_bitmapscan = on
#enable_hashagg = on
#enable_hashjoin = on
#enable_indexscan = on
#enable_indexonlyscan = on
#enable_material = on
#enable_mergejoin = on
#enable_nestloop = on
#enable_seqscan = on
#enable_sort = on
#enable_tidscan = on
# - Planner Cost Constants -
#seq_page_cost = 1.0 # measured on an arbitrary scale
#random_page_cost = 4.0 # same scale as above
#cpu_tuple_cost = 0.01 # same scale as above
#cpu_index_tuple_cost = 0.005 # same scale as above
#cpu_operator_cost = 0.0025 # same scale as above
#effective_cache_size = 128MB
# - Genetic Query Optimizer -
#geqo = on
#geqo_threshold = 12
#geqo_effort = 5 # range 1-10
#geqo_pool_size = 0 # selects default based on effort
#geqo_generations = 0 # selects default based on effort
#geqo_selection_bias = 2.0 # range 1.5-2.0
#geqo_seed = 0.0 # range 0.0-1.0
# - Other Planner Options -
#default_statistics_target = 100 # range 1-10000
#constraint_exclusion = partition # on, off, or partition
#cursor_tuple_fraction = 0.1 # range 0.0-1.0
#from_collapse_limit = 8
#join_collapse_limit = 8 # 1 disables collapsing of explicit
# JOIN clauses
#------------------------------------------------------------------------------
# ERROR REPORTING AND LOGGING
#------------------------------------------------------------------------------
# - Where to Log -
#log_destination = 'stderr' # Valid values are combinations of
# stderr, csvlog, syslog, and eventlog,
# depending on platform. csvlog
# requires logging_collector to be on.
# This is used when logging to stderr:
#logging_collector = off # Enable capturing of stderr and csvlog
# into log files. Required to be on for
# csvlogs.
# (change requires restart)
# These are only used if logging_collector is on:
#log_directory = 'pg_log' # directory where log files are written,
# can be absolute or relative to PGDATA
#log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log' # log file name pattern,
# can include strftime() escapes
#log_file_mode = 0600 # creation mode for log files,
# begin with 0 to use octal notation
#log_truncate_on_rotation = off # If on, an existing log file with the
# same name as the new log file will be
# truncated rather than appended to.
# But such truncation only occurs on
# time-driven rotation, not on restarts
# or size-driven rotation. Default is
# off, meaning append to existing files
# in all cases.
#log_rotation_age = 1d # Automatic rotation of logfiles will
# happen after that time. 0 disables.
#log_rotation_size = 10MB # Automatic rotation of logfiles will
# happen after that much log output.
# 0 disables.
# These are relevant when logging to syslog:
#syslog_facility = 'LOCAL0'
#syslog_ident = 'postgres'
# This is only relevant when logging to eventlog (win32):
#event_source = 'PostgreSQL'
# - When to Log -
#client_min_messages = notice # values in order of decreasing detail:
# debug5
# debug4
# debug3
# debug2
# debug1
# log
# notice
# warning
# error
#log_min_messages = warning # values in order of decreasing detail:
# debug5
# debug4
# debug3
# debug2
# debug1
# info
# notice
# warning
# error
# log
# fatal
# panic
#log_min_error_statement = error # values in order of decreasing detail:
log_min_error_statement = error # values in order of decreasing detail:
# debug5
# debug4
# debug3
# debug2
# debug1
# info
# notice
# warning
# error
# log
# fatal
# panic (effectively off)
#log_min_duration_statement = -1 # -1 is disabled, 0 logs all statements
# and their durations, > 0 logs only
# statements running at least this number
# of milliseconds
# - What to Log -
#debug_print_parse = off
#debug_print_rewritten = off
#debug_print_plan = off
#debug_pretty_print = on
#log_checkpoints = off
#log_connections = off
#log_disconnections = off
#log_duration = off
#log_error_verbosity = default # terse, default, or verbose messages
#log_hostname = off
log_line_prefix = '%t ' # special values:
# %a = application name
# %u = user name
# %d = database name
# %r = remote host and port
# %h = remote host
# %p = process ID
# %t = timestamp without milliseconds
# %m = timestamp with milliseconds
# %i = command tag
# %e = SQL state
# %c = session ID
# %l = session line number
# %s = session start timestamp
# %v = virtual transaction ID
# %x = transaction ID (0 if none)
# %q = stop here in non-session
# processes
# %% = '%'
# e.g. '<%u%%%d> '
#log_lock_waits = off # log lock waits >= deadlock_timeout
#log_statement = 'none' # none, ddl, mod, all
log_statement = 'all' # none, ddl, mod, all
#log_temp_files = -1 # log temporary files equal or larger
# than the specified size in kilobytes;
# -1 disables, 0 logs all temp files
log_timezone = 'UTC'
#------------------------------------------------------------------------------
# RUNTIME STATISTICS
#------------------------------------------------------------------------------
# - Query/Index Statistics Collector -
#track_activities = on
#track_counts = on
#track_io_timing = off
#track_functions = none # none, pl, all
#track_activity_query_size = 1024 # (change requires restart)
#update_process_title = on
#stats_temp_directory = 'pg_stat_tmp'
# - Statistics Monitoring -
#log_parser_stats = off
#log_planner_stats = off
#log_executor_stats = off
#log_statement_stats = off
#------------------------------------------------------------------------------
# AUTOVACUUM PARAMETERS
#------------------------------------------------------------------------------
#autovacuum = on # Enable autovacuum subprocess? 'on'
# requires track_counts to also be on.
#log_autovacuum_min_duration = -1 # -1 disables, 0 logs all actions and
# their durations, > 0 logs only
# actions running at least this number
# of milliseconds.
#autovacuum_max_workers = 3 # max number of autovacuum subprocesses
# (change requires restart)
#autovacuum_naptime = 1min # time between autovacuum runs
#autovacuum_vacuum_threshold = 50 # min number of row updates before
# vacuum
#autovacuum_analyze_threshold = 50 # min number of row updates before
# analyze
#autovacuum_vacuum_scale_factor = 0.2 # fraction of table size before vacuum
#autovacuum_analyze_scale_factor = 0.1 # fraction of table size before analyze
#autovacuum_freeze_max_age = 200000000 # maximum XID age before forced vacuum
# (change requires restart)
#autovacuum_multixact_freeze_max_age = 400000000 # maximum Multixact age
# before forced vacuum
# (change requires restart)
#autovacuum_vacuum_cost_delay = 20ms # default vacuum cost delay for
# autovacuum, in milliseconds;
# -1 means use vacuum_cost_delay
#autovacuum_vacuum_cost_limit = -1 # default vacuum cost limit for
# autovacuum, -1 means use
# vacuum_cost_limit
#------------------------------------------------------------------------------
# CLIENT CONNECTION DEFAULTS
#------------------------------------------------------------------------------
# - Statement Behavior -
#search_path = '"$user",public' # schema names
#default_tablespace = '' # a tablespace name, '' uses the default
#temp_tablespaces = '' # a list of tablespace names, '' uses
# only default tablespace
#check_function_bodies = on
#default_transaction_isolation = 'read committed'
#default_transaction_read_only = off
#default_transaction_deferrable = off
#session_replication_role = 'origin'
#statement_timeout = 0 # in milliseconds, 0 is disabled
#lock_timeout = 0 # in milliseconds, 0 is disabled
#vacuum_freeze_min_age = 50000000
#vacuum_freeze_table_age = 150000000
#vacuum_multixact_freeze_min_age = 5000000
#vacuum_multixact_freeze_table_age = 150000000
#bytea_output = 'hex' # hex, escape
#xmlbinary = 'base64'
#xmloption = 'content'
# - Locale and Formatting -
datestyle = 'iso, mdy'
#intervalstyle = 'postgres'
timezone = 'UTC'
#timezone_abbreviations = 'Default' # Select the set of available time zone
# abbreviations. Currently, there are
# Default
# Australia
# India
# You can create your own file in
# share/timezonesets/.
#extra_float_digits = 0 # min -15, max 3
#client_encoding = sql_ascii # actually, defaults to database
# encoding
# These settings are initialized by initdb, but they can be changed.
lc_messages = 'C' # locale for system error message
# strings
lc_monetary = 'C' # locale for monetary formatting
lc_numeric = 'C' # locale for number formatting
lc_time = 'C' # locale for time formatting
# default configuration for text search
default_text_search_config = 'pg_catalog.english'
# - Other Defaults -
#dynamic_library_path = '$libdir'
#local_preload_libraries = ''
#------------------------------------------------------------------------------
# LOCK MANAGEMENT
#------------------------------------------------------------------------------
#deadlock_timeout = 1s
#max_locks_per_transaction = 64 # min 10
# (change requires restart)
# Note: Each lock table slot uses ~270 bytes of shared memory, and there are
# max_locks_per_transaction * (max_connections + max_prepared_transactions)
# lock table slots.
#max_pred_locks_per_transaction = 64 # min 10
# (change requires restart)
#------------------------------------------------------------------------------
# VERSION/PLATFORM COMPATIBILITY
#------------------------------------------------------------------------------
# - Previous PostgreSQL Versions -
#array_nulls = on
#backslash_quote = safe_encoding # on, off, or safe_encoding
#default_with_oids = off
#escape_string_warning = on
#lo_compat_privileges = off
#quote_all_identifiers = off
#sql_inheritance = on
#standard_conforming_strings = on
#synchronize_seqscans = on
# - Other Platforms and Clients -
#transform_null_equals = off
#------------------------------------------------------------------------------
# ERROR HANDLING
#------------------------------------------------------------------------------
#exit_on_error = off # terminate session on any error?
#restart_after_crash = on # reinitialize after backend crash?
#------------------------------------------------------------------------------
# CONFIG FILE INCLUDES
#------------------------------------------------------------------------------
# These options allow settings to be loaded from files other than the
# default postgresql.conf.
#include_dir = 'conf.d' # include files ending in '.conf' from
# directory 'conf.d'
#include_if_exists = 'exists.conf' # include file only if it exists
#include = 'special.conf' # include file
#------------------------------------------------------------------------------
# CUSTOMIZED OPTIONS
#------------------------------------------------------------------------------
# Add settings for extensions here
Thank you for any help that you can provide.
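A minimal sketch of one common client-side mitigation, offered as an aside and assuming a Python client with psycopg2 (the connection string and table name are placeholders): a named (server-side) cursor streams rows in batches, so the client never has to materialize all ~16 million rows at once.
import psycopg2

conn = psycopg2.connect("dbname=mydb user=me host=dbserver")  # placeholder DSN
cur = conn.cursor(name="big_select")  # named cursor => server-side cursor
cur.itersize = 10000                  # rows fetched per network round trip
cur.execute("SELECT * FROM table_name")
for row in cur:
    pass  # process each row; memory use stays bounded by the batch size
conn.close()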
How to create a network which allows for multiple publishers and multiple subscribers to those publishers?
Or is it absolutely necessary for a message broker to be used?
import time
import zmq
from multiprocessing import Process
def bind_pub(sleep_seconds, max_messages, pub_id):
    context = zmq.Context()
    socket = context.socket(zmq.PUB)
    socket.bind("tcp://*:5556")
    message = 0
    while True:
        socket.send_string("1 sending_func=bind_pub message_number=%s pub_id=%s" % (message, pub_id))
        message += 1
        if message >= max_messages:
            break
        time.sleep(sleep_seconds)

def bind_sub(sleep_seconds, max_messages, sub_id):
    context = zmq.Context()
    socket = context.socket(zmq.SUB)
    socket.bind("tcp://*:5556")
    socket.setsockopt_string(zmq.SUBSCRIBE, '1')
    message_n = 0
    while True:
        message = socket.recv_string()
        print(message + " receiving_func=bind_sub sub_id=%s" % sub_id)
        message_n += 1
        if message_n >= max_messages - 1:
            break
        time.sleep(sleep_seconds)

def conect_pub(sleep_seconds, max_messages, pub_id):
    context = zmq.Context()
    socket = context.socket(zmq.PUB)
    socket.connect("tcp://localhost:5556")
    message = 0
    while True:
        socket.send_string("1 sending_func=conect_pub message_number=%s pub_id=%s" % (message, pub_id))
        message += 1
        if message >= max_messages:
            break
        time.sleep(sleep_seconds)

def connect_sub(sleep_seconds, max_messages, sub_id):
    context = zmq.Context()
    socket = context.socket(zmq.SUB)
    socket.connect("tcp://localhost:5556")
    socket.setsockopt_string(zmq.SUBSCRIBE, '1')
    message_n = 0
    while True:
        message = socket.recv_string()
        print(message + " receiving_func=connect_sub sub_id=%s" % sub_id)
        message_n += 1
        if message_n >= max_messages - 1:
            break
        time.sleep(sleep_seconds)
When trying a bind_pub, connect_pub, connect_sub, connect_sub network architecture:
# bind_pub, connect_pub, connect_sub, connect_sub
n_messages = 4
p1 = Process(target=bind_pub, args=(1,n_messages,1))
p2 = Process(target=conect_pub, args=(1,n_messages,2))
p3 = Process(target=connect_sub, args=(0.1,n_messages,1))
p4 = Process(target=connect_sub, args=(0.1,n_messages,2))
p1.start()
p2.start()
p3.start()
p4.start()
p1.join()
p2.join()
p3.join()
p4.join()
Results in pub_id=2 messages going missing:
1 sending_func=bind_pub message_number=1 pub_id=1 receiving_func=connect_sub sub_id=2
1 sending_func=bind_pub message_number=1 pub_id=1 receiving_func=connect_sub sub_id=1
1 sending_func=bind_pub message_number=2 pub_id=1 receiving_func=connect_sub sub_id=2
1 sending_func=bind_pub message_number=2 pub_id=1 receiving_func=connect_sub sub_id=1
1 sending_func=bind_pub message_number=3 pub_id=1 receiving_func=connect_sub sub_id=1
1 sending_func=bind_pub message_number=3 pub_id=1 receiving_func=connect_sub sub_id=2
Similarly running a connect_pub, connect_pub, connect_sub, bind_sub architecture:
# connect_pub, connect_pub, connect_sub, bind_sub
n_messages = 4
p1 = Process(target=conect_pub, args=(1,n_messages,1))
p2 = Process(target=conect_pub, args=(1,n_messages,2))
p3 = Process(target=bind_sub, args=(0.1,n_messages,1))
p4 = Process(target=connect_sub, args=(0.1,n_messages,2))
p1.start()
p2.start()
p3.start()
p4.start()
p1.join()
p2.join()
p3.join()
p4.join()
Results in no messages being received by sub_id=2:
1 sending_func=conect_pub message_number=1 pub_id=1 receiving_func=bind_sub sub_id=1
1 sending_func=conect_pub message_number=1 pub_id=2 receiving_func=bind_sub sub_id=1
1 sending_func=conect_pub message_number=2 pub_id=1 receiving_func=bind_sub sub_id=1
Well, it is fair to mention that ZeroMQ is principally a broker-less framework.
This means the second question is answered a priori: not only is a broker not absolutely necessary, it is principally impossible (unless one implements a broker-(semi-)persistence layer as an extra add-on, built from standard, Zen-of-Zero ZeroMQ tools).
Next, ZeroMQ tools are by far not "sockets" as you know them.
This is an often re-articulated misconception, so let me repeat it in bold.
Beware:
a ZeroMQ Socket()-instance is not a tcp-socket-as-you-know-it. Best read about the main conceptual differences in "ZeroMQ hierarchy in less than a five seconds" or other posts and discussions here.
Yet, more importantly, there seems to be no expressed need that is not already covered. ZeroMQ can serve any of:
many-PUBs : many-SUBs, or
one-PUB : many-SUBs, or even
many-PUBs : one-SUB,
where all or some of those "many" can still get .connect()-ed to a single AccessPoint or to several, so the produced topologies can indeed go wild (for details, kindly check the above offered link to a "five seconds" read); one's own imagination seems to be the only ceiling in doing this.
For performance and latency envelopes, feel free to seek and read more in other posts.
It is certainly not necessary to use a broker in order to implement a many-to-many network, but a broker does simplify configuration since each node only needs to know the broker's address, not all of its peers.
Another possibility is a hybrid approach -- using a broker to exchange address information among peers so they can connect to each other directly. You can find an example here: https://github.com/nyfix/OZ/blob/master/doc/Naming-Service.md
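As a sketch of that broker-style wiring built from plain ZeroMQ parts (no external broker product; the port numbers are arbitrary): a single XSUB/XPUB proxy process can serve as the well-known rendezvous point, with every publisher using a conect_pub-style socket against port 5556 and every subscriber a connect_sub-style socket against port 5557.
import zmq

context = zmq.Context()

frontend = context.socket(zmq.XSUB)  # the many publishers connect here
frontend.bind("tcp://*:5556")

backend = context.socket(zmq.XPUB)   # the many subscribers connect here
backend.bind("tcp://*:5557")

zmq.proxy(frontend, backend)  # blocks forever, forwarding messages downstream
                              # and subscription requests upstream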
I have signed up for an account with Mblox. I would like to use Kannel as my SMPP application to send SMS messages to U.S. phone numbers.
I can bind in, but my submits fail (usually with an error code of 0x042A). I am using the following HTTP request (to my Kannel application) to send a test message to my Verizon phone (just using 14085551212 as an example phone number).
http://localhost:13013/cgi-bin/sendsms?username=tester&password=foobar&to=14085551212&priority=1&text=Test+message+to+VZW
Has anyone encountered this before and been able to solve it?
My current config file:
#---------------------------------------------
# CORE
#
group = core
admin-port = 13000
smsbox-port = 13001
wapbox-port = 13002
admin-password = bar
box-allow-ip = "127.0.0.1"
#---------------------------------------------
# SMSC CONNECTIONS
#
group = smsc
smsc = smpp
smsc-id = smsc1
connect-allow-ip = 127.0.0.1
host = "smpp.psms.us.mblox.com"
transceiver-mode = true
smsc-username = (my account name)
smsc-password = (my password)
port = 3204
enquire-link-interval = 30
system-type = "mbloxclient1"
service-type = -1
interface-version = 34
bind-addr-ton = 0x02
bind-addr-npi = 0x08
my-number = (my short code)
msg-id-type = 0x00
source-addr-ton = 0x03
source-addr-npi = 0x08
dest-addr-ton = 0x02
dest-addr-npi = 0x08
esm-class = 0
#---------------------------------------------
# SMSBOX SETUP
#
group = smsbox
bearerbox-host = localhost
sendsms-port = 13013
global-sender = (my short code)
log-level = 0
#---------------------------------------------
# WAPBOX SETUP
#
group = wapbox
bearerbox-host = 127.0.0.1
syslog-level = none
#---------------------------------------------
# SEND-SMS USERS
#
group = sendsms-user
username = tester
password = foobar
#user-deny-ip = ""
#user-allow-ip = ""
#---------------------------------------------
# SMS SERVICES
#
group = sms-service
keyword = default
text = "No service specified"
I see a few things that need to change. First, you need to include the operator, tariff, and service ID when sending to certain US carriers (such as Verizon and T-Mobile).
To send to Verizon, you'll need to first include a TLV section in your config file with these vendor-specific parameters.
#----------------------------------------
# TLV TAGS
group = smpp-tlv
name = SERVICE_ID
tag = 0x1407
type = octetstring
length = 5
group = smpp-tlv
name = OPERATOR_ID
tag = 0x1402
type = octetstring
length = 5
group = smpp-tlv
name = TARIFF
tag = 0x1403
type = octetstring
length = 5
Note that this will require installing Kannel version 1.4.4 or higher (within the 1.4.x branch - the 1.5.0 development version does not seem to support TLVs as of this posting).
Once this is set up, you can use the following format to send SMS messages through Mblox with the required TLVs:
http://localhost:13013/cgi-bin/sendsms?username=tester&password=foobar&to=14085551212&priority=1&meta-data=?smpp?SERVICE_ID=12345%26OPERATOR_ID=31003%26TARIFF=0&text=Test+message+to+VZW
(You'll have to change the phone number, service ID, and operator ID to the appropriate values.)
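Because the meta-data value itself contains "?smpp?" plus "&"-separated TLVs, it must be URL-encoded as a whole, which is why "%26" appears between the TLVs above. A small Python sketch of building such a request (all credentials and IDs are placeholders):
import urllib.parse
import urllib.request

meta = "?smpp?SERVICE_ID=12345&OPERATOR_ID=31003&TARIFF=0"
params = {
    "username": "tester",
    "password": "foobar",
    "to": "14085551212",
    "priority": "1",
    "meta-data": meta,  # urlencode() escapes the inner '?' and '&' characters
    "text": "Test message to VZW",
}
url = "http://localhost:13013/cgi-bin/sendsms?" + urllib.parse.urlencode(params)
print(url)
urllib.request.urlopen(url)  # submit the message to Kannel's smsbox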
For carriers other than Verizon and T-Mobile (e.g., AT&T, Sprint, Cricket, US Cellular), you should leave out the service ID parameter.
If you are using Sure Route, you will not need the operator ID or tariff parameter.
Good luck! Note that, even with these instructions, it will still likely take a bit of trial, error, and modification to get everything working correctly.
(Disclaimer: Question and answer both provided by an Mblox advocate.)