Fetching entire branch of MIB - snmp

I'm still quite new to SNMP and I was wondering how I would go about getting an entire branch of a MIB with as few queries as possible.
My approach:
Use GETBULK messages to fetch pow(2,tries) entries at a time, and stop as soon as I get an object that isn't a child of the object specified by my OID.
Why do I need it:
I'm trying to get a variable-sized branch of the MIB, the ipRouteTable part to be specific.

Did you know SNMP has a WALK operation that visits all objects in a subtree in turn? Net-SNMP ships such a utility:
http://net-snmp.sourceforge.net/docs/man/snmpwalk.html
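For instance, here is a hedged sketch of the same walk using pysnmp's GETBULK support; the agent address, community string, and maxRepetitions value are assumptions, and lexicographicMode=False implements exactly the "stop at the first object outside the subtree" check described in the question:

    # Hedged sketch: GETBULK-walk the ipRouteTable subtree with pysnmp.
    from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                              ContextData, ObjectType, ObjectIdentity, bulkCmd)

    IP_ROUTE_TABLE = '1.3.6.1.2.1.4.21'  # ipRouteTable

    for err, status, index, var_binds in bulkCmd(
            SnmpEngine(),
            CommunityData('public'),                 # assumed v2c community
            UdpTransportTarget(('192.0.2.1', 161)),  # assumed agent address
            ContextData(),
            0, 25,                     # nonRepeaters, maxRepetitions per request
            ObjectType(ObjectIdentity(IP_ROUTE_TABLE)),
            lexicographicMode=False):  # stop once replies leave the subtree
        if err or status:
            break
        for oid, value in var_binds:
            print(oid, '=', value)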

Related

Automatically detecting a PySMI parsed MIB from an existing OID

I have a situation where I'm trying to do some MIB-processing on a pre-existing, non-translated SNMP walk in the cloud. I have a set of translated PySMI MIB json files, but I'm unsure how to match the correct MIB with OIDs within the walk.
I saw in this post that PySNMP was unable to automatically detect a MIB but that it was being worked on. I tried to create a simple implementation myself using regex, but I cannot find the correlation between a MIB's module identity and the OIDs that I am retrieving from the SNMP walk.
I've seen the MIB index that can be generated from PySMI, which seemed promising, but I'm not sure how I can use that to find the human-readable version of an OID from a collection of MIB files.
What am I missing? Thanks!
One way to deal with this would be to build the OID->MIB index by running a PySMI-based script (or just the vanilla mibdump tool) over your entire MIB collection. Actually, such an index can be found here.
Once you have this OID->MIB mapping, you could take the OIDs your snmpwalk script receives, match them (or their prefixes) against the OID->MIB map, and load up the required MIBs.
Unfortunately, this relatively simple process has not been built into pysnmp yet, but it should not be hard to implement within your script.
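A minimal sketch of the matching step, assuming the index has already been built; the dictionary entries and format below are illustrative, not the real index layout:

    # Sketch: longest-prefix match of a walked OID against an assumed
    # OID -> MIB-module index.
    OID_INDEX = {
        '1.3.6.1.2.1.4.21': 'RFC1213-MIB',
        '1.3.6.1.4.1.9.9.166': 'CISCO-CLASS-BASED-QOS-MIB',
    }

    def find_mib(oid, index=OID_INDEX):
        """Return the MIB module owning the longest matching OID prefix."""
        parts = oid.split('.')
        for length in range(len(parts), 0, -1):
            prefix = '.'.join(parts[:length])
            if prefix in index:
                return index[prefix]
        return None  # no MIB in the collection covers this OID

    print(find_mib('1.3.6.1.4.1.9.9.166.1.5.1.1.2'))  # CISCO-CLASS-BASED-QOS-MIB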

what is the purpose of parsing MIBs?

Can anyone tell me why NMS implementations parse and save MIB items in a database?
I know one of the reasons: when they receive a trap and want to analyze it, they use the parsed MIB. What else do they do with the parsed MIB?
For example, when the NMS sends an SNMP GET request to an agent, doesn't the programmer have to specify which OIDs are being requested anyway?
Does the parsed MIB have another purpose, or do we parse MIBs only for analyzing SNMP traps?
You are on the right track - you parse the MIB in order to make the data human-readable. That goes for both traps (informs) and polled values. But if you parse it out to a text file, that's a huge amount of data to read/grep through to find the description, message, possible values, related OIDs, etc.
Added to this is the fact that there isn't just one MIB - there are dozens or hundreds that an NMS may be interested in. Since, on a host, you only add the MIBs that you want that host to respond to, the NMS has to have a copy of every MIB that every device it monitors may have on it, so that it can understand the responses those hosts return.
So you parse each MIB and store it in a DB to make it faster to search and to have everything in one place. That could be so that you can find the messages associated with varbinds, or what all the possible enumerations are, etc.
Just to be clear, parsing the MIB isn't the same as doing an SNMPWalk on a host. SNMPWalk just gives you the current response for each OID in sequence.
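As a toy illustration of the first point - the record layout below is a made-up assumption, not any particular NMS schema:

    # Made-up sketch: once the parsed MIB lives in a database, rendering a
    # trap or poll result is a single lookup instead of a grep through files.
    PARSED_MIB = {
        '1.3.6.1.2.1.1.5.0': {
            'name': 'sysName',
            'module': 'SNMPv2-MIB',
            'syntax': 'DisplayString',
            'description': 'An administratively-assigned name for this node.',
        },
    }

    def render_varbind(oid, raw_value):
        meta = PARSED_MIB.get(oid)
        if meta is None:
            return '%s = %r' % (oid, raw_value)  # unknown OID: stay numeric
        return '%s::%s = %r' % (meta['module'], meta['name'], raw_value)

    print(render_varbind('1.3.6.1.2.1.1.5.0', 'core-sw-01'))
    # -> SNMPv2-MIB::sysName = 'core-sw-01'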

how to index tons of data at once with Rails, (re)tire, json without eating (all) memory?

In a Rails 3.2.x app using (Re)tire to access an ES cluster, a rake task goes through approximately 1M rows to create a new index (Ruby 1.9.3).
The task uses .to_json with specific attributes and methods listed, to limit the resulting hash for each element.
Yet as the task runs, memory is eaten away, and the process usually ends up being killed by the system.
The task is already using find_in_batches. Smaller batch sizes (using find_each) don't help.
checking without index
Removing the index.import call does improve things (obviously): the task goes through the whole collection very fast without a problem. That points at either ES, Tire, or the JSON conversion (and the relations it might call upon).
reducing the scope of the task
Adding index.import back and passing a very limited hash (with string keys) for each item makes things slower, but not by much, and does not eat memory away. So JSON might not be the culprit here.
adding attributes and methods back
The culprit seems to be one of the methods used to grab one of the additional attributes. It's based on a relation between the model and another one ... ending up with a lot of models being involved and sifted through.
As pointed out in Index the results of a method in ElasticSearch (Tire + ActiveRecord), adding includes does help a bit, but the task still ends up heavy.
working around it
I also tried to work around part of the problem by replacing the calls to Tire with the ES bulk API directly.
Generating JSON files and sending them with a Ruby HTTP lib can work. Yet the same memory problem arises, since the same requests are made to the DB.
What's left?
What I don't get is why, even with find_in_batches, Ruby keeps eating away memory. I would expect that after each batch of data, the memory related to that batch would be freed.
Next to try: GC.start calls, and deactivating ActiveRecord caching around the task.
Yet, unless a solution limits memory use drastically (300 or 500 MB instead of 800+), the underlying issue remains: indexing a lot of instances of a model, including data related to some other models.
Am I missing something about the import and includes that would solve the issue?
Would splitting the task into smaller background jobs (Resque, Sidekiq) help? I would suppose so, as each batch would be isolated from the others and, once processed, really free up its memory (?) (orchestrating those jobs would be another problem).
Are there good practices for indexing big quantities of data into ES?
I've been using Rails + Elasticsearch for a while and have done this kind of dance a few times.
A few things come to mind, in no particular order.
Did you try the recent elasticsearch gem (instead of Tire)? I've updated my apps to use it and like having more control over what is done.
I would also try to force a GC sweep after each ActiveRecord loop. You could also be extra careful with memory allocation by explicitly resetting all local variables each time.
You could use the fork & exec trick to fork a brand-new process for each loop; it is the most effective GC you can get (see the sketch below). There's a little overhead the first time you write it, but the pay-off is great. Take good care to limit the amount of memory used in the outer part of the task. Using a process-based background task would partly achieve the same goal, but you might still get memory bloat.
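A minimal illustration of that fork-per-batch pattern, sketched in Python since the trick itself is language-agnostic (the original context is Ruby/Rails; index_batch and the batch contents are placeholders):

    # Each batch is indexed in a child process, so everything the batch
    # allocates is returned to the OS when the child exits.
    import multiprocessing

    def index_batch(batch):
        # Placeholder: serialize the batch and POST it to the ES bulk API.
        print('indexed %d records' % len(batch))

    def index_all(batches):
        for batch in batches:
            worker = multiprocessing.Process(target=index_batch, args=(batch,))
            worker.start()
            worker.join()  # parent stays small; child memory dies with it

    if __name__ == '__main__':
        index_all([list(range(0, 1000)), list(range(1000, 2000))])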
Can you limit the use of ActiveRecord? If you only need some basic associations, you could use a lower-level/simpler tool like Sequel (or similar) and work with Ruby hashes/arrays instead of full-fledged AR models.

QoS bandwidth via SNMP

I currently have a script to glean QoS data from various Cisco routers, and it is working well but missing the bandwidth data for each class.
I can see that the data is available, in that querying:
enterprises.9.9.166.1.9.1.1.1.1608 = INTEGER: 425
returns the correct bandwidth for this particular class [425 kb]. I have seen this index elsewhere:
enterprises.9.9.166.1.5.1.1.2.6933270.5456067 = Gauge32: 1608
with '6933270' being one of the indexes associated with the interface I am interested in.
How, though, do I 'learn' the second index, '5456067'? Or is there another way to derive the class bandwidth?
I have scoured Google, which got me to this point, but I am unable to get any closer to the second index. Multiple snmpwalks grepped for the second index shed no light either, in that I can find no way to relate it to existing known data.
Thanks
I think you have the wrong OID entry. enterprises.9.9.166.1.5.1.1.2 is cbQosConfigIndex from the CISCO-CLASS-BASED-QOS-MIB; if you want the bandwidth, you should use 1.3.6.1.4.1.9.9.166.1.9.1.1.2, which is cbQosQueueingCfgBandwidthUnits, instead.
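As for learning the second index: if I read CISCO-CLASS-BASED-QOS-MIB correctly, cbQosObjectsTable rows are indexed by cbQosPolicyIndex.cbQosObjectsIndex, and the cbQosConfigIndex value stored in each row is the key into the *Cfg tables such as cbQosQueueingCfgTable. So walking cbQosConfigIndex enumerates the pairs. A hedged pysnmp sketch (agent address and community are assumptions):

    # Hedged sketch: learn the <policyIndex>.<objectsIndex> pairs by walking
    # cbQosConfigIndex, then use each returned value as the index into
    # cbQosQueueingCfgBandwidth.
    from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                              ContextData, ObjectType, ObjectIdentity, nextCmd)

    CONFIG_INDEX = '1.3.6.1.4.1.9.9.166.1.5.1.1.2'  # cbQosConfigIndex
    BANDWIDTH = '1.3.6.1.4.1.9.9.166.1.9.1.1.1'     # cbQosQueueingCfgBandwidth

    def walk(base_oid):
        for err, status, _, var_binds in nextCmd(
                SnmpEngine(), CommunityData('public'),
                UdpTransportTarget(('192.0.2.1', 161)), ContextData(),
                ObjectType(ObjectIdentity(base_oid)),
                lexicographicMode=False):
            if err or status:
                break
            for oid, value in var_binds:
                yield str(oid), int(value)

    for oid, config_index in walk(CONFIG_INDEX):
        policy_index, objects_index = oid[len(CONFIG_INDEX) + 1:].split('.')
        print('policy %s / object %s -> bandwidth at %s.%d'
              % (policy_index, objects_index, BANDWIDTH, config_index))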

How stable are Cisco IOS OIDs for querying data with SNMP across different model devices?

I'm querying a bunch of information from Cisco switches using SNMP. For instance, I'm pulling information on neighbors detected using CDP by doing an snmpwalk on .1.3.6.1.4.1.9.9.23
Can I use this OID across different Cisco models? What pitfalls should I be aware of? I'm a little uneasy about using numeric OIDs - it seems like I should be using a MIB database or something, and the named OIDs, in order to gain cross-device compatibility, but perhaps I'm just imagining the need for that.
Once a MIB has been published it won't move to a new OID. Doing so would break network management tools and cause support calls, which nobody wants. To continue your example, the CDP MIB has been published at Cisco's SNMP Object Navigator.
For general code cleanliness it would be good to define the OIDs in a central place, especially since you don't want to duplicate the full OID for every single table you need to access.
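For instance, a minimal sketch of such a central place; the module layout and constant names are just examples built from OIDs mentioned on this page:

    # Sketch of a central OID module; everything else in the codebase
    # imports from here instead of repeating numeric strings.
    CISCO_MGMT = '1.3.6.1.4.1.9.9'    # ciscoMgmt
    CDP_MIB = CISCO_MGMT + '.23'      # ciscoCdpMIB, walked in the question
    CBQOS_MIB = CISCO_MGMT + '.166'   # CISCO-CLASS-BASED-QOS-MIB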
The place you need to be most careful is a MIB unique to a product that Cisco recently acquired. The OID will change, if nothing else to move it into Cisco's own enterprise OID space, but the MIB may also change to conform to Cisco's SNMP practices.
It is very consistent.
Monitoring tools depend on that consistency, and the MIBs produced by Cisco rarely change old values - they usually only add new ones.
Check out the Cisco OID lookup tool.
Notice how it doesn't ask you which product the lookup is for.
-mw
The OIDs can vary not only with hardware but also with firmware version for the same hardware, since over time the architecture of the management functions can change and require new MIBs. It is worth checking whether any of the OIDs you intend to use are in deprecated MIBs, or become so during the life of the application, as this indicates not only that the MIB could one day be unsupported but also that there is likely to be improved, richer data, or access to data. It is also good practice to test management apps against a sample upgraded device as part of the routine testing of firmware updates, before widespread deployment.
An example of a change of OID due to a MIB being deprecated is at
http://www.cisco.com/en/US/tech/tk648/tk362/technologies_configuration_example09186a0080094aa6.shtml
"This document shows how to copy a
configuration file to and from a Cisco
device with the CISCO-CONFIG-COPY-MIB.
If you start from Cisco IOSĀ® software
release 12.0, or on some devices as
early as release 11.2P, Cisco has
implemented a new means of Simple
Network Management Protocol (SNMP)
configuration management with the new
CISCO-CONFIG-COPY-MIB. This MIB
replaces the deprecated configuration
section of the OLD-CISCO-SYSTEM-MIB. "
I would avoid putting in numeric OIDs and instead use 'OID names', leaving the hard work of translating to whatever SNMP API you are using.
If that is not possible, then it is okay to use numeric OIDs, as they should not change per the SNMP MIB guidelines - unless the device itself changes, but that requires a new MIB anyway, which can't reuse old OIDs.
This is obvious, but be sure to look at the attributes of the SNMP MIB variable. Be sure not to query variables that have a status of 'obsolete'.
Jay..
In some cases, using the names instead of the numeric representations can be a serious performance hit, due to the need to read and parse the MIB files to get the numeric OIDs that the lower-level libraries need.
For instance, if your program collects something every minute, loading the MIBs over and over is very inefficient.
As stated by others, once published, the name-to-number mapping will never change, so the fact that you're hard-coding stuff into your programs is not really a problem.
If you have access to command-line SNMP tools, check out 'snmptranslate', a nice tool for getting back and forth between text and numeric OIDs.
I think that is a common misconception (that the MIB is reloaded each time you resolve a name).
Most SNMP APIs (such as AdventNet and CMU) load the MIBs at startup, and after that there is no 'overhead' of loading MIBs every time you ask for a 'translation' from name to OID and vice versa. What's more, some of them cache the results, and at that point there is no difference between name lookups and directly coding the OID.
This is a bit similar to specifying an "IP Address" versus a 'hostname'.
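A small pysnmp illustration of that load-once-then-reuse approach (SNMPv2-MIB ships precompiled with pysnmp, so nothing is parsed per query; the names used here are standard):

    # Hedged sketch: resolve names to numeric OIDs once, at startup, then
    # reuse the resulting OID for every subsequent poll.
    from pysnmp.smi import builder, view
    from pysnmp.smi.rfc1902 import ObjectIdentity

    mib_builder = builder.MibBuilder()             # load MIBs once, at startup
    mib_view = view.MibViewController(mib_builder)

    SYS_DESCR = ObjectIdentity('SNMPv2-MIB', 'sysDescr', 0).resolveWithMib(mib_view)
    print(SYS_DESCR.getOid())                      # -> 1.3.6.1.2.1.1.1.0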
