Cannot see all elements within a MIB using snmpget/snmpwalk

I am using Net-SNMP (v5.6.1.1) on Windows to read my MIB with snmpget and snmpwalk. When I try accessing the MIB I can only see some of the elements. I know the MIB is good, since my colleague can extract the same revision of the MIB from the repository and can see all elements within it. We are using the same SNMP command syntax to query the data. I have compared the MIB and snmp.conf files between his machine and mine and they are identical, so I can only assume the difference is due to the configuration of our respective PCs. I've also checked for differences in the environment variables between our machines, but can see nothing obvious. Is there anything in the machine configuration that might explain why I can only see part of the MIB?
Edit: The MIB is implemented by a single bespoke agent executable, with the data held in a number of tables, for example:
mibTableA.parameter1
mibTableA.parameter2
mibTableA.parameter3
mibTableB.parameter4
mibTableB.parameter5
mibTableC.parameter6
mibTableC.parameter7
mibTableC.parameter8
None of these tables are dependent upon the availability of system hardware, etc. The tables can also be accessed via an RTA interface using PSQL queries, and using the RTA interface on both my machine and my colleague's machine, I can see all the tables/parameters. Yet when accessing the MIB via SNMP, I can only see mibTableA on my machine.

First, you need to identify which objects are missing on your box. Show some examples in your question so that others can guess at the cause.
Second, SNMP query results are indeed machine-dependent. For example, if your machine has fewer network adapters than your friend's, then it is reasonable for some objects to be missing.
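For example, assuming an SNMPv2c agent with community string public, and with MY-MIB standing in for your actual MIB module name (all three are placeholders; adjust to your setup), you could walk each table's root on both machines and compare the output to see exactly which subtrees differ:
snmpwalk -v2c -c public localhost MY-MIB::mibTableA > tableA-mine.txt
Repeating this for each table on both machines and diffing the files narrows the problem down quickly.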

I found the problem. There are some scalar fields in the MIB which define the table sizes, and these had not been initialised correctly; instead they were picking up old values stored in tables in a C:\Documents and Settings\user\Application Data folder. Hence the difference in behaviour between my machine and my colleague's.

Related

Automatically detecting a PySMI parsed MIB from an existing OID

I have a situation where I'm trying to do some MIB processing on a pre-existing, non-translated SNMP walk in the cloud. I have a set of translated PySMI MIB JSON files, but I'm unsure how to match the correct MIB with the OIDs within the walk.
I saw in this post that PySNMP was unable to automatically detect a MIB but that it was being worked on. I tried to create a simple implementation myself using regex, but I cannot find the correlation between a MIB's module identity and the OIDs that I am retrieving from the SNMP walk.
I've seen the MIB index that can be generated from PySMI, which seemed promising, but I'm not sure how I can use that to find the human-readable version of an OID from a collection of MIB files.
What am I missing? Thanks!
A way to deal with this would be to build the OID->MIB index by running a PySMI-based script (or just the vanilla mibdump tool) over your entire MIB collection. Actually, such an index can be found here.
Once you have this OID->MIB mapping, you could take the OIDs your snmpwalk script receives, match them (or their prefixes) against the map, and load up the required MIBs.
Unfortunately, this relatively simple process has not been built into pysnmp yet, but it should not be hard to implement within your script.
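As an illustration of the prefix-matching step, here is a minimal sketch in Python; the index contents below are hypothetical stand-ins for what a mibdump-generated index would actually contain:
# Toy OID->MIB index; a real one would be generated from your MIB collection.
OID_INDEX = {
    "1.3.6.1.2.1.1": "SNMPv2-MIB",
    "1.3.6.1.2.1.2": "IF-MIB",
    "1.3.6.1.4.1.9.9.23": "CISCO-CDP-MIB",
}

def find_mib(oid, index=OID_INDEX):
    """Return the MIB whose registered prefix is the longest match for oid."""
    parts = oid.strip(".").split(".")
    # Try progressively shorter prefixes until one is found in the index.
    for length in range(len(parts), 0, -1):
        prefix = ".".join(parts[:length])
        if prefix in index:
            return index[prefix]
    return None

print(find_mib(".1.3.6.1.4.1.9.9.23.1.2.1.1.4"))  # -> CISCO-CDP-MIB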

Sharing data between MIB tables

We are implementing support for the Entity MIB module (RFC 6933) and related MIB modules as part of an SNMP agent (snmpd) using Net-SNMP.
Some of the data is shared between MIB tables and MIB modules, for example table indices and "contained in" objects between entPhysicalTable and entPhysicalContainsTable, and indices between entPhysicalTable and entPhySensorTable.
Note that MIB modules related to the Entity MIB module include the Entity Sensor and Entity Battery MIB modules (RFCs 3433 and 7577 respectively).
Are there any pointers or best practices on how to enable such sharing of data between tables using Net-SNMP?
Is there any built-in support provided by Net-SNMP to achieve this, e.g. any particular mib2c options to construct the relevant template source files for these MIB tables?
In particular, data such as indices needs to be dynamic, as entities such as Field-Replaceable Units (FRUs) may be added or removed whilst an SNMP agent is running.
I note that data (indices) is shared between tables such as ifTable and ifXTable, provided as part of the standard Net-SNMP implementation.
Thanks in advance for any help.
When you run mib2c and specify a table without specifying a configuration file, it will ask you questions about the style of code you want to generate. Generally speaking, the choices boil down to whether you want Net-SNMP to "own" the underlying data store (which you update as values change), or whether you will use your own data structures as the underlying data store (in which case you implement hooks that Net-SNMP calls to interact with your data). In the former case, Net-SNMP would handle this "shared data" itself, because it owns it. In the latter case, how you handle it depends on how you organize your data structures.
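As a concrete illustration, the configuration files shipped with stock Net-SNMP map roughly onto those two styles (a sketch only; the set of available .conf files depends on your Net-SNMP version):
mib2c -c mib2c.create-dataset.conf entPhysicalTable   # Net-SNMP owns the data store
mib2c -c mib2c.iterate.conf entPhysicalTable          # your own data structures, exposed via iterator hooks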

FreeBSD shpgperproc: what is it responsible for?

I've googled a lot about what "page share factor per proc" is responsible for and found nothing. It's just interesting to me; I have no current problem with it, just curious (want to know more). In sysctl it is:
vm.pmap.shpgperproc
Thanks in advance
The first thing to note is that shpgperproc is a loader tunable, so it can only be set at boot time with an appropriate directive in loader.conf, and it's read-only after that.
The second thing to note is that it's defined in <arch>/<arch>/pmap.c, which handles the architecture-dependent portions of the vm subsystem. In particular, it's actually not present in the amd64 pmap.c - it was removed fairly recently, and I'll discuss this a bit below. However, it's present for the other architectures (i386, arm, ...), and it's used identically on each architecture; namely, it appears as follows:
void
pmap_init(void)
{
...
TUNABLE_INT_FETCH("vm.pmap.shpgperproc", &shpgperproc);
pv_entry_max = shpgperproc * maxproc + cnt.v_page_count;
and it's not used anywhere else. pmap_init() is called only once, at boot time, as part of the VM subsystem initialization. maxproc is just the maximum number of processes that can exist (i.e. kern.maxproc), and cnt.v_page_count is just the number of physical pages of memory available (i.e. vm.stats.vm.v_page_count).
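As a worked example of that formula (figures purely illustrative): on a machine with kern.maxproc at 6000, 1 GB of RAM (262144 pages of 4 KB), and shpgperproc at its default of 200, this gives pv_entry_max = 200 * 6000 + 262144 = 1462144 pv_entry slots.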
A pv_entry is basically just a virtual mapping of a physical page (or more precisely, of a struct vm_page), so if two processes share a page and both have it mapped, there will be a separate pv_entry structure for each mapping. Thus, given a page (struct vm_page) that needs to be dirtied or paged out or something else requiring a hardware page table update, the list of corresponding mapped virtual pages can easily be found by looking at the corresponding list of pv_entrys (as an example, take a look at i386/i386/pmap.c:pmap_remove_all()).
The use of pv_entrys makes certain VM operations more efficient, but the current implementation (for i386 at least) seems to allocate a static amount of space (see pv_maxchunks, which is set based on pv_entry_max) for pv_chunks, which are used to manage pv_entrys. If the kernel can't allocate a pv_entry after deallocating inactive ones, it panics.
Thus we want to set pv_entry_max based on how many pv_entrys we want space for; clearly we'll want at least as many as there are pages of RAM (which is where cnt.v_page_count comes from). Then we'll want to allow for the fact that many pages will be multiply-virtually-mapped by different processes, since a pv_entry will need to be allocated for each such mapping. Thus shpgperproc - which has a default value of 200 on all arches - is just a way to scale this. On a system where many pages will be shared among processes (say on a heavily-loaded web server running apache), it's apparently possible to run out of pv_entrys, so one will want to bump it up.
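For instance, bumping it up would mean adding a line like the following to /boot/loader.conf and rebooting (the value 400 is purely illustrative):
vm.pmap.shpgperproc="400"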
I don't have a FreeBSD machine close to me right now, but it seems this parameter is defined and used in pmap.c: http://fxr.watson.org/fxr/ident?v=FREEBSD7&im=bigexcerpts&i=shpgperproc

How to interpret Windows Task Manager?

I run Windows 7 RC1, which uses the same Windows Task Manager (WTM) as Vista. When I look at the processes, there are some columns whose differences I'm not sure about:
Memory - working set
Memory - private working set
Memory - commit size
Can anyone tell me what they are?
From the following article, under the section Types of Memory Usage:
There are two main types of memory usage: working set and private working set. The private working set is the amount of memory used by a process that cannot be shared among other processes, while working set includes the memory shared by other processes.
That may sound confusing, so let's try to simplify it a bit. Let's pretend that there are two kids who are coloring, and both of the kids have 5 of their own crayons. They decide to share some of their crayons so that they have more colors to choose from. When each child is asked how many crayons they used, both of them say they used 7 crayons, because each used their own 5 plus 2 shared by the other.
The point of that metaphor is that one might assume that there were a total of 14 crayons if they didn’t know that the two kids were sharing, but in reality there were only 10 crayons available. Here is the rundown:
Working Set: This includes all of the shared crayons, so the total would be 14.
Private Working Set: This includes only the crayons that each child owns, and doesn’t reflect how many were actually used in each picture. The total is therefore 10.
This is a really good comparison to how memory is measured. Many applications reuse code that you already have on your system, because in the end it helps reduce overall memory consumption. If you are viewing the working set memory usage you might get confused, because all of your running processes might actually add up to more than the amount of RAM you have installed, which is the same problem we had with the crayon metaphor above. Naturally, the working set will always be at least as large as the private working set.
Working set:
The working set is the subset of a process's virtual pages that are resident in physical memory; this will be only a portion of that process's pages.
Private working set:
The private working set is the amount of memory used by a process that cannot be shared among other processes
Commit size:
Amount of virtual memory that is reserved for use by a process.
And at microsoft.com you can find more details about other memory types.
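If you want to read these counters programmatically rather than in Task Manager, the third-party psutil package exposes them. A minimal sketch, assuming psutil's documented Windows field names (wset and pagefile correspond to the working set and commit size columns):
import psutil  # third-party: pip install psutil

proc = psutil.Process()   # defaults to the current process
mem = proc.memory_info()  # an extended named tuple on Windows
print(mem.wset)           # working set, in bytes
print(mem.pagefile)       # commit size (pagefile-backed commit), in bytes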
'Working Set' is the amount of memory that the process currently has in physical RAM. In other words, accessing any pages in the 'Working Set' will not cause a page fault since the page is in RAM.
As for the other two, I'm not 100% sure, probably 'Working Set' contains sharable memory, such as memory mapped files, and 'Private Working Set' contains only pages that the process can use and are not shareable.
Have a look at this site and search for the speaker 'Dave Solomon'. There is an excellent webcast that he gave which explains Windows memory, and he mentions working set, commit sizes, and other memory terms.
EDIT:
Those site links are indeed dead :(
Instead, you can search Google for
vimeo david solomon windows
Those same videos look to be available on Vimeo now, which is cool.
If you open the Resource Monitor from the WTM, mousing over the various column headings for the process of interest displays a pretty informative tooltip.
e.g.
Commit(KB): Amount of virtual memory reserved by the operating system for the process in KB.
etc.
This article at Microsoft seems to be the most detailed.
Edit Oct 2018: new link

How stable are Cisco IOS OIDs for querying data with SNMP across different model devices?

I'm querying a bunch of information from Cisco switches using SNMP. For instance, I'm pulling information on neighbors detected via CDP by doing an snmpwalk on .1.3.6.1.4.1.9.9.23
Can I use this OID across different Cisco models? What pitfalls should I be aware of? I'm a little uneasy about using numeric OIDs; it seems like I should be using a MIB database and the named OIDs in order to gain cross-device compatibility, but perhaps I'm just imagining the need for that.
Once a MIB has been published it won't move to a new OID. Doing so would break network management tools and cause support calls, which nobody wants. To continue your example, the CDP MIB has been published at Cisco's SNMP Object Navigator.
For general code cleanliness it would be good to define the OIDs in a central place, especially since you don't want to duplicate the full OID for every single table you need to access.
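A minimal sketch of that central-place idea in Python; the root OID comes from the question, while the cdpCacheTable value is an assumption to be verified against CISCO-CDP-MIB:
# Central registry of the numeric OIDs the tool depends on, defined once.
CISCO_OIDS = {
    "ciscoCdpMIB":   "1.3.6.1.4.1.9.9.23",        # CDP MIB root (from the question)
    "cdpCacheTable": "1.3.6.1.4.1.9.9.23.1.2.1",  # assumed value; verify against the MIB
}

def oid_for(name):
    """Look up a numeric OID by its symbolic name."""
    return CISCO_OIDS[name]

print(oid_for("cdpCacheTable"))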
The place you need to be most careful is with a MIB unique to a product that Cisco has recently acquired. The OID will change, if nothing else to move it into Cisco's own enterprise OID space, but the MIB may also change to conform to Cisco's SNMP practices.
It is very consistent.
Monitoring tools depend on that consistency, and the MIBs produced by Cisco rarely change old values, usually only adding new ones.
Check out the Cisco OID look up tool.
Notice how it doesn't ask you what product the look up is for.
-mw
The OIDs can vary not only with hardware but also with firmware version for the same hardware, as over time the architecture of the management functions can change and require new MIBs. It is worth checking whether any of the OIDs you intend to use are in deprecated MIBs, or become so during the life of the application, as this indicates not only that the MIB could one day be unsupported, but also that there is likely to be improved, richer data, or access to data, elsewhere. It is also good practice to test management apps against a sample upgraded device as part of the routine testing of firmware updates before widespread deployment.
An example of a change of OID due to a MIB being deprecated is at
http://www.cisco.com/en/US/tech/tk648/tk362/technologies_configuration_example09186a0080094aa6.shtml
"This document shows how to copy a
configuration file to and from a Cisco
device with the CISCO-CONFIG-COPY-MIB.
If you start from Cisco IOS® software
release 12.0, or on some devices as
early as release 11.2P, Cisco has
implemented a new means of Simple
Network Management Protocol (SNMP)
configuration management with the new
CISCO-CONFIG-COPY-MIB. This MIB
replaces the deprecated configuration
section of the OLD-CISCO-SYSTEM-MIB. "
I would avoid putting in numeric OIDs and instead use OID names, leaving the hard work of translating to whatever SNMP API you are using.
If that is not possible, then it is okay to use numeric OIDs, as they should not change, per the SNMP MIB guidelines - unless the device itself changes, but that requires a new MIB anyway, which can't reuse old OIDs.
This is obvious, but be sure to look at the attributes of the SNMP MIB variable. Be sure not to query variables that have a status of 'obsolete'.
Jay..
In some cases, using the names instead of the numerical representations can be a serious performance hit, due to the need to read and parse the MIB files to get the numerical representations of the OIDs that the lower-level libraries need.
For instance, say you're using a program to collect something every minute; then loading the MIBs over and over is very inefficient.
As stated by others, once published, the name to numerical mapping will never change, so the fact that you're hard-coding stuff into your programs is not really a problem.
If you have access to command line SNMP tools, check out 'snmptranslate' for a nice tool to get back and forth from text to numerical OIDs.
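For example (assuming the relevant Cisco MIB files are installed on your MIB search path):
snmptranslate -On CISCO-CDP-MIB::cdpCacheTable   # name -> numeric OID
snmptranslate -m ALL .1.3.6.1.4.1.9.9.23         # numeric OID -> name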
I think that is a common misconception (that the MIB is reloaded each time you resolve a name).
Most SNMP APIs (such as AdventNet and CMU) load the MIBs at startup, and after that there is no 'overhead' of loading MIBs every time you ask for a 'translation' from name to OID and vice versa. What's more, some of them cache the results, and at that point there is no difference between name lookups and directly coding the OID.
This is a bit similar to specifying an "IP Address" versus a 'hostname'.
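The caching trick is also easy to replicate yourself when your API does not provide it; a minimal Python sketch, where expensive_mib_lookup is a hypothetical stand-in for whatever slow MIB-based resolver you have:
from functools import lru_cache

def expensive_mib_lookup(name):
    # Stand-in for a real MIB-parsing translation; toy data for the sketch.
    table = {"sysUpTime": "1.3.6.1.2.1.1.3"}
    return table[name]

@lru_cache(maxsize=None)
def name_to_oid(name):
    # The slow translation runs once per name; repeat calls hit the cache.
    return expensive_mib_lookup(name)

print(name_to_oid("sysUpTime"))  # first call performs the lookup
print(name_to_oid("sysUpTime"))  # second call is served from the cache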
