How to identify unknown objects in MIB? (SNMP) - snmp

Over the past week or so I've spent time getting to know SNMP. I have quickly learnt that the bane of working with SNMP devices to create simple monitoring tools is the MIBs.
In my particular situation, Xerox aren't helpful with giving out MIBs so I'm left with thousands of unidentified objects when I perform a walk on a printer.
Many of these undescribed OIDs have values but of course I have no idea what they represent.
What procedures are typically the most successful for resolving these unknown OIDs? I have the time and willingness to dig deeper, but I'm just not sure where to start.
NB: I've already tried generic MIBs and potential Xerox MIBs, but all the descriptions seem very vague and don't explicitly indicate their purposes. This guy managed to identify a few in relation to the previously linked MIB, but I have no idea how he worked it out, because the descriptions for those objects are ridiculously vague.
This is for a Python 2.7 script.

I second the suggestion to check the sysORTable contents.
If that does not help, you could try downloading as many MIBs as you can find and loading them all into snmpwalk (via the -m ALL option), or do that for subsets of MIBs to conserve memory. Then walk your printer and see which MIBs are reported by snmpwalk.
If you cannot load that many MIBs into memory, I can propose a rather unusual approach.
You can take the available MIB names from here, take the latest development pysnmp/pysnmp-apps packages, then list all the OIDs defined in each MIB:
$ snmptranslate.py -To XEROX-GENERAL-MIB::
.1.3.6.1.4.1.253.8.51
.1.3.6.1.4.1.253.8.51.1
.1.3.6.1.4.1.253.8.51.1.2
...
Once you know which OIDs are in which MIB, you can match the OIDs you fetch from the printer against the OIDs found in the MIBs. That way you can figure out which MIBs are implemented by your printer.
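As a plain-Python sketch of that matching step (the MIB roots and walked OIDs below are illustrative placeholders, not a complete list):

```python
# Match OIDs fetched from the printer against per-MIB subtree roots
# (as printed by snmptranslate.py -To) to guess which MIBs the agent
# implements. The roots and walked OIDs here are illustrative.
MIB_ROOTS = {
    "XEROX-GENERAL-MIB": "1.3.6.1.4.1.253.8.51",
    "Printer-MIB": "1.3.6.1.2.1.43",
}

def mibs_for(walked_oids, mib_roots):
    """Return, per MIB name, the walked OIDs that fall under its root."""
    hits = {}
    for oid in walked_oids:
        for name, root in mib_roots.items():
            # Prefix-match on dotted components, not raw strings, so
            # "1.3.6.1.2.1.4" does not accidentally match "1.3.6.1.2.1.43".
            if (oid + ".").startswith(root + "."):
                hits.setdefault(name, []).append(oid)
    return hits

walked = ["1.3.6.1.4.1.253.8.51.1.2.0", "1.3.6.1.2.1.43.5.1.1.1.1"]
print(mibs_for(walked, MIB_ROOTS))
```

Any MIB whose root collects a large share of the walked OIDs is a likely candidate for what the printer implements.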

Related

Will an OID change among the same kind of devices?

I'm using SNMP to query the status of a certain make of printer.
For example, the status of a make1 printer has an OID '1'.
I want to know whether this OID will remain the same in, for example, a make2 printer.
This question is probably better suited to Super User than Stack Overflow. However, the answer is "it depends". There are many IETF standard MIBs, which vendors usually endeavor to implement, provided they existed and the vendor was aware of them at the time the device's agent was implemented. In such cases, the OIDs will be the same from one make to the next if both vendors implement those standard MIBs. Vendors may also have their own enterprise-specific MIBs that will not be implemented in other vendors' devices.
There is an IETF standard Printer MIB that you will likely find implemented in just about any SNMP-capable printer on the market, but I assume printers were just being used as an example for the sake of your question.
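Whether an OID is likely to carry over between makes can often be guessed from its prefix: IETF standard MIBs live under mib-2 (1.3.6.1.2.1), while vendor MIBs live under the enterprises arc (1.3.6.1.4.1.x, where x is the vendor's IANA-assigned private enterprise number). A minimal sketch (the sample OIDs are illustrative):

```python
STANDARD_PREFIX = "1.3.6.1.2.1."    # mib-2: IETF standard MIBs
ENTERPRISE_PREFIX = "1.3.6.1.4.1."  # vendor-specific subtrees

def classify(oid):
    """Classify an OID as standard, enterprise-specific, or other.

    For enterprise OIDs, also return the private enterprise number,
    which identifies the vendor (IANA maintains the registry)."""
    if oid.startswith(STANDARD_PREFIX):
        return ("standard", None)
    if oid.startswith(ENTERPRISE_PREFIX):
        pen = int(oid[len(ENTERPRISE_PREFIX):].split(".")[0])
        return ("enterprise", pen)
    return ("other", None)

print(classify("1.3.6.1.2.1.43.5.1.1.1.1"))    # Printer-MIB: portable across vendors
print(classify("1.3.6.1.4.1.253.8.51.1.2.0"))  # enterprise 253 (Xerox): vendor-specific
```

A "standard" result means a conforming make2 printer should expose the same OID; an "enterprise" result means all bets are off on another vendor's device.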

MIBs and OEM products

Are there any recommendations and/or best practices for when a product has a private (enterprise-specific) MIB and the intention is to re-badge the product as if it came from another manufacturer? That is, a commercial OEM deal occurs. I guess a similar situation arises when a company with a MIB private enterprise number is taken over by another company. Can you replace the stem .1.3.6.1.4.1.x of the OIDs, where x is the private enterprise number, with another company's number? Do you continue with the MIB module unchanged? Do you simply change the contact information contained within the MIB module file?
Thanks in advance for any pointers.
For OEMs, I don't know if there is any best practice. You could do what Lex Li outlines in his comment, modifying both the software and the MIB, as long as you have the kind of OEM deal where you can modify the software. If you don't have that, you probably have no choice but to leave the original MIB untouched, and live with your customers finding out that the product is OEM when they read the MIB. I know my employer sometimes does the latter.
If you are the original manufacturer, you have a choice of what to offer your OEM vendor(s), and this will be mostly up to you.
For the other case, where one company buys out another and takes over their product portfolio, there are (at least) two ways to do it. I would say the first one is more advisable, from an engineering perspective at least. A marketing department might say otherwise.
1. Leave everything as it is. For example, HP bought Compaq 12 years ago, but if you buy an HP server today, it still implements the old Compaq MIBs under .1.3.6.1.4.1.232 (for example CPQRACK-MIB). It was probably cheaper to maintain and expand Compaq's extensive MIB tree than to migrate all their products to the HP enterprise subtree. There are numerous other examples.
2. Migrate everything to your own enterprise tree. You might skip MIBs that are no longer in use (products discontinued, etc.). Advantage: less brand confusion. If products are renamed as part of the buyout, this can be reflected in the new MIBs without violating any RFCs.
This approach has the distinct disadvantage of invalidating any existing management solutions designed around the old MIBs. For this reason alone, I would advise against it. Nevertheless, it has probably been done.
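For illustration only: the stem replacement the question asks about is mechanically trivial, which underlines that the real cost lies elsewhere (every deployed tool keyed to the old stem breaks). The enterprise numbers below are made up:

```python
def rebadge(oid, old_pen, new_pen):
    """Rewrite a 1.3.6.1.4.1.<old_pen> OID stem to 1.3.6.1.4.1.<new_pen>.

    Purely mechanical; any management solution configured with the old
    enterprise number will stop recognising the rewritten OIDs."""
    old = "1.3.6.1.4.1.%d." % old_pen
    new = "1.3.6.1.4.1.%d." % new_pen
    if oid.startswith(old):
        return new + oid[len(old):]
    return oid  # not under the old enterprise subtree; leave untouched

# Made-up enterprise numbers: 1111 (acquired company) -> 2222 (new owner)
print(rebadge("1.3.6.1.4.1.1111.3.7.0", 1111, 2222))
```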

Auto-detect GPU

I need to detect the GPU (video card) and choose app settings appropriate to its performance.
I'm able to make a list of settings for each GPU model, but I don't understand how to easily detect the model of the GPU installed in the PC.
What is the best way to solve this task? Is there any way to do this that does not depend on the installed driver or other software?
The above comment by Ben Voigt summarizes it: Simply don't do it.
See if the minimum version of your favorite compute API (OpenCL or whatever) is supported, and if the required extensions are present, compile some kernels, and see if that produces errors. Run the kernels and benchmark them. Ask the API how much local/global memory you have available, what warp sizes it supports, and so on.
If you really insist on detecting the GPU model, prepare for trouble. There are two ways of doing this. One is parsing the graphics card's advertised human-readable name, which is asking for trouble right away (many cards that are hugely different advertise the same human-readable name, and some model names even lie about their architecture generation!).
The other, slightly better way is finding the vendor/model ID combination and looking it up. This works somewhat better, but it is equally painful and error-prone.
You can parse these vendor and model IDs from the "key" string inside the structure that you get when you call EnumDisplayDevices. If I remember correctly, Microsoft calls that field "reserved"; in other words, it's kind of unsupported/undocumented.
Finding out the vendor is still relatively easy. A vendor ID of 0x10DE is nVidia, 0x1002 is AMD/ATI, and 0x8086 is Intel. However, sometimes, very rarely, a cheapish OEM will advertise its own ID instead.
Then you have the fairly meaningless model number (it's not as if bigger numbers are better, or any other obvious rule!), which you need to look up somewhere. nVidia and AMD publish these officially [1] [2], although they are not necessarily always up to date. There was a time when nVidia's list lacked the most recent models for almost a year (though the list I just downloaded seems to be complete). I'm not aware of other manufacturers, including Intel, doing this consistently.
Spending some time on Google will lead you to sites like this one, which are not "official" but may allow you to figure out most of it anyway... in a painstaking manner.
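As a hedged illustration of the vendor/model-ID route: the exact contents of the key string returned by EnumDisplayDevices are undocumented, so the string format assumed below (a device path containing VEN_xxxx and DEV_xxxx tokens) and the helper itself are assumptions for illustration, not a supported API:

```python
import re

# Map of well-known PCI vendor IDs to names (assumed values, see text).
VENDORS = {0x10DE: "nVidia", 0x1002: "AMD/ATI", 0x8086: "Intel"}

def parse_ids(key):
    """Pull the PCI vendor and device IDs out of a device key string.

    Assumes the string contains "VEN_xxxx" and "DEV_xxxx" hex tokens;
    returns (vendor_id, device_id) as ints, or None if not found."""
    m = re.search(r"VEN_([0-9A-Fa-f]{4}).*?DEV_([0-9A-Fa-f]{4})", key)
    if not m:
        return None
    return int(m.group(1), 16), int(m.group(2), 16)

# Illustrative key string, not captured from a real device:
vendor_id, device_id = parse_ids(r"PCI\VEN_10DE&DEV_1C82")
print(VENDORS.get(vendor_id, "unknown vendor"), hex(device_id))
```

Even when this works, the device ID still has to be matched against a vendor-published list before it tells you anything about performance.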
And then, you know the model, and you have gained pretty much nothing. You still need to translate this into "good enough for what I want" or "not good enough", which you could have found out simply by compiling your kernels, seeing that no error is reported, and running them.
And what do you do in 6 months, when 3 new GPU models are released after your application, which obviously cannot know about them, has already shipped? How do you treat those?

What is the difference between Digital Forensics and Reverse Engineering?

I am not able to understand the exact difference between digital forensics and reverse engineering. Does digital forensics have anything to do with decompilation, reading assembly code, or debugging?
Thanks
Digital Forensic practice usually involves:
looking at logfiles
doing recovery of unlinked filesystem objects (e.g. deleted files)
recovering browsing history through cache, etc.
looking at timestamps of files
(usually for the purpose of law enforcement)
Reverse Engineering usually involves determining how something works by:
looking at binary file formats of multiple files (or executables) to determine patterns
decompilation of binary executables to determine intent of the code
black-boxing and/or debugging of known-good applications to determine nominal behaviour with respect to data.
(usually for the purpose of interoperability)
They're completely different activities.
I think the lines are a little more blurred than most realize. Digital forensics goes after the artifacts to prove certain activity has taken place. Very few software packages offer documentation on the files that are created by that application. Basically, reverse engineering is required to figure out what the artifacts are, but not all forensic examiners are required to do the actual reverse engineering part.
Both are very, very different.
Reverse Engineering is a process of deconstructing how a system behaves without its engineering documents.
It has many purposes: replicating or exploiting a system, or merely making a compatible product that works with it. It may involve software tools (IDA Pro), in-circuit emulators, soldering irons, etc. One neat example: it's possible to de-pot a chip using nitric acid https://www.youtube.com/watch?v=mT1FStxAVz4 and then place the chip under a microscope to possibly determine some of its structure and behavior. (IANAL, IANAC: don't attempt this without chemistry knowledge and lab safety.)
Digital Forensics is looking to see what people or systems may have done by examining compute, network and storage devices for evidence.
It is mostly used by people defending systems, such as system administrators or law enforcement, to determine the who/what/how of a potential crime. This can be automated (Snort, Tripwire) or manual (searching logs, say in Splunk or Loggly, or searching raw disk snapshots for particular strings).
They're very different things!
Digital forensics is used to retrieve deleted artifacts, logs, and dd images; you can see it as viewing the big picture.
Reversing is the opposite: it's digging into code at the binary level and understanding 100% of what it does.
If you'd like to enter this field, I recommend reading the book Practical Malware Analysis.
Digital forensics is the practice of retrieving information from digital media (computers, phones and tablets, networks) via a number of means. It is normally done for law enforcement, though it can be for private organisations and other parties, especially in the rising field of e-discovery.
Reverse engineering is looking at the code or binary of a file/system and determining how it is structured and how it works.
These are two completely different sciences, but if you think about it, they go hand in hand. Digital forensics needs reverse engineering to determine what information is available in the files being analysed and how that information is stored. Any good digital forensics company will have an R&D department that allows them to do this in house.

Designing and Interfacing a Partition Format

This is a subject that I have never found a suitable answer to, and so I was wondering if the helpful people of Stack Overflow may be able to answer this.
First of all: I'm not asking for a tutorial or anything, merely a discussion because I have not seen much information online about this.
Basically what I'd like to know is how one designs a new type of partition format, and then how it can be interfaced with the operating system for use?
And better yet, what qualifies one partition format to be better than another? Is it performance/security, filename/filesize? Or is there more to it?
It's just something I've always wondered about. I'd love to dabble in creating one just for education purposes someday.
OK, although the question is broad, I'll try to dig into it:
Assume that we are talking about a 'filesystem' as opposed to certain 'raw' partition formats such as swap formats, etc.
A filesystem should be able to map low-level OS, BIOS, network, or custom calls into coherent file-and-folder names that can be used by user applications. So, in your case, a 'partition format' should be something that presents low-level disk sectors and cylinders and their contents as a file-and-folder abstraction.
Along the way, if you can provide features such as less fragmentation, redundant node indexes, journalling to prevent data loss, survival in case of loss of power, working around bad sectors, redundant data, mirroring of hardware, etc., then it can be considered better than another format that does not provide such features. If you can optimise file sizes to match the usage of disk sectors and clusters while accommodating very small and very large files, that is a plus.
Thorough bullet-proof security and testing would be considered essential for any non-experimental use.
To start hacking on your own, work with one of the slightly older filesystems like ext2. You would need considerable build/compile/kernel skills to get going, but nothing monumental.
