MIB design from scratch - snmp

I need to design a MIB for SNMP from scratch. We already have an OID for our enterprise under the 2.25 tree, and now it's time to write the MIB itself.
However, I can't find a worked example of this. Looking under the /usr/share/snmp/mibs directory on our Linux machine, I see a lot of files (for example: HOST-RESOURCES-MIB.txt, IF-MIB.txt, etc.). I understand that these files follow a format for MIBs, but I'm just not getting it, because they import things that are completely strange to me.
For example:
IMPORTS
MODULE-IDENTITY, OBJECT-TYPE, Counter32, Gauge32, Counter64,
Integer32, TimeTicks, mib-2,
NOTIFICATION-TYPE FROM SNMPv2-SMI
TEXTUAL-CONVENTION, DisplayString,
PhysAddress, TruthValue, RowStatus,
TimeStamp, AutonomousType, TestAndIncr FROM SNMPv2-TC
MODULE-COMPLIANCE, OBJECT-GROUP,
NOTIFICATION-GROUP FROM SNMPv2-CONF
snmpTraps FROM SNMPv2-MIB
IANAifType FROM IANAifType-MIB;
Then, in every imported file I see more imports, and more imports again. Can somebody tell me what the topmost file is, or point me to an FAQ or anything else in the right direction?
Thanks in advance.

You should go back to the IETF RFC that defines the SMI format these files are written in:
http://www.rfc-editor.org/rfc/rfc2578.txt
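RFC 2578 (SMIv2) defines the grammar those /usr/share/snmp/mibs files use; the things you quoted in the IMPORTS section (MODULE-IDENTITY, OBJECT-TYPE, and so on) are the SMIv2 macros it specifies. As a rough sketch only — the module name, enterprise number, and object names below are placeholders, not real assignments — a minimal module looks like this:

```
EXAMPLE-MIB DEFINITIONS ::= BEGIN

IMPORTS
    MODULE-IDENTITY, OBJECT-TYPE, Integer32, enterprises
        FROM SNMPv2-SMI;

exampleMIB MODULE-IDENTITY
    LAST-UPDATED "202401010000Z"
    ORGANIZATION "Example Org"
    CONTACT-INFO "admin@example.org"
    DESCRIPTION  "A placeholder module skeleton."
    ::= { enterprises 99999 }    -- placeholder enterprise number

exampleCount OBJECT-TYPE
    SYNTAX      Integer32
    MAX-ACCESS  read-only
    STATUS      current
    DESCRIPTION "A placeholder scalar object."
    ::= { exampleMIB 1 }

END
```

Once you have a draft, a tool like smilint (from libsmi) can check it against the SMIv2 rules, which is much faster than learning the grammar by trial and error.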


How do I effectively identify an unknown file format

I want to write a program that parses yum config files. These files look like this:
[google-chrome]
name=google-chrome - 64-bit
baseurl=http://dl.google.com/linux/chrome/rpm/stable/x86_64
enabled=1
gpgcheck=1
gpgkey=https://dl-ssl.google.com/linux/linux_signing_key.pub
This format looks very easy to parse, but I do not want to reinvent the wheel. If there is an existing library that can generically parse this format, I want to use it.
But how do you find a library for something you cannot name?
The file extension is no help here. Searching for ".repo" does not yield any general results besides yum itself.
So, please teach me how to fish:
How do I effectively find the name of a file format that is unknown to me?
Identifying an unknown file format can be a pain.
But you have some options. I will start with a very obvious one.
Ask
Showing the format to other people is perhaps the best way to find out its name.
Someone will likely recognize it. And if no one does, chances are good that
you have a proprietary file format in front of you.
In case of your yum repository file, I would say it is a plain old INI file.
But let's do some more research on this.
Reverse Engineering
Reverse engineering may be your best bet if nobody recognizes your format.
Take the reference implementation and find out what they are using to parse the format.
Luckily, yum is open source. So it is easy to look up.
Let's see what the yum authors use to parse their repo file:
try:
    ini = INIConfig(open(repo.repofile))
except:
    return None
https://github.com/rpm-software-management/yum/blob/master/yum/config.py#L1304
Now the import of this function can be found here:
from iniparse import INIConfig
https://github.com/rpm-software-management/yum/blob/master/yum/config.py#L32
This leads us to a library called iniparse (https://pypi.org/project/iniparse/).
So yum uses an INI parser for its config files.
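To sanity-check that conclusion, here is a minimal sketch using Python's stdlib configparser (yum itself uses the third-party iniparse library, but both read the same INI dialect):

```python
import configparser

# The .repo file from the question, parsed as plain INI.
repo_text = """\
[google-chrome]
name=google-chrome - 64-bit
baseurl=http://dl.google.com/linux/chrome/rpm/stable/x86_64
enabled=1
gpgcheck=1
gpgkey=https://dl-ssl.google.com/linux/linux_signing_key.pub
"""

parser = configparser.ConfigParser()
parser.read_string(repo_text)

section = parser["google-chrome"]
print(section["name"])     # google-chrome - 64-bit
print(section["enabled"])  # 1
```

If a generic INI parser round-trips the file without complaint, that is strong evidence the "unknown" format is just INI with a domain-specific extension.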
I will show you how to quickly navigate to those kinds of code passages,
since navigating somewhat large projects can be intimidating.
I use a tool called ripgrep (https://github.com/BurntSushi/ripgrep).
My initial anchors are usually well known filepaths. In case of yum, I took /etc/yum.repos.d for my initial search:
# assuming you are in the root directory of yum's source code
rg /etc/yum.repos.d yum
yum/config.py
769: reposdir = ListOption(['/etc/yum/repos.d', '/etc/yum.repos.d'])
yum/__init__.py
556: # (typically /etc/yum/repos.d)
This narrows it down to two files. If you go on further with terms like read or parse,
you will quickly find the results you want.
What if you do not have the reference source?
Well, sometimes you have no access to the source code of a reference implementation, e.g. when the reference implementation is closed source.
Try to break the format. Insert some garbage and observe the log files afterwards. If you are lucky, you may find
a helpful error message which might give you hints about the format.
If you feel very brave, you can try to use an actual decompiler as well. This may or may not be illegal and may or may not be a waste of time.
I personally would only do this as a last resort.

Getting data from .dat files

I'm hoping somebody out there can help me with this. I'm attempting to extract some barcode data from some .dat files. It's a B-Tree file system with groups of three files: .dat, .ix., .dia. The company that wrote the software (a long time ago) says that the program is written in Pascal. I have no experience in reverse engineering, but from what I've read it's most likely the only way to extract the data, as the structure of the database is contained in the code of the program. I'm looking for advice on where to start.
I suppose the first thing you need to do is to see if the exe you've got was written with Delphi. You can check with this: http://cc.embarcadero.com/Item/15250
Then, to see if the exe that creates those .dat files was made with 'TurboPower B-Tree Filer', I'd suggest you download and take a look at this: http://sourceforge.net/projects/tpbtreefiler/
At this step, looking at these sources is needed to familiarize yourself with the class names used in 'TurboPower B-Tree Filer' to help determine if any of those classes were used in your exe.
Then, using 'XN Resource Editor' [search the Internet for this] or, probably better, 'MiTeC Portable Executable Reader' [ http://www.mitec.cz/pe.html ], see if any class names are relevant.
If they are, then you're in luck -- sort of. All you will need to do is write an app using 'TurboPower B-Tree Filer' to import the data in your .dat files, then export or manipulate it as you wish.
At that point, you might find this link useful.
TurboPower B-Tree Filer and Delphi XE2 - Anyone done it?
If, on the other hand, none of the above applies, I fear the only option is to reverse engineer the exe you have.
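Before reaching for a decompiler, a cheap first step is to eyeball the raw bytes for magic numbers and printable strings. A minimal Python sketch (the sample bytes below are placeholders, not real B-Tree Filer data — point it at your own .dat file in practice):

```python
def hexdump(data: bytes, width: int = 16) -> str:
    """Render bytes as offset / hex / printable-ASCII columns."""
    lines = []
    for offset in range(0, len(data), width):
        chunk = data[offset:offset + width]
        hexpart = " ".join(f"{b:02x}" for b in chunk)
        text = "".join(chr(b) if 32 <= b < 127 else "." for b in chunk)
        lines.append(f"{offset:08x}  {hexpart:<{width * 3}} {text}")
    return "\n".join(lines)

# Placeholder bytes standing in for the start of an unknown .dat file;
# in practice use Path("yourfile.dat").read_bytes()[:256] instead.
sample = b"\x00\x01EXAMPLE barcode 12345\xff\xff"
print(hexdump(sample))
```

Fixed-size repeating record layouts, length-prefixed strings, and header magic values all tend to jump out of a dump like this, and they narrow down which library could have written the file.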

SQL: Invalid column names

I am trying to upload a table, and it is giving me the error message: "The following new table has invalid names: ". It did not point out which name is invalid. All my column names are plain words, so I'm not sure which rule I could possibly be violating. Below is a screenshot.
DMS NOTDMS engine arsenic sediment cartilage articular bone freight solutions neutrino heart stripe plasma indoor calcium power fixture eye chloride tellurium alloys egg corrosion market antenna metal ice quantum invasibility interrupt ventilation ammonia pollen syringae text auxin editing compression copper dpp clock enduring taxes blue kinase dolomite meristem isoprene proteins halo context information type detector oxygen invariants aequorin attractors ribosome actin cellulose tubulin binding site disulfide midgut alternative oxidase fischeri agreement snow cements excluded attitudes law nucleotide music homotopy periplasmic translocation stomatal phosphoprotein flagellar late motors operons replication sigma recombination streamflow fluidity police muscle blood heme replicative kelps estrogen elderly witnesses fire splicing scaffolding subunits erosion reef climate abnormal operator holographic braided seeding kidney cortical photonic functor homology river alluvial sand inlet import nitrogenase aleurone maturation guard light inositol membrane clay lightning recycling amoebae dyneins thioredoxin coat 3-manifolds mercury diving sludge sources fluorine conductivity hydraulic glucose designs condensate amorphous treeline
Short version:
column names are not the problem
the wording of the error message may indicate that not every record has the datatype you want it to be
I imported a flat file (CSV) with the same column names you provided above using Microsoft SQL Server 2008 R2 Express. The import worked fine; every column name was imported. Could it be that your column names contain some non-visible characters that are causing the trouble?
If you provide more information, we can help better:
which version of SQL Server you are using
the source of your file
additional error information and the way you are importing the file
My answer is that the column names are not the problem - at least not with SQL Server 2008 R2 Express.
Oracle keywords:
type (in "information type")
SQL keywords that may have caused the issue:
power
operator
You need to change those column names to avoid any further errors.
How to check: paste the list into any editor with SQL syntax highlighting (e.g. Notepad++); the keywords are highlighted.
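Whether a given word actually breaks the import is engine-specific, but double-quoting the identifier is the standard SQL workaround. A quick illustration using Python's built-in sqlite3 (not the asker's database, but the quoting principle carries over):

```python
import sqlite3

# "type", "power", and "operator" appear in the asker's column list and
# collide with keywords in some engines; double-quoting the identifiers
# sidesteps the collision. sqlite3 is used here purely for illustration.
con = sqlite3.connect(":memory:")
con.execute('CREATE TABLE readings ("type" TEXT, "power" REAL, "operator" TEXT)')
con.execute("INSERT INTO readings VALUES (?, ?, ?)", ("detector", 1.5, "plant-a"))
row = con.execute('SELECT "type", "power", "operator" FROM readings').fetchone()
print(row)  # ('detector', 1.5, 'plant-a')
```

If your import tool does not let you control quoting, renaming the offending columns (as the answer suggests) is the safer route.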
I encountered a similar problem (using Oracle SQL Developer 3.2).
My issue was that I had an index column with no name; thus it couldn't be selected, and it displayed as nothing in the error prompt.
Hopefully this helps anyone who might be facing the same issue.

How to get units of a variable from SNMP stream?

I'm new to snmp4j. I used the sample code in [1] to extract some meaningful information from an SNMP stream.
In the sample code, the OID and value of the variable are extracted, but the value comes without its units. For example, OID 1.3.6.1.4.1.2021.4.6.0 (SNMP-MIB::memAvailReal.0) gives the value 13385068 without its unit, KB. Is there a way to get the value with its units in snmp4j?
Can somebody please look into this?
[1] https://gist.github.com/akirad/5597203
I believe that the value you're retrieving is simply a SCALAR of type Integer32.
The description in the MIB is "Available Real/Physical Memory Space on the host."
It doesn't even specify the units there, so I don't think there's anywhere to retrieve the units data from. Happy to be corrected by someone if I'm wrong though!
memAvailReal OBJECT-TYPE
    SYNTAX      Integer32
    MAX-ACCESS  read-only
    STATUS      current
    DESCRIPTION
        "Available Real/Physical Memory Space on the host."
    ::= { memory 6 }
In other words, it's a numeric value, and the descriptive metadata from the MIB file doesn't even reveal the units, so there's nowhere to get that info from in code.
Edit:
I googled around some more and found another version of the UCD-SNMP-MIB with this definition:
memAvailReal OBJECT-TYPE
    SYNTAX      Integer32
    UNITS       "kB"
    MAX-ACCESS  read-only
    STATUS      current
    DESCRIPTION
        "The amount of real/physical memory currently unused
        or available."
    ::= { memory 6 }
So the info is available in this version of the MIB...
It looks like you can probably make use of this information using the SmiManager class:
http://www.snmp4j.org/smi/doc/com/snmp4j/smi/SmiManager.html
https://oosnmp.net/confluence/pages/viewpage.action?pageId=5799973
But integrating SmiManager into your application might not be trivial (and on looking into it a little further, it appears that a licence is required to use SmiManager!).
For my own little project I'm pre-parsing MIBs and storing the parts of them I need in my NoSQL database rather than including full-blown MIB parsing support. That way I can have a dict of metadata associated with every OID that is easier to access/update and manipulate.
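A minimal sketch of that pre-parsed-metadata idea, assuming the UNITS clause has already been extracted from the MIB (the in-memory dict below stands in for whatever store you use; the answer uses a NoSQL database):

```python
# Hypothetical pre-parsed MIB metadata. In a real setup this dict would
# be built once by a MIB parser and persisted; the OID and "kB" unit
# come from the UCD-SNMP-MIB definition quoted above.
OID_METADATA = {
    "1.3.6.1.4.1.2021.4.6.0": {"name": "memAvailReal", "units": "kB"},
}

def format_value(oid: str, raw_value: int) -> str:
    # Attach the unit string when we know it; fall back to the bare value.
    meta = OID_METADATA.get(oid, {})
    return f"{raw_value} {meta.get('units', '')}".strip()

print(format_value("1.3.6.1.4.1.2021.4.6.0", 13385068))  # 13385068 kB
```

The lookup side of this is trivial; the one-off work is parsing the MIBs, which any MIB-aware tool or library can do ahead of time.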
Hope that helps.

FITS Export with custom Metadata

Does anybody have experience exporting data as a FITS file with custom metadata (FITS header) information? So far I have only been able to generate FITS files with the standard Mathematica FITS header template. The documentation gives no hint on whether custom metadata export is supported or how it might be done.
The following suggestions from comp.soft-sys.math.mathematica do not work:
header = Import[<some FITS file>, "Metadata"];
Export["test.fits", data, "Metadata" -> header]
or
Export["test.fits",{"Data"->data,"Metadata"->header}]
What is the proper way to export my own Metadata to a FITS file ?
Cheers,
Markus
Update: response from Wolfram Support:
"Mathematica does not yet support Export of metadata for FITS file. The
example are referring to importing of this data. We do plan to support
this in the future..."
"There are also plans to include binary tables into FITS import
functionality."
I will try to come up with some workaround.
According to the documentation for v7 and v8, there are a couple of ways of accomplishing what you want, and you almost have the rule form correct:
Export["test.fits", {"Data" -> data, "Metadata" -> header}, "Rules"]
The other ways are
Export["test.fits", header, "Metadata"]
Export["test.fits", {data, header}, {{"Data", "Metadata"}}]
note the double brackets around the element labels in the second method.
Edit: After some testing, prompted by @belisarius: whenever I include the "Metadata" element, I get an error stating that it is not a valid export element. Also, you can't export a "RawData" element either. So I'd submit a bug report, for two reasons. First, the metadata isn't user-settable, which is vitally important for any serious application; at a minimum, the user should be able to augment the default Mathematica metadata. Second, the documentation is woefully inadequate in describing what is a "valid" export element vs. import element. Of course, I'd describe all of the documentation for v6 and beyond as woefully inadequate, so this is par for the course.
Mathematica 9 now allows export of metadata (header) entries, which are additive to the standard required entries. In the Help browser, search "FITS" and there is an example that shows this (with Export followed by Import to verify).
