I want to implement an SNMP agent for a PowerPC board using Net-SNMP. Previously it was implemented using SMASH, which has a parser that could read a MIB and generate C code (blank function implementations).
How do I get started?
Have a look at the mib2c tool from Net-SNMP. It generates the SNMP agent C code from the MIB; you then only have to fill in the values returned for SNMP requests. The skeleton code for responding to SNMP requests (GET, SET, GET-NEXT) is generated automatically.
Have a look at the Writing a MIB Module tutorial.
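For example, for a read-only scalar the skeleton mib2c produces looks roughly like the sketch below (the OID and names here are placeholders); you only have to fill in the code that produces the value in the MODE_GET branch:

#include <net-snmp/net-snmp-config.h>
#include <net-snmp/net-snmp-includes.h>
#include <net-snmp/agent/net-snmp-agent-includes.h>

/* Handler called by the agent for GET requests on the registered OID. */
static int
handle_myScalar(netsnmp_mib_handler *handler,
                netsnmp_handler_registration *reginfo,
                netsnmp_agent_request_info *reqinfo,
                netsnmp_request_info *requests)
{
    switch (reqinfo->mode) {
    case MODE_GET: {
        long value = 42;   /* your board-specific value goes here */
        snmp_set_var_typed_value(requests->requestvb, ASN_INTEGER,
                                 (u_char *) &value, sizeof(value));
        break;
    }
    default:
        /* read-only scalar: no other modes are expected */
        return SNMP_ERR_GENERR;
    }
    return SNMP_ERR_NOERROR;
}

/* Registration, normally generated by mib2c from your MIB. */
void
init_myScalar(void)
{
    static oid myScalar_oid[] = { 1, 3, 6, 1, 4, 1, 99999, 1 };  /* placeholder OID */
    netsnmp_register_scalar(
        netsnmp_create_handler_registration("myScalar", handle_myScalar,
                                            myScalar_oid,
                                            OID_LENGTH(myScalar_oid),
                                            HANDLER_CAN_RONLY));
}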
I took a different approach to this. In order to better integrate with my C++ ecosystem, and to obtain greater flexibility (particularly at scale), I:
Had a pre-build step parse the result of snmptranslate (that is, the MIB tree) into a bunch of C++ maps and other containers for use in code (see the sketch below)
Borrowed Net-SNMP's transport and PDU building functions
But serviced requests myself upon receipt, using my C++ maps and the data already available to my application
This made notification generation trivial (I just needed some variant types to generate varbinds, a bit of PDU construction and then left the rest to Net-SNMP's transport feature), although for requests I did then have to implement table walking myself (and GetNext/GetBulk/Set are not trivial unless you avoid all tables, or at least avoid composite-index tables).
The result is a fast, robust and scalable SNMP agent with expressive code that's easy to maintain and to extend.
You don't say you're using C++, but this does give an idea of how you can cherry-pick Net-SNMP functionality without necessarily buying into its entire ecosystem.
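For illustration only (this is not the actual code, and the names are made up), the generated containers boiled down to something like:

#include <functional>
#include <map>
#include <string>
#include <vector>

// Simplified value type for building varbinds; the real code used a richer variant.
struct Value {
    int         asn_type;   // e.g. ASN_INTEGER, ASN_OCTET_STR
    long        integer;
    std::string text;
};

// Numeric OID as a vector of sub-identifiers; std::map then keeps entries in
// lexicographic OID order, which is what GetNext/GetBulk walking needs.
using Oid      = std::vector<unsigned long>;
using OidTable = std::map<Oid, std::function<Value()>>;

OidTable make_oid_table() {
    OidTable table;
    table[{1, 3, 6, 1, 2, 1, 1, 5, 0}] = [] {            // sysName.0
        return Value{4 /* ASN_OCTET_STR */, 0, "my-agent"};
    };
    // ... one entry per scalar and per table column, generated by the pre-build step ...
    return table;
}

Servicing a Get is then a map lookup, GetNext is essentially an upper_bound on the map, and building the response varbinds and handing them to the PDU/transport functions is the part borrowed from Net-SNMP.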
Do note that I have no idea how SNMPv3 would fit into this model; I cleverly left the company before it became my problem. :)
When working with NPAPI, you have control over two functions: NPP_WriteReady & NPP_Write. This is basically a data push model.
However I need to implement support for a new file format. The library I am using takes any concrete subclass of the following source model (simplified C++ code):
struct compressed_source {
  virtual int read(char *buf, int num_bytes) = 0;
};
This model is trivial to implement when dealing with FILE* (C) or a socket (BSD) and others, since they comply with a pull data model. However I do not see how to fulfill this pull model from the NPAPI push model.
As far as I understand, I cannot explicitly call NPP_Write within my concrete implementation of ::read(char *, int).
What is the solution here?
EDIT:
I did not want to add too many details to avoid confusing answers. Just for reference, I want to build an OpenJPEG/NPAPI plugin. OpenJPEG is a huge library, and the underlying JPEG 2000 implementation really wants a pull data model to allow fine-grained access to massive images (e.g. a specific sub-region of a 100000 x 100000 image, thanks to low-level indexing information). In other words, I really need a pull-data-model plugin interface.
Preload the file
Well, preloading the whole file is always an option that would work, but often not a good one. From your other questions I gather that the files/downloads in question might be rather large, so avoiding unnecessary network traffic is a good idea; preloading the whole file is therefore not really an option.
Hack the library
If you're using some open source library, you might be able to implement a push API along or instead of the current pull API directly within the library.
Or you could implement things entirely by yourself. IIRC you're trying to decode some image format, and image formats are usually reasonably easy to implement from scratch.
Implement blocking reads by blocking the thread
You could put the image decoding stuff into a new thread, and whenever there is not enough buffered data already to fulfill a read immediately, do a blocking wait for the data receiving thread (main thread in case of NPAPI) until it indicates the buffer is sufficiently filled again. This is essentially the Producer/Consumer problem.
Of course, you'll first need to choose how to use threads and synchronization primitives (a library such as C++11 std::thread, Boost threads, low-level pthreads and/or Windows threads, etc.). There are tons of related questions on SO/SE and tons of articles/postings/discussions/tutorials all over the internet.
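A minimal sketch of this producer/consumer idea, assuming C++11 primitives (this is not OpenJPEG's actual stream API, just an illustration of how read() can block until NPP_Write has pushed enough bytes):

#include <algorithm>
#include <condition_variable>
#include <cstring>
#include <mutex>
#include <vector>

struct compressed_source {
    virtual int read(char *buf, int num_bytes) = 0;
    virtual ~compressed_source() = default;
};

class npapi_source : public compressed_source {
public:
    // Called from the decoder thread: blocks until enough data (or EOF) is available.
    int read(char *buf, int num_bytes) override {
        std::unique_lock<std::mutex> lock(mutex_);
        cv_.wait(lock, [&] {
            return buffer_.size() >= static_cast<size_t>(num_bytes) || eof_;
        });
        int n = static_cast<int>(std::min<size_t>(num_bytes, buffer_.size()));
        std::memcpy(buf, buffer_.data(), n);
        buffer_.erase(buffer_.begin(), buffer_.begin() + n);
        return n;
    }

    // Called from the NPAPI (main) thread, e.g. from NPP_Write.
    void push(const char *data, int len) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            buffer_.insert(buffer_.end(), data, data + len);
        }
        cv_.notify_one();
    }

    // Called when the stream ends, so a pending read() can return what is left.
    void set_eof() {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            eof_ = true;
        }
        cv_.notify_one();
    }

private:
    std::mutex mutex_;
    std::condition_variable cv_;
    std::vector<char> buffer_;
    bool eof_ = false;
};

NPP_Write then simply calls push() with whatever the browser delivers, and NPP_DestroyStream (or an error path) calls set_eof() so a pending read() is not left waiting forever.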
I am currently designing a system which should allow access to a database. The assumptions are as follows:
The database should have an access layer. The access layer should provide objects that represent database tables. (This would be done using some ORM framework.)
A client that wants to get data from the database should first get objects from the access layer, and then get the data through those objects.
Clients could use Python, Java or C++.
The access layer is based on Java.
There won't be too many clients, but they will be operating on large amounts of data.
The question which is hard for me is what technology should be used for passing objects between the access layer and the clients. I am considering ZeroC Ice, Apache Thrift or Google Protocol Buffers.
Does anyone have an opinion on which one is worth using?
This is my research for Protocol Buffers:
Advantages:
simple to use and easy to start
well documented
highly optimized
object data structures are defined in a Java-like language
automatically generates setters, getters and builder methods for Python, Java and C++ (see the sketch below)
open-source bindings for other languages
objects can be extended without affecting old versions of an application
there are many open-source RpcChannel and RpcController implementations (not tested)
Disadvantages:
need to implement the object transfer yourself
object structures have to be defined before use, so we can't add fields on the fly (update: there are possibilities to do that, see the comments)
if there is a need to read a single field of an object, we still have to parse the whole message (in contrast, in XML we could ignore the tags we are not interested in)
if we want to use RPC to invoke object methods, we need to define services and provide RpcChannel and RpcController implementations
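For a feel of what you get, here is a rough sketch of the generated C++ API, assuming a hypothetical message Person defined in person.proto and compiled with protoc (transferring the serialized bytes is still up to you, as noted above):

// person.proto (hypothetical):
//   message Person {
//     required string name = 1;
//     optional int32  id   = 2;
//   }
#include <string>
#include "person.pb.h"   // generated by: protoc --cpp_out=. person.proto

int main() {
    Person p;
    p.set_name("Alice");           // generated setter
    p.set_id(42);

    std::string wire;
    p.SerializeToString(&wire);    // serialize to a compact binary blob
    // ... send "wire" yourself over a socket, queue, file, etc. ...

    Person q;
    q.ParseFromString(wire);       // deserialize on the receiving side
    return q.id() == 42 ? 0 : 1;
}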
This is my research for Apache Thrift:
Advantages:
provides a compiler that generates source code for the supported languages (classes and everything else that matters)
allows defining optional fields in structures (when we do not set a value on a field, the size of the transferred data is smaller)
allows marking some methods as "oneway" (they return nothing, and after invocation the client does not wait for the server to report completion of the request)
supports serialization (and deserialization) of collections (maps, lists, sets), objects and primitives, as well as constants, enumerations and exceptions
most problems and errors have already been solved and explained somewhere
provides different serialization protocols (TBinaryProtocol, ...) and different ways of exchanging data (TBufferedTransport, TZlibTransport, ...) (see the client sketch after this list)
the compiler produces classes (structures) for each language that we can extend by adding new methods
it is possible to add fields to the protocol (server as well as client) and remove others; old code and new code can interoperate properly (following some rules when updating)
enables asynchronous calls
easy to use
Disadvantages:
documentation: it contains some errors, and sometimes it is really hard to work out what the source of a problem is
problems are not always well tagged (when looking for a solution on the Internet)
no support for overloading of service methods
tutorials cover only simple examples of thrift usage
hard to start
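For comparison, a sketch of what a generated Thrift C++ client looks like (the service, struct and method names here are hypothetical; recent Thrift versions use std::shared_ptr, older ones boost::shared_ptr):

#include <memory>
#include <thrift/protocol/TBinaryProtocol.h>
#include <thrift/transport/TBufferTransports.h>
#include <thrift/transport/TSocket.h>
#include "DataService.h"   // generated by: thrift --gen cpp data.thrift (hypothetical IDL)

using namespace apache::thrift::protocol;
using namespace apache::thrift::transport;

int main() {
    auto socket    = std::make_shared<TSocket>("localhost", 9090);
    auto transport = std::make_shared<TBufferedTransport>(socket);
    auto protocol  = std::make_shared<TBinaryProtocol>(transport);

    DataServiceClient client(protocol);   // generated client stub (hypothetical)
    transport->open();

    Record record;                        // generated struct (hypothetical)
    client.getRecord(record, 42);         // RPC call; transport and skeleton come for free
    transport->close();
    return 0;
}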
ZeroC Ice:
It is better than Protocol Buffers for my case, because I wouldn't need to implement object passing myself via e.g. sockets. Ice also provides ServantLocators, which can take care of connection management.
The question is whether Ice is much slower and less efficient than Protocol Buffers.
I'm working on a third-party program that aggregates data from a bunch of different, existing Windows programs. Each program has a mechanism for exporting the data via the GUI. The most brain-dead approach would have me generate extracts by using AutoIt or some other GUI manipulation program to generate the extractions via the GUI. The problem with this is that people might be interacting with the computer when, suddenly, some automated program takes over. That's no good. What I really want to do is somehow have a program run once a day and silently (i.e. without popping up any GUIs) export the data from each program.
My research is telling me that I need to hook each application (assume these applications are always running) and inject a custom DLL to trigger each export. Am I remotely close to being on the right track? I'm a fairly experienced software dev, but I don't know a whole lot about reverse engineering or hooking. Any advice or direction would be greatly appreciated.
Edit: I'm trying to manage the availability of a certain type of professional. Their schedules are stored in proprietary systems. With their permission, I want to install an app on their system that extracts their schedule from whichever system they are using and uploads the information to a central server so that I can present that information to potential clients.
I am aware of four ways of extracting the information you want, each with its own advantages and disadvantages. Before you do anything, you need to be aware that any solution you create is not guaranteed, and in fact is very unlikely, to continue working should the target application ever update. The reason is that in each case, you are relying on an implementation detail instead of a pre-defined interface through which to export your data.
Hooking the GUI
The first way is to hook the GUI as you have suggested. What you are doing in this case is simply reading off from what an actual user would see. This is in general easier, since you are hooking the WinAPI which is clearly defined. One danger is that what the program displays is inconsistent or incomplete in comparison to the internal data it is supposed to be representing.
Typically, there are two common ways to perform WinAPI hooking:
DLL Injection. You create a DLL which you load into the other program's virtual address space. This means that you have read/write access (writable access can be gained with VirtualProtect) to the target's entire memory. From here you can trampoline the functions which are called to set UI information. For example, to check if a window has changed its text, you might trampoline the SetWindowText function. Note every control has different interfaces used to set what they are displaying. In this case, you are hooking the functions called by the code to set the display.
SetWindowsHookEx. Under the covers, this works similarly to DLL injection and in this case is really just another method for you to extend/subvert the control flow of messages received by controls. What you want to do in this case is hook the window procedures of each child control. For example, when an item is added to a ComboBox, it would receive a CB_ADDSTRING message. In this case, you are hooking the messages that are received when the display changes. (A sketch of installing such a hook follows below.)
One caveat with this approach is that it will only work if the target is using or extending WinAPI controls.
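For a feel of the second option, the installing side looks roughly like the sketch below (the DLL name and its export are hypothetical; the hook procedure itself has to live in a DLL so Windows can map it into the target process, and unhooking/cleanup is omitted):

#include <windows.h>

// Installs a WH_CALLWNDPROC hook into the thread that owns the target's windows.
// "hook.dll" must export CallWndHookProc, which will then see messages such as
// CB_ADDSTRING as they are delivered to the target's controls.
bool install_hook(DWORD targetThreadId) {
    HMODULE dll = LoadLibraryW(L"hook.dll");                      // hypothetical DLL
    if (!dll) return false;

    HOOKPROC proc = reinterpret_cast<HOOKPROC>(
        GetProcAddress(dll, "CallWndHookProc"));                  // hypothetical export
    if (!proc) return false;

    HHOOK hook = SetWindowsHookExW(WH_CALLWNDPROC, proc, dll, targetThreadId);
    return hook != nullptr;
}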
Reading from the GUI
Instead of hooking the GUI, you can alternatively use the WinAPI to read directly from the target windows. However, in some cases this may not be allowed. There is not much to do in this case but to try it and see if it works. This may in fact be the easiest approach. Typically, you will send messages such as WM_GETTEXT to query the target window for what it is currently displaying. To do this, you will need to obtain the exact window hierarchy containing the control you are interested in. For example, if you want to read an edit control, you will need to see which parent window(s) are above it in the window hierarchy in order to obtain its window handle.
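A minimal sketch of that approach (the window title and control class here are made up; a tool such as Spy++ is the usual way to find the real hierarchy):

#include <windows.h>
#include <iostream>

int main() {
    // Hypothetical names: top-level window "Target App", first Edit control beneath it.
    HWND top = FindWindowW(nullptr, L"Target App");
    if (!top) return 1;
    HWND edit = FindWindowExW(top, nullptr, L"Edit", nullptr);
    if (!edit) return 1;

    wchar_t text[512] = {};
    SendMessageW(edit, WM_GETTEXT, 512, reinterpret_cast<LPARAM>(text));
    std::wcout << text << L"\n";
    return 0;
}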
Reading from memory (Advanced)
This approach is by far the most complicated but if you are able to fully reverse engineer the target program, it is the most likely to get you consistent data. This approach works by you reading the memory from the target process. This technique is very commonly used in game hacking to add 'functionality' and to observe the internal state of the game.
Consider that as well as storing information in the GUI, programs often hold their own internal model of all the data. This is especially true when the controls used are virtual and simply query subsets of the data to be displayed. This is an example of a situation where the first two approaches would not be of much use. This data is often held in some sort of abstract data type such as a list or perhaps even an array. The trick is to find this list in memory and read the values off directly. This can be done externally with ReadProcessMemory or internally through DLL injection again. The difficulty lies mainly in two prerequisites:
Firstly, you must be able to reliably locate these data structures. The problem with this is that code is not guaranteed to be in the same place, especially with features such as ASLR. Colloquially, this is sometimes referred to as code-shifting. ASLR can be defeated by using the offset from a module base and dynamically getting the module base address with functions such as GetModuleHandle. As well as ASLR, a reason that this occurs is due to dynamic memory allocation (e.g. through malloc). In such cases, you will need to find a heap address storing the pointer (which would for example be the return of malloc), dereference that and find your list. That pointer would be prone to ASLR and instead of a pointer, it might be a double-pointer, triple-pointer, etc.
The second problem you face is that it would be rare for each list item to be a primitive type. For example, instead of a list of character arrays (strings), it is likely that you will be faced with a list of objects. You would need to further reverse engineer each object type and understand its internal layout (at least be able to determine the offsets, from the object base, of the primitive values you are interested in). More advanced methods revolve around actually reverse engineering the vtable of objects and calling their 'API'.
You might notice that I am not able to give information here which is specific. The reason is that by its nature, using this method requires an intimate understanding of the target's internals and as such, the specifics are defined only by how the target has been programmed. Unless you have knowledge and experience of reverse engineering, it is unlikely you would want to go down this route.
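Externally, the mechanics boil down to something like the sketch below; the PID and the base+offset are placeholders you would have to discover yourself by reverse engineering:

#include <windows.h>
#include <cstdint>
#include <iostream>

int main() {
    DWORD pid = 1234;                                   // placeholder target PID
    std::uintptr_t address = 0x00400000 + 0x1A2B0;      // placeholder: module base + offset

    HANDLE process = OpenProcess(PROCESS_VM_READ, FALSE, pid);
    if (!process) return 1;

    std::int32_t value = 0;
    SIZE_T bytesRead = 0;
    if (ReadProcessMemory(process, reinterpret_cast<LPCVOID>(address),
                          &value, sizeof(value), &bytesRead)) {
        std::cout << "value = " << value << "\n";       // the primitive you located
    }
    CloseHandle(process);
    return 0;
}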
Hooking the target's internal API (Advanced)
As with the above solution, instead of digging for data structures, you dig for the internal API. I briefly covered this when discussing vtables earlier. Instead of doing this, you would be attempting to find internal APIs that are called when the GUI is modified. Typically, when a view/UI is modified, instead of directly calling the WinAPI to update it, a program will have its own wrapper function which it calls, which in turn calls the WinAPI. You simply need to find this function and hook it. Again this is possible, but requires reverse engineering skills. You may find that you discover functions which you want to call yourself. In this case, as well as being able to locate the function, you have to reverse engineer the parameters it takes and its calling convention, and you will need to ensure calling the function has no side effects.
I would consider this approach to be advanced. It can certainly be done and is another common technique used in game hacking to observe internal states and to manipulate a target's behaviour, but is difficult!
The first two methods are well suited for reading data from WinAPI programs and are by far easier. The two latter methods allow greater flexibility. With enough work, you are able to read anything and everything encapsulated by the target but requires a lot of skill.
Another point of concern which may or may not relate to your case is how easy it will be to update your solution should the target ever be updated. With the first two methods, it is more likely that no changes or only small changes will be needed. With the latter two methods, even a small change in source code can cause a relocation of the offsets you are relying upon. One method of dealing with this is to use byte signatures to dynamically generate the offsets. I wrote another answer some time ago which addresses how this is done.
What I have written is only a brief summary of the various techniques that can be used for what you want to achieve. I may have missed approaches, but these are the most common ones I know of and have experience with. Since these are large topics in themselves, I would advise you ask a new question if you want to obtain more detail about any particular one. Note that in all of the approaches I have discussed, none of them suffer from any interaction which is visible to the outside world so you would have no problem with anything popping up. It would be, as you describe, 'silent'.
This is relevant information about detouring/trampolining which I have lifted from a previous answer I wrote:
If you are looking for ways that programs detour execution of other processes, it is usually through one of two means:
Dynamic (Runtime) Detouring - This is the more common method and is what is used by libraries such as Microsoft Detours. Here is a relevant paper where the first few bytes of a function are overwritten to unconditionally branch to the instrumentation.
(Static) Binary Rewriting - This is a much less common method for rootkits, but is used by research projects. It allows detouring to be performed by statically analysing and overwriting a binary. An old (not publicly available) package for Windows that performs this is Etch. This paper gives a high-level view of how it works conceptually.
Although Detours demonstrates one method of dynamic detouring, there are countless methods used in the industry, especially in the reverse engineering and hacking arenas. These include the IAT and breakpoint methods I mentioned above. To 'point you in the right direction' for these, you should look at 'research' performed in the fields of research projects and reverse engineering.
The new IObservable/IObserver frameworks in the System.Reactive library coming in .NET 4.0 are very exciting (see this and this link).
It may be too early to speculate, but will there also be a (for lack of a better term) IQueryable-like framework built for these new interfaces as well?
One particular use case would be to assist in pre-processing events at the source, rather than in the chain of the receiving calls. For example, if you have a very 'chatty' event interface, using Subscribe().Where(...) means that all events still come through the pipeline and the client does the filtering.
What I am wondering is if there will be something akin to IQueryableObservable, whereby these LINQ methods will be 'compiled' into some 'smart' Subscribe implementation in a source. I can imagine certain network server architectures that could use such a framework. Or how about an add-on to SQL Server (or any RDBMS for that matter) that would allow .NET code to receive new data notifications (triggers in code) and would need those notifications filtered server-side.
Well, you got it in the latest release of Rx, in the form of an interface called IQbservable (pronounced as IQueryableObservable). Stay tuned for a Channel 9 video on the subject, coming up early next week.
To situate this feature a bit, one should realize there are conceptually three orthogonal axes to the Rx/Ix puzzle:
What the data model is you're targeting. Here we find pull-based versus push-based models. Their relationship is based on duality. Transformations exist between those worlds (e.g. ToEnumerable).
Where you execute operations that drive your queries (sensu lato). Certain operators need concurrency. This is where scheduling and the IScheduler interface come in. Operators exist to hop between concurrency domains (e.g. ObserveOn).
How a query expression needs to execute. Either verbatim (IL) or translatable (expression trees). Their relationship is based on homoiconicity. Conversions exist between both representations (e.g. AsQueryable).
All the IQbservable interface (which is the dual to IQueryable and the expression tree representation of an IObservable query) enables is the last point. Sometimes people confuse the act of query translation (the "how" to run) with remoting aspects (the "where" to run). While typically you do translate queries into some target language (such as WQL, PowerShell, DSQLs for cloud notification services, etc.) and remote them into some target system, both concerns can be decoupled. For example, you could use the expression tree representation to do local query optimization.
With regards to possible security concerns, this is no different from the IQueryable capabilities. Typically one will only remote the expression language and not any "truly side-effecting" operators (whatever that means for languages other than fundamentalist functional ones). In particular, the Subscribe and Run operations stay local and take you out of the queryable monad (therefore triggering translation, just as GetEnumerator does in the world of IQueryable). How you'd remote the act of subscribing is something I'll leave to the imagination of the reader.
Start playing with the latest bits today and let us know what you think. Also stay tuned for the upcoming Channel 9 video on this new feature, including a discussion of some of its design philosophy.
While this sounds like an interesting possibility, I would have several reservations about implementing this.
1) Just as you can't serialize non-trivial lambda expressions used by IQueryable, serializing these for Rx would be similarly difficult. You would likely want to be able to serialize multi-line and statement lambdas as part of this framework. To do that, you would likely need to implement something like Erik Meijer's other pet projects - Dryad and Volta.
2) Even if you could serialize these lambda expressions, I would be concerned about the possibility of running arbitrary code on the server sent from the client. This could easily pose a security concern far greater than cross-site scripting. I doubt that the potential benefit of allowing the client to send expressions to the server to execute outweighs the security vulnerability implications.
8 (now 10) years into the future: I stumbled over Qactive (formerly Rxx), an Rx.Net-based queryable reactive TCP server provider.
It is the answer to the "question in question"
Server
Observable
.Interval(TimeSpan.FromSeconds(1))
.ServeQbservableTcp(new IPEndPoint(IPAddress.Loopback, 3205));
Client
var datasourceAddress = new IPEndPoint(IPAddress.Loopback, 3205);
var datasource = new TcpQbservableClient<long>(datasourceAddress);
(
from value in datasource.Query()
//The code below is actually executed on the server
where value <= 5 || value >= 8
select value
)
.Subscribe(Console.WriteLine);
What's mind-blowing about this is that clients can say what data they want and how frequently they want it, and the server can still limit and control when, how frequently and how much data it returns.
For more info on this https://github.com/RxDave/Qactive
Another blog sample:
https://sachabarbs.wordpress.com/2016/12/23/rx-over-the-wire/
One problem I would love to see solved with the Reactive Framework, if it's possible, is enabling emission of, and subscription to, change notifications for cached data from Web services and other pull-only services.
It appears, based on a new Channel 9 interview, that there will be LINQ support for IObserver/IObservable in the BCL of .NET 4.
However it will essentially be LINQ-to-Objects style queries, so at this stage, it doesn't look like a 'smart subscribe' as you put it. That's as far as the basic implementations go in .NET 4. (From my understanding from the above interview)
Having said that, the Reactive framework (Rx) may have more detailed implementations of IObserver/IObservable, or you may be able to write your own, passing in Expression<Func...> for the Subscribe parameters and then using the expression tree of the Func to subscribe in a smarter way that suits the event channel you are subscribing to.
I'm querying a bunch of information from cisco switches using SNMP. For instance, I'm pulling information on neighbors detected using CDP by doing an snmpwalk on .1.3.6.1.4.1.9.9.23
Can I use this OID across different cisco models? What pitfalls should I be aware of? To me, I'm a little uneasy about using numeric OIDs - it seems like I should be using a MIB database or something and using the named OIDs, in order to gain cross-device compatibility, but perhaps I'm just imagining the need for that.
Once a MIB has been published it won't move to a new OID. Doing so would break network management tools and cause support calls, which nobody wants. To continue your example, the CDP MIB has been published at Cisco's SNMP Object Navigator.
For general code cleanliness it would be good to define the OIDs in a central place, especially since you don't want to duplicate the full OID for every single table you need to access.
The place you need to be most careful is a unique MIB in a product which Cisco recently acquired. The OID will change, if nothing else to move it into their own Enterprise OID space, but the MIB may also change to conform to Cisco's SNMP practices.
It is very consistent.
Monitoring tools depend on that consistency, and the MIBs produced by Cisco rarely change old values and usually only add new ones.
Check out the Cisco OID look up tool.
Notice how it doesn't ask you what product the look up is for.
-mw
The OIDs can vary with hardware but also with firmware version for the same hardware as, over time, the architecture of the management functions can change and require new MIBs. It is worth checking whether any of the OIDs you intend to use are in deprecated MIBs, or become so in the life of the application, as this indicates not only that the MIB could one day be unsupported but also that there is likely to be improved, richer data or access to data. It is also good practice to test management apps against a sample upgraded device as part of the routine testing of firmware updates before widespread deployment.
An example of a change of OID due to a MIB being deprecated is at
http://www.cisco.com/en/US/tech/tk648/tk362/technologies_configuration_example09186a0080094aa6.shtml
"This document shows how to copy a
configuration file to and from a Cisco
device with the CISCO-CONFIG-COPY-MIB.
If you start from Cisco IOS® software
release 12.0, or on some devices as
early as release 11.2P, Cisco has
implemented a new means of Simple
Network Management Protocol (SNMP)
configuration management with the new
CISCO-CONFIG-COPY-MIB. This MIB
replaces the deprecated configuration
section of the OLD-CISCO-SYSTEM-MIB. "
I would avoid putting in numeric OIDs and instead use 'OID names' and leave that hard work (of translating) to whatever SNMP API you are using.
If that is not possible, then it is okay to use OIDs as they should not change per the SNMP MIB guidelines. Unless the device itself changes but that requires a new MIB anyway which can't reuse old OIDs.
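With Net-SNMP, for example, letting the library do the name-to-OID translation is only a few lines (assuming the relevant MIB files are installed where the library can find them):

#include <net-snmp/net-snmp-config.h>
#include <net-snmp/net-snmp-includes.h>
#include <stdio.h>

int main(void) {
    oid    name[MAX_OID_LEN];
    size_t len = MAX_OID_LEN;

    init_snmp("oid-demo");                    /* also loads the MIB modules */
    if (snmp_parse_oid("SNMPv2-MIB::sysDescr.0", name, &len)) {
        for (size_t i = 0; i < len; ++i)
            printf(".%lu", (unsigned long) name[i]);
        printf("\n");                         /* prints .1.3.6.1.2.1.1.1.0 */
    }
    return 0;
}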
This is obvious, but be sure to look at the attributes of the SNMP MIB variable. Be sure not to query variables that have a status of 'obsolete'.
Jay..
In some cases, using the names instead of the numerical representations can be a serious performance hit due to the need to read and parse the MIB files to get the numerical representations of the OIDs that the lower level libraries need.
For instance, if you're using a program to collect something every minute, then loading the MIBs over and over is very inefficient.
As stated by others, once published, the name to numerical mapping will never change, so the fact that you're hard-coding stuff into your programs is not really a problem.
If you have access to command line SNMP tools, check out 'snmptranslate' for a nice tool to get back and forth from text to numerical OIDs.
I think that is a common misconception (about MIB reload each time you resolve a name).
Most of the SNMP APIs (such as AdventNet, CMU) load the MIBs at startup, and after that there is no 'overhead' of loading MIBs every time you ask for a 'translation' from name to OID and vice versa. What's more, some of them cache the results, and at that point there is no difference between name lookups and directly coding the OID.
This is a bit similar to specifying an "IP Address" versus a 'hostname'.